id (string, 9–16 chars) | submitter (string, 3–64 chars, nullable) | authors (string, 5–6.63k chars) | title (string, 7–245 chars) | comments (string, 1–482 chars, nullable) | journal-ref (string, 4–382 chars, nullable) | doi (string, 9–151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5–108 chars) | license (string, 9 classes) | abstract (string, 83–3.41k chars) | versions (list, 1–20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1–427 items) | prompt (string, 166–3.49k chars) | label (string, 2 classes) | prob (float64, 0.5–0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1611.04689 | Ruoxi Shi | Ruoxi Shi, Hongzhi Wang, Tao Wang, Yutai Hou and Yiwen Tang | Similarity Search Combining Query Relaxation and Diversification | Conference: DASFAA 2017 | null | null | 287 | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the similarity search problem, which aims to find query results
similar to a given query string over a given dataset. To balance the number
and the quality of results, we combine query result diversification with query
relaxation. Relaxation guarantees the number of query results, returning more
elements relevant to the query when results are too few, while diversification
reduces the similarity among the returned results. By trading off similarity
against diversity, we improve the user experience. To achieve this goal, we
define a novel goal function combining similarity and diversity. Aiming at
this goal, we propose three algorithms. Among them, algorithms genGreedy and
genCluster perform relaxation first and select part of the candidates to
diversify. The third algorithm, CB2S, splits the dataset into smaller pieces
using k-means clustering and processes queries over several small sets to
retrieve more diverse results. The balance of similarity and diversity is
controlled by a threshold, which has a default value and can be adjusted to
users' preferences. The performance and efficiency of our system are
demonstrated through extensive experiments on various datasets.
| [
{
"version": "v1",
"created": "Tue, 15 Nov 2016 03:26:19 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2017 07:04:17 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Shi",
"Ruoxi",
""
],
[
"Wang",
"Hongzhi",
""
],
[
"Wang",
"Tao",
""
],
[
"Hou",
"Yutai",
""
],
[
"Tang",
"Yiwen",
""
]
] | TITLE: Similarity Search Combining Query Relaxation and Diversification
ABSTRACT: We study the similarity search problem, which aims to find query
results similar to a given query string over a given dataset. To balance the
number and the quality of results, we combine query result diversification
with query relaxation. Relaxation guarantees the number of query results,
returning more elements relevant to the query when results are too few, while
diversification reduces the similarity among the returned results. By trading
off similarity against diversity, we improve the user experience. To achieve
this goal, we define a novel goal function combining similarity and diversity.
Aiming at this goal, we propose three algorithms. Among them, algorithms
genGreedy and genCluster perform relaxation first and select part of the
candidates to diversify. The third algorithm, CB2S, splits the dataset into
smaller pieces using k-means clustering and processes queries over several
small sets to retrieve more diverse results. The balance of similarity and
diversity is controlled by a threshold, which has a default value and can be
adjusted to users' preferences. The performance and efficiency of our system
are demonstrated through extensive experiments on various datasets.
| no_new_dataset | 0.947575 |
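The record above describes a tunable trade-off between similarity to the query and diversity among results. A minimal Python sketch of such an objective — not the authors' genGreedy, genCluster, or CB2S; the string-similarity measure and the weight `lam` are illustrative stand-ins for the paper's similarity function and adjustable threshold — greedily penalizes each candidate by its redundancy with results already selected:

```python
from difflib import SequenceMatcher

def sim(a, b):
    """String similarity in [0, 1]; a stand-in for the paper's measure."""
    return SequenceMatcher(None, a, b).ratio()

def diverse_search(query, data, k, lam=0.7):
    """Greedily pick k results, trading query similarity against redundancy."""
    selected, candidates = [], set(data)
    while candidates and len(selected) < k:
        def score(c):
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * sim(query, c) - (1.0 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

print(diverse_search("similarity search",
                     ["similar search", "similarity searches",
                      "query relaxation", "diverse results"], k=3))
```

Lowering `lam` favors diversity over closeness to the query, playing the role of the adjustable balance mentioned in the abstract.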
1611.06056 | Jingfang Fan | Jun Meng, Jingfang Fan, Yosef Ashkenazy, Shlomo Havlin | Percolation framework to describe El Ni\~no conditions | null | Chaos 27, 035807 (2017) | 10.1063/1.4975766 | null | physics.ao-ph physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex networks have been used intensively to investigate the flow and
dynamics of many natural systems, including the climate system. Here, we
develop a percolation-based measure, the order parameter, to study and
quantify climate networks. We find that abrupt transitions of the order
parameter usually occur $\sim$1 year before El Ni\~{n}o events, suggesting
that they can be used as early warning precursors of El Ni\~{n}o. Using this
method we analyze several reanalysis datasets and show the potential for good
forecasting of El Ni\~{n}o. The percolation-based order parameter exhibits
discontinuous features, indicating a possible relation to a first-order phase
transition mechanism.
| [
{
"version": "v1",
"created": "Fri, 18 Nov 2016 12:16:35 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Meng",
"Jun",
""
],
[
"Fan",
"Jingfang",
""
],
[
"Ashkenazy",
"Yosef",
""
],
[
"Havlin",
"Shlomo",
""
]
] | TITLE: Percolation framework to describe El Ni\~no conditions
ABSTRACT: Complex networks have been used intensively to investigate the flow
and dynamics of many natural systems, including the climate system. Here, we
develop a percolation-based measure, the order parameter, to study and
quantify climate networks. We find that abrupt transitions of the order
parameter usually occur $\sim$1 year before El Ni\~{n}o events, suggesting
that they can be used as early warning precursors of El Ni\~{n}o. Using this
method we analyze several reanalysis datasets and show the potential for good
forecasting of El Ni\~{n}o. The percolation-based order parameter exhibits
discontinuous features, indicating a possible relation to a first-order phase
transition mechanism.
| no_new_dataset | 0.955152 |
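One standard percolation-style order parameter is the relative size of the largest connected cluster as links are added in decreasing order of strength. A toy sketch on a random correlation network — purely illustrative; the paper's construction from reanalysis data is more involved:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
x = rng.standard_normal((30, 500))        # 30 toy "grid points", 500 time steps
corr = np.abs(np.corrcoef(x))
np.fill_diagonal(corr, 0.0)

# Add links strongest-first and track the giant-cluster fraction.
edges = sorted(((corr[i, j], i, j) for i in range(30) for j in range(i + 1, 30)),
               reverse=True)
G = nx.empty_graph(30)
for step, (w, i, j) in enumerate(edges, 1):
    G.add_edge(i, j)
    if step % 50 == 0:
        giant = max(nx.connected_components(G), key=len)
        print(f"after {step} links: order parameter = {len(giant) / 30:.2f}")
```

Abrupt jumps of this fraction are the discontinuous, first-order-like features the abstract refers to.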
1702.05471 | Soheil Feizi | Soheil Feizi and David Tse | Maximally Correlated Principal Component Analysis | 35 pages, 5 figures | null | null | null | stat.ML cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of big data, reducing data dimensionality is critical in many
areas of science. Widely used Principal Component Analysis (PCA) addresses this
problem by computing a low-dimensional data embedding that maximally explains
the variance of the data. However, PCA has two major weaknesses. Firstly, it only
considers linear correlations among variables (features), and secondly it is
not suitable for categorical data. We resolve these issues by proposing
Maximally Correlated Principal Component Analysis (MCPCA). MCPCA computes
transformations of variables whose covariance matrix has the largest Ky Fan
norm. Variable transformations are unknown, can be nonlinear and are computed
in an optimization. MCPCA can also be viewed as a multivariate extension of
Maximal Correlation. For jointly Gaussian variables we show that the covariance
matrix corresponding to the identity (or the negative of the identity)
transformations majorizes covariance matrices of non-identity functions. Using
this result we characterize global MCPCA optimizers for nonlinear functions of
jointly Gaussian variables for every rank constraint. For categorical variables
we characterize global MCPCA optimizers for the rank one constraint based on
the leading eigenvector of a matrix computed using pairwise joint
distributions. For a general rank constraint we propose a block coordinate
descent algorithm and show its convergence to stationary points of the MCPCA
optimization. We compare MCPCA with PCA and other state-of-the-art
dimensionality reduction methods including Isomap, LLE, multilayer autoencoders
(neural networks), kernel PCA, probabilistic PCA and diffusion maps on several
synthetic and real datasets. We show that MCPCA consistently provides improved
performance compared to other methods.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 18:43:58 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2017 20:38:13 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Feizi",
"Soheil",
""
],
[
"Tse",
"David",
""
]
] | TITLE: Maximally Correlated Principal Component Analysis
ABSTRACT: In the era of big data, reducing data dimensionality is critical in many
areas of science. Widely used Principal Component Analysis (PCA) addresses this
problem by computing a low-dimensional data embedding that maximally explains
the variance of the data. However, PCA has two major weaknesses. Firstly, it only
considers linear correlations among variables (features), and secondly it is
not suitable for categorical data. We resolve these issues by proposing
Maximally Correlated Principal Component Analysis (MCPCA). MCPCA computes
transformations of variables whose covariance matrix has the largest Ky Fan
norm. Variable transformations are unknown, can be nonlinear and are computed
in an optimization. MCPCA can also be viewed as a multivariate extension of
Maximal Correlation. For jointly Gaussian variables we show that the covariance
matrix corresponding to the identity (or the negative of the identity)
transformations majorizes covariance matrices of non-identity functions. Using
this result we characterize global MCPCA optimizers for nonlinear functions of
jointly Gaussian variables for every rank constraint. For categorical variables
we characterize global MCPCA optimizers for the rank one constraint based on
the leading eigenvector of a matrix computed using pairwise joint
distributions. For a general rank constraint we propose a block coordinate
descent algorithm and show its convergence to stationary points of the MCPCA
optimization. We compare MCPCA with PCA and other state-of-the-art
dimensionality reduction methods including Isomap, LLE, multilayer autoencoders
(neural networks), kernel PCA, probabilistic PCA and diffusion maps on several
synthetic and real datasets. We show that MCPCA consistently provides improved
performance compared to other methods.
| no_new_dataset | 0.94699 |
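The Ky Fan k-norm that MCPCA maximizes is the sum of the k largest singular values of a matrix; for a covariance matrix (symmetric positive semidefinite) it equals the sum of the k largest eigenvalues, i.e. the variance captured by the top-k principal components. A small numpy check (illustrative, with the variable transformations held fixed):

```python
import numpy as np

def ky_fan_norm(m, k):
    """Sum of the k largest singular values of m."""
    return float(np.sort(np.linalg.svd(m, compute_uv=False))[::-1][:k].sum())

rng = np.random.default_rng(1)
x = rng.standard_normal((200, 5))
cov = np.cov(x, rowvar=False)
print(ky_fan_norm(cov, k=2))
# For a PSD matrix this coincides with the sum of the top-k eigenvalues:
print(np.sort(np.linalg.eigvalsh(cov))[::-1][:2].sum())
```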
1702.05970 | Patrick Christ | Patrick Ferdinand Christ, Florian Ettlinger, Felix Gr\"un, Mohamed
Ezzeldin A. Elshaera, Jana Lipkova, Sebastian Schlecht, Freba Ahmaddy, Sunil
Tatavarty, Marc Bickel, Patrick Bilic, Markus Rempfler, Felix Hofmann, Melvin
D Anastasi, Seyed-Ahmad Ahmadi, Georgios Kaissis, Julian Holch, Wieland
Sommer, Rickmer Braren, Volker Heinemann, Bjoern Menze | Automatic Liver and Tumor Segmentation of CT and MRI Volumes using
Cascaded Fully Convolutional Neural Networks | Under Review | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic segmentation of the liver and hepatic lesions is an important step
towards deriving quantitative biomarkers for accurate clinical diagnosis and
computer-aided decision support systems. This paper presents a method to
automatically segment liver and lesions in CT and MRI abdomen images using
cascaded fully convolutional neural networks (CFCNs) enabling the segmentation
of a large-scale medical trial or quantitative image analysis. We train and
cascade two FCNs for a combined segmentation of the liver and its lesions. In
the first step, we train a FCN to segment the liver as ROI input for a second
FCN. The second FCN solely segments lesions within the predicted liver ROIs of
step 1. CFCN models were trained on an abdominal CT dataset comprising 100
hepatic tumor volumes. Validations on further datasets show that CFCN-based
semantic liver and lesion segmentation achieves Dice scores over 94% for liver
with computation times below 100s per volume. We further experimentally
demonstrate the robustness of the proposed method on 38 MRI liver tumor
volumes and the public 3DIRCAD dataset.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 13:52:57 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2017 15:02:59 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Christ",
"Patrick Ferdinand",
""
],
[
"Ettlinger",
"Florian",
""
],
[
"Grün",
"Felix",
""
],
[
"Elshaera",
"Mohamed Ezzeldin A.",
""
],
[
"Lipkova",
"Jana",
""
],
[
"Schlecht",
"Sebastian",
""
],
[
"Ahmaddy",
"Freba",
""
],
[
"Tatavarty",
"Sunil",
""
],
[
"Bickel",
"Marc",
""
],
[
"Bilic",
"Patrick",
""
],
[
"Rempfler",
"Markus",
""
],
[
"Hofmann",
"Felix",
""
],
[
"Anastasi",
"Melvin D",
""
],
[
"Ahmadi",
"Seyed-Ahmad",
""
],
[
"Kaissis",
"Georgios",
""
],
[
"Holch",
"Julian",
""
],
[
"Sommer",
"Wieland",
""
],
[
"Braren",
"Rickmer",
""
],
[
"Heinemann",
"Volker",
""
],
[
"Menze",
"Bjoern",
""
]
] | TITLE: Automatic Liver and Tumor Segmentation of CT and MRI Volumes using
Cascaded Fully Convolutional Neural Networks
ABSTRACT: Automatic segmentation of the liver and hepatic lesions is an important step
towards deriving quantitative biomarkers for accurate clinical diagnosis and
computer-aided decision support systems. This paper presents a method to
automatically segment liver and lesions in CT and MRI abdomen images using
cascaded fully convolutional neural networks (CFCNs) enabling the segmentation
of a large-scale medical trial or quantitative image analysis. We train and
cascade two FCNs for a combined segmentation of the liver and its lesions. In
the first step, we train a FCN to segment the liver as ROI input for a second
FCN. The second FCN solely segments lesions within the predicted liver ROIs of
step 1. CFCN models were trained on an abdominal CT dataset comprising 100
hepatic tumor volumes. Validations on further datasets show that CFCN-based
semantic liver and lesion segmentation achieves Dice scores over 94% for liver
with computation times below 100s per volume. We further experimentally
demonstrate the robustness of the proposed method on 38 MRI liver tumor
volumes and the public 3DIRCAD dataset.
| no_new_dataset | 0.949949 |
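The cascade itself is simple to express: the first network's liver mask defines the region of interest in which the second network searches for lesions. A schematic sketch in which thresholding functions stand in for the two trained FCNs (placeholders, not the paper's models):

```python
import numpy as np

def predict_liver(volume):
    return volume > 0.5                       # placeholder for the first FCN

def predict_lesions(volume):
    return volume > 0.8                       # placeholder for the second FCN

def cascaded_segmentation(volume):
    liver_mask = predict_liver(volume)
    roi = np.where(liver_mask, volume, 0.0)   # zero everything outside the liver
    return predict_lesions(roi) & liver_mask  # lesions constrained to the ROI

vol = np.random.default_rng(2).random((4, 64, 64))
print(int(cascaded_segmentation(vol).sum()), "lesion voxels inside the liver ROI")
```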
1702.07005 | Thomas Parnelll | Thomas Parnell, Celestine D\"unner, Kubilay Atasu, Manolis Sifalakis
and Haris Pozidis | Large-Scale Stochastic Learning using GPUs | Accepted for publication in ParLearning 2017: The 6th International
Workshop on Parallel and Distributed Computing for Large Scale Machine
Learning and Big Data Analytics, Orlando, Florida, May 2017 | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose an accelerated stochastic learning system for very
large-scale applications. Acceleration is achieved by mapping the training
algorithm onto massively parallel processors: we demonstrate a parallel,
asynchronous GPU implementation of the widely used stochastic coordinate
descent/ascent algorithm that can provide up to 35x speed-up over a sequential
CPU implementation. In order to train on very large datasets that do not fit
inside the memory of a single GPU, we then consider techniques for distributed
stochastic learning. We propose a novel method for optimally aggregating model
updates from worker nodes when the training data is distributed either by
example or by feature. Using this technique, we demonstrate that one can scale
out stochastic learning across up to 8 worker nodes without any significant
loss of training time. Finally, we combine GPU acceleration with the optimized
distributed method to train on a dataset consisting of 200 million training
examples and 75 million features. We show that by scaling out across 4 GPUs, one can
attain a high degree of training accuracy in around 4 seconds: a 20x speed-up
in training time compared to a multi-threaded, distributed implementation
across 4 CPUs.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 21:03:11 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Parnell",
"Thomas",
""
],
[
"Dünner",
"Celestine",
""
],
[
"Atasu",
"Kubilay",
""
],
[
"Sifalakis",
"Manolis",
""
],
[
"Pozidis",
"Haris",
""
]
] | TITLE: Large-Scale Stochastic Learning using GPUs
ABSTRACT: In this work we propose an accelerated stochastic learning system for very
large-scale applications. Acceleration is achieved by mapping the training
algorithm onto massively parallel processors: we demonstrate a parallel,
asynchronous GPU implementation of the widely used stochastic coordinate
descent/ascent algorithm that can provide up to 35x speed-up over a sequential
CPU implementation. In order to train on very large datasets that do not fit
inside the memory of a single GPU, we then consider techniques for distributed
stochastic learning. We propose a novel method for optimally aggregating model
updates from worker nodes when the training data is distributed either by
example or by feature. Using this technique, we demonstrate that one can scale
out stochastic learning across up to 8 worker nodes without any significant
loss of training time. Finally, we combine GPU acceleration with the optimized
distributed method to train on a dataset consisting of 200 million training
examples and 75 million features. We show that by scaling out across 4 GPUs, one can
attain a high degree of training accuracy in around 4 seconds: a 20x speed-up
in training time compared to a multi-threaded, distributed implementation
across 4 CPUs.
| no_new_dataset | 0.942718 |
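The building block being accelerated is stochastic coordinate descent: repeatedly pick a coordinate at random and minimize the objective exactly along it. A single-threaded sketch for ridge regression (the paper parallelizes updates like these across GPU threads and worker nodes; this is only the serial primitive):

```python
import numpy as np

def scd_ridge(X, y, lam=1.0, steps=5000, seed=0):
    """Stochastic coordinate descent for 0.5*||y - Xw||^2 + 0.5*lam*||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(steps):
        j = rng.integers(d)
        rj = residual + X[:, j] * w[j]          # residual with coordinate j removed
        w_j = X[:, j] @ rj / (col_sq[j] + lam)  # exact minimizer along coordinate j
        residual = rj - X[:, j] * w_j
        w[j] = w_j
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))
w_true = rng.standard_normal(20)
y = X @ w_true + 0.1 * rng.standard_normal(500)
print(np.linalg.norm(scd_ridge(X, y) - w_true))   # small: close to the true weights
```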
1702.07046 | Travis Wolfe | Travis Wolfe, Mark Dredze, Benjamin Van Durme | Feature Generation for Robust Semantic Role Labeling | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hand-engineered feature sets are a well-understood method for creating robust
NLP models, but they require a lot of expertise and effort to create. In this
work we describe how to automatically generate rich feature sets from simple
units called featlets, requiring less engineering. Using information gain to
guide the generation process, we train models which rival the state of the art
on two standard Semantic Role Labeling datasets with almost no task or
linguistic insight.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 23:39:03 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Wolfe",
"Travis",
""
],
[
"Dredze",
"Mark",
""
],
[
"Van Durme",
"Benjamin",
""
]
] | TITLE: Feature Generation for Robust Semantic Role Labeling
ABSTRACT: Hand-engineered feature sets are a well-understood method for creating robust
NLP models, but they require a lot of expertise and effort to create. In this
work we describe how to automatically generate rich feature sets from simple
units called featlets, requiring less engineering. Using information gain to
guide the generation process, we train models which rival the state of the art
on two standard Semantic Role Labeling datasets with almost no task or
linguistic insight.
| no_new_dataset | 0.95418 |
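Information gain — the criterion said to guide generation from featlets — is the reduction in label entropy obtained by conditioning on a feature. A self-contained sketch for discrete features:

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(labels) minus the expected entropy after splitting on the feature."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

print(information_gain(["a", "a", "b", "b"], [1, 1, 0, 0]))  # 1.0 bit: fully informative
print(information_gain(["a", "b", "a", "b"], [1, 1, 0, 0]))  # 0.0 bits: uninformative
```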
1702.07092 | Arman Cohan | Arman Cohan, Allan Fong, Nazli Goharian, and Raj Ratwani | A Neural Attention Model for Categorizing Patient Safety Events | ECIR 2017 | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical errors are leading causes of death in the US and as such, prevention
of these errors is paramount to promoting health care. Patient Safety Event
reports are narratives describing potential adverse events to the patients and
are important in identifying and preventing medical errors. We present a neural
network architecture for identifying the type of safety events which is the
first step in understanding these narratives. Our proposed model is based on a
soft neural attention model to improve the effectiveness of encoding long
sequences. Empirical results on two large-scale real-world datasets of patient
safety reports demonstrate the effectiveness of our method with significant
improvements over existing methods.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 04:27:49 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Cohan",
"Arman",
""
],
[
"Fong",
"Allan",
""
],
[
"Goharian",
"Nazli",
""
],
[
"Ratwani",
"Raj",
""
]
] | TITLE: A Neural Attention Model for Categorizing Patient Safety Events
ABSTRACT: Medical errors are leading causes of death in the US and as such, prevention
of these errors is paramount to promoting health care. Patient Safety Event
reports are narratives describing potential adverse events to the patients and
are important in identifying and preventing medical errors. We present a neural
network architecture for identifying the type of safety events which is the
first step in understanding these narratives. Our proposed model is based on a
soft neural attention model to improve the effectiveness of encoding long
sequences. Empirical results on two large-scale real-world datasets of patient
safety reports demonstrate the effectiveness of our method with significant
improvements over existing methods.
| no_new_dataset | 0.951504 |
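The core of a soft-attention encoder is a weighted pooling step: score each hidden state, normalize the scores with a softmax, and sum. A generic numpy sketch — not the paper's exact architecture; the scoring vector `v` stands in for learned parameters:

```python
import numpy as np

def soft_attention(H, v):
    """Attention-weighted context vector over a sequence of hidden states H."""
    scores = H @ v                          # one scalar score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over the sequence
    return weights @ H                      # weighted sum of hidden states

rng = np.random.default_rng(4)
H = rng.standard_normal((50, 8))            # 50 encoder states of dimension 8
v = rng.standard_normal(8)                  # stand-in for a learned scoring vector
print(soft_attention(H, v).shape)           # (8,): fixed-size summary of a long report
```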
1702.07189 | David Malmgren-Hansen Mr. | David Malmgren-Hansen, Allan Aasbjerg Nielsen and Rasmus Engholm | Analyzing Learned Convnet Features with Dirichlet Process Gaussian
Mixture Models | Presented at NIPS 2016 Workshop: Practical Bayesian Nonparametrics | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (Convnets) have achieved good results in a
range of computer vision tasks in recent years. Though given a lot of
attention, visualizing the learned representations to interpret Convnets still
remains a challenging task. The high dimensionality of internal representations
and the high abstractions of deep layers are the main challenges when
visualizing Convnet functionality. We present in this paper a technique based
on clustering internal Convnet representations with a Dirichlet Process
Gaussian Mixture Model, for visualization of learned representations in
Convnets. Our method copes with the high dimensionality of a Convnet by
clustering representations across all nodes of each layer. We will discuss how
this application is useful when considering transfer learning, i.e.\
transferring a model trained on one dataset to solve a task on a different one.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 12:11:42 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Malmgren-Hansen",
"David",
""
],
[
"Nielsen",
"Allan Aasbjerg",
""
],
[
"Engholm",
"Rasmus",
""
]
] | TITLE: Analyzing Learned Convnet Features with Dirichlet Process Gaussian
Mixture Models
ABSTRACT: Convolutional Neural Networks (Convnets) have achieved good results in a
range of computer vision tasks in recent years. Though given a lot of
attention, visualizing the learned representations to interpret Convnets still
remains a challenging task. The high dimensionality of internal representations
and the high abstractions of deep layers are the main challenges when
visualizing Convnet functionality. We present in this paper a technique based
on clustering internal Convnet representations with a Dirichlet Process
Gaussian Mixture Model, for visualization of learned representations in
Convnets. Our method copes with the high dimensionality of a Convnet by
clustering representations across all nodes of each layer. We will discuss how
this application is useful when considering transfer learning, i.e.\
transferring a model trained on one dataset to solve a task on a different one.
| no_new_dataset | 0.948489 |
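scikit-learn exposes a truncated variational approximation to the Dirichlet Process Gaussian Mixture Model, which is one way to reproduce this kind of clustering on activation vectors. A sketch with random features standing in for layer activations:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

features = np.random.default_rng(6).standard_normal((300, 16))  # stand-in activations
dpgmm = BayesianGaussianMixture(
    n_components=20,   # truncation level; the DP prior prunes unused components
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)
labels = dpgmm.predict(features)
print("effective clusters:", int(np.sum(dpgmm.weights_ > 1e-2)))
```

The DP prior lets the effective number of clusters be inferred from the data rather than fixed in advance, which matches the abstract's motivation of clustering representations across all nodes of a layer.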
1702.07219 | Tuan-Minh Pham | Tuan-Minh Pham, Thi-Thuy-Lien Nguyen, Serge Fdida (UPMC), Huynh Thi
Thanh Binh | Online Load Balancing for Network Functions Virtualization | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network Functions Virtualization (NFV) aims to support service providers to
deploy various services in a more agile and cost-effective way. However, the
softwarization and cloudification of network functions can result in severe
congestion and low network performance. In this paper, we propose a solution to
address this issue. We analyze and solve the online load balancing problem
using multipath routing in NFV to optimize network performance in response to
the dynamic changes of user demands. In particular, we first formulate the
optimization problem of load balancing as a mixed integer linear program for
achieving the optimal solution. We then develop the ORBIT algorithm that solves
the online load balancing problem. The performance guarantee of ORBIT is
analytically proved in comparison with the optimal offline solution. The
experiment results on real-world datasets show that ORBIT performs very well
for distributing traffic of each service demand across multipaths without
knowledge of future demands, especially under high-load conditions.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 14:03:41 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Pham",
"Tuan-Minh",
"",
"UPMC"
],
[
"Nguyen",
"Thi-Thuy-Lien",
"",
"UPMC"
],
[
"Fdida",
"Serge",
"",
"UPMC"
],
[
"Binh",
"Huynh Thi Thanh",
""
]
] | TITLE: Online Load Balancing for Network Functions Virtualization
ABSTRACT: Network Functions Virtualization (NFV) aims to support service providers to
deploy various services in a more agile and cost-effective way. However, the
softwarization and cloudification of network functions can result in severe
congestion and low network performance. In this paper, we propose a solution to
address this issue. We analyze and solve the online load balancing problem
using multipath routing in NFV to optimize network performance in response to
the dynamic changes of user demands. In particular, we first formulate the
optimization problem of load balancing as a mixed integer linear program for
achieving the optimal solution. We then develop the ORBIT algorithm that solves
the online load balancing problem. The performance guarantee of ORBIT is
analytically proved in comparison with the optimal offline solution. The
experiment results on real-world datasets show that ORBIT performs very well
for distributing traffic of each service demand across multipaths without
knowledge of future demands, especially under high-load conditions.
| no_new_dataset | 0.947235 |
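The offline problem is formulated as a mixed integer linear program; its flavor can be seen in a toy LP that splits one demand across two candidate paths while minimizing the maximum link utilization t (hypothetical capacities, scipy for illustration — not the paper's formulation):

```python
from scipy.optimize import linprog

demand, cap = 10.0, (6.0, 8.0)
# Variables: [flow on path 1, flow on path 2, t]; objective: minimize t.
res = linprog(
    c=[0.0, 0.0, 1.0],
    A_ub=[[1.0 / cap[0], 0.0, -1.0],    # utilization of path 1 <= t
          [0.0, 1.0 / cap[1], -1.0]],   # utilization of path 2 <= t
    b_ub=[0.0, 0.0],
    A_eq=[[1.0, 1.0, 0.0]],             # the two flows must carry the whole demand
    b_eq=[demand],
    bounds=[(0, None), (0, None), (0, None)],
)
print(res.x)  # flows split so that both paths end up equally utilized
```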
1702.07306 | David Lopez-Paz | Mateo Rojas-Carulla, Marco Baroni, David Lopez-Paz | Causal Discovery Using Proxy Variables | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering causal relations is fundamental to reasoning and intelligence. In
particular, observational causal discovery algorithms estimate the cause-effect
relation between two random entities $X$ and $Y$, given $n$ samples from
$P(X,Y)$.
In this paper, we develop a framework to estimate the cause-effect relation
between two static entities $x$ and $y$: for instance, an art masterpiece $x$
and its fraudulent copy $y$. To this end, we introduce the notion of proxy
variables, which allow the construction of a pair of random entities $(A,B)$
from the pair of static entities $(x,y)$. Then, estimating the cause-effect
relation between $A$ and $B$ using an observational causal discovery algorithm
leads to an estimation of the cause-effect relation between $x$ and $y$. For
example, our framework detects the causal relation between unprocessed
photographs and their modifications, and orders in time a set of shuffled
frames from a video.
As our main case study, we introduce a human-elicited dataset of 10,000
causally-linked pairs of words from natural language. Our methods discover
75% of these causal relations. Finally, we discuss the role of proxy variables
in machine learning, as a general tool to incorporate static knowledge into
prediction tasks.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 17:46:39 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Rojas-Carulla",
"Mateo",
""
],
[
"Baroni",
"Marco",
""
],
[
"Lopez-Paz",
"David",
""
]
] | TITLE: Causal Discovery Using Proxy Variables
ABSTRACT: Discovering causal relations is fundamental to reasoning and intelligence. In
particular, observational causal discovery algorithms estimate the cause-effect
relation between two random entities $X$ and $Y$, given $n$ samples from
$P(X,Y)$.
In this paper, we develop a framework to estimate the cause-effect relation
between two static entities $x$ and $y$: for instance, an art masterpiece $x$
and its fraudulent copy $y$. To this end, we introduce the notion of proxy
variables, which allow the construction of a pair of random entities $(A,B)$
from the pair of static entities $(x,y)$. Then, estimating the cause-effect
relation between $A$ and $B$ using an observational causal discovery algorithm
leads to an estimation of the cause-effect relation between $x$ and $y$. For
example, our framework detects the causal relation between unprocessed
photographs and their modifications, and orders in time a set of shuffled
frames from a video.
As our main case study, we introduce a human-elicited dataset of 10,000
causally-linked pairs of words from natural language. Our methods discover
75% of these causal relations. Finally, we discuss the role of proxy variables
in machine learning, as a general tool to incorporate static knowledge into
prediction tasks.
| new_dataset | 0.957278 |
1602.01921 | Haanvid Lee | Haanvid Lee, Minju Jung, and Jun Tani | Recognition of Visually Perceived Compositional Human Actions by
Multiple Spatio-Temporal Scales Recurrent Neural Networks | 10 pages, 9 figures, 5 tables | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current paper proposes a novel neural network model for recognizing
visually perceived human actions. The proposed multiple spatio-temporal scales
recurrent neural network (MSTRNN) model is derived by introducing multiple
timescale recurrent dynamics to the conventional convolutional neural network
model. One of the essential characteristics of the MSTRNN is that its
architecture imposes both spatial and temporal constraints simultaneously on
the neural activity, which varies over multiple scales among different layers. As
suggested by the principle of the upward and downward causation, it is assumed
that the network can develop meaningful structures such as functional hierarchy
by taking advantage of such constraints during the course of learning. To
evaluate the characteristics of the model, the current study uses three types
of human action video datasets consisting of different types of primitive
actions and different levels of compositionality. The performance of the
MSTRNN in testing with these datasets is compared with that of other
representative deep learning models used in the field. The analysis of the
internal representation obtained through learning with the datasets
clarifies what sorts of functional hierarchy can be developed by extracting the
essential compositionality underlying the dataset.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 04:00:16 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2016 07:59:03 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Feb 2017 16:33:49 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Lee",
"Haanvid",
""
],
[
"Jung",
"Minju",
""
],
[
"Tani",
"Jun",
""
]
] | TITLE: Recognition of Visually Perceived Compositional Human Actions by
Multiple Spatio-Temporal Scales Recurrent Neural Networks
ABSTRACT: The current paper proposes a novel neural network model for recognizing
visually perceived human actions. The proposed multiple spatio-temporal scales
recurrent neural network (MSTRNN) model is derived by introducing multiple
timescale recurrent dynamics to the conventional convolutional neural network
model. One of the essential characteristics of the MSTRNN is that its
architecture imposes both spatial and temporal constraints simultaneously on
the neural activity, which varies over multiple scales among different layers. As
suggested by the principle of the upward and downward causation, it is assumed
that the network can develop meaningful structures such as functional hierarchy
by taking advantage of such constraints during the course of learning. To
evaluate the characteristics of the model, the current study uses three types
of human action video datasets consisting of different types of primitive
actions and different levels of compositionality. The performance of the
MSTRNN in testing with these datasets is compared with that of other
representative deep learning models used in the field. The analysis of the
internal representation obtained through learning with the datasets
clarifies what sorts of functional hierarchy can be developed by extracting the
essential compositionality underlying the dataset.
| no_new_dataset | 0.930015 |
1607.04381 | Song Han | Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian
Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro,
William J. Dally | DSD: Dense-Sparse-Dense Training for Deep Neural Networks | Published as a conference paper at ICLR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern deep neural networks have a large number of parameters, making them
very hard to train. We propose DSD, a dense-sparse-dense training flow, for
regularizing deep neural networks and achieving better optimization
performance. In the first D (Dense) step, we train a dense network to learn
connection weights and importance. In the S (Sparse) step, we regularize the
network by pruning the unimportant connections with small weights and
retraining the network given the sparsity constraint. In the final D (re-Dense)
step, we increase the model capacity by removing the sparsity constraint,
re-initialize the pruned parameters from zero and retrain the whole dense
network. Experiments show that DSD training can improve the performance for a
wide range of CNNs, RNNs and LSTMs on the tasks of image classification,
caption generation and speech recognition. On ImageNet, DSD improved the Top1
accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50
by 1.1%, respectively. On the WSJ'93 dataset, DSD improved DeepSpeech and
DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the
NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training
time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S
step. At testing time, DSD doesn't change the network architecture or incur any
inference overhead. The consistent and significant performance gain of DSD
experiments shows the inadequacy of the current training methods for finding
the best local optimum, while DSD effectively achieves superior optimization
performance for finding a better solution. DSD models are available to download
at https://songhan.github.io/DSD.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2016 04:56:27 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2017 20:51:05 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Han",
"Song",
""
],
[
"Pool",
"Jeff",
""
],
[
"Narang",
"Sharan",
""
],
[
"Mao",
"Huizi",
""
],
[
"Gong",
"Enhao",
""
],
[
"Tang",
"Shijian",
""
],
[
"Elsen",
"Erich",
""
],
[
"Vajda",
"Peter",
""
],
[
"Paluri",
"Manohar",
""
],
[
"Tran",
"John",
""
],
[
"Catanzaro",
"Bryan",
""
],
[
"Dally",
"William J.",
""
]
] | TITLE: DSD: Dense-Sparse-Dense Training for Deep Neural Networks
ABSTRACT: Modern deep neural networks have a large number of parameters, making them
very hard to train. We propose DSD, a dense-sparse-dense training flow, for
regularizing deep neural networks and achieving better optimization
performance. In the first D (Dense) step, we train a dense network to learn
connection weights and importance. In the S (Sparse) step, we regularize the
network by pruning the unimportant connections with small weights and
retraining the network given the sparsity constraint. In the final D (re-Dense)
step, we increase the model capacity by removing the sparsity constraint,
re-initialize the pruned parameters from zero and retrain the whole dense
network. Experiments show that DSD training can improve the performance for a
wide range of CNNs, RNNs and LSTMs on the tasks of image classification,
caption generation and speech recognition. On ImageNet, DSD improved the Top1
accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50
by 1.1%, respectively. On the WSJ'93 dataset, DSD improved DeepSpeech and
DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the
NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training
time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S
step. At testing time, DSD doesn't change the network architecture or incur any
inference overhead. The consistent and significant performance gain of DSD
experiments shows the inadequacy of the current training methods for finding
the best local optimum, while DSD effectively achieves superior optimization
performance for finding a better solution. DSD models are available to download
at https://songhan.github.io/DSD.
| no_new_dataset | 0.941493 |
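The S step of DSD is magnitude pruning under a fixed sparsity ratio, followed by retraining with the pruned connections masked out; the final step lifts the mask so the pruned weights restart from zero. A miniature sketch with the training loops elided:

```python
import numpy as np

def sparsify(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

rng = np.random.default_rng(7)
w_dense = rng.standard_normal((4, 4))        # after the first D (dense) step
w_sparse = sparsify(w_dense, sparsity=0.5)   # S step; sparsity is the one extra hyper-parameter
mask = w_sparse != 0
# ... retrain with gradient updates multiplied by `mask` ...
w_redense = w_sparse.copy()                  # re-D step: mask lifted, zeros trainable again
print(f"{mask.mean():.0%} of weights kept in the sparse phase")
```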
1608.00514 | Alireza Davoudi | Alireza Davoudi, Saeed Shiry Ghidary, Khadijeh Sadatnejad | Dimensionality reduction based on Distance Preservation to Local Mean
(DPLM) for SPD matrices and its application in BCI | null | null | 10.1088/1741-2552/aa61bb | null | cs.NA cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a nonlinear dimensionality reduction algorithm for
the manifold of Symmetric Positive Definite (SPD) matrices that considers the
geometry of SPD matrices and provides a low dimensional representation of the
manifold with high class discrimination. The proposed algorithm tries to
preserve the local structure of the data by preserving distance to local mean
(DPLM) and also provides an implicit projection matrix. DPLM is linear in terms
of the number of training samples and may use label information, when
available, to improve performance in classification tasks. We
performed several experiments on the multi-class dataset IIa from BCI
competition IV. The results show that our approach, as a dimensionality
reduction technique, leads to superior results in comparison with other
competitors in the related literature because of its robustness against outliers. The
experiments confirm that the combination of DPLM with FGMDM as the classifier
leads to state-of-the-art performance on this dataset.
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2016 15:17:16 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Davoudi",
"Alireza",
""
],
[
"Ghidary",
"Saeed Shiry",
""
],
[
"Sadatnejad",
"Khadijeh",
""
]
] | TITLE: Dimensionality reduction based on Distance Preservation to Local Mean
(DPLM) for SPD matrices and its application in BCI
ABSTRACT: In this paper, we propose a nonlinear dimensionality reduction algorithm for
the manifold of Symmetric Positive Definite (SPD) matrices that considers the
geometry of SPD matrices and provides a low dimensional representation of the
manifold with high class discrimination. The proposed algorithm tries to
preserve the local structure of the data by preserving distance to local mean
(DPLM) and also provides an implicit projection matrix. DPLM is linear in terms
of the number of training samples and may use label information, when
available, to improve performance in classification tasks. We
performed several experiments on the multi-class dataset IIa from BCI
competition IV. The results show that our approach, as a dimensionality
reduction technique, leads to superior results in comparison with other
competitors in the related literature because of its robustness against outliers. The
experiments confirm that the combination of DPLM with FGMDM as the classifier
leads to state-of-the-art performance on this dataset.
| no_new_dataset | 0.947186 |
1609.02907 | Thomas Kipf | Thomas N. Kipf, Max Welling | Semi-Supervised Classification with Graph Convolutional Networks | Published as a conference paper at ICLR 2017 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a scalable approach for semi-supervised learning on
graph-structured data that is based on an efficient variant of convolutional
neural networks which operate directly on graphs. We motivate the choice of our
convolutional architecture via a localized first-order approximation of
spectral graph convolutions. Our model scales linearly in the number of graph
edges and learns hidden layer representations that encode both local graph
structure and features of nodes. In a number of experiments on citation
networks and on a knowledge graph dataset we demonstrate that our approach
outperforms related methods by a significant margin.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 19:48:41 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2016 21:25:47 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Nov 2016 18:37:23 GMT"
},
{
"version": "v4",
"created": "Wed, 22 Feb 2017 09:55:36 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Kipf",
"Thomas N.",
""
],
[
"Welling",
"Max",
""
]
] | TITLE: Semi-Supervised Classification with Graph Convolutional Networks
ABSTRACT: We present a scalable approach for semi-supervised learning on
graph-structured data that is based on an efficient variant of convolutional
neural networks which operate directly on graphs. We motivate the choice of our
convolutional architecture via a localized first-order approximation of
spectral graph convolutions. Our model scales linearly in the number of graph
edges and learns hidden layer representations that encode both local graph
structure and features of nodes. In a number of experiments on citation
networks and on a knowledge graph dataset we demonstrate that our approach
outperforms related methods by a significant margin.
| no_new_dataset | 0.946448 |
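The layer-wise propagation rule behind this model, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), is compact enough to write out directly; a plain-numpy sketch on a three-node path graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolutional layer with the renormalization trick."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
H = np.eye(3)                                      # one-hot node features
W = np.random.default_rng(8).standard_normal((3, 2))
print(gcn_layer(A, H, W))                          # new 2-dim node representations
```

Each layer mixes every node's features with those of its neighbors, which is why the cost scales linearly in the number of graph edges.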
1611.03915 | Kuan-Ting Chen | Kuan-Ting Chen and Jiebo Luo | When Fashion Meets Big Data: Discriminative Mining of Best Selling
Clothing Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the prevalence of e-commerce websites and the ease of online shopping,
consumers are embracing huge amounts of various options in products.
Undeniably, shopping is one of the most essential activities in our society and
studying consumer's shopping behavior is important for the industry as well as
sociology and psychology. Indisputably, one of the most popular e-commerce
categories is the clothing business. There arises the need for analysis of popular
and attractive clothing features which could further boost many emerging
applications, such as clothing recommendation and advertising. In this work, we
design a novel system that consists of three major components: 1) exploring and
organizing a large-scale clothing dataset from an online shopping website, 2)
pruning and extracting images of best-selling products in clothing item data
and user transaction history, and 3) utilizing a machine learning based
approach to discovering fine-grained clothing attributes as the representative
and discriminative characteristics of popular clothing style elements. Through
the experiments over a large-scale online clothing shopping dataset, we
demonstrate the effectiveness of our proposed system, and obtain useful
insights on clothing consumption trends and profitable clothing features.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 23:58:06 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2017 05:34:33 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Chen",
"Kuan-Ting",
""
],
[
"Luo",
"Jiebo",
""
]
] | TITLE: When Fashion Meets Big Data: Discriminative Mining of Best Selling
Clothing Features
ABSTRACT: With the prevalence of e-commerce websites and the ease of online shopping,
consumers are embracing huge amounts of various options in products.
Undeniably, shopping is one of the most essential activities in our society and
studying consumer's shopping behavior is important for the industry as well as
sociology and psychology. Indisputably, one of the most popular e-commerce
categories is the clothing business. There arises the need for analysis of popular
and attractive clothing features which could further boost many emerging
applications, such as clothing recommendation and advertising. In this work, we
design a novel system that consists of three major components: 1) exploring and
organizing a large-scale clothing dataset from an online shopping website, 2)
pruning and extracting images of best-selling products in clothing item data
and user transaction history, and 3) utilizing a machine learning based
approach to discovering fine-grained clothing attributes as the representative
and discriminative characteristics of popular clothing style elements. Through
the experiments over a large-scale online clothing shopping dataset, we
demonstrate the effectiveness of our proposed system, and obtain useful
insights on clothing consumption trends and profitable clothing features.
| no_new_dataset | 0.942823 |
1702.06700 | Yuetan Lin | Yuetan Lin, Zhangyang Pang, Donghui Wang, Yueting Zhuang | Task-driven Visual Saliency and Attention-based Visual Question
Answering | 8 pages, 3 figures | null | null | null | cs.CV cs.AI cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual question answering (VQA) has witnessed great progress since May, 2015
as a classic problem unifying visual and textual data into a system. Many
enlightening VQA works explore deep into the image and question encodings and
fusing methods, of which attention is the most effective and infusive
mechanism. Current attention based methods focus on adequate fusion of visual
and textual features, but lack the attention to where people focus to ask
questions about the image. Traditional attention based methods attach a single
value to the feature at each spatial location, which loses much useful
information. To remedy these problems, we propose a general method to perform
saliency-like pre-selection on overlapped region features by the interrelation
of bidirectional LSTM (BiLSTM), and use a novel element-wise multiplication
based attention method to capture more competent correlation information
between visual and textual features. We conduct experiments on the large-scale
COCO-VQA dataset and analyze the effectiveness of our model demonstrated by
strong empirical results.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 08:19:38 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Lin",
"Yuetan",
""
],
[
"Pang",
"Zhangyang",
""
],
[
"Wang",
"Donghui",
""
],
[
"Zhuang",
"Yueting",
""
]
] | TITLE: Task-driven Visual Saliency and Attention-based Visual Question
Answering
ABSTRACT: Visual question answering (VQA) has witnessed great progress since May, 2015
as a classic problem unifying visual and textual data into a system. Many
enlightening VQA works explore deep into the image and question encodings and
fusing methods, of which attention is the most effective and infusive
mechanism. Current attention based methods focus on adequate fusion of visual
and textual features, but lack the attention to where people focus to ask
questions about the image. Traditional attention based methods attach a single
value to the feature at each spatial location, which loses much useful
information. To remedy these problems, we propose a general method to perform
saliency-like pre-selection on overlapped region features by the interrelation
of bidirectional LSTM (BiLSTM), and use a novel element-wise multiplication
based attention method to capture more competent correlation information
between visual and textual features. We conduct experiments on the large-scale
COCO-VQA dataset and analyze the effectiveness of our model demonstrated by
strong empirical results.
| no_new_dataset | 0.943138 |
1702.06703 | Jiwei Li | Jiwei Li, Will Monroe and Dan Jurafsky | Data Distillation for Controlling Specificity in Dialogue Generation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People speak at different levels of specificity in different situations,
depending on their knowledge, interlocutors, mood, etc. A conversational agent
should have this ability and know when to be specific and when to be general.
We propose an approach that gives a neural network--based conversational agent
this ability. Our approach involves alternating between data distillation and
model training: removing training examples that are closest to the responses
most commonly produced by the model trained in the last round, and then
retraining the model on the remaining dataset. Dialogue generation
models trained with different degrees of data distillation manifest different
levels of specificity.
We then train a reinforcement learning system for selecting among this pool
of generation models, to choose the best level of specificity for a given
input. Compared to the original generative model trained without distillation,
the proposed system is capable of generating more interesting and
higher-quality responses, in addition to appropriately adjusting specificity
depending on the context.
Our research constitutes a specific case of a broader approach involving
training multiple subsystems from a single dataset distinguished by differences
in a specific property one wishes to model. We show that from such a set of
subsystems, one can use reinforcement learning to build a system that tailors
its output to different input contexts at test time.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 08:32:47 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Li",
"Jiwei",
""
],
[
"Monroe",
"Will",
""
],
[
"Jurafsky",
"Dan",
""
]
] | TITLE: Data Distillation for Controlling Specificity in Dialogue Generation
ABSTRACT: People speak at different levels of specificity in different
situations, depending on their knowledge, interlocutors, mood, etc. A
conversational agent should have this ability and know when to be specific and
when to be general.
We propose an approach that gives a neural network--based conversational agent
this ability. Our approach involves alternating between data distillation and
model training: removing training examples that are closest to the responses
most commonly produced by the model trained in the last round, and then
retraining the model on the remaining dataset. Dialogue generation
models trained with different degrees of data distillation manifest different
levels of specificity.
We then train a reinforcement learning system for selecting among this pool
of generation models, to choose the best level of specificity for a given
input. Compared to the original generative model trained without distillation,
the proposed system is capable of generating more interesting and
higher-quality responses, in addition to appropriately adjusting specificity
depending on the context.
Our research constitutes a specific case of a broader approach involving
training multiple subsystems from a single dataset distinguished by differences
in a specific property one wishes to model. We show that from such a set of
subsystems, one can use reinforcement learning to build a system that tailors
its output to different input contexts at test time.
| no_new_dataset | 0.951729 |
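One distillation round is easy to sketch: find the model's most common outputs, drop training pairs whose responses sit closest to them, and retrain on the remainder. A toy version using TF-IDF cosine similarity as the closeness measure — the training and generation steps are elided, and the threshold is illustrative, not the paper's:

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def distill_round(pairs, model_outputs, threshold=0.8):
    """Remove (context, response) pairs whose response matches a generic output."""
    generic = [r for r, _ in Counter(model_outputs).most_common(3)]
    responses = [resp for _, resp in pairs]
    vec = TfidfVectorizer().fit(responses + generic)
    sims = cosine_similarity(vec.transform(responses), vec.transform(generic))
    return [p for p, s in zip(pairs, sims) if s.max() < threshold]

pairs = [("hi", "i don't know"),
         ("how are you", "i don't know what you mean"),
         ("where should we eat", "try the noodle place on 5th")]
outputs = ["i don't know", "i don't know", "sure"]
print(distill_round(pairs, outputs))  # the verbatim-generic first pair is removed
```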
1702.06709 | Abhishek . | Abhishek, Ashish Anand and Amit Awekar | Fine-Grained Entity Type Classification by Jointly Learning
Representations and Label Embeddings | 11 pages, 5 figures, accepted at EACL 2017 conference | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained entity type classification (FETC) is the task of classifying an
entity mention to a broad set of types. Distant supervision paradigm is
extensively used to generate training data for this task. However, generated
training data assigns same set of labels to every mention of an entity without
considering its local context. Existing FETC systems have two major drawbacks:
assuming training data to be noise free and use of hand crafted features. Our
work overcomes both drawbacks. We propose a neural network model that jointly
learns entity mentions and their context representation to eliminate use of
hand crafted features. Our model treats training data as noisy and uses
non-parametric variant of hinge loss function. Experiments show that the
proposed model outperforms previous state-of-the-art methods on two publicly
available datasets, namely FIGER (GOLD) and BBN with an average relative
improvement of 2.69% in micro-F1 score. Knowledge learnt by our model on one
dataset can be transferred to other datasets while using the same model or other
FETC systems. These approaches of transferring knowledge further improve the
performance of respective models.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 08:59:37 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Abhishek",
"",
""
],
[
"Anand",
"Ashish",
""
],
[
"Awekar",
"Amit",
""
]
] | TITLE: Fine-Grained Entity Type Classification by Jointly Learning
Representations and Label Embeddings
ABSTRACT: Fine-grained entity type classification (FETC) is the task of classifying an
entity mention to a broad set of types. Distant supervision paradigm is
extensively used to generate training data for this task. However, generated
training data assigns same set of labels to every mention of an entity without
considering its local context. Existing FETC systems have two major drawbacks:
assuming training data to be noise free and use of hand crafted features. Our
work overcomes both drawbacks. We propose a neural network model that jointly
learns entity mentions and their context representation to eliminate use of
hand crafted features. Our model treats training data as noisy and uses
non-parametric variant of hinge loss function. Experiments show that the
proposed model outperforms previous state-of-the-art methods on two publicly
available datasets, namely FIGER (GOLD) and BBN with an average relative
improvement of 2.69% in micro-F1 score. Knowledge learnt by our model on one
dataset can be transferred to other datasets while using the same model or other
FETC systems. These approaches of transferring knowledge further improve the
performance of respective models.
| no_new_dataset | 0.947332 |
1702.06712 | Atif Raza | Atif Raza and Stefan Kramer | Ensembles of Randomized Time Series Shapelets Provide Improved Accuracy
while Reducing Computational Costs | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shapelets are discriminative time series subsequences that allow generation
of interpretable classification models, which provide faster and generally
better classification than the nearest neighbor approach. However, the shapelet
discovery process requires the evaluation of all possible subsequences of all
time series in the training set, making it extremely computation intensive.
Consequently, shapelet discovery for large time series datasets quickly becomes
intractable. A number of improvements have been proposed to reduce the training
time. These techniques use approximation or discretization and often lead to
reduced classification accuracy compared to the exact method.
We are proposing the use of ensembles of shapelet-based classifiers obtained
using random sampling of the shapelet candidates. Using random sampling reduces
the number of evaluated candidates and consequently the required computational
cost, while the classification accuracy of the resulting models is also not
significantly different from that of the exact algorithm. The combination of
randomized classifiers rectifies the inaccuracies of individual models because
of the diversity of the solutions. Based on the experiments performed, it is
shown that the proposed approach of using an ensemble of inexpensive
classifiers provides better classification accuracy compared to the exact
method at a significantly lesser computational cost.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 09:07:00 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Raza",
"Atif",
""
],
[
"Kramer",
"Stefan",
""
]
] | TITLE: Ensembles of Randomized Time Series Shapelets Provide Improved Accuracy
while Reducing Computational Costs
ABSTRACT: Shapelets are discriminative time series subsequences that allow generation
of interpretable classification models, which provide faster and generally
better classification than the nearest neighbor approach. However, the shapelet
discovery process requires the evaluation of all possible subsequences of all
time series in the training set, making it extremely computation intensive.
Consequently, shapelet discovery for large time series datasets quickly becomes
intractable. A number of improvements have been proposed to reduce the training
time. These techniques use approximation or discretization and often lead to
reduced classification accuracy compared to the exact method.
We are proposing the use of ensembles of shapelet-based classifiers obtained
using random sampling of the shapelet candidates. Using random sampling reduces
the number of evaluated candidates and consequently the required computational
cost, while the classification accuracy of the resulting models is also not
significantly different from that of the exact algorithm. The combination of
randomized classifiers rectifies the inaccuracies of individual models because
of the diversity of the solutions. Based on the experiments performed, it is
shown that the proposed approach of using an ensemble of inexpensive
classifiers provides better classification accuracy compared to the exact
method at a significantly lower computational cost.
| no_new_dataset | 0.949763 |
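The random-sampling scheme described above is straightforward to illustrate. Below is a minimal sketch — an assumption-laden reconstruction, not the authors' implementation: shapelet candidates are drawn uniformly at random, each ensemble member is trained on shapelet-distance features, and members combine by majority vote. The helper names, the shapelet length, and the choice of decision trees as base learners are all illustrative.

```python
# Sketch: ensemble of shapelet-based classifiers from randomly sampled
# candidates (assumes equal-length series and integer class labels).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def min_distance(series, shapelet):
    # smallest Euclidean distance from the shapelet to any window
    w = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return np.sqrt(((w - shapelet) ** 2).sum(axis=1)).min()

def sample_shapelets(X, n_shapelets=50, length=20):
    # draw random subsequences instead of evaluating all candidates
    out = []
    for _ in range(n_shapelets):
        s = X[rng.integers(len(X))]
        start = rng.integers(len(s) - length + 1)
        out.append(s[start:start + length])
    return out

def transform(X, shapelets):
    # represent each series by its distances to the sampled shapelets
    return np.array([[min_distance(s, sh) for sh in shapelets] for s in X])

def fit_ensemble(X, y, n_members=10):
    members = []
    for _ in range(n_members):
        shapelets = sample_shapelets(X)
        members.append((shapelets,
                        DecisionTreeClassifier().fit(transform(X, shapelets), y)))
    return members

def predict(members, X):
    votes = np.stack([clf.predict(transform(X, sh)).astype(int)
                      for sh, clf in members])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

The diversity that rectifies individual errors comes solely from the independent random draws of shapelet candidates per member.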
1702.06850 | Jobin Wilson | Jobin Wilson and Muhammad Arif | Scene Recognition by Combining Local and Global Image Descriptors | A full implementation of our model is available at
https://github.com/flytxtds/scene-recognition | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object recognition is an important problem in computer vision, having diverse
applications. In this work, we construct an end-to-end scene recognition
pipeline consisting of feature extraction, encoding, pooling and
classification. Our approach simultaneously utilizes global feature descriptors
as well as local feature descriptors from images, to form a hybrid feature
descriptor corresponding to each image. We utilize DAISY features associated
with key points within images as our local feature descriptor and histogram of
oriented gradients (HOG) corresponding to an entire image as a global
descriptor. We make use of a bag-of-visual-words encoding and apply the Mini-Batch
K-Means algorithm to reduce the complexity of our feature encoding scheme. A
2-level pooling procedure is used to combine DAISY and HOG features
corresponding to each image. Finally, we experiment with a multi-class SVM
classifier with several kernels, in a cross-validation setting, and tabulate
our results on the fifteen scene categories dataset. The average accuracy of
our model was 76.4% in the case of a 40%-60% random split of images into
training and testing datasets respectively. The primary objective of this work
is to clearly outline the practical implementation of a basic
scene-recognition pipeline with reasonable accuracy, in Python, using
open-source libraries. A full implementation of the proposed model is available
in our github repository.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 06:57:37 GMT"
}
] | 2017-02-23T00:00:00 | [
[
"Wilson",
"Jobin",
""
],
[
"Arif",
"Muhammad",
""
]
] | TITLE: Scene Recognition by Combining Local and Global Image Descriptors
ABSTRACT: Object recognition is an important problem in computer vision, having diverse
applications. In this work, we construct an end-to-end scene recognition
pipeline consisting of feature extraction, encoding, pooling and
classification. Our approach simultaneously utilizes global feature descriptors
as well as local feature descriptors from images, to form a hybrid feature
descriptor corresponding to each image. We utilize DAISY features associated
with key points within images as our local feature descriptor and histogram of
oriented gradients (HOG) corresponding to an entire image as a global
descriptor. We make use of a bag-of-visual-words encoding and apply the Mini-Batch
K-Means algorithm to reduce the complexity of our feature encoding scheme. A
2-level pooling procedure is used to combine DAISY and HOG features
corresponding to each image. Finally, we experiment with a multi-class SVM
classifier with several kernels, in a cross-validation setting, and tabulate
our results on the fifteen scene categories dataset. The average accuracy of
our model was 76.4% in the case of a 40%-60% random split of images into
training and testing datasets respectively. The primary objective of this work
is to clearly outline the practical implementation of a basic
scene-recognition pipeline with reasonable accuracy, in Python, using
open-source libraries. A full implementation of the proposed model is available
in our github repository.
| no_new_dataset | 0.947866 |
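The pipeline lends itself to a compact sketch with scikit-image and scikit-learn. The following is an illustrative reconstruction, not the released implementation; the vocabulary size and the DAISY/HOG parameter values are assumptions, and equally sized grayscale images are assumed. Plain concatenation stands in for the paper's 2-level pooling.

```python
# Sketch: hybrid DAISY (local) + HOG (global) features with a
# bag-of-visual-words encoding and an SVM classifier.
import numpy as np
from skimage.feature import daisy, hog
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

K = 200  # visual vocabulary size (assumed)

def local_descriptors(image):
    d = daisy(image, step=32, radius=15, rings=2, histograms=6,
              orientations=8)          # grid of DAISY descriptors
    return d.reshape(-1, d.shape[-1])

def fit_vocabulary(images):
    return MiniBatchKMeans(n_clusters=K, random_state=0).fit(
        np.vstack([local_descriptors(im) for im in images]))

def encode(image, vocab):
    words = vocab.predict(local_descriptors(image))
    bovw = np.bincount(words, minlength=K).astype(float)
    bovw /= bovw.sum() + 1e-12         # normalized BoVW histogram
    g = hog(image, orientations=9, pixels_per_cell=(16, 16),
            cells_per_block=(2, 2))    # one global descriptor per image
    return np.concatenate([bovw, g])   # hybrid local + global feature

def train(images, labels):
    vocab = fit_vocabulary(images)
    X = np.array([encode(im, vocab) for im in images])
    return vocab, SVC(kernel="rbf").fit(X, labels)
```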
1406.7770 | Peter Duggins | Peter Duggins | A Psychologically-Motivated Model of Opinion Change with Applications to
American Politics | 18 pages, 10 figures. Keywords: Agent-Based Model, Opinion Dynamics,
Social Networks, Conformity, Polarization, Extremism | Journal of Artificial Societies and Social Simulation 20 (1) 13,
(2017) | 10.18564/jasss.3316 | null | cs.MA cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agent-based models are versatile tools for studying how societal opinion
change, including political polarization and cultural diffusion, emerges from
individual behavior. This study expands agents' psychological realism using
empirically-motivated rules governing interpersonal influence, commitment to
previous beliefs, and conformity in social contexts. Computational experiments
establish that these extensions produce three novel results: (a) sustained
strong diversity of opinions within the population, (b) opinion subcultures,
and (c) pluralistic ignorance. These phenomena arise from a combination of
agents' intolerance, susceptibility and conformity, with extremist agents and
social networks playing important roles. The distribution and dynamics of
simulated opinions reproduce two empirical datasets on Americans' political
opinions.
| [
{
"version": "v1",
"created": "Mon, 30 Jun 2014 15:12:57 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Sep 2016 20:16:43 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2017 19:59:31 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Duggins",
"Peter",
""
]
] | TITLE: A Psychologically-Motivated Model of Opinion Change with Applications to
American Politics
ABSTRACT: Agent-based models are versatile tools for studying how societal opinion
change, including political polarization and cultural diffusion, emerges from
individual behavior. This study expands agents' psychological realism using
empirically-motivated rules governing interpersonal influence, commitment to
previous beliefs, and conformity in social contexts. Computational experiments
establish that these extensions produce three novel results: (a) sustained
strong diversity of opinions within the population, (b) opinion subcultures,
and (c) pluralistic ignorance. These phenomena arise from a combination of
agents' intolerance, susceptibility and conformity, with extremist agents and
social networks playing important roles. The distribution and dynamics of
simulated opinions reproduce two empirical datasets on Americans' political
opinions.
| no_new_dataset | 0.947039 |
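For intuition, here is a toy agent-based sketch in the same spirit — susceptibility to influence, intolerance toward distant opinions, and conformity toward the expressed local norm. It is emphatically not the paper's model: the update rules, the random network, and all parameter ranges are invented for illustration only.

```python
# Toy opinion-dynamics ABM (illustrative rules and parameters).
import numpy as np

rng = np.random.default_rng(1)
N = 200
opinions = rng.uniform(-1, 1, N)           # private opinions
susceptibility = rng.uniform(0.1, 0.9, N)  # how strongly agents update
tolerance = rng.uniform(0.2, 1.0, N)       # reach of attractive influence
conformity = rng.uniform(0.0, 0.5, N)      # pull toward the local norm
neighbors = [rng.choice(N, size=8, replace=False) for _ in range(N)]

def step(opinions):
    new = opinions.copy()
    for i in range(N):
        j = rng.choice(neighbors[i])
        gap = opinions[j] - opinions[i]
        if abs(gap) < tolerance[i]:        # attraction within tolerance
            new[i] += susceptibility[i] * gap
        else:                              # repulsion beyond it
            new[i] -= 0.5 * susceptibility[i] * gap
        local_norm = opinions[neighbors[i]].mean()
        new[i] += conformity[i] * (local_norm - new[i])  # conformity pull
    return np.clip(new, -1, 1)

for _ in range(500):
    opinions = step(opinions)
```

Even this crude variant exhibits sustained opinion diversity and local subcultures for some parameter ranges, which is the qualitative behavior the paper analyzes rigorously.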
1511.06252 | Matus Medo | Fei Yu, An Zeng, Sebastien Gillard, Matus Medo | Network-based recommendation algorithms: A review | review article; 16 pages, 4 figures, 4 tables | Physica A 452, 192 (2016) | 10.1016/j.physa.2016.02.021 | null | cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems are a vital tool that helps us to overcome the
information overload problem. They are being used by most e-commerce web sites
and attract the interest of a broad scientific community. A recommender system
uses data on users' past preferences to choose new items that might be
appreciated by a given individual user. While many approaches to recommendation
exist, the approach based on a network representation of the input data has
gained considerable attention in the past. We review here a broad range of
network-based recommendation algorithms and for the first time compare their
performance on three distinct real datasets. We present recommendation topics
that go beyond the mere question of which algorithm to use - such as the
possible influence of recommendation on the evolution of systems that use it -
and finally discuss open research directions and challenges.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 16:51:44 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Yu",
"Fei",
""
],
[
"Zeng",
"An",
""
],
[
"Gillard",
"Sebastien",
""
],
[
"Medo",
"Matus",
""
]
] | TITLE: Network-based recommendation algorithms: A review
ABSTRACT: Recommender systems are a vital tool that helps us to overcome the
information overload problem. They are being used by most e-commerce web sites
and attract the interest of a broad scientific community. A recommender system
uses data on users' past preferences to choose new items that might be
appreciated by a given individual user. While many approaches to recommendation
exist, the approach based on a network representation of the input data has
gained considerable attention in the past. We review here a broad range of
network-based recommendation algorithms and for the first time compare their
performance on three distinct real datasets. We present recommendation topics
that go beyond the mere question of which algorithm to use - such as the
possible influence of recommendation on the evolution of systems that use it -
and finally discuss open research directions and challenges.
| no_new_dataset | 0.942295 |
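As a concrete example of the network-based family such reviews compare, here is the textbook probabilistic-spreading (ProbS, or mass-diffusion) recommender on a bipartite user-item graph. This is a generic sketch, not code from the review; it implements the two-step resource diffusion items → users → items.

```python
# ProbS (mass diffusion) recommendation scores; generic sketch.
import numpy as np

def probs_scores(A):
    """A: binary user-item matrix of shape (n_users, n_items)."""
    ku = A.sum(axis=1).astype(float); ku[ku == 0] = 1  # user degrees
    ki = A.sum(axis=0).astype(float); ki[ki == 0] = 1  # item degrees
    # item-to-item resource-transfer matrix: W = A^T D_u^{-1} A D_i^{-1}
    W = (A / ku[:, None]).T @ (A / ki[None, :])
    scores = A @ W.T              # diffuse each user's collected items
    scores[A > 0] = -np.inf       # do not re-recommend known items
    return scores

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
print(probs_scores(A).round(3))  # rank remaining items per user
```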
1603.06571 | Oren Barkan | Oren Barkan | Bayesian Neural Word Embedding | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, several works in the domain of natural language processing
presented successful methods for word embedding. Among them, the Skip-Gram with
negative sampling, known also as word2vec, advanced the state-of-the-art of
various linguistic tasks. In this paper, we propose a scalable Bayesian neural
word embedding algorithm. The algorithm relies on a Variational Bayes solution
for the Skip-Gram objective and a detailed step-by-step description is
provided. We present experimental results that demonstrate the performance of
the proposed algorithm for word analogy and similarity tasks on six different
datasets and show it is competitive with the original Skip-Gram method.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2016 16:32:06 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jun 2016 16:49:11 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2017 20:45:33 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Barkan",
"Oren",
""
]
] | TITLE: Bayesian Neural Word Embedding
ABSTRACT: Recently, several works in the domain of natural language processing
presented successful methods for word embedding. Among them, the Skip-Gram with
negative sampling, known also as word2vec, advanced the state-of-the-art of
various linguistic tasks. In this paper, we propose a scalable Bayesian neural
word embedding algorithm. The algorithm relies on a Variational Bayes solution
for the Skip-Gram objective and a detailed step-by-step description is
provided. We present experimental results that demonstrate the performance of
the proposed algorithm for word analogy and similarity tasks on six different
datasets and show it is competitive with the original Skip-Gram method.
| no_new_dataset | 0.950549 |
1603.08983 | Alex Graves | Alex Graves | Adaptive Computation Time for Recurrent Neural Networks | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces Adaptive Computation Time (ACT), an algorithm that
allows recurrent neural networks to learn how many computational steps to take
between receiving an input and emitting an output. ACT requires minimal changes
to the network architecture, is deterministic and differentiable, and does not
add any noise to the parameter gradients. Experimental results are provided for
four synthetic problems: determining the parity of binary vectors, applying
binary logic operations, adding integers, and sorting real numbers. Overall,
performance is dramatically improved by the use of ACT, which successfully
adapts the number of computational steps to the requirements of the problem. We
also present character-level language modelling results on the Hutter prize
Wikipedia dataset. In this case ACT does not yield large gains in performance;
however it does provide intriguing insight into the structure of the data, with
more computation allocated to harder-to-predict transitions, such as spaces
between words and ends of sentences. This suggests that ACT or other adaptive
computation methods could provide a generic method for inferring segment
boundaries in sequence data.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 22:09:00 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2016 10:27:31 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Apr 2016 18:38:25 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Apr 2016 19:10:22 GMT"
},
{
"version": "v5",
"created": "Thu, 2 Feb 2017 10:09:32 GMT"
},
{
"version": "v6",
"created": "Tue, 21 Feb 2017 16:21:21 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Graves",
"Alex",
""
]
] | TITLE: Adaptive Computation Time for Recurrent Neural Networks
ABSTRACT: This paper introduces Adaptive Computation Time (ACT), an algorithm that
allows recurrent neural networks to learn how many computational steps to take
between receiving an input and emitting an output. ACT requires minimal changes
to the network architecture, is deterministic and differentiable, and does not
add any noise to the parameter gradients. Experimental results are provided for
four synthetic problems: determining the parity of binary vectors, applying
binary logic operations, adding integers, and sorting real numbers. Overall,
performance is dramatically improved by the use of ACT, which successfully
adapts the number of computational steps to the requirements of the problem. We
also present character-level language modelling results on the Hutter prize
Wikipedia dataset. In this case ACT does not yield large gains in performance;
however it does provide intriguing insight into the structure of the data, with
more computation allocated to harder-to-predict transitions, such as spaces
between words and ends of sentences. This suggests that ACT or other adaptive
computation methods could provide a generic method for inferring segment
boundaries in sequence data.
| no_new_dataset | 0.94428 |
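The halting mechanism can be sketched compactly. The following is a simplified, framework-free illustration of the ACT weighting rule: the state is updated until the cumulative halting probability reaches 1 - eps, and the emitted state is the halting-probability-weighted mean. The update and halting functions are assumed given, and the ponder-cost term used during training is omitted.

```python
# Simplified ACT halting for one input step (illustrative).
import numpy as np

def act_step(state, update_fn, halt_fn, eps=0.01, max_ponder=10):
    states, ps = [], []
    cum = 0.0
    for n in range(max_ponder):
        state = update_fn(state)
        p = halt_fn(state)               # scalar halting prob in (0, 1)
        if cum + p >= 1 - eps or n == max_ponder - 1:
            ps.append(1.0 - cum)         # remainder, so weights sum to 1
            states.append(state)
            break
        ps.append(p)
        states.append(state)
        cum += p
    ps = np.array(ps)
    return (ps[:, None] * np.stack(states)).sum(axis=0)  # weighted state

# toy demo with a tanh update and a sigmoid halting unit (assumptions)
out = act_step(np.zeros(4),
               update_fn=lambda s: np.tanh(s + 0.5),
               halt_fn=lambda s: 1 / (1 + np.exp(-s.mean())))
```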
1605.05721 | Ping Li | Ping Li | Linearized GMM Kernels and Normalized Random Fourier Features | null | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The method of "random Fourier features (RFF)" has become a popular tool for
approximating the "radial basis function (RBF)" kernel. The variance of RFF is
actually large. Interestingly, the variance can be substantially reduced by a
simple normalization step as we theoretically demonstrate. We name the improved
scheme the "normalized RFF (NRFF)".
We also propose the "generalized min-max (GMM)" kernel as a measure of data
similarity. GMM is positive definite as there is an associated hashing method
named "generalized consistent weighted sampling (GCWS)" which linearizes this
nonlinear kernel. We provide an extensive empirical evaluation of the RBF
kernel and the GMM kernel on more than 50 publicly available datasets. For a
majority of the datasets, the (tuning-free) GMM kernel outperforms the
best-tuned RBF kernel.
We conduct extensive experiments for comparing the linearized RBF kernel
using NRFF with the linearized GMM kernel using GCWS. We observe that, to reach
a comparable classification accuracy, GCWS typically requires substantially
fewer samples than NRFF, even on datasets where the original RBF kernel
outperforms the original GMM kernel. The empirical success of GCWS (compared to
NRFF) can also be explained from a theoretical perspective. Firstly, the
relative variance (normalized by the squared expectation) of GCWS is
substantially smaller than that of NRFF, except for the very high similarity
region (where the variances of both methods are close to zero). Secondly, if we
make a model assumption on the data, we can show analytically that GCWS
exhibits much smaller variance than NRFF for estimating the same object (e.g.,
the RBF kernel), except for the very high similarity region.
| [
{
"version": "v1",
"created": "Wed, 18 May 2016 19:54:22 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2016 19:51:39 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Nov 2016 18:42:09 GMT"
},
{
"version": "v4",
"created": "Tue, 21 Feb 2017 17:11:48 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: Linearized GMM Kernels and Normalized Random Fourier Features
ABSTRACT: The method of "random Fourier features (RFF)" has become a popular tool for
approximating the "radial basis function (RBF)" kernel. The variance of RFF is
actually large. Interestingly, the variance can be substantially reduced by a
simple normalization step as we theoretically demonstrate. We name the improved
scheme the "normalized RFF (NRFF)".
We also propose the "generalized min-max (GMM)" kernel as a measure of data
similarity. GMM is positive definite as there is an associated hashing method
named "generalized consistent weighted sampling (GCWS)" which linearizes this
nonlinear kernel. We provide an extensive empirical evaluation of the RBF
kernel and the GMM kernel on more than 50 publicly available datasets. For a
majority of the datasets, the (tuning-free) GMM kernel outperforms the
best-tuned RBF kernel.
We conduct extensive experiments for comparing the linearized RBF kernel
using NRFF with the linearized GMM kernel using GCWS. We observe that, to reach
a comparable classification accuracy, GCWS typically requires substantially
fewer samples than NRFF, even on datasets where the original RBF kernel
outperforms the original GMM kernel. The empirical success of GCWS (compared to
NRFF) can also be explained from a theoretical perspective. Firstly, the
relative variance (normalized by the squared expectation) of GCWS is
substantially smaller than that of NRFF, except for the very high similarity
region (where the variances of both methods are close to zero). Secondly, if we
make a model assumption on the data, we can show analytically that GCWS
exhibits much smaller variance than NRFF for estimating the same object (e.g.,
the RBF kernel), except for the very high similarity region.
| no_new_dataset | 0.945248 |
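The RFF construction and the normalization step are easy to make concrete. The sketch below follows the standard random Fourier feature recipe for the RBF kernel exp(-gamma ||x - y||^2); the NRFF step simply projects each feature vector onto the unit sphere, which is the normalization the abstract credits with the variance reduction. Dimensions and parameters are illustrative.

```python
# Random Fourier features (RFF) and the normalized variant (NRFF).
import numpy as np

def rff(X, D=512, gamma=1.0, seed=0):
    """Map X (n x d) to D dims so that z(x)·z(y) ≈ exp(-gamma||x-y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def nrff(X, **kw):
    """Normalized RFF: unit-norm feature vectors reduce estimator variance."""
    Z = rff(X, **kw)
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = nrff(X, D=2048, gamma=0.5)
K_hat = Z @ Z.T  # ≈ RBF kernel; diagonal is exactly 1 after normalization
```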
1606.01341 | Sonse Shimaoka | Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel | Neural Architectures for Fine-grained Entity Type Classification | 10 pages, 3 figures, accepted at EACL2017 conference | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this work, we investigate several neural network architectures for
fine-grained entity type classification. Particularly, we consider extensions
to a recently proposed attentive neural architecture and make three key
contributions. Previous work on attentive neural architectures does not consider
hand-crafted features; we combine learnt and hand-crafted features and observe
that they complement each other. Additionally, through quantitative analysis we
establish that the attention mechanism is capable of learning to attend over
syntactic heads and the phrase containing the mention, where both are known
strong hand-crafted features for our task. We enable parameter sharing through
a hierarchical label encoding method that, in low-dimensional projections, shows
clear clusters for each type hierarchy. Lastly, despite using the same
evaluation dataset, the literature frequently compares models trained using
different data. We establish that the choice of training data has a drastic
impact on performance, with decreases by as much as 9.85% loose micro F1 score
for a previously proposed method. Despite this, our best model achieves
state-of-the-art results with 75.36% loose micro F1 score on the well-
established FIGER (GOLD) dataset.
| [
{
"version": "v1",
"created": "Sat, 4 Jun 2016 07:52:22 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2017 06:49:42 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Shimaoka",
"Sonse",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Inui",
"Kentaro",
""
],
[
"Riedel",
"Sebastian",
""
]
] | TITLE: Neural Architectures for Fine-grained Entity Type Classification
ABSTRACT: In this work, we investigate several neural network architectures for
fine-grained entity type classification. Particularly, we consider extensions
to a recently proposed attentive neural architecture and make three key
contributions. Previous work on attentive neural architectures does not consider
hand-crafted features; we combine learnt and hand-crafted features and observe
that they complement each other. Additionally, through quantitative analysis we
establish that the attention mechanism is capable of learning to attend over
syntactic heads and the phrase containing the mention, where both are known
strong hand-crafted features for our task. We enable parameter sharing through
a hierarchical label encoding method that, in low-dimensional projections, shows
clear clusters for each type hierarchy. Lastly, despite using the same
evaluation dataset, the literature frequently compares models trained using
different data. We establish that the choice of training data has a drastic
impact on performance, with decreases by as much as 9.85% loose micro F1 score
for a previously proposed method. Despite this, our best model achieves
state-of-the-art results with 75.36% loose micro F1 score on the well-
established FIGER (GOLD) dataset.
| no_new_dataset | 0.947235 |
1609.03068 | Filippo Maria Bianchi | Filippo Maria Bianchi, Lorenzo Livi, Cesare Alippi and Robert Jenssen | Multiplex visibility graphs to investigate recurrent neural networks
dynamics | null | null | 10.1038/srep44037 | null | cs.NE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recurrent neural network (RNN) is a universal approximator of dynamical
systems, whose performance often depends on sensitive hyperparameters. Tuning
of such hyperparameters may be difficult and, typically, based on a
trial-and-error approach. In this work, we adopt a graph-based framework to
interpret and characterize the internal RNN dynamics. Through this insight, we
are able to design a principled unsupervised method to derive configurations
with maximized performances, in terms of prediction error and memory capacity.
In particular, we propose to model time series of neurons activations with the
recently introduced horizontal visibility graphs, whose topological properties
reflect important dynamical features of the underlying dynamic system.
Successively, each graph becomes a layer of a larger structure, called
multiplex. We show that topological properties of such a multiplex reflect
important features of RNN dynamics and are used to guide the tuning procedure.
To validate the proposed method, we consider a class of RNNs called echo state
networks. We perform experiments and discuss results on several benchmarks and
a real-world dataset of call data records.
| [
{
"version": "v1",
"created": "Sat, 10 Sep 2016 16:12:27 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2016 09:01:14 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Jan 2017 17:47:44 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Bianchi",
"Filippo Maria",
""
],
[
"Livi",
"Lorenzo",
""
],
[
"Alippi",
"Cesare",
""
],
[
"Jenssen",
"Robert",
""
]
] | TITLE: Multiplex visibility graphs to investigate recurrent neural networks
dynamics
ABSTRACT: A recurrent neural network (RNN) is a universal approximator of dynamical
systems, whose performance often depends on sensitive hyperparameters. Tuning
of such hyperparameters may be difficult and, typically, based on a
trial-and-error approach. In this work, we adopt a graph-based framework to
interpret and characterize the internal RNN dynamics. Through this insight, we
are able to design a principled unsupervised method to derive configurations
with maximized performances, in terms of prediction error and memory capacity.
In particular, we propose to model time series of neuron activations with the
recently introduced horizontal visibility graphs, whose topological properties
reflect important dynamical features of the underlying dynamical system.
Successively, each graph becomes a layer of a larger structure, called
multiplex. We show that topological properties of such a multiplex reflect
important features of RNN dynamics and are used to guide the tuning procedure.
To validate the proposed method, we consider a class of RNNs called echo state
networks. We perform experiments and discuss results on several benchmarks and
a real-world dataset of call data records.
| no_new_dataset | 0.9463 |
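A horizontal visibility graph is simple to construct: two time indices are linked iff every value strictly between them lies below both endpoints. The sketch below is a direct implementation of that rule; in the multiplex construction described above, each neuron's activation series would yield one such graph as a layer.

```python
# Horizontal visibility graph (HVG) of a scalar time series.
import numpy as np

def horizontal_visibility_graph(x):
    """Nodes are time indices; (i, j) is an edge iff all values strictly
    between i and j are below min(x[i], x[j]). O(n^2) sketch."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    edges = []
    for i in range(n - 1):
        run_max = -np.inf            # max of values strictly between i and j
        for j in range(i + 1, n):
            if run_max < min(x[i], x[j]):
                edges.append((i, j))
            run_max = max(run_max, x[j])
            if run_max >= x[i]:
                break                # nothing beyond j can see i
    return edges

print(horizontal_visibility_graph([1.0, 0.5, 2.0, 0.3, 1.5]))
```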
1702.06151 | Tal Yarkoni | Quinten McNamara, Alejandro de la Vega, and Tal Yarkoni | Developing a comprehensive framework for multimodal feature extraction | null | null | null | null | cs.CV cs.IR cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature extraction is a critical component of many applied data science
workflows. In recent years, rapid advances in artificial intelligence and
machine learning have led to an explosion of feature extraction tools and
services that allow data scientists to cheaply and effectively annotate their
data along a vast array of dimensions---ranging from detecting faces in images
to analyzing the sentiment expressed in coherent text. Unfortunately, the
proliferation of powerful feature extraction services has been mirrored by a
corresponding expansion in the number of distinct interfaces to feature
extraction services. In a world where nearly every new service has its own API,
documentation, and/or client library, data scientists who need to combine
diverse features obtained from multiple sources are often forced to write and
maintain ever more elaborate feature extraction pipelines. To address this
challenge, we introduce a new open-source framework for comprehensive
multimodal feature extraction. Pliers is an open-source Python package that
supports standardized annotation of diverse data types (video, images, audio,
and text), and is expressly designed with both ease-of-use and extensibility in mind.
Users can apply a wide range of pre-existing feature extraction tools to their
data in just a few lines of Python code, and can also easily add their own
custom extractors by writing modular classes. A graph-based API enables rapid
development of complex feature extraction pipelines that output results in a
single, standardized format. We describe the package's architecture, detail its
major advantages over previous feature extraction toolboxes, and use a sample
application to a large functional MRI dataset to illustrate how pliers can
significantly reduce the time and effort required to construct sophisticated
feature extraction workflows while increasing code clarity and maintainability.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 19:22:21 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"McNamara",
"Quinten",
""
],
[
"de la Vega",
"Alejandro",
""
],
[
"Yarkoni",
"Tal",
""
]
] | TITLE: Developing a comprehensive framework for multimodal feature extraction
ABSTRACT: Feature extraction is a critical component of many applied data science
workflows. In recent years, rapid advances in artificial intelligence and
machine learning have led to an explosion of feature extraction tools and
services that allow data scientists to cheaply and effectively annotate their
data along a vast array of dimensions---ranging from detecting faces in images
to analyzing the sentiment expressed in coherent text. Unfortunately, the
proliferation of powerful feature extraction services has been mirrored by a
corresponding expansion in the number of distinct interfaces to feature
extraction services. In a world where nearly every new service has its own API,
documentation, and/or client library, data scientists who need to combine
diverse features obtained from multiple sources are often forced to write and
maintain ever more elaborate feature extraction pipelines. To address this
challenge, we introduce a new open-source framework for comprehensive
multimodal feature extraction. Pliers is an open-source Python package that
supports standardized annotation of diverse data types (video, images, audio,
and text), and is expressly designed with both ease-of-use and extensibility in mind.
Users can apply a wide range of pre-existing feature extraction tools to their
data in just a few lines of Python code, and can also easily add their own
custom extractors by writing modular classes. A graph-based API enables rapid
development of complex feature extraction pipelines that output results in a
single, standardized format. We describe the package's architecture, detail its
major advantages over previous feature extraction toolboxes, and use a sample
application to a large functional MRI dataset to illustrate how pliers can
significantly reduce the time and effort required to construct sophisticated
feature extraction workflows while increasing code clarity and maintainability.
| no_new_dataset | 0.943504 |
1702.06212 | Rui Yao | Rui Yao, Guosheng Lin, Qinfeng Shi, Damith Ranasinghe | Efficient Dense Labeling of Human Activity Sequences from Wearables
using Fully Convolutional Networks | 7 pages | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing human activities in a sequence is a challenging area of research
in ubiquitous computing. Most approaches use a fixed size sliding window over
consecutive samples to extract features---either handcrafted or learned
features---and predict a single label for all samples in the window. Two key
problems emanate from this approach: i) the samples in one window may not
always share the same label. Consequently, using one label for all samples
within a window inevitably leads to loss of information; ii) the testing phase
is constrained by the window size selected during training while the best
window size is difficult to tune in practice. We propose an efficient algorithm
that can predict the label of each sample, which we call dense labeling, in a
sequence of human activities of arbitrary length using a fully convolutional
network. In particular, our approach overcomes the problems posed by the
sliding window step. Additionally, our algorithm learns both the features and
classifier automatically. We release a new daily activity dataset based on a
wearable sensor with hospitalized patients. We conduct extensive experiments
and demonstrate that our proposed approach is able to outperform the
state-of-the-art methods in terms of classification and label misalignment measures on
three challenging datasets: Opportunity, Hand Gesture, and our new dataset.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 23:44:54 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Yao",
"Rui",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Shi",
"Qinfeng",
""
],
[
"Ranasinghe",
"Damith",
""
]
] | TITLE: Efficient Dense Labeling of Human Activity Sequences from Wearables
using Fully Convolutional Networks
ABSTRACT: Recognizing human activities in a sequence is a challenging area of research
in ubiquitous computing. Most approaches use a fixed size sliding window over
consecutive samples to extract features---either handcrafted or learned
features---and predict a single label for all samples in the window. Two key
problems emanate from this approach: i) the samples in one window may not
always share the same label. Consequently, using one label for all samples
within a window inevitably leads to loss of information; ii) the testing phase
is constrained by the window size selected during training while the best
window size is difficult to tune in practice. We propose an efficient algorithm
that can predict the label of each sample, which we call dense labeling, in a
sequence of human activities of arbitrary length using a fully convolutional
network. In particular, our approach overcomes the problems posed by the
sliding window step. Additionally, our algorithm learns both the features and
classifier automatically. We release a new daily activity dataset based on a
wearable sensor with hospitalized patients. We conduct extensive experiments
and demonstrate that our proposed approach is able to outperform the
state-of-the-art methods in terms of classification and label misalignment measures on
three challenging datasets: Opportunity, Hand Gesture, and our new dataset.
| new_dataset | 0.958924 |
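A minimal fully convolutional labeler for variable-length sensor sequences might look as follows (PyTorch; the layer sizes and kernel widths are assumptions, not the paper's architecture). Because there are no fully connected layers, the same network emits one label per sample for a sequence of arbitrary length, which is what removes the fixed-window constraint.

```python
# Sketch: 1D fully convolutional network for dense per-sample labeling.
import torch
import torch.nn as nn

class DenseLabeler(nn.Module):
    def __init__(self, n_channels, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, n_classes, kernel_size=1),  # per-sample logits
        )

    def forward(self, x):        # x: (batch, channels, time), any length
        return self.net(x)       # (batch, n_classes, time)

model = DenseLabeler(n_channels=3, n_classes=5)
x = torch.randn(2, 3, 400)                        # two 400-sample sequences
logits = model(x)                                 # one prediction per sample
loss = nn.CrossEntropyLoss()(logits, torch.randint(5, (2, 400)))
```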
1702.06228 | Yanwei Fu | Yu-ting Qiang, Yanwei Fu, Xiao Yu, Yanwen Guo, Zhi-Hua Zhou and Leonid
Sigal | Learning to Generate Posters of Scientific Papers by Probabilistic
Graphical Models | 10 pages, submission to IEEE TPAMI. arXiv admin note: text overlap
with arXiv:1604.01219 | null | null | null | cs.CV cs.GR cs.HC cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Researchers often summarize their work in the form of scientific posters.
Posters provide a coherent and efficient way to convey core ideas expressed in
scientific papers. Generating a good scientific poster, however, is a complex
and time-consuming cognitive task, since such posters need to be readable,
informative, and visually aesthetic. In this paper, for the first time, we
study the challenging problem of learning to generate posters from scientific
papers. To this end, a data-driven framework, that utilizes graphical models,
is proposed. Specifically, given content to display, the key elements of a good
poster, including attributes of each panel and arrangements of graphical
elements are learned and inferred from data. During the inference stage, an MAP
inference framework is employed to incorporate some design principles. In order
to bridge the gap between panel attributes and the composition within each
panel, we also propose a recursive page splitting algorithm to generate the
panel layout for a poster. To learn and validate our model, we collect and
release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which
consists of scientific papers and corresponding posters with exhaustively
labelled panels and attributes. Qualitative and quantitative results indicate
the effectiveness of our approach.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 01:02:56 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Qiang",
"Yu-ting",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Yu",
"Xiao",
""
],
[
"Guo",
"Yanwen",
""
],
[
"Zhou",
"Zhi-Hua",
""
],
[
"Sigal",
"Leonid",
""
]
] | TITLE: Learning to Generate Posters of Scientific Papers by Probabilistic
Graphical Models
ABSTRACT: Researchers often summarize their work in the form of scientific posters.
Posters provide a coherent and efficient way to convey core ideas expressed in
scientific papers. Generating a good scientific poster, however, is a complex
and time-consuming cognitive task, since such posters need to be readable,
informative, and visually aesthetic. In this paper, for the first time, we
study the challenging problem of learning to generate posters from scientific
papers. To this end, a data-driven framework, that utilizes graphical models,
is proposed. Specifically, given content to display, the key elements of a good
poster, including attributes of each panel and arrangements of graphical
elements are learned and inferred from data. During the inference stage, an MAP
inference framework is employed to incorporate some design principles. In order
to bridge the gap between panel attributes and the composition within each
panel, we also propose a recursive page splitting algorithm to generate the
panel layout for a poster. To learn and validate our model, we collect and
release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which
consists of scientific papers and corresponding posters with exhaustively
labelled panels and attributes. Qualitative and quantitative results indicate
the effectiveness of our approach.
| new_dataset | 0.959837 |
1702.06298 | Md Saiful Islam | Md. Saiful Islam, Wenny Rahayu, Chengfei Liu, Tarique Anwar and Bela
Stantic | Computing Influence of a Product through Uncertain Reverse Skyline | 12 pages, 3 tables, 12 figures, submitted to SSDBM 2017 | null | null | null | cs.DB cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the influence of a product is crucially important for making
informed business decisions. This paper introduces a new type of skyline
queries, called uncertain reverse skyline, for measuring the influence of a
probabilistic product in uncertain data settings. More specifically, given a
dataset of probabilistic products P and a set of customers C, an uncertain
reverse skyline of a probabilistic product q retrieves all customers c in C
that include q as one of their preferred products. We present efficient
pruning ideas and techniques for processing the uncertain reverse skyline query
of a probabilistic product using R-Tree data index. We also present an
efficient parallel approach to compute the uncertain reverse skyline and
influence score of a probabilistic product. Our approach significantly
outperforms the baseline approach derived from the existing literature. The
efficiency of our approach is demonstrated by conducting extensive experiments
with both real and synthetic datasets.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 09:06:04 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Islam",
"Md. Saiful",
""
],
[
"Rahayu",
"Wenny",
""
],
[
"Liu",
"Chengfei",
""
],
[
"Anwar",
"Tarique",
""
],
[
"Stantic",
"Bela",
""
]
] | TITLE: Computing Influence of a Product through Uncertain Reverse Skyline
ABSTRACT: Understanding the influence of a product is crucially important for making
informed business decisions. This paper introduces a new type of skyline
queries, called uncertain reverse skyline, for measuring the influence of a
probabilistic product in uncertain data settings. More specifically, given a
dataset of probabilistic products P and a set of customers C, an uncertain
reverse skyline of a probabilistic product q retrieves all customers c in C
that include q as one of their preferred products. We present efficient
pruning ideas and techniques for processing the uncertain reverse skyline query
of a probabilistic product using R-Tree data index. We also present an
efficient parallel approach to compute the uncertain reverse skyline and
influence score of a probabilistic product. Our approach significantly
outperforms the baseline approach derived from the existing literature. The
efficiency of our approach is demonstrated by conducting extensive experiments
with both real and synthetic datasets.
| no_new_dataset | 0.944125 |
1702.06336 | Miroslav Vodol\'an | Miroslav Vodol\'an, Rudolf Kadlec, Jan Kleindienst | Hybrid Dialog State Tracker with ASR Features | Accepted to EACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a hybrid dialog state tracker enhanced by trainable
Spoken Language Understanding (SLU) for slot-filling dialog systems. Our
architecture is inspired by previously proposed neural-network-based
belief-tracking systems. In addition, we extended some parts of our modular
architecture with differentiable rules to allow end-to-end training. We
hypothesize that these rules allow our tracker to generalize better than pure
machine-learning based systems. For evaluation, we used the Dialog State
Tracking Challenge (DSTC) 2 dataset - a popular belief tracking testbed with
dialogs from a restaurant information system. To our knowledge, our hybrid
tracker sets a new state-of-the-art result in three out of four categories
within the DSTC2.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 11:34:14 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Vodolán",
"Miroslav",
""
],
[
"Kadlec",
"Rudolf",
""
],
[
"Kleindienst",
"Jan",
""
]
] | TITLE: Hybrid Dialog State Tracker with ASR Features
ABSTRACT: This paper presents a hybrid dialog state tracker enhanced by trainable
Spoken Language Understanding (SLU) for slot-filling dialog systems. Our
architecture is inspired by previously proposed neural-network-based
belief-tracking systems. In addition, we extended some parts of our modular
architecture with differentiable rules to allow end-to-end training. We
hypothesize that these rules allow our tracker to generalize better than pure
machine-learning based systems. For evaluation, we used the Dialog State
Tracking Challenge (DSTC) 2 dataset - a popular belief tracking testbed with
dialogs from a restaurant information system. To our knowledge, our hybrid
tracker sets a new state-of-the-art result in three out of four categories
within the DSTC2.
| no_new_dataset | 0.920361 |
1702.06354 | Makoto Yamada | Makoto Yamada, Song Liu, Samuel Kaski | Interpreting Outliers: Localized Logistic Regression for Density Ratio
Estimation | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an inlier-based outlier detection method capable of both
identifying the outliers and explaining why they are outliers, by identifying
the outlier-specific features. Specifically, we employ an inlier-based outlier
detection criterion, which uses the ratio of inlier and test probability
densities as a measure of plausibility of being an outlier. For estimating the
density ratio function, we propose a localized logistic regression algorithm.
Thanks to the locality of the model, variable selection can be
outlier-specific, and will help interpret why points are outliers in a
high-dimensional space. Through synthetic experiments, we show that the
proposed algorithm can successfully detect the important features for outliers.
Moreover, we show that the proposed algorithm tends to outperform existing
algorithms in benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 12:37:35 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Yamada",
"Makoto",
""
],
[
"Liu",
"Song",
""
],
[
"Kaski",
"Samuel",
""
]
] | TITLE: Interpreting Outliers: Localized Logistic Regression for Density Ratio
Estimation
ABSTRACT: We propose an inlier-based outlier detection method capable of both
identifying the outliers and explaining why they are outliers, by identifying
the outlier-specific features. Specifically, we employ an inlier-based outlier
detection criterion, which uses the ratio of inlier and test probability
densities as a measure of plausibility of being an outlier. For estimating the
density ratio function, we propose a localized logistic regression algorithm.
Thanks to the locality of the model, variable selection can be
outlier-specific, and will help interpret why points are outliers in a
high-dimensional space. Through synthetic experiments, we show that the
proposed algorithm can successfully detect the important features for outliers.
Moreover, we show that the proposed algorithm tends to outperform existing
algorithms in benchmark datasets.
| no_new_dataset | 0.946399 |
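The density-ratio idea reduces to a probabilistic classifier between inlier and test samples: by Bayes' rule, the class-posterior odds rescaled by the sample-size ratio equal p_inlier(x)/p_test(x). The sketch below implements this plain (non-localized) variant with ordinary logistic regression; the paper's contribution — the localized model that enables outlier-specific variable selection — is deliberately omitted.

```python
# Density-ratio outlier scores via a logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_scores(X_inlier, X_test):
    X = np.vstack([X_inlier, X_test])
    y = np.r_[np.ones(len(X_inlier)), np.zeros(len(X_test))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(X_test)[:, 1]         # P(inlier | x)
    prior = len(X_inlier) / len(X_test)         # class-prior correction
    ratio = (p / (1 - p + 1e-12)) / prior       # ≈ p_in(x) / p_te(x)
    return ratio                                # small => plausible outlier
```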
1702.06408 | Vikram Venkatraghavan | Vikram Venkatraghavan, Esther Bron, Wiro Niessen, Stefan Klein | A Discriminative Event Based Model for Alzheimer's Disease Progression
Modeling | Information Processing in Medical Imaging (IPMI), 2017 | null | null | null | cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The event-based model (EBM) for data-driven disease progression modeling
estimates the sequence in which biomarkers for a disease become abnormal. This
helps in understanding the dynamics of disease progression and facilitates
early diagnosis by staging patients on a disease progression timeline. Existing
EBM methods are all generative in nature. In this work we propose a novel
discriminative approach to EBM, which is shown to be more accurate as well as
computationally more efficient than existing state-of-the-art EBM methods. The
method first estimates for each subject an approximate ordering of events, by
ranking the posterior probabilities of individual biomarkers being abnormal.
Subsequently, the central ordering over all subjects is estimated by fitting a
generalized Mallows model to these approximate subject-specific orderings based
on a novel probabilistic Kendall's Tau distance. To evaluate the accuracy, we
performed extensive experiments on synthetic data simulating the progression of
Alzheimer's disease. Subsequently, the method was applied to the Alzheimer's
Disease Neuroimaging Initiative (ADNI) data to estimate the central event
ordering in the dataset. The experiments benchmark the accuracy of the new
model under various conditions and compare it with existing state-of-the-art
EBM methods. The results indicate that discriminative EBM could be a simple and
elegant approach to disease progression modeling.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 14:41:15 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Venkatraghavan",
"Vikram",
""
],
[
"Bron",
"Esther",
""
],
[
"Niessen",
"Wiro",
""
],
[
"Klein",
"Stefan",
""
]
] | TITLE: A Discriminative Event Based Model for Alzheimer's Disease Progression
Modeling
ABSTRACT: The event-based model (EBM) for data-driven disease progression modeling
estimates the sequence in which biomarkers for a disease become abnormal. This
helps in understanding the dynamics of disease progression and facilitates
early diagnosis by staging patients on a disease progression timeline. Existing
EBM methods are all generative in nature. In this work we propose a novel
discriminative approach to EBM, which is shown to be more accurate as well as
computationally more efficient than existing state-of-the-art EBM methods. The
method first estimates for each subject an approximate ordering of events, by
ranking the posterior probabilities of individual biomarkers being abnormal.
Subsequently, the central ordering over all subjects is estimated by fitting a
generalized Mallows model to these approximate subject-specific orderings based
on a novel probabilistic Kendall's Tau distance. To evaluate the accuracy, we
performed extensive experiments on synthetic data simulating the progression of
Alzheimer's disease. Subsequently, the method was applied to the Alzheimer's
Disease Neuroimaging Initiative (ADNI) data to estimate the central event
ordering in the dataset. The experiments benchmark the accuracy of the new
model under various conditions and compare it with existing state-of-the-art
EBM methods. The results indicate that discriminative EBM could be a simple and
elegant approach to disease progression modeling.
| no_new_dataset | 0.948058 |
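The first stage of the method is easy to illustrate: rank biomarkers per subject by their posterior probability of being abnormal, then aggregate the orderings. The sketch below uses simple rank averaging (a Borda count) as a stand-in for fitting a generalized Mallows model, and scipy's ordinary Kendall's tau in place of the paper's probabilistic variant.

```python
# Aggregating per-subject event orderings into a central ordering.
import numpy as np
from scipy.stats import kendalltau

def central_ordering(P):
    """P: (n_subjects, n_biomarkers) posteriors of being abnormal.
    Returns biomarker indices, earliest (most abnormal) event first."""
    ranks = np.argsort(np.argsort(-P, axis=1), axis=1)  # per-subject ranks
    return np.argsort(ranks.mean(axis=0))               # average-rank order

P = np.random.default_rng(0).random((100, 5))           # toy posteriors
center = central_ordering(P)
# agreement of one subject's ordering with the central one
r0 = np.argsort(np.argsort(-P[0]))   # subject 0's rank of each biomarker
rc = np.argsort(center)              # central rank of each biomarker
tau, _ = kendalltau(r0, rc)
```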
1702.06461 | Dmitrij Schlesinger | Dmitrij Schlesinger and Florian Jug and Gene Myers and Carsten Rother
and Dagmar Kainm\"uller | Crowd Sourcing Image Segmentation with iaSTAPLE | Accepted to ISBI2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel label fusion technique as well as a crowdsourcing protocol
to efficiently obtain accurate epithelial cell segmentations from non-expert
crowd workers. Our label fusion technique simultaneously estimates the true
segmentation, the performance levels of individual crowd workers, and an image
segmentation model in the form of a pairwise Markov random field. We term our
approach image-aware STAPLE (iaSTAPLE) since our image segmentation model
seamlessly integrates into the well-known and widely used STAPLE approach. In
an evaluation on a light microscopy dataset containing more than 5000
membrane-labeled epithelial cells of a fly wing, we show that iaSTAPLE outperforms
STAPLE in terms of segmentation accuracy as well as in terms of the accuracy of
estimated crowd worker performance levels, and is able to correctly segment 99%
of all cells when compared to expert segmentations. These results show that
iaSTAPLE is a highly useful tool for crowd sourcing image segmentation.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 16:12:18 GMT"
}
] | 2017-02-22T00:00:00 | [
[
"Schlesinger",
"Dmitrij",
""
],
[
"Jug",
"Florian",
""
],
[
"Myers",
"Gene",
""
],
[
"Rother",
"Carsten",
""
],
[
"Kainmüller",
"Dagmar",
""
]
] | TITLE: Crowd Sourcing Image Segmentation with iaSTAPLE
ABSTRACT: We propose a novel label fusion technique as well as a crowdsourcing protocol
to efficiently obtain accurate epithelial cell segmentations from non-expert
crowd workers. Our label fusion technique simultaneously estimates the true
segmentation, the performance levels of individual crowd workers, and an image
segmentation model in the form of a pairwise Markov random field. We term our
approach image-aware STAPLE (iaSTAPLE) since our image segmentation model
seamlessly integrates into the well-known and widely used STAPLE approach. In
an evaluation on a light microscopy dataset containing more than 5000
membrane-labeled epithelial cells of a fly wing, we show that iaSTAPLE outperforms
STAPLE in terms of segmentation accuracy as well as in terms of the accuracy of
estimated crowd worker performance levels, and is able to correctly segment 99%
of all cells when compared to expert segmentations. These results show that
iaSTAPLE is a highly useful tool for crowd sourcing image segmentation.
| no_new_dataset | 0.944382 |
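For reference, classic binary STAPLE label fusion is a short EM loop that jointly estimates per-worker sensitivity/specificity and a per-pixel posterior segmentation. The sketch below implements that baseline; iaSTAPLE's addition — the pairwise MRF image model estimated jointly with these quantities — is omitted here.

```python
# Classic binary STAPLE label fusion via EM (baseline sketch).
import numpy as np

def staple(D, prior=0.5, n_iter=50):
    """D: (n_pixels, n_raters) binary crowd labels.
    Returns per-pixel foreground posterior W, sensitivities p,
    and specificities q per rater."""
    n, r = D.shape
    p = np.full(r, 0.9)  # initial sensitivities
    q = np.full(r, 0.9)  # initial specificities
    for _ in range(n_iter):
        # E-step: posterior that each pixel is truly foreground
        a = prior * np.prod(p ** D * (1 - p) ** (1 - D), axis=1)
        b = (1 - prior) * np.prod(q ** (1 - D) * (1 - q) ** D, axis=1)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate rater performance levels
        p = (W[:, None] * D).sum(0) / (W.sum() + 1e-12)
        q = ((1 - W)[:, None] * (1 - D)).sum(0) / ((1 - W).sum() + 1e-12)
    return W, p, q
```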
1511.03745 | Marcus Rohrbach | Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, Bernt
Schiele | Grounding of Textual Phrases in Images by Reconstruction | published at ECCV 2016 (oral); updated to final version | null | 10.1007/978-3-319-46448-0_49 | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual
content is a challenging problem with many applications for human-computer
interaction and image-text reference resolution. Few datasets provide the
ground truth spatial localization of phrases, thus it is desirable to learn
from data with no or little grounding supervision. We propose a novel approach
which learns grounding by reconstructing a given phrase using an attention
mechanism, which can be either latent or optimized directly. During training
our approach encodes the phrase using a recurrent network language model and
then learns to attend to the relevant image region in order to reconstruct the
input phrase. At test time, the correct attention, i.e., the grounding, is
evaluated. If grounding supervision is available it can be directly applied via
a loss over the attention mechanism. We demonstrate the effectiveness of our
approach on the Flickr 30k Entities and ReferItGame datasets with different
levels of supervision, ranging from no supervision over partial supervision to
full supervision. Our supervised variant improves by a large margin over the
state-of-the-art on both datasets.
| [
{
"version": "v1",
"created": "Thu, 12 Nov 2015 01:13:47 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2016 18:59:11 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2016 04:03:15 GMT"
},
{
"version": "v4",
"created": "Fri, 17 Feb 2017 21:02:05 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Hu",
"Ronghang",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Grounding of Textual Phrases in Images by Reconstruction
ABSTRACT: Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual
content is a challenging problem with many applications for human-computer
interaction and image-text reference resolution. Few datasets provide the
ground truth spatial localization of phrases, thus it is desirable to learn
from data with no or little grounding supervision. We propose a novel approach
which learns grounding by reconstructing a given phrase using an attention
mechanism, which can be either latent or optimized directly. During training
our approach encodes the phrase using a recurrent network language model and
then learns to attend to the relevant image region in order to reconstruct the
input phrase. At test time, the correct attention, i.e., the grounding, is
evaluated. If grounding supervision is available it can be directly applied via
a loss over the attention mechanism. We demonstrate the effectiveness of our
approach on the Flickr 30k Entities and ReferItGame datasets with different
levels of supervision, ranging from no supervision over partial supervision to
full supervision. Our supervised variant improves by a large margin over the
state-of-the-art on both datasets.
| no_new_dataset | 0.949949 |
1602.07349 | Tomaso Aste | Wolfram Barfuss, Guido Previde Massara, T. Di Matteo, Tomaso Aste | Parsimonious modeling with Information Filtering Networks | 17 pages, 10 figures, 3 tables | Phys. Rev. E 94, 062306 (2016) | 10.1103/PhysRevE.94.062306 | null | cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a methodology to construct parsimonious probabilistic models.
This method makes use of Information Filtering Networks to produce a robust
estimate of the global sparse inverse covariance from a simple sum of local
inverse covariances computed on small sub-parts of the network. Being based on
local and low-dimensional inversions, this method is computationally very
efficient and statistically robust even for the estimation of inverse
covariance of high-dimensional, noisy and short time-series. Applied to
financial data, our method is computationally more efficient than
state-of-the-art methodologies such as Glasso, producing, in a fraction of the
computation time, models that can have equivalent or better performance but
with a sparser inference structure. We also discuss performance with sparse
factor models, where we notice that relative performance decreases with the
number of factors. The local nature of this approach allows us to perform
computations in parallel and provides a tool for dynamical adaptation by
partial updating when the properties of some variables change without the need
of recomputing the whole model. This makes this approach particularly suitable
to handle big datasets with large numbers of variables. Examples of practical
application for forecasting, stress testing and risk allocation in financial
systems are also provided.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 23:03:56 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jun 2016 15:11:14 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Nov 2016 15:32:05 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Barfuss",
"Wolfram",
""
],
[
"Massara",
"Guido Previde",
""
],
[
"Di Matteo",
"T.",
""
],
[
"Aste",
"Tomaso",
""
]
] | TITLE: Parsimonious modeling with Information Filtering Networks
ABSTRACT: We introduce a methodology to construct parsimonious probabilistic models.
This method makes use of Information Filtering Networks to produce a robust
estimate of the global sparse inverse covariance from a simple sum of local
inverse covariances computed on small sub-parts of the network. Being based on
local and low-dimensional inversions, this method is computationally very
efficient and statistically robust even for the estimation of inverse
covariance of high-dimensional, noisy and short time-series. Applied to
financial data, our method is computationally more efficient than
state-of-the-art methodologies such as Glasso, producing, in a fraction of the
computation time, models that can have equivalent or better performance but
with a sparser inference structure. We also discuss performance with sparse
factor models, where we notice that relative performance decreases with the
number of factors. The local nature of this approach allows us to perform
computations in parallel and provides a tool for dynamical adaptation by
partial updating when the properties of some variables change without the need
of recomputing the whole model. This makes this approach particularly suitable
to handle big datasets with large numbers of variables. Examples of practical
application for forecasting, stress testing and risk allocation in financial
systems are also provided.
| no_new_dataset | 0.945248 |
1605.01046 | Pavel Chebotarev | Vladimir Ivashkin and Pavel Chebotarev | Do logarithmic proximity measures outperform plain ones in graph
clustering? | 11 pages, 5 tables, 9 figures. Accepted for publication in the
Proceedings of 6th International Conference on Network Analysis, May 26-28,
2016, Nizhny Novgorod, Russia | null | null | null | cs.LG cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a number of graph kernels and proximity measures including
commute time kernel, regularized Laplacian kernel, heat kernel, exponential
diffusion kernel (also called "communicability"), etc., and the corresponding
distances as applied to clustering nodes in random graphs and several
well-known datasets. The model of generating random graphs involves edge
probabilities for the pairs of nodes that belong to the same class or different
predefined classes of nodes. It turns out that in most cases, logarithmic
measures (i.e., measures resulting from taking the logarithm of the proximities)
perform better than the "plain" measures at distinguishing the underlying
classes. A comparison in terms of reject curves of inter-class and intra-class
distances confirms this conclusion. A similar conclusion can be made for
several well-known datasets. A possible origin of this effect is that most
kernels have a multiplicative nature, while the nature of distances used in
cluster algorithms is an additive one (cf. the triangle inequality). The
logarithmic transformation is a tool to transform the first nature to the
second one. Moreover, some distances corresponding to the logarithmic measures
possess a meaningful cutpoint additivity property. In our experiments, the
leader is usually the logarithmic Communicability measure. However, we indicate
some more complicated cases in which other measures, typically, Communicability
and plain Walk, can be the winners.
| [
{
"version": "v1",
"created": "Tue, 3 May 2016 19:52:48 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2016 20:01:08 GMT"
},
{
"version": "v3",
"created": "Sat, 18 Feb 2017 09:04:02 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Ivashkin",
"Vladimir",
""
],
[
"Chebotarev",
"Pavel",
""
]
] | TITLE: Do logarithmic proximity measures outperform plain ones in graph
clustering?
ABSTRACT: We consider a number of graph kernels and proximity measures including
commute time kernel, regularized Laplacian kernel, heat kernel, exponential
diffusion kernel (also called "communicability"), etc., and the corresponding
distances as applied to clustering nodes in random graphs and several
well-known datasets. The model of generating random graphs involves edge
probabilities for the pairs of nodes that belong to the same class or different
predefined classes of nodes. It turns out that in most cases, logarithmic
measures (i.e., measures obtained by taking the logarithm of the proximities)
perform better at distinguishing the underlying classes than the "plain"
measures. A comparison in terms of reject curves of inter-class and intra-class
distances confirms this conclusion. A similar conclusion can be made for
several well-known datasets. A possible origin of this effect is that most
kernels have a multiplicative nature, while the nature of distances used in
cluster algorithms is an additive one (cf. the triangle inequality). The
logarithmic transformation is a tool to transform the first nature to the
second one. Moreover, some distances corresponding to the logarithmic measures
possess a meaningful cutpoint additivity property. In our experiments, the
leader is usually the logarithmic Communicability measure. However, we indicate
some more complicated cases in which other measures, typically, Communicability
and plain Walk, can be the winners.
| no_new_dataset | 0.945951 |
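As a concrete illustration of the abstract above (1605.01046), the sketch below contrasts the plain communicability kernel with its elementwise logarithm on a toy two-cluster graph. The graph, the kernel choice (communicability rather than, say, commute time), and the kernel-to-distance transform are illustrative assumptions, not the authors' code.

```python
# Sketch (not the authors' code): "plain" communicability kernel vs. its
# logarithmic counterpart on a toy graph; the graph itself is made up.
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a tiny graph with two planted clusters {0,1,2} and {3,4,5}.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

K_plain = expm(A)          # communicability kernel: entries are strictly positive
K_log = np.log(K_plain)    # elementwise logarithm -> "logarithmic" measure

def kernel_to_distance(K):
    """Standard kernel-to-squared-distance transform d(i,j) = K_ii + K_jj - 2 K_ij."""
    d = np.diag(K)
    return d[:, None] + d[None, :] - 2.0 * K

for name, K in [("plain", K_plain), ("log", K_log)]:
    D = kernel_to_distance(K)
    intra = D[0, 1]        # pair within the same planted cluster
    inter = D[0, 3]        # pair across clusters
    print(f"{name}: intra={intra:.3f} inter={inter:.3f} ratio={inter/intra:.2f}")
```

A higher inter/intra ratio makes the planted clusters easier for distance-based clustering to recover, which is the effect the abstract attributes to the logarithmic transformation.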
1609.03433 | Zherong Pan | Zherong Pan and Dinesh Manocha | Feedback Motion Planning for Liquid Transfer using Supervised Learning | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel motion planning algorithm for transferring a liquid body
from a source to a target container. Our approach uses a receding-horizon
optimization strategy that takes into account fluid constraints and avoids
collisions. In order to efficiently handle the high-dimensional configuration
space of a liquid body, we use system identification to learn its dynamics
characteristics using a neural network. We generate the training dataset using
stochastic optimization in a transfer-problem-specific search space. The
runtime feedback motion planner is used for real-time planning and we observe
a high success rate in our simulated 2D and 3D fluid transfer benchmarks.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 15:06:22 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2017 21:56:42 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Pan",
"Zherong",
""
],
[
"Manocha",
"Dinesh",
""
]
] | TITLE: Feedback Motion Planning for Liquid Transfer using Supervised Learning
ABSTRACT: We present a novel motion planning algorithm for transferring a liquid body
from a source to a target container. Our approach uses a receding-horizon
optimization strategy that takes into account fluid constraints and avoids
collisions. In order to efficiently handle the high-dimensional configuration
space of a liquid body, we use system identification to learn its dynamics
characteristics using a neural network. We generate the training dataset using
stochastic optimization in a transfer-problem-specific search space. The
runtime feedback motion planner is used for real-time planning and we observe
a high success rate in our simulated 2D and 3D fluid transfer benchmarks.
| no_new_dataset | 0.950824 |
1610.02177 | Patrick Christ | Patrick Ferdinand Christ, Mohamed Ezzeldin A. Elshaer, Florian
Ettlinger, Sunil Tatavarty, Marc Bickel, Patrick Bilic, Markus Rempfler,
Marco Armbruster, Felix Hofmann, Melvin D'Anastasi, Wieland H. Sommer,
Seyed-Ahmad Ahmadi and Bjoern H. Menze | Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully
Convolutional Neural Networks and 3D Conditional Random Fields | Accepted at MICCAI 2016. Source code available on
https://github.com/IBBM/Cascaded-FCN | null | 10.1007/978-3-319-46723-8_48 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic segmentation of the liver and its lesions is an important step
towards deriving quantitative biomarkers for accurate clinical diagnosis and
computer-aided decision support systems. This paper presents a method to
automatically segment liver and lesions in CT abdomen images using cascaded
fully convolutional neural networks (CFCNs) and dense 3D conditional random
fields (CRFs). We train and cascade two FCNs for a combined segmentation of the
liver and its lesions. In the first step, we train a FCN to segment the liver
as ROI input for a second FCN. The second FCN solely segments lesions from the
predicted liver ROIs of step 1. We refine the segmentations of the CFCN using a
dense 3D CRF that accounts for both spatial coherence and appearance. CFCN
models were trained in a 2-fold cross-validation on the abdominal CT dataset
3DIRCAD comprising 15 hepatic tumor volumes. Our results show that CFCN-based
semantic liver and lesion segmentation achieves Dice scores over 94% for liver
with computation times below 100s per volume. We experimentally demonstrate the
robustness of the proposed method as a decision support system with a high
accuracy and speed for usage in daily clinical routine.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2016 08:23:32 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Christ",
"Patrick Ferdinand",
""
],
[
"Elshaer",
"Mohamed Ezzeldin A.",
""
],
[
"Ettlinger",
"Florian",
""
],
[
"Tatavarty",
"Sunil",
""
],
[
"Bickel",
"Marc",
""
],
[
"Bilic",
"Patrick",
""
],
[
"Rempfler",
"Markus",
""
],
[
"Armbruster",
"Marco",
""
],
[
"Hofmann",
"Felix",
""
],
[
"D'Anastasi",
"Melvin",
""
],
[
"Sommer",
"Wieland H.",
""
],
[
"Ahmadi",
"Seyed-Ahmad",
""
],
[
"Menze",
"Bjoern H.",
""
]
] | TITLE: Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully
Convolutional Neural Networks and 3D Conditional Random Fields
ABSTRACT: Automatic segmentation of the liver and its lesions is an important step
towards deriving quantitative biomarkers for accurate clinical diagnosis and
computer-aided decision support systems. This paper presents a method to
automatically segment liver and lesions in CT abdomen images using cascaded
fully convolutional neural networks (CFCNs) and dense 3D conditional random
fields (CRFs). We train and cascade two FCNs for a combined segmentation of the
liver and its lesions. In the first step, we train a FCN to segment the liver
as ROI input for a second FCN. The second FCN solely segments lesions from the
predicted liver ROIs of step 1. We refine the segmentations of the CFCN using a
dense 3D CRF that accounts for both spatial coherence and appearance. CFCN
models were trained in a 2-fold cross-validation on the abdominal CT dataset
3DIRCAD comprising 15 hepatic tumor volumes. Our results show that CFCN-based
semantic liver and lesion segmentation achieves Dice scores over 94% for liver
with computation times below 100s per volume. We experimentally demonstrate the
robustness of the proposed method as a decision support system with a high
accuracy and speed for usage in daily clinical routine.
| no_new_dataset | 0.952794 |
1610.02865 | Travis Gagie | Travis Gagie, Giovanni Manzini and Rossano Venturini | An Encoding for Order-Preserving Matching | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Encoding data structures store enough information to answer the queries they
are meant to support but not enough to recover their underlying datasets. In
this paper we give the first encoding data structure for the challenging
problem of order-preserving pattern matching. This problem was introduced only
a few years ago but has already attracted significant attention because of its
applications in data analysis. Two strings are said to be an order-preserving
match if the {\em relative order} of their characters is the same: e.g., $4, 1,
3, 2$ and $10, 3, 7, 5$ are an order-preserving match. We show how, given a
string $S [1..n]$ over an arbitrary alphabet and a constant $c \geq 1$, we can
build an $O (n \log \log n)$-bit encoding such that later, given a pattern $P
[1..m]$ with $m \leq \lg^c n$, we can return the number of order-preserving
occurrences of $P$ in $S$ in $O (m)$ time. Within the same time bound we can
also return the starting position of some order-preserving match for $P$ in $S$
(if such a match exists). We prove that our space bound is within a constant
factor of optimal; our query time is optimal if $\log \sigma = \Omega(\log n)$.
Our space bound contrasts with the $\Omega (n \log n)$ bits needed in the worst
case to store $S$ itself, an index for order-preserving pattern matching with
no restrictions on the pattern length, or an index for standard pattern
matching even with restrictions on the pattern length. Moreover, we can build
our encoding knowing only how each character compares to $O (\lg^c n)$
neighbouring characters.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2016 11:47:05 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2017 21:51:33 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Gagie",
"Travis",
""
],
[
"Manzini",
"Giovanni",
""
],
[
"Venturini",
"Rossano",
""
]
] | TITLE: An Encoding for Order-Preserving Matching
ABSTRACT: Encoding data structures store enough information to answer the queries they
are meant to support but not enough to recover their underlying datasets. In
this paper we give the first encoding data structure for the challenging
problem of order-preserving pattern matching. This problem was introduced only
a few years ago but has already attracted significant attention because of its
applications in data analysis. Two strings are said to be an order-preserving
match if the {\em relative order} of their characters is the same: e.g., $4, 1,
3, 2$ and $10, 3, 7, 5$ are an order-preserving match. We show how, given a
string $S [1..n]$ over an arbitrary alphabet and a constant $c \geq 1$, we can
build an $O (n \log \log n)$-bit encoding such that later, given a pattern $P
[1..m]$ with $m \leq \lg^c n$, we can return the number of order-preserving
occurrences of $P$ in $S$ in $O (m)$ time. Within the same time bound we can
also return the starting position of some order-preserving match for $P$ in $S$
(if such a match exists). We prove that our space bound is within a constant
factor of optimal; our query time is optimal if $\log \sigma = \Omega(\log n)$.
Our space bound contrasts with the $\Omega (n \log n)$ bits needed in the worst
case to store $S$ itself, an index for order-preserving pattern matching with
no restrictions on the pattern length, or an index for standard pattern
matching even with restrictions on the pattern length. Moreover, we can build
our encoding knowing only how each character compares to $O (\lg^c n)$
neighbouring characters.
| no_new_dataset | 0.941439 |
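To make the order-preserving matching notion above (1610.02865) concrete, here is a naive rank-pattern check and scan. It assumes distinct values (no ties) and is purely illustrative; it has none of the paper's $O(n \log \log n)$-bit encoding or $O(m)$ query-time machinery.

```python
# Minimal illustration (assumption: distinct values, no ties) of what it means
# for two strings to be an order-preserving match, per the abstract's example.
def rank_pattern(s):
    """Return the relative-order pattern of s, e.g. [4,1,3,2] -> (3,0,2,1)."""
    order = sorted(range(len(s)), key=lambda i: s[i])
    ranks = [0] * len(s)
    for r, i in enumerate(order):
        ranks[i] = r
    return tuple(ranks)

def op_match(a, b):
    return len(a) == len(b) and rank_pattern(a) == rank_pattern(b)

def op_occurrences(text, pattern):
    """Naive scan listing order-preserving occurrences of pattern in text."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1) if op_match(text[i:i + m], pattern)]

assert op_match([4, 1, 3, 2], [10, 3, 7, 5])        # example from the abstract
print(op_occurrences([1, 5, 2, 4, 3, 9, 6, 8], [4, 1, 3, 2]))   # -> [1]
```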
1702.05538 | Terrance DeVries | Terrance DeVries, Graham W. Taylor | Dataset Augmentation in Feature Space | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset augmentation, the practice of applying a wide array of
domain-specific transformations to synthetically expand a training set, is a
standard tool in supervised learning. While effective in tasks such as visual
recognition, the set of transformations must be carefully designed,
implemented, and tested for every new domain, limiting its re-use and
generality. In this paper, we adopt a simpler, domain-agnostic approach to
dataset augmentation. We start with existing data points and apply simple
transformations such as adding noise, interpolating, or extrapolating between
them. Our main insight is to perform the transformation not in input space, but
in a learned feature space. A re-kindling of interest in unsupervised
representation learning makes this technique timely and more effective. It is a
simple proposal, but to date one that has not been tested empirically. Working
in the space of context vectors generated by sequence-to-sequence models, we
demonstrate a technique that is effective for both static and sequential data.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 23:13:15 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"DeVries",
"Terrance",
""
],
[
"Taylor",
"Graham W.",
""
]
] | TITLE: Dataset Augmentation in Feature Space
ABSTRACT: Dataset augmentation, the practice of applying a wide array of
domain-specific transformations to synthetically expand a training set, is a
standard tool in supervised learning. While effective in tasks such as visual
recognition, the set of transformations must be carefully designed,
implemented, and tested for every new domain, limiting its re-use and
generality. In this paper, we adopt a simpler, domain-agnostic approach to
dataset augmentation. We start with existing data points and apply simple
transformations such as adding noise, interpolating, or extrapolating between
them. Our main insight is to perform the transformation not in input space, but
in a learned feature space. A re-kindling of interest in unsupervised
representation learning makes this technique timely and more effective. It is a
simple proposal, but to date one that has not been tested empirically. Working
in the space of context vectors generated by sequence-to-sequence models, we
demonstrate a technique that is effective for both static and sequential data.
| no_new_dataset | 0.946498 |
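The feature-space augmentation recipe of 1702.05538 reduces to a few lines once features exist. The sketch below uses random vectors as a stand-in for the paper's sequence-to-sequence encoder features; the nearest-neighbor choice and the lam/sigma values are assumptions.

```python
# Sketch of domain-agnostic feature-space augmentation: perturb, interpolate,
# or extrapolate between points in a learned feature space. Random stand-in
# features; in the paper these come from a trained encoder.
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(100, 32))            # 100 examples, 32-d "learned" features

def nearest_neighbor(F, i):
    d = np.linalg.norm(F - F[i], axis=1)
    d[i] = np.inf                          # exclude the point itself
    return int(np.argmin(d))

def augment(F, i, mode="extrapolate", lam=0.5, sigma=0.1, rng=rng):
    j = nearest_neighbor(F, i)
    if mode == "noise":
        return F[i] + sigma * rng.normal(size=F.shape[1])
    if mode == "interpolate":              # move toward the neighbor
        return F[i] + lam * (F[j] - F[i])
    if mode == "extrapolate":              # push away from the neighbor
        return F[i] + lam * (F[i] - F[j])
    raise ValueError(mode)

new_feature = augment(F, i=0, mode="extrapolate")
print(new_feature.shape)                   # (32,) -- classify or decode from here
```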
1702.05564 | Angus Galloway | Angus Galloway, Graham W. Taylor, Aaron Ramsay, Medhat Moussa | The Ciona17 Dataset for Semantic Segmentation of Invasive Species in a
Marine Aquaculture Environment | Submitted to the Conference on Computer and Robot Vision (CRV) 2017 | null | 10.5683/SP/NTUOK9 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An original dataset for semantic segmentation, Ciona17, is introduced, which
to the best of the authors' knowledge, is the first dataset of its kind with
pixel-level annotations pertaining to invasive species in a marine environment.
Diverse outdoor illumination, a range of object shapes, colour, and severe
occlusion provide a significant real world challenge for the computer vision
community. An accompanying ground-truthing tool for superpixel labeling, Truth
and Crop, is also introduced. Finally, we provide a baseline using a variant of
Fully Convolutional Networks, and report results in terms of the standard mean
intersection over union (mIoU) metric.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2017 03:40:33 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Galloway",
"Angus",
""
],
[
"Taylor",
"Graham W.",
""
],
[
"Ramsay",
"Aaron",
""
],
[
"Moussa",
"Medhat",
""
]
] | TITLE: The Ciona17 Dataset for Semantic Segmentation of Invasive Species in a
Marine Aquaculture Environment
ABSTRACT: An original dataset for semantic segmentation, Ciona17, is introduced, which
to the best of the authors' knowledge, is the first dataset of its kind with
pixel-level annotations pertaining to invasive species in a marine environment.
Diverse outdoor illumination, a range of object shapes, colour, and severe
occlusion provide a significant real world challenge for the computer vision
community. An accompanying ground-truthing tool for superpixel labeling, Truth
and Crop, is also introduced. Finally, we provide a baseline using a variant of
Fully Convolutional Networks, and report results in terms of the standard mean
intersection over union (mIoU) metric.
| new_dataset | 0.954137 |
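For reference, the standard mean intersection-over-union (mIoU) metric that the Ciona17 baseline reports can be computed from a confusion matrix as below; the two toy label maps are invented for illustration.

```python
# Hedged sketch of the standard mIoU metric for semantic segmentation.
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    y_true, y_pred = y_true.ravel(), y_pred.ravel()
    # Confusion matrix C[i, j] = number of pixels of true class i predicted as j.
    C = np.bincount(num_classes * y_true + y_pred,
                    minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(C).astype(float)
    union = C.sum(axis=0) + C.sum(axis=1) - tp
    iou = np.divide(tp, union, out=np.zeros_like(tp), where=union > 0)
    return iou[union > 0].mean()           # average over classes that appear

gt = np.array([[0, 0, 1], [0, 2, 2], [1, 1, 2]])
pr = np.array([[0, 1, 1], [0, 2, 2], [1, 1, 1]])
print(f"mIoU = {mean_iou(gt, pr, num_classes=3):.3f}")
```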
1702.05596 | Shitao Chen | Shitao Chen, Songyi Zhang, Jinghao Shang, Badong Chen, Nanning Zheng | Brain Inspired Cognitive Model with Attention for Self-Driving Cars | 13 pages, 10 figures | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perception-driven approach and end-to-end system are two major vision-based
frameworks for self-driving cars. However, it is difficult to introduce
attention and the historical information of the autonomous driving process,
which are essential factors for achieving human-like driving, into these two methods.
In this paper, we propose a novel model for self-driving cars named
brain-inspired cognitive model with attention (CMA). This model consists of
three parts: a convolutional neural network for simulating human visual cortex,
a cognitive map built to describe relationships between objects in complex
traffic scene and a recurrent neural network that combines with the real-time
updated cognitive map to implement attention mechanism and long-short term
memory. The benefit of our model is that it can accurately solve three tasks
simultaneously: 1) detection of the free space and boundaries of the current and
adjacent lanes, 2) estimation of obstacle distance and vehicle attitude, and 3)
learning of driving behavior and decision making from a human driver. More
significantly, the proposed model could accept external navigating instructions
during an end-to-end driving process. For evaluation, we build a large-scale
road-vehicle dataset which contains more than forty thousand labeled road
images captured by three cameras on our self-driving car. Moreover, human
driving activities and vehicle states are recorded at the same time.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2017 10:47:16 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Chen",
"Shitao",
""
],
[
"Zhang",
"Songyi",
""
],
[
"Shang",
"Jinghao",
""
],
[
"Chen",
"Badong",
""
],
[
"Zheng",
"Nanning",
""
]
] | TITLE: Brain Inspired Cognitive Model with Attention for Self-Driving Cars
ABSTRACT: Perception-driven approach and end-to-end system are two major vision-based
frameworks for self-driving cars. However, it is difficult to introduce
attention and the historical information of the autonomous driving process,
which are essential factors for achieving human-like driving, into these two methods.
In this paper, we propose a novel model for self-driving cars named
brain-inspired cognitive model with attention (CMA). This model consists of
three parts: a convolutional neural network for simulating human visual cortex,
a cognitive map built to describe relationships between objects in complex
traffic scene and a recurrent neural network that combines with the real-time
updated cognitive map to implement attention mechanism and long-short term
memory. The benefit of our model is that it can accurately solve three tasks
simultaneously: 1) detection of the free space and boundaries of the current and
adjacent lanes, 2) estimation of obstacle distance and vehicle attitude, and 3)
learning of driving behavior and decision making from a human driver. More
significantly, the proposed model could accept external navigating instructions
during an end-to-end driving process. For evaluation, we build a large-scale
road-vehicle dataset which contains more than forty thousand labeled road
images captured by three cameras on our self-driving car. Moreover, human
driving activities and vehicle states are recorded at the same time.
| new_dataset | 0.951323 |
1702.05597 | Shuai Ma | Xuelian Lin, Shuai Ma, Han Zhang, Tianyu Wo, Jinpeng Huai | One-Pass Error Bounded Trajectory Simplification | published at the 43rd International Conference on Very Large Data
Bases (VLDB), Munich, Germany, 2017 | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, various sensors are collecting, storing and transmitting tremendous
trajectory data, and it is known that raw trajectory data seriously wastes the
storage, network bandwidth and computing resources. Line simplification (LS)
algorithms are an effective approach to attacking this issue by compressing
data points in a trajectory to a set of continuous line segments, and are
commonly used in practice. However, existing LS algorithms are not sufficient
for the needs of sensors in mobile devices. In this study, we first develop a
one-pass error bounded trajectory simplification algorithm (OPERB), which scans
each data point in a trajectory once and only once. We then propose an
aggressive one-pass error bounded trajectory simplification algorithm
(OPERB-A), which allows interpolating new data points into a trajectory under
certain conditions. Finally, we experimentally verify that our approaches
(OPERB and OPERB-A) are both efficient and effective, using four real-life
trajectory datasets.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2017 10:47:20 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Lin",
"Xuelian",
""
],
[
"Ma",
"Shuai",
""
],
[
"Zhang",
"Han",
""
],
[
"Wo",
"Tianyu",
""
],
[
"Huai",
"Jinpeng",
""
]
] | TITLE: One-Pass Error Bounded Trajectory Simplification
ABSTRACT: Nowadays, various sensors are collecting, storing and transmitting tremendous
trajectory data, and it is known that raw trajectory data seriously wastes the
storage, network bandwidth and computing resources. Line simplification (LS)
algorithms are an effective approach to attacking this issue by compressing
data points in a trajectory to a set of continuous line segments, and are
commonly used in practice. However, existing LS algorithms are not sufficient
for the needs of sensors in mobile devices. In this study, we first develop a
one-pass error bounded trajectory simplification algorithm (OPERB), which scans
each data point in a trajectory once and only once. We then propose an
aggressive one-pass error bounded trajectory simplification algorithm
(OPERB-A), which allows interpolating new data points into a trajectory under
certain conditions. Finally, we experimentally verify that our approaches
(OPERB and OPERB-A) are both efficient and effective, using four real-life
trajectory datasets.
| no_new_dataset | 0.946843 |
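To ground the line-simplification discussion in 1702.05597, here is a naive single-scan, error-bounded baseline. It buffers the points of the current segment, so it is *not* the paper's OPERB, which achieves one-pass behavior with constant per-point work; the epsilon value and toy trajectory are assumptions.

```python
# Naive single-scan error-bounded simplifier: keep an anchor and start a new
# segment as soon as a buffered point deviates from the anchor-to-current line
# by more than eps. Illustrative only; not the OPERB algorithm.
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den > 0 else math.hypot(px - ax, py - ay)

def simplify(traj, eps):
    kept, anchor, buffer = [traj[0]], traj[0], []
    for p in traj[1:]:
        if all(point_line_dist(q, anchor, p) <= eps for q in buffer):
            buffer.append(p)                 # segment anchor -> p still fits
        else:
            kept.append(buffer[-1])          # close the segment at the last good point
            anchor, buffer = buffer[-1], [p]
    kept.append(traj[-1])
    return kept

traj = [(0, 0), (1, 0.1), (2, -0.1), (3, 2.0), (4, 2.1), (5, 1.9)]
print(simplify(traj, eps=0.3))   # -> [(0, 0), (2, -0.1), (3, 2.0), (5, 1.9)]
```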
1702.05659 | Wojciech Czarnecki | Katarzyna Janocha, Wojciech Marian Czarnecki | On Loss Functions for Deep Neural Networks in Classification | Presented at Theoretical Foundations of Machine Learning 2017 (TFML
2017) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are currently among the most commonly used classifiers.
Despite easily achieving very good performance, one of the best selling points
of these models is their modular design - one can conveniently adapt their
architecture to specific needs, change connectivity patterns, attach
specialised layers, experiment with a large number of activation functions,
normalisation schemes and many others. While one can find impressively wide
spread of various configurations of almost every aspect of the deep nets, one
element is, in the authors' opinion, underrepresented - while solving
classification problems, the vast majority of papers and applications simply use
log loss. In this paper we try to investigate how particular choices of loss
functions affect deep models and their learning dynamics, as well as resulting
classifiers robustness to various effects. We perform experiments on classical
datasets, as well as provide some additional, theoretical insights into the
problem. In particular we show that L1 and L2 losses are, quite surprisingly,
justified classification objectives for deep nets, by providing probabilistic
interpretation in terms of expected misclassification. We also introduce two
losses which are not typically used as deep nets objectives and show that they
are viable alternatives to the existing ones.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2017 21:39:36 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Janocha",
"Katarzyna",
""
],
[
"Czarnecki",
"Wojciech Marian",
""
]
] | TITLE: On Loss Functions for Deep Neural Networks in Classification
ABSTRACT: Deep neural networks are currently among the most commonly used classifiers.
Despite easily achieving very good performance, one of the best selling points
of these models is their modular design - one can conveniently adapt their
architecture to specific needs, change connectivity patterns, attach
specialised layers, experiment with a large number of activation functions,
normalisation schemes and many others. While one can find impressively wide
spread of various configurations of almost every aspect of the deep nets, one
element is, in the authors' opinion, underrepresented - while solving
classification problems, the vast majority of papers and applications simply use
log loss. In this paper we try to investigate how particular choices of loss
functions affect deep models and their learning dynamics, as well as resulting
classifiers robustness to various effects. We perform experiments on classical
datasets, as well as provide some additional, theoretical insights into the
problem. In particular we show that L1 and L2 losses are, quite surprisingly,
justified classification objectives for deep nets, by providing probabilistic
interpretation in terms of expected misclassification. We also introduce two
losses which are not typically used as deep nets objectives and show that they
are viable alternatives to the existing ones.
| no_new_dataset | 0.941815 |
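A minimal numerical comparison of the losses discussed in 1702.05659, on one made-up example: per-example log loss versus the L1 and L2 losses between a softmax output and a one-hot target.

```python
# Toy comparison of classification objectives; the logits are invented.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([2.0, 0.5, -1.0])      # logits for a 3-class problem
y = np.array([1.0, 0.0, 0.0])       # one-hot target, true class 0
p = softmax(z)

log_loss = -np.sum(y * np.log(p))   # the default objective the paper questions
l1_loss = np.abs(p - y).sum()       # L1 on probabilities
l2_loss = ((p - y) ** 2).sum()      # L2 on probabilities
print(f"p={np.round(p, 3)} log={log_loss:.3f} L1={l1_loss:.3f} L2={l2_loss:.3f}")
```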
1702.05693 | Shanshan Zhang | Shanshan Zhang, Rodrigo Benenson and Bernt Schiele | CityPersons: A Diverse Dataset for Pedestrian Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convnets have enabled significant progress in pedestrian detection recently,
but there are still open questions regarding suitable architectures and
training data. We revisit CNN design and point out key adaptations, enabling
plain FasterRCNN to obtain state-of-the-art results on the Caltech dataset.
To achieve further improvement from more and better data, we introduce
CityPersons, a new set of person annotations on top of the Cityscapes dataset.
The diversity of CityPersons allows us for the first time to train one single
CNN model that generalizes well over multiple benchmarks. Moreover, with
additional training with CityPersons, we obtain top results using FasterRCNN on
Caltech, improving especially for more difficult cases (heavy occlusion and
small scale) and providing higher localization quality.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2017 03:01:55 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Zhang",
"Shanshan",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: CityPersons: A Diverse Dataset for Pedestrian Detection
ABSTRACT: Convnets have enabled significant progress in pedestrian detection recently,
but there are still open questions regarding suitable architectures and
training data. We revisit CNN design and point out key adaptations, enabling
plain FasterRCNN to obtain state-of-the-art results on the Caltech dataset.
To achieve further improvement from more and better data, we introduce
CityPersons, a new set of person annotations on top of the Cityscapes dataset.
The diversity of CityPersons allows us for the first time to train one single
CNN model that generalizes well over multiple benchmarks. Moreover, with
additional training with CityPersons, we obtain top results using FasterRCNN on
Caltech, improving especially for more difficult cases (heavy occlusion and
small scale) and providing higher localization quality.
| new_dataset | 0.951774 |
1702.05711 | Hongyang Li | Hongyang Li and Yu Liu and Wanli Ouyang and Xiaogang Wang | Zoom Out-and-In Network with Recursive Training for Object Proposal | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we propose a zoom-out-and-in network for generating object
proposals. We utilize different resolutions of feature maps in the network to
detect object instances of various sizes. Specifically, we divide the anchor
candidates into three clusters based on the scale size and place them on
feature maps of distinct strides to detect small, medium and large objects,
respectively. Deeper feature maps contain region-level semantics which can help
shallow counterparts to identify small objects. Therefore we design a zoom-in
sub-network to increase the resolution of high level features via a
deconvolution operation. The high-level features with high resolution are then
combined and merged with low-level features to detect objects. Furthermore, we
devise a recursive training pipeline to consecutively regress region proposals
at the training stage in order to match the iterative regression at the testing
stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET
and MS COCO datasets, where our algorithm performs better than the
state of the art in various evaluation metrics. It also increases average
precision by around 2% in the detection system.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2017 07:43:27 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Li",
"Hongyang",
""
],
[
"Liu",
"Yu",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Zoom Out-and-In Network with Recursive Training for Object Proposal
ABSTRACT: In this paper, we propose a zoom-out-and-in network for generating object
proposals. We utilize different resolutions of feature maps in the network to
detect object instances of various sizes. Specifically, we divide the anchor
candidates into three clusters based on the scale size and place them on
feature maps of distinct strides to detect small, medium and large objects,
respectively. Deeper feature maps contain region-level semantics which can help
shallow counterparts to identify small objects. Therefore we design a zoom-in
sub-network to increase the resolution of high level features via a
deconvolution operation. The high-level features with high resolution are then
combined and merged with low-level features to detect objects. Furthermore, we
devise a recursive training pipeline to consecutively regress region proposals
at the training stage in order to match the iterative regression at the testing
stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET
and MS COCO datasets, where our algorithm performs better than the
state of the art in various evaluation metrics. It also increases average
precision by around 2% in the detection system.
| no_new_dataset | 0.951414 |
1702.05732 | Philipp Pelz | Philipp Michael Pelz, Wen Xuan Qiu, Robert B\"ucker, G\"unther
Kassier, R.J. Dwayne Miller | Low-dose cryo electron ptychography via non-convex Bayesian optimization | null | null | null | null | physics.comp-ph math.OC physics.data-an stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electron ptychography has seen a recent surge of interest for phase sensitive
imaging at atomic or near-atomic resolution. However, applications are so far
mainly limited to radiation-hard samples because the required doses are too
high for imaging biological samples at high resolution. We propose the use of
non-convex, Bayesian optimization to overcome this problem and reduce the dose
required for successful reconstruction by two orders of magnitude compared to
previous experiments. We suggest using this method for imaging single
biological macromolecules at cryogenic temperatures and demonstrate 2D
single-particle reconstructions from simulated data with a resolution of 7.9
\AA$\,$ at a dose of 20 $e^- / \AA^2$. When averaging over only 15 low-dose
datasets, a resolution of 4 \AA$\,$ is possible for large macromolecular
complexes. With its independence from microscope transfer function, direct
recovery of phase contrast and better scaling of signal-to-noise ratio,
cryo-electron ptychography may become a promising alternative to Zernike
phase-contrast microscopy.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2017 10:08:16 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Pelz",
"Philipp Michael",
""
],
[
"Qiu",
"Wen Xuan",
""
],
[
"Bücker",
"Robert",
""
],
[
"Kassier",
"Günther",
""
],
[
"Miller",
"R. J. Dwayne",
""
]
] | TITLE: Low-dose cryo electron ptychography via non-convex Bayesian optimization
ABSTRACT: Electron ptychography has seen a recent surge of interest for phase sensitive
imaging at atomic or near-atomic resolution. However, applications are so far
mainly limited to radiation-hard samples because the required doses are too
high for imaging biological samples at high resolution. We propose the use of
non-convex, Bayesian optimization to overcome this problem and reduce the dose
required for successful reconstruction by two orders of magnitude compared to
previous experiments. We suggest using this method for imaging single
biological macromolecules at cryogenic temperatures and demonstrate 2D
single-particle reconstructions from simulated data with a resolution of 7.9
\AA$\,$ at a dose of 20 $e^- / \AA^2$. When averaging over only 15 low-dose
datasets, a resolution of 4 \AA$\,$ is possible for large macromolecular
complexes. With its independence from microscope transfer function, direct
recovery of phase contrast and better scaling of signal-to-noise ratio,
cryo-electron ptychography may become a promising alternative to Zernike
phase-contrast microscopy.
| no_new_dataset | 0.948394 |
1702.05815 | Johann Paratte | Johan Paratte, Nathana\"el Perraudin, Pierre Vandergheynst | Compressive Embedding and Visualization using Graphs | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visualizing high-dimensional data has been a focus in data analysis
communities for decades, which has led to the design of many algorithms, some
of which are now considered references (such as t-SNE for example). In our era
of overwhelming data volumes, the scalability of such methods has become more
and more important. In this work, we present a method which allows one to apply any
visualization or embedding algorithm on very large datasets by considering only
a fraction of the data as input and then extending the information to all data
points using a graph encoding its global similarity. We show that in most
cases, using only $\mathcal{O}(\log(N))$ samples is sufficient to diffuse the
information to all $N$ data points. In addition, we propose quantitative
methods to measure the quality of embeddings and demonstrate the validity of
our technique on both synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2017 22:59:12 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Paratte",
"Johan",
""
],
[
"Perraudin",
"Nathanaël",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] | TITLE: Compressive Embedding and Visualization using Graphs
ABSTRACT: Visualizing high-dimensional data has been a focus in data analysis
communities for decades, which has led to the design of many algorithms, some
of which are now considered references (such as t-SNE for example). In our era
of overwhelming data volumes, the scalability of such methods has become more
and more important. In this work, we present a method which allows one to apply any
visualization or embedding algorithm on very large datasets by considering only
a fraction of the data as input and then extending the information to all data
points using a graph encoding its global similarity. We show that in most
cases, using only $\mathcal{O}(\log(N))$ samples is sufficient to diffuse the
information to all $N$ data points. In addition, we propose quantitative
methods to measure the quality of embeddings and demonstrate the validity of
our technique on both synthetic and real-world datasets.
| no_new_dataset | 0.951504 |
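The sample-then-diffuse idea of 1702.05815 can be prototyped as below. PCA stands in for "any visualization or embedding algorithm", and the kNN graph construction, iteration count, and clamping scheme are simplifying assumptions rather than the paper's method.

```python
# Rough sketch: embed only a small sample, then propagate coordinates to the
# remaining points over a row-stochastic kNN similarity graph.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
sample = rng.choice(len(X), size=64, replace=False)   # O(log N)-style subsample

# Embed only the sample (here: 2-d PCA via SVD).
Xs = X[sample] - X[sample].mean(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
Y = np.zeros((len(X), 2))
Y[sample] = Xs @ Vt[:2].T

# Row-stochastic kNN graph over all points.
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
k = 10
W = np.zeros_like(D)
for i, row in enumerate(D):
    W[i, np.argsort(row)[1:k + 1]] = 1.0
P = W / W.sum(axis=1, keepdims=True)

# Diffuse: repeatedly average neighbor coordinates, clamping the sampled rows.
for _ in range(200):
    Y = P @ Y
    Y[sample] = Xs @ Vt[:2].T
print(Y.shape)   # (500, 2): every point now has an embedding coordinate
```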
1702.05911 | Patrick Wieschollek | Patrick Wieschollek, Oliver Wang, Alexander Sorkine-Hornung, Hendrik
P.A. Lensch | Efficient Large-scale Approximate Nearest Neighbor Search on the GPU | null | The IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), pp. 2027 - 2035 (2016) | 10.1109/CVPR.2016.223 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new approach for efficient approximate nearest neighbor (ANN)
search in high dimensional spaces, extending the idea of Product Quantization.
We propose a two-level product and vector quantization tree that reduces the
number of vector comparisons required during tree traversal. Our approach also
includes a novel highly parallelizable re-ranking method for candidate vectors
by efficiently reusing already computed intermediate values. Due to its small
memory footprint during traversal, the method lends itself to an efficient,
parallel GPU implementation. This Product Quantization Tree (PQT) approach
significantly outperforms recent state of the art methods for high dimensional
nearest neighbor queries on standard reference datasets. Ours is the first work
that demonstrates GPU performance superior to CPU performance on high
dimensional, large scale ANN problems in time-critical real-world applications,
like loop-closing in videos.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 09:57:11 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Wieschollek",
"Patrick",
""
],
[
"Wang",
"Oliver",
""
],
[
"Sorkine-Hornung",
"Alexander",
""
],
[
"Lensch",
"Hendrik P. A.",
""
]
] | TITLE: Efficient Large-scale Approximate Nearest Neighbor Search on the GPU
ABSTRACT: We present a new approach for efficient approximate nearest neighbor (ANN)
search in high dimensional spaces, extending the idea of Product Quantization.
We propose a two-level product and vector quantization tree that reduces the
number of vector comparisons required during tree traversal. Our approach also
includes a novel highly parallelizable re-ranking method for candidate vectors
by efficiently reusing already computed intermediate values. Due to its small
memory footprint during traversal, the method lends itself to an efficient,
parallel GPU implementation. This Product Quantization Tree (PQT) approach
significantly outperforms recent state of the art methods for high dimensional
nearest neighbor queries on standard reference datasets. Ours is the first work
that demonstrates GPU performance superior to CPU performance on high
dimensional, large scale ANN problems in time-critical real-world applications,
like loop-closing in videos.
| no_new_dataset | 0.949482 |
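Product Quantization, which 1702.05911 extends into a two-level tree, looks like this in its basic CPU form. The PQT structure, the re-ranking scheme, and the GPU implementation are not reproduced; the data and parameter choices below are illustrative.

```python
# Minimal product-quantization sketch with asymmetric distance computation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64)).astype(np.float32)   # database vectors
m, k = 8, 16                                         # 8 subspaces, 16 centroids each
d_sub = X.shape[1] // m

# One small codebook per subspace; each vector becomes m centroid indices.
codebooks = [KMeans(n_clusters=k, n_init=4, random_state=0)
             .fit(X[:, s * d_sub:(s + 1) * d_sub]) for s in range(m)]
codes = np.stack([cb.labels_ for cb in codebooks], axis=1)   # (N, m)

def adc_search(q, topn=5):
    """Asymmetric distance computation: per-subspace lookup tables for query q."""
    tables = np.stack([np.linalg.norm(cb.cluster_centers_ -
                                      q[s * d_sub:(s + 1) * d_sub], axis=1) ** 2
                       for s, cb in enumerate(codebooks)])   # (m, k)
    dist = tables[np.arange(m), codes].sum(axis=1)           # (N,) approx distances
    return np.argsort(dist)[:topn]

print(adc_search(X[0]))   # the first hit should be vector 0 itself
```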
1702.05993 | Gabriela Csurka | Gabriela Csurka, Boris Chidlovski, Stephane Clinchant and Sophia
Michel | An Extended Framework for Marginalized Domain Adaptation | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an extended framework for marginalized domain adaptation, aimed at
addressing unsupervised, supervised and semi-supervised scenarios. We argue
that the denoising principle should be extended to explicitly promote
domain-invariant features as well as help the classification task. Therefore we
propose to jointly learn the data auto-encoders and the target classifiers.
First, in order to make the denoised features domain-invariant, we propose a
domain regularization that may be either a domain prediction loss or a maximum
mean discrepancy between the source and target data. The noise marginalization
in this case is reduced to solving the linear matrix system $AX=B$ which has a
closed-form solution. Second, in order to help the classification, we include a
class regularization term. Adding this component reduces the learning problem
to solving a Sylvester linear matrix equation $AX+XB=C$, for which an efficient
iterative procedure exists as well. We did an extensive study to assess how
these regularization terms improve the baseline performance in the three domain
adaptation scenarios and present experimental results on two image and one text
benchmark datasets, conventionally used for validating domain adaptation
methods. We report our findings and comparison with state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 15:00:13 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Csurka",
"Gabriela",
""
],
[
"Chidlovski",
"Boris",
""
],
[
"Clinchant",
"Stephane",
""
],
[
"Michel",
"Sophia",
""
]
] | TITLE: An Extended Framework for Marginalized Domain Adaptation
ABSTRACT: We propose an extended framework for marginalized domain adaptation, aimed at
addressing unsupervised, supervised and semi-supervised scenarios. We argue
that the denoising principle should be extended to explicitly promote
domain-invariant features as well as help the classification task. Therefore we
propose to jointly learn the data auto-encoders and the target classifiers.
First, in order to make the denoised features domain-invariant, we propose a
domain regularization that may be either a domain prediction loss or a maximum
mean discrepancy between the source and target data. The noise marginalization
in this case is reduced to solving the linear matrix system $AX=B$ which has a
closed-form solution. Second, in order to help the classification, we include a
class regularization term. Adding this component reduces the learning problem
to solving a Sylvester linear matrix equation $AX+XB=C$, for which an efficient
iterative procedure exists as well. We did an extensive study to assess how
these regularization terms improve the baseline performance in the three domain
adaptation scenarios and present experimental results on two image and one text
benchmark datasets, conventionally used for validating domain adaptation
methods. We report our findings and comparison with state-of-the-art methods.
| no_new_dataset | 0.943764 |
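The two linear-algebra reductions named in 1702.05993's abstract are directly available in SciPy; a toy check on random matrices (purely illustrative, not the paper's actual auto-encoder setup):

```python
# Closed-form linear system and Sylvester equation, as mentioned in the abstract.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A, B = rng.normal(size=(5, 5)), rng.normal(size=(4, 4))
C = rng.normal(size=(5, 4))

# Domain regularization alone: an AX = B-style system with a closed-form solution.
X1 = np.linalg.solve(A, C)                 # solves A @ X1 = C

# With the class regularization term: Sylvester equation A X + X B = C.
X2 = solve_sylvester(A, B, C)
print(np.allclose(A @ X2 + X2 @ B, C))     # True
```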
1702.06025 | Saravanan Thirumuruganathan | Rade Stanojevic, Sofiane Abbar, Saravanan Thirumuruganathan, Sanjay
Chawla, Fethi Filali, Ahid Aleimat | Kharita: Robust Map Inference using Graph Spanners | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread availability of GPS information in everyday devices such as
cars, smartphones and smart watches makes it possible to collect large amounts of
geospatial trajectory information. A particularly important, yet technically
challenging, application of this data is to identify the underlying road
network and keep it updated under various changes. In this paper, we propose
efficient algorithms that can generate accurate maps in both batch and online
settings. Our algorithms utilize techniques from graph spanners so that they
produce maps that can effectively handle a wide variety of road and intersection
shapes. We conduct a rigorous evaluation of our algorithms over two real-world
datasets and under a wide variety of performance metrics. Our experiments show
a significant improvement over prior work. In particular, we observe an
increase in Biagioni f-score of up to 20% when compared to the state of the art
while reducing the execution time by an order of magnitude. We also make our
source code open source for reproducibility and enable other researchers to
build on our work.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 15:51:07 GMT"
}
] | 2017-02-21T00:00:00 | [
[
"Stanojevic",
"Rade",
""
],
[
"Abbar",
"Sofiane",
""
],
[
"Thirumuruganathan",
"Saravanan",
""
],
[
"Chawla",
"Sanjay",
""
],
[
"Filali",
"Fethi",
""
],
[
"Aleimat",
"Ahid",
""
]
] | TITLE: Kharita: Robust Map Inference using Graph Spanners
ABSTRACT: The widespread availability of GPS information in everyday devices such as
cars, smartphones and smart watches makes it possible to collect large amounts of
geospatial trajectory information. A particularly important, yet technically
challenging, application of this data is to identify the underlying road
network and keep it updated under various changes. In this paper, we propose
efficient algorithms that can generate accurate maps in both batch and online
settings. Our algorithms utilize techniques from graph spanners so that they
produce maps that can effectively handle a wide variety of road and intersection
shapes. We conduct a rigorous evaluation of our algorithms over two real-world
datasets and under a wide variety of performance metrics. Our experiments show
a significant improvement over prior work. In particular, we observe an
increase in Biagioni f-score of up to 20% when compared to the state of the art
while reducing the execution time by an order of magnitude. We also make our
source code open source for reproducibility and enable other researchers to
build on our work.
| no_new_dataset | 0.946843 |
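Kharita builds on graph spanners; the classic greedy t-spanner below shows the underlying primitive on a toy weighted graph. This is not the authors' map-inference pipeline, and the graph and stretch factor are invented for illustration.

```python
# Classic greedy t-spanner: keep an edge (u, v, w) only if u and v are not
# already connected within distance t*w in the spanner built so far.
import networkx as nx

def greedy_spanner(G, t):
    S = nx.Graph()
    S.add_nodes_from(G.nodes)
    for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.dijkstra_path_length(S, u, v, weight="weight")
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > t * w:
            S.add_edge(u, v, weight=w)   # edge is needed to preserve stretch t
    return S

G = nx.complete_graph(8)
for u, v in G.edges:
    G[u][v]["weight"] = abs(u - v) + 1.0
S = greedy_spanner(G, t=2.0)
print(G.number_of_edges(), "->", S.number_of_edges())   # spanner is much sparser
```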
1610.08431 | Zewei Chu | Zewei Chu, Hai Wang, Kevin Gimpel, David McAllester | Broad Context Language Modeling as Reading Comprehension | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Progress in text understanding has been driven by large datasets that test
particular capabilities, like recent datasets for reading comprehension
(Hermann et al., 2015). We focus here on the LAMBADA dataset (Paperno et al.,
2016), a word prediction task requiring broader context than the immediate
sentence. We view LAMBADA as a reading comprehension problem and apply
comprehension models based on neural networks. Though these models are
constrained to choose a word from the context, they improve the state of the
art on LAMBADA from 7.3% to 49%. We analyze 100 instances, finding that neural
network readers perform well in cases that involve selecting a name from the
context based on dialogue or discourse cues but struggle when coreference
resolution or external knowledge is needed.
| [
{
"version": "v1",
"created": "Wed, 26 Oct 2016 17:25:38 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2016 18:54:44 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Feb 2017 21:33:30 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Chu",
"Zewei",
""
],
[
"Wang",
"Hai",
""
],
[
"Gimpel",
"Kevin",
""
],
[
"McAllester",
"David",
""
]
] | TITLE: Broad Context Language Modeling as Reading Comprehension
ABSTRACT: Progress in text understanding has been driven by large datasets that test
particular capabilities, like recent datasets for reading comprehension
(Hermann et al., 2015). We focus here on the LAMBADA dataset (Paperno et al.,
2016), a word prediction task requiring broader context than the immediate
sentence. We view LAMBADA as a reading comprehension problem and apply
comprehension models based on neural networks. Though these models are
constrained to choose a word from the context, they improve the state of the
art on LAMBADA from 7.3% to 49%. We analyze 100 instances, finding that neural
network readers perform well in cases that involve selecting a name from the
context based on dialogue or discourse cues but struggle when coreference
resolution or external knowledge is needed.
| no_new_dataset | 0.949012 |
1611.06310 | Razvan Pascanu | Grzegorz Swirszcz, Wojciech Marian Czarnecki and Razvan Pascanu | Local minima in training of neural networks | null | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a lot of recent interest in trying to characterize the error
surface of deep models. This stems from a long standing question. Given that
deep networks are highly nonlinear systems optimized by local gradient methods,
why do they not seem to be affected by bad local minima? It is widely believed
that training of deep models using gradient methods works so well because the
error surface either has no local minima, or if they exist they need to be
close in value to the global minimum. It is known that such results hold under
very strong assumptions which are not satisfied by real models. In this paper
we present examples showing that for such a theorem to be true, additional
assumptions on the data, initialization schemes and/or the model classes have
to be made. We look at the particular case of finite size datasets. We
demonstrate that in this scenario one can construct counter-examples (datasets
or initialization schemes) for which the network does become susceptible to bad
local minima over the weight space.
| [
{
"version": "v1",
"created": "Sat, 19 Nov 2016 05:49:22 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2017 14:51:54 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Swirszcz",
"Grzegorz",
""
],
[
"Czarnecki",
"Wojciech Marian",
""
],
[
"Pascanu",
"Razvan",
""
]
] | TITLE: Local minima in training of neural networks
ABSTRACT: There has been a lot of recent interest in trying to characterize the error
surface of deep models. This stems from a long standing question. Given that
deep networks are highly nonlinear systems optimized by local gradient methods,
why do they not seem to be affected by bad local minima? It is widely believed
that training of deep models using gradient methods works so well because the
error surface either has no local minima, or if they exist they need to be
close in value to the global minimum. It is known that such results hold under
very strong assumptions which are not satisfied by real models. In this paper
we present examples showing that for such a theorem to be true, additional
assumptions on the data, initialization schemes and/or the model classes have
to be made. We look at the particular case of finite size datasets. We
demonstrate that in this scenario one can construct counter-examples (datasets
or initialization schemes) for which the network does become susceptible to bad
local minima over the weight space.
| no_new_dataset | 0.948775 |
1611.08108 | Jiani Zhang | Jiani Zhang, Xingjian Shi, Irwin King and Dit-Yan Yeung | Dynamic Key-Value Memory Networks for Knowledge Tracing | To appear in 26th International Conference on World Wide Web (WWW),
2017 | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Tracing (KT) is a task of tracing evolving knowledge state of
students with respect to one or more concepts as they engage in a sequence of
learning activities. One important purpose of KT is to personalize the practice
sequence to help students learn knowledge concepts efficiently. However,
existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing
either model knowledge state for each predefined concept separately or fail to
pinpoint exactly which concepts a student is good at or unfamiliar with. To
solve these problems, this work introduces a new model called Dynamic Key-Value
Memory Networks (DKVMN) that can exploit the relationships between underlying
concepts and directly output a student's mastery level of each concept. Unlike
standard memory-augmented neural networks that use a single memory
matrix or two static memory matrices, our model has one static matrix called
key, which stores the knowledge concepts, and one dynamic matrix called
value, which stores and updates the mastery levels of the corresponding concepts.
Experiments show that our model consistently outperforms the state-of-the-art
model in a range of KT datasets. Moreover, the DKVMN model can automatically
discover underlying concepts of exercises, a task typically performed by human
annotators, and depict the changing knowledge state of a student.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 09:12:47 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2017 06:09:27 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Zhang",
"Jiani",
""
],
[
"Shi",
"Xingjian",
""
],
[
"King",
"Irwin",
""
],
[
"Yeung",
"Dit-Yan",
""
]
] | TITLE: Dynamic Key-Value Memory Networks for Knowledge Tracing
ABSTRACT: Knowledge Tracing (KT) is a task of tracing evolving knowledge state of
students with respect to one or more concepts as they engage in a sequence of
learning activities. One important purpose of KT is to personalize the practice
sequence to help students learn knowledge concepts efficiently. However,
existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing
either model knowledge state for each predefined concept separately or fail to
pinpoint exactly which concepts a student is good at or unfamiliar with. To
solve these problems, this work introduces a new model called Dynamic Key-Value
Memory Networks (DKVMN) that can exploit the relationships between underlying
concepts and directly output a student's mastery level of each concept. Unlike
standard memory-augmented neural networks that use a single memory
matrix or two static memory matrices, our model has one static matrix called
key, which stores the knowledge concepts, and one dynamic matrix called
value, which stores and updates the mastery levels of the corresponding concepts.
Experiments show that our model consistently outperforms the state-of-the-art
model in a range of KT datasets. Moreover, the DKVMN model can automatically
discover underlying concepts of exercises, a task typically performed by human
annotators, and depict the changing knowledge state of a student.
| no_new_dataset | 0.94801 |
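A bare-bones key-value memory read/write in the spirit of DKVMN (1611.08108): a static key matrix addresses concepts and a dynamic value matrix holds mastery state. The dimensions and the update rule are simplified guesses, not the paper's gated erase/add mechanism.

```python
# Toy key-value memory: attention over static keys, read/update of dynamic values.
import numpy as np

rng = np.random.default_rng(0)
n_concepts, d_k, d_v = 5, 8, 8
M_key = rng.normal(size=(n_concepts, d_k))      # static: concept keys
M_val = np.zeros((n_concepts, d_v))             # dynamic: mastery state

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def read(q):
    w = softmax(M_key @ q)          # correlation of exercise embedding q w/ concepts
    return w, w @ M_val             # attention weights, read content

def write(q, v, lr=0.5):
    """Nudge the addressed value rows toward target v (simplified update)."""
    global M_val
    w, r = read(q)
    M_val = M_val + lr * np.outer(w, v - r)

q = rng.normal(size=d_k)            # embedding of one exercise
write(q, v=np.ones(d_v))            # student answered correctly -> raise mastery
w, r = read(q)
print(np.round(w, 2), np.round(r, 2))
```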
1702.03865 | Akosua Busia | Akosua Busia and Navdeep Jaitly | Next-Step Conditioned Deep Convolutional Neural Networks Improve Protein
Secondary Structure Prediction | 11 pages, 3 figures, 4 tables, submitted to ISMB/ECCB 2017. arXiv
admin note: text overlap with arXiv:1611.01503 | null | null | null | cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently developed deep learning techniques have significantly improved the
accuracy of various speech and image recognition systems. In this paper we show
how to adapt some of these techniques to create a novel chained convolutional
architecture with next-step conditioning for improving performance on protein
sequence prediction problems. We explore its value by demonstrating its ability
to improve performance on eight-class secondary structure prediction. We first
establish a state-of-the-art baseline by adapting recent advances in
convolutional neural networks which were developed for vision tasks. This model
achieves 70.0% per amino acid accuracy on the CB513 benchmark dataset without
use of standard performance-boosting techniques such as ensembling or multitask
learning. We then improve upon this state-of-the-art result using a novel
chained prediction approach which frames the secondary structure prediction as
a next-step prediction problem. This sequential model achieves 70.3% Q8
accuracy on CB513 with a single model; an ensemble of these models produces
71.4% Q8 accuracy on the same test set, improving upon the previous overall
state of the art for the eight-class secondary structure problem. Our models
are implemented using TensorFlow, an open-source machine learning software
library available at TensorFlow.org; we aim to release the code for these
experiments as part of the TensorFlow repository.
| [
{
"version": "v1",
"created": "Mon, 13 Feb 2017 16:44:18 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Busia",
"Akosua",
""
],
[
"Jaitly",
"Navdeep",
""
]
] | TITLE: Next-Step Conditioned Deep Convolutional Neural Networks Improve Protein
Secondary Structure Prediction
ABSTRACT: Recently developed deep learning techniques have significantly improved the
accuracy of various speech and image recognition systems. In this paper we show
how to adapt some of these techniques to create a novel chained convolutional
architecture with next-step conditioning for improving performance on protein
sequence prediction problems. We explore its value by demonstrating its ability
to improve performance on eight-class secondary structure prediction. We first
establish a state-of-the-art baseline by adapting recent advances in
convolutional neural networks which were developed for vision tasks. This model
achieves 70.0% per amino acid accuracy on the CB513 benchmark dataset without
use of standard performance-boosting techniques such as ensembling or multitask
learning. We then improve upon this state-of-the-art result using a novel
chained prediction approach which frames the secondary structure prediction as
a next-step prediction problem. This sequential model achieves 70.3% Q8
accuracy on CB513 with a single model; an ensemble of these models produces
71.4% Q8 accuracy on the same test set, improving upon the previous overall
state of the art for the eight-class secondary structure problem. Our models
are implemented using TensorFlow, an open-source machine learning software
library available at TensorFlow.org; we aim to release the code for these
experiments as part of the TensorFlow repository.
| no_new_dataset | 0.949248 |
1702.05192 | Mohammad-Parsa Hosseini | Mohammad-Parsa Hosseini, Hamid Soltanian-Zadeh, Kost Elisevich, and
Dario Pompili | Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure
Prediction | IEEE Global Conference on Signal and Information Processing
(GlobalSIP), Greater Washington, DC, Dec 7-9, 2016 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing a Brain-Computer Interface~(BCI) for seizure prediction can help
epileptic patients have a better quality of life. However, there are many
difficulties and challenges in developing such a system as a real-life support
for patients. Because of the nonstationary nature of EEG signals, normal and
seizure patterns vary across different patients. Thus, finding a group of
manually extracted features for the prediction task is not practical. Moreover,
when using implanted electrodes for brain recording, massive amounts of data are
produced. This big data calls for safe storage and high
computational resources for real-time processing. To address these challenges,
a cloud-based BCI system for the analysis of this big EEG data is presented.
First, a dimensionality-reduction technique is developed to increase
classification accuracy as well as to decrease the communication bandwidth and
computation time. Second, following a deep-learning approach, a stacked
autoencoder is trained in two steps for unsupervised feature extraction and
classification. Third, a cloud-computing solution is proposed for real-time
analysis of big EEG data. The results on a benchmark clinical dataset
illustrate the superiority of the proposed patient-specific BCI as an
alternative method and its expected usefulness in real-life support of epilepsy
patients.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 00:00:38 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Hosseini",
"Mohammad-Parsa",
""
],
[
"Soltanian-Zadeh",
"Hamid",
""
],
[
"Elisevich",
"Kost",
""
],
[
"Pompili",
"Dario",
""
]
] | TITLE: Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure
Prediction
ABSTRACT: Developing a Brain-Computer Interface (BCI) for seizure prediction can help
epileptic patients have a better quality of life. However, there are many
difficulties and challenges in developing such a system as a real-life support
for patients. Because of the nonstationary nature of EEG signals, normal and
seizure patterns vary across different patients. Thus, finding a group of
manually extracted features for the prediction task is not practical. Moreover,
when using implanted electrodes for brain recording, massive amounts of data are
produced. This big data calls for safe storage and high
computational resources for real-time processing. To address these challenges,
a cloud-based BCI system for the analysis of this big EEG data is presented.
First, a dimensionality-reduction technique is developed to increase
classification accuracy as well as to decrease the communication bandwidth and
computation time. Second, following a deep-learning approach, a stacked
autoencoder is trained in two steps for unsupervised feature extraction and
classification. Third, a cloud-computing solution is proposed for real-time
analysis of big EEG data. The results on a benchmark clinical dataset
illustrate the superiority of the proposed patient-specific BCI as an
alternative method and its expected usefulness in real-life support of epilepsy
patients.
| no_new_dataset | 0.951233 |
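The two-step training of the stacked autoencoder mentioned in the record above — unsupervised reconstruction first, then supervised fine-tuning — can be sketched in PyTorch. This is an illustrative toy, not the authors' implementation; the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    def __init__(self, n_in, n_hidden=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))
        self.head = nn.Linear(n_hidden, n_classes)

def pretrain(model, x, epochs=50):
    # Step 1: unsupervised feature extraction via reconstruction.
    opt = torch.optim.Adam(list(model.encoder.parameters()) +
                           list(model.decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model.decoder(model.encoder(x)), x)
        loss.backward()
        opt.step()

def finetune(model, x, y, epochs=50):
    # Step 2: supervised classification on top of the learned features.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model.head(model.encoder(x)), y)
        loss.backward()
        opt.step()
```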
1702.05200 | Abdullah Alfarrarjeh | Abdullah Alfarrarjeh, Cyrus Shahabi | Hybrid Indexes to Expedite Spatial-Visual Search | 12 Pages, 19 Figures, 7 Tables | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the growth of geo-tagged images, recent web and mobile applications
provide search capabilities for images that are similar to a given query image
and simultaneously within a given geographical area. In this paper, we focus on
designing index structures to expedite these spatial-visual searches. We start
by baseline indexes that are straightforward extensions of the current popular
spatial (R*-tree) and visual (LSH) index structures. Subsequently, we propose
hybrid index structures that evaluate both spatial and visual features in
tandem. The unique challenge of this type of query is that there are
inaccuracies in both spatial and visual features. Therefore, different
traversals of the index structures may produce different images as output, some
of which are more relevant to the query than others. We compare our hybrid
structures with a set of baseline indexes in both performance and result
accuracy using three real world datasets from Flickr, Google Street View, and
GeoUGV.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 01:16:25 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Alfarrarjeh",
"Abdullah",
""
],
[
"Shahabi",
"Cyrus",
""
]
] | TITLE: Hybrid Indexes to Expedite Spatial-Visual Search
ABSTRACT: Due to the growth of geo-tagged images, recent web and mobile applications
provide search capabilities for images that are similar to a given query image
and simultaneously within a given geographical area. In this paper, we focus on
designing index structures to expedite these spatial-visual searches. We start
with baseline indexes that are straightforward extensions of the current popular
spatial (R*-tree) and visual (LSH) index structures. Subsequently, we propose
hybrid index structures that evaluate both spatial and visual features in
tandem. The unique challenge of this type of query is that there are
inaccuracies in both spatial and visual features. Therefore, different
traversals of the index structures may produce different images as output, some
of which are more relevant to the query than others. We compare our hybrid
structures with a set of baseline indexes in both performance and result
accuracy using three real world datasets from Flickr, Google Street View, and
GeoUGV.
| no_new_dataset | 0.950869 |
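For contrast with the hybrid structures proposed above, the simplest "spatial-first" baseline — filter by region with an R-tree, then rank the survivors by visual distance — looks roughly like this. The sketch assumes the `rtree` package and precomputed visual feature vectors:

```python
import numpy as np
from rtree import index  # pip install rtree

def build_spatial_index(points):
    # One R-tree entry per geo-tagged image (a point stored as a degenerate box).
    idx = index.Index()
    for i, (x, y) in enumerate(points):
        idx.insert(i, (x, y, x, y))
    return idx

def spatial_first_search(idx, feats, bbox, query_feat, k=10):
    # Stage 1: spatial filter; stage 2: rank candidates by visual distance.
    candidates = list(idx.intersection(bbox))
    candidates.sort(key=lambda i: np.linalg.norm(feats[i] - query_feat))
    return candidates[:k]
```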
1702.05398 | Pradeep Dasigi | Pradeep Dasigi, Gully A.P.C. Burns, Eduard Hovy, and Anita de Waard | Experiment Segmentation in Scientific Discourse as Clause-level
Structured Prediction using Recurrent Neural Networks | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We propose a deep learning model for identifying structure within experiment
narratives in scientific literature. We take a sequence labeling approach to
this problem, and label clauses within experiment narratives to identify the
different parts of the experiment. Our dataset consists of paragraphs taken
from open access PubMed papers labeled with rhetorical information as a result
of our pilot annotation. Our model is a Recurrent Neural Network (RNN) with
Long Short-Term Memory (LSTM) cells that labels clauses. The clause
representations are computed by combining word representations using a novel
attention mechanism that involves a separate RNN. We compare this model against
LSTMs where the input layer has simple or no attention and a feature-rich CRF
model. Furthermore, we describe how our work could be useful for information
extraction from scientific literature.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 15:39:21 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Dasigi",
"Pradeep",
""
],
[
"Burns",
"Gully A. P. C.",
""
],
[
"Hovy",
"Eduard",
""
],
[
"de Waard",
"Anita",
""
]
] | TITLE: Experiment Segmentation in Scientific Discourse as Clause-level
Structured Prediction using Recurrent Neural Networks
ABSTRACT: We propose a deep learning model for identifying structure within experiment
narratives in scientific literature. We take a sequence labeling approach to
this problem, and label clauses within experiment narratives to identify the
different parts of the experiment. Our dataset consists of paragraphs taken
from open access PubMed papers labeled with rhetorical information as a result
of our pilot annotation. Our model is a Recurrent Neural Network (RNN) with
Long Short-Term Memory (LSTM) cells that labels clauses. The clause
representations are computed by combining word representations using a novel
attention mechanism that involves a separate RNN. We compare this model against
LSTMs where the input layer has simple or no attention and a feature-rich CRF
model. Furthermore, we describe how our work could be useful for information
extraction from scientific literature.
| new_dataset | 0.955981 |
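The attention-based pooling of word representations into a clause vector can be illustrated in a few lines of numpy. In the paper the attention scores come from a separate RNN; a hypothetical linear scorer `w` stands in here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def clause_vector(word_vecs, w):
    """Attention pooling: score each word, softmax the scores, and
    return the weighted average of the word vectors."""
    scores = word_vecs @ w        # one scalar score per word
    alpha = softmax(scores)       # attention weights, sum to 1
    return alpha @ word_vecs      # weighted sum of word vectors
```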
1702.05464 | Eric Tzeng | Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell | Adversarial Discriminative Domain Adaptation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial learning methods are a promising approach to training robust deep
networks, and can generate complex samples across diverse domains. They also
can improve recognition despite the presence of domain shift or dataset bias:
several adversarial approaches to unsupervised domain adaptation have recently
been introduced, which reduce the difference between the training and test
domain distributions and thus improve generalization performance. Prior
generative approaches show compelling visualizations, but are not optimal on
discriminative tasks and can be limited to smaller shifts. Prior discriminative
approaches could handle larger domain shifts, but imposed tied weights on the
model and did not exploit a GAN-based loss. We first outline a novel
generalized framework for adversarial adaptation, which subsumes recent
state-of-the-art approaches as special cases, and we use this generalized view
to better relate the prior approaches. We propose a previously unexplored
instance of our general framework which combines discriminative modeling,
untied weight sharing, and a GAN loss, which we call Adversarial Discriminative
Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably
simpler than competing domain-adversarial methods, and demonstrate the promise
of our approach by exceeding state-of-the-art unsupervised adaptation results
on standard cross-domain digit classification tasks and a new more difficult
cross-modality object classification task.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 18:10:53 GMT"
}
] | 2017-02-20T00:00:00 | [
[
"Tzeng",
"Eric",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Saenko",
"Kate",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Adversarial Discriminative Domain Adaptation
ABSTRACT: Adversarial learning methods are a promising approach to training robust deep
networks, and can generate complex samples across diverse domains. They also
can improve recognition despite the presence of domain shift or dataset bias:
several adversarial approaches to unsupervised domain adaptation have recently
been introduced, which reduce the difference between the training and test
domain distributions and thus improve generalization performance. Prior
generative approaches show compelling visualizations, but are not optimal on
discriminative tasks and can be limited to smaller shifts. Prior discriminative
approaches could handle larger domain shifts, but imposed tied weights on the
model and did not exploit a GAN-based loss. We first outline a novel
generalized framework for adversarial adaptation, which subsumes recent
state-of-the-art approaches as special cases, and we use this generalized view
to better relate the prior approaches. We propose a previously unexplored
instance of our general framework which combines discriminative modeling,
untied weight sharing, and a GAN loss, which we call Adversarial Discriminative
Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably
simpler than competing domain-adversarial methods, and demonstrate the promise
of our approach by exceeding state-of-the-art unsupervised adaptation results
on standard cross-domain digit classification tasks and a new more difficult
cross-modality object classification task.
| no_new_dataset | 0.945147 |
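The untied-weights, GAN-loss training loop that characterizes ADDA alternates two updates. A condensed PyTorch sketch under the usual simplifying assumptions (a frozen source encoder, a separate target encoder, and a binary domain discriminator):

```python
import torch
import torch.nn.functional as F

def adda_step(src_enc, tgt_enc, disc, xs, xt, opt_disc, opt_tgt):
    """One adversarial adaptation step in the spirit of ADDA: the
    discriminator learns to tell (frozen) source features from target
    features, then the untied target encoder is updated with a GAN loss
    using inverted labels so that its features fool the discriminator."""
    with torch.no_grad():
        fs = src_enc(xs)                       # source encoder stays fixed
    ft = tgt_enc(xt)

    ones = torch.ones(xs.size(0), 1)
    zeros = torch.zeros(xt.size(0), 1)

    # 1) Discriminator update: source -> 1, target -> 0.
    d_loss = F.binary_cross_entropy_with_logits(disc(fs), ones) + \
             F.binary_cross_entropy_with_logits(disc(ft.detach()), zeros)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Target-encoder update: make the discriminator say "source".
    t_loss = F.binary_cross_entropy_with_logits(
        disc(tgt_enc(xt)), torch.ones(xt.size(0), 1))
    opt_tgt.zero_grad(); t_loss.backward(); opt_tgt.step()
    return d_loss.item(), t_loss.item()
```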
1609.08017 | Xuezhe Ma | Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, Eduard
Hovy | Dropout with Expectation-linear Regularization | Published as a conference paper at ICLR 2017. Camera-ready Version.
23 pages (paper + appendix) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dropout, a simple and effective way to train deep neural networks, has led to
a number of impressive empirical successes and spawned many recent theoretical
investigations. However, the gap between dropout's training and inference
phases, introduced due to tractability considerations, has largely remained
under-appreciated. In this work, we first formulate dropout as a tractable
approximation of some latent variable model, leading to a clean view of
parameter sharing and enabling further theoretical analysis. Then, we introduce
(approximate) expectation-linear dropout neural networks, whose inference gap
we are able to formally characterize. Algorithmically, we show that our
proposed measure of the inference gap can be used to regularize the standard
dropout training objective, resulting in an \emph{explicit} control of the gap.
Our method is as simple and efficient as standard dropout. We further prove the
upper bounds on the loss in accuracy due to expectation-linearization, and describe
classes of input distributions that expectation-linearize easily. Experiments
on three image classification benchmark datasets demonstrate that reducing the
inference gap can indeed improve the performance consistently.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2016 15:14:05 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Oct 2016 18:04:11 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Feb 2017 19:40:29 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Ma",
"Xuezhe",
""
],
[
"Gao",
"Yingkai",
""
],
[
"Hu",
"Zhiting",
""
],
[
"Yu",
"Yaoliang",
""
],
[
"Deng",
"Yuntian",
""
],
[
"Hovy",
"Eduard",
""
]
] | TITLE: Dropout with Expectation-linear Regularization
ABSTRACT: Dropout, a simple and effective way to train deep neural networks, has led to
a number of impressive empirical successes and spawned many recent theoretical
investigations. However, the gap between dropout's training and inference
phases, introduced due to tractability considerations, has largely remained
under-appreciated. In this work, we first formulate dropout as a tractable
approximation of some latent variable model, leading to a clean view of
parameter sharing and enabling further theoretical analysis. Then, we introduce
(approximate) expectation-linear dropout neural networks, whose inference gap
we are able to formally characterize. Algorithmically, we show that our
proposed measure of the inference gap can be used to regularize the standard
dropout training objective, resulting in an \emph{explicit} control of the gap.
Our method is as simple and efficient as standard dropout. We further prove the
upper bounds on the loss in accuracy due to expectation-linearization, and describe
classes of input distributions that expectation-linearize easily. Experiments
on three image classification benchmark datasets demonstrate that reducing the
inference gap can indeed improve the performance consistently.
| no_new_dataset | 0.944228 |
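The quantity being regularized — the gap between the expected dropout output and the deterministic (weight-scaled) output — can be estimated by Monte Carlo. A simplified PyTorch sketch, treating the deterministic pass as a constant target and meant to be added to the usual training loss with some weight:

```python
import torch

def inference_gap_penalty(model, x, n_samples=4):
    """Monte-Carlo sketch of the dropout inference gap: the squared
    distance between the average of sampled dropout forward passes and
    the deterministic (dropout-off) pass. A simplification of the
    paper's regularizer: the deterministic output is held constant."""
    model.eval()                              # dropout off: scaled weights
    with torch.no_grad():
        y_det = model(x)
    model.train()                             # dropout on: sampled masks
    y_mc = torch.stack([model(x) for _ in range(n_samples)]).mean(dim=0)
    return ((y_mc - y_det) ** 2).mean()
```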
1702.04254 | Gali Noti | Noam Nisan and Gali Noti | A "Quantal Regret" Method for Structural Econometrics in Repeated Games | null | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a general method for inferring players' values from their actions
in repeated games. The method extends and improves upon the recent suggestion
of (Nekipelov et al., EC 2015) and is based on the assumption that players are
more likely to exhibit sequences of actions that have lower regret.
We evaluate this "quantal regret" method on two different datasets from
experiments of repeated games with controlled player values: those of (Selten
and Chmura, AER 2008) on a variety of two-player 2x2 games and our own
experiment on ad-auctions (Noti et al., WWW 2014). We find that the quantal
regret method is consistently and significantly more precise than either
"classic" econometric methods that are based on Nash equilibria, or the
"min-regret" method of (Nekipelov et al., EC 2015).
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2017 15:10:35 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2017 17:04:36 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Nisan",
"Noam",
""
],
[
"Noti",
"Gali",
""
]
] | TITLE: A "Quantal Regret" Method for Structural Econometrics in Repeated Games
ABSTRACT: We suggest a general method for inferring players' values from their actions
in repeated games. The method extends and improves upon the recent suggestion
of (Nekipelov et al., EC 2015) and is based on the assumption that players are
more likely to exhibit sequences of actions that have lower regret.
We evaluate this "quantal regret" method on two different datasets from
experiments of repeated games with controlled player values: those of (Selten
and Chmura, AER 2008) on a variety of two-player 2x2 games and our own
experiment on ad-auctions (Noti et al., WWW 2014). We find that the quantal
regret method is consistently and significantly more precise than either
"classic" econometric methods that are based on Nash equilibria, or the
"min-regret" method of (Nekipelov et al., EC 2015).
| no_new_dataset | 0.943086 |
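The core of the quantal regret method is a likelihood over candidate values that decays exponentially with the regret of the observed play. A minimal sketch, assuming a hypothetical `regret_fn` oracle computed from the observed action sequence:

```python
import numpy as np

def quantal_regret_estimate(candidate_values, regret_fn, lam=1.0):
    """Infer the most likely player value: score each candidate v with a
    likelihood proportional to exp(-lam * regret), where regret_fn(v) is
    a hypothetical oracle giving the regret the observed action sequence
    would have had if the player's true value were v."""
    regrets = np.array([regret_fn(v) for v in candidate_values])
    likelihoods = np.exp(-lam * regrets)      # lower regret -> more likely
    return candidate_values[int(np.argmax(likelihoods))]
```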
1702.04479 | Sungeun Hong | Sungeun Hong, Jongbin Ryu, Woobin Im, Hyun S. Yang | Recognizing Dynamic Scenes with Deep Dual Descriptor based on Key Frames
and Key Segments | 10 pages, 7 figures, 8 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recognizing dynamic scenes is one of the fundamental problems in scene
understanding, which categorizes moving scenes such as a forest fire,
landslide, or avalanche. While existing methods focus on reliable capturing of
static and dynamic information, few works have explored frame selection from a
dynamic scene sequence. In this paper, we propose dynamic scene recognition
using a deep dual descriptor based on `key frames' and `key segments.' Key
frames that reflect the feature distribution of the sequence with a small
number are used for capturing salient static appearances. Key segments, which
are captured from the area around each key frame, provide an additional
discriminative power by dynamic patterns within short time intervals. To this
end, two types of transferred convolutional neural network features are used in
our approach. A fully connected layer is used to select the key frames and key
segments, while the convolutional layer is used to describe them. We conducted
experiments using public datasets as well as a new dataset comprised of 23
dynamic scene classes with 10 videos per class. The evaluation results
demonstrated the state-of-the-art performance of the proposed method.
| [
{
"version": "v1",
"created": "Wed, 15 Feb 2017 06:59:01 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2017 07:14:19 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Hong",
"Sungeun",
""
],
[
"Ryu",
"Jongbin",
""
],
[
"Im",
"Woobin",
""
],
[
"Yang",
"Hyun S.",
""
]
] | TITLE: Recognizing Dynamic Scenes with Deep Dual Descriptor based on Key Frames
and Key Segments
ABSTRACT: Recognizing dynamic scenes is one of the fundamental problems in scene
understanding, which categorizes moving scenes such as a forest fire,
landslide, or avalanche. While existing methods focus on reliable capturing of
static and dynamic information, few works have explored frame selection from a
dynamic scene sequence. In this paper, we propose dynamic scene recognition
using a deep dual descriptor based on `key frames' and `key segments.' Key
frames that reflect the feature distribution of the sequence with a small
number are used for capturing salient static appearances. Key segments, which
are captured from the area around each key frame, provide an additional
discriminative power by dynamic patterns within short time intervals. To this
end, two types of transferred convolutional neural network features are used in
our approach. A fully connected layer is used to select the key frames and key
segments, while the convolutional layer is used to describe them. We conducted
experiments using public datasets as well as a new dataset comprised of 23
dynamic scene classes with 10 videos per class. The evaluation results
demonstrated the state-of-the-art performance of the proposed method.
| new_dataset | 0.957873 |
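Key-frame selection in the paper is learned with a fully connected layer; a generic clustering-based stand-in that likewise picks frames reflecting the sequence's feature distribution can be sketched with scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(frame_feats, n_key=5):
    """Pick a small set of frames that reflects the feature distribution
    of the whole sequence: cluster the per-frame features and keep, for
    each cluster, the frame closest to its centroid."""
    km = KMeans(n_clusters=n_key, n_init=10).fit(frame_feats)
    keys = []
    for c in km.cluster_centers_:
        keys.append(int(np.argmin(np.linalg.norm(frame_feats - c, axis=1))))
    return sorted(set(keys))
```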
1702.04869 | Sergi Valverde | Sergi Valverde, Mariano Cabezas, Eloy Roura, Sandra
Gonz\'alez-Vill\`a, Deborah Pareto, Joan-Carles Vilanova, LLu\'is
Rami\'o-Torrent\`a, \`Alex Rovira, Arnau Oliver and Xavier Llad\'o | Improving automated multiple sclerosis lesion segmentation with a
cascaded 3D convolutional neural network approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel automated method for White Matter (WM)
lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is
based on a cascade of two 3D patch-wise convolutional neural networks (CNN).
The first network is trained to be more sensitive, revealing possible candidate
lesion voxels while the second network is trained to reduce the number of
misclassified voxels coming from the first network. This cascaded CNN
architecture tends to learn well from small sets of training data, which can be
very interesting in practice, given the difficulty to obtain manual label
annotations and the large amount of available unlabeled Magnetic Resonance
Imaging (MRI) data. We evaluate the accuracy of the proposed method on the
public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with
respect to other state-of-the-art MS lesion segmentation tools. Furthermore,
the proposed method is also evaluated on two private MS clinical datasets,
where the performance of our method is also compared with different recent
publicly available state-of-the-art MS lesion segmentation methods. At the time
of writing this paper, our method is the best ranked approach on the MICCAI2008
challenge, outperforming the rest of the 60 participant methods when using all the
available input modalities (T1-w, T2-w and FLAIR), while still in the top rank
(3rd position) when using only T1-w and FLAIR modalities. On clinical MS data,
our approach exhibits a significant increase in the accuracy of segmenting WM
lesions when compared with the rest of the evaluated methods, highly correlating
($r \ge 0.97$) also with the expected lesion volume.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 06:23:14 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Valverde",
"Sergi",
""
],
[
"Cabezas",
"Mariano",
""
],
[
"Roura",
"Eloy",
""
],
[
"González-Villà",
"Sandra",
""
],
[
"Pareto",
"Deborah",
""
],
[
"Vilanova",
"Joan-Carles",
""
],
[
"Ramió-Torrentà",
"LLuís",
""
],
[
"Rovira",
"Àlex",
""
],
[
"Oliver",
"Arnau",
""
],
[
"Lladó",
"Xavier",
""
]
] | TITLE: Improving automated multiple sclerosis lesion segmentation with a
cascaded 3D convolutional neural network approach
ABSTRACT: In this paper, we present a novel automated method for White Matter (WM)
lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is
based on a cascade of two 3D patch-wise convolutional neural networks (CNN).
The first network is trained to be more sensitive, revealing possible candidate
lesion voxels while the second network is trained to reduce the number of
misclassified voxels coming from the first network. This cascaded CNN
architecture tends to learn well from small sets of training data, which can be
very interesting in practice, given the difficulty to obtain manual label
annotations and the large amount of available unlabeled Magnetic Resonance
Imaging (MRI) data. We evaluate the accuracy of the proposed method on the
public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with
respect to other state-of-the-art MS lesion segmentation tools. Furthermore,
the proposed method is also evaluated on two private MS clinical datasets,
where the performance of our method is also compared with different recent
publicly available state-of-the-art MS lesion segmentation methods. At the time
of writing this paper, our method is the best ranked approach on the MICCAI2008
challenge, outperforming the rest of the 60 participant methods when using all the
available input modalities (T1-w, T2-w and FLAIR), while still in the top rank
(3rd position) when using only T1-w and FLAIR modalities. On clinical MS data,
our approach exhibits a significant increase in the accuracy of segmenting WM
lesions when compared with the rest of the evaluated methods, highly correlating
($r \ge 0.97$) also with the expected lesion volume.
| no_new_dataset | 0.953101 |
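At inference time the cascade amounts to a two-stage filter: keep the first network's high-sensitivity proposals, then let the second network prune false positives among them. A schematic numpy sketch with illustrative thresholds, assuming both probability maps are available:

```python
import numpy as np

def cascade_segment(prob1, prob2, t1=0.5, t2=0.8):
    """Cascaded inference: keep the sensitive first network's proposals
    (prob1 > t1) and prune them with the second network's probabilities,
    which in practice are computed only on patches around candidates."""
    candidates = prob1 > t1        # stage 1: high-sensitivity proposals
    return candidates & (prob2 > t2)  # stage 2: reject false positives
```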
1702.04943 | Pavlos Sermpezis | Pavlos Sermpezis, Thrasyvoulos Spyropoulos, Luigi Vigneri, Theodoros
Giannakas | Femto-Caching with Soft Cache Hits: Improving Performance through
Recommendation and Delivery of Related Content | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pushing popular content to cheap "helper" nodes (e.g., small cells) during
off-peak hours has recently been proposed to cope with the increase in mobile
data traffic. User requests can be served locally from these helper nodes, if
the requested content is available in at least one of the nearby helpers.
Nevertheless, the collective storage of a few nearby helper nodes does not
usually suffice to achieve a high enough hit rate in practice. We propose to
depart from the assumption of hard cache hits, common in existing works, and
consider "soft" cache hits, where if the original content is not available,
some related contents that are locally cached can be recommended instead. Given
that Internet content consumption is entertainment-oriented, we argue that
there exist scenarios where a user might accept an alternative content (e.g.,
better download rate for alternative content, low rate plans, etc.), thus
avoiding access to expensive/congested links. We formulate the problem of
optimal edge caching with soft cache hits in a relatively generic setup,
propose efficient algorithms, and analyze the expected gains. We then show
using synthetic and real datasets of related video contents that promising
caching gains could be achieved in practice.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 12:41:25 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Sermpezis",
"Pavlos",
""
],
[
"Spyropoulos",
"Thrasyvoulos",
""
],
[
"Vigneri",
"Luigi",
""
],
[
"Giannakas",
"Theodoros",
""
]
] | TITLE: Femto-Caching with Soft Cache Hits: Improving Performance through
Recommendation and Delivery of Related Content
ABSTRACT: Pushing popular content to cheap "helper" nodes (e.g., small cells) during
off-peak hours has recently been proposed to cope with the increase in mobile
data traffic. User requests can be served locally from these helper nodes, if
the requested content is available in at least one of the nearby helpers.
Nevertheless, the collective storage of a few nearby helper nodes does not
usually suffice to achieve a high enough hit rate in practice. We propose to
depart from the assumption of hard cache hits, common in existing works, and
consider "soft" cache hits, where if the original content is not available,
some related contents that are locally cached can be recommended instead. Given
that Internet content consumption is entertainment-oriented, we argue that
there exist scenarios where a user might accept an alternative content (e.g.,
better download rate for alternative content, low rate plans, etc.), thus
avoiding access to expensive/congested links. We formulate the problem of
optimal edge caching with soft cache hits in a relatively generic setup,
propose efficient algorithms, and analyze the expected gains. We then show
using synthetic and real datasets of related video contents that promising
caching gains could be achieved in practice.
| no_new_dataset | 0.939304 |
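A natural greedy heuristic for the cache-placement problem with soft hits credits each cached item both with its own requests and with requests it can absorb through recommendation. A simplified sketch (each request is credited to a single cached item, and overlap between alternatives already in the cache is ignored):

```python
def greedy_soft_cache(contents, capacity, popularity, accept_prob):
    """Greedy cache placement with soft cache hits: the marginal gain of
    caching item c counts requests for c itself plus requests for items i
    whose users would accept c instead, weighted by the hypothetical
    acceptance probability accept_prob[i][c]."""
    cache = set()
    for _ in range(min(capacity, len(contents))):
        def gain(c):
            direct = popularity[c]
            soft = sum(popularity[i] * accept_prob.get(i, {}).get(c, 0.0)
                       for i in contents if i != c)
            return direct + soft
        best = max((c for c in contents if c not in cache), key=gain)
        cache.add(best)
    return cache
```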
1702.04946 | Shenghui Wang | Shenghui Wang, Rob Koopman | Clustering articles based on semantic similarity | Special Issue of Scientometrics: Same data - different results?
Towards a comparative approach to the identification of thematic structures
in science | null | 10.1007/s11192-017-2298-x | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document clustering is generally the first step for topic identification.
Since many clustering methods operate on the similarities between documents, it
is important to build representations of these documents which keep their
semantics as much as possible and are also suitable for efficient similarity
calculation. The metadata of articles in the Astro dataset contribute to a
semantic matrix, which uses a vector space to capture the semantics of entities
derived from these articles and consequently supports the contextual
exploration of these entities in LittleAriadne. However, this semantic matrix
does not allow to calculate similarities between articles directly. In this
paper, we will describe in detail how we build a semantic representation for an
article from the entities that are associated with it. Based on such semantic
representations of articles, we apply two standard clustering methods, K-Means
and the Louvain community detection algorithm, which leads to our two
clustering solutions labelled as OCLC-31 (standing for K-Means) and
OCLC-Louvain (standing for Louvain). In this paper, we will give the
implementation details and a basic comparison with other clustering solutions
that are reported in this special issue.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 12:48:54 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Wang",
"Shenghui",
""
],
[
"Koopman",
"Rob",
""
]
] | TITLE: Clustering articles based on semantic similarity
ABSTRACT: Document clustering is generally the first step for topic identification.
Since many clustering methods operate on the similarities between documents, it
is important to build representations of these documents which keep their
semantics as much as possible and are also suitable for efficient similarity
calculation. The metadata of articles in the Astro dataset contribute to a
semantic matrix, which uses a vector space to capture the semantics of entities
derived from these articles and consequently supports the contextual
exploration of these entities in LittleAriadne. However, this semantic matrix
does not allow to calculate similarities between articles directly. In this
paper, we will describe in detail how we build a semantic representation for an
article from the entities that are associated with it. Based on such semantic
representations of articles, we apply two standard clustering methods, K-Means
and the Louvain community detection algorithm, which leads to our two
clustering solutions labelled as OCLC-31 (standing for K-Means) and
OCLC-Louvain (standing for Louvain). In this paper, we will give the
implementation details and a basic comparison with other clustering solutions
that are reported in this special issue.
| no_new_dataset | 0.950365 |
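Once article vectors are available, the two clustering solutions can be reproduced generically: K-Means directly on the vectors, and Louvain on a thresholded similarity graph. A sketch assuming row-normalized vectors and the `python-louvain` package (the threshold and k = 31 are illustrative):

```python
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans
import community as community_louvain  # pip install python-louvain

def cluster_articles(vectors, k=31, sim_threshold=0.5):
    # Solution 1: K-Means directly on the article vectors.
    kmeans_labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)

    # Solution 2: Louvain community detection on a similarity graph.
    sims = vectors @ vectors.T        # cosine similarities for unit-norm rows
    g = nx.Graph()
    g.add_nodes_from(range(len(vectors)))
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            if sims[i, j] > sim_threshold:
                g.add_edge(i, j, weight=float(sims[i, j]))
    louvain_labels = community_louvain.best_partition(g)  # node -> community
    return kmeans_labels, louvain_labels
```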
1702.04996 | Kiran Garimella | Hieu Nguyen, Kiran Garimella | Understanding International Migration using Tensor Factorization | Accepted as poster at WWW 2017, Perth | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding human migration is of great interest to demographers and social
scientists. User-generated digital data has made it easier to study such
patterns at a global scale. Geo-coded Twitter data, in particular, has been
shown to be a promising source to analyse large scale human migration. But
given the scale of these datasets, a lot of manual effort has to be put into
processing and getting actionable insights from this data.
In this paper, we explore the feasibility of using a new tool, tensor
decomposition, to understand trends in global human migration. We model human
migration as a three-mode tensor, consisting of (origin country, destination
country, time of migration) and apply CP decomposition to get meaningful low
dimensional factors. Our experiments on a large Twitter dataset spanning 5
years and over 100M tweets show that we can extract meaningful migration
patterns.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 14:54:44 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Nguyen",
"Hieu",
""
],
[
"Garimella",
"Kiran",
""
]
] | TITLE: Understanding International Migration using Tensor Factorization
ABSTRACT: Understanding human migration is of great interest to demographers and social
scientists. User-generated digital data has made it easier to study such
patterns at a global scale. Geo-coded Twitter data, in particular, has been
shown to be a promising source to analyse large scale human migration. But
given the scale of these datasets, a lot of manual effort has to be put into
processing and getting actionable insights from this data.
In this paper, we explore the feasibility of using a new tool, tensor
decomposition, to understand trends in global human migration. We model human
migration as a three-mode tensor, consisting of (origin country, destination
country, time of migration) and apply CP decomposition to get meaningful low
dimensional factors. Our experiments on a large Twitter dataset spanning 5
years and over 100M tweets show that we can extract meaningful migration
patterns.
| no_new_dataset | 0.943504 |
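Assuming the migration counts are arranged as an (origin, destination, time) array, the CP decomposition itself is a few lines with tensorly; the tensor below is synthetic stand-in data:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac  # pip install tensorly

# Hypothetical migration tensor: counts[origin, destination, month].
counts = np.random.poisson(2.0, size=(50, 50, 60)).astype(float)

# Rank-5 CP decomposition: each component pairs an origin pattern with a
# destination pattern and a temporal profile.
weights, factors = parafac(tl.tensor(counts), rank=5)
origins, destinations, time_profiles = factors
print(origins.shape, destinations.shape, time_profiles.shape)
# (50, 5) (50, 5) (60, 5): column r of each factor describes component r.
```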
1702.05085 | Amit Kumar | Amit Kumar, Azadeh Alavi and Rama Chellappa | KEPLER: Keypoint and Pose Estimation of Unconstrained Faces by Learning
Efficient H-CNN Regressors | Accept as Oral FG'17 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keypoint detection is one of the most important pre-processing steps in tasks
such as face modeling, recognition and verification. In this paper, we present
an iterative method for Keypoint Estimation and Pose prediction of
unconstrained faces by Learning Efficient H-CNN Regressors (KEPLER) for
addressing the face alignment problem. Recent state of the art methods have
shown improvements in face keypoint detection by employing Convolutional Neural
Networks (CNNs). Although a simple feed forward neural network can learn the
mapping between input and output spaces, it cannot learn the inherent
structural dependencies. We present a novel architecture called H-CNN
(Heatmap-CNN) which captures structured global and local features and thus
favors accurate keypoint detection. HCNN is jointly trained on the visibility,
fiducials and 3D-pose of the face. As the iterations proceed, the error
decreases, making the gradients small and thus requiring efficient training of
DCNNs to mitigate this. KEPLER performs global corrections in pose and
fiducials for the first four iterations followed by local corrections in the
subsequent stage. As a by-product, KEPLER also provides 3D pose (pitch, yaw and
roll) of the face accurately. In this paper, we show that without using any 3D
information, KEPLER outperforms state of the art methods for alignment on
challenging datasets such as AFW and AFLW.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 18:44:59 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Kumar",
"Amit",
""
],
[
"Alavi",
"Azadeh",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: KEPLER: Keypoint and Pose Estimation of Unconstrained Faces by Learning
Efficient H-CNN Regressors
ABSTRACT: Keypoint detection is one of the most important pre-processing steps in tasks
such as face modeling, recognition and verification. In this paper, we present
an iterative method for Keypoint Estimation and Pose prediction of
unconstrained faces by Learning Efficient H-CNN Regressors (KEPLER) for
addressing the face alignment problem. Recent state of the art methods have
shown improvements in face keypoint detection by employing Convolutional Neural
Networks (CNNs). Although a simple feed forward neural network can learn the
mapping between input and output spaces, it cannot learn the inherent
structural dependencies. We present a novel architecture called H-CNN
(Heatmap-CNN) which captures structured global and local features and thus
favors accurate keypoint detection. HCNN is jointly trained on the visibility,
fiducials and 3D-pose of the face. As the iterations proceed, the error
decreases, making the gradients small and thus requiring efficient training of
DCNNs to mitigate this. KEPLER performs global corrections in pose and
fiducials for the first four iterations followed by local corrections in the
subsequent stage. As a by-product, KEPLER also provides 3D pose (pitch, yaw and
roll) of the face accurately. In this paper, we show that without using any 3D
information, KEPLER outperforms state of the art methods for alignment on
challenging datasets such as AFW and AFLW.
| no_new_dataset | 0.947332 |
1702.05089 | Dena Bazazian | Dena Bazazian, Raul Gomez, Anguelos Nicolaou, Lluis Gomez, Dimosthenis
Karatzas, Andrew D. Bagdanov | Improving Text Proposals for Scene Images with Fully Convolutional
Networks | 6 pages, 8 figures, International Conference on Pattern Recognition
(ICPR) - DLPR (Deep Learning for Pattern Recognition) workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text Proposals have emerged as a class-dependent version of object proposals
- efficient approaches to reduce the search space of possible text object
locations in an image. Combined with strong word classifiers, text proposals
currently yield top state of the art results in end-to-end scene text
recognition. In this paper we propose an improvement over the original Text
Proposals algorithm of Gomez and Karatzas (2016), combining it with Fully
Convolutional Networks to improve the ranking of proposals. Results on the
ICDAR RRC and the COCO-text datasets show superior performance over current
state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 18:56:53 GMT"
}
] | 2017-02-17T00:00:00 | [
[
"Bazazian",
"Dena",
""
],
[
"Gomez",
"Raul",
""
],
[
"Nicolaou",
"Anguelos",
""
],
[
"Gomez",
"Lluis",
""
],
[
"Karatzas",
"Dimosthenis",
""
],
[
"Bagdanov",
"Andrew D.",
""
]
] | TITLE: Improving Text Proposals for Scene Images with Fully Convolutional
Networks
ABSTRACT: Text Proposals have emerged as a class-dependent version of object proposals
- efficient approaches to reduce the search space of possible text object
locations in an image. Combined with strong word classifiers, text proposals
currently yield top state of the art results in end-to-end scene text
recognition. In this paper we propose an improvement over the original Text
Proposals algorithm of Gomez and Karatzas (2016), combining it with Fully
Convolutional Networks to improve the ranking of proposals. Results on the
ICDAR RRC and the COCO-text datasets show superior performance over current
state-of-the-art.
| no_new_dataset | 0.955899 |
1602.08425 | Florian Bernard | Florian Bernard, Luis Salamanca, Johan Thunberg, Alexander Tack,
Dennis Jentsch, Hans Lamecker, Stefan Zachow, Frank Hertel, Jorge Goncalves,
Peter Gemmar | Shape-aware Surface Reconstruction from Sparse 3D Point-Clouds | null | null | 10.1016/j.media.2017.02.005 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction of an object's shape or surface from a set of 3D points
plays an important role in medical image analysis, e.g. in anatomy
reconstruction from tomographic measurements or in the process of aligning
intra-operative navigation and preoperative planning data. In such scenarios,
one usually has to deal with sparse data, which significantly aggravates the
problem of reconstruction. However, medical applications often provide
contextual information about the 3D point data that allows incorporating prior
knowledge about the shape that is to be reconstructed. To this end, we propose
the use of a statistical shape model (SSM) as a prior for surface
reconstruction. The SSM is represented by a point distribution model (PDM),
which is associated with a surface mesh. Using the shape distribution that is
modelled by the PDM, we formulate the problem of surface reconstruction from a
probabilistic perspective based on a Gaussian Mixture Model (GMM). In order to
do so, the given points are interpreted as samples of the GMM. By using mixture
components with anisotropic covariances that are "oriented" according to the
surface normals at the PDM points, a surface-based fitting is accomplished.
Estimating the parameters of the GMM in a maximum a posteriori manner yields
the reconstruction of the surface from the given data points. We compare our
method to the extensively used Iterative Closest Points method on several
different anatomical datasets/SSMs (brain, femur, tibia, hip, liver) and
demonstrate superior accuracy and robustness on sparse data.
| [
{
"version": "v1",
"created": "Fri, 26 Feb 2016 18:30:07 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2017 11:45:36 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Bernard",
"Florian",
""
],
[
"Salamanca",
"Luis",
""
],
[
"Thunberg",
"Johan",
""
],
[
"Tack",
"Alexander",
""
],
[
"Jentsch",
"Dennis",
""
],
[
"Lamecker",
"Hans",
""
],
[
"Zachow",
"Stefan",
""
],
[
"Hertel",
"Frank",
""
],
[
"Goncalves",
"Jorge",
""
],
[
"Gemmar",
"Peter",
""
]
] | TITLE: Shape-aware Surface Reconstruction from Sparse 3D Point-Clouds
ABSTRACT: The reconstruction of an object's shape or surface from a set of 3D points
plays an important role in medical image analysis, e.g. in anatomy
reconstruction from tomographic measurements or in the process of aligning
intra-operative navigation and preoperative planning data. In such scenarios,
one usually has to deal with sparse data, which significantly aggravates the
problem of reconstruction. However, medical applications often provide
contextual information about the 3D point data that allows incorporating prior
knowledge about the shape that is to be reconstructed. To this end, we propose
the use of a statistical shape model (SSM) as a prior for surface
reconstruction. The SSM is represented by a point distribution model (PDM),
which is associated with a surface mesh. Using the shape distribution that is
modelled by the PDM, we formulate the problem of surface reconstruction from a
probabilistic perspective based on a Gaussian Mixture Model (GMM). In order to
do so, the given points are interpreted as samples of the GMM. By using mixture
components with anisotropic covariances that are "oriented" according to the
surface normals at the PDM points, a surface-based fitting is accomplished.
Estimating the parameters of the GMM in a maximum a posteriori manner yields
the reconstruction of the surface from the given data points. We compare our
method to the extensively used Iterative Closest Points method on several
different anatomical datasets/SSMs (brain, femur, tibia, hip, liver) and
demonstrate superior accuracy and robustness on sparse data.
| no_new_dataset | 0.950088 |
1606.09282 | Zhizhong Li | Zhizhong Li, Derek Hoiem | Learning without Forgetting | Conference version appears in ECCV 2016; updated with journal version | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When building a unified vision system or gradually adding new capabilities to
a system, the usual assumption is that training data for all tasks is always
available. However, as the number of tasks grows, storing and retraining on
such data becomes infeasible. A new problem arises where we add new
capabilities to a Convolutional Neural Network (CNN), but the training data for
its existing capabilities are unavailable. We propose our Learning without
Forgetting method, which uses only new task data to train the network while
preserving the original capabilities. Our method performs favorably compared to
commonly used feature extraction and fine-tuning adaptation techniques and
performs similarly to multitask learning that uses original task data we assume
unavailable. A more surprising observation is that Learning without Forgetting
may be able to replace fine-tuning with similar old and new task datasets for
improved new task performance.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 20:54:04 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Aug 2016 22:12:43 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Feb 2017 22:32:30 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Li",
"Zhizhong",
""
],
[
"Hoiem",
"Derek",
""
]
] | TITLE: Learning without Forgetting
ABSTRACT: When building a unified vision system or gradually adding new capabilities to
a system, the usual assumption is that training data for all tasks is always
available. However, as the number of tasks grows, storing and retraining on
such data becomes infeasible. A new problem arises where we add new
capabilities to a Convolutional Neural Network (CNN), but the training data for
its existing capabilities are unavailable. We propose our Learning without
Forgetting method, which uses only new task data to train the network while
preserving the original capabilities. Our method performs favorably compared to
commonly used feature extraction and fine-tuning adaptation techniques and
performs similarly to multitask learning that uses original task data we assume
unavailable. A more surprising observation is that Learning without Forgetting
may be able to replace fine-tuning with similar old and new task datasets for
improved new task performance.
| no_new_dataset | 0.94545 |
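The Learning without Forgetting objective combines a standard new-task loss with a distillation term that anchors the old-task outputs to responses recorded before the new data is seen. A PyTorch sketch using the common temperature-scaled KL formulation of the distillation term:

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, new_labels, old_logits, old_logits_recorded,
             T=2.0, lam=1.0):
    """Learning-without-Forgetting objective: cross-entropy on the new
    task plus a distillation term keeping the old-task outputs close to
    the responses recorded before training on the new data."""
    ce = F.cross_entropy(new_logits, new_labels)
    distill = F.kl_div(F.log_softmax(old_logits / T, dim=1),
                       F.softmax(old_logits_recorded / T, dim=1),
                       reduction="batchmean") * (T * T)
    return ce + lam * distill
```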
1611.01578 | Quoc Le | Barret Zoph and Quoc V. Le | Neural Architecture Search with Reinforcement Learning | null | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214.
| [
{
"version": "v1",
"created": "Sat, 5 Nov 2016 00:41:37 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2017 05:28:05 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Zoph",
"Barret",
""
],
[
"Le",
"Quoc V.",
""
]
] | TITLE: Neural Architecture Search with Reinforcement Learning
ABSTRACT: Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214.
| no_new_dataset | 0.949623 |
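The reinforcement-learning outer loop of the search is conceptually compact: sample an architecture description, measure validation accuracy, and apply a REINFORCE update. A schematic PyTorch sketch in which `controller.sample()` and `evaluate` are hypothetical stand-ins for the token-by-token sampler and the child-model training run:

```python
import torch

def reinforce_step(controller, optimizer, evaluate, baseline):
    """One policy-gradient step of architecture search: sample an
    architecture from the controller RNN, get its validation accuracy
    as the reward, and scale the sampled log-probs by the advantage."""
    tokens, log_probs = controller.sample()   # sampled tokens + log-probs
    reward = evaluate(tokens)                 # accuracy of the child model
    loss = -(reward - baseline) * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```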
1701.04516 | Chandan Gautam | Chandan Gautam, Aruna Tiwari and Qian Leng | On The Construction of Extreme Learning Machine for Online and Offline
One-Class Classification - An Expanded Toolbox | This paper has been accepted in Neurocomputing Journal (Elsevier)
with Manuscript id: NEUCOM-D-15-02856 | null | 10.1016/j.neucom.2016.04.070 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One-Class Classification (OCC) has been a prime concern for researchers and
has been effectively employed in various disciplines. However, one-class
classifiers based on traditional methods are very time consuming due to their
iterative processes and the tuning of various parameters. In this paper, we present six OCC methods based on
extreme learning machine (ELM) and Online Sequential ELM (OSELM). Our proposed
classifiers fall mainly into two categories: reconstruction-based and
boundary-based, which support both types of learning, viz. online and offline
learning. Out of the various proposed methods, four are offline and the
remaining two are online methods. Out of the four offline methods, two perform
random feature mapping and two perform kernel feature mapping. The kernel
feature mapping based approaches have been tested with the RBF kernel, and the
online versions of the one-class classifiers are tested with both types of
nodes, viz. additive and RBF. It is a well-known fact that the threshold
decision is a crucial factor in OCC, so three different threshold-deciding
criteria have been employed, and the effectiveness of one criterion over
another is analysed. Further, these methods are tested on two artificial
datasets to check their boundary construction capability and on eight benchmark
datasets from different disciplines to evaluate the performance of the
classifiers. Our proposed classifiers exhibit better performance compared to
ten traditional one-class classifiers and two ELM-based one-class classifiers.
Through the proposed one-class classifiers, we intend to expand the
functionality of the most widely used toolbox for OCC, i.e., the DD toolbox.
All of our methods are fully compatible with
all the present features of the toolbox.
| [
{
"version": "v1",
"created": "Tue, 17 Jan 2017 02:55:51 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Gautam",
"Chandan",
""
],
[
"Tiwari",
"Aruna",
""
],
[
"Leng",
"Qian",
""
]
] | TITLE: On The Construction of Extreme Learning Machine for Online and Offline
One-Class Classification - An Expanded Toolbox
ABSTRACT: One-Class Classification (OCC) has been a prime concern for researchers and
has been effectively employed in various disciplines. However, one-class
classifiers based on traditional methods are very time consuming due to their
iterative processes and the tuning of various parameters. In this paper, we present six OCC methods based on
extreme learning machine (ELM) and Online Sequential ELM (OSELM). Our proposed
classifiers fall mainly into two categories: reconstruction-based and
boundary-based, which support both types of learning, viz. online and offline
learning. Out of the various proposed methods, four are offline and the
remaining two are online methods. Out of the four offline methods, two perform
random feature mapping and two perform kernel feature mapping. The kernel
feature mapping based approaches have been tested with the RBF kernel, and the
online versions of the one-class classifiers are tested with both types of
nodes, viz. additive and RBF. It is a well-known fact that the threshold
decision is a crucial factor in OCC, so three different threshold-deciding
criteria have been employed, and the effectiveness of one criterion over
another is analysed. Further, these methods are tested on two artificial
datasets to check their boundary construction capability and on eight benchmark
datasets from different disciplines to evaluate the performance of the
classifiers. Our proposed classifiers exhibit better performance compared to
ten traditional one-class classifiers and two ELM-based one-class classifiers.
Through the proposed one-class classifiers, we intend to expand the
functionality of the most widely used toolbox for OCC, i.e., the DD toolbox.
All of our methods are fully compatible with
all the present features of the toolbox.
| no_new_dataset | 0.949295 |
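The appeal of ELM-based one-class classifiers is the closed-form training: random hidden weights, then a ridge solve. A numpy sketch of a reconstruction-based variant, where the outlier score is the reconstruction error and a percentile of the training scores would serve as the acceptance threshold:

```python
import numpy as np

def train_occ_elm(X, n_hidden=100, reg=1e-3, seed=0):
    """Reconstruction-based one-class ELM: random input weights and a
    closed-form ridge solution mapping the hidden layer back to the
    inputs themselves -- no iterative parameter tuning."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # random (additive) feature mapping
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return W, b, beta

def outlier_scores(X, W, b, beta):
    # Reconstruction error as the outlier score.
    H = np.tanh(X @ W + b)
    return np.linalg.norm(H @ beta - X, axis=1)
```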
1702.04377 | Amir Ghaderi | Ali Sharifara, Mohd Shafry Mohd Rahim, Farhad Navabifar, Dylan Ebert,
Amir Ghaderi, Michalis Papakostas | Enhanced Facial Recognition Framework based on Skin Tone and False Alarm
Rejection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face detection is one of the challenging tasks in computer vision. Human face
detection plays an essential role in the first stage of face processing
applications such as face recognition, face tracking, image database
management, etc. In these applications, face objects often come from an
inconsequential part of images that contain variations, namely different
illumination, poses, and occlusion. These variations can decrease face
detection rate noticeably. Most existing face detection approaches are not
accurate, as they have not been able to resolve unstructured images due to
large appearance variations and can only detect human faces under one
particular variation. Existing frameworks of face detection need enhancements
to detect human faces under the stated variations to improve detection rate and
reduce detection time. In this study, an enhanced face detection framework is
proposed to improve detection rate based on skin color and provide a validation
process. A preliminary segmentation of the input images based on skin color can
significantly reduce the search space and accelerate the process of human face
detection. The primary detection is based on Haar-like features and the
Adaboost algorithm. A validation process is introduced to reject non-face
objects, which might occur during the face detection process. The validation
process is based on two-stage Extended Local Binary Patterns. The experimental
results on the CMU-MIT and Caltech 10000 datasets over a wide range of facial
variations in different colors, positions, scales, and lighting conditions
indicated a successful face detection rate.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2017 20:21:09 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Sharifara",
"Ali",
""
],
[
"Rahim",
"Mohd Shafry Mohd",
""
],
[
"Navabifar",
"Farhad",
""
],
[
"Ebert",
"Dylan",
""
],
[
"Ghaderi",
"Amir",
""
],
[
"Papakostas",
"Michalis",
""
]
] | TITLE: Enhanced Facial Recognition Framework based on Skin Tone and False Alarm
Rejection
ABSTRACT: Face detection is one of the challenging tasks in computer vision. Human face
detection plays an essential role in the first stage of face processing
applications such as face recognition, face tracking, image database
management, etc. In these applications, face objects often come from an
inconsequential part of images that contain variations, namely different
illumination, poses, and occlusion. These variations can decrease face
detection rate noticeably. Most existing face detection approaches are not
accurate, as they have not been able to resolve unstructured images due to
large appearance variations and can only detect human faces under one
particular variation. Existing frameworks of face detection need enhancements
to detect human faces under the stated variations to improve detection rate and
reduce detection time. In this study, an enhanced face detection framework is
proposed to improve detection rate based on skin color and provide a validation
process. A preliminary segmentation of the input images based on skin color can
significantly reduce the search space and accelerate the process of human face
detection. The primary detection is based on Haar-like features and the
Adaboost algorithm. A validation process is introduced to reject non-face
objects, which might occur during the face detection process. The validation
process is based on two-stage Extended Local Binary Patterns. The experimental
results on the CMU-MIT and Caltech 10000 datasets over a wide range of facial
variations in different colors, positions, scales, and lighting conditions
indicated a successful face detection rate.
| no_new_dataset | 0.944944 |
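A generic version of this pipeline — skin-tone pre-segmentation followed by Haar/Adaboost detection — can be put together with OpenCV. The YCrCb thresholds below are common illustrative values, not the paper's:

```python
import cv2

def detect_faces(img_bgr):
    """Skin-tone pre-segmentation followed by Haar-cascade (Adaboost)
    face detection, in the spirit of the framework described above."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough skin mask
    masked = cv2.bitwise_and(img_bgr, img_bgr, mask=skin)

    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```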
1702.04471 | Navaneeth Bodla | Navaneeth Bodla, Jingxiao Zheng, Hongyu Xu, Jun-Cheng Chen, Carlos
Castillo, Rama Chellappa | Deep Heterogeneous Feature Fusion for Template-Based Face Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although deep learning has yielded impressive performance for face
recognition, many studies have shown that different networks learn different
feature maps: while some networks are more receptive to pose and illumination,
others appear to capture more local information. Thus, in this work, we propose
a deep heterogeneous feature fusion network to exploit the complementary
information present in features generated by different deep convolutional
neural networks (DCNNs) for template-based face recognition, where a template
refers to a set of still face images or video frames from different sources
which introduces more blur, pose, illumination and other variations than
traditional face datasets. The proposed approach efficiently fuses the
discriminative information of different deep features by 1) jointly learning
the non-linear high-dimensional projection of the deep features and 2)
generating a more discriminative template representation which preserves the
inherent geometry of the deep features in the feature space. Experimental
results on the IARPA Janus Challenge Set 3 (Janus CS3) dataset demonstrate that
the proposed method can effectively improve the recognition performance. In
addition, we also present a series of covariate experiments on the face
verification task for in-depth qualitative evaluations of the proposed
approach.
| [
{
"version": "v1",
"created": "Wed, 15 Feb 2017 06:23:05 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Bodla",
"Navaneeth",
""
],
[
"Zheng",
"Jingxiao",
""
],
[
"Xu",
"Hongyu",
""
],
[
"Chen",
"Jun-Cheng",
""
],
[
"Castillo",
"Carlos",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Deep Heterogeneous Feature Fusion for Template-Based Face Recognition
ABSTRACT: Although deep learning has yielded impressive performance for face
recognition, many studies have shown that different networks learn different
feature maps: while some networks are more receptive to pose and illumination
others appear to capture more local information. Thus, in this work, we propose
a deep heterogeneous feature fusion network to exploit the complementary
information present in features generated by different deep convolutional
neural networks (DCNNs) for template-based face recognition, where a template
refers to a set of still face images or video frames from different sources
which introduces more blur, pose, illumination and other variations than
traditional face datasets. The proposed approach efficiently fuses the
discriminative information of different deep features by 1) jointly learning
the non-linear high-dimensional projection of the deep features and 2)
generating a more discriminative template representation which preserves the
inherent geometry of the deep features in the feature space. Experimental
results on the IARPA Janus Challenge Set 3 (Janus CS3) dataset demonstrate that
the proposed method can effectively improve the recognition performance. In
addition, we also present a series of covariate experiments on the face
verification task for in-depth qualitative evaluations of the proposed
approach.
| no_new_dataset | 0.94743 |
1702.04663 | Abdul Kawsar Tushar | Akm Ashiquzzaman and Abdul Kawsar Tushar | Handwritten Arabic Numeral Recognition using Deep Learning Neural
Networks | Conference Name - 2017 IEEE International Conference on Imaging,
Vision & Pattern Recognition (icIVPR17) 4 pages, 5 figures, 1 table | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Handwritten character recognition is an active area of research with
applications in numerous fields. Past and recent works in this field have
concentrated on various languages. Arabic is one language where the scope of
research is still widespread, with it being one of the most popular languages
in the world and being syntactically different from other major languages. Das
et al. \cite{DBLP:journals/corr/abs-1003-1891} have pioneered the research on
handwritten digit recognition in Arabic. In this paper, we propose a novel
algorithm based on deep learning neural networks using appropriate activation
function and regularization layer, which shows significantly improved accuracy
compared to the existing Arabic numeral recognition methods. The proposed model
gives 97.4 percent accuracy, which is the highest accuracy recorded on the
dataset used in the experiment. We also propose a modification of the method
described in \cite{DBLP:journals/corr/abs-1003-1891}, where our method scores
accuracy identical to that of \cite{DBLP:journals/corr/abs-1003-1891}, at 93.8
percent.
| [
{
"version": "v1",
"created": "Wed, 15 Feb 2017 16:06:15 GMT"
}
] | 2017-02-16T00:00:00 | [
[
"Ashiquzzaman",
"Akm",
""
],
[
"Tushar",
"Abdul Kawsar",
""
]
] | TITLE: Handwritten Arabic Numeral Recognition using Deep Learning Neural
Networks
ABSTRACT: Handwritten character recognition is an active area of research with
applications in numerous fields. Past and recent works in this field have
concentrated on various languages. Arabic is one language where the scope of
research is still widespread, with it being one of the most popular languages
in the world and being syntactically different from other major languages. Das
et al. \cite{DBLP:journals/corr/abs-1003-1891} have pioneered the research on
handwritten digit recognition in Arabic. In this paper, we propose a novel
algorithm based on deep learning neural networks using appropriate activation
function and regularization layer, which shows significantly improved accuracy
compared to the existing Arabic numeral recognition methods. The proposed model
gives 97.4 percent accuracy, which is the highest accuracy recorded on the
dataset used in the experiment. We also propose a modification of the method
described in \cite{DBLP:journals/corr/abs-1003-1891}, where our method scores
accuracy identical to that of \cite{DBLP:journals/corr/abs-1003-1891}, at 93.8
percent.
| no_new_dataset | 0.945801 |
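A sketch of the kind of model the abstract describes: a small CNN for 10-class numeral recognition with an explicit activation choice and a dropout regularization layer. The layer sizes and hyperparameters are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: small CNN with dropout regularization for 10-class numeral images.
from tensorflow.keras import layers, models

def build_numeral_cnn(input_shape=(28, 28, 1), num_classes=10):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # the regularization layer discussed above
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```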
1507.00101 | Huei-Fang Yang | Huei-Fang Yang, Kevin Lin, Chu-Song Chen | Supervised Learning of Semantics-Preserving Hash via Deep Convolutional
Neural Networks | To appear in IEEE Trans. Pattern Analysis and Machine Intelligence | null | 10.1109/TPAMI.2017.2666812 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a simple yet effective supervised deep hash approach that
constructs binary hash codes from labeled data for large-scale image search. We
assume that the semantic labels are governed by several latent attributes with
each attribute on or off, and classification relies on these attributes. Based
on this assumption, our approach, dubbed supervised semantics-preserving deep
hashing (SSDH), constructs hash functions as a latent layer in a deep network
and the binary codes are learned by minimizing an objective function defined
over classification error and other desirable hash code properties. With this
design, SSDH has a nice characteristic that classification and retrieval are
unified in a single learning model. Moreover, SSDH performs joint learning of
image representations, hash codes, and classification in a point-wise manner,
and thus is scalable to large-scale datasets. SSDH is simple and can be
realized by a slight enhancement of an existing deep architecture for
classification; yet it is effective and outperforms other hashing approaches on
several benchmarks and large datasets. Compared with state-of-the-art
approaches, SSDH achieves higher retrieval accuracy, while the classification
performance is not sacrificed.
| [
{
"version": "v1",
"created": "Wed, 1 Jul 2015 04:40:31 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2017 07:31:18 GMT"
}
] | 2017-02-15T00:00:00 | [
[
"Yang",
"Huei-Fang",
""
],
[
"Lin",
"Kevin",
""
],
[
"Chen",
"Chu-Song",
""
]
] | TITLE: Supervised Learning of Semantics-Preserving Hash via Deep Convolutional
Neural Networks
ABSTRACT: This paper presents a simple yet effective supervised deep hash approach that
constructs binary hash codes from labeled data for large-scale image search. We
assume that the semantic labels are governed by several latent attributes with
each attribute on or off, and classification relies on these attributes. Based
on this assumption, our approach, dubbed supervised semantics-preserving deep
hashing (SSDH), constructs hash functions as a latent layer in a deep network
and the binary codes are learned by minimizing an objective function defined
over classification error and other desirable hash code properties. With this
design, SSDH has a nice characteristic that classification and retrieval are
unified in a single learning model. Moreover, SSDH performs joint learning of
image representations, hash codes, and classification in a point-wise manner,
and thus is scalable to large-scale datasets. SSDH is simple and can be
realized by a slight enhancement of an existing deep architecture for
classification; yet it is effective and outperforms other hashing approaches on
several benchmarks and large datasets. Compared with state-of-the-art
approaches, SSDH achieves higher retrieval accuracy, while the classification
performance is not sacrificed.
| no_new_dataset | 0.943348 |
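A sketch of the SSDH idea in PyTorch: a latent sigmoid layer between the backbone features and the classifier yields near-binary codes, and the loss combines classification error with penalties encouraging binarization and bit balance. The dimensions and penalty weights are assumptions, not the paper's settings.

```python
# Sketch: SSDH-style head. A sigmoid "latent" layer produces near-binary
# codes; the classifier sits on top of the codes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSDHHead(nn.Module):
    def __init__(self, feat_dim=2048, code_bits=48, num_classes=1000):
        super().__init__()
        self.hash_layer = nn.Sequential(nn.Linear(feat_dim, code_bits),
                                        nn.Sigmoid())
        self.classifier = nn.Linear(code_bits, num_classes)

    def forward(self, features):
        h = self.hash_layer(features)   # activations in (0, 1)
        return self.classifier(h), h

def ssdh_loss(logits, h, labels, alpha=0.1, beta=0.1):
    cls = F.cross_entropy(logits, labels)           # classification error
    binarize = -((h - 0.5) ** 2).mean()             # push activations to 0/1
    balance = ((h.mean(dim=0) - 0.5) ** 2).mean()   # keep each bit ~50% on
    return cls + alpha * binarize + beta * balance

# Retrieval-time binary code: (h > 0.5).to(torch.uint8)
```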
1605.08174 | Hyeryung Jang | Hyeryung Jang, Hyungwon Choi, Yung Yi, Jinwoo Shin | Adiabatic Persistent Contrastive Divergence Learning | 22 pages, 2 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the problem of parameter learning in probabilistic
graphical models having latent variables, where the standard approach is the
expectation maximization algorithm alternating expectation (E) and maximization
(M) steps. However, both E and M steps are computationally intractable for high
dimensional data, while substituting one step with a faster surrogate to
combat intractability can often cause failure in convergence. We
propose a new learning algorithm which is computationally efficient and
provably ensures convergence to a correct optimum. Its key idea is to run only
a few cycles of Markov Chains (MC) in both E and M steps. Such an idea of
running incomplete MC has been well studied only for M step in the literature,
called Contrastive Divergence (CD) learning. While such known CD-based schemes
find approximated gradients of the log-likelihood via the mean-field approach
in E step, our proposed algorithm computes exact ones via MC algorithms in both
steps, owing to multi-time-scale stochastic approximation theory. Despite its
theoretical guarantee in convergence, the proposed scheme might suffer from the
slow mixing of MC in E step. To tackle it, we also propose a hybrid approach
applying both mean-field and MC approximation in E step, where the hybrid
approach outperforms the bare mean-field CD scheme in our experiments on
real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 07:26:25 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2017 10:52:07 GMT"
}
] | 2017-02-15T00:00:00 | [
[
"Jang",
"Hyeryung",
""
],
[
"Choi",
"Hyungwon",
""
],
[
"Yi",
"Yung",
""
],
[
"Shin",
"Jinwoo",
""
]
] | TITLE: Adiabatic Persistent Contrastive Divergence Learning
ABSTRACT: This paper studies the problem of parameter learning in probabilistic
graphical models having latent variables, where the standard approach is the
expectation maximization algorithm alternating expectation (E) and maximization
(M) steps. However, both E and M steps are computationally intractable for high
dimensional data, while substituting one step with a faster surrogate to
combat intractability can often cause failure in convergence. We
propose a new learning algorithm which is computationally efficient and
provably ensures convergence to a correct optimum. Its key idea is to run only
a few cycles of Markov Chains (MC) in both E and M steps. Such an idea of
running incomplete MC has been well studied only for M step in the literature,
called Contrastive Divergence (CD) learning. While such known CD-based schemes
find approximated gradients of the log-likelihood via the mean-field approach
in E step, our proposed algorithm computes exact ones via MC algorithms in both
steps, owing to multi-time-scale stochastic approximation theory. Despite its
theoretical guarantee in convergence, the proposed scheme might suffer from the
slow mixing of MC in E step. To tackle it, we also propose a hybrid approach
applying both mean-field and MC approximation in E step, where the hybrid
approach outperforms the bare mean-field CD scheme in our experiments on
real-world datasets.
| no_new_dataset | 0.946547 |
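A generic CD-k sketch for a binary RBM (biases omitted for brevity), illustrating the building block the abstract starts from: estimating gradient statistics by running only a few Gibbs cycles. This is the classic scheme, not the paper's two-time-scale algorithm.

```python
# Sketch: CD-k for a binary RBM, with the negative phase estimated from
# only k Gibbs cycles.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_update(W, v0, k=1, lr=0.01):
    """W: (n_visible, n_hidden) weights; v0: (batch, n_visible) binary data."""
    ph0 = sigmoid(v0 @ W)                              # positive phase
    h = (rng.random(ph0.shape) < ph0).astype(float)
    for _ in range(k):                                 # k Gibbs cycles
        pv = sigmoid(h @ W.T)
        v = (rng.random(pv.shape) < pv).astype(float)
        ph = sigmoid(v @ W)
        h = (rng.random(ph.shape) < ph).astype(float)
    grad = (v0.T @ ph0 - v.T @ ph) / v0.shape[0]       # approx. log-lik grad
    return W + lr * grad
```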
1611.09718 | Thalaiyasingam Ajanthan | Thalaiyasingam Ajanthan, Alban Desmaison, Rudy Bunel, Mathieu
Salzmann, Philip H.S. Torr, M. Pawan Kumar | Efficient Linear Programming for Dense CRFs | 24 pages, 10 figures and 4 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fully connected conditional random field (CRF) with Gaussian pairwise
potentials has proven popular and effective for multi-class semantic
segmentation. While the energy of a dense CRF can be minimized accurately using
a linear programming (LP) relaxation, the state-of-the-art algorithm is too
slow to be useful in practice. To alleviate this deficiency, we introduce an
efficient LP minimization algorithm for dense CRFs. To this end, we develop a
proximal minimization framework, where the dual of each proximal problem is
optimized via block coordinate descent. We show that each block of variables
can be efficiently optimized. Specifically, for one block, the problem
decomposes into significantly smaller subproblems, each of which is defined
over a single pixel. For the other block, the problem is optimized via
conditional gradient descent. This has two advantages: 1) the conditional
gradient can be computed in a time linear in the number of pixels and labels;
and 2) the optimal step size can be computed analytically. Our experiments on
standard datasets provide compelling evidence that our approach outperforms all
existing baselines including the previous LP based approach for dense CRFs.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 16:46:54 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2017 07:34:13 GMT"
}
] | 2017-02-15T00:00:00 | [
[
"Ajanthan",
"Thalaiyasingam",
""
],
[
"Desmaison",
"Alban",
""
],
[
"Bunel",
"Rudy",
""
],
[
"Salzmann",
"Mathieu",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Kumar",
"M. Pawan",
""
]
] | TITLE: Efficient Linear Programming for Dense CRFs
ABSTRACT: The fully connected conditional random field (CRF) with Gaussian pairwise
potentials has proven popular and effective for multi-class semantic
segmentation. While the energy of a dense CRF can be minimized accurately using
a linear programming (LP) relaxation, the state-of-the-art algorithm is too
slow to be useful in practice. To alleviate this deficiency, we introduce an
efficient LP minimization algorithm for dense CRFs. To this end, we develop a
proximal minimization framework, where the dual of each proximal problem is
optimized via block coordinate descent. We show that each block of variables
can be efficiently optimized. Specifically, for one block, the problem
decomposes into significantly smaller subproblems, each of which is defined
over a single pixel. For the other block, the problem is optimized via
conditional gradient descent. This has two advantages: 1) the conditional
gradient can be computed in a time linear in the number of pixels and labels;
and 2) the optimal step size can be computed analytically. Our experiments on
standard datasets provide compelling evidence that our approach outperforms all
existing baselines including the previous LP based approach for dense CRFs.
| no_new_dataset | 0.944995 |
1702.03970 | Ray Smith | Raymond Smith, Chunhui Gu, Dar-Shyang Lee, Huiyi Hu, Ranjith
Unnikrishnan, Julian Ibarz, Sacha Arnoud, Sophia Lin | End-to-End Interpretation of the French Street Name Signs Dataset | Presented at the IWRR workshop at ECCV 2016 | Computer Vision - ECCV 2016 Workshops Volume 9913 of the series
Lecture Notes in Computer Science pp 411-426 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the French Street Name Signs (FSNS) Dataset consisting of more
than a million images of street name signs cropped from Google Street View
images of France. Each image contains several views of the same street name
sign. Every image has normalized, title case folded ground-truth text as it
would appear on a map. We believe that the FSNS dataset is large and complex
enough to train a deep network of significant complexity to solve the street
name extraction problem "end-to-end" or to explore the design trade-offs
between a single complex engineered network and multiple sub-networks designed
and trained to solve sub-problems. We present such an "end-to-end"
network/graph for TensorFlow and its results on the FSNS dataset.
| [
{
"version": "v1",
"created": "Mon, 13 Feb 2017 20:18:18 GMT"
}
] | 2017-02-15T00:00:00 | [
[
"Smith",
"Raymond",
""
],
[
"Gu",
"Chunhui",
""
],
[
"Lee",
"Dar-Shyang",
""
],
[
"Hu",
"Huiyi",
""
],
[
"Unnikrishnan",
"Ranjith",
""
],
[
"Ibarz",
"Julian",
""
],
[
"Arnoud",
"Sacha",
""
],
[
"Lin",
"Sophia",
""
]
] | TITLE: End-to-End Interpretation of the French Street Name Signs Dataset
ABSTRACT: We introduce the French Street Name Signs (FSNS) Dataset consisting of more
than a million images of street name signs cropped from Google Street View
images of France. Each image contains several views of the same street name
sign. Every image has normalized, title case folded ground-truth text as it
would appear on a map. We believe that the FSNS dataset is large and complex
enough to train a deep network of significant complexity to solve the street
name extraction problem "end-to-end" or to explore the design trade-offs
between a single complex engineered network and multiple sub-networks designed
and trained to solve sub-problems. We present such an "end-to-end"
network/graph for TensorFlow and its results on the FSNS dataset.
| new_dataset | 0.956309 |
1702.04037 | Yang Wang | Yang Wang, Vinh Tran, Minh Hoai | Evolution-Preserving Dense Trajectory Descriptors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently Trajectory-pooled Deep-learning Descriptors were shown to achieve
state-of-the-art human action recognition results on a number of datasets. This
paper improves their performance by applying rank pooling to each trajectory,
encoding the temporal evolution of deep learning features computed along the
trajectory. This leads to Evolution-Preserving Trajectory (EPT) descriptors, a
novel type of video descriptor that significantly outperforms Trajectory-pooled
Deep-learning Descriptors. EPT descriptors are defined based on dense
trajectories, and they provide complementary benefits to video descriptors that
are not based on trajectories. In particular, we show that the combination of
EPT descriptors and VideoDarwin leads to state-of-the-art performance on
Hollywood2 and UCF101 datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2017 00:54:52 GMT"
}
] | 2017-02-15T00:00:00 | [
[
"Wang",
"Yang",
""
],
[
"Tran",
"Vinh",
""
],
[
"Hoai",
"Minh",
""
]
] | TITLE: Evolution-Preserving Dense Trajectory Descriptors
ABSTRACT: Recently Trajectory-pooled Deep-learning Descriptors were shown to achieve
state-of-the-art human action recognition results on a number of datasets. This
paper improves their performance by applying rank pooling to each trajectory,
encoding the temporal evolution of deep learning features computed along the
trajectory. This leads to Evolution-Preserving Trajectory (EPT) descriptors, a
novel type of video descriptor that significantly outperforms Trajectory-pooled
Deep-learning Descriptors. EPT descriptors are defined based on dense
trajectories, and they provide complimentary benefits to video descriptors that
are not based on trajectories. In particular, we show that the combination of
EPT descriptors and VideoDarwin leads to state-of-the-art performance on
Hollywood2 and UCF101 datasets.
| no_new_dataset | 0.95561 |
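A sketch of rank pooling over a trajectory's per-frame features: a linear model fitted to the temporal order serves as the evolution-preserving descriptor. Plain least squares is used here as a simple stand-in for the usual RankSVM/SVR formulation.

```python
# Sketch: rank pooling of per-frame features along one trajectory.
import numpy as np

def rank_pool(features):
    """features: (T, D) per-frame descriptors; returns a (D,) descriptor."""
    T = features.shape[0]
    t = np.arange(1, T + 1, dtype=float)
    # Time-varying mean vectors are the usual rank-pooling input.
    V = np.cumsum(features, axis=0) / t[:, None]
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-8
    # Fit w so that V @ w reproduces the temporal order; w encodes evolution.
    w, *_ = np.linalg.lstsq(V, t, rcond=None)
    return w
```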
1702.04218 | John Prpic | J. Prpic and P. Shukla | Crowd Capital in Governance Contexts | Oxford Internet Institute, University of Oxford - IPP 2014 -
Crowdsourcing for Politics and Policy | null | null | null | cs.CY cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To begin to understand the implications of the implementation of IT-mediated
Crowds for Politics and Policy purposes, this research builds the first-known
dataset of IT-mediated Crowd applications currently in use in the governance
context. Using Crowd Capital theory and governance theory as frameworks to
organize our data collection, we undertake an exploratory data analysis of some
fundamental factors defining this emerging field. Specific factors outlined and
discussed include the type of actors implementing IT-mediated Crowds in the
governance context, the global geographic distribution of the applications, and
the nature of the Crowd-derived resources being generated for governance
purposes. The findings from our dataset of 209 on-going endeavours indicate
that a wide diversity of actors are engaging IT-mediated Crowds in the
governance context, both jointly and severally, that these endeavours can be
found to exist on all continents, and that said actors are generating
Crowd-derived resources in at least ten distinct governance sectors. We discuss
the ramifications of these and our other findings in comparison to the research
literature on the private-sector use of IT-mediated Crowds, while highlighting
some unique future research opportunities stemming from our work.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2017 09:45:57 GMT"
}
] | 2017-02-15T00:00:00 | [
[
"Prpic",
"J.",
""
],
[
"Shukla",
"P.",
""
]
] | TITLE: Crowd Capital in Governance Contexts
ABSTRACT: To begin to understand the implications of the implementation of IT-mediated
Crowds for Politics and Policy purposes, this research builds the first-known
dataset of IT-mediated Crowd applications currently in use in the governance
context. Using Crowd Capital theory and governance theory as frameworks to
organize our data collection, we undertake an exploratory data analysis of some
fundamental factors defining this emerging field. Specific factors outlined and
discussed include the type of actors implementing IT-mediated Crowds in the
governance context, the global geographic distribution of the applications, and
the nature of the Crowd-derived resources being generated for governance
purposes. The findings from our dataset of 209 on-going endeavours indicate
that a wide diversity of actors are engaging IT-mediated Crowds in the
governance context, both jointly and severally, that these endeavours can be
found to exist on all continents, and that said actors are generating
Crowd-derived resources in at least ten distinct governance sectors. We discuss
the ramifications of these and our other findings in comparison to the research
literature on the private-sector use of IT-mediated Crowds, while highlighting
some unique future research opportunities stemming from our work.
| new_dataset | 0.906653 |
1504.03871 | Saeed Reza Kheradpisheh | Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Timoth\'ee Masquelier | Bio-inspired Unsupervised Learning of Visual Features Leads to Robust
Invariant Object Recognition | null | Neurocomputing 205 (2016) 382-392 | 10.1016/j.neucom.2016.04.029 | null | cs.CV q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The retinal image of surrounding objects varies tremendously due to the changes
in position, size, pose, illumination condition, background context, occlusion,
noise, and nonrigid deformations. But despite these huge variations, our visual
system is able to invariantly recognize any object in just a fraction of a
second. To date, various computational models have been proposed to mimic the
hierarchical processing of the ventral visual pathway, with limited success.
Here, we show that the association of both biologically inspired network
architecture and learning rule significantly improves the models' performance
when facing challenging invariant object recognition problems. Our model is an
asynchronous feedforward spiking neural network. When the network is presented
with natural images, the neurons in the entry layers detect edges, and the most
activated ones fire first, while neurons in higher layers are equipped with
spike timing-dependent plasticity. These neurons progressively become selective
to intermediate complexity visual features appropriate for object
categorization. The model is evaluated on 3D-Object and ETH-80 datasets which
are two benchmarks for invariant object recognition, and is shown to outperform
state-of-the-art models, including DeepConvNet and HMAX. This demonstrates its
ability to accurately recognize different instances of multiple object classes
even under various appearance conditions (different views, scales, tilts, and
backgrounds). Several statistical analysis techniques are used to show that our
model extracts class specific and highly informative features.
| [
{
"version": "v1",
"created": "Wed, 15 Apr 2015 11:47:21 GMT"
},
{
"version": "v2",
"created": "Sun, 3 May 2015 12:40:59 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2016 10:54:22 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Kheradpisheh",
"Saeed Reza",
""
],
[
"Ganjtabesh",
"Mohammad",
""
],
[
"Masquelier",
"Timothée",
""
]
] | TITLE: Bio-inspired Unsupervised Learning of Visual Features Leads to Robust
Invariant Object Recognition
ABSTRACT: The retinal image of surrounding objects varies tremendously due to the changes
in position, size, pose, illumination condition, background context, occlusion,
noise, and nonrigid deformations. But despite these huge variations, our visual
system is able to invariantly recognize any object in just a fraction of a
second. To date, various computational models have been proposed to mimic the
hierarchical processing of the ventral visual pathway, with limited success.
Here, we show that the association of both biologically inspired network
architecture and learning rule significantly improves the models' performance
when facing challenging invariant object recognition problems. Our model is an
asynchronous feedforward spiking neural network. When the network is presented
with natural images, the neurons in the entry layers detect edges, and the most
activated ones fire first, while neurons in higher layers are equipped with
spike timing-dependent plasticity. These neurons progressively become selective
to intermediate complexity visual features appropriate for object
categorization. The model is evaluated on 3D-Object and ETH-80 datasets which
are two benchmarks for invariant object recognition, and is shown to outperform
state-of-the-art models, including DeepConvNet and HMAX. This demonstrates its
ability to accurately recognize different instances of multiple object classes
even under various appearance conditions (different views, scales, tilts, and
backgrounds). Several statistical analysis techniques are used to show that our
model extracts class specific and highly informative features.
| no_new_dataset | 0.946695 |
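A sketch of the simplified spike-timing-dependent plasticity rule common in such feedforward spiking models: synapses whose input spike preceded the post-synaptic spike are potentiated, the rest depressed, with a multiplicative soft bound. The learning rates are illustrative values, not the paper's.

```python
# Sketch: order-based STDP update with multiplicative soft bounds.
import numpy as np

def stdp_update(w, pre_time, post_time, a_plus=0.004, a_minus=0.003):
    """w: (N,) weights in [0, 1]; *_time: spike times (np.inf if no spike)."""
    causal = pre_time <= post_time            # pre fired before post -> LTP
    dw = np.where(causal,
                  a_plus * w * (1.0 - w),     # potentiation
                  -a_minus * w * (1.0 - w))   # depression
    return np.clip(w + dw, 0.0, 1.0)
```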
1506.04130 | Harsh Agrawal | Harsh Agrawal, Clint Solomon Mathialagan, Yash Goyal, Neelima Chavali,
Prakriti Banik, Akrit Mohapatra, Ahmed Osman, Dhruv Batra | CloudCV: Large Scale Distributed Computer Vision as a Cloud Service | null | null | null | null | cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are witnessing a proliferation of massive visual data. Unfortunately
scaling existing computer vision algorithms to large datasets leaves
researchers repeatedly solving the same algorithmic, logistical, and
infrastructural problems. Our goal is to democratize computer vision; one
should not have to be a computer vision, big data and distributed computing
expert to have access to state-of-the-art distributed computer vision
algorithms. We present CloudCV, a comprehensive system to provide access to
state-of-the-art distributed computer vision algorithms as a cloud service
through a Web Interface and APIs.
| [
{
"version": "v1",
"created": "Fri, 12 Jun 2015 19:50:07 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2016 22:01:51 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Feb 2017 07:30:56 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Agrawal",
"Harsh",
""
],
[
"Mathialagan",
"Clint Solomon",
""
],
[
"Goyal",
"Yash",
""
],
[
"Chavali",
"Neelima",
""
],
[
"Banik",
"Prakriti",
""
],
[
"Mohapatra",
"Akrit",
""
],
[
"Osman",
"Ahmed",
""
],
[
"Batra",
"Dhruv",
""
]
] | TITLE: CloudCV: Large Scale Distributed Computer Vision as a Cloud Service
ABSTRACT: We are witnessing a proliferation of massive visual data. Unfortunately
scaling existing computer vision algorithms to large datasets leaves
researchers repeatedly solving the same algorithmic, logistical, and
infrastructural problems. Our goal is to democratize computer vision; one
should not have to be a computer vision, big data and distributed computing
expert to have access to state-of-the-art distributed computer vision
algorithms. We present CloudCV, a comprehensive system to provide access to
state-of-the-art distributed computer vision algorithms as a cloud service
through a Web Interface and APIs.
| no_new_dataset | 0.954858 |
1603.08199 | Loris Bazzani | Loris Bazzani and Hugo Larochelle and Lorenzo Torresani | Recurrent Mixture Density Network for Spatiotemporal Visual Attention | ICLR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many computer vision tasks, the relevant information to solve the problem
at hand is mixed with irrelevant, distracting information. This has motivated
researchers to design attentional models that can dynamically focus on parts of
images or videos that are salient, e.g., by down-weighting irrelevant pixels.
In this work, we propose a spatiotemporal attentional model that learns where
to look in a video directly from human fixation data. We model visual attention
with a mixture of Gaussians at each frame. This distribution is used to express
the probability of saliency for each pixel. Time consistency in videos is
modeled hierarchically by: 1) deep 3D convolutional features to represent
spatial and short-term time relations and 2) a long short-term memory network
on top that aggregates the clip-level representation of sequential clips and
therefore expands the temporal domain from few frames to seconds. The
parameters of the proposed model are optimized via maximum likelihood
estimation using human fixations as training data, without knowledge of the
action in each video. Our experiments on Hollywood2 show state-of-the-art
performance on saliency prediction for video. We also show that our attentional
model trained on Hollywood2 generalizes well to UCF101 and it can be leveraged
to improve action classification accuracy on both datasets.
| [
{
"version": "v1",
"created": "Sun, 27 Mar 2016 10:34:22 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2016 14:17:51 GMT"
},
{
"version": "v3",
"created": "Sun, 15 May 2016 11:55:35 GMT"
},
{
"version": "v4",
"created": "Sat, 11 Feb 2017 10:05:06 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Bazzani",
"Loris",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Torresani",
"Lorenzo",
""
]
] | TITLE: Recurrent Mixture Density Network for Spatiotemporal Visual Attention
ABSTRACT: In many computer vision tasks, the relevant information to solve the problem
at hand is mixed with irrelevant, distracting information. This has motivated
researchers to design attentional models that can dynamically focus on parts of
images or videos that are salient, e.g., by down-weighting irrelevant pixels.
In this work, we propose a spatiotemporal attentional model that learns where
to look in a video directly from human fixation data. We model visual attention
with a mixture of Gaussians at each frame. This distribution is used to express
the probability of saliency for each pixel. Time consistency in videos is
modeled hierarchically by: 1) deep 3D convolutional features to represent
spatial and short-term time relations and 2) a long short-term memory network
on top that aggregates the clip-level representation of sequential clips and
therefore expands the temporal domain from few frames to seconds. The
parameters of the proposed model are optimized via maximum likelihood
estimation using human fixations as training data, without knowledge of the
action in each video. Our experiments on Hollywood2 show state-of-the-art
performance on saliency prediction for video. We also show that our attentional
model trained on Hollywood2 generalizes well to UCF101 and it can be leveraged
to improve action classification accuracy on both datasets.
| no_new_dataset | 0.947769 |
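A sketch of the per-frame attention output: the network predicts the parameters of a 2-D mixture of Gaussians over pixel coordinates, trained by maximizing the likelihood of recorded human fixations. Diagonal covariances are an assumption made here for brevity.

```python
# Sketch: negative log-likelihood of fixations under a predicted 2-D GMM.
import torch

def fixation_nll(pi, mu, sigma, fixations):
    """pi: (B, K) softmaxed mixture weights; mu, sigma: (B, K, 2);
    fixations: (B, 2) normalized (x, y) gaze points."""
    comp = torch.distributions.Normal(mu, sigma)              # (B, K, 2)
    log_prob = comp.log_prob(fixations.unsqueeze(1)).sum(-1)  # (B, K)
    log_mix = torch.logsumexp(torch.log(pi + 1e-8) + log_prob, dim=1)
    return -log_mix.mean()
```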
1607.00485 | Simone Scardapane | Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini | Group Sparse Regularization for Deep Neural Networks | null | null | 10.1016/j.neucom.2017.02.029 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the joint task of simultaneously optimizing (i)
the weights of a deep neural network, (ii) the number of neurons for each
hidden layer, and (iii) the subset of active input features (i.e., feature
selection). While these problems are generally dealt with separately, we
present a simple regularized formulation allowing to solve all three of them in
parallel, using standard optimization routines. Specifically, we extend the
group Lasso penalty (originated in the linear regression literature) in order
to impose group-level sparsity on the network's connections, where each group
is defined as the set of outgoing weights from a unit. Depending on the
specific case, the weights can be related to an input variable, to a hidden
neuron, or to a bias unit, thus performing simultaneously all the
aforementioned tasks in order to obtain a compact network. We perform an
extensive experimental evaluation, by comparing with classical weight decay and
Lasso penalties. We show that a sparse version of the group Lasso penalty is
able to achieve competitive performance, while at the same time resulting in
extremely compact networks with a smaller number of input features. We evaluate
both on a toy dataset for handwritten digit recognition, and on multiple
realistic large-scale classification problems.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2016 09:55:26 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Scardapane",
"Simone",
""
],
[
"Comminiello",
"Danilo",
""
],
[
"Hussain",
"Amir",
""
],
[
"Uncini",
"Aurelio",
""
]
] | TITLE: Group Sparse Regularization for Deep Neural Networks
ABSTRACT: In this paper, we consider the joint task of simultaneously optimizing (i)
the weights of a deep neural network, (ii) the number of neurons for each
hidden layer, and (iii) the subset of active input features (i.e., feature
selection). While these problems are generally dealt with separately, we
present a simple regularized formulation allowing to solve all three of them in
parallel, using standard optimization routines. Specifically, we extend the
group Lasso penalty (originated in the linear regression literature) in order
to impose group-level sparsity on the network's connections, where each group
is defined as the set of outgoing weights from a unit. Depending on the
specific case, the weights can be related to an input variable, to a hidden
neuron, or to a bias unit, thus performing simultaneously all the
aforementioned tasks in order to obtain a compact network. We perform an
extensive experimental evaluation, by comparing with classical weight decay and
Lasso penalties. We show that a sparse version of the group Lasso penalty is
able to achieve competitive performance, while at the same time resulting in
extremely compact networks with a smaller number of input features. We evaluate
both on a toy dataset for handwritten digit recognition, and on multiple
realistic large-scale classification problems.
| no_new_dataset | 0.946448 |
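A sketch of the sparse group Lasso penalty on a fully connected layer, with one group per input unit (one column of the weight matrix), so zeroing a whole group prunes that neuron or input feature. The regularization weights are illustrative assumptions.

```python
# Sketch: sparse group Lasso penalty on an nn.Linear weight matrix.
import torch

def sparse_group_lasso(weight, lambda_group=1e-4, lambda_l1=1e-5):
    """weight: (out_features, in_features) tensor of a linear layer."""
    group_norms = weight.norm(p=2, dim=0)        # one L2 norm per input unit
    sqrt_group_size = weight.shape[0] ** 0.5     # standard sqrt(|g|) scaling
    group_term = lambda_group * sqrt_group_size * group_norms.sum()
    l1_term = lambda_l1 * weight.abs().sum()     # the "sparse" add-on
    return group_term + l1_term

# Added to the task loss, this drives whole columns (input features or
# neurons) to zero, shrinking the network.
```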
1607.02539 | Liansheng Zhuang | Liansheng Zhuang, Zihan Zhou, Jingwen Yin, Shenghua Gao, Zhouchen Lin,
Yi Ma, Nenghai Yu | Graph Construction with Label Information for Semi-Supervised Learning | This paper is withdrawn by the authors for some errors | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the literature, most existing graph-based semi-supervised learning (SSL)
methods only use the label information of observed samples in the label
propagation stage, while ignoring such valuable information when learning the
graph. In this paper, we argue that it is beneficial to consider the label
information in the graph learning stage. Specifically, by enforcing the weight
of edges between labeled samples of different classes to be zero, we explicitly
incorporate the label information into the state-of-the-art graph learning
methods, such as the Low-Rank Representation (LRR), and propose a novel
semi-supervised graph learning method called Semi-Supervised Low-Rank
Representation (SSLRR). This results in a convex optimization problem with
linear constraints, which can be solved by the linearized alternating direction
method. Though we take LRR as an example, our proposed method is in fact very
general and can be applied to any self-representation graph learning methods.
Experimental results on both synthetic and real datasets demonstrate that the
proposed graph learning method can better capture the global geometric
structure of the data, and therefore is more effective for semi-supervised
learning tasks.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2016 22:24:20 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2016 01:17:57 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Feb 2017 22:20:26 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Zhuang",
"Liansheng",
""
],
[
"Zhou",
"Zihan",
""
],
[
"Yin",
"Jingwen",
""
],
[
"Gao",
"Shenghua",
""
],
[
"Lin",
"Zhouchen",
""
],
[
"Ma",
"Yi",
""
],
[
"Yu",
"Nenghai",
""
]
] | TITLE: Graph Construction with Label Information for Semi-Supervised Learning
ABSTRACT: In the literature, most existing graph-based semi-supervised learning (SSL)
methods only use the label information of observed samples in the label
propagation stage, while ignoring such valuable information when learning the
graph. In this paper, we argue that it is beneficial to consider the label
information in the graph learning stage. Specifically, by enforcing the weight
of edges between labeled samples of different classes to be zero, we explicitly
incorporate the label information into the state-of-the-art graph learning
methods, such as the Low-Rank Representation (LRR), and propose a novel
semi-supervised graph learning method called Semi-Supervised Low-Rank
Representation (SSLRR). This results in a convex optimization problem with
linear constraints, which can be solved by the linearized alternating direction
method. Though we take LRR as an example, our proposed method is in fact very
general and can be applied to any self-representation graph learning methods.
Experimental results on both synthetic and real datasets demonstrate that the
proposed graph learning method can better capture the global geometric
structure of the data, and therefore is more effective for semi-supervised
learning tasks.
| no_new_dataset | 0.949012 |
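A sketch of the label constraint that distinguishes this semi-supervised graph learning from plain LRR: entries of the representation matrix linking labeled samples of different classes are forced to zero, expressed here as a mask a solver would apply at each iteration.

```python
# Sketch: label-aware zero constraint on the representation matrix Z.
import numpy as np

def label_constraint_mask(labels):
    """labels: (n,) class ids for labeled samples, -1 for unlabeled.
    Returns an (n, n) 0/1 mask; 0 marks entries constrained to zero."""
    mask = np.ones((len(labels), len(labels)))
    labeled = labels >= 0
    diff_class = labels[:, None] != labels[None, :]
    mask[np.outer(labeled, labeled) & diff_class] = 0.0
    return mask

# A linearized-ADM solver would enforce the constraint after each update:
#   Z *= label_constraint_mask(labels)
```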
1608.07973 | Aviv Eisenschtat | Aviv Eisenschtat and Lior Wolf | Linking Image and Text with 2-Way Nets | 14 pages, 2 figures, 6 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linking two data sources is a basic building block in numerous computer
vision problems. Canonical Correlation Analysis (CCA) achieves this by
utilizing a linear optimizer in order to maximize the correlation between the
two views. Recent work makes use of non-linear models, including deep learning
techniques, that optimize the CCA loss in some feature space. In this paper, we
introduce a novel, bi-directional neural network architecture for the task of
matching vectors from two data sources. Our approach employs two tied neural
network channels that project the two views into a common, maximally correlated
space using the Euclidean loss. We show a direct link between the
correlation-based loss and Euclidean loss, enabling the use of Euclidean loss
for correlation maximization. To overcome common Euclidean regression
optimization problems, we modify well-known techniques to our problem,
including batch normalization and dropout. We show state of the art results on
a number of computer vision matching tasks including MNIST image matching and
sentence-image matching on the Flickr8k, Flickr30k and COCO datasets.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 09:57:47 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2016 06:10:56 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Feb 2017 20:38:46 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Eisenschtat",
"Aviv",
""
],
[
"Wolf",
"Lior",
""
]
] | TITLE: Linking Image and Text with 2-Way Nets
ABSTRACT: Linking two data sources is a basic building block in numerous computer
vision problems. Canonical Correlation Analysis (CCA) achieves this by
utilizing a linear optimizer in order to maximize the correlation between the
two views. Recent work makes use of non-linear models, including deep learning
techniques, that optimize the CCA loss in some feature space. In this paper, we
introduce a novel, bi-directional neural network architecture for the task of
matching vectors from two data sources. Our approach employs two tied neural
network channels that project the two views into a common, maximally correlated
space using the Euclidean loss. We show a direct link between the
correlation-based loss and Euclidean loss, enabling the use of Euclidean loss
for correlation maximization. To overcome common Euclidean regression
optimization problems, we adapt well-known techniques to our problem,
including batch normalization and dropout. We show state-of-the-art results on
a number of computer vision matching tasks including MNIST image matching and
sentence-image matching on the Flickr8k, Flickr30k and COCO datasets.
| no_new_dataset | 0.9462 |
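A stripped-down sketch of the bi-directional matching idea: two channels project each view onto the other and are trained with Euclidean losses. The paper's tied weights, batch-normalization and dropout modifications are omitted, and the layer sizes are assumptions.

```python
# Sketch: two channels regress each view onto the other (Euclidean losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWayNet(nn.Module):
    def __init__(self, dim_x, dim_y, dim_h=512):
        super().__init__()
        self.x_to_y = nn.Sequential(nn.Linear(dim_x, dim_h), nn.ReLU(),
                                    nn.Linear(dim_h, dim_y))
        self.y_to_x = nn.Sequential(nn.Linear(dim_y, dim_h), nn.ReLU(),
                                    nn.Linear(dim_h, dim_x))

    def loss(self, x, y):
        return (F.mse_loss(self.x_to_y(x), y)
                + F.mse_loss(self.y_to_x(y), x))
```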
1611.01236 | Alexey Kurakin | Alexey Kurakin, Ian Goodfellow, Samy Bengio | Adversarial Machine Learning at Scale | 17 pages, 5 figures | null | null | null | cs.CV cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial examples are malicious inputs designed to fool machine learning
models. They often transfer from one model to another, allowing attackers to
mount black box attacks without knowledge of the target model's parameters.
Adversarial training is the process of explicitly training a model on
adversarial examples, in order to make it more robust to attack or to reduce
its test error on clean inputs. So far, adversarial training has primarily been
applied to small problems. In this research, we apply adversarial training to
ImageNet. Our contributions include: (1) recommendations for how to successfully
scale adversarial training to large models and datasets, (2) the observation
that adversarial training confers robustness to single-step attack methods, (3)
the finding that multi-step attack methods are somewhat less transferable than
single-step attack methods, so single-step attacks are the best for mounting
black-box attacks, and (4) resolution of a "label leaking" effect that causes
adversarially trained models to perform better on adversarial examples than on
clean examples, because the adversarial example construction process uses the
true label and the model can learn to exploit regularities in the construction
process.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 01:11:02 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2017 00:15:46 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Kurakin",
"Alexey",
""
],
[
"Goodfellow",
"Ian",
""
],
[
"Bengio",
"Samy",
""
]
] | TITLE: Adversarial Machine Learning at Scale
ABSTRACT: Adversarial examples are malicious inputs designed to fool machine learning
models. They often transfer from one model to another, allowing attackers to
mount black box attacks without knowledge of the target model's parameters.
Adversarial training is the process of explicitly training a model on
adversarial examples, in order to make it more robust to attack or to reduce
its test error on clean inputs. So far, adversarial training has primarily been
applied to small problems. In this research, we apply adversarial training to
ImageNet. Our contributions include: (1) recommendations for how to successfully
scale adversarial training to large models and datasets, (2) the observation
that adversarial training confers robustness to single-step attack methods, (3)
the finding that multi-step attack methods are somewhat less transferable than
single-step attack methods, so single-step attacks are the best for mounting
black-box attacks, and (4) resolution of a "label leaking" effect that causes
adversarially trained models to perform better on adversarial examples than on
clean examples, because the adversarial example construction process uses the
true label and the model can learn to exploit regularities in the construction
process.
| no_new_dataset | 0.944382 |
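A sketch of the single-step (FGSM) attack and the mixed clean/adversarial training step it enables; epsilon is an illustrative perturbation budget for inputs in [0, 1].

```python
# Sketch: FGSM attack and a mixed clean/adversarial training objective.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    model.zero_grad(set_to_none=True)   # discard grads from the attack pass
    return x_adv

def adv_training_loss(model, x, y):
    x_adv = fgsm(model, x, y)
    return 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
```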
1612.03928 | Sergey Zagoruyko | Sergey Zagoruyko and Nikos Komodakis | Paying More Attention to Attention: Improving the Performance of
Convolutional Neural Networks via Attention Transfer | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 21:15:57 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jan 2017 23:26:16 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Feb 2017 22:05:47 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Zagoruyko",
"Sergey",
""
],
[
"Komodakis",
"Nikos",
""
]
] | TITLE: Paying More Attention to Attention: Improving the Performance of
Convolutional Neural Networks via Attention Transfer
ABSTRACT: Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer
| no_new_dataset | 0.945298 |
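A sketch of activation-based attention transfer: spatial attention maps are obtained by averaging squared channel activations and L2-normalizing, and the student is penalized for deviating from the teacher's maps at matching depths. The weight beta is illustrative; the authors' repository linked above holds the reference implementation.

```python
# Sketch: activation-based attention maps and the transfer loss.
import torch
import torch.nn.functional as F

def attention_map(feat):
    """feat: (B, C, H, W) -> L2-normalized flattened spatial map (B, H*W)."""
    return F.normalize(feat.pow(2).mean(dim=1).flatten(1), p=2, dim=1)

def at_loss(student_feats, teacher_feats, beta=1e3):
    """Both arguments: lists of feature maps taken at matching depths."""
    return beta * sum(
        (attention_map(s) - attention_map(t)).pow(2).mean()
        for s, t in zip(student_feats, teacher_feats))
```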
1612.04679 | Hadi Zare | Mahdi Hajiabadi, Hadi Zare, Hossein Bobarshad | IEDC: An Integrated Approach for Overlapping and Non-overlapping
Community Detection | The paper is accepted in Knowledge-Based Systems journal, 12 Figures,
6 Tables | null | 10.1016/j.knosys.2017.02.018 | null | cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection is a task of fundamental importance in social network
analysis that can be used in a variety of knowledge-based domains. While there
exist many works on community detection based on connectivity structures, they
suffer from either considering the overlapping or non-overlapping communities.
In this work, we propose a novel approach for general community detection
through an integrated framework to extract the overlapping and non-overlapping
community structures without assuming prior structural connectivity on
networks. Our general framework is based on a primary node based criterion
which consists of the internal association degree along with the external
association degree. The proposed method is evaluated through extensive
simulation experiments and on several benchmark real network datasets. The
experimental results show that the proposed method outperforms
the earlier state-of-the-art algorithms based on the well-known evaluation
criteria.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 15:14:45 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2017 11:27:37 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Hajiabadi",
"Mahdi",
""
],
[
"Zare",
"Hadi",
""
],
[
"Bobarshad",
"Hossein",
""
]
] | TITLE: IEDC: An Integrated Approach for Overlapping and Non-overlapping
Community Detection
ABSTRACT: Community detection is a task of fundamental importance in social network
analysis that can be used in a variety of knowledge-based domains. While there
exist many works on community detection based on connectivity structures, they
consider either overlapping or non-overlapping communities, but not both.
In this work, we propose a novel approach for general community detection
through an integrated framework to extract the overlapping and non-overlapping
community structures without assuming prior structural connectivity on
networks. Our general framework is based on a primary node based criterion
which consists of the internal association degree along with the external
association degree. The evaluation of the proposed method is investigated
through the extensive simulation experiments and several benchmark real network
datasets. The experimental results show that the proposed method outperforms
the earlier state-of-the-art algorithms based on the well-known evaluation
criteria.
| no_new_dataset | 0.946448 |
1612.07837 | Soroush Mehri | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham
Jain, Jose Sotelo, Aaron Courville and Yoshua Bengio | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | Published as a conference paper at ICLR 2017 | null | null | null | cs.SD cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicates that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2016 23:28:47 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2017 20:04:46 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Mehri",
"Soroush",
""
],
[
"Kumar",
"Kundan",
""
],
[
"Gulrajani",
"Ishaan",
""
],
[
"Kumar",
"Rithesh",
""
],
[
"Jain",
"Shubham",
""
],
[
"Sotelo",
"Jose",
""
],
[
"Courville",
"Aaron",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
ABSTRACT: In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicates that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance.
| no_new_dataset | 0.950503 |
1701.07388 | Konstantinos Xirogiannopoulos | Konstantinos Xirogiannopoulos, Amol Deshpande | Extracting and Analyzing Hidden Graphs from Relational Databases | A shorter version of the paper is to appear in SIGMOD 2017 | null | 10.1145/3035918.3035949 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing interconnection structures among underlying entities or objects in
a dataset through the use of graph analytics has been shown to provide
tremendous value in many application domains. However, graphs are not the
primary representation choice for storing most data today, and in order to have
access to these analyses, users are forced to extract data from their data
stores, construct the requisite graphs, and then load them into some graph
engine in order to execute their graph analysis task. Moreover, these graphs
can be significantly larger than the initial input stored in the database,
making it infeasible to construct or analyze such graphs in memory. In this
paper we address both of these challenges by building a system that enables
users to declaratively specify graph extraction tasks over a relational
database schema and then execute graph algorithms on the extracted graphs. We
propose a declarative domain-specific language for this purpose, and pair it up
with a novel condensed, in-memory representation that significantly reduces the
memory footprint of these graphs, permitting analysis of larger-than-memory
graphs. We present a general algorithm for creating this condensed
representation for a large class of graph extraction queries against arbitrary
schemas. We observe that the condensed representation suffers from a
duplication issue, that results in inaccuracies for most graph algorithms. We
then present a suite of in-memory representations that handle this duplication
in different ways and allow trading off the memory required and the
computational cost for executing different graph algorithms. We introduce novel
deduplication algorithms for removing this duplication in the graph, which are
of independent interest for graph compression, and provide a comprehensive
experimental evaluation over several real-world and synthetic datasets
illustrating these trade-offs.
| [
{
"version": "v1",
"created": "Wed, 25 Jan 2017 17:25:56 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2017 04:25:01 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Xirogiannopoulos",
"Konstantinos",
""
],
[
"Deshpande",
"Amol",
""
]
] | TITLE: Extracting and Analyzing Hidden Graphs from Relational Databases
ABSTRACT: Analyzing interconnection structures among underlying entities or objects in
a dataset through the use of graph analytics has been shown to provide
tremendous value in many application domains. However, graphs are not the
primary representation choice for storing most data today, and in order to have
access to these analyses, users are forced to extract data from their data
stores, construct the requisite graphs, and then load them into some graph
engine in order to execute their graph analysis task. Moreover, these graphs
can be significantly larger than the initial input stored in the database,
making it infeasible to construct or analyze such graphs in memory. In this
paper we address both of these challenges by building a system that enables
users to declaratively specify graph extraction tasks over a relational
database schema and then execute graph algorithms on the extracted graphs. We
propose a declarative domain-specific language for this purpose, and pair it up
with a novel condensed, in-memory representation that significantly reduces the
memory footprint of these graphs, permitting analysis of larger-than-memory
graphs. We present a general algorithm for creating this condensed
representation for a large class of graph extraction queries against arbitrary
schemas. We observe that the condensed representation suffers from a
duplication issue that results in inaccuracies for most graph algorithms. We
then present a suite of in-memory representations that handle this duplication
in different ways and allow trading off the memory required and the
computational cost for executing different graph algorithms. We introduce novel
deduplication algorithms for removing this duplication in the graph, which are
of independent interest for graph compression, and provide a comprehensive
experimental evaluation over several real-world and synthetic datasets
illustrating these trade-offs.
| no_new_dataset | 0.949949 |
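A toy sketch of the extraction task such a system automates: deriving a hidden coauthorship graph from normalized relational tables, done here naively with pandas and networkx rather than the paper's declarative DSL and condensed in-memory representation. Table and column names are hypothetical.

```python
# Sketch: extracting a hidden coauthorship graph from relational tables.
import pandas as pd
import networkx as nx

authors = pd.DataFrame({"aid": [1, 2, 3], "name": ["a", "b", "c"]})
writes = pd.DataFrame({"aid": [1, 2, 2, 3], "pid": [10, 10, 11, 11]})

# Self-join on the paper id: two authors are linked if they share a paper.
edges = writes.merge(writes, on="pid")
edges = edges[edges.aid_x < edges.aid_y]   # drop self-loops and duplicates

G = nx.Graph()
G.add_nodes_from(authors.aid)
G.add_edges_from(edges[["aid_x", "aid_y"]].itertuples(index=False))
print(nx.degree_centrality(G))
```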
1702.03307 | Ershad Banijamali Mr. | Ershad Banijamali, Ali Ghodsi, Pascal Poupart | Generative Mixture of Networks | 9 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A generative model based on training deep architectures is proposed. The
model consists of K networks that are trained together to learn the underlying
distribution of a given data set. The process starts with dividing the input
data into K clusters and feeding each of them into a separate network. After
a few iterations of training the networks separately, we use an EM-like algorithm to
train the networks together and update the clusters of the data. We call this
model Mixture of Networks. The proposed model is a platform that can be used
for any deep structure and be trained by any conventional objective function
for distribution modeling. As the components of the model are neural networks,
it is highly capable of characterizing complicated data distributions as well
as clustering data. We apply the algorithm to the MNIST hand-written digits and
Yale face datasets. We also demonstrate the clustering ability of the model
using some real-world and toy examples.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2017 19:21:02 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Banijamali",
"Ershad",
""
],
[
"Ghodsi",
"Ali",
""
],
[
"Poupart",
"Pascal",
""
]
] | TITLE: Generative Mixture of Networks
ABSTRACT: A generative model based on training deep architectures is proposed. The
model consists of K networks that are trained together to learn the underlying
distribution of a given data set. The process starts with dividing the input
data into K clusters and feeding each of them into a separate network. After
a few iterations of training the networks separately, we use an EM-like algorithm to
train the networks together and update the clusters of the data. We call this
model Mixture of Networks. The proposed model is a platform that can be used
for any deep structure and be trained by any conventional objective function
for distribution modeling. As the components of the model are neural networks,
it is highly capable of characterizing complicated data distributions as well
as clustering data. We apply the algorithm to the MNIST hand-written digits and
Yale face datasets. We also demonstrate the clustering ability of the model
using some real-world and toy examples.
| no_new_dataset | 0.946745 |
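As a rough illustration of the EM-like alternation described above, the following sketch uses K tiny scikit-learn autoencoders as stand-ins for the component networks (an assumption for brevity; the paper's architectures and objective may differ), training each on its cluster and reassigning points by reconstruction error:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4))])

K = 2
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
nets = [MLPRegressor(hidden_layer_sizes=(2,), max_iter=500, random_state=0)
        for _ in range(K)]

for _ in range(3):
    # M-step: train each network as an autoencoder on its current cluster.
    for k in range(K):
        nets[k].fit(X[labels == k], X[labels == k])
    # E-step: reassign each point to the network that reconstructs it best.
    errs = np.stack([((net.predict(X) - X) ** 2).sum(axis=1) for net in nets])
    labels = errs.argmin(axis=0)

print(np.bincount(labels, minlength=K))  # cluster sizes after alternation
```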
1702.03334 | Kirthevasan Kandasamy | Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow,
David Carter | Batch Policy Gradient Methods for Improving Neural Conversation Models | International Conference on Learning Representations (ICLR) 2017 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study reinforcement learning of chatbots with recurrent neural network
architectures when the rewards are noisy and expensive to obtain. For instance,
a chatbot used in automated customer service support can be scored by quality
assurance agents, but this process can be expensive, time-consuming, and noisy.
Previous reinforcement learning work for natural language processing uses
on-policy updates and/or is designed for on-line learning settings. We
demonstrate empirically that such strategies are not appropriate for this
setting and develop an off-policy batch policy gradient method (BPG). We
demonstrate the efficacy of our method via a series of synthetic experiments
and an Amazon Mechanical Turk experiment on a restaurant recommendations
dataset.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2017 21:58:40 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Kandasamy",
"Kirthevasan",
""
],
[
"Bachrach",
"Yoram",
""
],
[
"Tomioka",
"Ryota",
""
],
[
"Tarlow",
"Daniel",
""
],
[
"Carter",
"David",
""
]
] | TITLE: Batch Policy Gradient Methods for Improving Neural Conversation Models
ABSTRACT: We study reinforcement learning of chatbots with recurrent neural network
architectures when the rewards are noisy and expensive to obtain. For instance,
a chatbot used in automated customer service support can be scored by quality
assurance agents, but this process can be expensive, time-consuming, and noisy.
Previous reinforcement learning work for natural language processing uses
on-policy updates and/or is designed for on-line learning settings. We
demonstrate empirically that such strategies are not appropriate for this
setting and develop an off-policy batch policy gradient method (BPG). We
demonstrate the efficacy of our method via a series of synthetic experiments
and an Amazon Mechanical Turk experiment on a restaurant recommendations
dataset.
| no_new_dataset | 0.948106 |
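For intuition about off-policy batch updates of this general flavor, here is a heavily simplified bandit-style sketch that reuses a fixed logged batch via importance weighting; it illustrates the offline setting only and is not the paper's BPG algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3
true_reward = np.array([0.1, 0.5, 0.9])          # hidden reward probabilities

theta = np.zeros(n_actions)                      # target softmax policy
behaviour = np.full(n_actions, 1 / n_actions)    # uniform logging policy

# One fixed, pre-collected batch -- no fresh interaction.
actions = rng.choice(n_actions, size=500, p=behaviour)
rewards = rng.binomial(1, true_reward[actions]).astype(float)

for _ in range(200):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    w = probs[actions] / behaviour[actions]      # importance weights
    coef = w * rewards
    # Gradient of sum_i w_i r_i log pi(a_i): sum_i coef_i (e_{a_i} - probs).
    grad = -probs * coef.sum()
    np.add.at(grad, actions, coef)
    theta += 0.1 * grad / len(actions)

print(np.round(probs, 3))  # probability mass shifts toward the best action
```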
1702.03345 | Amarjot Singh | Amarjot Singh and Nick Kingsbury | Multi-Resolution Dual-Tree Wavelet Scattering Network for Signal
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a Deep Scattering network that utilizes Dual-Tree
complex wavelets to extract translation invariant representations from an input
signal. The computationally efficient Dual-Tree wavelets decompose the input
signal into densely spaced representations over scales. Translation invariance
is introduced in the representations by applying a non-linearity over a region
followed by averaging. The discriminatory information in the densely spaced,
locally smooth, signal representations aids the learning of the classifier. The
proposed network is shown to outperform Mallat's ScatterNet on four datasets
with different modalities in classification accuracy.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2017 22:52:13 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Singh",
"Amarjot",
""
],
[
"Kingsbury",
"Nick",
""
]
] | TITLE: Multi-Resolution Dual-Tree Wavelet Scattering Network for Signal
Classification
ABSTRACT: This paper introduces a Deep Scattering network that utilizes Dual-Tree
complex wavelets to extract translation invariant representations from an input
signal. The computationally efficient Dual-Tree wavelets decompose the input
signal into densely spaced representations over scales. Translation invariance
is introduced in the representations by applying a non-linearity over a region
followed by averaging. The discriminatory information in the densely spaced,
locally smooth, signal representations aids the learning of the classifier. The
proposed network is shown to outperform Mallat's ScatterNet on four datasets
with different modalities in classification accuracy.
| no_new_dataset | 0.949059 |
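A toy sketch of the scattering pipeline summarized above (band-pass filtering, a modulus non-linearity, then local averaging for translation invariance) may help; the Gaussian-windowed complex exponential below is a stand-in assumption, not a dual-tree complex wavelet:

```python
import numpy as np

def scatter_1d(x, freq=0.2, width=16, avg=8):
    t = np.arange(-width, width + 1)
    # Complex exponential under a Gaussian window: a crude band-pass filter.
    psi = np.exp(1j * 2 * np.pi * freq * t) * np.exp(-t**2 / (2 * (width / 3) ** 2))
    band = np.convolve(x, psi, mode="same")    # band-pass filtering
    mod = np.abs(band)                         # modulus non-linearity
    phi = np.ones(avg) / avg
    return np.convolve(mod, phi, mode="same")  # local averaging

x = np.sin(2 * np.pi * 0.2 * np.arange(256))
s1, s2 = scatter_1d(x), scatter_1d(np.roll(x, 3))
# Interior coefficients barely change under a 3-sample shift:
print(np.max(np.abs(s1[32:-32] - s2[32:-32])))
```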
1702.03390 | Arnab Bhattacharya | Anuradha Awasthi, Arnab Bhattacharya, Sanchit Gupta, Ujjwal Kumar
Singh | K-Dominant Skyline Join Queries: Extending the Join Paradigm to
K-Dominant Skylines | Appeared as a short paper in ICDE 2017 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skyline queries enable multi-criteria optimization by filtering objects that
are worse in all the attributes of interest than another object. To handle the
large answer set of skyline queries in high-dimensional datasets, the concept
of k-dominance was proposed where an object is said to dominate another object
if it is better (or equal) in at least k attributes. This relaxes the full
domination criterion of normal skyline queries and, therefore, produces a smaller
number of skyline objects. This is called the k-dominant skyline set. Many
practical applications, however, require that the preferences are applied on a
joined relation. Common examples include flights having one or multiple stops,
a combination of product price and shipping costs, etc. In this paper, we
extend the k-dominant skyline queries to the join paradigm by enabling such
queries to be asked on joined relations. We call such queries KSJQ (k-dominant
skyline join queries). The number of skyline attributes, k, that an object must
dominate is drawn from the combined set of skyline attributes of the joined relation.
We show how pre-processing the base relations helps in reducing the time of
answering such queries over the naive method of joining the relations first and
then running the k-dominant skyline computation. We also extend the query to
handle cases where the skyline preference is on aggregated values in the joined
relation (such as total cost of the multiple legs of the flight) which are
available only after the join is performed. In addition to these problems, we
devise efficient algorithms to choose the value of k based on the desired
cardinality of the final skyline set. Experiments on both real and synthetic
datasets demonstrate the efficiency, scalability and practicality of our
algorithms.
| [
{
"version": "v1",
"created": "Sat, 11 Feb 2017 06:51:21 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Awasthi",
"Anuradha",
""
],
[
"Bhattacharya",
"Arnab",
""
],
[
"Gupta",
"Sanchit",
""
],
[
"Singh",
"Ujjwal Kumar",
""
]
] | TITLE: K-Dominant Skyline Join Queries: Extending the Join Paradigm to
K-Dominant Skylines
ABSTRACT: Skyline queries enable multi-criteria optimization by filtering objects that
are worse in all the attributes of interest than another object. To handle the
large answer set of skyline queries in high-dimensional datasets, the concept
of k-dominance was proposed where an object is said to dominate another object
if it is better (or equal) in at least k attributes. This relaxes the full
domination criterion of normal skyline queries and, therefore, produces a smaller
number of skyline objects. This is called the k-dominant skyline set. Many
practical applications, however, require that the preferences are applied on a
joined relation. Common examples include flights having one or multiple stops,
a combination of product price and shipping costs, etc. In this paper, we
extend the k-dominant skyline queries to the join paradigm by enabling such
queries to be asked on joined relations. We call such queries KSJQ (k-dominant
skyline join queries). The number of skyline attributes, k, that an object must
dominate is drawn from the combined set of skyline attributes of the joined relation.
We show how pre-processing the base relations helps in reducing the time of
answering such queries over the naive method of joining the relations first and
then running the k-dominant skyline computation. We also extend the query to
handle cases where the skyline preference is on aggregated values in the joined
relation (such as total cost of the multiple legs of the flight) which are
available only after the join is performed. In addition to these problems, we
devise efficient algorithms to choose the value of k based on the desired
cardinality of the final skyline set. Experiments on both real and synthetic
datasets demonstrate the efficiency, scalability and practicality of our
algorithms.
| no_new_dataset | 0.940844 |
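A minimal sketch of the k-dominance test itself (minimization semantics assumed; the paper's join and pre-processing machinery is omitted, and the flight tuples are illustrative):

```python
def k_dominates(p, q, k):
    """p k-dominates q: better-or-equal in at least k attributes and
    strictly better in at least one of them (smaller is better here)."""
    better_eq = sum(a <= b for a, b in zip(p, q))
    return better_eq >= k and any(a < b for a, b in zip(p, q))

# (price, stops, hours) for three illustrative flights, with k = 2.
flights = [(300, 2, 5.0), (250, 3, 6.5), (400, 1, 4.0)]
k = 2
skyline = [p for p in flights
           if not any(k_dominates(q, p, k) for q in flights if q != p)]
print(skyline)  # [(400, 1, 4.0)] -- the relaxed criterion prunes the rest
```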
1702.03519 | Zeyi Wen | Zeyi Wen, Dong Deng, Rui Zhang, Kotagiri Ramamohanarao | A Technical Report: Entity Extraction using Both Character-based and
Token-based Similarity | 12 pages, 6 figures, technical report | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity extraction is fundamental to many text mining tasks such as
organisation name recognition. A popular approach to entity extraction is based
on matching sub-string candidates in a document against a dictionary of
entities. To handle spelling errors and name variations of entities, usually
the matching is approximate and edit or Jaccard distance is used to measure
dissimilarity between sub-string candidates and the entities. For approximate
entity extraction from free text, existing work considers solely
character-based or solely token-based similarity and hence cannot
simultaneously deal with minor variations at token level and typos. In this
paper, we address this problem by considering both character-based similarity
and token-based similarity (i.e. two-level similarity). Measuring one-level
(e.g. character-based) similarity is computationally expensive, and measuring
two-level similarity is dramatically more expensive. By exploiting the
properties of the two-level similarity and the weights of tokens, we develop
novel techniques to significantly reduce the number of sub-string candidates
that require computation of two-level similarity against the dictionary of
entities. A comprehensive experimental study on real-world datasets shows that
our algorithm can efficiently extract entities from documents and produce a
high F1 score in the range of [0.91, 0.97].
| [
{
"version": "v1",
"created": "Sun, 12 Feb 2017 12:46:40 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Wen",
"Zeyi",
""
],
[
"Deng",
"Dong",
""
],
[
"Zhang",
"Rui",
""
],
[
"Ramamohanarao",
"Kotagiri",
""
]
] | TITLE: A Technical Report: Entity Extraction using Both Character-based and
Token-based Similarity
ABSTRACT: Entity extraction is fundamental to many text mining tasks such as
organisation name recognition. A popular approach to entity extraction is based
on matching sub-string candidates in a document against a dictionary of
entities. To handle spelling errors and name variations of entities, usually
the matching is approximate and edit or Jaccard distance is used to measure
dissimilarity between sub-string candidates and the entities. For approximate
entity extraction from free text, existing work considers solely
character-based or solely token-based similarity and hence cannot
simultaneously deal with minor variations at the token level and typos. In this
paper, we address this problem by considering both character-based similarity
and token-based similarity (i.e. two-level similarity). Measuring one-level
(e.g. character-based) similarity is computationally expensive, and measuring
two-level similarity is dramatically more expensive. By exploiting the
properties of the two-level similarity and the weights of tokens, we develop
novel techniques to significantly reduce the number of sub-string candidates
that require computation of two-level similarity against the dictionary of
entities. A comprehensive experimental study on real-world datasets shows that
our algorithm can efficiently extract entities from documents and produce a
high F1 score in the range of [0.91, 0.97].
| no_new_dataset | 0.947575 |
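A small sketch of what two-level similarity means in practice, pairing token-level Jaccard (tolerant of word variation) with character-level edit distance (tolerant of typos); the strings are illustrative and the paper's candidate-pruning techniques are not reproduced:

```python
def edit_distance(s, t):
    # Standard Levenshtein DP with a rolling row.
    d = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, ct in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (cs != ct))
    return d[-1]

def jaccard(s, t):
    a, b = set(s.split()), set(t.split())
    return len(a & b) / len(a | b)

entity = "international business machines"
candidate = "intrnational business machine corp"
print(jaccard(entity, candidate))        # token-level view: low (word variants)
print(edit_distance(entity, candidate))  # character-level view: small (typos)
```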
1702.03555 | Iuliia Kotseruba | Amir Rasouli, Iuliia Kotseruba, John K. Tsotsos | Agreeing to Cross: How Drivers and Pedestrians Communicate | 6 pages, 6 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The contribution of this paper is twofold. The first is a novel dataset for
studying behaviors of traffic participants while crossing. Our dataset contains
more than 650 samples of pedestrian behaviors in various street configurations
and weather conditions. These examples were selected from approx. 240 hours of
driving on city, suburban, and urban roads. The second contribution is an
analysis of our data from the point of view of joint attention. We identify
what types of non-verbal communication cues road users use at the point of
crossing, their responses, and under what circumstances the crossing event
takes place. It was found that in more than 90% of the cases pedestrians gaze
at the approaching cars prior to crossing in non-signalized crosswalks. The
crossing action, however, depends on additional factors such as time to
collision (TTC), explicit driver's reaction or structure of the crosswalk.
| [
{
"version": "v1",
"created": "Sun, 12 Feb 2017 18:41:06 GMT"
}
] | 2017-02-14T00:00:00 | [
[
"Rasouli",
"Amir",
""
],
[
"Kotseruba",
"Iuliia",
""
],
[
"Tsotsos",
"John K.",
""
]
] | TITLE: Agreeing to Cross: How Drivers and Pedestrians Communicate
ABSTRACT: The contribution of this paper is twofold. The first is a novel dataset for
studying behaviors of traffic participants while crossing. Our dataset contains
more than 650 samples of pedestrian behaviors in various street configurations
and weather conditions. These examples were selected from approx. 240 hours of
driving on city, suburban, and urban roads. The second contribution is an
analysis of our data from the point of view of joint attention. We identify
what types of non-verbal communication cues road users use at the point of
crossing, their responses, and under what circumstances the crossing event
takes place. It was found that in more than 90% of the cases pedestrians gaze
at the approaching cars prior to crossing in non-signalized crosswalks. The
crossing action, however, depends on additional factors such as time to
collision (TTC), explicit driver's reaction or structure of the crosswalk.
| new_dataset | 0.956675 |