id (stringlengths 9–16) | submitter (stringlengths 3–64, ⌀) | authors (stringlengths 5–6.63k) | title (stringlengths 7–245) | comments (stringlengths 1–482, ⌀) | journal-ref (stringlengths 4–382, ⌀) | doi (stringlengths 9–151, ⌀) | report-no (stringclasses, 984 values) | categories (stringlengths 5–108) | license (stringclasses, 9 values) | abstract (stringlengths 83–3.41k) | versions (listlengths 1–20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequencelengths 1–427) | prompt (stringlengths 166–3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5–0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1403.8105 | Jamie Portsmouth | David Koerner, Jamie Portsmouth, Filip Sadlo, Thomas Ertl, and Bernd
Eberhardt | Flux-Limited Diffusion for Multiple Scattering in Participating Media | Accepted in Computer Graphics Forum | null | 10.1111/cgf.12342 | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the rendering of multiple scattering effects in participating media,
methods based on the diffusion approximation are an extremely efficient
alternative to Monte Carlo path tracing. However, in sufficiently transparent
regions, classical diffusion approximation suffers from non-physical radiative
fluxes which leads to a poor match to correct light transport. In particular,
this prevents the application of classical diffusion approximation to
heterogeneous media, where opaque material is embedded within transparent
regions. To address this limitation, we introduce flux-limited diffusion, a
technique from the astrophysics domain. This method provides a better
approximation to light transport than classical diffusion approximation,
particularly when applied to heterogeneous media, and hence broadens the
applicability of diffusion-based techniques. We provide an algorithm for
flux-limited diffusion, which is validated using the transport theory for a
point light source in an infinite homogeneous medium. We further demonstrate
that our implementation of flux-limited diffusion produces more accurate
renderings of multiple scattering in various heterogeneous datasets than
classical diffusion approximation, by comparing both methods to ground truth
renderings obtained via volumetric path tracing.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2014 17:54:34 GMT"
}
] | 2014-04-01T00:00:00 | [
[
"Koerner",
"David",
""
],
[
"Portsmouth",
"Jamie",
""
],
[
"Sadlo",
"Filip",
""
],
[
"Ertl",
"Thomas",
""
],
[
"Eberhardt",
"Bernd",
""
]
] | TITLE: Flux-Limited Diffusion for Multiple Scattering in Participating Media
ABSTRACT: For the rendering of multiple scattering effects in participating media,
methods based on the diffusion approximation are an extremely efficient
alternative to Monte Carlo path tracing. However, in sufficiently transparent
regions, classical diffusion approximation suffers from non-physical radiative
fluxes which leads to a poor match to correct light transport. In particular,
this prevents the application of classical diffusion approximation to
heterogeneous media, where opaque material is embedded within transparent
regions. To address this limitation, we introduce flux-limited diffusion, a
technique from the astrophysics domain. This method provides a better
approximation to light transport than classical diffusion approximation,
particularly when applied to heterogeneous media, and hence broadens the
applicability of diffusion-based techniques. We provide an algorithm for
flux-limited diffusion, which is validated using the transport theory for a
point light source in an infinite homogeneous medium. We further demonstrate
that our implementation of flux-limited diffusion produces more accurate
renderings of multiple scattering in various heterogeneous datasets than
classical diffusion approximation, by comparing both methods to ground truth
renderings obtained via volumetric path tracing.
| no_new_dataset | 0.951684 |
1403.7315 | Chuan Shi | Yitong Li, Chuan Shi, Philip S. Yu, and Qing Chen | HRank: A Path based Ranking Framework in Heterogeneous Information
Network | 12 pages, 11 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there is a surge of interests on heterogeneous information network
analysis. As a newly emerging network model, heterogeneous information networks
have many unique features (e.g., complex structure and rich semantics) and a
number of interesting data mining tasks have been exploited in this kind of
networks, such as similarity measure, clustering, and classification. Although
evaluating the importance of objects has been well studied in homogeneous
networks, it is not yet exploited in heterogeneous networks. In this paper, we
study the ranking problem in heterogeneous networks and propose the HRank
framework to evaluate the importance of multiple types of objects and meta
paths. Since the importance of objects depends upon the meta paths in
heterogeneous networks, HRank develops a path based random walk process.
Moreover, a constrained meta path is proposed to subtly capture the rich
semantics in heterogeneous networks. Furthermore, HRank can simultaneously
determine the importance of objects and meta paths through applying the tensor
analysis. Extensive experiments on three real datasets show that HRank can
effectively evaluate the importance of objects and paths together. Moreover,
the constrained meta path shows its potential on mining subtle semantics by
obtaining more accurate ranking results.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2014 09:31:43 GMT"
}
] | 2014-03-31T00:00:00 | [
[
"Li",
"Yitong",
""
],
[
"Shi",
"Chuan",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Chen",
"Qing",
""
]
] | TITLE: HRank: A Path based Ranking Framework in Heterogeneous Information
Network
ABSTRACT: Recently, there is a surge of interests on heterogeneous information network
analysis. As a newly emerging network model, heterogeneous information networks
have many unique features (e.g., complex structure and rich semantics) and a
number of interesting data mining tasks have been exploited in this kind of
networks, such as similarity measure, clustering, and classification. Although
evaluating the importance of objects has been well studied in homogeneous
networks, it is not yet exploited in heterogeneous networks. In this paper, we
study the ranking problem in heterogeneous networks and propose the HRank
framework to evaluate the importance of multiple types of objects and meta
paths. Since the importance of objects depends upon the meta paths in
heterogeneous networks, HRank develops a path based random walk process.
Moreover, a constrained meta path is proposed to subtly capture the rich
semantics in heterogeneous networks. Furthermore, HRank can simultaneously
determine the importance of objects and meta paths through applying the tensor
analysis. Extensive experiments on three real datasets show that HRank can
effectively evaluate the importance of objects and paths together. Moreover,
the constrained meta path shows its potential on mining subtle semantics by
obtaining more accurate ranking results.
| no_new_dataset | 0.948632 |
1403.7373 | Radek Pel\'anek | Radek Pel\'anek | Difficulty Rating of Sudoku Puzzles: An Overview and Evaluation | 24 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we predict the difficulty of a Sudoku puzzle? We give an overview of
difficulty rating metrics and evaluate them on extensive dataset on human
problem solving (more then 1700 Sudoku puzzles, hundreds of solvers). The best
results are obtained using a computational model of human solving activity.
Using the model we show that there are two sources of the problem difficulty:
complexity of individual steps (logic operations) and structure of dependency
among steps. We also describe metrics based on analysis of solutions under
relaxed constraints -- a novel approach inspired by phase transition phenomenon
in the graph coloring problem. In our discussion we focus not just on the
performance of individual metrics on the Sudoku puzzle, but also on their
generalizability and applicability to other problems.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2014 13:43:50 GMT"
}
] | 2014-03-31T00:00:00 | [
[
"Pelánek",
"Radek",
""
]
] | TITLE: Difficulty Rating of Sudoku Puzzles: An Overview and Evaluation
ABSTRACT: How can we predict the difficulty of a Sudoku puzzle? We give an overview of
difficulty rating metrics and evaluate them on extensive dataset on human
problem solving (more then 1700 Sudoku puzzles, hundreds of solvers). The best
results are obtained using a computational model of human solving activity.
Using the model we show that there are two sources of the problem difficulty:
complexity of individual steps (logic operations) and structure of dependency
among steps. We also describe metrics based on analysis of solutions under
relaxed constraints -- a novel approach inspired by phase transition phenomenon
in the graph coloring problem. In our discussion we focus not just on the
performance of individual metrics on the Sudoku puzzle, but also on their
generalizability and applicability to other problems.
| no_new_dataset | 0.946051 |
1403.6950 | Manuel Marin-Jimenez | F.M. Castro and M.J. Marin-Jimenez and R. Medina-Carnicer | Pyramidal Fisher Motion for Multiview Gait Recognition | Submitted to International Conference on Pattern Recognition, ICPR,
2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this paper is to identify individuals by analyzing their gait.
Instead of using binary silhouettes as input data (as done in many previous
works) we propose and evaluate the use of motion descriptors based on densely
sampled short-term trajectories. We take advantage of state-of-the-art people
detectors to define custom spatial configurations of the descriptors around the
target person. Thus, obtaining a pyramidal representation of the gait motion.
The local motion features (described by the Divergence-Curl-Shear descriptor)
extracted on the different spatial areas of the person are combined into a
single high-level gait descriptor by using the Fisher Vector encoding. The
proposed approach, coined Pyramidal Fisher Motion, is experimentally validated
on the recent `AVA Multiview Gait' dataset. The results show that this new
approach achieves promising results in the problem of gait recognition.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2014 08:39:31 GMT"
}
] | 2014-03-28T00:00:00 | [
[
"Castro",
"F. M.",
""
],
[
"Marin-Jimenez",
"M. J.",
""
],
[
"Medina-Carnicer",
"R.",
""
]
] | TITLE: Pyramidal Fisher Motion for Multiview Gait Recognition
ABSTRACT: The goal of this paper is to identify individuals by analyzing their gait.
Instead of using binary silhouettes as input data (as done in many previous
works) we propose and evaluate the use of motion descriptors based on densely
sampled short-term trajectories. We take advantage of state-of-the-art people
detectors to define custom spatial configurations of the descriptors around the
target person. Thus, obtaining a pyramidal representation of the gait motion.
The local motion features (described by the Divergence-Curl-Shear descriptor)
extracted on the different spatial areas of the person are combined into a
single high-level gait descriptor by using the Fisher Vector encoding. The
proposed approach, coined Pyramidal Fisher Motion, is experimentally validated
on the recent `AVA Multiview Gait' dataset. The results show that this new
approach achieves promising results in the problem of gait recognition.
| no_new_dataset | 0.947962 |
1403.7057 | Alexander Kolesnikov | Alexander Kolesnikov, Matthieu Guillaumin, Vittorio Ferrari and
Christoph H. Lampert | Closed-Form Training of Conditional Random Fields for Large Scale Image
Segmentation | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present LS-CRF, a new method for very efficient large-scale training of
Conditional Random Fields (CRFs). It is inspired by existing closed-form
expressions for the maximum likelihood parameters of a generative graphical
model with tree topology. LS-CRF training requires only solving a set of
independent regression problems, for which closed-form expression as well as
efficient iterative solvers are available. This makes it orders of magnitude
faster than conventional maximum likelihood learning for CRFs that require
repeated runs of probabilistic inference. At the same time, the models learned
by our method still allow for joint inference at test time. We apply LS-CRF to
the task of semantic image segmentation, showing that it is highly efficient,
even for loopy models where probabilistic inference is problematic. It allows
the training of image segmentation models from significantly larger training
sets than had been used previously. We demonstrate this on two new datasets
that form a second contribution of this paper. They consist of over 180,000
images with figure-ground segmentation annotations. Our large-scale experiments
show that the possibilities of CRF-based image segmentation are far from
exhausted, indicating, for example, that semi-supervised learning and the use
of non-linear predictors are promising directions for achieving higher
segmentation accuracy in the future.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2014 14:38:23 GMT"
}
] | 2014-03-28T00:00:00 | [
[
"Kolesnikov",
"Alexander",
""
],
[
"Guillaumin",
"Matthieu",
""
],
[
"Ferrari",
"Vittorio",
""
],
[
"Lampert",
"Christoph H.",
""
]
] | TITLE: Closed-Form Training of Conditional Random Fields for Large Scale Image
Segmentation
ABSTRACT: We present LS-CRF, a new method for very efficient large-scale training of
Conditional Random Fields (CRFs). It is inspired by existing closed-form
expressions for the maximum likelihood parameters of a generative graphical
model with tree topology. LS-CRF training requires only solving a set of
independent regression problems, for which closed-form expression as well as
efficient iterative solvers are available. This makes it orders of magnitude
faster than conventional maximum likelihood learning for CRFs that require
repeated runs of probabilistic inference. At the same time, the models learned
by our method still allow for joint inference at test time. We apply LS-CRF to
the task of semantic image segmentation, showing that it is highly efficient,
even for loopy models where probabilistic inference is problematic. It allows
the training of image segmentation models from significantly larger training
sets than had been used previously. We demonstrate this on two new datasets
that form a second contribution of this paper. They consist of over 180,000
images with figure-ground segmentation annotations. Our large-scale experiments
show that the possibilities of CRF-based image segmentation are far from
exhausted, indicating, for example, that semi-supervised learning and the use
of non-linear predictors are promising directions for achieving higher
segmentation accuracy in the future.
| no_new_dataset | 0.517297 |
1311.4082 | Joel Leibo | Qianli Liao, Joel Z Leibo, Youssef Mroueh, Tomaso Poggio | Can a biologically-plausible hierarchy effectively replace face
detection, alignment, and recognition pipelines? | 11 Pages, 4 Figures. Mar 26, (2014): Improved exposition. Added CBMM
memo cover page. No substantive changes | null | null | CBMM-003 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The standard approach to unconstrained face recognition in natural
photographs is via a detection, alignment, recognition pipeline. While that
approach has achieved impressive results, there are several reasons to be
dissatisfied with it, among them is its lack of biological plausibility. A
recent theory of invariant recognition by feedforward hierarchical networks,
like HMAX, other convolutional networks, or possibly the ventral stream,
implies an alternative approach to unconstrained face recognition. This
approach accomplishes detection and alignment implicitly by storing
transformations of training images (called templates) rather than explicitly
detecting and aligning faces at test time. Here we propose a particular
locality-sensitive hashing based voting scheme which we call "consensus of
collisions" and show that it can be used to approximate the full 3-layer
hierarchy implied by the theory. The resulting end-to-end system for
unconstrained face recognition operates on photographs of faces taken under
natural conditions, e.g., Labeled Faces in the Wild (LFW), without aligning or
cropping them, as is normally done. It achieves a drastic improvement in the
state of the art on this end-to-end task, reaching the same level of
performance as the best systems operating on aligned, closely cropped images
(no outside training data). It also performs well on two newer datasets,
similar to LFW, but more difficult: LFW-jittered (new here) and SUFR-W.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2013 17:49:31 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2013 10:25:29 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2014 10:11:42 GMT"
}
] | 2014-03-27T00:00:00 | [
[
"Liao",
"Qianli",
""
],
[
"Leibo",
"Joel Z",
""
],
[
"Mroueh",
"Youssef",
""
],
[
"Poggio",
"Tomaso",
""
]
] | TITLE: Can a biologically-plausible hierarchy effectively replace face
detection, alignment, and recognition pipelines?
ABSTRACT: The standard approach to unconstrained face recognition in natural
photographs is via a detection, alignment, recognition pipeline. While that
approach has achieved impressive results, there are several reasons to be
dissatisfied with it, among them is its lack of biological plausibility. A
recent theory of invariant recognition by feedforward hierarchical networks,
like HMAX, other convolutional networks, or possibly the ventral stream,
implies an alternative approach to unconstrained face recognition. This
approach accomplishes detection and alignment implicitly by storing
transformations of training images (called templates) rather than explicitly
detecting and aligning faces at test time. Here we propose a particular
locality-sensitive hashing based voting scheme which we call "consensus of
collisions" and show that it can be used to approximate the full 3-layer
hierarchy implied by the theory. The resulting end-to-end system for
unconstrained face recognition operates on photographs of faces taken under
natural conditions, e.g., Labeled Faces in the Wild (LFW), without aligning or
cropping them, as is normally done. It achieves a drastic improvement in the
state of the art on this end-to-end task, reaching the same level of
performance as the best systems operating on aligned, closely cropped images
(no outside training data). It also performs well on two newer datasets,
similar to LFW, but more difficult: LFW-jittered (new here) and SUFR-W.
| no_new_dataset | 0.94887 |
1311.4529 | Afroza Sultana | Afroza Sultana, Naeemul Hassan, Chengkai Li, Jun Yang, Cong Yu | Incremental Discovery of Prominent Situational Facts | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the novel problem of finding new, prominent situational facts, which
are emerging statements about objects that stand out within certain contexts.
Many such facts are newsworthy---e.g., an athlete's outstanding performance in
a game, or a viral video's impressive popularity. Effective and efficient
identification of these facts assists journalists in reporting, one of the main
goals of computational journalism. Technically, we consider an ever-growing
table of objects with dimension and measure attributes. A situational fact is a
"contextual" skyline tuple that stands out against historical tuples in a
context, specified by a conjunctive constraint involving dimension attributes,
when a set of measure attributes are compared. New tuples are constantly added
to the table, reflecting events happening in the real world. Our goal is to
discover constraint-measure pairs that qualify a new tuple as a contextual
skyline tuple, and discover them quickly before the event becomes yesterday's
news. A brute-force approach requires exhaustive comparison with every tuple,
under every constraint, and in every measure subspace. We design algorithms in
response to these challenges using three corresponding ideas---tuple reduction,
constraint pruning, and sharing computation across measure subspaces. We also
adopt a simple prominence measure to rank the discovered facts when they are
numerous. Experiments over two real datasets validate the effectiveness and
efficiency of our techniques.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2013 20:44:13 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2014 16:43:25 GMT"
}
] | 2014-03-27T00:00:00 | [
[
"Sultana",
"Afroza",
""
],
[
"Hassan",
"Naeemul",
""
],
[
"Li",
"Chengkai",
""
],
[
"Yang",
"Jun",
""
],
[
"Yu",
"Cong",
""
]
] | TITLE: Incremental Discovery of Prominent Situational Facts
ABSTRACT: We study the novel problem of finding new, prominent situational facts, which
are emerging statements about objects that stand out within certain contexts.
Many such facts are newsworthy---e.g., an athlete's outstanding performance in
a game, or a viral video's impressive popularity. Effective and efficient
identification of these facts assists journalists in reporting, one of the main
goals of computational journalism. Technically, we consider an ever-growing
table of objects with dimension and measure attributes. A situational fact is a
"contextual" skyline tuple that stands out against historical tuples in a
context, specified by a conjunctive constraint involving dimension attributes,
when a set of measure attributes are compared. New tuples are constantly added
to the table, reflecting events happening in the real world. Our goal is to
discover constraint-measure pairs that qualify a new tuple as a contextual
skyline tuple, and discover them quickly before the event becomes yesterday's
news. A brute-force approach requires exhaustive comparison with every tuple,
under every constraint, and in every measure subspace. We design algorithms in
response to these challenges using three corresponding ideas---tuple reduction,
constraint pruning, and sharing computation across measure subspaces. We also
adopt a simple prominence measure to rank the discovered facts when they are
numerous. Experiments over two real datasets validate the effectiveness and
efficiency of our techniques.
| no_new_dataset | 0.942665 |
1311.2008 | Matus Medo | Matus Medo | Statistical validation of high-dimensional models of growing networks | 8 pages, 5 figures, 2 tables | Phys. Rev. E 89, 032801, 2014 | 10.1103/PhysRevE.89.032801 | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The abundance of models of complex networks and the current insufficient
validation standards make it difficult to judge which models are strongly
supported by data and which are not. We focus here on likelihood maximization
methods for models of growing networks with many parameters and compare their
performance on artificial and real datasets. While high dimensionality of the
parameter space harms the performance of direct likelihood maximization on
artificial data, this can be improved by introducing a suitable penalization
term. Likelihood maximization on real data shows that the presented approach is
able to discriminate among available network models. To make large-scale
datasets accessible to this kind of analysis, we propose a subset sampling
technique and show that it yields substantial model evidence in a fraction of
time necessary for the analysis of the complete data.
| [
{
"version": "v1",
"created": "Fri, 8 Nov 2013 16:09:08 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jan 2014 13:16:06 GMT"
}
] | 2014-03-26T00:00:00 | [
[
"Medo",
"Matus",
""
]
] | TITLE: Statistical validation of high-dimensional models of growing networks
ABSTRACT: The abundance of models of complex networks and the current insufficient
validation standards make it difficult to judge which models are strongly
supported by data and which are not. We focus here on likelihood maximization
methods for models of growing networks with many parameters and compare their
performance on artificial and real datasets. While high dimensionality of the
parameter space harms the performance of direct likelihood maximization on
artificial data, this can be improved by introducing a suitable penalization
term. Likelihood maximization on real data shows that the presented approach is
able to discriminate among available network models. To make large-scale
datasets accessible to this kind of analysis, we propose a subset sampling
technique and show that it yields substantial model evidence in a fraction of
time necessary for the analysis of the complete data.
| no_new_dataset | 0.947284 |
1403.6270 | Alessio Guerrieri | Alessio Guerrieri, Alberto Montresor | Distributed Edge Partitioning for Graph Processing | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of larger and larger graph datasets, growing exponentially
over the years, has created several new algorithmic challenges to be addressed.
Sequential approaches have become unfeasible, while interest on parallel and
distributed algorithms has greatly increased.
Appropriately partitioning the graph as a preprocessing step can improve the
degree of parallelism of its analysis. A number of heuristic algorithms have
been developed to solve this problem, but many of them subdivide the graph on
its vertex set, thus obtaining a vertex-partitioned graph.
Aim of this paper is to explore a completely different approach based on edge
partitioning, in which edges, rather than vertices, are partitioned into
disjoint subsets. Contribution of this paper is twofold: first, we introduce a
graph processing framework based on edge partitioning, that is flexible enough
to be applied to several different graph problems. Second, we show the
feasibility of these ideas by presenting a distributed edge partitioning
algorithm called d-fep.
Our framework is thoroughly evaluated, using both simulations and an Hadoop
implementation running on the Amazon EC2 cloud. The experiments show that d-fep
is efficient, scalable and obtains consistently good partitions. The resulting
edge-partitioned graph can be exploited to obtain more efficient
implementations of graph analysis algorithms.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2014 09:38:12 GMT"
}
] | 2014-03-26T00:00:00 | [
[
"Guerrieri",
"Alessio",
""
],
[
"Montresor",
"Alberto",
""
]
] | TITLE: Distributed Edge Partitioning for Graph Processing
ABSTRACT: The availability of larger and larger graph datasets, growing exponentially
over the years, has created several new algorithmic challenges to be addressed.
Sequential approaches have become unfeasible, while interest on parallel and
distributed algorithms has greatly increased.
Appropriately partitioning the graph as a preprocessing step can improve the
degree of parallelism of its analysis. A number of heuristic algorithms have
been developed to solve this problem, but many of them subdivide the graph on
its vertex set, thus obtaining a vertex-partitioned graph.
Aim of this paper is to explore a completely different approach based on edge
partitioning, in which edges, rather than vertices, are partitioned into
disjoint subsets. Contribution of this paper is twofold: first, we introduce a
graph processing framework based on edge partitioning, that is flexible enough
to be applied to several different graph problems. Second, we show the
feasibility of these ideas by presenting a distributed edge partitioning
algorithm called d-fep.
Our framework is thoroughly evaluated, using both simulations and an Hadoop
implementation running on the Amazon EC2 cloud. The experiments show that d-fep
is efficient, scalable and obtains consistently good partitions. The resulting
edge-partitioned graph can be exploited to obtain more efficient
implementations of graph analysis algorithms.
| no_new_dataset | 0.941007 |
1403.6275 | Vibhav Vineet Mr | Vibhav Vineet, Jonathan Warrell and Philip H.S. Torr | A Tiered Move-making Algorithm for General Non-submodular Pairwise
Energies | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large number of problems in computer vision can be modelled as energy
minimization problems in a Markov Random Field (MRF) or Conditional Random
Field (CRF) framework. Graph-cuts based $\alpha$-expansion is a standard
move-making method to minimize the energy functions with sub-modular pairwise
terms. However, certain problems require more complex pairwise terms where the
$\alpha$-expansion method is generally not applicable.
In this paper, we propose an iterative {\em tiered move making algorithm}
which is able to handle general pairwise terms. Each move to the next
configuration is based on the current labeling and an optimal tiered move,
where each tiered move requires one application of the dynamic programming
based tiered labeling method introduced in Felzenszwalb et. al.
\cite{tiered_cvpr_felzenszwalbV10}. The algorithm converges to a local minimum
for any general pairwise potential, and we give a theoretical analysis of the
properties of the algorithm, characterizing the situations in which we can
expect good performance. We first evaluate our method on an object-class
segmentation problem using the Pascal VOC-11 segmentation dataset where we
learn general pairwise terms. Further we evaluate the algorithm on many other
benchmark labeling problems such as stereo, image segmentation, image stitching
and image denoising. Our method consistently gets better accuracy and energy
values than alpha-expansion, loopy belief propagation (LBP), quadratic
pseudo-boolean optimization (QPBO), and is competitive with TRWS.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2014 10:18:47 GMT"
}
] | 2014-03-26T00:00:00 | [
[
"Vineet",
"Vibhav",
""
],
[
"Warrell",
"Jonathan",
""
],
[
"Torr",
"Philip H. S.",
""
]
] | TITLE: A Tiered Move-making Algorithm for General Non-submodular Pairwise
Energies
ABSTRACT: A large number of problems in computer vision can be modelled as energy
minimization problems in a Markov Random Field (MRF) or Conditional Random
Field (CRF) framework. Graph-cuts based $\alpha$-expansion is a standard
move-making method to minimize the energy functions with sub-modular pairwise
terms. However, certain problems require more complex pairwise terms where the
$\alpha$-expansion method is generally not applicable.
In this paper, we propose an iterative {\em tiered move making algorithm}
which is able to handle general pairwise terms. Each move to the next
configuration is based on the current labeling and an optimal tiered move,
where each tiered move requires one application of the dynamic programming
based tiered labeling method introduced in Felzenszwalb et. al.
\cite{tiered_cvpr_felzenszwalbV10}. The algorithm converges to a local minimum
for any general pairwise potential, and we give a theoretical analysis of the
properties of the algorithm, characterizing the situations in which we can
expect good performance. We first evaluate our method on an object-class
segmentation problem using the Pascal VOC-11 segmentation dataset where we
learn general pairwise terms. Further we evaluate the algorithm on many other
benchmark labeling problems such as stereo, image segmentation, image stitching
and image denoising. Our method consistently gets better accuracy and energy
values than alpha-expansion, loopy belief propagation (LBP), quadratic
pseudo-boolean optimization (QPBO), and is competitive with TRWS.
| no_new_dataset | 0.949389 |
1403.6426 | Elmar Peise | Elmar Peise (1), Diego Fabregat-Traver (1), Paolo Bientinesi (1) ((1)
AICES, RWTH Aachen) | High Performance Solutions for Big-data GWAS | Submitted to Parallel Computing. arXiv admin note: substantial text
overlap with arXiv:1304.2272 | null | null | AICES-2013/12-01 | q-bio.GN cs.CE cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms and thousands of phenotypes
come at the cost of hundreds of gigabytes of data, which can only be kept in
secondary storage; 2) the relatedness of the test population is represented by
a relationship matrix, which, for large populations, can only fit in the
combined main memory of a distributed architecture. In this paper, by using
distributed resources such as Cloud or clusters, we address both challenges:
The genotype and phenotype data is streamed from secondary storage using a
double buffering technique, while the relationship matrix is kept across the
main memory of a distributed memory system. With the help of these solutions,
we develop separate algorithms for studies involving only one or a multitude of
traits. We show that these algorithms sustain high-performance and allow the
analysis of enormous datasets.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2014 17:21:55 GMT"
}
] | 2014-03-26T00:00:00 | [
[
"Peise",
"Elmar",
""
],
[
"Fabregat-Traver",
"Diego",
""
],
[
"Bientinesi",
"Paolo",
""
]
] | TITLE: High Performance Solutions for Big-data GWAS
ABSTRACT: In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms and thousands of phenotypes
come at the cost of hundreds of gigabytes of data, which can only be kept in
secondary storage; 2) the relatedness of the test population is represented by
a relationship matrix, which, for large populations, can only fit in the
combined main memory of a distributed architecture. In this paper, by using
distributed resources such as Cloud or clusters, we address both challenges:
The genotype and phenotype data is streamed from secondary storage using a
double buffering technique, while the relationship matrix is kept across the
main memory of a distributed memory system. With the help of these solutions,
we develop separate algorithms for studies involving only one or a multitude of
traits. We show that these algorithms sustain high-performance and allow the
analysis of enormous datasets.
| no_new_dataset | 0.943919 |
1312.5663 | Alireza Makhzani | Alireza Makhzani, Brendan Frey | k-Sparse Autoencoders | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, it has been observed that when representations are learnt in a way
that encourages sparsity, improved performance is obtained on classification
tasks. These methods involve combinations of activation functions, sampling
steps and different kinds of penalties. To investigate the effectiveness of
sparsity by itself, we propose the k-sparse autoencoder, which is an
autoencoder with linear activation function, where in hidden layers only the k
highest activities are kept. When applied to the MNIST and NORB datasets, we
find that this method achieves better classification results than denoising
autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders
are simple to train and the encoding stage is very fast, making them
well-suited to large problem sizes, where conventional sparse coding algorithms
cannot be applied.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 17:46:46 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2014 17:12:07 GMT"
}
] | 2014-03-25T00:00:00 | [
[
"Makhzani",
"Alireza",
""
],
[
"Frey",
"Brendan",
""
]
] | TITLE: k-Sparse Autoencoders
ABSTRACT: Recently, it has been observed that when representations are learnt in a way
that encourages sparsity, improved performance is obtained on classification
tasks. These methods involve combinations of activation functions, sampling
steps and different kinds of penalties. To investigate the effectiveness of
sparsity by itself, we propose the k-sparse autoencoder, which is an
autoencoder with linear activation function, where in hidden layers only the k
highest activities are kept. When applied to the MNIST and NORB datasets, we
find that this method achieves better classification results than denoising
autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders
are simple to train and the encoding stage is very fast, making them
well-suited to large problem sizes, where conventional sparse coding algorithms
cannot be applied.
| no_new_dataset | 0.949342 |
1403.5693 | Dougal Maclaurin | Dougal Maclaurin and Ryan P. Adams | Firefly Monte Carlo: Exact MCMC with Subsets of Data | null | null | null | null | stat.ML cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose
tool for Bayesian inference. However, MCMC cannot be practically applied to
large data sets because of the prohibitive cost of evaluating every likelihood
term at every iteration. Here we present Firefly Monte Carlo (FlyMC) an
auxiliary variable MCMC algorithm that only queries the likelihoods of a
potentially small subset of the data at each iteration yet simulates from the
exact posterior distribution, in contrast to recent proposals that are
approximate even in the asymptotic limit. FlyMC is compatible with a wide
variety of modern MCMC algorithms, and only requires a lower bound on the
per-datum likelihood factors. In experiments, we find that FlyMC generates
samples from the posterior more than an order of magnitude faster than regular
MCMC, opening up MCMC methods to larger datasets than were previously
considered feasible.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2014 18:21:29 GMT"
}
] | 2014-03-25T00:00:00 | [
[
"Maclaurin",
"Dougal",
""
],
[
"Adams",
"Ryan P.",
""
]
] | TITLE: Firefly Monte Carlo: Exact MCMC with Subsets of Data
ABSTRACT: Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose
tool for Bayesian inference. However, MCMC cannot be practically applied to
large data sets because of the prohibitive cost of evaluating every likelihood
term at every iteration. Here we present Firefly Monte Carlo (FlyMC) an
auxiliary variable MCMC algorithm that only queries the likelihoods of a
potentially small subset of the data at each iteration yet simulates from the
exact posterior distribution, in contrast to recent proposals that are
approximate even in the asymptotic limit. FlyMC is compatible with a wide
variety of modern MCMC algorithms, and only requires a lower bound on the
per-datum likelihood factors. In experiments, we find that FlyMC generates
samples from the posterior more than an order of magnitude faster than regular
MCMC, opening up MCMC methods to larger datasets than were previously
considered feasible.
| no_new_dataset | 0.9463 |
1403.5877 | Anastasios Kyrillidis | Anastasios Kyrillidis and Anastasios Zouzias | Non-uniform Feature Sampling for Decision Tree Ensembles | 7 pages, 7 figures, 1 table | null | null | null | stat.ML cs.IT cs.LG math.IT stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effectiveness of non-uniform randomized feature selection in
decision tree classification. We experimentally evaluate two feature selection
methodologies, based on information extracted from the provided dataset: $(i)$
\emph{leverage scores-based} and $(ii)$ \emph{norm-based} feature selection.
Experimental evaluation of the proposed feature selection techniques indicate
that such approaches might be more effective compared to naive uniform feature
selection and moreover having comparable performance to the random forest
algorithm [3]
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2014 08:26:19 GMT"
}
] | 2014-03-25T00:00:00 | [
[
"Kyrillidis",
"Anastasios",
""
],
[
"Zouzias",
"Anastasios",
""
]
] | TITLE: Non-uniform Feature Sampling for Decision Tree Ensembles
ABSTRACT: We study the effectiveness of non-uniform randomized feature selection in
decision tree classification. We experimentally evaluate two feature selection
methodologies, based on information extracted from the provided dataset: $(i)$
\emph{leverage scores-based} and $(ii)$ \emph{norm-based} feature selection.
Experimental evaluation of the proposed feature selection techniques indicate
that such approaches might be more effective compared to naive uniform feature
selection and moreover having comparable performance to the random forest
algorithm [3]
| no_new_dataset | 0.953535 |
1403.5299 | Arkadiusz Stopczynski Mr. | Arkadiusz Stopczynski, Riccardo Pietri, Alex Pentland, David Lazer,
Sune Lehmann | Privacy in Sensor-Driven Human Data Collection: A Guide for
Practitioners | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the amount of information collected about human beings has
increased dramatically. This development has been partially driven by
individuals posting and storing data about themselves and friends using online
social networks or collecting their data for self-tracking purposes
(quantified-self movement). Across the sciences, researchers conduct studies
collecting data with an unprecedented resolution and scale. Using computational
power combined with mathematical models, such rich datasets can be mined to
infer underlying patterns, thereby providing insights into human nature. Much
of the data collected is sensitive. It is private in the sense that most
individuals would feel uncomfortable sharing their collected personal data
publicly. For this reason, the need for solutions to ensure the privacy of the
individuals generating data has grown alongside the data collection efforts.
Out of all the massive data collection efforts, this paper focuses on efforts
directly instrumenting human behavior, and notes that -- in many cases -- the
privacy of participants is not sufficiently addressed. For example, study
purposes are often not explicit, informed consent is ill-defined, and security
and sharing protocols are only partially disclosed. This paper provides a
survey of the work related to addressing privacy issues in research studies
that collect detailed sensor data on human behavior. Reflections on the key
problems and recommendations for future work are included. We hope the overview
of the privacy-related practices in massive data collection studies can be used
as a frame of reference for practitioners in the field. Although focused on
data collection in an academic context, we believe that many of the challenges
and solutions we identify are also relevant and useful for other domains where
massive data collection takes place, including businesses and governments.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2014 21:16:55 GMT"
}
] | 2014-03-24T00:00:00 | [
[
"Stopczynski",
"Arkadiusz",
""
],
[
"Pietri",
"Riccardo",
""
],
[
"Pentland",
"Alex",
""
],
[
"Lazer",
"David",
""
],
[
"Lehmann",
"Sune",
""
]
] | TITLE: Privacy in Sensor-Driven Human Data Collection: A Guide for
Practitioners
ABSTRACT: In recent years, the amount of information collected about human beings has
increased dramatically. This development has been partially driven by
individuals posting and storing data about themselves and friends using online
social networks or collecting their data for self-tracking purposes
(quantified-self movement). Across the sciences, researchers conduct studies
collecting data with an unprecedented resolution and scale. Using computational
power combined with mathematical models, such rich datasets can be mined to
infer underlying patterns, thereby providing insights into human nature. Much
of the data collected is sensitive. It is private in the sense that most
individuals would feel uncomfortable sharing their collected personal data
publicly. For this reason, the need for solutions to ensure the privacy of the
individuals generating data has grown alongside the data collection efforts.
Out of all the massive data collection efforts, this paper focuses on efforts
directly instrumenting human behavior, and notes that -- in many cases -- the
privacy of participants is not sufficiently addressed. For example, study
purposes are often not explicit, informed consent is ill-defined, and security
and sharing protocols are only partially disclosed. This paper provides a
survey of the work related to addressing privacy issues in research studies
that collect detailed sensor data on human behavior. Reflections on the key
problems and recommendations for future work are included. We hope the overview
of the privacy-related practices in massive data collection studies can be used
as a frame of reference for practitioners in the field. Although focused on
data collection in an academic context, we believe that many of the challenges
and solutions we identify are also relevant and useful for other domains where
massive data collection takes place, including businesses and governments.
| no_new_dataset | 0.932515 |
1403.5381 | Silu Huang | Silu Huang, Ada Wai-Chee Fu | ({\alpha}, k)-Minimal Sorting and Skew Join in MPI and MapReduce | 18 pages | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As computer clusters are found to be highly effective for handling massive
datasets, the design of efficient parallel algorithms for such a computing
model is of great interest. We consider ({\alpha}, k)-minimal algorithms for
such a purpose, where {\alpha} is the number of rounds in the algorithm, and k
is a bound on the deviation from perfect workload balance. We focus on new
({\alpha}, k)-minimal algorithms for sorting and skew equijoin operations for
computer clusters. To the best of our knowledge the proposed sorting and skew
join algorithms achieve the best workload balancing guarantee when compared to
previous works. Our empirical study shows that they are close to optimal in
workload balancing. In particular, our proposed sorting algorithm is around 25%
more efficient than the state-of-the-art Terasort algorithm and achieves
significantly more even workload distribution by over 50%.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2014 07:10:16 GMT"
}
] | 2014-03-24T00:00:00 | [
[
"Huang",
"Silu",
""
],
[
"Fu",
"Ada Wai-Chee",
""
]
] | TITLE: ({\alpha}, k)-Minimal Sorting and Skew Join in MPI and MapReduce
ABSTRACT: As computer clusters are found to be highly effective for handling massive
datasets, the design of efficient parallel algorithms for such a computing
model is of great interest. We consider ({\alpha}, k)-minimal algorithms for
such a purpose, where {\alpha} is the number of rounds in the algorithm, and k
is a bound on the deviation from perfect workload balance. We focus on new
({\alpha}, k)-minimal algorithms for sorting and skew equijoin operations for
computer clusters. To the best of our knowledge the proposed sorting and skew
join algorithms achieve the best workload balancing guarantee when compared to
previous works. Our empirical study shows that they are close to optimal in
workload balancing. In particular, our proposed sorting algorithm is around 25%
more efficient than the state-of-the-art Terasort algorithm and achieves
significantly more even workload distribution by over 50%.
| no_new_dataset | 0.948585 |
1403.5488 | Tshilidzi Marwala | Collins Leke, Bhekisipho Twala, and T. Marwala | Missing Data Prediction and Classification: The Use of Auto-Associative
Neural Networks and Optimization Algorithms | null | null | null | null | cs.NE cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper presents methods which are aimed at finding approximations to
missing data in a dataset by using optimization algorithms to optimize the
network parameters after which prediction and classification tasks can be
performed. The optimization methods that are considered are genetic algorithm
(GA), simulated annealing (SA), particle swarm optimization (PSO), random
forest (RF) and negative selection (NS) and these methods are individually used
in combination with auto-associative neural networks (AANN) for missing data
estimation and the results obtained are compared. The methods suggested use the
optimization algorithms to minimize an error function derived from training the
auto-associative neural network during which the interrelationships between the
inputs and the outputs are obtained and stored in the weights connecting the
different layers of the network. The error function is expressed as the square
of the difference between the actual observations and predicted values from an
auto-associative neural network. In the event of missing data, all the values
of the actual observations are not known hence, the error function is
decomposed to depend on the known and unknown variable values. Multi-layer
perceptron (MLP) neural network is employed to train the neural networks using
the scaled conjugate gradient (SCG) method. Prediction accuracy is determined
by mean squared error (MSE), root mean squared error (RMSE), mean absolute
error (MAE), and correlation coefficient (r) computations. Accuracy in
classification is obtained by plotting ROC curves and calculating the areas
under these. Analysis of results depicts that the approach using RF with AANN
produces the most accurate predictions and classifications while on the other
end of the scale is the approach which entails using NS with AANN.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2014 15:11:52 GMT"
}
] | 2014-03-24T00:00:00 | [
[
"Leke",
"Collins",
""
],
[
"Twala",
"Bhekisipho",
""
],
[
"Marwala",
"T.",
""
]
] | TITLE: Missing Data Prediction and Classification: The Use of Auto-Associative
Neural Networks and Optimization Algorithms
ABSTRACT: This paper presents methods which are aimed at finding approximations to
missing data in a dataset by using optimization algorithms to optimize the
network parameters after which prediction and classification tasks can be
performed. The optimization methods that are considered are genetic algorithm
(GA), simulated annealing (SA), particle swarm optimization (PSO), random
forest (RF) and negative selection (NS) and these methods are individually used
in combination with auto-associative neural networks (AANN) for missing data
estimation and the results obtained are compared. The methods suggested use the
optimization algorithms to minimize an error function derived from training the
auto-associative neural network during which the interrelationships between the
inputs and the outputs are obtained and stored in the weights connecting the
different layers of the network. The error function is expressed as the square
of the difference between the actual observations and predicted values from an
auto-associative neural network. In the event of missing data, all the values
of the actual observations are not known hence, the error function is
decomposed to depend on the known and unknown variable values. Multi-layer
perceptron (MLP) neural network is employed to train the neural networks using
the scaled conjugate gradient (SCG) method. Prediction accuracy is determined
by mean squared error (MSE), root mean squared error (RMSE), mean absolute
error (MAE), and correlation coefficient (r) computations. Accuracy in
classification is obtained by plotting ROC curves and calculating the areas
under these. Analysis of results depicts that the approach using RF with AANN
produces the most accurate predictions and classifications while on the other
end of the scale is the approach which entails using NS with AANN.
| no_new_dataset | 0.945851 |
1403.5115 | Ugo Louche | Ugo Louche (LIF), Liva Ralaivola (LIF) | Unconfused Ultraconservative Multiclass Algorithms | ACML, Australia (2013) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these
contributions, the proposed approaches to fight the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
Keywords: Multiclass classification, Perceptron, Noisy labels, Confusion Matrix
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2014 12:46:33 GMT"
}
] | 2014-03-21T00:00:00 | [
[
"Louche",
"Ugo",
"",
"LIF"
],
[
"Ralaivola",
"Liva",
"",
"LIF"
]
] | TITLE: Unconfused Ultraconservative Multiclass Algorithms
ABSTRACT: We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these
contributions, the proposed approaches to fight the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
Keywords: Multiclass classification, Perceptron, Noisy labels, Confusion Matrix
| no_new_dataset | 0.947962 |
1403.5118 | Robin Lovelace Dr | Robin Lovelace, Nick Malleson, Kirk Harland and Mark Birkin | Geotagged tweets to inform a spatial interaction model: a case study of
museums | A concise version of this article was submitted to GISRUK2014
conference | null | null | null | stat.ME cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the potential of volunteered geographical information
from social media for informing geographical models of behavior, based on a
case study of museums in Yorkshire, UK. A spatial interaction model of visitors
to 15 museums from 179 administrative zones is constructed to test this
potential. The main input dataset comprises geo-tagged messages harvested using
the Twitter Streaming Application Programming Interface (API), filtered,
analyzed and aggregated to allow direct comparison with the model's output.
Comparison between model output and tweet information allowed the calibration
of model parameters to optimize the fit between flows to museums inferred from
tweets and flow matrices generated by the spatial interaction model. We
conclude that volunteered geographic information from social media sites have
great potential for informing geographical models of behavior, especially if
the volume of geo-tagged social media messages continues to increase. However,
we caution that volunteered geographical information from social media has some
major limitations so should be used only as a supplement to more consistent
data sources or when official datasets are unavailable.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2014 12:48:24 GMT"
}
] | 2014-03-21T00:00:00 | [
[
"Lovelace",
"Robin",
""
],
[
"Malleson",
"Nick",
""
],
[
"Harland",
"Kirk",
""
],
[
"Birkin",
"Mark",
""
]
] | TITLE: Geotagged tweets to inform a spatial interaction model: a case study of
museums
ABSTRACT: This paper explores the potential of volunteered geographical information
from social media for informing geographical models of behavior, based on a
case study of museums in Yorkshire, UK. A spatial interaction model of visitors
to 15 museums from 179 administrative zones is constructed to test this
potential. The main input dataset comprises geo-tagged messages harvested using
the Twitter Streaming Application Programming Interface (API), filtered,
analyzed and aggregated to allow direct comparison with the model's output.
Comparison between model output and tweet information allowed the calibration
of model parameters to optimize the fit between flows to museums inferred from
tweets and flow matrices generated by the spatial interaction model. We
conclude that volunteered geographic information from social media sites has
great potential for informing geographical models of behavior, especially if
the volume of geo-tagged social media messages continues to increase. However,
we caution that volunteered geographical information from social media has some
major limitations, so it should be used only as a supplement to more consistent
data sources or when official datasets are unavailable.
| no_new_dataset | 0.951414 |
1403.4415 | Julia Perl | Julia Preusse, J\'er\^ome Kunegis, Matthias Thimm, Sergej Sizov | DecLiNe -- Models for Decay of Links in Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prediction of graph evolution is an important and challenging problem in
the analysis of networks and of the Web in particular. But while the appearance
of new links is part of virtually every model of Web growth, the disappearance
of links has received much less attention in the literature. To fill this gap,
our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to
predict link decay in networks, based on structural analysis of corresponding
graph models. In analogy to the link prediction problem, we show that analysis
of graph structures can help to identify indicators for superfluous links under
consideration of common network models. In doing so, we introduce novel metrics
that quantify the likelihood that certain links in social graphs remain in the
network, and combine them with state-of-the-art machine learning methods for
predicting link decay. Our methods are independent of the underlying network
type, and can be applied to such diverse networks as the Web, social networks
and any other structure representable as a network, and can be easily combined
with case-specific content analysis and adopted for a variety of social network
mining, filtering and recommendation applications. In systematic evaluations
with large-scale datasets of Wikipedia we show the practical feasibility of the
proposed structure-based link decay prediction algorithms.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2014 11:44:36 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2014 13:03:24 GMT"
}
] | 2014-03-20T00:00:00 | [
[
"Preusse",
"Julia",
""
],
[
"Kunegis",
"Jérôme",
""
],
[
"Thimm",
"Matthias",
""
],
[
"Sizov",
"Sergej",
""
]
] | TITLE: DecLiNe -- Models for Decay of Links in Networks
ABSTRACT: The prediction of graph evolution is an important and challenging problem in
the analysis of networks and of the Web in particular. But while the appearance
of new links is part of virtually every model of Web growth, the disappearance
of links has received much less attention in the literature. To fill this gap,
our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to
predict link decay in networks, based on structural analysis of corresponding
graph models. In analogy to the link prediction problem, we show that analysis
of graph structures can help to identify indicators for superfluous links under
consideration of common network models. In doing so, we introduce novel metrics
that quantify the likelihood that certain links in social graphs remain in the
network, and combine them with state-of-the-art machine learning methods for
predicting link decay. Our methods are independent of the underlying network
type, and can be applied to such diverse networks as the Web, social networks
and any other structure representable as a network, and can be easily combined
with case-specific content analysis and adopted for a variety of social network
mining, filtering and recommendation applications. In systematic evaluations
with large-scale datasets of Wikipedia we show the practical feasibility of the
proposed structure-based link decay prediction algorithms.
| no_new_dataset | 0.949106 |
1403.4781 | Subhadip Mukherjee | Subhadip Mukherjee and Chandra Sekhar Seelamantula | A Split-and-Merge Dictionary Learning Algorithm for Sparse
Representation | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In big data image/video analytics, we encounter the problem of learning an
overcomplete dictionary for sparse representation from a large training
dataset, which cannot be processed at once because of storage and
computational constraints. To tackle the problem of dictionary learning in such
scenarios, we propose an algorithm for parallel dictionary learning. The
fundamental idea behind the algorithm is to learn a sparse representation in
two phases. In the first phase, the whole training dataset is partitioned into
small non-overlapping subsets, and a dictionary is trained independently on
each small database. In the second phase, the dictionaries are merged to form a
global dictionary. We show that the proposed algorithm is efficient in terms of
memory usage and computational complexity, and performs on par with the
standard learning strategy operating on the entire data at a time. As an
application, we consider the problem of image denoising. We present a
comparative analysis of our algorithm with the standard learning techniques,
that use the entire database at a time, in terms of training and denoising
performance. We observe that the split-and-merge algorithm results in a
remarkable reduction of training time, without significantly affecting the
denoising performance.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2014 12:16:17 GMT"
}
] | 2014-03-20T00:00:00 | [
[
"Mukherjee",
"Subhadip",
""
],
[
"Seelamantula",
"Chandra Sekhar",
""
]
] | TITLE: A Split-and-Merge Dictionary Learning Algorithm for Sparse
Representation
ABSTRACT: In big data image/video analytics, we encounter the problem of learning an
overcomplete dictionary for sparse representation from a large training
dataset, which cannot be processed at once because of storage and
computational constraints. To tackle the problem of dictionary learning in such
scenarios, we propose an algorithm for parallel dictionary learning. The
fundamental idea behind the algorithm is to learn a sparse representation in
two phases. In the first phase, the whole training dataset is partitioned into
small non-overlapping subsets, and a dictionary is trained independently on
each small database. In the second phase, the dictionaries are merged to form a
global dictionary. We show that the proposed algorithm is efficient in terms of
memory usage and computational complexity, and performs on par with the
standard learning strategy operating on the entire data at a time. As an
application, we consider the problem of image denoising. We present a
comparative analysis of our algorithm with the standard learning techniques,
that use the entire database at a time, in terms of training and denoising
performance. We observe that the split-and-merge algorithm results in a
remarkable reduction of training time, without significantly affecting the
denoising performance.
| no_new_dataset | 0.946547 |
1403.4540 | Llu\'is Belanche-Mu\~noz | Llu\'is Belanche and Jer\'onimo Hern\'andez | Similarity networks for classification: a case study in the Horse Colic
problem | 16 pages, 1 figure Universitat Polit\`ecnica de Catalunya preprint | null | null | Technical Report LSI-14-4-R | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper develops a two-layer neural network in which the neuron model
computes a user-defined similarity function between inputs and weights. The
neuron transfer function is formed by composition of an adapted logistic
function with the mean of the partial input-weight similarities. The resulting
neuron model is capable of dealing directly with variables of potentially
different nature (continuous, fuzzy, ordinal, categorical). There is also
provision for missing values. The network is trained using a two-stage
procedure very similar to that used to train a radial basis function (RBF)
neural network. The network is compared to two types of RBF networks in a
non-trivial dataset: the Horse Colic problem, taken as a case study and
analyzed in detail.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2014 17:15:21 GMT"
}
] | 2014-03-19T00:00:00 | [
[
"Belanche",
"Lluís",
""
],
[
"Hernández",
"Jerónimo",
""
]
] | TITLE: Similarity networks for classification: a case study in the Horse Colic
problem
ABSTRACT: This paper develops a two-layer neural network in which the neuron model
computes a user-defined similarity function between inputs and weights. The
neuron transfer function is formed by composition of an adapted logistic
function with the mean of the partial input-weight similarities. The resulting
neuron model is capable of dealing directly with variables of potentially
different nature (continuous, fuzzy, ordinal, categorical). There is also
provision for missing values. The network is trained using a two-stage
procedure very similar to that used to train a radial basis function (RBF)
neural network. The network is compared to two types of RBF networks in a
non-trivial dataset: the Horse Colic problem, taken as a case study and
analyzed in detail.
| no_new_dataset | 0.951459 |
1206.5580 | John Moeller | John Moeller, Parasaran Raman, Avishek Saha, Suresh Venkatasubramanian | A Geometric Algorithm for Scalable Multiple Kernel Learning | 20 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a geometric formulation of the Multiple Kernel Learning (MKL)
problem. To do so, we reinterpret the problem of learning kernel weights as
searching for a kernel that maximizes the minimum (kernel) distance between two
convex polytopes. This interpretation combined with novel structural insights
from our geometric formulation allows us to reduce the MKL problem to a simple
optimization routine that yields provable convergence as well as quality
guarantees. As a result our method scales efficiently to much larger data sets
than most prior methods can handle. Empirical evaluation on eleven datasets
shows that we are significantly faster and even compare favorably with a
uniform unweighted combination of kernels.
| [
{
"version": "v1",
"created": "Mon, 25 Jun 2012 05:57:29 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2014 04:33:18 GMT"
}
] | 2014-03-18T00:00:00 | [
[
"Moeller",
"John",
""
],
[
"Raman",
"Parasaran",
""
],
[
"Saha",
"Avishek",
""
],
[
"Venkatasubramanian",
"Suresh",
""
]
] | TITLE: A Geometric Algorithm for Scalable Multiple Kernel Learning
ABSTRACT: We present a geometric formulation of the Multiple Kernel Learning (MKL)
problem. To do so, we reinterpret the problem of learning kernel weights as
searching for a kernel that maximizes the minimum (kernel) distance between two
convex polytopes. This interpretation combined with novel structural insights
from our geometric formulation allows us to reduce the MKL problem to a simple
optimization routine that yields provable convergence as well as quality
guarantees. As a result our method scales efficiently to much larger data sets
than most prior methods can handle. Empirical evaluation on eleven datasets
shows that we are significantly faster and even compare favorably with a
uniform unweighted combination of kernels.
| no_new_dataset | 0.9434 |
1307.1289 | Miguel Angel Veganzones | Miguel Angel Veganzones (GIPSA), Mihai Datcu (DLR), Manuel Gra\~na
(GIC) | Further results on dissimilarity spaces for hyperspectral images RF-CBIR | In Pattern Recognition Letters (2013) | Pattern Recognition Letters 34, 14 (2013) 1659-1668 | 10.1016/j.patrec.2013.05.025 | veganzones_PRL2013 | cs.IR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Content-Based Image Retrieval (CBIR) systems are powerful search tools in
image databases that have been little applied to hyperspectral images.
Relevance feedback (RF) is an iterative process that uses machine learning
techniques and user feedback to improve the performance of CBIR systems. We
sought to expand previous research on hyperspectral CBIR systems built on
dissimilarity functions defined either on spectral and spatial features
extracted by spectral unmixing techniques, or on dictionaries extracted by
dictionary-based compressors. These dissimilarity functions were not suitable
for direct application in common machine learning techniques. We propose to use
a general RF approach based on dissimilarity spaces, which is more appropriate
for the application of machine learning algorithms to the hyperspectral
RF-CBIR. We validate the proposed RF method for hyperspectral CBIR systems over
a real hyperspectral dataset.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 11:58:04 GMT"
}
] | 2014-03-18T00:00:00 | [
[
"Veganzones",
"Miguel Angel",
"",
"GIPSA"
],
[
"Datcu",
"Mihai",
"",
"DLR"
],
[
"Graña",
"Manuel",
"",
"GIC"
]
] | TITLE: Further results on dissimilarity spaces for hyperspectral images RF-CBIR
ABSTRACT: Content-Based Image Retrieval (CBIR) systems are powerful search tools in
image databases that have been little applied to hyperspectral images.
Relevance feedback (RF) is an iterative process that uses machine learning
techniques and user feedback to improve the performance of CBIR systems. We
sought to expand previous research on hyperspectral CBIR systems built on
dissimilarity functions defined either on spectral and spatial features
extracted by spectral unmixing techniques, or on dictionaries extracted by
dictionary-based compressors. These dissimilarity functions were not suitable
for direct application in common machine learning techniques. We propose to use
a general RF approach based on dissimilarity spaces, which is more appropriate
for the application of machine learning algorithms to the hyperspectral
RF-CBIR. We validate the proposed RF method for hyperspectral CBIR systems over
a real hyperspectral dataset.
| no_new_dataset | 0.950457 |
1403.3829 | Wei Di | Zixuan Wang, Wei Di, Anurag Bhardwaj, Vignesh Jagadeesh, Robinson
Piramuthu | Geometric VLAD for Large Scale Image Search | 8 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | We present a novel compact image descriptor for large scale image search. Our
proposed descriptor - Geometric VLAD (gVLAD) is an extension of VLAD (Vector of
Locally Aggregated Descriptors) that incorporates weak geometry information
into the VLAD framework. The proposed geometry cues are derived as a membership
function over keypoint angles, which contain evident and informative information
yet are often discarded. A principled technique for learning the membership
function by clustering angles is also presented. Further, to address the
overhead of iterative codebook training over real-time datasets, a novel
codebook adaptation strategy is outlined. Finally, we demonstrate the efficacy
of proposed gVLAD based retrieval framework where we achieve more than 15%
improvement in mAP over existing benchmarks.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2014 17:35:26 GMT"
}
] | 2014-03-18T00:00:00 | [
[
"Wang",
"Zixuan",
""
],
[
"Di",
"Wei",
""
],
[
"Bhardwaj",
"Anurag",
""
],
[
"Jagadeesh",
"Vignesh",
""
],
[
"Piramuthu",
"Robinson",
""
]
] | TITLE: Geometric VLAD for Large Scale Image Search
ABSTRACT: We present a novel compact image descriptor for large scale image search. Our
proposed descriptor - Geometric VLAD (gVLAD) is an extension of VLAD (Vector of
Locally Aggregated Descriptors) that incorporates weak geometry information
into the VLAD framework. The proposed geometry cues are derived as a membership
function over keypoint angles, which contain evident and informative information
yet are often discarded. A principled technique for learning the membership
function by clustering angles is also presented. Further, to address the
overhead of iterative codebook training over real-time datasets, a novel
codebook adaptation strategy is outlined. Finally, we demonstrate the efficacy
of proposed gVLAD based retrieval framework where we achieve more than 15%
improvement in mAP over existing benchmarks.
| no_new_dataset | 0.947235 |
1403.4017 | Longqi Yang | Longqi Yang, Yibing Wang, Zhisong Pan and Guyu Hu | Multi-task Feature Selection based Anomaly Detection | 6 pages, 5 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network anomaly detection is still a vibrant research area. As the fast
growth of network bandwidth and the tremendous traffic on the network continue,
an extremely challenging question arises: how can we efficiently and accurately
detect anomalies in multiple traffic streams? In multi-task learning, the traffic
consisting of flows at different time periods is considered as a task. Multiple
tasks at different time periods are performed simultaneously to detect anomalies.
In this paper, we apply multi-task feature selection to the network anomaly
detection area, which provides a powerful method to gather information from
multiple traffic streams and detect anomalies on them simultaneously. In particular, the
multi-task feature selection includes the well-known l1-norm based feature
selection as a special case given only one task. Moreover, we show that the
multi-task feature selection is more accurate by utilizing more information
simultaneously than the l1-norm based method. At the evaluation stage, we
preprocess the raw data trace from a trans-Pacific backbone link between Japan
and the United States, label it with anomaly communities, and generate a
248-feature dataset. We show empirically that the multi-task feature selection
outperforms independent l1-norm based feature selection on a real traffic
dataset.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2014 08:04:41 GMT"
}
] | 2014-03-18T00:00:00 | [
[
"Yang",
"Longqi",
""
],
[
"Wang",
"Yibing",
""
],
[
"Pan",
"Zhisong",
""
],
[
"Hu",
"Guyu",
""
]
] | TITLE: Multi-task Feature Selection based Anomaly Detection
ABSTRACT: Network anomaly detection is still a vibrant research area. As the fast
growth of network bandwidth and the tremendous traffic on the network continue,
an extremely challenging question arises: how can we efficiently and accurately
detect anomalies in multiple traffic streams? In multi-task learning, the traffic
consisting of flows at different time periods is considered as a task. Multiple
tasks at different time periods are performed simultaneously to detect anomalies.
In this paper, we apply multi-task feature selection to the network anomaly
detection area, which provides a powerful method to gather information from
multiple traffic streams and detect anomalies on them simultaneously. In particular, the
multi-task feature selection includes the well-known l1-norm based feature
selection as a special case given only one task. Moreover, we show that the
multi-task feature selection is more accurate by utilizing more information
simultaneously than the l1-norm based method. At the evaluation stage, we
preprocess the raw data trace from a trans-Pacific backbone link between Japan
and the United States, label it with anomaly communities, and generate a
248-feature dataset. We show empirically that the multi-task feature selection
outperforms independent l1-norm based feature selection on a real traffic
dataset.
| no_new_dataset | 0.910227 |
1205.1758 | Jonathan Ullman | Justin Thaler, Jonathan Ullman, Salil Vadhan | Faster Algorithms for Privately Releasing Marginals | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of releasing $k$-way marginals of a database $D \in
(\{0,1\}^d)^n$, while preserving differential privacy. The answer to a $k$-way
marginal query is the fraction of $D$'s records $x \in \{0,1\}^d$ with a given
value in each of a given set of up to $k$ columns. Marginal queries enable a
rich class of statistical analyses of a dataset, and designing efficient
algorithms for privately releasing marginal queries has been identified as an
important open problem in private data analysis (cf. Barak et. al., PODS '07).
We give an algorithm that runs in time $d^{O(\sqrt{k})}$ and releases a
private summary capable of answering any $k$-way marginal query with at most
$\pm .01$ error on every query as long as $n \geq d^{O(\sqrt{k})}$. To our
knowledge, ours is the first algorithm capable of privately releasing marginal
queries with non-trivial worst-case accuracy guarantees in time substantially
smaller than the number of $k$-way marginal queries, which is $d^{\Theta(k)}$
(for $k \ll d$).
| [
{
"version": "v1",
"created": "Tue, 8 May 2012 17:43:11 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Jun 2012 14:59:54 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2014 13:56:07 GMT"
}
] | 2014-03-17T00:00:00 | [
[
"Thaler",
"Justin",
""
],
[
"Ullman",
"Jonathan",
""
],
[
"Vadhan",
"Salil",
""
]
] | TITLE: Faster Algorithms for Privately Releasing Marginals
ABSTRACT: We study the problem of releasing $k$-way marginals of a database $D \in
(\{0,1\}^d)^n$, while preserving differential privacy. The answer to a $k$-way
marginal query is the fraction of $D$'s records $x \in \{0,1\}^d$ with a given
value in each of a given set of up to $k$ columns. Marginal queries enable a
rich class of statistical analyses of a dataset, and designing efficient
algorithms for privately releasing marginal queries has been identified as an
important open problem in private data analysis (cf. Barak et. al., PODS '07).
We give an algorithm that runs in time $d^{O(\sqrt{k})}$ and releases a
private summary capable of answering any $k$-way marginal query with at most
$\pm .01$ error on every query as long as $n \geq d^{O(\sqrt{k})}$. To our
knowledge, ours is the first algorithm capable of privately releasing marginal
queries with non-trivial worst-case accuracy guarantees in time substantially
smaller than the number of $k$-way marginal queries, which is $d^{\Theta(k)}$
(for $k \ll d$).
| no_new_dataset | 0.93744 |
1304.0869 | Conrad Sanderson | Yongkang Wong, Shaokang Chen, Sandra Mau, Conrad Sanderson, Brian C.
Lovell | Patch-based Probabilistic Image Quality Assessment for Face Selection
and Improved Video-based Face Recognition | null | IEEE Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), pp. 74-81, 2011 | 10.1109/CVPRW.2011.5981881 | null | cs.CV stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In video based face recognition, face images are typically captured over
multiple frames in uncontrolled conditions, where head pose, illumination,
shadowing, motion blur and focus change over the sequence. Additionally,
inaccuracies in face localisation can also introduce scale and alignment
variations. Using all face images, including images of poor quality, can
actually degrade face recognition performance. While one solution is to use
only the "best" subset of images, current face selection techniques are
incapable of simultaneously handling all of the abovementioned issues. We
propose an efficient patch-based face image quality assessment algorithm which
quantifies the similarity of a face image to a probabilistic face model,
representing an "ideal" face. Image characteristics that affect recognition are
taken into account, including variations in geometric alignment (shift,
rotation and scale), sharpness, head pose and cast shadows. Experiments on
FERET and PIE datasets show that the proposed algorithm is able to identify
images which are simultaneously the most frontal, aligned, sharp and well
illuminated. Further experiments on a new video surveillance dataset (termed
ChokePoint) show that the proposed method provides better face subsets than
existing face selection techniques, leading to significant improvements in
recognition accuracy.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2013 08:41:23 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2014 15:53:31 GMT"
}
] | 2014-03-17T00:00:00 | [
[
"Wong",
"Yongkang",
""
],
[
"Chen",
"Shaokang",
""
],
[
"Mau",
"Sandra",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Patch-based Probabilistic Image Quality Assessment for Face Selection
and Improved Video-based Face Recognition
ABSTRACT: In video based face recognition, face images are typically captured over
multiple frames in uncontrolled conditions, where head pose, illumination,
shadowing, motion blur and focus change over the sequence. Additionally,
inaccuracies in face localisation can also introduce scale and alignment
variations. Using all face images, including images of poor quality, can
actually degrade face recognition performance. While one solution is to use
only the "best" subset of images, current face selection techniques are
incapable of simultaneously handling all of the abovementioned issues. We
propose an efficient patch-based face image quality assessment algorithm which
quantifies the similarity of a face image to a probabilistic face model,
representing an "ideal" face. Image characteristics that affect recognition are
taken into account, including variations in geometric alignment (shift,
rotation and scale), sharpness, head pose and cast shadows. Experiments on
FERET and PIE datasets show that the proposed algorithm is able to identify
images which are simultaneously the most frontal, aligned, sharp and well
illuminated. Further experiments on a new video surveillance dataset (termed
ChokePoint) show that the proposed method provides better face subsets than
existing face selection techniques, leading to significant improvements in
recognition accuracy.
| new_dataset | 0.965641 |
1403.3460 | Chi Wang | Chi Wang, Xueqing Liu, Yanglei Song, Jiawei Han | Scalable and Robust Construction of Topical Hierarchies | null | null | null | null | cs.LG cs.CL cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated generation of high-quality topical hierarchies for a text
collection is a dream problem in knowledge engineering with many valuable
applications. In this paper a scalable and robust algorithm is proposed for
constructing a hierarchy of topics from a text collection. We divide and
conquer the problem using a top-down recursive framework, based on a tensor
orthogonal decomposition technique. We solve a critical challenge to perform
scalable inference for our newly designed hierarchical topic model. Experiments
with various real-world datasets illustrate its ability to generate robust,
high-quality hierarchies efficiently. Our method reduces the time of
construction by several orders of magnitude, and its robustness makes it
possible for users to interactively revise the hierarchy.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2014 23:22:21 GMT"
}
] | 2014-03-17T00:00:00 | [
[
"Wang",
"Chi",
""
],
[
"Liu",
"Xueqing",
""
],
[
"Song",
"Yanglei",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: Scalable and Robust Construction of Topical Hierarchies
ABSTRACT: Automated generation of high-quality topical hierarchies for a text
collection is a dream problem in knowledge engineering with many valuable
applications. In this paper a scalable and robust algorithm is proposed for
constructing a hierarchy of topics from a text collection. We divide and
conquer the problem using a top-down recursive framework, based on a tensor
orthogonal decomposition technique. We solve a critical challenge to perform
scalable inference for our newly designed hierarchical topic model. Experiments
with various real-world datasets illustrate its ability to generate robust,
high-quality hierarchies efficiently. Our method reduces the time of
construction by several orders of magnitude, and its robustness makes it
possible for users to interactively revise the hierarchy.
| no_new_dataset | 0.945197 |
1403.3628 | Remi Flamary | R\'emi Flamary (LAGRANGE), Nisrine Jrad (GIPSA-lab), Ronald Phlypo
(GIPSA-lab), Marco Congedo (GIPSA-lab), Alain Rakotomamonjy (LITIS) | Mixed-norm Regularization for Brain Decoding | Computational and Mathematical Methods in Medicine (2014)
http://www.hindawi.com/journals/cmmm/ | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work investigates the use of mixed-norm regularization for sensor
selection in Event-Related Potential (ERP) based Brain-Computer Interfaces
(BCI). The classification problem is cast as a discriminative optimization
framework where sensor selection is induced through the use of mixed-norms.
This framework is extended to the multi-task learning situation where several
similar classification tasks related to different subjects are learned
simultaneously. In this case, multi-task learning helps in leveraging data
scarcity issue yielding to more robust classifiers. For this purpose, we have
introduced a regularizer that induces both sensor selection and classifier
similarities. The different regularization approaches are compared on three ERP
datasets showing the interest of mixed-norm regularization in terms of sensor
selection. The multi-task approaches are evaluated when a small number of
learning examples are available yielding to significant performance
improvements especially for subjects performing poorly.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2014 16:15:24 GMT"
}
] | 2014-03-17T00:00:00 | [
[
"Flamary",
"Rémi",
"",
"LAGRANGE"
],
[
"Jrad",
"Nisrine",
"",
"GIPSA-lab"
],
[
"Phlypo",
"Ronald",
"",
"GIPSA-lab"
],
[
"Congedo",
"Marco",
"",
"GIPSA-lab"
],
[
"Rakotomamonjy",
"Alain",
"",
"LITIS"
]
] | TITLE: Mixed-norm Regularization for Brain Decoding
ABSTRACT: This work investigates the use of mixed-norm regularization for sensor
selection in Event-Related Potential (ERP) based Brain-Computer Interfaces
(BCI). The classification problem is cast as a discriminative optimization
framework where sensor selection is induced through the use of mixed-norms.
This framework is extended to the multi-task learning situation where several
similar classification tasks related to different subjects are learned
simultaneously. In this case, multi-task learning helps in leveraging data
scarcity issue yielding to more robust classifiers. For this purpose, we have
introduced a regularizer that induces both sensor selection and classifier
similarities. The different regularization approaches are compared on three ERP
datasets showing the interest of mixed-norm regularization in terms of sensor
selection. The multi-task approaches are evaluated when a small number of
learning examples are available, yielding significant performance
improvements, especially for subjects performing poorly.
| no_new_dataset | 0.94474 |
1106.0797 | Ryan Ogliore | R. C. Ogliore, G. R. Huss, K. Nagashima | Ratio Estimation in SIMS Analysis | null | null | 10.1016/j.nimb.2011.04.120 | null | astro-ph.IM astro-ph.EP physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The determination of an isotope ratio by secondary ion mass spectrometry
(SIMS) traditionally involves averaging a number of ratios collected over the
course of a measurement. We show that this method leads to an additive positive
bias in the expectation value of the estimated ratio that is approximately
equal to the true ratio divided by the counts of the denominator isotope of an
individual ratio. This bias does not decrease as the number of ratios used in
the average increases. By summing all counts in the numerator isotope, then
dividing by the sum of counts in the denominator isotope, the estimated ratio
is less biased: the bias is approximately equal to the ratio divided by the
summed counts of the denominator isotope over the entire measurement. We
propose a third ratio estimator (Beale's estimator) that can be used when the
bias from the summed counts is unacceptably large for the hypothesis being
tested. We derive expressions for the variance of these ratio estimators as
well as the conditions under which they are normally distributed. Finally, we
investigate a SIMS dataset showing the effects of ratio bias, and discuss
proper ratio estimation for SIMS analysis.
| [
{
"version": "v1",
"created": "Sat, 4 Jun 2011 07:52:43 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2014 02:09:32 GMT"
}
] | 2014-03-13T00:00:00 | [
[
"Ogliore",
"R. C.",
""
],
[
"Huss",
"G. R.",
""
],
[
"Nagashima",
"K.",
""
]
] | TITLE: Ratio Estimation in SIMS Analysis
ABSTRACT: The determination of an isotope ratio by secondary ion mass spectrometry
(SIMS) traditionally involves averaging a number of ratios collected over the
course of a measurement. We show that this method leads to an additive positive
bias in the expectation value of the estimated ratio that is approximately
equal to the true ratio divided by the counts of the denominator isotope of an
individual ratio. This bias does not decrease as the number of ratios used in
the average increases. By summing all counts in the numerator isotope, then
dividing by the sum of counts in the denominator isotope, the estimated ratio
is less biased: the bias is approximately equal to the ratio divided by the
summed counts of the denominator isotope over the entire measurement. We
propose a third ratio estimator (Beale's estimator) that can be used when the
bias from the summed counts is unacceptably large for the hypothesis being
tested. We derive expressions for the variance of these ratio estimators as
well as the conditions under which they are normally distributed. Finally, we
investigate a SIMS dataset showing the effects of ratio bias, and discuss
proper ratio estimation for SIMS analysis.
| no_new_dataset | 0.858659 |
1310.8214 | Blair Edwards | LUX Collaboration: D.S. Akerib, H.M. Araujo, X. Bai, A.J. Bailey, J.
Balajthy, S. Bedikian, E. Bernard, A. Bernstein, A. Bolozdynya, A. Bradley,
D. Byram, S.B. Cahn, M.C. Carmona-Benitez, C. Chan, J.J. Chapman, A.A.
Chiller, C. Chiller, K. Clark, T. Coffey, A. Currie, A. Curioni, S. Dazeley,
L. de Viveiros, A. Dobi, J. Dobson, E.M. Dragowsky, E. Druszkiewicz, B.
Edwards, C.H. Faham, S. Fiorucci, C. Flores, R.J. Gaitskell, V.M. Gehman, C.
Ghag, K.R. Gibson, M.G.D. Gilchriese, C. Hall, M. Hanhardt, S.A. Hertel, M.
Horn, D.Q. Huang, M. Ihm, R.G. Jacobsen, L. Kastens, K. Kazkaz, R. Knoche, S.
Kyre, R. Lander, N.A. Larsen, C. Lee, D.S. Leonard, K.T. Lesko, A. Lindote,
M.I. Lopes, A. Lyashenko, D.C. Malling, R. Mannino, D.N. McKinsey, D.-M. Mei,
J. Mock, M. Moongweluwan, J. Morad, M. Morii, A.St.J. Murphy, C. Nehrkorn, H.
Nelson, F. Neves, J.A. Nikkel, R.A. Ott, M. Pangilinan, P.D. Parker, E.K.
Pease, K. Pech, P. Phelps, L. Reichhart, T. Shutt, C. Silva, W. Skulski, C.J.
Sofka, V.N. Solovov, P. Sorensen, T. Stiegler, K. O`Sullivan, T.J. Sumner, R.
Svoboda, M. Sweany, M. Szydagis, D. Taylor, B. Tennyson, D.R. Tiedt, M.
Tripathi, S. Uvarov, J.R. Verbus, N. Walsh, R. Webb, J.T. White, D. White,
M.S. Witherell, M. Wlasenko, F.L.H. Wolfs, M. Woods, and C. Zhang | First results from the LUX dark matter experiment at the Sanford
Underground Research Facility | Accepted by Phys. Rev. Lett. Appendix A included as supplementary
material with PRL article | Phys. Rev. Lett. 112, 091303 (2014) | 10.1103/PhysRevLett.112.091303 | null | astro-ph.CO astro-ph.IM hep-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Large Underground Xenon (LUX) experiment, a dual-phase xenon
time-projection chamber operating at the Sanford Underground Research Facility
(Lead, South Dakota), was cooled and filled in February 2013. We report results
of the first WIMP search dataset, taken during the period April to August 2013,
presenting the analysis of 85.3 live-days of data with a fiducial volume of 118
kg. A profile-likelihood analysis technique shows our data to be consistent
with the background-only hypothesis, allowing 90% confidence limits to be set
on spin-independent WIMP-nucleon elastic scattering with a minimum upper limit
on the cross section of $7.6 \times 10^{-46}$ cm$^{2}$ at a WIMP mass of 33
GeV/c$^2$. We find that the LUX data are in strong disagreement with low-mass
WIMP signal interpretations of the results from several recent direct detection
experiments.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2013 16:15:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Feb 2014 16:51:59 GMT"
}
] | 2014-03-12T00:00:00 | [
[
"LUX Collaboration",
"",
""
],
[
"Akerib",
"D. S.",
""
],
[
"Araujo",
"H. M.",
""
],
[
"Bai",
"X.",
""
],
[
"Bailey",
"A. J.",
""
],
[
"Balajthy",
"J.",
""
],
[
"Bedikian",
"S.",
""
],
[
"Bernard",
"E.",
""
],
[
"Bernstein",
"A.",
""
],
[
"Bolozdynya",
"A.",
""
],
[
"Bradley",
"A.",
""
],
[
"Byram",
"D.",
""
],
[
"Cahn",
"S. B.",
""
],
[
"Carmona-Benitez",
"M. C.",
""
],
[
"Chan",
"C.",
""
],
[
"Chapman",
"J. J.",
""
],
[
"Chiller",
"A. A.",
""
],
[
"Chiller",
"C.",
""
],
[
"Clark",
"K.",
""
],
[
"Coffey",
"T.",
""
],
[
"Currie",
"A.",
""
],
[
"Curioni",
"A.",
""
],
[
"Dazeley",
"S.",
""
],
[
"de Viveiros",
"L.",
""
],
[
"Dobi",
"A.",
""
],
[
"Dobson",
"J.",
""
],
[
"Dragowsky",
"E. M.",
""
],
[
"Druszkiewicz",
"E.",
""
],
[
"Edwards",
"B.",
""
],
[
"Faham",
"C. H.",
""
],
[
"Fiorucci",
"S.",
""
],
[
"Flores",
"C.",
""
],
[
"Gaitskell",
"R. J.",
""
],
[
"Gehman",
"V. M.",
""
],
[
"Ghag",
"C.",
""
],
[
"Gibson",
"K. R.",
""
],
[
"Gilchriese",
"M. G. D.",
""
],
[
"Hall",
"C.",
""
],
[
"Hanhardt",
"M.",
""
],
[
"Hertel",
"S. A.",
""
],
[
"Horn",
"M.",
""
],
[
"Huang",
"D. Q.",
""
],
[
"Ihm",
"M.",
""
],
[
"Jacobsen",
"R. G.",
""
],
[
"Kastens",
"L.",
""
],
[
"Kazkaz",
"K.",
""
],
[
"Knoche",
"R.",
""
],
[
"Kyre",
"S.",
""
],
[
"Lander",
"R.",
""
],
[
"Larsen",
"N. A.",
""
],
[
"Lee",
"C.",
""
],
[
"Leonard",
"D. S.",
""
],
[
"Lesko",
"K. T.",
""
],
[
"Lindote",
"A.",
""
],
[
"Lopes",
"M. I.",
""
],
[
"Lyashenko",
"A.",
""
],
[
"Malling",
"D. C.",
""
],
[
"Mannino",
"R.",
""
],
[
"McKinsey",
"D. N.",
""
],
[
"Mei",
"D. -M.",
""
],
[
"Mock",
"J.",
""
],
[
"Moongweluwan",
"M.",
""
],
[
"Morad",
"J.",
""
],
[
"Morii",
"M.",
""
],
[
"Murphy",
"A. St. J.",
""
],
[
"Nehrkorn",
"C.",
""
],
[
"Nelson",
"H.",
""
],
[
"Neves",
"F.",
""
],
[
"Nikkel",
"J. A.",
""
],
[
"Ott",
"R. A.",
""
],
[
"Pangilinan",
"M.",
""
],
[
"Parker",
"P. D.",
""
],
[
"Pease",
"E. K.",
""
],
[
"Pech",
"K.",
""
],
[
"Phelps",
"P.",
""
],
[
"Reichhart",
"L.",
""
],
[
"Shutt",
"T.",
""
],
[
"Silva",
"C.",
""
],
[
"Skulski",
"W.",
""
],
[
"Sofka",
"C. J.",
""
],
[
"Solovov",
"V. N.",
""
],
[
"Sorensen",
"P.",
""
],
[
"Stiegler",
"T.",
""
],
[
"O`Sullivan",
"K.",
""
],
[
"Sumner",
"T. J.",
""
],
[
"Svoboda",
"R.",
""
],
[
"Sweany",
"M.",
""
],
[
"Szydagis",
"M.",
""
],
[
"Taylor",
"D.",
""
],
[
"Tennyson",
"B.",
""
],
[
"Tiedt",
"D. R.",
""
],
[
"Tripathi",
"M.",
""
],
[
"Uvarov",
"S.",
""
],
[
"Verbus",
"J. R.",
""
],
[
"Walsh",
"N.",
""
],
[
"Webb",
"R.",
""
],
[
"White",
"J. T.",
""
],
[
"White",
"D.",
""
],
[
"Witherell",
"M. S.",
""
],
[
"Wlasenko",
"M.",
""
],
[
"Wolfs",
"F. L. H.",
""
],
[
"Woods",
"M.",
""
],
[
"Zhang",
"C.",
""
]
] | TITLE: First results from the LUX dark matter experiment at the Sanford
Underground Research Facility
ABSTRACT: The Large Underground Xenon (LUX) experiment, a dual-phase xenon
time-projection chamber operating at the Sanford Underground Research Facility
(Lead, South Dakota), was cooled and filled in February 2013. We report results
of the first WIMP search dataset, taken during the period April to August 2013,
presenting the analysis of 85.3 live-days of data with a fiducial volume of 118
kg. A profile-likelihood analysis technique shows our data to be consistent
with the background-only hypothesis, allowing 90% confidence limits to be set
on spin-independent WIMP-nucleon elastic scattering with a minimum upper limit
on the cross section of $7.6 \times 10^{-46}$ cm$^{2}$ at a WIMP mass of 33
GeV/c$^2$. We find that the LUX data are in strong disagreement with low-mass
WIMP signal interpretations of the results from several recent direct detection
experiments.
| no_new_dataset | 0.841305 |
1403.2372 | Mehdi Naseriparsa | Mehdi Naseriparsa, Amir-Masoud Bidgoli, Touraj Varaee | A Hybrid Feature Selection Method to Improve Performance of a Group of
Classification Algorithms | 8 pages. arXiv admin note: substantial text overlap with
arXiv:1403.1946; and text overlap with arXiv:1106.1813 by other authors | International Journal of Computer Applications,Vol 69,No 17,pp
28-35,2013 | 10.5120/12065-8172 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a hybrid feature selection method is proposed which takes
advantages of wrapper subset evaluation with a lower cost and improves the
performance of a group of classifiers. The method uses a combination of sample
domain filtering and resampling to refine the sample domain and two feature
subset evaluation methods to select reliable features. This method utilizes
both feature space and sample domain in two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid procedure by
information gain, wrapper subset evaluation and genetic search to find the
optimal feature space. Experiments carried out on different types of datasets
from UCI Repository of Machine Learning databases and the results show a rise
in the average performance of five classifiers (Naive Bayes, Logistic,
Multilayer Perceptron, Best First Decision Tree and JRIP) simultaneously and
the classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature selection
methods with a lower cost.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2014 08:04:29 GMT"
}
] | 2014-03-12T00:00:00 | [
[
"Naseriparsa",
"Mehdi",
""
],
[
"Bidgoli",
"Amir-Masoud",
""
],
[
"Varaee",
"Touraj",
""
]
] | TITLE: A Hybrid Feature Selection Method to Improve Performance of a Group of
Classification Algorithms
ABSTRACT: In this paper a hybrid feature selection method is proposed which takes
advantages of wrapper subset evaluation with a lower cost and improves the
performance of a group of classifiers. The method uses a combination of sample
domain filtering and resampling to refine the sample domain and two feature
subset evaluation methods to select reliable features. This method utilizes
both feature space and sample domain in two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid procedure by
information gain, wrapper subset evaluation and genetic search to find the
optimal feature space. Experiments carried out on different types of datasets
from UCI Repository of Machine Learning databases and the results show a rise
in the average performance of five classifiers (Naive Bayes, Logistic,
Multilayer Perceptron, Best First Decision Tree and JRIP) simultaneously and
the classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature selection
methods with a lower cost.
| no_new_dataset | 0.954393 |
1403.2404 | Long Cheng | Long Cheng, Avinash Malik, Spyros Kotoulas, Tomas E Ward, Georgios
Theodoropoulos | Scalable RDF Data Compression using X10 | null | null | null | null | cs.DC cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Semantic Web comprises enormous volumes of semi-structured data elements.
For interoperability, these elements are represented by long strings. Such
representations are not efficient for the purposes of Semantic Web applications
that perform computations over large volumes of information. A typical method
for alleviating the impact of this problem is through the use of compression
methods that produce more compact representations of the data. The use of
dictionary encoding for this purpose is particularly prevalent in Semantic Web
database systems. However, centralized implementations present performance
bottlenecks, giving rise to the need for scalable, efficient distributed
encoding schemes. In this paper, we describe an encoding implementation based
on the asynchronous partitioned global address space (APGAS) parallel
programming model. We evaluate performance on a cluster of up to 384 cores and
datasets of up to 11 billion triples (1.9 TB). Compared to the state-of-art
MapReduce algorithm, we demonstrate a speedup of 2.6-7.4x and excellent
scalability. These results illustrate the strong potential of the APGAS model
for efficient implementation of dictionary encoding and contribute to the
engineering of larger-scale Semantic Web applications.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2014 20:48:08 GMT"
}
] | 2014-03-12T00:00:00 | [
[
"Cheng",
"Long",
""
],
[
"Malik",
"Avinash",
""
],
[
"Kotoulas",
"Spyros",
""
],
[
"Ward",
"Tomas E",
""
],
[
"Theodoropoulos",
"Georgios",
""
]
] | TITLE: Scalable RDF Data Compression using X10
ABSTRACT: The Semantic Web comprises enormous volumes of semi-structured data elements.
For interoperability, these elements are represented by long strings. Such
representations are not efficient for the purposes of Semantic Web applications
that perform computations over large volumes of information. A typical method
for alleviating the impact of this problem is through the use of compression
methods that produce more compact representations of the data. The use of
dictionary encoding for this purpose is particularly prevalent in Semantic Web
database systems. However, centralized implementations present performance
bottlenecks, giving rise to the need for scalable, efficient distributed
encoding schemes. In this paper, we describe an encoding implementation based
on the asynchronous partitioned global address space (APGAS) parallel
programming model. We evaluate performance on a cluster of up to 384 cores and
datasets of up to 11 billion triples (1.9 TB). Compared to the state-of-art
MapReduce algorithm, we demonstrate a speedup of 2.6-7.4x and excellent
scalability. These results illustrate the strong potential of the APGAS model
for efficient implementation of dictionary encoding and contribute to the
engineering of larger-scale Semantic Web applications.
| no_new_dataset | 0.945197 |
1309.2074 | Qiang Qiu | Qiang Qiu, Guillermo Sapiro | Learning Transformations for Clustering and Classification | arXiv admin note: substantial text overlap with arXiv:1308.0273,
arXiv:1308.0275 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A low-rank transformation learning framework for subspace clustering and
classification is here proposed. Many high-dimensional data, such as face
images and motion sequences, approximately lie in a union of low-dimensional
subspaces. The corresponding subspace clustering problem has been extensively
studied in the literature to partition such high-dimensional data into clusters
corresponding to their underlying low-dimensional subspaces. However,
low-dimensional intrinsic structures are often violated for real-world
observations, as they can be corrupted by errors or deviate from ideal models.
We propose to address this by learning a linear transformation on subspaces
using matrix rank, via its convex surrogate nuclear norm, as the optimization
criterion. The learned linear transformation restores a low-rank structure for
data from the same subspace, and, at the same time, forces a maximally
separated structure for data from different subspaces. In this way, we reduce
variations within subspaces, and increase separation between subspaces for a
more robust subspace clustering. This proposed learned robust subspace
clustering framework significantly enhances the performance of existing
subspace clustering methods. Basic theoretical results here presented help to
further support the underlying framework. To exploit the low-rank structures of
the transformed subspaces, we further introduce a fast subspace clustering
technique, which efficiently combines robust PCA with sparse modeling. When
class labels are present at the training stage, we show this low-rank
transformation framework also significantly enhances classification
performance. Extensive experiments using public datasets are presented, showing
that the proposed approach significantly outperforms state-of-the-art methods
for subspace clustering and classification.
| [
{
"version": "v1",
"created": "Mon, 9 Sep 2013 09:16:02 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2014 18:50:35 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Qiu",
"Qiang",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: Learning Transformations for Clustering and Classification
ABSTRACT: A low-rank transformation learning framework for subspace clustering and
classification is here proposed. Many high-dimensional data, such as face
images and motion sequences, approximately lie in a union of low-dimensional
subspaces. The corresponding subspace clustering problem has been extensively
studied in the literature to partition such high-dimensional data into clusters
corresponding to their underlying low-dimensional subspaces. However,
low-dimensional intrinsic structures are often violated for real-world
observations, as they can be corrupted by errors or deviate from ideal models.
We propose to address this by learning a linear transformation on subspaces
using matrix rank, via its convex surrogate nuclear norm, as the optimization
criterion. The learned linear transformation restores a low-rank structure for
data from the same subspace, and, at the same time, forces a maximally
separated structure for data from different subspaces. In this way, we reduce
variations within subspaces, and increase separation between subspaces for a
more robust subspace clustering. This proposed learned robust subspace
clustering framework significantly enhances the performance of existing
subspace clustering methods. Basic theoretical results here presented help to
further support the underlying framework. To exploit the low-rank structures of
the transformed subspaces, we further introduce a fast subspace clustering
technique, which efficiently combines robust PCA with sparse modeling. When
class labels are present at the training stage, we show this low-rank
transformation framework also significantly enhances classification
performance. Extensive experiments using public datasets are presented, showing
that the proposed approach significantly outperforms state-of-the-art methods
for subspace clustering and classification.
| no_new_dataset | 0.952397 |
1312.4314 | David Eigen | David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever | Learning Factored Representations in a Deep Mixture of Experts | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixtures of Experts combine the outputs of several "expert" networks, each of
which specializes in a different part of the input space. This is achieved by
training a "gating" network that maps each input to a distribution over the
experts. Such models show promise for building larger networks that are still
cheap to compute at test time, and more parallelizable at training time. In
this this work, we extend the Mixture of Experts to a stacked model, the Deep
Mixture of Experts, with multiple sets of gating and experts. This
exponentially increases the number of effective experts by associating each
input with a combination of experts at each layer, yet maintains a modest model
size. On a randomly translated version of the MNIST dataset, we find that the
Deep Mixture of Experts automatically learns to develop location-dependent
("where") experts at the first layer, and class-specific ("what") experts at
the second layer. In addition, we see that the different combinations are in
use when the model is applied to a dataset of speech monophones. These
demonstrate effective use of all expert combinations.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 11:15:10 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2014 17:57:53 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2014 20:15:03 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Eigen",
"David",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
],
[
"Sutskever",
"Ilya",
""
]
] | TITLE: Learning Factored Representations in a Deep Mixture of Experts
ABSTRACT: Mixtures of Experts combine the outputs of several "expert" networks, each of
which specializes in a different part of the input space. This is achieved by
training a "gating" network that maps each input to a distribution over the
experts. Such models show promise for building larger networks that are still
cheap to compute at test time, and more parallelizable at training time. In
this work, we extend the Mixture of Experts to a stacked model, the Deep
Mixture of Experts, with multiple sets of gating and experts. This
exponentially increases the number of effective experts by associating each
input with a combination of experts at each layer, yet maintains a modest model
size. On a randomly translated version of the MNIST dataset, we find that the
Deep Mixture of Experts automatically learns to develop location-dependent
("where") experts at the first layer, and class-specific ("what") experts at
the second layer. In addition, we see that the different combinations are in
use when the model is applied to a dataset of speech monophones. These
demonstrate effective use of all expert combinations.
| no_new_dataset | 0.943191 |
1401.0509 | Yann Dauphin | Yann N. Dauphin, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck | Zero-Shot Learning for Semantic Utterance Classification | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/3.0/ | We propose a novel zero-shot learning method for semantic utterance
classification (SUC). It learns a classifier $f: X \to Y$ for problems where
none of the semantic categories $Y$ are present in the training set. The
framework uncovers the link between categories and utterances using a semantic
space. We show that this semantic space can be learned by deep neural networks
trained on large amounts of search engine query log data. More precisely, we
propose a novel method that can learn discriminative semantic features without
supervision. It uses the zero-shot learning framework to guide the learning of
the semantic features. We demonstrate the effectiveness of the zero-shot
semantic learning algorithm on the SUC dataset collected by (Tur, 2012).
Furthermore, we achieve state-of-the-art results by combining the semantic
features with a supervised method.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 17:08:26 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2014 20:34:08 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2014 23:31:02 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Dauphin",
"Yann N.",
""
],
[
"Tur",
"Gokhan",
""
],
[
"Hakkani-Tur",
"Dilek",
""
],
[
"Heck",
"Larry",
""
]
] | TITLE: Zero-Shot Learning for Semantic Utterance Classification
ABSTRACT: We propose a novel zero-shot learning method for semantic utterance
classification (SUC). It learns a classifier $f: X \to Y$ for problems where
none of the semantic categories $Y$ are present in the training set. The
framework uncovers the link between categories and utterances using a semantic
space. We show that this semantic space can be learned by deep neural networks
trained on large amounts of search engine query log data. More precisely, we
propose a novel method that can learn discriminative semantic features without
supervision. It uses the zero-shot learning framework to guide the learning of
the semantic features. We demonstrate the effectiveness of the zero-shot
semantic learning algorithm on the SUC dataset collected by (Tur, 2012).
Furthermore, we achieve state-of-the-art results by combining the semantic
features with a supervised method.
| no_new_dataset | 0.944944 |
1403.1946 | Mehdi Naseriparsa | Mehdi Naseriparsa, Amir-masoud Bidgoli, Touraj Varaee | Improving Performance of a Group of Classification Algorithms Using
Resampling and Feature Selection | 7 pages | World of Computer Science and Information Technology Journal,Vol
3, No 4,pp 70-76,2013 | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, finding a meaningful pattern in huge
datasets has become more challenging. Data miners try to adopt innovative
methods to face this problem by applying feature selection methods. In this
paper we propose a new hybrid method in which we use a combination of
resampling, filtering the sample domain and wrapper subset evaluation method
with genetic search to reduce dimensions of Lung-Cancer dataset that we
received from UCI Repository of Machine Learning databases. Finally, we apply
some well-known classification algorithms (Na\"ive Bayes, Logistic, Multilayer
Perceptron, Best First Decision Tree and JRIP) to the resulting dataset and
compare the results and prediction rates before and after the application of
our feature selection method on that dataset. The results show a substantial
progress in the average performance of five classification algorithms
simultaneously and the classification error for these classifiers decreases
considerably. The experiments also show that this method outperforms other
feature selection methods with a lower cost.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2014 07:47:44 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Naseriparsa",
"Mehdi",
""
],
[
"Bidgoli",
"Amir-masoud",
""
],
[
"Varaee",
"Touraj",
""
]
] | TITLE: Improving Performance of a Group of Classification Algorithms Using
Resampling and Feature Selection
ABSTRACT: In recent years, finding a meaningful pattern from huge
datasets has become more challenging. Data miners try to adopt innovative
methods to face this problem by applying feature selection methods. In this
paper we propose a new hybrid method in which we use a combination of
resampling, filtering the sample domain and wrapper subset evaluation method
with genetic search to reduce dimensions of Lung-Cancer dataset that we
received from UCI Repository of Machine Learning databases. Finally, we apply
some well-known classification algorithms (Na\"ive Bayes, Logistic, Multilayer
Perceptron, Best First Decision Tree and JRIP) to the resulting dataset and
compare the results and prediction rates before and after the application of
our feature selection method on that dataset. The results show a substantial
progress in the average performance of five classification algorithms
simultaneously and the classification error for these classifiers decreases
considerably. The experiments also show that this method outperforms other
feature selection methods with a lower cost.
| no_new_dataset | 0.948202 |
1403.1949 | Mehdi Naseriparsa | Mehdi Naseriparsa, Mohammad Mansour Riahi Kashani | Combination of PCA with SMOTE Resampling to Boost the Prediction Rate in
Lung Cancer Dataset | 6 pages. arXiv admin note: text overlap with arXiv:1106.1813,
arXiv:1001.1446 by other authors | International Journal of Computer Applications,Vol 77,No 3,pp
33-38,2013 | 10.5120/13376-0987 | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification algorithms are unable to make reliable models on the datasets
with huge sizes. These datasets contain many irrelevant and redundant features
that mislead the classifiers. Furthermore, many huge datasets have imbalanced
class distribution which leads to bias over majority class in the
classification process. In this paper combination of unsupervised
dimensionality reduction methods with resampling is proposed and the results
are tested on Lung-Cancer dataset. In the first step PCA is applied on
Lung-Cancer dataset to compact the dataset and eliminate irrelevant features
and in the second step SMOTE resampling is carried out to balance the class
distribution and increase the variety of sample domain. Finally, Naive Bayes
classifier is applied on the resulting dataset and the results are compared and
evaluation metrics are calculated. The experiments show the effectiveness of
the proposed method across four evaluation metrics: Overall accuracy, False
Positive Rate, Precision, Recall.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2014 08:12:54 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Naseriparsa",
"Mehdi",
""
],
[
"Kashani",
"Mohammad Mansour Riahi",
""
]
] | TITLE: Combination of PCA with SMOTE Resampling to Boost the Prediction Rate in
Lung Cancer Dataset
ABSTRACT: Classification algorithms are unable to make reliable models on the datasets
with huge sizes. These datasets contain many irrelevant and redundant features
that mislead the classifiers. Furthermore, many huge datasets have imbalanced
class distribution which leads to bias over majority class in the
classification process. In this paper combination of unsupervised
dimensionality reduction methods with resampling is proposed and the results
are tested on Lung-Cancer dataset. In the first step PCA is applied on
Lung-Cancer dataset to compact the dataset and eliminate irrelevant features
and in the second step SMOTE resampling is carried out to balance the class
distribution and increase the variety of sample domain. Finally, Naive Bayes
classifier is applied on the resulting dataset and the results are compared and
evaluation metrics are calculated. The experiments show the effectiveness of
the proposed method across four evaluation metrics: Overall accuracy, False
Positive Rate, Precision, Recall.
| no_new_dataset | 0.9549 |
1403.2006 | Morteza Yousefi Kharaji | Morteza Yousefi Kharaji, Fatemeh Salehi Rizi | An IAC Approach for Detecting Profile Cloning in Online Social Networks | null | null | null | null | cs.SI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, Online Social Networks are popular websites on the internet, which
millions of users register on and share their own personal information with
others. Privacy threats and disclosing personal information are the most
important concerns of OSNs users. Recently, a new attack which is named
Identity Cloned Attack is detected on OSNs. In this attack the attacker tries
to make a fake identity of a real user in order to access to private
information of the users friends which they do not publish on the public
profiles. In today's OSNs, there are some verification services, but they are not
active services and they are useful for users who are familiar with online
identity issues. In this paper, Identity cloned attacks are explained in more
detail, and a new and precise method to detect profile cloning in online social
networks is proposed. In this method, first, the social network is shown in a
form of graph, then, according to similarities among users, this graph is
divided into smaller communities. Afterwards, all of the similar profiles to
the real profile are gathered (from the same community), then strength of
relationship (among all selected profiles and the real profile) is calculated,
and those which have the less strength of relationship will be verified by
mutual friend system. In this study, in order to evaluate the effectiveness of
proposed method, all steps are applied on a dataset of Facebook, and finally
this work is compared with two previous works by applying them on the dataset.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2014 20:38:57 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Kharaji",
"Morteza Yousefi",
""
],
[
"Rizi",
"Fatemeh Salehi",
""
]
] | TITLE: An IAC Approach for Detecting Profile Cloning in Online Social Networks
ABSTRACT: Nowadays, Online Social Networks are popular websites on the internet, which
millions of users register on and share their own personal information with
others. Privacy threats and disclosing personal information are the most
important concerns of OSNs users. Recently, a new attack which is named
Identity Cloned Attack is detected on OSNs. In this attack the attacker tries
to make a fake identity of a real user in order to access to private
information of the users friends which they do not publish on the public
profiles. In today's OSNs, there are some verification services, but they are not
active services and they are useful for users who are familiar with online
identity issues. In this paper, Identity cloned attacks are explained in more
detail, and a new and precise method to detect profile cloning in online social
networks is proposed. In this method, first, the social network is shown in a
form of graph, then, according to similarities among users, this graph is
divided into smaller communities. Afterwards, all of the similar profiles to
the real profile are gathered (from the same community), then strength of
relationship (among all selected profiles and the real profile) is calculated,
and those which have the less strength of relationship will be verified by
mutual friend system. In this study, in order to evaluate the effectiveness of
proposed method, all steps are applied on a dataset of Facebook, and finally
this work is compared with two previous works by applying them on the dataset.
| no_new_dataset | 0.947088 |
1403.2024 | Pin-Yu Chen | Pin-Yu Chen and Alfred O. Hero III | Node Removal Vulnerability of the Largest Component of a Network | Published in IEEE GlobalSIP 2013 | null | null | null | cs.SI cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The connectivity structure of a network can be very sensitive to removal of
certain nodes in the network. In this paper, we study the sensitivity of the
largest component size to node removals. We prove that minimizing the largest
component size is equivalent to solving a matrix one-norm minimization problem
whose column vectors are orthogonal and sparse and they form a basis of the
null space of the associated graph Laplacian matrix. A greedy node removal
algorithm is then proposed based on the matrix one-norm minimization. In
comparison with other node centralities such as node degree and betweenness,
experimental results on US power grid dataset validate the effectiveness of the
proposed approach in terms of reduction of the largest component size with
relatively few node removals.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2014 02:52:51 GMT"
}
] | 2014-03-11T00:00:00 | [
[
"Chen",
"Pin-Yu",
""
],
[
"Hero",
"Alfred O.",
"III"
]
] | TITLE: Node Removal Vulnerability of the Largest Component of a Network
ABSTRACT: The connectivity structure of a network can be very sensitive to removal of
certain nodes in the network. In this paper, we study the sensitivity of the
largest component size to node removals. We prove that minimizing the largest
component size is equivalent to solving a matrix one-norm minimization problem
whose column vectors are orthogonal and sparse and they form a basis of the
null space of the associated graph Laplacian matrix. A greedy node removal
algorithm is then proposed based on the matrix one-norm minimization. In
comparison with other node centralities such as node degree and betweenness,
experimental results on US power grid dataset validate the effectiveness of the
proposed approach in terms of reduction of the largest component size with
relatively few node removals.
| no_new_dataset | 0.953057 |
1310.2125 | Ritabrata Dutta | Ritabrata Dutta and Sohan Seth and Samuel Kaski | Retrieval of Experiments with Sequential Dirichlet Process Mixtures in
Model Space | null | null | null | null | stat.ML cs.IR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of retrieving relevant experiments given a query
experiment, motivated by the public databases of datasets in molecular biology
and other experimental sciences, and the need of scientists to relate to
earlier work on the level of actual measurement data. Since experiments are
inherently noisy and databases ever accumulating, we argue that a retrieval
engine should possess two particular characteristics. First, it should compare
models learnt from the experiments rather than the raw measurements themselves:
this allows incorporating experiment-specific prior knowledge to suppress noise
effects and focus on what is important. Second, it should be updated
sequentially from newly published experiments, without explicitly storing
either the measurements or the models, which is critical for saving storage
space and protecting data privacy: this promotes life long learning. We
formulate the retrieval as a ``supermodelling'' problem, of sequentially
learning a model of the set of posterior distributions, represented as sets of
MCMC samples, and suggest the use of Particle-Learning-based sequential
Dirichlet process mixture (DPM) for this purpose. The relevance measure for
retrieval is derived from the supermodel through the mixture representation. We
demonstrate the performance of the proposed retrieval method on simulated data
and molecular biological experiments.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2013 13:10:26 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2014 22:04:33 GMT"
}
] | 2014-03-10T00:00:00 | [
[
"Dutta",
"Ritabrata",
""
],
[
"Seth",
"Sohan",
""
],
[
"Kaski",
"Samuel",
""
]
] | TITLE: Retrieval of Experiments with Sequential Dirichlet Process Mixtures in
Model Space
ABSTRACT: We address the problem of retrieving relevant experiments given a query
experiment, motivated by the public databases of datasets in molecular biology
and other experimental sciences, and the need of scientists to relate to
earlier work on the level of actual measurement data. Since experiments are
inherently noisy and databases ever accumulating, we argue that a retrieval
engine should possess two particular characteristics. First, it should compare
models learnt from the experiments rather than the raw measurements themselves:
this allows incorporating experiment-specific prior knowledge to suppress noise
effects and focus on what is important. Second, it should be updated
sequentially from newly published experiments, without explicitly storing
either the measurements or the models, which is critical for saving storage
space and protecting data privacy: this promotes life long learning. We
formulate the retrieval as a ``supermodelling'' problem, of sequentially
learning a model of the set of posterior distributions, represented as sets of
MCMC samples, and suggest the use of Particle-Learning-based sequential
Dirichlet process mixture (DPM) for this purpose. The relevance measure for
retrieval is derived from the supermodel through the mixture representation. We
demonstrate the performance of the proposed retrieval method on simulated data
and molecular biological experiments.
| no_new_dataset | 0.949389 |
1403.1600 | Kai Zhu | Kai Zhu, Rui Wu, Lei Ying, R. Srikant | Collaborative Filtering with Information-Rich and Information-Sparse
Entities | null | null | null | null | stat.ML cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider a popular model for collaborative filtering in
recommender systems where some users of a website rate some items, such as
movies, and the goal is to recover the ratings of some or all of the unrated
items of each user. In particular, we consider both the clustering model, where
only users (or items) are clustered, and the co-clustering model, where both
users and items are clustered, and further, we assume that some users rate many
items (information-rich users) and some users rate only a few items
(information-sparse users). When users (or items) are clustered, our algorithm
can recover the rating matrix with $\omega(MK \log M)$ noisy entries while $MK$
entries are necessary, where $K$ is the number of clusters and $M$ is the
number of items. In the case of co-clustering, we prove that $K^2$ entries are
necessary for recovering the rating matrix, and our algorithm achieves this
lower bound within a logarithmic factor when $K$ is sufficiently large. We
compare our algorithms with a well-known algorithm called alternating
minimization (AM), and a similarity score-based algorithm known as the
popularity-among-friends (PAF) algorithm by applying all three to the MovieLens
and Netflix data sets. Our co-clustering algorithm and AM have similar overall
error rates when recovering the rating matrix, both of which are lower than the
error rate under PAF. But more importantly, the error rate of our co-clustering
algorithm is significantly lower than AM and PAF in the scenarios of interest
in recommender systems: when recommending a few items to each user or when
recommending items to users who only rated a few items (these users are the
majority of the total user population). The performance difference increases
even more when noise is added to the datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2014 21:51:48 GMT"
}
] | 2014-03-10T00:00:00 | [
[
"Zhu",
"Kai",
""
],
[
"Wu",
"Rui",
""
],
[
"Ying",
"Lei",
""
],
[
"Srikant",
"R.",
""
]
] | TITLE: Collaborative Filtering with Information-Rich and Information-Sparse
Entities
ABSTRACT: In this paper, we consider a popular model for collaborative filtering in
recommender systems where some users of a website rate some items, such as
movies, and the goal is to recover the ratings of some or all of the unrated
items of each user. In particular, we consider both the clustering model, where
only users (or items) are clustered, and the co-clustering model, where both
users and items are clustered, and further, we assume that some users rate many
items (information-rich users) and some users rate only a few items
(information-sparse users). When users (or items) are clustered, our algorithm
can recover the rating matrix with $\omega(MK \log M)$ noisy entries while $MK$
entries are necessary, where $K$ is the number of clusters and $M$ is the
number of items. In the case of co-clustering, we prove that $K^2$ entries are
necessary for recovering the rating matrix, and our algorithm achieves this
lower bound within a logarithmic factor when $K$ is sufficiently large. We
compare our algorithms with a well-known algorithm called alternating
minimization (AM), and a similarity score-based algorithm known as the
popularity-among-friends (PAF) algorithm by applying all three to the MovieLens
and Netflix data sets. Our co-clustering algorithm and AM have similar overall
error rates when recovering the rating matrix, both of which are lower than the
error rate under PAF. But more importantly, the error rate of our co-clustering
algorithm is significantly lower than AM and PAF in the scenarios of interest
in recommender systems: when recommending a few items to each user or when
recommending items to users who only rated a few items (these users are the
majority of the total user population). The performance difference increases
even more when noise is added to the datasets.
| no_new_dataset | 0.952131 |
1403.1347 | Jian Zhou Zhou | Jian Zhou and Olga G. Troyanskaya | Deep Supervised and Convolutional Generative Stochastic Network for
Protein Secondary Structure Prediction | Accepted by ICML 2014 | null | null | null | q-bio.QM cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting protein secondary structure is a fundamental problem in protein
structure prediction. Here we present a new supervised generative stochastic
network (GSN) based method to predict local secondary structure with deep
hierarchical representations. GSN is a recently proposed deep learning
technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative
model. We present the supervised extension of GSN, which learns a Markov chain
to sample from a conditional distribution, and applied it to protein structure
prediction. To scale the model to full-sized, high-dimensional data, like
protein sequences with hundreds of amino acids, we introduce a convolutional
architecture, which allows efficient learning across multiple layers of
hierarchical representations. Our architecture uniquely focuses on predicting
structured low-level labels informed with both low and high-level
representations learned by the model. In our application this corresponds to
labeling the secondary structure state of each amino-acid residue. We trained
and tested the model on separate sets of non-homologous proteins sharing less
than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513
dataset, better than the previously reported best performance 64.9% (Wang et
al., 2011) for this challenging secondary structure prediction problem.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2014 05:18:26 GMT"
}
] | 2014-03-07T00:00:00 | [
[
"Zhou",
"Jian",
""
],
[
"Troyanskaya",
"Olga G.",
""
]
] | TITLE: Deep Supervised and Convolutional Generative Stochastic Network for
Protein Secondary Structure Prediction
ABSTRACT: Predicting protein secondary structure is a fundamental problem in protein
structure prediction. Here we present a new supervised generative stochastic
network (GSN) based method to predict local secondary structure with deep
hierarchical representations. GSN is a recently proposed deep learning
technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative
model. We present the supervised extension of GSN, which learns a Markov chain
to sample from a conditional distribution, and applied it to protein structure
prediction. To scale the model to full-sized, high-dimensional data, like
protein sequences with hundreds of amino acids, we introduce a convolutional
architecture, which allows efficient learning across multiple layers of
hierarchical representations. Our architecture uniquely focuses on predicting
structured low-level labels informed with both low and high-level
representations learned by the model. In our application this corresponds to
labeling the secondary structure state of each amino-acid residue. We trained
and tested the model on separate sets of non-homologous proteins sharing less
than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513
dataset, better than the previously reported best performance 64.9% (Wang et
al., 2011) for this challenging secondary structure prediction problem.
| no_new_dataset | 0.95297 |
1403.1353 | Yang Wu | Yang Wu, Vansteenberge Jarich, Masayuki Mukunoki, and Michihiko Minoh | Collaborative Representation for Classification, Sparse or Non-sparse? | 8 pages, 1 figure | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse representation based classification (SRC) has been proved to be a
simple, effective and robust solution to face recognition. As it gets popular,
doubts on the necessity of enforcing sparsity start coming up, and primary
experimental results showed that simply changing the $l_1$-norm based
regularization to the computationally much more efficient $l_2$-norm based
non-sparse version would lead to a similar or even better performance. However,
that's not always the case. Given a new classification task, it's still unclear
which regularization strategy (i.e., making the coefficients sparse or
non-sparse) is a better choice without trying both for comparison. In this
paper, we present as far as we know the first study on solving this issue,
based on plenty of diverse classification experiments. We propose a scoring
function for pre-selecting the regularization strategy using only the dataset
size, the feature dimensionality and a discrimination score derived from a
given feature representation. Moreover, we show that when dictionary learning
is taken into account, non-sparse representation has a more significant
superiority to sparse representation. This work is expected to enrich our
understanding of sparse/non-sparse collaborative representation for
classification and motivate further research activities.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2014 05:44:32 GMT"
}
] | 2014-03-07T00:00:00 | [
[
"Wu",
"Yang",
""
],
[
"Jarich",
"Vansteenberge",
""
],
[
"Mukunoki",
"Masayuki",
""
],
[
"Minoh",
"Michihiko",
""
]
] | TITLE: Collaborative Representation for Classification, Sparse or Non-sparse?
ABSTRACT: Sparse representation based classification (SRC) has been proved to be a
simple, effective and robust solution to face recognition. As it gets popular,
doubts on the necessity of enforcing sparsity start coming up, and primary
experimental results showed that simply changing the $l_1$-norm based
regularization to the computationally much more efficient $l_2$-norm based
non-sparse version would lead to a similar or even better performance. However,
that's not always the case. Given a new classification task, it's still unclear
which regularization strategy (i.e., making the coefficients sparse or
non-sparse) is a better choice without trying both for comparison. In this
paper, we present as far as we know the first study on solving this issue,
based on plenty of diverse classification experiments. We propose a scoring
function for pre-selecting the regularization strategy using only the dataset
size, the feature dimensionality and a discrimination score derived from a
given feature representation. Moreover, we show that when dictionary learning
is taken into account, non-sparse representation has a more significant
superiority to sparse representation. This work is expected to enrich our
understanding of sparse/non-sparse collaborative representation for
classification and motivate further research activities.
| no_new_dataset | 0.942981 |
1302.4886 | Aleksandr Aravkin | Aleksandr Y. Aravkin and Rajiv Kumar and Hassan Mansour and Ben Recht
and Felix J. Herrmann | Fast methods for denoising matrix completion formulations, with
applications to robust seismic data interpolation | 26 pages, 13 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent SVD-free matrix factorization formulations have enabled rank
minimization for systems with millions of rows and columns, paving the way for
matrix completion in extremely large-scale applications, such as seismic data
interpolation.
In this paper, we consider matrix completion formulations designed to hit a
target data-fitting error level provided by the user, and propose an algorithm
called LR-BPDN that is able to exploit factorized formulations to solve the
corresponding optimization problem. Since practitioners typically have strong
prior knowledge about target error level, this innovation makes it easy to
apply the algorithm in practice, leaving only the factor rank to be determined.
Within the established framework, we propose two extensions that are highly
relevant to solving practical challenges of data interpolation. First, we
propose a weighted extension that allows known subspace information to improve
the results of matrix completion formulations. We show how this weighting can
be used in the context of frequency continuation, an essential aspect to
seismic data interpolation. Second, we propose matrix completion formulations
that are robust to large measurement errors in the available data.
We illustrate the advantages of LR-BPDN on the collaborative filtering
problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use
the new method, along with its robust and subspace re-weighted extensions, to
obtain high-quality reconstructions for large scale seismic interpolation
problems with real data, even in the presence of data contamination.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 12:31:30 GMT"
},
{
"version": "v2",
"created": "Wed, 1 May 2013 10:03:30 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2014 10:29:18 GMT"
}
] | 2014-03-06T00:00:00 | [
[
"Aravkin",
"Aleksandr Y.",
""
],
[
"Kumar",
"Rajiv",
""
],
[
"Mansour",
"Hassan",
""
],
[
"Recht",
"Ben",
""
],
[
"Herrmann",
"Felix J.",
""
]
] | TITLE: Fast methods for denoising matrix completion formulations, with
applications to robust seismic data interpolation
ABSTRACT: Recent SVD-free matrix factorization formulations have enabled rank
minimization for systems with millions of rows and columns, paving the way for
matrix completion in extremely large-scale applications, such as seismic data
interpolation.
In this paper, we consider matrix completion formulations designed to hit a
target data-fitting error level provided by the user, and propose an algorithm
called LR-BPDN that is able to exploit factorized formulations to solve the
corresponding optimization problem. Since practitioners typically have strong
prior knowledge about target error level, this innovation makes it easy to
apply the algorithm in practice, leaving only the factor rank to be determined.
Within the established framework, we propose two extensions that are highly
relevant to solving practical challenges of data interpolation. First, we
propose a weighted extension that allows known subspace information to improve
the results of matrix completion formulations. We show how this weighting can
be used in the context of frequency continuation, an essential aspect to
seismic data interpolation. Second, we propose matrix completion formulations
that are robust to large measurement errors in the available data.
We illustrate the advantages of LR-BPDN on the collaborative filtering
problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use
the new method, along with its robust and subspace re-weighted extensions, to
obtain high-quality reconstructions for large scale seismic interpolation
problems with real data, even in the presence of data contamination.
| no_new_dataset | 0.941061 |
1311.6079 | Amirreza Shaban | Amirreza Shaban, Hamid R. Rabiee and Mahyar Najibi | Local Similarities, Global Coding: An Algorithm for Feature Coding and
its Applications | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data coding as a building block of several image processing algorithms has
received great attention recently. Indeed, the importance of the locality
assumption in coding approaches is studied in numerous works and several
methods are proposed based on this concept. We probe this assumption and claim
that taking the similarity between a data point and a more global set of anchor
points does not necessarily weaken the coding method as long as the underlying
structure of the anchor points are taken into account. Based on this fact, we
propose to capture this underlying structure by assuming a random walker over
the anchor points. We show that our method is a fast approximate learning
algorithm based on the diffusion map kernel. The experiments on various
datasets show that making different state-of-the-art coding algorithms aware of
this structure boosts them in different learning tasks.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2013 04:39:28 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2014 20:30:13 GMT"
}
] | 2014-03-06T00:00:00 | [
[
"Shaban",
"Amirreza",
""
],
[
"Rabiee",
"Hamid R.",
""
],
[
"Najibi",
"Mahyar",
""
]
] | TITLE: Local Similarities, Global Coding: An Algorithm for Feature Coding and
its Applications
ABSTRACT: Data coding as a building block of several image processing algorithms has
received great attention recently. Indeed, the importance of the locality
assumption in coding approaches is studied in numerous works and several
methods are proposed based on this concept. We probe this assumption and claim
that taking the similarity between a data point and a more global set of anchor
points does not necessarily weaken the coding method as long as the underlying
structure of the anchor points are taken into account. Based on this fact, we
propose to capture this underlying structure by assuming a random walker over
the anchor points. We show that our method is a fast approximate learning
algorithm based on the diffusion map kernel. The experiments on various
datasets show that making different state-of-the-art coding algorithms aware of
this structure boosts them in different learning tasks.
| no_new_dataset | 0.948202 |
1402.7025 | Max Welling | Max Welling | Exploiting the Statistics of Learning and Inference | Proceedings of the NIPS workshop on "Probabilistic Models for Big
Data" | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When dealing with datasets containing a billion instances or with simulations
that require a supercomputer to execute, computational resources become part of
the equation. We can improve the efficiency of learning and inference by
exploiting their inherent statistical nature. We propose algorithms that
exploit the redundancy of data relative to a model by subsampling data-cases
for every update and reasoning about the uncertainty created in this process.
In the context of learning we propose to test for the probability that a
stochastically estimated gradient points more than 180 degrees in the wrong
direction. In the context of MCMC sampling we use stochastic gradients to
improve the efficiency of MCMC updates, and hypothesis tests based on adaptive
mini-batches to decide whether to accept or reject a proposed parameter update.
Finally, we argue that in the context of likelihood free MCMC one needs to
store all the information revealed by all simulations, for instance in a
Gaussian process. We conclude that Bayesian methods will remain to play a
crucial role in the era of big data and big simulations, but only if we
overcome a number of computational challenges.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2014 10:47:09 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2014 21:12:43 GMT"
}
] | 2014-03-06T00:00:00 | [
[
"Welling",
"Max",
""
]
] | TITLE: Exploiting the Statistics of Learning and Inference
ABSTRACT: When dealing with datasets containing a billion instances or with simulations
that require a supercomputer to execute, computational resources become part of
the equation. We can improve the efficiency of learning and inference by
exploiting their inherent statistical nature. We propose algorithms that
exploit the redundancy of data relative to a model by subsampling data-cases
for every update and reasoning about the uncertainty created in this process.
In the context of learning we propose to test for the probability that a
stochastically estimated gradient points more than 180 degrees in the wrong
direction. In the context of MCMC sampling we use stochastic gradients to
improve the efficiency of MCMC updates, and hypothesis tests based on adaptive
mini-batches to decide whether to accept or reject a proposed parameter update.
Finally, we argue that in the context of likelihood free MCMC one needs to
store all the information revealed by all simulations, for instance in a
Gaussian process. We conclude that Bayesian methods will remain to play a
crucial role in the era of big data and big simulations, but only if we
overcome a number of computational challenges.
| no_new_dataset | 0.949949 |
1403.1056 | Conrad Sanderson | Andres Sanin, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell | K-Tangent Spaces on Riemannian Manifolds for Improved Pedestrian
Detection | IEEE International Conference on Image Processing (ICIP), 2012 | null | 10.1109/ICIP.2012.6466899 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For covariance-based image descriptors, taking into account the curvature of
the corresponding feature space has been shown to improve discrimination
performance. This is often done through representing the descriptors as points
on Riemannian manifolds, with the discrimination accomplished on a tangent
space. However, such treatment is restrictive as distances between arbitrary
points on the tangent space do not represent true geodesic distances, and hence
do not represent the manifold structure accurately. In this paper we propose a
general discriminative model based on the combination of several tangent
spaces, in order to preserve more details of the structure. The model can be
used as a weak learner in a boosting-based pedestrian detection framework.
Experiments on the challenging INRIA and DaimlerChrysler datasets show that the
proposed model leads to considerably higher performance than methods based on
histograms of oriented gradients as well as previous Riemannian-based
techniques.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2014 09:44:41 GMT"
}
] | 2014-03-06T00:00:00 | [
[
"Sanin",
"Andres",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Harandi",
"Mehrtash T.",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: K-Tangent Spaces on Riemannian Manifolds for Improved Pedestrian
Detection
ABSTRACT: For covariance-based image descriptors, taking into account the curvature of
the corresponding feature space has been shown to improve discrimination
performance. This is often done through representing the descriptors as points
on Riemannian manifolds, with the discrimination accomplished on a tangent
space. However, such treatment is restrictive as distances between arbitrary
points on the tangent space do not represent true geodesic distances, and hence
do not represent the manifold structure accurately. In this paper we propose a
general discriminative model based on the combination of several tangent
spaces, in order to preserve more details of the structure. The model can be
used as a weak learner in a boosting-based pedestrian detection framework.
Experiments on the challenging INRIA and DaimlerChrysler datasets show that the
proposed model leads to considerably higher performance than methods based on
histograms of oriented gradients as well as previous Riemannian-based
techniques.
| no_new_dataset | 0.948394 |
1211.6664 | Fabien Campagne | Fabien Campagne, Kevin C. Dorff, Nyasha Chambwe, James T. Robinson,
Jill P. Mesirov and Thomas D. Wu | Compression of structured high-throughput sequencing data | main article: 2 figures, 2 tables. Supplementary material: 2 figures,
4 tables. Comment on this manuscript on Twitter or Google Plus using handle
#Goby2Paper | null | 10.1371/journal.pone.0079871 | null | q-bio.QM cs.DB q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large biological datasets are being produced at a rapid pace and create
substantial storage challenges, particularly in the domain of high-throughput
sequencing (HTS). Most approaches currently used to store HTS data are either
unable to quickly adapt to the requirements of new sequencing or analysis
methods (because they do not support schema evolution), or fail to provide
state of the art compression of the datasets. We have devised new approaches to
store HTS data that support seamless data schema evolution and compress
datasets substantially better than existing approaches. Building on these new
approaches, we discuss and demonstrate how a multi-tier data organization can
dramatically reduce the storage, computational and network burden of
collecting, analyzing, and archiving large sequencing datasets. For instance,
we show that spliced RNA-Seq alignments can be stored in less than 4% the size
of a BAM file with perfect data fidelity. Compared to the previous compression
state of the art, these methods reduce dataset size more than 20% when storing
gene expression and epigenetic datasets. The approaches have been integrated in
a comprehensive suite of software tools (http://goby.campagnelab.org) that
support common analyses for a range of high-throughput sequencing assays.
| [
{
"version": "v1",
"created": "Wed, 28 Nov 2012 17:11:54 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Campagne",
"Fabien",
""
],
[
"Dorff",
"Kevin C.",
""
],
[
"Chambwe",
"Nyasha",
""
],
[
"Robinson",
"James T.",
""
],
[
"Mesirov",
"Jill P.",
""
],
[
"Wu",
"Thomas D.",
""
]
] | TITLE: Compression of structured high-throughput sequencing data
ABSTRACT: Large biological datasets are being produced at a rapid pace and create
substantial storage challenges, particularly in the domain of high-throughput
sequencing (HTS). Most approaches currently used to store HTS data are either
unable to quickly adapt to the requirements of new sequencing or analysis
methods (because they do not support schema evolution), or fail to provide
state of the art compression of the datasets. We have devised new approaches to
store HTS data that support seamless data schema evolution and compress
datasets substantially better than existing approaches. Building on these new
approaches, we discuss and demonstrate how a multi-tier data organization can
dramatically reduce the storage, computational and network burden of
collecting, analyzing, and archiving large sequencing datasets. For instance,
we show that spliced RNA-Seq alignments can be stored in less than 4% the size
of a BAM file with perfect data fidelity. Compared to the previous compression
state of the art, these methods reduce dataset size more than 20% when storing
gene expression and epigenetic datasets. The approaches have been integrated in
a comprehensive suite of software tools (http://goby.campagnelab.org) that
support common analyses for a range of high-throughput sequencing assays.
| no_new_dataset | 0.943867 |
1306.0196 | Peng Bao | Peng Bao, Hua-Wei Shen, Wei Chen, Xue-Qi Cheng | Cumulative Effect in Information Diffusion: A Comprehensive Empirical
Study on Microblogging Network | null | null | 10.1371/journal.pone.0076027 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cumulative effect in social contagions underlies many studies on the spread
of innovation, behaviors, and influence. However, few large-scale empirical
studies are conducted to validate the existence of cumulative effect in the
information diffusion on social networks. In this paper, using the
population-scale dataset from the largest Chinese microblogging website, we
conduct a comprehensive study on the cumulative effect in information
diffusion. We base our study on the diffusion network of each message, where
nodes are the involved users and links are the following relationships among
them. We find that multiple exposures to the same message indeed increase the
possibility of forwarding it. However, additional exposures cannot further
improve the chance of forwarding when the number of exposures crosses its peak
at two. This finding questions the cumulative effect hypothesis in information
diffusion. Furthermore, to clarify the forwarding preference among users, we
investigate both the structural motif of the diffusion network and the temporal
pattern of information diffusion process among users. The patterns provide
vital insight for understanding the variation of message popularity and explain
the characteristics of diffusion networks.
| [
{
"version": "v1",
"created": "Sun, 2 Jun 2013 11:31:51 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Bao",
"Peng",
""
],
[
"Shen",
"Hua-Wei",
""
],
[
"Chen",
"Wei",
""
],
[
"Cheng",
"Xue-Qi",
""
]
] | TITLE: Cumulative Effect in Information Diffusion: A Comprehensive Empirical
Study on Microblogging Network
ABSTRACT: Cumulative effect in social contagions underlies many studies on the spread
of innovation, behaviors, and influence. However, few large-scale empirical
studies are conducted to validate the existence of cumulative effect in the
information diffusion on social networks. In this paper, using the
population-scale dataset from the largest Chinese microblogging website, we
conduct a comprehensive study on the cumulative effect in information
diffusion. We base our study on the diffusion network of each message, where
nodes are the involved users and links are the following relationships among
them. We find that multiple exposures to the same message indeed increase the
possibility of forwarding it. However, additional exposures cannot further
improve the chance of forwarding when the number of exposures crosses its peak
at two. This finding questions the cumulative effect hypothesis in information
diffusion. Furthermore, to clarify the forwarding preference among users, we
investigate both the structural motif of the diffusion network and the temporal
pattern of information diffusion process among users. The patterns provide
vital insight for understanding the variation of message popularity and explain
the characteristics of diffusion networks.
| no_new_dataset | 0.949295 |
1307.6086 | Xiao-Pu Han | Ya-Nan Pan, Jing-Jing Lou, Xiao-Pu Han | Outbreak Patterns of the Novel Avian Influenza (H7N9) | 13 pages, 3 figures | null | 10.1016/j.physa.2014.01.040 | null | physics.soc-ph physics.data-an q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The attack of novel avian influenza (H7N9) in east China caused a serious
health crisis and public panic. In this paper, we empirically analyze the onset
patterns of human cases of the novel avian influenza and observe several
spatial and temporal properties that are similar to other infective diseases.
More deeply, using the empirical analysis and modeling studies, we find that
the spatio-temporal network that connects the cities with human cases along the
order of outbreak timing exhibits a two-section power-law edge-length
distribution, indicating that several islands with higher and
heterogeneous risk straggle in east China. The proposed method is applicable to
the analysis on the spreading situation in early stage of disease outbreak
using quite limited dataset.
| [
{
"version": "v1",
"created": "Tue, 23 Jul 2013 14:01:54 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Jul 2013 12:27:29 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Dec 2013 04:28:13 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Pan",
"Ya-Nan",
""
],
[
"Lou",
"Jing-Jing",
""
],
[
"Han",
"Xiao-Pu",
""
]
] | TITLE: Outbreak Patterns of the Novel Avian Influenza (H7N9)
ABSTRACT: The attack of novel avian influenza (H7N9) in east China caused a serious
health crisis and public panic. In this paper, we empirically analyze the onset
patterns of human cases of the novel avian influenza and observe several
spatial and temporal properties that are similar to other infective diseases.
More deeply, using the empirical analysis and modeling studies, we find that
the spatio-temporal network that connects the cities with human cases along the
order of outbreak timing exhibits a two-section power-law edge-length
distribution, indicating that several islands with higher and
heterogeneous risk straggle in east China. The proposed method is applicable to
the analysis on the spreading situation in early stage of disease outbreak
using quite limited dataset.
| no_new_dataset | 0.941115 |
1308.3060 | Wei Zeng | Wei Zeng, An Zeng, Ming-Sheng Shang and Yi-Cheng Zhang | Information filtering in sparse online systems: recommendation via
semi-local diffusion | 8 figures | null | 10.1371/journal.pone.0079354 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of the Internet and overwhelming amount of information
and choices that people are confronted with, recommender systems have been
developed to effectively support users' decision-making process in the online
systems. However, many recommendation algorithms suffer from the data sparsity
problem, i.e. the user-object bipartite networks are so sparse that algorithms
cannot accurately recommend objects for users. This data sparsity problem makes
many well-known recommendation algorithms perform poorly. To solve the problem,
we propose a recommendation algorithm based on the semi-local diffusion process
on a user-object bipartite network. The numerical simulation on two sparse
datasets, Amazon and Bookcross, show that our method significantly outperforms
the state-of-the-art methods especially for those small-degree users. Two
personalized semi-local diffusion methods are proposed which further improve
the recommendation accuracy. Finally, our work indicates that sparse online
systems are essentially different from the dense online systems; all the
algorithms and conclusions based on dense data should be rechecked again in
sparse data.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2013 08:29:41 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Zeng",
"Wei",
""
],
[
"Zeng",
"An",
""
],
[
"Shang",
"Ming-Sheng",
""
],
[
"Zhang",
"Yi-Cheng",
""
]
] | TITLE: Information filtering in sparse online systems: recommendation via
semi-local diffusion
ABSTRACT: With the rapid growth of the Internet and overwhelming amount of information
and choices that people are confronted with, recommender systems have been
developed to effectively support users' decision-making process in the online
systems. However, many recommendation algorithms suffer from the data sparsity
problem, i.e. the user-object bipartite networks are so sparse that algorithms
cannot accurately recommend objects for users. This data sparsity problem makes
many well-known recommendation algorithms perform poorly. To solve the problem,
we propose a recommendation algorithm based on the semi-local diffusion process
on a user-object bipartite network. The numerical simulation on two sparse
datasets, Amazon and Bookcross, show that our method significantly outperforms
the state-of-the-art methods especially for those small-degree users. Two
personalized semi-local diffusion methods are proposed which further improve
the recommendation accuracy. Finally, our work indicates that sparse online
systems are essentially different from the dense online systems; all the
algorithms and conclusions based on dense data should be rechecked again in
sparse data.
| no_new_dataset | 0.945298 |
1308.5703 | Gonzalo Diaz | Marcelo Arenas, Gonzalo I. Diaz, Achille Fokoue, Anastasios
Kementsietsidis, Kavitha Srinivas | A Principled Approach to Bridging the Gap between Graph Data and their
Schemas | 18 pages, 8 figures. To be published in PVLDB Vol. 8, No. 9 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although RDF graphs have schema information associated with them, in practice
it is very common to find cases in which data do not fully conform to their
schema. A prominent example of this is DBpedia, which is RDF data extracted
from Wikipedia, a publicly editable source of information. In such situations,
it becomes interesting to study the structural properties of the actual data,
because the schema gives an incomplete description of the organization of a
dataset. In this paper we have approached the study of the structuredness of an
RDF graph in a principled way: we propose a framework for specifying
structuredness functions, which gauge the degree to which an RDF graph conforms
to a schema. In particular, we first define a formal language for specifying
structuredness functions with expressions we call rules. This language allows a
user or a database administrator to state a rule to which an RDF graph may
fully or partially conform. Then we consider the issue of discovering a
refinement of a sort (type) by partitioning the dataset into subsets whose
structuredness is over a specified threshold. In particular, we prove that the
natural decision problem associated to this refinement problem is NP-complete,
and we provide a natural translation of this problem into Integer Linear
Programming (ILP). Finally, we test this ILP solution with two real world
datasets, DBpedia Persons and WordNet Nouns, and 4 different and intuitive
rules, which gauge the structuredness in different ways. The rules give
meaningful refinements of the datasets, showing that our language can be a
powerful tool for understanding the structure of RDF data.
| [
{
"version": "v1",
"created": "Mon, 26 Aug 2013 21:26:00 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2014 14:01:46 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Arenas",
"Marcelo",
""
],
[
"Diaz",
"Gonzalo I.",
""
],
[
"Fokoue",
"Achille",
""
],
[
"Kementsietsidis",
"Anastasios",
""
],
[
"Srinivas",
"Kavitha",
""
]
] | TITLE: A Principled Approach to Bridging the Gap between Graph Data and their
Schemas
ABSTRACT: Although RDF graphs have schema information associated with them, in practice
it is very common to find cases in which data do not fully conform to their
schema. A prominent example of this is DBpedia, which is RDF data extracted
from Wikipedia, a publicly editable source of information. In such situations,
it becomes interesting to study the structural properties of the actual data,
because the schema gives an incomplete description of the organization of a
dataset. In this paper we have approached the study of the structuredness of an
RDF graph in a principled way: we propose a framework for specifying
structuredness functions, which gauge the degree to which an RDF graph conforms
to a schema. In particular, we first define a formal language for specifying
structuredness functions with expressions we call rules. This language allows a
user or a database administrator to state a rule to which an RDF graph may
fully or partially conform. Then we consider the issue of discovering a
refinement of a sort (type) by partitioning the dataset into subsets whose
structuredness is over a specified threshold. In particular, we prove that the
natural decision problem associated to this refinement problem is NP-complete,
and we provide a natural translation of this problem into Integer Linear
Programming (ILP). Finally, we test this ILP solution with two real world
datasets, DBpedia Persons and WordNet Nouns, and 4 different and intuitive
rules, which gauge the structuredness in different ways. The rules give
meaningful refinements of the datasets, showing that our language can be a
powerful tool for understanding the structure of RDF data.
| no_new_dataset | 0.946051 |
1312.3806 | Alberto Ambrosetti | Alberto Ambrosetti, Anthony M. Reilly, Robert A. DiStasio Jr.,
Alexandre Tkatchenko | Long-range correlation energy calculated from coupled atomic response
functions | 15 pages, 3 figures | null | 10.1063/1.4865104 | null | physics.chem-ph cond-mat.mtrl-sci physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An accurate determination of the electron correlation energy is essential for
describing the structure, stability, and function in a wide variety of systems,
ranging from gas-phase molecular assemblies to condensed matter and
organic/inorganic interfaces. Even small errors in the correlation energy can
have a large impact on the description of chemical and physical properties in
the systems of interest. In this context, the development of efficient
approaches for the accurate calculation of the long-range correlation energy
(and hence dispersion) is the main challenge. In the last years a number of
methods have been developed to augment density functional approximations via
dispersion energy corrections, but most of these approaches ignore the
intrinsic many-body nature of correlation effects, leading to inconsistent and
sometimes even qualitatively incorrect predictions. Here we build upon the
recent many-body dispersion (MBD) framework, which is intimately linked to the
random-phase approximation for the correlation energy. We separate the
correlation energy into short-range contributions that are modeled by
semi-local functionals and long-range contributions that are calculated by
mapping the complex all-electron problem onto a set of atomic response
functions coupled in the dipole approximation. We propose an effective
range-separation of the coupling between the atomic response functions that
extends the already broad applicability of the MBD method to non-metallic
materials with highly anisotropic responses, such as layered nanostructures.
Application to a variety of high-quality benchmark datasets illustrates the
accuracy and applicability of the improved MBD approach, which offers the
prospect of first-principles modeling of large structurally complex systems
with an accurate description of the long-range correlation energy.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2013 13:33:02 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Ambrosetti",
"Alberto",
""
],
[
"Reilly",
"Anthony M.",
""
],
[
"DiStasio",
"Robert A.",
"Jr."
],
[
"Tkatchenko",
"Alexandre",
""
]
] | TITLE: Long-range correlation energy calculated from coupled atomic response
functions
ABSTRACT: An accurate determination of the electron correlation energy is essential for
describing the structure, stability, and function in a wide variety of systems,
ranging from gas-phase molecular assemblies to condensed matter and
organic/inorganic interfaces. Even small errors in the correlation energy can
have a large impact on the description of chemical and physical properties in
the systems of interest. In this context, the development of efficient
approaches for the accurate calculation of the long-range correlation energy
(and hence dispersion) is the main challenge. In the last years a number of
methods have been developed to augment density functional approximations via
dispersion energy corrections, but most of these approaches ignore the
intrinsic many-body nature of correlation effects, leading to inconsistent and
sometimes even qualitatively incorrect predictions. Here we build upon the
recent many-body dispersion (MBD) framework, which is intimately linked to the
random-phase approximation for the correlation energy. We separate the
correlation energy into short-range contributions that are modeled by
semi-local functionals and long-range contributions that are calculated by
mapping the complex all-electron problem onto a set of atomic response
functions coupled in the dipole approximation. We propose an effective
range-separation of the coupling between the atomic response functions that
extends the already broad applicability of the MBD method to non-metallic
materials with highly anisotropic responses, such as layered nanostructures.
Application to a variety of high-quality benchmark datasets illustrates the
accuracy and applicability of the improved MBD approach, which offers the
prospect of first-principles modeling of large structurally complex systems
with an accurate description of the long-range correlation energy.
| no_new_dataset | 0.947817 |
1312.4400 | Min Lin | Min Lin, Qiang Chen, Shuicheng Yan | Network In Network | 10 pages, 4 figures, for iclr2014 | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel deep network structure called "Network In Network" (NIN)
to enhance model discriminability for local patches within the receptive field.
The conventional convolutional layer uses linear filters followed by a
nonlinear activation function to scan the input. Instead, we build micro neural
networks with more complex structures to abstract the data within the receptive
field. We instantiate the micro neural network with a multilayer perceptron,
which is a potent function approximator. The feature maps are obtained by
sliding the micro networks over the input in a similar manner as CNN; they are
then fed into the next layer. Deep NIN can be implemented by stacking multiple
of the above described structures. With enhanced local modeling via the micro
network, we are able to utilize global average pooling over feature maps in the
classification layer, which is easier to interpret and less prone to
overfitting than traditional fully connected layers. We demonstrate
state-of-the-art classification performance with NIN on CIFAR-10 and
CIFAR-100, and reasonable performance on the SVHN and MNIST datasets.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 15:34:13 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2013 09:30:27 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2014 05:15:42 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Lin",
"Min",
""
],
[
"Chen",
"Qiang",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Network In Network
ABSTRACT: We propose a novel deep network structure called "Network In Network" (NIN)
to enhance model discriminability for local patches within the receptive field.
The conventional convolutional layer uses linear filters followed by a
nonlinear activation function to scan the input. Instead, we build micro neural
networks with more complex structures to abstract the data within the receptive
field. We instantiate the micro neural network with a multilayer perceptron,
which is a potent function approximator. The feature maps are obtained by
sliding the micro networks over the input in a similar manner as CNN; they are
then fed into the next layer. Deep NIN can be implemented by stacking multiple
of the above described structures. With enhanced local modeling via the micro
network, we are able to utilize global average pooling over feature maps in the
classification layer, which is easier to interpret and less prone to
overfitting than traditional fully connected layers. We demonstrate
state-of-the-art classification performance with NIN on CIFAR-10 and
CIFAR-100, and reasonable performance on the SVHN and MNIST datasets.
| no_new_dataset | 0.951006 |
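The NIN record above hinges on replacing fully connected classification layers with global average pooling over per-class feature maps. As an illustrative aside (not the authors' code), the following minimal NumPy sketch shows what that pooling step computes; the map shapes and random inputs are assumptions chosen only for demonstration.

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Average each feature map over its spatial dimensions.

    feature_maps: array of shape (channels, height, width), one map per class.
    Returns a vector of shape (channels,) suitable as input to a softmax.
    """
    return feature_maps.mean(axis=(1, 2))

def softmax(logits):
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy example: 10 feature maps (one per CIFAR-10 class) of size 8x8.
rng = np.random.default_rng(0)
maps = rng.normal(size=(10, 8, 8))
class_probs = softmax(global_average_pooling(maps))
print(class_probs.shape, round(float(class_probs.sum()), 6))  # (10,) 1.0
```

Each class score is simply the spatial mean of its feature map, which is why the layer has no trainable parameters to overfit.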
1402.0131 | J. M. Vaquero | M. Ant\'on, J.M. Vaquero and A.J.P. Aparicio | The controversial early brightening in the first half of 20th century: a
contribution from pyrheliometer measurements in Madrid (Spain) | 19 pages, 1 figure, accepted for publication in "Global and Planetary
Change" | null | 10.1016/j.gloplacha.2014.01.013 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A long-term decrease in downward surface solar radiation from the 1950s to
the 1980s ("global dimming") followed by a multi-decadal increase up to the
present ("brightening") have been detected in many regions worldwide. In
addition, some researchers have suggested the existence of an "early
brightening" period in the first half of 20th century. However, this latter
phenomenon is an open issue due to the opposite results found in literature and
the scarcity of solar radiation data during this period. This paper contributes
to this relevant discussion analyzing, for the first time in Southern Europe,
the atmospheric column transparency derived from pyrheliometer measurements in
Madrid (Spain) for the period 1911-1928. This time series is one of the three
longest datasets during the first quarter of the 20th century in Europe. The
results showed the great effects of the Katmai eruption (June 1912, Alaska) on
transparency values during 1912-1913 with maximum relative anomalies around 8%.
Outside the period affected by this volcano, the atmospheric transparency
exhibited a stable behavior with a slight negative trend without any
statistical significance on an annual and seasonal basis. Overall, there is no
evidence of a possible early brightening period in direct solar radiation in
Madrid. This phenomenon is currently an open issue and further research is
needed using the few sites with available experimental records during the first
half of the 20th century.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2014 22:32:41 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Antón",
"M.",
""
],
[
"Vaquero",
"J. M.",
""
],
[
"Aparicio",
"A. J. P.",
""
]
] | TITLE: The controversial early brightening in the first half of 20th century: a
contribution from pyrheliometer measurements in Madrid (Spain)
ABSTRACT: A long-term decrease in downward surface solar radiation from the 1950s to
the 1980s ("global dimming") followed by a multi-decadal increase up to the
present ("brightening") have been detected in many regions worldwide. In
addition, some researchers have suggested the existence of an "early
brightening" period in the first half of 20th century. However, this latter
phenomenon is an open issue due to the opposite results found in literature and
the scarcity of solar radiation data during this period. This paper contributes
to this relevant discussion analyzing, for the first time in Southern Europe,
the atmospheric column transparency derived from pyrheliometer measurements in
Madrid (Spain) for the period 1911-1928. This time series is one of the three
longest datasets during the first quarter of the 20th century in Europe. The
results showed the great effects of the Katmai eruption (June 1912, Alaska) on
transparency values during 1912-1913 with maximum relative anomalies around 8%.
Outside the period affected by this volcano, the atmospheric transparency
exhibited a stable behavior with a slight negative trend without any
statistical significance on an annual and seasonal basis. Overall, there is no
evidence of a possible early brightening period in direct solar radiation in
Madrid. This phenomenon is currently an open issue and further research is
needed using the few sites with available experimental records during the first
half of the 20th century.
| no_new_dataset | 0.940134 |
1403.0598 | Pinar Yanardag | Pinar Yanardag, S.V.N. Vishwanathan | The Structurally Smoothed Graphlet Kernel | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A commonly used paradigm for representing graphs is to use a vector that
contains normalized frequencies of occurrence of certain motifs or sub-graphs.
This vector representation can be used in a variety of applications, such as,
for computing similarity between graphs. The graphlet kernel of Shervashidze et
al. [32] uses induced sub-graphs of k nodes (christened as graphlets by Przulj
[28]) as motifs in the vector representation, and computes the kernel via a dot
product between these vectors. One can easily show that this is a valid kernel
between graphs. However, such a vector representation suffers from a few
drawbacks. As k becomes larger we encounter the sparsity problem; most higher
order graphlets will not occur in a given graph. This leads to diagonal
dominance, that is, a given graph is similar to itself but not to any other
graph in the dataset. On the other hand, since lower order graphlets tend to be
more numerous, using lower values of k does not provide enough discrimination
ability. We propose a smoothing technique to tackle the above problems. Our
method is based on a novel extension of Kneser-Ney and Pitman-Yor smoothing
techniques from natural language processing to graphs. We use the relationships
between lower order and higher order graphlets in order to derive our method.
Consequently, our smoothing algorithm not only respects the dependency between
sub-graphs but also tackles the diagonal dominance problem by distributing the
probability mass across graphlets. In our experiments, the smoothed graphlet
kernel outperforms graph kernels based on raw frequency counts.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 21:20:14 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Yanardag",
"Pinar",
""
],
[
"Vishwanathan",
"S. V. N.",
""
]
] | TITLE: The Structurally Smoothed Graphlet Kernel
ABSTRACT: A commonly used paradigm for representing graphs is to use a vector that
contains normalized frequencies of occurrence of certain motifs or sub-graphs.
This vector representation can be used in a variety of applications, such as,
for computing similarity between graphs. The graphlet kernel of Shervashidze et
al. [32] uses induced sub-graphs of k nodes (christened as graphlets by Przulj
[28]) as motifs in the vector representation, and computes the kernel via a dot
product between these vectors. One can easily show that this is a valid kernel
between graphs. However, such a vector representation suffers from a few
drawbacks. As k becomes larger we encounter the sparsity problem; most higher
order graphlets will not occur in a given graph. This leads to diagonal
dominance, that is, a given graph is similar to itself but not to any other
graph in the dataset. On the other hand, since lower order graphlets tend to be
more numerous, using lower values of k does not provide enough discrimination
ability. We propose a smoothing technique to tackle the above problems. Our
method is based on a novel extension of Kneser-Ney and Pitman-Yor smoothing
techniques from natural language processing to graphs. We use the relationships
between lower order and higher order graphlets in order to derive our method.
Consequently, our smoothing algorithm not only respects the dependency between
sub-graphs but also tackles the diagonal dominance problem by distributing the
probability mass across graphlets. In our experiments, the smoothed graphlet
kernel outperforms graph kernels based on raw frequency counts.
| no_new_dataset | 0.94545 |
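The graphlet-kernel record above describes graphs represented by normalized frequencies of k-node induced subgraphs and compared with a dot product. The sketch below is written against that description rather than the authors' implementation: it counts the four possible 3-node induced subgraph types (0, 1, 2 or 3 edges) in pure Python. The toy graphs and the choice k=3 are assumptions for illustration, and the paper's smoothing step is not reproduced.

```python
from itertools import combinations
from math import comb

def graphlet3_vector(edges, n):
    """Normalized frequencies of the four 3-node induced subgraph types
    (containing 0, 1, 2 or 3 edges) for a simple undirected graph on 0..n-1."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = [0, 0, 0, 0]
    for a, b, c in combinations(range(n), 3):
        m = (b in adj[a]) + (c in adj[a]) + (c in adj[b])  # edges inside the triple
        counts[m] += 1
    total = comb(n, 3)
    return [c / total for c in counts]

def graphlet_kernel(graph_a, graph_b):
    """Dot product between normalized graphlet-frequency vectors."""
    va, vb = graphlet3_vector(*graph_a), graphlet3_vector(*graph_b)
    return sum(x * y for x, y in zip(va, vb))

# Two toy graphs given as (edge list, number of nodes).
triangle_with_tail = ([(0, 1), (1, 2), (0, 2), (2, 3)], 4)
four_cycle = ([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(graphlet_kernel(triangle_with_tail, four_cycle))
```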
1403.0699 | Conrad Sanderson | Azadeh Alavi, Yan Yang, Mehrtash Harandi, Conrad Sanderson | Multi-Shot Person Re-Identification via Relational Stein Divergence | IEEE International Conference on Image Processing (ICIP), 2013 | null | 10.1109/ICIP.2013.6738731 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification is particularly challenging due to significant
appearance changes across separate camera views. In order to re-identify
people, a representative human signature should effectively handle differences
in illumination, pose and camera parameters. While general appearance-based
methods are modelled in Euclidean spaces, it has been argued that some
applications in image and video analysis are better modelled via non-Euclidean
manifold geometry. To this end, recent approaches represent images as
covariance matrices, and interpret such matrices as points on Riemannian
manifolds. As direct classification on such manifolds can be difficult, in this
paper we propose to represent each manifold point as a vector of similarities
to class representers, via a recently introduced form of Bregman matrix
divergence known as the Stein divergence. This is followed by using a
discriminative mapping of similarity vectors for final classification. The use
of similarity vectors is in contrast to the traditional approach of embedding
manifolds into tangent spaces, which can suffer from representing the manifold
structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS
datasets for the person re-identification task show that the proposed approach
obtains better performance than recent techniques such as Histogram Plus
Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local
Features.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2014 06:44:17 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Alavi",
"Azadeh",
""
],
[
"Yang",
"Yan",
""
],
[
"Harandi",
"Mehrtash",
""
],
[
"Sanderson",
"Conrad",
""
]
] | TITLE: Multi-Shot Person Re-Identification via Relational Stein Divergence
ABSTRACT: Person re-identification is particularly challenging due to significant
appearance changes across separate camera views. In order to re-identify
people, a representative human signature should effectively handle differences
in illumination, pose and camera parameters. While general appearance-based
methods are modelled in Euclidean spaces, it has been argued that some
applications in image and video analysis are better modelled via non-Euclidean
manifold geometry. To this end, recent approaches represent images as
covariance matrices, and interpret such matrices as points on Riemannian
manifolds. As direct classification on such manifolds can be difficult, in this
paper we propose to represent each manifold point as a vector of similarities
to class representers, via a recently introduced form of Bregman matrix
divergence known as the Stein divergence. This is followed by using a
discriminative mapping of similarity vectors for final classification. The use
of similarity vectors is in contrast to the traditional approach of embedding
manifolds into tangent spaces, which can suffer from representing the manifold
structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS
datasets for the person re-identification task show that the proposed approach
obtains better performance than recent techniques such as Histogram Plus
Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local
Features.
| no_new_dataset | 0.949389 |
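The preceding record represents image regions as covariance matrices and compares them with the Stein (Jensen-Bregman LogDet) divergence. As a hedged illustration of that distance alone (the discriminative mapping of similarity vectors is not shown), the snippet below evaluates it for two synthetic covariance descriptors; the feature dimension and ridge term are arbitrary choices, not values from the paper.

```python
import numpy as np

def stein_divergence(X, Y):
    """Stein (Jensen-Bregman LogDet) divergence between two SPD matrices:
    S(X, Y) = log det((X + Y) / 2) - 0.5 * (log det X + log det Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def covariance_descriptor(features):
    """Covariance of per-pixel feature vectors (rows = observations),
    with a small ridge so the matrix stays positive definite."""
    C = np.cov(features, rowvar=False)
    return C + 1e-6 * np.eye(C.shape[0])

rng = np.random.default_rng(0)
region_a = covariance_descriptor(rng.normal(size=(500, 5)))
region_b = covariance_descriptor(2.0 * rng.normal(size=(500, 5)))
print(stein_divergence(region_a, region_a))  # ~0 for identical descriptors
print(stein_divergence(region_a, region_b))  # > 0 for differing descriptors
```

A point would then be described by its vector of such divergences to a set of class representers before the final classification step.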
1403.0829 | Weifeng Liu | W. Liu, H. Liu, D. Tao, Y. Wang, Ke Lu | Multiview Hessian regularized logistic regression for action recognition | 13 pages,2 figures, submitted to signal processing | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of social media sharing, people often need to
manage the growing volume of multimedia data such as large scale video
classification and annotation, especially to organize those videos containing
human activities. Recently, manifold regularized semi-supervised learning
(SSL), which explores the intrinsic data probability distribution and then
improves the generalization ability with only a small number of labeled data,
has emerged as a promising paradigm for semiautomatic video classification. In
addition, human action videos often have multi-modal content and different
representations. To tackle the above problems, in this paper we propose
multiview Hessian regularized logistic regression (mHLR) for human action
recognition. Compared with existing work, the advantages of mHLR are
three-fold: (1) mHLR combines multiple Hessian regularizations, each of which
is obtained from a particular representation of the instance, to leverage the
exploration of local geometry; (2) mHLR naturally handles multi-view instances
with multiple representations; (3) mHLR employs a smooth loss function and then
can be effectively optimized. We carefully conduct extensive experiments on the
unstructured social activity attribute (USAA) dataset and the experimental
results demonstrate the effectiveness of the proposed multiview Hessian
regularized logistic regression for human action recognition.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 01:11:40 GMT"
}
] | 2014-03-05T00:00:00 | [
[
"Liu",
"W.",
""
],
[
"Liu",
"H.",
""
],
[
"Tao",
"D.",
""
],
[
"Wang",
"Y.",
""
],
[
"Lu",
"Ke",
""
]
] | TITLE: Multiview Hessian regularized logistic regression for action recognition
ABSTRACT: With the rapid development of social media sharing, people often need to
manage the growing volume of multimedia data such as large scale video
classification and annotation, especially to organize those videos containing
human activities. Recently, manifold regularized semi-supervised learning
(SSL), which explores the intrinsic data probability distribution and then
improves the generalization ability with only a small number of labeled data,
has emerged as a promising paradigm for semiautomatic video classification. In
addition, human action videos often have multi-modal content and different
representations. To tackle the above problems, in this paper we propose
multiview Hessian regularized logistic regression (mHLR) for human action
recognition. Compared with existing work, the advantages of mHLR are
three-fold: (1) mHLR combines multiple Hessian regularizations, each of which
is obtained from a particular representation of the instance, to leverage the
exploration of local geometry; (2) mHLR naturally handles multi-view instances
with multiple representations; (3) mHLR employs a smooth loss function and then
can be effectively optimized. We carefully conduct extensive experiments on the
unstructured social activity attribute (USAA) dataset and the experimental
results demonstrate the effectiveness of the proposed multiview Hessian
regularized logistic regression for human action recognition.
| no_new_dataset | 0.948775 |
1403.0224 | Rakesh Mohanty | Mitali Sinha, Suchismita Pattanaik, Rakesh Mohanty and Prachi Tripathy | Experimental Study of A Novel Variant of Fiduccia Mattheyses(FM)
Partitioning Algorithm | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partitioning is a well studied research problem in the area of VLSI physical
design automation. In this problem, the input is an integrated circuit and the
output is a set of almost equal disjoint blocks. The main objective of
partitioning is to assign the components of the circuit to blocks in order to
minimize the number of inter-block connections. A partitioning algorithm using
hypergraphs with linear time complexity was proposed by Fiduccia and
Mattheyses, which has become popularly known as the FM algorithm. Most of the
hypergraph-based partitioning algorithms proposed in the literature are
variants of the FM algorithm. In this paper, we propose a novel variant of the
FM algorithm using a pairwise swapping technique. We have performed a
comparative experimental study of the FM algorithm and our proposed algorithm
using two datasets, ISPD98 and ISPD99. Experimental results show that the
performance of our proposed algorithm is better than that of the FM algorithm
on the above datasets.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2014 15:34:48 GMT"
}
] | 2014-03-04T00:00:00 | [
[
"Sinha",
"Mitali",
""
],
[
"Pattanaik",
"Suchismita",
""
],
[
"Mohanty",
"Rakesh",
""
],
[
"Tripathy",
"Prachi",
""
]
] | TITLE: Experimental Study of A Novel Variant of Fiduccia Mattheyses(FM)
Partitioning Algorithm
ABSTRACT: Partitioning is a well studied research problem in the area of VLSI physical
design automation. In this problem, the input is an integrated circuit and the
output is a set of almost equal disjoint blocks. The main objective of
partitioning is to assign the components of the circuit to blocks in order to
minimize the number of inter-block connections. A partitioning algorithm using
hypergraphs with linear time complexity was proposed by Fiduccia and
Mattheyses, which has become popularly known as the FM algorithm. Most of the
hypergraph-based partitioning algorithms proposed in the literature are
variants of the FM algorithm. In this paper, we propose a novel variant of the
FM algorithm using a pairwise swapping technique. We have performed a
comparative experimental study of the FM algorithm and our proposed algorithm
using two datasets, ISPD98 and ISPD99. Experimental results show that the
performance of our proposed algorithm is better than that of the FM algorithm
on the above datasets.
| no_new_dataset | 0.948917 |
1403.0316 | Kang Zhang | Kang Zhang, Yuqiang Fang, Dongbo Min, Lifeng Sun, Shiqiang Yang,
Shuicheng Yan, Qi Tian | Cross-Scale Cost Aggregation for Stereo Matching | To Appear in 2013 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR). 2014 (poster, 29.88%) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human beings process stereoscopic correspondence across multiple scales.
However, this bio-inspiration is ignored by state-of-the-art cost aggregation
methods for dense stereo correspondence. In this paper, a generic cross-scale
cost aggregation framework is proposed to allow multi-scale interaction in cost
aggregation. We firstly reformulate cost aggregation from a unified
optimization perspective and show that different cost aggregation methods
essentially differ in the choices of similarity kernels. Then, an inter-scale
regularizer is introduced into optimization and solving this new optimization
problem leads to the proposed framework. Since the regularization term is
independent of the similarity kernel, various cost aggregation methods can be
integrated into the proposed general framework. We show that the cross-scale
framework is important as it effectively and efficiently expands
state-of-the-art cost aggregation methods and leads to significant
improvements, when evaluated on Middlebury, KITTI and New Tsukuba datasets.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 05:20:28 GMT"
}
] | 2014-03-04T00:00:00 | [
[
"Zhang",
"Kang",
""
],
[
"Fang",
"Yuqiang",
""
],
[
"Min",
"Dongbo",
""
],
[
"Sun",
"Lifeng",
""
],
[
"Yang",
"Shiqiang",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: Cross-Scale Cost Aggregation for Stereo Matching
ABSTRACT: Human beings process stereoscopic correspondence across multiple scales.
However, this bio-inspiration is ignored by state-of-the-art cost aggregation
methods for dense stereo correspondence. In this paper, a generic cross-scale
cost aggregation framework is proposed to allow multi-scale interaction in cost
aggregation. We firstly reformulate cost aggregation from a unified
optimization perspective and show that different cost aggregation methods
essentially differ in the choices of similarity kernels. Then, an inter-scale
regularizer is introduced into optimization and solving this new optimization
problem leads to the proposed framework. Since the regularization term is
independent of the similarity kernel, various cost aggregation methods can be
integrated into the proposed general framework. We show that the cross-scale
framework is important as it effectively and efficiently expands
state-of-the-art cost aggregation methods and leads to significant
improvements, when evaluated on Middlebury, KITTI and New Tsukuba datasets.
| no_new_dataset | 0.944791 |
1403.0481 | Arindam Chaudhuri AC | Arindam Chaudhuri | Support Vector Machine Model for Currency Crisis Discrimination | Book Chapter Selected Works in Infrastructural Finance, Rudra P.
Pradhan, Indian Institute of Technology Kharagpur, Editor, Macmillan
Publishers, India, pp 249 - 256, 2011 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support Vector Machine (SVM) is a powerful classification technique based on
the idea of structural risk minimization. The use of a kernel function enables
the curse of dimensionality to be addressed. However, the proper kernel
function for a given problem depends on the specific dataset, and as such there
is no general method for choosing the kernel function. In this paper, SVM is
used to build empirical models of the currency crisis in Argentina. An
estimation technique is developed by training the model on a real-life dataset,
which provides reasonably accurate model outputs and helps policy makers to
identify situations in which a currency crisis may happen. The third- and
fourth-order polynomial kernels are generally the best choice to achieve high
generalization of classifier performance. SVM has a high level of maturity,
with algorithms that are simple, easy to implement, tolerant of the curse of
dimensionality, and of good empirical performance. The satisfactory results
show that the currency crisis situation is properly emulated using only a small
fraction of the database and could be used as an evaluation tool as well as an
early warning system. To the best of our knowledge, this is the first work on
an SVM approach for currency crisis evaluation of Argentina.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 16:34:38 GMT"
}
] | 2014-03-04T00:00:00 | [
[
"Chaudhuri",
"Arindam",
""
]
] | TITLE: Support Vector Machine Model for Currency Crisis Discrimination
ABSTRACT: Support Vector Machine (SVM) is a powerful classification technique based on
the idea of structural risk minimization. The use of a kernel function enables
the curse of dimensionality to be addressed. However, the proper kernel
function for a given problem depends on the specific dataset, and as such there
is no general method for choosing the kernel function. In this paper, SVM is
used to build empirical models of the currency crisis in Argentina. An
estimation technique is developed by training the model on a real-life dataset,
which provides reasonably accurate model outputs and helps policy makers to
identify situations in which a currency crisis may happen. The third- and
fourth-order polynomial kernels are generally the best choice to achieve high
generalization of classifier performance. SVM has a high level of maturity,
with algorithms that are simple, easy to implement, tolerant of the curse of
dimensionality, and of good empirical performance. The satisfactory results
show that the currency crisis situation is properly emulated using only a small
fraction of the database and could be used as an evaluation tool as well as an
early warning system. To the best of our knowledge, this is the first work on
an SVM approach for currency crisis evaluation of Argentina.
| no_new_dataset | 0.950411 |
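The SVM record above stresses third- and fourth-order polynomial kernels for crisis discrimination. Since neither the Argentine dataset nor the authors' code is available here, the sketch below uses scikit-learn's standard SVC (not an LSSVM) on synthetic stand-in indicators, purely to show how the kernel degree is varied; all variable names and the labelling rule are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for macroeconomic indicators (the real dataset is not public here).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                          # six hypothetical indicators
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.8).astype(int)   # 1 = crisis, 0 = no crisis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for degree in (3, 4):  # the kernel orders highlighted in the abstract
    model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=degree, C=1.0))
    model.fit(X_tr, y_tr)
    print(degree, round(model.score(X_te, y_te), 3))    # held-out accuracy per degree
```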
1310.2959 | Partha Talukdar | Partha Pratim Talukdar, William Cohen | Scaling Graph-based Semi Supervised Learning to Large Number of Labels
Using Count-Min Sketch | 9 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-based Semi-supervised learning (SSL) algorithms have been successfully
used in a large number of applications. These methods classify initially
unlabeled nodes by propagating label information over the structure of graph
starting from seed nodes. Graph-based SSL algorithms usually scale linearly
with the number of distinct labels (m), and require O(m) space on each node.
Unfortunately, there exist many applications of practical significance with
very large m over large graphs, demanding better space and time complexity. In
this paper, we propose MAD-SKETCH, a novel graph-based SSL algorithm which
compactly stores label distribution on each node using Count-min Sketch, a
randomized data structure. We present theoretical analysis showing that under
mild conditions, MAD-SKETCH can reduce space complexity at each node from O(m)
to O(log m), and achieve similar savings in time complexity as well. We support
our analysis through experiments on multiple real world datasets. We observe
that MAD-SKETCH achieves similar performance as existing state-of-the-art
graph-based SSL algorithms, while requiring smaller memory footprint and at
the same time achieving up to 10x speedup. We find that MAD-SKETCH is able to
scale to datasets with one million labels, which is beyond the scope of
existing graph-based SSL algorithms.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2013 20:30:06 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2014 21:19:41 GMT"
}
] | 2014-03-03T00:00:00 | [
[
"Talukdar",
"Partha Pratim",
""
],
[
"Cohen",
"William",
""
]
] | TITLE: Scaling Graph-based Semi Supervised Learning to Large Number of Labels
Using Count-Min Sketch
ABSTRACT: Graph-based Semi-supervised learning (SSL) algorithms have been successfully
used in a large number of applications. These methods classify initially
unlabeled nodes by propagating label information over the structure of graph
starting from seed nodes. Graph-based SSL algorithms usually scale linearly
with the number of distinct labels (m), and require O(m) space on each node.
Unfortunately, there exist many applications of practical significance with
very large m over large graphs, demanding better space and time complexity. In
this paper, we propose MAD-SKETCH, a novel graph-based SSL algorithm which
compactly stores label distribution on each node using Count-min Sketch, a
randomized data structure. We present theoretical analysis showing that under
mild conditions, MAD-SKETCH can reduce space complexity at each node from O(m)
to O(log m), and achieve similar savings in time complexity as well. We support
our analysis through experiments on multiple real world datasets. We observe
that MAD-SKETCH achieves similar performance as existing state-of-the-art
graph-based SSL algorithms, while requiring smaller memory footprint and at
the same time achieving up to 10x speedup. We find that MAD-SKETCH is able to
scale to datasets with one million labels, which is beyond the scope of
existing graph-based SSL algorithms.
| no_new_dataset | 0.954942 |
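MAD-SKETCH, summarized above, stores each node's label distribution in a Count-Min Sketch so that m labels need only O(log m) space per node. The class below is a generic Count-Min Sketch sketch (the hash choice and table sizes are assumptions), not the MAD propagation algorithm itself; it only illustrates the add/estimate operations the paper builds on.

```python
import hashlib

class CountMinSketch:
    """Minimal Count-Min Sketch: approximate counts in O(width * depth) space."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        # One bucket per row, derived from a salted hash of the key.
        for row in range(self.depth):
            h = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
            yield row, int(h, 16) % self.width

    def add(self, key, count=1):
        for row, col in self._buckets(key):
            self.table[row][col] += count

    def estimate(self, key):
        # Never underestimates; overestimates only on hash collisions.
        return min(self.table[row][col] for row, col in self._buckets(key))

sketch = CountMinSketch(width=512, depth=4)
for label, weight in [("label_7", 3), ("label_7", 2), ("label_42", 1)]:
    sketch.add(label, weight)
print(sketch.estimate("label_7"), sketch.estimate("label_42"))  # ~5, ~1
```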
1311.0680 | Bartosz Hawelka | Bartosz Hawelka, Izabela Sitko, Euro Beinat, Stanislav Sobolevsky,
Pavlos Kazakopoulos and Carlo Ratti | Geo-located Twitter as the proxy for global mobility patterns | 17 pages, 13 figures | null | 10.1080/15230406.2014.890072 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of a pervasive presence of location sharing services,
researchers have gained unprecedented access to the direct records of human
activity in space and time. This paper analyses geo-located Twitter messages in
order to uncover global patterns of human mobility. Based on a dataset of
almost a billion tweets recorded in 2012 we estimate volumes of international
travelers with respect to their country of residence. We examine mobility
profiles of different nations looking at characteristics such as mobility
rate, radius of gyration, diversity of destinations and the balance of
inflows and outflows. The temporal patterns disclose the universal seasons of
increased international mobility and the peculiar national nature of overseas
travel. Our analysis of the community structure of the Twitter mobility
network, obtained with the iterative network partitioning, reveals spatially
cohesive regions that follow the regional division of the world. Finally, we
validate our result with the global tourism statistics and mobility models
provided by other authors, and argue that Twitter is a viable source to
understand and quantify global mobility patterns.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2013 12:46:08 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Dec 2013 13:40:30 GMT"
}
] | 2014-03-03T00:00:00 | [
[
"Hawelka",
"Bartosz",
""
],
[
"Sitko",
"Izabela",
""
],
[
"Beinat",
"Euro",
""
],
[
"Sobolevsky",
"Stanislav",
""
],
[
"Kazakopoulos",
"Pavlos",
""
],
[
"Ratti",
"Carlo",
""
]
] | TITLE: Geo-located Twitter as the proxy for global mobility patterns
ABSTRACT: With the advent of a pervasive presence of location sharing services,
researchers have gained unprecedented access to the direct records of human
activity in space and time. This paper analyses geo-located Twitter messages in
order to uncover global patterns of human mobility. Based on a dataset of
almost a billion tweets recorded in 2012 we estimate volumes of international
travelers with respect to their country of residence. We examine mobility
profiles of different nations looking at characteristics such as mobility
rate, radius of gyration, diversity of destinations and the balance of
inflows and outflows. The temporal patterns disclose the universal seasons of
increased international mobility and the peculiar national nature of overseas
travel. Our analysis of the community structure of the Twitter mobility
network, obtained with the iterative network partitioning, reveals spatially
cohesive regions that follow the regional division of the world. Finally, we
validate our result with the global tourism statistics and mobility models
provided by other authors, and argue that Twitter is a viable source to
understand and quantify global mobility patterns.
| no_new_dataset | 0.931711 |
1402.5596 | Jason Lee | Jason D Lee and Jonathan E Taylor | Exact Post Model Selection Inference for Marginal Screening | null | null | null | null | stat.ME cs.LG math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a framework for post model selection inference, via marginal
screening, in linear regression. At the core of this framework is a result that
characterizes the exact distribution of linear functions of the response $y$,
conditional on the model being selected ("condition on selection" framework).
This allows us to construct valid confidence intervals and hypothesis tests for
regression coefficients that account for the selection procedure. In contrast
to recent work in high-dimensional statistics, our results are exact
(non-asymptotic) and require no eigenvalue-like assumptions on the design
matrix $X$. Furthermore, the computational cost of marginal regression,
constructing confidence intervals and hypothesis testing is negligible compared
to the cost of linear regression, thus making our methods particularly suitable
for extremely large datasets. Although we focus on marginal screening to
illustrate the applicability of the condition on selection framework, this
framework is much more broadly applicable. We show how to apply the proposed
framework to several other selection procedures including orthogonal matching
pursuit, non-negative least squares, and marginal screening+Lasso.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2014 10:30:21 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2014 00:28:21 GMT"
}
] | 2014-03-03T00:00:00 | [
[
"Lee",
"Jason D",
""
],
[
"Taylor",
"Jonathan E",
""
]
] | TITLE: Exact Post Model Selection Inference for Marginal Screening
ABSTRACT: We develop a framework for post model selection inference, via marginal
screening, in linear regression. At the core of this framework is a result that
characterizes the exact distribution of linear functions of the response $y$,
conditional on the model being selected ("condition on selection" framework).
This allows us to construct valid confidence intervals and hypothesis tests for
regression coefficients that account for the selection procedure. In contrast
to recent work in high-dimensional statistics, our results are exact
(non-asymptotic) and require no eigenvalue-like assumptions on the design
matrix $X$. Furthermore, the computational cost of marginal regression,
constructing confidence intervals and hypothesis testing is negligible compared
to the cost of linear regression, thus making our methods particularly suitable
for extremely large datasets. Although we focus on marginal screening to
illustrate the applicability of the condition on selection framework, this
framework is much more broadly applicable. We show how to apply the proposed
framework to several other selection procedures including orthogonal matching
pursuit, non-negative least squares, and marginal screening+Lasso.
| no_new_dataset | 0.947721 |
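The record above conditions inference on the event that marginal screening selected a particular model. The snippet below shows only the selection-plus-refit step being conditioned on (pick the k features with the largest |X^T y|, then run least squares); the paper's exact post-selection confidence intervals are not implemented here, and the synthetic design, sparsity pattern, and k are assumptions.

```python
import numpy as np

def marginal_screening(X, y, k):
    """Select the k features with the largest |X^T y| (marginal scores)."""
    scores = np.abs(X.T @ y)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
n, p, k = 200, 1000, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                  # only three truly active features
y = X @ beta + rng.normal(size=n)

selected = marginal_screening(X, y, k)       # the selection event the paper conditions on
beta_hat, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)  # naive refit on selected columns
print(sorted(selected.tolist()), np.round(beta_hat, 2))
```

Naive intervals around beta_hat would ignore that the columns were chosen using y, which is exactly the bias the paper's conditional framework corrects.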
1312.3245 | Hien Thi Thu Truong | Hien Thi Thu Truong, Eemil Lagerspetz, Petteri Nurmi, Adam J. Oliner,
Sasu Tarkoma, N. Asokan, Sourav Bhattacharya | The Company You Keep: Mobile Malware Infection Rates and Inexpensive
Risk Indicators | null | null | 10.1145/2566486.2568046 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is little information from independent sources in the public domain
about mobile malware infection rates. The only previous independent estimate
(0.0009%) [12] was based on indirect measurements obtained from domain name
resolution traces. In this paper, we present the first independent study of
malware infection rates and associated risk factors using data collected
directly from over 55,000 Android devices. We find that the malware infection
rates in Android devices estimated using two malware datasets (0.28% and
0.26%), though small, are significantly higher than the previous independent
estimate. Using our datasets, we investigate how indicators extracted
inexpensively from the devices correlate with malware infection. Based on the
hypothesis that some application stores have a greater density of malicious
applications and that advertising within applications and cross-promotional
deals may act as infection vectors, we investigate whether the set of
applications used on a device can serve as an indicator for infection of that
device. Our analysis indicates that this alone is not an accurate indicator for
pinpointing infection. However, it is a very inexpensive but surprisingly
useful way for significantly narrowing down the pool of devices on which
expensive monitoring and analysis mechanisms must be deployed. Using our two
malware datasets we show that this indicator performs 4.8 and 4.6 times
(respectively) better at identifying infected devices than the baseline of
random checks. Such indicators can be used, for example, in the search for new
or previously undetected malware. It is therefore a technique that can
complement standard malware scanning by anti-malware tools. Our analysis also
demonstrates a marginally significant difference in battery use between
infected and clean devices.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2013 17:06:16 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2014 16:58:12 GMT"
}
] | 2014-02-28T00:00:00 | [
[
"Truong",
"Hien Thi Thu",
""
],
[
"Lagerspetz",
"Eemil",
""
],
[
"Nurmi",
"Petteri",
""
],
[
"Oliner",
"Adam J.",
""
],
[
"Tarkoma",
"Sasu",
""
],
[
"Asokan",
"N.",
""
],
[
"Bhattacharya",
"Sourav",
""
]
] | TITLE: The Company You Keep: Mobile Malware Infection Rates and Inexpensive
Risk Indicators
ABSTRACT: There is little information from independent sources in the public domain
about mobile malware infection rates. The only previous independent estimate
(0.0009%) [12] was based on indirect measurements obtained from domain name
resolution traces. In this paper, we present the first independent study of
malware infection rates and associated risk factors using data collected
directly from over 55,000 Android devices. We find that the malware infection
rates in Android devices estimated using two malware datasets (0.28% and
0.26%), though small, are significantly higher than the previous independent
estimate. Using our datasets, we investigate how indicators extracted
inexpensively from the devices correlate with malware infection. Based on the
hypothesis that some application stores have a greater density of malicious
applications and that advertising within applications and cross-promotional
deals may act as infection vectors, we investigate whether the set of
applications used on a device can serve as an indicator for infection of that
device. Our analysis indicates that this alone is not an accurate indicator for
pinpointing infection. However, it is a very inexpensive but surprisingly
useful way for significantly narrowing down the pool of devices on which
expensive monitoring and analysis mechanisms must be deployed. Using our two
malware datasets we show that this indicator performs 4.8 and 4.6 times
(respectively) better at identifying infected devices than the baseline of
random checks. Such indicators can be used, for example, in the search for new
or previously undetected malware. It is therefore a technique that can
complement standard malware scanning by anti-malware tools. Our analysis also
demonstrates a marginally significant difference in battery use between
infected and clean devices.
| no_new_dataset | 0.917303 |
1402.6366 | Mustafa Abdul Salam | Osman Hegazy, Omar S. Soliman and Mustafa Abdul Salam | LSSVM-ABC Algorithm for Stock Price prediction | 12 pages. International Journal of Computer Trends and Technology
(IJCTT)2014 | null | null | null | cs.CE cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the Artificial Bee Colony (ABC) algorithm, which is inspired by
the behavior of honey bee swarms, is presented. ABC is a stochastic
population-based evolutionary algorithm for problem solving. The ABC algorithm,
which is considered one of the most recent swarm intelligence techniques, is
proposed to optimize the least squares support vector machine (LSSVM) to
predict daily stock prices. The proposed model is based on the study of stocks'
historical data and technical indicators, and on optimizing LSSVM with the ABC
algorithm. ABC selects the best free-parameter combination for LSSVM to avoid
over-fitting and local minima problems and to improve prediction accuracy.
LSSVM optimized by the Particle Swarm Optimization (PSO) algorithm, LSSVM, and
ANN techniques are used for comparison with the proposed model. The proposed
model is tested with twenty datasets representing different sectors of the
S&P 500 stock market. Results presented in this paper show that the proposed
model has a fast convergence speed, and it also achieves better accuracy than
the compared techniques in most cases.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2014 23:02:08 GMT"
}
] | 2014-02-28T00:00:00 | [
[
"Hegazy",
"Osman",
""
],
[
"Soliman",
"Omar S.",
""
],
[
"Salam",
"Mustafa Abdul",
""
]
] | TITLE: LSSVM-ABC Algorithm for Stock Price prediction
ABSTRACT: In this paper, the Artificial Bee Colony (ABC) algorithm, which is inspired by
the behavior of honey bee swarms, is presented. ABC is a stochastic
population-based evolutionary algorithm for problem solving. The ABC algorithm,
which is considered one of the most recent swarm intelligence techniques, is
proposed to optimize the least squares support vector machine (LSSVM) to
predict daily stock prices. The proposed model is based on the study of stocks'
historical data and technical indicators, and on optimizing LSSVM with the ABC
algorithm. ABC selects the best free-parameter combination for LSSVM to avoid
over-fitting and local minima problems and to improve prediction accuracy.
LSSVM optimized by the Particle Swarm Optimization (PSO) algorithm, LSSVM, and
ANN techniques are used for comparison with the proposed model. The proposed
model is tested with twenty datasets representing different sectors of the
S&P 500 stock market. Results presented in this paper show that the proposed
model has a fast convergence speed, and it also achieves better accuracy than
the compared techniques in most cases.
| no_new_dataset | 0.947332 |
1402.6865 | J\'er\^ome Kunegis | J\'er\^ome Kunegis | Applications of Structural Balance in Signed Social Networks | 37 pages | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | We present measures, models and link prediction algorithms based on the
structural balance in signed social networks. Certain social networks contain,
in addition to the usual 'friend' links, 'enemy' links. These networks are
called signed social networks. A classical and major concept for signed social
networks is that of structural balance, i.e., the tendency of triangles to be
'balanced' towards including an even number of negative edges, such as
friend-friend-friend and friend-enemy-enemy triangles. In this article, we
introduce several new signed network analysis methods that exploit structural
balance for measuring partial balance, for finding communities of people based
on balance, for drawing signed social networks, and for solving the problem of
link prediction. Notably, the introduced methods are based on the signed graph
Laplacian and on the concept of signed resistance distances. We evaluate our
methods on a collection of four signed social network datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2014 11:32:50 GMT"
}
] | 2014-02-28T00:00:00 | [
[
"Kunegis",
"Jérôme",
""
]
] | TITLE: Applications of Structural Balance in Signed Social Networks
ABSTRACT: We present measures, models and link prediction algorithms based on the
structural balance in signed social networks. Certain social networks contain,
in addition to the usual 'friend' links, 'enemy' links. These networks are
called signed social networks. A classical and major concept for signed social
networks is that of structural balance, i.e., the tendency of triangles to be
'balanced' towards including an even number of negative edges, such as
friend-friend-friend and friend-enemy-enemy triangles. In this article, we
introduce several new signed network analysis methods that exploit structural
balance for measuring partial balance, for finding communities of people based
on balance, for drawing signed social networks, and for solving the problem of
link prediction. Notably, the introduced methods are based on the signed graph
Laplacian and on the concept of signed resistance distances. We evaluate our
methods on a collection of four signed social network datasets.
| no_new_dataset | 0.951188 |
1402.7063 | Spyros Sioutas SS | Nikolaos Nodarakis, Spyros Sioutas, Dimitrios Tsoumakos, Giannis
Tzimas and Evaggelia Pitoura | Rapid AkNN Query Processing for Fast Classification of Multidimensional
Data in the Cloud | 12 pages, 14 figures, 4 tables (it will be submitted to DEXA 2014) | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A $k$-nearest neighbor ($k$NN) query determines the $k$ nearest points, using
distance metrics, from a specific location. An all $k$-nearest neighbor
(A$k$NN) query constitutes a variation of a $k$NN query and retrieves the $k$
nearest points for each point inside a database. Their main usage is in
spatial databases, and they form the backbone of many location-based
applications and beyond (e.g., $k$NN joins in databases, classification in
data mining). So, it is crucial to develop methods that answer them
efficiently. In this work, we propose a novel method for classifying
multidimensional data using an A$k$NN algorithm in the MapReduce framework. Our
approach exploits space decomposition techniques for processing the
classification procedure in a parallel and distributed manner. To our
knowledge, we are the first to study the classification of multidimensional
objects under this perspective. Through an extensive experimental evaluation we
prove that our solution is efficient and scalable in processing the given
queries. We investigate many different perspectives that can affect the total
computational cost, such as different dataset distributions, number of
dimensions, growth of $k$ value and granularity of space decomposition and
prove that our system is efficient, robust and scalable.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2014 20:46:09 GMT"
}
] | 2014-02-28T00:00:00 | [
[
"Nodarakis",
"Nikolaos",
""
],
[
"Sioutas",
"Spyros",
""
],
[
"Tsoumakos",
"Dimitrios",
""
],
[
"Tzimas",
"Giannis",
""
],
[
"Pitoura",
"Evaggelia",
""
]
] | TITLE: Rapid AkNN Query Processing for Fast Classification of Multidimensional
Data in the Cloud
ABSTRACT: A $k$-nearest neighbor ($k$NN) query determines the $k$ nearest points, using
distance metrics, from a specific location. An all $k$-nearest neighbor
(A$k$NN) query constitutes a variation of a $k$NN query and retrieves the $k$
nearest points for each point inside a database. Their main usage is in
spatial databases, and they form the backbone of many location-based
applications and beyond (e.g., $k$NN joins in databases, classification in
data mining). So, it is crucial to develop methods that answer them
efficiently. In this work, we propose a novel method for classifying
multidimensional data using an A$k$NN algorithm in the MapReduce framework. Our
approach exploits space decomposition techniques for processing the
classification procedure in a parallel and distributed manner. To our
knowledge, we are the first to study the classification of multidimensional
objects under this perspective. Through an extensive experimental evaluation we
prove that our solution is efficient and scalable in processing the given
queries. We investigate many different perspectives that can affect the total
computational cost, such as different dataset distributions, number of
dimensions, growth of $k$ value and granularity of space decomposition and
prove that our system is efficient, robust and scalable.
| no_new_dataset | 0.941761 |
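The AkNN record above scales all-k-nearest-neighbour queries with MapReduce and space decomposition. For reference, the brute-force NumPy baseline below computes the same AkNN answer quadratically on a small synthetic point set; the point count, dimensionality, and k are arbitrary, and none of the paper's distributed machinery is reproduced.

```python
import numpy as np

def all_knn(points, k):
    """Brute-force AkNN: for every point, return the indices of its k nearest neighbours."""
    diffs = points[:, None, :] - points[None, :, :]      # pairwise difference vectors
    dists = np.sqrt((diffs ** 2).sum(axis=-1))           # Euclidean distance matrix
    np.fill_diagonal(dists, np.inf)                      # a point is not its own neighbour
    return np.argsort(dists, axis=1)[:, :k]

rng = np.random.default_rng(0)
pts = rng.uniform(size=(1000, 3))                        # 1000 points in 3-D
neighbours = all_knn(pts, k=5)
print(neighbours.shape)                                  # (1000, 5)
```

The O(n^2) memory and time of this baseline is exactly what motivates the space-decomposition and MapReduce approach described in the record.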
1402.6428 | Vishakha Metre VAM | Jayshree Ghorpade-Aher and Vishakha A. Metre | Clustering Multidimensional Data with PSO based Algorithm | 6 pages,6 figures,3 tables, conference paper | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data clustering is a recognized data analysis method in data mining whereas
K-Means is the well known partitional clustering method, possessing pleasant
features. We observed that K-Means and other partitional clustering techniques
suffer from several limitations such as initial cluster centre selection,
preknowledge of number of clusters, dead unit problem, multiple cluster
membership and premature convergence to local optima. Several optimization
methods are proposed in the literature in order to solve clustering
limitations, but Swarm Intelligence (SI) has achieved its remarkable position
in the concerned area. Particle Swarm Optimization (PSO) is the most popular SI
technique and one of the favorite areas of researchers. In this paper, we
present a brief overview of PSO and applicability of its variants to solve
clustering challenges. Also, we propose an advanced PSO algorithm named as
Subtractive Clustering based Boundary Restricted Adaptive Particle Swarm
Optimization (SC-BR-APSO) algorithm for clustering multidimensional data. For
comparison purpose, we have studied and analyzed various algorithms such as
K-Means, PSO, K-Means-PSO, Hybrid Subtractive + PSO, BRAPSO, and proposed
algorithm on nine different datasets. The motivation behind proposing
SC-BR-APSO algorithm is to deal with multidimensional data clustering, with
minimum error rate and maximum convergence rate.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2014 06:08:27 GMT"
}
] | 2014-02-27T00:00:00 | [
[
"Ghorpade-Aher",
"Jayshree",
""
],
[
"Metre",
"Vishakha A.",
""
]
] | TITLE: Clustering Multidimensional Data with PSO based Algorithm
ABSTRACT: Data clustering is a recognized data analysis method in data mining whereas
K-Means is the well known partitional clustering method, possessing pleasant
features. We observed that K-Means and other partitional clustering techniques
suffer from several limitations such as initial cluster centre selection,
preknowledge of number of clusters, dead unit problem, multiple cluster
membership and premature convergence to local optima. Several optimization
methods are proposed in the literature in order to solve clustering
limitations, but Swarm Intelligence (SI) has achieved its remarkable position
in the concerned area. Particle Swarm Optimization (PSO) is the most popular SI
technique and one of the favorite areas of researchers. In this paper, we
present a brief overview of PSO and applicability of its variants to solve
clustering challenges. Also, we propose an advanced PSO algorithm named as
Subtractive Clustering based Boundary Restricted Adaptive Particle Swarm
Optimization (SC-BR-APSO) algorithm for clustering multidimensional data. For
comparison purpose, we have studied and analyzed various algorithms such as
K-Means, PSO, K-Means-PSO, Hybrid Subtractive + PSO, BRAPSO, and proposed
algorithm on nine different datasets. The motivation behind proposing
SC-BR-APSO algorithm is to deal with multidimensional data clustering, with
minimum error rate and maximum convergence rate.
| no_new_dataset | 0.951188 |
1402.6636 | Iain Rice Mr | Iain Rice, Roger Benton, Les Hart and David Lowe | Analysis of Multibeam SONAR Data using Dissimilarity Representations | Presented at IMA Mathematics in Defence 2013 | null | null | null | cs.CE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of low-dimensional visualisation of very
high dimensional information sources for the purpose of situation awareness in
the maritime environment. In response to the requirement for human decision
support aids to reduce information overload (and specifically, data amenable to
inter-point relative similarity measures) appropriate to the below-water
maritime domain, we are investigating a preliminary prototype topographic
visualisation model. The focus of the current paper is on the mathematical
problem of exploiting a relative dissimilarity representation of signals in a
visual informatics mapping model, driven by real-world sonar systems. An
independent source model is used to analyse the sonar beams from which a simple
probabilistic input model to represent uncertainty is mapped to a latent
visualisation space where data uncertainty can be accommodated. The use of
euclidean and non-euclidean measures are used and the motivation for future use
of non-euclidean measures is made. Concepts are illustrated using a simulated
64 beam weak SNR dataset with realistic sonar targets.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2014 10:21:34 GMT"
}
] | 2014-02-27T00:00:00 | [
[
"Rice",
"Iain",
""
],
[
"Benton",
"Roger",
""
],
[
"Hart",
"Les",
""
],
[
"Lowe",
"David",
""
]
] | TITLE: Analysis of Multibeam SONAR Data using Dissimilarity Representations
ABSTRACT: This paper considers the problem of low-dimensional visualisation of very
high dimensional information sources for the purpose of situation awareness in
the maritime environment. In response to the requirement for human decision
support aids to reduce information overload (and specifically, data amenable to
inter-point relative similarity measures) appropriate to the below-water
maritime domain, we are investigating a preliminary prototype topographic
visualisation model. The focus of the current paper is on the mathematical
problem of exploiting a relative dissimilarity representation of signals in a
visual informatics mapping model, driven by real-world sonar systems. An
independent source model is used to analyse the sonar beams from which a simple
probabilistic input model to represent uncertainty is mapped to a latent
visualisation space where data uncertainty can be accommodated. Both Euclidean
and non-Euclidean measures are used, and the motivation for the future use of
non-Euclidean measures is given. Concepts are illustrated using a simulated
64 beam weak SNR dataset with realistic sonar targets.
| no_new_dataset | 0.939304 |
1402.6650 | Ahmed Sahlol | Ahmed Sahlol and Cheng Suen | A Novel Method for the Recognition of Isolated Handwritten Arabic
Characters | Indicate 13 pages, 5 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | There are many difficulties facing a handwritten Arabic recognition system
such as unlimited variation in human handwriting, similarities of distinct
character shapes, interconnections of neighbouring characters and their
position in the word. The typical Optical Character Recognition (OCR) systems
are based mainly on three stages: preprocessing, feature extraction and
recognition. This paper proposes new methods for handwritten Arabic character
recognition which are based on novel preprocessing operations, including
different kinds of noise removal, as well as different kinds of features, such
as structural, statistical and morphological features from the main body of
the character and also from the secondary components. An evaluation of the
accuracy of the selected features is made. The system was trained and tested
by a back-propagation neural network with the CENPRMI dataset. The proposed
algorithm obtained promising results, as it is able to recognize 88% of our
test set accurately. In comparison with other related works, we find that our
result is the highest among other published works.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2014 19:09:09 GMT"
}
] | 2014-02-27T00:00:00 | [
[
"Sahlol",
"Ahmed",
""
],
[
"Suen",
"Cheng",
""
]
] | TITLE: A Novel Method for the Recognition of Isolated Handwritten Arabic
Characters
ABSTRACT: There are many difficulties facing a handwritten Arabic recognition system
such as unlimited variation in human handwriting, similarities of distinct
character shapes, interconnections of neighbouring characters and their
position in the word. The typical Optical Character Recognition (OCR) systems
are based mainly on three stages, preprocessing, features extraction and
recognition. This paper proposes new methods for handwritten Arabic character
recognition which are based on novel preprocessing operations, including
different kinds of noise removal, as well as different kinds of features such as
structural, statistical and morphological features extracted from the main body
of the character and also from the secondary components. The accuracy of the
selected features is evaluated. The system was trained and tested with a
back-propagation neural network on the CENPRMI dataset. The proposed algorithm
obtained promising results, as it is able to recognize 88% of our test set
accurately. Compared with other related works, we find that our result is the
highest among published works.
| no_new_dataset | 0.945096 |
1402.6690 | Jalal Mahmud | Jalal Mahmud, Jilin Chen, Jeffrey Nichols | Why Are You More Engaged? Predicting Social Engagement from Word Use | null | null | null | null | cs.SI cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a study to analyze how word use can predict social engagement
behaviors such as replies and retweets in Twitter. We compute psycholinguistic
category scores from word usage, and investigate how people with different
scores exhibited different reply and retweet behaviors on Twitter. We also
found psycholinguistic categories that show significant correlations with such
social engagement behaviors. In addition, we have built predictive models of
replies and retweets from such psycholinguistic category based features. Our
experiments using a real-world dataset collected from Twitter validate that
such predictions can be done with reasonable accuracy.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2014 20:58:00 GMT"
}
] | 2014-02-27T00:00:00 | [
[
"Mahmud",
"Jalal",
""
],
[
"Chen",
"Jilin",
""
],
[
"Nichols",
"Jeffrey",
""
]
] | TITLE: Why Are You More Engaged? Predicting Social Engagement from Word Use
ABSTRACT: We present a study to analyze how word use can predict social engagement
behaviors such as replies and retweets in Twitter. We compute psycholinguistic
category scores from word usage, and investigate how people with different
scores exhibited different reply and retweet behaviors on Twitter. We also
found psycholinguistic categories that show significant correlations with such
social engagement behaviors. In addition, we have built predictive models of
replies and retweets from such psycholinguistic category based features. Our
experiments using a real-world dataset collected from Twitter validate that
such predictions can be done with reasonable accuracy.
| no_new_dataset | 0.908456 |
1306.1704 | Dmytro Karamshuk | Dmytro Karamshuk, Anastasios Noulas, Salvatore Scellato, Vincenzo
Nicosia, Cecilia Mascolo | Geo-Spotting: Mining Online Location-based Services for Optimal Retail
Store Placement | Proceedings of the 19th ACM SIGKDD international conference on
Knowledge discovery and data mining, Chicago, 2013, Pages 793-801 | null | 10.1145/2487575.2487616 | null | cs.SI cs.CE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of identifying the optimal location for a new retail store has
been the focus of past research, especially in the field of land economy, due
to its importance in the success of a business. Traditional approaches to the
problem have factored in demographics, revenue and aggregated human flow
statistics from nearby or remote areas. However, the acquisition of relevant
data is usually expensive. With the growth of location-based social networks,
fine grained data describing user mobility and popularity of places has
recently become attainable.
In this paper we study the predictive power of various machine learning
features on the popularity of retail stores in the city through the use of a
dataset collected from Foursquare in New York. The features we mine are based
on two general signals: geographic, where features are formulated according to
the types and density of nearby places, and user mobility, which includes
transitions between venues or the incoming flow of mobile users from distant
areas. Our evaluation suggests that the best performing features are common
across the three different commercial chains considered in the analysis,
although variations may exist too, as explained by heterogeneities in the way
retail facilities attract users. We also show that performance improves
significantly when combining multiple features in supervised learning
algorithms, suggesting that the retail success of a business may depend on
multiple factors.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2013 12:42:06 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Feb 2014 10:48:20 GMT"
}
] | 2014-02-26T00:00:00 | [
[
"Karamshuk",
"Dmytro",
""
],
[
"Noulas",
"Anastasios",
""
],
[
"Scellato",
"Salvatore",
""
],
[
"Nicosia",
"Vincenzo",
""
],
[
"Mascolo",
"Cecilia",
""
]
] | TITLE: Geo-Spotting: Mining Online Location-based Services for Optimal Retail
Store Placement
ABSTRACT: The problem of identifying the optimal location for a new retail store has
been the focus of past research, especially in the field of land economy, due
to its importance in the success of a business. Traditional approaches to the
problem have factored in demographics, revenue and aggregated human flow
statistics from nearby or remote areas. However, the acquisition of relevant
data is usually expensive. With the growth of location-based social networks,
fine grained data describing user mobility and popularity of places has
recently become attainable.
In this paper we study the predictive power of various machine learning
features on the popularity of retail stores in the city through the use of a
dataset collected from Foursquare in New York. The features we mine are based
on two general signals: geographic, where features are formulated according to
the types and density of nearby places, and user mobility, which includes
transitions between venues or the incoming flow of mobile users from distant
areas. Our evaluation suggests that the best performing features are common
across the three different commercial chains considered in the analysis,
although variations may exist too, as explained by heterogeneities in the way
retail facilities attract users. We also show that performance improves
significantly when combining multiple features in supervised learning
algorithms, suggesting that the retail success of a business may depend on
multiple factors.
| no_new_dataset | 0.945601 |
1402.5953 | Richard McClatchey | Andrew Branson, Jetendr Shamdasani, Richard McClatchey | A Description Driven Approach for Flexible Metadata Tracking | 10 pages and 3 figures. arXiv admin note: text overlap with
arXiv:1402.5753, arXiv:1402.5764 | 7th ESA International Conference on Ensuring Long-Term
Preservation and Adding Value to Scientific and Technical Data (PV 2013)
4--6th November 2013. Frascati, Italy | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolving user requirements presents a considerable software engineering
challenge, all the more so in an environment where data will be stored for a
very long time, and must remain usable as the system specification evolves
around it. Capturing the description of the system addresses this issue since a
description-driven approach enables new versions of data structures and
processes to be created alongside the old, thereby providing a history of
changes to the underlying data models and enabling the capture of provenance
data. This description-driven approach is advocated in this paper in which a
system called CRISTAL is presented. CRISTAL is based on description-driven
principles; it can use previous versions of stored descriptions to define
various versions of data which can be stored in various forms. To demonstrate
the efficacy of this approach the history of the project at CERN is presented
where CRISTAL was used to track data and process definitions and their
associated provenance data in the construction of the CMS ECAL detector, how it
was applied to handle analysis tracking and data index provenance in the
neuGRID and N4U projects, and how it will be matured further in the CRISTAL-ISE
project. We believe that the CRISTAL approach could be invaluable in handling
the evolution, indexing and tracking of large datasets, and are keen to apply
it further in this direction.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2014 10:09:30 GMT"
}
] | 2014-02-26T00:00:00 | [
[
"Branson",
"Andrew",
""
],
[
"Shamdasani",
"Jetendr",
""
],
[
"McClatchey",
"Richard",
""
]
] | TITLE: A Description Driven Approach for Flexible Metadata Tracking
ABSTRACT: Evolving user requirements presents a considerable software engineering
challenge, all the more so in an environment where data will be stored for a
very long time, and must remain usable as the system specification evolves
around it. Capturing the description of the system addresses this issue since a
description-driven approach enables new versions of data structures and
processes to be created alongside the old, thereby providing a history of
changes to the underlying data models and enabling the capture of provenance
data. This description-driven approach is advocated in this paper in which a
system called CRISTAL is presented. CRISTAL is based on description-driven
principles; it can use previous versions of stored descriptions to define
various versions of data which can be stored in various forms. To demonstrate
the efficacy of this approach the history of the project at CERN is presented
where CRISTAL was used to track data and process definitions and their
associated provenance data in the construction of the CMS ECAL detector, how it
was applied to handle analysis tracking and data index provenance in the
neuGRID and N4U projects, and how it will be matured further in the CRISTAL-ISE
project. We believe that the CRISTAL approach could be invaluable in handling
the evolution, indexing and tracking of large datasets, and are keen to apply
it further in this direction.
| no_new_dataset | 0.944434 |
1402.6077 | Zhi-Hua Zhou | Wang-Zhou Dai and Zhi-Hua Zhou | Inductive Logic Boosting | 19 pages, 2 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen a surge of interest in Probabilistic Logic Programming
(PLP) and Statistical Relational Learning (SRL) models that combine logic with
probabilities. Structure learning of these systems is an intersection area of
Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP
cannot deal with probabilities, and SL cannot model relational hypotheses. The
biggest challenge of integrating these two machine learning frameworks is how
to estimate the probability of a logic clause only from the observation of
grounded logic atoms. Many current methods model a joint probability by
representing a clause as a graphical model and literals as vertices in it. This
model is still too complicated and can only be approximated by pseudo-likelihood.
We propose the Inductive Logic Boosting framework, which transforms the
relational dataset into a feature-based dataset, induces logic rules by boosting
Problog Rule Trees, and relaxes the independence constraint of pseudo-likelihood.
Experimental evaluation on benchmark datasets demonstrates that the AUC-PR and
AUC-ROC values of the ILP-learned rules are higher than those of current
state-of-the-art SRL methods.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2014 07:53:49 GMT"
}
] | 2014-02-26T00:00:00 | [
[
"Dai",
"Wang-Zhou",
""
],
[
"Zhou",
"Zhi-Hua",
""
]
] | TITLE: Inductive Logic Boosting
ABSTRACT: Recent years have seen a surge of interest in Probabilistic Logic Programming
(PLP) and Statistical Relational Learning (SRL) models that combine logic with
probabilities. Structure learning of these systems is an intersection area of
Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP
cannot deal with probabilities, and SL cannot model relational hypotheses. The
biggest challenge of integrating these two machine learning frameworks is how
to estimate the probability of a logic clause only from the observation of
grounded logic atoms. Many current methods model a joint probability by
representing a clause as a graphical model and literals as vertices in it. This
model is still too complicated and can only be approximated by pseudo-likelihood.
We propose the Inductive Logic Boosting framework, which transforms the
relational dataset into a feature-based dataset, induces logic rules by boosting
Problog Rule Trees, and relaxes the independence constraint of pseudo-likelihood.
Experimental evaluation on benchmark datasets demonstrates that the AUC-PR and
AUC-ROC values of the ILP-learned rules are higher than those of current
state-of-the-art SRL methods.
| no_new_dataset | 0.943556 |
1402.6238 | Jobin Wilson | Jobin Wilson, Santanu Chaudhury, Brejesh Lall, Prateek Kapadia | Improving Collaborative Filtering based Recommenders using Topic
Modelling | null | null | null | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard Collaborative Filtering (CF) algorithms make use of interactions
between users and items in the form of implicit or explicit ratings alone for
generating recommendations. Similarity among users or items is calculated
purely based on rating overlap in this case, without considering explicit
properties of users or items involved, limiting their applicability in domains
with very sparse rating spaces. In many domains such as movies, news or
electronic commerce recommenders, considerable contextual data in text form
describing item properties is available along with the rating data, which could
be utilized to improve recommendation quality. In this paper, we propose a novel
approach to improve standard CF based recommenders by utilizing latent
Dirichlet allocation (LDA) to learn latent properties of items, expressed in
terms of topic proportions, derived from their textual description. We infer a
user's topic preferences or persona in the same latent space, based on her
historical ratings. While computing similarity between users, we make use of a
combined similarity measure involving rating overlap as well as similarity in
the latent topic space. This approach alleviates the sparsity problem as it allows
calculation of similarity between users even if they have not rated any items
in common. Our experiments on multiple public datasets indicate that the
proposed hybrid approach significantly outperforms standard user-based and
item-based CF recommenders in terms of classification accuracy metrics such as
precision, recall and f-measure.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2014 16:52:05 GMT"
}
] | 2014-02-26T00:00:00 | [
[
"Wilson",
"Jobin",
""
],
[
"Chaudhury",
"Santanu",
""
],
[
"Lall",
"Brejesh",
""
],
[
"Kapadia",
"Prateek",
""
]
] | TITLE: Improving Collaborative Filtering based Recommenders using Topic
Modelling
ABSTRACT: Standard Collaborative Filtering (CF) algorithms make use of interactions
between users and items in the form of implicit or explicit ratings alone for
generating recommendations. Similarity among users or items is calculated
purely based on rating overlap in this case, without considering explicit
properties of users or items involved, limiting their applicability in domains
with very sparse rating spaces. In many domains such as movies, news or
electronic commerce recommenders, considerable contextual data in text form
describing item properties is available along with the rating data, which could
be utilized to improve recommendation quality. In this paper, we propose a novel
approach to improve standard CF based recommenders by utilizing latent
Dirichlet allocation (LDA) to learn latent properties of items, expressed in
terms of topic proportions, derived from their textual description. We infer a
user's topic preferences or persona in the same latent space, based on her
historical ratings. While computing similarity between users, we make use of a
combined similarity measure involving rating overlap as well as similarity in
the latent topic space. This approach alleviates the sparsity problem as it allows
calculation of similarity between users even if they have not rated any items
in common. Our experiments on multiple public datasets indicate that the
proposed hybrid approach significantly outperforms standard user-based and
item-based CF recommenders in terms of classification accuracy metrics such as
precision, recall and f-measure.
| no_new_dataset | 0.952706 |
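The preceding abstract (arXiv:1402.6238) blends rating-overlap similarity with similarity of LDA-derived user personas. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the mixing weight alpha, the use of cosine similarity for both terms, and the rating-weighted persona construction are choices made here for concreteness.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def hybrid_user_similarity(ratings, item_texts, n_topics=10, alpha=0.5):
    """Blend rating-overlap similarity with similarity of LDA-based user personas.

    ratings    : (n_users, n_items) array, 0 where an item is unrated.
    item_texts : list of n_items textual item descriptions.
    alpha      : assumed weight balancing the two similarity terms.
    """
    # Latent topic proportions for each item, learned from its text description.
    counts = CountVectorizer(stop_words="english").fit_transform(item_texts)
    item_topics = LatentDirichletAllocation(
        n_components=n_topics, random_state=0).fit_transform(counts)

    # User persona: rating-weighted average of the topic vectors of rated items.
    weights = ratings / np.maximum(ratings.sum(axis=1, keepdims=True), 1e-12)
    user_topics = weights @ item_topics

    # Combine rating-overlap similarity with topic-space similarity.
    sim_ratings = cosine_similarity(ratings)
    sim_topics = cosine_similarity(user_topics)
    return alpha * sim_ratings + (1.0 - alpha) * sim_topics
```

Because the topic term does not require common rated items, two users with disjoint rating histories can still receive a nonzero similarity, which is the sparsity argument made in the abstract.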
1402.5634 | Gaurav Pandey | Gaurav Pandey and Ambedkar Dukkipati | To go deep or wide in learning? | 9 pages, 1 figure, Accepted for publication in Seventeenth
International Conference on Artificial Intelligence and Statistics | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To achieve acceptable performance for AI tasks, one can either use
sophisticated feature extraction methods as the first layer in a two-layered
supervised learning model, or learn the features directly using a deep
(multi-layered) model. While the first approach is very problem-specific, the
second approach has computational overheads in learning multiple layers and
fine-tuning of the model. In this paper, we propose an approach called wide
learning based on arc-cosine kernels, that learns a single layer of infinite
width. We propose exact and inexact learning strategies for wide learning and
show that wide learning with single layer outperforms single layer as well as
deep architectures of finite width for some benchmark datasets.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2014 16:51:51 GMT"
}
] | 2014-02-25T00:00:00 | [
[
"Pandey",
"Gaurav",
""
],
[
"Dukkipati",
"Ambedkar",
""
]
] | TITLE: To go deep or wide in learning?
ABSTRACT: To achieve acceptable performance for AI tasks, one can either use
sophisticated feature extraction methods as the first layer in a two-layered
supervised learning model, or learn the features directly using a deep
(multi-layered) model. While the first approach is very problem-specific, the
second approach has computational overheads in learning multiple layers and
fine-tuning of the model. In this paper, we propose an approach called wide
learning based on arc-cosine kernels, that learns a single layer of infinite
width. We propose exact and inexact learning strategies for wide learning and
show that wide learning with single layer outperforms single layer as well as
deep architectures of finite width for some benchmark datasets.
| no_new_dataset | 0.950595 |
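The wide-learning abstract above (arXiv:1402.5634) rests on arc-cosine kernels, which correspond to a single layer of infinite width. As a reference point only, the snippet below implements the standard degree-1 arc-cosine kernel of Cho and Saul; whether the paper uses this exact degree or another member of the family is not stated in the abstract, so treat the choice as an assumption.

```python
import numpy as np

def arc_cosine_kernel(X, Y):
    """Degree-1 arc-cosine kernel:
    K(x, y) = (1/pi) * |x| * |y| * (sin t + (pi - t) * cos t),
    where t is the angle between x and y."""
    norms_x = np.linalg.norm(X, axis=1)
    norms_y = np.linalg.norm(Y, axis=1)
    # Cosine of the angle between every pair of rows, clipped for safety.
    cos_t = (X @ Y.T) / np.outer(norms_x, norms_y).clip(min=1e-12)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    j1 = np.sin(theta) + (np.pi - theta) * np.cos(theta)
    return (1.0 / np.pi) * np.outer(norms_x, norms_y) * j1
```

The resulting Gram matrix can be fed to any precomputed-kernel method, for example scikit-learn's SVC(kernel="precomputed"), to emulate a single infinitely wide layer.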
1402.5749 | Richard McClatchey | R. McClatchey, A. Branson, A. Anjum, P. Bloodsworth, I. Habib, K.
Munir, J. Shamdasani, K. Soomro and the neuGRID Consortium | Providing Traceability for Neuroimaging Analyses | 17 pages, 9 figures, 2 tables | International Journal of Medical Informatics, 82 (2013) pp 882-894
Elsevier publishers | 10.1016/j.ijmedinf.2013.05.005 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasingly digital nature of biomedical data and as the complexity
of analyses in medical research increases, the need for accurate information
capture, traceability and accessibility has become crucial to medical
researchers in the pursuance of their research goals. Grid- or Cloud-based
technologies, often based on so-called Service Oriented Architectures (SOA),
are increasingly being seen as viable solutions for managing distributed data
and algorithms in the bio-medical domain. For neuroscientific analyses,
especially those centred on complex image analysis, traceability of processes
and datasets is essential but up to now this has not been captured in a manner
that facilitates collaborative study. Over the past decade, we have been
working with mammographers, paediatricians and neuroscientists in three
generations of projects to provide the data management and provenance services
now required for 21st century medical research. This paper outlines the findings
of a requirements study and a resulting system architecture for the production
of services to support neuroscientific studies of biomarkers for Alzheimers
Disease. The paper proposes a software infrastructure and services that provide
the foundation for such support. It introduces the use of the CRISTAL software
to provide provenance management as one of a number of services delivered on a
SOA, deployed to manage neuroimaging projects that have been studying
biomarkers for Alzheimers disease.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2014 08:44:49 GMT"
}
] | 2014-02-25T00:00:00 | [
[
"McClatchey",
"R.",
""
],
[
"Branson",
"A.",
""
],
[
"Anjum",
"A.",
""
],
[
"Bloodsworth",
"P.",
""
],
[
"Habib",
"I.",
""
],
[
"Munir",
"K.",
""
],
[
"Shamdasani",
"J.",
""
],
[
"Soomro",
"K.",
""
],
[
"Consortium",
"the neuGRID",
""
]
] | TITLE: Providing Traceability for Neuroimaging Analyses
ABSTRACT: With the increasingly digital nature of biomedical data and as the complexity
of analyses in medical research increases, the need for accurate information
capture, traceability and accessibility has become crucial to medical
researchers in the pursuance of their research goals. Grid- or Cloud-based
technologies, often based on so-called Service Oriented Architectures (SOA),
are increasingly being seen as viable solutions for managing distributed data
and algorithms in the bio-medical domain. For neuroscientific analyses,
especially those centred on complex image analysis, traceability of processes
and datasets is essential but up to now this has not been captured in a manner
that facilitates collaborative study. Over the past decade, we have been
working with mammographers, paediatricians and neuroscientists in three
generations of projects to provide the data management and provenance services
now required for 21st century medical research. This paper outlines the findings
of a requirements study and a resulting system architecture for the production
of services to support neuroscientific studies of biomarkers for Alzheimers
Disease. The paper proposes a software infrastructure and services that provide
the foundation for such support. It introduces the use of the CRISTAL software
to provide provenance management as one of a number of services delivered on a
SOA, deployed to manage neuroimaging projects that have been studying
biomarkers for Alzheimers disease.
| no_new_dataset | 0.947866 |
1402.5757 | Richard McClatchey | Kamran Munir, Saad Liaquat Kiani, Khawar Hasham, Richard McClatchey,
Andrew Branson, Jetendr Shamdasani and the N4U Consortium | An Integrated e-science Analysis Base for Computation Neuroscience
Experiments and Analysis | 8 pages & 4 figures | Procedia - Social and Behavioral Sciences. Vol 73 pp 85-92 (2013)
Elsevier Publishers | 10.1016/j.sbspro.2013.02.026. | null | cs.SE cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in data management and imaging technologies have
significantly affected diagnostic and extrapolative research in the
understanding of neurodegenerative diseases. However, the impact of these new
technologies is largely dependent on the speed and reliability with which the
medical data can be visualised, analysed and interpreted. The EUs neuGRID for
Users (N4U) is a follow-on project to neuGRID, which aims to provide an
integrated environment to carry out computational neuroscience experiments.
This paper reports on the design and development of the N4U Analysis Base and
related Information Services, which addresses existing research and practical
challenges by offering an integrated medical data analysis environment with the
necessary building blocks for neuroscientists to optimally exploit neuroscience
workflows, large image datasets and algorithms in order to conduct analyses.
The N4U Analysis Base enables such analyses by indexing and interlinking the
neuroimaging and clinical study datasets stored on the N4U Grid infrastructure,
algorithms and scientific workflow definitions along with their associated
provenance information.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2014 09:14:44 GMT"
}
] | 2014-02-25T00:00:00 | [
[
"Munir",
"Kamran",
""
],
[
"Kiani",
"Saad Liaquat",
""
],
[
"Hasham",
"Khawar",
""
],
[
"McClatchey",
"Richard",
""
],
[
"Branson",
"Andrew",
""
],
[
"Shamdasani",
"Jetendr",
""
],
[
"Consortium",
"the N4U",
""
]
] | TITLE: An Integrated e-science Analysis Base for Computation Neuroscience
Experiments and Analysis
ABSTRACT: Recent developments in data management and imaging technologies have
significantly affected diagnostic and extrapolative research in the
understanding of neurodegenerative diseases. However, the impact of these new
technologies is largely dependent on the speed and reliability with which the
medical data can be visualised, analysed and interpreted. The EUs neuGRID for
Users (N4U) is a follow-on project to neuGRID, which aims to provide an
integrated environment to carry out computational neuroscience experiments.
This paper reports on the design and development of the N4U Analysis Base and
related Information Services, which addresses existing research and practical
challenges by offering an integrated medical data analysis environment with the
necessary building blocks for neuroscientists to optimally exploit neuroscience
workflows, large image datasets and algorithms in order to conduct analyses.
The N4U Analysis Base enables such analyses by indexing and interlinking the
neuroimaging and clinical study datasets stored on the N4U Grid infrastructure,
algorithms and scientific workflow definitions along with their associated
provenance information.
| no_new_dataset | 0.948442 |
1402.5923 | Tatiana Tommasi | Tatiana Tommasi, Tinne Tuytelaars, Barbara Caputo | A Testbed for Cross-Dataset Analysis | null | null | null | December 2013, Technical Report: KUL/ESAT/PSI/1304 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since its beginning visual recognition research has tried to capture the huge
variability of the visual world in several image collections. The number of
available datasets is still progressively growing together with the amount of
samples per object category. However, this trend does not correspond directly
to an increase in the generalization capabilities of the developed
recognition systems. Each collection tends to have its specific characteristics
and to cover just some aspects of the visual world: these biases often narrow
the effect of the methods defined and tested separately over each image set.
Our work makes a first step towards the analysis of the dataset bias problem on
a large scale. We organize twelve existing databases in a unique corpus and we
present the visual community with a useful feature repository for future
research.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2014 19:25:17 GMT"
}
] | 2014-02-25T00:00:00 | [
[
"Tommasi",
"Tatiana",
""
],
[
"Tuytelaars",
"Tinne",
""
],
[
"Caputo",
"Barbara",
""
]
] | TITLE: A Testbed for Cross-Dataset Analysis
ABSTRACT: Since its beginning visual recognition research has tried to capture the huge
variability of the visual world in several image collections. The number of
available datasets is still progressively growing together with the amount of
samples per object category. However, this trend does not correspond directly
to an increase in the generalization capabilities of the developed
recognition systems. Each collection tends to have its specific characteristics
and to cover just some aspects of the visual world: these biases often narrow
the effect of the methods defined and tested separately over each image set.
Our work makes a first step towards the analysis of the dataset bias problem on
a large scale. We organize twelve existing databases in a unique corpus and we
present the visual community with a useful feature repository for future
research.
| no_new_dataset | 0.745954 |
1402.5255 | Christian von der Weth | Christian von der Weth, Manfred Hauswirth | Analysing Parallel and Passive Web Browsing Behavior and its Effects on
Website Metrics | 22 pages, 11 figures, 3 tables, 29 references. arXiv admin note: text
overlap with arXiv:1307.1542 | null | null | null | cs.HC cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Getting deeper insights into the online browsing behavior of Web users has
been a major research topic since the advent of the WWW. It provides useful
information to optimize website design, Web browser design, search engines
offerings, and online advertisement. We argue that new technologies and new
services continue to have significant effects on the way people browse the
Web. For example, listening to music clips on YouTube or to a radio station on
Last.fm does not require users to sit in front of their computer. Social media
and networking sites like Facebook or micro-blogging sites like Twitter have
attracted new types of users that previously were less inclined to go online.
These changes in how people browse the Web feature new characteristics which
are not well understood so far. In this paper, we provide novel and unique
insights by presenting first results of DOBBS, our long-term effort to create a
comprehensive and representative dataset capturing online user behavior. We
firstly investigate the concepts of parallel browsing and passive browsing,
showing that browsing the Web is no longer a dedicated task for many users.
Based on these results, we then analyze their impact on the calculation of a
user's dwell time -- i.e., the time the user spends on a webpage -- which has
become an important metric to quantify the popularity of websites.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2014 11:15:02 GMT"
}
] | 2014-02-24T00:00:00 | [
[
"von der Weth",
"Christian",
""
],
[
"Hauswirth",
"Manfred",
""
]
] | TITLE: Analysing Parallel and Passive Web Browsing Behavior and its Effects on
Website Metrics
ABSTRACT: Getting deeper insights into the online browsing behavior of Web users has
been a major research topic since the advent of the WWW. It provides useful
information to optimize website design, Web browser design, search engines
offerings, and online advertisement. We argue that new technologies and new
services continue to have significant effects on the way people browse the
Web. For example, listening to music clips on YouTube or to a radio station on
Last.fm does not require users to sit in front of their computer. Social media
and networking sites like Facebook or micro-blogging sites like Twitter have
attracted new types of users that previously were less inclined to go online.
These changes in how people browse the Web feature new characteristics which
are not well understood so far. In this paper, we provide novel and unique
insights by presenting first results of DOBBS, our long-term effort to create a
comprehensive and representative dataset capturing online user behavior. We
firstly investigate the concepts of parallel browsing and passive browsing,
showing that browsing the Web is no longer a dedicated task for many users.
Based on these results, we then analyze their impact on the calculation of a
user's dwell time -- i.e., the time the user spends on a webpage -- which has
become an important metric to quantify the popularity of websites.
| new_dataset | 0.958886 |
1402.5360 | Chanabasayya Vastrad M | Doreswamy, Chanabasayya M. Vastrad | Important Molecular Descriptors Selection Using Self Tuned Reweighted
Sampling Method for Prediction of Antituberculosis Activity | published 2013 | null | null | null | cs.LG stat.AP stat.ML | http://creativecommons.org/licenses/by/3.0/ | In this paper, a new descriptor selection method for selecting an optimal
combination of important descriptors of sulfonamide derivatives data, named
self tuned reweighted sampling (STRS), is developed. Important descriptors are
defined as the descriptors with large absolute coefficients in a multivariate
linear regression model such as partial least squares (PLS). In this study, the
absolute values of the regression coefficients of the PLS model are used as an
index for evaluating the importance of each descriptor. Then, based on the
importance level of each descriptor, STRS sequentially selects N subsets of
descriptors from N Monte Carlo (MC) sampling runs in an iterative and
competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is
first randomly selected to establish a regression model. Next, based on the
regression coefficients, a two-step procedure including rapidly decreasing
function (RDF) based enforced descriptor selection and self tuned sampling (STS)
based competitive descriptor selection is adopted to select the important
descriptors. After running the loops, a number of subsets of descriptors are
obtained and the root mean squared error of cross validation (RMSECV) of the PLS
models established with these subsets of descriptors is computed. The subset of
descriptors with the lowest RMSECV is considered the optimal descriptor subset.
The performance of the proposed algorithm is evaluated on the sulfonamide
derivative dataset. The results reveal a good characteristic of STRS: it can
usually locate an optimal combination of important descriptors which are
interpretable in terms of the biological activity of interest. Additionally, our
study shows that better prediction is obtained by STRS when compared to
full-descriptor-set PLS modeling and Monte Carlo uninformative variable
elimination (MC-UVE).
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2014 17:24:53 GMT"
}
] | 2014-02-24T00:00:00 | [
[
"Doreswamy",
"",
""
],
[
"Vastrad",
"Chanabasayya M.",
""
]
] | TITLE: Important Molecular Descriptors Selection Using Self Tuned Reweighted
Sampling Method for Prediction of Antituberculosis Activity
ABSTRACT: In this paper, a new descriptor selection method for selecting an optimal
combination of important descriptors of sulfonamide derivatives data, named
self tuned reweighted sampling (STRS), is developed. Important descriptors are
defined as the descriptors with large absolute coefficients in a multivariate
linear regression model such as partial least squares (PLS). In this study, the
absolute values of the regression coefficients of the PLS model are used as an
index for evaluating the importance of each descriptor. Then, based on the
importance level of each descriptor, STRS sequentially selects N subsets of
descriptors from N Monte Carlo (MC) sampling runs in an iterative and
competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is
first randomly selected to establish a regression model. Next, based on the
regression coefficients, a two-step procedure including rapidly decreasing
function (RDF) based enforced descriptor selection and self tuned sampling (STS)
based competitive descriptor selection is adopted to select the important
descriptors. After running the loops, a number of subsets of descriptors are
obtained and the root mean squared error of cross validation (RMSECV) of the PLS
models established with these subsets of descriptors is computed. The subset of
descriptors with the lowest RMSECV is considered the optimal descriptor subset.
The performance of the proposed algorithm is evaluated on the sulfonamide
derivative dataset. The results reveal a good characteristic of STRS: it can
usually locate an optimal combination of important descriptors which are
interpretable in terms of the biological activity of interest. Additionally, our
study shows that better prediction is obtained by STRS when compared to
full-descriptor-set PLS modeling and Monte Carlo uninformative variable
elimination (MC-UVE).
| no_new_dataset | 0.951006 |
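The preceding abstract (arXiv:1402.5360) describes a Monte Carlo selection loop driven by the absolute PLS regression coefficients and scored by RMSECV. The sketch below is a simplified stand-in under assumptions: the paper's rapidly decreasing function and self tuned sampling steps are replaced by a plain top-k cut on |coefficient|, and the sampling ratio, number of runs, PLS component count and k are values chosen here, not taken from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def mc_pls_descriptor_selection(X, y, n_runs=50, sample_ratio=0.8, keep=20,
                                n_components=3, random_state=0):
    """Monte Carlo selection of descriptor subsets ranked by |PLS coefficient|,
    scored by cross-validated RMSE as a simplified stand-in for RMSECV."""
    rng = np.random.default_rng(random_state)
    n_samples, _ = X.shape
    best_subset, best_rmse = None, np.inf
    for _ in range(n_runs):
        # Fit PLS on a random subsample and rank descriptors by |coefficient|.
        idx = rng.choice(n_samples, int(sample_ratio * n_samples), replace=False)
        pls = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
        subset = np.argsort(-np.abs(pls.coef_).ravel())[:keep]
        # Score the candidate subset with cross-validated RMSE on the full data.
        rmse = -cross_val_score(PLSRegression(n_components=n_components),
                                X[:, subset], y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        if rmse < best_rmse:
            best_subset, best_rmse = subset, rmse
    return best_subset, best_rmse
```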
1312.6199 | Joan Bruna | Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna,
Dumitru Erhan, Ian Goodfellow, Rob Fergus | Intriguing properties of neural networks | null | null | null | null | cs.CV cs.LG cs.NE | http://creativecommons.org/licenses/by/3.0/ | Deep neural networks are highly expressive models that have recently achieved
state of the art performance on speech and visual recognition tasks. While
their expressiveness is the reason they succeed, it also causes them to learn
uninterpretable solutions that could have counter-intuitive properties. In this
paper we report two such properties.
First, we find that there is no distinction between individual high level
units and random linear combinations of high level units, according to various
methods of unit analysis. It suggests that it is the space, rather than the
individual units, that contains the semantic information in the high layers
of neural networks.
Second, we find that deep neural networks learn input-output mappings that
are fairly discontinuous to a significant extent. We can cause the network to
misclassify an image by applying a certain imperceptible perturbation, which is
found by maximizing the network's prediction error. In addition, the specific
nature of these perturbations is not a random artifact of learning: the same
perturbation can cause a different network, that was trained on a different
subset of the dataset, to misclassify the same input.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2013 03:36:08 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2014 04:37:34 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Feb 2014 17:40:08 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Feb 2014 16:33:14 GMT"
}
] | 2014-02-20T00:00:00 | [
[
"Szegedy",
"Christian",
""
],
[
"Zaremba",
"Wojciech",
""
],
[
"Sutskever",
"Ilya",
""
],
[
"Bruna",
"Joan",
""
],
[
"Erhan",
"Dumitru",
""
],
[
"Goodfellow",
"Ian",
""
],
[
"Fergus",
"Rob",
""
]
] | TITLE: Intriguing properties of neural networks
ABSTRACT: Deep neural networks are highly expressive models that have recently achieved
state of the art performance on speech and visual recognition tasks. While
their expressiveness is the reason they succeed, it also causes them to learn
uninterpretable solutions that could have counter-intuitive properties. In this
paper we report two such properties.
First, we find that there is no distinction between individual high level
units and random linear combinations of high level units, according to various
methods of unit analysis. It suggests that it is the space, rather than the
individual units, that contains the semantic information in the high layers
of neural networks.
Second, we find that deep neural networks learn input-output mappings that
are fairly discontinuous to a significant extent. We can cause the network to
misclassify an image by applying a certain imperceptible perturbation, which is
found by maximizing the network's prediction error. In addition, the specific
nature of these perturbations is not a random artifact of learning: the same
perturbation can cause a different network, that was trained on a different
subset of the dataset, to misclassify the same input.
| no_new_dataset | 0.944893 |
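The preceding abstract (arXiv:1312.6199) reports perturbations found by maximizing a network's prediction error; the paper works with deep networks and a box-constrained optimiser. The sketch below is a much cruder illustration of the same principle on a linear softmax classifier, written in numpy so the input gradient has a closed form; the step size, iteration count and L-infinity budget are assumptions, not values from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adversarial_perturbation(W, b, x, y, step=0.01, n_steps=50, eps=0.1):
    """Ascend the cross-entropy loss of a linear softmax classifier w.r.t. the
    input x (true label y), keeping the perturbation in an L-infinity ball."""
    x_adv = x.copy()
    for _ in range(n_steps):
        p = softmax(W @ x_adv + b)
        p[y] -= 1.0                  # d(loss)/d(logits) = p - onehot(y)
        grad_x = W.T @ p             # d(loss)/d(input)
        x_adv = np.clip(x_adv + step * np.sign(grad_x), x - eps, x + eps)
    return x_adv - x                 # the (ideally imperceptible) perturbation
```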
1402.4542 | Chunguo Li | Chun-Guo Li, Xing Mei, Bao-Gang Hu | Unsupervised Ranking of Multi-Attribute Objects Based on Principal
Curves | This paper has 14 pages and 9 figures. The paper has submitted to
IEEE Transactions on Knowledge and Data Engineering (TKDE) | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised ranking faces one critical challenge in evaluation applications,
that is, no ground truth is available. While PageRank and its variants offer a
good solution in related subjects, they are applicable only for ranking from
link-structure data. In this work, we focus on unsupervised ranking from
multi-attribute data which is also common in evaluation tasks. To overcome the
challenge, we propose five essential meta-rules for the design and assessment
of unsupervised ranking approaches: scale and translation invariance, strict
monotonicity, linear/nonlinear capacities, smoothness, and explicitness of
parameter size. These meta-rules are regarded as high level knowledge for
unsupervised ranking tasks. Inspired by the works in [8] and [14], we propose a
ranking principal curve (RPC) model, which learns a one-dimensional manifold
function to perform unsupervised ranking tasks on multi-attribute observations.
Furthermore, the RPC is modeled to be a cubic B\'ezier curve with control
points restricted in the interior of a hypercube, thereby complying with all
the five meta-rules to infer a reasonable ranking list. With control points as
the model parameters, one is able to understand the learned manifold and to
interpret the ranking list semantically. Numerical experiments of the presented
RPC model are conducted on two open datasets of different ranking applications.
In comparison with the state-of-the-art approaches, the new model is able to
show more reasonable ranking lists.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2014 01:29:14 GMT"
}
] | 2014-02-20T00:00:00 | [
[
"Li",
"Chun-Guo",
""
],
[
"Mei",
"Xing",
""
],
[
"Hu",
"Bao-Gang",
""
]
] | TITLE: Unsupervised Ranking of Multi-Attribute Objects Based on Principal
Curves
ABSTRACT: Unsupervised ranking faces one critical challenge in evaluation applications,
that is, no ground truth is available. While PageRank and its variants offer a
good solution in related subjects, they are applicable only for ranking from
link-structure data. In this work, we focus on unsupervised ranking from
multi-attribute data which is also common in evaluation tasks. To overcome the
challenge, we propose five essential meta-rules for the design and assessment
of unsupervised ranking approaches: scale and translation invariance, strict
monotonicity, linear/nonlinear capacities, smoothness, and explicitness of
parameter size. These meta-rules are regarded as high level knowledge for
unsupervised ranking tasks. Inspired by the works in [8] and [14], we propose a
ranking principal curve (RPC) model, which learns a one-dimensional manifold
function to perform unsupervised ranking tasks on multi-attribute observations.
Furthermore, the RPC is modeled to be a cubic B\'ezier curve with control
points restricted in the interior of a hypercube, thereby complying with all
the five meta-rules to infer a reasonable ranking list. With control points as
the model parameters, one is able to understand the learned manifold and to
interpret the ranking list semantically. Numerical experiments of the presented
RPC model are conducted on two open datasets of different ranking applications.
In comparison with the state-of-the-art approaches, the new model is able to
show more reasonable ranking lists.
| no_new_dataset | 0.948058 |
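The preceding abstract (arXiv:1402.4542) models the ranking principal curve as a cubic Bézier curve with control points confined to the hypercube. The sketch below only evaluates such a curve and reads a ranking score off each observation by nearest-point projection onto a discretised curve; learning the control points, which is the substance of the paper, is not attempted, and the grid size is an assumed discretisation.

```python
import numpy as np

def bezier_curve(control_points, n_grid=200):
    """Evaluate a cubic Bezier curve C(t) on a grid of t values.
    control_points: (4, d) array with rows P0..P3 inside [0, 1]^d."""
    t = np.linspace(0.0, 1.0, n_grid)[:, None]
    basis = np.hstack([(1 - t) ** 3, 3 * (1 - t) ** 2 * t,
                       3 * (1 - t) * t ** 2, t ** 3])     # (n_grid, 4)
    return t.ravel(), basis @ control_points              # (n_grid, d)

def rank_by_projection(X, control_points):
    """Score each multi-attribute observation by the parameter t of its nearest
    point on the curve; sorting by this score yields the ranking list."""
    t_grid, curve = bezier_curve(control_points)
    d2 = ((X[:, None, :] - curve[None, :, :]) ** 2).sum(axis=2)
    return t_grid[d2.argmin(axis=1)]
```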
1402.4624 | Aleksandr Aravkin | Aleksandr Y. Aravkin and Anju Kambadur and Aurelie C. Lozano and Ronny
Luss | Sparse Quantile Huber Regression for Efficient and Robust Estimation | 9 pages | null | null | null | stat.ML cs.DS math.OC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider new formulations and methods for sparse quantile regression in
the high-dimensional setting. Quantile regression plays an important role in
many applications, including outlier-robust exploratory analysis in gene
selection. In addition, the sparsity consideration in quantile regression
enables the exploration of the entire conditional distribution of the response
variable given the predictors and therefore yields a more comprehensive view of
the important predictors. We propose a generalized OMP algorithm for variable
selection, taking the misfit loss to be either the traditional quantile loss or
a smooth version we call quantile Huber, and compare the resulting greedy
approaches with convex sparsity-regularized formulations. We apply a recently
proposed interior point methodology to efficiently solve all convex
formulations as well as convex subproblems in the generalized OMP setting,
provide theoretical guarantees of consistent estimation, and demonstrate the
performance of our approach using empirical studies of simulated and genomic
datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2014 11:18:32 GMT"
}
] | 2014-02-20T00:00:00 | [
[
"Aravkin",
"Aleksandr Y.",
""
],
[
"Kambadur",
"Anju",
""
],
[
"Lozano",
"Aurelie C.",
""
],
[
"Luss",
"Ronny",
""
]
] | TITLE: Sparse Quantile Huber Regression for Efficient and Robust Estimation
ABSTRACT: We consider new formulations and methods for sparse quantile regression in
the high-dimensional setting. Quantile regression plays an important role in
many applications, including outlier-robust exploratory analysis in gene
selection. In addition, the sparsity consideration in quantile regression
enables the exploration of the entire conditional distribution of the response
variable given the predictors and therefore yields a more comprehensive view of
the important predictors. We propose a generalized OMP algorithm for variable
selection, taking the misfit loss to be either the traditional quantile loss or
a smooth version we call quantile Huber, and compare the resulting greedy
approaches with convex sparsity-regularized formulations. We apply a recently
proposed interior point methodology to efficiently solve all convex
formulations as well as convex subproblems in the generalized OMP setting,
provide theoretical guarantees of consistent estimation, and demonstrate the
performance of our approach using empirical studies of simulated and genomic
datasets.
| no_new_dataset | 0.941708 |
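The preceding abstract (arXiv:1402.4624) refers to a smooth "quantile Huber" variant of the quantile (pinball) loss. The function below is one common C1 smoothing, quadratic inside a kappa-wide band around zero and linear with slopes tau and tau-1 outside it; the authors' exact parameterisation may differ, and kappa is an assumed smoothing parameter.

```python
import numpy as np

def quantile_huber(r, tau=0.5, kappa=0.1):
    """C1 smoothing of the pinball loss rho_tau(r) = max(tau*r, (tau-1)*r):
    quadratic r^2 / (2*kappa) near zero, linear branches outside."""
    r = np.asarray(r, dtype=float)
    loss = np.empty_like(r)
    upper = r >= kappa * tau                  # right linear branch (slope tau)
    lower = r <= -kappa * (1.0 - tau)         # left linear branch (slope tau-1)
    mid = ~(upper | lower)                    # quadratic middle branch
    loss[upper] = tau * r[upper] - 0.5 * kappa * tau ** 2
    loss[lower] = (tau - 1.0) * r[lower] - 0.5 * kappa * (1.0 - tau) ** 2
    loss[mid] = r[mid] ** 2 / (2.0 * kappa)
    return loss
```

As kappa tends to zero the function reduces to the ordinary pinball loss, and at tau = 0.5 it becomes a symmetric Huber-style penalty.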
1402.4653 | Sohan Seth | Sohan Seth, John Shawe-Taylor, Samuel Kaski | Retrieval of Experiments by Efficient Estimation of Marginal Likelihood | null | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the task of retrieving relevant experiments given a query
experiment. By experiment, we mean a collection of measurements from a set of
`covariates' and the associated `outcomes'. While similar experiments can be
retrieved by comparing available `annotations', this approach ignores the
valuable information available in the measurements themselves. To incorporate
this information in the retrieval task, we suggest employing a retrieval metric
that utilizes probabilistic models learned from the measurements. We argue that
such a metric is a sensible measure of similarity between two experiments since
it permits inclusion of experiment-specific prior knowledge. However, accurate
models are often not analytical, and one must resort to storing posterior
samples which demands considerable resources. Therefore, we study strategies to
select informative posterior samples to reduce the computational load while
maintaining the retrieval performance. We demonstrate the efficacy of our
approach on simulated data with simple linear regression as the models, and
real world datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2014 13:21:40 GMT"
}
] | 2014-02-20T00:00:00 | [
[
"Seth",
"Sohan",
""
],
[
"Shawe-Taylor",
"John",
""
],
[
"Kaski",
"Samuel",
""
]
] | TITLE: Retrieval of Experiments by Efficient Estimation of Marginal Likelihood
ABSTRACT: We study the task of retrieving relevant experiments given a query
experiment. By experiment, we mean a collection of measurements from a set of
`covariates' and the associated `outcomes'. While similar experiments can be
retrieved by comparing available `annotations', this approach ignores the
valuable information available in the measurements themselves. To incorporate
this information in the retrieval task, we suggest employing a retrieval metric
that utilizes probabilistic models learned from the measurements. We argue that
such a metric is a sensible measure of similarity between two experiments since
it permits inclusion of experiment-specific prior knowledge. However, accurate
models are often not analytical, and one must resort to storing posterior
samples which demands considerable resources. Therefore, we study strategies to
select informative posterior samples to reduce the computational load while
maintaining the retrieval performance. We demonstrate the efficacy of our
approach on simulated data with simple linear regression as the models, and
real world datasets.
| no_new_dataset | 0.945801 |
1309.1369 | Aleksandr Aravkin | Aleksandr Y. Aravkin, Anna Choromanska, Tony Jebara, and Dimitri
Kanevsky | Semistochastic Quadratic Bound Methods | 11 pages, 1 figure | null | null | null | stat.ML cs.LG math.NA stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partition functions arise in a variety of settings, including conditional
random fields, logistic regression, and latent gaussian models. In this paper,
we consider semistochastic quadratic bound (SQB) methods for maximum likelihood
inference based on partition function optimization. Batch methods based on the
quadratic bound were recently proposed for this class of problems, and
performed favorably in comparison to state-of-the-art techniques.
Semistochastic methods fall in between batch algorithms, which use all the
data, and stochastic gradient type methods, which use small random selections
at each iteration. We build semistochastic quadratic bound-based methods, and
prove both global convergence (to a stationary point) under very weak
assumptions, and linear convergence rate under stronger assumptions on the
objective. To make the proposed methods faster and more stable, we consider
inexact subproblem minimization and batch-size selection schemes. The efficacy
of SQB methods is demonstrated via comparison with several state-of-the-art
techniques on commonly used datasets.
| [
{
"version": "v1",
"created": "Thu, 5 Sep 2013 15:12:11 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Dec 2013 02:42:50 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Jan 2014 21:00:34 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Feb 2014 22:18:34 GMT"
}
] | 2014-02-19T00:00:00 | [
[
"Aravkin",
"Aleksandr Y.",
""
],
[
"Choromanska",
"Anna",
""
],
[
"Jebara",
"Tony",
""
],
[
"Kanevsky",
"Dimitri",
""
]
] | TITLE: Semistochastic Quadratic Bound Methods
ABSTRACT: Partition functions arise in a variety of settings, including conditional
random fields, logistic regression, and latent gaussian models. In this paper,
we consider semistochastic quadratic bound (SQB) methods for maximum likelihood
inference based on partition function optimization. Batch methods based on the
quadratic bound were recently proposed for this class of problems, and
performed favorably in comparison to state-of-the-art techniques.
Semistochastic methods fall in between batch algorithms, which use all the
data, and stochastic gradient type methods, which use small random selections
at each iteration. We build semistochastic quadratic bound-based methods, and
prove both global convergence (to a stationary point) under very weak
assumptions, and linear convergence rate under stronger assumptions on the
objective. To make the proposed methods faster and more stable, we consider
inexact subproblem minimization and batch-size selection schemes. The efficacy
of SQB methods is demonstrated via comparison with several state-of-the-art
techniques on commonly used datasets.
| no_new_dataset | 0.948585 |
1309.3797 | Louis M Shekhtman | Louis M. Shekhtman, James P. Bagrow, and Dirk Brockmann | Robustness of skeletons and salient features in networks | null | null | 10.1093/comnet/cnt019 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real world network datasets often contain a wealth of complex topological
information. In the face of these data, researchers often employ methods to
extract reduced networks containing the most important structures or pathways,
sometimes known as `skeletons' or `backbones'. Numerous such methods have been
developed. Yet data are often noisy or incomplete, with unknown numbers of
missing or spurious links. Relatively little effort has gone into understanding
how salient network extraction methods perform in the face of noisy or
incomplete networks. We study this problem by comparing how the salient
features extracted by two popular methods change when networks are perturbed,
either by deleting nodes or links, or by randomly rewiring links. Our results
indicate that simple, global statistics for skeletons can be accurately
inferred even for noisy and incomplete network data, but it is crucial to have
complete, reliable data to use the exact topologies of skeletons or backbones.
These results also help us understand how skeletons respond to damage to the
network itself, as in an attack scenario.
| [
{
"version": "v1",
"created": "Sun, 15 Sep 2013 20:48:41 GMT"
}
] | 2014-02-19T00:00:00 | [
[
"Shekhtman",
"Louis M.",
""
],
[
"Bagrow",
"James P.",
""
],
[
"Brockmann",
"Dirk",
""
]
] | TITLE: Robustness of skeletons and salient features in networks
ABSTRACT: Real world network datasets often contain a wealth of complex topological
information. In the face of these data, researchers often employ methods to
extract reduced networks containing the most important structures or pathways,
sometimes known as `skeletons' or `backbones'. Numerous such methods have been
developed. Yet data are often noisy or incomplete, with unknown numbers of
missing or spurious links. Relatively little effort has gone into understanding
how salient network extraction methods perform in the face of noisy or
incomplete networks. We study this problem by comparing how the salient
features extracted by two popular methods change when networks are perturbed,
either by deleting nodes or links, or by randomly rewiring links. Our results
indicate that simple, global statistics for skeletons can be accurately
inferred even for noisy and incomplete network data, but it is crucial to have
complete, reliable data to use the exact topologies of skeletons or backbones.
These results also help us understand how skeletons respond to damage to the
network itself, as in an attack scenario.
| no_new_dataset | 0.949809 |
1312.4695 | Wiktor Mlynarski | Wiktor Mlynarski | Sparse, complex-valued representations of natural sounds learned with
phase and amplitude continuity priors | 11 + 7 pages This version includes changes suggested by ICLR 2014
reviewers | null | null | null | cs.LG cs.SD q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex-valued sparse coding is a data representation which employs a
dictionary of two-dimensional subspaces, while imposing a sparse, factorial
prior on complex amplitudes. When trained on a dataset of natural image
patches, it learns phase invariant features which closely resemble receptive
fields of complex cells in the visual cortex. Features trained on natural
sounds however, rarely reveal phase invariance and capture other aspects of the
data. This observation is a starting point of the present work. As its first
contribution, it provides an analysis of natural sound statistics by means of
learning sparse, complex representations of short speech intervals. Secondly,
it proposes priors over the basis function set, which bias them towards
phase-invariant solutions. In this way, a dictionary of complex basis functions
can be learned from the data statistics, while preserving the phase invariance
property. Finally, representations trained on speech sounds with and without
priors are compared. Prior-based basis functions reveal performance comparable
to unconstrained sparse coding, while explicitly representing phase as a
temporal shift. Such representations can find applications in many perceptual
and machine learning tasks.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2013 09:12:55 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2013 10:48:17 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Feb 2014 10:20:25 GMT"
}
] | 2014-02-19T00:00:00 | [
[
"Mlynarski",
"Wiktor",
""
]
] | TITLE: Sparse, complex-valued representations of natural sounds learned with
phase and amplitude continuity priors
ABSTRACT: Complex-valued sparse coding is a data representation which employs a
dictionary of two-dimensional subspaces, while imposing a sparse, factorial
prior on complex amplitudes. When trained on a dataset of natural image
patches, it learns phase invariant features which closely resemble receptive
fields of complex cells in the visual cortex. Features trained on natural
sounds however, rarely reveal phase invariance and capture other aspects of the
data. This observation is a starting point of the present work. As its first
contribution, it provides an analysis of natural sound statistics by means of
learning sparse, complex representations of short speech intervals. Secondly,
it proposes priors over the basis function set, which bias them towards
phase-invariant solutions. In this way, a dictionary of complex basis functions
can be learned from the data statistics, while preserving the phase invariance
property. Finally, representations trained on speech sounds with and without
priors are compared. Prior-based basis functions reveal performance comparable
to unconstrained sparse coding, while explicitly representing phase as a
temporal shift. Such representations can find applications in many perceptual
and machine learning tasks.
| no_new_dataset | 0.950457 |
1312.5869 | Dimitrios Athanasakis Mr | Dimitrios Athanasakis, John Shawe-Taylor, Delmiro Fernandez-Reyes | Principled Non-Linear Feature Selection | arXiv admin note: substantial text overlap with arXiv:1311.5636 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent non-linear feature selection approaches employing greedy optimisation
of Centred Kernel Target Alignment(KTA) exhibit strong results in terms of
generalisation accuracy and sparsity. However, they are computationally
prohibitive for large datasets. We propose randSel, a randomised feature
selection algorithm, with attractive scaling properties. Our theoretical
analysis of randSel provides strong probabilistic guarantees for correct
identification of relevant features. RandSel's characteristics make it an ideal
candidate for identifying informative learned representations. We've conducted
experimentation to establish the performance of this approach, and present
encouraging results, including a 3rd position result in the recent ICML black
box learning challenge as well as competitive results for signal peptide
prediction, an important problem in bioinformatics.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 10:16:13 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2014 17:25:43 GMT"
}
] | 2014-02-19T00:00:00 | [
[
"Athanasakis",
"Dimitrios",
""
],
[
"Shawe-Taylor",
"John",
""
],
[
"Fernandez-Reyes",
"Delmiro",
""
]
] | TITLE: Principled Non-Linear Feature Selection
ABSTRACT: Recent non-linear feature selection approaches employing greedy optimisation
of Centred Kernel Target Alignment(KTA) exhibit strong results in terms of
generalisation accuracy and sparsity. However, they are computationally
prohibitive for large datasets. We propose randSel, a randomised feature
selection algorithm, with attractive scaling properties. Our theoretical
analysis of randSel provides strong probabilistic guarantees for correct
identification of relevant features. RandSel's characteristics make it an ideal
candidate for identifying informative learned representations. We've conducted
experimentation to establish the performance of this approach, and present
encouraging results, including a 3rd position result in the recent ICML black
box learning challenge as well as competitive results for signal peptide
prediction, an important problem in bioinformatics.
| no_new_dataset | 0.945399 |
1402.4293 | Alexander Davies | Alex Davies, Zoubin Ghahramani | The Random Forest Kernel and other kernels for big data from random
partitions | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Random Partition Kernels, a new class of kernels derived by
demonstrating a natural connection between random partitions of objects and
kernels between those objects. We show how the construction can be used to
create kernels from methods that would not normally be viewed as random
partitions, such as Random Forest. To demonstrate the potential of this method,
we propose two new kernels, the Random Forest Kernel and the Fast Cluster
Kernel, and show that these kernels consistently outperform standard kernels on
problems involving real-world datasets. Finally, we show how the form of these
kernels lend themselves to a natural approximation that is appropriate for
certain big data problems, allowing $O(N)$ inference in methods such as
Gaussian Processes, Support Vector Machines and Kernel PCA.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2014 11:13:45 GMT"
}
] | 2014-02-19T00:00:00 | [
[
"Davies",
"Alex",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: The Random Forest Kernel and other kernels for big data from random
partitions
ABSTRACT: We present Random Partition Kernels, a new class of kernels derived by
demonstrating a natural connection between random partitions of objects and
kernels between those objects. We show how the construction can be used to
create kernels from methods that would not normally be viewed as random
partitions, such as Random Forest. To demonstrate the potential of this method,
we propose two new kernels, the Random Forest Kernel and the Fast Cluster
Kernel, and show that these kernels consistently outperform standard kernels on
problems involving real-world datasets. Finally, we show how the form of these
kernels lends itself to a natural approximation that is appropriate for
certain big data problems, allowing $O(N)$ inference in methods such as
Gaussian Processes, Support Vector Machines and Kernel PCA.
| no_new_dataset | 0.950411 |
1402.4388 | Mohammed Javed | Mohammed Javed, P. Nagabhushan, B.B. Chaudhuri | Automatic Detection of Font Size Straight from Run Length Compressed
Text Documents | 8 Pages | (IJCSIT) International Journal of Computer Science and Information
Technologies, Vol. 5 (1) , 2014, 818-825 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic detection of font size finds many applications in the area of
intelligent OCRing and document image analysis, which has been traditionally
practiced over uncompressed documents, although in real life the documents
exist in compressed form for efficient storage and transmission. It would be
novel and intelligent if the task of font size detection could be carried out
directly from the compressed data of these documents without decompressing,
which would result in saving of considerable amount of processing time and
space. Therefore, in this paper we present a novel idea of learning and
detecting font size directly from run-length compressed text documents at line
level using simple line height features, which paves the way for intelligent
OCRing and document analysis directly from compressed documents. In the
proposed model, the given mixed-case text documents of different font size are
segmented into compressed text lines and the features extracted such as line
height and ascender height are used to capture the pattern of font size in the
form of a regression line, using which the automatic detection of font size is
done during the recognition stage. The method is experimented with a dataset of
50 compressed documents consisting of 780 text lines of single font size and
375 text lines of mixed font size resulting in an overall accuracy of 99.67%.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2014 16:30:59 GMT"
}
] | 2014-02-19T00:00:00 | [
[
"Javed",
"Mohammed",
""
],
[
"Nagabhushan",
"P.",
""
],
[
"Chaudhuri",
"B. B.",
""
]
] | TITLE: Automatic Detection of Font Size Straight from Run Length Compressed
Text Documents
ABSTRACT: Automatic detection of font size finds many applications in the area of
intelligent OCRing and document image analysis, which has been traditionally
practiced over uncompressed documents, although in real life the documents
exist in compressed form for efficient storage and transmission. It would be
novel and intelligent if the task of font size detection could be carried out
directly from the compressed data of these documents without decompressing,
which would result in saving of considerable amount of processing time and
space. Therefore, in this paper we present a novel idea of learning and
detecting font size directly from run-length compressed text documents at line
level using simple line height features, which paves the way for intelligent
OCRing and document analysis directly from compressed documents. In the
proposed model, the given mixed-case text documents of different font size are
segmented into compressed text lines and the features extracted such as line
height and ascender height are used to capture the pattern of font size in the
form of a regression line, using which the automatic detection of font size is
done during the recognition stage. The method is experimented with a dataset of
50 compressed documents consisting of 780 text lines of single font size and
375 text lines of mixed font size resulting in an overall accuracy of 99.67%.
| new_dataset | 0.956391 |
1303.5966 | Márton Karsai | Márton Karsai, Nicola Perra, Alessandro Vespignani | Time varying networks and the weakness of strong ties | 22 pages, 15 figures | Scientific Reports 4, 4001 (2014) | 10.1038/srep04001 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In most social and information systems the activity of agents generates
rapidly evolving time-varying networks. The temporal variation in networks'
connectivity patterns and the ongoing dynamic processes are usually coupled in
ways that still challenge our mathematical or computational modelling. Here we
analyse a mobile call dataset and find a simple statistical law that
characterizes the temporal evolution of users' egocentric networks. We encode
this observation in a reinforcement process defining a time-varying network
model that exhibits the emergence of strong and weak ties. We study the effect
of time-varying and heterogeneous interactions on the classic rumour spreading
model in both synthetic, and real-world networks. We observe that strong ties
severely inhibit information diffusion by confining the spreading process among
agents with recurrent communication patterns. This provides the
counterintuitive evidence that strong ties may have a negative role in the
spreading of information across networks.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2013 16:42:48 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2014 14:25:55 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Karsai",
"Márton",
""
],
[
"Perra",
"Nicola",
""
],
[
"Vespignani",
"Alessandro",
""
]
] | TITLE: Time varying networks and the weakness of strong ties
ABSTRACT: In most social and information systems the activity of agents generates
rapidly evolving time-varying networks. The temporal variation in networks'
connectivity patterns and the ongoing dynamic processes are usually coupled in
ways that still challenge our mathematical or computational modelling. Here we
analyse a mobile call dataset and find a simple statistical law that
characterizes the temporal evolution of users' egocentric networks. We encode
this observation in a reinforcement process defining a time-varying network
model that exhibits the emergence of strong and weak ties. We study the effect
of time-varying and heterogeneous interactions on the classic rumour spreading
model in both synthetic, and real-world networks. We observe that strong ties
severely inhibit information diffusion by confining the spreading process among
agents with recurrent communication patterns. This provides the
counterintuitive evidence that strong ties may have a negative role in the
spreading of information across networks.
| no_new_dataset | 0.945651 |
1312.4190 | Jakub Konecny | Jakub Konečný and Michal Hagara | One-Shot-Learning Gesture Recognition using HOG-HOF Features | 20 pages, 10 figures, 2 tables To appear in Journal of Machine
Learning Research subject to minor revision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this paper is to describe one-shot-learning gesture
recognition systems developed on the \textit{ChaLearn Gesture Dataset}. We use
RGB and depth images and combine appearance (Histograms of Oriented Gradients)
and motion descriptors (Histogram of Optical Flow) for parallel temporal
segmentation and recognition. The Quadratic-Chi distance family is used to
measure differences between histograms to capture cross-bin relationships. We
also propose a new algorithm for trimming videos --- to remove all the
unimportant frames from videos. We present two methods that use combination of
HOG-HOF descriptors together with variants of Dynamic Time Warping technique.
Both methods outperform other published methods and help narrow down the gap
between human performance and algorithms on this task. The code has been made
publicly available in the MLOSS repository.
| [
{
"version": "v1",
"created": "Sun, 15 Dec 2013 20:58:21 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Feb 2014 17:47:11 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Konečný",
"Jakub",
""
],
[
"Hagara",
"Michal",
""
]
] | TITLE: One-Shot-Learning Gesture Recognition using HOG-HOF Features
ABSTRACT: The purpose of this paper is to describe one-shot-learning gesture
recognition systems developed on the \textit{ChaLearn Gesture Dataset}. We use
RGB and depth images and combine appearance (Histograms of Oriented Gradients)
and motion descriptors (Histogram of Optical Flow) for parallel temporal
segmentation and recognition. The Quadratic-Chi distance family is used to
measure differences between histograms to capture cross-bin relationships. We
also propose a new algorithm for trimming videos --- to remove all the
unimportant frames from videos. We present two methods that use combination of
HOG-HOF descriptors together with variants of Dynamic Time Warping technique.
Both methods outperform other published methods and help narrow down the gap
between human performance and algorithms on this task. The code has been made
publicly available in the MLOSS repository.
| no_new_dataset | 0.950732 |
1312.5242 | Alexey Dosovitskiy | Alexey Dosovitskiy, Jost Tobias Springenberg and Thomas Brox | Unsupervised feature learning by augmenting single images | ICLR 2014 workshop track submission (7 pages, 4 figures, 1 table) | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When deep learning is applied to visual object recognition, data augmentation
is often used to generate additional training data without extra labeling cost.
It helps to reduce overfitting and increase the performance of the algorithm.
In this paper we investigate if it is possible to use data augmentation as the
main component of an unsupervised feature learning architecture. To that end we
sample a set of random image patches and declare each of them to be a separate
single-image surrogate class. We then extend these trivial one-element classes
by applying a variety of transformations to the initial 'seed' patches. Finally
we train a convolutional neural network to discriminate between these surrogate
classes. The feature representation learned by the network can then be used in
various vision tasks. We find that this simple feature learning algorithm is
surprisingly successful, achieving competitive classification results on
several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2013 17:44:17 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jan 2014 18:02:09 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Feb 2014 13:07:23 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Dosovitskiy",
"Alexey",
""
],
[
"Springenberg",
"Jost Tobias",
""
],
[
"Brox",
"Thomas",
""
]
] | TITLE: Unsupervised feature learning by augmenting single images
ABSTRACT: When deep learning is applied to visual object recognition, data augmentation
is often used to generate additional training data without extra labeling cost.
It helps to reduce overfitting and increase the performance of the algorithm.
In this paper we investigate if it is possible to use data augmentation as the
main component of an unsupervised feature learning architecture. To that end we
sample a set of random image patches and declare each of them to be a separate
single-image surrogate class. We then extend these trivial one-element classes
by applying a variety of transformations to the initial 'seed' patches. Finally
we train a convolutional neural network to discriminate between these surrogate
classes. The feature representation learned by the network can then be used in
various vision tasks. We find that this simple feature learning algorithm is
surprisingly successful, achieving competitive classification results on
several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
| no_new_dataset | 0.948585 |
1312.6095 | Bojan Pepikj | Bojan Pepik, Michael Stark, Peter Gehler, Bernt Schiele | Multi-View Priors for Learning Detectors from Sparse Viewpoint Data | 13 pages, 7 figures, 4 tables, International Conference on Learning
Representations 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the majority of today's object class models provide only 2D bounding
boxes, far richer output hypotheses are desirable including viewpoint,
fine-grained category, and 3D geometry estimate. However, models trained to
provide richer output require larger amounts of training data, preferably well
covering the relevant aspects such as viewpoint and fine-grained categories. In
this paper, we address this issue from the perspective of transfer learning,
and design an object class model that explicitly leverages correlations between
visual features. Specifically, our model represents prior distributions over
permissible multi-view detectors in a parametric way -- the priors are learned
once from training data of a source object class, and can later be used to
facilitate the learning of a detector for a target class. As we show in our
experiments, this transfer is not only beneficial for detectors based on
basic-level category representations, but also enables the robust learning of
detectors that represent classes at finer levels of granularity, where training
data is typically even scarcer and more unbalanced. As a result, we report
largely improved performance in simultaneous 2D object localization and
viewpoint estimation on a recent dataset of challenging street scenes.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 20:12:07 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2014 10:39:35 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Pepik",
"Bojan",
""
],
[
"Stark",
"Michael",
""
],
[
"Gehler",
"Peter",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Multi-View Priors for Learning Detectors from Sparse Viewpoint Data
ABSTRACT: While the majority of today's object class models provide only 2D bounding
boxes, far richer output hypotheses are desirable including viewpoint,
fine-grained category, and 3D geometry estimate. However, models trained to
provide richer output require larger amounts of training data, preferably well
covering the relevant aspects such as viewpoint and fine-grained categories. In
this paper, we address this issue from the perspective of transfer learning,
and design an object class model that explicitly leverages correlations between
visual features. Specifically, our model represents prior distributions over
permissible multi-view detectors in a parametric way -- the priors are learned
once from training data of a source object class, and can later be used to
facilitate the learning of a detector for a target class. As we show in our
experiments, this transfer is not only beneficial for detectors based on
basic-level category representations, but also enables the robust learning of
detectors that represent classes at finer levels of granularity, where training
data is typically even scarcer and more unbalanced. As a result, we report
largely improved performance in simultaneous 2D object localization and
viewpoint estimation on a recent dataset of challenging street scenes.
| no_new_dataset | 0.948442 |
1402.3689 | Radu Horaud P | Maxime Janvier, Xavier Alameda-Pineda, Laurent Girin and Radu Horaud | Sound Representation and Classification Benchmark for Domestic Robots | 8 pages, 2 figures | null | null | null | cs.SD cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of sound representation and classification and present
results of a comparative study in the context of a domestic robotic scenario. A
dataset of sounds was recorded in realistic conditions (background noise,
presence of several sound sources, reverberations, etc.) using the humanoid
robot NAO. An extended benchmark is carried out to test a variety of
representations combined with several classifiers. We provide results obtained
with the annotated dataset and we assess the methods quantitatively on the
basis of their classification scores, computation times and memory
requirements. The annotated dataset is publicly available at
https://team.inria.fr/perception/nard/.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2014 13:27:01 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Janvier",
"Maxime",
""
],
[
"Alameda-Pineda",
"Xavier",
""
],
[
"Girin",
"Laurent",
""
],
[
"Horaud",
"Radu",
""
]
] | TITLE: Sound Representation and Classification Benchmark for Domestic Robots
ABSTRACT: We address the problem of sound representation and classification and present
results of a comparative study in the context of a domestic robotic scenario. A
dataset of sounds was recorded in realistic conditions (background noise,
presence of several sound sources, reverberations, etc.) using the humanoid
robot NAO. An extended benchmark is carried out to test a variety of
representations combined with several classifiers. We provide results obtained
with the annotated dataset and we assess the methods quantitatively on the
basis of their classification scores, computation times and memory
requirements. The annotated dataset is publicly available at
https://team.inria.fr/perception/nard/.
| new_dataset | 0.957477 |
1402.3847 | Daniele de Rigo | Claudio Bosco, Daniele de Rigo, Olivier Dewitte and Luca Montanarella | Towards the reproducibility in soil erosion modeling: a new Pan-European
soil erosion map | 9 pages, from a poster presented at the Wageningen Conference on
Applied Soil Science "Soil Science in a Changing World", 18 - 22 September
2011, Wageningen, The Netherlands | null | 10.6084/m9.figshare.936872 | null | cs.SY cs.CE physics.geo-ph | http://creativecommons.org/licenses/by/3.0/ | Soil erosion by water is a widespread phenomenon throughout Europe and has
the potential, with its on-site and off-site effects, to affect water
quality, food security and floods. Despite the implementation of numerous and
different models for estimating soil erosion by water in Europe, there is still
a lack of harmonization of assessment methodologies.
Often, different approaches result in soil erosion rates significantly
different. Even when the same model is applied to the same region the results
may differ. This can be due to the way the model is implemented (i.e. with the
selection of different algorithms when available) and/or to the use of datasets
having different resolution or accuracy. Scientific computation is emerging as
one of the central topics of the scientific method; to overcome these
problems there is thus a need to develop reproducible computational
methods where code and data are available.
The present study illustrates this approach. Using only public available
datasets, we applied the Revised Universal Soil loss Equation (RUSLE) to locate
the most sensitive areas to soil erosion by water in Europe.
A significant effort was made for selecting the better simplified equations
to be used when a strict application of the RUSLE model is not possible. In
particular for the computation of the Rainfall Erosivity factor (R) the
reproducible research paradigm was applied. The calculation of the R factor was
implemented using public datasets and the GNU R language. An easily
reproducible validation procedure based on measured precipitation time series
was applied using MATLAB language. Designing the computational modelling
architecture with the aim to ease as much as possible the future reuse of the
model in analysing climate change scenarios is also a challenging goal of the
research.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2014 22:10:42 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Bosco",
"Claudio",
""
],
[
"de Rigo",
"Daniele",
""
],
[
"Dewitte",
"Olivier",
""
],
[
"Montanarella",
"Luca",
""
]
] | TITLE: Towards the reproducibility in soil erosion modeling: a new Pan-European
soil erosion map
ABSTRACT: Soil erosion by water is a widespread phenomenon throughout Europe and has
the potential, with its on-site and off-site effects, to affect water
quality, food security and floods. Despite the implementation of numerous and
different models for estimating soil erosion by water in Europe, there is still
a lack of harmonization of assessment methodologies.
Often, different approaches result in soil erosion rates significantly
different. Even when the same model is applied to the same region the results
may differ. This can be due to the way the model is implemented (i.e. with the
selection of different algorithms when available) and/or to the use of datasets
having different resolution or accuracy. Scientific computation is emerging as
one of the central topics of the scientific method; to overcome these
problems there is thus a need to develop reproducible computational
methods where code and data are available.
The present study illustrates this approach. Using only public available
datasets, we applied the Revised Universal Soil loss Equation (RUSLE) to locate
the most sensitive areas to soil erosion by water in Europe.
A significant effort was made for selecting the better simplified equations
to be used when a strict application of the RUSLE model is not possible. In
particular for the computation of the Rainfall Erosivity factor (R) the
reproducible research paradigm was applied. The calculation of the R factor was
implemented using public datasets and the GNU R language. An easily
reproducible validation procedure based on measured precipitation time series
was applied using MATLAB language. Designing the computational modelling
architecture with the aim to ease as much as possible the future reuse of the
model in analysing climate change scenarios is also a challenging goal of the
research.
| no_new_dataset | 0.954393 |