id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1501.05581 | Daniele Riboni | Daniele Riboni, Claudio Bettini, Gabriele Civitarese, Zaffar Haider
Janjua, Rim Helaoui | Extended Report: Fine-grained Recognition of Abnormal Behaviors for
Early Detection of Mild Cognitive Impairment | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to the World Health Organization, the rate of people aged 60 or
more is growing faster than any other age group in almost every country, and
this trend is not going to change in the near future. Since senior citizens are
at high risk of non communicable diseases requiring long-term care, this trend
will challenge the sustainability of the entire health system. Pervasive
computing can provide innovative methods and tools for early detecting the
onset of health issues. In this paper we propose a novel method to detect
abnormal behaviors of elderly people living at home. The method relies on
medical models, provided by cognitive neuroscience researchers, describing
abnormal activity routines that may indicate the onset of early symptoms of
mild cognitive impairment. A non-intrusive sensor-based infrastructure acquires
low-level data about the interaction of the individual with home appliances and
furniture, as well as data from environmental sensors. Based on those data, a
novel hybrid statistical-symbolical technique is used to detect the abnormal
behaviors of the patient, which are communicated to the medical center.
Differently from related works, our method can detect abnormal behaviors at a
fine-grained level, thus providing an important tool to support the medical
diagnosis. In order to evaluate our method we have developed a prototype of the
system and acquired a large dataset of abnormal behaviors carried out in an
instrumented smart home. Experimental results show that our technique is able
to detect most anomalies while generating a small number of false positives.
| [
{
"version": "v1",
"created": "Thu, 22 Jan 2015 17:34:16 GMT"
}
] | 2015-01-23T00:00:00 | [
[
"Riboni",
"Daniele",
""
],
[
"Bettini",
"Claudio",
""
],
[
"Civitarese",
"Gabriele",
""
],
[
"Janjua",
"Zaffar Haider",
""
],
[
"Helaoui",
"Rim",
""
]
] | TITLE: Extended Report: Fine-grained Recognition of Abnormal Behaviors for
Early Detection of Mild Cognitive Impairment
ABSTRACT: According to the World Health Organization, the rate of people aged 60 or
more is growing faster than any other age group in almost every country, and
this trend is not going to change in the near future. Since senior citizens are
at high risk of non communicable diseases requiring long-term care, this trend
will challenge the sustainability of the entire health system. Pervasive
computing can provide innovative methods and tools for early detecting the
onset of health issues. In this paper we propose a novel method to detect
abnormal behaviors of elderly people living at home. The method relies on
medical models, provided by cognitive neuroscience researchers, describing
abnormal activity routines that may indicate the onset of early symptoms of
mild cognitive impairment. A non-intrusive sensor-based infrastructure acquires
low-level data about the interaction of the individual with home appliances and
furniture, as well as data from environmental sensors. Based on those data, a
novel hybrid statistical-symbolical technique is used to detect the abnormal
behaviors of the patient, which are communicated to the medical center.
Differently from related works, our method can detect abnormal behaviors at a
fine-grained level, thus providing an important tool to support the medical
diagnosis. In order to evaluate our method we have developed a prototype of the
system and acquired a large dataset of abnormal behaviors carried out in an
instrumented smart home. Experimental results show that our technique is able
to detect most anomalies while generating a small number of false positives.
| new_dataset | 0.96796 |
1501.05624 | John Paisley | San Gultekin and John Paisley | A Collaborative Kalman Filter for Time-Evolving Dyadic Processes | Appeared at 2014 IEEE International Conference on Data Mining (ICDM) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the collaborative Kalman filter (CKF), a dynamic model for
collaborative filtering and related factorization models. Using the matrix
factorization approach to collaborative filtering, the CKF accounts for time
evolution by modeling each low-dimensional latent embedding as a
multidimensional Brownian motion. Each observation is a random variable whose
distribution is parameterized by the dot product of the relevant Brownian
motions at that moment in time. This is naturally interpreted as a Kalman
filter with multiple interacting state space vectors. We also present a method
for learning a dynamically evolving drift parameter for each location by
modeling it as a geometric Brownian motion. We handle posterior intractability
via a mean-field variational approximation, which also preserves tractability
for downstream calculations in a manner similar to the Kalman filter. We
evaluate the model on several large datasets, providing quantitative evaluation
on the 10 million Movielens and 100 million Netflix datasets and qualitative
evaluation on a set of 39 million stock returns divided across roughly 6,500
companies from the years 1962-2014.
| [
{
"version": "v1",
"created": "Thu, 22 Jan 2015 20:24:32 GMT"
}
] | 2015-01-23T00:00:00 | [
[
"Gultekin",
"San",
""
],
[
"Paisley",
"John",
""
]
] | TITLE: A Collaborative Kalman Filter for Time-Evolving Dyadic Processes
ABSTRACT: We present the collaborative Kalman filter (CKF), a dynamic model for
collaborative filtering and related factorization models. Using the matrix
factorization approach to collaborative filtering, the CKF accounts for time
evolution by modeling each low-dimensional latent embedding as a
multidimensional Brownian motion. Each observation is a random variable whose
distribution is parameterized by the dot product of the relevant Brownian
motions at that moment in time. This is naturally interpreted as a Kalman
filter with multiple interacting state space vectors. We also present a method
for learning a dynamically evolving drift parameter for each location by
modeling it as a geometric Brownian motion. We handle posterior intractability
via a mean-field variational approximation, which also preserves tractability
for downstream calculations in a manner similar to the Kalman filter. We
evaluate the model on several large datasets, providing quantitative evaluation
on the 10 million Movielens and 100 million Netflix datasets and qualitative
evaluation on a set of 39 million stock returns divided across roughly 6,500
companies from the years 1962-2014.
| no_new_dataset | 0.944893 |
1411.5406 | Elizabeth Silber | Elizabeth A. Silber, Peter G. Brown, Zbigniew Krzeminski | Optical Observations of Meteors Generating Infrasound - II: Weak Shock
Theory and Validation | 58 pages, 14 figures, 5 tables | null | 10.1002/2014JE004680 | null | physics.ao-ph astro-ph.EP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have recorded a dataset of 24 centimeter-sized meteoroids detected
simultaneously by video and infrasound to critically examine the ReVelle [1974]
weak shock meteor infrasound model. We find that the effect of gravity wave
perturbations to the wind field and updated absorption coefficients in the
linear regime on the initial value of the blast radius (R0), which is the
strongly non-linear zone of shock propagation near the body and corresponds to
energy deposition per path length, is relatively small. Using optical
photometry for ground-truth for energy deposition, we find that the ReVelle
model accurately predicts blast radii from infrasound periods ({\tau}), but
systematically under-predicts R0 using pressure amplitude. If the weak shock to
linear propagation distortion distance is adjusted as part of the modelling
process we are able to self-consistently fit a single blast radius value for
amplitude and period. In this case, the distortion distance is always much less
(usually just a few percent) than the value of 10 percent assumed in the
ReVelle model. Our study shows that fragmentation is an important process even
for centimeter sized meteoroids, implying that R0, while a good measure of
energy deposition by the meteoroid, is not a reliable means of obtaining the
meteoroid mass. We derived an empirical period-blast radius relation
appropriate to cm sized meteoroids. Our observations suggest that meteors
having blast radii as small as 1m are detectable infrasonically at the ground,
an order of magnitude smaller than previously considered.
| [
{
"version": "v1",
"created": "Wed, 19 Nov 2014 23:54:42 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jan 2015 04:44:35 GMT"
}
] | 2015-01-22T00:00:00 | [
[
"Silber",
"Elizabeth A.",
""
],
[
"Brown",
"Peter G.",
""
],
[
"Krzeminski",
"Zbigniew",
""
]
] | TITLE: Optical Observations of Meteors Generating Infrasound - II: Weak Shock
Theory and Validation
ABSTRACT: We have recorded a dataset of 24 centimeter-sized meteoroids detected
simultaneously by video and infrasound to critically examine the ReVelle [1974]
weak shock meteor infrasound model. We find that the effect of gravity wave
perturbations to the wind field and updated absorption coefficients in the
linear regime on the initial value of the blast radius (R0), which is the
strongly non-linear zone of shock propagation near the body and corresponds to
energy deposition per path length, is relatively small. Using optical
photometry for ground-truth for energy deposition, we find that the ReVelle
model accurately predicts blast radii from infrasound periods ({\tau}), but
systematically under-predicts R0 using pressure amplitude. If the weak shock to
linear propagation distortion distance is adjusted as part of the modelling
process we are able to self-consistently fit a single blast radius value for
amplitude and period. In this case, the distortion distance is always much less
(usually just a few percent) than the value of 10 percent assumed in the
ReVelle model. Our study shows that fragmentation is an important process even
for centimeter sized meteoroids, implying that R0, while a good measure of
energy deposition by the meteoroid, is not a reliable means of obtaining the
meteoroid mass. We derived an empirical period-blast radius relation
appropriate to cm sized meteoroids. Our observations suggest that meteors
having blast radii as small as 1m are detectable infrasonically at the ground,
an order of magnitude smaller than previously considered.
| no_new_dataset | 0.95096 |
1501.04981 | Manuel Moussallam | Manuel Moussallam and Antoine Liutkus and Laurent Daudet | Listening to features | Technical Report | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work explores nonparametric methods which aim at synthesizing audio from
low-dimensional acoustic features typically used in MIR frameworks. Several
issues prevent this task from being straightforwardly achieved. Such features are
designed for analysis and not for synthesis, thus favoring high-level
description over easily inverted acoustic representation. Whereas some previous
studies already considered the problem of synthesizing audio from features such
as Mel-Frequency Cepstral Coefficients, they mainly relied on the explicit
formula used to compute those features in order to inverse them. Here, we
instead adopt a simple blind approach, where arbitrary sets of features can be
used during synthesis and where reconstruction is exemplar-based. After testing
the approach on a speech synthesis from well known features problem, we apply
it to the more complex task of inverting songs from the Million Song Dataset.
What makes this task harder is twofold. First, that features are irregularly
spaced in the temporal domain according to an onset-based segmentation. Second
the exact method used to compute these features is unknown, although the
features for new audio can be computed using their API as a black-box. In this
paper, we detail these difficulties and present a framework to nonetheless
attempt such synthesis by concatenating audio samples from a training
dataset, whose features have been computed beforehand. Samples are selected at
the segment level, in the feature space with a simple nearest neighbor search.
Additional constraints can then be defined to enhance the synthesis
pertinence. Preliminary experiments are presented using RWC and GTZAN audio
datasets to synthesize tracks from the Million Song Dataset.
| [
{
"version": "v1",
"created": "Mon, 19 Jan 2015 19:41:35 GMT"
}
] | 2015-01-22T00:00:00 | [
[
"Moussallam",
"Manuel",
""
],
[
"Liutkus",
"Antoine",
""
],
[
"Daudet",
"Laurent",
""
]
] | TITLE: Listening to features
ABSTRACT: This work explores nonparametric methods which aim at synthesizing audio from
low-dimensional acoustic features typically used in MIR frameworks. Several
issues prevent this task from being straightforwardly achieved. Such features are
designed for analysis and not for synthesis, thus favoring high-level
description over easily inverted acoustic representation. Whereas some previous
studies already considered the problem of synthesizing audio from features such
as Mel-Frequency Cepstral Coefficients, they mainly relied on the explicit
formula used to compute those features in order to inverse them. Here, we
instead adopt a simple blind approach, where arbitrary sets of features can be
used during synthesis and where reconstruction is exemplar-based. After testing
the approach on a speech synthesis from well known features problem, we apply
it to the more complex task of inverting songs from the Million Song Dataset.
What makes this task harder is twofold. First, that features are irregularly
spaced in the temporal domain according to an onset-based segmentation. Second
the exact method used to compute these features is unknown, although the
features for new audio can be computed using their API as a black-box. In this
paper, we detail these difficulties and present a framework to nonetheless
attempt such synthesis by concatenating audio samples from a training
dataset, whose features have been computed beforehand. Samples are selected at
the segment level, in the feature space with a simple nearest neighbor search.
Additional constraints can then be defined to enhance the synthesis
pertinence. Preliminary experiments are presented using RWC and GTZAN audio
datasets to synthesize tracks from the Million Song Dataset.
| no_new_dataset | 0.945701 |
1501.05132 | Catarina Moreira | Catarina Moreira and Bruno Martins and P\'avel Calado | Learning to Rank Academic Experts in the DBLP Dataset | Expert Systems, 2013. arXiv admin note: text overlap with
arXiv:1302.0413 | null | 10.1111/exsy.12062 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Expert finding is an information retrieval task that is concerned with the
search for the most knowledgeable people with respect to a specific topic, and
the search is based on documents that describe people's activities. The task
involves taking a user query as input and returning a list of people who are
sorted by their level of expertise with respect to the user query. Despite
recent interest in the area, the current state-of-the-art techniques lack in
principled approaches for optimally combining different sources of evidence.
This article proposes two frameworks for combining multiple estimators of
expertise. These estimators are derived from textual contents, from
graph-structure of the citation patterns for the community of experts, and from
profile information about the experts. More specifically, this article explores
the use of supervised learning to rank methods, as well as rank aggregation
approaches, for combining all of the estimators of expertise. Several supervised
learning algorithms, which are representative of the pointwise, pairwise and
listwise approaches, were tested, and various state-of-the-art data fusion
techniques were also explored for the rank aggregation framework. Experiments
that were performed on a dataset of academic publications from the Computer
Science domain attest the adequacy of the proposed approaches.
| [
{
"version": "v1",
"created": "Wed, 21 Jan 2015 11:25:33 GMT"
}
] | 2015-01-22T00:00:00 | [
[
"Moreira",
"Catarina",
""
],
[
"Martins",
"Bruno",
""
],
[
"Calado",
"Pável",
""
]
] | TITLE: Learning to Rank Academic Experts in the DBLP Dataset
ABSTRACT: Expert finding is an information retrieval task that is concerned with the
search for the most knowledgeable people with respect to a specific topic, and
the search is based on documents that describe people's activities. The task
involves taking a user query as input and returning a list of people who are
sorted by their level of expertise with respect to the user query. Despite
recent interest in the area, the current state-of-the-art techniques lack in
principled approaches for optimally combining different sources of evidence.
This article proposes two frameworks for combining multiple estimators of
expertise. These estimators are derived from textual contents, from
graph-structure of the citation patterns for the community of experts, and from
profile information about the experts. More specifically, this article explores
the use of supervised learning to rank methods, as well as rank aggregation
approaches, for combining all of the estimators of expertise. Several supervised
learning algorithms, which are representative of the pointwise, pairwise and
listwise approaches, were tested, and various state-of-the-art data fusion
techniques were also explored for the rank aggregation framework. Experiments
that were performed on a dataset of academic publications from the Computer
Science domain attest the adequacy of the proposed approaches.
| no_new_dataset | 0.945751 |
1501.05279 | Wojciech Czarnecki | Wojciech Marian Czarnecki, Jacek Tabor | Extreme Entropy Machines: Robust information theoretic classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the existing classification methods are aimed at minimization of
empirical risk (through some simple point-based error measured with loss
function) with added regularization. We propose to approach this problem in a
more information theoretic way by investigating applicability of entropy
measures as a classification model objective function. We focus on quadratic
Renyi's entropy and connected Cauchy-Schwarz Divergence which leads to the
construction of Extreme Entropy Machines (EEM).
The main contribution of this paper is proposing a model based on the
information theoretic concepts which on the one hand shows new, entropic
perspective on known linear classifiers and on the other leads to a
construction of a very robust method competitive with the state of the art
non-information theoretic ones (including Support Vector Machines and Extreme
Learning Machines).
Evaluation on numerous problems spanning from small, simple ones from UCI
repository to the large (hundreds of thousands of samples) extremely
unbalanced (up to 100:1 classes' ratios) datasets shows wide applicability of
the EEM in real life problems and that it scales well.
| [
{
"version": "v1",
"created": "Wed, 21 Jan 2015 19:54:26 GMT"
}
] | 2015-01-22T00:00:00 | [
[
"Czarnecki",
"Wojciech Marian",
""
],
[
"Tabor",
"Jacek",
""
]
] | TITLE: Extreme Entropy Machines: Robust information theoretic classification
ABSTRACT: Most of the existing classification methods are aimed at minimization of
empirical risk (through some simple point-based error measured with loss
function) with added regularization. We propose to approach this problem in a
more information theoretic way by investigating applicability of entropy
measures as a classification model objective function. We focus on quadratic
Renyi's entropy and connected Cauchy-Schwarz Divergence which leads to the
construction of Extreme Entropy Machines (EEM).
The main contribution of this paper is proposing a model based on the
information theoretic concepts which on the one hand shows new, entropic
perspective on known linear classifiers and on the other leads to a
construction of a very robust method competitive with the state of the art
non-information theoretic ones (including Support Vector Machines and Extreme
Learning Machines).
Evaluation on numerous problems spanning from small, simple ones from UCI
repository to the large (hundreds of thousands of samples) extremely
unbalanced (up to 100:1 classes' ratios) datasets shows wide applicability of
the EEM in real life problems and that it scales well.
| no_new_dataset | 0.949106 |
1403.6888 | Nenad Marku\v{s} | Nenad Marku\v{s} and Miroslav Frljak and Igor S. Pand\v{z}i\'c and
J\"orgen Ahlberg and Robert Forchheimer | Fast Localization of Facial Landmark Points | null | Proceedings of the Croatian Compter Vision Workshop, 2014 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Localization of salient facial landmark points, such as eye corners or the
tip of the nose, is still considered a challenging computer vision problem
despite recent efforts. This is especially evident in unconstrained
environments, i.e., in the presence of background clutter and large head pose
variations. Most methods that achieve state-of-the-art accuracy are slow, and,
thus, have limited applications. We describe a method that can accurately
estimate the positions of relevant facial landmarks in real-time even on
hardware with limited processing power, such as mobile devices. This is
achieved with a sequence of estimators based on ensembles of regression trees.
The trees use simple pixel intensity comparisons in their internal nodes and
this makes them able to process image regions very fast. We test the developed
system on several publicly available datasets and analyse its processing speed
on various devices. Experimental results show that our method has practical
value.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2014 23:12:08 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jan 2015 12:19:05 GMT"
}
] | 2015-01-21T00:00:00 | [
[
"Markuš",
"Nenad",
""
],
[
"Frljak",
"Miroslav",
""
],
[
"Pandžić",
"Igor S.",
""
],
[
"Ahlberg",
"Jörgen",
""
],
[
"Forchheimer",
"Robert",
""
]
] | TITLE: Fast Localization of Facial Landmark Points
ABSTRACT: Localization of salient facial landmark points, such as eye corners or the
tip of the nose, is still considered a challenging computer vision problem
despite recent efforts. This is especially evident in unconstrained
environments, i.e., in the presence of background clutter and large head pose
variations. Most methods that achieve state-of-the-art accuracy are slow, and,
thus, have limited applications. We describe a method that can accurately
estimate the positions of relevant facial landmarks in real-time even on
hardware with limited processing power, such as mobile devices. This is
achieved with a sequence of estimators based on ensembles of regression trees.
The trees use simple pixel intensity comparisons in their internal nodes and
this makes them able to process image regions very fast. We test the developed
system on several publicly available datasets and analyse its processing speed
on various devices. Experimental results show that our method has practical
value.
| no_new_dataset | 0.947575 |
1501.04675 | Zhi Liu | Zhi Liu, Yan Huang | Community Detection from Location-Tagged Networks | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/publicdomain/ | Many real world systems or web services can be represented as a network such
as social networks and transportation networks. In the past decade, many
algorithms have been developed to detect the communities in a network using
connections between nodes. However in many real world networks, the locations
of nodes have great influence on the community structure. For example, in a
social network, more connections are established between geographically
proximate users. The impact of locations on community has not been fully
investigated by the research literature. In this paper, we propose a community
detection method which takes locations of nodes into consideration. The goal is
to detect communities with both geographic proximity and network closeness. We
analyze the distribution of the distances between connected and unconnected
nodes to measure the influence of location on the network structure on two real
location-tagged social networks. We propose a method to determine if a
location-based community detection method is suitable for a given network. We
propose a new community detection algorithm that pushes the location
information into the community detection. We test our proposed method on both
synthetic data and real world network datasets. The results show that the
communities detected by our method distribute in a smaller area compared with
the traditional methods and have the similar or higher tightness on network
connections.
| [
{
"version": "v1",
"created": "Mon, 19 Jan 2015 23:37:40 GMT"
}
] | 2015-01-21T00:00:00 | [
[
"Liu",
"Zhi",
""
],
[
"Huang",
"Yan",
""
]
] | TITLE: Community Detection from Location-Tagged Networks
ABSTRACT: Many real world systems or web services can be represented as a network such
as social networks and transportation networks. In the past decade, many
algorithms have been developed to detect the communities in a network using
connections between nodes. However in many real world networks, the locations
of nodes have great influence on the community structure. For example, in a
social network, more connections are established between geographically
proximate users. The impact of locations on community has not been fully
investigated by the research literature. In this paper, we propose a community
detection method which takes locations of nodes into consideration. The goal is
to detect communities with both geographic proximity and network closeness. We
analyze the distribution of the distances between connected and unconnected
nodes to measure the influence of location on the network structure on two real
location-tagged social networks. We propose a method to determine if a
location-based community detection method is suitable for a given network. We
propose a new community detection algorithm that pushes the location
information into the community detection. We test our proposed method on both
synthetic data and real world network datasets. The results show that the
communities detected by our method distribute in a smaller area compared with
the traditional methods and have the similar or higher tightness on network
connections.
| no_new_dataset | 0.948346 |
1501.04686 | Pichao Wang | Pichao Wang, Wanqing Li, Zhimin Gao, Jing Zhang, Chang Tang and Philip
Ogunbona | Deep Convolutional Neural Networks for Action Recognition Using Depth
Map Sequences | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, deep learning approach has achieved promising results in various
fields of computer vision. In this paper, a new framework called Hierarchical
Depth Motion Maps (HDMM) + 3 Channel Deep Convolutional Neural Networks
(3ConvNets) is proposed for human action recognition using depth map sequences.
Firstly, we rotate the original depth data in 3D pointclouds to mimic the
rotation of cameras, so that our algorithms can handle view variant cases.
Secondly, in order to effectively extract the body shape and motion
information, we generate weighted depth motion maps (DMM) at several temporal
scales, referred to as Hierarchical Depth Motion Maps (HDMM). Then, three
channels of ConvNets are trained on the HDMMs from three projected orthogonal
planes separately. The proposed algorithms are evaluated on MSRAction3D,
MSRAction3DExt, UTKinect-Action and MSRDailyActivity3D datasets respectively.
We also combine the last three datasets into a larger one (called Combined
Dataset) and test the proposed method on it. The results show that our approach
can achieve state-of-the-art results on the individual datasets and without
dramatic performance degradation on the Combined Dataset.
| [
{
"version": "v1",
"created": "Tue, 20 Jan 2015 00:46:10 GMT"
}
] | 2015-01-21T00:00:00 | [
[
"Wang",
"Pichao",
""
],
[
"Li",
"Wanqing",
""
],
[
"Gao",
"Zhimin",
""
],
[
"Zhang",
"Jing",
""
],
[
"Tang",
"Chang",
""
],
[
"Ogunbona",
"Philip",
""
]
] | TITLE: Deep Convolutional Neural Networks for Action Recognition Using Depth
Map Sequences
ABSTRACT: Recently, deep learning approach has achieved promising results in various
fields of computer vision. In this paper, a new framework called Hierarchical
Depth Motion Maps (HDMM) + 3 Channel Deep Convolutional Neural Networks
(3ConvNets) is proposed for human action recognition using depth map sequences.
Firstly, we rotate the original depth data in 3D pointclouds to mimic the
rotation of cameras, so that our algorithms can handle view variant cases.
Secondly, in order to effectively extract the body shape and motion
information, we generate weighted depth motion maps (DMM) at several temporal
scales, referred to as Hierarchical Depth Motion Maps (HDMM). Then, three
channels of ConvNets are trained on the HDMMs from three projected orthogonal
planes separately. The proposed algorithms are evaluated on MSRAction3D,
MSRAction3DExt, UTKinect-Action and MSRDailyActivity3D datasets respectively.
We also combine the last three datasets into a larger one (called Combined
Dataset) and test the proposed method on it. The results show that our approach
can achieve state-of-the-art results on the individual datasets and without
dramatic performance degradation on the Combined Dataset.
| no_new_dataset | 0.952086 |
1501.04690 | Erjin Zhou | Erjin Zhou, Zhimin Cao, Qi Yin | Naive-Deep Face Recognition: Touching the Limit of LFW Benchmark or Not? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition performance improves rapidly with the recent deep learning
technique developing and underlying large training dataset accumulating. In
this paper, we report our observations on how big data impacts the recognition
performance. According to these observations, we build our Megvii Face
Recognition System, which achieves 99.50% accuracy on the LFW benchmark,
outperforming the previous state-of-the-art. Furthermore, we report the
performance in a real-world security certification scenario. There still exists
a clear gap between machine recognition and human performance. We summarize our
experiments and present three challenges lying ahead in recent face
recognition. And we indicate several possible solutions towards these
challenges. We hope our work will stimulate the community's discussion of the
difference between research benchmark and real-world applications.
| [
{
"version": "v1",
"created": "Tue, 20 Jan 2015 01:15:02 GMT"
}
] | 2015-01-21T00:00:00 | [
[
"Zhou",
"Erjin",
""
],
[
"Cao",
"Zhimin",
""
],
[
"Yin",
"Qi",
""
]
] | TITLE: Naive-Deep Face Recognition: Touching the Limit of LFW Benchmark or Not?
ABSTRACT: Face recognition performance improves rapidly with the recent deep learning
technique developing and underlying large training dataset accumulating. In
this paper, we report our observations on how big data impacts the recognition
performance. According to these observations, we build our Megvii Face
Recognition System, which achieves 99.50% accuracy on the LFW benchmark,
outperforming the previous state-of-the-art. Furthermore, we report the
performance in a real-world security certification scenario. There still exists
a clear gap between machine recognition and human performance. We summarize our
experiments and present three challenges lying ahead in recent face
recognition. And we indicate several possible solutions towards these
challenges. We hope our work will stimulate the community's discussion of the
difference between research benchmark and real-world applications.
| no_new_dataset | 0.945248 |
1501.04717 | Yuting Zhang | Yuting Zhang, Kui Jia, Yueming Wang, Gang Pan, Tsung-Han Chan, Yi Ma | Robust Face Recognition by Constrained Part-based Alignment | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing a reliable and practical face recognition system is a
long-standing goal in computer vision research. Existing literature suggests
that pixel-wise face alignment is the key to achieve high-accuracy face
recognition. By assuming a human face as piece-wise planar surfaces, where each
surface corresponds to a facial part, we develop in this paper a Constrained
Part-based Alignment (CPA) algorithm for face recognition across pose and/or
expression. Our proposed algorithm is based on a trainable CPA model, which
learns appearance evidence of individual parts and a tree-structured shape
configuration among different parts. Given a probe face, CPA simultaneously
aligns all its parts by fitting them to the appearance evidence with
consideration of the constraint from the tree-structured shape configuration.
This objective is formulated as a norm minimization problem regularized by
graph likelihoods. CPA can be easily integrated with many existing classifiers
to perform part-based face recognition. Extensive experiments on benchmark face
datasets show that CPA outperforms or is on par with existing methods for
robust face recognition across pose, expression, and/or illumination changes.
| [
{
"version": "v1",
"created": "Tue, 20 Jan 2015 06:05:01 GMT"
}
] | 2015-01-21T00:00:00 | [
[
"Zhang",
"Yuting",
""
],
[
"Jia",
"Kui",
""
],
[
"Wang",
"Yueming",
""
],
[
"Pan",
"Gang",
""
],
[
"Chan",
"Tsung-Han",
""
],
[
"Ma",
"Yi",
""
]
] | TITLE: Robust Face Recognition by Constrained Part-based Alignment
ABSTRACT: Developing a reliable and practical face recognition system is a
long-standing goal in computer vision research. Existing literature suggests
that pixel-wise face alignment is the key to achieve high-accuracy face
recognition. By assuming a human face as piece-wise planar surfaces, where each
surface corresponds to a facial part, we develop in this paper a Constrained
Part-based Alignment (CPA) algorithm for face recognition across pose and/or
expression. Our proposed algorithm is based on a trainable CPA model, which
learns appearance evidence of individual parts and a tree-structured shape
configuration among different parts. Given a probe face, CPA simultaneously
aligns all its parts by fitting them to the appearance evidence with
consideration of the constraint from the tree-structured shape configuration.
This objective is formulated as a norm minimization problem regularized by
graph likelihoods. CPA can be easily integrated with many existing classifiers
to perform part-based face recognition. Extensive experiments on benchmark face
datasets show that CPA outperforms or is on par with existing methods for
robust face recognition across pose, expression, and/or illumination changes.
| no_new_dataset | 0.947866 |
1306.3284 | Edith Cohen | Edith Cohen | All-Distances Sketches, Revisited: HIP Estimators for Massive Graphs
Analysis | 16 pages, 3 figures, extended version of a PODS 2014 paper | null | null | null | cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph datasets with billions of edges, such as social and Web graphs, are
prevalent, and scalable computation is critical. All-distances sketches (ADS)
[Cohen 1997], are a powerful tool for scalable approximation of statistics.
The sketch is a small size sample of the distance relation of a node which
emphasizes closer nodes. Sketches for all nodes are computed using a nearly
linear computation and estimators are applied to sketches of nodes to estimate
their properties.
We provide, for the first time, a unified exposition of ADS algorithms and
applications. We present the Historic Inverse Probability (HIP) estimators
which are applied to the ADS of a node to estimate a large natural class of
statistics. For the important special cases of neighborhood cardinalities (the
number of nodes within some query distance) and closeness centralities, HIP
estimators have at most half the variance of previous estimators and we show
that this is essentially optimal. Moreover, HIP obtains a polynomial
improvement for more general statistics and the estimators are simple,
flexible, unbiased, and elegant.
For approximate distinct counting on data streams, HIP outperforms the
original estimators for the HyperLogLog MinHash sketches (Flajolet et al.
2007), obtaining significantly improved estimation quality for this
state-of-the-art practical algorithm.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2013 03:33:05 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jul 2013 12:01:34 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Dec 2013 00:54:09 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Dec 2013 05:36:59 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Apr 2014 23:09:46 GMT"
},
{
"version": "v6",
"created": "Wed, 5 Nov 2014 06:11:04 GMT"
},
{
"version": "v7",
"created": "Sat, 17 Jan 2015 07:55:41 GMT"
}
] | 2015-01-20T00:00:00 | [
[
"Cohen",
"Edith",
""
]
] | TITLE: All-Distances Sketches, Revisited: HIP Estimators for Massive Graphs
Analysis
ABSTRACT: Graph datasets with billions of edges, such as social and Web graphs, are
prevalent, and scalable computation is critical. All-distances sketches (ADS)
[Cohen 1997], are a powerful tool for scalable approximation of statistics.
The sketch is a small size sample of the distance relation of a node which
emphasizes closer nodes. Sketches for all nodes are computed using a nearly
linear computation and estimators are applied to sketches of nodes to estimate
their properties.
We provide, for the first time, a unified exposition of ADS algorithms and
applications. We present the Historic Inverse Probability (HIP) estimators
which are applied to the ADS of a node to estimate a large natural class of
statistics. For the important special cases of neighborhood cardinalities (the
number of nodes within some query distance) and closeness centralities, HIP
estimators have at most half the variance of previous estimators and we show
that this is essentially optimal. Moreover, HIP obtains a polynomial
improvement for more general statistics and the estimators are simple,
flexible, unbiased, and elegant.
For approximate distinct counting on data streams, HIP outperforms the
original estimators for the HyperLogLog MinHash sketches (Flajolet et al.
2007), obtaining significantly improved estimation quality for this
state-of-the-art practical algorithm.
| no_new_dataset | 0.940188 |
1501.04277 | Canyi Lu | Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, and Zhouchen
Lin | Correntropy Induced L2 Graph for Robust Subspace Clustering | International Conference on Computer Vision (ICCV), 2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the robust subspace clustering problem, which aims to
cluster the given possibly noisy data points into their underlying subspaces. A
large pool of previous subspace clustering methods focus on the graph
construction by different regularization of the representation coefficient. We
instead focus on the robustness of the model to non-Gaussian noises. We propose
a new robust clustering method by using the correntropy induced metric, which
is robust for handling the non-Gaussian and impulsive noises. Also we further
extend the method for handling the data with outlier rows/features. The
multiplicative form of half-quadratic optimization is used to optimize the
non-convex correntropy objective function of the proposed models. Extensive
experiments on face datasets well demonstrate that the proposed methods are
more robust to corruptions and occlusions.
| [
{
"version": "v1",
"created": "Sun, 18 Jan 2015 10:06:55 GMT"
}
] | 2015-01-20T00:00:00 | [
[
"Lu",
"Canyi",
""
],
[
"Tang",
"Jinhui",
""
],
[
"Lin",
"Min",
""
],
[
"Lin",
"Liang",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Lin",
"Zhouchen",
""
]
] | TITLE: Correntropy Induced L2 Graph for Robust Subspace Clustering
ABSTRACT: In this paper, we study the robust subspace clustering problem, which aims to
cluster the given possibly noisy data points into their underlying subspaces. A
large pool of previous subspace clustering methods focus on the graph
construction by different regularization of the representation coefficient. We
instead focus on the robustness of the model to non-Gaussian noises. We propose
a new robust clustering method by using the correntropy induced metric, which
is robust for handling the non-Gaussian and impulsive noises. Also we further
extend the method for handling the data with outlier rows/features. The
multiplicative form of half-quadratic optimization is used to optimize the
non-convex correntropy objective function of the proposed models. Extensive
experiments on face datasets well demonstrate that the proposed methods are
more robust to corruptions and occlusions.
| no_new_dataset | 0.947186 |
1501.04281 | Pankaj Pansari | Pankaj Pansari, C. Rajagopalan, Ramasubramanian Sundararajan | Grouping Entities in a Fleet by Community Detection in Network of
Regression Models | 8 pages, 4 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper deals with grouping of entities in a fleet based on their
behavior. The behavior of each entity is characterized by its historical
dataset, which comprises a dependent variable, typically a performance measure,
and multiple independent variables, typically operating conditions. A
regression model built using this dataset is used as a proxy for the behavior
of an entity. The validation error of the model of one unit with respect to the
dataset of another unit is used as a measure of the difference in behavior
between two units. Grouping entities based on their behavior is posed as a
graph clustering problem with nodes representing regression models and edge
weights given by the validation errors. Specifically, we find communities in
this graph, having dense edge connections within and sparse connections
outside. A way to assess the goodness of grouping and finding the optimum
number of divisions is proposed. The algorithm and measures proposed are
illustrated with application to synthetic data.
| [
{
"version": "v1",
"created": "Sun, 18 Jan 2015 11:24:26 GMT"
}
] | 2015-01-20T00:00:00 | [
[
"Pansari",
"Pankaj",
""
],
[
"Rajagopalan",
"C.",
""
],
[
"Sundararajan",
"Ramasubramanian",
""
]
] | TITLE: Grouping Entities in a Fleet by Community Detection in Network of
Regression Models
ABSTRACT: This paper deals with grouping of entities in a fleet based on their
behavior. The behavior of each entity is characterized by its historical
dataset, which comprises a dependent variable, typically a performance measure,
and multiple independent variables, typically operating conditions. A
regression model built using this dataset is used as a proxy for the behavior
of an entity. The validation error of the model of one unit with respect to the
dataset of another unit is used as a measure of the difference in behavior
between two units. Grouping entities based on their behavior is posed as a
graph clustering problem with nodes representing regression models and edge
weights given by the validation errors. Specifically, we find communities in
this graph, having dense edge connections within and sparse connections
outside. A way to assess the goodness of grouping and finding the optimum
number of divisions is proposed. The algorithm and measures proposed are
illustrated with application to synthetic data.
| no_new_dataset | 0.945601 |
1310.6119 | Christos Patsonakis | Christos Patsonakis and Mema Roussopoulos | Asynchronous Rumour Spreading in Social and Signed Topologies | 10 pages, 4 figures, 5 tables | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an experimental analysis of the asynchronous push &
pull rumour spreading protocol. This protocol is, to date, the best-performing
rumour spreading protocol for simple, scalable, and robust information
dissemination in distributed systems. We analyse the effect that multiple
parameters have on the protocol's performance, such as using memory to avoid
contacting the same neighbor twice in a row, varying the stopping criteria used
by nodes to decide when to stop spreading the rumour, employing more
sophisticated neighbor selection policies instead of the standard uniform
random choice, and others. Prior work has focused on either providing
theoretical upper bounds regarding the number of rounds needed to spread the
rumour to all nodes, or, proposes improvements by adjusting isolated
parameters. To our knowledge, our work is the first to study how multiple
parameters affect system behaviour both in isolation and combination and under
a wide range of values. Our analysis is based on experimental simulations using
real-world social network datasets, thus complementing prior theoretical work
to shed light on how the protocol behaves in practical, real-world systems. We
also study the behaviour of the protocol on a special type of social graph,
called signed networks (e.g., Slashdot and Epinions), whose links indicate
stronger trust relationships. Finally, through our detailed analysis, we
demonstrate how a few simple additions to the protocol can improve the total
time required to inform 100% of the nodes by a maximum of 99.69% and an average
of 82.37%.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2013 06:12:54 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Feb 2014 07:37:04 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jan 2015 10:35:53 GMT"
}
] | 2015-01-16T00:00:00 | [
[
"Patsonakis",
"Christos",
""
],
[
"Roussopoulos",
"Mema",
""
]
] | TITLE: Asynchronous Rumour Spreading in Social and Signed Topologies
ABSTRACT: In this paper, we present an experimental analysis of the asynchronous push &
pull rumour spreading protocol. This protocol is, to date, the best-performing
rumour spreading protocol for simple, scalable, and robust information
dissemination in distributed systems. We analyse the effect that multiple
parameters have on the protocol's performance, such as using memory to avoid
contacting the same neighbor twice in a row, varying the stopping criteria used
by nodes to decide when to stop spreading the rumour, employing more
sophisticated neighbor selection policies instead of the standard uniform
random choice, and others. Prior work has focused on either providing
theoretical upper bounds regarding the number of rounds needed to spread the
rumour to all nodes, or, proposes improvements by adjusting isolated
parameters. To our knowledge, our work is the first to study how multiple
parameters affect system behaviour both in isolation and combination and under
a wide range of values. Our analysis is based on experimental simulations using
real-world social network datasets, thus complementing prior theoretical work
to shed light on how the protocol behaves in practical, real-world systems. We
also study the behaviour of the protocol on a special type of social graph,
called signed networks (e.g., Slashdot and Epinions), whose links indicate
stronger trust relationships. Finally, through our detailed analysis, we
demonstrate how a few simple additions to the protocol can improve the total
time required to inform 100% of the nodes by a maximum of 99.69% and an average
of 82.37%.
| no_new_dataset | 0.94868 |
1411.3921 | Brendon Brewer | Brendon J. Brewer | Inference for Trans-dimensional Bayesian Models with Diffusive Nested
Sampling | Only published here for the time being. 17 pages, 10 figures.
Software available at https://github.com/eggplantbren/RJObject | null | null | null | stat.CO astro-ph.IM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many inference problems involve inferring the number $N$ of components in
some region, along with their properties $\{\mathbf{x}_i\}_{i=1}^N$, from a
dataset $\mathcal{D}$. A common statistical example is finite mixture
modelling. In the Bayesian framework, these problems are typically solved using
one of the following two methods: i) by executing a Monte Carlo algorithm (such
as Nested Sampling) once for each possible value of $N$, and calculating the
marginal likelihood or evidence as a function of $N$; or ii) by doing a single
run that allows the model dimension $N$ to change (such as Markov Chain Monte
Carlo with birth/death moves), and obtaining the posterior for $N$ directly. In
this paper we present a general approach to this problem that uses
trans-dimensional MCMC embedded within a Nested Sampling algorithm, allowing us
to explore the posterior distribution and calculate the marginal likelihood
(summed over $N$) even if the problem contains a phase transition or other
difficult features such as multimodality. We present two example problems,
finding sinusoidal signals in noisy data, and finding and measuring galaxies in
a noisy astronomical image. Both of the examples demonstrate phase transitions
in the relationship between the likelihood and the cumulative prior mass,
highlighting the need for Nested Sampling.
| [
{
"version": "v1",
"created": "Fri, 14 Nov 2014 14:40:54 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Nov 2014 03:06:47 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jan 2015 20:31:53 GMT"
}
] | 2015-01-15T00:00:00 | [
[
"Brewer",
"Brendon J.",
""
]
] | TITLE: Inference for Trans-dimensional Bayesian Models with Diffusive Nested
Sampling
ABSTRACT: Many inference problems involve inferring the number $N$ of components in
some region, along with their properties $\{\mathbf{x}_i\}_{i=1}^N$, from a
dataset $\mathcal{D}$. A common statistical example is finite mixture
modelling. In the Bayesian framework, these problems are typically solved using
one of the following two methods: i) by executing a Monte Carlo algorithm (such
as Nested Sampling) once for each possible value of $N$, and calculating the
marginal likelihood or evidence as a function of $N$; or ii) by doing a single
run that allows the model dimension $N$ to change (such as Markov Chain Monte
Carlo with birth/death moves), and obtaining the posterior for $N$ directly. In
this paper we present a general approach to this problem that uses
trans-dimensional MCMC embedded within a Nested Sampling algorithm, allowing us
to explore the posterior distribution and calculate the marginal likelihood
(summed over $N$) even if the problem contains a phase transition or other
difficult features such as multimodality. We present two example problems,
finding sinusoidal signals in noisy data, and finding and measuring galaxies in
a noisy astronomical image. Both of the examples demonstrate phase transitions
in the relationship between the likelihood and the cumulative prior mass,
highlighting the need for Nested Sampling.
| no_new_dataset | 0.947672 |
1501.03210 | Piyush Bansal | Piyush Bansal, Romil Bansal and Vasudeva Varma | Towards Deep Semantic Analysis Of Hashtags | To Appear in 37th European Conference on Information Retrieval | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashtags are semantico-syntactic constructs used across various social
networking and microblogging platforms to enable users to start a topic
specific discussion or classify a post into a desired category. Segmenting and
linking the entities present within the hashtags could therefore help in better
understanding and extraction of information shared across the social media.
However, due to lack of space delimiters in the hashtags (e.g #nsavssnowden),
the segmentation of hashtags into constituent entities ("NSA" and "Edward
Snowden" in this case) is not a trivial task. Most of the current
state-of-the-art social media analytics systems like Sentiment Analysis and
Entity Linking tend to either ignore hashtags, or treat them as a single word.
In this paper, we present a context aware approach to segment and link entities
in the hashtags to a knowledge base (KB) entry, based on the context within the
tweet. Our approach segments and links the entities in hashtags such that the
coherence between hashtag semantics and the tweet is maximized. To the best of
our knowledge, no existing study addresses the issue of linking entities in
hashtags for extracting semantic information. We evaluate our method on two
different datasets, and demonstrate the effectiveness of our technique in
improving the overall entity linking in tweets via additional semantic
information provided by segmenting and linking entities in a hashtag.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2015 23:51:29 GMT"
}
] | 2015-01-15T00:00:00 | [
[
"Bansal",
"Piyush",
""
],
[
"Bansal",
"Romil",
""
],
[
"Varma",
"Vasudeva",
""
]
] | TITLE: Towards Deep Semantic Analysis Of Hashtags
ABSTRACT: Hashtags are semantico-syntactic constructs used across various social
networking and microblogging platforms to enable users to start a topic
specific discussion or classify a post into a desired category. Segmenting and
linking the entities present within the hashtags could therefore help in better
understanding and extraction of information shared across the social media.
However, due to lack of space delimiters in the hashtags (e.g #nsavssnowden),
the segmentation of hashtags into constituent entities ("NSA" and "Edward
Snowden" in this case) is not a trivial task. Most of the current
state-of-the-art social media analytics systems like Sentiment Analysis and
Entity Linking tend to either ignore hashtags, or treat them as a single word.
In this paper, we present a context aware approach to segment and link entities
in the hashtags to a knowledge base (KB) entry, based on the context within the
tweet. Our approach segments and links the entities in hashtags such that the
coherence between hashtag semantics and the tweet is maximized. To the best of
our knowledge, no existing study addresses the issue of linking entities in
hashtags for extracting semantic information. We evaluate our method on two
different datasets, and demonstrate the effectiveness of our technique in
improving the overall entity linking in tweets via additional semantic
information provided by segmenting and linking entities in a hashtag.
| no_new_dataset | 0.949669 |
1407.0439 | Haixia Liu | Haixia Liu, Raymond H. Chan, and Yuan Yao | Geometric Tight Frame based Stylometry for Art Authentication of van
Gogh Paintings | 14 pages, 13 figures | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is about authenticating genuine van Gogh paintings from forgeries.
The authentication process depends on two key steps: feature extraction and
outlier detection. In this paper, a geometric tight frame and some simple
statistics of the tight frame coefficients are used to extract features from
the paintings. Then a forward stage-wise rank boosting is used to select a
small set of features for more accurate classification so that van Gogh
paintings are highly concentrated towards some center point while forgeries are
spread out as outliers. Numerical results show that our method can achieve
86.08% classification accuracy under the leave-one-out cross-validation
procedure. Our method also identifies five features that are much more
predominant than other features. Using just these five features for
classification, our method can give 88.61% classification accuracy which is the
highest so far reported in literature. Evaluation of the five features is also
performed on two hundred datasets generated by bootstrap sampling with
replacement. The median and the mean are 88.61% and 87.77% respectively. Our
results show that a small set of statistics of the tight frame coefficients
along certain orientations can serve as discriminative features for van Gogh
paintings. It is more important to look at the tail distributions of such
directional coefficients than mean values and standard deviations. It reflects
a highly consistent style in van Gogh's brushstroke movements, where many
forgeries demonstrate a more diverse spread in these features.
| [
{
"version": "v1",
"created": "Wed, 2 Jul 2014 01:55:37 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Sep 2014 00:53:16 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jan 2015 07:20:12 GMT"
}
] | 2015-01-14T00:00:00 | [
[
"Liu",
"Haixia",
""
],
[
"Chan",
"Raymond H.",
""
],
[
"Yao",
"Yuan",
""
]
] | TITLE: Geometric Tight Frame based Stylometry for Art Authentication of van
Gogh Paintings
ABSTRACT: This paper is about authenticating genuine van Gogh paintings from forgeries.
The authentication process depends on two key steps: feature extraction and
outlier detection. In this paper, a geometric tight frame and some simple
statistics of the tight frame coefficients are used to extract features from
the paintings. Then a forward stage-wise rank boosting is used to select a
small set of features for more accurate classification so that van Gogh
paintings are highly concentrated towards some center point while forgeries are
spread out as outliers. Numerical results show that our method can achieve
86.08% classification accuracy under the leave-one-out cross-validation
procedure. Our method also identifies five features that are much more
predominant than other features. Using just these five features for
classification, our method can give 88.61% classification accuracy, which is the
highest reported so far in the literature. Evaluation of the five features is also
performed on two hundred datasets generated by bootstrap sampling with
replacement. The median and the mean are 88.61% and 87.77%, respectively. Our
results show that a small set of statistics of the tight frame coefficients
along certain orientations can serve as discriminative features for van Gogh
paintings. It is more important to look at the tail distributions of such
directional coefficients than mean values and standard deviations. It reflects
a highly consistent style in van Gogh's brushstroke movements, where many
forgeries demonstrate a more diverse spread in these features.
| no_new_dataset | 0.952882 |
1411.3229 | Tian Cao | Tian Cao, Christopher Zach, Shannon Modla, Debbie Powell, Kirk Czymmek
and Marc Niethammer | Multi-modal Image Registration for Correlative Microscopy | 24 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correlative microscopy is a methodology combining the functionality of light
microscopy with the high resolution of electron microscopy and other microscopy
technologies. Image registration for correlative microscopy is quite
challenging because it is a multi-modal, multi-scale and multi-dimensional
registration problem. In this report, I introduce two methods of image
registration for correlative microscopy. The first method is based on fiducials
(beads). I generate landmarks from the fiducials and compute the similarity
transformation matrix based on three pairs of nearest corresponding landmarks.
A least-squares matching process is applied afterwards to further refine the
registration. The second method is inspired by the image analogies approach. I
introduce the sparse representation model into image analogies. I first train
representative image patches (dictionaries) for pre-registered datasets from
two different modalities, and then I use the sparse coding technique to
transfer a given image to a predicted image from one modality to another based
on the learned dictionaries. The final image registration is between the
predicted image and the original image corresponding to the given image in the
different modality. The method transforms a multi-modal registration problem to
a mono-modal one. I test my approaches on Transmission Electron Microscopy
(TEM) and confocal microscopy images. Experimental results of the methods are
also shown in this report.
| [
{
"version": "v1",
"created": "Wed, 12 Nov 2014 16:32:17 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jan 2015 15:44:08 GMT"
}
] | 2015-01-14T00:00:00 | [
[
"Cao",
"Tian",
""
],
[
"Zach",
"Christopher",
""
],
[
"Modla",
"Shannon",
""
],
[
"Powell",
"Debbie",
""
],
[
"Czymmek",
"Kirk",
""
],
[
"Niethammer",
"Marc",
""
]
] | TITLE: Multi-modal Image Registration for Correlative Microscopy
ABSTRACT: Correlative microscopy is a methodology combining the functionality of light
microscopy with the high resolution of electron microscopy and other microscopy
technologies. Image registration for correlative microscopy is quite
challenging because it is a multi-modal, multi-scale and multi-dimensional
registration problem. In this report, I introduce two methods of image
registration for correlative microscopy. The first method is based on fiducials
(beads). I generate landmarks from the fiducials and compute the similarity
transformation matrix based on three pairs of nearest corresponding landmarks.
A least-squares matching process is applied afterwards to further refine the
registration. The second method is inspired by the image analogies approach. I
introduce the sparse representation model into image analogies. I first train
representative image patches (dictionaries) for pre-registered datasets from
two different modalities, and then I use the sparse coding technique to
transfer a given image to a predicted image from one modality to another based
on the learned dictionaries. The final image registration is between the
predicted image and the original image corresponding to the given image in the
different modality. The method transforms a multi-modal registration problem to
a mono-modal one. I test my approaches on Transmission Electron Microscopy
(TEM) and confocal microscopy images. Experimental results of the methods are
also shown in this report.
| no_new_dataset | 0.954647 |
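Illustrative aside for the record above (an assumption-laden sketch, not the report's exact formulation): the landmark step it describes amounts to a least-squares similarity transform between matched point pairs, for which the standard Umeyama/Procrustes solution can be written as follows; the points here are synthetic.

import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points onto dst points; both are (N, 2) arrays, N >= 2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)          # cross-covariance (up to 1/N)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: recover a known transform from three landmark pairs.
src = np.array([[0., 0.], [1., 0.], [0., 1.]])
theta, scale, shift = 0.3, 2.0, np.array([5., -1.])
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = scale * src @ R_true.T + shift
s, R, t = similarity_transform(src, dst)
print(np.allclose(s * src @ R.T + t, dst))      # -> True

In the report's pipeline, src/dst would come from detected fiducials and the result would then be refined by its least-squares matching step, which is not reproduced here.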
1501.02825 | Sven Bambach | Sven Bambach | A Survey on Recent Advances of Computer Vision Algorithms for Egocentric
Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent technological advances have made lightweight, head-mounted cameras
both practical and affordable, and products like Google Glass show first
approaches to introducing the idea of egocentric (first-person) video to the
mainstream. Interestingly, the computer vision community has only recently
started to explore this new domain of egocentric vision, where research can
roughly be categorized into three areas: object recognition, activity
detection/recognition, and video summarization. In this paper, we try to give a
broad overview of the different problems that have been addressed and
collect and compare evaluation results. Moreover, along with the emergence of
this new domain came the introduction of numerous new and versatile benchmark
datasets, which we summarize and compare as well.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 21:14:56 GMT"
}
] | 2015-01-14T00:00:00 | [
[
"Bambach",
"Sven",
""
]
] | TITLE: A Survey on Recent Advances of Computer Vision Algorithms for Egocentric
Video
ABSTRACT: Recent technological advances have made lightweight, head-mounted cameras
both practical and affordable, and products like Google Glass show first
approaches to introducing the idea of egocentric (first-person) video to the
mainstream. Interestingly, the computer vision community has only recently
started to explore this new domain of egocentric vision, where research can
roughly be categorized into three areas: object recognition, activity
detection/recognition, and video summarization. In this paper, we try to give a
broad overview of the different problems that have been addressed and
collect and compare evaluation results. Moreover, along with the emergence of
this new domain came the introduction of numerous new and versatile benchmark
datasets, which we summarize and compare as well.
| new_dataset | 0.928149 |
1501.02954 | Dominik Egarter | Dominik Egarter and Manfred P\"ochacker and Wilfried Elmenreich | Complexity of Power Draws for Load Disaggregation | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-Intrusive Load Monitoring (NILM) is a technology offering methods to
identify appliances in homes based on their consumption characteristics and the
total household demand. Recently, many different novel NILM approaches were
introduced, tested on real-world data and evaluated with a common evaluation
metric. However, a fair comparison between different NILM approaches, even
when the same evaluation metric is used, is nearly impossible due to
incomplete or missing problem definitions. Each NILM approach is typically
evaluated under different test scenarios. Test results are thus influenced by
the considered appliances, the number of used appliances, the device type
representing the appliance and the pre-processing stages denoising the
consumption data. This paper introduces a novel complexity measure of
aggregated consumption data providing an assessment of the problem complexity
affected by the used appliances, the appliance characteristics and the
appliance usage over time. We test our load disaggregation complexity measure on
different real-world datasets and with a state-of-the-art NILM approach. The
introduced disaggregation complexity measure is able to classify the
disaggregation problem based on the used appliance set and the considered
measurement noise.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2015 11:09:51 GMT"
}
] | 2015-01-14T00:00:00 | [
[
"Egarter",
"Dominik",
""
],
[
"Pöchacker",
"Manfred",
""
],
[
"Elmenreich",
"Wilfried",
""
]
] | TITLE: Complexity of Power Draws for Load Disaggregation
ABSTRACT: Non-Intrusive Load Monitoring (NILM) is a technology offering methods to
identify appliances in homes based on their consumption characteristics and the
total household demand. Recently, many different novel NILM approaches were
introduced, tested on real-world data and evaluated with a common evaluation
metric. However, a fair comparison between different NILM approaches, even
when the same evaluation metric is used, is nearly impossible due to
incomplete or missing problem definitions. Each NILM approach is typically
evaluated under different test scenarios. Test results are thus influenced by
the considered appliances, the number of used appliances, the device type
representing the appliance and the pre-processing stages denoising the
consumption data. This paper introduces a novel complexity measure of
aggregated consumption data providing an assessment of the problem complexity
affected by the used appliances, the appliance characteristics and the
appliance usage over time. We test our load disaggregation complexity measure on
different real-world datasets and with a state-of-the-art NILM approach. The
introduced disaggregation complexity measure is able to classify the
disaggregation problem based on the used appliance set and the considered
measurement noise.
| no_new_dataset | 0.929216 |
1501.03044 | Wei Lu | Wei Lu | Effects of Data Resolution and Human Behavior on Large Scale Evacuation
Simulations | PhD dissertation. UT Knoxville. 130 pages, 37 figures, 8 tables.
University of Tennessee, 2013. http://trace.tennessee.edu/utk_graddiss/2595 | null | null | null | physics.soc-ph cs.CE | http://creativecommons.org/licenses/publicdomain/ | Traffic Analysis Zones (TAZ) based macroscopic simulation studies are mostly
applied in evacuation planning and operation areas. The large zone size of TAZ
and the aggregated information of macroscopic simulation underestimate the real
evacuation performance. Taking advantage of the high-resolution demographic
data of LandScan USA (whose zone size is much smaller than that of TAZ) and of
agent-based microscopic traffic simulation models raises many new problems that
call for novel solutions. A series of studies is conducted using LandScan USA
Population Cells (LPC) data for evacuation assignments with different network
configurations, travel demand models, and travelers' compliance behavior.
First, a new Multiple Source Nearest Destination Shortest Path (MSNDSP)
problem is defined for generating the Origin-Destination matrix in evacuation
assignments when using the LandScan dataset. Second, a new agent-based traffic
assignment framework using LandScan and TRANSIMS modules is proposed for
evacuation planning and operation studies. The impacts of traffic analysis
area resolution (TAZ vs LPC), evacuation start time (daytime vs nighttime),
and departure time choice model (normal S-shape model vs location-based model)
are studied. Third, based on the proposed framework, multi-scale network
configurations (two levels of road networks and two scales of zone sizes) and
three routing schemes (shortest network distance, highway biased, and shortest
straight-line distance routes) are implemented for the evacuation performance
comparison studies. Fourth, to study the impact of human behavior under
evacuation operations, travelers' compliance behavior with compliance levels
ranging from fully compliant to fully non-compliant is analyzed.
| [
{
"version": "v1",
"created": "Tue, 30 Dec 2014 19:49:52 GMT"
}
] | 2015-01-14T00:00:00 | [
[
"Lu",
"Wei",
""
]
] | TITLE: Effects of Data Resolution and Human Behavior on Large Scale Evacuation
Simulations
ABSTRACT: Traffic Analysis Zones (TAZ) based macroscopic simulation studies are mostly
applied in evacuation planning and operation areas. The large zone size of TAZ
and the aggregated information of macroscopic simulation underestimate the real
evacuation performance. Taking advantage of the high-resolution demographic
data of LandScan USA (whose zone size is much smaller than that of TAZ) and of
agent-based microscopic traffic simulation models raises many new problems that
call for novel solutions. A series of studies is conducted using LandScan USA
Population Cells (LPC) data for evacuation assignments with different network
configurations, travel demand models, and travelers' compliance behavior.
First, a new Multiple Source Nearest Destination Shortest Path (MSNDSP)
problem is defined for generating the Origin-Destination matrix in evacuation
assignments when using the LandScan dataset. Second, a new agent-based traffic
assignment framework using LandScan and TRANSIMS modules is proposed for
evacuation planning and operation studies. The impacts of traffic analysis
area resolution (TAZ vs LPC), evacuation start time (daytime vs nighttime),
and departure time choice model (normal S-shape model vs location-based model)
are studied. Third, based on the proposed framework, multi-scale network
configurations (two levels of road networks and two scales of zone sizes) and
three routing schemes (shortest network distance, highway biased, and shortest
straight-line distance routes) are implemented for the evacuation performance
comparison studies. Fourth, to study the impact of human behavior under
evacuation operations, travelers' compliance behavior with compliance levels
ranging from fully compliant to fully non-compliant is analyzed.
| no_new_dataset | 0.954478 |
1408.2292 | Jingwei Sun | Jingwei Sun, Guangzhong Sun | SPLZ: An Efficient Algorithm for Single Source Shortest Path Problem
Using Compression Method | 20 pages, 5 figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient solution of the single source shortest path (SSSP) problem on road
networks is an important requirement for numerous real-world applications. This
paper introduces an algorithm for the SSSP problem using a compression method.
Owing to precomputing and storing all-pairs shortest paths (APSP), solving the
SSSP problem reduces to a simple lookup of a small amount of data from the
precomputed APSP, followed by decompression. APSP without compression needs at
least 1 TB of memory for a road network with one million vertices. Our
algorithm can compress such an APSP into several GB and ensures good
decompression performance. In our experiment on a dataset covering the
Northwest USA (with 1.2 million vertices), our method is about three orders of
magnitude faster than Dijkstra's algorithm based on a binary heap.
| [
{
"version": "v1",
"created": "Mon, 11 Aug 2014 01:40:00 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jan 2015 11:57:32 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Sun",
"Jingwei",
""
],
[
"Sun",
"Guangzhong",
""
]
] | TITLE: SPLZ: An Efficient Algorithm for Single Source Shortest Path Problem
Using Compression Method
ABSTRACT: Efficient solution of the single source shortest path (SSSP) problem on road
networks is an important requirement for numerous real-world applications. This
paper introduces an algorithm for the SSSP problem using a compression method.
Owing to precomputing and storing all-pairs shortest paths (APSP), solving the
SSSP problem reduces to a simple lookup of a small amount of data from the
precomputed APSP, followed by decompression. APSP without compression needs at
least 1 TB of memory for a road network with one million vertices. Our
algorithm can compress such an APSP into several GB and ensures good
decompression performance. In our experiment on a dataset covering the
Northwest USA (with 1.2 million vertices), our method is about three orders of
magnitude faster than Dijkstra's algorithm based on a binary heap.
| no_new_dataset | 0.936692 |
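Illustrative aside for the record above (arXiv 1408.2292): a minimal binary-heap Dijkstra, i.e. the baseline the paper compares against; the SPLZ compression itself is not reproduced, and the apsp table named in the final comment is a hypothetical precomputed structure.

import heapq

def dijkstra(adj, source):
    """Binary-heap Dijkstra: adj maps vertex -> list of (neighbor, weight)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# With a (hypothetical) precomputed APSP table, SSSP becomes a row lookup
# instead of a graph search:
# dist_from_s = {v: apsp[s][v] for v in vertices}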
1501.02431 | Rashmi Paithankar Ms | Rashmi Paithankar and Bharat Tidke | A H-K Clustering Algorithm For High Dimensional Data Using Ensemble
Learning | 9 pages, 1 table, 2 figures, International Journal of Information
Technology Convergence and Services (IJITCS) Vol.4, No.5/6, December 2014 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances made to the traditional clustering algorithms solve various
problems such as the curse of dimensionality and the sparsity of data with
multiple attributes. The traditional H-K clustering algorithm can solve the
problems of randomness and a priori selection of the initial centers of the
K-means clustering algorithm. But when we apply it to high-dimensional data, it
runs into the dimensional disaster problem due to its high computational
complexity. Advanced clustering algorithms such as subspace and ensemble
clustering algorithms improve the performance of clustering high-dimensional
datasets from different aspects and to different extents. Still, these
algorithms improve the performance from a single perspective.
The objective of the proposed model is to improve the performance of
traditional H-K clustering and overcome the limitations such as high
computational complexity and poor accuracy for high dimensional data by
combining three different clustering approaches: subspace clustering, ensemble
clustering, and H-K clustering.
| [
{
"version": "v1",
"created": "Sun, 11 Jan 2015 08:30:15 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Paithankar",
"Rashmi",
""
],
[
"Tidke",
"Bharat",
""
]
] | TITLE: A H-K Clustering Algorithm For High Dimensional Data Using Ensemble
Learning
ABSTRACT: Advances made to the traditional clustering algorithms solve various
problems such as the curse of dimensionality and the sparsity of data with
multiple attributes. The traditional H-K clustering algorithm can solve the
problems of randomness and a priori selection of the initial centers of the
K-means clustering algorithm. But when we apply it to high-dimensional data, it
runs into the dimensional disaster problem due to its high computational
complexity. Advanced clustering algorithms such as subspace and ensemble
clustering algorithms improve the performance of clustering high-dimensional
datasets from different aspects and to different extents. Still, these
algorithms improve the performance from a single perspective.
The objective of the proposed model is to improve the performance of
traditional H-K clustering and overcome the limitations such as high
computational complexity and poor accuracy for high dimensional data by
combining three different clustering approaches: subspace clustering, ensemble
clustering, and H-K clustering.
| no_new_dataset | 0.952042 |
1501.02432 | Jayadeva | Jayadeva, Sanjit Singh Batra, and Siddarth Sabharwal | Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC
dimension | arXiv admin note: text overlap with arXiv:1410.4573 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning
machine, and a low VC dimension leads to good generalization. The recently
proposed Minimal Complexity Machine (MCM) learns a hyperplane classifier by
minimizing an exact bound on the VC dimension. This paper extends the MCM
classifier to the fuzzy domain. The use of a fuzzy membership is known to
reduce the effect of outliers, and to reduce the effect of noise on learning.
Experimental results show that, on a number of benchmark datasets, the
fuzzy MCM classifier outperforms SVMs and the conventional MCM in terms of
generalization, and that the fuzzy MCM uses fewer support vectors. On several
benchmark datasets, the fuzzy MCM classifier yields excellent test set
accuracies while using one-tenth the number of support vectors used by SVMs.
| [
{
"version": "v1",
"created": "Sun, 11 Jan 2015 09:29:05 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Jayadeva",
"",
""
],
[
"Batra",
"Sanjit Singh",
""
],
[
"Sabharwal",
"Siddarth",
""
]
] | TITLE: Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC
dimension
ABSTRACT: The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning
machine, and a low VC dimension leads to good generalization. The recently
proposed Minimal Complexity Machine (MCM) learns a hyperplane classifier by
minimizing an exact bound on the VC dimension. This paper extends the MCM
classifier to the fuzzy domain. The use of a fuzzy membership is known to
reduce the effect of outliers, and to reduce the effect of noise on learning.
Experimental results show that, on a number of benchmark datasets, the
fuzzy MCM classifier outperforms SVMs and the conventional MCM in terms of
generalization, and that the fuzzy MCM uses fewer support vectors. On several
benchmark datasets, the fuzzy MCM classifier yields excellent test set
accuracies while using one-tenth the number of support vectors used by SVMs.
| no_new_dataset | 0.955361 |
1501.02527 | Nicholas Locascio | Harini Suresh, Nicholas Locascio | Autodetection and Classification of Hidden Cultural City Districts from
Yelp Reviews | null | null | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic models are a way to discover underlying themes in an otherwise
unstructured collection of documents. In this study, we specifically used the
Latent Dirichlet Allocation (LDA) topic model on a dataset of Yelp reviews to
classify restaurants based off of their reviews. Furthermore, we hypothesize
that within a city, restaurants can be grouped into similar "clusters" based on
both location and similarity. We used several different clustering methods,
including K-means Clustering and a Probabilistic Mixture Model, in order to
uncover and classify districts, both well-known and hidden (i.e. cultural areas
like Chinatown or hearsay like "the best street for Italian restaurants")
within a city. We use these models to display and label different clusters on a
map. We also introduce a topic similarity heatmap that displays the similarity
distribution in a city to a new restaurant.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 03:10:01 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Suresh",
"Harini",
""
],
[
"Locascio",
"Nicholas",
""
]
] | TITLE: Autodetection and Classification of Hidden Cultural City Districts from
Yelp Reviews
ABSTRACT: Topic models are a way to discover underlying themes in an otherwise
unstructured collection of documents. In this study, we specifically used the
Latent Dirichlet Allocation (LDA) topic model on a dataset of Yelp reviews to
classify restaurants based off of their reviews. Furthermore, we hypothesize
that within a city, restaurants can be grouped into similar "clusters" based on
both location and similarity. We used several different clustering methods,
including K-means Clustering and a Probabilistic Mixture Model, in order to
uncover and classify districts, both well-known and hidden (i.e. cultural areas
like Chinatown or hearsay like "the best street for Italian restaurants")
within a city. We use these models to display and label different clusters on a
map. We also introduce a topic similarity heatmap that displays the similarity
distribution in a city to a new restaurant.
| no_new_dataset | 0.944791 |
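Illustrative aside for the record above (a minimal sketch with made-up placeholder reviews; the paper's actual preprocessing, topic count, and district-clustering pipeline are not reproduced): fitting an LDA topic model to review text with scikit-learn and obtaining per-restaurant topic mixtures.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "great dumplings and noodles, best dim sum in town",
    "amazing pasta, the tiramisu was wonderful",
    "fresh sushi and friendly service",
]  # placeholder documents standing in for Yelp review text

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)                      # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)                       # per-restaurant topic mixtures
print(doc_topics)
# doc_topics (optionally concatenated with latitude/longitude) could then be
# fed to a clustering step such as k-means to group restaurants into districts.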
1501.02530 | Anna Senina | Anna Rohrbach, Marcus Rohrbach, Niket Tandon, Bernt Schiele | A Dataset for Movie Description | null | null | null | null | cs.CV cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Descriptive video service (DVS) provides linguistic descriptions of movies
and allows visually impaired people to follow a movie along with their peers.
Such descriptions are by design mainly visual and thus naturally form an
interesting data source for computer vision and computational linguistics. In
this work we propose a novel dataset which contains transcribed DVS, which is
temporally aligned to full length HD movies. In addition we also collected the
aligned movie scripts which have been used in prior work and compare the two
different sources of descriptions. In total the Movie Description dataset
contains a parallel corpus of over 54,000 sentences and video snippets from 72
HD movies. We characterize the dataset by benchmarking different approaches for
generating video descriptions. Comparing DVS to scripts, we find that DVS is
far more visual and describes precisely what is shown rather than what should
happen according to the scripts created prior to movie production.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 03:31:33 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Tandon",
"Niket",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: A Dataset for Movie Description
ABSTRACT: Descriptive video service (DVS) provides linguistic descriptions of movies
and allows visually impaired people to follow a movie along with their peers.
Such descriptions are by design mainly visual and thus naturally form an
interesting data source for computer vision and computational linguistics. In
this work we propose a novel dataset which contains transcribed DVS, which is
temporally aligned to full length HD movies. In addition we also collected the
aligned movie scripts which have been used in prior work and compare the two
different sources of descriptions. In total the Movie Description dataset
contains a parallel corpus of over 54,000 sentences and video snippets from 72
HD movies. We characterize the dataset by benchmarking different approaches for
generating video descriptions. Comparing DVS to scripts, we find that DVS is
far more visual and describes precisely what is shown rather than what should
happen according to the scripts created prior to movie production.
| new_dataset | 0.963541 |
1501.02652 | Kostas Stefanidis | Yannis Roussakis, Ioannis Chrysakis, Kostas Stefanidis, Giorgos
Flouris, Yannis Stavrakas | A Flexible Framework for Defining, Representing and Detecting Changes on
the Data Web | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamic nature of Web data gives rise to a multitude of problems related
to the identification, computation and management of the evolving versions and
the related changes. In this paper, we consider the problem of change
recognition in RDF datasets, i.e., the problem of identifying, and when
possible give semantics to, the changes that led from one version of an RDF
dataset to another. Despite our RDF focus, our approach is sufficiently general
to encompass different data models that can be encoded in RDF, such as relational
or multi-dimensional. In fact, we propose a flexible, extendible and
data-model-independent methodology of defining changes that can capture the
peculiarities and needs of different data models and applications, while being
formally robust due to the satisfaction of the properties of completeness and
unambiguity. Further, we propose an ontology of changes for storing the
detected changes that allows automated processing and analysis of changes,
cross-snapshot queries (spanning across different versions), as well as queries
involving both changes and data. To detect changes and populate said ontology,
we propose a customizable detection algorithm, which is applicable to different
data models and applications requiring the detection of custom, user-defined
changes. Finally, we provide a proof-of-concept application and evaluation of
our framework for different data models.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 14:15:35 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Roussakis",
"Yannis",
""
],
[
"Chrysakis",
"Ioannis",
""
],
[
"Stefanidis",
"Kostas",
""
],
[
"Flouris",
"Giorgos",
""
],
[
"Stavrakas",
"Yannis",
""
]
] | TITLE: A Flexible Framework for Defining, Representing and Detecting Changes on
the Data Web
ABSTRACT: The dynamic nature of Web data gives rise to a multitude of problems related
to the identification, computation and management of the evolving versions and
the related changes. In this paper, we consider the problem of change
recognition in RDF datasets, i.e., the problem of identifying, and when
possible give semantics to, the changes that led from one version of an RDF
dataset to another. Despite our RDF focus, our approach is sufficiently general
to encompass different data models that can be encoded in RDF, such as relational
or multi-dimensional. In fact, we propose a flexible, extendible and
data-model-independent methodology of defining changes that can capture the
peculiarities and needs of different data models and applications, while being
formally robust due to the satisfaction of the properties of completeness and
unambiguity. Further, we propose an ontology of changes for storing the
detected changes that allows automated processing and analysis of changes,
cross-snapshot queries (spanning across different versions), as well as queries
involving both changes and data. To detect changes and populate said ontology,
we propose a customizable detection algorithm, which is applicable to different
data models and applications requiring the detection of custom, user-defined
changes. Finally, we provide a proof-of-concept application and evaluation of
our framework for different data models.
| no_new_dataset | 0.949012 |
1501.02702 | Feng Nan | Feng Nan, Joseph Wang, Venkatesh Saligrama | Max-Cost Discrete Function Evaluation Problem under a Budget | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose novel methods for max-cost Discrete Function Evaluation Problem
(DFEP) under budget constraints. We are motivated by applications such as
clinical diagnosis where a patient is subjected to a sequence of (possibly
expensive) tests before a decision is made. Our goal is to develop strategies
for minimizing max-costs. The problem is known to be NP hard and greedy methods
based on specialized impurity functions have been proposed. We develop a broad
class of \emph{admissible} impurity functions that admit monomials, classes of
polynomials, and hinge-loss functions that allow for flexible impurity design
with provably optimal approximation bounds. This flexibility is important for
datasets when max-cost can be overly sensitive to "outliers." Outliers bias
max-cost to a few examples that require a large number of tests for
classification. We design admissible functions that allow for accuracy-cost
trade-off and result in $O(\log n)$ guarantees of the optimal cost among trees
with corresponding classification accuracy levels.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 16:33:47 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Nan",
"Feng",
""
],
[
"Wang",
"Joseph",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Max-Cost Discrete Function Evaluation Problem under a Budget
ABSTRACT: We propose novel methods for max-cost Discrete Function Evaluation Problem
(DFEP) under budget constraints. We are motivated by applications such as
clinical diagnosis where a patient is subjected to a sequence of (possibly
expensive) tests before a decision is made. Our goal is to develop strategies
for minimizing max-costs. The problem is known to be NP hard and greedy methods
based on specialized impurity functions have been proposed. We develop a broad
class of \emph{admissible} impurity functions that admit monomials, classes of
polynomials, and hinge-loss functions that allow for flexible impurity design
with provably optimal approximation bounds. This flexibility is important for
datasets where max-cost can be overly sensitive to "outliers." Outliers bias
max-cost to a few examples that require a large number of tests for
classification. We design admissible functions that allow for accuracy-cost
trade-off and result in $O(\log n)$ guarantees of the optimal cost among trees
with corresponding classification accuracy levels.
| no_new_dataset | 0.941708 |
1501.02732 | Ilya Goldin | April Galyardt and Ilya Goldin | Predicting Performance During Tutoring with Models of Recent Performance | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In educational technology and learning sciences, there are multiple uses for
a predictive model of whether a student will perform a task correctly or not.
For example, an intelligent tutoring system may use such a model to estimate
whether or not a student has mastered a skill. We analyze the significance of
data recency in making such predictions, i.e., asking whether relatively more
recent observations of a student's performance matter more than relatively
older observations. We develop a new Recent-Performance Factors Analysis model
that takes data recency into account. The new model significantly improves
predictive accuracy over both existing logistic-regression performance models
and over novel baseline models in evaluations on real-world and synthetic
datasets. As a secondary contribution, we demonstrate how the widely used
cross-validation with 0-1 loss is inferior to AIC and to cross-validation with
L1 prediction error loss as a measure of model performance.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 17:39:53 GMT"
}
] | 2015-01-13T00:00:00 | [
[
"Galyardt",
"April",
""
],
[
"Goldin",
"Ilya",
""
]
] | TITLE: Predicting Performance During Tutoring with Models of Recent Performance
ABSTRACT: In educational technology and learning sciences, there are multiple uses for
a predictive model of whether a student will perform a task correctly or not.
For example, an intelligent tutoring system may use such a model to estimate
whether or not a student has mastered a skill. We analyze the significance of
data recency in making such predictions, i.e., asking whether relatively more
recent observations of a student's performance matter more than relatively
older observations. We develop a new Recent-Performance Factors Analysis model
that takes data recency into account. The new model significantly improves
predictive accuracy over both existing logistic-regression performance models
and over novel baseline models in evaluations on real-world and synthetic
datasets. As a secondary contribution, we demonstrate how the widely used
cross-validation with 0-1 loss is inferior to AIC and to cross-validation with
L1 prediction error loss as a measure of model performance.
| no_new_dataset | 0.944177 |
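Illustrative aside for the record above (a generic recency-weighted logistic regression with a made-up decay factor and toy data; it only illustrates the idea of weighting recent observations more heavily and is not the paper's Recent-Performance Factors Analysis model):

import numpy as np
from sklearn.linear_model import LogisticRegression

def recency_features(outcomes, decay=0.7):
    """Exponentially decayed counts of a student's recent successes/failures
    on a skill, one feature row per attempt (built before observing it)."""
    succ, fail, rows = 0.0, 0.0, []
    for y in outcomes:
        rows.append([succ, fail])
        succ = decay * succ + y
        fail = decay * fail + (1 - y)
    return np.array(rows)

# Toy attempt sequence for one student on one skill (1 = correct).
outcomes = [0, 0, 1, 1, 0, 1, 1, 1]
X = recency_features(outcomes)
y = np.array(outcomes)
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])   # predicted correctness per attempt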
1501.01924 | Leman Akoglu | Shebuti Rayana and Leman Akoglu | Less is More: Building Selective Anomaly Ensembles | 14 pages, 5 pages Appendix, 10 Figures, 15 Tables, to appear at SDM
2015 | null | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble techniques for classification and clustering have long proven
effective, yet anomaly ensembles have been barely studied. In this work, we tap
into this gap and propose a new ensemble approach for anomaly mining, with
application to event detection in temporal graphs. Our method aims to combine
results from heterogeneous detectors with varying outputs, and leverage the
evidence from multiple sources to yield better performance. However, trusting
all the results may deteriorate the overall ensemble accuracy, as some
detectors may fall short and provide inaccurate results depending on the nature
of the data in hand. This suggests that being selective in which results to
combine is vital in building effective ensembles---hence "less is more".
In this paper we propose SELECT; an ensemble approach for anomaly mining that
employs novel techniques to automatically and systematically select the results
to assemble in a fully unsupervised fashion. We apply our method to event
detection in temporal graphs, where SELECT successfully utilizes five base
detectors and seven consensus methods under a unified ensemble framework. We
provide extensive quantitative evaluation of our approach on five real-world
datasets (four with ground truth), including Enron email communications, New
York Times news corpus, and World Cup 2014 Twitter news feed. Thanks to its
selection mechanism, SELECT yields superior performance compared to individual
detectors alone, the full ensemble (naively combining all results), and an
existing diversity-based ensemble.
| [
{
"version": "v1",
"created": "Thu, 8 Jan 2015 18:54:09 GMT"
}
] | 2015-01-12T00:00:00 | [
[
"Rayana",
"Shebuti",
""
],
[
"Akoglu",
"Leman",
""
]
] | TITLE: Less is More: Building Selective Anomaly Ensembles
ABSTRACT: Ensemble techniques for classification and clustering have long proven
effective, yet anomaly ensembles have been barely studied. In this work, we tap
into this gap and propose a new ensemble approach for anomaly mining, with
application to event detection in temporal graphs. Our method aims to combine
results from heterogeneous detectors with varying outputs, and leverage the
evidence from multiple sources to yield better performance. However, trusting
all the results may deteriorate the overall ensemble accuracy, as some
detectors may fall short and provide inaccurate results depending on the nature
of the data in hand. This suggests that being selective in which results to
combine is vital in building effective ensembles---hence "less is more".
In this paper we propose SELECT; an ensemble approach for anomaly mining that
employs novel techniques to automatically and systematically select the results
to assemble in a fully unsupervised fashion. We apply our method to event
detection in temporal graphs, where SELECT successfully utilizes five base
detectors and seven consensus methods under a unified ensemble framework. We
provide extensive quantitative evaluation of our approach on five real-world
datasets (four with ground truth), including Enron email communications, New
York Times news corpus, and World Cup 2014 Twitter news feed. Thanks to its
selection mechanism, SELECT yields superior performance compared to individual
detectors alone, the full ensemble (naively combining all results), and an
existing diversity-based ensemble.
| no_new_dataset | 0.952086 |
1501.01996 | Amin Javari | Amin Javari, Mahdi Jalili | A probabilistic model to resolve diversity-accuracy challenge of
recommendation systems | 19 pages, 5 figures | Knowledge and Information Systems, 1-19 (2014) | 10.1007/s10115-014-0779-2 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation systems have wide-spread applications in both academia and
industry. Traditionally, performance of recommendation systems has been
measured by their precision. By introducing novelty and diversity as key
qualities in recommender systems, recently increasing attention has been
focused on this topic. Precision and novelty of recommendation are not in the
same direction, and practical systems should make a trade-off between these two
quantities. Thus, it is an important feature of a recommender system to make it
possible to adjust diversity and accuracy of the recommendations by tuning the
model. In this paper, we introduce a probabilistic structure to resolve the
diversity-accuracy dilemma in recommender systems. We propose a hybrid model
with adjustable level of diversity and precision such that one can perform this
by tuning a single parameter. The proposed recommendation model consists of two
models: one for maximization of the accuracy and the other one for
specification of the recommendation list to tastes of users. Our experiments on
two real datasets show the functionality of the model in resolving
accuracy-diversity dilemma and outperformance of the model over other classic
models. The proposed method could be extensively applied to real commercial
systems due to its low computational complexity and significant performance.
| [
{
"version": "v1",
"created": "Thu, 8 Jan 2015 22:42:39 GMT"
}
] | 2015-01-12T00:00:00 | [
[
"Javari",
"Amin",
""
],
[
"Jalili",
"Mahdi",
""
]
] | TITLE: A probabilistic model to resolve diversity-accuracy challenge of
recommendation systems
ABSTRACT: Recommendation systems have wide-spread applications in both academia and
industry. Traditionally, performance of recommendation systems has been
measured by their precision. By introducing novelty and diversity as key
qualities in recommender systems, recently increasing attention has been
focused on this topic. Precision and novelty of recommendation are not in the
same direction, and practical systems should make a trade-off between these two
quantities. Thus, it is an important feature of a recommender system to make it
possible to adjust diversity and accuracy of the recommendations by tuning the
model. In this paper, we introduce a probabilistic structure to resolve the
diversity-accuracy dilemma in recommender systems. We propose a hybrid model
with adjustable level of diversity and precision such that one can perform this
by tuning a single parameter. The proposed recommendation model consists of two
models: one for maximization of the accuracy and the other one for
specification of the recommendation list to tastes of users. Our experiments on
two real datasets show the functionality of the model in resolving
accuracy-diversity dilemma and outperformance of the model over other classic
models. The proposed method could be extensively applied to real commercial
systems due to its low computational complexity and significant performance.
| no_new_dataset | 0.945951 |
1501.02159 | Riccardo Gallotti | Riccardo Gallotti and Marc Barthelemy | The Multilayer Temporal Network of Public Transport in Great Britain | 18 pages, 10 figures | Scientific Data 2, 140056 (2015) | 10.1038/sdata.2014.56 | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the widespread availability of information concerning Public
Transport from different sources, it is extremely hard to have a complete
picture, in particular at a national scale. Here, we integrate timetable data
obtained from the United Kingdom open-data program together with timetables of
domestic flights, and obtain a comprehensive snapshot of the temporal
characteristics of the whole UK public transport system for a week in October
2010. In order to focus on the multi-modal aspects of the system, we use a
coarse graining procedure and define explicitly the coupling between different
transport modes such as connections at airports, ferry docks, rail, metro,
coach and bus stations. The resulting weighted, directed, temporal and
multilayer network is provided in simple, commonly used formats, ensuring easy
accessibility and the possibility of a straightforward use of old or
specifically developed methods on this new and extensive dataset.
| [
{
"version": "v1",
"created": "Fri, 9 Jan 2015 14:44:22 GMT"
}
] | 2015-01-12T00:00:00 | [
[
"Gallotti",
"Riccardo",
""
],
[
"Barthelemy",
"Marc",
""
]
] | TITLE: The Multilayer Temporal Network of Public Transport in Great Britain
ABSTRACT: Despite the widespread availability of information concerning Public
Transport from different sources, it is extremely hard to have a complete
picture, in particular at a national scale. Here, we integrate timetable data
obtained from the United Kingdom open-data program together with timetables of
domestic flights, and obtain a comprehensive snapshot of the temporal
characteristics of the whole UK public transport system for a week in October
2010. In order to focus on the multi-modal aspects of the system, we use a
coarse graining procedure and define explicitly the coupling between different
transport modes such as connections at airports, ferry docks, rail, metro,
coach and bus stations. The resulting weighted, directed, temporal and
multilayer network is provided in simple, commonly used formats, ensuring easy
accessibility and the possibility of a straightforward use of old or
specifically developed methods on this new and extensive dataset.
| new_dataset | 0.946101 |
1501.01694 | Mayank Kejriwal | Mayank Kejriwal, Daniel P. Miranker | A DNF Blocking Scheme Learner for Heterogeneous Datasets | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity Resolution concerns identifying co-referent entity pairs across
datasets. A typical workflow comprises two steps. In the first step, a blocking
method uses a one-many function called a blocking scheme to map entities to
blocks. In the second step, entities sharing a block are paired and compared.
Current DNF blocking scheme learners (DNF-BSLs) apply only to structurally
homogeneous tables. We present an unsupervised algorithmic pipeline for
learning DNF blocking schemes on RDF graph datasets, as well as structurally
heterogeneous tables. Previous DNF-BSLs are admitted as special cases. We
evaluate the pipeline on six real-world dataset pairs. Unsupervised results are
shown to be competitive with supervised and semi-supervised baselines. To the
best of our knowledge, this is the first unsupervised DNF-BSL that admits RDF
graphs and structurally heterogeneous tables as inputs.
| [
{
"version": "v1",
"created": "Thu, 8 Jan 2015 00:37:09 GMT"
}
] | 2015-01-09T00:00:00 | [
[
"Kejriwal",
"Mayank",
""
],
[
"Miranker",
"Daniel P.",
""
]
] | TITLE: A DNF Blocking Scheme Learner for Heterogeneous Datasets
ABSTRACT: Entity Resolution concerns identifying co-referent entity pairs across
datasets. A typical workflow comprises two steps. In the first step, a blocking
method uses a one-many function called a blocking scheme to map entities to
blocks. In the second step, entities sharing a block are paired and compared.
Current DNF blocking scheme learners (DNF-BSLs) apply only to structurally
homogeneous tables. We present an unsupervised algorithmic pipeline for
learning DNF blocking schemes on RDF graph datasets, as well as structurally
heterogeneous tables. Previous DNF-BSLs are admitted as special cases. We
evaluate the pipeline on six real-world dataset pairs. Unsupervised results are
shown to be competitive with supervised and semi-supervised baselines. To the
best of our knowledge, this is the first unsupervised DNF-BSL that admits RDF
graphs and structurally heterogeneous tables as inputs.
| no_new_dataset | 0.950686 |
1406.3332 | Julien Mairal | Julien Mairal (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire Jean
Kuntzmann), Piotr Koniusz (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire
Jean Kuntzmann), Zaid Harchaoui (INRIA Grenoble Rh\^one-Alpes / LJK
Laboratoire Jean Kuntzmann), Cordelia Schmid (INRIA Grenoble Rh\^one-Alpes /
LJK Laboratoire Jean Kuntzmann) | Convolutional Kernel Networks | appears in Advances in Neural Information Processing Systems (NIPS),
Dec 2014, Montreal, Canada, http://nips.cc | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important goal in visual recognition is to devise image representations
that are invariant to particular transformations. In this paper, we address
this goal with a new type of convolutional neural network (CNN) whose
invariance is encoded by a reproducing kernel. Unlike traditional approaches
where neural networks are learned either to represent data or for solving a
classification task, our network learns to approximate the kernel feature map
on training data. Such an approach enjoys several benefits over classical ones.
First, by teaching CNNs to be invariant, we obtain simple network architectures
that achieve a similar accuracy to more complex ones, while being easy to train
and robust to overfitting. Second, we bridge a gap between the neural network
literature and kernels, which are natural tools to model invariance. We
evaluate our methodology on visual recognition tasks where CNNs have proven to
perform well, e.g., digit recognition with the MNIST dataset, and the more
challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive
with the state of the art.
| [
{
"version": "v1",
"created": "Thu, 12 Jun 2014 19:41:03 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Nov 2014 16:58:48 GMT"
}
] | 2015-01-08T00:00:00 | [
[
"Mairal",
"Julien",
"",
"INRIA Grenoble Rhône-Alpes / LJK Laboratoire Jean\n Kuntzmann"
],
[
"Koniusz",
"Piotr",
"",
"INRIA Grenoble Rhône-Alpes / LJK Laboratoire\n Jean Kuntzmann"
],
[
"Harchaoui",
"Zaid",
"",
"INRIA Grenoble Rhône-Alpes / LJK\n Laboratoire Jean Kuntzmann"
],
[
"Schmid",
"Cordelia",
"",
"INRIA Grenoble Rhône-Alpes /\n LJK Laboratoire Jean Kuntzmann"
]
] | TITLE: Convolutional Kernel Networks
ABSTRACT: An important goal in visual recognition is to devise image representations
that are invariant to particular transformations. In this paper, we address
this goal with a new type of convolutional neural network (CNN) whose
invariance is encoded by a reproducing kernel. Unlike traditional approaches
where neural networks are learned either to represent data or for solving a
classification task, our network learns to approximate the kernel feature map
on training data. Such an approach enjoys several benefits over classical ones.
First, by teaching CNNs to be invariant, we obtain simple network architectures
that achieve a similar accuracy to more complex ones, while being easy to train
and robust to overfitting. Second, we bridge a gap between the neural network
literature and kernels, which are natural tools to model invariance. We
evaluate our methodology on visual recognition tasks where CNNs have proven to
perform well, e.g., digit recognition with the MNIST dataset, and the more
challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive
with the state of the art.
| no_new_dataset | 0.949576 |
1501.01426 | Mansaf Alam | Mansaf Alam, Kashish Ara Shakil and Shuchi Sethi | Analysis and Clustering of Workload in Google Cluster Trace based on
Resource Usage | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing has gained interest amongst commercial organizations,
research communities, developers and other individuals during the past few
years. In order to move ahead with research in the field of data management
and processing of such data, we need benchmark datasets and freely available,
publicly accessible data. In May 2011, Google released a trace of a cluster of
11k machines, referred to as the Google Cluster Trace. This trace contains cell
information for about 29 days. This paper provides an analysis of resource
usage and requirements in this trace and is an attempt to give an insight into
this kind of production trace, similar to the ones in cloud environments. The
major contributions of this paper include a statistical profile of jobs based
on resource usage, clustering of workload patterns, and classification of jobs
into different types based on k-means clustering. Though there have been
earlier works analyzing this trace, our analysis provides several new findings,
such as that jobs in a production trace are trimodal and that there is symmetry
among the tasks within a long job type.
| [
{
"version": "v1",
"created": "Wed, 7 Jan 2015 10:15:05 GMT"
}
] | 2015-01-08T00:00:00 | [
[
"Alam",
"Mansaf",
""
],
[
"Shakil",
"Kashish Ara",
""
],
[
"Sethi",
"Shuchi",
""
]
] | TITLE: Analysis and Clustering of Workload in Google Cluster Trace based on
Resource Usage
ABSTRACT: Cloud computing has gained interest amongst commercial organizations,
research communities, developers and other individuals during the past few
years. In order to move ahead with research in the field of data management
and processing of such data, we need benchmark datasets and freely available,
publicly accessible data. In May 2011, Google released a trace of a cluster of
11k machines, referred to as the Google Cluster Trace. This trace contains cell
information for about 29 days. This paper provides an analysis of resource
usage and requirements in this trace and is an attempt to give an insight into
this kind of production trace, similar to the ones in cloud environments. The
major contributions of this paper include a statistical profile of jobs based
on resource usage, clustering of workload patterns, and classification of jobs
into different types based on k-means clustering. Though there have been
earlier works analyzing this trace, our analysis provides several new findings,
such as that jobs in a production trace are trimodal and that there is symmetry
among the tasks within a long job type.
| no_new_dataset | 0.942876 |
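Illustrative aside for the record above (synthetic stand-in features; the paper's actual job features and cluster count are assumptions here, not reproduced): k-means clustering of per-job resource-usage vectors with scikit-learn.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-job feature matrix: [mean CPU usage, mean memory usage,
# number of tasks, duration]; real values would come from the trace itself.
rng = np.random.default_rng(0)
jobs = rng.random((200, 4))

X = StandardScaler().fit_transform(jobs)        # put features on a common scale
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))                  # jobs per workload cluster
print(km.cluster_centers_)                      # standardized centroids per job type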
1412.7625 | Teng Qiu | Teng Qiu, Yongjie Li | An Effective Semi-supervised Divisive Clustering Algorithm | 8 pages, 4 figures, a new (6th) member of the in-tree clustering
family | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Nowadays, data are generated massively and rapidly from scientific fields such as
bioinformatics, neuroscience and astronomy to business and engineering fields.
Cluster analysis, as one of the major data analysis tools, is therefore more
significant than ever. We propose in this work an effective Semi-supervised
Divisive Clustering algorithm (SDC). Data points are first organized by a
minimal spanning tree. Next, this tree structure is transitioned to the in-tree
structure, and then divided into sub-trees under the supervision of the labeled
data, and in the end, all points in the sub-trees are directly associated with
specific cluster centers. SDC is fully automatic, non-iterative, involving no
free parameter, insensitive to noise, able to detect irregularly shaped cluster
structures, applicable to the data sets of high dimensionality and different
attributes. The power of SDC is demonstrated on several datasets.
| [
{
"version": "v1",
"created": "Wed, 24 Dec 2014 08:55:50 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jan 2015 09:35:39 GMT"
}
] | 2015-01-07T00:00:00 | [
[
"Qiu",
"Teng",
""
],
[
"Li",
"Yongjie",
""
]
] | TITLE: An Effective Semi-supervised Divisive Clustering Algorithm
ABSTRACT: Nowadays, data are generated massively and rapidly from scientific fields such as
bioinformatics, neuroscience and astronomy to business and engineering fields.
Cluster analysis, as one of the major data analysis tools, is therefore more
significant than ever. We propose in this work an effective Semi-supervised
Divisive Clustering algorithm (SDC). Data points are first organized by a
minimal spanning tree. Next, this tree structure is transitioned to the in-tree
structure, and then divided into sub-trees under the supervision of the labeled
data, and in the end, all points in the sub-trees are directly associated with
specific cluster centers. SDC is fully automatic, non-iterative, involving no
free parameter, insensitive to noise, able to detect irregularly shaped cluster
structures, and applicable to data sets of high dimensionality and with different
attributes. The power of SDC is demonstrated on several datasets.
| no_new_dataset | 0.951323 |
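Illustrative aside for the record above (a toy unsupervised MST-cut clustering on synthetic points; SDC's in-tree construction and its use of labeled data are not modeled here):

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import squareform, pdist

def mst_cut_clustering(points, n_clusters):
    """Build a minimum spanning tree over the points and remove the heaviest
    edges until the requested number of connected components remains."""
    d = squareform(pdist(points))                 # dense pairwise distances
    mst = minimum_spanning_tree(d).toarray()
    edges = np.argwhere(mst > 0)                  # MST edges and their weights
    weights = mst[mst > 0]
    for idx in np.argsort(weights)[::-1][: n_clusters - 1]:
        i, j = edges[idx]
        mst[i, j] = 0.0                           # cut one heavy edge
    n, labels = connected_components(mst, directed=False)
    return labels

points = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 8])
print(mst_cut_clustering(points, 2))              # two well-separated blobs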
1501.00994 | Vikram Krishnamurthy | Vikram Krishnamurthy and William Hoiles | Online Reputation and Polling Systems: Data Incest, Social Learning and
Revealed Preferences | arXiv admin note: substantial text overlap with arXiv:1412.4171 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers online reputation and polling systems where individuals
make recommendations based on their private observations and recommendations of
friends. Such interaction of individuals and their social influence is modelled
as social learning on a directed acyclic graph. Data incest (misinformation
propagation) occurs due to unintentional re-use of identical actions in the
for- mation of public belief in social learning; the information gathered by
each agent is mistakenly considered to be independent. This results in
overconfidence and bias in estimates of the state. Necessary and sufficient
conditions are given on the structure of information exchange graph to mitigate
data incest. Incest removal algorithms are presented. Experimental results on
human subjects are presented to illustrate the effect of social influence and
data incest on decision making. These experimental results indicate that social
learning protocols require careful design to handle and mitigate data incest.
The incest removal algorithms are illustrated in an expectation polling system
where participants in a poll respond with a summary of their friends' beliefs.
Finally, the principle of revealed preferences arising in micro-economics
theory is used to parse Twitter datasets to determine if social sensors are
utility maximizers and then determine their utility functions.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2015 21:00:51 GMT"
}
] | 2015-01-07T00:00:00 | [
[
"Krishnamurthy",
"Vikram",
""
],
[
"Hoiles",
"William",
""
]
] | TITLE: Online Reputation and Polling Systems: Data Incest, Social Learning and
Revealed Preferences
ABSTRACT: This paper considers online reputation and polling systems where individuals
make recommendations based on their private observations and recommendations of
friends. Such interaction of individuals and their social influence is modelled
as social learning on a directed acyclic graph. Data incest (misinformation
propagation) occurs due to unintentional re-use of identical actions in the
formation of public belief in social learning; the information gathered by
each agent is mistakenly considered to be independent. This results in
overconfidence and bias in estimates of the state. Necessary and sufficient
conditions are given on the structure of information exchange graph to mitigate
data incest. Incest removal algorithms are presented. Experimental results on
human subjects are presented to illustrate the effect of social influence and
data incest on decision making. These experimental results indicate that social
learning protocols require careful design to handle and mitigate data incest.
The incest removal algorithms are illustrated in an expectation polling system
where participants in a poll respond with a summary of their friends' beliefs.
Finally, the principle of revealed preferences arising in micro-economics
theory is used to parse Twitter datasets to determine if social sensors are
utility maximizers and then determine their utility functions.
| no_new_dataset | 0.949201 |
1501.01083 | Mohana S H | S.H. Mohana, C.J. Prabhakar | Stem-Calyx Recognition of an Apple using Shape Descriptors | 15 pages, 10 figures and 2 tables in Signal & Image Processing : An
International Journal (SIPIJ) Vol.5, No.6, December 2014 | null | 10.5121/sipij.2014.5602 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper presents a novel method to recognize stem - calyx of an apple
using shape descriptors. The main drawback of existing apple grading techniques
is that the stem - calyx part of an apple is treated as a defect, which leads to poor
grading of apples. In order to overcome this drawback, we propose an approach
to recognize the stem-calyx and differentiate it from true defects based on shape
features. Our method comprises steps such as segmentation of the apple using
grow-cut method, candidate objects such as stem-calyx and small defects are
detected using multi-threshold segmentation. The shape features are extracted
from detected objects using Multifractal, Fourier and Radon descriptor and
finally stem-calyx regions are recognized and differentiated from true defects
using SVM classifier. The proposed algorithm is evaluated using experiments
conducted on apple image dataset and results exhibit considerable improvement
in recognition of stem-calyx region compared to other techniques.
| [
{
"version": "v1",
"created": "Tue, 6 Jan 2015 05:51:23 GMT"
}
] | 2015-01-07T00:00:00 | [
[
"Mohana",
"S. H.",
""
],
[
"Prabhakar",
"C. J.",
""
]
] | TITLE: Stem-Calyx Recognition of an Apple using Shape Descriptors
ABSTRACT: This paper presents a novel method to recognize stem - calyx of an apple
using shape descriptors. The main drawback of existing apple grading techniques
is that the stem - calyx part of an apple is treated as a defect, which leads to poor
grading of apples. In order to overcome this drawback, we propose an approach
to recognize the stem-calyx and differentiate it from true defects based on shape
features. Our method comprises steps such as segmentation of the apple using
grow-cut method, candidate objects such as stem-calyx and small defects are
detected using multi-threshold segmentation. The shape features are extracted
from detected objects using Multifractal, Fourier and Radon descriptor and
finally stem-calyx regions are recognized and differentiated from true defects
using SVM classifier. The proposed algorithm is evaluated using experiments
conducted on apple image dataset and results exhibit considerable improvement
in recognition of stem-calyx region compared to other techniques.
| no_new_dataset | 0.955402 |
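The record above extracts Fourier (among other) shape descriptors from candidate regions before SVM classification. For orientation only, the snippet below computes a basic, scale-normalized Fourier descriptor of a closed boundary; the exact descriptors, normalization, and the grow-cut/SVM stages of the paper are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def fourier_shape_descriptor(contour, n_coeffs=16):
    """Basic Fourier descriptor of a closed 2-D contour (N x 2 array of points).

    Illustrative sketch: low-frequency coefficient magnitudes, normalized by
    the first harmonic for scale invariance.
    """
    z = contour[:, 0] + 1j * contour[:, 1]     # boundary as a complex signal
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])      # drop DC term (translation)
    return mags / (mags[0] + 1e-12)            # scale normalization
```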
1405.4807 | Yuxin Chen | Qixing Huang, Yuxin Chen, and Leonidas Guibas | Scalable Semidefinite Relaxation for Maximum A Posterior Estimation | accepted to International Conference on Machine Learning (ICML 2014) | International Conference on Machine Learning (ICML), vol. 32, pp.
64-72, June 2014 | null | null | cs.LG cs.CV cs.IT math.IT math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximum a posteriori (MAP) inference over discrete Markov random fields is a
fundamental task spanning a wide spectrum of real-world applications, which is
known to be NP-hard for general graphs. In this paper, we propose a novel
semidefinite relaxation formulation (referred to as SDR) to estimate the MAP
assignment. Algorithmically, we develop an accelerated variant of the
alternating direction method of multipliers (referred to as SDPAD-LR) that can
effectively exploit the special structure of the new relaxation. Encouragingly,
the proposed procedure allows solving SDR for large-scale problems, e.g.,
problems on a grid graph comprising hundreds of thousands of variables with
multiple states per node. Compared with prior SDP solvers, SDPAD-LR is capable
of attaining comparable accuracy while exhibiting remarkably improved
scalability, in contrast to the commonly held belief that semidefinite
relaxation can only be applied to small-scale MRF problems. We have evaluated
the performance of SDR on various benchmark datasets including OPENGM2 and PIC
in terms of both the quality of the solutions and computation time.
Experimental results demonstrate that for a broad class of problems, SDPAD-LR
outperforms state-of-the-art algorithms in producing better MAP assignment in
an efficient manner.
| [
{
"version": "v1",
"created": "Mon, 19 May 2014 16:58:24 GMT"
}
] | 2015-01-06T00:00:00 | [
[
"Huang",
"Qixing",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Guibas",
"Leonidas",
""
]
] | TITLE: Scalable Semidefinite Relaxation for Maximum A Posterior Estimation
ABSTRACT: Maximum a posteriori (MAP) inference over discrete Markov random fields is a
fundamental task spanning a wide spectrum of real-world applications, which is
known to be NP-hard for general graphs. In this paper, we propose a novel
semidefinite relaxation formulation (referred to as SDR) to estimate the MAP
assignment. Algorithmically, we develop an accelerated variant of the
alternating direction method of multipliers (referred to as SDPAD-LR) that can
effectively exploit the special structure of the new relaxation. Encouragingly,
the proposed procedure allows solving SDR for large-scale problems, e.g.,
problems on a grid graph comprising hundreds of thousands of variables with
multiple states per node. Compared with prior SDP solvers, SDPAD-LR is capable
of attaining comparable accuracy while exhibiting remarkably improved
scalability, in contrast to the commonly held belief that semidefinite
relaxation can only be applied to small-scale MRF problems. We have evaluated
the performance of SDR on various benchmark datasets including OPENGM2 and PIC
in terms of both the quality of the solutions and computation time.
Experimental results demonstrate that for a broad class of problems, SDPAD-LR
outperforms state-of-the-art algorithms in producing better MAP assignment in
an efficient manner.
| no_new_dataset | 0.946547 |
1412.7828 | S{\o}ren S{\o}nderby | S{\o}ren Kaae S{\o}nderby and Ole Winther | Protein Secondary Structure Prediction with Long Short Term Memory
Networks | v2: adds larger network with slightly better results, update author
affiliations | null | null | null | q-bio.QM cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction of protein secondary structure from the amino acid sequence is a
classical bioinformatics problem. Common methods use feed forward neural
networks or SVMs combined with a sliding window, as these models do not
naturally handle sequential data. Recurrent neural networks are a
generalization of the feed forward neural network that naturally handles
sequential data. We use a bidirectional recurrent neural network with long
short term memory cells for prediction of secondary structure and evaluate
using the CB513 dataset. On the secondary structure 8-class problem we report
better performance (0.674) than state of the art (0.664). Our model includes
feed forward networks between the long short term memory cells, a path that can
be further explored.
| [
{
"version": "v1",
"created": "Thu, 25 Dec 2014 14:27:42 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jan 2015 19:44:17 GMT"
}
] | 2015-01-06T00:00:00 | [
[
"Sønderby",
"Søren Kaae",
""
],
[
"Winther",
"Ole",
""
]
] | TITLE: Protein Secondary Structure Prediction with Long Short Term Memory
Networks
ABSTRACT: Prediction of protein secondary structure from the amino acid sequence is a
classical bioinformatics problem. Common methods use feed forward neural
networks or SVMs combined with a sliding window, as these models do not
naturally handle sequential data. Recurrent neural networks are a
generalization of the feed forward neural network that naturally handles
sequential data. We use a bidirectional recurrent neural network with long
short term memory cells for prediction of secondary structure and evaluate
using the CB513 dataset. On the secondary structure 8-class problem we report
better performance (0.674) than state of the art (0.664). Our model includes
feed forward networks between the long short term memory cells, a path that can
be further explored.
| no_new_dataset | 0.951051 |
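The abstract above predicts per-residue secondary structure with a bidirectional LSTM. Below is a minimal PyTorch sketch of such a tagger; the layer sizes, 21-dimensional input encoding and 8-class output are assumptions for illustration, and the feed-forward layers between LSTM cells described in the paper are omitted.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal bidirectional LSTM for per-position 8-class prediction."""

    def __init__(self, n_features=21, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)          # h: (batch, seq_len, 2 * hidden)
        return self.out(h)           # per-position class scores
```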
1501.00549 | David Pastor-Escuredo | David Pastor-Escuredo, Thierry Savy and Miguel A. Luengo-Oroz | Can Fires, Night Lights, and Mobile Phones reveal behavioral
fingerprints useful for Development? | Published in D4D Challenge. NetMob, May 1-3, 2013, MIT | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fires, lights at night and mobile phone activity have been separately used as
proxy indicators of human activity with high potential for measuring human
development. In this preliminary report, we develop some tools and
methodologies to identify and visualize relations among remote sensing datasets
containing fires and night lights information with mobile phone activity in
Cote D'Ivoire from December 2011 to April 2012.
| [
{
"version": "v1",
"created": "Sat, 3 Jan 2015 09:28:20 GMT"
}
] | 2015-01-06T00:00:00 | [
[
"Pastor-Escuredo",
"David",
""
],
[
"Savy",
"Thierry",
""
],
[
"Luengo-Oroz",
"Miguel A.",
""
]
] | TITLE: Can Fires, Night Lights, and Mobile Phones reveal behavioral
fingerprints useful for Development?
ABSTRACT: Fires, lights at night and mobile phone activity have been separately used as
proxy indicators of human activity with high potential for measuring human
development. In this preliminary report, we develop some tools and
methodologies to identify and visualize relations among remote sensing datasets
containing fires and night lights information with mobile phone activity in
Cote D'Ivoire from December 2011 to April 2012.
| no_new_dataset | 0.9314 |
1501.00607 | Kwetishe Danjuma | Kwetishe Danjuma and Adenike O. Osofisan | Evaluation of Predictive Data Mining Algorithms in Erythemato-Squamous
Disease Diagnosis | 10 pages, 3 figures 2 tables | IJCSI International Journal of Computer Science Issues, 11(6),
85-94 (2014) | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A lot of time is spent searching for the most performing data mining
algorithms applied in clinical diagnosis. The study set out to identify the
most performing predictive data mining algorithms applied in the diagnosis of
Erythemato-squamous diseases. The study used Naive Bayes, Multilayer Perceptron
and J48 decision tree induction to build predictive data mining models on 366
instances of Erythemato-squamous diseases datasets. Also, 10-fold
cross-validation and sets of performance metrics were used to evaluate the
baseline predictive performance of the classifiers. The comparative analysis
shows that the Naive Bayes performed best with accuracy of 97.4%, Multilayer
Perceptron came out second with accuracy of 96.6%, and J48 came out the worst
with accuracy of 93.5%. The evaluation of these classifiers on clinical
datasets, gave an insight into the predictive ability of different data mining
algorithms applicable in clinical diagnosis especially in the diagnosis of
Erythemato-squamous diseases.
| [
{
"version": "v1",
"created": "Sat, 3 Jan 2015 21:34:35 GMT"
}
] | 2015-01-06T00:00:00 | [
[
"Danjuma",
"Kwetishe",
""
],
[
"Osofisan",
"Adenike O.",
""
]
] | TITLE: Evaluation of Predictive Data Mining Algorithms in Erythemato-Squamous
Disease Diagnosis
ABSTRACT: A lot of time is spent searching for the most performing data mining
algorithms applied in clinical diagnosis. The study set out to identify the
most performing predictive data mining algorithms applied in the diagnosis of
Erythemato-squamous diseases. The study used Naive Bayes, Multilayer Perceptron
and J48 decision tree induction to build predictive data mining models on 366
instances of Erythemato-squamous diseases datasets. Also, 10-fold
cross-validation and sets of performance metrics were used to evaluate the
baseline predictive performance of the classifiers. The comparative analysis
shows that the Naive Bayes performed best with accuracy of 97.4%, Multilayer
Perceptron came out second with accuracy of 96.6%, and J48 came out the worst
with accuracy of 93.5%. The evaluation of these classifiers on clinical
datasets, gave an insight into the predictive ability of different data mining
algorithms applicable in clinical diagnosis especially in the diagnosis of
Erythemato-squamous diseases.
| no_new_dataset | 0.954393 |
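The study above compares Naive Bayes, a multilayer perceptron and J48 under 10-fold cross-validation. The sketch below shows how such a baseline comparison can be run with scikit-learn; the random placeholder data only mimics the 366-instance dataset (assumed here to have 34 attributes and 6 classes), and scikit-learn's CART tree is only a rough analogue of WEKA's J48 (C4.5).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder data; replace with the real feature matrix and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(366, 34))
y = rng.integers(0, 6, size=366)

classifiers = [
    ("Naive Bayes", GaussianNB()),
    ("Multilayer Perceptron", MLPClassifier(max_iter=500)),
    ("Decision tree (J48-like)", DecisionTreeClassifier()),
]
for name, clf in classifiers:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```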
1501.00614 | Mahdi Kalayeh | Mahdi M. Kalayeh, Stephen Mussmann, Alla Petrakova, Niels da Vitoria
Lobo and Mubarak Shah | Understanding Trajectory Behavior: A Motion Pattern Approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining the underlying patterns in gigantic and complex data is of great
importance to data analysts. In this paper, we propose a motion pattern
approach to mine frequent behaviors in trajectory data. Motion patterns,
defined by a set of highly similar flow vector groups in a spatial locality,
have been shown to be very effective in extracting dominant motion behaviors in
video sequences. Inspired by applications and properties of motion patterns, we
have designed a framework that successfully solves the general task of
trajectory clustering. Our proposed algorithm consists of four phases: flow
vector computation, motion component extraction, motion component's
reachability set creation, and motion pattern formation. For the first phase,
we break down trajectories into flow vectors that indicate instantaneous
movements. In the second phase, via a Kmeans clustering approach, we create
motion components by clustering the flow vectors with respect to their location
and velocity. Next, we create motion components' reachability set in terms of
spatial proximity and motion similarity. Finally, for the fourth phase, we
cluster motion components using agglomerative clustering with the weighted
Jaccard distance between the motion components' signatures, a set created using
path reachability. We have evaluated the effectiveness of our proposed method
in an extensive set of experiments on diverse datasets. Further, we have shown
how our proposed method handles difficulties in the general task of trajectory
clustering that challenge the existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 4 Jan 2015 00:07:00 GMT"
}
] | 2015-01-06T00:00:00 | [
[
"Kalayeh",
"Mahdi M.",
""
],
[
"Mussmann",
"Stephen",
""
],
[
"Petrakova",
"Alla",
""
],
[
"Lobo",
"Niels da Vitoria",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Understanding Trajectory Behavior: A Motion Pattern Approach
ABSTRACT: Mining the underlying patterns in gigantic and complex data is of great
importance to data analysts. In this paper, we propose a motion pattern
approach to mine frequent behaviors in trajectory data. Motion patterns,
defined by a set of highly similar flow vector groups in a spatial locality,
have been shown to be very effective in extracting dominant motion behaviors in
video sequences. Inspired by applications and properties of motion patterns, we
have designed a framework that successfully solves the general task of
trajectory clustering. Our proposed algorithm consists of four phases: flow
vector computation, motion component extraction, motion component's
reachability set creation, and motion pattern formation. For the first phase,
we break down trajectories into flow vectors that indicate instantaneous
movements. In the second phase, via a Kmeans clustering approach, we create
motion components by clustering the flow vectors with respect to their location
and velocity. Next, we create motion components' reachability set in terms of
spatial proximity and motion similarity. Finally, for the fourth phase, we
cluster motion components using agglomerative clustering with the weighted
Jaccard distance between the motion components' signatures, a set created using
path reachability. We have evaluated the effectiveness of our proposed method
in an extensive set of experiments on diverse datasets. Further, we have shown
how our proposed method handles difficulties in the general task of trajectory
clustering that challenge the existing state-of-the-art methods.
| no_new_dataset | 0.948442 |
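The first two phases of the pipeline above (flow vector computation and motion component extraction) lend themselves to a compact sketch. The snippet below is a hypothetical simplification: it turns trajectories into (position, velocity) flow vectors and clusters them with k-means, omitting the reachability sets and the agglomerative step with the weighted Jaccard distance.

```python
import numpy as np
from sklearn.cluster import KMeans

def motion_components(trajectories, n_components=20):
    """Cluster flow vectors (x, y, vx, vy) extracted from 2-D trajectories."""
    flows = []
    for traj in trajectories:                 # traj: (T, 2) array of points
        v = np.diff(traj, axis=0)             # instantaneous displacement
        flows.append(np.hstack([traj[:-1], v]))
    flows = np.vstack(flows)
    km = KMeans(n_clusters=n_components, n_init=10).fit(flows)
    return km.labels_, km.cluster_centers_
```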
1501.00825 | Jianfeng Wang | Jianfeng Wang, Shuicheng Yan, Yi Yang, Mohan S Kankanhalli, Shipeng
Li, Jingdong Wang | Group $K$-Means | The developed algorithm is similar with "Christopher F. Barnes, A new
multiple path search technique for residual vector quantizers, 1994", but we
conduct the research independently and apply it in data/feature compression
and image retrieval | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study how to learn multiple dictionaries from a dataset, and approximate
any data point by the sum of the codewords each chosen from the corresponding
dictionary. Although theoretically low approximation errors can be achieved by
the global solution, an effective solution has not been well studied in
practice. To solve the problem, we propose a simple yet effective algorithm
\textit{Group $K$-Means}. Specifically, we take each dictionary, or any two
selected dictionaries, as a group of $K$-means cluster centers, and then deal
with the approximation issue by minimizing the approximation errors. Besides,
we propose a hierarchical initialization for such a non-convex problem.
Experimental results well validate the effectiveness of the approach.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2015 11:43:26 GMT"
}
] | 2015-01-06T00:00:00 | [
[
"Wang",
"Jianfeng",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Yang",
"Yi",
""
],
[
"Kankanhalli",
"Mohan S",
""
],
[
"Li",
"Shipeng",
""
],
[
"Wang",
"Jingdong",
""
]
] | TITLE: Group $K$-Means
ABSTRACT: We study how to learn multiple dictionaries from a dataset, and approximate
any data point by the sum of the codewords each chosen from the corresponding
dictionary. Although theoretically low approximation errors can be achieved by
the global solution, an effective solution has not been well studied in
practice. To solve the problem, we propose a simple yet effective algorithm
\textit{Group $K$-Means}. Specifically, we take each dictionary, or any two
selected dictionaries, as a group of $K$-means cluster centers, and then deal
with the approximation issue by minimizing the approximation errors. Besides,
we propose a hierarchical initialization for such a non-convex problem.
Experimental results well validate the effectiveness of the approach.
| no_new_dataset | 0.941439 |
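Group K-Means approximates each point by a sum of codewords, one from each dictionary. The sketch below shows a related but simpler residual-quantization scheme (each codebook is fit to the residual left by the previous ones); it is not the paper's joint optimization or hierarchical initialization, and the names and sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def residual_codebooks(X, n_dicts=4, n_words=256):
    """Learn codebooks so each point is approximated by a sum of codewords."""
    residual = X.astype(float).copy()
    codebooks = []
    for _ in range(n_dicts):
        km = KMeans(n_clusters=n_words, n_init=4).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual -= km.cluster_centers_[km.labels_]   # quantization residual
    return codebooks
```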
1412.8412 | Mohammed Tuhin | Mohammad Alaggan, S\'ebastien Gambs, Stan Matwin, Eriko Souza, and
Mohammed Tuhin | Sanitization of Call Detail Records via Differentially-private Summaries | Withdrawn due to some possible agreement issues | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we initiate the study of human mobility from sanitized call
detail records (CDRs). Such data can be extremely valuable to solve important
societal issues such as the improvement of urban transportation or the
understanding of the spread of diseases. One of the fundamental building blocks
for such a study is the computation of mobility patterns summarizing how
individuals move during a given period from one area (e.g., cellular tower or
administrative district) to another. However, such knowledge cannot be
published directly, as it has been demonstrated that access to this type of
data enables the (re-)identification of individuals. To address this issue and to
foster the development of such applications in a privacy-preserving manner, we
propose in this paper a novel approach in which CDRs are summarized under the
form of a differentially-private Bloom filter for the purpose of privately
counting the number of mobile service users moving from one area (region) to
another in a given time frame. Our sanitization method is both time and space
efficient, and ensures differential privacy while solving the shortcomings of a
solution recently proposed to this problem. We also report on experiments
conducted with the proposed solution using a real life CDRs dataset. The
results obtained show that our method achieves - in most cases - a performance
similar to another method (linear counting sketch) that does not provide any
privacy guarantees. Thus, we conclude that our method maintains a high utility
while providing strong privacy guarantees.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2014 18:28:12 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Dec 2014 15:22:04 GMT"
}
] | 2015-01-05T00:00:00 | [
[
"Alaggan",
"Mohammad",
""
],
[
"Gambs",
"Sébastien",
""
],
[
"Matwin",
"Stan",
""
],
[
"Souza",
"Eriko",
""
],
[
"Tuhin",
"Mohammed",
""
]
] | TITLE: Sanitization of Call Detail Records via Differentially-private Summaries
ABSTRACT: In this work, we initiate the study of human mobility from sanitized call
detail records (CDRs). Such data can be extremely valuable to solve important
societal issues such as the improvement of urban transportation or the
understanding of the spread of diseases. One of the fundamental building blocks
for such a study is the computation of mobility patterns summarizing how
individuals move during a given period from one area (e.g., cellular tower or
administrative district) to another. However, such knowledge cannot be
published directly, as it has been demonstrated that access to this type of
data enables the (re-)identification of individuals. To address this issue and to
foster the development of such applications in a privacy-preserving manner, we
propose in this paper a novel approach in which CDRs are summarized under the
form of a differentially-private Bloom filter for the purpose of privately
counting the number of mobile service users moving from one area (region) to
another in a given time frame. Our sanitization method is both time and space
efficient, and ensures differential privacy while solving the shortcomings of a
solution recently proposed to this problem. We also report on experiments
conducted with the proposed solution using a real life CDRs dataset. The
results obtained show that our method achieves - in most cases - a performance
similar to another method (linear counting sketch) that does not provide any
privacy guarantees. Thus, we conclude that our method maintains a high utility
while providing strong privacy guarantees.
| no_new_dataset | 0.926968 |
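The mechanism above summarizes CDR transitions in a differentially private Bloom filter. Below is a loose sketch of the two core ingredients: hashing items into a Bloom filter and then flipping each bit with some probability (randomized response). The flip probability, privacy accounting, and everything else specific to the paper's mechanism are deliberately left out; this is not the authors' construction.

```python
import hashlib
import numpy as np

def bloom_insert(bits, item, k=4):
    """Set k hash positions for `item` in the integer bit array `bits`."""
    m = len(bits)
    for i in range(k):
        h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        bits[int(h, 16) % m] = 1
    return bits

def randomize_bits(bits, flip_prob):
    """Flip each bit independently with probability `flip_prob`."""
    flips = np.random.rand(len(bits)) < flip_prob
    return np.where(flips, 1 - bits, bits)

# Example: summarize area-to-area transitions, then perturb the summary.
filt = np.zeros(1024, dtype=int)
for transition in [("A", "B"), ("B", "C")]:
    bloom_insert(filt, transition)
noisy = randomize_bits(filt, flip_prob=0.25)
```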
1501.00255 | Florin Rusu | Chengjie Qin and Florin Rusu | Speculative Approximations for Terascale Analytics | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model calibration is a major challenge faced by the plethora of statistical
analytics packages that are increasingly used in Big Data applications.
Identifying the optimal model parameters is a time-consuming process that has
to be executed from scratch for every dataset/model combination even by
experienced data scientists. We argue that the incapacity to evaluate multiple
parameter configurations simultaneously and the lack of support to quickly
identify sub-optimal configurations are the principal causes. In this paper, we
develop two database-inspired techniques for efficient model calibration.
Speculative parameter testing applies advanced parallel multi-query processing
methods to evaluate several configurations concurrently. The number of
configurations is determined adaptively at runtime, while the configurations
themselves are extracted from a distribution that is continuously learned
following a Bayesian process. Online aggregation is applied to identify
sub-optimal configurations early in the processing by incrementally sampling
the training dataset and estimating the objective function corresponding to
each configuration. We design concurrent online aggregation estimators and
define halting conditions to accurately and timely stop the execution. We apply
the proposed techniques to distributed gradient descent optimization -- batch
and incremental -- for support vector machines and logistic regression models.
We implement the resulting solutions in GLADE PF-OLA -- a state-of-the-art Big
Data analytics system -- and evaluate their performance over terascale-size
synthetic and real datasets. The results confirm that as many as 32
configurations can be evaluated concurrently almost as fast as one, while
sub-optimal configurations are detected accurately in as little as a
$1/20^{\text{th}}$ fraction of the time.
| [
{
"version": "v1",
"created": "Thu, 1 Jan 2015 07:07:44 GMT"
}
] | 2015-01-05T00:00:00 | [
[
"Qin",
"Chengjie",
""
],
[
"Rusu",
"Florin",
""
]
] | TITLE: Speculative Approximations for Terascale Analytics
ABSTRACT: Model calibration is a major challenge faced by the plethora of statistical
analytics packages that are increasingly used in Big Data applications.
Identifying the optimal model parameters is a time-consuming process that has
to be executed from scratch for every dataset/model combination even by
experienced data scientists. We argue that the incapacity to evaluate multiple
parameter configurations simultaneously and the lack of support to quickly
identify sub-optimal configurations are the principal causes. In this paper, we
develop two database-inspired techniques for efficient model calibration.
Speculative parameter testing applies advanced parallel multi-query processing
methods to evaluate several configurations concurrently. The number of
configurations is determined adaptively at runtime, while the configurations
themselves are extracted from a distribution that is continuously learned
following a Bayesian process. Online aggregation is applied to identify
sub-optimal configurations early in the processing by incrementally sampling
the training dataset and estimating the objective function corresponding to
each configuration. We design concurrent online aggregation estimators and
define halting conditions to accurately and timely stop the execution. We apply
the proposed techniques to distributed gradient descent optimization -- batch
and incremental -- for support vector machines and logistic regression models.
We implement the resulting solutions in GLADE PF-OLA -- a state-of-the-art Big
Data analytics system -- and evaluate their performance over terascale-size
synthetic and real datasets. The results confirm that as many as 32
configurations can be evaluated concurrently almost as fast as one, while
sub-optimal configurations are detected accurately in as little as a
$1/20^{\text{th}}$ fraction of the time.
| no_new_dataset | 0.947769 |
1412.7242 | Chengyao Shen | Chengyao Shen, Xun Huang and Qi Zhao | Learning of Proto-object Representations via Fixations on Low Resolution | This paper has been withdrawn by the author due to incompletion of
the submission | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While previous researches in eye fixation prediction typically rely on
integrating low-level features (e.g. color, edge) to form a saliency map,
recently it has been found that the structural organization of these features
into a proto-object representation can play a more significant role. In this
work, we present a computational framework based on deep network to demonstrate
that proto-object representations can be learned from low-resolution image
patches from fixation regions. We advocate the use of low-resolution inputs in
this work due to the following reasons: (1) Proto-objects are computed in
parallel over an entire visual field (2) People can perceive or recognize
objects well even it is in low resolution. (3) Fixations from lower resolution
images can predict fixations on higher resolution images. In the proposed
computational model, we extract multi-scale image patches on fixation regions
from eye fixation datasets, resize them to low resolution and feed them into a
hierarchical network. With layer-wise unsupervised feature learning, we find that many
proto-object-like features responsive to different shapes of object blobs are
learned. Visualizations also show that these features are selective to
potential objects in the scene and the responses of these features work well in
predicting eye fixations on the images when combined with learned weights.
| [
{
"version": "v1",
"created": "Tue, 23 Dec 2014 03:14:21 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Dec 2014 08:29:00 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Shen",
"Chengyao",
""
],
[
"Huang",
"Xun",
""
],
[
"Zhao",
"Qi",
""
]
] | TITLE: Learning of Proto-object Representations via Fixations on Low Resolution
ABSTRACT: While previous researches in eye fixation prediction typically rely on
integrating low-level features (e.g. color, edge) to form a saliency map,
recently it has been found that the structural organization of these features
into a proto-object representation can play a more significant role. In this
work, we present a computational framework based on deep network to demonstrate
that proto-object representations can be learned from low-resolution image
patches from fixation regions. We advocate the use of low-resolution inputs in
this work due to the following reasons: (1) Proto-objects are computed in
parallel over an entire visual field (2) People can perceive or recognize
objects well even it is in low resolution. (3) Fixations from lower resolution
images can predict fixations on higher resolution images. In the proposed
computational model, we extract multi-scale image patches on fixation regions
from eye fixation datasets, resize them to low resolution and feed them into a
hierarchical network. With layer-wise unsupervised feature learning, we find that many
proto-object-like features responsive to different shapes of object blobs are
learned. Visualizations also show that these features are selective to
potential objects in the scene and the responses of these features work well in
predicting eye fixations on the images when combined with learned weights.
| no_new_dataset | 0.951908 |
1412.7782 | Roshan Ragel | MAC Jiffriya, MAC Akmal Jahan, and Roshan G. Ragel | Plagiarism Detection on Electronic Text based Assignments using Vector
Space Model (ICIAfS14) | appears in The 7th International Conference on Information and
Automation for Sustainability (ICIAfS) 2014 | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plagiarism is known as illegal use of others' part of work or whole work as
one's own in any field such as art, poetry, literature, cinema, research and
other creative forms of study. Plagiarism is one of the important issues in
academic and research fields and giving more concern in academic systems. The
situation is even worse with the availability of ample resources on the web.
This paper focuses on an effective plagiarism detection tool on identifying
suitable intra-corpal plagiarism detection for text based assignments by
comparing unigram, bigram, trigram of vector space model with cosine similarity
measure. A manually evaluated, labelled dataset was tested using unigram, bigram
and trigram vectors. Even though the trigram vector consumes comparatively more
time, it shows better results on the labelled data. In addition, the selected
trigram vector space model with cosine similarity measure is compared with a
tri-gram sequence matching technique using the Jaccard measure. In the results,
the cosine similarity score shows slightly higher values than the other, because
it gives more weight to terms that do not frequently occur in the
dataset; hence the cosine similarity measure with the trigram technique is
preferable. Therefore, we present our new tool, and it could be
used as an effective tool to evaluate text based electronic assignments and
minimize the plagiarism among students.
| [
{
"version": "v1",
"created": "Thu, 25 Dec 2014 03:54:01 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Jiffriya",
"MAC",
""
],
[
"Jahan",
"MAC Akmal",
""
],
[
"Ragel",
"Roshan G.",
""
]
] | TITLE: Plagiarism Detection on Electronic Text based Assignments using Vector
Space Model (ICIAfS14)
ABSTRACT: Plagiarism is known as illegal use of others' part of work or whole work as
one's own in any field such as art, poetry, literature, cinema, research and
other creative forms of study. Plagiarism is one of the important issues in
academic and research fields and giving more concern in academic systems. The
situation is even worse with the availability of ample resources on the web.
This paper focuses on an effective plagiarism detection tool on identifying
suitable intra-corpal plagiarism detection for text based assignments by
comparing unigram, bigram, trigram of vector space model with cosine similarity
measure. A manually evaluated, labelled dataset was tested using unigram, bigram
and trigram vectors. Even though the trigram vector consumes comparatively more
time, it shows better results on the labelled data. In addition, the selected
trigram vector space model with cosine similarity measure is compared with a
tri-gram sequence matching technique using the Jaccard measure. In the results,
the cosine similarity score shows slightly higher values than the other, because
it gives more weight to terms that do not frequently occur in the
dataset; hence the cosine similarity measure with the trigram technique is
preferable. Therefore, we present our new tool, and it could be
used as an effective tool to evaluate text based electronic assignments and
minimize the plagiarism among students.
| no_new_dataset | 0.94801 |
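The comparison above favors a trigram vector space model with cosine similarity. The snippet below sketches that representation with scikit-learn; using TF-IDF weighting to down-weight frequent terms is an assumption made here to mirror the abstract's remark about rare terms, and the snippet is not the authors' tool.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def trigram_similarity_matrix(assignments):
    """Pairwise cosine similarity of documents in a word-trigram vector space."""
    vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(3, 3))
    vectors = vectorizer.fit_transform(assignments)
    return cosine_similarity(vectors)      # (n_docs, n_docs) matrix

# Pairs whose similarity exceeds a chosen threshold are flagged for review.
sims = trigram_similarity_matrix(["the quick brown fox jumps over the lazy dog",
                                  "the quick brown fox leaps over the lazy dog"])
```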
1412.7851 | Odemir Bruno PhD | Jo\~ao Batista Florindo and Odemir Martinez Bruno | Fractal descriptors based on the probability dimension: a texture
analysis and classification approach | 7 pages, 6 figures. arXiv admin note: text overlap with
arXiv:1205.2821 | Pattern Recognition Letters, Volume 42, Pages 107-114, 2014 | 10.1016/j.patrec.2014.01.009 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel technique for obtaining descriptors of
gray-level texture images. The descriptors are provided by applying a
multiscale transform to the fractal dimension of the image estimated through
the probability (Voss) method. The effectiveness of the descriptors is verified
in a classification task over benchmark texture datasets. The results
obtained demonstrate the efficiency of the proposed method as a tool for the
description and discrimination of texture images.
| [
{
"version": "v1",
"created": "Thu, 25 Dec 2014 18:50:31 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Florindo",
"João Batista",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Fractal descriptors based on the probability dimension: a texture
analysis and classification approach
ABSTRACT: In this work, we propose a novel technique for obtaining descriptors of
gray-level texture images. The descriptors are provided by applying a
multiscale transform to the fractal dimension of the image estimated through
the probability (Voss) method. The effectiveness of the descriptors is verified
in a classification task over benchmark texture datasets. The results
obtained demonstrate the efficiency of the proposed method as a tool for the
description and discrimination of texture images.
| no_new_dataset | 0.952794 |
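The descriptors above derive from a fractal dimension estimate (the probability/Voss method) followed by a multiscale transform. For orientation only, the snippet below estimates a fractal dimension with plain box counting, which is a different and much simpler estimator than the probability method used in the paper.

```python
import numpy as np

def box_counting_dimension(binary_image, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a non-empty binary image by box counting."""
    counts = []
    for s in sizes:
        h = (binary_image.shape[0] // s) * s
        w = (binary_image.shape[1] // s) * s
        blocks = binary_image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    # Slope of log N(s) against log(1/s) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```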
1412.7963 | Odemir Bruno PhD | Jo\~ao B. Florindo, Odemir M. Bruno | Texture analysis by multi-resolution fractal descriptors | 8 pages, 6 figures | Expert Systems with Applications, Volume 40, Issue 10, Pages
4022-4028, 2013 | 10.1016/j.eswa.2013.01.007 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes a texture descriptor based on fractal theory. The method
is based on the Bouligand-Minkowski descriptors. We decompose the original
image recursively into 4 equal parts. In each recursion step, we estimate the
average and the deviation of the Bouligand-Minkowski descriptors computed over
each part. Thus, we extract entropy features from both average and deviation.
The proposed descriptors are provided by the concatenation of such measures.
The method is tested in a classification experiment under well known datasets,
that is, Brodatz and Vistex. The results demonstrate that the proposed
technique achieves better results than classical and state-of-the-art texture
descriptors, such as Gabor-wavelets and co-occurrence matrix.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 17:45:41 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Florindo",
"João B.",
""
],
[
"Bruno",
"Odemir M.",
""
]
] | TITLE: Texture analysis by multi-resolution fractal descriptors
ABSTRACT: This work proposes a texture descriptor based on fractal theory. The method
is based on the Bouligand-Minkowski descriptors. We decompose the original
image recursively into 4 equal parts. In each recursion step, we estimate the
average and the deviation of the Bouligand-Minkowski descriptors computed over
each part. Thus, we extract entropy features from both average and deviation.
The proposed descriptors are provided by the concatenation of such measures.
The method is tested in a classification experiment under well known datasets,
that is, Brodatz and Vistex. The results demonstrate that the proposed
technique achieves better results than classical and state-of-the-art texture
descriptors, such as Gabor-wavelets and co-occurrence matrix.
| no_new_dataset | 0.947284 |
1412.7990 | Ernesto Diaz-Aviles | Ernesto Diaz-Aviles, Hoang Thanh Lam, Fabio Pinelli, Stefano Braghin,
Yiannis Gkoufas, Michele Berlingerio, and Francesco Calabrese | Predicting User Engagement in Twitter with Collaborative Ranking | RecSysChallenge'14 at RecSys 2014, October 10, 2014, Foster City, CA,
USA | In Proceedings of the 2014 Recommender Systems Challenge
(RecSysChallenge'14). ACM, New York, NY, USA, , Pages 41 , 6 pages | 10.1145/2668067.2668072 | null | cs.IR cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative Filtering (CF) is a core component of popular web-based
services such as Amazon, YouTube, Netflix, and Twitter. Most applications use
CF to recommend a small set of items to the user. For instance, YouTube
presents to a user a list of top-n videos she would likely watch next based on
her rating and viewing history. Current methods of CF evaluation have been
focused on assessing the quality of a predicted rating or the ranking
performance for top-n recommended items. However, restricting the recommender
system evaluation to these two aspects is rather limiting and neglects other
dimensions that could better characterize a well-perceived recommendation. In
this paper, instead of optimizing rating or top-n recommendation, we focus on
the task of predicting which items generate the highest user engagement. In
particular, we use Twitter as our testbed and cast the problem as a
Collaborative Ranking task where the rich features extracted from the metadata
of the tweets help to complement the transaction information limited to user
ids, item ids, ratings and timestamps. We learn a scoring function that
directly optimizes the user engagement in terms of nDCG@10 on the predicted
ranking. Experiments conducted on an extended version of the MovieTweetings
dataset, released as part of the RecSys Challenge 2014, show the effectiveness
of our approach.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 21:00:14 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Diaz-Aviles",
"Ernesto",
""
],
[
"Lam",
"Hoang Thanh",
""
],
[
"Pinelli",
"Fabio",
""
],
[
"Braghin",
"Stefano",
""
],
[
"Gkoufas",
"Yiannis",
""
],
[
"Berlingerio",
"Michele",
""
],
[
"Calabrese",
"Francesco",
""
]
] | TITLE: Predicting User Engagement in Twitter with Collaborative Ranking
ABSTRACT: Collaborative Filtering (CF) is a core component of popular web-based
services such as Amazon, YouTube, Netflix, and Twitter. Most applications use
CF to recommend a small set of items to the user. For instance, YouTube
presents to a user a list of top-n videos she would likely watch next based on
her rating and viewing history. Current methods of CF evaluation have been
focused on assessing the quality of a predicted rating or the ranking
performance for top-n recommended items. However, restricting the recommender
system evaluation to these two aspects is rather limiting and neglects other
dimensions that could better characterize a well-perceived recommendation. In
this paper, instead of optimizing rating or top-n recommendation, we focus on
the task of predicting which items generate the highest user engagement. In
particular, we use Twitter as our testbed and cast the problem as a
Collaborative Ranking task where the rich features extracted from the metadata
of the tweets help to complement the transaction information limited to user
ids, item ids, ratings and timestamps. We learn a scoring function that
directly optimizes the user engagement in terms of nDCG@10 on the predicted
ranking. Experiments conducted on an extended version of the MovieTweetings
dataset, released as part of the RecSys Challenge 2014, show the effectiveness
of our approach.
| no_new_dataset | 0.949482 |
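Since the scoring function above is optimized for nDCG@10 on predicted rankings, a small reference implementation of that metric may help; this is the standard definition, written here as an illustration rather than taken from the paper's code.

```python
import numpy as np

def ndcg_at_k(relevances, k=10):
    """nDCG@k for one ranked list; `relevances` are engagement scores
    given in predicted-rank order."""
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    top = rel[:k]
    dcg = float((top * discounts[:top.size]).sum())
    ideal = np.sort(rel)[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0
```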
1412.8099 | Rathipriya R | R. Rathipriya, K. Thangavel | Extraction of Web Usage Profiles using Simulated Annealing Based
Biclustering Approach | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the Simulated Annealing (SA) based biclustering approach is
proposed in which SA is used as an optimization tool for biclustering of web
usage data to identify the optimal user profile from the given web usage data.
Extracted biclusters are consists of correlated users whose usage behaviors are
similar across the subset of web pages of a web site where as these users are
uncorrelated for remaining pages of a web site. These results are very useful
in web personalization so that it communicates better with its users and for
making customized prediction. Also useful for providing customized web service
too. Experiment was conducted on the real web usage dataset called CTI dataset.
Results show that proposed SA based biclustering approach can extract highly
correlated user groups from the preprocessed web usage data.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 10:06:25 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Rathipriya",
"R.",
""
],
[
"Thangavel",
"K.",
""
]
] | TITLE: Extraction of Web Usage Profiles using Simulated Annealing Based
Biclustering Approach
ABSTRACT: In this paper, the Simulated Annealing (SA) based biclustering approach is
proposed in which SA is used as an optimization tool for biclustering of web
usage data to identify the optimal user profile from the given web usage data.
Extracted biclusters consist of correlated users whose usage behaviors are
similar across a subset of web pages of a web site, whereas these users are
uncorrelated for the remaining pages. These results are very useful
in web personalization, helping a web site communicate better with its users and
make customized predictions, and for providing customized web services.
The experiment was conducted on a real web usage dataset called the CTI dataset.
Results show that the proposed SA-based biclustering approach can extract highly
correlated user groups from the preprocessed web usage data.
| no_new_dataset | 0.939969 |
1412.8118 | Lanbo Zhang | Lanbo Zhang and Yi Zhang | Hierarchical Bayesian Models with Factorization for Content-Based
Recommendation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing content-based filtering approaches learn user profiles
independently without capturing the similarity among users. Bayesian
hierarchical models \cite{Zhang:Efficient} learn user profiles jointly and have
the advantage of being able to borrow discriminative information from other
users through a Bayesian prior. However, the standard Bayesian hierarchical
models assume all user profiles are generated from the same prior. Considering
the diversity of user interests, this assumption could be improved by
introducing more flexibility. Besides, most existing content-based filtering
approaches implicitly assume that each user profile corresponds to exactly one
user interest and fail to capture a user's multiple interests (information
needs).
In this paper, we present a flexible Bayesian hierarchical modeling approach
to model both commonality and diversity among users as well as individual
users' multiple interests. We propose two models each with different
assumptions, and the proposed models are called Discriminative Factored Prior
Models (DFPM). In our models, each user profile is modeled as a discriminative
classifier with a factored model as its prior, and different factors contribute
in different levels to each user profile. Compared with existing content-based
filtering models, DFPM are interesting because they can 1) borrow
discriminative criteria of other users while learning a particular user profile
through the factored prior; 2) trade off well between diversity and commonality
among users; and 3) handle the challenging classification situation where each
class contains multiple concepts. The experimental results on a dataset
collected from real users on digg.com show that our models significantly
outperform the baseline models of L-2 regularized logistic regression and
traditional Bayesian hierarchical model with logistic regression.
| [
{
"version": "v1",
"created": "Sun, 28 Dec 2014 06:07:48 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Zhang",
"Lanbo",
""
],
[
"Zhang",
"Yi",
""
]
] | TITLE: Hierarchical Bayesian Models with Factorization for Content-Based
Recommendation
ABSTRACT: Most existing content-based filtering approaches learn user profiles
independently without capturing the similarity among users. Bayesian
hierarchical models \cite{Zhang:Efficient} learn user profiles jointly and have
the advantage of being able to borrow discriminative information from other
users through a Bayesian prior. However, the standard Bayesian hierarchical
models assume all user profiles are generated from the same prior. Considering
the diversity of user interests, this assumption could be improved by
introducing more flexibility. Besides, most existing content-based filtering
approaches implicitly assume that each user profile corresponds to exactly one
user interest and fail to capture a user's multiple interests (information
needs).
In this paper, we present a flexible Bayesian hierarchical modeling approach
to model both commonality and diversity among users as well as individual
users' multiple interests. We propose two models each with different
assumptions, and the proposed models are called Discriminative Factored Prior
Models (DFPM). In our models, each user profile is modeled as a discriminative
classifier with a factored model as its prior, and different factors contribute
in different levels to each user profile. Compared with existing content-based
filtering models, DFPM are interesting because they can 1) borrow
discriminative criteria of other users while learning a particular user profile
through the factored prior; 2) trade off well between diversity and commonality
among users; and 3) handle the challenging classification situation where each
class contains multiple concepts. The experimental results on a dataset
collected from real users on digg.com show that our models significantly
outperform the baseline models of L-2 regularized logistic regression and
traditional Bayesian hierarchical model with logistic regression.
| no_new_dataset | 0.950915 |
1412.8120 | Amlan Kusum | Amlan Kusum, Iulian Neamtiu and Rajiv Gupta | Adapting Graph Application Performance via Alternate Data Structure
Representation | Part of ADAPT Workshop proceedings, 2015 (arXiv:1412.2347) | null | null | ADAPT/2015/03 | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph processing is used extensively in areas from social networking mining
to web indexing. We demonstrate that the performance and dependability of such
applications critically hinges on the graph data structure used, because a
fixed, compile-time choice of data structure can lead to poor performance or
applications unable to complete. To address this problem, we introduce an
approach that helps programmers transform regular, off-the-shelf graph
applications into adaptive, more dependable applications where adaptations are
performed via runtime selection from alternate data structure representations.
Using our approach, applications dynamically adapt to the input graph's
characteristics and changes in available memory so they continue to run when
faced with adverse conditions such as low memory. Experiments with graph
algorithms on real-world (e.g., Wikipedia metadata, Gnutella topology) and
synthetic graph datasets show that our adaptive applications run to completion
with lower execution time and/or memory utilization in comparison to their
non-adaptive versions.
| [
{
"version": "v1",
"created": "Sun, 28 Dec 2014 06:49:23 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Kusum",
"Amlan",
""
],
[
"Neamtiu",
"Iulian",
""
],
[
"Gupta",
"Rajiv",
""
]
] | TITLE: Adapting Graph Application Performance via Alternate Data Structure
Representation
ABSTRACT: Graph processing is used extensively in areas from social networking mining
to web indexing. We demonstrate that the performance and dependability of such
applications critically hinges on the graph data structure used, because a
fixed, compile-time choice of data structure can lead to poor performance or
applications unable to complete. To address this problem, we introduce an
approach that helps programmers transform regular, off-the-shelf graph
applications into adaptive, more dependable applications where adaptations are
performed via runtime selection from alternate data structure representations.
Using our approach, applications dynamically adapt to the input graph's
characteristics and changes in available memory so they continue to run when
faced with adverse conditions such as low memory. Experiments with graph
algorithms on real-world (e.g., Wikipedia metadata, Gnutella topology) and
synthetic graph datasets show that our adaptive applications run to completion
with lower execution time and/or memory utilization in comparison to their
non-adaptive versions.
| no_new_dataset | 0.947137 |
1412.8341 | Pavel H\'ala | Pavel H\'ala | Spectral classification using convolutional neural networks | 71 pages, 50 figures, Master's thesis, Masaryk University | null | null | null | cs.CV astro-ph.IM cs.NE | http://creativecommons.org/licenses/by/3.0/ | There is a great need for accurate and autonomous spectral classification
methods in astrophysics. This thesis is about training a convolutional neural
network (ConvNet) to recognize an object class (quasar, star or galaxy) from
one-dimension spectra only. Author developed several scripts and C programs for
datasets preparation, preprocessing and postprocessing of the data. EBLearn
library (developed by Pierre Sermanet and Yann LeCun) was used to create
ConvNets. Application on dataset of more than 60000 spectra yielded success
rate of nearly 95%. This thesis conclusively proved great potential of
convolutional neural networks and deep learning methods in astrophysics.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2014 13:47:06 GMT"
}
] | 2014-12-30T00:00:00 | [
[
"Hála",
"Pavel",
""
]
] | TITLE: Spectral classification using convolutional neural networks
ABSTRACT: There is a great need for accurate and autonomous spectral classification
methods in astrophysics. This thesis is about training a convolutional neural
network (ConvNet) to recognize an object class (quasar, star or galaxy) from
one-dimensional spectra only. The author developed several scripts and C programs for
dataset preparation, preprocessing and postprocessing of the data. The EBLearn
library (developed by Pierre Sermanet and Yann LeCun) was used to create
ConvNets. Application to a dataset of more than 60000 spectra yielded a success
rate of nearly 95%. This thesis conclusively demonstrated the great potential of
convolutional neural networks and deep learning methods in astrophysics.
| no_new_dataset | 0.952662 |
1412.7584 | Zhanglong Ji | Zhanglong Ji, Zachary C. Lipton, Charles Elkan | Differential Privacy and Machine Learning: a Survey and Review | null | null | null | null | cs.LG cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of machine learning is to extract useful information from data,
while privacy is preserved by concealing information. Thus it seems hard to
reconcile these competing interests. However, they frequently must be balanced
when mining sensitive data. For example, medical research represents an
important application where it is necessary both to extract useful information
and protect patient privacy. One way to resolve the conflict is to extract
general characteristics of whole populations without disclosing the private
information of individuals.
In this paper, we consider differential privacy, one of the most popular and
powerful definitions of privacy. We explore the interplay between machine
learning and differential privacy, namely privacy-preserving machine learning
algorithms and learning-based data release mechanisms. We also describe some
theoretical results that address what can be learned differentially privately
and upper bounds of loss functions for differentially private algorithms.
Finally, we present some open questions, including how to incorporate public
data, how to deal with missing data in private datasets, and whether, as the
number of observed samples grows arbitrarily large, differentially private
machine learning algorithms can be achieved at no cost to utility as compared
to corresponding non-differentially private algorithms.
| [
{
"version": "v1",
"created": "Wed, 24 Dec 2014 01:51:06 GMT"
}
] | 2014-12-25T00:00:00 | [
[
"Ji",
"Zhanglong",
""
],
[
"Lipton",
"Zachary C.",
""
],
[
"Elkan",
"Charles",
""
]
] | TITLE: Differential Privacy and Machine Learning: a Survey and Review
ABSTRACT: The objective of machine learning is to extract useful information from data,
while privacy is preserved by concealing information. Thus it seems hard to
reconcile these competing interests. However, they frequently must be balanced
when mining sensitive data. For example, medical research represents an
important application where it is necessary both to extract useful information
and protect patient privacy. One way to resolve the conflict is to extract
general characteristics of whole populations without disclosing the private
information of individuals.
In this paper, we consider differential privacy, one of the most popular and
powerful definitions of privacy. We explore the interplay between machine
learning and differential privacy, namely privacy-preserving machine learning
algorithms and learning-based data release mechanisms. We also describe some
theoretical results that address what can be learned differentially privately
and upper bounds of loss functions for differentially private algorithms.
Finally, we present some open questions, including how to incorporate public
data, how to deal with missing data in private datasets, and whether, as the
number of observed samples grows arbitrarily large, differentially private
machine learning algorithms can be achieved at no cost to utility as compared
to corresponding non-differentially private algorithms.
| no_new_dataset | 0.941761 |
1412.6821 | Roland Kwitt | Jan Reininghaus, Stefan Huber, Ulrich Bauer, Roland Kwitt | A Stable Multi-Scale Kernel for Topological Machine Learning | null | null | null | null | stat.ML cs.CV cs.LG math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topological data analysis offers a rich source of valuable information to
study vision problems. Yet, so far we lack a theoretically sound connection to
popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In
this work, we establish such a connection by designing a multi-scale kernel for
persistence diagrams, a stable summary representation of topological features
in data. We show that this kernel is positive definite and prove its stability
with respect to the 1-Wasserstein distance. Experiments on two benchmark
datasets for 3D shape classification/retrieval and texture recognition show
considerable performance gains of the proposed method compared to an
alternative approach that is based on the recently introduced persistence
landscapes.
| [
{
"version": "v1",
"created": "Sun, 21 Dec 2014 19:17:08 GMT"
}
] | 2014-12-24T00:00:00 | [
[
"Reininghaus",
"Jan",
""
],
[
"Huber",
"Stefan",
""
],
[
"Bauer",
"Ulrich",
""
],
[
"Kwitt",
"Roland",
""
]
] | TITLE: A Stable Multi-Scale Kernel for Topological Machine Learning
ABSTRACT: Topological data analysis offers a rich source of valuable information to
study vision problems. Yet, so far we lack a theoretically sound connection to
popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In
this work, we establish such a connection by designing a multi-scale kernel for
persistence diagrams, a stable summary representation of topological features
in data. We show that this kernel is positive definite and prove its stability
with respect to the 1-Wasserstein distance. Experiments on two benchmark
datasets for 3D shape classification/retrieval and texture recognition show
considerable performance gains of the proposed method compared to an
alternative approach that is based on the recently introduced persistence
landscapes.
| no_new_dataset | 0.945349 |
1209.3686 | Barzan Mozafari | Barzan Mozafari, Purnamrita Sarkar, Michael J. Franklin, Michael I.
Jordan, Samuel Madden | Active Learning for Crowd-Sourced Databases | A shorter version of this manuscript has been published in
Proceedings of Very Large Data Bases 2015, entitled "Scaling Up
Crowd-Sourcing to Very Large Datasets: A Case for Active Learning" | null | null | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowd-sourcing has become a popular means of acquiring labeled data for a
wide variety of tasks where humans are more accurate than computers, e.g.,
labeling images, matching objects, or analyzing sentiment. However, relying
solely on the crowd is often impractical even for data sets with thousands of
items, due to time and cost constraints of acquiring human input (which cost
pennies and minutes per label). In this paper, we propose algorithms for
integrating machine learning into crowd-sourced databases, with the goal of
allowing crowd-sourcing applications to scale, i.e., to handle larger datasets
at lower costs. The key observation is that, in many of the above tasks, humans
and machine learning algorithms can be complementary, as humans are often more
accurate but slow and expensive, while algorithms are usually less accurate,
but faster and cheaper.
Based on this observation, we present two new active learning algorithms to
combine humans and algorithms together in a crowd-sourced database. Our
algorithms are based on the theory of non-parametric bootstrap, which makes our
results applicable to a broad class of machine learning models. Our results, on
three real-life datasets collected with Amazon's Mechanical Turk, and on 15
well-known UCI data sets, show that our methods on average ask humans to label
one to two orders of magnitude fewer items to achieve the same accuracy as a
baseline that labels random images, and two to eight times fewer questions than
previous active learning schemes.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2012 15:21:06 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Dec 2012 15:45:55 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Dec 2012 18:20:04 GMT"
},
{
"version": "v4",
"created": "Sat, 20 Dec 2014 08:56:15 GMT"
}
] | 2014-12-23T00:00:00 | [
[
"Mozafari",
"Barzan",
""
],
[
"Sarkar",
"Purnamrita",
""
],
[
"Franklin",
"Michael J.",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Madden",
"Samuel",
""
]
] | TITLE: Active Learning for Crowd-Sourced Databases
ABSTRACT: Crowd-sourcing has become a popular means of acquiring labeled data for a
wide variety of tasks where humans are more accurate than computers, e.g.,
labeling images, matching objects, or analyzing sentiment. However, relying
solely on the crowd is often impractical even for data sets with thousands of
items, due to time and cost constraints of acquiring human input (which cost
pennies and minutes per label). In this paper, we propose algorithms for
integrating machine learning into crowd-sourced databases, with the goal of
allowing crowd-sourcing applications to scale, i.e., to handle larger datasets
at lower costs. The key observation is that, in many of the above tasks, humans
and machine learning algorithms can be complementary, as humans are often more
accurate but slow and expensive, while algorithms are usually less accurate,
but faster and cheaper.
Based on this observation, we present two new active learning algorithms to
combine humans and algorithms together in a crowd-sourced database. Our
algorithms are based on the theory of non-parametric bootstrap, which makes our
results applicable to a broad class of machine learning models. Our results, on
three real-life datasets collected with Amazon's Mechanical Turk, and on 15
well-known UCI data sets, show that our methods on average ask humans to label
one to two orders of magnitude fewer items to achieve the same accuracy as a
baseline that labels random images, and two to eight times fewer questions than
previous active learning schemes.
| no_new_dataset | 0.952086 |
1412.6570 | Changchun Zhang | Robert C. Qiu | The Foundation of Big Data: Experiments, Formulation, and Applications | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The central theme of this talk is to promote the non-asymptotic statistical
viewpoint in the context of massive datasets. The classical viewpoint breaks
down when the data size becomes large.
| [
{
"version": "v1",
"created": "Sat, 20 Dec 2014 01:14:55 GMT"
}
] | 2014-12-23T00:00:00 | [
[
"Qiu",
"Robert C.",
""
]
] | TITLE: The Foundation of Big Data: Experiments, Formulation, and Applications
ABSTRACT: The central theme of this talk is to promote the non-asymptotic statistical
viewpoint in the context of massive datasets. The classical viewpoint breaks
down when the data size becomes large.
| no_new_dataset | 0.946745 |
1412.6791 | Anoop Katti | Anoop Katti, Anurag Mittal | Mixture of Parts Revisited: Expressive Part Interactions for Pose
Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Part-based models with restrictive tree-structured interactions for the Human
Pose Estimation problem leave many part interactions unhandled. Two of the
most common and strong manifestations of such unhandled interactions are
self-occlusion among the parts and the confusion in the localization of the
non-adjacent symmetric parts. By handling the self-occlusion in a data
efficient manner, we improve the performance of the basic Mixture of Parts
model by a large margin, especially on uncommon poses. Through addressing the
confusion in the symmetric limb localization using a combination of two
complementing trees, we improve the performance on all the parts by at most
doubling the running time. Finally, we show that the combination of the two
solutions improves the results. We report results that are equivalent to the
state-of-the-art on two standard datasets. Because of maintaining the
tree-structured interactions and only part-level modeling of the base Mixture
of Parts model, this is achieved in time that is much less than the best
performing part-based model.
| [
{
"version": "v1",
"created": "Sun, 21 Dec 2014 14:48:41 GMT"
}
] | 2014-12-23T00:00:00 | [
[
"Katti",
"Anoop",
""
],
[
"Mittal",
"Anurag",
""
]
] | TITLE: Mixture of Parts Revisited: Expressive Part Interactions for Pose
Estimation
ABSTRACT: Part-based models with restrictive tree-structured interactions for the Human
Pose Estimation problem leave many part interactions unhandled. Two of the
most common and strong manifestations of such unhandled interactions are
self-occlusion among the parts and the confusion in the localization of the
non-adjacent symmetric parts. By handling the self-occlusion in a data
efficient manner, we improve the performance of the basic Mixture of Parts
model by a large margin, especially on uncommon poses. Through addressing the
confusion in the symmetric limb localization using a combination of two
complementing trees, we improve the performance on all the parts by at most
doubling the running time. Finally, we show that the combination of the two
solutions improves the results. We report results that are equivalent to the
state-of-the-art on two standard datasets. Because of maintaining the
tree-structured interactions and only part-level modeling of the base Mixture
of Parts model, this is achieved in time that is much less than the best
performing part-based model.
| no_new_dataset | 0.946448 |
1412.6883 | Mahdi Nasrullah Al-Ameen | Mahdi Nasrullah Al-Ameen and Matthew Wright | iPersea : The Improved Persea with Sybil Detection Mechanism | 10 pages | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | P2P systems are highly susceptible to Sybil attacks, in which an attacker
creates a large number of identities and uses them to control a substantial
fraction of the system. Persea is the most recent approach towards designing a
social network based Sybil-resistant DHT. Unlike prior Sybil-resistant P2P
systems based on social networks, Persea does not rely on two key assumptions:
(i) that the social network is fast mixing, and (ii) that there is a small
ratio of attack edges to honest peers. Both assumptions have been shown to be
unreliable in real social networks. The hierarchical distribution of node IDs
in Persea confines a large attacker botnet to a considerably smaller region of
the ID space than in a normal P2P system and its replication mechanism lets a
peer retrieve the desired results even if a given region is occupied by
attackers. However, the Persea system suffers from certain limitations, since it
cannot handle the scenario where the malicious target returns an incorrect
result instead of just ignoring the lookup request. In this paper, we address
this major limitation of Persea through a Sybil detection mechanism built on
top of the Persea system, which accommodates inspection lookup, a specially
designed lookup scheme to detect the Sybil nodes based on their responses to
the lookup query. We design a scheme to filter those detected Sybils to ensure
the participation of honest nodes on the lookup path during regular DHT lookup.
Since the malicious nodes are opted out of the lookup path in our system, they
cannot return any incorrect result during regular lookup. We evaluate our
system in simulations with social network datasets and the results show that
catster, the largest network in our simulation with 149700 nodes and 5449275
edges, gains 100% lookup success rate, even when the number of attack edges is
equal to the number of benign peers in the network.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2014 06:25:49 GMT"
}
] | 2014-12-23T00:00:00 | [
[
"Al-Ameen",
"Mahdi Nasrullah",
""
],
[
"Wright",
"Matthew",
""
]
] | TITLE: iPersea : The Improved Persea with Sybil Detection Mechanism
ABSTRACT: P2P systems are highly susceptible to Sybil attacks, in which an attacker
creates a large number of identities and uses them to control a substantial
fraction of the system. Persea is the most recent approach towards designing a
social network based Sybil-resistant DHT. Unlike prior Sybil-resistant P2P
systems based on social networks, Persea does not rely on two key assumptions:
(i) that the social network is fast mixing, and (ii) that there is a small
ratio of attack edges to honest peers. Both assumptions have been shown to be
unreliable in real social networks. The hierarchical distribution of node IDs
in Persea confines a large attacker botnet to a considerably smaller region of
the ID space than in a normal P2P system and its replication mechanism lets a
peer retrieve the desired results even if a given region is occupied by
attackers. However, the Persea system suffers from certain limitations, since it
cannot handle the scenario where the malicious target returns an incorrect
result instead of just ignoring the lookup request. In this paper, we address
this major limitation of Persea through a Sybil detection mechanism built on
top of the Persea system, which accommodates inspection lookup, a specially
designed lookup scheme to detect the Sybil nodes based on their responses to
the lookup query. We design a scheme to filter those detected Sybils to ensure
the participation of honest nodes on the lookup path during regular DHT lookup.
Since the malicious nodes are opted out of the lookup path in our system, they
cannot return any incorrect result during regular lookup. We evaluate our
system in simulations with social network datasets and the results show that
catster, the largest network in our simulation with 149700 nodes and 5449275
edges, gains 100% lookup success rate, even when the number of attack edges is
equal to the number of benign peers in the network.
| no_new_dataset | 0.937498 |
1412.6124 | Jianyu Wang | Jianyu Wang and Alan Yuille | Semantic Part Segmentation using Compositional Model combining Shape and
Appearance | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of semantic part segmentation for
animals. This is more challenging than standard object detection, object
segmentation and pose estimation tasks because semantic parts of animals often
have similar appearance and highly varying shapes. To tackle these challenges,
we build a mixture of compositional models to represent the object boundary and
the boundaries of semantic parts. And we incorporate edge, appearance, and
semantic part cues into the compositional model. Given part-level segmentation
annotation, we develop a novel algorithm to learn a mixture of compositional
models under various poses and viewpoints for certain animal classes.
Furthermore, a linear complexity algorithm is offered for efficient inference
of the compositional model using dynamic programming. We evaluate our method
for horse and cow using a newly annotated dataset on Pascal VOC 2010 which has
pixelwise part labels. Experimental results demonstrate the effectiveness of
our method.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 21:27:38 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"Wang",
"Jianyu",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Semantic Part Segmentation using Compositional Model combining Shape and
Appearance
ABSTRACT: In this paper, we study the problem of semantic part segmentation for
animals. This is more challenging than standard object detection, object
segmentation and pose estimation tasks because semantic parts of animals often
have similar appearance and highly varying shapes. To tackle these challenges,
we build a mixture of compositional models to represent the object boundary and
the boundaries of semantic parts. And we incorporate edge, appearance, and
semantic part cues into the compositional model. Given part-level segmentation
annotation, we develop a novel algorithm to learn a mixture of compositional
models under various poses and viewpoints for certain animal classes.
Furthermore, a linear complexity algorithm is offered for efficient inference
of the compositional model using dynamic programming. We evaluate our method
for horse and cow using a newly annotated dataset on Pascal VOC 2010 which has
pixelwise part labels. Experimental results demonstrate the effectiveness of
our method.
| new_dataset | 0.961714 |
1412.6154 | Ana Romero | Ana Romero, Julio Rubio, Francis Sergeraert | Effective persistent homology of digital images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, three Computational Topology methods (namely effective
homology, persistent homology and discrete vector fields) are mixed together to
produce algorithms for homological digital image processing. The algorithms
have been implemented as extensions of the Kenzo system and have shown a good
performance when applied on some actual images extracted from a public dataset.
| [
{
"version": "v1",
"created": "Mon, 6 Oct 2014 11:45:07 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"Romero",
"Ana",
""
],
[
"Rubio",
"Julio",
""
],
[
"Sergeraert",
"Francis",
""
]
] | TITLE: Effective persistent homology of digital images
ABSTRACT: In this paper, three Computational Topology methods (namely effective
homology, persistent homology and discrete vector fields) are mixed together to
produce algorithms for homological digital image processing. The algorithms
have been implemented as extensions of the Kenzo system and have shown a good
performance when applied on some actual images extracted from a public dataset.
| no_new_dataset | 0.954984 |
1412.6170 | Francesco Lettich | Francesco Lettich, Salvatore Orlando and Claudio Silvestri | Manycore processing of repeated k-NN queries over massive moving objects
observations | null | null | null | null | cs.DC cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to timely process significant amounts of continuously updated
spatial data is mandatory for an increasing number of applications. In this
paper we focus on a specific data-intensive problem concerning the repeated
processing of huge amounts of k nearest neighbours (k-NN) queries over massive
sets of moving objects, where the spatial extents of queries and the position
of objects are continuously modified over time. In particular, we propose a
novel hybrid CPU/GPU pipeline that significantly accelerates query processing
thanks to a combination of ad-hoc data structures and non-trivial memory access
patterns. To the best of our knowledge this is the first work that exploits
GPUs to efficiently solve repeated k-NN queries over massive sets of
continuously moving objects, even characterized by highly skewed spatial
distributions. In comparison with state-of-the-art sequential CPU-based
implementations, our method highlights significant speedups in the order of
10x-20x, depending on the datasets, even when considering cheap GPUs.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 22:43:28 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"Lettich",
"Francesco",
""
],
[
"Orlando",
"Salvatore",
""
],
[
"Silvestri",
"Claudio",
""
]
] | TITLE: Manycore processing of repeated k-NN queries over massive moving objects
observations
ABSTRACT: The ability to timely process significant amounts of continuously updated
spatial data is mandatory for an increasing number of applications. In this
paper we focus on a specific data-intensive problem concerning the repeated
processing of huge amounts of k nearest neighbours (k-NN) queries over massive
sets of moving objects, where the spatial extents of queries and the position
of objects are continuously modified over time. In particular, we propose a
novel hybrid CPU/GPU pipeline that significantly accelerates query processing
thanks to a combination of ad-hoc data structures and non-trivial memory access
patterns. To the best of our knowledge this is the first work that exploits
GPUs to efficiently solve repeated k-NN queries over massive sets of
continuously moving objects, even characterized by highly skewed spatial
distributions. In comparison with state-of-the-art sequential CPU-based
implementations, our method highlights significant speedups in the order of
10x-20x, depending on the datasets, even when considering cheap GPUs.
| no_new_dataset | 0.948346 |
1412.6257 | Alexander Kalmanovich | Alexander Kalmanovich and Gal Chechik | Gradual training of deep denoising auto encoders | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stacked denoising auto encoders (DAEs) are well known to learn useful deep
representations, which can be used to improve supervised training by
initializing a deep network. We investigate a training scheme of a deep DAE,
where DAE layers are gradually added and keep adapting as additional layers are
added. We show that in the regime of mid-sized datasets, this gradual training
provides a small but consistent improvement over stacked training in both
reconstruction quality and classification error on the MNIST
and CIFAR datasets.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 09:30:33 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"Kalmanovich",
"Alexander",
""
],
[
"Chechik",
"Gal",
""
]
] | TITLE: Gradual training of deep denoising auto encoders
ABSTRACT: Stacked denoising auto encoders (DAEs) are well known to learn useful deep
representations, which can be used to improve supervised training by
initializing a deep network. We investigate a training scheme of a deep DAE,
where DAE layers are gradually added and keep adapting as additional layers are
added. We show that in the regime of mid-sized datasets, this gradual training
provides a small but consistent improvement over stacked training in both
reconstruction quality and classification error on the MNIST
and CIFAR datasets.
| no_new_dataset | 0.947721 |
1412.6264 | Taraka Rama Kasicheyanula | Taraka Rama K | Supertagging: Introduction, learning, and application | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Supertagging is an approach originally developed by Bangalore and Joshi
(1999) to improve the parsing efficiency. In the beginning, the scholars used
small training datasets and somewhat na\"ive smoothing techniques to learn the
probability distributions of supertags. Since its inception, the applicability
of Supertags has been explored for TAG (tree-adjoining grammar) formalism as
well as other related yet different formalisms such as CCG. This article will
try to summarize the various chapters, relevant to statistical parsing, from
the most recent edited book volume (Bangalore and Joshi, 2010). The chapters
were selected so as to blend the learning of supertags, its integration into
full-scale parsing, and in semantic parsing.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 09:53:57 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"K",
"Taraka Rama",
""
]
] | TITLE: Supertagging: Introduction, learning, and application
ABSTRACT: Supertagging is an approach originally developed by Bangalore and Joshi
(1999) to improve the parsing efficiency. In the beginning, the scholars used
small training datasets and somewhat na\"ive smoothing techniques to learn the
probability distributions of supertags. Since its inception, the applicability
of Supertags has been explored for TAG (tree-adjoining grammar) formalism as
well as other related yet different formalisms such as CCG. This article will
try to summarize the various chapters, relevant to statistical parsing, from
the most recent edited book volume (Bangalore and Joshi, 2010). The chapters
were selected so as to blend the learning of supertags, its integration into
full-scale parsing, and in semantic parsing.
| no_new_dataset | 0.953232 |
1412.6402 | Pierre de Buyl | Rebecca R. Murphy, Sophie E. Jackson, David Klenerman | pyFRET: A Python Library for Single Molecule Fluorescence Data Analysis | Part of the Proceedings of the 7th European Conference on Python in
Science (EuroSciPy 2014), Pierre de Buyl and Nelle Varoquaux editors, (2014) | null | null | euroscipy-proceedings2014-10 | cs.CE physics.bio-ph q-bio.BM | http://creativecommons.org/licenses/by/3.0/ | Single molecule F\"orster resonance energy transfer (smFRET) is a powerful
experimental technique for studying the properties of individual biological
molecules in solution. However, as adoption of smFRET techniques becomes more
widespread, the lack of available software, whether open source or commercial,
for data analysis, is becoming a significant issue. Here, we present pyFRET, an
open source Python package for the analysis of data from single-molecule
fluorescence experiments from freely diffusing biomolecules. The package
provides methods for the complete analysis of a smFRET dataset, from burst
selection and denoising, through data visualisation and model fitting. We
provide support for both continuous excitation and alternating laser excitation
(ALEX) data analysis. pyFRET is available as a package downloadable from the
Python Package Index (PyPI) under the open source three-clause BSD licence,
together with links to extensive documentation and tutorials, including example
usage and test data. Additional documentation including tutorials is hosted
independently on ReadTheDocs. The code is available from the free hosting site
Bitbucket. Through distribution of this software, we hope to lower the barrier
for the adoption of smFRET experiments by other research groups and we
encourage others to contribute modules for specific analysis needs.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 16:00:31 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"Murphy",
"Rebecca R.",
""
],
[
"Jackson",
"Sophie E.",
""
],
[
"Klenerman",
"David",
""
]
] | TITLE: pyFRET: A Python Library for Single Molecule Fluorescence Data Analysis
ABSTRACT: Single molecule F\"orster resonance energy transfer (smFRET) is a powerful
experimental technique for studying the properties of individual biological
molecules in solution. However, as adoption of smFRET techniques becomes more
widespread, the lack of available software, whether open source or commercial,
for data analysis, is becoming a significant issue. Here, we present pyFRET, an
open source Python package for the analysis of data from single-molecule
fluorescence experiments from freely diffusing biomolecules. The package
provides methods for the complete analysis of a smFRET dataset, from burst
selection and denoising, through data visualisation and model fitting. We
provide support for both continuous excitation and alternating laser excitation
(ALEX) data analysis. pyFRET is available as a package downloadable from the
Python Package Index (PyPI) under the open source three-clause BSD licence,
together with links to extensive documentation and tutorials, including example
usage and test data. Additional documentation including tutorials is hosted
independently on ReadTheDocs. The code is available from the free hosting site
Bitbucket. Through distribution of this software, we hope to lower the barrier
for the adoption of smFRET experiments by other research groups and we
encourage others to contribute modules for specific analysis needs.
| no_new_dataset | 0.941169 |
1412.6493 | Zichao Yang | Zichao Yang and Alexander J. Smola and Le Song and Andrew Gordon
Wilson | A la Carte - Learning Fast Kernels | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel methods have great promise for learning rich statistical
representations of large modern datasets. However, compared to neural networks,
kernel methods have been perceived as lacking in scalability and flexibility.
We introduce a family of fast, flexible, lightly parametrized and general
purpose kernel learning methods, derived from Fastfood basis function
expansions. We provide mechanisms to learn the properties of groups of spectral
frequencies in these expansions, which require only O(m log d) time and O(m)
memory, for m basis functions and d input dimensions. We show that the proposed
methods can learn a wide class of kernels, outperforming the alternatives in
accuracy, speed, and memory consumption.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 19:27:21 GMT"
}
] | 2014-12-22T00:00:00 | [
[
"Yang",
"Zichao",
""
],
[
"Smola",
"Alexander J.",
""
],
[
"Song",
"Le",
""
],
[
"Wilson",
"Andrew Gordon",
""
]
] | TITLE: A la Carte - Learning Fast Kernels
ABSTRACT: Kernel methods have great promise for learning rich statistical
representations of large modern datasets. However, compared to neural networks,
kernel methods have been perceived as lacking in scalability and flexibility.
We introduce a family of fast, flexible, lightly parametrized and general
purpose kernel learning methods, derived from Fastfood basis function
expansions. We provide mechanisms to learn the properties of groups of spectral
frequencies in these expansions, which require only O(m log d) time and O(m)
memory, for m basis functions and d input dimensions. We show that the proposed
methods can learn a wide class of kernels, outperforming the alternatives in
accuracy, speed, and memory consumption.
| no_new_dataset | 0.947478 |
1303.1624 | Conrad Sanderson | Yongkang Wong, Mehrtash T. Harandi, Conrad Sanderson | On Robust Face Recognition via Sparse Encoding: the Good, the Bad, and
the Ugly | null | IET Biometrics, Vol. 3, No. 4, pp. 176-189, 2014 | 10.1049/iet-bmt.2013.0033 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of face recognition, Sparse Representation (SR) has received
considerable attention during the past few years. Most of the relevant
literature focuses on holistic descriptors in closed-set identification
applications. The underlying assumption in SR-based methods is that each class
in the gallery has sufficient samples and the query lies on the subspace
spanned by the gallery of the same class. Unfortunately, such an assumption is
easily violated in the more challenging face verification scenario, where an
algorithm is required to determine if two faces (where one or both have not
been seen before) belong to the same person. In this paper, we first discuss
why previous attempts with SR might not be applicable to verification problems.
We then propose an alternative approach to face verification via SR.
Specifically, we propose to use explicit SR encoding on local image patches
rather than the entire face. The obtained sparse signals are pooled via
averaging to form multiple region descriptors, which are then concatenated to
form an overall face descriptor. Due to the deliberate loss of spatial relations
within each region (caused by averaging), the resulting descriptor is robust to
misalignment & various image deformations. Within the proposed framework, we
evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder
Neural Network (SANN), and an implicit probabilistic technique based on
Gaussian Mixture Models. Thorough experiments on AR, FERET, exYaleB, BANCA and
ChokePoint datasets show that the proposed local SR approach obtains
considerably better and more robust performance than several previous
state-of-the-art holistic SR methods, in both verification and closed-set
identification problems. The experiments also show that l1-minimisation based
encoding has a considerably higher computational cost than the other techniques, but
leads to higher recognition rates.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2013 09:30:10 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Wong",
"Yongkang",
""
],
[
"Harandi",
"Mehrtash T.",
""
],
[
"Sanderson",
"Conrad",
""
]
] | TITLE: On Robust Face Recognition via Sparse Encoding: the Good, the Bad, and
the Ugly
ABSTRACT: In the field of face recognition, Sparse Representation (SR) has received
considerable attention during the past few years. Most of the relevant
literature focuses on holistic descriptors in closed-set identification
applications. The underlying assumption in SR-based methods is that each class
in the gallery has sufficient samples and the query lies on the subspace
spanned by the gallery of the same class. Unfortunately, such an assumption is
easily violated in the more challenging face verification scenario, where an
algorithm is required to determine if two faces (where one or both have not
been seen before) belong to the same person. In this paper, we first discuss
why previous attempts with SR might not be applicable to verification problems.
We then propose an alternative approach to face verification via SR.
Specifically, we propose to use explicit SR encoding on local image patches
rather than the entire face. The obtained sparse signals are pooled via
averaging to form multiple region descriptors, which are then concatenated to
form an overall face descriptor. Due to the deliberate loss of spatial relations
within each region (caused by averaging), the resulting descriptor is robust to
misalignment & various image deformations. Within the proposed framework, we
evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder
Neural Network (SANN), and an implicit probabilistic technique based on
Gaussian Mixture Models. Thorough experiments on AR, FERET, exYaleB, BANCA and
ChokePoint datasets show that the proposed local SR approach obtains
considerably better and more robust performance than several previous
state-of-the-art holistic SR methods, in both verification and closed-set
identification problems. The experiments also show that l1-minimisation based
encoding has a considerably higher computational cost than the other techniques, but
leads to higher recognition rates.
| no_new_dataset | 0.950915 |
1412.3506 | Jose M. Alvarez | Jose M. Alvarez and Theo Gevers and Antonio M. Lopez | Road Detection by One-Class Color Classification: Dataset and
Experiments | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting traversable road areas ahead of a moving vehicle is a key process for
modern autonomous driving systems. A common approach to road detection consists
of exploiting color features to classify pixels as road or background. These
algorithms reduce the effect of lighting variations and weather conditions by
exploiting the discriminant/invariant properties of different color
representations. Furthermore, the lack of labeled datasets has motivated the
development of algorithms performing on single images based on the assumption
that the bottom part of the image belongs to the road surface.
In this paper, we first introduce a dataset of road images taken at different
times and in different scenarios using an onboard camera. Then, we devise a
simple online algorithm and conduct an exhaustive evaluation of different
classifiers and the effect of using different color representation to
characterize pixels.
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2014 00:31:37 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Dec 2014 00:57:36 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Alvarez",
"Jose M.",
""
],
[
"Gevers",
"Theo",
""
],
[
"Lopez",
"Antonio M.",
""
]
] | TITLE: Road Detection by One-Class Color Classification: Dataset and
Experiments
ABSTRACT: Detecting traversable road areas ahead of a moving vehicle is a key process for
modern autonomous driving systems. A common approach to road detection consists
of exploiting color features to classify pixels as road or background. These
algorithms reduce the effect of lighting variations and weather conditions by
exploiting the discriminant/invariant properties of different color
representations. Furthermore, the lack of labeled datasets has motivated the
development of algorithms performing on single images based on the assumption
that the bottom part of the image belongs to the road surface.
In this paper, we first introduce a dataset of road images taken at different
times and in different scenarios using an onboard camera. Then, we devise a
simple online algorithm and conduct an exhaustive evaluation of different
classifiers and the effect of using different color representation to
characterize pixels.
| new_dataset | 0.957794 |
1412.5617 | Shuang Song | Shuang Song, Kamalika Chaudhuri, Anand D. Sarwate | Learning from Data with Heterogeneous Noise using SGD | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider learning from data of variable quality that may be obtained from
different heterogeneous sources. Addressing learning from heterogeneous data in
its full generality is a challenging problem. In this paper, we adopt instead a
model in which data is observed through heterogeneous noise, where the noise
level reflects the quality of the data source. We study how to use stochastic
gradient algorithms to learn in this model. Our study is motivated by two
concrete examples where this problem arises naturally: learning with local
differential privacy based on data from multiple sources with different privacy
requirements, and learning from data with labels of variable quality.
The main contribution of this paper is to identify how heterogeneous noise
impacts performance. We show that given two datasets with heterogeneous noise,
the order in which to use them in standard SGD depends on the learning rate. We
propose a method for changing the learning rate as a function of the
heterogeneity, and prove new regret bounds for our method in two cases of
interest. Experiments on real data show that our method performs better than
using a single learning rate and using only the less noisy of the two datasets
when the noise level is low to moderate.
| [
{
"version": "v1",
"created": "Wed, 17 Dec 2014 21:15:06 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Song",
"Shuang",
""
],
[
"Chaudhuri",
"Kamalika",
""
],
[
"Sarwate",
"Anand D.",
""
]
] | TITLE: Learning from Data with Heterogeneous Noise using SGD
ABSTRACT: We consider learning from data of variable quality that may be obtained from
different heterogeneous sources. Addressing learning from heterogeneous data in
its full generality is a challenging problem. In this paper, we adopt instead a
model in which data is observed through heterogeneous noise, where the noise
level reflects the quality of the data source. We study how to use stochastic
gradient algorithms to learn in this model. Our study is motivated by two
concrete examples where this problem arises naturally: learning with local
differential privacy based on data from multiple sources with different privacy
requirements, and learning from data with labels of variable quality.
The main contribution of this paper is to identify how heterogeneous noise
impacts performance. We show that given two datasets with heterogeneous noise,
the order in which to use them in standard SGD depends on the learning rate. We
propose a method for changing the learning rate as a function of the
heterogeneity, and prove new regret bounds for our method in two cases of
interest. Experiments on real data show that our method performs better than
using a single learning rate and using only the less noisy of the two datasets
when the noise level is low to moderate.
| no_new_dataset | 0.947817 |
1412.5627 | Fabricio Martins Lopes | Bruno Mendes Moro Conque and Andr\'e Yoshiaki Kashiwabara and
Fabr\'icio Martins Lopes | Feature extraction from complex networks: A case of study in genomic
sequences classification | 8 pages | null | null | null | cs.CE cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a new approach for classification of genomic sequences
from measurements of complex networks and information theory. For this, the
nucleotides, dinucleotides and trinucleotides of a genomic sequence are
considered. For each of them, the entropy, sum entropy and maximum entropy
values are calculated. For each of them, a network is also generated, in which
the nodes are the nucleotides, dinucleotides or trinucleotides and the edges
are estimated by observing the respective adjacency among them in the genomic
sequence. In this way, three networks are generated, from which measures of
complex networks are extracted. These measures, together with measures of
information theory, comprise a feature vector representing a genomic sequence.
The feature vector is then used for classification by methods such as SVM,
MultiLayer Perceptron, J48, IBK, Naive Bayes and Random Forest in order to
evaluate the proposed approach. Coding sequences, intergenic sequences and TSS
(Transcriptional Starter Sites) were adopted as datasets, for which the best
results were obtained by Random Forest with 91.2% accuracy, followed by J48
with 89.1% and SVM with 84.8%. These results indicate that the new feature
extraction approach has its value, reaching good levels of classification even
when considering only the genomic sequences, i.e., no other a priori knowledge
about them is considered.
| [
{
"version": "v1",
"created": "Wed, 17 Dec 2014 21:31:51 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Conque",
"Bruno Mendes Moro",
""
],
[
"Kashiwabara",
"André Yoshiaki",
""
],
[
"Lopes",
"Fabrício Martins",
""
]
] | TITLE: Feature extraction from complex networks: A case of study in genomic
sequences classification
ABSTRACT: This work presents a new approach for classification of genomic sequences
from measurements of complex networks and information theory. For this, the
nucleotides, dinucleotides and trinucleotides of a genomic sequence are
considered. For each of them, the entropy, sum entropy and maximum entropy
values are calculated. For each of them, a network is also generated, in which
the nodes are the nucleotides, dinucleotides or trinucleotides and the edges
are estimated by observing the respective adjacency among them in the genomic
sequence. In this way, three networks are generated, from which measures of
complex networks are extracted. These measures, together with measures of
information theory, comprise a feature vector representing a genomic sequence.
The feature vector is then used for classification by methods such as SVM,
MultiLayer Perceptron, J48, IBK, Naive Bayes and Random Forest in order to
evaluate the proposed approach. Coding sequences, intergenic sequences and TSS
(Transcriptional Starter Sites) were adopted as datasets, for which the best
results were obtained by Random Forest with 91.2% accuracy, followed by J48
with 89.1% and SVM with 84.8%. These results indicate that the new feature
extraction approach has its value, reaching good levels of classification even
when considering only the genomic sequences, i.e., no other a priori knowledge
about them is considered.
| no_new_dataset | 0.94801 |
1412.5720 | David Budden | Madison Flannery, David M Budden and Alexandre Mendes | FlexDM: Enabling robust and reliable parallel data mining using WEKA | 4 pages, 2 figures | null | null | null | cs.MS cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performing massive data mining experiments with multiple datasets and methods
is a common task faced by most bioinformatics and computational biology
laboratories. WEKA is a machine learning package designed to facilitate this
task by providing tools that allow researchers to select from several
classification methods and specific test strategies. Despite its popularity,
the current WEKA environment for batch experiments, namely Experimenter, has
four limitations that impact its usability: the selection of value ranges for
method options lacks flexibility and is not intuitive; there is no support for
parallelisation when running large-scale data mining tasks; the XML schema is
difficult to read, necessitating the use of the Experimenter's graphical user
interface for generation and modification; and robustness is limited by the
fact that results are not saved until the last test has concluded.
FlexDM implements an interface to WEKA to run batch processing tasks in a
simple and intuitive way. In a short and easy-to-understand XML file, one can
define hundreds of tests to be performed on several datasets. FlexDM also
allows those tests to be executed asynchronously in parallel to take advantage
of multi-core processors, significantly increasing usability and productivity.
Results are saved incrementally for better robustness and reliability.
FlexDM is implemented in Java and runs on Windows, Linux and OSX. As we
encourage other researchers to explore and adopt our software, FlexDM is made
available as a pre-configured bootable reference environment. All code,
supporting documentation and usage examples are also available for download at
http://sourceforge.net/projects/flexdm.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 05:07:44 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Flannery",
"Madison",
""
],
[
"Budden",
"David M",
""
],
[
"Mendes",
"Alexandre",
""
]
] | TITLE: FlexDM: Enabling robust and reliable parallel data mining using WEKA
ABSTRACT: Performing massive data mining experiments with multiple datasets and methods
is a common task faced by most bioinformatics and computational biology
laboratories. WEKA is a machine learning package designed to facilitate this
task by providing tools that allow researchers to select from several
classification methods and specific test strategies. Despite its popularity,
the current WEKA environment for batch experiments, namely Experimenter, has
four limitations that impact its usability: the selection of value ranges for
method options lacks flexibility and is not intuitive; there is no support for
parallelisation when running large-scale data mining tasks; the XML schema is
difficult to read, necessitating the use of the Experimenter's graphical user
interface for generation and modification; and robustness is limited by the
fact that results are not saved until the last test has concluded.
FlexDM implements an interface to WEKA to run batch processing tasks in a
simple and intuitive way. In a short and easy-to-understand XML file, one can
define hundreds of tests to be performed on several datasets. FlexDM also
allows those tests to be executed asynchronously in parallel to take advantage
of multi-core processors, significantly increasing usability and productivity.
Results are saved incrementally for better robustness and reliability.
FlexDM is implemented in Java and runs on Windows, Linux and OSX. As we
encourage other researchers to explore and adopt our software, FlexDM is made
available as a pre-configured bootable reference environment. All code,
supporting documentation and usage examples are also available for download at
http://sourceforge.net/projects/flexdm.
| no_new_dataset | 0.934634 |
1412.5949 | Pengtao Xie | Pengtao Xie and Eric Xing | Large Scale Distributed Distance Metric Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large scale machine learning and data mining problems with high feature
dimensionality, the Euclidean distance between data points can be
uninformative, and Distance Metric Learning (DML) is often desired to learn a
proper similarity measure (using side information such as example data pairs
being similar or dissimilar). However, high dimensionality and large volume of
pairwise constraints in modern big data can lead to prohibitive computational
cost for both the original DML formulation in Xing et al. (2002) and later
extensions. In this paper, we present a distributed algorithm for DML, and a
large-scale implementation on a parameter server architecture. Our approach
builds on a parallelizable reformulation of Xing et al. (2002), and an
asynchronous stochastic gradient descent optimization procedure. To our
knowledge, this is the first distributed solution to DML, and we show that, on
a system with 256 CPU cores, our program is able to complete a DML task on a
dataset with 1 million data points, 22-thousand features, and 200 million
labeled data pairs, in 15 hours; and the learned metric shows great
effectiveness in properly measuring distances.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 17:14:34 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Xie",
"Pengtao",
""
],
[
"Xing",
"Eric",
""
]
] | TITLE: Large Scale Distributed Distance Metric Learning
ABSTRACT: In large scale machine learning and data mining problems with high feature
dimensionality, the Euclidean distance between data points can be
uninformative, and Distance Metric Learning (DML) is often desired to learn a
proper similarity measure (using side information such as example data pairs
being similar or dissimilar). However, high dimensionality and large volume of
pairwise constraints in modern big data can lead to prohibitive computational
cost for both the original DML formulation in Xing et al. (2002) and later
extensions. In this paper, we present a distributed algorithm for DML, and a
large-scale implementation on a parameter server architecture. Our approach
builds on a parallelizable reformulation of Xing et al. (2002), and an
asynchronous stochastic gradient descent optimization procedure. To our
knowledge, this is the first distributed solution to DML, and we show that, on
a system with 256 CPU cores, our program is able to complete a DML task on a
dataset with 1 million data points, 22-thousand features, and 200 million
labeled data pairs, in 15 hours; and the learned metric shows great
effectiveness in properly measuring distances.
| no_new_dataset | 0.946794 |
1412.5968 | Andrew Lan | Andrew S. Lan, Christoph Studer, Richard G. Baraniuk | Quantized Matrix Completion for Personalized Learning | null | In Proc. 7th Intl. Conf. on Educational Data Mining, pages
280-283, July 2014 | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently proposed SPARse Factor Analysis (SPARFA) framework for
personalized learning performs factor analysis on ordinal or binary-valued
(e.g., correct/incorrect) graded learner responses to questions. The underlying
factors are termed "concepts" (or knowledge components) and are used for
learning analytics (LA), the estimation of learner concept-knowledge profiles,
and for content analytics (CA), the estimation of question-concept associations
and question difficulties. While SPARFA is a powerful tool for LA and CA, it
requires a number of algorithm parameters (including the number of concepts),
which are difficult to determine in practice. In this paper, we propose
SPARFA-Lite, a convex optimization-based method for LA that builds on matrix
completion, which only requires a single algorithm parameter and enables us to
automatically identify the required number of concepts. Using a variety of
educational datasets, we demonstrate that SPARFA-Lite (i) achieves comparable
performance in predicting unobserved learner responses to existing methods,
including item response theory (IRT) and SPARFA, and (ii) is computationally
more efficient.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 17:48:17 GMT"
}
] | 2014-12-19T00:00:00 | [
[
"Lan",
"Andrew S.",
""
],
[
"Studer",
"Christoph",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] | TITLE: Quantized Matrix Completion for Personalized Learning
ABSTRACT: The recently proposed SPARse Factor Analysis (SPARFA) framework for
personalized learning performs factor analysis on ordinal or binary-valued
(e.g., correct/incorrect) graded learner responses to questions. The underlying
factors are termed "concepts" (or knowledge components) and are used for
learning analytics (LA), the estimation of learner concept-knowledge profiles,
and for content analytics (CA), the estimation of question-concept associations
and question difficulties. While SPARFA is a powerful tool for LA and CA, it
requires a number of algorithm parameters (including the number of concepts),
which are difficult to determine in practice. In this paper, we propose
SPARFA-Lite, a convex optimization-based method for LA that builds on matrix
completion, which only requires a single algorithm parameter and enables us to
automatically identify the required number of concepts. Using a variety of
educational datasets, we demonstrate that SPARFA-Lite (i) achieves comparable
performance in predicting unobserved learner responses to existing methods,
including item response theory (IRT) and SPARFA, and (ii) is computationally
more efficient.
| no_new_dataset | 0.950869 |
1412.5448 | Micka\"el Poussevin | Micka\"el Poussevin and Vincent Guigue and Patrick Gallinari | Extended Recommendation Framework: Generating the Text of a User Review
as a Personalized Summary | null | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to augment rating based recommender systems by providing the user
with additional information which might help him in his choice or in the
understanding of the recommendation. We consider here as a new task, the
generation of personalized reviews associated to items. We use an extractive
summary formulation for generating these reviews. We also show that the two
information sources, ratings and items could be used both for estimating
ratings and for generating summaries, leading to improved performance for each
system compared to the use of a single source. Besides these two contributions,
we show how a personalized polarity classifier can integrate the rating and
textual aspects. Overall, the proposed system offers the user three
personalized hints for a recommendation: rating, text and polarity. We evaluate
these three components on two datasets using appropriate measures for each
task.
| [
{
"version": "v1",
"created": "Wed, 17 Dec 2014 15:46:28 GMT"
}
] | 2014-12-18T00:00:00 | [
[
"Poussevin",
"Mickaël",
""
],
[
"Guigue",
"Vincent",
""
],
[
"Gallinari",
"Patrick",
""
]
] | TITLE: Extended Recommendation Framework: Generating the Text of a User Review
as a Personalized Summary
ABSTRACT: We propose to augment rating based recommender systems by providing the user
with additional information which might help him in his choice or in the
understanding of the recommendation. We consider here as a new task, the
generation of personalized reviews associated to items. We use an extractive
summary formulation for generating these reviews. We also show that the two
information sources, ratings and items could be used both for estimating
ratings and for generating summaries, leading to improved performance for each
system compared to the use of a single source. Besides these two contributions,
we show how a personalized polarity classifier can integrate the rating and
textual aspects. Overall, the proposed system offers the user three
personalized hints for a recommendation: rating, text and polarity. We evaluate
these three components on two datasets using appropriate measures for each
task.
| no_new_dataset | 0.952309 |
1412.5513 | Engelbert Mephu Nguifo | Cyrine Arouri, Engelbert Mephu Nguifo, Sabeur Aridhi, C\'ecile
Roucelle, Gaelle Bonnet-Loosli, Norbert Tsopz\'e | Towards a constructive multilayer perceptron for regression task using
non-parametric clustering. A case study of Photo-Z redshift reconstruction | null | null | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The choice of architecture of an artificial neural network (ANN) is still a
challenging task that users face every time. It greatly affects the accuracy of
the built network. In fact there is no optimal method that is applicable to
various implementations at the same time. In this paper we propose a method to
construct an ANN based on clustering, which resolves the problems of random and ad
hoc approaches for multilayer ANN architecture. Our method can be applied to
regression problems. Experimental results obtained with different datasets
reveal the efficiency of our method.
| [
{
"version": "v1",
"created": "Wed, 17 Dec 2014 18:36:23 GMT"
}
] | 2014-12-18T00:00:00 | [
[
"Arouri",
"Cyrine",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
],
[
"Aridhi",
"Sabeur",
""
],
[
"Roucelle",
"Cécile",
""
],
[
"Bonnet-Loosli",
"Gaelle",
""
],
[
"Tsopzé",
"Norbert",
""
]
] | TITLE: Towards a constructive multilayer perceptron for regression task using
non-parametric clustering. A case study of Photo-Z redshift reconstruction
ABSTRACT: The choice of architecture of an artificial neural network (ANN) is still a
challenging task that users face every time. It greatly affects the accuracy of
the built network. In fact there is no optimal method that is applicable to
various implementations at the same time. In this paper we propose a method to
construct an ANN based on clustering, which resolves the problems of random and ad
hoc approaches for multilayer ANN architecture. Our method can be applied to
regression problems. Experimental results obtained with different datasets
reveal the efficiency of our method.
| no_new_dataset | 0.94699 |
1312.0041 | Jonathan Tu | Jonathan H. Tu, Clarence W. Rowley, Dirk M. Luchtenburg, Steven L.
Brunton, and J. Nathan Kutz | On Dynamic Mode Decomposition: Theory and Applications | null | J.Comput. Dyn. 1(2):391-421 (2014) | 10.3934/jcd.2014.1.391 | null | math.NA physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Originally introduced in the fluid mechanics community, dynamic mode
decomposition (DMD) has emerged as a powerful tool for analyzing the dynamics
of nonlinear systems. However, existing DMD theory deals primarily with
sequential time series for which the measurement dimension is much larger than
the number of measurements taken. We present a theoretical framework in which
we define DMD as the eigendecomposition of an approximating linear operator.
This generalizes DMD to a larger class of datasets, including nonsequential
time series. We demonstrate the utility of this approach by presenting novel
sampling strategies that increase computational efficiency and mitigate the
effects of noise, respectively. We also introduce the concept of linear
consistency, which helps explain the potential pitfalls of applying DMD to
rank-deficient datasets, illustrating with examples. Such computations are not
considered in the existing literature, but can be understood using our more
general framework. In addition, we show that our theory strengthens the
connections between DMD and Koopman operator theory. It also establishes
connections between DMD and other techniques, including the eigensystem
realization algorithm (ERA), a system identification method, and linear inverse
modeling (LIM), a method from climate science. We show that under certain
conditions, DMD is equivalent to LIM.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2013 23:55:41 GMT"
}
] | 2014-12-17T00:00:00 | [
[
"Tu",
"Jonathan H.",
""
],
[
"Rowley",
"Clarence W.",
""
],
[
"Luchtenburg",
"Dirk M.",
""
],
[
"Brunton",
"Steven L.",
""
],
[
"Kutz",
"J. Nathan",
""
]
] | TITLE: On Dynamic Mode Decomposition: Theory and Applications
ABSTRACT: Originally introduced in the fluid mechanics community, dynamic mode
decomposition (DMD) has emerged as a powerful tool for analyzing the dynamics
of nonlinear systems. However, existing DMD theory deals primarily with
sequential time series for which the measurement dimension is much larger than
the number of measurements taken. We present a theoretical framework in which
we define DMD as the eigendecomposition of an approximating linear operator.
This generalizes DMD to a larger class of datasets, including nonsequential
time series. We demonstrate the utility of this approach by presenting novel
sampling strategies that increase computational efficiency and mitigate the
effects of noise, respectively. We also introduce the concept of linear
consistency, which helps explain the potential pitfalls of applying DMD to
rank-deficient datasets, illustrating with examples. Such computations are not
considered in the existing literature, but can be understood using our more
general framework. In addition, we show that our theory strengthens the
connections between DMD and Koopman operator theory. It also establishes
connections between DMD and other techniques, including the eigensystem
realization algorithm (ERA), a system identification method, and linear inverse
modeling (LIM), a method from climate science. We show that under certain
conditions, DMD is equivalent to LIM.
| no_new_dataset | 0.942771 |
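As a concrete illustration of the DMD formulation in the record above (DMD as the eigendecomposition of an approximating linear operator), here is a minimal exact-DMD sketch computed from snapshot pairs via an SVD projection. It is a generic recipe, not the authors' code; the matrix names X, Y, the rank parameter r, and the toy linear system are illustrative assumptions.

```python
import numpy as np

def dmd(X, Y, r=None):
    """Exact DMD: approximate the operator A with Y ~ A X and return its
    eigenvalues and modes (Tu et al.-style exact modes)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:                               # optional truncation rank
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W               # exact DMD modes
    return eigvals, Phi

# Tiny usage example on a synthetic linear system
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
xs = [rng.normal(size=2)]
for _ in range(50):
    xs.append(A_true @ xs[-1])
D = np.array(xs).T
lams, modes = dmd(D[:, :-1], D[:, 1:])
print(np.sort_complex(lams))        # close to the eigenvalues of A_true (0.9 +/- 0.1i)
```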
1412.4842 | Mingjie Tang | Mingjie Tang, Ruby Y. Tahboub, Walid G. Aref, Mikhail J. Atallah,
Qutaibah M. Malluhi, Mourad Ouzzani, and Yasin N. Silva | Similarity Group-by Operators for Multi-dimensional Relational Data | submit to TKDE | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SQL group-by operator plays an important role in summarizing and
aggregating large datasets in a data analytic stack. While the standard group-by
operator, which is based on equality, is useful in several applications,
allowing similarity-aware grouping provides a more realistic view of real-world
data that could lead to better insights. The Similarity SQL-based Group-By
operator (SGB, for short) extends the semantics of the standard SQL Group-by by
grouping data with similar but not necessarily equal values. While existing
similarity-based grouping operators efficiently materialize this approximate
semantics, they primarily focus on one-dimensional attributes and treat
multidimensional attributes independently. However, correlated attributes, such
as in spatial data, are processed independently, and hence, groups in the
multidimensional space are not detected properly. To address this problem, we
introduce two new SGB operators for multidimensional data. The first operator
is the clique (or distance-to-all) SGB, where all the tuples in a group are
within some distance from each other. The second operator is the
distance-to-any SGB, where a tuple belongs to a group if the tuple is within
some distance from any other tuple in the group. We implement and test the new
SGB operators and their algorithms inside PostgreSQL. The overhead introduced
by these operators proves to be minimal and the execution times are comparable
to those of the standard Group-by. The experimental study, based on TPC-H and
social check-in data, demonstrates that the proposed algorithms can achieve up
to three orders of magnitude enhancement in performance over baseline methods
developed to solve the same problem.
| [
{
"version": "v1",
"created": "Tue, 16 Dec 2014 00:27:52 GMT"
}
] | 2014-12-17T00:00:00 | [
[
"Tang",
"Mingjie",
""
],
[
"Tahboub",
"Ruby Y.",
""
],
[
"Are",
"Walid G.",
""
],
[
"Atallah",
"Mikhail J.",
""
],
[
"Malluhi",
"Qutaibah M.",
""
],
[
"Ouzzani",
"Mourad",
""
],
[
"Silva",
"Yasin N.",
""
]
] | TITLE: Similarity Group-by Operators for Multi-dimensional Relational Data
ABSTRACT: The SQL group-by operator plays an important role in summarizing and
aggregating large datasets in a data analytic stack. While the standard group-by
operator, which is based on equality, is useful in several applications,
allowing similarity-aware grouping provides a more realistic view of real-world
data that could lead to better insights. The Similarity SQL-based Group-By
operator (SGB, for short) extends the semantics of the standard SQL Group-by by
grouping data with similar but not necessarily equal values. While existing
similarity-based grouping operators efficiently materialize this approximate
semantics, they primarily focus on one-dimensional attributes and treat
multidimensional attributes independently. However, correlated attributes, such
as in spatial data, are processed independently, and hence, groups in the
multidimensional space are not detected properly. To address this problem, we
introduce two new SGB operators for multidimensional data. The first operator
is the clique (or distance-to-all) SGB, where all the tuples in a group are
within some distance from each other. The second operator is the
distance-to-any SGB, where a tuple belongs to a group if the tuple is within
some distance from any other tuple in the group. We implement and test the new
SGB operators and their algorithms inside PostgreSQL. The overhead introduced
by these operators proves to be minimal and the execution times are comparable
to those of the standard Group-by. The experimental study, based on TPC-H and
social check-in data, demonstrates that the proposed algorithms can achieve up
to three orders of magnitude enhancement in performance over baseline methods
developed to solve the same problem.
| no_new_dataset | 0.942823 |
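To make the distance-to-any semantics of the record above concrete, the sketch below groups points into eps-connected components with a small union-find structure, which is one way to realize distance-to-any grouping outside a database engine. The eps value and the toy points are made-up placeholders; this is not the paper's PostgreSQL implementation, and the clique (distance-to-all) variant would additionally require every pairwise distance inside a group to be below eps.

```python
import numpy as np

def distance_to_any_groups(points, eps):
    """Group rows of `points` so a tuple joins a group if it is within eps of
    ANY member, i.e. connected components of the eps-graph."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                union(i, j)
    return [find(i) for i in range(n)]

pts = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.0], [5.0, 5.0]])
print(distance_to_any_groups(pts, eps=0.6))   # first three points chain into one group
```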
1412.5104 | Angjoo Kanazawa | Angjoo Kanazawa, Abhishek Sharma, David Jacobs | Locally Scale-Invariant Convolutional Neural Networks | Deep Learning and Representation Learning Workshop: NIPS 2014 | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (ConvNets) have shown excellent results on many
visual classification tasks. With the exception of ImageNet, these datasets are
carefully crafted such that objects are well-aligned at similar scales.
Naturally, the feature learning problem gets more challenging as the amount of
variation in the data increases, as the models have to learn to be invariant to
certain changes in appearance. Recent results on the ImageNet dataset show that
given enough data, ConvNets can learn such invariances producing very
discriminative features [1]. But could we do more: use less parameters, less
data, learn more discriminative features, if certain invariances were built
into the learning process? In this paper we present a simple model that allows
ConvNets to learn features in a locally scale-invariant manner without
increasing the number of model parameters. We show on a modified MNIST dataset
that when faced with scale variation, building in scale-invariance allows
ConvNets to learn more discriminative features with reduced chances of
over-fitting.
| [
{
"version": "v1",
"created": "Tue, 16 Dec 2014 18:09:34 GMT"
}
] | 2014-12-17T00:00:00 | [
[
"Kanazawa",
"Angjoo",
""
],
[
"Sharma",
"Abhishek",
""
],
[
"Jacobs",
"David",
""
]
] | TITLE: Locally Scale-Invariant Convolutional Neural Networks
ABSTRACT: Convolutional Neural Networks (ConvNets) have shown excellent results on many
visual classification tasks. With the exception of ImageNet, these datasets are
carefully crafted such that objects are well-aligned at similar scales.
Naturally, the feature learning problem gets more challenging as the amount of
variation in the data increases, as the models have to learn to be invariant to
certain changes in appearance. Recent results on the ImageNet dataset show that
given enough data, ConvNets can learn such invariances producing very
discriminative features [1]. But could we do more: use less parameters, less
data, learn more discriminative features, if certain invariances were built
into the learning process? In this paper we present a simple model that allows
ConvNets to learn features in a locally scale-invariant manner without
increasing the number of model parameters. We show on a modified MNIST dataset
that when faced with scale variation, building in scale-invariance allows
ConvNets to learn more discriminative features with reduced chances of
over-fitting.
| no_new_dataset | 0.951684 |
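As a rough illustration of the locally scale-invariant filtering idea in the record above, the sketch below convolves a single filter with several rescaled copies of an image and takes an element-wise maximum over the scale responses after resizing them back, without adding any filter parameters. It is a simplified single-filter NumPy/SciPy approximation, not the paper's architecture; the scale set, image, and kernel are placeholders.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import convolve2d

def scale_invariant_response(image, kernel, scales=(0.5, 1.0, 2.0)):
    """Apply one shared filter at several image scales and max-pool over scales."""
    responses = []
    for s in scales:
        scaled = zoom(image, s, order=1)                 # rescale the input
        r = convolve2d(scaled, kernel, mode="same")      # shared filter weights
        back = zoom(r, image.shape[0] / r.shape[0], order=1)
        back = back[: image.shape[0], : image.shape[1]]  # align with the original grid
        pad = [(0, image.shape[0] - back.shape[0]), (0, image.shape[1] - back.shape[1])]
        responses.append(np.pad(back, pad))
    return np.max(np.stack(responses), axis=0)           # invariance via max over scales

img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0        # a toy blob
edge = np.array([[1.0, -1.0]])                           # a tiny edge filter
print(scale_invariant_response(img, edge).shape)         # (32, 32)
```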
1409.3215 | Ilya Sutskever | Ilya Sutskever and Oriol Vinyals and Quoc V. Le | Sequence to Sequence Learning with Neural Networks | 9 pages | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) are powerful models that have achieved excellent
performance on difficult learning tasks. Although DNNs work well whenever large
labeled training sets are available, they cannot be used to map sequences to
sequences. In this paper, we present a general end-to-end approach to sequence
learning that makes minimal assumptions on the sequence structure. Our method
uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to
a vector of a fixed dimensionality, and then another deep LSTM to decode the
target sequence from the vector. Our main result is that on an English to
French translation task from the WMT'14 dataset, the translations produced by
the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's
BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did
not have difficulty on long sentences. For comparison, a phrase-based SMT
system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM
to rerank the 1000 hypotheses produced by the aforementioned SMT system, its
BLEU score increases to 36.5, which is close to the previous best result on
this task. The LSTM also learned sensible phrase and sentence representations
that are sensitive to word order and are relatively invariant to the active and
the passive voice. Finally, we found that reversing the order of the words in
all source sentences (but not target sentences) improved the LSTM's performance
markedly, because doing so introduced many short term dependencies between the
source and the target sentence which made the optimization problem easier.
| [
{
"version": "v1",
"created": "Wed, 10 Sep 2014 19:55:35 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Oct 2014 12:13:17 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Dec 2014 20:59:51 GMT"
}
] | 2014-12-16T00:00:00 | [
[
"Sutskever",
"Ilya",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Le",
"Quoc V.",
""
]
] | TITLE: Sequence to Sequence Learning with Neural Networks
ABSTRACT: Deep Neural Networks (DNNs) are powerful models that have achieved excellent
performance on difficult learning tasks. Although DNNs work well whenever large
labeled training sets are available, they cannot be used to map sequences to
sequences. In this paper, we present a general end-to-end approach to sequence
learning that makes minimal assumptions on the sequence structure. Our method
uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to
a vector of a fixed dimensionality, and then another deep LSTM to decode the
target sequence from the vector. Our main result is that on an English to
French translation task from the WMT'14 dataset, the translations produced by
the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's
BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did
not have difficulty on long sentences. For comparison, a phrase-based SMT
system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM
to rerank the 1000 hypotheses produced by the aforementioned SMT system, its
BLEU score increases to 36.5, which is close to the previous best result on
this task. The LSTM also learned sensible phrase and sentence representations
that are sensitive to word order and are relatively invariant to the active and
the passive voice. Finally, we found that reversing the order of the words in
all source sentences (but not target sentences) improved the LSTM's performance
markedly, because doing so introduced many short term dependencies between the
source and the target sentence which made the optimization problem easier.
| no_new_dataset | 0.948489 |
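The encoder-decoder idea summarized in the record above can be sketched in a few lines of PyTorch: one LSTM reads the source tokens into a final hidden state, and a second LSTM unrolls from that state to emit target tokens. This is a minimal single-layer, teacher-forced sketch with invented vocabulary sizes and dimensions, far smaller than the deep multilayer model described in the paper; only the source-reversal trick is mimicked with a flip.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        # Encode the (possibly reversed) source into a fixed-size state
        _, state = self.encoder(self.src_emb(src))
        # Decode the target sequence conditioned on that state (teacher forcing)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), state)
        return self.out(dec_out)                 # logits over the target vocabulary

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))             # batch of 2 source sentences
tgt = torch.randint(0, 1000, (2, 5))             # shifted target inputs
logits = model(torch.flip(src, dims=[1]), tgt)   # flipping mimics source reversal
print(logits.shape)                              # torch.Size([2, 5, 1000])
```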
1412.4378 | Bharath Kumar Samanthula | Bharath K. Samanthula, Fang-Yu Rao, Elisa Bertino, Xun Yi, Dongxi Liu | Privacy-Preserving and Outsourced Multi-User k-Means Clustering | 16 pages, 2 figures, 5 tables | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many techniques for privacy-preserving data mining (PPDM) have been
investigated over the past decade. Often, the entities involved in the data
mining process are end-users or organizations with limited computing and
storage resources. As a result, such entities may want to refrain from
participating in the PPDM process. To overcome this issue and to take many
other benefits of cloud computing, outsourcing PPDM tasks to the cloud
environment has recently gained special attention. We consider the scenario
where n entities outsource their databases (in encrypted format) to the cloud
and ask the cloud to perform the clustering task on their combined data in a
privacy-preserving manner. We term such a process as privacy-preserving and
outsourced distributed clustering (PPODC). In this paper, we propose a novel
and efficient solution to the PPODC problem based on k-means clustering
algorithm. The main novelty of our solution lies in avoiding the secure
division operations required in computing cluster centers altogether through an
efficient transformation technique. Our solution builds the clusters securely
in an iterative fashion and returns the final cluster centers to all entities
when a pre-determined termination condition holds. The proposed solution
protects data confidentiality of all the participating entities under the
standard semi-honest model. To the best of our knowledge, ours is the first
work to discuss and propose a comprehensive solution to the PPODC problem that
incurs negligible cost on the participating entities. We theoretically estimate
both the computation and communication costs of the proposed protocol and also
demonstrate its practical value through experiments on a real dataset.
| [
{
"version": "v1",
"created": "Sun, 14 Dec 2014 16:54:26 GMT"
}
] | 2014-12-16T00:00:00 | [
[
"Samanthula",
"Bharath K.",
""
],
[
"Rao",
"Fang-Yu",
""
],
[
"Bertino",
"Elisa",
""
],
[
"Yi",
"Xun",
""
],
[
"Liu",
"Dongxi",
""
]
] | TITLE: Privacy-Preserving and Outsourced Multi-User k-Means Clustering
ABSTRACT: Many techniques for privacy-preserving data mining (PPDM) have been
investigated over the past decade. Often, the entities involved in the data
mining process are end-users or organizations with limited computing and
storage resources. As a result, such entities may want to refrain from
participating in the PPDM process. To overcome this issue and to take many
other benefits of cloud computing, outsourcing PPDM tasks to the cloud
environment has recently gained special attention. We consider the scenario
where n entities outsource their databases (in encrypted format) to the cloud
and ask the cloud to perform the clustering task on their combined data in a
privacy-preserving manner. We term such a process as privacy-preserving and
outsourced distributed clustering (PPODC). In this paper, we propose a novel
and efficient solution to the PPODC problem based on k-means clustering
algorithm. The main novelty of our solution lies in avoiding the secure
division operations required in computing cluster centers altogether through an
efficient transformation technique. Our solution builds the clusters securely
in an iterative fashion and returns the final cluster centers to all entities
when a pre-determined termination condition holds. The proposed solution
protects data confidentiality of all the participating entities under the
standard semi-honest model. To the best of our knowledge, ours is the first
work to discuss and propose a comprehensive solution to the PPODC problem that
incurs negligible cost on the participating entities. We theoretically estimate
both the computation and communication costs of the proposed protocol and also
demonstrate its practical value through experiments on a real dataset.
| no_new_dataset | 0.947478 |
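For orientation, the plaintext k-means step that the secure protocol in the record above must emulate is shown below; the `sums / counts` line is exactly the division that, per the abstract, their transformation avoids computing securely. This is ordinary unencrypted Lloyd iteration with arbitrary toy parameters, not the privacy-preserving protocol itself.

```python
import numpy as np

def kmeans_step(X, centers):
    """One Lloyd iteration: assign points to nearest center, then recompute centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    sums = np.zeros_like(centers)
    counts = np.zeros(len(centers))
    for i, a in enumerate(assign):
        sums[a] += X[i]
        counts[a] += 1
    counts = np.maximum(counts, 1)           # guard against empty clusters
    centers = sums / counts[:, None]         # <-- the division a secure protocol must avoid
    return centers, assign

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(5):
    centers, assign = kmeans_step(X, centers)
print(np.round(centers, 2))                  # roughly (0, 0) and (3, 3)
```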
1412.4682 | Mykola Pechenizkiy | Erik Tromp and Mykola Pechenizkiy | Rule-based Emotion Detection on Social Media: Putting Tweets on
Plutchik's Wheel | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study sentiment analysis beyond the typical granularity of polarity and
instead use Plutchik's wheel of emotions model. We introduce RBEM-Emo as an
extension to the Rule-Based Emission Model algorithm to deduce such emotions
from human-written messages. We evaluate our approach on two different datasets
and compare its performance with the current state-of-the-art techniques for
emotion detection, including a recursive auto-encoder. The results of the
experimental study suggest that RBEM-Emo is a promising approach advancing the
current state-of-the-art in emotion detection.
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2014 17:20:47 GMT"
}
] | 2014-12-16T00:00:00 | [
[
"Tromp",
"Erik",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] | TITLE: Rule-based Emotion Detection on Social Media: Putting Tweets on
Plutchik's Wheel
ABSTRACT: We study sentiment analysis beyond the typical granularity of polarity and
instead use Plutchik's wheel of emotions model. We introduce RBEM-Emo as an
extension to the Rule-Based Emission Model algorithm to deduce such emotions
from human-written messages. We evaluate our approach on two different datasets
and compare its performance with the current state-of-the-art techniques for
emotion detection, including a recursive auto-encoder. The results of the
experimental study suggest that RBEM-Emo is a promising approach advancing the
current state-of-the-art in emotion detection.
| no_new_dataset | 0.946941 |
1412.4726 | Rustam Tagiew | Rustam Tagiew and Dmitry I. Ignatov and Fadi Amroush | Experimental economics for web mining | 3 pages, 2 tables | null | null | null | cs.CE cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper offers a step towards research infrastructure, which makes data
from experimental economics efficiently usable for analysis of web data. We
believe that regularities of human behavior found in experimental data also
emerge in real world web data. A format for data from experiments is suggested,
which enables its publication as open data. Once standardized datasets of
experiments are available on-line, web mining can take advantage of this
data. Further, the questions about the order of causalities arising from web
data analysis can inspire new experiment setups.
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2014 19:09:48 GMT"
}
] | 2014-12-16T00:00:00 | [
[
"Tagiew",
"Rustam",
""
],
[
"Ignatov",
"Dmitry I.",
""
],
[
"Amroush",
"Fadi",
""
]
] | TITLE: Experimental economics for web mining
ABSTRACT: This paper offers a step towards research infrastructure, which makes data
from experimental economics efficiently usable for analysis of web data. We
believe that regularities of human behavior found in experimental data also
emerge in real world web data. A format for data from experiments is suggested,
which enables its publication as open data. Once standardized datasets of
experiments are available on-line, web mining can take advantages from this
data. Further, the questions about the order of causalities arisen from web
data analysis can inspire new experiment setups.
| no_new_dataset | 0.949763 |
1412.4754 | Yuxiao Dong | Yuxiao Dong, Reid A. Johnson, Nitesh V. Chawla | Will This Paper Increase Your h-index? Scientific Impact Prediction | Proc. of the 8th ACM International Conference on Web Search and Data
Mining (WSDM'15) | null | 10.1145/2684822.2685314 | null | cs.SI cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientific impact plays a central role in the evaluation of the output of
scholars, departments, and institutions. A widely used measure of scientific
impact is citations, with a growing body of literature focused on predicting
the number of citations obtained by any given publication. The effectiveness of
such predictions, however, is fundamentally limited by the power-law
distribution of citations, whereby publications with few citations are
extremely common and publications with many citations are relatively rare.
Given this limitation, in this work we instead address a related question asked
by many academic researchers in the course of writing a paper, namely: "Will
this paper increase my h-index?" Using a real academic dataset with over 1.7
million authors, 2 million papers, and 8 million citation relationships from
the premier online academic service ArnetMiner, we formalize a novel scientific
impact prediction problem to examine several factors that can drive a paper to
increase the primary author's h-index. We find that the researcher's authority
on the publication topic and the venue in which the paper is published are
crucial factors to the increase of the primary author's h-index, while the
topic popularity and the co-authors' h-indices are of surprisingly little
relevance. By leveraging relevant factors, we find a greater than 87.5%
potential predictability for whether a paper will contribute to an author's
h-index within five years. As a further experiment, we generate a
self-prediction for this paper, estimating that there is a 76% probability that
it will contribute to the h-index of the co-author with the highest current
h-index in five years. We conclude that our findings on the quantification of
scientific impact can help researchers to expand their influence and more
effectively leverage their position of "standing on the shoulders of giants."
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2014 20:36:00 GMT"
}
] | 2014-12-16T00:00:00 | [
[
"Dong",
"Yuxiao",
""
],
[
"Johnson",
"Reid A.",
""
],
[
"Chawla",
"Nitesh V.",
""
]
] | TITLE: Will This Paper Increase Your h-index? Scientific Impact Prediction
ABSTRACT: Scientific impact plays a central role in the evaluation of the output of
scholars, departments, and institutions. A widely used measure of scientific
impact is citations, with a growing body of literature focused on predicting
the number of citations obtained by any given publication. The effectiveness of
such predictions, however, is fundamentally limited by the power-law
distribution of citations, whereby publications with few citations are
extremely common and publications with many citations are relatively rare.
Given this limitation, in this work we instead address a related question asked
by many academic researchers in the course of writing a paper, namely: "Will
this paper increase my h-index?" Using a real academic dataset with over 1.7
million authors, 2 million papers, and 8 million citation relationships from
the premier online academic service ArnetMiner, we formalize a novel scientific
impact prediction problem to examine several factors that can drive a paper to
increase the primary author's h-index. We find that the researcher's authority
on the publication topic and the venue in which the paper is published are
crucial factors to the increase of the primary author's h-index, while the
topic popularity and the co-authors' h-indices are of surprisingly little
relevance. By leveraging relevant factors, we find a greater than 87.5%
potential predictability for whether a paper will contribute to an author's
h-index within five years. As a further experiment, we generate a
self-prediction for this paper, estimating that there is a 76% probability that
it will contribute to the h-index of the co-author with the highest current
h-index in five years. We conclude that our findings on the quantification of
scientific impact can help researchers to expand their influence and more
effectively leverage their position of "standing on the shoulders of giants."
| no_new_dataset | 0.943919 |
1412.3898 | Lu Yu | Lu Yu and Junming Huang and Chuang Liu and Zike Zhang | ILCR: Item-based Latent Factors for Sparse Collaborative Retrieval | 10 pages, conference | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactions between search and recommendation have recently attracted
significant attention, and several studies have shown that many potential
applications involve a joint problem of producing recommendations to users
with respect to a given query, termed $Collaborative$ $Retrieval$ (CR).
Successful algorithms designed for CR should be potentially flexible at dealing
with the sparsity challenges since the setup of collaborative retrieval
associates with a given $query$ $\times$ $user$ $\times$ $item$ tensor instead
of traditional $user$ $\times$ $item$ matrix. Recently, several works are
proposed to study CR task from users' perspective. In this paper, we aim to
sufficiently explore the sophisticated relationship of each $query$ $\times$
$user$ $\times$ $item$ triple from items' perspective. By integrating
item-based collaborative information for this joint task, we present an
alternative factorized model that could better evaluate the ranks of those
items with sparse information for the given query-user pair. In addition, we
suggest to employ a recently proposed scalable ranking learning algorithm,
namely BPR, to optimize the state-of-the-art approach, $Latent$ $Collaborative$
$Retrieval$ model, instead of the original learning algorithm. The experimental
results on two real-world datasets, (i.e. \emph{Last.fm}, \emph{Yelp}),
demonstrate the efficiency and effectiveness of our proposed approach.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2014 06:32:47 GMT"
}
] | 2014-12-15T00:00:00 | [
[
"Yu",
"Lu",
""
],
[
"Huang",
"Junming",
""
],
[
"Liu",
"Chuang",
""
],
[
"Zhang",
"Zike",
""
]
] | TITLE: ILCR: Item-based Latent Factors for Sparse Collaborative Retrieval
ABSTRACT: Interactions between search and recommendation have recently attracted
significant attention, and several studies have shown that many potential
applications involve a joint problem of producing recommendations to users
with respect to a given query, termed $Collaborative$ $Retrieval$ (CR).
Successful algorithms designed for CR should be potentially flexible at dealing
with the sparsity challenges since the setup of collaborative retrieval
is associated with a given $query$ $\times$ $user$ $\times$ $item$ tensor instead
of the traditional $user$ $\times$ $item$ matrix. Recently, several works have been
proposed to study the CR task from users' perspective. In this paper, we aim to
sufficiently explore the sophisticated relationship of each $query$ $\times$
$user$ $\times$ $item$ triple from items' perspective. By integrating
item-based collaborative information for this joint task, we present an
alternative factorized model that could better evaluate the ranks of those
items with sparse information for the given query-user pair. In addition, we
suggest to employ a recently proposed scalable ranking learning algorithm,
namely BPR, to optimize the state-of-the-art approach, $Latent$ $Collaborative$
$Retrieval$ model, instead of the original learning algorithm. The experimental
results on two real-world datasets, (i.e. \emph{Last.fm}, \emph{Yelp}),
demonstrate the efficiency and effectiveness of our proposed approach.
| no_new_dataset | 0.942242 |
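Since the record above optimizes its model with BPR, here is a bare-bones sketch of one stochastic BPR update for a latent-factor model: sample a (user, observed item, unobserved item) triple and push the observed item's score above the other. It is a generic matrix-factorization BPR step with invented dimensions and learning rate, not the ILCR model, which additionally conditions on a query.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 200, 16
U = 0.1 * rng.normal(size=(n_users, k))        # user latent factors
V = 0.1 * rng.normal(size=(n_items, k))        # item latent factors

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One SGD ascent step on ln sigma(x_ui - x_uj) with L2 regularization."""
    x_uij = U[u] @ V[i] - U[u] @ V[j]
    g = 1.0 / (1.0 + np.exp(x_uij))             # = 1 - sigma(x_uij)
    u_old = U[u].copy()
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * u_old - reg * V[i])
    V[j] += lr * (-g * u_old - reg * V[j])

# Toy usage: user 3 interacted with item 10 but not with item 7
for _ in range(50):
    bpr_step(3, 10, 7)
print(U[3] @ V[10] > U[3] @ V[7])               # the observed item now outranks the other
```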
1412.3919 | Alexandre Abraham | Alexandre Abraham (NEUROSPIN, INRIA Saclay - Ile de France), Fabian
Pedregosa (INRIA Saclay - Ile de France), Michael Eickenberg (LNAO, INRIA
Saclay - Ile de France), Philippe Gervais (NEUROSPIN, INRIA Saclay - Ile de
France, LNAO), Andreas Muller, Jean Kossaifi, Alexandre Gramfort (NEUROSPIN,
LTCI), Bertrand Thirion (NEUROSPIN, INRIA Saclay - Ile de France), G\"ael
Varoquaux (NEUROSPIN, INRIA Saclay - Ile de France, LNAO) | Machine Learning for Neuroimaging with Scikit-Learn | Frontiers in neuroscience, Frontiers Research Foundation, 2013, pp.15 | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical machine learning methods are increasingly used for neuroimaging
data analysis. Their main virtue is their ability to model high-dimensional
datasets, e.g. multivariate analysis of activation images or resting-state time
series. Supervised learning is typically used in decoding or encoding settings
to relate brain images to behavioral or clinical observations, while
unsupervised learning can uncover hidden structures in sets of images (e.g.
resting state functional MRI) or find sub-populations in large cohorts. By
considering different functional neuroimaging applications, we illustrate how
scikit-learn, a Python machine learning library, can be used to perform some
key analysis steps. Scikit-learn contains a very large set of statistical
learning algorithms, both supervised and unsupervised, and its application to
neuroimaging data provides a versatile tool to study the brain.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2014 08:38:35 GMT"
}
] | 2014-12-15T00:00:00 | [
[
"Abraham",
"Alexandre",
"",
"NEUROSPIN, INRIA Saclay - Ile de France"
],
[
"Pedregosa",
"Fabian",
"",
"INRIA Saclay - Ile de France"
],
[
"Eickenberg",
"Michael",
"",
"LNAO, INRIA\n Saclay - Ile de France"
],
[
"Gervais",
"Philippe",
"",
"NEUROSPIN, INRIA Saclay - Ile de\n France, LNAO"
],
[
"Muller",
"Andreas",
"",
"NEUROSPIN,\n LTCI"
],
[
"Kossaifi",
"Jean",
"",
"NEUROSPIN,\n LTCI"
],
[
"Gramfort",
"Alexandre",
"",
"NEUROSPIN,\n LTCI"
],
[
"Thirion",
"Bertrand",
"",
"NEUROSPIN, INRIA Saclay - Ile de France"
],
[
"Varoquaux",
"Gäel",
"",
"NEUROSPIN, INRIA Saclay - Ile de France, LNAO"
]
] | TITLE: Machine Learning for Neuroimaging with Scikit-Learn
ABSTRACT: Statistical machine learning methods are increasingly used for neuroimaging
data analysis. Their main virtue is their ability to model high-dimensional
datasets, e.g. multivariate analysis of activation images or resting-state time
series. Supervised learning is typically used in decoding or encoding settings
to relate brain images to behavioral or clinical observations, while
unsupervised learning can uncover hidden structures in sets of images (e.g.
resting state functional MRI) or find sub-populations in large cohorts. By
considering different functional neuroimaging applications, we illustrate how
scikit-learn, a Python machine learning library, can be used to perform some
key analysis steps. Scikit-learn contains a very large set of statistical
learning algorithms, both supervised and unsupervised, and its application to
neuroimaging data provides a versatile tool to study the brain.
| no_new_dataset | 0.941385 |
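In the spirit of the decoding use case described in the record above, the snippet below shows the typical scikit-learn pattern of cross-validating a linear classifier on a feature matrix of shape (samples, voxels). The synthetic array stands in for masked fMRI data, which in practice is produced with dedicated neuroimaging tooling; classifier choice and dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_voxels = 80, 500
y = rng.integers(0, 2, size=n_samples)                 # two experimental conditions
X = rng.normal(size=(n_samples, n_voxels))
X[y == 1, :20] += 1.0                                   # weak signal in a few "voxels"

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)               # standard decoding pipeline
print(scores.mean())                                    # well above chance (~0.5)
```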
1412.4042 | D\'aniel Kondor Mr | D\'aniel Kondor, Istv\'an Csabai, J\'anos Sz\"ule, M\'arton P\'osfai,
G\'abor Vattay | Inferring the interplay of network structure and market effects in
Bitcoin | project website: http://www.vo.elte.hu/bitcoin | New J. Phys. 16 (2014) 125003 | 10.1088/1367-2630/16/12/125003 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A main focus in economics research is understanding the time series of prices
of goods and assets. While statistical models using only the properties of the
time series itself have been successful in many aspects, we expect to gain a
better understanding of the phenomena involved if we can model the underlying
system of interacting agents. In this article, we consider the history of
Bitcoin, a novel digital currency system, for which the complete list of
transactions is available for analysis. Using this dataset, we reconstruct the
transaction network between users and analyze changes in the structure of the
subgraph induced by the most active users. Our approach is based on the
unsupervised identification of important features of the time variation of the
network. Applying the widely used method of Principal Component Analysis to the
matrix constructed from snapshots of the network at different times, we are
able to show how structural changes in the network accompany significant
changes in the exchange price of bitcoins.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2014 16:31:24 GMT"
}
] | 2014-12-15T00:00:00 | [
[
"Kondor",
"Dániel",
""
],
[
"Csabai",
"István",
""
],
[
"Szüle",
"János",
""
],
[
"Pósfai",
"Márton",
""
],
[
"Vattay",
"Gábor",
""
]
] | TITLE: Inferring the interplay of network structure and market effects in
Bitcoin
ABSTRACT: A main focus in economics research is understanding the time series of prices
of goods and assets. While statistical models using only the properties of the
time series itself have been successful in many aspects, we expect to gain a
better understanding of the phenomena involved if we can model the underlying
system of interacting agents. In this article, we consider the history of
Bitcoin, a novel digital currency system, for which the complete list of
transactions is available for analysis. Using this dataset, we reconstruct the
transaction network between users and analyze changes in the structure of the
subgraph induced by the most active users. Our approach is based on the
unsupervised identification of important features of the time variation of the
network. Applying the widely used method of Principal Component Analysis to the
matrix constructed from snapshots of the network at different times, we are
able to show how structural changes in the network accompany significant
changes in the exchange price of bitcoins.
| new_dataset | 0.724919 |
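The core analysis step in the record above, PCA applied to a matrix whose rows describe successive network snapshots, can be outlined as follows. The snapshot features used here (node count, edge count, density, degree statistics) and the random edge lists are invented placeholders for whatever structural measurements one extracts from the actual transaction graph.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def snapshot_features(edges):
    """Summarize one network snapshot (a list of (u, v) edges) as a feature row."""
    nodes = {u for u, v in edges} | {v for u, v in edges}
    degs = np.zeros(max(nodes) + 1)
    for u, v in edges:
        degs[u] += 1; degs[v] += 1
    degs = degs[sorted(nodes)]
    density = 2 * len(edges) / (len(nodes) * (len(nodes) - 1))
    return [len(nodes), len(edges), density, degs.mean(), degs.max()]

rng = np.random.default_rng(0)
snapshots = [[(rng.integers(0, 30), rng.integers(0, 30)) for _ in range(100 + 10 * t)]
             for t in range(12)]                      # 12 fake monthly snapshots
M = np.array([snapshot_features(s) for s in snapshots])
components = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(M))
print(components.shape)                               # (12, 2): one point per snapshot
```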
1412.4102 | Chunyu Wang | Chunyu Wang, John Flynn, Yizhou Wang, Alan L. Yuille | Representing Data by a Mixture of Activated Simplices | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new model which represents data as a mixture of simplices.
Simplices are geometric structures that generalize triangles. We give a simple
geometric understanding that allows us to learn a simplicial structure
efficiently. Our method requires that the data are unit normalized (and thus
lie on the unit sphere). We show that under this restriction, building a model
with simplices amounts to constructing a convex hull inside the sphere whose
boundary facets are close to the data. We call the boundary facets of the convex
hull that are close to the data Activated Simplices. While the total number of
bases used to build the simplices is a parameter of the model, the dimensions
of the individual activated simplices are learned from the data. Simplices can
have different dimensions, which facilitates modeling of inhomogeneous data
sources. The simplicial structure is bounded --- this is appropriate for
modeling data with constraints, such as human elbows can not bend more than 180
degrees. The simplices are easy to interpret and extremes within the data can
be discovered among the vertices. The method provides good reconstruction and
regularization. It supports good nearest neighbor classification and it allows
realistic generative models to be constructed. It achieves state-of-the-art
results on benchmark datasets, including 3D poses and digits.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2014 20:12:40 GMT"
}
] | 2014-12-15T00:00:00 | [
[
"Wang",
"Chunyu",
""
],
[
"Flynn",
"John",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Representing Data by a Mixture of Activated Simplices
ABSTRACT: We present a new model which represents data as a mixture of simplices.
Simplices are geometric structures that generalize triangles. We give a simple
geometric understanding that allows us to learn a simplicial structure
efficiently. Our method requires that the data are unit normalized (and thus
lie on the unit sphere). We show that under this restriction, building a model
with simplices amounts to constructing a convex hull inside the sphere whose
boundary facets are close to the data. We call the boundary facets of the convex
hull that are close to the data Activated Simplices. While the total number of
bases used to build the simplices is a parameter of the model, the dimensions
of the individual activated simplices are learned from the data. Simplices can
have different dimensions, which facilitates modeling of inhomogeneous data
sources. The simplicial structure is bounded --- this is appropriate for
modeling data with constraints, such as human elbows can not bend more than 180
degrees. The simplices are easy to interpret and extremes within the data can
be discovered among the vertices. The method provides good reconstruction and
regularization. It supports good nearest neighbor classification and it allows
realistic generative models to be constructed. It achieves state-of-the-art
results on benchmark datasets, including 3D poses and digits.
| no_new_dataset | 0.954393 |
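The geometric construction sketched in the record above, unit-normalizing the data and then inspecting the facets of its convex hull on the sphere that lie close to the data, can be prototyped with SciPy's convex hull routine. This is only a toy illustration of the normalization-plus-hull step on random 3-D points with a crude closeness criterion and an arbitrary threshold, not the learning algorithm of the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # unit-normalize: data lies on the sphere

hull = ConvexHull(X)                                 # convex hull inscribed in the sphere
print("facets (candidate simplices):", len(hull.simplices))

# One crude notion of "activated": a facet whose center lies near some data point
centers = X[hull.simplices].mean(axis=1)
dists = np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=2).min(axis=1)
activated = hull.simplices[dists < 0.3]              # illustrative threshold
print("activated facets:", len(activated))
```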
1406.0146 | Darko Hric | Darko Hric, Richard K. Darst, Santo Fortunato | Community detection in networks: Structural communities versus ground
truth | 21 pages, 19 figures | Phys. Rev. E 90, 062805 (2014) | 10.1103/PhysRevE.90.062805 | null | physics.soc-ph cs.IR cs.SI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms to find communities in networks rely just on structural
information and search for cohesive subsets of nodes. On the other hand, most
scholars implicitly or explicitly assume that structural communities represent
groups of nodes with similar (non-topological) properties or functions. This
hypothesis could not be verified, so far, because of the lack of network
datasets with information on the classification of the nodes. We show that
traditional community detection methods fail to find the metadata groups in
many large networks. Our results show that there is a marked separation between
structural communities and metadata groups, in line with recent findings. That
means that either our current modeling of community structure has to be
substantially modified, or that metadata groups may not be recoverable from
topology alone.
| [
{
"version": "v1",
"created": "Sun, 1 Jun 2014 09:06:16 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Dec 2014 18:08:15 GMT"
}
] | 2014-12-12T00:00:00 | [
[
"Hric",
"Darko",
""
],
[
"Darst",
"Richard K.",
""
],
[
"Fortunato",
"Santo",
""
]
] | TITLE: Community detection in networks: Structural communities versus ground
truth
ABSTRACT: Algorithms to find communities in networks rely just on structural
information and search for cohesive subsets of nodes. On the other hand, most
scholars implicitly or explicitly assume that structural communities represent
groups of nodes with similar (non-topological) properties or functions. This
hypothesis could not be verified, so far, because of the lack of network
datasets with information on the classification of the nodes. We show that
traditional community detection methods fail to find the metadata groups in
many large networks. Our results show that there is a marked separation between
structural communities and metadata groups, in line with recent findings. That
means that either our current modeling of community structure has to be
substantially modified, or that metadata groups may not be recoverable from
topology alone.
| no_new_dataset | 0.948106 |
1412.3474 | Eric Tzeng | Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, Trevor Darrell | Deep Domain Confusion: Maximizing for Domain Invariance | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent reports suggest that a generic supervised deep CNN model trained on a
large-scale dataset reduces, but does not remove, dataset bias on a standard
benchmark. Fine-tuning deep models in a new domain can require a significant
amount of data, which for many applications is simply not available. We propose
a new CNN architecture which introduces an adaptation layer and an additional
domain confusion loss, to learn a representation that is both semantically
meaningful and domain invariant. We additionally show that a domain confusion
metric can be used for model selection to determine the dimension of an
adaptation layer and the best position for the layer in the CNN architecture.
Our proposed adaptation method offers empirical performance which exceeds
previously published results on a standard benchmark visual domain adaptation
task.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2014 21:20:54 GMT"
}
] | 2014-12-12T00:00:00 | [
[
"Tzeng",
"Eric",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Zhang",
"Ning",
""
],
[
"Saenko",
"Kate",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Deep Domain Confusion: Maximizing for Domain Invariance
ABSTRACT: Recent reports suggest that a generic supervised deep CNN model trained on a
large-scale dataset reduces, but does not remove, dataset bias on a standard
benchmark. Fine-tuning deep models in a new domain can require a significant
amount of data, which for many applications is simply not available. We propose
a new CNN architecture which introduces an adaptation layer and an additional
domain confusion loss, to learn a representation that is both semantically
meaningful and domain invariant. We additionally show that a domain confusion
metric can be used for model selection to determine the dimension of an
adaptation layer and the best position for the layer in the CNN architecture.
Our proposed adaptation method offers empirical performance which exceeds
previously published results on a standard benchmark visual domain adaptation
task.
| no_new_dataset | 0.949623 |
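Domain-confusion penalties of the kind mentioned in the record above are commonly instantiated as a Maximum Mean Discrepancy (MMD) between source and target feature batches. A simple linear-kernel version, which is one standard choice rather than necessarily the exact loss of this paper, is just the squared distance between the two batch means of the adaptation-layer activations; the feature dimensions and batch sizes below are placeholders.

```python
import numpy as np

def linear_mmd2(source_feats, target_feats):
    """Squared MMD with a linear kernel: ||mean(phi_s) - mean(phi_t)||^2."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(64, 256))      # adaptation-layer features, source batch
tgt = rng.normal(loc=0.5, size=(64, 256))      # target batch drawn from a shifted domain
print(linear_mmd2(src, tgt))                   # large when the domains differ...
print(linear_mmd2(src, rng.normal(size=(64, 256))))   # ...small when they match

# Schematic training objective: classification_loss + lam * linear_mmd2(f_src, f_tgt)
```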
1412.3684 | Soren Goyal | Soren Goyal, Paul Benjamin | Object Recognition Using Deep Neural Networks: A Survey | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognition of objects using Deep Neural Networks is an active area of
research and many breakthroughs have been made in the last few years. The paper
attempts to indicate how far this field has progressed. The paper briefly
describes the history of research in Neural Networks and describes several of
the recent advances in this field. The performances of recently developed
Neural Network algorithms over benchmark datasets have been tabulated. Finally,
some of the applications of this field have been provided.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2014 18:23:13 GMT"
}
] | 2014-12-12T00:00:00 | [
[
"Goyal",
"Soren",
""
],
[
"Benjamin",
"Paul",
""
]
] | TITLE: Object Recognition Using Deep Neural Networks: A Survey
ABSTRACT: Recognition of objects using Deep Neural Networks is an active area of
research and many breakthroughs have been made in the last few years. The paper
attempts to indicate how far this field has progressed. The paper briefly
describes the history of research in Neural Networks and describes several of
the recent advances in this field. The performances of recently developed
Neural Network algorithms over benchmark datasets have been tabulated. Finally,
some of the applications of this field have been provided.
| no_new_dataset | 0.95594 |
1402.5450 | Nicholas Rotella | Nicholas Rotella, Michael Bloesch, Ludovic Righetti and Stefan Schaal | State Estimation for a Humanoid Robot | IROS 2014 Submission, IEEE/RSJ International Conference on
Intelligent Robots and Systems (2014) 952-958 | null | 10.1109/IROS.2014.6942674 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a framework for state estimation on a humanoid robot
platform using only common proprioceptive sensors and knowledge of leg
kinematics. The presented approach extends that detailed in [1] on a quadruped
platform by incorporating the rotational constraints imposed by the humanoid's
flat feet. As in previous work, the proposed Extended Kalman Filter (EKF)
accommodates contact switching and makes no assumptions about gait or terrain,
making it applicable on any humanoid platform for use in any task. The filter
employs a sensor-based prediction model which uses inertial data from an IMU
and corrects for integrated error using a kinematics-based measurement model
which relies on joint encoders and a kinematic model to determine the relative
position and orientation of the feet. A nonlinear observability analysis is
performed on both the original and updated filters and it is concluded that the
new filter significantly simplifies singular cases and improves the
observability characteristics of the system. Results on simulated walking and
squatting datasets demonstrate the performance gain of the flat-foot filter as
well as confirm the results of the presented observability analysis.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2014 23:35:34 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Dec 2014 20:42:58 GMT"
}
] | 2014-12-11T00:00:00 | [
[
"Rotella",
"Nicholas",
""
],
[
"Bloesch",
"Michael",
""
],
[
"Righetti",
"Ludovic",
""
],
[
"Schaal",
"Stefan",
""
]
] | TITLE: State Estimation for a Humanoid Robot
ABSTRACT: This paper introduces a framework for state estimation on a humanoid robot
platform using only common proprioceptive sensors and knowledge of leg
kinematics. The presented approach extends that detailed in [1] on a quadruped
platform by incorporating the rotational constraints imposed by the humanoid's
flat feet. As in previous work, the proposed Extended Kalman Filter (EKF)
accommodates contact switching and makes no assumptions about gait or terrain,
making it applicable on any humanoid platform for use in any task. The filter
employs a sensor-based prediction model which uses inertial data from an IMU
and corrects for integrated error using a kinematics-based measurement model
which relies on joint encoders and a kinematic model to determine the relative
position and orientation of the feet. A nonlinear observability analysis is
performed on both the original and updated filters and it is concluded that the
new filter significantly simplifies singular cases and improves the
observability characteristics of the system. Results on simulated walking and
squatting datasets demonstrate the performance gain of the flat-foot filter as
well as confirm the results of the presented observability analysis.
| no_new_dataset | 0.947914 |
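For readers unfamiliar with the filter structure referenced in the record above, a generic Extended Kalman Filter predict/update cycle looks as follows. The state, models, and Jacobians here are abstract placeholders (a 1-D constant-velocity toy), whereas the paper's filter uses an IMU-driven prediction model and a leg-kinematics measurement model with flat-foot rotational constraints.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through the (nonlinear) process model f."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z and measurement model h."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: state = (position, velocity), dt = 1, position measured directly
F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, lambda s: F @ s, F, Q=0.01 * np.eye(2))
x, P = ekf_update(x, P, z=np.array([1.2]), h=lambda s: H @ s, H=H, R=0.1 * np.eye(1))
print(np.round(x, 3))
```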
1412.3161 | Xiaoyu Wang | Xiaoyu Wang, Tianbao Yang, Guobin Chen, Yuanqing Lin | Object-centric Sampling for Fine-grained Image Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes to go beyond the state-of-the-art deep convolutional
neural network (CNN) by incorporating the information from object detection,
focusing on dealing with fine-grained image classification. Unfortunately, CNN
suffers from over-fitting when it is trained on existing fine-grained image
classification benchmarks, which typically only consist of less than a few tens
of thousands training images. Therefore, we first construct a large-scale
fine-grained car recognition dataset that consists of 333 car classes with more
than 150 thousand training images. With this large-scale dataset, we are able
to build a strong baseline for CNN with top-1 classification accuracy of 81.6%.
One major challenge in fine-grained image classification is that many classes
are very similar to each other while having large within-class variation. One
contributing factor to the within-class variation is cluttered image
background. However, the existing CNN training takes uniform window sampling
over the image, acting blind to the location of the object of interest. In
contrast, this paper proposes an \emph{object-centric sampling} (OCS) scheme
that samples image windows based on the object location information. The
challenge in using the location information lies in how to design powerful
object detector and how to handle the imperfectness of detection results. To
that end, we design a saliency-aware object detection approach specific for the
setting of fine-grained image classification, and the uncertainty of detection
results are naturally handled in our OCS scheme. Our framework is demonstrated
to be very effective, improving top-1 accuracy to 89.3% (from 81.6%) on the
large-scale fine-grained car classification dataset.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2014 00:28:49 GMT"
}
] | 2014-12-11T00:00:00 | [
[
"Wang",
"Xiaoyu",
""
],
[
"Yang",
"Tianbao",
""
],
[
"Chen",
"Guobin",
""
],
[
"Lin",
"Yuanqing",
""
]
] | TITLE: Object-centric Sampling for Fine-grained Image Classification
ABSTRACT: This paper proposes to go beyond the state-of-the-art deep convolutional
neural network (CNN) by incorporating the information from object detection,
focusing on dealing with fine-grained image classification. Unfortunately, CNN
suffers from over-fitting when it is trained on existing fine-grained image
classification benchmarks, which typically only consist of less than a few tens
of thousands training images. Therefore, we first construct a large-scale
fine-grained car recognition dataset that consists of 333 car classes with more
than 150 thousand training images. With this large-scale dataset, we are able
to build a strong baseline for CNN with top-1 classification accuracy of 81.6%.
One major challenge in fine-grained image classification is that many classes
are very similar to each other while having large within-class variation. One
contributing factor to the within-class variation is cluttered image
background. However, the existing CNN training takes uniform window sampling
over the image, acting blind to the location of the object of interest. In
contrast, this paper proposes an \emph{object-centric sampling} (OCS) scheme
that samples image windows based on the object location information. The
challenge in using the location information lies in how to design powerful
object detector and how to handle the imperfectness of detection results. To
that end, we design a saliency-aware object detection approach specific for the
setting of fine-grained image classification, and the uncertainty of detection
results are naturally handled in our OCS scheme. Our framework is demonstrated
to be very effective, improving top-1 accuracy to 89.3% (from 81.6%) on the
large-scale fine-grained car classification dataset.
| new_dataset | 0.883588 |
1412.3352 | Neda Pourali | Neda Pourali | Web image annotation by diffusion maps manifold learning algorithm | 11 pages, 8 figures | null | 10.5121/ijfcst.2014.4606 | null | cs.CV cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic image annotation is one of the most challenging problems in machine
vision areas. The goal of this task is to predict a number of keywords
automatically for images captured in real data. Many methods are based on
visual features in order to calculate similarities between image samples. But
the computation cost of these approaches is very high. These methods require
many training samples to be stored in memory. To lessen this burden, a number
of techniques have been developed to reduce the number of features in a
dataset. Manifold learning is a popular approach to nonlinear dimensionality
reduction. In this paper, we investigate Diffusion maps manifold learning
method for web image auto-annotation task. Diffusion maps manifold learning
method is used to reduce the dimension of some visual features. Extensive
experiments and analysis on NUS-WIDE-LITE web image dataset with different
visual features show how this manifold learning dimensionality reduction method
can be applied effectively to image annotation.
| [
{
"version": "v1",
"created": "Mon, 8 Dec 2014 10:38:28 GMT"
}
] | 2014-12-11T00:00:00 | [
[
"Pourali",
"Neda",
""
]
] | TITLE: Web image annotation by diffusion maps manifold learning algorithm
ABSTRACT: Automatic image annotation is one of the most challenging problems in machine
vision areas. The goal of this task is to predict a number of keywords
automatically for images captured in real data. Many methods are based on
visual features in order to calculate similarities between image samples. But
the computation cost of these approaches is very high. These methods require
many training samples to be stored in memory. To lessen this burden, a number
of techniques have been developed to reduce the number of features in a
dataset. Manifold learning is a popular approach to nonlinear dimensionality
reduction. In this paper, we investigate Diffusion maps manifold learning
method for web image auto-annotation task. Diffusion maps manifold learning
method is used to reduce the dimension of some visual features. Extensive
experiments and analysis on NUS-WIDE-LITE web image dataset with different
visual features show how this manifold learning dimensionality reduction method
can be applied effectively to image annotation.
| no_new_dataset | 0.950549 |
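For concreteness, a minimal diffusion-maps embedding of a visual-feature matrix like the one used in the record above can be computed as follows: build a Gaussian affinity matrix, row-normalize it into a Markov transition matrix, and use its leading non-trivial eigenvectors (scaled by their eigenvalues) as the reduced features. The bandwidth, output dimensionality, and random data are illustrative stand-ins for the NUS-WIDE-LITE descriptors used in the paper.

```python
import numpy as np

def diffusion_maps(X, n_components=2, epsilon=1.0, t=1):
    """Basic diffusion maps: rows of X are samples; returns reduced coordinates."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-sq_dists / epsilon)                  # Gaussian kernel affinities
    P = W / W.sum(axis=1, keepdims=True)             # row-normalize -> Markov matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1)
    return eigvecs[:, 1:n_components + 1] * (eigvals[1:n_components + 1] ** t)

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))                # stand-in for visual descriptors
embedded = diffusion_maps(features, n_components=10)
print(embedded.shape)                                # (100, 10): reduced representation
```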
1306.4920 | Mariusz Tarnopolski | Mariusz Tarnopolski | Nonlinear time series analysis of Hyperion's rotation: photometric
observations and numerical simulations | An updated version (new template, structure, methods including
numerical simulations and aims; dropped the HE analysis, extended mLCE
analysis) available at arXiv:1412.2423 | null | null | null | nlin.CD astro-ph.EP physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The case of Hyperion has been studied extensively due to the fact that it is the
largest known celestial body of a highly aspherical shape. It also has a low
mass density and remains in a 4:3 orbital resonance with Titan. Its lightcurve,
obtained through photometric observations by Klavetter (1989a,b), was initially
used to show that Hyperion's rotation exhibits no periodicity. Herein it is
analyzed by means of time series analysis. The Hurst Exponent was estimated
to be H=0.87, indicating a persistent behaviour. The largest Lyapunov Exponent
$\lambda_{max}$ unfortunately could not be given a reliable estimate because of
the shortness of the dataset, consisting of 38 observational points. These results
are compared with numerical simulations, which gave a value H=0.88 for the
chaotic zone of the phase space. The Lyapunov time $T_{Lyap}=1/\lambda_{max}$
is about 30 days, which is roughly 1.5 times greater than the orbital period.
By conducting observations over a longer period, an insight into the dynamical
features of the present rotational state is possible.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2013 15:54:03 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Aug 2013 15:28:43 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Dec 2014 02:22:29 GMT"
}
] | 2014-12-10T00:00:00 | [
[
"Tarnopolski",
"Mariusz",
""
]
] | TITLE: Nonlinear time series analysis of Hyperion's rotation: photometric
observations and numerical simulations
ABSTRACT: The case of Hyperion has been studied extensively due to the fact that it is the
largest known celestial body of a highly aspherical shape. It also has a low
mass density and remains in a 4:3 orbital resonance with Titan. Its lightcurve,
obtained through photometric observations by Klavetter (1989a,b), was initially
used to show that Hyperion's rotation exhibits no periodicity. Herein it is
analyzed by means of time series analysis. The Hurst Exponent was estimated
to be H=0.87, indicating a persistent behaviour. The largest Lyapunov Exponent
$\lambda_{max}$ unfortunately could not be given a reliable estimate because of
the shortness of the dataset, consisting of 38 observational points. These results
are compared with numerical simulations, which gave a value H=0.88 for the
chaotic zone of the phase space. The Lyapunov time $T_{Lyap}=1/\lambda_{max}$
is about 30 days, which is roughly 1.5 times greater than the orbital period.
By conducting observations over a longer period, an insight into the dynamical
features of the present rotational state is possible.
| no_new_dataset | 0.945399 |
1406.2227 | Max Jaderberg | Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman | Synthetic Data and Artificial Neural Networks for Natural Scene Text
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we present a framework for the recognition of natural scene
text. Our framework does not require any human-labelled data, and performs word
recognition on the whole image holistically, departing from the character-based
recognition systems of the past. The deep neural network models at the centre
of this framework are trained solely on data produced by a synthetic text
generation engine -- synthetic data that is highly realistic and sufficient to
replace real data, giving us infinite amounts of training data. This excess of
data exposes new possibilities for word recognition models, and here we
consider three models, each one "reading" words in a different way: via 90k-way
dictionary encoding, character sequence encoding, and bag-of-N-grams encoding.
In the scenarios of language based and completely unconstrained text
recognition we greatly improve upon state-of-the-art performance on standard
datasets, using our fast, simple machinery and requiring zero data-acquisition
costs.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 15:53:33 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jun 2014 03:10:35 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Oct 2014 16:08:24 GMT"
},
{
"version": "v4",
"created": "Tue, 9 Dec 2014 11:22:59 GMT"
}
] | 2014-12-10T00:00:00 | [
[
"Jaderberg",
"Max",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Zisserman",
"Andrew",
""
]
] | TITLE: Synthetic Data and Artificial Neural Networks for Natural Scene Text
Recognition
ABSTRACT: In this work we present a framework for the recognition of natural scene
text. Our framework does not require any human-labelled data, and performs word
recognition on the whole image holistically, departing from the character-based
recognition systems of the past. The deep neural network models at the centre
of this framework are trained solely on data produced by a synthetic text
generation engine -- synthetic data that is highly realistic and sufficient to
replace real data, giving us infinite amounts of training data. This excess of
data exposes new possibilities for word recognition models, and here we
consider three models, each one "reading" words in a different way: via 90k-way
dictionary encoding, character sequence encoding, and bag-of-N-grams encoding.
In the scenarios of language based and completely unconstrained text
recognition we greatly improve upon state-of-the-art performance on standard
datasets, using our fast, simple machinery and requiring zero data-acquisition
costs.
| no_new_dataset | 0.952486 |
1410.5772 | Lutz Bornmann Dr. | Lutz Bornmann, Werner Marx | Methods for the generation of normalized citation impact scores in
bibliometrics: Which method best reflects the judgements of experts? | Accepted for publication in the Journal of Informetrics | null | null | null | cs.DL stat.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluative bibliometrics compares the citation impact of researchers,
research groups and institutions with each other across time scales and
disciplines. Both factors - discipline and period - have an influence on the
citation count which is independent of the quality of the publication.
Normalizing the citation impact of papers for these two factors started in the
mid-1980s. Since then, a range of different methods have been presented for
producing normalized citation impact scores. The current study uses a data set
of over 50,000 records to test which of the methods so far presented correlate
better with the assessment of papers by peers. The peer assessments come from
F1000Prime - a post-publication peer review system of the biomedical
literature. Of the normalized indicators, the current study involves not only
cited-side indicators, such as the mean normalized citation score, but also
citing-side indicators. As the results show, the correlations of the indicators
with the peer assessments all turn out to be very similar. Since F1000 focuses
on biomedicine, it is important that the results of this study are validated by
other studies based on datasets from other disciplines or (ideally) based on
multi-disciplinary datasets.
| [
{
"version": "v1",
"created": "Mon, 20 Oct 2014 07:57:32 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Dec 2014 10:58:06 GMT"
}
] | 2014-12-10T00:00:00 | [
[
"Bornmann",
"Lutz",
""
],
[
"Marx",
"Werner",
""
]
] | TITLE: Methods for the generation of normalized citation impact scores in
bibliometrics: Which method best reflects the judgements of experts?
ABSTRACT: Evaluative bibliometrics compares the citation impact of researchers,
research groups and institutions with each other across time scales and
disciplines. Both factors - discipline and period - have an influence on the
citation count which is independent of the quality of the publication.
Normalizing the citation impact of papers for these two factors started in the
mid-1980s. Since then, a range of different methods have been presented for
producing normalized citation impact scores. The current study uses a data set
of over 50,000 records to test which of the methods so far presented correlate
better with the assessment of papers by peers. The peer assessments come from
F1000Prime - a post-publication peer review system of the biomedical
literature. Of the normalized indicators, the current study involves not only
cited-side indicators, such as the mean normalized citation score, but also
citing-side indicators. As the results show, the correlations of the indicators
with the peer assessments all turn out to be very similar. Since F1000 focuses
on biomedicine, it is important that the results of this study are validated by
other studies based on datasets from other disciplines or (ideally) based on
multi-disciplinary datasets.
| no_new_dataset | 0.941061 |
1410.7835 | Zhensong Qian | Oliver Schulte, Zhensong Qian, Arthur E. Kirkpatrick, Xiaoqian Yin,
Yan Sun | Fast Learning of Relational Dependency Networks | 17 pages, 2 figures, 3 tables, Accepted as long paper by ILP 2014,
September 14- 16th, Nancy, France. Added the Appendix: Proof of Consistency
Characterization | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Relational Dependency Network (RDN) is a directed graphical model widely
used for multi-relational data. These networks allow cyclic dependencies,
necessary to represent relational autocorrelations. We describe an approach for
learning both the RDN's structure and its parameters, given an input relational
database: First learn a Bayesian network (BN), then transform the Bayesian
network to an RDN. Thus fast Bayes net learning can provide fast RDN learning.
The BN-to-RDN transform comprises a simple, local adjustment of the Bayes net
structure and a closed-form transform of the Bayes net parameters. This method
can learn an RDN for a dataset with a million tuples in minutes. We empirically
compare our approach to state-of-the-art RDN learning methods that use
functional gradient boosting, on five benchmark datasets. Learning RDNs via BNs
scales much better to large datasets than learning RDNs with boosting, and
provides competitive accuracy in predictions.
| [
{
"version": "v1",
"created": "Tue, 28 Oct 2014 23:14:56 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Dec 2014 01:07:36 GMT"
}
] | 2014-12-10T00:00:00 | [
[
"Schulte",
"Oliver",
""
],
[
"Qian",
"Zhensong",
""
],
[
"Kirkpatrick",
"Arthur E.",
""
],
[
"Yin",
"Xiaoqian",
""
],
[
"Sun",
"Yan",
""
]
] | TITLE: Fast Learning of Relational Dependency Networks
ABSTRACT: A Relational Dependency Network (RDN) is a directed graphical model widely
used for multi-relational data. These networks allow cyclic dependencies,
necessary to represent relational autocorrelations. We describe an approach for
learning both the RDN's structure and its parameters, given an input relational
database: First learn a Bayesian network (BN), then transform the Bayesian
network to an RDN. Thus fast Bayes net learning can provide fast RDN learning.
The BN-to-RDN transform comprises a simple, local adjustment of the Bayes net
structure and a closed-form transform of the Bayes net parameters. This method
can learn an RDN for a dataset with a million tuples in minutes. We empirically
compare our approach to state-of-the-art RDN learning methods that use
functional gradient boosting, on five benchmark datasets. Learning RDNs via BNs
scales much better to large datasets than learning RDNs with boosting, and
provides competitive accuracy in predictions.
| no_new_dataset | 0.954095 |