id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1306.4534 | Antonio Lima | Antonio Lima, Manlio De Domenico, Veljko Pejovic, Mirco Musolesi | Exploiting Cellular Data for Disease Containment and Information
Campaigns Strategies in Country-Wide Epidemics | 9 pages, 9 figures. Appeared in Proceedings of NetMob 2013. Boston,
MA, USA. May 2013 | null | null | School of Computer Science University of Birmingham Technical Report
CSR-13-01 | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human mobility is one of the key factors at the basis of the spreading of
diseases in a population. Containment strategies are usually devised on
movement scenarios based on coarse-grained assumptions. Mobility phone data
provide a unique opportunity for building models and defining strategies based
on very precise information about the movement of people in a region or in a
country. Another very important aspect is the underlying social structure of a
population, which might play a fundamental role in devising information
campaigns to promote vaccination and preventive measures, especially in
countries with a strong family (or tribal) structure.
In this paper we analyze a large-scale dataset describing the mobility and
the call patterns of a large number of individuals in Ivory Coast. We present a
model that describes how diseases spread across the country by exploiting
mobility patterns of people extracted from the available data. Then, we
simulate several epidemic scenarios and we evaluate mechanisms to contain the
epidemic spreading of diseases, based on the information about people mobility
and social ties, also gathered from the phone call data. More specifically, we
find that restricting mobility does not delay the occurrence of an endemic
state and that an information campaign based on one-to-one phone conversations
among members of social groups might be an effective countermeasure.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2013 13:22:11 GMT"
}
] | 2013-06-20T00:00:00 | [
[
"Lima",
"Antonio",
""
],
[
"De Domenico",
"Manlio",
""
],
[
"Pejovic",
"Veljko",
""
],
[
"Musolesi",
"Mirco",
""
]
] | TITLE: Exploiting Cellular Data for Disease Containment and Information
Campaigns Strategies in Country-Wide Epidemics
ABSTRACT: Human mobility is one of the key factors at the basis of the spreading of
diseases in a population. Containment strategies are usually devised on
movement scenarios based on coarse-grained assumptions. Mobility phone data
provide a unique opportunity for building models and defining strategies based
on very precise information about the movement of people in a region or in a
country. Another very important aspect is the underlying social structure of a
population, which might play a fundamental role in devising information
campaigns to promote vaccination and preventive measures, especially in
countries with a strong family (or tribal) structure.
In this paper we analyze a large-scale dataset describing the mobility and
the call patterns of a large number of individuals in Ivory Coast. We present a
model that describes how diseases spread across the country by exploiting
mobility patterns of people extracted from the available data. Then, we
simulate several epidemic scenarios and we evaluate mechanisms to contain the
epidemic spreading of diseases, based on the information about people mobility
and social ties, also gathered from the phone call data. More specifically, we
find that restricting mobility does not delay the occurrence of an endemic
state and that an information campaign based on one-to-one phone conversations
among members of social groups might be an effective countermeasure.
|
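The abstract above (arXiv:1306.4534) describes simulating epidemics on top of mobility extracted from phone data. As a hedged illustration only, here is a minimal metapopulation SIR sketch driven by an origin-destination mobility matrix; the parameters, the mixing scheme and all names are assumptions of this sketch, not the paper's model.

```python
# Minimal metapopulation SIR sketch: disease spread across regions coupled by a
# mobility (origin-destination) matrix, of the kind one might build from call
# records. All parameters and the mixing scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 5
population = rng.integers(10_000, 100_000, size=n_regions).astype(float)

# Row-stochastic mobility matrix: fraction of region i's population visiting j per day.
mobility = rng.random((n_regions, n_regions))
mobility /= mobility.sum(axis=1, keepdims=True)

beta, gamma = 0.3, 0.1          # transmission and recovery rates (assumed)
S, I, R = population.copy(), np.zeros(n_regions), np.zeros(n_regions)
I[0] = 10.0                     # seed an outbreak in region 0
S[0] -= 10.0

for day in range(200):
    # Effective prevalence each region is exposed to, mixed through mobility.
    prevalence = mobility @ (I / population)
    new_infections = beta * S * prevalence
    new_recoveries = gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print("final attack rate per region:", np.round(R / population, 3))
```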
1305.3633 | Mohammad Pourhomayoun | Mohammad Pourhomayoun, Peter Dugan, Marian Popescu, Denise Risch, Hal
Lewis, Christopher Clark | Classification for Big Dataset of Bioacoustic Signals Based on Human
Scoring System and Artificial Neural Network | To be Submitted to "ICML 2013 Workshop on Machine Learning for
Bioacoustics", 6 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a method to improve sound classification
performance by combining signal features, derived from the time-frequency
spectrogram, with human perception. The method presented herein exploits an
artificial neural network (ANN) and learns the signal features based on the
human perception knowledge. The proposed method is applied to a large acoustic
dataset containing 24 months of nearly continuous recordings. The results show
a significant improvement in performance of the detection-classification
system; yielding as much as 20% improvement in true positive rate for a given
false positive rate.
| [
{
"version": "v1",
"created": "Wed, 15 May 2013 20:53:39 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jun 2013 20:29:14 GMT"
}
] | 2013-06-19T00:00:00 | [
[
"Pourhomayoun",
"Mohammad",
""
],
[
"Dugan",
"Peter",
""
],
[
"Popescu",
"Marian",
""
],
[
"Risch",
"Denise",
""
],
[
"Lewis",
"Hal",
""
],
[
"Clark",
"Christopher",
""
]
] | TITLE: Classification for Big Dataset of Bioacoustic Signals Based on Human
Scoring System and Artificial Neural Network
ABSTRACT: In this paper, we propose a method to improve sound classification
performance by combining signal features, derived from the time-frequency
spectrogram, with human perception. The method presented herein exploits an
artificial neural network (ANN) and learns the signal features based on the
human perception knowledge. The proposed method is applied to a large acoustic
dataset containing 24 months of nearly continuous recordings. The results show
a significant improvement in performance of the detection-classification
system; yielding as much as 20% improvement in true positive rate for a given
false positive rate.
|
1305.3635 | Mohammad Pourhomayoun | Mohammad Pourhomayoun, Peter Dugan, Marian Popescu, Christopher Clark | Bioacoustic Signal Classification Based on Continuous Region Processing,
Grid Masking and Artificial Neural Network | To be Submitted to "ICML 2013 Workshop on Machine Learning for
Bioacoustics", 6 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a novel method based on machine-learning and image
processing to identify North Atlantic right whale (NARW) up-calls in the
presence of high levels of ambient and interfering noise. We apply a continuous
region algorithm on the spectrogram to extract the regions of interest, and
then use grid masking techniques to generate a small feature set that is then
used in an artificial neural network classifier to identify the NARW up-calls.
It is shown that the proposed technique is effective in detecting and capturing
even very faint up-calls, in the presence of ambient and interfering noises.
The method is evaluated on a dataset recorded in Massachusetts Bay, United
States. The dataset includes 20000 sound clips for training, and 10000 sound
clips for testing. The results show that the proposed technique can achieve an
error rate of less than FPR = 4.5% for a 90% true positive rate.
| [
{
"version": "v1",
"created": "Wed, 15 May 2013 20:59:03 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jun 2013 20:28:33 GMT"
}
] | 2013-06-19T00:00:00 | [
[
"Pourhomayoun",
"Mohammad",
""
],
[
"Dugan",
"Peter",
""
],
[
"Popescu",
"Marian",
""
],
[
"Clark",
"Christopher",
""
]
] | TITLE: Bioacoustic Signal Classification Based on Continuous Region Processing,
Grid Masking and Artificial Neural Network
ABSTRACT: In this paper, we develop a novel method based on machine-learning and image
processing to identify North Atlantic right whale (NARW) up-calls in the
presence of high levels of ambient and interfering noise. We apply a continuous
region algorithm on the spectrogram to extract the regions of interest, and
then use grid masking techniques to generate a small feature set that is then
used in an artificial neural network classifier to identify the NARW up-calls.
It is shown that the proposed technique is effective in detecting and capturing
even very faint up-calls, in the presence of ambient and interfering noises.
The method is evaluated on a dataset recorded in Massachusetts Bay, United
States. The dataset includes 20000 sound clips for training, and 10000 sound
clips for testing. The results show that the proposed technique can achieve an
error rate of less than FPR = 4.5% for a 90% true positive rate.
|
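The abstract above (arXiv:1305.3635) outlines a pipeline of continuous-region extraction, grid masking and an artificial neural network. The sketch below is a rough, hedged approximation of that idea on synthetic clips: spectrogram, thresholded connected region, fixed-size grid of features, small neural network. Thresholds, the grid size and the synthetic chirps are assumptions, not the paper's settings or data.

```python
# Illustrative region-plus-grid feature pipeline with a small neural network.
import numpy as np
from scipy.signal import spectrogram
from skimage.measure import label, regionprops
from skimage.transform import resize
from sklearn.neural_network import MLPClassifier

def grid_features(clip, fs=2000, grid=(8, 8)):
    """Spectrogram -> largest bright connected region -> resized grid of features."""
    _, _, sxx = spectrogram(clip, fs=fs, nperseg=128, noverlap=96)
    sxx = 10 * np.log10(sxx + 1e-12)
    binary = sxx > np.percentile(sxx, 95)          # keep only the brightest pixels
    regions = regionprops(label(binary))
    if not regions:
        return np.zeros(grid).ravel()
    r = max(regions, key=lambda p: p.area)          # largest connected region
    r0, c0, r1, c1 = r.bbox
    patch = sxx[r0:r1, c0:c1]
    return resize(patch, grid, anti_aliasing=True).ravel()

# Toy dataset: "up-call"-like chirps vs. noise clips (purely synthetic).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
def chirp(): return np.sin(2 * np.pi * (100 + 100 * t) * t) + 0.5 * rng.standard_normal(t.size)
def noise(): return rng.standard_normal(t.size)

X = np.array([grid_features(chirp()) for _ in range(40)] +
             [grid_features(noise()) for _ in range(40)])
y = np.array([1] * 40 + [0] * 40)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```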
1306.4207 | Ragesh Jaiswal | Ragesh Jaiswal and Prachi Jain and Saumya Yadav | A bad 2-dimensional instance for k-means++ | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-means++ seeding algorithm is one of the most popular algorithms that is
used for finding the initial $k$ centers when using the k-means heuristic. The
algorithm is a simple sampling procedure and can be described as follows:
{quote} Pick the first center randomly from among the given points. For $i >
1$, pick a point to be the $i^{th}$ center with probability proportional to the
square of the Euclidean distance of this point to the previously $(i-1)$ chosen
centers. {quote} The k-means++ seeding algorithm is not only simple and fast
but gives an $O(\log{k})$ approximation in expectation as shown by Arthur and
Vassilvitskii \cite{av07}. There are datasets \cite{av07,adk09} on which this
seeding algorithm gives an approximation factor $\Omega(\log{k})$ in
expectation. However, it is not clear from these results if the algorithm
achieves good approximation factor with reasonably large probability (say
$1/poly(k)$). Brunsch and R\"{o}glin \cite{br11} gave a dataset where the
k-means++ seeding algorithm achieves an approximation ratio of $(2/3 -
\epsilon)\cdot \log{k}$ only with probability that is exponentially small in
$k$. However, this and all other known {\em lower-bound examples}
\cite{av07,adk09} are high dimensional. So, an open problem is to understand
the behavior of the algorithm on low dimensional datasets. In this work, we
give a simple two dimensional dataset on which the seeding algorithm achieves
an approximation ratio $c$ (for some universal constant $c$) only with
probability exponentially small in $k$. This is the first step towards solving
open problems posed by Mahajan et al \cite{mnv12} and by Brunsch and R\"{o}glin
\cite{br11}.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2013 14:22:12 GMT"
}
] | 2013-06-19T00:00:00 | [
[
"Jaiswal",
"Ragesh",
""
],
[
"Jain",
"Prachi",
""
],
[
"Yadav",
"Saumya",
""
]
] | TITLE: A bad 2-dimensional instance for k-means++
ABSTRACT: The k-means++ seeding algorithm is one of the most popular algorithms that is
used for finding the initial $k$ centers when using the k-means heuristic. The
algorithm is a simple sampling procedure and can be described as follows:
{quote} Pick the first center randomly from among the given points. For $i >
1$, pick a point to be the $i^{th}$ center with probability proportional to the
square of the Euclidean distance of this point to the previously $(i-1)$ chosen
centers. {quote} The k-means++ seeding algorithm is not only simple and fast
but gives an $O(\log{k})$ approximation in expectation as shown by Arthur and
Vassilvitskii \cite{av07}. There are datasets \cite{av07,adk09} on which this
seeding algorithm gives an approximation factor $\Omega(\log{k})$ in
expectation. However, it is not clear from these results if the algorithm
achieves good approximation factor with reasonably large probability (say
$1/poly(k)$). Brunsch and R\"{o}glin \cite{br11} gave a dataset where the
k-means++ seeding algorithm achieves an approximation ratio of $(2/3 -
\epsilon)\cdot \log{k}$ only with probability that is exponentially small in
$k$. However, this and all other known {\em lower-bound examples}
\cite{av07,adk09} are high dimensional. So, an open problem is to understand
the behavior of the algorithm on low dimensional datasets. In this work, we
give a simple two dimensional dataset on which the seeding algorithm achieves
an approximation ratio $c$ (for some universal constant $c$) only with
probability exponentially small in $k$. This is the first step towards solving
open problems posed by Mahajan et al \cite{mnv12} and by Brunsch and R\"{o}glin
\cite{br11}.
|
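The seeding procedure quoted in the abstract above (arXiv:1306.4207) is concrete enough to sketch directly. Below is a compact NumPy version in which each new center is sampled with probability proportional to the squared Euclidean distance to the nearest center chosen so far (the usual reading of the rule); function and variable names are illustrative only.

```python
# Compact NumPy sketch of k-means++ seeding.
import numpy as np

def kmeanspp_seed(points, k, rng=np.random.default_rng(0)):
    n = points.shape[0]
    centers = [points[rng.integers(n)]]               # first center: uniform at random
    for _ in range(1, k):
        d2 = np.min(
            ((points[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )                                              # squared distance to nearest center
        probs = d2 / d2.sum()
        centers.append(points[rng.choice(n, p=probs)])
    return np.array(centers)

# Usage on a toy 2-D dataset (the paper's bad instance is 2-D as well).
X = np.random.default_rng(1).normal(size=(500, 2))
print(kmeanspp_seed(X, k=4))
```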
1211.6014 | Vincent A Traag | Bal\'azs Cs. Cs\'aji, Arnaud Browet, V.A. Traag, Jean-Charles
Delvenne, Etienne Huens, Paul Van Dooren, Zbigniew Smoreda and Vincent D.
Blondel | Exploring the Mobility of Mobile Phone Users | 16 pages, 12 figures | Physica A 392(6), pp. 1459-1473 (2013) | 10.1016/j.physa.2012.11.040 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile phone datasets allow for the analysis of human behavior on an
unprecedented scale. The social network, temporal dynamics and mobile behavior
of mobile phone users have often been analyzed independently from each other
using mobile phone datasets. In this article, we explore the connections
between various features of human behavior extracted from a large mobile phone
dataset. Our observations are based on the analysis of communication data of
100000 anonymized and randomly chosen individuals in a dataset of
communications in Portugal. We show that clustering and principal component
analysis allow for a significant dimension reduction with limited loss of
information. The most important features are related to geographical location.
In particular, we observe that most people spend most of their time at only a
few locations. With the help of clustering methods, we then robustly identify
home and office locations and compare the results with official census data.
Finally, we analyze the geographic spread of users' frequent locations and show
that commuting distances can be reasonably well explained by a gravity model.
| [
{
"version": "v1",
"created": "Mon, 26 Nov 2012 16:30:59 GMT"
}
] | 2013-06-17T00:00:00 | [
[
"Csáji",
"Balázs Cs.",
""
],
[
"Browet",
"Arnaud",
""
],
[
"Traag",
"V. A.",
""
],
[
"Delvenne",
"Jean-Charles",
""
],
[
"Huens",
"Etienne",
""
],
[
"Van Dooren",
"Paul",
""
],
[
"Smoreda",
"Zbigniew",
""
],
[
"Blondel",
"Vincent D.",
""
]
] | TITLE: Exploring the Mobility of Mobile Phone Users
ABSTRACT: Mobile phone datasets allow for the analysis of human behavior on an
unprecedented scale. The social network, temporal dynamics and mobile behavior
of mobile phone users have often been analyzed independently from each other
using mobile phone datasets. In this article, we explore the connections
between various features of human behavior extracted from a large mobile phone
dataset. Our observations are based on the analysis of communication data of
100000 anonymized and randomly chosen individuals in a dataset of
communications in Portugal. We show that clustering and principal component
analysis allow for a significant dimension reduction with limited loss of
information. The most important features are related to geographical location.
In particular, we observe that most people spend most of their time at only a
few locations. With the help of clustering methods, we then robustly identify
home and office locations and compare the results with official census data.
Finally, we analyze the geographic spread of users' frequent locations and show
that commuting distances can be reasonably well explained by a gravity model.
|
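Two of the analyses mentioned in the abstract above (arXiv:1211.6014), PCA-based dimension reduction of behavioural features and clustering a user's call locations into a few frequent places, can be illustrated on synthetic data. The features and locations below are made up; this is a hedged sketch, not the paper's processing chain.

```python
# (i) PCA on per-user features, (ii) clustering one user's call locations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# (i) 100 users x 20 behavioural features -> a handful of principal components.
features = rng.normal(size=(100, 20))
pca = PCA(n_components=5).fit(features)
print("variance explained by 5 components:", pca.explained_variance_ratio_.sum())

# (ii) One user's call locations: most events happen around two places.
home, office = np.array([0.0, 0.0]), np.array([5.0, 2.0])
locations = np.vstack([
    home + 0.1 * rng.normal(size=(300, 2)),     # evening/night calls
    office + 0.1 * rng.normal(size=(200, 2)),   # daytime calls
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(locations)
print("estimated frequent locations:\n", km.cluster_centers_)
```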
1306.3294 | Quan Wang | Quan Wang, Kim L. Boyer | Feature Learning by Multidimensional Scaling and its Applications in
Object Recognition | To appear in SIBGRAPI 2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the MDS feature learning framework, in which multidimensional
scaling (MDS) is applied on high-level pairwise image distances to learn
fixed-length vector representations of images. The aspects of the images that
are captured by the learned features, which we call MDS features, completely
depend on what kind of image distance measurement is employed. With properly
selected semantics-sensitive image distances, the MDS features provide rich
semantic information about the images that is not captured by other feature
extraction techniques. In our work, we introduce the iterated
Levenberg-Marquardt algorithm for solving MDS, and study the MDS feature
learning with IMage Euclidean Distance (IMED) and Spatial Pyramid Matching
(SPM) distance. We present experiments on both synthetic data and real images
--- the publicly accessible UIUC car image dataset. The MDS features based on
SPM distance achieve exceptional performance for the car recognition task.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2013 04:43:40 GMT"
}
] | 2013-06-17T00:00:00 | [
[
"Wang",
"Quan",
""
],
[
"Boyer",
"Kim L.",
""
]
] | TITLE: Feature Learning by Multidimensional Scaling and its Applications in
Object Recognition
ABSTRACT: We present the MDS feature learning framework, in which multidimensional
scaling (MDS) is applied on high-level pairwise image distances to learn
fixed-length vector representations of images. The aspects of the images that
are captured by the learned features, which we call MDS features, completely
depend on what kind of image distance measurement is employed. With properly
selected semantics-sensitive image distances, the MDS features provide rich
semantic information about the images that is not captured by other feature
extraction techniques. In our work, we introduce the iterated
Levenberg-Marquardt algorithm for solving MDS, and study the MDS feature
learning with IMage Euclidean Distance (IMED) and Spatial Pyramid Matching
(SPM) distance. We present experiments on both synthetic data and real images
--- the publicly accessible UIUC car image dataset. The MDS features based on
SPM distance achieve exceptional performance for the car recognition task.
|
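A minimal sketch of the "MDS feature" idea from the abstract above (arXiv:1306.3294): given a pairwise image-distance matrix, embed each image as a fixed-length vector with multidimensional scaling. The distance used here is plain Euclidean on random vectors; the paper's semantics-sensitive distances (IMED, SPM) and its iterated Levenberg-Marquardt solver are not reproduced.

```python
# MDS on a precomputed pairwise-distance matrix yields one vector per image.
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
images = rng.normal(size=(50, 256))                # stand-ins for image descriptors
D = squareform(pdist(images, metric="euclidean"))  # any pairwise distance works here

mds = MDS(n_components=16, dissimilarity="precomputed", random_state=0)
mds_features = mds.fit_transform(D)                # one 16-d "MDS feature" per image
print(mds_features.shape)
```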
1306.3474 | Yijun Wang | Yijun Wang | Classifying Single-Trial EEG during Motor Imagery with a Small Training
Set | 13 pages, 3 figures | null | null | null | cs.LG cs.HC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Before the operation of a motor imagery based brain-computer interface (BCI)
adopting machine learning techniques, a cumbersome training procedure is
unavoidable. The development of a practical BCI posed the challenge of
classifying single-trial EEG with a small training set. In this letter, we
addressed this problem by employing a series of signal processing and machine
learning approaches to alleviate overfitting and obtained test accuracy similar
to training accuracy on the datasets from BCI Competition III and our own
experiments.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2013 18:24:19 GMT"
}
] | 2013-06-17T00:00:00 | [
[
"Wang",
"Yijun",
""
]
] | TITLE: Classifying Single-Trial EEG during Motor Imagery with a Small Training
Set
ABSTRACT: Before the operation of a motor imagery based brain-computer interface (BCI)
adopting machine learning techniques, a cumbersome training procedure is
unavoidable. The development of a practical BCI posed the challenge of
classifying single-trial EEG with a small training set. In this letter, we
addressed this problem by employing a series of signal processing and machine
learning approaches to alleviate overfitting and obtained test accuracy similar
to training accuracy on the datasets from BCI Competition III and our own
experiments.
|
1306.3003 | Xuhui Fan | Xuhui Fan, Yiling Zeng, Longbing Cao | Non-parametric Power-law Data Clustering | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has always been a great challenge for clustering algorithms to
automatically determine the cluster numbers according to the distribution of
datasets. Several approaches have been proposed to address this issue,
including the recent promising work which incorporates Bayesian Nonparametrics
into the $k$-means clustering procedure. This approach shows simplicity in
implementation and solidity in theory, while it also provides a feasible way to
perform inference on large-scale datasets. However, several problems remain
unsolved in this pioneering work, including applicability to power-law data, a
mechanism for merging centers to avoid over-fitting, and the clustering order
problem. To address these issues, the Pitman-Yor Process based k-means (namely
\emph{pyp-means}) is proposed in this paper. Taking advantage of the Pitman-Yor
Process, \emph{pyp-means} treats clusters differently by dynamically and
adaptively changing the threshold to guarantee the generation of power-law
clustering results. Also, one center agglomeration procedure is integrated into
the implementation to be able to merge small but close clusters and then
adaptively determine the cluster number. With more discussion on the clustering
order, the convergence proof, complexity analysis and extension to spectral
clustering, our approach is compared with traditional clustering algorithm and
variational inference methods. The advantages and properties of pyp-means are
validated by experiments on both synthetic datasets and real world datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2013 01:20:50 GMT"
}
] | 2013-06-14T00:00:00 | [
[
"Fan",
"Xuhui",
""
],
[
"Zeng",
"Yiling",
""
],
[
"Cao",
"Longbing",
""
]
] | TITLE: Non-parametric Power-law Data Clustering
ABSTRACT: It has always been a great challenge for clustering algorithms to
automatically determine the cluster numbers according to the distribution of
datasets. Several approaches have been proposed to address this issue,
including the recent promising work which incorporates Bayesian Nonparametrics
into the $k$-means clustering procedure. This approach shows simplicity in
implementation and solidity in theory, while it also provides a feasible way to
perform inference on large-scale datasets. However, several problems remain
unsolved in this pioneering work, including applicability to power-law data, a
mechanism for merging centers to avoid over-fitting, and the clustering order
problem. To address these issues, the Pitman-Yor Process based k-means (namely
\emph{pyp-means}) is proposed in this paper. Taking advantage of the Pitman-Yor
Process, \emph{pyp-means} treats clusters differently by dynamically and
adaptively changing the threshold to guarantee the generation of power-law
clustering results. Also, one center agglomeration procedure is integrated into
the implementation to be able to merge small but close clusters and then
adaptively determine the cluster number. With more discussion on the clustering
order, the convergence proof, complexity analysis and extension to spectral
clustering, our approach is compared with traditional clustering algorithm and
variational inference methods. The advantages and properties of pyp-means are
validated by experiments on both synthetic datasets and real world datasets.
|
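For orientation, here is a sketch of the nonparametric k-means family the abstract above (arXiv:1306.3003) builds on: a DP-means-style loop where a point farther than a threshold from every current center spawns a new cluster. pyp-means replaces the fixed threshold with one that adapts per cluster; that is only hinted at by the `threshold(j)` hook, which, like everything else here, is an assumption of this sketch rather than the paper's algorithm.

```python
# DP-means-style clustering with a hook where an adaptive threshold could go.
import numpy as np

def dpmeans_like(X, base_lambda=2.0, n_iter=20, rng=np.random.default_rng(0)):
    centers = [X[rng.integers(len(X))]]
    def threshold(j):                 # hook: a cluster-dependent (Pitman-Yor-like)
        return base_lambda            # threshold could be plugged in here
    for _ in range(n_iter):
        assign = []
        for x in X:
            d2 = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > threshold(j):  # too far from everything: open a new cluster
                centers.append(x.copy())
                j = len(centers) - 1
            assign.append(j)
        assign = np.array(assign)
        # Update each non-empty cluster's center; empty clusters keep the old one.
        for j in range(len(centers)):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return np.array(centers), assign

X = np.vstack([np.random.default_rng(1).normal(loc=m, scale=0.3, size=(100, 2))
               for m in ([0, 0], [4, 0], [0, 4])])
centers, labels = dpmeans_like(X)
print("clusters found:", len(centers))
```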
1306.3058 | Sebastien Paris | S\'ebastien Paris and Yann Doh and Herv\'e Glotin and Xanadu Halkias
and Joseph Razik | Physeter catodon localization by sparse coding | 6 pages, 6 figures, workshop ICML4B in ICML 2013 conference | null | null | null | cs.LG cs.CE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a sperm whale localization architecture that jointly uses a
bag-of-features (BoF) approach and a machine learning framework. BoF methods are
known, especially in computer vision, to produce from a collection of local
features a global representation invariant to principal signal transformations.
Our idea is to regress, in a supervised manner, two rough estimates of the
distance and azimuth from these local features, using datasets where both
acoustic events and ground-truth positions are now available. Furthermore, these
estimates can feed a particle filter system in order to obtain a precise sperm
whale position even in a mono-hydrophone configuration. Anti-collision systems
and whale watching are considered applications of this work.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2013 09:05:08 GMT"
}
] | 2013-06-14T00:00:00 | [
[
"Paris",
"Sébastien",
""
],
[
"Doh",
"Yann",
""
],
[
"Glotin",
"Hervé",
""
],
[
"Halkias",
"Xanadu",
""
],
[
"Razik",
"Joseph",
""
]
] | TITLE: Physeter catodon localization by sparse coding
ABSTRACT: This paper presents a sperm whale localization architecture that jointly uses a
bag-of-features (BoF) approach and a machine learning framework. BoF methods are
known, especially in computer vision, to produce from a collection of local
features a global representation invariant to principal signal transformations.
Our idea is to regress, in a supervised manner, two rough estimates of the
distance and azimuth from these local features, using datasets where both
acoustic events and ground-truth positions are now available. Furthermore, these
estimates can feed a particle filter system in order to obtain a precise sperm
whale position even in a mono-hydrophone configuration. Anti-collision systems
and whale watching are considered applications of this work.
|
1306.3084 | Doriane Ibarra | Jorge Hernandez (CMM), Beatriz Marcotegui (CMM) | Segmentation et Interpr\'etation de Nuages de Points pour la
Mod\'elisation d'Environnements Urbains | null | Revue fran\c{c}aise de photogrammetrie et de t\'el\'edection 191
(2008) 28-35 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dans cet article, nous pr\'esentons une m\'ethode pour la d\'etection et la
classification d'artefacts au niveau du sol, comme phase de filtrage
pr\'ealable \`a la mod\'elisation d'environnements urbains. La m\'ethode de
d\'etection est r\'ealis\'ee sur l'image profondeur, une projection de nuage de
points sur un plan image o\`u la valeur du pixel correspond \`a la distance du
point au plan. En faisant l'hypoth\`ese que les artefacts sont situ\'es au sol,
ils sont d\'etect\'es par une transformation de chapeau haut de forme par
remplissage de trous sur l'image de profondeur. Les composantes connexes ainsi
obtenues, sont ensuite caract\'eris\'ees et une analyse des variables est
utilis\'ee pour la s\'election des caract\'eristiques les plus discriminantes.
Les composantes connexes sont donc classifi\'ees en quatre cat\'egories
(lampadaires, pi\'etons, voitures et "Reste") \`a l'aide d'un algorithme
d'apprentissage supervis\'e. La m\'ethode a \'et\'e test\'ee sur des nuages de
points de la ville de Paris, en montrant de bons r\'esultats de d\'etection et
de classification dans l'ensemble de donn\'ees.---In this article, we present a
method for detection and classification of artifacts at the street level, in
order to filter the point cloud, facilitating the urban modeling process. Our
approach exploits 3D information by using range image, a projection of 3D
points onto an image plane where the pixel intensity is a function of the
measured distance between 3D points and the plane. By assuming that the
artifacts are on the ground, they are detected using a Top-Hat of the hole
filling algorithm of range images. Then, several features are extracted from
the detected connected components and a stepwise forward variable/model
selection by using the Wilk's Lambda criterion is performed. Afterward, CCs are
classified in four categories (lampposts, pedestrians, cars and others) by
using a supervised machine learning method. The proposed method was tested on
point clouds of Paris, and has shown satisfactory results on the whole
dataset.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2013 11:27:58 GMT"
}
] | 2013-06-14T00:00:00 | [
[
"Hernandez",
"Jorge",
"",
"CMM"
],
[
"Marcotegui",
"Beatriz",
"",
"CMM"
]
] | TITLE: Segmentation et Interpr\'etation de Nuages de Points pour la
Mod\'elisation d'Environnements Urbains
ABSTRACT: Dans cet article, nous pr\'esentons une m\'ethode pour la d\'etection et la
classification d'artefacts au niveau du sol, comme phase de filtrage
pr\'ealable \`a la mod\'elisation d'environnements urbains. La m\'ethode de
d\'etection est r\'ealis\'ee sur l'image profondeur, une projection de nuage de
points sur un plan image o\`u la valeur du pixel correspond \`a la distance du
point au plan. En faisant l'hypoth\`ese que les artefacts sont situ\'es au sol,
ils sont d\'etect\'es par une transformation de chapeau haut de forme par
remplissage de trous sur l'image de profondeur. Les composantes connexes ainsi
obtenues, sont ensuite caract\'eris\'ees et une analyse des variables est
utilis\'ee pour la s\'election des caract\'eristiques les plus discriminantes.
Les composantes connexes sont donc classifi\'ees en quatre cat\'egories
(lampadaires, pi\'etons, voitures et "Reste") \`a l'aide d'un algorithme
d'apprentissage supervis\'e. La m\'ethode a \'et\'e test\'ee sur des nuages de
points de la ville de Paris, en montrant de bons r\'esultats de d\'etection et
de classification dans l'ensemble de donn\'ees.---In this article, we present a
method for detection and classification of artifacts at the street level, in
order to filter the point cloud, facilitating the urban modeling process. Our
approach exploits 3D information by using range image, a projection of 3D
points onto an image plane where the pixel intensity is a function of the
measured distance between 3D points and the plane. By assuming that the
artifacts are on the ground, they are detected using a Top-Hat of the hole
filling algorithm of range images. Then, several features are extracted from
the detected connected components and a stepwise forward variable/model
selection by using the Wilk's Lambda criterion is performed. Afterward, CCs are
classified in four categories (lampposts, pedestrians, cars and others) by
using a supervised machine learning method. The proposed method was tested on
point clouds of Paris, and has shown satisfactory results on the whole
dataset.
|
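The detection step described in the abstract above (arXiv:1306.3084), a top-hat by hole filling on a range image, can be illustrated with grayscale morphological reconstruction. The synthetic range image and the height threshold below are assumptions of this sketch, not the paper's data or parameters.

```python
# Fill the "holes" of a range image by reconstruction-by-erosion, then take the
# difference with the original so that small ground-level objects pop out.
import numpy as np
from skimage.morphology import reconstruction

# Synthetic range image: a flat ground plane with two small raised "artifacts".
range_img = np.full((100, 100), 10.0)
range_img[20:25, 30:35] -= 1.5      # e.g. a pedestrian (closer to the image plane)
range_img[60:70, 60:64] -= 2.0      # e.g. a car

# Hole filling: seed is the image pushed to its maximum everywhere except the border.
seed = range_img.copy()
seed[1:-1, 1:-1] = range_img.max()
filled = reconstruction(seed, range_img, method="erosion")

tophat = filled - range_img          # top-hat by hole filling
artifact_mask = tophat > 1.0         # threshold chosen for this toy example
print("artifact pixels detected:", int(artifact_mask.sum()))
```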
1306.2795 | Ronan Collobert | Pedro H. O. Pinheiro, Ronan Collobert | Recurrent Convolutional Neural Networks for Scene Parsing | null | null | null | Idiap-RR-22-2013 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene parsing is a technique that consists of assigning a label to every pixel
in an image according to the class it belongs to. To ensure a good visual
coherence and a high class accuracy, it is essential for a scene parser to
capture image long range dependencies. In a feed-forward architecture, this can
be simply achieved by considering a sufficiently large input context patch,
around each pixel to be labeled. We propose an approach consisting of a
recurrent convolutional neural network which allows us to consider a large
input context, while limiting the capacity of the model. Contrary to most
standard approaches, our method does not rely on any segmentation methods, nor
any task-specific features. The system is trained in an end-to-end manner over
raw pixels, and models complex spatial dependencies with low inference cost. As
the context size increases with the built-in recurrence, the system identifies
and corrects its own errors. Our approach yields state-of-the-art performance
on both the Stanford Background Dataset and the SIFT Flow Dataset, while
remaining very fast at test time.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2013 11:56:57 GMT"
}
] | 2013-06-13T00:00:00 | [
[
"Pinheiro",
"Pedro H. O.",
""
],
[
"Collobert",
"Ronan",
""
]
] | TITLE: Recurrent Convolutional Neural Networks for Scene Parsing
ABSTRACT: Scene parsing is a technique that consists of assigning a label to every pixel
in an image according to the class it belongs to. To ensure a good visual
coherence and a high class accuracy, it is essential for a scene parser to
capture image long range dependencies. In a feed-forward architecture, this can
be simply achieved by considering a sufficiently large input context patch,
around each pixel to be labeled. We propose an approach consisting of a
recurrent convolutional neural network which allows us to consider a large
input context, while limiting the capacity of the model. Contrary to most
standard approaches, our method does not rely on any segmentation methods, nor
any task-specific features. The system is trained in an end-to-end manner over
raw pixels, and models complex spatial dependencies with low inference cost. As
the context size increases with the built-in recurrence, the system identifies
and corrects its own errors. Our approach yields state-of-the-art performance
on both the Stanford Background Dataset and the SIFT Flow Dataset, while
remaining very fast at test time.
|
1306.2864 | Catarina Moreira | Catarina Moreira and Andreas Wichert | Finding Academic Experts on a MultiSensor Approach using Shannon's
Entropy | null | Journal of Expert Systems with Applications, 2013, volume 40,
issue 14 | 10.1016/j.eswa.2013.04.001 | null | cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Expert finding is an information retrieval task concerned with the search for
the most knowledgeable people on some topic, based on documents describing
people's activities. The task involves taking a user query as input
and returning a list of people sorted by their level of expertise regarding the
user query. This paper introduces a novel approach for combining multiple
estimators of expertise based on a multisensor data fusion framework together
with the Dempster-Shafer theory of evidence and Shannon's entropy. More
specifically, we defined three sensors which detect heterogeneous information
derived from the textual contents, from the graph structure of the citation
patterns for the community of experts, and from profile information about the
academic experts. Given the evidence collected, each sensor may define
different candidates as experts and consequently do not agree in a final
ranking decision. To deal with these conflicts, we applied the Dempster-Shafer
theory of evidence combined with Shannon's Entropy formula to fuse this
information and come up with a more accurate and reliable final ranking list.
Experiments made over two datasets of academic publications from the Computer
Science domain attest to the adequacy of the proposed approach over the
traditional state of the art approaches. We also made experiments against
representative supervised state of the art algorithms. Results revealed that
the proposed method achieved a similar performance when compared to these
supervised techniques, confirming the capabilities of the proposed framework.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2013 15:35:57 GMT"
}
] | 2013-06-13T00:00:00 | [
[
"Moreira",
"Catarina",
""
],
[
"Wichert",
"Andreas",
""
]
] | TITLE: Finding Academic Experts on a MultiSensor Approach using Shannon's
Entropy
ABSTRACT: Expert finding is an information retrieval task concerned with the search for
the most knowledgeable people on some topic, based on documents describing
people's activities. The task involves taking a user query as input
and returning a list of people sorted by their level of expertise regarding the
user query. This paper introduces a novel approach for combining multiple
estimators of expertise based on a multisensor data fusion framework together
with the Dempster-Shafer theory of evidence and Shannon's entropy. More
specifically, we defined three sensors which detect heterogeneous information
derived from the textual contents, from the graph structure of the citation
patterns for the community of experts, and from profile information about the
academic experts. Given the evidence collected, each sensor may define
different candidates as experts and consequently do not agree in a final
ranking decision. To deal with these conflicts, we applied the Dempster-Shafer
theory of evidence combined with Shannon's Entropy formula to fuse this
information and come up with a more accurate and reliable final ranking list.
Experiments made over two datasets of academic publications from the Computer
Science domain attest to the adequacy of the proposed approach over the
traditional state of the art approaches. We also made experiments against
representative supervised state of the art algorithms. Results revealed that
the proposed method achieved a similar performance when compared to these
supervised techniques, confirming the capabilities of the proposed framework.
|
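The fusion idea in the abstract above (arXiv:1306.2864) can be sketched in a few lines: each "sensor" scores candidate experts with a mass function, Shannon entropy gauges how informative a sensor is, and Dempster's rule (restricted here to singleton hypotheses) combines them. The entropy-based discounting below is an illustrative assumption, not the paper's exact formulation.

```python
# Entropy-weighted Dempster-Shafer combination over singleton hypotheses.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same singleton hypotheses."""
    joint = np.outer(m1, m2)
    agreement = np.trace(joint)                 # mass where both sensors agree
    if agreement == 0:
        raise ValueError("total conflict: masses cannot be combined")
    return np.diag(joint) / agreement           # renormalise by 1 - conflict

candidates = ["alice", "bob", "carol"]
text_sensor    = np.array([0.6, 0.3, 0.1])      # evidence from textual contents
graph_sensor   = np.array([0.5, 0.4, 0.1])      # evidence from the citation graph
profile_sensor = np.array([0.2, 0.5, 0.3])      # evidence from profile data

# Discount sensors that are close to uniform (high entropy = less informative).
sensors = [text_sensor, graph_sensor, profile_sensor]
weights = [1.0 - shannon_entropy(s) / np.log2(len(s)) for s in sensors]
discounted = [w * s + (1 - w) / len(s) for w, s in zip(weights, sensors)]

fused = discounted[0]
for m in discounted[1:]:
    fused = dempster_combine(fused, m)
print(dict(zip(candidates, np.round(fused, 3))))
```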
1306.2459 | Sutanay Choudhury | Sutanay Choudhury, Lawrence Holder, George Chin, John Feo | Fast Search for Dynamic Multi-Relational Graphs | SIGMOD Workshop on Dynamic Networks Management and Mining (DyNetMM),
2013 | null | null | null | cs.DB | http://creativecommons.org/licenses/publicdomain/ | Acting on time-critical events by processing ever growing social media or
news streams is a major technical challenge. Many of these data sources can be
modeled as multi-relational graphs. Continuous queries or techniques to search
for rare events that typically arise in monitoring applications have been
studied extensively for relational databases. This work is dedicated to answering
the question that emerges naturally: how can we efficiently execute a
continuous query on a dynamic graph? This paper presents an exact subgraph
search algorithm that exploits the temporal characteristics of representative
queries for online news or social media monitoring. The algorithm is based on a
novel data structure called the Subgraph Join Tree (SJ-Tree) that leverages the
structural and semantic characteristics of the underlying multi-relational
graph. The paper concludes with extensive experimentation on several real-world
datasets that demonstrates the validity of this approach.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2013 09:21:42 GMT"
}
] | 2013-06-12T00:00:00 | [
[
"Choudhury",
"Sutanay",
""
],
[
"Holder",
"Lawrence",
""
],
[
"Chin",
"George",
""
],
[
"Feo",
"John",
""
]
] | TITLE: Fast Search for Dynamic Multi-Relational Graphs
ABSTRACT: Acting on time-critical events by processing ever growing social media or
news streams is a major technical challenge. Many of these data sources can be
modeled as multi-relational graphs. Continuous queries or techniques to search
for rare events that typically arise in monitoring applications have been
studied extensively for relational databases. This work is dedicated to answering
the question that emerges naturally: how can we efficiently execute a
continuous query on a dynamic graph? This paper presents an exact subgraph
search algorithm that exploits the temporal characteristics of representative
queries for online news or social media monitoring. The algorithm is based on a
novel data structure called the Subgraph Join Tree (SJ-Tree) that leverages the
structural and semantic characteristics of the underlying multi-relational
graph. The paper concludes with extensive experimentation on several real-world
datasets that demonstrates the validity of this approach.
|
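To make the problem setting of the abstract above (arXiv:1306.2459) concrete, here is a hedged, brute-force sketch of a continuous subgraph query on a dynamic multi-relational graph: after each edge insertion, re-check for matches of a small labelled pattern. The paper's SJ-Tree maintains partial matches incrementally instead of re-searching; the query, stream and names below are made up.

```python
# Naive continuous subgraph query over a stream of labelled edge insertions.
import networkx as nx
from networkx.algorithms import isomorphism

# Query pattern: someone "posts" an article that someone else "cites".
query = nx.DiGraph()
query.add_edge("u", "a", rel="posts")
query.add_edge("v", "a", rel="cites")

def matches(data_graph, pattern):
    gm = isomorphism.DiGraphMatcher(
        data_graph, pattern,
        edge_match=isomorphism.categorical_edge_match("rel", None),
    )
    return list(gm.subgraph_isomorphisms_iter())

stream = [                      # timestamped edge insertions
    ("alice", "art1", "posts"),
    ("bob", "art2", "posts"),
    ("carol", "art1", "cites"),
]

G = nx.DiGraph()
for src, dst, rel in stream:
    G.add_edge(src, dst, rel=rel)
    hits = matches(G, query)
    if hits:
        print(f"after inserting ({src},{dst},{rel}):", hits)
```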
1306.2539 | Albane Saintenoy | Albane Saintenoy (IDES), J.-M. Friedt (UMR 6174), Adam D. Booth, F.
Tolle (Th\'eMA), E. Bernard (Th\'eMA), Dominique Laffly (GEODE), C. Marlin
(IDES), M. Griselin (Th\'eMA) | Deriving ice thickness, glacier volume and bedrock morphology of the
Austre Lov\'enbreen (Svalbard) using Ground-penetrating Radar | null | Near Surface Geophysics 11 (2013) 253-261 | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Austre Lov\'enbreen is a 4.6 km2 glacier on the Archipelago of Svalbard
(79 degrees N) that has been surveyed over the last 47 years in order to
monitor, in particular, the glacier's evolution and the associated hydrological
phenomena in the context of present-day global warming. A three-week field survey
over April 2010 allowed for the acquisition of a dense mesh of
Ground-penetrating Radar (GPR) data with an average of 14683 points per km2
(67542 points total) on the glacier surface. The profiles were acquired using a
Mala equipment with 100 MHz antennas, towed slowly enough to record on average
every 0.3 m, a trace long enough to sound down to 189 m of ice. One profile was
repeated with 50 MHz antenna to improve electromagnetic wave propagation depth
in scattering media observed in the cirques closest to the slopes. The GPR was
coupled to a GPS system to position traces. Each profile has been manually
edited using standard GPR data processing including migration, to pick the
reflection arrival time from the ice-bedrock interface. Snow cover was
evaluated through 42 snow drilling measurements regularly spaced to cover all
the glacier. These data were acquired at the time of the GPR survey and
subsequently spatially interpolated using ordinary kriging. Using a snow
velocity of 0.22 m/ns, the snow thickness was converted to electromagnetic wave
travel-times and subtracted from the picked travel-times to the ice-bedrock
interface. The resulting travel-times were converted to ice thickness using a
velocity of 0.17 m/ns. The velocity uncertainty is discussed from a common
mid-point profile analysis. A total of 67542 georeferenced data points with
GPR-derived ice thicknesses, in addition to a glacier boundary line derived
from satellite images taken during summer, were interpolated over the entire
glacier surface using kriging with a 10 m grid size. An uncertainty analysis
was carried out and we calculated an average ice thickness of 76 m and a
maximum depth of 164 m with a relative error of 11.9%. The volume of the
glacier is derived as 0.3487$\pm$0.041 km3. Finally a 10-m grid map of the
bedrock topography was derived by subtracting the ice thicknesses from a
dual-frequency GPS-derived digital elevation model of the surface. These two
datasets are the first step for modelling thermal evolution of the glacier and
its bedrock, as well as the main hydrological network.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2013 14:45:13 GMT"
}
] | 2013-06-12T00:00:00 | [
[
"Saintenoy",
"Albane",
"",
"IDES"
],
[
"Friedt",
"J. -M.",
"",
"UMR 6174"
],
[
"Booth",
"Adam D.",
"",
"ThéMA"
],
[
"Tolle",
"F.",
"",
"ThéMA"
],
[
"Bernard",
"E.",
"",
"ThéMA"
],
[
"Laffly",
"Dominique",
"",
"GEODE"
],
[
"Marlin",
"C.",
"",
"IDES"
],
[
"Griselin",
"M.",
"",
"ThéMA"
]
] | TITLE: Deriving ice thickness, glacier volume and bedrock morphology of the
Austre Lov\'enbreen (Svalbard) using Ground-penetrating Radar
ABSTRACT: The Austre Lov\'enbreen is a 4.6 km2 glacier on the Archipelago of Svalbard
(79 degrees N) that has been surveyed over the last 47 years in order to
monitor, in particular, the glacier's evolution and the associated hydrological
phenomena in the context of present-day global warming. A three-week field survey
over April 2010 allowed for the acquisition of a dense mesh of
Ground-penetrating Radar (GPR) data with an average of 14683 points per km2
(67542 points total) on the glacier surface. The profiles were acquired using a
Mala equipment with 100 MHz antennas, towed slowly enough to record on average
every 0.3 m, a trace long enough to sound down to 189 m of ice. One profile was
repeated with 50 MHz antenna to improve electromagnetic wave propagation depth
in scattering media observed in the cirques closest to the slopes. The GPR was
coupled to a GPS system to position traces. Each profile has been manually
edited using standard GPR data processing including migration, to pick the
reflection arrival time from the ice-bedrock interface. Snow cover was
evaluated through 42 snow drilling measurements regularly spaced to cover all
the glacier. These data were acquired at the time of the GPR survey and
subsequently spatially interpolated using ordinary kriging. Using a snow
velocity of 0.22 m/ns, the snow thickness was converted to electromagnetic wave
travel-times and subtracted from the picked travel-times to the ice-bedrock
interface. The resulting travel-times were converted to ice thickness using a
velocity of 0.17 m/ns. The velocity uncertainty is discussed from a common
mid-point profile analysis. A total of 67542 georeferenced data points with
GPR-derived ice thicknesses, in addition to a glacier boundary line derived
from satellite images taken during summer, were interpolated over the entire
glacier surface using kriging with a 10 m grid size. An uncertainty analysis
was carried out and we calculated an average ice thickness of 76 m and a
maximum depth of 164 m with a relative error of 11.9%. The volume of the
glacier is derived as 0.3487$\pm$0.041 km3. Finally a 10-m grid map of the
bedrock topography was derived by subtracting the ice thicknesses from a
dual-frequency GPS-derived digital elevation model of the surface. These two
datasets are the first step for modelling thermal evolution of the glacier and
its bedrock, as well as the main hydrological network.
|
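The travel-time bookkeeping described in the abstract above (arXiv:1306.2539) reduces to simple arithmetic: remove the snow layer's contribution from the picked travel time, then convert the remainder to ice thickness. The worked sketch below assumes two-way travel times (thickness = velocity x time / 2) and uses made-up example numbers, not the survey data.

```python
# Snow correction and travel-time-to-thickness conversion (illustrative numbers).
import numpy as np

v_snow = 0.22   # m/ns, velocity used for the snow layer
v_ice = 0.17    # m/ns, velocity used for glacier ice

picked_twt_ns = np.array([950.0, 1200.0, 1600.0])   # picked two-way travel times (ns)
snow_thickness_m = np.array([1.2, 0.8, 2.0])        # interpolated snow drilling data (m)

snow_twt_ns = 2.0 * snow_thickness_m / v_snow        # snow thickness -> two-way time
ice_twt_ns = picked_twt_ns - snow_twt_ns             # remove the snow contribution
ice_thickness_m = v_ice * ice_twt_ns / 2.0           # remaining time -> ice thickness

print(np.round(ice_thickness_m, 1))
```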
1306.2597 | Tao Qin Dr. | Tao Qin and Tie-Yan Liu | Introducing LETOR 4.0 Datasets | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LETOR is a package of benchmark data sets for research on LEarning TO Rank,
which contains standard features, relevance judgments, data partitioning,
evaluation tools, and several baselines. Version 1.0 was released in April
2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec.
2008. This version, 4.0, was released in July 2009. Very different from
previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based
on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page
collection (~25M pages) and two query sets from Million Query track of TREC
2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short.
There are about 1700 queries in MQ2007 with labeled documents and about 800
queries in MQ2008 with labeled documents. If you have any questions or
suggestions about the datasets, please kindly email us ([email protected]).
Our goal is to make the dataset reliable and useful for the community.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2013 09:58:00 GMT"
}
] | 2013-06-12T00:00:00 | [
[
"Qin",
"Tao",
""
],
[
"Liu",
"Tie-Yan",
""
]
] | TITLE: Introducing LETOR 4.0 Datasets
ABSTRACT: LETOR is a package of benchmark data sets for research on LEarning TO Rank,
which contains standard features, relevance judgments, data partitioning,
evaluation tools, and several baselines. Version 1.0 was released in April
2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec.
2008. This version, 4.0, was released in July 2009. Very different from
previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based
on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page
collection (~25M pages) and two query sets from Million Query track of TREC
2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short.
There are about 1700 queries in MQ2007 with labeled documents and about 800
queries in MQ2008 with labeled documents. If you have any questions or
suggestions about the datasets, please kindly email us ([email protected]).
Our goal is to make the dataset reliable and useful for the community.
|
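As a hedged loading sketch for the dataset described above (arXiv:1306.2597): LETOR 4.0 files follow an SVMlight-style format with a qid field, which scikit-learn can parse. The file path below is an assumption about how a local copy might be organised, not a bundled resource.

```python
# Load one fold of a locally downloaded LETOR 4.0 query set (path is a placeholder).
from collections import Counter
from sklearn.datasets import load_svmlight_file

X, y, qid = load_svmlight_file("MQ2007/Fold1/train.txt", query_id=True)

print("feature matrix:", X.shape)          # documents x features (sparse)
print("relevance labels:", Counter(y))     # graded relevance judgments
print("number of queries:", len(set(qid))) # documents grouped by query id
```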
1306.1850 | Ayad Ghany Ismaeel | Ayad Ghany Ismaeel, Anar Auda Ablahad | Enhancement of a Novel Method for Mutational Disease Prediction using
Bioinformatics Techniques and Backpropagation Algorithm | 5 pages, 8 figures, 1 Table, conference or other essential info | International Journal of Scientific & Engineering Research, Volume
4, Issue 6, June 2013 pages 1169-1173 | null | null | cs.CE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The novel method for mutational disease prediction uses bioinformatics tools
and datasets to diagnose malignant mutations, together with a powerful
Artificial Neural Network (a Backpropagation Network) to classify those
malignant mutations that are related to gene(s) (such as BRCA1 and BRCA2)
causing a disease (breast cancer). That method did not only, as is commonly
adopted, deal with, analyze and process the gene sequences to extract useful
information from the sequence; it also went beyond them to the environmental
factors, which play important roles in deciding and calculating some gene
features in order to view their functional parts and relations to diseases.
This paper proposes an enhancement of that novel method as a first way to
diagnose and predict the disease from mutations, considering and introducing
several other features that reflect the alterations and changes in the
environment as well as in the genes, comparing sequences to gain information
about the structure or function of a query sequence, and proposing an optimal
and more accurate system for classification and for dealing with a specific
disorder using backpropagation with a mean square rate of 0.000000001.
Index Terms (Homology sequence, GC content and AT content, Bioinformatics,
Backpropagation Network, BLAST, DNA Sequence, Protein Sequence)
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2013 21:53:25 GMT"
}
] | 2013-06-11T00:00:00 | [
[
"Ismaeel",
"Ayad Ghany",
""
],
[
"Ablahad",
"Anar Auda",
""
]
] | TITLE: Enhancement of a Novel Method for Mutational Disease Prediction using
Bioinformatics Techniques and Backpropagation Algorithm
ABSTRACT: The novel method for mutational disease prediction uses bioinformatics tools
and datasets to diagnose malignant mutations, together with a powerful
Artificial Neural Network (a Backpropagation Network) to classify those
malignant mutations that are related to gene(s) (such as BRCA1 and BRCA2)
causing a disease (breast cancer). That method did not only, as is commonly
adopted, deal with, analyze and process the gene sequences to extract useful
information from the sequence; it also went beyond them to the environmental
factors, which play important roles in deciding and calculating some gene
features in order to view their functional parts and relations to diseases.
This paper proposes an enhancement of that novel method as a first way to
diagnose and predict the disease from mutations, considering and introducing
several other features that reflect the alterations and changes in the
environment as well as in the genes, comparing sequences to gain information
about the structure or function of a query sequence, and proposing an optimal
and more accurate system for classification and for dealing with a specific
disorder using backpropagation with a mean square rate of 0.000000001.
Index Terms (Homology sequence, GC content and AT content, Bioinformatics,
Backpropagation Network, BLAST, DNA Sequence, Protein Sequence)
|
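The index terms of the row above (arXiv:1306.1850) mention GC/AT content features and a backpropagation network. Below is a heavily hedged sketch of that feature idea on synthetic sequences; the sequences, labels and network settings are stand-ins, not BRCA1/BRCA2 data or the paper's system.

```python
# Simple sequence-content features fed to a small backpropagation network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def content_features(seq):
    seq = seq.upper()
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    at = (seq.count("A") + seq.count("T")) / len(seq)
    # With only A/C/G/T these two sum to one; both are kept because both appear
    # in the paper's index terms.
    return [gc, at]

rng = np.random.default_rng(0)
def random_seq(gc_bias):
    probs = [(1 - gc_bias) / 2, gc_bias / 2, gc_bias / 2, (1 - gc_bias) / 2]
    return "".join(rng.choice(list("AGCT"), size=200, p=probs))

# Two synthetic classes differing in GC richness, as a stand-in for two
# sequence sets to be discriminated.
seqs = [random_seq(0.4) for _ in range(100)] + [random_seq(0.6) for _ in range(100)]
labels = np.array([0] * 100 + [1] * 100)

X = np.array([content_features(s) for s in seqs])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```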
1306.2084 | Maximilian Nickel | Maximilian Nickel, Volker Tresp | Logistic Tensor Factorization for Multi-Relational Data | Accepted at ICML 2013 Workshop "Structured Learning: Inferring Graphs
from Structured and Unstructured Inputs" (SLG 2013) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tensor factorizations have become increasingly popular approaches for various
learning tasks on structured data. In this work, we extend the RESCAL tensor
factorization, which has shown state-of-the-art results for multi-relational
learning, to account for the binary nature of adjacency tensors. We study the
improvements that can be gained via this approach on various benchmark datasets
and show that the logistic extension can improve the prediction results
significantly.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2013 01:45:49 GMT"
}
] | 2013-06-11T00:00:00 | [
[
"Nickel",
"Maximilian",
""
],
[
"Tresp",
"Volker",
""
]
] | TITLE: Logistic Tensor Factorization for Multi-Relational Data
ABSTRACT: Tensor factorizations have become increasingly popular approaches for various
learning tasks on structured data. In this work, we extend the RESCAL tensor
factorization, which has shown state-of-the-art results for multi-relational
learning, to account for the binary nature of adjacency tensors. We study the
improvements that can be gained via this approach on various benchmark datasets
and show that the logistic extension can improve the prediction results
significantly.
|
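A minimal gradient-descent sketch of the model class named in the abstract above (arXiv:1306.2084): each binary relation slice X_k is modelled as sigmoid(A R_k A^T), i.e. a logistic RESCAL-style factorisation. The full-batch optimiser and its hyperparameters are assumptions of this sketch, not the algorithm evaluated in the paper.

```python
# Logistic RESCAL-style factorisation of a small binary tensor.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 30, 3, 5

# Toy binary tensor: random sparse relations between entities.
X = (rng.random((n_relations, n_entities, n_entities)) < 0.1).astype(float)

A = 0.1 * rng.standard_normal((n_entities, rank))
R = 0.1 * rng.standard_normal((n_relations, rank, rank))

lr = 0.05
for step in range(500):
    grad_A = np.zeros_like(A)
    for k in range(n_relations):
        S = A @ R[k] @ A.T
        E = sigmoid(S) - X[k]                      # dLoss/dS for the logistic loss
        grad_R_k = A.T @ E @ A
        grad_A += E @ A @ R[k].T + E.T @ A @ R[k]
        R[k] -= lr * grad_R_k
    A -= lr * grad_A

pred = sigmoid(np.stack([A @ R[k] @ A.T for k in range(n_relations)]))
print("mean absolute reconstruction error:", np.abs(pred - X).mean())
```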
1306.2118 | E.N.Sathishkumar | E.N.Sathishkumar, K.Thangavel, T.Chandrasekhar | A Novel Approach for Single Gene Selection Using Clustering and
Dimensionality Reduction | 6 pages, 4 figures. arXiv admin note: text overlap with
arXiv:1306.1323 | International Journal of Scientific & Engineering Research, Volume
4, Issue 5, May-2013, page no 1540-1545 | null | null | cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend the standard rough set-based approach to deal with huge amounts of
numeric attributes versus small amount of available objects. Here, a novel
approach of clustering along with dimensionality reduction; Hybrid Fuzzy C
Means-Quick Reduct (FCMQR) algorithm is proposed for single gene selection.
Gene selection is a process to select genes which are more informative. It is
one of the important steps in knowledge discovery. The problem is that all
genes are not important in gene expression data. Some of the genes may be
redundant, and others may be irrelevant and noisy. In this study, the entire
dataset is divided into proper groupings of similar genes by applying the Fuzzy
C Means (FCM) algorithm. Highly class-discriminated genes are then selected
based on their degree of dependence by applying the Quick Reduct algorithm,
based on Rough Set Theory, to all the resultant clusters. The Average
Correlation Value (ACV) is calculated for the highly class-discriminated genes.
The clusters which have an ACV value of 1 are determined to be significant
clusters, whose classification accuracy will be equal to or higher than that of
the entire
dataset. The proposed algorithm is evaluated using WEKA classifiers and
compared. Finally, experimental results related to the leukemia cancer data
confirm that our approach is quite promising, though it surely requires further
research.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2013 07:28:51 GMT"
}
] | 2013-06-11T00:00:00 | [
[
"Sathishkumar",
"E. N.",
""
],
[
"Thangavel",
"K.",
""
],
[
"Chandrasekhar",
"T.",
""
]
] | TITLE: A Novel Approach for Single Gene Selection Using Clustering and
Dimensionality Reduction
ABSTRACT: We extend the standard rough set-based approach to deal with huge amounts of
numeric attributes versus small amount of available objects. Here, a novel
approach of clustering along with dimensionality reduction; Hybrid Fuzzy C
Means-Quick Reduct (FCMQR) algorithm is proposed for single gene selection.
Gene selection is a process to select genes which are more informative. It is
one of the important steps in knowledge discovery. The problem is that all
genes are not important in gene expression data. Some of the genes may be
redundant, and others may be irrelevant and noisy. In this study, the entire
dataset is divided into proper groupings of similar genes by applying the Fuzzy
C Means (FCM) algorithm. Highly class-discriminated genes are then selected
based on their degree of dependence by applying the Quick Reduct algorithm,
based on Rough Set Theory, to all the resultant clusters. The Average
Correlation Value (ACV) is calculated for the highly class-discriminated genes.
The clusters which have an ACV value of 1 are determined to be significant
clusters, whose classification accuracy will be equal to or higher than that of
the entire
dataset. The proposed algorithm is evaluated using WEKA classifiers and
compared. Finally, experimental results related to the leukemia cancer data
confirm that our approach is quite promising, though it surely requires further
research.
|
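The clustering step used in the abstract above (arXiv:1306.2118) to group similar genes is Fuzzy C-Means; a compact NumPy sketch of that step is given below on random data. The rough-set Quick Reduct and ACV stages of the paper are not reproduced here.

```python
# Compact Fuzzy C-Means (FCM) on a random gene-expression-like matrix.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, rng=np.random.default_rng(0)):
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)      # standard FCM membership update
    return centers, U

genes = np.random.default_rng(1).normal(size=(200, 20))   # 200 genes x 20 samples
centers, memberships = fuzzy_c_means(genes, c=4)
print("hard cluster sizes:", np.bincount(memberships.argmax(axis=1), minlength=4))
```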
1305.0445 | Yoshua Bengio | Yoshua Bengio | Deep Learning of Representations: Looking Forward | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning research aims at discovering learning algorithms that discover
multiple levels of distributed representations, with higher levels representing
more abstract concepts. Although the study of deep learning has already led to
impressive theoretical results, learning algorithms and breakthrough
experiments, several challenges lie ahead. This paper proposes to examine some
of these challenges, centering on the questions of scaling deep learning
algorithms to much larger models and datasets, reducing optimization
difficulties due to ill-conditioning or local minima, designing more efficient
and powerful inference and sampling procedures, and learning to disentangle the
factors of variation underlying the observed data. It also proposes a few
forward-looking research directions aimed at overcoming these challenges.
| [
{
"version": "v1",
"created": "Thu, 2 May 2013 14:33:28 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jun 2013 02:35:21 GMT"
}
] | 2013-06-10T00:00:00 | [
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Deep Learning of Representations: Looking Forward
ABSTRACT: Deep learning research aims at discovering learning algorithms that discover
multiple levels of distributed representations, with higher levels representing
more abstract concepts. Although the study of deep learning has already led to
impressive theoretical results, learning algorithms and breakthrough
experiments, several challenges lie ahead. This paper proposes to examine some
of these challenges, centering on the questions of scaling deep learning
algorithms to much larger models and datasets, reducing optimization
difficulties due to ill-conditioning or local minima, designing more efficient
and powerful inference and sampling procedures, and learning to disentangle the
factors of variation underlying the observed data. It also proposes a few
forward-looking research directions aimed at overcoming these challenges.
|
1306.1716 | Alexander Petukhov | Alexander Petukhov and Inna Kozlov | Fast greedy algorithm for subspace clustering from corrupted and
incomplete data | arXiv admin note: substantial text overlap with arXiv:1304.4282 | null | null | null | cs.LG cs.DS math.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the Fast Greedy Sparse Subspace Clustering (FGSSC) algorithm
providing an efficient method for clustering data belonging to a few
low-dimensional linear or affine subspaces. The main difference of our
algorithm from its predecessors is its ability to work with noisy data having
a high rate of erasures (missing entries with known coordinates) and errors
(corrupted entries with unknown coordinates). We discuss here how to implement
the fast version of the greedy algorithm with maximum efficiency, whose greedy
strategy is incorporated into the iterations of the basic algorithm.
We provide numerical evidence that, in terms of subspace clustering
capability, the fast greedy algorithm outperforms not only the existing
state-of-the-art SSC algorithm, taken by the authors as the basic algorithm,
but also the recent GSSC algorithm. At the same time, its computational cost
is only slightly higher than that of SSC.
Numerical evidence of the algorithm's significant advantage is presented for
a few synthetic models as well as for the Extended Yale B dataset of facial
images. In particular, the face recognition misclassification rate turned out
to be 6-20 times lower than for the SSC algorithm. We also provide numerical
evidence that the FGSSC algorithm is able to perform clustering of
corrupted data efficiently even when the sum of subspace dimensions
significantly exceeds the dimension of the ambient space.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2013 13:14:50 GMT"
}
] | 2013-06-10T00:00:00 | [
[
"Petukhov",
"Alexander",
""
],
[
"Kozlov",
"Inna",
""
]
] | TITLE: Fast greedy algorithm for subspace clustering from corrupted and
incomplete data
ABSTRACT: We describe the Fast Greedy Sparse Subspace Clustering (FGSSC) algorithm
providing an efficient method for clustering data belonging to a few
low-dimensional linear or affine subspaces. The main difference of our
algorithm from its predecessors is its ability to work with noisy data having
a high rate of erasures (missing entries with known coordinates) and errors
(corrupted entries with unknown coordinates). We discuss here how to implement
the fast version of the greedy algorithm with maximum efficiency, whose greedy
strategy is incorporated into the iterations of the basic algorithm.
We provide numerical evidence that, in terms of subspace clustering
capability, the fast greedy algorithm outperforms not only the existing
state-of-the-art SSC algorithm, taken by the authors as the basic algorithm,
but also the recent GSSC algorithm. At the same time, its computational cost
is only slightly higher than that of SSC.
Numerical evidence of the algorithm's significant advantage is presented for
a few synthetic models as well as for the Extended Yale B dataset of facial
images. In particular, the face recognition misclassification rate turned out
to be 6-20 times lower than for the SSC algorithm. We also provide numerical
evidence that the FGSSC algorithm is able to perform clustering of
corrupted data efficiently even when the sum of subspace dimensions
significantly exceeds the dimension of the ambient space.
|
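For orientation, the sketch below is a plain sparse-subspace-clustering
baseline of the kind FGSSC builds on: each point is represented sparsely by
the other points (here via orthogonal matching pursuit) and the symmetrized
coefficient graph is fed to spectral clustering. It does not include FGSSC's
handling of erasures and errors, and the two-subspace synthetic data are
invented for illustration.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.cluster import SpectralClustering

def ssc_omp(X, n_clusters, n_nonzero=5):
    """Plain SSC baseline: represent each column of X sparsely by the others,
    then spectrally cluster the symmetrized coefficient graph."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
        omp.fit(X[:, others], X[:, i])
        C[others, i] = omp.coef_
    W = np.abs(C) + np.abs(C).T          # affinity from sparse coefficients
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               random_state=0)
    return model.fit_predict(W)

# Two random 3-dimensional subspaces in R^20, 40 points each (synthetic example).
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((20, 3)), rng.standard_normal((20, 3))
X = np.hstack([B1 @ rng.standard_normal((3, 40)), B2 @ rng.standard_normal((3, 40))])
labels = ssc_omp(X, n_clusters=2)
```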
1104.2930 | Donghui Yan | Donghui Yan, Aiyou Chen, Michael I. Jordan | Cluster Forests | 23 pages, 6 figures | Computational Statistics and Data Analysis 2013, Vol. 66, 178-192 | 10.1016/j.csda.2013.04.010 | COMSTA5571 | stat.ME cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With inspiration from Random Forests (RF) in the context of classification, a
new clustering ensemble method---Cluster Forests (CF) is proposed.
Geometrically, CF randomly probes a high-dimensional data cloud to obtain "good
local clusterings" and then aggregates via spectral clustering to obtain
cluster assignments for the whole dataset. The search for good local
clusterings is guided by a cluster quality measure kappa. CF progressively
improves each local clustering in a fashion that resembles the tree growth in
RF. Empirical studies on several real-world datasets under two different
performance metrics show that CF compares favorably to its competitors.
Theoretical analysis reveals that the kappa measure makes it possible to grow
the local clustering in a desirable way---it is "noise-resistant". A
closed-form expression is obtained for the mis-clustering rate of spectral
clustering under a perturbation model, which yields new insights into some
aspects of spectral clustering.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2011 21:29:10 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2011 05:06:04 GMT"
},
{
"version": "v3",
"created": "Thu, 23 May 2013 21:17:26 GMT"
}
] | 2013-06-07T00:00:00 | [
[
"Yan",
"Donghui",
""
],
[
"Chen",
"Aiyou",
""
],
[
"Jordan",
"Michael I.",
""
]
] | TITLE: Cluster Forests
ABSTRACT: With inspiration from Random Forests (RF) in the context of classification, a
new clustering ensemble method---Cluster Forests (CF) is proposed.
Geometrically, CF randomly probes a high-dimensional data cloud to obtain "good
local clusterings" and then aggregates via spectral clustering to obtain
cluster assignments for the whole dataset. The search for good local
clusterings is guided by a cluster quality measure kappa. CF progressively
improves each local clustering in a fashion that resembles the tree growth in
RF. Empirical studies on several real-world datasets under two different
performance metrics show that CF compares favorably to its competitors.
Theoretical analysis reveals that the kappa measure makes it possible to grow
the local clustering in a desirable way---it is "noise-resistant". A
closed-form expression is obtained for the mis-clustering rate of spectral
clustering under a perturbation model, which yields new insights into some
aspects of spectral clustering.
|
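The following is a simplified ensemble-clustering sketch in the spirit of CF:
many k-means runs on random feature subsets act as "local clusterings", their
agreements are accumulated in a co-association matrix, and spectral clustering
aggregates them. It omits CF's kappa-guided, tree-like growth of each local
clustering, and the random data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def cluster_forest_like(X, n_clusters, n_estimators=50, feat_frac=0.5, seed=0):
    """Aggregate many local clusterings on random feature subsets via a
    co-association matrix, then recover final labels with spectral clustering."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    co = np.zeros((n, n))
    k_sub = max(1, int(feat_frac * d))
    for t in range(n_estimators):
        feats = rng.choice(d, size=k_sub, replace=False)   # random probe of features
        labels = KMeans(n_clusters=n_clusters, n_init=5,
                        random_state=t).fit_predict(X[:, feats])
        co += (labels[:, None] == labels[None, :])          # co-clustered indicator
    co /= n_estimators
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=seed).fit_predict(co)

# Hypothetical usage on a random dataset, asking for 3 clusters.
X = np.random.rand(300, 10)
labels = cluster_forest_like(X, n_clusters=3)
```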
1306.1298 | Allon G. Percus | Cristina Garcia-Cardona, Arjuna Flenner, Allon G. Percus | Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau
Functional Minimization | 16 pages, to appear in Springer's Lecture Notes in Computer Science
volume "Pattern Recognition Applications and Methods 2013", part of series on
Advances in Intelligent and Soft Computing | null | null | null | stat.ML cs.LG math.ST physics.data-an stat.TH | http://creativecommons.org/licenses/publicdomain/ | We present a graph-based variational algorithm for classification of
high-dimensional data, generalizing the binary diffuse interface model to the
case of multiple classes. Motivated by total variation techniques, the method
involves minimizing an energy functional made up of three terms. The first two
terms promote a stepwise continuous classification function with sharp
transitions between classes, while preserving symmetry among the class labels.
The third term is a data fidelity term, allowing us to incorporate prior
information into the model in a semi-supervised framework. The performance of
the algorithm on synthetic data, as well as on the COIL and MNIST benchmark
datasets, is competitive with state-of-the-art graph-based multiclass
segmentation methods.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2013 05:32:00 GMT"
}
] | 2013-06-07T00:00:00 | [
[
"Garcia-Cardona",
"Cristina",
""
],
[
"Flenner",
"Arjuna",
""
],
[
"Percus",
"Allon G.",
""
]
] | TITLE: Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau
Functional Minimization
ABSTRACT: We present a graph-based variational algorithm for classification of
high-dimensional data, generalizing the binary diffuse interface model to the
case of multiple classes. Motivated by total variation techniques, the method
involves minimizing an energy functional made up of three terms. The first two
terms promote a stepwise continuous classification function with sharp
transitions between classes, while preserving symmetry among the class labels.
The third term is a data fidelity term, allowing us to incorporate prior
information into the model in a semi-supervised framework. The performance of
the algorithm on synthetic data, as well as on the COIL and MNIST benchmark
datasets, is competitive with state-of-the-art graph-based multiclass
segmentation methods.
|
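The Ginzburg-Landau minimization itself is not available off the shelf, but
the general setting, multiclass semi-supervised classification by propagating
a few known labels over a similarity graph, can be illustrated with
scikit-learn's LabelSpreading; the digits data and the choice of 50 labeled
points are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

# Graph-based multiclass semi-supervised classification on a small benchmark:
# only a handful of labels are revealed, the rest are marked with -1.
X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
y_train = np.full_like(y, -1)
labeled = rng.choice(len(y), size=50, replace=False)   # 50 labeled points
y_train[labeled] = y[labeled]

model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y_train)

mask = y_train == -1
acc = (model.transduction_[mask] == y[mask]).mean()
print(f"accuracy on unlabeled points: {acc:.3f}")
```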
1209.1797 | Eitan Menahem | Eitan Menahem, Alon Schclar, Lior Rokach, Yuval Elovici | Securing Your Transactions: Detecting Anomalous Patterns In XML
Documents | Journal version (14 pages) | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | XML transactions are used in many information systems to store data and
interact with other systems. Abnormal transactions, the result of either an
on-going cyber attack or the actions of a benign user, can potentially harm the
interacting systems and therefore they are regarded as a threat. In this paper
we address the problem of anomaly detection and localization in XML
transactions using machine learning techniques. We present a new XML anomaly
detection framework, XML-AD. Within this framework, an automatic method for
extracting features from XML transactions was developed as well as a practical
method for transforming XML features into vectors of fixed dimensionality. With
these two methods in place, the XML-AD framework makes it possible to utilize
general learning algorithms for anomaly detection. Central to the functioning
of the framework is a novel multi-univariate anomaly detection algorithm,
ADIFA. The framework was evaluated on four XML transactions datasets, captured
from real information systems, in which it achieved over 89% true positive
detection rate with less than a 0.2% false positive rate.
| [
{
"version": "v1",
"created": "Sun, 9 Sep 2012 13:02:49 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Sep 2012 05:48:34 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Jun 2013 13:19:42 GMT"
}
] | 2013-06-06T00:00:00 | [
[
"Menahem",
"Eitan",
""
],
[
"Schclar",
"Alon",
""
],
[
"Rokach",
"Lior",
""
],
[
"Elovici",
"Yuval",
""
]
] | TITLE: Securing Your Transactions: Detecting Anomalous Patterns In XML
Documents
ABSTRACT: XML transactions are used in many information systems to store data and
interact with other systems. Abnormal transactions, the result of either an
on-going cyber attack or the actions of a benign user, can potentially harm the
interacting systems and therefore they are regarded as a threat. In this paper
we address the problem of anomaly detection and localization in XML
transactions using machine learning techniques. We present a new XML anomaly
detection framework, XML-AD. Within this framework, an automatic method for
extracting features from XML transactions was developed as well as a practical
method for transforming XML features into vectors of fixed dimensionality. With
these two methods in place, the XML-AD framework makes it possible to utilize
general learning algorithms for anomaly detection. Central to the functioning
of the framework is a novel multi-univariate anomaly detection algorithm,
ADIFA. The framework was evaluated on four XML transactions datasets, captured
from real information systems, in which it achieved over 89% true positive
detection rate with less than a 0.2% false positive rate.
|
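Neither the XML-AD feature extractor nor ADIFA is reproduced here, but the
overall shape of the framework can be sketched: map each XML transaction to a
fixed-length vector of simple structural statistics and hand those vectors to
a generic anomaly detector (scikit-learn's IsolationForest is used as a
stand-in). The sample transactions are invented.

```python
import numpy as np
import xml.etree.ElementTree as ET
from sklearn.ensemble import IsolationForest

def xml_features(doc: str) -> list:
    """Map one XML transaction to a fixed-length vector of structural statistics."""
    root = ET.fromstring(doc)
    nodes = list(root.iter())
    depths = []
    def walk(elem, depth=0):
        depths.append(depth)
        for child in elem:
            walk(child, depth + 1)
    walk(root)
    text_lens = [len(e.text or "") for e in nodes]
    n_attrs = sum(len(e.attrib) for e in nodes)
    return [len(nodes), max(depths), np.mean(text_lens), n_attrs,
            len({e.tag for e in nodes})]

# Hypothetical transactions: mostly similar orders plus one oddly bloated document.
normal = ["<order><id>%d</id><item qty='1'>book</item></order>" % i for i in range(50)]
odd = ["<order><id>x</id>" + "<item qty='9'>gadget</item>" * 40 + "</order>"]
X = np.array([xml_features(d) for d in normal + odd])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
print(detector.predict(X))   # -1 marks the anomalous transaction
```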
1306.0974 | Jiuqing Wan | Jiuqing Wan, Li Liu | Distributed Bayesian inference for consistent labeling of tracked
objects in non-overlapping camera networks | 19 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the fundamental requirements for visual surveillance using
non-overlapping camera networks is the correct labeling of tracked objects on
each camera in a consistent way, in the sense that the captured tracklets, or
observations in this paper, of the same object at different cameras should be
assigned the same label. In this paper, we formulate this task as a Bayesian
inference problem and propose a distributed inference framework in which the
posterior distribution of the labeling variable corresponding to each
observation, conditioned on all historical appearance and spatio-temporal
evidence gathered in the whole network, is calculated based solely on local
information processing at each camera and mutual information exchange between
neighboring cameras. In our framework, the number of objects present in the
monitored region, i.e. the sampling space of the labeling variables, does not
need to be specified beforehand. Instead, it can be determined automatically
on the fly. In addition, we make no assumption about the appearance
distribution of a single object, but use similarity scores between appearance
pairs, given by an advanced object re-identification algorithm, as the
appearance likelihood for inference. This makes our method very flexible and
competitive when observing conditions undergo large changes across camera
views. To cope with
the problem of missing detection, which is critical for distributed inference,
we consider an enlarged neighborhood of each camera during inference and use a
mixture model to describe the higher order spatio-temporal constraints. The
robustness of the algorithm against missing detection is improved at the cost
of slightly increased computation and communication burden at each camera node.
Finally, we demonstrate the effectiveness of our method through experiments on
an indoor Office Building dataset and an outdoor Campus Garden dataset.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2013 03:50:58 GMT"
}
] | 2013-06-06T00:00:00 | [
[
"Wan",
"Jiuqing",
""
],
[
"Liu",
"Li",
""
]
] | TITLE: Distributed Bayesian inference for consistent labeling of tracked
objects in non-overlapping camera networks
ABSTRACT: One of the fundamental requirements for visual surveillance using
non-overlapping camera networks is the correct labeling of tracked objects on
each camera in a consistent way, in the sense that the captured tracklets, or
observations in this paper, of the same object at different cameras should be
assigned the same label. In this paper, we formulate this task as a Bayesian
inference problem and propose a distributed inference framework in which the
posterior distribution of the labeling variable corresponding to each
observation, conditioned on all historical appearance and spatio-temporal
evidence gathered in the whole network, is calculated based solely on local
information processing at each camera and mutual information exchange between
neighboring cameras. In our framework, the number of objects present in the
monitored region, i.e. the sampling space of the labeling variables, does not
need to be specified beforehand. Instead, it can be determined automatically
on the fly. In addition, we make no assumption about the appearance
distribution of a single object, but use similarity scores between appearance
pairs, given by an advanced object re-identification algorithm, as the
appearance likelihood for inference. This makes our method very flexible and
competitive when observing conditions undergo large changes across camera
views. To cope with
the problem of missing detection, which is critical for distributed inference,
we consider an enlarged neighborhood of each camera during inference and use a
mixture model to describe the higher order spatio-temporal constraints. The
robustness of the algorithm against missing detection is improved at the cost
of slightly increased computation and communication burden at each camera node.
Finally, we demonstrate the effectiveness of our method through experiments on
an indoor Office Building dataset and an outdoor Campus Garden dataset.
|
1306.1083 | Puneet Kumar | Pierre-Yves Baudin (INRIA Saclay - Ile de France), Danny Goodman,
Puneet Kumar (INRIA Saclay - Ile de France, CVN), Noura Azzabou (MIRCEN,
UPMC), Pierre G. Carlier (UPMC), Nikos Paragios (INRIA Saclay - Ile de
France, LIGM, ENPC, MAS), M. Pawan Kumar (INRIA Saclay - Ile de France, CVN) | Discriminative Parameter Estimation for Random Walks Segmentation:
Technical Report | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. We propose a novel discriminative
learning framework that estimates the parameters using a training dataset. The
main challenge we face is that the training samples are not fully supervised.
Specifically, they provide a hard segmentation of the images, instead of a
probabilistic segmentation. We overcome this challenge by treating the optimal
probabilistic segmentation that is compatible with the given hard segmentation
as a latent variable. This allows us to employ the latent support vector
machine formulation for parameter estimation. We show that our approach
significantly outperforms the baseline methods on a challenging dataset
consisting of real clinical 3D MRI volumes of skeletal muscles.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2013 12:48:02 GMT"
}
] | 2013-06-06T00:00:00 | [
[
"Baudin",
"Pierre-Yves",
"",
"INRIA Saclay - Ile de France"
],
[
"Goodman",
"Danny",
"",
"INRIA Saclay - Ile de France, CVN"
],
[
"Kumar",
"Puneet",
"",
"INRIA Saclay - Ile de France, CVN"
],
[
"Azzabou",
"Noura",
"",
"MIRCEN,\n UPMC"
],
[
"Carlier",
"Pierre G.",
"",
"UPMC"
],
[
"Paragios",
"Nikos",
"",
"INRIA Saclay - Ile de\n France, LIGM, ENPC, MAS"
],
[
"Kumar",
"M. Pawan",
"",
"INRIA Saclay - Ile de France, CVN"
]
] | TITLE: Discriminative Parameter Estimation for Random Walks Segmentation:
Technical Report
ABSTRACT: The Random Walks (RW) algorithm is one of the most efficient and
easy-to-use probabilistic segmentation methods. By combining contrast terms
with prior terms, it provides accurate segmentations of medical images in a
fully automated manner. However, one of the main drawbacks of using the RW
algorithm is that its parameters have to be hand-tuned. We propose a novel
discriminative learning framework that estimates the parameters using a
training dataset. The main challenge we face is that the training samples are
not fully supervised. Specifically, they provide a hard segmentation of the
images, instead of a probabilistic segmentation. We overcome this challenge by
treating the optimal probabilistic segmentation that is compatible with the
given hard segmentation as a latent variable. This allows us to employ the
latent support vector machine formulation for parameter estimation. We show
that our approach significantly outperforms the baseline methods on a
challenging dataset consisting of real clinical 3D MRI volumes of skeletal
muscles.
|
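The latent-SVM parameter learning is specific to the paper, but the underlying
Random Walks segmenter is available in scikit-image. The sketch below runs it
on a synthetic image; `beta`, the contrast weight, is exactly the kind of
hand-tuned parameter the paper proposes to estimate from training data
instead.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic 2D "image": a bright disk on a dark, noisy background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
image = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
image += 0.3 * rng.standard_normal(image.shape)

# Seed labels: 1 = object, 2 = background, 0 = unknown (to be segmented).
labels = np.zeros(image.shape, dtype=int)
labels[64, 64] = 1
labels[5, 5] = 2

# beta controls the contrast term; in the paper such parameters are learned
# with a latent SVM rather than hand-tuned as here.
segmentation = random_walker(image, labels, beta=130, mode="bf")
print(segmentation.shape, np.unique(segmentation))
```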
1306.0886 | Felix X. Yu | Felix X. Yu, Dong Liu, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang | $\propto$SVM for learning with label proportions | Appears in Proceedings of the 30th International Conference on
Machine Learning (ICML 2013) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of learning with label proportions in which the training
data is provided in groups and only the proportion of each class in each group
is known. We propose a new method called proportion-SVM, or $\propto$SVM, which
explicitly models the latent unknown instance labels together with the known
group label proportions in a large-margin framework. Unlike the existing works,
our approach avoids making restrictive assumptions about the data. The
$\propto$SVM model leads to a non-convex integer programming problem. In order
to solve it efficiently, we propose two algorithms: one based on simple
alternating optimization and the other based on a convex relaxation. Extensive
experiments on standard datasets show that $\propto$SVM outperforms the
state-of-the-art, especially for larger group sizes.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2013 19:35:31 GMT"
}
] | 2013-06-05T00:00:00 | [
[
"Yu",
"Felix X.",
""
],
[
"Liu",
"Dong",
""
],
[
"Kumar",
"Sanjiv",
""
],
[
"Jebara",
"Tony",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: $\propto$SVM for learning with label proportions
ABSTRACT: We study the problem of learning with label proportions in which the training
data is provided in groups and only the proportion of each class in each group
is known. We propose a new method called proportion-SVM, or $\propto$SVM, which
explicitly models the latent unknown instance labels together with the known
group label proportions in a large-margin framework. Unlike the existing works,
our approach avoids making restrictive assumptions about the data. The
$\propto$SVM model leads to a non-convex integer programming problem. In order
to solve it efficiently, we propose two algorithms: one based on simple
alternating optimization and the other based on a convex relaxation. Extensive
experiments on standard datasets show that $\propto$SVM outperforms the
state-of-the-art, especially for larger group sizes.
|
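The sketch below is not the $\propto$SVM formulation; it is only a crude
illustration of the "simple alternating optimization" idea for learning with
label proportions: alternately train a linear SVM on current label guesses and
re-assign labels within each bag so the known class proportions are respected.
The bag structure, data and stopping rule are all invented.

```python
import numpy as np
from sklearn.svm import LinearSVC

def alternating_llp(X, bags, pos_frac, n_rounds=10, seed=0):
    """Crude alternating scheme for binary learning with label proportions.
    bags: {bag_id: array of row indices}; pos_frac: {bag_id: positive fraction}."""
    rng = np.random.default_rng(seed)
    y = np.zeros(len(X), dtype=int)
    for b, idx in bags.items():                    # random initial labels per bag
        k = int(round(pos_frac[b] * len(idx)))
        y[rng.choice(idx, size=k, replace=False)] = 1
    clf = LinearSVC(C=1.0, max_iter=5000)
    for _ in range(n_rounds):
        clf.fit(X, y)
        scores = clf.decision_function(X)
        for b, idx in bags.items():                # re-label, keeping proportions fixed
            k = int(round(pos_frac[b] * len(idx)))
            order = np.argsort(-scores[idx])       # most positive-looking first
            y[idx] = 0
            y[idx[order[:k]]] = 1
    return clf, y

# Synthetic example: two Gaussian classes arranged into 4 bags whose positive
# fractions differ (0.1, 0.4, 0.6, 0.9), the usual LLP setting.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
true = np.array([0] * 100 + [1] * 100)
neg_pool, pos_pool = list(range(100)), list(range(100, 200))
bags, pos_frac = {}, {}
for b, f in enumerate([0.1, 0.4, 0.6, 0.9]):
    k = int(50 * f)
    idx = np.array(pos_pool[:k] + neg_pool[:50 - k])
    del pos_pool[:k]
    del neg_pool[:50 - k]
    bags[b], pos_frac[b] = idx, f
clf, y_hat = alternating_llp(X, bags, pos_frac)
print("label-recovery accuracy:", (y_hat == true).mean())
```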
1210.0091 | Hong Zhao | Hong Zhao, Fan Min, William Zhu | Test-cost-sensitive attribute reduction of data with normal distribution
measurement errors | This paper has been withdrawn by the author due to the error of the
title | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The measurement error with normal distribution is universal in applications.
Generally, smaller measurement error requires better instrument and higher test
cost. In decision making based on attribute values of objects, we shall select
an attribute subset with appropriate measurement error to minimize the total
test cost. Recently, error-range-based covering rough set with uniform
distribution error was proposed to investigate this issue. However, the
measurement errors satisfy normal distribution instead of uniform distribution
which is rather simple for most applications. In this paper, we introduce
normal distribution measurement errors into the covering-based rough set
model, and deal with the test-cost-sensitive attribute reduction problem in
this new model. The major contributions of this paper are four-fold. First, we
build a new data model based on normal distribution measurement errors. With
the new data model, the error range is an ellipse in a two-dimensional space.
Second, the
covering-based rough set with normal distribution measurement errors is
constructed through the "3-sigma" rule. Third, the test-cost-sensitive
attribute reduction problem is redefined on this covering-based rough set.
Fourth, a heuristic algorithm is proposed to deal with this problem. The
algorithm is tested on ten UCI (University of California - Irvine) datasets.
The experimental results show that the algorithm is more effective and
efficient than the existing one. This study is a step toward realistic
applications of cost-sensitive learning.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2012 10:22:55 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2013 03:15:51 GMT"
}
] | 2013-06-04T00:00:00 | [
[
"Zhao",
"Hong",
""
],
[
"Min",
"Fan",
""
],
[
"Zhu",
"William",
""
]
] | TITLE: Test-cost-sensitive attribute reduction of data with normal distribution
measurement errors
ABSTRACT: The measurement error with normal distribution is universal in applications.
Generally, smaller measurement error requires better instrument and higher test
cost. In decision making based on attribute values of objects, we shall select
an attribute subset with appropriate measurement error to minimize the total
test cost. Recently, error-range-based covering rough set with uniform
distribution error was proposed to investigate this issue. However, the
measurement errors satisfy normal distribution instead of uniform distribution
which is rather simple for most applications. In this paper, we introduce
normal distribution measurement errors into the covering-based rough set
model, and deal with the test-cost-sensitive attribute reduction problem in
this new model. The major contributions of this paper are four-fold. First, we
build a new data model based on normal distribution measurement errors. With
the new data model, the error range is an ellipse in a two-dimensional space.
Second, the
covering-based rough set with normal distribution measurement errors is
constructed through the "3-sigma" rule. Third, the test-cost-sensitive
attribute reduction problem is redefined on this covering-based rough set.
Fourth, a heuristic algorithm is proposed to deal with this problem. The
algorithm is tested on ten UCI (University of California - Irvine) datasets.
The experimental results show that the algorithm is more effective and
efficient than the existing one. This study is a step toward realistic
applications of cost-sensitive learning.
|
1303.0309 | Krikamol Muandet | Krikamol Muandet and Bernhard Sch\"olkopf | One-Class Support Measure Machines for Group Anomaly Detection | Conference on Uncertainty in Artificial Intelligence (UAI2013) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on Sloan Digital Sky
Survey dataset and High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
| [
{
"version": "v1",
"created": "Fri, 1 Mar 2013 21:50:09 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jun 2013 13:42:46 GMT"
}
] | 2013-06-04T00:00:00 | [
[
"Muandet",
"Krikamol",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] | TITLE: One-Class Support Measure Machines for Group Anomaly Detection
ABSTRACT: We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on Sloan Digital Sky
Survey dataset and High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
|
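As a rough approximation of the OCSMM idea, each group can be summarized by
its empirical kernel mean embedding (here approximated with random Fourier
features) and a standard one-class SVM can then be trained on these
group-level representations. This is a stand-in, not the authors' solver, and
the groups below are synthetic.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Each "group" is a small point cloud; most are N(0, I), a few anomalous groups
# have a stretched aggregate shape even though single points look unremarkable.
normal_groups = [rng.normal(0, 1, (50, 2)) for _ in range(40)]
anomalous_groups = [rng.normal(0, 1, (50, 2)) @ np.diag([3.0, 0.2]) for _ in range(5)]
groups = normal_groups + anomalous_groups

# Approximate the RBF kernel mean embedding of each group with random Fourier features.
rff = RBFSampler(gamma=0.5, n_components=200, random_state=0)
rff.fit(np.vstack(groups))
embeddings = np.array([rff.transform(g).mean(axis=0) for g in groups])

# One-class SVM over group embeddings: a simple stand-in for an OCSMM.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(embeddings)
print(ocsvm.predict(embeddings))   # -1 marks groups with anomalous aggregate behavior
```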
1306.0152 | Eugenio Culurciello Eugenio Culurciello | Eugenio Culurciello, Jonghoon Jin, Aysegul Dundar, Jordan Bates | An Analysis of the Connections Between Layers of Deep Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an analysis of different techniques for selecting the connection
between layers of deep neural networks. Traditional deep neural networks use
random connection tables between layers to keep the number of connections
small and tune to different image features. This kind of connection performs
adequately in supervised deep networks because their values are refined during
training. On the other hand, in unsupervised learning, one cannot rely on
back-propagation techniques to learn the connections between layers. In this
work, we tested four different techniques for connecting the first layer of
the network to the second layer on the CIFAR and SVHN datasets and showed that
the accuracy can be improved by up to 3% depending on the technique used. We
also
showed that learning the connections based on the co-occurrences of the
features does not confer an advantage over a random connection table in small
networks. This work is helpful to improve the efficiency of connections between
the layers of unsupervised deep neural networks.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2013 21:37:25 GMT"
}
] | 2013-06-04T00:00:00 | [
[
"Culurciello",
"Eugenio",
""
],
[
"Jin",
"Jonghoon",
""
],
[
"Dundar",
"Aysegul",
""
],
[
"Bates",
"Jordan",
""
]
] | TITLE: An Analysis of the Connections Between Layers of Deep Neural Networks
ABSTRACT: We present an analysis of different techniques for selecting the connection
between layers of deep neural networks. Traditional deep neural networks use
random connection tables between layers to keep the number of connections
small and tune to different image features. This kind of connection performs
adequately in supervised deep networks because their values are refined during
training. On the other hand, in unsupervised learning, one cannot rely on
back-propagation techniques to learn the connections between layers. In this
work, we tested four different techniques for connecting the first layer of
the network to the second layer on the CIFAR and SVHN datasets and showed that
the accuracy can be improved by up to 3% depending on the technique used. We
also
showed that learning the connections based on the co-occurrences of the
features does not confer an advantage over a random connection table in small
networks. This work is helpful to improve the efficiency of connections between
the layers of unsupervised deep neural networks.
|
1306.0326 | Tomasz Kajdanowicz | Tomasz Kajdanowicz, Przemyslaw Kazienko, Wojciech Indyk | Parallel Processing of Large Graphs | Preprint submitted to Future Generation Computer Systems | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More and more large data collections are gathered worldwide in various IT
systems. Many of them have a networked nature and need to be processed and
analysed as graph structures. Due to their size, they very often require a
parallel paradigm for efficient computation. Three parallel techniques are
compared in the paper: MapReduce, its map-side join extension and Bulk
Synchronous Parallel (BSP). They are implemented for two different graph
problems: calculation of single source shortest paths (SSSP) and collective
classification of graph nodes by means of relational influence propagation
(RIP). The methods and algorithms are applied to several network datasets
differing in size and structural profile, originating from three domains:
telecommunication, multimedia and microblogs. The results revealed that
iterative graph processing with the BSP implementation always and
significantly outperforms MapReduce, by up to 10 times, especially for
algorithms with many iterations and sparse communication. The MapReduce
extension based on map-side join also usually shows noticeably better
efficiency, although not as much as BSP. Nevertheless, MapReduce still remains
a good alternative for enormous networks whose data structures do not fit in
local memories.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2013 08:44:32 GMT"
}
] | 2013-06-04T00:00:00 | [
[
"Kajdanowicz",
"Tomasz",
""
],
[
"Kazienko",
"Przemyslaw",
""
],
[
"Indyk",
"Wojciech",
""
]
] | TITLE: Parallel Processing of Large Graphs
ABSTRACT: More and more large data collections are gathered worldwide in various IT
systems. Many of them have a networked nature and need to be processed and
analysed as graph structures. Due to their size, they very often require a
parallel paradigm for efficient computation. Three parallel techniques are
compared in the paper: MapReduce, its map-side join extension and Bulk
Synchronous Parallel (BSP). They are implemented for two different graph
problems: calculation of single source shortest paths (SSSP) and collective
classification of graph nodes by means of relational influence propagation
(RIP). The methods and algorithms are applied to several network datasets
differing in size and structural profile, originating from three domains:
telecommunication, multimedia and microblogs. The results revealed that
iterative graph processing with the BSP implementation always and
significantly outperforms MapReduce, by up to 10 times, especially for
algorithms with many iterations and sparse communication. The MapReduce
extension based on map-side join also usually shows noticeably better
efficiency, although not as much as BSP. Nevertheless, MapReduce still remains
a good alternative for enormous networks whose data structures do not fit in
local memories.
|
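To make the BSP comparison concrete, here is a toy, single-process sketch of
how SSSP maps onto BSP supersteps: vertices receive candidate distances as
messages, relax them, send new messages, and the computation halts when no
messages remain. It is only an illustration of the programming model, not the
Hadoop/Giraph-scale implementations evaluated in the paper.

```python
from collections import defaultdict

def bsp_sssp(edges, source):
    """Single-process sketch of BSP-style SSSP: each superstep delivers messages
    (candidate distances); active vertices relax them and send new messages."""
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
    dist = defaultdict(lambda: float("inf"))
    dist[source] = 0.0
    inbox = {source: [0.0]}                 # messages delivered at superstep start
    superstep = 0
    while inbox:                            # BSP halts when no messages are in flight
        outbox = defaultdict(list)
        for v, msgs in inbox.items():
            best = min(msgs)
            if best <= dist[v]:             # vertex becomes active and relaxes
                dist[v] = best
                for nbr, w in graph[v]:
                    outbox[nbr].append(best + w)
        # a global synchronization barrier would sit here in a real BSP system
        inbox = {v: m for v, m in outbox.items() if min(m) < dist[v]}
        superstep += 1
    return dict(dist), superstep

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 1)]
print(bsp_sssp(edges, "a"))
```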
1306.0424 | Lionel Tabourier | Abdelhamid Salah Brahim, Lionel Tabourier, B\'en\'edicte Le Grand | A data-driven analysis to question epidemic models for citation cascades
on the blogosphere | 18 pages, 9 figures, to be published in ICWSM-13 proceedings | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Citation cascades in blog networks are often considered as traces of
information spreading on this social medium. In this work, we question this
point of view using both a structural and semantic analysis of five months
activity of the most representative blogs of the french-speaking
community.Statistical measures reveal that our dataset shares many features
with those that can be found in the literature, suggesting the existence of an
identical underlying process. However, a closer analysis of the post content
indicates that the popular epidemic-like descriptions of cascades are
misleading in this context.A basic model, taking only into account the behavior
of bloggers and their restricted social network, accounts for several important
statistical features of the data.These arguments support the idea that
citations primary goal may not be information spreading on the blogosphere.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2013 14:17:54 GMT"
}
] | 2013-06-04T00:00:00 | [
[
"Brahim",
"Abdelhamid Salah",
""
],
[
"Tabourier",
"Lionel",
""
],
[
"Grand",
"Bénédicte Le",
""
]
] | TITLE: A data-driven analysis to question epidemic models for citation cascades
on the blogosphere
ABSTRACT: Citation cascades in blog networks are often considered as traces of
information spreading on this social medium. In this work, we question this
point of view using both a structural and semantic analysis of five months of
activity of the most representative blogs of the French-speaking community.
Statistical measures reveal that our dataset shares many features with those
that can be found in the literature, suggesting the existence of an identical
underlying process. However, a closer analysis of the post content indicates
that the popular epidemic-like descriptions of cascades are misleading in this
context. A basic model, taking into account only the behavior of bloggers and
their restricted social network, accounts for several important statistical
features of the data. These arguments support the idea that the primary goal
of citations may not be information spreading on the blogosphere.
|
1306.0505 | Juan Guan | Kejia Chen, Bo Wang, Juan Guan, and Steve Granick | Diagnosing Heterogeneous Dynamics in Single Molecule/Particle
Trajectories with Multiscale Wavelets | null | null | null | null | physics.data-an physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a simple automated method to extract and quantify transient
heterogeneous dynamical changes from large datasets generated in single
molecule/particle tracking experiments. Based on wavelet transform, the method
transforms raw data to locally match dynamics of interest. This is accomplished
using statistically adaptive universal thresholding, whose advantage is to
avoid a single arbitrary threshold that might conceal individual variability
across populations. How to implement this multiscale method is described,
focusing on local confined diffusion separated by transient transport periods
or hopping events, with 3 specific examples: in cell biology, biotechnology,
and glassy colloid dynamics. This computationally-efficient method can run
routinely on hundreds of millions of data points analyzed within an hour on a
desktop personal computer.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2013 17:12:20 GMT"
}
] | 2013-06-04T00:00:00 | [
[
"Chen",
"Kejia",
""
],
[
"Wang",
"Bo",
""
],
[
"Guan",
"Juan",
""
],
[
"Granick",
"Steve",
""
]
] | TITLE: Diagnosing Heterogeneous Dynamics in Single Molecule/Particle
Trajectories with Multiscale Wavelets
ABSTRACT: We describe a simple automated method to extract and quantify transient
heterogeneous dynamical changes from large datasets generated in single
molecule/particle tracking experiments. Based on wavelet transform, the method
transforms raw data to locally match dynamics of interest. This is accomplished
using statistically adaptive universal thresholding, whose advantage is to
avoid a single arbitrary threshold that might conceal individual variability
across populations. How to implement this multiscale method is described,
focusing on local confined diffusion separated by transient transport periods
or hopping events, with 3 specific examples: in cell biology, biotechnology,
and glassy colloid dynamics. This computationally-efficient method can run
routinely on hundreds of millions of data points analyzed within an hour on a
desktop personal computer.
|
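A minimal PyWavelets sketch of the core ingredient is given below: a wavelet
decomposition of a trajectory's displacement signal thresholded with the
statistically adaptive universal threshold (sigma * sqrt(2 ln n), with sigma
estimated from the finest-scale coefficients). The synthetic trajectory with a
transport burst is invented, and the full multiscale confinement/hopping
classification of the paper is not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic 1D trajectory: confined diffusion with a brief directed-transport burst.
n = 1024
steps = 0.1 * rng.standard_normal(n)
steps[400:430] += 0.8                      # transient transport period
x = np.cumsum(steps)

# Multiscale wavelet decomposition of the displacement signal.
coeffs = pywt.wavedec(np.diff(x), "db4", level=4)

# Universal threshold: sigma * sqrt(2 ln n), sigma estimated at the finest scale
# via the median absolute deviation (a standard, statistically adaptive choice).
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
uthresh = sigma * np.sqrt(2 * np.log(len(x)))

# Coefficients surviving the threshold mark scales and positions whose dynamics
# stand out from the noise floor (here, the transport burst).
kept = [np.abs(c) > uthresh for c in coeffs[1:]]
for lev, mask in zip(range(4, 0, -1), kept):
    print(f"detail level {lev}: {mask.sum()} of {mask.size} coefficients above threshold")
```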
1305.7438 | Tian Qiu | Tian Qiu, Tian-Tian Wang, Zi-Ke Zhang, Li-Xin Zhong, Guang Chen | Heterogeneity Involved Network-based Algorithm Leads to Accurate and
Personalized Recommendations | null | null | null | null | physics.soc-ph cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heterogeneity of both the source and target objects is taken into account in
a network-based algorithm for the directional resource transformation between
objects. Based on a biased heat conduction recommendation method (BHC) which
considers the heterogeneity of the target object, we propose a heterogeneous
heat conduction algorithm (HHC), by further taking the source object degree as
the weight of diffusion. Tested on three real datasets, the Netflix, RYM and
MovieLens, the HHC algorithm is found to give better recommendations in both
accuracy and personalization than two excellent algorithms, i.e., the original
BHC and a hybrid algorithm of heat conduction and mass diffusion (HHM), while
not requiring any additional information or parameters. Moreover, the HHC even
improves the recommendation accuracy on cold objects, which relates to the
so-called cold-start problem, by effectively relieving the recommendation bias
on objects with different levels of popularity.
| [
{
"version": "v1",
"created": "Fri, 31 May 2013 15:01:25 GMT"
}
] | 2013-06-03T00:00:00 | [
[
"Qiu",
"Tian",
""
],
[
"Wang",
"Tian-Tian",
""
],
[
"Zhang",
"Zi-Ke",
""
],
[
"Zhong",
"Li-Xin",
""
],
[
"Chen",
"Guang",
""
]
] | TITLE: Heterogeneity Involved Network-based Algorithm Leads to Accurate and
Personalized Recommendations
ABSTRACT: Heterogeneity of both the source and target objects is taken into account in
a network-based algorithm for the directional resource transformation between
objects. Based on a biased heat conduction recommendation method (BHC) which
considers the heterogeneity of the target object, we propose a heterogeneous
heat conduction algorithm (HHC), by further taking the source object degree as
the weight of diffusion. Tested on three real datasets, the Netflix, RYM and
MovieLens, the HHC algorithm is found to give better recommendations in both
accuracy and personalization than two excellent algorithms, i.e., the original
BHC and a hybrid algorithm of heat conduction and mass diffusion (HHM), while
not requiring any additional information or parameters. Moreover, the HHC even
improves the recommendation accuracy on cold objects, which relates to the
so-called cold-start problem, by effectively relieving the recommendation bias
on objects with different levels of popularity.
|
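For background, the sketch below computes standard heat-conduction scores on a
toy user-object bipartite matrix; the BHC and HHC variants described above
only change how the object and source degrees enter the transfer matrix, which
is indicated in a comment. The rating matrix is made up.

```python
import numpy as np

# Toy user-object bipartite adjacency matrix A (rows: users, columns: objects).
A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]], dtype=float)

k_user = A.sum(axis=1)            # user degrees
k_obj = A.sum(axis=0)             # object degrees

# Standard heat-conduction transfer matrix:
# W[a, b] = (1 / k_obj[a]) * sum_l A[l, a] * A[l, b] / k_user[l]
W = (A.T @ (A / k_user[:, None])) / k_obj[:, None]
# BHC biases the target-object degree k_obj[a] with an exponent, and HHC further
# weights by the source-object degree k_obj[b]; only these degree terms change.

def recommend(user, top_n=2):
    f = A[user]                   # initial resource: objects the user collected
    scores = W @ f
    scores[f > 0] = -np.inf       # do not recommend already-collected objects
    return np.argsort(-scores)[:top_n]

print(recommend(user=0))
```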
1305.7454 | Uwe Aickelin | Jan Feyereisl, Uwe Aickelin | Privileged Information for Data Clustering | Information Sciences 194, 4-23, 2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many machine learning algorithms assume that all input samples are
independently and identically distributed from some common distribution on
either the input space X, in the case of unsupervised learning, or the input
and output space X x Y in the case of supervised and semi-supervised learning.
In the last number of years the relaxation of this assumption has been explored
and the importance of incorporation of additional information within machine
learning algorithms became more apparent. Traditionally such fusion of
information was the domain of semi-supervised learning. More recently the
inclusion of knowledge from separate hypothetical spaces has been proposed by
Vapnik as part of the supervised setting. In this work we are interested in
exploring Vapnik's idea of master-class learning and the associated learning
using privileged information, however within the unsupervised setting. Adoption
of the advanced supervised learning paradigm for the unsupervised setting
instigates investigation into the difference between privileged and technical
data. By means of our proposed aRi-MAX method stability of the KMeans algorithm
is improved and identification of the best clustering solution is achieved on
an artificial dataset. Subsequently an information theoretic dot product based
algorithm called P-Dot is proposed. This method has the ability to utilize a
wide variety of clustering techniques, individually or in combination, while
fusing privileged and technical data for improved clustering. Application of
the P-Dot method to the task of digit recognition confirms our findings in a
real-world scenario.
| [
{
"version": "v1",
"created": "Fri, 31 May 2013 15:28:44 GMT"
}
] | 2013-06-03T00:00:00 | [
[
"Feyereisl",
"Jan",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Privileged Information for Data Clustering
ABSTRACT: Many machine learning algorithms assume that all input samples are
independently and identically distributed from some common distribution on
either the input space X, in the case of unsupervised learning, or the input
and output space X x Y in the case of supervised and semi-supervised learning.
In the last number of years the relaxation of this assumption has been explored
and the importance of incorporation of additional information within machine
learning algorithms became more apparent. Traditionally such fusion of
information was the domain of semi-supervised learning. More recently the
inclusion of knowledge from separate hypothetical spaces has been proposed by
Vapnik as part of the supervised setting. In this work we are interested in
exploring Vapnik's idea of master-class learning and the associated learning
using privileged information, however within the unsupervised setting. Adoption
of the advanced supervised learning paradigm for the unsupervised setting
instigates investigation into the difference between privileged and technical
data. By means of our proposed aRi-MAX method stability of the KMeans algorithm
is improved and identification of the best clustering solution is achieved on
an artificial dataset. Subsequently an information theoretic dot product based
algorithm called P-Dot is proposed. This method has the ability to utilize a
wide variety of clustering techniques, individually or in combination, while
fusing privileged and technical data for improved clustering. Application of
the P-Dot method to the task of digit recognition confirms our findings in a
real-world scenario.
|
1305.7465 | Uwe Aickelin | Yihui Liu, Uwe Aickelin, Jan Feyereisl, Lindy G. Durrant | Wavelet feature extraction and genetic algorithm for biomarker detection
in colorectal cancer data | null | Knowledge-Based Systems 37, 502-514, 2013 | null | null | cs.NE cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biomarkers which predict patient's survival can play an important role in
medical diagnosis and treatment. How to select the significant biomarkers from
hundreds of protein markers is a key step in survival analysis. In this paper a
novel method is proposed to detect the prognostic biomarkers of survival in
colorectal cancer patients using wavelet analysis, genetic algorithm, and Bayes
classifier. One dimensional discrete wavelet transform (DWT) is normally used
to reduce the dimensionality of biomedical data. In this study one dimensional
continuous wavelet transform (CWT) was proposed to extract the features of
colorectal cancer data. One-dimensional CWT cannot reduce the dimensionality
of the data, but it captures features missed by DWT and is therefore a
complementary part of DWT. A genetic algorithm was applied to the extracted
wavelet coefficients to select the optimized features, using a Bayes
classifier to build its fitness function. The corresponding protein markers
were located based on the positions of the optimized features. Kaplan-Meier
curves and a Cox regression model were used to evaluate the performance of the
selected biomarkers. Experiments were conducted on a colorectal cancer dataset
and several significant biomarkers were detected. A new protein biomarker,
CD46, was found to be significantly associated with survival time.
| [
{
"version": "v1",
"created": "Fri, 31 May 2013 15:53:08 GMT"
}
] | 2013-06-03T00:00:00 | [
[
"Liu",
"Yihui",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Feyereisl",
"Jan",
""
],
[
"Durrant",
"Lindy G.",
""
]
] | TITLE: Wavelet feature extraction and genetic algorithm for biomarker detection
in colorectal cancer data
ABSTRACT: Biomarkers which predict patient's survival can play an important role in
medical diagnosis and treatment. How to select the significant biomarkers from
hundreds of protein markers is a key step in survival analysis. In this paper a
novel method is proposed to detect the prognostic biomarkers of survival in
colorectal cancer patients using wavelet analysis, genetic algorithm, and Bayes
classifier. One dimensional discrete wavelet transform (DWT) is normally used
to reduce the dimensionality of biomedical data. In this study one dimensional
continuous wavelet transform (CWT) was proposed to extract the features of
colorectal cancer data. One-dimensional CWT cannot reduce the dimensionality
of the data, but it captures features missed by DWT and is therefore a
complementary part of DWT. A genetic algorithm was applied to the extracted
wavelet coefficients to select the optimized features, using a Bayes
classifier to build its fitness function. The corresponding protein markers
were located based on the positions of the optimized features. Kaplan-Meier
curves and a Cox regression model were used to evaluate the performance of the
selected biomarkers. Experiments were conducted on a colorectal cancer dataset
and several significant biomarkers were detected. A new protein biomarker,
CD46, was found to be significantly associated with survival time.
|
1210.6844 | Qing-Bin Lu | Qing-Bin Lu | Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated
Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change | 24 pages, 12 figures; an updated version | Int. J. Mod. Phys. B Vol. 27 (2013) 1350073 (38 pages) | 10.1142/S0217979213500732 | null | physics.ao-ph physics.atm-clus physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study is focused on the effects of cosmic rays (solar activity) and
halogenated molecules (mainly chlorofluorocarbons-CFCs) on atmospheric O3
depletion and global climate change. Brief reviews are first given on the
cosmic-ray-driven electron-induced-reaction (CRE) theory for O3 depletion and
the warming theory of CFCs for climate change. Then natural and anthropogenic
contributions are examined in detail and separated well through in-depth
statistical analyses of comprehensive measured datasets. For O3 loss, new
statistical analyses of the CRE equation with observed data of total O3 and
stratospheric temperature give high linear correlation coefficients >=0.92.
After removal of the CR effect, a pronounced recovery by 20~25% of the
Antarctic O3 hole is found, while no recovery of O3 loss in mid-latitudes has
been observed. These results show both the dominance of the CRE mechanism and
the success of the Montreal Protocol. For global climate change, in-depth
analyses of observed data clearly show that the solar effect and human-made
halogenated gases played the dominant role in Earth climate change prior to and
after 1970, respectively. Remarkably, a statistical analysis gives a nearly
zero correlation coefficient (R=-0.05) between global surface temperature and
CO2 concentration in 1850-1970. In contrast, a nearly perfect linear
correlation with R=0.96-0.97 is found between global surface temperature and
total amount of stratospheric halogenated gases in 1970-2012. Further, a new
theoretical calculation on the greenhouse effect of halogenated gases shows
that they (mainly CFCs) could alone lead to the global surface temperature rise
of ~0.6 deg C in 1970-2002. These results provide solid evidence that recent
global warming was indeed caused by anthropogenic halogenated gases. Thus, a
slow reversal of global temperature to the 1950 value is predicted for the
coming 5~7 decades.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 16:32:15 GMT"
},
{
"version": "v2",
"created": "Mon, 13 May 2013 04:54:36 GMT"
}
] | 2013-05-31T00:00:00 | [
[
"Lu",
"Qing-Bin",
""
]
] | TITLE: Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated
Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change
ABSTRACT: This study is focused on the effects of cosmic rays (solar activity) and
halogenated molecules (mainly chlorofluorocarbons-CFCs) on atmospheric O3
depletion and global climate change. Brief reviews are first given on the
cosmic-ray-driven electron-induced-reaction (CRE) theory for O3 depletion and
the warming theory of CFCs for climate change. Then natural and anthropogenic
contributions are examined in detail and separated well through in-depth
statistical analyses of comprehensive measured datasets. For O3 loss, new
statistical analyses of the CRE equation with observed data of total O3 and
stratospheric temperature give high linear correlation coefficients >=0.92.
After removal of the CR effect, a pronounced recovery by 20~25% of the
Antarctic O3 hole is found, while no recovery of O3 loss in mid-latitudes has
been observed. These results show both the dominance of the CRE mechanism and
the success of the Montreal Protocol. For global climate change, in-depth
analyses of observed data clearly show that the solar effect and human-made
halogenated gases played the dominant role in Earth climate change prior to and
after 1970, respectively. Remarkably, a statistical analysis gives a nearly
zero correlation coefficient (R=-0.05) between global surface temperature and
CO2 concentration in 1850-1970. In contrast, a nearly perfect linear
correlation with R=0.96-0.97 is found between global surface temperature and
total amount of stratospheric halogenated gases in 1970-2012. Further, a new
theoretical calculation on the greenhouse effect of halogenated gases shows
that they (mainly CFCs) could alone lead to the global surface temperature rise
of ~0.6 deg C in 1970-2002. These results provide solid evidence that recent
global warming was indeed caused by anthropogenic halogenated gases. Thus, a
slow reversal of global temperature to the 1950 value is predicted for the
coming 5~7 decades.
|
1206.4229 | Torsten Ensslin | Torsten A. En{\ss}lin | Information field dynamics for simulation scheme construction | 19 pages, 3 color figures, accepted by Phys. Rev. E | null | 10.1103/PhysRevE.87.013308 | null | physics.comp-ph astro-ph.IM cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information field dynamics (IFD) is introduced here as a framework to derive
numerical schemes for the simulation of physical and other fields without
assuming a particular sub-grid structure as many schemes do. IFD constructs an
ensemble of non-parametric sub-grid field configurations from the combination
of the data in computer memory, representing constraints on possible field
configurations, and prior assumptions on the sub-grid field statistics. Each of
these field configurations can formally be evolved to a later moment since any
differential operator of the dynamics can act on fields living in continuous
space. However, these virtually evolved fields need again a representation by
data in computer memory. The maximum entropy principle of information theory
guides the construction of updated datasets via entropic matching, optimally
representing these field configurations at the later time. The field dynamics
thereby become represented by a finite set of evolution equations for the data
that can be solved numerically. The sub-grid dynamics is treated within an
auxiliary analytic consideration and the resulting scheme acts solely on the
data space. It should provide a more accurate description of the physical field
dynamics than simulation schemes constructed ad-hoc, due to the more rigorous
accounting of sub-grid physics and the space discretization process.
Assimilation of measurement data into an IFD simulation is conceptually
straightforward since measurement and simulation data can just be merged. The
IFD approach is illustrated using the example of a coarsely discretized
representation of a thermally excited classical Klein-Gordon field. This should
pave the way towards the construction of schemes for more complex systems like
turbulent hydrodynamics.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2012 15:01:52 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Oct 2012 10:21:19 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Dec 2012 13:24:30 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Dec 2012 12:29:19 GMT"
}
] | 2013-05-30T00:00:00 | [
[
"Enßlin",
"Torsten A.",
""
]
] | TITLE: Information field dynamics for simulation scheme construction
ABSTRACT: Information field dynamics (IFD) is introduced here as a framework to derive
numerical schemes for the simulation of physical and other fields without
assuming a particular sub-grid structure as many schemes do. IFD constructs an
ensemble of non-parametric sub-grid field configurations from the combination
of the data in computer memory, representing constraints on possible field
configurations, and prior assumptions on the sub-grid field statistics. Each of
these field configurations can formally be evolved to a later moment since any
differential operator of the dynamics can act on fields living in continuous
space. However, these virtually evolved fields need again a representation by
data in computer memory. The maximum entropy principle of information theory
guides the construction of updated datasets via entropic matching, optimally
representing these field configurations at the later time. The field dynamics
thereby become represented by a finite set of evolution equations for the data
that can be solved numerically. The sub-grid dynamics is treated within an
auxiliary analytic consideration and the resulting scheme acts solely on the
data space. It should provide a more accurate description of the physical field
dynamics than simulation schemes constructed ad-hoc, due to the more rigorous
accounting of sub-grid physics and the space discretization process.
Assimilation of measurement data into an IFD simulation is conceptually
straightforward since measurement and simulation data can just be merged. The
IFD approach is illustrated using the example of a coarsely discretized
representation of a thermally excited classical Klein-Gordon field. This should
pave the way towards the construction of schemes for more complex systems like
turbulent hydrodynamics.
|
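The IFD scheme described in the record above is derived through entropic matching and is not reproduced here. For contrast, the following is a minimal Python sketch of the kind of conventional ad-hoc discretization it aims to improve upon: a leapfrog finite-difference integrator for a one-dimensional classical Klein-Gordon field, the test case mentioned in the abstract. The grid size, mass parameter and initial condition are illustrative assumptions.

import numpy as np

# Conventional leapfrog discretization of the 1D Klein-Gordon equation
#   d^2 phi/dt^2 = d^2 phi/dx^2 - m^2 phi
# (an ad-hoc scheme of the kind IFD is meant to improve upon, not IFD itself).
N, L, m = 256, 2 * np.pi, 1.0          # grid points, domain length, field mass (assumed)
dx = L / N
dt = 0.5 * dx                          # respects the CFL condition dt <= dx
x = np.arange(N) * dx

def laplacian(f, dx):
    # Second spatial derivative with periodic boundaries.
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

phi_prev = np.sin(x)                   # illustrative initial condition, field at rest
phi = phi_prev.copy()                  # first-order start: phi(dt) ~ phi(0)

for step in range(1000):
    phi_next = 2 * phi - phi_prev + dt**2 * (laplacian(phi, dx) - m**2 * phi)
    phi_prev, phi = phi, phi_next

print("field amplitude after 1000 steps:", float(np.max(np.abs(phi))))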
1304.5862 | Forrest Briggs | Forrest Briggs, Xiaoli Z. Fern, Jed Irvine | Multi-Label Classifier Chains for Bird Sound | 6 pages, 1 figure, submission to ICML 2013 workshop on bioacoustics.
Note: this is a minor revision- the blind submission format has been replaced
with one that shows author names, and a few corrections have been made | null | null | null | cs.LG cs.SD stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bird sound data collected with unattended microphones for automatic surveys,
or mobile devices for citizen science, typically contain multiple
simultaneously vocalizing birds of different species. However, few works have
considered the multi-label structure in birdsong. We propose to use an ensemble
of classifier chains combined with a histogram-of-segments representation for
multi-label classification of birdsong. The proposed method is compared with
binary relevance and three multi-instance multi-label learning (MIML)
algorithms from prior work (which focus more on structure in the sound, and
less on structure in the label sets). Experiments are conducted on two
real-world birdsong datasets, and show that the proposed method usually
outperforms binary relevance (using the same features and base-classifier), and
is better in some cases and worse in others compared to the MIML algorithms.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2013 07:44:05 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2013 17:36:07 GMT"
}
] | 2013-05-30T00:00:00 | [
[
"Briggs",
"Forrest",
""
],
[
"Fern",
"Xiaoli Z.",
""
],
[
"Irvine",
"Jed",
""
]
] | TITLE: Multi-Label Classifier Chains for Bird Sound
ABSTRACT: Bird sound data collected with unattended microphones for automatic surveys,
or mobile devices for citizen science, typically contain multiple
simultaneously vocalizing birds of different species. However, few works have
considered the multi-label structure in birdsong. We propose to use an ensemble
of classifier chains combined with a histogram-of-segments representation for
multi-label classification of birdsong. The proposed method is compared with
binary relevance and three multi-instance multi-label learning (MIML)
algorithms from prior work (which focus more on structure in the sound, and
less on structure in the label sets). Experiments are conducted on two
real-world birdsong datasets, and show that the proposed method usually
outperforms binary relevance (using the same features and base-classifier), and
is better in some cases and worse in others compared to the MIML algorithms.
|
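The record above proposes an ensemble of classifier chains over a histogram-of-segments representation of birdsong. Below is a minimal sketch of the multi-label part only, using scikit-learn's ClassifierChain with random chain orders against a binary-relevance baseline; the synthetic data merely stands in for the birdsong features, and the base classifier and chain count are assumptions rather than the paper's settings.

import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for a histogram-of-segments representation of bird sound.
X, Y = make_multilabel_classification(n_samples=500, n_features=40, n_classes=10, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

base = RandomForestClassifier(n_estimators=50, random_state=0)

# Binary relevance baseline: one independent classifier per label.
br = MultiOutputClassifier(base).fit(X_tr, Y_tr)

# Ensemble of classifier chains with random label orders; average the chains' probabilities.
chains = [ClassifierChain(base, order="random", random_state=i).fit(X_tr, Y_tr) for i in range(10)]
Y_chain = np.mean([c.predict_proba(X_te) for c in chains], axis=0) >= 0.5

print("binary relevance micro-F1:", f1_score(Y_te, br.predict(X_te), average="micro"))
print("classifier chains micro-F1:", f1_score(Y_te, Y_chain.astype(int), average="micro"))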
0910.1800 | Cyril Furtlehner | Cyril Furtlehner, Michele Sebag and Xiangliang Zhang | Scaling Analysis of Affinity Propagation | 28 pages, 14 figures, Inria research report | Phys. Rev. E 81,066102 (2010) | 10.1103/PhysRevE.81.066102 | 7046 | cs.AI cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze and exploit some scaling properties of the Affinity Propagation
(AP) clustering algorithm proposed by Frey and Dueck (2007). First we observe
that a divide and conquer strategy, used on a large data set hierarchically
reduces the complexity ${\cal O}(N^2)$ to ${\cal O}(N^{(h+2)/(h+1)})$, for a
data-set of size $N$ and a depth $h$ of the hierarchical strategy. For a
data-set embedded in a $d$-dimensional space, we show that this is obtained
without notably damaging the precision except in dimension $d=2$. In fact, for
$d$ larger than 2 the relative loss in precision scales like
$N^{(2-d)/(h+1)d}$. Finally, under some conditions we observe that there is a
value $s^*$ of the penalty coefficient, a free parameter used to fix the number
of clusters, which separates a fragmentation phase (for $s<s^*$) from a
coalescent one (for $s>s^*$) of the underlying hidden cluster structure. At
this precise point holds a self-similarity property which can be exploited by
the hierarchical strategy to actually locate its position. From this
observation, a strategy based on AP can be defined to find out how many
clusters are present in a given dataset.
| [
{
"version": "v1",
"created": "Fri, 9 Oct 2009 17:43:35 GMT"
}
] | 2013-05-29T00:00:00 | [
[
"Furtlehner",
"Cyril",
""
],
[
"Sebag",
"Michele",
""
],
[
"Zhang",
"Xiangliang",
""
]
] | TITLE: Scaling Analysis of Affinity Propagation
ABSTRACT: We analyze and exploit some scaling properties of the Affinity Propagation
(AP) clustering algorithm proposed by Frey and Dueck (2007). First we observe
that a divide-and-conquer strategy, used on a large data set, hierarchically
reduces the complexity from ${\cal O}(N^2)$ to ${\cal O}(N^{(h+2)/(h+1)})$, for a
data-set of size $N$ and a depth $h$ of the hierarchical strategy. For a
data-set embedded in a $d$-dimensional space, we show that this is obtained
without notably damaging the precision except in dimension $d=2$. In fact, for
$d$ larger than 2 the relative loss in precision scales like
$N^{(2-d)/(h+1)d}$. Finally, under some conditions we observe that there is a
value $s^*$ of the penalty coefficient, a free parameter used to fix the number
of clusters, which separates a fragmentation phase (for $s<s^*$) from a
coalescent one (for $s>s^*$) of the underlying hidden cluster structure. At
this precise point holds a self-similarity property which can be exploited by
the hierarchical strategy to actually locate its position. From this
observation, a strategy based on AP can be defined to find out how many
clusters are present in a given dataset.
|
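A minimal sketch of the divide-and-conquer idea behind the hierarchical strategy discussed above, using scikit-learn's AffinityPropagation: cluster random chunks of the data separately, then re-cluster the union of the resulting exemplars. The data, the depth (h = 1) and the default preference (the penalty coefficient of the abstract) are illustrative assumptions, not the authors' configuration.

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=3000, centers=10, random_state=0)   # illustrative data

def exemplars(points, preference=None):
    # Run AP on a subset and return the exemplar points it selects.
    ap = AffinityPropagation(preference=preference, random_state=0).fit(points)
    return points[ap.cluster_centers_indices_]

# Depth h = 1 hierarchy: split into ~sqrt(N) chunks of ~sqrt(N) points, cluster
# each chunk, then cluster the union of the chunk exemplars. The cost drops from
# O(N^2) towards O(N^{3/2}), as in the abstract's formula with h = 1.
rng = np.random.default_rng(0)
order = rng.permutation(len(X))
chunks = np.array_split(X[order], int(np.sqrt(len(X))))

level1 = np.vstack([exemplars(chunk) for chunk in chunks])
final = exemplars(level1)
print("exemplars after the two-level hierarchy:", len(final))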
1305.6046 | Sidahmed Mokeddem | Sidahmed Mokeddem, Baghdad Atmani and Mostefa Mokaddem | Supervised Feature Selection for Diagnosis of Coronary Artery Disease
Based on Genetic Algorithm | First International Conference on Computational Science and
Engineering (CSE-2013), May 18 ~ 19, 2013, Dubai, UAE. Volume Editors:
Sundarapandian Vaidyanathan, Dhinaharan Nagamalai | null | 10.5121/csit.2013.3305 | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature Selection (FS) has become the focus of much research on decision
support system areas in which data sets with a tremendous number of variables
are analyzed. In this paper we present a new method for the diagnosis of
Coronary Artery Disease (CAD) founded on Genetic Algorithm (GA)-wrapped Naive
Bayes (NB)-based FS. Basically, the CAD dataset contains two classes defined
with 13 features. In the GA-NB algorithm, the GA generates in each iteration a
subset of attributes that is then evaluated using the NB classifier in the
second step of the selection procedure. The final set of attributes contains
the most relevant feature model that increases the accuracy. The algorithm in
this case produces 85.50% classification accuracy in the diagnosis of CAD. The
performance of the algorithm is then compared with that of Support Vector
Machine (SVM), MultiLayer Perceptron (MLP) and the C4.5 decision tree
algorithm, whose classification accuracies are 83.5%, 83.16% and 80.85%,
respectively. The GA-wrapped NB algorithm is also compared with other FS
algorithms. The obtained results show very promising outcomes for the diagnosis
of CAD.
| [
{
"version": "v1",
"created": "Sun, 26 May 2013 18:16:52 GMT"
}
] | 2013-05-28T00:00:00 | [
[
"Mokeddem",
"Sidahmed",
""
],
[
"Atmani",
"Baghdad",
""
],
[
"Mokaddem",
"Mostefa",
""
]
] | TITLE: Supervised Feature Selection for Diagnosis of Coronary Artery Disease
Based on Genetic Algorithm
ABSTRACT: Feature Selection (FS) has become the focus of much research on decision
support system areas in which data sets with a tremendous number of variables
are analyzed. In this paper we present a new method for the diagnosis of
Coronary Artery Disease (CAD) founded on Genetic Algorithm (GA)-wrapped Naive
Bayes (NB)-based FS. Basically, the CAD dataset contains two classes defined
with 13 features. In the GA-NB algorithm, the GA generates in each iteration a
subset of attributes that is then evaluated using the NB classifier in the
second step of the selection procedure. The final set of attributes contains
the most relevant feature model that increases the accuracy. The algorithm in
this case produces 85.50% classification accuracy in the diagnosis of CAD. The
performance of the algorithm is then compared with that of Support Vector
Machine (SVM), MultiLayer Perceptron (MLP) and the C4.5 decision tree
algorithm, whose classification accuracies are 83.5%, 83.16% and 80.85%,
respectively. The GA-wrapped NB algorithm is also compared with other FS
algorithms. The obtained results show very promising outcomes for the diagnosis
of CAD.
|
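A minimal sketch of a GA-wrapped Naive Bayes feature-selection loop of the general kind described above, with cross-validated accuracy as the wrapper fitness. The synthetic data, population size and GA operators are illustrative assumptions, not the paper's configuration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=13, n_informative=5, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    # Wrapper evaluation: CV accuracy of Naive Bayes on the selected features.
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

# Tiny GA over binary feature masks: truncation selection, uniform crossover,
# bit-flip mutation.
pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                    # keep the best half
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # uniform crossover
        child ^= rng.random(X.shape[1]) < 0.05                 # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "cv accuracy:", round(fitness(best), 3))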
1305.5824 | Slim Bouker | Slim Bouker, Rabie Saidi, Sadok Ben Yahia, Engelbert Mephu Nguifo | Towards a semantic and statistical selection of association rules | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by/3.0/ | The increasing growth of databases raises an urgent need for more accurate
methods to better understand the stored data. In this scope, association rules
were extensively used for the analysis and the comprehension of huge amounts of
data. However, the number of generated rules is too large to be efficiently
analyzed and explored in any further process. Association rules selection is a
classical topic to address this issue; yet, new innovative approaches are
required in order to provide help to decision makers. Hence, many
interestingness measures have been defined to statistically evaluate and filter
the association rules. However, these measures present two major problems. On
the one hand, they do not allow eliminating irrelevant rules; on the other
hand, their abundance leads to the heterogeneity of the evaluation results,
which leads to confusion in decision making. In this paper, we propose a
two-winged approach to select statistically interesting and semantically
incomparable rules. Our statistical selection helps discovering interesting
association rules without favoring or excluding any measure. The semantic
comparability helps to decide if the considered association rules are
semantically related, i.e. comparable. The outcomes of our experiments on real
datasets show promising
results in terms of reduction in the number of rules.
| [
{
"version": "v1",
"created": "Fri, 24 May 2013 18:46:34 GMT"
}
] | 2013-05-27T00:00:00 | [
[
"Bouker",
"Slim",
""
],
[
"Saidi",
"Rabie",
""
],
[
"Yahia",
"Sadok Ben",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
]
] | TITLE: Towards a semantic and statistical selection of association rules
ABSTRACT: The increasing growth of databases raises an urgent need for more accurate
methods to better understand the stored data. In this scope, association rules
were extensively used for the analysis and the comprehension of huge amounts of
data. However, the number of generated rules is too large to be efficiently
analyzed and explored in any further process. Association rules selection is a
classical topic to address this issue; yet, new innovative approaches are
required in order to provide help to decision makers. Hence, many
interestingness measures have been defined to statistically evaluate and filter
the association rules. However, these measures present two major problems. On
the one hand, they do not allow eliminating irrelevant rules; on the other
hand, their abundance leads to the heterogeneity of the evaluation results,
which leads to confusion in decision making. In this paper, we propose a
two-winged approach to select statistically interesting and semantically
incomparable rules. Our statistical selection helps discovering interesting
association rules without favoring or excluding any measure. The semantic
comparability helps to decide if the considered association rules are
semantically related, i.e. comparable. The outcomes of our experiments on real
datasets show promising
results in terms of reduction in the number of rules.
|
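The selection procedure above builds on interestingness measures; as a minimal illustration of what such measures compute, the sketch below derives support, confidence and lift for one candidate rule from a toy transaction list. The transactions and the rule are made up for the example; the paper's statistical and semantic selection steps are not reproduced.

# Toy transaction database and a candidate rule {bread, butter} -> {milk}.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"butter", "milk"},
    {"bread", "butter", "milk", "eggs"},
]
antecedent, consequent = {"bread", "butter"}, {"milk"}

def support(itemset):
    # Fraction of transactions containing the whole itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

supp_rule = support(antecedent | consequent)
confidence = supp_rule / support(antecedent)
lift = confidence / support(consequent)

print(f"support={supp_rule:.2f} confidence={confidence:.2f} lift={lift:.2f}")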
1305.5826 | Kian Hsiang Low | Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan
Tan, Patrick Jaillet | Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations | 29th Conference on Uncertainty in Artificial Intelligence (UAI 2013),
Extended version with proofs, 13 pages | null | null | null | stat.ML cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well to
large data nor perform real-time predictions due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to that of some centralized approximate GP regression
methods: The computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
| [
{
"version": "v1",
"created": "Fri, 24 May 2013 19:00:28 GMT"
}
] | 2013-05-27T00:00:00 | [
[
"Chen",
"Jie",
""
],
[
"Cao",
"Nannan",
""
],
[
"Low",
"Kian Hsiang",
""
],
[
"Ouyang",
"Ruofei",
""
],
[
"Tan",
"Colin Keng-Yan",
""
],
[
"Jaillet",
"Patrick",
""
]
] | TITLE: Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations
ABSTRACT: Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well to
large data nor perform real-time predictions due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to that of some centralized approximate GP regression
methods: The computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
|
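The paper's parallel algorithms and their distributed implementation are not reproduced here; the sketch below only illustrates, on a single machine, the kind of low-rank (subset-of-regressors / Nyström-style) approximation such methods build on, using a random set of inducing points and an RBF kernel. The data, kernel hyperparameters and inducing-set size are assumptions.

import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)       # noisy toy targets
X_test = np.linspace(-3, 3, 5)[:, None]

m, noise = 50, 0.1                                          # inducing points, noise std (assumed)
Z = X[rng.choice(len(X), m, replace=False)]                 # random inducing subset

K_mm = rbf(Z, Z)
K_nm = rbf(X, Z)
# Subset-of-regressors predictive mean:
#   mu(x*) = k(x*, Z) (sigma^2 K_mm + K_nm^T K_nm)^{-1} K_nm^T y
A = noise**2 * K_mm + K_nm.T @ K_nm
mu = rbf(X_test, Z) @ np.linalg.solve(A, K_nm.T @ y)
print("approximate GP mean at test points:", np.round(mu, 2))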
1305.5267 | Bluma Gelley | Bluma S. Gelley | Investigating Deletion in Wikipedia | null | null | null | null | cs.CY cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several hundred Wikipedia articles are deleted every day because they lack
sufficient significance to be included in the encyclopedia. We collect a
dataset of deleted articles and analyze them to determine whether or not the
deletions were justified. We find evidence to support the hypothesis that many
deletions are carried out correctly, but also find that a large number were
done very quickly. Based on our conclusions, we make some recommendations to
reduce the number of non-significant pages and simultaneously improve retention
of new editors.
| [
{
"version": "v1",
"created": "Wed, 22 May 2013 20:54:18 GMT"
}
] | 2013-05-24T00:00:00 | [
[
"Gelley",
"Bluma S.",
""
]
] | TITLE: Investigating Deletion in Wikipedia
ABSTRACT: Several hundred Wikipedia articles are deleted every day because they lack
sufficient significance to be included in the encyclopedia. We collect a
dataset of deleted articles and analyze them to determine whether or not the
deletions were justified. We find evidence to support the hypothesis that many
deletions are carried out correctly, but also find that a large number were
done very quickly. Based on our conclusions, we make some recommendations to
reduce the number of non-significant pages and simultaneously improve retention
of new editors.
|
1305.5306 | Yin Zheng | Yin Zheng, Yu-Jin Zhang, Hugo Larochelle | A Supervised Neural Autoregressive Topic Model for Simultaneous Image
Classification and Annotation | 13 pages, 5 figures | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to perform scene recognition and annotation. Recently, a
new type of topic model called the Document Neural Autoregressive Distribution
Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance
for document modeling. In this work, we show how to successfully apply and
extend this model to the context of visual scene modeling. Specifically, we
propose SupDocNADE, a supervised extension of DocNADE, that increases the
discriminative power of the hidden topic features by incorporating label
information into the training objective of the model. We also describe how to
leverage information about the spatial position of the visual words and how to
embed additional image annotations, so as to simultaneously perform image
classification and annotation. We test our model on the Scene15, LabelMe and
UIUC-Sports datasets and show that it compares favorably to other topic models
such as the supervised variant of LDA.
| [
{
"version": "v1",
"created": "Thu, 23 May 2013 03:35:31 GMT"
}
] | 2013-05-24T00:00:00 | [
[
"Zheng",
"Yin",
""
],
[
"Zhang",
"Yu-Jin",
""
],
[
"Larochelle",
"Hugo",
""
]
] | TITLE: A Supervised Neural Autoregressive Topic Model for Simultaneous Image
Classification and Annotation
ABSTRACT: Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to perform scene recognition and annotation. Recently, a
new type of topic model called the Document Neural Autoregressive Distribution
Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance
for document modeling. In this work, we show how to successfully apply and
extend this model to the context of visual scene modeling. Specifically, we
propose SupDocNADE, a supervised extension of DocNADE, that increases the
discriminative power of the hidden topic features by incorporating label
information into the training objective of the model. We also describe how to
leverage information about the spatial position of the visual words and how to
embed additional image annotations, so as to simultaneously perform image
classification and annotation. We test our model on the Scene15, LabelMe and
UIUC-Sports datasets and show that it compares favorably to other topic models
such as the supervised variant of LDA.
|
1304.1209 | Ben Fulcher | Ben D. Fulcher, Max A. Little, Nick S. Jones | Highly comparative time-series analysis: The empirical structure of time
series and their methods | null | J. R. Soc. Interface vol. 10 no. 83 20130048 (2013) | 10.1098/rsif.2013.0048 | null | physics.data-an cs.CV physics.bio-ph q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The process of collecting and organizing sets of observations represents a
common theme throughout the history of science. However, despite the ubiquity
of scientists measuring, recording, and analyzing the dynamics of different
processes, an extensive organization of scientific time-series data and
analysis methods has never been performed. Addressing this, annotated
collections of over 35 000 real-world and model-generated time series and over
9000 time-series analysis algorithms are analyzed in this work. We introduce
reduced representations of both time series, in terms of their properties
measured by diverse scientific methods, and of time-series analysis methods, in
terms of their behaviour on empirical time series, and use them to organize
these interdisciplinary resources. This new approach to comparing across
diverse scientific data and methods allows us to organize time-series datasets
automatically according to their properties, retrieve alternatives to
particular analysis methods developed in other scientific disciplines, and
automate the selection of useful methods for time-series classification and
regression tasks. The broad scientific utility of these tools is demonstrated
on datasets of electroencephalograms, self-affine time series, heart beat
intervals, speech signals, and others, in each case contributing novel analysis
techniques to the existing literature. Highly comparative techniques that
compare across an interdisciplinary literature can thus be used to guide more
focused research in time-series analysis for applications across the scientific
disciplines.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2013 23:24:02 GMT"
}
] | 2013-05-23T00:00:00 | [
[
"Fulcher",
"Ben D.",
""
],
[
"Little",
"Max A.",
""
],
[
"Jones",
"Nick S.",
""
]
] | TITLE: Highly comparative time-series analysis: The empirical structure of time
series and their methods
ABSTRACT: The process of collecting and organizing sets of observations represents a
common theme throughout the history of science. However, despite the ubiquity
of scientists measuring, recording, and analyzing the dynamics of different
processes, an extensive organization of scientific time-series data and
analysis methods has never been performed. Addressing this, annotated
collections of over 35 000 real-world and model-generated time series and over
9000 time-series analysis algorithms are analyzed in this work. We introduce
reduced representations of both time series, in terms of their properties
measured by diverse scientific methods, and of time-series analysis methods, in
terms of their behaviour on empirical time series, and use them to organize
these interdisciplinary resources. This new approach to comparing across
diverse scientific data and methods allows us to organize time-series datasets
automatically according to their properties, retrieve alternatives to
particular analysis methods developed in other scientific disciplines, and
automate the selection of useful methods for time-series classification and
regression tasks. The broad scientific utility of these tools is demonstrated
on datasets of electroencephalograms, self-affine time series, heart beat
intervals, speech signals, and others, in each case contributing novel analysis
techniques to the existing literature. Highly comparative techniques that
compare across an interdisciplinary literature can thus be used to guide more
focused research in time-series analysis for applications across the scientific
disciplines.
|
1305.5189 | Neven Caplar | Neven Caplar, Mirko Suznjevic and Maja Matijasevic | Analysis of player's in-game performance vs rating: Case study of Heroes
of Newerth | 8 pages, 14 figures, to appear in proceedings of "Foundation of
Digital Games 2013" conference (14-17 May 2013) | null | null | null | physics.soc-ph cs.SI physics.data-an physics.pop-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We evaluate the rating system of "Heroes of Newerth" (HoN), a multiplayer
online action role-playing game, by using statistical analysis and comparison
of a player's in-game performance metrics and the player rating assigned by the
rating system. The datasets for the analysis have been extracted from the web
sites that record the players' ratings and a number of empirical metrics.
Results suggest that the HoN's Matchmaking rating algorithm, while generally
capturing the skill level of the player well, also has weaknesses, which have
been exploited by players to achieve a higher placement on the ranking ladder
than deserved by actual skill. In addition, we also illustrate the effects of
the choice of the business model (from pay-to-play to free-to-play) on player
population.
| [
{
"version": "v1",
"created": "Wed, 22 May 2013 16:45:47 GMT"
}
] | 2013-05-23T00:00:00 | [
[
"Caplar",
"Neven",
""
],
[
"Suznjevic",
"Mirko",
""
],
[
"Matijasevic",
"Maja",
""
]
] | TITLE: Analysis of player's in-game performance vs rating: Case study of Heroes
of Newerth
ABSTRACT: We evaluate the rating system of "Heroes of Newerth" (HoN), a multiplayer
online action role-playing game, by using statistical analysis and comparison
of a player's in-game performance metrics and the player rating assigned by the
rating system. The datasets for the analysis have been extracted from the web
sites that record the players' ratings and a number of empirical metrics.
Results suggest that the HoN's Matchmaking rating algorithm, while generally
capturing the skill level of the player well, also has weaknesses, which have
been exploited by players to achieve a higher placement on the ranking ladder
than deserved by actual skill. In addition, we also illustrate the effects of
the choice of the business model (from pay-to-play to free-to-play) on player
population.
|
1209.5601 | Fan Min | Fan Min, Qinghua Hu, William Zhu | Feature selection with test cost constraint | 23 pages | null | 10.1016/j.ijar.2013.04.003 | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection is an important preprocessing step in machine learning and
data mining. In real-world applications, costs, including money, time and other
resources, are required to acquire the features. In some cases, there is a test
cost constraint due to limited resources. We shall deliberately select an
informative and cheap feature subset for classification. This paper proposes
the feature selection with test cost constraint problem for this issue. The new
problem has a simple form when described as a constraint satisfaction problem
(CSP). Backtracking is a general algorithm for CSP, and it is efficient in
solving the new problem on medium-sized data. As the backtracking algorithm is
not scalable to large datasets, a heuristic algorithm is also developed.
Experimental results show that the heuristic algorithm can find the optimal
solution in most cases. We also redefine some existing feature selection
problems in rough sets, especially in decision-theoretic rough sets, from the
viewpoint of CSP. These new definitions provide insight to some new research
directions.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 13:21:40 GMT"
}
] | 2013-05-22T00:00:00 | [
[
"Min",
"Fan",
""
],
[
"Hu",
"Qinghua",
""
],
[
"Zhu",
"William",
""
]
] | TITLE: Feature selection with test cost constraint
ABSTRACT: Feature selection is an important preprocessing step in machine learning and
data mining. In real-world applications, costs, including money, time and other
resources, are required to acquire the features. In some cases, there is a test
cost constraint due to limited resources. We shall deliberately select an
informative and cheap feature subset for classification. This paper proposes
the feature selection with test cost constraint problem for this issue. The new
problem has a simple form when described as a constraint satisfaction problem
(CSP). Backtracking is a general algorithm for CSP, and it is efficient in
solving the new problem on medium-sized data. As the backtracking algorithm is
not scalable to large datasets, a heuristic algorithm is also developed.
Experimental results show that the heuristic algorithm can find the optimal
solution in most cases. We also redefine some existing feature selection
problems in rough sets, especially in decision-theoretic rough sets, from the
viewpoint of CSP. These new definitions provide insight to some new research
directions.
|
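A minimal sketch of the backtracking search described above: enumerate feature subsets whose total test cost stays within a budget and keep the subset with the best cross-validated accuracy. The per-feature costs, the budget, the dataset and the base classifier are illustrative assumptions, and the rough-set formulation of the paper is not reproduced.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X = X[:, :8]                                        # keep the example small
rng = np.random.default_rng(0)
costs = rng.integers(1, 10, size=X.shape[1])        # assumed per-feature test costs
budget = 15                                         # assumed total test-cost constraint

best = {"score": -1.0, "subset": ()}

def evaluate(subset):
    return cross_val_score(DecisionTreeClassifier(random_state=0), X[:, list(subset)], y, cv=5).mean()

def backtrack(i, subset, cost):
    # Depth-first enumeration of cost-feasible subsets of features i..d-1.
    if subset:
        score = evaluate(subset)
        if score > best["score"]:
            best.update(score=score, subset=tuple(subset))
    for j in range(i, X.shape[1]):
        if cost + costs[j] <= budget:               # prune branches that exceed the budget
            backtrack(j + 1, subset + [j], cost + costs[j])

backtrack(0, [], 0)
print("best cost-feasible subset:", best["subset"], "cv accuracy:", round(best["score"], 3))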
1302.7278 | Gregory Kucherov | Kamil Salikhov, Gustavo Sacomoto, and Gregory Kucherov | Using cascading Bloom filters to improve the memory usage for de Brujin
graphs | 12 pages, submitted | null | null | null | cs.DS | http://creativecommons.org/licenses/by/3.0/ | De Bruijn graphs are widely used in bioinformatics for processing
next-generation sequencing data. Due to a very large size of NGS datasets, it
is essential to represent de Bruijn graphs compactly, and several approaches to
this problem have been proposed recently. In this work, we show how to reduce
the memory required by the algorithm of [3] that represents de Bruijn graphs
using Bloom filters. Our method requires 30% to 40% less memory with respect to
the method of [3], with insignificant impact to construction time. At the same
time, our experiments showed a better query time compared to [3]. This is, to
our knowledge, the best practical representation for de Bruijn graphs.
| [
{
"version": "v1",
"created": "Thu, 28 Feb 2013 18:35:21 GMT"
},
{
"version": "v2",
"created": "Tue, 21 May 2013 15:25:19 GMT"
}
] | 2013-05-22T00:00:00 | [
[
"Salikhov",
"Kamil",
""
],
[
"Sacomoto",
"Gustavo",
""
],
[
"Kucherov",
"Gregory",
""
]
] | TITLE: Using cascading Bloom filters to improve the memory usage for de Brujin
graphs
ABSTRACT: De Bruijn graphs are widely used in bioinformatics for processing
next-generation sequencing data. Due to a very large size of NGS datasets, it
is essential to represent de Bruijn graphs compactly, and several approaches to
this problem have been proposed recently. In this work, we show how to reduce
the memory required by the algorithm of [3] that represents de Bruijn graphs
using Bloom filters. Our method requires 30% to 40% less memory with respect to
the method of [3], with insignificant impact to construction time. At the same
time, our experiments showed a better query time compared to [3]. This is, to
our knowledge, the best practical representation for de Bruijn graphs.
|
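The cascading structure and memory accounting of the paper are not reproduced here; the sketch below only illustrates the underlying idea of storing a de Bruijn graph's k-mers in a Bloom filter, so that neighbor queries test membership of the four possible one-character extensions. The filter size, hash count and toy reads are assumptions.

import hashlib

class BloomFilter:
    # Plain (non-cascading) Bloom filter over strings.
    def __init__(self, size_bits=1 << 16, n_hashes=4):
        self.size, self.n_hashes, self.bits = size_bits, n_hashes, bytearray(size_bits // 8)
    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

k = 5
reads = ["ACGTACGTGACG", "CGTGACGTT"]                       # toy reads
kmers = {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}

bf = BloomFilter()
for kmer in kmers:
    bf.add(kmer)

# A node's outgoing edges in the de Bruijn graph are the k-mers obtained by
# shifting in one of the four nucleotides; membership is tested in the filter
# (with some false positives, which the paper's cascading filters remove).
node = "ACGTA"
successors = [node[1:] + c for c in "ACGT" if node[1:] + c in bf]
print("successors of", node, "->", successors)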
1305.4820 | Nader Jelassi | Mohamed Nader Jelassi and Sadok Ben Yahia and Engelbert Mephu Nguifo | Nouvelle approche de recommandation personnalisee dans les folksonomies
basee sur le profil des utilisateurs | 7 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In folksonomies, users use to share objects (movies, books, bookmarks, etc.)
by annotating them with a set of tags of their own choice. With the rise of the
Web 2.0 age, users become the core of the system since they are both the
contributors and the creators of the information. Yet, each user has its own
profile and its own ideas making thereby the strength as well as the weakness
of folksonomies. Indeed, it would be helpful to take account of users' profile
when suggesting a list of tags and resources or even a list of friends, in
order to make a personal recommandation, instead of suggesting the more used
tags and resources in the folksonomy. In this paper, we consider users' profile
as a new dimension of a folksonomy classically composed of three dimensions
<users, tags, ressources> and we propose an approach to group users with
equivalent profiles and equivalent interests as quadratic concepts. Then, we
use such structures to propose our personalized recommendation system of users,
tags and resources according to each user's profile. Carried out experiments on
two real-world datasets, i.e., MovieLens and BookCrossing highlight encouraging
results in terms of precision as well as a good social evaluation.
| [
{
"version": "v1",
"created": "Tue, 21 May 2013 13:59:51 GMT"
}
] | 2013-05-22T00:00:00 | [
[
"Jelassi",
"Mohamed Nader",
""
],
[
"Yahia",
"Sadok Ben",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
]
] | TITLE: Nouvelle approche de recommandation personnalisee dans les folksonomies
basee sur le profil des utilisateurs
ABSTRACT: In folksonomies, users share objects (movies, books, bookmarks, etc.) by
annotating them with a set of tags of their own choice. With the rise of the
Web 2.0 age, users have become the core of the system, since they are both the
contributors and the creators of the information. Yet, each user has their own
profile and their own ideas, which constitute both the strength and the
weakness of folksonomies. Indeed, it would be helpful to take account of users'
profiles when suggesting a list of tags and resources, or even a list of
friends, in order to make a personalized recommendation, instead of suggesting
the most used tags and resources in the folksonomy. In this paper, we consider
users' profiles as a new dimension of a folksonomy classically composed of the
three dimensions <users, tags, resources>, and we propose an approach to group
users with equivalent profiles and equivalent interests as quadratic concepts.
Then, we use such structures to propose our personalized recommendation system
of users, tags and resources according to each user's profile. Experiments
carried out on two real-world datasets, i.e., MovieLens and BookCrossing,
highlight encouraging results in terms of precision as well as a good social
evaluation.
|
1208.0787 | Shang Shang | Shang Shang, Sanjeev R. Kulkarni, Paul W. Cuff and Pan Hui | A Random Walk Based Model Incorporating Social Information for
Recommendations | 2012 IEEE Machine Learning for Signal Processing Workshop (MLSP), 6
pages | null | null | null | cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Collaborative filtering (CF) is one of the most popular approaches to build a
recommendation system. In this paper, we propose a hybrid collaborative
filtering model based on a Markovian random walk to address the data sparsity
and cold start problems in recommendation systems. More precisely, we construct
a directed graph whose nodes consist of items and users, together with item
content, user profile and social network information. We incorporate user's
ratings into edge settings in the graph model. The model provides personalized
recommendations and predictions to individuals and groups. The proposed
algorithms are evaluated on MovieLens and Epinions datasets. Experimental
results show that the proposed methods perform well compared with other
graph-based methods, especially in the cold start case.
| [
{
"version": "v1",
"created": "Fri, 3 Aug 2012 16:15:10 GMT"
},
{
"version": "v2",
"created": "Fri, 17 May 2013 21:57:26 GMT"
}
] | 2013-05-21T00:00:00 | [
[
"Shang",
"Shang",
""
],
[
"Kulkarni",
"Sanjeev R.",
""
],
[
"Cuff",
"Paul W.",
""
],
[
"Hui",
"Pan",
""
]
] | TITLE: A Random Walk Based Model Incorporating Social Information for
Recommendations
ABSTRACT: Collaborative filtering (CF) is one of the most popular approaches to build a
recommendation system. In this paper, we propose a hybrid collaborative
filtering model based on a Markovian random walk to address the data sparsity
and cold start problems in recommendation systems. More precisely, we construct
a directed graph whose nodes consist of items and users, together with item
content, user profile and social network information. We incorporate user's
ratings into edge settings in the graph model. The model provides personalized
recommendations and predictions to individuals and groups. The proposed
algorithms are evaluated on MovieLens and Epinions datasets. Experimental
results show that the proposed methods perform well compared with other
graph-based methods, especially in the cold start case.
|
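A minimal sketch of random-walk-with-restart scoring on a user-item graph, the general mechanism behind graph models of this kind; the toy ratings, the restart probability and the omission of item-content and social-network nodes are simplifications relative to the model described above.

import numpy as np

# Toy bipartite rating graph: users u0..u2, items i0..i3, edge weight = rating.
ratings = {("u0", "i0"): 5, ("u0", "i1"): 3, ("u1", "i1"): 4,
           ("u1", "i2"): 5, ("u2", "i2"): 2, ("u2", "i3"): 4}

nodes = sorted({n for edge in ratings for n in edge})
idx = {n: j for j, n in enumerate(nodes)}
W = np.zeros((len(nodes), len(nodes)))
for (u, i), r in ratings.items():
    W[idx[u], idx[i]] = W[idx[i], idx[u]] = r                # undirected, rating-weighted

P = W / W.sum(axis=1, keepdims=True)                         # row-stochastic transition matrix

def personalized_scores(user, alpha=0.15, iters=100):
    # Random walk with restart at `user`; the stationary scores rank the items.
    restart = np.zeros(len(nodes)); restart[idx[user]] = 1.0
    pi = restart.copy()
    for _ in range(iters):
        pi = (1 - alpha) * pi @ P + alpha * restart
    return pi

scores = personalized_scores("u0")
unseen = [n for n in nodes if n.startswith("i") and ("u0", n) not in ratings]
print("recommended for u0:", max(unseen, key=lambda n: scores[idx[n]]))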
1211.7312 | Francis Casson | F. J. Casson, R. M. McDermott, C. Angioni, Y. Camenen, R. Dux, E.
Fable, R. Fischer, B. Geiger, P. Manas, L. Menchero, G. Tardini, and ASDEX
Upgrade team | Validation of gyrokinetic modelling of light impurity transport
including rotation in ASDEX Upgrade | 19 pages, 11 figures, accepted in Nuclear Fusion | Nucl. Fusion 53 063026 (2013) | 10.1088/0029-5515/53/6/063026 | null | physics.plasm-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Upgraded spectroscopic hardware and an improved impurity concentration
calculation allow accurate determination of boron density in the ASDEX Upgrade
tokamak. A database of boron measurements is compared to quasilinear and
nonlinear gyrokinetic simulations including Coriolis and centrifugal rotational
effects over a range of H-mode plasma regimes. The peaking of the measured
boron profiles shows a strong anti-correlation with the plasma rotation
gradient, via a relationship explained and reproduced by the theory. It is
demonstrated that the rotodiffusive impurity flux driven by the rotation
gradient is required for the modelling to reproduce the hollow boron profiles
at higher rotation gradients. The nonlinear simulations validate the
quasilinear approach, and, with the addition of perpendicular flow shear,
demonstrate that each symmetry breaking mechanism that causes momentum
transport also couples to rotodiffusion. At lower rotation gradients, the
parallel compressive convection is required to match the most peaked boron
profiles. The sensitivities of both datasets to possible errors are
investigated, and quantitative agreement is found within the estimated
uncertainties. The approach used can be considered a template for mitigating
uncertainty in quantitative comparisons between simulation and experiment.
| [
{
"version": "v1",
"created": "Fri, 30 Nov 2012 17:01:21 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Apr 2013 12:53:48 GMT"
}
] | 2013-05-21T00:00:00 | [
[
"Casson",
"F. J.",
""
],
[
"McDermott",
"R. M.",
""
],
[
"Angioni",
"C.",
""
],
[
"Camenen",
"Y.",
""
],
[
"Dux",
"R.",
""
],
[
"Fable",
"E.",
""
],
[
"Fischer",
"R.",
""
],
[
"Geiger",
"B.",
""
],
[
"Manas",
"P.",
""
],
[
"Menchero",
"L.",
""
],
[
"Tardini",
"G.",
""
],
[
"team",
"ASDEX Upgrade",
""
]
] | TITLE: Validation of gyrokinetic modelling of light impurity transport
including rotation in ASDEX Upgrade
ABSTRACT: Upgraded spectroscopic hardware and an improved impurity concentration
calculation allow accurate determination of boron density in the ASDEX Upgrade
tokamak. A database of boron measurements is compared to quasilinear and
nonlinear gyrokinetic simulations including Coriolis and centrifugal rotational
effects over a range of H-mode plasma regimes. The peaking of the measured
boron profiles shows a strong anti-correlation with the plasma rotation
gradient, via a relationship explained and reproduced by the theory. It is
demonstrated that the rotodiffusive impurity flux driven by the rotation
gradient is required for the modelling to reproduce the hollow boron profiles
at higher rotation gradients. The nonlinear simulations validate the
quasilinear approach, and, with the addition of perpendicular flow shear,
demonstrate that each symmetry breaking mechanism that causes momentum
transport also couples to rotodiffusion. At lower rotation gradients, the
parallel compressive convection is required to match the most peaked boron
profiles. The sensitivities of both datasets to possible errors are
investigated, and quantitative agreement is found within the estimated
uncertainties. The approach used can be considered a template for mitigating
uncertainty in quantitative comparisons between simulation and experiment.
|
1305.4345 | Alon Schclar | Alon Schclar and Lior Rokach and Amir Amit | Ensembles of Classifiers based on Dimensionality Reduction | 31 pages, 4 figures, 4 tables, Submitted to Pattern Analysis and
Applications | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach for the construction of ensemble classifiers
based on dimensionality reduction. Dimensionality reduction methods represent
datasets using a small number of attributes while preserving the information
conveyed by the original dataset. The ensemble members are trained based on
dimension-reduced versions of the training set. These versions are obtained by
applying dimensionality reduction to the original training set using different
values of the input parameters. This construction meets both the diversity and
accuracy criteria which are required to construct an ensemble classifier where
the former criterion is obtained by the various input parameter values and the
latter is achieved due to the decorrelation and noise reduction properties of
dimensionality reduction. In order to classify a test sample, it is first
embedded into the dimension reduced space of each individual classifier by
using an out-of-sample extension algorithm. Each classifier is then applied to
the embedded sample and the classification is obtained via a voting scheme. We
present three variations of the proposed approach based on the Random
Projections, the Diffusion Maps and the Random Subspaces dimensionality
reduction algorithms. We also present a multi-strategy ensemble which combines
AdaBoost and Diffusion Maps. A comparison is made with the Bagging, AdaBoost,
Rotation Forest ensemble classifiers and also with the base classifier which
does not incorporate dimensionality reduction. Our experiments used seventeen
benchmark datasets from the UCI repository. The results obtained by the
proposed algorithms were superior in many cases to other algorithms.
| [
{
"version": "v1",
"created": "Sun, 19 May 2013 10:24:06 GMT"
}
] | 2013-05-21T00:00:00 | [
[
"Schclar",
"Alon",
""
],
[
"Rokach",
"Lior",
""
],
[
"Amit",
"Amir",
""
]
] | TITLE: Ensembles of Classifiers based on Dimensionality Reduction
ABSTRACT: We present a novel approach for the construction of ensemble classifiers
based on dimensionality reduction. Dimensionality reduction methods represent
datasets using a small number of attributes while preserving the information
conveyed by the original dataset. The ensemble members are trained based on
dimension-reduced versions of the training set. These versions are obtained by
applying dimensionality reduction to the original training set using different
values of the input parameters. This construction meets both the diversity and
accuracy criteria which are required to construct an ensemble classifier where
the former criterion is obtained by the various input parameter values and the
latter is achieved due to the decorrelation and noise reduction properties of
dimensionality reduction. In order to classify a test sample, it is first
embedded into the dimension reduced space of each individual classifier by
using an out-of-sample extension algorithm. Each classifier is then applied to
the embedded sample and the classification is obtained via a voting scheme. We
present three variations of the proposed approach based on the Random
Projections, the Diffusion Maps and the Random Subspaces dimensionality
reduction algorithms. We also present a multi-strategy ensemble which combines
AdaBoost and Diffusion Maps. A comparison is made with the Bagging, AdaBoost,
Rotation Forest ensemble classifiers and also with the base classifier which
does not incorporate dimensionality reduction. Our experiments used seventeen
benchmark datasets from the UCI repository. The results obtained by the
proposed algorithms were superior in many cases to other algorithms.
|
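A minimal sketch of the general construction described above, restricted to the Random Projections variant: each ensemble member is trained on a differently parameterized projection of the training set, test points are embedded with the same maps, and predictions are combined by majority vote. The base classifier, projection dimensions and dataset are assumptions; the Diffusion Maps and Random Subspaces variants are not reproduced.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each member uses a different target dimension (and random seed) for the projection.
members = []
for i, n_components in enumerate([8, 12, 16, 20, 24, 28, 32, 36, 40, 44]):
    rp = GaussianRandomProjection(n_components=n_components, random_state=i).fit(X_tr)
    clf = DecisionTreeClassifier(random_state=i).fit(rp.transform(X_tr), y_tr)
    members.append((rp, clf))

# Embed each test sample with every member's projection, classify, and take a majority vote.
votes = np.array([clf.predict(rp.transform(X_te)) for rp, clf in members])
y_pred = np.array([np.bincount(col).argmax() for col in votes.T])
print("ensemble accuracy:", (y_pred == y_te).mean())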
1305.4429 | Youfang Lin | Youfang Lin, Xuguang Jia, Mingjie Lin, Steve Gregory, Huaiyu Wan,
Zhihao Wu | Inferring High Quality Co-Travel Networks | 20 pages, 23 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks provide a new perspective for enterprises to better
understand their customers and have attracted substantial attention in
industry. However, inferring high quality customer social networks is a great
challenge while there are no explicit customer relations in many traditional
OLTP environments. In this paper, we study this issue in the field of passenger
transport and introduce a new member to the family of social networks, which is
named Co-Travel Networks, consisting of passengers connected by their co-travel
behaviors. We propose a novel method to infer high quality co-travel networks
of civil aviation passengers from their co-booking behaviors derived from the
PNRs (Passenger Naming Records). In our method, to accurately evaluate the
strength of ties, we present a measure of Co-Journey Times to count the
co-travel times of complete journeys between passengers. We infer a high
quality co-travel network based on a large encrypted PNR dataset and conduct a
series of network analyses on it. The experimental results show the
effectiveness of our inferring method, as well as some special characteristics
of co-travel networks, such as the sparsity and high aggregation, compared with
other kinds of social networks. It can be expected that such co-travel networks
will greatly help the industry to better understand their passengers so as to
improve their services. More importantly, we contribute a special kind of
social networks with high strength of ties generated from very close and high
cost travel behaviors, for further scientific research on human travel
behaviors, group travel patterns, high-end travel market evolution, etc., from
the perspective of social networks.
| [
{
"version": "v1",
"created": "Mon, 20 May 2013 03:04:23 GMT"
}
] | 2013-05-21T00:00:00 | [
[
"Lin",
"Youfang",
""
],
[
"Jia",
"Xuguang",
""
],
[
"Lin",
"Mingjie",
""
],
[
"Gregory",
"Steve",
""
],
[
"Wan",
"Huaiyu",
""
],
[
"Wu",
"Zhihao",
""
]
] | TITLE: Inferring High Quality Co-Travel Networks
ABSTRACT: Social networks provide a new perspective for enterprises to better
understand their customers and have attracted substantial attention in
industry. However, inferring high quality customer social networks is a great
challenge while there are no explicit customer relations in many traditional
OLTP environments. In this paper, we study this issue in the field of passenger
transport and introduce a new member to the family of social networks, which is
named Co-Travel Networks, consisting of passengers connected by their co-travel
behaviors. We propose a novel method to infer high quality co-travel networks
of civil aviation passengers from their co-booking behaviors derived from the
PNRs (Passenger Naming Records). In our method, to accurately evaluate the
strength of ties, we present a measure of Co-Journey Times to count the
co-travel times of complete journeys between passengers. We infer a high
quality co-travel network based on a large encrypted PNR dataset and conduct a
series of network analyses on it. The experimental results show the
effectiveness of our inferring method, as well as some special characteristics
of co-travel networks, such as the sparsity and high aggregation, compared with
other kinds of social networks. It can be expected that such co-travel networks
will greatly help the industry to better understand their passengers so as to
improve their services. More importantly, we contribute a special kind of
social networks with high strength of ties generated from very close and high
cost travel behaviors, for further scientific research on human travel
behaviors, group travel patterns, high-end travel market evolution, etc., from
the perspective of social networks.
|
1305.4455 | Mark Wilkinson | Ben P Vandervalk, E Luke McCarthy, Mark D Wilkinson | SHARE: A Web Service Based Framework for Distributed Querying and
Reasoning on the Semantic Web | Third Asian Semantic Web Conference, ASWC2008 Bangkok, Thailand
December 2008, Workshops Proceedings (NEFORS2008), pp69-78 | null | null | null | cs.DL cs.AI cs.SE | http://creativecommons.org/licenses/by/3.0/ | Here we describe the SHARE system, a web service based framework for
distributed querying and reasoning on the semantic web. The main innovations of
SHARE are: (1) the extension of a SPARQL query engine to perform on-demand data
retrieval from web services, and (2) the extension of an OWL reasoner to test
property restrictions by means of web service invocations. In addition to
enabling queries across distributed datasets, the system allows for a target
dataset that is significantly larger than is possible under current,
centralized approaches. Although the architecture is equally applicable to all
types of data, the SHARE system targets bioinformatics, due to the large number
of interoperable web services that are already available in this area. SHARE is
built entirely on semantic web standards, and is the successor of the BioMOBY
project.
| [
{
"version": "v1",
"created": "Mon, 20 May 2013 07:54:09 GMT"
}
] | 2013-05-21T00:00:00 | [
[
"Vandervalk",
"Ben P",
""
],
[
"McCarthy",
"E Luke",
""
],
[
"Wilkinson",
"Mark D",
""
]
] | TITLE: SHARE: A Web Service Based Framework for Distributed Querying and
Reasoning on the Semantic Web
ABSTRACT: Here we describe the SHARE system, a web service based framework for
distributed querying and reasoning on the semantic web. The main innovations of
SHARE are: (1) the extension of a SPARQL query engine to perform on-demand data
retrieval from web services, and (2) the extension of an OWL reasoner to test
property restrictions by means of web service invocations. In addition to
enabling queries across distributed datasets, the system allows for a target
dataset that is significantly larger than is possible under current,
centralized approaches. Although the architecture is equally applicable to all
types of data, the SHARE system targets bioinformatics, due to the large number
of interoperable web services that are already available in this area. SHARE is
built entirely on semantic web standards, and is the successor of the BioMOBY
project.
|
1305.3616 | Manuel Gomez Rodriguez | Manuel Gomez Rodriguez, Jure Leskovec, Bernhard Schoelkopf | Modeling Information Propagation with Survival Theory | To appear at ICML '13 | null | null | null | cs.SI cs.DS physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Networks provide a skeleton for the spread of contagions, like, information,
ideas, behaviors and diseases. Many times networks over which contagions
diffuse are unobserved and need to be inferred. Here we apply survival theory
to develop general additive and multiplicative risk models under which the
network inference problems can be solved efficiently by exploiting their
convexity. Our additive risk model generalizes several existing network
inference models. We show all these models are particular cases of our more
general model. Our multiplicative model allows for modeling scenarios in which
a node can either increase or decrease the risk of activation of another node,
in contrast with previous approaches, which consider only positive risk
increments. We evaluate the performance of our network inference algorithms on
large synthetic and real cascade datasets, and show that our models are able to
predict the length and duration of cascades in real data.
| [
{
"version": "v1",
"created": "Wed, 15 May 2013 20:01:06 GMT"
}
] | 2013-05-17T00:00:00 | [
[
"Rodriguez",
"Manuel Gomez",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Schoelkopf",
"Bernhard",
""
]
] | TITLE: Modeling Information Propagation with Survival Theory
ABSTRACT: Networks provide a skeleton for the spread of contagions, like, information,
ideas, behaviors and diseases. Many times networks over which contagions
diffuse are unobserved and need to be inferred. Here we apply survival theory
to develop general additive and multiplicative risk models under which the
network inference problems can be solved efficiently by exploiting their
convexity. Our additive risk model generalizes several existing network
inference models. We show all these models are particular cases of our more
general model. Our multiplicative model allows for modeling scenarios in which
a node can either increase or decrease the risk of activation of another node,
in contrast with previous approaches, which consider only positive risk
increments. We evaluate the performance of our network inference algorithms on
large synthetic and real cascade datasets, and show that our models are able to
predict the length and duration of cascades in real data.
|
1305.3384 | Lior Rokach | Naseem Biadsy, Lior Rokach, Armin Shmilovici | Transfer Learning for Content-Based Recommender Systems using Tree
Matching | null | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new approach to content-based transfer learning
for solving the data sparsity problem in cases when the users' preferences in
the target domain are either scarce or unavailable, but the necessary
information on the preferences exists in another domain. We show that training
a system to use such information across domains can produce better performance.
Specifically, we represent users' behavior patterns based on topological graph
structures. Each behavior pattern represents the behavior of a set of users,
when the users' behavior is defined as the items they rated and the items'
rating values. In the next step we find a correlation between behavior patterns
in the source domain and behavior patterns in the target domain. This mapping
is considered a bridge between the two domains. Based on the correlation and
content-attributes of the items, we train a machine learning model to predict
users' ratings in the target domain. When we compare our approach to the
popularity approach and KNN-cross-domain on a real-world dataset, the results
show that, in 83% of the cases on average, our approach outperforms both
methods.
| [
{
"version": "v1",
"created": "Wed, 15 May 2013 08:00:54 GMT"
}
] | 2013-05-16T00:00:00 | [
[
"Biadsy",
"Naseem",
""
],
[
"Rokach",
"Lior",
""
],
[
"Shmilovici",
"Armin",
""
]
] | TITLE: Transfer Learning for Content-Based Recommender Systems using Tree
Matching
ABSTRACT: In this paper we present a new approach to content-based transfer learning
for solving the data sparsity problem in cases when the users' preferences in
the target domain are either scarce or unavailable, but the necessary
information on the preferences exists in another domain. We show that training
a system to use such information across domains can produce better performance.
Specifically, we represent users' behavior patterns based on topological graph
structures. Each behavior pattern represents the behavior of a set of users,
when the users' behavior is defined as the items they rated and the items'
rating values. In the next step we find a correlation between behavior patterns
in the source domain and behavior patterns in the target domain. This mapping
is considered a bridge between the two domains. Based on the correlation and
content-attributes of the items, we train a machine learning model to predict
users' ratings in the target domain. When we compare our approach to the
popularity approach and KNN-cross-domain on a real-world dataset, the results
show that, in 83% of the cases on average, our approach outperforms both
methods.
|
1302.4922 | David Duvenaud | David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua B. Tenenbaum,
Zoubin Ghahramani | Structure Discovery in Nonparametric Regression through Compositional
Kernel Search | 9 pages, 7 figures, To appear in proceedings of the 2013
International Conference on Machine Learning | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite its importance, choosing the structural form of the kernel in
nonparametric regression remains a black art. We define a space of kernel
structures which are built compositionally by adding and multiplying a small
number of base kernels. We present a method for searching over this space of
structures which mirrors the scientific discovery process. The learned
structures can often decompose functions into interpretable components and
enable long-range extrapolation on time-series datasets. Our structure search
method outperforms many widely used kernels and kernel combination methods on a
variety of prediction tasks.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 14:53:13 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Mar 2013 11:48:12 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Apr 2013 16:53:30 GMT"
},
{
"version": "v4",
"created": "Mon, 13 May 2013 13:10:31 GMT"
}
] | 2013-05-15T00:00:00 | [
[
"Duvenaud",
"David",
""
],
[
"Lloyd",
"James Robert",
""
],
[
"Grosse",
"Roger",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: Structure Discovery in Nonparametric Regression through Compositional
Kernel Search
ABSTRACT: Despite its importance, choosing the structural form of the kernel in
nonparametric regression remains a black art. We define a space of kernel
structures which are built compositionally by adding and multiplying a small
number of base kernels. We present a method for searching over this space of
structures which mirrors the scientific discovery process. The learned
structures can often decompose functions into interpretable components and
enable long-range extrapolation on time-series datasets. Our structure search
method outperforms many widely used kernels and kernel combination methods on a
variety of prediction tasks.
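For illustration, a minimal hand-rolled sketch (not the authors' implementation) of the compositional idea: base kernels are combined by addition and multiplication, and each candidate structure is scored here by the GP log marginal likelihood on toy data. The kernel forms, hyperparameters, and noise level below are assumptions.

# Sketch: composing base kernels by sum/product and scoring candidate structures.
import numpy as np

def rbf(x1, x2, ell=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def periodic(x1, x2, period=1.0, ell=1.0):
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell ** 2)

def linear(x1, x2):
    return x1[:, None] * x2[None, :]

def log_marginal_likelihood(K, y, noise=0.1):
    n = len(y)
    L = np.linalg.cholesky(K + noise ** 2 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi)

# Toy data: a linear trend plus a periodic component.
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 80)
y = 0.5 * x + np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(80)

# Candidate structures built compositionally from the base kernels.
candidates = {
    "RBF": rbf(x, x),
    "LIN + PER": linear(x, x) + periodic(x, x),
    "LIN * PER": linear(x, x) * periodic(x, x),
}
for name, K in candidates.items():
    print(name, round(float(log_marginal_likelihood(K, y)), 2))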
|
1305.0540 | Shang Shang | Shang Shang and Yuk Hui and Pan Hui and Paul Cuff and Sanjeev Kulkarni | Privacy Preserving Recommendation System Based on Groups | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/3.0/ | Recommendation systems have received considerable attention in the recent
decades. Yet with the development of information technology and social media,
the risk of revealing private data to service providers has become a growing
concern for more and more users. Trade-offs between quality and privacy in
recommendation systems naturally arise. In this paper, we present a privacy
preserving recommendation framework based on groups. The main idea is to use
groups as a natural middleware to preserve users' privacy. A distributed
preference exchange algorithm is proposed to ensure the anonymity of data,
wherein the effective size of the anonymity set asymptotically approaches the
group size with time. We construct a hybrid collaborative filtering model based
on Markov random walks to provide recommendations and predictions to group
members. Experimental results on the MovieLens and Epinions datasets show that
our proposed methods outperform the baseline methods, L+ and ItemRank, two
state-of-the-art personalized recommendation algorithms, for both
recommendation precision and hit rate despite the absence of personal
preference information.
| [
{
"version": "v1",
"created": "Thu, 2 May 2013 19:17:08 GMT"
},
{
"version": "v2",
"created": "Mon, 13 May 2013 19:50:41 GMT"
}
] | 2013-05-14T00:00:00 | [
[
"Shang",
"Shang",
""
],
[
"Hui",
"Yuk",
""
],
[
"Hui",
"Pan",
""
],
[
"Cuff",
"Paul",
""
],
[
"Kulkarni",
"Sanjeev",
""
]
] | TITLE: Privacy Preserving Recommendation System Based on Groups
ABSTRACT: Recommendation systems have received considerable attention in the recent
decades. Yet with the development of information technology and social media,
the risk of revealing private data to service providers has become a growing
concern for more and more users. Trade-offs between quality and privacy in
recommendation systems naturally arise. In this paper, we present a privacy
preserving recommendation framework based on groups. The main idea is to use
groups as a natural middleware to preserve users' privacy. A distributed
preference exchange algorithm is proposed to ensure the anonymity of data,
wherein the effective size of the anonymity set asymptotically approaches the
group size with time. We construct a hybrid collaborative filtering model based
on Markov random walks to provide recommendations and predictions to group
members. Experimental results on the MovieLens and Epinions datasets show that
our proposed methods outperform the baseline methods, L+ and ItemRank, two
state-of-the-art personalized recommendation algorithms, for both
recommendation precision and hit rate despite the absence of personal
preference information.
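As a rough illustration of random-walk-based item scoring of the kind used by the baselines above (not the paper's group-based, privacy-preserving framework), the toy sketch below ranks items for a set of users by a random walk with restart on a user-item bipartite graph; the rating matrix, restart weight, and iteration count are made up.

# Sketch: random walk with restart over a tiny user-item bipartite graph.
import numpy as np

# Toy rating matrix: rows = group members, columns = items (0 = unrated).
R = np.array([[5, 0, 3, 0],
              [4, 0, 0, 2],
              [0, 3, 4, 0]], dtype=float)
n_users, n_items = R.shape

# Column-stochastic transition matrix over the bipartite graph.
A = np.zeros((n_users + n_items, n_users + n_items))
A[:n_users, n_users:] = R            # user -> item edges weighted by rating
A[n_users:, :n_users] = R.T          # item -> user edges
col_sums = A.sum(axis=0)
P = A / np.where(col_sums > 0, col_sums, 1)

# Restart distribution concentrated on the group's members.
restart = np.zeros(n_users + n_items)
restart[:n_users] = 1.0 / n_users

alpha, pi = 0.85, restart.copy()
for _ in range(100):                 # power iteration for the stationary vector
    pi = alpha * P @ pi + (1 - alpha) * restart

item_scores = pi[n_users:]
print("item ranking:", np.argsort(-item_scores))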
|
1305.2788 | Fabian Pedregosa | Fabian Pedregosa (INRIA Paris - Rocquencourt, INRIA Saclay - Ile de
France), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO), Bertrand
Thirion (INRIA Saclay - Ile de France, LNAO), Alexandre Gramfort (LTCI) | HRF estimation improves sensitivity of fMRI encoding and decoding models | 3nd International Workshop on Pattern Recognition in NeuroImaging
(2013) | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting activation patterns from functional Magnetic Resonance Images
(fMRI) datasets remains challenging in rapid-event designs due to the inherent
delay of blood oxygen level-dependent (BOLD) signal. The general linear model
(GLM) allows to estimate the activation from a design matrix and a fixed
hemodynamic response function (HRF). However, the HRF is known to vary
substantially between subjects and brain regions. In this paper, we propose a
model for jointly estimating the hemodynamic response function (HRF) and the
activation patterns via a low-rank representation of task effects.This model is
based on the linearity assumption behind the GLM and can be computed using
standard gradient-based solvers. We use the activation patterns computed by our
model as input data for encoding and decoding studies and report performance
improvement in both settings.
| [
{
"version": "v1",
"created": "Mon, 13 May 2013 14:19:24 GMT"
}
] | 2013-05-14T00:00:00 | [
[
"Pedregosa",
"Fabian",
"",
"INRIA Paris - Rocquencourt, INRIA Saclay - Ile de\n France"
],
[
"Eickenberg",
"Michael",
"",
"INRIA Saclay - Ile de France, LNAO"
],
[
"Thirion",
"Bertrand",
"",
"INRIA Saclay - Ile de France, LNAO"
],
[
"Gramfort",
"Alexandre",
"",
"LTCI"
]
] | TITLE: HRF estimation improves sensitivity of fMRI encoding and decoding models
ABSTRACT: Extracting activation patterns from functional Magnetic Resonance Images
(fMRI) datasets remains challenging in rapid-event designs due to the inherent
delay of the blood oxygen level-dependent (BOLD) signal. The general linear
model (GLM) allows estimation of the activation from a design matrix and a fixed
hemodynamic response function (HRF). However, the HRF is known to vary
substantially between subjects and brain regions. In this paper, we propose a
model for jointly estimating the hemodynamic response function (HRF) and the
activation patterns via a low-rank representation of task effects. This model is
based on the linearity assumption behind the GLM and can be computed using
standard gradient-based solvers. We use the activation patterns computed by our
model as input data for encoding and decoding studies and report performance
improvement in both settings.
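The fixed-HRF GLM baseline that the paper improves on can be sketched as follows (illustrative only: the HRF shape, TR, event schedule, and noise level are assumptions, and the paper's joint low-rank HRF estimation is not reproduced here).

# Sketch: convolve event onsets with a fixed HRF and fit activation by least squares.
import numpy as np

TR, n_scans = 2.0, 120
t = np.arange(0, 30, TR)
# A simple gamma-shaped HRF with a small undershoot (assumed shape, for illustration).
hrf = (t ** 5) * np.exp(-t) - 0.35 * (t ** 7) * np.exp(-t) / 600.0
hrf /= hrf.max()

onsets = np.zeros(n_scans)
onsets[::15] = 1.0                               # one event every 15 scans
regressor = np.convolve(onsets, hrf)[:n_scans]   # predicted BOLD response

X = np.column_stack([regressor, np.ones(n_scans)])   # design matrix + intercept
rng = np.random.default_rng(0)
beta_true = 2.5
y = beta_true * regressor + 0.5 * rng.standard_normal(n_scans)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated activation:", round(float(beta_hat[0]), 2))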
|
1305.2835 | Kostas Tsichlas | Andreas Kosmatopoulos and Kostas Tsichlas | Dynamic Top-$k$ Dominating Queries | null | null | null | null | cs.CG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let $\mathcal{S}$ be a dataset of $n$ 2-dimensional points. The top-$k$
dominating query aims to report the $k$ points that dominate the most points in
$\mathcal{S}$. A point $p$ dominates a point $q$ iff all coordinates of $p$ are
smaller than or equal to those of $q$ and at least one of them is strictly
smaller. The top-$k$ dominating query combines the dominance concept of maxima
queries with the ranking function of top-$k$ queries and can be used as an
important tool in multi-criteria decision making systems. In this work, we
propose novel algorithms for answering semi-dynamic (insertions only) and fully
dynamic (insertions and deletions) top-$k$ dominating queries. To the best of
our knowledge, this is the first work towards handling (semi-)dynamic top-$k$
dominating queries that offers algorithms with asymptotic guarantees regarding
their time and space cost.
| [
{
"version": "v1",
"created": "Mon, 13 May 2013 16:30:11 GMT"
}
] | 2013-05-14T00:00:00 | [
[
"Kosmatopoulos",
"Andreas",
""
],
[
"Tsichlas",
"Kostas",
""
]
] | TITLE: Dynamic Top-$k$ Dominating Queries
ABSTRACT: Let $\mathcal{S}$ be a dataset of $n$ 2-dimensional points. The top-$k$
dominating query aims to report the $k$ points that dominate the most points in
$\mathcal{S}$. A point $p$ dominates a point $q$ iff all coordinates of $p$ are
smaller than or equal to those of $q$ and at least one of them is strictly
smaller. The top-$k$ dominating query combines the dominance concept of maxima
queries with the ranking function of top-$k$ queries and can be used as an
important tool in multi-criteria decision making systems. In this work, we
propose novel algorithms for answering semi-dynamic (insertions only) and fully
dynamic (insertions and deletions) top-$k$ dominating queries. To the best of
our knowledge, this is the first work towards handling (semi-)dynamic top-$k$
dominating queries that offers algorithms with asymptotic guarantees regarding
their time and space cost.
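For concreteness, a brute-force, in-memory reference for the query defined above (quadratic time and no dynamic maintenance, so none of the paper's I/O or update guarantees):

def dominates(p, q):
    """p dominates q iff p <= q in every coordinate and p < q in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def top_k_dominating(points, k):
    # Score each point by how many other points it dominates, keep the best k.
    scores = [(sum(dominates(p, q) for q in points), p) for p in points]
    scores.sort(key=lambda s: -s[0])
    return scores[:k]

S = [(1, 2), (2, 1), (3, 3), (2, 2), (4, 1)]
print(top_k_dominating(S, 2))   # the two points that dominate the most others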
|
1304.4682 | Yuan Li | Yuan Li, Haoyu Gao, Mingmin Yang, Wanqiu Guan, Haixin Ma, Weining
Qian, Zhigang Cao, Xiaoguang Yang | What are Chinese Talking about in Hot Weibos? | null | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SinaWeibo is a Twitter-like social network service emerging in China in
recent years. People can post weibos (microblogs) and communicate with others
on it. Based on a dataset of 650 million weibos from August 2009 to January
2012 crawled from APIs of SinaWeibo, we study the hot ones that have been
reposted for at least 1000 times. We find that hot weibos can be roughly
classified into eight categories, i.e. Entertainment & Fashion, Hot Social
Events, Leisure & Mood, Life & Health, Seeking for Help, Sales Promotion,
Fengshui & Fortune and Deleted Weibos. In particular, Leisure & Mood and Hot
Social Events account for almost 65% of all the hot weibos. This reflects very
well the fundamental dual-structure of the current society of China: On the one
hand, the economy has made great progress and a considerable share of people are
now living relatively prosperous and fairly easy lives. On the other hand, there
still exist many serious social problems, such as government corruption
and environmental pollution. It is also shown that users' posting
and reposting behaviors are greatly affected by their identity factors (gender,
verification status, and regional location). For instance, (1) Two thirds of
the hot weibos are created by male users. (2) Although verified users account
for only 0.1% in SinaWeibo, 46.5% of the hot weibos are contributed by them.
Very interestingly, 39.2% are written by SPA users. A more or less pathetic
fact is that only 14.4% of the hot weibos are created by grassroots (individual
users that are neither SPA nor verified). (3) Users from different areas of
China have distinct posting and reposting behaviors which usually reflect
their local cultures very well. Homophily is also examined for people's reposting
behaviors.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2013 04:25:33 GMT"
},
{
"version": "v2",
"created": "Fri, 10 May 2013 09:52:35 GMT"
}
] | 2013-05-13T00:00:00 | [
[
"Li",
"Yuan",
""
],
[
"Gao",
"Haoyu",
""
],
[
"Yang",
"Mingmin",
""
],
[
"Guan",
"Wanqiu",
""
],
[
"Ma",
"Haixin",
""
],
[
"Qian",
"Weining",
""
],
[
"Cao",
"Zhigang",
""
],
[
"Yang",
"Xiaoguang",
""
]
] | TITLE: What are Chinese Talking about in Hot Weibos?
ABSTRACT: SinaWeibo is a Twitter-like social network service emerging in China in
recent years. People can post weibos (microblogs) and communicate with others
on it. Based on a dataset of 650 million weibos from August 2009 to January
2012 crawled from APIs of SinaWeibo, we study the hot ones that have been
reposted for at least 1000 times. We find that hot weibos can be roughly
classified into eight categories, i.e. Entertainment & Fashion, Hot Social
Events, Leisure & Mood, Life & Health, Seeking for Help, Sales Promotion,
Fengshui & Fortune and Deleted Weibos. In particular, Leisure & Mood and Hot
Social Events account for almost 65% of all the hot weibos. This reflects very
well the fundamental dual-structure of the current society of China: On the one
hand, the economy has made great progress and a considerable share of people are
now living relatively prosperous and fairly easy lives. On the other hand, there
still exist many serious social problems, such as government corruption
and environmental pollution. It is also shown that users' posting
and reposting behaviors are greatly affected by their identity factors (gender,
verification status, and regional location). For instance, (1) Two thirds of
the hot weibos are created by male users. (2) Although verified users account
for only 0.1% in SinaWeibo, 46.5% of the hot weibos are contributed by them.
Very interestingly, 39.2% are written by SPA users. A more or less pathetic
fact is that only 14.4% of the hot weibos are created by grassroots (individual
users that are neither SPA nor verified). (3) Users from different areas of
China have distinct posting and reposting behaviors which usually reflect
their local cultures very well. Homophily is also examined for people's reposting
behaviors.
|
1305.1956 | Andrew Lan | Andrew S. Lan, Christoph Studer, Andrew E. Waters and Richard G.
Baraniuk | Joint Topic Modeling and Factor Analysis of Textual Information and
Graded Response Data | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern machine learning methods are critical to the development of
large-scale personalized learning systems that cater directly to the needs of
individual learners. The recently developed SPARse Factor Analysis (SPARFA)
framework provides a new statistical model and algorithms for machine
learning-based learning analytics, which estimate a learner's knowledge of the
latent concepts underlying a domain, and content analytics, which estimate the
relationships among a collection of questions and the latent concepts. SPARFA
estimates these quantities given only the binary-valued graded responses to a
collection of questions. In order to better interpret the estimated latent
concepts, SPARFA relies on a post-processing step that utilizes user-defined
tags (e.g., topics or keywords) available for each question. In this paper, we
relax the need for user-defined tags by extending SPARFA to jointly process
both graded learner responses and the text of each question and its associated
answer(s) or other feedback. Our purely data-driven approach (i) enhances the
interpretability of the estimated latent concepts without the need of
explicitly generating a set of tags or performing a post-processing step, (ii)
improves the prediction performance of SPARFA, and (iii) scales to large
test/assessments where human annotation would prove burdensome. We demonstrate
the efficacy of the proposed approach on two real educational datasets.
| [
{
"version": "v1",
"created": "Wed, 8 May 2013 20:44:55 GMT"
},
{
"version": "v2",
"created": "Fri, 10 May 2013 01:05:09 GMT"
}
] | 2013-05-13T00:00:00 | [
[
"Lan",
"Andrew S.",
""
],
[
"Studer",
"Christoph",
""
],
[
"Waters",
"Andrew E.",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] | TITLE: Joint Topic Modeling and Factor Analysis of Textual Information and
Graded Response Data
ABSTRACT: Modern machine learning methods are critical to the development of
large-scale personalized learning systems that cater directly to the needs of
individual learners. The recently developed SPARse Factor Analysis (SPARFA)
framework provides a new statistical model and algorithms for machine
learning-based learning analytics, which estimate a learner's knowledge of the
latent concepts underlying a domain, and content analytics, which estimate the
relationships among a collection of questions and the latent concepts. SPARFA
estimates these quantities given only the binary-valued graded responses to a
collection of questions. In order to better interpret the estimated latent
concepts, SPARFA relies on a post-processing step that utilizes user-defined
tags (e.g., topics or keywords) available for each question. In this paper, we
relax the need for user-defined tags by extending SPARFA to jointly process
both graded learner responses and the text of each question and its associated
answer(s) or other feedback. Our purely data-driven approach (i) enhances the
interpretability of the estimated latent concepts without the need of
explicitly generating a set of tags or performing a post-processing step, (ii)
improves the prediction performance of SPARFA, and (iii) scales to large
test/assessments where human annotation would prove burdensome. We demonstrate
the efficacy of the proposed approach on two real educational datasets.
|
1305.2269 | Fang Wang | Fang Wang and Yi Li | Beyond Physical Connections: Tree Models in Human Pose Estimation | CVPR 2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simple tree models for articulated objects have prevailed in the last decade.
However, it is also believed that these simple tree models are not capable of
capturing large variations in many scenarios, such as human pose estimation.
This paper attempts to address three questions: 1) are simple tree models
sufficient? more specifically, 2) how to use tree models effectively in human
pose estimation? and 3) how shall we use combined parts together with single
parts efficiently?
Assume we have a set of single parts and combined parts, and the goal is to
estimate a joint distribution of their locations. We surprisingly find that no
latent variables are introduced in the Leeds Sport Dataset (LSP) during
learning latent trees for deformable model, which aims at approximating the
joint distributions of body part locations using minimal tree structure. This
suggests one can straightforwardly use a mixed representation of single and
combined parts to approximate their joint distribution in a simple tree model.
As such, one only needs to build Visual Categories of the combined parts, and
then perform inference on the learned latent tree. Our method outperformed the
state of the art on the LSP, both in the scenarios when the training images are
from the same dataset and from the PARSE dataset. Experiments on animal images
from the VOC challenge further support our findings.
| [
{
"version": "v1",
"created": "Fri, 10 May 2013 07:09:14 GMT"
}
] | 2013-05-13T00:00:00 | [
[
"Wang",
"Fang",
""
],
[
"Li",
"Yi",
""
]
] | TITLE: Beyond Physical Connections: Tree Models in Human Pose Estimation
ABSTRACT: Simple tree models for articulated objects have prevailed in the last decade.
However, it is also believed that these simple tree models are not capable of
capturing large variations in many scenarios, such as human pose estimation.
This paper attempts to address three questions: 1) are simple tree models
sufficient? more specifically, 2) how to use tree models effectively in human
pose estimation? and 3) how shall we use combined parts together with single
parts efficiently?
Assume we have a set of single parts and combined parts, and the goal is to
estimate a joint distribution of their locations. We surprisingly find that no
latent variables are introduced in the Leeds Sport Dataset (LSP) during
learning latent trees for deformable model, which aims at approximating the
joint distributions of body part locations using minimal tree structure. This
suggests one can straightforwardly use a mixed representation of single and
combined parts to approximate their joint distribution in a simple tree model.
As such, one only needs to build Visual Categories of the combined parts, and
then perform inference on the learned latent tree. Our method outperformed the
state of the art on the LSP, both in the scenarios when the training images are
from the same dataset and from the PARSE dataset. Experiments on animal images
from the VOC challenge further support our findings.
|
1305.2388 | Ehsan Saboori Mr. | Shafigh Parsazad, Ehsan Saboori, Amin Allahyar | Fast Feature Reduction in intrusion detection datasets | null | Parsazad, Shafigh; Saboori, Ehsan; Allahyar, Amin; , "Fast Feature
Reduction in intrusion detection datasets," MIPRO, 2012 Proceedings of the
35th International Convention , vol., no., pp.1023-1029, 21-25 May 2012 | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In most intrusion detection systems (IDS), the system tries to learn the
characteristics of different types of attacks by analyzing packets that are
sent or received over the network. These packets have many features, but not
all of them need to be analyzed to detect a specific type of attack. Detection
speed and computational cost are also vital concerns, because datasets in this
domain are typically very large. In this paper we propose a very simple and
fast feature selection method that eliminates features carrying no helpful
information, resulting in faster learning through the omission of redundant
features. We compared the proposed method with three of the most successful
similarity-based feature selection algorithms: Correlation Coefficient, Least
Square Regression Error, and Maximal Information Compression Index. We then
used the features recommended by each of these algorithms in two popular
classifiers, a Bayes classifier and a KNN classifier, to measure the quality
of the recommendations. Experimental results show that although the proposed
method does not outperform the evaluated algorithms by a large margin in
accuracy, it has a huge advantage over them in computational cost.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 2013 05:27:47 GMT"
}
] | 2013-05-13T00:00:00 | [
[
"Parsazad",
"Shafigh",
""
],
[
"Saboori",
"Ehsan",
""
],
[
"Allahyar",
"Amin",
""
]
] | TITLE: Fast Feature Reduction in intrusion detection datasets
ABSTRACT: In most intrusion detection systems (IDS), the system tries to learn the
characteristics of different types of attacks by analyzing packets that are
sent or received over the network. These packets have many features, but not
all of them need to be analyzed to detect a specific type of attack. Detection
speed and computational cost are also vital concerns, because datasets in this
domain are typically very large. In this paper we propose a very simple and
fast feature selection method that eliminates features carrying no helpful
information, resulting in faster learning through the omission of redundant
features. We compared the proposed method with three of the most successful
similarity-based feature selection algorithms: Correlation Coefficient, Least
Square Regression Error, and Maximal Information Compression Index. We then
used the features recommended by each of these algorithms in two popular
classifiers, a Bayes classifier and a KNN classifier, to measure the quality
of the recommendations. Experimental results show that although the proposed
method does not outperform the evaluated algorithms by a large margin in
accuracy, it has a huge advantage over them in computational cost.
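A minimal sketch of a similarity-based filter in the spirit of the comparison above (not the proposed method itself; the correlation threshold and toy data are assumptions):

# Sketch: drop one feature from every highly correlated pair before classification.
import numpy as np

def reduce_features(X, threshold=0.95):
    corr = np.abs(np.corrcoef(X, rowvar=False))   # |Pearson correlation| between features
    keep = []
    for j in range(X.shape[1]):
        # keep feature j only if it is not near-duplicated by an already-kept feature
        if all(corr[j, i] < threshold for i in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
a = rng.standard_normal(200)
X = np.column_stack([a,
                     a + 0.01 * rng.standard_normal(200),   # redundant copy of feature 0
                     rng.standard_normal(200)])
print("kept feature indices:", reduce_features(X))   # e.g. [0, 2]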
|
1305.1946 | Luca Mazzola | Elena Camossi, Paola Villa, Luca Mazzola | Semantic-based Anomalous Pattern Discovery in Moving Object Trajectories | null | null | null | null | cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we investigate a novel semantic approach for pattern discovery
in trajectories that, relying on ontologies, enhances object movement
information with event semantics. The approach can be applied to the detection
of movement patterns and behaviors whenever the semantics of events occurring
along the trajectory is, explicitly or implicitly, available. In particular, we
tested it against an exacting case scenario in maritime surveillance, i.e., the
discovery of suspicious container transportations.
The methodology we have developed entails the formalization of the
application domain through a domain ontology, extending the Moving Object
Ontology (MOO) described in this paper. Afterwards, movement patterns have to
be formalized, either as Description Logic (DL) axioms or queries, enabling the
retrieval of the trajectories that follow the patterns.
In our experimental evaluation, we have considered a real-world dataset of 18
million container events describing the operations undertaken in a port to
accomplish the shipping (e.g., loading on a vessel, export operation).
Leveraging these events, we have reconstructed almost 300 thousand container
trajectories referring to 50 thousand containers travelling over three years.
We have formalized the anomalous itinerary patterns as DL axioms, testing
different ontology APIs and DL reasoners to retrieve the suspicious
transportations.
Our experiments demonstrate that the approach is feasible and efficient. In
particular, the joint use of Pellet and SPARQL-DL makes it possible to detect
the trajectories following a given pattern in reasonable time, even on large
datasets.
| [
{
"version": "v1",
"created": "Wed, 8 May 2013 20:14:03 GMT"
}
] | 2013-05-10T00:00:00 | [
[
"Camossi",
"Elena",
""
],
[
"Villa",
"Paola",
""
],
[
"Mazzola",
"Luca",
""
]
] | TITLE: Semantic-based Anomalous Pattern Discovery in Moving Object Trajectories
ABSTRACT: In this work, we investigate a novel semantic approach for pattern discovery
in trajectories that, relying on ontologies, enhances object movement
information with event semantics. The approach can be applied to the detection
of movement patterns and behaviors whenever the semantics of events occurring
along the trajectory is, explicitly or implicitly, available. In particular, we
tested it against an exacting case scenario in maritime surveillance, i.e., the
discovery of suspicious container transportations.
The methodology we have developed entails the formalization of the
application domain through a domain ontology, extending the Moving Object
Ontology (MOO) described in this paper. Afterwards, movement patterns have to
be formalized, either as Description Logic (DL) axioms or queries, enabling the
retrieval of the trajectories that follow the patterns.
In our experimental evaluation, we have considered a real-world dataset of 18
million container events describing the operations undertaken in a port to
accomplish the shipping (e.g., loading on a vessel, export operation).
Leveraging these events, we have reconstructed almost 300 thousand container
trajectories referring to 50 thousand containers travelling over three years.
We have formalized the anomalous itinerary patterns as DL axioms, testing
different ontology APIs and DL reasoners to retrieve the suspicious
transportations.
Our experiments demonstrate that the approach is feasible and efficient. In
particular, the joint use of Pellet and SPARQL-DL makes it possible to detect
the trajectories following a given pattern in reasonable time, even on large
datasets.
|
1305.1657 | Emanuele Goldoni | Alberto Savioli, Emanuele Goldoni, Pietro Savazzi, Paolo Gamba | Low Complexity Indoor Localization in Wireless Sensor Networks by UWB
and Inertial Data Fusion | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precise indoor localization of moving targets is a challenging activity which
cannot be easily accomplished without combining different sources of
information. In this sense, the combination of different data sources with an
appropriate filter might improve both positioning and tracking performance.
This work proposes an algorithm for hybrid positioning in Wireless Sensor
Networks based on data fusion of UWB and inertial information. A constant-gain
Steady State Kalman Filter is used to bound the complexity of the system,
simplifying its implementation on a typical low-power WSN node. The performance
of the presented data fusion algorithm has been evaluated in a realistic
scenario using both simulations and realistic datasets. The obtained results
prove the validity of this approach, which efficiently fuses different
positioning data sources, reducing the localization error.
| [
{
"version": "v1",
"created": "Tue, 7 May 2013 21:43:07 GMT"
}
] | 2013-05-09T00:00:00 | [
[
"Savioli",
"Alberto",
""
],
[
"Goldoni",
"Emanuele",
""
],
[
"Savazzi",
"Pietro",
""
],
[
"Gamba",
"Paolo",
""
]
] | TITLE: Low Complexity Indoor Localization in Wireless Sensor Networks by UWB
and Inertial Data Fusion
ABSTRACT: Precise indoor localization of moving targets is a challenging activity which
cannot be easily accomplished without combining different sources of
information. In this sense, the combination of different data sources with an
appropriate filter might improve both positioning and tracking performance.
This work proposes an algorithm for hybrid positioning in Wireless Sensor
Networks based on data fusion of UWB and inertial information. A constant-gain
Steady State Kalman Filter is used to bound the complexity of the system,
simplifying its implementation on a typical low-power WSN node. The performance
of the presented data fusion algorithm has been evaluated in a realistic
scenario using both simulations and realistic datasets. The obtained results
prove the validity of this approach, which efficiently fuses different
positioning data sources, reducing the localization error.
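A constant-gain Kalman update of the kind described above can be sketched in a few lines on a 1-D constant-velocity toy model; the gain vector, time step, and noise level are assumptions, not the paper's parameterization.

# Sketch: fixed-gain (steady-state) Kalman filter fusing noisy position fixes
# with a constant-velocity prediction standing in for the inertial propagation.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition
H = np.array([[1.0, 0.0]])                # UWB-style measurement: position only
K = np.array([[0.6], [1.5]])              # fixed gain, assumed precomputed offline

def step(x, z):
    x_pred = F @ x                        # predict with the motion model
    return x_pred + K @ (z - H @ x_pred)  # correct with the position measurement

rng = np.random.default_rng(0)
est = np.array([[0.0], [0.0]])
for k in range(50):
    true_pos = 1.0 * (k + 1) * dt         # target moving at 1 m/s
    z = np.array([[true_pos + 0.3 * rng.standard_normal()]])
    est = step(est, z)
print("final position estimate:", round(float(est[0, 0]), 2))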
|
1305.1372 | Fan Min | Fan Min and William Zhu | Cold-start recommendation through granular association rules | Submitted to Joint Rough Sets 2013 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems are popular in e-commerce as they suggest items of
interest to users. Researchers have addressed the cold-start problem where
either the user or the item is new. However, the situation with both new user
and new item has seldom been considered. In this paper, we propose a cold-start
recommendation approach to this situation based on granular association rules.
Specifically, we provide a means for describing users and items through
information granules, a means for generating association rules between users
and items, and a means for recommending items to users using these rules.
Experiments are undertaken on a publicly available dataset MovieLens. Results
indicate that rule sets perform similarly on the training and the testing sets,
and the appropriate setting of granule is essential to the application of
granular association rules.
| [
{
"version": "v1",
"created": "Tue, 7 May 2013 01:08:27 GMT"
}
] | 2013-05-08T00:00:00 | [
[
"Min",
"Fan",
""
],
[
"Zhu",
"William",
""
]
] | TITLE: Cold-start recommendation through granular association rules
ABSTRACT: Recommender systems are popular in e-commerce as they suggest items of
interest to users. Researchers have addressed the cold-start problem where
either the user or the item is new. However, the situation with both new user
and new item has seldom been considered. In this paper, we propose a cold-start
recommendation approach to this situation based on granular association rules.
Specifically, we provide a means for describing users and items through
information granules, a means for generating association rules between users
and items, and a means for recommending items to users using these rules.
Experiments are undertaken on a publicly available dataset MovieLens. Results
indicate that rule sets perform similarly on the training and the testing sets,
and the appropriate setting of granule is essential to the application of
granular association rules.
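As a loose illustration of the rule flavour involved (the granule definitions and the confidence notion below are simplified assumptions, not the paper's formalism): a granular association rule links a user granule (e.g. gender = F) to an item granule (e.g. genre = comedy) when enough users in the user granule rated items in the item granule.

ratings = {("u1", "m1"), ("u1", "m3"), ("u2", "m2"), ("u3", "m3")}
user_granule = {"u1", "u3"}            # e.g. users with gender = F (toy granule)
item_granule = {"m3"}                  # e.g. movies with genre = comedy (toy granule)

# One simple notion of confidence: fraction of the user granule that rated
# at least one item in the item granule.
covered = {u for u in user_granule
           if any((u, m) in ratings for m in item_granule)}
confidence = len(covered) / len(user_granule)
print("rule confidence:", confidence)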
|
1210.1207 | Hema Swetha Koppula | Hema Swetha Koppula, Rudhir Gupta, Ashutosh Saxena | Learning Human Activities and Object Affordances from RGB-D Videos | arXiv admin note: substantial text overlap with arXiv:1208.0967 | null | null | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding human activities and object affordances are two very important
skills, especially for personal robots which operate in human environments. In
this work, we consider the problem of extracting a descriptive labeling of the
sequence of sub-activities being performed by a human, and more importantly, of
their interactions with the objects in the form of associated affordances.
Given an RGB-D video, we jointly model the human activities and object
affordances as a Markov random field where the nodes represent objects and
sub-activities, and the edges represent the relationships between object
affordances, their relations with sub-activities, and their evolution over
time. We formulate the learning problem using a structural support vector
machine (SSVM) approach, where labelings over various alternate temporal
segmentations are considered as latent variables. We tested our method on a
challenging dataset comprising 120 activity videos collected from 4 subjects,
and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and
75.0% for high-level activity labeling. We then demonstrate the use of such
descriptive labeling in performing assistive tasks by a PR2 robot.
| [
{
"version": "v1",
"created": "Thu, 4 Oct 2012 04:53:42 GMT"
},
{
"version": "v2",
"created": "Mon, 6 May 2013 01:13:39 GMT"
}
] | 2013-05-07T00:00:00 | [
[
"Koppula",
"Hema Swetha",
""
],
[
"Gupta",
"Rudhir",
""
],
[
"Saxena",
"Ashutosh",
""
]
] | TITLE: Learning Human Activities and Object Affordances from RGB-D Videos
ABSTRACT: Understanding human activities and object affordances are two very important
skills, especially for personal robots which operate in human environments. In
this work, we consider the problem of extracting a descriptive labeling of the
sequence of sub-activities being performed by a human, and more importantly, of
their interactions with the objects in the form of associated affordances.
Given an RGB-D video, we jointly model the human activities and object
affordances as a Markov random field where the nodes represent objects and
sub-activities, and the edges represent the relationships between object
affordances, their relations with sub-activities, and their evolution over
time. We formulate the learning problem using a structural support vector
machine (SSVM) approach, where labelings over various alternate temporal
segmentations are considered as latent variables. We tested our method on a
challenging dataset comprising 120 activity videos collected from 4 subjects,
and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and
75.0% for high-level activity labeling. We then demonstrate the use of such
descriptive labeling in performing assistive tasks by a PR2 robot.
|
1212.2278 | Carl Vondrick | Carl Vondrick and Aditya Khosla and Tomasz Malisiewicz and Antonio
Torralba | Inverting and Visualizing Features for Object Detection | This paper is a preprint of our conference paper. We have made it
available early in the hopes that others find it useful | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce algorithms to visualize feature spaces used by object detectors.
The tools in this paper allow a human to put on `HOG goggles' and perceive the
visual world as a HOG based object detector sees it. We found that these
visualizations allow us to analyze object detection systems in new ways and
gain new insight into the detector's failures. For example, when we visualize
the features for high scoring false alarms, we discovered that, although they
are clearly wrong in image space, they do look deceptively similar to true
positives in feature space. This result suggests that many of these false
alarms are caused by our choice of feature space, and indicates that creating a
better learning algorithm or building bigger datasets is unlikely to correct
these errors. By visualizing feature spaces, we can gain a more intuitive
understanding of our detection systems.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2012 01:59:51 GMT"
},
{
"version": "v2",
"created": "Sun, 5 May 2013 18:17:44 GMT"
}
] | 2013-05-07T00:00:00 | [
[
"Vondrick",
"Carl",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Malisiewicz",
"Tomasz",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Inverting and Visualizing Features for Object Detection
ABSTRACT: We introduce algorithms to visualize feature spaces used by object detectors.
The tools in this paper allow a human to put on `HOG goggles' and perceive the
visual world as a HOG based object detector sees it. We found that these
visualizations allow us to analyze object detection systems in new ways and
gain new insight into the detector's failures. For example, when we visualize
the features for high scoring false alarms, we discovered that, although they
are clearly wrong in image space, they do look deceptively similar to true
positives in feature space. This result suggests that many of these false
alarms are caused by our choice of feature space, and indicates that creating a
better learning algorithm or building bigger datasets is unlikely to correct
these errors. By visualizing feature spaces, we can gain a more intuitive
understanding of our detection systems.
|
1305.1002 | Ji Won Yoon | Ji Won Yoon and Nial Friel | Efficient Estimation of the number of neighbours in Probabilistic K
Nearest Neighbour Classification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic k-nearest neighbour (PKNN) classification has been introduced
to improve the performance of the original k-nearest neighbour (KNN) classification
algorithm by explicitly modelling uncertainty in the classification of each
feature vector. However, an issue common to both KNN and PKNN is to select the
optimal number of neighbours, $k$. The contribution of this paper is to
incorporate the uncertainty in $k$ into the decision making, and in so doing
use Bayesian model averaging to provide improved classification. Indeed the
problem of assessing the uncertainty in $k$ can be viewed as one of statistical
model selection which is one of the most important technical issues in the
statistics and machine learning domain. In this paper, a new functional
approximation algorithm is proposed to reconstruct the density of the model
(order) without relying on time-consuming Monte Carlo simulations. In addition,
this algorithm avoids cross-validation by adopting a Bayesian framework. The
algorithm yielded very good performance on several real
experimental datasets.
| [
{
"version": "v1",
"created": "Sun, 5 May 2013 09:44:08 GMT"
}
] | 2013-05-07T00:00:00 | [
[
"Yoon",
"Ji Won",
""
],
[
"Friel",
"Nial",
""
]
] | TITLE: Efficient Estimation of the number of neighbours in Probabilistic K
Nearest Neighbour Classification
ABSTRACT: Probabilistic k-nearest neighbour (PKNN) classification has been introduced
to improve the performance of the original k-nearest neighbour (KNN) classification
algorithm by explicitly modelling uncertainty in the classification of each
feature vector. However, an issue common to both KNN and PKNN is to select the
optimal number of neighbours, $k$. The contribution of this paper is to
incorporate the uncertainty in $k$ into the decision making, and in so doing
use Bayesian model averaging to provide improved classification. Indeed the
problem of assessing the uncertainty in $k$ can be viewed as one of statistical
model selection which is one of the most important technical issues in the
statistics and machine learning domain. In this paper, a new functional
approximation algorithm is proposed to reconstruct the density of the model
(order) without relying on time-consuming Monte Carlo simulations. In addition,
this algorithm avoids cross-validation by adopting a Bayesian framework. The
algorithm yielded very good performance on several real
experimental datasets.
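A toy sketch of the general idea of accounting for uncertainty in k, averaging KNN class probabilities over several values of k; the paper's Bayesian functional approximation of the posterior over k is not reproduced by the uniform weighting used here.

import numpy as np

def knn_probs(X, y, x_query, k):
    d = np.linalg.norm(X - x_query, axis=1)
    nn = np.argsort(d)[:k]
    return np.bincount(y[nn], minlength=2) / k   # class frequencies among the k NNs

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
x_query = np.array([1.5, 1.5])

ks = [1, 3, 5, 7, 9]                             # candidate neighbourhood sizes
avg = np.mean([knn_probs(X, y, x_query, k) for k in ks], axis=0)
print("class probabilities averaged over k:", np.round(avg, 2))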
|
1305.1040 | Ting-Li Chen | Ting-Li Chen | On the Convergence and Consistency of the Blurring Mean-Shift Process | arXiv admin note: text overlap with arXiv:1201.1979 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mean-shift algorithm is a popular algorithm in computer vision and image
processing. It can also be cast as a minimum gamma-divergence estimation. In
this paper we focus on the "blurring" mean shift algorithm, which is one
version of the mean-shift process that successively blurs the dataset. The
analysis of the blurring mean-shift is relatively more complicated compared to
the nonblurring version, yet the algorithm convergence and the estimation
consistency have not been well studied in the literature. In this paper we
prove both the convergence and the consistency of the blurring mean-shift. We
also perform simulation studies to compare the efficiency of the blurring and
the nonblurring versions of the mean-shift algorithms. Our results show that
the blurring mean-shift is more efficient.
| [
{
"version": "v1",
"created": "Sun, 5 May 2013 18:51:24 GMT"
}
] | 2013-05-07T00:00:00 | [
[
"Chen",
"Ting-Li",
""
]
] | TITLE: On the Convergence and Consistency of the Blurring Mean-Shift Process
ABSTRACT: The mean-shift algorithm is a popular algorithm in computer vision and image
processing. It can also be cast as a minimum gamma-divergence estimation. In
this paper we focus on the "blurring" mean shift algorithm, which is one
version of the mean-shift process that successively blurs the dataset. The
analysis of the blurring mean-shift is relatively more complicated compared to
the nonblurring version, yet the algorithm convergence and the estimation
consistency have not been well studied in the literature. In this paper we
prove both the convergence and the consistency of the blurring mean-shift. We
also perform simulation studies to compare the efficiency of the blurring and
the nonblurring versions of the mean-shift algorithms. Our results show that
the blurring mean-shift is more efficient.
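A minimal blurring mean-shift iteration with a Gaussian kernel (the bandwidth and toy data are assumptions): every point is replaced by its kernel-weighted mean at each step, so the dataset itself is successively blurred, unlike the non-blurring variant which keeps the data fixed.

import numpy as np

def blurring_mean_shift(X, bandwidth=0.5, n_iter=20):
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * bandwidth ** 2))
        X = (W @ X) / W.sum(axis=1, keepdims=True)   # blur the whole dataset
    return X

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
Z = blurring_mean_shift(X)
print("distinct blurred points:", len(np.unique(np.round(Z, 1), axis=0)))  # collapses to ~2 modes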
|
1210.0748 | Yongming Luo | Yongming Luo, George H. L. Fletcher, Jan Hidders, Yuqing Wu and Paul
De Bra | External memory bisimulation reduction of big graphs | 17 pages | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present, to our knowledge, the first known I/O efficient
solutions for computing the k-bisimulation partition of a massive directed
graph, and performing maintenance of such a partition upon updates to the
underlying graph. Ubiquitous in the theory and application of graph data,
bisimulation is a robust notion of node equivalence which intuitively groups
together nodes in a graph which share fundamental structural features.
k-bisimulation is the standard variant of bisimulation where the topological
features of nodes are only considered within a local neighborhood of radius
$k\geqslant 0$.
The I/O cost of our partition construction algorithm is bounded by
$O(k\cdot \mathit{sort}(|E|) + k\cdot \mathit{scan}(|N|) + \mathit{sort}(|N|))$, while our
maintenance algorithms are bounded by $O(k\cdot \mathit{sort}(|E|) + k\cdot
\mathit{sort}(|N|))$. The space complexity bounds are $O(|N|+|E|)$ and
$O(k\cdot|N|+k\cdot|E|)$, resp. Here, $|E|$ and $|N|$ are the number of
disk pages occupied by the input graph's edge set and node set, resp., and
$\mathit{sort}(n)$ and $\mathit{scan}(n)$ are the cost of sorting and scanning,
resp., a file occupying $n$ pages in external memory. Empirical analysis on a
variety of massive real-world and synthetic graph datasets shows that our
algorithms perform efficiently in practice, scaling gracefully as graphs grow
in size.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2012 12:30:15 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Nov 2012 09:26:03 GMT"
},
{
"version": "v3",
"created": "Thu, 2 May 2013 08:23:28 GMT"
}
] | 2013-05-03T00:00:00 | [
[
"Luo",
"Yongming",
""
],
[
"Fletcher",
"George H. L.",
""
],
[
"Hidders",
"Jan",
""
],
[
"Wu",
"Yuqing",
""
],
[
"De Bra",
"Paul",
""
]
] | TITLE: External memory bisimulation reduction of big graphs
ABSTRACT: In this paper, we present, to our knowledge, the first known I/O efficient
solutions for computing the k-bisimulation partition of a massive directed
graph, and performing maintenance of such a partition upon updates to the
underlying graph. Ubiquitous in the theory and application of graph data,
bisimulation is a robust notion of node equivalence which intuitively groups
together nodes in a graph which share fundamental structural features.
k-bisimulation is the standard variant of bisimulation where the topological
features of nodes are only considered within a local neighborhood of radius
$k\geqslant 0$.
The I/O cost of our partition construction algorithm is bounded by
$O(k\cdot \mathit{sort}(|E|) + k\cdot \mathit{scan}(|N|) + \mathit{sort}(|N|))$, while our
maintenance algorithms are bounded by $O(k\cdot \mathit{sort}(|E|) + k\cdot
\mathit{sort}(|N|))$. The space complexity bounds are $O(|N|+|E|)$ and
$O(k\cdot|N|+k\cdot|E|)$, resp. Here, $|E|$ and $|N|$ are the number of
disk pages occupied by the input graph's edge set and node set, resp., and
$\mathit{sort}(n)$ and $\mathit{scan}(n)$ are the cost of sorting and scanning,
resp., a file occupying $n$ pages in external memory. Empirical analysis on a
variety of massive real-world and synthetic graph datasets shows that our
algorithms perform efficiently in practice, scaling gracefully as graphs grow
in size.
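An in-memory sketch of k-bisimulation partitioning by iterated refinement (node labels are ignored and nothing here is I/O-efficient, which is the paper's actual contribution):

def k_bisimulation(nodes, edges, k):
    block = {v: 0 for v in nodes}                  # 0-bisimulation: a single block
    for _ in range(k):
        # A node's signature: its current block plus the set of blocks it points to.
        signature = {v: (block[v], frozenset(block[w] for (u, w) in edges if u == v))
                     for v in nodes}
        ids = {}                                   # relabel signatures with small integers
        block = {v: ids.setdefault(signature[v], len(ids)) for v in nodes}
    return block

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("c", "d"), ("b", "d")]
print(k_bisimulation(nodes, edges, 2))             # a and c split once b and d differ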
|
1305.0423 | Somayeh Danafar | Somayeh Danafar, Paola M.V. Rancoita, Tobias Glasmachers, Kevin
Whittingstall, Juergen Schmidhuber | Testing Hypotheses by Regularized Maximum Mean Discrepancy | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Do two data samples come from different distributions? Recent studies of this
fundamental problem focused on embedding probability distributions into
sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to
compare distributions by the distance between their embeddings. We show that
Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based
hypothesis testing, yields substantial improvements even when sample sizes are
small, and excels at hypothesis tests involving multiple comparisons with power
control. We derive asymptotic distributions under the null and alternative
hypotheses, and assess power control. Outstanding results are obtained on:
challenging EEG data, MNIST, the Berkley Covertype, and the Flare-Solar
dataset.
| [
{
"version": "v1",
"created": "Thu, 2 May 2013 13:03:53 GMT"
}
] | 2013-05-03T00:00:00 | [
[
"Danafar",
"Somayeh",
""
],
[
"Rancoita",
"Paola M. V.",
""
],
[
"Glasmachers",
"Tobias",
""
],
[
"Whittingstall",
"Kevin",
""
],
[
"Schmidhuber",
"Juergen",
""
]
] | TITLE: Testing Hypotheses by Regularized Maximum Mean Discrepancy
ABSTRACT: Do two data samples come from different distributions? Recent studies of this
fundamental problem focused on embedding probability distributions into
sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to
compare distributions by the distance between their embeddings. We show that
Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based
hypothesis testing, yields substantial improvements even when sample sizes are
small, and excels at hypothesis tests involving multiple comparisons with power
control. We derive asymptotic distributions under the null and alternative
hypotheses, and assess power control. Outstanding results are obtained on:
challenging EEG data, MNIST, the Berkley Covertype, and the Flare-Solar
dataset.
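For reference, the plain (unregularized) kernel two-sample statistic that RMMD builds on, as a biased MMD^2 estimate with an RBF kernel; the regularization and the asymptotic null distribution derived in the paper are not reproduced, and the bandwidth and toy data are assumptions.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy in an RBF RKHS.
    return (rbf_kernel(X, X, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (100, 2))
X2 = rng.normal(0.0, 1.0, (100, 2))    # same distribution
Y = rng.normal(0.5, 1.0, (100, 2))     # shifted distribution
print("MMD^2 same distribution:", round(float(mmd2(X, X2)), 4))   # near zero
print("MMD^2 different:", round(float(mmd2(X, Y)), 4))            # clearly larger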
|
1301.3641 | Ryan Kiros | Ryan Kiros | Training Neural Networks with Stochastic Hessian-Free Optimization | 11 pages, ICLR 2013 | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hessian-free (HF) optimization has been successfully used for training deep
autoencoders and recurrent networks. HF uses the conjugate gradient algorithm
to construct update directions through curvature-vector products that can be
computed on the same order of time as gradients. In this paper we exploit this
property and study stochastic HF with gradient and curvature mini-batches
independent of the dataset size. We modify Martens' HF for these settings and
integrate dropout, a method for preventing co-adaptation of feature detectors,
to guard against overfitting. Stochastic Hessian-free optimization gives an
intermediary between SGD and HF that achieves competitive performance on both
classification and deep autoencoder experiments.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 10:10:23 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2013 05:51:37 GMT"
},
{
"version": "v3",
"created": "Wed, 1 May 2013 06:57:50 GMT"
}
] | 2013-05-02T00:00:00 | [
[
"Kiros",
"Ryan",
""
]
] | TITLE: Training Neural Networks with Stochastic Hessian-Free Optimization
ABSTRACT: Hessian-free (HF) optimization has been successfully used for training deep
autoencoders and recurrent networks. HF uses the conjugate gradient algorithm
to construct update directions through curvature-vector products that can be
computed on the same order of time as gradients. In this paper we exploit this
property and study stochastic HF with gradient and curvature mini-batches
independent of the dataset size. We modify Martens' HF for these settings and
integrate dropout, a method for preventing co-adaptation of feature detectors,
to guard against overfitting. Stochastic Hessian-free optimization gives an
intermediary between SGD and HF that achieves competitive performance on both
classification and deep autoencoder experiments.
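A rough sketch of the core step on mini-batches: curvature-vector products (approximated here by finite differences of mini-batch gradients rather than the Gauss-Newton products used in practice) feed a few conjugate-gradient iterations to obtain an update direction. Martens-style damping and the dropout integration from the paper are omitted, and all sizes are made up.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 10))
y = (X @ rng.standard_normal(10) > 0).astype(float)
w = np.zeros(10)

def grad(w, idx):                                   # logistic-loss gradient on a mini-batch
    p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
    return X[idx].T @ (p - y[idx]) / len(idx)

def hess_vec(w, v, idx, eps=1e-4):                  # curvature-vector product via finite differences
    return (grad(w + eps * v, idx) - grad(w, idx)) / eps

def cg(w, b, idx, iters=10):                        # approximately solve H d = b
    d, r = np.zeros_like(b), b.copy()
    p = r.copy()
    for _ in range(iters):
        Hp = hess_vec(w, p, idx)
        alpha = (r @ r) / (p @ Hp + 1e-12)
        d, r_new = d + alpha * p, r - alpha * Hp
        p = r_new + (r_new @ r_new) / (r @ r + 1e-12) * p
        r = r_new
    return d

for step in range(20):
    g_idx = rng.choice(512, 128, replace=False)     # gradient mini-batch
    c_idx = rng.choice(512, 64, replace=False)      # smaller curvature mini-batch
    w += 0.5 * cg(w, -grad(w, g_idx), c_idx)
print("training error:", round(float(np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) != y)), 3))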
|
1304.2272 | Elmar Peise | Elmar Peise (1), Diego Fabregat (1), Yurii Aulchenko (2), Paolo
Bientinesi (1) ((1) AICES, RWTH Aachen, (2) Institute of Cytology and
Genetics, Novosibirsk) | Algorithms for Large-scale Whole Genome Association Analysis | null | null | null | AICES-2013/04-2 | cs.CE cs.MS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms come at the cost of hundreds
of Gigabytes of genotype data, which can only be kept in secondary storage; 2)
the relatedness of the test population is represented by a covariance matrix,
which, for large populations, can only fit in the combined main memory of a
distributed architecture. In this paper, we present solutions for both
challenges: The genotype data is streamed from and to secondary storage using a
double buffering technique, while the covariance matrix is kept across the main
memory of a distributed memory system. We show that these methods sustain
high performance and allow the analysis of enormous datasets.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2013 17:13:39 GMT"
}
] | 2013-05-02T00:00:00 | [
[
"Peise",
"Elmar",
""
],
[
"Fabregat",
"Diego",
""
],
[
"Aulchenko",
"Yurii",
""
],
[
"Bientinesi",
"Paolo",
""
]
] | TITLE: Algorithms for Large-scale Whole Genome Association Analysis
ABSTRACT: In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms come at the cost of hundreds
of Gigabytes of genotype data, which can only be kept in secondary storage; 2)
the relatedness of the test population is represented by a covariance matrix,
which, for large populations, can only fit in the combined main memory of a
distributed architecture. In this paper, we present solutions for both
challenges: The genotype data is streamed from and to secondary storage using a
double buffering technique, while the covariance matrix is kept across the main
memory of a distributed memory system. We show that these methods sustain
high performance and allow the analysis of enormous datasets.
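The double-buffering idea can be sketched with a bounded queue between a reader thread and the compute loop; the file names, block format, and per-block computation below are placeholders, not the paper's pipeline.

# Sketch: overlap disk I/O and computation with two in-flight buffers.
import threading, queue
import numpy as np

def reader(path_list, buffers):
    for path in path_list:
        block = np.load(path)          # hypothetical genotype block stored on disk
        buffers.put(block)             # blocks while both buffer slots are full
    buffers.put(None)                  # sentinel: no more data

def process(block):
    return block.mean()                # stand-in for the per-block association math

def run(path_list):
    buffers = queue.Queue(maxsize=2)   # two slots = double buffering
    t = threading.Thread(target=reader, args=(path_list, buffers))
    t.start()
    results = []
    while (block := buffers.get()) is not None:
        results.append(process(block))
    t.join()
    return results

# Usage (hypothetical files): run(["block0.npy", "block1.npy", "block2.npy"])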
|
1305.0015 | Balaji Lakshminarayanan | Balaji Lakshminarayanan and Yee Whye Teh | Inferring ground truth from multi-annotator ordinal data: a
probabilistic approach | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A popular approach for large scale data annotation tasks is crowdsourcing,
wherein each data point is labeled by multiple noisy annotators. We consider
the problem of inferring ground truth from noisy ordinal labels obtained from
multiple annotators of varying and unknown expertise levels. Annotation models
for ordinal data have been proposed mostly as extensions of their
binary/categorical counterparts and have received little attention in the
crowdsourcing literature. We propose a new model for crowdsourced ordinal data
that accounts for instance difficulty as well as annotator expertise, and
derive a variational Bayesian inference algorithm for parameter estimation. We
analyze the ordinal extensions of several state-of-the-art annotator models for
binary/categorical labels and evaluate the performance of all the models on two
real world datasets containing ordinal query-URL relevance scores, collected
through Amazon's Mechanical Turk. Our results indicate that the proposed model
performs better or as well as existing state-of-the-art methods and is more
resistant to `spammy' annotators (i.e., annotators who assign labels randomly
without actually looking at the instance) than popular baselines such as mean,
median, and majority vote which do not account for annotator expertise.
| [
{
"version": "v1",
"created": "Tue, 30 Apr 2013 20:12:01 GMT"
}
] | 2013-05-02T00:00:00 | [
[
"Lakshminarayanan",
"Balaji",
""
],
[
"Teh",
"Yee Whye",
""
]
] | TITLE: Inferring ground truth from multi-annotator ordinal data: a
probabilistic approach
ABSTRACT: A popular approach for large scale data annotation tasks is crowdsourcing,
wherein each data point is labeled by multiple noisy annotators. We consider
the problem of inferring ground truth from noisy ordinal labels obtained from
multiple annotators of varying and unknown expertise levels. Annotation models
for ordinal data have been proposed mostly as extensions of their
binary/categorical counterparts and have received little attention in the
crowdsourcing literature. We propose a new model for crowdsourced ordinal data
that accounts for instance difficulty as well as annotator expertise, and
derive a variational Bayesian inference algorithm for parameter estimation. We
analyze the ordinal extensions of several state-of-the-art annotator models for
binary/categorical labels and evaluate the performance of all the models on two
real world datasets containing ordinal query-URL relevance scores, collected
through Amazon's Mechanical Turk. Our results indicate that the proposed model
performs better or as well as existing state-of-the-art methods and is more
resistant to `spammy' annotators (i.e., annotators who assign labels randomly
without actually looking at the instance) than popular baselines such as mean,
median, and majority vote which do not account for annotator expertise.
|
1305.0103 | Marthinus Christoffel du Plessis Marthinus Christoffel du Plessi | Marthinus Christoffel du Plessis and Masashi Sugiyama | Clustering Unclustered Data: Unsupervised Binary Labeling of Two
Datasets Having Different Class Balances | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the unsupervised learning problem of assigning labels to
unlabeled data. A naive approach is to use clustering methods, but this works
well only when data is properly clustered and each cluster corresponds to an
underlying class. In this paper, we first show that this unsupervised labeling
problem in balanced binary cases can be solved if two unlabeled datasets having
different class balances are available. More specifically, estimation of the
sign of the difference between probability densities of two unlabeled datasets
gives the solution. We then introduce a new method to directly estimate the
sign of the density difference without density estimation. Finally, we
demonstrate the usefulness of the proposed method against several clustering
methods on various toy problems and real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 1 May 2013 06:32:12 GMT"
}
] | 2013-05-02T00:00:00 | [
[
"Plessis",
"Marthinus Christoffel du",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | TITLE: Clustering Unclustered Data: Unsupervised Binary Labeling of Two
Datasets Having Different Class Balances
ABSTRACT: We consider the unsupervised learning problem of assigning labels to
unlabeled data. A naive approach is to use clustering methods, but this works
well only when data is properly clustered and each cluster corresponds to an
underlying class. In this paper, we first show that this unsupervised labeling
problem in balanced binary cases can be solved if two unlabeled datasets having
different class balances are available. More specifically, estimation of the
sign of the difference between probability densities of two unlabeled datasets
gives the solution. We then introduce a new method to directly estimate the
sign of the density difference without density estimation. Finally, we
demonstrate the usefulness of the proposed method against several clustering
methods on various toy problems and real-world datasets.
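A naive plug-in sketch of the first observation, labeling points by the sign of the difference of two kernel density estimates; the paper's point is to estimate this sign directly, without the density estimation used here, and the toy data and bandwidth are assumptions.

import numpy as np

def kde(points, x, bandwidth=0.4):
    # Gaussian kernel density estimate of `points`, evaluated at `x` (up to a constant).
    d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
pos = lambda n: rng.normal(+1.5, 0.7, (n, 1))
neg = lambda n: rng.normal(-1.5, 0.7, (n, 1))
X1 = np.vstack([pos(70), neg(30)])     # unlabeled dataset 1: 70% positive class
X2 = np.vstack([pos(30), neg(70)])     # unlabeled dataset 2: 30% positive class

x_test = np.vstack([pos(50), neg(50)])
labels = np.sign(kde(X1, x_test) - kde(X2, x_test))   # +1 where dataset 1 is denser
true = np.array([1] * 50 + [-1] * 50)
print("agreement with ground truth:", (labels == true).mean())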
|
1305.0159 | Anthony J Cox | Lilian Janin and Giovanna Rosone and Anthony J. Cox | Adaptive reference-free compression of sequence quality scores | Accepted paper for HiTSeq 2013, to appear in Bioinformatics.
Bioinformatics should be considered the original place of publication of this
work, please cite accordingly | null | null | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation:
Rapid technological progress in DNA sequencing has stimulated interest in
compressing the vast datasets that are now routinely produced. Relatively
little attention has been paid to compressing the quality scores that are
assigned to each sequence, even though these scores may be harder to compress
than the sequences themselves. By aggregating a set of reads into a compressed
index, we find that the majority of bases can be predicted from the sequence of
bases that are adjacent to them and hence are likely to be less informative for
variant calling or other applications. The quality scores for such bases are
aggressively compressed, leaving a relatively small number at full resolution.
Since our approach relies directly on redundancy present in the reads, it does
not need a reference sequence and is therefore applicable to data from
metagenomics and de novo experiments as well as to resequencing data.
Results:
We show that a conservative smoothing strategy affecting 75% of the quality
scores above Q2 leads to an overall quality score compression of 1 bit per
value with a negligible effect on variant calling. A compression of 0.68 bit
per quality value is achieved using a more aggressive smoothing strategy, again
with a very small effect on variant calling.
Availability:
Code to construct the BWT and LCP-array on large genomic data sets is part of
the BEETL library, available as a GitHub repository at
http://[email protected]:BEETL/BEETL.git .
| [
{
"version": "v1",
"created": "Wed, 1 May 2013 12:51:10 GMT"
}
] | 2013-05-02T00:00:00 | [
[
"Janin",
"Lilian",
""
],
[
"Rosone",
"Giovanna",
""
],
[
"Cox",
"Anthony J.",
""
]
] | TITLE: Adaptive reference-free compression of sequence quality scores
ABSTRACT: Motivation:
Rapid technological progress in DNA sequencing has stimulated interest in
compressing the vast datasets that are now routinely produced. Relatively
little attention has been paid to compressing the quality scores that are
assigned to each sequence, even though these scores may be harder to compress
than the sequences themselves. By aggregating a set of reads into a compressed
index, we find that the majority of bases can be predicted from the sequence of
bases that are adjacent to them and hence are likely to be less informative for
variant calling or other applications. The quality scores for such bases are
aggressively compressed, leaving a relatively small number at full resolution.
Since our approach relies directly on redundancy present in the reads, it does
not need a reference sequence and is therefore applicable to data from
metagenomics and de novo experiments as well as to resequencing data.
Results:
We show that a conservative smoothing strategy affecting 75% of the quality
scores above Q2 leads to an overall quality score compression of 1 bit per
value with a negligible effect on variant calling. A compression of 0.68 bit
per quality value is achieved using a more aggressive smoothing strategy, again
with a very small effect on variant calling.
Availability:
Code to construct the BWT and LCP-array on large genomic data sets is part of
the BEETL library, available as a GitHub repository at
http://[email protected]:BEETL/BEETL.git .
|
1305.0160 | Anthony J Cox | Markus J. Bauer and Anthony J. Cox and Giovanna Rosone and Marinella
Sciortino | Lightweight LCP Construction for Next-Generation Sequencing Datasets | Springer LNCS (Lecture Notes in Computer Science) should be
considered as the original place of publication, please cite accordingly. The
final version of this manuscript is available at
http://link.springer.com/chapter/10.1007/978-3-642-33122-0_26 | Lecture Notes in Computer Science Volume 7534, 2012, pp 326-337 | 10.1007/978-3-642-33122-0_26 | null | cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of "next-generation" DNA sequencing (NGS) technologies has meant
that collections of hundreds of millions of DNA sequences are now commonplace
in bioinformatics. Knowing the longest common prefix array (LCP) of such a
collection would facilitate the rapid computation of maximal exact matches,
shortest unique substrings and shortest absent words. CPU-efficient algorithms
for computing the LCP of a string have been described in the literature, but
require the presence in RAM of large data structures. This prevents such
methods from being feasible for NGS datasets.
In this paper we propose the first lightweight method that simultaneously
computes, via sequential scans, the LCP and BWT of very large collections of
sequences. Computational results on collections as large as 800 million
100-mers demonstrate that our algorithm scales to the vast sequence collections
encountered in human whole genome sequencing experiments.
| [
{
"version": "v1",
"created": "Wed, 1 May 2013 12:51:45 GMT"
}
] | 2013-05-02T00:00:00 | [
[
"Bauer",
"Markus J.",
""
],
[
"Cox",
"Anthony J.",
""
],
[
"Rosone",
"Giovanna",
""
],
[
"Sciortino",
"Marinella",
""
]
] | TITLE: Lightweight LCP Construction for Next-Generation Sequencing Datasets
ABSTRACT: The advent of "next-generation" DNA sequencing (NGS) technologies has meant
that collections of hundreds of millions of DNA sequences are now commonplace
in bioinformatics. Knowing the longest common prefix array (LCP) of such a
collection would facilitate the rapid computation of maximal exact matches,
shortest unique substrings and shortest absent words. CPU-efficient algorithms
for computing the LCP of a string have been described in the literature, but
require the presence in RAM of large data structures. This prevents such
methods from being feasible for NGS datasets.
In this paper we propose the first lightweight method that simultaneously
computes, via sequential scans, the LCP and BWT of very large collections of
sequences. Computational results on collections as large as 800 million
100-mers demonstrate that our algorithm scales to the vast sequence collections
encountered in human whole genome sequencing experiments.
|
1211.2863 | Alon Schclar | Alon Schclar | Multi-Sensor Fusion via Reduction of Dimensionality | PhD Thesis, Tel Aviv Univ, 2008 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large high-dimensional datasets are becoming more and more popular in an
increasing number of research areas. Processing the high dimensional data
incurs a high computational cost and is inherently inefficient since many of
the values that describe a data object are redundant due to noise and inner
correlations. Consequently, the dimensionality, i.e. the number of values that
are used to describe a data object, needs to be reduced prior to any other
processing of the data. The dimensionality reduction removes, in most cases,
noise from the data and reduces substantially the computational cost of
algorithms that are applied to the data.
In this thesis, a novel coherent integrated methodology is introduced
(theory, algorithm and applications) to reduce the dimensionality of
high-dimensional datasets. The method constructs a diffusion process among the
data coordinates via a random walk. The dimensionality reduction is obtained
based on the eigen-decomposition of the Markov matrix that is associated with
the random walk. The proposed method is utilized for: (a) segmentation and
detection of anomalies in hyper-spectral images; (b) segmentation of
multi-contrast MRI images; and (c) segmentation of video sequences.
We also present algorithms for: (a) the characterization of materials using
their spectral signatures to enable their identification; (b) detection of
vehicles according to their acoustic signatures; and (c) classification of
vascular vessels recordings to detect hyper-tension and cardio-vascular
diseases.
The proposed methodology and algorithms produce excellent results that
successfully compete with current state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Tue, 13 Nov 2012 01:05:42 GMT"
}
] | 2013-05-01T00:00:00 | [
[
"Schclar",
"Alon",
""
]
] | TITLE: Multi-Sensor Fusion via Reduction of Dimensionality
ABSTRACT: Large high-dimensional datasets are becoming more and more popular in an
increasing number of research areas. Processing the high dimensional data
incurs a high computational cost and is inherently inefficient since many of
the values that describe a data object are redundant due to noise and inner
correlations. Consequently, the dimensionality, i.e. the number of values that
are used to describe a data object, needs to be reduced prior to any other
processing of the data. The dimensionality reduction removes, in most cases,
noise from the data and reduces substantially the computational cost of
algorithms that are applied to the data.
In this thesis, a novel coherent integrated methodology is introduced
(theory, algorithm and applications) to reduce the dimensionality of
high-dimensional datasets. The method constructs a diffusion process among the
data coordinates via a random walk. The dimensionality reduction is obtained
based on the eigen-decomposition of the Markov matrix that is associated with
the random walk. The proposed method is utilized for: (a) segmentation and
detection of anomalies in hyper-spectral images; (b) segmentation of
multi-contrast MRI images; and (c) segmentation of video sequences.
We also present algorithms for: (a) the characterization of materials using
their spectral signatures to enable their identification; (b) detection of
vehicles according to their acoustic signatures; and (c) classification of
vascular vessels recordings to detect hyper-tension and cardio-vascular
diseases.
The proposed methodology and algorithms produce excellent results that
successfully compete with current state-of-the-art algorithms.
|
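The construction described in the abstract above (a diffusion process defined by a random walk, with the embedding read off the eigen-decomposition of the associated Markov matrix) follows the general diffusion-maps recipe. The sketch below is a generic version of that recipe under an assumed Gaussian affinity; it is not the thesis' exact algorithm, which builds the walk among the data coordinates.

```python
import numpy as np

def diffusion_embedding(X, epsilon=1.0, n_components=2):
    """Embed rows of X via the eigen-decomposition of a random-walk matrix.

    X : (n_samples, n_features) data matrix.
    Returns an (n_samples, n_components) low-dimensional representation.
    """
    # Pairwise squared distances and Gaussian affinities.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / epsilon)
    # Row-normalize to obtain the Markov (random-walk) matrix.
    P = W / W.sum(axis=1, keepdims=True)
    # Eigen-decomposition; the leading eigenvector is trivial (constant),
    # so the embedding uses the next n_components eigenvectors,
    # scaled by their eigenvalues (diffusion coordinates at t = 1).
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_components + 1]
    return vecs[:, idx].real * vals[idx].real

X = np.random.default_rng(1).normal(size=(50, 10))
print(diffusion_embedding(X).shape)  # -> (50, 2)
```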
0905.4614 | Alexander Artikis | A. Artikis, M. Sergot and G. Paliouras | A Logic Programming Approach to Activity Recognition | The original publication is available in the Proceedings of the 2nd
ACM international workshop on Events in multimedia, 2010 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have been developing a system for recognising human activity given a
symbolic representation of video content. The input of our system is a set of
time-stamped short-term activities detected on video frames. The output of our
system is a set of recognised long-term activities, which are pre-defined
temporal combinations of short-term activities. The constraints on the
short-term activities that, if satisfied, lead to the recognition of a
long-term activity, are expressed using a dialect of the Event Calculus. We
illustrate the expressiveness of the dialect by showing the representation of
several typical complex activities. Furthermore, we present a detailed
evaluation of the system through experimentation on a benchmark dataset of
surveillance videos.
| [
{
"version": "v1",
"created": "Thu, 28 May 2009 11:44:04 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Apr 2013 17:06:25 GMT"
}
] | 2013-04-30T00:00:00 | [
[
"Artikis",
"A.",
""
],
[
"Sergot",
"M.",
""
],
[
"Paliouras",
"G.",
""
]
] | TITLE: A Logic Programming Approach to Activity Recognition
ABSTRACT: We have been developing a system for recognising human activity given a
symbolic representation of video content. The input of our system is a set of
time-stamped short-term activities detected on video frames. The output of our
system is a set of recognised long-term activities, which are pre-defined
temporal combinations of short-term activities. The constraints on the
short-term activities that, if satisfied, lead to the recognition of a
long-term activity, are expressed using a dialect of the Event Calculus. We
illustrate the expressiveness of the dialect by showing the representation of
several typical complex activities. Furthermore, we present a detailed
evaluation of the system through experimentation on a benchmark dataset of
surveillance videos.
|
1304.7632 | Rastislav \v{S}r\'amek | Barbara Geissmann and Rastislav \v{S}r\'amek | Counting small cuts in a graph | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the minimum cut problem in the presence of uncertainty and show how
to apply a novel robust optimization approach, which aims to exploit the
similarity in subsequent graph measurements or similar graph instances, without
posing any assumptions on the way they have been obtained. With experiments we
show that the approach works well when compared to other approaches that are
also oblivious towards the relationship between the input datasets.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2013 12:08:32 GMT"
}
] | 2013-04-30T00:00:00 | [
[
"Geissmann",
"Barbara",
""
],
[
"Šrámek",
"Rastislav",
""
]
] | TITLE: Counting small cuts in a graph
ABSTRACT: We study the minimum cut problem in the presence of uncertainty and show how
to apply a novel robust optimization approach, which aims to exploit the
similarity in subsequent graph measurements or similar graph instances, without
posing any assumptions on the way they have been obtained. With experiments we
show that the approach works well when compared to other approaches that are
also oblivious towards the relationship between the input datasets.
|
1304.6933 | Manuel Keglevic | Manuel Keglevic and Robert Sablatnig | Digit Recognition in Handwritten Weather Records | Part of the OAGM/AAPR 2013 proceedings (arXiv:1304.1876), 8 pages | null | null | OAGM-AAPR/2013/07 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the automatic recognition of handwritten temperature
values in weather records. The localization of table cells is based on line
detection using projection profiles. Further, a stroke-preserving line removal
method which is based on gradient images is proposed. The presented digit
recognition utilizes features which are extracted using a set of filters and a
Support Vector Machine classifier. It was evaluated on the MNIST and the USPS
dataset and our own database with about 17,000 RGB digit images. An accuracy of
99.36% per digit is achieved for the entire system using a set of 84 weather
records.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2013 15:14:42 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Apr 2013 08:35:18 GMT"
}
] | 2013-04-29T00:00:00 | [
[
"Keglevic",
"Manuel",
""
],
[
"Sablatnig",
"Robert",
""
]
] | TITLE: Digit Recognition in Handwritten Weather Records
ABSTRACT: This paper addresses the automatic recognition of handwritten temperature
values in weather records. The localization of table cells is based on line
detection using projection profiles. Further, a stroke-preserving line removal
method which is based on gradient images is proposed. The presented digit
recognition utilizes features which are extracted using a set of filters and a
Support Vector Machine classifier. It was evaluated on the MNIST and the USPS
dataset and our own database with about 17,000 RGB digit images. An accuracy of
99.36% per digit is achieved for the entire system using a set of 84 weather
records.
|
1304.7140 | Michael Helmberger Michael Helmberger | M. Helmberger, M. Urschler, M. Pienn, Z.Balint, A. Olschewski and H.
Bischof | Pulmonary Vascular Tree Segmentation from Contrast-Enhanced CT Images | Part of the OAGM/AAPR 2013 proceedings (1304.1876) | null | null | OAGM-AAPR/2013/09 | cs.CV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a pulmonary vessel segmentation algorithm, which is fast, fully
automatic and robust. It uses a coarse segmentation of the airway tree and a
left and right lung labeled volume to restrict a vessel enhancement filter,
based on an offset medialness function, to the lungs. We show the application
of our algorithm on contrast-enhanced CT images, where we derive a clinical
parameter to detect pulmonary hypertension (PH) in patients. Results on a
dataset of 24 patients show that quantitative indices derived from the
segmentation are applicable to distinguish patients with and without PH.
Further work-in-progress results are shown on the VESSEL12 challenge dataset,
which is composed of non-contrast-enhanced scans, where we rank in the
midfield of participating contestants.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 12:30:36 GMT"
}
] | 2013-04-29T00:00:00 | [
[
"Helmberger",
"M.",
""
],
[
"Urschler",
"M.",
""
],
[
"Pienn",
"M.",
""
],
[
"Balint",
"Z.",
""
],
[
"Olschewski",
"A.",
""
],
[
"Bischof",
"H.",
""
]
] | TITLE: Pulmonary Vascular Tree Segmentation from Contrast-Enhanced CT Images
ABSTRACT: We present a pulmonary vessel segmentation algorithm, which is fast, fully
automatic and robust. It uses a coarse segmentation of the airway tree and a
left and right lung labeled volume to restrict a vessel enhancement filter,
based on an offset medialness function, to the lungs. We show the application
of our algorithm on contrast-enhanced CT images, where we derive a clinical
parameter to detect pulmonary hypertension (PH) in patients. Results on a
dataset of 24 patients show that quantitative indices derived from the
segmentation are applicable to distinguish patients with and without PH.
Further work-in-progress results are shown on the VESSEL12 challenge dataset,
which is composed of non-contrast-enhanced scans, where we rank in the
midfield of participating contestants.
|
1304.7236 | Alessandro Perina | Alessandro Perina, Nebojsa Jojic | In the sight of my wearable camera: Classifying my visual experience | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and we analyze a new dataset which resembles the input to
biological vision systems much more than most previously published ones. Our
analysis led to several important conclusions. First, it is possible to
disambiguate among dozens of visual scenes (locations) encountered over the
course of several weeks of a human life with an accuracy of over 80%, which
opens up the possibility of numerous novel vision applications, from early
detection of dementia to everyday use of wearable camera streams for automatic
reminders and visual stream exchange. Second, our experimental results
indicate that generative models such as Latent Dirichlet Allocation or
Counting Grids are more suitable for this type of data, as they are more
robust to overtraining and cope better with images that are low-resolution,
blurred and characterized by relatively random clutter and a mix of objects.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 17:28:13 GMT"
}
] | 2013-04-29T00:00:00 | [
[
"Perina",
"Alessandro",
""
],
[
"Jojic",
"Nebojsa",
""
]
] | TITLE: In the sight of my wearable camera: Classifying my visual experience
ABSTRACT: We introduce and we analyze a new dataset which resembles the input to
biological vision systems much more than most previously published ones. Our
analysis led to several important conclusions. First, it is possible to
disambiguate among dozens of visual scenes (locations) encountered over the
course of several weeks of a human life with an accuracy of over 80%, which
opens up the possibility of numerous novel vision applications, from early
detection of dementia to everyday use of wearable camera streams for automatic
reminders and visual stream exchange. Second, our experimental results
indicate that generative models such as Latent Dirichlet Allocation or
Counting Grids are more suitable for this type of data, as they are more
robust to overtraining and cope better with images that are low-resolution,
blurred and characterized by relatively random clutter and a mix of objects.
|
1304.6480 | Liwei Wang | Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Tie-Yan Liu, Wei Chen | A Theoretical Analysis of NDCG Type Ranking Measures | COLT 2013 | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central problem in ranking is to design a ranking measure for evaluation of
ranking functions. In this paper we study, from a theoretical perspective, the
widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures.
Although there are extensive empirical studies of NDCG, little is known about
its theoretical properties. We first show that, whatever the ranking function
is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as
the number of items to rank goes to infinity. At first sight, this result
is very surprising. It seems to imply that NDCG cannot differentiate good and
bad ranking functions, contradicting the empirical success of NDCG in many
applications. In order to have a deeper understanding of ranking measures in
general, we propose a notion referred to as consistent distinguishability. This
notion captures the intuition that a ranking measure should have such a
property: For every pair of substantially different ranking functions, the
ranking measure can decide which one is better in a consistent manner on almost
all datasets. We show that NDCG with logarithmic discount has consistent
distinguishability although it converges to the same limit for all ranking
functions. We next characterize the set of all feasible discount functions for
NDCG according to the concept of consistent distinguishability. Specifically we
show that whether NDCG has consistent distinguishability depends on how fast
the discount decays, and 1/r is a critical point. We then turn to the cut-off
version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for
various choices of k and the discount functions. Experimental results on real
Web search datasets agree well with the theory.
| [
{
"version": "v1",
"created": "Wed, 24 Apr 2013 04:08:23 GMT"
}
] | 2013-04-25T00:00:00 | [
[
"Wang",
"Yining",
""
],
[
"Wang",
"Liwei",
""
],
[
"Li",
"Yuanzhi",
""
],
[
"He",
"Di",
""
],
[
"Liu",
"Tie-Yan",
""
],
[
"Chen",
"Wei",
""
]
] | TITLE: A Theoretical Analysis of NDCG Type Ranking Measures
ABSTRACT: A central problem in ranking is to design a ranking measure for evaluation of
ranking functions. In this paper we study, from a theoretical perspective, the
widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures.
Although there are extensive empirical studies of NDCG, little is known about
its theoretical properties. We first show that, whatever the ranking function
is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as
the number of items to rank goes to infinity. At first sight, this result
is very surprising. It seems to imply that NDCG cannot differentiate good and
bad ranking functions, contradicting the empirical success of NDCG in many
applications. In order to have a deeper understanding of ranking measures in
general, we propose a notion referred to as consistent distinguishability. This
notion captures the intuition that a ranking measure should have such a
property: For every pair of substantially different ranking functions, the
ranking measure can decide which one is better in a consistent manner on almost
all datasets. We show that NDCG with logarithmic discount has consistent
distinguishability although it converges to the same limit for all ranking
functions. We next characterize the set of all feasible discount functions for
NDCG according to the concept of consistent distinguishability. Specifically we
show that whether NDCG has consistent distinguishability depends on how fast
the discount decays, and 1/r is a critical point. We then turn to the cut-off
version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for
various choices of k and the discount functions. Experimental results on real
Web search datasets agree well with the theory.
|
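As a concrete reference for the measure analysed in the abstract above, here is a minimal sketch of NDCG@k with the standard logarithmic discount. The exponential gain and 1/log2(position + 1) discount below are the usual textbook choices and are assumptions insofar as the paper studies whole families of discount functions.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain with the standard logarithmic discount."""
    return sum((2 ** rel - 1) / math.log2(i + 2)   # positions are 1-based
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance grades of documents in the order a ranking function returned them.
print(round(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5), 4))
```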
1304.5894 | Bruno Cornelis | Bruno Cornelis, Yun Yang, Joshua T. Vogelstein, Ann Dooms, Ingrid
Daubechies, David Dunson | Bayesian crack detection in ultra high resolution multimodal images of
paintings | 8 pages, double column | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The preservation of our cultural heritage is of paramount importance. Thanks
to recent developments in digital acquisition techniques, powerful image
analysis algorithms are developed which can be useful non-invasive tools to
assist in the restoration and preservation of art. In this paper we propose a
semi-supervised crack detection method that can be used for high-dimensional
acquisitions of paintings coming from different modalities. Our dataset
consists of a recently acquired collection of images of the Ghent Altarpiece
(1432), one of Northern Europe's most important art masterpieces. Our goal is
to build a classifier that is able to discern crack pixels from the background
consisting of non-crack pixels, making optimal use of the information that is
provided by each modality. To accomplish this we employ a recently developed
non-parametric Bayesian classifier, that uses tensor factorizations to
characterize any conditional probability. A prior is placed on the parameters
of the factorization such that every possible interaction between predictors is
allowed while still identifying a sparse subset among these predictors. The
proposed Bayesian classifier, which we will refer to as conditional Bayesian
tensor factorization or CBTF, is assessed by visually comparing classification
results with the Random Forest (RF) algorithm.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2013 09:46:47 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Apr 2013 09:00:01 GMT"
}
] | 2013-04-24T00:00:00 | [
[
"Cornelis",
"Bruno",
""
],
[
"Yang",
"Yun",
""
],
[
"Vogelstein",
"Joshua T.",
""
],
[
"Dooms",
"Ann",
""
],
[
"Daubechies",
"Ingrid",
""
],
[
"Dunson",
"David",
""
]
] | TITLE: Bayesian crack detection in ultra high resolution multimodal images of
paintings
ABSTRACT: The preservation of our cultural heritage is of paramount importance. Thanks
to recent developments in digital acquisition techniques, powerful image
analysis algorithms are developed which can be useful non-invasive tools to
assist in the restoration and preservation of art. In this paper we propose a
semi-supervised crack detection method that can be used for high-dimensional
acquisitions of paintings coming from different modalities. Our dataset
consists of a recently acquired collection of images of the Ghent Altarpiece
(1432), one of Northern Europe's most important art masterpieces. Our goal is
to build a classifier that is able to discern crack pixels from the background
consisting of non-crack pixels, making optimal use of the information that is
provided by each modality. To accomplish this we employ a recently developed
non-parametric Bayesian classifier that uses tensor factorizations to
characterize any conditional probability. A prior is placed on the parameters
of the factorization such that every possible interaction between predictors is
allowed while still identifying a sparse subset among these predictors. The
proposed Bayesian classifier, which we will refer to as conditional Bayesian
tensor factorization or CBTF, is assessed by visually comparing classification
results with the Random Forest (RF) algorithm.
|
1304.6291 | Fang Wang | Fang Wang and Yi Li | Learning Visual Symbols for Parsing Human Poses in Images | IJCAI 2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parsing human poses in images is fundamental in extracting critical visual
information for artificial intelligent agents. Our goal is to learn
self-contained body part representations from images, which we call visual
symbols, and their symbol-wise geometric contexts in this parsing process. Each
symbol is individually learned by categorizing visual features leveraged by
geometric information. In the categorization, we use Latent Support Vector
Machine followed by an efficient cross validation procedure to learn visual
symbols. Then, these symbols naturally define geometric contexts of body parts
in a fine granularity. When the structure of the compositional parts is a tree,
we derive an efficient approach to estimating human poses in images.
Experiments on two large datasets suggest our approach outperforms state of the
art methods.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2013 14:07:19 GMT"
}
] | 2013-04-24T00:00:00 | [
[
"Wang",
"Fang",
""
],
[
"Li",
"Yi",
""
]
] | TITLE: Learning Visual Symbols for Parsing Human Poses in Images
ABSTRACT: Parsing human poses in images is fundamental in extracting critical visual
information for artificial intelligent agents. Our goal is to learn
self-contained body part representations from images, which we call visual
symbols, and their symbol-wise geometric contexts in this parsing process. Each
symbol is individually learned by categorizing visual features leveraged by
geometric information. In the categorization, we use Latent Support Vector
Machine followed by an efficient cross validation procedure to learn visual
symbols. Then, these symbols naturally define geometric contexts of body parts
in a fine granularity. When the structure of the compositional parts is a tree,
we derive an efficient approach to estimating human poses in images.
Experiments on two large datasets suggest our approach outperforms state of the
art methods.
|
1212.5238 | Andrea Baronchelli | Delia Mocanu, Andrea Baronchelli, Bruno Gon\c{c}alves, Nicola Perra,
Alessandro Vespignani | The Twitter of Babel: Mapping World Languages through Microblogging
Platforms | null | PLoS One 8, E61981 (2013) | 10.1371/journal.pone.0061981 | null | physics.soc-ph cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large scale analysis and statistics of socio-technical systems that just a
few short years ago would have required the use of consistent economic and
human resources can nowadays be conveniently performed by mining the enormous
amount of digital data produced by human activities. Although a
characterization of several aspects of our societies is emerging from the data
revolution, a number of questions concerning the reliability and the biases
inherent to the big data "proxies" of social life are still open. Here, we
survey worldwide linguistic indicators and trends through the analysis of a
large-scale dataset of microblogging posts. We show that available data allow
for the study of language geography at scales ranging from country-level
aggregation to specific city neighborhoods. The high resolution and coverage of
the data allows us to investigate different indicators such as the linguistic
homogeneity of different countries, the touristic seasonal patterns within
countries and the geographical distribution of different languages in
multilingual regions. This work highlights the potential of geolocalized
studies of open data sources to improve current analysis and develop indicators
for major social phenomena in specific communities.
| [
{
"version": "v1",
"created": "Thu, 20 Dec 2012 20:43:12 GMT"
}
] | 2013-04-23T00:00:00 | [
[
"Mocanu",
"Delia",
""
],
[
"Baronchelli",
"Andrea",
""
],
[
"Gonçalves",
"Bruno",
""
],
[
"Perra",
"Nicola",
""
],
[
"Vespignani",
"Alessandro",
""
]
] | TITLE: The Twitter of Babel: Mapping World Languages through Microblogging
Platforms
ABSTRACT: Large scale analysis and statistics of socio-technical systems that just a
few short years ago would have required the use of consistent economic and
human resources can nowadays be conveniently performed by mining the enormous
amount of digital data produced by human activities. Although a
characterization of several aspects of our societies is emerging from the data
revolution, a number of questions concerning the reliability and the biases
inherent to the big data "proxies" of social life are still open. Here, we
survey worldwide linguistic indicators and trends through the analysis of a
large-scale dataset of microblogging posts. We show that available data allow
for the study of language geography at scales ranging from country-level
aggregation to specific city neighborhoods. The high resolution and coverage of
the data allows us to investigate different indicators such as the linguistic
homogeneity of different countries, the touristic seasonal patterns within
countries and the geographical distribution of different languages in
multilingual regions. This work highlights the potential of geolocalized
studies of open data sources to improve current analysis and develop indicators
for major social phenomena in specific communities.
|
1301.5177 | Andrea Scharnhorst | Linda Reijnhoudt, Rodrigo Costas, Ed Noyons, Katy Boerner, Andrea
Scharnhorst | "Seed+Expand": A validated methodology for creating high quality
publication oeuvres of individual researchers | Paper accepted for the ISSI 2013, small changes in the text due to
referee comments, one figure added (Fig 3) | null | null | null | cs.DL cs.IR | http://creativecommons.org/licenses/by/3.0/ | The study of science at the individual micro-level frequently requires the
disambiguation of author names. The creation of author's publication oeuvres
involves matching the list of unique author names to names used in publication
databases. Despite recent progress in the development of unique author
identifiers, e.g., ORCID, VIVO, or DAI, author disambiguation remains a key
problem when it comes to large-scale bibliometric analysis using data from
multiple databases. This study introduces and validates a new methodology
called seed+expand for semi-automatic bibliographic data collection for a given
set of individual authors. Specifically, we identify the oeuvre of a set of
Dutch full professors during the period 1980-2011. In particular, we combine
author records from the National Research Information System (NARCIS) with
publication records from the Web of Science. Starting with an initial list of
8,378 names, we identify "seed publications" for each author using five
different approaches. Subsequently, we "expand" the set of publications using
three different approaches. The approaches are compared and the resulting
oeuvres are evaluated on precision and recall using a "gold standard" dataset
of authors for which verified publications in the period 2001-2010 are
available.
| [
{
"version": "v1",
"created": "Tue, 22 Jan 2013 13:16:15 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Apr 2013 11:01:55 GMT"
}
] | 2013-04-23T00:00:00 | [
[
"Reijnhoudt",
"Linda",
""
],
[
"Costas",
"Rodrigo",
""
],
[
"Noyons",
"Ed",
""
],
[
"Boerner",
"Katy",
""
],
[
"Scharnhorst",
"Andrea",
""
]
] | TITLE: "Seed+Expand": A validated methodology for creating high quality
publication oeuvres of individual researchers
ABSTRACT: The study of science at the individual micro-level frequently requires the
disambiguation of author names. The creation of author's publication oeuvres
involves matching the list of unique author names to names used in publication
databases. Despite recent progress in the development of unique author
identifiers, e.g., ORCID, VIVO, or DAI, author disambiguation remains a key
problem when it comes to large-scale bibliometric analysis using data from
multiple databases. This study introduces and validates a new methodology
called seed+expand for semi-automatic bibliographic data collection for a given
set of individual authors. Specifically, we identify the oeuvre of a set of
Dutch full professors during the period 1980-2011. In particular, we combine
author records from the National Research Information System (NARCIS) with
publication records from the Web of Science. Starting with an initial list of
8,378 names, we identify "seed publications" for each author using five
different approaches. Subsequently, we "expand" the set of publications using
three different approaches. The approaches are compared and the resulting
oeuvres are evaluated on precision and recall using a "gold standard" dataset
of authors for which verified publications in the period 2001-2010 are
available.
|
1304.5755 | Puneet Kishor | Puneet Kishor, Oshani Seneviratne, and Noah Giansiracusa | Policy Aware Geospatial Data | 5 pages. Accepted for ACMGIS 2009, but withdrawn because ACM would
not include this paper unless I presented in person (prior commitments
prevented me from travel even though I had registered) | null | null | null | cs.OH | http://creativecommons.org/licenses/publicdomain/ | Digital Rights Management (DRM) prevents end-users from using content in a
manner inconsistent with its creator's wishes. The license describing these
use-conditions typically accompanies the content as its metadata. A resulting
problem is that the license and the content can get separated and lose track of
each other. The best metadata have two distinct qualities--they are created
automatically without user intervention, and they are embedded within the data
that they describe. If licenses are also created and transported this way, data
will always have licenses, and the licenses will be readily examinable. When
two or more datasets are combined, a new dataset, and with it a new license,
are created. This new license is a function of the licenses of the component
datasets and any additional conditions that the person combining the datasets
might want to impose. Following the notion of a data-purpose algebra, we model
this phenomenon by interpreting the transfer and conjunction of data as
inducing an algebraic operation on the corresponding licenses. When a dataset
passes from one source to the next its license is transformed in a
deterministic way, and similarly when datasets are combined the associated
licenses are combined in a non-trivial algebraic manner. Modern,
computer-savvy licensing regimes such as Creative Commons allow writing the
license in a special kind of language called Creative Commons Rights Expression
Language (ccREL). ccREL allows creating and embedding the license using RDFa
utilizing XHTML. This is preferred over DRM which includes the rights in a
binary file completely opaque to nearly all users. The colocation of metadata
with human-visible XHTML makes the license more transparent. In this paper we
describe a methodology for creating and embedding licenses in geographic data
utilizing ccREL, and programmatically examining embedded licenses in component
data...
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2013 15:50:46 GMT"
}
] | 2013-04-23T00:00:00 | [
[
"Kishor",
"Puneet",
""
],
[
"Seneviratne",
"Oshani",
""
],
[
"Giansiracusa",
"Noah",
""
]
] | TITLE: Policy Aware Geospatial Data
ABSTRACT: Digital Rights Management (DRM) prevents end-users from using content in a
manner inconsistent with its creator's wishes. The license describing these
use-conditions typically accompanies the content as its metadata. A resulting
problem is that the license and the content can get separated and lose track of
each other. The best metadata have two distinct qualities--they are created
automatically without user intervention, and they are embedded within the data
that they describe. If licenses are also created and transported this way, data
will always have licenses, and the licenses will be readily examinable. When
two or more datasets are combined, a new dataset, and with it a new license,
are created. This new license is a function of the licenses of the component
datasets and any additional conditions that the person combining the datasets
might want to impose. Following the notion of a data-purpose algebra, we model
this phenomenon by interpreting the transfer and conjunction of data as
inducing an algebraic operation on the corresponding licenses. When a dataset
passes from one source to the next its license is transformed in a
deterministic way, and similarly when datasets are combined the associated
licenses are combined in a non-trivial algebraic manner. Modern,
computer-savvy licensing regimes such as Creative Commons allow writing the
license in a special kind of language called Creative Commons Rights Expression
Language (ccREL). ccREL allows creating and embedding the license using RDFa
utilizing XHTML. This is preferred over DRM which includes the rights in a
binary file completely opaque to nearly all users. The colocation of metadata
with human-visible XHTML makes the license more transparent. In this paper we
describe a methodology for creating and embedding licenses in geographic data
utilizing ccREL, and programmatically examining embedded licenses in component
data...
|
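One very simple way to make the "algebraic operation on licenses" idea from the abstract above concrete is to treat a license as a set of condition flags and combination as a join that keeps the most restrictive value of each flag. The flag names and the most-restrictive-wins rule below are illustrative assumptions, not the paper's formal data-purpose algebra (which, for instance, must also handle incompatible ShareAlike terms).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class License:
    attribution: bool      # BY: credit must be given
    share_alike: bool      # SA: derivatives must carry the same license
    non_commercial: bool   # NC: commercial use is not permitted

def combine(a: License, b: License) -> License:
    """Join two licenses: the combined dataset inherits every restriction
    imposed by either component dataset (most-restrictive-wins rule)."""
    return License(
        attribution=a.attribution or b.attribution,
        share_alike=a.share_alike or b.share_alike,
        non_commercial=a.non_commercial or b.non_commercial,
    )

cc_by = License(attribution=True, share_alike=False, non_commercial=False)
cc_by_nc = License(attribution=True, share_alike=False, non_commercial=True)
print(combine(cc_by, cc_by_nc))   # the merged dataset is at least BY-NC
```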
1304.4371 | Joel Lang | Joel Lang and James Henderson | Efficient Computation of Mean Truncated Hitting Times on Very Large
Graphs | null | null | null | null | cs.DS cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous work has shown the effectiveness of random walk hitting times as a
measure of dissimilarity in a variety of graph-based learning problems such as
collaborative filtering, query suggestion or finding paraphrases. However,
application of hitting times has been limited to small datasets because of
computational restrictions. This paper develops a new approximation algorithm
with which hitting times can be computed on very large, disk-resident graphs,
making their application possible to problems which were previously out of
reach. This will potentially benefit a range of large-scale problems.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2013 09:11:16 GMT"
}
] | 2013-04-17T00:00:00 | [
[
"Lang",
"Joel",
""
],
[
"Henderson",
"James",
""
]
] | TITLE: Efficient Computation of Mean Truncated Hitting Times on Very Large
Graphs
ABSTRACT: Previous work has shown the effectiveness of random walk hitting times as a
measure of dissimilarity in a variety of graph-based learning problems such as
collaborative filtering, query suggestion or finding paraphrases. However,
application of hitting times has been limited to small datasets because of
computational restrictions. This paper develops a new approximation algorithm
with which hitting times can be computed on very large, disk-resident graphs,
making their application possible to problems which were previously out of
reach. This will potentially benefit a range of large-scale problems.
|
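The quantity in the abstract above, the truncated hitting time, has a simple dynamic-programming definition on small in-memory graphs. The recursion below is the standard one and is shown only as a reference point under an assumed row-stochastic transition matrix; the paper's contribution is an approximation that scales to very large, disk-resident graphs.

```python
import numpy as np

def truncated_hitting_times(P, target, T):
    """Expected hitting time to `target`, truncated at horizon T.

    P      : (n, n) row-stochastic transition matrix of the random walk.
    target : index of the destination node.
    T      : truncation horizon.
    Recursion: h_0 = 0 everywhere; for i != target,
    h_t(i) = 1 + sum_k P[i, k] * h_{t-1}(k), with h_t(target) = 0.
    """
    h = np.zeros(P.shape[0])
    for _ in range(T):
        h = 1.0 + P @ h
        h[target] = 0.0
    return h

# Tiny 3-node chain 0 <-> 1 <-> 2 with a simple random walk.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(truncated_hitting_times(P, target=2, T=10))
```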
1304.3745 | Khadoudja Ghanem | Khadoudja Ghanem | Towards more accurate clustering method by using dynamic time warping | 12 pages, 1 figure, 2 tables, journal. arXiv admin note: text overlap
with arXiv:1206.3509 by other authors | International Journal of Data Mining & Knowledge Management
Process (IJDKP) Vol.3, No.2, March 2013 | 10.5121/ijdkp.2013.3207 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An intrinsic problem of classifiers based on machine learning (ML) methods is
that their learning time grows as the size and complexity of the training
dataset increases. For this reason, it is important to have efficient
computational methods and algorithms that can be applied on large datasets,
such that it is still possible to complete the machine learning tasks in
reasonable time. In this context, we present in this paper a simple and more
accurate process to speed up ML methods. An unsupervised clustering algorithm
is combined with the Expectation-Maximization (EM) algorithm to develop an
efficient Hidden Markov Model (HMM) training procedure. The idea of the
proposed process consists
of two steps. In the first step, training instances with similar inputs are
clustered and a weight factor which represents the frequency of these instances
is assigned to each representative cluster. Dynamic Time Warping technique is
used as a dissimilarity function to cluster similar examples. In the second
step, all formulas in the classical HMM training algorithm (EM) associated with
the number of training instances are modified to include the weight factor in
appropriate terms. This process significantly accelerates HMM training while
maintaining the same initial, transition and emission probability matrices as
those obtained with the classical HMM training algorithm. Accordingly, the
classification accuracy is preserved. Depending on the size of the training
set, speedups of up to 2200 times are possible when the size is about 100,000
instances. The proposed approach is not limited to training HMMs, but it can be
employed for a large variety of ML methods.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2013 22:23:53 GMT"
}
] | 2013-04-16T00:00:00 | [
[
"Ghanem",
"Khadoudja",
""
]
] | TITLE: Towards more accurate clustering method by using dynamic time warping
ABSTRACT: An intrinsic problem of classifiers based on machine learning (ML) methods is
that their learning time grows as the size and complexity of the training
dataset increases. For this reason, it is important to have efficient
computational methods and algorithms that can be applied on large datasets,
such that it is still possible to complete the machine learning tasks in
reasonable time. In this context, we present in this paper a simple and more
accurate process to speed up ML methods. An unsupervised clustering algorithm
is combined with the Expectation-Maximization (EM) algorithm to develop an
efficient Hidden Markov Model (HMM) training procedure. The idea of the
proposed process consists
of two steps. In the first step, training instances with similar inputs are
clustered and a weight factor which represents the frequency of these instances
is assigned to each representative cluster. Dynamic Time Warping technique is
used as a dissimilarity function to cluster similar examples. In the second
step, all formulas in the classical HMM training algorithm (EM) associated with
the number of training instances are modified to include the weight factor in
appropriate terms. This process significantly accelerates HMM training while
maintaining the same initial, transition and emission probability matrices as
those obtained with the classical HMM training algorithm. Accordingly, the
classification accuracy is preserved. Depending on the size of the training
set, speedups of up to 2200 times are possible when the size is about 100,000
instances. The proposed approach is not limited to training HMMs, but it can be
employed for a large variety of ML methods.
|
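Since the abstract above relies on Dynamic Time Warping as the dissimilarity used for clustering training instances, a minimal textbook implementation of the DTW distance between two 1-D sequences is sketched below. This is a quadratic-time reference version under an assumed absolute-difference local cost, not the paper's code.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic time warping distance for 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [0, 1, 2, 3, 2, 1]
b = [0, 0, 1, 2, 3, 2, 1, 0]
print(dtw_distance(a, b))   # small value: the series are time-warped copies
```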
1304.3816 | Justin Thaler | Amit Chakrabarti and Graham Cormode and Navin Goyal and Justin Thaler | Annotations for Sparse Data Streams | 29 pages, 5 tables | null | null | null | cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by cloud computing, a number of recent works have studied annotated
data streams and variants thereof. In this setting, a computationally weak
verifier (cloud user), lacking the resources to store and manipulate his
massive input locally, accesses a powerful but untrusted prover (cloud
service). The verifier must work within the restrictive data streaming
paradigm. The prover, who can annotate the data stream as it is read, must not
just supply the answer but also convince the verifier of its correctness.
Ideally, both the amount of annotation and the space used by the verifier
should be sublinear in the relevant input size parameters.
A rich theory of such algorithms -- which we call schemes -- has emerged.
Prior work has shown how to leverage the prover's power to efficiently solve
problems that have no non-trivial standard data stream algorithms. However,
while optimal schemes are now known for several basic problems, such optimality
holds only for streams whose length is commensurate with the size of the data
universe. In contrast, many real-world datasets are relatively sparse,
including graphs that contain only O(n^2) edges, and IP traffic streams that
contain much fewer than the total number of possible IP addresses, 2^128 in
IPv6.
We design the first schemes that allow both the annotation and the space
usage to be sublinear in the total number of stream updates rather than the
size of the data universe. We solve significant problems, including variations
of INDEX, SET-DISJOINTNESS, and FREQUENCY-MOMENTS, plus several natural
problems on graphs. On the other hand, we give a new lower bound that, for the
first time, rules out smooth tradeoffs between annotation and space usage for a
specific problem. Our technique brings out new nuances in Merlin-Arthur
communication complexity models, and provides a separation between online
versions of the MA and AMA models.
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 15:17:28 GMT"
}
] | 2013-04-16T00:00:00 | [
[
"Chakrabarti",
"Amit",
""
],
[
"Cormode",
"Graham",
""
],
[
"Goyal",
"Navin",
""
],
[
"Thaler",
"Justin",
""
]
] | TITLE: Annotations for Sparse Data Streams
ABSTRACT: Motivated by cloud computing, a number of recent works have studied annotated
data streams and variants thereof. In this setting, a computationally weak
verifier (cloud user), lacking the resources to store and manipulate his
massive input locally, accesses a powerful but untrusted prover (cloud
service). The verifier must work within the restrictive data streaming
paradigm. The prover, who can annotate the data stream as it is read, must not
just supply the answer but also convince the verifier of its correctness.
Ideally, both the amount of annotation and the space used by the verifier
should be sublinear in the relevant input size parameters.
A rich theory of such algorithms -- which we call schemes -- has emerged.
Prior work has shown how to leverage the prover's power to efficiently solve
problems that have no non-trivial standard data stream algorithms. However,
while optimal schemes are now known for several basic problems, such optimality
holds only for streams whose length is commensurate with the size of the data
universe. In contrast, many real-world datasets are relatively sparse,
including graphs that contain only O(n^2) edges, and IP traffic streams that
contain much fewer than the total number of possible IP addresses, 2^128 in
IPv6.
We design the first schemes that allow both the annotation and the space
usage to be sublinear in the total number of stream updates rather than the
size of the data universe. We solve significant problems, including variations
of INDEX, SET-DISJOINTNESS, and FREQUENCY-MOMENTS, plus several natural
problems on graphs. On the other hand, we give a new lower bound that, for the
first time, rules out smooth tradeoffs between annotation and space usage for a
specific problem. Our technique brings out new nuances in Merlin-Arthur
communication complexity models, and provides a separation between online
versions of the MA and AMA models.
|
1304.4041 | Humayun Irshad | H. Irshad, A. Gouaillard, L. Roux, D. Racoceanu | Multispectral Spatial Characterization: Application to Mitosis Detection
in Breast Cancer Histopathology | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate detection of mitosis plays a critical role in breast cancer
histopathology. Manual detection and counting of mitosis is tedious and subject
to considerable inter- and intra-reader variations. Multispectral imaging is a
recent medical imaging technology, proven successful in increasing the
segmentation accuracy in other fields. This study aims at improving the
accuracy of mitosis detection by developing a specific solution using
multispectral and multifocal imaging of breast cancer histopathological data.
We propose to enable clinical routine-compliant quality of mitosis
discrimination from other objects. The proposed framework includes
comprehensive analysis of spectral bands and z-stack focus planes, detection of
expected mitotic regions (candidates) in selected focus planes and spectral
bands, computation of multispectral spatial features for each candidate,
selection of multispectral spatial features and a study of different
state-of-the-art classification methods for candidates classification as
mitotic or non mitotic figures. This framework has been evaluated on MITOS
multispectral medical dataset and achieved 60% detection rate and 57%
F-Measure. Our results indicate that multispectral spatial features have more
information for mitosis classification in comparison with white spectral band
features, being therefore a very promising exploration area to improve the
quality of the diagnosis assistance in histopathology.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2013 10:11:34 GMT"
}
] | 2013-04-16T00:00:00 | [
[
"Irshad",
"H.",
""
],
[
"Gouaillard",
"A.",
""
],
[
"Roux",
"L.",
""
],
[
"Racoceanu",
"D.",
""
]
] | TITLE: Multispectral Spatial Characterization: Application to Mitosis Detection
in Breast Cancer Histopathology
ABSTRACT: Accurate detection of mitosis plays a critical role in breast cancer
histopathology. Manual detection and counting of mitosis is tedious and subject
to considerable inter- and intra-reader variations. Multispectral imaging is a
recent medical imaging technology, proven successful in increasing the
segmentation accuracy in other fields. This study aims at improving the
accuracy of mitosis detection by developing a specific solution using
multispectral and multifocal imaging of breast cancer histopathological data.
We propose to enable clinical routine-compliant quality of mitosis
discrimination from other objects. The proposed framework includes
comprehensive analysis of spectral bands and z-stack focus planes, detection of
expected mitotic regions (candidates) in selected focus planes and spectral
bands, computation of multispectral spatial features for each candidate,
selection of multispectral spatial features and a study of different
state-of-the-art classification methods for candidates classification as
mitotic or non mitotic figures. This framework has been evaluated on MITOS
multispectral medical dataset and achieved 60% detection rate and 57%
F-Measure. Our results indicate that multispectral spatial features have more
information for mitosis classification in comparison with white spectral band
features, being therefore a very promising exploration area to improve the
quality of the diagnosis assistance in histopathology.
|
1304.3192 | Yulan Guo | Yulan Guo, Ferdous Sohel, Mohammed Bennamoun, Min Lu, Jianwei Wan | Rotational Projection Statistics for 3D Local Surface Description and
Object Recognition | The final publication is available at link.springer.com International
Journal of Computer Vision 2013 | null | 10.1007/s11263-013-0627-y | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Recognizing 3D objects in the presence of noise, varying mesh resolution,
occlusion and clutter is a very challenging task. This paper presents a novel
method named Rotational Projection Statistics (RoPS). It has three major
modules: Local Reference Frame (LRF) definition, RoPS feature description and
3D object recognition. We propose a novel technique to define the LRF by
calculating the scatter matrix of all points lying on the local surface. RoPS
feature descriptors are obtained by rotationally projecting the neighboring
points of a feature point onto 2D planes and calculating a set of statistics
(including low-order central moments and entropy) of the distribution of these
projected points. Using the proposed LRF and RoPS descriptor, we present a
hierarchical 3D object recognition algorithm. The performance of the proposed
LRF, RoPS descriptor and object recognition algorithm was rigorously tested on
a number of popular and publicly available datasets. Our proposed techniques
exhibited superior performance compared to existing techniques. We also showed
that our method is robust with respect to noise and varying mesh resolution.
Our RoPS based algorithm achieved recognition rates of 100%, 98.9%, 95.4% and
96.0% respectively when tested on the Bologna, UWA, Queen's and Ca' Foscari
Venezia Datasets.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2013 04:26:52 GMT"
}
] | 2013-04-12T00:00:00 | [
[
"Guo",
"Yulan",
""
],
[
"Sohel",
"Ferdous",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Lu",
"Min",
""
],
[
"Wan",
"Jianwei",
""
]
] | TITLE: Rotational Projection Statistics for 3D Local Surface Description and
Object Recognition
ABSTRACT: Recognizing 3D objects in the presence of noise, varying mesh resolution,
occlusion and clutter is a very challenging task. This paper presents a novel
method named Rotational Projection Statistics (RoPS). It has three major
modules: Local Reference Frame (LRF) definition, RoPS feature description and
3D object recognition. We propose a novel technique to define the LRF by
calculating the scatter matrix of all points lying on the local surface. RoPS
feature descriptors are obtained by rotationally projecting the neighboring
points of a feature point onto 2D planes and calculating a set of statistics
(including low-order central moments and entropy) of the distribution of these
projected points. Using the proposed LRF and RoPS descriptor, we present a
hierarchical 3D object recognition algorithm. The performance of the proposed
LRF, RoPS descriptor and object recognition algorithm was rigorously tested on
a number of popular and publicly available datasets. Our proposed techniques
exhibited superior performance compared to existing techniques. We also showed
that our method is robust with respect to noise and varying mesh resolution.
Our RoPS-based algorithm achieved recognition rates of 100%, 98.9%, 95.4% and
96.0% respectively when tested on the Bologna, UWA, Queen's and Ca' Foscari
Venezia Datasets.
|
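The RoPS abstract above is concrete enough to sketch its two central ideas in code. The following is a simplified, hypothetical rendering: the published method additionally weights points by the local surface triangles they lie on, disambiguates the sign of the LRF axes, and uses a specific set of rotations and bin counts, none of which is reproduced here.

```python
import numpy as np

def local_reference_frame(points):
    """points: (N, 3) neighbors of a feature point. LRF axes = eigenvectors of the
    scatter matrix of the centered local surface points (sign handling omitted)."""
    centered = points - points.mean(axis=0)
    scatter = centered.T @ centered / len(points)
    _, vecs = np.linalg.eigh(scatter)            # columns sorted by ascending eigenvalue
    return vecs[:, ::-1]                          # x = largest variance, z = smallest

def rotation_matrix(axis, theta):
    """Rotation by theta about the x (0), y (1) or z (2) axis."""
    c, s = np.cos(theta), np.sin(theta)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

def central_moment(dist, p, q):
    """(p, q)-order central moment of a normalized 2D histogram."""
    x = np.arange(dist.shape[0])[:, None]
    y = np.arange(dist.shape[1])[None, :]
    xb, yb = (dist * x).sum(), (dist * y).sum()
    return (((x - xb) ** p) * ((y - yb) ** q) * dist).sum()

def rops_descriptor(points, n_rotations=3, bins=8):
    """Rotate the LRF-aligned neighborhood about each axis, project onto the three
    coordinate planes, and collect low-order central moments plus Shannon entropy."""
    local = (points - points.mean(axis=0)) @ local_reference_frame(points)
    feats = []
    for axis in range(3):
        for theta in np.linspace(0.0, np.pi, n_rotations, endpoint=False):
            p = local @ rotation_matrix(axis, theta).T
            for i, j in [(0, 1), (1, 2), (0, 2)]:          # xy, yz, xz projection planes
                hist, _, _ = np.histogram2d(p[:, i], p[:, j], bins=bins)
                d = hist / hist.sum()
                nz = d[d > 0]
                feats += [central_moment(d, 1, 1), central_moment(d, 2, 1),
                          central_moment(d, 1, 2), -np.sum(nz * np.log2(nz))]
    return np.asarray(feats)

# Toy usage on a random local patch; real input would be mesh vertices near a keypoint.
pts = np.random.default_rng(0).normal(size=(500, 3)) * [3.0, 2.0, 0.3]
print(rops_descriptor(pts).shape)   # 3 axes x 3 rotations x 3 planes x 4 statistics = 108
```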
1304.3345 | Marzieh Parandehgheibi | Marzieh Parandehgheibi | Probabilistic Classification using Fuzzy Support Vector Machines | 6 pages, Proceedings of the 6th INFORMS Workshop on Data Mining and
Health Informatics (DM-HI 2011) | null | null | null | cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In medical applications such as recognizing the type of a tumor as Malignant
or Benign, a wrong diagnosis can be devastating. Methods like Fuzzy Support
Vector Machines (FSVM) try to reduce the effect of misplaced training points by
assigning a lower weight to the outliers. However, there remain uncertain
points that are similar to both classes, and assigning them a class from the
given information will cause errors. In this paper, we propose a two-phase
classification method which probabilistically assigns the uncertain points to
each of the classes. The proposed method is applied to the Breast Cancer
Wisconsin (Diagnostic) Dataset which consists of 569 instances in 2 classes of
Malignant and Benign. This method assigns certain instances to their
appropriate classes with a probability of one, and the uncertain instances to
each of the classes with associated probabilities. Therefore, based on the
degree of uncertainty, doctors can suggest further examinations before making
the final diagnosis.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2013 15:44:18 GMT"
}
] | 2013-04-12T00:00:00 | [
[
"Parandehgheibi",
"Marzieh",
""
]
] | TITLE: Probabilistic Classification using Fuzzy Support Vector Machines
ABSTRACT: In medical applications such as recognizing the type of a tumor as Malignant
or Benign, a wrong diagnosis can be devastating. Methods like Fuzzy Support
Vector Machines (FSVM) try to reduce the effect of misplaced training points by
assigning a lower weight to the outliers. However, there remain uncertain
points that are similar to both classes, and assigning them a class from the
given information will cause errors. In this paper, we propose a two-phase
classification method which probabilistically assigns the uncertain points to
each of the classes. The proposed method is applied to the Breast Cancer
Wisconsin (Diagnostic) Dataset which consists of 569 instances in 2 classes of
Malignant and Benign. This method assigns certain instances to their
appropriate classes with a probability of one, and the uncertain instances to
each of the classes with associated probabilities. Therefore, based on the
degree of uncertainty, doctors can suggest further examinations before making
the final diagnosis.
|
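The two-phase idea in the abstract above can be illustrated on the same Breast Cancer Wisconsin (Diagnostic) data, which ships with scikit-learn. The sketch below is an assumption-laden stand-in, not the paper's FSVM formulation: it uses an ordinary SVM, treats small decision-function margins as "uncertain", and reports Platt-scaled probabilities only for those points.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)        # 569 instances; 0 = malignant, 1 = benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", probability=True).fit(scaler.transform(X_tr), y_tr)

margin = np.abs(clf.decision_function(scaler.transform(X_te)))
proba = clf.predict_proba(scaler.transform(X_te))
THRESHOLD = 0.5                                    # assumed width of the "uncertain" band

for m, p in zip(margin[:10], proba[:10]):
    if m >= THRESHOLD:                             # phase 1: confident -> hard label, prob. 1
        print("certain   -> class", int(p.argmax()), "(probability 1.0)")
    else:                                          # phase 2: report class probabilities
        print("uncertain -> malignant: %.2f, benign: %.2f" % (p[0], p[1]))
```

In practice the threshold would be tuned on a validation set; the sketch only separates a confident phase from a probabilistic one, mirroring the structure the abstract describes.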
1304.3406 | Seyed Hamed Alemohammad | Seyed Hamed Alemohammad, Dara Entekhabi | Merging Satellite Measurements of Rainfall Using Multi-scale Imagery
Technique | 6 pages, 10 Figures, WCRP Open Science Conference, 2011 | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several passive microwave satellites orbit the Earth and measure rainfall.
These measurements have the advantage of almost full global coverage when
compared to surface rain gauges. However, these satellites have a low temporal
revisit frequency and missing data over some regions. Image fusion is a useful technique
to fill in the gaps of one image (one satellite measurement) using another one.
The proposed algorithm uses an iterative fusion scheme to integrate information
from two satellite measurements. The algorithm is implemented on two datasets
for 7 years of half-hourly data. The results show significant improvements in
rain detection and rain intensity in the merged measurements.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2013 19:31:57 GMT"
}
] | 2013-04-12T00:00:00 | [
[
"Alemohammad",
"Seyed Hamed",
""
],
[
"Entekhabi",
"Dara",
""
]
] | TITLE: Merging Satellite Measurements of Rainfall Using Multi-scale Imagery
Technique
ABSTRACT: Several passive microwave satellites orbit the Earth and measure rainfall.
These measurements have the advantage of almost full global coverage when
compared to surface rain gauges. However, these satellites have a low temporal
revisit frequency and missing data over some regions. Image fusion is a useful technique
to fill in the gaps of one image (one satellite measurement) using another one.
The proposed algorithm uses an iterative fusion scheme to integrate information
from two satellite measurements. The algorithm is implemented on two datasets
for 7 years of half-hourly data. The results show significant improvements in
rain detection and rain intensity in the merged measurements.
|
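The fusion abstract above does not spell out the multi-scale scheme, so the following is only a rough, hypothetical sketch of the general idea: fill the gaps of one gridded rain field from a bias-adjusted, progressively finer block average of the other. The scale list, the mean-matching bias step and the toy data are all assumptions.

```python
import numpy as np

def block_average(field, size):
    """Average the finite values in size x size blocks and broadcast back to full resolution."""
    h, w = field.shape
    out = np.full_like(field, np.nan)
    for i in range(0, h, size):
        for j in range(0, w, size):
            block = field[i:i + size, j:j + size]
            if np.isfinite(block).any():
                out[i:i + size, j:j + size] = np.nanmean(block)
    return out

def fuse(primary, secondary, scales=(8, 4, 2, 1)):
    """primary/secondary: 2D rain fields with np.nan where a satellite has no data."""
    fused = primary.copy()
    overlap = np.isfinite(primary) & np.isfinite(secondary)
    bias = primary[overlap].mean() - secondary[overlap].mean()   # crude mean matching
    for s in scales:                                             # coarse -> fine passes
        gaps = np.isnan(fused)
        if not gaps.any():
            break
        donor = block_average(secondary + bias, s)
        fused[gaps] = donor[gaps]
    return fused

# Toy usage: a common "truth" field, each satellite missing a different region.
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 1.0, size=(32, 32))
sat_a = truth.copy(); sat_a[10:20, 5:15] = np.nan
sat_b = truth + rng.normal(0.0, 0.2, size=(32, 32)); sat_b[0:5, :] = np.nan
print("unfilled cells after fusion:", int(np.isnan(fuse(sat_a, sat_b)).sum()))
```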
1302.6569 | Nicola Perra | Qian Zhang, Nicola Perra, Bruno Goncalves, Fabio Ciulla, Alessandro
Vespignani | Characterizing scientific production and consumption in Physics | null | Nature Scientific Reports 3, 1640 (2013) | 10.1038/srep01640 | null | physics.soc-ph cs.DL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the entire publication database of the American Physical Society
generating longitudinal (50 years) citation networks geolocalized at the level
of single urban areas. We define the knowledge diffusion proxy and scientific
production ranking algorithms to capture the spatio-temporal dynamics of
Physics knowledge worldwide. By using the knowledge diffusion proxy we identify
the key cities in the production and consumption of knowledge in Physics as a
function of time. The results from the scientific production ranking algorithm
allow us to characterize the top cities for scholarly research in Physics.
Although we focus on a single dataset concerning a specific field, the
methodology presented here opens the path to comparative studies of the
dynamics of knowledge across disciplines and research areas.
| [
{
"version": "v1",
"created": "Tue, 26 Feb 2013 20:33:51 GMT"
}
] | 2013-04-11T00:00:00 | [
[
"Zhang",
"Qian",
""
],
[
"Perra",
"Nicola",
""
],
[
"Goncalves",
"Bruno",
""
],
[
"Ciulla",
"Fabio",
""
],
[
"Vespignani",
"Alessandro",
""
]
] | TITLE: Characterizing scientific production and consumption in Physics
ABSTRACT: We analyze the entire publication database of the American Physical Society
generating longitudinal (50 years) citation networks geolocalized at the level
of single urban areas. We define the knowledge diffusion proxy and scientific
production ranking algorithms to capture the spatio-temporal dynamics of
Physics knowledge worldwide. By using the knowledge diffusion proxy we identify
the key cities in the production and consumption of knowledge in Physics as a
function of time. The results from the scientific production ranking algorithm
allow us to characterize the top cities for scholarly research in Physics.
Although we focus on a single dataset concerning a specific field, the
methodology presented here opens the path to comparative studies of the
dynamics of knowledge across disciplines and research areas.
|
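As a reading aid for the abstract above, here is a hypothetical sketch of the underlying data structure: a time-stamped, city-level citation multigraph. The toy records, field names and the use of networkx are assumptions; the paper's knowledge diffusion proxy and ranking algorithms are not reproduced here.

```python
import networkx as nx

# Toy APS-style records: paper id -> (publishing city, year, list of cited paper ids).
papers = {
    "p1": ("Boston, US", 1980, []),
    "p2": ("Rome, IT",   1995, ["p1"]),
    "p3": ("Boston, US", 2005, ["p1", "p2"]),
}

# City-to-city multigraph: one directed edge per citation, stamped with year and time lag.
G = nx.MultiDiGraph()
for pid, (city, year, refs) in papers.items():
    for ref in refs:
        cited_city, cited_year, _ = papers[ref]
        G.add_edge(city, cited_city, year=year, lag=year - cited_year)

# Crude proxies: out-citations ~ knowledge "consumption", in-citations ~ "production".
for city in G.nodes:
    print(city, "cites:", G.out_degree(city), "| is cited:", G.in_degree(city))
```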
1303.3087 | Togerchety Hitendra sarma | Mallikarjun Hangarge, K.C. Santosh, Srikanth Doddamani, Rajmohan
Pardeshi | Statistical Texture Features based Handwritten and Printed Text
Classification in South Indian Documents | Appeared in ICECIT-2102 | null | null | Volume 1,Number 32 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we use statistical texture features for handwritten and
printed text classification. We primarily aim for word level classification in
south Indian scripts. Words are first extracted from the scanned document. For
each extracted word, statistical texture features are computed, such as the
mean, standard deviation, smoothness, moment, uniformity, entropy and local
range, including local entropy. These feature vectors are then used to classify
words via a k-NN classifier. We have validated the approach on several
different datasets. Scripts such as Kannada, Telugu, Malayalam and Hindi (i.e.,
Devanagari) are primarily employed, where an average classification rate of
99.26% is achieved. In addition, to demonstrate the extensibility of the
approach, we address Roman script using a publicly available dataset, and
interesting results are reported.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2013 04:51:22 GMT"
}
] | 2013-04-11T00:00:00 | [
[
"Hangarge",
"Mallikarjun",
""
],
[
"Santosh",
"K. C.",
""
],
[
"Doddamani",
"Srikanth",
""
],
[
"Pardeshi",
"Rajmohan",
""
]
] | TITLE: Statistical Texture Features based Handwritten and Printed Text
Classification in South Indian Documents
ABSTRACT: In this paper, we use statistical texture features for handwritten and
printed text classification. We primarily aim for word level classification in
south Indian scripts. Words are first extracted from the scanned document. For
each extracted word, statistical texture features are computed, such as the
mean, standard deviation, smoothness, moment, uniformity, entropy and local
range, including local entropy. These feature vectors are then used to classify
words via a k-NN classifier. We have validated the approach on several
different datasets. Scripts such as Kannada, Telugu, Malayalam and Hindi (i.e.,
Devanagari) are primarily employed, where an average classification rate of
99.26% is achieved. In addition, to demonstrate the extensibility of the
approach, we address Roman script using a publicly available dataset, and
interesting results are reported.
|
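A minimal sketch of the word-level pipeline in the abstract above: compute the listed first-order statistical texture measures per word image and classify with k-NN. The histogram bin count, the window size for the local range, and k = 5 are assumptions; the local-entropy feature is omitted for brevity.

```python
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

def texture_features(word_img, bins=64, win=5):
    """word_img: 2D grayscale word image with values in [0, 255]."""
    z = word_img.astype(float).ravel()
    p, _ = np.histogram(z, bins=bins, range=(0, 255))
    p = p / p.sum()
    levels = np.arange(bins)
    m = (levels * p).sum()
    var = (((levels - m) ** 2) * p).sum()
    local_range = (ndimage.maximum_filter(word_img, size=win)
                   - ndimage.minimum_filter(word_img, size=win)).mean()
    return np.array([
        z.mean(),                                  # mean
        z.std(),                                   # standard deviation
        1 - 1 / (1 + var),                         # smoothness
        (((levels - m) ** 3) * p).sum(),           # third moment
        (p ** 2).sum(),                            # uniformity
        -(p[p > 0] * np.log2(p[p > 0])).sum(),     # entropy
        local_range,                               # mean local range
    ])

# Toy usage on synthetic crops; real inputs would be segmented word images per script.
rng = np.random.default_rng(2)
words = rng.integers(0, 256, size=(100, 40, 120))  # 100 word images, 40 x 120 pixels
labels = rng.integers(0, 2, size=100)              # 0 = printed, 1 = handwritten
X = np.stack([texture_features(w) for w in words])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:80], labels[:80])
print("toy accuracy:", knn.score(X[80:], labels[80:]))
```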