id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1209.1411 | Chaoming Song Dr. | Chaoming Song, Dashun Wang, Albert-Laszlo Barabasi | Connections between Human Dynamics and Network Science | null | null | null | null | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing availability of large-scale data on human behavior has
catalyzed simultaneous advances in network theory, capturing the scaling
properties of the interactions between a large number of individuals, and human
dynamics, quantifying the temporal characteristics of human activity patterns.
These two areas remain disjoint, each pursued as a separate line of inquiry.
Here we report a series of generic relationships between the quantities
characterizing these two areas by demonstrating that the degree and link weight
distributions in social networks can be expressed in terms of the dynamical
exponents characterizing human activity patterns. We test the validity of these
theoretical predictions on datasets capturing various facets of human
interactions, from mobile calls to tweets.
| [
{
"version": "v1",
"created": "Thu, 6 Sep 2012 21:04:21 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Apr 2013 20:01:21 GMT"
}
] | 2013-04-10T00:00:00 | [
[
"Song",
"Chaoming",
""
],
[
"Wang",
"Dashun",
""
],
[
"Barabasi",
"Albert-Laszlo",
""
]
] | TITLE: Connections between Human Dynamics and Network Science
ABSTRACT: The increasing availability of large-scale data on human behavior has
catalyzed simultaneous advances in network theory, capturing the scaling
properties of the interactions between a large number of individuals, and human
dynamics, quantifying the temporal characteristics of human activity patterns.
These two areas remain disjoint, each pursued as a separate line of inquiry.
Here we report a series of generic relationships between the quantities
characterizing these two areas by demonstrating that the degree and link weight
distributions in social networks can be expressed in terms of the dynamical
exponents characterizing human activity patterns. We test the validity of these
theoretical predictions on datasets capturing various facets of human
interactions, from mobile calls to tweets.
|
1304.2604 | Jean Souviron | Jean Souviron | On the predictability of the number of convex vertices | 6 pages, 6 figures | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convex hulls are a fundamental geometric tool used in a number of algorithms.
As a side-effect of exhaustive tests for an algorithm for which a convex hull
computation was the first step, interesting experimental results were found and
are the subject of this paper. They establish that the number of convex
vertices of natural datasets can be predicted, if not precisely at least within
a defined range. Namely it was found that the number of convex vertices of a
dataset of N points lies in the range 2.35 N^0.091 <= h <= 19.19 N^0.091. This
range obviously describes neither natural nor artificial worst cases but
corresponds to the distributions of natural data. This can be used for
instance to define a starting size for pre-allocated arrays or to evaluate
output-sensitive algorithms. A further consequence of these results is that the
random models of data used to test convex hull algorithms should be bounded by
rectangles, not by circles as they usually are, if they are to represent
natural datasets accurately.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2013 14:17:12 GMT"
}
] | 2013-04-10T00:00:00 | [
[
"Souviron",
"Jean",
""
]
] | TITLE: On the predictability of the number of convex vertices
ABSTRACT: Convex hulls are a fundamental geometric tool used in a number of algorithms.
As a side-effect of exhaustive tests for an algorithm for which a convex hull
computation was the first step, interesting experimental results were found and
are the subject of this paper. They establish that the number of convex
vertices of natural datasets can be predicted, if not precisely at least within
a defined range. Namely it was found that the number of convex vertices of a
dataset of N points lies in the range 2.35 N^0.091 <= h <= 19.19 N^0.091. This
range obviously describes neither natural nor artificial worst cases but
corresponds to the distributions of natural data. This can be used for
instance to define a starting size for pre-allocated arrays or to evaluate
output-sensitive algorithms. A further consequence of these results is that the
random models of data used to test convex hull algorithms should be bounded by
rectangles, not by circles as they usually are, if they are to represent
natural datasets accurately.
|
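The abstract above reports a purely empirical bound, 2.35 N^0.091 <= h <= 19.19 N^0.091, for the number of convex vertices h of a natural dataset of N points. As a minimal illustration of how such a bound could be used (for example, to pick a starting size for a pre-allocated hull array), the sketch below simply evaluates the published constants; the function name and the example values are ours, not the paper's.

```python
import math

def predicted_convex_vertex_range(n_points: int) -> tuple[int, int]:
    """Evaluate the empirical range 2.35*N^0.091 <= h <= 19.19*N^0.091
    quoted in the abstract (natural data, not worst cases)."""
    scale = n_points ** 0.091
    return math.floor(2.35 * scale), math.ceil(19.19 * scale)

# Example: size a pre-allocated array of hull vertices for one million points.
low, high = predicted_convex_vertex_range(1_000_000)
print(low, high)  # roughly 8 and 68
```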
1303.7390 | Aasa Feragen | Aasa Feragen, Jens Petersen, Dominik Grimm, Asger Dirksen, Jesper
Holst Pedersen, Karsten Borgwardt and Marleen de Bruijne | Geometric tree kernels: Classification of COPD from airway tree geometry | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Methodological contributions: This paper introduces a family of kernels for
analyzing (anatomical) trees endowed with vector valued measurements made along
the tree. While state-of-the-art graph and tree kernels use combinatorial
tree/graph structure with discrete node and edge labels, the kernels presented
in this paper can include geometric information such as branch shape, branch
radius or other vector valued properties. In addition to being flexible in
their ability to model different types of attributes, the presented kernels are
computationally efficient and some of them can easily be computed for large
datasets (N on the order of 10,000) of trees with 30-600 branches. Combining the
kernels with standard machine learning tools enables us to analyze the relation
between disease and anatomical tree structure and geometry. Experimental
results: The kernels are used to compare airway trees segmented from low-dose
CT, endowed with branch shape descriptors and airway wall area percentage
measurements made along the tree. Using kernelized hypothesis testing we show
that the geometric airway trees are significantly differently distributed in
patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy
individuals. The geometric tree kernels also give a significant increase in the
classification accuracy of COPD from geometric tree structure endowed with
airway wall thickness measurements in comparison with state-of-the-art methods,
giving further insight into the relationship between airway wall thickness and
COPD. Software: Software for computing kernels and statistical tests is
available at http://image.diku.dk/aasa/software.php.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2013 13:25:17 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Apr 2013 12:11:24 GMT"
}
] | 2013-04-09T00:00:00 | [
[
"Feragen",
"Aasa",
""
],
[
"Petersen",
"Jens",
""
],
[
"Grimm",
"Dominik",
""
],
[
"Dirksen",
"Asger",
""
],
[
"Pedersen",
"Jesper Holst",
""
],
[
"Borgwardt",
"Karsten",
""
],
[
"de Bruijne",
"Marleen",
""
]
] | TITLE: Geometric tree kernels: Classification of COPD from airway tree geometry
ABSTRACT: Methodological contributions: This paper introduces a family of kernels for
analyzing (anatomical) trees endowed with vector valued measurements made along
the tree. While state-of-the-art graph and tree kernels use combinatorial
tree/graph structure with discrete node and edge labels, the kernels presented
in this paper can include geometric information such as branch shape, branch
radius or other vector valued properties. In addition to being flexible in
their ability to model different types of attributes, the presented kernels are
computationally efficient and some of them can easily be computed for large
datasets (N on the order of 10,000) of trees with 30-600 branches. Combining the
kernels with standard machine learning tools enables us to analyze the relation
between disease and anatomical tree structure and geometry. Experimental
results: The kernels are used to compare airway trees segmented from low-dose
CT, endowed with branch shape descriptors and airway wall area percentage
measurements made along the tree. Using kernelized hypothesis testing we show
that the geometric airway trees are significantly differently distributed in
patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy
individuals. The geometric tree kernels also give a significant increase in the
classification accuracy of COPD from geometric tree structure endowed with
airway wall thickness measurements in comparison with state-of-the-art methods,
giving further insight into the relationship between airway wall thickness and
COPD. Software: Software for computing kernels and statistical tests is
available at http://image.diku.dk/aasa/software.php.
|
1304.1924 | Shuguang Han | Shuguang Han, Zhen Yue, Daqing He | Automatic Detection of Search Tactic in Individual Information Seeking:
A Hidden Markov Model Approach | 5 pages, 3 figures, 3 tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information seeking process is an important topic in information seeking
behavior research. Both qualitative and empirical methods have been adopted in
analyzing information seeking processes, with major focus on uncovering the
latent search tactics behind user behaviors. Most of the existing works require
defining search tactics in advance and coding data manually. The few works that
can recognize search tactics automatically fail to make sense of those tactics.
In this paper, we propose using an automatic technique, the Hidden Markov Model
(HMM), to explicitly model search tactics. HMM results show that the identified
search tactics of individual information seeking behaviors are consistent with
Marchionini's information seeking process model. Because HMM shows the
connections between search tactics and search actions as well as the
transitions among search tactics, we argue that it is a useful tool for
investigating the information seeking process, or at least provides a feasible
way to analyze large-scale datasets.
| [
{
"version": "v1",
"created": "Sat, 6 Apr 2013 19:13:41 GMT"
}
] | 2013-04-09T00:00:00 | [
[
"Han",
"Shuguang",
""
],
[
"Yue",
"Zhen",
""
],
[
"He",
"Daqing",
""
]
] | TITLE: Automatic Detection of Search Tactic in Individual Information Seeking:
A Hidden Markov Model Approach
ABSTRACT: Information seeking process is an important topic in information seeking
behavior research. Both qualitative and empirical methods have been adopted in
analyzing information seeking processes, with major focus on uncovering the
latent search tactics behind user behaviors. Most of the existing works require
defining search tactics in advance and coding data manually. The few works that
can recognize search tactics automatically fail to make sense of those tactics.
In this paper, we propose using an automatic technique, the Hidden Markov Model
(HMM), to explicitly model search tactics. HMM results show that the identified
search tactics of individual information seeking behaviors are consistent with
Marchionini's information seeking process model. Because HMM shows the
connections between search tactics and search actions as well as the
transitions among search tactics, we argue that it is a useful tool for
investigating the information seeking process, or at least provides a feasible
way to analyze large-scale datasets.
|
1304.1979 | Esteban Moro | Giovanna Miritello, Rub\'en Lara, Manuel Cebri\'an, and Esteban Moro | Limited communication capacity unveils strategies for human interaction | Main Text: 8 pages, 5 figures. Supplementary info: 8 pages, 8 figures | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social connectivity is the key process that characterizes the structural
properties of social networks and in turn processes such as navigation,
influence or information diffusion. Since time, attention and cognition are
inelastic resources, humans should have a predefined strategy to manage their
social interactions over time. However, the limited observational length of
existing human interaction datasets, together with the bursty nature of dyadic
communications have hampered the observation of tie dynamics in social
networks. Here we develop a method for the detection of tie
activation/deactivation, and apply it to a large longitudinal, cross-sectional
communication dataset ($\approx$ 19 months, $\approx$ 20 million people).
Contrary to the perception of ever-growing connectivity, we observe that
individuals exhibit a finite communication capacity, which limits the number of
ties they can maintain active. In particular we find that men have an overall
higher communication capacity than women and that this capacity decreases
gradually for both sexes over the lifespan of individuals (16-70 years). We are
then able to separate communication capacity from communication activity,
revealing a diverse range of tie activation patterns, from stable to
exploratory. We find that, in simulation, individuals exhibiting exploratory
strategies take longer to receive information spreading in the network than
individuals with stable strategies. Our principled method to determine
the communication capacity of an individual allows us to quantify how
strategies for human interaction shape the dynamical evolution of social
networks.
| [
{
"version": "v1",
"created": "Sun, 7 Apr 2013 11:00:16 GMT"
}
] | 2013-04-09T00:00:00 | [
[
"Miritello",
"Giovanna",
""
],
[
"Lara",
"Rubén",
""
],
[
"Cebrián",
"Manuel",
""
],
[
"Moro",
"Esteban",
""
]
] | TITLE: Limited communication capacity unveils strategies for human interaction
ABSTRACT: Social connectivity is the key process that characterizes the structural
properties of social networks and in turn processes such as navigation,
influence or information diffusion. Since time, attention and cognition are
inelastic resources, humans should have a predefined strategy to manage their
social interactions over time. However, the limited observational length of
existing human interaction datasets, together with the bursty nature of dyadic
communications have hampered the observation of tie dynamics in social
networks. Here we develop a method for the detection of tie
activation/deactivation, and apply it to a large longitudinal, cross-sectional
communication dataset ($\approx$ 19 months, $\approx$ 20 million people).
Contrary to the perception of ever-growing connectivity, we observe that
individuals exhibit a finite communication capacity, which limits the number of
ties they can maintain active. In particular we find that men have an overall
higher communication capacity than women and that this capacity decreases
gradually for both sexes over the lifespan of individuals (16-70 years). We are
then able to separate communication capacity from communication activity,
revealing a diverse range of tie activation patterns, from stable to
exploratory. We find that, in simulation, individuals exhibiting exploratory
strategies take longer to receive information spreading in the network than
individuals with stable strategies. Our principled method to determine
the communication capacity of an individual allows us to quantify how
strategies for human interaction shape the dynamical evolution of social
networks.
|
1304.2133 | Conrad Sanderson | Yongkang Wong, Conrad Sanderson, Sandra Mau, Brian C. Lovell | Dynamic Amelioration of Resolution Mismatches for Local Feature Based
Identity Inference | null | International Conference on Pattern Recognition (ICPR), pp.
1200-1203, 2010 | 10.1109/ICPR.2010.299 | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While existing face recognition systems based on local features are robust to
issues such as misalignment, they can exhibit accuracy degradation when
comparing images of differing resolutions. This is common in surveillance
environments where a gallery of high resolution mugshots is compared to low
resolution CCTV probe images, or where the size of a given image is not a
reliable indicator of the underlying resolution (e.g., poor optics). To alleviate
this degradation, we propose a compensation framework which dynamically chooses
the most appropriate face recognition system for a given pair of image
resolutions. This framework applies a novel resolution detection method which
does not rely on the size of the input images, but instead exploits the
sensitivity of local features to resolution using a probabilistic multi-region
histogram approach. Experiments on a resolution-modified version of the
"Labeled Faces in the Wild" dataset show that the proposed resolution detector
frontend obtains a 99% average accuracy in selecting the most appropriate face
recognition system, resulting in higher overall face discrimination accuracy
(across several resolutions) compared to the individual baseline face
recognition systems.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2013 08:36:55 GMT"
}
] | 2013-04-09T00:00:00 | [
[
"Wong",
"Yongkang",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Mau",
"Sandra",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Dynamic Amelioration of Resolution Mismatches for Local Feature Based
Identity Inference
ABSTRACT: While existing face recognition systems based on local features are robust to
issues such as misalignment, they can exhibit accuracy degradation when
comparing images of differing resolutions. This is common in surveillance
environments where a gallery of high resolution mugshots is compared to low
resolution CCTV probe images, or where the size of a given image is not a
reliable indicator of the underlying resolution (e.g., poor optics). To alleviate
this degradation, we propose a compensation framework which dynamically chooses
the most appropriate face recognition system for a given pair of image
resolutions. This framework applies a novel resolution detection method which
does not rely on the size of the input images, but instead exploits the
sensitivity of local features to resolution using a probabilistic multi-region
histogram approach. Experiments on a resolution-modified version of the
"Labeled Faces in the Wild" dataset show that the proposed resolution detector
frontend obtains a 99% average accuracy in selecting the most appropriate face
recognition system, resulting in higher overall face discrimination accuracy
(across several resolutions) compared to the individual baseline face
recognition systems.
|
1304.1712 | Michele Coscia | Michele Coscia | Competition and Success in the Meme Pool: a Case Study on Quickmeme.com | null | International Conference of Weblogs and Social Media, 2013 | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of social media has provided data and insights about how people
relate to information and culture. While information is composed of bits and
its fundamental building blocks are relatively well understood, the same cannot
be said for culture. The fundamental cultural unit has been defined as a
"meme". Memes are defined in the literature as specific fundamental cultural
traits that float in their environment together. Just like genes
carried by bodies, memes are carried by cultural manifestations like songs,
buildings or pictures. Memes are studied in their competition for being
successfully passed from one generation of minds to another, in different ways.
In this paper we choose an empirical approach to the study of memes. We
downloaded data about memes from a well-known website hosting hundreds of
different memes and thousands of their implementations. From this data, we
empirically describe the behavior of these memes. We statistically describe
meme occurrences in our dataset and we delineate their fundamental traits,
along with those traits that make them more or less apt to be successful.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 13:52:55 GMT"
}
] | 2013-04-08T00:00:00 | [
[
"Coscia",
"Michele",
""
]
] | TITLE: Competition and Success in the Meme Pool: a Case Study on Quickmeme.com
ABSTRACT: The advent of social media has provided data and insights about how people
relate to information and culture. While information is composed of bits and
its fundamental building blocks are relatively well understood, the same cannot
be said for culture. The fundamental cultural unit has been defined as a
"meme". Memes are defined in the literature as specific fundamental cultural
traits that float in their environment together. Just like genes
carried by bodies, memes are carried by cultural manifestations like songs,
buildings or pictures. Memes are studied in their competition for being
successfully passed from one generation of minds to another, in different ways.
In this paper we choose an empirical approach to the study of memes. We
downloaded data about memes from a well-known website hosting hundreds of
different memes and thousands of their implementations. From this data, we
empirically describe the behavior of these memes. We statistically describe
meme occurrences in our dataset and we delineate their fundamental traits,
along with those traits that make them more or less apt to be successful.
|
1204.1259 | Bal\'azs Hidasi | Bal\'azs Hidasi, Domonkos Tikk | Fast ALS-based tensor factorization for context-aware recommendation
from implicit feedback | Accepted for ECML/PKDD 2012, presented on 25th September 2012,
Bristol, UK | Proceedings of the 2012 European conference on Machine Learning
and Knowledge Discovery in Databases - Volume Part II | 10.1007/978-3-642-33486-3_5 | null | cs.LG cs.IR cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient on the
explicit case cannot be straightforwardly transformed to the implicit case if
scalability should be maintained. There are few, if any, implicit feedback
benchmark datasets; therefore, new ideas are usually tested on explicit
benchmarks. In this paper, we propose a generic context-aware implicit feedback
recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor
factorization learning method that scales linearly with the number of non-zero
elements in the tensor. The method also allows us to incorporate diverse
context information into the model while maintaining its computational
efficiency. In particular, we present two such context-aware implementation
variants of iTALS. The first incorporates seasonality and makes it possible to
distinguish user behavior in different time intervals. The other views the user
history as sequential information and can recognize usage patterns typical of
certain groups of items, e.g. to automatically tell apart
product types or categories that are typically purchased repetitively
(collectibles, grocery goods) or once (household appliances). Experiments
performed on three implicit datasets (two proprietary ones and an implicit
variant of the Netflix dataset) show that by integrating context-aware
information with our factorization framework into the state-of-the-art implicit
recommender algorithm the recommendation quality improves significantly.
| [
{
"version": "v1",
"created": "Thu, 5 Apr 2012 15:34:30 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Apr 2013 15:33:31 GMT"
}
] | 2013-04-05T00:00:00 | [
[
"Hidasi",
"Balázs",
""
],
[
"Tikk",
"Domonkos",
""
]
] | TITLE: Fast ALS-based tensor factorization for context-aware recommendation
from implicit feedback
ABSTRACT: Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient on the
explicit case cannot be straightforwardly transformed to the implicit case if
scalability should be maintained. There are few, if any, implicit feedback
benchmark datasets; therefore, new ideas are usually tested on explicit
benchmarks. In this paper, we propose a generic context-aware implicit feedback
recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor
factorization learning method that scales linearly with the number of non-zero
elements in the tensor. The method also allows us to incorporate diverse
context information into the model while maintaining its computational
efficiency. In particular, we present two such context-aware implementation
variants of iTALS. The first incorporates seasonality and makes it possible to
distinguish user behavior in different time intervals. The other views the user
history as sequential information and can recognize usage patterns typical of
certain groups of items, e.g. to automatically tell apart
product types or categories that are typically purchased repetitively
(collectibles, grocery goods) or once (household appliances). Experiments
performed on three implicit datasets (two proprietary ones and an implicit
variant of the Netflix dataset) show that by integrating context-aware
information with our factorization framework into the state-of-the-art implicit
recommender algorithm the recommendation quality improves significantly.
|
1206.4813 | Adrian Buzatu | Adrian Buzatu, Andreas Warburton, Nils Krumnack, Wei-Ming Yao | A Novel in situ Trigger Combination Method | 17 pages, 2 figures, 6 tables, accepted by Nuclear Instruments and
Methods in Physics Research A | Nucl.Instrum.Meth. A711 (2013) 111-120 | 10.1016/j.nima.2013.01.034 | FERMILAB-PUB-12-296-E | physics.ins-det hep-ex physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Searches for rare physics processes using particle detectors in
high-luminosity colliding hadronic beam environments require the use of
multi-level trigger systems to reject colossal background rates in real time.
In analyses like the search for the Higgs boson, there is a need to maximize
the signal acceptance by combining multiple different trigger chains when
forming the offline data sample. In such statistically limited searches,
datasets are often amassed over periods of several years, during which the
trigger characteristics evolve and system performance can vary significantly.
Reliable production cross-section measurements and upper limits must take into
account a detailed understanding of the effective trigger inefficiency for
every selected candidate event. We present as an example the complex situation
of three trigger chains, based on missing energy and jet energy, that were
combined in the context of the search for the Higgs (H) boson produced in
association with a $W$ boson at the Collider Detector at Fermilab (CDF). We
briefly review the existing techniques for combining triggers, namely the
inclusion, division, and exclusion methods. We introduce and describe a novel
fourth in situ method whereby, for each candidate event, only the trigger chain
with the highest a priori probability of selecting the event is considered. We
compare the inclusion and novel in situ methods for signal event yields in the
CDF $WH$ search. This new combination method, by virtue of its scalability to
large numbers of differing trigger chains and insensitivity to correlations
between triggers, will benefit future long-running collider experiments,
including those currently operating on the Large Hadron Collider.
| [
{
"version": "v1",
"created": "Thu, 21 Jun 2012 09:14:15 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Apr 2013 13:28:58 GMT"
}
] | 2013-04-05T00:00:00 | [
[
"Buzatu",
"Adrian",
""
],
[
"Warburton",
"Andreas",
""
],
[
"Krumnack",
"Nils",
""
],
[
"Yao",
"Wei-Ming",
""
]
] | TITLE: A Novel in situ Trigger Combination Method
ABSTRACT: Searches for rare physics processes using particle detectors in
high-luminosity colliding hadronic beam environments require the use of
multi-level trigger systems to reject colossal background rates in real time.
In analyses like the search for the Higgs boson, there is a need to maximize
the signal acceptance by combining multiple different trigger chains when
forming the offline data sample. In such statistically limited searches,
datasets are often amassed over periods of several years, during which the
trigger characteristics evolve and system performance can vary significantly.
Reliable production cross-section measurements and upper limits must take into
account a detailed understanding of the effective trigger inefficiency for
every selected candidate event. We present as an example the complex situation
of three trigger chains, based on missing energy and jet energy, that were
combined in the context of the search for the Higgs (H) boson produced in
association with a $W$ boson at the Collider Detector at Fermilab (CDF). We
briefly review the existing techniques for combining triggers, namely the
inclusion, division, and exclusion methods. We introduce and describe a novel
fourth in situ method whereby, for each candidate event, only the trigger chain
with the highest a priori probability of selecting the event is considered. We
compare the inclusion and novel in situ methods for signal event yields in the
CDF $WH$ search. This new combination method, by virtue of its scalability to
large numbers of differing trigger chains and insensitivity to correlations
between triggers, will benefit future long-running collider experiments,
including those currently operating on the Large Hadron Collider.
|
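The "in situ" idea described above - for each candidate event, use only the trigger chain with the highest a priori probability of selecting it - can be illustrated with a few lines of array code. The sketch below is a toy illustration, not the CDF implementation; the per-event efficiency numbers are invented.

```python
import numpy as np

# Hypothetical per-event selection efficiencies for three trigger chains
# (rows: candidate events, columns: trigger chains).
eff = np.array([[0.92, 0.60, 0.35],
                [0.40, 0.85, 0.50],
                [0.10, 0.20, 0.95]])

best_chain = eff.argmax(axis=1)                   # in situ: pick the a priori best chain per event
event_eff = eff[np.arange(len(eff)), best_chain]  # efficiency that enters the yield estimate

print(best_chain)  # chain index used for each event
print(event_eff)   # per-event trigger efficiency under the in situ method
```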
1304.1262 | Conrad Sanderson | Arnold Wiliem, Yongkang Wong, Conrad Sanderson, Peter Hobson, Shaokang
Chen, Brian C. Lovell | Classification of Human Epithelial Type 2 Cell Indirect
Immunofluorescence Images via Codebook Based Descriptors | null | IEEE Workshop on Applications of Computer Vision (WACV), pp.
95-102, 2013 | 10.1109/WACV.2013.6475005 | null | q-bio.CB cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to
identify the existence of various diseases. A hallmark method for identifying
the presence of ANAs is the Indirect Immunofluorescence method on Human
Epithelial (HEp-2) cells, due to its high sensitivity and the large range of
antigens that can be detected. However, the method suffers from numerous
shortcomings, such as being subjective as well as time and labour intensive.
Computer Aided Diagnostic (CAD) systems have been developed to address these
problems, which automatically classify a HEp-2 cell image into one of its known
patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use
handpicked features to represent a HEp-2 cell image, which may only work in
limited scenarios. In this paper, we propose a cell classification system
comprised of a dual-region codebook-based descriptor, combined with the Nearest
Convex Hull Classifier. We evaluate the performance of several variants of the
descriptor on two publicly available datasets: ICPR HEp-2 cell classification
contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the
first time codebook-based descriptors are applied and studied in this domain.
Experiments show that the proposed system has consistent high performance and
is more robust than two recent CAD systems.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2013 07:51:32 GMT"
}
] | 2013-04-05T00:00:00 | [
[
"Wiliem",
"Arnold",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Hobson",
"Peter",
""
],
[
"Chen",
"Shaokang",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Classification of Human Epithelial Type 2 Cell Indirect
Immunofluorescence Images via Codebook Based Descriptors
ABSTRACT: The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to
identify the existence of various diseases. A hallmark method for identifying
the presence of ANAs is the Indirect Immunofluorescence method on Human
Epithelial (HEp-2) cells, due to its high sensitivity and the large range of
antigens that can be detected. However, the method suffers from numerous
shortcomings, such as being subjective as well as time and labour intensive.
Computer Aided Diagnostic (CAD) systems have been developed to address these
problems, which automatically classify a HEp-2 cell image into one of its known
patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use
handpicked features to represent a HEp-2 cell image, which may only work in
limited scenarios. In this paper, we propose a cell classification system
comprised of a dual-region codebook-based descriptor, combined with the Nearest
Convex Hull Classifier. We evaluate the performance of several variants of the
descriptor on two publicly available datasets: ICPR HEp-2 cell classification
contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the
first time codebook-based descriptors are applied and studied in this domain.
Experiments show that the proposed system has consistent high performance and
is more robust than two recent CAD systems.
|
1304.1391 | Sachin Talathi | Manu Nandan, Pramod P. Khargonekar, Sachin S. Talathi | Fast SVM training using approximate extreme points | The manuscript in revised form has been submitted to J. Machine
Learning Research | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applications of non-linear kernel Support Vector Machines (SVMs) to large
datasets are seriously hampered by their excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset, called the
representative set, of the training dataset. We present analytical results that
indicate the similarity of AESVM and SVM solutions. A linear time algorithm
based on convex hulls and extreme points is used to compute the representative
set in kernel space. Extensive computational experiments on nine datasets
compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM
\citep{Tsang07}, LASVM \citep{Bordes05}, $\text{SVM}^{\text{perf}}$
\citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM
implementation was found to train much faster than the other methods, while its
classification accuracy was similar to that of LIBSVM in all cases. In
particular, for a seizure detection dataset, AESVM training was almost $10^3$
times faster than LIBSVM and LASVM and more than forty times faster than CVM
and BVM. Additionally, AESVM also gave competitively fast classification times.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2013 15:08:31 GMT"
}
] | 2013-04-05T00:00:00 | [
[
"Nandan",
"Manu",
""
],
[
"Khargonekar",
"Pramod P.",
""
],
[
"Talathi",
"Sachin S.",
""
]
] | TITLE: Fast SVM training using approximate extreme points
ABSTRACT: Applications of non-linear kernel Support Vector Machines (SVMs) to large
datasets are seriously hampered by their excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset, called the
representative set, of the training dataset. We present analytical results that
indicate the similarity of AESVM and SVM solutions. A linear time algorithm
based on convex hulls and extreme points is used to compute the representative
set in kernel space. Extensive computational experiments on nine datasets
compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM
\citep{Tsang07}, LASVM \citep{Bordes05}, $\text{SVM}^{\text{perf}}$
\citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM
implementation was found to train much faster than the other methods, while its
classification accuracy was similar to that of LIBSVM in all cases. In
particular, for a seizure detection dataset, AESVM training was almost $10^3$
times faster than LIBSVM and LASVM and more than forty times faster than CVM
and BVM. Additionally, AESVM also gave competitively fast classification times.
|
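AESVM itself selects an approximate set of extreme points in kernel space; the details are in the paper. As a loose, low-dimensional illustration of the underlying idea only - train the SVM on a small "representative" subset built from extreme points - the toy sketch below uses exact convex-hull vertices in input space with scikit-learn's SVC. It is not the AESVM algorithm, and the dataset is synthetic.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Synthetic 2-D data; AESVM works with approximate extreme points in kernel space.
X, y = make_blobs(n_samples=2000, centers=2, cluster_std=2.0, random_state=0)

# Representative subset: convex-hull vertices of each class (input space only).
rep_idx = np.concatenate([np.flatnonzero(y == c)[ConvexHull(X[y == c]).vertices]
                          for c in np.unique(y)])

full = SVC(kernel="rbf").fit(X, y)
reduced = SVC(kernel="rbf").fit(X[rep_idx], y[rep_idx])
print(f"trained on {len(rep_idx)} of {len(X)} points")
print("full-data accuracy:   ", full.score(X, y))
print("representative subset:", reduced.score(X, y))
```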
1302.6396 | Michael Schreiber | Michael Schreiber | How to derive an advantage from the arbitrariness of the g-index | 13 pages, 3 tables, 3 figures, accepted for publication in Journal of
Informetrics | Journal of Informetrics, 7, 555-561 (2013) | 10.1016/j.joi.2013.02.003 | null | physics.soc-ph cs.DL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The definition of the g-index is as arbitrary as that of the h-index, because
the threshold number g^2 of citations to the g most cited papers can be
modified by a prefactor at one's discretion, thus taking into account more or
less of the highly cited publications within a dataset. In a case study I
investigate the citation records of 26 physicists and show that the prefactor
influences the ranking in terms of the generalized g-index less than for the
generalized h-index. I propose specifically a prefactor of 2 for the g-index,
because then the resulting values are of the same order of magnitude as for the
common h-index. In this way one can avoid the disadvantage of the original
g-index, namely that the values are usually substantially larger than for the
h-index and thus the precision problem is substantially larger; while the
advantages of the g-index over the h-index are kept. Like for the generalized
h-index, also for the generalized g-index different prefactors might be more
useful for investigations which concentrate only on top scientists with high
citation frequencies or on junior researchers with small numbers of citations.
| [
{
"version": "v1",
"created": "Tue, 26 Feb 2013 11:24:25 GMT"
}
] | 2013-04-04T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: How to derive an advantage from the arbitrariness of the g-index
ABSTRACT: The definition of the g-index is as arbitrary as that of the h-index, because
the threshold number g^2 of citations to the g most cited papers can be
modified by a prefactor at one's discretion, thus taking into account more or
less of the highly cited publications within a dataset. In a case study I
investigate the citation records of 26 physicists and show that the prefactor
influences the ranking in terms of the generalized g-index less than for the
generalized h-index. I propose specifically a prefactor of 2 for the g-index,
because then the resulting values are of the same order of magnitude as for the
common h-index. In this way one can avoid the disadvantage of the original
g-index, namely that the values are usually substantially larger than for the
h-index and thus the precision problem is substantially larger; while the
advantages of the g-index over the h-index are kept. Like for the generalized
h-index, also for the generalized g-index different prefactors might be more
useful for investigations which concentrate only on top scientists with high
citation frequencies or on junior researchers with small numbers of citations.
|
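For readers unfamiliar with the index being generalized: the g-index is the largest g such that the g most cited papers together received at least g^2 citations, and the proposal above multiplies that threshold by a prefactor. The sketch below implements one common convention (g capped at the number of papers); the citation record is invented for illustration.

```python
def generalized_g_index(citations, prefactor=1.0):
    """Largest g such that the g most cited papers together received at least
    prefactor * g**2 citations; prefactor=1.0 recovers the usual g-index."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= prefactor * i * i:
            g = i
    return g

record = [45, 30, 22, 18, 12, 9, 7, 5, 3, 1]     # hypothetical citation counts
print(generalized_g_index(record))                # -> 10 (classic g-index)
print(generalized_g_index(record, prefactor=2))   # -> 8  (prefactor 2, as proposed above)
```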
1304.0886 | Conrad Sanderson | Vikas Reddy, Conrad Sanderson, Brian C. Lovell | Improved Anomaly Detection in Crowded Scenes via Cell-based Analysis of
Foreground Speed, Size and Texture | null | IEEE Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), pp. 55-61, 2011 | 10.1109/CVPRW.2011.5981799 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A robust and efficient anomaly detection technique is proposed, capable of
dealing with crowded scenes where traditional tracking based approaches tend to
fail. Initial foreground segmentation of the input frames confines the analysis
to foreground objects and effectively ignores irrelevant background dynamics.
Input frames are split into non-overlapping cells, followed by extracting
features based on motion, size and texture from each cell. Each feature type is
independently analysed for the presence of an anomaly. Unlike most methods, a
refined estimate of object motion is achieved by computing the optical flow of
only the foreground pixels. The motion and size features are modelled by an
approximated version of kernel density estimation, which is computationally
efficient even for large training datasets. Texture features are modelled by an
adaptively grown codebook, with the number of entries in the codebook selected
in an online fashion. Experiments on the recently published UCSD Anomaly
Detection dataset show that the proposed method obtains considerably better
results than three recent approaches: MPPCA, social force, and mixture of
dynamic textures (MDT). The proposed method is also several orders of magnitude
faster than MDT, the next best performing method.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2013 09:31:27 GMT"
}
] | 2013-04-04T00:00:00 | [
[
"Reddy",
"Vikas",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Improved Anomaly Detection in Crowded Scenes via Cell-based Analysis of
Foreground Speed, Size and Texture
ABSTRACT: A robust and efficient anomaly detection technique is proposed, capable of
dealing with crowded scenes where traditional tracking based approaches tend to
fail. Initial foreground segmentation of the input frames confines the analysis
to foreground objects and effectively ignores irrelevant background dynamics.
Input frames are split into non-overlapping cells, followed by extracting
features based on motion, size and texture from each cell. Each feature type is
independently analysed for the presence of an anomaly. Unlike most methods, a
refined estimate of object motion is achieved by computing the optical flow of
only the foreground pixels. The motion and size features are modelled by an
approximated version of kernel density estimation, which is computationally
efficient even for large training datasets. Texture features are modelled by an
adaptively grown codebook, with the number of entries in the codebook selected
in an online fashion. Experiments on the recently published UCSD Anomaly
Detection dataset show that the proposed method obtains considerably better
results than three recent approaches: MPPCA, social force, and mixture of
dynamic textures (MDT). The proposed method is also several orders of magnitude
faster than MDT, the next best performing method.
|
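The abstract above models motion and size features with an approximated kernel density estimate and flags cells whose features fall in low-density regions. The sketch below conveys that flagging step only, using SciPy's exact Gaussian KDE and synthetic per-cell (speed, size) features; the paper's approximated KDE, codebook texture model and cell layout are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic training features: per-cell (speed, size) pairs from "normal" frames.
rng = np.random.default_rng(0)
train = rng.normal(loc=[1.0, 20.0], scale=[0.3, 5.0], size=(5000, 2))

kde = gaussian_kde(train.T)                   # exact KDE; the paper approximates this step
threshold = np.quantile(kde(train.T), 0.01)   # densities below this look anomalous

def cell_is_anomalous(speed: float, size: float) -> bool:
    """Flag a cell whose (speed, size) lies in a low-density region of the model."""
    return kde([[speed], [size]])[0] < threshold

print(cell_is_anomalous(1.1, 22.0))   # typical cell, expected False
print(cell_is_anomalous(6.0, 120.0))  # fast, large blob, expected True
```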
1304.0913 | Morteza Ansarinia | Ahmad Salahi, Morteza Ansarinia | Predicting Network Attacks Using Ontology-Driven Inference | 9 pages | International Journal of Information and Communication Technology
(IJICT), Volume 4, Issue 1, 2012 | null | null | cs.AI cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph knowledge models and ontologies are very powerful modeling and
reasoning tools. We propose an effective approach to network attack modeling
and attack prediction, which play important roles in security management. The goals
of this study are: First we model network attacks, their prerequisites and
consequences using knowledge representation methods in order to provide
description logic reasoning and inference over attack domain concepts. And
secondly, we propose an ontology-based system which predicts potential attacks
using inference over observed information provided by sensory inputs. We
generate our ontology and evaluate the corresponding methods using CAPEC, CWE, and
CVE hierarchical datasets. Results from experiments show significant capability
improvements compared to traditional hierarchical and relational models. The
proposed method also reduces false alarms and improves intrusion detection
effectiveness.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2013 11:04:38 GMT"
}
] | 2013-04-04T00:00:00 | [
[
"Salahi",
"Ahmad",
""
],
[
"Ansarinia",
"Morteza",
""
]
] | TITLE: Predicting Network Attacks Using Ontology-Driven Inference
ABSTRACT: Graph knowledge models and ontologies are very powerful modeling and
reasoning tools. We propose an effective approach to network attack modeling
and attack prediction, which play important roles in security management. The goals
of this study are: First we model network attacks, their prerequisites and
consequences using knowledge representation methods in order to provide
description logic reasoning and inference over attack domain concepts. And
secondly, we propose an ontology-based system which predicts potential attacks
using inference over observed information provided by sensory inputs. We
generate our ontology and evaluate the corresponding methods using CAPEC, CWE, and
CVE hierarchical datasets. Results from experiments show significant capability
improvements compared to traditional hierarchical and relational models. The
proposed method also reduces false alarms and improves intrusion detection
effectiveness.
|
1212.0142 | Pierre Sermanet | Pierre Sermanet and Koray Kavukcuoglu and Soumith Chintala and Yann
LeCun | Pedestrian Detection with Unsupervised Multi-Stage Feature Learning | 12 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pedestrian detection is a problem of considerable practical interest. Adding
to the list of successful applications of deep learning methods to vision, we
report state-of-the-art and competitive results on all major pedestrian
datasets with a convolutional network model. The model uses a few new twists,
such as multi-stage features, connections that skip layers to integrate global
shape information with local distinctive motif information, and an unsupervised
method based on convolutional sparse coding to pre-train the filters at each
stage.
| [
{
"version": "v1",
"created": "Sat, 1 Dec 2012 18:13:03 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2013 18:05:46 GMT"
}
] | 2013-04-03T00:00:00 | [
[
"Sermanet",
"Pierre",
""
],
[
"Kavukcuoglu",
"Koray",
""
],
[
"Chintala",
"Soumith",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Pedestrian Detection with Unsupervised Multi-Stage Feature Learning
ABSTRACT: Pedestrian detection is a problem of considerable practical interest. Adding
to the list of successful applications of deep learning methods to vision, we
report state-of-the-art and competitive results on all major pedestrian
datasets with a convolutional network model. The model uses a few new twists,
such as multi-stage features, connections that skip layers to integrate global
shape information with local distinctive motif information, and an unsupervised
method based on convolutional sparse coding to pre-train the filters at each
stage.
|
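The abstract's "connections that skip layers" means the classifier sees features from more than one stage of the convolutional network, mixing lower-level local detail with higher-level shape information. The toy PyTorch module below sketches that wiring only; it is a modern re-phrasing for illustration, not the authors' architecture or training setup (which also involves unsupervised convolutional sparse-coding pre-training).

```python
import torch
import torch.nn as nn

class MultiStageFeatures(nn.Module):
    """Toy two-stage convnet whose classifier is fed features from both stages."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(4))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(4))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(16 + 32, n_classes)

    def forward(self, x):
        f1 = self.stage1(x)   # earlier stage: local, lower-level features
        f2 = self.stage2(f1)  # later stage: more global shape features
        z = torch.cat([self.pool(f1).flatten(1), self.pool(f2).flatten(1)], dim=1)
        return self.classifier(z)   # the "skip" is f1 feeding the classifier directly

model = MultiStageFeatures()
print(model(torch.randn(2, 3, 128, 64)).shape)   # torch.Size([2, 2])
```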
1304.0725 | Ashok P | P. Ashok, G.M Kadhar Nawaz, E. Elayaraja, V. Vadivel | Improved Performance of Unsupervised Method by Renovated K-Means | 7 pages, to strengthen the k means algorithm | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is a separation of data into groups of similar objects. Every
group, called a cluster, consists of objects that are similar to one another and
dissimilar to objects of other groups. In this paper, the K-Means algorithm is
implemented with three distance functions in order to identify the optimal
distance function for clustering. The proposed K-Means algorithm is compared
with the K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted
K-Means (DWK-Means) algorithms using the Davies-Bouldin index, execution time
and iteration count. Experimental results show that the proposed K-Means
algorithm performed better on the Iris and Wine datasets when compared with the
other three clustering methods.
| [
{
"version": "v1",
"created": "Mon, 11 Mar 2013 05:28:06 GMT"
}
] | 2013-04-03T00:00:00 | [
[
"Ashok",
"P.",
""
],
[
"Nawaz",
"G. M Kadhar",
""
],
[
"Elayaraja",
"E.",
""
],
[
"Vadivel",
"V.",
""
]
] | TITLE: Improved Performance of Unsupervised Method by Renovated K-Means
ABSTRACT: Clustering is a separation of data into groups of similar objects. Every
group, called a cluster, consists of objects that are similar to one another and
dissimilar to objects of other groups. In this paper, the K-Means algorithm is
implemented with three distance functions in order to identify the optimal
distance function for clustering. The proposed K-Means algorithm is compared
with the K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted
K-Means (DWK-Means) algorithms using the Davies-Bouldin index, execution time
and iteration count. Experimental results show that the proposed K-Means
algorithm performed better on the Iris and Wine datasets when compared with the
other three clustering methods.
|
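The abstract above compares K-Means under three distance functions using the Davies-Bouldin index, execution time and iteration count, but does not name the distances in this excerpt. The sketch below is a generic stand-in: a plain K-Means with a pluggable assignment distance (Euclidean, Manhattan and Chebyshev are our assumptions) evaluated with scikit-learn's Davies-Bouldin score on Iris. It is not the paper's renovated algorithm.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score

def kmeans(X, k, distance, n_iter=100, seed=0):
    """Plain K-Means whose assignment step uses a pluggable distance function."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.array([[distance(x, c) for c in centroids] for x in X])
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

distances = {
    "euclidean": lambda a, b: np.linalg.norm(a - b),
    "manhattan": lambda a, b: np.abs(a - b).sum(),
    "chebyshev": lambda a, b: np.abs(a - b).max(),
}

X = load_iris().data
for name, dist in distances.items():
    labels = kmeans(X, k=3, distance=dist)
    print(name, davies_bouldin_score(X, labels))   # lower Davies-Bouldin is better
```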
1303.7012 | Abedelaziz Mohaisen | Abedelaziz Mohaisen and Omar Alrawi | Unveiling Zeus | Accepted to SIMPLEX 2013 (a workshop held in conjunction with WWW
2013) | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malware family classification is an age-old problem that many Anti-Virus (AV)
companies have tackled. There are two common techniques used for
classification, signature based and behavior based. Signature based
classification uses a common sequence of bytes that appears in the binary code
to identify and detect a family of malware. Behavior based classification uses
artifacts created by malware during execution for identification. In this paper
we report on a unique dataset we obtained from our operations and classified
using several machine learning techniques using the behavior-based approach.
The main class of malware we are interested in classifying is the popular Zeus
malware. For its classification we identify 65 features that are unique and
robust for identifying malware families. We show that artifacts like file
system, registry, and network features can be used to identify distinct malware
families with high accuracy---in some cases as high as 95%.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2013 00:11:54 GMT"
}
] | 2013-03-29T00:00:00 | [
[
"Mohaisen",
"Abedelaziz",
""
],
[
"Alrawi",
"Omar",
""
]
] | TITLE: Unveiling Zeus
ABSTRACT: Malware family classification is an age-old problem that many Anti-Virus (AV)
companies have tackled. There are two common techniques used for
classification, signature based and behavior based. Signature based
classification uses a common sequence of bytes that appears in the binary code
to identify and detect a family of malware. Behavior based classification uses
artifacts created by malware during execution for identification. In this paper
we report on a unique dataset we obtained from our operations and classified
using several machine learning techniques using the behavior-based approach.
The main class of malware we are interested in classifying is the popular Zeus
malware. For its classification we identify 65 features that are unique and
robust for identifying malware families. We show that artifacts like file
system, registry, and network features can be used to identify distinct malware
families with high accuracy---in some cases as high as 95%.
|
1303.6886 | Jordan Raddick | M. Jordan Raddick, Georgia Bracey, Pamela L. Gay, Chris J. Lintott,
Carie Cardamone, Phil Murray, Kevin Schawinski, Alexander S. Szalay, Jan
Vandenberg | Galaxy Zoo: Motivations of Citizen Scientists | 41 pages, including 6 figures and one appendix. In press at Astronomy
Education Review | null | null | null | physics.ed-ph astro-ph.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Citizen science, in which volunteers work with professional scientists to
conduct research, is expanding due to large online datasets. To plan projects,
it is important to understand volunteers' motivations for participating. This
paper analyzes results from an online survey of nearly 11,000 volunteers in
Galaxy Zoo, an astronomy citizen science project. Results show that volunteers'
primary motivation is a desire to contribute to scientific research. We
encourage other citizen science projects to study the motivations of their
volunteers, to see whether and how these results may be generalized to inform
the field of citizen science.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 16:28:51 GMT"
}
] | 2013-03-28T00:00:00 | [
[
"Raddick",
"M. Jordan",
""
],
[
"Bracey",
"Georgia",
""
],
[
"Gay",
"Pamela L.",
""
],
[
"Lintott",
"Chris J.",
""
],
[
"Cardamone",
"Carie",
""
],
[
"Murray",
"Phil",
""
],
[
"Schawinski",
"Kevin",
""
],
[
"Szalay",
"Alexander S.",
""
],
[
"Vandenberg",
"Jan",
""
]
] | TITLE: Galaxy Zoo: Motivations of Citizen Scientists
ABSTRACT: Citizen science, in which volunteers work with professional scientists to
conduct research, is expanding due to large online datasets. To plan projects,
it is important to understand volunteers' motivations for participating. This
paper analyzes results from an online survey of nearly 11,000 volunteers in
Galaxy Zoo, an astronomy citizen science project. Results show that volunteers'
primary motivation is a desire to contribute to scientific research. We
encourage other citizen science projects to study the motivations of their
volunteers, to see whether and how these results may be generalized to inform
the field of citizen science.
|
0812.0146 | Vladimir Pestov | Vladimir Pestov | Lower Bounds on Performance of Metric Tree Indexing Schemes for Exact
Similarity Search in High Dimensions | 21 pages, revised submission to Algorithmica, an improved and
extended journal version of the conference paper arXiv:0812.0146v3 [cs.DS],
with lower bounds strengthened, and the proof of the main Theorem 4
simplified | Algorithmica 66 (2013), 310-328 | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within a mathematically rigorous model, we analyse the curse of
dimensionality for deterministic exact similarity search in the context of
popular indexing schemes: metric trees. The datasets $X$ are sampled randomly
from a domain $\Omega$, equipped with a distance, $\rho$, and an underlying
probability distribution, $\mu$. While performing an asymptotic analysis, we
send the intrinsic dimension $d$ of $\Omega$ to infinity, and assume that the
size of a dataset, $n$, grows superpolynomially yet subexponentially in $d$.
Exact similarity search refers to finding the nearest neighbour in the dataset
$X$ to a query point $\omega\in\Omega$, where the query points are subject to
the same probability distribution $\mu$ as datapoints. Let $\mathscr F$ denote
a class of all 1-Lipschitz functions on $\Omega$ that can be used as decision
functions in constructing a hierarchical metric tree indexing scheme. Suppose
the VC dimension of the class of all sets $\{\omega\colon f(\omega)\geq a\}$,
$a\in\mathbb{R}$ is $o(n^{1/4}/\log^2n)$. (In view of a 1995 result of Goldberg and
Jerrum, even a stronger complexity assumption $d^{O(1)}$ is reasonable.) We
deduce the $\Omega(n^{1/4})$ lower bound on the expected average case
performance of hierarchical metric-tree based indexing schemes for exact
similarity search in $(\Omega,X)$. In particular, this bound is superpolynomial
in $d$.
| [
{
"version": "v1",
"created": "Sun, 30 Nov 2008 15:17:22 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Aug 2010 03:42:50 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2011 16:17:39 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Feb 2012 18:38:50 GMT"
}
] | 2013-03-27T00:00:00 | [
[
"Pestov",
"Vladimir",
""
]
] | TITLE: Lower Bounds on Performance of Metric Tree Indexing Schemes for Exact
Similarity Search in High Dimensions
ABSTRACT: Within a mathematically rigorous model, we analyse the curse of
dimensionality for deterministic exact similarity search in the context of
popular indexing schemes: metric trees. The datasets $X$ are sampled randomly
from a domain $\Omega$, equipped with a distance, $\rho$, and an underlying
probability distribution, $\mu$. While performing an asymptotic analysis, we
send the intrinsic dimension $d$ of $\Omega$ to infinity, and assume that the
size of a dataset, $n$, grows superpolynomially yet subexponentially in $d$.
Exact similarity search refers to finding the nearest neighbour in the dataset
$X$ to a query point $\omega\in\Omega$, where the query points are subject to
the same probability distribution $\mu$ as datapoints. Let $\mathscr F$ denote
a class of all 1-Lipschitz functions on $\Omega$ that can be used as decision
functions in constructing a hierarchical metric tree indexing scheme. Suppose
the VC dimension of the class of all sets $\{\omega\colon f(\omega)\geq a\}$,
$a\in\mathbb{R}$ is $o(n^{1/4}/\log^2n)$. (In view of a 1995 result of Goldberg and
Jerrum, even a stronger complexity assumption $d^{O(1)}$ is reasonable.) We
deduce the $\Omega(n^{1/4})$ lower bound on the expected average case
performance of hierarchical metric-tree based indexing schemes for exact
similarity search in $(\Omega,X)$. In particular, this bound is superpolynomial
in $d$.
|
1006.2761 | Yuliang Jin | Yuliang Jin, Dmitrij Turaev, Thomas Weinmaier, Thomas Rattei, Hernan
A. Makse | The evolutionary dynamics of protein-protein interaction networks
inferred from the reconstruction of ancient networks | null | PLoS ONE 2013, Volume 8, Issue 3, e58134 | 10.1371/journal.pone.0058134 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular functions are based on the complex interplay of proteins, therefore
the structure and dynamics of these protein-protein interaction (PPI) networks
are the key to the functional understanding of cells. In recent years,
large-scale PPI networks of several model organisms were investigated.
Methodological improvements now allow the analysis of PPI networks of multiple
organisms simultaneously as well as the direct modeling of ancestral networks.
This provides the opportunity to challenge existing assumptions on network
evolution. We utilized present-day PPI networks from integrated datasets of
seven model organisms and developed a theoretical and bioinformatic framework
for studying the evolutionary dynamics of PPI networks. A novel filtering
approach using percolation analysis was developed to remove low confidence
interactions based on topological constraints. We then reconstructed the
ancient PPI networks of different ancestors, for which the ancestral proteomes,
as well as the ancestral interactions, were inferred. Ancestral proteins were
reconstructed using orthologous groups on different evolutionary levels. A
stochastic approach, using the duplication-divergence model, was developed for
estimating the probabilities of ancient interactions from today's PPI networks.
The growth rates for nodes, edges, sizes and modularities of the networks
indicate multiplicative growth and are consistent with the results from
independent static analysis. Our results support the duplication-divergence
model of evolution and indicate fractality and multiplicative growth as general
properties of the PPI network structure and dynamics.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2010 16:40:39 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Feb 2013 15:36:49 GMT"
}
] | 2013-03-27T00:00:00 | [
[
"Jin",
"Yuliang",
""
],
[
"Turaev",
"Dmitrij",
""
],
[
"Weinmaier",
"Thomas",
""
],
[
"Rattei",
"Thomas",
""
],
[
"Makse",
"Hernan A.",
""
]
] | TITLE: The evolutionary dynamics of protein-protein interaction networks
inferred from the reconstruction of ancient networks
ABSTRACT: Cellular functions are based on the complex interplay of proteins, therefore
the structure and dynamics of these protein-protein interaction (PPI) networks
are the key to the functional understanding of cells. In recent years,
large-scale PPI networks of several model organisms were investigated.
Methodological improvements now allow the analysis of PPI networks of multiple
organisms simultaneously as well as the direct modeling of ancestral networks.
This provides the opportunity to challenge existing assumptions on network
evolution. We utilized present-day PPI networks from integrated datasets of
seven model organisms and developed a theoretical and bioinformatic framework
for studying the evolutionary dynamics of PPI networks. A novel filtering
approach using percolation analysis was developed to remove low confidence
interactions based on topological constraints. We then reconstructed the
ancient PPI networks of different ancestors, for which the ancestral proteomes,
as well as the ancestral interactions, were inferred. Ancestral proteins were
reconstructed using orthologous groups on different evolutionary levels. A
stochastic approach, using the duplication-divergence model, was developed for
estimating the probabilities of ancient interactions from today's PPI networks.
The growth rates for nodes, edges, sizes and modularities of the networks
indicate multiplicative growth and are consistent with the results from
independent static analysis. Our results support the duplication-divergence
model of evolution and indicate fractality and multiplicative growth as general
properties of the PPI network structure and dynamics.
|
1303.4969 | Jeff Jones Dr | Jeff Jones, Andrew Adamatzky | Computation of the Travelling Salesman Problem by a Shrinking Blob | 27 Pages, 13 Figures. 25-03-13: Amended typos | null | null | null | cs.ET cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Travelling Salesman Problem (TSP) is a well known and challenging
combinatorial optimisation problem. Its computational intractability has
attracted a number of heuristic approaches to generate satisfactory, if not
optimal, candidate solutions. In this paper we demonstrate a simple
unconventional computation method to approximate the Euclidean TSP using a
virtual material approach. The morphological adaptation behaviour of the
material emerges from the low-level interactions of a population of particles
moving within a diffusive lattice. A `blob' of this material is placed over a
set of data points projected into the lattice, representing TSP city locations,
and the blob is reduced in size over time. As the blob shrinks it
morphologically adapts to the configuration of the cities. The shrinkage
process automatically stops when the blob no longer completely covers all
cities. By manually tracing the perimeter of the blob a path between cities is
elicited corresponding to a TSP tour. Over 6 runs on 20 randomly generated
datasets of 20 cities this simple and unguided method found tours with a mean
best tour length of 1.04, mean average tour length of 1.07 and mean worst tour
length of 1.09 when expressed as a fraction of the minimal tour computed by an
exact TSP solver. We examine the insertion mechanism by which the blob
constructs a tour, note some properties and limitations of its performance, and
discuss the relationship between the blob TSP and proximity graphs which group
points on the plane. The method is notable for its simplicity and the spatially
represented mechanical mode of its operation. We discuss similarities between
this method and previously suggested models of human performance on the TSP and
suggest possibilities for further improvement.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:36:54 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Mar 2013 17:45:47 GMT"
}
] | 2013-03-27T00:00:00 | [
[
"Jones",
"Jeff",
""
],
[
"Adamatzky",
"Andrew",
""
]
] | TITLE: Computation of the Travelling Salesman Problem by a Shrinking Blob
ABSTRACT: The Travelling Salesman Problem (TSP) is a well known and challenging
combinatorial optimisation problem. Its computational intractability has
attracted a number of heuristic approaches to generate satisfactory, if not
optimal, candidate solutions. In this paper we demonstrate a simple
unconventional computation method to approximate the Euclidean TSP using a
virtual material approach. The morphological adaptation behaviour of the
material emerges from the low-level interactions of a population of particles
moving within a diffusive lattice. A `blob' of this material is placed over a
set of data points projected into the lattice, representing TSP city locations,
and the blob is reduced in size over time. As the blob shrinks it
morphologically adapts to the configuration of the cities. The shrinkage
process automatically stops when the blob no longer completely covers all
cities. By manually tracing the perimeter of the blob a path between cities is
elicited corresponding to a TSP tour. Over 6 runs on 20 randomly generated
datasets of 20 cities this simple and unguided method found tours with a mean
best tour length of 1.04, mean average tour length of 1.07 and mean worst tour
length of 1.09 when expressed as a fraction of the minimal tour computed by an
exact TSP solver. We examine the insertion mechanism by which the blob
constructs a tour, note some properties and limitations of its performance, and
discuss the relationship between the blob TSP and proximity graphs which group
points on the plane. The method is notable for its simplicity and the spatially
represented mechanical mode of its operation. We discuss similarities between
this method and previously suggested models of human performance on the TSP and
suggest possibilities for further improvement.
|
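The tour-quality figures in the record above are ratios against an exact solver's minimal tour. A minimal sketch of that ratio computation is given below; the city coordinates and tour orders are hypothetical, and this is not the shrinking-blob method itself:

```python
import math

def tour_length(cities, order):
    """Total Euclidean length of the closed tour visiting 'cities' in 'order'."""
    n = len(order)
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % n]])
               for i in range(n))

def tour_ratio(cities, candidate_order, optimal_order):
    """Candidate tour length expressed as a fraction of the optimal tour length."""
    return tour_length(cities, candidate_order) / tour_length(cities, optimal_order)

# Hypothetical example: four cities on a unit square.
cities = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_ratio(cities, [0, 2, 1, 3], [0, 1, 2, 3]))  # > 1.0 for the crossing tour
```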
1303.6271 | Marcel Blattner | J\'er\^ome Kunegis, Marcel Blattner, Christine Moser | Preferential Attachment in Online Networks: Measurement and Explanations | 10 pages, 5 figures, Accepted for the WebSci'13 Conference, Paris,
2013 | null | null | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We perform an empirical study of the preferential attachment phenomenon in
temporal networks and show that on the Web, networks follow a nonlinear
preferential attachment model in which the exponent depends on the type of
network considered. The classical preferential attachment model for networks by
Barab\'asi and Albert (1999) assumes a linear relationship between the number
of neighbors of a node in a network and the probability of attachment. Although
this assumption is widely made in Web Science and related fields, the
underlying linearity is rarely measured. To fill this gap, this paper performs
an empirical longitudinal (time-based) study on forty-seven diverse Web network
datasets from seven network categories and including directed, undirected and
bipartite networks. We show that contrary to the usual assumption, preferential
attachment is nonlinear in the networks under consideration. Furthermore, we
observe that the deviation from linearity is dependent on the type of network,
giving sublinear attachment in certain types of networks, and superlinear
attachment in others. Thus, we introduce the preferential attachment exponent
$\beta$ as a novel numerical network measure that can be used to discriminate
different types of networks. We propose explanations for the behavior of that
network measure, based on the mechanisms that underlie the growth of the network
in question.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2013 09:23:39 GMT"
}
] | 2013-03-27T00:00:00 | [
[
"Kunegis",
"Jérôme",
""
],
[
"Blattner",
"Marcel",
""
],
[
"Moser",
"Christine",
""
]
] | TITLE: Preferential Attachment in Online Networks: Measurement and Explanations
ABSTRACT: We perform an empirical study of the preferential attachment phenomenon in
temporal networks and show that on the Web, networks follow a nonlinear
preferential attachment model in which the exponent depends on the type of
network considered. The classical preferential attachment model for networks by
Barab\'asi and Albert (1999) assumes a linear relationship between the number
of neighbors of a node in a network and the probability of attachment. Although
this assumption is widely made in Web Science and related fields, the
underlying linearity is rarely measured. To fill this gap, this paper performs
an empirical longitudinal (time-based) study on forty-seven diverse Web network
datasets from seven network categories and including directed, undirected and
bipartite networks. We show that contrary to the usual assumption, preferential
attachment is nonlinear in the networks under consideration. Furthermore, we
observe that the deviation from linearity is dependent on the type of network,
giving sublinear attachment in certain types of networks, and superlinear
attachment in others. Thus, we introduce the preferential attachment exponent
$\beta$ as a novel numerical network measure that can be used to discriminate
different types of networks. We propose explanations for the behavior of that
network measure, based on the mechanisms that underlie the growth of the network
in question.
|
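The record above measures a (possibly nonlinear) preferential attachment exponent from temporal network data. The following is a naive sketch of one way such an exponent can be estimated from a time-ordered edge stream, by regressing the log attachment rate against the log degree; the function name, estimator details and lack of binning are illustrative assumptions, not the authors' procedure:

```python
import numpy as np
from collections import Counter, defaultdict

def attachment_exponent(edge_stream):
    """Rough estimate of the attachment exponent from a time-ordered list of
    undirected edges [(u, v), ...]. For every new edge we record the current
    degree of each endpoint that already existed (attachment events) and how
    many nodes of each degree were available at that moment (exposure); the
    slope of log(events / exposure) versus log(degree) is the exponent.
    Needs a stream long enough to populate several distinct degree values."""
    degree = Counter()
    events = defaultdict(int)
    exposure = defaultdict(int)
    for u, v in edge_stream:
        for k, count in Counter(degree.values()).items():
            exposure[k] += count              # nodes available at degree k
        for node in (u, v):
            if degree[node] > 0:
                events[degree[node]] += 1     # attachment to an existing node
        degree[u] += 1
        degree[v] += 1
    ks = sorted(k for k in events if exposure[k] > 0)
    rates = [events[k] / exposure[k] for k in ks]
    slope, _ = np.polyfit(np.log(ks), np.log(rates), 1)
    return float(slope)
```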
1303.6361 | Conrad Sanderson | Sandra Mau, Shaokang Chen, Conrad Sanderson, Brian C. Lovell | Video Face Matching using Subset Selection and Clustering of
Probabilistic Multi-Region Histograms | null | International Conference of Image and Vision Computing New Zealand
(IVCNZ), 2010 | 10.1109/IVCNZ.2010.6148860 | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Balancing computational efficiency with recognition accuracy is one of the
major challenges in real-world video-based face recognition. A significant
design decision for any such system is whether to process and use all possible
faces detected over the video frames, or whether to select only a few "best"
faces. This paper presents a video face recognition system based on
probabilistic Multi-Region Histograms to characterise performance trade-offs
in: (i) selecting a subset of faces compared to using all faces, and (ii)
combining information from all faces via clustering. Three face selection
metrics are evaluated for choosing a subset: face detection confidence, random
subset, and sequential selection. Experiments on the recently introduced MOBIO
dataset indicate that the usage of all faces through clustering always
outperformed selecting only a subset of faces. The experiments also show that
the face selection metric based on face detection confidence generally provides
better recognition performance than random or sequential sampling. Moreover,
the optimal number of faces varies drastically across selection metric and
subsets of MOBIO. Given the trade-offs between computational effort,
recognition accuracy and robustness, it is recommended that face feature
clustering would be most advantageous in batch processing (particularly for
video-based watchlists), whereas face selection methods should be limited to
applications with significant computational restrictions.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2013 01:34:42 GMT"
}
] | 2013-03-27T00:00:00 | [
[
"Mau",
"Sandra",
""
],
[
"Chen",
"Shaokang",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Video Face Matching using Subset Selection and Clustering of
Probabilistic Multi-Region Histograms
ABSTRACT: Balancing computational efficiency with recognition accuracy is one of the
major challenges in real-world video-based face recognition. A significant
design decision for any such system is whether to process and use all possible
faces detected over the video frames, or whether to select only a few "best"
faces. This paper presents a video face recognition system based on
probabilistic Multi-Region Histograms to characterise performance trade-offs
in: (i) selecting a subset of faces compared to using all faces, and (ii)
combining information from all faces via clustering. Three face selection
metrics are evaluated for choosing a subset: face detection confidence, random
subset, and sequential selection. Experiments on the recently introduced MOBIO
dataset indicate that the usage of all faces through clustering always
outperformed selecting only a subset of faces. The experiments also show that
the face selection metric based on face detection confidence generally provides
better recognition performance than random or sequential sampling. Moreover,
the optimal number of faces varies drastically across selection metric and
subsets of MOBIO. Given the trade-offs between computational effort,
recognition accuracy and robustness, it is recommended that face feature
clustering would be most advantageous in batch processing (particularly for
video-based watchlists), whereas face selection methods should be limited to
applications with significant computational restrictions.
|
1302.5235 | Adrien Guille | Adrien Guille, Hakim Hacid, C\'ecile Favre | Predicting the Temporal Dynamics of Information Diffusion in Social
Networks | 10 pages; (corrected typos) | null | null | ERIC Laboratory Report RI-ERIC-13/001 | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online social networks play a major role in the spread of information at very
large scale and it becomes essential to provide means to analyse this
phenomenon. In this paper we address the issue of predicting the temporal
dynamics of the information diffusion process. We develop a graph-based
approach built on the assumption that the macroscopic dynamics of the spreading
process are explained by the topology of the network and the interactions that
occur through it, between pairs of users, on the basis of properties at the
microscopic level. We introduce a generic model, called T-BaSIC, and describe
how to estimate its parameters from users behaviours using machine learning
techniques. Contrary to classical approaches where the parameters are fixed in
advance, T-BaSIC's parameters are functions depending on time, which makes it
possible to better approximate and adapt to the diffusion phenomenon observed in online
social networks. Our proposal has been validated on real Twitter datasets.
Experiments show that our approach is able to capture the particular patterns
of diffusion depending on the studied sub-networks of users and topics. The
results corroborate the "two-step" theory (1955) that states that information
flows from media to a few "opinion leaders" who then transfer it to the mass
population via social networks and show that it applies in the online context.
This work also highlights interesting recommendations for future
investigations.
| [
{
"version": "v1",
"created": "Thu, 21 Feb 2013 10:06:35 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Mar 2013 10:21:08 GMT"
}
] | 2013-03-26T00:00:00 | [
[
"Guille",
"Adrien",
""
],
[
"Hacid",
"Hakim",
""
],
[
"Favre",
"Cécile",
""
]
] | TITLE: Predicting the Temporal Dynamics of Information Diffusion in Social
Networks
ABSTRACT: Online social networks play a major role in the spread of information at very
large scale and it becomes essential to provide means to analyse this
phenomenon. In this paper we address the issue of predicting the temporal
dynamics of the information diffusion process. We develop a graph-based
approach built on the assumption that the macroscopic dynamics of the spreading
process are explained by the topology of the network and the interactions that
occur through it, between pairs of users, on the basis of properties at the
microscopic level. We introduce a generic model, called T-BaSIC, and describe
how to estimate its parameters from users behaviours using machine learning
techniques. Contrary to classical approaches where the parameters are fixed in
advance, T-BaSIC's parameters are functions depending on time, which makes it
possible to better approximate and adapt to the diffusion phenomenon observed in online
social networks. Our proposal has been validated on real Twitter datasets.
Experiments show that our approach is able to capture the particular patterns
of diffusion depending on the studied sub-networks of users and topics. The
results corroborate the "two-step" theory (1955) that states that information
flows from media to a few "opinion leaders" who then transfer it to the mass
population via social networks and show that it applies in the online context.
This work also highlights interesting recommendations for future
investigations.
|
1303.5926 | Sourish Dasgupta | Sourish Dasgupta, Satish Bhat, Yugyung Lee | STC: Semantic Taxonomical Clustering for Service Category Learning | 14 pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Service discovery is one of the key problems that has been widely researched
in the area of Service Oriented Architecture (SOA) based systems. Service
category learning is a technique for efficiently facilitating service
discovery. Most approaches for service category learning are based on suitable
similarity distance measures using thresholds. Threshold selection is
essentially difficult and often leads to unsatisfactory accuracy. In this
paper, we have proposed a self-organizing based clustering algorithm called
Semantic Taxonomical Clustering (STC) for taxonomically organizing services
with self-organizing information and knowledge. We have tested the STC
algorithm on both randomly generated data and the standard OWL-S TC dataset. We
have observed promising results both in terms of classification accuracy and
runtime performance compared to existing approaches.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2013 08:30:44 GMT"
}
] | 2013-03-26T00:00:00 | [
[
"Dasgupta",
"Sourish",
""
],
[
"Bhat",
"Satish",
""
],
[
"Lee",
"Yugyung",
""
]
] | TITLE: STC: Semantic Taxonomical Clustering for Service Category Learning
ABSTRACT: Service discovery is one of the key problems that has been widely researched
in the area of Service Oriented Architecture (SOA) based systems. Service
category learning is a technique for efficiently facilitating service
discovery. Most approaches for service category learning are based on suitable
similarity distance measures using thresholds. Threshold selection is
essentially difficult and often leads to unsatisfactory accuracy. In this
paper, we have proposed a self-organizing based clustering algorithm called
Semantic Taxonomical Clustering (STC) for taxonomically organizing services
with self-organizing information and knowledge. We have tested the STC
algorithm on both randomly generated data and the standard OWL-S TC dataset. We
have observed promising results both in terms of classification accuracy and
runtime performance compared to existing approaches.
|
1303.6021 | Conrad Sanderson | Andres Sanin, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell | Spatio-Temporal Covariance Descriptors for Action and Gesture
Recognition | null | IEEE Workshop on Applications of Computer Vision, pp. 103-110,
2013 | 10.1109/WACV.2013.6475006 | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new action and gesture recognition method based on
spatio-temporal covariance descriptors and a weighted Riemannian locality
preserving projection approach that takes into account the curved space formed
by the descriptors. The weighted projection is then exploited during boosting
to create a final multiclass classification algorithm that employs the most
useful spatio-temporal regions. We also show how the descriptors can be
computed quickly through the use of integral video representations. Experiments
on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets
indicate superior performance of the proposed method compared to several recent
state-of-the-art techniques. The proposed method is robust and does not require
additional processing of the videos, such as foreground detection,
interest-point detection or tracking.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2013 03:16:08 GMT"
}
] | 2013-03-26T00:00:00 | [
[
"Sanin",
"Andres",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Harandi",
"Mehrtash T.",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Spatio-Temporal Covariance Descriptors for Action and Gesture
Recognition
ABSTRACT: We propose a new action and gesture recognition method based on
spatio-temporal covariance descriptors and a weighted Riemannian locality
preserving projection approach that takes into account the curved space formed
by the descriptors. The weighted projection is then exploited during boosting
to create a final multiclass classification algorithm that employs the most
useful spatio-temporal regions. We also show how the descriptors can be
computed quickly through the use of integral video representations. Experiments
on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets
indicate superior performance of the proposed method compared to several recent
state-of-the-art techniques. The proposed method is robust and does not require
additional processing of the videos, such as foreground detection,
interest-point detection or tracking.
|
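The record above builds on region covariance descriptors. A generic sketch of such a descriptor and of a common distance between descriptors follows; the feature set, the log-Euclidean metric and the function names are illustrative choices, not the paper's weighted Riemannian projection or its integral-video computation:

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor of a region: 'features' is an (n_points, d) array
    of per-point feature vectors (e.g. coordinates, intensity, gradients);
    the descriptor is their d x d sample covariance matrix."""
    return np.cov(features, rowvar=False)

def log_euclidean_distance(c1, c2, eps=1e-6):
    """Distance between two covariance descriptors under the log-Euclidean
    metric: Frobenius norm of the difference of matrix logarithms."""
    def logm_spd(c):
        w, v = np.linalg.eigh(c + eps * np.eye(c.shape[0]))  # keep it SPD
        return (v * np.log(w)) @ v.T
    return np.linalg.norm(logm_spd(c1) - logm_spd(c2), 'fro')
```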
1301.7192 | Michael Schreiber | Michael Schreiber | Empirical Evidence for the Relevance of Fractional Scoring in the
Calculation of Percentile Rank Scores | 10 pages, 4 tables, accepted for publication in Journal of American
Society for Information Science and Technology | Journal of the American Society for Information Science and
Technology, 64(4), 861-867 (2013) | 10.1002/asi.22774 | null | cs.DL physics.soc-ph stat.AP | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Fractional scoring has been proposed to avoid inconsistencies in the
attribution of publications to percentile rank classes. Uncertainties and
ambiguities in the evaluation of percentile ranks can be demonstrated most
easily with small datasets. But for larger datasets an often large number of
papers with the same citation count leads to the same uncertainties and
ambiguities which can be avoided by fractional scoring. This is demonstrated
for four different empirical datasets with several thousand publications each
which are assigned to 6 percentile rank classes. Only by utilizing fractional
scoring the total score of all papers exactly reproduces the theoretical value
in each case.
| [
{
"version": "v1",
"created": "Wed, 30 Jan 2013 10:49:39 GMT"
}
] | 2013-03-25T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: Empirical Evidence for the Relevance of Fractional Scoring in the
Calculation of Percentile Rank Scores
ABSTRACT: Fractional scoring has been proposed to avoid inconsistencies in the
attribution of publications to percentile rank classes. Uncertainties and
ambiguities in the evaluation of percentile ranks can be demonstrated most
easily with small datasets. But for larger datasets an often large number of
papers with the same citation count leads to the same uncertainties and
ambiguities which can be avoided by fractional scoring. This is demonstrated
for four different empirical datasets with several thousand publications each
which are assigned to 6 percentile rank classes. Only by utilizing fractional
scoring the total score of all papers exactly reproduces the theoretical value
in each case.
|
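The record above argues that fractional scoring resolves the ambiguity created by papers with tied citation counts. A minimal sketch of one such fractional assignment is shown below; the class boundaries and the percentile convention (fraction of papers with fewer citations) are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def fractional_class_scores(citations, boundaries=(0.50, 0.75, 0.90, 0.95, 0.99)):
    """Fractionally assign papers to percentile rank classes. Papers with tied
    citation counts jointly occupy a percentile interval, and each tied paper
    is split across the classes that this interval overlaps. Returns an
    (n_papers, n_classes) matrix whose column sums equal n times the class
    width, i.e. the theoretical class sizes are reproduced exactly."""
    c = np.asarray(citations)
    n = len(c)
    edges = np.concatenate(([0.0], boundaries, [1.0]))
    scores = np.zeros((n, len(edges) - 1))
    for i, ci in enumerate(c):
        lo = np.sum(c < ci) / n    # percentile interval occupied by the block
        hi = np.sum(c <= ci) / n   # of papers tied with paper i
        for j in range(len(edges) - 1):
            overlap = max(0.0, min(hi, edges[j + 1]) - max(lo, edges[j]))
            scores[i, j] = overlap / (hi - lo)
    return scores
```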
1301.3485 | Antoine Bordes | Xavier Glorot and Antoine Bordes and Jason Weston and Yoshua Bengio | A Semantic Matching Energy Function for Learning with Multi-relational
Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale relational learning becomes crucial for handling the huge amounts
of structured data generated daily in many application domains ranging from
computational biology or information retrieval, to natural language processing.
In this paper, we present a new neural network architecture designed to embed
multi-relational graphs into a flexible continuous vector space in which the
original data is kept and enhanced. The network is trained to encode the
semantics of these graphs in order to assign high probabilities to plausible
components. We empirically show that it reaches competitive performance in link
prediction on standard datasets from the literature.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2013 20:52:50 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Mar 2013 17:02:48 GMT"
}
] | 2013-03-22T00:00:00 | [
[
"Glorot",
"Xavier",
""
],
[
"Bordes",
"Antoine",
""
],
[
"Weston",
"Jason",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: A Semantic Matching Energy Function for Learning with Multi-relational
Data
ABSTRACT: Large-scale relational learning becomes crucial for handling the huge amounts
of structured data generated daily in many application domains ranging from
computational biology or information retrieval, to natural language processing.
In this paper, we present a new neural network architecture designed to embed
multi-relational graphs into a flexible continuous vector space in which the
original data is kept and enhanced. The network is trained to encode the
semantics of these graphs in order to assign high probabilities to plausible
components. We empirically show that it reaches competitive performance in link
prediction on standard datasets from the literature.
|
1303.5177 | Nabila Shikoun | Nabila Shikoun, Mohamed El Nahas and Samar Kassim | Model Based Framework for Estimating Mutation Rate of Hepatitis C Virus
in Egypt | 6 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hepatitis C virus (HCV) is a widely spread disease all over the world. HCV
has a very high mutation rate that makes it resistant to antibodies. Modeling HCV
to identify the virus mutation process is essential to its detection and
predicting its evolution. This paper presents a model based framework for
estimating the mutation rate of HCV in two steps. Firstly, a profile hidden
Markov model (PHMM) architecture was built to select the sequences that
represent each year. Secondly, the mutation rate was calculated using a
pair-wise distance method between sequences. A pilot study is conducted on the
NS5B zone of an HCV dataset of genotype 4 subtype a (HCV4a) in Egypt.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2013 06:49:05 GMT"
}
] | 2013-03-22T00:00:00 | [
[
"Shikoun",
"Nabila",
""
],
[
"Nahas",
"Mohamed El",
""
],
[
"Kassim",
"Samar",
""
]
] | TITLE: Model Based Framework for Estimating Mutation Rate of Hepatitis C Virus
in Egypt
ABSTRACT: Hepatitis C virus (HCV) is a widely spread disease all over the world. HCV
has a very high mutation rate that makes it resistant to antibodies. Modeling HCV
to identify the virus mutation process is essential to its detection and
predicting its evolution. This paper presents a model based framework for
estimating the mutation rate of HCV in two steps. Firstly, a profile hidden
Markov model (PHMM) architecture was built to select the sequences that
represent each year. Secondly, the mutation rate was calculated using a
pair-wise distance method between sequences. A pilot study is conducted on the
NS5B zone of an HCV dataset of genotype 4 subtype a (HCV4a) in Egypt.
|
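The second step in the record above derives a mutation rate from pair-wise distances between yearly representative sequences. A toy sketch of that idea with a plain p-distance follows; the sequences, years and absence of any multiple-substitution correction are illustrative simplifications, and the PHMM-based sequence selection is not reproduced:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned, equal-length sequences."""
    assert len(seq_a) == len(seq_b)
    return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

def mutation_rate(seq_by_year):
    """Average substitutions per site per year from representative sequences
    keyed by sampling year, using p-distances between consecutive years."""
    years = sorted(seq_by_year)
    rates = [p_distance(seq_by_year[y0], seq_by_year[y1]) / (y1 - y0)
             for y0, y1 in zip(years, years[1:])]
    return sum(rates) / len(rates)

# Hypothetical toy alignment:
seqs = {2005: "ACGTACGTAC", 2006: "ACGTACGTTC", 2008: "ACGAACGTTC"}
print(mutation_rate(seqs))  # 0.075 substitutions per site per year
```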
1206.5829 | Alexandre Bartel | Alexandre Bartel (SnT), Jacques Klein (SnT), Martin Monperrus (INRIA
Lille - Nord Europe), Yves Le Traon (SnT) | Automatically Securing Permission-Based Software by Reducing the Attack
Surface: An Application to Android | null | null | null | ISBN: 978-2-87971-107-2 | cs.CR cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common security architecture, called the permission-based security model
(used e.g. in Android and Blackberry), entails intrinsic risks. For instance,
applications can be granted more permissions than they actually need, which we
call a "permission gap". Malware can leverage these unused permissions to
achieve its malicious goals, for instance using code injection. In this
paper, we present an approach to detecting permission gaps using static
analysis. Our prototype implementation in the context of Android shows that the
static analysis must take into account a significant amount of
platform-specific knowledge. Using our tool on two datasets of Android
applications, we found that a non-negligible proportion of applications suffers
from permission gaps, i.e. they do not use all the permissions they declare.
| [
{
"version": "v1",
"created": "Tue, 22 May 2012 13:58:03 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Mar 2013 19:43:56 GMT"
}
] | 2013-03-21T00:00:00 | [
[
"Bartel",
"Alexandre",
"",
"SnT"
],
[
"Klein",
"Jacques",
"",
"SnT"
],
[
"Monperrus",
"Martin",
"",
"INRIA\n Lille - Nord Europe"
],
[
"Traon",
"Yves Le",
"",
"SnT"
]
] | TITLE: Automatically Securing Permission-Based Software by Reducing the Attack
Surface: An Application to Android
ABSTRACT: A common security architecture, called the permission-based security model
(used e.g. in Android and Blackberry), entails intrinsic risks. For instance,
applications can be granted more permissions than they actually need, which we
call a "permission gap". Malware can leverage these unused permissions to
achieve its malicious goals, for instance using code injection. In this
paper, we present an approach to detecting permission gaps using static
analysis. Our prototype implementation in the context of Android shows that the
static analysis must take into account a significant amount of
platform-specific knowledge. Using our tool on two datasets of Android
applications, we found that a non-negligible proportion of applications suffers
from permission gaps, i.e. they do not use all the permissions they declare.
|
1303.4803 | Chunhua Shen | Xi Li, Weiming Hu, Chunhua Shen, Zhongfei Zhang, Anthony Dick, Anton
van den Hengel | A Survey of Appearance Models in Visual Object Tracking | Appearing in ACM Transactions on Intelligent Systems and Technology,
2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual object tracking is a significant computer vision task which can be
applied to many domains such as visual surveillance, human computer
interaction, and video compression. In the literature, researchers have
proposed a variety of 2D appearance models. To help readers swiftly learn the
recent advances in 2D appearance models for visual object tracking, we
contribute this survey, which provides a detailed review of the existing 2D
appearance models. In particular, this survey takes a module-based architecture
that enables readers to easily grasp the key points of visual object tracking.
In this survey, we first decompose the problem of appearance modeling into two
different processing stages: visual representation and statistical modeling.
Then, different 2D appearance models are categorized and discussed with respect
to their composition modules. Finally, we address several issues of interest as
well as the remaining challenges for future research on this topic. The
contributions of this survey are four-fold. First, we review the literature of
visual representations according to their feature-construction mechanisms
(i.e., local and global). Second, the existing statistical modeling schemes for
tracking-by-detection are reviewed according to their model-construction
mechanisms: generative, discriminative, and hybrid generative-discriminative.
Third, each type of visual representations or statistical modeling techniques
is analyzed and discussed from a theoretical or practical viewpoint. Fourth,
the existing benchmark resources (e.g., source code and video datasets) are
examined in this survey.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 01:08:33 GMT"
}
] | 2013-03-21T00:00:00 | [
[
"Li",
"Xi",
""
],
[
"Hu",
"Weiming",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Zhang",
"Zhongfei",
""
],
[
"Dick",
"Anthony",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: A Survey of Appearance Models in Visual Object Tracking
ABSTRACT: Visual object tracking is a significant computer vision task which can be
applied to many domains such as visual surveillance, human computer
interaction, and video compression. In the literature, researchers have
proposed a variety of 2D appearance models. To help readers swiftly learn the
recent advances in 2D appearance models for visual object tracking, we
contribute this survey, which provides a detailed review of the existing 2D
appearance models. In particular, this survey takes a module-based architecture
that enables readers to easily grasp the key points of visual object tracking.
In this survey, we first decompose the problem of appearance modeling into two
different processing stages: visual representation and statistical modeling.
Then, different 2D appearance models are categorized and discussed with respect
to their composition modules. Finally, we address several issues of interest as
well as the remaining challenges for future research on this topic. The
contributions of this survey are four-fold. First, we review the literature of
visual representations according to their feature-construction mechanisms
(i.e., local and global). Second, the existing statistical modeling schemes for
tracking-by-detection are reviewed according to their model-construction
mechanisms: generative, discriminative, and hybrid generative-discriminative.
Third, each type of visual representations or statistical modeling techniques
is analyzed and discussed from a theoretical or practical viewpoint. Fourth,
the existing benchmark resources (e.g., source code and video datasets) are
examined in this survey.
|
1303.4994 | Albert Wegener | Albert Wegener | Universal Numerical Encoder and Profiler Reduces Computing's Memory Wall
with Software, FPGA, and SoC Implementations | 10 pages, 4 figures, 3 tables, 19 references | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the multicore era, the time to computational results is increasingly
determined by how quickly operands are accessed by cores, rather than by the
speed of computation per operand. From high-performance computing (HPC) to
mobile application processors, low multicore utilization rates result from the
slowness of accessing off-chip operands, i.e. the memory wall. The APplication
AXcelerator (APAX) universal numerical encoder reduces computing's memory wall
by compressing numerical operands (integers and floats), thereby decreasing CPU
access time by 3:1 to 10:1 as operands stream between memory and cores. APAX
encodes numbers using a low-complexity algorithm designed both for time series
sensor data and for multi-dimensional data, including images. APAX encoding
parameters are determined by a profiler that quantifies the uncertainty
inherent in numerical datasets and recommends encoding parameters reflecting
this uncertainty. Compatible software, FPGA, and system-on-chip (SoC)
implementations efficiently support encoding rates between 150 MByte/sec and
1.5 GByte/sec at low power. On 25 integer and floating-point datasets, we
achieved encoding rates between 3:1 and 10:1, with average correlation of
0.999959, while accelerating computational "time to results."
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 17:11:12 GMT"
}
] | 2013-03-21T00:00:00 | [
[
"Wegener",
"Albert",
""
]
] | TITLE: Universal Numerical Encoder and Profiler Reduces Computing's Memory Wall
with Software, FPGA, and SoC Implementations
ABSTRACT: In the multicore era, the time to computational results is increasingly
determined by how quickly operands are accessed by cores, rather than by the
speed of computation per operand. From high-performance computing (HPC) to
mobile application processors, low multicore utilization rates result from the
slowness of accessing off-chip operands, i.e. the memory wall. The APplication
AXcelerator (APAX) universal numerical encoder reduces computing's memory wall
by compressing numerical operands (integers and floats), thereby decreasing CPU
access time by 3:1 to 10:1 as operands stream between memory and cores. APAX
encodes numbers using a low-complexity algorithm designed both for time series
sensor data and for multi-dimensional data, including images. APAX encoding
parameters are determined by a profiler that quantifies the uncertainty
inherent in numerical datasets and recommends encoding parameters reflecting
this uncertainty. Compatible software, FPGA, and system-on-chip (SoC)
implementations efficiently support encoding rates between 150 MByte/sec and
1.5 GByte/sec at low power. On 25 integer and floating-point datasets, we
achieved encoding rates between 3:1 and 10:1, with average correlation of
0.999959, while accelerating computational "time to results."
|
1301.3527 | Vamsi Potluru | Vamsi K. Potluru, Sergey M. Plis, Jonathan Le Roux, Barak A.
Pearlmutter, Vince D. Calhoun, Thomas P. Hayes | Block Coordinate Descent for Sparse NMF | null | null | null | null | cs.LG cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms,
such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow
and other formulations for sparse NMF have been proposed such as those based on
L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2013 23:11:05 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2013 22:42:11 GMT"
}
] | 2013-03-20T00:00:00 | [
[
"Potluru",
"Vamsi K.",
""
],
[
"Plis",
"Sergey M.",
""
],
[
"Roux",
"Jonathan Le",
""
],
[
"Pearlmutter",
"Barak A.",
""
],
[
"Calhoun",
"Vince D.",
""
],
[
"Hayes",
"Thomas P.",
""
]
] | TITLE: Block Coordinate Descent for Sparse NMF
ABSTRACT: Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms,
such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow
and other formulations for sparse NMF have been proposed such as those based on
L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets.
|
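The record above optimizes an L$_1$/L$_2$ mixed-norm sparsity constraint. The commonly used mixed-norm sparseness measure of this kind (Hoyer's) is sketched below for reference; this shows the measure only, not the paper's block coordinate descent solver:

```python
import numpy as np

def l1_l2_sparseness(x):
    """Hoyer's L1/L2 sparseness: 1 for a vector with a single nonzero entry,
    0 for a vector whose entries all have equal magnitude."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.sqrt((x ** 2).sum())) / (np.sqrt(n) - 1)

print(l1_l2_sparseness([0, 0, 3, 0]))  # 1.0
print(l1_l2_sparseness([1, 1, 1, 1]))  # 0.0
```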
1303.4402 | Julian McAuley | Julian McAuley and Jure Leskovec | From Amateurs to Connoisseurs: Modeling the Evolution of User Expertise
through Online Reviews | 11 pages, 7 figures | null | null | null | cs.SI cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommending products to consumers means not only understanding their tastes,
but also understanding their level of experience. For example, it would be a
mistake to recommend the iconic film Seven Samurai simply because a user enjoys
other action movies; rather, we might conclude that they will eventually enjoy
it -- once they are ready. The same is true for beers, wines, gourmet foods --
or any products where users have acquired tastes: the `best' products may not
be the most `accessible'. Thus our goal in this paper is to recommend products
that a user will enjoy now, while acknowledging that their tastes may have
changed over time, and may change again in the future. We model how tastes
change due to the very act of consuming more products -- in other words, as
users become more experienced. We develop a latent factor recommendation system
that explicitly accounts for each user's level of experience. We find that such
a model not only leads to better recommendations, but also allows us to study
the role of user experience and expertise on a novel dataset of fifteen million
beer, wine, food, and movie reviews.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2013 20:01:19 GMT"
}
] | 2013-03-20T00:00:00 | [
[
"McAuley",
"Julian",
""
],
[
"Leskovec",
"Jure",
""
]
] | TITLE: From Amateurs to Connoisseurs: Modeling the Evolution of User Expertise
through Online Reviews
ABSTRACT: Recommending products to consumers means not only understanding their tastes,
but also understanding their level of experience. For example, it would be a
mistake to recommend the iconic film Seven Samurai simply because a user enjoys
other action movies; rather, we might conclude that they will eventually enjoy
it -- once they are ready. The same is true for beers, wines, gourmet foods --
or any products where users have acquired tastes: the `best' products may not
be the most `accessible'. Thus our goal in this paper is to recommend products
that a user will enjoy now, while acknowledging that their tastes may have
changed over time, and may change again in the future. We model how tastes
change due to the very act of consuming more products -- in other words, as
users become more experienced. We develop a latent factor recommendation system
that explicitly accounts for each user's level of experience. We find that such
a model not only leads to better recommendations, but also allows us to study
the role of user experience and expertise on a novel dataset of fifteen million
beer, wine, food, and movie reviews.
|
1303.4614 | Santosh K.C. | Abdel Bela\"id (LORIA), K.C. Santosh (LORIA), Vincent Poulain D'Andecy | Handwritten and Printed Text Separation in Real Document | Machine Vision Applications (2013) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of the paper is to separate handwritten and printed text from a real
document embedded with noise, graphics including annotations. Relying on
run-length smoothing algorithm (RLSA), the extracted pseudo-lines and
pseudo-words are used as basic blocks for classification. To handle this, a
multi-class support vector machine (SVM) with Gaussian kernel performs a first
labelling of each pseudo-word including the study of local neighbourhood. It
then propagates the context between neighbours so that we can correct possible
labelling errors. Considering the running time complexity issue, we propose
linear complexity methods using k-NN with constraints. When a kd-tree is used,
the running time is almost linearly proportional to the number of pseudo-words.
The performance of our system is close to 90%, even with a very small learning
dataset whose samples are basically composed of complex administrative documents.
| [
{
"version": "v1",
"created": "Tue, 19 Mar 2013 14:23:24 GMT"
}
] | 2013-03-20T00:00:00 | [
[
"Belaïd",
"Abdel",
"",
"LORIA"
],
[
"Santosh",
"K. C.",
"",
"LORIA"
],
[
"D'Andecy",
"Vincent Poulain",
""
]
] | TITLE: Handwritten and Printed Text Separation in Real Document
ABSTRACT: The aim of the paper is to separate handwritten and printed text from a real
document embedded with noise, graphics including annotations. Relying on
run-length smoothing algorithm (RLSA), the extracted pseudo-lines and
pseudo-words are used as basic blocks for classification. To handle this, a
multi-class support vector machine (SVM) with Gaussian kernel performs a first
labelling of each pseudo-word including the study of local neighbourhood. It
then propagates the context between neighbours so that we can correct possible
labelling errors. Considering the running time complexity issue, we propose
linear complexity methods using k-NN with constraints. When a kd-tree is used,
the running time is almost linearly proportional to the number of pseudo-words.
The performance of our system is close to 90%, even with a very small learning
dataset whose samples are basically composed of complex administrative documents.
|
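The record above relies on the run-length smoothing algorithm (RLSA) to form pseudo-words and pseudo-lines. A minimal horizontal-only sketch is given below; the threshold value and toy image are illustrative, and real pipelines typically also smooth vertically and combine the results:

```python
import numpy as np

def rlsa_horizontal(binary_image, threshold):
    """Horizontal run-length smoothing: in each row of a binary image
    (1 = ink, 0 = background), background gaps of at most 'threshold' pixels
    lying between two ink pixels are filled, merging nearby characters into
    pseudo-words and pseudo-lines."""
    out = np.array(binary_image, dtype=np.uint8)
    for row in out:
        ink = np.flatnonzero(row)
        for a, b in zip(ink[:-1], ink[1:]):
            if 0 < b - a - 1 <= threshold:
                row[a + 1:b] = 1
    return out

# Toy example: a 2-pixel gap is bridged, a 4-pixel gap is not.
img = np.array([[1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1]])
print(rlsa_horizontal(img, threshold=2))  # [[1 1 1 1 1 1 0 0 0 0 1]]
```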
1303.3664 | Weicong Ding | Weicong Ding, Mohammad H. Rohban, Prakash Ishwar, Venkatesh Saligrama | Topic Discovery through Data Dependent and Random Projections | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present algorithms for topic modeling based on the geometry of
cross-document word-frequency patterns. This perspective gains significance
under the so-called separability condition. This is a condition on the existence
of novel words that are unique to each topic. We present a suite of highly
efficient algorithms based on data-dependent and random projections of
word-frequency patterns to identify novel words and associated topics. We will
also discuss the statistical guarantees of the data-dependent projections
method based on two mild assumptions on the prior density of the topic document
matrix. Our key insight here is that the maximum and minimum values of
cross-document frequency patterns projected along any direction are associated
with novel words. While our sample complexity bounds for topic recovery are
similar to the state-of-the-art, the computational complexity of our random
projection scheme scales linearly with the number of documents and the number
of words per document. We present several experiments on synthetic and
real-world datasets to demonstrate qualitative and quantitative merits of our
scheme.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2013 02:37:19 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2013 13:11:02 GMT"
}
] | 2013-03-19T00:00:00 | [
[
"Ding",
"Weicong",
""
],
[
"Rohban",
"Mohammad H.",
""
],
[
"Ishwar",
"Prakash",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Topic Discovery through Data Dependent and Random Projections
ABSTRACT: We present algorithms for topic modeling based on the geometry of
cross-document word-frequency patterns. This perspective gains significance
under the so-called separability condition. This is a condition on the existence
of novel words that are unique to each topic. We present a suite of highly
efficient algorithms based on data-dependent and random projections of
word-frequency patterns to identify novel words and associated topics. We will
also discuss the statistical guarantees of the data-dependent projections
method based on two mild assumptions on the prior density of the topic document
matrix. Our key insight here is that the maximum and minimum values of
cross-document frequency patterns projected along any direction are associated
with novel words. While our sample complexity bounds for topic recovery are
similar to the state-of-the-art, the computational complexity of our random
projection scheme scales linearly with the number of documents and the number
of words per document. We present several experiments on synthetic and
real-world datasets to demonstrate qualitative and quantitative merits of our
scheme.
|
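The key geometric insight in the record above is that extreme points of projected cross-document frequency patterns flag novel words. A bare-bones sketch of that idea with random directions is shown below; the normalisation, number of projections and function name are assumptions, and none of the paper's statistical guarantees or topic-recovery steps are included:

```python
import numpy as np

def novel_word_candidates(word_doc_counts, n_projections=100, seed=0):
    """Collect candidate 'novel words': each word's row (its cross-document
    frequency pattern, normalised to sum to 1) is projected onto random
    directions, and the words attaining the maximum or minimum projection
    value are recorded. Assumes every word occurs in at least one document."""
    rng = np.random.default_rng(seed)
    X = np.asarray(word_doc_counts, dtype=float)
    rows = X / X.sum(axis=1, keepdims=True)
    candidates = set()
    for _ in range(n_projections):
        direction = rng.standard_normal(rows.shape[1])
        projection = rows @ direction
        candidates.add(int(np.argmax(projection)))
        candidates.add(int(np.argmin(projection)))
    return sorted(candidates)
```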
1303.4087 | Rafi Muhammad | Muhammad Rafi, Mohammad Shahid Shaikh | An improved semantic similarity measure for document clustering based on
topic maps | 5 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major computational burden, while performing document clustering, is the
calculation of similarity measure between a pair of documents. Similarity
measure is a function that assigns a real number between 0 and 1 to a pair of
documents, depending upon the degree of similarity between them. A value of
zero means that the documents are completely dissimilar whereas a value of one
indicates that the documents are practically identical. Traditionally,
vector-based models have been used for computing the document similarity. The
vector-based models represent several features present in documents. These
approaches to similarity measures, in general, cannot account for the semantics
of the document. Documents written in human languages contain contexts and the
words used to describe these contexts are generally semantically related.
Motivated by this fact, many researchers have proposed semantic-based
similarity measures by utilizing text annotation through external thesauruses
like WordNet (a lexical database). In this paper, we define a semantic
similarity measure based on documents represented in topic maps. Topic maps are
rapidly becoming an industrial standard for knowledge representation with a
focus for later search and extraction. The documents are transformed into a
topic map based coded knowledge and the similarity between a pair of documents
is represented as a correlation between the common patterns (sub-trees). The
experimental studies on the text mining datasets reveal that this new
similarity measure is more effective as compared to commonly used similarity
measures in text clustering.
| [
{
"version": "v1",
"created": "Sun, 17 Mar 2013 18:28:02 GMT"
}
] | 2013-03-19T00:00:00 | [
[
"Rafi",
"Muhammad",
""
],
[
"Shaikh",
"Mohammad Shahid",
""
]
] | TITLE: An improved semantic similarity measure for document clustering based on
topic maps
ABSTRACT: A major computational burden, while performing document clustering, is the
calculation of similarity measure between a pair of documents. Similarity
measure is a function that assigns a real number between 0 and 1 to a pair of
documents, depending upon the degree of similarity between them. A value of
zero means that the documents are completely dissimilar whereas a value of one
indicates that the documents are practically identical. Traditionally,
vector-based models have been used for computing the document similarity. The
vector-based models represent several features present in documents. These
approaches to similarity measures, in general, cannot account for the semantics
of the document. Documents written in human languages contain contexts and the
words used to describe these contexts are generally semantically related.
Motivated by this fact, many researchers have proposed semantic-based
similarity measures by utilizing text annotation through external thesauruses
like WordNet (a lexical database). In this paper, we define a semantic
similarity measure based on documents represented in topic maps. Topic maps are
rapidly becoming an industrial standard for knowledge representation with a
focus for later search and extraction. The documents are transformed into a
topic map based coded knowledge and the similarity between a pair of documents
is represented as a correlation between the common patterns (sub-trees). The
experimental studies on the text mining datasets reveal that this new
similarity measure is more effective as compared to commonly used similarity
measures in text clustering.
|
1303.4160 | Conrad Sanderson | Vikas Reddy, Conrad Sanderson, Brian C. Lovell | Improved Foreground Detection via Block-based Classifier Cascade with
Probabilistic Decision Integration | null | IEEE Transactions on Circuits and Systems for Video Technology,
Vol. 23, No. 1, pp. 83-93, 2013 | 10.1109/TCSVT.2012.2203199 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background subtraction is a fundamental low-level processing task in numerous
computer vision applications. The vast majority of algorithms process images on
a pixel-by-pixel basis, where an independent decision is made for each pixel. A
general limitation of such processing is that rich contextual information is
not taken into account. We propose a block-based method capable of dealing with
noise, illumination variations and dynamic backgrounds, while still obtaining
smooth contours of foreground objects. Specifically, image sequences are
analysed on an overlapping block-by-block basis. A low-dimensional texture
descriptor obtained from each block is passed through an adaptive classifier
cascade, where each stage handles a distinct problem. A probabilistic
foreground mask generation approach then exploits block overlaps to integrate
interim block-level decisions into final pixel-level foreground segmentation.
Unlike many pixel-based methods, ad-hoc post-processing of foreground masks is
not required. Experiments on the difficult Wallflower and I2R datasets show
that the proposed approach obtains on average better results (both
qualitatively and quantitatively) than several prominent methods. We
furthermore propose the use of tracking performance as an unbiased approach for
assessing the practical usefulness of foreground segmentation methods, and show
that the proposed approach leads to considerable improvements in tracking
accuracy on the CAVIAR dataset.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2013 05:48:40 GMT"
}
] | 2013-03-19T00:00:00 | [
[
"Reddy",
"Vikas",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Improved Foreground Detection via Block-based Classifier Cascade with
Probabilistic Decision Integration
ABSTRACT: Background subtraction is a fundamental low-level processing task in numerous
computer vision applications. The vast majority of algorithms process images on
a pixel-by-pixel basis, where an independent decision is made for each pixel. A
general limitation of such processing is that rich contextual information is
not taken into account. We propose a block-based method capable of dealing with
noise, illumination variations and dynamic backgrounds, while still obtaining
smooth contours of foreground objects. Specifically, image sequences are
analysed on an overlapping block-by-block basis. A low-dimensional texture
descriptor obtained from each block is passed through an adaptive classifier
cascade, where each stage handles a distinct problem. A probabilistic
foreground mask generation approach then exploits block overlaps to integrate
interim block-level decisions into final pixel-level foreground segmentation.
Unlike many pixel-based methods, ad-hoc post-processing of foreground masks is
not required. Experiments on the difficult Wallflower and I2R datasets show
that the proposed approach obtains on average better results (both
qualitatively and quantitatively) than several prominent methods. We
furthermore propose the use of tracking performance as an unbiased approach for
assessing the practical usefulness of foreground segmentation methods, and show
that the proposed approach leads to considerable improvements in tracking
accuracy on the CAVIAR dataset.
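The following is a toy Python/NumPy sketch of the block-level idea only, under strong simplifying assumptions: a single texture statistic (mean absolute difference to a known background) replaces the paper's descriptor and classifier cascade, and overlapping block votes are averaged into a per-pixel probability. Thresholds, block sizes and the synthetic frames are illustrative.

import numpy as np

def foreground_mask(frame, background, block=8, step=4, thresh=20.0):
    """Toy block-based foreground detector (not the full classifier cascade):
    overlapping blocks are compared to the background, and block-level decisions
    are averaged per pixel to form a probabilistic foreground mask."""
    h, w = frame.shape
    votes = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            diff = np.abs(frame[y:y+block, x:x+block].astype(float)
                          - background[y:y+block, x:x+block].astype(float)).mean()
            votes[y:y+block, x:x+block] += float(diff > thresh)
            counts[y:y+block, x:x+block] += 1.0
    prob = votes / np.maximum(counts, 1.0)   # fraction of overlapping blocks voting foreground
    return prob > 0.5                        # final pixel-level segmentation

# Synthetic example: a bright square appears on a flat background.
bg = np.full((64, 64), 100, dtype=np.uint8)
fr = bg.copy(); fr[20:40, 20:40] = 200
mask = foreground_mask(fr, bg)
print(mask.sum(), "foreground pixels")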
|
1301.3583 | Yann Dauphin | Yann N. Dauphin, Yoshua Bengio | Big Neural Networks Waste Capacity | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article exposes the failure of some big neural networks to leverage
added capacity to reduce underfitting. Past research suggests diminishing
returns when increasing the size of neural networks. Our experiments on
ImageNet LSVRC-2010 show that this may be due to the fact that there are highly
diminishing returns for capacity in terms of training error, leading to
underfitting. This suggests that the optimization method - first order gradient
descent - fails in this regime. Directly attacking this problem, either through
the optimization method or the choice of parametrization, may make it possible to improve
the generalization error on large datasets, for which a large capacity is
required.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 04:45:29 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Jan 2013 18:11:34 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Feb 2013 23:07:05 GMT"
},
{
"version": "v4",
"created": "Thu, 14 Mar 2013 20:49:20 GMT"
}
] | 2013-03-18T00:00:00 | [
[
"Dauphin",
"Yann N.",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Big Neural Networks Waste Capacity
ABSTRACT: This article exposes the failure of some big neural networks to leverage
added capacity to reduce underfitting. Past research suggests diminishing
returns when increasing the size of neural networks. Our experiments on
ImageNet LSVRC-2010 show that this may be due to the fact that there are highly
diminishing returns for capacity in terms of training error, leading to
underfitting. This suggests that the optimization method - first order gradient
descent - fails in this regime. Directly attacking this problem, either through
the optimization method or the choice of parametrization, may make it possible to improve
the generalization error on large datasets, for which a large capacity is
required.
|
1303.3751 | Michael (Micky) Fire | Michael Fire, Dima Kagan, Aviad Elyashar, and Yuval Elovici | Friend or Foe? Fake Profile Identification in Online Social Networks | Draft Version | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The amount of personal information unwillingly exposed by users on online
social networks is staggering, as shown in recent research. Moreover, recent
reports indicate that these networks are infested with tens of millions of fake
user profiles, which may jeopardize the users' security and privacy. To
identify fake users in such networks and to improve users' security and
privacy, we developed the Social Privacy Protector software for Facebook. This
software contains three protection layers, which improve user privacy by
implementing different methods. The software first identifies a user's friends
who might pose a threat and then restricts this "friend's" exposure to the
user's personal information. The second layer is an expansion of Facebook's
basic privacy settings based on different types of social network usage
profiles. The third layer alerts users about the number of installed
applications on their Facebook profile, which have access to their private
information. An initial version of the Social Privacy Protection software
received high media coverage, and more than 3,000 users from more than twenty
countries have installed the software, out of which 527 used the software to
restrict more than nine thousand friends. In addition, we estimate that more
than a hundred users accepted the software's recommendations and removed at
least 1,792 Facebook applications from their profiles. By analyzing the unique
dataset obtained by the software in combination with machine learning
techniques, we developed classifiers, which are able to predict which Facebook
profiles have high probabilities of being fake and therefore, threaten the
user's well-being. Moreover, in this study, we present statistics on users'
privacy settings and statistics of the number of applications installed on
Facebook profiles...
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2013 12:17:10 GMT"
}
] | 2013-03-18T00:00:00 | [
[
"Fire",
"Michael",
""
],
[
"Kagan",
"Dima",
""
],
[
"Elyashar",
"Aviad",
""
],
[
"Elovici",
"Yuval",
""
]
] | TITLE: Friend or Foe? Fake Profile Identification in Online Social Networks
ABSTRACT: The amount of personal information unwillingly exposed by users on online
social networks is staggering, as shown in recent research. Moreover, recent
reports indicate that these networks are infested with tens of millions of fake
user profiles, which may jeopardize the users' security and privacy. To
identify fake users in such networks and to improve users' security and
privacy, we developed the Social Privacy Protector software for Facebook. This
software contains three protection layers, which improve user privacy by
implementing different methods. The software first identifies a user's friends
who might pose a threat and then restricts this "friend's" exposure to the
user's personal information. The second layer is an expansion of Facebook's
basic privacy settings based on different types of social network usage
profiles. The third layer alerts users about the number of installed
applications on their Facebook profile, which have access to their private
information. An initial version of the Social Privacy Protection software
received high media coverage, and more than 3,000 users from more than twenty
countries have installed the software, out of which 527 used the software to
restrict more than nine thousand friends. In addition, we estimate that more
than a hundred users accepted the software's recommendations and removed at
least 1,792 Facebook applications from their profiles. By analyzing the unique
dataset obtained by the software in combination with machine learning
techniques, we developed classifiers, which are able to predict which Facebook
profiles have high probabilities of being fake and therefore, threaten the
user's well-being. Moreover, in this study, we present statistics on users'
privacy settings and statistics of the number of applications installed on
Facebook profiles...
|
1301.2820 | Eugenio Culurciello Eugenio Culurciello | Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco,
Clement Farabet | Clustering Learning for Robotic Vision | Code for this paper is available here:
https://github.com/culurciello/CL_paper1_code | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the clustering learning technique applied to multi-layer
feedforward deep neural networks. We show that this unsupervised learning
technique can compute network filters in only a few minutes and with a much
reduced set of parameters. The goal of this paper is to promote the technique
for general-purpose robotic vision systems. We report its use in static image
datasets and object tracking datasets. We show that networks trained with
clustering learning can outperform large networks trained for many hours on
complex datasets.
| [
{
"version": "v1",
"created": "Sun, 13 Jan 2013 20:49:30 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Jan 2013 14:53:21 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Mar 2013 22:48:38 GMT"
}
] | 2013-03-15T00:00:00 | [
[
"Culurciello",
"Eugenio",
""
],
[
"Bates",
"Jordan",
""
],
[
"Dundar",
"Aysegul",
""
],
[
"Carrasco",
"Jose",
""
],
[
"Farabet",
"Clement",
""
]
] | TITLE: Clustering Learning for Robotic Vision
ABSTRACT: We present the clustering learning technique applied to multi-layer
feedforward deep neural networks. We show that this unsupervised learning
technique can compute network filters in only a few minutes and with a much
reduced set of parameters. The goal of this paper is to promote the technique
for general-purpose robotic vision systems. We report its use in static image
datasets and object tracking datasets. We show that networks trained with
clustering learning can outperform large networks trained for many hours on
complex datasets.
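A minimal sketch of unsupervised filter learning by clustering image patches, in the spirit of the clustering learning idea: k-means centroids over mean-subtracted patches act as filters. The patch size, filter count and toy images are illustrative, and the paper's full pipeline (preprocessing, multi-layer wiring) is not reproduced.

import numpy as np
from sklearn.cluster import KMeans

def learn_filters(images, patch=5, n_filters=16, patches_per_image=100, seed=0):
    """Sample random patches, remove each patch's mean, cluster with k-means;
    the cluster centroids are used as convolutional filters."""
    rng = np.random.default_rng(seed)
    samples = []
    for img in images:
        h, w = img.shape
        for _ in range(patches_per_image):
            y = rng.integers(0, h - patch); x = rng.integers(0, w - patch)
            p = img[y:y+patch, x:x+patch].astype(float).ravel()
            samples.append(p - p.mean())
    km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed).fit(np.array(samples))
    return km.cluster_centers_.reshape(n_filters, patch, patch)

# Hypothetical input: a few random grayscale images stand in for a real dataset.
imgs = [np.random.default_rng(i).random((32, 32)) for i in range(4)]
filters = learn_filters(imgs)
print(filters.shape)  # (16, 5, 5)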
|
1301.3572 | Camille Couprie | Camille Couprie, Cl\'ement Farabet, Laurent Najman and Yann LeCun | Indoor Semantic Segmentation using depth information | 8 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work addresses multi-class segmentation of indoor scenes with RGB-D
inputs. While this area of research has gained much attention recently, most
works still rely on hand-crafted features. In contrast, we apply a multiscale
convolutional network to learn features directly from the images and the depth
information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an
accuracy of 64.5%. We illustrate the labeling of indoor scenes in video
sequences that could be processed in real-time using appropriate hardware such
as an FPGA.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 03:31:30 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Mar 2013 18:18:17 GMT"
}
] | 2013-03-15T00:00:00 | [
[
"Couprie",
"Camille",
""
],
[
"Farabet",
"Clément",
""
],
[
"Najman",
"Laurent",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Indoor Semantic Segmentation using depth information
ABSTRACT: This work addresses multi-class segmentation of indoor scenes with RGB-D
inputs. While this area of research has gained much attention recently, most
works still rely on hand-crafted features. In contrast, we apply a multiscale
convolutional network to learn features directly from the images and the depth
information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an
accuracy of 64.5%. We illustrate the labeling of indoor scenes in video
sequences that could be processed in real-time using appropriate hardware such
as an FPGA.
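A minimal PyTorch-style sketch of the multiscale idea (the original work predates PyTorch and used a different toolchain): the same small convolutional stack is applied to an image pyramid built from the concatenated RGB-D input, and the resulting feature maps are upsampled and concatenated; a per-pixel classifier on top would then produce class labels. Channel counts, kernel sizes and scales are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFeatures(nn.Module):
    """Shared conv stack applied at several scales; outputs are upsampled and concatenated."""
    def __init__(self, in_channels=4, feat=16, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.stack = nn.Sequential(
            nn.Conv2d(in_channels, feat, 7, padding=3), nn.ReLU(),
            nn.Conv2d(feat, feat, 7, padding=3), nn.ReLU(),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(x, scale_factor=s, mode='bilinear',
                                                  align_corners=False)
            f = self.stack(xs)
            feats.append(F.interpolate(f, size=(h, w), mode='bilinear', align_corners=False))
        return torch.cat(feats, dim=1)

# Hypothetical RGB-D batch: 3 color channels plus 1 depth channel.
x = torch.randn(1, 4, 64, 64)
print(MultiscaleFeatures()(x).shape)  # torch.Size([1, 48, 64, 64])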
|
1303.3517 | Yingyi Bu Yingyi Bu | Joshua Rosen, Neoklis Polyzotis, Vinayak Borkar, Yingyi Bu, Michael J.
Carey, Markus Weimer, Tyson Condie, Raghu Ramakrishnan | Iterative MapReduce for Large Scale Machine Learning | null | null | null | null | cs.DC cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large datasets ("Big Data") are becoming ubiquitous because the potential
value in deriving insights from data, across a wide range of business and
scientific applications, is increasingly recognized. In particular, machine
learning - one of the foundational disciplines for data analysis, summarization
and inference - on Big Data has become routine at most organizations that
operate large clouds, usually based on systems such as Hadoop that support the
MapReduce programming paradigm. It is now widely recognized that while
MapReduce is highly scalable, it suffers from a critical weakness for machine
learning: it does not support iteration. Consequently, one has to program
around this limitation, leading to fragile, inefficient code. Further, reliance
on the programmer is inherently flawed in a multi-tenanted cloud environment,
since the programmer does not have visibility into the state of the system when
his or her program executes. Prior work has sought to address this problem by
either developing specialized systems aimed at stylized applications, or by
augmenting MapReduce with ad hoc support for saving state across iterations
(driven by an external loop). In this paper, we advocate support for looping as
a first-class construct, and propose an extension of the MapReduce programming
paradigm called {\em Iterative MapReduce}. We then develop an optimizer for a
class of Iterative MapReduce programs that cover most machine learning
techniques, provide theoretical justifications for the key optimization steps,
and empirically demonstrate that system-optimized programs for significant
machine learning tasks are competitive with state-of-the-art specialized
solutions.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2013 04:24:12 GMT"
}
] | 2013-03-15T00:00:00 | [
[
"Rosen",
"Joshua",
""
],
[
"Polyzotis",
"Neoklis",
""
],
[
"Borkar",
"Vinayak",
""
],
[
"Bu",
"Yingyi",
""
],
[
"Carey",
"Michael J.",
""
],
[
"Weimer",
"Markus",
""
],
[
"Condie",
"Tyson",
""
],
[
"Ramakrishnan",
"Raghu",
""
]
] | TITLE: Iterative MapReduce for Large Scale Machine Learning
ABSTRACT: Large datasets ("Big Data") are becoming ubiquitous because the potential
value in deriving insights from data, across a wide range of business and
scientific applications, is increasingly recognized. In particular, machine
learning - one of the foundational disciplines for data analysis, summarization
and inference - on Big Data has become routine at most organizations that
operate large clouds, usually based on systems such as Hadoop that support the
MapReduce programming paradigm. It is now widely recognized that while
MapReduce is highly scalable, it suffers from a critical weakness for machine
learning: it does not support iteration. Consequently, one has to program
around this limitation, leading to fragile, inefficient code. Further, reliance
on the programmer is inherently flawed in a multi-tenanted cloud environment,
since the programmer does not have visibility into the state of the system when
his or her program executes. Prior work has sought to address this problem by
either developing specialized systems aimed at stylized applications, or by
augmenting MapReduce with ad hoc support for saving state across iterations
(driven by an external loop). In this paper, we advocate support for looping as
a first-class construct, and propose an extension of the MapReduce programming
paradigm called {\em Iterative MapReduce}. We then develop an optimizer for a
class of Iterative MapReduce programs that cover most machine learning
techniques, provide theoretical justifications for the key optimization steps,
and empirically demonstrate that system-optimized programs for significant
machine learning tasks are competitive with state-of-the-art specialized
solutions.
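To make the looping pattern concrete, here is a plain-Python sketch (no Hadoop involved) of the driver-level structure the abstract argues should be a first-class construct: each iteration is one map phase plus one reduce phase, and the driver loop checks a convergence test. K-means is used purely as a familiar example workload.

from collections import defaultdict

def kmeans_iterative_mapreduce(points, centroids, max_iters=20, tol=1e-6):
    """Looping-as-a-driver sketch: each iteration is one MapReduce job.
    map:    point -> (nearest centroid id, point)
    reduce: centroid id -> new centroid (component-wise mean)"""
    for _ in range(max_iters):
        # --- map phase ---
        grouped = defaultdict(list)
        for p in points:
            cid = min(range(len(centroids)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            grouped[cid].append(p)
        # --- reduce phase ---
        new_centroids = list(centroids)
        for cid, pts in grouped.items():
            new_centroids[cid] = tuple(sum(c) / len(pts) for c in zip(*pts))
        # --- convergence test evaluated by the driver loop ---
        shift = max(sum((a - b) ** 2 for a, b in zip(c1, c2))
                    for c1, c2 in zip(centroids, new_centroids))
        centroids = new_centroids
        if shift < tol:
            break
    return centroids

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
print(kmeans_iterative_mapreduce(pts, [(0.0, 0.0), (1.0, 1.0)]))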
|
1303.3164 | Uma Sawant | Uma Sawant and Soumen Chakrabarti | Features and Aggregators for Web-scale Entity Search | 10 pages, 12 figures including tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on two research issues in entity search: scoring a document or
snippet that potentially supports a candidate entity, and aggregating scores
from different snippets into an entity score. Proximity scoring has been
studied in IR outside the scope of entity search. However, aggregation has been
hardwired except in a few cases where probabilistic language models are used.
We instead explore simple, robust, discriminative ranking algorithms, with
informative snippet features and broad families of aggregation functions. Our
first contribution is a study of proximity-cognizant snippet features. In
contrast with prior work which uses hardwired "proximity kernels" that
implement a fixed decay with distance, we present a "universal" feature
encoding which jointly expresses the perplexity (informativeness) of a query
term match and the proximity of the match to the entity mention. Our second
contribution is a study of aggregation functions. Rather than train the ranking
algorithm on snippets and then aggregate scores, we directly train on entities
such that the ranking algorithm takes into account the aggregation function
being used. Our third contribution is an extensive Web-scale evaluation of the
above algorithms on two data sets having quite different properties and
behavior. The first one is the W3C dataset used in TREC-scale enterprise
search, with pre-annotated entity mentions. The second is a Web-scale
open-domain entity search dataset consisting of 500 million Web pages, which
contain about 8 billion token spans annotated automatically with two million
entities from 200,000 entity types in Wikipedia. On the TREC dataset, the
performance of our system is comparable to the currently prevalent systems. On
the much larger and noisier Web dataset, our system delivers significantly
better performance than all other systems, with 8% MAP improvement over the
closest competitor.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2013 14:06:49 GMT"
}
] | 2013-03-14T00:00:00 | [
[
"Sawant",
"Uma",
""
],
[
"Chakrabarti",
"Soumen",
""
]
] | TITLE: Features and Aggregators for Web-scale Entity Search
ABSTRACT: We focus on two research issues in entity search: scoring a document or
snippet that potentially supports a candidate entity, and aggregating scores
from different snippets into an entity score. Proximity scoring has been
studied in IR outside the scope of entity search. However, aggregation has been
hardwired except in a few cases where probabilistic language models are used.
We instead explore simple, robust, discriminative ranking algorithms, with
informative snippet features and broad families of aggregation functions. Our
first contribution is a study of proximity-cognizant snippet features. In
contrast with prior work which uses hardwired "proximity kernels" that
implement a fixed decay with distance, we present a "universal" feature
encoding which jointly expresses the perplexity (informativeness) of a query
term match and the proximity of the match to the entity mention. Our second
contribution is a study of aggregation functions. Rather than train the ranking
algorithm on snippets and then aggregate scores, we directly train on entities
such that the ranking algorithm takes into account the aggregation function
being used. Our third contribution is an extensive Web-scale evaluation of the
above algorithms on two data sets having quite different properties and
behavior. The first one is the W3C dataset used in TREC-scale enterprise
search, with pre-annotated entity mentions. The second is a Web-scale
open-domain entity search dataset consisting of 500 million Web pages, which
contain about 8 billion token spans annotated automatically with two million
entities from 200,000 entity types in Wikipedia. On the TREC dataset, the
performance of our system is comparable to the currently prevalent systems. On
the much larger and noisier Web dataset, our system delivers significantly
better performance than all other systems, with 8% MAP improvement over the
closest competitor.
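The following sketch illustrates only the aggregation step discussed above: given per-snippet scores for each candidate entity (the snippet features and learned ranker are omitted), different aggregators turn them into an entity-level score. The softmax (log-sum-exp) variant interpolates between sum and max; the scores and entity names are made up.

import math

def aggregate(snippet_scores, how="softmax", temperature=1.0):
    """Combine per-snippet scores for one entity into a single entity score."""
    if how == "sum":
        return sum(snippet_scores)
    if how == "max":
        return max(snippet_scores)
    if how == "softmax":  # log-sum-exp style soft maximum
        m = max(snippet_scores)
        return m + temperature * math.log(
            sum(math.exp((s - m) / temperature) for s in snippet_scores))
    raise ValueError(how)

# Hypothetical snippet scores for two candidate entities.
candidates = {"Einstein": [0.9, 0.2, 0.4], "Newton": [0.5, 0.5, 0.5]}
ranking = sorted(candidates, key=lambda e: aggregate(candidates[e], "softmax"), reverse=True)
print(ranking)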
|
1303.2751 | Togerchety Hitendra sarma | Mallikarjun Hangarge | Gaussian Mixture Model for Handwritten Script Identification | Appeared in ICECIT-2012 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a Gaussian Mixture Model (GMM) to identify the script of
handwritten words of Roman, Devanagari, Kannada and Telugu scripts. It
emphasizes the significance of directional energies for identification of
the script of a word. It is robust to varied image sizes and different styles of
writing. A GMM is modeled using a set of six novel features derived from
directional energy distributions of the underlying image. The standard
deviations of the directional energy distributions are computed by decomposing an
image matrix into right and left diagonals. Furthermore, the deviation of the
horizontal and vertical energy distributions is also built into the GMM. A
dataset of 400 images out of 800 (200 of each script) is used for training the GMM
and the remaining images are used for testing. Exhaustive experiments are carried out
at the bi-script, tri-script and multi-script levels, achieving script
identification accuracies of 98.7%, 98.16% and 96.91%, respectively.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2013 02:32:02 GMT"
}
] | 2013-03-13T00:00:00 | [
[
"Hangarge",
"Mallikarjun",
""
]
] | TITLE: Gaussian Mixture Model for Handwritten Script Identification
ABSTRACT: This paper presents a Gaussian Mixture Model (GMM) to identify the script of
handwritten words of Roman, Devanagari, Kannada and Telugu scripts. It
emphasizes the significance of directional energies for identification of
the script of a word. It is robust to varied image sizes and different styles of
writing. A GMM is modeled using a set of six novel features derived from
directional energy distributions of the underlying image. The standard
deviations of the directional energy distributions are computed by decomposing an
image matrix into right and left diagonals. Furthermore, the deviation of the
horizontal and vertical energy distributions is also built into the GMM. A
dataset of 400 images out of 800 (200 of each script) is used for training the GMM
and the remaining images are used for testing. Exhaustive experiments are carried out
at the bi-script, tri-script and multi-script levels, achieving script
identification accuracies of 98.7%, 98.16% and 96.91%, respectively.
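A rough Python sketch of the pipeline shape described above, with assumptions: only four analogous features are computed (standard deviations of energies summed along right diagonals, left diagonals, rows and columns; the paper uses six, and its exact energy definition may differ), one GaussianMixture is fitted per script, and the word images below are random stand-ins.

import numpy as np
from sklearn.mixture import GaussianMixture

def directional_features(img):
    """Standard deviations of energy summed along right diagonals, left diagonals,
    rows and columns of the (float) word image."""
    a = img.astype(float)
    right_diag = [np.sum(np.diag(a, k)) for k in range(-a.shape[0] + 1, a.shape[1])]
    left_diag = [np.sum(np.diag(np.fliplr(a), k)) for k in range(-a.shape[0] + 1, a.shape[1])]
    return np.array([np.std(right_diag), np.std(left_diag),
                     np.std(a.sum(axis=1)), np.std(a.sum(axis=0))])

def train_script_models(train_sets, n_components=2, seed=0):
    """Fit one GaussianMixture per script on its feature vectors."""
    return {script: GaussianMixture(n_components=n_components, random_state=seed).fit(
                np.vstack([directional_features(im) for im in images]))
            for script, images in train_sets.items()}

def identify(img, models):
    feats = directional_features(img).reshape(1, -1)
    return max(models, key=lambda s: models[s].score(feats))

# Hypothetical word images standing in for Roman / Devanagari samples.
rng = np.random.default_rng(0)
train = {"roman": [rng.random((32, 96)) for _ in range(20)],
         "devanagari": [rng.random((32, 96)) * np.linspace(0, 1, 96) for _ in range(20)]}
models = train_script_models(train)
print(identify(train["roman"][0], models))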
|
1303.2783 | Conrad Sanderson | Conrad Sanderson, Mehrtash T. Harandi, Yongkang Wong, Brian C. Lovell | Combined Learning of Salient Local Descriptors and Distance Metrics for
Image Set Face Verification | null | IEEE International Conference on Advanced Video and Signal-Based
Surveillance (AVSS), pp. 294-299, 2012 | 10.1109/AVSS.2012.23 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In contrast to comparing faces via single exemplars, matching sets of face
images increases robustness and discrimination performance. Recent image set
matching approaches typically measure similarities between subspaces or
manifolds, while representing faces in a rigid and holistic manner. Such
representations are easily affected by variations in terms of alignment,
illumination, pose and expression. While local feature based representations
are considerably more robust to such variations, they have received little
attention within the image set matching area. We propose a novel image set
matching technique, comprised of three aspects: (i) robust descriptors of face
regions based on local features, partly inspired by the hierarchy in the human
visual system, (ii) use of several subspace and exemplar metrics to compare
corresponding face regions, (iii) jointly learning which regions are the most
discriminative while finding the optimal mixing weights for combining metrics.
Face recognition experiments on LFW, PIE and MOBIO face datasets show that the
proposed algorithm obtains considerably better performance than several recent
state-of-the-art techniques, such as Local Principal Angle and the Kernel
Affine Hull Method.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2013 06:12:59 GMT"
}
] | 2013-03-13T00:00:00 | [
[
"Sanderson",
"Conrad",
""
],
[
"Harandi",
"Mehrtash T.",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Combined Learning of Salient Local Descriptors and Distance Metrics for
Image Set Face Verification
ABSTRACT: In contrast to comparing faces via single exemplars, matching sets of face
images increases robustness and discrimination performance. Recent image set
matching approaches typically measure similarities between subspaces or
manifolds, while representing faces in a rigid and holistic manner. Such
representations are easily affected by variations in terms of alignment,
illumination, pose and expression. While local feature based representations
are considerably more robust to such variations, they have received little
attention within the image set matching area. We propose a novel image set
matching technique, comprised of three aspects: (i) robust descriptors of face
regions based on local features, partly inspired by the hierarchy in the human
visual system, (ii) use of several subspace and exemplar metrics to compare
corresponding face regions, (iii) jointly learning which regions are the most
discriminative while finding the optimal mixing weights for combining metrics.
Face recognition experiments on LFW, PIE and MOBIO face datasets show that the
proposed algorithm obtains considerably better performance than several recent
state-of-the-art techniques, such as Local Principal Angle and the Kernel
Affine Hull Method.
|
1209.2178 | Sutanay Choudhury | Sutanay Choudhury, Lawrence B. Holder, Abhik Ray, George Chin Jr.,
John T. Feo | Continuous Queries for Multi-Relational Graphs | Withdrawn because of information disclosure considerations | null | null | PNNL-SA-90326 | cs.DB cs.SI | http://creativecommons.org/licenses/publicdomain/ | Acting on time-critical events by processing ever growing social media or
news streams is a major technical challenge. Many of these data sources can be
modeled as multi-relational graphs. Continuous queries or techniques to search
for rare events that typically arise in monitoring applications have been
studied extensively for relational databases. This work is dedicated to answering
the question that emerges naturally: how can we efficiently execute a
continuous query on a dynamic graph? This paper presents an exact subgraph
search algorithm that exploits the temporal characteristics of representative
queries for online news or social media monitoring. The algorithm is based on a
novel data structure called the Subgraph Join Tree (SJ-Tree) that leverages the
structural and semantic characteristics of the underlying multi-relational
graph. The paper concludes with extensive experimentation on several real-world
datasets that demonstrates the validity of this approach.
| [
{
"version": "v1",
"created": "Mon, 10 Sep 2012 23:23:16 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Mar 2013 00:28:38 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"Choudhury",
"Sutanay",
""
],
[
"Holder",
"Lawrence B.",
""
],
[
"Ray",
"Abhik",
""
],
[
"Chin",
"George",
"Jr."
],
[
"Feo",
"John T.",
""
]
] | TITLE: Continuous Queries for Multi-Relational Graphs
ABSTRACT: Acting on time-critical events by processing ever growing social media or
news streams is a major technical challenge. Many of these data sources can be
modeled as multi-relational graphs. Continuous queries or techniques to search
for rare events that typically arise in monitoring applications have been
studied extensively for relational databases. This work is dedicated to answering
the question that emerges naturally: how can we efficiently execute a
continuous query on a dynamic graph? This paper presents an exact subgraph
search algorithm that exploits the temporal characteristics of representative
queries for online news or social media monitoring. The algorithm is based on a
novel data structure called the Subgraph Join Tree (SJ-Tree) that leverages the
structural and semantic characteristics of the underlying multi-relational
graph. The paper concludes with extensive experimentation on several real-world
datasets that demonstrates the validity of this approach.
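A toy Python sketch of the continuous-query idea only: partial matches of a small two-edge path query are cached and joined as new edges stream in, emitting complete matches immediately. The SJ-Tree in the paper handles general query decompositions and large graphs; this keeps only the incremental-join intuition, and the stream below is invented.

from collections import defaultdict

class ContinuousPathQuery:
    """Continuous query for the pattern  A -follows-> B -mentions-> C  over an edge stream."""
    def __init__(self):
        self.first_edges = defaultdict(list)   # B -> list of A with A -follows-> B

    def on_edge(self, src, label, dst):
        """Feed one streamed edge; return newly completed matches (A, B, C)."""
        matches = []
        if label == "follows":
            self.first_edges[dst].append(src)
        elif label == "mentions":
            for a in self.first_edges.get(src, []):
                matches.append((a, src, dst))
        return matches

q = ContinuousPathQuery()
stream = [("alice", "follows", "bob"),
          ("carol", "follows", "bob"),
          ("bob", "mentions", "acme")]
for edge in stream:
    for m in q.on_edge(*edge):
        print("match:", m)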
|
1302.6556 | Theodoros Rekatsinas | Theodoros Rekatsinas, Amol Deshpande, Ashwin Machanavajjhala | On Sharing Private Data with Multiple Non-Colluding Adversaries | 14 pages, 6 figures, 2 tables | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present SPARSI, a theoretical framework for partitioning sensitive data
across multiple non-colluding adversaries. Most work in privacy-aware data
sharing has considered disclosing summaries where the aggregate information
about the data is preserved, but sensitive user information is protected.
Nonetheless, there are applications, including online advertising, cloud
computing and crowdsourcing markets, where detailed and fine-grained user-data
must be disclosed. We consider a new data sharing paradigm and introduce the
problem of privacy-aware data partitioning, where a sensitive dataset must be
partitioned among k untrusted parties (adversaries). The goal is to maximize
the utility derived by partitioning and distributing the dataset, while
minimizing the amount of sensitive information disclosed. The data should be
distributed so that an adversary, without colluding with other adversaries,
cannot draw additional inferences about the private information, by linking
together multiple pieces of information released to her. The assumption of no
collusion is both reasonable and necessary in the above application domains
that require release of private user information. SPARSI enables us to formally
define privacy-aware data partitioning using the notion of sensitive properties
for modeling private information and a hypergraph representation for describing
the interdependencies between data entries and private information. We show
that solving privacy-aware partitioning is, in general, NP-hard, but for
specific information disclosure functions, good approximate solutions can be
found using relaxation techniques. Finally, we present a local search algorithm
applicable to generic information disclosure functions. We apply SPARSI
together with the proposed algorithms on data from a real advertising scenario
and show that we can partition data with no disclosure to any single
advertiser.
| [
{
"version": "v1",
"created": "Tue, 26 Feb 2013 19:49:55 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Feb 2013 20:48:52 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Mar 2013 15:41:40 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"Rekatsinas",
"Theodoros",
""
],
[
"Deshpande",
"Amol",
""
],
[
"Machanavajjhala",
"Ashwin",
""
]
] | TITLE: On Sharing Private Data with Multiple Non-Colluding Adversaries
ABSTRACT: We present SPARSI, a theoretical framework for partitioning sensitive data
across multiple non-colluding adversaries. Most work in privacy-aware data
sharing has considered disclosing summaries where the aggregate information
about the data is preserved, but sensitive user information is protected.
Nonetheless, there are applications, including online advertising, cloud
computing and crowdsourcing markets, where detailed and fine-grained user-data
must be disclosed. We consider a new data sharing paradigm and introduce the
problem of privacy-aware data partitioning, where a sensitive dataset must be
partitioned among k untrusted parties (adversaries). The goal is to maximize
the utility derived by partitioning and distributing the dataset, while
minimizing the amount of sensitive information disclosed. The data should be
distributed so that an adversary, without colluding with other adversaries,
cannot draw additional inferences about the private information, by linking
together multiple pieces of information released to her. The assumption of no
collusion is both reasonable and necessary in the above application domains
that require release of private user information. SPARSI enables us to formally
define privacy-aware data partitioning using the notion of sensitive properties
for modeling private information and a hypergraph representation for describing
the interdependencies between data entries and private information. We show
that solving privacy-aware partitioning is, in general, NP-hard, but for
specific information disclosure functions, good approximate solutions can be
found using relaxation techniques. Finally, we present a local search algorithm
applicable to generic information disclosure functions. We apply SPARSI
together with the proposed algorithms on data from a real advertising scenario
and show that we can partition data with no disclosure to any single
advertiser.
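As an illustration of the local-search flavor mentioned above (not the SPARSI formulation itself), the following toy sketch assigns data entries to k parties and greedily moves entries to reduce a crude disclosure proxy: the number of entry pairs linked to the same sensitive property that end up with the same party. The utility term and the hypergraph model from the paper are omitted, and the instance is invented.

import random
from collections import defaultdict

def disclosure(assign, links):
    """Count pairs of entries that share a sensitive property and the same party."""
    d = 0
    for prop, entries in links.items():
        per_party = defaultdict(int)
        for e in entries:
            per_party[assign[e]] += 1
        d += sum(c * (c - 1) // 2 for c in per_party.values())
    return d

def local_search_partition(entries, links, k, iters=200, seed=0):
    """Greedy local search: repeatedly move one entry to the party that lowers disclosure."""
    rng = random.Random(seed)
    assign = {e: rng.randrange(k) for e in entries}
    for _ in range(iters):
        e = rng.choice(entries)
        assign[e] = min(range(k), key=lambda p: disclosure({**assign, e: p}, links))
    return assign

# Hypothetical instance: 6 entries, two sensitive properties linking some of them.
entries = ["e1", "e2", "e3", "e4", "e5", "e6"]
links = {"ssn_of_alice": ["e1", "e2", "e3"], "address_of_bob": ["e4", "e5"]}
assign = local_search_partition(entries, links, k=3)
print(assign, "disclosure =", disclosure(assign, links))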
|
1303.0045 | Bogdan State | Bogdan State, Patrick Park, Ingmar Weber, Yelena Mejova, Michael Macy | The Mesh of Civilizations and International Email Flows | 10 pages, 3 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In The Clash of Civilizations, Samuel Huntington argued that the primary axis
of global conflict was no longer ideological or economic but cultural and
religious, and that this division would characterize the "battle lines of the
future." In contrast to the "top down" approach in previous research focused on
the relations among nation states, we focused on the flows of interpersonal
communication as a bottom-up view of international alignments. To that end, we
mapped the locations of the world's countries in global email networks to see
if we could detect cultural fault lines. Using IP-geolocation on a worldwide
anonymized dataset obtained from a large Internet company, we constructed a
global email network. In computing email flows we employ a novel rescaling
procedure to account for differences due to uneven adoption of a particular
Internet service across the world. Our analysis shows that email flows are
consistent with Huntington's thesis. In addition to location in Huntington's
"civilizations," our results also attest to the importance of both cultural and
economic factors in the patterning of inter-country communication ties.
| [
{
"version": "v1",
"created": "Thu, 28 Feb 2013 23:29:11 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Mar 2013 19:15:12 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"State",
"Bogdan",
""
],
[
"Park",
"Patrick",
""
],
[
"Weber",
"Ingmar",
""
],
[
"Mejova",
"Yelena",
""
],
[
"Macy",
"Michael",
""
]
] | TITLE: The Mesh of Civilizations and International Email Flows
ABSTRACT: In The Clash of Civilizations, Samuel Huntington argued that the primary axis
of global conflict was no longer ideological or economic but cultural and
religious, and that this division would characterize the "battle lines of the
future." In contrast to the "top down" approach in previous research focused on
the relations among nation states, we focused on the flows of interpersonal
communication as a bottom-up view of international alignments. To that end, we
mapped the locations of the world's countries in global email networks to see
if we could detect cultural fault lines. Using IP-geolocation on a worldwide
anonymized dataset obtained from a large Internet company, we constructed a
global email network. In computing email flows we employ a novel rescaling
procedure to account for differences due to uneven adoption of a particular
Internet service across the world. Our analysis shows that email flows are
consistent with Huntington's thesis. In addition to location in Huntington's
"civilizations," our results also attest to the importance of both cultural and
economic factors in the patterning of inter-country communication ties.
|
1303.2277 | Guilherme de Castro Mendes Gomes | Guilherme de Castro Mendes Gomes, Vitor Campos de Oliveira, Jussara
Marques de Almeida and Marcos Andr\'e Gon\c{c}alves | Is Learning to Rank Worth It? A Statistical Analysis of Learning to Rank
Methods | 7 pages, 10 tables, 14 references. Original (short) paper published
in the Brazilian Symposium on Databases, 2012 (SBBD2012). Current revision
submitted to the Journal of Information and Data Management (JIDM) | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Learning to Rank (L2R) research field has experienced a fast paced growth
over the last few years, with a wide variety of benchmark datasets and
baselines available for experimentation. We here investigate the main
assumption behind this field, which is that the use of sophisticated L2R
algorithms and models produces significant gains over more traditional and
simple information retrieval approaches. Our experimental results surprisingly
indicate that many L2R algorithms, when put up against the best individual
features of each dataset, may not produce statistically significant
differences, even if the absolute gains may seem large. We also find that most
of the reported baselines are statistically tied, with no clear winner.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2013 23:28:16 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"Gomes",
"Guilherme de Castro Mendes",
""
],
[
"de Oliveira",
"Vitor Campos",
""
],
[
"de Almeida",
"Jussara Marques",
""
],
[
"Gonçalves",
"Marcos André",
""
]
] | TITLE: Is Learning to Rank Worth It? A Statistical Analysis of Learning to Rank
Methods
ABSTRACT: The Learning to Rank (L2R) research field has experienced a fast paced growth
over the last few years, with a wide variety of benchmark datasets and
baselines available for experimentation. We here investigate the main
assumption behind this field, which is that the use of sophisticated L2R
algorithms and models produces significant gains over more traditional and
simple information retrieval approaches. Our experimental results surprisingly
indicate that many L2R algorithms, when put up against the best individual
features of each dataset, may not produce statistically significant
differences, even if the absolute gains may seem large. We also find that most
of the reported baselines are statistically tied, with no clear winner.
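A small sketch of the kind of paired significance test such a comparison relies on: per-query effectiveness scores of an L2R model versus the best single feature, compared with a paired t-test. The specific test and effectiveness measure used in the paper may differ, and the scores below are synthetic.

import numpy as np
from scipy import stats

# Hypothetical per-query NDCG scores for an L2R model and the best single feature.
rng = np.random.default_rng(1)
best_feature = rng.normal(0.55, 0.10, size=50).clip(0, 1)
l2r_model = (best_feature + rng.normal(0.01, 0.05, size=50)).clip(0, 1)

# Paired test over queries: is the mean per-query difference statistically significant?
t_stat, p_value = stats.ttest_rel(l2r_model, best_feature)
print(f"mean gain = {np.mean(l2r_model - best_feature):.4f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("gain is not statistically significant at the 5% level")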
|
1303.2465 | Conrad Sanderson | Vikas Reddy, Conrad Sanderson, Brian C. Lovell | A Low-Complexity Algorithm for Static Background Estimation from
Cluttered Image Sequences in Surveillance Contexts | null | EURASIP Journal on Image and Video Processing, 2011 | 10.1155/2011/164956 | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | For the purposes of foreground estimation, the true background model is
unavailable in many practical circumstances and needs to be estimated from
cluttered image sequences. We propose a sequential technique for static
background estimation in such conditions, with low computational and memory
requirements. Image sequences are analysed on a block-by-block basis. For each
block location a representative set is maintained which contains distinct
blocks obtained along its temporal line. The background estimation is carried
out in a Markov Random Field framework, where the optimal labelling solution is
computed using iterated conditional modes. The clique potentials are computed
based on the combined frequency response of the candidate block and its
neighbourhood. It is assumed that the most appropriate block results in the
smoothest response, indirectly enforcing the spatial continuity of structures
within a scene. Experiments on real-life surveillance videos demonstrate that
the proposed method obtains considerably better background estimates (both
qualitatively and quantitatively) than median filtering and the recently
proposed "intervals of stable intensity" method. Further experiments on the
Wallflower dataset suggest that the combination of the proposed method with a
foreground segmentation algorithm results in improved foreground segmentation.
| [
{
"version": "v1",
"created": "Mon, 11 Mar 2013 09:57:49 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"Reddy",
"Vikas",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: A Low-Complexity Algorithm for Static Background Estimation from
Cluttered Image Sequences in Surveillance Contexts
ABSTRACT: For the purposes of foreground estimation, the true background model is
unavailable in many practical circumstances and needs to be estimated from
cluttered image sequences. We propose a sequential technique for static
background estimation in such conditions, with low computational and memory
requirements. Image sequences are analysed on a block-by-block basis. For each
block location a representative set is maintained which contains distinct
blocks obtained along its temporal line. The background estimation is carried
out in a Markov Random Field framework, where the optimal labelling solution is
computed using iterated conditional modes. The clique potentials are computed
based on the combined frequency response of the candidate block and its
neighbourhood. It is assumed that the most appropriate block results in the
smoothest response, indirectly enforcing the spatial continuity of structures
within a scene. Experiments on real-life surveillance videos demonstrate that
the proposed method obtains considerably better background estimates (both
qualitatively and quantitatively) than median filtering and the recently
proposed "intervals of stable intensity" method. Further experiments on the
Wallflower dataset suggest that the combination of the proposed method with a
foreground segmentation algorithm results in improved foreground segmentation.
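A heavily simplified sketch of the block-wise ICM idea: each block location keeps a small representative set of candidate blocks, and iterated sweeps select, per location, the candidate closest to its currently selected neighbours. The mean-squared-difference cost below is a stand-in for the paper's combined frequency-response potential, and the toy scene is invented.

import numpy as np

def estimate_background(candidates, sweeps=5):
    """candidates[i][j] is a list of candidate blocks (2-D arrays) for location (i, j).
    ICM-like sweeps pick, per location, the candidate with the smallest mean squared
    difference to the currently chosen neighbouring blocks."""
    rows, cols = len(candidates), len(candidates[0])
    labels = [[0] * cols for _ in range(rows)]
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                neighbours = [candidates[i + di][j + dj][labels[i + di][j + dj]]
                              for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                              if 0 <= i + di < rows and 0 <= j + dj < cols]
                costs = [sum(np.mean((cand.astype(float) - n.astype(float)) ** 2)
                             for n in neighbours)
                         for cand in candidates[i][j]]
                labels[i][j] = int(np.argmin(costs))
    # assemble the background estimate from the selected candidate blocks
    return np.block([[candidates[i][j][labels[i][j]] for j in range(cols)] for i in range(rows)])

# Toy scene: a flat background, with one location also holding a "cluttered" candidate.
flat = np.full((8, 8), 120, dtype=np.uint8)
clutter = np.full((8, 8), 30, dtype=np.uint8)
cands = [[[flat] for _ in range(4)] for _ in range(4)]
cands[1][2] = [clutter, flat]            # clutter listed first, should not be chosen
bg = estimate_background(cands)
print(bg.shape, bg[8:16, 16:24].mean())  # block (1, 2) should come out ~120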
|
1303.2593 | Adeel Ansari | Adeel Ansari, Afza Bt Shafie, Abas B Md Said, Seema Ansari | Independent Component Analysis for Filtering Airwaves in Seabed Logging
Application | 7 pages, 13 figures | International Journal of Advanced Studies in Computers, Science
and Engineering (IJASCSE), 2013 | null | null | cs.OH physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Marine controlled source electromagnetic (CSEM) sensing method used for the
detection of hydrocarbon-based reservoirs in the seabed logging application does
not perform well due to the presence of the airwaves (or sea-surface). These
airwaves interfere with the signal that comes from the subsurface seafloor and
also tend to dominate in the receiver response at larger offsets. The task is
to identify these airwaves and the way they interact, and to filter them out.
In this paper, a popular method for counteracting the above-stated problem
scenario is Independent Component Analysis (ICA). Independent component
analysis (ICA) is a statistical method for transforming an observed
multidimensional or multivariate dataset into its constituent components
(sources) that are statistically as independent from each other as possible.
The ICA-type de-convolution algorithm FASTICA is considered for de-convolving the
mixed signals and is convenient depending upon the nature of
the source and noise model. The results from the FASTICA algorithm are shown
and evaluated. In this paper, we present the FASTICA algorithm for the seabed
logging application.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2013 16:18:51 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"Ansari",
"Adeel",
""
],
[
"Shafie",
"Afza Bt",
""
],
[
"Said",
"Abas B Md",
""
],
[
"Ansari",
"Seema",
""
]
] | TITLE: Independent Component Analysis for Filtering Airwaves in Seabed Logging
Application
ABSTRACT: Marine controlled source electromagnetic (CSEM) sensing method used for the
detection of hydrocarbon-based reservoirs in the seabed logging application does
not perform well due to the presence of the airwaves (or sea-surface). These
airwaves interfere with the signal that comes from the subsurface seafloor and
also tend to dominate in the receiver response at larger offsets. The task is
to identify these airwaves and the way they interact, and to filter them out.
In this paper, a popular method for counteracting the above-stated problem
scenario is Independent Component Analysis (ICA). Independent component
analysis (ICA) is a statistical method for transforming an observed
multidimensional or multivariate dataset into its constituent components
(sources) that are statistically as independent from each other as possible.
The ICA-type de-convolution algorithm FASTICA is considered for de-convolving the
mixed signals and is convenient depending upon the nature of
the source and noise model. The results from the FASTICA algorithm are shown
and evaluated. In this paper, we present the FASTICA algorithm for the seabed
logging application.
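A minimal sketch of the FastICA step using scikit-learn on synthetically mixed signals: a decaying "subsurface" response and a dominant "airwave" stand-in are linearly mixed and then unmixed into independent components. Real CSEM processing and the paper's pre- and post-processing are not reproduced.

import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-ins: a slowly decaying "subsurface" response and a dominant,
# spiky "airwave" component, linearly mixed as they might appear at two receivers.
t = np.linspace(0, 1, 2000)
subsurface = np.exp(-3 * t) * np.sin(2 * np.pi * 5 * t)
airwave = np.sign(np.sin(2 * np.pi * 20 * t))
sources = np.c_[subsurface, airwave]
mixing = np.array([[1.0, 0.8], [0.6, 1.0]])      # hypothetical mixing matrix
observed = sources @ mixing.T

# FastICA recovers statistically independent components; the airwave-like one
# can then be identified (e.g. by its shape or kurtosis) and discarded.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)
print(recovered.shape)  # (2000, 2): estimated subsurface response and airwave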
|
1208.3719 | Chris Thornton | Chris Thornton and Frank Hutter and Holger H. Hoos and Kevin
Leyton-Brown | Auto-WEKA: Combined Selection and Hyperparameter Optimization of
Classification Algorithms | 9 pages, 3 figures | null | null | Technical Report TR-2012-05 | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many different machine learning algorithms exist; taking into account each
algorithm's hyperparameters, there is a staggeringly large number of possible
alternatives overall. We consider the problem of simultaneously selecting a
learning algorithm and setting its hyperparameters, going beyond previous work
that addresses these issues in isolation. We show that this problem can be
addressed by a fully automated approach, leveraging recent innovations in
Bayesian optimization. Specifically, we consider a wide range of feature
selection techniques (combining 3 search and 8 evaluator methods) and all
classification approaches implemented in WEKA, spanning 2 ensemble methods, 10
meta-methods, 27 base classifiers, and hyperparameter settings for each
classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup
09, variants of the MNIST dataset and CIFAR-10, we show classification
performance often much better than using standard selection/hyperparameter
optimization methods. We hope that our approach will help non-expert users to
more effectively identify machine learning algorithms and hyperparameter
settings appropriate to their applications, and hence to achieve improved
performance.
| [
{
"version": "v1",
"created": "Sat, 18 Aug 2012 02:14:47 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2013 23:27:04 GMT"
}
] | 2013-03-08T00:00:00 | [
[
"Thornton",
"Chris",
""
],
[
"Hutter",
"Frank",
""
],
[
"Hoos",
"Holger H.",
""
],
[
"Leyton-Brown",
"Kevin",
""
]
] | TITLE: Auto-WEKA: Combined Selection and Hyperparameter Optimization of
Classification Algorithms
ABSTRACT: Many different machine learning algorithms exist; taking into account each
algorithm's hyperparameters, there is a staggeringly large number of possible
alternatives overall. We consider the problem of simultaneously selecting a
learning algorithm and setting its hyperparameters, going beyond previous work
that addresses these issues in isolation. We show that this problem can be
addressed by a fully automated approach, leveraging recent innovations in
Bayesian optimization. Specifically, we consider a wide range of feature
selection techniques (combining 3 search and 8 evaluator methods) and all
classification approaches implemented in WEKA, spanning 2 ensemble methods, 10
meta-methods, 27 base classifiers, and hyperparameter settings for each
classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup
09, variants of the MNIST dataset and CIFAR-10, we show classification
performance often much better than using standard selection/hyperparameter
optimization methods. We hope that our approach will help non-expert users to
more effectively identify machine learning algorithms and hyperparameter
settings appropriate to their applications, and hence to achieve improved
performance.
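To illustrate the combined algorithm selection and hyperparameter optimization problem in a few lines, here is a plain random-search sketch over a tiny scikit-learn space; Auto-WEKA itself searches over WEKA learners with Bayesian optimization, so this is only the shape of the problem, and the search space and budget are illustrative.

import random
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Joint search space: each entry pairs a learning algorithm with a hyperparameter sampler.
SPACE = [
    (RandomForestClassifier, lambda r: {"n_estimators": r.choice([10, 50, 100]),
                                        "max_depth": r.choice([None, 3, 10])}),
    (SVC,                    lambda r: {"C": 10 ** r.uniform(-2, 2),
                                        "gamma": 10 ** r.uniform(-3, 1)}),
    (LogisticRegression,     lambda r: {"C": 10 ** r.uniform(-2, 2), "max_iter": 1000}),
]

def random_search(X, y, budget=20, seed=0):
    """Random-search stand-in for Bayesian optimization over algorithms + hyperparameters."""
    rng = random.Random(seed)
    best = (None, None, -np.inf)
    for _ in range(budget):
        algo, sampler = rng.choice(SPACE)
        params = sampler(rng)
        score = cross_val_score(algo(**params), X, y, cv=5).mean()
        if score > best[2]:
            best = (algo.__name__, params, score)
    return best

X, y = load_iris(return_X_y=True)
print(random_search(X, y))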
|
1303.1585 | Swaminathan Sankararaman | Swaminathan Sankararaman, Pankaj K. Agarwal, Thomas M{\o}lhave, Arnold
P. Boedihardjo | Computing Similarity between a Pair of Trajectories | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With recent advances in sensing and tracking technology, trajectory data is
becoming increasingly pervasive and analysis of trajectory data is becoming
exceedingly important. A fundamental problem in analyzing trajectory data is
that of identifying common patterns between pairs or among groups of
trajectories. In this paper, we consider the problem of identifying similar
portions between a pair of trajectories, each observed as a sequence of points
sampled from it.
We present new measures of trajectory similarity --- both local and global
--- between a pair of trajectories to distinguish between similar and
dissimilar portions. Our model is robust under noise and outliers, it does not
make any assumptions on the sampling rates on either trajectory, and it works
even if they are partially observed. Additionally, the model also yields a
scalar similarity score which can be used to rank multiple pairs of
trajectories according to similarity, e.g. in clustering applications. We also
present efficient algorithms for computing the similarity under our measures;
the worst-case running time is quadratic in the number of sample points.
Finally, we present an extensive experimental study evaluating the
effectiveness of our approach on real datasets, comparing it with earlier
approaches, and illustrating many issues that arise in trajectory data. Our
experiments show that our approach is highly accurate in distinguishing similar
and dissimilar portions as compared to earlier methods even with sparse
sampling.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2013 01:37:22 GMT"
}
] | 2013-03-08T00:00:00 | [
[
"Sankararaman",
"Swaminathan",
""
],
[
"Agarwal",
"Pankaj K.",
""
],
[
"Mølhave",
"Thomas",
""
],
[
"Boedihardjo",
"Arnold P.",
""
]
] | TITLE: Computing Similarity between a Pair of Trajectories
ABSTRACT: With recent advances in sensing and tracking technology, trajectory data is
becoming increasingly pervasive and analysis of trajectory data is becoming
exceedingly important. A fundamental problem in analyzing trajectory data is
that of identifying common patterns between pairs or among groups of
trajectories. In this paper, we consider the problem of identifying similar
portions between a pair of trajectories, each observed as a sequence of points
sampled from it.
We present new measures of trajectory similarity --- both local and global
--- between a pair of trajectories to distinguish between similar and
dissimilar portions. Our model is robust under noise and outliers, it does not
make any assumptions on the sampling rates on either trajectory, and it works
even if they are partially observed. Additionally, the model also yields a
scalar similarity score which can be used to rank multiple pairs of
trajectories according to similarity, e.g. in clustering applications. We also
present efficient algorithms for computing the similarity under our measures;
the worst-case running time is quadratic in the number of sample points.
Finally, we present an extensive experimental study evaluating the
effectiveness of our approach on real datasets, comparing it with earlier
approaches, and illustrating many issues that arise in trajectory data. Our
experiments show that our approach is highly accurate in distinguishing similar
and dissimilar portions as compared to earlier methods even with sparse
sampling.
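The paper defines its own local and global measures; as a simple illustration of comparing two point-sampled trajectories without assuming aligned sampling rates, here is a dynamic time warping sketch, which is a standard, different measure and not the model proposed above. The example trajectories are made up.

import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping between two point sequences (lists of (x, y) tuples)."""
    n, m = len(traj_a), len(traj_b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

# Two noisy samplings of roughly the same path, taken at different rates.
a = [(t / 10, t / 10) for t in range(11)]
b = [(t / 5, t / 5 + 0.02) for t in range(6)]
print(round(dtw_distance(a, b), 3))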
|
1303.1741 | Emilio Ferrara | Pasquale De Meo, Emilio Ferrara, Giacomo Fiumara, Alessandro Provetti | Enhancing community detection using a network weighting strategy | 28 pages, 2 figures | Information Sciences, 222:648-668, 2013 | 10.1016/j.ins.2012.08.001 | null | cs.SI cs.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A community within a network is a group of vertices densely connected to each
other but less connected to the vertices outside. The problem of detecting
communities in large networks plays a key role in a wide range of research
areas, e.g. Computer Science, Biology and Sociology. Most of the existing
algorithms to find communities count on the topological features of the network
and often do not scale well on large, real-life instances.
In this article we propose a strategy to enhance existing community detection
algorithms by adding a pre-processing step in which edges are weighted
according to their centrality w.r.t. the network topology. In our approach, the
centrality of an edge reflects its contribution to making arbitrary graph
traversals, i.e., spreading messages over the network, as short as possible.
Our strategy is able to effectively complement information about network
topology and it can be used as an additional tool to enhance community
detection. The computation of edge centralities is carried out by performing
multiple random walks of bounded length on the network. Our method makes the
computation of edge centralities feasible also on large-scale networks. It has
been tested in conjunction with three state-of-the-art community detection
algorithms, namely the Louvain method, COPRA and OSLOM. Experimental results
show that our method raises the accuracy of existing algorithms both on
synthetic and real-life datasets.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2013 16:43:30 GMT"
}
] | 2013-03-08T00:00:00 | [
[
"De Meo",
"Pasquale",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Fiumara",
"Giacomo",
""
],
[
"Provetti",
"Alessandro",
""
]
] | TITLE: Enhancing community detection using a network weighting strategy
ABSTRACT: A community within a network is a group of vertices densely connected to each
other but less connected to the vertices outside. The problem of detecting
communities in large networks plays a key role in a wide range of research
areas, e.g. Computer Science, Biology and Sociology. Most of the existing
algorithms to find communities count on the topological features of the network
and often do not scale well on large, real-life instances.
In this article we propose a strategy to enhance existing community detection
algorithms by adding a pre-processing step in which edges are weighted
according to their centrality w.r.t. the network topology. In our approach, the
centrality of an edge reflects its contribution to making arbitrary graph
traversals, i.e., spreading messages over the network, as short as possible.
Our strategy is able to effectively complement information about network
topology and it can be used as an additional tool to enhance community
detection. The computation of edge centralities is carried out by performing
multiple random walks of bounded length on the network. Our method makes the
computation of edge centralities feasible also on large-scale networks. It has
been tested in conjunction with three state-of-the-art community detection
algorithms, namely the Louvain method, COPRA and OSLOM. Experimental results
show that our method raises the accuracy of existing algorithms both on
synthetic and real-life datasets.
|
1303.1747 | Emilio Ferrara | Pasquale De Meo, Emilio Ferrara, Giacomo Fiumara, Angela Ricciardello | A Novel Measure of Edge Centrality in Social Networks | 28 pages, 5 figures | Knowledge-based Systems, 30:136-150, 2012 | 10.1016/j.knosys.2012.01.007 | null | cs.SI cs.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of assigning centrality values to nodes and edges in graphs has
been widely investigated during the last few years. Recently, a novel measure of node
centrality has been proposed, called k-path centrality index, which is based on
the propagation of messages inside a network along paths consisting of at most
k edges. On the other hand, the importance of computing the centrality of edges
has been put into evidence since 1970's by Anthonisse and, subsequently by
Girvan and Newman. In this work we propose the generalization of the concept of
k-path centrality by defining the k-path edge centrality, a measure of
centrality introduced to compute the importance of edges. We provide an
efficient algorithm, running in O(k m), where m is the number of edges in the
graph. Thus, our technique is feasible for large scale network analysis.
Finally, the performance of our algorithm is analyzed, discussing the results
obtained against large online social network datasets.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2013 16:54:34 GMT"
}
] | 2013-03-08T00:00:00 | [
[
"De Meo",
"Pasquale",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Fiumara",
"Giacomo",
""
],
[
"Ricciardello",
"Angela",
""
]
] | TITLE: A Novel Measure of Edge Centrality in Social Networks
ABSTRACT: The problem of assigning centrality values to nodes and edges in graphs has
been widely investigated during the last few years. Recently, a novel measure of node
centrality has been proposed, called k-path centrality index, which is based on
the propagation of messages inside a network along paths consisting of at most
k edges. On the other hand, the importance of computing the centrality of edges
has been put into evidence since 1970's by Anthonisse and, subsequently by
Girvan and Newman. In this work we propose the generalization of the concept of
k-path centrality by defining the k-path edge centrality, a measure of
centrality introduced to compute the importance of edges. We provide an
efficient algorithm, running in O(k m), where m is the number of edges in the
graph. Thus, our technique is feasible for large scale network analysis.
Finally, the performance of our algorithm is analyzed, discussing the results
obtained against large online social network datasets.
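A simplified Python sketch of the random-walk flavour of k-path edge centrality: simulate many walks of at most k steps from random start nodes and count how often each edge is traversed. The actual algorithm differs in details such as edge selection, avoidance of reuse and normalization, and the graph, k and walk count below are illustrative.

import random
from collections import defaultdict

def k_path_edge_centrality(adj, k=5, rho=1000, seed=0):
    """Approximate edge centrality from `rho` random walks of at most `k` steps each."""
    rng = random.Random(seed)
    counts = defaultdict(int)
    nodes = list(adj)
    for _ in range(rho):
        u = rng.choice(nodes)
        for _ in range(k):
            if not adj[u]:
                break
            v = rng.choice(adj[u])
            counts[frozenset((u, v))] += 1   # undirected edge key
            u = v
    total = rho * k
    return {tuple(sorted(e)): c / total for e, c in counts.items()}

# Small undirected example graph given as adjacency lists.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
for edge, score in sorted(k_path_edge_centrality(adj).items(), key=lambda x: -x[1]):
    print(edge, round(score, 3))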
|
1303.1280 | Remi Lajugie | R\'emi Lajugie (LIENS), Sylvain Arlot (LIENS), Francis Bach (LIENS) | Large-Margin Metric Learning for Partitioning Problems | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider unsupervised partitioning problems, such as
clustering, image segmentation, video segmentation and other change-point
detection problems. We focus on partitioning problems based explicitly or
implicitly on the minimization of Euclidean distortions, which include
mean-based change-point detection, K-means, spectral clustering and normalized
cuts. Our main goal is to learn a Mahalanobis metric for these unsupervised
problems, leading to feature weighting and/or selection. This is done in a
supervised way by assuming the availability of several potentially partially
labelled datasets that share the same metric. We cast the metric learning
problem as a large-margin structured prediction problem, with proper definition
of regularizers and losses, leading to a convex optimization problem which can
be solved efficiently with iterative techniques. We provide experiments where
we show how learning the metric may significantly improve the partitioning
performance in synthetic examples, bioinformatics, video segmentation and image
segmentation problems.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2013 09:23:45 GMT"
}
] | 2013-03-07T00:00:00 | [
[
"Lajugie",
"Rémi",
"",
"LIENS"
],
[
"Arlot",
"Sylvain",
"",
"LIENS"
],
[
"Bach",
"Francis",
"",
"LIENS"
]
] | TITLE: Large-Margin Metric Learning for Partitioning Problems
ABSTRACT: In this paper, we consider unsupervised partitioning problems, such as
clustering, image segmentation, video segmentation and other change-point
detection problems. We focus on partitioning problems based explicitly or
implicitly on the minimization of Euclidean distortions, which include
mean-based change-point detection, K-means, spectral clustering and normalized
cuts. Our main goal is to learn a Mahalanobis metric for these unsupervised
problems, leading to feature weighting and/or selection. This is done in a
supervised way by assuming the availability of several potentially partially
labelled datasets that share the same metric. We cast the metric learning
problem as a large-margin structured prediction problem, with proper definition
of regularizers and losses, leading to a convex optimization problem which can
be solved efficiently with iterative techniques. We provide experiments where
we show how learning the metric may significantly improve the partitioning
performance in synthetic examples, bioinformatics, video segmentation and image
segmentation problems.
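As a rough illustration of the underlying idea (a diagonal Mahalanobis metric, i.e. feature weights, chosen so that K-means partitions agree with supervised partitions), the sketch below uses naive random search over weights instead of the paper's convex large-margin formulation; the synthetic data and all parameters are invented.

```python
# Illustrative sketch only: choose a diagonal Mahalanobis metric (feature
# weights) so that K-means partitions match supervised partitions. The paper
# uses a convex large-margin formulation; here we use crude random search.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# synthetic data: 2 informative dimensions, 8 noisy dimensions, 3 clusters
n, k = 300, 3
centers = rng.normal(scale=4.0, size=(k, 2))
labels = rng.integers(0, k, size=n)
X_inf = centers[labels] + rng.normal(size=(n, 2))
X = np.hstack([X_inf, rng.normal(scale=5.0, size=(n, 8))])

def partition_score(weights):
    Xw = X * weights                      # diagonal metric = feature scaling
    pred = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(Xw)
    return adjusted_rand_score(labels, pred)

best_w = np.ones(X.shape[1])
best_s = partition_score(best_w)
for _ in range(60):                       # crude random search over weights
    w = rng.uniform(0.0, 1.0, size=X.shape[1])
    s = partition_score(w)
    if s > best_s:
        best_w, best_s = w, s

print("ARI with identity metric :", round(partition_score(np.ones(X.shape[1])), 3))
print("ARI with learned weights :", round(best_s, 3))
print("learned weights (rounded):", np.round(best_w, 2))
```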
|
1103.2068 | Tamara Kolda | Justin D. Basilico and M. Arthur Munson and Tamara G. Kolda and Kevin
R. Dixon and W. Philip Kegelmeyer | COMET: A Recipe for Learning and Using Large Ensembles on Massive Data | null | ICDM 2011: Proceedings of the 2011 IEEE International Conference
on Data Mining, pp. 41-50, 2011 | 10.1109/ICDM.2011.39 | null | cs.LG cs.DC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COMET is a single-pass MapReduce algorithm for learning on large-scale data.
It builds multiple random forest ensembles on distributed blocks of data and
merges them into a mega-ensemble. This approach is appropriate when learning
from massive-scale data that is too large to fit on a single machine. To get
the best accuracy, IVoting should be used instead of bagging to generate the
training subset for each decision tree in the random forest. Experiments with
two large datasets (5GB and 50GB compressed) show that COMET compares favorably
(in both accuracy and training time) to learning on a subsample of data using a
serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble
evaluation which dynamically decides how many ensemble members to evaluate per
data point; this can reduce evaluation cost by 100X or more.
| [
{
"version": "v1",
"created": "Thu, 10 Mar 2011 16:15:42 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2011 16:20:45 GMT"
}
] | 2013-03-06T00:00:00 | [
[
"Basilico",
"Justin D.",
""
],
[
"Munson",
"M. Arthur",
""
],
[
"Kolda",
"Tamara G.",
""
],
[
"Dixon",
"Kevin R.",
""
],
[
"Kegelmeyer",
"W. Philip",
""
]
] | TITLE: COMET: A Recipe for Learning and Using Large Ensembles on Massive Data
ABSTRACT: COMET is a single-pass MapReduce algorithm for learning on large-scale data.
It builds multiple random forest ensembles on distributed blocks of data and
merges them into a mega-ensemble. This approach is appropriate when learning
from massive-scale data that is too large to fit on a single machine. To get
the best accuracy, IVoting should be used instead of bagging to generate the
training subset for each decision tree in the random forest. Experiments with
two large datasets (5GB and 50GB compressed) show that COMET compares favorably
(in both accuracy and training time) to learning on a subsample of data using a
serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble
evaluation which dynamically decides how many ensemble members to evaluate per
data point; this can reduce evaluation cost by 100X or more.
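A hedged, single-machine sketch of the COMET recipe is given below: independent random forests are trained on disjoint data blocks, their trees are merged into one mega-ensemble, and prediction stops early once the vote is decisive. IVoting and the Gaussian stopping rule from the paper are replaced by plain bagging and a fixed vote margin, and the dataset is synthetic.

```python
# Hedged sketch of the COMET idea (not the original MapReduce code): train
# forests on disjoint blocks, merge their trees, and evaluate lazily.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:5000], y[:5000], X[5000:], y[5000:]

# "map" step: one forest per data block (would run on separate workers)
blocks = np.array_split(np.arange(len(X_train)), 4)
mega_ensemble = []
for idx in blocks:
    rf = RandomForestClassifier(n_estimators=25, random_state=0)
    rf.fit(X_train[idx], y_train[idx])
    mega_ensemble.extend(rf.estimators_)          # "reduce": merge the trees

def lazy_predict(trees, x, margin=10):
    """Evaluate trees one by one; stop once one class leads by `margin` votes."""
    votes = np.zeros(2)
    for i, t in enumerate(trees):
        votes[int(t.predict(x.reshape(1, -1))[0])] += 1
        if abs(votes[0] - votes[1]) >= margin and i >= margin:
            break
    return int(votes.argmax()), i + 1             # prediction, trees evaluated

preds, used = zip(*(lazy_predict(mega_ensemble, x) for x in X_test))
print("accuracy:", np.mean(np.array(preds) == y_test))
print("mean trees evaluated per point: %.1f of %d"
      % (np.mean(used), len(mega_ensemble)))
```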
|
1208.4289 | Marcelo Serraro Zanetti | Marcelo Serrano Zanetti, Emre Sarigol, Ingo Scholtes, Claudio Juan
Tessone, Frank Schweitzer | A Quantitative Study of Social Organisation in Open Source Software
Communities | null | ICCSW 2012, pp. 116--122 | 10.4230/OASIcs.ICCSW.2012.116 | null | cs.SE cs.SI nlin.AO physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | The success of open source projects crucially depends on the voluntary
contributions of a sufficiently large community of users. Apart from the mere
size of the community, interesting questions arise when looking at the
evolution of structural features of collaborations between community members.
In this article, we discuss several network analytic proxies that can be used
to quantify different aspects of the social organisation in social
collaboration networks. We particularly focus on measures that can be related
to the cohesiveness of the communities, the distribution of responsibilities
and the resilience against turnover of community members. We present a
comparative analysis on a large-scale dataset that covers the full history of
collaborations between users of 14 major open source software communities. Our
analysis covers both aggregate and time-evolving measures and highlights
differences in the social organisation across communities. We argue that our
results are a promising step towards the definition of suitable, potentially
multi-dimensional, resilience and risk indicators for open source software
communities.
| [
{
"version": "v1",
"created": "Tue, 21 Aug 2012 15:34:35 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Nov 2012 10:55:11 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Mar 2013 13:17:21 GMT"
}
] | 2013-03-05T00:00:00 | [
[
"Zanetti",
"Marcelo Serrano",
""
],
[
"Sarigol",
"Emre",
""
],
[
"Scholtes",
"Ingo",
""
],
[
"Tessone",
"Claudio Juan",
""
],
[
"Schweitzer",
"Frank",
""
]
] | TITLE: A Quantitative Study of Social Organisation in Open Source Software
Communities
ABSTRACT: The success of open source projects crucially depends on the voluntary
contributions of a sufficiently large community of users. Apart from the mere
size of the community, interesting questions arise when looking at the
evolution of structural features of collaborations between community members.
In this article, we discuss several network analytic proxies that can be used
to quantify different aspects of the social organisation in social
collaboration networks. We particularly focus on measures that can be related
to the cohesiveness of the communities, the distribution of responsibilities
and the resilience against turnover of community members. We present a
comparative analysis on a large-scale dataset that covers the full history of
collaborations between users of 14 major open source software communities. Our
analysis covers both aggregate and time-evolving measures and highlights
differences in the social organisation across communities. We argue that our
results are a promising step towards the definition of suitable, potentially
multi-dimensional, resilience and risk indicators for open source software
communities.
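Two of the kinds of proxies discussed above can be computed with a few lines of code. The sketch below reports, for invented yearly snapshots of a collaboration network, the share of contributors in the largest connected component (a cohesiveness proxy) and the share of links held by the top 10% of contributors (a concentration-of-responsibilities proxy). It only illustrates the type of measure, not the paper's analysis.

```python
# Minimal sketch of network-analytic proxies on a toy collaboration network.
from collections import defaultdict

def proxies(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # largest connected component via traversal
    seen, largest = set(), 0
    for s in list(adj):
        if s in seen:
            continue
        comp, queue = {s}, [s]
        while queue:
            n = queue.pop()
            for m in adj[n]:
                if m not in comp:
                    comp.add(m)
                    queue.append(m)
        seen |= comp
        largest = max(largest, len(comp))
    degrees = sorted((len(v) for v in adj.values()), reverse=True)
    top = max(1, len(degrees) // 10)
    concentration = sum(degrees[:top]) / sum(degrees)
    return largest / len(adj), concentration

snapshots = {
    "2010": [("a","b"), ("b","c"), ("c","a"), ("d","e")],
    "2011": [("a","b"), ("b","c"), ("c","a"), ("c","d"), ("d","e"), ("e","f")],
}
for year, edges in snapshots.items():
    frac, conc = proxies(edges)
    print(year, "largest-component share=%.2f" % frac,
          "top-degree share=%.2f" % conc)
```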
|
1301.7015 | Entong Shen | Entong Shen, Ting Yu | Mining Frequent Graph Patterns with Differential Privacy | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering frequent graph patterns in a graph database offers valuable
information in a variety of applications. However, if the graph dataset
contains sensitive data of individuals such as mobile phone-call graphs and
web-click graphs, releasing discovered frequent patterns may present a threat
to the privacy of individuals. {\em Differential privacy} has recently emerged
as the {\em de facto} standard for private data analysis due to its provable
privacy guarantee. In this paper we propose the first differentially private
algorithm for mining frequent graph patterns.
We first show that previous techniques on differentially private discovery of
frequent {\em itemsets} cannot apply in mining frequent graph patterns due to
the inherent complexity of handling structural information in graphs. We then
address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling
based algorithm. Unlike previous work on frequent itemset mining, our
techniques do not rely on the output of a non-private mining algorithm.
Instead, we observe that both frequent graph pattern mining and the guarantee
of differential privacy can be unified into an MCMC sampling framework. In
addition, we establish the privacy and utility guarantee of our algorithm and
propose an efficient neighboring pattern counting technique as well.
Experimental results show that the proposed algorithm is able to output
frequent patterns with good precision.
| [
{
"version": "v1",
"created": "Tue, 29 Jan 2013 18:37:35 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Mar 2013 21:43:20 GMT"
}
] | 2013-03-05T00:00:00 | [
[
"Shen",
"Entong",
""
],
[
"Yu",
"Ting",
""
]
] | TITLE: Mining Frequent Graph Patterns with Differential Privacy
ABSTRACT: Discovering frequent graph patterns in a graph database offers valuable
information in a variety of applications. However, if the graph dataset
contains sensitive data of individuals such as mobile phone-call graphs and
web-click graphs, releasing discovered frequent patterns may present a threat
to the privacy of individuals. {\em Differential privacy} has recently emerged
as the {\em de facto} standard for private data analysis due to its provable
privacy guarantee. In this paper we propose the first differentially private
algorithm for mining frequent graph patterns.
We first show that previous techniques on differentially private discovery of
frequent {\em itemsets} cannot apply in mining frequent graph patterns due to
the inherent complexity of handling structural information in graphs. We then
address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling
based algorithm. Unlike previous work on frequent itemset mining, our
techniques do not rely on the output of a non-private mining algorithm.
Instead, we observe that both frequent graph pattern mining and the guarantee
of differential privacy can be unified into an MCMC sampling framework. In
addition, we establish the privacy and utility guarantee of our algorithm and
propose an efficient neighboring pattern counting technique as well.
Experimental results show that the proposed algorithm is able to output
frequent patterns with good precision.
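The sketch below illustrates one ingredient of this line of work, the exponential mechanism, in a deliberately simplified setting: frequent itemsets over a toy transaction database are sampled with probability proportional to exp(eps * support / 2), using direct sampling over an explicit candidate list rather than the paper's MCMC over graph patterns. All data are invented.

```python
# Toy illustration (itemsets, not graphs; direct sampling, not MCMC) of the
# exponential mechanism: a candidate pattern is sampled with probability
# proportional to exp(eps * support / 2), since adding or removing one
# transaction changes any support count by at most 1.
import math, random
from itertools import combinations

transactions = [
    {"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"},
    {"a", "b", "c"}, {"a", "b", "d"}, {"c", "d"}, {"a", "b"},
]

def support(pattern):
    return sum(1 for t in transactions if pattern <= t)

# candidate patterns: all itemsets of size 2 over the observed items
items = sorted(set().union(*transactions))
candidates = [frozenset(c) for c in combinations(items, 2)]

def private_top_pattern(eps, rng):
    weights = [math.exp(eps * support(p) / 2.0) for p in candidates]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for p, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return p
    return candidates[-1]

for eps in (0.1, 1.0, 10.0):
    picks = [tuple(sorted(private_top_pattern(eps, random.Random(s))))
             for s in range(200)]
    best = max(set(picks), key=picks.count)
    print("eps=%-4s most frequently selected pattern: %s" % (eps, best))
```

With a small eps the selection is close to uniform (strong privacy, low utility); with a large eps the truly most frequent pair is returned almost always.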
|
1303.0339 | Chunhua Shen | Xi Li and Guosheng Lin and Chunhua Shen and Anton van den Hengel and
Anthony Dick | Learning Hash Functions Using Column Generation | 9 pages, published in International Conf. Machine Learning, 2013 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/3.0/ | Fast nearest neighbor searching is becoming an increasingly important tool in
solving many large-scale problems. Recently a number of approaches to learning
data-dependent hash functions have been developed. In this work, we propose a
column generation based method for learning data-dependent hash functions on
the basis of proximity comparison information. Given a set of triplets that
encode the pairwise proximity comparison information, our method learns hash
functions that preserve the relative comparison relationships in the data as
well as possible within the large-margin learning framework. The learning
procedure is implemented using column generation and hence is named CGHash. At
each iteration of the column generation procedure, the best hash function is
selected. Unlike most other hashing methods, our method generalizes to new data
points naturally; and has a training objective which is convex, thus ensuring
that the global optimum can be identified. Experiments demonstrate that the
proposed method learns compact binary codes and that its retrieval performance
compares favorably with state-of-the-art methods when tested on a few benchmark
datasets.
| [
{
"version": "v1",
"created": "Sat, 2 Mar 2013 03:01:46 GMT"
}
] | 2013-03-05T00:00:00 | [
[
"Li",
"Xi",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Dick",
"Anthony",
""
]
] | TITLE: Learning Hash Functions Using Column Generation
ABSTRACT: Fast nearest neighbor searching is becoming an increasingly important tool in
solving many large-scale problems. Recently a number of approaches to learning
data-dependent hash functions have been developed. In this work, we propose a
column generation based method for learning data-dependent hash functions on
the basis of proximity comparison information. Given a set of triplets that
encode the pairwise proximity comparison information, our method learns hash
functions that preserve the relative comparison relationships in the data as
well as possible within the large-margin learning framework. The learning
procedure is implemented using column generation and hence is named CGHash. At
each iteration of the column generation procedure, the best hash function is
selected. Unlike most other hashing methods, our method generalizes to new data
points naturally; and has a training objective which is convex, thus ensuring
that the global optimum can be identified. Experiments demonstrate that the
proposed method learns compact binary codes and that its retrieval performance
compares favorably with state-of-the-art methods when tested on a few benchmark
datasets.
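The sketch below is a greedy stand-in for the general idea (selecting hash bits one at a time so that triplet orderings are preserved in Hamming space), using a pool of random hyperplane bits rather than the paper's column-generation solver; data, pool size and code length are invented.

```python
# Rough stand-in for the idea (not the CGHash optimisation): from a pool of
# random hyperplane hash bits, greedily add at each round the bit that best
# preserves triplet constraints in Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))

# triplets (i, j, k): x_i should stay closer to x_j than to x_k
triplets = []
while len(triplets) < 500:
    i, j, k = rng.integers(0, len(X), size=3)
    if np.linalg.norm(X[i] - X[j]) < np.linalg.norm(X[i] - X[k]):
        triplets.append((i, j, k))
triplets = np.array(triplets)

pool = rng.normal(size=(100, 16))             # candidate random hyperplanes
bits_pool = (X @ pool.T > 0).astype(int)      # candidate hash bits per point

def violation_rate(code):
    """Fraction of triplets whose ordering breaks in Hamming distance (ties count)."""
    d_ij = np.abs(code[triplets[:, 0]] - code[triplets[:, 1]]).sum(axis=1)
    d_ik = np.abs(code[triplets[:, 0]] - code[triplets[:, 2]]).sum(axis=1)
    return float(np.mean(d_ij >= d_ik))

chosen = []
for _ in range(16):                           # grow a 16-bit code greedily
    best_c, best_v = None, None
    for c in range(bits_pool.shape[1]):
        if c in chosen:
            continue
        v = violation_rate(bits_pool[:, chosen + [c]])
        if best_v is None or v < best_v:
            best_c, best_v = c, v
    chosen.append(best_c)
    print("bits=%2d  triplet violation rate=%.3f" % (len(chosen), best_v))
```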
|
1303.0566 | Taher Zaki | T. Zaki (1 and 2), M. Amrouch (1), D. Mammass (1), A. Ennaji (2) ((1)
IRFSIC Laboratory, Ibn Zohr University Agadir Morocco, (2) LITIS Laboratory,
University of Rouen France) | Arabic documents classification using fuzzy R.B.F. classifier with
sliding window | 5 pages, 2 figures | Journal of Computing , eISSN 2151-9617 , Volume 5, Issue 1,
January 2013 | null | null | cs.IR | http://creativecommons.org/licenses/publicdomain/ | In this paper, we propose a system for contextual and semantic Arabic
document classification that improves the standard fuzzy model by promoting
neighborhood semantic terms, which are absent from that model, through radial
basis modeling, in order to identify the documents relevant to a query. The
approach calculates the similarity between related terms by determining the
relevance of each term relative to the documents (NEAR operator), based on a
kernel function. The use of a sliding window improves the classification
process. The results obtained on an Arabic press dataset show very good
performance compared with the literature.
| [
{
"version": "v1",
"created": "Sun, 3 Mar 2013 20:50:12 GMT"
}
] | 2013-03-05T00:00:00 | [
[
"Zaki",
"T.",
"",
"1 and 2"
],
[
"Amrouch",
"M.",
""
],
[
"Mammass",
"D.",
""
],
[
"Ennaji",
"A.",
""
]
] | TITLE: Arabic documents classification using fuzzy R.B.F. classifier with
sliding window
ABSTRACT: In this paper, we propose a system for contextual and semantic Arabic
document classification that improves the standard fuzzy model by promoting
neighborhood semantic terms, which are absent from that model, through radial
basis modeling, in order to identify the documents relevant to a query. The
approach calculates the similarity between related terms by determining the
relevance of each term relative to the documents (NEAR operator), based on a
kernel function. The use of a sliding window improves the classification
process. The results obtained on an Arabic press dataset show very good
performance compared with the literature.
|
1303.0647 | Meena Kabilan | A. Meena and R. Raja | Spatial Fuzzy C Means PET Image Segmentation of Neurodegenerative
Disorder | null | Indian Journal of Computer Science and Engineering (IJCSE), ISSN :
0976-5166 Vol. 4 No.1 Feb-Mar 2013, pp.no: 50-55 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nuclear imaging has emerged as a promising research area in the medical field.
Images from each modality present their own challenges. Positron Emission
Tomography (PET) images may help to precisely localize disease, assisting in
planning the right treatment for each case and saving valuable time. In this
paper, a novel Spatial Fuzzy C Means (PET SFCM) clustering algorithm is
introduced for PET scan image datasets. The proposed algorithm incorporates
spatial neighborhood information into traditional FCM and updates the
objective function of each cluster. The algorithm is implemented and tested on
a large collection of data from patients with brain neurodegenerative
disorders such as Alzheimer's disease, demonstrating its effectiveness on
real-world patient datasets. Experimental results are compared with the
conventional FCM and K-Means clustering algorithms; the PET SFCM provides
satisfactory results compared with the other two algorithms.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2013 09:08:34 GMT"
}
] | 2013-03-05T00:00:00 | [
[
"Meena",
"A.",
""
],
[
"Raja",
"R.",
""
]
] | TITLE: Spatial Fuzzy C Means PET Image Segmentation of Neurodegenerative
Disorder
ABSTRACT: Nuclear imaging has emerged as a promising research area in the medical field.
Images from each modality present their own challenges. Positron Emission
Tomography (PET) images may help to precisely localize disease, assisting in
planning the right treatment for each case and saving valuable time. In this
paper, a novel Spatial Fuzzy C Means (PET SFCM) clustering algorithm is
introduced for PET scan image datasets. The proposed algorithm incorporates
spatial neighborhood information into traditional FCM and updates the
objective function of each cluster. The algorithm is implemented and tested on
a large collection of data from patients with brain neurodegenerative
disorders such as Alzheimer's disease, demonstrating its effectiveness on
real-world patient datasets. Experimental results are compared with the
conventional FCM and K-Means clustering algorithms; the PET SFCM provides
satisfactory results compared with the other two algorithms.
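A minimal numpy sketch of a spatial FCM step is shown below: standard fuzzy C-means memberships are re-weighted by the summed memberships of each pixel's 3x3 neighbourhood before renormalisation. The synthetic "scan", the fuzzifier and the iteration count are invented, and the update follows a common spatial-FCM variant rather than the authors' exact formulation.

```python
# Minimal spatial FCM sketch on a synthetic 2-D image (numpy only).
import numpy as np

rng = np.random.default_rng(1)
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0                     # bright square = "lesion"
img += rng.normal(scale=0.35, size=img.shape)

C, m, n_iter = 2, 2.0, 25                   # clusters, fuzzifier, iterations
x = img.ravel()
u = rng.random((C, x.size))
u /= u.sum(axis=0)

def neighbourhood_sum(u_img):
    """Sum of memberships over a 3x3 window (zero padding at the border)."""
    p = np.pad(u_img, 1)
    h, w = u_img.shape
    return sum(p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
               for di in (-1, 0, 1) for dj in (-1, 0, 1))

for _ in range(n_iter):
    centers = (u**m @ x) / (u**m).sum(axis=1)
    dist = np.abs(x[None, :] - centers[:, None]) + 1e-9
    u = 1.0 / (dist**(2/(m-1)) * (1.0/dist**(2/(m-1))).sum(axis=0, keepdims=True))
    # spatial step: multiply by neighbourhood membership mass and renormalise
    h = np.stack([neighbourhood_sum(u[c].reshape(img.shape)).ravel()
                  for c in range(C)])
    u = u * h
    u /= u.sum(axis=0, keepdims=True)

labels = u.argmax(axis=0).reshape(img.shape)
print("pixels assigned to the brighter cluster:",
      int((labels == centers.argmax()).sum()))
```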
|
1206.5065 | Sofia Zaidenberg | Sofia Zaidenberg (INRIA Sophia Antipolis), Bernard Boulay (INRIA
Sophia Antipolis), Fran\c{c}ois Bremond (INRIA Sophia Antipolis) | A generic framework for video understanding applied to group behavior
recognition | (20/03/2012) | 9th IEEE International Conference on Advanced Video and
Signal-Based Surveillance (AVSS 2012) (2012) 136 -142 | 10.1109/AVSS.2012.1 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an approach to detect and track groups of people in
video-surveillance applications, and to automatically recognize their behavior.
This method keeps track of individuals moving together by maintaining a spatial
and temporal group coherence. First, people are individually detected and
tracked. Second, their trajectories are analyzed over a temporal window and
clustered using the Mean-Shift algorithm. A coherence value describes how well
a set of people can be described as a group. Furthermore, we propose a formal
event description language. The group events recognition approach is
successfully validated on 4 camera views from 3 datasets: an airport, a subway,
a shopping center corridor and an entrance hall.
| [
{
"version": "v1",
"created": "Fri, 22 Jun 2012 06:24:30 GMT"
}
] | 2013-03-04T00:00:00 | [
[
"Zaidenberg",
"Sofia",
"",
"INRIA Sophia Antipolis"
],
[
"Boulay",
"Bernard",
"",
"INRIA\n Sophia Antipolis"
],
[
"Bremond",
"François",
"",
"INRIA Sophia Antipolis"
]
] | TITLE: A generic framework for video understanding applied to group behavior
recognition
ABSTRACT: This paper presents an approach to detect and track groups of people in
video-surveillance applications, and to automatically recognize their behavior.
This method keeps track of individuals moving together by maintaining a spatial
and temporal group coherence. First, people are individually detected and
tracked. Second, their trajectories are analyzed over a temporal window and
clustered using the Mean-Shift algorithm. A coherence value describes how well
a set of people can be described as a group. Furthermore, we propose a formal
event description language. The group events recognition approach is
successfully validated on 4 camera views from 3 datasets: an airport, a subway,
a shopping center corridor and an entrance hall.
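The grouping step alone can be illustrated as below: per-person trajectory features (mean position and mean velocity over a window) are clustered with Mean-Shift so that people moving together fall into the same group. The tracks are synthetic, the bandwidth is hand-picked, and detection, tracking and the event-description language are out of scope here.

```python
# Hedged sketch of the grouping step only: Mean-Shift on simple per-person
# trajectory features (mean position and scaled mean velocity).
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
T = 20                                   # frames in the temporal window

def walk(start, velocity):
    steps = np.cumsum(np.tile(velocity, (T, 1))
                      + rng.normal(scale=0.05, size=(T, 2)), axis=0)
    return np.array(start) + steps

# group A: three people walking together; group B: two people; one loner
tracks = [walk((0, 0), (0.5, 0.1)), walk((0.4, 0.2), (0.5, 0.1)),
          walk((0.2, -0.3), (0.5, 0.1)),
          walk((10, 10), (-0.2, 0.4)), walk((10.5, 9.8), (-0.2, 0.4)),
          walk((5, -8), (0.0, -0.6))]

features = np.array([np.hstack([t.mean(axis=0),
                                np.diff(t, axis=0).mean(axis=0) * 10])
                     for t in tracks])   # mean position + scaled mean velocity

labels = MeanShift(bandwidth=3.0).fit_predict(features)
print("group label per person:", labels)
```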
|
1301.5160 | Claudio Gentile | Fabio Vitale, Nicolo Cesa-Bianchi, Claudio Gentile, Giovanni Zappella | See the Tree Through the Lines: The Shazoo Algorithm -- Full Version -- | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the nodes of a given graph is a fascinating theoretical problem
with applications in several domains. Since graph sparsification via spanning
trees retains enough information while making the task much easier, trees are
an important special case of this problem. Although it is known how to predict
the nodes of an unweighted tree in a nearly optimal way, in the weighted case a
fully satisfactory algorithm is not available yet. We fill this hole and
introduce an efficient node predictor, Shazoo, which is nearly optimal on any
weighted tree. Moreover, we show that Shazoo can be viewed as a common
nontrivial generalization of both previous approaches for unweighted trees and
weighted lines. Experiments on real-world datasets confirm that Shazoo performs
well in that it fully exploits the structure of the input tree, and gets very
close to (and sometimes better than) less scalable energy minimization methods.
| [
{
"version": "v1",
"created": "Tue, 22 Jan 2013 11:59:04 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Feb 2013 17:31:08 GMT"
}
] | 2013-03-01T00:00:00 | [
[
"Vitale",
"Fabio",
""
],
[
"Cesa-Bianchi",
"Nicolo",
""
],
[
"Gentile",
"Claudio",
""
],
[
"Zappella",
"Giovanni",
""
]
] | TITLE: See the Tree Through the Lines: The Shazoo Algorithm -- Full Version --
ABSTRACT: Predicting the nodes of a given graph is a fascinating theoretical problem
with applications in several domains. Since graph sparsification via spanning
trees retains enough information while making the task much easier, trees are
an important special case of this problem. Although it is known how to predict
the nodes of an unweighted tree in a nearly optimal way, in the weighted case a
fully satisfactory algorithm is not available yet. We fill this hole and
introduce an efficient node predictor, Shazoo, which is nearly optimal on any
weighted tree. Moreover, we show that Shazoo can be viewed as a common
nontrivial generalization of both previous approaches for unweighted trees and
weighted lines. Experiments on real-world datasets confirm that Shazoo performs
well in that it fully exploits the structure of the input tree, and gets very
close to (and sometimes better than) less scalable energy minimization methods.
|
1302.7043 | Evangelos Papalexakis | Evangelos E. Papalexakis, Tom M. Mitchell, Nicholas D. Sidiropoulos,
Christos Faloutsos, Partha Pratim Talukdar, Brian Murphy | Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization | 9 pages | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we correlate neural activity in the human brain as it responds to
words, with behavioral data expressed as answers to questions about these same
words? In short, we want to find latent variables that explain both the brain
activity and the behavioral responses. We show that this is an instance
of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose
Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem
and produces a sparse latent low-rank subspace of the data. In our experiments,
we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm
for CMTF, along with a 5-fold increase in sparsity. Moreover, we extend
Scoup-SMT to handle missing data without degradation of performance. We apply
Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human
subjects) tensor and a (nouns, properties) matrix, with coupling along the
nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well
as to predict brain activity with competitive accuracy. Finally, we demonstrate
the generality of Scoup-SMT, by applying it on a Facebook dataset (users,
friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
| [
{
"version": "v1",
"created": "Thu, 28 Feb 2013 00:37:29 GMT"
}
] | 2013-03-01T00:00:00 | [
[
"Papalexakis",
"Evangelos E.",
""
],
[
"Mitchell",
"Tom M.",
""
],
[
"Sidiropoulos",
"Nicholas D.",
""
],
[
"Faloutsos",
"Christos",
""
],
[
"Talukdar",
"Partha Pratim",
""
],
[
"Murphy",
"Brian",
""
]
] | TITLE: Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization
ABSTRACT: How can we correlate neural activity in the human brain as it responds to
words, with behavioral data expressed as answers to questions about these same
words? In short, we want to find latent variables that explain both the brain
activity and the behavioral responses. We show that this is an instance
of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose
Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem
and produces a sparse latent low-rank subspace of the data. In our experiments,
we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm
for CMTF, along with a 5-fold increase in sparsity. Moreover, we extend
Scoup-SMT to handle missing data without degradation of performance. We apply
Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human
subjects) tensor and a (nouns, properties) matrix, with coupling along the
nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well
as to predict brain activity with competitive accuracy. Finally, we demonstrate
the generality of Scoup-SMT, by applying it on a Facebook dataset (users,
friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
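A bare-bones coupled matrix-tensor factorization by alternating least squares is sketched below for intuition: a third-order tensor and a matrix share the factor of their first mode. This is a didactic stand-in, not the paper's fast, parallel, sparsity-promoting algorithm, and the data are random.

```python
# Coupled matrix-tensor factorization by alternating least squares (numpy).
# A tensor X (I x J x K) and a matrix Y (I x M) share the factor A.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, M, R = 30, 20, 10, 15, 4

# ground-truth factors, then noisy observations
A0, B0, C0, D0 = (rng.normal(size=(d, R)) for d in (I, J, K, M))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0) + 0.01 * rng.normal(size=(I, J, K))
Y = A0 @ D0.T + 0.01 * rng.normal(size=(I, M))

def kr(U, V):
    """Khatri-Rao product: rows indexed by (u-row, v-row) pairs."""
    return (U[:, None, :] * V[None, :, :]).reshape(U.shape[0] * V.shape[0], -1)

X1 = X.reshape(I, J * K)                  # mode-1 unfolding (row i, col j*K+k)
X2 = X.transpose(1, 0, 2).reshape(J, I * K)
X3 = X.transpose(2, 0, 1).reshape(K, I * J)

A, B, C, D = (rng.normal(size=(d, R)) for d in (I, J, K, M))
for it in range(30):
    G = np.vstack([kr(B, C), D])          # coupled design for the shared mode
    Z = np.hstack([X1, Y])
    A = np.linalg.lstsq(G, Z.T, rcond=None)[0].T
    B = np.linalg.lstsq(kr(A, C), X2.T, rcond=None)[0].T
    C = np.linalg.lstsq(kr(A, B), X3.T, rcond=None)[0].T
    D = np.linalg.lstsq(A, Y, rcond=None)[0].T

X_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print("relative tensor error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
print("relative matrix error:", np.linalg.norm(Y - A @ D.T) / np.linalg.norm(Y))
```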
|
1302.6582 | Michael Schreiber | Michael Schreiber | A Case Study of the Arbitrariness of the h-Index and the
Highly-Cited-Publications Indicator | 16 pages, 3 tables, 5 figures. arXiv admin note: text overlap with
arXiv:1302.6396 | Journal of Informetrics, 7(2), 379-387 (2013) | 10.1016/j.joi.2012.12.006 | null | physics.soc-ph cs.DL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The arbitrariness of the h-index becomes evident when one requires q*h
instead of h citations as the threshold for the definition of the index, thus
changing the size of the core of the most influential publications of a
dataset. I analyze the citation records of 26 physicists in order to determine
how much the prefactor q influences the ranking. Likewise, the arbitrariness of
the highly-cited-publications indicator is due to the threshold value, given
either as an absolute number of citations or as a percentage of highly cited
papers. The analysis of the 26 citation records shows that the changes in the
rankings as a function of these thresholds are rather large and comparable with
the respective changes for the h-index.
| [
{
"version": "v1",
"created": "Tue, 26 Feb 2013 11:49:29 GMT"
}
] | 2013-02-28T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: A Case Study of the Arbitrariness of the h-Index and the
Highly-Cited-Publications Indicator
ABSTRACT: The arbitrariness of the h-index becomes evident when one requires q*h
instead of h citations as the threshold for the definition of the index, thus
changing the size of the core of the most influential publications of a
dataset. I analyze the citation records of 26 physicists in order to determine
how much the prefactor q influences the ranking. Likewise, the arbitrariness of
the highly-cited-publications indicator is due to the threshold value, given
either as an absolute number of citations or as a percentage of highly cited
papers. The analysis of the 26 citation records shows that the changes in the
rankings as a function of these thresholds are rather large and comparable with
the respective changes for the h-index.
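The generalised index is easy to compute; the helper below returns the largest h such that at least h papers have at least q*h citations each (q = 1 gives the ordinary h-index). The citation record used is invented.

```python
# Tiny helper for the generalised index discussed above.
def generalised_h(citations, q=1.0):
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= q * i:     # condition is monotone, so we can stop at first failure
            h = i
        else:
            break
    return h

record = [310, 120, 84, 60, 44, 31, 25, 21, 17, 12, 9, 7, 5, 3, 1, 0]
for q in (0.5, 1.0, 2.0, 4.0):
    print("q=%.1f  h_q=%d" % (q, generalised_h(record, q)))
```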
|
1302.6613 | Ratnadip Adhikari | Ratnadip Adhikari, R. K. Agrawal | An Introductory Study on Time Series Modeling and Forecasting | 67 pages, 29 figures, 33 references, book | LAP Lambert Academic Publishing, Germany, 2013 | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series modeling and forecasting has fundamental importance to various
practical domains. Thus, a lot of active research work has been going on in
this subject for several years. Many important models have been proposed in
the literature for improving the accuracy and effectiveness of time series
forecasting. The aim of this dissertation work is to present a concise
description of some popular time series forecasting models used in practice,
with their salient features. In this thesis, we have described three important
classes of time series models, viz. the stochastic, neural networks and SVM
based models, together with their inherent forecasting strengths and
weaknesses. We have also discussed the basic issues related to time
series modeling, such as stationarity, parsimony, overfitting, etc. Our
discussion about different time series models is supported by giving the
experimental forecast results, performed on six real time series datasets.
While fitting a model to a dataset, special care is taken to select the most
parsimonious one. To evaluate forecast accuracy as well as to compare among
different models fitted to a time series, we have used the five performance
measures, viz. MSE, MAD, RMSE, MAPE and Theil's U-statistics. For each of the
six datasets, we have shown the obtained forecast diagram which graphically
depicts the closeness between the original and forecasted observations. To have
authenticity as well as clarity in our discussion about time series modeling
and forecasting, we have taken the help of various published research works
from reputed journals and some standard books.
| [
{
"version": "v1",
"created": "Tue, 26 Feb 2013 22:18:55 GMT"
}
] | 2013-02-28T00:00:00 | [
[
"Adhikari",
"Ratnadip",
""
],
[
"Agrawal",
"R. K.",
""
]
] | TITLE: An Introductory Study on Time Series Modeling and Forecasting
ABSTRACT: Time series modeling and forecasting has fundamental importance to various
practical domains. Thus, a lot of active research work has been going on in
this subject for several years. Many important models have been proposed in
the literature for improving the accuracy and effectiveness of time series
forecasting. The aim of this dissertation work is to present a concise
description of some popular time series forecasting models used in practice,
with their salient features. In this thesis, we have described three important
classes of time series models, viz. the stochastic, neural networks and SVM
based models, together with their inherent forecasting strengths and
weaknesses. We have also discussed the basic issues related to time
series modeling, such as stationarity, parsimony, overfitting, etc. Our
discussion about different time series models is supported by giving the
experimental forecast results, performed on six real time series datasets.
While fitting a model to a dataset, special care is taken to select the most
parsimonious one. To evaluate forecast accuracy as well as to compare among
different models fitted to a time series, we have used the five performance
measures, viz. MSE, MAD, RMSE, MAPE and Theil's U-statistics. For each of the
six datasets, we have shown the obtained forecast diagram which graphically
depicts the closeness between the original and forecasted observations. To have
authenticity as well as clarity in our discussion about time series modeling
and forecasting, we have taken the help of various published research works
from reputed journals and some standard books.
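The accuracy measures mentioned above can be illustrated on a toy AR(1) series as below; Theil's U is computed in its U1 form, which may differ from the variant used in the thesis, and the series and split are invented.

```python
# Forecast-accuracy measures (MSE, MAD, RMSE, MAPE, Theil's U1) for a naive
# AR(1) forecast of a synthetic series (numpy only).
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = np.empty(n)
y[0] = 50.0
for t in range(1, n):                          # AR(1) process around a mean of 50
    y[t] = 50 + 0.8 * (y[t - 1] - 50) + rng.normal(scale=2.0)

train, test = y[:150], y[150:]

# least-squares fit of y_t = c + phi * y_{t-1}
A = np.vstack([np.ones(len(train) - 1), train[:-1]]).T
c, phi = np.linalg.lstsq(A, train[1:], rcond=None)[0]

# one-step-ahead forecasts over the test period
prev = np.concatenate([[train[-1]], test[:-1]])
f = c + phi * prev

err = test - f
mse = np.mean(err**2)
mad = np.mean(np.abs(err))
rmse = np.sqrt(mse)
mape = np.mean(np.abs(err / test)) * 100
theil_u = np.sqrt(np.sum(err**2)) / (np.sqrt(np.sum(test**2)) + np.sqrt(np.sum(f**2)))

print("phi=%.2f  MSE=%.2f  MAD=%.2f  RMSE=%.2f  MAPE=%.2f%%  Theil U=%.4f"
      % (phi, mse, mad, rmse, mape, theil_u))
```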
|
1302.6666 | Yan Huang | Yan Huang, Ruoming Jin, Favyen Bastani, Xiaoyang Sean Wang | Large Scale Real-time Ridesharing with Service Guarantee on Road
Networks | null | null | null | null | cs.DS | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The mean occupancy rate of personal vehicle trips in the United States is
only 1.6 persons per vehicle mile. Urban traffic gridlock is a familiar scene.
Ridesharing has the potential to solve many environmental, congestion, and
energy problems. In this paper, we introduce the problem of large scale
real-time ridesharing with service guarantee on road networks. Servers and trip
requests are dynamically matched while waiting time and service time
constraints of trips are satisfied. We first propose two basic algorithms: a
branch-and-bound algorithm and an integer programming algorithm. However, these
algorithm structures do not adapt well to the dynamic nature of the ridesharing
problem. Thus, we then propose a kinetic tree algorithm capable of better
scheduling dynamic requests and adjusting routes on-the-fly. We perform
experiments on a large real taxi dataset from Shanghai. The results show that
the kinetic tree algorithm is faster than other algorithms in response time.
| [
{
"version": "v1",
"created": "Wed, 27 Feb 2013 05:41:49 GMT"
}
] | 2013-02-28T00:00:00 | [
[
"Huang",
"Yan",
""
],
[
"Jin",
"Ruoming",
""
],
[
"Bastani",
"Favyen",
""
],
[
"Wang",
"Xiaoyang Sean",
""
]
] | TITLE: Large Scale Real-time Ridesharing with Service Guarantee on Road
Networks
ABSTRACT: The mean occupancy rate of personal vehicle trips in the United States is
only 1.6 persons per vehicle mile. Urban traffic gridlock is a familiar scene.
Ridesharing has the potential to solve many environmental, congestion, and
energy problems. In this paper, we introduce the problem of large scale
real-time ridesharing with service guarantee on road networks. Servers and trip
requests are dynamically matched while waiting time and service time
constraints of trips are satisfied. We first propose two basic algorithms: a
branch-and-bound algorithm and an integer programming algorithm. However, these
algorithm structures do not adapt well to the dynamic nature of the ridesharing
problem. Thus, we then propose a kinetic tree algorithm capable of better
scheduling dynamic requests and adjusting routes on-the-fly. We perform
experiments on a large real taxi dataset from Shanghai. The results show that
the kinetic tree algorithm is faster than other algorithms in response time.
|
1302.6957 | Jayaraman J. Thiagarajan | Karthikeyan Natesan Ramamurthy, Jayaraman J. Thiagarajan, Prasanna
Sattigeri and Andreas Spanias | Ensemble Sparse Models for Image Analysis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse representations with learned dictionaries have been successful in
several image analysis applications. In this paper, we propose and analyze the
framework of ensemble sparse models, and demonstrate their utility in image
restoration and unsupervised clustering. The proposed ensemble model
approximates the data as a linear combination of approximations from multiple
\textit{weak} sparse models. Theoretical analysis of the ensemble model reveals
that even in the worst-case, the ensemble can perform better than any of its
constituent individual models. The dictionaries corresponding to the individual
sparse models are obtained using either random example selection or boosted
approaches. Boosted approaches learn one dictionary per round such that the
dictionary learned in a particular round is optimized for the training examples
having high reconstruction error in the previous round. Results with compressed
recovery show that the ensemble representations lead to a better performance
compared to using a single dictionary obtained with the conventional
alternating minimization approach. The proposed ensemble models are also used
for single image superresolution, and we show that they perform comparably to
the recent approaches. In unsupervised clustering, experiments show that the
proposed model performs better than baseline approaches in several standard
datasets.
| [
{
"version": "v1",
"created": "Wed, 27 Feb 2013 18:58:36 GMT"
}
] | 2013-02-28T00:00:00 | [
[
"Ramamurthy",
"Karthikeyan Natesan",
""
],
[
"Thiagarajan",
"Jayaraman J.",
""
],
[
"Sattigeri",
"Prasanna",
""
],
[
"Spanias",
"Andreas",
""
]
] | TITLE: Ensemble Sparse Models for Image Analysis
ABSTRACT: Sparse representations with learned dictionaries have been successful in
several image analysis applications. In this paper, we propose and analyze the
framework of ensemble sparse models, and demonstrate their utility in image
restoration and unsupervised clustering. The proposed ensemble model
approximates the data as a linear combination of approximations from multiple
\textit{weak} sparse models. Theoretical analysis of the ensemble model reveals
that even in the worst-case, the ensemble can perform better than any of its
constituent individual models. The dictionaries corresponding to the individual
sparse models are obtained using either random example selection or boosted
approaches. Boosted approaches learn one dictionary per round such that the
dictionary learned in a particular round is optimized for the training examples
having high reconstruction error in the previous round. Results with compressed
recovery show that the ensemble representations lead to a better performance
compared to using a single dictionary obtained with the conventional
alternating minimization approach. The proposed ensemble models are also used
for single image superresolution, and we show that they perform comparably to
the recent approaches. In unsupervised clustering, experiments show that the
proposed model performs better than baseline approaches in several standard
datasets.
|
1302.6210 | Ratnadip Adhikari | Ratnadip Adhikari, R. K. Agrawal | A Homogeneous Ensemble of Artificial Neural Networks for Time Series
Forecasting | 8 pages, 4 figures, 2 tables, 26 references, international journal | International Journal of Computer Applications, Vol. 32, No. 7,
October 2011, pp. 1-8 | 10.5120/3913-5505 | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enhancing the robustness and accuracy of time series forecasting models is an
active area of research. Recently, Artificial Neural Networks (ANNs) have found
extensive applications in many practical forecasting problems. However, the
standard backpropagation ANN training algorithm has some critical issues, e.g.
it has a slow convergence rate and often converges to a local minimum, the
complex pattern of error surfaces, lack of proper training parameters selection
methods, etc. To overcome these drawbacks, various improved training methods
have been developed in literature; but, still none of them can be guaranteed as
the best for all problems. In this paper, we propose a novel weighted ensemble
scheme which intelligently combines multiple training algorithms to increase
the ANN forecast accuracies. The weight for each training algorithm is
determined from the performance of the corresponding ANN model on the
validation dataset. Experimental results on four important time series depict
that our proposed technique reduces the mentioned shortcomings of individual
ANN training algorithms to a great extent. Also it achieves significantly
better forecast accuracies than two other popular statistical models.
| [
{
"version": "v1",
"created": "Mon, 25 Feb 2013 20:09:19 GMT"
}
] | 2013-02-27T00:00:00 | [
[
"Adhikari",
"Ratnadip",
""
],
[
"Agrawal",
"R. K.",
""
]
] | TITLE: A Homogeneous Ensemble of Artificial Neural Networks for Time Series
Forecasting
ABSTRACT: Enhancing the robustness and accuracy of time series forecasting models is an
active area of research. Recently, Artificial Neural Networks (ANNs) have found
extensive applications in many practical forecasting problems. However, the
standard backpropagation ANN training algorithm has some critical issues, e.g.
it has a slow convergence rate and often converges to a local minimum, the
complex pattern of error surfaces, lack of proper training parameters selection
methods, etc. To overcome these drawbacks, various improved training methods
have been developed in literature; but, still none of them can be guaranteed as
the best for all problems. In this paper, we propose a novel weighted ensemble
scheme which intelligently combines multiple training algorithms to increase
the ANN forecast accuracies. The weight for each training algorithm is
determined from the performance of the corresponding ANN model on the
validation dataset. Experimental results on four important time series depict
that our proposed technique reduces the mentioned shortcomings of individual
ANN training algorithms to a great extent. Also it achieves significantly
better forecast accuracies than two other popular statistical models.
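A hedged sketch of the weighting scheme is shown below: the same small network is trained with three different training algorithms (here scikit-learn's lbfgs, adam and sgd solvers as stand-ins), each model is weighted by the inverse of its validation error, and the forecasts are combined. The series, network size and horizon are invented and do not reproduce the paper's setup.

```python
# Weighted ensemble of identically shaped networks trained with different
# algorithms; weights come from validation-set error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(400, dtype=float)
y = np.sin(2 * np.pi * t / 25) + 0.1 * rng.normal(size=t.size)

def window(series, lag=8):                       # lagged inputs -> next value
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

X, target = window(y)
X_tr, y_tr = X[:300], target[:300]
X_va, y_va = X[300:350], target[300:350]
X_te, y_te = X[350:], target[350:]

models, weights = [], []
for solver in ("lbfgs", "adam", "sgd"):          # three training algorithms
    net = MLPRegressor(hidden_layer_sizes=(16,), solver=solver,
                       max_iter=3000, random_state=0)
    net.fit(X_tr, y_tr)
    err = np.mean((net.predict(X_va) - y_va) ** 2)
    models.append(net)
    weights.append(1.0 / (err + 1e-12))          # better validation -> larger weight

weights = np.array(weights) / np.sum(weights)
ensemble = sum(w * m.predict(X_te) for w, m in zip(weights, models))
print("single-model test MSEs:",
      [round(float(np.mean((m.predict(X_te) - y_te)**2)), 4) for m in models])
print("weighted-ensemble MSE :", round(float(np.mean((ensemble - y_te)**2)), 4))
```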
|
1302.5101 | Jeremiah Blocki | Jeremiah Blocki and Saranga Komanduri and Ariel Procaccia and Or
Sheffet | Optimizing Password Composition Policies | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A password composition policy restricts the space of allowable passwords to
eliminate weak passwords that are vulnerable to statistical guessing attacks.
Usability studies have demonstrated that existing password composition policies
can sometimes result in weaker password distributions; hence a more principled
approach is needed. We introduce the first theoretical model for optimizing
password composition policies. We study the computational and sample complexity
of this problem under different assumptions on the structure of policies and on
users' preferences over passwords. Our main positive result is an algorithm
that -- with high probability -- constructs almost optimal policies (which are
specified as a union of subsets of allowed passwords), and requires only a
small number of samples of users' preferred passwords. We complement our
theoretical results with simulations using a real-world dataset of 32 million
passwords.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 20:53:41 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Feb 2013 19:44:50 GMT"
}
] | 2013-02-26T00:00:00 | [
[
"Blocki",
"Jeremiah",
""
],
[
"Komanduri",
"Saranga",
""
],
[
"Procaccia",
"Ariel",
""
],
[
"Sheffet",
"Or",
""
]
] | TITLE: Optimizing Password Composition Policies
ABSTRACT: A password composition policy restricts the space of allowable passwords to
eliminate weak passwords that are vulnerable to statistical guessing attacks.
Usability studies have demonstrated that existing password composition policies
can sometimes result in weaker password distributions; hence a more principled
approach is needed. We introduce the first theoretical model for optimizing
password composition policies. We study the computational and sample complexity
of this problem under different assumptions on the structure of policies and on
users' preferences over passwords. Our main positive result is an algorithm
that -- with high probability -- constructs almost optimal policies (which are
specified as a union of subsets of allowed passwords), and requires only a
small number of samples of users' preferred passwords. We complement our
theoretical results with simulations using a real-world dataset of 32 million
passwords.
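One way to make the optimisation target concrete is sketched below: given a (here invented) sample of user passwords and a candidate composition policy, estimate the probability mass an attacker captures with its k most popular allowed guesses. This is only an illustration of the quantity being optimised, not the paper's algorithm.

```python
# Estimate top-k guess mass under different composition policies from a
# (made-up) password sample.
from collections import Counter

sample = (["123456"] * 40 + ["password"] * 25 + ["qwerty1!"] * 6 +
          ["iloveyou"] * 10 + ["Tr0ub4dor&3"] * 2 + ["correcthorse"] * 9 +
          ["P@ssw0rd"] * 5 + ["zx9$Km!f2"] * 3)

policies = {
    "no policy":            lambda p: True,
    "length >= 8":          lambda p: len(p) >= 8,
    "length >= 8 + symbol": lambda p: len(p) >= 8 and any(not c.isalnum() for c in p),
}

def top_k_mass(passwords, k=3):
    counts = Counter(passwords)
    total = sum(counts.values())
    return sum(c for _, c in counts.most_common(k)) / total

for name, allowed in policies.items():
    kept = [p for p in sample if allowed(p)]
    print("%-22s allowed share=%.2f  top-3 guess mass=%.2f"
          % (name, len(kept) / len(sample), top_k_mass(kept)))
```

On this invented sample the stricter policy actually concentrates the remaining probability mass, which is exactly the kind of effect a principled policy optimiser would need to detect and avoid.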
|
1302.5771 | Anil Bhardwaj | Y. Futaana, S. Barabash, M. Wieser, C. Lue, P. Wurz, A. Vorburger, A.
Bhardwaj, K. Asamura | Remote Energetic Neutral Atom Imaging of Electric Potential Over a Lunar
Magnetic Anomaly | 19 pages, 3 figures | Geophys. Res. Lett., 40, doi:10.1002/grl.50135, 2013 | 10.1002/grl.50135 | null | physics.space-ph astro-ph.EP physics.plasm-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The formation of electric potential over lunar magnetized regions is
essential for understanding fundamental lunar science, for understanding the
lunar environment, and for planning human exploration on the Moon. A large
positive electric potential was predicted and detected from single point
measurements. Here, we demonstrate a remote imaging technique of electric
potential mapping at the lunar surface, making use of a new concept involving
hydrogen neutral atoms derived from solar wind. We apply the technique to a
lunar magnetized region using an existing dataset of the neutral atom energy
spectrometer SARA/CENA on Chandrayaan-1. Electrostatic potential larger than
+135 V inside the Gerasimovic anomaly is confirmed. This structure is found
spreading all over the magnetized region. The widely spread electric potential
can influence the local plasma and dust environment near the magnetic anomaly.
| [
{
"version": "v1",
"created": "Sat, 23 Feb 2013 07:35:28 GMT"
}
] | 2013-02-26T00:00:00 | [
[
"Futaana",
"Y.",
""
],
[
"Barabash",
"S.",
""
],
[
"Wieser",
"M.",
""
],
[
"Lue",
"C.",
""
],
[
"Wurz",
"P.",
""
],
[
"Vorburger",
"A.",
""
],
[
"Bhardwaj",
"A.",
""
],
[
"Asamura",
"K.",
""
]
] | TITLE: Remote Energetic Neutral Atom Imaging of Electric Potential Over a Lunar
Magnetic Anomaly
ABSTRACT: The formation of electric potential over lunar magnetized regions is
essential for understanding fundamental lunar science, for understanding the
lunar environment, and for planning human exploration on the Moon. A large
positive electric potential was predicted and detected from single point
measurements. Here, we demonstrate a remote imaging technique of electric
potential mapping at the lunar surface, making use of a new concept involving
hydrogen neutral atoms derived from solar wind. We apply the technique to a
lunar magnetized region using an existing dataset of the neutral atom energy
spectrometer SARA/CENA on Chandrayaan-1. Electrostatic potential larger than
+135 V inside the Gerasimovic anomaly is confirmed. This structure is found
spreading all over the magnetized region. The widely spread electric potential
can influence the local plasma and dust environment near the magnetic anomaly.
|
1302.5985 | Xiaodi Hou | Xiaodi Hou and Alan Yuille and Christof Koch | A Meta-Theory of Boundary Detection Benchmarks | NIPS 2012 Workshop on Human Computation for Science and Computational
Sustainability | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human labeled datasets, along with their corresponding evaluation algorithms,
play an important role in boundary detection. We here present a psychophysical
experiment that addresses the reliability of such benchmarks. To find better
remedies to evaluate the performance of any boundary detection algorithm, we
propose a computational framework to remove inappropriate human labels and
estimate the intrinsic properties of boundaries.
| [
{
"version": "v1",
"created": "Mon, 25 Feb 2013 03:12:12 GMT"
}
] | 2013-02-26T00:00:00 | [
[
"Hou",
"Xiaodi",
""
],
[
"Yuille",
"Alan",
""
],
[
"Koch",
"Christof",
""
]
] | TITLE: A Meta-Theory of Boundary Detection Benchmarks
ABSTRACT: Human labeled datasets, along with their corresponding evaluation algorithms,
play an important role in boundary detection. We here present a psychophysical
experiment that addresses the reliability of such benchmarks. To find better
remedies to evaluate the performance of any boundary detection algorithm, we
propose a computational framework to remove inappropriate human labels and
estimate the intrinsic properties of boundaries.
|
1302.6221 | Thierry Sousbie | Thierry Sousbie | DisPerSE: robust structure identification in 2D and 3D | To download DisPerSE, go to http://www2.iap.fr/users/sousbie/ | null | null | null | astro-ph.CO astro-ph.IM math-ph math.MP physics.comp-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the DIScrete PERsistent Structures Extractor (DisPerSE), an open
source software for the automatic and robust identification of structures in 2D
and 3D noisy data sets. The software is designed to identify all sorts of
topological structures, such as voids, peaks, sources, walls and filaments
through segmentation, with a special emphasis put on the latter ones. Based on
discrete Morse theory, DisPerSE is able to deal directly with noisy datasets
using the concept of persistence (a measure of the robustness of topological
features) and can be applied indifferently to various sorts of data-sets
defined over a possibly bounded manifold : 2D and 3D images, structured and
unstructured grids, discrete point samples via the Delaunay tessellation,
HEALPix tessellations of the sphere, ...
Although it was initially developed with cosmology in mind, various I/O
formats have been implemented and the current version is quite versatile. It
should therefore be useful for any application where a robust structure
identification is required as well as for studying the topology of sampled
functions (e.g. computing persistent Betti numbers).
DisPerSE can be downloaded directly from the website
http://www2.iap.fr/users/sousbie/ and a thorough online documentation is also
available at the same address.
| [
{
"version": "v1",
"created": "Mon, 25 Feb 2013 20:47:19 GMT"
}
] | 2013-02-26T00:00:00 | [
[
"Sousbie",
"Thierry",
""
]
] | TITLE: DisPerSE: robust structure identification in 2D and 3D
ABSTRACT: We present the DIScrete PERsistent Structures Extractor (DisPerSE), an open
source software for the automatic and robust identification of structures in 2D
and 3D noisy data sets. The software is designed to identify all sorts of
topological structures, such as voids, peaks, sources, walls and filaments
through segmentation, with a special emphasis put on the latter ones. Based on
discrete Morse theory, DisPerSE is able to deal directly with noisy datasets
using the concept of persistence (a measure of the robustness of topological
features) and can be applied indifferently to various sorts of data-sets
defined over a possibly bounded manifold : 2D and 3D images, structured and
unstructured grids, discrete point samples via the Delaunay tessellation,
HEALPix tessellations of the sphere, ...
Although it was initially developed with cosmology in mind, various I/O
formats have been implemented and the current version is quite versatile. It
should therefore be useful for any application where a robust structure
identification is required as well as for studying the topology of sampled
functions (e.g. computing persistent Betti numbers).
DisPerSE can be downloaded directly from the website
http://www2.iap.fr/users/sousbie/ and a thorough online documentation is also
available at the same address.
|
1211.3147 | Keke Chen | James Powers and Keke Chen | Secure Computation of Top-K Eigenvectors for Shared Matrices in the
Cloud | 8 pages | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the development of sensor networks, mobile computing, and web
applications, data are now collected from many distributed sources to form big
datasets. Such datasets can be hosted in the cloud to achieve economical
processing. However, these data might be highly sensitive requiring secure
storage and processing. We envision a cloud-based data storage and processing
framework that enables users to economically and securely share and handle big
datasets. Under this framework, we study the matrix-based data mining
algorithms with a focus on the secure top-k eigenvector algorithm. Our approach
uses an iterative processing model in which the authorized user interacts with
the cloud to achieve the result. In this process, both the source matrix and
the intermediate results remain confidential and the client side incurs low
costs. The security of this approach is guaranteed by using Paillier Encryption
and a random perturbation technique. We carefully analyze its security under a
cloud-specific threat model. Our experimental results show that the proposed
method is scalable to big matrices while requiring low client-side costs.
| [
{
"version": "v1",
"created": "Tue, 13 Nov 2012 21:59:18 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Feb 2013 05:35:22 GMT"
}
] | 2013-02-25T00:00:00 | [
[
"Powers",
"James",
""
],
[
"Chen",
"Keke",
""
]
] | TITLE: Secure Computation of Top-K Eigenvectors for Shared Matrices in the
Cloud
ABSTRACT: With the development of sensor networks, mobile computing, and web
applications, data are now collected from many distributed sources to form big
datasets. Such datasets can be hosted in the cloud to achieve economical
processing. However, these data might be highly sensitive requiring secure
storage and processing. We envision a cloud-based data storage and processing
framework that enables users to economically and securely share and handle big
datasets. Under this framework, we study the matrix-based data mining
algorithms with a focus on the secure top-k eigenvector algorithm. Our approach
uses an iterative processing model in which the authorized user interacts with
the cloud to achieve the result. In this process, both the source matrix and
the intermediate results remain confidential and the client side incurs low
costs. The security of this approach is guaranteed by using Paillier Encryption
and a random perturbation technique. We carefully analyze its security under a
cloud-specific threat model. Our experimental results show that the proposed
method is scalable to big matrices while requiring low client-side costs.
|
1301.3533 | Xanadu Halkias | Xanadu Halkias, Sebastien Paris, Herve Glotin | Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | 8 pages, 7 figures (including subfigures), ICleaR conference | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Belief Networks (DBN) have been successfully applied on popular machine
learning tasks. Specifically, when applied to hand-written digit recognition,
DBNs have achieved accuracy rates of approximately 98.8%. In an effort to
optimize the data representation achieved by the DBN and maximize their
descriptive power, recent advances have focused on inducing sparse constraints
at each layer of the DBN. In this paper we present a theoretical approach for
sparse constraints in the DBN using the mixed norm for both non-overlapping and
overlapping groups. We explore how these constraints affect the classification
accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES)
and provide initial estimations of their usefulness by altering different
parameters such as the group size and overlap percentage.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 00:12:21 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Feb 2013 10:18:15 GMT"
}
] | 2013-02-25T00:00:00 | [
[
"Halkias",
"Xanadu",
""
],
[
"Paris",
"Sebastien",
""
],
[
"Glotin",
"Herve",
""
]
] | TITLE: Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
ABSTRACT: Deep Belief Networks (DBN) have been successfully applied on popular machine
learning tasks. Specifically, when applied to hand-written digit recognition,
DBNs have achieved accuracy rates of approximately 98.8%. In an effort to
optimize the data representation achieved by the DBN and maximize their
descriptive power, recent advances have focused on inducing sparse constraints
at each layer of the DBN. In this paper we present a theoretical approach for
sparse constraints in the DBN using the mixed norm for both non-overlapping and
overlapping groups. We explore how these constraints affect the classification
accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES)
and provide initial estimations of their usefulness by altering different
parameters such as the group size and overlap percentage.
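The mixed norm itself is simple to compute; the snippet below evaluates the L1/L2 group norm (sum of per-group L2 norms) for two activation vectors with equal energy, showing that activity concentrated in one group is penalised less than activity spread across groups. Group size and values are arbitrary.

```python
# Mixed L1/L2 (group) norm on hidden-unit activations split into
# non-overlapping groups: L2 within each group, L1 (plain sum) across groups.
import numpy as np

def mixed_norm(activations, group_size):
    groups = activations.reshape(-1, group_size)        # non-overlapping groups
    return np.sum(np.sqrt(np.sum(groups**2, axis=1)))   # sum of per-group L2 norms

h_spread  = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])      # all groups active
h_grouped = np.array([0.707, 0.707, 0.707, 0.707, 0.0, 0.0, 0.0, 0.0])  # one group active

for name, h in (("spread", h_spread), ("grouped", h_grouped)):
    print("%-8s L2=%.2f  mixed L1/L2 (groups of 4)=%.2f"
          % (name, np.linalg.norm(h), mixed_norm(h, 4)))
```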
|
1302.5125 | Oren Rippel | Oren Rippel, Ryan Prescott Adams | High-Dimensional Probability Estimation with Deep Density Models | 12 pages, 4 figures, 1 table. Submitted for publication | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the fundamental problems in machine learning is the estimation of a
probability distribution from data. Many techniques have been proposed to study
the structure of data, most often building around the assumption that
observations lie on a lower-dimensional manifold of high probability. It has
been more difficult, however, to exploit this insight to build explicit,
tractable density models for high-dimensional data. In this paper, we introduce
the deep density model (DDM), a new approach to density estimation. We exploit
insights from deep learning to construct a bijective map to a representation
space, under which the transformation of the distribution of the data is
approximately factorized and has identical and known marginal densities. The
simplicity of the latent distribution under the model allows us to feasibly
explore it, and the invertibility of the map to characterize contraction of
measure across it. This enables us to compute normalized densities for
out-of-sample data. This combination of tractability and flexibility allows us
to tackle a variety of probabilistic tasks on high-dimensional datasets,
including: rapid computation of normalized densities at test-time without
evaluating a partition function; generation of samples without MCMC; and
characterization of the joint entropy of the data.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 21:20:30 GMT"
}
] | 2013-02-22T00:00:00 | [
[
"Rippel",
"Oren",
""
],
[
"Adams",
"Ryan Prescott",
""
]
] | TITLE: High-Dimensional Probability Estimation with Deep Density Models
ABSTRACT: One of the fundamental problems in machine learning is the estimation of a
probability distribution from data. Many techniques have been proposed to study
the structure of data, most often building around the assumption that
observations lie on a lower-dimensional manifold of high probability. It has
been more difficult, however, to exploit this insight to build explicit,
tractable density models for high-dimensional data. In this paper, we introduce
the deep density model (DDM), a new approach to density estimation. We exploit
insights from deep learning to construct a bijective map to a representation
space, under which the transformation of the distribution of the data is
approximately factorized and has identical and known marginal densities. The
simplicity of the latent distribution under the model allows us to feasibly
explore it, and the invertibility of the map to characterize contraction of
measure across it. This enables us to compute normalized densities for
out-of-sample data. This combination of tractability and flexibility allows us
to tackle a variety of probabilistic tasks on high-dimensional datasets,
including: rapid computation of normalized densities at test-time without
evaluating a partition function; generation of samples without MCMC; and
characterization of the joint entropy of the data.
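The density computation rests on the change-of-variables formula through the bijective map. The minimal sketch below applies that formula with an invertible linear map and a factorized standard-normal latent standing in for the learned deep bijection; both stand-ins are assumptions made purely for illustration.

```python
# Change-of-variables density sketch: log p_x(x) = log p_z(f(x)) + log|det J_f(x)|.
# A learned deep bijection is replaced here by a fixed invertible linear map, and
# the factorized latent is taken to be standard normal; both are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D = 3
W = rng.normal(size=(D, D)) + 3 * np.eye(D)        # invertible (diagonally dominant) map
b = rng.normal(size=D)

def f(x):
    return W @ x + b                               # bijective map to the latent space

def log_density(x):
    z = f(x)
    log_pz = -0.5 * (z @ z) - 0.5 * D * np.log(2 * np.pi)   # factorized N(0, I) latent
    log_det = np.linalg.slogdet(W)[1]                        # log|det J| is constant here
    return log_pz + log_det

x = rng.normal(size=D)
print("normalized log-density at x:", log_density(x))
```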
|
1302.5189 | Dilip K. Prasad | Dilip K. Prasad | Object Detection in Real Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection and recognition are important problems in computer vision.
Since these problems are meta-heuristic, despite a lot of research, practically
usable, intelligent, real-time, and dynamic object detection/recognition
methods are still unavailable. We propose a new object detection/recognition
method, which improves over the existing methods in every stage of the object
detection/recognition process. In addition to the usual features, we propose to
use geometric shapes, like linear cues, ellipses and quadrangles, as additional
features. The full potential of geometric cues is exploited by using them to
extract other features in a robust, computationally efficient, and less
meta-heuristic manner. We also propose a new hierarchical codebook, which
provides good generalization and discriminative properties. The codebook
enables fast multi-path inference mechanisms based on propagation of
conditional likelihoods, that make it robust to occlusion and noise. It has the
capability of dynamic learning. We also propose a new learning method that has
generative and discriminative learning capabilities, does not need a large and
fully supervised training dataset, and is capable of online learning. The
preliminary work of detecting geometric shapes in real images has been
completed. This preliminary work is the focus of this report. A future path for
realizing the proposed object detection/recognition method is also discussed in
brief.
| [
{
"version": "v1",
"created": "Thu, 21 Feb 2013 06:06:47 GMT"
}
] | 2013-02-22T00:00:00 | [
[
"Prasad",
"Dilip K.",
""
]
] | TITLE: Object Detection in Real Images
ABSTRACT: Object detection and recognition are important problems in computer vision.
Since these problems are meta-heuristic, despite a lot of research, practically
usable, intelligent, real-time, and dynamic object detection/recognition
methods are still unavailable. We propose a new object detection/recognition
method, which improves over the existing methods in every stage of the object
detection/recognition process. In addition to the usual features, we propose to
use geometric shapes, like linear cues, ellipses and quadrangles, as additional
features. The full potential of geometric cues is exploited by using them to
extract other features in a robust, computationally efficient, and less
meta-heuristic manner. We also propose a new hierarchical codebook, which
provides good generalization and discriminative properties. The codebook
enables fast multi-path inference mechanisms based on propagation of
conditional likelihoods, that make it robust to occlusion and noise. It has the
capability of dynamic learning. We also propose a new learning method that has
generative and discriminative learning capabilities, does not need a large and
fully supervised training dataset, and is capable of online learning. The
preliminary work of detecting geometric shapes in real images has been
completed. This preliminary work is the focus of this report. A future path for
realizing the proposed object detection/recognition method is also discussed in
brief.
|
1206.0051 | Florin Rusu | Chengjie Qin, Florin Rusu | PF-OLA: A High-Performance Framework for Parallel On-Line Aggregation | 36 pages | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online aggregation provides estimates to the final result of a computation
during the actual processing. The user can stop the computation as soon as the
estimate is accurate enough, typically early in the execution. This allows for
the interactive data exploration of the largest datasets. In this paper we
introduce the first framework for parallel online aggregation in which the
estimation virtually does not incur any overhead on top of the actual
execution. We define a generic interface to express any estimation model that
abstracts completely the execution details. We design a novel estimator
specifically targeted at parallel online aggregation. When executed by the
framework over a massive $8\text{TB}$ TPC-H instance, the estimator provides
accurate confidence bounds early in the execution even when the cardinality of
the final result is seven orders of magnitude smaller than the dataset size and
without incurring overhead.
| [
{
"version": "v1",
"created": "Thu, 31 May 2012 23:38:36 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Feb 2013 07:10:04 GMT"
}
] | 2013-02-21T00:00:00 | [
[
"Qin",
"Chengjie",
""
],
[
"Rusu",
"Florin",
""
]
] | TITLE: PF-OLA: A High-Performance Framework for Parallel On-Line Aggregation
ABSTRACT: Online aggregation provides estimates to the final result of a computation
during the actual processing. The user can stop the computation as soon as the
estimate is accurate enough, typically early in the execution. This allows for
the interactive data exploration of the largest datasets. In this paper we
introduce the first framework for parallel online aggregation in which the
estimation virtually does not incur any overhead on top of the actual
execution. We define a generic interface to express any estimation model that
abstracts completely the execution details. We design a novel estimator
specifically targeted at parallel online aggregation. When executed by the
framework over a massive $8\text{TB}$ TPC-H instance, the estimator provides
accurate confidence bounds early in the execution even when the cardinality of
the final result is seven orders of magnitude smaller than the dataset size and
without incurring overhead.
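To illustrate the online-aggregation idea itself (not the paper's parallel, overhead-free estimator), the sketch below processes a synthetic column in random order, maintains a running SUM estimate, and reports CLT-based confidence bounds that tighten as more tuples are seen; the data and reporting points are illustrative assumptions.

```python
# Minimal online-aggregation sketch: estimate SUM(values) from a random prefix of
# the tuples and report CLT-based confidence bounds that tighten as more tuples
# are processed. The paper's parallel estimator is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
values = rng.exponential(scale=10.0, size=N)    # synthetic column to aggregate
true_sum = values.sum()

order = rng.permutation(N)                      # process tuples in random order
seen, running_sum, running_sq = 0, 0.0, 0.0
for i in order:
    seen += 1
    running_sum += values[i]
    running_sq += values[i] ** 2
    if seen % 10_000 == 0:
        mean = running_sum / seen
        var = running_sq / seen - mean ** 2
        est = mean * N                                   # scale sample mean to a SUM estimate
        half_width = 1.96 * np.sqrt(var / seen) * N      # ~95% CLT confidence bound
        print(f"{seen:7d} tuples: {est:12.1f} +/- {half_width:10.1f}")
print(f"true sum: {true_sum:12.1f}")
```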
|
1302.4874 | Gon\c{c}alo Sim\~oes | Gon\c{c}alo Sim\~oes, Helena Galhardas, David Matos | A Labeled Graph Kernel for Relationship Extraction | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an approach for Relationship Extraction (RE) based
on labeled graph kernels. The kernel we propose is a particularization of a
random walk kernel that exploits two properties previously studied in the RE
literature: (i) the words between the candidate entities or connecting them in
a syntactic representation are particularly likely to carry information
regarding the relationship; and (ii) combining information from distinct
sources in a kernel may help the RE system make better decisions. We performed
experiments on a dataset of protein-protein interactions and the results show
that our approach obtains effectiveness values that are comparable with the
state-of-the-art kernel methods. Moreover, our approach is able to outperform
the state-of-the-art kernels when combined with other kernel methods.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 11:06:25 GMT"
}
] | 2013-02-21T00:00:00 | [
[
"Simões",
"Gonçalo",
""
],
[
"Galhardas",
"Helena",
""
],
[
"Matos",
"David",
""
]
] | TITLE: A Labeled Graph Kernel for Relationship Extraction
ABSTRACT: In this paper, we propose an approach for Relationship Extraction (RE) based
on labeled graph kernels. The kernel we propose is a particularization of a
random walk kernel that exploits two properties previously studied in the RE
literature: (i) the words between the candidate entities or connecting them in
a syntactic representation are particularly likely to carry information
regarding the relationship; and (ii) combining information from distinct
sources in a kernel may help the RE system make better decisions. We performed
experiments on a dataset of protein-protein interactions and the results show
that our approach obtains effectiveness values that are comparable with the
state-of-the-art kernel methods. Moreover, our approach is able to outperform
the state-of-the-art kernels when combined with other kernel methods.
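For readers unfamiliar with random walk kernels on labeled graphs, the sketch below shows the generic construction via the direct product graph: node pairs with matching labels form the product nodes, and walks up to a fixed length are counted with a decay factor. The toy graphs and parameters are assumptions; the paper's particular kernel and its combination of information sources are not reproduced.

```python
# Generic labeled random-walk kernel via the direct product graph: node pairs
# with equal labels form the product nodes, co-occurring edges form the product
# edges, and walks up to length L are counted with a decay factor.
import numpy as np

def product_adjacency(A1, labels1, A2, labels2):
    pairs = [(i, j) for i in range(len(labels1)) for j in range(len(labels2))
             if labels1[i] == labels2[j]]
    W = np.zeros((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            W[a, b] = A1[i, k] * A2[j, l]
    return W

def random_walk_kernel(A1, labels1, A2, labels2, decay=0.1, max_len=5):
    W = product_adjacency(A1, labels1, A2, labels2)
    ones = np.ones(W.shape[0])
    total, Wp = 0.0, np.eye(W.shape[0])
    for step in range(1, max_len + 1):
        Wp = Wp @ W
        total += (decay ** step) * (ones @ Wp @ ones)   # common walks of this length
    return total

# Two tiny labeled graphs (e.g., dependency structures around candidate entities)
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A2 = np.array([[0, 1], [1, 0]])
print(random_walk_kernel(A1, ["PROT", "interacts", "PROT"], A2, ["PROT", "interacts"]))
```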
|
1302.4932 | John S. Breese | John S. Breese, Russ Blake | Automating Computer Bottleneck Detection with Belief Nets | Appears in Proceedings of the Eleventh Conference on Uncertainty in
Artificial Intelligence (UAI1995) | null | null | UAI-P-1995-PG-36-45 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an application of belief networks to the diagnosis of bottlenecks
in computer systems. The technique relies on a high-level functional model of
the interaction between application workloads, the Windows NT operating system,
and system hardware. Given a workload description, the model predicts the
values of observable system counters available from the Windows NT performance
monitoring tool. Uncertainty in workloads, predictions, and counter values are
characterized with Gaussian distributions. During diagnostic inference, we use
observed performance monitor values to find the most probable assignment to the
workload parameters. In this paper we provide some background on automated
bottleneck detection, describe the structure of the system model, and discuss
empirical procedures for model calibration and verification. Part of the
calibration process includes generating a dataset to estimate a multivariate
Gaussian error model. Initial results in diagnosing bottlenecks are presented.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 15:19:11 GMT"
}
] | 2013-02-21T00:00:00 | [
[
"Breese",
"John S.",
""
],
[
"Blake",
"Russ",
""
]
] | TITLE: Automating Computer Bottleneck Detection with Belief Nets
ABSTRACT: We describe an application of belief networks to the diagnosis of bottlenecks
in computer systems. The technique relies on a high-level functional model of
the interaction between application workloads, the Windows NT operating system,
and system hardware. Given a workload description, the model predicts the
values of observable system counters available from the Windows NT performance
monitoring tool. Uncertainty in workloads, predictions, and counter values are
characterized with Gaussian distributions. During diagnostic inference, we use
observed performance monitor values to find the most probable assignment to the
workload parameters. In this paper we provide some background on automated
bottleneck detection, describe the structure of the system model, and discuss
empirical procedures for model calibration and verification. Part of the
calibration process includes generating a dataset to estimate a multivariate
Gaussian error model. Initial results in diagnosing bottlenecks are presented.
|
1301.0068 | Guy Bresler | Guy Bresler, Ma'ayan Bresler, David Tse | Optimal Assembly for High Throughput Shotgun Sequencing | 26 pages, 18 figures | null | null | null | q-bio.GN cs.DS cs.IT math.IT q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for the design of optimal assembly algorithms for
shotgun sequencing under the criterion of complete reconstruction. We derive a
lower bound on the read length and the coverage depth required for
reconstruction in terms of the repeat statistics of the genome. Building on
earlier works, we design a de Bruijn graph based assembly algorithm which can
achieve very close to the lower bound for repeat statistics of a wide range of
sequenced genomes, including the GAGE datasets. The results are based on a set
of necessary and sufficient conditions on the DNA sequence and the reads for
reconstruction. The conditions can be viewed as the shotgun sequencing analogue
of Ukkonen-Pevzner's necessary and sufficient conditions for Sequencing by
Hybridization.
| [
{
"version": "v1",
"created": "Tue, 1 Jan 2013 08:52:44 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Jan 2013 03:51:20 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Feb 2013 17:41:09 GMT"
}
] | 2013-02-20T00:00:00 | [
[
"Bresler",
"Guy",
""
],
[
"Bresler",
"Ma'ayan",
""
],
[
"Tse",
"David",
""
]
] | TITLE: Optimal Assembly for High Throughput Shotgun Sequencing
ABSTRACT: We present a framework for the design of optimal assembly algorithms for
shotgun sequencing under the criterion of complete reconstruction. We derive a
lower bound on the read length and the coverage depth required for
reconstruction in terms of the repeat statistics of the genome. Building on
earlier works, we design a de Bruijn graph based assembly algorithm which can
achieve very close to the lower bound for repeat statistics of a wide range of
sequenced genomes, including the GAGE datasets. The results are based on a set
of necessary and sufficient conditions on the DNA sequence and the reads for
reconstruction. The conditions can be viewed as the shotgun sequencing analogue
of Ukkonen-Pevzner's necessary and sufficient conditions for Sequencing by
Hybridization.
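A minimal sketch of the de Bruijn graph construction that such assembly algorithms build on: every k-mer in a read contributes an edge from its (k-1)-mer prefix to its (k-1)-mer suffix. The reads and the value of k are toy choices; the paper's algorithm and its necessary and sufficient conditions are not reproduced.

```python
# Minimal de Bruijn graph construction from reads: each k-mer in a read adds an
# edge between its (k-1)-mer prefix and suffix. Reads and k are toy values.
from collections import defaultdict

def de_bruijn_graph(reads, k):
    graph = defaultdict(list)          # (k-1)-mer -> list of successor (k-1)-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
for node, successors in de_bruijn_graph(reads, k=4).items():
    print(node, "->", successors)
```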
|
1302.4504 | Diego Amancio Raphael | Diego R. Amancio, Osvaldo N. Oliveira Jr. and Luciano da F. Costa | On the use of topological features and hierarchical characterization for
disambiguating names in collaborative networks | null | Europhysics Letters (2012) 99 48002 | 10.1209/0295-5075/99/48002 | null | physics.soc-ph cs.DL cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many features of complex systems can now be unveiled by applying statistical
physics methods to treat them as social networks. The power of the analysis may
be limited, however, by the presence of ambiguity in names, e.g., caused by
homonymy in collaborative networks. In this paper we show that the ability to
distinguish between homonymous authors is enhanced when longer-distance
connections are considered, rather than looking at only the immediate neighbors
of a node in the collaborative network. Optimized results were obtained upon
using the 3rd hierarchy in connections. Furthermore, reasonable distinction
among authors could also be achieved upon using pattern recognition strategies
for the data generated from the topology of the collaborative network. These
results were obtained with a network from papers in the arXiv repository, into
which homonymy was deliberately introduced to test the methods with a
controlled, reliable dataset. In all cases, several methods of supervised and
unsupervised machine learning were used, leading to the same overall results.
The suitability of using deeper hierarchies and network topology was confirmed
with a real database of movie actors, with the additional finding that the
distinguishing ability can be further enhanced by combining topology features
and long-range connections in the collaborative network.
| [
{
"version": "v1",
"created": "Tue, 19 Feb 2013 02:00:01 GMT"
}
] | 2013-02-20T00:00:00 | [
[
"Amancio",
"Diego R.",
""
],
[
"Oliveira",
"Osvaldo N.",
"Jr."
],
[
"Costa",
"Luciano da F.",
""
]
] | TITLE: On the use of topological features and hierarchical characterization for
disambiguating names in collaborative networks
ABSTRACT: Many features of complex systems can now be unveiled by applying statistical
physics methods to treat them as social networks. The power of the analysis may
be limited, however, by the presence of ambiguity in names, e.g., caused by
homonymy in collaborative networks. In this paper we show that the ability to
distinguish between homonymous authors is enhanced when longer-distance
connections are considered, rather than looking at only the immediate neighbors
of a node in the collaborative network. Optimized results were obtained upon
using the 3rd hierarchy in connections. Furthermore, reasonable distinction
among authors could also be achieved upon using pattern recognition strategies
for the data generated from the topology of the collaborative network. These
results were obtained with a network from papers in the arXiv repository, into
which homonymy was deliberately introduced to test the methods with a
controlled, reliable dataset. In all cases, several methods of supervised and
unsupervised machine learning were used, leading to the same overall results.
The suitability of using deeper hierarchies and network topology was confirmed
with a real database of movie actors, with the additional finding that the
distinguishing ability can be further enhanced by combining topology features
and long-range connections in the collaborative network.
|
1302.4680 | Gregory Newstadt | Gregory E. Newstadt, Edmund G. Zelnio, and Alfred O. Hero III | Moving target inference with hierarchical Bayesian models in synthetic
aperture radar imagery | 35 pages, 8 figures, 1 algorithm, 11 tables | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In synthetic aperture radar (SAR), images are formed by focusing the response
of stationary objects to a single spatial location. On the other hand, moving
targets cause phase errors in the standard formation of SAR images that cause
displacement and defocusing effects. SAR imagery also contains significant
sources of non-stationary spatially-varying noises, including antenna gain
discrepancies, angular scintillation (glints) and complex speckle. In order to
account for this intricate phenomenology, this work combines the knowledge of
the physical, kinematic, and statistical properties of SAR imaging into a
single unified Bayesian structure that simultaneously (a) estimates the
nuisance parameters such as clutter distributions and antenna miscalibrations
and (b) estimates the target signature required for detection/inference of the
target state. Moreover, we provide a Monte Carlo estimate of the posterior
distribution for the target state and nuisance parameters that infers the
parameters of the model directly from the data, largely eliminating tuning of
algorithm parameters. We demonstrate that our algorithm competes at least as
well on a synthetic dataset as state-of-the-art algorithms for estimating
sparse signals. Finally, performance analysis on a measured dataset
demonstrates that the proposed algorithm is robust at detecting/estimating
targets over a wide area and performs at least as well as popular algorithms
for SAR moving target detection.
| [
{
"version": "v1",
"created": "Tue, 19 Feb 2013 17:12:53 GMT"
}
] | 2013-02-20T00:00:00 | [
[
"Newstadt",
"Gregory E.",
""
],
[
"Zelnio",
"Edmund G.",
""
],
[
"Hero",
"Alfred O.",
"III"
]
] | TITLE: Moving target inference with hierarchical Bayesian models in synthetic
aperture radar imagery
ABSTRACT: In synthetic aperture radar (SAR), images are formed by focusing the response
of stationary objects to a single spatial location. On the other hand, moving
targets cause phase errors in the standard formation of SAR images that cause
displacement and defocusing effects. SAR imagery also contains significant
sources of non-stationary spatially-varying noises, including antenna gain
discrepancies, angular scintillation (glints) and complex speckle. In order to
account for this intricate phenomenology, this work combines the knowledge of
the physical, kinematic, and statistical properties of SAR imaging into a
single unified Bayesian structure that simultaneously (a) estimates the
nuisance parameters such as clutter distributions and antenna miscalibrations
and (b) estimates the target signature required for detection/inference of the
target state. Moreover, we provide a Monte Carlo estimate of the posterior
distribution for the target state and nuisance parameters that infers the
parameters of the model directly from the data, largely eliminating tuning of
algorithm parameters. We demonstrate that our algorithm competes at least as
well on a synthetic dataset as state-of-the-art algorithms for estimating
sparse signals. Finally, performance analysis on a measured dataset
demonstrates that the proposed algorithm is robust at detecting/estimating
targets over a wide area and performs at least as well as popular algorithms
for SAR moving target detection.
|
1301.5809 | Derek Greene | Derek Greene and P\'adraig Cunningham | Producing a Unified Graph Representation from Multiple Social Network
Views | 13 pages. Clarify notation | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many social networks, several different link relations will exist between
the same set of users. Additionally, attribute or textual information will be
associated with those users, such as demographic details or user-generated
content. For many data analysis tasks, such as community finding and data
visualisation, the provision of multiple heterogeneous types of user data makes
the analysis process more complex. We propose an unsupervised method for
integrating multiple data views to produce a single unified graph
representation, based on the combination of the k-nearest neighbour sets for
users derived from each view. These views can be either relation-based or
feature-based. The proposed method is evaluated on a number of annotated
multi-view Twitter datasets, where it is shown to support the discovery of the
underlying community structure in the data.
| [
{
"version": "v1",
"created": "Thu, 24 Jan 2013 15:07:12 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jan 2013 15:41:22 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Feb 2013 13:56:21 GMT"
}
] | 2013-02-19T00:00:00 | [
[
"Greene",
"Derek",
""
],
[
"Cunningham",
"Pádraig",
""
]
] | TITLE: Producing a Unified Graph Representation from Multiple Social Network
Views
ABSTRACT: In many social networks, several different link relations will exist between
the same set of users. Additionally, attribute or textual information will be
associated with those users, such as demographic details or user-generated
content. For many data analysis tasks, such as community finding and data
visualisation, the provision of multiple heterogeneous types of user data makes
the analysis process more complex. We propose an unsupervised method for
integrating multiple data views to produce a single unified graph
representation, based on the combination of the k-nearest neighbour sets for
users derived from each view. These views can be either relation-based or
feature-based. The proposed method is evaluated on a number of annotated
multi-view Twitter datasets, where it is shown to support the discovery of the
underlying community structure in the data.
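A minimal sketch of one way to fuse per-view k-nearest-neighbour sets into a single weighted graph: an edge's weight is the number of views in which it appears among the kNN relations. The toy feature-based views and this particular counting rule are assumptions for illustration; the paper defines its own combination scheme.

```python
# Minimal sketch of fusing per-view k-nearest-neighbour sets into one weighted
# graph. The toy views and the counting rule are illustrative only.
import numpy as np
from collections import Counter

def knn_edges(X, k):
    """Directed kNN relations for one feature-based view (Euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return {(i, int(j)) for i in range(len(X)) for j in np.argsort(d[i])[:k]}

def unified_graph(views, k=2):
    counts = Counter()
    for X in views:
        undirected = {tuple(sorted((i, j))) for i, j in knn_edges(X, k)}
        for edge in undirected:
            counts[edge] += 1          # weight = number of views containing the edge
    return counts

rng = np.random.default_rng(3)
views = [rng.normal(size=(6, 4)) for _ in range(3)]   # three views of the same 6 users
for (i, j), w in sorted(unified_graph(views).items()):
    print(f"user {i} -- user {j}: weight {w}")
```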
|
1302.3219 | Chunhua Shen | Chunhua Shen, Junae Kim, Fayao Liu, Lei Wang, Anton van den Hengel | An Efficient Dual Approach to Distance Metric Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with
$D$ the dimension of input data), and can thus only practically solve problems
exhibiting less than a few thousand variables. Since the number of variables is
$D(D+1)/2$, this implies a limit upon the size of problem that can
practically be solved of around a few hundred dimensions. The complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
$O(D^3)$, which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
| [
{
"version": "v1",
"created": "Wed, 13 Feb 2013 08:48:53 GMT"
}
] | 2013-02-15T00:00:00 | [
[
"Shen",
"Chunhua",
""
],
[
"Kim",
"Junae",
""
],
[
"Liu",
"Fayao",
""
],
[
"Wang",
"Lei",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: An Efficient Dual Approach to Distance Metric Learning
ABSTRACT: Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with
$D$ the dimension of input data), and can thus only practically solve problems
exhibiting less than a few thousand variables. Since the number of variables is
$D(D+1)/2$, this implies a limit upon the size of problem that can
practically be solved of around a few hundred dimensions. The complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
$O(D^3)$, which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
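To make the cost argument concrete, the sketch below shows the kind of O(D^3) building block a dual approach can rely on instead of a full SDP solve: projecting a symmetric matrix onto the positive semidefinite cone via one eigendecomposition and then using it as a Mahalanobis metric. This is an illustrative fragment, not the paper's Lagrange-dual solver.

```python
# Sketch of an O(D^3) building block: project a symmetric matrix onto the PSD
# cone by clipping negative eigenvalues, then use it as a Mahalanobis metric.
# Illustrates the cost argument only; not the paper's full dual solver.
import numpy as np

def project_psd(M):
    """Eigendecompose a symmetrized matrix and clip negative eigenvalues."""
    vals, vecs = np.linalg.eigh((M + M.T) / 2)      # O(D^3)
    return (vecs * np.clip(vals, 0, None)) @ vecs.T

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
D = 5
M = project_psd(rng.normal(size=(D, D)))            # a valid (PSD) metric matrix
x, y = rng.normal(size=D), rng.normal(size=D)
print("Mahalanobis distance:", mahalanobis(x, y, M))
print("eigenvalues >= 0:", np.all(np.linalg.eigvalsh(M) >= -1e-9))
```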
|
1302.3123 | Nizar Banu P K | P. K. Nizar Banu, H. Hannah Inbarani | An Analysis of Gene Expression Data using Penalized Fuzzy C-Means
Approach | 14; IJCCI, Vol. 1, Issue 2,(January-July)2011 | null | null | null | cs.CV cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advances of microarray technologies, large amounts of
high-dimensional gene expression data are being generated, which poses
significant computational challenges. A first step towards addressing this
challenge is the use of clustering techniques, which is essential in the data
mining process to reveal natural structures and identify interesting patterns
in the underlying data. A robust gene expression clustering approach to
minimize undesirable clustering is proposed. In this paper, the Penalized
Fuzzy C-Means (PFCM) clustering algorithm is described and compared with the
most representative off-line clustering techniques: K-Means clustering, Rough
K-Means clustering and Fuzzy C-Means clustering. These techniques are
implemented and tested on a Brain Tumor gene expression dataset. Analysis of
the performance of the proposed approach is presented through qualitative
validation experiments. From the experimental results, it can be observed that
the Penalized Fuzzy C-Means algorithm shows much higher usability than the
other projected clustering algorithms used in our comparison study.
Significant and promising clustering results are presented using the Brain
Tumor gene expression dataset. Thus, patterns seen in genome-wide expression
experiments can be interpreted as indications of the status of cellular
processes. In these clustering results, we find that the Penalized Fuzzy
C-Means algorithm provides useful information as an aid to diagnosis in
oncology.
| [
{
"version": "v1",
"created": "Tue, 8 Jan 2013 17:16:39 GMT"
}
] | 2013-02-14T00:00:00 | [
[
"Banu",
"P. K. Nizar",
""
],
[
"Inbarani",
"H. Hannah",
""
]
] | TITLE: An Analysis of Gene Expression Data using Penalized Fuzzy C-Means
Approach
ABSTRACT: With the rapid advances of microarray technologies, large amounts of
high-dimensional gene expression data are being generated, which poses
significant computational challenges. A first step towards addressing this
challenge is the use of clustering techniques, which is essential in the data
mining process to reveal natural structures and identify interesting patterns
in the underlying data. A robust gene expression clustering approach to
minimize undesirable clustering is proposed. In this paper, the Penalized
Fuzzy C-Means (PFCM) clustering algorithm is described and compared with the
most representative off-line clustering techniques: K-Means clustering, Rough
K-Means clustering and Fuzzy C-Means clustering. These techniques are
implemented and tested on a Brain Tumor gene expression dataset. Analysis of
the performance of the proposed approach is presented through qualitative
validation experiments. From the experimental results, it can be observed that
the Penalized Fuzzy C-Means algorithm shows much higher usability than the
other projected clustering algorithms used in our comparison study.
Significant and promising clustering results are presented using the Brain
Tumor gene expression dataset. Thus, patterns seen in genome-wide expression
experiments can be interpreted as indications of the status of cellular
processes. In these clustering results, we find that the Penalized Fuzzy
C-Means algorithm provides useful information as an aid to diagnosis in
oncology.
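For reference, the sketch below implements the standard Fuzzy C-Means membership and centroid updates on toy two-dimensional data; the penalized variant (PFCM) adds a penalty term to this objective, which is not reproduced here, and the gene expression data is replaced by synthetic points.

```python
# Minimal standard fuzzy c-means sketch (membership and centroid updates).
# PFCM adds a penalty term to this objective; the toy 2-D data is illustrative.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships of each point sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]                   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                                # membership update
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
U, centers = fuzzy_c_means(X, c=2)
print("cluster centers:\n", centers)
```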
|
1208.6157 | Atieh Mirshahvalad | Atieh Mirshahvalad, Olivier H. Beauchesne, Eric Archambault, Martin
Rosvall | Resampling effects on significance analysis of network clustering and
ranking | 12 pages, 7 figures | null | 10.1371/journal.pone.0053943 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection helps us simplify the complex configuration of networks,
but communities are reliable only if they are statistically significant. To
detect statistically significant communities, a common approach is to resample
the original network and analyze the communities. But resampling assumes
independence between samples, while the components of a network are inherently
dependent. Therefore, we must understand how breaking dependencies between
resampled components affects the results of the significance analysis. Here we
use scientific communication as a model system to analyze this effect. Our
dataset includes citations among articles published in journals in the years
1984-2010. We compare parametric resampling of citations with non-parametric
article resampling. While citation resampling breaks link dependencies, article
resampling maintains such dependencies. We find that citation resampling
underestimates the variance of link weights. Moreover, this underestimation
explains most of the differences in the significance analysis of ranking and
clustering. Therefore, when only link weights are available and article
resampling is not an option, we suggest a simple parametric resampling scheme
that generates link-weight variances close to the link-weight variances of
article resampling. Nevertheless, when we highlight and summarize important
structural changes in science, the more dependencies we can maintain in the
resampling scheme, the earlier we can predict structural change.
| [
{
"version": "v1",
"created": "Thu, 30 Aug 2012 12:58:10 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Feb 2013 10:11:06 GMT"
}
] | 2013-02-12T00:00:00 | [
[
"Mirshahvalad",
"Atieh",
""
],
[
"Beauchesne",
"Olivier H.",
""
],
[
"Archambault",
"Eric",
""
],
[
"Rosvall",
"Martin",
""
]
] | TITLE: Resampling effects on significance analysis of network clustering and
ranking
ABSTRACT: Community detection helps us simplify the complex configuration of networks,
but communities are reliable only if they are statistically significant. To
detect statistically significant communities, a common approach is to resample
the original network and analyze the communities. But resampling assumes
independence between samples, while the components of a network are inherently
dependent. Therefore, we must understand how breaking dependencies between
resampled components affects the results of the significance analysis. Here we
use scientific communication as a model system to analyze this effect. Our
dataset includes citations among articles published in journals in the years
1984-2010. We compare parametric resampling of citations with non-parametric
article resampling. While citation resampling breaks link dependencies, article
resampling maintains such dependencies. We find that citation resampling
underestimates the variance of link weights. Moreover, this underestimation
explains most of the differences in the significance analysis of ranking and
clustering. Therefore, when only link weights are available and article
resampling is not an option, we suggest a simple parametric resampling scheme
that generates link-weight variances close to the link-weight variances of
article resampling. Nevertheless, when we highlight and summarize important
structural changes in science, the more dependencies we can maintain in the
resampling scheme, the earlier we can predict structural change.
|
1302.2244 | Jiping Xiong | Jiping Xiong, Jian Zhao and Lei Chen | Efficient Data Gathering in Wireless Sensor Networks Based on Matrix
Completion and Compressive Sensing | null | null | null | null | cs.NI cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gathering data in an energy efficient manner in wireless sensor networks is
an important design challenge. In wireless sensor networks, the readings of
sensors always exhibit intra-temporal and inter-spatial correlations.
Therefore, in this letter, we use low rank matrix completion theory to explore
the inter-spatial correlation and use compressive sensing theory to take
advantage of intra-temporal correlation. Our method, dubbed MCCS, can
significantly reduce the amount of data that each sensor must send through
network and to the sink, thus prolonging the lifetime of the whole network.
Experiments using real datasets demonstrate the feasibility and efficacy of our
MCCS method.
| [
{
"version": "v1",
"created": "Sat, 9 Feb 2013 16:34:00 GMT"
}
] | 2013-02-12T00:00:00 | [
[
"Xiong",
"Jiping",
""
],
[
"Zhao",
"Jian",
""
],
[
"Chen",
"Lei",
""
]
] | TITLE: Efficient Data Gathering in Wireless Sensor Networks Based on Matrix
Completion and Compressive Sensing
ABSTRACT: Gathering data in an energy efficient manner in wireless sensor networks is
an important design challenge. In wireless sensor networks, the readings of
sensors always exhibit intra-temporal and inter-spatial correlations.
Therefore, in this letter, we use low rank matrix completion theory to explore
the inter-spatial correlation and use compressive sensing theory to take
advantage of intra-temporal correlation. Our method, dubbed MCCS, can
significantly reduce the amount of data that each sensor must send through
network and to the sink, thus prolonging the lifetime of the whole network.
Experiments using real datasets demonstrate the feasibility and efficacy of our
MCCS method.
|
1302.2436 | Mahmood Ali Mohd | Mohd Mahmood Ali, Mohd S Qaseem, Lakshmi Rajamani, A Govardhan | Extracting useful rules through improved decision tree induction using
information entropy | 15 pages, 7 figures, 4 tables, International Journal of Information
Sciences and Techniques (IJIST) Vol.3, No.1, January 2013 | null | 10.5121/ijist.2013.3103 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification is a widely used technique in the data mining domain, where
scalability and efficiency are the immediate problems for classification
algorithms on large databases. We suggest improvements to the existing C4.5
decision tree algorithm. In this paper, attribute-oriented induction (AOI) and
relevance analysis are incorporated with concept-hierarchy knowledge and the
HeightBalancePriority algorithm for the construction of the decision tree,
along with multi-level mining. Priorities are assigned to attributes by
evaluating information entropy at different levels of abstraction while
building the decision tree with the HeightBalancePriority algorithm. Modified
DMQL queries are used to understand and explore the shortcomings of the
decision trees generated by the C4.5 classifier on an education dataset, and
the results are compared with the proposed approach.
| [
{
"version": "v1",
"created": "Mon, 11 Feb 2013 10:29:17 GMT"
}
] | 2013-02-12T00:00:00 | [
[
"Ali",
"Mohd Mahmood",
""
],
[
"Qaseem",
"Mohd S",
""
],
[
"Rajamani",
"Lakshmi",
""
],
[
"Govardhan",
"A",
""
]
] | TITLE: Extracting useful rules through improved decision tree induction using
information entropy
ABSTRACT: Classification is a widely used technique in the data mining domain, where
scalability and efficiency are the immediate problems for classification
algorithms on large databases. We suggest improvements to the existing C4.5
decision tree algorithm. In this paper, attribute-oriented induction (AOI) and
relevance analysis are incorporated with concept-hierarchy knowledge and the
HeightBalancePriority algorithm for the construction of the decision tree,
along with multi-level mining. Priorities are assigned to attributes by
evaluating information entropy at different levels of abstraction while
building the decision tree with the HeightBalancePriority algorithm. Modified
DMQL queries are used to understand and explore the shortcomings of the
decision trees generated by the C4.5 classifier on an education dataset, and
the results are compared with the proposed approach.
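A minimal sketch of the information-entropy and information-gain computation used to prioritise attributes when growing a decision tree; the toy attribute and class values are illustrative, and AOI and the HeightBalancePriority algorithm themselves are not reproduced.

```python
# Minimal information-entropy / information-gain sketch, the quantity used to
# prioritise attributes when growing a decision tree. Toy data is illustrative.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    n = len(labels)
    split = 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        split += len(subset) / n * entropy(subset)
    return entropy(labels) - split

grade = ["high", "high", "low", "low", "high", "low"]    # candidate attribute
passed = ["yes", "yes", "no", "yes", "yes", "no"]        # class label
print("H(class)          :", round(entropy(passed), 3))
print("gain(class; grade):", round(information_gain(grade, passed), 3))
```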
|
1302.2576 | Oluwasanmi Koyejo | Oluwasanmi Koyejo and Cheng Lee and Joydeep Ghosh | The trace norm constrained matrix-variate Gaussian process for multitask
bipartite ranking | 14 pages, 9 figures, 5 tables | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel hierarchical model for multitask bipartite ranking. The
proposed approach combines a matrix-variate Gaussian process with a generative
model for task-wise bipartite ranking. In addition, we employ a novel trace
constrained variational inference approach to impose low rank structure on the
posterior matrix-variate Gaussian process. The resulting posterior covariance
function is derived in closed form, and the posterior mean function is the
solution to a matrix-variate regression with a novel spectral elastic net
regularizer. Further, we show that variational inference for the trace
constrained matrix-variate Gaussian process combined with maximum likelihood
parameter estimation for the bipartite ranking model is jointly convex. Our
motivating application is the prioritization of candidate disease genes. The
goal of this task is to aid the identification of unobserved associations
between human genes and diseases using a small set of observed associations as
well as kernels induced by gene-gene interaction networks and disease
ontologies. Our experimental results illustrate the performance of the proposed
model on real world datasets. Moreover, we find that the resulting low rank
solution improves the computational scalability of training and testing as
compared to baseline models.
| [
{
"version": "v1",
"created": "Mon, 11 Feb 2013 19:16:25 GMT"
}
] | 2013-02-12T00:00:00 | [
[
"Koyejo",
"Oluwasanmi",
""
],
[
"Lee",
"Cheng",
""
],
[
"Ghosh",
"Joydeep",
""
]
] | TITLE: The trace norm constrained matrix-variate Gaussian process for multitask
bipartite ranking
ABSTRACT: We propose a novel hierarchical model for multitask bipartite ranking. The
proposed approach combines a matrix-variate Gaussian process with a generative
model for task-wise bipartite ranking. In addition, we employ a novel trace
constrained variational inference approach to impose low rank structure on the
posterior matrix-variate Gaussian process. The resulting posterior covariance
function is derived in closed form, and the posterior mean function is the
solution to a matrix-variate regression with a novel spectral elastic net
regularizer. Further, we show that variational inference for the trace
constrained matrix-variate Gaussian process combined with maximum likelihood
parameter estimation for the bipartite ranking model is jointly convex. Our
motivating application is the prioritization of candidate disease genes. The
goal of this task is to aid the identification of unobserved associations
between human genes and diseases using a small set of observed associations as
well as kernels induced by gene-gene interaction networks and disease
ontologies. Our experimental results illustrate the performance of the proposed
model on real world datasets. Moreover, we find that the resulting low rank
solution improves the computational scalability of training and testing as
compared to baseline models.
|
1302.1529 | TongSheng Chu | TongSheng Chu, Yang Xiang | Exploring Parallelism in Learning Belief Networks | Appears in Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence (UAI1997) | null | null | UAI-P-1997-PG-90-98 | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been shown that a class of probabilistic domain models cannot be
learned correctly by several existing algorithms which employ a single-link
look ahead search. When a multi-link look ahead search is used, the
computational complexity of the learning algorithm increases. We study how to
use parallelism to tackle the increased complexity in learning such models and
to speed up learning in large domains. An algorithm is proposed to decompose
the learning task for parallel processing. A further task decomposition is used
to balance load among processors and to increase the speed-up and efficiency.
For learning from very large datasets, we present a regrouping of the available
processors such that slow data access through files can be replaced by fast
memory access. Our implementation in a parallel computer demonstrates the
effectiveness of the algorithm.
| [
{
"version": "v1",
"created": "Wed, 6 Feb 2013 15:54:31 GMT"
}
] | 2013-02-08T00:00:00 | [
[
"Chu",
"TongSheng",
""
],
[
"Xiang",
"Yang",
""
]
] | TITLE: Exploring Parallelism in Learning Belief Networks
ABSTRACT: It has been shown that a class of probabilistic domain models cannot be
learned correctly by several existing algorithms which employ a single-link
look ahead search. When a multi-link look ahead search is used, the
computational complexity of the learning algorithm increases. We study how to
use parallelism to tackle the increased complexity in learning such models and
to speed up learning in large domains. An algorithm is proposed to decompose
the learning task for parallel processing. A further task decomposition is used
to balance load among processors and to increase the speed-up and efficiency.
For learning from very large datasets, we present a regrouping of the available
processors such that slow data access through files can be replaced by fast
memory access. Our implementation in a parallel computer demonstrates the
effectiveness of the algorithm.
|
1109.4920 | Reza Farrahi Moghaddam | Reza Farrahi Moghaddam and Mohamed Cheriet | Beyond pixels and regions: A non local patch means (NLPM) method for
content-level restoration, enhancement, and reconstruction of degraded
document images | This paper has been withdrawn by the author to avoid duplication on
the DBLP bibliography | Pattern Recognition 44 (2011) 363-374 | 10.1016/j.patcog.2010.07.027 | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A patch-based non-local restoration and reconstruction method for
preprocessing degraded document images is introduced. The method collects
relative data from the whole input image, while the image data are first
represented by a content-level descriptor based on patches. This
patch-equivalent representation of the input image is then corrected based on
similar patches identified using a modified genetic algorithm (GA) resulting in
a low computational load. The corrected patch-equivalent is then converted to
the output restored image. The fact that the method uses the patches at the
content level allows it to incorporate high-level restoration in an objective
and self-sufficient way. The method has been applied to several degraded
document images, including the DIBCO'09 contest dataset with promising results.
| [
{
"version": "v1",
"created": "Thu, 22 Sep 2011 19:24:58 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2011 16:46:52 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2011 22:33:13 GMT"
}
] | 2013-02-07T00:00:00 | [
[
"Moghaddam",
"Reza Farrahi",
""
],
[
"Cheriet",
"Mohamed",
""
]
] | TITLE: Beyond pixels and regions: A non local patch means (NLPM) method for
content-level restoration, enhancement, and reconstruction of degraded
document images
ABSTRACT: A patch-based non-local restoration and reconstruction method for
preprocessing degraded document images is introduced. The method collects
relative data from the whole input image, while the image data are first
represented by a content-level descriptor based on patches. This
patch-equivalent representation of the input image is then corrected based on
similar patches identified using a modified genetic algorithm (GA) resulting in
a low computational load. The corrected patch-equivalent is then converted to
the output restored image. The fact that the method uses the patches at the
content level allows it to incorporate high-level restoration in an objective
and self-sufficient way. The method has been applied to several degraded
document images, including the DIBCO'09 contest dataset with promising results.
|
1302.1007 | Firas Ajil Jassim | Firas Ajil Jassim | Image Denoising Using Interquartile Range Filter with Local Averaging | 5 pages, 8 figures, 2 tables | International Journal of Soft Computing and Engineering (IJSCE)
ISSN: 2231-2307, Volume-2, Issue-6, January 2013 | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Image denoising is one of the fundamental problems in image processing. In
this paper, a novel approach to suppressing noise in an image is presented,
based on the interquartile range (IQR), one of the statistical methods used to
detect outliers in a dataset. A window of size k x k is used to support the
IQR filter. Each pixel outside the IQR range of the k x k window is treated as
a noisy pixel, and the noisy pixels are then estimated by local averaging. The
essential advantage of the IQR filter is that it better preserves the edge
sharpness of the original image. A variety of test images have been used to
evaluate the proposed filter, and the PSNR was calculated and compared with
that of the median filter. The experimental results on standard test images
demonstrate that this filter is simpler than, and performs better than, the
median filter.
| [
{
"version": "v1",
"created": "Tue, 5 Feb 2013 12:02:53 GMT"
}
] | 2013-02-06T00:00:00 | [
[
"Jassim",
"Firas Ajil",
""
]
] | TITLE: Image Denoising Using Interquartile Range Filter with Local Averaging
ABSTRACT: Image denoising is one of the fundamental problems in image processing. In
this paper, a novel approach to suppressing noise in an image is presented,
based on the interquartile range (IQR), one of the statistical methods used to
detect outliers in a dataset. A window of size k x k is used to support the
IQR filter. Each pixel outside the IQR range of the k x k window is treated as
a noisy pixel, and the noisy pixels are then estimated by local averaging. The
essential advantage of the IQR filter is that it better preserves the edge
sharpness of the original image. A variety of test images have been used to
evaluate the proposed filter, and the PSNR was calculated and compared with
that of the median filter. The experimental results on standard test images
demonstrate that this filter is simpler than, and performs better than, the
median filter.
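A minimal sketch of the filter as described: for each pixel, compute the IQR of its k x k neighbourhood, flag the pixel as noisy if it falls outside the IQR fences, and replace it by the local average. Boundary handling (reflection padding) and the 1.5 fence multiplier are assumptions of this sketch.

```python
# Minimal IQR-filter sketch: a pixel is flagged as noisy when it lies outside
# the IQR fences of its k x k neighbourhood and is replaced by the local mean.
# Reflection padding and the 1.5 fence multiplier are assumptions of the sketch.
import numpy as np

def iqr_filter(image, k=3, fence=1.5):
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = image.astype(float).copy()
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + k, j:j + k]
            q1, q3 = np.percentile(window, [25, 75])
            iqr = q3 - q1
            if not (q1 - fence * iqr <= image[i, j] <= q3 + fence * iqr):
                out[i, j] = window.mean()            # replace noisy pixel by local average
    return out

rng = np.random.default_rng(0)
img = np.full((8, 8), 100.0)
img[rng.random((8, 8)) < 0.1] = 255.0                # sprinkle impulse noise
print(np.abs(iqr_filter(img) - 100.0).max())         # residual error after filtering
```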
|
1206.1270 | Benjamin Recht | Victor Bittorf and Benjamin Recht and Christopher Re and Joel A. Tropp | Factoring nonnegative matrices with linear programs | 17 pages, 10 figures. Modified theorem statement for robust recovery
conditions. Revised proof techniques to make arguments more elementary.
Results on robustness when rows are duplicated have been superseded by
arxiv.org/1211.6687 | null | null | null | math.OC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a new approach, based on linear programming, for
computing nonnegative matrix factorizations (NMFs). The key idea is a
data-driven model for the factorization where the most salient features in the
data are used to express the remaining features. More precisely, given a data
matrix X, the algorithm identifies a matrix C such that X approximately equals
CX while satisfying some linear constraints. The constraints are chosen to ensure that the
matrix C selects features; these features can then be used to find a low-rank
NMF of X. A theoretical analysis demonstrates that this approach has guarantees
similar to those of the recent NMF algorithm of Arora et al. (2012). In
contrast with this earlier work, the proposed method extends to more general
noise models and leads to efficient, scalable algorithms. Experiments with
synthetic and real datasets provide evidence that the new approach is also
superior in practice. An optimized C++ implementation can factor a
multigigabyte matrix in a matter of minutes.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2012 16:42:27 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Feb 2013 23:40:56 GMT"
}
] | 2013-02-05T00:00:00 | [
[
"Bittorf",
"Victor",
""
],
[
"Recht",
"Benjamin",
""
],
[
"Re",
"Christopher",
""
],
[
"Tropp",
"Joel A.",
""
]
] | TITLE: Factoring nonnegative matrices with linear programs
ABSTRACT: This paper describes a new approach, based on linear programming, for
computing nonnegative matrix factorizations (NMFs). The key idea is a
data-driven model for the factorization where the most salient features in the
data are used to express the remaining features. More precisely, given a data
matrix X, the algorithm identifies a matrix C such that X approximately equals
CX while satisfying some linear constraints. The constraints are chosen to ensure that the
matrix C selects features; these features can then be used to find a low-rank
NMF of X. A theoretical analysis demonstrates that this approach has guarantees
similar to those of the recent NMF algorithm of Arora et al. (2012). In
contrast with this earlier work, the proposed method extends to more general
noise models and leads to efficient, scalable algorithms. Experiments with
synthetic and real datasets provide evidence that the new approach is also
superior in practice. An optimized C++ implementation can factor a
multigigabyte matrix in a matter of minutes.
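A minimal sketch of the underlying idea that a few selected rows of X can express the remaining rows with nonnegative coefficients (X approximately equals CX with C combining selected rows). Here the selected rows are assumed known and the coefficients are fitted row-by-row with nonnegative least squares; the paper's contribution is to find the selection itself with a linear program, which is not reproduced.

```python
# Sketch: a few selected ("salient") rows of X express the remaining rows with
# nonnegative coefficients. The selection is assumed given here; the paper finds
# it via a linear program.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
basis = rng.random((3, 20))                   # three "salient" feature rows
mix = rng.random((5, 3))                      # remaining rows are nonnegative mixtures of them
X = np.vstack([basis, mix @ basis])           # (8, 20) data matrix

anchors = [0, 1, 2]                           # assume the salient rows are already known
Xa = X[anchors]                               # basis rows used to express everything else
coeffs = np.array([nnls(Xa.T, X[i])[0] for i in range(X.shape[0])])   # one NNLS per row

reconstruction = coeffs @ Xa
print("relative reconstruction error:",
      np.linalg.norm(X - reconstruction) / np.linalg.norm(X))
```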
|
1302.0413 | Catarina Moreira | Catarina Moreira and P\'avel Calado and Bruno Martins | Learning to Rank for Expert Search in Digital Libraries of Academic
Publications | null | Progress in Artificial Intelligence, Lecture Notes in Computer
Science, Springer Berlin Heidelberg. In Proceedings of the 15th Portuguese
Conference on Artificial Intelligence, 2011 | null | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of expert finding has been getting increasing attention in
information retrieval literature. However, the current state-of-the-art is
still lacking in principled approaches for combining different sources of
evidence in an optimal way. This paper explores the usage of learning to rank
methods as a principled approach for combining multiple estimators of
expertise, derived from the textual contents, from the graph-structure with the
citation patterns for the community of experts, and from profile information
about the experts. Experiments made over a dataset of academic publications,
for the area of Computer Science, attest to the adequacy of the proposed
approaches.
| [
{
"version": "v1",
"created": "Sat, 2 Feb 2013 18:36:08 GMT"
}
] | 2013-02-05T00:00:00 | [
[
"Moreira",
"Catarina",
""
],
[
"Calado",
"Pável",
""
],
[
"Martins",
"Bruno",
""
]
] | TITLE: Learning to Rank for Expert Search in Digital Libraries of Academic
Publications
ABSTRACT: The task of expert finding has been getting increasing attention in
information retrieval literature. However, the current state-of-the-art is
still lacking in principled approaches for combining different sources of
evidence in an optimal way. This paper explores the usage of learning to rank
methods as a principled approach for combining multiple estimators of
expertise, derived from the textual contents, from the graph-structure with the
citation patterns for the community of experts, and from profile information
about the experts. Experiments made over a dataset of academic publications,
for the area of Computer Science, attest to the adequacy of the proposed
approaches.
|
1302.0540 | Harris Georgiou | Harris V. Georgiou, Michael E. Mavroforakis | A game-theoretic framework for classifier ensembles using weighted
majority voting with local accuracy estimates | 21 pages, 9 tables, 1 figure, 68 references | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper, a novel approach for the optimal combination of binary
classifiers is proposed. The classifier combination problem is approached from
a Game Theory perspective. The proposed framework of adapted weighted majority
rules (WMR) is tested against common rank-based, Bayesian and simple majority
models, as well as two soft-output averaging rules. Experiments with ensembles
of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and
weighted k-nearest-neighbor (w/k-NN) models on benchmark datasets indicate that
this new adaptive WMR model, employing local accuracy estimators and the
analytically computed optimal weights outperform all the other simple
combination rules.
| [
{
"version": "v1",
"created": "Sun, 3 Feb 2013 22:12:52 GMT"
}
] | 2013-02-05T00:00:00 | [
[
"Georgiou",
"Harris V.",
""
],
[
"Mavroforakis",
"Michael E.",
""
]
] | TITLE: A game-theoretic framework for classifier ensembles using weighted
majority voting with local accuracy estimates
ABSTRACT: In this paper, a novel approach for the optimal combination of binary
classifiers is proposed. The classifier combination problem is approached from
a Game Theory perspective. The proposed framework of adapted weighted majority
rules (WMR) is tested against common rank-based, Bayesian and simple majority
models, as well as two soft-output averaging rules. Experiments with ensembles
of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and
weighted k-nearest-neighbor (w/k-NN) models on benchmark datasets indicate that
this new adaptive WMR model, employing local accuracy estimators and the
analytically computed optimal weights, outperforms all the other simple
combination rules.
|
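The record above evaluates weighted majority rules whose weights come from local accuracy estimates. The sketch below illustrates only that ingredient: it weights each ensemble member's vote by its accuracy on the k training points nearest to the test point. The paper's analytically optimal, game-theoretic weights are not reproduced, and the dataset, the value of k, and the choice of base classifiers are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Binary toy problem; labels in {0, 1}.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An ensemble in the spirit of the record above (SVM, tree, k-NN).
ensemble = [SVC(), DecisionTreeClassifier(max_depth=5), KNeighborsClassifier()]
for clf in ensemble:
    clf.fit(X_tr, y_tr)

# Local accuracy estimate: each member's accuracy on the 15 training points
# nearest to the test point (a simple stand-in for the paper's weights).
nn = NearestNeighbors(n_neighbors=15).fit(X_tr)
_, neigh_idx = nn.kneighbors(X_te)

preds = np.array([clf.predict(X_te) for clf in ensemble])             # (m, n_test)
neigh_preds = np.array([clf.predict(X_tr)[neigh_idx] for clf in ensemble])
local_acc = (neigh_preds == y_tr[neigh_idx]).mean(axis=2)             # (m, n_test)

# Weighted majority vote: compare total weight for class 1 vs class 0.
votes_for_1 = (local_acc * preds).sum(axis=0)
votes_for_0 = (local_acc * (1 - preds)).sum(axis=0)
y_hat = (votes_for_1 > votes_for_0).astype(int)
print("weighted-majority accuracy:", (y_hat == y_te).mean())
```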
1302.0739 | Conrad Lee | Conrad Lee, P\'adraig Cunningham | Benchmarking community detection methods on social media data | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Benchmarking the performance of community detection methods on empirical
social network data has been identified as critical for improving these
methods. In particular, while most current research focuses on detecting
communities in data that has been digitally extracted from large social media
and telecommunications services, most evaluation of this research is based on
small, hand-curated datasets. We argue that these two types of networks differ
so significantly that by evaluating algorithms solely on the former, we know
little about how well they perform on the latter. To address this problem, we
consider the difficulties that arise in constructing benchmarks based on
digitally extracted network data, and propose a task-based strategy which we
feel addresses these difficulties. To demonstrate that our scheme is effective,
we use it to carry out a substantial benchmark based on Facebook data. The
benchmark reveals that some of the most popular algorithms fail to detect
fine-grained community structure.
| [
{
"version": "v1",
"created": "Mon, 4 Feb 2013 16:12:22 GMT"
}
] | 2013-02-05T00:00:00 | [
[
"Lee",
"Conrad",
""
],
[
"Cunningham",
"Pádraig",
""
]
] | TITLE: Benchmarking community detection methods on social media data
ABSTRACT: Benchmarking the performance of community detection methods on empirical
social network data has been identified as critical for improving these
methods. In particular, while most current research focuses on detecting
communities in data that has been digitally extracted from large social media
and telecommunications services, most evaluation of this research is based on
small, hand-curated datasets. We argue that these two types of networks differ
so significantly that by evaluating algorithms solely on the former, we know
little about how well they perform on the latter. To address this problem, we
consider the difficulties that arise in constructing benchmarks based on
digitally extracted network data, and propose a task-based strategy which we
feel addresses these difficulties. To demonstrate that our scheme is effective,
we use it to carry out a substantial benchmark based on Facebook data. The
benchmark reveals that some of the most popular algorithms fail to detect
fine-grained community structure.
|
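The record above argues for task-based benchmarks of community detection on digitally extracted network data. The snippet below sketches one such task under stated assumptions: hold out a fraction of edges, detect communities on the remaining graph, and check how often "same community" separates the held-out edges from sampled non-edges. The graph, the hold-out fraction, and the choice of modularity-based detection are illustrative, not the benchmark used in the paper.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(0)
G = nx.karate_club_graph()      # small stand-in for a Facebook-style graph

# Hold out roughly 10% of edges as positives, skipping removals that would
# isolate a node, and sample an equal number of non-edges as negatives.
G_train = G.copy()
edges = list(G.edges())
random.shuffle(edges)
held_out = []
for u, v in edges:
    if len(held_out) >= len(edges) // 10:
        break
    if G_train.degree(u) > 1 and G_train.degree(v) > 1:
        G_train.remove_edge(u, v)
        held_out.append((u, v))
non_edges = random.sample(list(nx.non_edges(G)), len(held_out))

# Detect communities on the training graph only.
communities = greedy_modularity_communities(G_train)
label = {node: i for i, com in enumerate(communities) for node in com}

def same_community(u, v):
    return label[u] == label[v]

hits = sum(same_community(u, v) for u, v in held_out)
false_hits = sum(same_community(u, v) for u, v in non_edges)
print(f"held-out edges inside one community: {hits}/{len(held_out)}")
print(f"sampled non-edges inside one community: {false_hits}/{len(non_edges)}")
```

A method that places many held-out edges, and few non-edges, inside a single community is doing useful work on this task even if it disagrees with a hand-curated ground truth.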
1208.4586 | Jeremiah Blocki | Jeremiah Blocki, Avrim Blum, Anupam Datta and Or Sheffet | Differentially Private Data Analysis of Social Networks via Restricted
Sensitivity | null | null | null | null | cs.CR cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the notion of restricted sensitivity as an alternative to global
and smooth sensitivity to improve accuracy in differentially private data
analysis. The definition of restricted sensitivity is similar to that of global
sensitivity except that instead of quantifying over all possible datasets, we
take advantage of any beliefs about the dataset that a querier may have, to
quantify over a restricted class of datasets. Specifically, given a query f and
a hypothesis H about the structure of a dataset D, we show generically how to
transform f into a new query f_H whose global sensitivity (over all datasets
including those that do not satisfy H) matches the restricted sensitivity of
the query f. Moreover, if the belief of the querier is correct (i.e., D is in
H) then f_H(D) = f(D). If the belief is incorrect, then f_H(D) may be
inaccurate.
We demonstrate the usefulness of this notion by considering the task of
answering queries regarding social networks, which we model as a combination of
a graph and a labeling of its vertices. In particular, while our generic
procedure is computationally inefficient, for the specific definition of H as
graphs of bounded degree, we exhibit efficient ways of constructing f_H using
different projection-based techniques. We then analyze two important query
classes: subgraph counting queries (e.g., number of triangles) and local
profile queries (e.g., number of people who know a spy and a computer scientist
who know each other). We demonstrate that the restricted sensitivity of such
queries can be significantly lower than their smooth sensitivity. Thus, using
restricted sensitivity we can maintain privacy whether or not D is in H, while
providing more accurate results in the event that H holds true.
| [
{
"version": "v1",
"created": "Wed, 22 Aug 2012 19:31:05 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Feb 2013 20:33:05 GMT"
}
] | 2013-02-04T00:00:00 | [
[
"Blocki",
"Jeremiah",
""
],
[
"Blum",
"Avrim",
""
],
[
"Datta",
"Anupam",
""
],
[
"Sheffet",
"Or",
""
]
] | TITLE: Differentially Private Data Analysis of Social Networks via Restricted
Sensitivity
ABSTRACT: We introduce the notion of restricted sensitivity as an alternative to global
and smooth sensitivity to improve accuracy in differentially private data
analysis. The definition of restricted sensitivity is similar to that of global
sensitivity except that instead of quantifying over all possible datasets, we
take advantage of any beliefs about the dataset that a querier may have, to
quantify over a restricted class of datasets. Specifically, given a query f and
a hypothesis H about the structure of a dataset D, we show generically how to
transform f into a new query f_H whose global sensitivity (over all datasets
including those that do not satisfy H) matches the restricted sensitivity of
the query f. Moreover, if the belief of the querier is correct (i.e., D is in
H) then f_H(D) = f(D). If the belief is incorrect, then f_H(D) may be
inaccurate.
We demonstrate the usefulness of this notion by considering the task of
answering queries regarding social networks, which we model as a combination of
a graph and a labeling of its vertices. In particular, while our generic
procedure is computationally inefficient, for the specific definition of H as
graphs of bounded degree, we exhibit efficient ways of constructing f_H using
different projection-based techniques. We then analyze two important query
classes: subgraph counting queries (e.g., number of triangles) and local
profile queries (e.g., number of people who know a spy and a computer scientist
who know each other). We demonstrate that the restricted sensitivity of such
queries can be significantly lower than their smooth sensitivity. Thus, using
restricted sensitivity we can maintain privacy whether or not D is in H, while
providing more accurate results in the event that H holds true.
|
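The record above answers graph queries with noise calibrated to restricted sensitivity, using projections onto bounded-degree graphs. The sketch below shows only the general shape of such a pipeline, not the paper's constructions: a naive projection that truncates each node to at most D neighbours, followed by a Laplace-noised triangle count calibrated to a crude sensitivity bound of D, which is assumed (not derived here) to hold on degree-bounded graphs under edge-level adjacency. The input graph, D, and epsilon are illustrative choices.

```python
import numpy as np
import networkx as nx

def project_to_bounded_degree(G, D):
    """Naive projection: keep at most D neighbours per node (lowest ids first).

    The paper studies more careful projection-based constructions; this
    truncation is only an illustrative stand-in.
    """
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u in G.nodes():
        for v in sorted(G.neighbors(u))[:D]:
            if H.degree(u) < D and H.degree(v) < D:
                H.add_edge(u, v)
    return H

def noisy_triangle_count(G, D, epsilon, seed=0):
    """Release a triangle count with Laplace noise.

    Assumption: on graphs with maximum degree <= D, adding or removing one
    edge changes the triangle count by at most D, so the noise scale is
    D / epsilon (a crude bound; the restricted-sensitivity analysis in the
    paper is sharper).
    """
    rng = np.random.default_rng(seed)
    triangles = sum(nx.triangles(G).values()) // 3
    return triangles + rng.laplace(scale=D / epsilon)

G = nx.erdos_renyi_graph(200, 0.05, seed=1)   # illustrative input graph
D = 15                                        # assumed degree bound (hypothesis H)
G_proj = project_to_bounded_degree(G, D)
print("noisy triangle count:", noisy_triangle_count(G_proj, D, epsilon=1.0))
```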