id stringlengths 9-16 | submitter stringlengths 3-64 ⌀ | authors stringlengths 5-6.63k | title stringlengths 7-245 | comments stringlengths 1-482 ⌀ | journal-ref stringlengths 4-382 ⌀ | doi stringlengths 9-151 ⌀ | report-no stringclasses 984 values | categories stringlengths 5-108 | license stringclasses 9 values | abstract stringlengths 83-3.41k | versions listlengths 1-20 | update_date timestamp[s] 2007-05-23 00:00:00 to 2025-04-11 00:00:00 | authors_parsed sequencelengths 1-427 | prompt stringlengths 166-3.49k | label stringclasses 2 values | prob float64 0.5-0.98 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1312.4740 | Yalong Bai | Yalong Bai, Kuiyuan Yang, Wei Yu, Wei-Ying Ma, Tiejun Zhao | Learning High-level Image Representation for Image Retrieval via
Multi-Task DNN using Clickthrough Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image retrieval refers to finding relevant images from an image database for
a query, which is considered difficult for the gap between low-level
representation of images and high-level representation of queries. Recently
further developed Deep Neural Network sheds light on automatically learning
high-level image representation from raw pixels. In this paper, we proposed a
multi-task DNN learned for image retrieval, which contains two parts, i.e.,
query-sharing layers for image representation computation and query-specific
layers for relevance estimation. The weights of multi-task DNN are learned on
clickthrough data by Ring Training. Experimental results on both simulated and
real dataset show the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2013 12:11:04 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Dec 2013 00:47:19 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Bai",
"Yalong",
""
],
[
"Yang",
"Kuiyuan",
""
],
[
"Yu",
"Wei",
""
],
[
"Ma",
"Wei-Ying",
""
],
[
"Zhao",
"Tiejun",
""
]
] | TITLE: Learning High-level Image Representation for Image Retrieval via
Multi-Task DNN using Clickthrough Data
ABSTRACT: Image retrieval refers to finding relevant images from an image database for
a query, which is considered difficult for the gap between low-level
representation of images and high-level representation of queries. Recently
further developed Deep Neural Network sheds light on automatically learning
high-level image representation from raw pixels. In this paper, we proposed a
multi-task DNN learned for image retrieval, which contains two parts, i.e.,
query-sharing layers for image representation computation and query-specific
layers for relevance estimation. The weights of multi-task DNN are learned on
clickthrough data by Ring Training. Experimental results on both simulated and
real dataset show the effectiveness of the proposed method.
| no_new_dataset | 0.94428 |
1312.6122 | James Bagrow | James P. Bagrow, Suma Desu, Morgan R. Frank, Narine Manukyan, Lewis
Mitchell, Andrew Reagan, Eric E. Bloedorn, Lashon B. Booker, Luther K.
Branting, Michael J. Smith, Brian F. Tivnan, Christopher M. Danforth, Peter
S. Dodds, Joshua C. Bongard | Shadow networks: Discovering hidden nodes with models of information
flow | 12 pages, 3 figures | null | null | null | physics.soc-ph cond-mat.dis-nn cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex, dynamic networks underlie many systems, and understanding these
networks is the concern of a great span of important scientific and engineering
problems. Quantitative description is crucial for this understanding yet, due
to a range of measurement problems, many real network datasets are incomplete.
Here we explore how accidentally missing or deliberately hidden nodes may be
detected in networks by the effect of their absence on predictions of the speed
with which information flows through the network. We use Symbolic Regression
(SR) to learn models relating information flow to network topology. These
models show localized, systematic, and non-random discrepancies when applied to
test networks with intentionally masked nodes, demonstrating the ability to
detect the presence of missing nodes and where in the network those nodes are
likely to reside.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:00:01 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Bagrow",
"James P.",
""
],
[
"Desu",
"Suma",
""
],
[
"Frank",
"Morgan R.",
""
],
[
"Manukyan",
"Narine",
""
],
[
"Mitchell",
"Lewis",
""
],
[
"Reagan",
"Andrew",
""
],
[
"Bloedorn",
"Eric E.",
""
],
[
"Booker",
"Lashon B.",
""
],
[
"Branting",
"Luther K.",
""
],
[
"Smith",
"Michael J.",
""
],
[
"Tivnan",
"Brian F.",
""
],
[
"Danforth",
"Christopher M.",
""
],
[
"Dodds",
"Peter S.",
""
],
[
"Bongard",
"Joshua C.",
""
]
] | TITLE: Shadow networks: Discovering hidden nodes with models of information
flow
ABSTRACT: Complex, dynamic networks underlie many systems, and understanding these
networks is the concern of a great span of important scientific and engineering
problems. Quantitative description is crucial for this understanding yet, due
to a range of measurement problems, many real network datasets are incomplete.
Here we explore how accidentally missing or deliberately hidden nodes may be
detected in networks by the effect of their absence on predictions of the speed
with which information flows through the network. We use Symbolic Regression
(SR) to learn models relating information flow to network topology. These
models show localized, systematic, and non-random discrepancies when applied to
test networks with intentionally masked nodes, demonstrating the ability to
detect the presence of missing nodes and where in the network those nodes are
likely to reside.
| no_new_dataset | 0.947137 |
1312.6180 | Weifeng Liu | W. Liu, H. Liu, D.Tao, Y. Wang, K. Lu | Manifold regularized kernel logistic regression for web image annotation | submitted to Neurocomputing | null | null | null | cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advance of Internet technology and smart devices, users often
need to manage large amounts of multimedia information using smart devices,
such as personal image and video accessing and browsing. These requirements
heavily rely on the success of image (video) annotation, and thus large scale
image annotation through innovative machine learning methods has attracted
intensive attention in recent years. One representative work is support vector
machine (SVM). Although it works well in binary classification, SVM has a
non-smooth loss function and can not naturally cover multi-class case. In this
paper, we propose manifold regularized kernel logistic regression (KLR) for web
image annotation. Compared to SVM, KLR has the following advantages: (1) the
KLR has a smooth loss function; (2) the KLR produces an explicit estimate of
the probability instead of class label; and (3) the KLR can naturally be
generalized to the multi-class case. We carefully conduct experiments on MIR
FLICKR dataset and demonstrate the effectiveness of manifold regularized kernel
logistic regression for image annotation.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2013 00:32:24 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Liu",
"W.",
""
],
[
"Liu",
"H.",
""
],
[
"Tao",
"D.",
""
],
[
"Wang",
"Y.",
""
],
[
"Lu",
"K.",
""
]
] | TITLE: Manifold regularized kernel logistic regression for web image annotation
ABSTRACT: With the rapid advance of Internet technology and smart devices, users often
need to manage large amounts of multimedia information using smart devices,
such as personal image and video accessing and browsing. These requirements
heavily rely on the success of image (video) annotation, and thus large scale
image annotation through innovative machine learning methods has attracted
intensive attention in recent years. One representative work is support vector
machine (SVM). Although it works well in binary classification, SVM has a
non-smooth loss function and can not naturally cover multi-class case. In this
paper, we propose manifold regularized kernel logistic regression (KLR) for web
image annotation. Compared to SVM, KLR has the following advantages: (1) the
KLR has a smooth loss function; (2) the KLR produces an explicit estimate of
the probability instead of class label; and (3) the KLR can naturally be
generalized to the multi-class case. We carefully conduct experiments on MIR
FLICKR dataset and demonstrate the effectiveness of manifold regularized kernel
logistic regression for image annotation.
| no_new_dataset | 0.951278 |
1312.6182 | Weifeng Liu | W. Liu, H. Zhang, D. Tao, Y. Wang, K. Lu | Large-Scale Paralleled Sparse Principal Component Analysis | submitted to Multimedia Tools and Applications | null | null | null | cs.MS cs.LG cs.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal component analysis (PCA) is a statistical technique commonly used
in multivariate data analysis. However, PCA can be difficult to interpret and
explain since the principal components (PCs) are linear combinations of the
original variables. Sparse PCA (SPCA) aims to balance statistical fidelity and
interpretability by approximating sparse PCs whose projections capture the
maximal variance of original data. In this paper we present an efficient and
paralleled method of SPCA using graphics processing units (GPUs), which can
process large blocks of data in parallel. Specifically, we construct parallel
implementations of the four optimization formulations of the generalized power
method of SPCA (GP-SPCA), one of the most efficient and effective SPCA
approaches, on a GPU. The parallel GPU implementation of GP-SPCA (using CUBLAS)
is up to eleven times faster than the corresponding CPU implementation (using
CBLAS), and up to 107 times faster than a MatLab implementation. Extensive
comparative experiments in several real-world datasets confirm that SPCA offers
a practical advantage.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2013 00:38:02 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Liu",
"W.",
""
],
[
"Zhang",
"H.",
""
],
[
"Tao",
"D.",
""
],
[
"Wang",
"Y.",
""
],
[
"Lu",
"K.",
""
]
] | TITLE: Large-Scale Paralleled Sparse Principal Component Analysis
ABSTRACT: Principal component analysis (PCA) is a statistical technique commonly used
in multivariate data analysis. However, PCA can be difficult to interpret and
explain since the principal components (PCs) are linear combinations of the
original variables. Sparse PCA (SPCA) aims to balance statistical fidelity and
interpretability by approximating sparse PCs whose projections capture the
maximal variance of original data. In this paper we present an efficient and
paralleled method of SPCA using graphics processing units (GPUs), which can
process large blocks of data in parallel. Specifically, we construct parallel
implementations of the four optimization formulations of the generalized power
method of SPCA (GP-SPCA), one of the most efficient and effective SPCA
approaches, on a GPU. The parallel GPU implementation of GP-SPCA (using CUBLAS)
is up to eleven times faster than the corresponding CPU implementation (using
CBLAS), and up to 107 times faster than a MatLab implementation. Extensive
comparative experiments in several real-world datasets confirm that SPCA offers
a practical advantage.
| no_new_dataset | 0.946547 |
1312.6200 | David A. Brown | John A. Hirdt and David A. Brown | Data mining the EXFOR database using network theory | 20 pages, 8 figures, 12 tables. Submitted to Physical Review X | null | null | BNL-103517-2013-JA | nucl-th physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The EXFOR database contains the largest collection of experimental nuclear
reaction data available as well as the data's bibliographic information and
experimental details. We created an undirected graph from the EXFOR datasets
with graph nodes representing single observables and graph links representing
the various types of connections between these observables. This graph is an
abstract representation of the connections in EXFOR, similar to graphs of
social networks, authorship networks, etc. By analyzing this abstract graph, we
are able to address very specific questions such as 1) what observables are
being used as reference measurements by the experimental nuclear science
community? 2) are these observables given the attention needed by various
nuclear data evaluation projects? 3) are there classes of observables that are
not connected to these reference measurements? In addressing these questions,
we propose several (mostly cross section) observables that should be evaluated
and made into reaction reference standards.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2013 03:54:00 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Hirdt",
"John A.",
""
],
[
"Brown",
"David A.",
""
]
] | TITLE: Data mining the EXFOR database using network theory
ABSTRACT: The EXFOR database contains the largest collection of experimental nuclear
reaction data available as well as the data's bibliographic information and
experimental details. We created an undirected graph from the EXFOR datasets
with graph nodes representing single observables and graph links representing
the various types of connections between these observables. This graph is an
abstract representation of the connections in EXFOR, similar to graphs of
social networks, authorship networks, etc. By analyzing this abstract graph, we
are able to address very specific questions such as 1) what observables are
being used as reference measurements by the experimental nuclear science
community? 2) are these observables given the attention needed by various
nuclear data evaluation projects? 3) are there classes of observables that are
not connected to these reference measurements? In addressing these questions,
we propose several (mostly cross section) observables that should be evaluated
and made into reaction reference standards.
| no_new_dataset | 0.946151 |
1312.6335 | Sen Pei | Sen Pei, Hernan A. Makse | Spreading dynamics in complex networks | 23 pages, 2 figures | Journal of Statistical Mechanics: Theory and Experiment 2013 (12),
P12002 | 10.1088/1742-5468/2013/12/P12002 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Searching for influential spreaders in complex networks is an issue of great
significance for applications across various domains, ranging from the epidemic
control, innovation diffusion, viral marketing, social movement to idea
propagation. In this paper, we first display some of the most important
theoretical models that describe spreading processes, and then discuss the
problem of locating both the individual and multiple influential spreaders
respectively. Recent approaches in these two topics are presented. For the
identification of privileged single spreaders, we summarize several widely used
centralities, such as degree, betweenness centrality, PageRank, k-shell, etc.
We investigate the empirical diffusion data in a large scale online social
community -- LiveJournal. With this extensive dataset, we find that various
measures can convey very distinct information of nodes. Of all the users in
LiveJournal social network, only a small fraction of them involve in spreading.
For the spreading processes in LiveJournal, while degree can locate nodes
participating in information diffusion with higher probability, k-shell is more
effective in finding nodes with large influence. Our results should provide
useful information for designing efficient spreading strategies in reality.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2013 02:55:36 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Pei",
"Sen",
""
],
[
"Makse",
"Hernan A.",
""
]
] | TITLE: Spreading dynamics in complex networks
ABSTRACT: Searching for influential spreaders in complex networks is an issue of great
significance for applications across various domains, ranging from the epidemic
control, innovation diffusion, viral marketing, social movement to idea
propagation. In this paper, we first display some of the most important
theoretical models that describe spreading processes, and then discuss the
problem of locating both the individual and multiple influential spreaders
respectively. Recent approaches in these two topics are presented. For the
identification of privileged single spreaders, we summarize several widely used
centralities, such as degree, betweenness centrality, PageRank, k-shell, etc.
We investigate the empirical diffusion data in a large scale online social
community -- LiveJournal. With this extensive dataset, we find that various
measures can convey very distinct information of nodes. Of all the users in
LiveJournal social network, only a small fraction of them involve in spreading.
For the spreading processes in LiveJournal, while degree can locate nodes
participating in information diffusion with higher probability, k-shell is more
effective in finding nodes with large influence. Our results should provide
useful information for designing efficient spreading strategies in reality.
| no_new_dataset | 0.931836 |
1312.6635 | Hamed Haddadi | Shana Dacres, Hamed Haddadi, Matthew Purver | Topic and Sentiment Analysis on OSNs: a Case Study of Advertising
Strategies on Twitter | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | Social media have substantially altered the way brands and businesses
advertise: Online Social Networks provide brands with more versatile and
dynamic channels for advertisement than traditional media (e.g., TV and radio).
Levels of engagement in such media are usually measured in terms of content
adoption (e.g., likes and retweets) and sentiment, around a given topic.
However, sentiment analysis and topic identification are both non-trivial
tasks.
In this paper, using data collected from Twitter as a case study, we analyze
how engagement and sentiment in promoted content spread over a 10-day period.
We find that promoted tweets lead to higher positive sentiment than promoted
trends; although promoted trends pay off in response volume. We observe that
levels of engagement for the brand and promoted content are highest on the
first day of the campaign, and fall considerably thereafter. However, we show
that these insights depend on the use of robust machine learning and natural
language processing techniques to gather focused, relevant datasets, and to
accurately gauge sentiment, rather than relying on the simple keyword- or
frequency-based metrics sometimes used in social media research.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2013 18:32:06 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Dacres",
"Shana",
""
],
[
"Haddadi",
"Hamed",
""
],
[
"Purver",
"Matthew",
""
]
] | TITLE: Topic and Sentiment Analysis on OSNs: a Case Study of Advertising
Strategies on Twitter
ABSTRACT: Social media have substantially altered the way brands and businesses
advertise: Online Social Networks provide brands with more versatile and
dynamic channels for advertisement than traditional media (e.g., TV and radio).
Levels of engagement in such media are usually measured in terms of content
adoption (e.g., likes and retweets) and sentiment, around a given topic.
However, sentiment analysis and topic identification are both non-trivial
tasks.
In this paper, using data collected from Twitter as a case study, we analyze
how engagement and sentiment in promoted content spread over a 10-day period.
We find that promoted tweets lead to higher positive sentiment than promoted
trends; although promoted trends pay off in response volume. We observe that
levels of engagement for the brand and promoted content are highest on the
first day of the campaign, and fall considerably thereafter. However, we show
that these insights depend on the use of robust machine learning and natural
language processing techniques to gather focused, relevant datasets, and to
accurately gauge sentiment, rather than relying on the simple keyword- or
frequency-based metrics sometimes used in social media research.
| no_new_dataset | 0.936749 |
1312.5697 | Andrew Rabinovich | Samy Bengio, Jeff Dean, Dumitru Erhan, Eugene Ie, Quoc Le, Andrew
Rabinovich, Jonathon Shlens, Yoram Singer | Using Web Co-occurrence Statistics for Improving Image Categorization | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object recognition and localization are important tasks in computer vision.
The focus of this work is the incorporation of contextual information in order
to improve object recognition and localization. For instance, it is natural to
expect not to see an elephant to appear in the middle of an ocean. We consider
a simple approach to encapsulate such common sense knowledge using
co-occurrence statistics from web documents. By merely counting the number of
times nouns (such as elephants, sharks, oceans, etc.) co-occur in web
documents, we obtain a good estimate of expected co-occurrences in visual data.
We then cast the problem of combining textual co-occurrence statistics with the
predictions of image-based classifiers as an optimization problem. The
resulting optimization problem serves as a surrogate for our inference
procedure. Albeit the simplicity of the resulting optimization problem, it is
effective in improving both recognition and localization accuracy. Concretely,
we observe significant improvements in recognition and localization rates for
both ImageNet Detection 2012 and Sun 2012 datasets.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 18:53:47 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Dec 2013 18:12:16 GMT"
}
] | 2013-12-23T00:00:00 | [
[
"Bengio",
"Samy",
""
],
[
"Dean",
"Jeff",
""
],
[
"Erhan",
"Dumitru",
""
],
[
"Ie",
"Eugene",
""
],
[
"Le",
"Quoc",
""
],
[
"Rabinovich",
"Andrew",
""
],
[
"Shlens",
"Jonathon",
""
],
[
"Singer",
"Yoram",
""
]
] | TITLE: Using Web Co-occurrence Statistics for Improving Image Categorization
ABSTRACT: Object recognition and localization are important tasks in computer vision.
The focus of this work is the incorporation of contextual information in order
to improve object recognition and localization. For instance, it is natural to
expect not to see an elephant to appear in the middle of an ocean. We consider
a simple approach to encapsulate such common sense knowledge using
co-occurrence statistics from web documents. By merely counting the number of
times nouns (such as elephants, sharks, oceans, etc.) co-occur in web
documents, we obtain a good estimate of expected co-occurrences in visual data.
We then cast the problem of combining textual co-occurrence statistics with the
predictions of image-based classifiers as an optimization problem. The
resulting optimization problem serves as a surrogate for our inference
procedure. Albeit the simplicity of the resulting optimization problem, it is
effective in improving both recognition and localization accuracy. Concretely,
we observe significant improvements in recognition and localization rates for
both ImageNet Detection 2012 and Sun 2012 datasets.
| no_new_dataset | 0.951459 |
1312.6024 | Yusuf Artan | Yusuf Artan, Peter Paul | Occupancy Detection in Vehicles Using Fisher Vector Image Representation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the high volume of traffic on modern roadways, transportation agencies
have proposed High Occupancy Vehicle (HOV) lanes and High Occupancy Tolling
(HOT) lanes to promote car pooling. However, enforcement of the rules of these
lanes is currently performed by roadside enforcement officers using visual
observation. Manual roadside enforcement is known to be inefficient, costly,
potentially dangerous, and ultimately ineffective. Violation rates up to
50%-80% have been reported, while manual enforcement rates of less than 10% are
typical. Therefore, there is a need for automated vehicle occupancy detection
to support HOV/HOT lane enforcement. A key component of determining vehicle
occupancy is to determine whether or not the vehicle's front passenger seat is
occupied. In this paper, we examine two methods of determining vehicle front
seat occupancy using a near infrared (NIR) camera system pointed at the
vehicle's front windshield. The first method examines a state-of-the-art
deformable part model (DPM) based face detection system that is robust to
facial pose. The second method examines state-of- the-art local aggregation
based image classification using bag-of-visual-words (BOW) and Fisher vectors
(FV). A dataset of 3000 images was collected on a public roadway and is used to
perform the comparison. From these experiments it is clear that the image
classification approach is superior for this problem.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 16:37:46 GMT"
}
] | 2013-12-23T00:00:00 | [
[
"Artan",
"Yusuf",
""
],
[
"Paul",
"Peter",
""
]
] | TITLE: Occupancy Detection in Vehicles Using Fisher Vector Image Representation
ABSTRACT: Due to the high volume of traffic on modern roadways, transportation agencies
have proposed High Occupancy Vehicle (HOV) lanes and High Occupancy Tolling
(HOT) lanes to promote car pooling. However, enforcement of the rules of these
lanes is currently performed by roadside enforcement officers using visual
observation. Manual roadside enforcement is known to be inefficient, costly,
potentially dangerous, and ultimately ineffective. Violation rates up to
50%-80% have been reported, while manual enforcement rates of less than 10% are
typical. Therefore, there is a need for automated vehicle occupancy detection
to support HOV/HOT lane enforcement. A key component of determining vehicle
occupancy is to determine whether or not the vehicle's front passenger seat is
occupied. In this paper, we examine two methods of determining vehicle front
seat occupancy using a near infrared (NIR) camera system pointed at the
vehicle's front windshield. The first method examines a state-of-the-art
deformable part model (DPM) based face detection system that is robust to
facial pose. The second method examines state-of- the-art local aggregation
based image classification using bag-of-visual-words (BOW) and Fisher vectors
(FV). A dataset of 3000 images was collected on a public roadway and is used to
perform the comparison. From these experiments it is clear that the image
classification approach is superior for this problem.
| new_dataset | 0.972046 |
1312.6061 | Timoteo Carletti | Floriana Gargiulo and Timoteo Carletti | Driving forces in researchers mobility | null | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting from the dataset of the publication corpus of the APS during the
period 1955-2009, we reconstruct the individual researchers trajectories,
namely the list of the consecutive affiliations for each scholar. Crossing this
information with different geographic datasets we embed these trajectories in a
spatial framework. Using methods from network theory and complex systems
analysis we characterise these patterns in terms of topological network
properties and we analyse the dependence of an academic path across different
dimensions: the distance between two subsequent positions, the relative
importance of the institutions (in terms of number of publications) and some
socio-cultural traits. We show that distance is not always a good predictor for
the next affiliation while other factors like "the previous steps" of the
career of the researchers (in particular the first position) or the linguistic
and historical similarity between two countries can have an important impact.
Finally we show that the dataset exhibit a memory effect, hence the fate of a
career strongly depends from the first two affiliations.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 18:07:10 GMT"
}
] | 2013-12-23T00:00:00 | [
[
"Gargiulo",
"Floriana",
""
],
[
"Carletti",
"Timoteo",
""
]
] | TITLE: Driving forces in researchers mobility
ABSTRACT: Starting from the dataset of the publication corpus of the APS during the
period 1955-2009, we reconstruct the individual researchers trajectories,
namely the list of the consecutive affiliations for each scholar. Crossing this
information with different geographic datasets we embed these trajectories in a
spatial framework. Using methods from network theory and complex systems
analysis we characterise these patterns in terms of topological network
properties and we analyse the dependence of an academic path across different
dimensions: the distance between two subsequent positions, the relative
importance of the institutions (in terms of number of publications) and some
socio-cultural traits. We show that distance is not always a good predictor for
the next affiliation while other factors like "the previous steps" of the
career of the researchers (in particular the first position) or the linguistic
and historical similarity between two countries can have an important impact.
Finally we show that the dataset exhibit a memory effect, hence the fate of a
career strongly depends from the first two affiliations.
| no_new_dataset | 0.940463 |
1301.0020 | Shabeh Ul Hasson | Shabeh ul Hasson, Valerio Lucarini, Salvatore Pascale | Hydrological Cycle over South and Southeast Asian River Basins as
Simulated by PCMDI/CMIP3 Experiments | null | Earth Syst. Dynam., 4, 199-217 | 10.5194/esd-4-199-2013 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate how the climate models contributing to the PCMDI/CMIP3 dataset
describe the hydrological cycle over four major South and Southeast Asian river
basins (Indus, Ganges, Brahmaputra and Mekong) for the 20th, 21st (13 models)
and 22nd (10 models) centuries. For the 20th century, some models do not seem
to conserve water at the river basin scale up to a good degree of
approximation. The simulated precipitation minus evaporation (P - E), total
runoff (R) and precipitation (P) quantities are neither consistent with the
observations nor among the models themselves. Most of the models underestimate
P - E for all four river basins, which is mainly associated with the
underestimation of precipitation. This is in agreement with the recent results
on the biases of the representation of monsoonal dynamics by GCMs. Overall, a
modest inter-model agreement is found only for the evaporation and inter-annual
variability of P - E. For the 21st and 22nd centuries, models agree on the
negative (positive) changes of P - E for the Indus basin (Ganges, Brahmaputra
and Mekong basins). Most of the models foresee an increase in the inter-annual
variability of P - E for the Ganges and Mekong basins, thus suggesting an
increase in large low-frequency dry/wet events. Instead, no considerable future
change in the inter-annual variability of P - E is found for the Indus and
Brahmaputra basins.
| [
{
"version": "v1",
"created": "Mon, 31 Dec 2012 21:50:53 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2013 20:24:47 GMT"
}
] | 2013-12-20T00:00:00 | [
[
"Hasson",
"Shabeh ul",
""
],
[
"Lucarini",
"Valerio",
""
],
[
"Pascale",
"Salvatore",
""
]
] | TITLE: Hydrological Cycle over South and Southeast Asian River Basins as
Simulated by PCMDI/CMIP3 Experiments
ABSTRACT: We investigate how the climate models contributing to the PCMDI/CMIP3 dataset
describe the hydrological cycle over four major South and Southeast Asian river
basins (Indus, Ganges, Brahmaputra and Mekong) for the 20th, 21st (13 models)
and 22nd (10 models) centuries. For the 20th century, some models do not seem
to conserve water at the river basin scale up to a good degree of
approximation. The simulated precipitation minus evaporation (P - E), total
runoff (R) and precipitation (P) quantities are neither consistent with the
observations nor among the models themselves. Most of the models underestimate
P - E for all four river basins, which is mainly associated with the
underestimation of precipitation. This is in agreement with the recent results
on the biases of the representation of monsoonal dynamics by GCMs. Overall, a
modest inter-model agreement is found only for the evaporation and inter-annual
variability of P - E. For the 21st and 22nd centuries, models agree on the
negative (positive) changes of P - E for the Indus basin (Ganges, Brahmaputra
and Mekong basins). Most of the models foresee an increase in the inter-annual
variability of P - E for the Ganges and Mekong basins, thus suggesting an
increase in large low-frequency dry/wet events. Instead, no considerable future
change in the inter-annual variability of P - E is found for the Indus and
Brahmaputra basins.
| no_new_dataset | 0.9357 |
1312.5394 | Michael S. Gashler Ph.D. | Michael S. Gashler, Michael R. Smith, Richard Morris, Tony Martinez | Missing Value Imputation With Unsupervised Backpropagation | null | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many data mining and data analysis techniques operate on dense matrices or
complete tables of data. Real-world data sets, however, often contain unknown
values. Even many classification algorithms that are designed to operate with
missing values still exhibit deteriorated accuracy. One approach to handling
missing values is to fill in (impute) the missing values. In this paper, we
present a technique for unsupervised learning called Unsupervised
Backpropagation (UBP), which trains a multi-layer perceptron to fit to the
manifold sampled by a set of observed point-vectors. We evaluate UBP with the
task of imputing missing values in datasets, and show that UBP is able to
predict missing values with significantly lower sum-squared error than other
collaborative filtering and imputation techniques. We also demonstrate with 24
datasets and 9 supervised learning algorithms that classification accuracy is
usually higher when randomly-withheld values are imputed using UBP, rather than
with other methods.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 02:38:40 GMT"
}
] | 2013-12-20T00:00:00 | [
[
"Gashler",
"Michael S.",
""
],
[
"Smith",
"Michael R.",
""
],
[
"Morris",
"Richard",
""
],
[
"Martinez",
"Tony",
""
]
] | TITLE: Missing Value Imputation With Unsupervised Backpropagation
ABSTRACT: Many data mining and data analysis techniques operate on dense matrices or
complete tables of data. Real-world data sets, however, often contain unknown
values. Even many classification algorithms that are designed to operate with
missing values still exhibit deteriorated accuracy. One approach to handling
missing values is to fill in (impute) the missing values. In this paper, we
present a technique for unsupervised learning called Unsupervised
Backpropagation (UBP), which trains a multi-layer perceptron to fit to the
manifold sampled by a set of observed point-vectors. We evaluate UBP with the
task of imputing missing values in datasets, and show that UBP is able to
predict missing values with significantly lower sum-squared error than other
collaborative filtering and imputation techniques. We also demonstrate with 24
datasets and 9 supervised learning algorithms that classification accuracy is
usually higher when randomly-withheld values are imputed using UBP, rather than
with other methods.
| no_new_dataset | 0.944382 |
1312.5670 | Tim Vines | Timothy Vines, Arianne Albert, Rose Andrew, Florence Debarr\'e, Dan
Bock, Michelle Franklin, Kimberley Gilbert, Jean-S\'ebastien Moore,
S\'ebastien Renaut, Diana J. Rennison | The availability of research data declines rapidly with article age | 14 pages, 2 figures | null | 10.1016/j.cub.2013.11.014 | null | cs.DL physics.soc-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policies ensuring that research data are available on public archives are
increasingly being implemented at the government [1], funding agency [2-4], and
journal [5,6] level. These policies are predicated on the idea that authors are
poor stewards of their data, particularly over the long term [7], and indeed
many studies have found that authors are often unable or unwilling to share
their data [8-11]. However, there are no systematic estimates of how the
availability of research data changes with time since publication. We therefore
requested datasets from a relatively homogenous set of 516 articles published
between 2 and 22 years ago, and found that availability of the data was
strongly affected by article age. For papers where the authors gave the status
of their data, the odds of a dataset being extant fell by 17% per year. In
addition, the odds that we could find a working email address for the first,
last or corresponding author fell by 7% per year. Our results reinforce the
notion that, in the long term, research data cannot be reliably preserved by
individual researchers, and further demonstrate the urgent need for policies
mandating data sharing via public archives.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 17:57:53 GMT"
}
] | 2013-12-20T00:00:00 | [
[
"Vines",
"Timothy",
""
],
[
"Albert",
"Arianne",
""
],
[
"Andrew",
"Rose",
""
],
[
"Debarré",
"Florence",
""
],
[
"Bock",
"Dan",
""
],
[
"Franklin",
"Michelle",
""
],
[
"Gilbert",
"Kimberley",
""
],
[
"Moore",
"Jean-Sébastien",
""
],
[
"Renaut",
"Sébastien",
""
],
[
"Rennison",
"Diana J.",
""
]
] | TITLE: The availability of research data declines rapidly with article age
ABSTRACT: Policies ensuring that research data are available on public archives are
increasingly being implemented at the government [1], funding agency [2-4], and
journal [5,6] level. These policies are predicated on the idea that authors are
poor stewards of their data, particularly over the long term [7], and indeed
many studies have found that authors are often unable or unwilling to share
their data [8-11]. However, there are no systematic estimates of how the
availability of research data changes with time since publication. We therefore
requested datasets from a relatively homogenous set of 516 articles published
between 2 and 22 years ago, and found that availability of the data was
strongly affected by article age. For papers where the authors gave the status
of their data, the odds of a dataset being extant fell by 17% per year. In
addition, the odds that we could find a working email address for the first,
last or corresponding author fell by 7% per year. Our results reinforce the
notion that, in the long term, research data cannot be reliably preserved by
individual researchers, and further demonstrate the urgent need for policies
mandating data sharing via public archives.
| no_new_dataset | 0.933491 |
1312.5734 | Andrew Lan | Andrew S. Lan, Christoph Studer and Richard G. Baraniuk | Time-varying Learning and Content Analytics via Sparse Factor Analysis | null | null | null | null | stat.ML cs.LG math.OC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose SPARFA-Trace, a new machine learning-based framework for
time-varying learning and content analytics for education applications. We
develop a novel message passing-based, blind, approximate Kalman filter for
sparse factor analysis (SPARFA), that jointly (i) traces learner concept
knowledge over time, (ii) analyzes learner concept knowledge state transitions
(induced by interacting with learning resources, such as textbook sections,
lecture videos, etc, or the forgetting effect), and (iii) estimates the content
organization and intrinsic difficulty of the assessment questions. These
quantities are estimated solely from binary-valued (correct/incorrect) graded
learner response data and a summary of the specific actions each learner
performs (e.g., answering a question or studying a learning resource) at each
time instance. Experimental results on two online course datasets demonstrate
that SPARFA-Trace is capable of tracing each learner's concept knowledge
evolution over time, as well as analyzing the quality and content organization
of learning resources, the question-concept associations, and the question
intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable
or better performance in predicting unobserved learner responses than existing
collaborative filtering and knowledge tracing approaches for personalized
education.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 20:44:44 GMT"
}
] | 2013-12-20T00:00:00 | [
[
"Lan",
"Andrew S.",
""
],
[
"Studer",
"Christoph",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] | TITLE: Time-varying Learning and Content Analytics via Sparse Factor Analysis
ABSTRACT: We propose SPARFA-Trace, a new machine learning-based framework for
time-varying learning and content analytics for education applications. We
develop a novel message passing-based, blind, approximate Kalman filter for
sparse factor analysis (SPARFA), that jointly (i) traces learner concept
knowledge over time, (ii) analyzes learner concept knowledge state transitions
(induced by interacting with learning resources, such as textbook sections,
lecture videos, etc, or the forgetting effect), and (iii) estimates the content
organization and intrinsic difficulty of the assessment questions. These
quantities are estimated solely from binary-valued (correct/incorrect) graded
learner response data and a summary of the specific actions each learner
performs (e.g., answering a question or studying a learning resource) at each
time instance. Experimental results on two online course datasets demonstrate
that SPARFA-Trace is capable of tracing each learner's concept knowledge
evolution over time, as well as analyzing the quality and content organization
of learning resources, the question-concept associations, and the question
intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable
or better performance in predicting unobserved learner responses than existing
collaborative filtering and knowledge tracing approaches for personalized
education.
| no_new_dataset | 0.951549 |
1312.5021 | Zhen Qin | Zhen Qin, Vaclav Petricek, Nikos Karampatziakis, Lihong Li, John
Langford | Efficient Online Bootstrapping for Large Scale Learning | 5 pages, appeared at Big Learning Workshop at Neural Information
Processing Systems 2013 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bootstrapping is a useful technique for estimating the uncertainty of a
predictor, for example, confidence intervals for prediction. It is typically
used on small to moderate sized datasets, due to its high computation cost.
This work describes a highly scalable online bootstrapping strategy,
implemented inside Vowpal Wabbit, that is several times faster than traditional
strategies. Our experiments indicate that, in addition to providing a black
box-like method for estimating uncertainty, our implementation of online
bootstrapping may also help to train models with better prediction performance
due to model averaging.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2013 02:10:21 GMT"
}
] | 2013-12-19T00:00:00 | [
[
"Qin",
"Zhen",
""
],
[
"Petricek",
"Vaclav",
""
],
[
"Karampatziakis",
"Nikos",
""
],
[
"Li",
"Lihong",
""
],
[
"Langford",
"John",
""
]
] | TITLE: Efficient Online Bootstrapping for Large Scale Learning
ABSTRACT: Bootstrapping is a useful technique for estimating the uncertainty of a
predictor, for example, confidence intervals for prediction. It is typically
used on small to moderate sized datasets, due to its high computation cost.
This work describes a highly scalable online bootstrapping strategy,
implemented inside Vowpal Wabbit, that is several times faster than traditional
strategies. Our experiments indicate that, in addition to providing a black
box-like method for estimating uncertainty, our implementation of online
bootstrapping may also help to train models with better prediction performance
due to model averaging.
| no_new_dataset | 0.94868 |
1312.5105 | David Garcia Soriano | Francesco Bonchi, David Garc\'ia-Soriano, Konstantin Kutzkov | Local correlation clustering | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correlation clustering is perhaps the most natural formulation of clustering.
Given $n$ objects and a pairwise similarity measure, the goal is to cluster the
objects so that, to the best possible extent, similar objects are put in the
same cluster and dissimilar objects are put in different clusters. Despite its
theoretical appeal, the practical relevance of correlation clustering still
remains largely unexplored, mainly due to the fact that correlation clustering
requires the $\Theta(n^2)$ pairwise similarities as input.
In this paper we initiate the investigation into \emph{local} algorithms for
correlation clustering. In \emph{local correlation clustering} we are given the
identifier of a single object and we want to return the cluster to which it
belongs in some globally consistent near-optimal clustering, using a small
number of similarity queries. Local algorithms for correlation clustering open
the door to \emph{sublinear-time} algorithms, which are particularly useful
when the similarity between items is costly to compute, as it is often the case
in many practical application domains. They also imply $(i)$ distributed and
streaming clustering algorithms, $(ii)$ constant-time estimators and testers
for cluster edit distance, and $(iii)$ property-preserving parallel
reconstruction algorithms for clusterability.
Specifically, we devise a local clustering algorithm attaining a $(3,
\varepsilon)$-approximation in time $O(1/\varepsilon^2)$ independently of the
dataset size. An explicit approximate clustering for all objects can be
produced in time $O(n/\varepsilon)$ (which is provably optimal). We also
provide a fully additive $(1,\varepsilon)$-approximation with local query
complexity $poly(1/\varepsilon)$ and time complexity $2^{poly(1/\varepsilon)}$.
The latter yields the fastest polynomial-time approximation scheme for
correlation clustering known to date.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2013 12:04:10 GMT"
}
] | 2013-12-19T00:00:00 | [
[
"Bonchi",
"Francesco",
""
],
[
"García-Soriano",
"David",
""
],
[
"Kutzkov",
"Konstantin",
""
]
] | TITLE: Local correlation clustering
ABSTRACT: Correlation clustering is perhaps the most natural formulation of clustering.
Given $n$ objects and a pairwise similarity measure, the goal is to cluster the
objects so that, to the best possible extent, similar objects are put in the
same cluster and dissimilar objects are put in different clusters. Despite its
theoretical appeal, the practical relevance of correlation clustering still
remains largely unexplored, mainly due to the fact that correlation clustering
requires the $\Theta(n^2)$ pairwise similarities as input.
In this paper we initiate the investigation into \emph{local} algorithms for
correlation clustering. In \emph{local correlation clustering} we are given the
identifier of a single object and we want to return the cluster to which it
belongs in some globally consistent near-optimal clustering, using a small
number of similarity queries. Local algorithms for correlation clustering open
the door to \emph{sublinear-time} algorithms, which are particularly useful
when the similarity between items is costly to compute, as it is often the case
in many practical application domains. They also imply $(i)$ distributed and
streaming clustering algorithms, $(ii)$ constant-time estimators and testers
for cluster edit distance, and $(iii)$ property-preserving parallel
reconstruction algorithms for clusterability.
Specifically, we devise a local clustering algorithm attaining a $(3,
\varepsilon)$-approximation in time $O(1/\varepsilon^2)$ independently of the
dataset size. An explicit approximate clustering for all objects can be
produced in time $O(n/\varepsilon)$ (which is provably optimal). We also
provide a fully additive $(1,\varepsilon)$-approximation with local query
complexity $poly(1/\varepsilon)$ and time complexity $2^{poly(1/\varepsilon)}$.
The latter yields the fastest polynomial-time approximation scheme for
correlation clustering known to date.
| no_new_dataset | 0.946646 |
1312.5124 | Paul Fogel | Paul Fogel | Permuted NMF: A Simple Algorithm Intended to Minimize the Volume of the
Score Matrix | null | null | null | null | stat.AP cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-Negative Matrix Factorization, NMF, attempts to find a number of
archetypal response profiles, or parts, such that any sample profile in the
dataset can be approximated by a close profile among these archetypes or a
linear combination of these profiles. The non-negativity constraint is imposed
while estimating archetypal profiles, due to the non-negative nature of the
observed signal. Apart from non negativity, a volume constraint can be applied
on the Score matrix W to enhance the ability of learning parts of NMF. In this
report, we describe a very simple algorithm, which in effect achieves volume
minimization, although indirectly.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2013 13:13:39 GMT"
}
] | 2013-12-19T00:00:00 | [
[
"Fogel",
"Paul",
""
]
] | TITLE: Permuted NMF: A Simple Algorithm Intended to Minimize the Volume of the
Score Matrix
ABSTRACT: Non-Negative Matrix Factorization, NMF, attempts to find a number of
archetypal response profiles, or parts, such that any sample profile in the
dataset can be approximated by a close profile among these archetypes or a
linear combination of these profiles. The non-negativity constraint is imposed
while estimating archetypal profiles, due to the non-negative nature of the
observed signal. Apart from non negativity, a volume constraint can be applied
on the Score matrix W to enhance the ability of learning parts of NMF. In this
report, we describe a very simple algorithm, which in effect achieves volume
minimization, although indirectly.
| no_new_dataset | 0.946646 |
1303.3990 | Vladimir Gligorijevi\'c | Vladimir Gligorijevic | Master thesis: Growth and Self-Organization Processes in Directed Social
Network | This paper has been withdrawn due to its incompleteness | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large dataset collected from Ubuntu chat channel is studied as a complex
dynamical system with emergent collective behaviour of users. With the
appropriate network mappings we examined wealthy topological structure of
Ubuntu network. The structure of this network is determined by computing
different topological measures. The directed, weighted network, which is a
suitable representation of the dataset from Ubuntu chat channel is
characterized with power law dependencies of various quantities, hierarchical
organization and disassortative mixing patterns. Beyond the topological
features, the emergent collective state is further quantified by analysis of
time series of users activities driven by emotions. Analysis of time series
reveals self-organized dynamics with long-range temporal correlations in user
actions.
| [
{
"version": "v1",
"created": "Sat, 16 Mar 2013 15:33:21 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Mar 2013 11:25:59 GMT"
},
{
"version": "v3",
"created": "Sat, 14 Dec 2013 13:29:27 GMT"
}
] | 2013-12-17T00:00:00 | [
[
"Gligorijevic",
"Vladimir",
""
]
] | TITLE: Master thesis: Growth and Self-Organization Processes in Directed Social
Network
ABSTRACT: Large dataset collected from Ubuntu chat channel is studied as a complex
dynamical system with emergent collective behaviour of users. With the
appropriate network mappings we examined wealthy topological structure of
Ubuntu network. The structure of this network is determined by computing
different topological measures. The directed, weighted network, which is a
suitable representation of the dataset from Ubuntu chat channel is
characterized with power law dependencies of various quantities, hierarchical
organization and disassortative mixing patterns. Beyond the topological
features, the emergent collective state is further quantified by analysis of
time series of users activities driven by emotions. Analysis of time series
reveals self-organized dynamics with long-range temporal correlations in user
actions.
| no_new_dataset | 0.945248 |
1312.0086 | Filomena Ferrucci | Filomena Ferrucci, M-Tahar Kechadi, Pasquale Salza, Federica Sarro | A Framework for Genetic Algorithms Based on Hadoop | null | null | null | null | cs.NE cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic Algorithms (GAs) are powerful metaheuristic techniques mostly used in
many real-world applications. The sequential execution of GAs requires
considerable computational power both in time and resources. Nevertheless, GAs
are naturally parallel and accessing a parallel platform such as Cloud is easy
and cheap. Apache Hadoop is one of the common services that can be used for
parallel applications. However, using Hadoop to develop a parallel version of
GAs is not simple without facing its inner workings. Even though some
sequential frameworks for GAs already exist, there is no framework supporting
the development of GA applications that can be executed in parallel. In this
paper is described a framework for parallel GAs on the Hadoop platform,
following the paradigm of MapReduce. The main purpose of this framework is to
allow the user to focus on the aspects of GA that are specific to the problem
to be addressed, being sure that this task is going to be correctly executed on
the Cloud with a good performance. The framework has been also exploited to
develop an application for Feature Subset Selection problem. A preliminary
analysis of the performance of the developed GA application has been performed
using three datasets and shown very promising performance.
| [
{
"version": "v1",
"created": "Sat, 30 Nov 2013 10:41:29 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Dec 2013 23:01:10 GMT"
}
] | 2013-12-17T00:00:00 | [
[
"Ferrucci",
"Filomena",
""
],
[
"Kechadi",
"M-Tahar",
""
],
[
"Salza",
"Pasquale",
""
],
[
"Sarro",
"Federica",
""
]
] | TITLE: A Framework for Genetic Algorithms Based on Hadoop
ABSTRACT: Genetic Algorithms (GAs) are powerful metaheuristic techniques mostly used in
many real-world applications. The sequential execution of GAs requires
considerable computational power both in time and resources. Nevertheless, GAs
are naturally parallel and accessing a parallel platform such as Cloud is easy
and cheap. Apache Hadoop is one of the common services that can be used for
parallel applications. However, using Hadoop to develop a parallel version of
GAs is not simple without facing its inner workings. Even though some
sequential frameworks for GAs already exist, there is no framework supporting
the development of GA applications that can be executed in parallel. In this
paper is described a framework for parallel GAs on the Hadoop platform,
following the paradigm of MapReduce. The main purpose of this framework is to
allow the user to focus on the aspects of GA that are specific to the problem
to be addressed, being sure that this task is going to be correctly executed on
the Cloud with a good performance. The framework has been also exploited to
develop an application for Feature Subset Selection problem. A preliminary
analysis of the performance of the developed GA application has been performed
using three datasets and shown very promising performance.
| no_new_dataset | 0.944842 |
1312.4108 | F. Ozgur Catak | Ferhat \"Ozg\"ur \c{C}atak, Mehmet Erdal Balaban | A MapReduce based distributed SVM algorithm for binary classification | 19 Pages. arXiv admin note: text overlap with arXiv:1301.0082 | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although Support Vector Machine (SVM) algorithm has a high generalization
property to classify unseen examples after the training phase and a small loss
value, the algorithm is not suitable for real-life classification and
regression problems: SVMs cannot be trained on datasets containing hundreds of
thousands of examples. In previous studies on distributed machine learning
algorithms, the SVM is trained over a costly and preconfigured computing
environment. In this research, we present a MapReduce based distributed
parallel SVM training algorithm for binary classification problems. This work
shows how to distribute the optimization problem over cloud computing systems
with the MapReduce technique. In the second step of this work, we use
statistical learning theory to find the predictive hypothesis that minimizes
the empirical risk over the hypothesis spaces created with the reduce function
of MapReduce. The results of this research are important for training SVM-based
classifiers on big datasets. We show that, with iterative training of the split
dataset using the MapReduce technique, the accuracy of the classifier function
converges to the accuracy of the globally optimal classifier function in a
finite number of iterations. The algorithm's performance was measured on
samples from the letter recognition and pen-based recognition of handwritten
digits datasets.
| [
{
"version": "v1",
"created": "Sun, 15 Dec 2013 05:42:51 GMT"
}
] | 2013-12-17T00:00:00 | [
[
"Çatak",
"Ferhat Özgür",
""
],
[
"Balaban",
"Mehmet Erdal",
""
]
] | TITLE: A MapReduce based distributed SVM algorithm for binary classification
ABSTRACT: Although Support Vector Machine (SVM) algorithm has a high generalization
property to classify unseen examples after the training phase and a small loss
value, the algorithm is not suitable for real-life classification and
regression problems: SVMs cannot be trained on datasets containing hundreds of
thousands of examples. In previous studies on distributed machine learning
algorithms, the SVM is trained over a costly and preconfigured computing
environment. In this research, we present a MapReduce based distributed
parallel SVM training algorithm for binary classification problems. This work
shows how to distribute the optimization problem over cloud computing systems
with the MapReduce technique. In the second step of this work, we use
statistical learning theory to find the predictive hypothesis that minimizes
the empirical risk over the hypothesis spaces created with the reduce function
of MapReduce. The results of this research are important for training SVM-based
classifiers on big datasets. We show that, with iterative training of the split
dataset using the MapReduce technique, the accuracy of the classifier function
converges to the accuracy of the globally optimal classifier function in a
finite number of iterations. The algorithm's performance was measured on
samples from the letter recognition and pen-based recognition of handwritten
digits datasets.
| no_new_dataset | 0.948106 |
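The record above outlines a MapReduce-based distributed SVM. One common way to realize this idea, assumed here purely for illustration, is to train a sub-SVM per data split in the map phase, emit only the support vectors, and retrain on their union in the reduce phase. The sketch below simulates that flow on a single machine with scikit-learn; the synthetic dataset, linear kernel and number of splits are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Map phase: each partition trains a sub-SVM and emits its support vectors.
# Reduce phase: merge the support vectors and retrain a single global SVM.

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
splits = np.array_split(np.arange(len(X)), 4)     # 4 "map" partitions

sv_X, sv_y = [], []
for idx in splits:                                # map phase (run in parallel on a cluster)
    clf = SVC(kernel="linear").fit(X[idx], y[idx])
    sv_X.append(X[idx][clf.support_])             # emit only support vectors
    sv_y.append(y[idx][clf.support_])

# reduce phase: merge support vectors and retrain on the union
X_merged = np.vstack(sv_X)
y_merged = np.concatenate(sv_y)
global_clf = SVC(kernel="linear").fit(X_merged, y_merged)
print("accuracy on all data:", global_clf.score(X, y))
```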
1312.4209 | Richard Davis | Richard Davis, Sanjay Chawla, Philip Leong | Feature Graph Architectures | 9 pages, with 5 pages of supplementary material (appendices) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we propose feature graph architectures (FGA), which are deep
learning systems employing a structured initialisation and training method
based on a feature graph which facilitates improved generalisation performance
compared with a standard shallow architecture. The goal is to explore
alternative perspectives on the problem of deep network training. We evaluate
FGA performance for deep SVMs on some experimental datasets, and show how
generalisation and stability results may be derived for these models. We
describe the effect of permutations on the model accuracy, and give a criterion
for the optimal permutation in terms of feature correlations. The experimental
results show that the algorithm produces robust and significant test set
improvements over a standard shallow SVM training method for a range of
datasets. These gains are achieved with a moderate increase in time complexity.
| [
{
"version": "v1",
"created": "Sun, 15 Dec 2013 23:40:49 GMT"
}
] | 2013-12-17T00:00:00 | [
[
"Davis",
"Richard",
""
],
[
"Chawla",
"Sanjay",
""
],
[
"Leong",
"Philip",
""
]
] | TITLE: Feature Graph Architectures
ABSTRACT: In this article we propose feature graph architectures (FGA), which are deep
learning systems employing a structured initialisation and training method
based on a feature graph which facilitates improved generalisation performance
compared with a standard shallow architecture. The goal is to explore
alternative perspectives on the problem of deep network training. We evaluate
FGA performance for deep SVMs on some experimental datasets, and show how
generalisation and stability results may be derived for these models. We
describe the effect of permutations on the model accuracy, and give a criterion
for the optimal permutation in terms of feature correlations. The experimental
results show that the algorithm produces robust and significant test set
improvements over a standard shallow SVM training method for a range of
datasets. These gains are achieved with a moderate increase in time complexity.
| no_new_dataset | 0.951504 |
1312.4384 | Eren Golge | Eren Golge and Pinar Duygulu | Rectifying Self Organizing Maps for Automatic Concept Learning from Web
Images | present CVPR2014 submission | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We attack the problem of learning concepts automatically from noisy web image
search results. Going beyond low level attributes, such as colour and texture,
we explore weakly-labelled datasets for the learning of higher level concepts,
such as scene categories. The idea is based on discovering common
characteristics shared among subsets of images by proposing a method that is
able to organise the data while eliminating irrelevant instances. We propose a
novel clustering and outlier detection method, namely Rectifying Self
Organizing Maps (RSOM). Given an image collection returned for a concept query,
RSOM provides clusters pruned of outliers. Each cluster is used to train a
model representing a different characteristic of the concept. The proposed
method outperforms the state-of-the-art studies on the task of learning
low-level concepts, and it is competitive in learning higher level concepts as
well. It is capable of working at large scale with no supervision by exploiting
the available sources.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 14:51:00 GMT"
}
] | 2013-12-17T00:00:00 | [
[
"Golge",
"Eren",
""
],
[
"Duygulu",
"Pinar",
""
]
] | TITLE: Rectifying Self Organizing Maps for Automatic Concept Learning from Web
Images
ABSTRACT: We attack the problem of learning concepts automatically from noisy web image
search results. Going beyond low level attributes, such as colour and texture,
we explore weakly-labelled datasets for the learning of higher level concepts,
such as scene categories. The idea is based on discovering common
characteristics shared among subsets of images by proposing a method that is
able to organise the data while eliminating irrelevant instances. We propose a
novel clustering and outlier detection method, namely Rectifying Self
Organizing Maps (RSOM). Given an image collection returned for a concept query,
RSOM provides clusters pruned of outliers. Each cluster is used to train a
model representing a different characteristic of the concept. The proposed
method outperforms the state-of-the-art studies on the task of learning
low-level concepts, and it is competitive in learning higher level concepts as
well. It is capable of working at large scale with no supervision by exploiting
the available sources.
| no_new_dataset | 0.951594 |
1312.4477 | Ghazi Al-Naymat | Ghazi Al-Naymat | GCG: Mining Maximal Complete Graph Patterns from Large Spatial Data | 11 | International Conference on Computer Systems and Applications
(AICCSA), pp.1,8. Fes, Morocco.27-30 May 2013 | 10.1109/AICCSA.2013.6616417 | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Recent research on pattern discovery has progressed from mining frequent
patterns and sequences to mining structured patterns, such as trees and graphs.
Graphs, as a general data structure, can model complex relations among data,
with wide applications in web exploration and social networks. However, the
process of mining large graph patterns is a challenge due to the existence of a
large number of subgraphs. In this paper, we aim to mine only frequent complete graph
patterns. A graph g in a database is complete if every pair of distinct
vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining
algorithm developed to explore interesting pruning techniques to extract
maximal complete graphs from the large spatial dataset of the Sloan Digital Sky
Survey (SDSS). Using a divide and conquer strategy, GCG shows high efficiency,
especially in the presence of a large number of patterns. In this paper, we
describe GCG, which can mine not only simple co-location spatial
patterns but also complex ones. To the best of our knowledge, this is the first
algorithm used to exploit the extraction of maximal complete graphs in the
process of mining complex co-location patterns in large spatial datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2013 15:00:50 GMT"
}
] | 2013-12-17T00:00:00 | [
[
"Al-Naymat",
"Ghazi",
""
]
] | TITLE: GCG: Mining Maximal Complete Graph Patterns from Large Spatial Data
ABSTRACT: Recent research on pattern discovery has progressed from mining frequent
patterns and sequences to mining structured patterns, such as trees and graphs.
Graphs, as a general data structure, can model complex relations among data,
with wide applications in web exploration and social networks. However, the
process of mining large graph patterns is a challenge due to the existence of a
large number of subgraphs. In this paper, we aim to mine only frequent complete graph
patterns. A graph g in a database is complete if every pair of distinct
vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining
algorithm developed to explore interesting pruning techniques to extract
maximal complete graphs from the large spatial dataset of the Sloan Digital Sky
Survey (SDSS). Using a divide and conquer strategy, GCG shows high efficiency,
especially in the presence of a large number of patterns. In this paper, we
describe GCG, which can mine not only simple co-location spatial
patterns but also complex ones. To the best of our knowledge, this is the first
algorithm used to exploit the extraction of maximal complete graphs in the
process of mining complex co-location patterns in large spatial datasets.
| no_new_dataset | 0.950732 |
1312.0624 | Uri Shalit | Uri Shalit and Gal Chechik | Efficient coordinate-descent for orthogonal matrices through Givens
rotations | A shorter version of this paper will appear in the proceedings of the
31st International Conference for Machine Learning (ICML 2014) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimizing over the set of orthogonal matrices is a central component in
problems like sparse-PCA or tensor decomposition. Unfortunately, such
optimization is hard since simple operations on orthogonal matrices easily
break orthogonality, and correcting orthogonality usually costs a large amount
of computation. Here we propose a framework for optimizing orthogonal matrices,
that is the parallel of coordinate-descent in Euclidean spaces. It is based on
{\em Givens-rotations}, a fast-to-compute operation that affects a small number
of entries in the learned matrix, and preserves orthogonality. We show two
applications of this approach: an algorithm for tensor decomposition that is
used in learning mixture models, and an algorithm for sparse-PCA. We study the
parameter regime where a Givens rotation approach converges faster and achieves
a superior model on a genome-wide brain-wide mRNA expression dataset.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2013 21:09:40 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2013 18:47:20 GMT"
}
] | 2013-12-16T00:00:00 | [
[
"Shalit",
"Uri",
""
],
[
"Chechik",
"Gal",
""
]
] | TITLE: Efficient coordinate-descent for orthogonal matrices through Givens
rotations
ABSTRACT: Optimizing over the set of orthogonal matrices is a central component in
problems like sparse-PCA or tensor decomposition. Unfortunately, such
optimization is hard since simple operations on orthogonal matrices easily
break orthogonality, and correcting orthogonality usually costs a large amount
of computation. Here we propose a framework for optimizing orthogonal matrices,
that is the parallel of coordinate-descent in Euclidean spaces. It is based on
{\em Givens-rotations}, a fast-to-compute operation that affects a small number
of entries in the learned matrix, and preserves orthogonality. We show two
applications of this approach: an algorithm for tensor decomposition that is
used in learning mixture models, and an algorithm for sparse-PCA. We study the
parameter regime where a Givens rotation approach converges faster and achieves
a superior model on a genome-wide brain-wide mRNA expression dataset.
| no_new_dataset | 0.952662 |
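The Givens-rotation idea in the record above can be made concrete with a small NumPy sketch: a rotation of two columns of an orthogonal matrix preserves orthogonality, so coordinate descent only has to search over the rotation angle. The quadratic (off-diagonal energy) objective and the coarse grid search over angles below are illustrative assumptions; the paper's objectives (tensor decomposition, sparse-PCA) and its angle update are not reproduced here.

```python
import numpy as np

# Rotating columns (i, j) of an orthogonal matrix W by an angle t keeps W
# orthogonal, so each coordinate step is a 1-D search over t.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A + A.T                                        # symmetric matrix defining the objective
W = np.linalg.qr(rng.standard_normal((5, 5)))[0]   # random orthogonal start

def objective(W):
    # example objective: off-diagonal energy of W^T M W
    D = W.T @ M @ W
    return np.sum(D**2) - np.sum(np.diag(D)**2)

def givens_step(W, i, j, angles=np.linspace(0, np.pi, 64, endpoint=False)):
    best_W, best_val = W, objective(W)
    for t in angles:
        G = np.eye(W.shape[0])
        c, s = np.cos(t), np.sin(t)
        G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
        W_new = W @ G                              # only columns i and j change
        val = objective(W_new)
        if val < best_val:
            best_W, best_val = W_new, val
    return best_W

for sweep in range(20):                            # sweep over coordinate pairs
    for i in range(5):
        for j in range(i + 1, 5):
            W = givens_step(W, i, j)
print("off-diagonal energy:", objective(W))        # decreases toward 0 as W nears the eigenbasis of M
```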
1312.2789 | Chanabasayya Vastrad M | Doreswamy and Chanabasayya .M. Vastrad | Performance Analysis Of Regularized Linear Regression Models For
Oxazolines And Oxazoles Derivitive Descriptor Dataset | null | published International Journal of Computational Science and
Information Technology (IJCSITY) Vol.1, No.4, November 2013 | 10.5121/ijcsity.2013.1408 | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Regularized regression techniques for linear regression have been developed
over the last few decades to reduce the flaws of ordinary least squares
regression with regard to prediction accuracy. In this paper, new methods for
using regularized regression in model choice are introduced, and we distinguish
the conditions in which regularized regression improves our ability to
discriminate between models. We applied all five methods that use penalty-based
(regularization) shrinkage to handle the Oxazolines and Oxazoles derivatives
descriptor dataset with far more predictors than observations. The lasso,
ridge, elastic net, lars and relaxed lasso further possess the desirable
property that they simultaneously select relevant predictive descriptors and
optimally estimate their effects. Here, we comparatively evaluate the
performance of five regularized linear regression methods. The assessment of
the performance of each model by means of benchmark experiments is an
established exercise. Cross-validation and resampling methods are generally
used to arrive at point estimates of the efficiencies, which are compared to
identify methods with acceptable features. Predictive accuracy was evaluated
using the root mean squared error (RMSE) and the square of the usual
correlation between predictors and the observed mean inhibitory concentration
of antitubercular activity (R square). We found that all five regularized
regression models were able to produce feasible models and efficiently capture
the linearity in the data. The elastic net and lars had similar accuracies, as
did the lasso and relaxed lasso, and both pairs outperformed ridge regression
in terms of the RMSE and R square metrics.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 13:16:02 GMT"
}
] | 2013-12-13T00:00:00 | [
[
"Doreswamy",
"",
""
],
[
"Vastrad",
"Chanabasayya . M.",
""
]
] | TITLE: Performance Analysis Of Regularized Linear Regression Models For
Oxazolines And Oxazoles Derivitive Descriptor Dataset
ABSTRACT: Regularized regression techniques for linear regression have been
developed over the last few decades to reduce the flaws of ordinary least
squares regression with regard to prediction accuracy. In this paper, new
methods for using regularized regression in model choice are introduced, and we
distinguish the conditions in which regularized regression improves our ability
to discriminate between models. We applied all five methods that use
penalty-based (regularization) shrinkage to handle the Oxazolines and Oxazoles
derivatives descriptor dataset with far more predictors than observations. The
lasso, ridge, elastic net, lars and relaxed lasso further possess the desirable
property that they simultaneously select relevant predictive descriptors and
optimally estimate their effects. Here, we comparatively evaluate the
performance of five regularized linear regression methods. The assessment of
the performance of each model by means of benchmark experiments is an
established exercise. Cross-validation and resampling methods are generally
used to arrive at point estimates of the efficiencies, which are compared to
identify methods with acceptable features. Predictive accuracy was evaluated
using the root mean squared error (RMSE) and the square of the usual
correlation between predictors and the observed mean inhibitory concentration
of antitubercular activity (R square). We found that all five regularized
regression models were able to produce feasible models and efficiently capture
the linearity in the data. The elastic net and lars had similar accuracies, as
did the lasso and relaxed lasso, and both pairs outperformed ridge regression
in terms of the RMSE and R square metrics.
| no_new_dataset | 0.948202 |
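As a hedged illustration of the kind of comparison described in the record above, the scikit-learn sketch below fits ridge, lasso and elastic net on a synthetic dataset with far more predictors than observations and reports cross-validated RMSE; the penalty values and the data are assumptions, and lars and the relaxed lasso are omitted for brevity.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import cross_val_score

# Synthetic "more predictors than observations" setting, mirroring the
# descriptor-dataset regime discussed in the abstract.
X, y = make_regression(n_samples=60, n_features=300, n_informative=10,
                       noise=5.0, random_state=1)

models = {
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.5),
    "elastic net": ElasticNet(alpha=0.5, l1_ratio=0.5),
}

for name, model in models.items():
    # cross-validated RMSE, echoing the RMSE criterion used in the paper
    neg_mse = cross_val_score(model, X, y, cv=5,
                              scoring="neg_mean_squared_error")
    rmse = np.sqrt(-neg_mse).mean()
    print(f"{name:12s} mean CV RMSE = {rmse:.2f}")
```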
1312.2841 | Chanabasayya Vastrad M | Doreswamy and Chanabasayya .M. Vastrad | Predictive Comparative QSAR Analysis Of As 5-Nitofuran-2-YL Derivatives
Myco bacterium tuberculosis H37RV Inhibitors Bacterium Tuberculosis H37RV
Inhibitors | null | published Health Informatics- An International Journal (HIIJ)
Vol.2, No.4, November 2013 | 10.5121/hiij.2013.2404 | null | cs.CE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Antitubercular activity of 5-nitrofuran-2-yl Derivatives series were
subjected to Quantitative Structure Activity Relationship (QSAR) Analysis with
an effort to derive and understand a correlation between the biological
activity as response variable and different molecular descriptors as
independent variables. QSAR models are built using a dataset of 40 molecular
descriptors. Different statistical regression expressions were obtained using
Partial Least Squares (PLS), Multiple Linear Regression (MLR) and Principal
Component Regression (PCR) techniques. Among these techniques, the Partial
Least Squares Regression (PLS) technique has shown very promising results
compared to the MLR technique. A QSAR model was built from a training set of 30
molecules, with a correlation coefficient ($r^2$) of 0.8484, a significant
cross-validated correlation coefficient ($q^2$) of 0.0939, an F test value of
48.5187, an $r^2$ for the external test set (pred$_r^2$) of -0.5604, a
coefficient of correlation of the predicted data set (pred$_r^2se$) of 0.7252
and 26 degrees of freedom, obtained by the Partial Least Squares Regression
technique.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 15:50:39 GMT"
}
] | 2013-12-13T00:00:00 | [
[
"Doreswamy",
"",
""
],
[
"Vastrad",
"Chanabasayya . M.",
""
]
] | TITLE: Predictive Comparative QSAR Analysis Of As 5-Nitofuran-2-YL Derivatives
Myco bacterium tuberculosis H37RV Inhibitors Bacterium Tuberculosis H37RV
Inhibitors
ABSTRACT: Antitubercular activity of 5-nitrofuran-2-yl Derivatives series were
subjected to Quantitative Structure Activity Relationship (QSAR) Analysis with
an effort to derive and understand a correlation between the biological
activity as response variable and different molecular descriptors as
independent variables. QSAR models are built using a dataset of 40 molecular
descriptors. Different statistical regression expressions were obtained using
Partial Least Squares (PLS), Multiple Linear Regression (MLR) and Principal
Component Regression (PCR) techniques. Among these techniques, the Partial
Least Squares Regression (PLS) technique has shown very promising results
compared to the MLR technique. A QSAR model was built from a training set of 30
molecules, with a correlation coefficient ($r^2$) of 0.8484, a significant
cross-validated correlation coefficient ($q^2$) of 0.0939, an F test value of
48.5187, an $r^2$ for the external test set (pred$_r^2$) of -0.5604, a
coefficient of correlation of the predicted data set (pred$_r^2se$) of 0.7252
and 26 degrees of freedom, obtained by the Partial Least Squares Regression
technique.
| no_new_dataset | 0.945901 |
1312.2859 | Chanabasayya Vastrad M | Doreswamy and Chanabasayya .M. Vastrad | A Robust Missing Value Imputation Method MifImpute For Incomplete
Molecular Descriptor Data And Comparative Analysis With Other Missing Value
Imputation Methods | arXiv admin note: text overlap with arXiv:1105.0828 by other authors
without attribution | Published International Journal on Computational Sciences &
Applications (IJCSA) Vol.3, No4, August 2013 | 10.5121/ijcsa.2013.3406 | null | cs.CE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Missing data imputation is an important research topic in data mining.
Large-scale molecular descriptor data may contain missing values (MVs).
However, some methods for downstream analyses, including some prediction tools,
require a complete descriptor data matrix. We propose and evaluate an iterative
imputation method MiFoImpute based on a random forest. By averaging over many
unpruned regression trees, random forest intrinsically constitutes a multiple
imputation scheme. Using the NRMSE and NMAE estimates of random forest, we are
able to estimate the imputation error. Evaluation is performed on two molecular
descriptor datasets generated from a diverse selection of pharmaceutical fields
with artificially introduced missing values ranging from 10% to 30%. The
experimental results demonstrate that missing values have a great impact on the
effectiveness of imputation techniques and that our method MiFoImpute is more
robust to missing values than the other ten imputation methods used as benchmarks.
Additionally, MiFoImpute exhibits attractive computational efficiency and can
cope with high-dimensional data.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 16:24:28 GMT"
}
] | 2013-12-13T00:00:00 | [
[
"Doreswamy",
"",
""
],
[
"Vastrad",
"Chanabasayya . M.",
""
]
] | TITLE: A Robust Missing Value Imputation Method MifImpute For Incomplete
Molecular Descriptor Data And Comparative Analysis With Other Missing Value
Imputation Methods
ABSTRACT: Missing data imputation is an important research topic in data mining.
Large-scale molecular descriptor data may contain missing values (MVs).
However, some methods for downstream analyses, including some prediction tools,
require a complete descriptor data matrix. We propose and evaluate an iterative
imputation method MiFoImpute based on a random forest. By averaging over many
unpruned regression trees, random forest intrinsically constitutes a multiple
imputation scheme. Using the NRMSE and NMAE estimates of random forest, we are
able to estimate the imputation error. Evaluation is performed on two molecular
descriptor datasets generated from a diverse selection of pharmaceutical fields
with artificially introduced missing values ranging from 10% to 30%. The
experimental results demonstrate that missing values have a great impact on the
effectiveness of imputation techniques and that our method MiFoImpute is more
robust to missing values than the other ten imputation methods used as benchmarks.
Additionally, MiFoImpute exhibits attractive computational efficiency and can
cope with high-dimensional data.
| no_new_dataset | 0.941601 |
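A missForest-style iterative random-forest imputation loop, which the record above appears to build on, can be sketched in a few lines of Python: initialise missing entries with column means, then repeatedly refit a per-column regressor on the observed rows and re-impute the missing ones. The synthetic data, missingness rate, forest size and number of passes below are illustrative assumptions, not MiFoImpute itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_true = rng.standard_normal((200, 6))
X_true[:, 3] = 0.7 * X_true[:, 0] - 0.2 * X_true[:, 1]   # correlated column
mask = rng.random(X_true.shape) < 0.15                   # 15% missing at random
X = np.where(mask, np.nan, X_true)

# mean initialisation of the missing entries
X_imp = X.copy()
col_means = np.nanmean(X, axis=0)
for j in range(X.shape[1]):
    X_imp[np.isnan(X[:, j]), j] = col_means[j]

# a few refinement passes: refit a forest per column and re-impute
for _ in range(5):
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        other = np.delete(X_imp, j, axis=1)
        rf = RandomForestRegressor(n_estimators=50, random_state=0)
        rf.fit(other[~miss], X_imp[~miss, j])            # train on observed rows
        X_imp[miss, j] = rf.predict(other[miss])         # re-impute missing rows

nrmse = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2)) / X_true[mask].std()
print("NRMSE of imputed entries:", round(nrmse, 3))
```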
1312.2861 | Chanabasayya Vastrad M | Doreswamy and Chanabasayya .M. Vastrad | Identification Of Outliers In Oxazolines AND Oxazoles High Dimension
Molecular Descriptor Dataset Using Principal Component Outlier Detection
Algorithm And Comparative Numerical Study Of Other Robust Estimators | null | Published International Journal of Data Mining & Knowledge
Management Process (IJDKP) Vol.3, No.4, July 2013 | 10.5121/ijdkp.2013.3405 | null | cs.CE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | From the past decade outlier detection has been in use. Detection of outliers
is an emerging topic and has robust applications in medical sciences and
pharmaceutical sciences. Outlier detection is used to detect anomalous
behaviour of data. Typical problems in Bioinformatics can be addressed by
outlier detection. A computationally fast method for detecting outliers is
presented that is particularly effective in high dimensions. The PrCmpOut
algorithm makes use of simple properties of principal components to detect outliers in the
transformed space, leading to significant computational advantages for high
dimensional data. This procedure requires considerably less computational time
than existing methods for outlier detection. The properties of this estimator
(Outlier error rate (FN), Non-Outlier error rate(FP) and computational costs)
are analyzed and compared with those of other robust estimators described in
the literature through simulation studies. Numerical evidence based on the
Oxazolines and Oxazoles molecular descriptor dataset shows that the proposed method
performs well in a variety of situations of practical interest. It is thus a
valuable companion to the existing outlier detection methods.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 16:35:25 GMT"
}
] | 2013-12-13T00:00:00 | [
[
"Doreswamy",
"",
""
],
[
"Vastrad",
"Chanabasayya . M.",
""
]
] | TITLE: Identification Of Outliers In Oxazolines AND Oxazoles High Dimension
Molecular Descriptor Dataset Using Principal Component Outlier Detection
Algorithm And Comparative Numerical Study Of Other Robust Estimators
ABSTRACT: From the past decade outlier detection has been in use. Detection of outliers
is an emerging topic and has robust applications in medical sciences and
pharmaceutical sciences. Outlier detection is used to detect anomalous
behaviour of data. Typical problems in Bioinformatics can be addressed by
outlier detection. A computationally fast method for detecting outliers is
presented that is particularly effective in high dimensions. The PrCmpOut
algorithm makes use of simple properties of principal components to detect outliers in the
transformed space, leading to significant computational advantages for high
dimensional data. This procedure requires considerably less computational time
than existing methods for outlier detection. The properties of this estimator
(Outlier error rate (FN), Non-Outlier error rate(FP) and computational costs)
are analyzed and compared with those of other robust estimators described in
the literature through simulation studies. Numerical evidence based on the
Oxazolines and Oxazoles molecular descriptor dataset shows that the proposed method
performs well in a variety of situations of practical interest. It is thus a
valuable companion to the existing outlier detection methods.
| no_new_dataset | 0.940844 |
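The principal-component route to outlier detection mentioned above can be illustrated with a short sketch: project the data onto a few leading components, compute a standardised score distance per observation, and flag the largest distances. The synthetic data, number of components and empirical cut-off are assumptions; the PrCmpOut algorithm's exact scoring and thresholding are not claimed here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40))            # inliers in 40 dimensions
X[:10] += 6.0                                 # first 10 rows shifted: outliers

pca = PCA(n_components=5)
T = pca.fit_transform(X - X.mean(axis=0))     # scores in the transformed space

# score distance: Mahalanobis-like distance using the component variances
sd = np.sqrt(np.sum(T**2 / pca.explained_variance_, axis=1))
cutoff = np.quantile(sd, 0.975)               # simple empirical cut-off (a chi-square
flagged = np.where(sd > cutoff)[0]            # quantile is common in practice)
print("flagged rows:", flagged[:15])
```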
1312.3388 | Tianlin Shi | Tianlin Shi and Jun Zhu | Online Bayesian Passive-Aggressive Learning | 10 Pages. ICML 2014, Beijing, China | null | null | null | cs.LG | http://creativecommons.org/licenses/by/3.0/ | Online Passive-Aggressive (PA) learning is an effective framework for
performing max-margin online learning. But the deterministic formulation and
estimated single large-margin model could limit its capability in discovering
descriptive structures underlying complex data. This paper presents online
Bayesian Passive-Aggressive (BayesPA) learning, which subsumes the online PA
and extends naturally to incorporate latent variables and perform nonparametric
Bayesian inference, thus providing great flexibility for explorative analysis.
We apply BayesPA to topic modeling and derive efficient online learning
algorithms for max-margin topic models. We further develop nonparametric
methods to resolve the number of topics. Experimental results on real datasets
show that our approaches significantly improve time efficiency while
maintaining comparable results with the batch counterparts.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2013 02:46:07 GMT"
}
] | 2013-12-13T00:00:00 | [
[
"Shi",
"Tianlin",
""
],
[
"Zhu",
"Jun",
""
]
] | TITLE: Online Bayesian Passive-Aggressive Learning
ABSTRACT: Online Passive-Aggressive (PA) learning is an effective framework for
performing max-margin online learning. But the deterministic formulation and
estimated single large-margin model could limit its capability in discovering
descriptive structures underlying complex data. This paper presents online
Bayesian Passive-Aggressive (BayesPA) learning, which subsumes the online PA
and extends naturally to incorporate latent variables and perform nonparametric
Bayesian inference, thus providing great flexibility for explorative analysis.
We apply BayesPA to topic modeling and derive efficient online learning
algorithms for max-margin topic models. We further develop nonparametric
methods to resolve the number of topics. Experimental results on real datasets
show that our approaches significantly improve time efficiency while
maintaining comparable results with the batch counterparts.
| no_new_dataset | 0.947721 |
1310.4342 | Kiran Sree Pokkuluri Prof | Pokkuluri Kiran Sree, Inampudi Ramesh Babuhor, SSSN Usha Devi N3 | An Extensive Report on Cellular Automata Based Artificial Immune System
for Strengthening Automated Protein Prediction | arXiv admin note: text overlap with arXiv:0801.4312 by other authors | Advances in Biomedical Engineering Research (ABER) Volume 1 Issue
3, September 2013 | null | null | cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Immune System (AIS-MACA) a novel computational intelligence
technique, can be used for strengthening the automated protein prediction
system with more adaptability and by incorporating more parallelism into the
system. Most of the existing approaches are sequential, classify the input into
four major classes, and are designed for similar sequences. AIS-MACA
is designed to identify ten classes from the sequences that share twilight zone
similarity and identity with the training sequences with mixed and hybrid
variations. This method also predicts three states (helix, strand, and coil)
for the secondary structure. Our comprehensive design considers 10 feature
selection methods and 4 classifiers to develop MACA (Multiple Attractor
Cellular Automata) based classifiers that are built for each of the ten
classes. We have tested the proposed classifier on twilight-zone and
1-high-similarity benchmark datasets; comparison with over three dozen modern
competing predictors shows that AIS-MACA provides the best overall accuracy,
ranging between 80% and 89.8% depending on the dataset.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2013 12:14:48 GMT"
}
] | 2013-12-12T00:00:00 | [
[
"Sree",
"Pokkuluri Kiran",
""
],
[
"Babuhor",
"Inampudi Ramesh",
""
],
[
"N3",
"SSSN Usha Devi",
""
]
] | TITLE: An Extensive Report on Cellular Automata Based Artificial Immune System
for Strengthening Automated Protein Prediction
ABSTRACT: Artificial Immune System (AIS-MACA) a novel computational intelligence
technique, can be used for strengthening the automated protein prediction
system with more adaptability and by incorporating more parallelism into the
system. Most of the existing approaches are sequential, classify the input into
four major classes, and are designed for similar sequences. AIS-MACA
is designed to identify ten classes from the sequences that share twilight zone
similarity and identity with the training sequences with mixed and hybrid
variations. This method also predicts three states (helix, strand, and coil)
for the secondary structure. Our comprehensive design considers 10 feature
selection methods and 4 classifiers to develop MACA (Multiple Attractor
Cellular Automata) based classifiers that are built for each of the ten
classes. We have tested the proposed classifier on twilight-zone and
1-high-similarity benchmark datasets; comparison with over three dozen modern
competing predictors shows that AIS-MACA provides the best overall accuracy,
ranging between 80% and 89.8% depending on the dataset.
| no_new_dataset | 0.95018 |
1312.3020 | Huasha Zhao Mr | Huasha Zhao, John Canny | Sparse Allreduce: Efficient Scalable Communication for Power-Law Data | null | null | null | null | cs.DC cs.AI cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many large datasets exhibit power-law statistics: The web graph, social
networks, text data, click through data etc. Their adjacency graphs are termed
natural graphs, and are known to be difficult to partition. As a consequence
most distributed algorithms on these graphs are communication intensive. Many
algorithms on natural graphs involve an Allreduce: a sum or average of
partitioned data which is then shared back to the cluster nodes. Examples
include PageRank, spectral partitioning, and many machine learning algorithms
including regression, factor (topic) models, and clustering. In this paper we
describe an efficient and scalable Allreduce primitive for power-law data. We
point out scaling problems with existing butterfly and round-robin networks for
Sparse Allreduce, and show that a hybrid approach improves on both.
Furthermore, we show that Sparse Allreduce stages should be nested instead of
cascaded (as in the dense case). And that the optimum throughput Allreduce
network should be a butterfly of heterogeneous degree where degree decreases
with depth into the network. Finally, a simple replication scheme is introduced
to deal with node failures. We present experiments showing significant
improvements over existing systems such as PowerGraph and Hadoop.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2013 02:33:45 GMT"
}
] | 2013-12-12T00:00:00 | [
[
"Zhao",
"Huasha",
""
],
[
"Canny",
"John",
""
]
] | TITLE: Sparse Allreduce: Efficient Scalable Communication for Power-Law Data
ABSTRACT: Many large datasets exhibit power-law statistics: The web graph, social
networks, text data, click through data etc. Their adjacency graphs are termed
natural graphs, and are known to be difficult to partition. As a consequence
most distributed algorithms on these graphs are communication intensive. Many
algorithms on natural graphs involve an Allreduce: a sum or average of
partitioned data which is then shared back to the cluster nodes. Examples
include PageRank, spectral partitioning, and many machine learning algorithms
including regression, factor (topic) models, and clustering. In this paper we
describe an efficient and scalable Allreduce primitive for power-law data. We
point out scaling problems with existing butterfly and round-robin networks for
Sparse Allreduce, and show that a hybrid approach improves on both.
Furthermore, we show that Sparse Allreduce stages should be nested instead of
cascaded (as in the dense case). And that the optimum throughput Allreduce
network should be a butterfly of heterogeneous degree where degree decreases
with depth into the network. Finally, a simple replication scheme is introduced
to deal with node failures. We present experiments showing significant
improvements over existing systems such as PowerGraph and Hadoop.
| no_new_dataset | 0.950686 |
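To fix ideas on what an Allreduce computes over sparse data, the toy sketch below sums per-node sparse vectors index-wise and shares the total back to every node; the butterfly and nested topologies that the record above is actually about concern how this exchange is scheduled across machines and are not modelled here.

```python
from collections import Counter

# Each "node" holds a sparse partial result as an {index: value} dict.
node_vectors = [
    {0: 1.0, 5: 2.0},          # node 0
    {5: 1.0, 9: 4.0},          # node 1
    {0: 3.0, 2: 1.0, 9: 1.0},  # node 2
]

total = Counter()
for vec in node_vectors:        # "reduce": sum sparse vectors index-wise
    total.update(vec)

result_on_each_node = [dict(total) for _ in node_vectors]   # "broadcast" back
print(result_on_each_node[0])   # {0: 4.0, 5: 3.0, 9: 5.0, 2: 1.0}
```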
1312.3062 | Jingdong Wang | Jingdong Wang, Jing Wang, Gang Zeng, Rui Gan, Shipeng Li, Baining Guo | Fast Neighborhood Graph Search using Cartesian Concatenation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new data structure for approximate nearest
neighbor search. This structure augments the neighborhood graph with a bridge
graph. We propose to exploit Cartesian concatenation to produce a large set of
vectors, called bridge vectors, from several small sets of subvectors. Each
bridge vector is connected with a few reference vectors near to it, forming a
bridge graph. Our approach finds nearest neighbors by simultaneously traversing
the neighborhood graph and the bridge graph in the best-first strategy. The
success of our approach stems from two factors: the exact nearest neighbor
search over a large number of bridge vectors can be done quickly, and the
reference vectors connected to a bridge (reference) vector near the query are
also likely to be near the query. Experimental results on searching over large
scale datasets (SIFT, GIST and HOG) show that our approach outperforms
state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The
combination of our approach with the IVFADC system also shows superior
performance over the BIGANN dataset of $1$ billion SIFT features compared with
the best previously published result.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2013 08:02:29 GMT"
}
] | 2013-12-12T00:00:00 | [
[
"Wang",
"Jingdong",
""
],
[
"Wang",
"Jing",
""
],
[
"Zeng",
"Gang",
""
],
[
"Gan",
"Rui",
""
],
[
"Li",
"Shipeng",
""
],
[
"Guo",
"Baining",
""
]
] | TITLE: Fast Neighborhood Graph Search using Cartesian Concatenation
ABSTRACT: In this paper, we propose a new data structure for approximate nearest
neighbor search. This structure augments the neighborhood graph with a bridge
graph. We propose to exploit Cartesian concatenation to produce a large set of
vectors, called bridge vectors, from several small sets of subvectors. Each
bridge vector is connected with a few reference vectors near to it, forming a
bridge graph. Our approach finds nearest neighbors by simultaneously traversing
the neighborhood graph and the bridge graph in the best-first strategy. The
success of our approach stems from two factors: the exact nearest neighbor
search over a large number of bridge vectors can be done quickly, and the
reference vectors connected to a bridge (reference) vector near the query are
also likely to be near the query. Experimental results on searching over large
scale datasets (SIFT, GIST and HOG) show that our approach outperforms
state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The
combination of our approach with the IVFADC system also shows superior
performance over the BIGANN dataset of $1$ billion SIFT features compared with
the best previously published result.
| no_new_dataset | 0.948251 |
1303.6609 | Jagan Sankaranarayanan | Jeff LeFevre, Jagan Sankaranarayanan, Hakan Hacigumus, Junichi
Tatemura, Neoklis Polyzotis, Michael J. Carey | Exploiting Opportunistic Physical Design in Large-scale Data Analytics | 15 pages | null | null | null | cs.DB cs.DC cs.DS | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Large-scale systems, such as MapReduce and Hadoop, perform aggressive
materialization of intermediate job results in order to support fault
tolerance. When jobs correspond to exploratory queries submitted by data
analysts, these materializations yield a large set of materialized views that
typically capture common computation among successive queries from the same
analyst, or even across queries of different analysts who test similar
hypotheses. We propose to treat these views as an opportunistic physical design
and use them for the purpose of query optimization. We develop a novel
query-rewrite algorithm that addresses the two main challenges in this context:
how to search the large space of rewrites, and how to reason about views that
contain UDFs (a common feature in large-scale data analytics). The algorithm,
which provably finds the minimum-cost rewrite, is inspired by nearest-neighbor
searches in non-metric spaces. We present an extensive experimental study on
real-world datasets with a prototype data-analytics system based on Hive. The
results demonstrate that our approach can result in dramatic performance
improvements on complex data-analysis queries, reducing total execution time by
an average of 61% and up to two orders of magnitude.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2013 19:08:55 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Dec 2013 17:35:09 GMT"
}
] | 2013-12-11T00:00:00 | [
[
"LeFevre",
"Jeff",
""
],
[
"Sankaranarayanan",
"Jagan",
""
],
[
"Hacigumus",
"Hakan",
""
],
[
"Tatemura",
"Junichi",
""
],
[
"Polyzotis",
"Neoklis",
""
],
[
"Carey",
"Michael J.",
""
]
] | TITLE: Exploiting Opportunistic Physical Design in Large-scale Data Analytics
ABSTRACT: Large-scale systems, such as MapReduce and Hadoop, perform aggressive
materialization of intermediate job results in order to support fault
tolerance. When jobs correspond to exploratory queries submitted by data
analysts, these materializations yield a large set of materialized views that
typically capture common computation among successive queries from the same
analyst, or even across queries of different analysts who test similar
hypotheses. We propose to treat these views as an opportunistic physical design
and use them for the purpose of query optimization. We develop a novel
query-rewrite algorithm that addresses the two main challenges in this context:
how to search the large space of rewrites, and how to reason about views that
contain UDFs (a common feature in large-scale data analytics). The algorithm,
which provably finds the minimum-cost rewrite, is inspired by nearest-neighbor
searches in non-metric spaces. We present an extensive experimental study on
real-world datasets with a prototype data-analytics system based on Hive. The
results demonstrate that our approach can result in dramatic performance
improvements on complex data-analysis queries, reducing total execution time by
an average of 61% and up to two orders of magnitude.
| no_new_dataset | 0.942454 |
1312.2632 | Yongcai Wang | Yongcai Wang, Haoran Feng, Xiao Qi | SEED: Public Energy and Environment Dataset for Optimizing HVAC
Operation in Subway Stations | 5 pages, 14 figures | null | null | null | cs.SY | http://creativecommons.org/licenses/by/3.0/ | For sustainability and energy saving, the problem to optimize the control of
heating, ventilating, and air-conditioning (HVAC) systems has attracted great
attention, but analyzing the signatures of thermal environments and HVAC
systems and evaluating the optimization policies have been inefficient and
inconvenient due to the lack of a public dataset. In
this paper, we present the Subway station Energy and Environment Dataset
(SEED), which was collected from a line of Beijing subway stations, providing
minute-resolution data regarding the environment dynamics (temperature,
humidity, CO2, etc.), the working states and energy consumption of the HVAC
systems (ventilators, refrigerators, pumps), and hour-resolution data of
passenger flows. We describe the sensor deployments and the HVAC systems for
data collection and for environment control, and also present an initial
investigation of the energy disaggregation of the HVAC system, the signatures of the thermal
load, cooling supply, and the passenger flow using the dataset.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 00:29:04 GMT"
}
] | 2013-12-11T00:00:00 | [
[
"Wang",
"Yongcai",
""
],
[
"Feng",
"Haoran",
""
],
[
"Qi",
"Xiao",
""
]
] | TITLE: SEED: Public Energy and Environment Dataset for Optimizing HVAC
Operation in Subway Stations
ABSTRACT: For sustainability and energy saving, the problem to optimize the control of
heating, ventilating, and air-conditioning (HVAC) systems has attracted great
attention, but analyzing the signatures of thermal environments and HVAC
systems and evaluating the optimization policies have been inefficient and
inconvenient due to the lack of a public dataset. In
this paper, we present the Subway station Energy and Environment Dataset
(SEED), which was collected from a line of Beijing subway stations, providing
minute-resolution data regarding the environment dynamics (temperature,
humidity, CO2, etc.), the working states and energy consumption of the HVAC
systems (ventilators, refrigerators, pumps), and hour-resolution data of
passenger flows. We describe the sensor deployments and the HVAC systems for
data collection and for environment control, and also present an initial
investigation of the energy disaggregation of the HVAC system, the signatures of the thermal
load, cooling supply, and the passenger flow using the dataset.
| new_dataset | 0.965964 |
1312.2137 | Dimitri Palaz | Dimitri Palaz, Ronan Collobert, Mathew Magimai.-Doss | End-to-end Phoneme Sequence Recognition using Convolutional Neural
Networks | NIPS Deep Learning Workshop, 2013 | null | null | null | cs.LG cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most phoneme recognition state-of-the-art systems rely on a classical neural
network classifiers, fed with highly tuned features, such as MFCC or PLP
features. Recent advances in ``deep learning'' approaches questioned such
systems, but while some attempts were made with simpler features such as
spectrograms, state-of-the-art systems still rely on MFCCs. This might be
viewed as a kind of failure from deep learning approaches, which are often
claimed to have the ability to train with raw signals, alleviating the need of
hand-crafted features. In this paper, we investigate a convolutional neural
network approach for raw speech signals. While convolutional architectures got
tremendous success in computer vision and text processing, they seem to have
been set aside in recent years in the speech processing field. We show
that it is possible to learn an end-to-end phoneme sequence classifier system
directly from the raw signal, with performance on the TIMIT and WSJ datasets
similar to that of existing systems based on MFCC, questioning the need for complex
hand-crafted features on large datasets.
| [
{
"version": "v1",
"created": "Sat, 7 Dec 2013 19:55:02 GMT"
}
] | 2013-12-10T00:00:00 | [
[
"Palaz",
"Dimitri",
""
],
[
"Collobert",
"Ronan",
""
],
[
"-Doss",
"Mathew Magimai.",
""
]
] | TITLE: End-to-end Phoneme Sequence Recognition using Convolutional Neural
Networks
ABSTRACT: Most phoneme recognition state-of-the-art systems rely on a classical neural
network classifiers, fed with highly tuned features, such as MFCC or PLP
features. Recent advances in ``deep learning'' approaches questioned such
systems, but while some attempts were made with simpler features such as
spectrograms, state-of-the-art systems still rely on MFCCs. This might be
viewed as a kind of failure from deep learning approaches, which are often
claimed to have the ability to train with raw signals, alleviating the need of
hand-crafted features. In this paper, we investigate a convolutional neural
network approach for raw speech signals. While convolutional architectures got
tremendous success in computer vision and text processing, they seem to have
been set aside in recent years in the speech processing field. We show
that it is possible to learn an end-to-end phoneme sequence classifier system
directly from the raw signal, with performance on the TIMIT and WSJ datasets
similar to that of existing systems based on MFCC, questioning the need for complex
hand-crafted features on large datasets.
| no_new_dataset | 0.948442 |
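A minimal 1-D convolutional network over raw waveform samples, in the spirit of the record above, can be sketched in PyTorch as follows; the layer sizes, 250 ms input window and 39 phoneme classes are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class RawSpeechCNN(nn.Module):
    def __init__(self, n_classes=39):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=250, stride=10),  # learns filterbank-like kernels
            nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Conv1d(32, 64, kernel_size=7),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                        # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, wav):                  # wav: (batch, 1, n_samples)
        h = self.features(wav).squeeze(-1)   # (batch, 64)
        return self.classifier(h)            # per-window phoneme scores

model = RawSpeechCNN()
window = torch.randn(8, 1, 4000)             # 8 windows of 250 ms at 16 kHz
print(model(window).shape)                   # torch.Size([8, 39])
```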
1312.2237 | Sugata Sanyal | Mustafa H.Hajeer, Alka Singh, Dipankar Dasgupta, Sugata Sanyal | Clustering online social network communities using genetic algorithms | 7 pages, 9 figures, 2 tables | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To analyze the activities in an Online Social network (OSN), we introduce the
concept of "Node of Attraction" (NoA) which represents the most active node in
a network community. This NoA is identified as the origin/initiator of a
post/communication which attracted other nodes and formed a cluster at any
point in time. In this research, a genetic algorithm (GA) is used as a data
mining method where the main objective is to determine clusters of network
communities in a given OSN dataset. This approach is efficient in handling
different type of discussion topics in our studied OSN - comments, emails, chat
expressions, etc. and can form clusters according to one or more topics. We
believe that this work can be useful in finding the source of information
spread in the network. The paper reports some results of experiments with
real-world data on this GA-based clustering of online interactions and
demonstrates the performance of the proposed approach.
| [
{
"version": "v1",
"created": "Sun, 8 Dec 2013 17:37:24 GMT"
}
] | 2013-12-10T00:00:00 | [
[
"Hajeer",
"Mustafa H.",
""
],
[
"Singh",
"Alka",
""
],
[
"Dasgupta",
"Dipankar",
""
],
[
"Sanyal",
"Sugata",
""
]
] | TITLE: Clustering online social network communities using genetic algorithms
ABSTRACT: To analyze the activities in an Online Social network (OSN), we introduce the
concept of "Node of Attraction" (NoA) which represents the most active node in
a network community. This NoA is identified as the origin/initiator of a
post/communication which attracted other nodes and formed a cluster at any
point in time. In this research, a genetic algorithm (GA) is used as a data
mining method where the main objective is to determine clusters of network
communities in a given OSN dataset. This approach is efficient in handling
different type of discussion topics in our studied OSN - comments, emails, chat
expressions, etc. and can form clusters according to one or more topics. We
believe that this work can be useful in finding the source of information
spread in the network. The paper reports some results of experiments with
real-world data on this GA-based clustering of online interactions and
demonstrates the performance of the proposed approach.
| no_new_dataset | 0.94743 |
1312.2362 | Maciej Jagielski | Maciej Jagielski and Ryszard Kutner | Modelling the income distribution in the European Union: An application
for the initial analysis of the recent worldwide financial crisis | null | null | null | null | q-fin.GN physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By using methods of statistical physics, we focus on the quantitative
analysis of the economic income data descending from different databases. To
explain our approach, we introduce the necessary theoretical background, the
extended Yakovenko et al. (EY) model. This model gives an analytical
description of the annual household incomes of all society classes in the
European Union (i.e., the low-, medium-, and high-income ones) by a single
unified formula based on unified formalism. We show that the EY model is very
useful for the analyses of various income datasets, in particular, in the case
of a smooth matching of two different datasets. The completed database which we
have constructed using this matching emphasises the significance of the
high-income society class in the analysis of all household incomes. For
instance, the Pareto exponent, which characterises this class, defines the Zipf
law having an exponent much lower than the one characterising the medium-income
society class. This result makes it possible to clearly distinguish between
medium- and high-income society classes. By using our approach, we found that
the high-income society class almost disappeared in 2009, which defines this
year as the most difficult for the EU. To our surprise, this is a contrast with
2008, considered the first year of a worldwide financial crisis, when the
status of the high-income society class was similar to that of 2010. This,
perhaps, emphasises that the crisis in the EU was postponed by about one year
in comparison with the United States.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2013 10:04:35 GMT"
}
] | 2013-12-10T00:00:00 | [
[
"Jagielski",
"Maciej",
""
],
[
"Kutner",
"Ryszard",
""
]
] | TITLE: Modelling the income distribution in the European Union: An application
for the initial analysis of the recent worldwide financial crisis
ABSTRACT: By using methods of statistical physics, we focus on the quantitative
analysis of the economic income data descending from different databases. To
explain our approach, we introduce the necessary theoretical background, the
extended Yakovenko et al. (EY) model. This model gives an analytical
description of the annual household incomes of all society classes in the
European Union (i.e., the low-, medium-, and high-income ones) by a single
unified formula based on unified formalism. We show that the EY model is very
useful for the analyses of various income datasets, in particular, in the case
of a smooth matching of two different datasets. The completed database which we
have constructed using this matching emphasises the significance of the
high-income society class in the analysis of all household incomes. For
instance, the Pareto exponent, which characterises this class, defines the Zipf
law having an exponent much lower than the one characterising the medium-income
society class. This result makes it possible to clearly distinguish between
medium- and high-income society classes. By using our approach, we found that
the high-income society class almost disappeared in 2009, which defines this
year as the most difficult for the EU. To our surprise, this is a contrast with
2008, considered the first year of a worldwide financial crisis, when the
status of the high-income society class was similar to that of 2010. This,
perhaps, emphasises that the crisis in the EU was postponed by about one year
in comparison with the United States.
| no_new_dataset | 0.939192 |
1312.2451 | Sarwat Nizamani | Sarwat Nizamani, Nasrullah Memon | CEAI: CCM based Email Authorship Identification Model | null | Egyptian Informatics Journal,Volume 14, Issue 3, November 2013 | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper we present a model for email authorship identification (EAI) by
employing a Cluster-based Classification (CCM) technique. Traditionally,
stylometric features have been successfully employed in various authorship
analysis tasks; we extend the traditional feature-set to include some more
interesting and effective features for email authorship identification (e.g.
the last punctuation mark used in an email, the tendency of an author to use
capitalization at the start of an email, or the punctuation after a greeting or
farewell). We also included Info Gain feature selection based content features.
It is observed that the use of such features in the authorship identification
process has a positive impact on the accuracy of the authorship identification
task. We performed experiments to justify our arguments and compared the
results with other base line models. Experimental results reveal that the
proposed CCM-based email authorship identification model, along with the
proposed feature set, outperforms the state-of-the-art support vector machine
(SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The
proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25
authors, and 81% for 50 authors, respectively on Enron dataset, while 89.5%
accuracy has been achieved on authors' constructed real email dataset. The
results on Enron dataset have been achieved on quite a large number of authors
as compared to the models proposed by Iqbal et al. [1, 2].
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2013 18:25:15 GMT"
}
] | 2013-12-10T00:00:00 | [
[
"Nizamani",
"Sarwat",
""
],
[
"Memon",
"Nasrullah",
""
]
] | TITLE: CEAI: CCM based Email Authorship Identification Model
ABSTRACT: In this paper we present a model for email authorship identification (EAI) by
employing a Cluster-based Classification (CCM) technique. Traditionally,
stylometric features have been successfully employed in various authorship
analysis tasks; we extend the traditional feature-set to include some more
interesting and effective features for email authorship identification (e.g.
the last punctuation mark used in an email, the tendency of an author to use
capitalization at the start of an email, or the punctuation after a greeting or
farewell). We also included Info Gain feature selection based content features.
It is observed that the use of such features in the authorship identification
process has a positive impact on the accuracy of the authorship identification
task. We performed experiments to justify our arguments and compared the
results with other base line models. Experimental results reveal that the
proposed CCM-based email authorship identification model, along with the
proposed feature set, outperforms the state-of-the-art support vector machine
(SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The
proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25
authors, and 81% for 50 authors, respectively on Enron dataset, while 89.5%
accuracy has been achieved on authors' constructed real email dataset. The
results on Enron dataset have been achieved on quite a large number of authors
as compared to the models proposed by Iqbal et al. [1, 2].
| no_new_dataset | 0.948585 |
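A few of the stylometric markers mentioned in the record above (last punctuation mark used, capitalisation at the start of a message, presence of a greeting) are easy to extract; the sketch below shows an illustrative subset only, not the full CEAI feature set.

```python
import re

def style_features(email_text: str) -> dict:
    """Extract a small, illustrative set of email style markers."""
    text = email_text.strip()
    punct = re.findall(r"[.!?,;:]", text)
    return {
        "last_punctuation": punct[-1] if punct else "",
        "starts_capitalised": bool(text) and text[0].isupper(),
        "greeting_present": bool(re.match(r"(hi|hello|dear)\b", text, re.I)),
        "num_exclamations": text.count("!"),
    }

print(style_features("Hi John, please send the report today!"))
# {'last_punctuation': '!', 'starts_capitalised': True,
#  'greeting_present': True, 'num_exclamations': 1}
```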
1304.6108 | Nicolas Charon | Nicolas Charon, Alain Trouv\'e | The varifold representation of non-oriented shapes for diffeomorphic
registration | 33 pages, 10 figures | SIAM Journal on Imaging Sciences, 2013, Vol. 6, No. 4 : pp.
2547-2580 | 10.1137/130918885 | null | cs.CG cs.CV math.DG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of orientation that naturally arises
when representing shapes like curves or surfaces as currents. In the field of
computational anatomy, the framework of currents has indeed proved very
efficient to model a wide variety of shapes. However, in such approaches,
orientation of shapes is a fundamental issue that can lead to several drawbacks
in treating certain kind of datasets. More specifically, problems occur with
structures like acute pikes because of canceling effects of currents or with
data that consists in many disconnected pieces like fiber bundles for which
currents require a consistent orientation of all pieces. As a promising
alternative to currents, varifolds, introduced in the context of geometric
measure theory by F. Almgren, allow the representation of any non-oriented
manifold (more generally any non-oriented rectifiable set). In particular, we
explain how varifolds can encode numerically non-oriented objects both from the
discrete and continuous point of view. We show various ways to build a Hilbert
space structure on the set of varifolds based on the theory of reproducing
kernels. We show that, unlike the currents' setting, these metrics are
consistent with shape volume (theorem 4.1) and we derive a formula for the
variation of metric with respect to the shape (theorem 4.2). Finally, we
propose a generalization to non-oriented shapes of registration algorithms in
the context of Large Deformations Metric Mapping (LDDMM), which we detail with
a few examples in the last part of the paper.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2013 21:03:45 GMT"
}
] | 2013-12-09T00:00:00 | [
[
"Charon",
"Nicolas",
""
],
[
"Trouvé",
"Alain",
""
]
] | TITLE: The varifold representation of non-oriented shapes for diffeomorphic
registration
ABSTRACT: In this paper, we address the problem of orientation that naturally arises
when representing shapes like curves or surfaces as currents. In the field of
computational anatomy, the framework of currents has indeed proved very
efficient to model a wide variety of shapes. However, in such approaches,
orientation of shapes is a fundamental issue that can lead to several drawbacks
in treating certain kinds of datasets. More specifically, problems occur with
structures like acute pikes because of canceling effects of currents or with
data that consists of many disconnected pieces like fiber bundles for which
currents require a consistent orientation of all pieces. As a promising
alternative to currents, varifolds, introduced in the context of geometric
measure theory by F. Almgren, allow the representation of any non-oriented
manifold (more generally any non-oriented rectifiable set). In particular, we
explain how varifolds can encode numerically non-oriented objects both from the
discrete and continuous point of view. We show various ways to build a Hilbert
space structure on the set of varifolds based on the theory of reproducing
kernels. We show that, unlike the currents' setting, these metrics are
consistent with shape volume (theorem 4.1) and we derive a formula for the
variation of metric with respect to the shape (theorem 4.2). Finally, we
propose a generalization to non-oriented shapes of registration algorithms in
the context of Large Deformations Metric Mapping (LDDMM), which we detail with
a few examples in the last part of the paper.
| no_new_dataset | 0.942876 |
1305.6489 | Junzhou Zhao | Junzhou Zhao and John C. S. Lui and Don Towsley and Xiaohong Guan and
Pinghui Wang | Social Sensor Placement in Large Scale Networks: A Graph Sampling
Perspective | 10 pages, 8 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensor placement for the purpose of detecting/tracking news outbreak and
preventing rumor spreading is a challenging problem in a large scale online
social network (OSN). This problem is a kind of subset selection problem:
choosing a small set of items from a large population so as to maximize some
prespecified set function. However, it is known to be NP-complete. Existing
heuristics are very costly especially for modern OSNs which usually contain
hundreds of millions of users. This paper aims to design methods to find
\emph{good solutions} that can well trade off efficiency and accuracy. We first
show that it is possible to obtain a high quality solution with a probabilistic
guarantee from a "{\em candidate set}" of the underlying social network. By
exploring this candidate set, one can increase the efficiency of placing social
sensors. We also present how this candidate set can be obtained using "{\em
graph sampling}", which has an advantage over previous methods of not requiring
the prior knowledge of the complete network topology. Experiments carried out
on two real datasets demonstrate not only the accuracy and efficiency of our
approach, but also effectiveness in detecting and predicting news outbreak.
| [
{
"version": "v1",
"created": "Tue, 28 May 2013 13:49:00 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Dec 2013 09:19:57 GMT"
}
] | 2013-12-09T00:00:00 | [
[
"Zhao",
"Junzhou",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Towsley",
"Don",
""
],
[
"Guan",
"Xiaohong",
""
],
[
"Wang",
"Pinghui",
""
]
] | TITLE: Social Sensor Placement in Large Scale Networks: A Graph Sampling
Perspective
ABSTRACT: Sensor placement for the purpose of detecting/tracking news outbreak and
preventing rumor spreading is a challenging problem in a large scale online
social network (OSN). This problem is a kind of subset selection problem:
choosing a small set of items from a large population so as to maximize some
prespecified set function. However, it is known to be NP-complete. Existing
heuristics are very costly especially for modern OSNs which usually contain
hundreds of millions of users. This paper aims to design methods to find
\emph{good solutions} that can well trade off efficiency and accuracy. We first
show that it is possible to obtain a high quality solution with a probabilistic
guarantee from a "{\em candidate set}" of the underlying social network. By
exploring this candidate set, one can increase the efficiency of placing social
sensors. We also present how this candidate set can be obtained using "{\em
graph sampling}", which has an advantage over previous methods of not requiring
the prior knowledge of the complete network topology. Experiments carried out
on two real datasets demonstrate not only the accuracy and efficiency of our
approach, but also effectiveness in detecting and predicting news outbreak.
| no_new_dataset | 0.949902 |
1312.1685 | Suranjan Ganguly | Arindam Kar, Debotosh Bhattacharjee, Dipak Kumar Basu, Mita Nasipuri,
Mahantapas Kundu | Human Face Recognition using Gabor based Kernel Entropy Component
Analysis | October, 2012. International Journal of Computer Vision and Image
Processing : IGI Global(USA), 2012. arXiv admin note: substantial text
overlap with arXiv:1312.1517, arXiv:1312.1520 | null | null | null | cs.CV | http://creativecommons.org/licenses/publicdomain/ | In this paper, we present a novel Gabor wavelet based Kernel Entropy
Component Analysis (KECA) method by integrating the Gabor wavelet
transformation (GWT) of facial images with the KECA method for enhanced face
recognition performance. Firstly, from the Gabor wavelet transformed images the
most important discriminative desirable facial features characterized by
spatial frequency, spatial locality and orientation selectivity to cope with
the variations due to illumination and facial expression changes were derived.
After that, KECA, which relates to the Renyi entropy, is extended to include the
cosine kernel function. The KECA with the cosine kernels is then applied to the
extracted most important discriminating feature vectors of facial images to
obtain only those real kernel ECA eigenvectors that are associated with
eigenvalues having positive entropy contribution. Finally, these real KECA
features are used for image classification using the L1, L2 distance measures;
the Mahalanobis distance measure and the cosine similarity measure. The
feasibility of the Gabor based KECA method with the cosine kernel has been
successfully tested on both frontal and pose-angled face recognition, using
datasets from the ORL, FRAV2D and the FERET database.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2013 12:36:11 GMT"
}
] | 2013-12-09T00:00:00 | [
[
"Kar",
"Arindam",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Basu",
"Dipak Kumar",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Kundu",
"Mahantapas",
""
]
] | TITLE: Human Face Recognition using Gabor based Kernel Entropy Component
Analysis
ABSTRACT: In this paper, we present a novel Gabor wavelet based Kernel Entropy
Component Analysis (KECA) method by integrating the Gabor wavelet
transformation (GWT) of facial images with the KECA method for enhanced face
recognition performance. Firstly, from the Gabor wavelet transformed images the
most important discriminative desirable facial features characterized by
spatial frequency, spatial locality and orientation selectivity to cope with
the variations due to illumination and facial expression changes were derived.
After that, KECA, which relates to the Renyi entropy, is extended to include the
cosine kernel function. The KECA with the cosine kernels is then applied to the
extracted most important discriminating feature vectors of facial images to
obtain only those real kernel ECA eigenvectors that are associated with
eigenvalues having positive entropy contribution. Finally, these real KECA
features are used for image classification using the L1, L2 distance measures;
the Mahalanobis distance measure and the cosine similarity measure. The
feasibility of the Gabor based KECA method with the cosine kernel has been
successfully tested on both frontal and pose-angled face recognition, using
datasets from the ORL, FRAV2D and the FERET database.
| no_new_dataset | 0.948537 |
1312.1752 | Muhammad Marwan Muhammad Fuad | Muhammad Marwan Muhammad Fuad | Particle Swarm Optimization of Information-Content Weighting of Symbolic
Aggregate Approximation | The 8th International Conference on Advanced Data Mining and
Applications (ADMA 2012) | null | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bio-inspired optimization algorithms have been gaining more popularity
recently. One of the most important of these algorithms is particle swarm
optimization (PSO). PSO is based on the collective intelligence of a swarm of
particles. Each particle explores a part of the search space looking for the
optimal position and adjusts its position according to two factors; the first
is its own experience and the second is the collective experience of the whole
swarm. PSO has been successfully used to solve many optimization problems. In
this work we use PSO to improve the performance of a well-known representation
method of time series data which is the symbolic aggregate approximation (SAX).
As with other time series representation methods, SAX results in loss of
information when applied to represent time series. In this paper we use PSO to
propose a new minimum distance WMD for SAX to remedy this problem. Unlike the
original minimum distance, the new distance sets different weights to different
segments of the time series according to their information content. This
weighted minimum distance enhances the performance of SAX as we show through
experiments using different time series datasets.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2013 02:22:59 GMT"
}
] | 2013-12-09T00:00:00 | [
[
"Fuad",
"Muhammad Marwan Muhammad",
""
]
] | TITLE: Particle Swarm Optimization of Information-Content Weighting of Symbolic
Aggregate Approximation
ABSTRACT: Bio-inspired optimization algorithms have been gaining more popularity
recently. One of the most important of these algorithms is particle swarm
optimization (PSO). PSO is based on the collective intelligence of a swarm of
particles. Each particle explores a part of the search space looking for the
optimal position and adjusts its position according to two factors; the first
is its own experience and the second is the collective experience of the whole
swarm. PSO has been successfully used to solve many optimization problems. In
this work we use PSO to improve the performance of a well-known representation
method of time series data which is the symbolic aggregate approximation (SAX).
As with other time series representation methods, SAX results in loss of
information when applied to represent time series. In this paper we use PSO to
propose a new minimum distance WMD for SAX to remedy this problem. Unlike the
original minimum distance, the new distance sets different weights to different
segments of the time series according to their information content. This
weighted minimum distance enhances the performance of SAX as we show through
experiments using different time series datasets.
| no_new_dataset | 0.950041 |
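The abstract above describes the PSO update in words: each particle adjusts its position from its own experience and from the swarm's collective experience. A minimal sketch of that update is given below; the objective function, swarm size, and coefficient values are illustrative assumptions rather than parameters from the paper, and in the paper's setting the objective would instead score a candidate vector of segment weights for the SAX distance.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                       # each particle's own best position
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]      # swarm-wide best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + cognitive (own experience) + social (collective experience)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Example: minimize the sphere function as a stand-in objective.
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)), dim=5)
print(best_x, best_f)
```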
1312.1897 | Toni Gruetze | Toni Gruetze, Gjergji Kasneci, Zhe Zuo, Felix Naumann | Bootstrapped Grouping of Results to Ambiguous Person Name Queries | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some of the main ranking features of today's search engines reflect result
popularity and are based on ranking models, such as PageRank, implicit feedback
aggregation, and more. While such features yield satisfactory results for a
wide range of queries, they aggravate the problem of search for ambiguous
entities: Searching for a person yields satisfactory results only if the person
we are looking for is represented by a high-ranked Web page and all required
information are contained in this page. Otherwise, the user has to either
reformulate/refine the query or manually inspect low-ranked results to find the
person in question. A possible approach to solve this problem is to cluster the
results, so that each cluster represents one of the persons occurring in the
answer set. However, clustering search results has proven to be a difficult
endeavor by itself, where the clusters are typically of moderate quality.
A wealth of useful information about persons occurs in Web 2.0 platforms,
such as LinkedIn, Wikipedia, Facebook, etc. Being human-generated, the
information on these platforms is clean, focused, and already disambiguated. We
show that when searching for ambiguous person names the information from such
platforms can be bootstrapped to group the results according to the individuals
occurring in them. We have evaluated our methods on a hand-labeled dataset of
around 5,000 Web pages retrieved from Google queries on 50 ambiguous person
names.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2013 15:50:54 GMT"
}
] | 2013-12-09T00:00:00 | [
[
"Gruetze",
"Toni",
""
],
[
"Kasneci",
"Gjergji",
""
],
[
"Zuo",
"Zhe",
""
],
[
"Naumann",
"Felix",
""
]
] | TITLE: Bootstrapped Grouping of Results to Ambiguous Person Name Queries
ABSTRACT: Some of the main ranking features of today's search engines reflect result
popularity and are based on ranking models, such as PageRank, implicit feedback
aggregation, and more. While such features yield satisfactory results for a
wide range of queries, they aggravate the problem of search for ambiguous
entities: Searching for a person yields satisfactory results only if the person
we are looking for is represented by a high-ranked Web page and all required
information are contained in this page. Otherwise, the user has to either
reformulate/refine the query or manually inspect low-ranked results to find the
person in question. A possible approach to solve this problem is to cluster the
results, so that each cluster represents one of the persons occurring in the
answer set. However, clustering search results has proven to be a difficult
endeavor by itself, where the clusters are typically of moderate quality.
A wealth of useful information about persons occurs in Web 2.0 platforms,
such as LinkedIn, Wikipedia, Facebook, etc. Being human-generated, the
information on these platforms is clean, focused, and already disambiguated. We
show that when searching for ambiguous person names the information from such
platforms can be bootstrapped to group the results according to the individuals
occurring in them. We have evaluated our methods on a hand-labeled dataset of
around 5,000 Web pages retrieved from Google queries on 50 ambiguous person
names.
| no_new_dataset | 0.812198 |
1312.1121 | Jan Palczewski | Anna Palczewska and Jan Palczewski and Richard Marchese Robinson and
Daniel Neagu | Interpreting random forest classification models using a feature
contribution method | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model interpretation is one of the key aspects of the model evaluation
process. The explanation of the relationship between model variables and
outputs is relatively easy for statistical models, such as linear regressions,
thanks to the availability of model parameters and their statistical
significance. For "black box" models, such as random forest, this information
is hidden inside the model structure. This work presents an approach for
computing feature contributions for random forest classification models. It
allows for the determination of the influence of each variable on the model
prediction for an individual instance. By analysing feature contributions for a
training dataset, the most significant variables can be determined and their
typical contribution towards predictions made for individual classes, i.e.,
class-specific feature contribution "patterns", are discovered. These patterns
represent a standard behaviour of the model and allow for an additional
assessment of the model reliability for new data. Interpretation of feature
contributions for two UCI benchmark datasets shows the potential of the
proposed methodology. The robustness of results is demonstrated through an
extensive analysis of feature contributions calculated for a large number of
generated random forest models.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2013 11:57:53 GMT"
}
] | 2013-12-05T00:00:00 | [
[
"Palczewska",
"Anna",
""
],
[
"Palczewski",
"Jan",
""
],
[
"Robinson",
"Richard Marchese",
""
],
[
"Neagu",
"Daniel",
""
]
] | TITLE: Interpreting random forest classification models using a feature
contribution method
ABSTRACT: Model interpretation is one of the key aspects of the model evaluation
process. The explanation of the relationship between model variables and
outputs is relatively easy for statistical models, such as linear regressions,
thanks to the availability of model parameters and their statistical
significance. For "black box" models, such as random forest, this information
is hidden inside the model structure. This work presents an approach for
computing feature contributions for random forest classification models. It
allows for the determination of the influence of each variable on the model
prediction for an individual instance. By analysing feature contributions for a
training dataset, the most significant variables can be determined and their
typical contribution towards predictions made for individual classes, i.e.,
class-specific feature contribution "patterns", are discovered. These patterns
represent a standard behaviour of the model and allow for an additional
assessment of the model reliability for new data. Interpretation of feature
contributions for two UCI benchmark datasets shows the potential of the
proposed methodology. The robustness of results is demonstrated through an
extensive analysis of feature contributions calculated for a large number of
generated random forest models.
| no_new_dataset | 0.947962 |
1310.0036 | Saptarshi Bhattacharjee | Saptarshi Bhattacharjee, S Arunkumar, Samir Kumar Bandyopadhyay | Personal Identification from Lip-Print Features using a Statistical
Model | 5 pages, 7 images, Published with International Journal of Computer
Applications (IJCA) | null | 10.5120/8817-2801 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach towards identification of human beings
from the statistical analysis of their lip prints. Lip features are extracted
by studying the spatial orientations of the grooves present in lip prints of
individuals using standard edge detection techniques. Horizontal, vertical and
diagonal groove features are analysed using connected-component analysis to
generate the region-specific edge datasets. Comparison between test and
reference sample datasets against a threshold value to define a match yields
satisfactory results. FAR, FRR and ROC metrics have been used to gauge the
performance of the algorithm for real-world deployment in unimodal and
multimodal biometric verification systems.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2013 20:12:02 GMT"
}
] | 2013-12-04T00:00:00 | [
[
"Bhattacharjee",
"Saptarshi",
""
],
[
"Arunkumar",
"S",
""
],
[
"Bandyopadhyay",
"Samir Kumar",
""
]
] | TITLE: Personal Identification from Lip-Print Features using a Statistical
Model
ABSTRACT: This paper presents a novel approach towards identification of human beings
from the statistical analysis of their lip prints. Lip features are extracted
by studying the spatial orientations of the grooves present in lip prints of
individuals using standard edge detection techniques. Horizontal, vertical and
diagonal groove features are analysed using connected-component analysis to
generate the region-specific edge datasets. Comparison between test and
reference sample datasets against a threshold value to define a match yields
satisfactory results. FAR, FRR and ROC metrics have been used to gauge the
performance of the algorithm for real-world deployment in unimodal and
multimodal biometric verification systems.
| no_new_dataset | 0.948346 |
1311.7071 | Zitao Liu | Zitao Liu and Milos Hauskrecht | Sparse Linear Dynamical System with Its Application in Multivariate
Clinical Time Series | Appear in Neural Information Processing Systems(NIPS) Workshop on
Machine Learning for Clinical Data Analysis and Healthcare 2013 | null | null | null | cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by/3.0/ | Linear Dynamical System (LDS) is an elegant mathematical framework for
modeling and learning multivariate time series. However, in general, it is
difficult to set the dimension of its hidden state space. A small number of
hidden states may not be able to model the complexities of a time series, while
a large number of hidden states can lead to overfitting. In this paper, we
study methods that impose an $\ell_1$ regularization on the transition matrix
of an LDS model to alleviate the problem of choosing the optimal number of
hidden states. We incorporate a generalized gradient descent method into the
Maximum a Posteriori (MAP) framework and use Expectation Maximization (EM) to
iteratively achieve sparsity on the transition matrix of an LDS model. We show
that our Sparse Linear Dynamical System (SLDS) improves the predictive
performance when compared to ordinary LDS on a multivariate clinical time
series dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2013 18:58:07 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Dec 2013 20:08:28 GMT"
}
] | 2013-12-04T00:00:00 | [
[
"Liu",
"Zitao",
""
],
[
"Hauskrecht",
"Milos",
""
]
] | TITLE: Sparse Linear Dynamical System with Its Application in Multivariate
Clinical Time Series
ABSTRACT: Linear Dynamical System (LDS) is an elegant mathematical framework for
modeling and learning multivariate time series. However, in general, it is
difficult to set the dimension of its hidden state space. A small number of
hidden states may not be able to model the complexities of a time series, while
a large number of hidden states can lead to overfitting. In this paper, we
study methods that impose an $\ell_1$ regularization on the transition matrix
of an LDS model to alleviate the problem of choosing the optimal number of
hidden states. We incorporate a generalized gradient descent method into the
Maximum a Posteriori (MAP) framework and use Expectation Maximization (EM) to
iteratively achieve sparsity on the transition matrix of an LDS model. We show
that our Sparse Linear Dynamical System (SLDS) improves the predictive
performance when compared to ordinary LDS on a multivariate clinical time
series dataset.
| no_new_dataset | 0.94801 |
1312.0860 | Zhiting Hu | Zhiting Hu, Chong Wang, Junjie Yao, Eric Xing, Hongzhi Yin, Bin Cui | Community Specific Temporal Topic Discovery from Social Media | 12 pages, 16 figures, submitted to VLDB 2014 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studying temporal dynamics of topics in social media is very useful to
understand online user behaviors. Most of the existing work on this subject
usually monitors the global trends, ignoring variation among communities. Since
users from different communities tend to have varying tastes and interests,
capturing community-level temporal change can improve the understanding and
management of social content. Additionally, it can further facilitate the
applications such as community discovery, temporal prediction and online
marketing. However, this kind of extraction becomes challenging due to the
intricate interactions between community and topic, and intractable
computational complexity.
In this paper, we take a unified solution towards the community-level topic
dynamic extraction. A probabilistic model, CosTot (Community Specific
Topics-over-Time) is proposed to uncover the hidden topics and communities, as
well as capture community-specific temporal dynamics. Specifically, CosTot
considers text, time, and network information simultaneously, and well
discovers the interactions between community and topic over time. We then
discuss the approximate inference implementation to enable scalable computation
of model parameters, especially for large social data. Based on this, the
application layer support for multi-scale temporal analysis and community
exploration is also investigated.
We conduct extensive experimental studies on a large real microblog dataset,
and demonstrate the superiority of proposed model on tasks of time stamp
prediction, link prediction and topic perplexity.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2013 15:42:19 GMT"
}
] | 2013-12-04T00:00:00 | [
[
"Hu",
"Zhiting",
""
],
[
"Wang",
"Chong",
""
],
[
"Yao",
"Junjie",
""
],
[
"Xing",
"Eric",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Cui",
"Bin",
""
]
] | TITLE: Community Specific Temporal Topic Discovery from Social Media
ABSTRACT: Studying temporal dynamics of topics in social media is very useful to
understand online user behaviors. Most of the existing work on this subject
usually monitors the global trends, ignoring variation among communities. Since
users from different communities tend to have varying tastes and interests,
capturing community-level temporal change can improve the understanding and
management of social content. Additionally, it can further facilitate the
applications such as community discovery, temporal prediction and online
marketing. However, this kind of extraction becomes challenging due to the
intricate interactions between community and topic, and intractable
computational complexity.
In this paper, we take a unified solution towards the community-level topic
dynamic extraction. A probabilistic model, CosTot (Community Specific
Topics-over-Time) is proposed to uncover the hidden topics and communities, as
well as capture community-specific temporal dynamics. Specifically, CosTot
considers text, time, and network information simultaneously, and well
discovers the interactions between community and topic over time. We then
discuss the approximate inference implementation to enable scalable computation
of model parameters, especially for large social data. Based on this, the
application layer support for multi-scale temporal analysis and community
exploration is also investigated.
We conduct extensive experimental studies on a large real microblog dataset,
and demonstrate the superiority of proposed model on tasks of time stamp
prediction, link prediction and topic perplexity.
| no_new_dataset | 0.948965 |
1308.0371 | Benjamin Graham | Benjamin Graham | Sparse arrays of signatures for online character recognition | 10 pages, 2 figures | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In mathematics the signature of a path is a collection of iterated integrals,
commonly used for solving differential equations. We show that the path
signature, used as a set of features for consumption by a convolutional neural
network (CNN), improves the accuracy of online character recognition---that is
the task of reading characters represented as a collection of paths. Using
datasets of letters, numbers, Assamese and Chinese characters, we show that the
first, second, and even the third iterated integrals contain useful information
for consumption by a CNN.
On the CASIA-OLHWDB1.1 3755 Chinese character dataset, our approach gave a
test error of 3.58%, compared with 5.61% for a traditional CNN [Ciresan et
al.]. A CNN trained on the CASIA-OLHWDB1.0-1.2 datasets won the ICDAR2013
Online Isolated Chinese Character recognition competition.
Computationally, we have developed a sparse CNN implementation that makes it
practical to train CNNs with many layers of max-pooling. Extending the MNIST
dataset by translations, our sparse CNN gets a test error of 0.31%.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 2013 22:29:41 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Dec 2013 17:17:06 GMT"
}
] | 2013-12-03T00:00:00 | [
[
"Graham",
"Benjamin",
""
]
] | TITLE: Sparse arrays of signatures for online character recognition
ABSTRACT: In mathematics the signature of a path is a collection of iterated integrals,
commonly used for solving differential equations. We show that the path
signature, used as a set of features for consumption by a convolutional neural
network (CNN), improves the accuracy of online character recognition---that is
the task of reading characters represented as a collection of paths. Using
datasets of letters, numbers, Assamese and Chinese characters, we show that the
first, second, and even the third iterated integrals contain useful information
for consumption by a CNN.
On the CASIA-OLHWDB1.1 3755 Chinese character dataset, our approach gave a
test error of 3.58%, compared with 5.61% for a traditional CNN [Ciresan et
al.]. A CNN trained on the CASIA-OLHWDB1.0-1.2 datasets won the ICDAR2013
Online Isolated Chinese Character recognition competition.
Computationally, we have developed a sparse CNN implementation that makes it
practical to train CNNs with many layers of max-pooling. Extending the MNIST
dataset by translations, our sparse CNN gets a test error of 0.31%.
| no_new_dataset | 0.945801 |
1309.6204 | Lei Jin | Lei Jin, Xuelian Long, James Joshi | A Friendship Privacy Attack on Friends and 2-Distant Neighbors in Social
Networks | This paper has been withdrawn by the authors | null | null | null | cs.SI cs.CR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an undirected social graph, a friendship link involves two users and the
friendship is visible in both the users' friend lists. Such a dual visibility
of the friendship may raise privacy threats. This is because both users can
separately control the visibility of a friendship link to other users and their
privacy policies for the link may not be consistent. Even if one of them
conceals the link from a third user, the third user may find such a friendship
link from another user's friend list. In addition, as most users allow their
friends to see their friend lists in most social network systems, an adversary
can exploit the inconsistent policies to launch privacy attacks to identify and
infer many of a targeted user's friends. In this paper, we propose, analyze and
evaluate such an attack which is called Friendship Identification and Inference
(FII) attack. In a FII attack scenario, we assume that an adversary can only
see his friend list and the friend lists of his friends who do not hide the
friend lists from him. Then, a FII attack contains two attack steps: 1) friend
identification and 2) friend inference. In the friend identification step, the
adversary tries to identify a target's friends based on his friend list and
those of his friends. In the friend inference step, the adversary attempts to
infer the target's friends by using the proposed random walk with restart
approach. We present experimental results using three real social network
datasets and show that FII attacks are generally efficient and effective when
adversaries and targets are friends or 2-distant neighbors. We also
comprehensively analyze the attack results in order to find what values of
parameters and network features could promote FII attacks. Currently, most
popular social network systems with an undirected friendship graph, such as
Facebook, LinkedIn and Foursquare, are susceptible to FII attacks.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2013 15:13:13 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Dec 2013 14:00:52 GMT"
}
] | 2013-12-03T00:00:00 | [
[
"Jin",
"Lei",
""
],
[
"Long",
"Xuelian",
""
],
[
"Joshi",
"James",
""
]
] | TITLE: A Friendship Privacy Attack on Friends and 2-Distant Neighbors in Social
Networks
ABSTRACT: In an undirected social graph, a friendship link involves two users and the
friendship is visible in both the users' friend lists. Such a dual visibility
of the friendship may raise privacy threats. This is because both users can
separately control the visibility of a friendship link to other users and their
privacy policies for the link may not be consistent. Even if one of them
conceals the link from a third user, the third user may find such a friendship
link from another user's friend list. In addition, as most users allow their
friends to see their friend lists in most social network systems, an adversary
can exploit the inconsistent policies to launch privacy attacks to identify and
infer many of a targeted user's friends. In this paper, we propose, analyze and
evaluate such an attack which is called Friendship Identification and Inference
(FII) attack. In a FII attack scenario, we assume that an adversary can only
see his friend list and the friend lists of his friends who do not hide the
friend lists from him. Then, a FII attack contains two attack steps: 1) friend
identification and 2) friend inference. In the friend identification step, the
adversary tries to identify a target's friends based on his friend list and
those of his friends. In the friend inference step, the adversary attempts to
infer the target's friends by using the proposed random walk with restart
approach. We present experimental results using three real social network
datasets and show that FII attacks are generally efficient and effective when
adversaries and targets are friends or 2-distant neighbors. We also
comprehensively analyze the attack results in order to find what values of
parameters and network features could promote FII attacks. Currently, most
popular social network systems with an undirected friendship graph, such as
Facebook, LinkedIn and Foursquare, are susceptible to FII attacks.
| no_new_dataset | 0.940353 |
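The friend-inference step above relies on random walk with restart. The sketch below shows a generic RWR scoring routine over an undirected friendship graph; the toy adjacency matrix, restart probability, and convergence tolerance are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def random_walk_with_restart(adj, seed_node, restart_prob=0.15,
                             tol=1e-9, max_iter=1000):
    """Stationary RWR scores of all nodes with respect to one seed node.

    adj: symmetric 0/1 adjacency matrix of an undirected friendship graph.
    Higher scores indicate nodes more strongly connected to the seed, which
    is the intuition behind ranking candidate friends of a target.
    """
    adj = np.asarray(adj, dtype=float)
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0            # avoid division by zero for isolated nodes
    trans = adj / col_sums                   # column-stochastic transition matrix
    n = adj.shape[0]
    restart = np.zeros(n)
    restart[seed_node] = 1.0
    scores = restart.copy()
    for _ in range(max_iter):
        new_scores = (1 - restart_prob) * trans @ scores + restart_prob * restart
        if np.abs(new_scores - scores).sum() < tol:
            scores = new_scores
            break
        scores = new_scores
    return scores

# Toy 5-node friendship graph; score all nodes relative to node 0.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]])
print(random_walk_with_restart(A, seed_node=0))
```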
1310.4366 | Dmitry Ignatov | Elena Nenova and Dmitry I. Ignatov and Andrey V. Konstantinov | An FCA-based Boolean Matrix Factorisation for Collaborative Filtering | http://ceur-ws.org/Vol-977/paper8.pdf | In: C. Carpineto, A. Napoli, S.O. Kuznetsov (eds), FCA Meets IR
2013, Vol. 977, CEUR Workshop Proceeding, 2013. P. 57-73 | null | null | cs.IR cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new approach for Collaborative Filtering which is based on
Boolean Matrix Factorisation (BMF) and Formal Concept Analysis. In a series of
experiments on real data (Movielens dataset) we compare the approach with the
SVD- and NMF-based algorithms in terms of Mean Average Error (MAE). One of the
experimental consequences is that it is enough to have binary-scaled rating
data to obtain almost the same quality in terms of MAE with BMF as with the
SVD-based algorithm in the case of non-scaled data.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2013 13:17:37 GMT"
}
] | 2013-12-03T00:00:00 | [
[
"Nenova",
"Elena",
""
],
[
"Ignatov",
"Dmitry I.",
""
],
[
"Konstantinov",
"Andrey V.",
""
]
] | TITLE: An FCA-based Boolean Matrix Factorisation for Collaborative Filtering
ABSTRACT: We propose a new approach for Collaborative Filtering which is based on
Boolean Matrix Factorisation (BMF) and Formal Concept Analysis. In a series of
experiments on real data (Movielens dataset) we compare the approach with the
SVD- and NMF-based algorithms in terms of Mean Average Error (MAE). One of the
experimental consequences is that it is enough to have binary-scaled rating
data to obtain almost the same quality in terms of MAE with BMF as with the
SVD-based algorithm in the case of non-scaled data.
| no_new_dataset | 0.951414 |
1312.0182 | Haocheng Wu | Haocheng Wu, Yunhua Hu, Hang Li, Enhong Chen | Query Segmentation for Relevance Ranking in Web Search | 25 pages, 3 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we try to answer the question of how to improve the
state-of-the-art methods for relevance ranking in web search by query
segmentation. Here, by query segmentation it is meant to segment the input
query into segments, typically natural language phrases, so that the
performance of relevance ranking in search is increased. We propose employing
the re-ranking approach in query segmentation, which first employs a generative
model to create top $k$ candidates and then employs a discriminative model to
re-rank the candidates to obtain the final segmentation result. The method has
been widely utilized for structure prediction in natural language processing,
but has not been applied to query segmentation, as far as we know. Furthermore,
we propose a new method for using the result of query segmentation in relevance
ranking, which takes both the original query words and the segmented query
phrases as units of query representation. We investigate whether our method can
improve three relevance models, namely BM25, key n-gram model, and dependency
model. Our experimental results on three large scale web search datasets show
that our method can indeed significantly improve relevance ranking in all the
three cases.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2013 07:23:12 GMT"
}
] | 2013-12-03T00:00:00 | [
[
"Wu",
"Haocheng",
""
],
[
"Hu",
"Yunhua",
""
],
[
"Li",
"Hang",
""
],
[
"Chen",
"Enhong",
""
]
] | TITLE: Query Segmentation for Relevance Ranking in Web Search
ABSTRACT: In this paper, we try to answer the question of how to improve the
state-of-the-art methods for relevance ranking in web search by query
segmentation. Here, by query segmentation it is meant to segment the input
query into segments, typically natural language phrases, so that the
performance of relevance ranking in search is increased. We propose employing
the re-ranking approach in query segmentation, which first employs a generative
model to create top $k$ candidates and then employs a discriminative model to
re-rank the candidates to obtain the final segmentation result. The method has
been widely utilized for structure prediction in natural language processing,
but has not been applied to query segmentation, as far as we know. Furthermore,
we propose a new method for using the result of query segmentation in relevance
ranking, which takes both the original query words and the segmented query
phrases as units of query representation. We investigate whether our method can
improve three relevance models, namely BM25, key n-gram model, and dependency
model. Our experimental results on three large scale web search datasets show
that our method can indeed significantly improve relevance ranking in all the
three cases.
| no_new_dataset | 0.951459 |
1311.2901 | Rob Fergus | Matthew D Zeiler, Rob Fergus | Visualizing and Understanding Convolutional Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Convolutional Network models have recently demonstrated impressive
classification performance on the ImageNet benchmark. However there is no clear
understanding of why they perform so well, or how they might be improved. In
this paper we address both issues. We introduce a novel visualization technique
that gives insight into the function of intermediate feature layers and the
operation of the classifier. We also perform an ablation study to discover the
performance contribution from different model layers. This enables us to find
model architectures that outperform Krizhevsky \etal on the ImageNet
classification benchmark. We show our ImageNet model generalizes well to other
datasets: when the softmax classifier is retrained, it convincingly beats the
current state-of-the-art results on Caltech-101 and Caltech-256 datasets.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2013 20:02:22 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Nov 2013 01:48:56 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Nov 2013 23:04:01 GMT"
}
] | 2013-12-02T00:00:00 | [
[
"Zeiler",
"Matthew D",
""
],
[
"Fergus",
"Rob",
""
]
] | TITLE: Visualizing and Understanding Convolutional Networks
ABSTRACT: Large Convolutional Network models have recently demonstrated impressive
classification performance on the ImageNet benchmark. However there is no clear
understanding of why they perform so well, or how they might be improved. In
this paper we address both issues. We introduce a novel visualization technique
that gives insight into the function of intermediate feature layers and the
operation of the classifier. We also perform an ablation study to discover the
performance contribution from different model layers. This enables us to find
model architectures that outperform Krizhevsky \etal on the ImageNet
classification benchmark. We show our ImageNet model generalizes well to other
datasets: when the softmax classifier is retrained, it convincingly beats the
current state-of-the-art results on Caltech-101 and Caltech-256 datasets.
| no_new_dataset | 0.949435 |
1311.7215 | Alireza Rezvanian | Aylin Mousavian, Alireza Rezvanian, Mohammad Reza Meybodi | Solving Minimum Vertex Cover Problem Using Learning Automata | 5 pages, 5 figures, conference | null | null | null | cs.AI cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minimum vertex cover problem is an NP-Hard problem with the aim of finding
minimum number of vertices to cover graph. In this paper, a learning automaton
based algorithm is proposed to find minimum vertex cover in graph. In the
proposed algorithm, each vertex of graph is equipped with a learning automaton
that has two actions in the candidate or non-candidate of the corresponding
vertex cover set. Due to characteristics of learning automata, this algorithm
significantly reduces the number of covering vertices of graph. The proposed
algorithm based on learning automata iteratively minimize the candidate vertex
cover through the update its action probability. As the proposed algorithm
proceeds, a candidate solution nears to optimal solution of the minimum vertex
cover problem. In order to evaluate the proposed algorithm, several experiments
conducted on DIMACS dataset which compared to conventional methods.
Experimental results show the major superiority of the proposed algorithm over
the other methods.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2013 05:49:34 GMT"
}
] | 2013-12-02T00:00:00 | [
[
"Mousavian",
"Aylin",
""
],
[
"Rezvanian",
"Alireza",
""
],
[
"Meybodi",
"Mohammad Reza",
""
]
] | TITLE: Solving Minimum Vertex Cover Problem Using Learning Automata
ABSTRACT: The minimum vertex cover problem is an NP-hard problem with the aim of finding
the minimum number of vertices needed to cover a graph. In this paper, a learning
automaton based algorithm is proposed to find a minimum vertex cover in a graph.
In the proposed algorithm, each vertex of the graph is equipped with a learning
automaton that has two actions, corresponding to the candidate or non-candidate
status of that vertex in the cover set. Due to the characteristics of learning
automata, this algorithm significantly reduces the number of covering vertices of
the graph. The proposed algorithm, based on learning automata, iteratively
minimizes the candidate vertex cover by updating the action probabilities. As the
algorithm proceeds, a candidate solution approaches the optimal solution of the
minimum vertex cover problem. In order to evaluate the proposed algorithm, several
experiments were conducted on the DIMACS dataset and compared to conventional
methods. Experimental results show the clear superiority of the proposed algorithm
over the other methods.
| no_new_dataset | 0.947478 |
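The abstract above outlines the idea of attaching a two-action learning automaton to each vertex. The sketch below illustrates that idea with a simple linear reward-inaction update on a toy graph; the update scheme, learning rate, and iteration count are assumptions for illustration, not the paper's exact algorithm.

```python
import random

def la_vertex_cover(edges, n_vertices, iters=2000, lr=0.05, seed=0):
    """Toy learning-automata search for a small vertex cover.

    Each vertex keeps a probability of choosing the 'candidate' action.
    Configurations that form a cover no larger than the best one seen so far
    reward the actions that produced them (linear reward-inaction style).
    """
    rng = random.Random(seed)
    prob = [0.5] * n_vertices           # P(vertex plays 'candidate')
    best_cover = set(range(n_vertices)) # the full vertex set is always a cover

    def is_cover(candidate):
        return all(u in candidate or v in candidate for u, v in edges)

    for _ in range(iters):
        candidate = {v for v in range(n_vertices) if rng.random() < prob[v]}
        if is_cover(candidate) and len(candidate) <= len(best_cover):
            best_cover = set(candidate)
            # Reinforce the chosen action of every automaton.
            for v in range(n_vertices):
                if v in candidate:
                    prob[v] += lr * (1.0 - prob[v])
                else:
                    prob[v] -= lr * prob[v]
    return best_cover

# Toy graph: a 5-cycle, whose minimum vertex cover has size 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(la_vertex_cover(edges, n_vertices=5))
```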
1212.3524 | Nicolas Tremblay | Nicolas Tremblay, Alain Barrat, Cary Forest, Mark Nornberg,
Jean-Fran\c{c}ois Pinton, Pierre Borgnat | Bootstrapping under constraint for the assessment of group behavior in
human contact networks | null | Phys. Rev. E 88, 052812 (2013) | 10.1103/PhysRevE.88.052812 | null | physics.soc-ph cs.SI math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing availability of time --and space-- resolved data describing
human activities and interactions gives insights into both static and dynamic
properties of human behavior. In practice, nevertheless, real-world datasets
can often be considered as only one realisation of a particular event. This
highlights a key issue in social network analysis: the statistical significance
of estimated properties. In this context, we focus here on the assessment of
quantitative features of specific subset of nodes in empirical networks. We
present a method of statistical resampling based on bootstrapping groups of
nodes under constraints within the empirical network. The method enables us to
define acceptance intervals for various Null Hypotheses concerning relevant
properties of the subset of nodes under consideration, in order to characterize
by a statistical test its behavior as ``normal'' or not. We apply this method
to a high resolution dataset describing the face-to-face proximity of
individuals during two co-located scientific conferences. As a case study, we
show how to probe whether co-locating the two conferences succeeded in bringing
together the two corresponding groups of scientists.
| [
{
"version": "v1",
"created": "Fri, 14 Dec 2012 16:48:12 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Nov 2013 12:06:21 GMT"
}
] | 2013-11-27T00:00:00 | [
[
"Tremblay",
"Nicolas",
""
],
[
"Barrat",
"Alain",
""
],
[
"Forest",
"Cary",
""
],
[
"Nornberg",
"Mark",
""
],
[
"Pinton",
"Jean-François",
""
],
[
"Borgnat",
"Pierre",
""
]
] | TITLE: Bootstrapping under constraint for the assessment of group behavior in
human contact networks
ABSTRACT: The increasing availability of time --and space-- resolved data describing
human activities and interactions gives insights into both static and dynamic
properties of human behavior. In practice, nevertheless, real-world datasets
can often be considered as only one realisation of a particular event. This
highlights a key issue in social network analysis: the statistical significance
of estimated properties. In this context, we focus here on the assessment of
quantitative features of specific subset of nodes in empirical networks. We
present a method of statistical resampling based on bootstrapping groups of
nodes under constraints within the empirical network. The method enables us to
define acceptance intervals for various Null Hypotheses concerning relevant
properties of the subset of nodes under consideration, in order to characterize
by a statistical test its behavior as ``normal'' or not. We apply this method
to a high resolution dataset describing the face-to-face proximity of
individuals during two co-located scientific conferences. As a case study, we
show how to probe whether co-locating the two conferences succeeded in bringing
together the two corresponding groups of scientists.
| no_new_dataset | 0.934753 |
1311.4486 | Yun-Qian Miao | Yun-Qian Miao, Ahmed K. Farahat, Mohamed S. Kamel | Discriminative Density-ratio Estimation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The covariate shift is a challenging problem in supervised learning that
results from the discrepancy between the training and test distributions. An
effective approach which recently drew considerable attention in the research
community is to reweight the training samples to minimize that discrepancy.
Specifically, many methods are based on developing Density-ratio (DR) estimation
techniques that apply to both regression and classification problems. Although
these methods work well for regression problems, their performance on
classification problems is not satisfactory. This is due to a key observation
that these methods focus on matching the sample marginal distributions without
paying attention to preserving the separation between classes in the reweighted
space. In this paper, we propose a novel method for Discriminative
Density-ratio (DDR) estimation that addresses the aforementioned problem and
aims at estimating the density-ratio of joint distributions in a class-wise
manner. The proposed algorithm is an iterative procedure that alternates
between estimating the class information for the test data and estimating new
density ratio for each class. To incorporate the estimated class information of
the test data, a soft matching technique is proposed. In addition, we employ an
effective criterion which adopts mutual information as an indicator to stop the
iterative procedure while resulting in a decision boundary that lies in a
sparse region. Experiments on synthetic and benchmark datasets demonstrate the
superiority of the proposed method in terms of both accuracy and robustness.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2013 18:41:20 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Nov 2013 03:20:56 GMT"
}
] | 2013-11-27T00:00:00 | [
[
"Miao",
"Yun-Qian",
""
],
[
"Farahat",
"Ahmed K.",
""
],
[
"Kamel",
"Mohamed S.",
""
]
] | TITLE: Discriminative Density-ratio Estimation
ABSTRACT: The covariate shift is a challenging problem in supervised learning that
results from the discrepancy between the training and test distributions. An
effective approach which recently drew considerable attention in the research
community is to reweight the training samples to minimize that discrepancy.
Specifically, many methods are based on developing Density-ratio (DR) estimation
techniques that apply to both regression and classification problems. Although
these methods work well for regression problems, their performance on
classification problems is not satisfactory. This is due to a key observation
that these methods focus on matching the sample marginal distributions without
paying attention to preserving the separation between classes in the reweighted
space. In this paper, we propose a novel method for Discriminative
Density-ratio (DDR) estimation that addresses the aforementioned problem and
aims at estimating the density-ratio of joint distributions in a class-wise
manner. The proposed algorithm is an iterative procedure that alternates
between estimating the class information for the test data and estimating new
density ratio for each class. To incorporate the estimated class information of
the test data, a soft matching technique is proposed. In addition, we employ an
effective criterion which adopts mutual information as an indicator to stop the
iterative procedure while resulting in a decision boundary that lies in a
sparse region. Experiments on synthetic and benchmark datasets demonstrate the
superiority of the proposed method in terms of both accuracy and robustness.
| no_new_dataset | 0.944689 |
1311.6510 | Agata Lapedriza | Agata Lapedriza and Hamed Pirsiavash and Zoya Bylinskii and Antonio
Torralba | Are all training examples equally valuable? | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When learning a new concept, not all training examples may prove equally
useful for training: some may have higher or lower training value than others.
The goal of this paper is to bring to the attention of the vision community the
following considerations: (1) some examples are better than others for training
detectors or classifiers, and (2) in the presence of better examples, some
examples may negatively impact performance and removing them may be beneficial.
In this paper, we propose an approach for measuring the training value of an
example, and use it for ranking and greedily sorting examples. We test our
methods on different vision tasks, models, datasets and classifiers. Our
experiments show that the performance of current state-of-the-art detectors and
classifiers can be improved when training on a subset, rather than the whole
training set.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2013 22:59:24 GMT"
}
] | 2013-11-27T00:00:00 | [
[
"Lapedriza",
"Agata",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Bylinskii",
"Zoya",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Are all training examples equally valuable?
ABSTRACT: When learning a new concept, not all training examples may prove equally
useful for training: some may have higher or lower training value than others.
The goal of this paper is to bring to the attention of the vision community the
following considerations: (1) some examples are better than others for training
detectors or classifiers, and (2) in the presence of better examples, some
examples may negatively impact performance and removing them may be beneficial.
In this paper, we propose an approach for measuring the training value of an
example, and use it for ranking and greedily sorting examples. We test our
methods on different vision tasks, models, datasets and classifiers. Our
experiments show that the performance of current state-of-the-art detectors and
classifiers can be improved when training on a subset, rather than the whole
training set.
| no_new_dataset | 0.955152 |
1311.6758 | Patrick Ott | Patrick Ott and Mark Everingham and Jiri Matas | Detection of Partially Visible Objects | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An "elephant in the room" for most current object detection and localization
methods is the lack of explicit modelling of partial visibility due to
occlusion by other objects or truncation by the image boundary. Based on a
sliding window approach, we propose a detection method which explicitly models
partial visibility by treating it as a latent variable. A novel non-maximum
suppression scheme is proposed which takes into account the inferred partial
visibility of objects while providing a globally optimal solution. The method
gives more detailed scene interpretations than conventional detectors in that
we are able to identify the visible parts of an object. We report improved
average precision on the PASCAL VOC 2010 dataset compared to a baseline
detector.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2013 16:59:19 GMT"
}
] | 2013-11-27T00:00:00 | [
[
"Ott",
"Patrick",
""
],
[
"Everingham",
"Mark",
""
],
[
"Matas",
"Jiri",
""
]
] | TITLE: Detection of Partially Visible Objects
ABSTRACT: An "elephant in the room" for most current object detection and localization
methods is the lack of explicit modelling of partial visibility due to
occlusion by other objects or truncation by the image boundary. Based on a
sliding window approach, we propose a detection method which explicitly models
partial visibility by treating it as a latent variable. A novel non-maximum
suppression scheme is proposed which takes into account the inferred partial
visibility of objects while providing a globally optimal solution. The method
gives more detailed scene interpretations than conventional detectors in that
we are able to identify the visible parts of an object. We report improved
average precision on the PASCAL VOC 2010 dataset compared to a baseline
detector.
| no_new_dataset | 0.943867 |
1307.5101 | Hsiang-Fu Yu | Hsiang-Fu Yu and Prateek Jain and Purushottam Kar and Inderjit S.
Dhillon | Large-scale Multi-label Learning with Missing Labels | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multi-label classification problem has generated significant interest in
recent years. However, existing approaches do not adequately address two key
challenges: (a) the ability to tackle problems with a large number (say
millions) of labels, and (b) the ability to handle data with missing labels. In
this paper, we directly address both these problems by studying the multi-label
problem in a generic empirical risk minimization (ERM) framework. Our
framework, despite being simple, is surprisingly able to encompass several
recent label-compression based methods which can be derived as special cases of
our method. To optimize the ERM problem, we develop techniques that exploit the
structure of specific loss functions - such as the squared loss function - to
offer efficient algorithms. We further show that our learning framework admits
formal excess risk bounds even in the presence of missing labels. Our risk
bounds are tight and demonstrate better generalization performance for low-rank
promoting trace-norm regularization when compared to (rank insensitive)
Frobenius norm regularization. Finally, we present extensive empirical results
on a variety of benchmark datasets and show that our methods perform
significantly better than existing label compression based methods and can
scale up to very large datasets such as the Wikipedia dataset.
| [
{
"version": "v1",
"created": "Thu, 18 Jul 2013 23:55:55 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Oct 2013 22:33:17 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Nov 2013 16:57:43 GMT"
}
] | 2013-11-26T00:00:00 | [
[
"Yu",
"Hsiang-Fu",
""
],
[
"Jain",
"Prateek",
""
],
[
"Kar",
"Purushottam",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: Large-scale Multi-label Learning with Missing Labels
ABSTRACT: The multi-label classification problem has generated significant interest in
recent years. However, existing approaches do not adequately address two key
challenges: (a) the ability to tackle problems with a large number (say
millions) of labels, and (b) the ability to handle data with missing labels. In
this paper, we directly address both these problems by studying the multi-label
problem in a generic empirical risk minimization (ERM) framework. Our
framework, despite being simple, is surprisingly able to encompass several
recent label-compression based methods which can be derived as special cases of
our method. To optimize the ERM problem, we develop techniques that exploit the
structure of specific loss functions - such as the squared loss function - to
offer efficient algorithms. We further show that our learning framework admits
formal excess risk bounds even in the presence of missing labels. Our risk
bounds are tight and demonstrate better generalization performance for low-rank
promoting trace-norm regularization when compared to (rank insensitive)
Frobenius norm regularization. Finally, we present extensive empirical results
on a variety of benchmark datasets and show that our methods perform
significantly better than existing label compression based methods and can
scale up to very large datasets such as the Wikipedia dataset.
| no_new_dataset | 0.946001 |
1310.0509 | Isik Baris Fidaner | I\c{s}{\i}k Bar{\i}\c{s} Fidaner and Ali Taylan Cemgil | Summary Statistics for Partitionings and Feature Allocations | Accepted to NIPS 2013:
https://nips.cc/Conferences/2013/Program/event.php?ID=3763 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Infinite mixture models are commonly used for clustering. One can sample from
the posterior of mixture assignments by Monte Carlo methods or find its maximum
a posteriori solution by optimization. However, in some problems the posterior
is diffuse and it is hard to interpret the sampled partitionings. In this
paper, we introduce novel statistics based on block sizes for representing
sample sets of partitionings and feature allocations. We develop an
element-based definition of entropy to quantify segmentation among their
elements. Then we propose a simple algorithm called entropy agglomeration (EA)
to summarize and visualize this information. Experiments on various infinite
mixture posteriors as well as a feature allocation dataset demonstrate that the
proposed statistics are useful in practice.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 22:34:18 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2013 06:28:18 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Oct 2013 18:26:44 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Nov 2013 08:43:59 GMT"
}
] | 2013-11-26T00:00:00 | [
[
"Fidaner",
"Işık Barış",
""
],
[
"Cemgil",
"Ali Taylan",
""
]
] | TITLE: Summary Statistics for Partitionings and Feature Allocations
ABSTRACT: Infinite mixture models are commonly used for clustering. One can sample from
the posterior of mixture assignments by Monte Carlo methods or find its maximum
a posteriori solution by optimization. However, in some problems the posterior
is diffuse and it is hard to interpret the sampled partitionings. In this
paper, we introduce novel statistics based on block sizes for representing
sample sets of partitionings and feature allocations. We develop an
element-based definition of entropy to quantify segmentation among their
elements. Then we propose a simple algorithm called entropy agglomeration (EA)
to summarize and visualize this information. Experiments on various infinite
mixture posteriors as well as a feature allocation dataset demonstrate that the
proposed statistics are useful in practice.
| no_new_dataset | 0.94868 |
1311.5947 | Chunhua Shen | Guosheng Lin, Chunhua Shen, Anton van den Hengel, David Suter | Fast Training of Effective Multi-class Boosting Using Coordinate Descent
Optimization | Appeared in Proc. Asian Conf. Computer Vision 2012. Code can be
downloaded at http://goo.gl/WluhrQ | null | null | null | cs.CV cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel column generation based boosting method for multi-class
classification. Our multi-class boosting is formulated in a single optimization
problem as in Shen and Hao (2011). Different from most existing multi-class
boosting methods, which use the same set of weak learners for all the classes,
we train class specified weak learners (i.e., each class has a different set of
weak learners). We show that using separate weak learner sets for each class
leads to fast convergence, without introducing additional computational
overhead in the training procedure. To further make the training more efficient
and scalable, we also propose a fast coordinate descent method for solving
the optimization problem at each boosting iteration. The proposed coordinate
descent method is conceptually simple and easy to implement in that it is a
closed-form solution for each coordinate update. Experimental results on a
variety of datasets show that, compared to a range of existing multi-class
boosting methods, the proposed method has a much faster convergence rate and
better generalization performance in most cases. We also empirically show that
the proposed fast coordinate descent algorithm needs less training time than
the MultiBoost algorithm in Shen and Hao (2011).
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2013 02:30:14 GMT"
}
] | 2013-11-26T00:00:00 | [
[
"Lin",
"Guosheng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Suter",
"David",
""
]
] | TITLE: Fast Training of Effective Multi-class Boosting Using Coordinate Descent
Optimization
ABSTRACT: We present a novel column generation based boosting method for multi-class
classification. Our multi-class boosting is formulated in a single optimization
problem as in Shen and Hao (2011). Different from most existing multi-class
boosting methods, which use the same set of weak learners for all the classes,
we train class specified weak learners (i.e., each class has a different set of
weak learners). We show that using separate weak learner sets for each class
leads to fast convergence, without introducing additional computational
overhead in the training procedure. To further make the training more efficient
and scalable, we also propose a fast coordinate descent method for solving
the optimization problem at each boosting iteration. The proposed coordinate
descent method is conceptually simple and easy to implement in that it is a
closed-form solution for each coordinate update. Experimental results on a
variety of datasets show that, compared to a range of existing multi-class
boosting methods, the proposed method has a much faster convergence rate and
better generalization performance in most cases. We also empirically show that
the proposed fast coordinate descent algorithm needs less training time than
the MultiBoost algorithm in Shen and Hao (2011).
| no_new_dataset | 0.94887 |
1311.6048 | Stefano Soatto | Jingming Dong, Jonathan Balzer, Damek Davis, Joshua Hernandez, Stefano
Soatto | On the Design and Analysis of Multiple View Descriptors | null | null | null | UCLA CSD TR130024, Nov. 8, 2013 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an extension of popular descriptors based on gradient orientation
histograms (HOG, computed in a single image) to multiple views. It hinges on
interpreting HOG as a conditional density in the space of sampled images, where
the effects of nuisance factors such as viewpoint and illumination are
marginalized. However, such marginalization is performed with respect to a very
coarse approximation of the underlying distribution. Our extension leverages
the fact that multiple views of the same scene allow separating intrinsic from
nuisance variability, and thus afford better marginalization of the latter. The
result is a descriptor that has the same complexity of single-view HOG, and can
be compared in the same manner, but exploits multiple views to better trade off
insensitivity to nuisance variability with specificity to intrinsic
variability. We also introduce a novel multi-view wide-baseline matching
dataset, consisting of a mixture of real and synthetic objects with ground
truthed camera motion and dense three-dimensional geometry.
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2013 20:38:50 GMT"
}
] | 2013-11-26T00:00:00 | [
[
"Dong",
"Jingming",
""
],
[
"Balzer",
"Jonathan",
""
],
[
"Davis",
"Damek",
""
],
[
"Hernandez",
"Joshua",
""
],
[
"Soatto",
"Stefano",
""
]
] | TITLE: On the Design and Analysis of Multiple View Descriptors
ABSTRACT: We propose an extension of popular descriptors based on gradient orientation
histograms (HOG, computed in a single image) to multiple views. It hinges on
interpreting HOG as a conditional density in the space of sampled images, where
the effects of nuisance factors such as viewpoint and illumination are
marginalized. However, such marginalization is performed with respect to a very
coarse approximation of the underlying distribution. Our extension leverages
the fact that multiple views of the same scene allow separating intrinsic from
nuisance variability, and thus afford better marginalization of the latter. The
result is a descriptor that has the same complexity of single-view HOG, and can
be compared in the same manner, but exploits multiple views to better trade off
insensitivity to nuisance variability with specificity to intrinsic
variability. We also introduce a novel multi-view wide-baseline matching
dataset, consisting of a mixture of real and synthetic objects with ground
truthed camera motion and dense three-dimensional geometry.
| new_dataset | 0.627352 |
1311.6334 | Charanpal Dhanjal | Charanpal Dhanjal (LTCI), St\'ephan Cl\'emen\c{c}on (LTCI) | Learning Reputation in an Authorship Network | null | null | null | null | cs.SI cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of searching for experts in a given academic field is hugely
important in both industry and academia. We study exactly this issue with
respect to a database of authors and their publications. The idea is to use
Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform
topic modelling in order to find authors who have worked in a query field. We
then construct a coauthorship graph and motivate the use of influence
maximisation and a variety of graph centrality measures to obtain a ranked list
of experts. The ranked lists are further improved using a Markov Chain-based
rank aggregation approach. The complete method is readily scalable to large
datasets. To demonstrate the efficacy of the approach we report on an extensive
set of computational simulations using the Arnetminer dataset. An improvement
in mean average precision is demonstrated over the baseline case of simply
using the order of authors found by the topic models.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2013 15:25:28 GMT"
}
] | 2013-11-26T00:00:00 | [
[
"Dhanjal",
"Charanpal",
"",
"LTCI"
],
[
"Clémençon",
"Stéphan",
"",
"LTCI"
]
] | TITLE: Learning Reputation in an Authorship Network
ABSTRACT: The problem of searching for experts in a given academic field is hugely
important in both industry and academia. We study exactly this issue with
respect to a database of authors and their publications. The idea is to use
Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform
topic modelling in order to find authors who have worked in a query field. We
then construct a coauthorship graph and motivate the use of influence
maximisation and a variety of graph centrality measures to obtain a ranked list
of experts. The ranked lists are further improved using a Markov Chain-based
rank aggregation approach. The complete method is readily scalable to large
datasets. To demonstrate the efficacy of the approach we report on an extensive
set of computational simulations using the Arnetminer dataset. An improvement
in mean average precision is demonstrated over the baseline case of simply
using the order of authors found by the topic models.
| no_new_dataset | 0.945147 |
1311.5636 | Dimitrios Athanasakis Mr | Dimitrios Athanasakis, John Shawe-Taylor, Delmiro Fernandez-Reyes | Learning Non-Linear Feature Maps | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection plays a pivotal role in learning, particularly in areas
where parsimonious features can provide insight into the underlying process,
such as biology. Recent approaches for non-linear feature selection employing
greedy optimisation of Centred Kernel Target Alignment(KTA), while exhibiting
strong results in terms of generalisation accuracy and sparsity, can become
computationally prohibitive for high-dimensional datasets. We propose randSel,
a randomised feature selection algorithm, with attractive scaling properties.
Our theoretical analysis of randSel provides strong probabilistic guarantees
for the correct identification of relevant features. Experimental results on
real and artificial data show that the method successfully identifies
effective features, performing better than a number of competitive approaches.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2013 01:49:26 GMT"
}
] | 2013-11-25T00:00:00 | [
[
"Athanasakis",
"Dimitrios",
""
],
[
"Shawe-Taylor",
"John",
""
],
[
"Fernandez-Reyes",
"Delmiro",
""
]
] | TITLE: Learning Non-Linear Feature Maps
ABSTRACT: Feature selection plays a pivotal role in learning, particularly in areas
where parsimonious features can provide insight into the underlying process,
such as biology. Recent approaches for non-linear feature selection employing
greedy optimisation of Centred Kernel Target Alignment(KTA), while exhibiting
strong results in terms of generalisation accuracy and sparsity, can become
computationally prohibitive for high-dimensional datasets. We propose randSel,
a randomised feature selection algorithm, with attractive scaling properties.
Our theoretical analysis of randSel provides strong probabilistic guarantees
for the correct identification of relevant features. Experimental results on
real and artificial data show that the method successfully identifies
effective features, performing better than a number of competitive approaches.
| no_new_dataset | 0.945399 |
1311.5763 | Peter Sarlin | Peter Sarlin | Automated and Weighted Self-Organizing Time Maps | Preprint submitted to a journal | null | null | null | cs.NE cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes schemes for automated and weighted Self-Organizing Time
Maps (SOTMs). The SOTM provides means for a visual approach to evolutionary
clustering, which aims at producing a sequence of clustering solutions. This
task we denote as visual dynamic clustering. The implication of an automated
SOTM is not only a data-driven parametrization of the SOTM, but also the
feature of adjusting the training to the characteristics of the data at each
time step. The aim of the weighted SOTM is to improve learning from more
trustworthy or important data with an instance-varying weight. The schemes for
automated and weighted SOTMs are illustrated on two real-world datasets: (i)
country-level risk indicators to measure the evolution of global imbalances,
and (ii) credit applicant data to measure the evolution of firm-level credit
risks.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2013 14:34:38 GMT"
}
] | 2013-11-25T00:00:00 | [
[
"Sarlin",
"Peter",
""
]
] | TITLE: Automated and Weighted Self-Organizing Time Maps
ABSTRACT: This paper proposes schemes for automated and weighted Self-Organizing Time
Maps (SOTMs). The SOTM provides means for a visual approach to evolutionary
clustering, which aims at producing a sequence of clustering solutions. This
task we denote as visual dynamic clustering. The implication of an automated
SOTM is not only a data-driven parametrization of the SOTM, but also the
feature of adjusting the training to the characteristics of the data at each
time step. The aim of the weighted SOTM is to improve learning from more
trustworthy or important data with an instance-varying weight. The schemes for
automated and weighted SOTMs are illustrated on two real-world datasets: (i)
country-level risk indicators to measure the evolution of global imbalances,
and (ii) credit applicant data to measure the evolution of firm-level credit
risks.
| no_new_dataset | 0.948917 |
1311.5816 | Bryan Knowles | Bryan Knowles and Rong Yang | Sinkless: A Preliminary Study of Stress Propagation in Group Project
Social Networks using a Variant of the Abelian Sandpile Model | 11 pages, 8 figures | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We perform social network analysis on 53 students split over three semesters
and 13 groups, using conventional measures like eigenvector centrality,
betweenness centrality, and degree centrality, as well as defining a variant of
the Abelian Sandpile Model (ASM) with the intention of modeling stress
propagation in the college classroom. We correlate the results of these
analyses with group project grades received; due to a small or poorly collected
dataset, we are unable to conclude that any of these network measures relates
to those grades. However, we are successful in using this dataset to define a
discrete, recursive, and more generalized variant of the ASM. Keywords: Abelian
Sandpile Model, College Grades, Self-organized Criticality, Sinkless Sandpile
Model, Social Network Analysis, Stress Propagation
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2013 17:08:43 GMT"
}
] | 2013-11-25T00:00:00 | [
[
"Knowles",
"Bryan",
""
],
[
"Yang",
"Rong",
""
]
] | TITLE: Sinkless: A Preliminary Study of Stress Propagation in Group Project
Social Networks using a Variant of the Abelian Sandpile Model
ABSTRACT: We perform social network analysis on 53 students split over three semesters
and 13 groups, using conventional measures like eigenvector centrality,
betweenness centrality, and degree centrality, as well as defining a variant of
the Abelian Sandpile Model (ASM) with the intention of modeling stress
propagation in the college classroom. We correlate the results of these
analyses with group project grades received; due to a small or poorly collected
dataset, we are unable to conclude that any of these network measures relates
to those grades. However, we are successful in using this dataset to define a
discrete, recursive, and more generalized variant of the ASM. Keywords: Abelian
Sandpile Model, College Grades, Self-organized Criticality, Sinkless Sandpile
Model, Social Network Analysis, Stress Propagation
| no_new_dataset | 0.936518 |
1311.5290 | Odemir Bruno PhD | Wesley Nunes Gon\c{c}alves, Bruno Brandoli Machado, Odemir Martinez
Bruno | Texture descriptor combining fractal dimension and artificial crawlers | 12 pages 9 figures. Paper in press: Physica A: Statistical Mechanics
and its Applications | null | 10.1016/j.physa.2013.10.011 | null | physics.data-an cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Texture is an important visual attribute used to describe images. There are
many methods available for texture analysis. However, they do not capture the
richness of details of the image surface. In this paper, we propose a new method
to describe textures using the artificial crawler model. This model assumes
that each agent can interact with the environment and each other. Since this
swarm system alone does not achieve a good discrimination, we developed a new
method to increase the discriminatory power of artificial crawlers, together
with the fractal dimension theory. Here, we estimated the fractal dimension by
the Bouligand-Minkowski method due to its precision in quantifying structural
properties of images. We validate our method on two texture datasets and the
experimental results reveal that our method leads to highly discriminative
textural features. The results indicate that our method can be used in
different texture applications.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2013 01:51:03 GMT"
}
] | 2013-11-22T00:00:00 | [
[
"Gonçalves",
"Wesley Nunes",
""
],
[
"Machado",
"Bruno Brandoli",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Texture descriptor combining fractal dimension and artificial crawlers
ABSTRACT: Texture is an important visual attribute used to describe images. There are
many methods available for texture analysis. However, they do not capture the
richness of details of the image surface. In this paper, we propose a new method
to describe textures using the artificial crawler model. This model assumes
that each agent can interact with the environment and each other. Since this
swarm system alone does not achieve a good discrimination, we developed a new
method to increase the discriminatory power of artificial crawlers, together
with the fractal dimension theory. Here, we estimated the fractal dimension by
the Bouligand-Minkowski method due to its precision in quantifying structural
properties of images. We validate our method on two texture datasets and the
experimental results reveal that our method leads to highly discriminative
textural features. The results indicate that our method can be used in
different texture applications.
| no_new_dataset | 0.953966 |
1108.2283 | Federico Schl\"uter | Federico Schl\"uter | A survey on independence-based Markov networks learning | 35 pages, 1 figure | Schl\"uter, F. (2011). A survey on independence-based Markov
networks learning. Artificial Intelligence Review, 1-25 | 10.1007/s10462-012-9346-y | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work reports the most relevant technical aspects in the problem of
learning the \emph{Markov network structure} from data. Such a problem has
become increasingly important in machine learning and in many other application
fields. Markov networks, together with Bayesian networks, are probabilistic
graphical models, a widely used formalism for handling probability
distributions in intelligent systems. Learning graphical models from data has
been extensively applied in the case of Bayesian networks, but for Markov
networks it is not tractable in practice. However, this situation is changing
with time, given the exponential growth of computing capacity, the plethora of
available digital data, and the research on new learning technologies. This
work focuses on a technology called
independence-based learning, which allows the learning of the independence
structure of those networks from data in an efficient and sound manner,
whenever the dataset is sufficiently large, and data is a representative
sampling of the target distribution. In the analysis of such technology, this
work surveys the current state-of-the-art algorithms for learning Markov
networks structure, discussing its current limitations, and proposing a series
of open problems where future works may produce some advances in the area in
terms of quality and efficiency. The paper concludes by opening a discussion
about how to develop a general formalism for improving the quality of the
structures learned, when data is scarce.
| [
{
"version": "v1",
"created": "Wed, 10 Aug 2011 20:25:08 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Nov 2013 19:15:05 GMT"
}
] | 2013-11-21T00:00:00 | [
[
"Schlüter",
"Federico",
""
]
] | TITLE: A survey on independence-based Markov networks learning
ABSTRACT: This work reports the most relevant technical aspects in the problem of
learning the \emph{Markov network structure} from data. Such a problem has
become increasingly important in machine learning and in many other application
fields. Markov networks, together with Bayesian networks, are probabilistic
graphical models, a widely used formalism for handling probability
distributions in intelligent systems. Learning graphical models from data has
been extensively applied in the case of Bayesian networks, but for Markov
networks it is not tractable in practice. However, this situation is changing
with time, given the exponential growth of computing capacity, the plethora of
available digital data, and the research on new learning technologies. This
work focuses on a technology called
independence-based learning, which allows the learning of the independence
structure of those networks from data in an efficient and sound manner,
whenever the dataset is sufficiently large, and data is a representative
sampling of the target distribution. In the analysis of such technology, this
work surveys the current state-of-the-art algorithms for learning Markov
networks structure, discussing its current limitations, and proposing a series
of open problems where future works may produce some advances in the area in
terms of quality and efficiency. The paper concludes by opening a discussion
about how to develop a general formalism for improving the quality of the
structures learned, when data is scarce.
| no_new_dataset | 0.949949 |
1107.3724 | Yuri Pirola | Yuri Pirola, Gianluca Della Vedova, Stefano Biffani, Alessandra Stella
and Paola Bonizzoni | Haplotype Inference on Pedigrees with Recombinations, Errors, and
Missing Genotypes via SAT solvers | 14 pages, 1 figure, 4 tables, the associated software reHCstar is
available at http://www.algolab.eu/reHCstar | IEEE/ACM Trans. on Computational Biology and Bioinformatics 9.6
(2012) 1582-1594 | 10.1109/TCBB.2012.100 | null | cs.DS q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Minimum-Recombinant Haplotype Configuration problem (MRHC) has been
highly successful in providing a sound combinatorial formulation for the
important problem of genotype phasing on pedigrees. Despite several algorithmic
advances and refinements that led to some efficient algorithms, its
applicability to real datasets has been limited by the absence of some
important characteristics of these data in its formulation, such as mutations,
genotyping errors, and missing data.
In this work, we propose the Haplotype Configuration with Recombinations and
Errors problem (HCRE), which generalizes the original MRHC formulation by
incorporating the two most common characteristics of real data: errors and
missing genotypes (including untyped individuals). Although HCRE is
computationally hard, we propose an exact algorithm for the problem based on a
reduction to the well-known Satisfiability problem. Our reduction exploits
recent progresses in the constraint programming literature and, combined with
the use of state-of-the-art SAT solvers, provides a practical solution for the
HCRE problem. Biological soundness of the phasing model and effectiveness (on
both accuracy and performance) of the algorithm are experimentally demonstrated
under several simulated scenarios and on a real dairy cattle population.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2011 14:25:10 GMT"
}
] | 2013-11-20T00:00:00 | [
[
"Pirola",
"Yuri",
""
],
[
"Della Vedova",
"Gianluca",
""
],
[
"Biffani",
"Stefano",
""
],
[
"Stella",
"Alessandra",
""
],
[
"Bonizzoni",
"Paola",
""
]
] | TITLE: Haplotype Inference on Pedigrees with Recombinations, Errors, and
Missing Genotypes via SAT solvers
ABSTRACT: The Minimum-Recombinant Haplotype Configuration problem (MRHC) has been
highly successful in providing a sound combinatorial formulation for the
important problem of genotype phasing on pedigrees. Despite several algorithmic
advances and refinements that led to some efficient algorithms, its
applicability to real datasets has been limited by the absence of some
important characteristics of these data in its formulation, such as mutations,
genotyping errors, and missing data.
In this work, we propose the Haplotype Configuration with Recombinations and
Errors problem (HCRE), which generalizes the original MRHC formulation by
incorporating the two most common characteristics of real data: errors and
missing genotypes (including untyped individuals). Although HCRE is
computationally hard, we propose an exact algorithm for the problem based on a
reduction to the well-known Satisfiability problem. Our reduction exploits
recent progress in the constraint programming literature and, combined with
the use of state-of-the-art SAT solvers, provides a practical solution for the
HCRE problem. Biological soundness of the phasing model and effectiveness (on
both accuracy and performance) of the algorithm are experimentally demonstrated
under several simulated scenarios and on a real dairy cattle population.
| no_new_dataset | 0.947088 |
1309.7340 | Jiwei Li | Jiwei Li and Claire Cardie | Early Stage Influenza Detection from Twitter | null | null | null | null | cs.SI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influenza is an acute respiratory illness that occurs virtually every year
and results in substantial disease, death and expense. Detection of Influenza
in its earliest stage would facilitate timely action that could reduce the
spread of the illness. Existing systems such as CDC and EISS which try to
collect diagnosis data, are almost entirely manual, resulting in about two-week
delays for clinical data acquisition. Twitter, a popular microblogging service,
provides us with a perfect source for early-stage flu detection due to its
real-time nature. For example, when a flu breaks out, people that get the flu
may post related tweets which enables the detection of the flu breakout
promptly. In this paper, we investigate the real-time flu detection problem on
Twitter data by proposing Flu Markov Network (Flu-MN): a spatio-temporal
unsupervised Bayesian algorithm based on a 4 phase Markov Network, trying to
identify the flu breakout at the earliest stage. We test our model on real
Twitter datasets from the United States along with baselines in multiple
applications, such as real-time flu breakout detection, future epidemic phase
prediction, or Influenza-like illness (ILI) physician visits. Experimental
results show the robustness and effectiveness of our approach. We build up a
real time flu reporting system based on the proposed approach, and we are
hopeful that it would help government or health organizations in identifying
flu outbreaks and facilitating timely actions to decrease unnecessary
mortality.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2013 19:47:11 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Oct 2013 18:01:47 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Nov 2013 21:09:39 GMT"
}
] | 2013-11-20T00:00:00 | [
[
"Li",
"Jiwei",
""
],
[
"Cardie",
"Claire",
""
]
] | TITLE: Early Stage Influenza Detection from Twitter
ABSTRACT: Influenza is an acute respiratory illness that occurs virtually every year
and results in substantial disease, death and expense. Detection of Influenza
in its earliest stage would facilitate timely action that could reduce the
spread of the illness. Existing systems such as CDC and EISS which try to
collect diagnosis data, are almost entirely manual, resulting in about two-week
delays for clinical data acquisition. Twitter, a popular microblogging service,
provides us with a perfect source for early-stage flu detection due to its
real-time nature. For example, when a flu breaks out, people that get the flu
may post related tweets which enables the detection of the flu breakout
promptly. In this paper, we investigate the real-time flu detection problem on
Twitter data by proposing Flu Markov Network (Flu-MN): a spatio-temporal
unsupervised Bayesian algorithm based on a 4 phase Markov Network, trying to
identify the flu breakout at the earliest stage. We test our model on real
Twitter datasets from the United States along with baselines in multiple
applications, such as real-time flu breakout detection, future epidemic phase
prediction, or Influenza-like illness (ILI) physician visits. Experimental
results show the robustness and effectiveness of our approach. We build up a
real time flu reporting system based on the proposed approach, and we are
hopeful that it would help government or health organizations in identifying
flu outbreaks and facilitating timely actions to decrease unnecessary
mortality.
| no_new_dataset | 0.95418 |
1311.4731 | Lutz Bornmann Dr. | Lutz Bornmann, Loet Leydesdorff and Jian Wang | How to improve the prediction based on citation impact percentiles for
years shortly after the publication date? | Accepted for publication in the Journal of Informetrics. arXiv admin
note: text overlap with arXiv:1306.4454 | null | null | null | cs.DL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The findings of Bornmann, Leydesdorff, and Wang (in press) revealed that the
consideration of journal impact improves the prediction of long-term citation
impact. This paper further explores the possibility of improving citation
impact measurements on the basis of a short citation window by the consideration
of journal impact and other variables, such as the number of authors, the
number of cited references, and the number of pages. The dataset contains
475,391 journal papers published in 1980 and indexed in Web of Science (WoS,
Thomson Reuters), and all annual citation counts (from 1980 to 2010) for these
papers. As an indicator of citation impact, we used percentiles of citations
calculated using the approach of Hazen (1914). Our results show that citation
impact measurement can really be improved: If factors generally influencing
citation impact are considered in the statistical analysis, the explained
variance in the long-term citation impact can be much increased. However, this
increase is only visible when using the years shortly after publication but not
when using later years.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2013 13:27:14 GMT"
}
] | 2013-11-20T00:00:00 | [
[
"Bornmann",
"Lutz",
""
],
[
"Leydesdorff",
"Loet",
""
],
[
"Wang",
"Jian",
""
]
] | TITLE: How to improve the prediction based on citation impact percentiles for
years shortly after the publication date?
ABSTRACT: The findings of Bornmann, Leydesdorff, and Wang (in press) revealed that the
consideration of journal impact improves the prediction of long-term citation
impact. This paper further explores the possibility of improving citation
impact measurements on the basis of a short citation window by the consideration
of journal impact and other variables, such as the number of authors, the
number of cited references, and the number of pages. The dataset contains
475,391 journal papers published in 1980 and indexed in Web of Science (WoS,
Thomson Reuters), and all annual citation counts (from 1980 to 2010) for these
papers. As an indicator of citation impact, we used percentiles of citations
calculated using the approach of Hazen (1914). Our results show that citation
impact measurement can really be improved: If factors generally influencing
citation impact are considered in the statistical analysis, the explained
variance in the long-term citation impact can be much increased. However, this
increase is only visible when using the years shortly after publication but not
when using later years.
| no_new_dataset | 0.949295 |
1311.4787 | Aaron Slepkov | Aaron D. Slepkov, Kevin B. Ironside, and David DiBattista | Benford's Law: Textbook Exercises and Multiple-choice Testbanks | null | null | null | null | physics.data-an physics.ed-ph physics.pop-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Benford's Law describes the finding that the distribution of leading (or
leftmost) digits of innumerable datasets follows a well-defined logarithmic
trend, rather than an intuitive uniformity. In practice this means that the
most common leading digit is 1, with an expected frequency of 30.1%, and the
least common is 9, with an expected frequency of 4.6%. The history and
development of Benford's Law is inexorably linked to physics, yet there has
been a dearth of physics-related Benford datasets reported in the literature.
Currently, the most common application of Benford's Law is in detecting number
invention and tampering such as found in accounting-, tax-, and voter-fraud. We
demonstrate that answers to end-of-chapter exercises in physics and chemistry
textbooks conform to Benford's Law. Subsequently, we investigate whether this
fact can be used to gain advantage over random guessing in multiple-choice
tests, and find that while testbank answers in introductory physics closely
conform to Benford's Law, the testbank is nonetheless secure against such a
Benford's attack for banal reasons.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2013 15:54:32 GMT"
}
] | 2013-11-20T00:00:00 | [
[
"Slepkov",
"Aaron D.",
""
],
[
"Ironside",
"Kevin B.",
""
],
[
"DiBattista",
"David",
""
]
] | TITLE: Benford's Law: Textbook Exercises and Multiple-choice Testbanks
ABSTRACT: Benford's Law describes the finding that the distribution of leading (or
leftmost) digits of innumerable datasets follows a well-defined logarithmic
trend, rather than an intuitive uniformity. In practice this means that the
most common leading digit is 1, with an expected frequency of 30.1%, and the
least common is 9, with an expected frequency of 4.6%. The history and
development of Benford's Law is inexorably linked to physics, yet there has
been a dearth of physics-related Benford datasets reported in the literature.
Currently, the most common application of Benford's Law is in detecting number
invention and tampering such as found in accounting-, tax-, and voter-fraud. We
demonstrate that answers to end-of-chapter exercises in physics and chemistry
textbooks conform to Benford's Law. Subsequently, we investigate whether this
fact can be used to gain advantage over random guessing in multiple-choice
tests, and find that while testbank answers in introductory physics closely
conform to Benford's Law, the testbank is nonetheless secure against such a
Benford's attack for banal reasons.
| no_new_dataset | 0.943919 |
1307.0468 | Aliaksei Sandryhaila | Aliaksei Sandryhaila, Jose M. F. Moura | Discrete Signal Processing on Graphs: Frequency Analysis | null | null | null | null | cs.SI math.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Signals and datasets that arise in physical and engineering applications, as
well as social, genetics, biomolecular, and many other domains, are becoming
increasingly larger and more complex. In contrast to traditional time and image
signals, data in these domains are supported by arbitrary graphs. Signal
processing on graphs extends concepts and techniques from traditional signal
processing to data indexed by generic graphs. This paper studies the concepts
of low and high frequencies on graphs, and low-, high-, and band-pass graph
filters. In traditional signal processing, these concepts are easily defined
because of a natural frequency ordering that has a physical interpretation. For
signals residing on graphs, in general, there is no obvious frequency ordering.
We propose a definition of total variation for graph signals that naturally
leads to a frequency ordering on graphs and defines low-, high-, and band-pass
graph signals and filters. We study the design of graph filters with specified
frequency response, and illustrate our approach with applications to sensor
malfunction detection and data classification.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2013 18:33:04 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Nov 2013 19:44:53 GMT"
}
] | 2013-11-19T00:00:00 | [
[
"Sandryhaila",
"Aliaksei",
""
],
[
"Moura",
"Jose M. F.",
""
]
] | TITLE: Discrete Signal Processing on Graphs: Frequency Analysis
ABSTRACT: Signals and datasets that arise in physical and engineering applications, as
well as social, genetics, biomolecular, and many other domains, are becoming
increasingly larger and more complex. In contrast to traditional time and image
signals, data in these domains are supported by arbitrary graphs. Signal
processing on graphs extends concepts and techniques from traditional signal
processing to data indexed by generic graphs. This paper studies the concepts
of low and high frequencies on graphs, and low-, high-, and band-pass graph
filters. In traditional signal processing, these concepts are easily defined
because of a natural frequency ordering that has a physical interpretation. For
signals residing on graphs, in general, there is no obvious frequency ordering.
We propose a definition of total variation for graph signals that naturally
leads to a frequency ordering on graphs and defines low-, high-, and band-pass
graph signals and filters. We study the design of graph filters with specified
frequency response, and illustrate our approach with applications to sensor
malfunction detection and data classification.
| no_new_dataset | 0.955068 |
1310.8428 | Hongyu Su | Hongyu Su, Juho Rousu | Multilabel Classification through Random Graph Ensembles | 15 Pages, 1 Figures | JMLR: Workshop and Conference Proceedings 29:404--418, 2013 | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present new methods for multilabel classification, relying on ensemble
learning on a collection of random output graphs imposed on the multilabel and
a kernel-based structured output learner as the base classifier. For ensemble
learning, differences among the output graphs provide the required base
classifier diversity and lead to improved performance as the size of the
ensemble increases. We study different methods of forming the ensemble prediction,
including majority voting and two methods that perform inferences over the
graph structures before or after combining the base models into the ensemble.
We compare the methods against the state-of-the-art machine learning approaches
on a set of heterogeneous multilabel benchmark problems, including multilabel
AdaBoost, convex multitask feature learning, as well as single target learning
approaches represented by Bagging and SVM. In our experiments, the random graph
ensembles are very competitive and robust, ranking first or second on most of
the datasets. Overall, our results show that random graph ensembles are viable
alternatives to flat multilabel and multitask learners.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2013 09:00:39 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Nov 2013 04:04:49 GMT"
}
] | 2013-11-19T00:00:00 | [
[
"Su",
"Hongyu",
""
],
[
"Rousu",
"Juho",
""
]
] | TITLE: Multilabel Classification through Random Graph Ensembles
ABSTRACT: We present new methods for multilabel classification, relying on ensemble
learning on a collection of random output graphs imposed on the multilabel and
a kernel-based structured output learner as the base classifier. For ensemble
learning, differences among the output graphs provide the required base
classifier diversity and lead to improved performance as the size of the
ensemble increases. We study different methods of forming the ensemble prediction,
including majority voting and two methods that perform inferences over the
graph structures before or after combining the base models into the ensemble.
We compare the methods against the state-of-the-art machine learning approaches
on a set of heterogeneous multilabel benchmark problems, including multilabel
AdaBoost, convex multitask feature learning, as well as single target learning
approaches represented by Bagging and SVM. In our experiments, the random graph
ensembles are very competitive and robust, ranking first or second on most of
the datasets. Overall, our results show that random graph ensembles are viable
alternatives to flat multilabel and multitask learners.
| no_new_dataset | 0.948632 |
1311.0805 | Nikolaos Korfiatis | Todor Ivanov, Nikolaos Korfiatis, Roberto V. Zicari | On the inequality of the 3V's of Big Data Architectural Paradigms: A
case for heterogeneity | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The well-known 3V architectural paradigm for Big Data introduced by Laney
(2011), provides a simplified framework for defining the architecture of a big
data platform to be deployed in various scenarios tackling processing of
massive datasets. While additional components such as Variability and Veracity
have been discussed as an extension to the 3V model, the basic components
(volume, variety, velocity) provide a quantitative framework while variability
and veracity target a more qualitative approach. In this paper we argue why the
basic 3V's are not equal due to the different requirements that need to be
covered in the case of higher demands for a particular "V". Similar to other
conjectures such as the CAP theorem, 3V-based architectures differ in their
implementation. We call this paradigm heterogeneity, and we provide a taxonomy
of the existing tools (as of 2013) covering the Hadoop ecosystem from the
perspective of heterogeneity. This paper contributes to the understanding of
the Hadoop ecosystem from the perspective of different workloads and aims to
help researchers and practitioners on the design of scalable platforms
targeting different operational needs.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2013 18:29:45 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Nov 2013 19:21:45 GMT"
}
] | 2013-11-19T00:00:00 | [
[
"Ivanov",
"Todor",
""
],
[
"Korfiatis",
"Nikolaos",
""
],
[
"Zicari",
"Roberto V.",
""
]
] | TITLE: On the inequality of the 3V's of Big Data Architectural Paradigms: A
case for heterogeneity
ABSTRACT: The well-known 3V architectural paradigm for Big Data introduced by Laney
(2011), provides a simplified framework for defining the architecture of a big
data platform to be deployed in various scenarios tackling processing of
massive datasets. While additional components such as Variability and Veracity
have been discussed as an extension to the 3V model, the basic components
(volume, variety, velocity) provide a quantitative framework while variability
and veracity target a more qualitative approach. In this paper we argue why the
basic 3V's are not equal due to the different requirements that need to be
covered in the case of higher demands for a particular "V". Similar to other
conjectures such as the CAP theorem, 3V-based architectures differ in their
implementation. We call this paradigm heterogeneity, and we provide a taxonomy
of the existing tools (as of 2013) covering the Hadoop ecosystem from the
perspective of heterogeneity. This paper contributes to the understanding of
the Hadoop ecosystem from the perspective of different workloads and aims to
help researchers and practitioners on the design of scalable platforms
targeting different operational needs.
| no_new_dataset | 0.947381 |
1311.3982 | Juston Moore | Aaron Schein, Juston Moore, Hanna Wallach | Inferring Multilateral Relations from Dynamic Pairwise Interactions | NIPS 2013 Workshop on Frontiers of Network Analysis | null | null | null | cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correlations between anomalous activity patterns can yield pertinent
information about complex social processes: a significant deviation from normal
behavior, exhibited simultaneously by multiple pairs of actors, provides
evidence for some underlying relationship involving those pairs---i.e., a
multilateral relation. We introduce a new nonparametric Bayesian latent
variable model that explicitly captures correlations between anomalous
interaction counts and uses these shared deviations from normal activity
patterns to identify and characterize multilateral relations. We showcase our
model's capabilities using the newly curated Global Database of Events,
Location, and Tone, a dataset that has seen considerable interest in the social
sciences and the popular press, but which is largely unexplored by the
machine learning community. We provide a detailed analysis of the latent
structure inferred by our model and show that the multilateral relations
correspond to major international events and long-term international
relationships. These findings lead us to recommend our model for any
data-driven analysis of interaction networks where dynamic interactions over
the edges provide evidence for latent social structure.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2013 21:22:37 GMT"
}
] | 2013-11-19T00:00:00 | [
[
"Schein",
"Aaron",
""
],
[
"Moore",
"Juston",
""
],
[
"Wallach",
"Hanna",
""
]
] | TITLE: Inferring Multilateral Relations from Dynamic Pairwise Interactions
ABSTRACT: Correlations between anomalous activity patterns can yield pertinent
information about complex social processes: a significant deviation from normal
behavior, exhibited simultaneously by multiple pairs of actors, provides
evidence for some underlying relationship involving those pairs---i.e., a
multilateral relation. We introduce a new nonparametric Bayesian latent
variable model that explicitly captures correlations between anomalous
interaction counts and uses these shared deviations from normal activity
patterns to identify and characterize multilateral relations. We showcase our
model's capabilities using the newly curated Global Database of Events,
Location, and Tone, a dataset that has seen considerable interest in the social
sciences and the popular press, but which is largely unexplored by the
machine learning community. We provide a detailed analysis of the latent
structure inferred by our model and show that the multilateral relations
correspond to major international events and long-term international
relationships. These findings lead us to recommend our model for any
data-driven analysis of interaction networks where dynamic interactions over
the edges provide evidence for latent social structure.
| new_dataset | 0.95877 |
1311.3987 | Seyed-Mehdi-Reza Beheshti | Seyed-Mehdi-Reza Beheshti and Srikumar Venugopal and Seung Hwan Ryu
and Boualem Benatallah and Wei Wang | Big Data and Cross-Document Coreference Resolution: Current State and
Future Opportunities | null | null | null | null | cs.CL cs.DC cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information Extraction (IE) is the task of automatically extracting
structured information from unstructured/semi-structured machine-readable
documents. Among various IE tasks, extracting actionable intelligence from
ever-increasing amount of data depends critically upon Cross-Document
Coreference Resolution (CDCR) - the task of identifying entity mentions across
multiple documents that refer to the same underlying entity. Recently, document
datasets of the order of peta-/tera-bytes have raised many challenges for
performing effective CDCR such as scaling to large numbers of mentions and
limited representational power. The problem of analysing such datasets is
called "big data". The aim of this paper is to provide readers with an
understanding of the central concepts, subtasks, and the current
state-of-the-art in CDCR process. We provide assessment of existing
tools/techniques for CDCR subtasks and highlight big data challenges in each of
them to help readers identify important and outstanding issues for further
investigation. Finally, we provide concluding remarks and discuss possible
directions for future work.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2013 06:10:15 GMT"
}
] | 2013-11-19T00:00:00 | [
[
"Beheshti",
"Seyed-Mehdi-Reza",
""
],
[
"Venugopal",
"Srikumar",
""
],
[
"Ryu",
"Seung Hwan",
""
],
[
"Benatallah",
"Boualem",
""
],
[
"Wang",
"Wei",
""
]
] | TITLE: Big Data and Cross-Document Coreference Resolution: Current State and
Future Opportunities
ABSTRACT: Information Extraction (IE) is the task of automatically extracting
structured information from unstructured/semi-structured machine-readable
documents. Among various IE tasks, extracting actionable intelligence from
ever-increasing amount of data depends critically upon Cross-Document
Coreference Resolution (CDCR) - the task of identifying entity mentions across
multiple documents that refer to the same underlying entity. Recently, document
datasets of the order of peta-/tera-bytes have raised many challenges for
performing effective CDCR such as scaling to large numbers of mentions and
limited representational power. The problem of analysing such datasets is
called "big data". The aim of this paper is to provide readers with an
understanding of the central concepts, subtasks, and the current
state-of-the-art in CDCR process. We provide assessment of existing
tools/techniques for CDCR subtasks and highlight big data challenges in each of
them to help readers identify important and outstanding issues for further
investigation. Finally, we provide concluding remarks and discuss possible
directions for future work.
| no_new_dataset | 0.9462 |
1311.4040 | Miklos Kalman | Miklos Kalman, Ferenc Havasi | Enhanced XML Validation using SRML | 18 pages | International Journal of Web & Semantic Technology (IJWesT) Vol.4,
No.4, October 2013 | 10.5121/ijwest.2013.4401 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data validation is becoming more and more important with the ever-growing
amount of data being consumed and transmitted by systems over the Internet. It
is important to ensure that the data being sent is valid as it may contain
entry errors, which may be consumed by different systems causing further
errors. XML has become the de facto standard for data transfer. The XML Schema
Definition language (XSD) was created to help XML structural validation and
provide a schema for data type restrictions, however it does not allow for more
complex situations. In this article we introduce a way to provide rule based
XML validation and correction through the extension and improvement of our SRML
metalanguage. We also explore the option of applying it in a database as a
trigger for CRUD operations allowing more granular dataset validation on an
atomic level allowing for more complex dataset record validation rules.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2013 09:59:05 GMT"
}
] | 2013-11-19T00:00:00 | [
[
"Kalman",
"Miklos",
""
],
[
"Havasi",
"Ferenc",
""
]
] | TITLE: Enhanced XML Validation using SRML
ABSTRACT: Data validation is becoming more and more important with the ever-growing
amount of data being consumed and transmitted by systems over the Internet. It
is important to ensure that the data being sent is valid as it may contain
entry errors, which may be consumed by different systems causing further
errors. XML has become the de facto standard for data transfer. The XML Schema
Definition language (XSD) was created to help XML structural validation and
provide a schema for data type restrictions, however it does not allow for more
complex situations. In this article we introduce a way to provide rule based
XML validation and correction through the extension and improvement of our SRML
metalanguage. We also explore the option of applying it in a database as a
trigger for CRUD operations allowing more granular dataset validation on an
atomic level allowing for more complex dataset record validation rules.
| no_new_dataset | 0.947039 |
1311.3618 | Mircea Cimpoi | Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and
Andrea Vedaldi | Describing Textures in the Wild | 13 pages; 12 figures Fixed misplaced affiliation | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patterns and textures are defining characteristics of many natural objects: a
shirt can be striped, the wings of a butterfly can be veined, and the skin of
an animal can be scaly. Aiming at supporting this analytical dimension in image
understanding, we address the challenging problem of describing textures with
semantic attributes. We identify a rich vocabulary of forty-seven texture terms
and use them to describe a large dataset of patterns collected in the wild. The
resulting Describable Textures Dataset (DTD) is the basis to seek the best
texture representation for recognizing describable texture attributes in
images. We port from object recognition to texture recognition the Improved
Fisher Vector (IFV) and show that, surprisingly, it outperforms specialized
texture descriptors not only on our problem, but also in established material
recognition datasets. We also show that the describable attributes are
excellent texture descriptors, transferring between datasets and tasks; in
particular, combined with IFV, they significantly outperform the
state-of-the-art by more than 8 percent on both FMD and KTHTIPS-2b benchmarks.
We also demonstrate that they produce intuitive descriptions of materials and
Internet images.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2013 19:28:35 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2013 16:14:12 GMT"
}
] | 2013-11-18T00:00:00 | [
[
"Cimpoi",
"Mircea",
""
],
[
"Maji",
"Subhransu",
""
],
[
"Kokkinos",
"Iasonas",
""
],
[
"Mohamed",
"Sammy",
""
],
[
"Vedaldi",
"Andrea",
""
]
] | TITLE: Describing Textures in the Wild
ABSTRACT: Patterns and textures are defining characteristics of many natural objects: a
shirt can be striped, the wings of a butterfly can be veined, and the skin of
an animal can be scaly. Aiming at supporting this analytical dimension in image
understanding, we address the challenging problem of describing textures with
semantic attributes. We identify a rich vocabulary of forty-seven texture terms
and use them to describe a large dataset of patterns collected in the wild. The
resulting Describable Textures Dataset (DTD) is the basis for seeking the best
texture representation for recognizing describable texture attributes in
images. We port from object recognition to texture recognition the Improved
Fisher Vector (IFV) and show that, surprisingly, it outperforms specialized
texture descriptors not only on our problem, but also in established material
recognition datasets. We also show that the describable attributes are
excellent texture descriptors, transferring between datasets and tasks; in
particular, combined with IFV, they significantly outperform the
state-of-the-art by more than 8 percent on both FMD and KTHTIPS-2b benchmarks.
We also demonstrate that they produce intuitive descriptions of materials and
Internet images.
| no_new_dataset | 0.934873 |
1311.3732 | Kien Nguyen | Kien Duy Nguyen, Tuan Pham Minh, Quang Nhat Nguyen, Thanh Trung Nguyen | Exploiting Direct and Indirect Information for Friend Suggestion in
ZingMe | NIPS workshop, 9 pages, 4 figures | null | null | null | cs.SI cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Friend suggestion is a fundamental problem in social networks with the goal
of assisting users in creating more relationships, thereby enhancing users'
interest in the social network. This problem is often considered to
be the link prediction problem in the network. ZingMe is one of the largest
social networks in Vietnam. In this paper, we analyze the current approach for
the friend suggestion problem in ZingMe, showing its limitations and
disadvantages. We propose a new efficient approach for friend suggestion that
uses information from the network structure, attributes and interactions of
users to create resources for the evaluation of friend connection amongst
users. Friend connection is evaluated exploiting both direct communication
between the users and information from other ones in the network. The proposed
approach has been implemented in a new system version of ZingMe. We conducted
experiments, exploiting a dataset derived from the users' real use of ZingMe,
to compare the newly proposed approach to the current approach and some
well-known ones for the accuracy of friend suggestion. The experimental results
show that the newly proposed approach outperforms the current one, with an
increase of 7% to 98% on average in friend suggestion accuracy. The
proposed approach also outperforms other ones for users who have a small number
of friends with improvements from 20% to 85% on average. In this paper, we also
discuss a number of open issues and possible improvements for the proposed
approach.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2013 05:56:48 GMT"
}
] | 2013-11-18T00:00:00 | [
[
"Nguyen",
"Kien Duy",
""
],
[
"Minh",
"Tuan Pham",
""
],
[
"Nguyen",
"Quang Nhat",
""
],
[
"Nguyen",
"Thanh Trung",
""
]
] | TITLE: Exploiting Direct and Indirect Information for Friend Suggestion in
ZingMe
ABSTRACT: Friend suggestion is a fundamental problem in social networks with the goal
of assisting users in creating more relationships, thereby enhancing users'
interest in the social network. This problem is often considered to
be the link prediction problem in the network. ZingMe is one of the largest
social networks in Vietnam. In this paper, we analyze the current approach for
the friend suggestion problem in ZingMe, showing its limitations and
disadvantages. We propose a new efficient approach for friend suggestion that
uses information from the network structure, attributes and interactions of
users to create resources for the evaluation of friend connection amongst
users. Friend connection is evaluated exploiting both direct communication
between the users and information from other ones in the network. The proposed
approach has been implemented in a new system version of ZingMe. We conducted
experiments, exploiting a dataset derived from the users' real use of ZingMe,
to compare the newly proposed approach to the current approach and some
well-known ones for the accuracy of friend suggestion. The experimental results
show that the newly proposed approach outperforms the current one, with an
increase of 7% to 98% on average in friend suggestion accuracy. The
proposed approach also outperforms other ones for users who have a small number
of friends with improvements from 20% to 85% on average. In this paper, we also
discuss a number of open issues and possible improvements for the proposed
approach.
| no_new_dataset | 0.943138 |
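For readers who want a concrete starting point for the structural side of friend suggestion described above, here is a minimal sketch using networkx's Adamic-Adar index. It only covers the network-structure signal; the paper's method additionally combines profile attributes and user interactions, which are not modelled here, and the graph used is a stand-in.

```python
# Structure-only friend suggestion via the Adamic-Adar index (illustrative only).
import networkx as nx

G = nx.karate_club_graph()                     # stand-in for a social graph
user = 0
candidates = [(user, v) for v in G.nodes
              if v != user and not G.has_edge(user, v)]

# adamic_adar_index yields (u, v, score) triples for the candidate pairs.
ranked = sorted(nx.adamic_adar_index(G, candidates),
                key=lambda t: t[2], reverse=True)
for u, v, score in ranked[:5]:
    print(f"suggest user {v} to user {u} (score {score:.3f})")
```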
1311.3735 | Nicola Di Mauro | Nicola Di Mauro and Floriana Esposito | Ensemble Relational Learning based on Selective Propositionalization | 10 pages. arXiv admin note: text overlap with arXiv:1006.5188 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dealing with structured data needs the use of expressive representation
formalisms; this, however, raises the problem of dealing with the computational
complexity of the machine learning process. Furthermore, real-world domains
require tools able to manage their typical uncertainty. Many statistical
relational learning approaches try to deal with these problems by combining the
construction of relevant relational features with a probabilistic tool. When
the combination is static (static propositionalization), the constructed
features are considered as boolean features and used offline as input to a
statistical learner; while, when the combination is dynamic (dynamic
propositionalization), the feature construction and probabilistic tool are
combined into a single process. In this paper we propose a selective
propositionalization method that searches for the optimal set of relational features
to be used by a probabilistic learner in order to minimize a loss function. The
new propositionalization approach has been combined with the random subspace
ensemble method. Experiments on real-world datasets show the validity of the
proposed method.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2013 06:14:15 GMT"
}
] | 2013-11-18T00:00:00 | [
[
"Di Mauro",
"Nicola",
""
],
[
"Esposito",
"Floriana",
""
]
] | TITLE: Ensemble Relational Learning based on Selective Propositionalization
ABSTRACT: Dealing with structured data needs the use of expressive representation
formalisms; this, however, raises the problem of dealing with the computational
complexity of the machine learning process. Furthermore, real-world domains
require tools able to manage their typical uncertainty. Many statistical
relational learning approaches try to deal with these problems by combining the
construction of relevant relational features with a probabilistic tool. When
the combination is static (static propositionalization), the constructed
features are considered as boolean features and used offline as input to a
statistical learner; while, when the combination is dynamic (dynamic
propositionalization), the feature construction and probabilistic tool are
combined into a single process. In this paper we propose a selective
propositionalization method that searches for the optimal set of relational features
to be used by a probabilistic learner in order to minimize a loss function. The
new propositionalization approach has been combined with the random subspace
ensemble method. Experiments on real-world datasets show the validity of the
proposed method.
| no_new_dataset | 0.943712 |
1311.3312 | Thomas Cerqueus | Vanessa Ayala-Rivera, Patrick McDonagh, Thomas Cerqueus, Liam Murphy | Synthetic Data Generation using Benerator Tool | 12 pages, 5 figures, 10 references | null | null | UCD-CSI-2013-03 | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Datasets of different characteristics are needed by the research community
for experimental purposes. However, real data may be difficult to obtain due to
privacy concerns. Moreover, real data may not meet specific characteristics
which are needed to verify new approaches under certain conditions. Given these
limitations, the use of synthetic data is a viable alternative to complement
the real data. In this report, we describe the process followed to generate
synthetic data using Benerator, a publicly available tool. The results show
that the synthetic data preserves a high level of accuracy compared to the
original data. The generated datasets correspond to microdata containing
records with social, economic and demographic data which mimics the
distribution of aggregated statistics from the 2011 Irish Census data.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2013 21:14:40 GMT"
}
] | 2013-11-15T00:00:00 | [
[
"Ayala-Rivera",
"Vanessa",
""
],
[
"McDonagh",
"Patrick",
""
],
[
"Cerqueus",
"Thomas",
""
],
[
"Murphy",
"Liam",
""
]
] | TITLE: Synthetic Data Generation using Benerator Tool
ABSTRACT: Datasets of different characteristics are needed by the research community
for experimental purposes. However, real data may be difficult to obtain due to
privacy concerns. Moreover, real data may not meet specific characteristics
which are needed to verify new approaches under certain conditions. Given these
limitations, the use of synthetic data is a viable alternative to complement
the real data. In this report, we describe the process followed to generate
synthetic data using Benerator, a publicly available tool. The results show
that the synthetic data preserves a high level of accuracy compared to the
original data. The generated datasets correspond to microdata containing
records with social, economic and demographic data which mimics the
distribution of aggregated statistics from the 2011 Irish Census data.
| no_new_dataset | 0.947575 |
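To make the idea of the record above concrete, the sketch below draws synthetic microdata whose attribute frequencies follow published aggregate statistics. It is a generic illustration, not Benerator itself; the attribute names and probabilities are invented.

```python
# Synthetic microdata generation from (hypothetical) aggregate distributions.
import numpy as np

rng = np.random.default_rng(42)
n_records = 10_000

age_bands, age_p = ["0-17", "18-34", "35-64", "65+"], [0.25, 0.27, 0.36, 0.12]
regions, region_p = ["Dublin", "Cork", "Galway", "Other"], [0.28, 0.12, 0.06, 0.54]

records = [{
    "age_band": str(rng.choice(age_bands, p=age_p)),
    "region": str(rng.choice(regions, p=region_p)),
    "income": round(float(rng.lognormal(mean=10.4, sigma=0.5)), 2),
} for _ in range(n_records)]

print(records[0])   # one synthetic record mimicking the aggregate marginals
```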
1311.3508 | Muhammad Qasim Pasta | Muhammad Qasim Pasta, Zohaib Jan, Faraz Zaidi, Celine Rozenblat | Demographic and Structural Characteristics to Rationalize Link Formation
in Online Social Networks | Second International Workshop on Complex Networks and their
Applications (10 pages, 8 figures) | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen tremendous growth of many online social networks such
as Facebook, LinkedIn and MySpace. People connect to each other through these
networks forming large social communities providing researchers rich datasets
to understand, model and predict social interactions and behaviors. New
contacts in these networks can be formed either due to an individual's
demographic profile such as age group, gender, geographic location or due to
network's structural dynamics such as triadic closure and preferential
attachment, or a combination of both demographic and structural
characteristics.
A number of network generation models have been proposed in the last decade
to explain the structure, evolution and processes taking place in different
types of networks, and notably social networks. Network generation models
studied in the literature primarily consider structural properties, and in some
cases an individual's demographic profile in the formation of new social
contacts. These models do not present a mechanism to combine both structural
and demographic characteristics for the formation of new links. In this paper,
we propose a new network generation algorithm which incorporates both these
characteristics to model the growth of a network. We use different publicly
available Facebook datasets as benchmarks to demonstrate the correctness of the
proposed network generation model.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2013 14:04:09 GMT"
}
] | 2013-11-15T00:00:00 | [
[
"Pasta",
"Muhammad Qasim",
""
],
[
"Jan",
"Zohaib",
""
],
[
"Zaidi",
"Faraz",
""
],
[
"Rozenblat",
"Celine",
""
]
] | TITLE: Demographic and Structural Characteristics to Rationalize Link Formation
in Online Social Networks
ABSTRACT: Recent years have seen tremendous growth of many online social networks such
as Facebook, LinkedIn and MySpace. People connect to each other through these
networks forming large social communities providing researchers rich datasets
to understand, model and predict social interactions and behaviors. New
contacts in these networks can be formed either due to an individual's
demographic profile such as age group, gender, geographic location or due to
network's structural dynamics such as triadic closure and preferential
attachment, or a combination of both demographic and structural
characteristics.
A number of network generation models have been proposed in the last decade
to explain the structure, evolution and processes taking place in different
types of networks, and notably social networks. Network generation models
studied in the literature primarily consider structural properties, and in some
cases an individual's demographic profile in the formation of new social
contacts. These models do not present a mechanism to combine both structural
and demographic characteristics for the formation of new links. In this paper,
we propose a new network generation algorithm which incorporates both these
characteristics to model the growth of a network. We use different publicly
available Facebook datasets as benchmarks to demonstrate the correctness of the
proposed network generation model.
| no_new_dataset | 0.954393 |
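A toy version of the kind of growth model discussed above, mixing a structural rule (preferential attachment) with a demographic rule (same-group attachment), is sketched below. The mixing weight, group labels, and fallback behaviour are assumptions made for illustration; this is not the paper's algorithm.

```python
# Toy network growth mixing preferential attachment with group homophily.
import random
import networkx as nx

def grow(n_nodes, m_edges=3, alpha=0.5, groups=("A", "B")):
    G = nx.complete_graph(m_edges + 1)
    for v in G.nodes:
        G.nodes[v]["group"] = random.choice(groups)
    for v in range(m_edges + 1, n_nodes):
        G.add_node(v, group=random.choice(groups))
        existing = [x for x in G.nodes if x != v]
        targets = set()
        while len(targets) < m_edges:
            same = [x for x in existing
                    if G.nodes[x]["group"] == G.nodes[v]["group"]]
            if random.random() < alpha or not same:
                # structural step: choose proportionally to degree
                u = random.choices(existing,
                                   weights=[G.degree(x) + 1 for x in existing])[0]
            else:
                # demographic step: choose a node with the same group label
                u = random.choice(same)
            targets.add(u)
        G.add_edges_from((v, u) for u in targets)
    return G

G = grow(500)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```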
1311.2978 | Shibamouli Lahiri | Shibamouli Lahiri, Rada Mihalcea | Authorship Attribution Using Word Network Features | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore a set of novel features for authorship attribution
of documents. These features are derived from a word network representation of
natural language text. As has been noted in previous studies, natural language
tends to show complex network structure at word level, with low degrees of
separation and scale-free (power law) degree distribution. There has also been
work on authorship attribution that incorporates ideas from complex networks.
The goal of our paper is to explore properties of these complex networks that
are suitable as features for machine-learning-based authorship attribution of
documents. We performed experiments on three different datasets, and obtained
promising results.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2013 23:11:40 GMT"
}
] | 2013-11-14T00:00:00 | [
[
"Lahiri",
"Shibamouli",
""
],
[
"Mihalcea",
"Rada",
""
]
] | TITLE: Authorship Attribution Using Word Network Features
ABSTRACT: In this paper, we explore a set of novel features for authorship attribution
of documents. These features are derived from a word network representation of
natural language text. As has been noted in previous studies, natural language
tends to show complex network structure at word level, with low degrees of
separation and scale-free (power law) degree distribution. There has also been
work on authorship attribution that incorporates ideas from complex networks.
The goal of our paper is to explore properties of these complex networks that
are suitable as features for machine-learning-based authorship attribution of
documents. We performed experiments on three different datasets, and obtained
promising results.
| no_new_dataset | 0.953275 |
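The sketch below shows the general flavour of word-network stylometric features: build a word adjacency graph from a text and read off global network statistics. The specific feature set here is illustrative and is not the one used in the paper.

```python
# Word-network features for stylometry (illustrative feature set).
import networkx as nx

def word_network_features(text):
    words = text.lower().split()
    G = nx.Graph()
    G.add_edges_from(zip(words, words[1:]))       # adjacent words become edges
    degrees = [d for _, d in G.degree()]
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return {
        "n_nodes": G.number_of_nodes(),
        "avg_degree": sum(degrees) / len(degrees),
        "clustering": nx.average_clustering(G),
        "avg_path_len": nx.average_shortest_path_length(giant),
    }

print(word_network_features(
    "the quick brown fox jumps over the lazy dog and the fox runs"))
```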
1311.3037 | Junzhou Zhao | Pinghui Wang, Bruno Ribeiro, Junzhou Zhao, John C.S. Lui, Don Towsley,
Xiaohong Guan | Practical Characterization of Large Networks Using Neighborhood
Information | null | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing large online social networks (OSNs) through node querying is a
challenging task. OSNs often impose severe constraints on the query rate, hence
limiting the sample size to a small fraction of the total network. Various
ad-hoc subgraph sampling methods have been proposed, but many of them give
biased estimates with no theoretical basis for their accuracy. In this work, we
focus on developing sampling methods for OSNs where querying a node also
reveals partial structural information about its neighbors. Our methods are
optimized for NoSQL graph databases (if the database can be accessed directly),
or utilize Web API available on most major OSNs for graph sampling. We show
that our sampling method has provable convergence guarantees on being an
unbiased estimator, and it is more accurate than current state-of-the-art
methods. We characterize metrics such as node label density estimation and edge
label density estimation, two of the most fundamental network characteristics
from which other network characteristics can be derived. We evaluate our
methods on-the-fly over several live networks using their native APIs. Our
simulation studies over a variety of offline datasets show that by including
neighborhood information, our method drastically (4-fold) reduces the number of
samples required to achieve the same estimation accuracy of state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2013 07:36:55 GMT"
}
] | 2013-11-14T00:00:00 | [
[
"Wang",
"Pinghui",
""
],
[
"Ribeiro",
"Bruno",
""
],
[
"Zhao",
"Junzhou",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Towsley",
"Don",
""
],
[
"Guan",
"Xiaohong",
""
]
] | TITLE: Practical Characterization of Large Networks Using Neighborhood
Information
ABSTRACT: Characterizing large online social networks (OSNs) through node querying is a
challenging task. OSNs often impose severe constraints on the query rate, hence
limiting the sample size to a small fraction of the total network. Various
ad-hoc subgraph sampling methods have been proposed, but many of them give
biased estimates with no theoretical basis for their accuracy. In this work, we
focus on developing sampling methods for OSNs where querying a node also
reveals partial structural information about its neighbors. Our methods are
optimized for NoSQL graph databases (if the database can be accessed directly),
or utilize Web API available on most major OSNs for graph sampling. We show
that our sampling method has provable convergence guarantees on being an
unbiased estimator, and it is more accurate than current state-of-the-art
methods. We characterize metrics such as node label density estimation and edge
label density estimation, two of the most fundamental network characteristics
from which other network characteristics can be derived. We evaluate our
methods on-the-fly over several live networks using their native APIs. Our
simulation studies over a variety of offline datasets show that by including
neighborhood information, our method drastically (4-fold) reduces the number of
samples required to achieve the same estimation accuracy of state-of-the-art
methods.
| no_new_dataset | 0.947284 |
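As context for the estimation task above, the classic degree-corrected random-walk estimator of a node-label density is sketched below. It is only the baseline that this line of work improves upon by adding neighborhood information, not the authors' estimator, and the graph and labels are synthetic.

```python
# Degree-corrected random-walk estimate of a node-label density (baseline only).
import random
import networkx as nx

G = nx.barabasi_albert_graph(5000, 3, seed=0)        # stand-in for an OSN graph
label = {v: int(v % 7 == 0) for v in G.nodes}        # synthetic binary label

v = random.choice(list(G.nodes))
num = den = 0.0
for _ in range(20000):                               # walk and re-weight by 1/degree
    v = random.choice(list(G.neighbors(v)))
    w = 1.0 / G.degree(v)
    num += w * label[v]
    den += w

print("estimate:", num / den,
      "truth:", sum(label.values()) / G.number_of_nodes())
```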
1311.2677 | Raman Singh Mr. | Raman Singh, Harish Kumar and R.K. Singla | Sampling Based Approaches to Handle Imbalances in Network Traffic
Dataset for Machine Learning Techniques | 12 pages | null | 10.5121/csit.2013.3704 | null | cs.NI cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network traffic data is huge, varying and imbalanced because various classes
are not equally distributed. Machine learning (ML) algorithms for traffic
analysis use samples from this data to recommend the actions to be taken
by the network administrators, as well as for training. Due to imbalances in the
dataset, it is difficult to train machine learning algorithms for traffic
analysis and these may give biased or false results leading to serious
degradation in performance of these algorithms. Various techniques can be
applied during sampling to minimize the effect of imbalanced instances. In this
paper various sampling techniques have been analysed in order to compare the
decrease in variation in imbalances of network traffic datasets sampled for
these algorithms. Various parameters like missing classes in samples,
probability of sampling of the different instances have been considered for
comparison.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2013 05:32:48 GMT"
}
] | 2013-11-13T00:00:00 | [
[
"Singh",
"Raman",
""
],
[
"Kumar",
"Harish",
""
],
[
"Singla",
"R. K.",
""
]
] | TITLE: Sampling Based Approaches to Handle Imbalances in Network Traffic
Dataset for Machine Learning Techniques
ABSTRACT: Network traffic data is huge, varying and imbalanced because various classes
are not equally distributed. Machine learning (ML) algorithms for traffic
analysis use samples from this data to recommend the actions to be taken
by the network administrators, as well as for training. Due to imbalances in the
dataset, it is difficult to train machine learning algorithms for traffic
analysis and these may give biased or false results leading to serious
degradation in performance of these algorithms. Various techniques can be
applied during sampling to minimize the effect of imbalanced instances. In this
paper various sampling techniques have been analysed in order to compare the
decrease in variation in imbalances of network traffic datasets sampled for
these algorithms. Various parameters, such as missing classes in samples and the
probability of sampling the different instances, have been considered for
comparison.
| no_new_dataset | 0.950778 |
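One of the simplest strategies in the family surveyed above is random undersampling of majority classes so that every class stays represented in the training sample. A minimal sketch follows; the class names and sizes are invented.

```python
# Random undersampling of majority classes in an imbalanced traffic dataset.
import numpy as np

rng = np.random.default_rng(0)
labels = np.array(["normal"] * 9500 + ["dos"] * 450 + ["probe"] * 50)
features = rng.normal(size=(labels.size, 4))          # stand-in flow features

per_class = 50                                        # size of the smallest class
idx = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), size=per_class, replace=False)
    for c in np.unique(labels)
])
X_bal, y_bal = features[idx], labels[idx]
print({c: int((y_bal == c).sum()) for c in np.unique(y_bal)})
```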
1311.2100 | Nandish Jayaram | Nandish Jayaram and Arijit Khan and Chengkai Li and Xifeng Yan and
Ramez Elmasri | Querying Knowledge Graphs by Example Entity Tuples | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We witness an unprecedented proliferation of knowledge graphs that record
millions of entities and their relationships. While knowledge graphs are
structure-flexible and content rich, they are difficult to use. The challenge
lies in the gap between their overwhelming complexity and the limited database
knowledge of non-professional users. If writing structured queries over simple
tables is difficult, complex graphs are only harder to query. As an initial
step toward improving the usability of knowledge graphs, we propose to query
such data by example entity tuples, without requiring users to form complex
graph queries. Our system, GQBE (Graph Query By Example), automatically derives
a weighted hidden maximal query graph based on input query tuples, to capture a
user's query intent. It efficiently finds and ranks the top approximate answer
tuples. For fast query processing, GQBE only partially evaluates query graphs.
We conducted experiments and user studies on the large Freebase and DBpedia
datasets and observed appealing accuracy and efficiency. Our system provides a
complementary approach to the existing keyword-based methods, facilitating
user-friendly graph querying. To the best of our knowledge, no such proposal
has been made previously in the context of graphs.
| [
{
"version": "v1",
"created": "Fri, 8 Nov 2013 22:47:39 GMT"
}
] | 2013-11-12T00:00:00 | [
[
"Jayaram",
"Nandish",
""
],
[
"Khan",
"Arijit",
""
],
[
"Li",
"Chengkai",
""
],
[
"Yan",
"Xifeng",
""
],
[
"Elmasri",
"Ramez",
""
]
] | TITLE: Querying Knowledge Graphs by Example Entity Tuples
ABSTRACT: We witness an unprecedented proliferation of knowledge graphs that record
millions of entities and their relationships. While knowledge graphs are
structure-flexible and content rich, they are difficult to use. The challenge
lies in the gap between their overwhelming complexity and the limited database
knowledge of non-professional users. If writing structured queries over simple
tables is difficult, complex graphs are only harder to query. As an initial
step toward improving the usability of knowledge graphs, we propose to query
such data by example entity tuples, without requiring users to form complex
graph queries. Our system, GQBE (Graph Query By Example), automatically derives
a weighted hidden maximal query graph based on input query tuples, to capture a
user's query intent. It efficiently finds and ranks the top approximate answer
tuples. For fast query processing, GQBE only partially evaluates query graphs.
We conducted experiments and user studies on the large Freebase and DBpedia
datasets and observed appealing accuracy and efficiency. Our system provides a
complementary approach to the existing keyword-based methods, facilitating
user-friendly graph querying. To the best of our knowledge, no such proposal
has been made previously in the context of graphs.
| no_new_dataset | 0.940188 |
1311.2139 | Sundararajan Sellamanickam | P. Balamurugan, Shirish Shevade, Sundararajan Sellamanickam | Large Margin Semi-supervised Structured Output Learning | 9 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In structured output learning, obtaining labelled data for real-world
applications is usually costly, while unlabelled examples are available in
abundance. Semi-supervised structured classification has been developed to
handle large amounts of unlabelled structured data. In this work, we consider
semi-supervised structural SVMs with domain constraints. The optimization
problem, which in general is not convex, contains the loss terms associated
with the labelled and unlabelled examples along with the domain constraints. We
propose a simple optimization approach, which alternates between solving a
supervised learning problem and a constraint matching problem. Solving the
constraint matching problem is difficult for structured prediction, and we
propose an efficient and effective hill-climbing method to solve it. The
alternating optimization is carried out within a deterministic annealing
framework, which helps in effective constraint matching, and avoiding local
minima which are not very useful. The algorithm is simple to implement and
achieves comparable generalization performance on benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 9 Nov 2013 06:47:22 GMT"
}
] | 2013-11-12T00:00:00 | [
[
"Balamurugan",
"P.",
""
],
[
"Shevade",
"Shirish",
""
],
[
"Sellamanickam",
"Sundararajan",
""
]
] | TITLE: Large Margin Semi-supervised Structured Output Learning
ABSTRACT: In structured output learning, obtaining labelled data for real-world
applications is usually costly, while unlabelled examples are available in
abundance. Semi-supervised structured classification has been developed to
handle large amounts of unlabelled structured data. In this work, we consider
semi-supervised structural SVMs with domain constraints. The optimization
problem, which in general is not convex, contains the loss terms associated
with the labelled and unlabelled examples along with the domain constraints. We
propose a simple optimization approach, which alternates between solving a
supervised learning problem and a constraint matching problem. Solving the
constraint matching problem is difficult for structured prediction, and we
propose an efficient and effective hill-climbing method to solve it. The
alternating optimization is carried out within a deterministic annealing
framework, which helps in effective constraint matching, and avoiding local
minima which are not very useful. The algorithm is simple to implement and
achieves comparable generalization performance on benchmark datasets.
| no_new_dataset | 0.949902 |
1311.2276 | Sundararajan Sellamanickam | Vinod Nair, Rahul Kidambi, Sundararajan Sellamanickam, S. Sathiya
Keerthi, Johannes Gehrke, Vijay Narayanan | A Quantitative Evaluation Framework for Missing Value Imputation
Algorithms | 9 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of quantitatively evaluating missing value imputation
algorithms. Given a dataset with missing values and a choice of several
imputation algorithms to fill them in, there is currently no principled way to
rank the algorithms using a quantitative metric. We develop a framework based
on treating imputation evaluation as a problem of comparing two distributions
and show how it can be used to compute quantitative metrics. We present an
efficient procedure for applying this framework to practical datasets,
demonstrate several metrics derived from the existing literature on comparing
distributions, and propose a new metric called Neighborhood-based Dissimilarity
Score which is fast to compute and provides similar results. Results are shown
on several datasets, metrics, and imputation algorithms.
| [
{
"version": "v1",
"created": "Sun, 10 Nov 2013 14:17:47 GMT"
}
] | 2013-11-12T00:00:00 | [
[
"Nair",
"Vinod",
""
],
[
"Kidambi",
"Rahul",
""
],
[
"Sellamanickam",
"Sundararajan",
""
],
[
"Keerthi",
"S. Sathiya",
""
],
[
"Gehrke",
"Johannes",
""
],
[
"Narayanan",
"Vijay",
""
]
] | TITLE: A Quantitative Evaluation Framework for Missing Value Imputation
Algorithms
ABSTRACT: We consider the problem of quantitatively evaluating missing value imputation
algorithms. Given a dataset with missing values and a choice of several
imputation algorithms to fill them in, there is currently no principled way to
rank the algorithms using a quantitative metric. We develop a framework based
on treating imputation evaluation as a problem of comparing two distributions
and show how it can be used to compute quantitative metrics. We present an
efficient procedure for applying this framework to practical datasets,
demonstrate several metrics derived from the existing literature on comparing
distributions, and propose a new metric called Neighborhood-based Dissimilarity
Score which is fast to compute and provides similar results. Results are shown
on several datasets, metrics, and imputation algorithms.
| no_new_dataset | 0.946448 |
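The evaluation idea above can be illustrated with a toy masking experiment: hide some observed entries, impute them, and compare the imputed values against the held-out truth with a distribution distance. The sketch uses mean imputation and a KS distance purely for illustration; it is not the paper's Neighborhood-based Dissimilarity Score.

```python
# Toy imputation evaluation: mask entries, impute, compare distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 1))

mask = rng.random(X.shape) < 0.2          # entries we pretend are missing
truth = X[mask]
X_missing = X.copy()
X_missing[mask] = np.nan

imputed = np.full_like(truth, np.nanmean(X_missing))   # mean-imputation baseline
ks_stat, _ = stats.ks_2samp(truth, imputed)
print(f"KS distance between held-out and imputed values: {ks_stat:.3f}")
```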
1311.2378 | Balamurugan Palaniappan | P. Balamurugan, Shirish Shevade, S. Sundararajan and S. S Keerthi | An Empirical Evaluation of Sequence-Tagging Trainers | 18 pages, 5 figures ams.org | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of assigning label sequences to a set of observed sequences is
common in computational linguistics. Several models for sequence labeling have
been proposed over the last few years. Here, we focus on discriminative models
for sequence labeling. Many batch and online (updating model parameters after
visiting each example) learning algorithms have been proposed in the
literature. On large datasets, online algorithms are preferred as batch
learning methods are slow. These online algorithms were designed to solve
either a primal or a dual problem. However, there has been no systematic
comparison of these algorithms in terms of their speed, generalization
performance (accuracy/likelihood) and their ability to achieve steady state
generalization performance fast. With this aim, we compare different algorithms
and make recommendations, useful for a practitioner. We conclude that the
selection of an algorithm for sequence labeling depends on the evaluation
criterion used and its implementation simplicity.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2013 08:26:09 GMT"
}
] | 2013-11-12T00:00:00 | [
[
"Balamurugan",
"P.",
""
],
[
"Shevade",
"Shirish",
""
],
[
"Sundararajan",
"S.",
""
],
[
"Keerthi",
"S. S",
""
]
] | TITLE: An Empirical Evaluation of Sequence-Tagging Trainers
ABSTRACT: The task of assigning label sequences to a set of observed sequences is
common in computational linguistics. Several models for sequence labeling have
been proposed over the last few years. Here, we focus on discriminative models
for sequence labeling. Many batch and online (updating model parameters after
visiting each example) learning algorithms have been proposed in the
literature. On large datasets, online algorithms are preferred as batch
learning methods are slow. These online algorithms were designed to solve
either a primal or a dual problem. However, there has been no systematic
comparison of these algorithms in terms of their speed, generalization
performance (accuracy/likelihood) and their ability to achieve steady state
generalization performance fast. With this aim, we compare different algorithms
and make recommendations, useful for a practitioner. We conclude that the
selection of an algorithm for sequence labeling depends on the evaluation
criterion used and its implementation simplicity.
| no_new_dataset | 0.950273 |
1302.6557 | Richard M Jiang | Richard M Jiang | Geodesic-based Salient Object Detection | The manuscript was submitted to a conference. Due to anonymous review
policy by the conference, I'd like to withdraw it temporarily | This is a revised version of our submissions to CVPR 2012, SIGRAPH
Asia 2012, and CVPR 2013; | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Saliency detection has been an intuitive way to provide useful cues for
object detection and segmentation, as desired for many vision and graphics
applications. In this paper, we provided a robust method for salient object
detection and segmentation. Rather than using various pixel-level contrast
definitions, we exploited global image structures and proposed a new geodesic
method dedicated for salient object detection. In the proposed approach, a new
geodesic scheme, namely geodesic tunneling, is proposed to tackle textures
and local chaotic structures. With our new geodesic approach, a geodesic
saliency map is estimated in correspondence to spatial structures in an image.
Experimental evaluation on a salient object benchmark dataset validated that
our algorithm consistently outperformed a number of the state-of-the-art saliency
methods, yielding higher precision and better recall rates. With the robust
saliency estimation, we also present an unsupervised hierarchical salient
object cut scheme simply using adaptive saliency thresholding, which attained
the highest score in our F-measure test. We also applied our geodesic cut
scheme to a number of image editing tasks as demonstrated in additional
experiments.
| [
{
"version": "v1",
"created": "Tue, 26 Feb 2013 19:52:02 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Aug 2013 18:41:55 GMT"
}
] | 2013-11-11T00:00:00 | [
[
"Jiang",
"Richard M",
""
]
] | TITLE: Geodesic-based Salient Object Detection
ABSTRACT: Saliency detection has been an intuitive way to provide useful cues for
object detection and segmentation, as desired for many vision and graphics
applications. In this paper, we provided a robust method for salient object
detection and segmentation. Rather than using various pixel-level contrast
definitions, we exploited global image structures and proposed a new geodesic
method dedicated for salient object detection. In the proposed approach, a new
geodesic scheme, namely geodesic tunneling, is proposed to tackle textures
and local chaotic structures. With our new geodesic approach, a geodesic
saliency map is estimated in correspondence to spatial structures in an image.
Experimental evaluation on a salient object benchmark dataset validated that
our algorithm consistently outperformed a number of the state-of-the-art saliency
methods, yielding higher precision and better recall rates. With the robust
saliency estimation, we also present an unsupervised hierarchical salient
object cut scheme simply using adaptive saliency thresholding, which attained
the highest score in our F-measure test. We also applied our geodesic cut
scheme to a number of image editing tasks as demonstrated in additional
experiments.
| no_new_dataset | 0.952662 |
1302.3101 | Matus Medo | An Zeng, Stanislao Gualdi, Matus Medo, Yi-Cheng Zhang | Trend prediction in temporal bipartite networks: the case of Movielens,
Netflix, and Digg | 9 pages, 1 table, 5 figures | Advances in Complex Systems 16, 1350024, 2013 | 10.1142/S0219525913500240 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online systems where users purchase or collect items of some kind can be
effectively represented by temporal bipartite networks where both nodes and
links are added with time. We use this representation to predict which items
might become popular in the near future. Various prediction methods are
evaluated on three distinct datasets originating from popular online services
(Movielens, Netflix, and Digg). We show that the prediction performance can be
further enhanced if the user social network is known and centrality of
individual users in this network is used to weight their actions.
| [
{
"version": "v1",
"created": "Wed, 13 Feb 2013 14:09:33 GMT"
}
] | 2013-11-08T00:00:00 | [
[
"Zeng",
"An",
""
],
[
"Gualdi",
"Stanislao",
""
],
[
"Medo",
"Matus",
""
],
[
"Zhang",
"Yi-Cheng",
""
]
] | TITLE: Trend prediction in temporal bipartite networks: the case of Movielens,
Netflix, and Digg
ABSTRACT: Online systems where users purchase or collect items of some kind can be
effectively represented by temporal bipartite networks where both nodes and
links are added with time. We use this representation to predict which items
might become popular in the near future. Various prediction methods are
evaluated on three distinct datasets originating from popular online services
(Movielens, Netflix, and Digg). We show that the prediction performance can be
further enhanced if the user social network is known and centrality of
individual users in this network is used to weight their actions.
| no_new_dataset | 0.949435 |
1305.0258 | Nathan Monnig | Nathan D. Monnig, Bengt Fornberg, and Francois G. Meyer | Inverting Nonlinear Dimensionality Reduction with Scale-Free Radial
Basis Function Interpolation | Accepted for publication in Applied and Computational Harmonic
Analysis | null | null | null | math.NA cs.NA physics.data-an stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonlinear dimensionality reduction embeddings computed from datasets do not
provide a mechanism to compute the inverse map. In this paper, we address the
problem of computing a stable inverse map to such a general bi-Lipschitz map.
Our approach relies on radial basis functions (RBFs) to interpolate the inverse
map everywhere on the low-dimensional image of the forward map. We demonstrate
that the scale-free cubic RBF kernel performs better than the Gaussian kernel:
it does not suffer from ill-conditioning, and does not require the choice of a
scale. The proposed construction is shown to be similar to the Nystr\"om
extension of the eigenvectors of the symmetric normalized graph Laplacian
matrix. Based on this observation, we provide a new interpretation of the
Nystr\"om extension with suggestions for improvement.
| [
{
"version": "v1",
"created": "Wed, 1 May 2013 19:55:06 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2013 15:49:52 GMT"
}
] | 2013-11-06T00:00:00 | [
[
"Monnig",
"Nathan D.",
""
],
[
"Fornberg",
"Bengt",
""
],
[
"Meyer",
"Francois G.",
""
]
] | TITLE: Inverting Nonlinear Dimensionality Reduction with Scale-Free Radial
Basis Function Interpolation
ABSTRACT: Nonlinear dimensionality reduction embeddings computed from datasets do not
provide a mechanism to compute the inverse map. In this paper, we address the
problem of computing a stable inverse map to such a general bi-Lipschitz map.
Our approach relies on radial basis functions (RBFs) to interpolate the inverse
map everywhere on the low-dimensional image of the forward map. We demonstrate
that the scale-free cubic RBF kernel performs better than the Gaussian kernel:
it does not suffer from ill-conditioning, and does not require the choice of a
scale. The proposed construction is shown to be similar to the Nystr\"om
extension of the eigenvectors of the symmetric normalized graph Laplacian
matrix. Based on this observation, we provide a new interpretation of the
Nystr\"om extension with suggestions for improvement.
| no_new_dataset | 0.951549 |
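A minimal sketch of the cubic-RBF inverse-map idea discussed above, using SciPy's legacy Rbf interpolator on a toy 1-D "embedding" of a 2-D spiral; the real setting would use an embedding produced by a nonlinear dimensionality reduction method, and this is not the authors' code.

```python
# Cubic RBF interpolation of an inverse map (toy data, one RBF per output dim).
import numpy as np
from scipy.interpolate import Rbf

t = np.linspace(0.1, 4 * np.pi, 200)               # "embedding" coordinate
X = np.c_[t * np.cos(t), t * np.sin(t)]            # original 2-D points

inv_x = Rbf(t, X[:, 0], function="cubic")          # scale-free cubic kernel
inv_y = Rbf(t, X[:, 1], function="cubic")

t_new = np.array([2.0, 7.5])                       # new embedding coordinates
print(np.c_[inv_x(t_new), inv_y(t_new)])           # reconstructed 2-D points
```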
1306.5554 | Brian McWilliams | Brian McWilliams, David Balduzzi and Joachim M. Buhmann | Correlated random features for fast semi-supervised learning | 15 pages, 3 figures, 6 tables | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents Correlated Nystrom Views (XNV), a fast semi-supervised
algorithm for regression and classification. The algorithm draws on two main
ideas. First, it generates two views consisting of computationally inexpensive
random features. Second, XNV applies multiview regression using Canonical
Correlation Analysis (CCA) on unlabeled data to bias the regression towards
useful features. It has been shown that, if the views contain accurate
estimators, CCA regression can substantially reduce variance with a minimal
increase in bias. Random views are justified by recent theoretical and
empirical work showing that regression with random features closely
approximates kernel regression, implying that random views can be expected to
contain accurate estimators. We show that XNV consistently outperforms a
state-of-the-art algorithm for semi-supervised learning: substantially
improving predictive performance and reducing the variability of performance on
a wide variety of real-world datasets, whilst also reducing runtime by orders
of magnitude.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2013 09:49:08 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2013 11:28:33 GMT"
}
] | 2013-11-06T00:00:00 | [
[
"McWilliams",
"Brian",
""
],
[
"Balduzzi",
"David",
""
],
[
"Buhmann",
"Joachim M.",
""
]
] | TITLE: Correlated random features for fast semi-supervised learning
ABSTRACT: This paper presents Correlated Nystrom Views (XNV), a fast semi-supervised
algorithm for regression and classification. The algorithm draws on two main
ideas. First, it generates two views consisting of computationally inexpensive
random features. Second, XNV applies multiview regression using Canonical
Correlation Analysis (CCA) on unlabeled data to bias the regression towards
useful features. It has been shown that, if the views contain accurate
estimators, CCA regression can substantially reduce variance with a minimal
increase in bias. Random views are justified by recent theoretical and
empirical work showing that regression with random features closely
approximates kernel regression, implying that random views can be expected to
contain accurate estimators. We show that XNV consistently outperforms a
state-of-the-art algorithm for semi-supervised learning: substantially
improving predictive performance and reducing the variability of performance on
a wide variety of real-world datasets, whilst also reducing runtime by orders
of magnitude.
| no_new_dataset | 0.946597 |
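A rough sketch of the XNV recipe above, assuming scikit-learn: generate two views of random Fourier features, align them with CCA on data treated as unlabeled, and fit a ridge regressor on the canonical components of a small labeled subset. Hyperparameters and data are invented; this is not the authors' implementation.

```python
# Sketch of CCA over two random-feature views plus ridge regression.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)

view1 = RBFSampler(n_components=100, random_state=1).fit_transform(X)
view2 = RBFSampler(n_components=100, random_state=2).fit_transform(X)

cca = CCA(n_components=20).fit(view1, view2)       # uses all rows as "unlabeled"
Z = cca.transform(view1)                           # canonical components

labeled = slice(0, 100)                            # pretend only 100 labels exist
model = Ridge(alpha=1.0).fit(Z[labeled], y[labeled])
print("R^2 on held-out rows:", model.score(Z[1000:], y[1000:]))
```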
1311.0914 | Cho-Jui Hsieh Cho-Jui Hsieh | Cho-Jui Hsieh and Si Si and Inderjit S. Dhillon | A Divide-and-Conquer Solver for Kernel Support Vector Machines | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The kernel support vector machine (SVM) is one of the most widely used
classification methods; however, the amount of computation required becomes the
bottleneck when facing millions of samples. In this paper, we propose and
analyze a novel divide-and-conquer solver for kernel SVMs (DC-SVM). In the
division step, we partition the kernel SVM problem into smaller subproblems by
clustering the data, so that each subproblem can be solved independently and
efficiently. We show theoretically that the support vectors identified by the
subproblem solution are likely to be support vectors of the entire kernel SVM
problem, provided that the problem is partitioned appropriately by kernel
clustering. In the conquer step, the local solutions from the subproblems are
used to initialize a global coordinate descent solver, which converges quickly
as suggested by our analysis. By extending this idea, we develop a multilevel
Divide-and-Conquer SVM algorithm with adaptive clustering and early prediction
strategy, which outperforms state-of-the-art methods in terms of training
speed, testing accuracy, and memory usage. As an example, on the covtype
dataset with half-a-million samples, DC-SVM is 7 times faster than LIBSVM in
obtaining the exact SVM solution (to within $10^{-6}$ relative error) which
achieves 96.15% prediction accuracy. Moreover, with our proposed early
prediction strategy, DC-SVM achieves about 96% accuracy in only 12 minutes,
which is more than 100 times faster than LIBSVM.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2013 22:06:40 GMT"
}
] | 2013-11-06T00:00:00 | [
[
"Hsieh",
"Cho-Jui",
""
],
[
"Si",
"Si",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: A Divide-and-Conquer Solver for Kernel Support Vector Machines
ABSTRACT: The kernel support vector machine (SVM) is one of the most widely used
classification methods; however, the amount of computation required becomes the
bottleneck when facing millions of samples. In this paper, we propose and
analyze a novel divide-and-conquer solver for kernel SVMs (DC-SVM). In the
division step, we partition the kernel SVM problem into smaller subproblems by
clustering the data, so that each subproblem can be solved independently and
efficiently. We show theoretically that the support vectors identified by the
subproblem solution are likely to be support vectors of the entire kernel SVM
problem, provided that the problem is partitioned appropriately by kernel
clustering. In the conquer step, the local solutions from the subproblems are
used to initialize a global coordinate descent solver, which converges quickly
as suggested by our analysis. By extending this idea, we develop a multilevel
Divide-and-Conquer SVM algorithm with adaptive clustering and early prediction
strategy, which outperforms state-of-the-art methods in terms of training
speed, testing accuracy, and memory usage. As an example, on the covtype
dataset with half-a-million samples, DC-SVM is 7 times faster than LIBSVM in
obtaining the exact SVM solution (to within $10^{-6}$ relative error) which
achieves 96.15% prediction accuracy. Moreover, with our proposed early
prediction strategy, DC-SVM achieves about 96% accuracy in only 12 minutes,
which is more than 100 times faster than LIBSVM.
| no_new_dataset | 0.952442 |
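To give a feel for the divide step described above, the sketch below clusters the data, solves a kernel SVM per cluster, pools the local support vectors, and solves a final SVM on that pool. This is a simplified approximation for illustration only, not the authors' DC-SVM code, which additionally uses multilevel refinement and warm-started coordinate descent.

```python
# Simplified divide-and-conquer kernel SVM: cluster, solve locally, re-solve on SVs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
parts = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

sv_idx = []
for k in range(8):
    idx = np.flatnonzero(parts == k)
    if len(np.unique(y[idx])) < 2:              # skip single-class clusters
        continue
    local = SVC(kernel="rbf", C=1.0).fit(X[idx], y[idx])
    sv_idx.extend(idx[local.support_])          # keep local support vectors

final = SVC(kernel="rbf", C=1.0).fit(X[sv_idx], y[sv_idx])
print("pooled SVs:", len(sv_idx), "final SVs:", int(final.n_support_.sum()))
```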
1311.1169 | D\'aniel Kondor Mr | D\'aniel Kondor, Istv\'an Csabai, L\'aszl\'o Dobos, J\'anos Sz\"ule,
Norbert Barankai, Tam\'as Hanyecz, Tam\'as Seb\H{o}k, Zs\'ofia Kallus,
G\'abor Vattay | Using Robust PCA to estimate regional characteristics of language use
from geo-tagged Twitter messages | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal component analysis (PCA) and related techniques have been
successfully employed in natural language processing. Text mining applications
in the age of the online social media (OSM) face new challenges due to
properties specific to these use cases (e.g. spelling issues specific to texts
posted by users, the presence of spammers and bots, service announcements,
etc.). In this paper, we employ a Robust PCA technique to separate typical
outliers and highly localized topics from the low-dimensional structure present
in language use in online social networks. Our focus is on identifying
geospatial features among the messages posted by the users of the Twitter
microblogging service. Using a dataset which consists of over 200 million
geolocated tweets collected over the course of a year, we investigate whether
the information present in word usage frequencies can be used to identify
regional features of language use and topics of interest. Using the PCA pursuit
method, we are able to identify important low-dimensional features, which
constitute smoothly varying functions of the geographic location.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2013 19:31:33 GMT"
}
] | 2013-11-06T00:00:00 | [
[
"Kondor",
"Dániel",
""
],
[
"Csabai",
"István",
""
],
[
"Dobos",
"László",
""
],
[
"Szüle",
"János",
""
],
[
"Barankai",
"Norbert",
""
],
[
"Hanyecz",
"Tamás",
""
],
[
"Sebők",
"Tamás",
""
],
[
"Kallus",
"Zsófia",
""
],
[
"Vattay",
"Gábor",
""
]
] | TITLE: Using Robust PCA to estimate regional characteristics of language use
from geo-tagged Twitter messages
ABSTRACT: Principal component analysis (PCA) and related techniques have been
successfully employed in natural language processing. Text mining applications
in the age of the online social media (OSM) face new challenges due to
properties specific to these use cases (e.g. spelling issues specific to texts
posted by users, the presence of spammers and bots, service announcements,
etc.). In this paper, we employ a Robust PCA technique to separate typical
outliers and highly localized topics from the low-dimensional structure present
in language use in online social networks. Our focus is on identifying
geospatial features among the messages posted by the users of the Twitter
microblogging service. Using a dataset which consists of over 200 million
geolocated tweets collected over the course of a year, we investigate whether
the information present in word usage frequencies can be used to identify
regional features of language use and topics of interest. Using the PCA pursuit
method, we are able to identify important low-dimensional features, which
constitute smoothly varying functions of the geographic location.
| new_dataset | 0.91452 |
1311.1194 | Saif Mohammad Dr. | Saif M. Mohammad, Svetlana Kiritchenko, and Joel Martin | Identifying Purpose Behind Electoral Tweets | null | In Proceedings of the KDD Workshop on Issues of Sentiment
Discovery and Opinion Mining (WISDOM-2013), August 2013, Chicago, USA | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tweets pertaining to a single event, such as a national election, can number
in the hundreds of millions. Automatically analyzing them is beneficial in many
downstream natural language applications such as question answering and
summarization. In this paper, we propose a new task: identifying the purpose
behind electoral tweets--why do people post election-oriented tweets? We show
that identifying purpose is correlated with the related phenomenon of sentiment
and emotion detection, but yet significantly different. Detecting purpose has a
number of applications including detecting the mood of the electorate,
estimating the popularity of policies, identifying key issues of contention,
and predicting the course of events. We create a large dataset of electoral
tweets and annotate a few thousand tweets for purpose. We develop a system that
automatically classifies electoral tweets as per their purpose, obtaining an
accuracy of 43.56% on an 11-class task and an accuracy of 73.91% on a 3-class
task (both accuracies well above the most-frequent-class baseline). Finally, we
show that resources developed for emotion detection are also helpful for
detecting purpose.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2013 20:55:23 GMT"
}
] | 2013-11-06T00:00:00 | [
[
"Mohammad",
"Saif M.",
""
],
[
"Kiritchenko",
"Svetlana",
""
],
[
"Martin",
"Joel",
""
]
] | TITLE: Identifying Purpose Behind Electoral Tweets
ABSTRACT: Tweets pertaining to a single event, such as a national election, can number
in the hundreds of millions. Automatically analyzing them is beneficial in many
downstream natural language applications such as question answering and
summarization. In this paper, we propose a new task: identifying the purpose
behind electoral tweets--why do people post election-oriented tweets? We show
that identifying purpose is correlated with the related phenomenon of sentiment
and emotion detection, but yet significantly different. Detecting purpose has a
number of applications including detecting the mood of the electorate,
estimating the popularity of policies, identifying key issues of contention,
and predicting the course of events. We create a large dataset of electoral
tweets and annotate a few thousand tweets for purpose. We develop a system that
automatically classifies electoral tweets as per their purpose, obtaining an
accuracy of 43.56% on an 11-class task and an accuracy of 73.91% on a 3-class
task (both accuracies well above the most-frequent-class baseline). Finally, we
show that resources developed for emotion detection are also helpful for
detecting purpose.
| new_dataset | 0.952838 |
1210.3384 | Shankar Vembu | Wei Jiao, Shankar Vembu, Amit G. Deshwar, Lincoln Stein, Quaid Morris | Inferring clonal evolution of tumors from single nucleotide somatic
mutations | null | null | null | null | cs.LG q-bio.PE q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-throughput sequencing allows the detection and quantification of
frequencies of somatic single nucleotide variants (SNV) in heterogeneous tumor
cell populations. In some cases, the evolutionary history and population
frequency of the subclonal lineages of tumor cells present in the sample can be
reconstructed from these SNV frequency measurements. However, automated methods
to do this reconstruction are not available and the conditions under which
reconstruction is possible have not been described.
We describe the conditions under which the evolutionary history can be
uniquely reconstructed from SNV frequencies from single or multiple samples
from the tumor population and we introduce a new statistical model, PhyloSub,
that infers the phylogeny and genotype of the major subclonal lineages
represented in the population of cancer cells. It uses a Bayesian nonparametric
prior over trees that groups SNVs into major subclonal lineages and
automatically estimates the number of lineages and their ancestry. We sample
from the joint posterior distribution over trees to identify evolutionary
histories and cell population frequencies that have the highest probability of
generating the observed SNV frequency data. When multiple phylogenies are
consistent with a given set of SNV frequencies, PhyloSub represents the
uncertainty in the tumor phylogeny using a partial order plot. Experiments on a
simulated dataset and two real datasets comprising tumor samples from acute
myeloid leukemia and chronic lymphocytic leukemia patients demonstrate that
PhyloSub can infer both linear (or chain) and branching lineages and its
inferences are in good agreement with ground truth, where it is available.
| [
{
"version": "v1",
"created": "Thu, 11 Oct 2012 22:20:33 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Oct 2012 18:41:13 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Jun 2013 18:35:00 GMT"
},
{
"version": "v4",
"created": "Sat, 2 Nov 2013 21:38:34 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Jiao",
"Wei",
""
],
[
"Vembu",
"Shankar",
""
],
[
"Deshwar",
"Amit G.",
""
],
[
"Stein",
"Lincoln",
""
],
[
"Morris",
"Quaid",
""
]
] | TITLE: Inferring clonal evolution of tumors from single nucleotide somatic
mutations
ABSTRACT: High-throughput sequencing allows the detection and quantification of
frequencies of somatic single nucleotide variants (SNV) in heterogeneous tumor
cell populations. In some cases, the evolutionary history and population
frequency of the subclonal lineages of tumor cells present in the sample can be
reconstructed from these SNV frequency measurements. However, automated methods
to do this reconstruction are not available and the conditions under which
reconstruction is possible have not been described.
We describe the conditions under which the evolutionary history can be
uniquely reconstructed from SNV frequencies from single or multiple samples
from the tumor population and we introduce a new statistical model, PhyloSub,
that infers the phylogeny and genotype of the major subclonal lineages
represented in the population of cancer cells. It uses a Bayesian nonparametric
prior over trees that groups SNVs into major subclonal lineages and
automatically estimates the number of lineages and their ancestry. We sample
from the joint posterior distribution over trees to identify evolutionary
histories and cell population frequencies that have the highest probability of
generating the observed SNV frequency data. When multiple phylogenies are
consistent with a given set of SNV frequencies, PhyloSub represents the
uncertainty in the tumor phylogeny using a partial order plot. Experiments on a
simulated dataset and two real datasets comprising tumor samples from acute
myeloid leukemia and chronic lymphocytic leukemia patients demonstrate that
PhyloSub can infer both linear (or chain) and branching lineages and its
inferences are in good agreement with ground truth, where it is available.
| no_new_dataset | 0.942082 |
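As a concrete (and heavily simplified) illustration of the kind of reconstruction constraint the PhyloSub record above alludes to, the sketch below checks one standard consistency condition for subclonal trees: a parent lineage's SNV frequency must cover the sum of its children's frequencies. This is not the paper's model or its exact identifiability conditions; the node names and frequencies are hypothetical.

```python
# Illustrative sketch (not the authors' PhyloSub implementation): checks the
# simple tree-consistency condition that a parent lineage's SNV frequency must
# be at least the sum of its children's. Node names and the example
# frequencies below are hypothetical.

def tree_is_consistent(freq, children, root="clonal"):
    """Return True if every parent's frequency covers the sum of its children."""
    stack = [root]
    while stack:
        node = stack.pop()
        kids = children.get(node, [])
        if sum(freq[k] for k in kids) > freq[node] + 1e-9:
            return False
        stack.extend(kids)
    return True

# Hypothetical example: a chain (clonal -> subclone A -> subclone B) is
# consistent, while two sibling subclones whose frequencies sum past the
# parent's are not.
freq = {"clonal": 0.9, "A": 0.55, "B": 0.5}
print(tree_is_consistent(freq, {"clonal": ["A"], "A": ["B"]}))   # True
print(tree_is_consistent(freq, {"clonal": ["A", "B"]}))          # False
```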
1305.4583 | Xin Zhao | Xin Zhao | Parallel Coordinates Guided High Dimensional Transfer Function Design | 6 pages, 5 figures. This paper has been withdrawn by the author due
to publication | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-dimensional transfer function design is widely used to provide
appropriate data classification for direct volume rendering of various
datasets. However, its design is a complicated task. Parallel coordinate plot
(PCP), as a powerful visualization tool, can efficiently display
high-dimensional geometry and accurately analyze multivariate data. In this
paper, we propose to combine parallel coordinates with dimensional reduction
methods to guide high-dimensional transfer function design. Our pipeline has
two major advantages: (1) combine and display extracted high-dimensional
features in parameter space; and (2) select appropriate high-dimensional
parameters, with the help of dimensional reduction methods, to obtain
sophisticated data classification as transfer function for volume rendering. In
order to efficiently design high-dimensional transfer functions, the
combination of both parallel coordinate components and dimension reduction
results is necessary to generate final visualization results. We demonstrate
the capability of our method for direct volume rendering using various CT and
MRI datasets.
| [
{
"version": "v1",
"created": "Mon, 20 May 2013 17:27:29 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Nov 2013 21:39:13 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Zhao",
"Xin",
""
]
] | TITLE: Parallel Coordinates Guided High Dimensional Transfer Function Design
ABSTRACT: High-dimensional transfer function design is widely used to provide
appropriate data classification for direct volume rendering of various
datasets. However, its design is a complicated task. Parallel coordinate plot
(PCP), as a powerful visualization tool, can efficiently display
high-dimensional geometry and accurately analyze multivariate data. In this
paper, we propose to combine parallel coordinates with dimensional reduction
methods to guide high-dimensional transfer function design. Our pipeline has
two major advantages: (1) combine and display extracted high-dimensional
features in parameter space; and (2) select appropriate high-dimensional
parameters, with the help of dimensional reduction methods, to obtain
sophisticated data classification as transfer function for volume rendering. In
order to efficiently design high-dimensional transfer functions, the
combination of both parallel coordinate components and dimension reduction
results is necessary to generate final visualization results. We demonstrate
the capability of our method for direct volume rendering using various CT and
MRI datasets.
| no_new_dataset | 0.954009 |
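The record above describes driving transfer-function design from a parallel coordinate plot over high-dimensional voxel features. The snippet below is a minimal, hedged sketch of that visual idea using pandas' built-in parallel_coordinates on synthetic features; it is not the paper's pipeline, and the feature names and the "material" labeling stand in for a real transfer-function selection.

```python
# Minimal sketch of the parallel-coordinate idea described above (not the
# paper's pipeline): each sample becomes a polyline across feature axes, and a
# class column colors the lines. Feature names and values are synthetic.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "intensity": rng.normal(0.5, 0.1, n),
    "gradient":  rng.normal(0.3, 0.05, n),
    "curvature": rng.normal(0.0, 0.02, n),
})
# Hypothetical two-class labeling standing in for a transfer-function selection.
df["material"] = np.where(df["intensity"] > 0.5, "bone", "soft tissue")

parallel_coordinates(df, class_column="material", alpha=0.3)
plt.title("Parallel coordinates over volume features (synthetic data)")
plt.show()
```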
1305.6143 | Vivek Narayanan | Vivek Narayanan, Ishan Arora, Arjun Bhatia | Fast and accurate sentiment classification using an enhanced Naive Bayes
model | 8 pages, 2 figures | Intelligent Data Engineering and Automated Learning IDEAL 2013
Lecture Notes in Computer Science Volume 8206, 2013, pp 194-201 | 10.1007/978-3-642-41278-3_24 | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have explored different methods of improving the accuracy of a Naive Bayes
classifier for sentiment analysis. We observed that a combination of methods
like negation handling, word n-grams and feature selection by mutual
information results in a significant improvement in accuracy. This implies that
a highly accurate and fast sentiment classifier can be built using a simple
Naive Bayes model that has linear training and testing time complexities. We
achieved an accuracy of 88.80% on the popular IMDB movie reviews dataset.
| [
{
"version": "v1",
"created": "Mon, 27 May 2013 08:37:26 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Sep 2013 05:36:29 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Narayanan",
"Vivek",
""
],
[
"Arora",
"Ishan",
""
],
[
"Bhatia",
"Arjun",
""
]
] | TITLE: Fast and accurate sentiment classification using an enhanced Naive Bayes
model
ABSTRACT: We have explored different methods of improving the accuracy of a Naive Bayes
classifier for sentiment analysis. We observed that a combination of methods
like negation handling, word n-grams and feature selection by mutual
information results in a significant improvement in accuracy. This implies that
a highly accurate and fast sentiment classifier can be built using a simple
Naive Bayes model that has linear training and testing time complexities. We
achieved an accuracy of 88.80% on the popular IMDB movie reviews dataset.
| no_new_dataset | 0.950824 |
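The enhancements listed in the record above (negation handling, word n-grams, mutual-information feature selection) map naturally onto a scikit-learn pipeline. The sketch below is one plausible rendering of that combination, not the authors' code; the toy corpus and the crude NOT_-prefix negation rule are illustrative assumptions.

```python
# Sketch of the kind of enhanced Naive Bayes pipeline the abstract describes
# (word n-grams + mutual-information feature selection + multinomial NB); this
# is not the authors' code, and the tiny inline corpus is purely illustrative.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB

def mark_negation(text):
    # Crude negation handling: prefix tokens that follow "not"/"no"/"n't"
    # until the next punctuation, so "not good" yields the feature "NOT_good".
    out, negate = [], False
    for tok in text.lower().split():
        if tok in {"not", "no"} or tok.endswith("n't"):
            negate = True
            out.append(tok)
            continue
        if any(p in tok for p in ".,!?;"):
            negate = False
        out.append("NOT_" + tok if negate else tok)
    return " ".join(out)

texts = ["a great, uplifting movie", "not good at all", "wonderful acting",
         "boring and not worth watching"]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("vec", CountVectorizer(preprocessor=mark_negation, ngram_range=(1, 2))),
    ("sel", SelectKBest(mutual_info_classif, k=10)),
    ("nb", MultinomialNB()),
])
clf.fit(texts, labels)
print(clf.predict(["not a great movie"]))
```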
1306.0811 | Giovanni Zappella | Nicol\`o Cesa-Bianchi, Claudio Gentile and Giovanni Zappella | A Gang of Bandits | NIPS 2013 | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-armed bandit problems are receiving a great deal of attention because
they adequately formalize the exploration-exploitation trade-offs arising in
several industrially relevant applications, such as online advertisement and,
more generally, recommendation systems. In many cases, however, these
applications have a strong social component, whose integration in the bandit
algorithm could lead to a dramatic performance increase. For instance, we may
want to serve content to a group of users by taking advantage of an underlying
network of social relationships among them. In this paper, we introduce novel
algorithmic approaches to the solution of such networked bandit problems. More
specifically, we design and analyze a global strategy which allocates a bandit
algorithm to each network node (user) and allows it to "share" signals
(contexts and payoffs) with the neighboring nodes. We then derive two more
scalable variants of this strategy based on different ways of clustering the
graph nodes. We experimentally compare the algorithm and its variants to
state-of-the-art methods for contextual bandits that do not use the relational
information. Our experiments, carried out on synthetic and real-world datasets,
show a marked increase in prediction performance obtained by exploiting the
network structure.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2013 14:24:31 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Oct 2013 16:32:25 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Nov 2013 10:07:42 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Cesa-Bianchi",
"Nicolò",
""
],
[
"Gentile",
"Claudio",
""
],
[
"Zappella",
"Giovanni",
""
]
] | TITLE: A Gang of Bandits
ABSTRACT: Multi-armed bandit problems are receiving a great deal of attention because
they adequately formalize the exploration-exploitation trade-offs arising in
several industrially relevant applications, such as online advertisement and,
more generally, recommendation systems. In many cases, however, these
applications have a strong social component, whose integration in the bandit
algorithm could lead to a dramatic performance increase. For instance, we may
want to serve content to a group of users by taking advantage of an underlying
network of social relationships among them. In this paper, we introduce novel
algorithmic approaches to the solution of such networked bandit problems. More
specifically, we design and analyze a global strategy which allocates a bandit
algorithm to each network node (user) and allows it to "share" signals
(contexts and payoffs) with the neighboring nodes. We then derive two more
scalable variants of this strategy based on different ways of clustering the
graph nodes. We experimentally compare the algorithm and its variants to
state-of-the-art methods for contextual bandits that do not use the relational
information. Our experiments, carried out on synthetic and real-world datasets,
show a marked increase in prediction performance obtained by exploiting the
network structure.
| no_new_dataset | 0.942981 |
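The record above describes allocating a contextual bandit to each user (network node) and sharing observed signals with graph neighbours. The sketch below illustrates that idea with a per-user LinUCB whose updates are also applied to neighbouring users; it is a simplified stand-in for the paper's method, and the graph, contexts and rewards are synthetic.

```python
# Hedged sketch of the networked-bandit idea described above: each user runs a
# LinUCB-style learner, and observed (context, payoff) pairs are also shared
# with graph neighbours. This is a simplified illustration, not the paper's
# algorithm; the graph, contexts and payoffs below are synthetic.
import numpy as np

class SharedLinUCB:
    def __init__(self, n_users, dim, neighbours, alpha=0.5):
        self.A = [np.eye(dim) for _ in range(n_users)]    # per-user Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_users)]  # per-user reward vectors
        self.neighbours = neighbours
        self.alpha = alpha

    def choose(self, user, arms):
        # Optimistic (mean + exploration bonus) score for each candidate arm.
        theta = np.linalg.solve(self.A[user], self.b[user])
        A_inv = np.linalg.inv(self.A[user])
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        return int(np.argmax(scores))

    def update(self, user, x, reward):
        # Share the observed signal with the user and its neighbours.
        for u in [user] + list(self.neighbours[user]):
            self.A[u] += np.outer(x, x)
            self.b[u] += reward * x

# Tiny synthetic run: 3 users on a path graph, 5-dimensional contexts.
rng = np.random.default_rng(0)
neighbours = {0: [1], 1: [0, 2], 2: [1]}
true_w = rng.normal(size=5)
bandit = SharedLinUCB(n_users=3, dim=5, neighbours=neighbours)
for t in range(200):
    user = t % 3
    arms = [rng.normal(size=5) for _ in range(4)]
    k = bandit.choose(user, arms)
    reward = float(arms[k] @ true_w + rng.normal(scale=0.1))
    bandit.update(user, arms[k], reward)
```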