id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1405.5661
|
Yan Kit Li
|
Yan Kit Li, Min Xu, Chun Ho Ng, Patrick P. C. Lee
|
Efficient Hybrid Inline and Out-of-line Deduplication for Backup Storage
| null | null | null | null |
cs.DC cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backup storage systems often remove redundancy across backups via inline
deduplication, which works by referring duplicate chunks of the latest backup
to those of existing backups. However, inline deduplication degrades restore
performance of the latest backup due to fragmentation, and complicates deletion
of expired backups due to the sharing of data chunks. While out-of-line
deduplication addresses the problems by forward-pointing existing duplicate
chunks to those of the latest backup, it introduces additional I/Os of writing
and removing duplicate chunks. We design and implement RevDedup, an efficient
hybrid inline and out-of-line deduplication system for backup storage. It
applies coarse-grained inline deduplication to remove duplicates of the latest
backup, and then fine-grained out-of-line reverse deduplication to remove
duplicates from older backups. Our reverse deduplication design limits the I/O
overhead and prepares for efficient deletion of expired backups. Through
extensive testbed experiments using synthetic and real-world datasets, we show
that RevDedup can bring high performance to the backup, restore, and deletion
operations, while maintaining high storage efficiency comparable to
conventional inline deduplication.
|
[
{
"version": "v1",
"created": "Thu, 22 May 2014 08:13:18 GMT"
}
] | 2014-05-23T00:00:00 |
[
[
"Li",
"Yan Kit",
""
],
[
"Xu",
"Min",
""
],
[
"Ng",
"Chun Ho",
""
],
[
"Lee",
"Patrick P. C.",
""
]
] |
TITLE: Efficient Hybrid Inline and Out-of-line Deduplication for Backup Storage
ABSTRACT: Backup storage systems often remove redundancy across backups via inline
deduplication, which works by referring duplicate chunks of the latest backup
to those of existing backups. However, inline deduplication degrades restore
performance of the latest backup due to fragmentation, and complicates deletion
of expired backups due to the sharing of data chunks. While out-of-line
deduplication addresses the problems by forward-pointing existing duplicate
chunks to those of the latest backup, it introduces additional I/Os of writing
and removing duplicate chunks. We design and implement RevDedup, an efficient
hybrid inline and out-of-line deduplication system for backup storage. It
applies coarse-grained inline deduplication to remove duplicates of the latest
backup, and then fine-grained out-of-line reverse deduplication to remove
duplicates from older backups. Our reverse deduplication design limits the I/O
overhead and prepares for efficient deletion of expired backups. Through
extensive testbed experiments using synthetic and real-world datasets, we show
that RevDedup can bring high performance to the backup, restore, and deletion
operations, while maintaining high storage efficiency comparable to
conventional inline deduplication.
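The coarse-grained inline step described above boils down to fingerprinting large chunks and writing only unseen ones. Below is a minimal, hypothetical sketch of that inline part only (chunk size, the in-memory index, and all names are illustrative assumptions; it is not the RevDedup implementation and omits the reverse, out-of-line stage):

```python
import hashlib

class InlineDedupStore:
    """Toy inline deduplication: a chunk is stored only if its fingerprint is new."""

    def __init__(self, chunk_size=4 * 1024 * 1024):   # coarse-grained (large) chunks
        self.chunk_size = chunk_size
        self.index = {}                                # fingerprint -> chunk id
        self.chunks = []                               # stored chunk payloads

    def write_backup(self, data: bytes):
        """Split a backup stream into chunks and return the list of chunk references."""
        refs = []
        for off in range(0, len(data), self.chunk_size):
            chunk = data[off:off + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.index:                   # unseen chunk: write it
                self.index[fp] = len(self.chunks)
                self.chunks.append(chunk)
            refs.append(self.index[fp])                # duplicate: reference the old copy
        return refs

store = InlineDedupStore(chunk_size=8)
store.write_backup(b"aaaaaaaabbbbbbbbcccccccc")
store.write_backup(b"aaaaaaaabbbbbbbbdddddddd")
print(len(store.chunks))   # 4 unique chunks stored instead of 6 written
```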
|
1405.5845
|
Ben Pringle
|
Ben Pringle, Mukkai Krishnamoorthy, Kenneth Simons
|
Case study to approaches to finding patterns in citation networks
|
16 pages, 6 figures
| null | null | null |
cs.DL cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/3.0/
|
Analysis of a dataset including a network of LED patents and their metadata
is carried out using several methods in order to answer questions about the
domain. We are interested in finding the relationship between the metadata and
the network structure; for example, are central patents in the network produced
by larger or smaller companies? We begin by exploring the structure of the
network without any metadata, applying known techniques in citation analysis
and a simple clustering scheme. These techniques are then combined with
metadata analysis to draw preliminary conclusions about the dataset.
|
[
{
"version": "v1",
"created": "Thu, 22 May 2014 18:21:51 GMT"
}
] | 2014-05-23T00:00:00 |
[
[
"Pringle",
"Ben",
""
],
[
"Krishnamoorthy",
"Mukkai",
""
],
[
"Simons",
"Kenneth",
""
]
] |
TITLE: Case study to approaches to finding patterns in citation networks
ABSTRACT: Analysis of a dataset including a network of LED patents and their metadata
is carried out using several methods in order to answer questions about the
domain. We are interested in finding the relationship between the metadata and
the network structure; for example, are central patents in the network produced
by larger or smaller companies? We begin by exploring the structure of the
network without any metadata, applying known techniques in citation analysis
and a simple clustering scheme. These techniques are then combined with
metadata analysis to draw preliminary conclusions about the dataset.
|
1405.5869
|
Ping Li
|
Anshumali Shrivastava and Ping Li
|
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search
(MIPS)
| null | null | null | null |
stat.ML cs.DS cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first provably sublinear time algorithm for approximate
\emph{Maximum Inner Product Search} (MIPS). Our proposal is also the first
hashing algorithm for searching with (un-normalized) inner product as the
underlying similarity measure. Finding hashing schemes for MIPS was considered
hard. We formally show that the existing Locality Sensitive Hashing (LSH)
framework is insufficient for solving MIPS, and then we extend the existing LSH
framework to allow asymmetric hashing schemes. Our proposal is based on an
interesting mathematical phenomenon in which inner products, after independent
asymmetric transformations, can be converted into the problem of approximate
near neighbor search. This key observation makes an efficient sublinear hashing
scheme for MIPS possible. In the extended asymmetric LSH (ALSH) framework, we
provide an explicit construction of a provably fast hashing scheme for MIPS. The
proposed construction and the extended LSH framework could be of independent
theoretical interest. Our proposed algorithm is simple and easy to implement.
We evaluate the method, for retrieving inner products, in the collaborative
filtering task of item recommendations on Netflix and Movielens datasets.
|
[
{
"version": "v1",
"created": "Thu, 22 May 2014 19:42:57 GMT"
}
] | 2014-05-23T00:00:00 |
[
[
"Shrivastava",
"Anshumali",
""
],
[
"Li",
"Ping",
""
]
] |
TITLE: Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search
(MIPS)
ABSTRACT: We present the first provably sublinear time algorithm for approximate
\emph{Maximum Inner Product Search} (MIPS). Our proposal is also the first
hashing algorithm for searching with (un-normalized) inner product as the
underlying similarity measure. Finding hashing schemes for MIPS was considered
hard. We formally show that the existing Locality Sensitive Hashing (LSH)
framework is insufficient for solving MIPS, and then we extend the existing LSH
framework to allow asymmetric hashing schemes. Our proposal is based on an
interesting mathematical phenomenon in which inner products, after independent
asymmetric transformations, can be converted into the problem of approximate
near neighbor search. This key observation makes an efficient sublinear hashing
scheme for MIPS possible. In the extended asymmetric LSH (ALSH) framework, we
provide an explicit construction of a provably fast hashing scheme for MIPS. The
proposed construction and the extended LSH framework could be of independent
theoretical interest. Our proposed algorithm is simple and easy to implement.
We evaluate the method, for retrieving inner products, in the collaborative
filtering task of item recommendations on Netflix and Movielens datasets.
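As a concrete illustration of how an asymmetric transformation can turn inner products into a Euclidean near-neighbor problem, here is a small numpy sketch of one well-known construction of this kind; the scaling constant U, the number of augmentation terms m, and the function names are assumptions for illustration, and the LSH hashing itself (random projections over the transformed vectors) is omitted:

```python
import numpy as np

def preprocess_database(X, m=3, U=0.83):
    """P(x): rescale so all norms are below U < 1, then append ||x||^2, ||x||^4, ..., ||x||^(2^m)."""
    X = np.asarray(X, dtype=float)
    X = X * (U / np.max(np.linalg.norm(X, axis=1)))
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.hstack([X] + [norms ** (2 ** (i + 1)) for i in range(m)])

def preprocess_query(q, m=3):
    """Q(q): normalize the query, then append m constants equal to 1/2."""
    q = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.concatenate([q, np.full(m, 0.5)])

# With these transforms, ||Q(q) - P(x)||^2 = 1 + m/4 - 2*(q/||q||).x + ||x||^(2^(m+1));
# the last term shrinks as m grows, so the Euclidean nearest neighbor of Q(q)
# approximately maximizes the inner product q.x, which standard LSH can then handle.
rng = np.random.default_rng(0)
X, q = rng.normal(size=(1000, 16)), rng.normal(size=16)
P, Q = preprocess_database(X), preprocess_query(q)
print(np.argmin(np.linalg.norm(P - Q, axis=1)), np.argmax(X @ q))   # approximate vs exact MIPS answer
```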
|
1405.5488
|
Marc'Aurelio Ranzato
|
Marc'Aurelio Ranzato
|
On Learning Where To Look
|
deep learning, vision
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current automatic vision systems face two major challenges: scalability and
extreme variability of appearance. First, the computational time required to
process an image typically scales linearly with the number of pixels in the
image, therefore limiting the resolution of input images to thumbnail size.
Second, variability in appearance and pose of the objects constitute a major
hurdle for robust recognition and detection. In this work, we propose a model
that makes baby steps towards addressing these challenges. We describe a
learning based method that recognizes objects through a series of glimpses.
This system performs an amount of computation that scales with the complexity
of the input rather than its number of pixels. Moreover, the proposed method is
potentially more robust to changes in appearance since its parameters are
learned in a data driven manner. Preliminary experiments on a handwritten
dataset of digits demonstrate the computational advantages of this approach.
|
[
{
"version": "v1",
"created": "Thu, 24 Apr 2014 02:29:19 GMT"
}
] | 2014-05-22T00:00:00 |
[
[
"Ranzato",
"Marc'Aurelio",
""
]
] |
TITLE: On Learning Where To Look
ABSTRACT: Current automatic vision systems face two major challenges: scalability and
extreme variability of appearance. First, the computational time required to
process an image typically scales linearly with the number of pixels in the
image, therefore limiting the resolution of input images to thumbnail size.
Second, variability in appearance and pose of the objects constitute a major
hurdle for robust recognition and detection. In this work, we propose a model
that makes baby steps towards addressing these challenges. We describe a
learning based method that recognizes objects through a series of glimpses.
This system performs an amount of computation that scales with the complexity
of the input rather than its number of pixels. Moreover, the proposed method is
potentially more robust to changes in appearance since its parameters are
learned in a data driven manner. Preliminary experiments on a handwritten
dataset of digits demonstrate the computational advantages of this approach.
|
1405.4979
|
Razen Al-Harbi
|
Razen Al-Harbi, Yasser Ebrahim, Panos Kalnis
|
PHD-Store: An Adaptive SPARQL Engine with Dynamic Partitioning for
Distributed RDF Repositories
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many repositories utilize the versatile RDF model to publish data.
Repositories are typically distributed and geographically remote, but data are
interconnected (e.g., the Semantic Web) and queried globally by a language such
as SPARQL. Due to the network cost and the nature of the queries, the execution
time can be prohibitively high. Current solutions attempt to minimize the
network cost by redistributing all data in a preprocessing phase, but there are
two drawbacks: (i) redistribution is based on heuristics that may not benefit
many of the future queries; and (ii) the preprocessing phase is very expensive
even for moderate size datasets. In this paper we propose PHD-Store, a SPARQL
engine for distributed RDF repositories. Our system does not assume any
particular initial data placement and does not require prepartitioning; hence,
it minimizes the startup cost. Initially, PHD-Store answers queries using a
potentially slow distributed semi-join algorithm, but adapts dynamically to the
query load by incrementally redistributing frequently accessed data.
Redistribution is done in a way that future queries can benefit from fast
hash-based parallel execution. Our experiments with synthetic and real data
verify that PHD-Store scales to very large datasets; many repositories;
converges to comparable or better quality of partitioning than existing
methods; and executes large query loads 1 to 2 orders of magnitude faster than
our competitors.
|
[
{
"version": "v1",
"created": "Tue, 20 May 2014 07:44:03 GMT"
}
] | 2014-05-21T00:00:00 |
[
[
"Al-Harbi",
"Razen",
""
],
[
"Ebrahim",
"Yasser",
""
],
[
"Kalnis",
"Panos",
""
]
] |
TITLE: PHD-Store: An Adaptive SPARQL Engine with Dynamic Partitioning for
Distributed RDF Repositories
ABSTRACT: Many repositories utilize the versatile RDF model to publish data.
Repositories are typically distributed and geographically remote, but data are
interconnected (e.g., the Semantic Web) and queried globally by a language such
as SPARQL. Due to the network cost and the nature of the queries, the execution
time can be prohibitively high. Current solutions attempt to minimize the
network cost by redistributing all data in a preprocessing phase, but there are
two drawbacks: (i) redistribution is based on heuristics that may not benefit
many of the future queries; and (ii) the preprocessing phase is very expensive
even for moderate size datasets. In this paper we propose PHD-Store, a SPARQL
engine for distributed RDF repositories. Our system does not assume any
particular initial data placement and does not require prepartitioning; hence,
it minimizes the startup cost. Initially, PHD-Store answers queries using a
potentially slow distributed semi-join algorithm, but adapts dynamically to the
query load by incrementally redistributing frequently accessed data.
Redistribution is done in a way that future queries can benefit from fast
hash-based parallel execution. Our experiments with synthetic and real data
verify that PHD-Store scales to very large datasets; many repositories;
converges to comparable or better quality of partitioning than existing
methods; and executes large query loads 1 to 2 orders of magnitude faster than
our competitors.
|
1405.5097
|
Junzhou Zhao
|
Junzhou Zhao, John C.S. Lui, Don Towsley, Pinghui Wang, and Xiaohong
Guan
|
Design of Efficient Sampling Methods on Hybrid Social-Affiliation
Networks
|
11 pages, 13 figures, technique report
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph sampling via crawling has become increasingly popular and important in
the study of measuring various characteristics of large-scale complex networks.
While powerful, it is known to be challenging when the graph is loosely
connected or disconnected, which slows down the convergence of random walks and
can cause poor estimation accuracy.
  In this work, we observe that the graph under study, also called the target
graph, usually does not exist in isolation. In many situations, the target
graph is related to an auxiliary graph and an affiliation graph, and the target
graph becomes well connected when we view it from the perspective of these
three graphs together, which we call a hybrid social-affiliation graph in this
paper. When directly sampling the target graph is difficult or inefficient, we
can indirectly sample it efficiently with the assistance of the other two
graphs. We design three sampling methods on such a hybrid social-affiliation
network. Experiments conducted on both synthetic and real datasets demonstrate
the effectiveness of our proposed methods.
|
[
{
"version": "v1",
"created": "Tue, 20 May 2014 14:17:19 GMT"
}
] | 2014-05-21T00:00:00 |
[
[
"Zhao",
"Junzhou",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Towsley",
"Don",
""
],
[
"Wang",
"Pinghui",
""
],
[
"Guan",
"Xiaohong",
""
]
] |
TITLE: Design of Efficient Sampling Methods on Hybrid Social-Affiliation
Networks
ABSTRACT: Graph sampling via crawling has become increasingly popular and important in
the study of measuring various characteristics of large-scale complex networks.
While powerful, it is known to be challenging when the graph is loosely
connected or disconnected, which slows down the convergence of random walks and
can cause poor estimation accuracy.
  In this work, we observe that the graph under study, also called the target
graph, usually does not exist in isolation. In many situations, the target
graph is related to an auxiliary graph and an affiliation graph, and the target
graph becomes well connected when we view it from the perspective of these
three graphs together, which we call a hybrid social-affiliation graph in this
paper. When directly sampling the target graph is difficult or inefficient, we
can indirectly sample it efficiently with the assistance of the other two
graphs. We design three sampling methods on such a hybrid social-affiliation
network. Experiments conducted on both synthetic and real datasets demonstrate
the effectiveness of our proposed methods.
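A minimal sketch of the underlying intuition, sampling a poorly connected target graph indirectly through its union with affiliation edges, is given below (the toy graphs and the plain random walk are illustrative assumptions, not the three estimators designed in the paper):

```python
import random
import networkx as nx

def random_walk(G, start, steps, seed=0):
    """Plain random walk; returns the set of visited nodes."""
    rng, node, visited = random.Random(seed), start, set()
    for _ in range(steps):
        nbrs = list(G.neighbors(node))
        if not nbrs:
            break
        node = rng.choice(nbrs)
        visited.add(node)
    return visited

# Target graph: two disconnected components; a crawl of one never reaches the other.
target = nx.Graph([(0, 1), (1, 2), (3, 4), (4, 5)])
# Affiliation edges: every target node belongs to a shared group node "g", so the
# hybrid graph (target + affiliation edges) is well connected and easy to crawl.
hybrid = target.copy()
hybrid.add_edges_from((n, "g") for n in target.nodes)

print(sorted(random_walk(target, 0, 200)))                          # stuck in one component
print(sorted(n for n in random_walk(hybrid, 0, 200) if n != "g"))   # reaches both components
```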
|
1405.5158
|
Yoshiaki Sakagami Ms.
|
Yoshiaki Sakagami and Pedro A. A. Santos and Reinaldo Haas and Julio
C. Passos and Frederico F. Taves
|
Logarithmic Wind Profile: A Stability Wind Shear Term
| null | null | null | null |
physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A stability wind shear term for the logarithmic wind profile, based on the terms
of the turbulent kinetic energy equation, is proposed. The fraction influenced
by thermal stratification is considered in the shear production term. This
thermally affected shear is compared with the buoyancy term, resulting in a
stability wind shear term. The Reynolds stress is also treated as a sum of two
components associated with wind shear from the mechanical and thermal
stratification processes. The stability wind shear accounts for the Reynolds
stress of the thermal stratification term, and also for the Reynolds stress of
the mechanical term under non-neutral conditions. The wind profile and its
derivative are validated with data from the Pedra do Sal experiment, on flat
terrain 300 m from the shoreline on the northeast coast of Brazil. The site is
close to the Equator, so the meteorological conditions are strongly influenced
by trade winds and sea breezes. The site has one 100 m tower with five
instrumented levels, one 3D sonic anemometer, and a medium-range wind lidar
profiler reaching up to 500 m. The dataset is processed and filtered from
September to November of 2013, which yields about 550 hours of available data.
The results show the derivative of the wind profile with R^2 of 0.87 and RMSE
of 0.08 m/s. The calculated wind profile performs well up to 400 m under
unstable conditions and up to 280 m under stable conditions, with R^2 better
than 0.89. The proposed equation is valid for this specific site and is limited
to a steady-state condition with constant turbulent fluxes in the surface layer.
|
[
{
"version": "v1",
"created": "Tue, 20 May 2014 17:19:20 GMT"
}
] | 2014-05-21T00:00:00 |
[
[
"Sakagami",
"Yoshiaki",
""
],
[
"Santos",
"Pedro A. A.",
""
],
[
"Haas",
"Reinaldo",
""
],
[
"Passos",
"Julio C.",
""
],
[
"Taves",
"Frederico F.",
""
]
] |
TITLE: Logarithmic Wind Profile: A Stability Wind Shear Term
ABSTRACT: A stability wind shear term for the logarithmic wind profile, based on the terms
of the turbulent kinetic energy equation, is proposed. The fraction influenced
by thermal stratification is considered in the shear production term. This
thermally affected shear is compared with the buoyancy term, resulting in a
stability wind shear term. The Reynolds stress is also treated as a sum of two
components associated with wind shear from the mechanical and thermal
stratification processes. The stability wind shear accounts for the Reynolds
stress of the thermal stratification term, and also for the Reynolds stress of
the mechanical term under non-neutral conditions. The wind profile and its
derivative are validated with data from the Pedra do Sal experiment, on flat
terrain 300 m from the shoreline on the northeast coast of Brazil. The site is
close to the Equator, so the meteorological conditions are strongly influenced
by trade winds and sea breezes. The site has one 100 m tower with five
instrumented levels, one 3D sonic anemometer, and a medium-range wind lidar
profiler reaching up to 500 m. The dataset is processed and filtered from
September to November of 2013, which yields about 550 hours of available data.
The results show the derivative of the wind profile with R^2 of 0.87 and RMSE
of 0.08 m/s. The calculated wind profile performs well up to 400 m under
unstable conditions and up to 280 m under stable conditions, with R^2 better
than 0.89. The proposed equation is valid for this specific site and is limited
to a steady-state condition with constant turbulent fluxes in the surface layer.
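For reference, the standard stability-corrected (diabatic) logarithmic wind profile from Monin-Obukhov similarity theory, which this kind of stability wind shear analysis builds on, can be written as below; this is the textbook form, not the specific stability wind shear term derived in the paper:

```latex
% u(z): mean wind speed at height z,  u_*: friction velocity,  \kappa \approx 0.4,
% z_0: roughness length,  L: Obukhov length,  \psi_m, \phi_m: stability functions.
u(z) = \frac{u_*}{\kappa}\left[\ln\frac{z}{z_0} - \psi_m\!\left(\frac{z}{L}\right)\right],
\qquad
\frac{\partial u}{\partial z} = \frac{u_*}{\kappa z}\,\phi_m\!\left(\frac{z}{L}\right).
```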
|
1206.6214
|
Stefan Hennemann
|
Stefan Hennemann and Ben Derudder
|
An Alternative Approach to the Calculation and Analysis of Connectivity
in the World City Network
|
18 pages, 9 figures, 2 tables
|
Environment and Planning B: Planning and Design 41(3) 392-412
|
10.1068/b39108
| null |
physics.soc-ph cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Empirical research on world cities often draws on Taylor's (2001) notion of
an 'interlocking network model', in which office networks of globalized service
firms are assumed to shape the spatialities of urban networks. In spite of its
many merits, this approach is limited because the resultant adjacency matrices
are not really fit for network-analytic calculations. We therefore propose a
fresh analytical approach using a primary linkage algorithm that produces a
one-mode directed graph based on Taylor's two-mode city/firm network data. The
procedure has the advantage of creating less dense networks when compared to
the interlocking network model, while nonetheless retaining the network
structure apparent in the initial dataset. We randomize the empirical network
with a bootstrapping simulation approach, and compare the simulated parameters
of this null-model with our empirical network parameter (i.e. betweenness
centrality). We find that our approach produces results that are comparable to
those of the standard interlocking network model. However, because our approach
is based on an actual graph representation and network analysis, we are able to
assess cities' position in the network at large. For instance, we find that
cities such as Tokyo, Sydney, Melbourne, Almaty and Karachi hold more strategic
and valuable positions than suggested in the interlocking networks as they play
a bridging role in connecting cities across regions. In general, we argue that
our graph representation allows for further and deeper analysis of the original
data, further extending world city network research into a theory-based
empirical research approach.
|
[
{
"version": "v1",
"created": "Wed, 27 Jun 2012 09:33:33 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Hennemann",
"Stefan",
""
],
[
"Derudder",
"Ben",
""
]
] |
TITLE: An Alternative Approach to the Calculation and Analysis of Connectivity
in the World City Network
ABSTRACT: Empirical research on world cities often draws on Taylor's (2001) notion of
an 'interlocking network model', in which office networks of globalized service
firms are assumed to shape the spatialities of urban networks. In spite of its
many merits, this approach is limited because the resultant adjacency matrices
are not really fit for network-analytic calculations. We therefore propose a
fresh analytical approach using a primary linkage algorithm that produces a
one-mode directed graph based on Taylor's two-mode city/firm network data. The
procedure has the advantage of creating less dense networks when compared to
the interlocking network model, while nonetheless retaining the network
structure apparent in the initial dataset. We randomize the empirical network
with a bootstrapping simulation approach, and compare the simulated parameters
of this null-model with our empirical network parameter (i.e. betweenness
centrality). We find that our approach produces results that are comparable to
those of the standard interlocking network model. However, because our approach
is based on an actual graph representation and network analysis, we are able to
assess cities' position in the network at large. For instance, we find that
cities such as Tokyo, Sydney, Melbourne, Almaty and Karachi hold more strategic
and valuable positions than suggested in the interlocking networks as they play
a bridging role in connecting cities across regions. In general, we argue that
our graph representation allows for further and deeper analysis of the original
data, further extending world city network research into a theory-based
empirical research approach.
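Once the two-mode city/firm data has been turned into a one-mode directed graph, the betweenness comparison is a standard computation; the sketch below uses networkx and a crude "point every secondary office at the firm's top city" projection as a rough, assumed stand-in for the primary linkage algorithm (the toy data are invented):

```python
import networkx as nx

# Toy two-mode (firm -> city) data with office importance scores.
city_firm = {
    "FirmA": {"London": 3, "Tokyo": 2, "Sydney": 1},
    "FirmB": {"Tokyo": 3, "Karachi": 1},
    "FirmC": {"London": 2, "Almaty": 1},
}

# Crude one-mode directed projection: each secondary office points to the
# firm's highest-scoring city (a simplistic stand-in for primary linkage).
G = nx.DiGraph()
for firm, offices in city_firm.items():
    primary = max(offices, key=offices.get)
    for city in offices:
        if city != primary:
            G.add_edge(city, primary)

# Betweenness centrality on the resulting directed graph; here Tokyo scores
# highest because it bridges Karachi into the London-centered core.
bc = nx.betweenness_centrality(G)
for city, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{city:8s} {score:.3f}")
```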
|
1402.4963
|
Julius Hannink
|
Julius Hannink, Remco Duits and Erik Bekkers
|
Vesselness via Multiple Scale Orientation Scores
|
9 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-scale Frangi vesselness filter is an established tool in (retinal)
vascular imaging. However, it cannot cope with crossings or bifurcations, since
it only looks for elongated structures. Therefore, we disentangle crossing
structures in the image via (multiple scale) invertible orientation scores. The
described vesselness filter via scale-orientation scores performs considerably
better at enhancing vessels throughout crossings and bifurcations than the
Frangi version. Both methods are evaluated on a public dataset. Performance is
measured by comparing ground truth data to the segmentation results obtained by
basic thresholding and morphological component analysis of the filtered images.
|
[
{
"version": "v1",
"created": "Thu, 20 Feb 2014 11:06:35 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2014 18:30:55 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2014 12:33:37 GMT"
},
{
"version": "v4",
"created": "Mon, 19 May 2014 09:20:06 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Hannink",
"Julius",
""
],
[
"Duits",
"Remco",
""
],
[
"Bekkers",
"Erik",
""
]
] |
TITLE: Vesselness via Multiple Scale Orientation Scores
ABSTRACT: The multi-scale Frangi vesselness filter is an established tool in (retinal)
vascular imaging. However, it cannot cope with crossings or bifurcations, since
it only looks for elongated structures. Therefore, we disentangle crossing
structures in the image via (multiple scale) invertible orientation scores. The
described vesselness filter via scale-orientation scores performs considerably
better at enhancing vessels throughout crossings and bifurcations than the
Frangi version. Both methods are evaluated on a public dataset. Performance is
measured by comparing ground truth data to the segmentation results obtained by
basic thresholding and morphological component analysis of the filtered images.
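For the baseline side of this comparison, the multi-scale Frangi filter is available off the shelf; below is a minimal sketch of applying it and thresholding the response (assuming a recent scikit-image; the synthetic image, scales, and threshold are placeholders, and the orientation-score extension proposed in the paper is not included):

```python
import numpy as np
from skimage.filters import frangi

# Synthetic image with one bright, slightly curved "vessel" on a dark background.
y, x = np.mgrid[0:128, 0:128]
image = np.exp(-((y - (40 + 0.002 * (x - 64) ** 2)) ** 2) / 8.0)

# Multi-scale Frangi vesselness: the response is high on elongated bright structures.
response = frangi(image, sigmas=range(1, 5), black_ridges=False)

# Basic thresholding of the filtered image, as in the evaluation protocol.
segmentation = response > 0.5 * response.max()
print(segmentation.sum(), "pixels labelled as vessel")
```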
|
1405.4301
|
Stanislav Sobolevsky
|
Stanislav Sobolevsky, Izabela Sitko, Sebastian Grauwin, Remi Tachet
des Combes, Bartosz Hawelka, Juan Murillo Arias, Carlo Ratti
|
Mining Urban Performance: Scale-Independent Classification of Cities
Based on Individual Economic Transactions
|
10 pages, 7 figures, to be published in the proceedings of ASE
BigDataScience 2014 conference
| null | null | null |
physics.soc-ph cs.SI q-fin.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intensive development of urban systems creates a number of challenges for
urban planners and policy makers in order to maintain sustainable growth.
Running efficient urban policies requires meaningful urban metrics, which could
quantify important urban characteristics including various aspects of an actual
human behavior. Since a city size is known to have a major, yet often
nonlinear, impact on the human activity, it also becomes important to develop
scale-free metrics that capture qualitative city properties, beyond the effects
of scale. Recent availability of extensive datasets created by human activity
involving digital technologies creates new opportunities in this area. In this
paper we propose a novel approach of city scoring and classification based on
quantitative scale-free metrics related to economic activity of city residents,
as well as domestic and foreign visitors. It is demonstrated on the example of
Spain, but the proposed methodology is of a general character. We employ a new
source of large-scale ubiquitous data, which consists of anonymized countrywide
records of bank card transactions collected by one of the largest Spanish
banks. Different aspects of the classification reveal important properties of
Spanish cities, which significantly complement the pattern that might be
discovered with the official socioeconomic statistics.
|
[
{
"version": "v1",
"created": "Fri, 16 May 2014 20:36:08 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Sobolevsky",
"Stanislav",
""
],
[
"Sitko",
"Izabela",
""
],
[
"Grauwin",
"Sebastian",
""
],
[
"Combes",
"Remi Tachet des",
""
],
[
"Hawelka",
"Bartosz",
""
],
[
"Arias",
"Juan Murillo",
""
],
[
"Ratti",
"Carlo",
""
]
] |
TITLE: Mining Urban Performance: Scale-Independent Classification of Cities
Based on Individual Economic Transactions
ABSTRACT: Intensive development of urban systems creates a number of challenges for
urban planners and policy makers in order to maintain sustainable growth.
Running efficient urban policies requires meaningful urban metrics, which could
quantify important urban characteristics including various aspects of an actual
human behavior. Since a city size is known to have a major, yet often
nonlinear, impact on the human activity, it also becomes important to develop
scale-free metrics that capture qualitative city properties, beyond the effects
of scale. Recent availability of extensive datasets created by human activity
involving digital technologies creates new opportunities in this area. In this
paper we propose a novel approach of city scoring and classification based on
quantitative scale-free metrics related to economic activity of city residents,
as well as domestic and foreign visitors. It is demonstrated on the example of
Spain, but the proposed methodology is of a general character. We employ a new
source of large-scale ubiquitous data, which consists of anonymized countrywide
records of bank card transactions collected by one of the largest Spanish
banks. Different aspects of the classification reveal important properties of
Spanish cities, which significantly complement the pattern that might be
discovered with the official socioeconomic statistics.
|
1405.4308
|
Le Lu
|
Meizhu Liu, Le Lu, Xiaojing Ye, Shipeng Yu
|
Coarse-to-Fine Classification via Parametric and Nonparametric Models
for Computer-Aided Diagnosis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classification is one of the core problems in Computer-Aided Diagnosis (CAD),
targeting early cancer detection using 3D medical imaging interpretation.
High detection sensitivity with desirably low false positive (FP) rate is
critical for a CAD system to be accepted as a valuable or even indispensable
tool in radiologists' workflow. Given various spurious imagery noises which
cause observation uncertainties, this remains a very challenging task. In this
paper, we propose a novel, two-tiered coarse-to-fine (CTF) classification
cascade framework to tackle this problem. We first obtain
classification-critical data samples (e.g., samples on the decision boundary)
extracted from the holistic data distributions using a robust parametric model
(e.g., \cite{Raykar08}); then we build a graph-embedding based nonparametric
classifier on sampled data, which can more accurately preserve or formulate the
complex classification boundary. These two steps can also be considered as
effective "sample pruning" and "feature pursuing + $k$NN/template matching",
respectively. Our approach is validated comprehensively in colorectal polyp
detection and lung nodule detection CAD systems, as the top two deadly cancers,
using hospital scale, multi-site clinical datasets. The results show that our
method achieves overall better classification/detection performance than
existing state-of-the-art algorithms using single-layer classifiers, such as
the support vector machine variants \cite{Wang08}, boosting \cite{Slabaugh10},
logistic regression \cite{Ravesteijn10}, relevance vector machine
\cite{Raykar08}, $k$-nearest neighbor \cite{Murphy09} or spectral projections
on graph \cite{Cai08}.
|
[
{
"version": "v1",
"created": "Fri, 16 May 2014 21:13:01 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Liu",
"Meizhu",
""
],
[
"Lu",
"Le",
""
],
[
"Ye",
"Xiaojing",
""
],
[
"Yu",
"Shipeng",
""
]
] |
TITLE: Coarse-to-Fine Classification via Parametric and Nonparametric Models
for Computer-Aided Diagnosis
ABSTRACT: Classification is one of the core problems in Computer-Aided Diagnosis (CAD),
targeting early cancer detection using 3D medical imaging interpretation.
High detection sensitivity with desirably low false positive (FP) rate is
critical for a CAD system to be accepted as a valuable or even indispensable
tool in radiologists' workflow. Given various spurious imagery noises which
cause observation uncertainties, this remains a very challenging task. In this
paper, we propose a novel, two-tiered coarse-to-fine (CTF) classification
cascade framework to tackle this problem. We first obtain
classification-critical data samples (e.g., samples on the decision boundary)
extracted from the holistic data distributions using a robust parametric model
(e.g., \cite{Raykar08}); then we build a graph-embedding based nonparametric
classifier on sampled data, which can more accurately preserve or formulate the
complex classification boundary. These two steps can also be considered as
effective "sample pruning" and "feature pursuing + $k$NN/template matching",
respectively. Our approach is validated comprehensively in colorectal polyp
detection and lung nodule detection CAD systems, as the top two deadly cancers,
using hospital scale, multi-site clinical datasets. The results show that our
method achieves overall better classification/detection performance than
existing state-of-the-art algorithms using single-layer classifiers, such as
the support vector machine variants \cite{Wang08}, boosting \cite{Slabaugh10},
logistic regression \cite{Ravesteijn10}, relevance vector machine
\cite{Raykar08}, $k$-nearest neighbor \cite{Murphy09} or spectral projections
on graph \cite{Cai08}.
|
1405.4506
|
Limin Wang
|
Xiaojiang Peng and Limin Wang and Xingxing Wang and Yu Qiao
|
Bag of Visual Words and Fusion Methods for Action Recognition:
Comprehensive Study and Good Practice
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video based action recognition is one of the important and challenging
problems in computer vision research. Bag of Visual Words model (BoVW) with
local features has become the most popular method and obtained the
state-of-the-art performance on several realistic datasets, such as the HMDB51,
UCF50, and UCF101. BoVW is a general pipeline to construct a global
representation from a set of local features, which is mainly composed of five
steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook
generation, (iv) feature encoding, and (v) pooling and normalization. Many
efforts have been made in each step independently in different scenarios and
their effect on action recognition is still unknown. Meanwhile, video data
exhibits different views of visual pattern, such as static appearance and
motion dynamics. Multiple descriptors are usually extracted to represent these
different views. Many feature fusion methods have been developed in other areas
and their influence on action recognition has never been investigated before.
This paper aims to provide a comprehensive study of all steps in BoVW and
different fusion methods, and uncover some good practice to produce a
state-of-the-art action recognition system. Specifically, we explore two kinds
of local features, ten kinds of encoding methods, eight kinds of pooling and
normalization strategies, and three kinds of fusion methods. We conclude that
every step is crucial for contributing to the final recognition rate.
Furthermore, based on our comprehensive study, we propose a simple yet
effective representation, called hybrid representation, by exploring the
complementarity of different BoVW frameworks and local descriptors. Using this
representation, we obtain the state-of-the-art on the three challenging
datasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%).
|
[
{
"version": "v1",
"created": "Sun, 18 May 2014 13:56:07 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Peng",
"Xiaojiang",
""
],
[
"Wang",
"Limin",
""
],
[
"Wang",
"Xingxing",
""
],
[
"Qiao",
"Yu",
""
]
] |
TITLE: Bag of Visual Words and Fusion Methods for Action Recognition:
Comprehensive Study and Good Practice
ABSTRACT: Video based action recognition is one of the important and challenging
problems in computer vision research. Bag of Visual Words model (BoVW) with
local features has become the most popular method and obtained the
state-of-the-art performance on several realistic datasets, such as the HMDB51,
UCF50, and UCF101. BoVW is a general pipeline to construct a global
representation from a set of local features, which is mainly composed of five
steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook
generation, (iv) feature encoding, and (v) pooling and normalization. Many
efforts have been made in each step independently in different scenarios and
their effect on action recognition is still unknown. Meanwhile, video data
exhibits different views of visual pattern, such as static appearance and
motion dynamics. Multiple descriptors are usually extracted to represent these
different views. Many feature fusion methods have been developed in other areas
and their influence on action recognition has never been investigated before.
This paper aims to provide a comprehensive study of all steps in BoVW and
different fusion methods, and uncover some good practice to produce a
state-of-the-art action recognition system. Specifically, we explore two kinds
of local features, ten kinds of encoding methods, eight kinds of pooling and
normalization strategies, and three kinds of fusion methods. We conclude that
every step is crucial for contributing to the final recognition rate.
Furthermore, based on our comprehensive study, we propose a simple yet
effective representation, called hybrid representation, by exploring the
complementarity of different BoVW frameworks and local descriptors. Using this
representation, we obtain the state-of-the-art on the three challenging
datasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%).
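Since the abstract enumerates the five BoVW steps explicitly, here is a compact, hypothetical sketch of that pipeline with one particular choice per step (random vectors stand in for local descriptors, and hard-assignment encoding with sum pooling and L2 normalization is just one of the many step combinations the paper studies):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# (i) feature extraction: pretend each video yields a variable-size set of descriptors.
videos = [rng.normal(size=(int(rng.integers(50, 100)), 32)) for _ in range(20)]

# (ii) feature pre-processing (centering) + (iii) codebook generation via k-means.
all_desc = np.vstack(videos)
mean = all_desc.mean(axis=0)
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_desc - mean)

def encode(desc):
    """(iv) hard-assignment encoding + (v) sum pooling and L2 normalization."""
    words = codebook.predict(desc - mean)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

X = np.array([encode(v) for v in videos])   # one global representation per video
print(X.shape)                              # (20, 16): ready for a classifier
```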
|
1405.4543
|
Dhruv Mahajan
|
Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan
|
A Distributed Algorithm for Training Nonlinear Kernel Machines
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper concerns the distributed training of nonlinear kernel machines on
Map-Reduce. We show that a re-formulation of Nystr\"om approximation based
solution which is solved using gradient based techniques is well suited for
this, especially when it is necessary to work with a large number of basis
points. The main advantages of this approach are: avoidance of computing the
pseudo-inverse of the kernel sub-matrix corresponding to the basis points;
simplicity and efficiency of the distributed part of the computations; and,
friendliness to stage-wise addition of basis points. We implement the method
using an AllReduce tree on Hadoop and demonstrate its value on a few large
benchmark datasets.
|
[
{
"version": "v1",
"created": "Sun, 18 May 2014 19:54:18 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Mahajan",
"Dhruv",
""
],
[
"Keerthi",
"S. Sathiya",
""
],
[
"Sundararajan",
"S.",
""
]
] |
TITLE: A Distributed Algorithm for Training Nonlinear Kernel Machines
ABSTRACT: This paper concerns the distributed training of nonlinear kernel machines on
Map-Reduce. We show that a re-formulation of Nystr\"om approximation based
solution which is solved using gradient based techniques is well suited for
this, especially when it is necessary to work with a large number of basis
points. The main advantages of this approach are: avoidance of computing the
pseudo-inverse of the kernel sub-matrix corresponding to the basis points;
simplicity and efficiency of the distributed part of the computations; and,
friendliness to stage-wise addition of basis points. We implement the method
using an AllReduce tree on Hadoop and demonstrate its value on a few large
benchmark datasets.
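The Nyström step at the core of this formulation fits in a few lines; below is a minimal single-machine sketch (RBF kernel, randomly chosen basis points, and a ridge solve on the resulting features are illustrative choices, and the distributed gradient/AllReduce training described in the paper is not shown):

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

# Nystrom approximation K ~ K_nm K_mm^{-1} K_mn, built from m basis points.
basis = X[rng.choice(len(X), size=50, replace=False)]
K_mm = rbf(basis, basis)
K_nm = rbf(X, basis)

# Feature map Z = K_nm K_mm^{-1/2}, so that Z Z^T approximates the full kernel.
w_eig, V = np.linalg.eigh(K_mm + 1e-8 * np.eye(len(basis)))
Z = K_nm @ V @ np.diag(1.0 / np.sqrt(w_eig)) @ V.T

# A linear (ridge) model in the Nystrom feature space = approximate kernel machine.
lam = 1e-2
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
print(np.mean((Z @ w - y) ** 2))   # training error of the approximate kernel fit
```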
|
1405.4572
|
R. Joshua Tobin
|
R. Joshua Tobin and Conor J. Houghton
|
A Kernel-Based Calculation of Information on a Metric Space
| null |
Entropy 2013, 15(10), 4540-4552
|
10.3390/e15104540
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kernel density estimation is a technique for approximating probability
distributions. Here, it is applied to the calculation of mutual information on
a metric space. This is motivated by the problem in neuroscience of calculating
the mutual information between stimuli and spiking responses; the space of
these responses is a metric space. It is shown that kernel density estimation
on a metric space resembles the k-nearest-neighbor approach. This approach is
applied to a toy dataset designed to mimic electrophysiological data.
|
[
{
"version": "v1",
"created": "Mon, 19 May 2014 01:17:48 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Tobin",
"R. Joshua",
""
],
[
"Houghton",
"Conor J.",
""
]
] |
TITLE: A Kernel-Based Calculation of Information on a Metric Space
ABSTRACT: Kernel density estimation is a technique for approximating probability
distributions. Here, it is applied to the calculation of mutual information on
a metric space. This is motivated by the problem in neuroscience of calculating
the mutual information between stimuli and spiking responses; the space of
these responses is a metric space. It is shown that kernel density estimation
on a metric space resembles the k-nearest-neighbor approach. This approach is
applied to a toy dataset designed to mimic electrophysiological data.
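A small sketch of the basic idea, a kernel density estimate that depends only on pairwise distances plugged into a plug-in mutual information estimate between a discrete stimulus and metric-space responses, is shown below (the Gaussian kernel on distances, the bandwidth, and the toy 1-D responses are assumptions, not the estimator defined in the paper):

```python
import numpy as np

def kde_value(i, idxs, D, h):
    """Leave-one-out kernel density value at point i, using only distances D."""
    others = [j for j in idxs if j != i]
    return np.mean(np.exp(-(D[i, others] ** 2) / (2 * h ** 2)))

rng = np.random.default_rng(0)
# Toy data: 2 stimuli; responses are points in a metric space (here just the real line).
stim = np.repeat([0, 1], 200)
resp = np.where(stim == 0, rng.normal(0.0, 1.0, 400), rng.normal(2.5, 1.0, 400))
D = np.abs(resp[:, None] - resp[None, :])    # pairwise metric distances
h = 0.5                                      # kernel bandwidth

mi = 0.0
for i in range(len(resp)):
    same = np.where(stim == stim[i])[0]
    p_cond = kde_value(i, same, D, h)                   # ~ p(r_i | s_i), up to a constant
    p_marg = kde_value(i, np.arange(len(resp)), D, h)   # ~ p(r_i), same constant (cancels)
    mi += np.log2(p_cond / p_marg)
print(mi / len(resp), "bits (plug-in estimate)")
```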
|
1405.4699
|
Thanasis Naskos
|
Athanasios Naskos, Emmanouela Stachtiari, Anastasios Gounaris,
Panagiotis Katsaros, Dimitrios Tsoumakos, Ioannis Konstantinou, Spyros
Sioutas
|
Cloud elasticity using probabilistic model checking
|
14 pages
| null | null | null |
cs.DC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud computing has become the leading paradigm for deploying large-scale
infrastructures and running big data applications, due to its capacity of
achieving economies of scale. In this work, we focus on one of the most
prominent advantages of cloud computing, namely the on-demand resource
provisioning, which is commonly referred to as elasticity. Although a lot of
effort has been invested in developing systems and mechanisms that enable
elasticity, the elasticity decision policies tend to be designed without
guaranteeing or quantifying the quality of their operation. This work aims to
make the development of elasticity policies more formalized and dependable. We
make two distinct contributions. First, we propose an extensible approach to
enforcing elasticity through the dynamic instantiation and online quantitative
verification of Markov Decision Processes (MDP) using probabilistic model
checking. Second, we propose concrete elasticity models and related elasticity
policies. We evaluate our decision policies using both real and synthetic
datasets in clusters of NoSQL databases. According to the experimental results,
our approach improves upon the state-of-the-art in significantly increasing
user-defined utility values and decreasing user-defined threshold violations.
|
[
{
"version": "v1",
"created": "Mon, 19 May 2014 12:47:16 GMT"
}
] | 2014-05-20T00:00:00 |
[
[
"Naskos",
"Athanasios",
""
],
[
"Stachtiari",
"Emmanouela",
""
],
[
"Gounaris",
"Anastasios",
""
],
[
"Katsaros",
"Panagiotis",
""
],
[
"Tsoumakos",
"Dimitrios",
""
],
[
"Konstantinou",
"Ioannis",
""
],
[
"Sioutas",
"Spyros",
""
]
] |
TITLE: Cloud elasticity using probabilistic model checking
ABSTRACT: Cloud computing has become the leading paradigm for deploying large-scale
infrastructures and running big data applications, due to its capacity of
achieving economies of scale. In this work, we focus on one of the most
prominent advantages of cloud computing, namely the on-demand resource
provisioning, which is commonly referred to as elasticity. Although a lot of
effort has been invested in developing systems and mechanisms that enable
elasticity, the elasticity decision policies tend to be designed without
guaranteeing or quantifying the quality of their operation. This work aims to
make the development of elasticity policies more formalized and dependable. We
make two distinct contributions. First, we propose an extensible approach to
enforcing elasticity through the dynamic instantiation and online quantitative
verification of Markov Decision Processes (MDP) using probabilistic model
checking. Second, we propose concrete elasticity models and related elasticity
policies. We evaluate our decision policies using both real and synthetic
datasets in clusters of NoSQL databases. According to the experimental results,
our approach improves upon the state-of-the-art in significantly increasing
user-defined utility values and decreasing user-defined threshold violations.
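To make the MDP ingredient concrete, here is a tiny, hypothetical elasticity MDP (states = cluster sizes, actions = add/remove/keep a VM) solved by plain value iteration; the states, rewards, and transition noise are invented for illustration, and the paper instead instantiates such models dynamically and analyzes them with online quantitative verification rather than this offline solve:

```python
import numpy as np

states = list(range(1, 6))      # number of VMs in the cluster
actions = [-1, 0, +1]           # remove, keep, add one VM
gamma = 0.9

def reward(vms, load=3.0):
    """Utility = served load minus operating cost; violating capacity is penalized."""
    capacity = vms * 1.0
    penalty = 5.0 if capacity < load else 0.0
    return min(capacity, load) - 0.4 * vms - penalty

def step_distribution(vms, a):
    """The scaling action succeeds with prob 0.9; otherwise the size stays unchanged."""
    target = min(max(vms + a, states[0]), states[-1])
    return {target: 0.9, vms: 0.1} if target != vms else {vms: 1.0}

# Value iteration over the finite MDP.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: max(sum(p * (reward(s2) + gamma * V[s2])
                    for s2, p in step_distribution(s, a).items())
                for a in actions)
         for s in states}

policy = {s: max(actions, key=lambda a: sum(p * (reward(s2) + gamma * V[s2])
                                            for s2, p in step_distribution(s, a).items()))
          for s in states}
print(policy)   # scales up while capacity < load, then holds
```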
|
1403.1024
|
Hyun Oh Song
|
Hyun Oh Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid
Harchaoui, Trevor Darrell
|
On learning to localize objects with minimal supervision
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to localize objects with minimal supervision is an important problem
in computer vision, since large fully annotated datasets are extremely costly
to obtain. In this paper, we propose a new method that achieves this goal with
only image-level labels of whether the objects are present or not. Our approach
combines a discriminative submodular cover problem for automatically
discovering a set of positive object windows with a smoothed latent SVM
formulation. The latter allows us to leverage efficient quasi-Newton
optimization techniques. Our experiments demonstrate that the proposed approach
provides a 50% relative improvement in mean average precision over the current
state-of-the-art on PASCAL VOC 2007 detection.
|
[
{
"version": "v1",
"created": "Wed, 5 Mar 2014 07:21:20 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2014 00:50:26 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2014 21:04:49 GMT"
},
{
"version": "v4",
"created": "Thu, 15 May 2014 22:08:59 GMT"
}
] | 2014-05-19T00:00:00 |
[
[
"Song",
"Hyun Oh",
""
],
[
"Girshick",
"Ross",
""
],
[
"Jegelka",
"Stefanie",
""
],
[
"Mairal",
"Julien",
""
],
[
"Harchaoui",
"Zaid",
""
],
[
"Darrell",
"Trevor",
""
]
] |
TITLE: On learning to localize objects with minimal supervision
ABSTRACT: Learning to localize objects with minimal supervision is an important problem
in computer vision, since large fully annotated datasets are extremely costly
to obtain. In this paper, we propose a new method that achieves this goal with
only image-level labels of whether the objects are present or not. Our approach
combines a discriminative submodular cover problem for automatically
discovering a set of positive object windows with a smoothed latent SVM
formulation. The latter allows us to leverage efficient quasi-Newton
optimization techniques. Our experiments demonstrate that the proposed approach
provides a 50% relative improvement in mean average precision over the current
state-of-the-art on PASCAL VOC 2007 detection.
|
1405.4054
|
Jianfeng Wang
|
Jianfeng Wang, Jingdong Wang, Jingkuan Song, Xin-Shun Xu, Heng Tao
Shen, Shipeng Li
|
Optimized Cartesian $K$-Means
|
to appear in IEEE TKDE, accepted in Apr. 2014
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Product quantization-based approaches are effective for encoding
high-dimensional data points for approximate nearest neighbor search. The space
is decomposed into a Cartesian product of low-dimensional subspaces, each of
which generates a sub codebook. Data points are encoded as compact binary codes
using these sub codebooks, and the distance between two data points can be
approximated efficiently from their codes by the precomputed lookup tables.
Traditionally, to encode a subvector of a data point in a subspace, only one
sub codeword in the corresponding sub codebook is selected, which may impose
strict restrictions on the search accuracy. In this paper, we propose a novel
approach, named Optimized Cartesian $K$-Means (OCKM), to better encode the data
points for more accurate approximate nearest neighbor search. In OCKM, multiple
sub codewords are used to encode the subvector of a data point in a subspace.
Each sub codeword stems from different sub codebooks in each subspace, which
are optimally generated with regards to the minimization of the distortion
errors. The high-dimensional data point is then encoded as the concatenation of
the indices of multiple sub codewords from all the subspaces. This can provide
more flexibility and lower distortion errors than traditional methods.
Experimental results on the standard real-life datasets demonstrate the
superiority over state-of-the-art approaches for approximate nearest neighbor
search.
|
[
{
"version": "v1",
"created": "Fri, 16 May 2014 03:09:01 GMT"
}
] | 2014-05-19T00:00:00 |
[
[
"Wang",
"Jianfeng",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Song",
"Jingkuan",
""
],
[
"Xu",
"Xin-Shun",
""
],
[
"Shen",
"Heng Tao",
""
],
[
"Li",
"Shipeng",
""
]
] |
TITLE: Optimized Cartesian $K$-Means
ABSTRACT: Product quantization-based approaches are effective for encoding
high-dimensional data points for approximate nearest neighbor search. The space
is decomposed into a Cartesian product of low-dimensional subspaces, each of
which generates a sub codebook. Data points are encoded as compact binary codes
using these sub codebooks, and the distance between two data points can be
approximated efficiently from their codes by the precomputed lookup tables.
Traditionally, to encode a subvector of a data point in a subspace, only one
sub codeword in the corresponding sub codebook is selected, which may impose
strict restrictions on the search accuracy. In this paper, we propose a novel
approach, named Optimized Cartesian $K$-Means (OCKM), to better encode the data
points for more accurate approximate nearest neighbor search. In OCKM, multiple
sub codewords are used to encode the subvector of a data point in a subspace.
Each sub codeword stems from different sub codebooks in each subspace, which
are optimally generated with regards to the minimization of the distortion
errors. The high-dimensional data point is then encoded as the concatenation of
the indices of multiple sub codewords from all the subspaces. This can provide
more flexibility and lower distortion errors than traditional methods.
Experimental results on the standard real-life datasets demonstrate the
superiority over state-of-the-art approaches for approximate nearest neighbor
search.
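For contrast with the proposed OCKM, here is a minimal sketch of plain product quantization, the baseline it generalizes: split vectors into subspaces, learn a small k-means sub-codebook per subspace, and encode each subvector with a single sub-codeword (the data and sub-codebook sizes are illustrative; OCKM's use of multiple sub-codewords per subspace is not implemented here):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))          # database vectors
M, K = 4, 32                             # 4 subspaces, 32 sub-codewords each
subdim = X.shape[1] // M

# Learn one k-means sub-codebook per subspace.
codebooks = [KMeans(n_clusters=K, n_init=4, random_state=0)
             .fit(X[:, m * subdim:(m + 1) * subdim]) for m in range(M)]

# Encode: one sub-codeword index per subspace -> a compact code of M small integers.
codes = np.stack([cb.predict(X[:, m * subdim:(m + 1) * subdim])
                  for m, cb in enumerate(codebooks)], axis=1)

def asymmetric_distances(q):
    """Approximate ||q - x||^2 for all x via per-subspace lookup tables, as in PQ search."""
    tables = [((q[m * subdim:(m + 1) * subdim] - cb.cluster_centers_) ** 2).sum(1)
              for m, cb in enumerate(codebooks)]
    return sum(tables[m][codes[:, m]] for m in range(M))

q = rng.normal(size=16)
print(np.argmin(asymmetric_distances(q)),            # approximate nearest neighbor
      np.argmin(((X - q) ** 2).sum(1)))              # exact nearest neighbor
```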
|
1307.0044
|
Maria Gorlatova
|
Maria Gorlatova and John Sarik and Guy Grebla and Mina Cong and
Ioannis Kymissis and Gil Zussman
|
Movers and Shakers: Kinetic Energy Harvesting for the Internet of Things
|
15 pages, 11 figures
| null | null | null |
cs.ET cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous energy harvesting wireless devices that will serve as building
blocks for the Internet of Things (IoT) are currently under development.
However, there is still only limited understanding of the properties of various
energy sources and their impact on energy harvesting adaptive algorithms.
Hence, we focus on characterizing the kinetic (motion) energy that can be
harvested by a wireless node with an IoT form factor and on developing energy
allocation algorithms for such nodes. In this paper, we describe methods for
estimating harvested energy from acceleration traces. To characterize the
energy availability associated with specific human activities (e.g., relaxing,
walking, cycling), we analyze a motion dataset with over 40 participants. Based
on acceleration measurements that we collected for over 200 hours, we study
energy generation processes associated with day-long human routines. We also
briefly summarize our experiments with moving objects. We develop energy
allocation algorithms that take into account practical IoT node design
considerations, and evaluate the algorithms using the collected measurements.
Our observations provide insights into the design of motion energy harvesters,
IoT nodes, and energy harvesting adaptive algorithms.
|
[
{
"version": "v1",
"created": "Fri, 28 Jun 2013 22:40:11 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Sep 2013 20:49:06 GMT"
},
{
"version": "v3",
"created": "Wed, 14 May 2014 22:34:35 GMT"
}
] | 2014-05-16T00:00:00 |
[
[
"Gorlatova",
"Maria",
""
],
[
"Sarik",
"John",
""
],
[
"Grebla",
"Guy",
""
],
[
"Cong",
"Mina",
""
],
[
"Kymissis",
"Ioannis",
""
],
[
"Zussman",
"Gil",
""
]
] |
TITLE: Movers and Shakers: Kinetic Energy Harvesting for the Internet of Things
ABSTRACT: Numerous energy harvesting wireless devices that will serve as building
blocks for the Internet of Things (IoT) are currently under development.
However, there is still only limited understanding of the properties of various
energy sources and their impact on energy harvesting adaptive algorithms.
Hence, we focus on characterizing the kinetic (motion) energy that can be
harvested by a wireless node with an IoT form factor and on developing energy
allocation algorithms for such nodes. In this paper, we describe methods for
estimating harvested energy from acceleration traces. To characterize the
energy availability associated with specific human activities (e.g., relaxing,
walking, cycling), we analyze a motion dataset with over 40 participants. Based
on acceleration measurements that we collected for over 200 hours, we study
energy generation processes associated with day-long human routines. We also
briefly summarize our experiments with moving objects. We develop energy
allocation algorithms that take into account practical IoT node design
considerations, and evaluate the algorithms using the collected measurements.
Our observations provide insights into the design of motion energy harvesters,
IoT nodes, and energy harvesting adaptive algorithms.
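A minimal sketch of the kind of estimate involved when turning an acceleration trace into harvestable kinetic energy is shown below, for an idealized velocity-damped (Williams-Yates style) harvester driven by a synthetic "walking" acceleration signal; the proof mass, damping coefficients, and signal are invented toy numbers, and the paper's estimation methodology from measured traces is more detailed:

```python
import numpy as np

# Synthetic base acceleration: ~2 Hz bounce plus noise, 200 Hz sampling, 60 s.
fs, T = 200.0, 60.0
t = np.arange(0, T, 1 / fs)
accel = 3.0 * np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Spring-mass harvester: mechanical plus electrical damping; the energy dissipated
# in the electrical damper is what the device can harvest. (Toy numbers; real
# harvesters also constrain the proof-mass displacement.)
m, k = 0.005, 0.005 * (2 * np.pi * 2.0) ** 2   # 5 g mass, resonant near 2 Hz
b_mech, b_elec = 0.002, 0.004                  # damping coefficients [N s/m]

z, v, harvested, dt = 0.0, 0.0, 0.0, 1 / fs
for a in accel:                                # semi-implicit Euler integration
    v += (-m * a - (b_mech + b_elec) * v - k * z) / m * dt
    z += v * dt
    harvested += b_elec * v ** 2 * dt          # energy into the electrical damper

print(f"average harvested power ~ {harvested / T:.4f} W")
```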
|
1402.1500
|
Eran Shaham Mr.
|
Eran Shaham, David Sarne, Boaz Ben-Moshe
|
Co-clustering of Fuzzy Lagged Data
|
Under consideration for publication in Knowledge and Information
Systems. The final publication is available at Springer via
http://dx.doi.org/10.1007/s10115-014-0758-7
| null |
10.1007/s10115-014-0758-7
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper focuses on mining patterns that are characterized by a fuzzy lagged
relationship between the data objects forming them. Such a regulatory mechanism
is quite common in real life settings. It appears in a variety of fields:
finance, gene expression, neuroscience, crowds and collective movements are but
a limited list of examples. Mining such patterns not only helps in
understanding the relationship between objects in the domain, but assists in
forecasting their future behavior. For most interesting variants of this
problem, finding an optimal fuzzy lagged co-cluster is an NP-complete problem.
We thus present a polynomial-time Monte-Carlo approximation algorithm for
mining fuzzy lagged co-clusters. We prove that for any data matrix, the
algorithm mines a fuzzy lagged co-cluster with fixed probability, which
encompasses the optimal fuzzy lagged co-cluster by a maximum 2 ratio columns
overhead and completely no rows overhead. Moreover, the algorithm handles
noise, anti-correlations, missing values and overlapping patterns. The
algorithm was extensively evaluated using both artificial and real datasets.
The results not only corroborate the ability of the algorithm to efficiently
mine relevant and accurate fuzzy lagged co-clusters, but also illustrate the
importance of including the fuzziness in the lagged-pattern model.
|
[
{
"version": "v1",
"created": "Thu, 6 Feb 2014 21:02:16 GMT"
},
{
"version": "v2",
"created": "Thu, 15 May 2014 12:01:08 GMT"
}
] | 2014-05-16T00:00:00 |
[
[
"Shaham",
"Eran",
""
],
[
"Sarne",
"David",
""
],
[
"Ben-Moshe",
"Boaz",
""
]
] |
TITLE: Co-clustering of Fuzzy Lagged Data
ABSTRACT: The paper focuses on mining patterns that are characterized by a fuzzy lagged
relationship between the data objects forming them. Such a regulatory mechanism
is quite common in real life settings. It appears in a variety of fields:
finance, gene expression, neuroscience, crowds and collective movements are but
a limited list of examples. Mining such patterns not only helps in
understanding the relationship between objects in the domain, but assists in
forecasting their future behavior. For most interesting variants of this
problem, finding an optimal fuzzy lagged co-cluster is an NP-complete problem.
We thus present a polynomial-time Monte-Carlo approximation algorithm for
mining fuzzy lagged co-clusters. We prove that for any data matrix, the
algorithm mines a fuzzy lagged co-cluster with fixed probability, which
encompasses the optimal fuzzy lagged co-cluster with at most a factor-2
overhead in columns and no overhead in rows. Moreover, the algorithm handles
noise, anti-correlations, missing values and overlapping patterns. The
algorithm was extensively evaluated using both artificial and real datasets.
The results not only corroborate the ability of the algorithm to efficiently
mine relevant and accurate fuzzy lagged co-clusters, but also illustrate the
importance of including the fuzziness in the lagged-pattern model.
|
1405.2798
|
Jun Wang
|
Jun Wang, Ke Sun, Fei Sha, Stephane Marchand-Maillet, Alexandros
Kalousis
|
Two-Stage Metric Learning
|
Accepted for publication in ICML 2014
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel two-stage metric learning algorithm. We
first map each learning instance to a probability distribution by computing its
similarities to a set of fixed anchor points. Then, we define the distance in
the input data space as the Fisher information distance on the associated
statistical manifold. This induces in the input data space a new family of
distance metrics with unique properties. Unlike kernelized metric learning, we
do not require the similarity measure to be positive semi-definite. Moreover,
it can also be interpreted as a local metric learning algorithm with well
defined distance approximation. We evaluate its performance on a number of
datasets. It significantly outperforms other metric learning methods and SVM.
|
[
{
"version": "v1",
"created": "Mon, 12 May 2014 15:18:15 GMT"
}
] | 2014-05-16T00:00:00 |
[
[
"Wang",
"Jun",
""
],
[
"Sun",
"Ke",
""
],
[
"Sha",
"Fei",
""
],
[
"Marchand-Maillet",
"Stephane",
""
],
[
"Kalousis",
"Alexandros",
""
]
] |
TITLE: Two-Stage Metric Learning
ABSTRACT: In this paper, we present a novel two-stage metric learning algorithm. We
first map each learning instance to a probability distribution by computing its
similarities to a set of fixed anchor points. Then, we define the distance in
the input data space as the Fisher information distance on the associated
statistical manifold. This induces in the input data space a new family of
distance metrics with unique properties. Unlike kernelized metric learning, we
do not require the similarity measure to be positive semi-definite. Moreover,
it can also be interpreted as a local metric learning algorithm with well
defined distance approximation. We evaluate its performance on a number of
datasets. It significantly outperforms other metric learning methods and SVM.
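
To make the two-stage construction concrete, here is a minimal Python sketch (not the authors' implementation): it assumes Gaussian-kernel similarities to a small set of fixed anchor points, normalized into a categorical distribution, and uses the closed-form Fisher-Rao geodesic distance on the probability simplex, 2*arccos of the Bhattacharyya coefficient, as the induced distance. The anchors, kernel choice, and gamma are illustrative assumptions.

import numpy as np

def to_anchor_distribution(x, anchors, gamma=1.0):
    """Map an instance to a categorical distribution over fixed anchor points
    via normalized Gaussian-kernel similarities (one illustrative choice)."""
    sims = np.exp(-gamma * np.sum((anchors - x) ** 2, axis=1))
    return sims / sims.sum()

def fisher_rao_distance(p, q):
    """Closed-form Fisher-Rao geodesic distance between two categorical
    distributions: 2 * arccos(Bhattacharyya coefficient)."""
    bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)
    return 2.0 * np.arccos(bc)

# Toy usage: distance between two 5-d instances through 3 anchor points.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(3, 5))
x1, x2 = rng.normal(size=5), rng.normal(size=5)
p1 = to_anchor_distribution(x1, anchors)
p2 = to_anchor_distribution(x2, anchors)
print(fisher_rao_distance(p1, p2))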
|
1405.3727
|
Sweta Rai
|
Sweta Rai
|
Student Dropout Risk Assessment in Undergraduate Course at Residential
University
|
arXiv admin note: text overlap with arXiv:1202.4815, arXiv:1203.3832,
arXiv:1002.1144 by other authors
| null | null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Student dropout prediction is indispensable for numerous intelligent systems
that measure the effectiveness and success rate of a university, as well as of
universities throughout the world. It is therefore essential to develop
efficient methods for predicting which students are at risk of dropping out,
enabling proactive measures that minimize the problem. This research work
proposes a prototype machine learning tool that automatically recognizes
whether a student will continue or drop their studies, using a decision
tree based classification technique, and that extracts hidden information from
large data about the factors responsible for student dropout. Further, the
contribution of the factors responsible for dropout risk was studied using
discriminant analysis, and association rule mining was applied to extract
interesting correlations, frequent patterns, associations, or causal structures
among the datasets. In this study, descriptive statistical analysis was carried
out to measure data quality using the SPSS 20.0 statistical software, and the
decision tree and association rule analyses were carried out using the WEKA
data mining tool.
|
[
{
"version": "v1",
"created": "Thu, 15 May 2014 02:35:41 GMT"
}
] | 2014-05-16T00:00:00 |
[
[
"Rai",
"Sweta",
""
]
] |
TITLE: Student Dropout Risk Assessment in Undergraduate Course at Residential
University
ABSTRACT: Student dropout prediction is indispensable for numerous intelligent
systems that measure the effectiveness and success rate of a university, as
well as of universities throughout the world. It is therefore essential to
develop efficient methods for predicting which students are at risk of dropping
out, enabling proactive measures that minimize the problem. This research work
proposes a prototype machine learning tool that automatically recognizes
whether a student will continue or drop their studies, using a decision
tree based classification technique, and that extracts hidden information from
large data about the factors responsible for student dropout. Further, the
contribution of the factors responsible for dropout risk was studied using
discriminant analysis, and association rule mining was applied to extract
interesting correlations, frequent patterns, associations, or causal structures
among the datasets. In this study, descriptive statistical analysis was carried
out to measure data quality using the SPSS 20.0 statistical software, and the
decision tree and association rule analyses were carried out using the WEKA
data mining tool.
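
As an illustration of the decision-tree classification step described above, the following minimal sketch uses scikit-learn in place of the WEKA workflow mentioned in the abstract; the student attributes, records, and labels are hypothetical stand-ins, not the study's data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical student records: [attendance %, first-semester GPA,
# financial-aid flag, distance from home in km]; label 1 = dropped out.
X = np.array([[90, 3.2, 1, 10], [45, 1.8, 0, 250], [75, 2.5, 1, 30],
              [30, 1.2, 0, 400], [85, 3.0, 0, 15], [50, 2.0, 1, 300]])
y = np.array([0, 1, 0, 1, 0, 1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_)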
|
1405.3410
|
Tieming Chen
|
Tieming Chen, Xu Zhang, Shichao Jin, Okhee Kim
|
Efficient classification using parallel and scalable compressed model
and Its application on intrusion detection
| null | null |
10.1016/j.eswa.2014.04.009
| null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to achieve high classification efficiency in intrusion detection, a
compressed model is proposed in this paper which combines horizontal
compression with vertical compression. OneR is utilized as horizontal
compression for attribute reduction, and affinity propagation is employed as
vertical compression to select a small set of representative exemplars from
large training data. To compress large volumes of training data in a scalable
way, a MapReduce-based parallelization approach is then implemented and
evaluated for each step of the compression process described above, after which
common but efficient classification methods can be applied directly. An
experimental study on two publicly available intrusion detection datasets,
KDD99 and CMDC2012, demonstrates that classification using the proposed
compressed model can speed up the detection procedure by up to 184 times, at
the cost of a minimal accuracy difference of less than 1% on average.
|
[
{
"version": "v1",
"created": "Wed, 14 May 2014 08:47:31 GMT"
}
] | 2014-05-15T00:00:00 |
[
[
"Chen",
"Tieming",
""
],
[
"Zhang",
"Xu",
""
],
[
"Jin",
"Shichao",
""
],
[
"Kim",
"Okhee",
""
]
] |
TITLE: Efficient classification using parallel and scalable compressed model
and Its application on intrusion detection
ABSTRACT: In order to achieve high classification efficiency in intrusion
detection, a compressed model is proposed in this paper which combines
horizontal compression with vertical compression. OneR is utilized as
horizontal compression for attribute reduction, and affinity propagation is
employed as vertical compression to select a small set of representative
exemplars from large training data. To compress large volumes of training data
in a scalable way, a MapReduce-based parallelization approach is then
implemented and evaluated for each step of the compression process described
above, after which common but efficient classification methods can be applied
directly. An experimental study on two publicly available intrusion detection
datasets, KDD99 and CMDC2012, demonstrates that classification using the
proposed compressed model can speed up the detection procedure by up to 184
times, at the cost of a minimal accuracy difference of less than 1% on average.
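
A minimal, single-machine sketch of the vertical-compression idea described above (the OneR attribute-reduction step and the MapReduce parallelization are omitted): affinity propagation selects representative exemplars per class, and an ordinary classifier is then trained on the compressed set. The toy data and the choice of a k-NN classifier are assumptions for illustration only.

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for intrusion-detection records.
X, y = make_classification(n_samples=400, n_features=12, random_state=0)

exemplar_idx = []
for label in np.unique(y):
    idx = np.where(y == label)[0]
    ap = AffinityPropagation(random_state=0).fit(X[idx])
    centers = ap.cluster_centers_indices_
    # Keep only the representative exemplars of this class ("vertical compression");
    # fall back to all points of the class if affinity propagation did not converge.
    exemplar_idx.extend(idx[centers] if len(centers) else idx)

X_small, y_small = X[exemplar_idx], y[exemplar_idx]
print("compressed training set:", X_small.shape[0], "of", X.shape[0], "records")

# Any common classifier can then be trained directly on the compressed set.
clf = KNeighborsClassifier(n_neighbors=3).fit(X_small, y_small)
print("accuracy on full data:", clf.score(X, y))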
|
1405.2941
|
Jiang Wang Mr.
|
Jiang wang, Xiaohan Nie, Yin Xia, Ying Wu, Song-Chun Zhu
|
Cross-view Action Modeling, Learning and Recognition
|
CVPR 2014
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing methods on video-based action recognition are generally
view-dependent, i.e., performing recognition from the same views seen in the
training data. We present a novel multiview spatio-temporal AND-OR graph
(MST-AOG) representation for cross-view action recognition, i.e., the
recognition is performed on the video from an unknown and unseen view. As a
compositional model, MST-AOG compactly represents the hierarchical
combinatorial structures of cross-view actions by explicitly modeling the
geometry, appearance and motion variations. This paper proposes effective
methods to learn the structure and parameters of MST-AOG. The inference based
on MST-AOG enables action recognition from novel views. The training of MST-AOG
takes advantage of the 3D human skeleton data obtained from Kinect cameras to
avoid annotating enormous multi-view video frames, which is error-prone and
time-consuming, but the recognition does not need 3D information and is based
on 2D video input. A new Multiview Action3D dataset has been created and will
be released. Extensive experiments have demonstrated that this new action
representation significantly improves the accuracy and robustness for
cross-view action recognition on 2D videos.
|
[
{
"version": "v1",
"created": "Mon, 12 May 2014 20:21:53 GMT"
}
] | 2014-05-14T00:00:00 |
[
[
"wang",
"Jiang",
""
],
[
"Nie",
"Xiaohan",
""
],
[
"Xia",
"Yin",
""
],
[
"Wu",
"Ying",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
TITLE: Cross-view Action Modeling, Learning and Recognition
ABSTRACT: Existing methods on video-based action recognition are generally
view-dependent, i.e., performing recognition from the same views seen in the
training data. We present a novel multiview spatio-temporal AND-OR graph
(MST-AOG) representation for cross-view action recognition, i.e., the
recognition is performed on the video from an unknown and unseen view. As a
compositional model, MST-AOG compactly represents the hierarchical
combinatorial structures of cross-view actions by explicitly modeling the
geometry, appearance and motion variations. This paper proposes effective
methods to learn the structure and parameters of MST-AOG. The inference based
on MST-AOG enables action recognition from novel views. The training of MST-AOG
takes advantage of the 3D human skeleton data obtained from Kinect cameras to
avoid annotating enormous multi-view video frames, which is error-prone and
time-consuming, but the recognition does not need 3D information and is based
on 2D video input. A new Multiview Action3D dataset has been created and will
be released. Extensive experiments have demonstrated that this new action
representation significantly improves the accuracy and robustness for
cross-view action recognition on 2D videos.
|
1405.3080
|
Tong Zhang
|
Peilin Zhao, Tong Zhang
|
Accelerating Minibatch Stochastic Gradient Descent using Stratified
Sampling
| null | null | null | null |
stat.ML cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic Gradient Descent (SGD) is a popular optimization method which has
been applied to many important machine learning tasks such as Support Vector
Machines and Deep Neural Networks. In order to parallelize SGD, minibatch
training is often employed. The standard approach is to uniformly sample a
minibatch at each step, which often leads to high variance. In this paper we
propose a stratified sampling strategy, which divides the whole dataset into
clusters with low within-cluster variance; we then take examples from these
clusters using a stratified sampling technique. It is shown that the
convergence rate can be significantly improved by the algorithm. Encouraging
experimental results confirm the effectiveness of the proposed method.
|
[
{
"version": "v1",
"created": "Tue, 13 May 2014 09:45:49 GMT"
}
] | 2014-05-14T00:00:00 |
[
[
"Zhao",
"Peilin",
""
],
[
"Zhang",
"Tong",
""
]
] |
TITLE: Accelerating Minibatch Stochastic Gradient Descent using Stratified
Sampling
ABSTRACT: Stochastic Gradient Descent (SGD) is a popular optimization method which has
been applied to many important machine learning tasks such as Support Vector
Machines and Deep Neural Networks. In order to parallelize SGD, minibatch
training is often employed. The standard approach is to uniformly sample a
minibatch at each step, which often leads to high variance. In this paper we
propose a stratified sampling strategy, which divides the whole dataset into
clusters with low within-cluster variance; we then take examples from these
clusters using a stratified sampling technique. It is shown that the
convergence rate can be significantly improved by the algorithm. Encouraging
experimental results confirm the effectiveness of the proposed method.
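
A minimal sketch of the idea under stated assumptions: k-means forms clusters with low within-cluster variance, and each minibatch is drawn proportionally from the strata. The least-squares objective, cluster count, and step-size schedule are illustrative choices, not the paper's exact setup.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=1000)

# Partition the dataset into clusters with low within-cluster variance.
k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
strata = [np.where(labels == c)[0] for c in range(k)]

def stratified_minibatch(batch_size=50):
    """Draw from each stratum proportionally to its size (stratified sampling)."""
    idx = []
    for s in strata:
        take = max(1, int(round(batch_size * len(s) / len(X))))
        idx.extend(rng.choice(s, size=take, replace=False))
    return np.array(idx)

w = np.zeros(10)
for t in range(1, 501):
    b = stratified_minibatch()
    grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
    w -= (0.1 / np.sqrt(t)) * grad  # SGD step with a decaying learning rate

print("parameter error:", np.linalg.norm(w - w_true))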
|
1405.3210
|
Jeremy Kun
|
Jeremy Kun, Rajmonda Caceres, Kevin Carter
|
Locally Boosted Graph Aggregation for Community Detection
|
arXiv admin note: substantial text overlap with arXiv:1401.3258
| null | null | null |
cs.LG cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning the right graph representation from noisy, multi-source data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. Building on previous work, we explore the
extent to which different local quality measurements yield graph
representations that are suitable for community detection. We present empirical
results on a variety of datasets demonstrating the utility of this framework,
especially with respect to real datasets where noise and scale present serious
challenges. Finally, we prove a convergence theorem in an ideal setting and
outline future research into other application domains.
|
[
{
"version": "v1",
"created": "Tue, 13 May 2014 16:08:55 GMT"
}
] | 2014-05-14T00:00:00 |
[
[
"Kun",
"Jeremy",
""
],
[
"Caceres",
"Rajmonda",
""
],
[
"Carter",
"Kevin",
""
]
] |
TITLE: Locally Boosted Graph Aggregation for Community Detection
ABSTRACT: Learning the right graph representation from noisy, multi-source data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. Building on previous work, we explore the
extent to which different local quality measurements yield graph
representations that are suitable for community detection. We present empirical
results on a variety of datasets demonstrating the utility of this framework,
especially with respect to real datasets where noise and scale present serious
challenges. Finally, we prove a convergence theorem in an ideal setting and
outline future research into other application domains.
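
The abstract does not spell out the aggregation rule, so the following is only a heavily hedged toy illustration of the general idea: several noisy graph sources are combined into one similarity matrix, with each source's edges weighted by a simple local quality score (agreement with the other sources). The noise model and quality measure are assumptions, not the authors' boosting procedure.

import numpy as np

rng = np.random.default_rng(1)
n, n_sources = 30, 4

# Ground-truth two-block community structure, observed through noisy sources.
truth = np.zeros((n, n))
truth[:15, :15] = 1
truth[15:, 15:] = 1
sources = [(truth + (rng.random((n, n)) < 0.2)) % 2 for _ in range(n_sources)]
sources = [np.triu(s, 1) + np.triu(s, 1).T for s in sources]  # symmetric, no self-loops

consensus = np.mean(sources, axis=0)
aggregated = np.zeros((n, n))
for s in sources:
    # Local quality: how well this source's edges agree with the other sources.
    quality = 1.0 - np.abs(s - consensus)
    aggregated += quality * s
aggregated /= n_sources

# Edges inside the true communities should now score higher on average.
inside = aggregated[:15, :15][np.triu_indices(15, 1)].mean()
across = aggregated[:15, 15:].mean()
print(f"mean weight inside communities: {inside:.2f}, across: {across:.2f}")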
|
1210.4460
|
Yaman Aksu Ph.D.
|
Yaman Aksu
|
Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin,
Soft-Margin
|
Incomplete but good, again. To Apr 28 version, made few misc text and
notation improvements including typo corrections, probably mostly in
Appendix, but probably best to read in whole again. New results for one of
the datasets (Leukemia gene dataset)
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Margin maximization in the hard-margin sense, proposed as a feature elimination
criterion by the MFE-LO method, is combined here with the use of the data
radius to further lower generalization error, since several published bounds
and bound-related formulations for lowering misclassification risk (or error)
involve the radius, e.g. the product of the squared radius and the squared norm
of the weight vector. Additionally, we propose novel feature elimination
criteria that, while formulated in the soft-margin sense, can also utilize the
data radius, building on previously published bound-related formulations for
approaching the radius in the soft-margin setting, where, for example, a focus
was on the principle that "finding a bound whose minima are in a region with
small leave-one-out values may be more important than its tightness". These
additional criteria combine radius utilization with a novel, computationally
low-cost soft-margin light classifier retraining approach we devise, named QP1;
QP1 is the soft-margin alternative to the hard-margin LO. We correct an error
in the MFE-LO description, find that MFE-LO achieves the highest generalization
accuracy among the previously published margin-based feature elimination (MFE)
methods, discuss some limitations of MFE-LO, and find that our novel methods
outperform MFE-LO, attaining a lower test set classification error rate. On
several datasets that both have a large number of features and fall into the
`large features few samples' dataset category, and on datasets with a
low-to-intermediate number of features, our novel methods give promising
results. In particular, the tunable variants of our methods, which do not
employ the (non-tunable) LO approach, can be tuned more aggressively in the
future than they are here, aiming to demonstrate even higher performance.
|
[
{
"version": "v1",
"created": "Tue, 16 Oct 2012 15:54:36 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Jan 2013 16:28:17 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Apr 2014 21:15:46 GMT"
},
{
"version": "v4",
"created": "Sun, 11 May 2014 11:47:07 GMT"
}
] | 2014-05-13T00:00:00 |
[
[
"Aksu",
"Yaman",
""
]
] |
TITLE: Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin,
Soft-Margin
ABSTRACT: Margin maximization in the hard-margin sense, proposed as a feature
elimination criterion by the MFE-LO method, is combined here with the use of
the data radius to further lower generalization error, since several published
bounds and bound-related formulations for lowering misclassification risk (or
error) involve the radius, e.g. the product of the squared radius and the
squared norm of the weight vector. Additionally, we propose novel feature
elimination criteria that, while formulated in the soft-margin sense, can also
utilize the data radius, building on previously published bound-related
formulations for approaching the radius in the soft-margin setting, where, for
example, a focus was on the principle that "finding a bound whose minima are in
a region with small leave-one-out values may be more important than its
tightness". These additional criteria combine radius utilization with a novel,
computationally low-cost soft-margin light classifier retraining approach we
devise, named QP1; QP1 is the soft-margin alternative to the hard-margin LO. We
correct an error in the MFE-LO description, find that MFE-LO achieves the
highest generalization accuracy among the previously published margin-based
feature elimination (MFE) methods, discuss some limitations of MFE-LO, and find
that our novel methods outperform MFE-LO, attaining a lower test set
classification error rate. On several datasets that both have a large number of
features and fall into the `large features few samples' dataset category, and
on datasets with a low-to-intermediate number of features, our novel methods
give promising results. In particular, the tunable variants of our methods,
which do not employ the (non-tunable) LO approach, can be tuned more
aggressively in the future than they are here, aiming to demonstrate even
higher performance.
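
As a rough illustration of the kind of radius-margin criterion discussed above, the following sketch performs backward feature elimination, at each step dropping the feature whose removal minimizes R^2 * ||w||^2, with w taken from an approximately hard-margin linear SVM and R approximated by the maximum distance to the data centroid. This is a generic sketch of a radius-utilizing margin-based criterion, not MFE-LO or QP1; the data, C value, and stopping rule are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=8, n_informative=3,
                           n_redundant=0, random_state=0)

def radius_margin(Xs, ys):
    """Approximate R^2 * ||w||^2 for the given feature subset."""
    w = SVC(kernel="linear", C=1e3).fit(Xs, ys).coef_.ravel()
    radius = np.max(np.linalg.norm(Xs - Xs.mean(axis=0), axis=1))
    return (radius ** 2) * (w @ w)

features = list(range(X.shape[1]))
while len(features) > 3:
    # Remove the feature whose elimination gives the smallest bound value.
    scores = {f: radius_margin(X[:, [g for g in features if g != f]], y)
              for f in features}
    worst = min(scores, key=scores.get)
    features.remove(worst)
    print(f"dropped feature {worst}, bound ~ {scores[worst]:.2f}")

print("retained features:", features)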
|
1210.4567
|
Jacob Eisenstein
|
David Bamman, Jacob Eisenstein, and Tyler Schnoebelen
|
Gender identity and lexical variation in social media
|
submission version
|
Journal of Sociolinguistics 18 (2014) 135-160
|
10.1111/josl.12080
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a study of the relationship between gender, linguistic style, and
social networks, using a novel corpus of 14,000 Twitter users. Prior
quantitative work on gender often treats this social variable as a female/male
binary; we argue for a more nuanced approach. By clustering Twitter users, we
find a natural decomposition of the dataset into various styles and topical
interests. Many clusters have strong gender orientations, but their use of
linguistic resources sometimes directly conflicts with the population-level
language statistics. We view these clusters as a more accurate reflection of
the multifaceted nature of gendered language styles. Previous corpus-based work
has also had little to say about individuals whose linguistic styles defy
population-level gender patterns. To identify such individuals, we train a
statistical classifier, and measure the classifier confidence for each
individual in the dataset. Examining individuals whose language does not match
the classifier's model for their gender, we find that they have social networks
that include significantly fewer same-gender social connections and that, in
general, social network homophily is correlated with the use of same-gender
language markers. Pairing computational methods and social theory thus offers a
new perspective on how gender emerges as individuals position themselves
relative to audiences, topics, and mainstream gender norms.
|
[
{
"version": "v1",
"created": "Tue, 16 Oct 2012 20:22:56 GMT"
},
{
"version": "v2",
"created": "Mon, 12 May 2014 15:04:32 GMT"
}
] | 2014-05-13T00:00:00 |
[
[
"Bamman",
"David",
""
],
[
"Eisenstein",
"Jacob",
""
],
[
"Schnoebelen",
"Tyler",
""
]
] |
TITLE: Gender identity and lexical variation in social media
ABSTRACT: We present a study of the relationship between gender, linguistic style, and
social networks, using a novel corpus of 14,000 Twitter users. Prior
quantitative work on gender often treats this social variable as a female/male
binary; we argue for a more nuanced approach. By clustering Twitter users, we
find a natural decomposition of the dataset into various styles and topical
interests. Many clusters have strong gender orientations, but their use of
linguistic resources sometimes directly conflicts with the population-level
language statistics. We view these clusters as a more accurate reflection of
the multifaceted nature of gendered language styles. Previous corpus-based work
has also had little to say about individuals whose linguistic styles defy
population-level gender patterns. To identify such individuals, we train a
statistical classifier, and measure the classifier confidence for each
individual in the dataset. Examining individuals whose language does not match
the classifier's model for their gender, we find that they have social networks
that include significantly fewer same-gender social connections and that, in
general, social network homophily is correlated with the use of same-gender
language markers. Pairing computational methods and social theory thus offers a
new perspective on how gender emerges as individuals position themselves
relative to audiences, topics, and mainstream gender norms.
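
A minimal sketch of the classifier-confidence step described above: a bag-of-words classifier is trained on users' text, and the probability it assigns to each user's labeled gender serves as the confidence score used to surface individuals whose language defies population-level patterns. The tiny corpus, vocabulary, and pipeline choices are hypothetical stand-ins, not the paper's 14,000-user dataset or model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical per-user text (concatenated posts) and binary gender labels.
texts = ["lovely brunch with friends today", "watched the match then hit the gym",
         "baking again and the garden looks lovely", "gym session then game night",
         "friends over for brunch and baking", "big match today, the game was intense"]
labels = [1, 0, 1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# Classifier confidence: probability assigned to each user's labeled gender.
# Low values flag users whose language does not match the population-level pattern.
for text, label in zip(texts, labels):
    confidence = clf.predict_proba([text])[0][label]
    print(f"{confidence:.2f}  {text!r}")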
|
1403.6382
|
Hossein Azizpour
|
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, Stefan
Carlsson
|
CNN Features off-the-shelf: an Astounding Baseline for Recognition
|
version 3 revisions: 1)Added results using feature processing and
data augmentation 2)Referring to most recent efforts of using CNN for
different visual recognition tasks 3) updated text/caption
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent results indicate that the generic descriptors extracted from the
convolutional neural networks are very powerful. This paper adds to the
mounting evidence that this is indeed the case. We report on a series of
experiments conducted for different recognition tasks using the publicly
available code and model of the \overfeat network which was trained to perform
object classification on ILSVRC13. We use features extracted from the \overfeat
network as a generic image representation to tackle the diverse range of
recognition tasks of object image classification, scene recognition, fine
grained recognition, attribute detection and image retrieval applied to a
diverse set of datasets. We selected these tasks and datasets as they gradually
move further away from the original task and data the \overfeat network was
trained to solve. Astonishingly, we report consistent superior results compared
to the highly tuned state-of-the-art systems in all the visual classification
tasks on various datasets. For instance retrieval, it consistently outperforms
low-memory-footprint methods except on the sculptures dataset. The results are
achieved using a linear SVM classifier (or $L2$ distance in case of retrieval)
applied to a feature representation of size 4096 extracted from a layer in the
net. The representations are further modified using simple augmentation
techniques e.g. jittering. The results strongly suggest that features obtained
from deep learning with convolutional nets should be the primary candidate in
most visual recognition tasks.
|
[
{
"version": "v1",
"created": "Sun, 23 Mar 2014 13:42:03 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Apr 2014 12:43:13 GMT"
},
{
"version": "v3",
"created": "Mon, 12 May 2014 08:53:31 GMT"
}
] | 2014-05-13T00:00:00 |
[
[
"Razavian",
"Ali Sharif",
""
],
[
"Azizpour",
"Hossein",
""
],
[
"Sullivan",
"Josephine",
""
],
[
"Carlsson",
"Stefan",
""
]
] |
TITLE: CNN Features off-the-shelf: an Astounding Baseline for Recognition
ABSTRACT: Recent results indicate that the generic descriptors extracted from the
convolutional neural networks are very powerful. This paper adds to the
mounting evidence that this is indeed the case. We report on a series of
experiments conducted for different recognition tasks using the publicly
available code and model of the \overfeat network which was trained to perform
object classification on ILSVRC13. We use features extracted from the \overfeat
network as a generic image representation to tackle the diverse range of
recognition tasks of object image classification, scene recognition, fine
grained recognition, attribute detection and image retrieval applied to a
diverse set of datasets. We selected these tasks and datasets as they gradually
move further away from the original task and data the \overfeat network was
trained to solve. Astonishingly, we report consistent superior results compared
to the highly tuned state-of-the-art systems in all the visual classification
tasks on various datasets. For instance retrieval, it consistently outperforms
low-memory-footprint methods except on the sculptures dataset. The results are
achieved using a linear SVM classifier (or $L2$ distance in case of retrieval)
applied to a feature representation of size 4096 extracted from a layer in the
net. The representations are further modified using simple augmentation
techniques e.g. jittering. The results strongly suggest that features obtained
from deep learning with convolutional nets should be the primary candidate in
most visual recognition tasks.
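
A minimal sketch of the off-the-shelf pipeline described above. Since the OverFeat model is not assumed to be available here, a pretrained torchvision ResNet-18 stands in as the fixed feature extractor (its 512-d penultimate activations replace the 4096-d OverFeat layer), and a linear SVM is trained on top. The dataset loader is a hypothetical placeholder, and the weights API assumes torchvision >= 0.13.

import torch
import torchvision
from sklearn.svm import LinearSVC

# Pretrained CNN as a fixed feature extractor (ResNet-18 stands in for OverFeat).
weights = torchvision.models.ResNet18_Weights.DEFAULT
model = torchvision.models.resnet18(weights=weights)
model.fc = torch.nn.Identity()          # expose the 512-d penultimate activations
model.eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract_features(images):
    """images: list of PIL images -> (N, 512) numpy feature matrix."""
    batch = torch.stack([preprocess(im) for im in images])
    return model(batch).numpy()

# Hypothetical target-task data: lists of PIL images and integer labels.
# train_images, train_labels, test_images, test_labels = load_my_dataset()
# clf = LinearSVC(C=1.0).fit(extract_features(train_images), train_labels)
# print("accuracy:", clf.score(extract_features(test_images), test_labels))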
|
1402.0728
|
Dominik Kowald
|
Dominik Kowald, Paul Seitlinger, Christoph Trattner, Tobias Ley
|
Forgetting the Words but Remembering the Meaning: Modeling Forgetting in
a Verbal and Semantic Tag Recommender
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We assume that recommender systems are more successful when they are based
on a thorough understanding of how people process information. In the current
paper we test this assumption in the context of social tagging systems.
Cognitive research on how people assign tags has shown that they draw on two
interconnected levels of knowledge in their memory: on a conceptual level of
semantic fields or topics, and on a lexical level that turns patterns on the
semantic level into words. Another strand of tagging research reveals a strong
impact of time dependent forgetting on users' tag choices, such that recently
used tags have a higher probability of being reused than "older" tags. In this
paper, we align both strands by implementing a computational theory of human
memory that integrates the two-level conception and the process of forgetting
in form of a tag recommender and test it in three large-scale social tagging
datasets (drawn from BibSonomy, CiteULike and Flickr).
As expected, our results reveal a selective effect of time: forgetting is
much more pronounced on the lexical level of tags. Second, an extensive
evaluation based on this observation shows that a tag recommender
interconnecting both levels and integrating time dependent forgetting on the
lexical level results in high accuracy predictions and outperforms other
well-established algorithms, such as Collaborative Filtering, Pairwise
Interaction Tensor Factorization, FolkRank and two alternative time dependent
approaches. We conclude that tag recommenders can benefit from going beyond the
manifest level of word co-occurrences, and from including forgetting processes
on the lexical level.
|
[
{
"version": "v1",
"created": "Tue, 4 Feb 2014 13:31:10 GMT"
},
{
"version": "v2",
"created": "Thu, 8 May 2014 08:37:04 GMT"
}
] | 2014-05-09T00:00:00 |
[
[
"Kowald",
"Dominik",
""
],
[
"Seitlinger",
"Paul",
""
],
[
"Trattner",
"Christoph",
""
],
[
"Ley",
"Tobias",
""
]
] |
TITLE: Forgetting the Words but Remembering the Meaning: Modeling Forgetting in
a Verbal and Semantic Tag Recommender
ABSTRACT: We assume that recommender systems are more successful when they are based
on a thorough understanding of how people process information. In the current
paper we test this assumption in the context of social tagging systems.
Cognitive research on how people assign tags has shown that they draw on two
interconnected levels of knowledge in their memory: on a conceptual level of
semantic fields or topics, and on a lexical level that turns patterns on the
semantic level into words. Another strand of tagging research reveals a strong
impact of time dependent forgetting on users' tag choices, such that recently
used tags have a higher probability of being reused than "older" tags. In this
paper, we align both strands by implementing a computational theory of human
memory that integrates the two-level conception and the process of forgetting
in form of a tag recommender and test it in three large-scale social tagging
datasets (drawn from BibSonomy, CiteULike and Flickr).
As expected, our results reveal a selective effect of time: forgetting is
much more pronounced on the lexical level of tags. Second, an extensive
evaluation based on this observation shows that a tag recommender
interconnecting both levels and integrating time dependent forgetting on the
lexical level results in high accuracy predictions and outperforms other
well-established algorithms, such as Collaborative Filtering, Pairwise
Interaction Tensor Factorization, FolkRank and two alternative time dependent
approaches. We conclude that tag recommenders can benefit from going beyond the
manifest level of word co-occurrences, and from including forgetting processes
on the lexical level.
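
A minimal sketch of the time-dependent forgetting component on the lexical (tag) level only, assuming an ACT-R-style base-level activation, ln(sum over past usages of dt^-d), so that recently and frequently used tags rank higher. The decay d, the timestamps, and the omission of the semantic/topic level are simplifications for illustration, not the paper's full recommender.

import math
from collections import defaultdict

def rank_tags(tag_history, now, d=0.5, top_k=5):
    """tag_history: list of (timestamp, tag) pairs for one user.
    Returns tags ranked by base-level activation ln(sum dt^-d)."""
    usages = defaultdict(list)
    for ts, tag in tag_history:
        usages[tag].append(now - ts + 1e-9)  # time since each usage (avoid dt = 0)
    activation = {tag: math.log(sum(dt ** -d for dt in dts))
                  for tag, dts in usages.items()}
    return sorted(activation, key=activation.get, reverse=True)[:top_k]

# Toy usage: 'python' used often but long ago, 'recipes' used once just now.
history = [(1, "python"), (2, "python"), (3, "python"), (99, "recipes")]
print(rank_tags(history, now=100))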
|
1405.1511
|
Neha Gupta
|
Neha Gupta, Ponnurangam Kumaraguru
|
Exploration of gaps in Bitly's spam detection and relevant counter
measures
| null | null | null | null |
cs.SI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existence of spam URLs over emails and Online Social Media (OSM) has become a
growing phenomenon. To counter the dissemination issues associated with long
complex URLs in emails and character limit imposed on various OSM (like
Twitter), the concept of URL shortening gained a lot of traction. URL
shorteners take as input a long URL and give a short URL with the same landing
page in return. With its immense popularity over time, it has become a prime
target for the attackers giving them an advantage to conceal malicious content.
Bitly, a leading service in this domain, is being exploited heavily to carry out
phishing attacks, work from home scams, pornographic content propagation, etc.
This imposes additional performance pressure on Bitly and other URL shorteners
to be able to detect and take a timely action against the illegitimate content.
In this study, we analyzed a dataset marked as suspicious by Bitly in the month
of October 2013 to highlight some ground issues in their spam detection
mechanism. In addition, we identified some short URL based features and coupled
them with two domain specific features to classify a Bitly URL as malicious /
benign and achieved a maximum accuracy of 86.41%. To the best of our knowledge,
this is the first large-scale study to highlight the issues with Bitly's spam
detection policies and to propose a suitable countermeasure.
|
[
{
"version": "v1",
"created": "Wed, 7 May 2014 06:02:40 GMT"
}
] | 2014-05-08T00:00:00 |
[
[
"Gupta",
"Neha",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] |
TITLE: Exploration of gaps in Bitly's spam detection and relevant counter
measures
ABSTRACT: Existence of spam URLs over emails and Online Social Media (OSM) has become a
growing phenomenon. To counter the dissemination issues associated with long
complex URLs in emails and character limit imposed on various OSM (like
Twitter), the concept of URL shortening gained a lot of traction. URL
shorteners take as input a long URL and give a short URL with the same landing
page in return. With its immense popularity over time, it has become a prime
target for the attackers giving them an advantage to conceal malicious content.
Bitly, a leading service in this domain, is being exploited heavily to carry out
phishing attacks, work from home scams, pornographic content propagation, etc.
This imposes additional performance pressure on Bitly and other URL shorteners
to be able to detect and take a timely action against the illegitimate content.
In this study, we analyzed a dataset marked as suspicious by Bitly in the month
of October 2013 to highlight some ground issues in their spam detection
mechanism. In addition, we identified some short URL based features and coupled
them with two domain specific features to classify a Bitly URL as malicious /
benign and achieved a maximum accuracy of 86.41%. To the best of our knowledge,
this is the first large-scale study to highlight the issues with Bitly's spam
detection policies and to propose a suitable countermeasure.
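
A minimal sketch of the final classification step: a handful of numeric features per short URL feed a standard classifier evaluated with cross-validation. The feature names and values below are hypothetical stand-ins, not the short-URL and domain-specific features or the dataset used in the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-URL features: [clicks in first hour, referrer count,
# landing-domain age in days, URL character entropy]; label 1 = malicious.
X = np.array([[900, 2, 3, 4.1], [12, 9, 2400, 3.2], [650, 1, 10, 4.5],
              [30, 14, 3100, 3.0], [710, 3, 5, 4.3], [25, 8, 1900, 3.1]])
y = np.array([1, 0, 1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())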
|
1405.1705
|
Raman Grover
|
Raman Grover, Michael J. Carey
|
Scalable Fault-Tolerant Data Feeds in AsterixDB
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we describe the support for data feed ingestion in AsterixDB,
an open-source Big Data Management System (BDMS) that provides a platform for
storage and analysis of large volumes of semi-structured data. Data feeds are a
mechanism for having continuous data arrive into a BDMS from external sources
and incrementally populate a persisted dataset and associated indexes. The need
to persist and index "fast-flowing" high-velocity data (and support ad hoc
analytical queries) is ubiquitous. However, the state of the art today involves
'gluing' together different systems. AsterixDB is different in being a unified
system with "native support" for data feed ingestion.
We discuss the challenges and present the design and implementation of the
concepts involved in modeling and managing data feeds in AsterixDB. AsterixDB
allows the runtime behavior, allocation of resources and the offered degree of
robustness to be customized to suit the high-level application(s) that wish to
consume the ingested data. Initial experiments that evaluate scalability and
fault-tolerance of AsterixDB data feeds facility are reported.
|
[
{
"version": "v1",
"created": "Wed, 7 May 2014 19:14:42 GMT"
}
] | 2014-05-08T00:00:00 |
[
[
"Grover",
"Raman",
""
],
[
"Carey",
"Michael J.",
""
]
] |
TITLE: Scalable Fault-Tolerant Data Feeds in AsterixDB
ABSTRACT: In this paper we describe the support for data feed ingestion in AsterixDB,
an open-source Big Data Management System (BDMS) that provides a platform for
storage and analysis of large volumes of semi-structured data. Data feeds are a
mechanism for having continuous data arrive into a BDMS from external sources
and incrementally populate a persisted dataset and associated indexes. The need
to persist and index "fast-flowing" high-velocity data (and support ad hoc
analytical queries) is ubiquitous. However, the state of the art today involves
'gluing' together different systems. AsterixDB is different in being a unified
system with "native support" for data feed ingestion.
We discuss the challenges and present the design and implementation of the
concepts involved in modeling and managing data feeds in AsterixDB. AsterixDB
allows the runtime behavior, allocation of resources and the offered degree of
robustness to be customized to suit the high-level application(s) that wish to
consume the ingested data. Initial experiments that evaluate scalability and
fault-tolerance of AsterixDB data feeds facility are reported.
|
1311.5591
|
Ning Zhang
|
Ning Zhang, Manohar Paluri, Marc'Aurelio Ranzato, Trevor Darrell,
Lubomir Bourdev
|
PANDA: Pose Aligned Networks for Deep Attribute Modeling
|
8 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for inferring human attributes (such as gender, hair
style, clothes style, expression, action) from images of people under large
variation of viewpoint, pose, appearance, articulation and occlusion.
Convolutional Neural Nets (CNN) have been shown to perform very well on large
scale object recognition problems. In the context of attribute classification,
however, the signal is often subtle and it may cover only a small part of the
image, while the image is dominated by the effects of pose and viewpoint.
Discounting for pose variation would require training on very large labeled
datasets which are not presently available. Part-based models, such as poselets
and DPM have been shown to perform well for this problem but they are limited
by shallow low-level features. We propose a new method which combines
part-based models and deep learning by training pose-normalized CNNs. We show
substantial improvement vs. state-of-the-art methods on challenging attribute
classification tasks in unconstrained settings. Experiments confirm that our
method outperforms both the best part-based methods on this problem and
conventional CNNs trained on the full bounding box of the person.
|
[
{
"version": "v1",
"created": "Thu, 21 Nov 2013 21:43:12 GMT"
},
{
"version": "v2",
"created": "Mon, 5 May 2014 21:32:36 GMT"
}
] | 2014-05-07T00:00:00 |
[
[
"Zhang",
"Ning",
""
],
[
"Paluri",
"Manohar",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Bourdev",
"Lubomir",
""
]
] |
TITLE: PANDA: Pose Aligned Networks for Deep Attribute Modeling
ABSTRACT: We propose a method for inferring human attributes (such as gender, hair
style, clothes style, expression, action) from images of people under large
variation of viewpoint, pose, appearance, articulation and occlusion.
Convolutional Neural Nets (CNN) have been shown to perform very well on large
scale object recognition problems. In the context of attribute classification,
however, the signal is often subtle and it may cover only a small part of the
image, while the image is dominated by the effects of pose and viewpoint.
Discounting for pose variation would require training on very large labeled
datasets which are not presently available. Part-based models, such as poselets
and DPM have been shown to perform well for this problem but they are limited
by shallow low-level features. We propose a new method which combines
part-based models and deep learning by training pose-normalized CNNs. We show
substantial improvement vs. state-of-the-art methods on challenging attribute
classification tasks in unconstrained settings. Experiments confirm that our
method outperforms both the best part-based methods on this problem and
conventional CNNs trained on the full bounding box of the person.
|
1405.1392
|
Shamanth Kumar
|
Shamanth Kumar, Huan Liu, Sameep Mehta, and L. Venkata Subramaniam
|
From Tweets to Events: Exploring a Scalable Solution for Twitter Streams
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The unprecedented use of social media through smartphones and other
web-enabled mobile devices has enabled the rapid adoption of platforms like
Twitter. Event detection has found many applications on the web, including
breaking news identification and summarization. The recent increase in the
usage of Twitter during crises has attracted researchers to focus on detecting
events in tweets. However, current solutions have focused on static Twitter
data. The necessity to detect events in a streaming environment during fast
paced events such as a crisis presents new opportunities and challenges. In
this paper, we investigate event detection in the context of real-time Twitter
streams as observed in real-world crises. We highlight the key challenges in
this problem: the informal nature of text, and the high volume and high
velocity characteristics of Twitter streams. We present a novel approach to
address these challenges using single-pass clustering and the compression
distance to efficiently detect events in Twitter streams. Through experiments
on large Twitter datasets, we demonstrate that the proposed framework is able
to detect events in near real-time and can scale to large and noisy Twitter
streams.
|
[
{
"version": "v1",
"created": "Tue, 6 May 2014 18:35:18 GMT"
}
] | 2014-05-07T00:00:00 |
[
[
"Kumar",
"Shamanth",
""
],
[
"Liu",
"Huan",
""
],
[
"Mehta",
"Sameep",
""
],
[
"Subramaniam",
"L. Venkata",
""
]
] |
TITLE: From Tweets to Events: Exploring a Scalable Solution for Twitter Streams
ABSTRACT: The unprecedented use of social media through smartphones and other
web-enabled mobile devices has enabled the rapid adoption of platforms like
Twitter. Event detection has found many applications on the web, including
breaking news identification and summarization. The recent increase in the
usage of Twitter during crises has attracted researchers to focus on detecting
events in tweets. However, current solutions have focused on static Twitter
data. The necessity to detect events in a streaming environment during fast
paced events such as a crisis presents new opportunities and challenges. In
this paper, we investigate event detection in the context of real-time Twitter
streams as observed in real-world crises. We highlight the key challenges in
this problem: the informal nature of text, and the high volume and high
velocity characteristics of Twitter streams. We present a novel approach to
address these challenges using single-pass clustering and the compression
distance to efficiently detect events in Twitter streams. Through experiments
on large Twitter datasets, we demonstrate that the proposed framework is able
to detect events in near real-time and can scale to large and noisy Twitter
streams.
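
A minimal sketch combining the two ingredients named above, under assumptions: the normalized compression distance is computed with zlib, and a single-pass (leader-style) clustering assigns each incoming tweet to the closest existing event or opens a new one. The threshold and example tweets are illustrative.

import zlib

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings."""
    cx, cy = len(zlib.compress(x.encode())), len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def single_pass_cluster(stream, threshold=0.6):
    """Assign each tweet to the nearest existing event (by NCD to the event's
    first tweet) or open a new event if nothing is close enough."""
    events = []  # list of lists of tweets
    for tweet in stream:
        dists = [ncd(tweet, ev[0]) for ev in events]
        if dists and min(dists) < threshold:
            events[dists.index(min(dists))].append(tweet)
        else:
            events.append([tweet])
    return events

tweets = ["earthquake hits the city center", "strong earthquake near city center",
          "my cat is adorable today", "aftershock felt in the city center"]
for i, ev in enumerate(single_pass_cluster(tweets)):
    print(f"event {i}: {ev}")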
|
1405.1406
|
Sallam Abualhaija
|
Sallam Abualhaija, Karl-Heinz Zimmermann
|
D-Bees: A Novel Method Inspired by Bee Colony Optimization for Solving
Word Sense Disambiguation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word sense disambiguation (WSD) is a problem in the field of computational
linguistics, defined as finding the intended sense of a word (or a set of words)
when it is activated within a certain context. WSD was recently addressed as a
combinatorial optimization problem in which the goal is to find a sequence of
senses that maximize the semantic relatedness among the target words. In this
article, a novel algorithm for solving the WSD problem called D-Bees is
proposed, which is inspired by bee colony optimization (BCO), where artificial
bee agents collaborate to solve the problem. The D-Bees algorithm is evaluated
on a standard dataset (SemEval 2007 coarse-grained English all-words task corpus) and
is compared to simulated annealing, genetic algorithms, and two ant colony
optimization techniques (ACO). It will be observed that the BCO and ACO
approaches are on par.
|
[
{
"version": "v1",
"created": "Tue, 6 May 2014 19:26:35 GMT"
}
] | 2014-05-07T00:00:00 |
[
[
"Abualhaija",
"Sallam",
""
],
[
"Zimmermann",
"Karl-Heinz",
""
]
] |
TITLE: D-Bees: A Novel Method Inspired by Bee Colony Optimization for Solving
Word Sense Disambiguation
ABSTRACT: Word sense disambiguation (WSD) is a problem in the field of computational
linguistics, defined as finding the intended sense of a word (or a set of words)
when it is activated within a certain context. WSD was recently addressed as a
combinatorial optimization problem in which the goal is to find a sequence of
senses that maximize the semantic relatedness among the target words. In this
article, a novel algorithm for solving the WSD problem called D-Bees is
proposed, which is inspired by bee colony optimization (BCO), where artificial
bee agents collaborate to solve the problem. The D-Bees algorithm is evaluated
on a standard dataset (SemEval 2007 coarse-grained English all-words task corpus) and
is compared to simulated annealing, genetic algorithms, and two ant colony
optimization techniques (ACO). It will be observed that the BCO and ACO
approaches are on par.
|
1402.0108
|
Eric Strobl
|
Eric V. Strobl, Shyam Visweswaran
|
Markov Blanket Ranking using Kernel-based Conditional Dependence
Measures
|
10 pages, 4 figures, 2 algorithms, NIPS 2013 Workshop on Causality,
code: github.com/ericstrobl/
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing feature selection algorithms that move beyond a pure correlational
to a more causal analysis of observational data is an important problem in the
sciences. Several algorithms attempt to do so by discovering the Markov blanket
of a target, but they all contain a forward selection step which variables must
pass in order to be included in the conditioning set. As a result, these
algorithms may not consider all possible conditional multivariate combinations.
We improve on this limitation by proposing a backward elimination method that
uses a kernel-based conditional dependence measure to identify the Markov
blanket in a fully multivariate fashion. The algorithm is easy to implement and
compares favorably to other methods on synthetic and real datasets.
|
[
{
"version": "v1",
"created": "Sat, 1 Feb 2014 17:51:54 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Feb 2014 22:16:00 GMT"
},
{
"version": "v3",
"created": "Sat, 3 May 2014 01:07:49 GMT"
}
] | 2014-05-06T00:00:00 |
[
[
"Strobl",
"Eric V.",
""
],
[
"Visweswaran",
"Shyam",
""
]
] |
TITLE: Markov Blanket Ranking using Kernel-based Conditional Dependence
Measures
ABSTRACT: Developing feature selection algorithms that move beyond a pure correlational
to a more causal analysis of observational data is an important problem in the
sciences. Several algorithms attempt to do so by discovering the Markov blanket
of a target, but they all contain a forward selection step which variables must
pass in order to be included in the conditioning set. As a result, these
algorithms may not consider all possible conditional multivariate combinations.
We improve on this limitation by proposing a backward elimination method that
uses a kernel-based conditional dependence measure to identify the Markov
blanket in a fully multivariate fashion. The algorithm is easy to implement and
compares favorably to other methods on synthetic and real datasets.
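
A minimal sketch of the backward-elimination loop described above. A proper kernel-based conditional dependence measure is beyond a few lines, so a simple biased HSIC statistic between the remaining feature set and the target stands in for it here; the RBF bandwidths, toy data, and stopping rule are illustrative assumptions, not the paper's measure or algorithm.

import numpy as np

def rbf_kernel(A, gamma):
    sq = np.sum(A ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.exp(-gamma * d2)

def hsic(X, y):
    """Biased HSIC estimate (1/n^2) trace(K H L H); a stand-in dependence measure."""
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n
    K = rbf_kernel(X, gamma=1.0 / X.shape[1])          # simple bandwidth heuristic
    L = rbf_kernel(y.reshape(-1, 1), gamma=0.5)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Backward elimination: repeatedly drop the feature whose removal keeps the
# dependence between the remaining feature set and the target highest.
features = list(range(X.shape[1]))
eliminated = []
while len(features) > 1:
    scores = {f: hsic(X[:, [g for g in features if g != f]], y) for f in features}
    drop = max(scores, key=scores.get)
    features.remove(drop)
    eliminated.append(drop)
print("eliminated first:", eliminated)
print("kept last:", features)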
|
1404.7287
|
Sameer Qazi
|
Sameer Qazi and Tim Moors
|
Disjoint-Path Selection in Internet: What traceroutes tell us?
|
9 pages, 9 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Routing policies used in the Internet can be restrictive, limiting
communication between source-destination pairs to one path, when often better
alternatives exist. To avoid route flapping, recovery mechanisms may be
dampened, making adaptation slow. Unstructured overlays have been proposed to
mitigate the issues of path and performance failures in the Internet by routing
through an indirect-path via overlay peer(s). Choosing alternate-paths in
overlay networks is a challenging issue. Ensuring both availability and
performance guarantees on alternate paths requires aggressive monitoring of all
overlay paths using active probing; this limits scalability. An alternate
technique to select an overlay-path is to bias its selection based on physical
disjointness criteria to bypass the failure on the primary-path. Recently,
several techniques have emerged which can optimize the selection of a
disjoint-path without incurring the high costs associated with probing paths.
In this paper, we show that using only commodity approaches, i.e. running
infrequent traceroutes between overlay hosts, a lot of information can be
revealed about the underlying physical path diversity in the overlay network
which can be used to make informed-guesses for alternate-path selection. We
test our approach using datasets between real-world hosts in AMP and RIPE
networks.
|
[
{
"version": "v1",
"created": "Tue, 29 Apr 2014 09:28:41 GMT"
},
{
"version": "v2",
"created": "Mon, 5 May 2014 05:27:39 GMT"
}
] | 2014-05-06T00:00:00 |
[
[
"Qazi",
"Sameer",
""
],
[
"Moors",
"Tim",
""
]
] |
TITLE: Disjoint-Path Selection in Internet: What traceroutes tell us?
ABSTRACT: Routing policies used in the Internet can be restrictive, limiting
communication between source-destination pairs to one path, when often better
alternatives exist. To avoid route flapping, recovery mechanisms may be
dampened, making adaptation slow. Unstructured overlays have been proposed to
mitigate the issues of path and performance failures in the Internet by routing
through an indirect-path via overlay peer(s). Choosing alternate-paths in
overlay networks is a challenging issue. Ensuring both availability and
performance guarantees on alternate paths requires aggressive monitoring of all
overlay paths using active probing; this limits scalability. An alternate
technique to select an overlay-path is to bias its selection based on physical
disjointness criteria to bypass the failure on the primary-path. Recently,
several techniques have emerged which can optimize the selection of a
disjoint-path without incurring the high costs associated with probing paths.
In this paper, we show that using only commodity approaches, i.e. running
infrequent traceroutes between overlay hosts, a lot of information can be
revealed about the underlying physical path diversity in the overlay network
which can be used to make informed-guesses for alternate-path selection. We
test our approach using datasets between real-world hosts in AMP and RIPE
networks.
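
A minimal sketch of how infrequent traceroutes can inform alternate-path selection: each path is reduced to the set of router hops it traverses, overlap with the direct path is measured (here with the Jaccard index), and the overlay relay with the least physical overlap is preferred. The hop lists and relay names are hypothetical; real traceroute data would also need handling of unresponsive hops and IP-to-router aliasing.

def overlap(path_a, path_b):
    """Fraction of shared routers between two traceroute paths (Jaccard index)."""
    a, b = set(path_a), set(path_b)
    return len(a & b) / len(a | b)

# Hypothetical router-level traceroutes (lists of hop IPs), endpoints omitted.
direct = ["10.0.0.1", "62.1.1.1", "62.1.2.1", "80.9.9.1"]
via_relay = {
    "relayA": ["10.0.0.1", "62.1.1.1", "100.5.5.1", "80.9.9.1"],   # shares several hops
    "relayB": ["10.0.0.2", "195.3.3.1", "195.3.4.1", "80.9.8.1"],  # mostly disjoint
}

# Pick the alternate path with the least physical overlap with the direct path.
best = min(via_relay, key=lambda r: overlap(direct, via_relay[r]))
for r, hops in via_relay.items():
    print(f"{r}: overlap with direct path = {overlap(direct, hops):.2f}")
print("preferred disjoint relay:", best)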
|
1405.0641
|
Xiaojun Wan
|
Xiaojun Wan
|
x-index: a fantastic new indicator for quantifying a scientist's
scientific impact
| null | null | null | null |
cs.DL physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
h-index has become the most popular indicator for quantifying a scientist's
scientific impact in various scientific fields. h-index is defined as the
largest number of papers with citation number larger than or equal to h and it
treats each citation equally. However, different citations usually come from
different papers with different influence and quality, and a citation from a
highly influential paper is a greater recognition of the target paper than a
citation from an ordinary paper. Based on this assumption, we propose a new
indicator named x-index to quantify a scientist's scientific impact by
considering only the citations coming from influential papers. x-index is
defined as the largest number of papers with influential citation number larger
than or equal to x, where each influential citation comes from a paper for
which the average ACNPP (Average Citation Number Per Paper) of its authors
larger than or equal to x . Through analysis on the APS dataset, we find that
the proposed x-index has much better ability to discriminate between Physics
Prize Winners and ordinary physicists.
|
[
{
"version": "v1",
"created": "Sun, 4 May 2014 02:26:52 GMT"
}
] | 2014-05-06T00:00:00 |
[
[
"Wan",
"Xiaojun",
""
]
] |
TITLE: x-index: a fantastic new indicator for quantifying a scientist's
scientific impact
ABSTRACT: h-index has become the most popular indicator for quantifying a scientist's
scientific impact in various scientific fields. h-index is defined as the
largest number of papers with citation number larger than or equal to h and it
treats each citation equally. However, different citations usually come from
different papers with different influence and quality, and a citation from a
highly influential paper is a greater recognition of the target paper than a
citation from an ordinary paper. Based on this assumption, we propose a new
indicator named x-index to quantify a scientist's scientific impact by
considering only the citations coming from influential papers. x-index is
defined as the largest number of papers with influential citation number larger
than or equal to x, where each influential citation comes from a paper for
which the average ACNPP (Average Citation Number Per Paper) of its authors is
larger than or equal to x. Through analysis of the APS dataset, we find that
the proposed x-index has much better ability to discriminate between Physics
Prize Winners and ordinary physicists.
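
A small sketch computing the x-index exactly as defined above: x is the largest value such that at least x papers each receive at least x citations from papers whose authors' average ACNPP is at least x. The input format (for each paper, the list of citing papers represented by their authors' average ACNPP) and the toy numbers are assumptions for illustration.

def x_index(papers):
    """papers: list of papers, each a list of citing papers, where each citing
    paper is represented by the average ACNPP of its authors."""
    best = 0
    for x in range(1, len(papers) + 1):
        influential_counts = [sum(1 for acnpp in cites if acnpp >= x)
                              for cites in papers]
        if sum(1 for c in influential_counts if c >= x) >= x:
            best = x
    return best

# Toy example: 4 papers; each inner list holds the authors' average ACNPP of
# the papers citing that paper.
papers = [[12.0, 3.5, 8.0, 2.0],
          [6.0, 5.0, 1.0],
          [9.0, 0.5],
          [0.2]]
print("x-index:", x_index(papers))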
|
1405.0868
|
Zhana Bao
|
Zhana Bao
|
Finding Inner Outliers in High Dimensional Space
|
9 pages, 9 Figures, 3 tables
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Outlier detection in a large-scale database is a significant and complex issue
in the knowledge discovery field. As data distributions are obscure and
uncertain in high dimensional space, most existing solutions try to solve the
issue by taking into account two intuitive points: first, outliers are
extremely far away from other points in high dimensional space; second,
outliers appear clearly different in projected low-dimensional subspaces.
However, in the complicated case where outliers are hidden inside the normal
points in all dimensions, existing detection methods fail to find such inner
outliers. In this paper, we propose a method based on two successive dimension
projections, which integrates primary subspace outlier detection with secondary
point projection between subspaces and sums up multiple weight values for each
point. Each point is scored with a local density ratio computed separately in
the twice-projected dimensions. After this process, the outliers are the points
with the largest weight values. The proposed method succeeds in finding all
inner outliers on synthetic test datasets with the dimension varying from 100
to 10000. The experimental results also show that the proposed algorithm can
work in low dimensional space and can achieve perfect performance in high
dimensional space. For this reason, our proposed approach has considerable
potential for multimedia applications that process images or video with
large-scale attributes.
|
[
{
"version": "v1",
"created": "Mon, 5 May 2014 12:01:14 GMT"
}
] | 2014-05-06T00:00:00 |
[
[
"Bao",
"Zhana",
""
]
] |
TITLE: Finding Inner Outliers in High Dimensional Space
ABSTRACT: Outlier detection in a large-scale database is a significant and complex
issue in the knowledge discovery field. As the data distributions are obscure
and uncertain in high dimensional space, most existing solutions try to solve
the issue by taking into account two intuitive points: first, outliers are
extremely far away from other points in high dimensional space; second,
outliers appear clearly different in projected lower-dimensional subspaces.
However, in the complicated case where outliers are hidden inside the normal
points in all dimensions, existing detection methods fail to find such inner
outliers. In this paper, we propose a method with two rounds of dimension
projection, which integrates primary subspace outlier detection and secondary
point projection between subspaces, and sums up the multiple weight values for
each point. The points are scored with the local density ratio separately in
the twice-projected dimensions. After this process, outliers are the points
with the largest weight values. The proposed method succeeds in finding all
inner outliers on synthetic test datasets with the dimensionality varying from
100 to 10000. The experimental results also show that the proposed algorithm
can work in low dimensional space and can achieve perfect performance in high
dimensional space. For this reason, our approach has considerable potential
for multimedia applications that process images or video with large-scale
attributes.
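The following Python sketch is a heavily simplified illustration of the general idea (scoring points by a local density ratio in several projected subspaces and summing the weights per point). It is not the paper's algorithm; the random projections and scikit-learn's LocalOutlierFactor are stand-ins chosen for brevity.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def projected_outlier_scores(X, n_projections=10, proj_dim=3, k=20, seed=0):
    """Sum a local-density-based outlier score over several random projections."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X))
    for _ in range(n_projections):
        P = rng.standard_normal((X.shape[1], proj_dim))   # random low-dim projection
        lof = LocalOutlierFactor(n_neighbors=k)
        lof.fit(X @ P)
        scores += -lof.negative_outlier_factor_           # larger => more outlying
    return scores

X = np.random.default_rng(1).standard_normal((500, 100))
print(projected_outlier_scores(X)[:5])
```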
|
1405.0869
|
Zhana Bao
|
Zhana Bao
|
Robust Subspace Outlier Detection in High Dimensional Space
|
10 pages, 6 figures, 4 tables
| null | null | null |
cs.AI cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Rare data in a large-scale database are called outliers, and they reveal
significant information in the real world. Subspace-based outlier detection is
regarded as a feasible approach in very high dimensional space. However, the
outliers found in subspaces are in fact only part of the true outliers in high
dimensional space. Outliers hidden among normally clustered points are
sometimes neglected in the projected subspaces. In this paper, we propose a
robust subspace method for detecting such inner outliers in a given dataset,
which uses two dimension projections: detecting outliers in subspaces with the
local density ratio in the first projected dimensions, and finding outliers by
comparing neighbors' positions in the second projected dimensions. Each
point's weight is calculated by summing up all related values obtained in the
two projection steps, and then the points with the largest weight values are
taken as outliers. In a series of experiments with the number of dimensions
ranging from 10 to 10000, the results show that our proposed method achieves
high precision in extremely high dimensional space and works well in low
dimensional space.
|
[
{
"version": "v1",
"created": "Mon, 5 May 2014 12:01:24 GMT"
}
] | 2014-05-06T00:00:00 |
[
[
"Bao",
"Zhana",
""
]
] |
TITLE: Robust Subspace Outlier Detection in High Dimensional Space
ABSTRACT: Rare data in a large-scale database are called outliers, and they reveal
significant information in the real world. Subspace-based outlier detection is
regarded as a feasible approach in very high dimensional space. However, the
outliers found in subspaces are in fact only part of the true outliers in high
dimensional space. Outliers hidden among normally clustered points are
sometimes neglected in the projected subspaces. In this paper, we propose a
robust subspace method for detecting such inner outliers in a given dataset,
which uses two dimension projections: detecting outliers in subspaces with the
local density ratio in the first projected dimensions, and finding outliers by
comparing neighbors' positions in the second projected dimensions. Each
point's weight is calculated by summing up all related values obtained in the
two projection steps, and then the points with the largest weight values are
taken as outliers. In a series of experiments with the number of dimensions
ranging from 10 to 10000, the results show that our proposed method achieves
high precision in extremely high dimensional space and works well in low
dimensional space.
|
1405.0941
|
Serena Villata
|
Elena Cabrio and Serena Villata
|
Towards a Benchmark of Natural Language Arguments
| null |
Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014)
| null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The connections between natural language processing and argumentation theory
have become stronger in recent years, with a growing number of works going in
this direction, in different scenarios and applying heterogeneous techniques.
In this paper, we present two datasets we built to cope with the combination
of the Textual Entailment framework and bipolar abstract argumentation. In our
approach, such datasets are used to automatically identify, through a Textual
Entailment system, the relations among the arguments (i.e., attack, support);
the resulting bipolar argumentation graphs are then analyzed to compute the
accepted arguments.
|
[
{
"version": "v1",
"created": "Mon, 5 May 2014 16:03:04 GMT"
}
] | 2014-05-06T00:00:00 |
[
[
"Cabrio",
"Elena",
""
],
[
"Villata",
"Serena",
""
]
] |
TITLE: Towards a Benchmark of Natural Language Arguments
ABSTRACT: The connections between natural language processing and argumentation theory
have become stronger in recent years, with a growing number of works going in
this direction, in different scenarios and applying heterogeneous techniques.
In this paper, we present two datasets we built to cope with the combination
of the Textual Entailment framework and bipolar abstract argumentation. In our
approach, such datasets are used to automatically identify, through a Textual
Entailment system, the relations among the arguments (i.e., attack, support);
the resulting bipolar argumentation graphs are then analyzed to compute the
accepted arguments.
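As a small illustration of the last step (computing the accepted arguments of the resulting graph), the sketch below evaluates grounded semantics over the attack relation only; the handling of support edges and the Textual Entailment component are omitted, and the function is illustrative rather than the authors' system.

```python
def grounded_extension(arguments, attacks):
    """arguments: iterable of ids; attacks: set of (attacker, target) pairs."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in rejected:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:        # every attacker is defeated -> accept a
                accepted.add(a)
                changed = True
            elif attackers & accepted:       # attacked by an accepted argument -> reject a
                rejected.add(a)
                changed = True
    return accepted

args = {"a1", "a2", "a3"}
atts = {("a1", "a2"), ("a2", "a3")}
print(sorted(grounded_extension(args, atts)))   # ['a1', 'a3']
```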
|
1404.0900
|
Xiaokui Xiao
|
Youze Tang, Xiaokui Xiao, Yanchen Shi
|
Influence Maximization: Near-Optimal Time Complexity Meets Practical
Efficiency
|
Revised Sections 1, 2.3, and 5 to remove incorrect claims about
reference [3]. Updated experiments accordingly. A shorter version of the
paper will appear in SIGMOD 2014
| null | null | null |
cs.SI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a social network G and a constant k, the influence maximization problem
asks for k nodes in G that (directly and indirectly) influence the largest
number of nodes under a pre-defined diffusion model. This problem finds
important applications in viral marketing, and has been extensively studied in
the literature. Existing algorithms for influence maximization, however, either
trade approximation guarantees for practical efficiency, or vice versa. In
particular, among the algorithms that achieve constant factor approximations
under the prominent independent cascade (IC) model or linear threshold (LT)
model, none can handle a million-node graph without incurring prohibitive
overheads.
This paper presents TIM, an algorithm that aims to bridge the theory and
practice in influence maximization. On the theory side, we show that TIM runs
in O((k+\ell) (n+m) \log n / \epsilon^2) expected time and returns a
(1-1/e-\epsilon)-approximate solution with at least 1 - n^{-\ell} probability.
The time complexity of TIM is near-optimal under the IC model, as it is only a
\log n factor larger than the \Omega(m + n) lower bound established in previous
work (for fixed k, \ell, and \epsilon). Moreover, TIM supports the triggering
model, which is a general diffusion model that includes both IC and LT as
special cases. On the practice side, TIM incorporates novel heuristics that
significantly improve its empirical efficiency without compromising its
asymptotic performance. We experimentally evaluate TIM with the largest
datasets ever tested in the literature, and show that it outperforms the
state-of-the-art solutions (with approximation guarantees) by up to four orders
of magnitude in terms of running time. In particular, when k = 50, \epsilon =
0.2, and \ell = 1, TIM requires less than one hour on a commodity machine to
process a network with 41.6 million nodes and 1.4 billion edges.
|
[
{
"version": "v1",
"created": "Thu, 3 Apr 2014 13:23:10 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Apr 2014 03:40:36 GMT"
}
] | 2014-05-02T00:00:00 |
[
[
"Tang",
"Youze",
""
],
[
"Xiao",
"Xiaokui",
""
],
[
"Shi",
"Yanchen",
""
]
] |
TITLE: Influence Maximization: Near-Optimal Time Complexity Meets Practical
Efficiency
ABSTRACT: Given a social network G and a constant k, the influence maximization problem
asks for k nodes in G that (directly and indirectly) influence the largest
number of nodes under a pre-defined diffusion model. This problem finds
important applications in viral marketing, and has been extensively studied in
the literature. Existing algorithms for influence maximization, however, either
trade approximation guarantees for practical efficiency, or vice versa. In
particular, among the algorithms that achieve constant factor approximations
under the prominent independent cascade (IC) model or linear threshold (LT)
model, none can handle a million-node graph without incurring prohibitive
overheads.
This paper presents TIM, an algorithm that aims to bridge the theory and
practice in influence maximization. On the theory side, we show that TIM runs
in O((k+\ell) (n+m) \log n / \epsilon^2) expected time and returns a
(1-1/e-\epsilon)-approximate solution with at least 1 - n^{-\ell} probability.
The time complexity of TIM is near-optimal under the IC model, as it is only a
\log n factor larger than the \Omega(m + n) lower bound established in previous
work (for fixed k, \ell, and \epsilon). Moreover, TIM supports the triggering
model, which is a general diffusion model that includes both IC and LT as
special cases. On the practice side, TIM incorporates novel heuristics that
significantly improve its empirical efficiency without compromising its
asymptotic performance. We experimentally evaluate TIM with the largest
datasets ever tested in the literature, and show that it outperforms the
state-of-the-art solutions (with approximation guarantees) by up to four orders
of magnitude in terms of running time. In particular, when k = 50, \epsilon =
0.2, and \ell = 1, TIM requires less than one hour on a commodity machine to
process a network with 41.6 million nodes and 1.4 billion edges.
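For context, the sketch below shows the classic greedy baseline under the independent cascade model, with the spread estimated by Monte Carlo simulation; this is the kind of expensive approach TIM is designed to improve upon, not the TIM algorithm itself, and the graph format is an assumption made for the example.

```python
import random

def ic_spread(graph, seeds, trials=200):
    """Monte Carlo estimate of the expected number of activated nodes."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_influence_maximization(graph, k):
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}
    seeds = []
    for _ in range(k):
        best = max(nodes - set(seeds), key=lambda v: ic_spread(graph, seeds + [v]))
        seeds.append(best)
    return seeds

# graph: node -> list of (neighbor, propagation probability)
g = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.5)], 2: [(3, 0.5)], 3: []}
print(greedy_influence_maximization(g, 2))
```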
|
1405.0085
|
Mahmoud Khademi
|
Mahmoud Khademi and Louis-Philippe Morency
|
Relative Facial Action Unit Detection
|
Accepted at IEEE Winter Conference on Applications of Computer
Vision, Steamboat Springs Colorado, USA, 2014
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a subject-independent facial action unit (AU) detection
method by introducing the concept of relative AU detection, for scenarios where
the neutral face is not provided. We propose a new classification objective
function which analyzes the temporal neighborhood of the current frame to
decide if the expression recently increased, decreased or showed no change.
This approach is a significant change from the conventional absolute method
which decides about AU classification using the current frame, without an
explicit comparison with its neighboring frames. Our proposed method improves
robustness to individual differences such as face scale and shape, age-related
wrinkles, and transitions among expressions (e.g., lower intensity of
expressions). Our experiments on three publicly available datasets (Extended
Cohn-Kanade (CK+), Bosphorus, and DISFA databases) show significant improvement
of our approach over conventional absolute techniques. Keywords: facial action
coding system (FACS); relative facial action unit detection; temporal
information;
|
[
{
"version": "v1",
"created": "Thu, 1 May 2014 03:53:36 GMT"
}
] | 2014-05-02T00:00:00 |
[
[
"Khademi",
"Mahmoud",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] |
TITLE: Relative Facial Action Unit Detection
ABSTRACT: This paper presents a subject-independent facial action unit (AU) detection
method by introducing the concept of relative AU detection, for scenarios where
the neutral face is not provided. We propose a new classification objective
function which analyzes the temporal neighborhood of the current frame to
decide if the expression recently increased, decreased or showed no change.
This approach is a significant change from the conventional absolute method
which decides about AU classification using the current frame, without an
explicit comparison with its neighboring frames. Our proposed method improves
robustness to individual differences such as face scale and shape, age-related
wrinkles, and transitions among expressions (e.g., lower intensity of
expressions). Our experiments on three publicly available datasets (Extended
Cohn-Kanade (CK+), Bosphorus, and DISFA databases) show significant improvement
of our approach over conventional absolute techniques. Keywords: facial action
coding system (FACS); relative facial action unit detection; temporal
information;
|
1305.5029
|
Yuchen Zhang
|
Yuchen Zhang and John C. Duchi and Martin J. Wainwright
|
Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with
Minimax Optimal Rates
| null | null | null | null |
math.ST cs.LG stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We establish optimal convergence rates for a decomposition-based scalable
approach to kernel ridge regression. The method is simple to describe: it
randomly partitions a dataset of size N into m subsets of equal size, computes
an independent kernel ridge regression estimator for each subset, then averages
the local solutions into a global predictor. This partitioning leads to a
substantial reduction in computation time versus the standard approach of
performing kernel ridge regression on all N samples. Our two main theorems
establish that despite the computational speed-up, statistical optimality is
retained: as long as m is not too large, the partition-based estimator achieves
the statistical minimax rate over all estimators using the set of N samples. As
concrete examples, our theory guarantees that the number of processors m may
grow nearly linearly for finite-rank kernels and Gaussian kernels and
polynomially in N for Sobolev spaces, which in turn allows for substantial
reductions in computational cost. We conclude with experiments on both
simulated data and a music-prediction task that complement our theoretical
results, exhibiting the computational and statistical benefits of our approach.
|
[
{
"version": "v1",
"created": "Wed, 22 May 2013 06:30:46 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Apr 2014 22:02:35 GMT"
}
] | 2014-05-01T00:00:00 |
[
[
"Zhang",
"Yuchen",
""
],
[
"Duchi",
"John C.",
""
],
[
"Wainwright",
"Martin J.",
""
]
] |
TITLE: Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with
Minimax Optimal Rates
ABSTRACT: We establish optimal convergence rates for a decomposition-based scalable
approach to kernel ridge regression. The method is simple to describe: it
randomly partitions a dataset of size N into m subsets of equal size, computes
an independent kernel ridge regression estimator for each subset, then averages
the local solutions into a global predictor. This partitioning leads to a
substantial reduction in computation time versus the standard approach of
performing kernel ridge regression on all N samples. Our two main theorems
establish that despite the computational speed-up, statistical optimality is
retained: as long as m is not too large, the partition-based estimator achieves
the statistical minimax rate over all estimators using the set of N samples. As
concrete examples, our theory guarantees that the number of processors m may
grow nearly linearly for finite-rank kernels and Gaussian kernels and
polynomially in N for Sobolev spaces, which in turn allows for substantial
reductions in computational cost. We conclude with experiments on both
simulated data and a music-prediction task that complement our theoretical
results, exhibiting the computational and statistical benefits of our approach.
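The estimator is simple enough to sketch directly. The following example (using scikit-learn's KernelRidge with illustrative hyperparameters) partitions the data into m random subsets, fits kernel ridge regression on each, and averages the local predictions; it is a minimal sketch under those assumptions, not the authors' code.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def dc_krr_predict(X, y, X_test, m=4, alpha=1e-2, gamma=1.0, seed=0):
    """Fit kernel ridge regression independently on m random subsets and
    average the local predictions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    preds = []
    for part in np.array_split(idx, m):          # (roughly) equal-size partition
        model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        model.fit(X[part], y[part])
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(dc_krr_predict(X, y, X_test))
```

Averaging the m local predictors is what keeps the procedure embarrassingly parallel while, per the theory above, retaining the minimax rate as long as m does not grow too fast.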
|
1404.7571
|
Mina Ghashami
|
Mina Ghashami, Jeff M. Phillips and Feifei Li
|
Continuous Matrix Approximation on Distributed Data
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tracking and approximating data matrices in streaming fashion is a
fundamental challenge. The problem requires more care and attention when data
comes from multiple distributed sites, each receiving a stream of data. This
paper considers the problem of "tracking approximations to a matrix" in the
distributed streaming model. In this model, there are m distributed sites,
each observing a distinct stream of data (where each element is a row of a
distributed matrix) and each having a communication channel with a
coordinator, and the
goal is to track an eps-approximation to the norm of the matrix along any
direction. To that end, we present novel algorithms to address the matrix
approximation problem. Our algorithms maintain a smaller matrix B, as an
approximation to a distributed streaming matrix A, such that for any unit
vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in
streaming fashion and incur small communication, which is critical for
distributed computation. Our best method is deterministic and uses only
O((m/eps) log(beta N)) communication, where N is the size of the stream (at
the time of the query) and beta is an upper bound on the squared norm of any
row of
the matrix. In addition to proving all algorithmic properties theoretically,
extensive experiments with real large datasets demonstrate the efficiency of
these protocols.
|
[
{
"version": "v1",
"created": "Wed, 30 Apr 2014 01:57:40 GMT"
}
] | 2014-05-01T00:00:00 |
[
[
"Ghashami",
"Mina",
""
],
[
"Phillips",
"Jeff M.",
""
],
[
"Li",
"Feifei",
""
]
] |
TITLE: Continuous Matrix Approximation on Distributed Data
ABSTRACT: Tracking and approximating data matrices in streaming fashion is a
fundamental challenge. The problem requires more care and attention when data
comes from multiple distributed sites, each receiving a stream of data. This
paper considers the problem of "tracking approximations to a matrix" in the
distributed streaming model. In this model, there are m distributed sites,
each observing a distinct stream of data (where each element is a row of a
distributed matrix) and each having a communication channel with a
coordinator, and the
goal is to track an eps-approximation to the norm of the matrix along any
direction. To that end, we present novel algorithms to address the matrix
approximation problem. Our algorithms maintain a smaller matrix B, as an
approximation to a distributed streaming matrix A, such that for any unit
vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in
streaming fashion and incur small communication, which is critical for
distributed computation. Our best method is deterministic and uses only
O((m/eps) log(beta N)) communication, where N is the size of the stream (at
the time of the query) and beta is an upper bound on the squared norm of any
row of
the matrix. In addition to proving all algorithmic properties theoretically,
extensive experiments with real large datasets demonstrate the efficiency of
these protocols.
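For intuition about the guarantee | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2, the sketch below implements a centralized Frequent-Directions-style matrix sketch, the kind of single-site building block such distributed protocols rely on; it is illustrative and is not the distributed algorithm from the paper.

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A and maintain a sketch B with ell rows such that
    ||Ax||^2 - ||Bx||^2 stays a small fraction of ||A||_F^2 for any unit x."""
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        zero_rows = np.flatnonzero(~B.any(axis=1))
        if len(zero_rows) == 0:                      # sketch is full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2
            B = np.sqrt(np.maximum(s ** 2 - delta, 0.0))[:, None] * Vt
            zero_rows = np.flatnonzero(~B.any(axis=1))
        B[zero_rows[0]] = row
    return B

A = np.random.default_rng(0).standard_normal((1000, 50))
B = frequent_directions(A, ell=20)
x = np.random.default_rng(1).standard_normal(50)
x /= np.linalg.norm(x)
err = (np.linalg.norm(A @ x) ** 2 - np.linalg.norm(B @ x) ** 2) / np.linalg.norm(A, "fro") ** 2
print(round(err, 4))   # small non-negative fraction of ||A||_F^2
```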
|
1305.4987
|
Julie Tibshirani
|
Julie Tibshirani and Christopher D. Manning
|
Robust Logistic Regression using Shift Parameters (Long Version)
| null | null | null | null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Annotation errors can significantly hurt classifier performance, yet datasets
are only growing noisier with the increased use of Amazon Mechanical Turk and
techniques like distant supervision that automatically generate labels. In this
paper, we present a robust extension of logistic regression that incorporates
the possibility of mislabelling directly into the objective. Our model can be
trained through nearly the same means as logistic regression, and retains its
efficiency on high-dimensional datasets. Through named entity recognition
experiments, we demonstrate that our approach can provide a significant
improvement over the standard model when annotation errors are present.
|
[
{
"version": "v1",
"created": "Tue, 21 May 2013 23:36:18 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Apr 2014 07:32:58 GMT"
}
] | 2014-04-30T00:00:00 |
[
[
"Tibshirani",
"Julie",
""
],
[
"Manning",
"Christopher D.",
""
]
] |
TITLE: Robust Logistic Regression using Shift Parameters (Long Version)
ABSTRACT: Annotation errors can significantly hurt classifier performance, yet datasets
are only growing noisier with the increased use of Amazon Mechanical Turk and
techniques like distant supervision that automatically generate labels. In this
paper, we present a robust extension of logistic regression that incorporates
the possibility of mislabelling directly into the objective. Our model can be
trained through nearly the same means as logistic regression, and retains its
efficiency on high-dimensional datasets. Through named entity recognition
experiments, we demonstrate that our approach can provide a significant
improvement over the standard model when annotation errors are present.
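A rough sketch of the idea (not the authors' implementation): give each training example its own shift parameter inside the linear predictor and penalize the shifts with an L1 term, so that only a few suspicious examples receive large shifts. The optimizer, function names, and hyperparameters below are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_logreg(X, y, lam=1.0, lr=0.1, iters=2000):
    n, d = X.shape
    w = np.zeros(d)
    gamma = np.zeros(n)                     # one shift per training example
    for _ in range(iters):
        p = sigmoid(X @ w + gamma)
        w -= lr * (X.T @ (p - y) / n)
        gamma -= lr * ((p - y) / n)
        # proximal step for the L1 penalty on gamma (soft-thresholding)
        gamma = np.sign(gamma) * np.maximum(np.abs(gamma) - lr * lam / n, 0.0)
    return w, gamma

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) > 0).astype(float)
y[:10] = 1 - y[:10]                         # flip some labels to simulate annotation noise
w, gamma = robust_logreg(X, y)
print(np.argsort(-np.abs(gamma))[:10])      # examples with the largest |shift|
```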
|
1404.6383
|
Pierre de Buyl
|
Valentin Haenel
|
Bloscpack: a compressed lightweight serialization format for numerical
data
|
Part of the Proceedings of the 6th European Conference on Python in
Science (EuroSciPy 2013), Pierre de Buyl and Nelle Varoquaux editors, (2014)
| null | null |
euroscipy-proceedings2013-02
|
cs.MS cs.PL
|
http://creativecommons.org/licenses/by/3.0/
|
This paper introduces the Bloscpack file format and the accompanying Python
reference implementation. Bloscpack is a lightweight, compressed binary
file-format based on the Blosc codec and is designed for lightweight, fast
serialization of numerical data. This article presents the features of the
file-format and some API aspects of the reference implementation, in
particular the ability to handle Numpy ndarrays. Furthermore, in order to
demonstrate its utility, the format is compared both feature- and
performance-wise to a few alternative lightweight serialization solutions for
Numpy ndarrays. The performance comparisons take the form of some comprehensive
benchmarks over a range of different artificial datasets with varying size and
complexity, the results of which are presented as the last section of this
article.
|
[
{
"version": "v1",
"created": "Fri, 25 Apr 2014 10:53:23 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Apr 2014 14:16:55 GMT"
}
] | 2014-04-30T00:00:00 |
[
[
"Haenel",
"Valentin",
""
]
] |
TITLE: Bloscpack: a compressed lightweight serialization format for numerical
data
ABSTRACT: This paper introduces the Bloscpack file format and the accompanying Python
reference implementation. Bloscpack is a lightweight, compressed binary
file-format based on the Blosc codec and is designed for lightweight, fast
serialization of numerical data. This article presents the features of the
file-format and some API aspects of the reference implementation, in
particular the ability to handle Numpy ndarrays. Furthermore, in order to
demonstrate its utility, the format is compared both feature- and
performance-wise to a few alternative lightweight serialization solutions for
Numpy ndarrays. The performance comparisons take the form of some comprehensive
benchmarks over a range of different artificial datasets with varying size and
complexity, the results of which are presented as the last section of this
article.
|
1404.7176
|
Peter Schwander
|
P. Schwander, R. Fung, A. Ourmazd
|
Conformations of Macromolecules and their Complexes from Heterogeneous
Datasets
| null | null | null | null |
physics.bio-ph q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a new generation of algorithms capable of mapping the structure
and conformations of macromolecules and their complexes from large ensembles of
heterogeneous snapshots, and demonstrate the feasibility of determining both
discrete and continuous macromolecular conformational spectra. These algorithms
naturally incorporate conformational heterogeneity without resort to sorting
and classification, or prior knowledge of the type of heterogeneity present.
They are applicable to single-particle diffraction and image datasets produced
by X-ray lasers and cryo-electron microscopy, respectively, and particularly
suitable for systems not easily amenable to purification or crystallization.
|
[
{
"version": "v1",
"created": "Mon, 28 Apr 2014 21:47:07 GMT"
}
] | 2014-04-30T00:00:00 |
[
[
"Schwander",
"P.",
""
],
[
"Fung",
"R.",
""
],
[
"Ourmazd",
"A.",
""
]
] |
TITLE: Conformations of Macromolecules and their Complexes from Heterogeneous
Datasets
ABSTRACT: We describe a new generation of algorithms capable of mapping the structure
and conformations of macromolecules and their complexes from large ensembles of
heterogeneous snapshots, and demonstrate the feasibility of determining both
discrete and continuous macromolecular conformational spectra. These algorithms
naturally incorporate conformational heterogeneity without resort to sorting
and classification, or prior knowledge of the type of heterogeneity present.
They are applicable to single-particle diffraction and image datasets produced
by X-ray lasers and cryo-electron microscopy, respectively, and particularly
suitable for systems not easily amenable to purification or crystallization.
|
1305.0062
|
Zeinab Taghavi
|
Zeinab Taghavi, Narjes S. Movahedi, Sorin Draghici, Hamidreza Chitsaz
|
Distilled Single Cell Genome Sequencing and De Novo Assembly for Sparse
Microbial Communities
| null | null |
10.1093/bioinformatics/btt420
| null |
q-bio.GN cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identification of every single genome present in a microbial sample is an
important and challenging task with crucial applications. It is challenging
because there are typically millions of cells in a microbial sample, the vast
majority of which elude cultivation. The most accurate method to date is
exhaustive single cell sequencing using multiple displacement amplification,
which is simply intractable for a large number of cells. However, there is hope
for breaking this barrier as the number of different cell types with distinct
genome sequences is usually much smaller than the number of cells.
Here, we present a novel divide and conquer method to sequence and de novo
assemble all distinct genomes present in a microbial sample with a sequencing
cost and computational complexity proportional to the number of genome types,
rather than the number of cells. The method is implemented in a tool called
Squeezambler. We evaluated Squeezambler on simulated data. The proposed divide
and conquer method successfully reduces the cost of sequencing in comparison
with the naive exhaustive approach.
Availability: Squeezambler and datasets are available under
http://compbio.cs.wayne.edu/software/squeezambler/.
|
[
{
"version": "v1",
"created": "Wed, 1 May 2013 00:49:29 GMT"
},
{
"version": "v2",
"created": "Wed, 22 May 2013 21:39:04 GMT"
}
] | 2014-04-29T00:00:00 |
[
[
"Taghavi",
"Zeinab",
""
],
[
"Movahedi",
"Narjes S.",
""
],
[
"Draghici",
"Sorin",
""
],
[
"Chitsaz",
"Hamidreza",
""
]
] |
TITLE: Distilled Single Cell Genome Sequencing and De Novo Assembly for Sparse
Microbial Communities
ABSTRACT: Identification of every single genome present in a microbial sample is an
important and challenging task with crucial applications. It is challenging
because there are typically millions of cells in a microbial sample, the vast
majority of which elude cultivation. The most accurate method to date is
exhaustive single cell sequencing using multiple displacement amplification,
which is simply intractable for a large number of cells. However, there is hope
for breaking this barrier as the number of different cell types with distinct
genome sequences is usually much smaller than the number of cells.
Here, we present a novel divide and conquer method to sequence and de novo
assemble all distinct genomes present in a microbial sample with a sequencing
cost and computational complexity proportional to the number of genome types,
rather than the number of cells. The method is implemented in a tool called
Squeezambler. We evaluated Squeezambler on simulated data. The proposed divide
and conquer method successfully reduces the cost of sequencing in comparison
with the naive exhaustive approach.
Availability: Squeezambler and datasets are available under
http://compbio.cs.wayne.edu/software/squeezambler/.
|
1404.6876
|
Voot Tangkaratt
|
Voot Tangkaratt, Ning Xie, and Masashi Sugiyama
|
Conditional Density Estimation with Dimensionality Reduction via
Squared-Loss Conditional Entropy Minimization
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regression aims at estimating the conditional mean of output given input.
However, regression is not informative enough if the conditional density is
multimodal, heteroscedastic, and asymmetric. In such a case, estimating the
conditional density itself is preferable, but conditional density estimation
(CDE) is challenging in high-dimensional space. A naive approach to coping with
high-dimensionality is to first perform dimensionality reduction (DR) and then
execute CDE. However, such a two-step process does not perform well in practice
because the error incurred in the first DR step can be magnified in the second
CDE step. In this paper, we propose a novel single-shot procedure that performs
CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR
as the problem of minimizing a squared-loss variant of conditional entropy, and
this is solved via CDE. Thus, an additional CDE step is not needed after DR. We
demonstrate the usefulness of the proposed method through extensive experiments
on various datasets including humanoid robot transition and computer art.
|
[
{
"version": "v1",
"created": "Mon, 28 Apr 2014 06:30:39 GMT"
}
] | 2014-04-29T00:00:00 |
[
[
"Tangkaratt",
"Voot",
""
],
[
"Xie",
"Ning",
""
],
[
"Sugiyama",
"Masashi",
""
]
] |
TITLE: Conditional Density Estimation with Dimensionality Reduction via
Squared-Loss Conditional Entropy Minimization
ABSTRACT: Regression aims at estimating the conditional mean of output given input.
However, regression is not informative enough if the conditional density is
multimodal, heteroscedastic, and asymmetric. In such a case, estimating the
conditional density itself is preferable, but conditional density estimation
(CDE) is challenging in high-dimensional space. A naive approach to coping with
high-dimensionality is to first perform dimensionality reduction (DR) and then
execute CDE. However, such a two-step process does not perform well in practice
because the error incurred in the first DR step can be magnified in the second
CDE step. In this paper, we propose a novel single-shot procedure that performs
CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR
as the problem of minimizing a squared-loss variant of conditional entropy, and
this is solved via CDE. Thus, an additional CDE step is not needed after DR. We
demonstrate the usefulness of the proposed method through extensive experiments
on various datasets including humanoid robot transition and computer art.
|
1307.2982
|
Mohammad Norouzi
|
Mohammad Norouzi, Ali Punjani, David J. Fleet
|
Fast Exact Search in Hamming Space with Multi-Index Hashing
| null | null | null | null |
cs.CV cs.AI cs.DS cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is growing interest in representing image data and feature descriptors
using compact binary codes for fast near neighbor search. Although binary codes
are motivated by their use as direct indices (addresses) into a hash table,
codes longer than 32 bits are not being used as such, as this was thought to be
ineffective. We introduce a rigorous way to build multiple hash tables on
binary code substrings that enables exact k-nearest neighbor search in Hamming
space. The approach is storage efficient and straightforward to implement.
Theoretical analysis shows that the algorithm exhibits sub-linear run-time
behavior for uniformly distributed codes. Empirical results show dramatic
speedups over a linear scan baseline for datasets of up to one billion codes of
64, 128, or 256 bits.
|
[
{
"version": "v1",
"created": "Thu, 11 Jul 2013 05:52:21 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Dec 2013 02:36:21 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Apr 2014 01:31:55 GMT"
}
] | 2014-04-28T00:00:00 |
[
[
"Norouzi",
"Mohammad",
""
],
[
"Punjani",
"Ali",
""
],
[
"Fleet",
"David J.",
""
]
] |
TITLE: Fast Exact Search in Hamming Space with Multi-Index Hashing
ABSTRACT: There is growing interest in representing image data and feature descriptors
using compact binary codes for fast near neighbor search. Although binary codes
are motivated by their use as direct indices (addresses) into a hash table,
codes longer than 32 bits are not being used as such, as this was thought to be
ineffective. We introduce a rigorous way to build multiple hash tables on
binary code substrings that enables exact k-nearest neighbor search in Hamming
space. The approach is storage efficient and straightforward to implement.
Theoretical analysis shows that the algorithm exhibits sub-linear run-time
behavior for uniformly distributed codes. Empirical results show dramatic
speedups over a linear scan baseline for datasets of up to one billion codes of
64, 128, or 256 bits.
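The core trick can be sketched compactly: split each code into s disjoint substrings and index each substring in its own hash table; for a query within Hamming distance r < s, the pigeonhole principle guarantees an exact match on at least one substring, so probing the s tables yields a small candidate set that is then verified with the full Hamming distance. The sketch below covers only this simplest case and is not the paper's full algorithm (which also handles larger search radii).

```python
from collections import defaultdict

def split(code, s):
    step = len(code) // s
    return [code[i * step:(i + 1) * step] for i in range(s)]

class MultiIndexHash:
    def __init__(self, codes, s):
        self.codes, self.s = codes, s
        self.tables = [defaultdict(list) for _ in range(s)]
        for idx, c in enumerate(codes):
            for table, sub in zip(self.tables, split(c, s)):
                table[sub].append(idx)

    def query(self, q, r):                  # exact r-neighbor search, assumes r < s
        cand = set()
        for table, sub in zip(self.tables, split(q, self.s)):
            cand.update(table.get(sub, []))
        return sorted(i for i in cand
                      if sum(a != b for a, b in zip(q, self.codes[i])) <= r)

codes = ["0000000011111111", "0000000011111100", "1111111100000000"]
mih = MultiIndexHash(codes, s=4)
print(mih.query("0000000011111110", r=2))   # -> [0, 1]
```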
|
1311.0202
|
Diego Amancio Raphael
|
D. R. Amancio, C. H. Comin, D. Casanova, G. Travieso, O. M. Bruno, F.
A. Rodrigues and L. da F. Costa
|
A systematic comparison of supervised classifiers
| null |
PLoS ONE 9 (4): e94137, 2014
|
10.1371/journal.pone.0094137
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pattern recognition techniques have been employed in a myriad of industrial,
medical, commercial and academic applications. To tackle such a diversity of
data, many techniques have been devised. However, despite the long tradition of
pattern recognition research, there is no technique that yields the best
classification in all scenarios. Therefore, the consideration of as many
techniques as possible presents itself as a fundamental practice in
applications aiming at high accuracy. Typical works comparing methods either
emphasize the performance of a given algorithm in validation tests or
systematically compare various algorithms, assuming that the practical use of
these methods is done by experts. On many occasions, however, researchers have
to deal with their practical classification tasks without an in-depth
knowledge of the underlying mechanisms behind the parameters. Actually, the
adequate choice of classifiers and parameters alike in such practical
circumstances constitutes a long-standing problem and is the subject of the
current paper. We carried out a study on the performance of nine well-known
classifiers implemented in the Weka framework and compared the dependence of
the accuracy on their parameter configurations. The analysis of performance
with default parameters revealed that the k-nearest neighbors method exceeds
by a large margin the other methods when high dimensional datasets are
considered. When other configurations of parameters were allowed, we found
that it is possible to improve the quality of SVM by more than 20% even if
parameters are set randomly. Taken together, the investigation conducted in
this paper suggests that, apart from the SVM implementation, Weka's default
configuration of parameters provides a performance close to the one achieved
with the optimal configuration.
|
[
{
"version": "v1",
"created": "Thu, 17 Oct 2013 03:44:18 GMT"
}
] | 2014-04-28T00:00:00 |
[
[
"Amancio",
"D. R.",
""
],
[
"Comin",
"C. H.",
""
],
[
"Casanova",
"D.",
""
],
[
"Travieso",
"G.",
""
],
[
"Bruno",
"O. M.",
""
],
[
"Rodrigues",
"F. A.",
""
],
[
"Costa",
"L. da F.",
""
]
] |
TITLE: A systematic comparison of supervised classifiers
ABSTRACT: Pattern recognition techniques have been employed in a myriad of industrial,
medical, commercial and academic applications. To tackle such a diversity of
data, many techniques have been devised. However, despite the long tradition of
pattern recognition research, there is no technique that yields the best
classification in all scenarios. Therefore, the consideration of as many
techniques as possible presents itself as a fundamental practice in
applications aiming at high accuracy. Typical works comparing methods either
emphasize the performance of a given algorithm in validation tests or
systematically compare various algorithms, assuming that the practical use of
these methods is done by experts. On many occasions, however, researchers have
to deal with their practical classification tasks without an in-depth
knowledge of the underlying mechanisms behind the parameters. Actually, the
adequate choice of classifiers and parameters alike in such practical
circumstances constitutes a long-standing problem and is the subject of the
current paper. We carried out a study on the performance of nine well-known
classifiers implemented in the Weka framework and compared the dependence of
the accuracy on their parameter configurations. The analysis of performance
with default parameters revealed that the k-nearest neighbors method exceeds
by a large margin the other methods when high dimensional datasets are
considered. When other configurations of parameters were allowed, we found
that it is possible to improve the quality of SVM by more than 20% even if
parameters are set randomly. Taken together, the investigation conducted in
this paper suggests that, apart from the SVM implementation, Weka's default
configuration of parameters provides a performance close to the one achieved
with the optimal configuration.
|
1404.6351
|
Harald Ganster
|
Harald Ganster, Martina Uray, Sylwia Steginska, Gerardus Croonen,
Rudolf Kaltenb\"ock, Karin Hennermann
|
Improving weather radar by fusion and classification
|
Part of the OAGM 2014 proceedings (arXiv:1404.3538)
| null | null |
OAGM/2014/04
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In air traffic management (ATM), all necessary operations (tactical planning,
sector configuration, required staffing, runway configuration, routing of
approaching aircraft) rely on accurate measurements and predictions of the
current weather situation. An essential basis of information is delivered by
weather radar images (WXR), which, unfortunately, exhibit a vast amount of
disturbances. Thus, the improvement of these datasets is the key factor for
more accurate predictions of weather phenomena and weather conditions. Image
processing methods based on texture analysis and geometric operators make it
possible to identify regions containing artefacts as well as zones of missing
information. Correction of these zones is implemented by exploiting
multi-spectral satellite data (Meteosat Second Generation). Results prove that
the proposed system for artefact detection and data correction significantly
improves the quality of WXR data and thus enables more reliable weather
nowcasts and forecasts, leading to increased ATM safety.
|
[
{
"version": "v1",
"created": "Fri, 25 Apr 2014 08:32:51 GMT"
}
] | 2014-04-28T00:00:00 |
[
[
"Ganster",
"Harald",
""
],
[
"Uray",
"Martina",
""
],
[
"Steginska",
"Sylwia",
""
],
[
"Croonen",
"Gerardus",
""
],
[
"Kaltenböck",
"Rudolf",
""
],
[
"Hennermann",
"Karin",
""
]
] |
TITLE: Improving weather radar by fusion and classification
ABSTRACT: In air traffic management (ATM), all necessary operations (tactical planning,
sector configuration, required staffing, runway configuration, routing of
approaching aircraft) rely on accurate measurements and predictions of the
current weather situation. An essential basis of information is delivered by
weather radar images (WXR), which, unfortunately, exhibit a vast amount of
disturbances. Thus, the improvement of these datasets is the key factor for
more accurate predictions of weather phenomena and weather conditions. Image
processing methods based on texture analysis and geometric operators make it
possible to identify regions containing artefacts as well as zones of missing
information. Correction of these zones is implemented by exploiting
multi-spectral satellite data (Meteosat Second Generation). Results prove that
the proposed system for artefact detection and data correction significantly
improves the quality of WXR data and thus enables more reliable weather
nowcasts and forecasts, leading to increased ATM safety.
|
1404.6413
|
Georg Waltner
|
Georg Waltner and Thomas Mauthner and Horst Bischof
|
Indoor Activity Detection and Recognition for Sport Games Analysis
|
Part of the OAGM 2014 proceedings (arXiv:1404.3538)
| null | null |
OAGM/2014/03
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activity recognition in sport is an attractive field for computer vision
research. Game, player and team analysis are of great interest and research
topics within this field emerge with the goal of automated analysis. The very
specific underlying rules of sports can be used as prior knowledge for the
recognition task and present a constrained environment for evaluation. This
paper describes recognition of single player activities in sport with special
emphasis on volleyball. Starting from a per-frame player-centered activity
recognition, we incorporate geometry and contextual information via an activity
context descriptor that collects information about all players' activities over
a certain timespan relative to the investigated player. The benefit of this
context information on single player activity recognition is evaluated on our
new real-life dataset comprising a total of almost 36k annotated frames
containing 7 activity classes within 6 videos of professional volleyball games.
Our incorporation of the contextual information improves the average
player-centered classification performance of 77.56% by up to 18.35% on
specific classes, proving that spatio-temporal context is an important cue for
activity recognition.
|
[
{
"version": "v1",
"created": "Fri, 25 Apr 2014 13:25:09 GMT"
}
] | 2014-04-28T00:00:00 |
[
[
"Waltner",
"Georg",
""
],
[
"Mauthner",
"Thomas",
""
],
[
"Bischof",
"Horst",
""
]
] |
TITLE: Indoor Activity Detection and Recognition for Sport Games Analysis
ABSTRACT: Activity recognition in sport is an attractive field for computer vision
research. Game, player and team analysis are of great interest and research
topics within this field emerge with the goal of automated analysis. The very
specific underlying rules of sports can be used as prior knowledge for the
recognition task and present a constrained environment for evaluation. This
paper describes recognition of single player activities in sport with special
emphasis on volleyball. Starting from a per-frame player-centered activity
recognition, we incorporate geometry and contextual information via an activity
context descriptor that collects information about all players' activities over
a certain timespan relative to the investigated player. The benefit of this
context information on single player activity recognition is evaluated on our
new real-life dataset comprising a total of almost 36k annotated frames
containing 7 activity classes within 6 videos of professional volleyball games.
Our incorporation of the contextual information improves the average
player-centered classification performance of 77.56% by up to 18.35% on
specific classes, proving that spatio-temporal context is an important cue for
activity recognition.
|
1404.6039
|
Nicolas Charon
|
Benjamin Charlier (UM2), Nicolas Charon (DIKU, CMLA), Alain Trouv\'e
(CMLA)
|
The fshape framework for the variability analysis of functional shapes
| null | null | null | null |
cs.CG cs.CV math.DG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article introduces a full mathematical and numerical framework for
treating functional shapes (or fshapes) following the landmarks of shape spaces
and shape analysis. Functional shapes can be described as signal functions
supported on varying geometrical supports. Analysing variability of fshapes'
ensembles requires the modelling and quantification of joint variations in
geometry and signal, which have been treated separately in previous approaches.
Instead, building on the ideas of shape spaces for purely geometrical objects,
we propose the extended concept of fshape bundles and define Riemannian metrics
for fshape metamorphoses to model geometrico-functional transformations within
these bundles. We also generalize previous works on data attachment terms based
on the notion of varifolds and demonstrate the utility of these distances.
Based on these, we propose variational formulations of the atlas estimation
problem on populations of fshapes and prove existence of solutions for the
different models. The second part of the article examines the numerical
implementation of the models by detailing discrete expressions for the metrics
and gradients and proposing an optimization scheme for the atlas estimation
problem. We present a few results of the methodology on a synthetic dataset as
well as on a population of retinal membranes with thickness maps.
|
[
{
"version": "v1",
"created": "Thu, 24 Apr 2014 06:23:30 GMT"
}
] | 2014-04-25T00:00:00 |
[
[
"Charlier",
"Benjamin",
"",
"UM2"
],
[
"Charon",
"Nicolas",
"",
"DIKU, CMLA"
],
[
"Trouvé",
"Alain",
"",
"CMLA"
]
] |
TITLE: The fshape framework for the variability analysis of functional shapes
ABSTRACT: This article introduces a full mathematical and numerical framework for
treating functional shapes (or fshapes) following the landmarks of shape spaces
and shape analysis. Functional shapes can be described as signal functions
supported on varying geometrical supports. Analysing variability of fshapes'
ensembles requires the modelling and quantification of joint variations in
geometry and signal, which have been treated separately in previous approaches.
Instead, building on the ideas of shape spaces for purely geometrical objects,
we propose the extended concept of fshape bundles and define Riemannian metrics
for fshape metamorphoses to model geometrico-functional transformations within
these bundles. We also generalize previous works on data attachment terms based
on the notion of varifolds and demonstrate the utility of these distances.
Based on these, we propose variational formulations of the atlas estimation
problem on populations of fshapes and prove existence of solutions for the
different models. The second part of the article examines the numerical
implementation of the models by detailing discrete expressions for the metrics
and gradients and proposing an optimization scheme for the atlas estimation
problem. We present a few results of the methodology on a synthetic dataset as
well as on a population of retinal membranes with thickness maps.
|
1404.6151
|
Rajib Rana
|
Rajib Rana, Mingrui Yang, Tim Wark, Chun Tung Chou, Wen Hu
|
SimpleTrack:Adaptive Trajectory Compression with Deterministic
Projection Matrix for Mobile Sensor Networks
| null | null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some mobile sensor network applications require the sensor nodes to transfer
their trajectories to a data sink. This paper proposes an adaptive trajectory
(lossy) compression algorithm based on compressive sensing. The algorithm has
two innovative elements. First, we propose a method to compute a deterministic
projection matrix from a learnt dictionary. Second, we propose a method for the
mobile nodes to adaptively predict the number of projections needed based on
the speed of the mobile nodes. Extensive evaluation of the proposed algorithm
using 6 datasets shows that our proposed algorithm can achieve sub-metre
accuracy. In addition, our method of computing projection matrices outperforms
two existing methods. Finally, comparison of our algorithm against a
state-of-the-art trajectory compression algorithm shows that our algorithm can
reduce the error by 10-60 cm for the same compression ratio.
|
[
{
"version": "v1",
"created": "Wed, 23 Apr 2014 04:30:33 GMT"
}
] | 2014-04-25T00:00:00 |
[
[
"Rana",
"Rajib",
""
],
[
"Yang",
"Mingrui",
""
],
[
"Wark",
"Tim",
""
],
[
"Chou",
"Chun Tung",
""
],
[
"Hu",
"Wen",
""
]
] |
TITLE: SimpleTrack:Adaptive Trajectory Compression with Deterministic
Projection Matrix for Mobile Sensor Networks
ABSTRACT: Some mobile sensor network applications require the sensor nodes to transfer
their trajectories to a data sink. This paper proposes an adaptive trajectory
(lossy) compression algorithm based on compressive sensing. The algorithm has
two innovative elements. First, we propose a method to compute a deterministic
projection matrix from a learnt dictionary. Second, we propose a method for the
mobile nodes to adaptively predict the number of projections needed based on
the speed of the mobile nodes. Extensive evaluation of the proposed algorithm
using 6 datasets shows that our proposed algorithm can achieve sub-metre
accuracy. In addition, our method of computing projection matrices outperforms
two existing methods. Finally, comparison of our algorithm against a
state-of-the-art trajectory compression algorithm shows that our algorithm can
reduce the error by 10-60 cm for the same compression ratio.
|
1303.2132
|
Xiao-Lei Zhang
|
Xiao-Lei Zhang
|
Heuristic Ternary Error-Correcting Output Codes Via Weight Optimization
and Layered Clustering-Based Approach
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One important classifier ensemble for multiclass classification problems is
Error-Correcting Output Codes (ECOCs). It bridges multiclass problems and
binary-class classifiers by decomposing a multiclass problem into a series of
binary-class problems. In this paper, we present a heuristic ternary code,
named Weight Optimization and Layered Clustering-based ECOC (WOLC-ECOC). It
starts with an arbitrary valid ECOC and iterates the following two steps until
the training risk converges. The first step, named Layered Clustering-based
ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing
binary-class problem. The second step adds the new classifiers to the ECOC by
a novel Optimized Weighted (OW) decoding algorithm, where the optimization
problem of the decoding is solved by the cutting plane algorithm. Technically,
LC-ECOC prevents the heuristic training process from being blocked by a
difficult binary-class problem. OW decoding guarantees that the training risk
does not increase, which ensures a small code length. Results on 14 UCI
datasets and a music genre classification problem demonstrate the
effectiveness of WOLC-ECOC.
|
[
{
"version": "v1",
"created": "Fri, 8 Mar 2013 21:40:42 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Apr 2014 00:59:58 GMT"
}
] | 2014-04-24T00:00:00 |
[
[
"Zhang",
"Xiao-Lei",
""
]
] |
TITLE: Heuristic Ternary Error-Correcting Output Codes Via Weight Optimization
and Layered Clustering-Based Approach
ABSTRACT: One important classifier ensemble for multiclass classification problems is
Error-Correcting Output Codes (ECOCs). It bridges multiclass problems and
binary-class classifiers by decomposing a multiclass problem into a series of
binary-class problems. In this paper, we present a heuristic ternary code,
named Weight Optimization and Layered Clustering-based ECOC (WOLC-ECOC). It
starts with an arbitrary valid ECOC and iterates the following two steps until
the training risk converges. The first step, named Layered Clustering-based
ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing
binary-class problem. The second step adds the new classifiers to the ECOC by
a novel Optimized Weighted (OW) decoding algorithm, where the optimization
problem of the decoding is solved by the cutting plane algorithm. Technically,
LC-ECOC prevents the heuristic training process from being blocked by a
difficult binary-class problem. OW decoding guarantees that the training risk
does not increase, which ensures a small code length. Results on 14 UCI
datasets and a music genre classification problem demonstrate the
effectiveness of WOLC-ECOC.
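For readers unfamiliar with ECOC, the sketch below shows the generic encode/train/decode mechanics on random toy data. It is plain binary ECOC with Hamming decoding, not WOLC-ECOC; the code matrix and the scikit-learn binary learners are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ecoc_fit(X, y, code):
    """code: (n_classes, n_bits) matrix with entries in {-1, +1}; each column
    defines one binary problem."""
    classes = np.unique(y)
    cls_index = {c: i for i, c in enumerate(classes)}
    clfs = []
    for b in range(code.shape[1]):
        target = np.array([code[cls_index[yi], b] for yi in y])
        clfs.append(LogisticRegression().fit(X, target))
    return classes, clfs

def ecoc_predict(X, classes, clfs, code):
    bits = np.column_stack([c.predict(X) for c in clfs])        # (n_samples, n_bits)
    dists = (bits[:, None, :] != code[None, :, :]).sum(axis=2)  # Hamming decoding
    return classes[dists.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
y = rng.integers(0, 3, 300)
code = np.array([[1, 1, -1], [1, -1, 1], [-1, 1, 1]])           # one row per class
classes, clfs = ecoc_fit(X, y, code)
print(ecoc_predict(X[:5], classes, clfs, code))
```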
|
1305.4076
|
Fuqiang Chen
|
Fu-qiang Chen, Yan Wu, Guo-dong Zhao, Jun-ming Zhang, Ming Zhu, Jing
Bai
|
Contractive De-noising Auto-encoder
|
Figures edited
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An auto-encoder is a special kind of neural network based on reconstruction.
The de-noising auto-encoder (DAE) is an improved auto-encoder that gains
robustness to the input by first corrupting the original data and then
reconstructing the original input by minimizing the reconstruction error
function. The contractive auto-encoder (CAE) is another improved auto-encoder
that learns robust features by penalizing the Frobenius norm of the Jacobian
matrix of the learned features with respect to the original input. In this
paper, we combine the de-noising auto-encoder and the contractive
auto-encoder, and propose another improved auto-encoder, the contractive
de-noising auto-encoder (CDAE), which is robust to both the original input and
the learned feature. We stack CDAEs to extract more abstract features and
apply an SVM for classification. The experimental results on the benchmark
MNIST dataset show that our proposed CDAE performs better than both DAE and
CAE, proving the effectiveness of our method.
|
[
{
"version": "v1",
"created": "Fri, 17 May 2013 13:42:49 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2013 04:22:44 GMT"
},
{
"version": "v3",
"created": "Thu, 30 May 2013 00:01:45 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2014 13:41:32 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Apr 2014 11:40:12 GMT"
}
] | 2014-04-24T00:00:00 |
[
[
"Chen",
"Fu-qiang",
""
],
[
"Wu",
"Yan",
""
],
[
"Zhao",
"Guo-dong",
""
],
[
"Zhang",
"Jun-ming",
""
],
[
"Zhu",
"Ming",
""
],
[
"Bai",
"Jing",
""
]
] |
TITLE: Contractive De-noising Auto-encoder
ABSTRACT: An auto-encoder is a special kind of neural network based on reconstruction.
The de-noising auto-encoder (DAE) is an improved auto-encoder that gains
robustness to the input by first corrupting the original data and then
reconstructing the original input by minimizing the reconstruction error
function. The contractive auto-encoder (CAE) is another improved auto-encoder
that learns robust features by penalizing the Frobenius norm of the Jacobian
matrix of the learned features with respect to the original input. In this
paper, we combine the de-noising auto-encoder and the contractive
auto-encoder, and propose another improved auto-encoder, the contractive
de-noising auto-encoder (CDAE), which is robust to both the original input and
the learned feature. We stack CDAEs to extract more abstract features and
apply an SVM for classification. The experimental results on the benchmark
MNIST dataset show that our proposed CDAE performs better than both DAE and
CAE, proving the effectiveness of our method.
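A toy sketch of the combined objective (not the authors' code): corrupt the input, reconstruct it with a one-hidden-layer sigmoid auto-encoder, and add the contractive penalty, i.e. the squared Frobenius norm of the Jacobian of the hidden units. All shapes, names, and hyperparameters below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cdae_loss(x, W, b, W2, c, noise_std=0.1, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x_tilde = x + noise_std * rng.standard_normal(x.shape)   # de-noising corruption
    h = sigmoid(x_tilde @ W + b)                              # hidden representation
    x_hat = sigmoid(h @ W2 + c)                               # reconstruction
    recon = np.mean((x_hat - x) ** 2)
    # For sigmoid units, dh_j/dx_i = h_j (1 - h_j) W_ij, so the contractive
    # penalty ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W_ij^2.
    contractive = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=0))
    return recon + lam * contractive

rng = np.random.default_rng(1)
d, k = 20, 8
x = rng.random(d)
W, b = 0.1 * rng.standard_normal((d, k)), np.zeros(k)
W2, c = 0.1 * rng.standard_normal((k, d)), np.zeros(d)
print(cdae_loss(x, W, b, W2, c))
```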
|
1404.5765
|
Daniel Wolf
|
Daniel Wolf, Markus Bajones, Johann Prankl, Markus Vincze
|
Find my mug: Efficient object search with a mobile robot using semantic
segmentation
|
Part of the OAGM 2014 proceedings (arXiv:1404.3538)
| null | null |
OAGM/2014/14
|
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an efficient semantic segmentation framework for
indoor scenes, tailored to the application on a mobile robot. Semantic
segmentation can help robots to gain a reasonable understanding of their
environment, but to reach this goal, the algorithms not only need to be
accurate, but also fast and robust. Therefore, we developed an optimized 3D
point cloud processing framework based on a Randomized Decision Forest,
achieving competitive results at sufficiently high frame rates. We evaluate the
capabilities of our method on the popular NYU depth dataset and our own data
and demonstrate its feasibility by deploying it on a mobile service robot, for
which we could optimize an object search procedure using our results.
|
[
{
"version": "v1",
"created": "Wed, 23 Apr 2014 09:48:30 GMT"
}
] | 2014-04-24T00:00:00 |
[
[
"Wolf",
"Daniel",
""
],
[
"Bajones",
"Markus",
""
],
[
"Prankl",
"Johann",
""
],
[
"Vincze",
"Markus",
""
]
] |
TITLE: Find my mug: Efficient object search with a mobile robot using semantic
segmentation
ABSTRACT: In this paper, we propose an efficient semantic segmentation framework for
indoor scenes, tailored to the application on a mobile robot. Semantic
segmentation can help robots to gain a reasonable understanding of their
environment, but to reach this goal, the algorithms not only need to be
accurate, but also fast and robust. Therefore, we developed an optimized 3D
point cloud processing framework based on a Randomized Decision Forest,
achieving competitive results at sufficiently high frame rates. We evaluate the
capabilities of our method on the popular NYU depth dataset and our own data
and demonstrate its feasibility by deploying it on a mobile service robot, for
which we could optimize an object search procedure using our results.
|
1404.5165
|
Kian Hsiang Low
|
Nuo Xu, Kian Hsiang Low, Jie Chen, Keng Kiat Lim, Etkin Baris Ozgul
|
GP-Localize: Persistent Mobile Robot Localization using Online Sparse
Gaussian Process Observation Model
|
28th AAAI Conference on Artificial Intelligence (AAAI 2014), Extended
version with proofs, 10 pages
| null | null | null |
cs.RO cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Central to robot exploration and mapping is the task of persistent
localization in environmental fields characterized by spatially correlated
measurements. This paper presents a Gaussian process localization (GP-Localize)
algorithm that, in contrast to existing works, can exploit the spatially
correlated field measurements taken during a robot's exploration (instead of
relying on prior training data) for efficiently and scalably learning the GP
observation model online through our proposed novel online sparse GP. As a
result, GP-Localize is capable of achieving constant time and memory (i.e.,
independent of the size of the data) per filtering step, which demonstrates the
practical feasibility of using GPs for persistent robot localization and
autonomy. Empirical evaluation via simulated experiments with real-world
datasets and a real robot experiment shows that GP-Localize outperforms
existing GP localization algorithms.
|
[
{
"version": "v1",
"created": "Mon, 21 Apr 2014 10:28:00 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Apr 2014 08:03:33 GMT"
}
] | 2014-04-23T00:00:00 |
[
[
"Xu",
"Nuo",
""
],
[
"Low",
"Kian Hsiang",
""
],
[
"Chen",
"Jie",
""
],
[
"Lim",
"Keng Kiat",
""
],
[
"Ozgul",
"Etkin Baris",
""
]
] |
TITLE: GP-Localize: Persistent Mobile Robot Localization using Online Sparse
Gaussian Process Observation Model
ABSTRACT: Central to robot exploration and mapping is the task of persistent
localization in environmental fields characterized by spatially correlated
measurements. This paper presents a Gaussian process localization (GP-Localize)
algorithm that, in contrast to existing works, can exploit the spatially
correlated field measurements taken during a robot's exploration (instead of
relying on prior training data) for efficiently and scalably learning the GP
observation model online through our proposed novel online sparse GP. As a
result, GP-Localize is capable of achieving constant time and memory (i.e.,
independent of the size of the data) per filtering step, which demonstrates the
practical feasibility of using GPs for persistent robot localization and
autonomy. Empirical evaluation via simulated experiments with real-world
datasets and a real robot experiment shows that GP-Localize outperforms
existing GP localization algorithms.
|
1210.5288
|
C. Seshadhri
|
Nurcan Durak and Tamara G. Kolda and Ali Pinar and C. Seshadhri
|
A Scalable Null Model for Directed Graphs Matching All Degree
Distributions: In, Out, and Reciprocal
|
Camera ready version for IEEE Workshop on Network Science; fixed some
typos in table
|
Proceedings of IEEE 2013 2nd International Network Science
Workshop (NSW 2013), pp. 22--30
| null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Degree distributions are arguably the most important properties of real-world
networks. The classic edge configuration model or Chung-Lu model can generate
an undirected graph with any desired degree distribution. This serves as a good
null model to compare algorithms or perform experimental studies. Furthermore,
there are scalable algorithms that implement these models and they are
invaluable in the study of graphs. However, networks in the real world are
often directed, and have a significant proportion of reciprocal edges. A
stronger relation exists between two nodes when they each point to one another
(reciprocal edge) as compared to when only one points to the other (one-way
edge). Despite their importance, reciprocal edges have been disregarded by most
directed graph models.
We propose a null model for directed graphs inspired by the Chung-Lu model
that matches the in-, out-, and reciprocal-degree distributions of the real
graphs. Our algorithm is scalable and requires $O(m)$ random numbers to
generate a graph with $m$ edges. We perform a series of experiments on real
datasets and compare with existing graph models.
|
[
{
"version": "v1",
"created": "Fri, 19 Oct 2012 00:28:05 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Feb 2013 23:28:49 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Mar 2013 19:41:43 GMT"
},
{
"version": "v4",
"created": "Thu, 25 Apr 2013 22:46:06 GMT"
}
] | 2014-04-22T00:00:00 |
[
[
"Durak",
"Nurcan",
""
],
[
"Kolda",
"Tamara G.",
""
],
[
"Pinar",
"Ali",
""
],
[
"Seshadhri",
"C.",
""
]
] |
TITLE: A Scalable Null Model for Directed Graphs Matching All Degree
Distributions: In, Out, and Reciprocal
ABSTRACT: Degree distributions are arguably the most important properties of real-world
networks. The classic edge configuration model or Chung-Lu model can generate
an undirected graph with any desired degree distribution. This serves as a good
null model to compare algorithms or perform experimental studies. Furthermore,
there are scalable algorithms that implement these models and they are
invaluable in the study of graphs. However, networks in the real world are
often directed, and have a significant proportion of reciprocal edges. A
stronger relation exists between two nodes when they each point to one another
(reciprocal edge) as compared to when only one points to the other (one-way
edge). Despite their importance, reciprocal edges have been disregarded by most
directed graph models.
We propose a null model for directed graphs inspired by the Chung-Lu model
that matches the in-, out-, and reciprocal-degree distributions of the real
graphs. Our algorithm is scalable and requires $O(m)$ random numbers to
generate a graph with $m$ edges. We perform a series of experiments on real
datasets and compare with existing graph models.
|
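For context, the basic "fast Chung-Lu" sampling idea that the paper builds on can be sketched in a few lines of NumPy; this toy version only matches expected in- and out-degrees and ignores the reciprocal-degree matching that is the paper's actual contribution. The function name and interface are hypothetical.

import numpy as np

def directed_chung_lu(out_deg, in_deg, seed=None):
    """Sample a directed graph whose expected in/out degrees match the inputs
    (endpoints drawn proportionally to degree, one random pair per edge)."""
    rng = np.random.default_rng(seed)
    out_deg = np.asarray(out_deg, dtype=float)
    in_deg = np.asarray(in_deg, dtype=float)
    m = int(round(out_deg.sum()))                        # number of edges to draw
    src = rng.choice(out_deg.size, size=m, p=out_deg / out_deg.sum())
    dst = rng.choice(in_deg.size, size=m, p=in_deg / in_deg.sum())
    return {(u, v) for u, v in zip(src, dst) if u != v}  # drop self-loops; duplicates collapse

# e.g. directed_chung_lu([2, 1, 0], [0, 1, 2], seed=0) returns a small random edge set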
1404.5214
|
Ping Li
|
Anshumali Shrivastava and Ping Li
|
Graph Kernels via Functional Embedding
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a representation of a graph as a functional object derived from the
power iteration of the underlying adjacency matrix. The proposed functional
representation is a graph invariant, i.e., the functional remains unchanged
under any reordering of the vertices. This property eliminates the difficulty
of handling exponentially many isomorphic forms. Bhattacharyya kernel
constructed between these functionals significantly outperforms the
state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph
classification datasets, demonstrating the superiority of our approach. The
proposed methodology is simple and runs in time linear in the number of edges,
which makes our kernel more efficient and scalable compared to many widely
adopted graph kernels with running time cubic in the number of vertices.
|
[
{
"version": "v1",
"created": "Mon, 21 Apr 2014 14:56:17 GMT"
}
] | 2014-04-22T00:00:00 |
[
[
"Shrivastava",
"Anshumali",
""
],
[
"Li",
"Ping",
""
]
] |
TITLE: Graph Kernels via Functional Embedding
ABSTRACT: We propose a representation of a graph as a functional object derived from the
power iteration of the underlying adjacency matrix. The proposed functional
representation is a graph invariant, i.e., the functional remains unchanged
under any reordering of the vertices. This property eliminates the difficulty
of handling exponentially many isomorphic forms. Bhattacharyya kernel
constructed between these functionals significantly outperforms the
state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph
classification datasets, demonstrating the superiority of our approach. The
proposed methodology is simple and runs in time linear in the number of edges,
which makes our kernel more efficient and scalable compared to many widely
adopted graph kernels with running time cubic in the number of vertices.
|
1404.4644
|
Ping Li
|
Anshumali Shrivastava and Ping Li
|
A New Space for Comparing Graphs
| null | null | null | null |
stat.ME cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding new mathematical representations for graphs that allow direct
comparison between different graph structures is an open-ended research
direction. Having such a representation is the first prerequisite for a variety
of machine learning algorithms like classification, clustering, etc., over
graph datasets. In this paper, we propose a symmetric positive semidefinite
matrix with the $(i,j)$-{th} entry equal to the covariance between normalized
vectors $A^ie$ and $A^je$ ($e$ being the vector of all ones) as a representation
for a graph with adjacency matrix $A$. We show that the proposed matrix
representation encodes the spectrum of the underlying adjacency matrix and it
also contains information about the counts of small sub-structures present in
the graph such as triangles and small paths. In addition, we show that this
matrix is a \emph{"graph invariant"}. All these properties make the proposed
matrix a suitable object for representing graphs.
The representation, being a covariance matrix in a fixed dimensional metric
space, gives a mathematical embedding for graphs. This naturally leads to a
measure of similarity on graph objects. We define similarity between two given
graphs as a Bhattacharyya similarity measure between their corresponding
covariance matrix representations. As shown in our experimental study on the
task of social network classification, such a similarity measure outperforms
other widely used state-of-the-art methodologies. Our proposed method is also
computationally efficient. The computation of both the matrix representation
and the similarity value can be performed in operations linear in the number of
edges. This makes our method scalable in practice.
We believe our theoretical and empirical results provide evidence for
studying truncated power iterations of the adjacency matrix to characterize
social networks.
|
[
{
"version": "v1",
"created": "Thu, 17 Apr 2014 20:39:24 GMT"
}
] | 2014-04-21T00:00:00 |
[
[
"Shrivastava",
"Anshumali",
""
],
[
"Li",
"Ping",
""
]
] |
TITLE: A New Space for Comparing Graphs
ABSTRACT: Finding new mathematical representations for graphs that allow direct
comparison between different graph structures is an open-ended research
direction. Having such a representation is the first prerequisite for a variety
of machine learning algorithms like classification, clustering, etc., over
graph datasets. In this paper, we propose a symmetric positive semidefinite
matrix with the $(i,j)$-{th} entry equal to the covariance between normalized
vectors $A^ie$ and $A^je$ ($e$ being the vector of all ones) as a representation
for a graph with adjacency matrix $A$. We show that the proposed matrix
representation encodes the spectrum of the underlying adjacency matrix and it
also contains information about the counts of small sub-structures present in
the graph such as triangles and small paths. In addition, we show that this
matrix is a \emph{"graph invariant"}. All these properties make the proposed
matrix a suitable object for representing graphs.
The representation, being a covariance matrix in a fixed dimensional metric
space, gives a mathematical embedding for graphs. This naturally leads to a
measure of similarity on graph objects. We define similarity between two given
graphs as a Bhattacharyya similarity measure between their corresponding
covariance matrix representations. As shown in our experimental study on the
task of social network classification, such a similarity measure outperforms
other widely used state-of-the-art methodologies. Our proposed method is also
computationally efficient. The computation of both the matrix representation
and the similarity value can be performed in operations linear in the number of
edges. This makes our method scalable in practice.
We believe our theoretical and empirical results provide evidence for
studying truncated power iterations of the adjacency matrix to characterize
social networks.
|
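A small NumPy sketch of the construction described in the abstract, under the stated definitions: build the normalized iterates A^i e, take their covariance matrix as the graph representation, and compare two graphs with a Bhattacharyya coefficient between zero-mean Gaussians. The iteration depth k and the small ridge term are assumptions for numerical stability, not values from the paper.

import numpy as np

def graph_covariance(A, k=5, eps=1e-8):
    """k x k covariance between the normalized power iterations A^1 e, ..., A^k e."""
    v = np.ones(A.shape[0])
    iterates = []
    for _ in range(k):
        v = A @ v
        v = v / (np.linalg.norm(v) + eps)
        iterates.append(v)
    return np.cov(np.stack(iterates)) + eps * np.eye(k)   # ridge keeps determinants stable

def bhattacharyya_similarity(C1, C2):
    """Bhattacharyya coefficient between zero-mean Gaussians with covariances C1, C2."""
    Cm = 0.5 * (C1 + C2)
    return (np.linalg.det(C1) ** 0.25 * np.linalg.det(C2) ** 0.25
            / np.sqrt(np.linalg.det(Cm)))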
1404.4800
|
Ayushi Sinha
|
Ayushi Sinha, William Gray Roncal, Narayanan Kasthuri, Ming Chuang,
Priya Manavalan, Dean M. Kleissas, Joshua T. Vogelstein, R. Jacob Vogelstein,
Randal Burns, Jeff W. Lichtman, Michael Kazhdan
|
Automatic Annotation of Axoplasmic Reticula in Pursuit of Connectomes
|
2 pages, 1 figure
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a new pipeline which automatically identifies and
annotates axoplasmic reticula, which are small subcellular structures present
only in axons. We run our algorithm on the Kasthuri11 dataset, which was color
corrected using gradient-domain techniques to adjust contrast. We use a
bilateral filter to smooth out the noise in this data while preserving edges,
which highlights axoplasmic reticula. These axoplasmic reticula are then
annotated using a morphological region growing algorithm. Additionally, we
perform Laplacian sharpening on the bilaterally filtered data to enhance edges,
and repeat the morphological region growing algorithm to annotate more
axoplasmic reticula. We track our annotations through the slices to improve
precision, and to create long objects to aid in segment merging. This method
annotates axoplasmic reticula with high precision. Our algorithm can easily be
adapted to annotate axoplasmic reticula in different sets of brain data by
changing a few thresholds. The contribution of this work is the introduction of
a straightforward and robust pipeline which annotates axoplasmic reticula with
high precision, contributing towards advancements in automatic feature
annotations in neural EM data.
|
[
{
"version": "v1",
"created": "Wed, 16 Apr 2014 20:09:37 GMT"
}
] | 2014-04-21T00:00:00 |
[
[
"Sinha",
"Ayushi",
""
],
[
"Roncal",
"William Gray",
""
],
[
"Kasthuri",
"Narayanan",
""
],
[
"Chuang",
"Ming",
""
],
[
"Manavalan",
"Priya",
""
],
[
"Kleissas",
"Dean M.",
""
],
[
"Vogelstein",
"Joshua T.",
""
],
[
"Vogelstein",
"R. Jacob",
""
],
[
"Burns",
"Randal",
""
],
[
"Lichtman",
"Jeff W.",
""
],
[
"Kazhdan",
"Michael",
""
]
] |
TITLE: Automatic Annotation of Axoplasmic Reticula in Pursuit of Connectomes
ABSTRACT: In this paper, we present a new pipeline which automatically identifies and
annotates axoplasmic reticula, which are small subcellular structures present
only in axons. We run our algorithm on the Kasthuri11 dataset, which was color
corrected using gradient-domain techniques to adjust contrast. We use a
bilateral filter to smooth out the noise in this data while preserving edges,
which highlights axoplasmic reticula. These axoplasmic reticula are then
annotated using a morphological region growing algorithm. Additionally, we
perform Laplacian sharpening on the bilaterally filtered data to enhance edges,
and repeat the morphological region growing algorithm to annotate more
axoplasmic reticula. We track our annotations through the slices to improve
precision, and to create long objects to aid in segment merging. This method
annotates axoplasmic reticula with high precision. Our algorithm can easily be
adapted to annotate axoplasmic reticula in different sets of brain data by
changing a few thresholds. The contribution of this work is the introduction of
a straightforward and robust pipeline which annotates axoplasmic reticula with
high precision, contributing towards advancements in automatic feature
annotations in neural EM data.
|
1404.4038
|
Grigorios Tsoumakas
|
Christina Papagiannopoulou, Grigorios Tsoumakas, Ioannis Tsamardinos
|
Discovering and Exploiting Entailment Relationships in Multi-Label
Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a sound probabilistic method for enforcing adherence of
the marginal probabilities of a multi-label model to automatically discovered
deterministic relationships among labels. In particular we focus on discovering
two kinds of relationships among the labels. The first one concerns pairwise
positive entailment: pairs of labels where the presence of one implies the
presence of the other in all instances of a dataset. The second concerns
exclusion: sets of labels that do not coexist in the same instances of the
dataset. These relationships are represented with a Bayesian network. Marginal
probabilities are entered as soft evidence in the network and adjusted through
probabilistic inference. Our approach offers robust improvements in mean
average precision compared to the standard binary relevance approach across all
12 datasets involved in our experiments. The discovery process helps
interesting implicit knowledge to emerge, which could be useful in itself.
|
[
{
"version": "v1",
"created": "Tue, 15 Apr 2014 19:47:15 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Apr 2014 16:05:57 GMT"
}
] | 2014-04-18T00:00:00 |
[
[
"Papagiannopoulou",
"Christina",
""
],
[
"Tsoumakas",
"Grigorios",
""
],
[
"Tsamardinos",
"Ioannis",
""
]
] |
TITLE: Discovering and Exploiting Entailment Relationships in Multi-Label
Learning
ABSTRACT: This work presents a sound probabilistic method for enforcing adherence of
the marginal probabilities of a multi-label model to automatically discovered
deterministic relationships among labels. In particular we focus on discovering
two kinds of relationships among the labels. The first one concerns pairwise
positive entailment: pairs of labels where the presence of one implies the
presence of the other in all instances of a dataset. The second concerns
exclusion: sets of labels that do not coexist in the same instances of the
dataset. These relationships are represented with a Bayesian network. Marginal
probabilities are entered as soft evidence in the network and adjusted through
probabilistic inference. Our approach offers robust improvements in mean
average precision compared to the standard binary relevance approach across all
12 datasets involved in our experiments. The discovery process helps
interesting implicit knowledge to emerge, which could be useful in itself.
|
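The discovery step described in the abstract (pairwise positive entailment and label exclusion) reduces to simple checks on a binary label matrix; a minimal NumPy sketch is below. It covers only the discovery of the relationships, not the Bayesian-network construction or the probabilistic adjustment of the marginals.

import numpy as np

def discover_relationships(Y):
    """Y: (n_instances, n_labels) binary matrix.
    Returns pairwise entailments (i -> j) and mutually exclusive label pairs."""
    Y = np.asarray(Y, dtype=bool)
    n_labels = Y.shape[1]
    entails, excludes = [], []
    for i in range(n_labels):
        for j in range(n_labels):
            if i != j and Y[:, i].any() and np.all(Y[:, i] <= Y[:, j]):
                entails.append((i, j))       # presence of i implies presence of j
    for i in range(n_labels):
        for j in range(i + 1, n_labels):
            if not np.any(Y[:, i] & Y[:, j]):
                excludes.append((i, j))      # the two labels never co-occur
    return entails, excludes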
1403.4640
|
Nabeel Gillani
|
Nabeel Gillani, Rebecca Eynon, Michael Osborne, Isis Hjorth, Stephen
Roberts
|
Communication Communities in MOOCs
|
10 pages, 3 figures, 1 table. Submitted for review to UAI 2014
| null | null | null |
cs.CY cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive Open Online Courses (MOOCs) bring together thousands of people from
different geographies and demographic backgrounds -- but to date, little is
known about how they learn or communicate. We introduce a new content-analysed
MOOC dataset and use Bayesian Non-negative Matrix Factorization (BNMF) to
extract communities of learners based on the nature of their online forum
posts. We see that BNMF yields a superior probabilistic generative model for
online discussions when compared to other models, and that the communities it
learns are differentiated by their composite students' demographic and course
performance indicators. These findings suggest that computationally efficient
probabilistic generative modelling of MOOCs can reveal important insights for
educational researchers and practitioners and help to develop more intelligent
and responsive online learning environments.
|
[
{
"version": "v1",
"created": "Tue, 18 Mar 2014 22:57:24 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Apr 2014 15:50:48 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Gillani",
"Nabeel",
""
],
[
"Eynon",
"Rebecca",
""
],
[
"Osborne",
"Michael",
""
],
[
"Hjorth",
"Isis",
""
],
[
"Roberts",
"Stephen",
""
]
] |
TITLE: Communication Communities in MOOCs
ABSTRACT: Massive Open Online Courses (MOOCs) bring together thousands of people from
different geographies and demographic backgrounds -- but to date, little is
known about how they learn or communicate. We introduce a new content-analysed
MOOC dataset and use Bayesian Non-negative Matrix Factorization (BNMF) to
extract communities of learners based on the nature of their online forum
posts. We see that BNMF yields a superior probabilistic generative model for
online discussions when compared to other models, and that the communities it
learns are differentiated by their composite students' demographic and course
performance indicators. These findings suggest that computationally efficient
probabilistic generative modelling of MOOCs can reveal important insights for
educational researchers and practitioners and help to develop more intelligent
and responsive online learning environments.
|
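As a rough stand-in for the pipeline (the paper uses Bayesian NMF; scikit-learn only ships standard NMF), the following toy sketch factorizes a small post-term count matrix and reads off hard community assignments. The example posts and the number of components are made up for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

posts = [                        # toy forum posts; real data would be the MOOC forum corpus
    "stuck on assignment two, the grader keeps failing",
    "great lecture on linear regression this week",
    "anyone else confused by the peer assessment rubric",
    "the regression assignment finally passed the grader",
]
X = CountVectorizer(stop_words="english").fit_transform(posts)
nmf = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(X)         # post-by-community loadings
H = nmf.components_              # community-by-term weights
communities = W.argmax(axis=1)   # hard community assignment per post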
1404.1377
|
Zheng Wang
|
Zheng Wang, Ming-Jun Lai, Zhaosong Lu, Wei Fan, Hasan Davulcu and
Jieping Ye
|
Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion
| null | null | null | null |
cs.LG math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an efficient and scalable low rank matrix
completion algorithm. The key idea is to extend orthogonal matching pursuit
method from the vector case to the matrix case. We further propose an economic
version of our algorithm by introducing a novel weight updating rule to reduce
the time and storage complexity. Both versions are computationally inexpensive
for each matrix pursuit iteration, and find satisfactory results in a few
iterations. Another advantage of our proposed algorithm is that it has only one
tunable parameter, the rank, which makes it easy to understand and use. This
becomes especially important in large-scale learning problems.
In addition, we rigorously show that both versions achieve a linear convergence
rate, which is significantly better than previously known results. We also
empirically compare the proposed algorithms with several state-of-the-art
matrix completion algorithms on many real-world datasets, including the
large-scale recommendation dataset Netflix as well as the MovieLens datasets.
Numerical results show that our proposed algorithm is more efficient than
competing algorithms while achieving similar or better prediction performance.
|
[
{
"version": "v1",
"created": "Fri, 4 Apr 2014 20:00:30 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Apr 2014 19:09:09 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Wang",
"Zheng",
""
],
[
"Lai",
"Ming-Jun",
""
],
[
"Lu",
"Zhaosong",
""
],
[
"Fan",
"Wei",
""
],
[
"Davulcu",
"Hasan",
""
],
[
"Ye",
"Jieping",
""
]
] |
TITLE: Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion
ABSTRACT: In this paper, we propose an efficient and scalable low rank matrix
completion algorithm. The key idea is to extend orthogonal matching pursuit
method from the vector case to the matrix case. We further propose an economic
version of our algorithm by introducing a novel weight updating rule to reduce
the time and storage complexity. Both versions are computationally inexpensive
for each matrix pursuit iteration, and find satisfactory results in a few
iterations. Another advantage of our proposed algorithm is that it has only one
tunable parameter, the rank, which makes it easy to understand and use. This
becomes especially important in large-scale learning problems.
In addition, we rigorously show that both versions achieve a linear convergence
rate, which is significantly better than previously known results. We also
empirically compare the proposed algorithms with several state-of-the-art
matrix completion algorithms on many real-world datasets, including the
large-scale recommendation dataset Netflix as well as the MovieLens datasets.
Numerical results show that our proposed algorithm is more efficient than
competing algorithms while achieving similar or better prediction performance.
|
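A dense toy sketch of the basic rank-one matrix pursuit loop described in the abstract: at each step take the top singular pair of the observed residual as a new rank-one basis, then refit all basis weights by least squares on the observed entries. This is the straightforward (non-economic) variant and is only meant for small dense matrices; the paper's implementation works with sparse data and a cheaper weight update.

import numpy as np

def rank_one_matrix_pursuit(Y, mask, rank=10):
    """Y: observed matrix (entries outside mask ignored); mask: boolean array."""
    X = np.zeros_like(Y, dtype=float)
    bases = []
    for _ in range(rank):
        R = np.where(mask, Y - X, 0.0)                     # residual on observed entries
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        bases.append(np.outer(U[:, 0], Vt[0]))             # top rank-one basis of the residual
        A = np.stack([B[mask] for B in bases], axis=1)     # observed entries of each basis
        theta, *_ = np.linalg.lstsq(A, Y[mask], rcond=None)
        X = sum(t * B for t, B in zip(theta, bases))       # current low-rank estimate
    return X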
1404.3543
|
Ping Luo
|
Zhenyao Zhu and Ping Luo and Xiaogang Wang and Xiaoou Tang
|
Recover Canonical-View Faces in the Wild with Deep Neural Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face images in the wild undergo large intra-personal variations, such as
poses, illuminations, occlusions, and low resolutions, which cause great
challenges to face-related applications. This paper addresses this challenge by
proposing a new deep learning framework that can recover the canonical view of
face images. It dramatically reduces the intra-person variances, while
maintaining the inter-person discriminativeness. Unlike the existing face
reconstruction methods that were either evaluated in controlled 2D environments
or employed 3D information, our approach directly learns the transformation
from the face images with a complex set of variations to their canonical views.
At the training stage, to avoid the costly process of labeling canonical-view
images from the training set by hand, we have devised a new measurement to
automatically select or synthesize a canonical-view image for each identity. As
an application, this face recovery approach is used for face verification.
Facial features are learned from the recovered canonical-view face images by
using a facial component-based convolutional neural network. Our approach
achieves the state-of-the-art performance on the LFW dataset.
|
[
{
"version": "v1",
"created": "Mon, 14 Apr 2014 11:32:17 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Apr 2014 04:35:34 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Zhu",
"Zhenyao",
""
],
[
"Luo",
"Ping",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Tang",
"Xiaoou",
""
]
] |
TITLE: Recover Canonical-View Faces in the Wild with Deep Neural Networks
ABSTRACT: Face images in the wild undergo large intra-personal variations, such as
poses, illuminations, occlusions, and low resolutions, which cause great
challenges to face-related applications. This paper addresses this challenge by
proposing a new deep learning framework that can recover the canonical view of
face images. It dramatically reduces the intra-person variances, while
maintaining the inter-person discriminativeness. Unlike the existing face
reconstruction methods that were either evaluated in controlled 2D environments
or employed 3D information, our approach directly learns the transformation
from the face images with a complex set of variations to their canonical views.
At the training stage, to avoid the costly process of labeling canonical-view
images from the training set by hand, we have devised a new measurement to
automatically select or synthesize a canonical-view image for each identity. As
an application, this face recovery approach is used for face verification.
Facial features are learned from the recovered canonical-view face images by
using a facial component-based convolutional neural network. Our approach
achieves the state-of-the-art performance on the LFW dataset.
|
1404.4171
|
Ning Chen
|
Ning Chen, Jun Zhu, Jianfei Chen, Bo Zhang
|
Dropout Training for Support Vector Machines
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dropout and other feature noising schemes have shown promising results in
controlling over-fitting by artificially corrupting the training data. Though
extensive theoretical and empirical studies have been performed for generalized
linear models, little work has been done for support vector machines (SVMs),
one of the most successful approaches for supervised learning. This paper
presents dropout training for linear SVMs. To deal with the intractable
expectation of the non-smooth hinge loss under corrupting distributions, we
develop an iteratively re-weighted least squares (IRLS) algorithm by exploring
data augmentation techniques. Our algorithm iteratively minimizes the
expectation of a re-weighted least squares problem, where the re-weights have
closed-form solutions. Similar ideas are applied to develop a new IRLS
algorithm for the expected logistic loss under corrupting distributions. Our
algorithms offer insights on the connection and difference between the hinge
loss and logistic loss in dropout training. Empirical results on several real
datasets demonstrate the effectiveness of dropout training on significantly
boosting the classification accuracy of linear SVMs.
|
[
{
"version": "v1",
"created": "Wed, 16 Apr 2014 08:54:01 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Chen",
"Ning",
""
],
[
"Zhu",
"Jun",
""
],
[
"Chen",
"Jianfei",
""
],
[
"Zhang",
"Bo",
""
]
] |
TITLE: Dropout Training for Support Vector Machines
ABSTRACT: Dropout and other feature noising schemes have shown promising results in
controlling over-fitting by artificially corrupting the training data. Though
extensive theoretical and empirical studies have been performed for generalized
linear models, little work has been done for support vector machines (SVMs),
one of the most successful approaches for supervised learning. This paper
presents dropout training for linear SVMs. To deal with the intractable
expectation of the non-smooth hinge loss under corrupting distributions, we
develop an iteratively re-weighted least squares (IRLS) algorithm by exploring
data augmentation techniques. Our algorithm iteratively minimizes the
expectation of a re-weighted least squares problem, where the re-weights have
closed-form solutions. Similar ideas are applied to develop a new IRLS
algorithm for the expected logistic loss under corrupting distributions. Our
algorithms offer insights on the connection and difference between the hinge
loss and logistic loss in dropout training. Empirical results on several real
datasets demonstrate the effectiveness of dropout training on significantly
boosting the classification accuracy of linear SVMs.
|
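The sketch below is not the paper's IRLS algorithm; it is a simple Monte Carlo stand-in that trains a linear SVM with Pegasos-style subgradient steps on explicitly dropout-corrupted inputs, which approximates the expected hinge loss that the paper marginalizes in closed form. All hyperparameters are illustrative.

import numpy as np

def dropout_linear_svm(X, y, p=0.5, lam=1e-3, lr=0.01, epochs=30, seed=0):
    """Subgradient training of a linear SVM (y in {-1, +1}) on dropout-noised inputs."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            keep = rng.binomial(1, 1.0 - p, size=d)
            x = X[i] * keep / (1.0 - p)                    # unbiased dropout corruption
            if y[i] * (x @ w + b) < 1.0:                   # hinge subgradient is active
                w -= lr * (lam * w - y[i] * x)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b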
1404.4175
|
Emanuele Olivetti
|
Emanuele Olivetti, Seyed Mostafa Kia, Paolo Avesani
|
MEG Decoding Across Subjects
| null | null | null | null |
stat.ML cs.LG q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. In order to make inference at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier on
the trials of a group of subjects and then to test it on unseen trials from new
subjects. The extreme difficulty is related to the structural and functional
variability across the subjects. We call this approach "decoding across
subjects". In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose to use a simple TTL technique that accounts for the
differences between train data and test data. Third, we propose the use of
ensemble learning, and specifically of stacked generalization, to address the
variability across subjects within train data, with the aim of producing more
stable classifiers. On a face vs. scramble task MEG dataset of 16 subjects, we
compare the standard approach of not modelling the differences across subjects,
to the proposed one of combining TTL and ensemble learning. We show that the
proposed approach is consistently more accurate than the standard one.
|
[
{
"version": "v1",
"created": "Wed, 16 Apr 2014 09:21:26 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Olivetti",
"Emanuele",
""
],
[
"Kia",
"Seyed Mostafa",
""
],
[
"Avesani",
"Paolo",
""
]
] |
TITLE: MEG Decoding Across Subjects
ABSTRACT: Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. In order to make inference at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier on
the trials of a group of subjects and then to test it on unseen trials from new
subjects. The extreme difficulty is related to the structural and functional
variability across the subjects. We call this approach "decoding across
subjects". In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose to use a simple TTL technique that accounts for the
differences between train data and test data. Third, we propose the use of
ensemble learning, and specifically of stacked generalization, to address the
variability across subjects within train data, with the aim of producing more
stable classifiers. On a face vs. scramble task MEG dataset of 16 subjects, we
compare the standard approach of not modelling the differences across subjects,
to the proposed one of combining TTL and ensemble learning. We show that the
proposed approach is consistently more accurate than the standard one.
|
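A toy scikit-learn sketch of the stacked-generalization part of the proposal: one base classifier per training subject and a logistic-regression stacker over their predicted probabilities. It omits the transductive transfer-learning correction for train/test differences, and a careful implementation would build the meta-features leave-one-subject-out; the function name and choice of base model are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def decode_across_subjects(train_by_subject, X_test):
    """train_by_subject: list of (X_subject, y_subject) pairs for a binary task."""
    bases = [LogisticRegression(max_iter=1000).fit(Xs, ys)
             for Xs, ys in train_by_subject]
    meta_X, meta_y = [], []
    for Xs, ys in train_by_subject:
        # NOTE: a careful implementation would exclude each subject's own base
        # learner here (leave-one-subject-out) to avoid optimistic meta-features.
        meta_X.append(np.column_stack([c.predict_proba(Xs)[:, 1] for c in bases]))
        meta_y.append(ys)
    stacker = LogisticRegression(max_iter=1000).fit(np.vstack(meta_X),
                                                    np.concatenate(meta_y))
    Z_test = np.column_stack([c.predict_proba(X_test)[:, 1] for c in bases])
    return stacker.predict(Z_test)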
1404.4316
|
Xiaoyu Wang
|
Will Y. Zou, Xiaoyu Wang, Miao Sun, Yuanqing Lin
|
Generic Object Detection With Dense Neural Patterns and Regionlets
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the challenge of establishing a bridge between deep
convolutional neural networks and conventional object detection frameworks for
accurate and efficient generic object detection. We introduce Dense Neural
Patterns, short for DNPs, which are dense local features derived from
discriminatively trained deep convolutional neural networks. DNPs can be easily
plugged into conventional detection frameworks in the same way as other dense
local features (like HOG or LBP). The effectiveness of the proposed approach is
demonstrated with the Regionlets object detection framework. It achieved 46.1%
mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL
VOC 2010 dataset, which dramatically improves the original Regionlets approach
without DNPs.
|
[
{
"version": "v1",
"created": "Wed, 16 Apr 2014 17:23:47 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Zou",
"Will Y.",
""
],
[
"Wang",
"Xiaoyu",
""
],
[
"Sun",
"Miao",
""
],
[
"Lin",
"Yuanqing",
""
]
] |
TITLE: Generic Object Detection With Dense Neural Patterns and Regionlets
ABSTRACT: This paper addresses the challenge of establishing a bridge between deep
convolutional neural networks and conventional object detection frameworks for
accurate and efficient generic object detection. We introduce Dense Neural
Patterns, short for DNPs, which are dense local features derived from
discriminatively trained deep convolutional neural networks. DNPs can be easily
plugged into conventional detection frameworks in the same way as other dense
local features (like HOG or LBP). The effectiveness of the proposed approach is
demonstrated with the Regionlets object detection framework. It achieved 46.1%
mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL
VOC 2010 dataset, which dramatically improves the original Regionlets approach
without DNPs.
|
1404.4351
|
Navodit Misra
|
Navodit Misra and Ercan E. Kuruoglu
|
Stable Graphical Models
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stable random variables are motivated by the central limit theorem for
densities with (potentially) unbounded variance and can be thought of as
natural generalizations of the Gaussian distribution to skewed and heavy-tailed
phenomena. In this paper, we introduce stable graphical (SG) models, a class
of multivariate stable densities that can also be represented as Bayesian
networks whose edges encode linear dependencies between random variables. One
major hurdle to the extensive use of stable distributions is the lack of a
closed-form analytical expression for their densities. This makes penalized
maximum-likelihood based learning computationally demanding. We establish
theoretically that the Bayesian information criterion (BIC) can asymptotically
be reduced to the computationally more tractable minimum dispersion criterion
(MDC) and develop StabLe, a structure learning algorithm based on MDC. We use
simulated datasets for five benchmark network topologies to empirically
demonstrate how StabLe improves upon ordinary least squares (OLS) regression.
We also apply StabLe to microarray gene expression data for lymphoblastoid
cells from 727 individuals belonging to eight global population groups. We
establish that StabLe improves test set performance relative to OLS via
ten-fold cross-validation. Finally, we develop SGEX, a method for quantifying
differential expression of genes between different population groups.
|
[
{
"version": "v1",
"created": "Wed, 16 Apr 2014 19:12:47 GMT"
}
] | 2014-04-17T00:00:00 |
[
[
"Misra",
"Navodit",
""
],
[
"Kuruoglu",
"Ercan E.",
""
]
] |
TITLE: Stable Graphical Models
ABSTRACT: Stable random variables are motivated by the central limit theorem for
densities with (potentially) unbounded variance and can be thought of as
natural generalizations of the Gaussian distribution to skewed and heavy-tailed
phenomena. In this paper, we introduce stable graphical (SG) models, a class
of multivariate stable densities that can also be represented as Bayesian
networks whose edges encode linear dependencies between random variables. One
major hurdle to the extensive use of stable distributions is the lack of a
closed-form analytical expression for their densities. This makes penalized
maximum-likelihood based learning computationally demanding. We establish
theoretically that the Bayesian information criterion (BIC) can asymptotically
be reduced to the computationally more tractable minimum dispersion criterion
(MDC) and develop StabLe, a structure learning algorithm based on MDC. We use
simulated datasets for five benchmark network topologies to empirically
demonstrate how StabLe improves upon ordinary least squares (OLS) regression.
We also apply StabLe to microarray gene expression data for lymphoblastoid
cells from 727 individuals belonging to eight global population groups. We
establish that StabLe improves test set performance relative to OLS via
ten-fold cross-validation. Finally, we develop SGEX, a method for quantifying
differential expression of genes between different population groups.
|
1302.6309
|
Neil Zhenqiang Gong
|
Neil Zhenqiang Gong and Wenchang Xu
|
Reciprocal versus Parasocial Relationships in Online Social Networks
|
Social Network Analysis and Mining, Springer, 2014
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many online social networks are fundamentally directed, i.e., they consist of
both reciprocal edges (i.e., edges that have already been linked back) and
parasocial edges (i.e., edges that haven't been linked back). Thus,
understanding the structures and evolutions of reciprocal edges and parasocial
ones, exploring the factors that influence parasocial edges to become
reciprocal ones, and predicting whether a parasocial edge will turn into a
reciprocal one are basic research problems.
However, there have been few systematic studies about such problems. In this
paper, we bridge this gap using a novel large-scale Google+ dataset crawled by
ourselves as well as one publicly available social network dataset. First, we
compare the structures and evolutions of reciprocal edges and those of
parasocial edges. For instance, we find that reciprocal edges are more likely
to connect users with similar degrees while parasocial edges are more likely to
link ordinary users (e.g., users with low degrees) and popular users (e.g.,
celebrities). However, the impacts of reciprocal edges linking ordinary and
popular users on the network structures increase slowly as the social networks
evolve. Second, we observe that factors including user behaviors, node
attributes, and edge attributes all have significant impacts on the formation
of reciprocal edges. Third, in contrast to previous studies that treat
reciprocal edge prediction as either a supervised or a semi-supervised learning
problem, we identify that reciprocal edge prediction is better modeled as an
outlier detection problem. Finally, we perform extensive evaluations with the
two datasets, and we show that our proposal outperforms previous reciprocal
edge prediction approaches.
|
[
{
"version": "v1",
"created": "Tue, 26 Feb 2013 04:18:21 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Dec 2013 14:31:22 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2014 03:06:02 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Apr 2014 03:38:46 GMT"
}
] | 2014-04-16T00:00:00 |
[
[
"Gong",
"Neil Zhenqiang",
""
],
[
"Xu",
"Wenchang",
""
]
] |
TITLE: Reciprocal versus Parasocial Relationships in Online Social Networks
ABSTRACT: Many online social networks are fundamentally directed, i.e., they consist of
both reciprocal edges (i.e., edges that have already been linked back) and
parasocial edges (i.e., edges that haven't been linked back). Thus,
understanding the structures and evolutions of reciprocal edges and parasocial
ones, exploring the factors that influence parasocial edges to become
reciprocal ones, and predicting whether a parasocial edge will turn into a
reciprocal one are basic research problems.
However, there have been few systematic studies about such problems. In this
paper, we bridge this gap using a novel large-scale Google+ dataset crawled by
ourselves as well as one publicly available social network dataset. First, we
compare the structures and evolutions of reciprocal edges and those of
parasocial edges. For instance, we find that reciprocal edges are more likely
to connect users with similar degrees while parasocial edges are more likely to
link ordinary users (e.g., users with low degrees) and popular users (e.g.,
celebrities). However, the impacts of reciprocal edges linking ordinary and
popular users on the network structures increase slowly as the social networks
evolve. Second, we observe that factors including user behaviors, node
attributes, and edge attributes all have significant impacts on the formation
of reciprocal edges. Third, in contrast to previous studies that treat
reciprocal edge prediction as either a supervised or a semi-supervised learning
problem, we identify that reciprocal edge prediction is better modeled as an
outlier detection problem. Finally, we perform extensive evaluations with the
two datasets, and we show that our proposal outperforms previous reciprocal
edge prediction approaches.
|
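The basic distinction the abstract relies on, splitting a directed edge set into reciprocal and parasocial edges, is worth making explicit; the helper below is a hypothetical illustration, not the authors' code.

def split_edges(edges):
    """Partition directed edges into reciprocal (linked back) and parasocial ones."""
    edge_set = set(edges)
    reciprocal = {(u, v) for (u, v) in edge_set if (v, u) in edge_set}
    return reciprocal, edge_set - reciprocal

# e.g. split_edges([(1, 2), (2, 1), (1, 3)]) -> ({(1, 2), (2, 1)}, {(1, 3)})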
1312.4894
|
Yangqing Jia
|
Yunchao Gong, Yangqing Jia, Thomas Leung, Alexander Toshev, Sergey
Ioffe
|
Deep Convolutional Ranking for Multilabel Image Annotation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multilabel image annotation is one of the most important challenges in
computer vision with many real-world applications. While existing work usually
uses conventional visual features for multilabel annotation, features based on
Deep Neural Networks have shown potential to significantly boost performance.
In this work, we propose to leverage the advantage of such features and analyze
key components that lead to better performances. Specifically, we show that a
significant performance gain could be obtained by combining convolutional
architectures with approximate top-$k$ ranking objectives, as they naturally
fit the multilabel tagging problem. On the NUS-WIDE dataset, our approach
outperforms conventional visual features by about 10%, obtaining the best
reported performance in the literature.
|
[
{
"version": "v1",
"created": "Tue, 17 Dec 2013 19:00:50 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Apr 2014 19:21:13 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Gong",
"Yunchao",
""
],
[
"Jia",
"Yangqing",
""
],
[
"Leung",
"Thomas",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Ioffe",
"Sergey",
""
]
] |
TITLE: Deep Convolutional Ranking for Multilabel Image Annotation
ABSTRACT: Multilabel image annotation is one of the most important challenges in
computer vision with many real-world applications. While existing work usually
uses conventional visual features for multilabel annotation, features based on
Deep Neural Networks have shown potential to significantly boost performance.
In this work, we propose to leverage the advantage of such features and analyze
key components that lead to better performances. Specifically, we show that a
significant performance gain could be obtained by combining convolutional
architectures with approximate top-$k$ ranking objectives, as they naturally
fit the multilabel tagging problem. On the NUS-WIDE dataset, our approach
outperforms conventional visual features by about 10%, obtaining the best
reported performance in the literature.
|
1312.6082
|
Julian Ibarz
|
Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, Vinay
Shet
|
Multi-digit Number Recognition from Street View Imagery using Deep
Convolutional Neural Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing arbitrary multi-character text in unconstrained natural
photographs is a hard problem. In this paper, we address an equally hard
sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from
Street View imagery. Traditional approaches to solve this problem typically
separate out the localization, segmentation, and recognition steps. In this
paper we propose a unified approach that integrates these three steps via the
use of a deep convolutional neural network that operates directly on the image
pixels. We employ the DistBelief implementation of deep neural networks in
order to train large, distributed neural networks on high quality images. We
find that the performance of this approach increases with the depth of the
convolutional network, with the best performance occurring in the deepest
architecture we trained, with eleven hidden layers. We evaluate this approach
on the publicly available SVHN dataset and achieve over $96\%$ accuracy in
recognizing complete street numbers. We show that on a per-digit recognition
task, we improve upon the state-of-the-art, achieving $97.84\%$ accuracy. We
also evaluate this approach on an even more challenging dataset generated from
Street View imagery containing several tens of millions of street number
annotations and achieve over $90\%$ accuracy. To further explore the
applicability of the proposed system to broader text recognition tasks, we
apply it to synthetic distorted text from reCAPTCHA. reCAPTCHA is one of the
most secure reverse Turing tests that uses distorted text to distinguish humans
from bots. We report a $99.8\%$ accuracy on the hardest category of reCAPTCHA.
Our evaluations on both tasks indicate that at specific operating thresholds,
the performance of the proposed system is comparable to, and in some cases
exceeds, that of human operators.
|
[
{
"version": "v1",
"created": "Fri, 20 Dec 2013 19:25:44 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jan 2014 14:29:59 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2014 22:40:47 GMT"
},
{
"version": "v4",
"created": "Mon, 14 Apr 2014 05:25:54 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Goodfellow",
"Ian J.",
""
],
[
"Bulatov",
"Yaroslav",
""
],
[
"Ibarz",
"Julian",
""
],
[
"Arnoud",
"Sacha",
""
],
[
"Shet",
"Vinay",
""
]
] |
TITLE: Multi-digit Number Recognition from Street View Imagery using Deep
Convolutional Neural Networks
ABSTRACT: Recognizing arbitrary multi-character text in unconstrained natural
photographs is a hard problem. In this paper, we address an equally hard
sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from
Street View imagery. Traditional approaches to solve this problem typically
separate out the localization, segmentation, and recognition steps. In this
paper we propose a unified approach that integrates these three steps via the
use of a deep convolutional neural network that operates directly on the image
pixels. We employ the DistBelief implementation of deep neural networks in
order to train large, distributed neural networks on high quality images. We
find that the performance of this approach increases with the depth of the
convolutional network, with the best performance occurring in the deepest
architecture we trained, with eleven hidden layers. We evaluate this approach
on the publicly available SVHN dataset and achieve over $96\%$ accuracy in
recognizing complete street numbers. We show that on a per-digit recognition
task, we improve upon the state-of-the-art, achieving $97.84\%$ accuracy. We
also evaluate this approach on an even more challenging dataset generated from
Street View imagery containing several tens of millions of street number
annotations and achieve over $90\%$ accuracy. To further explore the
applicability of the proposed system to broader text recognition tasks, we
apply it to synthetic distorted text from reCAPTCHA. reCAPTCHA is one of the
most secure reverse Turing tests that uses distorted text to distinguish humans
from bots. We report a $99.8\%$ accuracy on the hardest category of reCAPTCHA.
Our evaluations on both tasks indicate that at specific operating thresholds,
the performance of the proposed system is comparable to, and in some cases
exceeds, that of human operators.
|
1402.2681
|
Liang Zheng
|
Liang Zheng, Shengjin Wang, Ziqiong Liu, Qi Tian
|
Packing and Padding: Coupled Multi-index for Accurate Image Retrieval
|
8 pages, 7 figures, 6 tables. Accepted to CVPR 2014
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low
discriminative power, so false positive matches occur prevalently. Apart from
the information loss during quantization, another cause is that the SIFT
feature only describes the local gradient distribution. To address this
problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform
feature fusion at indexing level. Basically, complementary features are coupled
into a multi-dimensional inverted index. Each dimension of c-MI corresponds to
one kind of feature, and the retrieval process votes for images similar in both
SIFT and other feature spaces. Specifically, we exploit the fusion of local
color feature into c-MI. While the precision of visual match is greatly
enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation
of SIFT and color features significantly reduces the impact of false positive
matches.
Extensive experiments on several benchmark datasets demonstrate that c-MI
improves the retrieval accuracy significantly, while consuming only half of the
query time compared to the baseline. Importantly, we show that c-MI is well
complementary to many prior techniques. Assembling these methods, we have
obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench
datasets, respectively, which compare favorably with the state of the art.
|
[
{
"version": "v1",
"created": "Tue, 11 Feb 2014 22:00:31 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Apr 2014 09:51:54 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Zheng",
"Liang",
""
],
[
"Wang",
"Shengjin",
""
],
[
"Liu",
"Ziqiong",
""
],
[
"Tian",
"Qi",
""
]
] |
TITLE: Packing and Padding: Coupled Multi-index for Accurate Image Retrieval
ABSTRACT: In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low
discriminative power, so false positive matches occur prevalently. Apart from
the information loss during quantization, another cause is that the SIFT
feature only describes the local gradient distribution. To address this
problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform
feature fusion at indexing level. Basically, complementary features are coupled
into a multi-dimensional inverted index. Each dimension of c-MI corresponds to
one kind of feature, and the retrieval process votes for images similar in both
SIFT and other feature spaces. Specifically, we exploit the fusion of local
color feature into c-MI. While the precision of visual match is greatly
enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation
of SIFT and color features significantly reduces the impact of false positive
matches.
Extensive experiments on several benchmark datasets demonstrate that c-MI
improves the retrieval accuracy significantly, while consuming only half of the
query time compared to the baseline. Importantly, we show that c-MI is well
complementary to many prior techniques. Assembling these methods, we have
obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench
datasets, respectively, which compare favorably with the state of the art.
|
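A minimal sketch of the coupled multi-index idea: index every local feature under the pair (SIFT visual word, color visual word) so that retrieval votes require agreement in both feature spaces. It omits Multiple Assignment, IDF weighting, and the other refinements in the paper; the class and method names are hypothetical.

from collections import defaultdict

class CoupledMultiIndex:
    """Toy coupled multi-index keyed by (SIFT word, color word) pairs."""
    def __init__(self):
        self.index = defaultdict(list)                 # (sift_word, color_word) -> image ids

    def add(self, image_id, quantized_features):
        for sift_w, color_w in quantized_features:     # features already quantized to words
            self.index[(sift_w, color_w)].append(image_id)

    def query(self, quantized_features):
        votes = defaultdict(int)
        for key in quantized_features:                 # a vote needs both words to match
            for image_id in self.index.get(key, []):
                votes[image_id] += 1
        return sorted(votes, key=votes.get, reverse=True)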
1403.0284
|
Liang Zheng
|
Liang Zheng and Shengjin Wang and Wengang Zhou and Qi Tian
|
Bayes Merging of Multiple Vocabularies for Scalable Image Retrieval
|
8 pages, 7 figures, 6 tables, accepted to CVPR 2014
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The Bag-of-Words (BoW) representation is well applied to recent
state-of-the-art image retrieval works. Typically, multiple vocabularies are
generated to correct quantization artifacts and improve recall. However, this
routine is corrupted by vocabulary correlation, i.e., overlapping among
different vocabularies. Vocabulary correlation leads to an over-counting of the
indexed features in the overlapped area, or the intersection set, thus
compromising the retrieval accuracy. In order to address the correlation
problem while preserving the benefit of high recall, this paper proposes a Bayes
merging approach to down-weight the indexed features in the intersection set.
Through explicitly modeling the correlation problem in a probabilistic view, a
joint similarity on both image- and feature-level is estimated for the indexed
features in the intersection set.
We evaluate our method through extensive experiments on three benchmark
datasets. Albeit simple, Bayes merging can be well applied in various merging
tasks, and consistently improves the baselines on multi-vocabulary merging.
Moreover, Bayes merging is efficient in terms of both time and memory cost, and
yields competitive performance compared with the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2014 00:51:29 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Apr 2014 10:14:54 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Zheng",
"Liang",
""
],
[
"Wang",
"Shengjin",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Tian",
"Qi",
""
]
] |
TITLE: Bayes Merging of Multiple Vocabularies for Scalable Image Retrieval
ABSTRACT: The Bag-of-Words (BoW) representation is well applied to recent
state-of-the-art image retrieval works. Typically, multiple vocabularies are
generated to correct quantization artifacts and improve recall. However, this
routine is corrupted by vocabulary correlation, i.e., overlapping among
different vocabularies. Vocabulary correlation leads to an over-counting of the
indexed features in the overlapped area, or the intersection set, thus
compromising the retrieval accuracy. In order to address the correlation
problem while preserving the benefit of high recall, this paper proposes a Bayes
merging approach to down-weight the indexed features in the intersection set.
Through explicitly modeling the correlation problem in a probabilistic view, a
joint similarity on both image- and feature-level is estimated for the indexed
features in the intersection set.
We evaluate our method through extensive experiments on three benchmark
datasets. Albeit simple, Bayes merging can be well applied in various merging
tasks, and consistently improves the baselines on multi-vocabulary merging.
Moreover, Bayes merging is efficient in terms of both time and memory cost, and
yields competitive performance compared with the state-of-the-art methods.
|
1403.3780
|
Conrad Sanderson
|
Arnold Wiliem, Conrad Sanderson, Yongkang Wong, Peter Hobson, Rodney
F. Minchin, Brian C. Lovell
|
Automatic Classification of Human Epithelial Type 2 Cell Indirect
Immunofluorescence Images using Cell Pyramid Matching
|
arXiv admin note: substantial text overlap with arXiv:1304.1262
|
Pattern Recognition, Vol. 47, No. 7, pp. 2315-2324, 2014
|
10.1016/j.patcog.2013.10.014
| null |
q-bio.CB cs.CV q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a novel system for automatic classification of images
obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial
type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The
IIF protocol on HEp-2 cells has been the hallmark method to identify the
presence of ANAs, due to its high sensitivity and the large range of antigens
that can be detected. However, it suffers from numerous shortcomings, such as
being subjective as well as time and labour intensive. Computer Aided
Diagnostic (CAD) systems have been developed to address these problems, which
automatically classify a HEp-2 cell image into one of its known patterns (e.g.
speckled, homogeneous). Most of the existing CAD systems use handpicked
features to represent a HEp-2 cell image, which may only work in limited
scenarios. We propose a novel automatic cell image classification method termed
Cell Pyramid Matching (CPM), which is comprised of regional histograms of
visual words coupled with the Multiple Kernel Learning framework. We present a
study of several variations of generating histograms and show the efficacy of
the system on two publicly available datasets: the ICPR HEp-2 cell
classification contest dataset and the SNPHEp-2 dataset.
|
[
{
"version": "v1",
"created": "Sat, 15 Mar 2014 10:15:25 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Wiliem",
"Arnold",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Hobson",
"Peter",
""
],
[
"Minchin",
"Rodney F.",
""
],
[
"Lovell",
"Brian C.",
""
]
] |
TITLE: Automatic Classification of Human Epithelial Type 2 Cell Indirect
Immunofluorescence Images using Cell Pyramid Matching
ABSTRACT: This paper describes a novel system for automatic classification of images
obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial
type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The
IIF protocol on HEp-2 cells has been the hallmark method to identify the
presence of ANAs, due to its high sensitivity and the large range of antigens
that can be detected. However, it suffers from numerous shortcomings, such as
being subjective as well as time and labour intensive. Computer Aided
Diagnostic (CAD) systems have been developed to address these problems, which
automatically classify a HEp-2 cell image into one of its known patterns (e.g.
speckled, homogeneous). Most of the existing CAD systems use handpicked
features to represent a HEp-2 cell image, which may only work in limited
scenarios. We propose a novel automatic cell image classification method termed
Cell Pyramid Matching (CPM), which is comprised of regional histograms of
visual words coupled with the Multiple Kernel Learning framework. We present a
study of several variations of generating histograms and show the efficacy of
the system on two publicly available datasets: the ICPR HEp-2 cell
classification contest dataset and the SNPHEp-2 dataset.
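A minimal sketch of the regional-histogram component behind Cell Pyramid Matching, assuming each cell image has already been quantized into a 2-D map of visual-word indices; the 2x2 grid and L1 normalization are illustrative choices, and the Multiple Kernel Learning stage that combines the regions is not shown.

import numpy as np

def regional_histograms(word_map, num_words, grid=(2, 2)):
    """word_map: 2-D array of visual-word indices, one per local patch.
    Returns the whole-image histogram followed by one histogram per region."""
    hists = [np.bincount(word_map.ravel(), minlength=num_words)]
    rows = np.array_split(np.arange(word_map.shape[0]), grid[0])
    cols = np.array_split(np.arange(word_map.shape[1]), grid[1])
    for r in rows:
        for c in cols:
            region = word_map[np.ix_(r, c)]
            hists.append(np.bincount(region.ravel(), minlength=num_words))
    hists = [h / max(h.sum(), 1) for h in hists]   # L1-normalize each histogram
    return np.concatenate(hists)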
|
1404.3291
|
Michael Wilber
|
Michael J. Wilber and Iljung S. Kwak and Serge J. Belongie
|
Cost-Effective HITs for Relative Similarity Comparisons
|
7 pages, 7 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Similarity comparisons of the form "Is object a more similar to b than to c?"
are useful for computer vision and machine learning applications.
Unfortunately, an embedding of $n$ points is specified by $n^3$ triplets,
making collecting every triplet an expensive task. Recognizing this difficulty,
other researchers have investigated more intelligent triplet sampling
techniques, but they do not study their effectiveness or their potential
drawbacks. Although it is important to reduce the number of collected triplets,
it is also important to understand how best to display a triplet collection
task to a user. In this work we explore an alternative display for collecting
triplets and analyze the monetary cost and speed of the display. We propose
best practices for creating cost effective human intelligence tasks for
collecting triplets. We show that rather than changing the sampling algorithm,
simple changes to the crowdsourcing UI can lead to much higher quality
embeddings. We also provide a dataset as well as the labels collected from
crowd workers.
|
[
{
"version": "v1",
"created": "Sat, 12 Apr 2014 14:33:18 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Wilber",
"Michael J.",
""
],
[
"Kwak",
"Iljung S.",
""
],
[
"Belongie",
"Serge J.",
""
]
] |
TITLE: Cost-Effective HITs for Relative Similarity Comparisons
ABSTRACT: Similarity comparisons of the form "Is object a more similar to b than to c?"
are useful for computer vision and machine learning applications.
Unfortunately, an embedding of $n$ points is specified by $n^3$ triplets,
making collecting every triplet an expensive task. Recognizing this difficulty,
other researchers have investigated more intelligent triplet sampling
techniques, but they do not study their effectiveness or their potential
drawbacks. Although it is important to reduce the number of collected triplets,
it is also important to understand how best to display a triplet collection
task to a user. In this work we explore an alternative display for collecting
triplets and analyze the monetary cost and speed of the display. We propose
best practices for creating cost effective human intelligence tasks for
collecting triplets. We show that rather than changing the sampling algorithm,
simple changes to the crowdsourcing UI can lead to much higher quality
embeddings. We also provide a dataset as well as the labels collected from
crowd workers.
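As a toy illustration of how a batched display can amortize triplet collection: assume a grid-style HIT in which a worker marks which grid items look most similar to a probe, so that every (chosen, unchosen) pair yields one triplet. This is only one plausible reading of the alternative display studied above; the function and data layout are hypothetical.

def grid_hit_triplets(probe, grid_items, chosen):
    """One grid task: the worker sees `probe` plus `grid_items` and selects
    the subset `chosen` judged most similar to the probe. Each pair of a
    chosen and an unchosen item implies the triplet
    'probe is more similar to the chosen item than to the unchosen one'."""
    unchosen = [g for g in grid_items if g not in chosen]
    return [(probe, c, u) for c in chosen for u in unchosen]

# e.g. one response over a 4-item grid already yields 2 * 2 = 4 triplets:
# grid_hit_triplets("probe.jpg", ["a", "b", "c", "d"], chosen=["a", "c"])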
|
1404.3312
|
Xu Chen
|
Xu Chen, Alfred Hero, Silvio Savarese
|
Shrinkage Optimized Directed Information using Pictorial Structures for
Action Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In this paper, we propose a novel action recognition framework. The method
uses pictorial structures and shrinkage optimized directed information
assessment (SODA) coupled with Markov Random Fields called SODA+MRF to model
the directional temporal dependency and bidirectional spatial dependency. As a
variant of mutual information, directed information captures the directional
information flow and temporal structure of video sequences across frames.
Meanwhile, within each frame, Markov random fields are utilized to model the
spatial relations among different parts of a human body and the body parts of
different people. The proposed SODA+MRF model is robust to viewpoint
transformations and detects complex interactions accurately. We compare the
proposed method against several baseline methods to highlight the effectiveness
of the SODA+MRF model. We demonstrate that our algorithm has superior action
recognition performance on the UCF action recognition dataset, the Olympic
sports dataset and the collective activity dataset over several
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 12 Apr 2014 19:01:36 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Chen",
"Xu",
""
],
[
"Hero",
"Alfred",
""
],
[
"Savarese",
"Silvio",
""
]
] |
TITLE: Shrinkage Optimized Directed Information using Pictorial Structures for
Action Recognition
ABSTRACT: In this paper, we propose a novel action recognition framework. The method
uses pictorial structures and shrinkage optimized directed information
assessment (SODA) coupled with Markov Random Fields called SODA+MRF to model
the directional temporal dependency and bidirectional spatial dependency. As a
variant of mutual information, directed information captures the directional
information flow and temporal structure of video sequences across frames.
Meanwhile, within each frame, Markov random fields are utilized to model the
spatial relations among different parts of a human body and the body parts of
different people. The proposed SODA+MRF model is robust to viewpoint
transformations and detects complex interactions accurately. We compare the
proposed method against several baseline methods to highlight the effectiveness
of the SODA+MRF model. We demonstrate that our algorithm has superior action
recognition performance on the UCF action recognition dataset, the Olympic
sports dataset and the collective activity dataset over several
state-of-the-art methods.
|
1404.3461
|
Xiaolu Lu
|
Xiaolu Lu, Dongxu Li, Xiang Li, Ling Feng
|
A 2D based Partition Strategy for Solving Ranking under Team Context
(RTP)
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a 2D based partition method for solving the problem
of Ranking under Team Context (RTC) on datasets without a priori knowledge. We
first map the data into 2D space using each tuple's minimum and maximum values
over all dimensions. Then we construct window queries that take the current
team context into account. Besides, during the query mapping procedure, we can
pre-prune some tuples that cannot be top-ranked. This pre-classification step
defers processing those tuples and saves cost while still providing solutions
for the problem. Experiments show that our algorithm performs well, especially
on large datasets, while preserving correctness.
|
[
{
"version": "v1",
"created": "Mon, 14 Apr 2014 05:20:48 GMT"
}
] | 2014-04-15T00:00:00 |
[
[
"Lu",
"Xiaolu",
""
],
[
"Li",
"Dongxu",
""
],
[
"Li",
"Xiang",
""
],
[
"Feng",
"Ling",
""
]
] |
TITLE: A 2D based Partition Strategy for Solving Ranking under Team Context
(RTP)
ABSTRACT: In this paper, we propose a 2D based partition method for solving the problem
of Ranking under Team Context (RTC) on datasets without a priori knowledge. We
first map the data into 2D space using each tuple's minimum and maximum values
over all dimensions. Then we construct window queries that take the current
team context into account. Besides, during the query mapping procedure, we can
pre-prune some tuples that cannot be top-ranked. This pre-classification step
defers processing those tuples and saves cost while still providing solutions
for the problem. Experiments show that our algorithm performs well, especially
on large datasets, while preserving correctness.
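A rough sketch of the 2D mapping step described above, assuming numeric tuples stored row-wise in a NumPy array; the window bounds and the choice of which tuples to defer are placeholders rather than the paper's exact construction.

import numpy as np

def to_2d(data):
    """Map each tuple (row) to a 2D point: (min, max) over its dimensions."""
    return np.stack([data.min(axis=1), data.max(axis=1)], axis=1)

def window_filter(points, lo, hi):
    """Keep tuples whose 2D image lies inside the query window
    [lo[0], hi[0]] x [lo[1], hi[1]]; the remaining tuples are deferred,
    mirroring the pre-classification step described above."""
    keep = ((points[:, 0] >= lo[0]) & (points[:, 0] <= hi[0]) &
            (points[:, 1] >= lo[1]) & (points[:, 1] <= hi[1]))
    return np.where(keep)[0]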
|
1404.2948
|
Anna Goldenberg
|
Bo Wang and Anna Goldenberg
|
Gradient-based Laplacian Feature Selection
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analysis of high-dimensional noisy data is essential across a variety of
research fields. Feature selection techniques are designed to find the relevant
feature subset that can facilitate classification or pattern detection.
Traditional (supervised) feature selection methods utilize label information to
guide the identification of relevant feature subsets. In this paper, however,
we consider the unsupervised feature selection problem. Without the label
information, it is particularly difficult to identify a small set of relevant
features due to the noisy nature of real-world data which corrupts the
intrinsic structure of the data. Our Gradient-based Laplacian Feature Selection
(GLFS) selects important features by minimizing the variance of the Laplacian
regularized least squares regression model. With $\ell_1$ relaxation, GLFS can
find a sparse subset of features that is relevant to the Laplacian manifolds.
Extensive experiments on simulated data, three real-world object recognition
datasets, and two computational biology datasets have illustrated the power and
superior
performance of our approach over multiple state-of-the-art unsupervised feature
selection methods. Additionally, we show that GLFS selects a sparser set of
more relevant features in a supervised setting outperforming the popular
elastic net methodology.
|
[
{
"version": "v1",
"created": "Thu, 10 Apr 2014 20:49:35 GMT"
}
] | 2014-04-14T00:00:00 |
[
[
"Wang",
"Bo",
""
],
[
"Goldenberg",
"Anna",
""
]
] |
TITLE: Gradient-based Laplacian Feature Selection
ABSTRACT: Analysis of high-dimensional noisy data is essential across a variety of
research fields. Feature selection techniques are designed to find the relevant
feature subset that can facilitate classification or pattern detection.
Traditional (supervised) feature selection methods utilize label information to
guide the identification of relevant feature subsets. In this paper, however,
we consider the unsupervised feature selection problem. Without the label
information, it is particularly difficult to identify a small set of relevant
features due to the noisy nature of real-world data which corrupts the
intrinsic structure of the data. Our Gradient-based Laplacian Feature Selection
(GLFS) selects important features by minimizing the variance of the Laplacian
regularized least squares regression model. With $\ell_1$ relaxation, GLFS can
find a sparse subset of features that is relevant to the Laplacian manifolds.
Extensive experiments on simulated data, three real-world object recognition
datasets, and two computational biology datasets have illustrated the power and
superior
performance of our approach over multiple state-of-the-art unsupervised feature
selection methods. Additionally, we show that GLFS selects a sparser set of
more relevant features in a supervised setting outperforming the popular
elastic net methodology.
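For context, a small sketch of the graph-Laplacian-regularized least-squares model that GLFS builds on, using a k-NN graph over the samples; the GLFS-specific variance minimization and $\ell_1$ relaxation that actually score features are not reproduced, and k, gamma are illustrative.

import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_rls(X, y, k=10, gamma=1.0):
    """Minimize ||f - y||^2 + gamma * f^T L f over the sample graph,
    where L is the (unnormalized) k-NN graph Laplacian; closed-form solve."""
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                                  # symmetrize
    L = np.diag(np.asarray(W.sum(axis=1)).ravel()) - W.toarray()
    return np.linalg.solve(np.eye(X.shape[0]) + gamma * L, y)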
|
1301.2995
|
David Garcia
|
David Garc\'ia, Dorian Tanase
|
Measuring Cultural Dynamics Through the Eurovision Song Contest
|
Submitted to Advances in Complex Systems
|
Advances in Complex Systems, Vol 16, No 8 (2013) pp 33
|
10.1142/S0219525913500379
| null |
physics.soc-ph cs.SI physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Measuring culture and its dynamics through surveys has important limitations,
but the emerging field of computational social science allows us to overcome
them by analyzing large-scale datasets. In this article, we study cultural
dynamics through the votes in the Eurovision song contest, which are decided by
a crowd-based scheme in which viewers vote through mobile phone messages.
Taking into account asymmetries and imperfect perception of culture, we measure
cultural relations among European countries in terms of cultural affinity. We
propose the Friend-or-Foe coefficient, a metric to measure voting biases among
participants of a Eurovision contest. We validate how this metric represents
cultural affinity through its relation with known cultural distances, and
through numerical analysis of biased Eurovision contests. We apply this metric
to the historical set of Eurovision contests from 1975 to 2012, finding new
patterns of stronger modularity than using votes alone. Furthermore, we define
a measure of polarization that, when applied to empirical data, shows a sharp
increase within EU countries during 2010 and 2011. We empirically validate the
relation between this polarization and economic indicators in the EU, showing
how political decisions influence both the economy and the way citizens relate
to the culture of other EU members.
|
[
{
"version": "v1",
"created": "Mon, 14 Jan 2013 14:55:15 GMT"
},
{
"version": "v2",
"created": "Fri, 10 May 2013 11:41:36 GMT"
}
] | 2014-04-11T00:00:00 |
[
[
"García",
"David",
""
],
[
"Tanase",
"Dorian",
""
]
] |
TITLE: Measuring Cultural Dynamics Through the Eurovision Song Contest
ABSTRACT: Measuring culture and its dynamics through surveys has important limitations,
but the emerging field of computational social science allows us to overcome
them by analyzing large-scale datasets. In this article, we study cultural
dynamics through the votes in the Eurovision song contest, which are decided by
a crowd-based scheme in which viewers vote through mobile phone messages.
Taking into account asymmetries and imperfect perception of culture, we measure
cultural relations among European countries in terms of cultural affinity. We
propose the Friend-or-Foe coefficient, a metric to measure voting biases among
participants of a Eurovision contest. We validate how this metric represents
cultural affinity through its relation with known cultural distances, and
through numerical analysis of biased Eurovision contests. We apply this metric
to the historical set of Eurovision contests from 1975 to 2012, finding new
patterns of stronger modularity than using votes alone. Furthermore, we define
a measure of polarization that, when applied to empirical data, shows a sharp
increase within EU countries during 2010 and 2011. We empirically validate the
relation between this polarization and economic indicators in the EU, showing
how political decisions influence both the economy and the way citizens relate
to the culture of other EU members.
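Purely as an illustrative stand-in (not the Friend-or-Foe coefficient defined in the paper), one could score a directed voting bias as the excess of points one country gives another over what that receiver gets from all other participants on average; the data layout and names are assumptions.

import numpy as np

def voting_bias(points, giver, receiver):
    """points[a][b]: average points country a awarded country b over the
    contests considered. Returns how much `giver` favours `receiver`
    relative to the receiver's average score from everyone else."""
    others = [a for a in points if a not in (giver, receiver)]
    baseline = np.mean([points[a].get(receiver, 0.0) for a in others])
    return points[giver].get(receiver, 0.0) - baseline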
|
1404.2835
|
Arian Ojeda Gonz\'alez
|
Arian Ojeda Gonz\'alez, Odim Mendes Junior, Margarete Oliveira
Domingues and Varlei Everton Menconi
|
Daubechies wavelet coefficients: a tool to study interplanetary magnetic
field fluctuations
|
15 pages, 6 figures, 4 tables
http://www.geofisica.unam.mx/unid_apoyo/editorial/publicaciones/investigacion/geofisica_internacional/anteriores/2014/02/1_ojeda.pdf
|
Geofisica Internacional, 53-2: 101-115, ISSN: 0016-7169, 2014
| null | null |
physics.space-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have studied a set of 41 magnetic clouds (MCs) measured by the ACE
spacecraft, using the discrete orthogonal wavelet transform (Daubechies wavelet
of order two) in three regions: Pre-MC (plasma sheath), MC and Post-MC. We have
used data from the IMF GSM-components with time resolution of 16 s. The
mathematical property chosen was the statistical mean of the wavelet
coefficients $(\langle Dd1 \rangle)$. The Daubechies wavelet coefficients have
been used because they represent the local regularity present in the signal
being studied. The results reproduced the well-known fact that the dynamics of
the sheath region is more intense than that of the MC region. This technique
could be useful to help a specialist find event boundaries when working with
IMF datasets, i.e., a better way to visualize the data. The wavelet
coefficients have the advantage of helping to find some shocks that are not
easy to see in the IMF data by simple visual inspection. We can learn that
fluctuations are not low in all MCs; in some cases waves can penetrate from the
sheath into the MC. This methodology has not yet been tested to identify
specific fluctuation patterns in the IMF for other geoeffective interplanetary
events, such as Co-rotating Interaction Regions (CIRs), the Heliospheric
Current Sheet (HCS) or ICMEs without MC signatures. In our opinion, as this is
the first time that this technique has been applied to IMF data for this
purpose, the presentation of this approach to the Space Physics community is
one of the contributions of this work.
|
[
{
"version": "v1",
"created": "Thu, 10 Apr 2014 14:53:24 GMT"
}
] | 2014-04-11T00:00:00 |
[
[
"González",
"Arian Ojeda",
""
],
[
"Junior",
"Odim Mendes",
""
],
[
"Domingues",
"Margarete Oliveira",
""
],
[
"Menconi",
"Varlei Everton",
""
]
] |
TITLE: Daubechies wavelet coefficients: a tool to study interplanetary magnetic
field fluctuations
ABSTRACT: We have studied a set of 41 magnetic clouds (MCs) measured by the ACE
spacecraft, using the discrete orthogonal wavelet transform (Daubechies wavelet
of order two) in three regions: Pre-MC (plasma sheath), MC and Post-MC. We have
used data from the IMF GSM-components with time resolution of 16 s. The
mathematical property chosen was the statistical mean of the wavelet
coefficients $(\langle Dd1 \rangle)$. The Daubechies wavelet coefficients have
been used because they represent the local regularity present in the signal
being studied. The results reproduced the well-known fact that the dynamics of
the sheath region is more intense than that of the MC region. This technique
could be useful to help a specialist find event boundaries when working with
IMF datasets, i.e., a better way to visualize the data. The wavelet
coefficients have the advantage of helping to find some shocks that are not
easy to see in the IMF data by simple visual inspection. We can learn that
fluctuations are not low in all MCs; in some cases waves can penetrate from the
sheath into the MC. This methodology has not yet been tested to identify
specific fluctuation patterns in the IMF for other geoeffective interplanetary
events, such as Co-rotating Interaction Regions (CIRs), the Heliospheric
Current Sheet (HCS) or ICMEs without MC signatures. In our opinion, as this is
the first time that this technique has been applied to IMF data for this
purpose, the presentation of this approach to the Space Physics community is
one of the contributions of this work.
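A minimal sketch, assuming PyWavelets is available, of computing a db2 detail-coefficient statistic for one IMF component over a given region; taking the mean of the absolute level-1 detail coefficients is an assumption about the exact convention behind $\langle Dd1 \rangle$.

import numpy as np
import pywt

def mean_detail_coefficient(signal, wavelet="db2", level=1):
    """Mean absolute level-1 detail coefficient of a Daubechies-2 DWT,
    used here as a rough stand-in for the <Dd1> statistic."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return float(np.mean(np.abs(coeffs[-1])))   # coeffs[-1] is the finest detail

Comparing this statistic across the Pre-MC, MC and Post-MC windows gives the kind of region-wise contrast discussed above.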
|
1404.2872
|
Md Pavel Mahmud
|
Md Pavel Mahmud and Alexander Schliep
|
TreQ-CG: Clustering Accelerates High-Throughput Sequencing Read Mapping
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As high-throughput sequencers become standard equipment outside of sequencing
centers, there is an increasing need for efficient methods for pre-processing
and primary analysis. While a vast literature proposes methods for HTS data
analysis, we argue that significant improvements can still be gained by
exploiting expensive pre-processing steps which can be amortized with savings
from later stages. We propose a method to accelerate and improve read mapping
based on an initial clustering of possibly billions of high-throughput
sequencing reads, yielding clusters of high stringency and a high degree of
overlap. This clustering improves on the state-of-the-art in running time for
small datasets and, for the first time, makes clustering high-coverage human
libraries feasible. Given the efficiently computed clusters, only one
representative read from each cluster needs to be mapped using a traditional
readmapper such as BWA, instead of individually mapping all reads. On human
reads, all processing steps, including clustering and mapping, only require
11%-59% of the time for individually mapping all reads, achieving speed-ups for
all readmappers, while minimally affecting mapping quality. This accelerates a
highly sensitive readmapper such as Stampy to be competitive with a fast
readmapper such as BWA on unclustered reads.
|
[
{
"version": "v1",
"created": "Thu, 10 Apr 2014 16:29:09 GMT"
}
] | 2014-04-11T00:00:00 |
[
[
"Mahmud",
"Md Pavel",
""
],
[
"Schliep",
"Alexander",
""
]
] |
TITLE: TreQ-CG: Clustering Accelerates High-Throughput Sequencing Read Mapping
ABSTRACT: As high-throughput sequencers become standard equipment outside of sequencing
centers, there is an increasing need for efficient methods for pre-processing
and primary analysis. While a vast literature proposes methods for HTS data
analysis, we argue that significant improvements can still be gained by
exploiting expensive pre-processing steps which can be amortized with savings
from later stages. We propose a method to accelerate and improve read mapping
based on an initial clustering of possibly billions of high-throughput
sequencing reads, yielding clusters of high stringency and a high degree of
overlap. This clustering improves on the state-of-the-art in running time for
small datasets and, for the first time, makes clustering high-coverage human
libraries feasible. Given the efficiently computed clusters, only one
representative read from each cluster needs to be mapped using a traditional
readmapper such as BWA, instead of individually mapping all reads. On human
reads, all processing steps, including clustering and mapping, only require
11%-59% of the time for individually mapping all reads, achieving speed-ups for
all readmappers, while minimally affecting mapping quality. This accelerates a
highly sensitive readmapper such as Stampy to be competitive with a fast
readmapper such as BWA on unclustered reads.
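A toy sketch of the amortization idea: group reads, map a single representative per cluster with an external read mapper such as BWA, and reuse that alignment for the cluster members. Exact-prefix grouping is a deliberately crude stand-in for the clustering used in the paper.

from collections import defaultdict

def cluster_reads(reads, prefix_len=32):
    """Group reads sharing an exact prefix and pick one representative each.
    Returns {representative_read: [indices of member reads]}."""
    clusters = defaultdict(list)
    for i, read in enumerate(reads):
        clusters[read[:prefix_len]].append(i)
    # Only the representative is passed to the read mapper; its alignment is
    # then transferred to every member of its cluster.
    return {reads[members[0]]: members for members in clusters.values()}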
|
1404.2268
|
Junyan Wang
|
Junyan Wang and Sai-Kit Yeung
|
A Compact Linear Programming Relaxation for Binary Sub-modular MRF
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel compact linear programming (LP) relaxation for binary
sub-modular MRF in the context of object segmentation. Our model is obtained by
linearizing an $l_1^+$-norm derived from the quadratic programming (QP) form of
the MRF energy. The resultant LP model contains significantly fewer variables
and constraints compared to the conventional LP relaxation of the MRF energy.
In addition, unlike QP which can produce ambiguous labels, our model can be
viewed as a quasi-total-variation minimization problem, and it can therefore
preserve the discontinuities in the labels. We further establish a relaxation
bound between our LP model and the conventional LP model. In the experiments,
we demonstrate our method for the task of interactive object segmentation. Our
LP model outperforms QP when converting the continuous labels to binary labels
using different threshold values on the entire Oxford interactive segmentation
dataset. The computational complexity of our LP is of the same order as that of
the QP, and it is significantly lower than the conventional LP relaxation.
|
[
{
"version": "v1",
"created": "Wed, 9 Apr 2014 16:33:44 GMT"
}
] | 2014-04-10T00:00:00 |
[
[
"Wang",
"Junyan",
""
],
[
"Yeung",
"Sai-Kit",
""
]
] |
TITLE: A Compact Linear Programming Relaxation for Binary Sub-modular MRF
ABSTRACT: We propose a novel compact linear programming (LP) relaxation for binary
sub-modular MRF in the context of object segmentation. Our model is obtained by
linearizing an $l_1^+$-norm derived from the quadratic programming (QP) form of
the MRF energy. The resultant LP model contains significantly fewer variables
and constraints compared to the conventional LP relaxation of the MRF energy.
In addition, unlike QP which can produce ambiguous labels, our model can be
viewed as a quasi-total-variation minimization problem, and it can therefore
preserve the discontinuities in the labels. We further establish a relaxation
bound between our LP model and the conventional LP model. In the experiments,
we demonstrate our method for the task of interactive object segmentation. Our
LP model outperforms QP when converting the continuous labels to binary labels
using different threshold values on the entire Oxford interactive segmentation
dataset. The computational complexity of our LP is of the same order as that of
the QP, and it is significantly lower than the conventional LP relaxation.
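For reference, a small SciPy sketch of the conventional LP relaxation that the compact model is compared against: fractional labels x in [0,1] with one auxiliary variable per edge bounding |x_i - x_j|. This is the baseline formulation, not the authors' $l_1^+$-based compact model.

import numpy as np
from scipy.optimize import linprog

def mrf_lp_relaxation(unary, edges, weights):
    """min  sum_i unary[i]*x[i] + sum_e weights[e]*z[e]
    s.t. x_i - x_j <= z_e,  x_j - x_i <= z_e,  0 <= x <= 1,  z >= 0."""
    n, m = len(unary), len(edges)
    c = np.concatenate([unary, weights])
    A = np.zeros((2 * m, n + m))
    for e, (i, j) in enumerate(edges):
        A[2 * e, i], A[2 * e, j], A[2 * e, n + e] = 1.0, -1.0, -1.0
        A[2 * e + 1, j], A[2 * e + 1, i], A[2 * e + 1, n + e] = 1.0, -1.0, -1.0
    bounds = [(0, 1)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A, b_ub=np.zeros(2 * m), bounds=bounds, method="highs")
    return res.x[:n]   # fractional labels; threshold to obtain a binary segmentation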
|
1404.1911
|
Bahador Saket
|
Bahador Saket, Paolo Simonetto, Stephen Kobourov and Katy Borner
|
Node, Node-Link, and Node-Link-Group Diagrams: An Evaluation
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effectively showing the relationships between objects in a dataset is one of
the main tasks in information visualization. Typically there is a well-defined
notion of distance between pairs of objects, and traditional approaches such as
principal component analysis or multi-dimensional scaling are used to place the
objects as points in 2D space, so that similar objects are close to each other.
In another typical setting, the dataset is visualized as a network graph, where
related nodes are connected by links. More recently, datasets are also
visualized as maps, where in addition to nodes and links, there is an explicit
representation of groups and clusters. We consider these three techniques,
characterized by a progressive increase of the amount of encoded information:
node diagrams, node-link diagrams and node-link-group diagrams. We assess these
three types of diagrams with a controlled experiment that covers nine different
tasks falling broadly in three categories: node-based tasks, network-based
tasks and group-based tasks. Our findings indicate that adding links, or links
and group representations, does not negatively impact performance (time and
accuracy) of node-based tasks. Similarly, adding group representations does not
negatively impact the performance of network-based tasks. Node-link-group
diagrams outperform the others on group-based tasks. These conclusions
contradict results in other studies, in similar but subtly different settings.
Taken together, however, such results can have significant implications for the
design of standard and domain-specific visualization tools.
|
[
{
"version": "v1",
"created": "Mon, 7 Apr 2014 20:01:40 GMT"
}
] | 2014-04-09T00:00:00 |
[
[
"Saket",
"Bahador",
""
],
[
"Simonetto",
"Paolo",
""
],
[
"Kobourov",
"Stephen",
""
],
[
"Borner",
"Katy",
""
]
] |
TITLE: Node, Node-Link, and Node-Link-Group Diagrams: An Evaluation
ABSTRACT: Effectively showing the relationships between objects in a dataset is one of
the main tasks in information visualization. Typically there is a well-defined
notion of distance between pairs of objects, and traditional approaches such as
principal component analysis or multi-dimensional scaling are used to place the
objects as points in 2D space, so that similar objects are close to each other.
In another typical setting, the dataset is visualized as a network graph, where
related nodes are connected by links. More recently, datasets are also
visualized as maps, where in addition to nodes and links, there is an explicit
representation of groups and clusters. We consider these three techniques,
characterized by a progressive increase of the amount of encoded information:
node diagrams, node-link diagrams and node-link-group diagrams. We assess these
three types of diagrams with a controlled experiment that covers nine different
tasks falling broadly in three categories: node-based tasks, network-based
tasks and group-based tasks. Our findings indicate that adding links, or links
and group representations, does not negatively impact performance (time and
accuracy) of node-based tasks. Similarly, adding group representations does not
negatively impact the performance of network-based tasks. Node-link-group
diagrams outperform the others on group-based tasks. These conclusions
contradict results in other studies, in similar but subtly different settings.
Taken together, however, such results can have significant implications for the
design of standard and domain-specific visualization tools.
|
1404.2005
|
Duc Phu Chau
|
Duc Phu Chau (INRIA Sophia Antipolis), Fran\c{c}ois Bremond (INRIA
Sophia Antipolis), Monique Thonnat (INRIA Sophia Antipolis), Slawomir Bak
(INRIA Sophia Antipolis)
|
Automatic Tracker Selection w.r.t Object Detection Performance
|
IEEE Winter Conference on Applications of Computer Vision (WACV 2014)
(2014)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The tracking algorithm performance depends on video content. This paper
presents a new multi-object tracking approach which is able to cope with video
content variations. First the object detection is improved using
Kanade-Lucas-Tomasi (KLT) feature tracking. Second, for each mobile object, an
appropriate tracker is selected among a KLT-based tracker and a discriminative
appearance-based tracker. This selection is supported by an online tracking
evaluation. The approach has been experimented on three public video datasets.
The experimental results show a better performance of the proposed approach
compared to recent state-of-the-art trackers.
|
[
{
"version": "v1",
"created": "Tue, 8 Apr 2014 04:09:32 GMT"
}
] | 2014-04-09T00:00:00 |
[
[
"Chau",
"Duc Phu",
"",
"INRIA Sophia Antipolis"
],
[
"Bremond",
"François",
"",
"INRIA\n Sophia Antipolis"
],
[
"Thonnat",
"Monique",
"",
"INRIA Sophia Antipolis"
],
[
"Bak",
"Slawomir",
"",
"INRIA Sophia Antipolis"
]
] |
TITLE: Automatic Tracker Selection w.r.t Object Detection Performance
ABSTRACT: The tracking algorithm performance depends on video content. This paper
presents a new multi-object tracking approach which is able to cope with video
content variations. First the object detection is improved using
Kanade-Lucas-Tomasi (KLT) feature tracking. Second, for each mobile object, an
appropriate tracker is selected among a KLT-based tracker and a discriminative
appearance-based tracker. This selection is supported by an online tracking
evaluation. The approach has been experimented on three public video datasets.
The experimental results show a better performance of the proposed approach
compared to recent state-of-the-art trackers.
|
1307.7751
|
Guoming Tang
|
Guoming Tang, Kui Wu, Jingsheng Lei, Zhongqin Bi and Jiuyang Tang
|
From Landscape to Portrait: A New Approach for Outlier Detection in Load
Curve Data
|
10 pages, 9 figures
| null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In power systems, load curve data is one of the most important datasets that
are collected and retained by utilities. The quality of load curve data,
however, is hard to guarantee since the data is subject to communication
losses, meter malfunctions, and many other impacts. In this paper, a new
approach to analyzing load curve data is presented. The method adopts a new
view, termed \textit{portrait}, on the load curve data by analyzing the
periodic patterns in the data and re-organizing the data for ease of analysis.
Furthermore, we introduce algorithms to build the virtual portrait load curve
data, and demonstrate its application on load curve data cleansing. Compared to
existing regression-based methods, our method is much faster and more accurate
for both small-scale and large-scale real-world datasets.
|
[
{
"version": "v1",
"created": "Mon, 29 Jul 2013 21:59:30 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Jul 2013 17:22:07 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2014 19:17:23 GMT"
}
] | 2014-04-08T00:00:00 |
[
[
"Tang",
"Guoming",
""
],
[
"Wu",
"Kui",
""
],
[
"Lei",
"Jingsheng",
""
],
[
"Bi",
"Zhongqin",
""
],
[
"Tang",
"Jiuyang",
""
]
] |
TITLE: From Landscape to Portrait: A New Approach for Outlier Detection in Load
Curve Data
ABSTRACT: In power systems, load curve data is one of the most important datasets that
are collected and retained by utilities. The quality of load curve data,
however, is hard to guarantee since the data is subject to communication
losses, meter malfunctions, and many other impacts. In this paper, a new
approach to analyzing load curve data is presented. The method adopts a new
view, termed \textit{portrait}, on the load curve data by analyzing the
periodic patterns in the data and re-organizing the data for ease of analysis.
Furthermore, we introduce algorithms to build the virtual portrait load curve
data, and demonstrate its application on load curve data cleansing. Compared to
existing regression-based methods, our method is much faster and more accurate
for both small-scale and large-scale real-world datasets.
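A minimal sketch of the portrait re-organization, assuming a regularly sampled load series; the column-wise robust test at the end only illustrates why outliers become easy to flag in this view and is not the paper's cleansing algorithm.

import numpy as np

def portrait_view(load, samples_per_day):
    """Reshape a 1-D load series into one row per day and one column per
    time-of-day slot, so that periodic patterns line up vertically."""
    days = len(load) // samples_per_day
    return np.asarray(load[:days * samples_per_day]).reshape(days, samples_per_day)

def flag_outliers(portrait, k=3.0):
    """Flag readings deviating from the column (same time-of-day) median by
    more than k robust standard deviations."""
    med = np.median(portrait, axis=0)
    mad = np.median(np.abs(portrait - med), axis=0) + 1e-9
    return np.abs(portrait - med) > k * 1.4826 * mad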
|
1312.0803
|
Amir Najafi
|
Amir Najafi, Amir Joudaki, and Emad Fatemizadeh
|
Nonlinear Dimensionality Reduction via Path-Based Isometric Mapping
|
(29) pages, (12) figures
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nonlinear dimensionality reduction methods have demonstrated top-notch
performance in many pattern recognition and image classification tasks. Despite
their popularity, they suffer from highly expensive time and memory
requirements, which render them inapplicable to large-scale datasets. To
address such cases, we propose a new method called "Path-Based Isomap". Similar
to Isomap, we exploit geodesic paths to find the low-dimensional embedding.
However, instead of preserving pairwise geodesic distances, the low-dimensional
embedding is computed via a path-mapping algorithm. Because the number of paths
is much smaller than the number of data points, a significant improvement in
time and memory complexity without any decline in performance is achieved. The
method demonstrates state-of-the-art performance on well-known synthetic and
real-world datasets, as well as in the presence of noise.
|
[
{
"version": "v1",
"created": "Tue, 3 Dec 2013 12:56:46 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Dec 2013 15:05:53 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2014 13:38:32 GMT"
}
] | 2014-04-08T00:00:00 |
[
[
"Najafi",
"Amir",
""
],
[
"Joudaki",
"Amir",
""
],
[
"Fatemizadeh",
"Emad",
""
]
] |
TITLE: Nonlinear Dimensionality Reduction via Path-Based Isometric Mapping
ABSTRACT: Nonlinear dimensionality reduction methods have demonstrated top-notch
performance in many pattern recognition and image classification tasks. Despite
their popularity, they suffer from highly expensive time and memory
requirements, which render them inapplicable to large-scale datasets. To
address such cases, we propose a new method called "Path-Based Isomap". Similar
to Isomap, we exploit geodesic paths to find the low-dimensional embedding.
However, instead of preserving pairwise geodesic distances, the low-dimensional
embedding is computed via a path-mapping algorithm. Because the number of paths
is much smaller than the number of data points, a significant improvement in
time and memory complexity without any decline in performance is achieved. The
method demonstrates state-of-the-art performance on well-known synthetic and
real-world datasets, as well as in the presence of noise.
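A short sketch of the geodesic-graph stage that Path-Based Isomap shares with Isomap: a k-NN graph followed by shortest-path (geodesic) distances. The path-mapping step that replaces pairwise distance preservation is not shown, and k is an arbitrary choice.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(X, k=10):
    """All-pairs geodesic distances over a k-NN graph (Dijkstra). Note that
    materializing all pairs is exactly the cost the path-based variant
    described above seeks to avoid."""
    G = kneighbors_graph(X, k, mode="distance")
    return shortest_path(G, method="D", directed=False)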
|
1404.1831
|
Artem Babenko
|
Artem Babenko and Victor Lempitsky
|
Improving Bilayer Product Quantization for Billion-Scale Approximate
Nearest Neighbors in High Dimensions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The top-performing systems for billion-scale high-dimensional approximate
nearest neighbor (ANN) search are all based on two-layer architectures that
include an indexing structure and a compressed datapoints layer. An indexing
structure is crucial as it allows to avoid exhaustive search, while the lossy
data compression is needed to fit the dataset into RAM. Several of the most
successful systems use product quantization (PQ) for both the indexing and the
dataset compression layers. These systems are however limited in the way they
exploit the interaction of product quantization processes that happen at
different stages of these systems.
Here we introduce and evaluate two approximate nearest neighbor search
systems that both exploit the synergy of product quantization processes in a
more efficient way. The first system, called Fast Bilayer Product Quantization
(FBPQ), speeds up the runtime of the baseline system (Multi-D-ADC) by several
times, while achieving the same accuracy. The second system, Hierarchical
Bilayer Product Quantization (HBPQ) provides a significantly better recall for
the same runtime at the cost of a small memory footprint increase. For the
BIGANN dataset of one billion SIFT descriptors, a 10% increase in Recall@1 and
a 17% increase in Recall@10 are observed.
|
[
{
"version": "v1",
"created": "Mon, 7 Apr 2014 16:08:13 GMT"
}
] | 2014-04-08T00:00:00 |
[
[
"Babenko",
"Artem",
""
],
[
"Lempitsky",
"Victor",
""
]
] |
TITLE: Improving Bilayer Product Quantization for Billion-Scale Approximate
Nearest Neighbors in High Dimensions
ABSTRACT: The top-performing systems for billion-scale high-dimensional approximate
nearest neighbor (ANN) search are all based on two-layer architectures that
include an indexing structure and a compressed datapoints layer. An indexing
structure is crucial as it allows exhaustive search to be avoided, while lossy
data compression is needed to fit the dataset into RAM. Several of the most
successful systems use product quantization (PQ) for both the indexing and the
dataset compression layers. These systems are however limited in the way they
exploit the interaction of product quantization processes that happen at
different stages of these systems.
Here we introduce and evaluate two approximate nearest neighbor search
systems that both exploit the synergy of product quantization processes in a
more efficient way. The first system, called Fast Bilayer Product Quantization
(FBPQ), speeds up the runtime of the baseline system (Multi-D-ADC) by several
times, while achieving the same accuracy. The second system, Hierarchical
Bilayer Product Quantization (HBPQ) provides a significantly better recall for
the same runtime at the cost of a small memory footprint increase. For the
BIGANN dataset of one billion SIFT descriptors, a 10% increase in Recall@1 and
a 17% increase in Recall@10 are observed.
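For background, a compact sketch of plain product quantization (codebook training and encoding) with scikit-learn; the two-layer indexing structure (e.g. Multi-D-ADC) and the FBPQ/HBPQ variants themselves are not reproduced, and m, ks are illustrative.

import numpy as np
from sklearn.cluster import KMeans

def train_pq(X, m=8, ks=256):
    """Split the D dimensions into m sub-vectors and learn a ks-word
    codebook per sub-space (X must contain at least ks training vectors)."""
    return [KMeans(n_clusters=ks, n_init=4, random_state=0).fit(sub)
            for sub in np.array_split(X, m, axis=1)]

def pq_encode(codebooks, X):
    """Compress each vector into m small integer codes, one per sub-space."""
    subs = np.array_split(X, len(codebooks), axis=1)
    codes = [cb.predict(sub) for cb, sub in zip(codebooks, subs)]
    return np.stack(codes, axis=1).astype(np.uint8)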
|
1404.1355
|
Maksym Gabielkov
|
Maksym Gabielkov (Inria Sophia Antipolis), Ashwin Rao (Inria Sophia
Antipolis), Arnaud Legout (Inria Sophia Antipolis)
|
Studying Social Networks at Scale: Macroscopic Anatomy of the Twitter
Social Graph
|
ACM Sigmetrics 2014 (2014)
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Twitter is one of the largest social networks using exclusively directed
links among accounts. This makes the Twitter social graph much closer to the
social graph supporting real life communications than, for instance, Facebook.
Therefore, understanding the structure of the Twitter social graph is
interesting not only for computer scientists, but also for researchers in other
fields, such as sociologists. However, little is known about how the
information propagation in Twitter is constrained by its inner structure. In
this paper, we present an in-depth study of the macroscopic structure of the
Twitter social graph unveiling the highways on which tweets propagate, the
specific user activity associated with each component of this macroscopic
structure, and the evolution of this macroscopic structure with time for the
past 6 years. For this study, we crawled Twitter to retrieve all accounts and
all social relationships (follow links) among accounts; the crawl completed in
July 2012 with 505 million accounts interconnected by 23 billion links. Then,
we present a methodology to unveil the macroscopic structure of the Twitter
social graph. This macroscopic structure consists of 8 components defined by
their connectivity characteristics. Each component groups users with a specific
usage of Twitter. For instance, we identified components gathering together
spammers, or celebrities. Finally, we present a method to approximate the
macroscopic structure of the Twitter social graph in the past, validate this
method using old datasets, and discuss the evolution of the macroscopic
structure of the Twitter social graph during the past 6 years.
|
[
{
"version": "v1",
"created": "Fri, 4 Apr 2014 19:33:22 GMT"
}
] | 2014-04-07T00:00:00 |
[
[
"Gabielkov",
"Maksym",
"",
"Inria Sophia Antipolis"
],
[
"Rao",
"Ashwin",
"",
"Inria Sophia\n Antipolis"
],
[
"Legout",
"Arnaud",
"",
"Inria Sophia Antipolis"
]
] |
TITLE: Studying Social Networks at Scale: Macroscopic Anatomy of the Twitter
Social Graph
ABSTRACT: Twitter is one of the largest social networks using exclusively directed
links among accounts. This makes the Twitter social graph much closer to the
social graph supporting real life communications than, for instance, Facebook.
Therefore, understanding the structure of the Twitter social graph is
interesting not only for computer scientists, but also for researchers in other
fields, such as sociologists. However, little is known about how the
information propagation in Twitter is constrained by its inner structure. In
this paper, we present an in-depth study of the macroscopic structure of the
Twitter social graph unveiling the highways on which tweets propagate, the
specific user activity associated with each component of this macroscopic
structure, and the evolution of this macroscopic structure with time for the
past 6 years. For this study, we crawled Twitter to retrieve all accounts and
all social relationships (follow links) among accounts; the crawl completed in
July 2012 with 505 million accounts interconnected by 23 billion links. Then,
we present a methodology to unveil the macroscopic structure of the Twitter
social graph. This macroscopic structure consists of 8 components defined by
their connectivity characteristics. Each component groups users with a specific
usage of Twitter. For instance, we identified components gathering together
spammers, or celebrities. Finally, we present a method to approximate the
macroscopic structure of the Twitter social graph in the past, validate this
method using old datasets, and discuss the evolution of the macroscopic
structure of the Twitter social graph during the past 6 years.
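A small NetworkX sketch of a bow-tie-style decomposition (LSCC, IN, OUT, other) of a directed follow graph; the macroscopic structure described above has 8 components, so this classical decomposition is only the starting point, not the paper's exact definition.

import networkx as nx

def macroscopic_components(G):
    """Largest strongly connected component (LSCC), the IN set that can
    reach it, the OUT set reachable from it, and everything else."""
    lscc = max(nx.strongly_connected_components(G), key=len)
    seed = next(iter(lscc))
    out_set = set(nx.descendants(G, seed)) | lscc
    in_set = set(nx.ancestors(G, seed)) | lscc
    return {"LSCC": lscc, "IN": in_set - lscc, "OUT": out_set - lscc,
            "OTHER": set(G) - (in_set | out_set)}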
|
1404.0334
|
Menglong Zhu
|
Menglong Zhu, Nikolay Atanasov, George J. Pappas, Kostas Daniilidis
|
Active Deformable Part Models
|
9 pages
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an active approach for part-based object detection, which
optimizes the order of part filter evaluations and the time at which to stop
and make a prediction. Statistics, describing the part responses, are learned
from training data and are used to formalize the part scheduling problem as an
offline optimization. Dynamic programming is applied to obtain a policy, which
balances the number of part evaluations with the classification accuracy.
During inference, the policy is used as a look-up table to choose the part
order and the stopping time based on the observed filter responses. The method
is faster than cascade detection with deformable part models (which does not
optimize the part order) with negligible loss in accuracy when evaluated on the
PASCAL VOC 2007 and 2010 datasets.
|
[
{
"version": "v1",
"created": "Tue, 1 Apr 2014 18:07:58 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2014 19:00:29 GMT"
}
] | 2014-04-03T00:00:00 |
[
[
"Zhu",
"Menglong",
""
],
[
"Atanasov",
"Nikolay",
""
],
[
"Pappas",
"George J.",
""
],
[
"Daniilidis",
"Kostas",
""
]
] |
TITLE: Active Deformable Part Models
ABSTRACT: This paper presents an active approach for part-based object detection, which
optimizes the order of part filter evaluations and the time at which to stop
and make a prediction. Statistics, describing the part responses, are learned
from training data and are used to formalize the part scheduling problem as an
offline optimization. Dynamic programming is applied to obtain a policy, which
balances the number of part evaluations with the classification accuracy.
During inference, the policy is used as a look-up table to choose the part
order and the stopping time based on the observed filter responses. The method
is faster than cascade detection with deformable part models (which does not
optimize the part order) with negligible loss in accuracy when evaluated on the
PASCAL VOC 2007 and 2010 datasets.
|
1404.0404
|
Xu Chen
|
Xu Chen, Zeeshan Syed, Alfred Hero
|
EEG Spatial Decoding and Classification with Logit Shrinkage Regularized
Directed Information Assessment (L-SODA)
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
There is an increasing interest in studying the neural interaction mechanisms
behind patterns of cognitive brain activity. This paper proposes a new approach
to infer such interaction mechanisms from electroencephalographic (EEG) data
using a new estimator of directed information (DI) called logit shrinkage
optimized directed information assessment (L-SODA). Unlike previous directed
information measures applied to neural decoding, L-SODA uses shrinkage
regularization on multinomial logistic regression to deal with the high
dimensionality of multi-channel EEG signals and the small sizes of many
real-world datasets. It is designed to make few a priori assumptions and can
handle both non-linear and non-Gaussian flows among electrodes. Our L-SODA
estimator of the DI is accompanied by robust statistical confidence intervals
on the true DI that make it especially suitable for hypothesis testing on the
information flow patterns. We evaluate our work in the context of two different
problems where interaction localization is used to determine highly interactive
areas for EEG signals spatially and temporally. First, by mapping the areas
that have high DI onto Brodmann areas, we find that the areas with high DI
are associated with motor-related functions. We demonstrate that L-SODA
provides better accuracy for neural decoding of EEG signals as compared to
several state-of-the-art approaches on the Brain Computer Interface (BCI) EEG
motor activity dataset. Second, the proposed L-SODA estimator is evaluated on
the CHB-MIT Scalp EEG database. We demonstrate that compared to the
state-of-the-art approaches, the proposed method provides better performance in
detecting epileptic seizures.
|
[
{
"version": "v1",
"created": "Tue, 1 Apr 2014 21:43:13 GMT"
}
] | 2014-04-03T00:00:00 |
[
[
"Chen",
"Xu",
""
],
[
"Syed",
"Zeeshan",
""
],
[
"Hero",
"Alfred",
""
]
] |
TITLE: EEG Spatial Decoding and Classification with Logit Shrinkage Regularized
Directed Information Assessment (L-SODA)
ABSTRACT: There is an increasing interest in studying the neural interaction mechanisms
behind patterns of cognitive brain activity. This paper proposes a new approach
to infer such interaction mechanisms from electroencephalographic (EEG) data
using a new estimator of directed information (DI) called logit shrinkage
optimized directed information assessment (L-SODA). Unlike previous directed
information measures applied to neural decoding, L-SODA uses shrinkage
regularization on multinomial logistic regression to deal with the high
dimensionality of multi-channel EEG signals and the small sizes of many
real-world datasets. It is designed to make few a priori assumptions and can
handle both non-linear and non-Gaussian flows among electrodes. Our L-SODA
estimator of the DI is accompanied by robust statistical confidence intervals
on the true DI that make it especially suitable for hypothesis testing on the
information flow patterns. We evaluate our work in the context of two different
problems where interaction localization is used to determine highly interactive
areas for EEG signals spatially and temporally. First, by mapping the areas
that have high DI onto Brodmann areas, we find that the areas with high DI
are associated with motor-related functions. We demonstrate that L-SODA
provides better accuracy for neural decoding of EEG signals as compared to
several state-of-the-art approaches on the Brain Computer Interface (BCI) EEG
motor activity dataset. Second, the proposed L-SODA estimator is evaluated on
the CHB-MIT Scalp EEG database. We demonstrate that compared to the
state-of-the-art approaches, the proposed method provides better performance in
detecting epileptic seizures.
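As a loose illustration only: a shrinkage-penalized (L2) multinomial logistic regression used as the plug-in conditional model from which a directed-information-style statistic could be accumulated. The actual L-SODA estimator, its shrinkage optimization and its confidence intervals are not reproduced here.

from sklearn.linear_model import LogisticRegression

def conditional_model_proba(history, target):
    """Fit p(target_t | history features at time t) with an L2 (shrinkage)
    penalty and return the in-sample conditional probabilities."""
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(history, target)
    return clf.predict_proba(history)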
|
1312.4476
|
Philipp Mayr
|
Lars Kaczmirek, Philipp Mayr, Ravi Vatrapu, Arnim Bleier, Manuela
Blumenberg, Tobias Gummer, Abid Hussain, Katharina Kinder-Kurlanda, Kaveh
Manshaei, Mark Thamm, Katrin Weller, Alexander Wenz, Christof Wolf
|
Social Media Monitoring of the Campaigns for the 2013 German Bundestag
Elections on Facebook and Twitter
|
29 pages, 2 figures, GESIS-Working Papers No. 31
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As more and more people use social media to communicate their view and
perception of elections, researchers have increasingly been collecting and
analyzing data from social media platforms. Our research focuses on social
media communication related to the 2013 election of the German parliament
[translation: Bundestagswahl 2013]. We constructed several social media
datasets using data from Facebook and Twitter. First, we identified the most
relevant candidates (n=2,346) and checked whether they maintained social media
accounts. The Facebook data was collected in November 2013 for the period of
January 2009 to October 2013. On Facebook we identified 1,408 Facebook walls
containing approximately 469,000 posts. Twitter data was collected between June
and December 2013 finishing with the constitution of the government. On Twitter
we identified 1,009 candidates and 76 other agents, for example, journalists.
We estimated the number of relevant tweets to exceed eight million for the
period from July 27 to September 27 alone. In this document we summarize past
research in the literature, discuss possibilities for research with our data
set, explain the data collection procedures, and provide a description of the
data and a discussion of issues for archiving and dissemination of social media
data.
|
[
{
"version": "v1",
"created": "Mon, 16 Dec 2013 19:32:39 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2014 09:59:24 GMT"
}
] | 2014-04-02T00:00:00 |
[
[
"Kaczmirek",
"Lars",
""
],
[
"Mayr",
"Philipp",
""
],
[
"Vatrapu",
"Ravi",
""
],
[
"Bleier",
"Arnim",
""
],
[
"Blumenberg",
"Manuela",
""
],
[
"Gummer",
"Tobias",
""
],
[
"Hussain",
"Abid",
""
],
[
"Kinder-Kurlanda",
"Katharina",
""
],
[
"Manshaei",
"Kaveh",
""
],
[
"Thamm",
"Mark",
""
],
[
"Weller",
"Katrin",
""
],
[
"Wenz",
"Alexander",
""
],
[
"Wolf",
"Christof",
""
]
] |
TITLE: Social Media Monitoring of the Campaigns for the 2013 German Bundestag
Elections on Facebook and Twitter
ABSTRACT: As more and more people use social media to communicate their view and
perception of elections, researchers have increasingly been collecting and
analyzing data from social media platforms. Our research focuses on social
media communication related to the 2013 election of the German parliament
[translation: Bundestagswahl 2013]. We constructed several social media
datasets using data from Facebook and Twitter. First, we identified the most
relevant candidates (n=2,346) and checked whether they maintained social media
accounts. The Facebook data was collected in November 2013 for the period of
January 2009 to October 2013. On Facebook we identified 1,408 Facebook walls
containing approximately 469,000 posts. Twitter data was collected between June
and December 2013 finishing with the constitution of the government. On Twitter
we identified 1,009 candidates and 76 other agents, for example, journalists.
We estimated the number of relevant tweets to exceed eight million for the
period from July 27 to September 27 alone. In this document we summarize past
research in the literature, discuss possibilities for research with our data
set, explain the data collection procedures, and provide a description of the
data and a discussion of issues for archiving and dissemination of social media
data.
|
1404.0163
|
David Garcia
|
David Garcia, Ingmar Weber, Venkata Rama Kiran Garimella
|
Gender Asymmetries in Reality and Fiction: The Bechdel Test of Social
Media
|
To appear in Proceedings of the 8th International AAAI Conference on
Weblogs and Social Media (ICWSM '14)
| null | null | null |
cs.SI cs.CY physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The subjective nature of gender inequality motivates the analysis and
comparison of data from real and fictional human interaction. We present a
computational extension of the Bechdel test: A popular tool to assess if a
movie contains a male gender bias, by looking for two female characters who
discuss something besides a man. We provide the tools to quantify Bechdel
scores for both genders, and we measure them in movie scripts and large
datasets of dialogues between users of MySpace and Twitter. Comparing movies
and users of social media, we find that movies and Twitter conversations have a
consistent male bias, which does not appear when analyzing MySpace.
Furthermore, the narrative of Twitter is closer to the movies that do not pass
the Bechdel test than to those that pass it.
We link the properties of movies and the users that share trailers of those
movies. Our analysis reveals some particularities of movies that pass the
Bechdel test: Their trailers are less popular, female users are more likely to
share them than male users, and users that share them tend to interact less
with male users. Based on our datasets, we define gender independence
measurements to analyze the gender biases of a society, as manifested through
digital traces of online behavior. Using the profile information of Twitter
users, we find larger gender independence for urban users in comparison to
rural ones. Additionally, the asymmetry between genders is larger for parents
and lower for students. Gender asymmetry varies across US states, increasing
with higher average income and latitude. This points to the relation between
gender inequality and social, economic, and cultural factors of a society,
and how gender roles exist in both fictional narratives and public online
dialogues.
|
[
{
"version": "v1",
"created": "Tue, 1 Apr 2014 08:40:28 GMT"
}
] | 2014-04-02T00:00:00 |
[
[
"Garcia",
"David",
""
],
[
"Weber",
"Ingmar",
""
],
[
"Garimella",
"Venkata Rama Kiran",
""
]
] |
TITLE: Gender Asymmetries in Reality and Fiction: The Bechdel Test of Social
Media
ABSTRACT: The subjective nature of gender inequality motivates the analysis and
comparison of data from real and fictional human interaction. We present a
computational extension of the Bechdel test: A popular tool to assess if a
movie contains a male gender bias, by looking for two female characters who
discuss something besides a man. We provide the tools to quantify Bechdel
scores for both genders, and we measure them in movie scripts and large
datasets of dialogues between users of MySpace and Twitter. Comparing movies
and users of social media, we find that movies and Twitter conversations have a
consistent male bias, which does not appear when analyzing MySpace.
Furthermore, the narrative of Twitter is closer to the movies that do not pass
the Bechdel test than to those that pass it.
We link the properties of movies and the users that share trailers of those
movies. Our analysis reveals some particularities of movies that pass the
Bechdel test: Their trailers are less popular, female users are more likely to
share them than male users, and users that share them tend to interact less
with male users. Based on our datasets, we define gender independence
measurements to analyze the gender biases of a society, as manifested through
digital traces of online behavior. Using the profile information of Twitter
users, we find larger gender independence for urban users in comparison to
rural ones. Additionally, the asymmetry between genders is larger for parents
and lower for students. Gender asymmetry varies across US states, increasing
with higher average income and latitude. This points to the relation between
gender inequality and social, economic, and cultural factors of a society,
and how gender roles exist in both fictional narratives and public online
dialogues.
|
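To make the dialogue-level measurement concrete, here is a minimal Python sketch of a Bechdel-style score over gender-annotated conversations. It only illustrates the idea described in the abstract above, not the authors' pipeline: the conversation format, the gender labels, and the keyword list used to detect references to men are all assumptions.

# Minimal sketch of a dialogue-level "Bechdel-style" score, assuming conversations
# are already annotated with speaker gender. NOT the authors' pipeline; the
# keyword list used to detect references to men is a toy assumption.

MALE_REFERENCES = {"he", "him", "his", "boyfriend", "husband", "man", "men", "guy"}

def mentions_a_man(text):
    """Crude lexical check for references to men in a message."""
    tokens = {tok.strip(".,!?\"'").lower() for tok in text.split()}
    return bool(tokens & MALE_REFERENCES)

def bechdel_score(conversations):
    """
    conversations: list of dicts with keys
      'genders'  -> set of speaker genders in the conversation ({'F'}, {'F','M'}, ...)
      'messages' -> list of message strings
    Returns the fraction of female-female conversations that never mention a man.
    """
    ff = [c for c in conversations if c["genders"] == {"F"} and c["messages"]]
    if not ff:
        return 0.0
    passing = [c for c in ff if not any(mentions_a_man(m) for m in c["messages"])]
    return len(passing) / len(ff)

# Toy usage:
convos = [
    {"genders": {"F"}, "messages": ["Did you finish the experiment?", "Yes, results look great."]},
    {"genders": {"F"}, "messages": ["My boyfriend called again."]},
    {"genders": {"F", "M"}, "messages": ["Hi!", "Hello."]},
]
print(bechdel_score(convos))  # 0.5: one of the two female-female conversations passes
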
1308.3892
|
Dániel Kondor Mr
|
Dániel Kondor, Márton Pósfai, István Csabai and Gábor Vattay
|
Do the rich get richer? An empirical analysis of the BitCoin transaction
network
|
Project website: http://www.vo.elte.hu/bitcoin/; updated after
publication
|
PLoS ONE 9(2): e86197, 2014
|
10.1371/journal.pone.0086197
| null |
physics.soc-ph cs.SI q-fin.GN
|
http://creativecommons.org/licenses/by/3.0/
|
The possibility to analyze everyday monetary transactions is limited by the
scarcity of available data, as this kind of information is usually considered
highly sensitive. Present econophysics models are usually employed on presumed
random networks of interacting agents, and only macroscopic properties (e.g.
the resulting wealth distribution) are compared to real-world data. In this
paper, we analyze BitCoin, a novel digital currency system in which the
complete list of transactions is publicly available. Using this dataset, we
reconstruct the network of transactions, and extract the time and amount of
each payment. We analyze the structure of the transaction network by measuring
network characteristics over time, such as the degree distribution, degree
correlations and clustering. We find that linear preferential attachment drives
the growth of the network. We also study the dynamics taking place on the
transaction network, i.e. the flow of money. We measure temporal patterns and
wealth accumulation. Investigating the microscopic statistics of money
movement, we find that sublinear preferential attachment governs the evolution
of the wealth distribution. We report a scaling relation between the degree and
wealth associated with individual nodes.
|
[
{
"version": "v1",
"created": "Sun, 18 Aug 2013 20:02:34 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Feb 2014 10:17:56 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2014 11:26:54 GMT"
}
] | 2014-04-01T00:00:00 |
[
[
"Kondor",
"Dániel",
""
],
[
"Pósfai",
"Márton",
""
],
[
"Csabai",
"István",
""
],
[
"Vattay",
"Gábor",
""
]
] |
TITLE: Do the rich get richer? An empirical analysis of the BitCoin transaction
network
ABSTRACT: The possibility to analyze everyday monetary transactions is limited by the
scarcity of available data, as this kind of information is usually considered
highly sensitive. Present econophysics models are usually employed on presumed
random networks of interacting agents, and only macroscopic properties (e.g.
the resulting wealth distribution) are compared to real-world data. In this
paper, we analyze BitCoin, a novel digital currency system in which the
complete list of transactions is publicly available. Using this dataset, we
reconstruct the network of transactions, and extract the time and amount of
each payment. We analyze the structure of the transaction network by measuring
network characteristics over time, such as the degree distribution, degree
correlations and clustering. We find that linear preferential attachment drives
the growth of the network. We also study the dynamics taking place on the
transaction network, i.e. the flow of money. We measure temporal patterns and
wealth accumulation. Investigating the microscopic statistics of money
movement, we find that sublinear preferential attachment governs the evolution
of the wealth distribution. We report a scaling relation between the degree and
wealth associated with individual nodes.
|
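As a rough illustration of how preferential attachment can be measured from a time-ordered transaction graph, the Python sketch below estimates an empirical attachment kernel A(k) and fits its exponent (an exponent near 1 corresponds to linear attachment; sublinear attachment gives an exponent below 1). It is a simplified estimator run on a toy edge list, not the paper's exact methodology.

# Simplified estimator of a preferential-attachment kernel from a time-ordered
# edge list. Illustrative only; not the paper's exact analysis.
import math
from collections import defaultdict

def attachment_kernel(edges):
    """
    edges: time-ordered list of (source, target) pairs.
    Returns {degree k: empirical attachment rate A(k)}, i.e. the number of new
    edges received by degree-k nodes divided by how often degree-k nodes were
    available to receive one.
    """
    degree = defaultdict(int)
    degree_count = defaultdict(int)   # nodes currently at each degree
    received = defaultdict(int)       # new edges received, by target degree
    exposure = defaultdict(int)       # availability of degree-k nodes

    for src, dst in edges:
        for k, n in degree_count.items():   # state seen by this new edge
            exposure[k] += n
        received[degree[dst]] += 1
        for node in (src, dst):             # update degrees after attachment
            k = degree[node]
            if k > 0:
                degree_count[k] -= 1
            degree_count[k + 1] += 1
            degree[node] += 1

    return {k: received[k] / exposure[k] for k in received if exposure.get(k)}

def fit_exponent(kernel):
    """Least-squares slope of log A(k) vs log k (slope 1 = linear attachment)."""
    pts = [(math.log(k), math.log(a)) for k, a in kernel.items() if k > 0 and a > 0]
    n = len(pts)
    if n < 2:
        return float("nan")
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Toy usage with a hand-made edge list:
edges = [(0, 1), (2, 1), (3, 1), (4, 2), (5, 1), (6, 2), (7, 1)]
print(fit_exponent(attachment_kernel(edges)))
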
1403.7654
|
Anastasios Noulas Anastasios Noulas
|
Petko Georgiev, Anastasios Noulas and Cecilia Mascolo
|
Where Businesses Thrive: Predicting the Impact of the Olympic Games on
Local Retailers through Location-based Services Data
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Olympic Games are an important sporting event with notable consequences
for the general economic landscape of the host city. Traditional economic
assessments focus on the aggregated impact of the event on the national income,
but fail to provide micro-scale insights on why local businesses will benefit
from the increased activity during the Games. In this paper we provide a novel
approach to modeling the impact of the Olympic Games on local retailers by
analyzing a dataset mined from a large location-based social service,
Foursquare. We hypothesize that the spatial positioning of businesses as well
as the mobility trends of visitors are primary indicators of whether retailers
will see their popularity rise during the event. To confirm this, we formulate
a retail winners prediction task in which we evaluate a set of
geographic and mobility metrics. We find that the proximity to stadiums, the
diversity of activity in the neighborhood, the nearby area sociability, as well
as the probability of customer flows from and to event places such as stadiums
and parks are all vital factors. Through supervised learning techniques we
demonstrate that the success of businesses hinges on a combination of both
geographic and mobility factors. Our results suggest that location-based social
networks, where crowdsourced information about the dynamic interaction of users
with urban spaces becomes publicly available, present an alternative medium to
assess the economic impact of large scale events in a city.
|
[
{
"version": "v1",
"created": "Sat, 29 Mar 2014 18:02:42 GMT"
}
] | 2014-04-01T00:00:00 |
[
[
"Georgiev",
"Petko",
""
],
[
"Noulas",
"Anastasios",
""
],
[
"Mascolo",
"Cecilia",
""
]
] |
TITLE: Where Businesses Thrive: Predicting the Impact of the Olympic Games on
Local Retailers through Location-based Services Data
ABSTRACT: The Olympic Games are an important sporting event with notable consequences
for the general economic landscape of the host city. Traditional economic
assessments focus on the aggregated impact of the event on the national income,
but fail to provide micro-scale insights on why local businesses will benefit
from the increased activity during the Games. In this paper we provide a novel
approach to modeling the impact of the Olympic Games on local retailers by
analyzing a dataset mined from a large location-based social service,
Foursquare. We hypothesize that the spatial positioning of businesses as well
as the mobility trends of visitors are primary indicators of whether retailers
will see their popularity rise during the event. To confirm this, we formulate
a retail winners prediction task in which we evaluate a set of
geographic and mobility metrics. We find that the proximity to stadiums, the
diversity of activity in the neighborhood, the nearby area sociability, as well
as the probability of customer flows from and to event places such as stadiums
and parks are all vital factors. Through supervised learning techniques we
demonstrate that the success of businesses hinges on a combination of both
geographic and mobility factors. Our results suggest that location-based social
networks, where crowdsourced information about the dynamic interaction of users
with urban spaces becomes publicly available, present an alternative medium to
assess the economic impact of large scale events in a city.
|
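A hypothetical sketch of the kind of supervised "retail winners" prediction described above, combining assumed geographic and mobility features (stadium distance, neighborhood diversity, area sociability, inbound flow from event places) on synthetic data with scikit-learn. The feature definitions, the synthetic labels, and the model choice are illustrative assumptions, not the authors' exact setup.

# Hypothetical "retail winners" classifier on synthetic geographic + mobility
# features; assumptions throughout, loosely in the spirit of the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Assumed per-venue features: distance to nearest stadium (km), neighborhood
# place diversity (entropy), area sociability, and probability of customer
# flow from event places (stadiums/parks) to the venue.
X = np.column_stack([
    rng.exponential(3.0, n),      # stadium_distance_km
    rng.uniform(0.0, 3.0, n),     # place_diversity_entropy
    rng.uniform(0.0, 1.0, n),     # area_sociability
    rng.beta(2, 8, n),            # prob_flow_from_event_places
])

# Synthetic label: venues close to stadiums with strong inbound event flow are
# more likely to be "winners" whose popularity rises during the Games.
logit = -0.8 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2] + 6.0 * X[:, 3]
y = (logit + rng.normal(0, 1.0, n) > np.median(logit)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("5-fold AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
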
1403.7726
|
Ayman I. Madbouly
|
Ayman I. Madbouly, Amr M. Gody, Tamer M. Barakat
|
Relevant Feature Selection Model Using Data Mining for Intrusion
Detection System
|
12 Pages, 3 figures, 5 tables, Published with "International Journal
of Engineering Trends and Technology (IJETT)". arXiv admin note: text overlap
with arXiv:1208.5997 by other authors without attribution
|
International Journal of Engineering Trends and Technology
(IJETT), V9(10), 501-512, March 2014
|
10.14445/22315381/IJETT-V9P296
| null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network intrusions have become a significant threat in recent years as a
result of the increased demand for computer networks in critical systems.
Intrusion detection systems (IDS) have been widely deployed as a defense
measure for computer networks. Features extracted from network traffic can be
used as signs to detect anomalies. However, with the huge amount of network
traffic, the collected data contain irrelevant and redundant features that
degrade the detection rate of the IDS, consume a large amount of system
resources, and slow down the training and testing of the IDS. In this paper, a
new feature selection model is proposed that can effectively select the most
relevant features for intrusion detection. Our goal is to build a lightweight
intrusion detection system that uses a reduced feature set. Removing irrelevant
and redundant features yields faster training and testing and lower resource
consumption while maintaining high detection rates. The effectiveness and
feasibility of our feature selection model were verified by several experiments
on the KDD intrusion detection dataset. The experimental results show that our
model not only yields high detection rates but also speeds up the detection
process.
|
[
{
"version": "v1",
"created": "Sun, 30 Mar 2014 09:41:17 GMT"
}
] | 2014-04-01T00:00:00 |
[
[
"Madbouly",
"Ayman I.",
""
],
[
"Gody",
"Amr M.",
""
],
[
"Barakat",
"Tamer M.",
""
]
] |
TITLE: Relevant Feature Selection Model Using Data Mining for Intrusion
Detection System
ABSTRACT: Network intrusions have become a significant threat in recent years as a
result of the increased demand for computer networks in critical systems.
Intrusion detection systems (IDS) have been widely deployed as a defense
measure for computer networks. Features extracted from network traffic can be
used as signs to detect anomalies. However, with the huge amount of network
traffic, the collected data contain irrelevant and redundant features that
degrade the detection rate of the IDS, consume a large amount of system
resources, and slow down the training and testing of the IDS. In this paper, a
new feature selection model is proposed that can effectively select the most
relevant features for intrusion detection. Our goal is to build a lightweight
intrusion detection system that uses a reduced feature set. Removing irrelevant
and redundant features yields faster training and testing and lower resource
consumption while maintaining high detection rates. The effectiveness and
feasibility of our feature selection model were verified by several experiments
on the KDD intrusion detection dataset. The experimental results show that our
model not only yields high detection rates but also speeds up the detection
process.
|
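One generic way to realize the kind of feature selection described above is to rank features by mutual information with the class label and train a lightweight classifier on the top-k subset. The sketch below does this with scikit-learn on synthetic KDD-like data; it illustrates the general approach, not the paper's specific selection model.

# Generic mutual-information feature ranking + lightweight classifier on
# synthetic KDD-like data. Illustrative sketch, not the paper's model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for KDD-style traffic records: 41 features, many uninformative.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=8,
                           n_redundant=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
clf_full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
clf_reduced = DecisionTreeClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)

print("all 41 features :", accuracy_score(y_te, clf_full.predict(X_te)))
print("top 10 features :", accuracy_score(y_te, clf_reduced.predict(selector.transform(X_te))))
print("selected indices:", np.flatnonzero(selector.get_support()))
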
1403.7872
|
Manzil Zaheer
|
Chenjie Gu, Manzil Zaheer and Xin Li
|
Multiple-Population Moment Estimation: Exploiting Inter-Population
Correlation for Efficient Moment Estimation in Analog/Mixed-Signal Validation
| null | null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Moment estimation is an important problem during circuit validation, in both
pre-Silicon and post-Silicon stages. From the estimated moments, the
probability of failure and parametric yield can be estimated at each circuit
configuration and corner, and these metrics are used for design optimization
and making product qualification decisions. The problem is especially difficult
if only a very small sample size is allowed for measurement or simulation, as
is the case for complex analog/mixed-signal circuits. In this paper, we propose
an efficient moment estimation method, called Multiple-Population Moment
Estimation (MPME), that significantly improves estimation accuracy under small
sample size. The key idea is to leverage the data collected under different
corners/configurations to improve the accuracy of moment estimation at each
individual corner/configuration. Mathematically, we employ the hierarchical
Bayesian framework to exploit the underlying correlation in the data. We apply
the proposed method to several datasets including post-silicon measurements of
a commercial high-speed I/O link, and demonstrate an average error reduction of
up to 2$\times$, which translates into a significant reduction of validation
time and cost.
|
[
{
"version": "v1",
"created": "Mon, 31 Mar 2014 05:23:09 GMT"
}
] | 2014-04-01T00:00:00 |
[
[
"Gu",
"Chenjie",
""
],
[
"Zaheer",
"Manzil",
""
],
[
"Li",
"Xin",
""
]
] |
TITLE: Multiple-Population Moment Estimation: Exploiting Inter-Population
Correlation for Efficient Moment Estimation in Analog/Mixed-Signal Validation
ABSTRACT: Moment estimation is an important problem during circuit validation, in both
pre-Silicon and post-Silicon stages. From the estimated moments, the
probability of failure and parametric yield can be estimated at each circuit
configuration and corner, and these metrics are used for design optimization
and making product qualification decisions. The problem is especially difficult
if only a very small sample size is allowed for measurement or simulation, as
is the case for complex analog/mixed-signal circuits. In this paper, we propose
an efficient moment estimation method, called Multiple-Population Moment
Estimation (MPME), that significantly improves estimation accuracy under small
sample size. The key idea is to leverage the data collected under different
corners/configurations to improve the accuracy of moment estimation at each
individual corner/configuration. Mathematically, we employ the hierarchical
Bayesian framework to exploit the underlying correlation in the data. We apply
the proposed method to several datasets including post-silicon measurements of
a commercial high-speed I/O link, and demonstrate an average error reduction of
up to 2$\times$, which translates into a significant reduction of validation
time and cost.
|
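The core idea of borrowing strength across corners/configurations can be illustrated with a simple empirical-Bayes shrinkage of per-corner sample means toward the cross-corner mean. The Python sketch below is a generic toy version of that idea on synthetic data; it is not the MPME algorithm itself, whose hierarchical Bayesian model is more elaborate.

# Toy empirical-Bayes shrinkage across corners, illustrating the idea of
# exploiting inter-population correlation. Not the paper's MPME algorithm.
import numpy as np

def shrunken_means(samples_per_corner):
    """
    samples_per_corner: list of 1-D arrays, one small sample per corner.
    Returns per-corner mean estimates shrunk toward the grand mean, with the
    shrinkage weight set by a method-of-moments estimate of the between-corner
    variance (tau^2) versus the within-corner noise.
    """
    means = np.array([s.mean() for s in samples_per_corner])
    ses2 = np.array([s.var(ddof=1) / len(s) for s in samples_per_corner])
    grand = means.mean()
    tau2 = max(means.var(ddof=1) - ses2.mean(), 0.0)  # between-corner variance
    weight = tau2 / (tau2 + ses2)   # 0 => full pooling, 1 => no pooling
    return weight * means + (1 - weight) * grand

# Toy usage: 8 corners, true means near 1.0, only 5 noisy samples per corner.
rng = np.random.default_rng(1)
true_means = 1.0 + 0.05 * rng.standard_normal(8)
data = [m + 0.2 * rng.standard_normal(5) for m in true_means]
naive = np.array([d.mean() for d in data])
eb = shrunken_means(data)
print("naive RMSE:", np.sqrt(np.mean((naive - true_means) ** 2)))
print("EB    RMSE:", np.sqrt(np.mean((eb - true_means) ** 2)))
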
1403.8084
|
Smriti Bhagat
|
Stratis Ioannidis, Andrea Montanari, Udi Weinsberg, Smriti Bhagat,
Nadia Fawaz, Nina Taft
|
Privacy Tradeoffs in Predictive Analytics
|
Extended version of the paper appearing in SIGMETRICS 2014
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online services routinely mine user data to predict user preferences, make
recommendations, and place targeted ads. Recent research has demonstrated that
several private user attributes (such as political affiliation, sexual
orientation, and gender) can be inferred from such data. Can a
privacy-conscious user benefit from personalization while simultaneously
protecting her private attributes? We study this question in the context of a
rating prediction service based on matrix factorization. We construct a
protocol of interactions between the service and users that has remarkable
optimality properties: it is privacy-preserving, in that no inference algorithm
can succeed in inferring a user's private attribute with a probability better
than random guessing; it has maximal accuracy, in that no other
privacy-preserving protocol improves rating prediction; and, finally, it
involves a minimal disclosure, as the prediction accuracy strictly decreases
when the service reveals less information. We extensively evaluate our protocol
using several rating datasets, demonstrating that it successfully blocks the
inference of gender, age and political affiliation, while incurring less than
5% decrease in the accuracy of rating prediction.
|
[
{
"version": "v1",
"created": "Mon, 31 Mar 2014 16:53:04 GMT"
}
] | 2014-04-01T00:00:00 |
[
[
"Ioannidis",
"Stratis",
""
],
[
"Montanari",
"Andrea",
""
],
[
"Weinsberg",
"Udi",
""
],
[
"Bhagat",
"Smriti",
""
],
[
"Fawaz",
"Nadia",
""
],
[
"Taft",
"Nina",
""
]
] |
TITLE: Privacy Tradeoffs in Predictive Analytics
ABSTRACT: Online services routinely mine user data to predict user preferences, make
recommendations, and place targeted ads. Recent research has demonstrated that
several private user attributes (such as political affiliation, sexual
orientation, and gender) can be inferred from such data. Can a
privacy-conscious user benefit from personalization while simultaneously
protecting her private attributes? We study this question in the context of a
rating prediction service based on matrix factorization. We construct a
protocol of interactions between the service and users that has remarkable
optimality properties: it is privacy-preserving, in that no inference algorithm
can succeed in inferring a user's private attribute with a probability better
than random guessing; it has maximal accuracy, in that no other
privacy-preserving protocol improves rating prediction; and, finally, it
involves a minimal disclosure, as the prediction accuracy strictly decreases
when the service reveals less information. We extensively evaluate our protocol
using several rating datasets, demonstrating that it successfully blocks the
inference of gender, age and political affiliation, while incurring less than
5% decrease in the accuracy of rating prediction.
|
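To illustrate the tension the abstract describes between personalization and attribute privacy, the sketch below fits a low-rank (matrix-factorization-style) model to a synthetic rating matrix and then runs a logistic-regression "attacker" that tries to infer a private binary attribute from the learned user factors. Everything here is synthetic and generic; it does not implement the paper's privacy-preserving protocol.

# Synthetic illustration of the personalization-vs-privacy tension: a low-rank
# rating model plus an attribute-inference attacker. Not the paper's protocol.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users, n_items, k = 400, 60, 5

# Synthetic users whose latent tastes are correlated with a private attribute.
gender = rng.integers(0, 2, n_users)
U = rng.standard_normal((n_users, k)) + 0.8 * gender[:, None]
V = rng.standard_normal((n_items, k))
R = U @ V.T + 0.5 * rng.standard_normal((n_users, n_items))  # dense ratings

# Rating model: low-rank reconstruction (stand-in for the recommender).
svd = TruncatedSVD(n_components=k, random_state=0)
U_hat = svd.fit_transform(R)
rmse = np.sqrt(np.mean((svd.inverse_transform(U_hat) - R) ** 2))

# Attacker: infer the private attribute from the learned user factors.
auc = cross_val_score(LogisticRegression(max_iter=1000), U_hat, gender,
                      cv=5, scoring="roc_auc").mean()
print("rating reconstruction RMSE: %.3f" % rmse)
print("attribute-inference AUC   : %.3f (0.5 would mean privacy)" % auc)
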