id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1503.04864 | Fay\c{c}al Hamdi | Fay\c{c}al Hamdi, Nathalie Abadie, B\'en\'edicte Bucher and
Abdelfettah Feliachi | GeomRDF: A Geodata Converter with a Fine-Grained Structured
Representation of Geometry in the Web | 12 pages, 2 figures, the 1st International Workshop on Geospatial
Linked Data (GeoLD 2014) - SEMANTiCS 2014 | null | null | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, with the advent of the web of data, a growing number of
national mapping agencies tend to publish their geospatial data as Linked Data.
However, differences between traditional GIS data models and the Linked Data
model can make the publication process more complicated. Moreover, carrying out
the publication may require setting several parameters and some expertise in
semantic web technologies. In addition, the use of standards like GeoSPARQL (or ad hoc
predicates) is mandatory to perform spatial queries on published geospatial
data. In this paper, we present GeomRDF, a tool that helps users to convert
spatial data from traditional GIS formats to the RDF model easily. It generates
geometries represented as GeoSPARQL WKT literals, but also as structured
geometries that can be exploited by using only the RDF query language, SPARQL.
GeomRDF was implemented as a module in the RDF publication platform Datalift. A
validation of GeomRDF has been carried out on the French administrative units
dataset (provided by IGN France).
| [
{
"version": "v1",
"created": "Mon, 16 Mar 2015 21:35:18 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Hamdi",
"Fayçal",
""
],
[
"Abadie",
"Nathalie",
""
],
[
"Bucher",
"Bénédicte",
""
],
[
"Feliachi",
"Abdelfettah",
""
]
] | TITLE: GeomRDF: A Geodata Converter with a Fine-Grained Structured
Representation of Geometry in the Web
ABSTRACT: In recent years, with the advent of the web of data, a growing number of
national mapping agencies tend to publish their geospatial data as Linked Data.
However, differences between traditional GIS data models and the Linked Data
model can make the publication process more complicated. Moreover, carrying out
the publication may require setting several parameters and some expertise in
semantic web technologies. In addition, the use of standards like GeoSPARQL (or ad hoc
predicates) is mandatory to perform spatial queries on published geospatial
data. In this paper, we present GeomRDF, a tool that helps users to convert
spatial data from traditional GIS formats to the RDF model easily. It generates
geometries represented as GeoSPARQL WKT literals, but also as structured
geometries that can be exploited by using only the RDF query language, SPARQL.
GeomRDF was implemented as a module in the RDF publication platform Datalift. A
validation of GeomRDF has been carried out on the French administrative units
dataset (provided by IGN France).
| no_new_dataset | 0.946843 |
1503.04927 | Qingbo Hu | Qingbo Hu and Sihong Xie and Shuyang Lin and Senzhang Wang and Philip
Yu | CENI: a Hybrid Framework for Efficiently Inferring Information Networks | Full-length version of the paper with the same title published in
ICWSM 2015 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, the message diffusion links among users or websites drive the
development of countless innovative applications. However, in reality, it is
easier for us to observe the timestamps when different nodes in the network
react to a message, while the connections empowering the diffusion of the
message remain hidden. This motivates recent extensive studies on the network
inference problem: unveiling the edges from the records of messages
disseminated through them. Existing solutions are computationally expensive,
which motivates us to develop an efficient two-step general framework,
Clustering Embedded Network Inference (CENI). CENI integrates clustering
strategies to improve the efficiency of network inference. By clustering nodes
directly on the timelines of messages, we propose two naive implementations of
CENI: Infection-centric CENI and Cascade-centric CENI. Additionally, we point
out the critical dimension problem of CENI: instead of one-dimensional
timelines, we need to first project the nodes to a Euclidean space of a certain
dimension before clustering. A CENI that adopts a clustering method on the projected
space can better preserve the structure hidden in the cascades, and generate
more accurately inferred links. This insight sheds light on other related work
attempting to discover or utilize the latent cluster structure in the
disseminated messages. By addressing the critical dimension problem, we propose
the third implementation of the CENI framework: Projection-based CENI. Through
extensive experiments on two real datasets, we show that the three CENI models
only need around 20% $\sim$ 50% of the running time of state-of-the-art
methods. Moreover, the edges inferred by Projection-based CENI match or
even exceed the effectiveness of state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 05:47:49 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Hu",
"Qingbo",
""
],
[
"Xie",
"Sihong",
""
],
[
"Lin",
"Shuyang",
""
],
[
"Wang",
"Senzhang",
""
],
[
"Yu",
"Philip",
""
]
] | TITLE: CENI: a Hybrid Framework for Efficiently Inferring Information Networks
ABSTRACT: Nowadays, the message diffusion links among users or websites drive the
development of countless innovative applications. However, in reality, it is
easier for us to observe the timestamps when different nodes in the network
react to a message, while the connections empowering the diffusion of the
message remain hidden. This motivates recent extensive studies on the network
inference problem: unveiling the edges from the records of messages
disseminated through them. Existing solutions are computationally expensive,
which motivates us to develop an efficient two-step general framework,
Clustering Embedded Network Inference (CENI). CENI integrates clustering
strategies to improve the efficiency of network inference. By clustering nodes
directly on the timelines of messages, we propose two naive implementations of
CENI: Infection-centric CENI and Cascade-centric CENI. Additionally, we point
out the critical dimension problem of CENI: instead of one-dimensional
timelines, we need to first project the nodes to a Euclidean space of a certain
dimension before clustering. A CENI that adopts a clustering method on the projected
space can better preserve the structure hidden in the cascades, and generate
more accurately inferred links. This insight sheds light on other related work
attempting to discover or utilize the latent cluster structure in the
disseminated messages. By addressing the critical dimension problem, we propose
the third implementation of the CENI framework: Projection-based CENI. Through
extensive experiments on two real datasets, we show that the three CENI models
only need around 20% $\sim$ 50% of the running time of state-of-the-art
methods. Moreover, the edges inferred by Projection-based CENI match or
even exceed the effectiveness of state-of-the-art methods.
| no_new_dataset | 0.943919 |
1503.04996 | Khaled Fawagreh | Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan | On Extreme Pruning of Random Forest Ensembles for Real-time Predictive
Applications | 10 pages, 4 Figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Forest (RF) is an ensemble supervised machine learning technique that
was developed by Breiman over a decade ago. Compared with other ensemble
techniques, it has proved its accuracy and superiority. Many researchers,
however, believe that there is still room for enhancing and improving its
performance accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how data clustering (a well known diversity
technique) can be applied to identify groups of similar decision trees in an RF
in order to eliminate redundant trees by selecting a representative from each
group (cluster). Second, these likely diverse representatives are then used to
produce an extension of RF termed CLUB-DRF that is much smaller in size than
RF, and yet performs at least as well as RF, and mostly exhibits higher
performance in terms of accuracy. The latter refers to a known technique called
ensemble pruning. Experimental results on 15 real datasets from the UCI
repository prove the superiority of our proposed extension over the traditional
RF. Most of our experiments achieved a pruning level of 95% or above while
retaining or outperforming the RF accuracy.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 11:01:37 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Fawagreh",
"Khaled",
""
],
[
"Gaber",
"Mohamad Medhat",
""
],
[
"Elyan",
"Eyad",
""
]
] | TITLE: On Extreme Pruning of Random Forest Ensembles for Real-time Predictive
Applications
ABSTRACT: Random Forest (RF) is an ensemble supervised machine learning technique that
was developed by Breiman over a decade ago. Compared with other ensemble
techniques, it has proved its accuracy and superiority. Many researchers,
however, believe that there is still room for enhancing and improving its
performance accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how data clustering (a well known diversity
technique) can be applied to identify groups of similar decision trees in an RF
in order to eliminate redundant trees by selecting a representative from each
group (cluster). Second, these likely diverse representatives are then used to
produce an extension of RF termed CLUB-DRF that is much smaller in size than
RF, and yet performs at least as well as RF, and mostly exhibits higher
performance in terms of accuracy. The latter refers to a known technique called
ensemble pruning. Experimental results on 15 real datasets from the UCI
repository prove the superiority of our proposed extension over the traditional
RF. Most of our experiments achieved a pruning level of 95% or above while
retaining or outperforming the RF accuracy.
| no_new_dataset | 0.949949 |
1503.05018 | Martin Wistuba | Martin Wistuba, Josif Grabocka, Lars Schmidt-Thieme | Ultra-Fast Shapelets for Time Series Classification | Preprint submitted to Journal of Data & Knowledge Engineering January
24, 2015 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series shapelets are discriminative subsequences and their similarity to
a time series can be used for time series classification. Since the discovery
of time series shapelets is costly in terms of time, the applicability on long
or multivariate time series is difficult. In this work we propose Ultra-Fast
Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast
Shapelets yield the same prediction quality as current state-of-the-art
shapelet-based time series classifiers that carefully select the shapelets,
while being faster by up to three orders of magnitude. Since this method allows
ultra-fast shapelet discovery, using shapelets for long multivariate time
series classification becomes feasible.
A method for using shapelets for multivariate time series is proposed and
Ultra-Fast Shapelets is proven to be successful in comparison to
state-of-the-art multivariate time series classifiers on 15 multivariate time
series datasets from various domains. Finally, time series derivatives that
have proven to be useful for other time series classifiers are investigated for
the shapelet-based classifiers. It is shown that they have a positive impact
and that they are easy to integrate with a simple preprocessing step, without
the need of adapting the shapelet discovery algorithm.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 12:41:30 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Wistuba",
"Martin",
""
],
[
"Grabocka",
"Josif",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | TITLE: Ultra-Fast Shapelets for Time Series Classification
ABSTRACT: Time series shapelets are discriminative subsequences and their similarity to
a time series can be used for time series classification. Since the discovery
of time series shapelets is costly in terms of time, the applicability on long
or multivariate time series is difficult. In this work we propose Ultra-Fast
Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast
Shapelets yield the same prediction quality as current state-of-the-art
shapelet-based time series classifiers that carefully select the shapelets,
while being faster by up to three orders of magnitude. Since this method allows
ultra-fast shapelet discovery, using shapelets for long multivariate time
series classification becomes feasible.
A method for using shapelets for multivariate time series is proposed and
Ultra-Fast Shapelets is proven to be successful in comparison to
state-of-the-art multivariate time series classifiers on 15 multivariate time
series datasets from various domains. Finally, time series derivatives that
have proven to be useful for other time series classifiers are investigated for
the shapelet-based classifiers. It is shown that they have a positive impact
and that they are easy to integrate with a simple preprocessing step, without
the need of adapting the shapelet discovery algorithm.
| no_new_dataset | 0.952618 |
1503.05038 | Bojan Pepikj | Bojan Pepik, Michael Stark, Peter Gehler, Tobias Ritschel, Bernt
Schiele | 3D Object Class Detection in the Wild | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object class detection has been a synonym for 2D bounding box localization
for the longest time, fueled by the success of powerful statistical learning
techniques, combined with robust image representations. Only recently, there
has been a growing interest in revisiting the promise of computer vision from
the early days: to precisely delineate the contents of a visual scene, object
by object, in 3D. In this paper, we draw from recent advances in object
detection and 2D-3D object lifting in order to design an object class detector
that is particularly tailored towards 3D object class detection. Our 3D object
class detection method consists of several stages gradually enriching the
object detection output with object viewpoint, keypoints and 3D shape
estimates. Following careful design, in each stage it constantly improves the
performance and achieves state-of-the-art performance in simultaneous 2D
bounding box and viewpoint estimation on the challenging Pascal3D+ dataset.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 13:34:22 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Pepik",
"Bojan",
""
],
[
"Stark",
"Michael",
""
],
[
"Gehler",
"Peter",
""
],
[
"Ritschel",
"Tobias",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: 3D Object Class Detection in the Wild
ABSTRACT: Object class detection has been a synonym for 2D bounding box localization
for the longest time, fueled by the success of powerful statistical learning
techniques, combined with robust image representations. Only recently, there
has been a growing interest in revisiting the promise of computer vision from
the early days: to precisely delineate the contents of a visual scene, object
by object, in 3D. In this paper, we draw from recent advances in object
detection and 2D-3D object lifting in order to design an object class detector
that is particularly tailored towards 3D object class detection. Our 3D object
class detection method consists of several stages gradually enriching the
object detection output with object viewpoint, keypoints and 3D shape
estimates. Following careful design, in each stage it constantly improves the
performance and achieves state-of-the-art performance in simultaneous 2D
bounding box and viewpoint estimation on the challenging Pascal3D+ dataset.
| no_new_dataset | 0.948822 |
1503.05157 | Jeremy Debattista | Jeremy Debattista, Santiago Londo\~no, Christoph Lange, S\"oren Auer | Quality Assessment of Linked Datasets using Probabilistic Approximation | 15 pages, 2 figures, To appear in ESWC 2015 proceedings | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing application of Linked Open Data, assessing the quality of
datasets by computing quality metrics becomes an issue of crucial importance.
For large and evolving datasets, an exact, deterministic computation of the
quality metrics is too time consuming or expensive. We employ probabilistic
techniques such as Reservoir Sampling, Bloom Filters and Clustering Coefficient
estimation for implementing a broad set of data quality metrics in an
approximate but sufficiently accurate way. Our implementation is integrated in
the comprehensive data quality assessment framework Luzzu. We evaluated its
performance and accuracy on Linked Open Datasets of broad relevance.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 18:39:22 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Debattista",
"Jeremy",
""
],
[
"Londoño",
"Santiago",
""
],
[
"Lange",
"Christoph",
""
],
[
"Auer",
"Sören",
""
]
] | TITLE: Quality Assessment of Linked Datasets using Probabilistic Approximation
ABSTRACT: With the increasing application of Linked Open Data, assessing the quality of
datasets by computing quality metrics becomes an issue of crucial importance.
For large and evolving datasets, an exact, deterministic computation of the
quality metrics is too time consuming or expensive. We employ probabilistic
techniques such as Reservoir Sampling, Bloom Filters and Clustering Coefficient
estimation for implementing a broad set of data quality metrics in an
approximate but sufficiently accurate way. Our implementation is integrated in
the comprehensive data quality assessment framework Luzzu. We evaluated its
performance and accuracy on Linked Open Datasets of broad relevance.
| no_new_dataset | 0.951818 |
1004.5168 | Charles Clarke | Gordon V. Cormack, Mark D. Smucker, and Charles L. A. Clarke | Efficient and Effective Spam Filtering and Re-ranking for Large Web
Datasets | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The TREC 2009 web ad hoc and relevance feedback tasks used a new document
collection, the ClueWeb09 dataset, which was crawled from the general Web in
early 2009. This dataset contains 1 billion web pages, a substantial fraction
of which are spam --- pages designed to deceive search engines so as to deliver
an unwanted payload. We examine the effect of spam on the results of the TREC
2009 web ad hoc and relevance feedback tasks, which used the ClueWeb09 dataset.
We show that a simple content-based classifier with minimal training is
efficient enough to rank the "spamminess" of every page in the dataset using a
standard personal computer in 48 hours, and effective enough to yield
significant and substantive improvements in the fixed-cutoff precision (estP10)
as well as rank measures (estR-Precision, StatMAP, MAP) of nearly all submitted
runs. Moreover, using a set of "honeypot" queries the labeling of training data
may be reduced to an entirely automatic process. The results of classical
information retrieval methods are particularly enhanced by filtering --- from
among the worst to among the best.
| [
{
"version": "v1",
"created": "Thu, 29 Apr 2010 00:54:25 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Cormack",
"Gordon V.",
""
],
[
"Smucker",
"Mark D.",
""
],
[
"Clarke",
"Charles L. A.",
""
]
] | TITLE: Efficient and Effective Spam Filtering and Re-ranking for Large Web
Datasets
ABSTRACT: The TREC 2009 web ad hoc and relevance feedback tasks used a new document
collection, the ClueWeb09 dataset, which was crawled from the general Web in
early 2009. This dataset contains 1 billion web pages, a substantial fraction
of which are spam --- pages designed to deceive search engines so as to deliver
an unwanted payload. We examine the effect of spam on the results of the TREC
2009 web ad hoc and relevance feedback tasks, which used the ClueWeb09 dataset.
We show that a simple content-based classifier with minimal training is
efficient enough to rank the "spamminess" of every page in the dataset using a
standard personal computer in 48 hours, and effective enough to yield
significant and substantive improvements in the fixed-cutoff precision (estP10)
as well as rank measures (estR-Precision, StatMAP, MAP) of nearly all submitted
runs. Moreover, using a set of "honeypot" queries the labeling of training data
may be reduced to an entirely automatic process. The results of classical
information retrieval methods are particularly enhanced by filtering --- from
among the worst to among the best.
| new_dataset | 0.881666 |
1005.4298 | Sameer Singh | Sameer Singh and Michael Wick and Andrew McCallum | Distantly Labeling Data for Large Scale Cross-Document Coreference | 16 pages, submitted to ECML 2010 | null | null | null | cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-document coreference, the problem of resolving entity mentions across
multi-document collections, is crucial to automated knowledge base construction
and data mining tasks. However, the scarcity of large labeled data sets has
hindered supervised machine learning research for this task. In this paper we
develop and demonstrate an approach based on ``distantly-labeling'' a data set
from which we can train a discriminative cross-document coreference model. In
particular we build a dataset of more than a million people mentions extracted
from 3.5 years of New York Times articles, leverage Wikipedia for distant
labeling with a generative model (and measure the reliability of such
labeling); then we train and evaluate a conditional random field coreference
model that has factors on cross-document entities as well as mention-pairs.
This coreference model obtains high accuracy in resolving mentions and entities
that are not present in the training data, indicating applicability to
non-Wikipedia data. Given the large amount of data, our work is also an
exercise demonstrating the scalability of our approach.
| [
{
"version": "v1",
"created": "Mon, 24 May 2010 10:35:50 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Singh",
"Sameer",
""
],
[
"Wick",
"Michael",
""
],
[
"McCallum",
"Andrew",
""
]
] | TITLE: Distantly Labeling Data for Large Scale Cross-Document Coreference
ABSTRACT: Cross-document coreference, the problem of resolving entity mentions across
multi-document collections, is crucial to automated knowledge base construction
and data mining tasks. However, the scarcity of large labeled data sets has
hindered supervised machine learning research for this task. In this paper we
develop and demonstrate an approach based on ``distantly-labeling'' a data set
from which we can train a discriminative cross-document coreference model. In
particular we build a dataset of more than a million people mentions extracted
from 3.5 years of New York Times articles, leverage Wikipedia for distant
labeling with a generative model (and measure the reliability of such
labeling); then we train and evaluate a conditional random field coreference
model that has factors on cross-document entities as well as mention-pairs.
This coreference model obtains high accuracy in resolving mentions and entities
that are not present in the training data, indicating applicability to
non-Wikipedia data. Given the large amount of data, our work is also an
exercise demonstrating the scalability of our approach.
| new_dataset | 0.957477 |
1006.0234 | Manuel Gomez Rodriguez | Manuel Gomez-Rodriguez, Jure Leskovec, Andreas Krause | Inferring Networks of Diffusion and Influence | Short version appeared in ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining (KDD), 2010. Long version submitted to
ACM Transactions on Knowledge Discovery from Data (TKDD) | null | null | null | cs.DS cs.SI physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information diffusion and virus propagation are fundamental processes taking
place in networks. While it is often possible to directly observe when nodes
become infected with a virus or adopt the information, observing individual
transmissions (i.e., who infects whom, or who influences whom) is typically
very difficult. Furthermore, in many applications, the underlying network over
which the diffusions and propagations spread is actually unobserved. We tackle
these challenges by developing a method for tracing paths of diffusion and
influence through networks and inferring the networks over which contagions
propagate. Given the times when nodes adopt pieces of information or become
infected, we identify the optimal network that best explains the observed
infection times. Since the optimization problem is NP-hard to solve exactly, we
develop an efficient approximation algorithm that scales to large datasets and
finds provably near-optimal networks.
We demonstrate the effectiveness of our approach by tracing information
diffusion in a set of 170 million blogs and news articles over a one year
period to infer how information flows through the online media space. We find
that the diffusion network of news for the top 1,000 media sites and blogs
tends to have a core-periphery structure with a small set of core media sites
that diffuse information to the rest of the Web. These sites tend to have
stable circles of influence with more general news media sites acting as
connectors between them.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2010 20:02:31 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Dec 2010 20:35:08 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Oct 2011 18:56:10 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Gomez-Rodriguez",
"Manuel",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Krause",
"Andreas",
""
]
] | TITLE: Inferring Networks of Diffusion and Influence
ABSTRACT: Information diffusion and virus propagation are fundamental processes taking
place in networks. While it is often possible to directly observe when nodes
become infected with a virus or adopt the information, observing individual
transmissions (i.e., who infects whom, or who influences whom) is typically
very difficult. Furthermore, in many applications, the underlying network over
which the diffusions and propagations spread is actually unobserved. We tackle
these challenges by developing a method for tracing paths of diffusion and
influence through networks and inferring the networks over which contagions
propagate. Given the times when nodes adopt pieces of information or become
infected, we identify the optimal network that best explains the observed
infection times. Since the optimization problem is NP-hard to solve exactly, we
develop an efficient approximation algorithm that scales to large datasets and
finds provably near-optimal networks.
We demonstrate the effectiveness of our approach by tracing information
diffusion in a set of 170 million blogs and news articles over a one year
period to infer how information flows through the online media space. We find
that the diffusion network of news for the top 1,000 media sites and blogs
tends to have a core-periphery structure with a small set of core media sites
that diffuse information to the rest of the Web. These sites tend to have
stable circles of influence with more general news media sites acting as
connectors between them.
| no_new_dataset | 0.951369 |
1010.2148 | Angela Bonifati | Angela Bonifati, Giansalvatore Mecca, Domenica Sileo and Gianvito
Summa | Ontological Matchmaking in Recommender Systems | 28 pages, 8 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The electronic marketplace offers great potential for the recommendation of
supplies. In so-called recommender systems, it is crucial to apply
matchmaking strategies that faithfully satisfy the predicates specified in the
demand, and take into account as much as possible the user preferences. We
focus on real-life ontology-driven matchmaking scenarios and identify a number
of challenges, being inspired by such scenarios. A key challenge is that of
presenting the results to the users in an understandable and clear-cut fashion
in order to facilitate the analysis of the results. Indeed, such scenarios
evoke the opportunity to rank and group the results according to specific
criteria. A further challenge consists of presenting the results to the user in
an asynchronous fashion, i.e. the 'push' mode, along with the 'pull' mode, in
which the user explicitly issues a query, and displays the results. Moreover,
an important issue to consider in real-life cases is the possibility of
submitting a query to multiple providers, and collecting the various results.
We have designed and implemented an ontology-based matchmaking system that
suitably addresses the above challenges. We have conducted a comprehensive
experimental study, in order to investigate the usability of the system, the
performance and the effectiveness of the matchmaking strategies with real
ontological datasets.
| [
{
"version": "v1",
"created": "Mon, 11 Oct 2010 16:22:43 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Bonifati",
"Angela",
""
],
[
"Mecca",
"Giansalvatore",
""
],
[
"Sileo",
"Domenica",
""
],
[
"Summa",
"Gianvito",
""
]
] | TITLE: Ontological Matchmaking in Recommender Systems
ABSTRACT: The electronic marketplace offers great potential for the recommendation of
supplies. In so-called recommender systems, it is crucial to apply
matchmaking strategies that faithfully satisfy the predicates specified in the
demand, and take into account as much as possible the user preferences. We
focus on real-life ontology-driven matchmaking scenarios and identify a number
of challenges, being inspired by such scenarios. A key challenge is that of
presenting the results to the users in an understandable and clear-cut fashion
in order to facilitate the analysis of the results. Indeed, such scenarios
evoke the opportunity to rank and group the results according to specific
criteria. A further challenge consists of presenting the results to the user in
an asynchronous fashion, i.e. the 'push' mode, along with the 'pull' mode, in
which the user explicitly issues a query, and displays the results. Moreover,
an important issue to consider in real-life cases is the possibility of
submitting a query to multiple providers, and collecting the various results.
We have designed and implemented an ontology-based matchmaking system that
suitably addresses the above challenges. We have conducted a comprehensive
experimental study, in order to investigate the usability of the system, the
performance and the effectiveness of the matchmaking strategies with real
ontological datasets.
| no_new_dataset | 0.9434 |
1011.3557 | Kristina Lerman | Anon Plangprasopchok, Kristina Lerman, Lise Getoor | A Probabilistic Approach for Learning Folksonomies from Structured Data | In Proceedings of the 4th ACM Web Search and Data Mining Conference
(WSDM) | null | null | null | cs.AI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning structured representations has emerged as an important problem in
many domains, including document and Web data mining, bioinformatics, and image
analysis. One approach to learning complex structures is to integrate many
smaller, incomplete and noisy structure fragments. In this work, we present an
unsupervised probabilistic approach that extends affinity propagation to
combine the small ontological fragments into a collection of integrated,
consistent, and larger folksonomies. This is a challenging task because the
method must aggregate similar structures while avoiding structural
inconsistencies and handling noise. We validate the approach on a real-world
social media dataset, comprised of shallow personal hierarchies specified by
many individual users, collected from the photosharing website Flickr. Our
empirical results show that our proposed approach is able to construct deeper
and denser structures, compared to an approach using only the standard affinity
propagation algorithm. Additionally, the approach yields better overall
integration quality than a state-of-the-art approach based on incremental
relational clustering.
| [
{
"version": "v1",
"created": "Tue, 16 Nov 2010 00:46:31 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Plangprasopchok",
"Anon",
""
],
[
"Lerman",
"Kristina",
""
],
[
"Getoor",
"Lise",
""
]
] | TITLE: A Probabilistic Approach for Learning Folksonomies from Structured Data
ABSTRACT: Learning structured representations has emerged as an important problem in
many domains, including document and Web data mining, bioinformatics, and image
analysis. One approach to learning complex structures is to integrate many
smaller, incomplete and noisy structure fragments. In this work, we present an
unsupervised probabilistic approach that extends affinity propagation to
combine the small ontological fragments into a collection of integrated,
consistent, and larger folksonomies. This is a challenging task because the
method must aggregate similar structures while avoiding structural
inconsistencies and handling noise. We validate the approach on a real-world
social media dataset, comprised of shallow personal hierarchies specified by
many individual users, collected from the photosharing website Flickr. Our
empirical results show that our proposed approach is able to construct deeper
and denser structures, compared to an approach using only the standard affinity
propagation algorithm. Additionally, the approach yields better overall
integration quality than a state-of-the-art approach based on incremental
relational clustering.
| new_dataset | 0.644225 |
1012.4571 | Yannis Sismanis | Yannis Sismanis | How I won the "Chess Ratings - Elo vs the Rest of the World" Competition | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article discusses in detail the rating system that won the Kaggle
competition "Chess Ratings: Elo vs the rest of the world". The competition
provided a historical dataset of outcomes for chess games, and aimed to
discover whether novel approaches can predict the outcomes of future games,
more accurately than the well-known Elo rating system. The winning rating
system, called Elo++ in the rest of the article, builds upon the Elo rating
system. Like Elo, Elo++ uses a single rating per player and predicts the
outcome of a game, by using a logistic curve over the difference in ratings of
the players. The major component of Elo++ is a regularization technique that
avoids overfitting these ratings. The dataset of chess games and outcomes is
relatively small and one has to be careful not to draw "too many conclusions"
out of the limited data. Many approaches tested in the competition showed signs
of such an overfitting. The leader-board was dominated by attempts that did a
very good job on a small test dataset, but couldn't generalize well on the
private hold-out dataset. The Elo++ regularization takes into account the
number of games per player, the recency of these games and the ratings of the
opponents. Finally, Elo++ employs a stochastic gradient descent scheme for
training the ratings, and uses only two global parameters (white's advantage
and regularization constant) that are optimized using cross-validation.
| [
{
"version": "v1",
"created": "Tue, 21 Dec 2010 09:11:53 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Sismanis",
"Yannis",
""
]
] | TITLE: How I won the "Chess Ratings - Elo vs the Rest of the World" Competition
ABSTRACT: This article discusses in detail the rating system that won the Kaggle
competition "Chess Ratings: Elo vs the rest of the world". The competition
provided a historical dataset of outcomes for chess games, and aimed to
discover whether novel approaches can predict the outcomes of future games,
more accurately than the well-known Elo rating system. The winning rating
system, called Elo++ in the rest of the article, builds upon the Elo rating
system. Like Elo, Elo++ uses a single rating per player and predicts the
outcome of a game, by using a logistic curve over the difference in ratings of
the players. The major component of Elo++ is a regularization technique that
avoids overfitting these ratings. The dataset of chess games and outcomes is
relatively small and one has to be careful not to draw "too many conclusions"
out of the limited data. Many approaches tested in the competition showed signs
of such an overfitting. The leader-board was dominated by attempts that did a
very good job on a small test dataset, but couldn't generalize well on the
private hold-out dataset. The Elo++ regularization takes into account the
number of games per player, the recency of these games and the ratings of the
opponents. Finally, Elo++ employs a stochastic gradient descent scheme for
training the ratings, and uses only two global parameters (white's advantage
and regularization constant) that are optimized using cross-validation.
| no_new_dataset | 0.950732 |
1101.2604 | Wahbeh Qardaji | Ninghui Li, Wahbeh Qardaji, Dong Su | On Sampling, Anonymization, and Differential Privacy: Or,
k-Anonymization Meets Differential Privacy | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper aims at answering the following two questions in
privacy-preserving data analysis and publishing: What formal privacy guarantee
(if any) does $k$-anonymization provide? How to benefit from the adversary's
uncertainty about the data? We have found that random sampling provides a
connection that helps answer these two questions, as sampling can create
uncertainty. The main result of the paper is that $k$-anonymization, when done
"safely", and when preceded with a random sampling step, satisfies
$(\epsilon,\delta)$-differential privacy with reasonable parameters. This
result illustrates that "hiding in a crowd of $k$" indeed offers some privacy
guarantees. This result also suggests an alternative approach to output
perturbation for satisfying differential privacy: namely, adding a random
sampling step in the beginning and pruning results that are too sensitive to
change of a single tuple. Regarding the second question, we provide both
positive and negative results. On the positive side, we show that adding a
random-sampling pre-processing step to a differentially-private algorithm can
greatly amplify the level of privacy protection. Hence, when given a dataset
resulting from sampling, one can utilize a much larger privacy budget. On the
negative side, any privacy notion that takes advantage of the adversary's
uncertainty likely does not compose. We discuss what these results imply in
practice.
| [
{
"version": "v1",
"created": "Thu, 13 Jan 2011 16:18:23 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2011 02:37:02 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Li",
"Ninghui",
""
],
[
"Qardaji",
"Wahbeh",
""
],
[
"Su",
"Dong",
""
]
] | TITLE: On Sampling, Anonymization, and Differential Privacy: Or,
k-Anonymization Meets Differential Privacy
ABSTRACT: This paper aims at answering the following two questions in
privacy-preserving data analysis and publishing: What formal privacy guarantee
(if any) does $k$-anonymization provide? How to benefit from the adversary's
uncertainty about the data? We have found that random sampling provides a
connection that helps answer these two questions, as sampling can create
uncertainty. The main result of the paper is that $k$-anonymization, when done
"safely", and when preceded with a random sampling step, satisfies
$(\epsilon,\delta)$-differential privacy with reasonable parameters. This
result illustrates that "hiding in a crowd of $k$" indeed offers some privacy
guarantees. This result also suggests an alternative approach to output
perturbation for satisfying differential privacy: namely, adding a random
sampling step in the beginning and pruning results that are too sensitive to
change of a single tuple. Regarding the second question, we provide both
positive and negative results. On the positive side, we show that adding a
random-sampling pre-processing step to a differentially-private algorithm can
greatly amplify the level of privacy protection. Hence, when given a dataset
resulting from sampling, one can utilize a much larger privacy budget. On the
negative side, any privacy notion that takes advantage of the adversary's
uncertainty likely does not compose. We discuss what these results imply in
practice.
| no_new_dataset | 0.947527 |
1101.3594 | Donghui Yan | Donghui Yan, Peng Gong, Aiyou Chen and Liheng Zhong | Classification under Data Contamination with Application to Remote
Sensing Image Mis-registration | 23 pages, 10 figures | null | null | null | stat.ME cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work is motivated by the problem of image mis-registration in remote
sensing and we are interested in determining the resulting loss in the accuracy
of pattern classification. A statistical formulation is given where we propose
to use data contamination to model and understand the phenomenon of image
mis-registration. This model is widely applicable to many other types of errors
as well, for example, measurement errors and gross errors etc. The impact of
data contamination on classification is studied under a statistical learning
theoretical framework. A closed-form asymptotic bound is established for the
resulting loss in classification accuracy, which is less than
$\epsilon/(1-\epsilon)$ for data contamination of an amount of $\epsilon$. Our
bound is sharper than similar bounds in the domain adaptation literature and,
unlike such bounds, it applies to classifiers with an infinite
Vapnik-Chervonenkis (VC) dimension. Extensive simulations have been conducted on
both synthetic and real datasets under various types of data contamination,
including label flipping, feature swapping and the replacement of feature
values with data generated from a random source such as a Gaussian or Cauchy
distribution. Our simulation results show that the bound we derive is fairly
tight.
| [
{
"version": "v1",
"created": "Wed, 19 Jan 2011 00:41:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2012 18:04:10 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Yan",
"Donghui",
""
],
[
"Gong",
"Peng",
""
],
[
"Chen",
"Aiyou",
""
],
[
"Zhong",
"Liheng",
""
]
] | TITLE: Classification under Data Contamination with Application to Remote
Sensing Image Mis-registration
ABSTRACT: This work is motivated by the problem of image mis-registration in remote
sensing and we are interested in determining the resulting loss in the accuracy
of pattern classification. A statistical formulation is given where we propose
to use data contamination to model and understand the phenomenon of image
mis-registration. This model is widely applicable to many other types of errors
as well, for example, measurement errors and gross errors etc. The impact of
data contamination on classification is studied under a statistical learning
theoretical framework. A closed-form asymptotic bound is established for the
resulting loss in classification accuracy, which is less than
$\epsilon/(1-\epsilon)$ for data contamination of an amount of $\epsilon$. Our
bound is sharper than similar bounds in the domain adaptation literature and,
unlike such bounds, it applies to classifiers with an infinite
Vapnik-Chervonenkis (VC) dimension. Extensive simulations have been conducted on
both synthetic and real datasets under various types of data contamination,
including label flipping, feature swapping and the replacement of feature
values with data generated from a random source such as a Gaussian or Cauchy
distribution. Our simulation results show that the bound we derive is fairly
tight.
| no_new_dataset | 0.949106 |
1403.3515 | Kieran Greer Dr | Kieran Greer | Concept Trees: Building Dynamic Concepts from Semi-Structured Data using
Nature-Inspired Methods | Pre-print | Q. Zhu, A.T Azar (eds.), Complex system modelling and control
through intelligent soft computations, Studies in Fuzziness and Soft
Computing, Springer-Verlag, Germany, Vol. 319, pp. 221 - 252, 2014 | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a method for creating structure from heterogeneous
sources, as part of an information database, or more specifically, a 'concept
base'. Structures called 'concept trees' can grow from the semi-structured
sources when consistent sequences of concepts are presented. They might be
considered to be dynamic databases, possibly a variation on the distributed
Agent-Based or Cellular Automata models, or even related to Markov models.
Semantic comparison of text is required, but the trees can be built more from
automatic knowledge and statistical feedback. This reduced model might also be
attractive for security or privacy reasons, as not all of the potential data
gets saved. The construction process maintains the key requirement of
generality, allowing it to be used as part of a generic framework. The nature
of the method also means that some level of optimisation or normalisation of
the information will occur. This gives comparisons with databases or
knowledge-bases, but a database system would firstly model its environment or
datasets and then populate the database with instance values. The concept base
deals with a more uncertain environment and therefore cannot fully model it
beforehand. The model itself therefore evolves over time. Similar to databases,
it also needs a good indexing system, where the construction process provides
memory and indexing structures. These allow for more complex concepts to be
automatically created, stored and retrieved, possibly as part of a more
cognitive model. There are also some arguments, or more abstract ideas, for
merging physical-world laws into these automatic processes.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2014 09:38:01 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jun 2014 17:07:10 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Greer",
"Kieran",
""
]
] | TITLE: Concept Trees: Building Dynamic Concepts from Semi-Structured Data using
Nature-Inspired Methods
ABSTRACT: This paper describes a method for creating structure from heterogeneous
sources, as part of an information database, or more specifically, a 'concept
base'. Structures called 'concept trees' can grow from the semi-structured
sources when consistent sequences of concepts are presented. They might be
considered to be dynamic databases, possibly a variation on the distributed
Agent-Based or Cellular Automata models, or even related to Markov models.
Semantic comparison of text is required, but the trees can be built more from
automatic knowledge and statistical feedback. This reduced model might also be
attractive for security or privacy reasons, as not all of the potential data
gets saved. The construction process maintains the key requirement of
generality, allowing it to be used as part of a generic framework. The nature
of the method also means that some level of optimisation or normalisation of
the information will occur. This gives comparisons with databases or
knowledge-bases, but a database system would firstly model its environment or
datasets and then populate the database with instance values. The concept base
deals with a more uncertain environment and therefore cannot fully model it
beforehand. The model itself therefore evolves over time. Similar to databases,
it also needs a good indexing system, where the construction process provides
memory and indexing structures. These allow for more complex concepts to be
automatically created, stored and retrieved, possibly as part of a more
cognitive model. There are also some arguments, or more abstract ideas, for
merging physical-world laws into these automatic processes.
| no_new_dataset | 0.941439 |
1407.1571 | Jonathan Ullman | Jonathan Ullman | Private Multiplicative Weights Beyond Linear Queries | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A wide variety of fundamental data analyses in machine learning, such as
linear and logistic regression, require minimizing a convex function defined by
the data. Since the data may contain sensitive information about individuals,
and these analyses can leak that sensitive information, it is important to be
able to solve convex minimization in a privacy-preserving way.
A series of recent results show how to accurately solve a single convex
minimization problem in a differentially private manner. However, the same data
is often analyzed repeatedly, and little is known about solving multiple convex
minimization problems with differential privacy. For simpler data analyses,
such as linear queries, there are remarkable differentially private algorithms
such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS
2010) that accurately answer exponentially many distinct queries. In this work,
we extend these results to the case of convex minimization and show how to give
accurate and differentially private solutions to *exponentially many* convex
minimization problems on a sensitive dataset.
| [
{
"version": "v1",
"created": "Mon, 7 Jul 2014 02:51:37 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Sep 2014 18:43:19 GMT"
},
{
"version": "v3",
"created": "Sat, 14 Mar 2015 19:21:33 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Ullman",
"Jonathan",
""
]
] | TITLE: Private Multiplicative Weights Beyond Linear Queries
ABSTRACT: A wide variety of fundamental data analyses in machine learning, such as
linear and logistic regression, require minimizing a convex function defined by
the data. Since the data may contain sensitive information about individuals,
and these analyses can leak that sensitive information, it is important to be
able to solve convex minimization in a privacy-preserving way.
A series of recent results show how to accurately solve a single convex
minimization problem in a differentially private manner. However, the same data
is often analyzed repeatedly, and little is known about solving multiple convex
minimization problems with differential privacy. For simpler data analyses,
such as linear queries, there are remarkable differentially private algorithms
such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS
2010) that accurately answer exponentially many distinct queries. In this work,
we extend these results to the case of convex minimization and show how to give
accurate and differentially private solutions to *exponentially many* convex
minimization problems on a sensitive dataset.
| no_new_dataset | 0.94801 |
1411.4726 | Reza Rawassizadeh | Reza Rawassizadeh and Elaheh Momeni and Prajna Shetty | Scalable Mining of Daily Behavioral Patterns in Context Sensing Life-Log
Data | 10 pages, 6 figures, 2 tables | null | null | null | cs.HC cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the advent of wearable devices and the proliferation of smartphones,
there still is no ideal platform that can continuously sense and precisely
collect all available contextual information. Ideally, mobile sensing data
collection approaches should deal with uncertainty and data loss originating
from software and hardware restrictions. We have conducted life logging data
collection experiments from 35 users and created a rich dataset (9.26 million
records) to represent the real-world deployment issues of mobile sensing
systems. We create a novel set of algorithms to identify human behavioral
motifs while considering the uncertainty of collected data objects. Our work
benefits from combinations of sensors available on a device and identifies
behavioral patterns with a temporal granularity similar to human time
perception. Employing a combination of sensors rather than focusing on only one
sensor can handle uncertainty by neglecting sensor data that is not available
and focusing instead on available data. Moreover, by experimenting on two real,
large datasets, we demonstrate that using a sliding window significantly
improves the scalability of our algorithms, which can be used by applications
for small devices, such as smartphones and wearables.
| [
{
"version": "v1",
"created": "Tue, 18 Nov 2014 03:33:10 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Feb 2015 04:29:09 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Mar 2015 14:57:48 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Rawassizadeh",
"Reza",
""
],
[
"Momeni",
"Elaheh",
""
],
[
"Shetty",
"Prajna",
""
]
] | TITLE: Scalable Mining of Daily Behavioral Patterns in Context Sensing Life-Log
Data
ABSTRACT: Despite the advent of wearable devices and the proliferation of smartphones,
there still is no ideal platform that can continuously sense and precisely
collect all available contextual information. Ideally, mobile sensing data
collection approaches should deal with uncertainty and data loss originating
from software and hardware restrictions. We have conducted life logging data
collection experiments from 35 users and created a rich dataset (9.26 million
records) to represent the real-world deployment issues of mobile sensing
systems. We create a novel set of algorithms to identify human behavioral
motifs while considering the uncertainty of collected data objects. Our work
benefits from combinations of sensors available on a device and identifies
behavioral patterns with a temporal granularity similar to human time
perception. Employing a combination of sensors rather than focusing on only one
sensor can handle uncertainty by neglecting sensor data that is not available
and focusing instead on available data. Moreover, by experimenting on two real,
large datasets, we demonstrate that using a sliding window significantly
improves the scalability of our algorithms, which can be used by applications
for small devices, such as smartphones and wearables.
| new_dataset | 0.961822 |
1503.04250 | Julia Bernd | Julia Bernd, Damian Borth, Benjamin Elizalde, Gerald Friedland,
Heather Gallagher, Luke Gottlieb, Adam Janin, Sara Karabashlieva, Jocelyn
Takahashi, Jennifer Won | The YLI-MED Corpus: Characteristics, Procedures, and Plans | 47 pages; 3 figures; 25 tables. Also published as ICSI Technical
Report TR-15-001 | null | null | TR-15-001 | cs.MM cs.CL | http://creativecommons.org/licenses/by/3.0/ | The YLI Multimedia Event Detection corpus is a public-domain index of videos
with annotations and computed features, specialized for research in multimedia
event detection (MED), i.e., automatically identifying what's happening in a
video by analyzing the audio and visual content. The videos indexed in the
YLI-MED corpus are a subset of the larger YLI feature corpus, which is being
developed by the International Computer Science Institute and Lawrence
Livermore National Laboratory based on the Yahoo Flickr Creative Commons 100
Million (YFCC100M) dataset. The videos in YLI-MED are categorized as depicting
one of ten target events, or no target event, and are annotated for additional
attributes like language spoken and whether the video has a musical score. The
annotations also include degree of annotator agreement and average annotator
confidence scores for the event categorization of each video. Version 1.0 of
YLI-MED includes 1823 "positive" videos that depict the target events and
48,138 "negative" videos, as well as 177 supplementary videos that are similar
to event videos but are not positive examples. Our goal in producing YLI-MED is
to be as open about our data and procedures as possible. This report describes
the procedures used to collect the corpus; gives detailed descriptive
statistics about the corpus makeup (and how video attributes affected
annotators' judgments); discusses possible biases in the corpus introduced by
our procedural choices and compares it with the most similar existing dataset,
TRECVID MED's HAVIC corpus; and gives an overview of our future plans for
expanding the annotation effort.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 23:36:42 GMT"
}
] | 2015-03-17T00:00:00 | [
[
"Bernd",
"Julia",
""
],
[
"Borth",
"Damian",
""
],
[
"Elizalde",
"Benjamin",
""
],
[
"Friedland",
"Gerald",
""
],
[
"Gallagher",
"Heather",
""
],
[
"Gottlieb",
"Luke",
""
],
[
"Janin",
"Adam",
""
],
[
"Karabashlieva",
"Sara",
""
],
[
"Takahashi",
"Jocelyn",
""
],
[
"Won",
"Jennifer",
""
]
] | TITLE: The YLI-MED Corpus: Characteristics, Procedures, and Plans
ABSTRACT: The YLI Multimedia Event Detection corpus is a public-domain index of videos
with annotations and computed features, specialized for research in multimedia
event detection (MED), i.e., automatically identifying what's happening in a
video by analyzing the audio and visual content. The videos indexed in the
YLI-MED corpus are a subset of the larger YLI feature corpus, which is being
developed by the International Computer Science Institute and Lawrence
Livermore National Laboratory based on the Yahoo Flickr Creative Commons 100
Million (YFCC100M) dataset. The videos in YLI-MED are categorized as depicting
one of ten target events, or no target event, and are annotated for additional
attributes like language spoken and whether the video has a musical score. The
annotations also include degree of annotator agreement and average annotator
confidence scores for the event categorization of each video. Version 1.0 of
YLI-MED includes 1823 "positive" videos that depict the target events and
48,138 "negative" videos, as well as 177 supplementary videos that are similar
to event videos but are not positive examples. Our goal in producing YLI-MED is
to be as open about our data and procedures as possible. This report describes
the procedures used to collect the corpus; gives detailed descriptive
statistics about the corpus makeup (and how video attributes affected
annotators' judgments); discusses possible biases in the corpus introduced by
our procedural choices and compares it with the most similar existing dataset,
TRECVID MED's HAVIC corpus; and gives an overview of our future plans for
expanding the annotation effort.
| no_new_dataset | 0.885186 |
1410.0260 | William March | William B. March, Bo Xiao, George Biros | ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High
Dimensions | 22 pages, 6 figures | null | null | null | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a fast algorithm for kernel summation problems in high-dimensions.
These problems appear in computational physics, numerical approximation,
non-parametric statistics, and machine learning. In our context, the sums
depend on a kernel function that is a pair potential defined on a dataset of
points in a high-dimensional Euclidean space. A direct evaluation of the sum
scales quadratically with the number of points. Fast kernel summation methods
can reduce this cost to linear complexity, but the constants involved do not
scale well with the dimensionality of the dataset.
The main algorithmic components of fast kernel summation algorithms are the
separation of the kernel sum between near and far field (which is the basis for
pruning) and the efficient and accurate approximation of the far field.
We introduce novel methods for pruning and approximating the far field. Our
far field approximation requires only kernel evaluations and does not use
analytic expansions. Pruning is not done using bounding boxes but rather
combinatorially using a sparsified nearest-neighbor graph of the input. The
time complexity of our algorithm depends linearly on the ambient dimension. The
error in the algorithm depends on the low-rank approximability of the far
field, which in turn depends on the kernel function and on the intrinsic
dimensionality of the distribution of the points. The error of the far field
approximation does not depend on the ambient dimension.
We present the new algorithm along with experimental results that demonstrate
its performance. We report results for Gaussian kernel sums for 100 million
points in 64 dimensions, for one million points in 1000 dimensions, and for
problems in which the Gaussian kernel has a variable bandwidth. To the best of
our knowledge, all of these experiments are impossible or prohibitively
expensive with existing fast kernel summation methods.
| [
{
"version": "v1",
"created": "Wed, 1 Oct 2014 15:41:11 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jan 2015 22:38:05 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Mar 2015 17:31:21 GMT"
}
] | 2015-03-16T00:00:00 | [
[
"March",
"William B.",
""
],
[
"Xiao",
"Bo",
""
],
[
"Biros",
"George",
""
]
] | TITLE: ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High
Dimensions
ABSTRACT: We present a fast algorithm for kernel summation problems in high-dimensions.
These problems appear in computational physics, numerical approximation,
non-parametric statistics, and machine learning. In our context, the sums
depend on a kernel function that is a pair potential defined on a dataset of
points in a high-dimensional Euclidean space. A direct evaluation of the sum
scales quadratically with the number of points. Fast kernel summation methods
can reduce this cost to linear complexity, but the constants involved do not
scale well with the dimensionality of the dataset.
The main algorithmic components of fast kernel summation algorithms are the
separation of the kernel sum between near and far field (which is the basis for
pruning) and the efficient and accurate approximation of the far field.
We introduce novel methods for pruning and approximating the far field. Our
far field approximation requires only kernel evaluations and does not use
analytic expansions. Pruning is not done using bounding boxes but rather
combinatorially using a sparsified nearest-neighbor graph of the input. The
time complexity of our algorithm depends linearly on the ambient dimension. The
error in the algorithm depends on the low-rank approximability of the far
field, which in turn depends on the kernel function and on the intrinsic
dimensionality of the distribution of the points. The error of the far field
approximation does not depend on the ambient dimension.
We present the new algorithm along with experimental results that demonstrate
its performance. We report results for Gaussian kernel sums for 100 million
points in 64 dimensions, for one million points in 1000 dimensions, and for
problems in which the Gaussian kernel has a variable bandwidth. To the best of
our knowledge, all of these experiments are impossible or prohibitively
expensive with existing fast kernel summation methods.
| no_new_dataset | 0.949012 |
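
The ASKIT abstract above is motivated by the quadratic cost of direct kernel summation. As a point of reference only, the following minimal NumPy sketch evaluates a Gaussian kernel sum directly in O(N^2) time and memory; it is not the authors' algorithm, and the bandwidth, sizes, and random data are illustrative assumptions.

    import numpy as np

    def direct_gaussian_kernel_sum(points, weights, bandwidth=1.0):
        """Direct O(N^2) evaluation of u_i = sum_j w_j * exp(-||x_i - x_j||^2 / (2 h^2))."""
        # Pairwise squared distances via ||x-y||^2 = ||x||^2 + ||y||^2 - 2 x.y
        sq_norms = np.sum(points ** 2, axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * points @ points.T
        np.maximum(sq_dists, 0.0, out=sq_dists)  # guard against tiny negative round-off
        kernel = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
        return kernel @ weights

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(size=(2000, 64))   # 2000 points in 64 dimensions (toy scale)
        w = rng.normal(size=2000)
        sums = direct_gaussian_kernel_sum(x, w, bandwidth=2.0)
        print(sums.shape)                 # (2000,)
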
1503.02761 | Ava Bargi | Ava Bargi, Richard Yi Da Xu, Massimo Piccardi | An Adaptive Online HDP-HMM for Segmentation and Classification of
Sequential Data | 23 pages, 9 figures and 4 tables | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the recent years, the desire and need to understand sequential data has
been increasing, with particular interest in sequential contexts such as
patient monitoring, understanding daily activities, video surveillance, stock
market and the like. Along with the constant flow of data, it is critical to
classify and segment the observations on-the-fly, without being limited to a
rigid number of classes. In addition, the model needs to be capable of updating
its parameters to comply with possible evolutions. This interesting problem,
however, is not adequately addressed in the literature since many studies focus
on offline classification over a pre-defined class set. In this paper, we
propose a principled solution to this gap by introducing an adaptive online
system based on Markov switching models with hierarchical Dirichlet process
priors. This infinite adaptive online approach is capable of segmenting and
classifying the sequential data over an unlimited number of classes, while meeting
the memory and delay constraints of streaming contexts. The model is further
enhanced by introducing a learning rate, responsible for balancing the extent
to which the model sustains its previous learning (parameters) or adapts to the
new streaming observations. Experimental results on several variants of
stationary and evolving synthetic data and two video datasets, TUM Assistive
Kitchen and collated Weizmann, show remarkable performance in segmentation and
classification, particularly for evolutionary sequences with changing
distributions and/or containing new, unseen classes.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 03:27:34 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Mar 2015 01:36:18 GMT"
}
] | 2015-03-16T00:00:00 | [
[
"Bargi",
"Ava",
""
],
[
"Da Xu",
"Richard Yi",
""
],
[
"Piccardi",
"Massimo",
""
]
] | TITLE: An Adaptive Online HDP-HMM for Segmentation and Classification of
Sequential Data
ABSTRACT: In the recent years, the desire and need to understand sequential data has
been increasing, with particular interest in sequential contexts such as
patient monitoring, understanding daily activities, video surveillance, stock
market and the like. Along with the constant flow of data, it is critical to
classify and segment the observations on-the-fly, without being limited to a
rigid number of classes. In addition, the model needs to be capable of updating
its parameters to comply with possible evolutions. This interesting problem,
however, is not adequately addressed in the literature since many studies focus
on offline classification over a pre-defined class set. In this paper, we
propose a principled solution to this gap by introducing an adaptive online
system based on Markov switching models with hierarchical Dirichlet process
priors. This infinite adaptive online approach is capable of segmenting and
classifying the sequential data over an unlimited number of classes, while meeting
the memory and delay constraints of streaming contexts. The model is further
enhanced by introducing a learning rate, responsible for balancing the extent
to which the model sustains its previous learning (parameters) or adapts to the
new streaming observations. Experimental results on several variants of
stationary and evolving synthetic data and two video datasets, TUM Assistive
Kitchen and collated Weizmann, show remarkable performance in segmentation and
classification, particularly for evolutionary sequences with changing
distributions and/or containing new, unseen classes.
| no_new_dataset | 0.9463 |
1503.04055 | Bas Jansen | Bas Jansen | Enron versus EUSES: A Comparison of Two Spreadsheet Corpora | In Proceedings of the 2nd Workshop on Software Engineering Methods in
Spreadsheets | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spreadsheets are widely used within companies and often form the basis for
business decisions. Numerous cases are known where incorrect information in
spreadsheets has led to incorrect decisions. Such cases underline the
relevance of research on the professional use of spreadsheets.
Recently, a new dataset became available for research, containing over 15,000
business spreadsheets that were extracted from the Enron E-mail Archive. With
this dataset, we 1) aim to obtain a thorough understanding of the
characteristics of spreadsheets used within companies, and 2) compare the
characteristics of the Enron spreadsheets with the EUSES corpus which is the
existing state of the art set of spreadsheets that is frequently used in
spreadsheet studies.
Our analysis shows that 1) the majority of spreadsheets are not large in
terms of worksheets and formulas, do not have a high degree of coupling, and
their formulas are relatively simple; 2) the spreadsheets from the EUSES corpus
are, with respect to the measured characteristics, quite similar to the Enron
spreadsheets.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 13:27:32 GMT"
}
] | 2015-03-16T00:00:00 | [
[
"Jansen",
"Bas",
""
]
] | TITLE: Enron versus EUSES: A Comparison of Two Spreadsheet Corpora
ABSTRACT: Spreadsheets are widely used within companies and often form the basis for
business decisions. Numerous cases are known where incorrect information in
spreadsheets has led to incorrect decisions. Such cases underline the
relevance of research on the professional use of spreadsheets.
Recently, a new dataset became available for research, containing over 15,000
business spreadsheets that were extracted from the Enron E-mail Archive. With
this dataset, we 1) aim to obtain a thorough understanding of the
characteristics of spreadsheets used within companies, and 2) compare the
characteristics of the Enron spreadsheets with the EUSES corpus which is the
existing state of the art set of spreadsheets that is frequently used in
spreadsheet studies.
Our analysis shows that 1) the majority of spreadsheets are not large in
terms of worksheets and formulas, do not have a high degree of coupling, and
their formulas are relatively simple; 2) the spreadsheets from the EUSES corpus
are, with respect to the measured characteristics, quite similar to the Enron
spreadsheets.
| new_dataset | 0.959535 |
1503.04065 | Praveen Kulkarni | Praveen Kulkarni, Joaquin Zepeda, Frederic Jurie, Patrick Perez and
Louis Chevallier | Hybrid multi-layer Deep CNN/Aggregator feature for image classification | Accepted in ICASSP 2015 conference, 5 pages including reference, 4
figures and 2 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Convolutional Neural Networks (DCNN) have established a remarkable
performance benchmark in the field of image classification, displacing
classical approaches based on hand-tailored aggregations of local descriptors.
Yet DCNNs impose high computational burdens both at training and at testing
time, and training them requires collecting and annotating large amounts of
training data. Supervised adaptation methods have been proposed in the
literature that partially re-learn a transferred DCNN structure from a new
target dataset. Yet these require expensive bounding-box annotations and are
still computationally expensive to learn. In this paper, we address these
shortcomings of DCNN adaptation schemes by proposing a hybrid approach that
combines conventional, unsupervised aggregators such as Bag-of-Words (BoW),
with the DCNN pipeline by treating the output of intermediate layers as densely
extracted local descriptors.
We test a variant of our approach that uses only intermediate DCNN layers on
the standard PASCAL VOC 2007 dataset and show performance significantly higher
than the standard BoW model and comparable to Fisher vector aggregation but
with a feature that is 150 times smaller. A second variant of our approach that
includes the fully connected DCNN layers significantly outperforms Fisher
vector schemes and performs comparably to DCNN approaches adapted to Pascal VOC
2007, yet at only a small fraction of the training and testing cost.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 13:49:26 GMT"
}
] | 2015-03-16T00:00:00 | [
[
"Kulkarni",
"Praveen",
""
],
[
"Zepeda",
"Joaquin",
""
],
[
"Jurie",
"Frederic",
""
],
[
"Perez",
"Patrick",
""
],
[
"Chevallier",
"Louis",
""
]
] | TITLE: Hybrid multi-layer Deep CNN/Aggregator feature for image classification
ABSTRACT: Deep Convolutional Neural Networks (DCNN) have established a remarkable
performance benchmark in the field of image classification, displacing
classical approaches based on hand-tailored aggregations of local descriptors.
Yet DCNNs impose high computational burdens both at training and at testing
time, and training them requires collecting and annotating large amounts of
training data. Supervised adaptation methods have been proposed in the
literature that partially re-learn a transferred DCNN structure from a new
target dataset. Yet these require expensive bounding-box annotations and are
still computationally expensive to learn. In this paper, we address these
shortcomings of DCNN adaptation schemes by proposing a hybrid approach that
combines conventional, unsupervised aggregators such as Bag-of-Words (BoW),
with the DCNN pipeline by treating the output of intermediate layers as densely
extracted local descriptors.
We test a variant of our approach that uses only intermediate DCNN layers on
the standard PASCAL VOC 2007 dataset and show performance significantly higher
than the standard BoW model and comparable to Fisher vector aggregation but
with a feature that is 150 times smaller. A second variant of our approach that
includes the fully connected DCNN layers significantly outperforms Fisher
vector schemes and performs comparably to DCNN approaches adapted to Pascal VOC
2007, yet at only a small fraction of the training and testing cost.
| no_new_dataset | 0.947137 |
1503.04115 | Nam Le | Nam Do-Hoang Le | Sparse Code Formation with Linear Inhibition | Technical report, 4 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse code formation in the primary visual cortex (V1) has been inspiration
for many state-of-the-art visual recognition systems. To stimulate this
behavior, networks are trained under mathematical constraints of sparsity or
selectivity. In this paper, the authors exploit another approach, which uses
lateral interconnections in feature learning networks. However, instead of
adding direct lateral interconnections among neurons, we introduce an
inhibitory layer placed right after the normal encoding layer. This idea
overcomes the challenge of computational cost and complexity in lateral
networks while preserving the crucial objective of sparse code formation. To
demonstrate this idea, we use a sparse autoencoder as the normal encoding
layer and apply the inhibitory layer. Early experiments in visual recognition
show relative improvements over the traditional approach on the CIFAR-10
dataset. Moreover, the simple installation and training process using the
Hebbian rule allows the inhibitory layer to be integrated into existing
networks, which enables further analysis in the future.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 15:45:11 GMT"
}
] | 2015-03-16T00:00:00 | [
[
"Le",
"Nam Do-Hoang",
""
]
] | TITLE: Sparse Code Formation with Linear Inhibition
ABSTRACT: Sparse code formation in the primary visual cortex (V1) has been inspiration
for many state-of-the-art visual recognition systems. To stimulate this
behavior, networks are trained under mathematical constraints of sparsity or
selectivity. In this paper, the authors exploit another approach, which uses
lateral interconnections in feature learning networks. However, instead of
adding direct lateral interconnections among neurons, we introduce an
inhibitory layer placed right after the normal encoding layer. This idea
overcomes the challenge of computational cost and complexity in lateral
networks while preserving the crucial objective of sparse code formation. To
demonstrate this idea, we use a sparse autoencoder as the normal encoding
layer and apply the inhibitory layer. Early experiments in visual recognition
show relative improvements over the traditional approach on the CIFAR-10
dataset. Moreover, the simple installation and training process using the
Hebbian rule allows the inhibitory layer to be integrated into existing
networks, which enables further analysis in the future.
| no_new_dataset | 0.951997 |
0812.2636 | Tobias Friedrich | Karl Bringmann, Tobias Friedrich | Approximating the least hypervolume contributor: NP-hard in general, but
fast in practice | 22 pages, to appear in Theoretical Computer Science | null | 10.1016/j.tcs.2010.09.026 | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hypervolume indicator is an increasingly popular set measure to compare
the quality of two Pareto sets. The basic ingredient of most hypervolume
indicator based optimization algorithms is the calculation of the hypervolume
contribution of single solutions regarding a Pareto set. We show that exact
calculation of the hypervolume contribution is #P-hard while its approximation
is NP-hard. The same holds for the calculation of the minimal contribution. We
also prove that it is NP-hard to decide whether a solution has the least
hypervolume contribution. Even deciding whether the contribution of a solution
is at most $(1+\epsilon)$ times the minimal contribution is NP-hard. This implies
that it is neither possible to efficiently find the least contributing solution
(unless $P = NP$) nor to approximate it (unless $NP = BPP$).
Nevertheless, in the second part of the paper we present a fast approximation
algorithm for this problem. We prove that for arbitrarily given $\eps,\delta>0$
it calculates a solution with contribution at most $(1+\eps)$ times the minimal
contribution with probability at least $(1-\delta)$. Though it cannot run in
polynomial time for all instances, it performs extremely fast on various
benchmark datasets. The algorithm solves very large problem instances which are
intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions)
within a few seconds.
| [
{
"version": "v1",
"created": "Sun, 14 Dec 2008 13:57:10 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Sep 2010 20:43:10 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Bringmann",
"Karl",
""
],
[
"Friedrich",
"Tobias",
""
]
] | TITLE: Approximating the least hypervolume contributor: NP-hard in general, but
fast in practice
ABSTRACT: The hypervolume indicator is an increasingly popular set measure to compare
the quality of two Pareto sets. The basic ingredient of most hypervolume
indicator based optimization algorithms is the calculation of the hypervolume
contribution of single solutions regarding a Pareto set. We show that exact
calculation of the hypervolume contribution is #P-hard while its approximation
is NP-hard. The same holds for the calculation of the minimal contribution. We
also prove that it is NP-hard to decide whether a solution has the least
hypervolume contribution. Even deciding whether the contribution of a solution
is at most $(1+\epsilon)$ times the minimal contribution is NP-hard. This implies
that it is neither possible to efficiently find the least contributing solution
(unless $P = NP$) nor to approximate it (unless $NP = BPP$).
Nevertheless, in the second part of the paper we present a fast approximation
algorithm for this problem. We prove that for arbitrarily given
$\epsilon,\delta>0$ it calculates a solution with contribution at most
$(1+\epsilon)$ times the minimal
contribution with probability at least $(1-\delta)$. Though it cannot run in
polynomial time for all instances, it performs extremely fast on various
benchmark datasets. The algorithm solves very large problem instances which are
intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions)
within a few seconds.
| no_new_dataset | 0.942295 |
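
The hypervolume abstract above proposes a fast approximation of the least hypervolume contribution. The sketch below is only a generic Monte Carlo estimator of a single point's hypervolume contribution (minimization with a reference point), not the paper's algorithm; the toy front, reference point, and sample count are illustrative assumptions.

    import numpy as np

    def mc_hypervolume_contribution(points, idx, ref, n_samples=100000, seed=0):
        """Monte Carlo estimate of the hypervolume contribution of points[idx]
        (minimization; every point is assumed componentwise <= ref)."""
        rng = np.random.default_rng(seed)
        q = points[idx]
        others = np.delete(points, idx, axis=0)
        box_volume = np.prod(ref - q)
        # Sample uniformly inside the box [q, ref] dominated by q's corner.
        samples = rng.uniform(low=q, high=ref, size=(n_samples, len(q)))
        # A sample is also dominated by p if p <= sample in every coordinate.
        inside = np.all(others[None, :, :] <= samples[:, None, :], axis=2)
        dominated = np.any(inside, axis=1)
        return box_volume * np.mean(~dominated)

    if __name__ == "__main__":
        front = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])  # toy 2-D Pareto front
        reference = np.array([5.0, 5.0])
        # Exact exclusive contribution of (2, 2) is 4.0; the estimate is close to it.
        print(mc_hypervolume_contribution(front, idx=1, ref=reference))
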
0905.3582 | Yoshiharu Maeno | Yoshiharu Maeno | Profiling of a network behind an infectious disease outbreak | null | Physica A vol.389, pp.4755-4768 (2010) | 10.1016/j.physa.2010.07.014 | null | cs.AI q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochasticity and spatial heterogeneity are of great interest recently in
studying the spread of an infectious disease. The presented method solves an
inverse problem to discover the effectively decisive topology of a
heterogeneous network and reveal the transmission parameters which govern the
stochastic spreads over the network from a dataset on an infectious disease
outbreak in the early growth phase. Populations in a combination of
epidemiological compartment models and a meta-population network model are
described by stochastic differential equations. Probability density functions
are derived from the equations and used for the maximal likelihood estimation
of the topology and parameters. The method is tested with computationally
synthesized datasets and the WHO dataset on the SARS outbreak.
| [
{
"version": "v1",
"created": "Thu, 21 May 2009 23:19:41 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jun 2009 10:35:14 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jan 2010 08:52:00 GMT"
},
{
"version": "v4",
"created": "Wed, 31 Mar 2010 03:17:35 GMT"
},
{
"version": "v5",
"created": "Mon, 14 Jun 2010 14:06:11 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Maeno",
"Yoshiharu",
""
]
] | TITLE: Profiling of a network behind an infectious disease outbreak
ABSTRACT: Stochasticity and spatial heterogeneity are of great interest recently in
studying the spread of an infectious disease. The presented method solves an
inverse problem to discover the effectively decisive topology of a
heterogeneous network and reveal the transmission parameters which govern the
stochastic spreads over the network from a dataset on an infectious disease
outbreak in the early growth phase. Populations in a combination of
epidemiological compartment models and a meta-population network model are
described by stochastic differential equations. Probability density functions
are derived from the equations and used for the maximal likelihood estimation
of the topology and parameters. The method is tested with computationally
synthesized datasets and the WHO dataset on the SARS outbreak.
| no_new_dataset | 0.949435 |
1001.0592 | Georgios Zervas | John W. Byers, Michael Mitzenmacher, Georgios Zervas | Information Asymmetries in Pay-Per-Bid Auctions: How Swoopo Makes Bank | 48 pages, 21 figures | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Innovative auction methods can be exploited to increase profits, with
Shubik's famous "dollar auction" perhaps being the most widely known example.
Recently, some mainstream e-commerce web sites have apparently achieved the
same end on a much broader scale, by using "pay-per-bid" auctions to sell
items, from video games to bars of gold. In these auctions, bidders incur a
cost for placing each bid in addition to (or sometimes in lieu of) the winner's
final purchase cost. Thus even when a winner's purchase cost is a small
fraction of the item's intrinsic value, the auctioneer can still profit
handsomely from the bid fees. Our work provides novel analyses for these
auctions, based on both modeling and datasets derived from auctions at
Swoopo.com, the leading pay-per-bid auction site. While previous modeling work
predicts profit-free equilibria, we analyze the impact of information asymmetry
broadly, as well as Swoopo features such as bidpacks and the Swoop It Now
option specifically, to quantify the effects of imperfect information in these
auctions. We find that even small asymmetries across players (cheaper bids,
better estimates of other players' intent, different valuations of items,
committed players willing to play "chicken") can increase the auction duration
well beyond that predicted by previous work and thus skew the auctioneer's
profit disproportionately. Finally, we discuss our findings in the context of a
dataset of thousands of live auctions we observed on Swoopo, which enables us
also to examine behavioral factors, such as the power of aggressive bidding.
Ultimately, our findings show that even with fully rational players, if players
overlook or are unaware of any of these factors, the result is outsized profits
for pay-per-bid auctioneers.
| [
{
"version": "v1",
"created": "Tue, 5 Jan 2010 16:31:06 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jan 2010 19:58:07 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Mar 2010 21:51:26 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Byers",
"John W.",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Zervas",
"Georgios",
""
]
] | TITLE: Information Asymmetries in Pay-Per-Bid Auctions: How Swoopo Makes Bank
ABSTRACT: Innovative auction methods can be exploited to increase profits, with
Shubik's famous "dollar auction" perhaps being the most widely known example.
Recently, some mainstream e-commerce web sites have apparently achieved the
same end on a much broader scale, by using "pay-per-bid" auctions to sell
items, from video games to bars of gold. In these auctions, bidders incur a
cost for placing each bid in addition to (or sometimes in lieu of) the winner's
final purchase cost. Thus even when a winner's purchase cost is a small
fraction of the item's intrinsic value, the auctioneer can still profit
handsomely from the bid fees. Our work provides novel analyses for these
auctions, based on both modeling and datasets derived from auctions at
Swoopo.com, the leading pay-per-bid auction site. While previous modeling work
predicts profit-free equilibria, we analyze the impact of information asymmetry
broadly, as well as Swoopo features such as bidpacks and the Swoop It Now
option specifically, to quantify the effects of imperfect information in these
auctions. We find that even small asymmetries across players (cheaper bids,
better estimates of other players' intent, different valuations of items,
committed players willing to play "chicken") can increase the auction duration
well beyond that predicted by previous work and thus skew the auctioneer's
profit disproportionately. Finally, we discuss our findings in the context of a
dataset of thousands of live auctions we observed on Swoopo, which enables us
also to examine behavioral factors, such as the power of aggressive bidding.
Ultimately, our findings show that even with fully rational players, if players
overlook or are unaware of any of these factors, the result is outsized profits
for pay-per-bid auctioneers.
| no_new_dataset | 0.842669 |
1003.5956 | Lihong Li | Lihong Li and Wei Chu and John Langford and Xuanhui Wang | Unbiased Offline Evaluation of Contextual-bandit-based News Article
Recommendation Algorithms | 10 pages, 7 figures, revised from the published version at the WSDM
2011 conference | null | 10.1145/1935826.1935878 | null | cs.LG cs.AI cs.RO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contextual bandit algorithms have become popular for online recommendation
systems such as Digg, Yahoo! Buzz, and news recommendation in general.
\emph{Offline} evaluation of the effectiveness of new algorithms in these
applications is critical for protecting online user experiences but very
challenging due to their "partial-label" nature. Common practice is to create a
simulator which simulates the online environment for the problem at hand and
then run an algorithm against this simulator. However, creating the simulator
itself is often difficult and modeling bias is usually unavoidably introduced.
In this paper, we introduce a \emph{replay} methodology for contextual bandit
algorithm evaluation. Different from simulator-based approaches, our method is
completely data-driven and very easy to adapt to different applications. More
importantly, our method can provide provably unbiased evaluations. Our
empirical results on a large-scale news article recommendation dataset
collected from Yahoo! Front Page conform well with our theoretical results.
Furthermore, comparisons between our offline replay and online bucket
evaluation of several contextual bandit algorithms show accuracy and
effectiveness of our offline evaluation method.
| [
{
"version": "v1",
"created": "Wed, 31 Mar 2010 01:20:07 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Mar 2012 23:33:07 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Li",
"Lihong",
""
],
[
"Chu",
"Wei",
""
],
[
"Langford",
"John",
""
],
[
"Wang",
"Xuanhui",
""
]
] | TITLE: Unbiased Offline Evaluation of Contextual-bandit-based News Article
Recommendation Algorithms
ABSTRACT: Contextual bandit algorithms have become popular for online recommendation
systems such as Digg, Yahoo! Buzz, and news recommendation in general.
\emph{Offline} evaluation of the effectiveness of new algorithms in these
applications is critical for protecting online user experiences but very
challenging due to their "partial-label" nature. Common practice is to create a
simulator which simulates the online environment for the problem at hand and
then run an algorithm against this simulator. However, creating the simulator
itself is often difficult and modeling bias is usually unavoidably introduced.
In this paper, we introduce a \emph{replay} methodology for contextual bandit
algorithm evaluation. Different from simulator-based approaches, our method is
completely data-driven and very easy to adapt to different applications. More
importantly, our method can provide provably unbiased evaluations. Our
empirical results on a large-scale news article recommendation dataset
collected from Yahoo! Front Page conform well with our theoretical results.
Furthermore, comparisons between our offline replay and online bucket
evaluation of several contextual bandit algorithms show accuracy and
effectiveness of our offline evaluation method.
| no_new_dataset | 0.947527 |
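
The abstract above describes a replay methodology for offline evaluation of contextual bandit policies. The following simplified sketch shows the basic replay idea for logs whose actions were chosen uniformly at random: keep only the events where the evaluated policy agrees with the logged action and average their rewards. The toy log, policies, and reward rule are illustrative assumptions, not the paper's experimental setup.

    import random

    def replay_evaluate(policy, logged_events):
        """Offline replay evaluation of a bandit policy on uniformly-logged events.
        Each event is (context, displayed_arm, reward); events where the policy's
        choice differs from the logged arm are simply skipped."""
        total, matched = 0.0, 0
        for context, displayed_arm, reward in logged_events:
            if policy(context) == displayed_arm:
                total += reward
                matched += 1
        return total / matched if matched else float("nan")

    if __name__ == "__main__":
        random.seed(0)
        arms = [0, 1, 2]
        # Hypothetical log: arm shown uniformly at random, reward depends on context.
        log = []
        for _ in range(10000):
            context = random.random()
            arm = random.choice(arms)
            reward = 1.0 if arm == (0 if context < 0.5 else 2) else 0.0
            log.append((context, arm, reward))
        naive = lambda c: 1                      # always shows arm 1
        oracle = lambda c: 0 if c < 0.5 else 2   # knows the reward rule
        print(replay_evaluate(naive, log), replay_evaluate(oracle, log))
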
1111.4930 | Arka Ghosh | Arka Ghosh | Comparative study of Financial Time Series Prediction by Artificial
Neural Network with Gradient Descent Learning | null | International Journal Of Scientific & Engineering Research
ISSN-2229-5518 Volume 3 Issue 1 January2012 | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Financial forecasting is an example of a signal processing problem which is
challenging due to small sample sizes, high noise, non-stationarity, and
non-linearity, but fast forecasting of stock market prices is very important
for strategic business planning. The present study aims to develop a
comparative predictive model with a Feedforward Multilayer Artificial Neural
Network and a Recurrent Time Delay Neural Network for financial time series
prediction. This study is developed with the help of a historical stock price
dataset made available by Google Finance. To develop this prediction model,
the Backpropagation method with Gradient Descent learning has been
implemented. Finally, the neural net trained with said algorithm is found to
be a skillful predictor for non-stationary, noisy financial time series.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2011 16:58:58 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2012 08:09:57 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Ghosh",
"Arka",
""
]
] | TITLE: Comparative study of Financial Time Series Prediction by Artificial
Neural Network with Gradient Descent Learning
ABSTRACT: Financial forecasting is an example of a signal processing problem which is
challenging due to small sample sizes, high noise, non-stationarity, and
non-linearity, but fast forecasting of stock market prices is very important
for strategic business planning. The present study aims to develop a
comparative predictive model with a Feedforward Multilayer Artificial Neural
Network and a Recurrent Time Delay Neural Network for financial time series
prediction. This study is developed with the help of a historical stock price
dataset made available by Google Finance. To develop this prediction model,
the Backpropagation method with Gradient Descent learning has been
implemented. Finally, the neural net trained with said algorithm is found to
be a skillful predictor for non-stationary, noisy financial time series.
| no_new_dataset | 0.944944 |
1207.4570 | Farzad Parseh | Farzad Parseh, Davood Karimzadgan Moghaddam, Mir Mohsen Pedram,
Rohollah Esmaeli Manesh, Mohammad (behdad) Jamshidi | Presentation an Approach for Optimization of Semantic Web Language Based
on the Document Structure | 7 pages, 8 figures, 2 Tables | IJCSI International Journal of Computer Science Issues, Vol. 9,
Issue 3, No 1, May 2012 ISSN (Online): 1694-0814 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pattern tree are based on integrated rules which are equal to a combination
of some points connected to each other in a hierarchical structure, called
Enquiry Hierarchical (EH). The main operation in pattern enquiry seeking is to
locate the steps that match the given EH in the dataset. A number of
algorithms have been offered for EH matching, but the majority of these
algorithms seek all of the enquiry steps to access all EHs in the dataset. A
few algorithms seek only the steps that satisfy the end points of the EH. All
of the above algorithms try to locate a way to investigate direct testing of
steps and to locate the answer of the enquiry directly via these points. In
this paper, we describe a novel algorithm that locates the answer of an
enquiry without blindly accessing real points of the dataset. In this
algorithm, the enquiry is first executed on the enquiry schema, and this
leads to a schema. Using this plan, it will be clear how to seek the end
steps and how to obtain the enquiry dataset before seeking the dataset steps.
Therefore, none of the dataset steps will be sought blindly.
| [
{
"version": "v1",
"created": "Thu, 19 Jul 2012 07:14:15 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Jul 2012 05:05:20 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Parseh",
"Farzad",
"",
"behdad"
],
[
"Moghaddam",
"Davood Karimzadgan",
"",
"behdad"
],
[
"Pedram",
"Mir Mohsen",
"",
"behdad"
],
[
"Manesh",
"Rohollah Esmaeli",
"",
"behdad"
],
[
"Mohammad",
"",
"",
"behdad"
],
[
"Jamshidi",
"",
""
]
] | TITLE: Presentation an Approach for Optimization of Semantic Web Language Based
on the Document Structure
ABSTRACT: Pattern tree are based on integrated rules which are equal to a combination
of some points connected to each other in a hierarchical structure, called
Enquiry Hierarchical (EH). The main operation in pattern enquiry seeking is to
locate the steps that match the given EH in the dataset. A number of
algorithms have been offered for EH matching, but the majority of these
algorithms seek all of the enquiry steps to access all EHs in the dataset. A
few algorithms seek only the steps that satisfy the end points of the EH. All
of the above algorithms try to locate a way to investigate direct testing of
steps and to locate the answer of the enquiry directly via these points. In
this paper, we describe a novel algorithm that locates the answer of an
enquiry without blindly accessing real points of the dataset. In this
algorithm, the enquiry is first executed on the enquiry schema, and this
leads to a schema. Using this plan, it will be clear how to seek the end
steps and how to obtain the enquiry dataset before seeking the dataset steps.
Therefore, none of the dataset steps will be sought blindly.
| no_new_dataset | 0.941439 |
1208.4380 | Ismael Rafols | Luciano Kay, Nils Newman, Jan Youtie, Alan L. Porter, Ismael Rafols | Patent Overlay Mapping: Visualizing Technological Distance | Accepted in October 2013 in Journal of the American Society for
Information Science and Technology | null | null | null | physics.soc-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new global patent map that represents all technological
categories, and a method to locate patent data of individual organizations and
technological fields on the global map. This overlay map technique may support
competitive intelligence and policy decision-making. The global patent map is
based on similarities in citing-to-cited relationships between categories of
theInternational Patent Classification (IPC) of European Patent Office (EPO)
patents from 2000 to 2006. This patent dataset, extracted from the PATSTAT
database, includes 760,000 patent records in 466 IPC-based categories. We
compare the global patent maps derived from this categorization to related
efforts of other global patent maps. The paper overlays nanotechnology-related
patenting activities of two companies and two different nanotechnology
subfields on the global patent map. The exercise shows the potential of patent
overlay maps to visualize technological areas and potentially support
decision-making. Furthermore, this study shows that IPC categories that are
similar to one another based on citing-to-cited patterns (and thus are close in
the global patent map) are not necessarily in the same hierarchical IPC branch,
thus revealing new relationships between technologies that are classified as
pertaining to different (and sometimes distant) subject areas in the IPC
scheme.
| [
{
"version": "v1",
"created": "Tue, 21 Aug 2012 20:40:07 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Aug 2013 08:35:35 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Dec 2013 13:14:27 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Kay",
"Luciano",
""
],
[
"Newman",
"Nils",
""
],
[
"Youtie",
"Jan",
""
],
[
"Porter",
"Alan L.",
""
],
[
"Rafols",
"Ismael",
""
]
] | TITLE: Patent Overlay Mapping: Visualizing Technological Distance
ABSTRACT: This paper presents a new global patent map that represents all technological
categories, and a method to locate patent data of individual organizations and
technological fields on the global map. This overlay map technique may support
competitive intelligence and policy decision-making. The global patent map is
based on similarities in citing-to-cited relationships between categories of
the International Patent Classification (IPC) of European Patent Office (EPO)
patents from 2000 to 2006. This patent dataset, extracted from the PATSTAT
database, includes 760,000 patent records in 466 IPC-based categories. We
compare the global patent maps derived from this categorization to related
efforts of other global patent maps. The paper overlays nanotechnology-related
patenting activities of two companies and two different nanotechnology
subfields on the global patent map. The exercise shows the potential of patent
overlay maps to visualize technological areas and potentially support
decision-making. Furthermore, this study shows that IPC categories that are
similar to one another based on citing-to-cited patterns (and thus are close in
the global patent map) are not necessarily in the same hierarchical IPC branch,
thus revealing new relationships between technologies that are classified as
pertaining to different (and sometimes distant) subject areas in the IPC
scheme.
| no_new_dataset | 0.943815 |
1208.4809 | Husnabad Venkateswara Reddy | H. Venkateswara Reddy, Dr.S.Viswanadha Raju, B.Ramasubba Reddy | Comparing N-Node Set Importance Representative results with Node
Importance Representative results for Categorical Clustering: An exploratory
study | 16 pages, 4 figures, 3 equations | null | null | null | cs.DB | http://creativecommons.org/licenses/by/3.0/ | The proportionate increase in the size of the data with increase in space
implies that clustering a very large data set becomes difficult and is a
time-consuming process. Sampling is one important technique to scale down the
size of the dataset and to improve the efficiency of clustering. After
sampling, allocating unlabeled objects into proper clusters is impossible in
the categorical domain. To address the problem, Chen employed a method called
MAximal Representative Data Labeling to allocate each unlabeled data point to
the appropriate cluster based on the Node Importance Representative and
N-Node Importance Representative algorithms. This paper took off from Chen's
investigation and analyzed and compared the results of NIR and NNIR, leading
to the conclusion that the two processes contradict each other when it comes
to finding the resemblance between an unlabeled data point and a cluster. A
new and better way of solving the problem was arrived at, one that finds the
resemblance of an unlabeled data point with all clusters while also providing
maximal resemblance for allocating the data point to the required cluster.
| [
{
"version": "v1",
"created": "Thu, 23 Aug 2012 17:32:32 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Reddy",
"H. Venkateswara",
""
],
[
"Raju",
"Dr. S. Viswanadha",
""
],
[
"Reddy",
"B. Ramasubba",
""
]
] | TITLE: Comparing N-Node Set Importance Representative results with Node
Importance Representative results for Categorical Clustering: An exploratory
study
ABSTRACT: The proportionate increase in the size of the data with increase in space
implies that clustering a very large data set becomes difficult and is a
time-consuming process. Sampling is one important technique to scale down the
size of the dataset and to improve the efficiency of clustering. After
sampling, allocating unlabeled objects into proper clusters is impossible in
the categorical domain. To address the problem, Chen employed a method called
MAximal Representative Data Labeling to allocate each unlabeled data point to
the appropriate cluster based on the Node Importance Representative and
N-Node Importance Representative algorithms. This paper took off from Chen's
investigation and analyzed and compared the results of NIR and NNIR, leading
to the conclusion that the two processes contradict each other when it comes
to finding the resemblance between an unlabeled data point and a cluster. A
new and better way of solving the problem was arrived at, one that finds the
resemblance of an unlabeled data point with all clusters while also providing
maximal resemblance for allocating the data point to the required cluster.
| no_new_dataset | 0.955194 |
1209.1983 | Frank Meyer | Frank Meyer, Fran\c{c}oise Fessant, Fabrice Cl\'erot, Eric Gaussier | Toward a New Protocol to Evaluate Recommender Systems | 6 pages. arXiv admin note: text overlap with arXiv:1203.4487 | null | null | null | cs.IR cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an approach to analyze the performance and the
added value of automatic recommender systems in an industrial context. We show
that recommender systems are multifaceted and can be organized around 4
structuring functions: help users to decide, help users to compare, help users
to discover, help users to explore. A global offline protocol is then proposed
to evaluate recommender systems. This protocol is based on the definition of
appropriate evaluation measures for each aforementioned function. The
evaluation protocol is discussed from the perspective of the usefulness and
trust of the recommendation. A new measure called Average Measure of Impact is
introduced. This measure evaluates the impact of the personalized
recommendation. We experiment with two classical methods, K-Nearest Neighbors
(KNN) and Matrix Factorization (MF), using the well known dataset: Netflix. A
segmentation of both users and items is proposed to finely analyze where the
algorithms perform well or badly. We show that the performance is strongly
dependent on the segments and that there is no clear correlation between the
RMSE and the quality of the recommendation.
| [
{
"version": "v1",
"created": "Mon, 10 Sep 2012 13:27:23 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Meyer",
"Frank",
""
],
[
"Fessant",
"Françoise",
""
],
[
"Clérot",
"Fabrice",
""
],
[
"Gaussier",
"Eric",
""
]
] | TITLE: Toward a New Protocol to Evaluate Recommender Systems
ABSTRACT: In this paper, we propose an approach to analyze the performance and the
added value of automatic recommender systems in an industrial context. We show
that recommender systems are multifaceted and can be organized around 4
structuring functions: help users to decide, help users to compare, help users
to discover, help users to explore. A global offline protocol is then proposed
to evaluate recommender systems. This protocol is based on the definition of
appropriate evaluation measures for each aforementioned function. The
evaluation protocol is discussed from the perspective of the usefulness and
trust of the recommendation. A new measure called Average Measure of Impact is
introduced. This measure evaluates the impact of the personalized
recommendation. We experiment with two classical methods, K-Nearest Neighbors
(KNN) and Matrix Factorization (MF), using the well known dataset: Netflix. A
segmentation of both users and items is proposed to finely analyze where the
algorithms perform well or badly. We show that the performance is strongly
dependent on the segments and that there is no clear correlation between the
RMSE and the quality of the recommendation.
| no_new_dataset | 0.940953 |
1211.6496 | Naushad UzZaman Naushad UzZaman | Naushad UzZaman, Roi Blanco, Michael Matthews | TwitterPaul: Extracting and Aggregating Twitter Predictions | Check out the blog post with a summary and Prediction Retrieval
information here: http://bitly.com/TwitterPaul | null | null | null | cs.SI cs.AI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces TwitterPaul, a system designed to make use of Social
Media data to help predict game outcomes for the 2010 FIFA World Cup
tournament. To this end, we extracted over 538K mentions of football games
from a large sample of tweets that occurred during the World Cup, and we
classified them into different types with a precision of up to 88%. The
different mentions were
aggregated in order to make predictions about the outcomes of the actual games.
We attempt to learn which Twitter users are accurate predictors and explore
several techniques in order to exploit this information to make more accurate
predictions. We compare our results to strong baselines and against the betting
line (prediction market) and found that the quality of extractions is more
important than the quantity, suggesting that high precision methods working on
a medium-sized dataset are preferable over low precision methods that use a
larger amount of data. Finally, by aggregating some classes of predictions, the
system performance is close to the one of the betting line. Furthermore, we
believe that this domain independent framework can help to predict other
sports, elections, product release dates and other future events that people
talk about in social media.
| [
{
"version": "v1",
"created": "Wed, 28 Nov 2012 01:33:21 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Nov 2012 16:55:53 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"UzZaman",
"Naushad",
""
],
[
"Blanco",
"Roi",
""
],
[
"Matthews",
"Michael",
""
]
] | TITLE: TwitterPaul: Extracting and Aggregating Twitter Predictions
ABSTRACT: This paper introduces TwitterPaul, a system designed to make use of Social
Media data to help predict game outcomes for the 2010 FIFA World Cup
tournament. To this end, we extracted over 538K mentions of football games
from a large sample of tweets that occurred during the World Cup, and we
classified them into different types with a precision of up to 88%. The
different mentions were
aggregated in order to make predictions about the outcomes of the actual games.
We attempt to learn which Twitter users are accurate predictors and explore
several techniques in order to exploit this information to make more accurate
predictions. We compare our results to strong baselines and against the betting
line (prediction market) and found that the quality of extractions is more
important than the quantity, suggesting that high precision methods working on
a medium-sized dataset are preferable over low precision methods that use a
larger amount of data. Finally, by aggregating some classes of predictions, the
system performance is close to the one of the betting line. Furthermore, we
believe that this domain independent framework can help to predict other
sports, elections, product release dates and other future events that people
talk about in social media.
| no_new_dataset | 0.951323 |
1303.6163 | Juan Nunez-Iglesias | Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, Dmitri B.
Chklovskii | Machine learning of hierarchical clustering to segment 2D and 3D images | 15 pages, 8 figures | PLoS ONE, 2013, 8(8): e71715 | 10.1371/journal.pone.0071715 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/3.0/ | We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2013 15:20:09 GMT"
},
{
"version": "v2",
"created": "Mon, 13 May 2013 17:37:05 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Jul 2013 11:15:25 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Nunez-Iglesias",
"Juan",
""
],
[
"Kennedy",
"Ryan",
""
],
[
"Parag",
"Toufiq",
""
],
[
"Shi",
"Jianbo",
""
],
[
"Chklovskii",
"Dmitri B.",
""
]
] | TITLE: Machine learning of hierarchical clustering to segment 2D and 3D images
ABSTRACT: We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
| no_new_dataset | 0.953837 |
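
The segmentation abstract above advocates variation of information as an accuracy measure. As a small illustration (not the authors' code), the sketch below computes variation of information between two flat labelings with plain NumPy; the example labelings are made up.

    import numpy as np

    def variation_of_information(labels_a, labels_b):
        """Variation of information VI = H(A) + H(B) - 2 I(A;B), in nats,
        between two segmentations given as flat integer label arrays."""
        labels_a = np.asarray(labels_a).ravel()
        labels_b = np.asarray(labels_b).ravel()
        n = labels_a.size
        # Joint contingency counts of the two labelings.
        joint = {}
        for a, b in zip(labels_a, labels_b):
            joint[(a, b)] = joint.get((a, b), 0) + 1
        p_joint = np.array(list(joint.values()), dtype=float) / n
        p_a = np.unique(labels_a, return_counts=True)[1] / n
        p_b = np.unique(labels_b, return_counts=True)[1] / n
        entropy = lambda p: -np.sum(p * np.log(p))
        # VI = 2 H(A,B) - H(A) - H(B), equivalent to H(A) + H(B) - 2 I(A;B).
        return 2.0 * entropy(p_joint) - entropy(p_a) - entropy(p_b)

    if __name__ == "__main__":
        seg_1 = [0, 0, 1, 1, 2, 2]
        seg_2 = [0, 0, 1, 1, 1, 1]
        print(variation_of_information(seg_1, seg_2))  # > 0; identical labelings give 0.0
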
1503.03506 | Christian Wachinger | Christian Wachinger and Polina Golland | Diverse Landmark Sampling from Determinantal Point Processes for
Scalable Manifold Learning | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High computational costs of manifold learning prohibit its application for
large point sets. A common strategy to overcome this problem is to perform
dimensionality reduction on selected landmarks and to successively embed the
entire dataset with the Nystr\"om method. The two main challenges that arise
are: (i) the landmarks selected in non-Euclidean geometries must result in a
low reconstruction error, (ii) the graph constructed from sparsely sampled
landmarks must approximate the manifold well. We propose the sampling of
landmarks from determinantal distributions on non-Euclidean spaces. Since
current determinantal sampling algorithms have the same complexity as those for
manifold learning, we present an efficient approximation running in linear
time. Further, we recover the local geometry after the sparsification by
assigning each landmark a local covariance matrix, estimated from the original
point set. The resulting neighborhood selection based on the Bhattacharyya
distance improves the embedding of sparsely sampled manifolds. Our experiments
show a significant performance improvement compared to state-of-the-art
landmark selection techniques.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 21:09:28 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Wachinger",
"Christian",
""
],
[
"Golland",
"Polina",
""
]
] | TITLE: Diverse Landmark Sampling from Determinantal Point Processes for
Scalable Manifold Learning
ABSTRACT: High computational costs of manifold learning prohibit its application for
large point sets. A common strategy to overcome this problem is to perform
dimensionality reduction on selected landmarks and to successively embed the
entire dataset with the Nystr\"om method. The two main challenges that arise
are: (i) the landmarks selected in non-Euclidean geometries must result in a
low reconstruction error, (ii) the graph constructed from sparsely sampled
landmarks must approximate the manifold well. We propose the sampling of
landmarks from determinantal distributions on non-Euclidean spaces. Since
current determinantal sampling algorithms have the same complexity as those for
manifold learning, we present an efficient approximation running in linear
time. Further, we recover the local geometry after the sparsification by
assigning each landmark a local covariance matrix, estimated from the original
point set. The resulting neighborhood selection based on the Bhattacharyya
distance improves the embedding of sparsely sampled manifolds. Our experiments
show a significant performance improvement compared to state-of-the-art
landmark selection techniques.
| no_new_dataset | 0.951006 |
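
The manifold-learning abstract above embeds the full dataset from selected landmarks with the Nystrom method. The sketch below shows a generic Nystrom feature map built from uniformly sampled landmarks of an RBF kernel; it does not implement the determinantal landmark sampling or the Bhattacharyya-distance neighborhoods proposed by the authors, and all parameter values are illustrative.

    import numpy as np

    def nystrom_embedding(X, n_landmarks=50, gamma=0.5, n_components=2, seed=0):
        """Nystrom approximation of an RBF kernel from uniformly sampled landmarks,
        giving an approximate kernel feature map for every point in X."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=n_landmarks, replace=False)
        landmarks = X[idx]

        def rbf(A, B):
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-gamma * np.maximum(d2, 0.0))

        W = rbf(landmarks, landmarks)   # m x m kernel among landmarks
        C = rbf(X, landmarks)           # n x m kernel between all points and landmarks
        eigvals, eigvecs = np.linalg.eigh(W)
        keep = eigvals > 1e-10
        # Feature map C V L^{-1/2} so that K is approximated by C W^{+} C^T.
        features = C @ eigvecs[:, keep] / np.sqrt(eigvals[keep])
        return features[:, ::-1][:, :n_components]  # largest eigenvalues first

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        theta = rng.uniform(0, 2 * np.pi, 500)
        X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(500, 2))
        print(nystrom_embedding(X).shape)  # (500, 2)
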
1503.03524 | Mohamed Kafsi | Mohamed Kafsi, Henriette Cramer, Bart Thomee and David A. Shamma | Describing and Understanding Neighborhood Characteristics through Online
Social Media | Accepted in WWW 2015, 2015, Florence, Italy | null | 10.1145/2736277.2741133 | ACM 978-1-4503-3469-3/15/05 | stat.ML cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geotagged data can be used to describe regions in the world and discover
local themes. However, not all data produced within a region is necessarily
specifically descriptive of that area. To surface the content that is
characteristic for a region, we present the geographical hierarchy model (GHM),
a probabilistic model based on the assumption that data observed in a region is
a random mixture of content that pertains to different levels of a hierarchy.
We apply the GHM to a dataset of 8 million Flickr photos in order to
discriminate between content (i.e., tags) that specifically characterizes a
region (e.g., neighborhood) and content that characterizes surrounding areas or
more general themes. Knowledge of the discriminative and non-discriminative
terms used throughout the hierarchy enables us to quantify the uniqueness of a
given region and to compare similar but distant regions. Our evaluation
demonstrates that our model improves upon traditional Naive Bayes
classification by 47% and hierarchical TF-IDF by 27%. We further highlight the
differences and commonalities with human reasoning about what is locally
characteristic for a neighborhood, distilled from ten interviews and a survey
that covered themes such as time, events, and prior regional knowledge.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 22:13:38 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Kafsi",
"Mohamed",
""
],
[
"Cramer",
"Henriette",
""
],
[
"Thomee",
"Bart",
""
],
[
"Shamma",
"David A.",
""
]
] | TITLE: Describing and Understanding Neighborhood Characteristics through Online
Social Media
ABSTRACT: Geotagged data can be used to describe regions in the world and discover
local themes. However, not all data produced within a region is necessarily
specifically descriptive of that area. To surface the content that is
characteristic for a region, we present the geographical hierarchy model (GHM),
a probabilistic model based on the assumption that data observed in a region is
a random mixture of content that pertains to different levels of a hierarchy.
We apply the GHM to a dataset of 8 million Flickr photos in order to
discriminate between content (i.e., tags) that specifically characterizes a
region (e.g., neighborhood) and content that characterizes surrounding areas or
more general themes. Knowledge of the discriminative and non-discriminative
terms used throughout the hierarchy enables us to quantify the uniqueness of a
given region and to compare similar but distant regions. Our evaluation
demonstrates that our model improves upon traditional Naive Bayes
classification by 47% and hierarchical TF-IDF by 27%. We further highlight the
differences and commonalities with human reasoning about what is locally
characteristic for a neighborhood, distilled from ten interviews and a survey
that covered themes such as time, events, and prior regional knowledge.
| no_new_dataset | 0.947817 |
1503.03607 | Najva Izadpanah | Najva Izadpanah | A divisive hierarchical clustering-based method for indexing image
information | null | Signal & Image Processing : An International Journal (SIPIJ)
Vol.6, No.1, February 2015 | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In most practical applications of image retrieval, high-dimensional feature
vectors are required, but current multi-dimensional indexing structures lose
their efficiency with growth of dimensions. Our goal is to propose a divisive
hierarchical clustering-based multi-dimensional indexing structure which is
efficient in high-dimensional feature spaces. A projection pursuit method is
used to find a component of the data onto which the data's projections maximize
the approximation of negentropy, providing the essential information for
partitioning the data space. Various tests and experimental results on
high-dimensional datasets indicate the performance of the proposed method in
comparison with others.
| [
{
"version": "v1",
"created": "Thu, 12 Mar 2015 06:51:06 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Izadpanah",
"Najva",
""
]
] | TITLE: A divisive hierarchical clustering-based method for indexing image
information
ABSTRACT: In most practical applications of image retrieval, high-dimensional feature
vectors are required, but current multi-dimensional indexing structures lose
their efficiency with growth of dimensions. Our goal is to propose a divisive
hierarchical clustering-based multi-dimensional indexing structure which is
efficient in high-dimensional feature spaces. A projection pursuit method is
used to find a component of the data onto which the data's projections maximize
the approximation of negentropy, providing the essential information for
partitioning the data space. Various tests and experimental results on
high-dimensional datasets indicate the performance of the proposed method in
comparison with others.
| no_new_dataset | 0.952442 |
1503.03650 | Weiqing Wang | Weiqing Wang, Hongzhi Yin, Ling Chen, Yizhou Sun, Shazia Sadiq,
Xiaofang Zhou | Geo-SAGE: A Geographical Sparse Additive Generative Model for Spatial
Item Recommendation | 10 pages, 15 figures | null | null | null | cs.IR cs.DB cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of location-based social networks (LBSNs), spatial
item recommendation has become an important means to help people discover
attractive and interesting venues and events, especially when users travel out
of town. However, this recommendation is very challenging compared to the
traditional recommender systems. A user can visit only a limited number of
spatial items, leading to a very sparse user-item matrix. Most of the items
visited by a user are located within a short distance from where he/she lives,
which makes it hard to recommend items when the user travels to a far away
place. Moreover, user interests and behavior patterns may vary dramatically
across different geographical regions. In light of this, we propose Geo-SAGE, a
geographical sparse additive generative model for spatial item recommendation
in this paper. Geo-SAGE considers both user personal interests and the
preference of the crowd in the target region, by exploiting both the
co-occurrence pattern of spatial items and the content of spatial items. To
further alleviate the data sparsity issue, Geo-SAGE exploits the geographical
correlation by smoothing the crowd's preferences over a well-designed spatial
index structure called spatial pyramid. We conduct extensive experiments to
evaluate the performance of our Geo-SAGE model on two real large-scale
datasets. The experimental results clearly demonstrate that our Geo-SAGE model
outperforms the state-of-the-art in the two tasks of both out-of-town and
home-town recommendations.
| [
{
"version": "v1",
"created": "Thu, 12 Mar 2015 09:44:11 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Wang",
"Weiqing",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Chen",
"Ling",
""
],
[
"Sun",
"Yizhou",
""
],
[
"Sadiq",
"Shazia",
""
],
[
"Zhou",
"Xiaofang",
""
]
] | TITLE: Geo-SAGE: A Geographical Sparse Additive Generative Model for Spatial
Item Recommendation
ABSTRACT: With the rapid development of location-based social networks (LBSNs), spatial
item recommendation has become an important means to help people discover
attractive and interesting venues and events, especially when users travel out
of town. However, this recommendation is very challenging compared to the
traditional recommender systems. A user can visit only a limited number of
spatial items, leading to a very sparse user-item matrix. Most of the items
visited by a user are located within a short distance from where he/she lives,
which makes it hard to recommend items when the user travels to a far away
place. Moreover, user interests and behavior patterns may vary dramatically
across different geographical regions. In light of this, we propose Geo-SAGE, a
geographical sparse additive generative model for spatial item recommendation
in this paper. Geo-SAGE considers both user personal interests and the
preference of the crowd in the target region, by exploiting both the
co-occurrence pattern of spatial items and the content of spatial items. To
further alleviate the data sparsity issue, Geo-SAGE exploits the geographical
correlation by smoothing the crowd's preferences over a well-designed spatial
index structure called spatial pyramid. We conduct extensive experiments to
evaluate the performance of our Geo-SAGE model on two real large-scale
datasets. The experimental results clearly demonstrate that our Geo-SAGE model
outperforms the state-of-the-art in the two tasks of both out-of-town and
home-town recommendations.
| no_new_dataset | 0.950824 |
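Illustrative aside: the Geo-SAGE abstract mentions smoothing crowd preferences over a spatial pyramid. A toy version of such an index is sketched below, where each point maps to one grid cell per level and counts are blended from coarse to fine cells; the cell scheme and decay weights are assumptions, not the paper's construction.

```python
# Toy spatial pyramid: one cell id per level, with coarse-to-fine smoothing
# of crowd preference counts. Grid layout and decay are illustrative.
from collections import defaultdict

def cell_id(lat, lon, level):
    # 2^level x 2^level grid over the globe.
    n = 2 ** level
    row = min(int((lat + 90.0) / 180.0 * n), n - 1)
    col = min(int((lon + 180.0) / 360.0 * n), n - 1)
    return (level, row, col)

def smoothed_count(counts, lat, lon, levels=(2, 4, 6), decay=0.5):
    # Blend counts from coarse (region-wide) to fine (local) cells.
    total, weight = 0.0, 1.0
    for lv in levels:
        total += weight * counts[cell_id(lat, lon, lv)]
        weight *= decay
    return total

counts = defaultdict(float)
checkins = [(40.71, -74.00), (40.73, -73.99), (34.05, -118.24)]
for lat, lon in checkins:
    for lv in (2, 4, 6):
        counts[cell_id(lat, lon, lv)] += 1.0

print(smoothed_count(counts, 40.72, -74.01))
```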
1307.6365 | Josif Grabocka | Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme | Time-Series Classification Through Histograms of Symbolic Polynomials | null | null | 10.1109/TKDE.2014.2377746 | null | cs.AI cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-series classification has attracted considerable research attention due
to the various domains where time-series data are observed, ranging from
medicine to econometrics. Traditionally, the focus of time-series
classification has been on short time-series data composed of a unique pattern
with intraclass pattern distortions and variations, while recently there have
been attempts to focus on longer series composed of various local patterns.
This study presents a novel method which can detect local patterns in long
time-series via fitting local polynomial functions of arbitrary degrees. The
coefficients of the polynomial functions are converted to symbolic words via
equivolume discretizations of the coefficients' distributions. The symbolic
polynomial words enable the detection of similar local patterns by assigning
the same words to similar polynomials. Moreover, a histogram of the frequencies
of the words is constructed from each time-series' bag of words. Each row of
the histogram enables a new representation for the series and symbolizes the
existence of local patterns and their frequencies. Experimental evidence
demonstrates outstanding results of our method compared to the state-of-the-art
baselines, by exhibiting the best classification accuracies in all the datasets
and having statistically significant improvements in the absolute majority of
experiments.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2013 10:07:50 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Jul 2013 03:40:27 GMT"
},
{
"version": "v3",
"created": "Wed, 31 Jul 2013 10:58:02 GMT"
},
{
"version": "v4",
"created": "Mon, 23 Dec 2013 22:26:35 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Grabocka",
"Josif",
""
],
[
"Wistuba",
"Martin",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | TITLE: Time-Series Classification Through Histograms of Symbolic Polynomials
ABSTRACT: Time-series classification has attracted considerable research attention due
to the various domains where time-series data are observed, ranging from
medicine to econometrics. Traditionally, the focus of time-series
classification has been on short time-series data composed of a unique pattern
with intraclass pattern distortions and variations, while recently there have
been attempts to focus on longer series composed of various local patterns.
This study presents a novel method which can detect local patterns in long
time-series via fitting local polynomial functions of arbitrary degrees. The
coefficients of the polynomial functions are converted to symbolic words via
equivolume discretizations of the coefficients' distributions. The symbolic
polynomial words enable the detection of similar local patterns by assigning
the same words to similar polynomials. Moreover, a histogram of the frequencies
of the words is constructed from each time-series' bag of words. Each row of
the histogram enables a new representation for the series and symbolizes the
existence of local patterns and their frequencies. Experimental evidence
demonstrates outstanding results of our method compared to the state-of-the-art
baselines, by exhibiting the best classification accuracies in all the datasets
and having statistically significant improvements in the absolute majority of
experiments.
| no_new_dataset | 0.949201 |
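Illustrative aside: the bag-of-symbolic-polynomial-words idea above can be sketched in a few lines: fit a low-degree polynomial to each sliding window, discretize the coefficients into equal-frequency bins, and count the resulting words. Window size, degree, and alphabet size are illustrative choices, and the binning here is done within a single series rather than over a whole dataset as in the paper.

```python
# Sketch of a histogram of symbolic polynomial words for one time series.
import numpy as np
from collections import Counter

def poly_words(series, window=16, degree=2, bins=4):
    x = np.arange(window)
    coeffs = np.array([np.polyfit(x, series[s:s + window], degree)
                       for s in range(len(series) - window + 1)])
    digits = np.zeros_like(coeffs, dtype=int)
    for j in range(coeffs.shape[1]):
        # Equal-frequency ("equivolume") bin edges per coefficient dimension.
        edges = np.quantile(coeffs[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        digits[:, j] = np.digitize(coeffs[:, j], edges)
    return Counter("".join(map(str, row)) for row in digits)

rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 200)
hist = poly_words(np.sin(t) + 0.1 * rng.normal(size=t.size))
print(hist.most_common(3))
```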
1312.6712 | Josif Grabocka | Josif Grabocka, Lars Schmidt-Thieme | Invariant Factorization Of Time-Series | null | null | 10.1007/s10618-014-0364-z | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-series classification is an important domain of machine learning and a
plethora of methods have been developed for the task. In comparison to existing
approaches, this study presents a novel method which decomposes a time-series
dataset into latent patterns and membership weights of local segments to those
patterns. The process is formalized as a constrained objective function and a
tailored stochastic coordinate descent optimization is applied. The time-series
are projected to a new feature representation consisting of the sums of the
membership weights, which captures frequencies of local patterns. Features from
various sliding window sizes are concatenated in order to encapsulate the
interaction of patterns from different sizes. Finally, a large-scale
experimental comparison against 6 state of the art baselines and 43 real life
datasets is conducted. The proposed method outperforms all the baselines with
statistically significant margins in terms of prediction accuracy.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2013 22:15:59 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Grabocka",
"Josif",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | TITLE: Invariant Factorization Of Time-Series
ABSTRACT: Time-series classification is an important domain of machine learning and a
plethora of methods have been developed for the task. In comparison to existing
approaches, this study presents a novel method which decomposes a time-series
dataset into latent patterns and membership weights of local segments to those
patterns. The process is formalized as a constrained objective function and a
tailored stochastic coordinate descent optimization is applied. The time-series
are projected to a new feature representation consisting of the sums of the
membership weights, which captures frequencies of local patterns. Features from
various sliding window sizes are concatenated in order to encapsulate the
interaction of patterns from different sizes. Finally, a large-scale
experimental comparison against 6 state of the art baselines and 43 real life
datasets is conducted. The proposed method outperforms all the baselines with
statistically significant margins in terms of prediction accuracy.
| no_new_dataset | 0.943919 |
1405.1681 | Chinh Dang | Chinh Dang, Hayder Radha | Representative Selection for Big Data via Sparse Graph and Geodesic
Grassmann Manifold Distance | This paper has been withdrawn by the author due to lacking details | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of identifying a very small subset of data
points that belong to a significantly larger massive dataset (i.e., Big Data).
The small number of selected data points must adequately represent and
faithfully characterize the massive Big Data. Such an identification process is
known as representative selection [19]. We propose a novel representative
selection framework by generating an l1 norm sparse graph for a given Big-Data
dataset. The Big Data is partitioned recursively into clusters using a spectral
clustering algorithm on the generated sparse graph. We consider each cluster as
one point in a Grassmann manifold, and measure the geodesic distance among
these points. The distances are further analyzed using a min-max algorithm [1]
to extract an optimal subset of clusters. Finally, by considering a sparse
subgraph of each selected cluster, we detect a representative using principal
component centrality [11]. We refer to the proposed representative selection
framework as a Sparse Graph and Grassmann Manifold (SGGM) based approach. To
validate the proposed SGGM framework, we apply it to the problem of video
summarization, where only a few video frames, known as key frames, are selected
among a much longer video sequence. A comparison of the results obtained by the
proposed algorithm with the ground truth, which is agreed by multiple human
judges, and with some state-of-the-art methods clearly indicates the viability
of the SGGM framework.
| [
{
"version": "v1",
"created": "Wed, 7 May 2014 17:57:25 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Mar 2015 13:57:52 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Dang",
"Chinh",
""
],
[
"Radha",
"Hayder",
""
]
] | TITLE: Representative Selection for Big Data via Sparse Graph and Geodesic
Grassmann Manifold Distance
ABSTRACT: This paper addresses the problem of identifying a very small subset of data
points that belong to a significantly larger massive dataset (i.e., Big Data).
The small number of selected data points must adequately represent and
faithfully characterize the massive Big Data. Such an identification process is
known as representative selection [19]. We propose a novel representative
selection framework by generating an l1 norm sparse graph for a given Big-Data
dataset. The Big Data is partitioned recursively into clusters using a spectral
clustering algorithm on the generated sparse graph. We consider each cluster as
one point in a Grassmann manifold, and measure the geodesic distance among
these points. The distances are further analyzed using a min-max algorithm [1]
to extract an optimal subset of clusters. Finally, by considering a sparse
subgraph of each selected cluster, we detect a representative using principal
component centrality [11]. We refer to the proposed representative selection
framework as a Sparse Graph and Grassmann Manifold (SGGM) based approach. To
validate the proposed SGGM framework, we apply it to the problem of video
summarization, where only a few video frames, known as key frames, are selected
among a much longer video sequence. A comparison of the results obtained by the
proposed algorithm with the ground truth, which is agreed by multiple human
judges, and with some state-of-the-art methods clearly indicates the viability
of the SGGM framework.
| no_new_dataset | 0.948585 |
1410.2686 | F. Ozgur Catak | Ferhat \"Ozg\"ur \c{C}atak | Polarization Measurement of High Dimensional Social Media Messages With
Support Vector Machine Algorithm Using Mapreduce | 12 pages, in Turkish | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we propose a new Support Vector Machine (SVM) training
algorithm based on the distributed MapReduce technique. A large body of
research shows that SVM has the highest generalization ability among the
classification algorithms used in machine learning. Moreover, the SVM
classifier model is not affected by correlations among the features. However,
SVM uses quadratic optimization techniques in its training phase: the SVM
algorithm is formulated as a quadratic optimization problem, which has $O(m^3)$
time and $O(m^2)$ space complexity, where m is the training set size. The
computation time of SVM training is quadratic in the number of training
instances. For this reason, SVM is not a suitable classification algorithm for
large-scale dataset classification. To solve this training problem we developed
a new distributed MapReduce method. Accordingly, (i) the SVM algorithm is
trained on each partition of the distributed dataset individually; (ii) the
support vectors of the classifier models from all trained nodes are merged; and
(iii) these two steps are iterated until the classifier model converges to the
optimal classifier function. In the implementation phase, a large-scale social
media dataset is represented as a TFxIDF matrix. The matrix is used for
sentiment analysis to obtain polarization values. Two- and three-class models
are created for the classification method. Confusion matrices of each
classification model are presented in tables. The social media message corpus
consists of messages about 108 public and 66 private universities in Turkey.
Twitter is used as the source of the corpus, and Twitter user messages are
collected using the Twitter Streaming API. Results are shown in graphics and
tables.
| [
{
"version": "v1",
"created": "Fri, 10 Oct 2014 06:42:25 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Mar 2015 05:56:51 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Çatak",
"Ferhat Özgür",
""
]
] | TITLE: Polarization Measurement of High Dimensional Social Media Messages With
Support Vector Machine Algorithm Using Mapreduce
ABSTRACT: In this article, we propose a new Support Vector Machine (SVM) training
algorithm based on the distributed MapReduce technique. A large body of
research shows that SVM has the highest generalization ability among the
classification algorithms used in machine learning. Moreover, the SVM
classifier model is not affected by correlations among the features. However,
SVM uses quadratic optimization techniques in its training phase: the SVM
algorithm is formulated as a quadratic optimization problem, which has $O(m^3)$
time and $O(m^2)$ space complexity, where m is the training set size. The
computation time of SVM training is quadratic in the number of training
instances. For this reason, SVM is not a suitable classification algorithm for
large-scale dataset classification. To solve this training problem we developed
a new distributed MapReduce method. Accordingly, (i) the SVM algorithm is
trained on each partition of the distributed dataset individually; (ii) the
support vectors of the classifier models from all trained nodes are merged; and
(iii) these two steps are iterated until the classifier model converges to the
optimal classifier function. In the implementation phase, a large-scale social
media dataset is represented as a TFxIDF matrix. The matrix is used for
sentiment analysis to obtain polarization values. Two- and three-class models
are created for the classification method. Confusion matrices of each
classification model are presented in tables. The social media message corpus
consists of messages about 108 public and 66 private universities in Turkey.
Twitter is used as the source of the corpus, and Twitter user messages are
collected using the Twitter Streaming API. Results are shown in graphics and
tables.
| no_new_dataset | 0.951774 |
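Illustrative aside: the three-step scheme in the abstract (train per partition, merge support vectors, iterate) can be simulated on one machine. The sketch below uses scikit-learn in place of Hadoop/MapReduce, a fixed number of rounds in place of the convergence test, and toy data; all of these are assumptions, not the paper's implementation.

```python
# Single-machine simulation of the merge-support-vectors scheme.
import numpy as np
from sklearn.svm import SVC

def merged_svm(parts_X, parts_y, rounds=3):
    global_X = np.empty((0, parts_X[0].shape[1]))
    global_y = np.empty((0,))
    for _ in range(rounds):
        sv_X, sv_y = [], []
        for Xp, yp in zip(parts_X, parts_y):
            # "Map": train locally on the partition plus current global SVs.
            Xl = np.vstack([Xp, global_X])
            yl = np.concatenate([yp, global_y])
            clf = SVC(kernel="linear").fit(Xl, yl)
            sv_X.append(clf.support_vectors_)
            sv_y.append(yl[clf.support_])
        # "Reduce": merged support vectors become the new global set.
        global_X, global_y = np.vstack(sv_X), np.concatenate(sv_y)
    return SVC(kernel="linear").fit(global_X, global_y)

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = merged_svm(np.array_split(X, 3), np.array_split(y, 3))
print(model.score(X, y))
```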
1503.03163 | Yanwei Fu | Xi Zhang, Yanwei Fu, Andi Zang, Leonid Sigal, Gady Agam | Learning Classifiers from Synthetic Data Using a Multichannel
Autoencoder | 10 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method for using synthetic data to help learning classifiers.
Synthetic data, even if generated based on real data, normally results in a
shift from the distribution of real data in feature space. To bridge the gap
between the real and synthetic data, and jointly learn from synthetic and real
data, this paper proposes a Multichannel Autoencoder(MCAE). We show that by
using MCAE, it is possible to learn a better feature representation for
classification. To evaluate the proposed approach, we conduct experiments on
two types of datasets. Experimental results on two datasets validate the
efficiency of our MCAE model and our methodology of generating synthetic data.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 03:31:53 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Zhang",
"Xi",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Zang",
"Andi",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Agam",
"Gady",
""
]
] | TITLE: Learning Classifiers from Synthetic Data Using a Multichannel
Autoencoder
ABSTRACT: We propose a method for using synthetic data to help learning classifiers.
Synthetic data, even if generated based on real data, normally results in a
shift from the distribution of real data in feature space. To bridge the gap
between the real and synthetic data, and jointly learn from synthetic and real
data, this paper proposes a Multichannel Autoencoder(MCAE). We show that by
using MCAE, it is possible to learn a better feature representation for
classification. To evaluate the proposed approach, we conduct experiments on
two types of datasets. Experimental results on two datasets validate the
efficiency of our MCAE model and our methodology of generating synthetic data.
| no_new_dataset | 0.952926 |
1503.03168 | Kalyani Desikan | G. Hannah Grace, Kalyani Desikan | Experimental Estimation of Number of Clusters Based on Cluster Quality | 12 pages, 9 figures | Journal of mathematics and computer science, Vol12 (2014), 304-315 | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text Clustering is a text mining technique which divides the given set of
text documents into significant clusters. It is used for organizing a huge
number of text documents into a well-organized form. In the majority of the
clustering algorithms, the number of clusters must be specified apriori, which
is a drawback of these algorithms. The aim of this paper is to show
experimentally how to determine the number of clusters based on cluster
quality. Since partitional clustering algorithms are well-suited for clustering
large document datasets, we have confined our analysis to a partitional
clustering algorithm.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 10:34:06 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Grace",
"G. Hannah",
""
],
[
"Desikan",
"Kalyani",
""
]
] | TITLE: Experimental Estimation of Number of Clusters Based on Cluster Quality
ABSTRACT: Text Clustering is a text mining technique which divides the given set of
text documents into significant clusters. It is used for organizing a huge
number of text documents into a well-organized form. In the majority of the
clustering algorithms, the number of clusters must be specified apriori, which
is a drawback of these algorithms. The aim of this paper is to show
experimentally how to determine the number of clusters based on cluster
quality. Since partitional clustering algorithms are well-suited for clustering
large document datasets, we have confined our analysis to a partitional
clustering algorithm.
| no_new_dataset | 0.949482 |
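Illustrative aside: one common way to determine the number of clusters from cluster quality, in the spirit of the abstract, is to score several candidate values of k and keep the best. The sketch below uses k-means with the silhouette coefficient as the quality measure; the synthetic data and the range of k are assumptions for illustration.

```python
# Pick k by the silhouette coefficient over a small range of candidates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in [(0, 0), (3, 3), (0, 4)]])

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))   # expected to recover 3 clusters
```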
1503.03199 | Tatsuro Kawamoto | Tatsuro Kawamoto | Persistence of activity on Twitter triggered by a natural disaster: A
data analysis | 2 pages, 3 figures | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this note, we list the results of a simple analysis of a Twitter dataset:
the complete dataset of Japanese tweets in the 1-week period after the Great
East Japan earthquake, which occurred on March 11, 2011. Our data analysis
shows how people reacted to the earthquake on Twitter and how some users went
inactive in the long-term.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 07:31:19 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Kawamoto",
"Tatsuro",
""
]
] | TITLE: Persistence of activity on Twitter triggered by a natural disaster: A
data analysis
ABSTRACT: In this note, we list the results of a simple analysis of a Twitter dataset:
the complete dataset of Japanese tweets in the 1-week period after the Great
East Japan earthquake, which occurred on March 11, 2011. Our data analysis
shows how people reacted to the earthquake on Twitter and how some users went
inactive in the long-term.
| no_new_dataset | 0.925769 |
1503.03238 | Josif Grabocka | Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme | Scalable Discovery of Time-Series Shapelets | Under review in the journal "Knowledge and Information Systems"
(KAIS) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-series classification is an important problem for the data mining
community due to the wide range of application domains involving time-series
data. A recent paradigm, called shapelets, represents patterns that are highly
predictive for the target variable. Shapelets are discovered by measuring the
prediction accuracy of a set of potential (shapelet) candidates. The candidates
typically consist of all the segments of a dataset, therefore, the discovery of
shapelets is computationally expensive. This paper proposes a novel method that
avoids measuring the prediction accuracy of similar candidates in Euclidean
distance space, through an online clustering pruning technique. In addition,
our algorithm incorporates a supervised shapelet selection that filters out
only those candidates that improve classification accuracy. Empirical evidence
on 45 datasets from the UCR collection demonstrate that our method is 3-4
orders of magnitudes faster than the fastest existing shapelet-discovery
method, while providing better prediction accuracy.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 09:38:49 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Grabocka",
"Josif",
""
],
[
"Wistuba",
"Martin",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | TITLE: Scalable Discovery of Time-Series Shapelets
ABSTRACT: Time-series classification is an important problem for the data mining
community due to the wide range of application domains involving time-series
data. A recent paradigm, called shapelets, represents patterns that are highly
predictive for the target variable. Shapelets are discovered by measuring the
prediction accuracy of a set of potential (shapelet) candidates. The candidates
typically consist of all the segments of a dataset, therefore, the discovery of
shapelets is computationally expensive. This paper proposes a novel method that
avoids measuring the prediction accuracy of similar candidates in Euclidean
distance space, through an online clustering pruning technique. In addition,
our algorithm incorporates a supervised shapelet selection that filters out
only those candidates that improve classification accuracy. Empirical evidence
on 45 datasets from the UCR collection demonstrates that our method is 3-4
orders of magnitude faster than the fastest existing shapelet-discovery
method, while providing better prediction accuracy.
| no_new_dataset | 0.953013 |
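Illustrative aside: shapelet discovery as described above scores every candidate segment by how well its best-match distance separates the classes. The brute-force sketch below shows that scoring with a simple mean-gap criterion; the paper's online-clustering pruning and supervised selection are omitted, and the data and parameters are made up.

```python
# Brute-force shapelet scoring on a toy two-class dataset.
import numpy as np

def min_dist(series, shapelet):
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def best_shapelet(X, y, length=8):
    best, best_score = None, -np.inf
    for series in X:
        for s in range(len(series) - length + 1):
            cand = series[s:s + length]
            d = np.array([min_dist(ts, cand) for ts in X])
            # Separation of class-wise mean distances as a crude quality score.
            gap = abs(d[y == 0].mean() - d[y == 1].mean()) / (d.std() + 1e-9)
            if gap > best_score:
                best, best_score = cand, gap
    return best, best_score

rng = np.random.default_rng(3)
X = rng.normal(size=(12, 40))
X[6:, 15:23] += 3.0                      # class-1 series contain a bump
y = np.array([0] * 6 + [1] * 6)
shapelet, score = best_shapelet(X, y)
print(round(score, 2))
```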
1503.03261 | Jeff Jones Dr | Jeff Jones, Andrew Adamatzky | Approximation of Statistical Analysis and Estimation by Morphological
Adaptation in a Model of Slime Mould | null | null | null | null | cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | True slime mould Physarum polycephalum approximates a range of complex
computations via growth and adaptation of its protoplasmic transport network,
stimulating a large body of recent research into how such a simple organism can
perform such complex feats. The properties of networks constructed by slime
mould are known to be influenced by the local distribution of stimuli within
its environment. But can the morphological adaptation of slime mould yield any
information about the global statistical properties of its environment? We
explore this possibility using a particle based model of slime mould. We
demonstrate how morphological adaptation in blobs of virtual slime mould may be
used as a simple computational mechanism that can coarsely approximate
statistical analysis, estimation and tracking. Preliminary results include the
approximation of the geometric centroid of 2D shapes, approximation of
arithmetic mean from spatially represented sorted and unsorted data
distributions, and the estimation and dynamical tracking of moving object
position in the presence of noise contaminated input stimuli. The results
suggest that it is possible to utilise collectives of very simple components
with limited individual computational ability (for example swarms of simple
robotic devices) to extract statistical features from complex datasets by means
of material adaptation and sensorial fusion.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 10:33:00 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Jones",
"Jeff",
""
],
[
"Adamatzky",
"Andrew",
""
]
] | TITLE: Approximation of Statistical Analysis and Estimation by Morphological
Adaptation in a Model of Slime Mould
ABSTRACT: True slime mould Physarum polycephalum approximates a range of complex
computations via growth and adaptation of its protoplasmic transport network,
stimulating a large body of recent research into how such a simple organism can
perform such complex feats. The properties of networks constructed by slime
mould are known to be influenced by the local distribution of stimuli within
its environment. But can the morphological adaptation of slime mould yield any
information about the global statistical properties of its environment? We
explore this possibility using a particle based model of slime mould. We
demonstrate how morphological adaptation in blobs of virtual slime mould may be
used as a simple computational mechanism that can coarsely approximate
statistical analysis, estimation and tracking. Preliminary results include the
approximation of the geometric centroid of 2D shapes, approximation of
arithmetic mean from spatially represented sorted and unsorted data
distributions, and the estimation and dynamical tracking of moving object
position in the presence of noise contaminated input stimuli. The results
suggest that it is possible to utilise collectives of very simple components
with limited individual computational ability (for example swarms of simple
robotic devices) to extract statistical features from complex datasets by means
of material adaptation and sensorial fusion.
| no_new_dataset | 0.949809 |
1503.03264 | Jeff Jones Dr | Jeff Jones, Andrew Adamatzky | Material Approximation of Data Smoothing and Spline Curves Inspired by
Slime Mould | null | null | null | null | cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using a particle model of Physarum displaying emergent morphological
adaptation behaviour we demonstrate how a minimal approach to collective
material computation may be used to transform and summarise properties of
spatially represented datasets. We find that the virtual material relaxes more
strongly to high-frequency changes in data which can be used for the smoothing
(or filtering) of data by approximating moving average and low-pass filters
in 1D datasets. The relaxation and minimisation properties of the model enable
the spatial computation of B-spline curves (approximating splines) in 2D
datasets. Both clamped and unclamped spline curves, of open and closed shapes,
can be represented and the degree of spline curvature corresponds to the
relaxation time of the material. The material computation of spline curves also
includes novel quasi-mechanical properties including unwinding of the shape
between control points and a preferential adhesion to longer, straighter paths.
Interpolating splines could not directly be approximated due to the formation
and evolution of Steiner points at narrow vertices, but were approximated
after rectilinear pre-processing of the source data. This pre-processing was
further simplified by transforming the original data to contain the material
inside the polyline. These exemplar results expand the repertoire of spatially
represented unconventional computing devices by demonstrating a simple,
collective and distributed approach to data and curve smoothing.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 10:36:48 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Jones",
"Jeff",
""
],
[
"Adamatzky",
"Andrew",
""
]
] | TITLE: Material Approximation of Data Smoothing and Spline Curves Inspired by
Slime Mould
ABSTRACT: Using a particle model of Physarum displaying emergent morphological
adaptation behaviour we demonstrate how a minimal approach to collective
material computation may be used to transform and summarise properties of
spatially represented datasets. We find that the virtual material relaxes more
strongly to high-frequency changes in data which can be used for the smoothing
(or filtering) of data by approximating moving average and low-pass filters
in 1D datasets. The relaxation and minimisation properties of the model enable
the spatial computation of B-spline curves (approximating splines) in 2D
datasets. Both clamped and unclamped spline curves, of open and closed shapes,
can be represented and the degree of spline curvature corresponds to the
relaxation time of the material. The material computation of spline curves also
includes novel quasi-mechanical properties including unwinding of the shape
between control points and a preferential adhesion to longer, straighter paths.
Interpolating splines could not directly be approximated due to the formation
and evolution of Steiner points at narrow vertices, but were approximated
after rectilinear pre-processing of the source data. This pre-processing was
further simplified by transforming the original data to contain the material
inside the polyline. These exemplar results expand the repertoire of spatially
represented unconventional computing devices by demonstrating a simple,
collective and distributed approach to data and curve smoothing.
| no_new_dataset | 0.950778 |
1503.03270 | Vandna Bhalla Ms | Vandna Bhalla, Santanu Chaudhury, Arihant Jain | A Novel Hybrid CNN-AIS Visual Pattern Recognition Engine | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Machine learning methods are used today for most recognition problems.
Convolutional Neural Networks (CNN) have time and again proved successful for
many image processing tasks, primarily because of their architecture. In this
paper we propose to apply CNN to small data sets, such as personal albums or
other similar settings where the size of the training dataset is a limitation,
within the framework of a proposed hybrid CNN-AIS model. We use Artificial
Immune System principles to enhance the small training data set. A layer of
Clonal Selection is added to the local filtering and max pooling of the CNN
architecture. The proposed architecture is evaluated using the standard MNIST
dataset with a limited data size and also with a small personal data sample
belonging to two different classes. Experimental results show that the proposed
hybrid CNN-AIS based recognition engine works well when the training data is
limited in size.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 10:58:25 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Bhalla",
"Vandna",
""
],
[
"Chaudhury",
"Santanu",
""
],
[
"Jain",
"Arihant",
""
]
] | TITLE: A Novel Hybrid CNN-AIS Visual Pattern Recognition Engine
ABSTRACT: Machine learning methods are used today for most recognition problems.
Convolutional Neural Networks (CNN) have time and again proved successful for
many image processing tasks, primarily because of their architecture. In this
paper we propose to apply CNN to small data sets, such as personal albums or
other similar settings where the size of the training dataset is a limitation,
within the framework of a proposed hybrid CNN-AIS model. We use Artificial
Immune System principles to enhance the small training data set. A layer of
Clonal Selection is added to the local filtering and max pooling of the CNN
architecture. The proposed architecture is evaluated using the standard MNIST
dataset with a limited data size and also with a small personal data sample
belonging to two different classes. Experimental results show that the proposed
hybrid CNN-AIS based recognition engine works well when the training data is
limited in size.
| no_new_dataset | 0.948965 |
1503.03355 | Evangelos Papalexakis | Evangelos E. Papalexakis | Automatic Unsupervised Tensor Mining with Quality Assessment | null | null | null | null | stat.ML cs.LG cs.NA stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A popular tool for unsupervised modelling and mining multi-aspect data is
tensor decomposition. In an exploratory setting, where no labels or ground
truth are available, how can we automatically decide how many components to
extract? How can we assess the quality of our results, so that a domain expert
can factor this quality measure in the interpretation of our results? In this
paper, we introduce AutoTen, a novel automatic unsupervised tensor mining
algorithm with minimal user intervention, which leverages and improves upon
heuristics that assess the result quality. We extensively evaluate AutoTen's
performance on synthetic data, outperforming existing baselines on this very
hard problem. Finally, we apply AutoTen on a variety of real datasets,
providing insights and discoveries. We view this work as a step towards a fully
automated, unsupervised tensor mining tool that can be easily adopted by
practitioners in academia and industry.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 14:34:46 GMT"
}
] | 2015-03-12T00:00:00 | [
[
"Papalexakis",
"Evangelos E.",
""
]
] | TITLE: Automatic Unsupervised Tensor Mining with Quality Assessment
ABSTRACT: A popular tool for unsupervised modelling and mining multi-aspect data is
tensor decomposition. In an exploratory setting, where no labels or ground
truth are available, how can we automatically decide how many components to
extract? How can we assess the quality of our results, so that a domain expert
can factor this quality measure in the interpretation of our results? In this
paper, we introduce AutoTen, a novel automatic unsupervised tensor mining
algorithm with minimal user intervention, which leverages and improves upon
heuristics that assess the result quality. We extensively evaluate AutoTen's
performance on synthetic data, outperforming existing baselines on this very
hard problem. Finally, we apply AutoTen on a variety of real datasets,
providing insights and discoveries. We view this work as a step towards a fully
automated, unsupervised tensor mining tool that can be easily adopted by
practitioners in academia and industry.
| no_new_dataset | 0.946349 |
1503.01596 | Sungjin Ahn | Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling | Large-Scale Distributed Bayesian Matrix Factorization using Stochastic
Gradient MCMC | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite having various attractive qualities such as high prediction accuracy
and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix
Factorization has not been widely adopted because of the prohibitive cost of
inference. In this paper, we propose a scalable distributed Bayesian matrix
factorization algorithm using stochastic gradient MCMC. Our algorithm, based on
Distributed Stochastic Gradient Langevin Dynamics, can not only match the
prediction accuracy of standard MCMC methods like Gibbs sampling, but at the
same time is as fast and simple as stochastic gradient descent. In our
experiments, we show that our algorithm can achieve the same level of
prediction accuracy as Gibbs sampling an order of magnitude faster. We also
show that our method reduces the prediction error as fast as distributed
stochastic gradient descent, achieving a 4.1% improvement in RMSE for the
Netflix dataset and a 1.8% for the Yahoo music dataset.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 10:17:16 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Mar 2015 02:28:41 GMT"
}
] | 2015-03-11T00:00:00 | [
[
"Ahn",
"Sungjin",
""
],
[
"Korattikara",
"Anoop",
""
],
[
"Liu",
"Nathan",
""
],
[
"Rajan",
"Suju",
""
],
[
"Welling",
"Max",
""
]
] | TITLE: Large-Scale Distributed Bayesian Matrix Factorization using Stochastic
Gradient MCMC
ABSTRACT: Despite having various attractive qualities such as high prediction accuracy
and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix
Factorization has not been widely adopted because of the prohibitive cost of
inference. In this paper, we propose a scalable distributed Bayesian matrix
factorization algorithm using stochastic gradient MCMC. Our algorithm, based on
Distributed Stochastic Gradient Langevin Dynamics, can not only match the
prediction accuracy of standard MCMC methods like Gibbs sampling, but at the
same time is as fast and simple as stochastic gradient descent. In our
experiments, we show that our algorithm can achieve the same level of
prediction accuracy as Gibbs sampling an order of magnitude faster. We also
show that our method reduces the prediction error as fast as distributed
stochastic gradient descent, achieving a 4.1% improvement in RMSE for the
Netflix dataset and a 1.8% for the Yahoo music dataset.
| no_new_dataset | 0.953837 |
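Illustrative aside: the heart of the method above is a stochastic gradient Langevin dynamics (SGLD) update, i.e. an SGD step on sampled ratings plus Gaussian noise scaled by the step size. The single-machine sketch below shows that update for plain matrix factorization; the hyperparameters and toy ratings are assumptions, and the distributed machinery is omitted.

```python
# SGLD updates for a toy matrix factorization model.
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, rank = 50, 40, 5
U = 0.1 * rng.normal(size=(n_users, rank))
V = 0.1 * rng.normal(size=(n_items, rank))
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.normal(3, 1))
           for _ in range(2000)]

eps, lam = 0.01, 0.1          # step size and Gaussian prior precision
for step in range(5000):
    u, i, r = ratings[rng.integers(len(ratings))]
    err = r - U[u] @ V[i]
    grad_u = -err * V[i] + lam * U[u]
    grad_v = -err * U[u] + lam * V[i]
    # Langevin step: gradient descent with step eps/2 plus noise of variance eps.
    U[u] -= 0.5 * eps * grad_u + np.sqrt(eps) * rng.normal(size=rank)
    V[i] -= 0.5 * eps * grad_v + np.sqrt(eps) * rng.normal(size=rank)

rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in ratings]))
print(round(rmse, 3))
```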
1503.02940 | Gabriela Montoya | Gabriela Montoya, Hala Skaf-Molli, Pascal Molli, Maria-Esther Vidal | Efficient Query Processing for SPARQL Federations with Replicated
Fragments | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low reliability and availability of public SPARQL endpoints prevent
real-world applications from exploiting all the potential of these querying
infras-tructures. Fragmenting data on servers can improve data availability but
degrades performance. Replicating fragments can offer new tradeoff between
performance and availability. We propose FEDRA, a framework for querying Linked
Data that takes advantage of client-side data replication, and performs a
source selection algorithm that aims to reduce the number of selected public
SPARQL endpoints, execution time, and intermediate results. FEDRA has been
implemented on the state-of-the-art query engines ANAPSID and FedX, and
empirically evaluated on a variety of real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 14:57:26 GMT"
}
] | 2015-03-11T00:00:00 | [
[
"Montoya",
"Gabriela",
""
],
[
"Skaf-Molli",
"Hala",
""
],
[
"Molli",
"Pascal",
""
],
[
"Vidal",
"Maria-Esther",
""
]
] | TITLE: Efficient Query Processing for SPARQL Federations with Replicated
Fragments
ABSTRACT: Low reliability and availability of public SPARQL endpoints prevent
real-world applications from exploiting all the potential of these querying
infras-tructures. Fragmenting data on servers can improve data availability but
degrades performance. Replicating fragments can offer new tradeoff between
performance and availability. We propose FEDRA, a framework for querying Linked
Data that takes advantage of client-side data replication, and performs a
source selection algorithm that aims to reduce the number of selected public
SPARQL endpoints, execution time, and intermediate results. FEDRA has been
implemented on the state-of-the-art query engines ANAPSID and FedX, and
empirically evaluated on a variety of real-world datasets.
| no_new_dataset | 0.942454 |
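Illustrative aside: source selection of the kind FEDRA performs can be pictured as a covering problem: choose as few endpoints as possible whose fragments cover the query's triple patterns. The greedy sketch below conveys only that intuition; the endpoint contents are made up, and the paper's actual algorithm, which also reasons about replicated fragments and intermediate results, is not reproduced.

```python
# Greedy cover of a query's triple patterns by a small set of endpoints.
def select_sources(query_patterns, endpoint_fragments):
    remaining = set(query_patterns)
    chosen = []
    while remaining:
        best = max(endpoint_fragments,
                   key=lambda e: len(endpoint_fragments[e] & remaining))
        covered = endpoint_fragments[best] & remaining
        if not covered:
            raise ValueError("query cannot be answered from these endpoints")
        chosen.append(best)
        remaining -= covered
    return chosen

endpoints = {
    "e1": {"?x :type :City", "?x :population ?p"},
    "e2": {"?x :population ?p", "?x :mayor ?m"},
    "e3": {"?x :mayor ?m"},
}
query = {"?x :type :City", "?x :population ?p", "?x :mayor ?m"}
print(select_sources(query, endpoints))   # e.g. ['e1', 'e2']
```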
1503.02974 | Matthew Wade | Matthew J. Wade and Thomas P. Curtis and Russell J. Davenport | Modelling Computational Resources for Next Generation Sequencing
Bioinformatics Analysis of 16S rRNA Samples | 23 pages, 8 figures | null | null | null | q-bio.GN cs.CE cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the rapidly evolving domain of next generation sequencing and
bioinformatics analysis, data generation is one aspect that is increasing at a
concomitant rate. The burden associated with processing large amounts of
sequencing data has emphasised the need to allocate sufficient computing
resources to complete analyses in the shortest possible time with manageable
and predictable costs. A novel method for predicting time to completion for a
popular bioinformatics software package (QIIME) was developed using key variables
characteristic of the input data assumed to impact processing time. Multiple
Linear Regression models were developed to determine run time for two denoising
algorithms and a general bioinformatics pipeline. The models were able to
accurately predict clock time for denoising sequences from a naturally
assembled community dataset, but not an artificial community. Speedup and
efficiency tests for AmpliconNoise also highlighted that caution was needed
when allocating resources for parallel processing of data. Accurate modelling
of computational processing time using easily measurable predictors can assist
NGS analysts in determining resource requirements for bioinformatics software
and pipelines. Whilst demonstrated on a specific group of scripts, the
methodology can be extended to encompass other packages running on multiple
architectures, either in parallel or sequentially.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 16:18:57 GMT"
}
] | 2015-03-11T00:00:00 | [
[
"Wade",
"Matthew J.",
""
],
[
"Curtis",
"Thomas P.",
""
],
[
"Davenport",
"Russell J.",
""
]
] | TITLE: Modelling Computational Resources for Next Generation Sequencing
Bioinformatics Analysis of 16S rRNA Samples
ABSTRACT: In the rapidly evolving domain of next generation sequencing and
bioinformatics analysis, data generation is one aspect that is increasing at a
concomitant rate. The burden associated with processing large amounts of
sequencing data has emphasised the need to allocate sufficient computing
resources to complete analyses in the shortest possible time with manageable
and predictable costs. A novel method for predicting time to completion for a
popular bioinformatics software package (QIIME) was developed using key variables
characteristic of the input data assumed to impact processing time. Multiple
Linear Regression models were developed to determine run time for two denoising
algorithms and a general bioinformatics pipeline. The models were able to
accurately predict clock time for denoising sequences from a naturally
assembled community dataset, but not an artificial community. Speedup and
efficiency tests for AmpliconNoise also highlighted that caution was needed
when allocating resources for parallel processing of data. Accurate modelling
of computational processing time using easily measurable predictors can assist
NGS analysts in determining resource requirements for bioinformatics software
and pipelines. Whilst demonstrated on a specific group of scripts, the
methodology can be extended to encompass other packages running on multiple
architectures, either in parallel or sequentially.
| no_new_dataset | 0.945045 |
1503.03021 | Anastasios Noulas Anastasios Noulas | Vsevolod Salnikov, Renaud Lambiotte, Anastasios Noulas, Cecilia
Mascolo | OpenStreetCab: Exploiting Taxi Mobility Patterns in New York City to
Reduce Commuter Costs | in NetMob 2015 | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of Uber as the global alternative taxi operator has attracted a lot
of interest recently. Aside from the media headlines which discuss the new
phenomenon, e.g. on how it has disrupted the traditional transportation
industry, policy makers, economists, citizens and scientists have engaged in a
discussion that is centred around the means to integrate the new generation of
the sharing economy services in urban ecosystems. In this work, we aim to shed
new light on the discussion, by taking advantage of a publicly available
longitudinal dataset that describes the mobility of yellow taxis in New York
City. In addition to movement, this data contains information on the fares paid
by the taxi customers for each trip. As a result we are given the opportunity
to provide a first head to head comparison between the iconic yellow taxi and
its modern competitor, Uber, in one of the world's largest metropolitan
centres. We identify situations when Uber X, the cheapest version of the Uber
taxi service, tends to be more expensive than yellow taxis for the same
journey. We also demonstrate how Uber's economic model effectively takes
advantage of well known patterns in human movement. Finally, we take our
analysis a step further by proposing a new mobile application that compares
taxi prices in the city to facilitate travellers' taxi choices, hoping
ultimately to lead to a reduction of commuter costs. Our study provides a case
on how big datasets that become public can improve urban services for consumers
by offering the opportunity for transparency in economic sectors that lack up
to date regulations.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 18:12:14 GMT"
}
] | 2015-03-11T00:00:00 | [
[
"Salnikov",
"Vsevolod",
""
],
[
"Lambiotte",
"Renaud",
""
],
[
"Noulas",
"Anastasios",
""
],
[
"Mascolo",
"Cecilia",
""
]
] | TITLE: OpenStreetCab: Exploiting Taxi Mobility Patterns in New York City to
Reduce Commuter Costs
ABSTRACT: The rise of Uber as the global alternative taxi operator has attracted a lot
of interest recently. Aside from the media headlines which discuss the new
phenomenon, e.g. on how it has disrupted the traditional transportation
industry, policy makers, economists, citizens and scientists have engaged in a
discussion that is centred around the means to integrate the new generation of
the sharing economy services in urban ecosystems. In this work, we aim to shed
new light on the discussion, by taking advantage of a publicly available
longitudinal dataset that describes the mobility of yellow taxis in New York
City. In addition to movement, this data contains information on the fares paid
by the taxi customers for each trip. As a result we are given the opportunity
to provide a first head to head comparison between the iconic yellow taxi and
its modern competitor, Uber, in one of the world's largest metropolitan
centres. We identify situations when Uber X, the cheapest version of the Uber
taxi service, tends to be more expensive than yellow taxis for the same
journey. We also demonstrate how Uber's economic model effectively takes
advantage of well known patterns in human movement. Finally, we take our
analysis a step further by proposing a new mobile application that compares
taxi prices in the city to facilitate travellers' taxi choices, hoping
ultimately to lead to a reduction of commuter costs. Our study provides a case
on how big datasets that become public can improve urban services for consumers
by offering the opportunity for transparency in economic sectors that lack up
to date regulations.
| new_dataset | 0.574421 |
1411.1091 | Jonathan Long | Jonathan Long, Ning Zhang, Trevor Darrell | Do Convnets Learn Correspondence? | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural nets (convnets) trained from massive labeled datasets
have substantially improved the state-of-the-art in image classification and
object detection. However, visual understanding requires establishing
correspondence on a finer level than object category. Given their large pooling
regions and training from whole-image labels, it is not clear that convnets
derive their success from an accurate correspondence model which could be used
for precise localization. In this paper, we study the effectiveness of convnet
activation features for tasks requiring correspondence. We present evidence
that convnet features localize at a much finer scale than their receptive field
sizes, that they can be used to perform intraclass alignment as well as
conventional hand-engineered features, and that they outperform conventional
features in keypoint prediction on objects from PASCAL VOC 2011.
| [
{
"version": "v1",
"created": "Tue, 4 Nov 2014 21:35:55 GMT"
}
] | 2015-03-10T00:00:00 | [
[
"Long",
"Jonathan",
""
],
[
"Zhang",
"Ning",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Do Convnets Learn Correspondence?
ABSTRACT: Convolutional neural nets (convnets) trained from massive labeled datasets
have substantially improved the state-of-the-art in image classification and
object detection. However, visual understanding requires establishing
correspondence on a finer level than object category. Given their large pooling
regions and training from whole-image labels, it is not clear that convnets
derive their success from an accurate correspondence model which could be used
for precise localization. In this paper, we study the effectiveness of convnet
activation features for tasks requiring correspondence. We present evidence
that convnet features localize at a much finer scale than their receptive field
sizes, that they can be used to perform intraclass alignment as well as
conventional hand-engineered features, and that they outperform conventional
features in keypoint prediction on objects from PASCAL VOC 2011.
| no_new_dataset | 0.952131 |
1502.00068 | Ameet Talwalkar | Evan R. Sparks, Ameet Talwalkar, Michael J. Franklin, Michael I.
Jordan, Tim Kraska | TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries | null | null | null | null | cs.DB cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of massive datasets combined with the development of
sophisticated analytical techniques have enabled a wide variety of novel
applications such as improved product recommendations, automatic image tagging,
and improved speech-driven interfaces. These and many other applications can be
supported by Predictive Analytic Queries (PAQs). A major obstacle to supporting
PAQs is the challenging and expensive process of identifying and training an
appropriate predictive model. Recent efforts aiming to automate this process
have focused on single node implementations and have assumed that model
training itself is a black box, thus limiting the effectiveness of such
approaches on large-scale problems. In this work, we build upon these recent
efforts and propose an integrated PAQ planning architecture that combines
advanced model search techniques, bandit resource allocation via runtime
algorithm introspection, and physical optimization via batching. The result is
TuPAQ, a component of the MLbase system, which solves the PAQ planning problem
with comparable quality to exhaustive strategies but an order of magnitude more
efficiently than the standard baseline approach, and can scale to models
trained on terabytes of data across hundreds of machines.
| [
{
"version": "v1",
"created": "Sat, 31 Jan 2015 04:51:58 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Mar 2015 22:02:24 GMT"
}
] | 2015-03-10T00:00:00 | [
[
"Sparks",
"Evan R.",
""
],
[
"Talwalkar",
"Ameet",
""
],
[
"Franklin",
"Michael J.",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Kraska",
"Tim",
""
]
] | TITLE: TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries
ABSTRACT: The proliferation of massive datasets combined with the development of
sophisticated analytical techniques have enabled a wide variety of novel
applications such as improved product recommendations, automatic image tagging,
and improved speech-driven interfaces. These and many other applications can be
supported by Predictive Analytic Queries (PAQs). A major obstacle to supporting
PAQs is the challenging and expensive process of identifying and training an
appropriate predictive model. Recent efforts aiming to automate this process
have focused on single node implementations and have assumed that model
training itself is a black box, thus limiting the effectiveness of such
approaches on large-scale problems. In this work, we build upon these recent
efforts and propose an integrated PAQ planning architecture that combines
advanced model search techniques, bandit resource allocation via runtime
algorithm introspection, and physical optimization via batching. The result is
TuPAQ, a component of the MLbase system, which solves the PAQ planning problem
with comparable quality to exhaustive strategies but an order of magnitude more
efficiently than the standard baseline approach, and can scale to models
trained on terabytes of data across hundreds of machines.
| no_new_dataset | 0.944177 |
1503.02216 | Yuning Yang | Yuning Yang, Siamak Mehrkanoon and Johan A.K. Suykens | Higher order Matching Pursuit for Low Rank Tensor Learning | null | null | null | null | stat.ML cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low rank tensor learning, such as tensor completion and multilinear multitask
learning, has received much attention in recent years. In this paper, we
propose higher order matching pursuit for low rank tensor learning problems
with a convex or a nonconvex cost function, which is a generalization of the
matching pursuit type methods. At each iteration, the main cost of the proposed
methods is only to compute a rank-one tensor, which can be done efficiently,
making the proposed methods scalable to large-scale problems. Moreover, the
resulting rank-one tensors require little storage, which can help to
break the curse of dimensionality. The linear convergence rate of the proposed
methods is established in various circumstances. Along with the main methods,
we also provide a method of low computational complexity for approximately
computing the rank-one tensors, with provable approximation ratio, which helps
to improve the efficiency of the main methods and to analyze the convergence
rate. Experimental results on synthetic as well as real datasets verify the
efficiency and effectiveness of the proposed methods.
| [
{
"version": "v1",
"created": "Sat, 7 Mar 2015 21:38:07 GMT"
}
] | 2015-03-10T00:00:00 | [
[
"Yang",
"Yuning",
""
],
[
"Mehrkanoon",
"Siamak",
""
],
[
"Suykens",
"Johan A. K.",
""
]
] | TITLE: Higher order Matching Pursuit for Low Rank Tensor Learning
ABSTRACT: Low rank tensor learning, such as tensor completion and multilinear multitask
learning, has received much attention in recent years. In this paper, we
propose higher order matching pursuit for low rank tensor learning problems
with a convex or a nonconvex cost function, which is a generalization of the
matching pursuit type methods. At each iteration, the main cost of the proposed
methods is only to compute a rank-one tensor, which can be done efficiently,
making the proposed methods scalable to large-scale problems. Moreover, the
resulting rank-one tensors require little storage, which can help to
break the curse of dimensionality. The linear convergence rate of the proposed
methods is established in various circumstances. Along with the main methods,
we also provide a method of low computational complexity for approximately
computing the rank-one tensors, with provable approximation ratio, which helps
to improve the efficiency of the main methods and to analyze the convergence
rate. Experimental results on synthetic as well as real datasets verify the
efficiency and effectiveness of the proposed methods.
| no_new_dataset | 0.94868 |
1503.02351 | Alexander Schwing | Alexander G. Schwing and Raquel Urtasun | Fully Connected Deep Structured Networks | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks with many layers have recently been shown to
achieve excellent results on many high-level tasks such as image
classification, object detection and more recently also semantic segmentation.
Particularly for semantic segmentation, a two-stage procedure is often
employed. Hereby, convolutional networks are trained to provide good local
pixel-wise features for the second step being traditionally a more global
graphical model. In this work we unify this two-stage process into a single
joint training algorithm. We demonstrate our method on the semantic image
segmentation task and show encouraging results on the challenging PASCAL VOC
2012 dataset.
| [
{
"version": "v1",
"created": "Mon, 9 Mar 2015 01:08:00 GMT"
}
] | 2015-03-10T00:00:00 | [
[
"Schwing",
"Alexander G.",
""
],
[
"Urtasun",
"Raquel",
""
]
] | TITLE: Fully Connected Deep Structured Networks
ABSTRACT: Convolutional neural networks with many layers have recently been shown to
achieve excellent results on many high-level tasks such as image
classification, object detection and more recently also semantic segmentation.
Particularly for semantic segmentation, a two-stage procedure is often
employed. Hereby, convolutional networks are trained to provide good local
pixel-wise features for the second step being traditionally a more global
graphical model. In this work we unify this two-stage process into a single
joint training algorithm. We demonstrate our method on the semantic image
segmentation task and show encouraging results on the challenging PASCAL VOC
2012 dataset.
| no_new_dataset | 0.954984 |
1402.4279 | Ingmar Schuster | Ingmar Schuster | A Bayesian Model of node interaction in networks | null | null | null | null | cs.LG stat.ME stat.ML | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We are concerned with modeling the strength of links in networks by taking
into account how often those links are used. Link usage is a strong indicator
of how closely two nodes are related, but existing network models in Bayesian
Statistics and Machine Learning are able to predict only whether a link exists
at all. As priors for latent attributes of network nodes we explore the Chinese
Restaurant Process (CRP) and a multivariate Gaussian with fixed dimensionality.
The model is applied to a social network dataset and a word co-occurrence
dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2014 10:34:41 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Mar 2015 10:22:12 GMT"
}
] | 2015-03-09T00:00:00 | [
[
"Schuster",
"Ingmar",
""
]
] | TITLE: A Bayesian Model of node interaction in networks
ABSTRACT: We are concerned with modeling the strength of links in networks by taking
into account how often those links are used. Link usage is a strong indicator
of how closely two nodes are related, but existing network models in Bayesian
Statistics and Machine Learning are able to predict only whether a link exists
at all. As priors for latent attributes of network nodes we explore the Chinese
Restaurant Process (CRP) and a multivariate Gaussian with fixed dimensionality.
The model is applied to a social network dataset and a word co-occurrence
dataset.
| no_new_dataset | 0.950088 |
1503.01812 | Vanessa Ayala-Rivera | Vanessa Ayala-Rivera, Patrick McDonagh, Thomas Cerqueus, Liam Murphy | Ontology-Based Quality Evaluation of Value Generalization Hierarchies
for Data Anonymization | 18 pages, 7 figures, presented in the Privacy in Statistical
Databases Conference 2014 (Ibiza, Spain) | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In privacy-preserving data publishing, approaches using Value Generalization
Hierarchies (VGHs) form an important class of anonymization algorithms. VGHs
play a key role in the utility of published datasets as they dictate how the
anonymization of the data occurs. For categorical attributes, it is imperative
to preserve the semantics of the original data in order to achieve a higher
utility. Despite this, semantics have not being formally considered in the
specification of VGHs. Moreover, there are no methods that allow the users to
assess the quality of their VGH. In this paper, we propose a measurement
scheme, based on ontologies, to quantitatively evaluate the quality of VGHs, in
terms of semantic consistency and taxonomic organization, with the aim of
producing higher-quality anonymizations. We demonstrate, through a case study,
how our evaluation scheme can be used to compare the quality of multiple VGHs
and can help to identify faulty VGHs.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 22:58:19 GMT"
}
] | 2015-03-09T00:00:00 | [
[
"Ayala-Rivera",
"Vanessa",
""
],
[
"McDonagh",
"Patrick",
""
],
[
"Cerqueus",
"Thomas",
""
],
[
"Murphy",
"Liam",
""
]
] | TITLE: Ontology-Based Quality Evaluation of Value Generalization Hierarchies
for Data Anonymization
ABSTRACT: In privacy-preserving data publishing, approaches using Value Generalization
Hierarchies (VGHs) form an important class of anonymization algorithms. VGHs
play a key role in the utility of published datasets as they dictate how the
anonymization of the data occurs. For categorical attributes, it is imperative
to preserve the semantics of the original data in order to achieve a higher
utility. Despite this, semantics have not being formally considered in the
specification of VGHs. Moreover, there are no methods that allow the users to
assess the quality of their VGH. In this paper, we propose a measurement
scheme, based on ontologies, to quantitatively evaluate the quality of VGHs, in
terms of semantic consistency and taxonomic organization, with the aim of
producing higher-quality anonymizations. We demonstrate, through a case study,
how our evaluation scheme can be used to compare the quality of multiple VGHs
and can help to identify faulty VGHs.
| no_new_dataset | 0.949295 |
1503.01820 | Ninghang Hu | Ninghang Hu, Gwenn Englebienne, Zhongyu Lou, and Ben Kr\"ose | Latent Hierarchical Model for Activity Recognition | null | null | null | null | cs.RO cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel hierarchical model for human activity recognition. In
contrast to approaches that successively recognize actions and activities, our
approach jointly models actions and activities in a unified framework, and
their labels are simultaneously predicted. The model is embedded with a latent
layer that is able to capture a richer class of contextual information in both
state-state and observation-state pairs. Although loops are present in the
model, the model has an overall linear-chain structure, where the exact
inference is tractable. Therefore, the model is very efficient in both
inference and learning. The parameters of the graphical model are learned with
a Structured Support Vector Machine (Structured-SVM). A data-driven approach is
used to initialize the latent variables; therefore, no manual labeling for the
latent states is required. The experimental results from using two benchmark
datasets show that our model outperforms the state-of-the-art approach, and our
model is computationally more efficient.
| [
{
"version": "v1",
"created": "Fri, 6 Mar 2015 00:05:12 GMT"
}
] | 2015-03-09T00:00:00 | [
[
"Hu",
"Ninghang",
""
],
[
"Englebienne",
"Gwenn",
""
],
[
"Lou",
"Zhongyu",
""
],
[
"Kröse",
"Ben",
""
]
] | TITLE: Latent Hierarchical Model for Activity Recognition
ABSTRACT: We present a novel hierarchical model for human activity recognition. In
contrast to approaches that successively recognize actions and activities, our
approach jointly models actions and activities in a unified framework, and
their labels are simultaneously predicted. The model is embedded with a latent
layer that is able to capture a richer class of contextual information in both
state-state and observation-state pairs. Although loops are present in the
model, the model has an overall linear-chain structure, where the exact
inference is tractable. Therefore, the model is very efficient in both
inference and learning. The parameters of the graphical model are learned with
a Structured Support Vector Machine (Structured-SVM). A data-driven approach is
used to initialize the latent variables; therefore, no manual labeling for the
latent states is required. The experimental results from using two benchmark
datasets show that our model outperforms the state-of-the-art approach, and our
model is computationally more efficient.
| no_new_dataset | 0.949059 |
1503.01918 | Matej Kristan | Matej Kristan, Vildana Sulic, Stanislav Kovacic, Janez Pers | Fast image-based obstacle detection from unmanned surface vehicles | This is an extended version of the ACCV2014 paper [Kristan et al.,
2014] submitted to a journal. [Kristan et al., 2014] M. Kristan, J. Pers, V.
Sulic, S. Kovacic, A graphical model for rapid obstacle image-map estimation
from unmanned surface vehicles, in Proc. Asian Conf. Computer Vision, 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obstacle detection plays an important role in unmanned surface vehicles
(USV). The USVs operate in highly diverse environments in which an obstacle may
be a floating piece of wood, a scuba diver, a pier, or a part of a shoreline,
which presents a significant challenge to continuous detection from images
taken onboard. This paper addresses the problem of online detection by
constrained unsupervised segmentation. To this end, a new graphical model is
proposed that affords a fast and continuous obstacle image-map estimation from
a single video stream captured onboard a USV. The model accounts for the
semantic structure of the marine environment as observed from a USV by imposing weak
structural constraints. A Markov random field framework is adopted and a highly
efficient algorithm for simultaneous optimization of model parameters and
segmentation mask estimation is derived. Our approach does not require
computationally intensive extraction of texture features and comfortably runs
in real-time. The algorithm is tested on a new, challenging dataset for
segmentation and obstacle detection in marine environments, which is the
largest annotated dataset of its kind. Results on this dataset show that our
model outperforms the related approaches, while requiring a fraction of
computational effort.
| [
{
"version": "v1",
"created": "Fri, 6 Mar 2015 11:21:07 GMT"
}
] | 2015-03-09T00:00:00 | [
[
"Kristan",
"Matej",
""
],
[
"Sulic",
"Vildana",
""
],
[
"Kovacic",
"Stanislav",
""
],
[
"Pers",
"Janez",
""
]
] | TITLE: Fast image-based obstacle detection from unmanned surface vehicles
ABSTRACT: Obstacle detection plays an important role in unmanned surface vehicles
(USV). The USVs operate in highly diverse environments in which an obstacle may
be a floating piece of wood, a scuba diver, a pier, or a part of a shoreline,
which presents a significant challenge to continuous detection from images
taken onboard. This paper addresses the problem of online detection by
constrained unsupervised segmentation. To this end, a new graphical model is
proposed that affords a fast and continuous obstacle image-map estimation from
a single video stream captured onboard a USV. The model accounts for the
semantic structure of the marine environment as observed from a USV by imposing weak
structural constraints. A Markov random field framework is adopted and a highly
efficient algorithm for simultaneous optimization of model parameters and
segmentation mask estimation is derived. Our approach does not require
computationally intensive extraction of texture features and comfortably runs
in real-time. The algorithm is tested on a new, challenging dataset for
segmentation and obstacle detection in marine environments, which is the
largest annotated dataset of its kind. Results on this dataset show that our
model outperforms the related approaches, while requiring a fraction of
computational effort.
| new_dataset | 0.966632 |
1503.02031 | Vivek Kulkarni | Prateek Jain, Vivek Kulkarni, Abhradeep Thakurta, Oliver Williams | To Drop or Not to Drop: Robustness, Consistency and Differential Privacy
Properties of Dropout | Currently under review for ICML 2015 | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training deep belief networks (DBNs) requires optimizing a non-convex
function with an extremely large number of parameters. Naturally, existing
gradient descent (GD) based methods are prone to arbitrarily poor local minima.
In this paper, we rigorously show that such local minima can be avoided (up to
an approximation error) by using the dropout technique, a widely used heuristic
in this domain. In particular, we show that by randomly dropping a few nodes of
a one-hidden layer neural network, the training objective function, up to a
certain approximation error, decreases by a multiplicative factor.
On the flip side, we show that for training convex empirical risk minimizers
(ERM), dropout in fact acts as a "stabilizer" or regularizer. That is, a simple
dropout based GD method for convex ERMs is stable in the face of arbitrary
changes to any one of the training points. Using the above assertion, we show
that dropout provides fast rates for generalization error in learning (convex)
generalized linear models (GLM). Moreover, using the above mentioned stability
properties of dropout, we design dropout based differentially private
algorithms for solving ERMs. The learned GLM thus preserves privacy of each of
the individual training points while providing accurate predictions for new
test points. Finally, we empirically validate our stability assertions for
dropout in the context of convex ERMs and show that surprisingly, dropout
significantly outperforms (in terms of prediction accuracy) the L2
regularization based methods for several benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 6 Mar 2015 18:39:53 GMT"
}
] | 2015-03-09T00:00:00 | [
[
"Jain",
"Prateek",
""
],
[
"Kulkarni",
"Vivek",
""
],
[
"Thakurta",
"Abhradeep",
""
],
[
"Williams",
"Oliver",
""
]
] | TITLE: To Drop or Not to Drop: Robustness, Consistency and Differential Privacy
Properties of Dropout
ABSTRACT: Training deep belief networks (DBNs) requires optimizing a non-convex
function with an extremely large number of parameters. Naturally, existing
gradient descent (GD) based methods are prone to arbitrarily poor local minima.
In this paper, we rigorously show that such local minima can be avoided (up to
an approximation error) by using the dropout technique, a widely used heuristic
in this domain. In particular, we show that by randomly dropping a few nodes of
a one-hidden layer neural network, the training objective function, up to a
certain approximation error, decreases by a multiplicative factor.
On the flip side, we show that for training convex empirical risk minimizers
(ERM), dropout in fact acts as a "stabilizer" or regularizer. That is, a simple
dropout based GD method for convex ERMs is stable in the face of arbitrary
changes to any one of the training points. Using the above assertion, we show
that dropout provides fast rates for generalization error in learning (convex)
generalized linear models (GLM). Moreover, using the above mentioned stability
properties of dropout, we design dropout based differentially private
algorithms for solving ERMs. The learned GLM thus preserves privacy of each of
the individual training points while providing accurate predictions for new
test points. Finally, we empirically validate our stability assertions for
dropout in the context of convex ERMs and show that surprisingly, dropout
significantly outperforms (in terms of prediction accuracy) the L2
regularization based methods for several benchmark datasets.
| no_new_dataset | 0.946597 |
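A brief illustration for the dropout record above: the operation the abstract analyzes, randomly dropping nodes of a one-hidden-layer network, amounts computationally to a Bernoulli mask on the hidden activations. The following minimal NumPy sketch is ours, not the paper's; the inverted-dropout scaling is a common convention and is an assumption here.

    import numpy as np

    def hidden_layer_with_dropout(x, W, b, p_drop=0.5, train=True, rng=None):
        """One hidden layer h = relu(W x + b) with Bernoulli dropout on its units.
        Uses 'inverted' scaling so no rescaling is needed at test time."""
        rng = np.random.default_rng() if rng is None else rng
        h = np.maximum(0.0, W @ x + b)            # hidden activations
        if train and p_drop > 0.0:
            mask = rng.random(h.shape) >= p_drop  # keep each unit with prob. 1 - p_drop
            h = h * mask / (1.0 - p_drop)
        return h

    # Roughly half of the 8 hidden units are zeroed out on each training-time call.
    x = np.ones(4)
    W = np.full((8, 4), 0.25)
    b = np.zeros(8)
    print(hidden_layer_with_dropout(x, W, b, rng=np.random.default_rng(0)))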
1401.6330 | Li Dong | Li Dong, Furu Wei, Shujie Liu, Ming Zhou, Ke Xu | A Statistical Parsing Framework for Sentiment Classification | Accepted by Computational Linguistics | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a statistical parsing framework for sentence-level sentiment
classification in this article. Unlike previous works that employ syntactic
parsing results for sentiment analysis, we develop a statistical parser to
directly analyze the sentiment structure of a sentence. We show that
complicated phenomena in sentiment analysis (e.g., negation, intensification,
and contrast) can be handled the same as simple and straightforward sentiment
expressions in a unified and probabilistic way. We formulate the sentiment
grammar upon Context-Free Grammars (CFGs), and provide a formal description of
the sentiment parsing framework. We develop the parsing model to obtain
possible sentiment parse trees for a sentence, from which the polarity model is
proposed to derive the sentiment strength and polarity, and the ranking model
is dedicated to selecting the best sentiment tree. We train the parser directly
from examples of sentences annotated only with sentiment polarity labels but
without any syntactic annotations or polarity annotations of constituents
within sentences. Therefore we can obtain training data easily. In particular,
we train a sentiment parser, s.parser, from a large amount of review sentences
with users' ratings as rough sentiment polarity labels. Extensive experiments
on existing benchmark datasets show significant improvements over baseline
sentiment classification approaches.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2014 12:56:36 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Mar 2015 05:26:13 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Dong",
"Li",
""
],
[
"Wei",
"Furu",
""
],
[
"Liu",
"Shujie",
""
],
[
"Zhou",
"Ming",
""
],
[
"Xu",
"Ke",
""
]
] | TITLE: A Statistical Parsing Framework for Sentiment Classification
ABSTRACT: We present a statistical parsing framework for sentence-level sentiment
classification in this article. Unlike previous works that employ syntactic
parsing results for sentiment analysis, we develop a statistical parser to
directly analyze the sentiment structure of a sentence. We show that
complicated phenomena in sentiment analysis (e.g., negation, intensification,
and contrast) can be handled the same as simple and straightforward sentiment
expressions in a unified and probabilistic way. We formulate the sentiment
grammar upon Context-Free Grammars (CFGs), and provide a formal description of
the sentiment parsing framework. We develop the parsing model to obtain
possible sentiment parse trees for a sentence, from which the polarity model is
proposed to derive the sentiment strength and polarity, and the ranking model
is dedicated to selecting the best sentiment tree. We train the parser directly
from examples of sentences annotated only with sentiment polarity labels but
without any syntactic annotations or polarity annotations of constituents
within sentences. Therefore we can obtain training data easily. In particular,
we train a sentiment parser, s.parser, from a large amount of review sentences
with users' ratings as rough sentiment polarity labels. Extensive experiments
on existing benchmark datasets show significant improvements over baseline
sentiment classification approaches.
| no_new_dataset | 0.951908 |
1406.4625 | Bobak Shahriari | Bobak Shahriari and Ziyu Wang and Matthew W. Hoffman and Alexandre
Bouchard-C\^ot\'e and Nando de Freitas | An Entropy Search Portfolio for Bayesian Optimization | 10 pages, 5 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian optimization is a sample-efficient method for black-box global
optimization. However, the performance of a Bayesian optimization method very
much depends on its exploration strategy, i.e. the choice of acquisition
function, and it is not clear a priori which choice will result in superior
performance. While portfolio methods provide an effective, principled way of
combining a collection of acquisition functions, they are often based on
measures of past performance which can be misleading. To address this issue, we
introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio
construction which is motivated by information theoretic considerations. We
show that ESP outperforms existing portfolio methods on several real and
synthetic problems, including geostatistical datasets and simulated control
tasks. We not only show that ESP is able to offer performance as good as the
best, but unknown, acquisition function, but surprisingly it often gives better
performance. Finally, over a wide range of conditions we find that ESP is
robust to the inclusion of poor acquisition functions.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2014 07:26:08 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Oct 2014 23:58:14 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Oct 2014 15:54:56 GMT"
},
{
"version": "v4",
"created": "Wed, 4 Mar 2015 21:25:31 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Shahriari",
"Bobak",
""
],
[
"Wang",
"Ziyu",
""
],
[
"Hoffman",
"Matthew W.",
""
],
[
"Bouchard-Côté",
"Alexandre",
""
],
[
"de Freitas",
"Nando",
""
]
] | TITLE: An Entropy Search Portfolio for Bayesian Optimization
ABSTRACT: Bayesian optimization is a sample-efficient method for black-box global
optimization. However, the performance of a Bayesian optimization method very
much depends on its exploration strategy, i.e. the choice of acquisition
function, and it is not clear a priori which choice will result in superior
performance. While portfolio methods provide an effective, principled way of
combining a collection of acquisition functions, they are often based on
measures of past performance which can be misleading. To address this issue, we
introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio
construction which is motivated by information theoretic considerations. We
show that ESP outperforms existing portfolio methods on several real and
synthetic problems, including geostatistical datasets and simulated control
tasks. We not only show that ESP is able to offer performance as good as the
best, but unknown, acquisition function, but surprisingly it often gives better
performance. Finally, over a wide range of conditions we find that ESP is
robust to the inclusion of poor acquisition functions.
| no_new_dataset | 0.934991 |
1503.01508 | Xiangxin Zhu | Xiangxin Zhu, Carl Vondrick, Charless Fowlkes, Deva Ramanan | Do We Need More Training Data? | null | null | 10.1007/s11263-015-0812-2 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Datasets for training object recognition systems are steadily increasing in
size. This paper investigates the question of whether existing detectors will
continue to improve as data grows, or saturate in performance due to limited
model complexity and the Bayes risk associated with the feature spaces in which
they operate. We focus on the popular paradigm of discriminatively trained
templates defined on oriented gradient features. We investigate the performance
of mixtures of templates as the number of mixture components and the amount of
training data grows. Surprisingly, even with proper treatment of regularization
and "outliers", the performance of classic mixture models appears to saturate
quickly ($\sim$10 templates and $\sim$100 positive training examples per
template). This is not a limitation of the feature space as compositional
mixtures that share template parameters via parts and that can synthesize new
templates not encountered during training yield significantly better
performance. Based on our analysis, we conjecture that the greatest gains in
detection performance will continue to derive from improved representations and
learning algorithms that can make efficient use of large datasets.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 01:51:12 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Zhu",
"Xiangxin",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Fowlkes",
"Charless",
""
],
[
"Ramanan",
"Deva",
""
]
] | TITLE: Do We Need More Training Data?
ABSTRACT: Datasets for training object recognition systems are steadily increasing in
size. This paper investigates the question of whether existing detectors will
continue to improve as data grows, or saturate in performance due to limited
model complexity and the Bayes risk associated with the feature spaces in which
they operate. We focus on the popular paradigm of discriminatively trained
templates defined on oriented gradient features. We investigate the performance
of mixtures of templates as the number of mixture components and the amount of
training data grows. Surprisingly, even with proper treatment of regularization
and "outliers", the performance of classic mixture models appears to saturate
quickly ($\sim$10 templates and $\sim$100 positive training examples per
template). This is not a limitation of the feature space as compositional
mixtures that share template parameters via parts and that can synthesize new
templates not encountered during training yield significantly better
performance. Based on our analysis, we conjecture that the greatest gains in
detection performance will continue to derive from improved representations and
learning algorithms that can make efficient use of large datasets.
| no_new_dataset | 0.947186 |
1503.01538 | Natalia Bilenko | Natalia Y. Bilenko and Jack L. Gallant | Pyrcca: regularized kernel canonical correlation analysis in Python and
its applications to neuroimaging | null | null | null | null | q-bio.QM cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Canonical correlation analysis (CCA) is a valuable method for interpreting
cross-covariance across related datasets of different dimensionality. There are
many potential applications of CCA to neuroimaging data analysis. For instance,
CCA can be used for finding functional similarities across fMRI datasets
collected from multiple subjects without resampling individual datasets to a
template anatomy. In this paper, we introduce Pyrcca, an open-source Python
module for executing CCA between two or more datasets. Pyrcca can be used to
implement CCA with or without regularization, and with or without linear or a
Gaussian kernelization of the datasets. We demonstrate an application of CCA
implemented with Pyrcca to neuroimaging data analysis. We use CCA to find a
data-driven set of functional response patterns that are similar across
individual subjects in a natural movie experiment. We then demonstrate how this
set of response patterns discovered by CCA can be used to accurately predict
subject responses to novel natural movie stimuli.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 04:57:22 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Bilenko",
"Natalia Y.",
""
],
[
"Gallant",
"Jack L.",
""
]
] | TITLE: Pyrcca: regularized kernel canonical correlation analysis in Python and
its applications to neuroimaging
ABSTRACT: Canonical correlation analysis (CCA) is a valuable method for interpreting
cross-covariance across related datasets of different dimensionality. There are
many potential applications of CCA to neuroimaging data analysis. For instance,
CCA can be used for finding functional similarities across fMRI datasets
collected from multiple subjects without resampling individual datasets to a
template anatomy. In this paper, we introduce Pyrcca, an open-source Python
module for executing CCA between two or more datasets. Pyrcca can be used to
implement CCA with or without regularization, and with or without linear or a
Gaussian kernelization of the datasets. We demonstrate an application of CCA
implemented with Pyrcca to neuroimaging data analysis. We use CCA to find a
data-driven set of functional response patterns that are similar across
individual subjects in a natural movie experiment. We then demonstrate how this
set of response patterns discovered by CCA can be used to accurately predict
subject responses to novel natural movie stimuli.
| no_new_dataset | 0.939081 |
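For orientation on the Pyrcca record above: the core of regularized linear CCA can be written as a generalized symmetric eigenproblem. The sketch below is a minimal NumPy/SciPy formulation of that core computation, not Pyrcca's own API (function and parameter names are ours); the package additionally offers the kernelized variants described in the abstract.

    import numpy as np
    from scipy.linalg import eigh

    def regularized_cca(X, Y, reg=1e-2, n_components=2):
        """Minimal linear CCA with ridge regularization, solved as a
        generalized symmetric eigenproblem on the block covariance matrices."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        n, p = X.shape
        q = Y.shape[1]
        Cxx = X.T @ X / n + reg * np.eye(p)
        Cyy = Y.T @ Y / n + reg * np.eye(q)
        Cxy = X.T @ Y / n
        A = np.zeros((p + q, p + q))
        A[:p, p:] = Cxy
        A[p:, :p] = Cxy.T
        B = np.zeros((p + q, p + q))
        B[:p, :p] = Cxx
        B[p:, p:] = Cyy
        vals, vecs = eigh(A, B)                         # eigenvalues in ascending order
        idx = np.argsort(vals)[::-1][:n_components]     # largest ones are the canonical correlations
        Wx, Wy = vecs[:p, idx], vecs[p:, idx]           # canonical weights for each view
        return Wx, Wy, vals[idx]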
1503.01647 | Zhangyang Wang | Zhangyang Wang, Xianming Liu, Shiyu Chang, Jiayu Zhou, Guo-Jun Qi,
Thomas S. Huang | Decentralized Recommender Systems | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a decentralized recommender system by formulating the
popular collaborative filtering (CF) model into a decentralized matrix
completion form over a set of users. In this way, data storage and
computation are fully distributed. Each user could exchange limited
information with its local neighborhood, and thus it avoids the centralized
fusion. Advantages of the proposed system include a protection on user privacy,
as well as better scalability and robustness. We compare our proposed algorithm
with several state-of-the-art algorithms on the FlickerUserFavor dataset, and
demonstrate that the decentralized algorithm can gain a competitive performance
to others.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 14:34:02 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Wang",
"Zhangyang",
""
],
[
"Liu",
"Xianming",
""
],
[
"Chang",
"Shiyu",
""
],
[
"Zhou",
"Jiayu",
""
],
[
"Qi",
"Guo-Jun",
""
],
[
"Huang",
"Thomas S.",
""
]
] | TITLE: Decentralized Recommender Systems
ABSTRACT: This paper proposes a decentralized recommender system by formulating the
popular collaborative filtering (CF) model into a decentralized matrix
completion form over a set of users. In this way, data storage and
computation are fully distributed. Each user could exchange limited
information with its local neighborhood, and thus it avoids the centralized
fusion. Advantages of the proposed system include a protection on user privacy,
as well as better scalability and robustness. We compare our proposed algorithm
with several state-of-the-art algorithms on the FlickerUserFavor dataset, and
demonstrate that the decentralized algorithm achieves performance competitive
with the others.
| no_new_dataset | 0.949059 |
1503.01657 | Rui Zeng | Rui Zeng, Jiasong Wu, Zhuhong Shao, Yang Chen, Lotfi Senhadji,
Huazhong Shu | Color Image Classification via Quaternion Principal Component Analysis
Network | 9 figures,5 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Principal Component Analysis Network (PCANet), which is one of the
recently proposed deep learning architectures, achieves the state-of-the-art
classification accuracy in various databases. However, the performance of
PCANet may be degraded when dealing with color images. In this paper, a
Quaternion Principal Component Analysis Network (QPCANet), which is an
extension of PCANet, is proposed for color image classification. Compared to
PCANet, the proposed QPCANet takes into account the spatial distribution
information of color images and ensures a larger amount of intra-class invariance
of color images. Experiments conducted on different color image datasets such
as Caltech-101, UC Merced Land Use, Georgia Tech face and CUReT have revealed
that the proposed QPCANet achieves higher classification accuracy than PCANet.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 15:12:28 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Zeng",
"Rui",
""
],
[
"Wu",
"Jiasong",
""
],
[
"Shao",
"Zhuhong",
""
],
[
"Chen",
"Yang",
""
],
[
"Senhadji",
"Lotfi",
""
],
[
"Shu",
"Huazhong",
""
]
] | TITLE: Color Image Classification via Quaternion Principal Component Analysis
Network
ABSTRACT: The Principal Component Analysis Network (PCANet), which is one of the
recently proposed deep learning architectures, achieves the state-of-the-art
classification accuracy in various databases. However, the performance of
PCANet may be degraded when dealing with color images. In this paper, a
Quaternion Principal Component Analysis Network (QPCANet), which is an
extension of PCANet, is proposed for color image classification. Compared to
PCANet, the proposed QPCANet takes into account the spatial distribution
information of color images and ensures a larger amount of intra-class invariance
of color images. Experiments conducted on different color image datasets such
as Caltech-101, UC Merced Land Use, Georgia Tech face and CUReT have revealed
that the proposed QPCANet achieves higher classification accuracy than PCANet.
| no_new_dataset | 0.952838 |
1503.01737 | Ping Li | Ping Li | Min-Max Kernels | null | null | null | null | stat.ML cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The min-max kernel is a generalization of the popular resemblance kernel
(which is designed for binary data). In this paper, we demonstrate, through an
extensive classification study using kernel machines, that the min-max kernel
often provides an effective measure of similarity for nonnegative data. As the
min-max kernel is nonlinear and might be difficult to be used for industrial
applications with massive data, we show that the min-max kernel can be
linearized via hashing techniques. This allows practitioners to apply min-max
kernel to large-scale applications using well matured linear algorithms such as
linear SVM or logistic regression.
The previous remarkable work on consistent weighted sampling (CWS) produces
samples in the form of ($i^*, t^*$) where the $i^*$ records the location (and
in fact also the weights) information analogous to the samples produced by
classical minwise hashing on binary data. Because the $t^*$ is theoretically
unbounded, it was not immediately clear how to effectively implement CWS for
building large-scale linear classifiers. In this paper, we provide a simple
solution by discarding $t^*$ (which we refer to as the "0-bit" scheme). Via an
extensive empirical study, we show that this 0-bit scheme does not lose
essential information. We then apply the "0-bit" CWS for building linear
classifiers to approximate min-max kernel classifiers, as extensively validated
on a wide range of publicly available classification datasets. We expect this
work will generate interest among data mining practitioners who would like to
efficiently utilize the nonlinear information of non-binary and nonnegative
data.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 19:29:03 GMT"
}
] | 2015-03-06T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: Min-Max Kernels
ABSTRACT: The min-max kernel is a generalization of the popular resemblance kernel
(which is designed for binary data). In this paper, we demonstrate, through an
extensive classification study using kernel machines, that the min-max kernel
often provides an effective measure of similarity for nonnegative data. As the
min-max kernel is nonlinear and might be difficult to use for industrial
applications with massive data, we show that the min-max kernel can be
linearized via hashing techniques. This allows practitioners to apply min-max
kernel to large-scale applications using well matured linear algorithms such as
linear SVM or logistic regression.
The previous remarkable work on consistent weighted sampling (CWS) produces
samples in the form of ($i^*, t^*$) where the $i^*$ records the location (and
in fact also the weights) information analogous to the samples produced by
classical minwise hashing on binary data. Because the $t^*$ is theoretically
unbounded, it was not immediately clear how to effectively implement CWS for
building large-scale linear classifiers. In this paper, we provide a simple
solution by discarding $t^*$ (which we refer to as the "0-bit" scheme). Via an
extensive empirical study, we show that this 0-bit scheme does not lose
essential information. We then apply the "0-bit" CWS for building linear
classifiers to approximate min-max kernel classifiers, as extensively validated
on a wide range of publicly available classification datasets. We expect this
work will generate interest among data mining practitioners who would like to
efficiently utilize the nonlinear information of non-binary and nonnegative
data.
| no_new_dataset | 0.947235 |
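As a pointer for the min-max kernel record above: for nonnegative vectors the kernel is simply the ratio of the summed elementwise minima to the summed elementwise maxima, which is the exact quantity that the CWS-based hashing in the abstract approximates. A minimal NumPy sketch (the function name and example values are ours):

    import numpy as np

    def min_max_kernel(x, y):
        """Exact min-max kernel for nonnegative vectors:
        K(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i)."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        assert np.all(x >= 0) and np.all(y >= 0), "min-max kernel assumes nonnegative data"
        denom = np.maximum(x, y).sum()
        return np.minimum(x, y).sum() / denom if denom > 0 else 0.0

    # On binary vectors the min-max kernel reduces to the resemblance (Jaccard) similarity.
    a = np.array([1, 0, 1, 1, 0])
    b = np.array([1, 1, 1, 0, 0])
    print(min_max_kernel(a, b))  # 2 / 4 = 0.5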
1503.01156 | David Felber | David Felber and Rafail Ostrovsky | A randomized online quantile summary in $O(\frac{1}{\varepsilon} \log
\frac{1}{\varepsilon})$ words | slight fixes to version submitted to ICALP 2015--mistake in time
complexity, and a few minor numeric miscalculations in section 3 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A quantile summary is a data structure that approximates to
$\varepsilon$-relative error the order statistics of a much larger underlying
dataset.
In this paper we develop a randomized online quantile summary for the cash
register data input model and comparison data domain model that uses
$O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words of memory. This
improves upon the previous best upper bound of $O(\frac{1}{\varepsilon}
\log^{3/2} \frac{1}{\varepsilon})$ by Agarwal et al. (PODS 2012). Further, by
a lower bound of Hung and Ting (FAW 2010) no deterministic summary for the
comparison model can outperform our randomized summary in terms of space
complexity. Lastly, our summary has the nice property that
$O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words suffice to ensure
that the success probability is $1 - e^{-\text{poly}(1/\varepsilon)}$.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 22:58:55 GMT"
}
] | 2015-03-05T00:00:00 | [
[
"Felber",
"David",
""
],
[
"Ostrovsky",
"Rafail",
""
]
] | TITLE: A randomized online quantile summary in $O(\frac{1}{\varepsilon} \log
\frac{1}{\varepsilon})$ words
ABSTRACT: A quantile summary is a data structure that approximates to
$\varepsilon$-relative error the order statistics of a much larger underlying
dataset.
In this paper we develop a randomized online quantile summary for the cash
register data input model and comparison data domain model that uses
$O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words of memory. This
improves upon the previous best upper bound of $O(\frac{1}{\varepsilon}
\log^{3/2} \frac{1}{\varepsilon})$ by Agarwal et al. (PODS 2012). Further, by
a lower bound of Hung and Ting (FAW 2010) no deterministic summary for the
comparison model can outperform our randomized summary in terms of space
complexity. Lastly, our summary has the nice property that
$O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words suffice to ensure
that the success probability is $1 - e^{-\text{poly}(1/\varepsilon)}$.
| no_new_dataset | 0.944485 |
1503.01228 | Kui Tang | Kui Tang, Nicholas Ruozzi, David Belanger, Tony Jebara | Bethe Learning of Conditional Random Fields via MAP Decoding | 19 pages (9 supplementary), 10 figures (3 supplementary) | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many machine learning tasks can be formulated in terms of predicting
structured outputs. In frameworks such as the structured support vector machine
(SVM-Struct) and the structured perceptron, discriminative functions are
learned by iteratively applying efficient maximum a posteriori (MAP) decoding.
However, maximum likelihood estimation (MLE) of probabilistic models over these
same structured spaces requires computing partition functions, which is
generally intractable. This paper presents a method for learning discrete
exponential family models using the Bethe approximation to the MLE. Remarkably,
this problem also reduces to iterative (MAP) decoding. This connection emerges
by combining the Bethe approximation with a Frank-Wolfe (FW) algorithm on a
convex dual objective which circumvents the intractable partition function. The
result is a new single loop algorithm MLE-Struct, which is substantially more
efficient than previous double-loop methods for approximate maximum likelihood
estimation. Our algorithm outperforms existing methods in experiments involving
image segmentation, matching problems from vision, and a new dataset of
university roommate assignments.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2015 05:41:29 GMT"
}
] | 2015-03-05T00:00:00 | [
[
"Tang",
"Kui",
""
],
[
"Ruozzi",
"Nicholas",
""
],
[
"Belanger",
"David",
""
],
[
"Jebara",
"Tony",
""
]
] | TITLE: Bethe Learning of Conditional Random Fields via MAP Decoding
ABSTRACT: Many machine learning tasks can be formulated in terms of predicting
structured outputs. In frameworks such as the structured support vector machine
(SVM-Struct) and the structured perceptron, discriminative functions are
learned by iteratively applying efficient maximum a posteriori (MAP) decoding.
However, maximum likelihood estimation (MLE) of probabilistic models over these
same structured spaces requires computing partition functions, which is
generally intractable. This paper presents a method for learning discrete
exponential family models using the Bethe approximation to the MLE. Remarkably,
this problem also reduces to iterative (MAP) decoding. This connection emerges
by combining the Bethe approximation with a Frank-Wolfe (FW) algorithm on a
convex dual objective which circumvents the intractable partition function. The
result is a new single loop algorithm MLE-Struct, which is substantially more
efficient than previous double-loop methods for approximate maximum likelihood
estimation. Our algorithm outperforms existing methods in experiments involving
image segmentation, matching problems from vision, and a new dataset of
university roommate assignments.
| new_dataset | 0.967808 |
1503.01393 | Mete Ozay | Mete Ozay, Krzysztof Walas, Ales Leonardis | A Hierarchical Approach for Joint Multi-view Object Pose Estimation and
Categorization | 7 Figures | Proceedings of IEEE International Conference on Robotics and
Automation (ICRA), pp. 5480 - 5487, Hong Kong, 2014 | 10.1109/ICRA.2014.6907665 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a joint object pose estimation and categorization approach which
extracts information about object poses and categories from the object parts
and compositions constructed at different layers of a hierarchical object
representation algorithm, namely Learned Hierarchy of Parts (LHOP). In the
proposed approach, we first employ the LHOP to learn hierarchical part
libraries which represent entity parts and compositions across different object
categories and views. Then, we extract statistical and geometric features from
the part realizations of the objects in the images in order to represent the
information about object pose and category at each different layer of the
hierarchy. Unlike the traditional approaches which consider specific layers of
the hierarchies in order to extract information to perform specific tasks, we
combine the information extracted at different layers to solve a joint object
pose estimation and categorization problem using distributed optimization
algorithms. We examine the proposed generative-discriminative learning approach
and the algorithms on two benchmark 2-D multi-view image datasets. The proposed
approach and the algorithms outperform state-of-the-art classification,
regression and feature extraction algorithms. In addition, the experimental
results shed light on the relationship between object categorization, pose
estimation and the part realizations observed at different layers of the
hierarchy.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2015 17:17:48 GMT"
}
] | 2015-03-05T00:00:00 | [
[
"Ozay",
"Mete",
""
],
[
"Walas",
"Krzysztof",
""
],
[
"Leonardis",
"Ales",
""
]
] | TITLE: A Hierarchical Approach for Joint Multi-view Object Pose Estimation and
Categorization
ABSTRACT: We propose a joint object pose estimation and categorization approach which
extracts information about object poses and categories from the object parts
and compositions constructed at different layers of a hierarchical object
representation algorithm, namely Learned Hierarchy of Parts (LHOP). In the
proposed approach, we first employ the LHOP to learn hierarchical part
libraries which represent entity parts and compositions across different object
categories and views. Then, we extract statistical and geometric features from
the part realizations of the objects in the images in order to represent the
information about object pose and category at each different layer of the
hierarchy. Unlike the traditional approaches which consider specific layers of
the hierarchies in order to extract information to perform specific tasks, we
combine the information extracted at different layers to solve a joint object
pose estimation and categorization problem using distributed optimization
algorithms. We examine the proposed generative-discriminative learning approach
and the algorithms on two benchmark 2-D multi-view image datasets. The proposed
approach and the algorithms outperform state-of-the-art classification,
regression and feature extraction algorithms. In addition, the experimental
results shed light on the relationship between object categorization, pose
estimation and the part realizations observed at different layers of the
hierarchy.
| no_new_dataset | 0.946001 |
1412.8504 | Diego Amancio | Diego R. Amancio | Probing the topological properties of complex networks modeling short
written texts | null | PLoS ONE 10(2): e0118394, 2015 | 10.1371/journal.pone.0118394 | null | cs.CL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, graph theory has been widely employed to probe several
language properties. More specifically, the so-called word adjacency model has
been proven useful for tackling several practical problems, especially those
relying on textual stylistic analysis. The most common approach to treat texts
as networks has simply considered either large pieces of texts or entire books.
This approach has certainly worked well -- many informative discoveries have
been made this way -- but it raises an uncomfortable question: could there be
important topological patterns in small pieces of texts? To address this
problem, the topological properties of subtexts sampled from entire books were
probed. Statistical analyses performed on a dataset comprising 50 novels
revealed that most of the traditional topological measurements are stable for
short subtexts. When the performance of the authorship recognition task was
analyzed, it was found that a proper sampling yields a discriminability similar
to the one found with full texts. Surprisingly, the support vector machine
classification based on the characterization of short texts outperformed the
one performed with entire books. These findings suggest that a local
topological analysis of large documents might improve its global
characterization. Most importantly, it was verified, as a proof of principle,
that short texts can be analyzed with the methods and concepts of complex
networks. As a consequence, the techniques described here can be extended in a
straightforward fashion to analyze texts as time-varying complex networks.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2014 23:09:13 GMT"
}
] | 2015-03-04T00:00:00 | [
[
"Amancio",
"Diego R.",
""
]
] | TITLE: Probing the topological properties of complex networks modeling short
written texts
ABSTRACT: In recent years, graph theory has been widely employed to probe several
language properties. More specifically, the so-called word adjacency model has
been proven useful for tackling several practical problems, especially those
relying on textual stylistic analysis. The most common approach to treat texts
as networks has simply considered either large pieces of texts or entire books.
This approach has certainly worked well -- many informative discoveries have
been made this way -- but it raises an uncomfortable question: could there be
important topological patterns in small pieces of texts? To address this
problem, the topological properties of subtexts sampled from entire books were
probed. Statistical analyses performed on a dataset comprising 50 novels
revealed that most of the traditional topological measurements are stable for
short subtexts. When the performance of the authorship recognition task was
analyzed, it was found that a proper sampling yields a discriminability similar
to the one found with full texts. Surprisingly, the support vector machine
classification based on the characterization of short texts outperformed the
one performed with entire books. These findings suggest that a local
topological analysis of large documents might improve its global
characterization. Most importantly, it was verified, as a proof of principle,
that short texts can be analyzed with the methods and concepts of complex
networks. As a consequence, the techniques described here can be extended in a
straightforward fashion to analyze texts as time-varying complex networks.
| no_new_dataset | 0.934694 |
1501.04560 | Yanwei Fu | Yanwei Fu, Timothy M. Hospedales, Tao Xiang and Shaogang Gong | Transductive Multi-view Zero-Shot Learning | accepted by IEEE TPAMI, more info and longer report will be available
in :http://www.eecs.qmul.ac.uk/~yf300/embedding/index.html | null | 10.1109/TPAMI.2015.2408354 | null | cs.CV cs.DS cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing zero-shot learning approaches exploit transfer learning via an
intermediate-level semantic representation shared between an annotated
auxiliary dataset and a target dataset with different classes and no
annotation. A projection from a low-level feature space to the semantic
representation space is learned from the auxiliary dataset and is applied
without adaptation to the target dataset. In this paper we identify two
inherent limitations with these approaches. First, due to having disjoint and
potentially unrelated classes, the projection functions learned from the
auxiliary dataset/domain are biased when applied directly to the target
dataset/domain. We call this problem the projection domain shift problem and
propose a novel framework, transductive multi-view embedding, to solve it. The
second limitation is the prototype sparsity problem which refers to the fact
that for each target class, only a single prototype is available for zero-shot
learning given a semantic representation. To overcome this problem, a novel
heterogeneous multi-view hypergraph label propagation method is formulated for
zero-shot learning in the transductive embedding space. It effectively exploits
the complementary information offered by different semantic representations and
takes advantage of the manifold structures of multiple representation spaces in
a coherent manner. We demonstrate through extensive experiments that the
proposed approach (1) rectifies the projection shift between the auxiliary and
target domains, (2) exploits the complementarity of multiple semantic
representations, (3) significantly outperforms existing methods for both
zero-shot and N-shot recognition on three image and video benchmark datasets,
and (4) enables novel cross-view annotation tasks.
| [
{
"version": "v1",
"created": "Mon, 19 Jan 2015 17:04:11 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Mar 2015 04:43:44 GMT"
}
] | 2015-03-04T00:00:00 | [
[
"Fu",
"Yanwei",
""
],
[
"Hospedales",
"Timothy M.",
""
],
[
"Xiang",
"Tao",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: Transductive Multi-view Zero-Shot Learning
ABSTRACT: Most existing zero-shot learning approaches exploit transfer learning via an
intermediate-level semantic representation shared between an annotated
auxiliary dataset and a target dataset with different classes and no
annotation. A projection from a low-level feature space to the semantic
representation space is learned from the auxiliary dataset and is applied
without adaptation to the target dataset. In this paper we identify two
inherent limitations with these approaches. First, due to having disjoint and
potentially unrelated classes, the projection functions learned from the
auxiliary dataset/domain are biased when applied directly to the target
dataset/domain. We call this problem the projection domain shift problem and
propose a novel framework, transductive multi-view embedding, to solve it. The
second limitation is the prototype sparsity problem which refers to the fact
that for each target class, only a single prototype is available for zero-shot
learning given a semantic representation. To overcome this problem, a novel
heterogeneous multi-view hypergraph label propagation method is formulated for
zero-shot learning in the transductive embedding space. It effectively exploits
the complementary information offered by different semantic representations and
takes advantage of the manifold structures of multiple representation spaces in
a coherent manner. We demonstrate through extensive experiments that the
proposed approach (1) rectifies the projection shift between the auxiliary and
target domains, (2) exploits the complementarity of multiple semantic
representations, (3) significantly outperforms existing methods for both
zero-shot and N-shot recognition on three image and video benchmark datasets,
and (4) enables novel cross-view annotation tasks.
| no_new_dataset | 0.946843 |
1503.00787 | Davide Modolo | Davide Modolo, Alexander Vezhnevets, Vittorio Ferrari | Context Forest for efficient object detection with large mixture models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Context Forest (ConF), a technique for predicting properties of
the objects in an image based on its global appearance. Compared to standard
nearest-neighbour techniques, ConF is more accurate, fast and memory efficient.
We train ConF to predict which aspects of an object class are likely to appear
in a given image (e.g. which viewpoint). This makes it possible to speed up
multi-component object detectors by automatically selecting the most relevant
components to run on that image. This is particularly useful for detectors
trained from large datasets, which typically need many components to fully
absorb the data and reach their peak performance. ConF provides a speed-up of
2x for the DPM detector [1] and of 10x for the EE-SVM detector [2]. To show
ConF's generality, we also train it to predict at which locations objects are
likely to appear in an image. Incorporating this information in the detector
score improves mAP performance by about 2% by removing false positive
detections in unlikely locations.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 00:20:58 GMT"
}
] | 2015-03-04T00:00:00 | [
[
"Modolo",
"Davide",
""
],
[
"Vezhnevets",
"Alexander",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Context Forest for efficient object detection with large mixture models
ABSTRACT: We present Context Forest (ConF), a technique for predicting properties of
the objects in an image based on its global appearance. Compared to standard
nearest-neighbour techniques, ConF is more accurate, fast and memory efficient.
We train ConF to predict which aspects of an object class are likely to appear
in a given image (e.g. which viewpoint). This makes it possible to speed up
multi-component object detectors by automatically selecting the most relevant
components to run on that image. This is particularly useful for detectors
trained from large datasets, which typically need many components to fully
absorb the data and reach their peak performance. ConF provides a speed-up of
2x for the DPM detector [1] and of 10x for the EE-SVM detector [2]. To show
ConF's generality, we also train it to predict at which locations objects are
likely to appear in an image. Incorporating this information in the detector
score improves mAP performance by about 2% by removing false positive
detections in unlikely locations.
| no_new_dataset | 0.953837 |
1503.01070 | Atousa Torabi | Atousa Torabi, Christopher Pal, Hugo Larochelle, Aaron Courville | Using Descriptive Video Services to Create a Large Data Source for Video
Annotation Research | 7 pages | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce a dataset of video annotated with high quality
natural language phrases describing the visual content in a given segment of
time. Our dataset is based on the Descriptive Video Service (DVS) that is now
encoded on many digital media products such as DVDs. DVS is an audio narration
describing the visual elements and actions in a movie for the visually
impaired. It is temporally aligned with the movie and mixed with the original
movie soundtrack. We describe an automatic DVS segmentation and alignment
method for movies, that enables us to scale up the collection of a DVS-derived
dataset with minimal human intervention. Using this method, we have collected
the largest DVS-derived dataset for video description of which we are aware.
Our dataset currently includes over 84.6 hours of paired video/sentences from
92 DVDs and is growing.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 19:22:01 GMT"
}
] | 2015-03-04T00:00:00 | [
[
"Torabi",
"Atousa",
""
],
[
"Pal",
"Christopher",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] | TITLE: Using Descriptive Video Services to Create a Large Data Source for Video
Annotation Research
ABSTRACT: In this work, we introduce a dataset of video annotated with high quality
natural language phrases describing the visual content in a given segment of
time. Our dataset is based on the Descriptive Video Service (DVS) that is now
encoded on many digital media products such as DVDs. DVS is an audio narration
describing the visual elements and actions in a movie for the visually
impaired. It is temporally aligned with the movie and mixed with the original
movie soundtrack. We describe an automatic DVS segmentation and alignment
method for movies, that enables us to scale up the collection of a DVS-derived
dataset with minimal human intervention. Using this method, we have collected
the largest DVS-derived dataset for video description of which we are aware.
Our dataset currently includes over 84.6 hours of paired video/sentences from
92 DVDs and is growing.
| new_dataset | 0.955194 |
1405.5850 | Martin Storath | Martin Storath, Andreas Weinmann, J\"urgen Frikel, Michael Unser | Joint Image Reconstruction and Segmentation Using the Potts Model | null | null | 10.1088/0266-5611/31/2/025003 | null | math.OC math.NA physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new algorithmic approach to the non-smooth and non-convex Potts
problem (also called piecewise-constant Mumford-Shah problem) for inverse
imaging problems. We derive a suitable splitting into specific subproblems that
can all be solved efficiently. Our method does not require a priori knowledge
on the gray levels nor on the number of segments of the reconstruction.
Further, it avoids anisotropic artifacts such as geometric staircasing. We
demonstrate the suitability of our method for joint image reconstruction and
segmentation. We focus on Radon data, where we in particular consider limited
data situations. For instance, our method is able to recover all segments of
the Shepp-Logan phantom from $7$ angular views only. We illustrate the
practical applicability on a real PET dataset. As further applications, we
consider spherical Radon data as well as blurred data.
| [
{
"version": "v1",
"created": "Thu, 22 May 2014 18:34:10 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jun 2014 09:53:47 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jan 2015 14:47:14 GMT"
}
] | 2015-03-03T00:00:00 | [
[
"Storath",
"Martin",
""
],
[
"Weinmann",
"Andreas",
""
],
[
"Frikel",
"Jürgen",
""
],
[
"Unser",
"Michael",
""
]
] | TITLE: Joint Image Reconstruction and Segmentation Using the Potts Model
ABSTRACT: We propose a new algorithmic approach to the non-smooth and non-convex Potts
problem (also called piecewise-constant Mumford-Shah problem) for inverse
imaging problems. We derive a suitable splitting into specific subproblems that
can all be solved efficiently. Our method does not require a priori knowledge
on the gray levels nor on the number of segments of the reconstruction.
Further, it avoids anisotropic artifacts such as geometric staircasing. We
demonstrate the suitability of our method for joint image reconstruction and
segmentation. We focus on Radon data, where we in particular consider limited
data situations. For instance, our method is able to recover all segments of
the Shepp-Logan phantom from $7$ angular views only. We illustrate the
practical applicability on a real PET dataset. As further applications, we
consider spherical Radon data as well as blurred data.
| no_new_dataset | 0.948106 |
1412.6558 | David Sussillo | David Sussillo, L.F. Abbott | Random Walk Initialization for Training Very Deep Feedforward Networks | 10 pages, 4 figures | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training very deep networks is an important open problem in machine learning.
One of many difficulties is that the norm of the back-propagated error gradient
can grow or decay exponentially. Here we show that training very deep
feed-forward networks (FFNs) is not as difficult as previously thought. Unlike
when back-propagation is applied to a recurrent network, application to an FFN
amounts to multiplying the error gradient by a different random matrix at each
layer. We show that the successive application of correctly scaled random
matrices to an initial vector results in a random walk of the log of the norm
of the resulting vectors, and we compute the scaling that makes this walk
unbiased. The variance of the random walk grows only linearly with network
depth and is inversely proportional to the size of each layer. Practically,
this implies a gradient whose log-norm scales with the square root of the
network depth and shows that the vanishing gradient problem can be mitigated by
increasing the width of the layers. Mathematical analyses and experimental
results using stochastic gradient descent to optimize tasks related to the
MNIST and TIMIT datasets are provided to support these claims. Equations for
the optimal matrix scaling are provided for the linear and ReLU cases.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 23:24:53 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jan 2015 21:28:29 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Feb 2015 22:28:32 GMT"
}
] | 2015-03-03T00:00:00 | [
[
"Sussillo",
"David",
""
],
[
"Abbott",
"L. F.",
""
]
] | TITLE: Random Walk Initialization for Training Very Deep Feedforward Networks
ABSTRACT: Training very deep networks is an important open problem in machine learning.
One of many difficulties is that the norm of the back-propagated error gradient
can grow or decay exponentially. Here we show that training very deep
feed-forward networks (FFNs) is not as difficult as previously thought. Unlike
when back-propagation is applied to a recurrent network, application to an FFN
amounts to multiplying the error gradient by a different random matrix at each
layer. We show that the successive application of correctly scaled random
matrices to an initial vector results in a random walk of the log of the norm
of the resulting vectors, and we compute the scaling that makes this walk
unbiased. The variance of the random walk grows only linearly with network
depth and is inversely proportional to the size of each layer. Practically,
this implies a gradient whose log-norm scales with the square root of the
network depth and shows that the vanishing gradient problem can be mitigated by
increasing the width of the layers. Mathematical analyses and experimental
results using stochastic gradient descent to optimize tasks related to the
MNIST and TIMIT datasets are provided to support these claims. Equations for
the optimal matrix scaling are provided for the linear and ReLU cases.
| no_new_dataset | 0.948775 |
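The record above describes the log of a vector's norm performing a random walk as scaled random matrices are applied layer after layer. Below is a small numpy sketch of that phenomenon for illustration only: it is not the paper's code, and the gains used (g = 1 for the linear case, g = sqrt(2) as the usual ReLU compensation) are conventional choices assumed here rather than the paper's derived optimal scalings.

```python
import numpy as np

def log_norm_walk(depth=100, width=512, g=1.0, nonlinearity=None, seed=0):
    """Track log ||h_d|| across layers for h_{d+1} = phi(W_d h_d),
    with W_d entries drawn i.i.d. from N(0, g^2 / width)."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(width)
    logs = [np.log(np.linalg.norm(h))]
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * (g / np.sqrt(width))
        h = W @ h
        if nonlinearity is not None:
            h = nonlinearity(h)
        logs.append(np.log(np.linalg.norm(h)))
    return np.array(logs)

# Linear case: g = 1 keeps the walk close to zero drift.
linear = log_norm_walk(g=1.0)
# ReLU case: half the units are zeroed, so g = sqrt(2) compensates.
relu = log_norm_walk(g=np.sqrt(2.0), nonlinearity=lambda x: np.maximum(x, 0.0))
print("linear drift per layer:", (linear[-1] - linear[0]) / (len(linear) - 1))
print("relu drift per layer:  ", (relu[-1] - relu[0]) / (len(relu) - 1))
```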
1503.00064 | Sanja Fidler | Dahua Lin, Chen Kong, Sanja Fidler, Raquel Urtasun | Generating Multi-Sentence Lingual Descriptions of Indoor Scenes | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel framework for generating lingual descriptions of
indoor scenes. Whereas substantial efforts have been made to tackle this
problem, previous approaches focus primarily on generating a single sentence
for each image, which is not sufficient for describing complex scenes. We
attempt to go beyond this, by generating coherent descriptions with multiple
sentences. Our approach is distinguished from conventional ones in several
aspects: (1) a 3D visual parsing system that jointly infers objects,
attributes, and relations; (2) a generative grammar learned automatically from
training text; and (3) a text generation algorithm that takes into account the
coherence among sentences. Experiments on the augmented NYU-v2 dataset show
that our framework can generate natural descriptions with substantially higher
ROUGE scores compared to those produced by the baseline.
| [
{
"version": "v1",
"created": "Sat, 28 Feb 2015 04:26:21 GMT"
}
] | 2015-03-03T00:00:00 | [
[
"Lin",
"Dahua",
""
],
[
"Kong",
"Chen",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Urtasun",
"Raquel",
""
]
] | TITLE: Generating Multi-Sentence Lingual Descriptions of Indoor Scenes
ABSTRACT: This paper proposes a novel framework for generating lingual descriptions of
indoor scenes. Whereas substantial efforts have been made to tackle this
problem, previous approaches focus primarily on generating a single sentence
for each image, which is not sufficient for describing complex scenes. We
attempt to go beyond this, by generating coherent descriptions with multiple
sentences. Our approach is distinguished from conventional ones in several
aspects: (1) a 3D visual parsing system that jointly infers objects,
attributes, and relations; (2) a generative grammar learned automatically from
training text; and (3) a text generation algorithm that takes into account the
coherence among sentences. Experiments on the augmented NYU-v2 dataset show
that our framework can generate natural descriptions with substantially higher
ROUGE scores compared to those produced by the baseline.
| no_new_dataset | 0.951188 |
1503.00591 | Xu Zhang | Xu Zhang, Felix Xinnan Yu, Shih-Fu Chang, Shengjin Wang | Deep Transfer Network: Unsupervised Domain Adaptation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain adaptation aims at training a classifier in one dataset and applying
it to a related but not identical dataset. One successfully used framework of
domain adaptation is to learn a transformation to match both the distribution
of the features (marginal distribution), and the distribution of the labels
given features (conditional distribution). In this paper, we propose a new
domain adaptation framework named Deep Transfer Network (DTN), where the highly
flexible deep neural networks are used to implement such a distribution
matching process.
This is achieved by two types of layers in DTN: the shared feature extraction
layers which learn a shared feature subspace in which the marginal
distributions of the source and the target samples are drawn close, and the
discrimination layers which match conditional distributions by classifier
transduction. We also show that DTN has a computation complexity linear to the
number of training samples, making it suitable to large-scale problems. By
combining the best paradigms in both worlds (deep neural networks in
recognition, and matching marginal and conditional distributions in domain
adaptation), we demonstrate by extensive experiments that DTN improves
significantly over former methods in both execution time and classification
accuracy.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2015 16:17:06 GMT"
}
] | 2015-03-03T00:00:00 | [
[
"Zhang",
"Xu",
""
],
[
"Yu",
"Felix Xinnan",
""
],
[
"Chang",
"Shih-Fu",
""
],
[
"Wang",
"Shengjin",
""
]
] | TITLE: Deep Transfer Network: Unsupervised Domain Adaptation
ABSTRACT: Domain adaptation aims at training a classifier in one dataset and applying
it to a related but not identical dataset. One successfully used framework of
domain adaptation is to learn a transformation to match both the distribution
of the features (marginal distribution), and the distribution of the labels
given features (conditional distribution). In this paper, we propose a new
domain adaptation framework named Deep Transfer Network (DTN), where the highly
flexible deep neural networks are used to implement such a distribution
matching process.
This is achieved by two types of layers in DTN: the shared feature extraction
layers which learn a shared feature subspace in which the marginal
distributions of the source and the target samples are drawn close, and the
discrimination layers which match conditional distributions by classifier
transduction. We also show that DTN has a computation complexity linear to the
number of training samples, making it suitable to large-scale problems. By
combining the best paradigms in both worlds (deep neural networks in
recognition, and matching marginal and conditional distributions in domain
adaptation), we demonstrate by extensive experiments that DTN improves
significantly over former methods in both execution time and classification
accuracy.
| no_new_dataset | 0.950457 |
1503.00687 | Miguel \'A. Carreira-Perpi\~n\'an | Miguel \'A. Carreira-Perpi\~n\'an | A review of mean-shift algorithms for clustering | 28 pages, 9 figures. Invited book chapter to appear in the CRC
Handbook of Cluster Analysis (eds. Roberto Rocci, Fionn Murtagh, Marina Meila
and Christian Hennig) | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A natural way to characterize the cluster structure of a dataset is by
finding regions containing a high density of data. This can be done in a
nonparametric way with a kernel density estimate, whose modes and hence
clusters can be found using mean-shift algorithms. We describe the theory and
practice behind clustering based on kernel density estimates and mean-shift
algorithms. We discuss the blurring and non-blurring versions of mean-shift;
theoretical results about mean-shift algorithms and Gaussian mixtures;
relations with scale-space theory, spectral clustering and other algorithms;
extensions to tracking, to manifold and graph data, and to manifold denoising;
K-modes and Laplacian K-modes algorithms; acceleration strategies for large
datasets; and applications to image segmentation, manifold denoising and
multivalued regression.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2015 20:09:14 GMT"
}
] | 2015-03-03T00:00:00 | [
[
"Carreira-Perpiñán",
"Miguel Á.",
""
]
] | TITLE: A review of mean-shift algorithms for clustering
ABSTRACT: A natural way to characterize the cluster structure of a dataset is by
finding regions containing a high density of data. This can be done in a
nonparametric way with a kernel density estimate, whose modes and hence
clusters can be found using mean-shift algorithms. We describe the theory and
practice behind clustering based on kernel density estimates and mean-shift
algorithms. We discuss the blurring and non-blurring versions of mean-shift;
theoretical results about mean-shift algorithms and Gaussian mixtures;
relations with scale-space theory, spectral clustering and other algorithms;
extensions to tracking, to manifold and graph data, and to manifold denoising;
K-modes and Laplacian K-modes algorithms; acceleration strategies for large
datasets; and applications to image segmentation, manifold denoising and
multivalued regression.
| no_new_dataset | 0.946745 |
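As a concrete companion to the mean-shift review summarized above, here is a minimal numpy sketch of non-blurring Gaussian mean-shift, where each query point climbs to a mode of the kernel density estimate of a fixed dataset. The bandwidth, the tolerance, the mode-merging radius of half a bandwidth, and the two-blob toy data are illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, n_iter=100, tol=1e-5):
    """Non-blurring Gaussian mean-shift: move copies of the points toward the
    kernel-weighted mean of the fixed dataset X until they stop moving."""
    modes = X.copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # (N, N) squared distances
        w = np.exp(-0.5 * d2 / bandwidth**2)                      # Gaussian kernel weights
        new = (w @ X) / w.sum(axis=1, keepdims=True)              # mean-shift update
        shift = np.max(np.linalg.norm(new - modes, axis=1))
        modes = new
        if shift < tol:
            break
    # Group points whose modes coincide (up to half a bandwidth) into clusters.
    centers, labels = [], np.full(len(X), -1, dtype=int)
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < 0.5 * bandwidth:
                labels[i] = k
                break
        if labels[i] == -1:
            centers.append(m)
            labels[i] = len(centers) - 1
    return np.array(centers), labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers, labels = mean_shift(X, bandwidth=0.8)
print(len(centers), "modes found")   # expected: 2
```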
1409.7495 | Yaroslav Ganin | Yaroslav Ganin, Victor Lempitsky | Unsupervised Domain Adaptation by Backpropagation | null | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Top-performing deep architectures are trained on massive amounts of labeled
data. In the absence of labeled data for a certain task, domain adaptation
often provides an attractive option given that labeled data of similar nature
but from a different domain (e.g. synthetic images) are available. Here, we
propose a new approach to domain adaptation in deep architectures that can be
trained on a large amount of labeled data from the source domain and a large amount
of unlabeled data from the target domain (no labeled target-domain data is
necessary).
As the training progresses, the approach promotes the emergence of "deep"
features that are (i) discriminative for the main learning task on the source
domain and (ii) invariant with respect to the shift between the domains. We
show that this adaptation behaviour can be achieved in almost any feed-forward
model by augmenting it with a few standard layers and a simple new gradient
reversal layer. The resulting augmented architecture can be trained using
standard backpropagation.
Overall, the approach can be implemented with little effort using any of the
deep-learning packages. The method performs very well in a series of image
classification experiments, achieving adaptation effect in the presence of big
domain shifts and outperforming previous state-of-the-art on Office datasets.
| [
{
"version": "v1",
"created": "Fri, 26 Sep 2014 08:22:21 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Feb 2015 14:54:37 GMT"
}
] | 2015-03-02T00:00:00 | [
[
"Ganin",
"Yaroslav",
""
],
[
"Lempitsky",
"Victor",
""
]
] | TITLE: Unsupervised Domain Adaptation by Backpropagation
ABSTRACT: Top-performing deep architectures are trained on massive amounts of labeled
data. In the absence of labeled data for a certain task, domain adaptation
often provides an attractive option given that labeled data of similar nature
but from a different domain (e.g. synthetic images) are available. Here, we
propose a new approach to domain adaptation in deep architectures that can be
trained on a large amount of labeled data from the source domain and a large amount
of unlabeled data from the target domain (no labeled target-domain data is
necessary).
As the training progresses, the approach promotes the emergence of "deep"
features that are (i) discriminative for the main learning task on the source
domain and (ii) invariant with respect to the shift between the domains. We
show that this adaptation behaviour can be achieved in almost any feed-forward
model by augmenting it with a few standard layers and a simple new gradient
reversal layer. The resulting augmented architecture can be trained using
standard backpropagation.
Overall, the approach can be implemented with little effort using any of the
deep-learning packages. The method performs very well in a series of image
classification experiments, achieving adaptation effect in the presence of big
domain shifts and outperforming previous state-of-the-art on Office datasets.
| no_new_dataset | 0.947817 |
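The gradient reversal layer described in the abstract above is simple enough to sketch directly: the forward pass is the identity and the backward pass multiplies the incoming gradient by a negative factor, so that ordinary SGD pushes the shared features toward fooling the domain classifier. The toy class below is a framework-free illustration with hand-written forward/backward methods and an arbitrary lambda; in practice this would be a custom autograd operation inside a deep-learning library.

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; flips and scales gradients on the backward pass."""
    def __init__(self, lambd=1.0):
        self.lambd = lambd

    def forward(self, x):
        return x                              # features pass through unchanged

    def backward(self, grad_output):
        return -self.lambd * grad_output      # reversed (and scaled) gradient

grl = GradientReversal(lambd=0.3)
features = np.array([1.0, -2.0, 0.5])
upstream = np.array([0.1, 0.2, -0.4])         # d(domain loss) / d(features)
print(grl.forward(features))                  # [ 1.  -2.   0.5]
print(grl.backward(upstream))                 # [-0.03 -0.06  0.12]
```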
1502.07802 | Zongyuan Ge | ZongYuan Ge, Chris McCool, Conrad Sanderson, Peter Corke | Modelling Local Deep Convolutional Neural Network Features to Improve
Fine-Grained Image Classification | 5 pages, three figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a local modelling approach using deep convolutional neural
networks (CNNs) for fine-grained image classification. Recently, deep CNNs
trained from large datasets have considerably improved the performance of
object recognition. However, to date there has been limited work using these
deep CNNs as local feature extractors. This partly stems from CNNs having
internal representations which are high dimensional, thereby making such
representations difficult to model using stochastic models. To overcome this
issue, we propose to reduce the dimensionality of one of the internal fully
connected layers, in conjunction with layer-restricted retraining to avoid
retraining the entire network. The distribution of low-dimensional features
obtained from the modified layer is then modelled using a Gaussian mixture
model. Comparative experiments show that considerable performance improvements
can be achieved on the challenging Fish and UEC FOOD-100 datasets.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 02:04:57 GMT"
}
] | 2015-03-02T00:00:00 | [
[
"Ge",
"ZongYuan",
""
],
[
"McCool",
"Chris",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Corke",
"Peter",
""
]
] | TITLE: Modelling Local Deep Convolutional Neural Network Features to Improve
Fine-Grained Image Classification
ABSTRACT: We propose a local modelling approach using deep convolutional neural
networks (CNNs) for fine-grained image classification. Recently, deep CNNs
trained from large datasets have considerably improved the performance of
object recognition. However, to date there has been limited work using these
deep CNNs as local feature extractors. This partly stems from CNNs having
internal representations which are high dimensional, thereby making such
representations difficult to model using stochastic models. To overcome this
issue, we propose to reduce the dimensionality of one of the internal fully
connected layers, in conjunction with layer-restricted retraining to avoid
retraining the entire network. The distribution of low-dimensional features
obtained from the modified layer is then modelled using a Gaussian mixture
model. Comparative experiments show that considerable performance improvements
can be achieved on the challenging Fish and UEC FOOD-100 datasets.
| no_new_dataset | 0.949059 |
1502.08039 | Jihun Hamm | Jihun Hamm, Mikhail Belkin | Probabilistic Zero-shot Classification with Semantic Rankings | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a non-metric ranking-based representation of
semantic similarity that allows natural aggregation of semantic information
from multiple heterogeneous sources. We apply the ranking-based representation
to zero-shot learning problems, and present deterministic and probabilistic
zero-shot classifiers which can be built from pre-trained classifiers without
retraining. We demonstrate their the advantages on two large real-world image
datasets. In particular, we show that aggregating different sources of semantic
information, including crowd-sourcing, leads to more accurate classification.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 20:00:53 GMT"
}
] | 2015-03-02T00:00:00 | [
[
"Hamm",
"Jihun",
""
],
[
"Belkin",
"Mikhail",
""
]
] | TITLE: Probabilistic Zero-shot Classification with Semantic Rankings
ABSTRACT: In this paper we propose a non-metric ranking-based representation of
semantic similarity that allows natural aggregation of semantic information
from multiple heterogeneous sources. We apply the ranking-based representation
to zero-shot learning problems, and present deterministic and probabilistic
zero-shot classifiers which can be built from pre-trained classifiers without
retraining. We demonstrate their advantages on two large real-world image
datasets. In particular, we show that aggregating different sources of semantic
information, including crowd-sourcing, leads to more accurate classification.
| no_new_dataset | 0.948917 |
1502.08046 | Piotr Plonski | Piotr P{\l}o\'nski, Dorota Stefan, Robert Sulej, Krzysztof Zaremba | Image Segmentation in Liquid Argon Time Projection Chamber Detector | 10 pages, 4 figures, 2 tables | null | null | null | cs.CV hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Liquid Argon Time Projection Chamber (LAr-TPC) detectors provide
excellent imaging and particle identification ability for studying neutrinos.
Efficient and automatic reconstruction procedures are required to exploit the
potential of this imaging technology. Herein, a novel method for segmentation
of images from LAr-TPC detectors is presented. The proposed approach computes a
feature descriptor for each pixel in the image, which characterizes amplitude
distribution in pixel and its neighbourhood. The supervised classifier is
employed to distinguish between pixels representing particle's track and noise.
The classifier is trained and evaluated on the hand-labeled dataset. The
proposed approach can be a preprocessing step for reconstructing algorithms
working directly on detector images.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 20:32:35 GMT"
}
] | 2015-03-02T00:00:00 | [
[
"Płoński",
"Piotr",
""
],
[
"Stefan",
"Dorota",
""
],
[
"Sulej",
"Robert",
""
],
[
"Zaremba",
"Krzysztof",
""
]
] | TITLE: Image Segmentation in Liquid Argon Time Projection Chamber Detector
ABSTRACT: The Liquid Argon Time Projection Chamber (LAr-TPC) detectors provide
excellent imaging and particle identification ability for studying neutrinos.
Efficient and automatic reconstruction procedures are required to exploit the
potential of this imaging technology. Herein, a novel method for segmentation
of images from LAr-TPC detectors is presented. The proposed approach computes a
feature descriptor for each pixel in the image, which characterizes amplitude
distribution in pixel and its neighbourhood. The supervised classifier is
employed to distinguish between pixels representing particle's track and noise.
The classifier is trained and evaluated on the hand-labeled dataset. The
proposed approach can be a preprocessing step for reconstructing algorithms
working directly on detector images.
| no_new_dataset | 0.956634 |
1502.06682 | Chih-Ya Shen | Chih-Ya Shen, De-Nian Yang, Wang-Chien Lee, Ming-Syan Chen | Maximizing Friend-Making Likelihood for Social Activity Organization | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The social presence theory in social psychology suggests that
computer-mediated online interactions are inferior to face-to-face, in-person
interactions. In this paper, we consider the scenarios of organizing in person
friend-making social activities via online social networks (OSNs) and formulate
a new research problem, namely, Hop-bounded Maximum Group Friending (HMGF), by
modeling both existing friendships and the likelihood of new friend making. To
find a set of attendees for socialization activities, HMGF is unique and
challenging due to the interplay of the group size, the constraint on existing
friendships and the objective function on the likelihood of friend making. We
prove that HMGF is NP-Hard, and no approximation algorithm exists unless P =
NP. We then propose an error-bounded approximation algorithm to efficiently
obtain solutions that are very close to the optimal ones. We conduct a user
study to validate our problem formulation and perform extensive experiments
on real datasets to demonstrate the efficiency and effectiveness of our
proposed algorithm.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 03:16:33 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Feb 2015 15:31:34 GMT"
}
] | 2015-02-27T00:00:00 | [
[
"Shen",
"Chih-Ya",
""
],
[
"Yang",
"De-Nian",
""
],
[
"Lee",
"Wang-Chien",
""
],
[
"Chen",
"Ming-Syan",
""
]
] | TITLE: Maximizing Friend-Making Likelihood for Social Activity Organization
ABSTRACT: The social presence theory in social psychology suggests that
computer-mediated online interactions are inferior to face-to-face, in-person
interactions. In this paper, we consider the scenarios of organizing in person
friend-making social activities via online social networks (OSNs) and formulate
a new research problem, namely, Hop-bounded Maximum Group Friending (HMGF), by
modeling both existing friendships and the likelihood of new friend making. To
find a set of attendees for socialization activities, HMGF is unique and
challenging due to the interplay of the group size, the constraint on existing
friendships and the objective function on the likelihood of friend making. We
prove that HMGF is NP-Hard, and no approximation algorithm exists unless P =
NP. We then propose an error-bounded approximation algorithm to efficiently
obtain the solutions very close to the optimal solutions. We conduct a user
study to validate our problem formulation and per- form extensive experiments
on real datasets to demonstrate the efficiency and effectiveness of our
proposed algorithm.
| no_new_dataset | 0.946151 |
1502.07504 | Attia Nehar | Attia Nehar and Djelloul Ziadi and Hadda Cherroun | Rational Kernels for Arabic Stemming and Text Classification | 12 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problems of Arabic Text Classification and
stemming using Transducers and Rational Kernels. We introduce a new stemming
technique based on the use of Arabic patterns (Pattern Based Stemmer). Patterns
are modelled using transducers and stemming is done without depending on any
dictionary. Using transducers for stemming, documents are transformed into
finite state transducers. This document representation allows us to use and
explore rational kernels as a framework for Arabic Text Classification.
Stemming experiments are conducted on three word collections and classification
experiments are done on the Saudi Press Agency dataset. Results show that our
approach, when compared with other approaches, is promising, especially in terms
of Accuracy, Recall and F1.
| [
{
"version": "v1",
"created": "Thu, 26 Feb 2015 11:09:59 GMT"
}
] | 2015-02-27T00:00:00 | [
[
"Nehar",
"Attia",
""
],
[
"Ziadi",
"Djelloul",
""
],
[
"Cherroun",
"Hadda",
""
]
] | TITLE: Rational Kernels for Arabic Stemming and Text Classification
ABSTRACT: In this paper, we address the problems of Arabic Text Classification and
stemming using Transducers and Rational Kernels. We introduce a new stemming
technique based on the use of Arabic patterns (Pattern Based Stemmer). Patterns
are modelled using transducers and stemming is done without depending on any
dictionary. Using transducers for stemming, documents are transformed into
finite state transducers. This document representation allows us to use and
explore rational kernels as a framework for Arabic Text Classification.
Stemming experiments are conducted on three word collections and classification
experiments are done on the Saudi Press Agency dataset. Results show that our
approach, when compared with other approaches, is promising, especially in terms
of Accuracy, Recall and F1.
| no_new_dataset | 0.953275 |
1502.05224 | Yun Gu | Yun Gu, Haoyang Xue, Jie Yang | Cross-Modality Hashing with Partial Correspondence | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning a hashing function for cross-media search is very desirable due to
its low storage cost and fast query speed. However, the data crawled from
the Internet cannot always guarantee good correspondence among different
modalities, which affects the learning of the hashing function. In this paper,
we focus on cross-modal hashing with partially corresponded data. The data
without full correspondence are put to use to enhance the hashing performance.
The experiments on the Wiki and NUS-WIDE datasets demonstrate that the proposed
method outperforms some state-of-the-art hashing approaches with less
correspondence information.
| [
{
"version": "v1",
"created": "Wed, 18 Feb 2015 13:41:23 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Feb 2015 12:13:47 GMT"
}
] | 2015-02-26T00:00:00 | [
[
"Gu",
"Yun",
""
],
[
"Xue",
"Haoyang",
""
],
[
"Yang",
"Jie",
""
]
] | TITLE: Cross-Modality Hashing with Partial Correspondence
ABSTRACT: Learning a hashing function for cross-media search is very desirable due to
its low storage cost and fast query speed. However, the data crawled from
the Internet cannot always guarantee good correspondence among different
modalities, which affects the learning of the hashing function. In this paper,
we focus on cross-modal hashing with partially corresponded data. The data
without full correspondence are put to use to enhance the hashing performance.
The experiments on the Wiki and NUS-WIDE datasets demonstrate that the proposed
method outperforms some state-of-the-art hashing approaches with less
correspondence information.
| no_new_dataset | 0.952353 |
1402.3163 | Xiaohao Yang | Xiaohao Yang and Pavol Juhas and Christopher L. Farrow and Simon J. L.
Billinge | xPDFsuite: an end-to-end software solution for high throughput pair
distribution function transformation, visualization and analysis | 3 pages, 2 figures | null | null | null | cond-mat.mtrl-sci cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The xPDFsuite software program is described. It is for processing and
analyzing atomic pair distribution functions (PDF) from X-ray powder
diffraction data. It provides a convenient GUI for SrXplanr and PDFgetX3,
allowing the users to easily obtain 1D diffraction pattern from raw 2D
diffraction images and then transform them to PDFs. It also bundles PDFgui
which allows the users to create structure models and fit to the experiment
data. It is especially useful for working with large numbers of datasets, such as
those from high-throughput measurements. Some of the key features are: real time PDF
transformation and plotting; 2D waterfall, false color heatmap, and 3D contour
plotting for multiple datasets; static and dynamic mask editing; geometric
calibration of powder diffraction image; configurations and project saving and
loading; Pearson correlation analysis on selected datasets; written in Python
and supports multiple platforms.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2014 14:55:14 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Feb 2014 04:01:14 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Feb 2015 21:13:21 GMT"
}
] | 2015-02-25T00:00:00 | [
[
"Yang",
"Xiaohao",
""
],
[
"Juhas",
"Pavol",
""
],
[
"Farrow",
"Christopher L.",
""
],
[
"Billinge",
"Simon J. L.",
""
]
] | TITLE: xPDFsuite: an end-to-end software solution for high throughput pair
distribution function transformation, visualization and analysis
ABSTRACT: The xPDFsuite software program is described. It is for processing and
analyzing atomic pair distribution functions (PDF) from X-ray powder
diffraction data. It provides a convenient GUI for SrXplanr and PDFgetX3,
allowing the users to easily obtain 1D diffraction pattern from raw 2D
diffraction images and then transform them to PDFs. It also bundles PDFgui
which allows the users to create structure models and fit to the experiment
data. It is especially useful for working with large numbers of datasets, such as
those from high-throughput measurements. Some of the key features are: real time PDF
transformation and plotting; 2D waterfall, false color heatmap, and 3D contour
plotting for multiple datasets; static and dynamic mask editing; geometric
calibration of powder diffraction image; configurations and project saving and
loading; Pearson correlation analysis on selected datasets; written in Python
and supports multiple platforms.
| no_new_dataset | 0.947866 |
1502.06657 | Sahin Geyik | Sahin Cem Geyik, Abhishek Saxena, Ali Dasdan | Multi-Touch Attribution Based Budget Allocation in Online Advertising | This paper has been published in ADKDD 2014, August 24, New York
City, New York, U.S.A | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Budget allocation in online advertising deals with distributing the campaign
(insertion order) level budgets to different sub-campaigns which employ
different targeting criteria and may perform differently in terms of
return-on-investment (ROI). In this paper, we present the efforts at Turn on
how to best allocate campaign budget so that the advertiser or campaign-level
ROI is maximized. To do this, it is crucial to be able to correctly determine
the performance of sub-campaigns. This determination is highly related to the
action-attribution problem, i.e., determining the set of ads, and hence the
sub-campaigns that provided them to a user, to which an action should be
attributed. For this purpose, we employ both last-touch (last ad gets all
credit) and multi-touch (many ads share the credit) attribution methodologies.
We present the algorithms deployed at Turn for the attribution problem, as well
as their parallel implementation on the large advertiser performance datasets.
We conclude the paper with our empirical comparison of last-touch and
multi-touch attribution-based budget allocation in a real online advertising
setting.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 00:09:05 GMT"
}
] | 2015-02-25T00:00:00 | [
[
"Geyik",
"Sahin Cem",
""
],
[
"Saxena",
"Abhishek",
""
],
[
"Dasdan",
"Ali",
""
]
] | TITLE: Multi-Touch Attribution Based Budget Allocation in Online Advertising
ABSTRACT: Budget allocation in online advertising deals with distributing the campaign
(insertion order) level budgets to different sub-campaigns which employ
different targeting criteria and may perform differently in terms of
return-on-investment (ROI). In this paper, we present the efforts at Turn on
how to best allocate campaign budget so that the advertiser or campaign-level
ROI is maximized. To do this, it is crucial to be able to correctly determine
the performance of sub-campaigns. This determination is highly related to the
action-attribution problem, i.e., determining the set of ads, and hence the
sub-campaigns that provided them to a user, to which an action should be
attributed. For this purpose, we employ both last-touch (last ad gets all
credit) and multi-touch (many ads share the credit) attribution methodologies.
We present the algorithms deployed at Turn for the attribution problem, as well
as their parallel implementation on the large advertiser performance datasets.
We conclude the paper with our empirical comparison of last-touch and
multi-touch attribution-based budget allocation in a real online advertising
setting.
| no_new_dataset | 0.945851 |
1502.06671 | Pinghui Wang Dr. | Pinghui Wang and John C.S. Lui and Don Towsley | Minfer: Inferring Motif Statistics From Sampled Edges | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing motif (i.e., locally connected subgraph patterns) statistics
is important for understanding complex networks such as online social networks
and communication networks. Previous work made the strong assumption that the
graph topology of interest is known, and that the dataset either fits into main
memory or stored on disks such that it is not expensive to obtain all neighbors
of any given node. In practice, researchers have to deal with the situation
where the graph topology is unknown, either because the graph is dynamic, or
because it is expensive to collect and store all topological and meta
information on disk. Hence, what is available to researchers is only a snapshot
of the graph generated by sampling edges from the graph at random, which we
called a "RESampled graph". Clearly, a RESampled graph's motif statistics may
be quite different from the underlying original graph. To solve this challenge,
we propose a framework and implement a system called Minfer, which can take the
given RESampled graph and accurately infer the underlying graph's motif
statistics. We also use Fisher information to bound the error of our estimates.
Experiments using large-scale datasets show our method to be accurate.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 01:43:59 GMT"
}
] | 2015-02-25T00:00:00 | [
[
"Wang",
"Pinghui",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Towsley",
"Don",
""
]
] | TITLE: Minfer: Inferring Motif Statistics From Sampled Edges
ABSTRACT: Characterizing motif (i.e., locally connected subgraph patterns) statistics
is important for understanding complex networks such as online social networks
and communication networks. Previous work made the strong assumption that the
graph topology of interest is known, and that the dataset either fits into main
memory or stored on disks such that it is not expensive to obtain all neighbors
of any given node. In practice, researchers have to deal with the situation
where the graph topology is unknown, either because the graph is dynamic, or
because it is expensive to collect and store all topological and meta
information on disk. Hence, what is available to researchers is only a snapshot
of the graph generated by sampling edges from the graph at random, which we
called a "RESampled graph". Clearly, a RESampled graph's motif statistics may
be quite different from the underlying original graph. To solve this challenge,
we propose a framework and implement a system called Minfer, which can take the
given RESampled graph and accurately infer the underlying graph's motif
statistics. We also use Fisher information to bound the error of our estimates.
Experiments using large-scale datasets show our method to be accurate.
| no_new_dataset | 0.945298 |
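The abstract above is about estimating motif statistics when only a random sample of edges is available. The sketch below is not the Minfer framework itself; it only illustrates, under the assumption of independent edge sampling with a known probability p, the basic correction for the simplest motif: a triangle survives with probability p**3, so dividing the observed count by p**3 gives an unbiased estimate of the original count. The toy random graph and the sampling rate are made up for the example.

```python
import numpy as np
from itertools import combinations

def count_triangles(edges):
    """Brute-force triangle count for a small undirected edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def estimate_triangles(sampled_edges, p):
    """Horvitz-Thompson style inversion: each triangle is observed w.p. p**3."""
    return count_triangles(sampled_edges) / p**3

rng = np.random.default_rng(1)
n, q, p = 60, 0.15, 0.5
full_edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < q]
sample = [e for e in full_edges if rng.random() < p]   # the "RESampled" graph
print("true triangle count:     ", count_triangles(full_edges))
print("estimate from the sample:", estimate_triangles(sample, p))
```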
1502.06703 | Smitha M.L. | B.H. Shekar, Smitha M.L., P. Shivakumara | Discrete Wavelet Transform and Gradient Difference based approach for
text localization in videos | Fifth International Conference on Signals and Image Processing, IEEE,
DOI 10.1109/ICSIP.2014.50, pp. 280-284, held at BNMIT, Bangalore in January
2014 | null | 10.1109/ICSIP.2014.50 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The text detection and localization is important for video analysis and
understanding. The scene text in video contains semantic information and thus
can contribute significantly to video retrieval and understanding. However,
most of the approaches detect scene text in still images or a single video frame.
Videos differ from images in temporal redundancy. This paper proposes a novel
hybrid method to robustly localize the texts in natural scene images and videos
based on fusion of discrete wavelet transform and gradient difference. A set of
rules and geometric properties have been devised to localize the actual text
regions. Then, morphological operation is performed to generate the text
regions and finally the connected component analysis is employed to localize
the text in a video frame. The experimental results obtained on publicly
available standard ICDAR 2003 and Hua datasets illustrate that the proposed
method can accurately detect and localize texts of various sizes, fonts and
colors. Experimentation on a large collection of video databases reveals the
suitability of the proposed method for video data.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 07:46:34 GMT"
}
] | 2015-02-25T00:00:00 | [
[
"Shekar",
"B. H.",
""
],
[
"L.",
"Smitha M.",
""
],
[
"Shivakumara",
"P.",
""
]
] | TITLE: Discrete Wavelet Transform and Gradient Difference based approach for
text localization in videos
ABSTRACT: The text detection and localization is important for video analysis and
understanding. The scene text in video contains semantic information and thus
can contribute significantly to video retrieval and understanding. However,
most of the approaches detect scene text in still images or a single video frame.
Videos differ from images in temporal redundancy. This paper proposes a novel
hybrid method to robustly localize the texts in natural scene images and videos
based on fusion of discrete wavelet transform and gradient difference. A set of
rules and geometric properties have been devised to localize the actual text
regions. Then, morphological operation is performed to generate the text
regions and finally the connected component analysis is employed to localize
the text in a video frame. The experimental results obtained on publicly
available standard ICDAR 2003 and Hua datasets illustrate that the proposed
method can accurately detect and localize texts of various sizes, fonts and
colors. Experimentation on a large collection of video databases reveals the
suitability of the proposed method for video data.
| no_new_dataset | 0.953319 |
1502.06757 | Lse Lse | Mart\'in Dias (INRIA Lille - Nord Europe), Alberto Bacchelli, Georgios
Gousios, Damien Cassou (INRIA Lille - Nord Europe), St\'ephane Ducasse (INRIA
Lille - Nord Europe) | Untangling Fine-Grained Code Changes | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After working for some time, developers commit their code changes to a
version control system. When doing so, they often bundle unrelated changes
(e.g., bug fix and refactoring) in a single commit, thus creating a so-called
tangled commit. Sharing tangled commits is problematic because it makes review,
reversion, and integration of these commits harder and historical analyses of
the project less reliable. Researchers have worked at untangling existing
commits, i.e., finding which part of a commit relates to which task. In this
paper, we contribute to this line of work in two ways: (1) A publicly available
dataset of untangled code changes, created with the help of two developers who
accurately split their code changes into self contained tasks over a period of
four months; (2) a novel approach, EpiceaUntangler, to help developers share
untangled commits (aka. atomic commits) by using fine-grained code change
information. EpiceaUntangler is based and tested on the publicly available
dataset, and further evaluated by deploying it to 7 developers, who used it for
2 weeks. We recorded a median success rate of 91% and an average of 75% in
automatically creating clusters of untangled fine-grained code changes.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 10:50:13 GMT"
}
] | 2015-02-25T00:00:00 | [
[
"Dias",
"Martín",
"",
"INRIA Lille - Nord Europe"
],
[
"Bacchelli",
"Alberto",
"",
"INRIA Lille - Nord Europe"
],
[
"Gousios",
"Georgios",
"",
"INRIA Lille - Nord Europe"
],
[
"Cassou",
"Damien",
"",
"INRIA Lille - Nord Europe"
],
[
"Ducasse",
"Stéphane",
"",
"INRIA\n Lille - Nord Europe"
]
] | TITLE: Untangling Fine-Grained Code Changes
ABSTRACT: After working for some time, developers commit their code changes to a
version control system. When doing so, they often bundle unrelated changes
(e.g., bug fix and refactoring) in a single commit, thus creating a so-called
tangled commit. Sharing tangled commits is problematic because it makes review,
reversion, and integration of these commits harder and historical analyses of
the project less reliable. Researchers have worked at untangling existing
commits, i.e., finding which part of a commit relates to which task. In this
paper, we contribute to this line of work in two ways: (1) A publicly available
dataset of untangled code changes, created with the help of two developers who
accurately split their code changes into self contained tasks over a period of
four months; (2) a novel approach, EpiceaUntangler, to help developers share
untangled commits (aka. atomic commits) by using fine-grained code change
information. EpiceaUntangler is based and tested on the publicly available
dataset, and further evaluated by deploying it to 7 developers, who used it for
2 weeks. We recorded a median success rate of 91% and an average of 75% in
automatically creating clusters of untangled fine-grained code changes.
| no_new_dataset | 0.562567 |
1502.06823 | Theodoros Rekatsinas | Theodoros Rekatsinas, Amol Deshpande and Aditya Parameswaran | CrowdGather: Entity Extraction over Structured Domains | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourced entity extraction is often used to acquire data for many
applications, including recommendation systems, construction of aggregated
listings and directories, and knowledge base construction. Current solutions
focus on entity extraction using a single query, e.g., only using "give me
another restaurant", when assembling a list of all restaurants. Due to the cost
of human labor, solutions that focus on a single query can be highly
impractical.
In this paper, we leverage the fact that entity extraction often focuses on
{\em structured domains}, i.e., domains that are described by a collection of
attributes, each potentially exhibiting hierarchical structure. Given such a
domain, we enable a richer space of queries, e.g., "give me another Moroccan
restaurant in Manhattan that does takeout". Naturally, enabling a richer space
of queries comes with a host of issues, especially since many queries return
empty answers. We develop new statistical tools that enable us to reason about
the gain of issuing {\em additional queries} given little to no information,
and show how we can exploit the overlaps across the results of queries for
different points of the data domain to obtain accurate estimates of the gain.
We cast the problem of {\em budgeted entity extraction} over large domains as
an adaptive optimization problem that seeks to maximize the number of extracted
entities, while minimizing the overall extraction costs. We evaluate our
techniques with experiments on both synthetic and real-world datasets,
demonstrating a yield of up to 4X over competing approaches for the same
budget.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 14:41:15 GMT"
}
] | 2015-02-25T00:00:00 | [
[
"Rekatsinas",
"Theodoros",
""
],
[
"Deshpande",
"Amol",
""
],
[
"Parameswaran",
"Aditya",
""
]
] | TITLE: CrowdGather: Entity Extraction over Structured Domains
ABSTRACT: Crowdsourced entity extraction is often used to acquire data for many
applications, including recommendation systems, construction of aggregated
listings and directories, and knowledge base construction. Current solutions
focus on entity extraction using a single query, e.g., only using "give me
another restaurant", when assembling a list of all restaurants. Due to the cost
of human labor, solutions that focus on a single query can be highly
impractical.
In this paper, we leverage the fact that entity extraction often focuses on
{\em structured domains}, i.e., domains that are described by a collection of
attributes, each potentially exhibiting hierarchical structure. Given such a
domain, we enable a richer space of queries, e.g., "give me another Moroccan
restaurant in Manhattan that does takeout". Naturally, enabling a richer space
of queries comes with a host of issues, especially since many queries return
empty answers. We develop new statistical tools that enable us to reason about
the gain of issuing {\em additional queries} given little to no information,
and show how we can exploit the overlaps across the results of queries for
different points of the data domain to obtain accurate estimates of the gain.
We cast the problem of {\em budgeted entity extraction} over large domains as
an adaptive optimization problem that seeks to maximize the number of extracted
entities, while minimizing the overall extraction costs. We evaluate our
techniques with experiments on both synthetic and real-world datasets,
demonstrating a yield of up to 4X over competing approaches for the same
budget.
| no_new_dataset | 0.941493 |
1306.0239 | Yichuan Tang | Yichuan Tang | Deep Learning using Linear Support Vector Machines | Contribution to the ICML 2013 Challenges in Representation Learning
Workshop | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, fully-connected and convolutional neural networks have been trained
to achieve state-of-the-art performance on a wide variety of tasks such as
speech recognition, image classification, natural language processing, and
bioinformatics. For classification tasks, most of these "deep learning" models
employ the softmax activation function for prediction and minimize
cross-entropy loss. In this paper, we demonstrate a small but consistent
advantage of replacing the softmax layer with a linear support vector machine.
Learning minimizes a margin-based loss instead of the cross-entropy loss. While
there have been various combinations of neural nets and SVMs in prior art, our
results using L2-SVMs show that by simply replacing softmax with linear SVMs
gives significant gains on popular deep learning datasets MNIST, CIFAR-10, and
the ICML 2013 Representation Learning Workshop's face expression recognition
challenge.
| [
{
"version": "v1",
"created": "Sun, 2 Jun 2013 18:46:58 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jul 2013 21:30:59 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Dec 2013 21:16:45 GMT"
},
{
"version": "v4",
"created": "Sat, 21 Feb 2015 16:58:39 GMT"
}
] | 2015-02-24T00:00:00 | [
[
"Tang",
"Yichuan",
""
]
] | TITLE: Deep Learning using Linear Support Vector Machines
ABSTRACT: Recently, fully-connected and convolutional neural networks have been trained
to achieve state-of-the-art performance on a wide variety of tasks such as
speech recognition, image classification, natural language processing, and
bioinformatics. For classification tasks, most of these "deep learning" models
employ the softmax activation function for prediction and minimize
cross-entropy loss. In this paper, we demonstrate a small but consistent
advantage of replacing the softmax layer with a linear support vector machine.
Learning minimizes a margin-based loss instead of the cross-entropy loss. While
there have been various combinations of neural nets and SVMs in prior art, our
results using L2-SVMs show that simply replacing softmax with linear SVMs
gives significant gains on popular deep learning datasets MNIST, CIFAR-10, and
the ICML 2013 Representation Learning Workshop's face expression recognition
challenge.
| no_new_dataset | 0.950319 |
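A minimal sketch of the idea in the abstract above, assuming PyTorch rather than the paper's original implementation: the softmax output layer is replaced by raw class scores trained with a one-vs-rest squared hinge (L2-SVM) loss. The network architecture and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch: a small convolutional net whose top layer is trained with a
# multi-class L2-SVM (squared hinge) loss instead of softmax cross-entropy.

class ConvL2SVM(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, num_classes)  # raw scores, no softmax

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def l2_svm_loss(scores, targets, margin=1.0):
    """One-vs-rest squared hinge loss on the class scores."""
    y = torch.full_like(scores, -1.0)
    y.scatter_(1, targets.unsqueeze(1), 1.0)          # +1 for the true class, -1 otherwise
    return torch.clamp(margin - y * scores, min=0).pow(2).sum(dim=1).mean()

model = ConvL2SVM()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
x, t = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))  # stand-in for an MNIST batch
loss = l2_svm_loss(model(x), t)
loss.backward()
opt.step()
print(float(loss))
```

The only change relative to a standard softmax classifier is the loss on the top-layer scores; the feature layers are unchanged, which is what makes the comparison in the abstract clean.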
1312.6110 | Yichuan Tang | Yichuan Tang, Nitish Srivastava, Ruslan Salakhutdinov | Learning Generative Models with Visual Attention | In the proceedings of Neural Information Processing Systems, 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention has long been proposed by psychologists as important for
effectively dealing with the enormous sensory stimulus available in the
neocortex. Inspired by the visual attention models in computational
neuroscience and the need for object-centric data for generative models, we
describe a generative learning framework using attentional mechanisms.
Attentional mechanisms can propagate signals from a region of interest in a scene
to an aligned canonical representation, where generative modeling takes place.
By ignoring background clutter, generative models can concentrate their
resources on the object of interest. Our model is a proper graphical model
where the 2D Similarity transformation is a part of the top-down process. A
ConvNet is employed to provide good initializations during posterior inference
which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our
model can robustly attend to face regions of novel test subjects. More
importantly, our model can learn generative models of new faces from a novel
dataset of large images where the face locations are not known.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 20:50:43 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Dec 2013 16:49:43 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Feb 2015 22:21:15 GMT"
}
] | 2015-02-24T00:00:00 | [
[
"Tang",
"Yichuan",
""
],
[
"Srivastava",
"Nitish",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Learning Generative Models with Visual Attention
ABSTRACT: Attention has long been proposed by psychologists as important for
effectively dealing with the enormous sensory stimulus available in the
neocortex. Inspired by the visual attention models in computational
neuroscience and the need for object-centric data for generative models, we
describe a generative learning framework using attentional mechanisms.
Attentional mechanisms can propagate signals from a region of interest in a scene
to an aligned canonical representation, where generative modeling takes place.
By ignoring background clutter, generative models can concentrate their
resources on the object of interest. Our model is a proper graphical model
where the 2D Similarity transformation is a part of the top-down process. A
ConvNet is employed to provide good initializations during posterior inference
which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our
model can robustly attend to face regions of novel test subjects. More
importantly, our model can learn generative models of new faces from a novel
dataset of large images where the face locations are not known.
| no_new_dataset | 0.910942 |
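A sketch of only the attentional read-out step described above, assuming PyTorch: given 2D similarity-transform parameters (scale, rotation, translation in normalized coordinates), the attended region of a large image is warped into a small canonical window. The paper's posterior inference over these parameters (ConvNet initialization plus Hamiltonian Monte Carlo) is not shown.

```python
import math
import torch
import torch.nn.functional as F

# Read-out step only: warp the attended region, specified by a 2D similarity
# transform (scale s, rotation theta, translation tx, ty in [-1, 1] coordinates),
# into a canonical window where generative modeling would take place.

def attend(image, s, theta, tx, ty, out_size=24):
    n = image.shape[0]
    cos_t, sin_t = s * math.cos(theta), s * math.sin(theta)
    A = torch.tensor([[[cos_t, -sin_t, tx],
                       [sin_t,  cos_t, ty]]], dtype=image.dtype).repeat(n, 1, 1)
    grid = F.affine_grid(A, (n, image.shape[1], out_size, out_size), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

big_image = torch.rand(1, 1, 128, 128)               # stand-in for a large face image
canonical = attend(big_image, s=0.3, theta=0.1, tx=-0.2, ty=0.1)
print(canonical.shape)                                # torch.Size([1, 1, 24, 24])
```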
1405.0312 | Piotr Doll\'ar | Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross
Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, Piotr
Doll\'ar | Microsoft COCO: Common Objects in Context | 1) updated annotation pipeline description and figures; 2) added new
section describing datasets splits; 3) updated author list | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new dataset with the goal of advancing the state-of-the-art in
object recognition by placing the question of object recognition in the context
of the broader question of scene understanding. This is achieved by gathering
images of complex everyday scenes containing common objects in their natural
context. Objects are labeled using per-instance segmentations to aid in precise
object localization. Our dataset contains photos of 91 object types that would
be easily recognizable by a 4-year-old. With a total of 2.5 million labeled
instances in 328k images, the creation of our dataset drew upon extensive crowd
worker involvement via novel user interfaces for category detection, instance
spotting and instance segmentation. We present a detailed statistical analysis
of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide
baseline performance analysis for bounding box and segmentation detection
results using a Deformable Parts Model.
| [
{
"version": "v1",
"created": "Thu, 1 May 2014 21:43:32 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Jul 2014 18:39:56 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Feb 2015 01:48:49 GMT"
}
] | 2015-02-24T00:00:00 | [
[
"Lin",
"Tsung-Yi",
""
],
[
"Maire",
"Michael",
""
],
[
"Belongie",
"Serge",
""
],
[
"Bourdev",
"Lubomir",
""
],
[
"Girshick",
"Ross",
""
],
[
"Hays",
"James",
""
],
[
"Perona",
"Pietro",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Dollár",
"Piotr",
""
]
] | TITLE: Microsoft COCO: Common Objects in Context
ABSTRACT: We present a new dataset with the goal of advancing the state-of-the-art in
object recognition by placing the question of object recognition in the context
of the broader question of scene understanding. This is achieved by gathering
images of complex everyday scenes containing common objects in their natural
context. Objects are labeled using per-instance segmentations to aid in precise
object localization. Our dataset contains photos of 91 object types that would
be easily recognizable by a 4-year-old. With a total of 2.5 million labeled
instances in 328k images, the creation of our dataset drew upon extensive crowd
worker involvement via novel user interfaces for category detection, instance
spotting and instance segmentation. We present a detailed statistical analysis
of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide
baseline performance analysis for bounding box and segmentation detection
results using a Deformable Parts Model.
| new_dataset | 0.956104 |
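Per-instance annotations of the kind described above are typically consumed through the pycocotools API; a short sketch follows. The annotation file path and category names are assumptions to be adapted to a local copy of the dataset.

```python
from pycocotools.coco import COCO

# Sketch of reading COCO-style per-instance annotations with the pycocotools API.
# The annotation file path below is an assumption; adjust to your local copy.
coco = COCO("annotations/instances_train2014.json")

cat_ids = coco.getCatIds(catNms=["person", "dog"])        # look up category ids by name
img_ids = coco.getImgIds(catIds=cat_ids)                  # images containing all these categories
img = coco.loadImgs(img_ids[0])[0]

ann_ids = coco.getAnnIds(imgIds=img["id"], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(ann_ids)
for ann in anns:
    mask = coco.annToMask(ann)                            # per-instance binary segmentation mask
    print(ann["category_id"], ann["bbox"], mask.sum())    # bbox is [x, y, width, height]
```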
1412.6039 | Xin Yuan | Yunchen Pu, Xin Yuan and Lawrence Carin | Generative Deep Deconvolutional Learning | 21 pages, 9 figures, revised version for ICLR 2015 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A generative Bayesian model is developed for deep (multi-layer) convolutional
dictionary learning. A novel probabilistic pooling operation is integrated into
the deep model, yielding efficient bottom-up and top-down probabilistic
learning. After learning the deep convolutional dictionary, testing is
implemented via deconvolutional inference. To speed up this inference, a new
statistical approach is proposed to project the top-layer dictionary elements
to the data level. Following this, only one layer of deconvolution is required
during testing. Experimental results demonstrate powerful capabilities of the
model to learn multi-layer features from images. Excellent classification
results are obtained on both the MNIST and Caltech 101 datasets.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 20:01:38 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Dec 2014 17:21:36 GMT"
},
{
"version": "v3",
"created": "Sun, 22 Feb 2015 18:13:29 GMT"
}
] | 2015-02-24T00:00:00 | [
[
"Pu",
"Yunchen",
""
],
[
"Yuan",
"Xin",
""
],
[
"Carin",
"Lawrence",
""
]
] | TITLE: Generative Deep Deconvolutional Learning
ABSTRACT: A generative Bayesian model is developed for deep (multi-layer) convolutional
dictionary learning. A novel probabilistic pooling operation is integrated into
the deep model, yielding efficient bottom-up and top-down probabilistic
learning. After learning the deep convolutional dictionary, testing is
implemented via deconvolutional inference. To speed up this inference, a new
statistical approach is proposed to project the top-layer dictionary elements
to the data level. Following this, only one layer of deconvolution is required
during testing. Experimental results demonstrate powerful capabilities of the
model to learn multi-layer features from images. Excellent classification
results are obtained on both the MNIST and Caltech 101 datasets.
| no_new_dataset | 0.951233 |
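A hedged sketch of the generative (decoder) direction of such a convolutional dictionary model, assuming PyTorch: an image is synthesized by spreading sparse feature-map activations through dictionary filters via a transposed convolution. The Bayesian learning of the filters and the probabilistic pooling layer from the paper are not shown; filter count, size, and sparsity pattern are illustrative.

```python
import torch
import torch.nn.functional as F

# Generative direction of a convolutional dictionary model: an image is synthesized
# by convolving sparse feature maps with dictionary filters and summing over elements.

K, k = 16, 7                      # number of dictionary elements, filter size
dictionary = torch.randn(K, 1, k, k, requires_grad=True)    # K single-channel filters
feature_maps = torch.zeros(1, K, 28, 28)
feature_maps[0, torch.randint(0, K, (20,)),
             torch.randint(0, 28, (20,)),
             torch.randint(0, 28, (20,))] = 1.0             # a few sparse activations

# conv_transpose2d spreads each active unit into a k x k patch of its dictionary
# element; the padding keeps the reconstruction at 28 x 28.
reconstruction = F.conv_transpose2d(feature_maps, dictionary, padding=k // 2)
print(reconstruction.shape)       # torch.Size([1, 1, 28, 28])
```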
1502.06219 | Smitha M.L. | B.H. Shekar, Smitha M.L. | Video Text Localization with an emphasis on Edge Features | 8 pages, Eighth International Conference on Image and Signal
Processing, Elsevier Publications, ISBN: 9789351072522, pp: 324-330, held at
UVCE, Bangalore in July 2014. arXiv admin note: text overlap with
arXiv:1502.03913 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text detection and localization play a major role in video analysis and
understanding. Scene text embedded in video carries high-level semantics
and hence contributes significantly to visual content analysis and retrieval.
This paper proposes a novel method to robustly localize text in natural
scene images and videos based on a Sobel edge-emphasizing approach. The input
image is preprocessed and edge emphasis is applied to detect text clusters.
Further, a set of rules has been devised using morphological operators for
false-positive elimination, and connected component analysis is performed to
detect the text regions, thereby achieving text localization. The
experimental results obtained on publicly available standard datasets
illustrate that the proposed method can detect and localize the texts of
various sizes, fonts and colors.
| [
{
"version": "v1",
"created": "Sun, 22 Feb 2015 12:32:18 GMT"
}
] | 2015-02-24T00:00:00 | [
[
"Shekar",
"B. H.",
""
],
[
"L.",
"Smitha M.",
""
]
] | TITLE: Video Text Localization with an emphasis on Edge Features
ABSTRACT: Text detection and localization play a major role in video analysis and
understanding. Scene text embedded in video carries high-level semantics
and hence contributes significantly to visual content analysis and retrieval.
This paper proposes a novel method to robustly localize text in natural
scene images and videos based on a Sobel edge-emphasizing approach. The input
image is preprocessed and edge emphasis is applied to detect text clusters.
Further, a set of rules has been devised using morphological operators for
false-positive elimination, and connected component analysis is performed to
detect the text regions, thereby achieving text localization. The
experimental results obtained on publicly available standard datasets
illustrate that the proposed method can detect and localize the texts of
various sizes, fonts and colors.
| no_new_dataset | 0.953751 |
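The pipeline described above can be sketched with OpenCV as follows; the specific thresholds, kernel size, and geometric filtering rules are illustrative assumptions rather than the exact rule set devised in the paper.

```python
import cv2

# Sketch of a Sobel-edge-based text localization pipeline in OpenCV.

def localize_text(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)                       # preprocessing

    # Emphasize edges: text regions produce dense horizontal gradients.
    sobel = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
    _, binary = cv2.threshold(sobel, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Morphological closing merges character strokes into candidate text clusters.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    clusters = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Connected component analysis plus simple heuristics to drop false positives.
    boxes = []
    n, _, stats, _ = cv2.connectedComponentsWithStats(clusters, connectivity=8)
    for x, y, w, h, area in stats[1:]:                             # skip background label 0
        if area > 100 and w > h and h > 8:                         # assumed elongation/size rules
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes

print(localize_text("frame.png"))                                  # hypothetical video frame
```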