Each record below is a single pipe-delimited row with the following columns:

| column | dtype | observed range / classes |
|---|---|---|
| id | string | 9-16 chars |
| submitter | string (nullable) | 3-64 chars |
| authors | string | 5-6.63k chars |
| title | string | 7-245 chars |
| comments | string (nullable) | 1-482 chars |
| journal-ref | string (nullable) | 4-382 chars |
| doi | string (nullable) | 9-151 chars |
| report-no | string | 984 classes |
| categories | string | 5-108 chars |
| license | string | 9 classes |
| abstract | string | 83-3.41k chars |
| versions | list | 1-20 items |
| update_date | timestamp[s] | 2007-05-23 to 2025-04-11 |
| authors_parsed | sequence | 1-427 items |
| prompt | string | 166-3.49k chars |
| label | string | 2 classes |
| prob | float64 | 0.5-0.98 |
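A minimal sketch of loading and filtering records with this schema using the Hugging Face `datasets` library is shown below; the file name `arxiv_labeled.jsonl` and the 0.9 probability cutoff are illustrative assumptions, not values taken from the records.

```python
# Minimal sketch (assumptions: the rows below are exported to a local JSON Lines
# file named "arxiv_labeled.jsonl"; the 0.9 cutoff is arbitrary).
from datasets import load_dataset

ds = load_dataset("json", data_files="arxiv_labeled.jsonl", split="train")

# Inspect the inferred schema; it should mirror the column table above.
print(ds.features)

# Keep rows the classifier labeled "new_dataset" with probability >= 0.9.
high_conf_new = ds.filter(
    lambda row: row["label"] == "new_dataset" and row["prob"] >= 0.9
)

for row in high_conf_new:
    print(row["id"], "|", row["prob"], "|", row["title"])
```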
1407.1538 | Xiangnan Kong | Xiangnan Kong and Zhaoming Wu and Li-Jia Li and Ruofei Zhang and
Philip S. Yu and Hang Wu and Wei Fan | Large-Scale Multi-Label Learning with Incomplete Label Assignments | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label learning deals with the classification problems where each
instance can be assigned with multiple labels simultaneously. Conventional
multi-label learning approaches mainly focus on exploiting label correlations.
It is usually assumed, explicitly or implicitly, that the label sets for
training instances are fully labeled without any missing labels. However, in
many real-world multi-label datasets, the label assignments for training
instances can be incomplete. Some ground-truth labels can be missed by the
labeler from the label set. This problem is especially typical when the number of
instances is very large, and the labeling cost is very high, which makes it
almost impossible to get a fully labeled training set. In this paper, we study
the problem of large-scale multi-label learning with incomplete label
assignments. We propose an approach, called MPU, based upon positive and
unlabeled stochastic gradient descent and stacked models. Unlike prior works,
our method can effectively and efficiently consider missing labels and label
correlations simultaneously, and is very scalable, with time complexity linear in
the size of the data. Extensive experiments on two real-world
multi-label datasets show that our MPU model consistently outperforms other
commonly-used baselines.
| [
{
"version": "v1",
"created": "Sun, 6 Jul 2014 20:13:48 GMT"
}
] | 2014-07-08T00:00:00 | [
[
"Kong",
"Xiangnan",
""
],
[
"Wu",
"Zhaoming",
""
],
[
"Li",
"Li-Jia",
""
],
[
"Zhang",
"Ruofei",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Wu",
"Hang",
""
],
[
"Fan",
"Wei",
""
]
] | TITLE: Large-Scale Multi-Label Learning with Incomplete Label Assignments
ABSTRACT: Multi-label learning deals with the classification problems where each
instance can be assigned with multiple labels simultaneously. Conventional
multi-label learning approaches mainly focus on exploiting label correlations.
It is usually assumed, explicitly or implicitly, that the label sets for
training instances are fully labeled without any missing labels. However, in
many real-world multi-label datasets, the label assignments for training
instances can be incomplete. Some ground-truth labels can be missed by the
labeler from the label set. This problem is especially typical when the number of
instances is very large, and the labeling cost is very high, which makes it
almost impossible to get a fully labeled training set. In this paper, we study
the problem of large-scale multi-label learning with incomplete label
assignments. We propose an approach, called MPU, based upon positive and
unlabeled stochastic gradient descent and stacked models. Unlike prior works,
our method can effectively and efficiently consider missing labels and label
correlations simultaneously, and is very scalable, with time complexity linear in
the size of the data. Extensive experiments on two real-world
multi-label datasets show that our MPU model consistently outperforms other
commonly-used baselines.
| no_new_dataset | 0.944485 |
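The `prompt` column in each record simply concatenates the `title` and `abstract` fields behind `TITLE:` and `ABSTRACT:` markers, as in the record above. A small sketch of splitting such a prompt back into its parts (the function name and the truncated example string are illustrative):

```python
# Minimal sketch: split a prompt of the form "TITLE: ...\nABSTRACT: ..." into its parts.
def split_prompt(prompt_text: str) -> tuple[str, str]:
    head, _, abstract = prompt_text.partition("ABSTRACT:")
    title = head.replace("TITLE:", "", 1).strip()
    return title, abstract.strip()

title, abstract = split_prompt(
    "TITLE: Large-Scale Multi-Label Learning with Incomplete Label Assignments\n"
    "ABSTRACT: Multi-label learning deals with the classification problems where each "
    "instance can be assigned with multiple labels simultaneously."
)
print(title)
print(abstract[:60])
```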
1407.1772 | Senzhang Wang | Senzhang Wang and Sihong Xie and Xiaoming Zhang and Zhoujun Li and
Philip S. Yu and Xinyu Shu | Future Influence Ranking of Scientific Literature | 9 pages, Proceedings of the 2014 SIAM International Conference on
Data Mining | null | 10.1137/1.9781611973440.86 | null | cs.SI cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Researchers or students entering an emerging research area are particularly
interested in what newly published papers will be most cited and which young
researchers will become influential in the future, so that they can catch the
most recent advances and find valuable research directions. However, predicting
the future importance of scientific articles and authors is challenging due to
the dynamic nature of literature networks and evolving research topics.
Different from most previous studies aiming to rank the current importance of
literatures and authors, we focus on \emph{ranking the future popularity of new
publications and young researchers} by proposing a unified ranking model to
combine various available information. Specifically, we first propose to
extract two kinds of text features, words and word co-occurrences, to
characterize innovative papers and authors. Then, instead of using static and
un-weighted graphs, we construct time-aware weighted graphs to distinguish the
varying importance of links established at different times. Finally, by
leveraging both the constructed text features and graphs, we propose a mutual
reinforcement ranking framework called \emph{MRFRank} to rank the future
importance of papers and authors simultaneously. Experimental results on the
ArnetMiner dataset show that the proposed approach significantly outperforms
the baselines on the metric \emph{recommendation intensity}.
| [
{
"version": "v1",
"created": "Mon, 7 Jul 2014 17:00:34 GMT"
}
] | 2014-07-08T00:00:00 | [
[
"Wang",
"Senzhang",
""
],
[
"Xie",
"Sihong",
""
],
[
"Zhang",
"Xiaoming",
""
],
[
"Li",
"Zhoujun",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Shu",
"Xinyu",
""
]
] | TITLE: Future Influence Ranking of Scientific Literature
ABSTRACT: Researchers or students entering an emerging research area are particularly
interested in what newly published papers will be most cited and which young
researchers will become influential in the future, so that they can catch the
most recent advances and find valuable research directions. However, predicting
the future importance of scientific articles and authors is challenging due to
the dynamic nature of literature networks and evolving research topics.
Different from most previous studies aiming to rank the current importance of
literatures and authors, we focus on \emph{ranking the future popularity of new
publications and young researchers} by proposing a unified ranking model to
combine various available information. Specifically, we first propose to
extract two kinds of text features, words and word co-occurrences, to
characterize innovative papers and authors. Then, instead of using static and
un-weighted graphs, we construct time-aware weighted graphs to distinguish the
varying importance of links established at different times. Finally, by
leveraging both the constructed text features and graphs, we propose a mutual
reinforcement ranking framework called \emph{MRFRank} to rank the future
importance of papers and authors simultaneously. Experimental results on the
ArnetMiner dataset show that the proposed approach significantly outperforms
the baselines on the metric \emph{recommendation intensity}.
| no_new_dataset | 0.951188 |
1407.1165 | Prashant Borde | Prashant Bordea, Amarsinh Varpeb, Ramesh Manzac, Pravin Yannawara | Recognition of Isolated Words using Zernike and MFCC features for Audio
Visual Speech Recognition | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic Speech Recognition (ASR) by machine is an attractive research topic
in the signal processing domain and has attracted many researchers to contribute to
this area. In recent years, there have been many advances in automatic speech
reading systems with the inclusion of audio and visual speech features to
recognize words under noisy conditions. The objective of audio-visual speech
recognition system is to improve recognition accuracy. In this paper we
computed visual features using Zernike moments and audio features using Mel
Frequency Cepstral Coefficients (MFCC) on vVISWa (Visual Vocabulary of
Independent Standard Words) dataset which contains collection of isolated set
of city names of 10 speakers. The visual features were normalized and the dimension
of the feature set was reduced by Principal Component Analysis (PCA) in order to
recognize the isolated word utterance in PCA space. The performance of
recognition of isolated words based on visual-only and audio-only features
is 63.88 and 100, respectively.
| [
{
"version": "v1",
"created": "Fri, 4 Jul 2014 09:32:10 GMT"
}
] | 2014-07-07T00:00:00 | [
[
"Bordea",
"Prashant",
""
],
[
"Varpeb",
"Amarsinh",
""
],
[
"Manzac",
"Ramesh",
""
],
[
"Yannawara",
"Pravin",
""
]
] | TITLE: Recognition of Isolated Words using Zernike and MFCC features for Audio
Visual Speech Recognition
ABSTRACT: Automatic Speech Recognition (ASR) by machine is an attractive research topic
in the signal processing domain and has attracted many researchers to contribute to
this area. In recent years, there have been many advances in automatic speech
reading systems with the inclusion of audio and visual speech features to
recognize words under noisy conditions. The objective of audio-visual speech
recognition system is to improve recognition accuracy. In this paper we
computed visual features using Zernike moments and audio features using Mel
Frequency Cepstral Coefficients (MFCC) on vVISWa (Visual Vocabulary of
Independent Standard Words) dataset which contains collection of isolated set
of city names of 10 speakers. The visual features were normalized and the dimension
of the feature set was reduced by Principal Component Analysis (PCA) in order to
recognize the isolated word utterance in PCA space. The performance of
recognition of isolated words based on visual-only and audio-only features
is 63.88 and 100, respectively.
| new_dataset | 0.962532 |
1407.1176 | Felipe Llinares | Felipe Llinares, Mahito Sugiyama, Karsten M. Borgwardt | Identifying Higher-order Combinations of Binary Features | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding statistically significant interactions between binary variables is
computationally and statistically challenging in high-dimensional settings, due
to the combinatorial explosion in the number of hypotheses. Terada et al.
recently showed how to elegantly address this multiple testing problem by
excluding non-testable hypotheses. Still, it remains unclear how their approach
scales to large datasets.
Here, we propose strategies to speed up the approach by Terada et al. and
evaluate them thoroughly in 11 real-world benchmark datasets. We observe that
one approach, incremental search with early stopping, is orders of magnitude
faster than the current state-of-the-art approach.
| [
{
"version": "v1",
"created": "Fri, 4 Jul 2014 10:17:43 GMT"
}
] | 2014-07-07T00:00:00 | [
[
"Llinares",
"Felipe",
""
],
[
"Sugiyama",
"Mahito",
""
],
[
"Borgwardt",
"Karsten M.",
""
]
] | TITLE: Identifying Higher-order Combinations of Binary Features
ABSTRACT: Finding statistically significant interactions between binary variables is
computationally and statistically challenging in high-dimensional settings, due
to the combinatorial explosion in the number of hypotheses. Terada et al.
recently showed how to elegantly address this multiple testing problem by
excluding non-testable hypotheses. Still, it remains unclear how their approach
scales to large datasets.
Here, we propose strategies to speed up the approach by Terada et al. and
evaluate them thoroughly in 11 real-world benchmark datasets. We observe that
one approach, incremental search with early stopping, is orders of magnitude
faster than the current state-of-the-art approach.
| no_new_dataset | 0.949995 |
1407.1208 | Piotr Bojanowski | Piotr Bojanowski, R\'emi Lajugie, Francis Bach, Ivan Laptev, Jean
Ponce, Cordelia Schmid, Josef Sivic | Weakly Supervised Action Labeling in Videos Under Ordering Constraints | 17 pages, completed version of a ECCV2014 conference paper | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are given a set of video clips, each one annotated with an {\em ordered}
list of actions, such as "walk" then "sit" then "answer phone" extracted from,
for example, the associated text script. We seek to temporally localize the
individual actions in each clip as well as to learn a discriminative classifier
for each action. We formulate the problem as a weakly supervised temporal
assignment with ordering constraints. Each video clip is divided into small
time intervals and each time interval of each video clip is assigned one action
label, while respecting the order in which the action labels appear in the
given annotations. We show that the action label assignment can be determined
together with learning a classifier for each action in a discriminative manner.
We evaluate the proposed model on a new and challenging dataset of 937 video
clips with a total of 787720 frames containing sequences of 16 different
actions from 69 Hollywood movies.
| [
{
"version": "v1",
"created": "Fri, 4 Jul 2014 12:53:15 GMT"
}
] | 2014-07-07T00:00:00 | [
[
"Bojanowski",
"Piotr",
""
],
[
"Lajugie",
"Rémi",
""
],
[
"Bach",
"Francis",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Ponce",
"Jean",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Sivic",
"Josef",
""
]
] | TITLE: Weakly Supervised Action Labeling in Videos Under Ordering Constraints
ABSTRACT: We are given a set of video clips, each one annotated with an {\em ordered}
list of actions, such as "walk" then "sit" then "answer phone" extracted from,
for example, the associated text script. We seek to temporally localize the
individual actions in each clip as well as to learn a discriminative classifier
for each action. We formulate the problem as a weakly supervised temporal
assignment with ordering constraints. Each video clip is divided into small
time intervals and each time interval of each video clip is assigned one action
label, while respecting the order in which the action labels appear in the
given annotations. We show that the action label assignment can be determined
together with learning a classifier for each action in a discriminative manner.
We evaluate the proposed model on a new and challenging dataset of 937 video
clips with a total of 787720 frames containing sequences of 16 different
actions from 69 Hollywood movies.
| new_dataset | 0.957517 |
1407.0717 | Lubomir Bourdev | Lubomir Bourdev, Fei Yang, Rob Fergus | Deep Poselets for Human Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of detecting people in natural scenes using a part
approach based on poselets. We propose a bootstrapping method that allows us to
collect millions of weakly labeled examples for each poselet type. We use these
examples to train a Convolutional Neural Net to discriminate different poselet
types and separate them from the background class. We then use the trained CNN
as a way to represent poselet patches with a Pose Discriminative Feature (PDF)
vector -- a compact 256-dimensional feature vector that is effective at
discriminating pose from appearance. We train the poselet model on top of PDF
features and combine them with object-level CNNs for detection and bounding box
prediction. The resulting model leads to state-of-the-art performance for human
detection on the PASCAL datasets.
| [
{
"version": "v1",
"created": "Wed, 2 Jul 2014 20:28:22 GMT"
}
] | 2014-07-04T00:00:00 | [
[
"Bourdev",
"Lubomir",
""
],
[
"Yang",
"Fei",
""
],
[
"Fergus",
"Rob",
""
]
] | TITLE: Deep Poselets for Human Detection
ABSTRACT: We address the problem of detecting people in natural scenes using a part
approach based on poselets. We propose a bootstrapping method that allows us to
collect millions of weakly labeled examples for each poselet type. We use these
examples to train a Convolutional Neural Net to discriminate different poselet
types and separate them from the background class. We then use the trained CNN
as a way to represent poselet patches with a Pose Discriminative Feature (PDF)
vector -- a compact 256-dimensional feature vector that is effective at
discriminating pose from appearance. We train the poselet model on top of PDF
features and combine them with object-level CNNs for detection and bounding box
prediction. The resulting model leads to state-of-the-art performance for human
detection on the PASCAL datasets.
| no_new_dataset | 0.948058 |
1407.0786 | Chunhua Shen | Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel | Strengthening the Effectiveness of Pedestrian Detection with Spatially
Pooled Features | 16 pages. Appearing in Proc. European Conf. Computer Vision (ECCV)
2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple yet effective approach to the problem of pedestrian
detection which outperforms the current state-of-the-art. Our new features are
built on the basis of low-level visual features and spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. We then directly optimise the partial
area under the ROC curve (\pAUC) measure, which concentrates detection
performance in the range of most practical importance. The combination of these
factors leads to a pedestrian detector which outperforms all competitors on all
of the standard benchmark datasets. We advance state-of-the-art results by
lowering the average miss rate from $13\%$ to $11\%$ on the INRIA benchmark,
$41\%$ to $37\%$ on the ETH benchmark, $51\%$ to $42\%$ on the TUD-Brussels
benchmark and $36\%$ to $29\%$ on the Caltech-USA benchmark.
| [
{
"version": "v1",
"created": "Thu, 3 Jul 2014 05:39:30 GMT"
}
] | 2014-07-04T00:00:00 | [
[
"Paisitkriangkrai",
"Sakrapee",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Strengthening the Effectiveness of Pedestrian Detection with Spatially
Pooled Features
ABSTRACT: We propose a simple yet effective approach to the problem of pedestrian
detection which outperforms the current state-of-the-art. Our new features are
built on the basis of low-level visual features and spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. We then directly optimise the partial
area under the ROC curve (\pAUC) measure, which concentrates detection
performance in the range of most practical importance. The combination of these
factors leads to a pedestrian detector which outperforms all competitors on all
of the standard benchmark datasets. We advance state-of-the-art results by
lowering the average miss rate from $13\%$ to $11\%$ on the INRIA benchmark,
$41\%$ to $37\%$ on the ETH benchmark, $51\%$ to $42\%$ on the TUD-Brussels
benchmark and $36\%$ to $29\%$ on the Caltech-USA benchmark.
| no_new_dataset | 0.950503 |
1407.0935 | Gopalkrishna MT | M. T Gopalakrishna, M. Ravishankar and D. R Rameshbabu | Multiple Moving Object Recognitions in video based on Log Gabor-PCA
Approach | 8,26,conference | null | 10.1007/978-3-319-01778-5_10 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object recognition in video sequences or images is one of the sub-fields of
computer vision. Moving object recognition from a video sequence is an
appealing topic with applications in various areas such as airport safety,
intrusion surveillance, video monitoring, intelligent highway, etc. Moving
object recognition is the most challenging task in intelligent video
surveillance systems. In this regard, many techniques have been proposed based
on different methods. Despite its importance, moving object recognition in
complex environments is still far from being completely solved for low
resolution videos, foggy videos, and also dim video sequences. All in all,
these make it necessary to develop exceedingly robust techniques. This paper
introduces multiple moving object recognition in video sequences based on the
LoG Gabor-PCA approach and angle-based distance similarity measure techniques,
used to recognize the object as a human, vehicle, etc. A number of experiments are
conducted for indoor and outdoor video sequences of standard datasets and also
our own collection of video sequences comprising of partial night vision video
sequences. Experimental results show that our proposed approach achieves an
excellent recognition rate. Results obtained are satisfactory and competent.
| [
{
"version": "v1",
"created": "Thu, 3 Jul 2014 14:52:56 GMT"
}
] | 2014-07-04T00:00:00 | [
[
"Gopalakrishna",
"M. T",
""
],
[
"Ravishankar",
"M.",
""
],
[
"Rameshbabu",
"D. R",
""
]
] | TITLE: Multiple Moving Object Recognitions in video based on Log Gabor-PCA
Approach
ABSTRACT: Object recognition in video sequences or images is one of the sub-fields of
computer vision. Moving object recognition from a video sequence is an
appealing topic with applications in various areas such as airport safety,
intrusion surveillance, video monitoring, intelligent highway, etc. Moving
object recognition is the most challenging task in intelligent video
surveillance systems. In this regard, many techniques have been proposed based
on different methods. Despite its importance, moving object recognition in
complex environments is still far from being completely solved for low
resolution videos, foggy videos, and also dim video sequences. All in all,
these make it necessary to develop exceedingly robust techniques. This paper
introduces multiple moving object recognition in video sequences based on the
LoG Gabor-PCA approach and angle-based distance similarity measure techniques,
used to recognize the object as a human, vehicle, etc. A number of experiments are
conducted for indoor and outdoor video sequences of standard datasets and also
our own collection of video sequences comprising of partial night vision video
sequences. Experimental results show that our proposed approach achieves an
excellent recognition rate. Results obtained are satisfactory and competent.
| new_dataset | 0.646097 |
1407.0455 | Yingyi Bu | Yingyi Bu, Vinayak Borkar, Jianfeng Jia, Michael J. Carey, Tyson
Condie | Pregelix: Big(ger) Graph Analytics on A Dataflow Engine | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing need for distributed graph processing systems that are
capable of gracefully scaling to very large graph datasets. Unfortunately, this
challenge has not been easily met due to the intense memory pressure imposed by
process-centric, message passing designs that many graph processing systems
follow. Pregelix is a new open source distributed graph processing system that
is based on an iterative dataflow design that is better tuned to handle both
in-memory and out-of-core workloads. As such, Pregelix offers improved
performance characteristics and scaling properties over current open source
systems (e.g., we have seen up to 15x speedup compared to Apache Giraph and up
to 35x speedup compared to distributed GraphLab), and makes more effective use
of available machine resources to support Big(ger) Graph Analytics.
| [
{
"version": "v1",
"created": "Wed, 2 Jul 2014 05:04:28 GMT"
}
] | 2014-07-03T00:00:00 | [
[
"Bu",
"Yingyi",
""
],
[
"Borkar",
"Vinayak",
""
],
[
"Jia",
"Jianfeng",
""
],
[
"Carey",
"Michael J.",
""
],
[
"Condie",
"Tyson",
""
]
] | TITLE: Pregelix: Big(ger) Graph Analytics on A Dataflow Engine
ABSTRACT: There is a growing need for distributed graph processing systems that are
capable of gracefully scaling to very large graph datasets. Unfortunately, this
challenge has not been easily met due to the intense memory pressure imposed by
process-centric, message passing designs that many graph processing systems
follow. Pregelix is a new open source distributed graph processing system that
is based on an iterative dataflow design that is better tuned to handle both
in-memory and out-of-core workloads. As such, Pregelix offers improved
performance characteristics and scaling properties over current open source
systems (e.g., we have seen up to 15x speedup compared to Apache Giraph and up
to 35x speedup compared to distributed GraphLab), and makes more effective use
of available machine resources to support Big(ger) Graph Analytics.
| no_new_dataset | 0.94699 |
1407.0547 | Mark Phillips | Mark Phillips, Lauren Ko | Understanding Repository Growth at the University of North Texas: A Case
Study | 5 pages | null | null | null | cs.DL | http://creativecommons.org/licenses/by/3.0/ | Over the past decade the University of North Texas Libraries (UNTL) has
developed a sizable digital library infrastructure for use in carrying out its
core mission to the students, faculty, staff and associated communities of the
university. This repository of content offers countless research possibilities
for end users across the Internet when it is discovered and used in research,
scholarship, entertainment, and lifelong learning. The characteristics of the
repository itself provide insight into the workings of a modern digital library
infrastructure, how it was created, how often it is updated, or how often it is
modified. In that vein, the authors created a dataset comprised of information
extracted from the UNT Libraries' archival repository Coda and analyzed this
dataset in order to demonstrate the value and insights that can be gained from
sharing repository characteristics more broadly. This case study presents the
findings from an analysis of this dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Jul 2014 12:55:49 GMT"
}
] | 2014-07-03T00:00:00 | [
[
"Phillips",
"Mark",
""
],
[
"Ko",
"Lauren",
""
]
] | TITLE: Understanding Repository Growth at the University of North Texas: A Case
Study
ABSTRACT: Over the past decade the University of North Texas Libraries (UNTL) has
developed a sizable digital library infrastructure for use in carrying out its
core mission to the students, faculty, staff and associated communities of the
university. This repository of content offers countless research possibilities
for end users across the Internet when it is discovered and used in research,
scholarship, entertainment, and lifelong learning. The characteristics of the
repository itself provide insight into the workings of a modern digital library
infrastructure, how it was created, how often it is updated, or how often it is
modified. In that vein, the authors created a dataset comprised of information
extracted from the UNT Libraries' archival repository Coda and analyzed this
dataset in order to demonstrate the value and insights that can be gained from
sharing repository characteristics more broadly. This case study presents the
findings from an analysis of this dataset.
| new_dataset | 0.854156 |
1407.0179 | Novi Quadrianto | Daniel Hern\'andez-Lobato, Viktoriia Sharmanska, Kristian Kersting,
Christoph H. Lampert, Novi Quadrianto | Mind the Nuisance: Gaussian Process Classification using Privileged
Noise | 14 pages with figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The learning with privileged information setting has recently attracted a lot
of attention within the machine learning community, as it allows the
integration of additional knowledge into the training process of a classifier,
even when this comes in the form of a data modality that is not available at
test time. Here, we show that privileged information can naturally be treated
as noise in the latent function of a Gaussian Process classifier (GPC). That
is, in contrast to the standard GPC setting, the latent function is not just a
nuisance but a feature: it becomes a natural measure of confidence about the
training data by modulating the slope of the GPC sigmoid likelihood function.
Extensive experiments on public datasets show that the proposed GPC method
using privileged noise, called GPC+, improves over a standard GPC without
privileged knowledge, and also over the current state-of-the-art SVM-based
method, SVM+. Moreover, we show that advanced neural networks and deep learning
methods can be compressed as privileged information.
| [
{
"version": "v1",
"created": "Tue, 1 Jul 2014 10:44:49 GMT"
}
] | 2014-07-02T00:00:00 | [
[
"Hernández-Lobato",
"Daniel",
""
],
[
"Sharmanska",
"Viktoriia",
""
],
[
"Kersting",
"Kristian",
""
],
[
"Lampert",
"Christoph H.",
""
],
[
"Quadrianto",
"Novi",
""
]
] | TITLE: Mind the Nuisance: Gaussian Process Classification using Privileged
Noise
ABSTRACT: The learning with privileged information setting has recently attracted a lot
of attention within the machine learning community, as it allows the
integration of additional knowledge into the training process of a classifier,
even when this comes in the form of a data modality that is not available at
test time. Here, we show that privileged information can naturally be treated
as noise in the latent function of a Gaussian Process classifier (GPC). That
is, in contrast to the standard GPC setting, the latent function is not just a
nuisance but a feature: it becomes a natural measure of confidence about the
training data by modulating the slope of the GPC sigmoid likelihood function.
Extensive experiments on public datasets show that the proposed GPC method
using privileged noise, called GPC+, improves over a standard GPC without
privileged knowledge, and also over the current state-of-the-art SVM-based
method, SVM+. Moreover, we show that advanced neural networks and deep learning
methods can be compressed as privileged information.
| no_new_dataset | 0.950641 |
1210.3456 | Mingjun Zhong | Mingjun Zhong, Rong Liu, Bo Liu | Bayesian Analysis for miRNA and mRNA Interactions Using Expression Data | 21 pages, 11 figures, 8 tables | null | null | null | stat.AP cs.LG q-bio.GN q-bio.MN stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt, which play
important regulatory roles in post-transcriptional gene regulation by
inhibiting the translation of the mRNA into proteins or otherwise cleaving the
target mRNA. Inferring miRNA targets provides useful information for
understanding the roles of miRNA in biological processes that are potentially
involved in complex diseases. Statistical methodologies for point estimation,
such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm,
have been proposed to identify the interactions of miRNA and mRNA based on
sequence and expression data. In this paper, we propose using the Bayesian
LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the
interactions between miRNA and mRNA using expression data. The proposed
Bayesian methods explore the posterior distributions for those parameters
required to model the miRNA-mRNA interactions. These approaches can be used to
observe the inferred effects of the miRNAs on the targets by plotting the
posterior distributions of those parameters. For comparison purposes, the Least
Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO
(nLASSO), and the proposed Bayesian approaches were applied to four public
datasets. We concluded that nLASSO and nBLASSO perform best in terms of
sensitivity and specificity. Compared to the point estimate algorithms, which
only provide single estimates for those parameters, the Bayesian methods are
more meaningful and provide credible intervals, which take into account the
uncertainty of the inferred interactions of the miRNA and mRNA. Furthermore,
Bayesian methods naturally provide statistical significance to select
convincing inferred interactions, while point estimate algorithms require a
manually chosen threshold, which is less meaningful, to choose the possible
interactions.
| [
{
"version": "v1",
"created": "Fri, 12 Oct 2012 09:03:14 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jun 2014 10:16:51 GMT"
}
] | 2014-07-01T00:00:00 | [
[
"Zhong",
"Mingjun",
""
],
[
"Liu",
"Rong",
""
],
[
"Liu",
"Bo",
""
]
] | TITLE: Bayesian Analysis for miRNA and mRNA Interactions Using Expression Data
ABSTRACT: MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt, which play
important regulatory roles in post-transcriptional gene regulation by
inhibiting the translation of the mRNA into proteins or otherwise cleaving the
target mRNA. Inferring miRNA targets provides useful information for
understanding the roles of miRNA in biological processes that are potentially
involved in complex diseases. Statistical methodologies for point estimation,
such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm,
have been proposed to identify the interactions of miRNA and mRNA based on
sequence and expression data. In this paper, we propose using the Bayesian
LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the
interactions between miRNA and mRNA using expression data. The proposed
Bayesian methods explore the posterior distributions for those parameters
required to model the miRNA-mRNA interactions. These approaches can be used to
observe the inferred effects of the miRNAs on the targets by plotting the
posterior distributions of those parameters. For comparison purposes, the Least
Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO
(nLASSO), and the proposed Bayesian approaches were applied to four public
datasets. We concluded that nLASSO and nBLASSO perform best in terms of
sensitivity and specificity. Compared to the point estimate algorithms, which
only provide single estimates for those parameters, the Bayesian methods are
more meaningful and provide credible intervals, which take into account the
uncertainty of the inferred interactions of the miRNA and mRNA. Furthermore,
Bayesian methods naturally provide statistical significance to select
convincing inferred interactions, while point estimate algorithms require a
manually chosen threshold, which is less meaningful, to choose the possible
interactions.
| no_new_dataset | 0.953144 |
1406.7362 | KyungHyun Cho | Kyunghyun Cho and Yoshua Bengio | Exponentially Increasing the Capacity-to-Computation Ratio for
Conditional Computation in Deep Learning | null | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many state-of-the-art results obtained with deep networks are achieved with
the largest models that could be trained, and if more computation power was
available, we might be able to exploit much larger datasets in order to improve
generalization ability. Whereas in learning algorithms such as decision trees
the ratio of capacity (e.g., the number of parameters) to computation is very
favorable (up to exponentially more parameters than computation), the ratio is
essentially 1 for deep neural networks. Conditional computation has been
proposed as a way to increase the capacity of a deep neural network without
increasing the amount of computation required, by activating some parameters
and computation "on-demand", on a per-example basis. In this note, we propose a
novel parametrization of weight matrices in neural networks which has the
potential to increase up to exponentially the ratio of the number of parameters
to computation. The proposed approach is based on turning on some parameters
(weight matrices) when specific bit patterns of hidden unit activations are
obtained. In order to better control for the overfitting that might result, we
propose a parametrization that is tree-structured, where each node of the tree
corresponds to a prefix of a sequence of sign bits, or gating units, associated
with hidden units.
| [
{
"version": "v1",
"created": "Sat, 28 Jun 2014 06:45:51 GMT"
}
] | 2014-07-01T00:00:00 | [
[
"Cho",
"Kyunghyun",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Exponentially Increasing the Capacity-to-Computation Ratio for
Conditional Computation in Deep Learning
ABSTRACT: Many state-of-the-art results obtained with deep networks are achieved with
the largest models that could be trained, and if more computation power was
available, we might be able to exploit much larger datasets in order to improve
generalization ability. Whereas in learning algorithms such as decision trees
the ratio of capacity (e.g., the number of parameters) to computation is very
favorable (up to exponentially more parameters than computation), the ratio is
essentially 1 for deep neural networks. Conditional computation has been
proposed as a way to increase the capacity of a deep neural network without
increasing the amount of computation required, by activating some parameters
and computation "on-demand", on a per-example basis. In this note, we propose a
novel parametrization of weight matrices in neural networks which has the
potential to increase up to exponentially the ratio of the number of parameters
to computation. The proposed approach is based on turning on some parameters
(weight matrices) when specific bit patterns of hidden unit activations are
obtained. In order to better control for the overfitting that might result, we
propose a parametrization that is tree-structured, where each node of the tree
corresponds to a prefix of a sequence of sign bits, or gating units, associated
with hidden units.
| no_new_dataset | 0.95253 |
1406.7429 | Jonathan Katzman | Jonathan Katzman and Diane Duros | Comparison of SVM Optimization Techniques in the Primal | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the efficacy of different optimization techniques in a
primal formulation of a support vector machine (SVM). Three main techniques are
compared. The dataset used to compare all three techniques was the Sentiment
Analysis on Movie Reviews dataset, from kaggle.com.
| [
{
"version": "v1",
"created": "Sat, 28 Jun 2014 18:59:44 GMT"
}
] | 2014-07-01T00:00:00 | [
[
"Katzman",
"Jonathan",
""
],
[
"Duros",
"Diane",
""
]
] | TITLE: Comparison of SVM Optimization Techniques in the Primal
ABSTRACT: This paper examines the efficacy of different optimization techniques in a
primal formulation of a support vector machine (SVM). Three main techniques are
compared. The dataset used to compare all three techniques was the Sentiment
Analysis on Movie Reviews dataset, from kaggle.com.
| no_new_dataset | 0.954732 |
1406.7525 | Wenqi Huang | Wenqi Huang, Xiaojin Gong | Fusion Based Holistic Road Scene Understanding | 14 pages,11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of holistic road scene understanding based
on the integration of visual and range data. To achieve the grand goal, we
propose an approach that jointly tackles object-level image segmentation and
semantic region labeling within a conditional random field (CRF) framework.
Specifically, we first generate semantic object hypotheses by clustering 3D
points, learning their prior appearance models, and using a deep learning
method for reasoning their semantic categories. The learned priors, together
with spatial and geometric contexts, are incorporated in CRF. With this
formulation, visual and range data are fused thoroughly, and moreover, the
coupled segmentation and semantic labeling problem can be inferred via Graph
Cuts. Our approach is validated on the challenging KITTI dataset that contains
diverse complicated road scenarios. Both quantitative and qualitative
evaluations demonstrate its effectiveness.
| [
{
"version": "v1",
"created": "Sun, 29 Jun 2014 17:11:25 GMT"
}
] | 2014-07-01T00:00:00 | [
[
"Huang",
"Wenqi",
""
],
[
"Gong",
"Xiaojin",
""
]
] | TITLE: Fusion Based Holistic Road Scene Understanding
ABSTRACT: This paper addresses the problem of holistic road scene understanding based
on the integration of visual and range data. To achieve the grand goal, we
propose an approach that jointly tackles object-level image segmentation and
semantic region labeling within a conditional random field (CRF) framework.
Specifically, we first generate semantic object hypotheses by clustering 3D
points, learning their prior appearance models, and using a deep learning
method for reasoning their semantic categories. The learned priors, together
with spatial and geometric contexts, are incorporated in CRF. With this
formulation, visual and range data are fused thoroughly, and moreover, the
coupled segmentation and semantic labeling problem can be inferred via Graph
Cuts. Our approach is validated on the challenging KITTI dataset that contains
diverse complicated road scenarios. Both quantitative and qualitative
evaluations demonstrate its effectiveness.
| no_new_dataset | 0.951188 |
1406.7738 | Walter Lasecki | Sanmay Das, and Allen Lavoie | Home Is Where the Up-Votes Are: Behavior Changes in Response to Feedback
in Social Media | null | null | null | ci-2014/93 | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research shows that humans are heavily influenced by online social
interactions: We are more likely to perform actions which, in the past, have
led to positive social feedback. We introduce a quantitative model of behavior
changes in response to such feedback, drawing on inverse reinforcement learning
and studies of human game playing. The model allows us to make predictions,
particularly in the context of social media, about which community a user will
select, and to quantify how future selections change based on the feedback a
user receives. We show that our model predicts real-world changes in behavior
on a dataset gathered from reddit. We also explore how this relatively simple
model of individual behavior can lead to complex collective dynamics when there
is a population of users, each individual learning in response to feedback and
in turn providing feedback to others.
| [
{
"version": "v1",
"created": "Mon, 30 Jun 2014 13:51:23 GMT"
}
] | 2014-07-01T00:00:00 | [
[
"Das",
"Sanmay",
""
],
[
"Lavoie",
"Allen",
""
]
] | TITLE: Home Is Where the Up-Votes Are: Behavior Changes in Response to Feedback
in Social Media
ABSTRACT: Recent research shows that humans are heavily influenced by online social
interactions: We are more likely to perform actions which, in the past, have
led to positive social feedback. We introduce a quantitative model of behavior
changes in response to such feedback, drawing on inverse reinforcement learning
and studies of human game playing. The model allows us to make predictions,
particularly in the context of social media, about which community a user will
select, and to quantify how future selections change based on the feedback a
user receives. We show that our model predicts real-world changes in behavior
on a dataset gathered from reddit. We also explore how this relatively simple
model of individual behavior can lead to complex collective dynamics when there
is a population of users, each individual learning in response to feedback and
in turn providing feedback to others.
| no_new_dataset | 0.943919 |
1406.7799 | Pedram Mohammadi Mr. | Pedram Mohammadi, Abbas Ebrahimi-Moghadam, and Shahram Shirani | Subjective and Objective Quality Assessment of Image: A Survey | 50 pages, 12 figures, and 3 Tables. This work has been submitted to
Elsevier Journal of Visual Communication and Image Representation | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing demand for image-based applications, the efficient and
reliable evaluation of image quality has increased in importance. Measuring the
image quality is of fundamental importance for numerous image processing
applications, where the goal of image quality assessment (IQA) methods is to
automatically evaluate the quality of images in agreement with human quality
judgments. Numerous IQA methods have been proposed over the past years to
fulfill this goal. In this paper, a survey of the quality assessment methods
for conventional image signals, as well as the newly emerged ones, which
includes the high dynamic range (HDR) and 3-D images, is presented. A
comprehensive explanation of the subjective and objective IQA and their
classification is provided. Six widely used subjective quality datasets, and
performance measures are reviewed. Emphasis is given to the full-reference
image quality assessment (FR-IQA) methods, and 9 often-used quality measures
(including mean squared error (MSE), structural similarity index (SSIM),
multi-scale structural similarity index (MS-SSIM), visual information fidelity
(VIF), most apparent distortion (MAD), feature similarity measure (FSIM),
feature similarity measure for color images (FSIMC), dynamic range independent
measure (DRIM), and tone-mapped images quality index (TMQI)) are carefully
described, and their performance and computation time on four subjective
quality datasets are evaluated. Furthermore, a brief introduction to 3-D IQA is
provided and the issues related to this area of research are reviewed.
| [
{
"version": "v1",
"created": "Mon, 30 Jun 2014 16:25:00 GMT"
}
] | 2014-07-01T00:00:00 | [
[
"Mohammadi",
"Pedram",
""
],
[
"Ebrahimi-Moghadam",
"Abbas",
""
],
[
"Shirani",
"Shahram",
""
]
] | TITLE: Subjective and Objective Quality Assessment of Image: A Survey
ABSTRACT: With the increasing demand for image-based applications, the efficient and
reliable evaluation of image quality has increased in importance. Measuring the
image quality is of fundamental importance for numerous image processing
applications, where the goal of image quality assessment (IQA) methods is to
automatically evaluate the quality of images in agreement with human quality
judgments. Numerous IQA methods have been proposed over the past years to
fulfill this goal. In this paper, a survey of the quality assessment methods
for conventional image signals, as well as the newly emerged ones, which
includes the high dynamic range (HDR) and 3-D images, is presented. A
comprehensive explanation of the subjective and objective IQA and their
classification is provided. Six widely used subjective quality datasets, and
performance measures are reviewed. Emphasis is given to the full-reference
image quality assessment (FR-IQA) methods, and 9 often-used quality measures
(including mean squared error (MSE), structural similarity index (SSIM),
multi-scale structural similarity index (MS-SSIM), visual information fidelity
(VIF), most apparent distortion (MAD), feature similarity measure (FSIM),
feature similarity measure for color images (FSIMC), dynamic range independent
measure (DRIM), and tone-mapped images quality index (TMQI)) are carefully
described, and their performance and computation time on four subjective
quality datasets are evaluated. Furthermore, a brief introduction to 3-D IQA is
provided and the issues related to this area of research are reviewed.
| no_new_dataset | 0.943556 |
1406.7075 | \"Omer Faruk Ertu\u{g}rul | Omer Faruk Ertugrul | Adaptive texture energy measure method | null | International Journal of Intelligent Information Systems. Vol. 3,
No. 2, 2014, pp. 13-18 | 10.11648/j.ijiis.20140302.11 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in image quality, data storage, and computational
capacity have heightened the need for texture analysis in image processing. To
date various methods have been developed and introduced for assessing textures
in images. One of the most popular texture analysis methods is the Texture
Energy Measure (TEM) and it has been used for detecting edges, levels, waves,
spots and ripples by employing predefined TEM masks to images. Despite several
success- ful studies, TEM has a number of serious weaknesses in use. The major
drawback is; the masks are predefined therefore they cannot be adapted to
image. A new method, Adaptive Texture Energy Measure Method (aTEM), was offered
to over- come this disadvantage of TEM by using adaptive masks by adjusting the
contrast, sharpening and orientation angle of the mask. To assess the
applicability of aTEM, it is compared with TEM. The accuracy of the
classification of butterfly, flower seed and Brodatz datasets are 0.08, 0.3292
and 0.3343, respectively by TEM and 0.0053, 0.2417 and 0.3153, respectively by
aTEM. The results of this study indicate that aTEM is a successful method for
texture analysis.
| [
{
"version": "v1",
"created": "Fri, 27 Jun 2014 06:00:17 GMT"
}
] | 2014-06-30T00:00:00 | [
[
"Ertugrul",
"Omer Faruk",
""
]
] | TITLE: Adaptive texture energy measure method
ABSTRACT: Recent developments in image quality, data storage, and computational
capacity have heightened the need for texture analysis in image processing. To
date various methods have been developed and introduced for assessing textures
in images. One of the most popular texture analysis methods is the Texture
Energy Measure (TEM) and it has been used for detecting edges, levels, waves,
spots and ripples by employing predefined TEM masks to images. Despite several
successful studies, TEM has a number of serious weaknesses in use. The major
drawback is that the masks are predefined and therefore cannot be adapted to the
image. A new method, the Adaptive Texture Energy Measure Method (aTEM), was offered
to overcome this disadvantage of TEM by using adaptive masks, adjusting the
contrast, sharpening and orientation angle of the mask. To assess the
applicability of aTEM, it is compared with TEM. The accuracy of the
classification of butterfly, flower seed and Brodatz datasets are 0.08, 0.3292
and 0.3343, respectively by TEM and 0.0053, 0.2417 and 0.3153, respectively by
aTEM. The results of this study indicate that aTEM is a successful method for
texture analysis.
| no_new_dataset | 0.950411 |
1309.3132 | Xiao-Bo Jin | Xiao-Bo Jin, Guang-Gang Geng, Dexian Zhang | Combination of Multiple Bipartite Ranking for Web Content Quality
Evaluation | 17 pages, 8 figures, 2 tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web content quality estimation is crucial to various web content processing
applications. Our previous work applied Bagging + C4.5 to achieve the best
results on the ECML/PKDD Discovery Challenge 2010, which is the combination of
many point-wise ranking models. In this paper, we combine multiple pair-wise
bipartite ranking learners to solve the multi-partite ranking problem for
web quality estimation. In the encoding stage, we present the ternary encoding and
the binary coding, extending each rank value to $L - 1$ ($L$ is the number of
distinct ranking values). For the decoding, we discuss the combination of
multiple ranking results from multiple bipartite ranking models with the
predefined weighting and the adaptive weighting. The experiments on ECML/PKDD
2010 Discovery Challenge datasets show that \textit{binary coding} +
\textit{predefined weighting} yields the highest performance in all four
combinations and furthermore it is better than the best results reported in
ECML/PKDD 2010 Discovery Challenge competition.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2013 12:15:51 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jun 2014 03:01:13 GMT"
}
] | 2014-06-27T00:00:00 | [
[
"Jin",
"Xiao-Bo",
""
],
[
"Geng",
"Guang-Gang",
""
],
[
"Zhang",
"Dexian",
""
]
] | TITLE: Combination of Multiple Bipartite Ranking for Web Content Quality
Evaluation
ABSTRACT: Web content quality estimation is crucial to various web content processing
applications. Our previous work applied Bagging + C4.5 to achive the best
results on the ECML/PKDD Discovery Challenge 2010, which is the comibination of
many point-wise rankinig models. In this paper, we combine multiple pair-wise
bipartite ranking learner to solve the multi-partite ranking problems for the
web quality estimation. In encoding stage, we present the ternary encoding and
the binary coding extending each rank value to $L - 1$ (L is the number of the
different ranking value). For the decoding, we discuss the combination of
multiple ranking results from multiple bipartite ranking models with the
predefined weighting and the adaptive weighting. The experiments on ECML/PKDD
2010 Discovery Challenge datasets show that \textit{binary coding} +
\textit{predefined weighting} yields the highest performance in all four
combinations and furthermore it is better than the best results reported in
ECML/PKDD 2010 Discovery Challenge competition.
| no_new_dataset | 0.951233 |
1405.1328 | Emiliano De Cristofaro | Igor Bilogrevic, Julien Freudiger, Emiliano De Cristofaro, and Ersin
Uzun | What's the Gist? Privacy-Preserving Aggregation of User Profiles | To appear in the Proceedings of ESORICS 2014 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past few years, online service providers have started gathering
increasing amounts of personal information to build user profiles and monetize
them with advertisers and data brokers. Users have little control of what
information is processed and are often left with an all-or-nothing decision
between receiving free services or refusing to be profiled. This paper explores
an alternative approach where users only disclose an aggregate model -- the
"gist" -- of their data. We aim to preserve data utility and simultaneously
provide user privacy. We show that this approach can be efficiently supported
by letting users contribute encrypted and differentially-private data to an
aggregator. The aggregator combines encrypted contributions and can only
extract an aggregate model of the underlying data. We evaluate our framework on
a dataset of 100,000 U.S. users obtained from the U.S. Census Bureau and show
that (i) it provides accurate aggregates with as little as 100 users, (ii) it
generates revenue for both users and data brokers, and (iii) its overhead is
appreciably low.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 15:49:48 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jun 2014 21:01:41 GMT"
}
] | 2014-06-27T00:00:00 | [
[
"Bilogrevic",
"Igor",
""
],
[
"Freudiger",
"Julien",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Uzun",
"Ersin",
""
]
] | TITLE: What's the Gist? Privacy-Preserving Aggregation of User Profiles
ABSTRACT: Over the past few years, online service providers have started gathering
increasing amounts of personal information to build user profiles and monetize
them with advertisers and data brokers. Users have little control of what
information is processed and are often left with an all-or-nothing decision
between receiving free services or refusing to be profiled. This paper explores
an alternative approach where users only disclose an aggregate model -- the
"gist" -- of their data. We aim to preserve data utility and simultaneously
provide user privacy. We show that this approach can be efficiently supported
by letting users contribute encrypted and differentially-private data to an
aggregator. The aggregator combines encrypted contributions and can only
extract an aggregate model of the underlying data. We evaluate our framework on
a dataset of 100,000 U.S. users obtained from the U.S. Census Bureau and show
that (i) it provides accurate aggregates with as little as 100 users, (ii) it
generates revenue for both users and data brokers, and (iii) its overhead is
appreciably low.
| no_new_dataset | 0.94545 |
1406.6832 | Michel Plantie | Michel Crampes and Michel Planti\'e | Overlapping Community Detection Optimization and Nash Equilibrium | Submitted to KDD | null | null | null | cs.SI physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection using both graphs and social networks is the focus of
many algorithms. Recent methods aimed at optimizing the so-called modularity
function proceed by maximizing relations within communities while minimizing
inter-community relations.
However, given the NP-completeness of the problem, these algorithms are
heuristics that do not guarantee an optimum. In this paper, we introduce a new
algorithm along with a function that takes an approximate solution and modifies
it in order to reach an optimum. This reassignment function is considered a
'potential function' and becomes a necessary condition to asserting that the
computed optimum is indeed a Nash Equilibrium. We also use this function to
simultaneously show partitioning and overlapping communities, two detection and
visualization modes of great value in revealing interesting features of a
social network. Our approach is successfully illustrated through several
experiments on either real unipartite, multipartite or directed graphs of
medium and large-sized datasets.
| [
{
"version": "v1",
"created": "Thu, 26 Jun 2014 10:28:36 GMT"
}
] | 2014-06-27T00:00:00 | [
[
"Crampes",
"Michel",
""
],
[
"Plantié",
"Michel",
""
]
] | TITLE: Overlapping Community Detection Optimization and Nash Equilibrium
ABSTRACT: Community detection using both graphs and social networks is the focus of
many algorithms. Recent methods aimed at optimizing the so-called modularity
function proceed by maximizing relations within communities while minimizing
inter-community relations.
However, given the NP-completeness of the problem, these algorithms are
heuristics that do not guarantee an optimum. In this paper, we introduce a new
algorithm along with a function that takes an approximate solution and modifies
it in order to reach an optimum. This reassignment function is considered a
'potential function' and becomes a necessary condition for asserting that the
computed optimum is indeed a Nash Equilibrium. We also use this function to
simultaneously show partitioning and overlapping communities, two detection and
visualization modes of great value in revealing interesting features of a
social network. Our approach is successfully illustrated through several
experiments on either real unipartite, multipartite or directed graphs of
medium and large-sized datasets.
| no_new_dataset | 0.947624 |
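Since the abstract contrasts heuristic modularity maximisation with the NP-complete exact problem, a minimal sketch of the standard heuristic baseline may help. It uses networkx's greedy modularity communities on a toy graph; the karate-club graph is an assumption, and this is not the paper's reassignment/potential-function algorithm.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy graph; the paper's datasets are much larger unipartite/multipartite graphs.
G = nx.karate_club_graph()

# Heuristic modularity maximisation (no optimality guarantee, as noted above).
communities = list(greedy_modularity_communities(G))
Q = modularity(G, communities)

print("number of communities:", len(communities))
print("modularity Q = %.3f" % Q)
```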
1406.6947 | Ping Luo | Zhenyao Zhu and Ping Luo and Xiaogang Wang and Xiaoou Tang | Deep Learning Multi-View Representation for Face Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various factors, such as identities, views (poses), and illuminations, are
coupled in face images. Disentangling the identity and view representations is
a major challenge in face recognition. Existing face recognition systems either
use handcrafted features or learn features discriminatively to improve
recognition accuracy. This is different from the behavior of the human brain.
Intriguingly, even without accessing 3D data, humans can not only recognize face
identity, but can also imagine face images of a person under different
viewpoints given a single 2D image, making face perception in the brain robust
to view changes. In this sense, the human brain has learned and encoded 3D face
models from 2D images. To take into account this instinct, this paper proposes
a novel deep neural net, named multi-view perceptron (MVP), which can untangle
the identity and view features, and infer a full spectrum of multi-view images
in the meanwhile, given a single 2D face image. The identity features of MVP
achieve superior performance on the MultiPIE dataset. MVP is also able to
interpolate and predict images under viewpoints that are unobserved in the
training data.
| [
{
"version": "v1",
"created": "Thu, 26 Jun 2014 17:09:25 GMT"
}
] | 2014-06-27T00:00:00 | [
[
"Zhu",
"Zhenyao",
""
],
[
"Luo",
"Ping",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Deep Learning Multi-View Representation for Face Recognition
ABSTRACT: Various factors, such as identities, views (poses), and illuminations, are
coupled in face images. Disentangling the identity and view representations is
a major challenge in face recognition. Existing face recognition systems either
use handcrafted features or learn features discriminatively to improve
recognition accuracy. This is different from the behavior of the human brain.
Intriguingly, even without accessing 3D data, humans can not only recognize face
identity, but can also imagine face images of a person under different
viewpoints given a single 2D image, making face perception in the brain robust
to view changes. In this sense, the human brain has learned and encoded 3D face
models from 2D images. To take into account this instinct, this paper proposes
a novel deep neural net, named multi-view perceptron (MVP), which can untangle
the identity and view features, and infer a full spectrum of multi-view images
in the meanwhile, given a single 2D face image. The identity features of MVP
achieve superior performance on the MultiPIE dataset. MVP is also able to
interpolate and predict images under viewpoints that are unobserved in the
training data.
| no_new_dataset | 0.943348 |
1406.6507 | Hyun Oh Song | Hyun Oh Song, Yong Jae Lee, Stefanie Jegelka, Trevor Darrell | Weakly-supervised Discovery of Visual Pattern Configurations | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing prominence of weakly labeled data nurtures a growing demand
for object detection methods that can cope with minimal supervision. We propose
an approach that automatically identifies discriminative configurations of
visual patterns that are characteristic of a given object class. We formulate
the problem as a constrained submodular optimization problem and demonstrate
the benefits of the discovered configurations in remedying mislocalizations and
finding informative positive and negative training examples. Together, these
lead to state-of-the-art weakly-supervised detection results on the challenging
PASCAL VOC dataset.
| [
{
"version": "v1",
"created": "Wed, 25 Jun 2014 09:35:40 GMT"
}
] | 2014-06-26T00:00:00 | [
[
"Song",
"Hyun Oh",
""
],
[
"Lee",
"Yong Jae",
""
],
[
"Jegelka",
"Stefanie",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Weakly-supervised Discovery of Visual Pattern Configurations
ABSTRACT: The increasing prominence of weakly labeled data nurtures a growing demand
for object detection methods that can cope with minimal supervision. We propose
an approach that automatically identifies discriminative configurations of
visual patterns that are characteristic of a given object class. We formulate
the problem as a constrained submodular optimization problem and demonstrate
the benefits of the discovered configurations in remedying mislocalizations and
finding informative positive and negative training examples. Together, these
lead to state-of-the-art weakly-supervised detection results on the challenging
PASCAL VOC dataset.
| no_new_dataset | 0.951639 |
1406.6568 | Victor Miller | V. A. Miller, S. Erlien, J. Piersol | Support vector machine classification of dimensionally reduced
structural MRI images for dementia | technical note | null | null | null | cs.CV cs.LG physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We classify very-mild to moderate dementia in patients (CDR ranging from 0 to
2) using a support vector machine classifier acting on a dimensionally reduced
feature set derived from MRI brain scans of the 416 subjects available in the
OASIS-Brains dataset. We use image segmentation and principal component
analysis to reduce the dimensionality of the data. Our resulting feature set
contains 11 features for each subject. Performance of the classifiers is
evaluated using 10-fold cross-validation. Using linear and (gaussian) kernels,
we obtain a training classification accuracy of 86.4% (90.1%), test accuracy of
85.0% (85.7%), test precision of 68.7% (68.5%), test recall of 68.0% (74.0%),
and test Matthews correlation coefficient of 0.594 (0.616).
| [
{
"version": "v1",
"created": "Wed, 25 Jun 2014 13:50:18 GMT"
}
] | 2014-06-26T00:00:00 | [
[
"Miller",
"V. A.",
""
],
[
"Erlien",
"S.",
""
],
[
"Piersol",
"J.",
""
]
] | TITLE: Support vector machine classification of dimensionally reduced
structural MRI images for dementia
ABSTRACT: We classify very-mild to moderate dementia in patients (CDR ranging from 0 to
2) using a support vector machine classifier acting on a dimensionally reduced
feature set derived from MRI brain scans of the 416 subjects available in the
OASIS-Brains dataset. We use image segmentation and principal component
analysis to reduce the dimensionality of the data. Our resulting feature set
contains 11 features for each subject. Performance of the classifiers is
evaluated using 10-fold cross-validation. Using linear and (gaussian) kernels,
we obtain a training classification accuracy of 86.4% (90.1%), test accuracy of
85.0% (85.7%), test precision of 68.7% (68.5%), test recall of 68.0% (74.0%),
and test Matthews correlation coefficient of 0.594 (0.616).
| no_new_dataset | 0.951594 |
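The pipeline described above (dimensionality reduction followed by SVM classification with 10-fold cross-validation) can be sketched as follows. Synthetic features stand in for the OASIS-Brains data; the 11-component PCA and the linear/Gaussian kernels mirror the abstract, but everything else is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 416-subject feature matrix (no MRI data here).
X, y = make_classification(n_samples=416, n_features=200, n_informative=20,
                           random_state=0)

# Reduce to a small feature set (the paper uses 11 features), then classify.
for kernel in ("linear", "rbf"):
    clf = make_pipeline(StandardScaler(), PCA(n_components=11),
                        SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=10)
    print(kernel, "10-fold accuracy: %.3f" % scores.mean())
```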
1406.6651 | Ishanu Chattopadhyay | Ishanu Chattopadhyay | Causality Networks | 22 pages, 8 figures | null | null | null | cs.LG cs.IT math.IT q-fin.ST stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While correlation measures are used to discern statistical relationships
between observed variables in almost all branches of data-driven scientific
inquiry, what we are really interested in is the existence of causal
dependence. Designing an efficient causality test, that may be carried out in
the absence of restrictive pre-suppositions on the underlying dynamical
structure of the data at hand, is non-trivial. Nevertheless, the ability to
computationally infer statistical prima facie evidence of causal dependence may
yield a far more discriminative tool for data analysis compared to the
calculation of simple correlations. In the present work, we present a new
non-parametric test of Granger causality for quantized or symbolic data streams
generated by ergodic stationary sources. In contrast to state-of-the-art binary
tests, our approach makes precise and computes the degree of causal dependence
between data streams, without making any restrictive assumptions, linearity or
otherwise. Additionally, without any a priori imposition of specific dynamical
structure, we infer explicit generative models of causal cross-dependence,
which may be then used for prediction. These explicit models are represented as
generalized probabilistic automata, referred to as crossed automata, and are shown
to be sufficient to capture a fairly general class of causal dependence. The
proposed algorithms are computationally efficient in the PAC sense; $i.e.$, we
find good models of cross-dependence with high probability, with polynomial
run-times and sample complexities. The theoretical results are applied to
weekly search-frequency data from Google Trends API for a chosen set of
socially "charged" keywords. The causality network inferred from this dataset
reveals, quite expectedly, the causal importance of certain keywords. It is
also illustrated that correlation analysis fails to gather such insight.
| [
{
"version": "v1",
"created": "Wed, 25 Jun 2014 17:46:32 GMT"
}
] | 2014-06-26T00:00:00 | [
[
"Chattopadhyay",
"Ishanu",
""
]
] | TITLE: Causality Networks
ABSTRACT: While correlation measures are used to discern statistical relationships
between observed variables in almost all branches of data-driven scientific
inquiry, what we are really interested in is the existence of causal
dependence. Designing an efficient causality test, that may be carried out in
the absence of restrictive pre-suppositions on the underlying dynamical
structure of the data at hand, is non-trivial. Nevertheless, the ability to
computationally infer statistical prima facie evidence of causal dependence may
yield a far more discriminative tool for data analysis compared to the
calculation of simple correlations. In the present work, we present a new
non-parametric test of Granger causality for quantized or symbolic data streams
generated by ergodic stationary sources. In contrast to state-of-the-art binary
tests, our approach makes precise and computes the degree of causal dependence
between data streams, without making any restrictive assumptions, linearity or
otherwise. Additionally, without any a priori imposition of specific dynamical
structure, we infer explicit generative models of causal cross-dependence,
which may be then used for prediction. These explicit models are represented as
generalized probabilistic automata, referred to as crossed automata, and are shown
to be sufficient to capture a fairly general class of causal dependence. The
proposed algorithms are computationally efficient in the PAC sense; $i.e.$, we
find good models of cross-dependence with high probability, with polynomial
run-times and sample complexities. The theoretical results are applied to
weekly search-frequency data from Google Trends API for a chosen set of
socially "charged" keywords. The causality network inferred from this dataset
reveals, quite expectedly, the causal importance of certain keywords. It is
also illustrated that correlation analysis fails to gather such insight.
| no_new_dataset | 0.939081 |
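For contrast with the non-parametric symbolic test proposed above, a minimal sketch of the classical linear Granger-style check (the kind of restricted, linearity-assuming test the abstract argues against) is shown below; the lag order and the synthetic series are assumptions.

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Classical linear Granger-style score: relative reduction in the residual
    variance of predicting y when lagged x is added to lagged y.
    (Shown only as a baseline; the paper's test is non-parametric.)"""
    n = len(y)
    rows = range(lag, n)
    Y = np.array([y[t] for t in rows])
    A_r = np.array([[1.0] + [y[t - k] for k in range(1, lag + 1)] for t in rows])
    A_f = np.array([[1.0] + [y[t - k] for k in range(1, lag + 1)]
                          + [x[t - k] for k in range(1, lag + 1)] for t in rows])
    rss_r = np.sum((Y - A_r @ np.linalg.lstsq(A_r, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - A_f @ np.linalg.lstsq(A_f, Y, rcond=None)[0]) ** 2)
    return 1.0 - rss_f / rss_r  # clearly > 0 suggests x helps predict y

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.5 * rng.normal(size=500)  # y lags x by one step
print("gain(x -> y) = %.3f" % granger_gain(x, y))
print("gain(y -> x) = %.3f" % granger_gain(y, x))
```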
1106.2229 | Fionn Murtagh | Pedro Contreras and Fionn Murtagh | Fast, Linear Time Hierarchical Clustering using the Baire Metric | 27 pages, 6 tables, 10 figures | Journal of Classification, July 2012, Volume 29, Issue 2, pp
118-143 | 10.1007/s00357-012-9106-3 | null | stat.ML cs.IR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Baire metric induces an ultrametric on a dataset and is of linear
computational complexity, contrasted with the standard quadratic time
agglomerative hierarchical clustering algorithm. In this work we evaluate
empirically this new approach to hierarchical clustering. We compare
hierarchical clustering based on the Baire metric with (i) agglomerative
hierarchical clustering, in terms of algorithm properties; (ii) generalized
ultrametrics, in terms of definition; and (iii) fast clustering through k-means
partitioning, in terms of quality of results. For the latter, we carry out an
in-depth astronomical study. We apply the Baire distance to spectrometric and
photometric redshifts from the Sloan Digital Sky Survey using, in this work,
about half a million astronomical objects. We want to know how well the (more
costly to determine) spectrometric redshifts can predict the (more easily
obtained) photometric redshifts, i.e. we seek to regress the spectrometric on
the photometric redshifts, and we use clusterwise regression for this.
| [
{
"version": "v1",
"created": "Sat, 11 Jun 2011 12:05:43 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Contreras",
"Pedro",
""
],
[
"Murtagh",
"Fionn",
""
]
] | TITLE: Fast, Linear Time Hierarchical Clustering using the Baire Metric
ABSTRACT: The Baire metric induces an ultrametric on a dataset and is of linear
computational complexity, contrasted with the standard quadratic time
agglomerative hierarchical clustering algorithm. In this work we evaluate
empirically this new approach to hierarchical clustering. We compare
hierarchical clustering based on the Baire metric with (i) agglomerative
hierarchical clustering, in terms of algorithm properties; (ii) generalized
ultrametrics, in terms of definition; and (iii) fast clustering through k-means
partitioning, in terms of quality of results. For the latter, we carry out an
in-depth astronomical study. We apply the Baire distance to spectrometric and
photometric redshifts from the Sloan Digital Sky Survey using, in this work,
about half a million astronomical objects. We want to know how well the (more
costly to determine) spectrometric redshifts can predict the (more easily
obtained) photometric redshifts, i.e. we seek to regress the spectrometric on
the photometric redshifts, and we use clusterwise regression for this.
| no_new_dataset | 0.952131 |
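A minimal sketch of the Baire (longest-common-prefix) distance and of the digit-bucket grouping that makes the induced hierarchy computable in a single linear pass. The base, the precision, and the assumption that values lie in [0, 1) are illustrative choices, not the paper's exact setup.

```python
from collections import defaultdict

def baire_distance(x, y, precision=6, base=10):
    """Baire distance between two values in [0, 1): base**(-k), where k is
    the length of the longest common prefix of their digit expansions."""
    sx = f"{x:.{precision}f}"[2:]          # digits after "0."
    sy = f"{y:.{precision}f}"[2:]
    k = 0
    while k < precision and sx[k] == sy[k]:
        k += 1
    return base ** (-k)

def baire_buckets(values, depth=2, precision=6):
    """Group values by their first `depth` digits: one linear pass gives the
    clusters at that level of the Baire hierarchy."""
    buckets = defaultdict(list)
    for v in values:
        buckets[f"{v:.{precision}f}"[2:2 + depth]].append(v)
    return buckets

print(baire_distance(0.442156, 0.441835))   # shared prefix "44" -> 0.01
print(baire_buckets([0.123, 0.124, 0.911, 0.915], depth=2))
```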
1312.3913 | Xi He | Xi He and Ashwin Machanavajjhala and Bolin Ding | Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies | Full version of the paper at SIGMOD'14 Snowbird, Utah USA | null | 10.1145/2588555.2588581 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy definitions provide ways for trading-off the privacy of individuals
in a statistical database for the utility of downstream analysis of the data.
In this paper, we present Blowfish, a class of privacy definitions inspired by
the Pufferfish framework, that provides a rich interface for this trade-off. In
particular, we allow data publishers to extend differential privacy using a
policy, which specifies (a) secrets, or information that must be kept secret,
and (b) constraints that may be known about the data. While the secret
specification allows increased utility by lessening protection for certain
individual properties, the constraint specification provides added protection
against an adversary who knows correlations in the data (arising from
constraints). We formalize policies and present novel algorithms that can
handle general specifications of sensitive information and certain count
constraints. We show that there are reasonable policies under which our privacy
mechanisms for k-means clustering, histograms and range queries introduce
significantly less noise than their differentially private counterparts. We
quantify the privacy-utility trade-offs for various policies analytically and
empirically on real datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2013 19:23:12 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Dec 2013 06:49:22 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Feb 2014 15:55:15 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2014 16:13:26 GMT"
},
{
"version": "v5",
"created": "Mon, 23 Jun 2014 05:09:12 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"He",
"Xi",
""
],
[
"Machanavajjhala",
"Ashwin",
""
],
[
"Ding",
"Bolin",
""
]
] | TITLE: Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies
ABSTRACT: Privacy definitions provide ways for trading-off the privacy of individuals
in a statistical database for the utility of downstream analysis of the data.
In this paper, we present Blowfish, a class of privacy definitions inspired by
the Pufferfish framework, that provides a rich interface for this trade-off. In
particular, we allow data publishers to extend differential privacy using a
policy, which specifies (a) secrets, or information that must be kept secret,
and (b) constraints that may be known about the data. While the secret
specification allows increased utility by lessening protection for certain
individual properties, the constraint specification provides added protection
against an adversary who knows correlations in the data (arising from
constraints). We formalize policies and present novel algorithms that can
handle general specifications of sensitive information and certain count
constraints. We show that there are reasonable policies under which our privacy
mechanisms for k-means clustering, histograms and range queries introduce
significantly less noise than their differentially private counterparts. We
quantify the privacy-utility trade-offs for various policies analytically and
empirically on real datasets.
| no_new_dataset | 0.947478 |
1405.1459 | Flavio Figueiredo | Flavio Figueiredo, Jussara M. Almeida, Yasuko Matsubara, Bruno
Ribeiro, Christos Faloutsos | Revisit Behavior in Social Media: The Phoenix-R Model and Discoveries | To appear on European Conference on Machine Learning and Principles
and Practice of Knowledge Discovery in Databases 2014 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How many listens will an artist receive on an online radio? How about plays on
a YouTube video? How many of these visits are new or returning users? Modeling
and mining popularity dynamics of social activity has important implications
for researchers, content creators and providers. We here investigate the effect
of revisits (successive visits from a single user) on content popularity. Using
four datasets of social activity, with up to tens of millions media objects
(e.g., YouTube videos, Twitter hashtags or LastFM artists), we show the effect
of revisits in the popularity evolution of such objects. Secondly, we propose
the Phoenix-R model which captures the popularity dynamics of individual
objects. Phoenix-R has the desired properties of being: (1) parsimonious, being
based on the minimum description length principle, and achieving lower root
mean squared error than state-of-the-art baselines; (2) applicable, the model
is effective for predicting future popularity values of objects.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 21:37:06 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Jun 2014 19:13:29 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Figueiredo",
"Flavio",
""
],
[
"Almeida",
"Jussara M.",
""
],
[
"Matsubara",
"Yasuko",
""
],
[
"Ribeiro",
"Bruno",
""
],
[
"Faloutsos",
"Christos",
""
]
] | TITLE: Revisit Behavior in Social Media: The Phoenix-R Model and Discoveries
ABSTRACT: How many listens will an artist receive on an online radio? How about plays on
a YouTube video? How many of these visits are new or returning users? Modeling
and mining popularity dynamics of social activity has important implications
for researchers, content creators and providers. We here investigate the effect
of revisits (successive visits from a single user) on content popularity. Using
four datasets of social activity, with up to tens of millions media objects
(e.g., YouTube videos, Twitter hashtags or LastFM artists), we show the effect
of revisits in the popularity evolution of such objects. Secondly, we propose
the Phoenix-R model which captures the popularity dynamics of individual
objects. Phoenix-R has the desired properties of being: (1) parsimonious, being
based on the minimum description length principle, and achieving lower root
mean squared error than state-of-the-art baselines; (2) applicable, the model
is effective for predicting future popularity values of objects.
| no_new_dataset | 0.950088 |
1406.5565 | Sam Keene | Kenneth D. Morton Jr., Peter Torrione, Leslie Collins, Sam Keene | An Open Source Pattern Recognition Toolbox for MATLAB | null | null | null | null | stat.ML cs.CV cs.LG cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pattern recognition and machine learning are becoming integral parts of
algorithms in a wide range of applications. Different algorithms and approaches
for machine learning include different tradeoffs between performance and
computation, so during algorithm development it is often necessary to explore a
variety of different approaches to a given task. A toolbox with a unified
framework across multiple pattern recognition techniques enables algorithm
developers to rapidly evaluate different choices prior to
deployment. MATLAB is a widely used environment for algorithm development and
prototyping, and although several MATLAB toolboxes for pattern recognition are
currently available these are either incomplete, expensive, or restrictively
licensed. In this work we describe a MATLAB toolbox for pattern recognition and
machine learning known as the PRT (Pattern Recognition Toolbox), licensed under
the permissive MIT license. The PRT includes many popular techniques for data
preprocessing, supervised learning, clustering, regression and feature
selection, as well as a methodology for combining these components using a
simple, uniform syntax. The resulting algorithms can be evaluated using
cross-validation and a variety of scoring metrics to ensure robust performance
when the algorithm is deployed. This paper presents an overview of the PRT as
well as an example of usage on Fisher's Iris dataset.
| [
{
"version": "v1",
"created": "Sat, 21 Jun 2014 01:50:54 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Morton",
"Kenneth D.",
"Jr."
],
[
"Torrione",
"Peter",
""
],
[
"Collins",
"Leslie",
""
],
[
"Keene",
"Sam",
""
]
] | TITLE: An Open Source Pattern Recognition Toolbox for MATLAB
ABSTRACT: Pattern recognition and machine learning are becoming integral parts of
algorithms in a wide range of applications. Different algorithms and approaches
for machine learning include different tradeoffs between performance and
computation, so during algorithm development it is often necessary to explore a
variety of different approaches to a given task. A toolbox with a unified
framework across multiple pattern recognition techniques enables algorithm
developers to rapidly evaluate different choices prior to
deployment. MATLAB is a widely used environment for algorithm development and
prototyping, and although several MATLAB toolboxes for pattern recognition are
currently available these are either incomplete, expensive, or restrictively
licensed. In this work we describe a MATLAB toolbox for pattern recognition and
machine learning known as the PRT (Pattern Recognition Toolbox), licensed under
the permissive MIT license. The PRT includes many popular techniques for data
preprocessing, supervised learning, clustering, regression and feature
selection, as well as a methodology for combining these components using a
simple, uniform syntax. The resulting algorithms can be evaluated using
cross-validation and a variety of scoring metrics to ensure robust performance
when the algorithm is deployed. This paper presents an overview of the PRT as
well as an example of usage on Fisher's Iris dataset.
| no_new_dataset | 0.946794 |
1406.5617 | S. K. Sahay | R.K. Roul, O. R. Devanand and S.K. Sahay | Web Document Clustering and Ranking using Tf-Idf based Apriori Approach | 5 Pages | IJCA Proceedings on ICACEA, No. 2, p. 34 (2014) | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamic web has grown exponentially over the past few years, and thousands
of documents related to a single subject are now available to users. Most web
documents are unstructured and poorly organized, which makes it difficult for
users to find relevant documents. A more useful and efficient mechanism combines
clustering with ranking: clustering groups similar documents together, and
ranking is applied within each cluster so that the top documents appear first.
Besides the particular clustering algorithm, the term weighting function applied
to the selected features that represent a web document is a main aspect of the
clustering task. With this in mind, we propose a new mechanism called Tf-Idf
based Apriori for clustering web documents. We then rank the documents in each
cluster using Tf-Idf and a document similarity factor based on the user query.
This approach helps users find all relevant documents in one place and lets them
restrict their search to the top documents of their choice. For the experiments,
we use the Classic3 and Classic4 datasets from Cornell University, which contain
more than 10,000 documents, and carry out our work with the gensim toolkit.
Compared with the traditional Apriori algorithm, our approach gives better
results for higher minimum support, and our ranking mechanism achieves a good
F-measure of 78%.
| [
{
"version": "v1",
"created": "Sat, 21 Jun 2014 14:38:21 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Roul",
"R. K.",
""
],
[
"Devanand",
"O. R.",
""
],
[
"Sahay",
"S. K.",
""
]
] | TITLE: Web Document Clustering and Ranking using Tf-Idf based Apriori Approach
ABSTRACT: The dynamic web has grown exponentially over the past few years, and thousands
of documents related to a single subject are now available to users. Most web
documents are unstructured and poorly organized, which makes it difficult for
users to find relevant documents. A more useful and efficient mechanism combines
clustering with ranking: clustering groups similar documents together, and
ranking is applied within each cluster so that the top documents appear first.
Besides the particular clustering algorithm, the term weighting function applied
to the selected features that represent a web document is a main aspect of the
clustering task. With this in mind, we propose a new mechanism called Tf-Idf
based Apriori for clustering web documents. We then rank the documents in each
cluster using Tf-Idf and a document similarity factor based on the user query.
This approach helps users find all relevant documents in one place and lets them
restrict their search to the top documents of their choice. For the experiments,
we use the Classic3 and Classic4 datasets from Cornell University, which contain
more than 10,000 documents, and carry out our work with the gensim toolkit.
Compared with the traditional Apriori algorithm, our approach gives better
results for higher minimum support, and our ranking mechanism achieves a good
F-measure of 78%.
| no_new_dataset | 0.950457 |
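A small sketch of the clustering-plus-ranking idea described above: Tf-Idf vectors, a grouping step, and within-cluster cosine-similarity ranking against a query. It uses scikit-learn and k-means purely for illustration; the paper's grouping is Apriori-based and its experiments use gensim, so the toy corpus, query, and cluster count are all assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "neural networks for image classification",
    "deep learning improves image recognition",
    "stock market prediction with time series",
    "financial time series forecasting models",
]
query = ["image recognition with deep networks"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Group similar documents (stand-in for the Tf-Idf based Apriori grouping).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Rank documents inside each cluster by similarity to the query.
sims = cosine_similarity(vec.transform(query), X).ravel()
for c in sorted(set(labels)):
    ranked = sorted((i for i in range(len(docs)) if labels[i] == c),
                    key=lambda i: -sims[i])
    print("cluster", c, [(docs[i], round(sims[i], 2)) for i in ranked])
```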
1406.5653 | Rushil Anirudh | Rushil Anirudh and Pavan Turaga | Interactively Test Driving an Object Detector: Estimating Performance on
Unlabeled Data | Published at Winter Conference on Applications of Computer Vision,
2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of `test-driving' a detector, i.e.
allowing a human user to get a quick sense of how well the detector generalizes
to their specific requirement. To this end, we present the first system that
estimates detector performance interactively without extensive ground truthing
using a human in the loop. We approach this as a problem of estimating
proportions and show that it is possible to make accurate inferences on the
proportion of classes or groups within a large data collection by observing
only $5-10\%$ of samples from the data. In estimating the false detections (for
precision), the samples are chosen carefully such that the overall
characteristics of the data collection are preserved. Next, inspired by its use
in estimating disease propagation we apply pooled testing approaches to
estimate missed detections (for recall) from the dataset. The estimates thus
obtained are close to the ones obtained using ground truth, thus reducing the
need for extensive labeling which is expensive and time consuming.
| [
{
"version": "v1",
"created": "Sat, 21 Jun 2014 21:37:30 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Anirudh",
"Rushil",
""
],
[
"Turaga",
"Pavan",
""
]
] | TITLE: Interactively Test Driving an Object Detector: Estimating Performance on
Unlabeled Data
ABSTRACT: In this paper, we study the problem of `test-driving' a detector, i.e.
allowing a human user to get a quick sense of how well the detector generalizes
to their specific requirement. To this end, we present the first system that
estimates detector performance interactively without extensive ground truthing
using a human in the loop. We approach this as a problem of estimating
proportions and show that it is possible to make accurate inferences on the
proportion of classes or groups within a large data collection by observing
only $5-10\%$ of samples from the data. In estimating the false detections (for
precision), the samples are chosen carefully such that the overall
characteristics of the data collection are preserved. Next, inspired by its use
in estimating disease propagation we apply pooled testing approaches to
estimate missed detections (for recall) from the dataset. The estimates thus
obtained are close to the ones obtained using ground truth, thus reducing the
need for extensive labeling which is expensive and time consuming.
| no_new_dataset | 0.944791 |
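The proportion-estimation idea above (label only a small random sample and infer the overall rate) can be sketched as follows. The population size, the true false-positive rate, the 5% sampling fraction, and the normal-approximation interval are assumptions, and the pooled-testing part used for recall is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pretend ground truth for 20,000 detections: 1 = false positive.
population = (rng.random(20000) < 0.12).astype(int)

# The user only labels a 5% random sample ("test drive").
sample = rng.choice(population, size=int(0.05 * len(population)), replace=False)

p_hat = sample.mean()
se = np.sqrt(p_hat * (1 - p_hat) / len(sample))
print("true false-positive rate : %.3f" % population.mean())
print("estimate from 5%% sample  : %.3f +/- %.3f" % (p_hat, 1.96 * se))
```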
1406.5752 | Tianyi Zhou | Tianyi Zhou and Jeff Bilmes and Carlos Guestrin | Divide-and-Conquer Learning by Anchoring a Conical Hull | 26 pages, long version, in updating | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We reduce a broad class of machine learning problems, usually addressed by EM
or sampling, to the problem of finding the $k$ extremal rays spanning the
conical hull of a data point set. These $k$ "anchors" lead to a global solution
and a more interpretable model that can even outperform EM and sampling on
generalization error. To find the $k$ anchors, we propose a novel
divide-and-conquer learning scheme "DCA" that distributes the problem to
$\mathcal O(k\log k)$ same-type sub-problems on different low-D random
hyperplanes, each of which can be solved by any solver. For the 2D sub-problem, we
present a non-iterative solver that only needs to compute an array of cosine
values and its max/min entries. DCA also provides a faster subroutine for other
methods to check whether a point is covered in a conical hull, which improves
algorithm design in multiple dimensions and brings significant speedup to
learning. We apply our method to GMM, HMM, LDA, NMF and subspace clustering,
then show its competitive performance and scalability over other methods on
rich datasets.
| [
{
"version": "v1",
"created": "Sun, 22 Jun 2014 19:16:20 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Zhou",
"Tianyi",
""
],
[
"Bilmes",
"Jeff",
""
],
[
"Guestrin",
"Carlos",
""
]
] | TITLE: Divide-and-Conquer Learning by Anchoring a Conical Hull
ABSTRACT: We reduce a broad class of machine learning problems, usually addressed by EM
or sampling, to the problem of finding the $k$ extremal rays spanning the
conical hull of a data point set. These $k$ "anchors" lead to a global solution
and a more interpretable model that can even outperform EM and sampling on
generalization error. To find the $k$ anchors, we propose a novel
divide-and-conquer learning scheme "DCA" that distributes the problem to
$\mathcal O(k\log k)$ same-type sub-problems on different low-D random
hyperplanes, each of which can be solved by any solver. For the 2D sub-problem, we
present a non-iterative solver that only needs to compute an array of cosine
values and its max/min entries. DCA also provides a faster subroutine for other
methods to check whether a point is covered in a conical hull, which improves
algorithm design in multiple dimensions and brings significant speedup to
learning. We apply our method to GMM, HMM, LDA, NMF and subspace clustering,
then show its competitive performance and scalability over other methods on
rich datasets.
| no_new_dataset | 0.943556 |
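A toy sketch of the 2D sub-problem mentioned above: once data are projected to a 2-D half-plane, the two extreme rays of the conical hull can be read off non-iteratively from angles (here via arctan2 rather than the paper's cosine-array formulation). The generated cone and its two ground-truth rays are assumptions.

```python
import numpy as np

def anchors_2d(points):
    """Return the indices of the two extreme rays of the conical hull of
    2-D points lying in a half-plane (non-iterative: just min/max angle)."""
    angles = np.arctan2(points[:, 1], points[:, 0])
    return int(np.argmin(angles)), int(np.argmax(angles))

rng = np.random.default_rng(3)
rays = np.array([[1.0, 0.2], [0.3, 1.0]])   # ground-truth anchor rays
weights = rng.random((200, 2))
data = weights @ rays                        # points inside the cone
data = np.vstack([data, rays])               # include the rays themselves

print("anchor indices:", anchors_2d(data))   # expect the last two rows (200, 201)
```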
1406.5824 | Serena Yeung | Serena Yeung, Alireza Fathi, and Li Fei-Fei | VideoSET: Video Summary Evaluation through Text | null | null | null | null | cs.CV cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present VideoSET, a method for Video Summary Evaluation
through Text that can evaluate how well a video summary is able to retain the
semantic information contained in its original video. We observe that semantics
is most easily expressed in words, and develop a text-based approach for the
evaluation. Given a video summary, a text representation of the video summary
is first generated, and an NLP-based metric is then used to measure its
semantic distance to ground-truth text summaries written by humans. We show
that our technique has higher agreement with human judgment than pixel-based
distance metrics. We also release text annotations and ground-truth text
summaries for a number of publicly available video datasets, for use by the
computer vision community.
| [
{
"version": "v1",
"created": "Mon, 23 Jun 2014 07:56:23 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Yeung",
"Serena",
""
],
[
"Fathi",
"Alireza",
""
],
[
"Fei-Fei",
"Li",
""
]
] | TITLE: VideoSET: Video Summary Evaluation through Text
ABSTRACT: In this paper we present VideoSET, a method for Video Summary Evaluation
through Text that can evaluate how well a video summary is able to retain the
semantic information contained in its original video. We observe that semantics
is most easily expressed in words, and develop a text-based approach for the
evaluation. Given a video summary, a text representation of the video summary
is first generated, and an NLP-based metric is then used to measure its
semantic distance to ground-truth text summaries written by humans. We show
that our technique has higher agreement with human judgment than pixel-based
distance metrics. We also release text annotations and ground-truth text
summaries for a number of publicly available video datasets, for use by the
computer vision community.
| no_new_dataset | 0.949809 |
1406.5910 | Roman Shapovalov | Roman Shapovalov, Dmitry Vetrov, Anton Osokin, Pushmeet Kohli | Multi-utility Learning: Structured-output Learning with Multiple
Annotation-specific Loss Functions | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structured-output learning is a challenging problem; particularly so because
of the difficulty in obtaining large datasets of fully labelled instances for
training. In this paper we try to overcome this difficulty by presenting a
multi-utility learning framework for structured prediction that can learn from
training instances with different forms of supervision. We propose a unified
technique for inferring the loss functions most suitable for quantifying the
consistency of solutions with the given weak annotation. We demonstrate the
effectiveness of our framework on the challenging semantic image segmentation
problem for which a wide variety of annotations can be used. For instance, the
popular training datasets for semantic segmentation are composed of images with
hard-to-generate full pixel labellings, as well as images with easy-to-obtain
weak annotations, such as bounding boxes around objects, or image-level labels
that specify which object categories are present in an image. Experimental
evaluation shows that the use of annotation-specific loss functions
dramatically improves segmentation accuracy compared to the baseline system
where only one type of weak annotation is used.
| [
{
"version": "v1",
"created": "Mon, 23 Jun 2014 14:06:24 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Shapovalov",
"Roman",
""
],
[
"Vetrov",
"Dmitry",
""
],
[
"Osokin",
"Anton",
""
],
[
"Kohli",
"Pushmeet",
""
]
] | TITLE: Multi-utility Learning: Structured-output Learning with Multiple
Annotation-specific Loss Functions
ABSTRACT: Structured-output learning is a challenging problem; particularly so because
of the difficulty in obtaining large datasets of fully labelled instances for
training. In this paper we try to overcome this difficulty by presenting a
multi-utility learning framework for structured prediction that can learn from
training instances with different forms of supervision. We propose a unified
technique for inferring the loss functions most suitable for quantifying the
consistency of solutions with the given weak annotation. We demonstrate the
effectiveness of our framework on the challenging semantic image segmentation
problem for which a wide variety of annotations can be used. For instance, the
popular training datasets for semantic segmentation are composed of images with
hard-to-generate full pixel labellings, as well as images with easy-to-obtain
weak annotations, such as bounding boxes around objects, or image-level labels
that specify which object categories are present in an image. Experimental
evaluation shows that the use of annotation-specific loss functions
dramatically improves segmentation accuracy compared to the baseline system
where only one type of weak annotation is used.
| no_new_dataset | 0.949529 |
1406.5947 | Thomas Martinetz | Bogdan Miclut, Thomas Kaester, Thomas Martinetz, Erhardt Barth | Committees of deep feedforward networks trained with few data | null | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks are known to give good results on image
classification tasks. In this paper we present a method to improve the
classification result by combining multiple such networks in a committee. We
adopt the STL-10 dataset which has very few training examples and show that our
method can achieve results that are better than the state of the art. The
networks are trained layer-wise and no backpropagation is used. We also explore
the effects of dataset augmentation by mirroring, rotation, and scaling.
| [
{
"version": "v1",
"created": "Mon, 23 Jun 2014 15:34:54 GMT"
}
] | 2014-06-24T00:00:00 | [
[
"Miclut",
"Bogdan",
""
],
[
"Kaester",
"Thomas",
""
],
[
"Martinetz",
"Thomas",
""
],
[
"Barth",
"Erhardt",
""
]
] | TITLE: Committees of deep feedforward networks trained with few data
ABSTRACT: Deep convolutional neural networks are known to give good results on image
classification tasks. In this paper we present a method to improve the
classification result by combining multiple such networks in a committee. We
adopt the STL-10 dataset which has very few training examples and show that our
method can achieve results that are better than the state of the art. The
networks are trained layer-wise and no backpropagation is used. We also explore
the effects of dataset augmentation by mirroring, rotation, and scaling.
| no_new_dataset | 0.950915 |
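A minimal sketch of the augmentation operations named above (mirroring, rotation, scaling) applied to a dummy 96x96 image with scipy; the angle and scale ranges are arbitrary assumptions, and this is not the paper's training code.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment(img, rng):
    """Return a randomly mirrored, rotated and scaled copy of an image."""
    out = img
    if rng.random() < 0.5:                       # mirroring
        out = np.fliplr(out)
    angle = rng.uniform(-15, 15)                 # small rotation in degrees
    out = rotate(out, angle, reshape=False, mode="nearest")
    scale = rng.uniform(0.9, 1.1)                # mild scaling
    out = zoom(out, scale, mode="nearest")
    return out

rng = np.random.default_rng(0)
image = rng.random((96, 96))                     # STL-10 images are 96x96
augmented = [augment(image, rng) for _ in range(4)]
print([a.shape for a in augmented])
```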
1301.2628 | Xu-Cheng Yin | Xu-Cheng Yin, Xuwang Yin, Kaizhu Huang, Hong-Wei Hao | Robust Text Detection in Natural Scene Images | A Draft Version (Submitted to IEEE TPAMI) | IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 36,
no. 5, pp. 970-983, 2014 | 10.1109/TPAMI.2013.182 | null | cs.CV cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text detection in natural scene images is an important prerequisite for many
content-based image analysis tasks. In this paper, we propose an accurate and
robust method for detecting texts in natural scene images. A fast and effective
pruning algorithm is designed to extract Maximally Stable Extremal Regions
(MSERs) as character candidates using the strategy of minimizing regularized
variations. Character candidates are grouped into text candidates by the
single-link clustering algorithm, where the distance weights and threshold of the
clustering algorithm are learned automatically by a novel self-training
distance metric learning algorithm. The posterior probabilities of text
candidates corresponding to non-text are estimated with a character
classifier; text candidates with high probabilities are then eliminated and
finally texts are identified with a text classifier. The proposed system is
evaluated on the ICDAR 2011 Robust Reading Competition dataset; the f measure
is over 76% and is significantly better than the state-of-the-art performance
of 71%. Experimental results on a publicly available multilingual dataset also
show that our proposed method can outperform the other competitive method with
an f measure increase of over 9 percent. Finally, we have set up an online demo
of our proposed scene text detection system at
http://kems.ustb.edu.cn/learning/yin/dtext.
| [
{
"version": "v1",
"created": "Fri, 11 Jan 2013 23:08:15 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Jan 2013 19:57:46 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Jun 2013 16:27:49 GMT"
}
] | 2014-06-23T00:00:00 | [
[
"Yin",
"Xu-Cheng",
""
],
[
"Yin",
"Xuwang",
""
],
[
"Huang",
"Kaizhu",
""
],
[
"Hao",
"Hong-Wei",
""
]
] | TITLE: Robust Text Detection in Natural Scene Images
ABSTRACT: Text detection in natural scene images is an important prerequisite for many
content-based image analysis tasks. In this paper, we propose an accurate and
robust method for detecting texts in natural scene images. A fast and effective
pruning algorithm is designed to extract Maximally Stable Extremal Regions
(MSERs) as character candidates using the strategy of minimizing regularized
variations. Character candidates are grouped into text candidates by the
single-link clustering algorithm, where the distance weights and threshold of the
clustering algorithm are learned automatically by a novel self-training
distance metric learning algorithm. The posterior probabilities of text
candidates corresponding to non-text are estimated with a character
classifier; text candidates with high probabilities are then eliminated and
finally texts are identified with a text classifier. The proposed system is
evaluated on the ICDAR 2011 Robust Reading Competition dataset; the f measure
is over 76% and is significantly better than the state-of-the-art performance
of 71%. Experimental results on a publicly available multilingual dataset also
show that our proposed method can outperform the other competitive method with
an f measure increase of over 9 percent. Finally, we have set up an online demo
of our proposed scene text detection system at
http://kems.ustb.edu.cn/learning/yin/dtext.
| no_new_dataset | 0.956431 |
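The grouping of character candidates into text candidates by single-link clustering can be sketched with scipy's single-linkage routine. Plain Euclidean distance on toy candidate centroids and a fixed 20-pixel threshold replace the learned distance weights and threshold described above, so both are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy character-candidate centroids (x, y): two words plus an isolated region.
centroids = np.array([
    [10, 50], [22, 51], [34, 49], [46, 50],     # word 1
    [120, 80], [133, 81], [145, 79],            # word 2
    [300, 10],                                  # isolated region
], dtype=float)

# Single-link agglomerative clustering with a hard distance threshold.
Z = linkage(centroids, method="single", metric="euclidean")
labels = fcluster(Z, t=20.0, criterion="distance")
print(labels)   # candidates closer than 20 px end up in the same text candidate
```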
1406.4966 | Jingdong Wang | Chao Du, Jingdong Wang | Inner Product Similarity Search using Compositional Codes | The approach presented in this paper (ECCV14 submission) is closely
related to multi-stage vector quantization and residual quantization. Thanks
the reviewers (CVPR14 and ECCV14) for pointing out the relationship to the
two algorithms. Related paper:
http://sites.skoltech.ru/app/data/uploads/sites/2/2013/09/CVPR14.pdf, which
also adopts the summation of vectors for vector approximation | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the nearest neighbor search problem under inner product
similarity and introduces a compact code-based approach. The idea is to
approximate a vector using the composition of several elements selected from a
source dictionary and to represent this vector by a short code composed of the
indices of the selected elements. The inner product between a query vector and
a database vector is efficiently estimated from the query vector and the short
code of the database vector. We show the superior performance of the proposed
group $M$-selection algorithm that selects $M$ elements from $M$ source
dictionaries for vector approximation in terms of search accuracy and
efficiency for compact codes of the same length via theoretical and empirical
analysis. Experimental results on large-scale datasets ($1M$ and $1B$ SIFT
features, $1M$ linear models and Netflix) demonstrate the superiority of the
proposed approach.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 07:42:05 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Jun 2014 02:13:56 GMT"
}
] | 2014-06-23T00:00:00 | [
[
"Du",
"Chao",
""
],
[
"Wang",
"Jingdong",
""
]
] | TITLE: Inner Product Similarity Search using Compositional Codes
ABSTRACT: This paper addresses the nearest neighbor search problem under inner product
similarity and introduces a compact code-based approach. The idea is to
approximate a vector using the composition of several elements selected from a
source dictionary and to represent this vector by a short code composed of the
indices of the selected elements. The inner product between a query vector and
a database vector is efficiently estimated from the query vector and the short
code of the database vector. We show the superior performance of the proposed
group $M$-selection algorithm that selects $M$ elements from $M$ source
dictionaries for vector approximation in terms of search accuracy and
efficiency for compact codes of the same length via theoretical and empirical
analysis. Experimental results on large-scale datasets ($1M$ and $1B$ SIFT
features, $1M$ linear models and Netflix) demonstrate the superiority of the
proposed approach.
| no_new_dataset | 0.940298 |
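A rough sketch of the compact-code idea described above: approximate a database vector as a sum of one element per source dictionary, store only the indices, and estimate inner products with a query from a per-query lookup table. The greedy residual encoder below is an assumption standing in for the paper's group M-selection algorithm, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, K = 32, 4, 16                      # dim, dictionaries, elements per dict
dicts = rng.normal(size=(M, K, d))       # M source dictionaries

def encode(x):
    """Greedy residual encoding: pick one element per dictionary (an
    assumption; the paper uses a group M-selection algorithm)."""
    code, residual = [], x.copy()
    for m in range(M):
        errs = np.linalg.norm(residual - dicts[m], axis=1)
        idx = int(np.argmin(errs))
        code.append(idx)
        residual = residual - dicts[m, idx]
    return code

def estimated_ip(query, code):
    """Inner product estimated from the short code via lookup tables."""
    tables = dicts @ query               # shape (M, K): query vs every element
    return sum(tables[m, code[m]] for m in range(M))

x = dicts[np.arange(M), rng.integers(0, K, M)].sum(axis=0)  # a database vector
q = rng.normal(size=d)
print("true  <q, x> = %.3f" % float(q @ x))
print("coded <q, x> = %.3f" % estimated_ip(q, encode(x)))
```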
1406.5212 | Georgia Gkioxari | Georgia Gkioxari, Bharath Hariharan, Ross Girshick, Jitendra Malik | R-CNNs for Pose Estimation and Action Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present convolutional neural networks for the tasks of keypoint (pose)
prediction and action classification of people in unconstrained images. Our
approach involves training an R-CNN detector with loss functions depending on
the task being tackled. We evaluate our method on the challenging PASCAL VOC
dataset and compare it to previous leading approaches. Our method gives
state-of-the-art results for keypoint and action prediction. Additionally, we
introduce a new dataset for action detection, the task of simultaneously
localizing people and classifying their actions, and present results using our
approach.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 20:56:08 GMT"
}
] | 2014-06-23T00:00:00 | [
[
"Gkioxari",
"Georgia",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Girshick",
"Ross",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: R-CNNs for Pose Estimation and Action Detection
ABSTRACT: We present convolutional neural networks for the tasks of keypoint (pose)
prediction and action classification of people in unconstrained images. Our
approach involves training an R-CNN detector with loss functions depending on
the task being tackled. We evaluate our method on the challenging PASCAL VOC
dataset and compare it to previous leading approaches. Our method gives
state-of-the-art results for keypoint and action prediction. Additionally, we
introduce a new dataset for action detection, the task of simultaneously
localizing people and classifying their actions, and present results using our
approach.
| new_dataset | 0.948822 |
1406.5059 | Abhishek Bhola | Abhishek Bhola | Twitter and Polls: Analyzing and estimating political orientation of
Twitter users in India General #Elections2014 | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This year (2014) in the month of May, the tenure of the 15th Lok Sabha was to
end and the elections to the 543 parliamentary seats were to be held. A
whooping $5 billion were spent on these elections, which made us stand second
only to the US Presidential elections in terms of money spent. Swelling number
of Internet users and Online Social Media (OSM) users could effect 3-4% of
urban population votes as per a report of IAMAI (Internet & Mobile Association
of India). Our count of tweets related to elections from September 2013 to May
2014, was close to 18.21 million. We analyzed the complete dataset and found
that the activity on Twitter peaked during important events. It was evident
from our data that the political behavior of the politicians affected their
followers count. Yet another aim of our work was to find an efficient way to
classify the political orientation of the users on Twitter. We used four
different techniques: two were based on the content of the tweets, one on the
user based features and another based on community detection algorithm on the
retweet and user mention networks. We found that the community detection
algorithm worked best. We built a portal to show the analysis of the tweets of
the last 24 hours. To the best of our knowledge, this is the first academic
pursuit to analyze the elections data and classify the users in the India
General Elections 2014.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 14:27:09 GMT"
}
] | 2014-06-20T00:00:00 | [
[
"Bhola",
"Abhishek",
""
]
] | TITLE: Twitter and Polls: Analyzing and estimating political orientation of
Twitter users in India General #Elections2014
ABSTRACT: This year (2014) in the month of May, the tenure of the 15th Lok Sabha was to
end and the elections to the 543 parliamentary seats were to be held. A
whopping $5 billion was spent on these elections, which made us stand second
only to the US Presidential elections in terms of money spent. The swelling number
of Internet users and Online Social Media (OSM) users could affect 3-4% of
urban population votes as per a report of IAMAI (Internet & Mobile Association
of India). Our count of tweets related to elections from September 2013 to May
2014, was close to 18.21 million. We analyzed the complete dataset and found
that the activity on Twitter peaked during important events. It was evident
from our data that the political behavior of the politicians affected their
followers count. Yet another aim of our work was to find an efficient way to
classify the political orientation of the users on Twitter. We used four
different techniques: two were based on the content of the tweets, one on the
user based features and another based on community detection algorithm on the
retweet and user mention networks. We found that the community detection
algorithm worked best. We built a portal to show the analysis of the tweets of
the last 24 hours. To the best of our knowledge, this is the first academic
pursuit to analyze the elections data and classify the users in the India
General Elections 2014.
| no_new_dataset | 0.92421 |
1406.5074 | Vijendra Singh | Singh Vijendra and Pathak Shivani | Robust Outlier Detection Technique in Data Mining: A Univariate Approach | arXiv admin note: text overlap with arXiv:1402.6859 by other authors
without attribution | null | null | MT CS 2011 | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Outliers are the points which are different from or inconsistent with the
rest of the data. They can be novel, new, abnormal, unusual or noisy
information. Outliers are sometimes more interesting than the majority of the
data. The main challenges of outlier detection with the increasing complexity,
size and variety of datasets, are how to catch similar outliers as a group, and
how to evaluate the outliers. This paper describes an approach which uses
univariate outlier detection as a pre-processing step to detect outliers and
then applies the K-means algorithm to analyse the effects of the outliers on
the cluster analysis of the dataset.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 15:12:49 GMT"
}
] | 2014-06-20T00:00:00 | [
[
"Vijendra",
"Singh",
""
],
[
"Shivani",
"Pathak",
""
]
] | TITLE: Robust Outlier Detection Technique in Data Mining: A Univariate Approach
ABSTRACT: Outliers are the points which are different from or inconsistent with the
rest of the data. They can be novel, new, abnormal, unusual or noisy
information. Outliers are sometimes more interesting than the majority of the
data. The main challenges of outlier detection with the increasing complexity,
size and variety of datasets, are how to catch similar outliers as a group, and
how to evaluate the outliers. This paper describes an approach which uses
univariate outlier detection as a pre-processing step to detect outliers and
then applies the K-means algorithm to analyse the effects of the outliers on
the cluster analysis of the dataset.
| no_new_dataset | 0.949295 |
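A small sketch of the two-step procedure above: flag univariate outliers per feature (a z-score rule is assumed here; other univariate tests would fit the same slot), then compare K-means cluster centres with and without the flagged points. The synthetic data and threshold are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def univariate_outliers(X, z=3.0):
    """Boolean mask of rows where any single feature lies more than `z`
    standard deviations from that feature's mean."""
    scores = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return (scores > z).any(axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(6, 1, (100, 2)),
               [[40, -35], [-30, 50]]])            # two gross outliers

mask = univariate_outliers(X)
print("flagged outliers:", mask.sum())
for name, data in [("with outliers", X), ("cleaned", X[~mask])]:
    centres = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data).cluster_centers_
    print(name, np.round(centres, 2))
```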
1406.5095 | Conrad Sanderson | Vikas Reddy, Conrad Sanderson, Andres Sanin, Brian C. Lovell | MRF-based Background Initialisation for Improved Foreground Detection in
Cluttered Surveillance Videos | arXiv admin note: substantial text overlap with arXiv:1303.2465 | null | 10.1007/978-3-642-19318-7_43 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust foreground object segmentation via background modelling is a difficult
problem in cluttered environments, where obtaining a clear view of the
background to model is almost impossible. In this paper, we propose a method
capable of robustly estimating the background and detecting regions of interest
in such environments. In particular, we propose to extend the background
initialisation component of a recent patch-based foreground detection algorithm
with an elaborate technique based on Markov Random Fields, where the optimal
labelling solution is computed using iterated conditional modes. Rather than
relying purely on local temporal statistics, the proposed technique takes into
account the spatial continuity of the entire background. Experiments with
several tracking algorithms on the CAVIAR dataset indicate that the proposed
method leads to considerable improvements in object tracking accuracy, when
compared to methods based on Gaussian mixture models and feature histograms.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 16:06:53 GMT"
}
] | 2014-06-20T00:00:00 | [
[
"Reddy",
"Vikas",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Sanin",
"Andres",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: MRF-based Background Initialisation for Improved Foreground Detection in
Cluttered Surveillance Videos
ABSTRACT: Robust foreground object segmentation via background modelling is a difficult
problem in cluttered environments, where obtaining a clear view of the
background to model is almost impossible. In this paper, we propose a method
capable of robustly estimating the background and detecting regions of interest
in such environments. In particular, we propose to extend the background
initialisation component of a recent patch-based foreground detection algorithm
with an elaborate technique based on Markov Random Fields, where the optimal
labelling solution is computed using iterated conditional modes. Rather than
relying purely on local temporal statistics, the proposed technique takes into
account the spatial continuity of the entire background. Experiments with
several tracking algorithms on the CAVIAR dataset indicate that the proposed
method leads to considerable improvements in object tracking accuracy, when
compared to methods based on Gaussian mixture models and feature histograms.
| no_new_dataset | 0.949435 |
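Iterated conditional modes, the MRF optimisation named above, can be sketched on a toy binary (Ising-style) labelling problem. The paper applies ICM to patch labels for background estimation, whereas this illustration does simple pixel-level denoising with assumed weights, so it only conveys the flavour of the technique.

```python
import numpy as np

def icm_denoise(noisy, beta=2.0, h=1.0, sweeps=5):
    """Iterated conditional modes on a binary Ising-style MRF: greedily pick
    each label to agree with the observation (weight h) and with its
    4-neighbours (weight beta)."""
    labels = noisy.copy()
    rows, cols = labels.shape
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                neigh = [labels[x, y] for x, y in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < rows and 0 <= y < cols]
                best, best_energy = labels[i, j], np.inf
                for s in (-1, 1):
                    e = -h * s * noisy[i, j] - beta * s * sum(neigh)
                    if e < best_energy:
                        best, best_energy = s, e
                labels[i, j] = best
    return labels

rng = np.random.default_rng(0)
clean = np.ones((20, 20), dtype=int)
clean[:, :10] = -1                                            # two flat regions
noisy = np.where(rng.random(clean.shape) < 0.15, -clean, clean)
restored = icm_denoise(noisy)
print("noisy errors   :", int((noisy != clean).sum()))
print("restored errors:", int((restored != clean).sum()))
```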
1406.5161 | Jeyanthi Salem Narasimhan | Jeyanthi Narasimhan, Abhinav Vishnu, Lawrence Holder, Adolfy Hoisie | Fast Support Vector Machines Using Parallel Adaptive Shrinking on
Distributed Systems | 10 pages, 9 figures, 3 tables | null | null | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support Vector Machines (SVMs), a popular machine learning technique, have been
applied to a wide range of domains such as science, finance, and social
networks for supervised learning. Whether it is identifying high-risk patients
by health-care professionals, or potential high-school students to enroll in
college by school districts, SVMs can play a major role for social good. This
paper undertakes the challenge of designing a scalable parallel SVM training
algorithm for large scale systems, which includes commodity multi-core
machines, tightly connected supercomputers and cloud computing systems.
Intuitive techniques for improving the time-space complexity including adaptive
elimination of samples for faster convergence and sparse format representation
are proposed. Under sample elimination, several heuristics for {\em earliest
possible} to {\em lazy} elimination of non-contributing samples are proposed.
In several cases, where an early sample elimination might result in a false
positive, low overhead mechanisms for reconstruction of key data structures are
proposed. The algorithm and heuristics are implemented and evaluated on various
publicly available datasets. Empirical evaluation shows up to 26x speed
improvement on some datasets against the sequential baseline, when evaluated on
multiple compute nodes, and an improvement in execution time up to 30-60\% is
readily observed on a number of other datasets against our parallel baseline.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 19:22:28 GMT"
}
] | 2014-06-20T00:00:00 | [
[
"Narasimhan",
"Jeyanthi",
""
],
[
"Vishnu",
"Abhinav",
""
],
[
"Holder",
"Lawrence",
""
],
[
"Hoisie",
"Adolfy",
""
]
] | TITLE: Fast Support Vector Machines Using Parallel Adaptive Shrinking on
Distributed Systems
ABSTRACT: Support Vector Machines (SVM), a popular machine learning technique, has been
applied to a wide range of domains such as science, finance, and social
networks for supervised learning. Whether it is identifying high-risk patients
by health-care professionals, or potential high-school students to enroll in
college by school districts, SVMs can play a major role for social good. This
paper undertakes the challenge of designing a scalable parallel SVM training
algorithm for large scale systems, which includes commodity multi-core
machines, tightly connected supercomputers and cloud computing systems.
Intuitive techniques for improving the time-space complexity including adaptive
elimination of samples for faster convergence and sparse format representation
are proposed. Under sample elimination, several heuristics for {\em earliest
possible} to {\em lazy} elimination of non-contributing samples are proposed.
In several cases, where an early sample elimination might result in a false
positive, low overhead mechanisms for reconstruction of key data structures are
proposed. The algorithm and heuristics are implemented and evaluated on various
publicly available datasets. Empirical evaluation shows up to 26x speed
improvement on some datasets against the sequential baseline, when evaluated on
multiple compute nodes, and an improvement in execution time up to 30-60\% is
readily observed on a number of other datasets against our parallel baseline.
| no_new_dataset | 0.949623 |
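The abstract above revolves around adaptively eliminating (shrinking) samples that stop contributing to the SVM solution. The single-node sketch below illustrates that idea with hinge-loss SGD and a lazy per-epoch elimination rule; it is not the paper's parallel algorithm, and the step-size schedule and `shrink_margin` threshold are assumptions made for illustration.

```python
import numpy as np

def sgd_svm_with_shrinking(X, y, lam=0.01, epochs=20, shrink_margin=2.0):
    """Linear SVM via SGD on the regularised hinge loss; after each epoch,
    samples whose margin comfortably exceeds 1 are dropped from the active
    set (lazy shrinking). Permanently dropping a sample can be a 'false
    positive', which is the risk the record above discusses."""
    n, d = X.shape
    w = np.zeros(d)
    active = np.arange(n)
    rng = np.random.default_rng(0)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(active):
            t += 1
            eta = 1.0 / (lam * t)                 # Pegasos-style step size
            if y[i] * X[i].dot(w) < 1.0:          # margin violated: hinge gradient
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                                 # only the regulariser acts
                w = (1.0 - eta * lam) * w
        margins = y[active] * X[active].dot(w)
        active = active[margins < shrink_margin]  # shrink the working set
        if active.size == 0:
            break
    return w
```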
1311.4336 | Junming Huang | Junming Huang, Chao Li, Wen-Qiang Wang, Hua-Wei Shen, Guojie Li,
Xue-Qi Cheng | Temporal scaling in information propagation | 13 pages, 2 figures. published on Scientific Reports | Scientific Reports 4, 5334, (2014) | 10.1038/srep05334 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the study of information propagation, one fundamental problem is
uncovering universal laws governing the dynamics of information propagation.
This problem, from the microscopic perspective, is formulated as estimating the
propagation probability that a piece of information propagates from one
individual to another. Such a propagation probability generally depends on two
major classes of factors: the intrinsic attractiveness of information and the
interactions between individuals. Despite the fact that the temporal effect of
attractiveness is widely studied, temporal laws underlying individual
interactions remain unclear, causing inaccurate prediction of information
propagation on evolving social networks. In this report, we empirically study
the dynamics of information propagation, using the dataset from a
population-scale social media website. We discover a temporal scaling in
information propagation: the probability a message propagates between two
individuals decays with the length of time latency since their latest
interaction, obeying a power-law rule. Leveraging the scaling law, we further
propose a temporal model to estimate future propagation probabilities between
individuals, reducing the error rate of information propagation prediction from
6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2013 11:15:26 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2013 02:15:14 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Jun 2014 09:55:29 GMT"
}
] | 2014-06-19T00:00:00 | [
[
"Huang",
"Junming",
""
],
[
"Li",
"Chao",
""
],
[
"Wang",
"Wen-Qiang",
""
],
[
"Shen",
"Hua-Wei",
""
],
[
"Li",
"Guojie",
""
],
[
"Cheng",
"Xue-Qi",
""
]
] | TITLE: Temporal scaling in information propagation
ABSTRACT: For the study of information propagation, one fundamental problem is
uncovering universal laws governing the dynamics of information propagation.
This problem, from the microscopic perspective, is formulated as estimating the
propagation probability that a piece of information propagates from one
individual to another. Such a propagation probability generally depends on two
major classes of factors: the intrinsic attractiveness of information and the
interactions between individuals. Despite the fact that the temporal effect of
attractiveness is widely studied, temporal laws underlying individual
interactions remain unclear, causing inaccurate prediction of information
propagation on evolving social networks. In this report, we empirically study
the dynamics of information propagation, using the dataset from a
population-scale social media website. We discover a temporal scaling in
information propagation: the probability a message propagates between two
individuals decays with the length of time latency since their latest
interaction, obeying a power-law rule. Leveraging the scaling law, we further
propose a temporal model to estimate future propagation probabilities between
individuals, reducing the error rate of information propagation prediction from
6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
| no_new_dataset | 0.947284 |
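The central empirical finding above is a power-law decay of the propagation probability with the latency since the last interaction. The sketch below shows one simple way to estimate such an exponent from (latency, propagated-or-not) observations by binning latencies logarithmically and fitting a line in log-log space; the binning scheme is an assumption, not the paper's estimation procedure.

```python
import numpy as np

def fit_power_law_decay(latencies, propagated, n_bins=20):
    """Estimate alpha in  p(latency) ~ C * latency**(-alpha)  by binning
    latencies logarithmically and regressing log(rate) on log(latency).
    Latencies are assumed strictly positive."""
    latencies = np.asarray(latencies, dtype=float)
    propagated = np.asarray(propagated, dtype=float)
    edges = np.logspace(np.log10(latencies.min()), np.log10(latencies.max()), n_bins + 1)
    centres, rates = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (latencies >= lo) & (latencies < hi)
        if mask.sum() > 0 and propagated[mask].mean() > 0:
            centres.append(np.sqrt(lo * hi))           # geometric bin centre
            rates.append(propagated[mask].mean())      # empirical propagation rate
    slope, intercept = np.polyfit(np.log(centres), np.log(rates), 1)
    return -slope, np.exp(intercept)                   # alpha, C
```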
1406.4296 | Adrien Gaidon | Adrien Gaidon (Xerox Research Center Europe, France), Gloria Zen
(University of Trento, Italy), Jose A. Rodriguez-Serrano (Xerox Research
Center Europe, France) | Self-Learning Camera: Autonomous Adaptation of Object Detectors to
Unlabeled Video Streams | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning object detectors requires massive amounts of labeled training
samples from the specific data source of interest. This is impractical when
dealing with many different sources (e.g., in camera networks), or constantly
changing ones such as mobile cameras (e.g., in robotics or driving assistant
systems). In this paper, we address the problem of self-learning detectors in
an autonomous manner, i.e. (i) detectors continuously updating themselves to
efficiently adapt to streaming data sources (contrary to transductive
algorithms), (ii) without any labeled data strongly related to the target data
stream (contrary to self-paced learning), and (iii) without manual intervention
to set and update hyper-parameters. To that end, we propose an unsupervised,
on-line, and self-tuning learning algorithm to optimize a multi-task learning
convex objective. Our method uses confident but laconic oracles (high-precision
but low-recall off-the-shelf generic detectors), and exploits the structure of
the problem to jointly learn on-line an ensemble of instance-level trackers,
from which we derive an adapted category-level object detector. Our approach is
validated on real-world publicly available video object datasets.
| [
{
"version": "v1",
"created": "Tue, 17 Jun 2014 09:51:18 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Jun 2014 12:33:22 GMT"
}
] | 2014-06-19T00:00:00 | [
[
"Gaidon",
"Adrien",
"",
"Xerox Research Center Europe, France"
],
[
"Zen",
"Gloria",
"",
"University of Trento, Italy"
],
[
"Rodriguez-Serrano",
"Jose A.",
"",
"Xerox Research\n Center Europe, France"
]
] | TITLE: Self-Learning Camera: Autonomous Adaptation of Object Detectors to
Unlabeled Video Streams
ABSTRACT: Learning object detectors requires massive amounts of labeled training
samples from the specific data source of interest. This is impractical when
dealing with many different sources (e.g., in camera networks), or constantly
changing ones such as mobile cameras (e.g., in robotics or driving assistant
systems). In this paper, we address the problem of self-learning detectors in
an autonomous manner, i.e. (i) detectors continuously updating themselves to
efficiently adapt to streaming data sources (contrary to transductive
algorithms), (ii) without any labeled data strongly related to the target data
stream (contrary to self-paced learning), and (iii) without manual intervention
to set and update hyper-parameters. To that end, we propose an unsupervised,
on-line, and self-tuning learning algorithm to optimize a multi-task learning
convex objective. Our method uses confident but laconic oracles (high-precision
but low-recall off-the-shelf generic detectors), and exploits the structure of
the problem to jointly learn on-line an ensemble of instance-level trackers,
from which we derive an adapted category-level object detector. Our approach is
validated on real-world publicly available video object datasets.
| no_new_dataset | 0.953057 |
1406.4773 | Yi Sun | Yi Sun, Xiaogang Wang, Xiaoou Tang | Deep Learning Face Representation by Joint Identification-Verification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key challenge of face recognition is to develop effective feature
representations for reducing intra-personal variations while enlarging
inter-personal differences. In this paper, we show that it can be well solved
with deep learning and using both face identification and verification signals
as supervision. The Deep IDentification-verification features (DeepID2) are
learned with carefully designed deep convolutional networks. The face
identification task increases the inter-personal variations by drawing DeepID2
extracted from different identities apart, while the face verification task
reduces the intra-personal variations by pulling DeepID2 extracted from the
same identity together, both of which are essential to face recognition. The
learned DeepID2 features can be well generalized to new identities unseen in
the training data. On the challenging LFW dataset, 99.15% face verification
accuracy is achieved. Compared with the best deep learning result on LFW, the
error rate has been significantly reduced by 67%.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2014 15:42:16 GMT"
}
] | 2014-06-19T00:00:00 | [
[
"Sun",
"Yi",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Deep Learning Face Representation by Joint Identification-Verification
ABSTRACT: The key challenge of face recognition is to develop effective feature
representations for reducing intra-personal variations while enlarging
inter-personal differences. In this paper, we show that it can be well solved
with deep learning and using both face identification and verification signals
as supervision. The Deep IDentification-verification features (DeepID2) are
learned with carefully designed deep convolutional networks. The face
identification task increases the inter-personal variations by drawing DeepID2
extracted from different identities apart, while the face verification task
reduces the intra-personal variations by pulling DeepID2 extracted from the
same identity together, both of which are essential to face recognition. The
learned DeepID2 features can be well generalized to new identities unseen in
the training data. On the challenging LFW dataset, 99.15% face verification
accuracy is achieved. Compared with the best deep learning result on LFW, the
error rate has been significantly reduced by 67%.
| no_new_dataset | 0.949153 |
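The record above combines an identification (classification) signal with a verification (contrastive) signal over pairs of face features. A framework-free numpy sketch of such a joint loss on one pair of embeddings; the margin and the weight `lam` are hypothetical hyper-parameters, and this only illustrates the two terms, not the DeepID2 network or training setup.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Identification term: cross-entropy of a softmax over identity logits."""
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def verification_loss(f_i, f_j, same_identity, margin=1.0):
    """Verification term: pull same-identity features together,
    push different-identity features at least `margin` apart."""
    dist = np.linalg.norm(f_i - f_j)
    if same_identity:
        return 0.5 * dist ** 2
    return 0.5 * max(0.0, margin - dist) ** 2

def joint_loss(logits_i, label_i, logits_j, label_j, f_i, f_j, lam=0.05):
    """Weighted sum of the identification and verification signals."""
    ident = softmax_cross_entropy(logits_i, label_i) + softmax_cross_entropy(logits_j, label_j)
    verif = verification_loss(f_i, f_j, label_i == label_j)
    return ident + lam * verif
```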
1406.4775 | Andrea Montanari | Andrea Montanari and Emile Richard | Non-negative Principal Component Analysis: Message Passing Algorithms
and Sharp Asymptotics | 51 pages, 7 pdf figures | null | null | null | cs.IT math.IT math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal component analysis (PCA) aims at estimating the direction of
maximal variability of a high-dimensional dataset. A natural question is: does
this task become easier, and estimation more accurate, when we exploit
additional knowledge on the principal vector? We study the case in which the
principal vector is known to lie in the positive orthant. Similar constraints
arise in a number of applications, ranging from analysis of gene expression
data to spike sorting in neural signal processing.
In the unconstrained case, the estimation performance of PCA has been
precisely characterized using random matrix theory, under a statistical model
known as the `spiked model.' It is known that the estimation error undergoes a
phase transition as the signal-to-noise ratio crosses a certain threshold.
Unfortunately, tools from random matrix theory have no bearing on the
constrained problem. Despite this challenge, we develop an analogous
characterization in the constrained case, within a one-spike model.
In particular: $(i)$~We prove that the estimation error undergoes a similar
phase transition, albeit at a different threshold in signal-to-noise ratio that
we determine exactly; $(ii)$~We prove that --unlike in the unconstrained case--
estimation error depends on the spike vector, and characterize the least
favorable vectors; $(iii)$~We show that a non-negative principal component can
be approximately computed --under the spiked model-- in nearly linear time.
This despite the fact that the problem is non-convex and, in general, NP-hard
to solve exactly.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2014 15:47:33 GMT"
}
] | 2014-06-19T00:00:00 | [
[
"Montanari",
"Andrea",
""
],
[
"Richard",
"Emile",
""
]
] | TITLE: Non-negative Principal Component Analysis: Message Passing Algorithms
and Sharp Asymptotics
ABSTRACT: Principal component analysis (PCA) aims at estimating the direction of
maximal variability of a high-dimensional dataset. A natural question is: does
this task become easier, and estimation more accurate, when we exploit
additional knowledge on the principal vector? We study the case in which the
principal vector is known to lie in the positive orthant. Similar constraints
arise in a number of applications, ranging from analysis of gene expression
data to spike sorting in neural signal processing.
In the unconstrained case, the estimation performance of PCA has been
precisely characterized using random matrix theory, under a statistical model
known as the `spiked model.' It is known that the estimation error undergoes a
phase transition as the signal-to-noise ratio crosses a certain threshold.
Unfortunately, tools from random matrix theory have no bearing on the
constrained problem. Despite this challenge, we develop an analogous
characterization in the constrained case, within a one-spike model.
In particular: $(i)$~We prove that the estimation error undergoes a similar
phase transition, albeit at a different threshold in signal-to-noise ratio that
we determine exactly; $(ii)$~We prove that --unlike in the unconstrained case--
estimation error depends on the spike vector, and characterize the least
favorable vectors; $(iii)$~We show that a non-negative principal component can
be approximately computed --under the spiked model-- in nearly linear time.
This despite the fact that the problem is non-convex and, in general, NP-hard
to solve exactly.
| no_new_dataset | 0.942454 |
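For the problem above -- estimating a principal component constrained to the non-negative orthant -- a simple baseline is a projected power iteration, sketched below. The paper analyses an approximate message passing algorithm, not this method; the projection-based iteration is only an illustration of the constrained estimation task.

```python
import numpy as np

def nonnegative_principal_component(A, n_iter=200, seed=0):
    """Projected power iteration on a symmetric data matrix A: alternately
    multiply by A and project back onto the non-negative unit sphere."""
    rng = np.random.default_rng(seed)
    v = np.abs(rng.standard_normal(A.shape[0]))
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = A @ v
        v = np.clip(v, 0.0, None)        # project onto the positive orthant
        norm = np.linalg.norm(v)
        if norm == 0:                    # degenerate case: restart randomly
            v = np.abs(rng.standard_normal(A.shape[0]))
            norm = np.linalg.norm(v)
        v /= norm
    return v
```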
1406.4784 | Ping Li | Anshumali Shrivastava and Ping Li | Improved Densification of One Permutation Hashing | null | null | null | null | stat.ME cs.DS cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The existing work on densification of one permutation hashing reduces the
query processing cost of the $(K,L)$-parameterized Locality Sensitive Hashing
(LSH) algorithm with minwise hashing, from $O(dKL)$ to merely $O(d + KL)$,
where $d$ is the number of nonzeros of the data vector, $K$ is the number of
hashes in each hash table, and $L$ is the number of hash tables. While that is
a substantial improvement, our analysis reveals that the existing densification
scheme is sub-optimal. In particular, there is not enough randomness in that
procedure, which affects its accuracy on very sparse datasets.
In this paper, we provide a new densification procedure which is provably
better than the existing scheme. This improvement is more significant for very
sparse datasets which are common over the web. The improved technique has the
same cost of $O(d + KL)$ for query processing, thereby making it strictly
preferable over the existing procedure. Experimental evaluations on public
datasets, in the task of hashing based near neighbor search, support our
theoretical findings.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2014 16:16:22 GMT"
}
] | 2014-06-19T00:00:00 | [
[
"Shrivastava",
"Anshumali",
""
],
[
"Li",
"Ping",
""
]
] | TITLE: Improved Densification of One Permutation Hashing
ABSTRACT: The existing work on densification of one permutation hashing reduces the
query processing cost of the $(K,L)$-parameterized Locality Sensitive Hashing
(LSH) algorithm with minwise hashing, from $O(dKL)$ to merely $O(d + KL)$,
where $d$ is the number of nonzeros of the data vector, $K$ is the number of
hashes in each hash table, and $L$ is the number of hash tables. While that is
a substantial improvement, our analysis reveals that the existing densification
scheme is sub-optimal. In particular, there is not enough randomness in that
procedure, which affects its accuracy on very sparse datasets.
In this paper, we provide a new densification procedure which is provably
better than the existing scheme. This improvement is more significant for very
sparse datasets which are common over the web. The improved technique has the
same cost of $O(d + KL)$ for query processing, thereby making it strictly
preferable over the existing procedure. Experimental evaluations on public
datasets, in the task of hashing based near neighbor search, support our
theoretical findings.
| no_new_dataset | 0.942823 |
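For context, the sketch below shows one permutation hashing together with the earlier "borrow from the nearest non-empty bin" densification that the record above improves on. It follows the general recipe only; the bin handling, the offset constant C and the assumption that K divides the dimension are simplifications, not either paper's exact construction.

```python
import numpy as np

def one_permutation_hash(nonzero_indices, dim, K, seed=0):
    """Split one random permutation of [0, dim) into K equal bins and keep,
    per bin, the smallest permuted offset of any non-zero feature.
    Assumes K divides dim; -1 marks an empty bin."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(dim)
    bin_size = dim // K
    bins = np.full(K, -1, dtype=np.int64)
    for idx in nonzero_indices:
        b, offset = divmod(perm[idx], bin_size)
        if bins[b] == -1 or offset < bins[b]:
            bins[b] = offset
    return bins

def densify(bins, C=10**6):
    """Fill each empty bin with the value of the nearest non-empty bin to its
    right (circularly), offset by C per step so borrowed values stay distinct.
    Assumes at least one bin is non-empty."""
    K = len(bins)
    dense = bins.copy()
    for b in range(K):
        if dense[b] == -1:
            step = 1
            while bins[(b + step) % K] == -1:
                step += 1
            dense[b] = bins[(b + step) % K] + step * C
    return dense
```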
1103.5188 | Catuscia Palamidessi | M\'ario S. Alvim, Miguel E. Andr\'es, Konstantinos Chatzikokolakis,
Pierpaolo Degano, Catuscia Palamidessi | Differential Privacy: on the trade-off between Utility and Information
Leakage | 30 pages; HAL repository | Proceedings of the 8th International Workshop on Formal Aspects of
Security & Trust (FAST'11), Springer, LNCS 7140, pp. 39-54, 2011 | 10.1007/978-3-642-29420-4_3 | inria-00580122 | cs.CR cs.DB cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
that two adjacent datasets give the same answer is bound by e^epsilon. In the
field of information flow there is a similar concern for controlling
information leakage, i.e. limiting the possibility of inferring the secret
information from the observables. In recent years, researchers have proposed to
quantify the leakage in terms of R\'enyi min mutual information, a notion
strictly related to the Bayes risk. In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
differential privacy implies a bound on the mutual information (but not
vice-versa). Furthermore, we show that our bound is tight. Then, we consider
the utility of the randomization mechanism, which represents how close the
randomized answers are, on average, to the real ones. We show that the notion
of differential privacy implies a bound on utility, also tight, and we propose
a method that under certain conditions builds an optimal randomization
mechanism, i.e. a mechanism which provides the best utility while guaranteeing
differential privacy.
| [
{
"version": "v1",
"created": "Sun, 27 Mar 2011 06:41:12 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2011 00:04:26 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Aug 2011 04:12:17 GMT"
}
] | 2014-06-18T00:00:00 | [
[
"Alvim",
"Mário S.",
""
],
[
"Andrés",
"Miguel E.",
""
],
[
"Chatzikokolakis",
"Konstantinos",
""
],
[
"Degano",
"Pierpaolo",
""
],
[
"Palamidessi",
"Catuscia",
""
]
] | TITLE: Differential Privacy: on the trade-off between Utility and Information
Leakage
ABSTRACT: Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
that two adjacent datasets give the same answer is bound by e^epsilon. In the
field of information flow there is a similar concern for controlling
information leakage, i.e. limiting the possibility of inferring the secret
information from the observables. In recent years, researchers have proposed to
quantify the leakage in terms of R\'enyi min mutual information, a notion
strictly related to the Bayes risk. In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
differential privacy implies a bound on the mutual information (but not
vice-versa). Furthermore, we show that our bound is tight. Then, we consider
the utility of the randomization mechanism, which represents how close the
randomized answers are, on average, to the real ones. We show that the notion
of differential privacy implies a bound on utility, also tight, and we propose
a method that under certain conditions builds an optimal randomization
mechanism, i.e. a mechanism which provides the best utility while guaranteeing
differential privacy.
| no_new_dataset | 0.946448 |
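The utility/leakage trade-off discussed above is easiest to see on a concrete mechanism. Below is a minimal sketch of the Laplace mechanism for a counting query, which satisfies epsilon-differential privacy because adjacent datasets change the count by at most one; the utility proxy shown (expected absolute error of the noise) is one simple choice, not the utility notion used in the paper.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a noisy count: true count plus Laplace noise with scale
    sensitivity/epsilon (sensitivity = 1 for a counting query)."""
    if rng is None:
        rng = np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

def expected_absolute_error(epsilon):
    """Mean |noise| of Laplace(0, 1/epsilon): a simple utility proxy,
    showing utility degrades as the privacy budget epsilon shrinks."""
    return 1.0 / epsilon
```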
1403.5884 | Ginestra Bianconi | Kartik Anand, Dimitri Krioukov, Ginestra Bianconi | Entropy distribution and condensation in random networks with a given
degree distribution | (9 pages, 1 figure) | Phys. Rev. E 89, 062807 (2014) | 10.1103/PhysRevE.89.062807 | null | cond-mat.dis-nn cond-mat.stat-mech physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The entropy of network ensembles characterizes the amount of information
encoded in the network structure, and can be used to quantify network
complexity, and the relevance of given structural properties observed in real
network datasets with respect to a random hypothesis. In many real networks the
degrees of individual nodes are not fixed but change in time, while their
statistical properties, such as the degree distribution, are preserved. Here we
characterize the distribution of entropy of random networks with given degree
sequences, where each degree sequence is drawn randomly from a given degree
distribution. We show that the leading term of the entropy of scale-free
network ensembles depends only on the network size and average degree, and that
entropy is self-averaging, meaning that its relative variance vanishes in the
thermodynamic limit. We also characterize large fluctuations of entropy that
are fully determined by the average degree in the network. Finally, above a
certain threshold, large fluctuations of the average degree in the ensemble can
lead to condensation, meaning that a single node in a network of size~$N$ can
attract $O(N)$ links.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2014 09:26:21 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2014 00:04:17 GMT"
},
{
"version": "v3",
"created": "Fri, 30 May 2014 08:31:44 GMT"
}
] | 2014-06-18T00:00:00 | [
[
"Anand",
"Kartik",
""
],
[
"Krioukov",
"Dimitri",
""
],
[
"Bianconi",
"Ginestra",
""
]
] | TITLE: Entropy distribution and condensation in random networks with a given
degree distribution
ABSTRACT: The entropy of network ensembles characterizes the amount of information
encoded in the network structure, and can be used to quantify network
complexity, and the relevance of given structural properties observed in real
network datasets with respect to a random hypothesis. In many real networks the
degrees of individual nodes are not fixed but change in time, while their
statistical properties, such as the degree distribution, are preserved. Here we
characterize the distribution of entropy of random networks with given degree
sequences, where each degree sequence is drawn randomly from a given degree
distribution. We show that the leading term of the entropy of scale-free
network ensembles depends only on the network size and average degree, and that
entropy is self-averaging, meaning that its relative variance vanishes in the
thermodynamic limit. We also characterize large fluctuations of entropy that
are fully determined by the average degree in the network. Finally, above a
certain threshold, large fluctuations of the average degree in the ensemble can
lead to condensation, meaning that a single node in a network of size~$N$ can
attract $O(N)$ links.
| no_new_dataset | 0.953751 |
1405.5047 | Michael Burke Mr | Michael Burke and Joan Lasenby | Single camera pose estimation using Bayesian filtering and Kinect motion
priors | 25 pages, Technical report, related to Burke and Lasenby, AMDO 2014
conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video:
https://www.youtube.com/watch?v=dJMTSo7-uFE | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this is not ideal and somewhat inelegant as it results in large
processing burdens, and instead attempt to incorporate these constraints
through priors obtained directly from training data. A prior distribution
covering the probability of a human pose occurring is used to incorporate
likely human poses. This distribution is obtained offline, by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 20 May 2014 11:54:04 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Jun 2014 12:15:42 GMT"
}
] | 2014-06-18T00:00:00 | [
[
"Burke",
"Michael",
""
],
[
"Lasenby",
"Joan",
""
]
] | TITLE: Single camera pose estimation using Bayesian filtering and Kinect motion
priors
ABSTRACT: Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this is not ideal and somewhat inelegant as it results in large
processing burdens, and instead attempt to incorporate these constraints
through priors obtained directly from training data. A prior distribution
covering the probability of a human pose occurring is used to incorporate
likely human poses. This distribution is obtained offline, by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.
| no_new_dataset | 0.952264 |
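The upper body model above behaves like a random walk that drifts towards typically observed poses drawn from a GMM prior. The toy sketch below has that flavour, assuming the prior is fitted with scikit-learn's GaussianMixture; the drift and noise parameters are illustrative, and this is not the paper's Rao-Blackwellised mixture Kalman filter.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pose_prior(pose_samples, n_components=8, seed=0):
    """Fit a GMM prior to a (n_samples, pose_dim) array of recorded poses."""
    return GaussianMixture(n_components=n_components, random_state=seed).fit(pose_samples)

def transition(pose, prior, drift=0.1, noise=0.01, rng=None):
    """Random-walk step that drifts towards the mean of the most responsible
    mixture component (an Ornstein-Uhlenbeck-like pull towards likely poses)."""
    if rng is None:
        rng = np.random.default_rng()
    k = int(np.argmax(prior.predict_proba(pose[None, :])[0]))
    target = prior.means_[k]
    return pose + drift * (target - pose) + noise * rng.standard_normal(pose.shape)
```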
1406.4281 | N Houlie | P. Psimoulis, N. Houlie, M. Meindl, M. Rothacher | Consistency of GPS and strong-motion records: case study of Mw9.0
Tohoku-Oki 2011 earthquake | Smart Structures and Systems, 2015 | null | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | GPS and strong-motion sensors are broadly used for the monitoring of
structural health and Earth surface motions, focusing on response of
structures, earthquake characterization and rupture modeling. Most studies have
shown differences between the two systems at very long periods (e.g. >100sec).
The aim of this study is the assessment of the compatibility of GPS and
strong-motion records by comparing the consistency in the frequency domain and
by comparing their respective displacement waveforms for several frequency
bands. For this purpose, GPS and strong-motion records of 23 collocated sites
of the Mw9.0 Tohoku 2011 earthquake were used to show that the consistency
between the two datasets depends on the frequency of the excitation, the
direction of the excitation signal and the distance from the excitation source.
| [
{
"version": "v1",
"created": "Tue, 17 Jun 2014 08:52:42 GMT"
}
] | 2014-06-18T00:00:00 | [
[
"Psimoulis",
"P.",
""
],
[
"Houlie",
"N.",
""
],
[
"Meindl",
"M.",
""
],
[
"Rothacher",
"M.",
""
]
] | TITLE: Consistency of GPS and strong-motion records: case study of Mw9.0
Tohoku-Oki 2011 earthquake
ABSTRACT: GPS and strong-motion sensors are broadly used for the monitoring of
structural health and Earth surface motions, focusing on response of
structures, earthquake characterization and rupture modeling. Most studies have
shown differences between the two systems at very long periods (e.g. >100sec).
The aim of this study is the assessment of the compatibility of GPS and
strong-motion records by comparing the consistency in the frequency domain and
by comparing their respective displacement waveforms for several frequency
bands. For this purpose, GPS and strong-motion records of 23 collocated sites
of the Mw9.0 Tohoku 2011 earthquake were used to show that the consistency
between the two datasets depends on the frequency of the excitation, the
direction of the excitation signal and the distance from the excitation source.
| no_new_dataset | 0.941654 |
1209.0738 | Bernardino Romera Paredes | Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes | Sparse coding for multitask and transfer learning | International Conference on Machine Learning 2013 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the use of sparse coding and dictionary learning in the
context of multitask and transfer learning. The central assumption of our
learning method is that the tasks parameters are well approximated by sparse
linear combinations of the atoms of a dictionary on a high or infinite
dimensional space. This assumption, together with the large quantity of
available data in the multitask and transfer learning settings, allows a
principled choice of the dictionary. We provide bounds on the generalization
error of this approach, for both settings. Numerical experiments on one
synthetic and two real datasets show the advantage of our method over single
task learning, a previous method based on orthogonal and dense representation
of the tasks and a related method learning task grouping.
| [
{
"version": "v1",
"created": "Tue, 4 Sep 2012 19:06:51 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Mar 2013 19:35:27 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Jun 2014 15:06:48 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Maurer",
"Andreas",
""
],
[
"Pontil",
"Massimiliano",
""
],
[
"Romera-Paredes",
"Bernardino",
""
]
] | TITLE: Sparse coding for multitask and transfer learning
ABSTRACT: We investigate the use of sparse coding and dictionary learning in the
context of multitask and transfer learning. The central assumption of our
learning method is that the task parameters are well approximated by sparse
linear combinations of the atoms of a dictionary on a high or infinite
dimensional space. This assumption, together with the large quantity of
available data in the multitask and transfer learning settings, allows a
principled choice of the dictionary. We provide bounds on the generalization
error of this approach, for both settings. Numerical experiments on one
synthetic and two real datasets show the advantage of our method over single
task learning, a previous method based on orthogonal and dense representation
of the tasks and a related method learning task grouping.
| no_new_dataset | 0.945197 |
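The central assumption above -- task parameters that are sparse combinations of shared dictionary atoms -- can be prototyped directly on a matrix of per-task weight vectors. A sketch using scikit-learn's DictionaryLearning; the number of atoms and the sparsity penalty are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def shared_dictionary_for_tasks(task_weights, n_atoms=10, alpha=1.0, seed=0):
    """task_weights: (n_tasks, d) matrix whose rows are per-task parameters.
    Learns a dictionary (n_atoms, d) and sparse codes so that each task's
    parameters are approximated by a sparse combination of shared atoms."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                            transform_algorithm="lasso_lars", random_state=seed)
    codes = dl.fit_transform(task_weights)     # (n_tasks, n_atoms) sparse codes
    dictionary = dl.components_                # (n_atoms, d) shared atoms
    reconstruction = codes @ dictionary        # sparse-coded task parameters
    return dictionary, codes, reconstruction
```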
1405.1131 | Ali Bou Nassif | Ali Bou Nassif, Luiz Fernando Capretz, Danny Ho | Analyzing the Non-Functional Requirements in the Desharnais Dataset for
Software Effort Estimation | 6 pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studying the quality requirements (aka Non-Functional Requirements (NFR)) of
a system is crucial in Requirements Engineering. Many software projects fail
because of neglecting or failing to incorporate the NFR during the software
life development cycle. This paper focuses on analyzing the importance of the
quality requirements attributes in software effort estimation models based on
the Desharnais dataset. The Desharnais dataset is a collection of eighty-one
software projects, described by twelve attributes, developed by a Canadian
software house.
The analysis includes studying the influence of each of the quality
requirements attributes, as well as the influence of all quality requirements
attributes combined when calculating software effort using regression and
Artificial Neural Network (ANN) models. The evaluation criteria used in this
investigation include the Mean of the Magnitude of Relative Error (MMRE), the
Prediction Level (PRED), Root Mean Squared Error (RMSE), Mean Error and the
Coefficient of determination (R2). Results show that the quality attribute
Language is the most statistically significant when calculating software
effort. Moreover, if all quality requirements attributes are eliminated in the
training stage and software effort is predicted based on software size only,
the value of the error (MMRE) is doubled.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 02:32:41 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Jun 2014 03:19:18 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Nassif",
"Ali Bou",
""
],
[
"Capretz",
"Luiz Fernando",
""
],
[
"Ho",
"Danny",
""
]
] | TITLE: Analyzing the Non-Functional Requirements in the Desharnais Dataset for
Software Effort Estimation
ABSTRACT: Studying the quality requirements (aka Non-Functional Requirements (NFR)) of
a system is crucial in Requirements Engineering. Many software projects fail
because of neglecting or failing to incorporate the NFR during the software
life development cycle. This paper focuses on analyzing the importance of the
quality requirements attributes in software effort estimation models based on
the Desharnais dataset. The Desharnais dataset is a collection of eighty-one
software projects, described by twelve attributes, developed by a Canadian
software house.
The analysis includes studying the influence of each of the quality
requirements attributes, as well as the influence of all quality requirements
attributes combined when calculating software effort using regression and
Artificial Neural Network (ANN) models. The evaluation criteria used in this
investigation include the Mean of the Magnitude of Relative Error (MMRE), the
Prediction Level (PRED), Root Mean Squared Error (RMSE), Mean Error and the
Coefficient of determination (R2). Results show that the quality attribute
Language is the most statistically significant when calculating software
effort. Moreover, if all quality requirements attributes are eliminated in the
training stage and software effort is predicted based on software size only,
the value of the error (MMRE) is doubled.
| new_dataset | 0.964722 |
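The evaluation criteria listed above (MMRE, PRED, RMSE) have simple closed forms, sketched below; PRED is shown at the common 25% level, which is an assumption since the record does not state the level used.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def pred(actual, predicted, level=0.25):
    """PRED(l): fraction of projects whose relative error is within `level`."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mre = np.abs(actual - predicted) / actual
    return np.mean(mre <= level)

def rmse(actual, predicted):
    """Root Mean Squared Error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))
```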
1405.7452 | Tim Althoff | Tim Althoff, Damian Borth, J\"orn Hees, Andreas Dengel | Analysis and Forecasting of Trending Topics in Online Media Streams | ACM Multimedia 2013 | null | null | null | cs.SI cs.MM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the vast information available on the web, social media streams capture
what people currently pay attention to and how they feel about certain topics.
Awareness of such trending topics plays a crucial role in multimedia systems
such as trend aware recommendation and automatic vocabulary selection for video
concept detection systems.
Correctly utilizing trending topics requires a better understanding of their
various characteristics in different social media streams. To this end, we
present the first comprehensive study across three major online and social
media streams, Twitter, Google, and Wikipedia, covering thousands of trending
topics during an observation period of an entire year. Our results indicate
that depending on one's requirements one does not necessarily have to turn to
Twitter for information about current events and that some media streams
strongly emphasize content of specific categories. As our second key
contribution, we further present a novel approach for the challenging task of
forecasting the life cycle of trending topics in the very moment they emerge.
Our fully automated approach is based on a nearest neighbor forecasting
technique exploiting our assumption that semantically similar topics exhibit
similar behavior.
We demonstrate on a large-scale dataset of Wikipedia page view statistics
that forecasts by the proposed approach are about 9-48k views closer to the
actual viewing statistics compared to baseline methods and achieve a mean
average percentage error of 45-19% for time periods of up to 14 days.
| [
{
"version": "v1",
"created": "Thu, 29 May 2014 03:43:41 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Jun 2014 20:14:07 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Althoff",
"Tim",
""
],
[
"Borth",
"Damian",
""
],
[
"Hees",
"Jörn",
""
],
[
"Dengel",
"Andreas",
""
]
] | TITLE: Analysis and Forecasting of Trending Topics in Online Media Streams
ABSTRACT: Among the vast information available on the web, social media streams capture
what people currently pay attention to and how they feel about certain topics.
Awareness of such trending topics plays a crucial role in multimedia systems
such as trend aware recommendation and automatic vocabulary selection for video
concept detection systems.
Correctly utilizing trending topics requires a better understanding of their
various characteristics in different social media streams. To this end, we
present the first comprehensive study across three major online and social
media streams, Twitter, Google, and Wikipedia, covering thousands of trending
topics during an observation period of an entire year. Our results indicate
that depending on one's requirements one does not necessarily have to turn to
Twitter for information about current events and that some media streams
strongly emphasize content of specific categories. As our second key
contribution, we further present a novel approach for the challenging task of
forecasting the life cycle of trending topics in the very moment they emerge.
Our fully automated approach is based on a nearest neighbor forecasting
technique exploiting our assumption that semantically similar topics exhibit
similar behavior.
We demonstrate on a large-scale dataset of Wikipedia page view statistics
that forecasts by the proposed approach are about 9-48k views closer to the
actual viewing statistics compared to baseline methods and achieve a mean
average percentage error of 45-19% for time periods of up to 14 days.
| no_new_dataset | 0.924756 |
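The forecasting approach above matches an emerging topic's early time series against historical topics and borrows their continuations. A compact nearest-neighbour sketch of that idea; Euclidean distance on normalised prefixes and a rescaled average of the neighbours' futures are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def knn_forecast(history, prefix, horizon, k=5):
    """history: (n_topics, T) array of completed view-count series.
    prefix:  the first L observations of the emerging topic (L + horizon <= T).
    Forecasts the next `horizon` steps by averaging the futures of the k
    historical topics whose normalised prefixes are closest to the query."""
    prefix = np.asarray(prefix, dtype=float)
    L = len(prefix)
    past = history[:, :L].astype(float)
    norms = np.maximum(np.linalg.norm(past, axis=1, keepdims=True), 1e-12)
    query = prefix / max(np.linalg.norm(prefix), 1e-12)
    dists = np.linalg.norm(past / norms - query, axis=1)
    neighbours = np.argsort(dists)[:k]
    futures = history[neighbours, L:L + horizon].astype(float)
    # rescale each neighbour's future to the magnitude of the emerging topic
    ratio = prefix.sum() / np.maximum(past[neighbours].sum(axis=1), 1e-12)
    return (futures * ratio[:, None]).mean(axis=0)
```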
1406.1774 | Toufiq Parag | Toufiq Parag, Stephen Plaza, Louis Scheffer (Janelia Farm Research
Campus- HHMI) | Small Sample Learning of Superpixel Classifiers for EM Segmentation-
Extended Version | Accepted for MICCAI 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pixel and superpixel classifiers have become essential tools for EM
segmentation algorithms. Training these classifiers remains a major bottleneck
primarily due to the requirement of completely annotating the dataset which is
tedious, error-prone and costly. In this paper, we propose an interactive
learning scheme for the superpixel classifier for EM segmentation. Our
algorithm is "active semi-supervised" because it requests the labels of a small
number of examples from the user and applies a label propagation technique to
generate these queries. Using only a small set ($<20\%$) of all datapoints, the
proposed algorithm consistently generates a classifier almost as accurate as
that estimated from a complete groundtruth. We provide segmentation results on
multiple datasets to show the strength of these classifiers.
| [
{
"version": "v1",
"created": "Fri, 6 Jun 2014 18:59:58 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jun 2014 22:05:57 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Parag",
"Toufiq",
"",
"Janelia Farm Research\n Campus- HHMI"
],
[
"Plaza",
"Stephen",
"",
"Janelia Farm Research\n Campus- HHMI"
],
[
"Scheffer",
"Louis",
"",
"Janelia Farm Research\n Campus- HHMI"
]
] | TITLE: Small Sample Learning of Superpixel Classifiers for EM Segmentation-
Extended Version
ABSTRACT: Pixel and superpixel classifiers have become essential tools for EM
segmentation algorithms. Training these classifiers remains a major bottleneck
primarily due to the requirement of completely annotating the dataset which is
tedious, error-prone and costly. In this paper, we propose an interactive
learning scheme for the superpixel classifier for EM segmentation. Our
algorithm is "active semi-supervised" because it requests the labels of a small
number of examples from the user and applies a label propagation technique to
generate these queries. Using only a small set ($<20\%$) of all datapoints, the
proposed algorithm consistently generates a classifier almost as accurate as
that estimated from a complete groundtruth. We provide segmentation results on
multiple datasets to show the strength of these classifiers.
| no_new_dataset | 0.956186 |
1406.3682 | Srishti Gupta | Srishti Gupta, Ponnurangam Kumaraguru | Emerging Phishing Trends and Effectiveness of the Anti-Phishing Landing
Page | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Each month, more attacks are launched with the aim of making web users
believe that they are communicating with a trusted entity which compels them to
share their personal, financial information. Phishing costs Internet users
billions of dollars every year. Researchers at Carnegie Mellon University (CMU)
created an anti-phishing landing page supported by Anti-Phishing Working Group
(APWG) with the aim to train users on how to prevent themselves from phishing
attacks. It is used by financial institutions, phish site take down vendors,
government organizations, and online merchants. When a potential victim clicks
on a phishing link that has been taken down, he / she is redirected to the
landing page. In this paper, we present the comparative analysis on two
datasets that we obtained from APWG's landing page log files; one, from
September 7, 2008 - November 11, 2009, and other from January 1, 2014 - April
30, 2014. We found that the landing page has been successful in training users
against phishing. Forty-six percent of users clicked a smaller number of phishing
URLs from January 2014 to April 2014, which shows that training from the landing
page helped users not to fall for phishing attacks. Our analysis shows that
phishers have started to modify their techniques by creating more legitimate
looking URLs and buying a large number of domains to increase their activity. We
observed that phishers are exploiting ICANN accredited registrars to launch
their attacks even after strict surveillance. We saw that phishers are trying
to exploit free subdomain registration services to carry out attacks. In this
paper, we also compared the phishing e-mails used by phishers to lure victims
in 2008 and 2014. We found that the phishing e-mails have changed considerably
over time. Phishers have adopted new techniques like sending promotional
e-mails and emotionally targeting users into clicking phishing URLs.
| [
{
"version": "v1",
"created": "Sat, 14 Jun 2014 04:19:16 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Gupta",
"Srishti",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] | TITLE: Emerging Phishing Trends and Effectiveness of the Anti-Phishing Landing
Page
ABSTRACT: Each month, more attacks are launched with the aim of making web users
believe that they are communicating with a trusted entity which compels them to
share their personal, financial information. Phishing costs Internet users
billions of dollars every year. Researchers at Carnegie Mellon University (CMU)
created an anti-phishing landing page supported by Anti-Phishing Working Group
(APWG) with the aim to train users on how to prevent themselves from phishing
attacks. It is used by financial institutions, phish site take down vendors,
government organizations, and online merchants. When a potential victim clicks
on a phishing link that has been taken down, he / she is redirected to the
landing page. In this paper, we present the comparative analysis on two
datasets that we obtained from APWG's landing page log files; one, from
September 7, 2008 - November 11, 2009, and other from January 1, 2014 - April
30, 2014. We found that the landing page has been successful in training users
against phishing. Forty-six percent of users clicked a smaller number of phishing
URLs from January 2014 to April 2014, which shows that training from the landing
page helped users not to fall for phishing attacks. Our analysis shows that
phishers have started to modify their techniques by creating more legitimate
looking URLs and buying a large number of domains to increase their activity. We
observed that phishers are exploiting ICANN accredited registrars to launch
their attacks even after strict surveillance. We saw that phishers are trying
to exploit free subdomain registration services to carry out attacks. In this
paper, we also compared the phishing e-mails used by phishers to lure victims
in 2008 and 2014. We found that the phishing e-mails have changed considerably
over time. Phishers have adopted new techniques like sending promotional
e-mails and emotionally targeting users into clicking phishing URLs.
| no_new_dataset | 0.875681 |
1406.3687 | Neha Gupta | Neha Gupta, Anupama Aggarwal, Ponnurangam Kumaraguru | bit.ly/malicious: Deep Dive into Short URL based e-Crime Detection | arXiv admin note: substantial text overlap with arXiv:1405.1511 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existence of spam URLs over emails and Online Social Media (OSM) has become a
massive e-crime. To counter the dissemination of long complex URLs in emails
and character limit imposed on various OSM (like Twitter), the concept of URL
shortening has gained a lot of traction. URL shorteners take as input a long
URL and output a short URL with the same landing page (as in the long URL) in
return. With their immense popularity over time, URL shorteners have become a
prime target for the attackers giving them an advantage to conceal malicious
content. Bitly, a leading service among all shortening services is being
exploited heavily to carry out phishing attacks, work-from-home scams,
pornographic content propagation, etc. This imposes additional performance
pressure on Bitly and other URL shorteners to be able to detect and take a
timely action against the illegitimate content. In this study, we analyzed a
dataset of 763,160 short URLs marked suspicious by Bitly in the month of
October 2013. Our results reveal that Bitly is not using its claimed spam
detection services very effectively. We also show how a suspicious Bitly
account goes unnoticed despite a prolonged recurrent illegitimate activity.
Bitly displays a warning page on identification of suspicious links, but we
observed this approach to be weak in controlling the overall propagation of
spam. We also identified some short URL based features and coupled them with
two domain specific features to classify a Bitly URL as malicious or benign and
achieved an accuracy of 86.41%. The feature set identified can be generalized
to other URL shortening services as well. To the best of our knowledge, this is
the first large scale study to highlight the issues with the implementation of
Bitly's spam detection policies and proposing suitable countermeasures.
| [
{
"version": "v1",
"created": "Sat, 14 Jun 2014 06:22:16 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Gupta",
"Neha",
""
],
[
"Aggarwal",
"Anupama",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] | TITLE: bit.ly/malicious: Deep Dive into Short URL based e-Crime Detection
ABSTRACT: Existence of spam URLs over emails and Online Social Media (OSM) has become a
massive e-crime. To counter the dissemination of long complex URLs in emails
and character limit imposed on various OSM (like Twitter), the concept of URL
shortening has gained a lot of traction. URL shorteners take as input a long
URL and output a short URL with the same landing page (as in the long URL) in
return. With their immense popularity over time, URL shorteners have become a
prime target for the attackers giving them an advantage to conceal malicious
content. Bitly, a leading service among all shortening services is being
exploited heavily to carry out phishing attacks, work-from-home scams,
pornographic content propagation, etc. This imposes additional performance
pressure on Bitly and other URL shorteners to be able to detect and take a
timely action against the illegitimate content. In this study, we analyzed a
dataset of 763,160 short URLs marked suspicious by Bitly in the month of
October 2013. Our results reveal that Bitly is not using its claimed spam
detection services very effectively. We also show how a suspicious Bitly
account goes unnoticed despite a prolonged recurrent illegitimate activity.
Bitly displays a warning page on identification of suspicious links, but we
observed this approach to be weak in controlling the overall propagation of
spam. We also identified some short URL based features and coupled them with
two domain specific features to classify a Bitly URL as malicious or benign and
achieved an accuracy of 86.41%. The feature set identified can be generalized
to other URL shortening services as well. To the best of our knowledge, this is
the first large scale study to highlight the issues with the implementation of
Bitly's spam detection policies and proposing suitable countermeasures.
| no_new_dataset | 0.915847 |
1406.3692 | Prateek Dewan | Prateek Dewan and Anand Kashyap and Ponnurangam Kumaraguru | Analyzing Social and Stylometric Features to Identify Spear phishing
Emails | Detection of spear phishing using social media features | null | null | null | cs.CY cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spear phishing is a complex targeted attack in which, an attacker harvests
information about the victim prior to the attack. This information is then used
to create sophisticated, genuine-looking attack vectors, drawing the victim to
compromise confidential information. What makes spear phishing different, and
more powerful than normal phishing, is this contextual information about the
victim. Online social media services can be one such source for gathering vital
information about an individual. In this paper, we characterize and examine a
true positive dataset of spear phishing, spam, and normal phishing emails from
Symantec's enterprise email scanning service. We then present a model to detect
spear phishing emails sent to employees of 14 international organizations, by
using social features extracted from LinkedIn. Our dataset consists of 4,742
targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack
emails sent to 5,912 non victims; and publicly available information from their
LinkedIn profiles. We applied various machine learning algorithms to this
labeled data, and achieved an overall maximum accuracy of 97.76% in identifying
spear phishing emails. We used a combination of social features from LinkedIn
profiles, and stylometric features extracted from email subjects, bodies, and
attachments. However, we achieved a slightly better accuracy of 98.28% without
the social features. Our analysis revealed that social features extracted from
LinkedIn do not help in identifying spear phishing emails. To the best of our
knowledge, this is one of the first attempts to make use of a combination of
stylometric features extracted from emails, and social features extracted from
an online social network to detect targeted spear phishing emails.
| [
{
"version": "v1",
"created": "Sat, 14 Jun 2014 07:01:03 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Dewan",
"Prateek",
""
],
[
"Kashyap",
"Anand",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] | TITLE: Analyzing Social and Stylometric Features to Identify Spear phishing
Emails
ABSTRACT: Spear phishing is a complex targeted attack in which, an attacker harvests
information about the victim prior to the attack. This information is then used
to create sophisticated, genuine-looking attack vectors, drawing the victim to
compromise confidential information. What makes spear phishing different, and
more powerful than normal phishing, is this contextual information about the
victim. Online social media services can be one such source for gathering vital
information about an individual. In this paper, we characterize and examine a
true positive dataset of spear phishing, spam, and normal phishing emails from
Symantec's enterprise email scanning service. We then present a model to detect
spear phishing emails sent to employees of 14 international organizations, by
using social features extracted from LinkedIn. Our dataset consists of 4,742
targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack
emails sent to 5,912 non victims; and publicly available information from their
LinkedIn profiles. We applied various machine learning algorithms to this
labeled data, and achieved an overall maximum accuracy of 97.76% in identifying
spear phishing emails. We used a combination of social features from LinkedIn
profiles, and stylometric features extracted from email subjects, bodies, and
attachments. However, we achieved a slightly better accuracy of 98.28% without
the social features. Our analysis revealed that social features extracted from
LinkedIn do not help in identifying spear phishing emails. To the best of our
knowledge, this is one of the first attempts to make use of a combination of
stylometric features extracted from emails, and social features extracted from
an online social network to detect targeted spear phishing emails.
| new_dataset | 0.967441 |
1406.3837 | Thomas Laurent | Xavier Bresson, Huiyi Hu, Thomas Laurent, Arthur Szlam, and James von
Brecht | An Incremental Reseeding Strategy for Clustering | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose a simple and easily parallelizable algorithm for
multiway graph partitioning. The algorithm alternates between three basic
components: diffusing seed vertices over the graph, thresholding the diffused
seeds, and then randomly reseeding the thresholded clusters. We demonstrate
experimentally that the proper combination of these ingredients leads to an
algorithm that achieves state-of-the-art performance in terms of cluster purity
on standard benchmark datasets. Moreover, the algorithm runs an order of
magnitude faster than the other algorithms that achieve comparable results in
terms of accuracy. We also describe a coarsen, cluster and refine approach
similar to GRACLUS and METIS that removes an additional order of magnitude from
the runtime of our algorithm while still maintaining competitive accuracy.
| [
{
"version": "v1",
"created": "Sun, 15 Jun 2014 18:30:51 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Bresson",
"Xavier",
""
],
[
"Hu",
"Huiyi",
""
],
[
"Laurent",
"Thomas",
""
],
[
"Szlam",
"Arthur",
""
],
[
"von Brecht",
"James",
""
]
] | TITLE: An Incremental Reseeding Strategy for Clustering
ABSTRACT: In this work we propose a simple and easily parallelizable algorithm for
multiway graph partitioning. The algorithm alternates between three basic
components: diffusing seed vertices over the graph, thresholding the diffused
seeds, and then randomly reseeding the thresholded clusters. We demonstrate
experimentally that the proper combination of these ingredients leads to an
algorithm that achieves state-of-the-art performance in terms of cluster purity
on standard benchmark datasets. Moreover, the algorithm runs an order of
magnitude faster than the other algorithms that achieve comparable results in
terms of accuracy. We also describe a coarsen, cluster and refine approach
similar to GRACLUS and METIS that removes an additional order of magnitude from
the runtime of our algorithm while still maintaining competitive accuracy.
| no_new_dataset | 0.952574 |
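The diffuse-threshold-reseed loop summarized in the abstract above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the random-walk diffusion operator, the seed-growth schedule, and the iteration counts are all assumptions made for the sketch.

import numpy as np

def incremental_reseeding(W, k, n_outer=20, n_diffusion_steps=5, growth=1.0, rng=None):
    """Sketch of a diffuse/threshold/reseed clustering loop on a graph.

    W : (n, n) symmetric non-negative adjacency matrix (no isolated vertices).
    k : number of clusters. All schedule parameters are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic random-walk operator
    labels = rng.integers(k, size=n)       # arbitrary initial partition
    seeds_per_cluster = 1.0
    for _ in range(n_outer):
        # Reseed: draw seeds uniformly at random from each current cluster.
        F = np.zeros((n, k))
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size == 0:
                members = np.arange(n)     # keep empty clusters alive
            chosen = rng.choice(members, size=int(seeds_per_cluster), replace=True)
            np.add.at(F[:, c], chosen, 1.0)
        # Diffuse the seed indicators over the graph.
        for _ in range(n_diffusion_steps):
            F = P.T @ F
        # Threshold: each vertex joins the cluster with the largest diffused mass.
        labels = F.argmax(axis=1)
        seeds_per_cluster += growth        # grow the seed count over time
    return labels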
1406.3949 | Jamil Ahmad | Jamil Ahmad, Zahoor Jan, Zia-ud-Din and Shoaib Muhammad Khan | A Fusion of Labeled-Grid Shape Descriptors with Weighted Ranking
Algorithm for Shapes Recognition | null | World Applied Sciences Journal, vol. 31(6), pp. 1207-1213, 2014 | 10.5829/idosi.wasj.2014.31.06.353 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieving similar images from a large dataset based on the image content has
been a very active research area and is a very challenging task. Studies have
shown that retrieving similar images based on their shape is a very effective
method. For this purpose a large number of methods exist in the literature. The
combination of more than one feature has also been investigated for this
purpose and has shown promising results. In this paper a fusion-based shape
recognition method is proposed. A set of local boundary-based and
region-based features are derived from the labeled-grid-based representation of the
shape and are combined with a few global shape features to produce a composite
shape descriptor. This composite shape descriptor is then used in a weighted
ranking algorithm to find similarities among shapes from a large dataset. The
experimental analysis has shown that the proposed method is powerful enough to
discriminate the geometrically similar shapes from the non-similar ones.
| [
{
"version": "v1",
"created": "Mon, 16 Jun 2014 09:50:04 GMT"
}
] | 2014-06-17T00:00:00 | [
[
"Ahmad",
"Jamil",
""
],
[
"Jan",
"Zahoor",
""
],
[
"Zia-ud-Din",
"",
""
],
[
"Khan",
"Shoaib Muhammad",
""
]
] | TITLE: A Fusion of Labeled-Grid Shape Descriptors with Weighted Ranking
Algorithm for Shapes Recognition
ABSTRACT: Retrieving similar images from a large dataset based on the image content has
been a very active research area and is a very challenging task. Studies have
shown that retrieving similar images based on their shape is a very effective
method. For this purpose a large number of methods exist in the literature. The
combination of more than one feature has also been investigated for this
purpose and has shown promising results. In this paper a fusion-based shape
recognition method is proposed. A set of local boundary-based and
region-based features are derived from the labeled-grid-based representation of the
shape and are combined with a few global shape features to produce a composite
shape descriptor. This composite shape descriptor is then used in a weighted
ranking algorithm to find similarities among shapes from a large dataset. The
experimental analysis has shown that the proposed method is powerful enough to
discriminate the geometrically similar shapes from the non-similar ones.
| no_new_dataset | 0.952442 |
1312.1743 | Deva Ramanan | Deva Ramanan | Dual coordinate solvers for large-scale structural SVMs | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This manuscript describes a method for training linear SVMs (including binary
SVMs, SVM regression, and structural SVMs) from large, out-of-core training
datasets. Current strategies for large-scale learning fall into one of two
camps: batch algorithms which solve the learning problem given a finite
dataset, and online algorithms which can process out-of-core datasets. The
former typically requires datasets small enough to fit in memory. The latter is
often phrased as a stochastic optimization problem; such algorithms enjoy
strong theoretical properties but often require manually tuned annealing
schedules, and may converge slowly for problems with large output spaces (e.g.,
structural SVMs). We discuss an algorithm for an "intermediate" regime in which
the data is too large to fit in memory, but the active constraints (support
vectors) are small enough to remain in memory. In this case, one can design
rather efficient learning algorithms that are as stable as batch algorithms,
but capable of processing out-of-core datasets. We have developed such a
MATLAB-based solver and used it to train a collection of recognition systems
for articulated pose estimation, facial analysis, 3D object recognition, and
action classification, all with publicly-available code. This writeup describes
the solver in detail.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2013 00:55:51 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jun 2014 04:10:06 GMT"
}
] | 2014-06-16T00:00:00 | [
[
"Ramanan",
"Deva",
""
]
] | TITLE: Dual coordinate solvers for large-scale structural SVMs
ABSTRACT: This manuscript describes a method for training linear SVMs (including binary
SVMs, SVM regression, and structural SVMs) from large, out-of-core training
datasets. Current strategies for large-scale learning fall into one of two
camps: batch algorithms which solve the learning problem given a finite
dataset, and online algorithms which can process out-of-core datasets. The
former typically requires datasets small enough to fit in memory. The latter is
often phrased as a stochastic optimization problem; such algorithms enjoy
strong theoretical properties but often require manually tuned annealing
schedules, and may converge slowly for problems with large output spaces (e.g.,
structural SVMs). We discuss an algorithm for an "intermediate" regime in which
the data is too large to fit in memory, but the active constraints (support
vectors) are small enough to remain in memory. In this case, one can design
rather efficient learning algorithms that are as stable as batch algorithms,
but capable of processing out-of-core datasets. We have developed such a
MATLAB-based solver and used it to train a collection of recognition systems
for articulated pose estimation, facial analysis, 3D object recognition, and
action classification, all with publicly-available code. This writeup describes
the solver in detail.
| no_new_dataset | 0.949716 |
1406.0455 | Cheng Chen | Cheng Chen, Lan Zheng, Venkatesh Srinivasan, Alex Thomo, Kui Wu,
Anthony Sukow | Buyer to Seller Recommendation under Constraints | 9 pages, 7 figures | null | null | null | cs.SI cs.GT q-fin.GN q-fin.ST | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The majority of recommender systems are designed to recommend items (such as
movies and products) to users. We focus on the problem of recommending buyers
to sellers which comes with new challenges: (1) constraints on the number of
recommendations buyers are part of before they become overwhelmed, (2)
constraints on the number of recommendations sellers receive within their
budget, and (3) constraints on the set of buyers that sellers want to receive
(e.g., no more than two people from the same household). We propose the
following critical problems of recommending buyers to sellers: Constrained
Recommendation (C-REC) capturing the first two challenges, and Conflict-Aware
Constrained Recommendation (CAC-REC) capturing all three challenges at the same
time. We show that C-REC can be modeled using linear programming and can be
efficiently solved using modern solvers. On the other hand, we show that
CAC-REC is NP-hard. We propose two approximate algorithms to solve CAC-REC and
show that they achieve close to optimal solutions via comprehensive experiments
using real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 2 Jun 2014 17:45:52 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jun 2014 05:32:29 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Jun 2014 17:34:26 GMT"
}
] | 2014-06-16T00:00:00 | [
[
"Chen",
"Cheng",
""
],
[
"Zheng",
"Lan",
""
],
[
"Srinivasan",
"Venkatesh",
""
],
[
"Thomo",
"Alex",
""
],
[
"Wu",
"Kui",
""
],
[
"Sukow",
"Anthony",
""
]
] | TITLE: Buyer to Seller Recommendation under Constraints
ABSTRACT: The majority of recommender systems are designed to recommend items (such as
movies and products) to users. We focus on the problem of recommending buyers
to sellers which comes with new challenges: (1) constraints on the number of
recommendations buyers are part of before they become overwhelmed, (2)
constraints on the number of recommendations sellers receive within their
budget, and (3) constraints on the set of buyers that sellers want to receive
(e.g., no more than two people from the same household). We propose the
following critical problems of recommending buyers to sellers: Constrained
Recommendation (C-REC) capturing the first two challenges, and Conflict-Aware
Constrained Recommendation (CAC-REC) capturing all three challenges at the same
time. We show that C-REC can be modeled using linear programming and can be
efficiently solved using modern solvers. On the other hand, we show that
CAC-REC is NP-hard. We propose two approximate algorithms to solve CAC-REC and
show that they achieve close to optimal solutions via comprehensive experiments
using real-world datasets.
| no_new_dataset | 0.943712 |
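As a rough illustration of how a constrained buyer-to-seller recommendation problem of the kind described above can be cast as a linear program, the sketch below maximizes a total affinity score subject to per-buyer and per-seller caps. The objective, the relaxation to fractional assignments, and all variable names are assumptions of this sketch rather than the paper's exact C-REC formulation.

import numpy as np
from scipy.optimize import linprog

def constrained_recommendation(score, buyer_cap, seller_budget):
    """LP relaxation of a capacity-constrained buyer-to-seller recommendation.

    score[i, j]      : affinity of buyer i for seller j (higher is better).
    buyer_cap[i]     : max number of recommendations buyer i may appear in.
    seller_budget[j] : max number of recommendations seller j may receive.
    """
    n_b, n_s = score.shape
    c = -score.ravel()                       # linprog minimizes, so negate scores
    # Row constraints: sum_j x[i, j] <= buyer_cap[i]
    A_rows = np.zeros((n_b, n_b * n_s))
    for i in range(n_b):
        A_rows[i, i * n_s:(i + 1) * n_s] = 1.0
    # Column constraints: sum_i x[i, j] <= seller_budget[j]
    A_cols = np.zeros((n_s, n_b * n_s))
    for j in range(n_s):
        A_cols[j, j::n_s] = 1.0
    A_ub = np.vstack([A_rows, A_cols])
    b_ub = np.concatenate([buyer_cap, seller_budget])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
    return res.x.reshape(n_b, n_s)           # fractional assignment matrix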
1406.3440 | Branislav Brutovsky | Denis Horvath, Jozef Ulicny and Branislav Brutovsky | Self-organized manifold learning and heuristic charting via adaptive
metrics | 13 pages, 11 figures | null | null | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classical metric and non-metric multidimensional scaling (MDS) variants are
widely known manifold learning (ML) methods which enable construction of low
dimensional representation (projections) of high dimensional data inputs.
However, their use is crucially limited to the cases when data are inherently
reducible to low dimensionality. In general, drawbacks and limitations of
these, as well as pure, MDS variants become more apparent when the exploration
(learning) is exposed to the structured data of high intrinsic dimension. As we
demonstrate on artificial and real-world datasets, the over-determination
problem can be solved by means of the hybrid and multi-component
discrete-continuous multi-modal optimization heuristics. Its remarkable feature
is that projections onto 2D are constructed simultaneously with the data
categorization (classification) compensating in part for the loss of original
input information. We observed that the optimization module integrated with ML
modeling, metric learning and categorization leads to a nontrivial mechanism
resulting in generation of patterns of categorical variables which can be
interpreted as a heuristic charting. The method provides visual information in
the form of non-convex clusters or separated regions. Furthermore, the ability
to categorize the surfaces into back and front parts of the analyzed 3D data
objects has been attained through self-organized structuring without
supervision.
| [
{
"version": "v1",
"created": "Fri, 13 Jun 2014 07:20:59 GMT"
}
] | 2014-06-16T00:00:00 | [
[
"Horvath",
"Denis",
""
],
[
"Ulicny",
"Jozef",
""
],
[
"Brutovsky",
"Branislav",
""
]
] | TITLE: Self-organized manifold learning and heuristic charting via adaptive
metrics
ABSTRACT: Classical metric and non-metric multidimensional scaling (MDS) variants are
widely known manifold learning (ML) methods which enable construction of low
dimensional representation (projections) of high dimensional data inputs.
However, their use is crucially limited to the cases when data are inherently
reducible to low dimensionality. In general, drawbacks and limitations of
these, as well as pure, MDS variants become more apparent when the exploration
(learning) is exposed to the structured data of high intrinsic dimension. As we
demonstrate on artificial and real-world datasets, the over-determination
problem can be solved by means of the hybrid and multi-component
discrete-continuous multi-modal optimization heuristics. Its remarkable feature
is that projections onto 2D are constructed simultaneously with the data
categorization (classification) compensating in part for the loss of original
input information. We observed that the optimization module integrated with ML
modeling, metric learning and categorization leads to a nontrivial mechanism
resulting in generation of patterns of categorical variables which can be
interpreted as a heuristic charting. The method provides visual information in
the form of non-convex clusters or separated regions. Furthermore, the ability
to categorize the surfaces into back and front parts of the analyzed 3D data
objects has been attained through self-organized structuring without
supervision.
| no_new_dataset | 0.945147 |
1310.8544 | Johannes Albrecht | J. Albrecht, V. V. Gligorov, G. Raven, S. Tolk | Performance of the LHCb High Level Trigger in 2012 | Proceedings for the 20th International Conference on Computing in
High Energy and Nuclear Physics (CHEP) | J. Phys.: Conf. Ser. 513 (2014) 012001 | 10.1088/1742-6596/513/1/012001 | null | hep-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The trigger system of the LHCb experiment is discussed in this paper and its
performance is evaluated on a dataset recorded during the 2012 run of the LHC.
The main purpose of the LHCb trigger system is to separate heavy flavour
signals from the light quark background. The trigger reduces the roughly 11MHz
of bunch-bunch crossings with inelastic collisions to a rate of 5kHz, which is
written to storage.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2013 15:19:38 GMT"
}
] | 2014-06-13T00:00:00 | [
[
"Albrecht",
"J.",
""
],
[
"Gligorov",
"V. V.",
""
],
[
"Raven",
"G.",
""
],
[
"Tolk",
"S.",
""
]
] | TITLE: Performance of the LHCb High Level Trigger in 2012
ABSTRACT: The trigger system of the LHCb experiment is discussed in this paper and its
performance is evaluated on a dataset recorded during the 2012 run of the LHC.
The main purpose of the LHCb trigger system is to separate heavy flavour
signals from the light quark background. The trigger reduces the roughly 11MHz
of bunch-bunch crossings with inelastic collisions to a rate of 5kHz, which is
written to storage.
| no_new_dataset | 0.9462 |
1406.2375 | Xiaochen Lian | Wenhao Lu, Xiaochen Lian and Alan Yuille | Parsing Semantic Parts of Cars Using Graphical Models and Segment
Appearance Consistency | 12 pages, CBMM memo | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of semantic part parsing (segmentation) of
cars, i.e. assigning every pixel within the car to one of the parts (e.g. body,
window, lights, license plates and wheels). We formulate this as a landmark
identification problem, where a set of landmarks specifies the boundaries of
the parts. A novel mixture of graphical models is proposed, which dynamically
couples the landmarks to a hierarchy of segments. When modeling pairwise
relation between landmarks, this coupling enables our model to exploit the
local image contents in addition to spatial deformation, an aspect that most
existing graphical models ignore. In particular, our model enforces appearance
consistency between segments within the same part. Parsing the car, including
finding the optimal coupling between landmarks and segments in the hierarchy,
is performed by dynamic programming. We evaluate our method on a subset of
PASCAL VOC 2010 car images and on the car subset of 3D Object Category dataset
(CAR3D). We show good results and, in particular, quantify the effectiveness of
using the segment appearance consistency in terms of accuracy of part
localization and segmentation.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 22:16:57 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jun 2014 23:39:41 GMT"
}
] | 2014-06-13T00:00:00 | [
[
"Lu",
"Wenhao",
""
],
[
"Lian",
"Xiaochen",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Parsing Semantic Parts of Cars Using Graphical Models and Segment
Appearance Consistency
ABSTRACT: This paper addresses the problem of semantic part parsing (segmentation) of
cars, i.e. assigning every pixel within the car to one of the parts (e.g. body,
window, lights, license plates and wheels). We formulate this as a landmark
identification problem, where a set of landmarks specifies the boundaries of
the parts. A novel mixture of graphical models is proposed, which dynamically
couples the landmarks to a hierarchy of segments. When modeling pairwise
relation between landmarks, this coupling enables our model to exploit the
local image contents in addition to spatial deformation, an aspect that most
existing graphical models ignore. In particular, our model enforces appearance
consistency between segments within the same part. Parsing the car, including
finding the optimal coupling between landmarks and segments in the hierarchy,
is performed by dynamic programming. We evaluate our method on a subset of
PASCAL VOC 2010 car images and on the car subset of 3D Object Category dataset
(CAR3D). We show good results and, in particular, quantify the effectiveness of
using the segment appearance consistency in terms of accuracy of part
localization and segmentation.
| no_new_dataset | 0.948585 |
1406.2807 | Yin Li | Yin Li, Xiaodi Hou, Christof Koch, James M. Rehg, Alan L. Yuille | The Secrets of Salient Object Segmentation | 15 pages, 8 figures. Conference version was accepted by CVPR 2014 | null | null | CBMM Memmo #14 | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper we provide an extensive evaluation of fixation prediction and
salient object segmentation algorithms as well as statistics of major datasets.
Our analysis identifies serious design flaws of existing salient object
benchmarks, called the dataset design bias, by overemphasizing the
stereotypical concepts of saliency. The dataset design bias not only
creates the discomforting disconnection between fixations and salient object
segmentation, but also misleads algorithm design. Based on our analysis,
we propose a new high quality dataset that offers both fixation and salient
object segmentation ground-truth. With fixations and salient object being
presented simultaneously, we are able to bridge the gap between fixations and
salient objects, and propose a novel method for salient object segmentation.
Finally, we report significant benchmark progress on three existing datasets for
segmenting salient objects.
| [
{
"version": "v1",
"created": "Wed, 11 Jun 2014 07:46:03 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jun 2014 17:35:08 GMT"
}
] | 2014-06-13T00:00:00 | [
[
"Li",
"Yin",
""
],
[
"Hou",
"Xiaodi",
""
],
[
"Koch",
"Christof",
""
],
[
"Rehg",
"James M.",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: The Secrets of Salient Object Segmentation
ABSTRACT: In this paper we provide an extensive evaluation of fixation prediction and
salient object segmentation algorithms as well as statistics of major datasets.
Our analysis identifies serious design flaws of existing salient object
benchmarks, called the dataset design bias, by overemphasizing the
stereotypical concepts of saliency. The dataset design bias not only
creates the discomforting disconnection between fixations and salient object
segmentation, but also misleads algorithm design. Based on our analysis,
we propose a new high quality dataset that offers both fixation and salient
object segmentation ground-truth. With fixations and salient object being
presented simultaneously, we are able to bridge the gap between fixations and
salient objects, and propose a novel method for salient object segmentation.
Finally, we report significant benchmark progress on three existing datasets for
segmenting salient objects.
| new_dataset | 0.955527 |
1406.2732 | George Papandreou | George Papandreou | Deep Epitomic Convolutional Neural Networks | 9 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks have recently proven extremely competitive
in challenging image recognition tasks. This paper proposes the epitomic
convolution as a new building block for deep neural networks. An epitomic
convolution layer replaces a pair of consecutive convolution and max-pooling
layers found in standard deep convolutional neural networks. The main version
of the proposed model uses mini-epitomes in place of filters and computes
responses invariant to small translations by epitomic search instead of
max-pooling over image positions. The topographic version of the proposed model
uses large epitomes to learn filter maps organized in translational
topographies. We show that error back-propagation can successfully learn
multiple epitomic layers in a supervised fashion. The effectiveness of the
proposed method is assessed in image classification tasks on standard
benchmarks. Our experiments on Imagenet indicate improved recognition
performance compared to standard convolutional neural networks of similar
architecture. Our models pre-trained on Imagenet perform excellently on
Caltech-101. We also obtain competitive image classification results on the
small-image MNIST and CIFAR-10 datasets.
| [
{
"version": "v1",
"created": "Tue, 10 Jun 2014 22:07:01 GMT"
}
] | 2014-06-12T00:00:00 | [
[
"Papandreou",
"George",
""
]
] | TITLE: Deep Epitomic Convolutional Neural Networks
ABSTRACT: Deep convolutional neural networks have recently proven extremely competitive
in challenging image recognition tasks. This paper proposes the epitomic
convolution as a new building block for deep neural networks. An epitomic
convolution layer replaces a pair of consecutive convolution and max-pooling
layers found in standard deep convolutional neural networks. The main version
of the proposed model uses mini-epitomes in place of filters and computes
responses invariant to small translations by epitomic search instead of
max-pooling over image positions. The topographic version of the proposed model
uses large epitomes to learn filter maps organized in translational
topographies. We show that error back-propagation can successfully learn
multiple epitomic layers in a supervised fashion. The effectiveness of the
proposed method is assessed in image classification tasks on standard
benchmarks. Our experiments on Imagenet indicate improved recognition
performance compared to standard convolutional neural networks of similar
architecture. Our models pre-trained on Imagenet perform excellently on
Caltech-101. We also obtain competitive image classification results on the
small-image MNIST and CIFAR-10 datasets.
| no_new_dataset | 0.951953 |
1406.1833 | Kenneth Stanley | Paul A. Szerlip, Gregory Morse, Justin K. Pugh, and Kenneth O. Stanley | Unsupervised Feature Learning through Divergent Discriminative Feature
Accumulation | Corrected citation formatting | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unlike unsupervised approaches such as autoencoders that learn to reconstruct
their inputs, this paper introduces an alternative approach to unsupervised
feature learning called divergent discriminative feature accumulation (DDFA)
that instead continually accumulates features that make novel discriminations
among the training set. Thus DDFA features are inherently discriminative from
the start even though they are trained without knowledge of the ultimate
classification problem. Interestingly, DDFA also continues to add new features
indefinitely (so it does not depend on a hidden layer size), is not based on
minimizing error, and is inherently divergent instead of convergent, thereby
providing a unique direction of research for unsupervised feature learning. In
this paper the quality of its learned features is demonstrated on the MNIST
dataset, where its performance confirms that indeed DDFA is a viable technique
for learning useful features.
| [
{
"version": "v1",
"created": "Fri, 6 Jun 2014 23:45:03 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jun 2014 03:37:45 GMT"
}
] | 2014-06-11T00:00:00 | [
[
"Szerlip",
"Paul A.",
""
],
[
"Morse",
"Gregory",
""
],
[
"Pugh",
"Justin K.",
""
],
[
"Stanley",
"Kenneth O.",
""
]
] | TITLE: Unsupervised Feature Learning through Divergent Discriminative Feature
Accumulation
ABSTRACT: Unlike unsupervised approaches such as autoencoders that learn to reconstruct
their inputs, this paper introduces an alternative approach to unsupervised
feature learning called divergent discriminative feature accumulation (DDFA)
that instead continually accumulates features that make novel discriminations
among the training set. Thus DDFA features are inherently discriminative from
the start even though they are trained without knowledge of the ultimate
classification problem. Interestingly, DDFA also continues to add new features
indefinitely (so it does not depend on a hidden layer size), is not based on
minimizing error, and is inherently divergent instead of convergent, thereby
providing a unique direction of research for unsupervised feature learning. In
this paper the quality of its learned features is demonstrated on the MNIST
dataset, where its performance confirms that indeed DDFA is a viable technique
for learning useful features.
| no_new_dataset | 0.946001 |
1406.2392 | Ryan Compton | Ryan Compton, Matthew S. Keegan, Jiejun Xu | Inferring the geographic focus of online documents from social media
sharing patterns | 6 pages, 10 figures, Computational Approaches to Social Modeling
(ChASM) Workshop, WebSci 2014, Bloomington, Indiana-June 24-26 2014 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determining the geographic focus of digital media is an essential first step
for modern geographic information retrieval. However, publicly-visible location
annotations are remarkably sparse in online data. In this work, we demonstrate
a method which infers the geographic focus of an online document by examining
the locations of Twitter users who share links to the document.
We apply our geotagging technique to multiple datasets built from different
content: manually-annotated news articles, GDELT, YouTube, Flickr, Twitter, and
Tumblr.
| [
{
"version": "v1",
"created": "Tue, 10 Jun 2014 00:34:55 GMT"
}
] | 2014-06-11T00:00:00 | [
[
"Compton",
"Ryan",
""
],
[
"Keegan",
"Matthew S.",
""
],
[
"Xu",
"Jiejun",
""
]
] | TITLE: Inferring the geographic focus of online documents from social media
sharing patterns
ABSTRACT: Determining the geographic focus of digital media is an essential first step
for modern geographic information retrieval. However, publicly-visible location
annotations are remarkably sparse in online data. In this work, we demonstrate
a method which infers the geographic focus of an online document by examining
the locations of Twitter users who share links to the document.
We apply our geotagging technique to multiple datasets built from different
content: manually-annotated news articles, GDELT, YouTube, Flickr, Twitter, and
Tumblr.
| no_new_dataset | 0.947721 |
1312.4564 | Peilin Zhao | Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li | Adaptive Stochastic Alternating Direction Method of Multipliers | 13 pages | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Alternating Direction Method of Multipliers (ADMM) has been studied for
years. The traditional ADMM algorithm needs to compute, at each iteration, an
(empirical) expected loss function on all training examples, resulting in a
computational complexity proportional to the number of training examples. To
reduce the time complexity, stochastic ADMM algorithms were proposed to replace
the expected function with a random loss function associated with one uniformly
drawn example plus a Bregman divergence. The Bregman divergence, however, is
derived from a simple second order proximal function, the half squared norm,
which could be a suboptimal choice.
In this paper, we present a new family of stochastic ADMM algorithms with
optimal second order proximal functions, which produce a new family of adaptive
subgradient methods. We theoretically prove that their regret bounds are as
good as the bounds which could be achieved by the best proximal function that
can be chosen in hindsight. Encouraging empirical results on a variety of
real-world datasets confirm the effectiveness and efficiency of the proposed
algorithms.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 21:22:46 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Dec 2013 01:59:05 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jun 2014 07:03:48 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Jun 2014 09:31:13 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Zhao",
"Peilin",
""
],
[
"Yang",
"Jinwei",
""
],
[
"Zhang",
"Tong",
""
],
[
"Li",
"Ping",
""
]
] | TITLE: Adaptive Stochastic Alternating Direction Method of Multipliers
ABSTRACT: The Alternating Direction Method of Multipliers (ADMM) has been studied for
years. The traditional ADMM algorithm needs to compute, at each iteration, an
(empirical) expected loss function on all training examples, resulting in a
computational complexity proportional to the number of training examples. To
reduce the time complexity, stochastic ADMM algorithms were proposed to replace
the expected function with a random loss function associated with one uniformly
drawn example plus a Bregman divergence. The Bregman divergence, however, is
derived from a simple second order proximal function, the half squared norm,
which could be a suboptimal choice.
In this paper, we present a new family of stochastic ADMM algorithms with
optimal second order proximal functions, which produce a new family of adaptive
subgradient methods. We theoretically prove that their regret bounds are as
good as the bounds which could be achieved by the best proximal function that
can be chosen in hindsight. Encouraging empirical results on a variety of
real-world datasets confirm the effectiveness and efficiency of the proposed
algorithms.
| no_new_dataset | 0.942981 |
1406.1976 | Wenlian Lu | Y. Yao, W. L. Lu, B. Xu, C. B. Li, C. P. Lin, D. Waxman, J. F. Feng | The Increase of the Functional Entropy of the Human Brain with Age | 8 pages, 5 figures | Scientific Reports, 3:2853, 2013 | 10.1038/srep02853 | null | q-bio.QM physics.med-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use entropy to characterize intrinsic ageing properties of the human
brain. Analysis of fMRI data from a large dataset of individuals, using resting
state BOLD signals, demonstrated that a functional entropy associated with
brain activity increases with age. During an average lifespan, the entropy,
which was calculated from a population of individuals, increased by
approximately 0.1 bits, due to correlations in BOLD activity becoming more
widely distributed. We attribute this to the number of excitatory neurons and
the excitatory conductance decreasing with age. Incorporating these properties
into a computational model leads to quantitatively similar results to the fMRI
data. Our dataset involved males and females and we found significant
differences between them. The entropy of males at birth was lower than that of
females. However, the entropies of the two sexes increase at different rates,
and intersect at approximately 50 years; after this age, males have a larger
entropy.
| [
{
"version": "v1",
"created": "Sun, 8 Jun 2014 12:03:11 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Yao",
"Y.",
""
],
[
"Lu",
"W. L.",
""
],
[
"Xu",
"B.",
""
],
[
"Li",
"C. B.",
""
],
[
"Lin",
"C. P.",
""
],
[
"Waxman",
"D.",
""
],
[
"Feng",
"J. F.",
""
]
] | TITLE: The Increase of the Functional Entropy of the Human Brain with Age
ABSTRACT: We use entropy to characterize intrinsic ageing properties of the human
brain. Analysis of fMRI data from a large dataset of individuals, using resting
state BOLD signals, demonstrated that a functional entropy associated with
brain activity increases with age. During an average lifespan, the entropy,
which was calculated from a population of individuals, increased by
approximately 0.1 bits, due to correlations in BOLD activity becoming more
widely distributed. We attribute this to the number of excitatory neurons and
the excitatory conductance decreasing with age. Incorporating these properties
into a computational model leads to quantitatively similar results to the fMRI
data. Our dataset involved males and females and we found significant
differences between them. The entropy of males at birth was lower than that of
females. However, the entropies of the two sexes increase at different rates,
and intersect at approximately 50 years; after this age, males have a larger
entropy.
| no_new_dataset | 0.599339 |
1406.2031 | Xianjie Chen | Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel
Urtasun, Alan Yuille | Detect What You Can: Detecting and Representing Objects using Holistic
Models and Body Parts | CBMM memo | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting objects becomes difficult when we need to deal with large shape
deformation, occlusion and low resolution. We propose a novel approach to i)
handle large deformations and partial occlusions in animals (as examples of
highly deformable objects), ii) describe them in terms of body parts, and iii)
detect them when their body parts are hard to detect (e.g., animals depicted at
low resolution). We represent the holistic object and body parts separately and
use a fully connected model to arrange templates for the holistic object and
body parts. Our model automatically decouples the holistic object or body parts
from the model when they are hard to detect. This enables us to represent a
large number of holistic object and body part combinations to better deal with
different "detectability" patterns caused by deformations, occlusion and/or low
resolution.
We apply our method to the six animal categories in the PASCAL VOC dataset
and show that our method significantly improves state-of-the-art (by 4.1% AP)
and provides a richer representation for objects. During training we use
annotations for body parts (e.g., head, torso, etc.), making use of a new
dataset of fully annotated object parts for PASCAL VOC 2010, which provides a
mask for each part.
| [
{
"version": "v1",
"created": "Sun, 8 Jun 2014 21:44:18 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Chen",
"Xianjie",
""
],
[
"Mottaghi",
"Roozbeh",
""
],
[
"Liu",
"Xiaobai",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Urtasun",
"Raquel",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Detect What You Can: Detecting and Representing Objects using Holistic
Models and Body Parts
ABSTRACT: Detecting objects becomes difficult when we need to deal with large shape
deformation, occlusion and low resolution. We propose a novel approach to i)
handle large deformations and partial occlusions in animals (as examples of
highly deformable objects), ii) describe them in terms of body parts, and iii)
detect them when their body parts are hard to detect (e.g., animals depicted at
low resolution). We represent the holistic object and body parts separately and
use a fully connected model to arrange templates for the holistic object and
body parts. Our model automatically decouples the holistic object or body parts
from the model when they are hard to detect. This enables us to represent a
large number of holistic object and body part combinations to better deal with
different "detectability" patterns caused by deformations, occlusion and/or low
resolution.
We apply our method to the six animal categories in the PASCAL VOC dataset
and show that our method significantly improves state-of-the-art (by 4.1% AP)
and provides a richer representation for objects. During training we use
annotations for body parts (e.g., head, torso, etc.), making use of a new
dataset of fully annotated object parts for PASCAL VOC 2010, which provides a
mask for each part.
| new_dataset | 0.863161 |
1406.2049 | Xue Li | Xue Li, Yu-Jin Zhang, Bin Shen, Bao-Di Liu | Image Tag Completion by Low-rank Factorization with Dual Reconstruction
Structure Preserved | null | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel tag completion algorithm is proposed in this paper, which is designed
with the following features: 1) Low-rank and error sparsity: the incomplete
initial tagging matrix D is decomposed into the complete tagging matrix A and a
sparse error matrix E. However, instead of minimizing its nuclear norm, A is
further factorized into a basis matrix U and a sparse coefficient matrix V,
i.e. D=UV+E. This low-rank formulation encapsulating sparse coding enables our
algorithm to recover latent structures from noisy initial data and avoid
performing too much denoising; 2) Local reconstruction structure consistency:
to steer the completion of D, the local linear reconstruction structures in
feature space and tag space are obtained and preserved by U and V respectively.
Such a scheme could alleviate the negative effect of distances measured by
low-level features and incomplete tags. Thus, we can seek a balance between
exploiting as much information as possible and not being misled into suboptimal performance.
Experiments conducted on Corel5k dataset and the newly issued Flickr30Concepts
dataset demonstrate the effectiveness and efficiency of the proposed method.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 01:22:43 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Li",
"Xue",
""
],
[
"Zhang",
"Yu-Jin",
""
],
[
"Shen",
"Bin",
""
],
[
"Liu",
"Bao-Di",
""
]
] | TITLE: Image Tag Completion by Low-rank Factorization with Dual Reconstruction
Structure Preserved
ABSTRACT: A novel tag completion algorithm is proposed in this paper, which is designed
with the following features: 1) Low-rank and error sparsity: the incomplete
initial tagging matrix D is decomposed into the complete tagging matrix A and a
sparse error matrix E. However, instead of minimizing its nuclear norm, A is
further factorized into a basis matrix U and a sparse coefficient matrix V,
i.e. D=UV+E. This low-rank formulation encapsulating sparse coding enables our
algorithm to recover latent structures from noisy initial data and avoid
performing too much denoising; 2) Local reconstruction structure consistency:
to steer the completion of D, the local linear reconstruction structures in
feature space and tag space are obtained and preserved by U and V respectively.
Such a scheme could alleviate the negative effect of distances measured by
low-level features and incomplete tags. Thus, we can seek a balance between
exploiting as much information as possible and not being misled into suboptimal performance.
Experiments conducted on Corel5k dataset and the newly issued Flickr30Concepts
dataset demonstrate the effectiveness and efficiency of the proposed method.
| no_new_dataset | 0.943191 |
1406.2099 | Zahid Halim | Tufail Muhammad, Zahid Halim and Majid Ali Khan | ClassSpy: Java Object Pattern Visualization Tool | ICOMS-2013. International Conference on Modeling and Simulation,
25-27 November, Islamabad | null | null | null | cs.PL cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern Java programs consist of a large number of classes as well as a vast
number of objects instantiated during program execution. Software developers
are always keen to know the number of objects created for each class. This
information is helpful for a developer in understanding the packages/classes of
a program and optimizing their code. However, understanding such a vast amount
of information is not a trivial task. Visualization helps to depict this
information on a single screen and to comprehend it efficiently. This paper
presents a visualization approach that depicts information about all the
objects instantiated during the program execution. The proposed technique is
more space efficient and scalable to handle vast datasets, and at the same time
helpful for identifying the key program components. This easy-to-use interface
provides the user an environment to glimpse all the objects on a single screen.
The proposed approach allows sorting objects at class, thread and method
levels. Effectiveness and usability of the proposed approach is shown through
case studies.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 07:44:56 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Muhammad",
"Tufail",
""
],
[
"Halim",
"Zahid",
""
],
[
"Khan",
"Majid Ali",
""
]
] | TITLE: ClassSpy: Java Object Pattern Visualization Tool
ABSTRACT: Modern Java programs consist of a large number of classes as well as a vast
number of objects instantiated during program execution. Software developers
are always keen to know the number of objects created for each class. This
information is helpful for a developer in understanding the packages/classes of
a program and optimizing their code. However, understanding such a vast amount
of information is not a trivial task. Visualization helps to depict this
information on a single screen and to comprehend it efficiently. This paper
presents a visualization approach that depicts information about all the
objects instantiated during the program execution. The proposed technique is
more space efficient and scalable to handle vast datasets, and at the same time
helpful for identifying the key program components. This easy-to-use interface
provides the user an environment to glimpse all the objects on a single screen.
The proposed approach allows sorting objects at class, thread and method
levels. Effectiveness and usability of the proposed approach is shown through
case studies.
| no_new_dataset | 0.941061 |
1406.2282 | Chunyu Wang | Chunyu Wang, Yizhou Wang, Zhouchen Lin, Alan L. Yuille, Wen Gao | Robust Estimation of 3D Human Poses from a Single Image | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human pose estimation is a key step to action recognition. We propose a
method of estimating 3D human poses from a single image, which works in
conjunction with an existing 2D pose/joint detector. 3D pose estimation is
challenging because multiple 3D poses may correspond to the same 2D pose after
projection due to the lack of depth information. Moreover, current 2D pose
estimators are usually inaccurate which may cause errors in the 3D estimation.
We address the challenges in three ways: (i) We represent a 3D pose as a linear
combination of a sparse set of bases learned from 3D human skeletons. (ii) We
enforce limb length constraints to eliminate anthropomorphically implausible
skeletons. (iii) We estimate a 3D pose by minimizing the $L_1$-norm error
between the projection of the 3D pose and the corresponding 2D detection. The
$L_1$-norm loss term is robust to inaccurate 2D joint estimations. We use the
alternating direction method (ADM) to solve the optimization problem
efficiently. Our approach outperforms the state of the art on three benchmark
datasets.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 18:55:31 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Wang",
"Chunyu",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Lin",
"Zhouchen",
""
],
[
"Yuille",
"Alan L.",
""
],
[
"Gao",
"Wen",
""
]
] | TITLE: Robust Estimation of 3D Human Poses from a Single Image
ABSTRACT: Human pose estimation is a key step to action recognition. We propose a
method of estimating 3D human poses from a single image, which works in
conjunction with an existing 2D pose/joint detector. 3D pose estimation is
challenging because multiple 3D poses may correspond to the same 2D pose after
projection due to the lack of depth information. Moreover, current 2D pose
estimators are usually inaccurate which may cause errors in the 3D estimation.
We address the challenges in three ways: (i) We represent a 3D pose as a linear
combination of a sparse set of bases learned from 3D human skeletons. (ii) We
enforce limb length constraints to eliminate anthropomorphically implausible
skeletons. (iii) We estimate a 3D pose by minimizing the $L_1$-norm error
between the projection of the 3D pose and the corresponding 2D detection. The
$L_1$-norm loss term is robust to inaccurate 2D joint estimations. We use the
alternating direction method (ADM) to solve the optimization problem
efficiently. Our approach outperforms the state of the art on three benchmark
datasets.
| no_new_dataset | 0.944125 |
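Step (iii) of the abstract above minimizes an L1-norm error between the projected 3D pose and the 2D detections. Ignoring the sparse-basis and limb-length constraints the paper also imposes, that building block reduces to a generic L1 linear fit, which can be written as a linear program; the sketch below illustrates only that reduction and is not the paper's ADM solver.

import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Minimize ||A x - b||_1 via the standard LP reformulation.

    Introduce slack variables t >= |A x - b| elementwise and minimize sum(t).
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # variables: [x (n), t (m)]
    A_ub = np.block([[ A, -np.eye(m)],              #  A x - t <=  b
                     [-A, -np.eye(m)]])             # -A x - t <= -b
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m,
                  method="highs")
    return res.x[:n]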
1406.2283 | David Eigen | David Eigen and Christian Puhrsch and Rob Fergus | Depth Map Prediction from a Single Image using a Multi-Scale Deep
Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting depth is an essential component in understanding the 3D geometry
of a scene. While for stereo images local correspondence suffices for
estimation, finding depth relations from a single image is less
straightforward, requiring integration of both global and local information
from various cues. Moreover, the task is inherently ambiguous, with a large
source of uncertainty coming from the overall scale. In this paper, we present
a new method that addresses this task by employing two deep network stacks: one
that makes a coarse global prediction based on the entire image, and another
that refines this prediction locally. We also apply a scale-invariant error to
help measure depth relations rather than scale. By leveraging the raw datasets
as large sources of training data, our method achieves state-of-the-art results
on both NYU Depth and KITTI, and matches detailed depth boundaries without the
need for superpixelation.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 19:01:18 GMT"
}
] | 2014-06-10T00:00:00 | [
[
"Eigen",
"David",
""
],
[
"Puhrsch",
"Christian",
""
],
[
"Fergus",
"Rob",
""
]
] | TITLE: Depth Map Prediction from a Single Image using a Multi-Scale Deep
Network
ABSTRACT: Predicting depth is an essential component in understanding the 3D geometry
of a scene. While for stereo images local correspondence suffices for
estimation, finding depth relations from a single image is less
straightforward, requiring integration of both global and local information
from various cues. Moreover, the task is inherently ambiguous, with a large
source of uncertainty coming from the overall scale. In this paper, we present
a new method that addresses this task by employing two deep network stacks: one
that makes a coarse global prediction based on the entire image, and another
that refines this prediction locally. We also apply a scale-invariant error to
help measure depth relations rather than scale. By leveraging the raw datasets
as large sources of training data, our method achieves state-of-the-art results
on both NYU Depth and KITTI, and matches detailed depth boundaries without the
need for superpixelation.
| no_new_dataset | 0.953057 |
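The scale-invariant error mentioned in the abstract above is commonly written in log-depth space; the sketch below implements one common form of it. The weighting constant lam = 0.5 and the use of natural logarithms are assumptions of this sketch and may differ from the paper's exact definition.

import numpy as np

def scale_invariant_log_error(pred, target, lam=0.5, eps=1e-8):
    """One common form of the scale-invariant depth error.

    d_i = log(pred_i) - log(target_i)
    D   = mean(d_i ** 2) - lam * mean(d_i) ** 2
    A global rescaling of pred shifts every d_i by a constant, which the
    second term discounts (fully when lam = 1).
    """
    d = np.log(pred + eps) - np.log(target + eps)
    return float(np.mean(d ** 2) - lam * np.mean(d) ** 2)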
1401.8257 | Shuai Li | Claudio Gentile, Shuai Li, Giovanni Zappella | Online Clustering of Bandits | In E. Xing and T. Jebara (Eds.), Proceedings of 31st International
Conference on Machine Learning, Journal of Machine Learning Research Workshop
and Conference Proceedings, Vol.32 (JMLR W&CP-32), Beijing, China, Jun.
21-26, 2014 (ICML 2014), Submitted by Shuai Li
(https://sites.google.com/site/shuailidotsli) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel algorithmic approach to content recommendation based on
adaptive clustering of exploration-exploitation ("bandit") strategies. We
provide a sharp regret analysis of this algorithm in a standard stochastic
noise setting, demonstrate its scalability properties, and prove its
effectiveness on a number of artificial and real-world datasets. Our
experiments show a significant increase in prediction performance over
state-of-the-art methods for bandit problems.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2014 18:49:42 GMT"
},
{
"version": "v2",
"created": "Tue, 13 May 2014 07:13:06 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Jun 2014 13:59:04 GMT"
}
] | 2014-06-09T00:00:00 | [
[
"Gentile",
"Claudio",
""
],
[
"Li",
"Shuai",
""
],
[
"Zappella",
"Giovanni",
""
]
] | TITLE: Online Clustering of Bandits
ABSTRACT: We introduce a novel algorithmic approach to content recommendation based on
adaptive clustering of exploration-exploitation ("bandit") strategies. We
provide a sharp regret analysis of this algorithm in a standard stochastic
noise setting, demonstrate its scalability properties, and prove its
effectiveness on a number of artificial and real-world datasets. Our
experiments show a significant increase in prediction performance over
state-of-the-art methods for bandit problems.
| no_new_dataset | 0.941654 |
1406.0588 | Shasha Bu | Shasha Bu and Yu-Jin Zhang | Image retrieval with hierarchical matching pursuit | 5 pages, 6 figures, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel representation of images for image retrieval is introduced in this
paper, by using a new type of feature with remarkable discriminative power.
Despite the multi-scale nature of objects, most existing models perform feature
extraction on a fixed scale, which will inevitably degrade the performance of
the whole system. Motivated by this, we introduce a hierarchical sparse coding
architecture for image retrieval to explore multi-scale cues. Sparse codes
extracted on lower layers are transmitted to higher layers recursively. With
this mechanism, cues from different scales are fused. Experiments on the
Holidays dataset show that the proposed method achieves an excellent retrieval
performance with a small code length.
| [
{
"version": "v1",
"created": "Tue, 3 Jun 2014 06:32:24 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jun 2014 02:23:21 GMT"
}
] | 2014-06-06T00:00:00 | [
[
"Bu",
"Shasha",
""
],
[
"Zhang",
"Yu-Jin",
""
]
] | TITLE: Image retrieval with hierarchical matching pursuit
ABSTRACT: A novel representation of images for image retrieval is introduced in this
paper, by using a new type of feature with remarkable discriminative power.
Despite the multi-scale nature of objects, most existing models perform feature
extraction on a fixed scale, which will inevitably degrade the performance of
the whole system. Motivated by this, we introduce a hierarchical sparse coding
architecture for image retrieval to explore multi-scale cues. Sparse codes
extracted on lower layers are transmitted to higher layers recursively. With
this mechanism, cues from different scales are fused. Experiments on the
Holidays dataset show that the proposed method achieves an excellent retrieval
performance with a small code length.
| no_new_dataset | 0.951233 |
1406.1167 | Xu-Cheng Yin | Xu-Cheng Yin and Chun Yang and Hong-Wei Hao | Learning to Diversify via Weighted Kernels for Classifier Ensemble | Submitted to IEEE Trans. Pattern Analysis and Machine Intelligence
(TPAMI) | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifier ensemble generally should combine diverse component classifiers.
However, it is difficult to give a definitive connection between diversity
measure and ensemble accuracy. Given a list of available component classifiers,
how to adaptively and diversely ensemble classifiers becomes a big challenge in
the literature. In this paper, we argue that diversity, not direct diversity on
samples but adaptive diversity with data, is highly correlated to ensemble
accuracy, and we propose a novel technology for classifier ensemble, learning
to diversify, which learns to adaptively combine classifiers by considering
both accuracy and diversity. Specifically, our approach, Learning TO Diversify
via Weighted Kernels (L2DWK), performs classifier combination by optimizing a
direct but simple criterion: maximizing ensemble accuracy and adaptive
diversity simultaneously by minimizing a convex loss function. Given a measure
formulation, the diversity is calculated with weighted kernels (i.e., the
diversity is measured on the component classifiers' outputs which are kernelled
and weighted), and the kernel weights are automatically learned. We minimize
this loss function by estimating the kernel weights in conjunction with the
classifier weights, and propose a self-training algorithm for conducting this
convex optimization procedure iteratively. Extensive experiments on a variety
of 32 UCI classification benchmark datasets show that the proposed approach
consistently outperforms state-of-the-art ensembles such as Bagging, AdaBoost,
Random Forests, Gasen, Regularized Selective Ensemble, and Ensemble Pruning via
Semi-Definite Programming.
| [
{
"version": "v1",
"created": "Wed, 4 Jun 2014 09:16:42 GMT"
}
] | 2014-06-06T00:00:00 | [
[
"Yin",
"Xu-Cheng",
""
],
[
"Yang",
"Chun",
""
],
[
"Hao",
"Hong-Wei",
""
]
] | TITLE: Learning to Diversify via Weighted Kernels for Classifier Ensemble
ABSTRACT: Classifier ensemble generally should combine diverse component classifiers.
However, it is difficult to give a definitive connection between diversity
measure and ensemble accuracy. Given a list of available component classifiers,
how to adaptively and diversely ensemble classifiers becomes a big challenge in
the literature. In this paper, we argue that diversity, not direct diversity on
samples but adaptive diversity with data, is highly correlated to ensemble
accuracy, and we propose a novel technology for classifier ensemble, learning
to diversify, which learns to adaptively combine classifiers by considering
both accuracy and diversity. Specifically, our approach, Learning TO Diversify
via Weighted Kernels (L2DWK), performs classifier combination by optimizing a
direct but simple criterion: maximizing ensemble accuracy and adaptive
diversity simultaneously by minimizing a convex loss function. Given a measure
formulation, the diversity is calculated with weighted kernels (i.e., the
diversity is measured on the component classifiers' outputs which are kernelled
and weighted), and the kernel weights are automatically learned. We minimize
this loss function by estimating the kernel weights in conjunction with the
classifier weights, and propose a self-training algorithm for conducting this
convex optimization procedure iteratively. Extensive experiments on a variety
of 32 UCI classification benchmark datasets show that the proposed approach
consistently outperforms state-of-the-art ensembles such as Bagging, AdaBoost,
Random Forests, Gasen, Regularized Selective Ensemble, and Ensemble Pruning via
Semi-Definite Programming.
| no_new_dataset | 0.949763 |
1207.6430 | Christoph Brune | Braxton Osting and Christoph Brune and Stanley J. Osher | Optimal Data Collection For Informative Rankings Expose Well-Connected
Graphs | 31 pages, 10 figures, 3 tables | null | null | UCLA CAM report 12-32 | stat.ML cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a graph where vertices represent alternatives and arcs represent
pairwise comparison data, the statistical ranking problem is to find a
potential function, defined on the vertices, such that the gradient of the
potential function agrees with the pairwise comparisons. Our goal in this paper
is to develop a method for collecting data for which the least squares
estimator for the ranking problem has maximal Fisher information. Our approach,
based on experimental design, is to view data collection as a bi-level
optimization problem where the inner problem is the ranking problem and the
outer problem is to identify data which maximizes the informativeness of the
ranking. Under certain assumptions, the data collection problem decouples,
reducing to a problem of finding multigraphs with large algebraic connectivity.
This reduction of the data collection problem to graph-theoretic questions is
one of the primary contributions of this work. As an application, we study the
Yahoo! Movie user rating dataset and demonstrate that the addition of a small
number of well-chosen pairwise comparisons can significantly increase the
Fisher informativeness of the ranking. As another application, we study the
2011-12 NCAA football schedule and propose schedules with the same number of
games which are significantly more informative. Using spectral clustering
methods to identify highly-connected communities within the division, we argue
that the NCAA could improve its notoriously poor rankings by simply scheduling
more out-of-conference games.
| [
{
"version": "v1",
"created": "Thu, 26 Jul 2012 23:14:34 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jun 2014 08:31:57 GMT"
}
] | 2014-06-05T00:00:00 | [
[
"Osting",
"Braxton",
""
],
[
"Brune",
"Christoph",
""
],
[
"Osher",
"Stanley J.",
""
]
] | TITLE: Optimal Data Collection For Informative Rankings Expose Well-Connected
Graphs
ABSTRACT: Given a graph where vertices represent alternatives and arcs represent
pairwise comparison data, the statistical ranking problem is to find a
potential function, defined on the vertices, such that the gradient of the
potential function agrees with the pairwise comparisons. Our goal in this paper
is to develop a method for collecting data for which the least squares
estimator for the ranking problem has maximal Fisher information. Our approach,
based on experimental design, is to view data collection as a bi-level
optimization problem where the inner problem is the ranking problem and the
outer problem is to identify data which maximizes the informativeness of the
ranking. Under certain assumptions, the data collection problem decouples,
reducing to a problem of finding multigraphs with large algebraic connectivity.
This reduction of the data collection problem to graph-theoretic questions is
one of the primary contributions of this work. As an application, we study the
Yahoo! Movie user rating dataset and demonstrate that the addition of a small
number of well-chosen pairwise comparisons can significantly increase the
Fisher informativeness of the ranking. As another application, we study the
2011-12 NCAA football schedule and propose schedules with the same number of
games which are significantly more informative. Using spectral clustering
methods to identify highly-connected communities within the division, we argue
that the NCAA could improve its notoriously poor rankings by simply scheduling
more out-of-conference games.
| no_new_dataset | 0.945551 |
1406.1061 | Vit Novacek | Vit Novacek | A Methodology for Empirical Analysis of LOD Datasets | A current working draft of the paper submitted to the ISWC'14
conference (track information available here:
http://iswc2014.semanticweb.org/call-replication-benchmark-data-software-papers) | null | null | null | cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CoCoE stands for Complexity, Coherence and Entropy, and presents an
extensible methodology for empirical analysis of Linked Open Data (i.e., RDF
graphs). CoCoE can offer answers to questions like: Is dataset A better than B
for knowledge discovery since it is more complex and informative?, Is dataset X
better than Y for simple value lookups due to its flatter structure?, etc. In
order to address such questions, we introduce a set of well-founded measures
based on complementary notions from distributional semantics, network analysis
and information theory. These measures are part of a specific implementation of
the CoCoE methodology that is available for download. Last but not least, we
illustrate CoCoE by its application to selected biomedical RDF datasets.
| [
{
"version": "v1",
"created": "Wed, 4 Jun 2014 14:45:43 GMT"
}
] | 2014-06-05T00:00:00 | [
[
"Novacek",
"Vit",
""
]
] | TITLE: A Methodology for Empirical Analysis of LOD Datasets
ABSTRACT: CoCoE stands for Complexity, Coherence and Entropy, and presents an
extensible methodology for empirical analysis of Linked Open Data (i.e., RDF
graphs). CoCoE can offer answers to questions like: Is dataset A better than B
for knowledge discovery since it is more complex and informative?, Is dataset X
better than Y for simple value lookups due to its flatter structure?, etc. In
order to address such questions, we introduce a set of well-founded measures
based on complementary notions from distributional semantics, network analysis
and information theory. These measures are part of a specific implementation of
the CoCoE methodology that is available for download. Last but not least, we
illustrate CoCoE by its application to selected biomedical RDF datasets.
| no_new_dataset | 0.945045 |
1406.1137 | Gang Wang | Gang Wang, Tianyi Wang, Bolun Wang, Divya Sambasivan, Zengbin Zhang,
Haitao Zheng, Ben Y. Zhao | Crowds on Wall Street: Extracting Value from Social Investing Platforms | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | For decades, the world of financial advisors has been dominated by large
investment banks such as Goldman Sachs. In recent years, user-contributed
investment services such as SeekingAlpha and StockTwits have grown to millions
of users. In this paper, we seek to understand the quality and impact of
content on social investment platforms, by empirically analyzing complete
datasets of SeekingAlpha articles (9 years) and StockTwits messages (4 years).
We develop sentiment analysis tools and correlate contributed content to the
historical performance of relevant stocks. While SeekingAlpha articles and
StockTwits messages provide minimal correlation to stock performance in
aggregate, a subset of authors contribute more valuable (predictive) content.
We show that these authors can be identified via either empirical methods or
user interactions, and investments using their analysis significantly
outperform broader markets. Finally, we conduct a user survey that sheds light
on users' views of SeekingAlpha content and stock manipulation.
| [
{
"version": "v1",
"created": "Wed, 4 Jun 2014 18:34:32 GMT"
}
] | 2014-06-05T00:00:00 | [
[
"Wang",
"Gang",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Wang",
"Bolun",
""
],
[
"Sambasivan",
"Divya",
""
],
[
"Zhang",
"Zengbin",
""
],
[
"Zheng",
"Haitao",
""
],
[
"Zhao",
"Ben Y.",
""
]
] | TITLE: Crowds on Wall Street: Extracting Value from Social Investing Platforms
ABSTRACT: For decades, the world of financial advisors has been dominated by large
investment banks such as Goldman Sachs. In recent years, user-contributed
investment services such as SeekingAlpha and StockTwits have grown to millions
of users. In this paper, we seek to understand the quality and impact of
content on social investment platforms, by empirically analyzing complete
datasets of SeekingAlpha articles (9 years) and StockTwits messages (4 years).
We develop sentiment analysis tools and correlate contributed content to the
historical performance of relevant stocks. While SeekingAlpha articles and
StockTwits messages provide minimal correlation to stock performance in
aggregate, a subset of authors contribute more valuable (predictive) content.
We show that these authors can be identified via either empirical methods or
user interactions, and investments using their analysis significantly
outperform broader markets. Finally, we conduct a user survey that sheds light
on users' views of SeekingAlpha content and stock manipulation.
| no_new_dataset | 0.949623 |
1406.0680 | Ziqiong Liu | Ziqiong Liu, Shengjin Wang, Liang Zheng, Qi Tian | Visual Reranking with Improved Image Graph | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an improved reranking method for the Bag-of-Words (BoW)
based image search. Built on [1], a directed image graph robust to outlier
distraction is proposed. In our approach, the relevance among images is encoded
in the image graph, based on which the initial rank list is refined. Moreover,
we show that the rank-level feature fusion can be adopted in this reranking
method as well. Taking advantage of the complementary nature of various
features, the reranking performance is further enhanced. Particularly, we
exploit the reranking method combining the BoW and color information.
Experiments on two benchmark datasets demonstrate that our method yields
significant improvements and the reranking results are competitive to the
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 3 Jun 2014 12:07:12 GMT"
}
] | 2014-06-04T00:00:00 | [
[
"Liu",
"Ziqiong",
""
],
[
"Wang",
"Shengjin",
""
],
[
"Zheng",
"Liang",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: Visual Reranking with Improved Image Graph
ABSTRACT: This paper introduces an improved reranking method for the Bag-of-Words (BoW)
based image search. Built on [1], a directed image graph robust to outlier
distraction is proposed. In our approach, the relevance among images is encoded
in the image graph, based on which the initial rank list is refined. Moreover,
we show that the rank-level feature fusion can be adopted in this reranking
method as well. Taking advantage of the complementary nature of various
features, the reranking performance is further enhanced. Particularly, we
exploit the reranking method combining the BoW and color information.
Experiments on two benchmark datasets demonstrate that our method yields
significant improvements and the reranking results are competitive to the
state-of-the-art methods.
| no_new_dataset | 0.949201 |
1406.0132 | Liang Zheng | Liang Zheng, Shengjin Wang, Fei He, Qi Tian | Seeing the Big Picture: Deep Embedding with Contextual Evidences | 10 pages, 13 figures, 7 tables, submitted to ACM Multimedia 2014 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | In the Bag-of-Words (BoW) model based image retrieval task, the precision of
visual matching plays a critical role in improving retrieval performance.
Conventionally, local cues of a keypoint are employed. However, such strategy
does not consider the contextual evidences of a keypoint, a problem which would
lead to the prevalence of false matches. To address this problem, this paper
defines "true match" as a pair of keypoints which are similar on three levels,
i.e., local, regional, and global. Then, a principled probabilistic framework
is established, which is capable of implicitly integrating discriminative cues
from all these feature levels.
Specifically, the Convolutional Neural Network (CNN) is employed to extract
features from regional and global patches, leading to the so-called "Deep
Embedding" framework. CNN has been shown to produce excellent performance on a
dozen computer vision tasks such as image classification and detection, but few
works have been done on BoW based image retrieval. In this paper, firstly we
show that proper pre-processing techniques are necessary for effective usage of
CNN feature. Then, in the attempt to fit it into our model, a novel indexing
structure called "Deep Indexing" is introduced, which dramatically reduces
memory usage.
Extensive experiments on three benchmark datasets demonstrate that, the
proposed Deep Embedding method greatly promotes the retrieval accuracy when CNN
feature is integrated. We show that our method is efficient in terms of both
memory and time cost, and compares favorably with the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 1 Jun 2014 05:04:28 GMT"
}
] | 2014-06-03T00:00:00 | [
[
"Zheng",
"Liang",
""
],
[
"Wang",
"Shengjin",
""
],
[
"He",
"Fei",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: Seeing the Big Picture: Deep Embedding with Contextual Evidences
ABSTRACT: In the Bag-of-Words (BoW) model based image retrieval task, the precision of
visual matching plays a critical role in improving retrieval performance.
Conventionally, local cues of a keypoint are employed. However, such strategy
does not consider the contextual evidences of a keypoint, a problem which would
lead to the prevalence of false matches. To address this problem, this paper
defines "true match" as a pair of keypoints which are similar on three levels,
i.e., local, regional, and global. Then, a principled probabilistic framework
is established, which is capable of implicitly integrating discriminative cues
from all these feature levels.
Specifically, the Convolutional Neural Network (CNN) is employed to extract
features from regional and global patches, leading to the so-called "Deep
Embedding" framework. CNN has been shown to produce excellent performance on a
dozen computer vision tasks such as image classification and detection, but few
works have been done on BoW based image retrieval. In this paper, firstly we
show that proper pre-processing techniques are necessary for effective usage of
CNN feature. Then, in the attempt to fit it into our model, a novel indexing
structure called "Deep Indexing" is introduced, which dramatically reduces
memory usage.
Extensive experiments on three benchmark datasets demonstrate that, the
proposed Deep Embedding method greatly promotes the retrieval accuracy when CNN
feature is integrated. We show that our method is efficient in terms of both
memory and time cost, and compares favorably with the state-of-the-art methods.
| no_new_dataset | 0.944791 |
1406.0304 | Markus Schneider | Markus Schneider and Fabio Ramos | Transductive Learning for Multi-Task Copula Processes | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of multi-task learning with copula process.
Multivariable prediction in spatial and spatial-temporal processes such as
natural resource estimation and pollution monitoring has typically been
addressed using techniques based on Gaussian processes and co-Kriging. While
the Gaussian prior assumption is convenient from analytical and computational
perspectives, nature is dominated by non-Gaussian likelihoods. Copula processes
are an elegant and flexible solution to handle various non-Gaussian likelihoods
by capturing the dependence structure of random variables with cumulative
distribution functions rather than their marginals. We show how multi-task
learning for copula processes can be used to improve multivariable prediction
for problems where the simple Gaussianity prior assumption does not hold. Then,
we present a transductive approximation for multi-task learning and derive
analytical expressions for the copula process model. The approach is evaluated
and compared to other techniques in one artificial dataset and two publicly
available datasets for natural resource estimation and concrete slump
prediction.
| [
{
"version": "v1",
"created": "Mon, 2 Jun 2014 09:22:49 GMT"
}
] | 2014-06-03T00:00:00 | [
[
"Schneider",
"Markus",
""
],
[
"Ramos",
"Fabio",
""
]
] | TITLE: Transductive Learning for Multi-Task Copula Processes
ABSTRACT: We tackle the problem of multi-task learning with copula process.
Multivariable prediction in spatial and spatial-temporal processes such as
natural resource estimation and pollution monitoring has typically been
addressed using techniques based on Gaussian processes and co-Kriging. While
the Gaussian prior assumption is convenient from analytical and computational
perspectives, nature is dominated by non-Gaussian likelihoods. Copula processes
are an elegant and flexible solution to handle various non-Gaussian likelihoods
by capturing the dependence structure of random variables with cumulative
distribution functions rather than their marginals. We show how multi-task
learning for copula processes can be used to improve multivariable prediction
for problems where the simple Gaussianity prior assumption does not hold. Then,
we present a transductive approximation for multi-task learning and derive
analytical expressions for the copula process model. The approach is evaluated
and compared to other techniques in one artificial dataset and two publicly
available datasets for natural resource estimation and concrete slump
prediction.
| no_new_dataset | 0.944022 |
1405.7958 | George Teodoro | George Teodoro, Tony Pan, Tahsin Kurc, Jun Kong, Lee Cooper, Scott
Klasky, Joel Saltz | Region Templates: Data Representation and Management for Large-Scale
Image Analysis | 43 pages, 17 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed memory machines equipped with CPUs and GPUs (hybrid computing
nodes) are hard to program because of the multiple layers of memory and
heterogeneous computing configurations. In this paper, we introduce a region
template abstraction for the efficient management of common data types used in
analysis of large datasets of high resolution images on clusters of hybrid
computing nodes. The region template provides a generic container template for
common data structures, such as points, arrays, regions, and object sets,
within a spatial and temporal bounding box. The region template abstraction
enables different data management strategies and data I/O implementations,
while providing a homogeneous, unified interface to the application for data
storage and retrieval. The execution of region templates applications is
coordinated by a runtime system that supports efficient execution in hybrid
machines. Region templates applications are represented as hierarchical
dataflow in which each computing stage may be represented as another dataflow
of finer-grain tasks. A number of optimizations for hybrid machines are
available in our runtime system, including performance-aware scheduling for
maximizing utilization of computing devices and techniques to reduce impact of
data transfers between CPUs and GPUs. An experimental evaluation on a
state-of-the-art hybrid cluster using a microscopy imaging study shows that
this abstraction adds negligible overhead (about 3%) and achieves good
scalability.
| [
{
"version": "v1",
"created": "Fri, 30 May 2014 19:22:46 GMT"
}
] | 2014-06-02T00:00:00 | [
[
"Teodoro",
"George",
""
],
[
"Pan",
"Tony",
""
],
[
"Kurc",
"Tahsin",
""
],
[
"Kong",
"Jun",
""
],
[
"Cooper",
"Lee",
""
],
[
"Klasky",
"Scott",
""
],
[
"Saltz",
"Joel",
""
]
] | TITLE: Region Templates: Data Representation and Management for Large-Scale
Image Analysis
ABSTRACT: Distributed memory machines equipped with CPUs and GPUs (hybrid computing
nodes) are hard to program because of the multiple layers of memory and
heterogeneous computing configurations. In this paper, we introduce a region
template abstraction for the efficient management of common data types used in
analysis of large datasets of high resolution images on clusters of hybrid
computing nodes. The region template provides a generic container template for
common data structures, such as points, arrays, regions, and object sets,
within a spatial and temporal bounding box. The region template abstraction
enables different data management strategies and data I/O implementations,
while providing a homogeneous, unified interface to the application for data
storage and retrieval. The execution of region templates applications is
coordinated by a runtime system that supports efficient execution in hybrid
machines. Region templates applications are represented as hierarchical
dataflow in which each computing stage may be represented as another dataflow
of finer-grain tasks. A number of optimizations for hybrid machines are
available in our runtime system, including performance-aware scheduling for
maximizing utilization of computing devices and techniques to reduce impact of
data transfers between CPUs and GPUs. An experimental evaluation on a
state-of-the-art hybrid cluster using a microscopy imaging study shows that
this abstraction adds negligible overhead (about 3%) and achieves good
scalability.
| no_new_dataset | 0.948106 |
1405.7397 | Kamal Sarkar | Vivekananda Gayen, Kamal Sarkar | An HMM Based Named Entity Recognition System for Indian Languages: The
JU System at ICON 2013 | The ICON 2013 tools contest on Named Entity Recognition in Indian
languages (IL) co-located with the 10th International Conference on Natural
Language Processing(ICON), CDAC Noida, India,18-20 December, 2013 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports about our work in the ICON 2013 NLP TOOLS CONTEST on Named
Entity Recognition. We submitted runs for Bengali, English, Hindi, Marathi,
Punjabi, Tamil and Telugu. A statistical HMM (Hidden Markov Models) based model
has been used to implement our system. The system has been trained and tested
on the NLP TOOLS CONTEST: ICON 2013 datasets. Our system obtains F-measures of
0.8599, 0.7704, 0.7520, 0.4289, 0.5455, 0.4466, and 0.4003 for Bengali,
English, Hindi, Marathi, Punjabi, Tamil and Telugu respectively.
| [
{
"version": "v1",
"created": "Wed, 28 May 2014 21:05:00 GMT"
}
] | 2014-05-30T00:00:00 | [
[
"Gayen",
"Vivekananda",
""
],
[
"Sarkar",
"Kamal",
""
]
] | TITLE: An HMM Based Named Entity Recognition System for Indian Languages: The
JU System at ICON 2013
ABSTRACT: This paper reports about our work in the ICON 2013 NLP TOOLS CONTEST on Named
Entity Recognition. We submitted runs for Bengali, English, Hindi, Marathi,
Punjabi, Tamil and Telugu. A statistical HMM (Hidden Markov Models) based model
has been used to implement our system. The system has been trained and tested
on the NLP TOOLS CONTEST: ICON 2013 datasets. Our system obtains F-measures of
0.8599, 0.7704, 0.7520, 0.4289, 0.5455, 0.4466, and 0.4003 for Bengali,
English, Hindi, Marathi, Punjabi, Tamil and Telugu respectively.
| no_new_dataset | 0.945399 |
1405.7545 | Michael Sapienza | Michael Sapienza and Fabio Cuzzolin and Philip H.S. Torr | Feature sampling and partitioning for visual vocabulary generation on
large action classification datasets | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent trend in action recognition is towards larger datasets, an
increasing number of action classes and larger visual vocabularies.
State-of-the-art human action classification in challenging video data is
currently based on a bag-of-visual-words pipeline in which space-time features
are aggregated globally to form a histogram. The strategies chosen to sample
features and construct a visual vocabulary are critical to performance, in fact
often dominating performance. In this work we provide a critical evaluation of
various approaches to building a vocabulary and show that good practises do
have a significant impact. By subsampling and partitioning features
strategically, we are able to achieve state-of-the-art results on 5 major
action recognition datasets using relatively small visual vocabularies.
| [
{
"version": "v1",
"created": "Thu, 29 May 2014 13:09:52 GMT"
}
] | 2014-05-30T00:00:00 | [
[
"Sapienza",
"Michael",
""
],
[
"Cuzzolin",
"Fabio",
""
],
[
"Torr",
"Philip H. S.",
""
]
] | TITLE: Feature sampling and partitioning for visual vocabulary generation on
large action classification datasets
ABSTRACT: The recent trend in action recognition is towards larger datasets, an
increasing number of action classes and larger visual vocabularies.
State-of-the-art human action classification in challenging video data is
currently based on a bag-of-visual-words pipeline in which space-time features
are aggregated globally to form a histogram. The strategies chosen to sample
features and construct a visual vocabulary are critical to performance, in fact
often dominating performance. In this work we provide a critical evaluation of
various approaches to building a vocabulary and show that good practises do
have a significant impact. By subsampling and partitioning features
strategically, we are able to achieve state-of-the-art results on 5 major
action recognition datasets using relatively small visual vocabularies.
| no_new_dataset | 0.952706 |
1405.7631 | Mostafa Salehi | Motahareh Eslami Mehdiabadi, Hamid R. Rabiee, Mostafa Salehi | Diffusion-Aware Sampling and Estimation in Information Diffusion
Networks | 8 pages, 4 figures, Published in: International Confernece on Social
Computing 2012 (SocialCom12) | null | 10.1109/SocialCom-PASSAT.2012.98 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially-observed data collected by sampling methods is often being studied
to obtain the characteristics of information diffusion networks. However, these
methods usually do not consider the behavior of diffusion process. In this
paper, we propose a novel two-step (sampling/estimation) measurement framework
by utilizing the diffusion process characteristics. To this end, we propose a
link-tracing based sampling design which uses the infection times as local
information without any knowledge about the latent structure of diffusion
network. To correct the bias of sampled data, we introduce three estimators for
different categories; link-based, node-based, and cascade-based. To the best of
our knowledge, this is the first attempt to introduce a complete measurement
framework for diffusion networks. We also show that the estimator plays an
important role in correcting the bias of sampling from diffusion networks. Our
comprehensive empirical analysis over large synthetic and real datasets
demonstrates that, on average, the proposed framework outperforms the common BFS
and RW sampling methods in terms of link-based characteristics by about 37% and
35%, respectively.
| [
{
"version": "v1",
"created": "Thu, 29 May 2014 17:52:04 GMT"
}
] | 2014-05-30T00:00:00 | [
[
"Mehdiabadi",
"Motahareh Eslami",
""
],
[
"Rabiee",
"Hamid R.",
""
],
[
"Salehi",
"Mostafa",
""
]
] | TITLE: Diffusion-Aware Sampling and Estimation in Information Diffusion
Networks
ABSTRACT: Partially-observed data collected by sampling methods is often being studied
to obtain the characteristics of information diffusion networks. However, these
methods usually do not consider the behavior of diffusion process. In this
paper, we propose a novel two-step (sampling/estimation) measurement framework
by utilizing the diffusion process characteristics. To this end, we propose a
link-tracing based sampling design which uses the infection times as local
information without any knowledge about the latent structure of diffusion
network. To correct the bias of sampled data, we introduce three estimators for
different categories; link-based, node-based, and cascade-based. To the best of
our knowledge, this is the first attempt to introduce a complete measurement
framework for diffusion networks. We also show that the estimator plays an
important role in correcting the bias of sampling from diffusion networks. Our
comprehensive empirical analysis over large synthetic and real datasets
demonstrates that, on average, the proposed framework outperforms the common BFS
and RW sampling methods in terms of link-based characteristics by about 37% and
35%, respectively.
| no_new_dataset | 0.950549 |
1312.6190 | Son Tran | Son N. Tran, Artur d'Avila Garcez | Adaptive Feature Ranking for Unsupervised Transfer Learning | 9 pages 7 figures, new experimental results on ranking and transfer
have been added, typo fixed | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transfer Learning is concerned with the application of knowledge gained from
solving a problem to a different but related problem domain. In this paper, we
propose a method and efficient algorithm for ranking and selecting
representations from a Restricted Boltzmann Machine trained on a source domain
to be transferred onto a target domain. Experiments carried out using the
MNIST, ICDAR and TiCC image datasets show that the proposed adaptive feature
ranking and transfer learning method offers statistically significant
improvements on the training of RBMs. Our method is general in that the
knowledge chosen by the ranking function does not depend on its relation to any
specific target domain, and it works with unsupervised learning and
knowledge-based transfer.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2013 01:50:08 GMT"
},
{
"version": "v2",
"created": "Wed, 28 May 2014 16:35:17 GMT"
}
] | 2014-05-29T00:00:00 | [
[
"Tran",
"Son N.",
""
],
[
"Garcez",
"Artur d'Avila",
""
]
] | TITLE: Adaptive Feature Ranking for Unsupervised Transfer Learning
ABSTRACT: Transfer Learning is concerned with the application of knowledge gained from
solving a problem to a different but related problem domain. In this paper, we
propose a method and efficient algorithm for ranking and selecting
representations from a Restricted Boltzmann Machine trained on a source domain
to be transferred onto a target domain. Experiments carried out using the
MNIST, ICDAR and TiCC image datasets show that the proposed adaptive feature
ranking and transfer learning method offers statistically significant
improvements on the training of RBMs. Our method is general in that the
knowledge chosen by the ranking function does not depend on its relation to any
specific target domain, and it works with unsupervised learning and
knowledge-based transfer.
| no_new_dataset | 0.944995 |
1405.6804 | Zhuowen Tu | Zhuowen Tu and Piotr Dollar and Yingnian Wu | Layered Logic Classifiers: Exploring the `And' and `Or' Relations | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing effective and efficient classifier for pattern analysis is a key
problem in machine learning and computer vision. Many of the solutions to the
problem require performing logic operations such as `and', `or', and `not'.
Classification and regression tree (CART) include these operations explicitly.
Other methods such as neural networks, SVM, and boosting learn/compute a
weighted sum on features (weak classifiers), which weakly perform the 'and' and
'or' operations. However, it is hard for these classifiers to deal with the
'xor' pattern directly. In this paper, we propose layered logic classifiers for
patterns of complicated distributions by combining the `and', `or', and `not'
operations. The proposed algorithm is very general and easy to implement. We
test the classifiers on several typical datasets from the Irvine repository and
two challenging vision applications, object segmentation and pedestrian
detection. We observe significant improvements on all the datasets over the
widely used decision stump based AdaBoost algorithm. The resulting classifiers
have much less training complexity than decision tree based AdaBoost, and can
be applied in a wide range of domains.
| [
{
"version": "v1",
"created": "Tue, 27 May 2014 06:29:01 GMT"
},
{
"version": "v2",
"created": "Wed, 28 May 2014 00:51:08 GMT"
}
] | 2014-05-29T00:00:00 | [
[
"Tu",
"Zhuowen",
""
],
[
"Dollar",
"Piotr",
""
],
[
"Wu",
"Yingnian",
""
]
] | TITLE: Layered Logic Classifiers: Exploring the `And' and `Or' Relations
ABSTRACT: Designing effective and efficient classifier for pattern analysis is a key
problem in machine learning and computer vision. Many of the solutions to the
problem require performing logic operations such as `and', `or', and `not'.
Classification and regression tree (CART) include these operations explicitly.
Other methods such as neural networks, SVM, and boosting learn/compute a
weighted sum on features (weak classifiers), which weakly perform the 'and' and
'or' operations. However, it is hard for these classifiers to deal with the
'xor' pattern directly. In this paper, we propose layered logic classifiers for
patterns of complicated distributions by combining the `and', `or', and `not'
operations. The proposed algorithm is very general and easy to implement. We
test the classifiers on several typical datasets from the Irvine repository and
two challenging vision applications, object segmentation and pedestrian
detection. We observe significant improvements on all the datasets over the
widely used decision stump based AdaBoost algorithm. The resulting classifiers
have much less training complexity than decision tree based AdaBoost, and can
be applied in a wide range of domains.
| no_new_dataset | 0.947914 |
1405.7258 | Mostafa Salehi | Motahareh Eslami Mehdiabadi, Hamid R. Rabiee, Mostafa Salehi | Sampling from Diffusion Networks | Published in Proceedings of the 2012 International Conference on
Social Informatics, Pages 106-112 | null | 10.1109/SocialInformatics.2012.79 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The diffusion phenomenon has a remarkable impact on Online Social Networks
(OSNs). Gathering diffusion data over these large networks encounters many
challenges which can be alleviated by adopting a suitable sampling approach.
The contributions of this paper are twofold. First, we study the sampling
approaches over diffusion networks, and for the first time, classify these
approaches into two categories; (1) Structure-based Sampling (SBS), and (2)
Diffusion-based Sampling (DBS). The dependency of the former approach on
topological features of the network, and unavailability of real diffusion paths
in the latter, converts the problem of choosing an appropriate sampling
approach to a trade-off. Second, we formally define the diffusion network
sampling problem and propose a number of new diffusion-based characteristics to
evaluate introduced sampling approaches. Our experiments on large scale
synthetic and real datasets show that although DBS performs much better than
SBS in higher sampling rates (16% ~ 29% on average), their performances differ
about 7% in lower sampling rates. Therefore, in real large scale systems with
low sampling rate requirements, SBS would be a better choice according to its
lower time complexity in gathering data compared to DBS. Moreover, we show that
the introduced sampling approaches (SBS and DBS) play a more important role
than the graph exploration techniques such as Breadth-First Search (BFS) and
Random Walk (RW) in the analysis of diffusion processes.
| [
{
"version": "v1",
"created": "Wed, 28 May 2014 14:33:02 GMT"
}
] | 2014-05-29T00:00:00 | [
[
"Mehdiabadi",
"Motahareh Eslami",
""
],
[
"Rabiee",
"Hamid R.",
""
],
[
"Salehi",
"Mostafa",
""
]
] | TITLE: Sampling from Diffusion Networks
ABSTRACT: The diffusion phenomenon has a remarkable impact on Online Social Networks
(OSNs). Gathering diffusion data over these large networks encounters many
challenges which can be alleviated by adopting a suitable sampling approach.
The contributions of this paper are twofold. First, we study the sampling
approaches over diffusion networks, and for the first time, classify these
approaches into two categories; (1) Structure-based Sampling (SBS), and (2)
Diffusion-based Sampling (DBS). The dependency of the former approach on
topological features of the network, and unavailability of real diffusion paths
in the latter, converts the problem of choosing an appropriate sampling
approach to a trade-off. Second, we formally define the diffusion network
sampling problem and propose a number of new diffusion-based characteristics to
evaluate introduced sampling approaches. Our experiments on large scale
synthetic and real datasets show that although DBS performs much better than
SBS in higher sampling rates (16% ~ 29% on average), their performances differ
about 7% in lower sampling rates. Therefore, in real large scale systems with
low sampling rate requirements, SBS would be a better choice according to its
lower time complexity in gathering data compared to DBS. Moreover, we show that
the introduced sampling approaches (SBS and DBS) play a more important role
than the graph exploration techniques such as Breadth-First Search (BFS) and
Random Walk (RW) in the analysis of diffusion processes.
| no_new_dataset | 0.952264 |
1405.1213 | Oscar Danielsson | Oscar Danielsson and Omid Aghazadeh | Human Pose Estimation from RGB Input Using Synthetic Training Data | 6 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of estimating the pose of humans using RGB image
input. More specifically, we are using a random forest classifier to classify
pixels into joint-based body part categories, much similar to the famous Kinect
pose estimator [11], [12]. However, we are using pure RGB input, i.e. no depth.
Since the random forest requires a large number of training examples, we are
using computer graphics generated, synthetic training data. In addition, we
assume that we have access to a large number of real images with bounding box
labels, extracted for example by a pedestrian detector or a tracking system. We
propose a new objective function for random forest training that uses the
weakly labeled data from the target domain to encourage the learner to select
features that generalize from the synthetic source domain to the real target
domain. We demonstrate on a publicly available dataset [6] that the proposed
objective function yields a classifier that significantly outperforms a
baseline classifier trained using the standard entropy objective [10].
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 10:13:08 GMT"
},
{
"version": "v2",
"created": "Tue, 27 May 2014 12:23:54 GMT"
}
] | 2014-05-28T00:00:00 | [
[
"Danielsson",
"Oscar",
""
],
[
"Aghazadeh",
"Omid",
""
]
] | TITLE: Human Pose Estimation from RGB Input Using Synthetic Training Data
ABSTRACT: We address the problem of estimating the pose of humans using RGB image
input. More specifically, we are using a random forest classifier to classify
pixels into joint-based body part categories, much similar to the famous Kinect
pose estimator [11], [12]. However, we are using pure RGB input, i.e. no depth.
Since the random forest requires a large number of training examples, we are
using computer graphics generated, synthetic training data. In addition, we
assume that we have access to a large number of real images with bounding box
labels, extracted for example by a pedestrian detector or a tracking system. We
propose a new objective function for random forest training that uses the
weakly labeled data from the target domain to encourage the learner to select
features that generalize from the synthetic source domain to the real target
domain. We demonstrate on a publicly available dataset [6] that the proposed
objective function yields a classifier that significantly outperforms a
baseline classifier trained using the standard entropy objective [10].
| no_new_dataset | 0.947672 |
1405.6886 | Rasmus Troelsg{\aa}rd | Rasmus Troelsg{\aa}rd, Bj{\o}rn Sand Jensen, Lars Kai Hansen | A Topic Model Approach to Multi-Modal Similarity | topic modelling workshop at NIPS 2013 | null | null | null | cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Calculating similarities between objects defined by many heterogeneous data
modalities is an important challenge in many multimedia applications. We use a
multi-modal topic model as a basis for defining such a similarity between
objects. We propose to compare the resulting similarities from different model
realizations using the non-parametric Mantel test. The approach is evaluated on
a music dataset.
| [
{
"version": "v1",
"created": "Tue, 27 May 2014 12:34:24 GMT"
}
] | 2014-05-28T00:00:00 | [
[
"Troelsgård",
"Rasmus",
""
],
[
"Jensen",
"Bjørn Sand",
""
],
[
"Hansen",
"Lars Kai",
""
]
] | TITLE: A Topic Model Approach to Multi-Modal Similarity
ABSTRACT: Calculating similarities between objects defined by many heterogeneous data
modalities is an important challenge in many multimedia applications. We use a
multi-modal topic model as a basis for defining such a similarity between
objects. We propose to compare the resulting similarities from different model
realizations using the non-parametric Mantel test. The approach is evaluated on
a music dataset.
| no_new_dataset | 0.949763 |
1405.6922 | Omid Aghazadeh | Omid Aghazadeh and Stefan Carlsson | Large Scale, Large Margin Classification using Indefinite Similarity
Measures | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the success of the popular kernelized support vector machines, they
have two major limitations: they are restricted to Positive Semi-Definite (PSD)
kernels, and their training complexity scales at least quadratically with the
size of the data. Many natural measures of similarity between pairs of samples
are not PSD e.g. invariant kernels, and those that are implicitly or explicitly
defined by latent variable models. In this paper, we investigate scalable
approaches for using indefinite similarity measures in large margin frameworks.
In particular we show that a normalization of similarity to a subset of the
data points constitutes a representation suitable for linear classifiers. The
result is a classifier which is competitive to kernelized SVM in terms of
accuracy, despite having better training and test time complexities.
Experimental results demonstrate that on CIFAR-10 dataset, the model equipped
with similarity measures invariant to rigid and non-rigid deformations, can be
made more than 5 times sparser while being more accurate than kernelized SVM
using RBF kernels.
| [
{
"version": "v1",
"created": "Tue, 27 May 2014 14:18:26 GMT"
}
] | 2014-05-28T00:00:00 | [
[
"Aghazadeh",
"Omid",
""
],
[
"Carlsson",
"Stefan",
""
]
] | TITLE: Large Scale, Large Margin Classification using Indefinite Similarity
Measures
ABSTRACT: Despite the success of the popular kernelized support vector machines, they
have two major limitations: they are restricted to Positive Semi-Definite (PSD)
kernels, and their training complexity scales at least quadratically with the
size of the data. Many natural measures of similarity between pairs of samples
are not PSD e.g. invariant kernels, and those that are implicitly or explicitly
defined by latent variable models. In this paper, we investigate scalable
approaches for using indefinite similarity measures in large margin frameworks.
In particular we show that a normalization of similarity to a subset of the
data points constitutes a representation suitable for linear classifiers. The
result is a classifier which is competitive to kernelized SVM in terms of
accuracy, despite having better training and test time complexities.
Experimental results demonstrate that on CIFAR-10 dataset, the model equipped
with similarity measures invariant to rigid and non-rigid deformations, can be
made more than 5 times sparser while being more accurate than kernelized SVM
using RBF kernels.
| no_new_dataset | 0.94887 |
1206.5333 | Leon Derczynski | Naushad UzZaman, Hector Llorens, James Allen, Leon Derczynski, Marc
Verhagen and James Pustejovsky | TempEval-3: Evaluating Events, Time Expressions, and Temporal Relations | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/3.0/ | We describe the TempEval-3 task which is currently in preparation for the
SemEval-2013 evaluation exercise. The aim of TempEval is to advance research on
temporal information processing. TempEval-3 follows on from previous TempEval
events, incorporating: a three-part task structure covering event, temporal
expression and temporal relation extraction; a larger dataset; and single
overall task quality scores.
| [
{
"version": "v1",
"created": "Fri, 22 Jun 2012 22:30:44 GMT"
},
{
"version": "v2",
"created": "Sun, 25 May 2014 19:10:12 GMT"
}
] | 2014-05-27T00:00:00 | [
[
"UzZaman",
"Naushad",
""
],
[
"Llorens",
"Hector",
""
],
[
"Allen",
"James",
""
],
[
"Derczynski",
"Leon",
""
],
[
"Verhagen",
"Marc",
""
],
[
"Pustejovsky",
"James",
""
]
] | TITLE: TempEval-3: Evaluating Events, Time Expressions, and Temporal Relations
ABSTRACT: We describe the TempEval-3 task which is currently in preparation for the
SemEval-2013 evaluation exercise. The aim of TempEval is to advance research on
temporal information processing. TempEval-3 follows on from previous TempEval
events, incorporating: a three-part task structure covering event, temporal
expression and temporal relation extraction; a larger dataset; and single
overall task quality scores.
| no_new_dataset | 0.939913 |
1306.1091 | Yoshua Bengio | Yoshua Bengio, \'Eric Thibodeau-Laufer, Guillaume Alain and Jason
Yosinski | Deep Generative Stochastic Networks Trainable by Backprop | arXiv admin note: text overlap with arXiv:1305.0445, Also published
in ICML'2014 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. The transition
distribution of the Markov chain is conditional on the previous state,
generally involving a small move, so this conditional distribution has fewer
dominant modes, being unimodal in the limit of small moves. Thus, it is easier
to learn because it is easier to approximate its partition function, more like
learning to perform supervised function approximation, with gradients that can
be obtained by backprop. We provide theorems that generalize recent work on the
probabilistic interpretation of denoising autoencoders and obtain along the way
an interesting justification for dependency networks and generalized
pseudolikelihood, along with a definition of an appropriate joint distribution
and sampling mechanism even when the conditionals are not consistent. GSNs can
be used with missing inputs and can be used to sample subsets of variables
given the rest. We validate these theoretical results with experiments on two
image datasets using an architecture that mimics the Deep Boltzmann Machine
Gibbs sampler but allows training to proceed with simple backprop, without the
need for layerwise pretraining.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2013 13:01:14 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jun 2013 16:55:38 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Oct 2013 07:04:58 GMT"
},
{
"version": "v4",
"created": "Wed, 18 Dec 2013 19:46:07 GMT"
},
{
"version": "v5",
"created": "Sat, 24 May 2014 00:05:18 GMT"
}
] | 2014-05-27T00:00:00 | [
[
"Bengio",
"Yoshua",
""
],
[
"Thibodeau-Laufer",
"Éric",
""
],
[
"Alain",
"Guillaume",
""
],
[
"Yosinski",
"Jason",
""
]
] | TITLE: Deep Generative Stochastic Networks Trainable by Backprop
ABSTRACT: We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. The transition
distribution of the Markov chain is conditional on the previous state,
generally involving a small move, so this conditional distribution has fewer
dominant modes, being unimodal in the limit of small moves. Thus, it is easier
to learn because it is easier to approximate its partition function, more like
learning to perform supervised function approximation, with gradients that can
be obtained by backprop. We provide theorems that generalize recent work on the
probabilistic interpretation of denoising autoencoders and obtain along the way
an interesting justification for dependency networks and generalized
pseudolikelihood, along with a definition of an appropriate joint distribution
and sampling mechanism even when the conditionals are not consistent. GSNs can
be used with missing inputs and can be used to sample subsets of variables
given the rest. We validate these theoretical results with experiments on two
image datasets using an architecture that mimics the Deep Boltzmann Machine
Gibbs sampler but allows training to proceed with simple backprop, without the
need for layerwise pretraining.
| no_new_dataset | 0.948106 |
1405.6173 | Ahmed Ibrahim Taloba | M. H. Marghny, Rasha M. Abd El-Aziz, Ahmed I. Taloba | An Effective Evolutionary Clustering Algorithm: Hepatitis C Case Study | null | null | null | null | cs.NE cs.CE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Clustering analysis plays an important role in scientific research and
commercial application. K-means algorithm is a widely used partition method in
clustering. However, it is known that the K-means algorithm may get stuck at
suboptimal solutions, depending on the choice of the initial cluster centers.
In this article, we propose a technique to handle large scale data, which can
select initial clustering center purposefully using Genetic algorithms (GAs),
reduce the sensitivity to isolated point, avoid dissevering big cluster, and
overcome deflexion of data in some degree that caused by the disproportion in
data partitioning owing to adoption of multi-sampling. We applied our method to
some public datasets, which show the advantages of the proposed approach, for
example the Hepatitis C dataset that has been taken from the machine learning
warehouse of the University of California. Our aim is to evaluate the hepatitis
dataset. In order to evaluate this dataset we performed some preprocessing operations;
the purpose of preprocessing is to summarize the data in the most suitable
way for our algorithm. Missing values of the instances are adjusted using the local
mean method.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2014 11:03:28 GMT"
}
] | 2014-05-26T00:00:00 | [
[
"Marghny",
"M. H.",
""
],
[
"El-Aziz",
"Rasha M. Abd",
""
],
[
"Taloba",
"Ahmed I.",
""
]
] | TITLE: An Effective Evolutionary Clustering Algorithm: Hepatitis C Case Study
ABSTRACT: Clustering analysis plays an important role in scientific research and
commercial application. K-means algorithm is a widely used partition method in
clustering. However, it is known that the K-means algorithm may get stuck at
suboptimal solutions, depending on the choice of the initial cluster centers.
In this article, we propose a technique to handle large scale data, which can
select initial clustering center purposefully using Genetic algorithms (GAs),
reduce the sensitivity to isolated point, avoid dissevering big cluster, and
overcome deflexion of data in some degree that caused by the disproportion in
data partitioning owing to adoption of multi-sampling. We applied our method to
some public datasets, which show the advantages of the proposed approach, for
example the Hepatitis C dataset that has been taken from the machine learning
warehouse of the University of California. Our aim is to evaluate the hepatitis
dataset. In order to evaluate this dataset we performed some preprocessing operations;
the purpose of preprocessing is to summarize the data in the most suitable
way for our algorithm. Missing values of the instances are adjusted using the local
mean method.
| no_new_dataset | 0.951504 |
1308.6382 | Pierre de Buyl | Pierre de Buyl, Peter H. Colberg and Felix H\"ofling | H5MD: a structured, efficient, and portable file format for molecular
data | 11 pages, software "pyh5md" present in submission | Comp. Phys. Comm. 185, 1546-1553 (2014) | 10.1016/j.cpc.2014.01.018 | null | physics.comp-ph cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new file format named "H5MD" for storing molecular simulation
data, such as trajectories of particle positions and velocities, along with
thermodynamic observables that are monitored during the course of the
simulation. H5MD files are HDF5 (Hierarchical Data Format) files with a
specific hierarchy and naming scheme. Thus, H5MD inherits many benefits of
HDF5, e.g., structured layout of multi-dimensional datasets, data compression,
fast and parallel I/O, and portability across many programming languages and
hardware platforms. H5MD files are self-contained and foster the
reproducibility of scientific data and the interchange of data between
researchers using different simulation programs and analysis software. In
addition, the H5MD specification can serve for other kinds of data (e.g.
experimental data) and is extensible to supplemental data, or may be part of an
enclosing file structure.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2013 07:40:33 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2014 12:42:44 GMT"
}
] | 2014-05-23T00:00:00 | [
[
"de Buyl",
"Pierre",
""
],
[
"Colberg",
"Peter H.",
""
],
[
"Höfling",
"Felix",
""
]
] | TITLE: H5MD: a structured, efficient, and portable file format for molecular
data
ABSTRACT: We propose a new file format named "H5MD" for storing molecular simulation
data, such as trajectories of particle positions and velocities, along with
thermodynamic observables that are monitored during the course of the
simulation. H5MD files are HDF5 (Hierarchical Data Format) files with a
specific hierarchy and naming scheme. Thus, H5MD inherits many benefits of
HDF5, e.g., structured layout of multi-dimensional datasets, data compression,
fast and parallel I/O, and portability across many programming languages and
hardware platforms. H5MD files are self-contained and foster the
reproducibility of scientific data and the interchange of data between
researchers using different simulation programs and analysis software. In
addition, the H5MD specification can serve for other kinds of data (e.g.
experimental data) and is extensible to supplemental data, or may be part of an
enclosing file structure.
| no_new_dataset | 0.932576 |
1405.3100 | Andrea Monacchi | Andrea Monacchi, Dominik Egarter, Wilfried Elmenreich, Salvatore
D'Alessandro, Andrea M. Tonello | GREEND: An Energy Consumption Dataset of Households in Italy and Austria | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Home energy management systems can be used to monitor and optimize
consumption and local production from renewable energy. To assess solutions
before their deployment, researchers and designers of those systems demand
energy consumption datasets. In this paper, we present the GREEND dataset,
containing detailed power usage information obtained through a measurement
campaign in households in Austria and Italy. We provide a description of
consumption scenarios and discuss design choices for the sensing
infrastructure. Finally, we benchmark the dataset with state-of-the-art
techniques in load disaggregation, occupancy detection and appliance usage
mining.
| [
{
"version": "v1",
"created": "Tue, 13 May 2014 10:51:32 GMT"
},
{
"version": "v2",
"created": "Thu, 22 May 2014 13:57:03 GMT"
}
] | 2014-05-23T00:00:00 | [
[
"Monacchi",
"Andrea",
""
],
[
"Egarter",
"Dominik",
""
],
[
"Elmenreich",
"Wilfried",
""
],
[
"D'Alessandro",
"Salvatore",
""
],
[
"Tonello",
"Andrea M.",
""
]
] | TITLE: GREEND: An Energy Consumption Dataset of Households in Italy and Austria
ABSTRACT: Home energy management systems can be used to monitor and optimize
consumption and local production from renewable energy. To assess solutions
before their deployment, researchers and designers of those systems demand
energy consumption datasets. In this paper, we present the GREEND dataset,
containing detailed power usage information obtained through a measurement
campaign in households in Austria and Italy. We provide a description of
consumption scenarios and discuss design choices for the sensing
infrastructure. Finally, we benchmark the dataset with state-of-the-art
techniques in load disaggregation, occupancy detection and appliance usage
mining.
| new_dataset | 0.956104 |