id (string, 9–16) | submitter (string, 3–64, ⌀) | authors (string, 5–6.63k) | title (string, 7–245) | comments (string, 1–482, ⌀) | journal-ref (string, 4–382, ⌀) | doi (string, 9–151, ⌀) | report-no (string, 984 classes) | categories (string, 5–108) | license (string, 9 classes) | abstract (string, 83–3.41k) | versions (list, 1–20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1–427) | prompt (string, 166–3.49k) | label (string, 2 classes) | prob (float64, 0.5–0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1412.7449 | Oriol Vinyals | Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever,
Geoffrey Hinton | Grammar as a Foreign Language | null | null | null | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Syntactic constituency parsing is a fundamental problem in natural language
processing and has been the subject of intensive research and engineering for
decades. As a result, the most accurate parsers are domain-specific, complex,
and inefficient. In this paper we show that the domain-agnostic
attention-enhanced sequence-to-sequence model achieves state-of-the-art results
on the most widely used syntactic constituency parsing dataset, when trained on
a large synthetic corpus that was annotated using existing parsers. It also
matches the performance of standard parsers when trained only on a small
human-annotated dataset, which shows that this model is highly data-efficient,
in contrast to sequence-to-sequence models without the attention mechanism. Our
parser is also fast, processing over a hundred sentences per second with an
unoptimized CPU implementation.
| [
{
"version": "v1",
"created": "Tue, 23 Dec 2014 17:16:24 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Feb 2015 03:16:54 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Jun 2015 22:41:07 GMT"
}
] | 2015-06-11T00:00:00 | [
[
"Vinyals",
"Oriol",
""
],
[
"Kaiser",
"Lukasz",
""
],
[
"Koo",
"Terry",
""
],
[
"Petrov",
"Slav",
""
],
[
"Sutskever",
"Ilya",
""
],
[
"Hinton",
"Geoffrey",
""
]
] | TITLE: Grammar as a Foreign Language
ABSTRACT: Syntactic constituency parsing is a fundamental problem in natural language
processing and has been the subject of intensive research and engineering for
decades. As a result, the most accurate parsers are domain-specific, complex,
and inefficient. In this paper we show that the domain-agnostic
attention-enhanced sequence-to-sequence model achieves state-of-the-art results
on the most widely used syntactic constituency parsing dataset, when trained on
a large synthetic corpus that was annotated using existing parsers. It also
matches the performance of standard parsers when trained only on a small
human-annotated dataset, which shows that this model is highly data-efficient,
in contrast to sequence-to-sequence models without the attention mechanism. Our
parser is also fast, processing over a hundred sentences per second with an
unoptimized CPU implementation.
| no_new_dataset | 0.952662 |
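The row above describes an attention-enhanced sequence-to-sequence parser. Below is a minimal sketch of the content-based attention step only; the dot-product scoring and all names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Content-based attention: score each encoder state against the
    current decoder state, normalize with a softmax, and return the
    weighted sum as the context vector fed to the next prediction."""
    scores = encoder_states @ decoder_state          # (T,) dot-product scores
    weights = np.exp(scores - scores.max())          # numerically stable softmax
    weights /= weights.sum()
    return weights @ encoder_states                  # (d,) context vector

# Toy usage: 5 encoder states and one decoder state of dimension 8.
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))
dec = rng.standard_normal(8)
print(attention_context(dec, enc).shape)             # (8,)
```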
1506.03139 | Keenon Werling | Keenon Werling, Gabor Angeli, Christopher Manning | Robust Subgraph Generation Improves Abstract Meaning Representation
Parsing | To appear in ACL 2015 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Abstract Meaning Representation (AMR) is a representation for open-domain
rich semantics, with potential use in fields like event extraction and machine
translation. Node generation, typically done using a simple dictionary lookup,
is currently an important limiting factor in AMR parsing. We propose a small
set of actions that derive AMR subgraphs by transformations on spans of text,
which allows for more robust learning of this stage. Our set of construction
actions generalizes better than the previous approach, and can be learned with a
simple classifier. We improve on the previous state-of-the-art result for AMR
parsing, boosting end-to-end performance by 3 F$_1$ on both the LDC2013E117 and
LDC2014T12 datasets.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 00:40:12 GMT"
}
] | 2015-06-11T00:00:00 | [
[
"Werling",
"Keenon",
""
],
[
"Angeli",
"Gabor",
""
],
[
"Manning",
"Christopher",
""
]
] | TITLE: Robust Subgraph Generation Improves Abstract Meaning Representation
Parsing
ABSTRACT: The Abstract Meaning Representation (AMR) is a representation for open-domain
rich semantics, with potential use in fields like event extraction and machine
translation. Node generation, typically done using a simple dictionary lookup,
is currently an important limiting factor in AMR parsing. We propose a small
set of actions that derive AMR subgraphs by transformations on spans of text,
which allows for more robust learning of this stage. Our set of construction
actions generalizes better than the previous approach, and can be learned with a
simple classifier. We improve on the previous state-of-the-art result for AMR
parsing, boosting end-to-end performance by 3 F$_1$ on both the LDC2013E117 and
LDC2014T12 datasets.
| no_new_dataset | 0.947866 |
1506.03184 | Cong Yao | Xinyu Zhou and Shuchang Zhou and Cong Yao and Zhimin Cao and Qi Yin | ICDAR 2015 Text Reading in the Wild Competition | 3 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, text detection and recognition in natural scenes are becoming
increasingly popular in the computer vision community as well as the document
analysis community. However, the majority of existing ideas, algorithms and
systems are specifically designed for English. This technical report presents
the final results of the ICDAR 2015 Text Reading in the Wild (TRW 2015)
competition, which aims at establishing a benchmark for assessing detection and
recognition algorithms devised for both Chinese and English scripts and
providing a playground for researchers from the community. In this article, we
describe in detail the dataset, tasks, evaluation protocols and participants of
this competition, and report the performance of the participating methods.
Moreover, promising directions for future research are discussed.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 06:46:55 GMT"
}
] | 2015-06-11T00:00:00 | [
[
"Zhou",
"Xinyu",
""
],
[
"Zhou",
"Shuchang",
""
],
[
"Yao",
"Cong",
""
],
[
"Cao",
"Zhimin",
""
],
[
"Yin",
"Qi",
""
]
] | TITLE: ICDAR 2015 Text Reading in the Wild Competition
ABSTRACT: Recently, text detection and recognition in natural scenes are becoming
increasingly popular in the computer vision community as well as the document
analysis community. However, the majority of existing ideas, algorithms and
systems are specifically designed for English. This technical report presents
the final results of the ICDAR 2015 Text Reading in the Wild (TRW 2015)
competition, which aims at establishing a benchmark for assessing detection and
recognition algorithms devised for both Chinese and English scripts and
providing a playground for researchers from the community. In this article, we
describe in detail the dataset, tasks, evaluation protocols and participants of
this competition, and report the performance of the participating methods.
Moreover, promising directions for future research are discussed.
| no_new_dataset | 0.951684 |
1506.03425 | Krzysztof Choromanski | Krzysztof Choromanski and Sanjiv Kumar and Xiaofeng Liu | Fast Online Clustering with Randomized Skeleton Sets | null | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new fast online clustering algorithm that reliably recovers
arbitrary-shaped data clusters in high-throughput data streams. Unlike the
existing state-of-the-art online clustering methods based on k-means or
k-medoid, it does not make any restrictive generative assumptions. In addition,
in contrast to existing nonparametric clustering techniques such as DBScan or
DenStream, it gives provable theoretical guarantees. To achieve fast
clustering, we propose to represent each cluster by a skeleton set which is
updated continuously as new data is seen. A skeleton set consists of weighted
samples from the data where weights encode local densities. The size of each
skeleton set is adapted according to the cluster geometry. The proposed
technique automatically detects the number of clusters and is robust to
outliers. The algorithm works for the infinite data stream where more than one
pass over the data is not feasible. We provide theoretical guarantees on the
quality of the clustering and also demonstrate its advantage over the existing
state-of-the-art on several datasets.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 18:41:55 GMT"
}
] | 2015-06-11T00:00:00 | [
[
"Choromanski",
"Krzysztof",
""
],
[
"Kumar",
"Sanjiv",
""
],
[
"Liu",
"Xiaofeng",
""
]
] | TITLE: Fast Online Clustering with Randomized Skeleton Sets
ABSTRACT: We present a new fast online clustering algorithm that reliably recovers
arbitrary-shaped data clusters in high-throughput data streams. Unlike the
existing state-of-the-art online clustering methods based on k-means or
k-medoid, it does not make any restrictive generative assumptions. In addition,
in contrast to existing nonparametric clustering techniques such as DBScan or
DenStream, it gives provable theoretical guarantees. To achieve fast
clustering, we propose to represent each cluster by a skeleton set which is
updated continuously as new data is seen. A skeleton set consists of weighted
samples from the data where weights encode local densities. The size of each
skeleton set is adapted according to the cluster geometry. The proposed
technique automatically detects the number of clusters and is robust to
outliers. The algorithm works for the infinite data stream where more than one
pass over the data is not feasible. We provide theoretical guarantees on the
quality of the clustering and also demonstrate its advantage over the existing
state-of-the-art on several datasets.
| no_new_dataset | 0.951006 |
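Below is a minimal sketch of the skeleton-set bookkeeping described above, assuming a fixed merge radius. The paper's algorithm additionally adapts skeleton sizes to cluster geometry and carries theoretical guarantees, none of which is reproduced here.

```python
import numpy as np

def update_skeleton(skeleton, weights, x, radius=1.0):
    """Online skeleton-set update (illustrative only): a skeleton is a list
    of weighted samples whose weights encode local density. A new point
    either reinforces its nearest skeleton sample or starts a new one."""
    if skeleton:
        d = np.linalg.norm(np.asarray(skeleton) - x, axis=1)
        i = int(d.argmin())
        if d[i] <= radius:
            weights[i] += 1.0        # point falls in an existing dense region
            return skeleton, weights
    skeleton.append(np.asarray(x, dtype=float))   # open a new region
    weights.append(1.0)
    return skeleton, weights

skel, w = [], []
for point in np.random.default_rng(1).standard_normal((100, 2)):
    skel, w = update_skeleton(skel, w, point, radius=0.5)
print(len(skel), "skeleton samples for 100 streamed points")
```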
1411.0292 | Alp Kucukelbir | Alp Kucukelbir, David M. Blei | Population Empirical Bayes | UAI 2015 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian predictive inference analyzes a dataset to make predictions about
new observations. When a model does not match the data, predictive accuracy
suffers. We develop population empirical Bayes (POP-EB), a hierarchical
framework that explicitly models the empirical population distribution as part
of Bayesian analysis. We introduce a new concept, the latent dataset, as a
hierarchical variable and set the empirical population as its prior. This leads
to a new predictive density that mitigates model mismatch. We efficiently apply
this method to complex models by proposing a stochastic variational inference
algorithm, called bumping variational inference (BUMP-VI). We demonstrate
improved predictive accuracy over classical Bayesian inference in three models:
a linear regression model of health data, a Bayesian mixture model of natural
images, and a latent Dirichlet allocation topic model of scientific documents.
| [
{
"version": "v1",
"created": "Sun, 2 Nov 2014 18:50:14 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jun 2015 21:36:22 GMT"
}
] | 2015-06-10T00:00:00 | [
[
"Kucukelbir",
"Alp",
""
],
[
"Blei",
"David M.",
""
]
] | TITLE: Population Empirical Bayes
ABSTRACT: Bayesian predictive inference analyzes a dataset to make predictions about
new observations. When a model does not match the data, predictive accuracy
suffers. We develop population empirical Bayes (POP-EB), a hierarchical
framework that explicitly models the empirical population distribution as part
of Bayesian analysis. We introduce a new concept, the latent dataset, as a
hierarchical variable and set the empirical population as its prior. This leads
to a new predictive density that mitigates model mismatch. We efficiently apply
this method to complex models by proposing a stochastic variational inference
algorithm, called bumping variational inference (BUMP-VI). We demonstrate
improved predictive accuracy over classical Bayesian inference in three models:
a linear regression model of health data, a Bayesian mixture model of natural
images, and a latent Dirichlet allocation topic model of scientific documents.
| no_new_dataset | 0.952926 |
1411.4280 | Jonathan Tompson | Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, Christopher
Bregler | Efficient Object Localization Using Convolutional Networks | 8 pages with 1 page of citations | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent state-of-the-art performance on human-body pose estimation has been
achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet
architectures include pooling and sub-sampling layers which reduce
computational requirements, introduce invariance and prevent over-training.
These benefits of pooling come at the cost of reduced localization accuracy. We
introduce a novel architecture which includes an efficient `position
refinement' model that is trained to estimate the joint offset location within
a small region of the image. This refinement model is jointly trained in
cascade with a state-of-the-art ConvNet model to achieve improved accuracy in
human joint location estimation. We show that the variance of our detector
approaches the variance of human annotations on the FLIC dataset and
outperforms all existing approaches on the MPII-human-pose dataset.
| [
{
"version": "v1",
"created": "Sun, 16 Nov 2014 17:23:02 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Apr 2015 16:55:05 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Jun 2015 12:29:21 GMT"
}
] | 2015-06-10T00:00:00 | [
[
"Tompson",
"Jonathan",
""
],
[
"Goroshin",
"Ross",
""
],
[
"Jain",
"Arjun",
""
],
[
"LeCun",
"Yann",
""
],
[
"Bregler",
"Christopher",
""
]
] | TITLE: Efficient Object Localization Using Convolutional Networks
ABSTRACT: Recent state-of-the-art performance on human-body pose estimation has been
achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet
architectures include pooling and sub-sampling layers which reduce
computational requirements, introduce invariance and prevent over-training.
These benefits of pooling come at the cost of reduced localization accuracy. We
introduce a novel architecture which includes an efficient `position
refinement' model that is trained to estimate the joint offset location within
a small region of the image. This refinement model is jointly trained in
cascade with a state-of-the-art ConvNet model to achieve improved accuracy in
human joint location estimation. We show that the variance of our detector
approaches the variance of human annotations on the FLIC dataset and
outperforms all existing approaches on the MPII-human-pose dataset.
| no_new_dataset | 0.94545 |
1502.04843 | Brijnesh Jain | Brijnesh Jain | Generalized Gradient Learning on Time Series under Elastic
Transformations | accepted for publication in Machine Learning | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The majority of machine learning algorithms assume that objects are
represented as vectors. But often the objects we want to learn on are more
naturally represented by other data structures such as sequences and time
series. For these representations many standard learning algorithms are
unavailable. We generalize gradient-based learning algorithms to time series
under dynamic time warping. To this end, we introduce elastic functions, which
extend functions on time series to matrix spaces. Necessary conditions are
presented under which generalized gradient learning on time series is
consistent. We indicate how results carry over to arbitrary elastic distance
functions and to sequences consisting of symbolic elements. Specifically, four
linear classifiers are extended to time series under dynamic time warping and
applied to benchmark datasets. Results indicate that generalized gradient
learning via elastic functions has the potential to complement the
state-of-the-art in statistical pattern recognition on time series.
| [
{
"version": "v1",
"created": "Tue, 17 Feb 2015 10:08:48 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jun 2015 10:50:41 GMT"
}
] | 2015-06-10T00:00:00 | [
[
"Jain",
"Brijnesh",
""
]
] | TITLE: Generalized Gradient Learning on Time Series under Elastic
Transformations
ABSTRACT: The majority of machine learning algorithms assume that objects are
represented as vectors. But often the objects we want to learn on are more
naturally represented by other data structures such as sequences and time
series. For these representations many standard learning algorithms are
unavailable. We generalize gradient-based learning algorithms to time series
under dynamic time warping. To this end, we introduce elastic functions, which
extend functions on time series to matrix spaces. Necessary conditions are
presented under which generalized gradient learning on time series is
consistent. We indicate how results carry over to arbitrary elastic distance
functions and to sequences consisting of symbolic elements. Specifically, four
linear classifiers are extended to time series under dynamic time warping and
applied to benchmark datasets. Results indicate that generalized gradient
learning via elastic functions has the potential to complement the
state-of-the-art in statistical pattern recognition on time series.
| no_new_dataset | 0.947235 |
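For reference, here is the standard dynamic-time-warping alignment cost, the elastic distance that the abstract above generalizes gradient learning over. This is textbook DTW with a squared local error, not the paper's learning algorithm.

```python
import numpy as np

def dtw(x, y):
    """Classic dynamic-time-warping cost between two 1-D series: the
    minimal accumulated squared error over all monotone alignments."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0: same shape, different timing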
1504.07659 | Benhui Yang | Benhui Yang, K. M. Walker, R. C. Forrey, P. C. Stancil, N.
Balakrishnan | Collisional quenching of highly rotationally excited HF | 26 pages, 14 figures, and 3 tables in A&A 2015 | A&A 578, A65 (2015) | 10.1051/0004-6361/201525799 | null | astro-ph.SR physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collisional excitation rate coefficients play an important role in the
dynamics of energy transfer in the interstellar medium. In particular, accurate
rotational excitation rates are needed to interpret microwave and infrared
observations of the interstellar gas for nonlocal thermodynamic equilibrium
line formation. Theoretical cross sections and rate coefficients for
collisional deexcitation of rotationally excited HF in the vibrational ground
state are reported. The quantum-mechanical close-coupling approach implemented
in the nonreactive scattering code MOLSCAT was applied in the cross section and
rate coefficient calculations on an accurate 2D HF-He potential energy surface.
Estimates of rate coefficients for H and H$_2$ colliders were obtained from the
HF-He collisional data with a reduced-potential scaling approach. The
calculation of state-to-state rotational quenching cross sections for HF due to
He with initial rotational levels up to $j=20$ was performed for kinetic
energies from 10$^{-5}$ to 15000 cm$^{-1}$. State-to-state rate coefficients
for temperatures between 0.1 and 3000 K are also presented. The comparison of
the present results with previous work for lowly-excited rotational levels
reveals significant differences. In estimating HF-H$_2$ rate coefficients, the
reduced-potential method is found to be more reliable than the standard
reduced-mass approach. The current state-to-state rate coefficient calculations
are the most comprehensive to date for HF-He collisions. We attribute the
differences between previously reported and our results to differences in the
adopted interaction potential energy surfaces. The new He rate coefficients can
be used in a variety of applications. The estimated H$_2$ and H collision rates
can also augment the smaller datasets previously developed for H$_2$ and
electrons.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 21:09:54 GMT"
}
] | 2015-06-10T00:00:00 | [
[
"Yang",
"Benhui",
""
],
[
"Walker",
"K. M.",
""
],
[
"Forrey",
"R. C.",
""
],
[
"Stancil",
"P. C.",
""
],
[
"Balakrishnan",
"N.",
""
]
] | TITLE: Collisional quenching of highly rotationally excited HF
ABSTRACT: Collisional excitation rate coefficients play an important role in the
dynamics of energy transfer in the interstellar medium. In particular, accurate
rotational excitation rates are needed to interpret microwave and infrared
observations of the interstellar gas for nonlocal thermodynamic equilibrium
line formation. Theoretical cross sections and rate coefficients for
collisional deexcitation of rotationally excited HF in the vibrational ground
state are reported. The quantum-mechanical close-coupling approach implemented
in the nonreactive scattering code MOLSCAT was applied in the cross section and
rate coefficient calculations on an accurate 2D HF-He potential energy surface.
Estimates of rate coefficients for H and H$_2$ colliders were obtained from the
HF-He collisional data with a reduced-potential scaling approach. The
calculation of state-to-state rotational quenching cross sections for HF due to
He with initial rotational levels up to $j=20$ was performed for kinetic
energies from 10$^{-5}$ to 15000 cm$^{-1}$. State-to-state rate coefficients
for temperatures between 0.1 and 3000 K are also presented. The comparison of
the present results with previous work for lowly-excited rotational levels
reveals significant differences. In estimating HF-H$_2$ rate coefficients, the
reduced-potential method is found to be more reliable than the standard
reduced-mass approach. The current state-to-state rate coefficient calculations
are the most comprehensive to date for HF-He collisions. We attribute the
differences between previously reported and our results to differences in the
adopted interaction potential energy surfaces. The new He rate coefficients can
be used in a variety of applications. The estimated H$_2$ and H collision rates
can also augment the smaller datasets previously developed for H$_2$ and
electrons.
| no_new_dataset | 0.946399 |
1506.02732 | Zhiguang Wang | Wei Song, Zhiguang Wang, Yangdong Ye, Ming Fan | Empirical Studies on Symbolic Aggregation Approximation Under
Statistical Perspectives for Knowledge Discovery in Time Series | 7 pages, 6 figures. Accepted by FSKD 2015 | null | null | null | cs.LG cs.IT math.IT | http://creativecommons.org/licenses/by/3.0/ | Symbolic Aggregation approXimation (SAX) has been the de facto standard
representation method for knowledge discovery in time series on a number of
tasks and applications. So far, very little work has been done in empirically
investigating the intrinsic properties and statistical mechanics in SAX words.
In this paper, we applied several statistical measurements and proposed a new
statistical measurement, i.e. information embedding cost (IEC) to analyze the
statistical behaviors of the symbolic dynamics. Our experiments on the
benchmark datasets and the clinical signals demonstrate that SAX can always
reduce the complexity while preserving the core information embedded in the
original time series with significant embedding efficiency. Our proposed IEC
score provides an a priori criterion to determine if SAX is adequate for a specific dataset,
which can be generalized to evaluate other symbolic representations. Our work
provides an analytical framework with several statistical tools to analyze,
evaluate and further improve the symbolic dynamics for knowledge discovery in
time series.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 23:52:04 GMT"
}
] | 2015-06-10T00:00:00 | [
[
"Song",
"Wei",
""
],
[
"Wang",
"Zhiguang",
""
],
[
"Ye",
"Yangdong",
""
],
[
"Fan",
"Ming",
""
]
] | TITLE: Empirical Studies on Symbolic Aggregation Approximation Under
Statistical Perspectives for Knowledge Discovery in Time Series
ABSTRACT: Symbolic Aggregation approXimation (SAX) has been the de facto standard
representation method for knowledge discovery in time series on a number of
tasks and applications. So far, very little work has been done in empirically
investigating the intrinsic properties and statistical mechanics in SAX words.
In this paper, we applied several statistical measurements and proposed a new
statistical measurement, i.e. information embedding cost (IEC) to analyze the
statistical behaviors of the symbolic dynamics. Our experiments on the
benchmark datasets and the clinical signals demonstrate that SAX can always
reduce the complexity while preserving the core information embedded in the
original time series with significant embedding efficiency. Our proposed IEC
score provides an a priori criterion to determine if SAX is adequate for a specific dataset,
which can be generalized to evaluate other symbolic representations. Our work
provides an analytical framework with several statistical tools to analyze,
evaluate and further improve the symbolic dynamics for knowledge discovery in
time series.
| no_new_dataset | 0.944944 |
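Below is a minimal sketch of the SAX representation analyzed above, assuming the series length divides evenly into segments and a four-letter alphabet.

```python
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Minimal SAX: z-normalize, piecewise-aggregate into n_segments means,
    then map each mean to a symbol via Gaussian breakpoints."""
    x = (series - series.mean()) / (series.std() + 1e-12)
    paa = x.reshape(n_segments, -1).mean(axis=1)     # assumes an even split
    breakpoints = [-0.67, 0.0, 0.67]                 # N(0,1) quartiles for 4 symbols
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t)))   # e.g. 'cddcbaab', a coarse symbolic shape of one sine period
```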
1202.2160 | Laurent Najman | Cl\'ement Farabet and Camille Couprie and Laurent Najman and Yann
LeCun | Scene Parsing with Multiscale Feature Learning, Purity Trees, and
Optimal Covers | 9 pages, 4 figures - Published in 29th International Conference on
Machine Learning (ICML 2012), Jun 2012, Edinburgh, United Kingdom | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene parsing, or semantic segmentation, consists in labeling each pixel in
an image with the category of the object it belongs to. It is a challenging
task that involves the simultaneous detection, segmentation and recognition of
all the objects in the image.
The scene parsing method proposed here starts by computing a tree of segments
from a graph of pixel dissimilarities. Simultaneously, a set of dense feature
vectors is computed which encodes regions of multiple sizes centered on each
pixel. The feature extractor is a multiscale convolutional network trained from
raw pixels. The feature vectors associated with the segments covered by each
node in the tree are aggregated and fed to a classifier which produces an
estimate of the distribution of object categories contained in the segment. A
subset of tree nodes that cover the image are then selected so as to maximize
the average "purity" of the class distributions, hence maximizing the overall
likelihood that each segment will contain a single object. The convolutional
network feature extractor is trained end-to-end from raw pixels, alleviating
the need for engineered features. After training, the system is parameter free.
The system yields record accuracies on the Stanford Background Dataset (8
classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170
classes) while being an order of magnitude faster than competing approaches,
producing a 320 $\times$ 240 image labeling in less than 1 second.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2012 00:30:48 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jul 2012 21:32:24 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Farabet",
"Clément",
""
],
[
"Couprie",
"Camille",
""
],
[
"Najman",
"Laurent",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Scene Parsing with Multiscale Feature Learning, Purity Trees, and
Optimal Covers
ABSTRACT: Scene parsing, or semantic segmentation, consists in labeling each pixel in
an image with the category of the object it belongs to. It is a challenging
task that involves the simultaneous detection, segmentation and recognition of
all the objects in the image.
The scene parsing method proposed here starts by computing a tree of segments
from a graph of pixel dissimilarities. Simultaneously, a set of dense feature
vectors is computed which encodes regions of multiple sizes centered on each
pixel. The feature extractor is a multiscale convolutional network trained from
raw pixels. The feature vectors associated with the segments covered by each
node in the tree are aggregated and fed to a classifier which produces an
estimate of the distribution of object categories contained in the segment. A
subset of tree nodes that cover the image are then selected so as to maximize
the average "purity" of the class distributions, hence maximizing the overall
likelihood that each segment will contain a single object. The convolutional
network feature extractor is trained end-to-end from raw pixels, alleviating
the need for engineered features. After training, the system is parameter free.
The system yields record accuracies on the Stanford Background Dataset (8
classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170
classes) while being an order of magnitude faster than competing approaches,
producing a 320 $\times$ 240 image labeling in less than 1 second.
| no_new_dataset | 0.94801 |
1312.5851 | Mikael Henaff | Michael Mathieu, Mikael Henaff, Yann LeCun | Fast Training of Convolutional Networks through FFTs | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional networks are one of the most widely employed architectures in
computer vision and machine learning. In order to leverage their ability to
learn complex functions, large amounts of data are required for training.
Training a large convolutional network to produce state-of-the-art results can
take weeks, even when using modern GPUs. Producing labels using a trained
network can also be costly when dealing with web-scale datasets. In this work,
we present a simple algorithm which accelerates training and inference by a
significant factor, and can yield improvements of over an order of magnitude
compared to existing state-of-the-art implementations. This is done by
computing convolutions as pointwise products in the Fourier domain while
reusing the same transformed feature map many times. The algorithm is
implemented on a GPU architecture and addresses a number of related challenges.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 08:42:21 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jan 2014 00:28:06 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jan 2014 01:33:21 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Feb 2014 03:20:51 GMT"
},
{
"version": "v5",
"created": "Thu, 6 Mar 2014 23:27:18 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Mathieu",
"Michael",
""
],
[
"Henaff",
"Mikael",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Fast Training of Convolutional Networks through FFTs
ABSTRACT: Convolutional networks are one of the most widely employed architectures in
computer vision and machine learning. In order to leverage their ability to
learn complex functions, large amounts of data are required for training.
Training a large convolutional network to produce state-of-the-art results can
take weeks, even when using modern GPUs. Producing labels using a trained
network can also be costly when dealing with web-scale datasets. In this work,
we present a simple algorithm which accelerates training and inference by a
significant factor, and can yield improvements of over an order of magnitude
compared to existing state-of-the-art implementations. This is done by
computing convolutions as pointwise products in the Fourier domain while
reusing the same transformed feature map many times. The algorithm is
implemented on a GPU architecture and addresses a number of related challenges.
| no_new_dataset | 0.951549 |
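Below is a minimal sketch of the core idea described above: computing convolutions as pointwise products in the Fourier domain while reusing the transformed input across kernels. Circular correlation is used for brevity; the paper additionally handles cropping, batching, and a GPU implementation.

```python
import numpy as np

def fft_correlate_batch(feature_map, kernels):
    """Cross-correlate one input feature map with many kernels via pointwise
    products in the Fourier domain. The input is transformed once and the
    transform is reused for every kernel, which is the source of the
    speedup described in the abstract above."""
    H, W = feature_map.shape
    F = np.fft.rfft2(feature_map)                        # transformed once
    out = []
    for k in kernels:                                    # reused per kernel
        K = np.fft.rfft2(k, s=(H, W))
        out.append(np.fft.irfft2(F * np.conj(K), s=(H, W)))
    return np.stack(out)                                 # circular correlation

x = np.random.default_rng(2).standard_normal((32, 32))
kernels = np.random.default_rng(3).standard_normal((4, 5, 5))
print(fft_correlate_batch(x, kernels).shape)             # (4, 32, 32)
```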
1405.6159 | Mariano Tepper | Mariano Tepper and Guillermo Sapiro | A Bi-clustering Framework for Consensus Problems | null | null | 10.1137/140967325 | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider grouping as a general characterization for problems such as
clustering, community detection in networks, and multiple parametric model
estimation. We are interested in merging solutions from different grouping
algorithms, distilling all their good qualities into a consensus solution. In
this paper, we propose a bi-clustering framework and perspective for reaching
consensus in such grouping problems. In particular, this is the first time that
the task of finding/fitting multiple parametric models to a dataset is formally
posed as a consensus problem. We highlight the equivalence of these tasks and
establish the connection with the computational Gestalt program, that seeks to
provide a psychologically-inspired detection theory for visual events. We also
present a simple but powerful bi-clustering algorithm, specially tuned to the
nature of the problem we address, though general enough to handle many
different instances inscribed within our characterization. The presentation is
accompanied with diverse and extensive experimental results in clustering,
community detection, and multiple parametric model estimation in image
processing applications.
| [
{
"version": "v1",
"created": "Wed, 30 Apr 2014 21:58:10 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Jun 2014 17:44:55 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Aug 2014 22:12:15 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Tepper",
"Mariano",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: A Bi-clustering Framework for Consensus Problems
ABSTRACT: We consider grouping as a general characterization for problems such as
clustering, community detection in networks, and multiple parametric model
estimation. We are interested in merging solutions from different grouping
algorithms, distilling all their good qualities into a consensus solution. In
this paper, we propose a bi-clustering framework and perspective for reaching
consensus in such grouping problems. In particular, this is the first time that
the task of finding/fitting multiple parametric models to a dataset is formally
posed as a consensus problem. We highlight the equivalence of these tasks and
establish the connection with the computational Gestalt program, that seeks to
provide a psychologically-inspired detection theory for visual events. We also
present a simple but powerful bi-clustering algorithm, specially tuned to the
nature of the problem we address, though general enough to handle many
different instances inscribed within our characterization. The presentation is
accompanied with diverse and extensive experimental results in clustering,
community detection, and multiple parametric model estimation in image
processing applications.
| no_new_dataset | 0.9463 |
1406.1476 | Toufiq Parag | Toufiq Parag, Anirban Chakraborty, Stephen Plaza and Lou Scheffer | A Context-aware Delayed Agglomeration Framework for Electron Microscopy
Segmentation | null | PLoS ONE 10(5): e0125825, 2015 | 10.1371/journal.pone.0125825 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electron Microscopy (EM) image (or volume) segmentation has become
significantly important in recent years as an instrument for connectomics. This
paper proposes a novel agglomerative framework for EM segmentation. In
particular, given an over-segmented image or volume, we propose a novel
framework for accurately clustering regions of the same neuron. Unlike existing
agglomerative methods, the proposed context-aware algorithm divides superpixels
(over-segmented regions) of different biological entities into different
subsets and agglomerates them separately. In addition, this paper describes a
"delayed" scheme for agglomerative clustering that postpones some of the merge
decisions, pertaining to newly formed bodies, in order to generate a more
confident boundary prediction. We report significant improvements attained by
the proposed approach in segmentation accuracy over existing standard methods
on 2D and 3D datasets.
| [
{
"version": "v1",
"created": "Thu, 5 Jun 2014 18:46:38 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jun 2014 13:06:53 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Aug 2014 17:22:34 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Sep 2014 19:57:10 GMT"
},
{
"version": "v5",
"created": "Mon, 23 Mar 2015 15:28:02 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Parag",
"Toufiq",
""
],
[
"Chakraborty",
"Anirban",
""
],
[
"Plaza",
"Stephen",
""
],
[
"Scheffer",
"Lou",
""
]
] | TITLE: A Context-aware Delayed Agglomeration Framework for Electron Microscopy
Segmentation
ABSTRACT: Electron Microscopy (EM) image (or volume) segmentation has become
significantly important in recent years as an instrument for connectomics. This
paper proposes a novel agglomerative framework for EM segmentation. In
particular, given an over-segmented image or volume, we propose a novel
framework for accurately clustering regions of the same neuron. Unlike existing
agglomerative methods, the proposed context-aware algorithm divides superpixels
(over-segmented regions) of different biological entities into different
subsets and agglomerates them separately. In addition, this paper describes a
"delayed" scheme for agglomerative clustering that postpones some of the merge
decisions, pertaining to newly formed bodies, in order to generate a more
confident boundary prediction. We report significant improvements attained by
the proposed approach in segmentation accuracy over existing standard methods
on 2D and 3D datasets.
| no_new_dataset | 0.953923 |
1409.2752 | Alireza Makhzani | Alireza Makhzani, Brendan Frey | Winner-Take-All Autoencoders | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a winner-take-all method for learning hierarchical
sparse representations in an unsupervised fashion. We first introduce
fully-connected winner-take-all autoencoders which use mini-batch statistics to
directly enforce a lifetime sparsity in the activations of the hidden units. We
then propose the convolutional winner-take-all autoencoder which combines the
benefits of convolutional architectures and autoencoders for learning
shift-invariant sparse representations. We describe a way to train
convolutional autoencoders layer by layer, where in addition to lifetime
sparsity, a spatial sparsity within each feature map is achieved using
winner-take-all activation functions. We will show that winner-take-all
autoencoders can be used to learn deep sparse representations from the
MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets,
and achieve competitive classification performance.
| [
{
"version": "v1",
"created": "Tue, 9 Sep 2014 14:38:43 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Jun 2015 18:28:22 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Makhzani",
"Alireza",
""
],
[
"Frey",
"Brendan",
""
]
] | TITLE: Winner-Take-All Autoencoders
ABSTRACT: In this paper, we propose a winner-take-all method for learning hierarchical
sparse representations in an unsupervised fashion. We first introduce
fully-connected winner-take-all autoencoders which use mini-batch statistics to
directly enforce a lifetime sparsity in the activations of the hidden units. We
then propose the convolutional winner-take-all autoencoder which combines the
benefits of convolutional architectures and autoencoders for learning
shift-invariant sparse representations. We describe a way to train
convolutional autoencoders layer by layer, where in addition to lifetime
sparsity, a spatial sparsity within each feature map is achieved using
winner-take-all activation functions. We will show that winner-take-all
autoencoders can be used to learn deep sparse representations from the
MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets,
and achieve competitive classification performance.
| no_new_dataset | 0.946843 |
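Below is a minimal sketch of the lifetime-sparsity constraint described above, operating on a plain NumPy array of hidden activations; the batch size, unit count, and k are illustrative.

```python
import numpy as np

def lifetime_sparsity(activations, k):
    """Winner-take-all lifetime sparsity: for each hidden unit (column),
    keep only its k largest activations across the mini-batch and zero
    the rest, so every unit fires on at most k examples per batch."""
    out = np.zeros_like(activations)
    rows = np.argsort(activations, axis=0)[-k:]      # top-k examples per unit
    cols = np.arange(activations.shape[1])
    out[rows, cols] = activations[rows, cols]
    return out

h = np.random.default_rng(4).standard_normal((128, 16))  # batch of 128, 16 units
s = lifetime_sparsity(h, k=5)
print((s != 0).sum(axis=0))                               # 5 nonzeros per unit
```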
1503.01578 | Sanghyuk Chun | Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin | Scalable Iterative Algorithm for Robust Subspace Clustering | This paper has been withdrawn by the author due to an error in the
initialization section | null | null | null | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subspace clustering (SC) is a popular method for dimensionality reduction of
high-dimensional data, where it generalizes Principal Component Analysis (PCA).
Recently, several methods have been proposed to enhance the robustness of PCA
and SC, while most of them are computationally very expensive, in particular,
for high dimensional large-scale data. In this paper, we develop much faster
iterative algorithms for SC, incorporating robustness using a {\em non-squared}
$\ell_2$-norm objective. The known implementations for optimizing the objective
would be costly due to the alternative optimization of two separate objectives:
optimal cluster-membership assignment and robust subspace selection, while the
substitution of one process with a faster surrogate can cause failure in
convergence. To address the issue, we use a simplified procedure requiring
efficient matrix-vector multiplications for subspace update instead of solving
an expensive eigenvector problem at each iteration, in addition to releasing
nested robust PCA loops. We prove that the proposed algorithm monotonically
converges to a local minimum with approximation guarantees, e.g., it achieves
2-approximation for the robust PCA objective. In our experiments, the proposed
algorithm is shown to converge an order of magnitude faster than known
algorithms optimizing the same objective, and outperforms prior subspace
clustering methods in accuracy and running time for the MNIST dataset.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 08:54:51 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jun 2015 20:47:35 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Chun",
"Sanghyuk",
""
],
[
"Noh",
"Yung-Kyun",
""
],
[
"Shin",
"Jinwoo",
""
]
] | TITLE: Scalable Iterative Algorithm for Robust Subspace Clustering
ABSTRACT: Subspace clustering (SC) is a popular method for dimensionality reduction of
high-dimensional data, where it generalizes Principal Component Analysis (PCA).
Recently, several methods have been proposed to enhance the robustness of PCA
and SC, while most of them are computationally very expensive, in particular,
for high dimensional large-scale data. In this paper, we develop much faster
iterative algorithms for SC, incorporating robustness using a {\em non-squared}
$\ell_2$-norm objective. The known implementations for optimizing the objective
would be costly due to the alternative optimization of two separate objectives:
optimal cluster-membership assignment and robust subspace selection, while the
substitution of one process with a faster surrogate can cause failure in
convergence. To address the issue, we use a simplified procedure requiring
efficient matrix-vector multiplications for subspace update instead of solving
an expensive eigenvector problem at each iteration, in addition to releasing
nested robust PCA loops. We prove that the proposed algorithm monotonically
converges to a local minimum with approximation guarantees, e.g., it achieves
2-approximation for the robust PCA objective. In our experiments, the proposed
algorithm is shown to converge an order of magnitude faster than known
algorithms optimizing the same objective, and outperforms prior subspace
clustering methods in accuracy and running time for the MNIST dataset.
| no_new_dataset | 0.94743 |
1506.01744 | Kevin Chen | Chicheng Zhang, Jimin Song, Kevin C Chen, Kamalika Chaudhuri | Spectral Learning of Large Structured HMMs for Comparative Epigenomics | 27 pages, 3 figures | null | null | null | stat.ML cs.LG math.ST q-bio.GN stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a latent variable model and an efficient spectral algorithm
motivated by the recent emergence of very large data sets of chromatin marks
from multiple human cell types. A natural model for chromatin data in one cell
type is a Hidden Markov Model (HMM); we model the relationship between multiple
cell types by connecting their hidden states by a fixed tree of known
structure. The main challenge with learning parameters of such models is that
iterative methods such as EM are very slow, while naive spectral methods result
in time and space complexity exponential in the number of cell types. We
exploit properties of the tree structure of the hidden states to provide
spectral algorithms that are more computationally efficient for current
biological datasets. We provide sample complexity bounds for our algorithm and
evaluate it experimentally on biological data from nine human cell types.
Finally, we show that beyond our specific model, some of our algorithmic ideas
can be applied to other graphical models.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 22:57:28 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Zhang",
"Chicheng",
""
],
[
"Song",
"Jimin",
""
],
[
"Chen",
"Kevin C",
""
],
[
"Chaudhuri",
"Kamalika",
""
]
] | TITLE: Spectral Learning of Large Structured HMMs for Comparative Epigenomics
ABSTRACT: We develop a latent variable model and an efficient spectral algorithm
motivated by the recent emergence of very large data sets of chromatin marks
from multiple human cell types. A natural model for chromatin data in one cell
type is a Hidden Markov Model (HMM); we model the relationship between multiple
cell types by connecting their hidden states by a fixed tree of known
structure. The main challenge with learning parameters of such models is that
iterative methods such as EM are very slow, while naive spectral methods result
in time and space complexity exponential in the number of cell types. We
exploit properties of the tree structure of the hidden states to provide
spectral algorithms that are more computationally efficient for current
biological datasets. We provide sample complexity bounds for our algorithm and
evaluate it experimentally on biological data from nine human cell types.
Finally, we show that beyond our specific model, some of our algorithmic ideas
can be applied to other graphical models.
| no_new_dataset | 0.942929 |
1506.02075 | Antoine Bordes | Antoine Bordes, Nicolas Usunier, Sumit Chopra, Jason Weston | Large-scale Simple Question Answering with Memory Networks | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training large-scale question answering systems is complicated because
training sources usually cover a small portion of the range of possible
questions. This paper studies the impact of multitask and transfer learning for
simple question answering; a setting for which the reasoning required to answer
is quite easy, as long as one can retrieve the correct evidence given a
question, which can be difficult in large-scale conditions. To this end, we
introduce a new dataset of 100k questions that we use in conjunction with
existing benchmarks. We conduct our study within the framework of Memory
Networks (Weston et al., 2015) because this perspective allows us to eventually
scale up to more complex reasoning, and show that Memory Networks can be
successfully trained to achieve excellent performance.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 21:48:39 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Bordes",
"Antoine",
""
],
[
"Usunier",
"Nicolas",
""
],
[
"Chopra",
"Sumit",
""
],
[
"Weston",
"Jason",
""
]
] | TITLE: Large-scale Simple Question Answering with Memory Networks
ABSTRACT: Training large-scale question answering systems is complicated because
training sources usually cover a small portion of the range of possible
questions. This paper studies the impact of multitask and transfer learning for
simple question answering; a setting for which the reasoning required to answer
is quite easy, as long as one can retrieve the correct evidence given a
question, which can be difficult in large-scale conditions. To this end, we
introduce a new dataset of 100k questions that we use in conjunction with
existing benchmarks. We conduct our study within the framework of Memory
Networks (Weston et al., 2015) because this perspective allows us to eventually
scale up to more complex reasoning, and show that Memory Networks can be
successfully trained to achieve excellent performance.
| new_dataset | 0.959459 |
1506.02079 | Michael Kazhdan | Michael Kazhdan, Kunal Lillaney, William Roncal, Davi Bock, Joshua
Vogelstein, and Randal Burns | Gradient-Domain Fusion for Color Correction in Large EM Image Stacks | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new gradient-domain technique for processing registered EM image
stacks to remove inter-image discontinuities while preserving intra-image
detail. To this end, we process the image stack by first performing anisotropic
smoothing along the slice axis and then solving a Poisson equation within each
slice to re-introduce the detail. The final image stack is continuous across
the slice axis and maintains sharp details within each slice. Adapting existing
out-of-core techniques for solving the linear system, we describe a parallel
algorithm with time complexity that is linear in the size of the data and space
complexity that is sub-linear, allowing us to process datasets as large as five
teravoxels with a 600 MB memory footprint.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 22:35:31 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Kazhdan",
"Michael",
""
],
[
"Lillaney",
"Kunal",
""
],
[
"Roncal",
"William",
""
],
[
"Bock",
"Davi",
""
],
[
"Vogelstein",
"Joshua",
""
],
[
"Burns",
"Randal",
""
]
] | TITLE: Gradient-Domain Fusion for Color Correction in Large EM Image Stacks
ABSTRACT: We propose a new gradient-domain technique for processing registered EM image
stacks to remove inter-image discontinuities while preserving intra-image
detail. To this end, we process the image stack by first performing anisotropic
smoothing along the slice axis and then solving a Poisson equation within each
slice to re-introduce the detail. The final image stack is continuous across
the slice axis and maintains sharp details within each slice. Adapting existing
out-of-core techniques for solving the linear system, we describe a parallel
algorithm with time complexity that is linear in the size of the data and space
complexity that is sub-linear, allowing us to process datasets as large as five
teravoxels with a 600 MB memory footprint.
| no_new_dataset | 0.955152 |
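Below is a toy 1-D analogue of the pipeline above, treating each row of an array as a "slice". In 1-D, solving the Poisson equation with the original in-slice gradients reduces to shifting each slice's mean to its smoothed value; the real system solves a full 2-D Poisson problem per slice, out of core.

```python
import numpy as np

def destripe(stack, sigma_slices=4):
    """Toy 1-D analogue: smooth the per-slice mean intensity across the
    slice axis, then restore each slice's internal gradients exactly by
    shifting only its DC term (the 1-D Poisson solve collapses to this)."""
    means = stack.mean(axis=1)
    kernel = np.exp(-0.5 * (np.arange(-12, 13) / sigma_slices) ** 2)
    kernel /= kernel.sum()
    smooth = np.convolve(means, kernel, mode="same")   # smoothing along slices
    return stack - means[:, None] + smooth[:, None]    # detail preserved per slice

stack = np.random.default_rng(5).standard_normal((50, 200)).cumsum(axis=1)
stack += np.random.default_rng(6).uniform(-5, 5, size=(50, 1))  # brightness jumps
print(np.abs(np.diff(destripe(stack).mean(axis=1))).max())      # jumps removed
```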
1506.02154 | Benyuan Liu | Benyuan Liu and Hongqi Fan and Qiang Fu and Zhilin Zhang | Bayesian De-quantization and Data Compression for Low-Energy
Physiological Signal Telemonitoring | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the issue of applying quantized compressed sensing (CS) to
low-energy telemonitoring. So far, few works studied this problem in
applications where signals were only approximately sparse. We propose a
two-stage data compressor based on quantized CS, where signals are compressed
by compressed sensing and then the compressed measurements are quantized with
only 2 bits per measurement. This compressor can greatly reduce the
transmission bit-budget. To recover signals from underdetermined, quantized
measurements, we develop a Bayesian De-quantization algorithm. It can exploit
both the model of quantization errors and the correlated structure of
physiological signals to improve the quality of recovery. The proposed data
compressor and the recovery algorithm are validated on a dataset recorded on 12
subjects during fast running. Experimental results showed that an average 2.596
beats-per-minute (BPM) estimation error was achieved by jointly using compressed
sensing with 50% compression ratio and a 2-bit quantizer. The results imply
that we can effectively transmit n bits instead of n samples, which is a
substantial improvement for low-energy wireless telemonitoring.
| [
{
"version": "v1",
"created": "Sat, 6 Jun 2015 14:29:49 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Liu",
"Benyuan",
""
],
[
"Fan",
"Hongqi",
""
],
[
"Fu",
"Qiang",
""
],
[
"Zhang",
"Zhilin",
""
]
] | TITLE: Bayesian De-quantization and Data Compression for Low-Energy
Physiological Signal Telemonitoring
ABSTRACT: We address the issue of applying quantized compressed sensing (CS) to
low-energy telemonitoring. So far, few works studied this problem in
applications where signals were only approximately sparse. We propose a
two-stage data compressor based on quantized CS, where signals are compressed
by compressed sensing and then the compressed measurements are quantized with
only 2 bits per measurement. This compressor can greatly reduce the
transmission bit-budget. To recover signals from underdetermined, quantized
measurements, we develop a Bayesian De-quantization algorithm. It can exploit
both the model of quantization errors and the correlated structure of
physiological signals to improve the quality of recovery. The proposed data
compressor and the recovery algorithm are validated on a dataset recorded on 12
subjects during fast running. Experimental results showed that an average 2.596
beats-per-minute (BPM) estimation error was achieved by jointly using compressed
sensing with 50% compression ratio and a 2-bit quantizer. The results imply
that we can effectively transmit n bits instead of n samples, which is a
substantial improvement for low-energy wireless telemonitoring.
| no_new_dataset | 0.948155 |
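Below is a minimal sketch of the two-stage compressor described above. The Gaussian measurement matrix, the scaling, and the uniform 4-level quantizer are illustrative choices, and the Bayesian de-quantization recovery step is not shown.

```python
import numpy as np

def compress(signal, m, rng):
    """Two-stage compressor sketch: random Gaussian CS measurements,
    then a uniform 2-bit quantizer (4 levels) on each measurement."""
    phi = rng.standard_normal((m, signal.size)) / np.sqrt(m)
    y = phi @ signal
    lo, hi = y.min(), y.max()
    codes = np.clip(((y - lo) / (hi - lo) * 4).astype(int), 0, 3)  # 2 bits each
    return phi, codes, (lo, hi)

rng = np.random.default_rng(7)
x = rng.standard_normal(500)
phi, codes, y_range = compress(x, m=250, rng=rng)    # 50% compression ratio
print(codes.size * 2, "bits transmitted vs", x.size * 64, "raw")
```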
1506.02184 | Jun Ye | Jun Ye, Hao Hu, Kai Li, Guo-Jun Qi and Kien A. Hua | First-Take-All: Temporal Order-Preserving Hashing for 3D Action Videos | 9 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the prevalence of the commodity depth cameras, the new paradigm of user
interfaces based on 3D motion capture and recognition has dramatically
changed the way humans and computers interact. Human action
recognition, as one of the key components in these devices, plays an important
role to guarantee the quality of user experience. Although the model-driven
methods have achieved huge success, they cannot provide a scalable solution for
efficiently storing, retrieving and recognizing actions in the large-scale
applications. These models are also vulnerable to the temporal translation and
warping, as well as the variations in motion scales and execution rates. To
address these challenges, we propose to treat the 3D human action recognition
as a video-level hashing problem and propose a novel First-Take-All (FTA)
Hashing algorithm capable of hashing the entire video into hash codes of fixed
length. We demonstrate that this FTA algorithm produces a compact
representation of the video invariant to the above mentioned variations,
through which action recognition can be solved by an efficient nearest neighbor
search by the Hamming distance between the FTA hash codes. Experiments on the
public 3D human action datasets shows that the FTA algorithm can reach a
recognition accuracy higher than 80%, with about 15 bits per frame considering
there are 65 frames per video over the datasets.
| [
{
"version": "v1",
"created": "Sat, 6 Jun 2015 19:36:11 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Ye",
"Jun",
""
],
[
"Hu",
"Hao",
""
],
[
"Li",
"Kai",
""
],
[
"Qi",
"Guo-Jun",
""
],
[
"Hua",
"Kien A.",
""
]
] | TITLE: First-Take-All: Temporal Order-Preserving Hashing for 3D Action Videos
ABSTRACT: With the prevalence of the commodity depth cameras, the new paradigm of user
interfaces based on 3D motion capture and recognition has dramatically
changed the way humans and computers interact. Human action
recognition, as one of the key components in these devices, plays an important
role to guarantee the quality of user experience. Although the model-driven
methods have achieved huge success, they cannot provide a scalable solution for
efficiently storing, retrieving and recognizing actions in the large-scale
applications. These models are also vulnerable to the temporal translation and
warping, as well as the variations in motion scales and execution rates. To
address these challenges, we propose to treat the 3D human action recognition
as a video-level hashing problem and propose a novel First-Take-All (FTA)
Hashing algorithm capable of hashing the entire video into hash codes of fixed
length. We demonstrate that this FTA algorithm produces a compact
representation of the video invariant to the above mentioned variations,
through which action recognition can be solved by an efficient nearest neighbor
search by the Hamming distance between the FTA hash codes. Experiments on the
public 3D human action datasets show that the FTA algorithm can reach a
recognition accuracy higher than 80%, with about 15 bits per frame considering
there are 65 frames per video over the datasets.
| no_new_dataset | 0.941115 |
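Below is a simplified sketch of the First-Take-All idea described above: record which of two random projections peaks first in time, so the code depends on temporal order rather than magnitudes. The projection directions must be fixed and shared across videos; details of the paper's construction differ.

```python
import numpy as np

def fta_hash(frames, n_bits, rng):
    """First-Take-All sketch: project each frame onto random directions and,
    for each random pair of projections, record which one reaches its
    maximum first in time. Comparing peak ORDER (not values) gives
    invariance to temporal translation and uniform time warping."""
    proj = frames @ rng.standard_normal((frames.shape[1], 2 * n_bits))
    peaks = proj.argmax(axis=0)                     # frame index of each peak
    return (peaks[0::2] < peaks[1::2]).astype(np.uint8)

video = np.random.default_rng(8).standard_normal((65, 30))  # 65 frames, 30-D poses
print(fta_hash(video, n_bits=16, rng=np.random.default_rng(9)))  # 16-bit code
```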
1506.02203 | Matteo Ruggero Ronchi | Matteo Ruggero Ronchi and Pietro Perona | Describing Common Human Visual Actions in Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Which common human actions and interactions are recognizable in monocular
still images? Which involve objects and/or other people? How many actions is a person
performing at a time? We address these questions by exploring the actions and
interactions that are detectable in the images of the MS COCO dataset. We make
two main contributions. First, a list of 140 common `visual actions', obtained
by analyzing the largest on-line verb lexicon currently available for English
(VerbNet) and human sentences used to describe images in MS COCO. Second, a
complete set of annotations for those `visual actions', composed of
subject-object and associated verb, which we call COCO-a (a for `actions').
COCO-a is larger than existing action datasets in terms of number of actions
and instances of these actions, and is unique because it is data-driven, rather
than experimenter-biased. Other unique features are that it is exhaustive, and
that all subjects and objects are localized. A statistical analysis of the
accuracy of our annotations and of each action, interaction and subject-object
combination is provided.
| [
{
"version": "v1",
"created": "Sun, 7 Jun 2015 00:33:23 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Ronchi",
"Matteo Ruggero",
""
],
[
"Perona",
"Pietro",
""
]
] | TITLE: Describing Common Human Visual Actions in Images
ABSTRACT: Which common human actions and interactions are recognizable in monocular
still images? Which involve objects and/or other people? How many actions is a
person performing at a time? We address these questions by exploring the actions and
interactions that are detectable in the images of the MS COCO dataset. We make
two main contributions. First, a list of 140 common `visual actions', obtained
by analyzing the largest on-line verb lexicon currently available for English
(VerbNet) and human sentences used to describe images in MS COCO. Second, a
complete set of annotations for those `visual actions', composed of
subject-object and associated verb, which we call COCO-a (a for `actions').
COCO-a is larger than existing action datasets in terms of number of actions
and instances of these actions, and is unique because it is data-driven, rather
than experimenter-biased. Other unique features are that it is exhaustive, and
that all subjects and objects are localized. A statistical analysis of the
accuracy of our annotations and of each action, interaction and subject-object
combination is provided.
| no_new_dataset | 0.74895 |
1506.02211 | Chao Dong | Chao Dong and Ximei Zhu and Yubin Deng and Chen Change Loy and Yu Qiao | Boosting Optical Character Recognition: A Super-Resolution Approach | 5 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text image super-resolution is a challenging yet open research problem in the
computer vision community. In particular, low-resolution images hamper the
performance of typical optical character recognition (OCR) systems. In this
article, we summarize our entry to the ICDAR2015 Competition on Text Image
Super-Resolution. Experiments are based on the provided ICDAR2015 TextSR
dataset and the released Tesseract-OCR 3.02 system. We report that our winning
text image super-resolution framework largely improves OCR performance when
low-resolution images are used as input, reaching an OCR accuracy score of
77.19%, which is comparable with the 78.80% obtained using the original
high-resolution images.
| [
{
"version": "v1",
"created": "Sun, 7 Jun 2015 02:29:45 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Dong",
"Chao",
""
],
[
"Zhu",
"Ximei",
""
],
[
"Deng",
"Yubin",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Qiao",
"Yu",
""
]
] | TITLE: Boosting Optical Character Recognition: A Super-Resolution Approach
ABSTRACT: Text image super-resolution is a challenging yet open research problem in the
computer vision community. In particular, low-resolution images hamper the
performance of typical optical character recognition (OCR) systems. In this
article, we summarize our entry to the ICDAR2015 Competition on Text Image
Super-Resolution. Experiments are based on the provided ICDAR2015 TextSR
dataset and the released Tesseract-OCR 3.02 system. We report that our winning
text image super-resolution framework largely improves OCR performance when
low-resolution images are used as input, reaching an OCR accuracy score of
77.19%, which is comparable with the 78.80% obtained using the original
high-resolution images.
| no_new_dataset | 0.953275 |
1506.02268 | George Grispos | George Grispos, William Bradley Glisson and Tim Storer | Recovering Residual Forensic Data from Smartphone Interactions with
Cloud Storage Providers | null | 2015. In The Cloud Security Ecosystem, edited by Ryan Ko and
Kim-Kwang Raymond Choo, Syngress, Boston, Pages 347-382 | 10.1016/B978-0-12-801595-7.00016-1 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing demand for cloud storage services such as Dropbox, Box,
Syncplicity and SugarSync. These public cloud storage services can store
gigabytes of corporate and personal data in remote data centres around the
world, which can then be synchronized to multiple devices. This creates an
environment which is potentially conducive to security incidents, data breaches
and other malicious activities. The forensic investigation of public cloud
environments presents a number of new challenges for the digital forensics
community. However, it is anticipated that end-devices, such as smartphones,
will retain data from these cloud storage services. This research investigates
how forensic tools that are currently available to practitioners can be used to
provide a practical solution for the problems related to investigating cloud
storage environments. The research contribution is threefold. First, the
findings from this research support the idea that end-devices which have been
used to access cloud storage services can be used to provide a partial view of
the evidence stored in the cloud service. Second, the research provides a
comparison of the number of files which can be recovered from different
versions of cloud storage applications. In doing so, it also supports the idea
that amalgamating the files recovered from more than one device can result in
the recovery of a more complete dataset. Third, the chapter contributes to the
documentation and evidentiary discussion of the artefacts created from specific
cloud storage applications and different versions of these applications on iOS
and Android smartphones.
| [
{
"version": "v1",
"created": "Sun, 7 Jun 2015 14:07:12 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Grispos",
"George",
""
],
[
"Glisson",
"William Bradley",
""
],
[
"Storer",
"Tim",
""
]
] | TITLE: Recovering Residual Forensic Data from Smartphone Interactions with
Cloud Storage Providers
ABSTRACT: There is a growing demand for cloud storage services such as Dropbox, Box,
Syncplicity and SugarSync. These public cloud storage services can store
gigabytes of corporate and personal data in remote data centres around the
world, which can then be synchronized to multiple devices. This creates an
environment which is potentially conducive to security incidents, data breaches
and other malicious activities. The forensic investigation of public cloud
environments presents a number of new challenges for the digital forensics
community. However, it is anticipated that end-devices, such as smartphones,
will retain data from these cloud storage services. This research investigates
how forensic tools that are currently available to practitioners can be used to
provide a practical solution for the problems related to investigating cloud
storage environments. The research contribution is threefold. First, the
findings from this research support the idea that end-devices which have been
used to access cloud storage services can be used to provide a partial view of
the evidence stored in the cloud service. Second, the research provides a
comparison of the number of files which can be recovered from different
versions of cloud storage applications. In doing so, it also supports the idea
that amalgamating the files recovered from more than one device can result in
the recovery of a more complete dataset. Third, the chapter contributes to the
documentation and evidentiary discussion of the artefacts created from specific
cloud storage applications and different versions of these applications on iOS
and Android smartphones.
| no_new_dataset | 0.934395 |
1506.02289 | Oana Goga | Oana Goga, Patrick Loiseau, Robin Sommer, Renata Teixeira, Krishna P.
Gummadi | On the Reliability of Profile Matching Across Large Online Social
Networks | 12 pages. To appear in KDD 2015. Extended version | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matching the profiles of a user across multiple online social networks brings
opportunities for new services and applications as well as new insights on user
online behavior, yet it raises serious privacy concerns. Prior literature has
proposed methods to match profiles and shown that it is possible to do so
accurately, but using evaluations that focused on sampled datasets only. In
this paper, we study the extent to which we can reliably match profiles in
practice, across real-world social networks, by exploiting public attributes,
i.e., information users publicly provide about themselves. Today's social
networks have hundreds of millions of users, which brings completely new
challenges as a reliable matching scheme must identify the correct matching
profile out of the millions of possible profiles. We first define a set of
properties for profile attributes--Availability, Consistency,
non-Impersonability, and Discriminability (ACID)--that are both necessary and
sufficient to determine the reliability of a matching scheme. Using these
properties, we propose a method to evaluate the accuracy of matching schemes in
real practical cases. Our results show that the accuracy in practice is
significantly lower than the one reported in prior literature. When considering
entire social networks, there is a non-negligible number of profiles that
belong to different users but have similar attributes, which leads to many
false matches. Our paper sheds light on the limits of matching profiles in the
real world and illustrates the correct methodology to evaluate matching schemes
in realistic scenarios.
| [
{
"version": "v1",
"created": "Sun, 7 Jun 2015 17:42:45 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Goga",
"Oana",
""
],
[
"Loiseau",
"Patrick",
""
],
[
"Sommer",
"Robin",
""
],
[
"Teixeira",
"Renata",
""
],
[
"Gummadi",
"Krishna P.",
""
]
] | TITLE: On the Reliability of Profile Matching Across Large Online Social
Networks
ABSTRACT: Matching the profiles of a user across multiple online social networks brings
opportunities for new services and applications as well as new insights on user
online behavior, yet it raises serious privacy concerns. Prior literature has
proposed methods to match profiles and shown that it is possible to do so
accurately, but using evaluations that focused on sampled datasets only. In
this paper, we study the extent to which we can reliably match profiles in
practice, across real-world social networks, by exploiting public attributes,
i.e., information users publicly provide about themselves. Today's social
networks have hundreds of millions of users, which brings completely new
challenges as a reliable matching scheme must identify the correct matching
profile out of the millions of possible profiles. We first define a set of
properties for profile attributes--Availability, Consistency,
non-Impersonability, and Discriminability (ACID)--that are both necessary and
sufficient to determine the reliability of a matching scheme. Using these
properties, we propose a method to evaluate the accuracy of matching schemes in
real practical cases. Our results show that the accuracy in practice is
significantly lower than the one reported in prior literature. When considering
entire social networks, there is a non-negligible number of profiles that
belong to different users but have similar attributes, which leads to many
false matches. Our paper sheds light on the limits of matching profiles in the
real world and illustrates the correct methodology to evaluate matching schemes
in realistic scenarios.
| no_new_dataset | 0.949669 |
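The abstract's central claim, that accuracy measured on small candidate samples overstates accuracy at network scale, can be illustrated with a toy nearest-neighbour matching experiment. Everything below (Gaussian attributes, noise level, candidate counts) is our own construction, not the paper's data or method.

```python
# Toy illustration: the same matcher gets less reliable as the candidate
# pool grows, because near-duplicate impostor profiles become more likely.
import numpy as np

rng = np.random.default_rng(1)

def match_accuracy(num_candidates, dim=8, trials=300, noise=0.3):
    correct = 0
    for _ in range(trials):
        true_profile = rng.standard_normal(dim)
        query = true_profile + noise * rng.standard_normal(dim)  # same user, noisy attributes
        impostors = rng.standard_normal((num_candidates - 1, dim))
        candidates = np.vstack([true_profile, impostors])
        # nearest-neighbour match; index 0 is the correct profile
        if np.linalg.norm(candidates - query, axis=1).argmin() == 0:
            correct += 1
    return correct / trials

for n in (10, 1000, 100000):
    print(n, match_accuracy(n))   # accuracy drops as the pool grows
```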
1506.02428 | Purushottam Kar | Kush Bhatia and Prateek Jain and Purushottam Kar | Robust Regression via Hard Thresholding | 24 pages, 3 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of Robust Least Squares Regression (RLSR) where several
response variables can be adversarially corrupted. More specifically, for a
data matrix X \in R^{p x n} and an underlying model w*, the response vector is
generated as y = X'w* + b where b \in R^n is the corruption vector supported
over at most C.n coordinates. Existing exact recovery results for RLSR focus
solely on L1-penalty based convex formulations and impose relatively strict
model assumptions such as requiring the corruptions b to be selected
independently of X.
In this work, we study a simple hard-thresholding algorithm called TORRENT
which, under mild conditions on X, can recover w* exactly even if b corrupts
the response variables in an adversarial manner, i.e. both the support and
entries of b are selected adversarially after observing X and w*. Our results
hold under deterministic assumptions which are satisfied if X is sampled from
any sub-Gaussian distribution. Finally, unlike existing results that apply only
to a fixed w*, generated independently of X, our results are universal and hold
for any w* \in R^p.
Next, we propose gradient descent-based extensions of TORRENT that can scale
efficiently to large scale problems, such as high dimensional sparse recovery
and prove similar recovery guarantees for these extensions. Empirically, we find
TORRENT, and more so its extensions, to offer significantly faster recovery
than the state-of-the-art L1 solvers. For instance, even on moderate-sized
datasets (with p = 50K) with around 40% corrupted responses, a variant of our
proposed method called TORRENT-HYB is more than 20x faster than the best L1
solver.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 10:13:53 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Bhatia",
"Kush",
""
],
[
"Jain",
"Prateek",
""
],
[
"Kar",
"Purushottam",
""
]
] | TITLE: Robust Regression via Hard Thresholding
ABSTRACT: We study the problem of Robust Least Squares Regression (RLSR) where several
response variables can be adversarially corrupted. More specifically, for a
data matrix X \in R^{p x n} and an underlying model w*, the response vector is
generated as y = X'w* + b where b \in R^n is the corruption vector supported
over at most C.n coordinates. Existing exact recovery results for RLSR focus
solely on L1-penalty based convex formulations and impose relatively strict
model assumptions such as requiring the corruptions b to be selected
independently of X.
In this work, we study a simple hard-thresholding algorithm called TORRENT
which, under mild conditions on X, can recover w* exactly even if b corrupts
the response variables in an adversarial manner, i.e. both the support and
entries of b are selected adversarially after observing X and w*. Our results
hold under deterministic assumptions which are satisfied if X is sampled from
any sub-Gaussian distribution. Finally, unlike existing results that apply only
to a fixed w*, generated independently of X, our results are universal and hold
for any w* \in R^p.
Next, we propose gradient descent-based extensions of TORRENT that can scale
efficiently to large scale problems, such as high dimensional sparse recovery
and prove similar recovery guarantees for these extensions. Empirically, we find
TORRENT, and more so its extensions, to offer significantly faster recovery
than the state-of-the-art L1 solvers. For instance, even on moderate-sized
datasets (with p = 50K) with around 40% corrupted responses, a variant of our
proposed method called TORRENT-HYB is more than 20x faster than the best L1
solver.
| no_new_dataset | 0.946794 |
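The fully corrective hard-thresholding loop described in the abstract is short enough to sketch directly: alternately fit least squares on the points currently believed clean, then keep the (1 - beta)n points with the smallest residuals. Variable names, the iteration budget and the demo setup are our assumptions.

```python
# TORRENT-style hard thresholding for robust least squares (sketch).
import numpy as np

def torrent(X, y, beta=0.4, iters=30):
    """X: (n, p) design, y: (n,) responses; beta: assumed corruption fraction."""
    n = len(y)
    keep = int((1 - beta) * n)
    S = np.arange(n)                                     # start by trusting every point
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)  # fit on trusted set
        r = np.abs(y - X @ w)                            # residuals on all points
        S = np.argsort(r)[:keep]                         # hard threshold: keep smallest residuals
    return w

# Demo: 40% of responses corrupted by large noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
w_star = rng.standard_normal(20)
y = X @ w_star
bad = rng.choice(500, size=200, replace=False)
y[bad] += 10 * rng.standard_normal(200)
print(np.linalg.norm(torrent(X, y) - w_star))  # should be close to 0
```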
1506.02509 | Lei Zhang | Lei Zhang and David Zhang | SVM and ELM: Who Wins? Object Recognition with Deep Convolutional
Features from ImageNet | 7 pages, 4 figures | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning with a convolutional neural network (CNN) has been proved to be
very effective in feature extraction and representation of images. For image
classification problems, this work aims at finding which classifier is more
competitive based on high-level deep features of images. In this report, we
have discussed the nearest neighbor, support vector machines and extreme
learning machines for image classification under deep convolutional activation
feature representation. Specifically, we adopt the benchmark object recognition
dataset from multiple sources with domain bias for evaluating different
classifiers. The deep features of the object dataset are obtained by a
well-trained CNN with five convolutional layers and three fully-connected
layers on the challenging ImageNet. Experiments demonstrate that the ELMs
outperform SVMs in cross-domain recognition tasks. In particular,
state-of-the-art results are obtained by kernel ELM, which outperforms SVMs by
about 4% in average accuracy. The features and codes are available at
http://www.escience.cn/people/lei/index.html
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 13:58:01 GMT"
}
] | 2015-06-09T00:00:00 | [
[
"Zhang",
"Lei",
""
],
[
"Zhang",
"David",
""
]
] | TITLE: SVM and ELM: Who Wins? Object Recognition with Deep Convolutional
Features from ImageNet
ABSTRACT: Deep learning with a convolutional neural network (CNN) has been proved to be
very effective in feature extraction and representation of images. For image
classification problems, this work aims at finding which classifier is more
competitive based on high-level deep features of images. In this report, we
have discussed the nearest neighbor, support vector machines and extreme
learning machines for image classification under deep convolutional activation
feature representation. Specifically, we adopt the benchmark object recognition
dataset from multiple sources with domain bias for evaluating different
classifiers. The deep features of the object dataset are obtained by a
well-trained CNN with five convolutional layers and three fully-connected
layers on the challenging ImageNet. Experiments demonstrate that the ELMs
outperform SVMs in cross-domain recognition tasks. In particular,
state-of-the-art results are obtained by kernel ELM, which outperforms SVMs by
about 4% in average accuracy. The features and codes are available at
http://www.escience.cn/people/lei/index.html
| no_new_dataset | 0.951097 |
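For readers unfamiliar with extreme learning machines, a minimal ELM classifier over precomputed deep features looks roughly as follows. Hidden width, activation and the ridge penalty are arbitrary assumptions, and the paper's kernel ELM variant replaces the random hidden layer with a kernel matrix.

```python
# Minimal ELM: fixed random hidden layer, closed-form ridge output weights.
import numpy as np

class ELM:
    def __init__(self, hidden=1024, reg=1e-2, seed=0):
        self.hidden, self.reg = hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.hidden))
        self.b = self.rng.standard_normal(self.hidden)
        H = np.tanh(X @ self.W + self.b)      # random, untrained hidden layer
        T = np.eye(n_classes)[y]              # one-hot targets
        # closed-form ridge regression: beta = (H'H + reg I)^{-1} H'T
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.hidden),
                                    H.T @ T)
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```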
1505.06289 | Will Monroe | Angel Chang, Will Monroe, Manolis Savva, Christopher Potts,
Christopher D. Manning | Text to 3D Scene Generation with Rich Lexical Grounding | 10 pages, 7 figures, 3 tables. To appear in ACL-IJCNLP 2015 | null | null | null | cs.CL cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to map descriptions of scenes to 3D geometric representations has
many applications in areas such as art, education, and robotics. However, prior
work on the text to 3D scene generation task has used manually specified object
categories and language that identifies them. We introduce a dataset of 3D
scenes annotated with natural language descriptions and learn from this data
how to ground textual descriptions to physical objects. Our method successfully
grounds a variety of lexical terms to concrete referents, and we show
quantitatively that our method improves 3D scene generation over previous work
using purely rule-based methods. We evaluate the fidelity and plausibility of
3D scenes generated with our grounding approach through human judgments. To
ease evaluation on this task, we also introduce an automated metric that
strongly correlates with human judgments.
| [
{
"version": "v1",
"created": "Sat, 23 May 2015 08:32:11 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jun 2015 01:13:17 GMT"
}
] | 2015-06-08T00:00:00 | [
[
"Chang",
"Angel",
""
],
[
"Monroe",
"Will",
""
],
[
"Savva",
"Manolis",
""
],
[
"Potts",
"Christopher",
""
],
[
"Manning",
"Christopher D.",
""
]
] | TITLE: Text to 3D Scene Generation with Rich Lexical Grounding
ABSTRACT: The ability to map descriptions of scenes to 3D geometric representations has
many applications in areas such as art, education, and robotics. However, prior
work on the text to 3D scene generation task has used manually specified object
categories and language that identifies them. We introduce a dataset of 3D
scenes annotated with natural language descriptions and learn from this data
how to ground textual descriptions to physical objects. Our method successfully
grounds a variety of lexical terms to concrete referents, and we show
quantitatively that our method improves 3D scene generation over previous work
using purely rule-based methods. We evaluate the fidelity and plausibility of
3D scenes generated with our grounding approach through human judgments. To
ease evaluation on this task, we also introduce an automated metric that
strongly correlates with human judgments.
| new_dataset | 0.961965 |
1506.01732 | Sudeep Pillai | Sudeep Pillai, John Leonard | Monocular SLAM Supported Object Recognition | Accepted to appear at Robotics: Science and Systems 2015, Rome, Italy | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we develop a monocular SLAM-aware object recognition system
that is able to achieve considerably stronger recognition performance, as
compared to classical object recognition systems that function on a
frame-by-frame basis. By incorporating several key ideas including multi-view
object proposals and efficient feature encoding methods, our proposed system is
able to detect and robustly recognize objects in its environment using a single
RGB camera in near-constant time. Through experiments, we illustrate the
utility of using such a system to effectively detect and recognize objects,
incorporating multiple object viewpoint detections into a unified prediction
hypothesis. The performance of the proposed recognition system is evaluated on
the UW RGB-D Dataset, showing strong recognition performance and scalable
run-time performance compared to current state-of-the-art recognition systems.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 21:07:56 GMT"
}
] | 2015-06-08T00:00:00 | [
[
"Pillai",
"Sudeep",
""
],
[
"Leonard",
"John",
""
]
] | TITLE: Monocular SLAM Supported Object Recognition
ABSTRACT: In this work, we develop a monocular SLAM-aware object recognition system
that is able to achieve considerably stronger recognition performance, as
compared to classical object recognition systems that function on a
frame-by-frame basis. By incorporating several key ideas including multi-view
object proposals and efficient feature encoding methods, our proposed system is
able to detect and robustly recognize objects in its environment using a single
RGB camera in near-constant time. Through experiments, we illustrate the
utility of using such a system to effectively detect and recognize objects,
incorporating multiple object viewpoint detections into a unified prediction
hypothesis. The performance of the proposed recognition system is evaluated on
the UW RGB-D Dataset, showing strong recognition performance and scalable
run-time performance compared to current state-of-the-art recognition systems.
| no_new_dataset | 0.947088 |
1506.01829 | Remi Lajugie | R\'emi Lajugie (SIERRA, DI-ENS), Piotr Bojanowski (WILLOW, DI-ENS),
Sylvain Arlot (SIERRA, DI-ENS), Francis Bach (SIERRA, DI-ENS) | Semidefinite and Spectral Relaxations for Multi-Label Classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of multi-label classification. We
consider linear classifiers and propose to learn a prior over the space of
labels to directly leverage the performance of such methods. This prior takes
the form of a quadratic function of the labels and permits to encode both
attractive and repulsive relations between labels. We cast this problem as a
structured prediction one aiming at optimizing either the accuracies of the
predictors or the F1-score. This leads to an optimization problem closely
related to the max-cut problem, which naturally admits semidefinite and
spectral relaxations. We show on standard datasets how such a general prior can
improve the performance of multi-label techniques.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 09:19:01 GMT"
}
] | 2015-06-08T00:00:00 | [
[
"Lajugie",
"Rémi",
"",
"SIERRA, DI-ENS"
],
[
"Bojanowski",
"Piotr",
"",
"WILLOW, DI-ENS"
],
[
"Arlot",
"Sylvain",
"",
"SIERRA, DI-ENS"
],
[
"Bach",
"Francis",
"",
"SIERRA, DI-ENS"
]
] | TITLE: Semidefinite and Spectral Relaxations for Multi-Label Classification
ABSTRACT: In this paper, we address the problem of multi-label classification. We
consider linear classifiers and propose to learn a prior over the space of
labels to directly leverage the performance of such methods. This prior takes
the form of a quadratic function of the labels and permits to encode both
attractive and repulsive relations between labels. We cast this problem as a
structured prediction one aiming at optimizing either the accuracies of the
predictors or the F1-score. This leads to an optimization problem closely
related to the max-cut problem, which naturally admits semidefinite and
spectral relaxations. We show on standard datasets how such a general prior can
improve the performance of multi-label techniques.
| no_new_dataset | 0.947088 |
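The inference step implied by the abstract, maximizing a unary score plus a quadratic label prior over binary label vectors, is max-cut-like and admits the spectral relaxation sketched below. The homogenization trick (an extra +/-1 coordinate absorbing the linear term) is a standard construction assumed here, not quoted from the paper.

```python
# Spectral relaxation for argmax_y  s'y + y'Ay  with y in {-1,+1}^k.
import numpy as np

def spectral_labels(s, A):
    k = len(s)
    M = np.zeros((k + 1, k + 1))
    M[1:, 1:] = A
    M[0, 1:] = M[1:, 0] = s / 2.0        # linear term via one extra +/-1 variable
    vals, vecs = np.linalg.eigh(M)
    v = vecs[:, -1]                      # leading eigenvector of the relaxation
    gauge = np.sign(v[0]) or 1.0         # fix the sign of the extra variable
    y = np.sign(v[1:] * gauge)
    y[y == 0] = 1
    return (y > 0).astype(int)           # round back to {0,1} labels
```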
1205.4080 | Justin Ziniel | Justin Ziniel and Philip Schniter | Dynamic Compressive Sensing of Time-Varying Signals via Approximate
Message Passing | 32 pages, 7 figures | null | 10.1109/TSP.2013.2273196 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.
| [
{
"version": "v1",
"created": "Fri, 18 May 2012 05:33:20 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Apr 2013 18:08:42 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Ziniel",
"Justin",
""
],
[
"Schniter",
"Philip",
""
]
] | TITLE: Dynamic Compressive Sensing of Time-Varying Signals via Approximate
Message Passing
ABSTRACT: In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.
| no_new_dataset | 0.947186 |
1206.5298 | Paolo Masucci | A. Paolo Masucci, Kiril Stanilov and Michael Batty | Limited Urban Growth: London's Street Network Dynamics since the 18th
Century | PlosOne, in publication | PLoS ONE 8(8): e69469 (2013) | 10.1371/journal.pone.0069469 | null | physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the growth dynamics of Greater London defined by the
administrative boundary of the Greater London Authority, based on the evolution
of its street network during the last two centuries. This is done by employing
a unique dataset, consisting of the planar graph representation of nine time
slices of Greater London's road network spanning 224 years, from 1786 to 2010.
Within this time-frame, we address the concept of the metropolitan area or city
in physical terms, in that urban evolution reveals observable transitions in
the distribution of relevant geometrical properties. Given that London has a
hard boundary enforced by its long-standing green belt, we show that its street
network dynamics can be described as a fractal space-filling phenomenon up to a
capacitated limit, whence its growth can be predicted with a striking level of
accuracy. This observation is confirmed by the analytical calculation of key
topological properties of the planar graph, such as the topological growth of
the network and its average connectivity. This study thus represents an example
of a strong violation of Gibrat's law. In particular, we are able to show
analytically how London evolves from a more loop-like structure, typical of
planned cities, toward a more tree-like structure, typical of self-organized
cities. These observations are relevant to the discourse on sustainable urban
planning with respect to the control of urban sprawl in many large cities,
which have developed under the conditions of spatial constraints imposed by
green belts and hard urban boundaries.
| [
{
"version": "v1",
"created": "Sun, 24 Jun 2012 00:41:22 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jun 2013 10:36:53 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Masucci",
"A. Paolo",
""
],
[
"Stanilov",
"Kiril",
""
],
[
"Batty",
"Michael",
""
]
] | TITLE: Limited Urban Growth: London's Street Network Dynamics since the 18th
Century
ABSTRACT: We investigate the growth dynamics of Greater London defined by the
administrative boundary of the Greater London Authority, based on the evolution
of its street network during the last two centuries. This is done by employing
a unique dataset, consisting of the planar graph representation of nine time
slices of Greater London's road network spanning 224 years, from 1786 to 2010.
Within this time-frame, we address the concept of the metropolitan area or city
in physical terms, in that urban evolution reveals observable transitions in
the distribution of relevant geometrical properties. Given that London has a
hard boundary enforced by its long-standing green belt, we show that its street
network dynamics can be described as a fractal space-filling phenomenon up to a
capacitated limit, whence its growth can be predicted with a striking level of
accuracy. This observation is confirmed by the analytical calculation of key
topological properties of the planar graph, such as the topological growth of
the network and its average connectivity. This study thus represents an example
of a strong violation of Gibrat's law. In particular, we are able to show
analytically how London evolves from a more loop-like structure, typical of
planned cities, toward a more tree-like structure, typical of self-organized
cities. These observations are relevant to the discourse on sustainable urban
planning with respect to the control of urban sprawl in many large cities,
which have developed under the conditions of spatial constraints imposed by
green belts and hard urban boundaries.
| new_dataset | 0.953275 |
1207.5661 | Rong-Hua Li | Rong-Hua Li, Jeffrey Xu Yu, Xin Huang, Hong Cheng | A Framework of Algorithms: Computing the Bias and Prestige of Nodes in
Trust Networks | null | null | 10.1371/journal.pone.0050843 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A trust network is a social network in which edges represent the trust
relationship between two nodes in the network. In a trust network, a
fundamental question is how to assess and compute the bias and prestige of the
nodes, where the bias of a node measures the trustworthiness of a node and the
prestige of a node measures the importance of the node. A larger bias implies
lower trustworthiness, while a larger prestige implies higher importance. In
this paper, we define a
vector-valued contractive function to characterize the bias vector which
results in a rich family of bias measurements, and we propose a framework of
algorithms for computing the bias and prestige of nodes in trust networks.
Based on our framework, we develop four algorithms that can calculate the bias
and prestige of nodes effectively and robustly. The time and space complexities
of all our algorithms are linear w.r.t. the size of the graph, thus our
algorithms are scalable to handle large datasets. We evaluate our algorithms
using five real datasets. The experimental results demonstrate the
effectiveness, robustness, and scalability of our algorithms.
| [
{
"version": "v1",
"created": "Tue, 24 Jul 2012 11:36:05 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Li",
"Rong-Hua",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Huang",
"Xin",
""
],
[
"Cheng",
"Hong",
""
]
] | TITLE: A Framework of Algorithms: Computing the Bias and Prestige of Nodes in
Trust Networks
ABSTRACT: A trust network is a social network in which edges represent the trust
relationship between two nodes in the network. In a trust network, a
fundamental question is how to assess and compute the bias and prestige of the
nodes, where the bias of a node measures the trustworthiness of a node and the
prestige of a node measures the importance of the node. A larger bias implies
lower trustworthiness, while a larger prestige implies higher importance. In
this paper, we define a
vector-valued contractive function to characterize the bias vector which
results in a rich family of bias measurements, and we propose a framework of
algorithms for computing the bias and prestige of nodes in trust networks.
Based on our framework, we develop four algorithms that can calculate the bias
and prestige of nodes effectively and robustly. The time and space complexities
of all our algorithms are linear w.r.t. the size of the graph, thus our
algorithms are scalable to handle large datasets. We evaluate our algorithms
using five real datasets. The experimental results demonstrate the
effectiveness, robustness, and scalability of our algorithms.
| no_new_dataset | 0.948775 |
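One concrete member of the family of iterations the abstract describes can be sketched as follows. This is a minimal variant of our own, alternating a de-biasing step over each rater's outgoing edges with a discounted averaging step over each node's incoming edges; it is not the authors' exact update rule.

```python
# Bias/prestige fixed-point iteration on a weighted directed trust graph.
import numpy as np

def bias_prestige(W, iters=50):
    """W: (n, n), W[i, j] in [-1, 1] is the rating i gives j (0 = no edge)."""
    mask = W != 0
    out_deg = np.maximum(mask.sum(axis=1), 1)
    in_deg = np.maximum(mask.sum(axis=0), 1)
    r = np.zeros(W.shape[0])                        # prestige, start neutral
    b = np.zeros(W.shape[0])                        # bias
    for _ in range(iters):
        # bias of rater i: average deviation of its ratings from prestige
        b = ((W - r) * mask).sum(axis=1) / (2 * out_deg)
        # prestige of node j: incoming ratings, discounted by rater bias
        r = ((W * (1 - np.abs(b))[:, None]) * mask).sum(axis=0) / in_deg
    return b, r
```

Each sweep touches every edge once, consistent with the linear time and space complexity the abstract emphasizes.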
1402.0453 | Qi Qian | Qi Qian, Rong Jin, Shenghuo Zhu and Yuanqing Lin | Fine-Grained Visual Categorization via Multi-stage Metric Learning | in CVPR 2015 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained visual categorization (FGVC) is to categorize objects into
subordinate classes instead of basic classes. One major challenge in FGVC is
the co-occurrence of two issues: 1) many subordinate classes are highly
correlated and are difficult to distinguish, and 2) there exists the large
intra-class variation (e.g., due to object pose). This paper proposes to
explicitly address the above two issues via distance metric learning (DML). DML
addresses the first issue by learning an embedding so that data points from the
same class will be pulled together while those from different classes should be
pushed apart from each other; and it addresses the second issue by allowing the
flexibility that only a portion of the neighbors (not all data points) from the
same class need to be pulled together. However, feature representation of an
image is often high dimensional, and DML is known to have difficulty in dealing
with high dimensional feature vectors since it would require $\mathcal{O}(d^2)$
for storage and $\mathcal{O}(d^3)$ for optimization. To this end, we proposed a
multi-stage metric learning framework that divides the large-scale high
dimensional learning problem into a series of simple subproblems, achieving
$\mathcal{O}(d)$ computational complexity. The empirical study with FGVC
benchmark datasets verifies that our method is both effective and efficient
compared to the state-of-the-art FGVC approaches.
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2014 18:20:53 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jun 2015 17:28:51 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Qian",
"Qi",
""
],
[
"Jin",
"Rong",
""
],
[
"Zhu",
"Shenghuo",
""
],
[
"Lin",
"Yuanqing",
""
]
] | TITLE: Fine-Grained Visual Categorization via Multi-stage Metric Learning
ABSTRACT: Fine-grained visual categorization (FGVC) is to categorize objects into
subordinate classes instead of basic classes. One major challenge in FGVC is
the co-occurrence of two issues: 1) many subordinate classes are highly
correlated and are difficult to distinguish, and 2) there exists the large
intra-class variation (e.g., due to object pose). This paper proposes to
explicitly address the above two issues via distance metric learning (DML). DML
addresses the first issue by learning an embedding so that data points from the
same class will be pulled together while those from different classes should be
pushed apart from each other; and it addresses the second issue by allowing the
flexibility that only a portion of the neighbors (not all data points) from the
same class need to be pulled together. However, feature representation of an
image is often high dimensional, and DML is known to have difficulty in dealing
with high dimensional feature vectors since it would require $\mathcal{O}(d^2)$
for storage and $\mathcal{O}(d^3)$ for optimization. To this end, we proposed a
multi-stage metric learning framework that divides the large-scale high
dimensional learning problem into a series of simple subproblems, achieving
$\mathcal{O}(d)$ computational complexity. The empirical study with FGVC
benchmark datasets verifies that our method is both effective and efficient
compared to the state-of-the-art FGVC approaches.
| no_new_dataset | 0.950134 |
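The pull/push intuition in the abstract (same-class neighbours pulled together, other classes pushed apart) can be illustrated with a generic triplet-style SGD update on a linear embedding. This is a didactic stand-in under our own assumptions about margin, learning rate and sampling, not the paper's multi-stage solver.

```python
# One epoch of triplet-style metric learning on a linear embedding L.
import numpy as np

def dml_epoch(X, y, L, margin=1.0, lr=0.01, rng=None):
    """X: (n, d) features, y: (n,) labels (each class needs >= 2 samples),
    L: (k, d) embedding matrix; returns the updated embedding."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    for _ in range(n):
        i = int(rng.integers(n))
        pos = rng.choice(np.flatnonzero((y == y[i]) & (np.arange(n) != i)))
        neg = rng.choice(np.flatnonzero(y != y[i]))
        a, c = X[i] - X[pos], X[i] - X[neg]
        if (L @ a) @ (L @ a) + margin > (L @ c) @ (L @ c):   # violated triplet
            # gradient of ||L a||^2 - ||L c||^2 with respect to L
            L = L - lr * 2 * (np.outer(L @ a, a) - np.outer(L @ c, c))
    return L
```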
1406.4173 | D\'ora Erd\H{o}s | Dora Erdos, Vatche Ishakian, Azer Bestavros, Evimaria Terzi | A Divide-and-Conquer Algorithm for Betweenness Centrality | Shorter version of this paper appeared in Siam Data Mining 2015 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of efficiently computing the betweenness centrality of nodes has
been researched extensively. To date, the best known exact and centralized
algorithm for this task is an algorithm proposed in 2001 by Brandes. The
contribution of our paper is Brandes++, an algorithm for exact efficient
computation of betweenness centrality. The crux of our algorithm is that we
create a sketch of the graph, that we call the skeleton, by replacing subgraphs
with simpler graph structures. Depending on the underlying graph structure,
using this skeleton and keeping appropriate summaries allows Brandes++ to
achieve significantly lower running times. Extensive experimental evaluation on
real-life datasets demonstrates the efficacy of our algorithm for different
types of graphs. We release our code for the benefit of the research community.
| [
{
"version": "v1",
"created": "Mon, 16 Jun 2014 21:18:51 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jun 2015 19:58:34 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Erdos",
"Dora",
""
],
[
"Ishakian",
"Vatche",
""
],
[
"Bestavros",
"Azer",
""
],
[
"Terzi",
"Evimaria",
""
]
] | TITLE: A Divide-and-Conquer Algorithm for Betweenness Centrality
ABSTRACT: The problem of efficiently computing the betweenness centrality of nodes has
been researched extensively. To date, the best known exact and centralized
algorithm for this task is an algorithm proposed in 2001 by Brandes. The
contribution of our paper is Brandes++, an algorithm for exact efficient
computation of betweenness centrality. The crux of our algorithm is that we
create a sketch of the graph, that we call the skeleton, by replacing subgraphs
with simpler graph structures. Depending on the underlying graph structure,
using this skeleton and keeping appropriate summaries allows Brandes++ to
achieve significantly lower running times. Extensive experimental evaluation on
real-life datasets demonstrates the efficacy of our algorithm for different
types of graphs. We release our code for the benefit of the research community.
| no_new_dataset | 0.940626 |
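Brandes++ builds on Brandes' 2001 algorithm, which the abstract cites as the best known exact method. For reference, here is a compact textbook implementation of plain Brandes for unweighted graphs; the skeleton construction itself is not reproduced here.

```python
# Brandes' betweenness centrality for an unweighted graph (textbook form).
from collections import deque

def brandes(adj):
    """adj: dict node -> iterable of neighbours. For undirected graphs,
    list each edge in both directions and halve the returned scores."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1     # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:                                       # BFS from s
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                      # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```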
1504.02824 | Yelong Shen | Yelong Shen, Ruoming Jin, Jianshu Chen, Xiaodong He, Jianfeng Gao, Li
Deng | A Deep Embedding Model for Co-occurrence Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Co-occurrence Data is a common and important information source in many
areas, such as word co-occurrence in sentences, friend co-occurrence in social
networks, and product co-occurrence in commercial transaction data, all of
which contain rich correlation and clustering information about the
items. In this paper, we study co-occurrence data using a general energy-based
probabilistic model, and we analyze three different categories of energy-based
model, namely, the $L_1$, $L_2$ and $L_k$ models, which are able to capture
different levels of dependency in the co-occurrence data. We also discuss how
several typical existing models are related to these three types of energy
models, including the Fully Visible Boltzmann Machine (FVBM) ($L_2$), Matrix
Factorization ($L_2$), Log-BiLinear (LBL) models ($L_2$), and the Restricted
Boltzmann Machine (RBM) model ($L_k$). Then, we propose a Deep Embedding Model
(DEM) (an $L_k$ model) from the energy model in a \emph{principled} manner.
Furthermore, motivated by the observation that the partition function in the
energy model is intractable and the fact that the major objective of modeling
the co-occurrence data is to predict using the conditional probability, we
apply the \emph{maximum pseudo-likelihood} method to learn DEM. In consequence,
the developed model and its learning method naturally avoid the above
difficulties and can be easily used to compute the conditional probability in
prediction. Interestingly, our method is equivalent to learning a special
structured deep neural network using back-propagation and a special sampling
strategy, which makes it scalable on large-scale datasets. Finally, in the
experiments, we show that the DEM can achieve comparable or better results than
state-of-the-art methods on datasets across several application domains.
| [
{
"version": "v1",
"created": "Sat, 11 Apr 2015 02:56:01 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jun 2015 09:07:13 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Shen",
"Yelong",
""
],
[
"Jin",
"Ruoming",
""
],
[
"Chen",
"Jianshu",
""
],
[
"He",
"Xiaodong",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Deng",
"Li",
""
]
] | TITLE: A Deep Embedding Model for Co-occurrence Learning
ABSTRACT: Co-occurrence Data is a common and important information source in many
areas, such as word co-occurrence in sentences, friend co-occurrence in social
networks, and product co-occurrence in commercial transaction data, all of
which contain rich correlation and clustering information about the
items. In this paper, we study co-occurrence data using a general energy-based
probabilistic model, and we analyze three different categories of energy-based
model, namely, the $L_1$, $L_2$ and $L_k$ models, which are able to capture
different levels of dependency in the co-occurrence data. We also discuss how
several typical existing models are related to these three types of energy
models, including the Fully Visible Boltzmann Machine (FVBM) ($L_2$), Matrix
Factorization ($L_2$), Log-BiLinear (LBL) models ($L_2$), and the Restricted
Boltzmann Machine (RBM) model ($L_k$). Then, we propose a Deep Embedding Model
(DEM) (an $L_k$ model) from the energy model in a \emph{principled} manner.
Furthermore, motivated by the observation that the partition function in the
energy model is intractable and the fact that the major objective of modeling
the co-occurrence data is to predict using the conditional probability, we
apply the \emph{maximum pseudo-likelihood} method to learn DEM. In consequence,
the developed model and its learning method naturally avoid the above
difficulties and can be easily used to compute the conditional probability in
prediction. Interestingly, our method is equivalent to learning a special
structured deep neural network using back-propagation and a special sampling
strategy, which makes it scalable on large-scale datasets. Finally, in the
experiments, we show that the DEM can achieve comparable or better results than
state-of-the-art methods on datasets across several application domains.
| no_new_dataset | 0.951369 |
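The maximum pseudo-likelihood training that the abstract applies to DEM is easiest to see on the simplest member of the family it analyzes, the fully visible Boltzmann machine (an $L_2$ model). The gradient step below is our own sketch: no bias terms, an assumed learning rate, and an unimportant constant factor absorbed into it.

```python
# One pseudo-likelihood gradient step for a fully visible Boltzmann machine.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fvbm_pl_step(J, X, lr=0.05):
    """J: symmetric (d, d) couplings with zero diagonal; X: (n, d) in {0,1}."""
    # site conditionals: p(x_i = 1 | x_-i) = sigmoid(sum_j J_ij x_j)
    P = sigmoid(X @ J)
    G = (X - P).T @ X / len(X)     # gradient of the mean pseudo-log-likelihood
    G = (G + G.T) / 2.0            # keep the couplings symmetric
    np.fill_diagonal(G, 0.0)
    return J + lr * G
```

Because each conditional involves only one site, no partition function ever has to be computed, which is exactly the difficulty the abstract says pseudo-likelihood avoids.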
1505.01861 | Tao Mei | Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui | Jointly Modeling Embedding and Translation to Bridge Video and Language | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically describing video content with natural language is a fundamental
challenge of multimedia. Recurrent Neural Networks (RNNs), which model sequence
dynamics, have attracted increasing attention for visual interpretation. However,
most existing approaches generate a word locally with given previous words and
the visual content, while the relationship between sentence semantics and
visual content is not holistically exploited. As a result, the generated
sentences may be contextually correct but semantically inaccurate (e.g., in
their subjects, verbs or objects).
This paper presents a novel unified framework, named Long Short-Term Memory
with visual-semantic Embedding (LSTM-E), which can simultaneously explore the
learning of LSTM and visual-semantic embedding. The former aims to locally
maximize the probability of generating the next word given previous words and
visual content, while the latter is to create a visual-semantic embedding space
for enforcing the relationship between the semantics of the entire sentence and
visual content. Our proposed LSTM-E consists of three components: a 2-D and/or
3-D deep convolutional neural networks for learning powerful video
representation, a deep RNN for generating sentences, and a joint embedding
model for exploring the relationships between visual content and sentence
semantics. The experiments on YouTube2Text dataset show that our proposed
LSTM-E achieves to-date the best reported performance in generating natural
sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. We also
demonstrate that LSTM-E is superior in predicting Subject-Verb-Object (SVO)
triplets to several state-of-the-art techniques.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 20:13:33 GMT"
},
{
"version": "v2",
"created": "Sat, 30 May 2015 10:05:50 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Jun 2015 07:17:06 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Pan",
"Yingwei",
""
],
[
"Mei",
"Tao",
""
],
[
"Yao",
"Ting",
""
],
[
"Li",
"Houqiang",
""
],
[
"Rui",
"Yong",
""
]
] | TITLE: Jointly Modeling Embedding and Translation to Bridge Video and Language
ABSTRACT: Automatically describing video content with natural language is a fundamental
challenge of multimedia. Recurrent Neural Networks (RNNs), which model sequence
dynamics, have attracted increasing attention for visual interpretation. However,
most existing approaches generate a word locally with given previous words and
the visual content, while the relationship between sentence semantics and
visual content is not holistically exploited. As a result, the generated
sentences may be contextually correct but semantically inaccurate (e.g., in
their subjects, verbs or objects).
This paper presents a novel unified framework, named Long Short-Term Memory
with visual-semantic Embedding (LSTM-E), which can simultaneously explore the
learning of LSTM and visual-semantic embedding. The former aims to locally
maximize the probability of generating the next word given previous words and
visual content, while the latter is to create a visual-semantic embedding space
for enforcing the relationship between the semantics of the entire sentence and
visual content. Our proposed LSTM-E consists of three components: a 2-D and/or
3-D deep convolutional neural networks for learning powerful video
representation, a deep RNN for generating sentences, and a joint embedding
model for exploring the relationships between visual content and sentence
semantics. The experiments on YouTube2Text dataset show that our proposed
LSTM-E achieves to-date the best reported performance in generating natural
sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. We also
demonstrate that LSTM-E is superior in predicting Subject-Verb-Object (SVO)
triplets to several state-of-the-art techniques.
| no_new_dataset | 0.94801 |
1506.01499 | Ashish Sureka | Ashish Sureka, Ambika Tripathi, Savita Dabral | Survey Results on Threats To External Validity, Generalizability
Concerns, Data Sharing and University-Industry Collaboration in Mining
Software Repository (MSR) Research | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining Software Repositories (MSR) is an applied and practise-oriented field
aimed at solving real problems encountered by practitioners and bringing value
to Industry. Replication of results and findings, generalizability and external
validity, University-Industry collaboration, data sharing and the creation of
dataset repositories are important issues in MSR research. Bibliometric
analysis of MSR papers shows a lack of University-Industry collaboration, a
deficiency of studies on closed or proprietary-source datasets, and a lack of
data and tool sharing by researchers. We conduct a survey of authors from the
past three years of the MSR conference (2012, 2013 and 2014) to collect data on
their views and suggestions for addressing the stated concerns. We asked more
than 100 authors 20 questions and received responses from 39 authors. Our
results show that about one-third of the respondents always make their
dataset publicly available and about one-third believe that data sharing should
be a mandatory condition for publication in MSR conferences. Our survey reveals
that more than 50% of authors used solely open-source software (OSS) datasets
for their research. More than 50% of the respondents mentioned that difficulty
in sharing industrial datasets outside the company is one of the major impediments
in University-Industry collaboration.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 08:07:29 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Sureka",
"Ashish",
""
],
[
"Tripathi",
"Ambika",
""
],
[
"Dabral",
"Savita",
""
]
] | TITLE: Survey Results on Threats To External Validity, Generalizability
Concerns, Data Sharing and University-Industry Collaboration in Mining
Software Repository (MSR) Research
ABSTRACT: Mining Software Repositories (MSR) is an applied and practise-oriented field
aimed at solving real problems encountered by practitioners and bringing value
to Industry. Replication of results and findings, generalizability and external
validity, University-Industry collaboration, data sharing and the creation of
dataset repositories are important issues in MSR research. Bibliometric
analysis of MSR papers shows a lack of University-Industry collaboration, a
deficiency of studies on closed or proprietary-source datasets, and a lack of
data and tool sharing by researchers. We conduct a survey of authors from the
past three years of the MSR conference (2012, 2013 and 2014) to collect data on
their views and suggestions for addressing the stated concerns. We asked more
than 100 authors 20 questions and received responses from 39 authors. Our
results show that about one-third of the respondents always make their
dataset publicly available and about one-third believe that data sharing should
be a mandatory condition for publication in MSR conferences. Our survey reveals
that more than 50% of authors used solely open-source software (OSS) datasets
for their research. More than 50% of the respondents mentioned that difficulty
in sharing industrial datasets outside the company is one of the major impediments
in University-Industry collaboration.
| no_new_dataset | 0.933309 |
1506.01596 | Roozbeh Rajabi | Roozbeh Rajabi, Hassan Ghassemian | Multilayer Structured NMF for Spectral Unmixing of Hyperspectral Images | 4 pages, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the challenges in hyperspectral data analysis is the presence of mixed
pixels. Mixed pixels are the result of low spatial resolution of hyperspectral
sensors. Spectral unmixing methods decompose a mixed pixel into a set of
endmembers and abundance fractions. Due to nonnegativity constraint on
abundance fraction values, NMF based methods are well suited to this problem.
In this paper multilayer NMF has been used to improve the results of NMF
methods for spectral unmixing of hyperspectral data under the linear mixing
framework. Sparseness constraint on both spectral signatures and abundance
fractions matrices are used in this paper. Evaluation of the proposed algorithm
is done using synthetic and real datasets in terms of spectral angle and
abundance angle distances. Results show that the proposed algorithm outperforms
other previously proposed methods.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 13:53:33 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Rajabi",
"Roozbeh",
""
],
[
"Ghassemian",
"Hassan",
""
]
] | TITLE: Multilayer Structured NMF for Spectral Unmixing of Hyperspectral Images
ABSTRACT: One of the challenges in hyperspectral data analysis is the presence of mixed
pixels. Mixed pixels are the result of low spatial resolution of hyperspectral
sensors. Spectral unmixing methods decompose a mixed pixel into a set of
endmembers and abundance fractions. Due to the nonnegativity constraint on
abundance fraction values, NMF-based methods are well suited to this problem.
In this paper, multilayer NMF is used to improve the results of NMF
methods for spectral unmixing of hyperspectral data under the linear mixing
framework. Sparseness constraints on both the spectral signature and abundance
fraction matrices are used. Evaluation of the proposed algorithm
is done using synthetic and real datasets in terms of spectral angle and
abundance angle distances. Results show that the proposed algorithm outperforms
other previously proposed methods.
| no_new_dataset | 0.951908 |
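A minimal sketch of the multilayer construction described above: run a sparsity-penalized NMF, factorize the resulting abundance matrix again, and compose the layer bases. The multiplicative updates are the standard Lee-Seung rules with an L1 term on the abundance factor; the layer count, penalty and initialization are our assumptions.

```python
# Multilayer sparse NMF sketch for linear spectral unmixing.
import numpy as np

def sparse_nmf(V, k, lam=0.1, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # L1-sparse abundances
        W *= (V @ H.T) / (W @ H @ H.T + eps)         # endmember update
    return W, H

def multilayer_nmf(V, k, layers=3):
    Ws, H = [], V
    for _ in range(layers):
        W, H = sparse_nmf(H, k)        # refactorize the previous abundances
        Ws.append(W)
    endmembers = Ws[0]
    for W in Ws[1:]:
        endmembers = endmembers @ W    # V ~ (W1 W2 ... WL) HL
    return endmembers, H               # spectral signatures, abundances
```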
1506.01698 | Anna Rohrbach | Anna Rohrbach and Marcus Rohrbach and Bernt Schiele | The Long-Short Story of Movie Description | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating descriptions for videos has many applications including assisting
blind people and human-robot interaction. The recent advances in image
captioning as well as the release of large-scale movie description datasets
such as MPII Movie Description allow this task to be studied in more depth. Many of
the proposed methods for image captioning rely on pre-trained object classifier
CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating
descriptions. While image description focuses on objects, we argue that it is
important to distinguish verbs, objects, and places in the challenging setting
of movie description. In this work we show how to learn robust visual
classifiers from the weak annotations of the sentence descriptions. Based on
these visual classifiers we learn how to generate a description using an LSTM.
We explore different design choices to build and train the LSTM and achieve the
best performance to date on the challenging MPII-MD dataset. We compare and
analyze our approach and prior work along various dimensions to better
understand the key challenges of the movie description task.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 19:45:36 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: The Long-Short Story of Movie Description
ABSTRACT: Generating descriptions for videos has many applications including assisting
blind people and human-robot interaction. The recent advances in image
captioning as well as the release of large-scale movie description datasets
such as MPII Movie Description make it possible to study this task in more
depth. Many of the proposed methods for image captioning rely on pre-trained
object classifier CNNs and Long Short-Term Memory recurrent networks (LSTMs)
for generating
descriptions. While image description focuses on objects, we argue that it is
important to distinguish verbs, objects, and places in the challenging setting
of movie description. In this work we show how to learn robust visual
classifiers from the weak annotations of the sentence descriptions. Based on
these visual classifiers we learn how to generate a description using an LSTM.
We explore different design choices to build and train the LSTM and achieve the
best performance to date on the challenging MPII-MD dataset. We compare and
analyze our approach and prior work along various dimensions to better
understand the key challenges of the movie description task.
| no_new_dataset | 0.945197 |
1506.01709 | H\'ector P. Mart\'inez | Vincent E. Farrugia, H\'ector P. Mart\'inez, Georgios N. Yannakakis | The Preference Learning Toolbox | null | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preference learning (PL) is a core area of machine learning that handles
datasets with ordinal relations. As the amount of generated data of an ordinal
nature increases, the importance and role of the PL field become central
within machine learning research and practice. This paper introduces an open
source, scalable, efficient and accessible preference learning toolbox that
supports the key phases of the data training process incorporating various
popular data preprocessing, feature selection and preference learning methods.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 19:58:56 GMT"
}
] | 2015-06-05T00:00:00 | [
[
"Farrugia",
"Vincent E.",
""
],
[
"Martínez",
"Héctor P.",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] | TITLE: The Preference Learning Toolbox
ABSTRACT: Preference learning (PL) is a core area of machine learning that handles
datasets with ordinal relations. As the amount of generated data of an ordinal
nature increases, the importance and role of the PL field become central
within machine learning research and practice. This paper introduces an open
source, scalable, efficient and accessible preference learning toolbox that
supports the key phases of the data training process incorporating various
popular data preprocessing, feature selection and preference learning methods.
| no_new_dataset | 0.949106 |
1202.3182 | Andrey Sokolov | Andrey Sokolov, Rachel Webster, Andrew Melatos, Tien Kieu | Loan and nonloan flows in the Australian interbank network | null | Physica A 391 (2012) 2867-2882 | 10.1016/j.physa.2011.12.036 | null | q-fin.GN physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-value transactions between Australian banks are settled in the Reserve
Bank Information and Transfer System (RITS) administered by the Reserve Bank of
Australia. RITS operates on a real-time gross settlement (RTGS) basis and
settles payments sourced from SWIFT, Austraclear, and interbank
transactions entered directly into RITS. In this paper, we analyse a dataset
received from the Reserve Bank of Australia that includes all interbank
transactions settled in RITS on an RTGS basis during five consecutive weekdays
from 19 February 2007 inclusive, a week of relatively quiescent market
conditions. The source, destination, and value of each transaction are known,
which allows us to separate overnight loans from other transactions (nonloans)
and reconstruct monetary flows between banks for every day in our sample. We
conduct a novel analysis of the flow stability and examine the connection
between loan and nonloan flows. Our aim is to understand the underlying causal
mechanism connecting loan and nonloan flows. We find that the imbalances in the
banks' exchange settlement funds resulting from the daily flows of nonloan
transactions are almost exactly counterbalanced by the flows of overnight
loans. The correlation coefficient between loan and nonloan imbalances is about
-0.9 on most days. Some flows that persist over two consecutive days can be
highly variable, but overall the flows are moderately stable in value. The
nonloan network is characterised by a large fraction of persistent flows,
whereas only half of the flows persist over any two consecutive days in the
loan network. Moreover, we observe an unusual degree of coherence between
persistent loan flow values on Tuesday and Wednesday. We probe static
topological properties of the Australian interbank network and find them
consistent with those observed in other countries.
| [
{
"version": "v1",
"created": "Wed, 15 Feb 2012 00:34:21 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Sokolov",
"Andrey",
""
],
[
"Webster",
"Rachel",
""
],
[
"Melatos",
"Andrew",
""
],
[
"Kieu",
"Tien",
""
]
] | TITLE: Loan and nonloan flows in the Australian interbank network
ABSTRACT: High-value transactions between Australian banks are settled in the Reserve
Bank Information and Transfer System (RITS) administered by the Reserve Bank of
Australia. RITS operates on a real-time gross settlement (RTGS) basis and
settles payments sourced from SWIFT, Austraclear, and interbank
transactions entered directly into RITS. In this paper, we analyse a dataset
received from the Reserve Bank of Australia that includes all interbank
transactions settled in RITS on an RTGS basis during five consecutive weekdays
from 19 February 2007 inclusive, a week of relatively quiescent market
conditions. The source, destination, and value of each transaction are known,
which allows us to separate overnight loans from other transactions (nonloans)
and reconstruct monetary flows between banks for every day in our sample. We
conduct a novel analysis of the flow stability and examine the connection
between loan and nonloan flows. Our aim is to understand the underlying causal
mechanism connecting loan and nonloan flows. We find that the imbalances in the
banks' exchange settlement funds resulting from the daily flows of nonloan
transactions are almost exactly counterbalanced by the flows of overnight
loans. The correlation coefficient between loan and nonloan imbalances is about
-0.9 on most days. Some flows that persist over two consecutive days can be
highly variable, but overall the flows are moderately stable in value. The
nonloan network is characterised by a large fraction of persistent flows,
whereas only half of the flows persist over any two consecutive days in the
loan network. Moreover, we observe an unusual degree of coherence between
persistent loan flow values on Tuesday and Wednesday. We probe static
topological properties of the Australian interbank network and find them
consistent with those observed in other countries.
| no_new_dataset | 0.927495 |
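
Illustration for the preceding record: a minimal sketch of the imbalance computation the abstract describes, splitting transactions into loans and nonloans, netting inflow against outflow per bank and day, and correlating the two imbalance series. The toy transaction table and its field names are hypothetical stand-ins, not the RITS schema.

```python
import pandas as pd

# Hypothetical transaction records: day, source bank, destination bank,
# value, and whether the payment is an overnight loan.
tx = pd.DataFrame({
    "day":   [1, 1, 1, 1, 2, 2],
    "src":   ["A", "B", "A", "C", "B", "A"],
    "dst":   ["B", "C", "C", "A", "A", "B"],
    "value": [10.0, 4.0, 6.0, 9.0, 5.0, 7.0],
    "loan":  [False, False, True, True, False, True],
})

def daily_imbalance(df):
    """Net inflow minus outflow per (day, bank)."""
    inflow = df.groupby(["day", "dst"])["value"].sum().rename_axis(["day", "bank"])
    outflow = df.groupby(["day", "src"])["value"].sum().rename_axis(["day", "bank"])
    return inflow.sub(outflow, fill_value=0.0)

loan_imb = daily_imbalance(tx[tx["loan"]])
nonloan_imb = daily_imbalance(tx[~tx["loan"]])
both = pd.concat([loan_imb, nonloan_imb], axis=1,
                 keys=["loan", "nonloan"]).fillna(0.0)
# The paper reports this correlation to be about -0.9 on most days.
print(both.corr().loc["loan", "nonloan"])
```
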
1203.1029 | Swetaprovo Chaudhuri | Swetaprovo Chaudhuri, Fujia Wu, Chung K. Law | Turbulent Flame Speed Scaling for Expanding Flames with Markstein
Diffusion Considerations | null | null | 10.1103/PhysRevE.88.033005 | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we clarify the role of Markstein diffusivity on turbulent flame
speed and its scaling, from analysis and experimental measurements on
constant-pressure expanding flames propagating in near isotropic turbulence.
For all C0-C4 hydrocarbon-air mixtures presented in this work and recently
published C8 data from Leeds, the normalized turbulent flame speed data of
individual mixtures approximately follows the recent theoretical and
experimental $Re_{T,f}^{0.5}$ scaling, where the average radius is the length
scale and thermal diffusivity is the transport property. We observe that for a
constant $Re_{T,f}$, the normalized turbulent flame speed decreases with
increasing Markstein Number. This could be explained by considering Markstein
diffusivity as the large wavenumber, flame surface fluctuation dissipation
mechanism. As originally suggested by the theory, replacing thermal diffusivity
with Markstein diffusivity in the turbulence Reynolds number definition above,
the present and Leeds datasets could be scaled by the new $Re_{T,M}^{0.5}$,
irrespective of the fuel considered, equivalence ratio, pressure and
turbulence intensity for positive Mk flames over a large range of Damk\"ohler
numbers.
| [
{
"version": "v1",
"created": "Mon, 5 Mar 2012 20:03:55 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Nov 2012 17:01:00 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Sep 2013 17:43:08 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Chaudhuri",
"Swetaprovo",
""
],
[
"Wu",
"Fujia",
""
],
[
"Law",
"Chung K.",
""
]
] | TITLE: Turbulent Flame Speed Scaling for Expanding Flames with Markstein
Diffusion Considerations
ABSTRACT: In this work we clarify the role of Markstein diffusivity on turbulent flame
speed and its scaling, from analysis and experimental measurements on
constant-pressure expanding flames propagating in near isotropic turbulence.
For all C0-C4 hydrocarbon-air mixtures presented in this work and recently
published C8 data from Leeds, the normalized turbulent flame speed data of
individual mixtures approximately follows the recent theoretical and
experimental $Re_{T,f}^{0.5}$ scaling, where the average radius is the length
scale and thermal diffusivity is the transport property. We observe that for a
constant $Re_{T,f}$, the normalized turbulent flame speed decreases with
increasing Markstein Number. This could be explained by considering Markstein
diffusivity as the large wavenumber, flame surface fluctuation dissipation
mechanism. As originally suggested by the theory, replacing thermal diffusivity
with Markstein diffusivity in the turbulence Reynolds number definition above,
the present and Leeds datasets could be scaled by the new $Re_{T,M}^{0.5}$,
irrespective of the fuel considered, equivalence ratio, pressure and
turbulence intensity for positive Mk flames over a large range of Damk\"ohler
numbers.
| no_new_dataset | 0.953319 |
1203.1922 | Kevin Heng | Kevin Heng, Pushkar Kopparla | On the Stability of Super-Earth Atmospheres | Accepted by ApJ. 10 pages, 6 figures. No changes from previous
version, except for added hyphen in title | null | 10.1088/0004-637X/754/1/60 | null | astro-ph.EP astro-ph.GA physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the stability of super Earth atmospheres around M stars using
a 7-parameter, analytical framework. We construct stability diagrams in the
parameter space of exoplanetary radius versus semi-major axis and elucidate the
regions in which the atmospheres are stable against the condensation of their
major constituents, out of the gas phase, on their permanent nightside
hemispheres. We find that super Earth atmospheres which are nitrogen-dominated
("Earth-like") occupy a smaller region of allowed parameter space, compared to
hydrogen-dominated atmospheres, because of the dual effects of diminished
advection and enhanced radiative cooling. Furthermore, some super Earths which
reside within the habitable zones of M stars may not possess stable
atmospheres, depending on the mean molecular weight and infrared photospheric
pressure of their atmospheres. We apply our stability diagrams to GJ 436b and
GJ 1214b, and demonstrate that atmospheric compositions with high mean
molecular weights are disfavoured if these exoplanets possess solid surfaces
and shallow atmospheres. Finally, we construct stability diagrams tailored to
the Kepler dataset, for G and K stars, and predict that about half of the
exoplanet candidates are expected to harbour stable atmospheres if Earth-like
conditions are assumed. We include 55 Cancri e and CoRoT-7b in our stability
diagram for G stars.
| [
{
"version": "v1",
"created": "Thu, 8 Mar 2012 21:00:01 GMT"
},
{
"version": "v2",
"created": "Wed, 16 May 2012 04:10:58 GMT"
},
{
"version": "v3",
"created": "Thu, 31 May 2012 08:38:48 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Heng",
"Kevin",
""
],
[
"Kopparla",
"Pushkar",
""
]
] | TITLE: On the Stability of Super-Earth Atmospheres
ABSTRACT: We investigate the stability of super Earth atmospheres around M stars using
a 7-parameter, analytical framework. We construct stability diagrams in the
parameter space of exoplanetary radius versus semi-major axis and elucidate the
regions in which the atmospheres are stable against the condensation of their
major constituents, out of the gas phase, on their permanent nightside
hemispheres. We find that super Earth atmospheres which are nitrogen-dominated
("Earth-like") occupy a smaller region of allowed parameter space, compared to
hydrogen-dominated atmospheres, because of the dual effects of diminished
advection and enhanced radiative cooling. Furthermore, some super Earths which
reside within the habitable zones of M stars may not possess stable
atmospheres, depending on the mean molecular weight and infrared photospheric
pressure of their atmospheres. We apply our stability diagrams to GJ 436b and
GJ 1214b, and demonstrate that atmospheric compositions with high mean
molecular weights are disfavoured if these exoplanets possess solid surfaces
and shallow atmospheres. Finally, we construct stability diagrams tailored to
the Kepler dataset, for G and K stars, and predict that about half of the
exoplanet candidates are expected to harbour stable atmospheres if Earth-like
conditions are assumed. We include 55 Cancri e and CoRoT-7b in our stability
diagram for G stars.
| no_new_dataset | 0.940735 |
1406.4112 | Zhenyong Fu | Zhen-Yong Fu, Tao Xiang, Shaogang Gong | Semantic Graph for Zero-Shot Learning | 9 pages, 5 figures | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/3.0/ | Zero-shot learning aims to classify visual objects without any training data
via knowledge transfer between seen and unseen classes. This is typically
achieved by exploring a semantic embedding space where the seen and unseen
classes can be related. Previous works differ in what embedding space is used
and how different classes and a test image can be related. In this paper, we
utilize the annotation-free semantic word space for the former and focus on
solving the latter issue of modeling relatedness. Specifically, in contrast to
previous work which ignores the semantic relationships between seen classes and
focus merely on those between seen and unseen classes, in this paper a novel
approach based on a semantic graph is proposed to represent the relationships
between all the seen and unseen class in a semantic word space. Based on this
semantic graph, we design a special absorbing Markov chain process, in which
each unseen class is viewed as an absorbing state. After incorporating one test
image into the semantic graph, the absorbing probabilities from the test data
to each unseen class can be effectively computed; and zero-shot classification
can be achieved by finding the class label with the highest absorbing
probability. The proposed model has a closed-form solution which is linear with
respect to the number of test images. We demonstrate the effectiveness and
computational efficiency of the proposed method over the state of the art on
the AwA (animals with attributes) dataset.
| [
{
"version": "v1",
"created": "Mon, 16 Jun 2014 19:40:52 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jun 2015 09:53:18 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Fu",
"Zhen-Yong",
""
],
[
"Xiang",
"Tao",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: Semantic Graph for Zero-Shot Learning
ABSTRACT: Zero-shot learning aims to classify visual objects without any training data
via knowledge transfer between seen and unseen classes. This is typically
achieved by exploring a semantic embedding space where the seen and unseen
classes can be related. Previous works differ in what embedding space is used
and how different classes and a test image can be related. In this paper, we
utilize the annotation-free semantic word space for the former and focus on
solving the latter issue of modeling relatedness. Specifically, in contrast to
previous work which ignores the semantic relationships between seen classes and
focuses merely on those between seen and unseen classes, in this paper a novel
approach based on a semantic graph is proposed to represent the relationships
between all the seen and unseen classes in a semantic word space. Based on this
semantic graph, we design a special absorbing Markov chain process, in which
each unseen class is viewed as an absorbing state. After incorporating one test
image into the semantic graph, the absorbing probabilities from the test data
to each unseen class can be effectively computed; and zero-shot classification
can be achieved by finding the class label with the highest absorbing
probability. The proposed model has a closed-form solution which is linear with
respect to the number of test images. We demonstrate the effectiveness and
computational efficiency of the proposed method over the state-of-the-arts on
the AwA (animals with attributes) dataset.
| no_new_dataset | 0.950411 |
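
Illustration for the preceding record: treating unseen classes as absorbing states of a Markov chain gives the standard closed form $B = (I - Q)^{-1} R$ for absorption probabilities. The sketch below implements that closed form on a toy graph; the random affinities are stand-ins for the semantic-word-space similarities and graph construction the paper actually uses.

```python
import numpy as np

def absorbing_probabilities(W_tt, W_tu):
    """Absorption probabilities of an absorbing Markov chain.

    W_tt: affinities among transient states (seen classes + the test image),
    W_tu: affinities from transient states to absorbing states (unseen classes).
    Rows of [W_tt | W_tu] are normalized into transition probabilities; the
    closed form B = (I - Q)^{-1} R then gives, for each transient state, the
    probability of being absorbed into each unseen class.
    """
    P = np.hstack([W_tt, W_tu])
    P = P / P.sum(axis=1, keepdims=True)
    n_t = W_tt.shape[0]
    Q, R = P[:, :n_t], P[:, n_t:]
    return np.linalg.solve(np.eye(n_t) - Q, R)

# Toy setup: 4 seen classes plus 1 test image as transient states, and
# 2 unseen classes as absorbing states, with random positive affinities.
rng = np.random.default_rng(0)
W_tt = rng.random((5, 5))
np.fill_diagonal(W_tt, 0.0)
W_tu = rng.random((5, 2))
B = absorbing_probabilities(W_tt, W_tu)
print("predicted unseen class for the test image:", B[-1].argmax())
```
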
1411.5726 | Ramakrishna Vedantam | Ramakrishna Vedantam, C. Lawrence Zitnick and Devi Parikh | CIDEr: Consensus-based Image Description Evaluation | To appear in CVPR 2015 | null | null | null | cs.CV cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically describing an image with a sentence is a long-standing
challenge in computer vision and natural language processing. Due to recent
progress in object detection, attribute classification, action recognition,
etc., there is renewed interest in this area. However, evaluating the quality
of descriptions has proven to be challenging. We propose a novel paradigm for
evaluating image descriptions that uses human consensus. This paradigm consists
of three main parts: a new triplet-based method of collecting human annotations
to measure consensus, a new automated metric (CIDEr) that captures consensus,
and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences
describing each image. Our simple metric captures human judgment of consensus
better than existing metrics across sentences generated by various sources. We
also evaluate five state-of-the-art image description approaches using this new
protocol and provide a benchmark for future comparisons. A version of CIDEr
named CIDEr-D is available as part of the MS COCO evaluation server to enable
systematic evaluation and benchmarking.
| [
{
"version": "v1",
"created": "Thu, 20 Nov 2014 23:54:35 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jun 2015 01:42:20 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Vedantam",
"Ramakrishna",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Parikh",
"Devi",
""
]
] | TITLE: CIDEr: Consensus-based Image Description Evaluation
ABSTRACT: Automatically describing an image with a sentence is a long-standing
challenge in computer vision and natural language processing. Due to recent
progress in object detection, attribute classification, action recognition,
etc., there is renewed interest in this area. However, evaluating the quality
of descriptions has proven to be challenging. We propose a novel paradigm for
evaluating image descriptions that uses human consensus. This paradigm consists
of three main parts: a new triplet-based method of collecting human annotations
to measure consensus, a new automated metric (CIDEr) that captures consensus,
and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences
describing each image. Our simple metric captures human judgment of consensus
better than existing metrics across sentences generated by various sources. We
also evaluate five state-of-the-art image description approaches using this new
protocol and provide a benchmark for future comparisons. A version of CIDEr
named CIDEr-D is available as part of the MS COCO evaluation server to enable
systematic evaluation and benchmarking.
| new_dataset | 0.955361 |
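
Illustration for the preceding record: a stripped-down sketch of the consensus idea behind CIDEr, scoring a candidate sentence by TF-IDF-weighted n-gram cosine similarity averaged over the human references. The released CIDEr/CIDEr-D metric additionally stems words, scores each n-gram order separately, and applies a length penalty, so this shows the idea only, not the official metric.

```python
from collections import Counter
import math

def ngrams(sentence, n=4):
    """Counts of all 1..n-grams of a whitespace-tokenized sentence."""
    toks = sentence.lower().split()
    return Counter(tuple(toks[i:i + k])
                   for k in range(1, n + 1)
                   for i in range(len(toks) - k + 1))

def cider_like(candidate, references, corpus_refs):
    """TF-IDF-weighted n-gram cosine, averaged over the references."""
    n_images = len(corpus_refs)
    # Document frequency: in how many images' reference sets a gram appears.
    df = Counter(g for refs in corpus_refs
                 for g in set().union(*(ngrams(r) for r in refs)))
    def tfidf(counts):
        return {g: c * math.log(n_images / df.get(g, 1))
                for g, c in counts.items()}
    def cosine(u, v):
        dot = sum(u[g] * v[g] for g in u.keys() & v.keys())
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0
    c_vec = tfidf(ngrams(candidate))
    return sum(cosine(c_vec, tfidf(ngrams(r)))
               for r in references) / len(references)

refs = [["a dog runs on the grass", "a brown dog running outside"],
        ["a man rides a bicycle", "a person on a bike"]]
print(cider_like("a dog running on grass", refs[0], refs))
```
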
1501.07430 | Seungjin Choi | Juho Lee and Seungjin Choi | Bayesian Hierarchical Clustering with Exponential Family: Small-Variance
Asymptotics and Reducibility | 10 pages, 2 figures, AISTATS-2015 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian hierarchical clustering (BHC) is an agglomerative clustering method,
where a probabilistic model is defined and its marginal likelihoods are
evaluated to decide which clusters to merge. While BHC provides a few
advantages over traditional distance-based agglomerative clustering algorithms,
successive evaluation of marginal likelihoods and careful hyperparameter tuning
are cumbersome and limit the scalability. In this paper we relax BHC into a
non-probabilistic formulation, exploring small-variance asymptotics in
conjugate-exponential models. We develop a novel clustering algorithm, referred
to as relaxed BHC (RBHC), from the asymptotic limit of the BHC model that
exhibits the scalability of distance-based agglomerative clustering algorithms
as well as the flexibility of Bayesian nonparametric models. We also
investigate the reducibility of the dissimilarity measure that emerges from the
asymptotic limit of the BHC model, allowing us to use scalable algorithms such
as the nearest neighbor chain algorithm. Numerical experiments on both
synthetic and real-world datasets demonstrate the validity and high performance
of our method.
| [
{
"version": "v1",
"created": "Thu, 29 Jan 2015 12:13:01 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jun 2015 00:45:09 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Lee",
"Juho",
""
],
[
"Choi",
"Seungjin",
""
]
] | TITLE: Bayesian Hierarchical Clustering with Exponential Family: Small-Variance
Asymptotics and Reducibility
ABSTRACT: Bayesian hierarchical clustering (BHC) is an agglomerative clustering method,
where a probabilistic model is defined and its marginal likelihoods are
evaluated to decide which clusters to merge. While BHC provides a few
advantages over traditional distance-based agglomerative clustering algorithms,
successive evaluation of marginal likelihoods and careful hyperparameter tuning
are cumbersome and limit the scalability. In this paper we relax BHC into a
non-probabilistic formulation, exploring small-variance asymptotics in
conjugate-exponential models. We develop a novel clustering algorithm, referred
to as relaxed BHC (RBHC), from the asymptotic limit of the BHC model that
exhibits the scalability of distance-based agglomerative clustering algorithms
as well as the flexibility of Bayesian nonparametric models. We also
investigate the reducibility of the dissimilarity measure that emerges from the
asymptotic limit of the BHC model, allowing us to use scalable algorithms such
as the nearest neighbor chain algorithm. Numerical experiments on both
synthetic and real-world datasets demonstrate the validity and high performance
of our method.
| no_new_dataset | 0.95018 |
1506.01077 | Saullo Haniell Galv\~ao De Oliveira | Saullo Haniell Galv\~ao de Oliveira, Rosana Veroneze, Fernando Jos\'e
Von Zuben | On bicluster aggregation and its benefits for enumerative solutions | 15 pages, will be published by Springer Verlag in the LNAI Series in
the book Advances in Data Mining | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Biclustering involves the simultaneous clustering of objects and their
attributes, thus defining local two-way clustering models. Recently, efficient
algorithms were conceived to enumerate all biclusters in real-valued datasets.
In this case, the solution composes a complete set of maximal and non-redundant
biclusters. However, the ability to enumerate biclusters revealed a challenging
scenario: in noisy datasets, each true bicluster may become highly fragmented
and with a high degree of overlapping. It prevents a direct analysis of the
obtained results. To reverse the fragmentation, we propose here two approaches
for properly aggregating the whole set of enumerated biclusters: one based on
single linkage and the other directly exploring the rate of overlapping. Both
proposals were compared with each other and with the current state-of-the-art in
several experiments, and they not only significantly reduced the number of
biclusters but also consistently increased the quality of the solution.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 22:26:42 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"de Oliveira",
"Saullo Haniell Galvão",
""
],
[
"Veroneze",
"Rosana",
""
],
[
"Von Zuben",
"Fernando José",
""
]
] | TITLE: On bicluster aggregation and its benefits for enumerative solutions
ABSTRACT: Biclustering involves the simultaneous clustering of objects and their
attributes, thus defining local two-way clustering models. Recently, efficient
algorithms were conceived to enumerate all biclusters in real-valued datasets.
In this case, the solution composes a complete set of maximal and non-redundant
biclusters. However, the ability to enumerate biclusters revealed a challenging
scenario: in noisy datasets, each true bicluster may become highly fragmented
and with a high degree of overlapping. It prevents a direct analysis of the
obtained results. To reverse the fragmentation, we propose here two approaches
for properly aggregating the whole set of enumerated biclusters: one based on
single linkage and the other directly exploring the rate of overlapping. Both
proposals were compared with each other and with the current state-of-the-art in
several experiments, and they not only significantly reduced the number of
biclusters but also consistently increased the quality of the solution.
| no_new_dataset | 0.950549 |
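
Illustration for the preceding record: one simple way to aggregate fragmented, overlapping biclusters is to merge any pair whose cell-level overlap exceeds a threshold, in the spirit of the single-linkage variant. The Jaccard overlap, threshold, and greedy merge order below are illustrative choices rather than the paper's exact procedures.

```python
def jaccard(b1, b2):
    """Cell-level overlap between two biclusters, each a (rows, cols) pair of sets."""
    cells1 = {(r, c) for r in b1[0] for c in b1[1]}
    cells2 = {(r, c) for r in b2[0] for c in b2[1]}
    return len(cells1 & cells2) / len(cells1 | cells2)

def aggregate(biclusters, threshold=0.5):
    """Greedily merge biclusters whose overlap exceeds the threshold,
    taking the union of their rows and columns, until no pair qualifies."""
    bics = list(biclusters)
    merged = True
    while merged:
        merged = False
        for i in range(len(bics)):
            for j in range(i + 1, len(bics)):
                if jaccard(bics[i], bics[j]) >= threshold:
                    bics[i] = (bics[i][0] | bics[j][0],
                               bics[i][1] | bics[j][1])
                    del bics[j]
                    merged = True
                    break
            if merged:
                break
    return bics

# Two fragments of one true bicluster plus an unrelated one.
fragments = [({1, 2}, {1, 2, 3}), ({1, 2, 3}, {1, 2, 3}), ({7, 8}, {5, 6})]
print(aggregate(fragments))
```
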
1506.01092 | Seungjin Choi | Saehoon Kim and Seungjin Choi | Bilinear Random Projections for Locality-Sensitive Binary Codes | 11 pages, 23 figures, CVPR-2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Locality-sensitive hashing (LSH) is a popular data-independent indexing
method for approximate similarity search, where random projections followed by
quantization hash the points from the database so as to ensure that the
probability of collision is much higher for objects that are close to each
other than for those that are far apart. Most of high-dimensional visual
descriptors for images exhibit a natural matrix structure. When visual
descriptors are represented by high-dimensional feature vectors and long binary
codes are assigned, a random projection matrix requires expensive complexities
in both space and time. In this paper we analyze a bilinear random projection
method where feature matrices are transformed to binary codes by two smaller
random projection matrices. We base our theoretical analysis on extending
Raginsky and Lazebnik's result where random Fourier features are composed with
random binary quantizers to form locality sensitive binary codes. To this end,
we answer the following two questions: (1) whether a bilinear random projection
also yields similarity-preserving binary codes; (2) whether a bilinear random
projection yields performance gain or loss, compared to a large linear
projection. Regarding the first question, we present upper and lower bounds on
the expected Hamming distance between binary codes produced by bilinear random
projections. Regarding the second question, we analyze the upper and lower
bounds on covariance between two bits of binary codes, showing that the
correlation between two bits is small. Numerical experiments on MNIST and
Flickr45K datasets confirm the validity of our method.
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 00:30:26 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Kim",
"Saehoon",
""
],
[
"Choi",
"Seungjin",
""
]
] | TITLE: Bilinear Random Projections for Locality-Sensitive Binary Codes
ABSTRACT: Locality-sensitive hashing (LSH) is a popular data-independent indexing
method for approximate similarity search, where random projections followed by
quantization hash the points from the database so as to ensure that the
probability of collision is much higher for objects that are close to each
other than for those that are far apart. Most of high-dimensional visual
descriptors for images exhibit a natural matrix structure. When visual
descriptors are represented by high-dimensional feature vectors and long binary
codes are assigned, a random projection matrix requires expensive complexities
in both space and time. In this paper we analyze a bilinear random projection
method where feature matrices are transformed to binary codes by two smaller
random projection matrices. We base our theoretical analysis on extending
Raginsky and Lazebnik's result where random Fourier features are composed with
random binary quantizers to form locality sensitive binary codes. To this end,
we answer the following two questions: (1) whether a bilinear random projection
also yields similarity-preserving binary codes; (2) whether a bilinear random
projection yields performance gain or loss, compared to a large linear
projection. Regarding the first question, we present upper and lower bounds on
the expected Hamming distance between binary codes produced by bilinear random
projections. Regarding the second question, we analyze the upper and lower
bounds on covariance between two bits of binary codes, showing that the
correlation between two bits is small. Numerical experiments on MNIST and
Flickr45K datasets confirm the validity of our method.
| no_new_dataset | 0.954732 |
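
Illustration for the preceding record: the sketch below contrasts a bilinear code $\mathrm{sign}(R_1^\top X R_2)$, built from two small projection matrices, with an equivalent-length code from one large linear projection. It uses plain Gaussian sign projections rather than the random-Fourier-feature construction the paper analyzes, so it only demonstrates the storage saving and the locality-sensitive behavior, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 64, 32        # feature matrix size (e.g., a descriptor grid)
c1, c2 = 16, 8         # code size factors: c1 * c2 = 128 bits

# Bilinear projection: two small Gaussian matrices instead of one large one.
# Storage: d1*c1 + d2*c2 = 1280 numbers.
R1 = rng.standard_normal((d1, c1))
R2 = rng.standard_normal((d2, c2))

def bilinear_code(X):
    return (R1.T @ X @ R2 >= 0).ravel().astype(np.uint8)

# Equivalent-length full linear projection: d1*d2 * c1*c2 = 262144 numbers.
R_full = rng.standard_normal((d1 * d2, c1 * c2))

def linear_code(X):
    return (X.ravel() @ R_full >= 0).astype(np.uint8)

X = rng.standard_normal((d1, d2))
Y = X + 0.1 * rng.standard_normal((d1, d2))   # a nearby point
far = rng.standard_normal((d1, d2))           # an unrelated point

ham = lambda a, b: int(np.count_nonzero(a != b))
print("bilinear:", ham(bilinear_code(X), bilinear_code(Y)),
      "vs", ham(bilinear_code(X), bilinear_code(far)))
print("linear:  ", ham(linear_code(X), linear_code(Y)),
      "vs", ham(linear_code(X), linear_code(far)))
```
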
1506.01115 | Alexandros-Stavros Iliopoulos | Alexandros-Stavros Iliopoulos, Tiancheng Liu, Xiaobai Sun | Hyperspectral Image Classification and Clutter Detection via Multiple
Structural Embeddings and Dimension Reductions | 13 pages, 6 figures (30 images), submitted to International
Conference on Computer Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new and effective approach for Hyperspectral Image (HSI)
classification and clutter detection, overcoming a few long-standing challenges
presented by HSI data characteristics. Residing in a high-dimensional spectral
attribute space, HSI data samples are known to be strongly correlated in their
spectral signatures, exhibit nonlinear structure due to several physical laws,
and contain uncertainty and noise from multiple sources. In the presented
approach, we generate an adaptive, structurally enriched representation
environment, and employ the locally linear embedding (LLE) in it. There are two
structure layers external to LLE. One is feature space embedding: the HSI data
attributes are embedded into a discriminatory feature space where
spatio-spectral coherence and distinctive structures are distilled and
exploited to mitigate various difficulties encountered in the native
hyperspectral attribute space. The other structure layer encloses the ranges of
algorithmic parameters for LLE and feature embedding, and supports a
multiplexing and integrating scheme for contending with multi-source
uncertainty. Experiments on two commonly used HSI datasets with a small number
of learning samples have rendered remarkably high-accuracy classification
results, as well as distinctive maps of detected clutter regions.
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 04:04:43 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Iliopoulos",
"Alexandros-Stavros",
""
],
[
"Liu",
"Tiancheng",
""
],
[
"Sun",
"Xiaobai",
""
]
] | TITLE: Hyperspectral Image Classification and Clutter Detection via Multiple
Structural Embeddings and Dimension Reductions
ABSTRACT: We present a new and effective approach for Hyperspectral Image (HSI)
classification and clutter detection, overcoming a few long-standing challenges
presented by HSI data characteristics. Residing in a high-dimensional spectral
attribute space, HSI data samples are known to be strongly correlated in their
spectral signatures, exhibit nonlinear structure due to several physical laws,
and contain uncertainty and noise from multiple sources. In the presented
approach, we generate an adaptive, structurally enriched representation
environment, and employ the locally linear embedding (LLE) in it. There are two
structure layers external to LLE. One is feature space embedding: the HSI data
attributes are embedded into a discriminatory feature space where
spatio-spectral coherence and distinctive structures are distilled and
exploited to mitigate various difficulties encountered in the native
hyperspectral attribute space. The other structure layer encloses the ranges of
algorithmic parameters for LLE and feature embedding, and supports a
multiplexing and integrating scheme for contending with multi-source
uncertainty. Experiments on two commonly used HSI datasets with a small number
of learning samples have rendered remarkably high-accuracy classification
results, as well as distinctive maps of detected clutter regions.
| no_new_dataset | 0.947624 |
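
Illustration for the preceding record: the LLE core of the pipeline on a stand-in hyperspectral dataset, via scikit-learn. The feature-space embedding and parameter-multiplexing layers the paper wraps around LLE are omitted, and the synthetic two-class "cube" is purely illustrative.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Stand-in HSI data: 900 pixels with 50 spectral bands, drawn from two
# noisy spectral classes.
rng = np.random.default_rng(0)
base = rng.random((2, 50))                       # two class signatures
labels = rng.integers(0, 2, size=900)
pixels = base[labels] + 0.05 * rng.standard_normal((900, 50))

# Locally linear embedding of the spectral attribute space.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
embedded = lle.fit_transform(pixels)             # 900 pixels in 3-D
print(embedded.shape)
```
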
1506.01125 | Zhun Zhong | Zhun Zhong, Zongmin Li, Runlin Li, Xiaoxia Sun | Unsupervised domain adaption dictionary learning for visual recognition | 5 pages, 3 figures, ICIP 2015 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Over recent years, dictionary learning methods have been extensively applied
to various computer vision recognition applications and have produced
state-of-the-art results. However, when the data instances of a target domain
have a different distribution than those of a source domain, dictionary
learning methods may fail to perform well. In this paper, we address the
cross-domain visual recognition problem and propose a simple but effective
unsupervised domain adaptation approach, where labeled data come only from
the source domain. In order to bring the original data in the source and
target domains into the same distribution, the proposed method forces nearest
coupled data between the source and target domains to have identical sparse
representations while jointly learning dictionaries for each domain, where the
learned dictionaries can reconstruct the original data in the source and
target domains, respectively, so that sparse representations of the original
data can be used to perform visual recognition tasks. We demonstrate the
effectiveness of our approach on standard
datasets. Our method performs on par or better than competitive
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 05:21:37 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Zhong",
"Zhun",
""
],
[
"Li",
"Zongmin",
""
],
[
"Li",
"Runlin",
""
],
[
"Sun",
"Xiaoxia",
""
]
] | TITLE: Unsupervised domain adaption dictionary learning for visual recognition
ABSTRACT: Over recent years, dictionary learning methods have been extensively
applied to various computer vision recognition applications and have produced
state-of-the-art results. However, when the data instances of a target domain
have a different distribution than those of a source domain, dictionary
learning methods may fail to perform well. In this paper, we address the
cross-domain visual recognition problem and propose a simple but effective
unsupervised domain adaptation approach, where labeled data come only from
the source domain. In order to bring the original data in the source and
target domains into the same distribution, the proposed method forces nearest
coupled data between the source and target domains to have identical sparse
representations while jointly learning dictionaries for each domain, where the
learned dictionaries can reconstruct the original data in the source and
target domains, respectively, so that sparse representations of the original
data can be used to perform visual recognition tasks. We demonstrate the
effectiveness of our approach on standard
datasets. Our method performs on par or better than competitive
state-of-the-art methods.
| no_new_dataset | 0.952086 |
1506.01151 | Mathieu Aubry | Mathieu Aubry and Bryan Russell | Understanding deep features with computer-generated imagery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an approach for analyzing the variation of features generated by
convolutional neural networks (CNNs) with respect to scene factors that occur
in natural images. Such factors may include object style, 3D viewpoint, color,
and scene lighting configuration. Our approach analyzes CNN feature responses
corresponding to different scene factors by controlling for them via rendering
using a large database of 3D CAD models. The rendered images are presented to a
trained CNN and responses for different layers are studied with respect to the
input scene factors. We perform a decomposition of the responses based on
knowledge of the input scene factors and analyze the resulting components. In
particular, we quantify their relative importance in the CNN responses and
visualize them using principal component analysis. We show qualitative and
quantitative results of our study on three CNNs trained on large image
datasets: AlexNet, Places, and Oxford VGG. We observe important differences
across the networks and CNN layers for different scene factors and object
categories. Finally, we demonstrate that our analysis based on
computer-generated imagery translates to the network representation of natural
images.
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 07:41:14 GMT"
}
] | 2015-06-04T00:00:00 | [
[
"Aubry",
"Mathieu",
""
],
[
"Russell",
"Bryan",
""
]
] | TITLE: Understanding deep features with computer-generated imagery
ABSTRACT: We introduce an approach for analyzing the variation of features generated by
convolutional neural networks (CNNs) with respect to scene factors that occur
in natural images. Such factors may include object style, 3D viewpoint, color,
and scene lighting configuration. Our approach analyzes CNN feature responses
corresponding to different scene factors by controlling for them via rendering
using a large database of 3D CAD models. The rendered images are presented to a
trained CNN and responses for different layers are studied with respect to the
input scene factors. We perform a decomposition of the responses based on
knowledge of the input scene factors and analyze the resulting components. In
particular, we quantify their relative importance in the CNN responses and
visualize them using principal component analysis. We show qualitative and
quantitative results of our study on three CNNs trained on large image
datasets: AlexNet, Places, and Oxford VGG. We observe important differences
across the networks and CNN layers for different scene factors and object
categories. Finally, we demonstrate that our analysis based on
computer-generated imagery translates to the network representation of natural
images.
| no_new_dataset | 0.948537 |
1111.5612 | Vijayaraghavan Thirumalai | Vijayaraghavan Thirumalai, and Pascal Frossard | Distributed Representation of Geometrically Correlated Images with
Compressed Linear Measurements | null | null | 10.1109/TIP.2012.2188035 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of distributed coding of images whose
correlation is driven by the motion of objects or positioning of the vision
sensors. It concentrates on the problem where images are encoded with
compressed linear measurements. We propose a geometry-based correlation model
in order to describe the common information in pairs of images. We assume that
the constitutive components of natural images can be captured by visual
features that undergo local transformations (e.g., translation) in different
images. We first identify prominent visual features by computing a sparse
approximation of a reference image with a dictionary of geometric basis
functions. We then pose a regularized optimization problem to estimate the
corresponding features in correlated images given by quantized linear
measurements. The estimated features have to comply with the compressed
information and to represent consistent transformation between images. The
correlation model is given by the relative geometric transformations between
corresponding features. We then propose an efficient joint decoding algorithm
that estimates the compressed images such that they stay consistent with both
the quantized measurements and the correlation model. Experimental results show
that the proposed algorithm effectively estimates the correlation between
images in multi-view datasets. In addition, the proposed algorithm provides
effective decoding performance that compares advantageously to independent
coding solutions as well as state-of-the-art distributed coding schemes based
on disparity learning.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2011 15:54:23 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Thirumalai",
"Vijayaraghavan",
""
],
[
"Frossard",
"Pascal",
""
]
] | TITLE: Distributed Representation of Geometrically Correlated Images with
Compressed Linear Measurements
ABSTRACT: This paper addresses the problem of distributed coding of images whose
correlation is driven by the motion of objects or positioning of the vision
sensors. It concentrates on the problem where images are encoded with
compressed linear measurements. We propose a geometry-based correlation model
in order to describe the common information in pairs of images. We assume that
the constitutive components of natural images can be captured by visual
features that undergo local transformations (e.g., translation) in different
images. We first identify prominent visual features by computing a sparse
approximation of a reference image with a dictionary of geometric basis
functions. We then pose a regularized optimization problem to estimate the
corresponding features in correlated images given by quantized linear
measurements. The estimated features have to comply with the compressed
information and to represent consistent transformation between images. The
correlation model is given by the relative geometric transformations between
corresponding features. We then propose an efficient joint decoding algorithm
that estimates the compressed images such that they stay consistent with both
the quantized measurements and the correlation model. Experimental results show
that the proposed algorithm effectively estimates the correlation between
images in multi-view datasets. In addition, the proposed algorithm provides
effective decoding performance that compares advantageously to independent
coding solutions as well as state-of-the-art distributed coding schemes based
on disparity learning.
| no_new_dataset | 0.945147 |
1112.2392 | Jianguo Liu | Jian-Guo Liu, Tao Zhou, Qiang Guo | Information filtering via biased heat conduction | 4 pages, 3 figures | Phys. Rev. E 84 (2011) 037101 | 10.1103/PhysRevE.84.037101 | null | physics.data-an cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heat conduction process has recently found its application in personalized
recommendation [T. Zhou \emph{et al.}, PNAS 107, 4511 (2010)], which is of high
diversity but low accuracy. By decreasing the temperatures of small-degree
objects, we present an improved algorithm, called biased heat conduction (BHC),
which could simultaneously enhance the accuracy and diversity. Extensive
experimental analyses demonstrate that the accuracy on MovieLens, Netflix and
Delicious datasets could be improved by 43.5%, 55.4% and 19.2% compared with
the standard heat conduction algorithm, and the diversity is also increased or
approximately unchanged. Further statistical analyses suggest that the present
algorithm could simultaneously identify users' mainstream and special tastes,
resulting in better performance than the standard heat conduction algorithm.
This work provides a credible way for highly efficient information filtering.
| [
{
"version": "v1",
"created": "Sun, 11 Dec 2011 20:18:22 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Liu",
"Jian-Guo",
""
],
[
"Zhou",
"Tao",
""
],
[
"Guo",
"Qiang",
""
]
] | TITLE: Information filtering via biased heat conduction
ABSTRACT: Heat conduction process has recently found its application in personalized
recommendation [T. Zhou \emph{et al.}, PNAS 107, 4511 (2010)], which is of high
diversity but low accuracy. By decreasing the temperatures of small-degree
objects, we present an improved algorithm, called biased heat conduction (BHC),
which could simultaneously enhance the accuracy and diversity. Extensive
experimental analyses demonstrate that the accuracy on MovieLens, Netflix and
Delicious datasets could be improved by 43.5%, 55.4% and 19.2% compared with
the standard heat conduction algorithm, and the diversity is also increased or
approximately unchanged. Further statistical analyses suggest that the present
algorithm could simultaneously identify users' mainstream and special tastes,
resulting in better performance than the standard heat conduction algorithm.
This work provides a credible way for highly efficient information filtering.
| no_new_dataset | 0.951188 |
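
Illustration for the preceding record: heat conduction on the user-object bipartite network averages heat from a user's collected objects to users and back to objects; biasing the final object-degree divisor with an exponent tunes the temperatures of small-degree objects. The exponent's name, value, and exact placement below are illustrative, since the record does not spell out the paper's parameterization.

```python
import numpy as np

def biased_heat_conduction(A, user, lam=0.8):
    """Heat-conduction recommendation on a binary user-object matrix A.

    Plain heat conduction averages heat objects -> users -> objects; the
    bias exponent lam replaces the final object-degree divisor k_j with
    k_j**lam, exposing the tunable temperature bias as one knob.
    """
    k_users = A.sum(axis=1)            # user degrees
    k_objs = A.sum(axis=0)             # object degrees
    heat0 = A[user].astype(float)      # unit heat on the target user's objects
    # Step 1: each user takes the average heat of the objects they collected.
    h_users = (A @ heat0) / np.maximum(k_users, 1)
    # Step 2: each object takes the biased average heat of its users.
    scores = (A.T @ h_users) / np.maximum(k_objs, 1) ** lam
    scores[heat0 > 0] = -np.inf        # do not re-recommend collected objects
    return scores

rng = np.random.default_rng(0)
A = (rng.random((20, 30)) < 0.15).astype(int)   # toy 20-user, 30-object network
scores = biased_heat_conduction(A, user=0)
print("top-5 recommendations for user 0:", np.argsort(scores)[::-1][:5])
```
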
1112.2984 | Peter Klimek | Peter Klimek, Ricardo Hausmann, Stefan Thurner | Empirical confirmation of creative destruction from world trade data | 16 pages (main text), 6 figures | null | 10.1371/journal.pone.0038924 | null | physics.soc-ph q-fin.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that world trade network datasets contain empirical evidence that the
dynamics of innovation in the world economy indeed follows the concept of
creative destruction, as proposed by J.A. Schumpeter more than half a century
ago. National economies can be viewed as complex, evolving systems, driven by a
stream of appearance and disappearance of goods and services. Products appear
in bursts of creative cascades. We find that products systematically tend to
co-appear, and that product appearances lead to massive disappearance events of
existing products in the following years. The opposite - disappearances
followed by periods of appearances - is not observed. This is an empirical
validation of the dominance of cascading competitive replacement events on the
scale of national economies, i.e. creative destruction. We find a tendency that
more complex products drive out less complex ones, i.e. progress has a
direction. Finally we show that the growth trajectory of a country's product
output diversity can be understood by a recently proposed evolutionary model of
Schumpeterian economic dynamics.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2011 18:00:49 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Klimek",
"Peter",
""
],
[
"Hausmann",
"Ricardo",
""
],
[
"Thurner",
"Stefan",
""
]
] | TITLE: Empirical confirmation of creative destruction from world trade data
ABSTRACT: We show that world trade network datasets contain empirical evidence that the
dynamics of innovation in the world economy indeed follows the concept of
creative destruction, as proposed by J.A. Schumpeter more than half a century
ago. National economies can be viewed as complex, evolving systems, driven by a
stream of appearance and disappearance of goods and services. Products appear
in bursts of creative cascades. We find that products systematically tend to
co-appear, and that product appearances lead to massive disappearance events of
existing products in the following years. The opposite - disappearances
followed by periods of appearances - is not observed. This is an empirical
validation of the dominance of cascading competitive replacement events on the
scale of national economies, i.e. creative destruction. We find a tendency that
more complex products drive out less complex ones, i.e. progress has a
direction. Finally we show that the growth trajectory of a country's product
output diversity can be understood by a recently proposed evolutionary model of
Schumpeterian economic dynamics.
| no_new_dataset | 0.94428 |
1409.4841 | Luca Montabone | L. Montabone, F. Forget, E. Millour, R. J. Wilson, S. R. Lewis, B. A.
Cantor, D. Kass, A. Kleinboehl, M. Lemmon, M. D. Smith, M. J. Wolff | Eight-year Climatology of Dust Optical Depth on Mars | This preprint version of this paper was submitted to Icarus on March
8th, 2014 (arXiv processing stamped on the paper the date of arXiv
submission) | null | 10.1016/j.icarus.2014.12.034 | null | astro-ph.EP physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have produced a multiannual climatology of airborne dust from Martian year
24 to 31 using multiple datasets of retrieved or estimated column optical
depths. The datasets are based on observations of the Martian atmosphere from
April 1999 to July 2013 made by different orbiting instruments: the Thermal
Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission
Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS)
aboard Mars Reconnaissance Orbiter (MRO). The procedure we have adopted
consists of gridding the available retrievals of column dust optical depth
(CDOD) from TES and THEMIS nadir observations, as well as the estimates of this
quantity from MCS limb observations. Our gridding method calculates averages
and uncertainties on a regularly spaced, but possibly incomplete,
spatio-temporal grid, using an iterative procedure weighted in space, time, and
retrieval uncertainty. In order to evaluate strengths and weaknesses of the
resulting gridded maps, we validate them with independent observations of CDOD.
We have statistically analyzed the irregularly gridded maps to provide an
overview of the dust climatology on Mars over eight years, specifically in
relation to its interseasonal and interannual variability. Finally, we have
produced multiannual, regular daily maps of CDOD by spatially interpolating the
irregularly gridded maps using a kriging method. These synoptic maps are used
as dust scenarios in the Mars Climate Database version 5, and are useful in
many modelling applications in addition to forming a basis for instrument
intercomparisons. The derived dust maps for the eight available Martian years
are publicly available and distributed with open access.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 00:36:10 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Montabone",
"L.",
""
],
[
"Forget",
"F.",
""
],
[
"Millour",
"E.",
""
],
[
"Wilson",
"R. J.",
""
],
[
"Lewis",
"S. R.",
""
],
[
"Cantor",
"B. A.",
""
],
[
"Kass",
"D.",
""
],
[
"Kleinboehl",
"A.",
""
],
[
"Lemmon",
"M.",
""
],
[
"Smith",
"M. D.",
""
],
[
"Wolff",
"M. J.",
""
]
] | TITLE: Eight-year Climatology of Dust Optical Depth on Mars
ABSTRACT: We have produced a multiannual climatology of airborne dust from Martian year
24 to 31 using multiple datasets of retrieved or estimated column optical
depths. The datasets are based on observations of the Martian atmosphere from
April 1999 to July 2013 made by different orbiting instruments: the Thermal
Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission
Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS)
aboard Mars Reconnaissance Orbiter (MRO). The procedure we have adopted
consists of gridding the available retrievals of column dust optical depth
(CDOD) from TES and THEMIS nadir observations, as well as the estimates of this
quantity from MCS limb observations. Our gridding method calculates averages
and uncertainties on a regularly spaced, but possibly incomplete,
spatio-temporal grid, using an iterative procedure weighted in space, time, and
retrieval uncertainty. In order to evaluate strengths and weaknesses of the
resulting gridded maps, we validate them with independent observations of CDOD.
We have statistically analyzed the irregularly gridded maps to provide an
overview of the dust climatology on Mars over eight years, specifically in
relation to its interseasonal and interannual variability. Finally, we have
produced multiannual, regular daily maps of CDOD by spatially interpolating the
irregularly gridded maps using a kriging method. These synoptic maps are used
as dust scenarios in the Mars Climate Database version 5, and are useful in
many modelling applications in addition to forming a basis for instrument
intercomparisons. The derived dust maps for the eight available Martian years
are publicly available and distributed with open access.
| no_new_dataset | 0.948346 |
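
Illustration for the preceding record: a single-pass sketch of weighted space-time gridding, averaging CDOD retrievals at each grid node with Gaussian weights in space and time and inverse-variance weights from the retrieval uncertainty. The paper's procedure is iterative and can leave data-free nodes incomplete; the kernel widths, units, and toy retrievals below are illustrative.

```python
import numpy as np

def grid_cdod(lon, lat, t, tau, sigma, grid_lon, grid_lat, grid_t,
              l_space=5.0, l_time=0.5):
    """Weighted-average gridding of CDOD retrievals tau with uncertainties sigma.

    Each grid node averages all retrievals with Gaussian weights in space
    and time combined with inverse-variance weights (one pass of the idea;
    no longitude wraparound or outlier rejection).
    """
    maps = np.full((len(grid_t), len(grid_lat), len(grid_lon)), np.nan)
    for it, t0 in enumerate(grid_t):
        for ilat, la in enumerate(grid_lat):
            for ilon, lo in enumerate(grid_lon):
                w = (np.exp(-((lon - lo) ** 2 + (lat - la) ** 2) / (2 * l_space ** 2))
                     * np.exp(-((t - t0) ** 2) / (2 * l_time ** 2))
                     / sigma ** 2)
                if w.sum() > 0:
                    maps[it, ilat, ilon] = np.sum(w * tau) / w.sum()
    return maps

# Toy retrievals: 200 random observations with fixed uncertainty.
rng = np.random.default_rng(0)
lon, lat = rng.uniform(-180, 180, 200), rng.uniform(-90, 90, 200)
t = rng.uniform(0, 10, 200)
tau = 0.3 + 0.1 * rng.standard_normal(200)
sigma = np.full(200, 0.03)
maps = grid_cdod(lon, lat, t, tau, sigma,
                 np.arange(-180, 181, 60), np.arange(-90, 91, 45),
                 np.arange(0, 10, 2))
print(maps.shape)
```
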
1506.00765 | Rongrong Ji Rongrong Ji | Zheng Cai, Donglin Cao, Rongrong Ji | Video (GIF) Sentiment Analysis using Large-Scale Mid-Level Ontology | null | null | null | null | cs.MM cs.CL cs.IR | http://creativecommons.org/licenses/by/3.0/ | With faster connection speed, Internet users are now making social networks a
huge reservoir of texts, images and video clips (GIF). Sentiment analysis for
such online platforms can be used to predict political elections, evaluate
economic indicators and so on. However, GIF sentiment analysis is quite
challenging, not only because it hinges on spatio-temporal visual content
abstraction, but also because the relationship between such abstraction and
final sentiment remains unknown. In this paper, we are dedicated to finding
out this relationship. We propose a SentiPair Sequence based spatiotemporal
visual sentiment ontology, which forms the mid-level representations for GIF
sentiment. The establishment process of SentiPair contains two steps. First,
we construct the Synset Forest to define the semantic tree structure of visual
sentiment label elements. Then, through the Synset Forest, we organically
select and combine sentiment label elements to form a mid-level visual
sentiment representation. Our experiments indicate that SentiPair outperforms
other competing mid-level attributes. Using SentiPair, our analysis framework
can achieve satisfactory prediction accuracy (72.6%). We also opened our
dataset
(GSO-2015) to the research community. GSO-2015 contains more than 6,000
manually annotated GIFs out of more than 40,000 candidates. Each is labeled
with both sentiment and SentiPair Sequence.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 06:31:57 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Cai",
"Zheng",
""
],
[
"Cao",
"Donglin",
""
],
[
"Ji",
"Rongrong",
""
]
] | TITLE: Video (GIF) Sentiment Analysis using Large-Scale Mid-Level Ontology
ABSTRACT: With faster connection speed, Internet users are now making social
networks a huge reservoir of texts, images and video clips (GIF). Sentiment
analysis for such online platforms can be used to predict political elections,
evaluate economic indicators and so on. However, GIF sentiment analysis is
quite challenging, not only because it hinges on spatio-temporal visual
content abstraction, but also because the relationship between such
abstraction and final sentiment remains unknown. In this paper, we are
dedicated to finding out this relationship. We propose a SentiPair Sequence
based spatiotemporal visual sentiment ontology, which forms the mid-level
representations for GIF sentiment. The establishment process of SentiPair
contains two steps. First, we construct the Synset Forest to define the
semantic tree structure of visual sentiment label elements. Then, through the
Synset Forest, we organically select and combine sentiment label elements to
form a mid-level visual sentiment representation. Our experiments indicate
that SentiPair outperforms other competing mid-level attributes. Using
SentiPair, our analysis framework can achieve satisfactory prediction
accuracy (72.6%). We also opened our dataset
(GSO-2015) to the research community. GSO-2015 contains more than 6,000
manually annotated GIFs out of more than 40,000 candidates. Each is labeled
with both sentiment and SentiPair Sequence.
| new_dataset | 0.952706 |
1506.00770 | Carlos Herrera-Yag\"ue | C. Herrera-Yag\"ue, C.M. Schneider, T. Couronn\'e, Z. Smoreda, R.M.
Benito, P.J. Zufiria and M.C. Gonz\'alez | The anatomy of urban social networks and its implications in the
searchability problem | null | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The appearance of large geolocated communication datasets has recently
increased our understanding of how social networks relate to their physical
space. However, many recurrently reported properties, such as the spatial
clustering of network communities, have not yet been systematically tested at
different scales. In this work we analyze the social network structure of over
25 million phone users from three countries at three different scales: country,
provinces and cities. We consistently find that this last urban scenario
presents significant differences from common knowledge about social networks.
First, the emergence of a giant component in the network seems to be controlled
by whether or not the network spans over the entire urban border, almost
independently of the population or geographic extension of the city. Second,
urban communities are much less geographically clustered than expected. These
two findings shed new light on the widely-studied searchability in
self-organized networks. By exhaustive simulation of decentralized search
strategies we conclude that urban networks are searchable not through
geographical proximity, as their country-wide counterparts are, but through a
homophily-driven community structure.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 06:48:16 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Herrera-Yagüe",
"C.",
""
],
[
"Schneider",
"C. M.",
""
],
[
"Couronné",
"T.",
""
],
[
"Smoreda",
"Z.",
""
],
[
"Benito",
"R. M.",
""
],
[
"Zufiria",
"P. J.",
""
],
[
"González",
"M. C.",
""
]
] | TITLE: The anatomy of urban social networks and its implications in the
searchability problem
ABSTRACT: The appearance of large geolocated communication datasets has recently
increased our understanding of how social networks relate to their physical
space. However, many recurrently reported properties, such as the spatial
clustering of network communities, have not yet been systematically tested at
different scales. In this work we analyze the social network structure of over
25 million phone users from three countries at three different scales: country,
provinces and cities. We consistently find that this last urban scenario
presents significant differences from common knowledge about social networks.
First, the emergence of a giant component in the network seems to be controlled
by whether or not the network spans over the entire urban border, almost
independently of the population or geographic extension of the city. Second,
urban communities are much less geographically clustered than expected. These
two findings shed new light on the widely-studied searchability in
self-organized networks. By exhaustive simulation of decentralized search
strategies we conclude that urban networks are searchable not through
geographical proximity, as their country-wide counterparts are, but through a
homophily-driven community structure.
| no_new_dataset | 0.942612 |
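A minimal sketch of the decentralized greedy-search idea the abstract above refers to (an illustration under assumed details, not the authors' simulation code): a message is forwarded to the neighbor closest to the target, where "closest" may be measured in geographic coordinates or in a community-label embedding.

import math

def greedy_search(neighbors, position, source, target, max_hops=100):
    """Greedy decentralized search using only local knowledge.
    `neighbors` maps node -> list of nodes; `position` maps node -> a
    coordinate tuple (geographic) or a community embedding."""
    def dist(a, b):
        return math.dist(position[a], position[b])
    current = source
    for hop in range(max_hops):
        if current == target:
            return hop                              # delivered in `hop` steps
        nxt = min(neighbors[current], key=lambda n: dist(n, target),
                  default=None)
        if nxt is None or dist(nxt, target) >= dist(current, target):
            return None                             # stuck: no closer neighbor
        current = nxt
    return None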
1506.00893 | Joana C\^orte-Real | Joana C\^orte-Real and Theofrastos Mantadelis and In\^es Dutra and
Ricardo Rocha | SkILL - a Stochastic Inductive Logic Learner | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored
area of Statistical Relational Learning which extends classic Inductive Logic
Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic
Learner, which takes probabilistic annotated data and produces First Order
Logic theories. Data in several domains such as medicine and bioinformatics
have an inherent degree of uncertainty, which can be used to produce models
closer to reality. SkILL can not only use this type of probabilistic data to
extract non-trivial knowledge from databases, but it also addresses
efficiency issues by introducing a novel, efficient and effective search
strategy to guide the search in PILP environments. The capabilities of SkILL
are demonstrated on three different datasets: (i) a synthetic toy example
used to validate the system, (ii) a probabilistic adaptation of a well-known
biological metabolism application, and (iii) a real world medical dataset in
the breast cancer domain. Results show that SkILL can perform as well as a
deterministic ILP learner, while also being able to incorporate probabilistic
knowledge that would otherwise not be considered.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 14:10:02 GMT"
}
] | 2015-06-03T00:00:00 | [
[
"Côrte-Real",
"Joana",
""
],
[
"Mantadelis",
"Theofrastos",
""
],
[
"Dutra",
"Inês",
""
],
[
"Rocha",
"Ricardo",
""
]
] | TITLE: SkILL - a Stochastic Inductive Logic Learner
ABSTRACT: Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored
area of Statistical Relational Learning which extends classic Inductive Logic
Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic
Learner, which takes probabilistic annotated data and produces First Order
Logic theories. Data in several domains such as medicine and bioinformatics
have an inherent degree of uncertainty, which can be used to produce models
closer to reality. SkILL can not only use this type of probabilistic data to
extract non-trivial knowledge from databases, but it also addresses
efficiency issues by introducing a novel, efficient and effective search
strategy to guide the search in PILP environments. The capabilities of SkILL
are demonstrated on three different datasets: (i) a synthetic toy example
used to validate the system, (ii) a probabilistic adaptation of a well-known
biological metabolism application, and (iii) a real world medical dataset in
the breast cancer domain. Results show that SkILL can perform as well as a
deterministic ILP learner, while also being able to incorporate probabilistic
knowledge that would otherwise not be considered.
| no_new_dataset | 0.69766 |
1401.8269 | Peter Turney | Peter D. Turney and Saif M. Mohammad | Experiments with Three Approaches to Recognizing Lexical Entailment | to appear in Natural Language Engineering | Natural Language Engineering, 21 (3), (2015), 437-476 | 10.1017/S1351324913000387 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference in natural language often involves recognizing lexical entailment
(RLE); that is, identifying whether one word entails another. For example,
"buy" entails "own". Two general strategies for RLE have been proposed: One
strategy is to manually construct an asymmetric similarity measure for context
vectors (directional similarity) and another is to treat RLE as a problem of
learning to recognize semantic relations using supervised machine learning
techniques (relation classification). In this paper, we experiment with two
recent state-of-the-art representatives of the two general strategies. The
first approach is an asymmetric similarity measure (an instance of the
directional similarity strategy), designed to capture the degree to which the
contexts of a word, a, form a subset of the contexts of another word, b. The
second approach (an instance of the relation classification strategy)
represents a word pair, a:b, with a feature vector that is the concatenation of
the context vectors of a and b, and then applies supervised learning to a
training set of labeled feature vectors. Additionally, we introduce a third
approach that is a new instance of the relation classification strategy. The
third approach represents a word pair, a:b, with a feature vector in which the
features are the differences in the similarities of a and b to a set of
reference words. All three approaches use vector space models (VSMs) of
semantics, based on word-context matrices. We perform an extensive evaluation
of the three approaches using three different datasets. The proposed new
approach (similarity differences) performs significantly better than the other
two approaches on some datasets and there is no dataset for which it is
significantly worse. Our results suggest it is beneficial to make connections
between the research in lexical entailment and the research in semantic
relation classification.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2014 19:42:19 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Turney",
"Peter D.",
""
],
[
"Mohammad",
"Saif M.",
""
]
] | TITLE: Experiments with Three Approaches to Recognizing Lexical Entailment
ABSTRACT: Inference in natural language often involves recognizing lexical entailment
(RLE); that is, identifying whether one word entails another. For example,
"buy" entails "own". Two general strategies for RLE have been proposed: One
strategy is to manually construct an asymmetric similarity measure for context
vectors (directional similarity) and another is to treat RLE as a problem of
learning to recognize semantic relations using supervised machine learning
techniques (relation classification). In this paper, we experiment with two
recent state-of-the-art representatives of the two general strategies. The
first approach is an asymmetric similarity measure (an instance of the
directional similarity strategy), designed to capture the degree to which the
contexts of a word, a, form a subset of the contexts of another word, b. The
second approach (an instance of the relation classification strategy)
represents a word pair, a:b, with a feature vector that is the concatenation of
the context vectors of a and b, and then applies supervised learning to a
training set of labeled feature vectors. Additionally, we introduce a third
approach that is a new instance of the relation classification strategy. The
third approach represents a word pair, a:b, with a feature vector in which the
features are the differences in the similarities of a and b to a set of
reference words. All three approaches use vector space models (VSMs) of
semantics, based on word-context matrices. We perform an extensive evaluation
of the three approaches using three different datasets. The proposed new
approach (similarity differences) performs significantly better than the other
two approaches on some datasets and there is no dataset for which it is
significantly worse. Our results suggest it is beneficial to make connections
between the research in lexical entailment and the research in semantic
relation classification.
| no_new_dataset | 0.949389 |
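A sketch of the similarity-differences representation described above; the choice of cosine similarity over word-context vectors and the fixed reference-word set are assumptions consistent with the abstract.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_difference_features(vec, a, b, reference_words):
    """Represent the pair (a, b) by differences of their similarities to
    reference words; `vec` maps word -> context vector (a row of a
    word-context matrix)."""
    return np.array([cosine(vec[a], vec[r]) - cosine(vec[b], vec[r])
                     for r in reference_words])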
1409.7480 | Mohamed Elhoseiny Mohamed Elhoseiny | Mohamed Elhoseiny, Ahmed Elgammal | Generalized Twin Gaussian Processes using Sharma-Mittal Divergence | This work was accepted for publication in the Machine Learning
Journal, 2015. The work is scheduled for presentation in the ECML-PKDD 2015
journal track | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a growing interest in mutual information measures due to their
wide range of applications in Machine Learning and Computer Vision. In this
paper, we present a generalized structured regression framework based on
Sharma-Mittal divergence, a relative entropy measure, which is introduced to the
Machine Learning community in this work. Sharma-Mittal (SM) divergence is a
generalized mutual information measure for the widely used R\'enyi, Tsallis,
Bhattacharyya, and Kullback-Leibler (KL) relative entropies. Specifically, we
study Sharma-Mittal divergence as a cost function in the context of the Twin
Gaussian Processes (TGP)~\citep{Bo:2010}, which generalizes over the
KL-divergence without computational penalty. We show interesting properties of
Sharma-Mittal TGP (SMTGP) through a theoretical analysis, which covers missing
insights in the traditional TGP formulation; moreover, we generalize this theory
based on SM-divergence instead of KL-divergence, which is a special case.
Experimentally, we evaluated the proposed SMTGP framework on several datasets.
The results show that SMTGP reaches better predictions than KL-based TGP, since
it offers a bigger class of models through its parameters that we learn from
the data.
| [
{
"version": "v1",
"created": "Fri, 26 Sep 2014 06:46:38 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Oct 2014 13:32:50 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Oct 2014 03:54:41 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Oct 2014 03:47:51 GMT"
},
{
"version": "v5",
"created": "Mon, 1 Jun 2015 06:30:29 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Elhoseiny",
"Mohamed",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Generalized Twin Gaussian Processes using Sharma-Mittal Divergence
ABSTRACT: There has been a growing interest in mutual information measures due to their
wide range of applications in Machine Learning and Computer Vision. In this
paper, we present a generalized structured regression framework based on
Sharma-Mittal divergence, a relative entropy measure, which is introduced to the
Machine Learning community in this work. Sharma-Mittal (SM) divergence is a
generalized mutual information measure for the widely used R\'enyi, Tsallis,
Bhattacharyya, and Kullback-Leibler (KL) relative entropies. Specifically, we
study Sharma-Mittal divergence as a cost function in the context of the Twin
Gaussian Processes (TGP)~\citep{Bo:2010}, which generalizes over the
KL-divergence without computational penalty. We show interesting properties of
Sharma-Mittal TGP (SMTGP) through a theoretical analysis, which covers missing
insights in the traditional TGP formulation; moreover, we generalize this theory
based on SM-divergence instead of KL-divergence, which is a special case.
Experimentally, we evaluated the proposed SMTGP framework on several datasets.
The results show that SMTGP reaches better predictions than KL-based TGP, since
it offers a bigger class of models through its parameters that we learn from
the data.
| no_new_dataset | 0.948537 |
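For reference, the Sharma-Mittal divergence for discrete distributions can be written as D_{a,b}(p||q) = ((sum_i p_i^a q_i^(1-a))^((1-b)/(1-a)) - 1) / (b - 1), a two-parameter family recovering the Renyi, Tsallis, and (in the limit a, b -> 1) KL divergences. A hedged numerical sketch:

import numpy as np

def sharma_mittal(p, q, a, b, eps=1e-12):
    """Sharma-Mittal divergence between discrete distributions p and q;
    requires a != 1 and b != 1 (the named divergences arise as limits)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    s = np.sum(p**a * q**(1.0 - a))     # a-order Bhattacharyya-type sum
    return (s**((1.0 - b) / (1.0 - a)) - 1.0) / (b - 1.0)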
1412.2197 | Liangliang Cao | Liangliang Cao and Chang Wang | Practice in Synonym Extraction at Large Scale | This paper has been withdrawn by the author since the experimental
results are not good enough | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synonym extraction is an important task in natural language processing and
is often used as a submodule in query expansion, question answering, and other
applications. An automatic synonym extractor is highly preferred for large-scale
applications. Previous studies in synonym extraction are mostly limited to
small-scale datasets. In this paper, we build a large dataset with 3.4 million
synonym/non-synonym pairs to capture the challenges in real world scenarios. We
proposed (1) a new cost function to accommodate the unbalanced learning
problem, and (2) a feature learning based deep neural network to model the
complicated relationships in synonym pairs. We compare several different
approaches based on SVMs and neural networks, and find that a novel
feature-learning-based neural network outperforms the methods with hand-assigned
features. Specifically, the best performance of our model surpasses the SVM
baseline with a significant 97\% relative improvement.
| [
{
"version": "v1",
"created": "Sat, 6 Dec 2014 04:40:18 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Dec 2014 16:49:44 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Jun 2015 19:55:17 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Cao",
"Liangliang",
""
],
[
"Wang",
"Chang",
""
]
] | TITLE: Practice in Synonym Extraction at Large Scale
ABSTRACT: Synonym extraction is an important task in natural language processing and
is often used as a submodule in query expansion, question answering, and other
applications. An automatic synonym extractor is highly preferred for large-scale
applications. Previous studies in synonym extraction are mostly limited to
small-scale datasets. In this paper, we build a large dataset with 3.4 million
synonym/non-synonym pairs to capture the challenges in real world scenarios. We
proposed (1) a new cost function to accommodate the unbalanced learning
problem, and (2) a feature learning based deep neural network to model the
complicated relationships in synonym pairs. We compare several different
approaches based on SVMs and neural networks, and find that a novel
feature-learning-based neural network outperforms the methods with hand-assigned
features. Specifically, the best performance of our model surpasses the SVM
baseline with a significant 97\% relative improvement.
| new_dataset | 0.961244 |
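One plausible form of the cost function for the unbalanced synonym/non-synonym data mentioned above (an assumption; the exact formulation is not given in the abstract) is a class-weighted binary cross-entropy that up-weights the rare positive class:

import numpy as np

def weighted_bce(y_true, y_prob, pos_weight):
    """Binary cross-entropy with an up-weighted positive (synonym) class."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), 1e-7, 1 - 1e-7)
    loss = -(pos_weight * y_true * np.log(y_prob)
             + (1.0 - y_true) * np.log(1.0 - y_prob))
    return loss.mean()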
1503.06772 | Abigail Jacobs | Abigail Z. Jacobs, Samuel F. Way, Johan Ugander and Aaron Clauset | Assembling thefacebook: Using heterogeneity to understand online social
network assembly | 13 pages, 11 figures, Proceedings of the 7th Annual ACM Web Science
Conference (WebSci), 2015 | null | 10.1145/2786451.2786477 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online social networks represent a popular and diverse class of social media
systems. Despite this variety, each of these systems undergoes a general
process of online social network assembly, which represents the complicated and
heterogeneous changes that transform newly born systems into mature platforms.
However, little is known about this process. For example, how much of a
network's assembly is driven by simple growth? How does a network's structure
change as it matures? How does network structure vary with adoption rates and
user heterogeneity, and do these properties play different roles at different
points in the assembly? We investigate these and other questions using a unique
dataset of online connections among the roughly one million users at the first
100 colleges admitted to Facebook, captured just 20 months after its launch. We
first show that different vintages and adoption rates across this population of
networks reveal temporal dynamics of the assembly process, and that assembly is
only loosely related to network growth. We then exploit natural experiments
embedded in this dataset and complementary data obtained via Internet
archaeology to show that different subnetworks matured at different rates
toward similar end states. These results shed light on the processes and
patterns of online social network assembly, and may facilitate more effective
design for online social systems.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 19:13:27 GMT"
},
{
"version": "v2",
"created": "Sun, 31 May 2015 20:24:02 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Jacobs",
"Abigail Z.",
""
],
[
"Way",
"Samuel F.",
""
],
[
"Ugander",
"Johan",
""
],
[
"Clauset",
"Aaron",
""
]
] | TITLE: Assembling thefacebook: Using heterogeneity to understand online social
network assembly
ABSTRACT: Online social networks represent a popular and diverse class of social media
systems. Despite this variety, each of these systems undergoes a general
process of online social network assembly, which represents the complicated and
heterogeneous changes that transform newly born systems into mature platforms.
However, little is known about this process. For example, how much of a
network's assembly is driven by simple growth? How does a network's structure
change as it matures? How does network structure vary with adoption rates and
user heterogeneity, and do these properties play different roles at different
points in the assembly? We investigate these and other questions using a unique
dataset of online connections among the roughly one million users at the first
100 colleges admitted to Facebook, captured just 20 months after its launch. We
first show that different vintages and adoption rates across this population of
networks reveal temporal dynamics of the assembly process, and that assembly is
only loosely related to network growth. We then exploit natural experiments
embedded in this dataset and complementary data obtained via Internet
archaeology to show that different subnetworks matured at different rates
toward similar end states. These results shed light on the processes and
patterns of online social network assembly, and may facilitate more effective
design for online social systems.
| new_dataset | 0.747386 |
1504.00905 | Jose Lopez | Jose A. Lopez, Octavia Camps, Mario Sznaier | Robust Anomaly Detection Using Semidefinite Programming | 13 pages, 11 figures | null | null | null | math.OC cs.CV cs.LG cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new approach, based on polynomial optimization and the
method of moments, to the problem of anomaly detection. The proposed technique
only requires information about the statistical moments of the normal-state
distribution of the features of interest and compares favorably with existing
approaches (such as Parzen windows and 1-class SVM). In addition, it provides a
succinct description of the normal state. Thus, it leads to a substantial
simplification of the anomaly detection problem when working with
higher-dimensional datasets.
| [
{
"version": "v1",
"created": "Fri, 3 Apr 2015 18:20:36 GMT"
},
{
"version": "v2",
"created": "Sat, 30 May 2015 15:58:36 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Lopez",
"Jose A.",
""
],
[
"Camps",
"Octavia",
""
],
[
"Sznaier",
"Mario",
""
]
] | TITLE: Robust Anomaly Detection Using Semidefinite Programming
ABSTRACT: This paper presents a new approach, based on polynomial optimization and the
method of moments, to the problem of anomaly detection. The proposed technique
only requires information about the statistical moments of the normal-state
distribution of the features of interest and compares favorably with existing
approaches (such as Parzen windows and 1-class SVM). In addition, it provides a
succinct description of the normal state. Thus, it leads to a substantial
simplification of the anomaly detection problem when working with
higher-dimensional datasets.
| no_new_dataset | 0.950686 |
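As a toy illustration only (the paper itself optimizes over moments via semidefinite programming), even the first two moments of the normal-state distribution already yield a simple anomaly score, the Mahalanobis distance:

import numpy as np

def mahalanobis_score(x, mean, cov):
    """Squared Mahalanobis distance of x from the normal-state moments."""
    diff = np.asarray(x) - np.asarray(mean)
    return float(diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(0)
normal = rng.normal(size=(1000, 3))                    # normal-state samples
mu, sigma = normal.mean(axis=0), np.cov(normal.T)
print(mahalanobis_score([0.1, -0.2, 0.0], mu, sigma))  # small -> normal
print(mahalanobis_score([8.0, 8.0, 8.0], mu, sigma))   # large -> anomalous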
1506.00022 | Xiaohan Zhao | Xiaohan Zhao, Qingyun Liu, Lin Zhou, Haitao Zheng and Ben Y. Zhao | Graph Watermarks | 16 pages, 14 figures, full version | null | null | null | cs.CR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From network topologies to online social networks, many of today's most
sensitive datasets are captured in large graphs. A significant challenge facing
owners of these datasets is how to share sensitive graphs with collaborators
and authorized users, e.g. network topologies with network equipment vendors or
Facebook's social graphs with academic collaborators. Current tools can provide
limited node or edge privacy, but require modifications to the graph that
significantly reduce its utility.
In this work, we propose a new alternative in the form of graph watermarks.
Graph watermarks are small graphs tailor-made for a given graph dataset, a
secure graph key, and a secure user key. To share a sensitive graph G with a
collaborator C, the owner generates a watermark graph W using G, the graph key,
and C's key as input, and embeds W into G to form G'. If G' is leaked by C, its
owner can reliably determine if the watermark W generated for C does in fact
reside inside G', thereby proving C is responsible for the leak. Graph
watermarks serve both as a deterrent against data leakage and a method of
recourse after a leak. We provide robust schemes for creating, embedding and
extracting watermarks, and use analysis and experiments on large, real graphs
to show that they are unique and difficult to forge. We study the robustness of
graph watermarks against both single and powerful colluding attacker models,
then propose and empirically evaluate mechanisms to dramatically improve
resilience.
| [
{
"version": "v1",
"created": "Fri, 29 May 2015 20:29:04 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Zhao",
"Xiaohan",
""
],
[
"Liu",
"Qingyun",
""
],
[
"Zhou",
"Lin",
""
],
[
"Zheng",
"Haitao",
""
],
[
"Zhao",
"Ben Y.",
""
]
] | TITLE: Graph Watermarks
ABSTRACT: From network topologies to online social networks, many of today's most
sensitive datasets are captured in large graphs. A significant challenge facing
owners of these datasets is how to share sensitive graphs with collaborators
and authorized users, e.g. network topologies with network equipment vendors or
Facebook's social graphs with academic collaborators. Current tools can provide
limited node or edge privacy, but require modifications to the graph that
significantly reduce its utility.
In this work, we propose a new alternative in the form of graph watermarks.
Graph watermarks are small graphs tailor-made for a given graph dataset, a
secure graph key, and a secure user key. To share a sensitive graph G with a
collaborator C, the owner generates a watermark graph W using G, the graph key,
and C's key as input, and embeds W into G to form G'. If G' is leaked by C, its
owner can reliably determine if the watermark W generated for C does in fact
reside inside G', thereby proving C is responsible for the leak. Graph
watermarks serve both as a deterrent against data leakage and a method of
recourse after a leak. We provide robust schemes for creating, embedding and
extracting watermarks, and use analysis and experiments on large, real graphs
to show that they are unique and difficult to forge. We study the robustness of
graph watermarks against both single and powerful colluding attacker models,
then propose and empirically evaluate mechanisms to dramatically improve
resilience.
| no_new_dataset | 0.934873 |
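A hedged sketch of the watermarking workflow (key handling and watermark shape are assumptions, and a practical scheme must also avoid detectable anomalies): derive a collaborator-specific pseudo-random watermark subgraph W from the graph key and the user key, then embed it by adding its edges to G.

import hashlib
import random

def watermark_edges(nodes, graph_key, user_key, k=8):
    """Deterministically derive a small random watermark subgraph W from
    the two secret keys, hosted on k pseudo-randomly chosen nodes."""
    seed = hashlib.sha256(graph_key + user_key).digest()
    rng = random.Random(seed)
    chosen = rng.sample(sorted(nodes), k)
    return {(chosen[i], chosen[j]) for i in range(k) for j in range(i + 1, k)
            if rng.random() < 0.5}

G = {(1, 2), (2, 3), (3, 4)}                  # toy edge set
W = watermark_edges(range(100), b"graph-key", b"collaborator-C")
G_prime = G | W                               # the shared copy G'
print(W <= G_prime)                           # True: watermark is extractable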
1506.00176 | Lianwen Jin | Liquan Qiu, Lianwen Jin, Ruifen Dai, Yuxiang Zhang, Lei Li | An Open Source Testing Tool for Evaluating Handwriting Input Methods | 5 pages, 3 figures, 11 tables. Accepted to appear at ICDAR 2015 | null | null | null | cs.HC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an open source tool for testing the recognition accuracy
of Chinese handwriting input methods. The tool consists of two modules, namely
the PC and Android mobile client. The PC client reads handwritten samples in
the computer, and transfers them individually to the Android client in
accordance with the socket communication protocol. After the Android client
receives the data, it simulates the handwriting on the screen of the client device, and
triggers the corresponding handwriting recognition method. The recognition
accuracy is recorded by the Android client. We present the design principles
and describe the implementation of the test platform. We construct several test
datasets for evaluating different handwriting recognition systems, and conduct
an objective and comprehensive test using six Chinese handwriting input methods
with five datasets. The test results for the recognition accuracy are then
compared and analyzed.
| [
{
"version": "v1",
"created": "Sat, 30 May 2015 22:35:55 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Qiu",
"Liquan",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Dai",
"Ruifen",
""
],
[
"Zhang",
"Yuxiang",
""
],
[
"Li",
"Lei",
""
]
] | TITLE: An Open Source Testing Tool for Evaluating Handwriting Input Methods
ABSTRACT: This paper presents an open source tool for testing the recognition accuracy
of Chinese handwriting input methods. The tool consists of two modules, namely
the PC and Android mobile client. The PC client reads handwritten samples in
the computer, and transfers them individually to the Android client in
accordance with the socket communication protocol. After the Android client
receives the data, it simulates the handwriting on the screen of the client device, and
triggers the corresponding handwriting recognition method. The recognition
accuracy is recorded by the Android client. We present the design principles
and describe the implementation of the test platform. We construct several test
datasets for evaluating different handwriting recognition systems, and conduct
an objective and comprehensive test using six Chinese handwriting input methods
with five datasets. The test results for the recognition accuracy are then
compared and analyzed.
| new_dataset | 0.953275 |
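A hedged sketch of the PC-side transfer step (the actual wire format is not specified in the abstract): each handwritten sample is sent to the Android client over a TCP socket as a simple length-prefixed message.

import socket
import struct

def send_sample(host, port, sample_bytes):
    """Send one handwritten sample to the mobile client and wait for an ack."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack(">I", len(sample_bytes)))  # 4-byte length
        sock.sendall(sample_bytes)                          # raw sample payload
        ack = sock.recv(1)                                  # 1-byte acknowledgement
        return ack == b"\x01"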
1506.00195 | Kaisheng Yao | Baolin Peng and Kaisheng Yao | Recurrent Neural Networks with External Memory for Language
Understanding | submitted to Interspeech 2015 | null | null | null | cs.CL cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent Neural Networks (RNNs) have become increasingly popular for the
task of language understanding. In this task, a semantic tagger is deployed to
associate a semantic label to each word in an input sequence. The success of
RNNs may be attributed to their ability to memorize long-term dependences that
relate the current-time semantic label prediction to observations many
time instances away. However, the memory capacity of simple RNNs is limited
because of the gradient vanishing and exploding problem. We propose to use an
external memory to improve memorization capability of RNNs. We conducted
experiments on the ATIS dataset, and observed that the proposed model was able
to achieve the state-of-the-art results. We compare our proposed model with
alternative models and report analysis results that may provide insights for
future research.
| [
{
"version": "v1",
"created": "Sun, 31 May 2015 05:10:03 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Peng",
"Baolin",
""
],
[
"Yao",
"Kaisheng",
""
]
] | TITLE: Recurrent Neural Networks with External Memory for Language
Understanding
ABSTRACT: Recurrent Neural Networks (RNNs) have become increasingly popular for the
task of language understanding. In this task, a semantic tagger is deployed to
associate a semantic label to each word in an input sequence. The success of
RNNs may be attributed to their ability to memorize long-term dependences that
relate the current-time semantic label prediction to observations many
time instances away. However, the memory capacity of simple RNNs is limited
because of the gradient vanishing and exploding problem. We propose to use an
external memory to improve memorization capability of RNNs. We conducted
experiments on the ATIS dataset, and observed that the proposed model was able
to achieve the state-of-the-art results. We compare our proposed model with
alternative models and report analysis results that may provide insights for
future research.
| no_new_dataset | 0.944689 |
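A minimal numpy sketch of one RNN step augmented with an external memory read via content-based soft attention; the architecture details are assumptions, not the authors' exact model.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(x, h, M, Wx, Wh, Wr):
    """x: input, h: hidden state, M: external memory (slots x dim)."""
    key = Wr @ h                       # content key derived from the state
    attn = softmax(M @ key)            # similarity of the key to each slot
    read = attn @ M                    # soft read vector from memory
    return np.tanh(Wx @ x + Wh @ h + read)

rng = np.random.default_rng(1)
d, slots = 4, 6
h, M = np.zeros(d), rng.normal(size=(slots, d))
Wx, Wh, Wr = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
h = step(rng.normal(size=d), h, M, Wx, Wh, Wr)
print(h.shape)  # (4,)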
1506.00242 | Zhiwei Steven Wu | Michael Kearns, Aaron Roth, Zhiwei Steven Wu, Grigory Yaroslavtsev | Privacy for the Protected (Only) | null | null | null | null | cs.DS cs.CR cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by tensions between data privacy for individual citizens, and
societal priorities such as counterterrorism and the containment of infectious
disease, we introduce a computational model that distinguishes between parties
for whom privacy is explicitly protected, and those for whom it is not (the
targeted subpopulation). The goal is the development of algorithms that can
effectively identify and take action upon members of the targeted subpopulation
in a way that minimally compromises the privacy of the protected, while
simultaneously limiting the expense of distinguishing members of the two groups
via costly mechanisms such as surveillance, background checks, or medical
testing. Within this framework, we provide provably privacy-preserving
algorithms for targeted search in social networks. These algorithms are natural
variants of common graph search methods, and ensure privacy for the protected
by the careful injection of noise in the prioritization of potential targets.
We validate the utility of our algorithms with extensive computational
experiments on two large-scale social network datasets.
| [
{
"version": "v1",
"created": "Sun, 31 May 2015 14:47:27 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Kearns",
"Michael",
""
],
[
"Roth",
"Aaron",
""
],
[
"Wu",
"Zhiwei Steven",
""
],
[
"Yaroslavtsev",
"Grigory",
""
]
] | TITLE: Privacy for the Protected (Only)
ABSTRACT: Motivated by tensions between data privacy for individual citizens, and
societal priorities such as counterterrorism and the containment of infectious
disease, we introduce a computational model that distinguishes between parties
for whom privacy is explicitly protected, and those for whom it is not (the
targeted subpopulation). The goal is the development of algorithms that can
effectively identify and take action upon members of the targeted subpopulation
in a way that minimally compromises the privacy of the protected, while
simultaneously limiting the expense of distinguishing members of the two groups
via costly mechanisms such as surveillance, background checks, or medical
testing. Within this framework, we provide provably privacy-preserving
algorithms for targeted search in social networks. These algorithms are natural
variants of common graph search methods, and ensure privacy for the protected
by the careful injection of noise in the prioritization of potential targets.
We validate the utility of our algorithms with extensive computational
experiments on two large-scale social network datasets.
| no_new_dataset | 0.949295 |
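A hedged sketch of the noisy-prioritization idea: a best-first graph search whose priority scores are perturbed (here with Laplace noise) so that the visit order reveals less about protected individuals. The scoring function and noise scale are illustrative assumptions.

import heapq
import numpy as np

def noisy_targeted_search(neighbors, score, start, budget, scale=1.0):
    """Visit up to `budget` nodes; `score(v)` estimates how likely v is a
    target, and added noise protects the ordering of the protected."""
    rng = np.random.default_rng(0)
    seen = {start}
    frontier = [(-(score(start) + rng.laplace(0, scale)), start)]
    visited = []
    while frontier and len(visited) < budget:
        _, v = heapq.heappop(frontier)
        visited.append(v)              # v is examined (e.g. surveilled)
        for u in neighbors.get(v, []):
            if u not in seen:
                seen.add(u)
                noisy = score(u) + rng.laplace(0, scale)
                heapq.heappush(frontier, (-noisy, u))
    return visited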
1506.00278 | Licheng Yu | Licheng Yu, Eunbyung Park, Alexander C. Berg, and Tamara L. Berg | Visual Madlibs: Fill in the blank Image Generation and Question
Answering | 10 pages; 8 figures; 4 tables | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a new dataset consisting of 360,001 focused
natural language descriptions for 10,738 images. This dataset, the Visual
Madlibs dataset, is collected using automatically produced fill-in-the-blank
templates designed to gather targeted descriptions about: people and objects,
their appearances, activities, and interactions, as well as inferences about
the general scene or its broader context. We provide several analyses of the
Visual Madlibs dataset and demonstrate its applicability to two new description
generation tasks: focused description generation, and multiple-choice
question-answering for images. Experiments using joint-embedding and deep
learning methods show promising results on these tasks.
| [
{
"version": "v1",
"created": "Sun, 31 May 2015 19:39:44 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Yu",
"Licheng",
""
],
[
"Park",
"Eunbyung",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Berg",
"Tamara L.",
""
]
] | TITLE: Visual Madlibs: Fill in the blank Image Generation and Question
Answering
ABSTRACT: In this paper, we introduce a new dataset consisting of 360,001 focused
natural language descriptions for 10,738 images. This dataset, the Visual
Madlibs dataset, is collected using automatically produced fill-in-the-blank
templates designed to gather targeted descriptions about: people and objects,
their appearances, activities, and interactions, as well as inferences about
the general scene or its broader context. We provide several analyses of the
Visual Madlibs dataset and demonstrate its applicability to two new description
generation tasks: focused description generation, and multiple-choice
question-answering for images. Experiments using joint-embedding and deep
learning methods show promising results on these tasks.
| new_dataset | 0.957873 |
1506.00323 | Anastasia Podosinnikova | Anastasia Podosinnikova, Simon Setzer, and Matthias Hein | Robust PCA: Optimization of the Robust Reconstruction Error over the
Stiefel Manifold | long version of GCPR 2014 paper | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well known that Principal Component Analysis (PCA) is strongly affected
by outliers and a lot of effort has been put into robustification of PCA. In
this paper we present a new algorithm for robust PCA minimizing the trimmed
reconstruction error. By directly minimizing over the Stiefel manifold, we
avoid deflation as often used by projection pursuit methods. In distinction to
other methods for robust PCA, our method has no free parameter and is
computationally very efficient. We illustrate the performance on various
datasets including an application to background modeling and subtraction. Our
method performs better or similar to current state-of-the-art methods while
being faster.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 01:57:15 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Podosinnikova",
"Anastasia",
""
],
[
"Setzer",
"Simon",
""
],
[
"Hein",
"Matthias",
""
]
] | TITLE: Robust PCA: Optimization of the Robust Reconstruction Error over the
Stiefel Manifold
ABSTRACT: It is well known that Principal Component Analysis (PCA) is strongly affected
by outliers and a lot of effort has been put into robustification of PCA. In
this paper we present a new algorithm for robust PCA minimizing the trimmed
reconstruction error. By directly minimizing over the Stiefel manifold, we
avoid deflation as often used by projection pursuit methods. In distinction to
other methods for robust PCA, our method has no free parameter and is
computationally very efficient. We illustrate the performance on various
datasets including an application to background modeling and subtraction. Our
method performs better or similar to current state-of-the-art methods while
being faster.
| no_new_dataset | 0.951594 |
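A sketch of the trimmed reconstruction error objective: for an orthonormal basis U (a point on the Stiefel manifold), only the t smallest per-sample reconstruction errors are summed, so outliers cannot dominate. The optimization over U itself is omitted here.

import numpy as np

def trimmed_reconstruction_error(X, U, t):
    """X: n x d data (assumed centered); U: d x k with U.T @ U = I."""
    residual = X - (X @ U) @ U.T            # projection residuals
    errs = np.sum(residual**2, axis=1)      # per-sample squared error
    return np.sort(errs)[:t].sum()          # keep the t best-fit samples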
1506.00327 | Zhiguang Wang | Zhiguang Wang and Tim Oates | Imaging Time-Series to Improve Classification and Imputation | Accepted by IJCAI-2015 ML track | null | null | null | cs.LG cs.NE stat.ML | http://creativecommons.org/licenses/by/3.0/ | Inspired by recent successes of deep learning in computer vision, we propose
a novel framework for encoding time series as different types of images,
namely, Gramian Angular Summation/Difference Fields (GASF/GADF) and Markov
Transition Fields (MTF). This enables the use of techniques from computer
vision for time series classification and imputation. We used Tiled
Convolutional Neural Networks (tiled CNNs) on 20 standard datasets to learn
high-level features from the individual and compound GASF-GADF-MTF images. Our
approaches achieve highly competitive results when compared to nine of the
current best time series classification approaches. Inspired by the bijection
property of GASF on 0/1 rescaled data, we train Denoised Auto-encoders (DA) on
the GASF images of four standard and one synthesized compound dataset. The
imputation MSE on test data is reduced by 12.18%-48.02% when compared to using
the raw data. An analysis of the features and weights learned via tiled CNNs
and DAs explains why the approaches work.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 02:17:06 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Wang",
"Zhiguang",
""
],
[
"Oates",
"Tim",
""
]
] | TITLE: Imaging Time-Series to Improve Classification and Imputation
ABSTRACT: Inspired by recent successes of deep learning in computer vision, we propose
a novel framework for encoding time series as different types of images,
namely, Gramian Angular Summation/Difference Fields (GASF/GADF) and Markov
Transition Fields (MTF). This enables the use of techniques from computer
vision for time series classification and imputation. We used Tiled
Convolutional Neural Networks (tiled CNNs) on 20 standard datasets to learn
high-level features from the individual and compound GASF-GADF-MTF images. Our
approaches achieve highly competitive results when compared to nine of the
current best time series classification approaches. Inspired by the bijection
property of GASF on 0/1 rescaled data, we train Denoised Auto-encoders (DA) on
the GASF images of four standard and one synthesized compound dataset. The
imputation MSE on test data is reduced by 12.18%-48.02% when compared to using
the raw data. An analysis of the features and weights learned via tiled CNNs
and DAs explains why the approaches work.
| no_new_dataset | 0.940626 |
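A minimal sketch of the GASF encoding described above: rescale the series to [-1, 1], map values to angles phi = arccos(x), and form G[i, j] = cos(phi_i + phi_j). Assumes a non-constant input series.

import numpy as np

def gasf(series):
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1    # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))             # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])         # Gramian angular field

img = gasf(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(img.shape)  # (64, 64), ready to feed to a CNN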
1506.00527 | Gianluigi Ciocca | Simone Bianco, Gianluigi Ciocca | User Preferences Modeling and Learning for Pleasing Photo Collage
Generation | To be published in ACM Transactions on Multimedia Computing,
Communications, and Applications (TOMM) | null | null | null | cs.MM cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider how to automatically create pleasing photo collages
created by placing a set of images on a limited canvas area. The task is
formulated as an optimization problem. Differently from existing
state-of-the-art approaches, we here exploit subjective experiments to model
and learn pleasantness from user preferences. To this end, we design an
experimental framework for the identification of the criteria that need to be
taken into account to generate a pleasing photo collage. Five different
thematic photo datasets are used to create collages using state-of-the-art
criteria. A first subjective experiment where several subjects evaluated the
collages, emphasizes that different criteria are involved in the subjective
definition of pleasantness. We then identify new global and local criteria and
design algorithms to quantify them. The relative importance of these criteria
are automatically learned by exploiting the user preferences, and new collages
are generated. To validate our framework, we performed several psycho-visual
experiments involving different users. The results show that the proposed
framework allows to learn a novel computational model which effectively encodes
an inter-user definition of pleasantness. The learned definition of
pleasantness generalizes well to new photo datasets of different themes and
sizes not used in the learning. Moreover, compared with two state of the art
approaches, the collages created using our framework are preferred by the
majority of the users.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 15:20:29 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Bianco",
"Simone",
""
],
[
"Ciocca",
"Gianluigi",
""
]
] | TITLE: User Preferences Modeling and Learning for Pleasing Photo Collage
Generation
ABSTRACT: In this paper we consider how to automatically create pleasing photo collages
by placing a set of images on a limited canvas area. The task is
formulated as an optimization problem. Differently from existing
state-of-the-art approaches, we here exploit subjective experiments to model
and learn pleasantness from user preferences. To this end, we design an
experimental framework for the identification of the criteria that need to be
taken into account to generate a pleasing photo collage. Five different
thematic photo datasets are used to create collages using state-of-the-art
criteria. A first subjective experiment, in which several subjects evaluated the
collages, emphasizes that different criteria are involved in the subjective
definition of pleasantness. We then identify new global and local criteria and
design algorithms to quantify them. The relative importance of these criteria
are automatically learned by exploiting the user preferences, and new collages
are generated. To validate our framework, we performed several psycho-visual
experiments involving different users. The results show that the proposed
framework allows to learn a novel computational model which effectively encodes
an inter-user definition of pleasantness. The learned definition of
pleasantness generalizes well to new photo datasets of different themes and
sizes not used in the learning. Moreover, compared with two state of the art
approaches, the collages created using our framework are preferred by the
majority of the users.
| no_new_dataset | 0.942029 |
1506.00528 | Liangliang Cao | Chang Wang, Liangliang Cao, Bowen Zhou | Medical Synonym Extraction with Concept Space Models | 7 pages, to appear in IJCAI 2015 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel approach for medical synonym extraction. We
aim to integrate the term embedding with the medical domain knowledge for
healthcare applications. One advantage of our method is that it is very
scalable. Experiments on a dataset with more than 1M term pairs show that the
proposed approach outperforms the baseline approaches by a large margin.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 15:21:00 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"Wang",
"Chang",
""
],
[
"Cao",
"Liangliang",
""
],
[
"Zhou",
"Bowen",
""
]
] | TITLE: Medical Synonym Extraction with Concept Space Models
ABSTRACT: In this paper, we present a novel approach for medical synonym extraction. We
aim to integrate the term embedding with the medical domain knowledge for
healthcare applications. One advantage of our method is that it is very
scalable. Experiments on a dataset with more than 1M term pairs show that the
proposed approach outperforms the baseline approaches by a large margin.
| no_new_dataset | 0.940898 |
1506.00619 | Bart van Merri\"enboer | Bart van Merri\"enboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy
Serdyuk, David Warde-Farley, Jan Chorowski, Yoshua Bengio | Blocks and Fuel: Frameworks for deep learning | null | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce two Python frameworks to train neural networks on large
datasets: Blocks and Fuel. Blocks is based on Theano, a linear algebra compiler
with CUDA-support. It facilitates the training of complex neural network models
by providing parametrized Theano operations, attaching metadata to Theano's
symbolic computational graph, and providing an extensive set of utilities to
assist training the networks, e.g. training algorithms, logging, monitoring,
visualization, and serialization. Fuel provides a standard format for machine
learning datasets. It allows the user to easily iterate over large datasets,
performing many types of pre-processing on the fly.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 19:28:27 GMT"
}
] | 2015-06-02T00:00:00 | [
[
"van Merriënboer",
"Bart",
""
],
[
"Bahdanau",
"Dzmitry",
""
],
[
"Dumoulin",
"Vincent",
""
],
[
"Serdyuk",
"Dmitriy",
""
],
[
"Warde-Farley",
"David",
""
],
[
"Chorowski",
"Jan",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Blocks and Fuel: Frameworks for deep learning
ABSTRACT: We introduce two Python frameworks to train neural networks on large
datasets: Blocks and Fuel. Blocks is based on Theano, a linear algebra compiler
with CUDA-support. It facilitates the training of complex neural network models
by providing parametrized Theano operations, attaching metadata to Theano's
symbolic computational graph, and providing an extensive set of utilities to
assist training the networks, e.g. training algorithms, logging, monitoring,
visualization, and serialization. Fuel provides a standard format for machine
learning datasets. It allows the user to easily iterate over large datasets,
performing many types of pre-processing on the fly.
| no_new_dataset | 0.94256 |
1501.04870 | Luca Martino | J. Read, L. Martino, P. Olmos, D. Luengo | Scalable Multi-Output Label Prediction: From Classifier Chains to
Classifier Trellises | (accepted in Pattern Recognition) | Pattern Recognition, Volume 48, Issue 6, 2015, Pages 2096-2109 | 10.1016/j.patcog.2015.01.004 | null | stat.ML cs.CV cs.DS cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-output inference tasks, such as multi-label classification, have become
increasingly important in recent years. A popular method for multi-label
classification is classifier chains, in which the predictions of individual
classifiers are cascaded along a chain, thus taking into account inter-label
dependencies and improving the overall performance. Several varieties of
classifier chain methods have been introduced, and many of them perform very
competitively across a wide range of benchmark datasets. However, scalability
limitations become apparent on larger datasets when modeling a fully-cascaded
chain. In particular, the methods' strategies for discovering and modeling a
good chain structure constitutes a mayor computational bottleneck. In this
paper, we present the classifier trellis (CT) method for scalable multi-label
classification. We compare CT with several recently proposed classifier chain
methods to show that it occupies an important niche: it is highly competitive
on standard multi-label problems, yet it can also scale up to thousands or even
tens of thousands of labels.
| [
{
"version": "v1",
"created": "Tue, 20 Jan 2015 16:33:40 GMT"
}
] | 2015-06-01T00:00:00 | [
[
"Read",
"J.",
""
],
[
"Martino",
"L.",
""
],
[
"Olmos",
"P.",
""
],
[
"Luengo",
"D.",
""
]
] | TITLE: Scalable Multi-Output Label Prediction: From Classifier Chains to
Classifier Trellises
ABSTRACT: Multi-output inference tasks, such as multi-label classification, have become
increasingly important in recent years. A popular method for multi-label
classification is classifier chains, in which the predictions of individual
classifiers are cascaded along a chain, thus taking into account inter-label
dependencies and improving the overall performance. Several varieties of
classifier chain methods have been introduced, and many of them perform very
competitively across a wide range of benchmark datasets. However, scalability
limitations become apparent on larger datasets when modeling a fully-cascaded
chain. In particular, the methods' strategies for discovering and modeling a
good chain structure constitute a major computational bottleneck. In this
paper, we present the classifier trellis (CT) method for scalable multi-label
classification. We compare CT with several recently proposed classifier chain
methods to show that it occupies an important niche: it is highly competitive
on standard multi-label problems, yet it can also scale up to thousands or even
tens of thousands of labels.
| no_new_dataset | 0.946101 |
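For context, a compact sketch of the fully-cascaded classifier chain that the trellis variant sparsifies: each label's classifier sees the original features plus the labels already placed on the chain (true labels at training time, predictions at test time). The base learner is an assumption.

import numpy as np
from sklearn.linear_model import LogisticRegression

class ClassifierChain:
    def fit(self, X, Y):
        """X: n x d features; Y: n x L binary label matrix."""
        self.models, Z = [], X
        for j in range(Y.shape[1]):
            m = LogisticRegression(max_iter=1000).fit(Z, Y[:, j])
            self.models.append(m)
            Z = np.hstack([Z, Y[:, [j]]])      # feed true label downstream
        return self

    def predict(self, X):
        Z, preds = X, []
        for m in self.models:
            p = m.predict(Z)
            preds.append(p)
            Z = np.hstack([Z, p[:, None]])     # cascade predicted labels
        return np.column_stack(preds)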
1505.07922 | Junshi Huang | Junshi Huang, Rogerio S. Feris, Qiang Chen, Shuicheng Yan | Cross-domain Image Retrieval with a Dual Attribute-aware Ranking Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of cross-domain image retrieval, considering the
following practical application: given a user photo depicting a clothing image,
our goal is to retrieve the same or attribute-similar clothing items from
online shopping stores. This is a challenging problem due to the large
discrepancy between online shopping images, usually taken in ideal
lighting/pose/background conditions, and user photos captured in uncontrolled
conditions. To address this problem, we propose a Dual Attribute-aware Ranking
Network (DARN) for retrieval feature learning. More specifically, DARN consists
of two sub-networks, one for each domain, whose retrieval feature
representations are driven by semantic attribute learning. We show that this
attribute-guided learning is a key factor for retrieval accuracy improvement.
In addition, to further align with the nature of the retrieval problem, we
impose a triplet visual similarity constraint for learning to rank across the
two sub-networks. Another contribution of our work is a large-scale dataset
which makes the network learning feasible. We exploit customer review websites
to crawl a large set of online shopping images and corresponding offline user
photos with fine-grained clothing attributes, i.e., around 450,000 online
shopping images and about 90,000 exact offline counterpart images of those
online ones. All these images are collected from real-world consumer websites
reflecting the diversity of the data modality, which makes this dataset unique
and rare in the academic community. We extensively evaluate the retrieval
performance of networks in different configurations. The top-20 retrieval
accuracy is doubled when using the proposed DARN other than the current popular
solution using pre-trained CNN features only (0.570 vs. 0.268).
| [
{
"version": "v1",
"created": "Fri, 29 May 2015 04:46:37 GMT"
}
] | 2015-06-01T00:00:00 | [
[
"Huang",
"Junshi",
""
],
[
"Feris",
"Rogerio S.",
""
],
[
"Chen",
"Qiang",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Cross-domain Image Retrieval with a Dual Attribute-aware Ranking Network
ABSTRACT: We address the problem of cross-domain image retrieval, considering the
following practical application: given a user photo depicting a clothing image,
our goal is to retrieve the same or attribute-similar clothing items from
online shopping stores. This is a challenging problem due to the large
discrepancy between online shopping images, usually taken in ideal
lighting/pose/background conditions, and user photos captured in uncontrolled
conditions. To address this problem, we propose a Dual Attribute-aware Ranking
Network (DARN) for retrieval feature learning. More specifically, DARN consists
of two sub-networks, one for each domain, whose retrieval feature
representations are driven by semantic attribute learning. We show that this
attribute-guided learning is a key factor for retrieval accuracy improvement.
In addition, to further align with the nature of the retrieval problem, we
impose a triplet visual similarity constraint for learning to rank across the
two sub-networks. Another contribution of our work is a large-scale dataset
which makes the network learning feasible. We exploit customer review websites
to crawl a large set of online shopping images and corresponding offline user
photos with fine-grained clothing attributes, i.e., around 450,000 online
shopping images and about 90,000 exact offline counterpart images of those
online ones. All these images are collected from real-world consumer websites
reflecting the diversity of the data modality, which makes this dataset unique
and rare in the academic community. We extensively evaluate the retrieval
performance of networks in different configurations. The top-20 retrieval
accuracy is doubled when using the proposed DARN instead of the currently
popular solution of using pre-trained CNN features only (0.570 vs. 0.268).
| no_new_dataset | 0.93784 |
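A sketch of the triplet visual similarity constraint mentioned above: an anchor (user photo) embedding should lie closer to its matching shop-image embedding than to a non-matching one by a margin. Written in plain numpy; the margin value is an assumption.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Rows are embeddings; returns the mean hinge over all triplets."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # matched distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # mismatched distance
    return np.maximum(0.0, d_pos - d_neg + margin).mean()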
1505.07930 | Tam Nguyen | Tam V. Nguyen, Jose Sepulveda | Salient Object Detection via Augmented Hypotheses | IJCAI 2015 paper | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | In this paper, we propose using \textit{augmented hypotheses} which consider
objectness, foreground and compactness for salient object detection. Our
algorithm consists of four basic steps. First, our method generates the
objectness map via objectness hypotheses. Based on the objectness map, we
estimate the foreground margin and compute the corresponding foreground map
which prefers the foreground objects. From the objectness map and the
foreground map, the compactness map is formed to favor the compact objects. We
then derive a saliency measure that produces a pixel-accurate saliency map
which uniformly covers the objects of interest and consistently separates fore-
and background. We finally evaluate the proposed framework on two challenging
datasets, MSRA-1000 and iCoSeg. Our extensive experimental results show that
our method outperforms state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Fri, 29 May 2015 06:03:57 GMT"
}
] | 2015-06-01T00:00:00 | [
[
"Nguyen",
"Tam V.",
""
],
[
"Sepulveda",
"Jose",
""
]
] | TITLE: Salient Object Detection via Augmented Hypotheses
ABSTRACT: In this paper, we propose using \textit{augmented hypotheses} which consider
objectness, foreground and compactness for salient object detection. Our
algorithm consists of four basic steps. First, our method generates the
objectness map via objectness hypotheses. Based on the objectness map, we
estimate the foreground margin and compute the corresponding foreground map
which prefers the foreground objects. From the objectness map and the
foreground map, the compactness map is formed to favor the compact objects. We
then derive a saliency measure that produces a pixel-accurate saliency map
which uniformly covers the objects of interest and consistently separates fore-
and background. We finally evaluate the proposed framework on two challenging
datasets, MSRA-1000 and iCoSeg. Our extensive experimental results show that
our method outperforms state-of-the-art approaches.
| no_new_dataset | 0.951997 |
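A hedged sketch of the final fusion step; the exact combination rule is an assumption (a pointwise product of the three cue maps is one common choice).

import numpy as np

def saliency(objectness, foreground, compactness):
    """Fuse the three pixel-wise cue maps into a normalized saliency map."""
    s = objectness * foreground * compactness           # favor agreeing cues
    return (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]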
1505.07931 | Xuefeng Yang | Xuefeng Yang, Kezhi Mao | Supervised Fine Tuning for Word Embedding with Integrated Knowledge | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning vector representation for words is an important research field which
may benefit many natural language processing tasks. Two limitations exist in
nearly all available models, which are the bias caused by the context
definition and the lack of knowledge utilization. They are difficult to tackle
because these algorithms are essentially unsupervised learning approaches.
Inspired by deep learning, the authors propose a supervised framework for
learning vector representation of words to provide additional supervised fine
tuning after unsupervised learning. The framework is a knowledge-rich approach
and is compatible with any numerical word vector representation. The authors
perform both intrinsic evaluations, such as attributional and relational
similarity prediction, and extrinsic evaluations, such as sentence completion
and sentiment analysis. Experimental results on 6 embeddings and 4 tasks with
10 datasets show
that the proposed fine tuning framework may significantly improve the quality
of the vector representation of words.
| [
{
"version": "v1",
"created": "Fri, 29 May 2015 06:11:00 GMT"
}
] | 2015-06-01T00:00:00 | [
[
"Yang",
"Xuefeng",
""
],
[
"Mao",
"Kezhi",
""
]
] | TITLE: Supervised Fine Tuning for Word Embedding with Integrated Knowledge
ABSTRACT: Learning vector representation for words is an important research field which
may benefit many natural language processing tasks. Two limitations exist in
nearly all available models, which are the bias caused by the context
definition and the lack of knowledge utilization. They are difficult to tackle
because these algorithms are essentially unsupervised learning approaches.
Inspired by deep learning, the authors propose a supervised framework for
learning vector representation of words to provide additional supervised fine
tuning after unsupervised learning. The framework is a knowledge-rich approach
and compatible with any numerical word vector representation. The authors
perform both intrinsic evaluations like attributional and relational similarity
prediction and extrinsic evaluations like sentence completion and sentiment
analysis. Experimental results on 6 embeddings and 4 tasks with 10 datasets show
that the proposed fine tuning framework may significantly improve the quality
of the vector representation of words.
| no_new_dataset | 0.948822 |
1505.07987 | Thomas Gransden | Thomas Gransden and Neil Walkinshaw and Rajeev Raman | SEPIA: Search for Proofs Using Inferred Automata | To appear at 25th International Conference on Automated Deduction | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes SEPIA, a tool for automated proof generation in Coq.
SEPIA combines model inference with interactive theorem proving. Existing proof
corpora are modelled using state-based models inferred from tactic sequences.
These can then be traversed automatically to identify proofs. The SEPIA system
is described and its performance evaluated on three Coq datasets. Our results
show that SEPIA provides a useful complement to existing automated tactics in
Coq.
| [
{
"version": "v1",
"created": "Fri, 29 May 2015 10:39:44 GMT"
}
] | 2015-06-01T00:00:00 | [
[
"Gransden",
"Thomas",
""
],
[
"Walkinshaw",
"Neil",
""
],
[
"Raman",
"Rajeev",
""
]
] | TITLE: SEPIA: Search for Proofs Using Inferred Automata
ABSTRACT: This paper describes SEPIA, a tool for automated proof generation in Coq.
SEPIA combines model inference with interactive theorem proving. Existing proof
corpora are modelled using state-based models inferred from tactic sequences.
These can then be traversed automatically to identify proofs. The SEPIA system
is described and its performance evaluated on three Coq datasets. Our results
show that SEPIA provides a useful complement to existing automated tactics in
Coq.
| no_new_dataset | 0.94474 |
1110.0140 | Jonathan Lilly | Jonathan M. Lilly, Richard K. Scott, and Sofia C. Olhede | Extracting waves and vortices from Lagrangian trajectories | null | null | 10.1029/2011GL049727 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method for extracting time-varying oscillatory motions from time series
records is applied to Lagrangian trajectories from a numerical model of eddies
generated by an unstable equivalent barotropic jet on a beta plane. An
oscillation in a Lagrangian trajectory is represented mathematically as the
signal traced out as a particle orbits a time-varying ellipse, a model which
captures wavelike motions as well as the displacement signal of a particle
trapped in an evolving vortex. Such oscillatory features can be separated from
the turbulent background flow through an analysis founded upon a complex-valued
wavelet transform of the trajectory. Application of the method to a set of one
hundred modeled trajectories shows that the oscillatory motions of Lagrangian
particles orbiting vortex cores appear to be extracted very well by the method,
which depends upon only a handful of free parameters and which requires no
operator intervention. Furthermore, vortex motions are clearly distinguished
from wavelike meandering of the jet---the former are high frequency, nearly
circular signals, while the latter are linear in polarization and at much lower
frequencies. This suggests that the proposed method can be useful for
identifying and studying vortex and wave properties in large Lagrangian
datasets. In particular, the eccentricity of the oscillatory displacement
signals, a quantity which is not normally considered in Lagrangian studies,
emerges as an informative diagnostic for characterizing qualitatively different
types of motion.
| [
{
"version": "v1",
"created": "Sat, 1 Oct 2011 23:54:56 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Oct 2011 22:22:01 GMT"
}
] | 2015-05-30T00:00:00 | [
[
"Lilly",
"Jonathan M.",
""
],
[
"Scott",
"Richard K.",
""
],
[
"Olhede",
"Sofia C.",
""
]
] | TITLE: Extracting waves and vortices from Lagrangian trajectories
ABSTRACT: A method for extracting time-varying oscillatory motions from time series
records is applied to Lagrangian trajectories from a numerical model of eddies
generated by an unstable equivalent barotropic jet on a beta plane. An
oscillation in a Lagrangian trajectory is represented mathematically as the
signal traced out as a particle orbits a time-varying ellipse, a model which
captures wavelike motions as well as the displacement signal of a particle
trapped in an evolving vortex. Such oscillatory features can be separated from
the turbulent background flow through an analysis founded upon a complex-valued
wavelet transform of the trajectory. Application of the method to a set of one
hundred modeled trajectories shows that the oscillatory motions of Lagrangian
particles orbiting vortex cores appear to be extracted very well by the method,
which depends upon only a handful of free parameters and which requires no
operator intervention. Furthermore, vortex motions are clearly distinguished
from wavelike meandering of the jet---the former are high frequency, nearly
circular signals, while the latter are linear in polarization and at much lower
frequencies. This suggests that the proposed method can be useful for
identifying and studying vortex and wave properties in large Lagrangian
datasets. In particular, the eccentricity of the oscillatory displacement
signals, a quantity which is not normally considered in Lagrangian studies,
emerges as an informative diagnostic for characterizing qualitatively different
types of motion.
| no_new_dataset | 0.95275 |
1110.3649 | Yaron Lipman | D. Boyer and Y. Lipman and E. St. Clair and J. Puente and T.
Funkhouser and B. Patel and J. Jernvall and I. Daubechies | Algorithms to automatically quantify the geometric similarity of
anatomical surfaces | Changes with respect to v1, v2: an Erratum was added, correcting the
references for one of the three datasets. Note that the datasets and code for
this paper can be obtained from the Data Conservancy (see Download column on
v1, v2) | PNAS 2011 108 (45) 18221-18226 | 10.1073/pnas.1112822108 | null | math.NA cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe new approaches for distances between pairs of 2-dimensional
surfaces (embedded in 3-dimensional space) that use local structures and global
information contained in inter-structure geometric relationships. We present
algorithms to automatically determine these distances as well as geometric
correspondences. This is motivated by the aspiration of students of natural
science to understand the continuity of form that unites the diversity of life.
At present, scientists using physical traits to study evolutionary
relationships among living and extinct animals analyze data extracted from
carefully defined anatomical correspondence points (landmarks). Identifying and
recording these landmarks is time consuming and can be done accurately only by
trained morphologists. This renders these studies inaccessible to
non-morphologists, and causes phenomics to lag behind genomics in elucidating
evolutionary patterns. Unlike other algorithms presented for morphological
correspondences, our approach does not require any preliminary marking of
special features or landmarks by the user. It also differs from other seminal
work in computational geometry in that our algorithms are polynomial in nature
and thus faster, making pairwise comparisons feasible for significantly larger
numbers of digitized surfaces. We illustrate our approach using three datasets
representing teeth and different bones of primates and humans, and show that it
leads to highly accurate results.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2011 12:23:30 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Oct 2011 09:16:12 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Mar 2012 13:36:16 GMT"
}
] | 2015-05-30T00:00:00 | [
[
"Boyer",
"D.",
""
],
[
"Lipman",
"Y.",
""
],
[
"Clair",
"E. St.",
""
],
[
"Puente",
"J.",
""
],
[
"Funkhouser",
"T.",
""
],
[
"Patel",
"B.",
""
],
[
"Jernvall",
"J.",
""
],
[
"Daubechies",
"I.",
""
]
] | TITLE: Algorithms to automatically quantify the geometric similarity of
anatomical surfaces
ABSTRACT: We describe new approaches for distances between pairs of 2-dimensional
surfaces (embedded in 3-dimensional space) that use local structures and global
information contained in inter-structure geometric relationships. We present
algorithms to automatically determine these distances as well as geometric
correspondences. This is motivated by the aspiration of students of natural
science to understand the continuity of form that unites the diversity of life.
At present, scientists using physical traits to study evolutionary
relationships among living and extinct animals analyze data extracted from
carefully defined anatomical correspondence points (landmarks). Identifying and
recording these landmarks is time consuming and can be done accurately only by
trained morphologists. This renders these studies inaccessible to
non-morphologists, and causes phenomics to lag behind genomics in elucidating
evolutionary patterns. Unlike other algorithms presented for morphological
correspondences, our approach does not require any preliminary marking of
special features or landmarks by the user. It also differs from other seminal
work in computational geometry in that our algorithms are polynomial in nature
and thus faster, making pairwise comparisons feasible for significantly larger
numbers of digitized surfaces. We illustrate our approach using three datasets
representing teeth and different bones of primates and humans, and show that it
leads to highly accurate results.
| no_new_dataset | 0.950824 |
1110.4784 | Matthieu Cristelli | Ilaria Bordino, Stefano Battiston, Guido Caldarelli, Matthieu
Cristelli, Antti Ukkonen, Ingmar Weber | Web search queries can predict stock market volumes | 29 pages, 11 figures, 11 tables + Supporting Information | null | 10.1371/journal.pone.0040014 | null | q-fin.ST cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We live in a computerized and networked society where many of our actions
leave a digital trace and affect other people's actions. This has led to the
emergence of a new data-driven research field: mathematical methods of computer
science, statistical physics and sociometry provide insights on a wide range of
disciplines ranging from social science to human mobility. A recent important
discovery is that query volumes (i.e., the number of requests submitted by
users to search engines on the www) can be used to track and, in some cases, to
anticipate the dynamics of social phenomena. Successful examples include
unemployment levels, car and home sales, and epidemic spreading. A few recent
works have applied this approach to stock prices and market sentiment. However, it
remains unclear if trends in financial markets can be anticipated by the
collective wisdom of on-line users on the web. Here we show that trading
volumes of stocks traded in NASDAQ-100 are correlated with the volumes of
queries related to the same stocks. In particular, query volumes anticipate in
many cases peaks of trading by one day or more. Our analysis is carried out on
a unique dataset of queries, submitted to an important web search engine, which
enables us to also investigate user behavior. We show that the query volume
dynamics emerges from the collective but seemingly uncoordinated activity of
many users. These findings contribute to the debate on the identification of
early warnings of financial systemic risk, based on the activity of users of
the www.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2011 13:15:59 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Mar 2012 14:07:49 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jun 2012 15:42:35 GMT"
}
] | 2015-05-30T00:00:00 | [
[
"Bordino",
"Ilaria",
""
],
[
"Battiston",
"Stefano",
""
],
[
"Caldarelli",
"Guido",
""
],
[
"Cristelli",
"Matthieu",
""
],
[
"Ukkonen",
"Antti",
""
],
[
"Weber",
"Ingmar",
""
]
] | TITLE: Web search queries can predict stock market volumes
ABSTRACT: We live in a computerized and networked society where many of our actions
leave a digital trace and affect other people's actions. This has led to the
emergence of a new data-driven research field: mathematical methods of computer
science, statistical physics and sociometry provide insights on a wide range of
disciplines ranging from social science to human mobility. A recent important
discovery is that query volumes (i.e., the number of requests submitted by
users to search engines on the www) can be used to track and, in some cases, to
anticipate the dynamics of social phenomena. Successful examples include
unemployment levels, car and home sales, and epidemic spreading. A few recent
works have applied this approach to stock prices and market sentiment. However, it
remains unclear if trends in financial markets can be anticipated by the
collective wisdom of on-line users on the web. Here we show that trading
volumes of stocks traded in NASDAQ-100 are correlated with the volumes of
queries related to the same stocks. In particular, query volumes anticipate in
many cases peaks of trading by one day or more. Our analysis is carried out on
a unique dataset of queries, submitted to an important web search engine, which
enables us to also investigate user behavior. We show that the query volume
dynamics emerges from the collective but seemingly uncoordinated activity of
many users. These findings contribute to the debate on the identification of
early warnings of financial systemic risk, based on the activity of users of
the www.
| new_dataset | 0.789071 |
1404.4888 | Isadora Nun Ms | Isadora Nun, Karim Pichara, Pavlos Protopapas, Dae-Won Kim | Supervised detection of anomalous light-curves in massive astronomical
catalogs | 16 pages, 18 figures, published in The Astrophysical Journal | 2014, ApJ, 793, 23 | 10.1088/0004-637X/793/1/23 | null | cs.CE astro-ph.IM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of synoptic sky surveys has led to a massive amount of data
for which resources needed for analysis are beyond human capabilities. To
process this information and to extract all possible knowledge, machine
learning techniques become necessary. Here we present a new method to
automatically discover unknown variable objects in large astronomical catalogs.
With the aim of taking full advantage of all the information we have about
known objects, our method is based on a supervised algorithm. In particular, we
train a random forest classifier using known variability classes of objects and
obtain votes for each of the objects in the training set. We then model this
voting distribution with a Bayesian network and obtain the joint voting
distribution among the training objects. Consequently, an unknown object is
considered an outlier insofar as it has a low joint probability. Our method is
suitable for exploring massive datasets given that the training process is
performed offline. We tested our algorithm on 20 million light-curves from the
MACHO catalog and generated a list of anomalous candidates. We divided the
candidates into two main classes of outliers: artifacts and intrinsic outliers.
Artifacts were principally due to air mass variation, seasonal variation, bad
calibration or instrumental errors and were consequently removed from our
outlier list and added to the training set. After retraining, we selected about
4000 objects, which we passed to a post-analysis stage by performing a
cross-match with all publicly available catalogs. Within these candidates we
identified certain known but rare objects such as eclipsing Cepheids, blue
variables, cataclysmic variables and X-ray sources. For some outliers there
was no additional information. Among them we identified three unknown
variability types and a few individual outliers that will be followed up for a
deeper analysis.
| [
{
"version": "v1",
"created": "Fri, 18 Apr 2014 21:12:13 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Sep 2014 15:50:49 GMT"
},
{
"version": "v3",
"created": "Wed, 27 May 2015 21:27:11 GMT"
}
] | 2015-05-29T00:00:00 | [
[
"Nun",
"Isadora",
""
],
[
"Pichara",
"Karim",
""
],
[
"Protopapas",
"Pavlos",
""
],
[
"Kim",
"Dae-Won",
""
]
] | TITLE: Supervised detection of anomalous light-curves in massive astronomical
catalogs
ABSTRACT: The development of synoptic sky surveys has led to a massive amount of data
for which resources needed for analysis are beyond human capabilities. To
process this information and to extract all possible knowledge, machine
learning techniques become necessary. Here we present a new method to
automatically discover unknown variable objects in large astronomical catalogs.
With the aim of taking full advantage of all the information we have about
known objects, our method is based on a supervised algorithm. In particular, we
train a random forest classifier using known variability classes of objects and
obtain votes for each of the objects in the training set. We then model this
voting distribution with a Bayesian network and obtain the joint voting
distribution among the training objects. Consequently, an unknown object is
considered an outlier insofar as it has a low joint probability. Our method is
suitable for exploring massive datasets given that the training process is
performed offline. We tested our algorithm on 20 million light-curves from the
MACHO catalog and generated a list of anomalous candidates. We divided the
candidates into two main classes of outliers: artifacts and intrinsic outliers.
Artifacts were principally due to air mass variation, seasonal variation, bad
calibration or instrumental errors and were consequently removed from our
outlier list and added to the training set. After retraining, we selected about
4000 objects, which we passed to a post-analysis stage by performing a
cross-match with all publicly available catalogs. Within these candidates we
identified certain known but rare objects such as eclipsing Cepheids, blue
variables, cataclysmic variables and X-ray sources. For some outliers there
was no additional information. Among them we identified three unknown
variability types and a few individual outliers that will be followed up for a
deeper analysis.
| no_new_dataset | 0.942981 |
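To make the vote-based screening step above concrete, here is a minimal sketch assuming scikit-learn and synthetic features; the paper additionally models the joint vote distribution with a Bayesian network, which this sketch replaces with a simple low-confidence score.

```python
# Simplified vote-based outlier screening with a random forest.
# The paper models the joint vote distribution with a Bayesian network;
# here we only flag objects whose vote distribution is low-confidence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))    # hypothetical light-curve features
y_train = rng.integers(0, 5, size=500)  # known variability classes
X_new = rng.normal(size=(100, 10))      # unlabeled objects to screen

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

votes = forest.predict_proba(X_new)      # per-class vote fractions
outlier_score = 1.0 - votes.max(axis=1)  # flat votes => outlier candidate
candidates = np.argsort(outlier_score)[::-1][:10]
print(candidates)
```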
1410.3560 | Ryan Rossi | Ryan A. Rossi and Nesreen K. Ahmed | NetworkRepository: An Interactive Data Repository with Multi-scale
Visual Analytics | AAAI 2015 DT | null | null | null | cs.DL cs.HC cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network Repository (NR) is the first interactive data repository with a
web-based platform for visual interactive analytics. Unlike other data
repositories (e.g., UCI ML Data Repository, and SNAP), the network data
repository (networkrepository.com) allows users to not only download, but to
interactively analyze and visualize such data using our web-based interactive
graph analytics platform. Users can in real-time analyze, visualize, compare,
and explore data along many different dimensions. The aim of NR is to make it
easy to discover key insights into the data extremely fast with little effort
while also providing a medium for users to share data, visualizations, and
insights. Other key factors that differentiate NR from the current data
repositories are the number of graph datasets, their size, and variety. While
other data repositories are static, they also lack a means for users to
collaboratively discuss a particular dataset, corrections, or challenges with
using the data for certain applications. In contrast, we have incorporated many
social and collaborative aspects into NR in hopes of further facilitating
scientific research (e.g., users can discuss each graph, post observations,
visualizations, etc.).
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 03:35:37 GMT"
},
{
"version": "v2",
"created": "Thu, 28 May 2015 19:58:23 GMT"
}
] | 2015-05-29T00:00:00 | [
[
"Rossi",
"Ryan A.",
""
],
[
"Ahmed",
"Nesreen K.",
""
]
] | TITLE: NetworkRepository: An Interactive Data Repository with Multi-scale
Visual Analytics
ABSTRACT: Network Repository (NR) is the first interactive data repository with a
web-based platform for visual interactive analytics. Unlike other data
repositories (e.g., UCI ML Data Repository, and SNAP), the network data
repository (networkrepository.com) allows users to not only download, but to
interactively analyze and visualize such data using our web-based interactive
graph analytics platform. Users can in real-time analyze, visualize, compare,
and explore data along many different dimensions. The aim of NR is to make it
easy to discover key insights into the data extremely fast with little effort
while also providing a medium for users to share data, visualizations, and
insights. Other key factors that differentiate NR from the current data
repositories are the number of graph datasets, their size, and variety. While
other data repositories are static, they also lack a means for users to
collaboratively discuss a particular dataset, corrections, or challenges with
using the data for certain applications. In contrast, we have incorporated many
social and collaborative aspects into NR in hopes of further facilitating
scientific research (e.g., users can discuss each graph, post observations,
visualizations, etc.).
| no_new_dataset | 0.948394 |
1504.06662 | Arvind Neelakantan | Arvind Neelakantan, Benjamin Roth and Andrew McCallum | Compositional Vector Space Models for Knowledge Base Completion | The 53rd Annual Meeting of the Association for Computational
Linguistics and The 7th International Joint Conference of the Asian
Federation of Natural Language Processing, 2015 | null | null | null | cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge base (KB) completion adds new facts to a KB by making inferences
from existing facts, for example by inferring with high likelihood
nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop
relational synonyms like this, or use as evidence a multi-hop relational path
treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper
presents an approach that reasons about conjunctions of multi-hop relations
non-atomically, composing the implications of a path using a recursive neural
network (RNN) that takes as inputs vector embeddings of the binary relation in
the path. Not only does this allow us to generalize to paths unseen at training
time, but also, with a single high-capacity RNN, to predict new relation types
not seen when the compositional model was trained (zero-shot learning). We
assemble a new dataset of over 52M relational triples, and show that our method
improves over a traditional classifier by 11%, and a method leveraging
pre-trained embeddings by 7%.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 23:06:10 GMT"
},
{
"version": "v2",
"created": "Wed, 27 May 2015 21:23:45 GMT"
}
] | 2015-05-29T00:00:00 | [
[
"Neelakantan",
"Arvind",
""
],
[
"Roth",
"Benjamin",
""
],
[
"McCallum",
"Andrew",
""
]
] | TITLE: Compositional Vector Space Models for Knowledge Base Completion
ABSTRACT: Knowledge base (KB) completion adds new facts to a KB by making inferences
from existing facts, for example by inferring with high likelihood
nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop
relational synonyms like this, or use as evidence a multi-hop relational path
treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper
presents an approach that reasons about conjunctions of multi-hop relations
non-atomically, composing the implications of a path using a recursive neural
network (RNN) that takes as inputs vector embeddings of the binary relation in
the path. Not only does this allow us to generalize to paths unseen at training
time, but also, with a single high-capacity RNN, to predict new relation types
not seen when the compositional model was trained (zero-shot learning). We
assemble a new dataset of over 52M relational triples, and show that our method
improves over a traditional classifier by 11%, and a method leveraging
pre-trained embeddings by 7%.
| new_dataset | 0.950457 |
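A minimal sketch of the path-composition idea described above: a recursive step combines the running path vector with the next relation embedding. The dimensions, tanh nonlinearity, and dot-product scoring are illustrative assumptions, not the paper's exact parameterization.

```python
# Compose a multi-hop relation path into a single vector, then score it
# against a target relation embedding (all parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)
d = 50
W = rng.normal(scale=0.1, size=(d, 2 * d))  # composition matrix

def compose_path(relation_vectors):
    h = relation_vectors[0]
    for r in relation_vectors[1:]:
        h = np.tanh(W @ np.concatenate([h, r]))
    return h

path = [rng.normal(size=d) for _ in range(2)]  # e.g. bornIn -> containedIn
target = rng.normal(size=d)                    # e.g. nationality embedding
print(compose_path(path) @ target)  # higher score => path implies relation
```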
1505.02137 | Mohamed Amer | Mohamed R. Amer, Behjat Siddiquie, Amir Tamrakar, David A. Salter,
Brian Lande, Darius Mehri and Ajay Divakaran | Human Social Interaction Modeling Using Temporal Deep Networks | null | null | null | null | cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a novel approach to computational modeling of social interactions
based on modeling of essential social interaction predicates (ESIPs) such as
joint attention and entrainment. Based on sound social psychological theory and
methodology, we collect a new "Tower Game" dataset consisting of audio-visual
capture of dyadic interactions labeled with the ESIPs. We expect this dataset
to provide a new avenue for research in computational social interaction
modeling. We propose a novel joint Discriminative Conditional Restricted
Boltzmann Machine (DCRBM) model that combines a discriminative component with
the generative power of CRBMs. Such a combination enables us to uncover
actionable constituents of the ESIPs in two steps. First, we train the DCRBM
model on the labeled data and get accurate (76\%-49\% across various ESIPs)
detection of the predicates. Second, we exploit the generative capability of
DCRBMs to activate the trained model so as to generate the lower-level data
corresponding to the specific ESIP that closely matches the actual training
data (with mean square error 0.01-0.1 for generating 100 frames). We are thus
able to decompose the ESIPs into their constituent actionable behaviors. Such a
purely computational determination of how to establish an ESIP such as
engagement is unprecedented.
| [
{
"version": "v1",
"created": "Wed, 6 May 2015 18:17:56 GMT"
},
{
"version": "v2",
"created": "Thu, 28 May 2015 16:05:07 GMT"
}
] | 2015-05-29T00:00:00 | [
[
"Amer",
"Mohamed R.",
""
],
[
"Siddiquie",
"Behjat",
""
],
[
"Tamrakar",
"Amir",
""
],
[
"Salter",
"David A.",
""
],
[
"Lande",
"Brian",
""
],
[
"Mehri",
"Darius",
""
],
[
"Divakaran",
"Ajay",
""
]
] | TITLE: Human Social Interaction Modeling Using Temporal Deep Networks
ABSTRACT: We present a novel approach to computational modeling of social interactions
based on modeling of essential social interaction predicates (ESIPs) such as
joint attention and entrainment. Based on sound social psychological theory and
methodology, we collect a new "Tower Game" dataset consisting of audio-visual
capture of dyadic interactions labeled with the ESIPs. We expect this dataset
to provide a new avenue for research in computational social interaction
modeling. We propose a novel joint Discriminative Conditional Restricted
Boltzmann Machine (DCRBM) model that combines a discriminative component with
the generative power of CRBMs. Such a combination enables us to uncover
actionable constituents of the ESIPs in two steps. First, we train the DCRBM
model on the labeled data and get accurate (76\%-49\% across various ESIPs)
detection of the predicates. Second, we exploit the generative capability of
DCRBMs to activate the trained model so as to generate the lower-level data
corresponding to the specific ESIP that closely matches the actual training
data (with mean square error 0.01-0.1 for generating 100 frames). We are thus
able to decompose the ESIPs into their constituent actionable behaviors. Such a
purely computational determination of how to establish an ESIP such as
engagement is unprecedented.
| new_dataset | 0.959913 |
1505.07499 | Reza Shokri | Vincent Bindschaedler and Reza Shokri | Privacy through Fake yet Semantically Real Traces | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Camouflaging data by generating fake information is a well-known obfuscation
technique for protecting data privacy. In this paper, we focus on a very
sensitive and increasingly exposed type of data: location data. There are two
main scenarios in which fake traces are of extreme value to preserve location
privacy: publishing datasets of location trajectories, and using location-based
services. Despite advances in protecting (location) data privacy, there is no
quantitative method to evaluate how realistic a synthetic trace is, and how
much utility and privacy it provides in each scenario. Also, the lack of a
methodology to generate privacy-preserving fake traces is evident. In this
paper, we fill this gap and propose the first statistical metric and model to
generate fake location traces such that both the utility of data and the
privacy of users are preserved. We build upon the fact that, although
geographically they visit distinct locations, people have strongly semantically
similar mobility patterns, for example, their transition pattern across
activities (e.g., working, driving, staying at home) is similar. We define a
statistical metric and propose an algorithm that automatically discovers the
hidden semantic similarities between locations from a bag of real location
traces as seeds, without requiring any initial semantic annotations. We
guarantee that fake traces are geographically dissimilar to their seeds, so
they do not leak sensitive location information. We also protect contributors
to seed traces against membership attacks. Interleaving fake traces with mobile
users' traces is a prominent location privacy defense mechanism. We
quantitatively show the effectiveness of our methodology in protecting against
localization inference attacks while preserving utility of sharing/publishing
traces.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 21:48:59 GMT"
}
] | 2015-05-29T00:00:00 | [
[
"Bindschaedler",
"Vincent",
""
],
[
"Shokri",
"Reza",
""
]
] | TITLE: Privacy through Fake yet Semantically Real Traces
ABSTRACT: Camouflaging data by generating fake information is a well-known obfuscation
technique for protecting data privacy. In this paper, we focus on a very
sensitive and increasingly exposed type of data: location data. There are two
main scenarios in which fake traces are of extreme value to preserve location
privacy: publishing datasets of location trajectories, and using location-based
services. Despite advances in protecting (location) data privacy, there is no
quantitative method to evaluate how realistic a synthetic trace is, and how
much utility and privacy it provides in each scenario. Also, the lack of a
methodology to generate privacy-preserving fake traces is evident. In this
paper, we fill this gap and propose the first statistical metric and model to
generate fake location traces such that both the utility of data and the
privacy of users are preserved. We build upon the fact that, although
geographically they visit distinct locations, people have strongly semantically
similar mobility patterns, for example, their transition pattern across
activities (e.g., working, driving, staying at home) is similar. We define a
statistical metric and propose an algorithm that automatically discovers the
hidden semantic similarities between locations from a bag of real location
traces as seeds, without requiring any initial semantic annotations. We
guarantee that fake traces are geographically dissimilar to their seeds, so
they do not leak sensitive location information. We also protect contributors
to seed traces against membership attacks. Interleaving fake traces with mobile
users' traces is a prominent location privacy defense mechanism. We
quantitatively show the effectiveness of our methodology in protecting against
localization inference attacks while preserving utility of sharing/publishing
traces.
| no_new_dataset | 0.951459 |
1505.07690 | Remco Duits | Michiel Janssen, Remco Duits, Marcel Breeuwer | Invertible Orientation Scores of 3D Images | ssvm 2015 published version in LNCS contains a mistake (a switch
notation spherical angles) that is corrected in this arxiv version | null | null | null | math.NA cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The enhancement and detection of elongated structures in noisy image data is
relevant for many biomedical applications. To handle complex crossing
structures in 2D images, 2D orientation scores were introduced, which already
showed their use in a variety of applications. Here we extend this work to 3D
orientation scores. First, we construct the orientation score from a given
dataset, which is achieved by an invertible coherent state type of transform.
For this transformation we introduce 3D versions of the 2D cake-wavelets, which
are complex wavelets that can simultaneously detect oriented structures and
oriented edges. For efficient implementation of the different steps in the
wavelet creation we use a spherical harmonic transform. Finally, we show some
first results of practical applications of 3D orientation scores.
| [
{
"version": "v1",
"created": "Thu, 28 May 2015 13:52:41 GMT"
}
] | 2015-05-29T00:00:00 | [
[
"Janssen",
"Michiel",
""
],
[
"Duits",
"Remco",
""
],
[
"Breeuwer",
"Marcel",
""
]
] | TITLE: Invertible Orientation Scores of 3D Images
ABSTRACT: The enhancement and detection of elongated structures in noisy image data is
relevant for many biomedical applications. To handle complex crossing
structures in 2D images, 2D orientation scores were introduced, which already
showed their use in a variety of applications. Here we extend this work to 3D
orientation scores. First, we construct the orientation score from a given
dataset, which is achieved by an invertible coherent state type of transform.
For this transformation we introduce 3D versions of the 2D cake-wavelets, which
are complex wavelets that can simultaneously detect oriented structures and
oriented edges. For efficient implementation of the different steps in the
wavelet creation we use a spherical harmonic transform. Finally, we show some
first results of practical applications of 3D orientation scores.
| no_new_dataset | 0.946597 |
1105.0819 | Pierpaolo Vivo | Simone Pigolotti, Sebastian Bernhardsson, Jeppe Juul, Gorm Galster,
Pierpaolo Vivo | Equilibrium strategy and population-size effects in lowest unique bid
auctions | 6 pag. - 7 figs - added Supplementary Material. Changed affiliations.
Published version | Phys. Rev. Lett. 108, 088701 (2012) | 10.1103/PhysRevLett.108.088701 | null | cs.GT physics.soc-ph q-fin.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In lowest unique bid auctions, $N$ players bid for an item. The winner is
whoever places the \emph{lowest} bid, provided that it is also unique. We use a
grand canonical approach to derive an analytical expression for the equilibrium
distribution of strategies. We then study the properties of the solution as a
function of the mean number of players, and compare them with a large dataset
of internet auctions. The theory agrees with the data with striking accuracy
for small population size $N$, while for larger $N$ a qualitatively different
distribution is observed. We interpret this result as the emergence of two
different regimes, one in which adaptation is feasible and one in which it is
not. Our results question the actual possibility of a large population to adapt
and find the optimal strategy when participating in a collective game.
| [
{
"version": "v1",
"created": "Sat, 30 Apr 2011 10:09:03 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2011 12:15:39 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Feb 2012 15:03:56 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Pigolotti",
"Simone",
""
],
[
"Bernhardsson",
"Sebastian",
""
],
[
"Juul",
"Jeppe",
""
],
[
"Galster",
"Gorm",
""
],
[
"Vivo",
"Pierpaolo",
""
]
] | TITLE: Equilibrium strategy and population-size effects in lowest unique bid
auctions
ABSTRACT: In lowest unique bid auctions, $N$ players bid for an item. The winner is
whoever places the \emph{lowest} bid, provided that it is also unique. We use a
grand canonical approach to derive an analytical expression for the equilibrium
distribution of strategies. We then study the properties of the solution as a
function of the mean number of players, and compare them with a large dataset
of internet auctions. The theory agrees with the data with striking accuracy
for small population size $N$, while for larger $N$ a qualitatively different
distribution is observed. We interpret this result as the emergence of two
different regimes, one in which adaptation is feasible and one in which it is
not. Our results question the actual possibility of a large population to adapt
and find the optimal strategy when participating in a collective game.
| no_new_dataset | 0.945951 |
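For readers unfamiliar with the auction format, the winner rule is easy to state in code; the following sketch (with made-up bids) determines the lowest unique bid, while the equilibrium analysis in the paper itself is analytical.

```python
# Winner determination in a lowest unique bid auction:
# the lowest bid placed by exactly one player wins.
from collections import Counter

def luba_winner(bids):
    """Return the winning bid, or None if no bid is unique."""
    counts = Counter(bids)
    unique = [b for b, c in counts.items() if c == 1]
    return min(unique) if unique else None

print(luba_winner([1, 2, 2, 3, 5, 5]))  # -> 1
print(luba_winner([1, 1, 2, 2]))        # -> None
```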
1107.4218 | Maurizio Serva | Maurizio Serva | The settlement of Madagascar: what dialects and languages can tell | We find out the area and the modalities of the settlement of
Madagascar by Indonesian colonizers around 650 CE. Results are obtained
comparing 23 Malagasy dialects with Malay and Maanyan languages | null | 10.1371/journal.pone.0030666 | null | cs.CL q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dialects of Madagascar belong to the Greater Barito East group of the
Austronesian family and it is widely accepted that the Island was colonized by
Indonesian sailors after a maritime trek which probably took place around 650
CE. The language most closely related to Malagasy dialects is Maanyan but also
Malay is strongly related especially for what concerns navigation terms. Since
the Maanyan Dayaks live along the Barito river in Kalimantan (Borneo) and they
do not possess the necessary skill for long maritime navigation, probably they
were brought as subordinates by Malay sailors.
In a recent paper we compared 23 different Malagasy dialects in order to
determine the time and the landing area of the first colonization. In this
research we use new data and new methods to confirm that the landing took place
on the south-east coast of the Island. Furthermore, we are able to state here
that it is unlikely that there were multiple settlements and, therefore,
colonization consisted of a single founding event.
To reach our goal we determine the internal kinship relations among all 23
Malagasy dialects, as well as the kinship degrees of the 23 dialects with
respect to Malay and Maanyan. The method used is an automated version of
the lexicostatistic approach. The data concerning Madagascar were collected by
the author at the beginning of 2010 and consist of Swadesh lists of 200 items
for 23 dialects covering all areas of the Island. The lists for Maanyan and
Malay were obtained from published datasets integrated by author's interviews.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2011 10:02:31 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Serva",
"Maurizio",
""
]
] | TITLE: The settlement of Madagascar: what dialects and languages can tell
ABSTRACT: The dialects of Madagascar belong to the Greater Barito East group of the
Austronesian family and it is widely accepted that the Island was colonized by
Indonesian sailors after a maritime trek which probably took place around 650
CE. The language most closely related to Malagasy dialects is Maanyan but also
Malay is strongly related especially for what concerns navigation terms. Since
the Maanyan Dayaks live along the Barito river in Kalimantan (Borneo) and they
do not possess the necessary skill for long maritime navigation, probably they
were brought as subordinates by Malay sailors.
In a recent paper we compared 23 different Malagasy dialects in order to
determine the time and the landing area of the first colonization. In this
research we use new data and new methods to confirm that the landing took place
on the south-east coast of the Island. Furthermore, we are able to state here
that it is unlikely that there were multiple settlements and, therefore,
colonization consisted of a single founding event.
To reach our goal we determine the internal kinship relations among all 23
Malagasy dialects, as well as the kinship degrees of the 23 dialects with
respect to Malay and Maanyan. The method used is an automated version of
the lexicostatistic approach. The data concerning Madagascar were collected by
the author at the beginning of 2010 and consist of Swadesh lists of 200 items
for 23 dialects covering all areas of the Island. The lists for Maanyan and
Malay were obtained from published datasets integrated by author's interviews.
| no_new_dataset | 0.883437 |
1412.5335 | Gr\'egoire Mesnil | Gr\'egoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, Yoshua Bengio | Ensemble of Generative and Discriminative Techniques for Sentiment
Analysis of Movie Reviews | null | null | null | null | cs.CL cs.IR cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment analysis is a common task in natural language processing that aims
to detect polarity of a text document (typically a consumer review). In the
simplest settings, we discriminate only between positive and negative
sentiment, turning the task into a standard binary classification problem. We
compare several ma- chine learning approaches to this problem, and combine them
to achieve the best possible results. We show how to use for this task the
standard generative lan- guage models, which are slightly complementary to the
state of the art techniques. We achieve strong results on a well-known dataset
of IMDB movie reviews. Our results are easily reproducible, as we publish also
the code needed to repeat the experiments. This should simplify further advance
of the state of the art, as other researchers can combine their techniques with
ours with little effort.
| [
{
"version": "v1",
"created": "Wed, 17 Dec 2014 11:02:04 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Dec 2014 14:17:16 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Dec 2014 11:36:14 GMT"
},
{
"version": "v4",
"created": "Tue, 3 Feb 2015 20:03:35 GMT"
},
{
"version": "v5",
"created": "Wed, 4 Feb 2015 05:17:55 GMT"
},
{
"version": "v6",
"created": "Thu, 16 Apr 2015 14:26:14 GMT"
},
{
"version": "v7",
"created": "Wed, 27 May 2015 06:40:09 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Mesnil",
"Grégoire",
""
],
[
"Mikolov",
"Tomas",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Ensemble of Generative and Discriminative Techniques for Sentiment
Analysis of Movie Reviews
ABSTRACT: Sentiment analysis is a common task in natural language processing that aims
to detect polarity of a text document (typically a consumer review). In the
simplest settings, we discriminate only between positive and negative
sentiment, turning the task into a standard binary classification problem. We
compare several machine learning approaches to this problem, and combine them
to achieve the best possible results. We show how to use for this task the
standard generative language models, which are slightly complementary to the
state of the art techniques. We achieve strong results on a well-known dataset
of IMDB movie reviews. Our results are easily reproducible, as we publish also
the code needed to repeat the experiments. This should simplify further advance
of the state of the art, as other researchers can combine their techniques with
ours with little effort.
| no_new_dataset | 0.948585 |
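The combination step can be as simple as a weighted average of per-model polarity scores; the sketch below uses made-up scores and weights, whereas the paper tunes the combination on held-out data.

```python
# Combine per-document sentiment scores from several models by a
# normalized weighted average (scores and weights are illustrative).
import numpy as np

def ensemble_scores(score_matrix, weights):
    """score_matrix: (n_docs, n_models) polarity scores in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    return score_matrix @ (w / w.sum())

scores = np.array([[0.9, 0.4, 0.7],
                   [0.2, 0.1, 0.3]])
print(ensemble_scores(scores, [1.0, 0.5, 1.0]) > 0.5)  # predicted polarity
```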
1502.02791 | Mingsheng Long | Mingsheng Long, Yue Cao, Jianmin Wang, Michael I. Jordan | Learning Transferable Features with Deep Adaptation Networks | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies reveal that a deep neural network can learn transferable
features which generalize well to novel tasks for domain adaptation. However,
as deep features eventually transition from general to specific along the
network, the feature transferability drops significantly in higher layers with
increasing domain discrepancy. Hence, it is important to formally reduce the
dataset bias and enhance the transferability in task-specific layers. In this
paper, we propose a new Deep Adaptation Network (DAN) architecture, which
generalizes deep convolutional neural networks to the domain adaptation
scenario. In DAN, hidden representations of all task-specific layers are
embedded in a reproducing kernel Hilbert space where the mean embeddings of
different domain distributions can be explicitly matched. The domain
discrepancy is further reduced using an optimal multi-kernel selection method
for mean embedding matching. DAN can learn transferable features with
statistical guarantees, and can scale linearly via an unbiased estimate of the
kernel embedding. Extensive empirical evidence shows that the proposed architecture
yields state-of-the-art image classification error rates on standard domain
adaptation benchmarks.
| [
{
"version": "v1",
"created": "Tue, 10 Feb 2015 06:01:30 GMT"
},
{
"version": "v2",
"created": "Wed, 27 May 2015 05:28:35 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Long",
"Mingsheng",
""
],
[
"Cao",
"Yue",
""
],
[
"Wang",
"Jianmin",
""
],
[
"Jordan",
"Michael I.",
""
]
] | TITLE: Learning Transferable Features with Deep Adaptation Networks
ABSTRACT: Recent studies reveal that a deep neural network can learn transferable
features which generalize well to novel tasks for domain adaptation. However,
as deep features eventually transition from general to specific along the
network, the feature transferability drops significantly in higher layers with
increasing domain discrepancy. Hence, it is important to formally reduce the
dataset bias and enhance the transferability in task-specific layers. In this
paper, we propose a new Deep Adaptation Network (DAN) architecture, which
generalizes deep convolutional neural networks to the domain adaptation
scenario. In DAN, hidden representations of all task-specific layers are
embedded in a reproducing kernel Hilbert space where the mean embeddings of
different domain distributions can be explicitly matched. The domain
discrepancy is further reduced using an optimal multi-kernel selection method
for mean embedding matching. DAN can learn transferable features with
statistical guarantees, and can scale linearly by unbiased estimate of kernel
embedding. Extensive empirical evidence shows that the proposed architecture
yields state-of-the-art image classification error rates on standard domain
adaptation benchmarks.
| no_new_dataset | 0.94699 |
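The quantity DAN matches between domains is the maximum mean discrepancy of layer activations; below is a minimal NumPy sketch of a biased, single-bandwidth Gaussian-kernel MMD estimate, whereas the paper uses a multi-kernel variant with learned kernel weights across several layers.

```python
# Biased estimate of squared MMD between source and target feature batches
# with one Gaussian kernel (DAN combines several kernels and layers).
import numpy as np

def gaussian_kernel(a, b, sigma):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 32))  # hypothetical layer activations
tgt = rng.normal(0.5, 1.0, size=(64, 32))
print(mmd2(src, tgt))
```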
1505.07130 | Kemele M. Endris | Kemele M. Endris, Sidra Faisal, Fabrizio Orlandi, S\"oren Auer, Simon
Scerri | Interest-based RDF Update Propagation | 16 pages, Keywords: Change Propagation, Dataset Dynamics, Linked
Data, Replication | null | null | null | cs.DC cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many LOD datasets, such as DBpedia and LinkedGeoData, are voluminous and
process large amounts of requests from diverse applications. Many data products
and services rely on full or partial local LOD replications to ensure faster
querying and processing. While such replicas enhance the flexibility of
information sharing and integration infrastructures, they also introduce data
duplication with all the associated undesirable consequences. Given the
evolving nature of the original and authoritative datasets, to ensure
consistent and up-to-date replicas frequent replacements are required at a
great cost. In this paper, we introduce an approach for interest-based RDF
update propagation, which propagates only interesting parts of updates from the
source to the target dataset. Effectively, this enables remote applications to
`subscribe' to relevant datasets and consistently reflect the necessary changes
locally without the need to frequently replace the entire dataset (or a
relevant subset). Our approach is based on a formal definition for
graph-pattern-based interest expressions that is used to filter interesting
parts of updates from the source. We implement the approach in the iRap
framework and perform a comprehensive evaluation based on DBpedia Live updates,
to confirm the validity and value of our approach.
| [
{
"version": "v1",
"created": "Tue, 26 May 2015 20:36:42 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Endris",
"Kemele M.",
""
],
[
"Faisal",
"Sidra",
""
],
[
"Orlandi",
"Fabrizio",
""
],
[
"Auer",
"Sören",
""
],
[
"Scerri",
"Simon",
""
]
] | TITLE: Interest-based RDF Update Propagation
ABSTRACT: Many LOD datasets, such as DBpedia and LinkedGeoData, are voluminous and
process large amounts of requests from diverse applications. Many data products
and services rely on full or partial local LOD replications to ensure faster
querying and processing. While such replicas enhance the flexibility of
information sharing and integration infrastructures, they also introduce data
duplication with all the associated undesirable consequences. Given the
evolving nature of the original and authoritative datasets, to ensure
consistent and up-to-date replicas frequent replacements are required at a
great cost. In this paper, we introduce an approach for interest-based RDF
update propagation, which propagates only interesting parts of updates from the
source to the target dataset. Effectively, this enables remote applications to
`subscribe' to relevant datasets and consistently reflect the necessary changes
locally without the need to frequently replace the entire dataset (or a
relevant subset). Our approach is based on a formal definition for
graph-pattern-based interest expressions that is used to filter interesting
parts of updates from the source. We implement the approach in the iRap
framework and perform a comprehensive evaluation based on DBpedia Live updates,
to confirm the validity and value of our approach.
| no_new_dataset | 0.951323 |
1505.07184 | Danushka Bollegala | Danushka Bollegala and Takanori Maehara and Ken-ichi Kawarabayashi | Unsupervised Cross-Domain Word Representation Learning | 53rd Annual Meeting of the Association for Computational Linguistics
and the 7th International Joint Conferences on Natural Language Processing of
the Asian Federation of Natural Language Processing | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Meaning of a word varies from one domain to another. Despite this important
domain dependence in word semantics, existing word representation learning
methods are bound to a single domain. Given a pair of
\emph{source}-\emph{target} domains, we propose an unsupervised method for
learning domain-specific word representations that accurately capture the
domain-specific aspects of word semantics. First, we select a subset of
frequent words that occur in both domains as \emph{pivots}. Next, we optimize
an objective function that enforces two constraints: (a) for both source and
target domain documents, pivots that appear in a document must accurately
predict the co-occurring non-pivots, and (b) word representations learnt for
pivots must be similar in the two domains. Moreover, we propose a method to
perform domain adaptation using the learnt word representations. Our proposed
method significantly outperforms competitive baselines including the
state-of-the-art domain-insensitive word representations, and reports best
sentiment classification accuracies for all domain-pairs in a benchmark
dataset.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 04:02:56 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Bollegala",
"Danushka",
""
],
[
"Maehara",
"Takanori",
""
],
[
"Kawarabayashi",
"Ken-ichi",
""
]
] | TITLE: Unsupervised Cross-Domain Word Representation Learning
ABSTRACT: Meaning of a word varies from one domain to another. Despite this important
domain dependence in word semantics, existing word representation learning
methods are bound to a single domain. Given a pair of
\emph{source}-\emph{target} domains, we propose an unsupervised method for
learning domain-specific word representations that accurately capture the
domain-specific aspects of word semantics. First, we select a subset of
frequent words that occur in both domains as \emph{pivots}. Next, we optimize
an objective function that enforces two constraints: (a) for both source and
target domain documents, pivots that appear in a document must accurately
predict the co-occurring non-pivots, and (b) word representations learnt for
pivots must be similar in the two domains. Moreover, we propose a method to
perform domain adaptation using the learnt word representations. Our proposed
method significantly outperforms competitive baselines including the
state-of-the-art domain-insensitive word representations, and reports best
sentiment classification accuracies for all domain-pairs in a benchmark
dataset.
| no_new_dataset | 0.945751 |
1505.07193 | Linyun Yu | Linyun Yu, Peng Cui, Fei Wang, Chaoming Song, Shiqiang Yang | From Micro to Macro: Uncovering and Predicting Information Cascading
Process with Behavioral Dynamics | 10 pages, 11 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cascades are ubiquitous in various network environments. How to predict these
cascades is highly nontrivial in several vital applications, such as viral
marketing, epidemic prevention and traffic management. Most previous works
mainly focus on predicting the final cascade sizes. As cascades are typical
dynamic processes, it is always interesting and important to predict the
cascade size at any time, or predict the time when a cascade will reach a
certain size (e.g., a threshold for an outbreak). In this paper, we unify all
these tasks into a fundamental problem: cascading process prediction. That is,
given the early stage of a cascade, how to predict its cumulative cascade size
at any later time? For such a challenging problem, how to understand the micro
mechanism that drives and generates the macro phenomena (i.e., cascading
processes) is essential. Here we introduce behavioral dynamics as the micro
mechanism to describe the dynamic process by which a node's neighbors get
infected by a cascade after the node itself gets infected (i.e., one-hop
subcascades). Through
data-driven analysis, we find out the common principles and patterns lying in
behavioral dynamics and propose a novel Networked Weibull Regression model for
behavioral dynamics modeling. After that we propose a novel method for
predicting cascading processes by effectively aggregating behavioral dynamics,
and propose a scalable solution to approximate the cascading process with a
theoretical guarantee. We extensively evaluate the proposed method on a large
scale social network dataset. The results demonstrate that the proposed method
can significantly outperform other state-of-the-art baselines in multiple tasks
including cascade size prediction, outbreak time prediction and cascading
process prediction.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 05:30:33 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Yu",
"Linyun",
""
],
[
"Cui",
"Peng",
""
],
[
"Wang",
"Fei",
""
],
[
"Song",
"Chaoming",
""
],
[
"Yang",
"Shiqiang",
""
]
] | TITLE: From Micro to Macro: Uncovering and Predicting Information Cascading
Process with Behavioral Dynamics
ABSTRACT: Cascades are ubiquitous in various network environments. How to predict these
cascades is highly nontrivial in several vital applications, such as viral
marketing, epidemic prevention and traffic management. Most previous works
mainly focus on predicting the final cascade sizes. As cascades are typical
dynamic processes, it is always interesting and important to predict the
cascade size at any time, or predict the time when a cascade will reach a
certain size (e.g., a threshold for an outbreak). In this paper, we unify all
these tasks into a fundamental problem: cascading process prediction. That is,
given the early stage of a cascade, how to predict its cumulative cascade size
of any later time? For such a challenging problem, how to understand the micro
mechanism that drives and generates the macro phenomena (i.e. cascading
processes) is essential. Here we introduce behavioral dynamics as the micro
mechanism to describe the dynamic process by which a node's neighbors get
infected by a cascade after the node itself gets infected (i.e. one-hop
subcascades). Through
data-driven analysis, we identify the common principles and patterns underlying
behavioral dynamics and propose a novel Networked Weibull Regression model to
capture them. We then propose a novel method for
predicting cascading processes by effectively aggregating behavioral dynamics,
and propose a scalable solution to approximate the cascading process with a
theoretical guarantee. We extensively evaluate the proposed method on a
large-scale social network dataset. The results demonstrate that the proposed
method
can significantly outperform other state-of-the-art baselines in multiple tasks
including cascade size prediction, outbreak time prediction and cascading
process prediction.
| no_new_dataset | 0.947137 |
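
To make the behavioral-dynamics idea concrete, here is a minimal sketch that fits a plain (non-networked) Weibull model to hypothetical one-hop infection delays; the paper's Networked Weibull Regression additionally ties the parameters to node covariates, which is omitted here:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical delays (hours) between a node's infection and each
# neighbor's infection, i.e. one observed one-hop subcascade
delays = np.array([0.5, 1.2, 2.0, 2.3, 3.1, 4.8, 6.0])

# Fit a Weibull model to the behavioral dynamics (location fixed at 0)
shape, loc, scale = weibull_min.fit(delays, floc=0)

# Expected fraction of the node's neighbors infected within t hours
t = 3.0
print(weibull_min.cdf(t, shape, loc=loc, scale=scale))
```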
1505.07254 | Oliver Mason | Naoise Holohan, Doug Leith and Oliver Mason | Differentially Private Response Mechanisms on Categorical Data | null | null | null | null | cs.DM cs.CR math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study mechanisms for differential privacy on finite datasets. By deriving
\emph{sufficient sets} for differential privacy we obtain necessary and
sufficient conditions for differential privacy, a tight lower bound on the
maximal expected error of a discrete mechanism and a characterisation of the
optimal mechanism which minimises the maximal expected error within the class
of mechanisms considered.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 10:16:57 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Holohan",
"Naoise",
""
],
[
"Leith",
"Doug",
""
],
[
"Mason",
"Oliver",
""
]
] | TITLE: Differentially Private Response Mechanisms on Categorical Data
ABSTRACT: We study mechanisms for differential privacy on finite datasets. By deriving
\emph{sufficient sets} for differential privacy we obtain necessary and
sufficient conditions for differential privacy, a tight lower bound on the
maximal expected error of a discrete mechanism and a characterisation of the
optimal mechanism which minimises the maximal expected error within the class
of mechanisms considered.
| no_new_dataset | 0.940898 |
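
The abstract above concerns discrete mechanisms in general; as a concrete baseline instance of a differentially private response mechanism on categorical data, the sketch below implements k-ary randomized response, a standard epsilon-DP mechanism (not necessarily the optimal one characterised in the paper):

```python
import numpy as np

def randomized_response(value, categories, epsilon, rng=None):
    """k-ary randomized response: keeps the true category with probability
    e^eps / (e^eps + k - 1), otherwise reports a uniformly random other
    category; the keep/flip probability ratio is e^eps, giving eps-DP."""
    rng = rng or np.random.default_rng()
    k = len(categories)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return value
    others = [c for c in categories if c != value]
    return others[rng.integers(len(others))]

print(randomized_response("A", ["A", "B", "C", "D"], epsilon=1.0))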
1505.07293 | Vijay Badrinarayanan | Vijay Badrinarayanan, Ankur Handa, Roberto Cipolla | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust
Semantic Pixel-Wise Labelling | This version was first submitted to CVPR' 15 on November 14, 2014
with paper Id 1468. A similar architecture was proposed more recently on May
17, 2015, see http://arxiv.org/pdf/1505.04366.pdf | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel deep architecture, SegNet, for semantic pixel-wise image
labelling. SegNet has several attractive properties; (i) it only requires
forward evaluation of a fully learnt function to obtain smooth label
predictions, (ii) with increasing depth, a larger context is considered for
pixel labelling which improves accuracy, and (iii) it is easy to visualise the
effect of feature activation(s) in the pixel label space at any depth. SegNet
is composed of a stack of encoders followed by a corresponding decoder stack
which feeds into a soft-max classification layer. The decoders help map low
resolution feature maps at the output of the encoder stack to full input image
size feature maps. This addresses an important drawback of recent deep learning
approaches which have adopted networks designed for object categorization for
pixel-wise labelling. These methods lack a mechanism to map deep-layer feature
maps to input dimensions. They resort to ad hoc methods to upsample features,
e.g. by replication. This results in noisy predictions and also restricts the
number of pooling layers in order to avoid too much upsampling and thus reduces
spatial context. SegNet overcomes these problems by learning to map encoder
outputs to image pixel labels. We test the performance of SegNet on outdoor RGB
scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results
show that SegNet achieves state-of-the-art performance even without use of
additional cues such as depth, video frames or post-processing with CRF models.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 12:54:17 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Badrinarayanan",
"Vijay",
""
],
[
"Handa",
"Ankur",
""
],
[
"Cipolla",
"Roberto",
""
]
] | TITLE: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust
Semantic Pixel-Wise Labelling
ABSTRACT: We propose a novel deep architecture, SegNet, for semantic pixel-wise image
labelling. SegNet has several attractive properties; (i) it only requires
forward evaluation of a fully learnt function to obtain smooth label
predictions, (ii) with increasing depth, a larger context is considered for
pixel labelling which improves accuracy, and (iii) it is easy to visualise the
effect of feature activation(s) in the pixel label space at any depth. SegNet
is composed of a stack of encoders followed by a corresponding decoder stack
which feeds into a soft-max classification layer. The decoders help map low
resolution feature maps at the output of the encoder stack to full input image
size feature maps. This addresses an important drawback of recent deep learning
approaches which have adopted networks designed for object categorization for
pixel-wise labelling. These methods lack a mechanism to map deep-layer feature
maps to input dimensions. They resort to ad hoc methods to upsample features,
e.g. by replication. This results in noisy predictions and also restricts the
number of pooling layers in order to avoid too much upsampling and thus reduces
spatial context. SegNet overcomes these problems by learning to map encoder
outputs to image pixel labels. We test the performance of SegNet on outdoor RGB
scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results
show that SegNet achieves state-of-the-art performance even without use of
additional cues such as depth, video frames or post-processing with CRF models.
| no_new_dataset | 0.945349 |
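
A minimal PyTorch sketch of the encoder-decoder idea follows: pooling indices recorded in the encoder drive non-parametric unpooling in the decoder, so feature maps return to full input resolution before per-pixel classification. This mirrors later published SegNet variants; the layer counts and sizes here are illustrative only.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """One encoder/decoder stage; the real architecture stacks several."""
    def __init__(self, num_classes=12):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                 nn.BatchNorm2d(16), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1),
                                 nn.BatchNorm2d(16), nn.ReLU())
        self.classify = nn.Conv2d(16, num_classes, 1)  # soft-max applied in the loss

    def forward(self, x):
        feats = self.enc(x)
        pooled, idx = self.pool(feats)                     # remember pooling indices
        up = self.unpool(pooled, idx, output_size=feats.size())
        return self.classify(self.dec(up))                 # full-resolution logits

print(TinySegNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 12, 64, 64])
```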
1505.07310 | Md. Iftekhar Tanveer | M. Iftekhar Tanveer | Use of Laplacian Projection Technique for Summarizing Likert Scale
Annotations | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Summarizing Likert scale ratings from human annotators is an important step
for collecting human judgments. In this project we study a novel, graph
theoretic method for this purpose. We also analyze a few interesting properties
for this approach using real annotation datasets.
| [
{
"version": "v1",
"created": "Tue, 26 May 2015 15:45:00 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Tanveer",
"M. Iftekhar",
""
]
] | TITLE: Use of Laplacian Projection Technique for Summarizing Likert Scale
Annotations
ABSTRACT: Summarizing Likert scale ratings from human annotators is an important step
for collecting human judgments. In this project we study a novel, graph
theoretic method for this purpose. We also analyze a few interesting properties
for this approach using real annotation datasets.
| no_new_dataset | 0.953013 |
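
The abstract does not spell out the construction, so the following is only a plausible reading of a "Laplacian projection": build a similarity graph over the annotated items from their rating vectors, then use the graph Laplacian's Fiedler vector as a one-dimensional summary embedding. All data here are hypothetical.

```python
import numpy as np

# Hypothetical items x annotators matrix of Likert ratings (1-5)
R = np.array([[5, 4, 5], [1, 2, 1], [3, 3, 4], [4, 5, 4]], dtype=float)

# Gaussian similarity between items based on rating-vector distance
d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
W = np.exp(-d**2 / (2 * d.mean()**2))
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian; its Fiedler vector gives a 1-D embedding
L = np.diag(W.sum(axis=1)) - W
vals, vecs = np.linalg.eigh(L)
print(vecs[:, 1])  # one summary coordinate per item
```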
1505.07428 | Manuel L\'opez-Antequera | Ruben Gomez-Ojeda, Manuel Lopez-Antequera, Nicolai Petkov, Javier
Gonzalez-Jimenez | Training a Convolutional Neural Network for Appearance-Invariant Place
Recognition | null | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Place recognition is one of the most challenging problems in computer vision,
and has become a key part in mobile robotics and autonomous driving
applications for performing loop closure in visual SLAM systems. Moreover, the
difficulty of recognizing a revisited location increases with appearance
changes caused, for instance, by weather or illumination variations, which
hinders the long-term application of such algorithms in real environments. In
this paper we present a convolutional neural network (CNN), trained for the
first time with the purpose of recognizing revisited locations under severe
appearance changes, which maps images to a low dimensional space where
Euclidean distances represent place dissimilarity. In order for the network to
learn the desired invariances, we train it with triplets of images selected
from datasets which present a challenging variability in visual appearance. The
triplets are selected in such a way that two samples are from the same location
and the third one is taken from a different place. We validate our system
through extensive experimentation, where we demonstrate better performance than
state-of-the-art algorithms on a number of popular datasets.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 18:21:54 GMT"
}
] | 2015-05-28T00:00:00 | [
[
"Gomez-Ojeda",
"Ruben",
""
],
[
"Lopez-Antequera",
"Manuel",
""
],
[
"Petkov",
"Nicolai",
""
],
[
"Gonzalez-Jimenez",
"Javier",
""
]
] | TITLE: Training a Convolutional Neural Network for Appearance-Invariant Place
Recognition
ABSTRACT: Place recognition is one of the most challenging problems in computer vision,
and has become a key part in mobile robotics and autonomous driving
applications for performing loop closure in visual SLAM systems. Moreover, the
difficulty of recognizing a revisited location increases with appearance
changes caused, for instance, by weather or illumination variations, which
hinders the long-term application of such algorithms in real environments. In
this paper we present a convolutional neural network (CNN), trained for the
first time with the purpose of recognizing revisited locations under severe
appearance changes, which maps images to a low dimensional space where
Euclidean distances represent place dissimilarity. In order for the network to
learn the desired invariances, we train it with triplets of images selected
from datasets which present a challenging variability in visual appearance. The
triplets are selected in such a way that two samples are from the same location
and the third one is taken from a different place. We validate our system
through extensive experimentation, where we demonstrate better performance than
state-of-the-art algorithms on a number of popular datasets.
| no_new_dataset | 0.954308 |
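
The training objective described above is a standard triplet loss; a minimal PyTorch sketch (the CNN backbone producing the embeddings is omitted, and the tensors below are hypothetical):

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_anchor, f_positive, f_negative, margin=1.0):
    """Pull embeddings of the same place together, push different places apart."""
    d_pos = F.pairwise_distance(f_anchor, f_positive)
    d_neg = F.pairwise_distance(f_anchor, f_negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Hypothetical 128-D embeddings for a batch of 8 triplets
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```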
1101.4749 | Osman G\"unay | Osman Gunay and Behcet Ugur Toreyin and Kivanc Kose and A. Enis Cetin | Online Adaptive Decision Fusion Framework Based on Entropic Projections
onto Convex Sets with Application to Wildfire Detection in Video | 10 pages, 7 figures | null | 10.1117/1.3595426 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, an Entropy functional based online Adaptive Decision Fusion
(EADF) framework is developed for image analysis and computer vision
applications. In this framework, it is assumed that the compound algorithm
consists of several sub-algorithms, each of which yields its own decision as a
real number centered around zero, representing the confidence level of that
particular sub-algorithm. Decision values are linearly combined with weights
which are updated online according to an active fusion method based on
performing entropic projections onto convex sets describing sub-algorithms. It
is assumed that there is an oracle, who is usually a human operator, providing
feedback to the decision fusion method. A video-based wildfire detection system
is developed to evaluate the performance of the algorithm in handling the
problems where data arrives sequentially. In this case, the oracle is the
security guard of the forest lookout tower verifying the decision of the
combined algorithm. Simulation results are presented. The EADF framework is
also tested with a standard dataset.
| [
{
"version": "v1",
"created": "Tue, 25 Jan 2011 09:11:49 GMT"
}
] | 2015-05-27T00:00:00 | [
[
"Gunay",
"Osman",
""
],
[
"Toreyin",
"Behcet Ugur",
""
],
[
"Kose",
"Kivanc",
""
],
[
"Cetin",
"A. Enis",
""
]
] | TITLE: Online Adaptive Decision Fusion Framework Based on Entropic Projections
onto Convex Sets with Application to Wildfire Detection in Video
ABSTRACT: In this paper, an Entropy functional based online Adaptive Decision Fusion
(EADF) framework is developed for image analysis and computer vision
applications. In this framework, it is assumed that the compound algorithm
consists of several sub-algorithms, each of which yields its own decision as a
real number centered around zero, representing the confidence level of that
particular sub-algorithm. Decision values are linearly combined with weights
which are updated online according to an active fusion method based on
performing entropic projections onto convex sets describing sub-algorithms. It
is assumed that there is an oracle, who is usually a human operator, providing
feedback to the decision fusion method. A video-based wildfire detection system
is developed to evaluate the performance of the algorithm in handling the
problems where data arrives sequentially. In this case, the oracle is the
security guard of the forest lookout tower verifying the decision of the
combined algorithm. Simulation results are presented. The EADF framework is
also tested with a standard dataset.
| no_new_dataset | 0.946646 |
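
A rough sketch of one online fusion step follows, assuming squared-error losses and a multiplicative (entropy-flavoured) update renormalised onto the simplex; the paper's exact entropic projections onto the sub-algorithm constraint sets are more involved.

```python
import numpy as np

def fusion_step(weights, decisions, oracle_label, eta=0.1):
    """decisions: per-sub-algorithm confidences in [-1, 1]; oracle_label: +/-1."""
    fused = float(np.dot(weights, decisions))       # combined decision value
    losses = (decisions - oracle_label) ** 2        # oracle feedback per algorithm
    w = weights * np.exp(-eta * losses)             # multiplicative (entropic) update
    return fused, w / w.sum()                       # renormalise onto the simplex

w = np.ones(3) / 3
fused, w = fusion_step(w, np.array([0.8, -0.2, 0.5]), oracle_label=1.0)
print(fused, w)
```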
1102.1712 | Ruijiang Li | Ruijiang Li, John H. Lewis, Xun Jia, Xuejun Gu, Michael Folkerts,
Chunhua Men, William Y. Song, and Steve B. Jiang | 3D tumor localization through real-time volumetric x-ray imaging for
lung cancer radiotherapy | null | null | 10.1118/1.3582693 | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently we have developed an algorithm for reconstructing volumetric images
and extracting 3D tumor motion information from a single x-ray projection. We
have demonstrated its feasibility using a digital respiratory phantom with
regular breathing patterns. In this work, we present a detailed description and
a comprehensive evaluation of the improved algorithm. The algorithm was
improved by incorporating respiratory motion prediction. The accuracy and
efficiency were then evaluated on 1) a digital respiratory phantom, 2) a
physical respiratory phantom, and 3) five lung cancer patients. These
evaluation cases include both regular and irregular breathing patterns that are
different from the training dataset. For the digital respiratory phantom with
regular and irregular breathing, the average 3D tumor localization error is
less than 1 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time
for 3D tumor localization from each projection ranges between 0.19 and 0.26
seconds, for both regular and irregular breathing, which is about a 10%
improvement over previously reported results. For the physical respiratory
phantom, an average tumor localization error below 1 mm was achieved with an
average computation time of 0.13 and 0.16 seconds on the same GPU card, for
regular and irregular breathing, respectively. For the five lung cancer
patients, the average tumor localization error is below 2 mm in both the axial
and tangential directions. The average computation time on the same GPU card
ranges between 0.26 and 0.34 seconds.
| [
{
"version": "v1",
"created": "Tue, 8 Feb 2011 20:33:00 GMT"
}
] | 2015-05-27T00:00:00 | [
[
"Li",
"Ruijiang",
""
],
[
"Lewis",
"John H.",
""
],
[
"Jia",
"Xun",
""
],
[
"Gu",
"Xuejun",
""
],
[
"Folkerts",
"Michael",
""
],
[
"Men",
"Chunhua",
""
],
[
"Song",
"William Y.",
""
],
[
"Jiang",
"Steve B.",
""
]
] | TITLE: 3D tumor localization through real-time volumetric x-ray imaging for
lung cancer radiotherapy
ABSTRACT: Recently we have developed an algorithm for reconstructing volumetric images
and extracting 3D tumor motion information from a single x-ray projection. We
have demonstrated its feasibility using a digital respiratory phantom with
regular breathing patterns. In this work, we present a detailed description and
a comprehensive evaluation of the improved algorithm. The algorithm was
improved by incorporating respiratory motion prediction. The accuracy and
efficiency were then evaluated on 1) a digital respiratory phantom, 2) a
physical respiratory phantom, and 3) five lung cancer patients. These
evaluation cases include both regular and irregular breathing patterns that are
different from the training dataset. For the digital respiratory phantom with
regular and irregular breathing, the average 3D tumor localization error is
less than 1 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time
for 3D tumor localization from each projection ranges between 0.19 and 0.26
seconds, for both regular and irregular breathing, which is about a 10%
improvement over previously reported results. For the physical respiratory
phantom, an average tumor localization error below 1 mm was achieved with an
average computation time of 0.13 and 0.16 seconds on the same GPU card, for
regular and irregular breathing, respectively. For the five lung cancer
patients, the average tumor localization error is below 2 mm in both the axial
and tangential directions. The average computation time on the same GPU card
ranges between 0.26 and 0.34 seconds.
| no_new_dataset | 0.951504 |
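
The abstract mentions incorporating respiratory motion prediction without specifying the predictor; as a purely illustrative stand-in (not the paper's method), a linear autoregressive one-step predictor for a 1-D breathing trace:

```python
import numpy as np

def ar_predict(trace, order=4):
    """Least-squares linear AR model; predicts the next sample of the trace."""
    X = np.array([trace[i:i + order] for i in range(len(trace) - order)])
    y = np.asarray(trace[order:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.dot(trace[-order:], coef))

t = np.linspace(0, 10, 200)
breathing = np.sin(2 * np.pi * 0.25 * t)   # hypothetical regular breathing trace
print(ar_predict(breathing))               # close to the true next sample
```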