id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1407.4896 | Hang-Hyun Jo | Hang-Hyun Jo, Jari Saram\"aki, Robin I. M. Dunbar, and Kimmo Kaski | Spatial patterns of close relationships across the lifespan | 9 pages, 7 figures | Scientific Reports 4, 6988 (2014) | 10.1038/srep06988 | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of close relationships is important for understanding the
migration patterns of individual life-courses. The bottom-up approach to this
subject by social scientists has been limited by sample size, while the more
recent top-down approach using large-scale datasets suffers from a lack of
detail about the human individuals. We incorporate the geographic and
demographic information of millions of mobile phone users with their
communication patterns to study the dynamics of close relationships and its
effect on their life-course migration. We demonstrate how the close age- and
sex-biased dyadic relationships are correlated with the geographic proximity of
the pair of individuals, e.g., young couples tend to live further from each
other than old couples. In addition, we find that emotionally closer pairs are
living geographically closer to each other. These findings imply that the
life-course framework is crucial for understanding the complex dynamics of
close relationships and their effect on the migration patterns of human
individuals.
| [
{
"version": "v1",
"created": "Fri, 18 Jul 2014 06:46:45 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Sep 2014 06:12:54 GMT"
}
] | 2014-11-12T00:00:00 | [
[
"Jo",
"Hang-Hyun",
""
],
[
"Saramäki",
"Jari",
""
],
[
"Dunbar",
"Robin I. M.",
""
],
[
"Kaski",
"Kimmo",
""
]
] | TITLE: Spatial patterns of close relationships across the lifespan
ABSTRACT: The dynamics of close relationships is important for understanding the
migration patterns of individual life-courses. The bottom-up approach to this
subject by social scientists has been limited by sample size, while the more
recent top-down approach using large-scale datasets suffers from a lack of
detail about the human individuals. We incorporate the geographic and
demographic information of millions of mobile phone users with their
communication patterns to study the dynamics of close relationships and its
effect on their life-course migration. We demonstrate how the close age- and
sex-biased dyadic relationships are correlated with the geographic proximity of
the pair of individuals, e.g., young couples tend to live further from each
other than old couples. In addition, we find that emotionally closer pairs are
living geographically closer to each other. These findings imply that the
life-course framework is crucial for understanding the complex dynamics of
close relationships and their effect on the migration patterns of human
individuals.
| no_new_dataset | 0.943919 |
1411.2795 | Nitesh Kumar Chaudhary | Nitesh Kumar Chaudhary | Speaker Identification From Youtube Obtained Data | 7 pages, 5 figures, 1 Table, Signal & Image Processing : An
International Journal (SIPIJ) Vol.5, No.5, October 2014 | null | 10.5121/sipij.2014.5503 | null | cs.SD cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An efficient and intuitive algorithm is presented for the identification of
speakers in a long dataset (such as a long YouTube discussion or a cocktail
party recorded as audio or video). The goal of automatic speaker identification
is to identify the number of different speakers and to prepare a model for each
speaker by extracting and characterizing the speaker-specific information
contained in the speech signal. It has many diverse applications, especially in
surveillance, immigration control at airports, cyber security, and
transcription of multiple similar sound sources, where it is difficult to
assign transcriptions arbitrarily. The most common speech parametrizations used
in speaker verification, K-means and cepstral analysis, are detailed. Gaussian
mixture modeling, the speaker modeling technique used here, is then explained.
Gaussian mixture models (GMMs), perhaps the most robust machine learning
algorithm for this task, are introduced to examine text-independent speaker
identification. The use of Gaussian mixture models for monitoring and analysing
speaker identity is motivated by the empirical observation that the Gaussian
spectrum captures the characteristics of a speaker's spectral pattern, and by
the remarkable ability of GMMs to model arbitrary densities. We then illustrate
'Expectation Maximization', an iterative algorithm that starts from an
arbitrary initial estimate and iterates until convergence. Across a number of
experiments we obtain identification rates of 79~82% using vector quantization
and 85~92.6% using GMM modeling with Expectation Maximization parameter
estimation, depending on the parameter variation.
| [
{
"version": "v1",
"created": "Tue, 11 Nov 2014 13:20:19 GMT"
}
] | 2014-11-12T00:00:00 | [
[
"Chaudhary",
"Nitesh Kumar",
""
]
] | TITLE: Speaker Identification From Youtube Obtained Data
ABSTRACT: An efficient and intuitive algorithm is presented for the
identification of speakers in a long dataset (such as a long YouTube discussion
or a cocktail party recorded as audio or video). The goal of automatic speaker
identification is to identify the number of different speakers and to prepare a
model for each speaker by extracting and characterizing the speaker-specific
information contained in the speech signal. It has many diverse applications,
especially in surveillance, immigration control at airports, cyber security,
and transcription of multiple similar sound sources, where it is difficult to
assign transcriptions arbitrarily. The most common speech parametrizations used
in speaker verification, K-means and cepstral analysis, are detailed. Gaussian
mixture modeling, the speaker modeling technique used here, is then explained.
Gaussian mixture models (GMMs), perhaps the most robust machine learning
algorithm for this task, are introduced to examine text-independent speaker
identification. The use of Gaussian mixture models for monitoring and analysing
speaker identity is motivated by the empirical observation that the Gaussian
spectrum captures the characteristics of a speaker's spectral pattern, and by
the remarkable ability of GMMs to model arbitrary densities. We then illustrate
'Expectation Maximization', an iterative algorithm that starts from an
arbitrary initial estimate and iterates until convergence. Across a number of
experiments we obtain identification rates of 79~82% using vector quantization
and 85~92.6% using GMM modeling with Expectation Maximization parameter
estimation, depending on the parameter variation.
| no_new_dataset | 0.949856 |
1411.2821 | Saeed Afshar | Saeed Afshar, Libin George, Jonathan Tapson, Andre van Schaik, Philip
de Chazal, Tara Julia Hamilton | Turn Down that Noise: Synaptic Encoding of Afferent SNR in a Single
Spiking Neuron | null | null | null | null | cs.NE q-bio.NC | http://creativecommons.org/licenses/by/3.0/ | We have added a simplified neuromorphic model of Spike Time Dependent
Plasticity (STDP) to the Synapto-dendritic Kernel Adapting Neuron (SKAN). The
resulting neuron model is the first to show synaptic encoding of afferent
signal to noise ratio in addition to the unsupervised learning of spatio
temporal spike patterns. The neuron model is particularly suitable for
implementation in digital neuromorphic hardware as it does not use any complex
mathematical operations and uses a novel approach to achieve synaptic
homeostasis. The neuron's noise compensation properties are characterized and
tested on noise-corrupted zero digits of the MNIST handwritten digit dataset.
Results show the neuron simultaneously learning common patterns in its input
data while dynamically weighting individual afferent channels based on their
signal to noise ratio. Despite its simplicity, the interesting behaviors of the
neuron
model and the resulting computational power may offer insights into biological
systems.
| [
{
"version": "v1",
"created": "Tue, 11 Nov 2014 14:22:37 GMT"
}
] | 2014-11-12T00:00:00 | [
[
"Afshar",
"Saeed",
""
],
[
"George",
"Libin",
""
],
[
"Tapson",
"Jonathan",
""
],
[
"van Schaik",
"Andre",
""
],
[
"de Chazal",
"Philip",
""
],
[
"Hamilton",
"Tara Julia",
""
]
] | TITLE: Turn Down that Noise: Synaptic Encoding of Afferent SNR in a Single
Spiking Neuron
ABSTRACT: We have added a simplified neuromorphic model of Spike Time Dependent
Plasticity (STDP) to the Synapto-dendritic Kernel Adapting Neuron (SKAN). The
resulting neuron model is the first to show synaptic encoding of afferent
signal to noise ratio in addition to the unsupervised learning of spatio
temporal spike patterns. The neuron model is particularly suitable for
implementation in digital neuromorphic hardware as it does not use any complex
mathematical operations and uses a novel approach to achieve synaptic
homeostasis. The neuron's noise compensation properties are characterized and
tested on noise-corrupted zero digits of the MNIST handwritten digit dataset.
Results show the neuron simultaneously learning common patterns in its input
data while dynamically weighting individual afferent channels based on their
signal to noise ratio. Despite its simplicity, the interesting behaviors of the
neuron
model and the resulting computational power may offer insights into biological
systems.
| no_new_dataset | 0.950319 |
1401.1456 | Margarita Karkali | Margarita Karkali, Francois Rousseau, Alexandros Ntoulas, Michalis
Vazirgiannis | Using temporal IDF for efficient novelty detection in text streams | 30 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Novelty detection in text streams is a challenging task that emerges in quite
a few different scenarios, ranging from email thread filtering to RSS news feed
recommendation on a smartphone. An efficient novelty detection algorithm can
save the user a great deal of time and resources when browsing through relevant
yet usually previously-seen content. Most of the recent research on detection
of novel documents in text streams has been building upon either geometric
distances or distributional similarities, with the former typically performing
better but being much slower due to the need of comparing an incoming document
with all the previously-seen ones. In this paper, we propose a new approach to
novelty detection in text streams. We describe a resource-aware mechanism that
is able to handle massive text streams such as the ones present today thanks to
the burst of social media and the emergence of the Web as the main source of
information. We capitalize on the historical Inverse Document Frequency (IDF)
that was known for capturing well term specificity and we show that it can be
used successfully at the document level as a measure of document novelty. This
enables us to avoid similarity comparisons with previous documents in the text
stream, thus scaling better and leading to faster execution times. Moreover, as
the collection of documents evolves over time, we use a temporal variant of IDF
not only to maintain an efficient representation of what has already been seen
but also to decay the document frequencies as the time goes by. We evaluate the
performance of the proposed approach on a real-world news articles dataset
created for this task. The results show that the proposed method outperforms
all of the baselines while managing to operate efficiently in terms of time
complexity and memory usage, which are of great importance in a mobile setting
scenario.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2014 17:43:37 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Nov 2014 15:58:35 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Karkali",
"Margarita",
""
],
[
"Rousseau",
"Francois",
""
],
[
"Ntoulas",
"Alexandros",
""
],
[
"Vazirgiannis",
"Michalis",
""
]
] | TITLE: Using temporal IDF for efficient novelty detection in text streams
ABSTRACT: Novelty detection in text streams is a challenging task that emerges in quite
a few different scenarios, ranging from email thread filtering to RSS news feed
recommendation on a smartphone. An efficient novelty detection algorithm can
save the user a great deal of time and resources when browsing through relevant
yet usually previously-seen content. Most of the recent research on detection
of novel documents in text streams has been building upon either geometric
distances or distributional similarities, with the former typically performing
better but being much slower due to the need of comparing an incoming document
with all the previously-seen ones. In this paper, we propose a new approach to
novelty detection in text streams. We describe a resource-aware mechanism that
is able to handle massive text streams such as the ones present today thanks to
the burst of social media and the emergence of the Web as the main source of
information. We capitalize on the historical Inverse Document Frequency (IDF)
that was known for capturing well term specificity and we show that it can be
used successfully at the document level as a measure of document novelty. This
enables us to avoid similarity comparisons with previous documents in the text
stream, thus scaling better and leading to faster execution times. Moreover, as
the collection of documents evolves over time, we use a temporal variant of IDF
not only to maintain an efficient representation of what has already been seen
but also to decay the document frequencies as the time goes by. We evaluate the
performance of the proposed approach on a real-world news articles dataset
created for this task. The results show that the proposed method outperforms
all of the baselines while managing to operate efficiently in terms of time
complexity and memory usage, which are of great importance in a mobile setting
scenario.
| new_dataset | 0.904059 |
1406.4607 | Camellia Sarkar | Sarika Jalan, Camellia Sarkar, Anagha Madhusudanan, Sanjiv Kumar
Dwivedi | Uncovering Randomness and Success in Society | 39 pages, 12 figures, 14 tables | PloS one, 9(2), e88249 (2014) | 10.1371/journal.pone.0088249 | null | physics.soc-ph cs.SI nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An understanding of how individuals shape and impact the evolution of society
is vastly limited due to the unavailability of large-scale reliable datasets
that can simultaneously capture information regarding individual movements and
social interactions. We believe that the popular Indian film industry,
'Bollywood', can provide a social network apt for such a study. Bollywood
provides massive amounts of real, unbiased data that spans more than 100 years,
and hence this network has been used as a model for the present paper. The
nodes which maintain a moderate degree or widely cooperate with the other nodes
of the network tend to be more fit (measured as the success of the node in the
industry) in comparison to the other nodes. The analysis carried forth in the
current work, using a conjoined framework of complex network theory and random
matrix theory, aims to quantify the elements that determine the fitness of an
individual node and the factors that contribute to the robustness of a network.
The authors of this paper believe that the method of study used in the current
paper can be extended to study various other industries and organizations.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2014 05:48:54 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jun 2014 08:21:32 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Jul 2014 10:54:39 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Jalan",
"Sarika",
""
],
[
"Sarkar",
"Camellia",
""
],
[
"Madhusudanan",
"Anagha",
""
],
[
"Dwivedi",
"Sanjiv Kumar",
""
]
] | TITLE: Uncovering Randomness and Success in Society
ABSTRACT: An understanding of how individuals shape and impact the evolution of society
is vastly limited due to the unavailability of large-scale reliable datasets
that can simultaneously capture information regarding individual movements and
social interactions. We believe that the popular Indian film industry,
'Bollywood', can provide a social network apt for such a study. Bollywood
provides massive amounts of real, unbiased data that spans more than 100 years,
and hence this network has been used as a model for the present paper. The
nodes which maintain a moderate degree or widely cooperate with the other nodes
of the network tend to be more fit (measured as the success of the node in the
industry) in comparison to the other nodes. The analysis carried forth in the
current work, using a conjoined framework of complex network theory and random
matrix theory, aims to quantify the elements that determine the fitness of an
individual node and the factors that contribute to the robustness of a network.
The authors of this paper believe that the method of study used in the current
paper can be extended to study various other industries and organizations.
| no_new_dataset | 0.942665 |
1409.4899 | Michael Schreiber | Michael Schreiber | Is the new citation-rank approach P100' in bibliometrics really new? | 11 pages, 4 figures, 5 tables | Journal of Informetrics 8, 997-1004 (2014) | 10.1016/j.joi.2014.10.001 | null | cs.DL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The percentile-based rating scale P100 describes the citation impact in terms
of the distribution of unique citation values. This approach has recently been
refined by considering also the frequency of papers with the same citation
counts. Here I compare the resulting P100' with P100 for an empirical dataset
and a simple fictitious model dataset. It is shown that P100' is not much
different from standard percentile-based ratings in terms of citation
frequencies. A new indicator P100'' is introduced.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 08:22:32 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: Is the new citation-rank approach P100' in bibliometrics really new?
ABSTRACT: The percentile-based rating scale P100 describes the citation impact in terms
of the distribution of unique citation values. This approach has recently been
refined by considering also the frequency of papers with the same citation
counts. Here I compare the resulting P100' with P100 for an empirical dataset
and a simple fictitious model dataset. It is shown that P100' is not much
different from standard percentile-based ratings in terms of citation
frequencies. A new indicator P100'' is introduced.
| new_dataset | 0.961207 |
1411.2153 | Simone Cirillo | Simone Cirillo, Stefan Lloyd, Peter Nordin | Evolving intraday foreign exchange trading strategies utilizing multiple
instruments price series | 15 pages, 10 figures, 9 tables | null | null | null | cs.NE q-fin.TR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Genetic Programming architecture for the generation of foreign
exchange trading strategies. The system's principal features are the evolution
of free-form strategies which do not rely on any prior models and the
utilization of price series from multiple instruments as input data. This
latter feature constitutes an innovation with respect to previous works
documented in the literature. In this article we utilize Open, High, Low, Close
bar data at a 5-minute frequency for the AUD.USD, EUR.USD, GBP.USD and USD.JPY
currency pairs. We will test the implementation analyzing the in-sample and
out-of-sample performance of strategies for trading the USD.JPY obtained across
multiple algorithm runs. We will also evaluate the differences between
strategies selected according to two different criteria: one relies on the
fitness obtained on the training set only, the second one makes use of an
additional validation dataset. Strategy activity and trade accuracy are
remarkably stable between in and out of sample results. From a profitability
aspect, the two criteria both result in strategies successful on out-of-sample
data but exhibiting different characteristics. The overall best performing
out-of-sample strategy achieves a yearly return of 19%.
| [
{
"version": "v1",
"created": "Sat, 8 Nov 2014 19:22:55 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Cirillo",
"Simone",
""
],
[
"Lloyd",
"Stefan",
""
],
[
"Nordin",
"Peter",
""
]
] | TITLE: Evolving intraday foreign exchange trading strategies utilizing multiple
instruments price series
ABSTRACT: We propose a Genetic Programming architecture for the generation of foreign
exchange trading strategies. The system's principal features are the evolution
of free-form strategies which do not rely on any prior models and the
utilization of price series from multiple instruments as input data. This
latter feature constitutes an innovation with respect to previous works
documented in the literature. In this article we utilize Open, High, Low, Close
bar data at a 5-minute frequency for the AUD.USD, EUR.USD, GBP.USD and USD.JPY
currency pairs. We will test the implementation analyzing the in-sample and
out-of-sample performance of strategies for trading the USD.JPY obtained across
multiple algorithm runs. We will also evaluate the differences between
strategies selected according to two different criteria: one relies on the
fitness obtained on the training set only, the second one makes use of an
additional validation dataset. Strategy activity and trade accuracy are
remarkably stable between in and out of sample results. From a profitability
aspect, the two criteria both result in strategies successful on out-of-sample
data but exhibiting different characteristics. The overall best performing
out-of-sample strategy achieves a yearly return of 19%.
| no_new_dataset | 0.945551 |
1411.2173 | Julieta Martinez | Julieta Martinez, Holger H. Hoos, James J. Little | Stacked Quantizers for Compositional Vector Compression | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a
generalization of Product Quantization (PQ) where a non-independent set of
codebooks is used to compress vectors into small binary codes. Unfortunately,
under this scheme encoding cannot be done independently in each codebook, and
optimal encoding is an NP-hard problem. In this paper, we observe that PQ and
AQ are both compositional quantizers that lie on the extremes of the codebook
dependence-independence assumption, and explore an intermediate approach that
exploits a hierarchical structure in the codebooks. This results in a method
that achieves quantization error on par with or lower than AQ, while being
several orders of magnitude faster. We perform a complexity analysis of PQ, AQ
and our method, and evaluate our approach on standard benchmarks of SIFT and
GIST descriptors, as well as on new datasets of features obtained from
state-of-the-art convolutional neural networks.
| [
{
"version": "v1",
"created": "Sat, 8 Nov 2014 22:51:12 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Martinez",
"Julieta",
""
],
[
"Hoos",
"Holger H.",
""
],
[
"Little",
"James J.",
""
]
] | TITLE: Stacked Quantizers for Compositional Vector Compression
ABSTRACT: Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a
generalization of Product Quantization (PQ) where a non-independent set of
codebooks is used to compress vectors into small binary codes. Unfortunately,
under this scheme encoding cannot be done independently in each codebook, and
optimal encoding is an NP-hard problem. In this paper, we observe that PQ and
AQ are both compositional quantizers that lie on the extremes of the codebook
dependence-independence assumption, and explore an intermediate approach that
exploits a hierarchical structure in the codebooks. This results in a method
that achieves quantization error on par with or lower than AQ, while being
several orders of magnitude faster. We perform a complexity analysis of PQ, AQ
and our method, and evaluate our approach on standard benchmarks of SIFT and
GIST descriptors, as well as on new datasets of features obtained from
state-of-the-art convolutional neural networks.
| no_new_dataset | 0.9434 |
1411.2214 | Babak Saleh | Babak Saleh, Ali Farhadi, Ahmed Elgammal | Abnormal Object Recognition: A Comprehensive Study | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When describing images, humans tend not to talk about the obvious, but rather
mention what they find interesting. We argue that abnormalities and deviations
from typicalities are among the most important components that form what is
worth mentioning. In this paper we introduce abnormality detection as a
recognition problem and show how to model typicalities and, consequently,
meaningful deviations from prototypical properties of categories. Our model can
recognize abnormalities and report the main reasons of any recognized
abnormality. We introduce the abnormality detection dataset and show
interesting results on how to reason about abnormalities.
| [
{
"version": "v1",
"created": "Sun, 9 Nov 2014 09:51:06 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Saleh",
"Babak",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Abnormal Object Recognition: A Comprehensive Study
ABSTRACT: When describing images, humans tend not to talk about the obvious, but rather
mention what they find interesting. We argue that abnormalities and deviations
from typicalities are among the most important components that form what is
worth mentioning. In this paper we introduce abnormality detection as a
recognition problem and show how to model typicalities and, consequently,
meaningful deviations from prototypical properties of categories. Our model can
recognize abnormalities and report the main reasons of any recognized
abnormality. We introduce the abnormality detection dataset and show
interesting results on how to reason about abnormalities.
| new_dataset | 0.950549 |
1411.2331 | Makoto Yamada | Makoto Yamada, Avishek Saha, Hua Ouyang, Dawei Yin, Yi Chang | N$^3$LARS: Minimum Redundancy Maximum Relevance Feature Selection for
Large and High-dimensional Data | arXiv admin note: text overlap with arXiv:1202.0515 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a feature selection method that finds non-redundant features from
large and high-dimensional data in a nonlinear way. Specifically, we propose a
nonlinear extension of the non-negative least-angle regression (LARS) called
N${}^3$LARS, where the similarity between input and output is measured through
the normalized version of the Hilbert-Schmidt Independence Criterion (HSIC). An
advantage of N${}^3$LARS is that it can easily be incorporated into map-reduce
frameworks such as Hadoop and Spark. Thus, with the help of distributed
computing, a set of features can be efficiently selected from a large and
high-dimensional data. Moreover, N${}^3$LARS is a convex method and can find a
global optimum solution. The effectiveness of the proposed method is first
demonstrated through feature selection experiments for classification and
regression with small and high-dimensional datasets. Finally, we evaluate our
proposed method over a large and high-dimensional biology dataset.
| [
{
"version": "v1",
"created": "Mon, 10 Nov 2014 05:43:28 GMT"
}
] | 2014-11-11T00:00:00 | [
[
"Yamada",
"Makoto",
""
],
[
"Saha",
"Avishek",
""
],
[
"Ouyang",
"Hua",
""
],
[
"Yin",
"Dawei",
""
],
[
"Chang",
"Yi",
""
]
] | TITLE: N$^3$LARS: Minimum Redundancy Maximum Relevance Feature Selection for
Large and High-dimensional Data
ABSTRACT: We propose a feature selection method that finds non-redundant features from
large and high-dimensional data in a nonlinear way. Specifically, we propose a
nonlinear extension of the non-negative least-angle regression (LARS) called
N${}^3$LARS, where the similarity between input and output is measured through
the normalized version of the Hilbert-Schmidt Independence Criterion (HSIC). An
advantage of N${}^3$LARS is that it can easily be incorporated into map-reduce
frameworks such as Hadoop and Spark. Thus, with the help of distributed
computing, a set of features can be efficiently selected from a large and
high-dimensional data. Moreover, N${}^3$LARS is a convex method and can find a
global optimum solution. The effectiveness of the proposed method is first
demonstrated through feature selection experiments for classification and
regression with small and high-dimensional datasets. Finally, we evaluate our
proposed method over a large and high-dimensional biology dataset.
| no_new_dataset | 0.949389 |
1402.3902 | Karthikeyan Shanmugam | Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis and Adam
Klivans | Sparse Polynomial Learning and Graph Sketching | 14 pages; to appear in NIPS 2014; updated proof of Theorem 5 and some
other minor changes during revision | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let $f:\{-1,1\}^n \to \mathbb{R}$ be a polynomial with at most $s$ non-zero real
coefficients. We give an algorithm for exactly reconstructing f given random
examples from the uniform distribution on $\{-1,1\}^n$ that runs in time
polynomial in $n$ and $2^s$ and succeeds if the function satisfies the unique
sign property: there is one output value which corresponds to a unique set of
values of the participating parities. This sufficient condition is satisfied
when every coefficient of f is perturbed by a small random noise, or satisfied
with high probability when s parity functions are chosen randomly or when all
the coefficients are positive. Learning sparse polynomials over the Boolean
domain in time polynomial in $n$ and $2^s$ is considered notoriously hard in the
worst-case. Our result shows that the problem is tractable for almost all
sparse polynomials. Then, we show an application of this result to hypergraph
sketching which is the problem of learning a sparse (both in the number of
hyperedges and the size of the hyperedges) hypergraph from uniformly drawn
random cuts. We also provide experimental results on a real world dataset.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2014 06:00:16 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2014 06:56:27 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Nov 2014 22:35:40 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Nov 2014 03:00:28 GMT"
}
] | 2014-11-10T00:00:00 | [
[
"Kocaoglu",
"Murat",
""
],
[
"Shanmugam",
"Karthikeyan",
""
],
[
"Dimakis",
"Alexandros G.",
""
],
[
"Klivans",
"Adam",
""
]
] | TITLE: Sparse Polynomial Learning and Graph Sketching
ABSTRACT: Let $f:\{-1,1\}^n \to \mathbb{R}$ be a polynomial with at most $s$ non-zero real
coefficients. We give an algorithm for exactly reconstructing f given random
examples from the uniform distribution on $\{-1,1\}^n$ that runs in time
polynomial in $n$ and $2^s$ and succeeds if the function satisfies the unique
sign property: there is one output value which corresponds to a unique set of
values of the participating parities. This sufficient condition is satisfied
when every coefficient of f is perturbed by a small random noise, or satisfied
with high probability when s parity functions are chosen randomly or when all
the coefficients are positive. Learning sparse polynomials over the Boolean
domain in time polynomial in $n$ and $2^s$ is considered notoriously hard in the
worst-case. Our result shows that the problem is tractable for almost all
sparse polynomials. Then, we show an application of this result to hypergraph
sketching which is the problem of learning a sparse (both in the number of
hyperedges and the size of the hyperedges) hypergraph from uniformly drawn
random cuts. We also provide experimental results on a real world dataset.
| no_new_dataset | 0.947186 |
1402.7015 | Fabian Pedregosa | Fabian Pedregosa (INRIA Saclay - Ile de France, INRIA Paris -
Rocquencourt), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO),
Philippe Ciuciu (INRIA Saclay - Ile de France, NEUROSPIN), Bertrand Thirion
(INRIA Saclay - Ile de France, NEUROSPIN), Alexandre Gramfort (LTCI) | Data-driven HRF estimation for encoding and decoding models | appears in NeuroImage (2015) | null | 10.1016/j.neuroimage.2014.09.060 | null | cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the common usage of a canonical, data-independent, hemodynamic
response function (HRF), it is known that the shape of the HRF varies across
brain regions and subjects. This suggests that a data-driven estimation of this
function could lead to more statistical power when modeling BOLD fMRI data.
However, unconstrained estimation of the HRF can yield highly unstable results
when the number of free parameters is large. We develop a method for the joint
estimation of activation and HRF using a rank constraint causing the estimated
HRF to be equal across events/conditions, yet permitting it to be different
across voxels. Model estimation leads to an optimization problem that we
propose to solve with an efficient quasi-Newton method exploiting fast gradient
computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be
extended to the setting of GLM with separate designs which has been shown to
improve decoding accuracy in brain activity decoding experiments. We compare 10
different HRF modeling methods in terms of encoding and decoding score in two
different datasets. Our results show that the R1-GLM model significantly
outperforms competing methods in both encoding and decoding settings,
positioning it as an attractive method both from the points of view of accuracy
and computational efficiency.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2014 18:50:58 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2014 06:11:17 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Jul 2014 11:14:00 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Oct 2014 16:39:55 GMT"
},
{
"version": "v5",
"created": "Fri, 31 Oct 2014 13:47:01 GMT"
},
{
"version": "v6",
"created": "Fri, 7 Nov 2014 11:27:19 GMT"
}
] | 2014-11-10T00:00:00 | [
[
"Pedregosa",
"Fabian",
"",
"INRIA Saclay - Ile de France, INRIA Paris -\n Rocquencourt"
],
[
"Eickenberg",
"Michael",
"",
"INRIA Saclay - Ile de France, LNAO"
],
[
"Ciuciu",
"Philippe",
"",
"INRIA Saclay - Ile de France, NEUROSPIN"
],
[
"Thirion",
"Bertrand",
"",
"INRIA Saclay - Ile de France, NEUROSPIN"
],
[
"Gramfort",
"Alexandre",
"",
"LTCI"
]
] | TITLE: Data-driven HRF estimation for encoding and decoding models
ABSTRACT: Despite the common usage of a canonical, data-independent, hemodynamic
response function (HRF), it is known that the shape of the HRF varies across
brain regions and subjects. This suggests that a data-driven estimation of this
function could lead to more statistical power when modeling BOLD fMRI data.
However, unconstrained estimation of the HRF can yield highly unstable results
when the number of free parameters is large. We develop a method for the joint
estimation of activation and HRF using a rank constraint causing the estimated
HRF to be equal across events/conditions, yet permitting it to be different
across voxels. Model estimation leads to an optimization problem that we
propose to solve with an efficient quasi-Newton method exploiting fast gradient
computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be
extended to the setting of GLM with separate designs which has been shown to
improve decoding accuracy in brain activity decoding experiments. We compare 10
different HRF modeling methods in terms of encoding and decoding score in two
different datasets. Our results show that the R1-GLM model significantly
outperforms competing methods in both encoding and decoding settings,
positioning it as an attractive method both from the points of view of accuracy
and computational efficiency.
| no_new_dataset | 0.943348 |
1411.1509 | Zetao Chen | Zetao Chen, Obadiah Lam, Adam Jacobson and Michael Milford | Convolutional Neural Network-based Place Recognition | 8 pages, 11 figures, this paper has been accepted by 2014
Australasian Conference on Robotics and Automation (ACRA 2014) to be held in
University of Melbourne, Dec 2~4 | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently Convolutional Neural Networks (CNNs) have been shown to achieve
state-of-the-art performance on various classification tasks. In this paper, we
present for the first time a place recognition technique based on CNN models,
by combining the powerful features learnt by CNNs with a spatial and sequential
filter. Applying the system to a 70 km benchmark place recognition dataset we
achieve a 75% increase in recall at 100% precision, significantly outperforming
all previous state of the art techniques. We also conduct a comprehensive
performance comparison of the utility of features from all 21 layers for place
recognition, both for the benchmark dataset and for a second dataset with more
significant viewpoint changes.
| [
{
"version": "v1",
"created": "Thu, 6 Nov 2014 07:03:15 GMT"
}
] | 2014-11-07T00:00:00 | [
[
"Chen",
"Zetao",
""
],
[
"Lam",
"Obadiah",
""
],
[
"Jacobson",
"Adam",
""
],
[
"Milford",
"Michael",
""
]
] | TITLE: Convolutional Neural Network-based Place Recognition
ABSTRACT: Recently Convolutional Neural Networks (CNNs) have been shown to achieve
state-of-the-art performance on various classification tasks. In this paper, we
present for the first time a place recognition technique based on CNN models,
by combining the powerful features learnt by CNNs with a spatial and sequential
filter. Applying the system to a 70 km benchmark place recognition dataset we
achieve a 75% increase in recall at 100% precision, significantly outperforming
all previous state of the art techniques. We also conduct a comprehensive
performance comparison of the utility of features from all 21 layers for place
recognition, both for the benchmark dataset and for a second dataset with more
significant viewpoint changes.
| no_new_dataset | 0.954009 |
1411.1623 | Siddharth Sigtia | Siddharth Sigtia, Emmanouil Benetos, Nicolas Boulanger-Lewandowski,
Tillman Weyde, Artur S. d'Avila Garcez, Simon Dixon | A Hybrid Recurrent Neural Network For Music Transcription | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of incorporating higher-level symbolic score-like
information into Automatic Music Transcription (AMT) systems to improve their
performance. We use recurrent neural networks (RNNs) and their variants as
music language models (MLMs) and present a generative architecture for
combining these models with predictions from a frame level acoustic classifier.
We also compare different neural network architectures for acoustic modeling.
The proposed model computes a distribution over possible output sequences given
the acoustic input signal and we present an algorithm for performing a global
search for good candidate transcriptions. The performance of the proposed model
is evaluated on piano music from the MAPS dataset and we observe that the
proposed model consistently outperforms existing transcription methods.
| [
{
"version": "v1",
"created": "Thu, 6 Nov 2014 14:18:39 GMT"
}
] | 2014-11-07T00:00:00 | [
[
"Sigtia",
"Siddharth",
""
],
[
"Benetos",
"Emmanouil",
""
],
[
"Boulanger-Lewandowski",
"Nicolas",
""
],
[
"Weyde",
"Tillman",
""
],
[
"Garcez",
"Artur S. d'Avila",
""
],
[
"Dixon",
"Simon",
""
]
] | TITLE: A Hybrid Recurrent Neural Network For Music Transcription
ABSTRACT: We investigate the problem of incorporating higher-level symbolic score-like
information into Automatic Music Transcription (AMT) systems to improve their
performance. We use recurrent neural networks (RNNs) and their variants as
music language models (MLMs) and present a generative architecture for
combining these models with predictions from a frame level acoustic classifier.
We also compare different neural network architectures for acoustic modeling.
The proposed model computes a distribution over possible output sequences given
the acoustic input signal and we present an algorithm for performing a global
search for good candidate transcriptions. The performance of the proposed model
is evaluated on piano music from the MAPS dataset and we observe that the
proposed model consistently outperforms existing transcription methods.
| no_new_dataset | 0.947817 |
1411.1705 | Yuanyi Xue | Yuanyi Xue and Beril Erkin and Yao Wang | A Novel No-reference Video Quality Metric for Evaluating Temporal
Jerkiness due to Frame Freezing | null | null | 10.1109/TMM.2014.2368272 | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel no-reference (NR) video quality metric that
evaluates the impact of frame freezing due to either packet loss or late
arrival. Our metric uses a trained neural network acting on features that are
chosen to capture the impact of frame freezing on the perceived quality. The
considered features include the number of freezes, freeze duration statistics,
inter-freeze distance statistics, frame difference before and after the freeze,
normal frame difference, and the ratio of them. We use the neural network to
find the mapping between features and subjective test scores. We optimize the
network structure and the feature selection through a cross validation
procedure, using training samples extracted from both VQEG and LIVE video
databases. The resulting feature set and network structure yields accurate
quality prediction for both the training data containing 54 test videos and a
separate testing dataset including 14 videos, with Pearson Correlation
Coefficients greater than 0.9 and 0.8 for the training set and the testing set,
respectively. Our proposed metric has low complexity and could be utilized in a
system with real-time processing constraints.
| [
{
"version": "v1",
"created": "Wed, 5 Nov 2014 16:29:30 GMT"
}
] | 2014-11-07T00:00:00 | [
[
"Xue",
"Yuanyi",
""
],
[
"Erkin",
"Beril",
""
],
[
"Wang",
"Yao",
""
]
] | TITLE: A Novel No-reference Video Quality Metric for Evaluating Temporal
Jerkiness due to Frame Freezing
ABSTRACT: In this work, we propose a novel no-reference (NR) video quality metric that
evaluates the impact of frame freezing due to either packet loss or late
arrival. Our metric uses a trained neural network acting on features that are
chosen to capture the impact of frame freezing on the perceived quality. The
considered features include the number of freezes, freeze duration statistics,
inter-freeze distance statistics, frame difference before and after the freeze,
normal frame difference, and the ratio of them. We use the neural network to
find the mapping between features and subjective test scores. We optimize the
network structure and the feature selection through a cross validation
procedure, using training samples extracted from both VQEG and LIVE video
databases. The resulting feature set and network structure yields accurate
quality prediction for both the training data containing 54 test videos and a
separate testing dataset including 14 videos, with Pearson Correlation
Coefficients greater than 0.9 and 0.8 for the training set and the testing set,
respectively. Our proposed metric has low complexity and could be utilized in a
system with real-time processing constraints.
| new_dataset | 0.94743 |
1404.7584 | Jo\~ao F. Henriques | Jo\~ao F. Henriques, Rui Caseiro, Pedro Martins, Jorge Batista | High-Speed Tracking with Kernelized Correlation Filters | null | null | 10.1109/TPAMI.2014.2345390 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The core component of most modern trackers is a discriminative classifier,
tasked with distinguishing between the target and the surrounding environment.
To cope with natural image changes, this classifier is typically trained with
translated and scaled sample patches. Such sets of samples are riddled with
redundancies -- any overlapping pixels are constrained to be the same. Based on
this simple observation, we propose an analytic model for datasets of thousands
of translated patches. By showing that the resulting data matrix is circulant,
we can diagonalize it with the Discrete Fourier Transform, reducing both
storage and computation by several orders of magnitude. Interestingly, for
linear regression our formulation is equivalent to a correlation filter, used
by some of the fastest competitive trackers. For kernel regression, however, we
derive a new Kernelized Correlation Filter (KCF), that unlike other kernel
algorithms has the exact same complexity as its linear counterpart. Building on
it, we also propose a fast multi-channel extension of linear correlation
filters, via a linear kernel, which we call Dual Correlation Filter (DCF). Both
KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50
videos benchmark, despite running at hundreds of frames-per-second, and being
implemented in a few lines of code (Algorithm 1). To encourage further
developments, our tracking framework was made open-source.
| [
{
"version": "v1",
"created": "Wed, 30 Apr 2014 04:16:38 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Jul 2014 23:04:01 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Nov 2014 01:32:56 GMT"
}
] | 2014-11-06T00:00:00 | [
[
"Henriques",
"João F.",
""
],
[
"Caseiro",
"Rui",
""
],
[
"Martins",
"Pedro",
""
],
[
"Batista",
"Jorge",
""
]
] | TITLE: High-Speed Tracking with Kernelized Correlation Filters
ABSTRACT: The core component of most modern trackers is a discriminative classifier,
tasked with distinguishing between the target and the surrounding environment.
To cope with natural image changes, this classifier is typically trained with
translated and scaled sample patches. Such sets of samples are riddled with
redundancies -- any overlapping pixels are constrained to be the same. Based on
this simple observation, we propose an analytic model for datasets of thousands
of translated patches. By showing that the resulting data matrix is circulant,
we can diagonalize it with the Discrete Fourier Transform, reducing both
storage and computation by several orders of magnitude. Interestingly, for
linear regression our formulation is equivalent to a correlation filter, used
by some of the fastest competitive trackers. For kernel regression, however, we
derive a new Kernelized Correlation Filter (KCF), that unlike other kernel
algorithms has the exact same complexity as its linear counterpart. Building on
it, we also propose a fast multi-channel extension of linear correlation
filters, via a linear kernel, which we call Dual Correlation Filter (DCF). Both
KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50
videos benchmark, despite running at hundreds of frames-per-second, and being
implemented in a few lines of code (Algorithm 1). To encourage further
developments, our tracking framework was made open-source.
| no_new_dataset | 0.950088 |
1410.4485 | Miguel \'Angel Bautista Martin | Miguel \'Angel Bautista, Antonio Hern\'andez-Vela, Sergio Escalera,
Laura Igual, Oriol Pujol, Josep Moya, Ver\'onica Violant, Mar\'ia Teresa
Anguera | A Gesture Recognition System for Detecting Behavioral Patterns of ADHD | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an application of gesture recognition using an extension of
Dynamic Time Warping (DTW) to recognize behavioural patterns of Attention
Deficit Hyperactivity Disorder (ADHD). We propose an extension of DTW using
one-class classifiers in order to be able to encode the variability of a
gesture category, and thus, perform an alignment between a gesture sample and a
gesture class. We model the set of gesture samples of a certain gesture
category using either GMMs or an approximation of Convex Hulls. Thus, we add a
theoretical contribution to classical warping path in DTW by including local
modeling of intra-class gesture variability. This methodology is applied in a
clinical context, detecting a group of ADHD behavioural patterns defined by
experts in psychology/psychiatry, to provide support to clinicians in the
diagnostic procedure. The proposed methodology is tested on a novel multi-modal
dataset (RGB plus Depth) of ADHD children recordings with behavioural patterns.
We obtain satisfying results when compared to standard state-of-the-art
approaches in the DTW context.
| [
{
"version": "v1",
"created": "Thu, 16 Oct 2014 16:25:29 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Nov 2014 10:25:13 GMT"
}
] | 2014-11-06T00:00:00 | [
[
"Bautista",
"Miguel Ángel",
""
],
[
"Hernández-Vela",
"Antonio",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Igual",
"Laura",
""
],
[
"Pujol",
"Oriol",
""
],
[
"Moya",
"Josep",
""
],
[
"Violant",
"Verónica",
""
],
[
"Anguera",
"María Teresa",
""
]
] | TITLE: A Gesture Recognition System for Detecting Behavioral Patterns of ADHD
ABSTRACT: We present an application of gesture recognition using an extension of
Dynamic Time Warping (DTW) to recognize behavioural patterns of Attention
Deficit Hyperactivity Disorder (ADHD). We propose an extension of DTW using
one-class classifiers in order to be able to encode the variability of a
gesture category, and thus, perform an alignment between a gesture sample and a
gesture class. We model the set of gesture samples of a certain gesture
category using either GMMs or an approximation of Convex Hulls. Thus, we add a
theoretical contribution to classical warping path in DTW by including local
modeling of intra-class gesture variability. This methodology is applied in a
clinical context, detecting a group of ADHD behavioural patterns defined by
experts in psychology/psychiatry, to provide support to clinicians in the
diagnostic procedure. The proposed methodology is tested on a novel multi-modal
dataset (RGB plus Depth) of ADHD children recordings with behavioural patterns.
We obtain satisfying results when compared to standard state-of-the-art
approaches in the DTW context.
| new_dataset | 0.961316 |
1411.1171 | Rui Zeng | Rui Zeng, Jiasong Wu, Zhuhong Shao, Lotfi Senhadji, and Huazhong Shu | Multilinear Principal Component Analysis Network for Tensor Object
Classification | 4 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently proposed principal component analysis network (PCANet) has been
proved to achieve high performance for visual content classification. In this
letter, we develop a tensorial extension of PCANet, namely, the multilinear
principal component analysis network (MPCANet), for tensor object
classification. Compared to
PCANet, the proposed MPCANet uses the spatial structure and the relationship
between each dimension of tensor objects much more efficiently. Experiments
were conducted on different visual content datasets including UCF sports action
video sequences database and UCF11 database. The experimental results have
revealed that the proposed MPCANet achieves higher classification accuracy than
PCANet for tensor object classification.
| [
{
"version": "v1",
"created": "Wed, 5 Nov 2014 07:27:08 GMT"
}
] | 2014-11-06T00:00:00 | [
[
"Zeng",
"Rui",
""
],
[
"Wu",
"Jiasong",
""
],
[
"Shao",
"Zhuhong",
""
],
[
"Senhadji",
"Lotfi",
""
],
[
"Shu",
"Huazhong",
""
]
] | TITLE: Multilinear Principal Component Analysis Network for Tensor Object
Classification
ABSTRACT: The recently proposed principal component analysis network (PCANet) has been
proved to achieve high performance for visual content classification. In this
letter, we develop a tensorial extension of PCANet, namely, the multilinear
principal component analysis network (MPCANet), for tensor object
classification. Compared to
PCANet, the proposed MPCANet uses the spatial structure and the relationship
between each dimension of tensor objects much more efficiently. Experiments
were conducted on different visual content datasets including UCF sports action
video sequences database and UCF11 database. The experimental results have
revealed that the proposed MPCANet achieves higher classification accuracy than
PCANet for tensor object classification.
| no_new_dataset | 0.954984 |
1411.1372 | Nima Keivan | Nima Keivan and Gabe Sibley | Online SLAM with Any-time Self-calibration and Automatic Change
Detection | 8 pages, 6 figures | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A framework for online simultaneous localization, mapping and
self-calibration is presented which can detect and handle significant change in
the calibration parameters. Estimates are computed in constant-time by
factoring the problem and focusing on segments of the trajectory that are most
informative for the purposes of calibration. A novel technique is presented to
detect the probability that a significant change is present in the calibration
parameters. The system is then able to re-calibrate. Maximum likelihood
trajectory and map estimates are computed using an asynchronous and adaptive
optimization. The system requires no prior information and is able to
initialize without any special motions or routines, or in the case where
observability over calibration parameters is delayed. The system is
experimentally validated to calibrate camera intrinsic parameters for a
nonlinear camera model on a monocular dataset featuring a significant zoom
event partway through, and achieves high accuracy despite unknown initial
calibration parameters. Self-calibration and re-calibration parameters are
shown to closely match estimates computed using a calibration target. The
accuracy of the system is demonstrated with SLAM results that achieve sub-1%
distance-travel error even in the presence of significant re-calibration
events.
| [
{
"version": "v1",
"created": "Wed, 5 Nov 2014 19:39:41 GMT"
}
] | 2014-11-06T00:00:00 | [
[
"Keivan",
"Nima",
""
],
[
"Sibley",
"Gabe",
""
]
] | TITLE: Online SLAM with Any-time Self-calibration and Automatic Change
Detection
ABSTRACT: A framework for online simultaneous localization, mapping and
self-calibration is presented which can detect and handle significant change in
the calibration parameters. Estimates are computed in constant-time by
factoring the problem and focusing on segments of the trajectory that are most
informative for the purposes of calibration. A novel technique is presented to
detect the probability that a significant change is present in the calibration
parameters. The system is then able to re-calibrate. Maximum likelihood
trajectory and map estimates are computed using an asynchronous and adaptive
optimization. The system requires no prior information and is able to
initialize without any special motions or routines, or in the case where
observability over calibration parameters is delayed. The system is
experimentally validated to calibrate camera intrinsic parameters for a
nonlinear camera model on a monocular dataset featuring a significant zoom
event partway through, and achieves high accuracy despite unknown initial
calibration parameters. Self-calibration and re-calibration parameters are
shown to closely match estimates computed using a calibration target. The
accuracy of the system is demonstrated with SLAM results that achieve sub-1%
distance-travel error even in the presence of significant re-calibration
events.
| no_new_dataset | 0.942771 |
1406.1134 | Piotr Doll\'ar | Woonhyun Nam, Piotr Doll\'ar, Joon Hee Han | Local Decorrelation For Improved Detection | To appear in Neural Information Processing Systems (NIPS), 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Even with the advent of more sophisticated, data-hungry methods, boosted
decision trees remain extraordinarily successful for fast rigid object
detection, achieving top accuracy on numerous datasets. While effective, most
boosted detectors use decision trees with orthogonal (single feature) splits,
and the topology of the resulting decision boundary may not be well matched to
the natural topology of the data. Given highly correlated data, decision trees
with oblique (multiple feature) splits can be effective. Use of oblique splits,
however, comes at considerable computational expense. Inspired by recent work
on discriminative decorrelation of HOG features, we instead propose an
efficient feature transform that removes correlations in local neighborhoods.
The result is an overcomplete but locally decorrelated representation ideally
suited for use with orthogonal decision trees. In fact, orthogonal trees with
our locally decorrelated features outperform oblique trees trained over the
original features at a fraction of the computational cost. The overall
improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we
reduce false positives nearly tenfold over the previous state-of-the-art.
| [
{
"version": "v1",
"created": "Wed, 4 Jun 2014 18:20:38 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Nov 2014 02:50:18 GMT"
}
] | 2014-11-05T00:00:00 | [
[
"Nam",
"Woonhyun",
""
],
[
"Dollár",
"Piotr",
""
],
[
"Han",
"Joon Hee",
""
]
] | TITLE: Local Decorrelation For Improved Detection
ABSTRACT: Even with the advent of more sophisticated, data-hungry methods, boosted
decision trees remain extraordinarily successful for fast rigid object
detection, achieving top accuracy on numerous datasets. While effective, most
boosted detectors use decision trees with orthogonal (single feature) splits,
and the topology of the resulting decision boundary may not be well matched to
the natural topology of the data. Given highly correlated data, decision trees
with oblique (multiple feature) splits can be effective. Use of oblique splits,
however, comes at considerable computational expense. Inspired by recent work
on discriminative decorrelation of HOG features, we instead propose an
efficient feature transform that removes correlations in local neighborhoods.
The result is an overcomplete but locally decorrelated representation ideally
suited for use with orthogonal decision trees. In fact, orthogonal trees with
our locally decorrelated features outperform oblique trees trained over the
original features at a fraction of the computational cost. The overall
improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we
reduce false positives nearly tenfold over the previous state-of-the-art.
| no_new_dataset | 0.946349 |
1407.3399 | Xianjie Chen | Xianjie Chen, Alan Yuille | Articulated Pose Estimation by a Graphical Model with Image Dependent
Pairwise Relations | NIPS 2014 Camera Ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for estimating articulated human pose from a single
static image based on a graphical model with novel pairwise relations that make
adaptive use of local image measurements. More precisely, we specify a
graphical model for human pose which exploits the fact that local image
measurements can be used both to detect parts (or joints) and also to predict
the spatial relationships between them (Image Dependent Pairwise Relations).
These spatial relationships are represented by a mixture model. We use Deep
Convolutional Neural Networks (DCNNs) to learn conditional probabilities for
the presence of parts and their spatial relationships within image patches.
Hence our model combines the representational flexibility of graphical models
with the efficiency and statistical power of DCNNs. Our method significantly
outperforms the state of the art methods on the LSP and FLIC datasets and also
performs very well on the Buffy dataset without any training.
| [
{
"version": "v1",
"created": "Sat, 12 Jul 2014 17:04:21 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Nov 2014 17:28:15 GMT"
}
] | 2014-11-05T00:00:00 | [
[
"Chen",
"Xianjie",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Articulated Pose Estimation by a Graphical Model with Image Dependent
Pairwise Relations
ABSTRACT: We present a method for estimating articulated human pose from a single
static image based on a graphical model with novel pairwise relations that make
adaptive use of local image measurements. More precisely, we specify a
graphical model for human pose which exploits the fact that local image
measurements can be used both to detect parts (or joints) and also to predict
the spatial relationships between them (Image Dependent Pairwise Relations).
These spatial relationships are represented by a mixture model. We use Deep
Convolutional Neural Networks (DCNNs) to learn conditional probabilities for
the presence of parts and their spatial relationships within image patches.
Hence our model combines the representational flexibility of graphical models
with the efficiency and statistical power of DCNNs. Our method significantly
outperforms the state of the art methods on the LSP and FLIC datasets and also
performs very well on the Buffy dataset without any training.
| no_new_dataset | 0.94887 |
1409.6075 | Tyler Ward | Tyler Ward | The Information Theoretically Efficient Model (ITEM): A model for
computerized analysis of large datasets | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document discusses the Information Theoretically Efficient Model (ITEM),
a computerized system to generate an information theoretically efficient
multinomial logistic regression from a general dataset. More specifically, this
model is designed to succeed even where the logit transform of the dependent
variable is not necessarily linear in the independent variables. This research
shows that for large datasets, the resulting models can be produced on modern
computers in a tractable amount of time. These models are also resistant to
overfitting, and as such they tend to produce interpretable models with only a
limited number of features, all of which are designed to be well behaved.
| [
{
"version": "v1",
"created": "Mon, 22 Sep 2014 03:39:23 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Oct 2014 11:12:07 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Nov 2014 05:41:04 GMT"
}
] | 2014-11-05T00:00:00 | [
[
"Ward",
"Tyler",
""
]
] | TITLE: The Information Theoretically Efficient Model (ITEM): A model for
computerized analysis of large datasets
ABSTRACT: This document discusses the Information Theoretically Efficient Model (ITEM),
a computerized system to generate an information theoretically efficient
multinomial logistic regression from a general dataset. More specifically, this
model is designed to succeed even where the logit transform of the dependent
variable is not necessarily linear in the independent variables. This research
shows that for large datasets, the resulting models can be produced on modern
computers in a tractable amount of time. These models are also resistant to
overfitting, and as such they tend to produce interpretable models with only a
limited number of features, all of which are designed to be well behaved.
| no_new_dataset | 0.951051 |
1411.0722 | Yaneer Bar-Yam | Urbano Fran\c{c}a, Hiroki Sayama, Colin McSwiggen, Roozbeh Daneshvar
and Yaneer Bar-Yam | Visualizing the "Heartbeat" of a City with Tweets | 11 pages, 6 figures | null | null | New England Complex Systems Institute Report 2014-11-01 | physics.soc-ph cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Describing the dynamics of a city is a crucial step to both understanding the
human activity in urban environments and to planning and designing cities
accordingly. Here we describe the collective dynamics of New York City and
surrounding areas as seen through the lens of Twitter usage. In particular, we
observe and quantify the patterns that emerge naturally from the hourly
activities in different areas of New York City, and discuss how they can be
used to understand the urban areas. Using a dataset that includes more than 6
million geolocated Twitter messages we construct a movie of the geographic
density of tweets. We observe the diurnal "heartbeat" of the NYC area. The
largest scale dynamics are the waking and sleeping cycle and commuting from
residential communities to office areas in Manhattan. Hourly dynamics reflect
the interplay of commuting, work and leisure, including whether people are
preoccupied with other activities or actively using Twitter. Differences
between weekday and weekend dynamics point to changes in when people wake and
sleep, and engage in social activities. We show that by measuring the average
distances to the heart of the city one can quantify the weekly differences and
the shift in behavior during weekends. We also identify locations and times of
high Twitter activity that occur because of specific activities. These include
early morning high levels of traffic as people arrive and wait at air
transportation hubs, and on Sunday at the Meadowlands Sports Complex and Statue
of Liberty. We analyze the role of particular individuals where they have large
impacts on overall Twitter activity. Our analysis points to the opportunity to
develop insight into both geographic social dynamics and attention through
social media analysis.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 22:27:23 GMT"
}
] | 2014-11-05T00:00:00 | [
[
"França",
"Urbano",
""
],
[
"Sayama",
"Hiroki",
""
],
[
"McSwiggen",
"Colin",
""
],
[
"Daneshvar",
"Roozbeh",
""
],
[
"Bar-Yam",
"Yaneer",
""
]
] | TITLE: Visualizing the "Heartbeat" of a City with Tweets
ABSTRACT: Describing the dynamics of a city is a crucial step to both understanding the
human activity in urban environments and to planning and designing cities
accordingly. Here we describe the collective dynamics of New York City and
surrounding areas as seen through the lens of Twitter usage. In particular, we
observe and quantify the patterns that emerge naturally from the hourly
activities in different areas of New York City, and discuss how they can be
used to understand the urban areas. Using a dataset that includes more than 6
million geolocated Twitter messages we construct a movie of the geographic
density of tweets. We observe the diurnal "heartbeat" of the NYC area. The
largest scale dynamics are the waking and sleeping cycle and commuting from
residential communities to office areas in Manhattan. Hourly dynamics reflect
the interplay of commuting, work and leisure, including whether people are
preoccupied with other activities or actively using Twitter. Differences
between weekday and weekend dynamics point to changes in when people wake and
sleep, and engage in social activities. We show that by measuring the average
distances to the heart of the city one can quantify the weekly differences and
the shift in behavior during weekends. We also identify locations and times of
high Twitter activity that occur because of specific activities. These include
early morning high levels of traffic as people arrive and wait at air
transportation hubs, and on Sunday at the Meadowlands Sports Complex and Statue
of Liberty. We analyze the role of particular individuals where they have large
impacts on overall Twitter activity. Our analysis points to the opportunity to
develop insight into both geographic social dynamics and attention through
social media analysis.
| no_new_dataset | 0.698792 |
1411.0860 | Miao Xu | Miao Xu, Rong Jin, Zhi-Hua Zhou | CUR Algorithm for Partially Observed Matrices | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CUR matrix decomposition computes the low rank approximation of a given
matrix by using the actual rows and columns of the matrix. It has been a very
useful tool for handling large matrices. One limitation with the existing
algorithms for CUR matrix decomposition is that they need access to the {\it
full} matrix, a requirement that can be difficult to fulfill in many real world
applications. In this work, we alleviate this limitation by developing a CUR
decomposition algorithm for partially observed matrices. In particular, the
proposed algorithm computes the low rank approximation of the target matrix
based on (i) the randomly sampled rows and columns, and (ii) a subset of
observed entries that are randomly sampled from the matrix. Our analysis shows
the relative error bound, measured by spectral norm, for the proposed algorithm
when the target matrix is of full rank. We also show that only $O(n r\ln r)$
observed entries are needed by the proposed algorithm to perfectly recover a
rank $r$ matrix of size $n\times n$, which improves the sample complexity of
the existing algorithms for matrix completion. Empirical studies on both
synthetic and real-world datasets verify our theoretical claims and demonstrate
the effectiveness of the proposed algorithm.
| [
{
"version": "v1",
"created": "Tue, 4 Nov 2014 11:03:50 GMT"
}
] | 2014-11-05T00:00:00 | [
[
"Xu",
"Miao",
""
],
[
"Jin",
"Rong",
""
],
[
"Zhou",
"Zhi-Hua",
""
]
] | TITLE: CUR Algorithm for Partially Observed Matrices
ABSTRACT: CUR matrix decomposition computes the low rank approximation of a given
matrix by using the actual rows and columns of the matrix. It has been a very
useful tool for handling large matrices. One limitation with the existing
algorithms for CUR matrix decomposition is that they need access to the {\it
full} matrix, a requirement that can be difficult to fulfill in many real world
applications. In this work, we alleviate this limitation by developing a CUR
decomposition algorithm for partially observed matrices. In particular, the
proposed algorithm computes the low rank approximation of the target matrix
based on (i) the randomly sampled rows and columns, and (ii) a subset of
observed entries that are randomly sampled from the matrix. Our analysis shows
the relative error bound, measured by spectral norm, for the proposed algorithm
when the target matrix is of full rank. We also show that only $O(n r\ln r)$
observed entries are needed by the proposed algorithm to perfectly recover a
rank $r$ matrix of size $n\times n$, which improves the sample complexity of
the existing algorithms for matrix completion. Empirical studies on both
synthetic and real-world datasets verify our theoretical claims and demonstrate
the effectiveness of the proposed algorithm.
| no_new_dataset | 0.948489 |
1309.0326 | Micha{\l} {\L}opuszy\'nski | Micha{\l} {\L}opuszy\'nski, {\L}ukasz Bolikowski | Tagging Scientific Publications using Wikipedia and Natural Language
Processing Tools. Comparison on the ArXiv Dataset | null | Communications in Computer and Information Science Volume 416,
Springer 2014, pp 16-27 | 10.1007/978-3-319-08425-1_3 | null | cs.CL cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we compare two simple methods of tagging scientific
publications with labels reflecting their content. As a first source of labels,
Wikipedia is employed; the second label set is constructed from the noun phrases
occurring in the analyzed corpus. We examine the statistical properties and the
effectiveness of both approaches on the dataset consisting of abstracts from
0.7 million scientific documents deposited in the ArXiv preprint collection.
We believe that the obtained tags can later be applied as useful document
features in various machine learning tasks (document similarity, clustering,
topic modelling, etc.).
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2013 09:09:27 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Aug 2014 14:30:21 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Nov 2014 14:48:29 GMT"
}
] | 2014-11-04T00:00:00 | [
[
"Łopuszyński",
"Michał",
""
],
[
"Bolikowski",
"Łukasz",
""
]
] | TITLE: Tagging Scientific Publications using Wikipedia and Natural Language
Processing Tools. Comparison on the ArXiv Dataset
ABSTRACT: In this work, we compare two simple methods of tagging scientific
publications with labels reflecting their content. As a first source of labels,
Wikipedia is employed; the second label set is constructed from the noun phrases
occurring in the analyzed corpus. We examine the statistical properties and the
effectiveness of both approaches on the dataset consisting of abstracts from
0.7 million scientific documents deposited in the ArXiv preprint collection.
We believe that the obtained tags can later be applied as useful document
features in various machine learning tasks (document similarity, clustering,
topic modelling, etc.).
| no_new_dataset | 0.946547 |
1411.0052 | Chris Muelder | Arnaud Sallaberry, Yang-Chih Fu, Hwai-Chung Ho, Kwan-Liu Ma | ContactTrees: A Technique for Studying Personal Network Data | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network visualization allows a quick glance at how nodes (or actors) are
connected by edges (or ties). A conventional network diagram of "contact tree"
maps out a root and branches that represent the structure of nodes and edges,
often without further specifying leaves or fruits that would have grown from
small branches. By furnishing such a network structure with leaves and fruits,
we reveal details about "contacts" in our ContactTrees that underline ties and
relationships. Our elegant design employs a bottom-up approach that resembles a
recent attempt to understand subjective well-being by means of a series of
emotions. Such a bottom-up approach to social-network studies decomposes each
tie into a series of interactions or contacts, which help deepen our
understanding of the complexity embedded in a network structure. Unlike
previous network visualizations, ContactTrees can highlight how relationships
form and change based upon interactions among actors, and how relationships and
networks vary by contact attributes. Based on a botanical tree metaphor, the
design is easy to construct and the resulting tree-like visualization can
display many properties at both tie and contact levels, a key ingredient
missing from conventional techniques of network visualization. We first
demonstrate ContactTrees using a dataset consisting of three waves of 3-month
contact diaries over the 2004-2012 period, then compare ContactTrees with
alternative tools and discuss how this tool can be applied to other types of
datasets.
| [
{
"version": "v1",
"created": "Sat, 1 Nov 2014 01:44:15 GMT"
}
] | 2014-11-04T00:00:00 | [
[
"Sallaberry",
"Arnaud",
""
],
[
"Fu",
"Yang-Chih",
""
],
[
"Ho",
"Hwai-Chung",
""
],
[
"Ma",
"Kwan-Liu",
""
]
] | TITLE: ContactTrees: A Technique for Studying Personal Network Data
ABSTRACT: Network visualization allows a quick glance at how nodes (or actors) are
connected by edges (or ties). A conventional network diagram of "contact tree"
maps out a root and branches that represent the structure of nodes and edges,
often without further specifying leaves or fruits that would have grown from
small branches. By furnishing such a network structure with leaves and fruits,
we reveal details about "contacts" in our ContactTrees that underline ties and
relationships. Our elegant design employs a bottom-up approach that resembles a
recent attempt to understand subjective well-being by means of a series of
emotions. Such a bottom-up approach to social-network studies decomposes each
tie into a series of interactions or contacts, which help deepen our
understanding of the complexity embedded in a network structure. Unlike
previous network visualizations, ContactTrees can highlight how relationships
form and change based upon interactions among actors, and how relationships and
networks vary by contact attributes. Based on a botanical tree metaphor, the
design is easy to construct and the resulting tree-like visualization can
display many properties at both tie and contact levels, a key ingredient
missing from conventional techniques of network visualization. We first
demonstrate ContactTrees using a dataset consisting of three waves of 3-month
contact diaries over the 2004-2012 period, then compare ContactTrees with
alternative tools and discuss how this tool can be applied to other types of
datasets.
| new_dataset | 0.862757 |
1411.0126 | GowthamRangarajan Raman | Gowtham Rangarajan Raman | Detection of texts in natural images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | A framework that makes use of Connected components and supervised Support
Vector Machine to recognise texts is proposed. The image is preprocessed and an edge
graph is calculated using a probabilistic framework to compensate for
photometric noise. Connected components over the resultant image are calculated,
bounded, and then pruned using geometric constraints. Finally, a Gabor
Feature based SVM is used to classify the presence of text in the candidates.
The proposed method was tested with the ICDAR 10 dataset and a few other images
available on the internet. It resulted in recall and precision metrics of 0.72
and 0.88, comfortably better than the benchmark Eiphstein's algorithm. The
proposed method recorded 0.70 and 0.74 on natural images, which is
significantly better than current methods on natural images. The proposed
method also scales almost linearly for high resolution, cluttered images.
| [
{
"version": "v1",
"created": "Sat, 1 Nov 2014 15:06:23 GMT"
}
] | 2014-11-04T00:00:00 | [
[
"Raman",
"Gowtham Rangarajan",
""
]
] | TITLE: Detection of texts in natural images
ABSTRACT: A framework that makes use of Connected components and supervised Support
Vector Machine to recognise texts is proposed. The image is preprocessed and an edge
graph is calculated using a probabilistic framework to compensate for
photometric noise. Connected components over the resultant image are calculated,
bounded, and then pruned using geometric constraints. Finally, a Gabor
Feature based SVM is used to classify the presence of text in the candidates.
The proposed method was tested with the ICDAR 10 dataset and a few other images
available on the internet. It resulted in recall and precision metrics of 0.72
and 0.88, comfortably better than the benchmark Eiphstein's algorithm. The
proposed method recorded 0.70 and 0.74 on natural images, which is
significantly better than current methods on natural images. The proposed
method also scales almost linearly for high resolution, cluttered images.
| no_new_dataset | 0.949435 |
1411.0392 | Roozbeh Rajabi | Roozbeh Rajabi, Hassan Ghassemian | Sparsity Constrained Graph Regularized NMF for Spectral Unmixing of
Hyperspectral Data | 10 pages, Journal | null | 10.1007/s12524-014-0408-2 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Mixed pixels are pixels containing more than one
distinct material called endmembers. The presence percentages of endmembers in
mixed pixels are called abundance fractions. Spectral unmixing problem refers
to decomposing these pixels into a set of endmembers and abundance fractions.
Due to nonnegativity constraint on abundance fractions, nonnegative matrix
factorization methods (NMF) have been widely used for solving spectral unmixing
problem. In this paper we have used graph regularized NMF (GNMF) method
combined with sparseness constraint to decompose mixed pixels in hyperspectral
imagery. This method preserves the geometrical structure of data while
representing it in low dimensional space. Adaptive regularization parameter
based on temperature schedule in simulated annealing method also has been used
in this paper for the sparseness term. Proposed algorithm is applied on
synthetic and real datasets. Synthetic data is generated based on endmembers
from USGS spectral library. AVIRIS Cuprite dataset is used as real dataset for
evaluation of proposed method. Results are quantified based on spectral angle
distance (SAD) and abundance angle distance (AAD) measures. Results in
comparison with other methods show that the proposed method can unmix data more
effectively. Specifically for the Cuprite dataset, performance of the proposed
method is approximately 10% better than the VCA and Sparse NMF in terms of root
mean square of SAD.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 08:41:32 GMT"
}
] | 2014-11-04T00:00:00 | [
[
"Rajabi",
"Roozbeh",
""
],
[
"Ghassemian",
"Hassan",
""
]
] | TITLE: Sparsity Constrained Graph Regularized NMF for Spectral Unmixing of
Hyperspectral Data
ABSTRACT: Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Mixed pixels are pixels containing more than one
distinct material called endmembers. The presence percentages of endmembers in
mixed pixels are called abundance fractions. Spectral unmixing problem refers
to decomposing these pixels into a set of endmembers and abundance fractions.
Due to nonnegativity constraint on abundance fractions, nonnegative matrix
factorization methods (NMF) have been widely used for solving spectral unmixing
problem. In this paper we have used graph regularized NMF (GNMF) method
combined with sparseness constraint to decompose mixed pixels in hyperspectral
imagery. This method preserves the geometrical structure of data while
representing it in low dimensional space. Adaptive regularization parameter
based on temperature schedule in simulated annealing method also has been used
in this paper for the sparseness term. Proposed algorithm is applied on
synthetic and real datasets. Synthetic data is generated based on endmembers
from USGS spectral library. AVIRIS Cuprite dataset is used as real dataset for
evaluation of proposed method. Results are quantified based on spectral angle
distance (SAD) and abundance angle distance (AAD) measures. Results in
comparison with other methods show that the proposed method can unmix data more
effectively. Specifically for the Cuprite dataset, performance of the proposed
method is approximately 10% better than the VCA and Sparse NMF in terms of root
mean square of SAD.
| no_new_dataset | 0.948585 |
1411.0591 | Charles Fisher | Charles K. Fisher and Pankaj Mehta | Bayesian feature selection with strongly-regularizing priors maps to the
Ising Model | null | null | null | null | cond-mat.stat-mech cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying small subsets of features that are relevant for prediction and/or
classification tasks is a central problem in machine learning and statistics.
The feature selection task is especially important, and computationally
difficult, for modern datasets where the number of features can be comparable
to, or even exceed, the number of samples. Here, we show that feature selection
with Bayesian inference takes a universal form and reduces to calculating the
magnetizations of an Ising model, under some mild conditions. Our results
exploit the observation that the evidence takes a universal form for
strongly-regularizing priors --- priors that have a large effect on the
posterior probability even in the infinite data limit. We derive explicit
expressions for feature selection for generalized linear models, a large class
of statistical techniques that include linear and logistic regression. We
illustrate the power of our approach by analyzing feature selection in a
logistic regression-based classifier trained to distinguish between the letters
B and D in the notMNIST dataset.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 18:15:29 GMT"
}
] | 2014-11-04T00:00:00 | [
[
"Fisher",
"Charles K.",
""
],
[
"Mehta",
"Pankaj",
""
]
] | TITLE: Bayesian feature selection with strongly-regularizing priors maps to the
Ising Model
ABSTRACT: Identifying small subsets of features that are relevant for prediction and/or
classification tasks is a central problem in machine learning and statistics.
The feature selection task is especially important, and computationally
difficult, for modern datasets where the number of features can be comparable
to, or even exceed, the number of samples. Here, we show that feature selection
with Bayesian inference takes a universal form and reduces to calculating the
magnetizations of an Ising model, under some mild conditions. Our results
exploit the observation that the evidence takes a universal form for
strongly-regularizing priors --- priors that have a large effect on the
posterior probability even in the infinite data limit. We derive explicit
expressions for feature selection for generalized linear models, a large class
of statistical techniques that include linear and logistic regression. We
illustrate the power of our approach by analyzing feature selection in a
logistic regression-based classifier trained to distinguish between the letters
B and D in the notMNIST dataset.
| no_new_dataset | 0.951142 |
1410.6858 | Alberto Dainotti | Alberto Dainotti, Karyn Benson, Alistair King, kc claffy, Eduard
Glatz, Xenofontas Dimitropoulos, Philipp Richter, Alessandro Finamore, Alex
C. Snoeren | Lost in Space: Improving Inference of IPv4 Address Space Utilization | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One challenge in understanding the evolution of Internet infrastructure is
the lack of systematic mechanisms for monitoring the extent to which allocated
IP addresses are actually used. In this paper we try to advance the science of
inferring IPv4 address space utilization by analyzing and correlating results
obtained through different types of measurements. We have previously studied an
approach based on passive measurements that can reveal used portions of the
address space unseen by active approaches. In this paper, we study such passive
approaches in detail, extending our methodology to four different types of
vantage points, identifying traffic components that most significantly
contribute to discovering used IPv4 network blocks. We then combine the results
we obtained through passive measurements together with data from active
measurement studies, as well as measurements from BGP and additional datasets
available to researchers. Through the analysis of this large collection of
heterogeneous datasets, we substantially improve the state of the art in terms
of: (i) understanding the challenges and opportunities in using passive and
active techniques to study address utilization; and (ii) knowledge of the
utilization of the IPv4 space.
| [
{
"version": "v1",
"created": "Sat, 25 Oct 2014 00:29:54 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Oct 2014 22:07:44 GMT"
}
] | 2014-11-03T00:00:00 | [
[
"Dainotti",
"Alberto",
""
],
[
"Benson",
"Karyn",
""
],
[
"King",
"Alistair",
""
],
[
"claffy",
"kc",
""
],
[
"Glatz",
"Eduard",
""
],
[
"Dimitropoulos",
"Xenofontas",
""
],
[
"Richter",
"Philipp",
""
],
[
"Finamore",
"Alessandro",
""
],
[
"Snoeren",
"Alex C.",
""
]
] | TITLE: Lost in Space: Improving Inference of IPv4 Address Space Utilization
ABSTRACT: One challenge in understanding the evolution of Internet infrastructure is
the lack of systematic mechanisms for monitoring the extent to which allocated
IP addresses are actually used. In this paper we try to advance the science of
inferring IPv4 address space utilization by analyzing and correlating results
obtained through different types of measurements. We have previously studied an
approach based on passive measurements that can reveal used portions of the
address space unseen by active approaches. In this paper, we study such passive
approaches in detail, extending our methodology to four different types of
vantage points, identifying traffic components that most significantly
contribute to discovering used IPv4 network blocks. We then combine the results
we obtained through passive measurements together with data from active
measurement studies, as well as measurements from BGP and additional datasets
available to researchers. Through the analysis of this large collection of
heterogeneous datasets, we substantially improve the state of the art in terms
of: (i) understanding the challenges and opportunities in using passive and
active techniques to study address utilization; and (ii) knowledge of the
utilization of the IPv4 space.
| no_new_dataset | 0.949809 |
1410.8586 | Tao Chen | Tao Chen, Damian Borth, Trevor Darrell and Shih-Fu Chang | DeepSentiBank: Visual Sentiment Concept Classification with Deep
Convolutional Neural Networks | 7 pages, 4 figures | null | null | null | cs.CV cs.LG cs.MM cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a visual sentiment concept classification method based
on deep convolutional neural networks (CNNs). The visual sentiment concepts are
adjective noun pairs (ANPs) automatically discovered from the tags of web
photos, and can be utilized as effective statistical cues for detecting
emotions depicted in the images. Nearly one million Flickr images tagged with
these ANPs are downloaded to train the classifiers of the concepts. We adopt
the popular model of deep convolutional neural networks which recently shows
great performance improvement on classifying large-scale web-based image
datasets such as ImageNet. Our deep CNNs model is trained based on Caffe, a
newly developed deep learning framework. To deal with the biased training data
which only contains images with strong sentiment and to prevent overfitting, we
initialize the model with the model weights trained from ImageNet. Performance
evaluation shows the newly trained deep CNNs model SentiBank 2.0 (or called
DeepSentiBank) is significantly improved in both annotation accuracy and
retrieval performance, compared to its predecessors which mainly use binary SVM
classification models.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2014 22:57:12 GMT"
}
] | 2014-11-03T00:00:00 | [
[
"Chen",
"Tao",
""
],
[
"Borth",
"Damian",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: DeepSentiBank: Visual Sentiment Concept Classification with Deep
Convolutional Neural Networks
ABSTRACT: This paper introduces a visual sentiment concept classification method based
on deep convolutional neural networks (CNNs). The visual sentiment concepts are
adjective noun pairs (ANPs) automatically discovered from the tags of web
photos, and can be utilized as effective statistical cues for detecting
emotions depicted in the images. Nearly one million Flickr images tagged with
these ANPs are downloaded to train the classifiers of the concepts. We adopt
the popular model of deep convolutional neural networks which recently shows
great performance improvement on classifying large-scale web-based image
datasets such as ImageNet. Our deep CNNs model is trained based on Caffe, a
newly developed deep learning framework. To deal with the biased training data
which only contains images with strong sentiment and to prevent overfitting, we
initialize the model with the model weights trained from ImageNet. Performance
evaluation shows the newly trained deep CNNs model SentiBank 2.0 (or called
DeepSentiBank) is significantly improved in both annotation accuracy and
retrieval performance, compared to its predecessors which mainly use binary SVM
classification models.
| no_new_dataset | 0.950915 |
1410.8664 | Yishi Lin | Yishi Lin, John C.S. Lui | Algorithmic Design for Competitive Influence Maximization Problems | null | null | null | null | cs.SI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the popularity of the viral marketing campaign in online social
networks, finding an effective method to identify a set of the most influential
nodes so as to compete well with other viral marketing competitors is of utmost
importance. We propose a "General Competitive Independent Cascade (GCIC)" model
to describe the general influence propagation of two competing sources in the
same network. We formulate the "Competitive Influence Maximization (CIM)"
problem as follows: Under a prespecified influence propagation model, and given that
the competitor's seed set is known, how can one find a seed set of $k$ nodes so as
to trigger the largest influence cascade? We propose a general algorithmic
framework TCIM for the CIM problem under the GCIC model. TCIM returns a
$(1-1/e-\epsilon)$-approximate solution with probability at least
$1-n^{-\ell}$, and has an efficient time complexity of $O(c(k+\ell)(m+n)\log
n/\epsilon^2)$, where $c$ depends on specific propagation model and may also
depend on $k$ and underlying network $G$. To the best of our knowledge, this is
the first general algorithmic framework that has both $(1-1/e-\epsilon)$
performance guarantee and practical efficiency. We conduct extensive
experiments on real-world datasets under three specific influence propagation
models, and show the efficiency and accuracy of our framework. In particular,
we achieve up to four orders of magnitude speedup as compared to the previous
state-of-the-art algorithms with the approximate guarantee.
| [
{
"version": "v1",
"created": "Fri, 31 Oct 2014 08:16:20 GMT"
}
] | 2014-11-03T00:00:00 | [
[
"Lin",
"Yishi",
""
],
[
"Lui",
"John C. S.",
""
]
] | TITLE: Algorithmic Design for Competitive Influence Maximization Problems
ABSTRACT: Given the popularity of the viral marketing campaign in online social
networks, finding an effective method to identify a set of the most influential
nodes so as to compete well with other viral marketing competitors is of utmost
importance. We propose a "General Competitive Independent Cascade (GCIC)" model
to describe the general influence propagation of two competing sources in the
same network. We formulate the "Competitive Influence Maximization (CIM)"
problem as follows: Under a prespecified influence propagation model, and given that
the competitor's seed set is known, how can one find a seed set of $k$ nodes so as
to trigger the largest influence cascade? We propose a general algorithmic
framework TCIM for the CIM problem under the GCIC model. TCIM returns a
$(1-1/e-\epsilon)$-approximate solution with probability at least
$1-n^{-\ell}$, and has an efficient time complexity of $O(c(k+\ell)(m+n)\log
n/\epsilon^2)$, where $c$ depends on specific propagation model and may also
depend on $k$ and underlying network $G$. To the best of our knowledge, this is
the first general algorithmic framework that has both $(1-1/e-\epsilon)$
performance guarantee and practical efficiency. We conduct extensive
experiments on real-world datasets under three specific influence propagation
models, and show the efficiency and accuracy of our framework. In particular,
we achieve up to four orders of magnitude speedup as compared to the previous
state-of-the-art algorithms with the approximate guarantee.
| no_new_dataset | 0.945298 |
1401.0733 | Ahmet Iscen | Ahmet Iscen, Eren Golge, Ilker Sarac, Pinar Duygulu | ConceptVision: A Flexible Scene Classification Framework | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce ConceptVision, a method that aims for high accuracy in
categorizing a large number of scenes, while keeping the model relatively simple
and efficient for scalability. The proposed method combines the advantages of
both low-level representations and high-level semantic categories, and
eliminates the distinctions between different levels through the definition of
concepts. The proposed framework encodes the perspectives brought through
different concepts by considering them in concept groups. Different
perspectives are ensembled for the final decision. Extensive experiments are
carried out on benchmark datasets to test the effects of different concepts,
and methods used to ensemble. Comparisons with state-of-the-art studies show
that we can achieve better results with incorporation of concepts in different
levels with different perspectives.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2014 21:15:13 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Oct 2014 20:19:35 GMT"
}
] | 2014-10-31T00:00:00 | [
[
"Iscen",
"Ahmet",
""
],
[
"Golge",
"Eren",
""
],
[
"Sarac",
"Ilker",
""
],
[
"Duygulu",
"Pinar",
""
]
] | TITLE: ConceptVision: A Flexible Scene Classification Framework
ABSTRACT: We introduce ConceptVision, a method that aims for high accuracy in
categorizing a large number of scenes, while keeping the model relatively simple
and efficient for scalability. The proposed method combines the advantages of
both low-level representations and high-level semantic categories, and
eliminates the distinctions between different levels through the definition of
concepts. The proposed framework encodes the perspectives brought through
different concepts by considering them in concept groups. Different
perspectives are ensembled for the final decision. Extensive experiments are
carried out on benchmark datasets to test the effects of different concepts,
and methods used to ensemble. Comparisons with state-of-the-art studies show
that we can achieve better results with incorporation of concepts in different
levels with different perspectives.
| no_new_dataset | 0.946843 |
1407.7644 | Ariel Jaffe | Ariel Jaffe, Boaz Nadler and Yuval Kluger | Estimating the Accuracies of Multiple Classifiers Without Labeled Data | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In various situations one is given only the predictions of multiple
classifiers over a large unlabeled test data. This scenario raises the
following questions: Without any labeled data and without any a-priori
knowledge about the reliability of these different classifiers, is it possible
to consistently and computationally efficiently estimate their accuracies?
Furthermore, also in a completely unsupervised manner, can one construct a more
accurate unsupervised ensemble classifier? In this paper, focusing on the
binary case, we present simple, computationally efficient algorithms to solve
these questions. Furthermore, under standard classifier independence
assumptions, we prove our methods are consistent and study their asymptotic
error. Our approach is spectral, based on the fact that the off-diagonal
entries of the classifiers' covariance matrix and 3-d tensor are rank-one. We
illustrate the competitive performance of our algorithms via extensive
experiments on both artificial and real datasets.
| [
{
"version": "v1",
"created": "Tue, 29 Jul 2014 07:19:08 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Oct 2014 11:23:37 GMT"
}
] | 2014-10-31T00:00:00 | [
[
"Jaffe",
"Ariel",
""
],
[
"Nadler",
"Boaz",
""
],
[
"Kluger",
"Yuval",
""
]
] | TITLE: Estimating the Accuracies of Multiple Classifiers Without Labeled Data
ABSTRACT: In various situations one is given only the predictions of multiple
classifiers over a large unlabeled test data. This scenario raises the
following questions: Without any labeled data and without any a-priori
knowledge about the reliability of these different classifiers, is it possible
to consistently and computationally efficiently estimate their accuracies?
Furthermore, also in a completely unsupervised manner, can one construct a more
accurate unsupervised ensemble classifier? In this paper, focusing on the
binary case, we present simple, computationally efficient algorithms to solve
these questions. Furthermore, under standard classifier independence
assumptions, we prove our methods are consistent and study their asymptotic
error. Our approach is spectral, based on the fact that the off-diagonal
entries of the classifiers' covariance matrix and 3-d tensor are rank-one. We
illustrate the competitive performance of our algorithms via extensive
experiments on both artificial and real datasets.
| no_new_dataset | 0.942771 |
1410.8507 | Mark Taylor | M. B. Taylor | External Use of TOPCAT's Plotting Library | 4 pages, 1 figure | null | null | null | astro-ph.IM cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The table analysis application TOPCAT uses a custom Java plotting library for
highly configurable high-performance interactive or exported visualisations in
two and three dimensions. We present here a variety of ways for end users or
application developers to make use of this library outside of the TOPCAT
application: via the command-line suite STILTS or its Jython variant JyStilts,
via a traditional Java API, or by programmatically assigning values to a set of
parameters in java code or using some form of inter-process communication. The
library has been built with large datasets in mind; interactive plots scale
well up to several million points, and static output to standard graphics
formats is possible for unlimited sized input data.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2014 19:29:08 GMT"
}
] | 2014-10-31T00:00:00 | [
[
"Taylor",
"M. B.",
""
]
] | TITLE: External Use of TOPCAT's Plotting Library
ABSTRACT: The table analysis application TOPCAT uses a custom Java plotting library for
highly configurable high-performance interactive or exported visualisations in
two and three dimensions. We present here a variety of ways for end users or
application developers to make use of this library outside of the TOPCAT
application: via the command-line suite STILTS or its Jython variant JyStilts,
via a traditional Java API, or by programmatically assigning values to a set of
parameters in java code or using some form of inter-process communication. The
library has been built with large datasets in mind; interactive plots scale
well up to several million points, and static output to standard graphics
formats is possible for unlimited sized input data.
| no_new_dataset | 0.928668 |
1410.7709 | Tuomo Sipola | Antti Juvonen and Tuomo Sipola | Anomaly Detection Framework Using Rule Extraction for Efficient
Intrusion Detection | 35 pages, 12 figures, 7 tables | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Huge datasets in cyber security, such as network traffic logs, can be
analyzed using machine learning and data mining methods. However, the amount of
collected data is increasing, which makes analysis more difficult. Many machine
learning methods have not been designed for big datasets, and consequently are
slow and difficult to understand. We address the issue of efficient network
traffic classification by creating an intrusion detection framework that
applies dimensionality reduction and conjunctive rule extraction. The system
can perform unsupervised anomaly detection and use this information to create
conjunctive rules that classify huge amounts of traffic in real time. We test
the implemented system with the widely used KDD Cup 99 dataset and real-world
network logs to confirm that the performance is satisfactory. This system is
transparent and does not work like a black box, making it intuitive for domain
experts, such as network administrators.
| [
{
"version": "v1",
"created": "Tue, 28 Oct 2014 17:29:42 GMT"
}
] | 2014-10-30T00:00:00 | [
[
"Juvonen",
"Antti",
""
],
[
"Sipola",
"Tuomo",
""
]
] | TITLE: Anomaly Detection Framework Using Rule Extraction for Efficient
Intrusion Detection
ABSTRACT: Huge datasets in cyber security, such as network traffic logs, can be
analyzed using machine learning and data mining methods. However, the amount of
collected data is increasing, which makes analysis more difficult. Many machine
learning methods have not been designed for big datasets, and consequently are
slow and difficult to understand. We address the issue of efficient network
traffic classification by creating an intrusion detection framework that
applies dimensionality reduction and conjunctive rule extraction. The system
can perform unsupervised anomaly detection and use this information to create
conjunctive rules that classify huge amounts of traffic in real time. We test
the implemented system with the widely used KDD Cup 99 dataset and real-world
network logs to confirm that the performance is satisfactory. This system is
transparent and does not work like a black box, making it intuitive for domain
experts, such as network administrators.
| no_new_dataset | 0.949106 |
1410.8034 | Xudong Liu | Xudong Liu, Bin Zhang, Ting Zhang and Chang Liu | Latent Feature Based FM Model For Rating Prediction | 4 pages, 3 figures, Large Scale Recommender Systems:workshop of
Recsys 2014 | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rating Prediction is a basic problem in Recommender System, and one of the
most widely used methods is Factorization Machines (FM). However, traditional
matrix factorization methods fail to utilize the benefit of implicit feedback,
which has been proved to be important in the Rating Prediction problem. In this
work, we consider a specific situation, movie rating prediction, where we
assume that a user's watching history has a big influence on his/her rating behavior on
an item. We introduce two models, Latent Dirichlet Allocation(LDA) and
word2vec, both of which perform state-of-the-art results in training latent
features. Based on that, we propose two feature based models. One is the
Topic-based FM Model which provides the implicit feedback to the matrix
factorization. The other is the Vector-based FM Model which expresses the order
info of watching history. Empirical results on three datasets demonstrate that
our method performs better than the baseline model and confirm that
Vector-based FM Model usually works better as it contains the order info.
| [
{
"version": "v1",
"created": "Wed, 29 Oct 2014 15:51:54 GMT"
}
] | 2014-10-30T00:00:00 | [
[
"Liu",
"Xudong",
""
],
[
"Zhang",
"Bin",
""
],
[
"Zhang",
"Ting",
""
],
[
"Liu",
"Chang",
""
]
] | TITLE: Latent Feature Based FM Model For Rating Prediction
ABSTRACT: Rating Prediction is a basic problem in Recommender System, and one of the
most widely used methods is Factorization Machines (FM). However, traditional
matrix factorization methods fail to utilize the benefit of implicit feedback,
which has been proved to be important in the Rating Prediction problem. In this
work, we consider a specific situation, movie rating prediction, where we
assume that a user's watching history has a big influence on his/her rating behavior on
an item. We introduce two models, Latent Dirichlet Allocation(LDA) and
word2vec, both of which perform state-of-the-art results in training latent
features. Based on that, we propose two feature based models. One is the
Topic-based FM Model which provides the implicit feedback to the matrix
factorization. The other is the Vector-based FM Model which expresses the order
info of watching history. Empirical results on three datasets demonstrate that
our method performs better than the baseline model and confirm that
Vector-based FM Model usually works better as it contains the order info.
| no_new_dataset | 0.951233 |
1410.7414 | Junier Oliva | Junier Oliva, Willie Neiswanger, Barnabas Poczos, Eric Xing, Jeff
Schneider | Fast Function to Function Regression | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the problem of regression when both input covariates and output
responses are functions from a nonparametric function class. Function to
function regression (FFR) covers a large range of interesting applications
including time-series prediction problems, and also more general tasks like
studying a mapping between two separate types of distributions. However,
previous nonparametric estimators for FFR type problems scale badly
computationally with the number of input/output pairs in a data-set. Given the
complexity of a mapping between general functions it may be necessary to
consider large data-sets in order to achieve a low estimation risk. To address
this issue, we develop a novel scalable nonparametric estimator, the
Triple-Basis Estimator (3BE), which is capable of operating over datasets with
many instances. To the best of our knowledge, the 3BE is the first
nonparametric FFR estimator that can scale to massive datasets. We analyze the
3BE's risk and derive an upper-bound rate. Furthermore, we show an improvement
of several orders of magnitude in terms of prediction speed and a reduction in
error over previous estimators in various real-world data-sets.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2014 20:15:18 GMT"
}
] | 2014-10-29T00:00:00 | [
[
"Oliva",
"Junier",
""
],
[
"Neiswanger",
"Willie",
""
],
[
"Poczos",
"Barnabas",
""
],
[
"Xing",
"Eric",
""
],
[
"Schneider",
"Jeff",
""
]
] | TITLE: Fast Function to Function Regression
ABSTRACT: We analyze the problem of regression when both input covariates and output
responses are functions from a nonparametric function class. Function to
function regression (FFR) covers a large range of interesting applications
including time-series prediction problems, and also more general tasks like
studying a mapping between two separate types of distributions. However,
previous nonparametric estimators for FFR type problems scale badly
computationally with the number of input/output pairs in a data-set. Given the
complexity of a mapping between general functions it may be necessary to
consider large data-sets in order to achieve a low estimation risk. To address
this issue, we develop a novel scalable nonparametric estimator, the
Triple-Basis Estimator (3BE), which is capable of operating over datasets with
many instances. To the best of our knowledge, the 3BE is the first
nonparametric FFR estimator that can scale to massive datasets. We analyze the
3BE's risk and derive an upper-bound rate. Furthermore, we show an improvement
of several orders of magnitude in terms of prediction speed and a reduction in
error over previous estimators in various real-world data-sets.
| no_new_dataset | 0.93852 |
1410.7540 | Swaleha Saeed | Swaleha Saeed, M Sarosh Umar, M Athar Ali and Musheer Ahmad | Fisher-Yates Chaotic Shuffling Based Image Encryption | null | International Journal of Information Processing, 8(3), 31-41, 2014
ISSN : 0973-8215 IK International Publishing House Pvt. Ltd., New Delhi,
India | null | null | cs.CR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present era, information security is of utmost concern and encryption is
one of the alternatives to ensure security. Chaos based cryptography has
brought a secure and efficient way to meet the challenges of secure multimedia
transmission over the networks. In this paper, we have proposed a secure
Grayscale image encryption methodology in wavelet domain. The proposed
algorithm performs shuffling followed by encryption using states of chaotic map
in a secure manner. Firstly, the image is transformed from spatial domain to
wavelet domain by the Haar wavelet. Subsequently, the Fisher-Yates chaotic
shuffling technique is employed to shuffle the image in wavelet domain to
confuse the relationship between plain image and cipher image. A key dependent
piece-wise linear chaotic map is used to generate chaos for the chaotic
shuffling. Further, the resultant shuffled approximate coefficients are
chaotically modulated. To enhance the statistical characteristics from
cryptographic point of view, the shuffled image is self keyed diffused and
mixing operation is carried out using keystream extracted from one-dimensional
chaotic map and the plain-image. The proposed algorithm is tested over some
standard image dataset. The results of several experimental, statistical and
sensitivity analyses proved that the algorithm provides an efficient and secure
method to achieve trusted gray scale image encryption.
| [
{
"version": "v1",
"created": "Tue, 28 Oct 2014 07:48:03 GMT"
}
] | 2014-10-29T00:00:00 | [
[
"Saeed",
"Swaleha",
""
],
[
"Umar",
"M Sarosh",
""
],
[
"Ali",
"M Athar",
""
],
[
"Ahmad",
"Musheer",
""
]
] | TITLE: Fisher-Yates Chaotic Shuffling Based Image Encryption
ABSTRACT: In the present era, information security is of utmost concern and encryption is
one of the alternatives to ensure security. Chaos based cryptography has
brought a secure and efficient way to meet the challenges of secure multimedia
transmission over the networks. In this paper, we have proposed a secure
Grayscale image encryption methodology in wavelet domain. The proposed
algorithm performs shuffling followed by encryption using states of chaotic map
in a secure manner. Firstly, the image is transformed from spatial domain to
wavelet domain by the Haar wavelet. Subsequently, the Fisher-Yates chaotic
shuffling technique is employed to shuffle the image in wavelet domain to
confuse the relationship between plain image and cipher image. A key dependent
piece-wise linear chaotic map is used to generate chaos for the chaotic
shuffling. Further, the resultant shuffled approximate coefficients are
chaotically modulated. To enhance the statistical characteristics from
cryptographic point of view, the shuffled image is self keyed diffused and
mixing operation is carried out using keystream extracted from one-dimensional
chaotic map and the plain-image. The proposed algorithm is tested over some
standard image dataset. The results of several experimental, statistical and
sensitivity analyses proved that the algorithm provides an efficient and secure
method to achieve trusted gray scale image encryption.
| no_new_dataset | 0.947962 |
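The record above describes shuffling wavelet coefficients with a Fisher-Yates permutation whose randomness comes from a key-dependent piecewise-linear chaotic map (PWLCM). The sketch below only illustrates that combination, not the paper's exact scheme; the key values x0 and p and the burn-in length are assumptions chosen for the example.

```python
import numpy as np

def pwlcm(x, p):
    """One iteration of the piecewise-linear chaotic map on (0, 1)."""
    if x >= 0.5:                       # the map is symmetric about 0.5
        x = 1.0 - x
    return x / p if x < p else (x - p) / (0.5 - p)

def chaotic_fisher_yates(values, x0=0.37, p=0.29, burn_in=200):
    """Fisher-Yates shuffle whose swap indices come from chaotic states."""
    arr = np.array(values, dtype=float, copy=True)
    x = x0
    for _ in range(burn_in):           # discard the transient of the map
        x = pwlcm(x, p)
    for i in range(len(arr) - 1, 0, -1):
        x = pwlcm(x, p)
        j = int(x * (i + 1))           # chaotic state -> index in [0, i]
        arr[i], arr[j] = arr[j], arr[i]
    return arr

# e.g. shuffle a flattened block of wavelet coefficients
print(chaotic_fisher_yates(np.arange(16)))
```

Decryption would replay the same chaotic sequence and undo the swaps in reverse order, which is why the map's initial state and parameter act as the key.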
1410.7744 | Vincent Primault | Vincent Primault, Sonia Ben Mokhtar, Cedric Lauradoux, Lionel Brunie | Differentially Private Location Privacy in Practice | In Proceedings of the Third Workshop on Mobile Security Technologies
(MoST) 2014 (http://arxiv.org/abs/1410.6674) | null | null | MoST/2014/02 | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the wide adoption of handheld devices (e.g. smartphones, tablets) a
large number of location-based services (also called LBSs) have flourished
providing mobile users with real-time and contextual information on the move.
Accounting for the amount of location information they are given by users,
these services are able to track users wherever they go and to learn sensitive
information about them (e.g. their points of interest including home, work,
religious or political places regularly visited). A number of solutions have
been proposed in the past few years to protect users' location information while
still allowing them to enjoy geo-located services. Among the most robust
solutions are those that apply the popular notion of differential privacy to
location privacy (e.g. Geo-Indistinguishability), promising strong theoretical
privacy guarantees with a bounded accuracy loss. While these theoretical
guarantees are attractive, it might be difficult for end users or practitioners
to assess their effectiveness in the wild. In this paper, we carry out a
practical study using real mobility traces coming from two different datasets,
to assess the ability of Geo-Indistinguishability to protect users' points of
interest (POIs). We show that a curious LBS collecting obfuscated location
information sent by mobile users is still able to infer most of the users POIs
with reasonable geographic and semantic precision. This precision
depends on the degree of obfuscation applied by Geo-Indistinguishability.
Nevertheless, the latter also has an impact on the overhead incurred on mobile
devices resulting in a privacy versus overhead trade-off. Finally, we show in
our study that POIs constitute a quasi-identifier for mobile users and that
obfuscating them using Geo-Indistinguishability is not sufficient as an
attacker is able to re-identify at least 63% of them despite a high degree of
obfuscation.
| [
{
"version": "v1",
"created": "Tue, 28 Oct 2014 19:18:31 GMT"
}
] | 2014-10-29T00:00:00 | [
[
"Primault",
"Vincent",
""
],
[
"Mokhtar",
"Sonia Ben",
""
],
[
"Lauradoux",
"Cedric",
""
],
[
"Brunie",
"Lionel",
""
]
] | TITLE: Differentially Private Location Privacy in Practice
ABSTRACT: With the wide adoption of handheld devices (e.g. smartphones, tablets) a
large number of location-based services (also called LBSs) have flourished
providing mobile users with real-time and contextual information on the move.
Accounting for the amount of location information they are given by users,
these services are able to track users wherever they go and to learn sensitive
information about them (e.g. their points of interest including home, work,
religious or political places regularly visited). A number of solutions have
been proposed in the past few years to protect users' location information while
still allowing them to enjoy geo-located services. Among the most robust
solutions are those that apply the popular notion of differential privacy to
location privacy (e.g. Geo-Indistinguishability), promising strong theoretical
privacy guarantees with a bounded accuracy loss. While these theoretical
guarantees are attractive, it might be difficult for end users or practitioners
to assess their effectiveness in the wild. In this paper, we carry out a
practical study using real mobility traces coming from two different datasets,
to assess the ability of Geo-Indistinguishability to protect users' points of
interest (POIs). We show that a curious LBS collecting obfuscated location
information sent by mobile users is still able to infer most of the users POIs
with reasonable geographic and semantic precision. This precision
depends on the degree of obfuscation applied by Geo-Indistinguishability.
Nevertheless, the latter also has an impact on the overhead incurred on mobile
devices resulting in a privacy versus overhead trade-off. Finally, we show in
our study that POIs constitute a quasi-identifier for mobile users and that
obfuscating them using Geo-Indistinguishability is not sufficient as an
attacker is able to re-identify at least 63% of them despite a high degree of
obfuscation.
| no_new_dataset | 0.943556 |
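For reference, the Geo-Indistinguishability mechanism evaluated in the record above perturbs a location with planar Laplace noise; a standard way to sample it uses the -1 branch of the Lambert W function (Andres et al., 2013). The sketch below is illustrative only: it works in a local planar coordinate system in metres, and the epsilon value is an arbitrary assumption (privacy budget per metre), not a setting from the paper.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(epsilon, rng=None):
    """Sample 2-D noise whose radius follows the planar Laplace distribution."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi)                 # uniform direction
    z = rng.random()                                      # uniform in [0, 1)
    # inverse CDF of the radial component via the Lambert W_{-1} branch
    r = -(1.0 / epsilon) * (lambertw((z - 1.0) / np.e, k=-1).real + 1.0)
    return r * np.cos(theta), r * np.sin(theta)

def obfuscate(x, y, epsilon=0.005):
    """Report a perturbed version of the true planar position (x, y) in metres."""
    dx, dy = planar_laplace_noise(epsilon)
    return x + dx, y + dy

print(obfuscate(0.0, 0.0))   # a noisy location, on average ~2/epsilon metres away
```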
1410.7758 | Tobias Blanke | Tobias Blanke, Mark Hedges | Towards a Virtual Data Centre for Classics | null | null | null | null | cs.DL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The paper presents some of our work on integrating datasets in Classics. We
present the results of various projects we have carried out in this domain. The conclusions
from LaQuAT concerned limitations to the approach rather than solutions. The
relational model followed by OGSA-DAI was more effective for resources that
consist primarily of structured data (which we call data-centric) rather than
for largely unstructured text (which we call text-centric), which makes up a
significant component of the datasets we were using. This approach was,
moreover, insufficiently flexible to deal with the semantic issues. The gMan
project, on the other hand, addressed these problems by virtualizing data
resources using full-text indexes, which can then be used to provide different
views onto the collections and services that more closely match the sort of
information organization and retrieval activities found in the humanities, in
an environment that is more interactive, researcher-focused, and
researcher-driven.
| [
{
"version": "v1",
"created": "Tue, 28 Oct 2014 19:25:52 GMT"
}
] | 2014-10-29T00:00:00 | [
[
"Blanke",
"Tobias",
""
],
[
"Hedges",
"Mark",
""
]
] | TITLE: Towards a Virtual Data Centre for Classics
ABSTRACT: The paper presents some of our work on integrating datasets in Classics. We
present the results of various projects we have carried out in this domain. The conclusions
from LaQuAT concerned limitations to the approach rather than solutions. The
relational model followed by OGSA-DAI was more effective for resources that
consist primarily of structured data (which we call data-centric) rather than
for largely unstructured text (which we call text-centric), which makes up a
significant component of the datasets we were using. This approach was,
moreover, insufficiently flexible to deal with the semantic issues. The gMan
project, on the other hand, addressed these problems by virtualizing data
resources using full-text indexes, which can then be used to provide different
views onto the collections and services that more closely match the sort of
information organization and retrieval activities found in the humanities, in
an environment that is more interactive, researcher-focused, and
researcher-driven.
| no_new_dataset | 0.949012 |
1109.5460 | Ke Xu | Xiao Liang, Xudong Zheng, Weifeng Lv, Tongyu Zhu, Ke Xu | The scaling of human mobility by taxis is exponential | 20 pages, 7 figures | Physica A 391 (2012) 2135-2144 | 10.1016/j.physa.2011.11.035 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a significant factor in urban planning, traffic forecasting and prediction
of epidemics, modeling patterns of human mobility has drawn intensive attention
from researchers for decades. Power-law distributions and their variations are
observed in quite a few real-world human mobility datasets such as the
movements of banking notes, trackings of cell phone users' locations and
trajectories of vehicles. In this paper, we build models for 20 million
trajectories with fine granularity collected from more than 10 thousand taxis
in Beijing. In contrast to most models observed in human mobility data, the
taxis' traveling displacements in urban areas tend to follow an exponential
distribution instead of a power-law. Similarly, the elapsed time can also be
well approximated by an exponential distribution. Notably, analysis of
the interevent time indicates the bursty nature of human mobility, similar to
many other human activities.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2011 07:20:32 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Liang",
"Xiao",
""
],
[
"Zheng",
"Xudong",
""
],
[
"Lv",
"Weifeng",
""
],
[
"Zhu",
"Tongyu",
""
],
[
"Xu",
"Ke",
""
]
] | TITLE: The scaling of human mobility by taxis is exponential
ABSTRACT: As a significant factor in urban planning, traffic forecasting and prediction
of epidemics, modeling patterns of human mobility has drawn intensive attention
from researchers for decades. Power-law distributions and their variations are
observed in quite a few real-world human mobility datasets such as the
movements of banking notes, trackings of cell phone users' locations and
trajectories of vehicles. In this paper, we build models for 20 million
trajectories with fine granularity collected from more than 10 thousand taxis
in Beijing. In contrast to most models observed in human mobility data, the
taxis' traveling displacements in urban areas tend to follow an exponential
distribution instead of a power-law. Similarly, the elapsed time can also be
well approximated by an exponential distribution. Notably, analysis of
the interevent time indicates the bursty nature of human mobility, similar to
many other human activities.
| no_new_dataset | 0.940188 |
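The key claim in the record above, that displacement lengths follow an exponential rather than a power-law distribution, is the kind of statement one can check with a maximum-likelihood fit and a survival-function comparison. The sketch below uses synthetic displacements purely for illustration; the scale value and sample size are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
displacements = rng.exponential(scale=3.2, size=100_000)      # km, synthetic

lam = 1.0 / displacements.mean()        # MLE of the exponential rate
print(f"fitted rate: {lam:.3f} per km (mean trip {1/lam:.2f} km)")

# For an exponential law, log P(R > r) is linear in r; compare the empirical
# survival function with the fitted one on a grid of distances.
r_grid = np.linspace(0.0, np.quantile(displacements, 0.99), 20)
empirical = np.array([(displacements > r).mean() for r in r_grid])
fitted = np.exp(-lam * r_grid)
print(np.max(np.abs(empirical - fitted)))   # small gap supports the exponential fit
```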
1311.2887 | Faraz Zaidi | Aneeq Hashmi, Faraz Zaidi, Arnaud Sallaberry, Tariq Mehmood | Are all Social Networks Structurally Similar? A Comparative Study using
Network Statistics and Metrics | ASONAM 2012, Istanbul : Turkey (2012) | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The modern age has seen an exponential growth of social network data
available on the web. Analysis of these networks reveals important structural
information about these networks in particular and about our societies in
general. More often than not, analysis of these networks is concerned with
identifying similarities among social networks and how they are different from
other networks such as protein interaction networks, computer networks and food
web. In this paper, our objective is to perform a critical analysis of
different social networks using structural metrics in an effort to highlight
their similarities and differences. We use five different social network
datasets which are contextually and semantically different from each other. We
then analyze these networks using a number of different network statistics and
metrics. Our results show that although these social networks have been
constructed from different contexts, they are structurally similar. We also
review the snowball sampling method and show its vulnerability against
different network metrics.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2013 09:55:12 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Oct 2014 00:08:13 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Hashmi",
"Aneeq",
""
],
[
"Zaidi",
"Faraz",
""
],
[
"Sallaberry",
"Arnaud",
""
],
[
"Mehmood",
"Tariq",
""
]
] | TITLE: Are all Social Networks Structurally Similar? A Comparative Study using
Network Statistics and Metrics
ABSTRACT: The modern age has seen an exponential growth of social network data
available on the web. Analysis of these networks reveals important structural
information about these networks in particular and about our societies in
general. More often than not, analysis of these networks is concerned with
identifying similarities among social networks and how they are different from
other networks such as protein interaction networks, computer networks and food
web. In this paper, our objective is to perform a critical analysis of
different social networks using structural metrics in an effort to highlight
their similarities and differences. We use five different social network
datasets which are contextually and semantically different from each other. We
then analyze these networks using a number of different network statistics and
metrics. Our results show that although these social networks have been
constructed from different contexts, they are structurally similar. We also
review the snowball sampling method and show its vulnerability against
different network metrics.
| no_new_dataset | 0.939692 |
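The comparison in the record above rests on computing standard structural statistics per network and, separately, on snowball sampling. A minimal networkx sketch of both ingredients follows; it runs on a random graph because the five datasets used in the paper are not reproduced here.

```python
import networkx as nx

G = nx.erdos_renyi_graph(1000, 0.01, seed=42)     # stand-in for a social network

stats = {
    "nodes": G.number_of_nodes(),
    "edges": G.number_of_edges(),
    "density": nx.density(G),
    "avg_clustering": nx.average_clustering(G),
    "degree_assortativity": nx.degree_assortativity_coefficient(G),
}
print(stats)

def snowball_sample(G, seed_node, waves=2):
    """Collect all nodes reachable from the seed within a fixed number of waves."""
    sampled, frontier = {seed_node}, {seed_node}
    for _ in range(waves):
        frontier = {nbr for u in frontier for nbr in G.neighbors(u)} - sampled
        sampled |= frontier
    return G.subgraph(sampled).copy()

sample = snowball_sample(G, seed_node=0, waves=2)
print(nx.density(sample), nx.average_clustering(sample))   # compare with stats above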
1410.6880 | Seunghak Lee | Seunghak Lee and Eric P. Xing | Screening Rules for Overlapping Group Lasso | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, to solve large-scale lasso and group lasso problems, screening
rules have been developed, the goal of which is to reduce the problem size by
efficiently discarding zero coefficients using simple rules independently of
the others. However, screening for overlapping group lasso remains an open
challenge because the overlaps between groups make it infeasible to test each
group independently. In this paper, we develop screening rules for overlapping
group lasso. To address the challenge arising from groups with overlaps, we
take into account overlapping groups only if they are inclusive of the group
being tested, and then we derive screening rules, adopting the dual polytope
projection approach. This strategy allows us to screen each group independently
of each other. In our experiments, we demonstrate the efficiency of our
screening rules on various datasets.
| [
{
"version": "v1",
"created": "Sat, 25 Oct 2014 04:06:49 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Lee",
"Seunghak",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: Screening Rules for Overlapping Group Lasso
ABSTRACT: Recently, to solve large-scale lasso and group lasso problems, screening
rules have been developed, the goal of which is to reduce the problem size by
efficiently discarding zero coefficients using simple rules independently of
the others. However, screening for overlapping group lasso remains an open
challenge because the overlaps between groups make it infeasible to test each
group independently. In this paper, we develop screening rules for overlapping
group lasso. To address the challenge arising from groups with overlaps, we
take into account overlapping groups only if they are inclusive of the group
being tested, and then we derive screening rules, adopting the dual polytope
projection approach. This strategy allows us to screen each group independently
of each other. In our experiments, we demonstrate the efficiency of our
screening rules on various datasets.
| no_new_dataset | 0.953319 |
1410.6990 | Dacheng Tao | Chang Xu, Tongliang Liu, Dacheng Tao, Chao Xu | Local Rademacher Complexity for Multi-label Learning | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the local Rademacher complexity of empirical risk minimization
(ERM)-based multi-label learning algorithms, and in doing so propose a new
algorithm for multi-label learning. Rather than using the trace norm to
regularize the multi-label predictor, we instead minimize the tail sum of the
singular values of the predictor in multi-label learning. Benefiting from the
use of the local Rademacher complexity, our algorithm, therefore, has a sharper
generalization error bound and a faster convergence rate. Compared to methods
that minimize over all singular values, concentrating on the tail singular
values results in better recovery of the low-rank structure of the multi-label
predictor, which plays an important role in exploiting label correlations. We
propose a new conditional singular value thresholding algorithm to solve the
resulting objective function. Empirical studies on real-world datasets validate
our theoretical results and demonstrate the effectiveness of the proposed
algorithm.
| [
{
"version": "v1",
"created": "Sun, 26 Oct 2014 05:52:33 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Xu",
"Chang",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Xu",
"Chao",
""
]
] | TITLE: Local Rademacher Complexity for Multi-label Learning
ABSTRACT: We analyze the local Rademacher complexity of empirical risk minimization
(ERM)-based multi-label learning algorithms, and in doing so propose a new
algorithm for multi-label learning. Rather than using the trace norm to
regularize the multi-label predictor, we instead minimize the tail sum of the
singular values of the predictor in multi-label learning. Benefiting from the
use of the local Rademacher complexity, our algorithm, therefore, has a sharper
generalization error bound and a faster convergence rate. Compared to methods
that minimize over all singular values, concentrating on the tail singular
values results in better recovery of the low-rank structure of the multi-label
predictor, which plays an important role in exploiting label correlations. We
propose a new conditional singular value thresholding algorithm to solve the
resulting objective function. Empirical studies on real-world datasets validate
our theoretical results and demonstrate the effectiveness of the proposed
algorithm.
| no_new_dataset | 0.949623 |
1410.6996 | Musa Maharramov | Musa Maharramov and Biondo Biondi | Improved depth imaging by constrained full-waveform inversion | 5 pages, 2 figures | null | null | SEP 155 | physics.geo-ph cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a formulation of full-wavefield inversion (FWI) as a constrained
optimization problem, and describe a computationally efficient technique for
solving constrained full-wavefield inversion (CFWI). The technique is based on
using a total-variation regularization method, with the regularization weighted
in favor of constraining deeper subsurface model sections. The method helps to
promote "edge-preserving" blocky model inversion where fitting the seismic data
alone fails to adequately constrain the model. The method is demonstrated on
synthetic datasets with added noise, and is shown to enhance the sharpness of
the inverted model and correctly reposition mispositioned reflectors by better
constraining the velocity model at depth.
| [
{
"version": "v1",
"created": "Sun, 26 Oct 2014 08:02:01 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Maharramov",
"Musa",
""
],
[
"Biondi",
"Biondo",
""
]
] | TITLE: Improved depth imaging by constrained full-waveform inversion
ABSTRACT: We propose a formulation of full-wavefield inversion (FWI) as a constrained
optimization problem, and describe a computationally efficient technique for
solving constrained full-wavefield inversion (CFWI). The technique is based on
using a total-variation regularization method, with the regularization weighted
in favor of constraining deeper subsurface model sections. The method helps to
promote "edge-preserving" blocky model inversion where fitting the seismic data
alone fails to adequately constrain the model. The method is demonstrated on
synthetic datasets with added noise, and is shown to enhance the sharpness of
the inverted model and correctly reposition mispositioned reflectors by better
constraining the velocity model at depth.
| no_new_dataset | 0.949342 |
1410.7100 | Harris Georgiou | Harris V. Georgiou | Estimating the intrinsic dimension in fMRI space via dataset fractal
analysis - Counting the `cpu cores' of the human brain | 27 pages, 10 figures, 2 tables, 47 references | null | null | HG/AI.1014.27v1 (draft/preprint) | cs.AI cs.CV q-bio.NC stat.ML | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Functional Magnetic Resonance Imaging (fMRI) is a powerful non-invasive tool
for localizing and analyzing brain activity. This study focuses on one very
important aspect of the functional properties of human brain, specifically the
estimation of the level of parallelism when performing complex cognitive tasks.
Using fMRI as the main modality, the human brain activity is investigated
through a purely data-driven signal processing and dimensionality analysis
approach. Specifically, the fMRI signal is treated as a multi-dimensional data
space and its intrinsic `complexity' is studied via dataset fractal analysis
and blind-source separation (BSS) methods. One simulated and two real fMRI
datasets are used in combination with Independent Component Analysis (ICA) and
fractal analysis for estimating the intrinsic (true) dimensionality, in order
to provide data-driven experimental evidence on the number of independent brain
processes that run in parallel when visual or visuo-motor tasks are performed.
Although this number cannot be defined as a strict threshold but rather as
a continuous range, when a specific activation level is defined, a
corresponding number of parallel processes or the casual equivalent of `cpu
cores' can be detected in normal human brain activity.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2014 00:25:24 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Georgiou",
"Harris V.",
""
]
] | TITLE: Estimating the intrinsic dimension in fMRI space via dataset fractal
analysis - Counting the `cpu cores' of the human brain
ABSTRACT: Functional Magnetic Resonance Imaging (fMRI) is a powerful non-invasive tool
for localizing and analyzing brain activity. This study focuses on one very
important aspect of the functional properties of human brain, specifically the
estimation of the level of parallelism when performing complex cognitive tasks.
Using fMRI as the main modality, the human brain activity is investigated
through a purely data-driven signal processing and dimensionality analysis
approach. Specifically, the fMRI signal is treated as a multi-dimensional data
space and its intrinsic `complexity' is studied via dataset fractal analysis
and blind-source separation (BSS) methods. One simulated and two real fMRI
datasets are used in combination with Independent Component Analysis (ICA) and
fractal analysis for estimating the intrinsic (true) dimensionality, in order
to provide data-driven experimental evidence on the number of independent brain
processes that run in parallel when visual or visuo-motor tasks are performed.
Although this number cannot be defined as a strict threshold but rather as
a continuous range, when a specific activation level is defined, a
corresponding number of parallel processes or the casual equivalent of `cpu
cores' can be detected in normal human brain activity.
| no_new_dataset | 0.945147 |
1410.7372 | Jayadeva | Jayadeva, Sanjit S. Batra, and Siddharth Sabharwal | Feature Selection through Minimization of the VC dimension | arXiv admin note: text overlap with arXiv:1410.4573 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection involves identifying the most relevant subset of input
features, with a view to improving generalization of predictive models by
reducing overfitting. Directly searching for the most relevant combination of
attributes is NP-hard. Variable selection is of critical importance in many
applications, such as micro-array data analysis, where selecting a small number
of discriminative features is crucial to developing useful models of disease
mechanisms, as well as for prioritizing targets for drug discovery. The
recently proposed Minimal Complexity Machine (MCM) provides a way to learn a
hyperplane classifier by minimizing an exact ($\Theta$) bound on its
VC dimension. It is well known that a lower VC dimension contributes to good
generalization. For a linear hyperplane classifier in the input space, the VC
dimension is upper bounded by the number of features; hence, a linear
classifier with a small VC dimension is parsimonious in the set of features it
employs. In this paper, we use the linear MCM to learn a classifier in which a
large number of weights are zero; features with non-zero weights are the ones
that are chosen. Selected features are used to learn a kernel SVM classifier.
On a number of benchmark datasets, the features chosen by the linear MCM yield
comparable or better test set accuracy than when methods such as ReliefF and
FCBF are used for the task. The linear MCM typically chooses one-tenth the
number of attributes chosen by the other methods; on some very high dimensional
datasets, the MCM chooses about $0.6\%$ of the features; in comparison, ReliefF
and FCBF choose 70 to 140 times more features, thus demonstrating that
minimizing the VC dimension may provide a new, and very effective route for
feature selection and for learning sparse representations.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2014 19:46:55 GMT"
}
] | 2014-10-28T00:00:00 | [
[
"Jayadeva",
"",
""
],
[
"Batra",
"Sanjit S.",
""
],
[
"Sabharwal",
"Siddharth",
""
]
] | TITLE: Feature Selection through Minimization of the VC dimension
ABSTRACT: Feature selection involves identifying the most relevant subset of input
features, with a view to improving generalization of predictive models by
reducing overfitting. Directly searching for the most relevant combination of
attributes is NP-hard. Variable selection is of critical importance in many
applications, such as micro-array data analysis, where selecting a small number
of discriminative features is crucial to developing useful models of disease
mechanisms, as well as for prioritizing targets for drug discovery. The
recently proposed Minimal Complexity Machine (MCM) provides a way to learn a
hyperplane classifier by minimizing an exact ($\Theta$) bound on its
VC dimension. It is well known that a lower VC dimension contributes to good
generalization. For a linear hyperplane classifier in the input space, the VC
dimension is upper bounded by the number of features; hence, a linear
classifier with a small VC dimension is parsimonious in the set of features it
employs. In this paper, we use the linear MCM to learn a classifier in which a
large number of weights are zero; features with non-zero weights are the ones
that are chosen. Selected features are used to learn a kernel SVM classifier.
On a number of benchmark datasets, the features chosen by the linear MCM yield
comparable or better test set accuracy than when methods such as ReliefF and
FCBF are used for the task. The linear MCM typically chooses one-tenth the
number of attributes chosen by the other methods; on some very high dimensional
datasets, the MCM chooses about $0.6\%$ of the features; in comparison, ReliefF
and FCBF choose 70 to 140 times more features, thus demonstrating that
minimizing the VC dimension may provide a new, and very effective route for
feature selection and for learning sparse representations.
| no_new_dataset | 0.946794 |
1410.6532 | Ziming Zhang | Ziming Zhang, Yuting Chen, Venkatesh Saligrama | A Novel Visual Word Co-occurrence Model for Person Re-identification | Accepted at ECCV Workshop on Visual Surveillance and
Re-Identification, 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification aims to maintain the identity of an individual in
diverse locations through different non-overlapping camera views. The problem
is fundamentally challenging due to appearance variations resulting from
differing poses, illumination and configurations of camera views. To deal with
these difficulties, we propose a novel visual word co-occurrence model. We
first map each pixel of an image to a visual word using a codebook, which is
learned in an unsupervised manner. The appearance transformation between camera
views is encoded by a co-occurrence matrix of visual word joint distributions
in probe and gallery images. Our appearance model naturally accounts for
spatial similarities and variations caused by pose, illumination &
configuration change across camera views. Linear SVMs are then trained as
classifiers using these co-occurrence descriptors. On the VIPeR and CUHK Campus
benchmark datasets, our method achieves 83.86% and 85.49% at rank-15 on the
Cumulative Match Characteristic (CMC) curves, and beats the state-of-the-art
results by 10.44% and 22.27%.
| [
{
"version": "v1",
"created": "Fri, 24 Oct 2014 01:04:37 GMT"
}
] | 2014-10-27T00:00:00 | [
[
"Zhang",
"Ziming",
""
],
[
"Chen",
"Yuting",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: A Novel Visual Word Co-occurrence Model for Person Re-identification
ABSTRACT: Person re-identification aims to maintain the identity of an individual in
diverse locations through different non-overlapping camera views. The problem
is fundamentally challenging due to appearance variations resulting from
differing poses, illumination and configurations of camera views. To deal with
these difficulties, we propose a novel visual word co-occurrence model. We
first map each pixel of an image to a visual word using a codebook, which is
learned in an unsupervised manner. The appearance transformation between camera
views is encoded by a co-occurrence matrix of visual word joint distributions
in probe and gallery images. Our appearance model naturally accounts for
spatial similarities and variations caused by pose, illumination &
configuration change across camera views. Linear SVMs are then trained as
classifiers using these co-occurrence descriptors. On the VIPeR and CUHK Campus
benchmark datasets, our method achieves 83.86% and 85.49% at rank-15 on the
Cumulative Match Characteristic (CMC) curves, and beats the state-of-the-art
results by 10.44% and 22.27%.
| no_new_dataset | 0.956391 |
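The core data structure in the record above is a co-occurrence matrix of visual-word assignments between a probe and a gallery image. The sketch below illustrates only that step with random features and ignores the spatial-neighbourhood handling of the full model; all shapes, sizes, and names are assumptions made for the example.

```python
import numpy as np

def assign_words(features, codebook):
    """Map each pixel feature (N, D) to its nearest codebook entry (K, D)."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)                       # (N,) visual-word indices

def cooccurrence(words_probe, words_gallery, k):
    """Normalised joint distribution of visual words over corresponding pixels."""
    C = np.zeros((k, k))
    for wp, wg in zip(words_probe, words_gallery):
        C[wp, wg] += 1.0
    return C / C.sum()

rng = np.random.default_rng(1)
codebook = rng.normal(size=(32, 8))                 # 32 visual words, 8-D features
probe = rng.normal(size=(500, 8))                   # per-pixel features, probe view
gallery = probe + 0.1 * rng.normal(size=(500, 8))   # roughly corresponding gallery view
C = cooccurrence(assign_words(probe, codebook),
                 assign_words(gallery, codebook), k=32)
print(C.shape, round(C.sum(), 6))                   # (32, 32) 1.0
```

Vectorising such matrices yields the co-occurrence descriptors that, per the abstract, are fed to linear SVMs.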
1410.6629 | Gianluca Stringhini | Gianluca Stringhini, Olivier Thonnard | That Ain't You: Detecting Spearphishing Emails Before They Are Sent | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the ways in which attackers try to steal sensitive information from
corporations is by sending spearphishing emails. Such emails typically
appear to be sent by one of the victim's coworkers, but have instead been
crafted by an attacker. A particularly insidious type of spearphishing email
is one that does not only claim to come from a trusted party, but was
actually sent from that party's legitimate email account, which was compromised
in the first place. In this paper, we propose a radical change of focus in the
techniques used for detecting such malicious emails: instead of looking for
particular features that are indicative of attack emails, we look for possible
indicators of impersonation of the legitimate owners. We present
IdentityMailer, a system that validates the authorship of emails by learning
the typical email-sending behavior of users over time, and comparing any
subsequent email sent from their accounts against this model. Our experiments
on real world e-mail datasets demonstrate that our system can effectively block
advanced email attacks sent from genuine email accounts, which traditional
protection systems are unable to detect. Moreover, we show that it is resilient
to an attacker willing to evade the system. To the best of our knowledge,
IdentityMailer is the first system able to identify spearphishing emails that
are sent from within an organization, by a skilled attacker having access to a
compromised email account.
| [
{
"version": "v1",
"created": "Fri, 24 Oct 2014 09:45:03 GMT"
}
] | 2014-10-27T00:00:00 | [
[
"Stringhini",
"Gianluca",
""
],
[
"Thonnard",
"Olivier",
""
]
] | TITLE: That Ain't You: Detecting Spearphishing Emails Before They Are Sent
ABSTRACT: One of the ways in which attackers try to steal sensitive information from
corporations is by sending spearphishing emails. Such emails typically
appear to be sent by one of the victim's coworkers, but have instead been
crafted by an attacker. A particularly insidious type of spearphishing email
is one that does not only claim to come from a trusted party, but was
actually sent from that party's legitimate email account, which was compromised
in the first place. In this paper, we propose a radical change of focus in the
techniques used for detecting such malicious emails: instead of looking for
particular features that are indicative of attack emails, we look for possible
indicators of impersonation of the legitimate owners. We present
IdentityMailer, a system that validates the authorship of emails by learning
the typical email-sending behavior of users over time, and comparing any
subsequent email sent from their accounts against this model. Our experiments
on real world e-mail datasets demonstrate that our system can effectively block
advanced email attacks sent from genuine email accounts, which traditional
protection systems are unable to detect. Moreover, we show that it is resilient
to an attacker willing to evade the system. To the best of our knowledge,
IdentityMailer is the first system able to identify spearphishing emails that
are sent from within an organization, by a skilled attacker having access to a
compromised email account.
| no_new_dataset | 0.925399 |
1410.6725 | Mark Taylor | Mark Taylor | Visualising Large Datasets in TOPCAT v4 | 4 pages, 2 figures, conference paper submitted to arXiv a year after
acceptance | Astronomical Data Anaylsis Softward and Systems XXIII. Proceedings
of a meeting held 29 September - 3 October 2013 at Waikoloa Beach Marriott,
Hawaii, USA. Edited by N. Manset and P. Forshay ASP conference series, vol.
485, 2014, p.257 | null | null | astro-ph.IM cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | TOPCAT is a widely used desktop application for manipulation of astronomical
catalogues and other tables, which has long provided fast interactive
visualisation features including 1, 2 and 3-d plots, multiple datasets, linked
views, color coding, transparency and more. In Version 4 a new plotting library
has been written from scratch to deliver new and enhanced visualisation
capabilities. This paper describes some of the considerations in the design and
implementation, particularly in regard to providing comprehensible interactive
visualisation for multi-million point datasets.
| [
{
"version": "v1",
"created": "Fri, 24 Oct 2014 16:13:11 GMT"
}
] | 2014-10-27T00:00:00 | [
[
"Taylor",
"Mark",
""
]
] | TITLE: Visualising Large Datasets in TOPCAT v4
ABSTRACT: TOPCAT is a widely used desktop application for manipulation of astronomical
catalogues and other tables, which has long provided fast interactive
visualisation features including 1, 2 and 3-d plots, multiple datasets, linked
views, color coding, transparency and more. In Version 4 a new plotting library
has been written from scratch to deliver new and enhanced visualisation
capabilities. This paper describes some of the considerations in the design and
implementation, particularly in regard to providing comprehensible interactive
visualisation for multi-million point datasets.
| no_new_dataset | 0.943243 |
1410.6776 | Purushottam Kar | Purushottam Kar, Harikrishna Narasimhan, Prateek Jain | Online and Stochastic Gradient Methods for Non-decomposable Loss
Functions | 25 pages, 3 figures, To appear in the proceedings of the 28th Annual
Conference on Neural Information Processing Systems, NIPS 2014 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern applications in sensitive domains such as biometrics and medicine
frequently require the use of non-decomposable loss functions such as
precision@k, F-measure etc. Compared to point loss functions such as
hinge-loss, these offer much more fine grained control over prediction, but at
the same time present novel challenges in terms of algorithm design and
analysis. In this work we initiate a study of online learning techniques for
such non-decomposable loss functions with an aim to enable incremental learning
as well as design scalable solvers for batch problems. To this end, we propose
an online learning framework for such loss functions. Our model enjoys several
nice properties, chief amongst them being the existence of efficient online
learning algorithms with sublinear regret and online to batch conversion
bounds. Our model is a provable extension of existing online learning models
for point loss functions. We instantiate two popular losses, prec@k and pAUC,
in our model and prove sublinear regret bounds for both of them. Our proofs
require a novel structural lemma over ranked lists which may be of independent
interest. We then develop scalable stochastic gradient descent solvers for
non-decomposable loss functions. We show that for a large family of loss
functions satisfying a certain uniform convergence property (that includes
prec@k, pAUC, and F-measure), our methods provably converge to the empirical
risk minimizer. Such uniform convergence results were not known for these
losses and we establish these using novel proof techniques. We then use
extensive experimentation on real life and benchmark datasets to establish that
our method can be orders of magnitude faster than a recently proposed cutting
plane method.
| [
{
"version": "v1",
"created": "Fri, 24 Oct 2014 18:45:23 GMT"
}
] | 2014-10-27T00:00:00 | [
[
"Kar",
"Purushottam",
""
],
[
"Narasimhan",
"Harikrishna",
""
],
[
"Jain",
"Prateek",
""
]
] | TITLE: Online and Stochastic Gradient Methods for Non-decomposable Loss
Functions
ABSTRACT: Modern applications in sensitive domains such as biometrics and medicine
frequently require the use of non-decomposable loss functions such as
precision@k, F-measure etc. Compared to point loss functions such as
hinge-loss, these offer much more fine grained control over prediction, but at
the same time present novel challenges in terms of algorithm design and
analysis. In this work we initiate a study of online learning techniques for
such non-decomposable loss functions with an aim to enable incremental learning
as well as design scalable solvers for batch problems. To this end, we propose
an online learning framework for such loss functions. Our model enjoys several
nice properties, chief amongst them being the existence of efficient online
learning algorithms with sublinear regret and online to batch conversion
bounds. Our model is a provable extension of existing online learning models
for point loss functions. We instantiate two popular losses, prec@k and pAUC,
in our model and prove sublinear regret bounds for both of them. Our proofs
require a novel structural lemma over ranked lists which may be of independent
interest. We then develop scalable stochastic gradient descent solvers for
non-decomposable loss functions. We show that for a large family of loss
functions satisfying a certain uniform convergence property (that includes
prec@k, pAUC, and F-measure), our methods provably converge to the empirical
risk minimizer. Such uniform convergence results were not known for these
losses and we establish these using novel proof techniques. We then use
extensive experimentation on real life and benchmark datasets to establish that
our method can be orders of magnitude faster than a recently proposed cutting
plane method.
| no_new_dataset | 0.945248 |
1405.6879 | M\'arton Karsai | M\'arton Karsai, Gerardo I\~niguez, Kimmo Kaski, J\'anos Kert\'esz | Complex contagion process in spreading of online innovation | 27 pages, 11 figures, 2 tables | J. R. Soc. Interface 11, 101 (2014) | 10.1098/rsif.2014.0694 | null | physics.soc-ph cs.SI nlin.AO physics.comp-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion of innovation can be interpreted as a social spreading phenomenon
governed by the impact of media and social interactions. Although these
mechanisms have been identified by quantitative theories, their role and
relative importance are not entirely understood, since empirical verification
has so far been hindered by the lack of appropriate data. Here we analyse a
dataset recording the spreading dynamics of the world's largest Voice over
Internet Protocol service to empirically support the assumptions behind models
of social contagion. We show that the rate of spontaneous service adoption is
constant, the probability of adoption via social influence is linearly
proportional to the fraction of adopting neighbours, and the rate of service
termination is time-invariant and independent of the behaviour of peers. By
implementing the detected diffusion mechanisms into a dynamical agent-based
model, we are able to emulate the adoption dynamics of the service in several
countries worldwide. This approach enables us to make medium-term predictions
of service adoption and disclose dependencies between the dynamics of
innovation spreading and the socioeconomic development of a country.
| [
{
"version": "v1",
"created": "Tue, 27 May 2014 12:03:31 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Oct 2014 10:10:21 GMT"
}
] | 2014-10-24T00:00:00 | [
[
"Karsai",
"Márton",
""
],
[
"Iñiguez",
"Gerardo",
""
],
[
"Kaski",
"Kimmo",
""
],
[
"Kertész",
"János",
""
]
] | TITLE: Complex contagion process in spreading of online innovation
ABSTRACT: Diffusion of innovation can be interpreted as a social spreading phenomenon
governed by the impact of media and social interactions. Although these
mechanisms have been identified by quantitative theories, their role and
relative importance are not entirely understood, since empirical verification
has so far been hindered by the lack of appropriate data. Here we analyse a
dataset recording the spreading dynamics of the world's largest Voice over
Internet Protocol service to empirically support the assumptions behind models
of social contagion. We show that the rate of spontaneous service adoption is
constant, the probability of adoption via social influence is linearly
proportional to the fraction of adopting neighbours, and the rate of service
termination is time-invariant and independent of the behaviour of peers. By
implementing the detected diffusion mechanisms into a dynamical agent-based
model, we are able to emulate the adoption dynamics of the service in several
countries worldwide. This approach enables us to make medium-term predictions
of service adoption and disclose dependencies between the dynamics of
innovation spreading and the socioeconomic development of a country.
| no_new_dataset | 0.940134 |
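The three mechanisms reported in the record above, a constant spontaneous adoption rate, a peer-adoption probability linear in the fraction of adopting neighbours, and a constant termination rate, translate directly into an agent-based simulation. The rates and the network below are illustrative assumptions, not the values fitted in the paper.

```python
import numpy as np
import networkx as nx

def simulate_adoption(G, p_spont=0.001, p_peer=0.05, p_stop=0.002,
                      steps=300, seed=0):
    """Fraction of adopters over time under spontaneous + peer-driven adoption."""
    rng = np.random.default_rng(seed)
    adopted = {v: False for v in G}
    curve = []
    for _ in range(steps):
        for v in G:
            if not adopted[v]:
                nbrs = list(G.neighbors(v))
                frac = (sum(adopted[u] for u in nbrs) / len(nbrs)) if nbrs else 0.0
                if rng.random() < p_spont + p_peer * frac:   # linear social influence
                    adopted[v] = True
            elif rng.random() < p_stop:                      # constant termination rate
                adopted[v] = False
        curve.append(sum(adopted.values()) / len(adopted))
    return curve

G = nx.watts_strogatz_graph(2000, 6, 0.1, seed=1)
curve = simulate_adoption(G)
print(curve[0], curve[-1])    # slow start, then saturation of the adopter fraction
```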
1409.5241 | Basura Fernando | Basura Fernando, Amaury Habrard, Marc Sebban and Tinne Tuytelaars | Subspace Alignment For Domain Adaptation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a new domain adaptation (DA) algorithm where the
source and target domains are represented by subspaces spanned by eigenvectors.
Our method seeks a domain invariant feature space by learning a mapping
function which aligns the source subspace with the target one. We show that the
solution of the corresponding optimization problem can be obtained in a simple
closed form, leading to an extremely fast algorithm. We present two approaches
to determine the only hyper-parameter in our method corresponding to the size
of the subspaces. In the first approach we tune the size of subspaces using a
theoretical bound on the stability of the obtained result. In the second
approach, we use maximum likelihood estimation to determine the subspace size,
which is particularly useful for high dimensional data. Apart from PCA, we
propose a subspace creation method that outperforms partial least squares (PLS)
and linear discriminant analysis (LDA) in domain adaptation. We test our method
on various datasets and show that, despite its intrinsic simplicity, it
outperforms state of the art DA methods.
| [
{
"version": "v1",
"created": "Thu, 18 Sep 2014 09:57:41 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Oct 2014 08:40:06 GMT"
}
] | 2014-10-24T00:00:00 | [
[
"Fernando",
"Basura",
""
],
[
"Habrard",
"Amaury",
""
],
[
"Sebban",
"Marc",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Subspace Alignment For Domain Adaptation
ABSTRACT: In this paper, we introduce a new domain adaptation (DA) algorithm where the
source and target domains are represented by subspaces spanned by eigenvectors.
Our method seeks a domain invariant feature space by learning a mapping
function which aligns the source subspace with the target one. We show that the
solution of the corresponding optimization problem can be obtained in a simple
closed form, leading to an extremely fast algorithm. We present two approaches
to determine the only hyper-parameter in our method corresponding to the size
of the subspaces. In the first approach we tune the size of subspaces using a
theoretical bound on the stability of the obtained result. In the second
approach, we use maximum likelihood estimation to determine the subspace size,
which is particularly useful for high dimensional data. Apart from PCA, we
propose a subspace creation method that outperforms partial least squares (PLS)
and linear discriminant analysis (LDA) in domain adaptation. We test our method
on various datasets and show that, despite its intrinsic simplicity, it
outperforms state of the art DA methods.
| no_new_dataset | 0.945901 |
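Subspace alignment itself reduces to a few matrix products once the subspace dimension d is fixed; the contribution described in the record above is precisely how to choose d (via a stability bound or maximum likelihood), which this minimal numpy sketch does not attempt. The data shapes and the value d=5 are assumptions for illustration.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions of a centred data matrix X of shape (n, D)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                                # (D, d)

def subspace_align(source, target, d):
    Xs, Xt = pca_basis(source, d), pca_basis(target, d)
    M = Xs.T @ Xt                                  # closed-form alignment matrix
    src_aligned = (source - source.mean(axis=0)) @ Xs @ M
    tgt_proj = (target - target.mean(axis=0)) @ Xt
    return src_aligned, tgt_proj                   # train on src_aligned, test on tgt_proj

rng = np.random.default_rng(2)
src = rng.normal(size=(200, 20))
tgt = rng.normal(size=(150, 20)) + 0.5             # shifted "target domain"
S, T = subspace_align(src, tgt, d=5)
print(S.shape, T.shape)                            # (200, 5) (150, 5)
```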
1311.2524 | Ross Girshick | Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik | Rich feature hierarchies for accurate object detection and semantic
segmentation | Extended version of our CVPR 2014 paper; latest update (v5) includes
results using deeper networks (see Appendix G. Changelog) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection performance, as measured on the canonical PASCAL VOC
dataset, has plateaued in the last few years. The best-performing methods are
complex ensemble systems that typically combine multiple low-level image
features with high-level context. In this paper, we propose a simple and
scalable detection algorithm that improves mean average precision (mAP) by more
than 30% relative to the previous best result on VOC 2012---achieving a mAP of
53.3%. Our approach combines two key insights: (1) one can apply high-capacity
convolutional neural networks (CNNs) to bottom-up region proposals in order to
localize and segment objects and (2) when labeled training data is scarce,
supervised pre-training for an auxiliary task, followed by domain-specific
fine-tuning, yields a significant performance boost. Since we combine region
proposals with CNNs, we call our method R-CNN: Regions with CNN features. We
also compare R-CNN to OverFeat, a recently proposed sliding-window detector
based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by
a large margin on the 200-class ILSVRC2013 detection dataset. Source code for
the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2013 18:43:49 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Apr 2014 01:44:31 GMT"
},
{
"version": "v3",
"created": "Wed, 7 May 2014 17:09:23 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Jun 2014 22:07:33 GMT"
},
{
"version": "v5",
"created": "Wed, 22 Oct 2014 17:23:20 GMT"
}
] | 2014-10-23T00:00:00 | [
[
"Girshick",
"Ross",
""
],
[
"Donahue",
"Jeff",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Rich feature hierarchies for accurate object detection and semantic
segmentation
ABSTRACT: Object detection performance, as measured on the canonical PASCAL VOC
dataset, has plateaued in the last few years. The best-performing methods are
complex ensemble systems that typically combine multiple low-level image
features with high-level context. In this paper, we propose a simple and
scalable detection algorithm that improves mean average precision (mAP) by more
than 30% relative to the previous best result on VOC 2012---achieving a mAP of
53.3%. Our approach combines two key insights: (1) one can apply high-capacity
convolutional neural networks (CNNs) to bottom-up region proposals in order to
localize and segment objects and (2) when labeled training data is scarce,
supervised pre-training for an auxiliary task, followed by domain-specific
fine-tuning, yields a significant performance boost. Since we combine region
proposals with CNNs, we call our method R-CNN: Regions with CNN features. We
also compare R-CNN to OverFeat, a recently proposed sliding-window detector
based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by
a large margin on the 200-class ILSVRC2013 detection dataset. Source code for
the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
| no_new_dataset | 0.953535 |
1408.5389 | Zhensong Qian | Zhensong Qian, Oliver Schulte and Yan Sun | Computing Multi-Relational Sufficient Statistics for Large Databases | 11pages, 8 figures, 8 tables, CIKM'14,November 3--7, 2014, Shanghai,
China | null | 10.1145/2661829.2662010 | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Databases contain information about which relationships do and do not hold
among entities. To make this information accessible for statistical analysis
requires computing sufficient statistics that combine information from
different database tables. Such statistics may involve any number of {\em
positive and negative} relationships. With a naive enumeration approach,
computing sufficient statistics for negative relationships is feasible only for
small databases. We solve this problem with a new dynamic programming algorithm
that performs a virtual join, where the requisite counts are computed without
materializing join tables. Contingency table algebra is a new extension of
relational algebra, that facilitates the efficient implementation of this
M\"obius virtual join operation. The M\"obius Join scales to large datasets
(over 1M tuples) with complex schemas. Empirical evaluation with seven
benchmark datasets showed that information about the presence and absence of
links can be exploited in feature selection, association rule mining, and
Bayesian network learning.
| [
{
"version": "v1",
"created": "Fri, 22 Aug 2014 19:12:19 GMT"
}
] | 2014-10-23T00:00:00 | [
[
"Qian",
"Zhensong",
""
],
[
"Schulte",
"Oliver",
""
],
[
"Sun",
"Yan",
""
]
] | TITLE: Computing Multi-Relational Sufficient Statistics for Large Databases
ABSTRACT: Databases contain information about which relationships do and do not hold
among entities. To make this information accessible for statistical analysis
requires computing sufficient statistics that combine information from
different database tables. Such statistics may involve any number of {\em
positive and negative} relationships. With a naive enumeration approach,
computing sufficient statistics for negative relationships is feasible only for
small databases. We solve this problem with a new dynamic programming algorithm
that performs a virtual join, where the requisite counts are computed without
materializing join tables. Contingency table algebra is a new extension of
relational algebra, that facilitates the efficient implementation of this
M\"obius virtual join operation. The M\"obius Join scales to large datasets
(over 1M tuples) with complex schemas. Empirical evaluation with seven
benchmark datasets showed that information about the presence and absence of
links can be exploited in feature selection, association rule mining, and
Bayesian network learning.
| no_new_dataset | 0.945601 |
1410.6126 | German Ros | German Ros and Jose Alvarez and Julio Guerrero | Motion Estimation via Robust Decomposition with Constrained Rank | Submitted to IEEE TIP | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we address the problem of outlier detection for robust motion
estimation by using modern sparse-low-rank decompositions, i.e., Robust
PCA-like methods, to impose global rank constraints. Robust decompositions have
shown to be good at splitting a corrupted matrix into an uncorrupted low-rank
matrix and a sparse matrix, containing outliers. However, this process only
works when matrices have relatively low rank with respect to their ambient
space, a property not met in motion estimation problems. As a solution, we
propose to exploit the partial information present in the decomposition to
decide which matches are outliers. We provide evidence showing that even when
it is not possible to recover an uncorrupted low-rank matrix, the resulting
information can be exploited for outlier detection. To this end we propose the
Robust Decomposition with Constrained Rank (RD-CR), a proximal gradient based
method that enforces the rank constraints inherent to motion estimation. We
also present a general framework to perform robust estimation for stereo Visual
Odometry, based on our RD-CR and a simple but effective compressed optimization
method that achieves high performance. Our evaluation on synthetic data and on
the KITTI dataset demonstrates the applicability of our approach in complex
scenarios and it yields state-of-the-art performance.
| [
{
"version": "v1",
"created": "Wed, 22 Oct 2014 18:15:27 GMT"
}
] | 2014-10-23T00:00:00 | [
[
"Ros",
"German",
""
],
[
"Alvarez",
"Jose",
""
],
[
"Guerrero",
"Julio",
""
]
] | TITLE: Motion Estimation via Robust Decomposition with Constrained Rank
ABSTRACT: In this work, we address the problem of outlier detection for robust motion
estimation by using modern sparse-low-rank decompositions, i.e., Robust
PCA-like methods, to impose global rank constraints. Robust decompositions have
shown to be good at splitting a corrupted matrix into an uncorrupted low-rank
matrix and a sparse matrix, containing outliers. However, this process only
works when matrices have relatively low rank with respect to their ambient
space, a property not met in motion estimation problems. As a solution, we
propose to exploit the partial information present in the decomposition to
decide which matches are outliers. We provide evidence showing that even when
it is not possible to recover an uncorrupted low-rank matrix, the resulting
information can be exploited for outlier detection. To this end we propose the
Robust Decomposition with Constrained Rank (RD-CR), a proximal gradient based
method that enforces the rank constraints inherent to motion estimation. We
also present a general framework to perform robust estimation for stereo Visual
Odometry, based on our RD-CR and a simple but effective compressed optimization
method that achieves high performance. Our evaluation on synthetic data and on
the KITTI dataset demonstrates the applicability of our approach in complex
scenarios and it yields state-of-the-art performance.
| no_new_dataset | 0.945901 |
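For background to the record above: the generic sparse-plus-low-rank decomposition it builds on can be written as minimising 0.5*||X - L - S||_F^2 + tau*||L||_* + lambda*||S||_1, which block coordinate descent solves by alternating singular-value thresholding and soft-thresholding. This is only the generic building block, not the paper's rank-constrained RD-CR; tau, lambda and the synthetic data are arbitrary choices for the example.

```python
import numpy as np

def svt(A, tau):
    """Singular-value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(A, lam):
    """Entrywise soft-thresholding: prox of lam * l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def sparse_plus_low_rank(X, tau=2.0, lam=0.5, iters=100):
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S, tau)        # exact minimisation over L with S fixed
        S = soft(X - L, lam)       # exact minimisation over S with L fixed
    return L, S

rng = np.random.default_rng(3)
clean = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 40))              # rank-4 part
outliers = (rng.random((60, 40)) < 0.05) * rng.normal(scale=10, size=(60, 40))
L, S = sparse_plus_low_rank(clean + outliers)
print(np.linalg.matrix_rank(L), float((np.abs(S) > 1e-8).mean()))
```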
1406.6597 | Vincent Labatut | G\"unce Keziban Orman, Vincent Labatut, Marc Plantevit (LIRIS),
Jean-Fran\c{c}ois Boulicaut (LIRIS) | A Method for Characterizing Communities in Dynamic Attributed Complex
Networks | IEEE/ACM International Conference on Advances in Social Network
Analysis and Mining (ASONAM), P\'ekin : China (2014) | null | 10.1109/ASONAM.2014.6921629 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many methods have been proposed to detect communities, not only in plain, but
also in attributed, directed or even dynamic complex networks. In its simplest
form, a community structure takes the form of a partition of the node set. From
the modeling point of view, to be of some utility, this partition must then be
characterized relative to the properties of the studied system. However, while
most of the existing works focus on defining methods for the detection of
communities, only very few try to tackle this interpretation problem. Moreover,
the existing approaches are limited either in the type of data they handle, or
by the nature of the results they output. In this work, we propose a method to
efficiently support such a characterization task. We first define a
sequence-based representation of networks, combining temporal information,
topological measures, and nodal attributes. We then describe how to identify
the most emerging sequential patterns of this dataset, and use them to
characterize the communities. We also show how to detect unusual behavior in a
community, and highlight outliers. Finally, as an illustration, we apply our
method to a network of scientific collaborations.
| [
{
"version": "v1",
"created": "Wed, 25 Jun 2014 14:54:52 GMT"
}
] | 2014-10-22T00:00:00 | [
[
"Orman",
"Günce Keziban",
"",
"LIRIS"
],
[
"Labatut",
"Vincent",
"",
"LIRIS"
],
[
"Plantevit",
"Marc",
"",
"LIRIS"
],
[
"Boulicaut",
"Jean-François",
"",
"LIRIS"
]
] | TITLE: A Method for Characterizing Communities in Dynamic Attributed Complex
Networks
ABSTRACT: Many methods have been proposed to detect communities, not only in plain, but
also in attributed, directed or even dynamic complex networks. In its simplest
form, a community structure takes the form of a partition of the node set. From
the modeling point of view, to be of some utility, this partition must then be
characterized relatively to the properties of the studied system. However, if
most of the existing works focus on defining methods for the detection of
communities, only very few try to tackle this interpretation problem. Moreover,
the existing approaches are limited either in the type of data they handle, or
by the nature of the results they output. In this work, we propose a method to
efficiently support such a characterization task. We first define a
sequence-based representation of networks, combining temporal information,
topological measures, and nodal attributes. We then describe how to identify
the most emerging sequential patterns of this dataset, and use them to
characterize the communities. We also show how to detect unusual behavior in a
community, and highlight outliers. Finally, as an illustration, we apply our
method to a network of scientific collaborations.
| no_new_dataset | 0.943815 |
1410.5467 | Josef Urban | Cezary Kaliszyk, Lionel Mamane, Josef Urban | Machine Learning of Coq Proof Guidance: First Experiments | null | null | null | null | cs.LO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report the results of the first experiments with learning proof
dependencies from the formalizations done with the Coq system. We explain the
process of obtaining the dependencies from the Coq proofs, the characterization
of formulas that is used for the learning, and the evaluation method. Various
machine learning methods are compared on a dataset of 5021 toplevel Coq proofs
coming from the CoRN repository. The best resulting method covers on average
75% of the needed proof dependencies among the first 100 predictions, which is
a comparable performance of such initial experiments on other large-theory
corpora.
| [
{
"version": "v1",
"created": "Mon, 20 Oct 2014 21:16:52 GMT"
}
] | 2014-10-22T00:00:00 | [
[
"Kaliszyk",
"Cezary",
""
],
[
"Mamane",
"Lionel",
""
],
[
"Urban",
"Josef",
""
]
] | TITLE: Machine Learning of Coq Proof Guidance: First Experiments
ABSTRACT: We report the results of the first experiments with learning proof
dependencies from the formalizations done with the Coq system. We explain the
process of obtaining the dependencies from the Coq proofs, the characterization
of formulas that is used for the learning, and the evaluation method. Various
machine learning methods are compared on a dataset of 5021 toplevel Coq proofs
coming from the CoRN repository. The best resulting method covers on average
75% of the needed proof dependencies among the first 100 predictions, which is
comparable to the performance of such initial experiments on other large-theory
corpora.
| no_new_dataset | 0.925567 |
1410.5476 | Josef Urban | Cezary Kaliszyk, Josef Urban, Jiri Vyskocil | Certified Connection Tableaux Proofs for HOL Light and TPTP | null | null | null | null | cs.LO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the Metis prover based on ordered paramodulation and
model elimination has replaced the earlier built-in methods for general-purpose
proof automation in HOL4 and Isabelle/HOL. In the annual CASC competition, the
leanCoP system based on connection tableaux has however performed better than
Metis. In this paper we show how leanCoP's core algorithm can be
implemented inside HOLLight. leanCoP's flagship feature, namely its
minimalistic core, results in a very simple proof system. This plays a crucial
role in extending the MESON proof reconstruction mechanism to connection
tableaux proofs, providing an implementation of leanCoP that certifies its
proofs. We discuss the differences between our direct implementation using an
explicit Prolog stack and the continuation passing implementation of MESON
present in HOLLight and compare their performance on all core HOLLight goals.
The resulting prover can be also used as a general purpose TPTP prover. We
compare its performance against the resolution based Metis on TPTP and other
interesting datasets.
| [
{
"version": "v1",
"created": "Mon, 20 Oct 2014 21:36:47 GMT"
}
] | 2014-10-22T00:00:00 | [
[
"Kaliszyk",
"Cezary",
""
],
[
"Urban",
"Josef",
""
],
[
"Vyskocil",
"Jiri",
""
]
] | TITLE: Certified Connection Tableaux Proofs for HOL Light and TPTP
ABSTRACT: In recent years, the Metis prover based on ordered paramodulation and
model elimination has replaced the earlier built-in methods for general-purpose
proof automation in HOL4 and Isabelle/HOL. In the annual CASC competition, the
leanCoP system based on connection tableaux has however performed better than
Metis. In this paper we show how leanCoP's core algorithm can be
implemented inside HOLLight. leanCoP's flagship feature, namely its
minimalistic core, results in a very simple proof system. This plays a crucial
role in extending the MESON proof reconstruction mechanism to connection
tableaux proofs, providing an implementation of leanCoP that certifies its
proofs. We discuss the differences between our direct implementation using an
explicit Prolog stack and the continuation passing implementation of MESON
present in HOLLight and compare their performance on all core HOLLight goals.
The resulting prover can be also used as a general purpose TPTP prover. We
compare its performance against the resolution based Metis on TPTP and other
interesting datasets.
| no_new_dataset | 0.940408 |
1410.5684 | Saahil Ognawala | Saahil Ognawala and Justin Bayer | Regularizing Recurrent Networks - On Injected Noise and Norm-based
Methods | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancements in parallel processing have led to a surge in multilayer
perceptrons' (MLP) applications and deep learning in the past decades.
Recurrent Neural Networks (RNNs) give additional representational power to
feedforward MLPs by providing a way to treat sequential data. However, RNNs are
hard to train using conventional error backpropagation methods because of the
difficulty in relating inputs over many time-steps. Regularization approaches
from the MLP sphere, like dropout and noisy weight training, have been
insufficiently applied and tested on simple RNNs. Moreover, solutions have been
proposed to improve convergence in RNNs but not enough to improve the long term
dependency remembering capabilities thereof.
In this study, we aim to empirically evaluate the remembering and
generalization ability of RNNs on polyphonic musical datasets. The models are
trained with injected noise, random dropout, norm-based regularizers and their
respective performances compared to well-initialized plain RNNs and advanced
regularization methods like fast-dropout. We conclude with evidence that
training with noise does not improve performance as conjectured by a few works
in RNN optimization before ours.
| [
{
"version": "v1",
"created": "Tue, 21 Oct 2014 14:36:26 GMT"
}
] | 2014-10-22T00:00:00 | [
[
"Ognawala",
"Saahil",
""
],
[
"Bayer",
"Justin",
""
]
] | TITLE: Regularizing Recurrent Networks - On Injected Noise and Norm-based
Methods
ABSTRACT: Advancements in parallel processing have led to a surge in multilayer
perceptrons' (MLP) applications and deep learning in the past decades.
Recurrent Neural Networks (RNNs) give additional representational power to
feedforward MLPs by providing a way to treat sequential data. However, RNNs are
hard to train using conventional error backpropagation methods because of the
difficulty in relating inputs over many time-steps. Regularization approaches
from the MLP sphere, like dropout and noisy weight training, have been
insufficiently applied and tested on simple RNNs. Moreover, solutions have been
proposed to improve convergence in RNNs but not enough to improve the long term
dependency remembering capabilities thereof.
In this study, we aim to empirically evaluate the remembering and
generalization ability of RNNs on polyphonic musical datasets. The models are
trained with injected noise, random dropout, norm-based regularizers and their
respective performances compared to well-initialized plain RNNs and advanced
regularization methods like fast-dropout. We conclude with evidence that
training with noise does not improve performance as conjectured by a few works
in RNN optimization before ours.
| no_new_dataset | 0.942082 |
1312.3968 | Philip Schniter | Mark Borgerding, Philip Schniter, and Sundeep Rangan | Generalized Approximate Message Passing for Cosparse Analysis
Compressive Sensing | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cosparse analysis compressive sensing (CS), one seeks to estimate a
non-sparse signal vector from noisy sub-Nyquist linear measurements by
exploiting the knowledge that a given linear transform of the signal is
cosparse, i.e., has sufficiently many zeros. We propose a novel approach to
cosparse analysis CS based on the generalized approximate message passing
(GAMP) algorithm. Unlike other AMP-based approaches to this problem, ours works
with a wide range of analysis operators and regularizers. In addition, we
propose a novel $\ell_0$-like soft-thresholder based on MMSE denoising for a
spike-and-slab distribution with an infinite-variance slab. Numerical
demonstrations on synthetic and practical datasets demonstrate advantages over
existing AMP-based, greedy, and reweighted-$\ell_1$ approaches.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2013 21:51:20 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Oct 2014 19:12:54 GMT"
}
] | 2014-10-21T00:00:00 | [
[
"Borgerding",
"Mark",
""
],
[
"Schniter",
"Philip",
""
],
[
"Rangan",
"Sundeep",
""
]
] | TITLE: Generalized Approximate Message Passing for Cosparse Analysis
Compressive Sensing
ABSTRACT: In cosparse analysis compressive sensing (CS), one seeks to estimate a
non-sparse signal vector from noisy sub-Nyquist linear measurements by
exploiting the knowledge that a given linear transform of the signal is
cosparse, i.e., has sufficiently many zeros. We propose a novel approach to
cosparse analysis CS based on the generalized approximate message passing
(GAMP) algorithm. Unlike other AMP-based approaches to this problem, ours works
with a wide range of analysis operators and regularizers. In addition, we
propose a novel $\ell_0$-like soft-thresholder based on MMSE denoising for a
spike-and-slab distribution with an infinite-variance slab. Numerical
demonstrations on synthetic and practical datasets demonstrate advantages over
existing AMP-based, greedy, and reweighted-$\ell_1$ approaches.
| no_new_dataset | 0.942242 |
1410.4966 | Shay Cohen | Chiraag Lala and Shay B. Cohen | The Visualization of Change in Word Meaning over Time using Temporal
Word Embeddings | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a visualization tool that can be used to view the change in
meaning of words over time. The tool makes use of existing (static) word
embedding datasets together with a timestamped $n$-gram corpus to create {\em
temporal} word embeddings.
| [
{
"version": "v1",
"created": "Sat, 18 Oct 2014 14:53:19 GMT"
}
] | 2014-10-21T00:00:00 | [
[
"Lala",
"Chiraag",
""
],
[
"Cohen",
"Shay B.",
""
]
] | TITLE: The Visualization of Change in Word Meaning over Time using Temporal
Word Embeddings
ABSTRACT: We describe a visualization tool that can be used to view the change in
meaning of words over time. The tool makes use of existing (static) word
embedding datasets together with a timestamped $n$-gram corpus to create {\em
temporal} word embeddings.
| no_new_dataset | 0.948202 |
1410.4984 | Zhenwen Dai | Zhenwen Dai, Andreas Damianou, James Hensman, Neil Lawrence | Gaussian Process Models with Parallelization and GPU acceleration | null | null | null | null | cs.DC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present an extension of Gaussian process (GP) models with
sophisticated parallelization and GPU acceleration. The parallelization scheme
arises naturally from the modular computational structure w.r.t. datapoints in
the sparse Gaussian process formulation. Additionally, the computational
bottleneck is implemented with GPU acceleration for further speed up. Combining
both techniques allows applying Gaussian process models to millions of
datapoints. The efficiency of our algorithm is demonstrated with a synthetic
dataset. Its source code has been integrated into our popular software library
GPy.
| [
{
"version": "v1",
"created": "Sat, 18 Oct 2014 18:12:57 GMT"
}
] | 2014-10-21T00:00:00 | [
[
"Dai",
"Zhenwen",
""
],
[
"Damianou",
"Andreas",
""
],
[
"Hensman",
"James",
""
],
[
"Lawrence",
"Neil",
""
]
] | TITLE: Gaussian Process Models with Parallelization and GPU acceleration
ABSTRACT: In this work, we present an extension of Gaussian process (GP) models with
sophisticated parallelization and GPU acceleration. The parallelization scheme
arises naturally from the modular computational structure w.r.t. datapoints in
the sparse Gaussian process formulation. Additionally, the computational
bottleneck is implemented with GPU acceleration for further speed up. Combining
both techniques allows applying Gaussian process models to millions of
datapoints. The efficiency of our algorithm is demonstrated with a synthetic
dataset. Its source code has been integrated into our popular software library
GPy.
| no_new_dataset | 0.945751 |
1410.5372 | Saptarshi Das | Shre Kumar Chatterjee, Sanmitra Ghosh, Saptarshi Das, Veronica
Manzella, Andrea Vitaletti, Elisa Masi, Luisa Santopolo, Stefano Mancuso,
Koushik Maharatna | Forward and Inverse Modelling Approaches for Prediction of Light
Stimulus from Electrophysiological Response in Plants | 25 pages, 14 figures | Measurement, Volume 53, July 2014, Pages 101-116 | 10.1016/j.measurement.2014.03.040 | null | physics.bio-ph math.DS stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a system identification approach has been adopted to develop a
novel dynamical model for describing the relationship between light as an
environmental stimulus and the electrical response as the measured output for a
bay leaf (Laurus nobilis) plant. More specifically, the target is to predict
the characteristics of the input light stimulus (in terms of on-off timing,
duration and intensity) from the measured electrical response - leading to an
inverse problem. We explored two major classes of system estimators to develop
dynamical models - linear and nonlinear - and their several variants for
establishing a forward and also an inverse relationship between the light
stimulus and plant electrical response. The best class of models is given by
the Nonlinear Hammerstein-Wiener (NLHW) estimator showing good data fitting
results over other linear and nonlinear estimators in a statistical sense.
Consequently, a few sets of models using different functional variants of NLHW
have been developed and their accuracy in detecting the on-off timing and
intensity of the input light stimulus are compared for 19 independent plant
datasets (including 2 additional species viz. Zamioculcas zamiifolia and
Cucumis sativus) under a similar experimental scenario.
| [
{
"version": "v1",
"created": "Mon, 20 Oct 2014 17:51:08 GMT"
}
] | 2014-10-21T00:00:00 | [
[
"Chatterjee",
"Shre Kumar",
""
],
[
"Ghosh",
"Sanmitra",
""
],
[
"Das",
"Saptarshi",
""
],
[
"Manzella",
"Veronica",
""
],
[
"Vitaletti",
"Andrea",
""
],
[
"Masi",
"Elisa",
""
],
[
"Santopolo",
"Luisa",
""
],
[
"Mancuso",
"Stefano",
""
],
[
"Maharatna",
"Koushik",
""
]
] | TITLE: Forward and Inverse Modelling Approaches for Prediction of Light
Stimulus from Electrophysiological Response in Plants
ABSTRACT: In this paper, a system identification approach has been adopted to develop a
novel dynamical model for describing the relationship between light as an
environmental stimulus and the electrical response as the measured output for a
bay leaf (Laurus nobilis) plant. More specifically, the target is to predict
the characteristics of the input light stimulus (in terms of on-off timing,
duration and intensity) from the measured electrical response - leading to an
inverse problem. We explored two major classes of system estimators to develop
dynamical models - linear and nonlinear - and their several variants for
establishing a forward and also an inverse relationship between the light
stimulus and plant electrical response. The best class of models is given by
the Nonlinear Hammerstein-Wiener (NLHW) estimator showing good data fitting
results over other linear and nonlinear estimators in a statistical sense.
Consequently, a few sets of models using different functional variants of NLHW
have been developed and their accuracy in detecting the on-off timing and
intensity of the input light stimulus are compared for 19 independent plant
datasets (including 2 additional species viz. Zamioculcas zamiifolia and
Cucumis sativus) under a similar experimental scenario.
| no_new_dataset | 0.952794 |
1403.1349 | Sam Anzaroot | Sam Anzaroot, Alexandre Passos, David Belanger, Andrew McCallum | Learning Soft Linear Constraints with Application to Citation Field
Extraction | appears in Proc. the 52nd Annual Meeting of the Association for
Computational Linguistics (ACL2014) | null | null | null | cs.CL cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately segmenting a citation string into fields for authors, titles, etc.
is a challenging task because the output typically obeys various global
constraints. Previous work has shown that modeling soft constraints, where the
model is encouraged, but not require to obey the constraints, can substantially
improve segmentation performance. On the other hand, for imposing hard
constraints, dual decomposition is a popular technique for efficient prediction
given existing algorithms for unconstrained inference. We extend the technique
to perform prediction subject to soft constraints. Moreover, with a technique
for performing inference given soft constraints, it is easy to automatically
generate large families of constraints and learn their costs with a simple
convex optimization problem during training. This allows us to obtain
substantial gains in accuracy on a new, challenging citation extraction
dataset.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2014 05:24:02 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Oct 2014 13:27:02 GMT"
}
] | 2014-10-20T00:00:00 | [
[
"Anzaroot",
"Sam",
""
],
[
"Passos",
"Alexandre",
""
],
[
"Belanger",
"David",
""
],
[
"McCallum",
"Andrew",
""
]
] | TITLE: Learning Soft Linear Constraints with Application to Citation Field
Extraction
ABSTRACT: Accurately segmenting a citation string into fields for authors, titles, etc.
is a challenging task because the output typically obeys various global
constraints. Previous work has shown that modeling soft constraints, where the
model is encouraged, but not required to obey the constraints, can substantially
improve segmentation performance. On the other hand, for imposing hard
constraints, dual decomposition is a popular technique for efficient prediction
given existing algorithms for unconstrained inference. We extend the technique
to perform prediction subject to soft constraints. Moreover, with a technique
for performing inference given soft constraints, it is easy to automatically
generate large families of constraints and learn their costs with a simple
convex optimization problem during training. This allows us to obtain
substantial gains in accuracy on a new, challenging citation extraction
dataset.
| no_new_dataset | 0.943086 |
1410.4673 | Zhiding Yu | Weiyang Liu, Zhiding Yu, Lijia Lu, Yandong Wen, Hui Li and Yuexian Zou | KCRC-LCD: Discriminative Kernel Collaborative Representation with
Locality Constrained Dictionary for Visual Categorization | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the image classification problem via kernel collaborative
representation classification with locality constrained dictionary (KCRC-LCD).
Specifically, we propose a kernel collaborative representation classification
(KCRC) approach in which kernel method is used to improve the discrimination
ability of collaborative representation classification (CRC). We then measure
the similarities between the query and atoms in the global dictionary in order
to construct a locality constrained dictionary (LCD) for KCRC. In addition, we
discuss several similarity measure approaches in LCD and further present a
simple yet effective unified similarity measure whose superiority is validated
in experiments. There are several appealing aspects associated with LCD. First,
LCD can be nicely incorporated under the framework of KCRC. The LCD similarity
measure can be kernelized under KCRC, which theoretically links CRC and LCD
under the kernel method. Second, KCRC-LCD becomes more scalable to both the
training set size and the feature dimension. An example shows that KCRC is able to
perfectly classify data with a certain distribution, while conventional CRC fails
completely. Comprehensive experiments on many public datasets also show that
KCRC-LCD is a robust discriminative classifier with both excellent performance
and good scalability, being comparable or outperforming many other
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Fri, 17 Oct 2014 09:40:20 GMT"
}
] | 2014-10-20T00:00:00 | [
[
"Liu",
"Weiyang",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Lu",
"Lijia",
""
],
[
"Wen",
"Yandong",
""
],
[
"Li",
"Hui",
""
],
[
"Zou",
"Yuexian",
""
]
] | TITLE: KCRC-LCD: Discriminative Kernel Collaborative Representation with
Locality Constrained Dictionary for Visual Categorization
ABSTRACT: We consider the image classification problem via kernel collaborative
representation classification with locality constrained dictionary (KCRC-LCD).
Specifically, we propose a kernel collaborative representation classification
(KCRC) approach in which kernel method is used to improve the discrimination
ability of collaborative representation classification (CRC). We then measure
the similarities between the query and atoms in the global dictionary in order
to construct a locality constrained dictionary (LCD) for KCRC. In addition, we
discuss several similarity measure approaches in LCD and further present a
simple yet effective unified similarity measure whose superiority is validated
in experiments. There are several appealing aspects associated with LCD. First,
LCD can be nicely incorporated under the framework of KCRC. The LCD similarity
measure can be kernelized under KCRC, which theoretically links CRC and LCD
under the kernel method. Second, KCRC-LCD becomes more scalable to both the
training set size and the feature dimension. An example shows that KCRC is able to
perfectly classify data with a certain distribution, while conventional CRC fails
completely. Comprehensive experiments on many public datasets also show that
KCRC-LCD is a robust discriminative classifier with both excellent performance
and good scalability, being comparable or outperforming many other
state-of-the-art approaches.
| no_new_dataset | 0.941708 |
1403.6985 | Kostyantyn Demchuk | Kostyantyn Demchuk and Douglas J. Leith | A Fast Minimal Infrequent Itemset Mining Algorithm | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel fast algorithm for finding quasi identifiers in large datasets is
presented. Performance measurements on a broad range of datasets demonstrate
substantial reductions in run-time relative to the state of the art and the
scalability of the algorithm to realistically-sized datasets up to several
million records.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2014 11:54:27 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Apr 2014 15:53:20 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Oct 2014 14:56:52 GMT"
}
] | 2014-10-17T00:00:00 | [
[
"Demchuk",
"Kostyantyn",
""
],
[
"Leith",
"Douglas J.",
""
]
] | TITLE: A Fast Minimal Infrequent Itemset Mining Algorithm
ABSTRACT: A novel fast algorithm for finding quasi identifiers in large datasets is
presented. Performance measurements on a broad range of datasets demonstrate
substantial reductions in run-time relative to the state of the art and the
scalability of the algorithm to realistically-sized datasets up to several
million records.
| no_new_dataset | 0.945801 |
1410.1795 | Jonathan Leloux | Jonathan Leloux, Luis Narvarte, Loreto Gonzalez-Bonilla | A free real-time hourly tilted solar irradiation data Website for Europe | 3 pages, 2 figures, conference proceedings, 29th European
Photovoltaic Solar Energy Conference and Exhibition, 2014, Amsterdam | null | 10.13140/2.1.2018.1762 | null | physics.space-ph physics.ed-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The engineering of solar power applications, such as photovoltaic energy (PV)
or thermal solar energy requires the knowledge of the solar resource available
for the solar energy system. This solar resource is generally obtained from
datasets, and is either measured by ground-stations, through the use of
pyranometers, or by satellites. The solar irradiation data are generally not
free, and their cost can be high, in particular if high temporal resolution is
required, such as hourly data. In this work, we present an alternative method
to provide free hourly global solar tilted irradiation data for the whole
European territory through a web platform. The method that we have developed
generates solar irradiation data from a combination of clear-sky simulations
and weather conditions data. The results are publicly available for free
through Soweda, a Web interface. To our knowledge, this is the first time that
hourly solar irradiance data are made available online, in real-time, and for
free, to the public. The accuracy of these data is not suitable for
applications that require high data accuracy, but can be very useful for other
applications that only require a rough estimate of solar irradiation.
| [
{
"version": "v1",
"created": "Fri, 3 Oct 2014 22:47:29 GMT"
}
] | 2014-10-17T00:00:00 | [
[
"Leloux",
"Jonathan",
""
],
[
"Narvarte",
"Luis",
""
],
[
"Gonzalez-Bonilla",
"Loreto",
""
]
] | TITLE: A free real-time hourly tilted solar irradiation data Website for Europe
ABSTRACT: The engineering of solar power applications, such as photovoltaic energy (PV)
or thermal solar energy requires the knowledge of the solar resource available
for the solar energy system. This solar resource is generally obtained from
datasets, and is either measured by ground-stations, through the use of
pyranometers, or by satellites. The solar irradiation data are generally not
free, and their cost can be high, in particular if high temporal resolution is
required, such as hourly data. In this work, we present an alternative method
to provide free hourly global solar tilted irradiation data for the whole
European territory through a web platform. The method that we have developed
generates solar irradiation data from a combination of clear-sky simulations
and weather conditions data. The results are publicly available for free
through Soweda, a Web interface. To our knowledge, this is the first time that
hourly solar irradiance data are made available online, in real-time, and for
free, to the public. The accuracy of these data is not suitable for
applications that require high data accuracy, but can be very useful for other
applications that only require a rough estimate of solar irradiation.
| no_new_dataset | 0.949201 |
1410.4341 | Manasij Venkatesh | Manasij Venkatesh, Vikas Majjagi, and Deepu Vijayasenan | Implicit segmentation of Kannada characters in offline handwriting
recognition using hidden Markov models | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a method for classification of handwritten Kannada characters
using Hidden Markov Models (HMMs). Kannada script is agglutinative, where
simple shapes are concatenated horizontally to form a character. This results
in a large number of characters making the task of classification difficult.
Character segmentation plays a significant role in reducing the number of
classes. Explicit segmentation techniques suffer when overlapping shapes are
present, which is common in the case of handwritten text. We use HMMs to take
advantage of the agglutinative nature of Kannada script, which allows us to
perform implicit segmentation of characters along with recognition. All the
experiments are performed on the Chars74k dataset that consists of 657
handwritten characters collected across multiple users. Gradient-based features
are extracted from individual characters and are used to train character HMMs.
The use of implicit segmentation technique at the character level resulted in
an improvement of around 10%. This system also outperformed an existing system
tested on the same dataset by around 16%. Analysis based on learning curves
showed that increasing the training data could result in better accuracy.
Accordingly, we collected additional data and obtained an improvement of 4%
with 6 additional samples.
| [
{
"version": "v1",
"created": "Thu, 16 Oct 2014 09:09:45 GMT"
}
] | 2014-10-17T00:00:00 | [
[
"Venkatesh",
"Manasij",
""
],
[
"Majjagi",
"Vikas",
""
],
[
"Vijayasenan",
"Deepu",
""
]
] | TITLE: Implicit segmentation of Kannada characters in offline handwriting
recognition using hidden Markov models
ABSTRACT: We describe a method for classification of handwritten Kannada characters
using Hidden Markov Models (HMMs). Kannada script is agglutinative, where
simple shapes are concatenated horizontally to form a character. This results
in a large number of characters making the task of classification difficult.
Character segmentation plays a significant role in reducing the number of
classes. Explicit segmentation techniques suffer when overlapping shapes are
present, which is common in the case of handwritten text. We use HMMs to take
advantage of the agglutinative nature of Kannada script, which allows us to
perform implicit segmentation of characters along with recognition. All the
experiments are performed on the Chars74k dataset that consists of 657
handwritten characters collected across multiple users. Gradient-based features
are extracted from individual characters and are used to train character HMMs.
The use of implicit segmentation technique at the character level resulted in
an improvement of around 10%. This system also outperformed an existing system
tested on the same dataset by around 16%. Analysis based on learning curves
showed that increasing the training data could result in better accuracy.
Accordingly, we collected additional data and obtained an improvement of 4%
with 6 additional samples.
| new_dataset | 0.914673 |
1312.5306 | Patrick J. Wolfe | Sofia C. Olhede and Patrick J. Wolfe | Network histograms and universality of blockmodel approximation | 27 pages, 4 figures; revised version with link to software | Proceedings of the National Academy of Sciences of the USA 2014,
Vol. 111, No. 41, 14722-14727 | 10.1073/pnas.1400374111 | null | stat.ME cs.SI math.CO math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we introduce the network histogram: a statistical summary of
network interactions, to be used as a tool for exploratory data analysis. A
network histogram is obtained by fitting a stochastic blockmodel to a single
observation of a network dataset. Blocks of edges play the role of histogram
bins, and community sizes that of histogram bandwidths or bin sizes. Just as
standard histograms allow for varying bandwidths, different blockmodel
estimates can all be considered valid representations of an underlying
probability model, subject to bandwidth constraints. Here we provide methods
for automatic bandwidth selection, by which the network histogram approximates
the generating mechanism that gives rise to exchangeable random graphs. This
makes the blockmodel a universal network representation for unlabeled graphs.
With this insight, we discuss the interpretation of network communities in
light of the fact that many different community assignments can all give an
equally valid representation of such a network. To demonstrate the
fidelity-versus-interpretability tradeoff inherent in considering different
numbers and sizes of communities, we analyze two publicly available networks -
political weblogs and student friendships - and discuss how to interpret the
network histogram when additional information related to node and edge labeling
is present.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2013 20:50:06 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Aug 2014 09:18:30 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Sep 2014 21:34:36 GMT"
}
] | 2014-10-16T00:00:00 | [
[
"Olhede",
"Sofia C.",
""
],
[
"Wolfe",
"Patrick J.",
""
]
] | TITLE: Network histograms and universality of blockmodel approximation
ABSTRACT: In this article we introduce the network histogram: a statistical summary of
network interactions, to be used as a tool for exploratory data analysis. A
network histogram is obtained by fitting a stochastic blockmodel to a single
observation of a network dataset. Blocks of edges play the role of histogram
bins, and community sizes that of histogram bandwidths or bin sizes. Just as
standard histograms allow for varying bandwidths, different blockmodel
estimates can all be considered valid representations of an underlying
probability model, subject to bandwidth constraints. Here we provide methods
for automatic bandwidth selection, by which the network histogram approximates
the generating mechanism that gives rise to exchangeable random graphs. This
makes the blockmodel a universal network representation for unlabeled graphs.
With this insight, we discuss the interpretation of network communities in
light of the fact that many different community assignments can all give an
equally valid representation of such a network. To demonstrate the
fidelity-versus-interpretability tradeoff inherent in considering different
numbers and sizes of communities, we analyze two publicly available networks -
political weblogs and student friendships - and discuss how to interpret the
network histogram when additional information related to node and edge labeling
is present.
| no_new_dataset | 0.951142 |
1410.3862 | James Balhoff | James P. Balhoff, T. Alexander Dececchi, Paula M. Mabee, Hilmar Lapp | Presence-absence reasoning for evolutionary phenotypes | 4 pages. Peer-reviewed submission presented to the Bio-ontologies SIG
Phenotype Day at ISMB 2014, Boston, Mass.
http://phenoday2014.bio-lark.org/pdf/11.pdf | James P. Balhoff, T. Alexander Dececchi, Paula M. Mabee, Hilmar
Lapp. 2014. Presence-absence reasoning for evolutionary phenotypes. In
proceedings of Phenotype Day of the Bio-ontologies SIG at ISMB 2014 | null | null | cs.AI q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | Nearly invariably, phenotypes are reported in the scientific literature in
meticulous detail, utilizing the full expressivity of natural language. Often
it is particularly these detailed observations (facts) that are of interest,
and thus specific to the research questions that motivated observing and
reporting them. However, research aiming to synthesize or integrate phenotype
data across many studies or even fields is often faced with the need to
abstract from detailed observations so as to construct phenotypic concepts that
are common across many datasets rather than specific to a few. Yet,
observations or facts that would fall under such abstracted concepts are
typically not directly asserted by the original authors, usually because they
are "obvious" according to common domain knowledge, and thus asserting them
would be deemed redundant by anyone with sufficient domain knowledge. For
example, a phenotype describing the length of a manual digit for an organism
implicitly means that the organism must have had a hand, and thus a forelimb;
the presence or absence of a forelimb may have supporting data across a far
wider range of taxa than the length of a particular manual digit. Here we
describe how within the Phenoscape project we use a pipeline of OWL axiom
generation and reasoning steps to infer taxon-specific presence/absence of
anatomical entities from anatomical phenotypes. Although presence/absence is
but one, and a seemingly simple, way to abstract phenotypes across data
sources, it can nonetheless be powerful for linking genotype to phenotype, and
it is particularly relevant for constructing synthetic morphological
supermatrices for comparative analysis; in fact presence/absence is one of the
prevailing character observation types in published character matrices.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 20:40:28 GMT"
}
] | 2014-10-16T00:00:00 | [
[
"Balhoff",
"James P.",
""
],
[
"Dececchi",
"T. Alexander",
""
],
[
"Mabee",
"Paula M.",
""
],
[
"Lapp",
"Hilmar",
""
]
] | TITLE: Presence-absence reasoning for evolutionary phenotypes
ABSTRACT: Nearly invariably, phenotypes are reported in the scientific literature in
meticulous detail, utilizing the full expressivity of natural language. Often
it is particularly these detailed observations (facts) that are of interest,
and thus specific to the research questions that motivated observing and
reporting them. However, research aiming to synthesize or integrate phenotype
data across many studies or even fields is often faced with the need to
abstract from detailed observations so as to construct phenotypic concepts that
are common across many datasets rather than specific to a few. Yet,
observations or facts that would fall under such abstracted concepts are
typically not directly asserted by the original authors, usually because they
are "obvious" according to common domain knowledge, and thus asserting them
would be deemed redundant by anyone with sufficient domain knowledge. For
example, a phenotype describing the length of a manual digit for an organism
implicitly means that the organism must have had a hand, and thus a forelimb;
the presence or absence of a forelimb may have supporting data across a far
wider range of taxa than the length of a particular manual digit. Here we
describe how within the Phenoscape project we use a pipeline of OWL axiom
generation and reasoning steps to infer taxon-specific presence/absence of
anatomical entities from anatomical phenotypes. Although presence/absence is
but one, and a seemingly simple, way to abstract phenotypes across data
sources, it can nonetheless be powerful for linking genotype to phenotype, and
it is particularly relevant for constructing synthetic morphological
supermatrices for comparative analysis; in fact presence/absence is one of the
prevailing character observation types in published character matrices.
| no_new_dataset | 0.957715 |
1410.3905 | Xiankai Lu | Xiankai Lu, Zheng Fang, Tao Xu, Haiting Zhang, Hongya Tuo | Efficient Image Categorization with Sparse Fisher Vector | 5pages,4 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | In object recognition, Fisher vector (FV) representation is one of the
state-of-the-art image representations, at the expense of dense, high
dimensional features and increased computation time. A simplification of FV is
attractive, so we propose Sparse Fisher vector (SFV). By incorporating locality
strategy, we can accelerate the Fisher coding step in image categorization
which is implemented from a collective of local descriptors. Combining with
pooling step, we explore the relationship between coding step and pooling step
to give a theoretical explanation about SFV. Experiments on benchmark datasets
have shown that SFV leads to a speedup of several-fold of magnitude compared
with FV, while maintaining the categorization performance. In addition, we
demonstrate how SFV preserves the consistency in representation of similar
local features.
| [
{
"version": "v1",
"created": "Wed, 15 Oct 2014 02:00:29 GMT"
}
] | 2014-10-16T00:00:00 | [
[
"Lu",
"Xiankai",
""
],
[
"Fang",
"Zheng",
""
],
[
"Xu",
"Tao",
""
],
[
"Zhang",
"Haiting",
""
],
[
"Tuo",
"Hongya",
""
]
] | TITLE: Efficient Image Categorization with Sparse Fisher Vector
ABSTRACT: In object recognition, Fisher vector (FV) representation is one of the
state-of-the-art image representations, at the expense of dense, high
dimensional features and increased computation time. A simplification of FV is
attractive, so we propose Sparse Fisher vector (SFV). By incorporating locality
strategy, we can accelerate the Fisher coding step in image categorization
which is implemented from a collective of local descriptors. Combining with
pooling step, we explore the relationship between coding step and pooling step
to give a theoretical explanation about SFV. Experiments on benchmark datasets
have shown that SFV leads to a speedup of several-fold of magnitude compared
with FV, while maintaining the categorization performance. In addition, we
demonstrate how SFV preserves the consistency in representation of similar
local features.
| no_new_dataset | 0.945601 |
1410.4168 | Adrien Devresse | Adrien Devresse, Fabrizio Furano | Efficient HTTP based I/O on very large datasets for high performance
computing with the libdavix library | Presented at: Very large Data Bases (VLDB) 2014, Hangzhou | null | null | null | cs.PF cs.DC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Remote data access for data analysis in high performance computing is
commonly done with specialized data access protocols and storage systems. These
protocols are highly optimized for high throughput on very large datasets,
multi-streams, high availability, low latency and efficient parallel I/O. The
purpose of this paper is to describe how we have adapted a generic protocol,
the Hyper Text Transport Protocol (HTTP) to make it a competitive alternative
for high performance I/O and data analysis applications in a global computing
grid: the Worldwide LHC Computing Grid. In this work, we first analyze the
design differences between the HTTP protocol and the most common high
performance I/O protocols, pointing out the main performance weaknesses of
HTTP. Then, we describe in detail how we solved these issues. Our solutions
have been implemented in a toolkit called davix, available through several
recent Linux distributions. Finally, we describe the results of our benchmarks
where we compare the performance of davix against an HPC-specific protocol for a
data analysis use case.
| [
{
"version": "v1",
"created": "Wed, 15 Oct 2014 18:57:12 GMT"
}
] | 2014-10-16T00:00:00 | [
[
"Devresse",
"Adrien",
""
],
[
"Furano",
"Fabrizio",
""
]
] | TITLE: Efficient HTTP based I/O on very large datasets for high performance
computing with the libdavix library
ABSTRACT: Remote data access for data analysis in high performance computing is
commonly done with specialized data access protocols and storage systems. These
protocols are highly optimized for high throughput on very large datasets,
multi-streams, high availability, low latency and efficient parallel I/O. The
purpose of this paper is to describe how we have adapted a generic protocol,
the Hyper Text Transport Protocol (HTTP) to make it a competitive alternative
for high performance I/O and data analysis applications in a global computing
grid: the Worldwide LHC Computing Grid. In this work, we first analyze the
design differences between the HTTP protocol and the most common high
performance I/O protocols, pointing out the main performance weaknesses of
HTTP. Then, we describe in detail how we solved these issues. Our solutions
have been implemented in a toolkit called davix, available through several
recent Linux distributions. Finally, we describe the results of our benchmarks
where we compare the performance of davix against an HPC-specific protocol for a
data analysis use case.
| no_new_dataset | 0.949012 |
1410.3710 | Jisun An | Haewoon Kwak and Jisun An | Understanding News Geography and Major Determinants of Global News
Coverage of Disasters | Presented at Computation+Jounalism Symposium (C+J Symposium) 2014 | null | null | null | cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we reveal the structure of global news coverage of disasters
and its determinants by using a large-scale news coverage dataset collected by
the GDELT (Global Data on Events, Location, and Tone) project that monitors
news media in over 100 languages from the whole world. Significant variables in
our hierarchical (mixed-effect) regression model, such as the population
size, the political stability, the damage, and more, are well aligned
with a series of previous research. Yet, strong regionalism we found in news
geography highlights the necessity of the comprehensive dataset for the study
of global news coverage.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 14:36:29 GMT"
}
] | 2014-10-15T00:00:00 | [
[
"Kwak",
"Haewoon",
""
],
[
"An",
"Jisun",
""
]
] | TITLE: Understanding News Geography and Major Determinants of Global News
Coverage of Disasters
ABSTRACT: In this work, we reveal the structure of global news coverage of disasters
and its determinants by using a large-scale news coverage dataset collected by
the GDELT (Global Data on Events, Location, and Tone) project that monitors
news media in over 100 languages from the whole world. Significant variables in
our hierarchical (mixed-effect) regression model, such as the population
size, the political stability, the damage, and more, are well aligned
with a series of previous research. Yet, strong regionalism we found in news
geography highlights the necessity of the comprehensive dataset for the study
of global news coverage.
| no_new_dataset | 0.937726 |
1410.3748 | Chee Seng Chan | Wai Lam Hoo and Chee Seng Chan | Zero-Shot Object Recognition System based on Topic Model | To appear in IEEE Transactions on Human-Machine Systems | null | 10.1109/THMS.2014.2358649 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object recognition systems usually require fully complete manually labeled
training data to train the classifier. In this paper, we study the problem of
object recognition where the training samples are missing during the classifier
learning stage, a task also known as zero-shot learning. We propose a novel
zero-shot learning strategy that utilizes the topic model and hierarchical
class concept. Our proposed method is advantageous in that the cumbersome human annotation
stage (i.e. attribute-based classification) is eliminated. We achieve
comparable performance with state-of-the-art algorithms in four public
datasets: PubFig (67.09%), Cifar-100 (54.85%), Caltech-256 (52.14%), and
Animals with Attributes (49.65%) when unseen classes exist in the
classification task.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 16:11:43 GMT"
}
] | 2014-10-15T00:00:00 | [
[
"Hoo",
"Wai Lam",
""
],
[
"Chan",
"Chee Seng",
""
]
] | TITLE: Zero-Shot Object Recognition System based on Topic Model
ABSTRACT: Object recognition systems usually require fully complete manually labeled
training data to train the classifier. In this paper, we study the problem of
object recognition where the training samples are missing during the classifier
learning stage, a task also known as zero-shot learning. We propose a novel
zero-shot learning strategy that utilizes the topic model and hierarchical
class concept. Our proposed method is advantageous in that the cumbersome human annotation
stage (i.e. attribute-based classification) is eliminated. We achieve
comparable performance with state-of-the-art algorithms in four public
datasets: PubFig (67.09%), Cifar-100 (54.85%), Caltech-256 (52.14%), and
Animals with Attributes (49.65%) when unseen classes exist in the
classification task.
| no_new_dataset | 0.953319 |
1410.3751 | Chee Seng Chan | Wei Ren Tan, Chee Seng Chan, Pratheepan Yogarajah and Joan Condell | A Fusion Approach for Efficient Human Skin Detection | Accepted in IEEE Transactions on Industrial Informatics, vol. 8(1),
pp. 138-147, new skin detection + ground truth (Pratheepan) dataset | IEEE Transactions on Industrial Informatics, vol. 8(1), pp.
138-147, 2012 | 10.1109/TII.2011.2172451 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A reliable human skin detection method that is adaptable to different human
skin colours and illumination conditions is essential for better human skin
segmentation. Even though different human skin colour detection solutions have
been successfully applied, they are prone to false skin detection and are not
able to cope with the variety of human skin colours across different ethnicities.
Moreover, existing methods require high computational cost. In this paper, we
propose a novel human skin detection approach that combines a smoothed 2D
histogram and Gaussian model, for automatic human skin detection in colour
image(s). In our approach an eye detector is used to refine the skin model for
a specific person. The proposed approach reduces computational costs as no
training is required; and it improves the accuracy of skin detection despite
wide variation in ethnicity and illumination. To the best of our knowledge,
this is the first method to employ fusion strategy for this purpose.
Qualitative and quantitative results on three standard public datasets and a
comparison with state-of-the-art methods have shown the effectiveness and
robustness of the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 16:12:58 GMT"
}
] | 2014-10-15T00:00:00 | [
[
"Tan",
"Wei Ren",
""
],
[
"Chan",
"Chee Seng",
""
],
[
"Yogarajah",
"Pratheepan",
""
],
[
"Condell",
"Joan",
""
]
] | TITLE: A Fusion Approach for Efficient Human Skin Detection
ABSTRACT: A reliable human skin detection method that is adaptable to different human
skin colours and illumination conditions is essential for better human skin
segmentation. Even though different human skin colour detection solutions have
been successfully applied, they are prone to false skin detection and are not
able to cope with the variety of human skin colours across different ethnicities.
Moreover, existing methods require high computational cost. In this paper, we
propose a novel human skin detection approach that combines a smoothed 2D
histogram and Gaussian model, for automatic human skin detection in colour
image(s). In our approach an eye detector is used to refine the skin model for
a specific person. The proposed approach reduces computational costs as no
training is required; and it improves the accuracy of skin detection despite
wide variation in ethnicity and illumination. To the best of our knowledge,
this is the first method to employ fusion strategy for this purpose.
Qualitative and quantitative results on three standard public datasets and a
comparison with state-of-the-art methods have shown the effectiveness and
robustness of the proposed approach.
| no_new_dataset | 0.950227 |
1410.3752 | Chee Seng Chan | Wai Lam Hoo, Tae-Kyun Kim, Yuru Pei and Chee Seng Chan | Enhanced Random Forest with Image/Patch-Level Learning for Image
Understanding | Accepted in ICPR 2014 (Oral) | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image understanding is an important research domain in the computer vision
due to its wide real-world applications. For an image understanding framework
that uses the Bag-of-Words model representation, the visual codebook is an
essential part. Random forest (RF) as a tree-structure discriminative codebook
has been a popular choice. However, the performance of the RF can be degraded
if the local patch labels are poorly assigned. In this paper, we tackle this
problem by a novel way to update the RF codebook learning for a more
discriminative codebook with the introduction of the soft class labels,
estimated from the pLSA model based on a feedback scheme. The feedback scheme
is performed on both the image and patch levels respectively, which is in
contrast to the state- of-the-art RF codebook learning that focused on either
image or patch level only. Experiments on 15-Scene and C-Pascal datasets had
shown the effectiveness of the proposed method in image understanding task.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 16:13:45 GMT"
}
] | 2014-10-15T00:00:00 | [
[
"Hoo",
"Wai Lam",
""
],
[
"Kim",
"Tae-Kyun",
""
],
[
"Pei",
"Yuru",
""
],
[
"Chan",
"Chee Seng",
""
]
] | TITLE: Enhanced Random Forest with Image/Patch-Level Learning for Image
Understanding
ABSTRACT: Image understanding is an important research domain in the computer vision
due to its wide real-world applications. For an image understanding framework
that uses the Bag-of-Words model representation, the visual codebook is an
essential part. Random forest (RF) as a tree-structure discriminative codebook
has been a popular choice. However, the performance of the RF can be degraded
if the local patch labels are poorly assigned. In this paper, we tackle this
problem by a novel way to update the RF codebook learning for a more
discriminative codebook with the introduction of the soft class labels,
estimated from the pLSA model based on a feedback scheme. The feedback scheme
is performed on both the image and patch levels respectively, which is in
contrast to the state-of-the-art RF codebook learning that focused on either
image or patch level only. Experiments on the 15-Scene and C-Pascal datasets have
shown the effectiveness of the proposed method in the image understanding task.
| no_new_dataset | 0.949295 |
1410.3756 | Chee Seng Chan | Mei Kuan Lim, Ven Jyn Kok, Chen Change Loy and Chee Seng Chan | Crowd Saliency Detection via Global Similarity Structure | Accepted in ICPR 2014 (Oral). Mei Kuan Lim and Ven Jyn Kok share
equal contributions | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is common for CCTV operators to overlook interesting events taking place
within the crowd due to the large number of people in the crowded scene (i.e.
marathon, rally). Thus, there is a dire need to automate the detection of
salient crowd regions acquiring immediate attention for a more effective and
proactive surveillance. This paper proposes a novel framework to identify and
localize salient regions in a crowd scene, by transforming low-level features
extracted from crowd motion field into a global similarity structure. The
global similarity structure representation allows the discovery of the
intrinsic manifold of the motion dynamics, which could not be captured by the
low-level representation. Ranking is then performed on the global similarity
structure to identify a set of extrema. The proposed approach is unsupervised
so the learning stage is eliminated. Experimental results on public datasets
demonstrate the effectiveness of exploiting such extrema in identifying
salient regions in various crowd scenarios that exhibit crowding, local
irregular motion, and unique motion areas such as sources and sinks.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 16:24:24 GMT"
}
] | 2014-10-15T00:00:00 | [
[
"Lim",
"Mei Kuan",
""
],
[
"Kok",
"Ven Jyn",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Chan",
"Chee Seng",
""
]
] | TITLE: Crowd Saliency Detection via Global Similarity Structure
ABSTRACT: It is common for CCTV operators to overlook interesting events taking place
within the crowd due to the large number of people in the crowded scene (i.e.
marathon, rally). Thus, there is a dire need to automate the detection of
salient crowd regions acquiring immediate attention for a more effective and
proactive surveillance. This paper proposes a novel framework to identify and
localize salient regions in a crowd scene, by transforming low-level features
extracted from crowd motion field into a global similarity structure. The
global similarity structure representation allows the discovery of the
intrinsic manifold of the motion dynamics, which could not be captured by the
low-level representation. Ranking is then performed on the global similarity
structure to identify a set of extrema. The proposed approach is unsupervised
so the learning stage is eliminated. Experimental results on public datasets
demonstrate the effectiveness of exploiting such extrema in identifying
salient regions in various crowd scenarios that exhibit crowding, local
irregular motion, and unique motion areas such as sources and sinks.
| no_new_dataset | 0.953319 |
1410.3791 | Rami Al-Rfou | Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, Steven Skiena | POLYGLOT-NER: Massive Multilingual Named Entity Recognition | 9 pages, 4 figures, 5 tables | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing diversity of languages used on the web introduces a new level
of complexity to Information Retrieval (IR) systems. We can no longer assume
that textual content is written in one language or even the same language
family. In this paper, we demonstrate how to build massive multilingual
annotators with minimal human expertise and intervention. We describe a system
that builds Named Entity Recognition (NER) annotators for 40 major languages
using Wikipedia and Freebase. Our approach does not require human-annotated NER
datasets or language-specific resources like treebanks, parallel corpora, and
orthographic rules. The novelty of the approach lies in using only
language-agnostic techniques, while achieving competitive performance.
Our method learns distributed word representations (word embeddings) which
encode semantic and syntactic features of words in each language. Then, we
automatically generate datasets from Wikipedia link structure and Freebase
attributes. Finally, we apply two preprocessing stages (oversampling and exact
surface form matching) which do not require any linguistic expertise.
Our evaluation is twofold: First, we demonstrate the system performance on
human-annotated datasets. Second, for languages where no gold-standard
benchmarks are available, we propose a new method, distant evaluation, based on
statistical machine translation.
| [
{
"version": "v1",
"created": "Tue, 14 Oct 2014 18:37:32 GMT"
}
] | 2014-10-15T00:00:00 | [
[
"Al-Rfou",
"Rami",
""
],
[
"Kulkarni",
"Vivek",
""
],
[
"Perozzi",
"Bryan",
""
],
[
"Skiena",
"Steven",
""
]
] | TITLE: POLYGLOT-NER: Massive Multilingual Named Entity Recognition
ABSTRACT: The increasing diversity of languages used on the web introduces a new level
of complexity to Information Retrieval (IR) systems. We can no longer assume
that textual content is written in one language or even the same language
family. In this paper, we demonstrate how to build massive multilingual
annotators with minimal human expertise and intervention. We describe a system
that builds Named Entity Recognition (NER) annotators for 40 major languages
using Wikipedia and Freebase. Our approach does not require human-annotated NER
datasets or language-specific resources like treebanks, parallel corpora, and
orthographic rules. The novelty of the approach lies in using only
language-agnostic techniques, while achieving competitive performance.
Our method learns distributed word representations (word embeddings) which
encode semantic and syntactic features of words in each language. Then, we
automatically generate datasets from Wikipedia link structure and Freebase
attributes. Finally, we apply two preprocessing stages (oversampling and exact
surface form matching) which do not require any linguistic expertise.
Our evaluation is twofold: First, we demonstrate the system performance on
human-annotated datasets. Second, for languages where no gold-standard
benchmarks are available, we propose a new method, distant evaluation, based on
statistical machine translation.
| no_new_dataset | 0.947866 |
1405.4802 | Paritosh Parmar | Paritosh Parmar | Use of Computer Vision to Detect Tangles in Tangled Objects | IEEE International Conference on Image Information Processing;
untangle; untangling; computer vision; robotic vision; untangling by robot;
Tangled-100 dataset; tangled linear deformable objects; personal robotics;
image processing | null | 10.1109/ICIIP.2013.6707551 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Untangling of structures like ropes and wires by autonomous robots can be
useful in areas such as personal robotics, industries and electrical wiring &
repairing by robots. This problem can be tackled by using a computer vision
system in the robot. This paper proposes a computer vision based method for
analyzing visual data acquired from a camera to perceive the overlap of wires,
ropes, and hoses, i.e., to detect tangles. The information obtained after
processing an image according to the proposed method comprises the positions of
tangles in the tangled object and which wire passes over which wire. This
information can then be used to guide the robot to untangle the wire/s. Given an
image, preprocessing is done to remove noise. Then the edges of the wires are
detected. After that, the image is divided into smaller blocks and each block is
checked for wire overlap/s and other relevant information. The TANGLED-100
dataset was introduced, which consists of images of tangled linear deformable
objects. The method discussed here was tested on the TANGLED-100 dataset. The
accuracy achieved during the experiments was found to be 74.9%. Robotic
simulations were carried out to demonstrate the use of the proposed method in
robot applications. The proposed method is general and can be used by robots
working in different situations.
| [
{
"version": "v1",
"created": "Mon, 19 May 2014 16:51:11 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Oct 2014 04:50:24 GMT"
}
] | 2014-10-14T00:00:00 | [
[
"Parmar",
"Paritosh",
""
]
] | TITLE: Use of Computer Vision to Detect Tangles in Tangled Objects
ABSTRACT: Untangling of structures like ropes and wires by autonomous robots can be
useful in areas such as personal robotics, industries and electrical wiring &
repairing by robots. This problem can be tackled by using a computer vision
system in the robot. This paper proposes a computer vision based method for
analyzing visual data acquired from a camera to perceive the overlap of wires,
ropes, and hoses, i.e., to detect tangles. The information obtained after
processing an image according to the proposed method comprises the positions of
tangles in the tangled object and which wire passes over which wire. This
information can then be used to guide the robot to untangle the wire/s. Given an
image, preprocessing is done to remove noise. Then the edges of the wires are
detected. After that, the image is divided into smaller blocks and each block is
checked for wire overlap/s and other relevant information. The TANGLED-100
dataset was introduced, which consists of images of tangled linear deformable
objects. The method discussed here was tested on the TANGLED-100 dataset. The
accuracy achieved during the experiments was found to be 74.9%. Robotic
simulations were carried out to demonstrate the use of the proposed method in
robot applications. The proposed method is general and can be used by robots
working in different situations.
| new_dataset | 0.96051 |
1410.2988 | Jayakrushna Sahoo | Jayakrushna Sahoo, Ashok Kumar Das, A. Goswami | An Algorithm for Mining High Utility Closed Itemsets and Generators | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by/3.0/ | Traditional association rule mining based on the support-confidence framework
provides the objective measure of the rules that are of interest to users.
However, it does not reflect the utility of the rules. To extract non-redundant
association rules in the support-confidence framework, frequent closed itemsets and
their generators play an important role. To extract non-redundant association
rules among high utility itemsets, high utility closed itemsets (HUCI) and
their generators should be extracted in order to apply the traditional
support-confidence framework. However, no efficient method exists at present
for mining HUCIs with their generators. This paper addresses this issue. A
post-processing algorithm, called the HUCI-Miner, is proposed to mine HUCIs
with their generators. The proposed algorithm is implemented using both
synthetic and real datasets.
| [
{
"version": "v1",
"created": "Sat, 11 Oct 2014 11:30:14 GMT"
}
] | 2014-10-14T00:00:00 | [
[
"Sahoo",
"Jayakrushna",
""
],
[
"Das",
"Ashok Kumar",
""
],
[
"Goswami",
"A.",
""
]
] | TITLE: An Algorithm for Mining High Utility Closed Itemsets and Generators
ABSTRACT: Traditional association rule mining based on the support-confidence framework
provides the objective measure of the rules that are of interest to users.
However, it does not reflect the utility of the rules. To extract non-redundant
association rules in the support-confidence framework, frequent closed itemsets and
their generators play an important role. To extract non-redundant association
rules among high utility itemsets, high utility closed itemsets (HUCI) and
their generators should be extracted in order to apply the traditional
support-confidence framework. However, no efficient method exists at present
for mining HUCIs with their generators. This paper addresses this issue. A
post-processing algorithm, called the HUCI-Miner, is proposed to mine HUCIs
with their generators. The proposed algorithm is implemented using both
synthetic and real datasets.
| no_new_dataset | 0.95388 |
1410.3080 | Xin Yuan | Xin Yuan, Patrick Llull, David J. Brady, and Lawrence Carin | Tree-Structure Bayesian Compressive Sensing for Video | 5 pages, 4 Figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Bayesian compressive sensing framework is developed for video
reconstruction based on the color coded aperture compressive temporal imaging
(CACTI) system. By exploiting the three-dimensional (3D) tree structure of the
wavelet and Discrete Cosine Transformation (DCT) coefficients, a Bayesian
compressive sensing inversion algorithm is derived to reconstruct (up to 22)
color video frames from a single monochromatic compressive measurement. Both
simulated and real datasets are adopted to verify the performance of the
proposed algorithm.
| [
{
"version": "v1",
"created": "Sun, 12 Oct 2014 11:43:37 GMT"
}
] | 2014-10-14T00:00:00 | [
[
"Yuan",
"Xin",
""
],
[
"Llull",
"Patrick",
""
],
[
"Brady",
"David J.",
""
],
[
"Carin",
"Lawrence",
""
]
] | TITLE: Tree-Structure Bayesian Compressive Sensing for Video
ABSTRACT: A Bayesian compressive sensing framework is developed for video
reconstruction based on the color coded aperture compressive temporal imaging
(CACTI) system. By exploiting the three-dimensional (3D) tree structure of the
wavelet and Discrete Cosine Transformation (DCT) coefficients, a Bayesian
compressive sensing inversion algorithm is derived to reconstruct (up to 22)
color video frames from a single monochromatic compressive measurement. Both
simulated and real datasets are adopted to verify the performance of the
proposed algorithm.
| no_new_dataset | 0.951459 |
1410.3169 | Ellen Gasparovic | Paul Bendich, Ellen Gasparovic, John Harer, Rauf Izmailov, and Linda
Ness | Multi-Scale Local Shape Analysis and Feature Selection in Machine
Learning Applications | 15 pages, 6 figures, 8 tables | null | null | null | cs.CG cs.LG math.AT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a method called multi-scale local shape analysis, or MLSA, for
extracting features that describe the local structure of points within a
dataset. The method uses both geometric and topological features at multiple
levels of granularity to capture diverse types of local information for
subsequent machine learning algorithms operating on the dataset. Using
synthetic and real dataset examples, we demonstrate significant performance
improvement of classification algorithms constructed for these datasets with
correspondingly augmented features.
| [
{
"version": "v1",
"created": "Mon, 13 Oct 2014 00:21:59 GMT"
}
] | 2014-10-14T00:00:00 | [
[
"Bendich",
"Paul",
""
],
[
"Gasparovic",
"Ellen",
""
],
[
"Harer",
"John",
""
],
[
"Izmailov",
"Rauf",
""
],
[
"Ness",
"Linda",
""
]
] | TITLE: Multi-Scale Local Shape Analysis and Feature Selection in Machine
Learning Applications
ABSTRACT: We introduce a method called multi-scale local shape analysis, or MLSA, for
extracting features that describe the local structure of points within a
dataset. The method uses both geometric and topological features at multiple
levels of granularity to capture diverse types of local information for
subsequent machine learning algorithms operating on the dataset. Using
synthetic and real dataset examples, we demonstrate significant performance
improvement of classification algorithms constructed for these datasets with
correspondingly augmented features.
| no_new_dataset | 0.956513 |
1409.5114 | Shuxin Ouyang | Shuxin Ouyang, Timothy Hospedales, Yi-Zhe Song, Xueming Li | A Survey on Heterogeneous Face Recognition: Sketch, Infra-red, 3D and
Low-resolution | survey paper(35 pages) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heterogeneous face recognition (HFR) refers to matching face imagery across
different domains. It has received much interest from the research community as
a result of its profound implications in law enforcement. A wide variety of new
invariant features, cross-modality matching models and heterogeneous datasets
have been established in recent years. This survey provides a comprehensive review
of established techniques and recent developments in HFR. Moreover, we offer a
detailed account of datasets and benchmarks commonly used for evaluation. We
finish by assessing the state of the field and discussing promising directions
for future research.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 19:55:34 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Oct 2014 13:23:30 GMT"
}
] | 2014-10-13T00:00:00 | [
[
"Ouyang",
"Shuxin",
""
],
[
"Hospedales",
"Timothy",
""
],
[
"Song",
"Yi-Zhe",
""
],
[
"Li",
"Xueming",
""
]
] | TITLE: A Survey on Heterogeneous Face Recognition: Sketch, Infra-red, 3D and
Low-resolution
ABSTRACT: Heterogeneous face recognition (HFR) refers to matching face imagery across
different domains. It has received much interest from the research community as
a result of its profound implications in law enforcement. A wide variety of new
invariant features, cross-modality matching models and heterogeneous datasets
have been established in recent years. This survey provides a comprehensive review
of established techniques and recent developments in HFR. Moreover, we offer a
detailed account of datasets and benchmarks commonly used for evaluation. We
finish by assessing the state of the field and discussing promising directions
for future research.
| no_new_dataset | 0.95275 |
1410.2698 | Michael Gowanlock | Michael Gowanlock and Henri Casanova | Technical Report: Towards Efficient Indexing of Spatiotemporal
Trajectories on the GPU for Distance Threshold Similarity Searches | 30 pages, 18 figures, 1 table | null | null | null | cs.DC cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applications in many domains require processing moving object trajectories.
In this work, we focus on a trajectory similarity search that finds all
trajectories within a given distance of a query trajectory over a time
interval, which we call the distance threshold similarity search. We develop
three indexing strategies with spatial, temporal and spatiotemporal selectivity
for the GPU that differ significantly from indexes suitable for the CPU, and
show the conditions under which each index achieves good performance.
Furthermore, we show that the GPU implementations outperform multithreaded CPU
implementations in a range of experimental scenarios, making the GPU an
attractive technology for processing moving object trajectories. We test our
implementations on two synthetic and one real-world dataset of a galaxy merger.
| [
{
"version": "v1",
"created": "Fri, 10 Oct 2014 07:44:05 GMT"
}
] | 2014-10-13T00:00:00 | [
[
"Gowanlock",
"Michael",
""
],
[
"Casanova",
"Henri",
""
]
] | TITLE: Technical Report: Towards Efficient Indexing of Spatiotemporal
Trajectories on the GPU for Distance Threshold Similarity Searches
ABSTRACT: Applications in many domains require processing moving object trajectories.
In this work, we focus on a trajectory similarity search that finds all
trajectories within a given distance of a query trajectory over a time
interval, which we call the distance threshold similarity search. We develop
three indexing strategies with spatial, temporal and spatiotemporal selectivity
for the GPU that differ significantly from indexes suitable for the CPU, and
show the conditions under which each index achieves good performance.
Furthermore, we show that the GPU implementations outperform multithreaded CPU
implementations in a range of experimental scenarios, making the GPU an
attractive technology for processing moving object trajectories. We test our
implementations on two synthetic and one real-world dataset of a galaxy merger.
| no_new_dataset | 0.9463 |
1410.1035 | Rahul Rama Varior Mr. | Rahul Rama Varior, Gang Wang and Jiwen Lu | Learning Invariant Color Features for Person Re-Identification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matching people across multiple camera views, known as person
re-identification, is a challenging problem due to the change in visual
appearance caused by varying lighting conditions. The perceived color of the
subject appears to be different with respect to illumination. Previous works
use color as it is or address these challenges by designing color spaces
focusing on a specific cue. In this paper, we propose a data driven approach
for learning color patterns from pixels sampled from images across two camera
views. The intuition behind this work is that, even though pixel values of the same
color would be different across views, they should be encoded with the same
values. We model color feature generation as a learning problem by jointly
learning a linear transformation and a dictionary to encode pixel values. We
also analyze different photometric invariant color spaces. Using color as the
only cue, we compare our approach with all the photometric invariant color
spaces and show superior performance over all of them. Combining with other
learned low-level and high-level features, we obtain promising results in
ViPER, Person Re-ID 2011 and CAVIAR4REID datasets.
| [
{
"version": "v1",
"created": "Sat, 4 Oct 2014 10:27:51 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Oct 2014 10:32:36 GMT"
}
] | 2014-10-10T00:00:00 | [
[
"Varior",
"Rahul Rama",
""
],
[
"Wang",
"Gang",
""
],
[
"Lu",
"Jiwen",
""
]
] | TITLE: Learning Invariant Color Features for Person Re-Identification
ABSTRACT: Matching people across multiple camera views, known as person
re-identification, is a challenging problem due to the change in visual
appearance caused by varying lighting conditions. The perceived color of the
subject appears to be different with respect to illumination. Previous works
use color as it is or address these challenges by designing color spaces
focusing on a specific cue. In this paper, we propose a data driven approach
for learning color patterns from pixels sampled from images across two camera
views. The intuition behind this work is that, even though pixel values of the same
color would be different across views, they should be encoded with the same
values. We model color feature generation as a learning problem by jointly
learning a linear transformation and a dictionary to encode pixel values. We
also analyze different photometric invariant color spaces. Using color as the
only cue, we compare our approach with all the photometric invariant color
spaces and show superior performance over all of them. Combining with other
learned low-level and high-level features, we obtain promising results in
ViPER, Person Re-ID 2011 and CAVIAR4REID datasets.
| no_new_dataset | 0.949995 |
1410.1940 | Qi(Rose) Yu | Qi (Rose) Yu, Xinran He and Yan Liu | GLAD: Group Anomaly Detection in Social Media Analysis- Extended
Abstract | null | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional anomaly detection on social media mostly focuses on individual
point anomalies while anomalous phenomena usually occur in groups. Therefore it
is valuable to study the collective behavior of individuals and detect group
anomalies. Existing group anomaly detection approaches rely on the assumption
that the groups are known, which can hardly be true in real world social media
applications. In this paper, we take a generative approach by proposing a
hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD
takes both pair-wise and point-wise data as input, automatically infers the
groups and detects group anomalies simultaneously. To account for the dynamic
properties of the social media data, we further generalize GLAD to its dynamic
extension d-GLAD. We conduct extensive experiments to evaluate our models on
both synthetic and real world datasets. The empirical results demonstrate that
our approach is effective and robust in discovering latent groups and detecting
group anomalies.
| [
{
"version": "v1",
"created": "Tue, 7 Oct 2014 23:11:37 GMT"
}
] | 2014-10-09T00:00:00 | [
[
"Qi",
"",
"",
"Rose"
],
[
"Yu",
"",
""
],
[
"He",
"Xinran",
""
],
[
"Liu",
"Yan",
""
]
] | TITLE: GLAD: Group Anomaly Detection in Social Media Analysis- Extended
Abstract
ABSTRACT: Traditional anomaly detection on social media mostly focuses on individual
point anomalies while anomalous phenomena usually occur in groups. Therefore it
is valuable to study the collective behavior of individuals and detect group
anomalies. Existing group anomaly detection approaches rely on the assumption
that the groups are known, which can hardly be true in real world social media
applications. In this paper, we take a generative approach by proposing a
hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD
takes both pair-wise and point-wise data as input, automatically infers the
groups and detects group anomalies simultaneously. To account for the dynamic
properties of the social media data, we further generalize GLAD to its dynamic
extension d-GLAD. We conduct extensive experiments to evaluate our models on
both synthetic and real world datasets. The empirical results demonstrate that
our approach is effective and robust in discovering latent groups and detecting
group anomalies.
| no_new_dataset | 0.951549 |
1410.2100 | Wu Xianyan student | Wu Xianyan, Han Qi, Le Dan, Niu Xiamu | A New Method for Estimating the Widths of JPEG Images | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image width is important for image understanding. We propose a novel method
to estimate widths for JPEG images when their widths are not available. The key
idea is that the distance between two decoded MCUs (Minimum Coded Unit)
adjacent in the vertical direction is usually small, which is measured by the
average Euclidean distance between the pixels from the bottom row of the top
MCU and the top row of the bottom MCU. On PASCAL VOC 2010 challenge dataset and
USC-SIPI image database, experimental results show the high performance of the
proposed approach.
| [
{
"version": "v1",
"created": "Wed, 8 Oct 2014 13:24:06 GMT"
}
] | 2014-10-09T00:00:00 | [
[
"Xianyan",
"Wu",
""
],
[
"Qi",
"Han",
""
],
[
"Dan",
"Le",
""
],
[
"Xiamu",
"Niu",
""
]
] | TITLE: A New Method for Estimating the Widths of JPEG Images
ABSTRACT: Image width is important for image understanding. We propose a novel method
to estimate widths for JPEG images when their widths are not available. The key
idea is that the distance between two decoded MCUs (Minimum Coded Unit)
adjacent in the vertical direction is usually small, which is measured by the
average Euclidean distance between the pixels from the bottom row of the top
MCU and the top row of the bottom MCU. On PASCAL VOC 2010 challenge dataset and
USC-SIPI image database, experimental results show the high performance of the
proposed approach.
| no_new_dataset | 0.947575 |
1401.5383 | Rayan Chikhi | Rayan Chikhi, Antoine Limasset, Shaun Jackman, Jared Simpson and Paul
Medvedev | On the representation of de Bruijn graphs | Journal version (JCB). A preliminary version of this article was
published in the proceedings of RECOMB 2014 | null | null | null | q-bio.QM cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The de Bruijn graph plays an important role in bioinformatics, especially in
the context of de novo assembly. However, the representation of the de Bruijn
graph in memory is a computational bottleneck for many assemblers. Recent
papers proposed a navigational data structure approach in order to improve
memory usage. We prove several theoretical space lower bounds to show the
limitation of these types of approaches. We further design and implement a
general data structure (DBGFM) and demonstrate its use on a human whole-genome
dataset, achieving space usage of 1.5 GB and a 46% improvement over previous
approaches. As part of DBGFM, we develop the notion of frequency-based
minimizers and show how it can be used to enumerate all maximal simple paths of
the de Bruijn graph using only 43 MB of memory. Finally, we demonstrate that
our approach can be integrated into an existing assembler by modifying the
ABySS software to use DBGFM.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2014 16:55:02 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jan 2014 16:53:37 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Feb 2014 22:55:09 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Oct 2014 12:39:56 GMT"
}
] | 2014-10-07T00:00:00 | [
[
"Chikhi",
"Rayan",
""
],
[
"Limasset",
"Antoine",
""
],
[
"Jackman",
"Shaun",
""
],
[
"Simpson",
"Jared",
""
],
[
"Medvedev",
"Paul",
""
]
] | TITLE: On the representation of de Bruijn graphs
ABSTRACT: The de Bruijn graph plays an important role in bioinformatics, especially in
the context of de novo assembly. However, the representation of the de Bruijn
graph in memory is a computational bottleneck for many assemblers. Recent
papers proposed a navigational data structure approach in order to improve
memory usage. We prove several theoretical space lower bounds to show the
limitation of these types of approaches. We further design and implement a
general data structure (DBGFM) and demonstrate its use on a human whole-genome
dataset, achieving space usage of 1.5 GB and a 46% improvement over previous
approaches. As part of DBGFM, we develop the notion of frequency-based
minimizers and show how it can be used to enumerate all maximal simple paths of
the de Bruijn graph using only 43 MB of memory. Finally, we demonstrate that
our approach can be integrated into an existing assembler by modifying the
ABySS software to use DBGFM.
| no_new_dataset | 0.941815 |
1410.0969 | Abdul Kadir | Abdul Kadir | A Model of Plant Identification System Using GLCM, Lacunarity And Shen
Features | 10 pages | Research Journal of Pharmaceutical, Biological and Chemical
Sciences, Vol 5(2), 2014 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, many approaches have been introduced by several researchers to
identify plants. Now, applications of texture, shape, color and vein features
are common practices. However, there are many possible methods that can be
developed to improve the performance of such identification systems. Therefore,
several experiments were conducted in this research. As a result, a novel
approach using a combination of Gray-Level Co-occurrence Matrix,
lacunarity and Shen features and a Bayesian classifier gives a better result
compared to other plant identification systems. For comparison, this research
used two kinds of datasets that are usually used for testing the
performance of each plant identification system. The results show that the
system gives an accuracy rate of 97.19% when using the Flavia dataset and
95.00% when using the Foliage dataset and outperforms other approaches.
| [
{
"version": "v1",
"created": "Wed, 27 Aug 2014 00:49:05 GMT"
}
] | 2014-10-07T00:00:00 | [
[
"Kadir",
"Abdul",
""
]
] | TITLE: A Model of Plant Identification System Using GLCM, Lacunarity And Shen
Features
ABSTRACT: Recently, many approaches have been introduced by several researchers to
identify plants. Now, applications of texture, shape, color and vein features
are common practices. However, there are many possible methods that can be
developed to improve the performance of such identification systems. Therefore,
several experiments were conducted in this research. As a result, a novel
approach using a combination of Gray-Level Co-occurrence Matrix,
lacunarity and Shen features and a Bayesian classifier gives a better result
compared to other plant identification systems. For comparison, this research
used two kinds of datasets that are usually used for testing the
performance of each plant identification system. The results show that the
system gives an accuracy rate of 97.19% when using the Flavia dataset and
95.00% when using the Foliage dataset and outperforms other approaches.
| no_new_dataset | 0.951459 |
1410.1090 | Junhua Mao | Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille | Explain Images with Multimodal Recurrent Neural Networks | null | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model
for generating novel sentence descriptions to explain the content of images. It
directly models the probability distribution of generating a word given
previous words and the image. Image descriptions are generated by sampling from
this distribution. The model consists of two sub-networks: a deep recurrent
neural network for sentences and a deep convolutional network for images. These
two sub-networks interact with each other in a multimodal layer to form the
whole m-RNN model. The effectiveness of our model is validated on three
benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model
outperforms the state-of-the-art generative method. In addition, the m-RNN
model can be applied to retrieval tasks for retrieving images or sentences, and
achieves significant performance improvement over the state-of-the-art methods
which directly optimize the ranking objective function for retrieval.
| [
{
"version": "v1",
"created": "Sat, 4 Oct 2014 20:24:34 GMT"
}
] | 2014-10-07T00:00:00 | [
[
"Mao",
"Junhua",
""
],
[
"Xu",
"Wei",
""
],
[
"Yang",
"Yi",
""
],
[
"Wang",
"Jiang",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Explain Images with Multimodal Recurrent Neural Networks
ABSTRACT: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model
for generating novel sentence descriptions to explain the content of images. It
directly models the probability distribution of generating a word given
previous words and the image. Image descriptions are generated by sampling from
this distribution. The model consists of two sub-networks: a deep recurrent
neural network for sentences and a deep convolutional network for images. These
two sub-networks interact with each other in a multimodal layer to form the
whole m-RNN model. The effectiveness of our model is validated on three
benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model
outperforms the state-of-the-art generative method. In addition, the m-RNN
model can be applied to retrieval tasks for retrieving images or sentences, and
achieves significant performance improvement over the state-of-the-art methods
which directly optimize the ranking objective function for retrieval.
| no_new_dataset | 0.948202 |
1410.1151 | Vladimir Bochkarev | Yulia S. Maslennikova, Vladimir V. Bochkarev | Training Algorithm for Neuro-Fuzzy Network Based on Singular Spectrum
Analysis | 5 pages, 3 figures | null | null | null | cs.NE stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we propose a combination of a noise-reduction algorithm
based on Singular Spectrum Analysis (SSA) and a standard feedforward neural
prediction model. Basically, the proposed algorithm consists of two different
steps: data preprocessing based on the SSA filtering method and step-by-step
training procedure in which we use a simple feedforward multilayer neural
network with backpropagation learning. The proposed noise-reduction procedure
successfully removes most of the noise. That increases long-term predictability
of the processed dataset in comparison with the raw dataset. The method was
applied to predict the International sunspot number RZ time series. The results
show that our combined technique has better performance than that offered by
the same network directly applied to the raw dataset.
| [
{
"version": "v1",
"created": "Sun, 5 Oct 2014 12:25:15 GMT"
}
] | 2014-10-07T00:00:00 | [
[
"Maslennikova",
"Yulia S.",
""
],
[
"Bochkarev",
"Vladimir V.",
""
]
] | TITLE: Training Algorithm for Neuro-Fuzzy Network Based on Singular Spectrum
Analysis
ABSTRACT: In this article, we propose a combination of a noise-reduction algorithm
based on Singular Spectrum Analysis (SSA) and a standard feedforward neural
prediction model. Basically, the proposed algorithm consists of two different
steps: data preprocessing based on the SSA filtering method and step-by-step
training procedure in which we use a simple feedforward multilayer neural
network with backpropagation learning. The proposed noise-reduction procedure
successfully removes most of the noise. That increases long-term predictability
of the processed dataset in comparison with the raw dataset. The method was
applied to predict the International sunspot number RZ time series. The results
show that our combined technique has better performance than that offered by
the same network directly applied to the raw dataset.
| no_new_dataset | 0.954647 |
1407.0733 | Davide Barbieri | Giacomo Cocci, Davide Barbieri, Giovanna Citti, Alessandro Sarti | Cortical spatio-temporal dimensionality reduction for visual grouping | null | null | null | null | cs.CV cs.NE q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The visual systems of many mammals, including humans, are able to integrate
the geometric information of visual stimuli and to perform cognitive tasks
already at the first stages of the cortical processing. This is thought to be
the result of a combination of mechanisms, which include feature extraction at
single cell level and geometric processing by means of cells connectivity. We
present a geometric model of such connectivities in the space of detected
features associated to spatio-temporal visual stimuli, and show how they can be
used to obtain low-level object segmentation. The main idea is that of defining
a spectral clustering procedure with anisotropic affinities over datasets
consisting of embeddings of the visual stimuli into higher dimensional spaces.
Neural plausibility of the proposed arguments will be discussed.
| [
{
"version": "v1",
"created": "Wed, 2 Jul 2014 22:07:06 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Oct 2014 16:46:41 GMT"
}
] | 2014-10-06T00:00:00 | [
[
"Cocci",
"Giacomo",
""
],
[
"Barbieri",
"Davide",
""
],
[
"Citti",
"Giovanna",
""
],
[
"Sarti",
"Alessandro",
""
]
] | TITLE: Cortical spatio-temporal dimensionality reduction for visual grouping
ABSTRACT: The visual systems of many mammals, including humans, are able to integrate
the geometric information of visual stimuli and to perform cognitive tasks
already at the first stages of the cortical processing. This is thought to be
the result of a combination of mechanisms, which include feature extraction at
single cell level and geometric processing by means of cells connectivity. We
present a geometric model of such connectivities in the space of detected
features associated to spatio-temporal visual stimuli, and show how they can be
used to obtain low-level object segmentation. The main idea is that of defining
a spectral clustering procedure with anisotropic affinities over datasets
consisting of embeddings of the visual stimuli into higher dimensional spaces.
Neural plausibility of the proposed arguments will be discussed.
| no_new_dataset | 0.950088 |
1402.1973 | Alhussein Fawzi | Alhussein Fawzi, Mike Davies, Pascal Frossard | Dictionary learning for fast classification based on soft-thresholding | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifiers based on sparse representations have recently been shown to
provide excellent results in many visual recognition and classification tasks.
However, the high cost of computing sparse representations at test time is a
major obstacle that limits the applicability of these methods in large-scale
problems, or in scenarios where computational power is restricted. We consider
in this paper a simple yet efficient alternative to sparse coding for feature
extraction. We study a classification scheme that applies the soft-thresholding
nonlinear mapping in a dictionary, followed by a linear classifier. A novel
supervised dictionary learning algorithm tailored for this low complexity
classification architecture is proposed. The dictionary learning problem, which
jointly learns the dictionary and linear classifier, is cast as a difference of
convex (DC) program and solved efficiently with an iterative DC solver. We
conduct experiments on several datasets, and show that our learning algorithm
that leverages the structure of the classification problem outperforms generic
learning procedures. Our simple classifier based on soft-thresholding also
competes with the recent sparse coding classifiers, when the dictionary is
learned appropriately. The adopted classification scheme further requires less
computational time at the testing stage, compared to other classifiers. The
proposed scheme shows the potential of the adequately trained soft-thresholding
mapping for classification and paves the way towards the development of very
efficient classification methods for vision problems.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2014 18:18:33 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Oct 2014 16:45:19 GMT"
}
] | 2014-10-03T00:00:00 | [
[
"Fawzi",
"Alhussein",
""
],
[
"Davies",
"Mike",
""
],
[
"Frossard",
"Pascal",
""
]
] | TITLE: Dictionary learning for fast classification based on soft-thresholding
ABSTRACT: Classifiers based on sparse representations have recently been shown to
provide excellent results in many visual recognition and classification tasks.
However, the high cost of computing sparse representations at test time is a
major obstacle that limits the applicability of these methods in large-scale
problems, or in scenarios where computational power is restricted. We consider
in this paper a simple yet efficient alternative to sparse coding for feature
extraction. We study a classification scheme that applies the soft-thresholding
nonlinear mapping in a dictionary, followed by a linear classifier. A novel
supervised dictionary learning algorithm tailored for this low complexity
classification architecture is proposed. The dictionary learning problem, which
jointly learns the dictionary and linear classifier, is cast as a difference of
convex (DC) program and solved efficiently with an iterative DC solver. We
conduct experiments on several datasets, and show that our learning algorithm
that leverages the structure of the classification problem outperforms generic
learning procedures. Our simple classifier based on soft-thresholding also
competes with the recent sparse coding classifiers, when the dictionary is
learned appropriately. The adopted classification scheme further requires less
computational time at the testing stage, compared to other classifiers. The
proposed scheme shows the potential of the adequately trained soft-thresholding
mapping for classification and paves the way towards the development of very
efficient classification methods for vision problems.
| no_new_dataset | 0.945851 |
1403.7827 | David Fabian Klosik | David F. Klosik, Stefan Bornholdt, Marc-Thorsten H\"utt | Motif-based success scores in coauthorship networks are highly sensitive
to author name disambiguation | 7 pages, 7 figures | Phys. Rev. E 90, 032811 (2014) | 10.1103/PhysRevE.90.032811 | null | physics.soc-ph cs.DL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following the work of Krumov et al. [Eur. Phys. J. B 84, 535 (2011)] we
revisit the question whether the usage of large citation datasets allows for
the quantitative assessment of social (by means of coauthorship of
publications) influence on the progression of science. Applying a more
comprehensive and well-curated dataset containing the publications in the
journals of the American Physical Society during the whole 20th century we find
that the measure chosen in the original study, a score based on small induced
subgraphs, has to be used with caution, since the obtained results are highly
sensitive to the exact implementation of the author disambiguation task.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2014 22:39:48 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Oct 2014 13:12:32 GMT"
}
] | 2014-10-03T00:00:00 | [
[
"Klosik",
"David F.",
""
],
[
"Bornholdt",
"Stefan",
""
],
[
"Hütt",
"Marc-Thorsten",
""
]
] | TITLE: Motif-based success scores in coauthorship networks are highly sensitive
to author name disambiguation
ABSTRACT: Following the work of Krumov et al. [Eur. Phys. J. B 84, 535 (2011)] we
revisit the question whether the usage of large citation datasets allows for
the quantitative assessment of social (by means of coauthorship of
publications) influence on the progression of science. Applying a more
comprehensive and well-curated dataset containing the publications in the
journals of the American Physical Society during the whole 20th century we find
that the measure chosen in the original study, a score based on small induced
subgraphs, has to be used with caution, since the obtained results are highly
sensitive to the exact implementation of the author disambiguation task.
| new_dataset | 0.626517 |
1410.0510 | Ludovic Denoyer | Ludovic Denoyer and Patrick Gallinari | Deep Sequential Neural Network | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Networks sequentially build high-level features through their
successive layers. We propose here a new neural network model where each layer
is associated with a set of candidate mappings. When an input is processed, at
each layer, one mapping among these candidates is selected according to a
sequential decision process. The resulting model is structured according to a
DAG like architecture, so that a path from the root to a leaf node defines a
sequence of transformations. Instead of considering global transformations,
like in classical multilayer networks, this model allows us to learn a set
of local transformations. It is thus able to process data with different
characteristics through specific sequences of such local transformations,
increasing the expressive power of this model w.r.t. a classical multilayered
network. The learning algorithm is inspired from policy gradient techniques
coming from the reinforcement learning domain and is used here instead of the
classical back-propagation based gradient descent techniques. Experiments on
different datasets show the relevance of this approach.
| [
{
"version": "v1",
"created": "Thu, 2 Oct 2014 10:58:17 GMT"
}
] | 2014-10-03T00:00:00 | [
[
"Denoyer",
"Ludovic",
""
],
[
"Gallinari",
"Patrick",
""
]
] | TITLE: Deep Sequential Neural Network
ABSTRACT: Neural Networks sequentially build high-level features through their
successive layers. We propose here a new neural network model where each layer
is associated with a set of candidate mappings. When an input is processed, at
each layer, one mapping among these candidates is selected according to a
sequential decision process. The resulting model is structured according to a
DAG like architecture, so that a path from the root to a leaf node defines a
sequence of transformations. Instead of considering global transformations,
like in classical multilayer networks, this model allows us to learn a set
of local transformations. It is thus able to process data with different
characteristics through specific sequences of such local transformations,
increasing the expressive power of this model w.r.t. a classical multilayered
network. The learning algorithm is inspired from policy gradient techniques
coming from the reinforcement learning domain and is used here instead of the
classical back-propagation based gradient descent techniques. Experiments on
different datasets show the relevance of this approach.
| no_new_dataset | 0.95018 |
1410.0001 | Fabien Gouyon | Fabien Gouyon, Bob L. Sturm, Joao Lobato Oliveira, Nuno Hespanhol, and
Thibault Langlois | On Evaluation Validity in Music Autotagging | Submitted for journal publication in September 2014 | null | null | null | cs.IR cs.SD | http://creativecommons.org/licenses/by/3.0/ | Music autotagging, an established problem in Music Information Retrieval,
aims to alleviate the human cost required to manually annotate collections of
recorded music with textual labels by automating the process. Many autotagging
systems have been proposed and evaluated by procedures and datasets that are
now standard (used in MIREX, for instance). Very little work, however, has been
dedicated to determine what these evaluations really mean about an autotagging
system, or the comparison of two systems, for the problem of annotating music
in the real world. In this article, we are concerned with explaining the figure
of merit of an autotagging system evaluated with a standard approach.
Specifically, does the figure of merit, or a comparison of figures of merit,
warrant a conclusion about how well autotagging systems have learned to
describe music with a specific vocabulary? The main contributions of this paper
are a formalization of the notion of validity in autotagging evaluation, and a
method to test it in general. We demonstrate the practical use of our method in
experiments with three specific state-of-the-art autotagging systems --all of
which are reproducible using the linked code and data. Our experiments show for
these specific systems in a simple and objective two-class task that the
standard evaluation approach does not provide valid indicators of their
performance.
| [
{
"version": "v1",
"created": "Tue, 30 Sep 2014 14:57:52 GMT"
}
] | 2014-10-02T00:00:00 | [
[
"Gouyon",
"Fabien",
""
],
[
"Sturm",
"Bob L.",
""
],
[
"Oliveira",
"Joao Lobato",
""
],
[
"Hespanhol",
"Nuno",
""
],
[
"Langlois",
"Thibault",
""
]
] | TITLE: On Evaluation Validity in Music Autotagging
ABSTRACT: Music autotagging, an established problem in Music Information Retrieval,
aims to alleviate the human cost required to manually annotate collections of
recorded music with textual labels by automating the process. Many autotagging
systems have been proposed and evaluated by procedures and datasets that are
now standard (used in MIREX, for instance). Very little work, however, has been
dedicated to determine what these evaluations really mean about an autotagging
system, or the comparison of two systems, for the problem of annotating music
in the real world. In this article, we are concerned with explaining the figure
of merit of an autotagging system evaluated with a standard approach.
Specifically, does the figure of merit, or a comparison of figures of merit,
warrant a conclusion about how well autotagging systems have learned to
describe music with a specific vocabulary? The main contributions of this paper
are a formalization of the notion of validity in autotagging evaluation, and a
method to test it in general. We demonstrate the practical use of our method in
experiments with three specific state-of-the-art autotagging systems --all of
which are reproducible using the linked code and data. Our experiments show for
these specific systems in a simple and objective two-class task that the
standard evaluation approach does not provide valid indicators of their
performance.
| no_new_dataset | 0.945147 |
1410.0095 | Xu Wang | Xu Wang, Konstantinos Slavakis, Gilad Lerman | Riemannian Multi-Manifold Modeling | null | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper advocates a novel framework for segmenting a dataset in a
Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds
of $M$. Important examples of $M$, for which the proposed clustering algorithm
is computationally efficient, are the sphere, the set of positive definite
matrices, and the Grassmannian. The clustering problem with these examples of
$M$ is already useful for numerous application domains such as action
identification in video sequences, dynamic texture clustering, brain fiber
segmentation in medical imaging, and clustering of deformed images. The
proposed clustering algorithm constructs a data-affinity matrix by thoroughly
exploiting the intrinsic geometry and then applies spectral clustering. The
intrinsic local geometry is encoded by local sparse coding and more importantly
by directional information of local tangent spaces and geodesics. Theoretical
guarantees are established for a simplified variant of the algorithm even when
the clusters intersect. To avoid complication, these guarantees assume that the
underlying submanifolds are geodesic. Extensive validation on synthetic and
real data demonstrates the resiliency of the proposed method against deviations
from the theoretical model as well as its superior performance over
state-of-the-art techniques.
| [
{
"version": "v1",
"created": "Wed, 1 Oct 2014 02:37:12 GMT"
}
] | 2014-10-02T00:00:00 | [
[
"Wang",
"Xu",
""
],
[
"Slavakis",
"Konstantinos",
""
],
[
"Lerman",
"Gilad",
""
]
] | TITLE: Riemannian Multi-Manifold Modeling
ABSTRACT: This paper advocates a novel framework for segmenting a dataset in a
Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds
of $M$. Important examples of $M$, for which the proposed clustering algorithm
is computationally efficient, are the sphere, the set of positive definite
matrices, and the Grassmannian. The clustering problem with these examples of
$M$ is already useful for numerous application domains such as action
identification in video sequences, dynamic texture clustering, brain fiber
segmentation in medical imaging, and clustering of deformed images. The
proposed clustering algorithm constructs a data-affinity matrix by thoroughly
exploiting the intrinsic geometry and then applies spectral clustering. The
intrinsic local geometry is encoded by local sparse coding and more importantly
by directional information of local tangent spaces and geodesics. Theoretical
guarantees are established for a simplified variant of the algorithm even when
the clusters intersect. To avoid complication, these guarantees assume that the
underlying submanifolds are geodesic. Extensive validation on synthetic and
real data demonstrates the resiliency of the proposed method against deviations
from the theoretical model as well as its superior performance over
state-of-the-art techniques.
| no_new_dataset | 0.946892 |
1410.0265 | Chao Li | Chao Li, Michael Hay, Gerome Miklau, Yue Wang | A Data- and Workload-Aware Algorithm for Range Queries Under
Differential Privacy | VLDB 2014 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a new algorithm for answering a given set of range queries under
$\epsilon$-differential privacy which often achieves substantially lower error
than competing methods. Our algorithm satisfies differential privacy by adding
noise that is adapted to the input data and to the given query set. We first
privately learn a partitioning of the domain into buckets that suit the input
data well. Then we privately estimate counts for each bucket, doing so in a
manner well-suited for the given query set. Since the performance of the
algorithm depends on the input database, we evaluate it on a wide range of real
datasets, showing that we can achieve the benefits of data-dependence on both
"easy" and "hard" databases.
| [
{
"version": "v1",
"created": "Wed, 1 Oct 2014 15:56:42 GMT"
}
] | 2014-10-02T00:00:00 | [
[
"Li",
"Chao",
""
],
[
"Hay",
"Michael",
""
],
[
"Miklau",
"Gerome",
""
],
[
"Wang",
"Yue",
""
]
] | TITLE: A Data- and Workload-Aware Algorithm for Range Queries Under
Differential Privacy
ABSTRACT: We describe a new algorithm for answering a given set of range queries under
$\epsilon$-differential privacy which often achieves substantially lower error
than competing methods. Our algorithm satisfies differential privacy by adding
noise that is adapted to the input data and to the given query set. We first
privately learn a partitioning of the domain into buckets that suit the input
data well. Then we privately estimate counts for each bucket, doing so in a
manner well-suited for the given query set. Since the performance of the
algorithm depends on the input database, we evaluate it on a wide range of real
datasets, showing that we can achieve the benefits of data-dependence on both
"easy" and "hard" databases.
| no_new_dataset | 0.946547 |