Column schema (type and observed range; ⌀ marks columns that contain nulls):

- id: string, 9–16 chars
- submitter: string, 3–64 chars ⌀
- authors: string, 5–6.63k chars
- title: string, 7–245 chars
- comments: string, 1–482 chars ⌀
- journal-ref: string, 4–382 chars ⌀
- doi: string, 9–151 chars ⌀
- report-no: string, 984 distinct values
- categories: string, 5–108 chars
- license: string, 9 distinct values
- abstract: string, 83–3.41k chars
- versions: list, 1–20 items
- update_date: timestamp[s], 2007-05-23 to 2025-04-11
- authors_parsed: sequence, 1–427 items
- prompt: string, 166–3.49k chars
- label: string, 2 distinct values
- prob: float64, 0.5–0.98

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1501.04537 | Mohammad Haris Baig | Mohammad Haris Baig and Lorenzo Torresani | Coupled Depth Learning | 10 pages, 3 Figures, 4 Tables with quantitative evaluations | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a method for estimating depth from a single image
using a coarse-to-fine approach. We argue that modeling the fine depth details
is easier after a coarse depth map has been computed. We express a global
(coarse) depth map of an image as a linear combination of a depth basis learned
from training examples. The depth basis captures spatial and statistical
regularities and reduces the problem of global depth estimation to the task of
predicting the input-specific coefficients in the linear combination. This is
formulated as a regression problem from a holistic representation of the image.
Crucially, the depth basis and the regression function are {\bf coupled} and
jointly optimized by our learning scheme. We demonstrate that this results in a
significant improvement in accuracy compared to direct regression of depth
pixel values or approaches learning the depth basis disjointly from the
regression function. The global depth estimate is then used as guidance by a
local refinement method that introduces depth details that were not captured at
the global level. Experiments on the NYUv2 and KITTI datasets show that our
method outperforms the existing state-of-the-art at a considerably lower
computational cost for both training and testing.
| [
{
"version": "v1",
"created": "Mon, 19 Jan 2015 16:18:48 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jan 2015 23:17:12 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Apr 2015 22:51:43 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Sep 2015 06:36:34 GMT"
},
{
"version": "v5",
"created": "Thu, 15 Oct 2015 04:35:32 GMT"
},
{
"version": "v6",
"created": "Tue, 9 Feb 2016 16:27:35 GMT"
}
] | 2016-02-10T00:00:00 | [
[
"Baig",
"Mohammad Haris",
""
],
[
"Torresani",
"Lorenzo",
""
]
] | TITLE: Coupled Depth Learning
ABSTRACT: In this paper we propose a method for estimating depth from a single image
using a coarse-to-fine approach. We argue that modeling the fine depth details
is easier after a coarse depth map has been computed. We express a global
(coarse) depth map of an image as a linear combination of a depth basis learned
from training examples. The depth basis captures spatial and statistical
regularities and reduces the problem of global depth estimation to the task of
predicting the input-specific coefficients in the linear combination. This is
formulated as a regression problem from a holistic representation of the image.
Crucially, the depth basis and the regression function are {\bf coupled} and
jointly optimized by our learning scheme. We demonstrate that this results in a
significant improvement in accuracy compared to direct regression of depth
pixel values or approaches learning the depth basis disjointly from the
regression function. The global depth estimate is then used as guidance by a
local refinement method that introduces depth details that were not captured at
the global level. Experiments on the NYUv2 and KITTI datasets show that our
method outperforms the existing state-of-the-art at a considerably lower
computational cost for both training and testing.
| no_new_dataset | 0.947962 |
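
A minimal sketch of the global (coarse) stage this abstract describes: a depth map expressed as a linear combination of basis depth maps whose coefficients are regressed from a holistic image feature. The shapes, the PCA basis, and the ridge regressor are illustrative assumptions; the paper instead optimizes the basis and the regressor jointly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, feat_dim, n_pixels, n_basis = 200, 512, 55 * 74, 48

X = rng.normal(size=(n_train, feat_dim))   # holistic image features
D = rng.normal(size=(n_train, n_pixels))   # ground-truth coarse depth maps

# Learn a depth basis from training depths (plain PCA via SVD here; the paper
# couples this step with the regression below and optimizes both jointly).
D_mean = D.mean(axis=0)
_, _, Vt = np.linalg.svd(D - D_mean, full_matrices=False)
B = Vt[:n_basis]                           # (n_basis, n_pixels) depth basis

# Regress the basis coefficients from image features (ridge regression).
C = (D - D_mean) @ B.T                     # per-image target coefficients
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(feat_dim), X.T @ C)

def predict_coarse_depth(x):
    """Global depth estimate for one holistic feature vector."""
    return D_mean + (x @ W) @ B

print(predict_coarse_depth(X[0]).shape)    # (4070,) coarse depth map
```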
1602.02842 | Truyen Tran | Truyen Tran, Dinh Phung and Svetha Venkatesh | Collaborative filtering via sparse Markov random fields | null | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems play a central role in providing individualized access to
information and services. This paper focuses on collaborative filtering, an
approach that exploits the shared structure among like-minded users and similar
items. In particular, we focus on a formal probabilistic framework known as
Markov random fields (MRF). We address the open problem of structure learning
and introduce a sparsity-inducing algorithm to automatically estimate the
interaction structures between users and between items. Item-item and user-user
correlation networks are obtained as a by-product. Large-scale experiments on
movie recommendation and date matching datasets demonstrate the power of the
proposed method.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 02:30:27 GMT"
}
] | 2016-02-10T00:00:00 | [
[
"Tran",
"Truyen",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Collaborative filtering via sparse Markov random fields
ABSTRACT: Recommender systems play a central role in providing individualized access to
information and services. This paper focuses on collaborative filtering, an
approach that exploits the shared structure among like-minded users and similar
items. In particular, we focus on a formal probabilistic framework known as
Markov random fields (MRF). We address the open problem of structure learning
and introduce a sparsity-inducing algorithm to automatically estimate the
interaction structures between users and between items. Item-item and user-user
correlation networks are obtained as a by-product. Large-scale experiments on
movie recommendation and date matching datasets demonstrate the power of the
proposed method.
| no_new_dataset | 0.94699 |
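
The structure-learning step above can be illustrated with a generic sparse estimator: recover a sparse item-item interaction graph from a rating matrix, with non-zero precision entries defining the edges. Graphical lasso here is a stand-in assumption for the paper's own sparsity-inducing MRF algorithm, and the ratings are synthetic.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(500, 20)).astype(float)  # users x items

model = GraphicalLasso(alpha=0.1).fit(ratings)
precision = model.precision_               # sparse inverse covariance

# Non-zero off-diagonal entries define the item-item correlation network.
edges = [(i, j) for i in range(20) for j in range(i + 1, 20)
         if abs(precision[i, j]) > 1e-8]
print(f"{len(edges)} item-item interactions retained")
```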
1602.02868 | Varun Krishna Varun Badrinath Krishna | Deokwoo Jung, Varun Badrinath Krishna, William Temple, David K. Y. Yau | Data-Driven Evaluation of Building Demand Response Capacity | In proceedings of the 2014 IEEE International Conference on Smart
Grid Communications (IEEE SmartGridComm 2014) | null | 10.1109/SmartGridComm.2014.7007703 | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Before a building can participate in a demand response program, its facility
managers must characterize the site's ability to reduce load. Today, this is
often done through manual audit processes and prototypical control strategies.
In this paper, we propose a new approach to estimate a building's demand
response capacity using detailed data from various sensors installed in a
building. We derive a formula for a probabilistic measure that characterizes
various tradeoffs between the available demand response capacity and the
confidence level associated with that curtailment under the constraints of
building occupant comfort level (or utility). Then, we develop a data-driven
framework to associate observed or projected building energy consumption with a
particular set of rules learned from a large sensor dataset. We apply this
methodology using testbeds in two buildings in Singapore: a unique net-zero
energy building and a modern commercial office building. Our experimental
results identify key control parameters and provide insight into the available
demand response strategies at each site.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 05:44:55 GMT"
}
] | 2016-02-10T00:00:00 | [
[
"Jung",
"Deokwoo",
""
],
[
"Krishna",
"Varun Badrinath",
""
],
[
"Temple",
"William",
""
],
[
"Yau",
"David K. Y.",
""
]
] | TITLE: Data-Driven Evaluation of Building Demand Response Capacity
ABSTRACT: Before a building can participate in a demand response program, its facility
managers must characterize the site's ability to reduce load. Today, this is
often done through manual audit processes and prototypical control strategies.
In this paper, we propose a new approach to estimate a building's demand
response capacity using detailed data from various sensors installed in a
building. We derive a formula for a probabilistic measure that characterizes
various tradeoffs between the available demand response capacity and the
confidence level associated with that curtailment under the constraints of
building occupant comfort level (or utility). Then, we develop a data-driven
framework to associate observed or projected building energy consumption with a
particular set of rules learned from a large sensor dataset. We apply this
methodology using testbeds in two buildings in Singapore: a unique net-zero
energy building and a modern commercial office building. Our experimental
results identify key control parameters and provide insight into the available
demand response strategies at each site.
| no_new_dataset | 0.949995 |
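
The capacity-versus-confidence tradeoff can be pictured as an empirical quantile: the curtailment a site can pledge so that it is achieved with at least the stated probability. The synthetic samples and the plain quantile rule are illustrative assumptions, not the paper's exact probabilistic measure.

```python
import numpy as np

rng = np.random.default_rng(2)
# Sensor-derived samples of achievable load reduction (kW), synthetic here.
observed_reduction_kw = rng.gamma(shape=5.0, scale=8.0, size=1000)

def dr_capacity(samples, confidence):
    # Pledge the level that is met or exceeded with probability >= confidence.
    return np.quantile(samples, 1.0 - confidence)

for c in (0.5, 0.8, 0.95):
    print(f"capacity at {c:.0%} confidence: "
          f"{dr_capacity(observed_reduction_kw, c):.1f} kW")
```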
1602.03101 | Fl\'avio Martins | Fl\'avio Martins, Jo\~ao Magalh\~aes and Jamie Callan | Barbara Made the News: Mining the Behavior of Crowds for Time-Aware
Learning to Rank | To appear in WSDM 2016 | null | 10.1145/2835776.2835825 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On Twitter and other microblogging services, the generation of new content
by the crowd is often biased towards immediacy: what is happening now. Prompted
by the propagation of commentary and information through multiple media,
users on the Web interact with and produce new posts about newsworthy topics
and give rise to trending topics. This paper proposes to leverage the
behavioral dynamics of users to estimate the most relevant time periods for a
topic. Our hypothesis stems from the fact that when a real-world event occurs
it usually has peak times on the Web: a higher volume of tweets, new visits and
edits to related Wikipedia articles, and news published about the event. In
this paper, we propose a novel time-aware ranking model that leverages
multiple sources of crowd signals. Our approach builds on two major novelties.
First, a unifying approach that given query q, mines and represents temporal
evidence from multiple sources of crowd signals. This allows us to predict the
temporal relevance of documents for query q. Second, a principled retrieval
model that integrates temporal signals in a learning to rank framework, to rank
results according to the predicted temporal relevance. Evaluation on the TREC
2013 and 2014 Microblog track datasets demonstrates that the proposed model
achieves a relative improvement of 13.2% over lexical retrieval models and 6.2%
over a learning to rank baseline.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 18:01:57 GMT"
}
] | 2016-02-10T00:00:00 | [
[
"Martins",
"Flávio",
""
],
[
"Magalhães",
"João",
""
],
[
"Callan",
"Jamie",
""
]
] | TITLE: Barbara Made the News: Mining the Behavior of Crowds for Time-Aware
Learning to Rank
ABSTRACT: On Twitter and other microblogging services, the generation of new content
by the crowd is often biased towards immediacy: what is happening now. Prompted
by the propagation of commentary and information through multiple media,
users on the Web interact with and produce new posts about newsworthy topics
and give rise to trending topics. This paper proposes to leverage the
behavioral dynamics of users to estimate the most relevant time periods for a
topic. Our hypothesis stems from the fact that when a real-world event occurs
it usually has peak times on the Web: a higher volume of tweets, new visits and
edits to related Wikipedia articles, and news published about the event. In
this paper, we propose a novel time-aware ranking model that leverages
multiple sources of crowd signals. Our approach builds on two major novelties.
First, a unifying approach that given query q, mines and represents temporal
evidence from multiple sources of crowd signals. This allows us to predict the
temporal relevance of documents for query q. Second, a principled retrieval
model that integrates temporal signals in a learning to rank framework, to rank
results according to the predicted temporal relevance. Evaluation on the TREC
2013 and 2014 Microblog track datasets demonstrates that the proposed model
achieves a relative improvement of 13.2% over lexical retrieval models and 6.2%
over a learning to rank baseline.
| no_new_dataset | 0.95418 |
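
A toy version of the temporal-evidence idea: per-time-bin crowd signals are fused into a temporal relevance distribution, which then rescales a lexical retrieval score. The weighted-sum fusion and the boost rule are assumptions standing in for the paper's learning-to-rank integration.

```python
import numpy as np

rng = np.random.default_rng(3)
bins = 24                                   # hourly bins for one day
tweets = rng.poisson(20, bins).astype(float)
wiki_edits = rng.poisson(3, bins).astype(float)
news = rng.poisson(5, bins).astype(float)

signal = 0.5 * tweets + 0.3 * wiki_edits + 0.2 * news
temporal_rel = signal / signal.sum()        # distribution over time bins

# Rerank: documents timestamped in peak bins get their lexical score boosted.
docs = [("d1", 1.2, 9), ("d2", 1.5, 3), ("d3", 1.1, 9)]  # (id, score, bin)
reranked = sorted(docs, key=lambda d: d[1] * (1 + bins * temporal_rel[d[2]]),
                  reverse=True)
print(reranked)
```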
1602.03110 | Akhil Arora | Sainyam Galhotra, Akhil Arora, Shourya Roy | Holistic Influence Maximization: Combining Scalability and Efficiency
with Opinion-Aware Models | ACM SIGMOD Conference 2016, 18 pages, 29 figures | null | 10.1145/2882903.2882929 | null | cs.SI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The steady growth of graph data from social networks has resulted in
widespread research in finding solutions to the influence maximization
problem. In this paper, we propose a holistic solution to the influence
maximization (IM) problem. (1) We introduce an opinion-cum-interaction (OI)
model that closely mirrors real-world scenarios. Under the OI model, we
introduce a novel problem of Maximizing the Effective Opinion (MEO) of
influenced users. We prove that the MEO problem is NP-hard and cannot be
approximated within a constant ratio unless P=NP. (2) We propose a heuristic
algorithm OSIM to efficiently solve the MEO problem. To better explain the OSIM
heuristic, we first introduce EaSyIM - the opinion-oblivious version of OSIM, a
scalable algorithm capable of running within practical compute times on
commodity hardware. In addition to serving as a fundamental building block for
OSIM, EaSyIM is capable of addressing the scalability aspects - memory
consumption and running time - of the IM problem as well.
Empirically, our algorithms are capable of maintaining the deviation in the
spread always within 5% of the best known methods in the literature. In
addition, our experiments show that both OSIM and EaSyIM are effective,
efficient, scalable and significantly enhance the ability to analyze real
datasets.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 18:21:41 GMT"
}
] | 2016-02-10T00:00:00 | [
[
"Galhotra",
"Sainyam",
""
],
[
"Arora",
"Akhil",
""
],
[
"Roy",
"Shourya",
""
]
] | TITLE: Holistic Influence Maximization: Combining Scalability and Efficiency
with Opinion-Aware Models
ABSTRACT: The steady growth of graph data from social networks has resulted in
widespread research in finding solutions to the influence maximization
problem. In this paper, we propose a holistic solution to the influence
maximization (IM) problem. (1) We introduce an opinion-cum-interaction (OI)
model that closely mirrors real-world scenarios. Under the OI model, we
introduce a novel problem of Maximizing the Effective Opinion (MEO) of
influenced users. We prove that the MEO problem is NP-hard and cannot be
approximated within a constant ratio unless P=NP. (2) We propose a heuristic
algorithm OSIM to efficiently solve the MEO problem. To better explain the OSIM
heuristic, we first introduce EaSyIM - the opinion-oblivious version of OSIM, a
scalable algorithm capable of running within practical compute times on
commodity hardware. In addition to serving as a fundamental building block for
OSIM, EaSyIM is capable of addressing the scalability aspects - memory
consumption and running time - of the IM problem as well.
Empirically, our algorithms are capable of maintaining the deviation in the
spread always within 5% of the best known methods in the literature. In
addition, our experiments show that both OSIM and EaSyIM are effective,
efficient, scalable and significantly enhance the ability to analyze real
datasets.
| no_new_dataset | 0.941868 |
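
For context, the baseline these heuristics accelerate is greedy influence maximization under Monte-Carlo-simulated spread; a compact version under the independent cascade model follows. This is the textbook greedy scheme, not OSIM or EaSyIM themselves.

```python
import random

def simulate_spread(graph, seeds, p=0.1, rounds=200):
    """Average number of activated nodes under independent cascades."""
    total = 0
    for _ in range(rounds):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / rounds

def greedy_im(graph, k):
    seeds = []
    for _ in range(k):                       # add the best marginal node
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n]))
        seeds.append(best)
    return seeds

toy = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_im(toy, 2))
```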
1511.08990 | Artem Barger | Artem Barger and Dan Feldman | k-Means for Streaming and Distributed Big Sparse Data | 16 pages, 44 figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide the first streaming algorithm for computing a provable
approximation to the $k$-means of sparse Big Data. Here, sparse Big Data is a
set of $n$ vectors in $\mathbb{R}^d$, where each vector has $O(1)$ non-zero
entries, and $d\geq n$. E.g., adjacency matrix of a graph, web-links, social
network, document-terms, or image-features matrices.
Our streaming algorithm stores at most $\log n\cdot k^{O(1)}$ input points in
memory. If the stream is distributed among $M$ machines, the running time
reduces by a factor of $M$, while communicating a total of $M\cdot k^{O(1)}$
(sparse) input points between the machines.
Our main technical result is a deterministic algorithm for computing a
sparse $(k,\epsilon)$-coreset, which is a weighted subset of $k^{O(1)}$ input
points that approximates the sum of squared distances from the $n$ input points
to every $k$ centers, up to $(1\pm\epsilon)$ factor, for any given constant
$\epsilon>0$. This is the first such coreset of size independent of both $d$
and $n$.
Existing algorithms use coresets of size at least polynomial in $d$, or
project the input points on a subspace which diminishes their sparsity, thus
require memory and communication $\Omega(d)=\Omega(n)$ even for $k=2$.
Experimental results on real public datasets show that our algorithm boosts the
performance of existing heuristics even in the off-line setting. Open code is
provided for reproducibility.
| [
{
"version": "v1",
"created": "Sun, 29 Nov 2015 10:06:11 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Feb 2016 17:01:46 GMT"
}
] | 2016-02-09T00:00:00 | [
[
"Barger",
"Artem",
""
],
[
"Feldman",
"Dan",
""
]
] | TITLE: k-Means for Streaming and Distributed Big Sparse Data
ABSTRACT: We provide the first streaming algorithm for computing a provable
approximation to the $k$-means of sparse Big Data. Here, sparse Big Data is a
set of $n$ vectors in $\mathbb{R}^d$, where each vector has $O(1)$ non-zero
entries, and $d\geq n$. E.g., adjacency matrix of a graph, web-links, social
network, document-terms, or image-features matrices.
Our streaming algorithm stores at most $\log n\cdot k^{O(1)}$ input points in
memory. If the stream is distributed among $M$ machines, the running time
reduces by a factor of $M$, while communicating a total of $M\cdot k^{O(1)}$
(sparse) input points between the machines.
Our main technical result is a deterministic algorithm for computing a
sparse $(k,\epsilon)$-coreset, which is a weighted subset of $k^{O(1)}$ input
points that approximates the sum of squared distances from the $n$ input points
to every $k$ centers, up to $(1\pm\epsilon)$ factor, for any given constant
$\epsilon>0$. This is the first such coreset of size independent of both $d$
and $n$.
Existing algorithms use coresets of size at least polynomial in $d$, or
project the input points on a subspace which diminishes their sparsity, thus
require memory and communication $\Omega(d)=\Omega(n)$ even for $k=2$.
Experimental results on real public datasets show that our algorithm boosts the
performance of existing heuristics even in the off-line setting. Open code is
provided for reproducibility.
| no_new_dataset | 0.940353 |
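
The streaming pattern the abstract relies on is merge-and-reduce: buffer incoming points, compress full buffers into small weighted summaries, and merge same-size summaries. The compression below (weighted k-means centers) is a simplistic stand-in for the paper's sparse (k, ε)-coreset construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def compress(points, weights, m=50):
    """Summarize a weighted point set by m weighted centers."""
    km = KMeans(n_clusters=min(m, len(points)), n_init=3).fit(
        points, sample_weight=weights)
    new_w = np.array([weights[km.labels_ == c].sum()
                      for c in range(km.n_clusters)])
    return km.cluster_centers_, new_w

def stream_kmeans(stream, k=5):
    stack = []                               # (points, weights) summaries
    for chunk in stream:
        pts, wts = np.asarray(chunk), np.ones(len(chunk))
        # merge-and-reduce: fold in summaries no larger than the new one
        while stack and len(stack[-1][0]) <= len(pts):
            p2, w2 = stack.pop()
            pts, wts = compress(np.vstack([pts, p2]),
                                np.concatenate([wts, w2]))
        stack.append((pts, wts))
    pts = np.vstack([p for p, _ in stack])
    wts = np.concatenate([w for _, w in stack])
    return KMeans(n_clusters=k, n_init=5).fit(
        pts, sample_weight=wts).cluster_centers_

rng = np.random.default_rng(4)
stream = (rng.normal(size=(200, 10)) for _ in range(20))
print(stream_kmeans(stream).shape)           # (5, 10)
```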
1602.02172 | Weiran Wang | Weiran Wang | On Column Selection in Approximate Kernel Canonical Correlation Analysis | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of column selection in large-scale kernel canonical
correlation analysis (KCCA) using the Nystr\"om approximation, where one
approximates two positive semi-definite kernel matrices using "landmark" points
from the training set. When building low-rank kernel approximations in KCCA,
previous work mostly samples the landmarks uniformly at random from the
training set. We propose novel strategies for sampling the landmarks
non-uniformly based on a version of statistical leverage scores recently
developed for kernel ridge regression. We study the approximation accuracy of
the proposed non-uniform sampling strategy, develop an incremental algorithm
that explores the path of approximation ranks and facilitates efficient model
selection, and derive the kernel stability of out-of-sample mapping for our
method. Experimental results on both synthetic and real-world datasets
demonstrate the promise of our method.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 21:51:41 GMT"
}
] | 2016-02-09T00:00:00 | [
[
"Wang",
"Weiran",
""
]
] | TITLE: On Column Selection in Approximate Kernel Canonical Correlation Analysis
ABSTRACT: We study the problem of column selection in large-scale kernel canonical
correlation analysis (KCCA) using the Nystr\"om approximation, where one
approximates two positive semi-definite kernel matrices using "landmark" points
from the training set. When building low-rank kernel approximations in KCCA,
previous work mostly samples the landmarks uniformly at random from the
training set. We propose novel strategies for sampling the landmarks
non-uniformly based on a version of statistical leverage scores recently
developed for kernel ridge regression. We study the approximation accuracy of
the proposed non-uniform sampling strategy, develop an incremental algorithm
that explores the path of approximation ranks and facilitates efficient model
selection, and derive the kernel stability of out-of-sample mapping for our
method. Experimental results on both synthetic and real-world datasets
demonstrate the promise of our method.
| no_new_dataset | 0.950088 |
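
A sketch of the landmark-selection ingredient: ridge leverage scores computed from the kernel matrix, used as a non-uniform sampling distribution for the Nyström approximation. Exact scores on a small dense kernel are shown for clarity; at scale one would approximate them, and the KCCA-specific machinery is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ridge_leverage_scores(K, lam=1e-2):
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * n * np.eye(n)))

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 5))
K = rbf_kernel(X)
p = ridge_leverage_scores(K)
p = p / p.sum()                              # sampling distribution
landmarks = rng.choice(len(X), size=30, replace=False, p=p)

# Nystrom approximation K ~ C W^+ C^T from the sampled landmarks.
C, W = K[:, landmarks], K[np.ix_(landmarks, landmarks)]
K_approx = C @ np.linalg.pinv(W) @ C.T
print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```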
1602.02283 | Dominik Csiba | Dominik Csiba and Peter Richt\'arik | Importance Sampling for Minibatches | null | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minibatching is a very well studied and highly popular technique in
supervised learning, used by practitioners due to its ability to accelerate
training through better utilization of parallel processing power and reduction
of stochastic variance. Another popular technique is importance sampling -- a
strategy for preferential sampling of more important examples also capable of
accelerating the training process. However, despite considerable effort by the
community in these areas, and due to the inherent technical difficulty of the
problem, there is no existing work combining the power of importance sampling
with the strength of minibatching. In this paper we propose the first {\em
importance sampling for minibatches} and give simple and rigorous complexity
analysis of its performance. We illustrate on synthetic problems that for
training data of certain properties, our sampling can lead to several orders of
magnitude improvement in training time. We then test the new sampling on
several popular datasets, and show that the improvement can reach an order of
magnitude.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2016 17:35:53 GMT"
}
] | 2016-02-09T00:00:00 | [
[
"Csiba",
"Dominik",
""
],
[
"Richtárik",
"Peter",
""
]
] | TITLE: Importance Sampling for Minibatches
ABSTRACT: Minibatching is a very well studied and highly popular technique in
supervised learning, used by practitioners due to its ability to accelerate
training through better utilization of parallel processing power and reduction
of stochastic variance. Another popular technique is importance sampling -- a
strategy for preferential sampling of more important examples also capable of
accelerating the training process. However, despite considerable effort by the
community in these areas, and due to the inherent technical difficulty of the
problem, there is no existing work combining the power of importance sampling
with the strength of minibatching. In this paper we propose the first {\em
importance sampling for minibatches} and give simple and rigorous complexity
analysis of its performance. We illustrate on synthetic problems that for
training data of certain properties, our sampling can lead to several orders of
magnitude improvement in training time. We then test the new sampling on
several popular datasets, and show that the improvement can reach an order of
magnitude.
| no_new_dataset | 0.94868 |
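
The mechanics of importance-sampled minibatches can be shown in a few lines: draw examples with probability proportional to an importance score and reweight each by 1/(n·p_i) so the stochastic gradient stays unbiased. The feature-norm score is a crude illustrative proxy; the paper derives principled probabilities from smoothness constants.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(10_000, 20)) * rng.uniform(0.1, 5.0, size=(10_000, 1))
n = len(X)

scores = np.linalg.norm(X, axis=1)           # proxy importance per example
p = scores / scores.sum()                    # sampling distribution

def sample_minibatch(batch_size=64):
    idx = rng.choice(n, size=batch_size, replace=True, p=p)
    correction = 1.0 / (n * p[idx])          # keeps gradient estimates unbiased
    return idx, correction

idx, w = sample_minibatch()
print(idx[:5], np.round(w[:5], 3))
```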
1602.02332 | Antti Puurula | Antti Puurula | Scalable Text Mining with Sparse Generative Models | PhD Thesis, Computer Science, University of Waikato, 2016 | null | null | null | cs.IR cs.AI cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | The information age has brought a deluge of data. Much of this is in text
form, insurmountable in scope for humans and incomprehensible in structure for
computers. Text mining is an expanding field of research that seeks to utilize
the information contained in vast document collections. General data mining
methods based on machine learning face challenges with the scale of text data,
posing a need for scalable text mining methods.
This thesis proposes a solution to scalable text mining: generative models
combined with sparse computation. A unifying formalization for generative text
models is defined, bringing together research traditions that have used
formally equivalent models, but ignored parallel developments. This framework
allows the use of methods developed in different processing tasks such as
retrieval and classification, yielding effective solutions across different
text mining tasks. Sparse computation using inverted indices is proposed for
inference on probabilistic models. This reduces the computational complexity of
the common text mining operations according to sparsity, yielding probabilistic
models with the scalability of modern search engines.
The proposed combination provides sparse generative models: a solution for
text mining that is general, effective, and scalable. Extensive experiments
on text classification and ranked retrieval datasets are conducted, showing
that the proposed solution matches or outperforms the leading task-specific
methods in effectiveness, with an order of magnitude decrease in classification
times for Wikipedia article categorization with a million classes. The
developed methods were further applied in two 2014 Kaggle data mining prize
competitions with over a hundred competing teams, earning first and second
places.
| [
{
"version": "v1",
"created": "Sun, 7 Feb 2016 02:49:27 GMT"
}
] | 2016-02-09T00:00:00 | [
[
"Puurula",
"Antti",
""
]
] | TITLE: Scalable Text Mining with Sparse Generative Models
ABSTRACT: The information age has brought a deluge of data. Much of this is in text
form, insurmountable in scope for humans and incomprehensible in structure for
computers. Text mining is an expanding field of research that seeks to utilize
the information contained in vast document collections. General data mining
methods based on machine learning face challenges with the scale of text data,
posing a need for scalable text mining methods.
This thesis proposes a solution to scalable text mining: generative models
combined with sparse computation. A unifying formalization for generative text
models is defined, bringing together research traditions that have used
formally equivalent models, but ignored parallel developments. This framework
allows the use of methods developed in different processing tasks such as
retrieval and classification, yielding effective solutions across different
text mining tasks. Sparse computation using inverted indices is proposed for
inference on probabilistic models. This reduces the computational complexity of
the common text mining operations according to sparsity, yielding probabilistic
models with the scalability of modern search engines.
The proposed combination provides sparse generative models: a solution for
text mining that is general, effective, and scalable. Extensive experiments
on text classification and ranked retrieval datasets are conducted, showing
that the proposed solution matches or outperforms the leading task-specific
methods in effectiveness, with an order of magnitude decrease in classification
times for Wikipedia article categorization with a million classes. The
developed methods were further applied in two 2014 Kaggle data mining prize
competitions with over a hundred competing teams, earning first and second
places.
| no_new_dataset | 0.946646 |
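
The sparse-inference idea is easy to see in a toy classifier: a multinomial Naive Bayes scorer whose per-class log-probabilities live in an inverted index, so classification touches only a document's non-zero terms. The two-term, two-class model below is purely illustrative.

```python
import math
from collections import defaultdict

# term -> postings list of (class_id, log P(term | class))
index = defaultdict(list)
class_priors = {0: math.log(0.5), 1: math.log(0.5)}
toy_model = {("cat", 0): -1.0, ("cat", 1): -3.0,
             ("dog", 0): -2.5, ("dog", 1): -0.8}
for (term, cls), logp in toy_model.items():
    index[term].append((cls, logp))

def classify(doc_terms):
    scores = dict(class_priors)              # start from the priors
    for term, count in doc_terms.items():    # sparse: only observed terms
        for cls, logp in index.get(term, ()):
            scores[cls] += count * logp
    return max(scores, key=scores.get)

print(classify({"cat": 3, "dog": 1}))        # -> class 0
```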
1501.05352 | Miguel \'A. Carreira-Perpi\~n\'an | Ramin Raziperchikolaei and Miguel \'A. Carreira-Perpi\~n\'an | Optimizing affinity-based binary hashing using auxiliary coordinates | 22 pages, 12 figures; added new experiments and references | null | null | null | cs.LG cs.CV math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In supervised binary hashing, one wants to learn a function that maps a
high-dimensional feature vector to a vector of binary codes, for application to
fast image retrieval. This typically results in a difficult optimization
problem, nonconvex and nonsmooth, because of the discrete variables involved.
Much work has simply relaxed the problem during training, solving a continuous
optimization, and truncating the codes a posteriori. This gives reasonable
results but is quite suboptimal. Recent work has tried to optimize the
objective directly over the binary codes and achieved better results, but the
hash function was still learned a posteriori, which remains suboptimal. We
propose a general framework for learning hash functions using affinity-based
loss functions that uses auxiliary coordinates. This closes the loop and
optimizes jointly over the hash functions and the binary codes so that they
gradually match each other. The resulting algorithm can be seen as a corrected,
iterated version of the procedure of optimizing first over the codes and then
learning the hash function. Compared to this, our optimization is guaranteed to
obtain better hash functions while being not much slower, as demonstrated
experimentally in various supervised datasets. In addition, our framework
facilitates the design of optimization algorithms for arbitrary types of loss
and hash functions.
| [
{
"version": "v1",
"created": "Wed, 21 Jan 2015 23:53:47 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Feb 2016 01:25:26 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Raziperchikolaei",
"Ramin",
""
],
[
"Carreira-Perpiñán",
"Miguel Á.",
""
]
] | TITLE: Optimizing affinity-based binary hashing using auxiliary coordinates
ABSTRACT: In supervised binary hashing, one wants to learn a function that maps a
high-dimensional feature vector to a vector of binary codes, for application to
fast image retrieval. This typically results in a difficult optimization
problem, nonconvex and nonsmooth, because of the discrete variables involved.
Much work has simply relaxed the problem during training, solving a continuous
optimization, and truncating the codes a posteriori. This gives reasonable
results but is quite suboptimal. Recent work has tried to optimize the
objective directly over the binary codes and achieved better results, but the
hash function was still learned a posteriori, which remains suboptimal. We
propose a general framework for learning hash functions using affinity-based
loss functions that uses auxiliary coordinates. This closes the loop and
optimizes jointly over the hash functions and the binary codes so that they
gradually match each other. The resulting algorithm can be seen as a corrected,
iterated version of the procedure of optimizing first over the codes and then
learning the hash function. Compared to this, our optimization is guaranteed to
obtain better hash functions while being not much slower, as demonstrated
experimentally in various supervised datasets. In addition, our framework
facilitates the design of optimization algorithms for arbitrary types of loss
and hash functions.
| no_new_dataset | 0.946794 |
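
The auxiliary-coordinates loop in caricature: alternate between fitting one hash hyperplane per bit to the current binary codes and nudging the codes toward agreement with both the affinities and the hash outputs. The toy affinities, the least-squares fit, and the sign-update Z-step are heuristic stand-ins for the paper's exact optimization.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 10))
A = np.sign(X @ X.T)                           # toy +/-1 pairwise affinities
n_bits = 8
Z = rng.choice([-1.0, 1.0], size=(120, n_bits))  # binary codes (aux. coords)

for _ in range(5):
    # (a) fit one linear hash hyperplane per bit to the current codes
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)
    H = np.sign(X @ W)                         # hash-function outputs
    H[H == 0] = 1.0
    # (b) move codes toward agreement with affinities and hash outputs
    Z = np.sign(A @ Z / len(Z) + 0.5 * H)
    Z[Z == 0] = 1.0

print("code/hash disagreement rate:", (Z != H).mean())
```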
1601.04560 | Mariano G. Beir\'o PhD. | M.G. Beir\'o, A. Panisson, M. Tizzoni, C. Cattuto | Predicting human mobility through the assimilation of social media
traces into mobility models | 17 pages, 10 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting human mobility flows at different spatial scales is challenged by
the heterogeneity of individual trajectories and the multi-scale nature of
transportation networks. As vast amounts of digital traces of human behaviour
become available, an opportunity arises to improve mobility models by
integrating into them proxy data on mobility collected by a variety of digital
platforms and location-aware services. Here we propose a hybrid model of human
mobility that integrates a large-scale publicly available dataset from a
popular photo-sharing system with the classical gravity model, under a stacked
regression procedure. We validate the performance and generalizability of our
approach using two ground-truth datasets on air travel and daily commuting in
the United States: using two different cross-validation schemes we show that
the hybrid model affords enhanced mobility prediction at both spatial scales.
| [
{
"version": "v1",
"created": "Mon, 18 Jan 2016 15:10:27 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Feb 2016 10:09:26 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Beiró",
"M. G.",
""
],
[
"Panisson",
"A.",
""
],
[
"Tizzoni",
"M.",
""
],
[
"Cattuto",
"C.",
""
]
] | TITLE: Predicting human mobility through the assimilation of social media
traces into mobility models
ABSTRACT: Predicting human mobility flows at different spatial scales is challenged by
the heterogeneity of individual trajectories and the multi-scale nature of
transportation networks. As vast amounts of digital traces of human behaviour
become available, an opportunity arises to improve mobility models by
integrating into them proxy data on mobility collected by a variety of digital
platforms and location-aware services. Here we propose a hybrid model of human
mobility that integrates a large-scale publicly available dataset from a
popular photo-sharing system with the classical gravity model, under a stacked
regression procedure. We validate the performance and generalizability of our
approach using two ground-truth datasets on air travel and daily commuting in
the United States: using two different cross-validation schemes we show that
the hybrid model affords enhanced mobility prediction at both spatial scales.
| no_new_dataset | 0.945248 |
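
The hybrid scheme in miniature: a classical gravity prediction and a social-media proxy flow feed a stacked second-stage regressor. The synthetic flows, the distance-squared gravity form, and the linear stacker are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n_pairs = 500
pop_i = rng.uniform(1e4, 1e6, n_pairs)       # origin populations
pop_j = rng.uniform(1e4, 1e6, n_pairs)       # destination populations
dist = rng.uniform(10, 1000, n_pairs)        # pairwise distances (km)

gravity = pop_i * pop_j / dist ** 2          # classical gravity estimate
proxy = gravity * rng.lognormal(0, 0.3, n_pairs)   # photo-sharing proxy flow
true_flow = gravity * rng.lognormal(0, 0.2, n_pairs)

# Stacked regression over log-flows from the two first-stage predictors.
features = np.log(np.column_stack([gravity, proxy]))
stacker = LinearRegression().fit(features, np.log(true_flow))
print("R^2 of the stacked model:", stacker.score(features, np.log(true_flow)))
```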
1602.01895 | Shijian Tang | Shijian Tang, Song Han | Generate Image Descriptions based on Deep RNN and Memory Cells for
Images Features | null | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating natural language descriptions for images is a challenging task.
The traditional way is to use the convolutional neural network (CNN) to extract
image features, followed by recurrent neural network (RNN) to generate
sentences. In this paper, we present a new model that adds memory cells to
gate the feeding of image features to the deep neural network. The intuition is
to enable our model to memorize how much information from images should be fed
at each stage of the RNN. Experiments on Flickr8K and Flickr30K datasets showed
that our model outperforms other state-of-the-art models with higher BLEU
scores.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 00:17:18 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Tang",
"Shijian",
""
],
[
"Han",
"Song",
""
]
] | TITLE: Generate Image Descriptions based on Deep RNN and Memory Cells for
Images Features
ABSTRACT: Generating natural language descriptions for images is a challenging task.
The traditional way is to use the convolutional neural network (CNN) to extract
image features, followed by recurrent neural network (RNN) to generate
sentences. In this paper, we present a new model that adds memory cells to
gate the feeding of image features to the deep neural network. The intuition is
to enable our model to memorize how much information from images should be fed
at each stage of the RNN. Experiments on Flickr8K and Flickr30K datasets showed
that our model outperforms other state-of-the-art models with higher BLEU
scores.
| no_new_dataset | 0.951369 |
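
One recurrence step of the gating idea: a learned gate in (0, 1) decides how much of the CNN image feature enters the hidden state alongside the current word embedding. All dimensions, the scalar gate, and the plain-tanh recurrence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
d_h, d_img, d_emb = 64, 128, 32
Wh = rng.normal(0, 0.1, (d_h, d_h))          # hidden-to-hidden weights
Wx = rng.normal(0, 0.1, (d_h, d_emb))        # word-embedding weights
Wi = rng.normal(0, 0.1, (d_h, d_img))        # image-feature weights
Wg = rng.normal(0, 0.1, (1, d_h + d_img))    # gate parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(h, word_emb, img_feat):
    g = sigmoid(Wg @ np.concatenate([h, img_feat]))   # gate value in (0, 1)
    return np.tanh(Wh @ h + Wx @ word_emb + g * (Wi @ img_feat))

h = step(np.zeros(d_h), rng.normal(size=d_emb), rng.normal(size=d_img))
print(h.shape)                                # (64,)
```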
1602.01904 | Tanmoy Chakraborty | Dinesh Pradhan, Tanmoy Chakraborty, Saswata Pandit, Subrata Nandi | On the Discovery of Success Trajectories of Authors | 2 pages, 1 figure in 25rd International World Wide Web Conference WWW
2016 | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the qualitative patterns of research endeavor of scientific
authors in terms of publication count and their impact (citation) is important
in order to quantify success trajectories. Here, we examine the career profile
of authors in computer science and physics domains and discover at least six
different success trajectories in terms of normalized citation count in
longitudinal scale. Initial observations of individual trajectories lead us to
characterize the authors in each category. We further leverage this trajectory
information to build a two-stage stratification model to predict future success
of an author at the early stage of her career. Our model outperforms the
baseline with an average improvement of 15.68% for both datasets.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 01:08:43 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Pradhan",
"Dinesh",
""
],
[
"Chakraborty",
"Tanmoy",
""
],
[
"Pandit",
"Saswata",
""
],
[
"Nandi",
"Subrata",
""
]
] | TITLE: On the Discovery of Success Trajectories of Authors
ABSTRACT: Understanding the qualitative patterns of research endeavor of scientific
authors in terms of publication count and their impact (citation) is important
in order to quantify success trajectories. Here, we examine the career profile
of authors in computer science and physics domains and discover at least six
different success trajectories in terms of normalized citation count on a
longitudinal scale. Initial observations of individual trajectories lead us to
characterize the authors in each category. We further leverage this trajectory
information to build a two-stage stratification model to predict future success
of an author at the early stage of her career. Our model outperforms the
baseline with an average improvement of 15.68% for both datasets.
| no_new_dataset | 0.956997 |
1602.01910 | Yangyang Hou | Yangyang Hou, Joyce Jiyoung Whang, David F. Gleich, Inderjit S.
Dhillon | Fast Multiplier Methods to Optimize Non-exhaustive, Overlapping
Clustering | 9 pages. 2 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is one of the most fundamental and important tasks in data mining.
Traditional clustering algorithms, such as K-means, assign every data point to
exactly one cluster. However, in real-world datasets, the clusters may overlap
with each other. Furthermore, often, there are outliers that should not belong
to any cluster. We recently proposed the NEO-K-Means (Non-Exhaustive,
Overlapping K-Means) objective as a way to address both issues in an integrated
fashion. Optimizing this discrete objective is NP-hard, and even though there
is a convex relaxation of the objective, straightforward convex optimization
approaches are too expensive for large datasets. A practical alternative is to
use a low-rank factorization of the solution matrix in the convex formulation.
The resulting optimization problem is non-convex, and we can locally optimize
the objective function using an augmented Lagrangian method. In this paper, we
consider two fast multiplier methods to accelerate the convergence of an
augmented Lagrangian scheme: a proximal method of multipliers and an
alternating direction method of multipliers (ADMM). For the proximal augmented
Lagrangian or proximal method of multipliers, we show a convergence result for
the non-convex case with bound-constrained subproblems. These methods are up to
13 times faster---with no change in quality---compared with a standard
augmented Lagrangian method on problems with over 10,000 variables and bring
runtimes down from over an hour to around 5 minutes.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 02:08:57 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Hou",
"Yangyang",
""
],
[
"Whang",
"Joyce Jiyoung",
""
],
[
"Gleich",
"David F.",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: Fast Multiplier Methods to Optimize Non-exhaustive, Overlapping
Clustering
ABSTRACT: Clustering is one of the most fundamental and important tasks in data mining.
Traditional clustering algorithms, such as K-means, assign every data point to
exactly one cluster. However, in real-world datasets, the clusters may overlap
with each other. Furthermore, often, there are outliers that should not belong
to any cluster. We recently proposed the NEO-K-Means (Non-Exhaustive,
Overlapping K-Means) objective as a way to address both issues in an integrated
fashion. Optimizing this discrete objective is NP-hard, and even though there
is a convex relaxation of the objective, straightforward convex optimization
approaches are too expensive for large datasets. A practical alternative is to
use a low-rank factorization of the solution matrix in the convex formulation.
The resulting optimization problem is non-convex, and we can locally optimize
the objective function using an augmented Lagrangian method. In this paper, we
consider two fast multiplier methods to accelerate the convergence of an
augmented Lagrangian scheme: a proximal method of multipliers and an
alternating direction method of multipliers (ADMM). For the proximal augmented
Lagrangian or proximal method of multipliers, we show a convergence result for
the non-convex case with bound-constrained subproblems. These methods are up to
13 times faster---with no change in quality---compared with a standard
augmented Lagrangian method on problems with over 10,000 variables and bring
runtimes down from over an hour to around 5 minutes.
| no_new_dataset | 0.948822 |
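
As a reference point for the multiplier methods named above, here is ADMM in its standard form on a small lasso problem; the paper applies the same family of updates to the low-rank NEO-K-Means subproblem rather than to lasso.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached factor
    Atb = A.T @ b
    for _ in range(iters):
        x = AtA_inv @ (Atb + rho * (z - u))   # x-update (ridge solve)
        z = soft_threshold(x + u, lam / rho)  # z-update (prox of the l1 term)
        u = u + x - z                         # dual (multiplier) update
    return z

rng = np.random.default_rng(10)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b), 2))          # recovers the sparse signal
```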
1602.01940 | Liangcheng Liu | Liangchen Liu and Arnold Wiliem and Shaokang Chen and Brian C. Lovell | Automatic and Quantitative evaluation of attribute discovery methods | 9 pages, WACV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many automatic attribute discovery methods have been developed to extract a
set of visual attributes from images for various tasks. However, despite good
performance in some image classification tasks, it is difficult to evaluate
whether these methods discover meaningful attributes and which one is the best
to find the attributes for image descriptions. An intuitive way to evaluate
this is to manually verify whether consistent identifiable visual concepts
exist to distinguish between positive and negative images of an attribute. This
manual checking is tedious, labor-intensive, and expensive, and it is very hard
to get quantitative comparisons between different methods. In this work, we
tackle this problem by proposing an attribute meaningfulness metric that can
perform automatic evaluation of the meaningfulness of attribute sets as well
as achieve quantitative comparisons. We apply our proposed metric to recent
automatic attribute discovery methods and popular hashing methods on three
attribute datasets. A user study is also conducted to validate the
effectiveness of the metric. In our evaluation, we gleaned some insights that
could be beneficial in developing automatic attribute discovery methods to
generate meaningful attributes. To the best of our knowledge, this is the first
work to quantitatively measure the semantic content of automatically discovered
attributes.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 07:43:08 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Liu",
"Liangchen",
""
],
[
"Wiliem",
"Arnold",
""
],
[
"Chen",
"Shaokang",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Automatic and Quantitative evaluation of attribute discovery methods
ABSTRACT: Many automatic attribute discovery methods have been developed to extract a
set of visual attributes from images for various tasks. However, despite good
performance in some image classification tasks, it is difficult to evaluate
whether these methods discover meaningful attributes and which one is the best
to find the attributes for image descriptions. An intuitive way to evaluate
this is to manually verify whether consistent identifiable visual concepts
exist to distinguish between positive and negative images of an attribute. This
manual checking is tedious, labor-intensive, and expensive, and it is very hard
to get quantitative comparisons between different methods. In this work, we
tackle this problem by proposing an attribute meaningfulness metric that can
perform automatic evaluation of the meaningfulness of attribute sets as well
as achieve quantitative comparisons. We apply our proposed metric to recent
automatic attribute discovery methods and popular hashing methods on three
attribute datasets. A user study is also conducted to validate the
effectiveness of the metric. In our evaluation, we gleaned some insights that
could be beneficial in developing automatic attribute discovery methods to
generate meaningful attributes. To the best of our knowledge, this is the first
work to quantitatively measure the semantic content of automatically discovered
attributes.
| no_new_dataset | 0.929792 |
1602.02022 | Jan Egger | Dzenan Zukic, Jan Egger, Miriam H. A. Bauer, Daniela Kuhnt, Barbara
Carl, Bernd Freisleben, Andreas Kolb, Christopher Nimsky | Preoperative Volume Determination for Pituitary Adenoma | 7 pages, 6 figures, 1 table, 16 references in Proc. SPIE 7963,
Medical Imaging 2011: Computer-Aided Diagnosis, 79632T (9 March 2011). arXiv
admin note: text overlap with arXiv:1103.1778 | null | 10.1117/12.877660 | null | cs.CV cs.CG cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most common sellar lesion is the pituitary adenoma, and sellar tumors are
approximately 10-15% of all intracranial neoplasms. Manual slice-by-slice
segmentation takes considerable time, which can be reduced by using appropriate
algorithms. In this contribution, we present a segmentation method for
pituitary adenoma. The method is based on an algorithm that we have applied
recently to segmenting glioblastoma multiforme. A modification of this scheme
is used for adenoma segmentation, which is much harder to perform due to the lack of
contrast-enhanced boundaries. In our experimental evaluation, neurosurgeons
performed manual slice-by-slice segmentation of ten magnetic resonance imaging
(MRI) cases. The segmentations were compared to the segmentation results of the
proposed method using the Dice Similarity Coefficient (DSC). The average DSC
for all datasets was 75.92% +/- 7.24%. A manual segmentation took about four
minutes and our algorithm required about one second.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 14:08:21 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Zukic",
"Dzenan",
""
],
[
"Egger",
"Jan",
""
],
[
"Bauer",
"Miriam H. A.",
""
],
[
"Kuhnt",
"Daniela",
""
],
[
"Carl",
"Barbara",
""
],
[
"Freisleben",
"Bernd",
""
],
[
"Kolb",
"Andreas",
""
],
[
"Nimsky",
"Christopher",
""
]
] | TITLE: Preoperative Volume Determination for Pituitary Adenoma
ABSTRACT: The most common sellar lesion is the pituitary adenoma, and sellar tumors are
approximately 10-15% of all intracranial neoplasms. Manual slice-by-slice
segmentation takes considerable time, which can be reduced by using appropriate
algorithms. In this contribution, we present a segmentation method for
pituitary adenoma. The method is based on an algorithm that we have applied
recently to segmenting glioblastoma multiforme. A modification of this scheme
is used for adenoma segmentation, which is much harder to perform due to the lack of
contrast-enhanced boundaries. In our experimental evaluation, neurosurgeons
performed manual slice-by-slice segmentation of ten magnetic resonance imaging
(MRI) cases. The segmentations were compared to the segmentation results of the
proposed method using the Dice Similarity Coefficient (DSC). The average DSC
for all datasets was 75.92% +/- 7.24%. A manual segmentation took about four
minutes and our algorithm required about one second.
| no_new_dataset | 0.945851 |
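
The evaluation metric used above is the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for a manual mask A and an automatic mask B; a direct implementation on binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((8, 8), dtype=int)
manual[2:6, 2:6] = 1                         # 16-pixel manual segmentation
auto = np.zeros((8, 8), dtype=int)
auto[3:7, 2:6] = 1                           # shifted automatic segmentation
print(f"DSC = {dice(manual, auto):.4f}")     # 0.75 for this toy overlap
```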
1602.02130 | Enzo Ferrante | Mahsa Shakeri, Stavros Tsogkas (CVN, GALEN), Enzo Ferrante (CVN,
GALEN), Sarah Lippe, Samuel Kadoury, Nikos Paragios (CVN, GALEN), Iasonas
Kokkinos (CVN, GALEN) | Sub-cortical brain structure segmentation using F-CNN's | ISBI 2016: International Symposium on Biomedical Imaging, Apr 2016,
Prague, Czech Republic | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a deep learning approach for segmenting sub-cortical
structures of the human brain in Magnetic Resonance (MR) image data. We draw
inspiration from a state-of-the-art Fully-Convolutional Neural Network (F-CNN)
architecture for semantic segmentation of objects in natural images, and adapt
it to our task. Unlike previous CNN-based methods that operate on image
patches, our model is applied to the full 2D image, without any alignment
or registration steps at testing time. We further improve segmentation results
by interpreting the CNN output as potentials of a Markov Random Field (MRF),
whose topology corresponds to a volumetric grid. Alpha-expansion is used to
perform approximate inference imposing spatial volumetric homogeneity to the
CNN priors. We compare the performance of the proposed pipeline with a similar
system using Random Forest-based priors, as well as state-of-the-art segmentation
algorithms, and show promising results on two different brain MRI datasets.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 19:32:39 GMT"
}
] | 2016-02-08T00:00:00 | [
[
"Shakeri",
"Mahsa",
"",
"CVN, GALEN"
],
[
"Tsogkas",
"Stavros",
"",
"CVN, GALEN"
],
[
"Ferrante",
"Enzo",
"",
"CVN,\n GALEN"
],
[
"Lippe",
"Sarah",
"",
"CVN, GALEN"
],
[
"Kadoury",
"Samuel",
"",
"CVN, GALEN"
],
[
"Paragios",
"Nikos",
"",
"CVN, GALEN"
],
[
"Kokkinos",
"Iasonas",
"",
"CVN, GALEN"
]
] | TITLE: Sub-cortical brain structure segmentation using F-CNN's
ABSTRACT: In this paper we propose a deep learning approach for segmenting sub-cortical
structures of the human brain in Magnetic Resonance (MR) image data. We draw
inspiration from a state-of-the-art Fully-Convolutional Neural Network (F-CNN)
architecture for semantic segmentation of objects in natural images, and adapt
it to our task. Unlike previous CNN-based methods that operate on image
patches, our model is applied to the full 2D image, without any alignment
or registration steps at testing time. We further improve segmentation results
by interpreting the CNN output as potentials of a Markov Random Field (MRF),
whose topology corresponds to a volumetric grid. Alpha-expansion is used to
perform approximate inference imposing spatial volumetric homogeneity to the
CNN priors. We compare the performance of the proposed pipeline with a similar
system using Random Forest-based priors, as well as state-of-the-art segmentation
algorithms, and show promising results on two different brain MRI datasets.
| no_new_dataset | 0.956186 |
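
The CNN-as-MRF step in miniature: class scores act as unary potentials on a grid and a Potts pairwise term imposes spatial homogeneity. Iterated conditional modes below is a simple stand-in for the alpha-expansion inference the paper uses, and the random "CNN" probabilities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
H, W, L = 16, 16, 3
unary = -np.log(rng.dirichlet(np.ones(L), size=(H, W)))  # -log CNN probs
labels = unary.argmin(-1)                    # initial per-pixel argmax

def icm(labels, unary, smooth=0.8, sweeps=5):
    """Greedy MRF inference: unary cost + Potts disagreement with neighbors."""
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [labels[x, y]
                        for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= x < H and 0 <= y < W]
                costs = [unary[i, j, l] + smooth * sum(l != n for n in nbrs)
                         for l in range(L)]
                labels[i, j] = int(np.argmin(costs))
    return labels

print(np.unique(icm(labels, unary), return_counts=True))
```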
1410.2455 | Stephan Gouws | Stephan Gouws, Yoshua Bengio, Greg Corrado | BilBOWA: Fast Bilingual Distributed Representations without Word
Alignments | null | null | null | null | stat.ML cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple
and computationally efficient model for learning bilingual distributed
representations of words which can scale to large monolingual datasets and does
not require word-aligned parallel training data. Instead it trains directly on
monolingual data and extracts a bilingual signal from a smaller set of raw-text
sentence-aligned data. This is achieved using a novel sampled bag-of-words
cross-lingual objective, which is used to regularize two noise-contrastive
language models for efficient cross-lingual feature learning. We show that
bilingual embeddings learned using the proposed model outperform
state-of-the-art methods on a cross-lingual document classification task as
well as a lexical translation task on WMT11 data.
| [
{
"version": "v1",
"created": "Thu, 9 Oct 2014 13:41:18 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Dec 2014 20:52:32 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Feb 2016 05:51:59 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Gouws",
"Stephan",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Corrado",
"Greg",
""
]
] | TITLE: BilBOWA: Fast Bilingual Distributed Representations without Word
Alignments
ABSTRACT: We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple
and computationally efficient model for learning bilingual distributed
representations of words which can scale to large monolingual datasets and does
not require word-aligned parallel training data. Instead it trains directly on
monolingual data and extracts a bilingual signal from a smaller set of raw-text
sentence-aligned data. This is achieved using a novel sampled bag-of-words
cross-lingual objective, which is used to regularize two noise-contrastive
language models for efficient cross-lingual feature learning. We show that
bilingual embeddings learned using the proposed model outperform
state-of-the-art methods on a cross-lingual document classification task as
well as a lexical translation task on WMT11 data.
| no_new_dataset | 0.944074 |
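
The cross-lingual regularizer in caricature: for an aligned sentence pair, training pulls the two bag-of-words mean vectors together by gradient descent on half the squared distance between them. The toy vocabularies and the omission of the monolingual skip-gram terms are assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)
dim = 16
emb_en = {w: rng.normal(size=dim) for w in ["the", "cat", "sat"]}
emb_fr = {w: rng.normal(size=dim) for w in ["le", "chat", "assis"]}

def xlingual_step(sent_en, sent_fr, lr=0.1):
    """One SGD step on 0.5 * ||mean_en - mean_fr||^2 for an aligned pair."""
    mean_en = np.mean([emb_en[w] for w in sent_en], axis=0)
    mean_fr = np.mean([emb_fr[w] for w in sent_fr], axis=0)
    grad = mean_en - mean_fr
    for w in sent_en:
        emb_en[w] -= lr * grad / len(sent_en)
    for w in sent_fr:
        emb_fr[w] += lr * grad / len(sent_fr)
    return float(grad @ grad)

for _ in range(50):
    loss = xlingual_step(["the", "cat", "sat"], ["le", "chat", "assis"])
print(f"final alignment loss: {loss:.4f}")
```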
1508.04907 | Li Su | Li Su, Yongluan Zhou | Tolerating Correlated Failures in Massively Parallel Stream Processing
Engines | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fault-tolerance techniques for stream processing engines can be categorized
into passive and active approaches. A typical passive approach periodically
checkpoints a processing task's runtime states and can recover a failed task by
restoring its runtime state using its latest checkpoint. On the other hand, an
active approach usually employs backup nodes to run replicated tasks. Upon
failure, the active replica can take over the processing of the failed task
with minimal latency. However, both approaches have their own inadequacies in
Massively Parallel Stream Processing Engines (MPSPE). The passive approach
incurs a long recovery latency especially when a number of correlated nodes
fail simultaneously, while the active approach requires extra replication
resources. In this paper, we propose a new fault-tolerance framework, which is
Passive and Partially Active (PPA). In a PPA scheme, the passive approach is
applied to all tasks while only a selected set of tasks will be actively
replicated. The number of actively replicated tasks depends on the available
resources. If tasks without active replicas fail, tentative outputs will be
generated before the completion of the recovery process. We also propose
effective and efficient algorithms to optimize a partially active replication
plan to maximize the quality of tentative outputs. We implemented PPA on top of
Storm, an open-source MPSPE and conducted extensive experiments using both real
and synthetic datasets to verify the effectiveness of our approach.
| [
{
"version": "v1",
"created": "Thu, 20 Aug 2015 08:01:58 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Feb 2016 16:02:54 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Su",
"Li",
""
],
[
"Zhou",
"Yongluan",
""
]
] | TITLE: Tolerating Correlated Failures in Massively Parallel Stream Processing
Engines
ABSTRACT: Fault-tolerance techniques for stream processing engines can be categorized
into passive and active approaches. A typical passive approach periodically
checkpoints a processing task's runtime states and can recover a failed task by
restoring its runtime state using its latest checkpoint. On the other hand, an
active approach usually employs backup nodes to run replicated tasks. Upon
failure, the active replica can take over the processing of the failed task
with minimal latency. However, both approaches have their own inadequacies in
Massively Parallel Stream Processing Engines (MPSPE). The passive approach
incurs a long recovery latency especially when a number of correlated nodes
fail simultaneously, while the active approach requires extra replication
resources. In this paper, we propose a new fault-tolerance framework, which is
Passive and Partially Active (PPA). In a PPA scheme, the passive approach is
applied to all tasks while only a selected set of tasks will be actively
replicated. The number of actively replicated tasks depends on the available
resources. If tasks without active replicas fail, tentative outputs will be
generated before the completion of the recovery process. We also propose
effective and efficient algorithms to optimize a partially active replication
plan to maximize the quality of tentative outputs. We implemented PPA on top of
Storm, an open-source MPSPE and conducted extensive experiments using both real
and synthetic datasets to verify the effectiveness of our approach.
| no_new_dataset | 0.942135 |
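
The "partially active" decision in miniature: with passive checkpointing for every task and a limited replication budget, greedily replicate the tasks whose failure would hurt tentative-output quality most per unit cost. The scores, costs, and greedy ratio rule are illustrative assumptions, not the paper's optimizer.

```python
def choose_active_replicas(tasks, budget):
    """tasks: list of (task_id, importance, replication_cost)."""
    ranked = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)
    chosen, spent = [], 0
    for tid, imp, cost in ranked:            # greedy by importance per cost
        if spent + cost <= budget:
            chosen.append(tid)
            spent += cost
    return chosen

tasks = [("join-1", 0.9, 4), ("agg-2", 0.7, 2), ("src-3", 0.2, 1),
         ("sink-4", 0.8, 3)]
print(choose_active_replicas(tasks, budget=6))  # ['agg-2', 'sink-4', 'src-3']
```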
1510.01784 | Ruining He | Ruining He, Julian McAuley | VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback | AAAI'16 | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern recommender systems model people and items by discovering or `teasing
apart' the underlying dimensions that encode the properties of items and users'
preferences toward them. Critically, such dimensions are uncovered based on
user feedback, often in implicit form (such as purchase histories, browsing
logs, etc.); in addition, some recommender systems make use of side
information, such as product attributes, temporal information, or review text.
However, one important feature that is typically ignored by existing
personalized recommendation and ranking methods is the visual appearance of the
items being considered. In this paper we propose a scalable factorization model
to incorporate visual signals into predictors of people's opinions, which we
apply to a selection of large, real-world datasets. We make use of visual
features extracted from product images using (pre-trained) deep networks, on
top of which we learn an additional layer that uncovers the visual dimensions
that best explain the variation in people's feedback. This not only leads to
significantly more accurate personalized ranking methods, but also helps to
alleviate cold start issues, and qualitatively to analyze the visual dimensions
that influence people's opinions.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 23:46:15 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"He",
"Ruining",
""
],
[
"McAuley",
"Julian",
""
]
] | TITLE: VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
ABSTRACT: Modern recommender systems model people and items by discovering or `teasing
apart' the underlying dimensions that encode the properties of items and users'
preferences toward them. Critically, such dimensions are uncovered based on
user feedback, often in implicit form (such as purchase histories, browsing
logs, etc.); in addition, some recommender systems make use of side
information, such as product attributes, temporal information, or review text.
However, one important feature that is typically ignored by existing
personalized recommendation and ranking methods is the visual appearance of the
items being considered. In this paper we propose a scalable factorization model
to incorporate visual signals into predictors of people's opinions, which we
apply to a selection of large, real-world datasets. We make use of visual
features extracted from product images using (pre-trained) deep networks, on
top of which we learn an additional layer that uncovers the visual dimensions
that best explain the variation in people's feedback. This not only leads to
significantly more accurate personalized ranking methods, but also helps to
alleviate cold start issues, and qualitatively to analyze the visual dimensions
that influence people's opinions.
| no_new_dataset | 0.945901 |
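The predictor described in the VBPR record above combines bias terms, latent factors, and a learned embedding of deep visual features. The numpy sketch below shows that scoring function only; all dimensions and values are made up, and BPR training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, F = 10, 5, 4096               # latent dims, visual dims, CNN feature size
alpha, beta_u, beta_i = 0.0, 0.1, -0.2   # global, user, and item biases
gamma_u = rng.normal(size=K)        # latent user factors
gamma_i = rng.normal(size=K)        # latent item factors
theta_u = rng.normal(size=D)        # user preference over visual dimensions
E = 0.01 * rng.normal(size=(D, F))  # layer learned on top of the CNN features
f_i = rng.normal(size=F)            # pre-extracted deep feature of item i

# Predicted affinity of user u for item i.
score = alpha + beta_u + beta_i + gamma_u @ gamma_i + theta_u @ (E @ f_i)
print(score)
```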
1510.05067 | Qianli Liao | Qianli Liao, Joel Z. Leibo, Tomaso Poggio | How Important is Weight Symmetry in Backpropagation? | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gradient backpropagation (BP) requires symmetric feedforward and feedback
connections -- the same weights must be used for forward and backward passes.
This "weight transport problem" (Grossberg 1987) is thought to be one of the
main reasons to doubt BP's biological plausibility. Using 15 different
classification datasets, we systematically investigate to what extent BP really
depends on weight symmetry. In a study that turned out to be surprisingly
similar in spirit to Lillicrap et al.'s demonstration (Lillicrap et al. 2014)
but orthogonal in its results, our experiments indicate that: (1) the
magnitudes of feedback weights do not matter to performance; (2) the signs of
feedback weights do matter -- the more concordant the signs between feedforward
connections and their corresponding feedback connections, the better; (3) with
feedback weights having random magnitudes and 100% concordant signs, we were
able to achieve the same or even better performance than SGD; and (4) some
normalizations/stabilizations are indispensable for such asymmetric BP to work,
namely Batch Normalization (BN) (Ioffe and Szegedy 2015) and/or a "Batch
Manhattan" (BM) update rule.
| [
{
"version": "v1",
"created": "Sat, 17 Oct 2015 03:49:05 GMT"
},
{
"version": "v2",
"created": "Sat, 31 Oct 2015 16:55:06 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Dec 2015 01:49:38 GMT"
},
{
"version": "v4",
"created": "Thu, 4 Feb 2016 08:35:58 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Liao",
"Qianli",
""
],
[
"Leibo",
"Joel Z.",
""
],
[
"Poggio",
"Tomaso",
""
]
] | TITLE: How Important is Weight Symmetry in Backpropagation?
ABSTRACT: Gradient backpropagation (BP) requires symmetric feedforward and feedback
connections -- the same weights must be used for forward and backward passes.
This "weight transport problem" (Grossberg 1987) is thought to be one of the
main reasons to doubt BP's biological plausibility. Using 15 different
classification datasets, we systematically investigate to what extent BP really
depends on weight symmetry. In a study that turned out to be surprisingly
similar in spirit to Lillicrap et al.'s demonstration (Lillicrap et al. 2014)
but orthogonal in its results, our experiments indicate that: (1) the
magnitudes of feedback weights do not matter to performance; (2) the signs of
feedback weights do matter -- the more concordant the signs between feedforward
connections and their corresponding feedback connections, the better; (3) with
feedback weights having random magnitudes and 100% concordant signs, we were
able to achieve the same or even better performance than SGD; and (4) some
normalizations/stabilizations are indispensable for such asymmetric BP to work,
namely Batch Normalization (BN) (Ioffe and Szegedy 2015) and/or a "Batch
Manhattan" (BM) update rule.
| no_new_dataset | 0.949529 |
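Finding (3) above -- feedback weights with random magnitudes but signs matching the feedforward weights -- is easy to sketch for one layer in numpy. The batch, sizes, and upstream gradient below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 20))        # toy input batch
W = 0.1 * rng.normal(size=(20, 10))  # feedforward weights
h = np.tanh(X @ W)

# Sign-concordant feedback: same signs as W, random magnitudes.
B = np.sign(W) * rng.uniform(0.5, 1.5, size=W.shape)

g = rng.normal(size=h.shape)         # stand-in for the upstream gradient
grad_bp = (g * (1 - h**2)) @ W.T     # exact backprop through tanh
grad_asym = (g * (1 - h**2)) @ B.T   # asymmetric variant
cos = np.sum(grad_bp * grad_asym) / (
    np.linalg.norm(grad_bp) * np.linalg.norm(grad_asym))
print(cos)  # typically well above 0: the two gradients roughly align
```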
1512.09194 | Shuchang Zhou | Shuchang Zhou and Jia-Nan Wu and Yuxin Wu and Xinyu Zhou | Exploiting Local Structures with the Kronecker Layer in Convolutional
Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose and study a technique to reduce the number of
parameters and computation time in convolutional neural networks. We use
Kronecker product to exploit the local structures within convolution and
fully-connected layers, by replacing the large weight matrices by combinations
of multiple Kronecker products of smaller matrices. Just as the Kronecker
product is a generalization of the outer product from vectors to matrices, our
method is a generalization of the low rank approximation method for convolution
neural networks. We also introduce combinations of different shapes of
Kronecker product to increase modeling capacity. Experiments on SVHN, scene
text recognition and ImageNet dataset demonstrate that we can achieve $3.3
\times$ speedup or $3.6 \times$ parameter reduction with less than 1\% drop in
accuracy, showing the effectiveness and efficiency of our method. Moreover, the
computation efficiency of Kronecker layer makes using larger feature map
possible, which in turn enables us to outperform the previous state-of-the-art
on both SVHN (digit recognition) and CASIA-HWDB (handwritten Chinese character
recognition) datasets.
| [
{
"version": "v1",
"created": "Thu, 31 Dec 2015 01:32:16 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Feb 2016 01:19:38 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Zhou",
"Shuchang",
""
],
[
"Wu",
"Jia-Nan",
""
],
[
"Wu",
"Yuxin",
""
],
[
"Zhou",
"Xinyu",
""
]
] | TITLE: Exploiting Local Structures with the Kronecker Layer in Convolutional
Networks
ABSTRACT: In this paper, we propose and study a technique to reduce the number of
parameters and computation time in convolutional neural networks. We use
Kronecker product to exploit the local structures within convolution and
fully-connected layers, by replacing the large weight matrices by combinations
of multiple Kronecker products of smaller matrices. Just as the Kronecker
product is a generalization of the outer product from vectors to matrices, our
method is a generalization of the low rank approximation method for convolution
neural networks. We also introduce combinations of different shapes of
Kronecker product to increase modeling capacity. Experiments on SVHN, scene
text recognition and ImageNet dataset demonstrate that we can achieve $3.3
\times$ speedup or $3.6 \times$ parameter reduction with less than 1\% drop in
accuracy, showing the effectiveness and efficiency of our method. Moreover, the
computation efficiency of Kronecker layer makes using larger feature map
possible, which in turn enables us to outperform the previous state-of-the-art
on both SVHN (digit recognition) and CASIA-HWDB (handwritten Chinese character
recognition) datasets.
| no_new_dataset | 0.94801 |
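The replacement described above -- a weight matrix expressed as a sum of Kronecker products -- can be verified numerically. The 8x8 factor shapes and the two-term sum below are illustrative choices, not the paper's configurations.

```python
import numpy as np

rng = np.random.default_rng(0)
# W = A1⊗B1 + A2⊗B2: 2*(64+64) = 256 parameters instead of 64*64 = 4096.
A = [rng.normal(size=(8, 8)) for _ in range(2)]
B = [rng.normal(size=(8, 8)) for _ in range(2)]
W = sum(np.kron(a, b) for a, b in zip(A, B))

x = rng.normal(size=64)
y_full = W @ x  # naive product: materializes the 64x64 matrix
# Factored form: (A⊗B)x = (A X B^T).ravel() with X = x.reshape(8, 8)
# in row-major order, so the large matrix is never built.
y_fact = sum((a @ x.reshape(8, 8) @ b.T).ravel() for a, b in zip(A, B))
print(np.allclose(y_full, y_fact))  # True
```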
1601.07648 | Mark Moyou | Mark Moyou, John Corring, Adrian Peter, Anand Rangarajan | A Grassmannian Graph Approach to Affine Invariant Feature Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a novel and practical approach to address one of the
longstanding problems in computer vision: 2D and 3D affine invariant feature
matching. Our Grassmannian Graph (GrassGraph) framework employs a two-stage
procedure that is capable of robustly recovering correspondences between two
unorganized, affinely related feature (point) sets. The first stage maps the
feature sets to an affine invariant Grassmannian representation, where the
features are mapped into the same subspace. It turns out that coordinate
representations extracted from the Grassmannian differ by an arbitrary
orthonormal matrix. In the second stage, by approximating the Laplace-Beltrami
operator (LBO) on these coordinates, this extra orthonormal factor is
nullified, providing true affine-invariant coordinates which we then utilize to
recover correspondences via simple nearest neighbor relations. The resulting
GrassGraph algorithm is empirically shown to work well in non-ideal scenarios
with noise, outliers, and occlusions. Our validation benchmarks use an
unprecedented 440,000+ experimental trials performed on 2D and 3D datasets,
with a variety of parameter settings and competing methods. State-of-the-art
performance in the majority of these extensive evaluations confirms the utility
of our method.
| [
{
"version": "v1",
"created": "Thu, 28 Jan 2016 05:17:17 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Feb 2016 05:18:52 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Moyou",
"Mark",
""
],
[
"Corring",
"John",
""
],
[
"Peter",
"Adrian",
""
],
[
"Rangarajan",
"Anand",
""
]
] | TITLE: A Grassmannian Graph Approach to Affine Invariant Feature Matching
ABSTRACT: In this work, we present a novel and practical approach to address one of the
longstanding problems in computer vision: 2D and 3D affine invariant feature
matching. Our Grassmannian Graph (GrassGraph) framework employs a two-stage
procedure that is capable of robustly recovering correspondences between two
unorganized, affinely related feature (point) sets. The first stage maps the
feature sets to an affine invariant Grassmannian representation, where the
features are mapped into the same subspace. It turns out that coordinate
representations extracted from the Grassmannian differ by an arbitrary
orthonormal matrix. In the second stage, by approximating the Laplace-Beltrami
operator (LBO) on these coordinates, this extra orthonormal factor is
nullified, providing true affine-invariant coordinates which we then utilize to
recover correspondences via simple nearest neighbor relations. The resulting
GrassGraph algorithm is empirically shown to work well in non-ideal scenarios
with noise, outliers, and occlusions. Our validation benchmarks use an
unprecedented 440,000+ experimental trials performed on 2D and 3D datasets,
with a variety of parameter settings and competing methods. State-of-the-art
performance in the majority of these extensive evaluations confirms the utility
of our method.
| no_new_dataset | 0.946051 |
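The first stage described above maps a point set to a point on the Grassmannian that is invariant to affine maps. A minimal numpy sketch of that stage (toy 2-D data, a random affine map) is below; the paper's LBO step, which removes the remaining orthonormal ambiguity, is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 2))        # toy 2-D point set
A = rng.normal(size=(2, 2))          # random (almost surely invertible) map
t = rng.normal(size=2)
Q = P @ A.T + t                      # affinely transformed copy of P

def grass_basis(X):
    # Orthonormal basis of the column space of [X, 1]: a point on the
    # Grassmannian shared by all affinely related versions of X.
    M = np.hstack([X, np.ones((len(X), 1))])
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U

U1, U2 = grass_basis(P), grass_basis(Q)
# Same subspace: the projectors agree even though U1 and U2 differ by
# an orthonormal factor.
print(np.allclose(U1 @ U1.T, U2 @ U2.T))  # True
```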
1602.00955 | Dengxin Dai | Dengxin Dai, Luc Van Gool | Unsupervised High-level Feature Learning by Ensemble Projection for
Semi-supervised Image Classification and Image Clustering | 22 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the problem of image classification with limited or
no annotations, but abundant unlabeled data. The setting exists in many tasks
such as semi-supervised image classification, image clustering, and image
retrieval. Unlike previous methods, which develop or learn sophisticated
regularizers for classifiers, our method learns a new image representation by
exploiting the distribution patterns of all available data for the task at
hand. In particular, a rich set of visual prototypes is sampled from all
available data, and are taken as surrogate classes to train discriminative
classifiers; images are projected via the classifiers; the projected values,
similarities to the prototypes, are stacked to build the new feature vector.
The training set is noisy. Hence, in the spirit of ensemble learning we create
a set of such training sets which are all diverse, leading to diverse
classifiers. The method is dubbed Ensemble Projection (EP). EP captures not
only the characteristics of individual images, but also the relationships among
images. It is conceptually simple and computationally efficient, yet effective
and flexible. Experiments on eight standard datasets show that: (1) EP
outperforms previous methods for semi-supervised image classification; (2) EP
produces promising results for self-taught image classification, where
unlabeled samples are a random collection of images rather than being from the
same distribution as the labeled ones; and (3) EP improves over the original
features for image clustering. The code of the method is available on the
project page.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 14:53:36 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Feb 2016 13:58:00 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Dai",
"Dengxin",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Unsupervised High-level Feature Learning by Ensemble Projection for
Semi-supervised Image Classification and Image Clustering
ABSTRACT: This paper investigates the problem of image classification with limited or
no annotations, but abundant unlabeled data. The setting exists in many tasks
such as semi-supervised image classification, image clustering, and image
retrieval. Unlike previous methods, which develop or learn sophisticated
regularizers for classifiers, our method learns a new image representation by
exploiting the distribution patterns of all available data for the task at
hand. In particular, a rich set of visual prototypes is sampled from all
available data, and are taken as surrogate classes to train discriminative
classifiers; images are projected via the classifiers; the projected values,
similarities to the prototypes, are stacked to build the new feature vector.
The training set is noisy. Hence, in the spirit of ensemble learning we create
a set of such training sets which are all diverse, leading to diverse
classifiers. The method is dubbed Ensemble Projection (EP). EP captures not
only the characteristics of individual images, but also the relationships among
images. It is conceptually simple and computationally efficient, yet effective
and flexible. Experiments on eight standard datasets show that: (1) EP
outperforms previous methods for semi-supervised image classification; (2) EP
produces promising results for self-taught image classification, where
unlabeled samples are a random collection of images rather than being from the
same distribution as the labeled ones; and (3) EP improves over the original
features for image clustering. The code of the method is available on the
project page.
| no_new_dataset | 0.949342 |
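A toy sketch of the projection step described above: sample prototype sets from the pool, treat each prototype (with jittered copies) as a surrogate class, train one classifier per sampled set, and stack the class affinities into the new feature vector. The hyperparameters, the Gaussian jitter, and the logistic-regression choice are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))  # unlabeled pool standing in for images

def ensemble_projection(X, n_sets=5, n_protos=8, per_proto=3):
    feats = []
    for _ in range(n_sets):
        centers = X[rng.choice(len(X), n_protos, replace=False)]
        # Each prototype plus jittered copies forms a surrogate class.
        Xs = np.vstack([c + 0.1 * rng.normal(size=(per_proto, X.shape[1]))
                        for c in centers])
        ys = np.repeat(np.arange(n_protos), per_proto)
        clf = LogisticRegression(max_iter=500).fit(Xs, ys)
        feats.append(clf.predict_proba(X))  # affinities to the prototypes
    return np.hstack(feats)                 # the new representation

print(ensemble_projection(X).shape)         # (500, 40)
```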
1602.01464 | Rigas Kouskouridas | Rigas Kouskouridas, Alykhan Tejani, Andreas Doumanoglou, Danhang Tang
and Tae-Kyun Kim | Latent-Class Hough Forests for 6 DoF Object Pose Estimation | PAMI submission, project page:
http://www.iis.ee.ic.ac.uk/rkouskou/research/LCHF.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present Latent-Class Hough Forests, a method for object
detection and 6 DoF pose estimation in heavily cluttered and occluded
scenarios. We adapt a state-of-the-art template-matching feature into a
scale-invariant patch descriptor and integrate it into a regression forest
using a novel template-based split function. We train with positive samples
only and we treat class distributions at the leaf nodes as latent variables.
During testing we infer by iteratively updating these distributions, providing
accurate estimation of background clutter and foreground occlusions and, thus,
a better detection rate. Furthermore, as a by-product, our Latent-Class Hough
Forests can provide accurate occlusion aware segmentation masks, even in the
multi-instance scenario. In addition to an existing public dataset, which
contains only single-instance sequences with large amounts of clutter, we have
collected two more challenging datasets for multiple-instance detection
containing heavy 2D and 3D clutter as well as foreground occlusions. We provide
extensive experiments on the various parameters of the framework such as patch
size, number of trees and number of iterations to infer class distributions at
test time. We also evaluate the Latent-Class Hough Forests on all datasets
where we outperform state of the art methods.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 20:53:33 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Kouskouridas",
"Rigas",
""
],
[
"Tejani",
"Alykhan",
""
],
[
"Doumanoglou",
"Andreas",
""
],
[
"Tang",
"Danhang",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] | TITLE: Latent-Class Hough Forests for 6 DoF Object Pose Estimation
ABSTRACT: In this paper we present Latent-Class Hough Forests, a method for object
detection and 6 DoF pose estimation in heavily cluttered and occluded
scenarios. We adapt a state-of-the-art template-matching feature into a
scale-invariant patch descriptor and integrate it into a regression forest
using a novel template-based split function. We train with positive samples
only and we treat class distributions at the leaf nodes as latent variables.
During testing we infer by iteratively updating these distributions, providing
accurate estimation of background clutter and foreground occlusions and, thus,
a better detection rate. Furthermore, as a by-product, our Latent-Class Hough
Forests can provide accurate occlusion aware segmentation masks, even in the
multi-instance scenario. In addition to an existing public dataset, which
contains only single-instance sequences with large amounts of clutter, we have
collected two more challenging datasets for multiple-instance detection
containing heavy 2D and 3D clutter as well as foreground occlusions. We provide
extensive experiments on the various parameters of the framework such as patch
size, number of trees and number of iterations to infer class distributions at
test time. We also evaluate the Latent-Class Hough Forests on all datasets
where we outperform state of the art methods.
| no_new_dataset | 0.94474 |
1602.01510 | Priyadarshini Panda | Priyadarshini Panda and Kaushik Roy | Unsupervised Regenerative Learning of Hierarchical Features in Spiking
Deep Networks for Object Recognition | 8 pages, 9 figures, <Under review in IJCNN 2016> | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a spike-based unsupervised regenerative learning scheme to train
Spiking Deep Networks (SpikeCNN) for object recognition problems using
biologically realistic leaky integrate-and-fire neurons. The training
methodology is based on the Auto-Encoder learning model wherein the
hierarchical network is trained layer-wise using the encoder-decoder principle.
Regenerative learning uses spike-timing information and inherent latencies to
update the weights and learn representative levels for each convolutional layer
in an unsupervised manner. The features learnt from the final layer in the
hierarchy are then fed to an output layer. The output layer is trained with
supervision by showing a fraction of the labeled training dataset and performs
the overall classification of the input. Our proposed methodology yields
0.92%/29.84% classification error on MNIST/CIFAR10 datasets, which is comparable
with state-of-the-art results. The proposed methodology also introduces
sparsity in the hierarchical feature representations on account of event-based
coding resulting in computationally efficient learning.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 23:51:22 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Panda",
"Priyadarshini",
""
],
[
"Roy",
"Kaushik",
""
]
] | TITLE: Unsupervised Regenerative Learning of Hierarchical Features in Spiking
Deep Networks for Object Recognition
ABSTRACT: We present a spike-based unsupervised regenerative learning scheme to train
Spiking Deep Networks (SpikeCNN) for object recognition problems using
biologically realistic leaky integrate-and-fire neurons. The training
methodology is based on the Auto-Encoder learning model wherein the
hierarchical network is trained layer-wise using the encoder-decoder principle.
Regenerative learning uses spike-timing information and inherent latencies to
update the weights and learn representative levels for each convolutional layer
in an unsupervised manner. The features learnt from the final layer in the
hierarchy are then fed to an output layer. The output layer is trained with
supervision by showing a fraction of the labeled training dataset and performs
the overall classification of the input. Our proposed methodology yields
0.92%/29.84% classification error on MNIST/CIFAR10 datasets, which is comparable
with state-of-the-art results. The proposed methodology also introduces
sparsity in the hierarchical feature representations on account of event-based
coding resulting in computationally efficient learning.
| no_new_dataset | 0.952086 |
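The leaky integrate-and-fire neurons mentioned above follow a textbook update: the membrane potential leaks toward rest, integrates input current, and emits a spike (then resets) on crossing a threshold. The time constant, threshold, and input below are generic choices, not the paper's.

```python
import numpy as np

def lif(I, dt=1e-3, tau=20e-3, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Return a binary spike train for an input current sequence I."""
    v, spikes = v_rest, []
    for i_t in I:
        v += dt / tau * (-(v - v_rest) + i_t)  # leak + integrate
        if v >= v_th:
            spikes.append(1)
            v = v_reset                        # fire and reset
        else:
            spikes.append(0)
    return np.array(spikes)

print(lif(np.full(200, 1.5)).sum())  # spike count for a constant input
```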
1602.01585 | Ruining He | Ruining He, Julian McAuley | Ups and Downs: Modeling the Visual Evolution of Fashion Trends with
One-Class Collaborative Filtering | 11 pages, 5 figures | null | 10.1145/2872427.2883037 | null | cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building a successful recommender system depends on understanding both the
dimensions of people's preferences and their dynamics. In certain
domains, such as fashion, modeling such preferences can be incredibly
difficult, due to the need to simultaneously model the visual appearance of
products as well as their evolution over time. The subtle semantics and
non-linear dynamics of fashion evolution raise unique challenges especially
considering the sparsity and large scale of the underlying datasets. In this
paper we build novel models for the One-Class Collaborative Filtering setting,
where our goal is to estimate users' fashion-aware personalized ranking
functions based on their past feedback. To uncover the complex and evolving
visual factors that people consider when evaluating products, our method
combines high-level visual features extracted from a deep convolutional neural
network, users' past feedback, as well as evolving trends within the community.
Experimentally we evaluate our method on two large real-world datasets from
Amazon.com, where we show it to outperform state-of-the-art personalized
ranking measures, and also use it to visualize the high-level fashion trends
across the 11-year span of our dataset.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 08:31:05 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"He",
"Ruining",
""
],
[
"McAuley",
"Julian",
""
]
] | TITLE: Ups and Downs: Modeling the Visual Evolution of Fashion Trends with
One-Class Collaborative Filtering
ABSTRACT: Building a successful recommender system depends on understanding both the
dimensions of people's preferences and their dynamics. In certain
domains, such as fashion, modeling such preferences can be incredibly
difficult, due to the need to simultaneously model the visual appearance of
products as well as their evolution over time. The subtle semantics and
non-linear dynamics of fashion evolution raise unique challenges especially
considering the sparsity and large scale of the underlying datasets. In this
paper we build novel models for the One-Class Collaborative Filtering setting,
where our goal is to estimate users' fashion-aware personalized ranking
functions based on their past feedback. To uncover the complex and evolving
visual factors that people consider when evaluating products, our method
combines high-level visual features extracted from a deep convolutional neural
network, users' past feedback, as well as evolving trends within the community.
Experimentally we evaluate our method on two large real-world datasets from
Amazon.com, where we show it to outperform state-of-the-art personalized
ranking measures, and also use it to visualize the high-level fashion trends
across the 11-year span of our dataset.
| no_new_dataset | 0.947137 |
1602.01625 | Sangheum Hwang | Sangheum Hwang, Hyo-Eun Kim | Self-Transfer Learning for Fully Weakly Supervised Object Localization | 9 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances of deep learning have achieved remarkable performances in
various challenging computer vision tasks. Especially in object localization,
deep convolutional neural networks outperform traditional approaches based on
extraction of data/task-driven features instead of hand-crafted features.
Although location information of region-of-interests (ROIs) gives good prior
for object localization, it requires heavy annotation efforts from human
resources. Thus, a weakly supervised framework for object localization is
introduced. The term "weakly" means that this framework only uses image-level
labeled datasets to train a network. With the help of transfer learning which
adopts weight parameters of a pre-trained network, the weakly supervised
learning framework for object localization performs well because the
pre-trained network already has well-trained class-specific features. However,
those approaches cannot be used for some applications which do not have
pre-trained networks or well-localized large scale images. Medical image
analysis is representative of such applications because it is impossible
to obtain such pre-trained networks. In this work, we present a "fully" weakly
supervised framework for object localization ("semi"-weakly is the counterpart
which uses pre-trained filters for weakly supervised localization) named as
self-transfer learning (STL). It jointly optimizes both classification and
localization networks simultaneously. By controlling a supervision level of the
localization network, STL helps the localization network focus on correct ROIs
without any types of priors. We evaluate the proposed STL framework using two
medical image datasets, chest X-rays and mammograms, and achieve significantly
better localization performance compared to previous weakly supervised
approaches.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 10:41:57 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Hwang",
"Sangheum",
""
],
[
"Kim",
"Hyo-Eun",
""
]
] | TITLE: Self-Transfer Learning for Fully Weakly Supervised Object Localization
ABSTRACT: Recent advances of deep learning have achieved remarkable performances in
various challenging computer vision tasks. Especially in object localization,
deep convolutional neural networks outperform traditional approaches based on
extraction of data/task-driven features instead of hand-crafted features.
Although location information of region-of-interests (ROIs) gives good prior
for object localization, it requires heavy annotation efforts from human
resources. Thus, a weakly supervised framework for object localization is
introduced. The term "weakly" means that this framework only uses image-level
labeled datasets to train a network. With the help of transfer learning which
adopts weight parameters of a pre-trained network, the weakly supervised
learning framework for object localization performs well because the
pre-trained network already has well-trained class-specific features. However,
those approaches cannot be used for some applications which do not have
pre-trained networks or well-localized large scale images. Medical image
analysis is representative of such applications because it is impossible
to obtain such pre-trained networks. In this work, we present a "fully" weakly
supervised framework for object localization ("semi"-weakly is the counterpart
which uses pre-trained filters for weakly supervised localization) named as
self-transfer learning (STL). It jointly optimizes both classification and
localization networks simultaneously. By controlling a supervision level of the
localization network, STL helps the localization network focus on correct ROIs
without any types of priors. We evaluate the proposed STL framework using two
medical image datasets, chest X-rays and mammograms, and achieve significantly
better localization performance compared to previous weakly supervised
approaches.
| no_new_dataset | 0.949435 |
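One way to sketch the joint optimization described above is a weighted sum of the two networks' losses, with a single knob for the localization supervision level. The specific form and the placeholder values are assumptions, not the paper's exact objective.

```python
# Sketch of a joint objective with an adjustable supervision level
# (placeholder loss values; the weighting scheme is an assumption).
def joint_loss(cls_loss, loc_loss, alpha):
    """alpha in [0, 1]: higher alpha supervises localization more strongly."""
    return (1.0 - alpha) * cls_loss + alpha * loc_loss

print(joint_loss(cls_loss=0.9, loc_loss=1.4, alpha=0.3))  # 1.05
```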
1602.01711 | Anthony Bagnall Dr | Anthony Bagnall, Aaron Bostrom, James Large and Jason Lines | The Great Time Series Classification Bake Off: An Experimental
Evaluation of Recently Proposed Algorithms. Extended Version | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last five years there have been a large number of new time series
classification algorithms proposed in the literature. These algorithms have
been evaluated on subsets of the 47 data sets in the University of California,
Riverside time series classification archive. The archive has recently been
expanded to 85 data sets, over half of which have been donated by researchers
at the University of East Anglia. Aspects of previous evaluations have made
comparisons between algorithms difficult. For example, several different
programming languages have been used, experiments involved a single train/test
split and some used normalised data whilst others did not. The relaunch of the
archive provides a timely opportunity to thoroughly evaluate algorithms on a
larger number of datasets. We have implemented 18 recently proposed algorithms
in a common Java framework and compared them against two standard benchmark
classifiers (and each other) by performing 100 resampling experiments on each
of the 85 datasets. We use these results to test several hypotheses relating to
whether the algorithms are significantly more accurate than the benchmarks and
each other. Our results indicate that only 9 of these algorithms are
significantly more accurate than both benchmarks and that one classifier, the
Collective of Transformation Ensembles, is significantly more accurate than all
of the others. All of our experiments and results are reproducible: we release
all of our code, results and experimental details and we hope these experiments
form the basis for more rigorous testing of new algorithms in the future.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 15:24:22 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Bagnall",
"Anthony",
""
],
[
"Bostrom",
"Aaron",
""
],
[
"Large",
"James",
""
],
[
"Lines",
"Jason",
""
]
] | TITLE: The Great Time Series Classification Bake Off: An Experimental
Evaluation of Recently Proposed Algorithms. Extended Version
ABSTRACT: In the last five years there have been a large number of new time series
classification algorithms proposed in the literature. These algorithms have
been evaluated on subsets of the 47 data sets in the University of California,
Riverside time series classification archive. The archive has recently been
expanded to 85 data sets, over half of which have been donated by researchers
at the University of East Anglia. Aspects of previous evaluations have made
comparisons between algorithms difficult. For example, several different
programming languages have been used, experiments involved a single train/test
split and some used normalised data whilst others did not. The relaunch of the
archive provides a timely opportunity to thoroughly evaluate algorithms on a
larger number of datasets. We have implemented 18 recently proposed algorithms
in a common Java framework and compared them against two standard benchmark
classifiers (and each other) by performing 100 resampling experiments on each
of the 85 datasets. We use these results to test several hypotheses relating to
whether the algorithms are significantly more accurate than the benchmarks and
each other. Our results indicate that only 9 of these algorithms are
significantly more accurate than both benchmarks and that one classifier, the
Collective of Transformation Ensembles, is significantly more accurate than all
of the others. All of our experiments and results are reproducible: we release
all of our code, results and experimental details and we hope these experiments
form the basis for more rigorous testing of new algorithms in the future.
| no_new_dataset | 0.92912 |
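The resampling protocol above is easy to reproduce in miniature. The sketch below uses iris as a stand-in for a time series dataset, 10 resamples instead of 100, and a random forest instead of the paper's Java implementations.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
accs = []
for seed in range(10):  # one stratified resample per seed
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = RandomForestClassifier(random_state=seed).fit(Xtr, ytr)
    accs.append(clf.score(Xte, yte))
print(f"mean accuracy over resamples: {np.mean(accs):.3f}")
```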
1602.01728 | Alexander Wong | M. J. Shafiee, P. Siva, C. Scharfenberger, P. Fieguth, and A. Wong | NeRD: a Neural Response Divergence Approach to Visual Salience Detection | 5 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel approach to visual salience detection via Neural
Response Divergence (NeRD) is proposed, where synaptic portions of deep neural
networks, previously trained for complex object recognition, are leveraged to
compute low level cues that can be used to compute image region
distinctiveness. Based on this concept , an efficient visual salience detection
framework is proposed using deep convolutional StochasticNets. Experimental
results using CSSD and MSRA10k natural image datasets show that the proposed
NeRD approach can achieve improved performance when compared to
state-of-the-art image saliency approaches, while attaining the low
computational complexity necessary for near-real-time computer vision
applications.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 16:20:26 GMT"
}
] | 2016-02-05T00:00:00 | [
[
"Shafiee",
"M. J.",
""
],
[
"Siva",
"P.",
""
],
[
"Scharfenberger",
"C.",
""
],
[
"Fieguth",
"P.",
""
],
[
"Wong",
"A.",
""
]
] | TITLE: NeRD: a Neural Response Divergence Approach to Visual Salience Detection
ABSTRACT: In this paper, a novel approach to visual salience detection via Neural
Response Divergence (NeRD) is proposed, where synaptic portions of deep neural
networks, previously trained for complex object recognition, are leveraged to
compute low level cues that can be used to compute image region
distinctiveness. Based on this concept , an efficient visual salience detection
framework is proposed using deep convolutional StochasticNets. Experimental
results using CSSD and MSRA10k natural image datasets show that the proposed
NeRD approach can achieve improved performance when compared to
state-of-the-art image saliency approaches, while attaining the low
computational complexity necessary for near-real-time computer vision
applications.
| no_new_dataset | 0.951369 |
1602.00904 | Vangelis Oikonomou | Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis,
Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos and Ioannis
Kompatsiaris | Comparative evaluation of state-of-the-art algorithms for SSVEP-based
BCIs | null | null | null | null | cs.HC cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain-computer interfaces (BCIs) have been gaining momentum in making
human-computer interaction more natural, especially for people with
neuro-muscular disabilities. Among the existing solutions the systems relying
on electroencephalograms (EEG) occupy the most prominent place due to their
non-invasiveness. However, the process of translating EEG signals into computer
commands is far from trivial, since it requires the optimization of many
different parameters that need to be tuned jointly. In this report, we focus on
the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked
Potentials (SSVEPs) and perform a comparative evaluation of the most promising
algorithms existing in the literature. More specifically, we define a set of
algorithms for each of the various different parameters composing a BCI system
(i.e. filtering, artifact removal, feature extraction, feature selection and
classification) and study each parameter independently by keeping all other
parameters fixed. The results obtained from this evaluation process are
provided together with a dataset consisting of the 256-channel EEG signals of
11 subjects, as well as a processing toolbox for reproducing the results and
supporting further experimentation. In this way, we manage to make available
for the community a state-of-the-art baseline for SSVEP-based BCIs that can be
used as a basis for introducing novel methods and approaches.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 12:31:48 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Feb 2016 09:59:44 GMT"
}
] | 2016-02-04T00:00:00 | [
[
"Oikonomou",
"Vangelis P.",
""
],
[
"Liaros",
"Georgios",
""
],
[
"Georgiadis",
"Kostantinos",
""
],
[
"Chatzilari",
"Elisavet",
""
],
[
"Adam",
"Katerina",
""
],
[
"Nikolopoulos",
"Spiros",
""
],
[
"Kompatsiaris",
"Ioannis",
""
]
] | TITLE: Comparative evaluation of state-of-the-art algorithms for SSVEP-based
BCIs
ABSTRACT: Brain-computer interfaces (BCIs) have been gaining momentum in making
human-computer interaction more natural, especially for people with
neuro-muscular disabilities. Among the existing solutions the systems relying
on electroencephalograms (EEG) occupy the most prominent place due to their
non-invasiveness. However, the process of translating EEG signals into computer
commands is far from trivial, since it requires the optimization of many
different parameters that need to be tuned jointly. In this report, we focus on
the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked
Potentials (SSVEPs) and perform a comparative evaluation of the most promising
algorithms existing in the literature. More specifically, we define a set of
algorithms for each of the various different parameters composing a BCI system
(i.e. filtering, artifact removal, feature extraction, feature selection and
classification) and study each parameter independently by keeping all other
parameters fixed. The results obtained from this evaluation process are
provided together with a dataset consisting of the 256-channel EEG signals of
11 subjects, as well as a processing toolbox for reproducing the results and
supporting further experimentation. In this way, we manage to make available
for the community a state-of-the-art baseline for SSVEP-based BCIs that can be
used as a basis for introducing novel methods and approaches.
| no_new_dataset | 0.652435 |
1602.01197 | Chen Huang | Chen Huang, Chen Change Loy, Xiaoou Tang | Discriminative Sparse Neighbor Approximation for Imbalanced Learning | 11 pages, 10 figures, In submission | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data imbalance is common in many vision tasks where one or more classes are
rare. Without addressing this issue, conventional methods tend to be biased
toward the majority class with poor predictive accuracy for the minority class.
These methods further deteriorate on small, imbalanced data that has a large
degree of class overlap. In this study, we propose a novel discriminative
sparse neighbor approximation (DSNA) method to ameliorate the effect of
class-imbalance during prediction. Specifically, given a test sample, we first
traverse it through a cost-sensitive decision forest to collect a good subset
of training examples in its local neighborhood. Then we generate from this
subset several class-discriminating but overlapping clusters and model each as
an affine subspace. From these subspaces, the proposed DSNA iteratively seeks
an optimal approximation of the test sample and outputs an unbiased prediction.
We show that our method not only effectively mitigates the imbalance issue, but
also allows the prediction to extrapolate to unseen data. The latter capability
is crucial for achieving accurate prediction on small dataset with limited
samples. The proposed imbalanced learning method can be applied to both
classification and regression tasks at a wide range of imbalance levels. It
significantly outperforms the state-of-the-art methods that do not possess an
imbalance handling mechanism, and is found to perform comparably or even better
than recent deep learning methods by using hand-crafted features only.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 06:22:14 GMT"
}
] | 2016-02-04T00:00:00 | [
[
"Huang",
"Chen",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Discriminative Sparse Neighbor Approximation for Imbalanced Learning
ABSTRACT: Data imbalance is common in many vision tasks where one or more classes are
rare. Without addressing this issue, conventional methods tend to be biased
toward the majority class with poor predictive accuracy for the minority class.
These methods further deteriorate on small, imbalanced data that has a large
degree of class overlap. In this study, we propose a novel discriminative
sparse neighbor approximation (DSNA) method to ameliorate the effect of
class-imbalance during prediction. Specifically, given a test sample, we first
traverse it through a cost-sensitive decision forest to collect a good subset
of training examples in its local neighborhood. Then we generate from this
subset several class-discriminating but overlapping clusters and model each as
an affine subspace. From these subspaces, the proposed DSNA iteratively seeks
an optimal approximation of the test sample and outputs an unbiased prediction.
We show that our method not only effectively mitigates the imbalance issue, but
also allows the prediction to extrapolate to unseen data. The latter capability
is crucial for achieving accurate prediction on small dataset with limited
samples. The proposed imbalanced learning method can be applied to both
classification and regression tasks at a wide range of imbalance levels. It
significantly outperforms the state-of-the-art methods that do not possess an
imbalance handling mechanism, and is found to perform comparably or even better
than recent deep learning methods by using hand-crafted features only.
| no_new_dataset | 0.947478 |
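One ingredient of the pipeline above -- approximating the test sample by an affine combination of a cluster of neighbors -- reduces to constrained least squares. The cluster below is random toy data; anchoring on the first neighbor is just one standard way to impose the sum-to-one constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(6, 4))  # 6 neighbors in 4-D (toy cluster)
x = rng.normal(size=4)       # test sample

# Minimize ||C^T w - x|| subject to sum(w) = 1: anchor on C[0] so the
# constrained problem becomes an ordinary least-squares solve.
D = (C[1:] - C[0]).T                          # directions spanning the hull
w_rest, *_ = np.linalg.lstsq(D, x - C[0], rcond=None)
w = np.concatenate([[1 - w_rest.sum()], w_rest])
print(w.sum(), np.linalg.norm(C.T @ w - x))   # 1.0 and the residual
```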
1602.01376 | William March | Chenhan D. Yu, William B. March, Bo Xiao, and George Biros | Inv-ASKIT: A Parallel Fast Direct Solver for Kernel Matrices | 11 pages, 2 figures, to appear in IPDPS 2016 | null | null | null | cs.NA cs.DS cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a parallel algorithm for computing the approximate factorization
of an $N$-by-$N$ kernel matrix. Once this factorization has been constructed
(with $N \log^2 N$ work), we can solve linear systems with this matrix with
$N \log N$ work. Kernel matrices represent pairwise interactions of points in
metric spaces. They appear in machine learning, approximation theory, and
computational physics. Kernel matrices are typically dense (matrix
multiplication scales quadratically with $N$) and ill-conditioned (solves can
require 100s of Krylov iterations). Thus, fast algorithms for matrix
multiplication and factorization are critical for scalability.
Recently we introduced ASKIT, a new method for approximating a kernel matrix
that resembles N-body methods. Here we introduce INV-ASKIT, a factorization
scheme based on ASKIT. We describe the new method, derive complexity estimates,
and conduct an empirical study of its accuracy and scalability. We report
results on real-world datasets including "COVTYPE" ($0.5$M points in 54
dimensions), "SUSY" ($4.5$M points in 8 dimensions) and "MNIST" (2M points in
784 dimensions) using shared and distributed memory parallelism. In our largest
run we approximately factorize a dense matrix of size 32M $\times$ 32M
(generated from points in 64 dimensions) on 4,096 Sandy-Bridge cores. To our
knowledge these results improve the state of the art by several orders of
magnitude.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 17:23:24 GMT"
}
] | 2016-02-04T00:00:00 | [
[
"Yu",
"Chenhan D.",
""
],
[
"March",
"William B.",
""
],
[
"Xiao",
"Bo",
""
],
[
"Biros",
"George",
""
]
] | TITLE: Inv-ASKIT: A Parallel Fast Direct Solver for Kernel Matrices
ABSTRACT: We present a parallel algorithm for computing the approximate factorization
of an $N$-by-$N$ kernel matrix. Once this factorization has been constructed
(with $N \log^2 N$ work), we can solve linear systems with this matrix with
$N \log N$ work. Kernel matrices represent pairwise interactions of points in
metric spaces. They appear in machine learning, approximation theory, and
computational physics. Kernel matrices are typically dense (matrix
multiplication scales quadratically with $N$) and ill-conditioned (solves can
require 100s of Krylov iterations). Thus, fast algorithms for matrix
multiplication and factorization are critical for scalability.
Recently we introduced ASKIT, a new method for approximating a kernel matrix
that resembles N-body methods. Here we introduce INV-ASKIT, a factorization
scheme based on ASKIT. We describe the new method, derive complexity estimates,
and conduct an empirical study of its accuracy and scalability. We report
results on real-world datasets including "COVTYPE" ($0.5$M points in 54
dimensions), "SUSY" ($4.5$M points in 8 dimensions) and "MNIST" (2M points in
784 dimensions) using shared and distributed memory parallelism. In our largest
run we approximately factorize a dense matrix of size 32M $\times$ 32M
(generated from points in 64 dimensions) on 4,096 Sandy-Bridge cores. To our
knowledge these results improve the state of the art by several orders of
magnitude.
| no_new_dataset | 0.938011 |
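Not the paper's algorithm, but the core identity such direct solvers build on: when the kernel matrix is approximated in diagonal-plus-low-rank form, the Sherman-Morrison-Woodbury formula reduces a solve to work linear in N. Sizes below are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 500, 10
d = rng.uniform(1, 2, size=N)           # diagonal part
U, V = rng.normal(size=(N, r)), rng.normal(size=(N, r))
K = np.diag(d) + U @ V.T                # K = D + U V^T
b = rng.normal(size=N)

# Woodbury: (D + U V^T)^-1 b = D^-1 b - D^-1 U (I + V^T D^-1 U)^-1 V^T D^-1 b
Dinv_b, Dinv_U = b / d, U / d[:, None]
small = np.eye(r) + V.T @ Dinv_U        # r x r capacitance matrix
x = Dinv_b - Dinv_U @ np.linalg.solve(small, V.T @ Dinv_b)
print(np.allclose(K @ x, b))            # True
```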
1512.05430 | Qian Yu | Qian Yu, Christian Szegedy, Martin C. Stumpe, Liron Yatziv, Vinay
Shet, Julian Ibarz, Sacha Arnoud | Large Scale Business Discovery from Street Level Imagery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Search with local intent is becoming increasingly useful due to the
popularity of mobile devices. The creation and maintenance of accurate
listings of local businesses worldwide is time consuming and expensive. In this
paper, we propose an approach to automatically discover businesses that are
visible on street level imagery. Precise business store front detection enables
accurate geo-location of businesses, and further provides input for business
categorization, listing generation, etc. The large variety of business
categories in different countries makes this a very challenging problem.
Moreover, manual annotation is prohibitive due to the scale of this problem. We
propose the use of a MultiBox-based approach that takes input image pixels and
directly outputs store front bounding boxes. This end-to-end learning approach
instead preempts the need for hand modeling either the proposal generation
phase or the post-processing phase, leveraging large labelled training
datasets. We demonstrate that our approach outperforms state-of-the-art
detection techniques by a large margin in terms of performance and run-time
efficiency. In the evaluation, we show this approach achieves human accuracy in
the low-recall settings. We also provide an end-to-end evaluation of business
discovery in the real world.
| [
{
"version": "v1",
"created": "Thu, 17 Dec 2015 01:15:11 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Feb 2016 07:24:29 GMT"
}
] | 2016-02-03T00:00:00 | [
[
"Yu",
"Qian",
""
],
[
"Szegedy",
"Christian",
""
],
[
"Stumpe",
"Martin C.",
""
],
[
"Yatziv",
"Liron",
""
],
[
"Shet",
"Vinay",
""
],
[
"Ibarz",
"Julian",
""
],
[
"Arnoud",
"Sacha",
""
]
] | TITLE: Large Scale Business Discovery from Street Level Imagery
ABSTRACT: Search with local intent is becoming increasingly useful due to the
popularity of mobile devices. The creation and maintenance of accurate
listings of local businesses worldwide is time consuming and expensive. In this
paper, we propose an approach to automatically discover businesses that are
visible on street level imagery. Precise business store front detection enables
accurate geo-location of businesses, and further provides input for business
categorization, listing generation, etc. The large variety of business
categories in different countries makes this a very challenging problem.
Moreover, manual annotation is prohibitive due to the scale of this problem. We
propose the use of a MultiBox-based approach that takes input image pixels and
directly outputs store front bounding boxes. This end-to-end learning approach
instead preempts the need for hand modeling either the proposal generation
phase or the post-processing phase, leveraging large labelled training
datasets. We demonstrate that our approach outperforms state-of-the-art
detection techniques by a large margin in terms of performance and run-time
efficiency. In the evaluation, we show this approach achieves human accuracy in
the low-recall settings. We also provide an end-to-end evaluation of business
discovery in the real world.
| no_new_dataset | 0.953708 |
1602.00032 | Yezhou Yang | Chengxi Ye and Yezhou Yang and Cornelia Fermuller and Yiannis
Aloimonos | What Can I Do Around Here? Deep Functional Scene Understanding for
Cognitive Robots | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For robots that have the capability to interact with the physical environment
through their end effectors, understanding the surrounding scenes is not merely
a task of image classification or object recognition. To perform actual tasks,
it is critical for the robot to have a functional understanding of the visual
scene. Here, we address the problem of localization and recognition of functional
areas from an arbitrary indoor scene, formulated as a two-stage deep learning
based detection pipeline. A new scene functionality testing-bed, which is
compiled from two publicly available indoor scene datasets, is used for
evaluation. Our method is evaluated quantitatively on the new dataset,
demonstrating the ability to perform efficient recognition of functional areas
from arbitrary indoor scenes. We also demonstrate that our detection model can
be generalized to novel indoor scenes by cross-validating it with the images
from two different datasets.
| [
{
"version": "v1",
"created": "Fri, 29 Jan 2016 22:55:53 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Feb 2016 16:28:01 GMT"
}
] | 2016-02-03T00:00:00 | [
[
"Ye",
"Chengxi",
""
],
[
"Yang",
"Yezhou",
""
],
[
"Fermuller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] | TITLE: What Can I Do Around Here? Deep Functional Scene Understanding for
Cognitive Robots
ABSTRACT: For robots that have the capability to interact with the physical environment
through their end effectors, understanding the surrounding scenes is not merely
a task of image classification or object recognition. To perform actual tasks,
it is critical for the robot to have a functional understanding of the visual
scene. Here, we address the problem of localization and recognition of functional
areas from an arbitrary indoor scene, formulated as a two-stage deep learning
based detection pipeline. A new scene functionality testing-bed, which is
compiled from two publicly available indoor scene datasets, is used for
evaluation. Our method is evaluated quantitatively on the new dataset,
demonstrating the ability to perform efficient recognition of functional areas
from arbitrary indoor scenes. We also demonstrate that our detection model can
be generalized to novel indoor scenes by cross-validating it with the images
from two different datasets.
| new_dataset | 0.968051 |
1602.00753 | Hessam Bagherinezhad | Hessam Bagherinezhad, Hannaneh Hajishirzi, Yejin Choi, Ali Farhadi | Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects | To appear in AAAI 2016 | null | null | null | cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human vision greatly benefits from the information about sizes of objects.
The role of size in several visual reasoning tasks has been thoroughly explored
in human perception and cognition. However, the impact of the information about
sizes of objects is yet to be determined in AI. We postulate that this is
mainly attributed to the lack of a comprehensive repository of size
information. In this paper, we introduce a method to automatically infer object
sizes, leveraging visual and textual information from the web. By maximizing the
joint likelihood of textual and visual observations, our method learns reliable
relative size estimates, with no explicit human supervision. We introduce the
relative size dataset and show that our method outperforms competitive textual
and visual baselines in reasoning about size comparisons.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 00:16:39 GMT"
}
] | 2016-02-03T00:00:00 | [
[
"Bagherinezhad",
"Hessam",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Choi",
"Yejin",
""
],
[
"Farhadi",
"Ali",
""
]
] | TITLE: Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects
ABSTRACT: Human vision greatly benefits from the information about sizes of objects.
The role of size in several visual reasoning tasks has been thoroughly explored
in human perception and cognition. However, the impact of the information about
sizes of objects is yet to be determined in AI. We postulate that this is
mainly attributed to the lack of a comprehensive repository of size
information. In this paper, we introduce a method to automatically infer object
sizes, leveraging visual and textual information from the web. By maximizing the
joint likelihood of textual and visual observations, our method learns reliable
relative size estimates, with no explicit human supervision. We introduce the
relative size dataset and show that our method outperforms competitive textual
and visual baselines in reasoning about size comparisons.
| new_dataset | 0.960025 |
1602.00798 | Yi-Chao Chen | David Shui Wing Hui (1), Yi-Chao Chen (1), Gong Zhang (1), Weijie Wu
(1), Guanrong Chen (2), John C. S. Lui (3), Yingtao Li (1) ((1) Huawei
Technologies Co. Ltd., (2) City University of Hong Kong, (3) The Chinese
University of Hong Kong) | A Unified Framework for Information Consumption Based on Markov Chains | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper establishes a Markov chain model as a unified framework for
understanding information consumption processes in complex networks, with clear
implications for the Internet and big-data technologies. In particular, the
proposed model is the first one to address the formation mechanism of the
"trichotomy" in observed probability density functions from empirical data of
various social and technical networks. Both simulation and experimental results
demonstrate a good match of the proposed model with real datasets, showing its
superiority over the classical power-law models.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 05:54:24 GMT"
}
] | 2016-02-03T00:00:00 | [
[
"Hui",
"David Shui Wing",
""
],
[
"Chen",
"Yi-Chao",
""
],
[
"Zhang",
"Gong",
""
],
[
"Wu",
"Weijie",
""
],
[
"Chen",
"Guanrong",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Li",
"Yingtao",
""
]
] | TITLE: A Unified Framework for Information Consumption Based on Markov Chains
ABSTRACT: This paper establishes a Markov chain model as a unified framework for
understanding information consumption processes in complex networks, with clear
implications for the Internet and big-data technologies. In particular, the
proposed model is the first one to address the formation mechanism of the
"trichotomy" in observed probability density functions from empirical data of
various social and technical networks. Both simulation and experimental results
demonstrate a good match of the proposed model with real datasets, showing its
superiority over the classical power-law models.
| no_new_dataset | 0.950549 |
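The basic object in such a framework -- the stationary distribution of a content-consumption Markov chain -- can be computed by power iteration. The 3-state transition matrix below is made up purely for illustration.

```python
import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])  # row-stochastic transition matrix

pi = np.full(3, 1 / 3)           # start from the uniform distribution
for _ in range(200):             # power iteration
    pi = pi @ P
print(pi, pi @ P)                # converged: pi @ P is (nearly) pi again
```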
1602.01040 | HyeongSik Kim HyeongSik Kim | HyeongSik Kim and Kemafor Anyanwu | Scalable Ontological Query Processing over Semantically Integrated Life
Science Datasets using MapReduce | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address the requirement of enabling a comprehensive perspective of
life-sciences data, Semantic Web technologies have been adopted for
standardized representations of data and linkages between data. This has
resulted in data warehouses such as UniProt, Bio2RDF, and Chem2Bio2RDF, that
integrate different kinds of biological and chemical data using ontologies.
Unfortunately, the ability to process queries over ontologically-integrated
collections remains a challenge, particularly when data is large. The reason is
that besides the traditional challenges of processing graph-structured data,
complete query answering requires inferencing to explicate implicitly
represented facts. Since traditional inferencing techniques like forward
chaining are difficult to scale up, and need to be repeated each time data is
updated, recent focus has been on inferencing that can be supported using
database technologies via query rewriting. However, due to the richness of most
biomedical ontologies relative to other domain ontologies, the queries
resulting from the query rewriting technique are often more complex than
existing query optimization techniques can cope with. This is particularly so
when using the emerging class of cloud data processing platforms for big data
processing due to some additional overhead which they introduce. In this paper,
we present an approach for dealing with such complex queries on big data using
MapReduce, along with an evaluation on existing real-world datasets and
benchmark queries.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 18:45:22 GMT"
}
] | 2016-02-03T00:00:00 | [
[
"Kim",
"HyeongSik",
""
],
[
"Anyanwu",
"Kemafor",
""
]
] | TITLE: Scalable Ontological Query Processing over Semantically Integrated Life
Science Datasets using MapReduce
ABSTRACT: To address the requirement of enabling a comprehensive perspective of
life-sciences data, Semantic Web technologies have been adopted for
standardized representations of data and linkages between data. This has
resulted in data warehouses such as UniProt, Bio2RDF, and Chem2Bio2RDF, that
integrate different kinds of biological and chemical data using ontologies.
Unfortunately, the ability to process queries over ontologically-integrated
collections remains a challenge, particularly when data is large. The reason is
that besides the traditional challenges of processing graph-structured data,
complete query answering requires inferencing to explicate implicitly
represented facts. Since traditional inferencing techniques like forward
chaining are difficult to scale up, and need to be repeated each time data is
updated, recent focus has been on inferencing that can be supported using
database technologies via query rewriting. However, due to the richness of most
biomedical ontologies relative to other domain ontologies, the queries
resulting from the query rewriting technique are often more complex than
existing query optimization techniques can cope with. This is particularly so
when using the emerging class of cloud data processing platforms for big data
processing due to some additional overhead which they introduce. In this paper,
we present an approach for dealing with such complex queries on big data using
MapReduce, along with an evaluation on existing real-world datasets and
benchmark queries.
| no_new_dataset | 0.943712 |
1503.00659 | Adriano Barra Dr. | Elena Agliari, Adriano Barra, Andrea Galluzzi, Marco Alberto Javarone,
Andrea Pizzoferrato, Daniele Tantari | Emerging heterogeneities in Italian customs and comparison with nearby
countries | in PLoS One (2015) | null | 10.1371/journal.pone.0144643 | Roma01.Math | physics.soc-ph cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we apply techniques and modus operandi typical of Statistical
Mechanics to a large dataset about key social quantifiers and compare the
resulting behaviours of five European nations, namely France, Germany, Italy,
Spain and Switzerland. The social quantifiers considered are $i.$ the evolution
of the number of autochthonous marriages (i.e. between two natives) within a
given territorial district and $ii.$ the evolution of the number of mixed
marriages (i.e. between a native and an immigrant) within a given territorial
district. Our investigations are twofold. From a theoretical perspective, we
develop novel techniques, complementary to classical methods (e.g. historical
series and logistic regression), in order to detect possible collective
features underlying the empirical behaviours; from an experimental perspective,
we evidence a clear outline for the evolution of the social quantifiers
considered. The comparison between experimental results and theoretical
predictions is excellent and allows speculating that France, Italy and Spain
display a certain degree of {\em internal heterogeneity}, that is not found in
Germany and Switzerland; such heterogeneity, quite mild in France and in Spain,
is not negligible in Italy and highlights quantitative differences in the
customs of Northern and Southern regions. These findings may suggest the
persistence of two culturally distinct communities, long-term lasting heritages
of different and well-established cultures.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2015 18:51:39 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Nov 2015 20:13:48 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Agliari",
"Elena",
""
],
[
"Barra",
"Adriano",
""
],
[
"Galluzzi",
"Andrea",
""
],
[
"Javarone",
"Marco Alberto",
""
],
[
"Pizzoferrato",
"Andrea",
""
],
[
"Tantari",
"Daniele",
""
]
] | TITLE: Emerging heterogeneities in Italian customs and comparison with nearby
countries
ABSTRACT: In this work we apply techniques and modus operandi typical of Statistical
Mechanics to a large dataset about key social quantifiers and compare the
resulting behaviours of five European nations, namely France, Germany, Italy,
Spain and Switzerland. The social quantifiers considered are $i.$ the evolution
of the number of autochthonous marriages (i.e. between two natives) within a
given territorial district and $ii.$ the evolution of the number of mixed
marriages (i.e. between a native and an immigrant) within a given territorial
district. Our investigations are twofold. From a theoretical perspective, we
develop novel techniques, complementary to classical methods (e.g. historical
series and logistic regression), in order to detect possible collective
features underlying the empirical behaviours; from an experimental perspective,
we evidence a clear outline for the evolution of the social quantifiers
considered. The comparison between experimental results and theoretical
predictions is excellent and allows speculating that France, Italy and Spain
display a certain degree of {\em internal heterogeneity}, that is not found in
Germany and Switzerland; such heterogeneity, quite mild in France and in Spain,
is not negligible in Italy and highlights quantitative differences in the
customs of Northern and Southern regions. These findings may suggest the
persistence of two culturally distinct communities, long-term lasting heritages
of different and well-established cultures.
| no_new_dataset | 0.937726 |
1504.08153 | Kirell Benzi | Kirell Benzi, Benjamin Ricaud, Pierre Vandergheynst | Principal Patterns on Graphs: Discovering Coherent Structures in
Datasets | null | null | null | null | cs.SI physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs are now ubiquitous in almost every field of research. Recently, new
research areas devoted to the analysis of graphs and data associated with their
vertices have emerged. Focusing on dynamical processes, we propose a fast,
robust and scalable framework for retrieving and analyzing recurring patterns
of activity on graphs. Our method relies on a novel type of multilayer graph
that encodes the spreading or propagation of events between successive time
steps. We demonstrate the versatility of our method by applying it on three
different real-world examples. Firstly, we study how rumor spreads on a social
network. Secondly, we reveal congestion patterns of pedestrians in a train
station. Finally, we show how patterns of audio playlists can be used in a
recommender system. In each example, relevant information previously hidden in
the data is extracted in a very efficient manner, emphasizing the scalability
of our method. With a parallel implementation scaling linearly with the size of
the dataset, our framework easily handles millions of nodes on a single
commodity server.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 10:20:57 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Oct 2015 16:51:48 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Nov 2015 15:29:35 GMT"
},
{
"version": "v4",
"created": "Mon, 1 Feb 2016 12:25:01 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Benzi",
"Kirell",
""
],
[
"Ricaud",
"Benjamin",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] | TITLE: Principal Patterns on Graphs: Discovering Coherent Structures in
Datasets
ABSTRACT: Graphs are now ubiquitous in almost every field of research. Recently, new
research areas devoted to the analysis of graphs and data associated with their
vertices have emerged. Focusing on dynamical processes, we propose a fast,
robust and scalable framework for retrieving and analyzing recurring patterns
of activity on graphs. Our method relies on a novel type of multilayer graph
that encodes the spreading or propagation of events between successive time
steps. We demonstrate the versatility of our method by applying it on three
different real-world examples. Firstly, we study how rumor spreads on a social
network. Secondly, we reveal congestion patterns of pedestrians in a train
station. Finally, we show how patterns of audio playlists can be used in a
recommender system. In each example, relevant information previously hidden in
the data is extracted in a very efficient manner, emphasizing the scalability
of our method. With a parallel implementation scaling linearly with the size of
the dataset, our framework easily handles millions of nodes on a single
commodity server.
| no_new_dataset | 0.941708 |
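The key construct in the abstract above is a multilayer graph linking activity at time t to activity at time t+1 along edges of the underlying graph. A hedged sketch of that encoding (toy random graph and activity; the sizes and the both-endpoints-active rule are assumptions, not the authors' exact construction):

```python
import numpy as np

# Build a causal multilayer graph: node i at time t links to neighbour j
# at time t+1 whenever both are active, encoding potential propagation.
rng = np.random.default_rng(1)
n_nodes, n_steps = 6, 5
adj = rng.integers(0, 2, size=(n_nodes, n_nodes))      # toy static graph
np.fill_diagonal(adj, 0)
active = rng.integers(0, 2, size=(n_steps, n_nodes))   # toy activity signal

layered_edges = []  # edges ((t, i) -> (t+1, j)) of the multilayer graph
for t in range(n_steps - 1):
    for i in range(n_nodes):
        if not active[t, i]:
            continue
        for j in np.flatnonzero(adj[i]):
            if active[t + 1, j]:
                layered_edges.append(((t, i), (t + 1, j)))

print(len(layered_edges), "causal edges, e.g.", layered_edges[:3])
```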
1505.01634 | Simon Walk | Simon Walk, Denis Helic, Florian Geigl and Markus Strohmaier | Activity Dynamics in Collaboration Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many online collaboration networks struggle to gain user activity and become
self-sustaining due to the ramp-up problem or dwindling activity within the
system. Prominent examples include online encyclopedias such as (Semantic)
MediaWikis, Question and Answering portals such as StackOverflow, and many
others. Only a small fraction of these systems manage to reach self-sustaining
activity, a level of activity that prevents the system from reverting to a
non-active state. In this paper, we model and analyze activity dynamics in
synthetic and empirical collaboration networks. Our approach is based on two
opposing and well-studied principles: (i) without incentives, users tend to
lose interest to contribute and thus, systems become inactive, and (ii) people
are susceptible to actions taken by their peers (social or peer influence).
With the activity dynamics model that we introduce in this paper we can
represent typical situations of such collaboration networks. For example,
activity in a collaborative network, without external impulses or investments,
will vanish over time, eventually rendering the system inactive. However, by
appropriately manipulating the activity dynamics and/or the underlying
collaboration networks, we can jump-start a previously inactive system and
advance it towards an active state. To be able to do so, we first describe our
model and its underlying mechanisms. We then provide illustrative examples of
empirical datasets and characterize the barrier that has to be breached by a
system before it can become self-sustaining in terms of critical mass and
activity dynamics. Additionally, we expand on this empirical illustration and
introduce a new metric p---the Activity Momentum---to assess the activity
robustness of collaboration networks.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 09:18:48 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2016 13:32:31 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Walk",
"Simon",
""
],
[
"Helic",
"Denis",
""
],
[
"Geigl",
"Florian",
""
],
[
"Strohmaier",
"Markus",
""
]
] | TITLE: Activity Dynamics in Collaboration Networks
ABSTRACT: Many online collaboration networks struggle to gain user activity and become
self-sustaining due to the ramp-up problem or dwindling activity within the
system. Prominent examples include online encyclopedias such as (Semantic)
MediaWikis, Question and Answering portals such as StackOverflow, and many
others. Only a small fraction of these systems manage to reach self-sustaining
activity, a level of activity that prevents the system from reverting to a
non-active state. In this paper, we model and analyze activity dynamics in
synthetic and empirical collaboration networks. Our approach is based on two
opposing and well-studied principles: (i) without incentives, users tend to
lose interest to contribute and thus, systems become inactive, and (ii) people
are susceptible to actions taken by their peers (social or peer influence).
With the activity dynamics model that we introduce in this paper we can
represent typical situations of such collaboration networks. For example,
activity in a collaborative network, without external impulses or investments,
will vanish over time, eventually rendering the system inactive. However, by
appropriately manipulating the activity dynamics and/or the underlying
collaboration networks, we can jump-start a previously inactive system and
advance it towards an active state. To be able to do so, we first describe our
model and its underlying mechanisms. We then provide illustrative examples of
empirical datasets and characterize the barrier that has to be breached by a
system before it can become self-sustaining in terms of critical mass and
activity dynamics. Additionally, we expand on this empirical illustration and
introduce a new metric p---the Activity Momentum---to assess the activity
robustness of collaboration networks.
| no_new_dataset | 0.950549 |
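The two opposing principles in the abstract above (activity decay without incentives vs. peer influence) admit a compact dynamical sketch. The decay rate lam, coupling q, and the tanh saturation below are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

# Toy activity dynamics on a random undirected network: intrinsic decay
# pulls activity to zero, while peer influence pushes it back up.
rng = np.random.default_rng(2)
n = 50
A = rng.integers(0, 2, size=(n, n)); A = np.triu(A, 1); A = A + A.T
x = rng.random(n)            # initial per-user activity
lam, q, dt = 1.0, 0.08, 0.05

for _ in range(2000):
    dx = -lam * x + q * A @ np.tanh(x)   # decay vs. saturating peer influence
    x = np.clip(x + dt * dx, 0.0, None)

print("mean activity after relaxation:", round(float(x.mean()), 4))
```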
1602.00203 | Angshul Majumdar Dr. | Snigdha Tariyal, Angshul Majumdar, Richa Singh and Mayank Vatsa | Greedy Deep Dictionary Learning | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose a new deep learning tool called deep dictionary
learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at
a time. This requires solving a simple (shallow) dictionary learning problem,
the solution to which is well known. We apply the proposed technique on some
benchmark deep learning datasets. We compare our results with other deep
learning tools like stacked autoencoder and deep belief network; and state of
the art supervised dictionary learning tools like discriminative KSVD and label
consistent KSVD. Our method yields better results than all of them.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 06:12:58 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Tariyal",
"Snigdha",
""
],
[
"Majumdar",
"Angshul",
""
],
[
"Singh",
"Richa",
""
],
[
"Vatsa",
"Mayank",
""
]
] | TITLE: Greedy Deep Dictionary Learning
ABSTRACT: In this work we propose a new deep learning tool called deep dictionary
learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at
a time. This requires solving a simple (shallow) dictionary learning problem,
the solution to which is well known. We apply the proposed technique on some
benchmark deep learning datasets. We compare our results with other deep
learning tools like stacked autoencoder and deep belief network; and state of
the art supervised dictionary learning tools like discriminative KSVD and label
consistent KSVD. Our method yields better results than all of them.
| no_new_dataset | 0.950457 |
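Since each layer reduces to a shallow dictionary learning problem, the greedy scheme can be sketched by chaining off-the-shelf solvers, feeding one layer's sparse codes into the next. Layer sizes and the scikit-learn solver are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Greedy, layer-wise dictionary learning: each layer fits a shallow
# dictionary to the previous layer's codes.
rng = np.random.default_rng(3)
X = rng.random((200, 64))                 # 200 samples, 64-dim features

codes, layer_sizes = X, [32, 16]
for k, n_atoms in enumerate(layer_sizes, start=1):
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                            max_iter=20, random_state=0)
    codes = dl.fit_transform(codes)       # sparse codes feed the next layer
    print(f"layer {k}: codes shape {codes.shape}")
```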
1602.00224 | Chunhua Shen | Peng Wang, Lingqiao Liu, Chunhua Shen, Heng Tao Shen | Order-aware Convolutional Pooling for Video Based Action Recognition | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most video based action recognition approaches create the video-level
representation by temporally pooling the features extracted at each frame. The
pooling methods that they adopt, however, usually completely or partially
neglect the dynamic information contained in the temporal domain, which may
undermine the discriminative power of the resulting video representation since
the video sequence order could unveil the evolution of a specific event or
action. To overcome this drawback and explore the importance of incorporating
the temporal order information, in this paper we propose a novel temporal
pooling approach to aggregate the frame-level features. Inspired by the
capacity of Convolutional Neural Networks (CNN) in making use of the internal
structure of images for information abstraction, we propose to apply the
temporal convolution operation to the frame-level representations to extract
the dynamic information. However, directly implementing this idea on the
original high-dimensional feature would inevitably result in parameter
explosion.
To tackle this problem, we view the temporal evolution of the feature value
at each feature dimension as a 1D signal and learn a unique convolutional
filter bank for each of these 1D signals. We conduct experiments on two
challenging video-based action recognition datasets, HMDB51 and UCF101; and
demonstrate that the proposed method is superior to the conventional pooling
methods.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 10:58:11 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Wang",
"Peng",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Shen",
"Heng Tao",
""
]
] | TITLE: Order-aware Convolutional Pooling for Video Based Action Recognition
ABSTRACT: Most video based action recognition approaches create the video-level
representation by temporally pooling the features extracted at each frame. The
pooling methods that they adopt, however, usually completely or partially
neglect the dynamic information contained in the temporal domain, which may
undermine the discriminative power of the resulting video representation since
the video sequence order could unveil the evolution of a specific event or
action. To overcome this drawback and explore the importance of incorporating
the temporal order information, in this paper we propose a novel temporal
pooling approach to aggregate the frame-level features. Inspired by the
capacity of Convolutional Neural Networks (CNN) in making use of the internal
structure of images for information abstraction, we propose to apply the
temporal convolution operation to the frame-level representations to extract
the dynamic information. However, directly implementing this idea on the
original high-dimensional feature would inevitably result in parameter
explosion.
To tackle this problem, we view the temporal evolution of the feature value
at each feature dimension as a 1D signal and learn a unique convolutional
filter bank for each of these 1D signals. We conduct experiments on two
challenging video-based action recognition datasets, HMDB51 and UCF101; and
demonstrate that the proposed method is superior to the conventional pooling
methods.
| no_new_dataset | 0.94743 |
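The core idea above, treating each feature dimension's evolution over frames as a 1D signal with its own convolutional filter bank, can be sketched directly. The filters are random here purely for shape illustration; in the paper they are learned:

```python
import numpy as np

# Order-aware temporal pooling: convolve each feature dimension's
# trajectory with a per-dimension filter bank, then pool over time.
rng = np.random.default_rng(4)
T, D, K, width = 30, 128, 4, 5            # frames, dims, filters/dim, filter width
frame_feats = rng.random((T, D))          # frame-level CNN features
filters = rng.standard_normal((D, K, width))

pooled = np.empty((D, K))
for d in range(D):                        # one unique filter bank per dimension
    for k in range(K):
        resp = np.convolve(frame_feats[:, d], filters[d, k], mode="valid")
        pooled[d, k] = resp.max()         # max over time -> fixed-size output

video_repr = pooled.reshape(-1)           # (D*K,) video-level representation
print(video_repr.shape)
```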
1602.00248 | Adam Kucharski | Adam J. Kucharski | Modelling the transmission dynamics of online social contagion | 13 pages, 6 figures, 2 tables | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | During 2014-15, there were several outbreaks of nomination-based online social
contagion. These infections, which were transmitted from one individual to
another via posts on social media, included games such as 'neknomination', 'ice
bucket challenge', 'no make up selfies', and Facebook users re-posting their
first profile pictures. Fitting a mathematical model of infectious disease
transmission to outbreaks of these four games in the United Kingdom, I
estimated the basic reproduction number, $R_0$, and generation time of each
infection. Median estimates for $R_0$ ranged from 1.9-2.5 across the four
outbreaks, and the estimated generation times were between 1.0 and 2.0 days.
Tests using out-of-sample data from Australia suggested that the model had
reasonable predictive power, with $R^2$ values between 0.52-0.70 across the
four Australian datasets. Further, the relatively low basic reproduction
numbers for the infections suggest that only 48-60% of index cases in
nomination-based games may subsequently generate major outbreaks.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 13:58:17 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Kucharski",
"Adam J.",
""
]
] | TITLE: Modelling the transmission dynamics of online social contagion
ABSTRACT: During 2014-15, there were several outbreaks of nomination-based online social
contagion. These infections, which were transmitted from one individual to
another via posts on social media, included games such as 'neknomination', 'ice
bucket challenge', 'no make up selfies', and Facebook users re-posting their
first profile pictures. Fitting a mathematical model of infectious disease
transmission to outbreaks of these four games in the United Kingdom, I
estimated the basic reproduction number, $R_0$, and generation time of each
infection. Median estimates for $R_0$ ranged from 1.9-2.5 across the four
outbreaks, and the estimated generation times were between 1.0 and 2.0 days.
Tests using out-of-sample data from Australia suggested that the model had
reasonable predictive power, with $R^2$ values between 0.52-0.70 across the
four Australian datasets. Further, the relatively low basic reproduction
numbers for the infections suggest that only 48-60% of index cases in
nomination-based games may subsequently generate major outbreaks.
| no_new_dataset | 0.924756 |
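The closing claim that 48-60% of index cases generate major outbreaks is consistent with the 1 - 1/R0 survival probability of a branching process with geometric offspring (an SIR-type assumption, not necessarily the paper's fitted model). A quick simulation check:

```python
import numpy as np

rng = np.random.default_rng(5)

def major_outbreak_fraction(r0, trials=20_000, threshold=500):
    p = 1.0 / (1.0 + r0)                      # geometric offspring, mean r0
    major = 0
    for _ in range(trials):
        cases, total = 1, 1
        while cases and total < threshold:
            # next generation of nominations, summed over current cases
            cases = int((rng.geometric(p, size=cases) - 1).sum())
            total += cases
        major += total >= threshold
    return major / trials

for r0 in (1.9, 2.5):
    print(r0, round(major_outbreak_fraction(r0), 3),
          "vs 1 - 1/R0 =", round(1 - 1 / r0, 3))
```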
1602.00386 | Alexander Wong | Parthipan Siva, Mohammad Javad Shafiee, Mike Jamieson, and Alexander
Wong | Scene Invariant Crowd Segmentation and Counting Using Scale-Normalized
Histogram of Moving Gradients (HoMG) | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of automated crowd segmentation and counting has garnered
significant interest in the field of video surveillance. This paper proposes a
novel scene invariant crowd segmentation and counting algorithm designed with
high accuracy yet low computational complexity in mind, which is key for
widespread industrial adoption. A novel low-complexity, scale-normalized
feature called Histogram of Moving Gradients (HoMG) is introduced for highly
effective spatiotemporal representation of individuals and crowds within a
video. Real-time crowd segmentation is achieved via boosted cascade of weak
classifiers based on sliding-window HoMG features, while linear SVM regression
of crowd-region HoMG features is employed for real-time crowd counting.
Experimental results using multi-camera crowd datasets show that the proposed
algorithm significantly outperforms state-of-the-art crowd counting algorithms,
and achieves very promising crowd segmentation results, thus
demonstrating the efficacy of the proposed method for highly-accurate,
real-time video-driven crowd analysis.
| [
{
"version": "v1",
"created": "Mon, 1 Feb 2016 04:07:32 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Siva",
"Parthipan",
""
],
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Jamieson",
"Mike",
""
],
[
"Wong",
"Alexander",
""
]
] | TITLE: Scene Invariant Crowd Segmentation and Counting Using Scale-Normalized
Histogram of Moving Gradients (HoMG)
ABSTRACT: The problem of automated crowd segmentation and counting has garnered
significant interest in the field of video surveillance. This paper proposes a
novel scene invariant crowd segmentation and counting algorithm designed with
high accuracy yet low computational complexity in mind, which is key for
widespread industrial adoption. A novel low-complexity, scale-normalized
feature called Histogram of Moving Gradients (HoMG) is introduced for highly
effective spatiotemporal representation of individuals and crowds within a
video. Real-time crowd segmentation is achieved via boosted cascade of weak
classifiers based on sliding-window HoMG features, while linear SVM regression
of crowd-region HoMG features is employed for real-time crowd counting.
Experimental results using multi-camera crowd datasets show that the proposed
algorithm significantly outperforms state-of-the-art crowd counting algorithms,
and achieves very promising crowd segmentation results, thus
demonstrating the efficacy of the proposed method for highly-accurate,
real-time video-driven crowd analysis.
| no_new_dataset | 0.948106 |
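A loose sketch of what a Histogram of Moving Gradients style window descriptor could look like (the exact definition, bin count, and normalization are assumptions): spatial gradients of the temporal frame difference, histogrammed by orientation and weighted by magnitude:

```python
import numpy as np

def homg_window(prev, curr, bins=8):
    """Orientation histogram of the gradients of a temporal difference."""
    moving = curr.astype(float) - prev.astype(float)   # temporal difference
    gy, gx = np.gradient(moving)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi                   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)                  # scale-normalized

rng = np.random.default_rng(6)
f0, f1 = rng.random((32, 32)), rng.random((32, 32))
print(np.round(homg_window(f0, f1), 3))
```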
1602.00419 | Lutz Bornmann Dr. | Lutz Bornmann | Is collaboration among scientists related to the citation impact of
papers because their quality increases with collaboration? An analysis based
on data from F1000Prime and normalized citation scores | Accepted for publication in the Journal of the Association for
Information Science and Technology | null | null | null | cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the relationship between collaboration among scientists and the
citation impact of papers has been frequently investigated. Most of the
studies show that the two variables are closely related: an increasing
collaboration activity (measured in terms of number of authors, number of
affiliations, and number of countries) is associated with an increased citation
impact. However, it is not clear whether the increased citation impact is based
on the higher quality of papers which profit from more than one scientist
giving expert input or other (citation-specific) factors. Thus, the current
study addresses this question by using two comprehensive datasets with
publications (in the biomedical area) including quality assessments by experts
(F1000Prime member scores) and citation data for the publications. The study is
based on nearly 10,000 papers. Robust regression models are used to investigate
the relationship between number of authors, number of affiliations, and number
of countries, respectively, and citation impact - controlling for the papers'
quality (measured by F1000Prime expert ratings). The results point out that the
effect of collaboration activities on impact is largely independent of the
papers' quality. The citation advantage is apparently not quality-related;
citation specific factors (e.g. self-citations) seem to be important here.
| [
{
"version": "v1",
"created": "Mon, 1 Feb 2016 08:07:17 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Bornmann",
"Lutz",
""
]
] | TITLE: Is collaboration among scientists related to the citation impact of
papers because their quality increases with collaboration? An analysis based
on data from F1000Prime and normalized citation scores
ABSTRACT: In recent years, the relationship between collaboration among scientists and the
citation impact of papers has been frequently investigated. Most of the
studies show that the two variables are closely related: an increasing
collaboration activity (measured in terms of number of authors, number of
affiliations, and number of countries) is associated with an increased citation
impact. However, it is not clear whether the increased citation impact is based
on the higher quality of papers which profit from more than one scientist
giving expert input or other (citation-specific) factors. Thus, the current
study addresses this question by using two comprehensive datasets with
publications (in the biomedical area) including quality assessments by experts
(F1000Prime member scores) and citation data for the publications. The study is
based on nearly 10,000 papers. Robust regression models are used to investigate
the relationship between number of authors, number of affiliations, and number
of countries, respectively, and citation impact - controlling for the papers'
quality (measured by F1000Prime expert ratings). The results point out that the
effect of collaboration activities on impact is largely independent of the
papers' quality. The citation advantage is apparently not quality-related;
citation specific factors (e.g. self-citations) seem to be important here.
| no_new_dataset | 0.952175 |
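The study design, robust regression of citation impact on collaboration variables while controlling for expert quality ratings, can be sketched on synthetic data (variable names and effect sizes are assumptions):

```python
import numpy as np
import statsmodels.api as sm

# Regress citation impact on collaboration while controlling for an
# expert quality score, using a robust M-estimator regression.
rng = np.random.default_rng(7)
n = 1000
quality = rng.integers(1, 4, n)                 # F1000Prime-like score 1..3
n_authors = rng.poisson(4, n) + 1
citations = 0.8 * quality + 0.3 * np.log(n_authors) + rng.standard_normal(n)

X = sm.add_constant(np.column_stack([np.log(n_authors), quality]))
fit = sm.RLM(citations, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)   # collaboration effect net of quality
```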
1602.00572 | Daniel Romero | Daniel M. Romero, Brian Uzzi, and Jon Kleinberg | Social Networks Under Stress | 12 pages, 8 figures, Proceedings of the 25th ACM International World
Wide Web Conference (WWW) 2016 | null | 10.1145/2872427.2883063 | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social network research has begun to take advantage of fine-grained
communications regarding coordination, decision-making, and knowledge sharing.
These studies, however, have not generally analyzed how external events are
associated with a social network's structure and communicative properties.
Here, we study how external events are associated with a network's change in
structure and communications. Analyzing a complete dataset of millions of
instant messages among the decision-makers in a large hedge fund and their
network of outside contacts, we investigate the link between price shocks,
network structure, and change in the affect and cognition of decision-makers
embedded in the network. When price shocks occur the communication network
tends not to display structural changes associated with adaptiveness. Rather,
the network "turtles up". It displays a propensity for higher clustering,
strong tie interaction, and an intensification of insider vs. outsider
communication. Further, we find changes in network structure predict shifts in
cognitive and affective processes, execution of new transactions, and local
optimality of transactions better than prices, revealing the important
predictive relationship between network structure and collective behavior
within a social network.
| [
{
"version": "v1",
"created": "Mon, 1 Feb 2016 15:58:29 GMT"
}
] | 2016-02-02T00:00:00 | [
[
"Romero",
"Daniel M.",
""
],
[
"Uzzi",
"Brian",
""
],
[
"Kleinberg",
"Jon",
""
]
] | TITLE: Social Networks Under Stress
ABSTRACT: Social network research has begun to take advantage of fine-grained
communications regarding coordination, decision-making, and knowledge sharing.
These studies, however, have not generally analyzed how external events are
associated with a social network's structure and communicative properties.
Here, we study how external events are associated with a network's change in
structure and communications. Analyzing a complete dataset of millions of
instant messages among the decision-makers in a large hedge fund and their
network of outside contacts, we investigate the link between price shocks,
network structure, and change in the affect and cognition of decision-makers
embedded in the network. When price shocks occur the communication network
tends not to display structural changes associated with adaptiveness. Rather,
the network "turtles up". It displays a propensity for higher clustering,
strong tie interaction, and an intensification of insider vs. outsider
communication. Further, we find changes in network structure predict shifts in
cognitive and affective processes, execution of new transactions, and local
optimality of transactions better than prices, revealing the important
predictive relationship between network structure and collective behavior
within a social network.
| no_new_dataset | 0.934991 |
1505.03566 | Moein Shakeri | Moein Shakeri, Hong Zhang | COROLA: A Sequential Solution to Moving Object Detection Using Low-rank
Approximation | 37 pages, 10 figures | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting moving objects from a video sequence and estimating the background
of each individual image are fundamental issues in many practical applications
such as visual surveillance, intelligent vehicle navigation, and traffic
monitoring. Recently, some methods have been proposed to detect moving objects
in a video via low-rank approximation and sparse outliers where the background
is modeled with the computed low-rank component of the video and the foreground
objects are detected as the sparse outliers in the low-rank approximation. All
of these existing methods work in a batch manner, preventing them from being
applied to real-time and long-duration tasks. In this paper, we present an
online sequential framework, namely contiguous outliers representation via
online low-rank approximation (COROLA), to detect moving objects and learn the
background model at the same time. We also show that our model can detect
moving objects with a moving camera. Our experimental evaluation uses simulated
data and real public datasets and demonstrates the superior performance of
COROLA in terms of both accuracy and execution time.
| [
{
"version": "v1",
"created": "Wed, 13 May 2015 22:13:20 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jan 2016 21:10:35 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Shakeri",
"Moein",
""
],
[
"Zhang",
"Hong",
""
]
] | TITLE: COROLA: A Sequential Solution to Moving Object Detection Using Low-rank
Approximation
ABSTRACT: Extracting moving objects from a video sequence and estimating the background
of each individual image are fundamental issues in many practical applications
such as visual surveillance, intelligent vehicle navigation, and traffic
monitoring. Recently, some methods have been proposed to detect moving objects
in a video via low-rank approximation and sparse outliers where the background
is modeled with the computed low-rank component of the video and the foreground
objects are detected as the sparse outliers in the low-rank approximation. All
of these existing methods work in a batch manner, preventing them from being
applied to real-time and long-duration tasks. In this paper, we present an
online sequential framework, namely contiguous outliers representation via
online low-rank approximation (COROLA), to detect moving objects and learn the
background model at the same time. We also show that our model can detect
moving objects with a moving camera. Our experimental evaluation uses simulated
data and real public datasets and demonstrates the superior performance of
COROLA in terms of both accuracy and execution time.
| no_new_dataset | 0.950041 |
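Not COROLA itself, but a hedged sketch of the sequential low-rank idea: maintain a background subspace updated frame by frame, and flag large reconstruction residuals as sparse foreground outliers. The rank, threshold, and IncrementalPCA stand-in are all assumptions:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(8)
h, w, rank = 24, 32, 3
bg = rng.random((h, w))
ipca = IncrementalPCA(n_components=rank)

frames = [np.clip(bg + 0.02 * rng.standard_normal((h, w)), 0, 1)
          for _ in range(40)]
frames[-1][5:10, 5:10] += 0.8                       # inject a "moving object"

batch = np.stack([f.ravel() for f in frames[:30]])
ipca.partial_fit(batch)                             # warm-start the subspace

buffer = []
for f in frames[30:]:
    v = f.ravel()[None, :]
    recon = ipca.inverse_transform(ipca.transform(v))
    mask = (np.abs(v - recon) > 0.3).reshape(h, w)  # sparse outliers
    buffer.append(v[0])
    if len(buffer) >= rank:          # partial_fit needs >= rank samples
        ipca.partial_fit(np.stack(buffer))
        buffer.clear()

print("foreground pixels in last frame:", int(mask.sum()))
```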
1506.01743 | Nuno Moniz | Nuno Moniz, Lu\'is Torgo and Magdalini Eirinaki | Socially Driven News Recommendation | 17 pages, 2 figures, submitted to the ACM Transactions on Intelligent
Systems and Technology (ACM TIST), Special Issue on Social Media Processing | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The participatory Web has enabled the ubiquitous and pervasive access to
information, accompanied by an increase of speed and reach in information
sharing. Data dissemination services such as news aggregators are expected to
provide up-to-date, real-time information to the end users. News aggregators
are in essence recommendation systems that filter and rank news stories in
order to select the few that will appear on the users front screen at any time.
One of the main challenges in such systems is to address the recency and
latency problems, that is, to identify as soon as possible how important a news
story is. In this work we propose an integrated framework that aims at
predicting the importance of news items upon their publication with a focus on
recent and highly popular news, employing resampling strategies, and at
translating the result into concrete news rankings. We perform an extensive
experimental evaluation using real-life datasets of the proposed framework as
both a stand-alone system and when applied to news recommendations from Google
News. Additionally, we propose and evaluate a combinatorial solution to the
augmentation of official media recommendations with social information. Results
show that the proposed approach complements and enhances the news rankings
generated by state-of-the-art systems.
| [
{
"version": "v1",
"created": "Thu, 4 Jun 2015 22:32:40 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jan 2016 12:45:08 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Moniz",
"Nuno",
""
],
[
"Torgo",
"Luís",
""
],
[
"Eirinaki",
"Magdalini",
""
]
] | TITLE: Socially Driven News Recommendation
ABSTRACT: The participatory Web has enabled the ubiquitous and pervasive access to
information, accompanied by an increase of speed and reach in information
sharing. Data dissemination services such as news aggregators are expected to
provide up-to-date, real-time information to the end users. News aggregators
are in essence recommendation systems that filter and rank news stories in
order to select the few that will appear on the users' front screen at any time.
One of the main challenges in such systems is to address the recency and
latency problems, that is, to identify as soon as possible how important a news
story is. In this work we propose an integrated framework that aims at
predicting the importance of news items upon their publication with a focus on
recent and highly popular news, employing resampling strategies, and at
translating the result into concrete news rankings. We perform an extensive
experimental evaluation using real-life datasets of the proposed framework as
both a stand-alone system and when applied to news recommendations from Google
News. Additionally, we propose and evaluate a combinatorial solution to the
augmentation of official media recommendations with social information. Results
show that the proposed approach complements and enhances the news rankings
generated by state-of-the-art systems.
| no_new_dataset | 0.945298 |
1509.02301 | Octavian-Eugen Ganea | Octavian-Eugen Ganea, Marina Ganea, Aurelien Lucchi, Carsten Eickhoff,
Thomas Hofmann | Probabilistic Bag-Of-Hyperlinks Model for Entity Linking | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many fundamental problems in natural language processing rely on determining
what entities appear in a given text. Commonly referred to as entity linking,
this step is a fundamental component of many NLP tasks such as text
understanding, automatic summarization, semantic search or machine translation.
Name ambiguity, word polysemy, context dependencies and a heavy-tailed
distribution of entities contribute to the complexity of this problem.
We here propose a probabilistic approach that makes use of an effective
graphical model to perform collective entity disambiguation. Input mentions
(i.e.,~linkable token spans) are disambiguated jointly across an entire
document by combining a document-level prior of entity co-occurrences with
local information captured from mentions and their surrounding context. The
model is based on simple sufficient statistics extracted from data, thus
relying on few parameters to be learned.
Our method does not require extensive feature engineering, nor an expensive
training procedure. We use loopy belief propagation to perform approximate
inference. The low complexity of our model makes this step sufficiently fast
for real-time usage. We demonstrate the accuracy of our approach on a wide
range of benchmark datasets, showing that it matches, and in many cases
outperforms, existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 8 Sep 2015 09:43:13 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Oct 2015 13:40:31 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jan 2016 19:22:44 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Ganea",
"Octavian-Eugen",
""
],
[
"Ganea",
"Marina",
""
],
[
"Lucchi",
"Aurelien",
""
],
[
"Eickhoff",
"Carsten",
""
],
[
"Hofmann",
"Thomas",
""
]
] | TITLE: Probabilistic Bag-Of-Hyperlinks Model for Entity Linking
ABSTRACT: Many fundamental problems in natural language processing rely on determining
what entities appear in a given text. Commonly referred to as entity linking,
this step is a fundamental component of many NLP tasks such as text
understanding, automatic summarization, semantic search or machine translation.
Name ambiguity, word polysemy, context dependencies and a heavy-tailed
distribution of entities contribute to the complexity of this problem.
We here propose a probabilistic approach that makes use of an effective
graphical model to perform collective entity disambiguation. Input mentions
(i.e.,~linkable token spans) are disambiguated jointly across an entire
document by combining a document-level prior of entity co-occurrences with
local information captured from mentions and their surrounding context. The
model is based on simple sufficient statistics extracted from data, thus
relying on few parameters to be learned.
Our method does not require extensive feature engineering, nor an expensive
training procedure. We use loopy belief propagation to perform approximate
inference. The low complexity of our model makes this step sufficiently fast
for real-time usage. We demonstrate the accuracy of our approach on a wide
range of benchmark datasets, showing that it matches, and in many cases
outperforms, existing state-of-the-art methods.
| no_new_dataset | 0.9462 |
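The collective disambiguation step relies on loopy belief propagation over a pairwise model coupling all mentions in a document. A self-contained toy version, with random potentials standing in for the paper's learned co-occurrence statistics:

```python
import numpy as np

def loopy_bp(unary, pairwise, iters=25):
    """Sum-product loopy BP on a fully connected pairwise model.
    unary[i]: (K,) candidate scores; pairwise[(i, j)]: (K, K) potentials."""
    n = len(unary)
    msgs = {(i, j): np.ones_like(unary[j])
            for i in range(n) for j in range(n) if i != j}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            incoming = unary[i].copy()
            for k in range(n):
                if k not in (i, j):
                    incoming *= msgs[(k, i)]
            m = pairwise[(i, j)].T @ incoming   # marginalize sender's state
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = []
    for j in range(n):
        b = unary[j].copy()
        for i in range(n):
            if i != j:
                b *= msgs[(i, j)]
        beliefs.append(b / b.sum())
    return beliefs

rng = np.random.default_rng(9)
n, K = 3, 4
unary = [rng.random(K) for _ in range(n)]       # mention-context scores
pairwise = {}
for i in range(n):                              # symmetric toy potentials
    for j in range(i + 1, n):
        psi = rng.random((K, K))
        pairwise[(i, j)], pairwise[(j, i)] = psi, psi.T
for b in loopy_bp(unary, pairwise):
    print(np.round(b, 3))
```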
1511.05672 | Kemal Bicakci | Yasin Uzun, Kemal Bicakci, Yusuf Uzunay | Could We Distinguish Child Users from Adults Using Keystroke Dynamics? | 18 pages | null | null | null | cs.HC cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A significant portion of contemporary computer users are children, who are
vulnerable to threats coming from the Internet. To protect children from such
threats, in this study, we investigate how successfully typing data can be used
to distinguish children from adults. For this purpose, we collect a dataset
comprising keystroke data of 100 users and show that distinguishing child
Internet users from adults is possible using Keystroke Dynamics with equal
error rates less than 10 percent. However the error rates increase
significantly when there are impostors in the system.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 07:06:55 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jan 2016 21:12:54 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Uzun",
"Yasin",
""
],
[
"Bicakci",
"Kemal",
""
],
[
"Uzunay",
"Yusuf",
""
]
] | TITLE: Could We Distinguish Child Users from Adults Using Keystroke Dynamics?
ABSTRACT: A significant portion of contemporary computer users are children, who are
vulnerable to threats coming from the Internet. To protect children from such
threats, in this study, we investigate how successfully typing data can be used
to distinguish children from adults. For this purpose, we collect a dataset
comprising keystroke data of 100 users and show that distinguishing child
Internet users from adults is possible using Keystroke Dynamics with equal
error rates less than 10 percent. However the error rates increase
significantly when there are impostors in the system.
| new_dataset | 0.958226 |
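The equal error rates the abstract reports can be computed from genuine and impostor score distributions; a sketch on synthetic scores (the Gaussian score model is an assumption):

```python
import numpy as np

# EER is the operating point where false accept and false reject rates
# cross. Scores here are synthetic stand-ins for keystroke features.
rng = np.random.default_rng(10)
genuine = rng.normal(1.0, 0.5, 500)     # classifier scores on target class
impostor = rng.normal(0.0, 0.5, 500)    # scores on the other class

thresholds = np.sort(np.concatenate([genuine, impostor]))
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
i = np.argmin(np.abs(far - frr))
print("EER ~", round((far[i] + frr[i]) / 2, 3))
```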
1601.07950 | Amit Kumar | Amit Kumar, Rajeev Ranjan, Vishal Patel, Rama Chellappa | Face Alignment by Local Deep Descriptor Regression | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an algorithm for extracting key-point descriptors using deep
convolutional neural networks (CNN). Unlike many existing deep CNNs, our model
computes local features around a given point in an image. We also present a
face alignment algorithm based on regression using these local descriptors. The
proposed method called Local Deep Descriptor Regression (LDDR) is able to
localize face landmarks of varying sizes, poses and occlusions with high
accuracy. Deep Descriptors presented in this paper are able to uniquely and
efficiently describe every pixel in the image and therefore can potentially
replace traditional descriptors such as SIFT and HOG. Extensive evaluations on
five publicly available unconstrained face alignment datasets show that our
deep descriptor network is able to capture strong local features around a given
landmark and performs significantly better than many competitive and
state-of-the-art face alignment algorithms.
| [
{
"version": "v1",
"created": "Fri, 29 Jan 2016 00:00:16 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Kumar",
"Amit",
""
],
[
"Ranjan",
"Rajeev",
""
],
[
"Patel",
"Vishal",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Face Alignment by Local Deep Descriptor Regression
ABSTRACT: We present an algorithm for extracting key-point descriptors using deep
convolutional neural networks (CNN). Unlike many existing deep CNNs, our model
computes local features around a given point in an image. We also present a
face alignment algorithm based on regression using these local descriptors. The
proposed method called Local Deep Descriptor Regression (LDDR) is able to
localize face landmarks of varying sizes, poses and occlusions with high
accuracy. Deep Descriptors presented in this paper are able to uniquely and
efficiently describe every pixel in the image and therefore can potentially
replace traditional descriptors such as SIFT and HOG. Extensive evaluations on
five publicly available unconstrained face alignment datasets show that our
deep descriptor network is able to capture strong local features around a given
landmark and performs significantly better than many competitive and
state-of-the-art face alignment algorithms.
| no_new_dataset | 0.951459 |
1601.07977 | Guo-Sen Xie | Guo-Sen Xie, Xu-Yao Zhang, Shuicheng Yan and Cheng-Lin Liu | Hybrid CNN and Dictionary-Based Models for Scene Recognition and Domain
Adaptation | Accepted by TCSVT on Sep.2015 | null | 10.1109/TCSVT.2015.2511543 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural network (CNN) has achieved state-of-the-art performance
in many different visual tasks. Learned from a large-scale training dataset,
CNN features are much more discriminative and accurate than the hand-crafted
features. Moreover, CNN features are also transferable among different domains.
On the other hand, traditional dictionary-based features (such as BoW and SPM)
contain much more local discriminative and structural information, which is
implicitly embedded in the images. To further improve the performance, in this
paper, we propose to combine CNN with dictionary-based models for scene
recognition and visual domain adaptation. Specifically, based on the well-tuned
CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations
are further constructed, namely mid-level local representation (MLR) and
convolutional Fisher vector representation (CFV). In MLR, an efficient
two-stage clustering method, i.e., weighted spatial and feature space spectral
clustering on the parts of a single image followed by clustering all
representative parts of all images, is used to generate a class-mixture or a
class-specific part dictionary. After that, the part dictionary is used to
operate with the multi-scale image inputs for generating mid-level
representation. In CFV, a multi-scale and scale-proportional GMM training
strategy is utilized to generate Fisher vectors based on the last convolutional
layer of CNN. By integrating the complementary information of MLR, CFV and the
CNN features of the fully connected layer, the state-of-the-art performance can
be achieved on scene recognition and domain adaptation problems. An interesting
finding is that our proposed hybrid representation (from the VGG net trained on
ImageNet) is also highly complementary with GoogLeNet and/or VGG-11 (trained on
Place205).
| [
{
"version": "v1",
"created": "Fri, 29 Jan 2016 05:32:52 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Xie",
"Guo-Sen",
""
],
[
"Zhang",
"Xu-Yao",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: Hybrid CNN and Dictionary-Based Models for Scene Recognition and Domain
Adaptation
ABSTRACT: Convolutional neural network (CNN) has achieved state-of-the-art performance
in many different visual tasks. Learned from a large-scale training dataset,
CNN features are much more discriminative and accurate than the hand-crafted
features. Moreover, CNN features are also transferable among different domains.
On the other hand, traditional dictionary-based features (such as BoW and SPM)
contain much more local discriminative and structural information, which is
implicitly embedded in the images. To further improve the performance, in this
paper, we propose to combine CNN with dictionary-based models for scene
recognition and visual domain adaptation. Specifically, based on the well-tuned
CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations
are further constructed, namely mid-level local representation (MLR) and
convolutional Fisher vector representation (CFV). In MLR, an efficient
two-stage clustering method, i.e., weighted spatial and feature space spectral
clustering on the parts of a single image followed by clustering all
representative parts of all images, is used to generate a class-mixture or a
class-specific part dictionary. After that, the part dictionary is used to
operate with the multi-scale image inputs for generating mid-level
representation. In CFV, a multi-scale and scale-proportional GMM training
strategy is utilized to generate Fisher vectors based on the last convolutional
layer of CNN. By integrating the complementary information of MLR, CFV and the
CNN features of the fully connected layer, the state-of-the-art performance can
be achieved on scene recognition and domain adaptation problems. An interesting
finding is that our proposed hybrid representation (from the VGG net trained on
ImageNet) is also highly complementary with GoogLeNet and/or VGG-11 (trained on
Place205).
| no_new_dataset | 0.951278 |
1601.08059 | Nikos Bikakis | Nikos Bikakis, Timos Sellis | Exploration and Visualization in the Web of Big Linked Data: A Survey of
the State of the Art | 6th International Workshop on Linked Web Data Management (LWDM 2016) | null | null | null | cs.HC cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data exploration and visualization systems are of great importance in the Big
Data era. Exploring and visualizing very large datasets has become a major
research challenge, of which scalability is a vital requirement. In this
survey, we describe the major prerequisites and challenges that should be
addressed by the modern exploration and visualization systems. Considering
these challenges, we present how state-of-the-art approaches from the Database
and Information Visualization communities attempt to handle them. Finally, we
survey the systems developed by Semantic Web community in the context of the
Web of Linked Data, and discuss to which extent these satisfy the contemporary
requirements.
| [
{
"version": "v1",
"created": "Fri, 29 Jan 2016 11:30:44 GMT"
}
] | 2016-02-01T00:00:00 | [
[
"Bikakis",
"Nikos",
""
],
[
"Sellis",
"Timos",
""
]
] | TITLE: Exploration and Visualization in the Web of Big Linked Data: A Survey of
the State of the Art
ABSTRACT: Data exploration and visualization systems are of great importance in the Big
Data era. Exploring and visualizing very large datasets has become a major
research challenge, of which scalability is a vital requirement. In this
survey, we describe the major prerequisites and challenges that should be
addressed by the modern exploration and visualization systems. Considering
these challenges, we present how state-of-the-art approaches from the Database
and Information Visualization communities attempt to handle them. Finally, we
survey the systems developed by the Semantic Web community in the context of the
Web of Linked Data, and discuss to what extent these satisfy the contemporary
requirements.
| no_new_dataset | 0.949949 |
1209.5598 | Fan Min | Fan Min | Granular association rules on two universes with four measures | 33 pages | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational association rules reveal patterns hidden in multiple tables.
Existing rules are usually evaluated through two measures, namely support and
confidence. However, these two measures may not be enough to describe the
strength of a rule. In this paper, we introduce granular association rules with
four measures to reveal connections between granules in two universes, and
propose three algorithms for rule mining. An example of such a rule might be
"40% men like at least 30% kinds of alcohol; 45% customers are men and 6%
products are alcohol." Here 45%, 6%, 40%, and 30% are the source coverage, the
target coverage, the source confidence, and the target confidence,
respectively. With these measures, our rules are semantically richer than
existing ones. Three subtypes of rules are obtained through considering special
requirements on the source/target confidence. Then we define a rule mining
problem, and design a sandwich algorithm with different rule checking
approaches for different subtypes. Experiments on a real-world dataset show
that the approaches dedicated to three subtypes are 2-3 orders of magnitude
faster than the one for the general case. A forward algorithm and a backward
algorithm for one particular subtype can speed up the mining process further.
This work opens a new research trend concerning relational association rule
mining, granular computing and rough sets.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 13:13:11 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Feb 2013 02:24:12 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Jan 2016 02:23:32 GMT"
}
] | 2016-01-29T00:00:00 | [
[
"Min",
"Fan",
""
]
] | TITLE: Granular association rules on two universes with four measures
ABSTRACT: Relational association rules reveal patterns hidden in multiple tables.
Existing rules are usually evaluated through two measures, namely support and
confidence. However, these two measures may not be enough to describe the
strength of a rule. In this paper, we introduce granular association rules with
four measures to reveal connections between granules in two universes, and
propose three algorithms for rule mining. An example of such a rule might be
"40% men like at least 30% kinds of alcohol; 45% customers are men and 6%
products are alcohol." Here 45%, 6%, 40%, and 30% are the source coverage, the
target coverage, the source confidence, and the target confidence,
respectively. With these measures, our rules are semantically richer than
existing ones. Three subtypes of rules are obtained through considering special
requirements on the source/target confidence. Then we define a rule mining
problem, and design a sandwich algorithm with different rule checking
approaches for different subtypes. Experiments on a real-world dataset show
that the approaches dedicated to three subtypes are 2-3 orders of magnitude
faster than the one for the general case. A forward algorithm and a backward
algorithm for one particular subtype can speed up the mining process further.
This work opens a new research trend concerning relational association rule
mining, granular computing and rough sets.
| no_new_dataset | 0.94545 |
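The four measures map directly onto a binary user-product relation. The sketch below reproduces the abstract's example quantities on synthetic data (the granule definitions and the 30% target confidence threshold follow the example; everything else is assumed):

```python
import numpy as np

# Rows are customers, columns are products; R[u, p] = 1 iff u likes p.
rng = np.random.default_rng(11)
n_users, n_products = 200, 50
R = rng.random((n_users, n_products)) < 0.3
is_man = rng.random(n_users) < 0.45          # source granule: men
is_alcohol = rng.random(n_products) < 0.06   # target granule: alcohol
is_alcohol[0] = True                         # ensure a non-empty granule

source_coverage = is_man.mean()                               # e.g. 45%
target_coverage = is_alcohol.mean()                           # e.g. 6%
# A man "fires" the rule if he likes at least 30% of alcohol products.
target_confidence = 0.30
liked_frac = R[np.ix_(is_man, is_alcohol)].mean(axis=1)
source_confidence = (liked_frac >= target_confidence).mean()  # e.g. 40%

print(f"source cov {source_coverage:.2f}, target cov {target_coverage:.2f}, "
      f"source conf {source_confidence:.2f} at target conf {target_confidence:.2f}")
```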
1405.3202 | HyeJin Youn | Hyejin Youn, Lu\'is M. A. Bettencourt, Jos\'e Lobo, Deborah Strumsky,
Horacio Samaniego, and Geoffrey B. West | The systematic structure and predictability of urban business diversity | Press embargo in place until publication | J. R. Soc. Interface 13: 20150937 (2016) | 10.1098/rsif.2015.0937 | null | physics.soc-ph physics.data-an q-fin.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Understanding cities is central to addressing major global challenges from
climate and health to economic resilience. Although increasingly perceived as
fundamental socio-economic units, the detailed fabric of urban economic
activities is only now accessible to comprehensive analyses with the
availability of large datasets. Here, we study abundances of business
categories across U.S. metropolitan statistical areas to investigate how
diversity of economic activities depends on city size. A universal structure
common to all cities is revealed, manifesting self-similarity in internal
economic structure as well as aggregated metrics (GDP, patents, crime). A
derivation is presented that explains universality and the observed empirical
distribution. The model incorporates a generalized preferential attachment
process with ceaseless introduction of new business types. Combined with
scaling analyses for individual categories, the theory quantitatively predicts
how individual business types systematically change rank with city size,
thereby providing a quantitative means for estimating their expected abundances
as a function of city size. These results shed light on processes of economic
differentiation with scale, suggesting a general structure for the growth of
national economies as integrated urban systems.
| [
{
"version": "v1",
"created": "Tue, 13 May 2014 15:54:56 GMT"
}
] | 2016-01-29T00:00:00 | [
[
"Youn",
"Hyejin",
""
],
[
"Bettencourt",
"Luís M. A.",
""
],
[
"Lobo",
"José",
""
],
[
"Strumsky",
"Deborah",
""
],
[
"Samaniego",
"Horacio",
""
],
[
"West",
"Geoffrey B.",
""
]
] | TITLE: The systematic structure and predictability of urban business diversity
ABSTRACT: Understanding cities is central to addressing major global challenges from
climate and health to economic resilience. Although increasingly perceived as
fundamental socio-economic units, the detailed fabric of urban economic
activities is only now accessible to comprehensive analyses with the
availability of large datasets. Here, we study abundances of business
categories across U.S. metropolitan statistical areas to investigate how
diversity of economic activities depends on city size. A universal structure
common to all cities is revealed, manifesting self-similarity in internal
economic structure as well as aggregated metrics (GDP, patents, crime). A
derivation is presented that explains universality and the observed empirical
distribution. The model incorporates a generalized preferential attachment
process with ceaseless introduction of new business types. Combined with
scaling analyses for individual categories, the theory quantitatively predicts
how individual business types systematically change rank with city size,
thereby providing a quantitative means for estimating their expected abundances
as a function of city size. These results shed light on processes of economic
differentiation with scale, suggesting a general structure for the growth of
national economies as integrated urban systems.
| no_new_dataset | 0.944485 |
1509.08971 | Priyadarshini Panda | Priyadarshini Panda, Abhronil Sengupta and Kaushik Roy | Conditional Deep Learning for Energy-Efficient and Enhanced Pattern
Recognition | 6 pages, 10 figures, 2 algorithms <Accepted for Design,
Automation and Test in Europe (DATE) conference, 2016> | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning neural networks have emerged as one of the most powerful
classification tools for vision related applications. However, the
computational and energy requirements associated with such deep nets can be
quite high, and hence their energy-efficient implementation is of great
interest. Although traditionally the entire network is utilized for the
recognition of all inputs, we observe that the classification difficulty varies
widely across inputs in real-world datasets; only a small fraction of inputs
require the full computational effort of a network, while a large majority can
be classified correctly with very low effort. In this paper, we propose
Conditional Deep Learning (CDL) where the convolutional layer features are used
to identify the variability in the difficulty of input instances and
conditionally activate the deeper layers of the network. We achieve this by
cascading a linear network of output neurons for each convolutional layer and
monitoring the output of the linear network to decide whether classification
can be terminated at the current stage or not. The proposed methodology thus
enables the network to dynamically adjust the computational effort depending
upon the difficulty of the input data while maintaining competitive
classification accuracy. We evaluate our approach on the MNIST dataset. Our
experiments demonstrate that our proposed CDL yields 1.91x reduction in average
number of operations per input, which translates to 1.84x improvement in
energy. In addition, our results show an improvement in classification accuracy
from 97.5% to 98.9% as compared to the original network.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2015 23:08:09 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Oct 2015 13:56:35 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Oct 2015 12:23:00 GMT"
},
{
"version": "v4",
"created": "Tue, 6 Oct 2015 01:45:50 GMT"
},
{
"version": "v5",
"created": "Tue, 24 Nov 2015 17:04:59 GMT"
},
{
"version": "v6",
"created": "Thu, 28 Jan 2016 18:34:42 GMT"
}
] | 2016-01-29T00:00:00 | [
[
"Panda",
"Priyadarshini",
""
],
[
"Sengupta",
"Abhronil",
""
],
[
"Roy",
"Kaushik",
""
]
] | TITLE: Conditional Deep Learning for Energy-Efficient and Enhanced Pattern
Recognition
ABSTRACT: Deep learning neural networks have emerged as one of the most powerful
classification tools for vision related applications. However, the
computational and energy requirements associated with such deep nets can be
quite high, and hence their energy-efficient implementation is of great
interest. Although traditionally the entire network is utilized for the
recognition of all inputs, we observe that the classification difficulty varies
widely across inputs in real-world datasets; only a small fraction of inputs
require the full computational effort of a network, while a large majority can
be classified correctly with very low effort. In this paper, we propose
Conditional Deep Learning (CDL) where the convolutional layer features are used
to identify the variability in the difficulty of input instances and
conditionally activate the deeper layers of the network. We achieve this by
cascading a linear network of output neurons for each convolutional layer and
monitoring the output of the linear network to decide whether classification
can be terminated at the current stage or not. The proposed methodology thus
enables the network to dynamically adjust the computational effort depending
upon the difficulty of the input data while maintaining competitive
classification accuracy. We evaluate our approach on the MNIST dataset. Our
experiments demonstrate that our proposed CDL yields 1.91x reduction in average
number of operations per input, which translates to 1.84x improvement in
energy. In addition, our results show an improvement in classification accuracy
from 97.5% to 98.9% as compared to the original network.
| no_new_dataset | 0.946051 |
1601.07721 | Peilin Zhong | David P. Woodruff, Peilin Zhong | Distributed Low Rank Approximation of Implicit Functions of a Matrix | null | null | null | null | cs.NA cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study distributed low rank approximation in which the matrix to be
approximated is only implicitly represented across the different servers. For
example, each of $s$ servers may have an $n \times d$ matrix $A^t$, and we may
be interested in computing a low rank approximation to $A = f(\sum_{t=1}^s
A^t)$, where $f$ is a function which is applied entrywise to the matrix
$\sum_{t=1}^s A^t$. We show for a wide class of functions $f$ it is possible to
efficiently compute a $d \times d$ rank-$k$ projection matrix $P$ for which
$\|A - AP\|_F^2 \leq \|A - [A]_k\|_F^2 + \varepsilon \|A\|_F^2$, where $AP$
denotes the projection of $A$ onto the row span of $P$, and $[A]_k$ denotes the
best rank-$k$ approximation to $A$ given by the singular value decomposition.
The communication cost of our protocols is $d \cdot (sk/\varepsilon)^{O(1)}$,
and they succeed with high probability. Our framework allows us to efficiently
compute a low rank approximation to an entry-wise softmax, to a Gaussian kernel
expansion, and to $M$-Estimators applied entrywise (i.e., forms of robust low
rank approximation). We also show that our additive error approximation is best
possible, in the sense that any protocol achieving relative error for these
problems requires significantly more communication. Finally, we experimentally
validate our algorithms on real datasets.
| [
{
"version": "v1",
"created": "Thu, 28 Jan 2016 10:58:27 GMT"
}
] | 2016-01-29T00:00:00 | [
[
"Woodruff",
"David P.",
""
],
[
"Zhong",
"Peilin",
""
]
] | TITLE: Distributed Low Rank Approximation of Implicit Functions of a Matrix
ABSTRACT: We study distributed low rank approximation in which the matrix to be
approximated is only implicitly represented across the different servers. For
example, each of $s$ servers may have an $n \times d$ matrix $A^t$, and we may
be interested in computing a low rank approximation to $A = f(\sum_{t=1}^s
A^t)$, where $f$ is a function which is applied entrywise to the matrix
$\sum_{t=1}^s A^t$. We show for a wide class of functions $f$ it is possible to
efficiently compute a $d \times d$ rank-$k$ projection matrix $P$ for which
$\|A - AP\|_F^2 \leq \|A - [A]_k\|_F^2 + \varepsilon \|A\|_F^2$, where $AP$
denotes the projection of $A$ onto the row span of $P$, and $[A]_k$ denotes the
best rank-$k$ approximation to $A$ given by the singular value decomposition.
The communication cost of our protocols is $d \cdot (sk/\varepsilon)^{O(1)}$,
and they succeed with high probability. Our framework allows us to efficiently
compute a low rank approximation to an entry-wise softmax, to a Gaussian kernel
expansion, and to $M$-Estimators applied entrywise (i.e., forms of robust low
rank approximation). We also show that our additive error approximation is best
possible, in the sense that any protocol achieving relative error for these
problems requires significantly more communication. Finally, we experimentally
validate our algorithms on real datasets.
| no_new_dataset | 0.934962 |
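For intuition, a centralized sketch of the guarantee's two sides; the ideal rank-k projection from the SVD attains the baseline error exactly, while the paper's actual contribution, the communication-efficient distributed construction of P, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
k = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
P = Vt[:k].T @ Vt[:k]            # d x d rank-k projection onto the top right singular space

best_err = np.sum(s[k:] ** 2)                        # ||A - [A]_k||_F^2 from the SVD
proj_err = np.linalg.norm(A - A @ P, "fro") ** 2     # ||A - AP||_F^2 for this ideal P
print(proj_err, best_err)        # equal up to rounding, so the eps-term slack is zero here
```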
1601.07765 | Christian Santoni | Christian Santoni, Claudio Calabrese, Francesco Di Renzo, Fabio
Pellacini | SculptStat: Statistical Analysis of Digital Sculpting Workflows | 9 pages, 8 figures | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Targeted user studies are often employed to measure how well artists can
perform specific tasks. But these studies cannot properly describe editing
workflows as wholes, since they guide the artists both by choosing the tasks
and by using simplified interfaces. In this paper, we investigate digital
sculpting workflows used to produce detailed models. In our experiment design,
artists can choose freely what and how to model. We recover whole-workflow
trends with sophisticated statistical analyses and validate these trends with
goodness-of-fit measures. We record brush strokes and mesh snapshots by
instrumenting a sculpting program and analyze the distribution of these
properties and their spatial and temporal characteristics. We hired expert
artists who can produce relatively sophisticated models in a short time, since
their workflows are representative of best practices. We analyze 13 meshes
corresponding to roughly 25 thousand strokes in total. We found that artists
work mainly with short strokes, with average stroke length dependent on model
features rather than on the artist. Temporally, artists do not work
coarse-to-fine but rather in bursts. Spatially, artists focus on some selected
regions by dedicating different amounts of edits and by applying different
techniques. Spatio-temporally, artists return to work on the same area multiple
times without any apparent periodicity. We release the entire dataset and all
code used for the analyses as a reference for the community.
| [
{
"version": "v1",
"created": "Thu, 28 Jan 2016 14:09:12 GMT"
}
] | 2016-01-29T00:00:00 | [
[
"Santoni",
"Christian",
""
],
[
"Calabrese",
"Claudio",
""
],
[
"Di Renzo",
"Francesco",
""
],
[
"Pellacini",
"Fabio",
""
]
] | TITLE: SculptStat: Statistical Analysis of Digital Sculpting Workflows
ABSTRACT: Targeted user studies are often employed to measure how well artists can
perform specific tasks. But these studies cannot properly describe editing
workflows as wholes, since they guide the artists both by choosing the tasks
and by using simplified interfaces. In this paper, we investigate digital
sculpting workflows used to produce detailed models. In our experiment design,
artists can choose freely what and how to model. We recover whole-workflow
trends with sophisticated statistical analyses and validate these trends with
goodness-of-fit measures. We record brush strokes and mesh snapshots by
instrumenting a sculpting program and analyze the distribution of these
properties and their spatial and temporal characteristics. We hired expert
artists who can produce relatively sophisticated models in a short time, since
their workflows are representative of best practices. We analyze 13 meshes
corresponding to roughly 25 thousand strokes in total. We found that artists
work mainly with short strokes, with average stroke length dependent on model
features rather than on the artist. Temporally, artists do not work
coarse-to-fine but rather in bursts. Spatially, artists focus on some selected
regions by dedicating different amounts of edits and by applying different
techniques. Spatio-temporally, artists return to work on the same area multiple
times without any apparent periodicity. We release the entire dataset and all
code used for the analyses as a reference for the community.
| new_dataset | 0.96128 |
1601.07884 | Xinchao Li | Xinchao Li, Martha A. Larson, Alan Hanjalic | Geo-distinctive Visual Element Matching for Location Estimation of
Images | null | null | null | null | cs.MM cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose an image representation and matching approach that substantially
improves visual-based location estimation for images. The main novelty of the
approach, called distinctive visual element matching (DVEM), is its use of
representations that are specific to the query image whose location is being
predicted. These representations are based on visual element clouds, which
robustly capture the connection between the query and visual evidence from
candidate locations. We then maximize the influence of visual elements that are
geo-distinctive because they do not occur in images taken at many other
locations. We carry out experiments and analysis for both geo-constrained and
geo-unconstrained location estimation cases using two large-scale,
publicly-available datasets: the San Francisco Landmark dataset with $1.06$
million street-view images and the MediaEval '15 Placing Task dataset with
$5.6$ million geo-tagged images from Flickr. We present examples that
illustrate the highly-transparent mechanics of the approach, which are based on
common sense observations about the visual patterns in image collections. Our
results show that the proposed method delivers a considerable performance
improvement compared to the state of the art.
| [
{
"version": "v1",
"created": "Thu, 28 Jan 2016 20:13:01 GMT"
}
] | 2016-01-29T00:00:00 | [
[
"Li",
"Xinchao",
""
],
[
"Larson",
"Martha A.",
""
],
[
"Hanjalic",
"Alan",
""
]
] | TITLE: Geo-distinctive Visual Element Matching for Location Estimation of
Images
ABSTRACT: We propose an image representation and matching approach that substantially
improves visual-based location estimation for images. The main novelty of the
approach, called distinctive visual element matching (DVEM), is its use of
representations that are specific to the query image whose location is being
predicted. These representations are based on visual element clouds, which
robustly capture the connection between the query and visual evidence from
candidate locations. We then maximize the influence of visual elements that are
geo-distinctive because they do not occur in images taken at many other
locations. We carry out experiments and analysis for both geo-constrained and
geo-unconstrained location estimation cases using two large-scale,
publicly-available datasets: the San Francisco Landmark dataset with $1.06$
million street-view images and the MediaEval '15 Placing Task dataset with
$5.6$ million geo-tagged images from Flickr. We present examples that
illustrate the highly-transparent mechanics of the approach, which are based on
common sense observations about the visual patterns in image collections. Our
results show that the proposed method delivers a considerable performance
improvement compared to the state of the art.
| no_new_dataset | 0.944944 |
1211.6581 | Eleftherios Spyromitros-Xioufis | Eleftherios Spyromitros-Xioufis, Grigorios Tsoumakas, William Groves,
Ioannis Vlahavas | Multi-Target Regression via Input Space Expansion: Treating Targets as
Inputs | Accepted for publication in Machine Learning journal. This
replacement contains major improvements compared to the previous version,
including a deeper theoretical and experimental analysis and an extended
discussion of related work | null | 10.1007/s10994-016-5546-z | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many practical applications of supervised learning the task involves the
prediction of multiple target variables from a common set of input variables.
When the prediction targets are binary the task is called multi-label
classification, while when the targets are continuous the task is called
multi-target regression. In both tasks, target variables often exhibit
statistical dependencies and exploiting them in order to improve predictive
accuracy is a core challenge. A family of multi-label classification methods
addresses this challenge by building a separate model for each target on an
expanded input space where other targets are treated as additional input
variables. Despite the success of these methods in the multi-label
classification domain, their applicability and effectiveness in multi-target
regression has not been studied until now. In this paper, we introduce two new
methods for multi-target regression, called Stacked Single-Target and Ensemble
of Regressor Chains, by adapting two popular multi-label classification methods
of this family. Furthermore, we highlight an inherent problem of these methods
- a discrepancy of the values of the additional input variables between
training and prediction - and develop extensions that use out-of-sample
estimates of the target variables during training in order to tackle this
problem. The results of an extensive experimental evaluation carried out on a
large and diverse collection of datasets show that, when the discrepancy is
appropriately mitigated, the proposed methods attain consistent improvements
over the independent regressions baseline. Moreover, two versions of Ensemble
of Regression Chains perform significantly better than four state-of-the-art
methods including regularization-based multi-task learning methods and a
multi-objective random forest approach.
| [
{
"version": "v1",
"created": "Wed, 28 Nov 2012 11:42:36 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2014 11:14:16 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Apr 2014 09:44:27 GMT"
},
{
"version": "v4",
"created": "Tue, 17 Jun 2014 12:09:24 GMT"
},
{
"version": "v5",
"created": "Wed, 27 Jan 2016 20:24:53 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Spyromitros-Xioufis",
"Eleftherios",
""
],
[
"Tsoumakas",
"Grigorios",
""
],
[
"Groves",
"William",
""
],
[
"Vlahavas",
"Ioannis",
""
]
] | TITLE: Multi-Target Regression via Input Space Expansion: Treating Targets as
Inputs
ABSTRACT: In many practical applications of supervised learning the task involves the
prediction of multiple target variables from a common set of input variables.
When the prediction targets are binary the task is called multi-label
classification, while when the targets are continuous the task is called
multi-target regression. In both tasks, target variables often exhibit
statistical dependencies and exploiting them in order to improve predictive
accuracy is a core challenge. A family of multi-label classification methods
addresses this challenge by building a separate model for each target on an
expanded input space where other targets are treated as additional input
variables. Despite the success of these methods in the multi-label
classification domain, their applicability and effectiveness in multi-target
regression has not been studied until now. In this paper, we introduce two new
methods for multi-target regression, called Stacked Single-Target and Ensemble
of Regressor Chains, by adapting two popular multi-label classification methods
of this family. Furthermore, we highlight an inherent problem of these methods
- a discrepancy of the values of the additional input variables between
training and prediction - and develop extensions that use out-of-sample
estimates of the target variables during training in order to tackle this
problem. The results of an extensive experimental evaluation carried out on a
large and diverse collection of datasets show that, when the discrepancy is
appropriately mitigated, the proposed methods attain consistent improvements
over the independent regressions baseline. Moreover, two versions of Ensemble
of Regression Chains perform significantly better than four state-of-the-art
methods including regularization-based multi-task learning methods and a
multi-objective random forest approach.
| no_new_dataset | 0.946001 |
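A minimal sketch of the Stacked Single-Target construction on synthetic data (names and data are illustrative). Note that it deliberately feeds in-sample first-stage predictions to the second stage, which is exactly the train/prediction discrepancy the paper mitigates with out-of-sample estimates:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
Y = X @ rng.standard_normal((4, 3)) + 0.1 * rng.standard_normal((200, 3))  # 3 targets

# Stage 1: one independent regressor per target.
stage1 = [LinearRegression().fit(X, Y[:, j]) for j in range(Y.shape[1])]
Y_hat = np.column_stack([m.predict(X) for m in stage1])

# Stage 2: refit each target on the expanded input space [X, predictions of all targets].
X_exp = np.hstack([X, Y_hat])
stage2 = [LinearRegression().fit(X_exp, Y[:, j]) for j in range(Y.shape[1])]

def predict(X_new):
    # Prediction chains the two stages: first-stage estimates become extra inputs.
    Y1 = np.column_stack([m.predict(X_new) for m in stage1])
    X1 = np.hstack([X_new, Y1])
    return np.column_stack([m.predict(X1) for m in stage2])

print(predict(X[:2]))
```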
1409.1102 | Qiwei Han | Qiwei Han, Pedro Ferreira | The Role of Peer Influence in Churn in Wireless Networks | Accepted in Seventh ASE International Conference on Social Computing
(Socialcom 2014), Best Paper Award Winner | null | 10.1145/2639968.2640057 | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subscriber churn remains a top challenge for wireless carriers. These
carriers need to understand the determinants of churn to confidently apply
effective retention strategies to ensure their profitability and growth. In
this paper, we look at the effect of peer influence on churn and we try to
disentangle it from other effects that drive simultaneous churn across friends
but that do not relate to peer influence. We analyze a random sample of roughly
10 thousand subscribers from a large dataset from a major wireless carrier over a
period of 10 months. We apply survival models and generalized propensity score
to identify the role of peer influence. We show that the propensity to churn
increases when friends do and that it increases more when many strong friends
churn. Therefore, our results suggest that churn managers should consider
strategies aimed at preventing group churn. We also show that survival models
fail to disentangle homophily from peer influence, over-estimating the effect of
peer influence.
| [
{
"version": "v1",
"created": "Wed, 3 Sep 2014 14:24:30 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Han",
"Qiwei",
""
],
[
"Ferreira",
"Pedro",
""
]
] | TITLE: The Role of Peer Influence in Churn in Wireless Networks
ABSTRACT: Subscriber churn remains a top challenge for wireless carriers. These
carriers need to understand the determinants of churn to confidently apply
effective retention strategies to ensure their profitability and growth. In
this paper, we look at the effect of peer influence on churn and we try to
disentangle it from other effects that drive simultaneous churn across friends
but that do not relate to peer influence. We analyze a random sample of roughly
10 thousand subscribers from a large dataset from a major wireless carrier over a
period of 10 months. We apply survival models and generalized propensity score
to identify the role of peer influence. We show that the propensity to churn
increases when friends do and that it increases more when many strong friends
churn. Therefore, our results suggest that churn managers should consider
strategies aimed at preventing group churn. We also show that survival models
fail to disentangle homophily from peer influence, over-estimating the effect of
peer influence.
| no_new_dataset | 0.944893 |
1512.06757 | Jiaji Huang | Jiaji Huang, Qiang Qiu, Robert Calderbank, Guillermo Sapiro | GraphConnect: A Regularization Framework for Neural Networks | Theorems need more validation | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have proved very successful in domains where large
training sets are available, but when the number of training samples is small,
their performance suffers from overfitting. Prior methods of reducing
overfitting such as weight decay, Dropout and DropConnect are data-independent.
This paper proposes a new method, GraphConnect, that is data-dependent, and is
motivated by the observation that data of interest lie close to a manifold. The
new method encourages the relationships between the learned decisions to
resemble a graph representing the manifold structure. Essentially GraphConnect
is designed to learn attributes that are present in data samples in contrast to
weight decay, Dropout and DropConnect which are simply designed to make it more
difficult to fit to random error or noise. Empirical Rademacher complexity is
used to connect the generalization error of the neural network to spectral
properties of the graph learned from the input data. This framework is used to
show that GraphConnect is superior to weight decay. Experimental results on
several benchmark datasets validate the theoretical analysis, and show that
when the number of training samples is small, GraphConnect is able to
significantly improve performance over weight decay.
| [
{
"version": "v1",
"created": "Mon, 21 Dec 2015 18:42:45 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jan 2016 03:21:15 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Huang",
"Jiaji",
""
],
[
"Qiu",
"Qiang",
""
],
[
"Calderbank",
"Robert",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: GraphConnect: A Regularization Framework for Neural Networks
ABSTRACT: Deep neural networks have proved very successful in domains where large
training sets are available, but when the number of training samples is small,
their performance suffers from overfitting. Prior methods of reducing
overfitting such as weight decay, Dropout and DropConnect are data-independent.
This paper proposes a new method, GraphConnect, that is data-dependent, and is
motivated by the observation that data of interest lie close to a manifold. The
new method encourages the relationships between the learned decisions to
resemble a graph representing the manifold structure. Essentially GraphConnect
is designed to learn attributes that are present in data samples in contrast to
weight decay, Dropout and DropConnect which are simply designed to make it more
difficult to fit to random error or noise. Empirical Rademacher complexity is
used to connect the generalization error of the neural network to spectral
properties of the graph learned from the input data. This framework is used to
show that GraphConnect is superior to weight decay. Experimental results on
several benchmark datasets validate the theoretical analysis, and show that
when the number of training samples is small, GraphConnect is able to
significantly improve performance over weight decay.
| no_new_dataset | 0.951639 |
1601.07172 | Donald Jones | Donald Jones | Measuring the Weak Charge of the Proton via Elastic Electron-Proton
Scattering | null | null | null | null | nucl-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Qweak experiment which ran at Jefferson Lab in Newport News, VA, measured
the weak charge of the proton $Q_W^p$ via elastic electron-proton scattering.
Longitudinally polarized electrons were scattered from an unpolarized liquid
hydrogen target. The Standard Model predicts a small parity-violating asymmetry
of scattering rates between electron right and left helicity states due to the
weak interaction. An initial result using 4% of the data was published in
October 2013 with a measured parity-violating asymmetry of $-279\pm
35(\text{stat})\pm 31$ (syst) parts per billion (ppb). This asymmetry, along
with other data from parity-violating electron scattering experiments, provided
the world's first determination of the weak charge of the proton. The weak
charge of the proton was found to be $Q_W^p=0.064\pm0.012$, in agreement with
the Standard Model prediction of $Q_W^p(SM)=0.0708\pm0.0003$.
The results of the full dataset are expected to decrease the statistical
error from the initial publication by a factor of 4-5. The level of precision
of the final result makes it a useful test of Standard Model predictions and
particularly of the "running" of $\sin^2\theta_W$ from the Z-mass to low
energies. This thesis focuses on reduction of systematic error in two key
systematics for the Qweak experiment. First, techniques for measuring and
removing false asymmetries arising from helicity-correlated electron beam
properties at the few ppb level are discussed. Second, as a parity-violating
experiment, Qweak relies on accurate knowledge of electron beam polarimetry. To
help address the requirement of accurate polarimetry, a Compton polarimeter
was built specifically for Qweak. Compton polarimetry requires accurate knowledge
of laser polarization inside a Fabry-Perot cavity enclosed in the electron beam
pipe. A new technique was developed for Qweak that nearly eliminates this
systematic error.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 20:01:27 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Jones",
"Donald",
""
]
] | TITLE: Measuring the Weak Charge of the Proton via Elastic Electron-Proton
Scattering
ABSTRACT: The Qweak experiment which ran at Jefferson Lab in Newport News, VA, measured
the weak charge of the proton $Q_W^p$ via elastic electron-proton scattering.
Longitudinally polarized electrons were scattered from an unpolarized liquid
hydrogen target. The Standard Model predicts a small parity-violating asymmetry
of scattering rates between electron right and left helicity states due to the
weak interaction. An initial result using 4% of the data was published in
October 2013 with a measured parity-violating asymmetry of $-279\pm
35(\text{stat})\pm 31$ (syst) parts per billion (ppb). This asymmetry, along
with other data from parity-violating electron scattering experiments, provided
the world's first determination of the weak charge of the proton. The weak
charge of the proton was found to be $Q_W^p=0.064\pm0.012$, in agreement with
the Standard Model prediction of $Q_W^p(SM)=0.0708\pm0.0003$.
The results of the full dataset are expected to decrease the statistical
error from the initial publication by a factor of 4-5. The level of precision
of the final result makes it a useful test of Standard Model predictions and
particularly of the "running" of $\sin^2\theta_W$ from the Z-mass to low
energies. This thesis focuses on reduction of systematic error in two key
systematics for the Qweak experiment. First, techniques for measuring and
removing false asymmetries arising from helicity-correlated electron beam
properties at the few ppb level are discussed. Second, as a parity-violating
experiment, Qweak relies on accurate knowledge of electron beam polarimetry. To
help address the requirement of accurate polarimetry, a Compton polarimeter
was built specifically for Qweak. Compton polarimetry requires accurate knowledge
of laser polarization inside a Fabry-Perot cavity enclosed in the electron beam
pipe. A new technique was developed for Qweak that nearly eliminates this
systematic error.
| no_new_dataset | 0.947381 |
1601.07241 | Ayman Taha Ayman Taha | Ayman Taha | Knowledge Discovery In GIS Data | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent geographic information system (IGIS) is one of the promising
topics in the GIS field. It aims at making GIS tools more sensitive to the large
volumes of data stored inside GIS systems by integrating GIS with other
computer sciences such as Expert System (ES), Data Warehouse (DW), Decision
Support System (DSS), or Knowledge Discovery in Databases (KDD). One of the main
branches of IGIS is the Geographic Knowledge Discovery (GKD) which tries to
discover the implicit knowledge in the spatial databases. The main difference
between traditional KDD techniques and GKD techniques is hidden in the nature
of spatial data sets. In other words, in the traditional data set the values of
each object are assumed to be independent of other objects in the same data
set, whereas the spatial dataset tends to be highly correlated according to the
first law of geography. The spatial outlier detection is one of the most
popular spatial data mining techniques which is used to detect spatial objects
whose non-spatial attribute values are extremely different from those of their
neighboring objects. Analyzing the behavior of these objects may produce
interesting knowledge, which plays an effective role in the decision-making
process. In this thesis, a new definition for the spatial neighborhood
relationship is proposed, considering the weights of the most effective
parameters of neighboring objects in a given spatial dataset. The spatial
parameters taken into consideration are: distance, cost, and number of
direct connections between neighboring objects. A new model to detect spatial
outliers is also presented based on the new definition of the spatial
neighborhood relationship. This model is adapted to be applied to polygonal
objects. The proposed model is applied to an existing project for supporting
literacy in the Fayoum governorate in the Arab Republic of Egypt (ARE).
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 01:28:50 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Taha",
"Ayman",
""
]
] | TITLE: Knowledge Discovery In GIS Data
ABSTRACT: Intelligent geographic information system (IGIS) is one of the promising
topics in the GIS field. It aims at making GIS tools more sensitive to the large
volumes of data stored inside GIS systems by integrating GIS with other
computer sciences such as Expert System (ES), Data Warehouse (DW), Decision
Support System (DSS), or Knowledge Discovery in Databases (KDD). One of the main
branches of IGIS is the Geographic Knowledge Discovery (GKD) which tries to
discover the implicit knowledge in the spatial databases. The main difference
between traditional KDD techniques and GKD techniques is hidden in the nature
of spatial data sets. In other words, in the traditional data set the values of
each object are assumed to be independent of other objects in the same data
set, whereas the spatial dataset tends to be highly correlated according to the
first law of geography. The spatial outlier detection is one of the most
popular spatial data mining techniques which is used to detect spatial objects
whose non-spatial attribute values are extremely different from those of their
neighboring objects. Analyzing the behavior of these objects may produce
interesting knowledge, which plays an effective role in the decision-making
process. In this thesis, a new definition for the spatial neighborhood
relationship is proposed, considering the weights of the most effective
parameters of neighboring objects in a given spatial dataset. The spatial
parameters taken into consideration are: distance, cost, and number of
direct connections between neighboring objects. A new model to detect spatial
outliers is also presented based on the new definition of the spatial
neighborhood relationship. This model is adapted to be applied to polygonal
objects. The proposed model is applied to an existing project for supporting
literacy in the Fayoum governorate in the Arab Republic of Egypt (ARE).
| no_new_dataset | 0.947137 |
1601.07258 | Kuldeep S Kulkarni Mr. | Kuldeep Kulkarni and Pavan Turaga | Fast Integral Image Estimation at 1% measurement rate | Submitted to TPAMI | null | null | null | cs.CV math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a framework called ReFInE to directly obtain integral image
estimates from a very small number of spatially multiplexed measurements of the
scene without iterative reconstruction of any auxiliary image, and demonstrate
their practical utility in visual object tracking. Specifically, we design
measurement matrices which are tailored to facilitate extremely fast estimation
of the integral image, by using a single-shot linear operation on the measured
vector. Leveraging a prior model for the images, we formulate a nuclear norm
minimization problem with second order conic constraints to jointly obtain the
measurement matrix and the linear operator. Through qualitative and
quantitative experiments, we show that high quality integral image estimates
can be obtained using our framework at very low measurement rates. Further, on
a standard dataset of 50 videos, we present object tracking results which are
comparable to the state-of-the-art methods, even at an extremely low
measurement rate of 1%.
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 04:32:20 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Kulkarni",
"Kuldeep",
""
],
[
"Turaga",
"Pavan",
""
]
] | TITLE: Fast Integral Image Estimation at 1% measurement rate
ABSTRACT: We propose a framework called ReFInE to directly obtain integral image
estimates from a very small number of spatially multiplexed measurements of the
scene without iterative reconstruction of any auxiliary image, and demonstrate
their practical utility in visual object tracking. Specifically, we design
measurement matrices which are tailored to facilitate extremely fast estimation
of the integral image, by using a single-shot linear operation on the measured
vector. Leveraging a prior model for the images, we formulate a nuclear norm
minimization problem with second order conic constraints to jointly obtain the
measurement matrix and the linear operator. Through qualitative and
quantitative experiments, we show that high quality integral image estimates
can be obtained using our framework at very low measurement rates. Further, on
a standard dataset of 50 videos, we present object tracking results which are
comparable to the state-of-the-art methods, even at an extremely low
measurement rate of 1%.
| no_new_dataset | 0.939192 |
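As background, the quantity being estimated is the standard integral image: a double cumulative sum under which any rectangular sum costs O(1). A small self-contained sketch (the paper's point is recovering this from ~1% compressive measurements, not computing it directly):

```python
import numpy as np

img = np.arange(12.0).reshape(3, 4)
ii = img.cumsum(axis=0).cumsum(axis=1)   # ii[y, x] = sum of img[:y+1, :x+1]

# O(1) rectangular sum over rows r0..r1 and columns c0..c1 via four lookups.
r0, r1, c0, c1 = 1, 2, 1, 3
rect = (ii[r1, c1]
        - (ii[r0 - 1, c1] if r0 else 0.0)
        - (ii[r1, c0 - 1] if c0 else 0.0)
        + (ii[r0 - 1, c0 - 1] if r0 and c0 else 0.0))
assert rect == img[r0:r1 + 1, c0:c1 + 1].sum()
```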
1601.07532 | Damien Teney | Damien Teney, Martial Hebert | Learning to Extract Motion from Videos in Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows how to extract dense optical flow from videos with a
convolutional neural network (CNN). The proposed model constitutes a potential
building block for deeper architectures to allow using motion without resorting
to an external algorithm, \eg for recognition in videos. We derive our network
architecture from signal processing principles to provide desired invariances
to image contrast, phase and texture. We constrain weights within the network
to enforce strict rotation invariance and substantially reduce the number of
parameters to learn. We demonstrate end-to-end training on only 8 sequences of
the Middlebury dataset, orders of magnitude less than competing CNN-based
motion estimation methods, and obtain comparable performance to classical
methods on the Middlebury benchmark. Importantly, our method outputs a
distributed representation of motion that allows representing multiple,
transparent motions, and dynamic textures. Our contributions on network design
and rotation invariance offer insights nonspecific to motion estimation.
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 20:19:14 GMT"
}
] | 2016-01-28T00:00:00 | [
[
"Teney",
"Damien",
""
],
[
"Hebert",
"Martial",
""
]
] | TITLE: Learning to Extract Motion from Videos in Convolutional Neural Networks
ABSTRACT: This paper shows how to extract dense optical flow from videos with a
convolutional neural network (CNN). The proposed model constitutes a potential
building block for deeper architectures to allow using motion without resorting
to an external algorithm, \eg for recognition in videos. We derive our network
architecture from signal processing principles to provide desired invariances
to image contrast, phase and texture. We constrain weights within the network
to enforce strict rotation invariance and substantially reduce the number of
parameters to learn. We demonstrate end-to-end training on only 8 sequences of
the Middlebury dataset, orders of magnitude less than competing CNN-based
motion estimation methods, and obtain comparable performance to classical
methods on the Middlebury benchmark. Importantly, our method outputs a
distributed representation of motion that allows representing multiple,
transparent motions, and dynamic textures. Our contributions on network design
and rotation invariance offer insights nonspecific to motion estimation.
| no_new_dataset | 0.954478 |
1103.4295 | Alberto Accomazzi | Alberto Accomazzi | Linking Literature and Data: Status Report and Future Efforts | 9 pages, 2 figures, to appear in: Future Professional Communication
in Astronomy II (FPCA-II) | null | 10.1007/978-1-4419-8369-5_15 | null | astro-ph.IM cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the current era of data-intensive science, it is increasingly important
for researchers to be able to have access to published results, the supporting
data, and the processes used to produce them. Six years ago, recognizing this
need, the American Astronomical Society and the Astrophysics Data Centers
Executive Committee (ADEC) sponsored an effort to facilitate the annotation and
linking of datasets during the publishing process, with limited success. I will
review the status of this effort and describe a new, more general one now being
considered in the context of the Virtual Astronomical Observatory.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2011 15:52:51 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Accomazzi",
"Alberto",
""
]
] | TITLE: Linking Literature and Data: Status Report and Future Efforts
ABSTRACT: In the current era of data-intensive science, it is increasingly important
for researchers to be able to have access to published results, the supporting
data, and the processes used to produce them. Six years ago, recognizing this
need, the American Astronomical Society and the Astrophysics Data Centers
Executive Committee (ADEC) sponsored an effort to facilitate the annotation and
linking of datasets during the publishing process, with limited success. I will
review the status of this effort and describe a new, more general one now being
considered in the context of the Virtual Astronomical Observatory.
| no_new_dataset | 0.962214 |
1506.02059 | Haonan Yu | Haonan Yu and Jeffrey Mark Siskind | Sentence Directed Video Object Codetection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of video object codetection by leveraging the weak
semantic constraint implied by sentences that describe the video content.
Unlike most existing work that focuses on codetecting large objects which are
usually salient both in size and appearance, we can codetect objects that are
small or medium sized. Our method assumes no human pose or depth information
such as is required by the most recent state-of-the-art method. We employ weak
semantic constraint on the codetection process by pairing the video with
sentences. Although the semantic information is usually simple and weak, it can
greatly boost the performance of our codetection framework by reducing the
search space of the hypothesized object detections. Our experiment demonstrates
an average IoU score of 0.423 on a new challenging dataset which contains 15
object classes and 150 videos with 12,509 frames in total, and an average IoU
score of 0.373 on a subset of an existing dataset, originally intended for
activity recognition, which contains 5 object classes and 75 videos with 8,854
frames in total.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 20:34:12 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jan 2016 20:38:42 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Yu",
"Haonan",
""
],
[
"Siskind",
"Jeffrey Mark",
""
]
] | TITLE: Sentence Directed Video Object Codetection
ABSTRACT: We tackle the problem of video object codetection by leveraging the weak
semantic constraint implied by sentences that describe the video content.
Unlike most existing work that focuses on codetecting large objects which are
usually salient both in size and appearance, we can codetect objects that are
small or medium sized. Our method assumes no human pose or depth information
such as is required by the most recent state-of-the-art method. We employ weak
semantic constraint on the codetection process by pairing the video with
sentences. Although the semantic information is usually simple and weak, it can
greatly boost the performance of our codetection framework by reducing the
search space of the hypothesized object detections. Our experiment demonstrates
an average IoU score of 0.423 on a new challenging dataset which contains 15
object classes and 150 videos with 12,509 frames in total, and an average IoU
score of 0.373 on a subset of an existing dataset, originally intended for
activity recognition, which contains 5 object classes and 75 videos with 8,854
frames in total.
| new_dataset | 0.957038 |
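For reference, the IoU (intersection-over-union) score reported above, computed for axis-aligned boxes; the (x1, y1, x2, y2) corner format is an assumption for illustration:

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ~= 0.143
```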
1510.08345 | Michael B Hynes | Michael B Hynes and Hans De Sterck | A polynomial expansion line search for large-scale unconstrained
minimization of smooth L2-regularized loss functions, with implementation in
Apache Spark | 9 pages, 8 figures, 2 tables. Preprint appearing in SIAM Conf on Data
Mining, Miami, FL, 2016 | null | null | null | math.NA cs.DC cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large-scale unconstrained optimization algorithms such as limited memory
BFGS (LBFGS), a common subproblem is a line search minimizing the loss function
along a descent direction. Commonly used line searches iteratively find an
approximate solution for which the Wolfe conditions are satisfied, typically
requiring multiple function and gradient evaluations per line search, which is
expensive in parallel due to communication requirements. In this paper we
propose a new line search approach for cases where the loss function is
analytic, as in least squares regression, logistic regression, or low rank
matrix factorization. We approximate the loss function by a truncated Taylor
polynomial, whose coefficients may be computed efficiently in parallel with
less communication than evaluating the gradient, after which this polynomial
may be minimized with high accuracy in a neighbourhood of the expansion point.
Our Polynomial Expansion Line Search (PELS) was implemented in the Apache Spark
framework and used to accelerate the training of a logistic regression model on
binary classification datasets from the LIBSVM repository with LBFGS and the
Nonlinear Conjugate Gradient (NCG) method. In large-scale numerical experiments
in parallel on a 16-node cluster with 256 cores using the URL, KDDA, and KDDB
datasets, the PELS approach produced significant convergence improvements
compared to the use of classical Wolfe line searches. For example, to reach the
final training label prediction accuracies, LBFGS using PELS had speedup
factors of 1.8--2 over LBFGS using a Wolfe line search, measured by both the
number of iterations and the time required, due to the better accuracy of step
sizes computed in the line search. PELS has the potential to significantly
accelerate large-scale regression and factorization computations, and is
applicable to continuous optimization problems with smooth loss functions.
| [
{
"version": "v1",
"created": "Wed, 28 Oct 2015 15:27:26 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jan 2016 07:01:03 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Hynes",
"Michael B",
""
],
[
"De Sterck",
"Hans",
""
]
] | TITLE: A polynomial expansion line search for large-scale unconstrained
minimization of smooth L2-regularized loss functions, with implementation in
Apache Spark
ABSTRACT: In large-scale unconstrained optimization algorithms such as limited memory
BFGS (LBFGS), a common subproblem is a line search minimizing the loss function
along a descent direction. Commonly used line searches iteratively find an
approximate solution for which the Wolfe conditions are satisfied, typically
requiring multiple function and gradient evaluations per line search, which is
expensive in parallel due to communication requirements. In this paper we
propose a new line search approach for cases where the loss function is
analytic, as in least squares regression, logistic regression, or low rank
matrix factorization. We approximate the loss function by a truncated Taylor
polynomial, whose coefficients may be computed efficiently in parallel with
less communication than evaluating the gradient, after which this polynomial
may be minimized with high accuracy in a neighbourhood of the expansion point.
Our Polynomial Expansion Line Search (PELS) was implemented in the Apache Spark
framework and used to accelerate the training of a logistic regression model on
binary classification datasets from the LIBSVM repository with LBFGS and the
Nonlinear Conjugate Gradient (NCG) method. In large-scale numerical experiments
in parallel on a 16-node cluster with 256 cores using the URL, KDDA, and KDDB
datasets, the PELS approach produced significant convergence improvements
compared to the use of classical Wolfe line searches. For example, to reach the
final training label prediction accuracies, LBFGS using PELS had speedup
factors of 1.8--2 over LBFGS using a Wolfe line search, measured by both the
number of iterations and the time required, due to the better accuracy of step
sizes computed in the line search. PELS has the potential to significantly
accelerate large-scale regression and factorization computations, and is
applicable to continuous optimization problems with smooth loss functions.
| no_new_dataset | 0.947817 |
1511.07131 | Jun Zhu | Jun Zhu and Xianjie Chen and Alan L. Yuille | DeePM: A Deep Part-Based Model for Object Detection and Semantic Part
Localization | the final revision to ICLR 2016, in which some color errors in the
figures are fixed | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a deep part-based model (DeePM) for symbiotic
object detection and semantic part localization. For this purpose, we annotate
semantic parts for all 20 object categories on the PASCAL VOC 2012 dataset,
which provides information on object pose, occlusion, viewpoint and
functionality. DeePM is a latent graphical model based on the state-of-the-art
R-CNN framework, which learns an explicit representation of the object-part
configuration with flexible type sharing (e.g., a sideview horse head can be
shared by a fully-visible sideview horse and a highly truncated sideview horse
with head and neck only). For comparison, we also present an end-to-end
Object-Part (OP) R-CNN which learns an implicit feature representation for
jointly mapping an image ROI to the object and part bounding boxes. We evaluate
the proposed methods for both the object and part detection performance on
PASCAL VOC 2012, and show that DeePM consistently outperforms OP R-CNN in
detecting objects and parts. In addition, it obtains superior performance to
Fast and Faster R-CNNs in object detection.
| [
{
"version": "v1",
"created": "Mon, 23 Nov 2015 08:24:18 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jan 2016 15:25:38 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jan 2016 09:14:31 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Zhu",
"Jun",
""
],
[
"Chen",
"Xianjie",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: DeePM: A Deep Part-Based Model for Object Detection and Semantic Part
Localization
ABSTRACT: In this paper, we propose a deep part-based model (DeePM) for symbiotic
object detection and semantic part localization. For this purpose, we annotate
semantic parts for all 20 object categories on the PASCAL VOC 2012 dataset,
which provides information on object pose, occlusion, viewpoint and
functionality. DeePM is a latent graphical model based on the state-of-the-art
R-CNN framework, which learns an explicit representation of the object-part
configuration with flexible type sharing (e.g., a sideview horse head can be
shared by a fully-visible sideview horse and a highly truncated sideview horse
with head and neck only). For comparison, we also present an end-to-end
Object-Part (OP) R-CNN which learns an implicit feature representation for
jointly mapping an image ROI to the object and part bounding boxes. We evaluate
the proposed methods for both the object and part detection performance on
PASCAL VOC 2012, and show that DeePM consistently outperforms OP R-CNN in
detecting objects and parts. In addition, it obtains superior performance to
Fast and Faster R-CNNs in object detection.
| no_new_dataset | 0.950686 |
1511.09426 | Cengiz Pehlevan | Cengiz Pehlevan, Dmitri B. Chklovskii | A Normative Theory of Adaptive Dimensionality Reduction in Neural
Networks | Advances in Neural Information Processing Systems (NIPS), 2015 | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To make sense of the world our brains must analyze high-dimensional datasets
streamed by our sensory organs. Because such analysis begins with
dimensionality reduction, modelling early sensory processing requires
biologically plausible online dimensionality reduction algorithms. Recently, we
derived such an algorithm, termed similarity matching, from a Multidimensional
Scaling (MDS) objective function. However, in the existing algorithm, the
number of output dimensions is set a priori by the number of output neurons and
cannot be changed. Because the number of informative dimensions in sensory
inputs is variable there is a need for adaptive dimensionality reduction. Here,
we derive biologically plausible dimensionality reduction algorithms which
adapt the number of output dimensions to the eigenspectrum of the input
covariance matrix. We formulate three objective functions which, in the offline
setting, are optimized by the projections of the input dataset onto its
principal subspace scaled by the eigenvalues of the output covariance matrix.
In turn, the output eigenvalues are computed as i) soft-thresholded, ii)
hard-thresholded, iii) equalized thresholded eigenvalues of the input
covariance matrix. In the online setting, we derive the three corresponding
adaptive algorithms and map them onto the dynamics of neuronal activity in
networks with biologically plausible local learning rules. Remarkably, in the
last two networks, neurons are divided into two classes which we identify with
principal neurons and interneurons in biological circuits.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2015 18:45:30 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jan 2016 18:44:23 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Pehlevan",
"Cengiz",
""
],
[
"Chklovskii",
"Dmitri B.",
""
]
] | TITLE: A Normative Theory of Adaptive Dimensionality Reduction in Neural
Networks
ABSTRACT: To make sense of the world our brains must analyze high-dimensional datasets
streamed by our sensory organs. Because such analysis begins with
dimensionality reduction, modelling early sensory processing requires
biologically plausible online dimensionality reduction algorithms. Recently, we
derived such an algorithm, termed similarity matching, from a Multidimensional
Scaling (MDS) objective function. However, in the existing algorithm, the
number of output dimensions is set a priori by the number of output neurons and
cannot be changed. Because the number of informative dimensions in sensory
inputs is variable there is a need for adaptive dimensionality reduction. Here,
we derive biologically plausible dimensionality reduction algorithms which
adapt the number of output dimensions to the eigenspectrum of the input
covariance matrix. We formulate three objective functions which, in the offline
setting, are optimized by the projections of the input dataset onto its
principal subspace scaled by the eigenvalues of the output covariance matrix.
In turn, the output eigenvalues are computed as i) soft-thresholded, ii)
hard-thresholded, iii) equalized thresholded eigenvalues of the input
covariance matrix. In the online setting, we derive the three corresponding
adaptive algorithms and map them onto the dynamics of neuronal activity in
networks with biologically plausible local learning rules. Remarkably, in the
last two networks, neurons are divided into two classes which we identify with
principal neurons and interneurons in biological circuits.
| no_new_dataset | 0.944842 |
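An offline sketch of the soft-thresholding variant described above (assumptions: data already centered, threshold tau picked by hand). The number of retained output dimensions adapts to the eigenspectrum rather than being fixed a priori:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) * np.array([3, 2, 1.5] + [0.2] * 7)  # few informative dims
X -= X.mean(axis=0)

C = X.T @ X / len(X)                       # input covariance
evals, evecs = np.linalg.eigh(C)           # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]

tau = 0.5                                  # threshold on input eigenvalues
out_evals = np.maximum(evals - tau, 0.0)   # soft-thresholded output eigenvalues
k = int(np.count_nonzero(out_evals))       # output dimensionality adapts to the spectrum

# Project onto the principal subspace, rescaled so the output covariance has
# eigenvalues out_evals[:k].
scale = np.sqrt(out_evals[:k] / evals[:k])
Y = (X @ evecs[:, :k]) * scale
print(k, np.round(out_evals[:k], 3))
```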
1512.01320 | Ali Borji | Ali Borji, Saeed Izadi, Laurent Itti | What can we learn about CNNs from a large scale controlled object
dataset? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tolerance to image variations (e.g. translation, scale, pose, illumination)
is an important desired property of any object recognition system, be it human
or machine. Moving towards increasingly bigger datasets has been trending in
computer vision specially with the emergence of highly popular deep learning
models. While being very useful for learning invariance to object inter- and
intra-class shape variability, these large-scale wild datasets are not very
useful for learning invariance to other parameters, forcing researchers to
resort to other tricks for training a model. In this work, we introduce a
large-scale synthetic dataset, which is freely and publicly available, and use
it to answer several fundamental questions regarding invariance and selectivity
properties of convolutional neural networks. Our dataset contains two parts: a)
objects shot on a turntable: 16 categories, 8 rotation angles, 11 cameras on a
semicircular arch, 5 lighting conditions, 3 focus levels, variety of
backgrounds (23.4 per instance) generating 1320 images per instance (over 20
million images in total), and b) scenes: in which a robot arm takes pictures of
objects on a 1:160 scale scene. We study: 1) invariance and selectivity of
different CNN layers, 2) knowledge transfer from one object category to
another, 3) systematic or random sampling of images to build a train set, 4)
domain adaptation from synthetic to natural scenes, and 5) order of knowledge
delivery to CNNs. We also explore how our analyses can lead the field to
develop more efficient CNNs.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2015 05:48:09 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jan 2016 16:56:11 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Borji",
"Ali",
""
],
[
"Izadi",
"Saeed",
""
],
[
"Itti",
"Laurent",
""
]
] | TITLE: What can we learn about CNNs from a large scale controlled object
dataset?
ABSTRACT: Tolerance to image variations (e.g. translation, scale, pose, illumination)
is an important desired property of any object recognition system, be it human
or machine. Moving towards increasingly bigger datasets has been trending in
computer vision, especially with the emergence of highly popular deep learning
models. While being very useful for learning invariance to object inter- and
intra-class shape variability, these large-scale wild datasets are not very
useful for learning invariance to other parameters, forcing researchers to
resort to other tricks for training a model. In this work, we introduce a
large-scale synthetic dataset, which is freely and publicly available, and use
it to answer several fundamental questions regarding invariance and selectivity
properties of convolutional neural networks. Our dataset contains two parts: a)
objects shot on a turntable: 16 categories, 8 rotation angles, 11 cameras on a
semicircular arch, 5 lighting conditions, 3 focus levels, variety of
backgrounds (23.4 per instance) generating 1320 images per instance (over 20
million images in total), and b) scenes: in which a robot arm takes pictures of
objects on a 1:160 scale scene. We study: 1) invariance and selectivity of
different CNN layers, 2) knowledge transfer from one object category to
another, 3) systematic or random sampling of images to build a train set, 4)
domain adaptation from synthetic to natural scenes, and 5) order of knowledge
delivery to CNNs. We also explore how our analyses can lead the field to
develop more efficient CNNs.
| new_dataset | 0.958382 |
1601.06931 | Manuel Marin-Jimenez | F.M. Castro and M.J. Mar\'in-Jim\'enez and N. Guil and R.
Mu\~noz-Salinas | Fisher Motion Descriptor for Multiview Gait Recognition | This paper extends with new experiments the one published at
ICPR'2014 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this paper is to identify individuals by analyzing their gait.
Instead of using binary silhouettes as input data (as done in many previous
works) we propose and evaluate the use of motion descriptors based on densely
sampled short-term trajectories. We take advantage of state-of-the-art people
detectors to define custom spatial configurations of the descriptors around the
target person, obtaining a rich representation of the gait motion. The local
motion features (described by the Divergence-Curl-Shear descriptor) extracted
on the different spatial areas of the person are combined into a single
high-level gait descriptor by using the Fisher Vector encoding. The proposed
approach, coined Pyramidal Fisher Motion, is experimentally validated on
`CASIA' dataset (parts B and C), `TUM GAID' dataset, `CMU MoBo' dataset and the
recent `AVA Multiview Gait' dataset. The results show that this new approach
achieves state-of-the-art results in the problem of gait recognition, allowing us
to recognize walking people from diverse viewpoints on single and multiple
camera setups, wearing different clothes, carrying bags, walking at diverse
speeds and not limited to straight walking paths.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 09:05:26 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Castro",
"F. M.",
""
],
[
"Marín-Jiménez",
"M. J.",
""
],
[
"Guil",
"N.",
""
],
[
"Muñoz-Salinas",
"R.",
""
]
] | TITLE: Fisher Motion Descriptor for Multiview Gait Recognition
ABSTRACT: The goal of this paper is to identify individuals by analyzing their gait.
Instead of using binary silhouettes as input data (as done in many previous
works) we propose and evaluate the use of motion descriptors based on densely
sampled short-term trajectories. We take advantage of state-of-the-art people
detectors to define custom spatial configurations of the descriptors around the
target person, obtaining a rich representation of the gait motion. The local
motion features (described by the Divergence-Curl-Shear descriptor) extracted
on the different spatial areas of the person are combined into a single
high-level gait descriptor by using the Fisher Vector encoding. The proposed
approach, coined Pyramidal Fisher Motion, is experimentally validated on
`CASIA' dataset (parts B and C), `TUM GAID' dataset, `CMU MoBo' dataset and the
recent `AVA Multiview Gait' dataset. The results show that this new approach
achieves state-of-the-art results in the problem of gait recognition, allowing us
to recognize walking people from diverse viewpoints on single and multiple
camera setups, wearing different clothes, carrying bags, walking at diverse
speeds and not limited to straight walking paths.
| no_new_dataset | 0.933734 |
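Note on the record above: the Pyramidal Fisher Motion descriptor rests on Fisher Vector encoding of local motion features over a Gaussian mixture. A rough sketch of the first-order (mean) part of a Fisher Vector (our illustration, not the authors' code; a diagonal-covariance GMM fitted on training descriptors is assumed):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors, gmm):
    # descriptors: (n, d) local features; gmm: fitted diag-covariance mixture.
    q = gmm.predict_proba(descriptors)            # (n, K) soft assignments
    n = descriptors.shape[0]
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        parts.append((q[:, k, None] * diff).sum(axis=0)
                     / (n * np.sqrt(gmm.weights_[k])))
    return np.concatenate(parts)                  # (K * d,) Fisher Vector

# usage sketch:
# gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train_desc)
# fv = fisher_vector_means(video_desc, gmm)
```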
1601.06950 | Michael Waechter | Michael Waechter, Mate Beljan, Simon Fuhrmann, Nils Moehrle, Johannes
Kopf, Michael Goesele | Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction | 10 pages, 12 figures, paper was submitted to ACM Transactions on
Graphics for review | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ultimate goal of many image-based modeling systems is to render
photo-realistic novel views of a scene without visible artifacts. Existing
evaluation metrics and benchmarks focus mainly on the geometric accuracy of the
reconstructed model, which is, however, a poor predictor of visual accuracy.
Furthermore, using only geometric accuracy by itself does not allow evaluating
systems that either lack a geometric scene representation or utilize coarse
proxy geometry. Examples include light field or image-based rendering systems.
We propose a unified evaluation approach based on novel view prediction error
that is able to analyze the visual quality of any method that can render novel
views from input images. One of the key advantages of this approach is that it
does not require ground truth geometry. This dramatically simplifies the
creation of test datasets and benchmarks. It also allows us to evaluate the
quality of an unknown scene during the acquisition and reconstruction process,
which is useful for acquisition planning. We evaluate our approach on a range
of methods including standard geometry-plus-texture pipelines as well as
image-based rendering techniques, compare it to existing geometry-based
benchmarks, and demonstrate its utility for a range of use cases.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 09:57:34 GMT"
}
] | 2016-01-27T00:00:00 | [
[
"Waechter",
"Michael",
""
],
[
"Beljan",
"Mate",
""
],
[
"Fuhrmann",
"Simon",
""
],
[
"Moehrle",
"Nils",
""
],
[
"Kopf",
"Johannes",
""
],
[
"Goesele",
"Michael",
""
]
] | TITLE: Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction
ABSTRACT: The ultimate goal of many image-based modeling systems is to render
photo-realistic novel views of a scene without visible artifacts. Existing
evaluation metrics and benchmarks focus mainly on the geometric accuracy of the
reconstructed model, which is, however, a poor predictor of visual accuracy.
Furthermore, using only geometric accuracy by itself does not allow evaluating
systems that either lack a geometric scene representation or utilize coarse
proxy geometry. Examples include light field or image-based rendering systems.
We propose a unified evaluation approach based on novel view prediction error
that is able to analyze the visual quality of any method that can render novel
views from input images. One of the key advantages of this approach is that it
does not require ground truth geometry. This dramatically simplifies the
creation of test datasets and benchmarks. It also allows us to evaluate the
quality of an unknown scene during the acquisition and reconstruction process,
which is useful for acquisition planning. We evaluate our approach on a range
of methods including standard geometry-plus-texture pipelines as well as
image-based rendering techniques, compare it to existing geometry-based
benchmarks, and demonstrate its utility for a range of use cases.
| no_new_dataset | 0.949482 |
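Note on the record above: the evaluation idea is to re-render the scene from a held-out camera pose and compare the rendering with the real photograph taken from that pose. A toy sketch with mean absolute pixel error standing in for the image-difference metric (our simplification; the paper's actual metric may differ):

```python
import numpy as np

def view_prediction_error(rendered, held_out, mask=None):
    # rendered, held_out: (h, w, 3) images from the same held-out viewpoint.
    diff = np.abs(rendered.astype(float) - held_out.astype(float))
    if mask is not None:
        diff = diff[mask]     # ignore pixels the renderer cannot cover
    return diff.mean()        # lower = better novel view prediction
```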
1601.06223 | Joel Oren | Yuval Filmus, Joel Oren, Kannan Soundararajan | Shapley Values in Weighted Voting Games with Random Weights | null | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the distribution of the well-studied Shapley--Shubik values in
weighted voting games where the agents are stochastically determined. The
Shapley--Shubik value measures the voting power of an agent in typical
collective decision-making systems. While easy to estimate empirically given
the parameters of a weighted voting game, the Shapley values are notoriously
hard to reason about analytically.
We propose a probabilistic approach in which the agent weights are drawn
i.i.d. from some known exponentially decaying distribution. We provide a
general closed-form characterization of the highest and lowest expected Shapley
values in such a game, as a function of the parameters of the underlying
distribution. To do so, we give a novel reinterpretation of the stochastic
process that generates the Shapley variables as a renewal process. We
demonstrate the use of our results on the uniform and exponential
distributions. Furthermore, we show the strength of our theoretical predictions
on several synthetic datasets.
| [
{
"version": "v1",
"created": "Sat, 23 Jan 2016 03:33:13 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"Filmus",
"Yuval",
""
],
[
"Oren",
"Joel",
""
],
[
"Soundararajan",
"Kannan",
""
]
] | TITLE: Shapley Values in Weighted Voting Games with Random Weights
ABSTRACT: We investigate the distribution of the well-studied Shapley--Shubik values in
weighted voting games where the agents are stochastically determined. The
Shapley--Shubik value measures the voting power of an agent in typical
collective decision-making systems. While easy to estimate empirically given
the parameters of a weighted voting game, the Shapley values are notoriously
hard to reason about analytically.
We propose a probabilistic approach in which the agent weights are drawn
i.i.d. from some known exponentially decaying distribution. We provide a
general closed-form characterization of the highest and lowest expected Shapley
values in such a game, as a function of the parameters of the underlying
distribution. To do so, we give a novel reinterpretation of the stochastic
process that generates the Shapley variables as a renewal process. We
demonstrate the use of our results on the uniform and exponential
distributions. Furthermore, we show the strength of our theoretical predictions
on several synthetic datasets.
| no_new_dataset | 0.946001 |
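Note on the record above: the "easy to estimate empirically" remark refers to Monte Carlo estimation of Shapley--Shubik indices, which samples orderings of the agents and counts how often each agent is pivotal. A minimal sketch:

```python
import random

def shapley_shubik(weights, quota, n_samples=100_000):
    # Monte Carlo estimate of Shapley-Shubik indices for a weighted voting game.
    n = len(weights)
    counts = [0] * n
    agents = list(range(n))
    for _ in range(n_samples):
        random.shuffle(agents)
        total = 0
        for i in agents:
            total += weights[i]
            if total >= quota:     # i turns the coalition winning: pivotal
                counts[i] += 1
                break
    return [c / n_samples for c in counts]

# e.g. shapley_shubik([4, 2, 1], quota=4) concentrates power on the weight-4 agent
```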
1601.06243 | Yao Wang | Shiying He, Haiwei Zhou, Yao Wang, Wenfei Cao and Zhi Han | Super-resolution reconstruction of hyperspectral images via low rank
tensor modeling and total variation regularization | submitted to IGARSS 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel approach to hyperspectral image
super-resolution by modeling the global spatial-and-spectral correlation and
local smoothness properties over hyperspectral images. Specifically, we utilize
the tensor nuclear norm and tensor folded-concave penalty functions to describe
the global spatial-and-spectral correlation hidden in hyperspectral images, and
3D total variation (TV) to characterize the local spatial-and-spectral
smoothness across all hyperspectral bands. Then, we develop an efficient
algorithm for solving the resulting optimization problem by combing the local
linear approximation (LLA) strategy and alternative direction method of
multipliers (ADMM). Experimental results on one hyperspectral image dataset
illustrate the merits of the proposed approach.
| [
{
"version": "v1",
"created": "Sat, 23 Jan 2016 07:07:16 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"He",
"Shiying",
""
],
[
"Zhou",
"Haiwei",
""
],
[
"Wang",
"Yao",
""
],
[
"Cao",
"Wenfei",
""
],
[
"Han",
"Zhi",
""
]
] | TITLE: Super-resolution reconstruction of hyperspectral images via low rank
tensor modeling and total variation regularization
ABSTRACT: In this paper, we propose a novel approach to hyperspectral image
super-resolution by modeling the global spatial-and-spectral correlation and
local smoothness properties over hyperspectral images. Specifically, we utilize
the tensor nuclear norm and tensor folded-concave penalty functions to describe
the global spatial-and-spectral correlation hidden in hyperspectral images, and
3D total variation (TV) to characterize the local spatial-and-spectral
smoothness across all hyperspectral bands. Then, we develop an efficient
algorithm for solving the resulting optimization problem by combining the local
linear approximation (LLA) strategy and the alternating direction method of
multipliers (ADMM). Experimental results on one hyperspectral image dataset
illustrate the merits of the proposed approach.
| no_new_dataset | 0.949389 |
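Note on the record above: in ADMM schemes of this kind, the nuclear-norm subproblem is typically solved in closed form by singular value soft-thresholding on matrix unfoldings of the hyperspectral cube (the folded-concave penalty reweights the threshold per singular value via the LLA step). A generic sketch of that proximal operator:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# e.g. on a spectral unfolding of a cube X with shape (h, w, bands):
# X_unf = X.reshape(-1, X.shape[2])   # unfolding convention assumed
# X_low = svt(X_unf, tau=1.0).reshape(X.shape)
```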
1601.06251 | Homa Davoudi | Homa Davoudi, Ehsanollah Kabir | Using compatible shape descriptor for lexicon reduction of printed Farsi
subwords | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a method for lexicon reduction of printed Farsi subwords
based on their holistic shape features. Because of the large number of Persian
subwords variously shaped from a simple letter to a complex combination of
several connected characters, it is not easy to find a fixed shape descriptor
suitable for all subwords. In this paper, we propose to select the descriptor
according to the input shape characteristics. To do this, a neural network is
trained to predict the appropriate descriptor of the input image. This network
is implemented in the proposed lexicon reduction system to decide on the
descriptor used for comparison of the query image with the lexicon entries.
Evaluating the proposed method on a dataset of Persian subwords allows one to
attest the effectiveness of the proposed idea of dealing differently with
various query shapes.
| [
{
"version": "v1",
"created": "Sat, 23 Jan 2016 08:49:00 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"Davoudi",
"Homa",
""
],
[
"Kabir",
"Ehsanollah",
""
]
] | TITLE: Using compatible shape descriptor for lexicon reduction of printed Farsi
subwords
ABSTRACT: This paper presents a method for lexicon reduction of printed Farsi subwords
based on their holistic shape features. Because of the large number of Persian
subwords variously shaped from a simple letter to a complex combination of
several connected characters, it is not easy to find a fixed shape descriptor
suitable for all subwords. In this paper, we propose to select the descriptor
according to the input shape characteristics. To do this, a neural network is
trained to predict the appropriate descriptor of the input image. This network
is implemented in the proposed lexicon reduction system to decide on the
descriptor used for comparison of the query image with the lexicon entries.
Evaluating the proposed method on a dataset of Persian subwords allows one to
attest the effectiveness of the proposed idea of dealing differently with
various query shapes.
| no_new_dataset | 0.943919 |
1601.06260 | Xiatian Zhu | Taiqing Wang and Shaogang Gong and Xiatian Zhu and Shengjin Wang | Person Re-Identification by Discriminative Selection in Video Ranking | 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current person re-identification (ReID) methods typically rely on
single-frame imagery features, whilst ignoring space-time information from
image sequences often available in the practical surveillance scenarios.
Single-frame (single-shot) based visual appearance matching is inherently
limited for person ReID in public spaces due to the challenging visual
ambiguity and uncertainty arising from non-overlapping camera views where
viewing condition changes can cause significant people appearance variations.
In this work, we present a novel model to automatically select the most
discriminative video fragments from noisy/incomplete image sequences of people
from which reliable space-time and appearance features can be computed, whilst
simultaneously learning a video ranking function for person ReID. Using the
PRID$2011$, iLIDS-VID, and HDA+ image sequence datasets, we extensively
conducted comparative evaluations to demonstrate the advantages of the proposed
model over contemporary gait recognition, holistic image sequence matching and
state-of-the-art single-/multi-shot ReID methods.
| [
{
"version": "v1",
"created": "Sat, 23 Jan 2016 10:33:45 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"Wang",
"Taiqing",
""
],
[
"Gong",
"Shaogang",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Wang",
"Shengjin",
""
]
] | TITLE: Person Re-Identification by Discriminative Selection in Video Ranking
ABSTRACT: Current person re-identification (ReID) methods typically rely on
single-frame imagery features, whilst ignoring space-time information from
image sequences often available in the practical surveillance scenarios.
Single-frame (single-shot) based visual appearance matching is inherently
limited for person ReID in public spaces due to the challenging visual
ambiguity and uncertainty arising from non-overlapping camera views where
viewing condition changes can cause significant people appearance variations.
In this work, we present a novel model to automatically select the most
discriminative video fragments from noisy/incomplete image sequences of people
from which reliable space-time and appearance features can be computed, whilst
simultaneously learning a video ranking function for person ReID. Using the
PRID$2011$, iLIDS-VID, and HDA+ image sequence datasets, we extensively
conducted comparative evaluations to demonstrate the advantages of the proposed
model over contemporary gait recognition, holistic image sequence matching and
state-of-the-art single-/multi-shot ReID methods.
| no_new_dataset | 0.952175 |
1601.06527 | Pascal Held | Pascal Held, Rudolf Kruse | Online Community Detection by Using Nearest Hubs | Presented as poster at the NetSciX 2016 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community and cluster detection is a popular field of social network
analysis. Most algorithms focus on static graphs or series of snapshots.
In this paper we present an algorithm that detects communities in dynamic
graphs. The method is based on shortest paths to highly connected nodes,
so-called hubs. Due to local message passing we can update the clustering
results with low computational power.
The presented algorithm is compared with other methods on some static social
networks. The modularity reached is not as high as with the Louvain method, but
higher than with spectral clustering. For large-scale real-world datasets with given
ground truth, we could reconstruct most of the given community structure. The
advantage of the algorithm is the good performance in dynamic scenarios.
| [
{
"version": "v1",
"created": "Mon, 25 Jan 2016 09:41:43 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"Held",
"Pascal",
""
],
[
"Kruse",
"Rudolf",
""
]
] | TITLE: Online Community Detection by Using Nearest Hubs
ABSTRACT: Community and cluster detection is a popular field of social network
analysis. Most algorithms focus on static graphs or series of snapshots.
In this paper we present an algorithm that detects communities in dynamic
graphs. The method is based on shortest paths to highly connected nodes,
so-called hubs. Due to local message passing we can update the clustering
results with low computational power.
The presented algorithm is compared with other methods on some static social
networks. The modularity reached is not as high as with the Louvain method, but
higher than with spectral clustering. For large-scale real-world datasets with given
ground truth, we could reconstruct most of the given community structure. The
advantage of the algorithm is the good performance in dynamic scenarios.
| no_new_dataset | 0.947381 |
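Note on the record above: the static core of the idea — assign every node to its nearest hub by shortest-path distance — can be sketched with networkx. The paper's actual contribution, keeping this assignment current via local message passing as the graph changes, is omitted here:

```python
import networkx as nx

def nearest_hub_communities(G, n_hubs=3):
    # Pick the highest-degree nodes as hubs; label every node by nearest hub.
    hubs = sorted(G.nodes, key=G.degree, reverse=True)[:n_hubs]
    label = {}
    for v in G.nodes:
        dists = {h: nx.shortest_path_length(G, v, h)
                 for h in hubs if nx.has_path(G, v, h)}
        label[v] = min(dists, key=dists.get) if dists else None
    return label

# e.g. nearest_hub_communities(nx.karate_club_graph(), n_hubs=2)
```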
1601.06603 | Sibo Song | Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal,
Jie Lin | Egocentric Activity Recognition with Multimodal Fisher Vector | 5 pages, 4 figures, ICASSP 2016 accepted | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing availability of wearable devices, research on egocentric
activity recognition has received much attention recently. In this paper, we
build a Multimodal Egocentric Activity dataset which includes egocentric videos
and sensor data of 20 fine-grained and diverse activity categories. We present
a novel strategy to extract temporal trajectory-like features from sensor data.
We propose to apply the Fisher Kernel framework to fuse video and temporal
enhanced sensor features. Experiment results show that with careful design of
feature extraction and fusion algorithm, sensor data can enhance
information-rich video data. We make publicly available the Multimodal
Egocentric Activity dataset to facilitate future research.
| [
{
"version": "v1",
"created": "Mon, 25 Jan 2016 13:57:07 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"Song",
"Sibo",
""
],
[
"Cheung",
"Ngai-Man",
""
],
[
"Chandrasekhar",
"Vijay",
""
],
[
"Mandal",
"Bappaditya",
""
],
[
"Lin",
"Jie",
""
]
] | TITLE: Egocentric Activity Recognition with Multimodal Fisher Vector
ABSTRACT: With the increasing availability of wearable devices, research on egocentric
activity recognition has received much attention recently. In this paper, we
build a Multimodal Egocentric Activity dataset which includes egocentric videos
and sensor data of 20 fine-grained and diverse activity categories. We present
a novel strategy to extract temporal trajectory-like features from sensor data.
We propose to apply the Fisher Kernel framework to fuse video and temporal
enhanced sensor features. Experiment results show that with careful design of
feature extraction and fusion algorithm, sensor data can enhance
information-rich video data. We make publicly available the Multimodal
Egocentric Activity dataset to facilitate future research.
| new_dataset | 0.953923 |
1601.06608 | Mrinal Haloi | Mrinal Haloi and Samarendra Dandapat, and Rohit Sinha | An Unsupervised Method for Detection and Validation of The Optic Disc
and The Fovea | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a novel method for detecting retinal image
features, the optic disc and the fovea, from colour fundus photographs of
dilated eyes for a Computer-aided Diagnosis (CAD) system. A saliency map based
method was used to detect the optic disc, followed by an unsupervised
probabilistic Latent Semantic Analysis for detection validation. The validation
concept is based on distinct vessel structures in the optic disc. By using the
clinical information on the standard location of the fovea with respect to the
optic disc, the macula region is estimated. 100\% detection accuracy is
achieved for the optic disc and the macula on the MESSIDOR and DIARETDB1
datasets, and 98.8\% detection accuracy on the STARE dataset.
| [
{
"version": "v1",
"created": "Mon, 25 Jan 2016 14:05:36 GMT"
}
] | 2016-01-26T00:00:00 | [
[
"Haloi",
"Mrinal",
""
],
[
"Dandapat",
"Samarendra",
""
],
[
"Sinha",
"Rohit",
""
]
] | TITLE: An Unsupervised Method for Detection and Validation of The Optic Disc
and The Fovea
ABSTRACT: In this work, we present a novel method for detecting retinal image
features, the optic disc and the fovea, from colour fundus photographs of
dilated eyes for a Computer-aided Diagnosis (CAD) system. A saliency map based
method was used to detect the optic disc, followed by an unsupervised
probabilistic Latent Semantic Analysis for detection validation. The validation
concept is based on distinct vessel structures in the optic disc. By using the
clinical information on the standard location of the fovea with respect to the
optic disc, the macula region is estimated. 100\% detection accuracy is
achieved for the optic disc and the macula on the MESSIDOR and DIARETDB1
datasets, and 98.8\% detection accuracy on the STARE dataset.
| no_new_dataset | 0.953188 |
1507.03292 | Jaeseong Jeong | Jaeseong Jeong, Mathieu Leconte and Alexandre Proutiere | Cluster-Aided Mobility Predictions | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the future location of users in wireless networks has numerous
applications, and can help service providers to improve the quality of service
perceived by their clients. The location predictors proposed so far estimate
the next location of a specific user by inspecting the past individual
trajectories of this user. As a consequence, when the training data collected
for a given user is limited, the resulting prediction is inaccurate. In this
paper, we develop cluster-aided predictors that exploit past trajectories
collected from all users to predict the next location of a given user. These
predictors rely on clustering techniques and extract from the training data
similarities among the mobility patterns of the various users to improve the
prediction accuracy. Specifically, we present CAMP (Cluster-Aided Mobility
Predictor), a cluster-aided predictor whose design is based on recent
non-parametric bayesian statistical tools. CAMP is robust and adaptive in the
sense that it exploits similarities in users' mobility only if such
similarities are really present in the training data. We analytically prove the
consistency of the predictions provided by CAMP, and investigate its
performance using two large-scale datasets. CAMP significantly outperforms
existing predictors, and in particular those that only exploit individual past
trajectories.
| [
{
"version": "v1",
"created": "Sun, 12 Jul 2015 23:27:50 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jul 2015 18:35:18 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Aug 2015 23:09:58 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Jan 2016 21:44:54 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Jeong",
"Jaeseong",
""
],
[
"Leconte",
"Mathieu",
""
],
[
"Proutiere",
"Alexandre",
""
]
] | TITLE: Cluster-Aided Mobility Predictions
ABSTRACT: Predicting the future location of users in wireless networks has numerous
applications, and can help service providers to improve the quality of service
perceived by their clients. The location predictors proposed so far estimate
the next location of a specific user by inspecting the past individual
trajectories of this user. As a consequence, when the training data collected
for a given user is limited, the resulting prediction is inaccurate. In this
paper, we develop cluster-aided predictors that exploit past trajectories
collected from all users to predict the next location of a given user. These
predictors rely on clustering techniques and extract from the training data
similarities among the mobility patterns of the various users to improve the
prediction accuracy. Specifically, we present CAMP (Cluster-Aided Mobility
Predictor), a cluster-aided predictor whose design is based on recent
non-parametric Bayesian statistical tools. CAMP is robust and adaptive in the
sense that it exploits similarities in users' mobility only if such
similarities are really present in the training data. We analytically prove the
consistency of the predictions provided by CAMP, and investigate its
performance using two large-scale datasets. CAMP significantly outperforms
existing predictors, and in particular those that only exploit individual past
trajectories.
| no_new_dataset | 0.948489 |
1511.07386 | Iasonas Kokkinos | Iasonas Kokkinos | Pushing the Boundaries of Boundary Detection using Deep Learning | The previous version reported large improvements w.r.t. the LPO
region proposal baseline, which turned out to be due to a wrong computation
for the baseline. The improvements are currently less important, and are
omitted. We are sorry if the reported results caused any confusion. We have
also integrated reviewer feedback regarding human performance on the BSD
benchmark | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we show that adapting Deep Convolutional Neural Network training
to the task of boundary detection can result in substantial improvements over
the current state-of-the-art in boundary detection.
Our contributions consist firstly in combining a careful design of the loss
for boundary detection training, a multi-resolution architecture and training
with external data to improve the detection accuracy of the current state of
the art. When measured on the standard Berkeley Segmentation Dataset, we
improve the optimal dataset scale F-measure from 0.780 to 0.808 - while human
performance is at 0.803. We further improve performance to 0.813 by combining
deep learning with grouping, integrating the Normalized Cuts technique within a
deep network.
We also examine the potential of our boundary detector in conjunction with
the task of semantic segmentation and demonstrate clear improvements over
state-of-the-art systems. Our detector is fully integrated in the popular Caffe
framework and processes a 320x420 image in less than a second.
| [
{
"version": "v1",
"created": "Mon, 23 Nov 2015 19:54:09 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Jan 2016 15:31:32 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Kokkinos",
"Iasonas",
""
]
] | TITLE: Pushing the Boundaries of Boundary Detection using Deep Learning
ABSTRACT: In this work we show that adapting Deep Convolutional Neural Network training
to the task of boundary detection can result in substantial improvements over
the current state-of-the-art in boundary detection.
Our contributions consist firstly in combining a careful design of the loss
for boundary detection training, a multi-resolution architecture and training
with external data to improve the detection accuracy of the current state of
the art. When measured on the standard Berkeley Segmentation Dataset, we
improve the optimal dataset scale F-measure from 0.780 to 0.808 - while human
performance is at 0.803. We further improve performance to 0.813 by combining
deep learning with grouping, integrating the Normalized Cuts technique within a
deep network.
We also examine the potential of our boundary detector in conjunction with
the task of semantic segmentation and demonstrate clear improvements over
state-of-the-art systems. Our detector is fully integrated in the popular Caffe
framework and processes a 320x420 image in less than a second.
| no_new_dataset | 0.94256 |
1601.05893 | Hans De Sterck | Shawn Brunsting, Hans De Sterck, Remco Dolman, Teun van Sprundel | GeoTextTagger: High-Precision Location Tagging of Textual Documents
using a Natural Language Processing Approach | null | null | null | null | cs.AI cs.CL cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Location tagging, also known as geotagging or geolocation, is the process of
assigning geographical coordinates to input data. In this paper we present an
algorithm for location tagging of textual documents. Our approach makes use of
previous work in natural language processing by using a state-of-the-art
part-of-speech tagger and named entity recognizer to find blocks of text which
may refer to locations. A knowledge base (OpenStreetMap) is then used to find a
list of possible locations for each block. Finally, one location is chosen for
each block by assigning distance-based scores to each location and repeatedly
selecting the location and block with the best score. We tested our geolocation
algorithm with Wikipedia articles about topics with a well-defined geographical
location that are geotagged by the articles' authors, where classification
approaches have achieved median errors as low as 11 km, with attainable
accuracy limited by the class size. Our approach achieved a 10th percentile
error of 490 metres and median error of 54 kilometres on the Wikipedia dataset
we used. When considering the five location tags with the greatest scores, 50%
of articles were assigned at least one tag within 8.5 kilometres of the
article's author-assigned true location. We also tested our approach on Twitter
messages that are tagged with the location from which the message was sent.
Twitter texts are challenging because they are short and unstructured and often
do not contain words referring to the location they were sent from, but we
obtain potentially useful results. We explain how we use the Spark framework
for data analytics to collect and process our test data. In general,
classification-based approaches for location tagging may be reaching their
upper accuracy limit, but our precision-focused approach has high accuracy for
some texts and shows significant potential for improvement overall.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 07:09:54 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Brunsting",
"Shawn",
""
],
[
"De Sterck",
"Hans",
""
],
[
"Dolman",
"Remco",
""
],
[
"van Sprundel",
"Teun",
""
]
] | TITLE: GeoTextTagger: High-Precision Location Tagging of Textual Documents
using a Natural Language Processing Approach
ABSTRACT: Location tagging, also known as geotagging or geolocation, is the process of
assigning geographical coordinates to input data. In this paper we present an
algorithm for location tagging of textual documents. Our approach makes use of
previous work in natural language processing by using a state-of-the-art
part-of-speech tagger and named entity recognizer to find blocks of text which
may refer to locations. A knowledge base (OpenStreetMap) is then used to find a
list of possible locations for each block. Finally, one location is chosen for
each block by assigning distance-based scores to each location and repeatedly
selecting the location and block with the best score. We tested our geolocation
algorithm with Wikipedia articles about topics with a well-defined geographical
location that are geotagged by the articles' authors, where classification
approaches have achieved median errors as low as 11 km, with attainable
accuracy limited by the class size. Our approach achieved a 10th percentile
error of 490 metres and median error of 54 kilometres on the Wikipedia dataset
we used. When considering the five location tags with the greatest scores, 50%
of articles were assigned at least one tag within 8.5 kilometres of the
article's author-assigned true location. We also tested our approach on Twitter
messages that are tagged with the location from which the message was sent.
Twitter texts are challenging because they are short and unstructured and often
do not contain words referring to the location they were sent from, but we
obtain potentially useful results. We explain how we use the Spark framework
for data analytics to collect and process our test data. In general,
classification-based approaches for location tagging may be reaching their
upper accuracy limit, but our precision-focused approach has high accuracy for
some texts and shows significant potential for improvement overall.
| no_new_dataset | 0.94625 |
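Note on the record above: the disambiguation loop scores each candidate location by its proximity to the other blocks' candidates, fixes the best (block, location) pair, and repeats. A greedy sketch under our simplified score (summed distance to each other block's nearest candidate); names and details are illustrative, not the authors' code:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(p, q):
    # great-circle distance in km between (lat, lon) points
    la1, lo1, la2, lo2 = map(radians, (*p, *q))
    h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def resolve(blocks):
    # blocks: {block_text: [(lat, lon) candidates from the knowledge base]}
    resolved = {}
    while blocks:
        def score(b, c):
            others = [cs for b2, cs in blocks.items() if b2 != b]
            return sum(min(haversine(c, c2) for c2 in cs) for cs in others)
        b, c = min(((b, c) for b, cs in blocks.items() for c in cs),
                   key=lambda bc: score(*bc))
        resolved[b] = c       # fix the best-scoring pair, then repeat
        del blocks[b]
    return resolved
```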
1601.06032 | Wangmeng Zuo | Wangmeng Zuo, Xiaohe Wu, Liang Lin, Lei Zhang, and Ming-Hsuan Yang | Learning Support Correlation Filters for Visual Tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sampling and budgeting training examples are two essential factors in
tracking algorithms based on support vector machines (SVMs) as a trade-off
between accuracy and efficiency. Recently, the circulant matrix formed by dense
sampling of translated image patches has been utilized in correlation filters
for fast tracking. In this paper, we derive an equivalent formulation of an SVM
model with circulant matrix expression and present an efficient alternating
optimization method for visual tracking. We incorporate the discrete Fourier
transform with the proposed alternating optimization process, and pose the
tracking problem as an iterative learning of support correlation filters (SCFs)
which find the global optimal solution with real-time performance. For a given
circulant data matrix with n^2 samples of size n*n, the computational
complexity of the proposed algorithm is O(n^2*logn) whereas that of the
standard SVM-based approaches is at least O(n^4). In addition, we extend the
SCF-based tracking algorithm with multi-channel features, kernel functions, and
scale-adaptive approaches to further improve the tracking performance.
Experimental results on a large benchmark dataset show that the proposed
SCF-based algorithms perform favorably against the state-of-the-art tracking
methods in terms of accuracy and speed.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 15:02:50 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Zuo",
"Wangmeng",
""
],
[
"Wu",
"Xiaohe",
""
],
[
"Lin",
"Liang",
""
],
[
"Zhang",
"Lei",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] | TITLE: Learning Support Correlation Filters for Visual Tracking
ABSTRACT: Sampling and budgeting training examples are two essential factors in
tracking algorithms based on support vector machines (SVMs) as a trade-off
between accuracy and efficiency. Recently, the circulant matrix formed by dense
sampling of translated image patches has been utilized in correlation filters
for fast tracking. In this paper, we derive an equivalent formulation of an SVM
model with circulant matrix expression and present an efficient alternating
optimization method for visual tracking. We incorporate the discrete Fourier
transform with the proposed alternating optimization process, and pose the
tracking problem as an iterative learning of support correlation filters (SCFs)
which find the global optimal solution with real-time performance. For a given
circulant data matrix with n^2 samples of size n*n, the computational
complexity of the proposed algorithm is O(n^2*logn) whereas that of the
standard SVM-based approaches is at least O(n^4). In addition, we extend the
SCF-based tracking algorithm with multi-channel features, kernel functions, and
scale-adaptive approaches to further improve the tracking performance.
Experimental results on a large benchmark dataset show that the proposed
SCF-based algorithms perform favorably against the state-of-the-art tracking
methods in terms of accuracy and speed.
| no_new_dataset | 0.952131 |
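Note on the record above: the complexity claim rests on circulant matrices being diagonalized by the DFT, so the regression over all circular shifts is solved element-wise in the Fourier domain. The standard ridge-regression correlation filter (MOSSE-style squared loss, i.e. not the paper's SVM hinge objective) shows the mechanism:

```python
import numpy as np

def train_filter(x, y, lam=1e-4):
    # Ridge regression over all circular shifts of patch x against target y;
    # circulant structure makes the solve element-wise after a 2D FFT,
    # hence O(n^2 log n) instead of O(n^4) for an explicit solver.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)   # filter, Fourier domain

def detect(w_f, z):
    # Correlate the filter with a new patch z; the response peak gives the shift.
    response = np.real(np.fft.ifft2(w_f * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(response), response.shape)
```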
1601.06035 | Cyril Stark | Cyril Stark | Recommender systems inspired by the structure of quantum theory | null | null | null | null | cs.LG cs.IT math.IT math.OC quant-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physicists use quantum models to describe the behavior of physical systems.
Quantum models owe their success to their interpretability, to their relation
to probabilistic models (quantization of classical models) and to their high
predictive power. Beyond physics, these properties are valuable in general data
science. This motivates the use of quantum models to analyze general
nonphysical datasets. Here we provide both empirical and theoretical insights
into the application of quantum models in data science. In the theoretical part
of this paper, we firstly show that quantum models can be exponentially more
efficient than probabilistic models because there exist datasets that admit
low-dimensional quantum models and only exponentially high-dimensional
probabilistic models. Secondly, we explain in what sense quantum models realize
a useful relaxation of compressed probabilistic models. Thirdly, we show that
sparse datasets admit low-dimensional quantum models and finally, we introduce
a method to compute hierarchical orderings of properties of users (e.g.,
personality traits) and items (e.g., genres of movies). In the empirical part
of the paper, we evaluate quantum models in item recommendation and observe
that the predictive power of quantum-inspired recommender systems can compete
with state-of-the-art recommender systems like SVD++ and PureSVD. Furthermore,
we make use of the interpretability of quantum models by computing hierarchical
orderings of properties of users and items. This work establishes a connection
between data science (item recommendation), information theory (communication
complexity), mathematical programming (positive semidefinite factorizations)
and physics (quantum models).
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 15:09:18 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Stark",
"Cyril",
""
]
] | TITLE: Recommender systems inspired by the structure of quantum theory
ABSTRACT: Physicists use quantum models to describe the behavior of physical systems.
Quantum models owe their success to their interpretability, to their relation
to probabilistic models (quantization of classical models) and to their high
predictive power. Beyond physics, these properties are valuable in general data
science. This motivates the use of quantum models to analyze general
nonphysical datasets. Here we provide both empirical and theoretical insights
into the application of quantum models in data science. In the theoretical part
of this paper, we firstly show that quantum models can be exponentially more
efficient than probabilistic models because there exist datasets that admit
low-dimensional quantum models and only exponentially high-dimensional
probabilistic models. Secondly, we explain in what sense quantum models realize
a useful relaxation of compressed probabilistic models. Thirdly, we show that
sparse datasets admit low-dimensional quantum models and finally, we introduce
a method to compute hierarchical orderings of properties of users (e.g.,
personality traits) and items (e.g., genres of movies). In the empirical part
of the paper, we evaluate quantum models in item recommendation and observe
that the predictive power of quantum-inspired recommender systems can compete
with state-of-the-art recommender systems like SVD++ and PureSVD. Furthermore,
we make use of the interpretability of quantum models by computing hierarchical
orderings of properties of users and items. This work establishes a connection
between data science (item recommendation), information theory (communication
complexity), mathematical programming (positive semidefinite factorizations)
and physics (quantum models).
| no_new_dataset | 0.94256 |
1601.06057 | Bartosz Zieli\'nski | Matthias Zeppelzauer, Bartosz Zieli\'nski, Mateusz Juda and Markus
Seidl | Topological descriptors for 3D surface analysis | 12 pages, 3 figures, CTIC 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate topological descriptors for 3D surface analysis, i.e. the
classification of surfaces according to their geometric fine structure. On a
dataset of high-resolution 3D surface reconstructions we compute persistence
diagrams for a 2D cubical filtration. In the next step we investigate different
topological descriptors and measure their ability to discriminate structurally
different 3D surface patches. We evaluate their sensitivity to different
parameters and compare the performance of the resulting topological descriptors
to alternative (non-topological) descriptors. We present a comprehensive
evaluation that shows that topological descriptors are (i) robust, (ii) yield
state-of-the-art performance for the task of 3D surface analysis and (iii)
improve classification performance when combined with non-topological
descriptors.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 16:10:54 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Zeppelzauer",
"Matthias",
""
],
[
"Zieliński",
"Bartosz",
""
],
[
"Juda",
"Mateusz",
""
],
[
"Seidl",
"Markus",
""
]
] | TITLE: Topological descriptors for 3D surface analysis
ABSTRACT: We investigate topological descriptors for 3D surface analysis, i.e. the
classification of surfaces according to their geometric fine structure. On a
dataset of high-resolution 3D surface reconstructions we compute persistence
diagrams for a 2D cubical filtration. In the next step we investigate different
topological descriptors and measure their ability to discriminate structurally
different 3D surface patches. We evaluate their sensitivity to different
parameters and compare the performance of the resulting topological descriptors
to alternative (non-topological) descriptors. We present a comprehensive
evaluation that shows that topological descriptors are (i) robust, (ii) yield
state-of-the-art performance for the task of 3D surface analysis and (iii)
improve classification performance when combined with non-topological
descriptors.
| no_new_dataset | 0.949248 |
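Note on the record above: persistence diagrams for a 2D cubical filtration of a surface height patch can be computed with GUDHI's cubical complex, roughly as below; patch extraction and the descriptors built on top of the diagram are omitted:

```python
import gudhi

def surface_persistence(height_patch):
    # height_patch: 2D array of surface heights defining the cubical filtration.
    cc = gudhi.CubicalComplex(top_dimensional_cells=height_patch)
    diagram = cc.persistence()               # list of (dim, (birth, death))
    return [(b, d) for dim, (b, d) in diagram if dim == 0]   # H0 pairs
```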
1601.06087 | Aria Ahmadi | Aria Ahmadi and Ioannis Patras | Unsupervised convolutional neural networks for motion estimation | Submitted to ICIP 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional methods for motion estimation estimate the motion field F between
a pair of images as the one that minimizes a predesigned cost function. In this
paper, we propose a direct method and train a Convolutional Neural Network
(CNN) that, when given a pair of images as input at test time, produces a
dense motion field F at its output layer. In the absence of large datasets with
ground truth motion that would allow classical supervised training, we propose
to train the network in an unsupervised manner. The proposed cost function that
is optimized during training, is based on the classical optical flow
constraint. The latter is differentiable with respect to the motion field and,
therefore, allows backpropagation of the error to previous layers of the
network. Our method is tested on both synthetic and real image sequences and
performs similarly to the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 17:57:07 GMT"
}
] | 2016-01-25T00:00:00 | [
[
"Ahmadi",
"Aria",
""
],
[
"Patras",
"Ioannis",
""
]
] | TITLE: Unsupervised convolutional neural networks for motion estimation
ABSTRACT: Traditional methods for motion estimation estimate the motion field F between
a pair of images as the one that minimizes a predesigned cost function. In this
paper, we propose a direct method and train a Convolutional Neural Network
(CNN) that, when given a pair of images as input at test time, produces a
dense motion field F at its output layer. In the absence of large datasets with
ground truth motion that would allow classical supervised training, we propose
to train the network in an unsupervised manner. The proposed cost function that
is optimized during training, is based on the classical optical flow
constraint. The latter is differentiable with respect to the motion field and,
therefore, allows backpropagation of the error to previous layers of the
network. Our method is tested on both synthetic and real image sequences and
performs similarly to the state-of-the-art methods.
| no_new_dataset | 0.952042 |
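Note on the record above: the differentiable training signal is the brightness-constancy residual of the optical flow constraint — warp the second image backward by the predicted flow and penalize the difference to the first. A PyTorch-style sketch of such a loss (our rendering, not the authors' code; flow channels assumed to be (u, v) pixel displacements):

```python
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    # img1, img2: (b, c, h, w); flow: (b, 2, h, w) predicted by the CNN.
    b, _, h, w = img1.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().to(flow.device)   # (h, w, 2)
    coords = base + flow.permute(0, 2, 3, 1)                       # shift by (u, v)
    grid = torch.stack((2 * coords[..., 0] / (w - 1) - 1,          # to [-1, 1]
                        2 * coords[..., 1] / (h - 1) - 1), dim=-1)
    warped = F.grid_sample(img2, grid, align_corners=True)
    return (img1 - warped).abs().mean()   # differentiable w.r.t. the flow
```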
1601.05447 | Subarna Tripathi | Subarna Tripathi, Serge Belongie, Youngbae Hwang, Truong Nguyen | Detecting Temporally Consistent Objects in Videos through Object Class
Label Propagation | Accepted for publication in WACV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object proposals for detecting moving or static video objects need to address
issues such as speed, memory complexity and temporal consistency. We propose an
efficient Video Object Proposal (VOP) generation method and show its efficacy
in learning a better video object detector. A deep-learning based video object
detector learned using the proposed VOP achieves state-of-the-art detection
performance on the Youtube-Objects dataset. We further propose a clustering of
VOPs which can efficiently be used for detecting objects in video in a
streaming fashion. As opposed to applying per-frame convolutional neural
network (CNN) based object detection, our proposed method called Objects in
Video Enabler thRough LAbel Propagation (OVERLAP) needs to classify only a
small fraction of all candidate proposals in every video frame through
streaming clustering of object proposals and class-label propagation. Source
code will be made available soon.
| [
{
"version": "v1",
"created": "Wed, 20 Jan 2016 21:45:29 GMT"
}
] | 2016-01-22T00:00:00 | [
[
"Tripathi",
"Subarna",
""
],
[
"Belongie",
"Serge",
""
],
[
"Hwang",
"Youngbae",
""
],
[
"Nguyen",
"Truong",
""
]
] | TITLE: Detecting Temporally Consistent Objects in Videos through Object Class
Label Propagation
ABSTRACT: Object proposals for detecting moving or static video objects need to address
issues such as speed, memory complexity and temporal consistency. We propose an
efficient Video Object Proposal (VOP) generation method and show its efficacy
in learning a better video object detector. A deep-learning based video object
detector learned using the proposed VOP achieves state-of-the-art detection
performance on the Youtube-Objects dataset. We further propose a clustering of
VOPs which can efficiently be used for detecting objects in video in a
streaming fashion. As opposed to applying per-frame convolutional neural
network (CNN) based object detection, our proposed method called Objects in
Video Enabler thRough LAbel Propagation (OVERLAP) needs to classify only a
small fraction of all candidate proposals in every video frame through
streaming clustering of object proposals and class-label propagation. Source
code will be made available soon.
| no_new_dataset | 0.952309 |
1601.05511 | Pichao Wang | Jing Zhang and Wanqing Li and Philip O. Ogunbona and Pichao Wang and
Chang Tang | RGB-D-based Action Recognition Datasets: A Survey | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human action recognition from RGB-D (Red, Green, Blue and Depth) data has
attracted increasing attention since the first work reported in 2010. Over this
period, many benchmark datasets have been created to facilitate the development
and evaluation of new algorithms. This raises the question of which dataset to
select and how to use it in providing a fair and objective comparative
evaluation against state-of-the-art methods. To address this issue, this paper
provides a comprehensive review of the most commonly used action recognition
related RGB-D video datasets, including 27 single-view datasets, 10 multi-view
datasets, and 7 multi-person datasets. The detailed information and analysis of
these datasets is a useful resource in guiding insightful selection of datasets
for future research. In addition, the issues with current algorithm evaluation
vis-\'{a}-vis limitations of the available datasets and evaluation protocols
are also highlighted; resulting in a number of recommendations for collection
of new datasets and use of evaluation protocols.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 04:58:04 GMT"
}
] | 2016-01-22T00:00:00 | [
[
"Zhang",
"Jing",
""
],
[
"Li",
"Wanqing",
""
],
[
"Ogunbona",
"Philip O.",
""
],
[
"Wang",
"Pichao",
""
],
[
"Tang",
"Chang",
""
]
] | TITLE: RGB-D-based Action Recognition Datasets: A Survey
ABSTRACT: Human action recognition from RGB-D (Red, Green, Blue and Depth) data has
attracted increasing attention since the first work reported in 2010. Over this
period, many benchmark datasets have been created to facilitate the development
and evaluation of new algorithms. This raises the question of which dataset to
select and how to use it in providing a fair and objective comparative
evaluation against state-of-the-art methods. To address this issue, this paper
provides a comprehensive review of the most commonly used action recognition
related RGB-D video datasets, including 27 single-view datasets, 10 multi-view
datasets, and 7 multi-person datasets. The detailed information and analysis of
these datasets is a useful resource in guiding insightful selection of datasets
for future research. In addition, the issues with current algorithm evaluation
vis-\'{a}-vis limitations of the available datasets and evaluation protocols
are also highlighted; resulting in a number of recommendations for collection
of new datasets and use of evaluation protocols.
| no_new_dataset | 0.941169 |
1601.05532 | Alexander Belyi | Alexander Belyi, Iva Bojic, Stanislav Sobolevsky, Izabela Sitko,
Bartosz Hawelka, Lada Rudikova, Alexander Kurbatski, Carlo Ratti | Global multi-layer network of human mobility | 13 pages, 10 figures, 1 table | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent availability of geo-localized data capturing individual human activity
together with the statistical data on international migration opened up
unprecedented opportunities for a study on global mobility. In this paper we
consider it from the perspective of a multi-layer complex network, built using
a combination of three datasets: Twitter, Flickr and official migration data.
Those datasets provide different but equally important insights on the global
mobility: while the first two highlight short-term visits of people from one
country to another, the last one - migration - shows the long-term mobility
perspective, when people relocate for good. And the main purpose of the paper
is to emphasize importance of this multi-layer approach capturing both aspects
of human mobility at the same time. So we start from a comparative study of the
network layers, comparing short- and long-term mobility through the
statistical properties of the corresponding networks, such as the parameters of
their degree centrality distributions or parameters of the corresponding
gravity model being fit to the network. We also focus on the differences in
country ranking by their short- and long-term attractiveness, discussing the
most noticeable outliers. Finally, we apply this multi-layered human mobility
network to infer the structure of the global society through a community
detection approach and demonstrate that consideration of mobility from a
multi-layer perspective can reveal important global spatial patterns in a way
more consistent with other available relevant sources of international
connections, in comparison to the spatial structure inferred from each network
layer taken separately.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 07:40:37 GMT"
}
] | 2016-01-22T00:00:00 | [
[
"Belyi",
"Alexander",
""
],
[
"Bojic",
"Iva",
""
],
[
"Sobolevsky",
"Stanislav",
""
],
[
"Sitko",
"Izabela",
""
],
[
"Hawelka",
"Bartosz",
""
],
[
"Rudikova",
"Lada",
""
],
[
"Kurbatski",
"Alexander",
""
],
[
"Ratti",
"Carlo",
""
]
] | TITLE: Global multi-layer network of human mobility
ABSTRACT: Recent availability of geo-localized data capturing individual human activity
together with the statistical data on international migration opened up
unprecedented opportunities for a study on global mobility. In this paper we
consider it from the perspective of a multi-layer complex network, built using
a combination of three datasets: Twitter, Flickr and official migration data.
Those datasets provide different but equally important insights on the global
mobility: while the first two highlight short-term visits of people from one
country to another, the last one - migration - shows the long-term mobility
perspective, when people relocate for good. And the main purpose of the paper
is to emphasize importance of this multi-layer approach capturing both aspects
of human mobility at the same time. So we start from a comparative study of the
network layers, comparing short- and long- term mobility through the
statistical properties of the corresponding networks, such as the parameters of
their degree centrality distributions or parameters of the corresponding
gravity model being fit to the network. We also focus on the differences in
country ranking by their short- and long-term attractiveness, discussing the
most noticeable outliers. Finally, we apply this multi-layered human mobility
network to infer the structure of the global society through a community
detection approach and demonstrate that consideration of mobility from a
multi-layer perspective can reveal important global spatial patterns in a way
more consistent with other available relevant sources of international
connections, in comparison to the spatial structure inferred from each network
layer taken separately.
| no_new_dataset | 0.9462 |
1601.05644 | Weilong Peng | Weilong Peng (1), Zhiyong Feng (1) and Chao Xu (2) ((1) School of
Computer Science, Tianjin University (2) School of Software, Tianjin
University) | B-spline Shape from Motion & Shading: An Automatic Free-form Surface
Modeling for Face Reconstruction | 9 pages, 6 figures | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | Recently, many methods have been proposed for face reconstruction from
multiple images, most of which involve fundamental principles of Shape from
Shading and Structure from motion. However, a majority of the methods just
generate discrete surface model of face. In this paper, B-spline Shape from
Motion and Shading (BsSfMS) is proposed to reconstruct continuous B-spline
surface for multi-view face images, according to an assumption that shading and
motion information in the images contain 1st- and 0th-order derivative of
B-spline face respectively. Face surface is expressed as a B-spline surface
that can be reconstructed by optimizing B-spline control points. Therefore,
normals and 3D feature points computed from shading and motion of images
respectively are used as the 1st- and 0th- order derivative information, to be
jointly applied in optimizing the B-spline face. Additionally, an IMLS
(iterative multi-least-square) algorithm is proposed to handle the difficult
control point optimization. Furthermore, synthetic samples and LFW dataset are
introduced and conducted to verify the proposed approach, and the experimental
results demonstrate the effectiveness with different poses, illuminations,
expressions etc., even with wild images.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 14:11:40 GMT"
}
] | 2016-01-22T00:00:00 | [
[
"Peng",
"Weilong",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Xu",
"Chao",
""
]
] | TITLE: B-spline Shape from Motion & Shading: An Automatic Free-form Surface
Modeling for Face Reconstruction
ABSTRACT: Recently, many methods have been proposed for face reconstruction from
multiple images, most of which involve fundamental principles of Shape from
Shading and Structure from motion. However, a majority of the methods just
generate discrete surface model of face. In this paper, B-spline Shape from
Motion and Shading (BsSfMS) is proposed to reconstruct continuous B-spline
surface for multi-view face images, according to an assumption that shading and
motion information in the images contain 1st- and 0th-order derivative of
B-spline face respectively. Face surface is expressed as a B-spline surface
that can be reconstructed by optimizing B-spline control points. Therefore,
normals and 3D feature points computed from shading and motion of images
respectively are used as the 1st- and 0th- order derivative information, to be
jointly applied in optimizing the B-spline face. Additionally, an IMLS
(iterative multi-least-square) algorithm is proposed to handle the difficult
control point optimization. Furthermore, synthetic samples and LFW dataset are
introduced and conducted to verify the proposed approach, and the experimental
results demonstrate the effectiveness with different poses, illuminations,
expressions etc., even with wild images.
| no_new_dataset | 0.906901 |
1601.05654 | Nikolaos Gianniotis | Nikolaos Gianniotis and Sven D. K\"ugler and Peter Ti\v{n}o and Kai L.
Polsterer | Model-Coupled Autoencoder for Time Series Visualisation | null | null | null | null | astro-ph.IM cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach for the visualisation of a set of time series that
combines an echo state network with an autoencoder. For each time series in the
dataset we train an echo state network, using a common and fixed reservoir of
hidden neurons, and use the optimised readout weights as the new
representation. Dimensionality reduction is then performed via an autoencoder
on the readout weight representations. The crux of the work is to equip the
autoencoder with a loss function that correctly interprets the reconstructed
readout weights by associating them with a reconstruction error measured in the
data space of sequences. This essentially amounts to measuring the predictive
performance that the reconstructed readout weights exhibit on their
corresponding sequences when plugged back into the echo state network with the
same fixed reservoir. We demonstrate that the proposed visualisation framework
can deal both with real valued sequences as well as binary sequences. We derive
magnification factors in order to analyse distance preservations and
distortions in the visualisation space. The versatility and advantages of the
proposed method are demonstrated on datasets of time series that originate from
diverse domains.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 14:26:21 GMT"
}
] | 2016-01-22T00:00:00 | [
[
"Gianniotis",
"Nikolaos",
""
],
[
"Kügler",
"Sven D.",
""
],
[
"Tiňo",
"Peter",
""
],
[
"Polsterer",
"Kai L.",
""
]
] | TITLE: Model-Coupled Autoencoder for Time Series Visualisation
ABSTRACT: We present an approach for the visualisation of a set of time series that
combines an echo state network with an autoencoder. For each time series in the
dataset we train an echo state network, using a common and fixed reservoir of
hidden neurons, and use the optimised readout weights as the new
representation. Dimensionality reduction is then performed via an autoencoder
on the readout weight representations. The crux of the work is to equip the
autoencoder with a loss function that correctly interprets the reconstructed
readout weights by associating them with a reconstruction error measured in the
data space of sequences. This essentially amounts to measuring the predictive
performance that the reconstructed readout weights exhibit on their
corresponding sequences when plugged back into the echo state network with the
same fixed reservoir. We demonstrate that the proposed visualisation framework
can deal both with real valued sequences as well as binary sequences. We derive
magnification factors in order to analyse distance preservations and
distortions in the visualisation space. The versatility and advantages of the
proposed method are demonstrated on datasets of time series that originate from
diverse domains.
| no_new_dataset | 0.947721 |
1601.05767 | Subit Chakrabarti | Subit Chakrabarti and Jasmeet Judge and Tara Bongiovanni and Anand
Rangarajan and Sanjay Ranka | Spatial Scaling of Satellite Soil Moisture using Temporal Correlations
and Ensemble Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel algorithm is developed to downscale soil moisture (SM), obtained at
satellite scales of 10-40 km by utilizing its temporal correlations to
historical auxiliary data at finer scales. Including such correlations
drastically reduces the size of the training set needed, accounts for
time-lagged relationships, and enables downscaling even in the presence of
short gaps in the auxiliary data. The algorithm is based upon bagged regression
trees (BRT) and uses correlations between high-resolution remote sensing
products and SM observations. The algorithm trains multiple regression trees
and automatically chooses the trees that generate the best downscaled
estimates. The algorithm was evaluated using a multi-scale synthetic dataset in
north central Florida for two years, including two growing seasons of corn and
one growing season of cotton per year. The time-averaged error across the
region was found to be 0.01 $\mathrm{m}^3/\mathrm{m}^3$, with a standard
deviation of 0.012 $\mathrm{m}^3/\mathrm{m}^3$ when 0.02% of the data were used
for training in addition to temporal correlations from the past seven days, and
all available data from the past year. The maximum spatially averaged errors
obtained using this algorithm in downscaled SM were 0.005
$\mathrm{m}^3/\mathrm{m}^3$, for pixels with cotton land-cover. When land
surface temperature~(LST) on the day of downscaling was not included in the
algorithm to simulate "data gaps", the spatially averaged error increased
minimally by 0.015 $\mathrm{m}^3/\mathrm{m}^3$ when LST is unavailable on the
day of downscaling. The results indicate that the BRT-based algorithm provides
high accuracy for downscaling SM using complex non-linear spatio-temporal
correlations, under heterogeneous micro meteorological conditions.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 20:19:19 GMT"
}
] | 2016-01-22T00:00:00 | [
[
"Chakrabarti",
"Subit",
""
],
[
"Judge",
"Jasmeet",
""
],
[
"Bongiovanni",
"Tara",
""
],
[
"Rangarajan",
"Anand",
""
],
[
"Ranka",
"Sanjay",
""
]
] | TITLE: Spatial Scaling of Satellite Soil Moisture using Temporal Correlations
and Ensemble Learning
ABSTRACT: A novel algorithm is developed to downscale soil moisture (SM), obtained at
satellite scales of 10-40 km by utilizing its temporal correlations to
historical auxiliary data at finer scales. Including such correlations
drastically reduces the size of the training set needed, accounts for
time-lagged relationships, and enables downscaling even in the presence of
short gaps in the auxiliary data. The algorithm is based upon bagged regression
trees (BRT) and uses correlations between high-resolution remote sensing
products and SM observations. The algorithm trains multiple regression trees
and automatically chooses the trees that generate the best downscaled
estimates. The algorithm was evaluated using a multi-scale synthetic dataset in
north central Florida for two years, including two growing seasons of corn and
one growing season of cotton per year. The time-averaged error across the
region was found to be 0.01 $\mathrm{m}^3/\mathrm{m}^3$, with a standard
deviation of 0.012 $\mathrm{m}^3/\mathrm{m}^3$ when 0.02% of the data were used
for training in addition to temporal correlations from the past seven days, and
all available data from the past year. The maximum spatially averaged errors
obtained using this algorithm in downscaled SM were 0.005
$\mathrm{m}^3/\mathrm{m}^3$, for pixels with cotton land-cover. When land
surface temperature~(LST) on the day of downscaling was not included in the
algorithm to simulate "data gaps", the spatially averaged error increased
minimally by 0.015 $\mathrm{m}^3/\mathrm{m}^3$ when LST is unavailable on the
day of downscaling. The results indicate that the BRT-based algorithm provides
high accuracy for downscaling SM using complex non-linear spatio-temporal
correlations, under heterogeneous micro meteorological conditions.
| no_new_dataset | 0.950686 |
1511.06380 | William Lotter | William Lotter, Gabriel Kreiman, David Cox | Unsupervised Learning of Visual Structure using Predictive Generative
Networks | under review as conference paper at ICLR 2016 | null | null | null | cs.LG cs.AI cs.CV q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to predict future states of the environment is a central pillar
of intelligence. At its core, effective prediction requires an internal model
of the world and an understanding of the rules by which the world changes.
Here, we explore the internal models developed by deep neural networks trained
using a loss based on predicting future frames in synthetic video sequences,
using a CNN-LSTM-deCNN framework. We first show that this architecture can
achieve excellent performance in visual sequence prediction tasks, including
state-of-the-art performance in a standard 'bouncing balls' dataset (Sutskever
et al., 2009). Using a weighted mean-squared error and adversarial loss
(Goodfellow et al., 2014), the same architecture successfully extrapolates
out-of-the-plane rotations of computer-generated faces. Furthermore, despite
being trained end-to-end to predict only pixel-level information, our
Predictive Generative Networks learn a representation of the latent structure
of the underlying three-dimensional objects themselves. Importantly, we find
that this representation is naturally tolerant to object transformations, and
generalizes well to new tasks, such as classification of static images. Similar
models trained solely with a reconstruction loss fail to generalize as
effectively. We argue that prediction can serve as a powerful unsupervised loss
for learning rich internal representations of high-level object features.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 21:10:17 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jan 2016 05:50:46 GMT"
}
] | 2016-01-21T00:00:00 | [
[
"Lotter",
"William",
""
],
[
"Kreiman",
"Gabriel",
""
],
[
"Cox",
"David",
""
]
] | TITLE: Unsupervised Learning of Visual Structure using Predictive Generative
Networks
ABSTRACT: The ability to predict future states of the environment is a central pillar
of intelligence. At its core, effective prediction requires an internal model
of the world and an understanding of the rules by which the world changes.
Here, we explore the internal models developed by deep neural networks trained
using a loss based on predicting future frames in synthetic video sequences,
using a CNN-LSTM-deCNN framework. We first show that this architecture can
achieve excellent performance in visual sequence prediction tasks, including
state-of-the-art performance in a standard 'bouncing balls' dataset (Sutskever
et al., 2009). Using a weighted mean-squared error and adversarial loss
(Goodfellow et al., 2014), the same architecture successfully extrapolates
out-of-the-plane rotations of computer-generated faces. Furthermore, despite
being trained end-to-end to predict only pixel-level information, our
Predictive Generative Networks learn a representation of the latent structure
of the underlying three-dimensional objects themselves. Importantly, we find
that this representation is naturally tolerant to object transformations, and
generalizes well to new tasks, such as classification of static images. Similar
models trained solely with a reconstruction loss fail to generalize as
effectively. We argue that prediction can serve as a powerful unsupervised loss
for learning rich internal representations of high-level object features.
| no_new_dataset | 0.942665 |
1511.06418 | Klaus Greff | Klaus Greff, Rupesh Kumar Srivastava, J\"urgen Schmidhuber | Binding via Reconstruction Clustering | 12 pages, plus 12 pages Appendix | null | null | null | cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Disentangled distributed representations of data are desirable for machine
learning, since they are more expressive and can generalize from fewer
examples. However, for complex data, the distributed representations of
multiple objects present in the same input can interfere and lead to
ambiguities, which is commonly referred to as the binding problem. We argue for
the importance of the binding problem to the field of representation learning,
and develop a probabilistic framework that explicitly models inputs as a
composition of multiple objects. We propose an unsupervised algorithm that uses
denoising autoencoders to dynamically bind features together in multi-object
inputs through an Expectation-Maximization-like clustering process. The
effectiveness of this method is demonstrated on artificially generated datasets
of binary images, showing that it can even generalize to bind together new
objects never seen by the autoencoder during training.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 22:13:11 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Nov 2015 23:35:10 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jan 2016 20:48:53 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jan 2016 19:31:17 GMT"
}
] | 2016-01-21T00:00:00 | [
[
"Greff",
"Klaus",
""
],
[
"Srivastava",
"Rupesh Kumar",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] | TITLE: Binding via Reconstruction Clustering
ABSTRACT: Disentangled distributed representations of data are desirable for machine
learning, since they are more expressive and can generalize from fewer
examples. However, for complex data, the distributed representations of
multiple objects present in the same input can interfere and lead to
ambiguities, which is commonly referred to as the binding problem. We argue for
the importance of the binding problem to the field of representation learning,
and develop a probabilistic framework that explicitly models inputs as a
composition of multiple objects. We propose an unsupervised algorithm that uses
denoising autoencoders to dynamically bind features together in multi-object
inputs through an Expectation-Maximization-like clustering process. The
effectiveness of this method is demonstrated on artificially generated datasets
of binary images, showing that it can even generalize to bind together new
objects never seen by the autoencoder during training.
| no_new_dataset | 0.91383 |
1601.03313 | Valentin Kassarnig | Valentin Kassarnig | Political Speech Generation | 15 pages, class project | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report we present a system that can generate political speeches for a
desired political party. Furthermore, the system allows to specify whether a
speech should hold a supportive or opposing opinion. The system relies on a
combination of several state-of-the-art NLP methods which are discussed in this
report. These include n-grams, Justeson & Katz POS tag filter, recurrent neural
networks, and latent Dirichlet allocation. Sequences of words are generated
based on probabilities obtained from two underlying models: A language model
takes care of the grammatical correctness while a topic model aims for textual
consistency. Both models were trained on the Convote dataset which contains
transcripts from US congressional floor debates. Furthermore, we present a
manual and an automated approach to evaluate the quality of generated speeches.
In an experimental evaluation generated speeches have shown very high quality
in terms of grammatical correctness and sentence transitions.
| [
{
"version": "v1",
"created": "Wed, 13 Jan 2016 16:58:05 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jan 2016 15:47:13 GMT"
}
] | 2016-01-21T00:00:00 | [
[
"Kassarnig",
"Valentin",
""
]
] | TITLE: Political Speech Generation
ABSTRACT: In this report we present a system that can generate political speeches for a
desired political party. Furthermore, the system allows to specify whether a
speech should hold a supportive or opposing opinion. The system relies on a
combination of several state-of-the-art NLP methods which are discussed in this
report. These include n-grams, Justeson & Katz POS tag filter, recurrent neural
networks, and latent Dirichlet allocation. Sequences of words are generated
based on probabilities obtained from two underlying models: A language model
takes care of the grammatical correctness while a topic model aims for textual
consistency. Both models were trained on the Convote dataset which contains
transcripts from US congressional floor debates. Furthermore, we present a
manual and an automated approach to evaluate the quality of generated speeches.
In an experimental evaluation generated speeches have shown very high quality
in terms of grammatical correctness and sentence transitions.
| no_new_dataset | 0.943086 |
1601.05142 | Justin F Brunelle | Justin F. Brunelle and Michele C. Weigle and Michael L. Nelson | Adapting the Hypercube Model to Archive Deferred Representations and
Their Descendants | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The web is today's primary publication medium, making web archiving an
important activity for historical and analytical purposes. Web pages are
increasingly interactive, resulting in pages that are increasingly difficult to
archive. Client-side technologies (e.g., JavaScript) enable interactions that
can potentially change the client-side state of a representation. We refer to
representations that load embedded resources via JavaScript as deferred
representations. It is difficult to archive all of the resources in deferred
representations and the result is archives with web pages that are either
incomplete or that erroneously load embedded resources from the live web.
We propose a method of discovering and crawling deferred representations and
their descendants (representation states that are only reachable through
client-side events). We adapt the Dincturk et al. Hypercube model to construct
a model for archiving descendants, and we measure the number of descendants and
requisite embedded resources discovered in a proof-of-concept crawl. Our
approach identified an average of 38.5 descendants per seed URI crawled, 70.9%
of which are reached through an onclick event. This approach also added 15.6
times more embedded resources than Heritrix to the crawl frontier, but at a
rate that was 38.9 times slower than simply using Heritrix. We show that our
dataset has two levels of descendants. We conclude with proposed crawl policies
and an analysis of the storage requirements for archiving descendants.
| [
{
"version": "v1",
"created": "Wed, 20 Jan 2016 00:48:39 GMT"
}
] | 2016-01-21T00:00:00 | [
[
"Brunelle",
"Justin F.",
""
],
[
"Weigle",
"Michele C.",
""
],
[
"Nelson",
"Michael L.",
""
]
] | TITLE: Adapting the Hypercube Model to Archive Deferred Representations and
Their Descendants
ABSTRACT: The web is today's primary publication medium, making web archiving an
important activity for historical and analytical purposes. Web pages are
increasingly interactive, resulting in pages that are increasingly difficult to
archive. Client-side technologies (e.g., JavaScript) enable interactions that
can potentially change the client-side state of a representation. We refer to
representations that load embedded resources via JavaScript as deferred
representations. It is difficult to archive all of the resources in deferred
representations and the result is archives with web pages that are either
incomplete or that erroneously load embedded resources from the live web.
We propose a method of discovering and crawling deferred representations and
their descendants (representation states that are only reachable through
client-side events). We adapt the Dincturk et al. Hypercube model to construct
a model for archiving descendants, and we measure the number of descendants and
requisite embedded resources discovered in a proof-of-concept crawl. Our
approach identified an average of 38.5 descendants per seed URI crawled, 70.9%
of which are reached through an onclick event. This approach also added 15.6
times more embedded resources than Heritrix to the crawl frontier, but at a
rate that was 38.9 times slower than simply using Heritrix. We show that our
dataset has two levels of descendants. We conclude with proposed crawl policies
and an analysis of the storage requirements for archiving descendants.
| no_new_dataset | 0.933309 |
1601.05266 | Pavlos Sermpezis | Pavlos Sermpezis and Thrasyvoulos Spyropoulos | Effects of Content Popularity on the Performance of Content-Centric
Opportunistic Networking: An Analytical Approach and Applications | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile users are envisioned to exploit direct communication opportunities
between their portable devices, in order to enrich the set of services they can
access through cellular or WiFi networks. Sharing contents of common interest
or providing access to resources or services between peers can enhance a mobile
node's capabilities, offload the cellular network, and disseminate information
to nodes without Internet access. Interest patterns, i.e. how many nodes are
interested in each content or service (popularity), as well as how many users
can provide a content or service (availability) impact the performance and
feasibility of envisioned applications. In this paper, we establish an
analytical framework to study the effects of these factors on the delay and
success probability of a content/service access request through opportunistic
communication. We also apply our framework to the mobile data offloading
problem and provide insights for the optimization of its performance. We
validate our model and results through realistic simulations, using datasets of
real opportunistic networks.
| [
{
"version": "v1",
"created": "Wed, 20 Jan 2016 13:28:29 GMT"
}
] | 2016-01-21T00:00:00 | [
[
"Sermpezis",
"Pavlos",
""
],
[
"Spyropoulos",
"Thrasyvoulos",
""
]
] | TITLE: Effects of Content Popularity on the Performance of Content-Centric
Opportunistic Networking: An Analytical Approach and Applications
ABSTRACT: Mobile users are envisioned to exploit direct communication opportunities
between their portable devices, in order to enrich the set of services they can
access through cellular or WiFi networks. Sharing contents of common interest
or providing access to resources or services between peers can enhance a mobile
node's capabilities, offload the cellular network, and disseminate information
to nodes without Internet access. Interest patterns, i.e. how many nodes are
interested in each content or service (popularity), as well as how many users
can provide a content or service (availability) impact the performance and
feasibility of envisioned applications. In this paper, we establish an
analytical framework to study the effects of these factors on the delay and
success probability of a content/service access request through opportunistic
communication. We also apply our framework to the mobile data offloading
problem and provide insights for the optimization of its performance. We
validate our model and results through realistic simulations, using datasets of
real opportunistic networks.
| no_new_dataset | 0.945399 |