id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1306.1959 | Xin Zhao | Xin Zhao and Bo Li | Pattern Recognition and Revealing using Parallel Coordinates Plot | 8 pages and 6 figures. This paper has been withdrawn by the author
due to publication | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parallel coordinates plot (PCP) is an excellent tool for multivariate
visualization and analysis, but it may fail to reveal inherent structures for
datasets with a large number of items. In this paper, we propose a suite of
novel clustering, dimension ordering and visualization techniques based on PCP,
to reveal and highlight hidden structures. First, we propose a continuous
spline based polycurves design to extract and classify different cluster
aspects of the data. Then, we provide an efficient and optimal correlation
based sorting technique to reorder coordinates, as a helpful visualization tool
for data analysis. Various results generated by our framework visually
represent much structure, trend and correlation information to guide the user,
and improve the efficacy of analysis, especially for complex and noisy
datasets.
| [
{
"version": "v1",
"created": "Sat, 8 Jun 2013 21:18:43 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Nov 2013 21:38:56 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Zhao",
"Xin",
""
],
[
"Li",
"Bo",
""
]
] | TITLE: Pattern Recognition and Revealing using Parallel Coordinates Plot
ABSTRACT: Parallel coordinates plot (PCP) is an excellent tool for multivariate
visualization and analysis, but it may fail to reveal inherent structures for
datasets with a large number of items. In this paper, we propose a suite of
novel clustering, dimension ordering and visualization techniques based on PCP,
to reveal and highlight hidden structures. First, we propose a continuous
spline based polycurves design to extract and classify different cluster
aspects of the data. Then, we provide an efficient and optimal correlation
based sorting technique to reorder coordinates, as a helpful visualization tool
for data analysis. Various results generated by our framework visually
represent much structure, trend and correlation information to guide the user,
and improve the efficacy of analysis, especially for complex and noisy
datasets.
|
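The correlation-based coordinate reordering mentioned in the abstract above lends itself to a small illustration. The sketch below is not the paper's algorithm; it is a minimal, assumption-laden example (the function name `order_axes_by_correlation` and the synthetic data are hypothetical) that greedily chains parallel-coordinate axes so that adjacent dimensions are strongly correlated.

```python
# Minimal sketch of correlation-based axis ordering for a parallel coordinates
# plot (PCP). NOT the paper's algorithm, only an illustration of the general
# idea: place strongly correlated dimensions on adjacent axes.
import numpy as np

def order_axes_by_correlation(data):
    """data: (n_items, n_dims) array. Returns a greedy ordering of dimensions."""
    corr = np.abs(np.corrcoef(data, rowvar=False))   # |Pearson correlation| matrix
    n = corr.shape[0]
    np.fill_diagonal(corr, -1.0)                     # ignore self-correlation
    order = [int(np.unravel_index(corr.argmax(), corr.shape)[0])]  # strongest pair's endpoint
    remaining = set(range(n)) - set(order)
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda j: corr[last, j])  # most correlated unused axis
        order.append(nxt)
        remaining.remove(nxt)
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(500, 5))
    x[:, 3] = 0.9 * x[:, 0] + 0.1 * rng.normal(size=500)  # make dims 0 and 3 correlated
    print(order_axes_by_correlation(x))  # correlated dimensions end up adjacent
```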
1307.0147 | Xin Zhao | Bo Li, Xin Zhao and Hong Qin | 4-Dimensional Geometry Lens: A Novel Volumetric Magnification Approach | 12 pages. In CGF 2013. This paper has been withdrawn by the author
due to publication | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel methodology that utilizes 4-Dimensional (4D) space
deformation to simulate a magnification lens on versatile volume datasets and
textured solid models. Compared with other magnification methods (e.g.,
geometric optics, mesh editing), 4D differential geometry theory and its
practices are much more flexible and powerful for preserving shape features
(i.e., minimizing angle distortion), and easier to adapt to versatile solid
models. The primary advantage of 4D space lies in the following fact: we can
now easily magnify the volume of regions of interest (ROIs) from the additional
dimension, while keeping the rest region unchanged. To achieve this primary
goal, we first embed a 3D volumetric input into 4D space and magnify ROIs in
the 4th dimension. Then we flatten the 4D shape back into 3D space to
accommodate other typical applications in the real 3D world. In order to
enforce distortion minimization, in both steps we devise the high dimensional
geometry techniques based on rigorous 4D geometry theory for 3D/4D mapping back
and forth to amend the distortion. Our system can preserve not only focus
region, but also context region and global shape. We demonstrate the
effectiveness, robustness, and efficacy of our framework with a variety of
models ranging from tetrahedral meshes to volume datasets.
| [
{
"version": "v1",
"created": "Sat, 29 Jun 2013 20:20:37 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Nov 2013 21:38:37 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Li",
"Bo",
""
],
[
"Zhao",
"Xin",
""
],
[
"Qin",
"Hong",
""
]
] | TITLE: 4-Dimensional Geometry Lens: A Novel Volumetric Magnification Approach
ABSTRACT: We present a novel methodology that utilizes 4-Dimensional (4D) space
deformation to simulate a magnification lens on versatile volume datasets and
textured solid models. Compared with other magnification methods (e.g.,
geometric optics, mesh editing), 4D differential geometry theory and its
practices are much more flexible and powerful for preserving shape features
(i.e., minimizing angle distortion), and easier to adapt to versatile solid
models. The primary advantage of 4D space lies in the following fact: we can
now easily magnify the volume of regions of interest (ROIs) from the additional
dimension, while keeping the rest region unchanged. To achieve this primary
goal, we first embed a 3D volumetric input into 4D space and magnify ROIs in
the 4th dimension. Then we flatten the 4D shape back into 3D space to
accommodate other typical applications in the real 3D world. In order to
enforce distortion minimization, in both steps we devise the high dimensional
geometry techniques based on rigorous 4D geometry theory for 3D/4D mapping back
and forth to amend the distortion. Our system can preserve not only focus
region, but also context region and global shape. We demonstrate the
effectiveness, robustness, and efficacy of our framework with a variety of
models ranging from tetrahedral meshes to volume datasets.
|
1307.1739 | Xin Zhao | Xin Zhao and Arie Kaufman | Anatomical Feature-guided Volumetric Registration of Multimodal Prostate
MRI | This paper has been withdrawn by the author due to publication | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radiological imaging of prostate is becoming more popular among researchers
and clinicians in searching for diseases, primarily cancer. Scans might be
acquired at different times, with patient movement between scans, or with
different equipment, resulting in multiple datasets that need to be registered.
For this issue, we introduce a registration method using anatomical
feature-guided mutual information. Prostate scans of the same patient taken in
three different orientations are first aligned for the accurate detection of
anatomical features in 3D. Then, our pipeline allows for multiple modalities
registration through the use of anatomical features, such as the interior
urethra of prostate and gland utricle, in a bijective way. The novelty of this
approach is the application of anatomical features as the pre-specified
corresponding landmarks for prostate registration. We evaluate the registration
results through both artificial and clinical datasets. Registration accuracy is
evaluated by performing statistical analysis of local intensity differences or
spatial differences of anatomical landmarks between various MR datasets.
Evaluation results demonstrate that our method statistically significantly
improves the quality of registration. Although this strategy is tested for
MRI-guided brachytherapy, the preliminary results from these experiments
suggest that it can be also applied to other settings such as transrectal
ultrasound-guided or CT-guided therapy, where the integration of preoperative
MRI may have a significant impact upon treatment planning and guidance.
| [
{
"version": "v1",
"created": "Sat, 6 Jul 2013 00:30:40 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Nov 2013 21:38:08 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Zhao",
"Xin",
""
],
[
"Kaufman",
"Arie",
""
]
] | TITLE: Anatomical Feature-guided Volumetric Registration of Multimodal Prostate
MRI
ABSTRACT: Radiological imaging of prostate is becoming more popular among researchers
and clinicians in searching for diseases, primarily cancer. Scans might be
acquired at different times, with patient movement between scans, or with
different equipment, resulting in multiple datasets that need to be registered.
For this issue, we introduce a registration method using anatomical
feature-guided mutual information. Prostate scans of the same patient taken in
three different orientations are first aligned for the accurate detection of
anatomical features in 3D. Then, our pipeline allows for multiple modalities
registration through the use of anatomical features, such as the interior
urethra of prostate and gland utricle, in a bijective way. The novelty of this
approach is the application of anatomical features as the pre-specified
corresponding landmarks for prostate registration. We evaluate the registration
results through both artificial and clinical datasets. Registration accuracy is
evaluated by performing statistical analysis of local intensity differences or
spatial differences of anatomical landmarks between various MR datasets.
Evaluation results demonstrate that our method statistically significantly
improves the quality of registration. Although this strategy is tested for
MRI-guided brachytherapy, the preliminary results from these experiments
suggest that it can be also applied to other settings such as transrectal
ultrasound-guided or CT-guided therapy, where the integration of preoperative
MRI may have a significant impact upon treatment planning and guidance.
|
1311.0378 | George Teodoro | George Teodoro and Tahsin Kurc and Jun Kong and Lee Cooper and Joel
Saltz | Comparative Performance Analysis of Intel Xeon Phi, GPU, and CPU | 11 pages, 2 figures | null | null | null | cs.DC cs.PF | http://creativecommons.org/licenses/publicdomain/ | We investigate and characterize the performance of an important class of
operations on GPUs and Many Integrated Core (MIC) architectures. Our work is
motivated by applications that analyze low-dimensional spatial datasets
captured by high resolution sensors, such as image datasets obtained from whole
slide tissue specimens using microscopy image scanners. We identify the data
access and computation patterns of operations in object segmentation and
feature computation categories. We systematically implement and evaluate the
performance of these core operations on modern CPUs, GPUs, and MIC systems for
a microscopy image analysis application. Our results show that (1) the data
access pattern and parallelization strategy employed by the operations strongly
affect their performance; while the performance on a MIC of operations that
perform regular data access is comparable to or sometimes better than that on a
GPU, (2) GPUs are significantly more efficient than MICs for operations and
algorithms that irregularly access data. This is a result of the low
performance of the latter when it comes to random data access; (3) adequate
coordinated execution on MICs and CPUs using a performance aware task
scheduling strategy improves about 1.29x over a first-come-first-served
strategy. The example application attained an efficiency of 84% in an execution
with 192 nodes (3072 CPU cores and 192 MICs).
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2013 14:00:40 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Teodoro",
"George",
""
],
[
"Kurc",
"Tahsin",
""
],
[
"Kong",
"Jun",
""
],
[
"Cooper",
"Lee",
""
],
[
"Saltz",
"Joel",
""
]
] | TITLE: Comparative Performance Analysis of Intel Xeon Phi, GPU, and CPU
ABSTRACT: We investigate and characterize the performance of an important class of
operations on GPUs and Many Integrated Core (MIC) architectures. Our work is
motivated by applications that analyze low-dimensional spatial datasets
captured by high resolution sensors, such as image datasets obtained from whole
slide tissue specimens using microscopy image scanners. We identify the data
access and computation patterns of operations in object segmentation and
feature computation categories. We systematically implement and evaluate the
performance of these core operations on modern CPUs, GPUs, and MIC systems for
a microscopy image analysis application. Our results show that (1) the data
access pattern and parallelization strategy employed by the operations strongly
affect their performance; while the performance on a MIC of operations that
perform regular data access is comparable to or sometimes better than that on a
GPU, (2) GPUs are significantly more efficient than MICs for operations and
algorithms that irregularly access data. This is a result of the low
performance of the latter when it comes to random data access; (3) adequate
coordinated execution on MICs and CPUs using a performance aware task
scheduling strategy improves about 1.29x over a first-come-first-served
strategy. The example application attained an efficiency of 84% in an execution
with 192 nodes (3072 CPU cores and 192 MICs).
|
1311.0636 | Dhruv Mahajan | Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan, Leon Bottou | A Parallel SGD method with Strong Convergence | null | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel parallel stochastic gradient descent (SGD) method
that is obtained by applying parallel sets of SGD iterations (each set
operating on one node using the data residing in it) for finding the direction
in each iteration of a batch descent method. The method has strong convergence
properties. Experiments on datasets with high dimensional feature spaces show
the value of this method.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2013 10:31:11 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Mahajan",
"Dhruv",
""
],
[
"Keerthi",
"S. Sathiya",
""
],
[
"Sundararajan",
"S.",
""
],
[
"Bottou",
"Leon",
""
]
] | TITLE: A Parallel SGD method with Strong Convergence
ABSTRACT: This paper proposes a novel parallel stochastic gradient descent (SGD) method
that is obtained by applying parallel sets of SGD iterations (each set
operating on one node using the data residing in it) for finding the direction
in each iteration of a batch descent method. The method has strong convergence
properties. Experiments on datasets with high dimensional feature spaces show
the value of this method.
|
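The parallel-SGD construction summarized above can be made concrete with a toy sketch. This is not the authors' method or code; it only illustrates, under simplifying assumptions (logistic loss, array slices standing in for per-node data, hypothetical helper names), how per-node SGD displacements can be averaged into a direction for a batch update.

```python
# Sketch of a parallel-SGD-style direction for batch descent on logistic loss.
# Each "node" runs local SGD on its shard; the averaged displacement is used as
# the direction for a single batch step. Illustrative only.
import numpy as np

def logistic_grad(w, X, y):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    return X.T @ (p - y) / len(y)

def local_sgd_direction(w, X, y, lr=0.1, epochs=1, seed=0):
    rng = np.random.default_rng(seed)
    w_local = w.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w_local -= lr * logistic_grad(w_local, X[i:i+1], y[i:i+1])
    return w_local - w                        # displacement produced by local SGD

def parallel_sgd_step(w, shards, step=1.0):
    dirs = [local_sgd_direction(w, X, y, seed=k) for k, (X, y) in enumerate(shards)]
    return w + step * np.mean(dirs, axis=0)   # move along the averaged direction

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 20)); w_true = rng.normal(size=20)
    y = (X @ w_true + 0.1 * rng.normal(size=1000) > 0).astype(float)
    shards = [(X[i::4], y[i::4]) for i in range(4)]    # 4 simulated nodes
    w = np.zeros(20)
    for _ in range(20):
        w = parallel_sgd_step(w, shards)
    print("batch gradient norm:", np.linalg.norm(logistic_grad(w, X, y)))
```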
1311.0833 | Zitao Liu | Zitao Liu | A Comparative Study on Linguistic Feature Selection in Sentiment
Polarity Classification | arXiv admin note: text overlap with arXiv:cs/0205070 by other authors | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment polarity classification is perhaps the most widely studied topic.
It classifies an opinionated document as expressing a positive or negative
opinion. In this paper, using a movie review dataset, we perform a comparative
study with different single kinds of linguistic features and combinations of
these features. We find that the classic topic-based classifiers (Naive Bayes
and Support Vector Machine) do not perform as well on sentiment polarity
classification. We also find that with some combinations of different linguistic
features, the classification accuracy can be boosted substantially. We give some
reasonable explanations for these outcomes.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2013 20:11:35 GMT"
}
] | 2013-11-05T00:00:00 | [
[
"Liu",
"Zitao",
""
]
] | TITLE: A Comparative Study on Linguistic Feature Selection in Sentiment
Polarity Classification
ABSTRACT: Sentiment polarity classification is perhaps the most widely studied topic.
It classifies an opinionated document as expressing a positive or negative
opinion. In this paper, using a movie review dataset, we perform a comparative
study with different single kinds of linguistic features and combinations of
these features. We find that the classic topic-based classifiers (Naive Bayes
and Support Vector Machine) do not perform as well on sentiment polarity
classification. We also find that with some combinations of different linguistic
features, the classification accuracy can be boosted substantially. We give some
reasonable explanations for these outcomes.
|
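For readers who want a feel for the kind of feature-combination experiment this abstract describes, the snippet below is a rough, hedged approximation using scikit-learn: it compares unigram features against combined unigram+bigram features for Naive Bayes and a linear SVM. The tiny toy corpus stands in for the movie-review dataset and is not the data used in the paper.

```python
# Illustrative comparison of single vs. combined n-gram features for sentiment
# polarity classification (toy data stands in for the movie-review corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

docs = ["a wonderful, moving film", "truly awful acting and a dull plot",
        "not good at all", "not bad, quite enjoyable",
        "a boring waste of time", "great performances and a touching story"] * 10
labels = [1, 0, 0, 1, 0, 1] * 10

for ngrams in [(1, 1), (1, 2)]:                    # unigrams vs. unigrams+bigrams
    for clf in [MultinomialNB(), LinearSVC()]:
        pipe = make_pipeline(CountVectorizer(ngram_range=ngrams), clf)
        score = cross_val_score(pipe, docs, labels, cv=5).mean()
        print(ngrams, type(clf).__name__, round(score, 3))  # toy scores only
```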
1305.6659 | Trevor Campbell | Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence
Carin | Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process
Mixture | This paper is from NIPS 2013. Please use the following BibTeX
citation: @inproceedings{Campbell13_NIPS, Author = {Trevor Campbell and Miao
Liu and Brian Kulis and Jonathan P. How and Lawrence Carin}, Title = {Dynamic
Clustering via Asymptotics of the Dependent Dirichlet Process}, Booktitle =
{Advances in Neural Information Processing Systems (NIPS)}, Year = {2013}} | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel algorithm, based upon the dependent Dirichlet
process mixture model (DDPMM), for clustering batch-sequential data containing
an unknown number of evolving clusters. The algorithm is derived via a
low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM,
and provides a hard clustering with convergence guarantees similar to those of
the k-means algorithm. Empirical results from a synthetic test with moving
Gaussian clusters and a test with real ADS-B aircraft trajectory data
demonstrate that the algorithm requires orders of magnitude less computational
time than contemporary probabilistic and hard clustering algorithms, while
providing higher accuracy on the examined datasets.
| [
{
"version": "v1",
"created": "Tue, 28 May 2013 23:59:16 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Nov 2013 18:25:39 GMT"
}
] | 2013-11-04T00:00:00 | [
[
"Campbell",
"Trevor",
""
],
[
"Liu",
"Miao",
""
],
[
"Kulis",
"Brian",
""
],
[
"How",
"Jonathan P.",
""
],
[
"Carin",
"Lawrence",
""
]
] | TITLE: Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process
Mixture
ABSTRACT: This paper presents a novel algorithm, based upon the dependent Dirichlet
process mixture model (DDPMM), for clustering batch-sequential data containing
an unknown number of evolving clusters. The algorithm is derived via a
low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM,
and provides a hard clustering with convergence guarantees similar to those of
the k-means algorithm. Empirical results from a synthetic test with moving
Gaussian clusters and a test with real ADS-B aircraft trajectory data
demonstrate that the algorithm requires orders of magnitude less computational
time than contemporary probabilistic and hard clustering algorithms, while
providing higher accuracy on the examined datasets.
|
1307.1493 | Stefan Wager | Stefan Wager, Sida Wang, and Percy Liang | Dropout Training as Adaptive Regularization | 11 pages. Advances in Neural Information Processing Systems (NIPS),
2013 | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dropout and other feature noising schemes control overfitting by artificially
corrupting the training data. For generalized linear models, dropout performs a
form of adaptive regularization. Using this viewpoint, we show that the dropout
regularizer is first-order equivalent to an L2 regularizer applied after
scaling the features by an estimate of the inverse diagonal Fisher information
matrix. We also establish a connection to AdaGrad, an online learning
algorithm, and find that a close relative of AdaGrad operates by repeatedly
solving linear dropout-regularized problems. By casting dropout as
regularization, we develop a natural semi-supervised algorithm that uses
unlabeled data to create a better adaptive regularizer. We apply this idea to
document classification tasks, and show that it consistently boosts the
performance of dropout training, improving on state-of-the-art results on the
IMDB reviews dataset.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 21:33:56 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Nov 2013 17:56:35 GMT"
}
] | 2013-11-04T00:00:00 | [
[
"Wager",
"Stefan",
""
],
[
"Wang",
"Sida",
""
],
[
"Liang",
"Percy",
""
]
] | TITLE: Dropout Training as Adaptive Regularization
ABSTRACT: Dropout and other feature noising schemes control overfitting by artificially
corrupting the training data. For generalized linear models, dropout performs a
form of adaptive regularization. Using this viewpoint, we show that the dropout
regularizer is first-order equivalent to an L2 regularizer applied after
scaling the features by an estimate of the inverse diagonal Fisher information
matrix. We also establish a connection to AdaGrad, an online learning
algorithm, and find that a close relative of AdaGrad operates by repeatedly
solving linear dropout-regularized problems. By casting dropout as
regularization, we develop a natural semi-supervised algorithm that uses
unlabeled data to create a better adaptive regularizer. We apply this idea to
document classification tasks, and show that it consistently boosts the
performance of dropout training, improving on state-of-the-art results on the
IMDB reviews dataset.
|
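The equivalence stated in this abstract (dropout as an L2 penalty after scaling features by an estimate of the inverse diagonal Fisher information) can be sketched numerically for logistic regression. The code below is an illustration of that idea under stated assumptions, not the authors' estimator: penalizing each coordinate by an estimated diagonal Fisher entry is the same as plain L2 after the feature rescaling described above.

```python
# Sketch: dropout-style adaptive regularization for logistic regression as an
# L2 penalty weighted by an estimate of the diagonal Fisher information
# (equivalently, plain L2 after scaling features by the inverse square-root
# Fisher diagonal). Illustration only; not the paper's exact estimator.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_adaptive_l2(X, y, lam=0.1, lr=0.5, steps=500):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Diagonal Fisher estimate: I_jj ~ (1/n) * sum_i p_i (1 - p_i) x_ij^2
        fisher_diag = (X ** 2 * (p * (1 - p))[:, None]).mean(axis=0) + 1e-8
        grad = X.T @ (p - y) / n + lam * fisher_diag * w   # adaptive L2 term
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 10)) * np.linspace(0.2, 3.0, 10)  # uneven feature scales
    y = (sigmoid(X @ rng.normal(size=10)) > 0.5).astype(float)
    print(fit_adaptive_l2(X, y).round(3))
```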
1306.4626 | Laetitia Gauvin | Laetitia Gauvin, Andr\'e Panisson, Ciro Cattuto and Alain Barrat | Activity clocks: spreading dynamics on temporal networks of human
contact | null | Scientific Reports 3, 3099 (2013) | 10.1038/srep03099 | null | physics.soc-ph cs.SI nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamical processes on time-varying complex networks are key to understanding
and modeling a broad variety of processes in socio-technical systems. Here we
focus on empirical temporal networks of human proximity and we aim at
understanding the factors that, in simulation, shape the arrival time
distribution of simple spreading processes. Abandoning the notion of wall-clock
time in favour of node-specific clocks based on activity exposes robust
statistical patterns in the arrival times across different social contexts.
Using randomization strategies and generative models constrained by data, we
show that these patterns can be understood in terms of heterogeneous
inter-event time distributions coupled with heterogeneous numbers of events per
edge. We also show, both empirically and by using a synthetic dataset, that
significant deviations from the above behavior can be caused by the presence of
edge classes with strong activity correlations.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2013 17:44:40 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Oct 2013 14:13:04 GMT"
}
] | 2013-11-01T00:00:00 | [
[
"Gauvin",
"Laetitia",
""
],
[
"Panisson",
"André",
""
],
[
"Cattuto",
"Ciro",
""
],
[
"Barrat",
"Alain",
""
]
] | TITLE: Activity clocks: spreading dynamics on temporal networks of human
contact
ABSTRACT: Dynamical processes on time-varying complex networks are key to understanding
and modeling a broad variety of processes in socio-technical systems. Here we
focus on empirical temporal networks of human proximity and we aim at
understanding the factors that, in simulation, shape the arrival time
distribution of simple spreading processes. Abandoning the notion of wall-clock
time in favour of node-specific clocks based on activity exposes robust
statistical patterns in the arrival times across different social contexts.
Using randomization strategies and generative models constrained by data, we
show that these patterns can be understood in terms of heterogeneous
inter-event time distributions coupled with heterogeneous numbers of events per
edge. We also show, both empirically and by using a synthetic dataset, that
significant deviations from the above behavior can be caused by the presence of
edge classes with strong activity correlations.
|
1308.6324 | Jakub Tomczak Ph.D. | Jakub M. Tomczak | Prediction of breast cancer recurrence using Classification Restricted
Boltzmann Machine with Dropping | technical report | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we apply Classification Restricted Boltzmann Machine
(ClassRBM) to the problem of predicting breast cancer recurrence. According to
the Polish National Cancer Registry, in 2010 only, the breast cancer caused
almost 25% of all diagnosed cases of cancer in Poland. We propose how to use
ClassRBM for predicting breast cancer return and discovering relevant inputs
(symptoms) in illness reappearance. Next, we outline a general probabilistic
framework for learning Boltzmann machines with masks, which we refer to as
Dropping. The fashion of generating masks leads to different learning methods,
i.e., DropOut, DropConnect. We propose a new method called DropPart which is a
generalization of DropConnect. In DropPart the Beta distribution instead of
Bernoulli distribution in DropConnect is used. At the end, we carry out an
experiment using a real-life dataset consisting of 949 cases, provided by the
Institute of Oncology Ljubljana.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2013 22:08:29 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Oct 2013 16:10:27 GMT"
}
] | 2013-10-31T00:00:00 | [
[
"Tomczak",
"Jakub M.",
""
]
] | TITLE: Prediction of breast cancer recurrence using Classification Restricted
Boltzmann Machine with Dropping
ABSTRACT: In this paper, we apply Classification Restricted Boltzmann Machine
(ClassRBM) to the problem of predicting breast cancer recurrence. According to
the Polish National Cancer Registry, in 2010 alone, breast cancer caused
almost 25% of all diagnosed cases of cancer in Poland. We propose how to use
ClassRBM for predicting breast cancer return and discovering relevant inputs
(symptoms) in illness reappearance. Next, we outline a general probabilistic
framework for learning Boltzmann machines with masks, which we refer to as
Dropping. The fashion of generating masks leads to different learning methods,
i.e., DropOut, DropConnect. We propose a new method called DropPart which is a
generalization of DropConnect. In DropPart the Beta distribution instead of
Bernoulli distribution in DropConnect is used. At the end, we carry out an
experiment using a real-life dataset consisting of 949 cases, provided by the
Institute of Oncology Ljubljana.
|
1308.1995 | Shuyang Lin | Shuyang Lin, Xiangnan Kong, Philip S. Yu | Predicting Trends in Social Networks via Dynamic Activeness Model | 10 pages, a shorter version published in CIKM 2013 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the effect of word-of-the-mouth, trends in social networks are now
playing a significant role in shaping people's lives. Predicting dynamic trends
is an important problem with many useful applications. There are three dynamic
characteristics of a trend that should be captured by a trend model: intensity,
coverage and duration. However, existing approaches on the information
diffusion are not capable of capturing these three characteristics. In this
paper, we study the problem of predicting dynamic trends in social networks. We
first define related concepts to quantify the dynamic characteristics of trends
in social networks, and formalize the problem of trend prediction. We then
propose a Dynamic Activeness (DA) model based on the novel concept of
activeness, and design a trend prediction algorithm using the DA model. Due to
the use of stacking principle, we are able to make the prediction algorithm
very efficient. We examine the prediction algorithm on a number of real social
network datasets, and show that it is more accurate than state-of-the-art
approaches.
| [
{
"version": "v1",
"created": "Thu, 8 Aug 2013 22:35:35 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Oct 2013 03:31:24 GMT"
}
] | 2013-10-30T00:00:00 | [
[
"Lin",
"Shuyang",
""
],
[
"Kong",
"Xiangnan",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Predicting Trends in Social Networks via Dynamic Activeness Model
ABSTRACT: With the effect of word-of-the-mouth, trends in social networks are now
playing a significant role in shaping people's lives. Predicting dynamic trends
is an important problem with many useful applications. There are three dynamic
characteristics of a trend that should be captured by a trend model: intensity,
coverage and duration. However, existing approaches on the information
diffusion are not capable of capturing these three characteristics. In this
paper, we study the problem of predicting dynamic trends in social networks. We
first define related concepts to quantify the dynamic characteristics of trends
in social networks, and formalize the problem of trend prediction. We then
propose a Dynamic Activeness (DA) model based on the novel concept of
activeness, and design a trend prediction algorithm using the DA model. Due to
the use of stacking principle, we are able to make the prediction algorithm
very efficient. We examine the prediction algorithm on a number of real social
network datasets, and show that it is more accurate than state-of-the-art
approaches.
|
1107.0789 | Lester Mackey | Lester Mackey, Ameet Talwalkar, Michael I. Jordan | Distributed Matrix Completion and Robust Factorization | 35 pages, 6 figures | null | null | null | cs.LG cs.DS cs.NA math.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If learning methods are to scale to the massive sizes of modern datasets, it
is essential for the field of machine learning to embrace parallel and
distributed computing. Inspired by the recent development of matrix
factorization methods with rich theory but poor computational complexity and by
the relative ease of mapping matrices onto distributed architectures, we
introduce a scalable divide-and-conquer framework for noisy matrix
factorization. We present a thorough theoretical analysis of this framework in
which we characterize the statistical errors introduced by the "divide" step
and control their magnitude in the "conquer" step, so that the overall
algorithm enjoys high-probability estimation guarantees comparable to those of
its base algorithm. We also present experiments in collaborative filtering and
video background modeling that demonstrate the near-linear to superlinear
speed-ups attainable with this approach.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2011 06:03:44 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2011 00:59:30 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Sep 2011 01:38:14 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Nov 2011 05:37:48 GMT"
},
{
"version": "v5",
"created": "Fri, 18 May 2012 09:28:27 GMT"
},
{
"version": "v6",
"created": "Tue, 14 Aug 2012 17:33:30 GMT"
},
{
"version": "v7",
"created": "Mon, 28 Oct 2013 06:02:12 GMT"
}
] | 2013-10-29T00:00:00 | [
[
"Mackey",
"Lester",
""
],
[
"Talwalkar",
"Ameet",
""
],
[
"Jordan",
"Michael I.",
""
]
] | TITLE: Distributed Matrix Completion and Robust Factorization
ABSTRACT: If learning methods are to scale to the massive sizes of modern datasets, it
is essential for the field of machine learning to embrace parallel and
distributed computing. Inspired by the recent development of matrix
factorization methods with rich theory but poor computational complexity and by
the relative ease of mapping matrices onto distributed architectures, we
introduce a scalable divide-and-conquer framework for noisy matrix
factorization. We present a thorough theoretical analysis of this framework in
which we characterize the statistical errors introduced by the "divide" step
and control their magnitude in the "conquer" step, so that the overall
algorithm enjoys high-probability estimation guarantees comparable to those of
its base algorithm. We also present experiments in collaborative filtering and
video background modeling that demonstrate the near-linear to superlinear
speed-ups attainable with this approach.
|
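The divide-and-conquer pattern described above can be illustrated with a small low-rank sketch. This is not the paper's DFC algorithm and omits the missing-entry (completion) aspect entirely; it simply factors column blocks independently and recombines them by projecting onto one block's column space, assuming a fully observed noisy matrix. All names are illustrative.

```python
# Divide-and-conquer low-rank approximation sketch: factor column blocks
# independently ("divide"), then project all block estimates onto the column
# space of one block ("conquer"). Illustrative, not the paper's DFC method.
import numpy as np

def low_rank(M, k):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]              # rank-k approximation of M

def dfc_project(M, k, n_blocks=4):
    blocks = np.array_split(M, n_blocks, axis=1)           # "divide" over columns
    approx = [low_rank(B, k) for B in blocks]               # cheap local factorizations
    U, _, _ = np.linalg.svd(approx[0], full_matrices=False)
    U = U[:, :k]                                            # column space of first block
    return np.hstack([U @ (U.T @ A) for A in approx])       # "conquer" by projection

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 300))   # rank-8 signal
    M = L + 0.05 * rng.normal(size=L.shape)                     # observed noisy matrix
    Lhat = dfc_project(M, k=8)
    print("relative error:", np.linalg.norm(Lhat - L) / np.linalg.norm(L))
```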
1310.7297 | Farhana Murtaza Choudhury | Farhana Murtaza Choudhury, Mohammed Eunus Ali, Sarah Masud, Suman
Nath, Ishat E Rabban | Scalable Visibility Color Map Construction in Spatial Databases | 12 pages, 14 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in 3D modeling provide us with real 3D datasets to answer
queries, such as "What is the best position for a new billboard?" and "Which
hotel room has the best view?" in the presence of obstacles. These applications
require measuring and differentiating the visibility of an object (target) from
different viewpoints in a dataspace, e.g., a billboard may be seen from two
viewpoints but is readable only from the viewpoint closer to the target. In
this paper, we formulate the above problem of quantifying the visibility of
(from) a target object from (of) the surrounding area with a visibility color
map (VCM). A VCM is essentially defined as a surface color map of the space,
where each viewpoint of the space is assigned a color value that denotes the
visibility measure of the target from that viewpoint. Measuring the visibility
of a target even from a single viewpoint is an expensive operation, as we need
to consider factors such as distance, angle, and obstacles between the
viewpoint and the target. Hence, a straightforward approach to construct the
VCM that requires visibility computation for every viewpoint of the surrounding
space of the target, is prohibitively expensive in terms of both I/Os and
computation, especially for a real dataset comprising of thousands of
obstacles. We propose an efficient approach to compute the VCM based on a key
property of the human vision that eliminates the necessity of computing the
visibility for a large number of viewpoints of the space. To further reduce the
computational overhead, we propose two approximations; namely, minimum bounding
rectangle and tangential approaches with guaranteed error bounds. Our extensive
experiments demonstrate the effectiveness and efficiency of our solutions to
construct the VCM for real 2D and 3D datasets.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2013 02:38:26 GMT"
}
] | 2013-10-29T00:00:00 | [
[
"Choudhury",
"Farhana Murtaza",
""
],
[
"Ali",
"Mohammed Eunus",
""
],
[
"Masud",
"Sarah",
""
],
[
"Nath",
"Suman",
""
],
[
"Rabban",
"Ishat E",
""
]
] | TITLE: Scalable Visibility Color Map Construction in Spatial Databases
ABSTRACT: Recent advances in 3D modeling provide us with real 3D datasets to answer
queries, such as "What is the best position for a new billboard?" and "Which
hotel room has the best view?" in the presence of obstacles. These applications
require measuring and differentiating the visibility of an object (target) from
different viewpoints in a dataspace, e.g., a billboard may be seen from two
viewpoints but is readable only from the viewpoint closer to the target. In
this paper, we formulate the above problem of quantifying the visibility of
(from) a target object from (of) the surrounding area with a visibility color
map (VCM). A VCM is essentially defined as a surface color map of the space,
where each viewpoint of the space is assigned a color value that denotes the
visibility measure of the target from that viewpoint. Measuring the visibility
of a target even from a single viewpoint is an expensive operation, as we need
to consider factors such as distance, angle, and obstacles between the
viewpoint and the target. Hence, a straightforward approach to construct the
VCM that requires visibility computation for every viewpoint of the surrounding
space of the target, is prohibitively expensive in terms of both I/Os and
computation, especially for a real dataset comprising of thousands of
obstacles. We propose an efficient approach to compute the VCM based on a key
property of the human vision that eliminates the necessity of computing the
visibility for a large number of viewpoints of the space. To further reduce the
computational overhead, we propose two approximations; namely, minimum bounding
rectangle and tangential approaches with guaranteed error bounds. Our extensive
experiments demonstrate the effectiveness and efficiency of our solutions to
construct the VCM for real 2D and 3D datasets.
|
1310.6772 | Ragib Hasan | Thamar Solorio and Ragib Hasan and Mainul Mizan | Sockpuppet Detection in Wikipedia: A Corpus of Real-World Deceptive
Writing for Linking Identities | 4 pages, under submission at LREC 2014 | null | null | null | cs.CL cs.CR cs.CY | http://creativecommons.org/licenses/by/3.0/ | This paper describes the corpus of sockpuppet cases we gathered from
Wikipedia. A sockpuppet is an online user account created with a fake identity
for the purpose of covering abusive behavior and/or subverting the editing
regulation process. We used a semi-automated method for crawling and curating a
dataset of real sockpuppet investigation cases. To the best of our knowledge,
this is the first corpus available on real-world deceptive writing. We describe
the process for crawling the data and some preliminary results that can be used
as baseline for benchmarking research. The dataset will be released under a
Creative Commons license from our project website: http://docsig.cis.uab.edu.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2013 20:59:27 GMT"
}
] | 2013-10-28T00:00:00 | [
[
"Solorio",
"Thamar",
""
],
[
"Hasan",
"Ragib",
""
],
[
"Mizan",
"Mainul",
""
]
] | TITLE: Sockpuppet Detection in Wikipedia: A Corpus of Real-World Deceptive
Writing for Linking Identities
ABSTRACT: This paper describes the corpus of sockpuppet cases we gathered from
Wikipedia. A sockpuppet is an online user account created with a fake identity
for the purpose of covering abusive behavior and/or subverting the editing
regulation process. We used a semi-automated method for crawling and curating a
dataset of real sockpuppet investigation cases. To the best of our knowledge,
this is the first corpus available on real-world deceptive writing. We describe
the process for crawling the data and some preliminary results that can be used
as baseline for benchmarking research. The dataset will be released under a
Creative Commons license from our project website: http://docsig.cis.uab.edu.
|
1310.6998 | Shiladitya Sinha | Shiladitya Sinha, Chris Dyer, Kevin Gimpel, and Noah A. Smith | Predicting the NFL using Twitter | Presented at ECML/PKDD 2013 Workshop on Machine Learning and Data
Mining for Sports Analytics | null | null | null | cs.SI cs.LG physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the relationship between social media output and National Football
League (NFL) games, using a dataset containing messages from Twitter and NFL
game statistics. Specifically, we consider tweets pertaining to specific teams
and games in the NFL season and use them alongside statistical game data to
build predictive models for future game outcomes (which team will win?) and
sports betting outcomes (which team will win with the point spread? will the
total points be over/under the line?). We experiment with several feature sets
and find that simple features using large volumes of tweets can match or exceed
the performance of more traditional features that use game statistics.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2013 18:35:22 GMT"
}
] | 2013-10-28T00:00:00 | [
[
"Sinha",
"Shiladitya",
""
],
[
"Dyer",
"Chris",
""
],
[
"Gimpel",
"Kevin",
""
],
[
"Smith",
"Noah A.",
""
]
] | TITLE: Predicting the NFL using Twitter
ABSTRACT: We study the relationship between social media output and National Football
League (NFL) games, using a dataset containing messages from Twitter and NFL
game statistics. Specifically, we consider tweets pertaining to specific teams
and games in the NFL season and use them alongside statistical game data to
build predictive models for future game outcomes (which team will win?) and
sports betting outcomes (which team will win with the point spread? will the
total points be over/under the line?). We experiment with several feature sets
and find that simple features using large volumes of tweets can match or exceed
the performance of more traditional features that use game statistics.
|
1310.6304 | Nikos Karampatziakis | Nikos Karampatziakis, Paul Mineiro | Combining Structured and Unstructured Randomness in Large Scale PCA | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal Component Analysis (PCA) is a ubiquitous tool with many
applications in machine learning including feature construction, subspace
embedding, and outlier detection. In this paper, we present an algorithm for
computing the top principal components of a dataset with a large number of rows
(examples) and columns (features). Our algorithm leverages both structured and
unstructured random projections to retain good accuracy while being
computationally efficient. We demonstrate the technique on the winning
submission to the KDD 2010 Cup.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2013 17:33:26 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2013 17:36:27 GMT"
}
] | 2013-10-25T00:00:00 | [
[
"Karampatziakis",
"Nikos",
""
],
[
"Mineiro",
"Paul",
""
]
] | TITLE: Combining Structured and Unstructured Randomness in Large Scale PCA
ABSTRACT: Principal Component Analysis (PCA) is a ubiquitous tool with many
applications in machine learning including feature construction, subspace
embedding, and outlier detection. In this paper, we present an algorithm for
computing the top principal components of a dataset with a large number of rows
(examples) and columns (features). Our algorithm leverages both structured and
unstructured random projections to retain good accuracy while being
computationally efficient. We demonstrate the technique on the winning
submission to the KDD 2010 Cup.
|
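As background for the abstract above, a basic randomized range finder (in the style of Halko et al.) shows how random projections yield top principal components cheaply. The sketch uses only an unstructured Gaussian projection plus one power iteration; the structured projections the paper combines with it are not shown, and all names are illustrative.

```python
# Randomized top-k PCA via a Gaussian sketch and one power iteration.
# Only the "unstructured randomness" half of the idea is shown here.
import numpy as np

def randomized_pca(X, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                      # center features
    Omega = rng.normal(size=(Xc.shape[1], k + oversample))
    Y = Xc @ Omega                               # sketch the range of Xc
    Y = Xc @ (Xc.T @ Y)                          # one power iteration for accuracy
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis of the sketch
    B = Q.T @ Xc                                 # small (k+oversample) x d matrix
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Vt[:k], (s[:k] ** 2) / (len(X) - 1)   # top-k components and variances

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 6)) @ rng.normal(size=(6, 200))  # ~rank-6 data
    comps, var = randomized_pca(X, k=6)
    print(comps.shape, var.round(2))
```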
1310.6654 | Sahil Sikka | Sahil Sikka and Karan Sikka and M.K. Bhuyan and Yuji Iwahori | Pseudo vs. True Defect Classification in Printed Circuits Boards using
Wavelet Features | 6 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, Printed Circuit Boards (PCB) have become the backbone of a
large number of consumer electronic devices leading to a surge in their
production. This has made it imperative to employ automatic inspection systems
to identify manufacturing defects in PCB before they are installed in the
respective systems. An important task in this regard is the classification of
defects as either true or pseudo defects, which decides if the PCB is to be
re-manufactured or not. This work proposes a novel approach to detect most
common defects in the PCBs. The problem has been approached by employing highly
discriminative features based on multi-scale wavelet transform, which are
further boosted by using a kernelized version of the support vector machine
(SVM). A real world printed circuit board dataset has been used for
quantitative analysis. Experimental results demonstrated the efficacy of the
proposed method.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2013 16:11:28 GMT"
}
] | 2013-10-25T00:00:00 | [
[
"Sikka",
"Sahil",
""
],
[
"Sikka",
"Karan",
""
],
[
"Bhuyan",
"M. K.",
""
],
[
"Iwahori",
"Yuji",
""
]
] | TITLE: Pseudo vs. True Defect Classification in Printed Circuits Boards using
Wavelet Features
ABSTRACT: In recent years, Printed Circuit Boards (PCB) have become the backbone of a
large number of consumer electronic devices leading to a surge in their
production. This has made it imperative to employ automatic inspection systems
to identify manufacturing defects in PCB before they are installed in the
respective systems. An important task in this regard is the classification of
defects as either true or pseudo defects, which decides if the PCB is to be
re-manufactured or not. This work proposes a novel approach to detect most
common defects in the PCBs. The problem has been approached by employing highly
discriminative features based on multi-scale wavelet transform, which are
further boosted by using a kernelized version of the support vector machine
(SVM). A real world printed circuit board dataset has been used for
quantitative analysis. Experimental results demonstrated the efficacy of the
proposed method.
|
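A hedged sketch of the kind of pipeline this abstract describes: multi-scale wavelet coefficients summarized into per-band statistics and fed to an RBF-kernel SVM. It is not the authors' system; the PyWavelets and scikit-learn calls are standard, but the image patches and labels below are synthetic placeholders for the real PCB data, so the printed score only demonstrates the pipeline.

```python
# Sketch: multi-scale wavelet statistics of an image patch + RBF-kernel SVM,
# standing in for the true/pseudo PCB-defect classification pipeline.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wavelet_features(patch, wavelet="db2", level=3):
    coeffs = pywt.wavedec2(patch, wavelet, level=level)    # [cA, (cH,cV,cD), ...]
    feats = [np.mean(np.abs(coeffs[0])), np.std(coeffs[0])]
    for cH, cV, cD in coeffs[1:]:
        for band in (cH, cV, cD):
            feats += [np.mean(np.abs(band)), np.std(band)]  # energy-like stats per band
    return np.array(feats)

rng = np.random.default_rng(0)
patches = rng.normal(size=(120, 32, 32))                    # synthetic patches
labels = rng.integers(0, 2, size=120)
patches[labels == 1] += np.sign(rng.normal(size=(32, 32)))  # high-frequency "defect" texture
X = np.array([wavelet_features(p) for p in patches])
print(cross_val_score(SVC(kernel="rbf", gamma="scale"), X, labels, cv=5).mean())
```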
1211.7052 | Andrea Baronchelli | Bruno Ribeiro, Nicola Perra, Andrea Baronchelli | Quantifying the effect of temporal resolution on time-varying networks | null | Scientific Reports 3, 3006 (2013) | 10.1038/srep03006 | null | cond-mat.stat-mech cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-varying networks describe a wide array of systems whose constituents and
interactions evolve over time. They are defined by an ordered stream of
interactions between nodes, yet they are often represented in terms of a
sequence of static networks, each aggregating all edges and nodes present in a
time interval of size \Delta t. In this work we quantify the impact of an
arbitrary \Delta t on the description of a dynamical process taking place upon
a time-varying network. We focus on the elementary random walk, and put forth a
simple mathematical framework that well describes the behavior observed on real
datasets. The analytical description of the bias introduced by time integrating
techniques represents a step forward in the correct characterization of
dynamical processes on time-varying graphs.
| [
{
"version": "v1",
"created": "Thu, 29 Nov 2012 20:56:13 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Dec 2012 17:21:53 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Oct 2013 10:50:42 GMT"
}
] | 2013-10-23T00:00:00 | [
[
"Ribeiro",
"Bruno",
""
],
[
"Perra",
"Nicola",
""
],
[
"Baronchelli",
"Andrea",
""
]
] | TITLE: Quantifying the effect of temporal resolution on time-varying networks
ABSTRACT: Time-varying networks describe a wide array of systems whose constituents and
interactions evolve over time. They are defined by an ordered stream of
interactions between nodes, yet they are often represented in terms of a
sequence of static networks, each aggregating all edges and nodes present in a
time interval of size \Delta t. In this work we quantify the impact of an
arbitrary \Delta t on the description of a dynamical process taking place upon
a time-varying network. We focus on the elementary random walk, and put forth a
simple mathematical framework that well describes the behavior observed on real
datasets. The analytical description of the bias introduced by time integrating
techniques represents a step forward in the correct characterization of
dynamical processes on time-varying graphs.
|
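The Δt-aggregation step discussed above is easy to make concrete. The sketch below (illustrative only, with a synthetic contact stream and hypothetical helper names) bins a timestamped edge stream into static snapshots of width dt and runs an elementary random walk over the snapshot sequence, so the effect of coarsening Δt can be observed directly.

```python
# Bin a timestamped edge stream into static snapshots of width dt and run a
# simple random walk over the snapshot sequence. Illustrative sketch only.
import numpy as np
from collections import defaultdict

def snapshots(stream, dt):
    """stream: iterable of (t, u, v). Returns dict: window index -> adjacency dict."""
    graphs = defaultdict(lambda: defaultdict(set))
    for t, u, v in stream:
        w = int(t // dt)
        graphs[w][u].add(v)
        graphs[w][v].add(u)
    return graphs

def random_walk(graphs, start, n_windows, rng):
    node = start
    for w in range(n_windows):
        nbrs = list(graphs.get(w, {}).get(node, ()))
        if nbrs:                                   # move only if the node is active
            node = rng.choice(nbrs)
    return node

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = [(float(t), int(rng.integers(0, 50)), int(rng.integers(0, 50)))
              for t in np.sort(rng.uniform(0, 1000, size=5000))]
    for dt in (1.0, 10.0, 100.0):                  # coarser aggregation, fewer windows
        g = snapshots(stream, dt)
        end = random_walk(g, start=0, n_windows=int(1000 // dt), rng=rng)
        print(f"dt={dt}: walker ends at node {end}, {len(g)} snapshots")
```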
1310.5767 | Chunhua Shen | Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton van den Hengel | Contextual Hypergraph Modelling for Salient Object Detection | Appearing in Proc. Int. Conf. Computer Vision 2013, Sydney, Australia | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Salient object detection aims to locate objects that capture human attention
within images. Previous approaches often pose this as a problem of image
contrast analysis. In this work, we model an image as a hypergraph that
utilizes a set of hyperedges to capture the contextual properties of image
pixels or regions. As a result, the problem of salient object detection becomes
one of finding salient vertices and hyperedges in the hypergraph. The main
advantage of hypergraph modeling is that it takes into account each pixel's (or
region's) affinity with its neighborhood as well as its separation from image
background. Furthermore, we propose an alternative approach based on
center-versus-surround contextual contrast analysis, which performs salient
object detection by optimizing a cost-sensitive support vector machine (SVM)
objective function. Experimental results on four challenging datasets
demonstrate the effectiveness of the proposed approaches against the
state-of-the-art approaches to salient object detection.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2013 00:38:59 GMT"
}
] | 2013-10-23T00:00:00 | [
[
"Li",
"Xi",
""
],
[
"Li",
"Yao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Dick",
"Anthony",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Contextual Hypergraph Modelling for Salient Object Detection
ABSTRACT: Salient object detection aims to locate objects that capture human attention
within images. Previous approaches often pose this as a problem of image
contrast analysis. In this work, we model an image as a hypergraph that
utilizes a set of hyperedges to capture the contextual properties of image
pixels or regions. As a result, the problem of salient object detection becomes
one of finding salient vertices and hyperedges in the hypergraph. The main
advantage of hypergraph modeling is that it takes into account each pixel's (or
region's) affinity with its neighborhood as well as its separation from image
background. Furthermore, we propose an alternative approach based on
center-versus-surround contextual contrast analysis, which performs salient
object detection by optimizing a cost-sensitive support vector machine (SVM)
objective function. Experimental results on four challenging datasets
demonstrate the effectiveness of the proposed approaches against the
state-of-the-art approaches to salient object detection.
|
1310.5965 | Roozbeh Rajabi | Roozbeh Rajabi, Hassan Ghassemian | Fusion of Hyperspectral and Panchromatic Images using Spectral Unmixing
Results | 4 pages, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral imaging, due to providing high spectral resolution images, is
one of the most important tools in the remote sensing field. Because of
technological restrictions, hyperspectral sensors have a limited spatial
resolution. On the other hand, a panchromatic image has a better spatial
resolution. Combining this information together can provide a better
understanding of the target scene. Spectral unmixing of mixed pixels in
hyperspectral images results in spectral signature and abundance fractions of
endmembers but gives no information about their location in a mixed pixel. In
this paper we have used spectral unmixing results of hyperspectral images and
segmentation results of panchromatic image for data fusion. The proposed method
has been applied on simulated data using AVIRIS Indian Pines datasets. Results
show that this method can effectively combine information in hyperspectral and
panchromatic images.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2013 15:44:51 GMT"
}
] | 2013-10-23T00:00:00 | [
[
"Rajabi",
"Roozbeh",
""
],
[
"Ghassemian",
"Hassan",
""
]
] | TITLE: Fusion of Hyperspectral and Panchromatic Images using Spectral Unmixing
Results
ABSTRACT: Hyperspectral imaging, due to providing high spectral resolution images, is
one of the most important tools in the remote sensing field. Because of
technological restrictions, hyperspectral sensors have a limited spatial
resolution. On the other hand, a panchromatic image has a better spatial
resolution. Combining this information together can provide a better
understanding of the target scene. Spectral unmixing of mixed pixels in
hyperspectral images results in spectral signature and abundance fractions of
endmembers but gives no information about their location in a mixed pixel. In
this paper we have used spectral unmixing results of hyperspectral images and
segmentation results of panchromatic image for data fusion. The proposed method
has been applied on simulated data using AVIRIS Indian Pines datasets. Results
show that this method can effectively combine information in hyperspectral and
panchromatic images.
|
1303.2130 | Xiao-Lei Zhang | Xiao-Lei Zhang | Convex Discriminative Multitask Clustering | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multitask clustering tries to improve the clustering performance of multiple
tasks simultaneously by taking their relationship into account. Most existing
multitask clustering algorithms fall into the type of generative clustering,
and none are formulated as convex optimization problems. In this paper, we
propose two convex Discriminative Multitask Clustering (DMTC) algorithms to
address the problems. Specifically, we first propose a Bayesian DMTC framework.
Then, we propose two convex DMTC objectives within the framework. The first
one, which can be seen as a technical combination of the convex multitask
feature learning and the convex Multiclass Maximum Margin Clustering (M3C),
aims to learn a shared feature representation. The second one, which can be
seen as a combination of the convex multitask relationship learning and M3C,
aims to learn the task relationship. The two objectives are solved in a uniform
procedure by the efficient cutting-plane algorithm. Experimental results on a
toy problem and two benchmark datasets demonstrate the effectiveness of the
proposed algorithms.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2013 21:32:52 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Oct 2013 15:06:36 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Zhang",
"Xiao-Lei",
""
]
] | TITLE: Convex Discriminative Multitask Clustering
ABSTRACT: Multitask clustering tries to improve the clustering performance of multiple
tasks simultaneously by taking their relationship into account. Most existing
multitask clustering algorithms fall into the type of generative clustering,
and none are formulated as convex optimization problems. In this paper, we
propose two convex Discriminative Multitask Clustering (DMTC) algorithms to
address the problems. Specifically, we first propose a Bayesian DMTC framework.
Then, we propose two convex DMTC objectives within the framework. The first
one, which can be seen as a technical combination of the convex multitask
feature learning and the convex Multiclass Maximum Margin Clustering (M3C),
aims to learn a shared feature representation. The second one, which can be
seen as a combination of the convex multitask relationship learning and M3C,
aims to learn the task relationship. The two objectives are solved in a uniform
procedure by the efficient cutting-plane algorithm. Experimental results on a
toy problem and two benchmark datasets demonstrate the effectiveness of the
proposed algorithms.
|
1310.1949 | Nikos Karampatziakis | Alekh Agarwal, Sham M. Kakade, Nikos Karampatziakis, Le Song, Gregory
Valiant | Least Squares Revisited: Scalable Approaches for Multi-class Prediction | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work provides simple algorithms for multi-class (and multi-label)
prediction in settings where both the number of examples n and the data
dimension d are relatively large. These robust and parameter free algorithms
are essentially iterative least-squares updates and very versatile both in
theory and in practice. On the theoretical front, we present several variants
with convergence guarantees. Owing to their effective use of second-order
structure, these algorithms are substantially better than first-order methods
in many practical scenarios. On the empirical side, we present a scalable
stagewise variant of our approach, which achieves dramatic computational
speedups over popular optimization packages such as Liblinear and Vowpal Wabbit
on standard datasets (MNIST and CIFAR-10), while attaining state-of-the-art
accuracies.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2013 20:48:58 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Oct 2013 15:18:37 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Agarwal",
"Alekh",
""
],
[
"Kakade",
"Sham M.",
""
],
[
"Karampatziakis",
"Nikos",
""
],
[
"Song",
"Le",
""
],
[
"Valiant",
"Gregory",
""
]
] | TITLE: Least Squares Revisited: Scalable Approaches for Multi-class Prediction
ABSTRACT: This work provides simple algorithms for multi-class (and multi-label)
prediction in settings where both the number of examples n and the data
dimension d are relatively large. These robust and parameter free algorithms
are essentially iterative least-squares updates and very versatile both in
theory and in practice. On the theoretical front, we present several variants
with convergence guarantees. Owing to their effective use of second-order
structure, these algorithms are substantially better than first-order methods
in many practical scenarios. On the empirical side, we present a scalable
stagewise variant of our approach, which achieves dramatic computational
speedups over popular optimization packages such as Liblinear and Vowpal Wabbit
on standard datasets (MNIST and CIFAR-10), while attaining state-of-the-art
accuracies.
|
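As a point of reference for the least-squares viewpoint in the abstract above, the sketch below fits a multi-class predictor by solving one ridge-regularized least-squares problem against one-hot targets and predicting with an argmax. It is a plain baseline under synthetic data, not the stagewise algorithm proposed in the paper.

```python
# Multi-class prediction via regularized least squares on one-hot targets:
# solve (X^T X + lam I) W = X^T Y and predict argmax over the scores.
# A baseline sketch of the least-squares viewpoint, not the paper's method.
import numpy as np

def fit_ls_multiclass(X, y, n_classes, lam=1e-2):
    Y = np.eye(n_classes)[y]                      # one-hot encode labels
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(W, X):
    return np.argmax(X @ W, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.normal(size=(3, 20)) * 3.0
    y = rng.integers(0, 3, size=3000)
    X = centers[y] + rng.normal(size=(3000, 20))  # 3 Gaussian classes
    W = fit_ls_multiclass(X, y, n_classes=3)
    print("train accuracy:", (predict(W, X) == y).mean())
```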
1310.4954 | Miguel A. Martinez-Prieto | Sandra \'Alvarez-Garc\'ia and Nieves R. Brisaboa and Javier D.
Fern\'andez and Miguel A. Mart\'inez-Prieto and Gonzalo Navarro | Compressed Vertical Partitioning for Full-In-Memory RDF Management | null | null | null | null | cs.DB cs.DS cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Web of Data has been gaining momentum, which leads to the publication of
more and more semi-structured datasets following the RDF model, based on atomic
triple units of subject, predicate, and object. Although it is a simple model,
compression methods become necessary because datasets are increasingly larger
and various scalability issues arise around their organization and storage.
This requirement is more restrictive in RDF stores because efficient SPARQL
resolution on the compressed RDF datasets is also required.
This article introduces a novel RDF indexing technique (called k2-triples)
supporting efficient SPARQL resolution in compressed space. k2-triples uses
the predicate to vertically partition the dataset into disjoint subsets of
pairs (subject, object), one per predicate. These subsets are represented as
binary matrices in which 1-bits mean that the corresponding triple exists in
the dataset. This model results in very sparse matrices, which are efficiently
compressed using k2-trees. We enhance this model with two compact indexes
listing the predicates related to each different subject and object, in order
to address the specific weaknesses of vertically partitioned representations.
The resulting technique not only achieves by far the most compressed
representations, but also the best overall performance for RDF retrieval in our
experiments. Our approach uses up to 10 times less space than a
state-of-the-art baseline, and outperforms it by several orders of magnitude on
the most basic query patterns. In addition, we optimize traditional join
algorithms on k2-triples and define a novel one leveraging its specific
features. Our experimental results show that our technique outperforms
traditional vertical partitioning for join resolution, reporting the best
numbers for joins in which the non-joined nodes are provided, and being
competitive in the majority of the cases.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2013 08:58:01 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Oct 2013 09:00:47 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Álvarez-García",
"Sandra",
""
],
[
"Brisaboa",
"Nieves R.",
""
],
[
"Fernández",
"Javier D.",
""
],
[
"Martínez-Prieto",
"Miguel A.",
""
],
[
"Navarro",
"Gonzalo",
""
]
] | TITLE: Compressed Vertical Partitioning for Full-In-Memory RDF Management
ABSTRACT: The Web of Data has been gaining momentum, which leads to the publication of
more and more semi-structured datasets following the RDF model, based on atomic
triple units of subject, predicate, and object. Although it is a simple model,
compression methods become necessary because datasets are increasingly larger
and various scalability issues arise around their organization and storage.
This requirement is more restrictive in RDF stores because efficient SPARQL
resolution on the compressed RDF datasets is also required.
This article introduces a novel RDF indexing technique (called k2-triples)
supporting efficient SPARQL resolution in compressed space. k2-triples uses
the predicate to vertically partition the dataset into disjoint subsets of
pairs (subject, object), one per predicate. These subsets are represented as
binary matrices in which 1-bits mean that the corresponding triple exists in
the dataset. This model results in very sparse matrices, which are efficiently
compressed using k2-trees. We enhance this model with two compact indexes
listing the predicates related to each different subject and object, in order
to address the specific weaknesses of vertically partitioned representations.
The resulting technique not only achieves by far the most compressed
representations, but also the best overall performance for RDF retrieval in our
experiments. Our approach uses up to 10 times less space than a
state-of-the-art baseline, and outperforms it by several orders of magnitude on
the most basic query patterns. In addition, we optimize traditional join
algorithms on k2-triples and define a novel one leveraging its specific
features. Our experimental results show that our technique outperforms
traditional vertical partitioning for join resolution, reporting the best
numbers for joins in which the non-joined nodes are provided, and being
competitive in the majority of the cases.
|
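The core idea above, vertical partitioning of the triples by predicate into sparse boolean (subject, object) matrices, can be sketched as follows. The k2-tree compression, the compact subject/object indexes and the SPARQL join machinery are not reproduced, and the integer-ID triples are a toy assumption.

```python
# Sketch of the vertical-partitioning step only: group (subject, object) pairs
# by predicate into sparse boolean adjacency matrices, where 1-bits mark
# existing triples. The k2-tree compression is not reproduced here.
from collections import defaultdict
from scipy.sparse import csr_matrix

def partition_by_predicate(triples, n_subjects, n_objects):
    """triples: iterable of (subject_id, predicate_id, object_id)."""
    pairs = defaultdict(list)
    for s, p, o in triples:
        pairs[p].append((s, o))
    matrices = {}
    for p, so in pairs.items():
        rows = [s for s, _ in so]
        cols = [o for _, o in so]
        matrices[p] = csr_matrix(([True] * len(so), (rows, cols)),
                                 shape=(n_subjects, n_objects), dtype=bool)
    return matrices

# toy usage: three triples over two predicates
triples = [(0, 0, 1), (1, 0, 2), (0, 1, 2)]
mats = partition_by_predicate(triples, n_subjects=2, n_objects=3)
print({p: m.nnz for p, m in mats.items()})   # -> {0: 2, 1: 1}
```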
1310.5142 | Hyun Joon Jung | Hyun Joon Jung and Matthew Lease | Crowdsourced Task Routing via Matrix Factorization | 10 pages, 7 figures | null | null | null | cs.CY cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We describe methods to predict a crowd worker's accuracy on new tasks based
on his accuracy on past tasks. Such prediction provides a foundation for
identifying the best workers to route work to in order to maximize accuracy on
the new task. Our key insight is to model similarity of past tasks to the
target task such that past task accuracies can be optimally integrated to
predict target task accuracy. We describe two matrix factorization (MF)
approaches from collaborative filtering which not only exploit such task
similarity, but are known to be robust to sparse data. Experiments on synthetic
and real-world datasets provide feasibility assessment and comparative
evaluation of MF approaches vs. two baseline methods. Across a range of data
scales and task similarity conditions, we evaluate: 1) prediction error over
all workers; and 2) how well each method predicts the best workers to use for
each task. Results show the benefit of task routing over random assignment, the
strength of probabilistic MF over baseline methods, and the robustness of
methods under different conditions.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2013 14:37:24 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Jung",
"Hyun Joon",
""
],
[
"Lease",
"Matthew",
""
]
] | TITLE: Crowdsourced Task Routing via Matrix Factorization
ABSTRACT: We describe methods to predict a crowd worker's accuracy on new tasks based
on his accuracy on past tasks. Such prediction provides a foundation for
identifying the best workers to route work to in order to maximize accuracy on
the new task. Our key insight is to model similarity of past tasks to the
target task such that past task accuracies can be optimally integrated to
predict target task accuracy. We describe two matrix factorization (MF)
approaches from collaborative filtering which not only exploit such task
similarity, but are known to be robust to sparse data. Experiments on synthetic
and real-world datasets provide feasibility assessment and comparative
evaluation of MF approaches vs. two baseline methods. Across a range of data
scales and task similarity conditions, we evaluate: 1) prediction error over
all workers; and 2) how well each method predicts the best workers to use for
each task. Results show the benefit of task routing over random assignment, the
strength of probabilistic MF over baseline methods, and the robustness of
methods under different conditions.
|
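A minimal matrix-factorization sketch of the idea above: predict unobserved worker-task accuracies from a sparse set of observed ones via latent factors fitted by SGD. The paper's probabilistic MF and task-similarity modelling are not reproduced; the hyperparameters and toy observations are illustrative.

```python
# Minimal matrix-factorization sketch: predict a worker's accuracy on an unseen
# task from observed (worker, task, accuracy) triples, via plain SGD with
# squared loss. Not the paper's probabilistic MF formulation.
import numpy as np

def factorize(observations, n_workers, n_tasks, rank=5,
              lr=0.01, reg=0.05, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(n_workers, rank))   # worker latent factors
    T = 0.1 * rng.normal(size=(n_tasks, rank))     # task latent factors
    for _ in range(epochs):
        for w, t, acc in observations:
            Ww, Tt = W[w].copy(), T[t].copy()
            err = acc - Ww @ Tt                    # prediction error
            W[w] += lr * (err * Tt - reg * Ww)
            T[t] += lr * (err * Ww - reg * Tt)
    return W, T

# toy usage: 3 workers, 4 tasks, a few observed accuracies in [0, 1]
obs = [(0, 0, 0.9), (0, 1, 0.8), (1, 1, 0.4), (1, 2, 0.5), (2, 0, 0.7), (2, 3, 0.6)]
W, T = factorize(obs, n_workers=3, n_tasks=4)
print("predicted accuracy of worker 0 on task 3:", float(W[0] @ T[3]))
```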
1310.5221 | Sumeet Kaur Sehra | Sumeet Kaur Sehra, Yadwinder Singh Brar, Navdeep Kaur | Soft computing techniques for software effort estimation | null | International Journal of Advanced Computer and Mathematical
Sciences Vol.2(3), November, 2011. pp:10-17 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effort invested in a software project is probably one of the most
important and most analyzed variables in recent years in the process of project
management. The limitation of algorithmic effort prediction models is their
inability to cope with uncertainties and imprecision surrounding software
projects at the early development stage. More recently, attention has turned to
a variety of machine learning methods, and to soft computing in particular, to
predict software development effort. Soft computing is a consortium of
methodologies centering in fuzzy logic, artificial neural networks, and
evolutionary computation. It is important to mention here that these
methodologies are complementary and synergistic rather than competitive. They
provide in one form or another flexible information processing capability for
handling real-life ambiguous situations. These methodologies are currently used
for reliable and accurate estimation of software development effort, which has
always been a challenge for both the software industry and academia. The aim of
this study is to analyze soft computing techniques in the existing models and
to provide an in-depth review of software and project estimation techniques
existing in industry and literature, based on different test datasets, along
with their strengths and weaknesses.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2013 12:25:00 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Sehra",
"Sumeet Kaur",
""
],
[
"Brar",
"Yadwinder Singh",
""
],
[
"Kaur",
"Navdeep",
""
]
] | TITLE: Soft computing techniques for software effort estimation
ABSTRACT: The effort invested in a software project is probably one of the most
important and most analyzed variables in recent years in the process of project
management. The limitation of algorithmic effort prediction models is their
inability to cope with uncertainties and imprecision surrounding software
projects at the early development stage. More recently, attention has turned to
a variety of machine learning methods, and to soft computing in particular, to
predict software development effort. Soft computing is a consortium of
methodologies centering in fuzzy logic, artificial neural networks, and
evolutionary computation. It is important to mention here that these
methodologies are complementary and synergistic rather than competitive. They
provide in one form or another flexible information processing capability for
handling real-life ambiguous situations. These methodologies are currently used
for reliable and accurate estimation of software development effort, which has
always been a challenge for both the software industry and academia. The aim of
this study is to analyze soft computing techniques in the existing models and
to provide an in-depth review of software and project estimation techniques
existing in industry and literature, based on different test datasets, along
with their strengths and weaknesses.
|
1310.5249 | Fabrice Rossi | Mohamed Khalil El Mahrsi (LTCI, SAMM), Fabrice Rossi (SAMM) | Graph-Based Approaches to Clustering Network-Constrained Trajectory Data | null | New Frontiers in Mining Complex Patterns, Appice, Annalisa and
Ceci, Michelangelo and Loglisci, Corrado and Manco, Giuseppe and Masciari,
Elio and Ras, Zbigniew (Ed.) (2013) 124-137 | 10.1007/978-3-642-37382-4_9 | NFMCP2013 | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering trajectory data has attracted considerable attention in the last few
years. Most prior work assumed that moving objects can move freely in a
Euclidean space and did not consider the possible presence of an underlying
road network and its influence on evaluating the similarity between
trajectories. In this paper, we present an approach to clustering such
network-constrained trajectory data. More precisely we aim at discovering
groups of road segments that are often travelled by the same trajectories. To
achieve this end, we model the interactions between segments w.r.t. their
similarity as a weighted graph to which we apply a community detection
algorithm to discover meaningful clusters. We showcase our proposal through
experimental results obtained on synthetic datasets.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2013 17:24:39 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Mahrsi",
"Mohamed Khalil El",
"",
"LTCI, SAMM"
],
[
"Rossi",
"Fabrice",
"",
"SAMM"
]
] | TITLE: Graph-Based Approaches to Clustering Network-Constrained Trajectory Data
ABSTRACT: Clustering trajectory data has attracted considerable attention in the last few
years. Most prior work assumed that moving objects can move freely in a
Euclidean space and did not consider the possible presence of an underlying
road network and its influence on evaluating the similarity between
trajectories. In this paper, we present an approach to clustering such
network-constrained trajectory data. More precisely we aim at discovering
groups of road segments that are often travelled by the same trajectories. To
achieve this end, we model the interactions between segments w.r.t. their
similarity as a weighted graph to which we apply a community detection
algorithm to discover meaningful clusters. We showcase our proposal through
experimental results obtained on synthetic datasets.
|
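A minimal sketch of the pipeline above, assuming trajectories are already map-matched to road-segment ids: build a weighted segment graph (edge weight = number of trajectories traversing both segments) and apply a modularity-based community detection as a stand-in for the community detection step the abstract leaves unspecified.

```python
# Sketch: weighted graph over road segments (weight = number of trajectories
# that traverse both segments), clustered with modularity-based community
# detection as a stand-in for the paper's community detection step.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def segment_graph(trajectories):
    """trajectories: list of lists of road-segment ids."""
    G = nx.Graph()
    for traj in trajectories:
        for a, b in combinations(set(traj), 2):
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)
    return G

# toy usage: two groups of segments travelled by distinct trajectories
trajs = [[1, 2, 3], [1, 2, 3, 4], [5, 6, 7], [5, 6, 7, 4]]
G = segment_graph(trajs)
clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])
```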
1310.5430 | Amit Goyal | Glenn S. Bevilacqua and Shealen Clare and Amit Goyal and Laks V. S.
Lakshmanan | Validating Network Value of Influencers by means of Explanations | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been significant interest in social influence analysis.
One of the central problems in this area is the problem of identifying
influencers, such that by convincing these users to perform a certain action
(like buying a new product), a large number of other users get influenced to
follow the action. The client of such an application is a marketer who would
target these influencers for marketing a given new product, say by providing
free samples or discounts. It is natural that before committing resources for
targeting an influencer the marketer would be interested in validating the
influence (or network value) of influencers returned. This requires digging
deeper into such analytical questions as: who are their followers, on what
actions (or products) they are influential, etc. However, the current
approaches to identifying influencers largely work as a black box in this
respect. The goal of this paper is to open up the black box, address these
questions and provide informative and crisp explanations for validating the
network value of influencers.
We formulate the problem of providing explanations (called PROXI) as a
discrete optimization problem of feature selection. We show that PROXI is not
only NP-hard to solve exactly, it is NP-hard to approximate within any
reasonable factor. Nevertheless, we show interesting properties of the
objective function and develop an intuitive greedy heuristic. We perform
detailed experimental analysis on two real world datasets - Twitter and
Flixster, and show that our approach is useful in generating concise and
insightful explanations of the influence distribution of users and that our
greedy algorithm is effective and efficient with respect to several baselines.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2013 06:05:48 GMT"
}
] | 2013-10-22T00:00:00 | [
[
"Bevilacqua",
"Glenn S.",
""
],
[
"Clare",
"Shealen",
""
],
[
"Goyal",
"Amit",
""
],
[
"Lakshmanan",
"Laks V. S.",
""
]
] | TITLE: Validating Network Value of Influencers by means of Explanations
ABSTRACT: Recently, there has been significant interest in social influence analysis.
One of the central problems in this area is the problem of identifying
influencers, such that by convincing these users to perform a certain action
(like buying a new product), a large number of other users get influenced to
follow the action. The client of such an application is a marketer who would
target these influencers for marketing a given new product, say by providing
free samples or discounts. It is natural that before committing resources for
targeting an influencer the marketer would be interested in validating the
influence (or network value) of influencers returned. This requires digging
deeper into such analytical questions as: who are their followers, on what
actions (or products) they are influential, etc. However, the current
approaches to identifying influencers largely work as a black box in this
respect. The goal of this paper is to open up the black box, address these
questions and provide informative and crisp explanations for validating the
network value of influencers.
We formulate the problem of providing explanations (called PROXI) as a
discrete optimization problem of feature selection. We show that PROXI is not
only NP-hard to solve exactly, it is NP-hard to approximate within any
reasonable factor. Nevertheless, we show interesting properties of the
objective function and develop an intuitive greedy heuristic. We perform
detailed experimental analysis on two real world datasets - Twitter and
Flixster, and show that our approach is useful in generating concise and
insightful explanations of the influence distribution of users and that our
greedy algorithm is effective and efficient with respect to several baselines.
|
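A generic greedy skeleton of the kind the abstract's heuristic follows: repeatedly add the candidate feature with the largest marginal gain of a set objective. The actual PROXI objective is not reproduced; the coverage-style toy objective below is only a placeholder.

```python
# Generic greedy feature-selection skeleton: repeatedly add the candidate with
# the largest marginal gain of a set objective. The toy coverage objective is a
# placeholder, not the PROXI objective from the paper.
def greedy_select(candidates, objective, k):
    chosen = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for c in candidates:
            if c in chosen:
                continue
            gain = objective(chosen + [c]) - objective(chosen)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break
        chosen.append(best)
    return chosen

# toy usage: pick features that "cover" the most users
covers = {"age": {1, 2, 3}, "topic": {3, 4}, "city": {4, 5, 6}}
objective = lambda S: len(set().union(*(covers[f] for f in S))) if S else 0
print(greedy_select(list(covers), objective, k=2))   # e.g. ['age', 'city']
```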
1310.2700 | Marvin Weinstein | M. Weinstein, F. Meirer, A. Hume, Ph. Sciau, G. Shaked, R. Hofstetter,
E. Persi, A. Mehta, and D. Horn | Analyzing Big Data with Dynamic Quantum Clustering | 37 pages, 22 figures, 1 Table | null | null | null | physics.data-an cs.LG physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How does one search for a needle in a multi-dimensional haystack without
knowing what a needle is and without knowing if there is one in the haystack?
This kind of problem requires a paradigm shift - away from hypothesis driven
searches of the data - towards a methodology that lets the data speak for
itself. Dynamic Quantum Clustering (DQC) is such a methodology. DQC is a
powerful visual method that works with big, high-dimensional data. It exploits
variations of the density of the data (in feature space) and unearths subsets
of the data that exhibit correlations among all the measured variables. The
outcome of a DQC analysis is a movie that shows how and why sets of data-points
are eventually classified as members of simple clusters or as members of - what
we call - extended structures. This allows DQC to be successfully used in a
non-conventional exploratory mode where one searches data for unexpected
information without the need to model the data. We show how this works for big,
complex, real-world datasets that come from five distinct fields: i.e., x-ray
nano-chemistry, condensed matter, biology, seismology and finance. These
studies show how DQC excels at uncovering unexpected, small - but meaningful -
subsets of the data that contain important information. We also establish an
important new result: namely, that big, complex datasets often contain
interesting structures that will be missed by many conventional clustering
techniques. Experience shows that these structures appear frequently enough
that it is crucial to know they can exist, and that when they do, they encode
important hidden information. In short, we not only demonstrate that DQC can be
flexibly applied to datasets that present significantly different challenges,
we also show how a simple analysis can be used to look for the needle in the
haystack, determine what it is, and find what this means.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2013 04:00:03 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2013 21:06:22 GMT"
}
] | 2013-10-21T00:00:00 | [
[
"Weinstein",
"M.",
""
],
[
"Meirer",
"F.",
""
],
[
"Hume",
"A.",
""
],
[
"Sciau",
"Ph.",
""
],
[
"Shaked",
"G.",
""
],
[
"Hofstetter",
"R.",
""
],
[
"Persi",
"E.",
""
],
[
"Mehta",
"A.",
""
],
[
"Horn",
"D.",
""
]
] | TITLE: Analyzing Big Data with Dynamic Quantum Clustering
ABSTRACT: How does one search for a needle in a multi-dimensional haystack without
knowing what a needle is and without knowing if there is one in the haystack?
This kind of problem requires a paradigm shift - away from hypothesis driven
searches of the data - towards a methodology that lets the data speak for
itself. Dynamic Quantum Clustering (DQC) is such a methodology. DQC is a
powerful visual method that works with big, high-dimensional data. It exploits
variations of the density of the data (in feature space) and unearths subsets
of the data that exhibit correlations among all the measured variables. The
outcome of a DQC analysis is a movie that shows how and why sets of data-points
are eventually classified as members of simple clusters or as members of - what
we call - extended structures. This allows DQC to be successfully used in a
non-conventional exploratory mode where one searches data for unexpected
information without the need to model the data. We show how this works for big,
complex, real-world datasets that come from five distinct fields: i.e., x-ray
nano-chemistry, condensed matter, biology, seismology and finance. These
studies show how DQC excels at uncovering unexpected, small - but meaningful -
subsets of the data that contain important information. We also establish an
important new result: namely, that big, complex datasets often contain
interesting structures that will be missed by many conventional clustering
techniques. Experience shows that these structures appear frequently enough
that it is crucial to know they can exist, and that when they do, they encode
important hidden information. In short, we not only demonstrate that DQC can be
flexibly applied to datasets that present significantly different challenges,
we also show how a simple analysis can be used to look for the needle in the
haystack, determine what it is, and find what this means.
|
1310.4759 | Erik Rodner | Christoph G\"oring, Alexander Freytag, Erik Rodner, Joachim Denzler | Fine-grained Categorization -- Short Summary of our Entry for the
ImageNet Challenge 2012 | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle the problem of visual categorization of dog breeds,
which is a surprisingly challenging task due to simultaneously present low
inter-class distances and high intra-class variances. Our approach combines
several techniques well known in our community but often not utilized for
fine-grained recognition:
(1) automatic segmentation, (2) efficient part detection, and (3) combination
of multiple features. In particular, we demonstrate that a simple head detector
embedded in an off-the-shelf recognition pipeline can improve recognition
accuracy quite significantly, highlighting the importance of part features for
fine-grained recognition tasks. Using our approach, we achieved a 24.59% mean
average precision performance on the Stanford dog dataset.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2013 16:11:53 GMT"
}
] | 2013-10-18T00:00:00 | [
[
"Göring",
"Christoph",
""
],
[
"Freytag",
"Alexander",
""
],
[
"Rodner",
"Erik",
""
],
[
"Denzler",
"Joachim",
""
]
] | TITLE: Fine-grained Categorization -- Short Summary of our Entry for the
ImageNet Challenge 2012
ABSTRACT: In this paper, we tackle the problem of visual categorization of dog breeds,
which is a surprisingly challenging task due to simultaneously present low
inter-class distances and high intra-class variances. Our approach combines
several techniques well known in our community but often not utilized for
fine-grained recognition:
(1) automatic segmentation, (2) efficient part detection, and (3) combination
of multiple features. In particular, we demonstrate that a simple head detector
embedded in an off-the-shelf recognition pipeline can improve recognition
accuracy quite significantly, highlighting the importance of part features for
fine-grained recognition tasks. Using our approach, we achieved a 24.59% mean
average precision performance on the Stanford dog dataset.
|
1304.1385 | Yves Dehouck | Yves Dehouck and Alexander S. Mikhailov | Effective harmonic potentials: insights into the internal cooperativity
and sequence-specificity of protein dynamics | 10 pages, 5 figures, 1 table ; Supplementary Material (11 pages, 7
figures, 1 table) ; 4 Supplementary tables as plain text files | PLoS Comput. Biol. 9 (2013) e1003209 | 10.1371/journal.pcbi.1003209 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proper biological functioning of proteins often relies on the occurrence
of coordinated fluctuations around their native structure, or of wider and
sometimes highly elaborated motions. Coarse-grained elastic-network
descriptions are known to capture essential aspects of conformational dynamics
in proteins, but have so far remained mostly phenomenological, and unable to
account for the chemical specificities of amino acids. Here, we propose a
method to derive residue- and distance-specific effective harmonic potentials
from the statistical analysis of an extensive dataset of NMR conformational
ensembles. These potentials constitute dynamical counterparts to the mean-force
statistical potentials commonly used for static analyses of protein structures.
In the context of the elastic network model, they yield a strongly improved
description of the cooperative aspects of residue motions, and give the
opportunity to systematically explore the influence of sequence details on
protein dynamics.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2013 14:58:44 GMT"
}
] | 2013-10-17T00:00:00 | [
[
"Dehouck",
"Yves",
""
],
[
"Mikhailov",
"Alexander S.",
""
]
] | TITLE: Effective harmonic potentials: insights into the internal cooperativity
and sequence-specificity of protein dynamics
ABSTRACT: The proper biological functioning of proteins often relies on the occurrence
of coordinated fluctuations around their native structure, or of wider and
sometimes highly elaborated motions. Coarse-grained elastic-network
descriptions are known to capture essential aspects of conformational dynamics
in proteins, but have so far remained mostly phenomenological, and unable to
account for the chemical specificities of amino acids. Here, we propose a
method to derive residue- and distance-specific effective harmonic potentials
from the statistical analysis of an extensive dataset of NMR conformational
ensembles. These potentials constitute dynamical counterparts to the mean-force
statistical potentials commonly used for static analyses of protein structures.
In the context of the elastic network model, they yield a strongly improved
description of the cooperative aspects of residue motions, and give the
opportunity to systematically explore the influence of sequence details on
protein dynamics.
|
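For context, a minimal Gaussian-network-model sketch of the generic elastic-network baseline that the residue- and distance-specific potentials above refine: uniform springs within a distance cutoff, and mean-square fluctuations from the pseudo-inverse of the Kirchhoff matrix. The cutoff and random coordinates are illustrative; this is not the paper's method.

```python
# Minimal Gaussian-network-model sketch: uniform springs between residues
# within a distance cutoff; relative mean-square fluctuations come from the
# pseudo-inverse of the Kirchhoff (connectivity) matrix.
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """coords: (n, 3) array of C-alpha positions in Angstrom."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    G = -(d <= cutoff).astype(float)     # -1 for each contact
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))  # diagonal = contact degree
    return G

def msf(coords, cutoff=7.0):
    """Relative mean-square fluctuation per residue (arbitrary units)."""
    return np.diag(np.linalg.pinv(kirchhoff(coords, cutoff)))

# toy usage on random "residue" coordinates
rng = np.random.default_rng(1)
coords = rng.uniform(0, 20, size=(30, 3))
print(msf(coords)[:5])
```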
1304.5583 | Ameet Talwalkar | Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I.
Jordan | Distributed Low-rank Subspace Segmentation | null | null | null | null | cs.CV cs.DC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision problems ranging from image clustering to motion segmentation to
semi-supervised learning can naturally be framed as subspace segmentation
problems, in which one aims to recover multiple low-dimensional subspaces from
noisy and corrupted input data. Low-Rank Representation (LRR), a convex
formulation of the subspace segmentation problem, is provably and empirically
accurate on small problems but does not scale to the massive sizes of modern
vision datasets. Moreover, past work aimed at scaling up low-rank matrix
factorization is not applicable to LRR given its non-decomposable constraints.
In this work, we propose a novel divide-and-conquer algorithm for large-scale
subspace segmentation that can cope with LRR's non-decomposable constraints and
maintains LRR's strong recovery guarantees. This has immediate implications for
the scalability of subspace segmentation, which we demonstrate on a benchmark
face recognition dataset and in simulations. We then introduce novel
applications of LRR-based subspace segmentation to large-scale semi-supervised
learning for multimedia event detection, concept detection, and image tagging.
In each case, we obtain state-of-the-art results and order-of-magnitude speed
ups.
| [
{
"version": "v1",
"created": "Sat, 20 Apr 2013 03:54:48 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2013 02:55:18 GMT"
}
] | 2013-10-17T00:00:00 | [
[
"Talwalkar",
"Ameet",
""
],
[
"Mackey",
"Lester",
""
],
[
"Mu",
"Yadong",
""
],
[
"Chang",
"Shih-Fu",
""
],
[
"Jordan",
"Michael I.",
""
]
] | TITLE: Distributed Low-rank Subspace Segmentation
ABSTRACT: Vision problems ranging from image clustering to motion segmentation to
semi-supervised learning can naturally be framed as subspace segmentation
problems, in which one aims to recover multiple low-dimensional subspaces from
noisy and corrupted input data. Low-Rank Representation (LRR), a convex
formulation of the subspace segmentation problem, is provably and empirically
accurate on small problems but does not scale to the massive sizes of modern
vision datasets. Moreover, past work aimed at scaling up low-rank matrix
factorization is not applicable to LRR given its non-decomposable constraints.
In this work, we propose a novel divide-and-conquer algorithm for large-scale
subspace segmentation that can cope with LRR's non-decomposable constraints and
maintains LRR's strong recovery guarantees. This has immediate implications for
the scalability of subspace segmentation, which we demonstrate on a benchmark
face recognition dataset and in simulations. We then introduce novel
applications of LRR-based subspace segmentation to large-scale semi-supervised
learning for multimedia event detection, concept detection, and image tagging.
In each case, we obtain state-of-the-art results and order-of-magnitude speed
ups.
|
1304.7284 | Zenglin Xu | Shandian Zhe, Zenglin Xu, and Yuan Qi | Supervised Heterogeneous Multiview Learning for Joint Association Study
and Disease Diagnosis | null | null | null | null | cs.LG cs.CE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given genetic variations and various phenotypical traits, such as Magnetic
Resonance Imaging (MRI) features, we consider two important and related tasks
in biomedical research: i) to select genetic and phenotypical markers for
disease diagnosis and ii) to identify associations between genetic and
phenotypical data. These two tasks are tightly coupled because underlying
associations between genetic variations and phenotypical features contain the
biological basis for a disease. While a variety of sparse models have been
applied for disease diagnosis and canonical correlation analysis and its
extensions have been widely used in association studies (e.g., eQTL analysis),
these two tasks have been treated separately. To unify these two tasks, we
present a new sparse Bayesian approach for joint association study and disease
diagnosis. In this approach, common latent features are extracted from
different data sources based on sparse projection matrices and used to predict
multiple disease severity levels based on Gaussian process ordinal regression;
in return, the disease status is used to guide the discovery of relationships
between the data sources. The sparse projection matrices not only reveal
interactions between data sources but also select groups of biomarkers related
to the disease. To learn the model from data, we develop an efficient
variational expectation maximization algorithm. Simulation results demonstrate
that our approach achieves higher accuracy in both predicting ordinal labels
and discovering associations between data sources than alternative methods. We
apply our approach to an imaging genetics dataset for the study of Alzheimer's
Disease (AD). Our method identifies biologically meaningful relationships
between genetic variations, MRI features, and AD status, and achieves
significantly higher accuracy for predicting ordinal AD stages than the
competing methods.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 20:47:46 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2013 07:04:04 GMT"
}
] | 2013-10-17T00:00:00 | [
[
"Zhe",
"Shandian",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Qi",
"Yuan",
""
]
] | TITLE: Supervised Heterogeneous Multiview Learning for Joint Association Study
and Disease Diagnosis
ABSTRACT: Given genetic variations and various phenotypical traits, such as Magnetic
Resonance Imaging (MRI) features, we consider two important and related tasks
in biomedical research: i) to select genetic and phenotypical markers for
disease diagnosis and ii) to identify associations between genetic and
phenotypical data. These two tasks are tightly coupled because underlying
associations between genetic variations and phenotypical features contain the
biological basis for a disease. While a variety of sparse models have been
applied for disease diagnosis and canonical correlation analysis and its
extensions have been widely used in association studies (e.g., eQTL analysis),
these two tasks have been treated separately. To unify these two tasks, we
present a new sparse Bayesian approach for joint association study and disease
diagnosis. In this approach, common latent features are extracted from
different data sources based on sparse projection matrices and used to predict
multiple disease severity levels based on Gaussian process ordinal regression;
in return, the disease status is used to guide the discovery of relationships
between the data sources. The sparse projection matrices not only reveal
interactions between data sources but also select groups of biomarkers related
to the disease. To learn the model from data, we develop an efficient
variational expectation maximization algorithm. Simulation results demonstrate
that our approach achieves higher accuracy in both predicting ordinal labels
and discovering associations between data sources than alternative methods. We
apply our approach to an imaging genetics dataset for the study of Alzheimer's
Disease (AD). Our method identifies biologically meaningful relationships
between genetic variations, MRI features, and AD status, and achieves
significantly higher accuracy for predicting ordinal AD stages than the
competing methods.
|
1310.4217 | Bingni Brunton | B. W. Brunton, S. L. Brunton, J. L. Proctor, and J. N. Kutz | Optimal Sensor Placement and Enhanced Sparsity for Classification | 13 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of compressive sensing is efficient reconstruction of data from few
measurements, sometimes leading to a categorical decision. If only
classification is required, reconstruction can be circumvented and the
measurements needed are orders-of-magnitude sparser still. We define enhanced
sparsity as the reduction in number of measurements required for classification
over reconstruction. In this work, we exploit enhanced sparsity and learn
spatial sensor locations that optimally inform a categorical decision. The
algorithm solves an l1-minimization to find the fewest entries of the full
measurement vector that exactly reconstruct the discriminant vector in feature
space. Once the sensor locations have been identified from the training data,
subsequent test samples are classified with remarkable efficiency, achieving
performance comparable to that obtained by discrimination using the full image.
Sensor locations may be learned from full images, or from a random subsample of
pixels. For classification between more than two categories, we introduce a
coupling parameter whose value tunes the number of sensors selected, trading
accuracy for economy. We demonstrate the algorithm on example datasets from
image recognition using PCA for feature extraction and LDA for discrimination;
however, the method can be broadly applied to non-image data and adapted to
work with other methods for feature extraction and discrimination.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2013 21:41:17 GMT"
}
] | 2013-10-17T00:00:00 | [
[
"Brunton",
"B. W.",
""
],
[
"Brunton",
"S. L.",
""
],
[
"Proctor",
"J. L.",
""
],
[
"Kutz",
"J. N.",
""
]
] | TITLE: Optimal Sensor Placement and Enhanced Sparsity for Classification
ABSTRACT: The goal of compressive sensing is efficient reconstruction of data from few
measurements, sometimes leading to a categorical decision. If only
classification is required, reconstruction can be circumvented and the
measurements needed are orders-of-magnitude sparser still. We define enhanced
sparsity as the reduction in number of measurements required for classification
over reconstruction. In this work, we exploit enhanced sparsity and learn
spatial sensor locations that optimally inform a categorical decision. The
algorithm solves an l1-minimization to find the fewest entries of the full
measurement vector that exactly reconstruct the discriminant vector in feature
space. Once the sensor locations have been identified from the training data,
subsequent test samples are classified with remarkable efficiency, achieving
performance comparable to that obtained by discrimination using the full image.
Sensor locations may be learned from full images, or from a random subsample of
pixels. For classification between more than two categories, we introduce a
coupling parameter whose value tunes the number of sensors selected, trading
accuracy for economy. We demonstrate the algorithm on example datasets from
image recognition using PCA for feature extraction and LDA for discrimination;
however, the method can be broadly applied to non-image data and adapted to
work with other methods for feature extraction and discrimination.
|
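A sketch of the core l1 step described above, under the assumption that Psi holds the feature basis (e.g. PCA modes) and w the discriminant vector: minimize ||s||_1 subject to Psi^T s = w, written as a linear program. Dimensions and data are illustrative, and the coupling parameter for more than two categories is not shown.

```python
# Sketch of the core l1 step: find a sparse pixel-space vector s whose
# projection onto the feature basis reproduces the discriminant vector,
#   minimize ||s||_1  subject to  Psi.T @ s = w,
# written as a linear program over the positive/negative parts of s.
import numpy as np
from scipy.optimize import linprog

def sparse_sensors(Psi, w):
    n, r = Psi.shape                      # n pixels, r features (r << n)
    # variables x = [s_plus, s_minus], both >= 0, with s = s_plus - s_minus
    c = np.ones(2 * n)                    # objective: sum of |s_i|
    A_eq = np.hstack([Psi.T, -Psi.T])     # Psi.T @ (s_plus - s_minus) = w
    res = linprog(c, A_eq=A_eq, b_eq=w, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
Psi = rng.normal(size=(200, 10))          # 200 "pixels", 10 feature modes
w = rng.normal(size=10)                   # discriminant vector in feature space
s = sparse_sensors(Psi, w)
print("nonzero sensors:", np.sum(np.abs(s) > 1e-6))   # typically about r of them
```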
1310.4321 | Alberto Maurizi | Alberto Maurizi and Francesco Tampieri | Some considerations on skewness and kurtosis of vertical velocity in the
convective boundary layer | null | null | null | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data of skewness $S$ and kurtosis $K$ of vertical velocity in the convective
boundary layer from different datasets have been analysed. Vertical profiles of
$S$ were found to be grouped into two classes that display different slopes
with height: one is nearly constant and the other is increasing. This behaviour
can be explained using a simple model for the PDF of vertical velocity and
assuming two distinct vertical profiles of updraft area fraction from the
literature. The possibility of describing the explicit dependence of $K$ on $S$
was critically revisited, also considering the neutral limit as well as the
limit for very small non-dimensional height. It was found that the coefficients
of the relationship depend on both the Obukhov length scale $L$ and the
inversion height $z_i$.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2013 10:25:46 GMT"
}
] | 2013-10-17T00:00:00 | [
[
"Maurizi",
"Alberto",
""
],
[
"Tampieri",
"Francesco",
""
]
] | TITLE: Some considerations on skewness and kurtosis of vertical velocity in the
convective boundary layer
ABSTRACT: Data of skewness $S$ and kurtosis $K$ of vertical velocity in the convective
boundary layer from different datasets have been analysed. Vertical profiles of
$S$ were found to be grouped into two classes that display different slopes
with height: one is nearly constant and the other is increasing. This behaviour
can be explained using a simple model for the PDF of vertical velocity and
assuming two distinct vertical profiles of updraft area fraction from the
literature. The possibility of describing the explicit dependence of $K$ on $S$
was critically revisited, also considering the neutral limit as well as the
limit for very small non-dimensional height. It was found that the coefficients
of the relationship depend on both the Obukhov length scale $L$ and the
inversion height $z_i$.
|
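As a concrete illustration of how an updraft-area-fraction model couples $S$ and $K$, the sketch below assumes the simplest two-delta (updraft/downdraft) PDF; the paper's actual PDF model may differ. For this model $S$ and $K$ depend only on the updraft fraction a and satisfy K = 1 + S^2.

```python
# Sketch, assuming a two-delta (updraft/downdraft) model of the vertical
# velocity PDF with updraft area fraction a: strong updrafts over a fraction a
# of the area, compensating downdrafts elsewhere, zero mean. S and K then
# depend only on a and satisfy K = 1 + S**2.
import numpy as np

def skewness_two_delta(a):
    return (1.0 - 2.0 * a) / np.sqrt(a * (1.0 - a))

def kurtosis_two_delta(a):
    return (1.0 - 3.0 * a * (1.0 - a)) / (a * (1.0 - a))

a = np.linspace(0.05, 0.5, 10)        # updraft area fractions
S = skewness_two_delta(a)
K = kurtosis_two_delta(a)
print(np.allclose(K, 1.0 + S**2))     # True for the two-delta model
```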
1310.3939 | Domenico Sacca' | Domenico Sacca', Edoardo Serra, Pietro Dicosta, Antonio Piccolo | Multi-Sorted Inverse Frequent Itemsets Mining | 14 pages | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of novel platforms and techniques for emerging "Big Data"
applications requires the availability of real-life datasets for data-driven
experiments, which are however out of reach for academic research in most cases
as they are typically proprietary. A possible solution is to use synthesized
datasets that reflect patterns of real ones in order to ensure high quality
experimental findings. A first step in this direction is to use inverse mining
techniques such as inverse frequent itemset mining (IFM) that consists of
generating a transactional database satisfying given support constraints on the
itemsets in an input set, that are typically the frequent ones. This paper
introduces an extension of IFM, called many-sorted IFM, where the schemes for
the datasets to be generated are those typical of Big Tables as required in
emerging big data applications, e.g., social network analytics.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2013 07:38:36 GMT"
}
] | 2013-10-16T00:00:00 | [
[
"Sacca'",
"Domenico",
""
],
[
"Serra",
"Edoardo",
""
],
[
"Dicosta",
"Pietro",
""
],
[
"Piccolo",
"Antonio",
""
]
] | TITLE: Multi-Sorted Inverse Frequent Itemsets Mining
ABSTRACT: The development of novel platforms and techniques for emerging "Big Data"
applications requires the availability of real-life datasets for data-driven
experiments, which are however out of reach for academic research in most cases
as they are typically proprietary. A possible solution is to use synthesized
datasets that reflect patterns of real ones in order to ensure high quality
experimental findings. A first step in this direction is to use inverse mining
techniques such as inverse frequent itemset mining (IFM) that consists of
generating a transactional database satisfying given support constraints on the
itemsets in an input set, that are typically the frequent ones. This paper
introduces an extension of IFM, called many-sorted IFM, where the schemes for
the datasets to be generated are those typical of Big Tables as required in
emerging big data applications, e.g., social network analytics.
|
1310.4136 | Thiago S. F. X. Teixeira | Thiago S. F. X. Teixeira, George Teodoro, Eduardo Valle, Joel H. Saltz | Scalable Locality-Sensitive Hashing for Similarity Search in
High-Dimensional, Large-Scale Multimedia Datasets | null | null | null | null | cs.DC cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity search is critical for many database applications, including the
increasingly popular online services for Content-Based Multimedia Retrieval
(CBMR). These services, which include image search engines, must handle an
overwhelming volume of data, while keeping low response times. Thus,
scalability is imperative for similarity search in Web-scale applications, but
most existing methods are sequential and target shared-memory machines. Here we
address these issues with a distributed, efficient, and scalable index based on
Locality-Sensitive Hashing (LSH). LSH is one of the most efficient and popular
techniques for similarity search, but its poor referential locality properties
have made its implementation a challenging problem. Our solution is based on a
widely asynchronous dataflow parallelization with a number of optimizations
that include a hierarchical parallelization to decouple indexing and data
storage, locality-aware data partition strategies to reduce message passing,
and multi-probing to limit memory usage. The proposed parallelization attained
an efficiency of 90% in a distributed system with about 800 CPU cores. In
particular, the original locality-aware data partition reduced the number of
messages exchanged by 30%. Our parallel LSH was evaluated using the largest
public dataset for similarity search (to the best of our knowledge) with $10^9$
128-d SIFT descriptors extracted from Web images. This is two orders of
magnitude larger than datasets that previous LSH parallelizations could handle.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2013 18:21:39 GMT"
}
] | 2013-10-16T00:00:00 | [
[
"Teixeira",
"Thiago S. F. X.",
""
],
[
"Teodoro",
"George",
""
],
[
"Valle",
"Eduardo",
""
],
[
"Saltz",
"Joel H.",
""
]
] | TITLE: Scalable Locality-Sensitive Hashing for Similarity Search in
High-Dimensional, Large-Scale Multimedia Datasets
ABSTRACT: Similarity search is critical for many database applications, including the
increasingly popular online services for Content-Based Multimedia Retrieval
(CBMR). These services, which include image search engines, must handle an
overwhelming volume of data, while keeping low response times. Thus,
scalability is imperative for similarity search in Web-scale applications, but
most existing methods are sequential and target shared-memory machines. Here we
address these issues with a distributed, efficient, and scalable index based on
Locality-Sensitive Hashing (LSH). LSH is one of the most efficient and popular
techniques for similarity search, but its poor referential locality properties
have made its implementation a challenging problem. Our solution is based on a
widely asynchronous dataflow parallelization with a number of optimizations
that include a hierarchical parallelization to decouple indexing and data
storage, locality-aware data partition strategies to reduce message passing,
and multi-probing to limit memory usage. The proposed parallelization attained
an efficiency of 90% in a distributed system with about 800 CPU cores. In
particular, the original locality-aware data partition reduced the number of
messages exchanged by 30%. Our parallel LSH was evaluated using the largest
public dataset for similarity search (to the best of our knowledge) with $10^9$
128-d SIFT descriptors extracted from Web images. This is two orders of
magnitude larger than datasets that previous LSH parallelizations could handle.
|
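A minimal single-machine sketch of the indexing idea above: random-hyperplane (sign) LSH for cosine similarity with several independent hash tables. The distributed dataflow parallelization, locality-aware partitioning and multi-probing of the paper are not shown; table and bit counts are illustrative.

```python
# Minimal single-machine LSH index using random-hyperplane (sign) hashes for
# cosine similarity, with several independent hash tables. Query returns
# candidate ids that should be re-ranked with exact distances afterwards.
from collections import defaultdict
import numpy as np

class LSHIndex:
    def __init__(self, dim, n_tables=8, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.normal(size=(n_bits, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, x):
        return [tuple((P @ x > 0).astype(int)) for P in self.planes]

    def add(self, idx, x):
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(idx)

    def query(self, x):
        candidates = set()
        for table, key in zip(self.tables, self._keys(x)):
            candidates.update(table[key])
        return candidates

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 128))        # stand-in for SIFT descriptors
index = LSHIndex(dim=128)
for i, v in enumerate(data):
    index.add(i, v)
hits = index.query(data[0])
print(len(hits), "candidates; true item found:", 0 in hits)
```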
1310.3322 | Mohamed Elhoseiny Mohamed Elhoseiny | Mohamed Elhoseiny, Hossam Faheem, Taymour Nazmy, and Eman Shaaban | GPU-Framework for Teamwork Action Recognition | 7 pages | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real time processing for teamwork action recognition is a challenge, due to
the complex computational models needed to achieve high system performance. Hence, this
paper proposes a framework based on Graphical Processing Units (GPUs) to
achieve a significant speedup in the performance of role-based activity
recognition of teamwork. The framework can be applied in various fields,
especially athletic and military applications. Furthermore, the framework can
be customized for many action recognition applications. The paper presents the
stages of the framework where GPUs are the main tool for performance
improvement. The speedup is achieved by performing video processing and machine
learning algorithms on the GPU; together these cover all computations involved
in our framework. Video processing tasks involve GPU implementations of motion
detection, segmentation and object
tracking algorithms. In addition, our framework is integrated with GPUCV, a GPU
version of OpenCV functions. Machine learning tasks are supported under our
framework with GPU implementations of Support Vector Machine (SVM) for object
classification and feature discretization, a Hidden Markov Model (HMM) for the
activity recognition phase, and the ID3 algorithm for role recognition of team
members. The system was tested against the UC-Teamwork dataset, and a speedup
of 20X was achieved on an NVidia 9500GT graphics card (32 processors at 500 MHz).
| [
{
"version": "v1",
"created": "Sat, 12 Oct 2013 01:16:32 GMT"
}
] | 2013-10-15T00:00:00 | [
[
"Elhoseiny",
"Mohamed",
""
],
[
"Faheem",
"Hossam",
""
],
[
"Nazmy",
"Taymour",
""
],
[
"Shaaban",
"Eman",
""
]
] | TITLE: GPU-Framework for Teamwork Action Recognition
ABSTRACT: Real time processing for teamwork action recognition is a challenge, due to
the complex computational models needed to achieve high system performance. Hence, this
paper proposes a framework based on Graphical Processing Units (GPUs) to
achieve a significant speedup in the performance of role-based activity
recognition of teamwork. The framework can be applied in various fields,
especially athletic and military applications. Furthermore, the framework can
be customized for many action recognition applications. The paper presents the
stages of the framework where GPUs are the main tool for performance
improvement. The speedup is achieved by performing video processing and machine
learning algorithms on the GPU; together these cover all computations involved
in our framework. Video processing tasks involve GPU implementations of motion
detection, segmentation and object
tracking algorithms. In addition, our framework is integrated with GPUCV, a GPU
version of OpenCV functions. Machine learning tasks are supported under our
framework with GPU implementations of Support Vector Machine (SVM) for object
classification and feature discretization, a Hidden Markov Model (HMM) for the
activity recognition phase, and the ID3 algorithm for role recognition of team
members. The system was tested against the UC-Teamwork dataset, and a speedup
of 20X was achieved on an NVidia 9500GT graphics card (32 processors at 500 MHz).
|
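A sketch of the motion-detection stage only, as plain frame differencing in NumPy; the paper runs this and the later SVM/HMM/ID3 stages on the GPU via GPUCV and CUDA, which is not reproduced here. The threshold and synthetic frames are illustrative.

```python
# Sketch of the motion-detection stage only: simple frame differencing with a
# threshold, in NumPy (the GPU parallelization of the paper is not shown).
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Grayscale frames as uint8 arrays; returns a boolean motion mask."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# toy usage: a bright "player" patch appears between two synthetic frames
prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:60, 70:90] = 200
print("moving pixels:", int(motion_mask(prev_frame, frame).sum()))   # 400
```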
1310.3353 | Gunnar W. Klau | Thomas Bellitto and Tobias Marschall and Alexander Sch\"onhuth and
Gunnar W. Klau | Next Generation Cluster Editing | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work aims at improving the quality of structural variant prediction from
the mapped reads of a sequenced genome. We suggest a new model based on cluster
editing in weighted graphs and introduce a new heuristic algorithm that allows
us to solve this problem quickly and with a good approximation on the huge graphs
that arise from biological datasets.
| [
{
"version": "v1",
"created": "Sat, 12 Oct 2013 09:34:30 GMT"
}
] | 2013-10-15T00:00:00 | [
[
"Bellitto",
"Thomas",
""
],
[
"Marschall",
"Tobias",
""
],
[
"Schönhuth",
"Alexander",
""
],
[
"Klau",
"Gunnar W.",
""
]
] | TITLE: Next Generation Cluster Editing
ABSTRACT: This work aims at improving the quality of structural variant prediction from
the mapped reads of a sequenced genome. We suggest a new model based on cluster
editing in weighted graphs and introduce a new heuristic algorithm that allows
us to solve this problem quickly and with a good approximation on the huge graphs
that arise from biological datasets.
|
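For context, a generic pivot-style heuristic for clustering a weighted similarity graph (pick an unclustered pivot, group it with its positively weighted unclustered neighbours, repeat). This classic heuristic is shown only to illustrate the cluster-editing setting and is not the algorithm proposed above; the toy weights are illustrative.

```python
# Generic pivot-style heuristic for cluster editing on a weighted graph: take
# an unclustered pivot, cluster it with all unclustered neighbours joined by a
# positive-weight edge, remove them, and repeat.
import random

def pivot_clustering(nodes, weight, seed=0):
    """weight: dict mapping frozenset({u, v}) -> signed similarity weight."""
    rng = random.Random(seed)
    order = list(nodes)
    rng.shuffle(order)
    clusters, unassigned = [], set(nodes)
    for pivot in order:
        if pivot not in unassigned:
            continue
        cluster = {pivot} | {v for v in unassigned if v != pivot
                             and weight.get(frozenset((pivot, v)), -1) > 0}
        clusters.append(cluster)
        unassigned -= cluster
    return clusters

# toy usage: two well-connected groups with one negative link between them
w = {frozenset(e): 1 for e in [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)]}
w[frozenset((3, 4))] = -1
print(pivot_clustering([1, 2, 3, 4, 5, 6], w))   # two clusters of three nodes
```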
1310.3498 | Dirk Helbing | Dirk Helbing | New Ways to Promote Sustainability and Social Well-Being in a Complex,
Strongly Interdependent World: The FuturICT Approach | For related work see http://www.soms.ethz.ch and
http://www.futurict.eu | This is the Epilogue of the Booklet by P. Ball, Why Society is a
Complex Matter (Springer, Berlin, 2012), pp. 55-60 | null | null | cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FuturICT is one of six proposals currently being considered for support
within the European Commission's Flagship Initiative (see Box 1). The vision of
the FuturICT project is to develop new science and new information and
communication systems that will promote social self-organization,
self-regulation, well-being, sustainability, and resilience. One of the main
aims of the approach is to increase individual opportunities for social,
economic and political participation, combined with the creation of collective
awareness of the impact that human actions have on our world. This requires us
to mine large datasets ("Big Data") and to develop new methods and tools: a
Planetary Nervous System (PNS) to answer "What is (the state of the world)..."
questions, a Living Earth Simulator (LES) to study "What ... if ..." scenarios,
and a Global Participatory Platform (GPP) for social exploration and
interaction.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2013 18:03:36 GMT"
}
] | 2013-10-15T00:00:00 | [
[
"Helbing",
"Dirk",
""
]
] | TITLE: New Ways to Promote Sustainability and Social Well-Being in a Complex,
Strongly Interdependent World: The FuturICT Approach
ABSTRACT: FuturICT is one of six proposals currently being considered for support
within the European Commission's Flagship Initiative (see Box 1). The vision of
the FuturICT project is to develop new science and new information and
communication systems that will promote social self-organization,
self-regulation, well-being, sustainability, and resilience. One of the main
aims of the approach is to increase individual opportunities for social,
economic and political participation, combined with the creation of collective
awareness of the impact that human actions have on our world. This requires us
to mine large datasets ("Big Data") and to develop new methods and tools: a
Planetary Nervous System (PNS) to answer "What is (the state of the world)..."
questions, a Living Earth Simulator (LES) to study "What ... if ..." scenarios,
and a Global Participatory Platform (GPP) for social exploration and
interaction.
|
1310.3805 | Chiranjib Sur | Chiranjib Sur, Anupam Shukla | Green Heron Swarm Optimization Algorithm - State-of-the-Art of a New
Nature Inspired Discrete Meta-Heuristics | 20 pages, Pre-print copy, submitted to a peer reviewed journal | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world problems are NP-hard, and a very large part of them can be
represented as graph-based problems. This makes graph theory a very important
and prevalent field of study. In this work, a new bio-inspired meta-heuristic
called the Green Heron Swarm Optimization (GHOSA) Algorithm is introduced,
inspired by the fishing skills of the bird. The algorithm is basically suited
for graph-based problems like combinatorial optimization. However, the
introduction of an adaptive mathematical variation operator called Location
Based Neighbour Influenced Variation (LBNIV) makes it suitable for
high-dimensional continuous-domain problems. The new algorithm is evaluated on
the traditional benchmark equations and the results are compared with the
Genetic Algorithm and Particle Swarm Optimization. The algorithm is also
applied to the Travelling Salesman Problem, Quadratic Assignment Problem and
Knapsack Problem datasets. The procedure to apply the algorithm to the Resource
Constrained Shortest Path problem and road network optimization is also
discussed. The results clearly establish GHOSA as an efficient algorithm,
especially considering that the number of algorithms for discrete optimization
is very low and that a more robust and explorative algorithm is required in
this age of social networking and mostly graph-based problem scenarios.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2013 19:42:26 GMT"
}
] | 2013-10-15T00:00:00 | [
[
"Sur",
"Chiranjib",
""
],
[
"Shukla",
"Anupam",
""
]
] | TITLE: Green Heron Swarm Optimization Algorithm - State-of-the-Art of a New
Nature Inspired Discrete Meta-Heuristics
ABSTRACT: Many real-world problems are NP-hard, and a very large part of them can be
represented as graph-based problems. This makes graph theory a very important
and prevalent field of study. In this work, a new bio-inspired meta-heuristic
called the Green Heron Swarm Optimization (GHOSA) Algorithm is introduced,
inspired by the fishing skills of the bird. The algorithm is basically suited
for graph-based problems like combinatorial optimization. However, the
introduction of an adaptive mathematical variation operator called Location
Based Neighbour Influenced Variation (LBNIV) makes it suitable for
high-dimensional continuous-domain problems. The new algorithm is evaluated on
the traditional benchmark equations and the results are compared with the
Genetic Algorithm and Particle Swarm Optimization. The algorithm is also
applied to the Travelling Salesman Problem, Quadratic Assignment Problem and
Knapsack Problem datasets. The procedure to apply the algorithm to the Resource
Constrained Shortest Path problem and road network optimization is also
discussed. The results clearly establish GHOSA as an efficient algorithm,
especially considering that the number of algorithms for discrete optimization
is very low and that a more robust and explorative algorithm is required in
this age of social networking and mostly graph-based problem scenarios.
|
1310.3073 | Teresa Scholz | Teresa Scholz, Vitor V. Lopes, Ana Estanqueiro | A cyclic time-dependent Markov process to model daily patterns in wind
turbine power production | null | null | null | null | physics.data-an stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wind energy is becoming a top contributor to the renewable energy mix, which
raises potential reliability issues for the grid due to the fluctuating nature
of its source. To achieve adequate reserve commitment and to promote market
participation, it is necessary to provide models that can capture daily
patterns in wind power production. This paper presents a cyclic inhomogeneous
Markov process, which is based on a three-dimensional state-space (wind power,
speed and direction). Each time-dependent transition probability is expressed
as a Bernstein polynomial. The model parameters are estimated by solving a
constrained optimization problem: The objective function combines two maximum
likelihood estimators, one to ensure that the long-term behavior of the Markov
process reproduces the data accurately and another to capture daily fluctuations. A
convex formulation for the overall optimization problem is presented and its
applicability demonstrated through the analysis of a case-study. The proposed
model is capable of reproducing the diurnal patterns of a three-year dataset
collected from a wind turbine located in a mountainous region in Portugal. In
addition, it is shown how to compute persistence statistics directly from the
Markov process transition matrices. Based on the case-study, the power
production persistence through the daily cycle is analysed and discussed.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2013 10:13:43 GMT"
}
] | 2013-10-14T00:00:00 | [
[
"Scholz",
"Teresa",
""
],
[
"Lopes",
"Vitor V.",
""
],
[
"Estanqueiro",
"Ana",
""
]
] | TITLE: A cyclic time-dependent Markov process to model daily patterns in wind
turbine power production
ABSTRACT: Wind energy is becoming a top contributor to the renewable energy mix, which
raises potential reliability issues for the grid due to the fluctuating nature
of its source. To achieve adequate reserve commitment and to promote market
participation, it is necessary to provide models that can capture daily
patterns in wind power production. This paper presents a cyclic inhomogeneous
Markov process, which is based on a three-dimensional state-space (wind power,
speed and direction). Each time-dependent transition probability is expressed
as a Bernstein polynomial. The model parameters are estimated by solving a
constrained optimization problem: The objective function combines two maximum
likelihood estimators, one to ensure that the Markov process long-term behavior
reproduces the data accurately and another to capture daily fluctuations. A
convex formulation for the overall optimization problem is presented and its
applicability demonstrated through the analysis of a case-study. The proposed
model is capable of reproducing the diurnal patterns of a three-year dataset
collected from a wind turbine located in a mountainous region in Portugal. In
addition, it is shown how to compute persistence statistics directly from the
Markov process transition matrices. Based on the case-study, the power
production persistence through the daily cycle is analysed and discussed.
|
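To illustrate the Bernstein-polynomial construction described in record 1310.3073 above, the following is a minimal Python sketch of a cyclic inhomogeneous Markov chain whose transition probabilities vary with the time of day. The three-state space, polynomial degree and random coefficients are illustrative assumptions, not the parameters estimated in the paper, and simple row normalization stands in for the authors' constrained maximum-likelihood estimation.

```python
# Minimal sketch: cyclic inhomogeneous Markov chain with Bernstein-polynomial
# transition probabilities. States, degree and coefficients are illustrative.
import numpy as np
from scipy.special import comb

def bernstein_basis(t, degree):
    """Evaluate the Bernstein basis polynomials of the given degree at t in [0, 1]."""
    k = np.arange(degree + 1)
    return comb(degree, k) * t**k * (1.0 - t)**(degree - k)

def transition_matrix(t, coeffs):
    """Row-stochastic transition matrix at cycle position t in [0, 1].

    coeffs has shape (n_states, n_states, degree + 1): one set of Bernstein
    coefficients per (origin, destination) pair."""
    basis = bernstein_basis(t, coeffs.shape[2] - 1)
    P = np.tensordot(coeffs, basis, axes=([2], [0]))   # (n_states, n_states)
    P = np.clip(P, 1e-12, None)
    return P / P.sum(axis=1, keepdims=True)            # enforce row sums of 1

rng = np.random.default_rng(0)
n_states, degree = 3, 4                                # e.g. low/medium/high power
coeffs = rng.uniform(size=(n_states, n_states, degree + 1))

# Simulate one week at hourly resolution; t wraps around the daily cycle.
state, path = 0, []
for hour in range(24 * 7):
    t = (hour % 24) / 24.0
    P = transition_matrix(t, coeffs)
    state = rng.choice(n_states, p=P[state])
    path.append(int(state))
print("visited states (first day):", path[:24])
```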
1310.3197 | Yaniv Erlich | Yaniv Erlich and Arvind Narayanan | Routes for breaching and protecting genetic privacy | Draft for comments | null | null | null | q-bio.GN cs.CR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are entering the era of ubiquitous genetic information for research,
clinical care, and personal curiosity. Sharing these datasets is vital for
rapid progress in understanding the genetic basis of human diseases. However,
one growing concern is the ability to protect the genetic privacy of the data
originators. Here, we technically map threats to genetic privacy and discuss
potential mitigation strategies for privacy-preserving dissemination of genetic
data.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2013 17:02:54 GMT"
}
] | 2013-10-14T00:00:00 | [
[
"Erlich",
"Yaniv",
""
],
[
"Narayanan",
"Arvind",
""
]
] | TITLE: Routes for breaching and protecting genetic privacy
ABSTRACT: We are entering the era of ubiquitous genetic information for research,
clinical care, and personal curiosity. Sharing these datasets is vital for
rapid progress in understanding the genetic basis of human diseases. However,
one growing concern is the ability to protect the genetic privacy of the data
originators. Here, we technically map threats to genetic privacy and discuss
potential mitigation strategies for privacy-preserving dissemination of genetic
data.
|
1310.3233 | ANqi Qiu DR | Jia Du, Alvina Goh, Anqi Qiu | Bayesian Estimation of White Matter Atlas from High Angular Resolution
Diffusion Imaging | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a Bayesian probabilistic model to estimate the brain white matter
atlas from high angular resolution diffusion imaging (HARDI) data. This model
incorporates a shape prior of the white matter anatomy and the likelihood of
individual observed HARDI datasets. We first assume that the atlas is generated
from a known hyperatlas through a flow of diffeomorphisms and its shape prior
can be constructed based on the framework of large deformation diffeomorphic
metric mapping (LDDMM). LDDMM characterizes a nonlinear diffeomorphic shape
space in a linear space of initial momentum uniquely determining diffeomorphic
geodesic flows from the hyperatlas. Therefore, the shape prior of the HARDI
atlas can be modeled using a centered Gaussian random field (GRF) model of the
initial momentum. In order to construct the likelihood of observed HARDI
datasets, it is necessary to study the diffeomorphic transformation of
individual observations relative to the atlas and the probabilistic
distribution of orientation distribution functions (ODFs). To this end, we
construct the likelihood related to the transformation using the same
construction as discussed for the shape prior of the atlas. The probabilistic
distribution of ODFs is then constructed based on the ODF Riemannian manifold.
We assume that the observed ODFs are generated by an exponential map of random
tangent vectors at the deformed atlas ODF. Hence, the likelihood of the ODFs
can be modeled using a GRF of their tangent vectors in the ODF Riemannian
manifold. We solve for the maximum a posteriori using the
Expectation-Maximization algorithm and derive the corresponding update
equations. Finally, we illustrate the HARDI atlas constructed based on a
Chinese aging cohort of 94 adults and compare it with that generated by
averaging the coefficients of spherical harmonics of the ODF across subjects.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2013 00:32:01 GMT"
}
] | 2013-10-14T00:00:00 | [
[
"Du",
"Jia",
""
],
[
"Goh",
"Alvina",
""
],
[
"Qiu",
"Anqi",
""
]
] | TITLE: Bayesian Estimation of White Matter Atlas from High Angular Resolution
Diffusion Imaging
ABSTRACT: We present a Bayesian probabilistic model to estimate the brain white matter
atlas from high angular resolution diffusion imaging (HARDI) data. This model
incorporates a shape prior of the white matter anatomy and the likelihood of
individual observed HARDI datasets. We first assume that the atlas is generated
from a known hyperatlas through a flow of diffeomorphisms and its shape prior
can be constructed based on the framework of large deformation diffeomorphic
metric mapping (LDDMM). LDDMM characterizes a nonlinear diffeomorphic shape
space in a linear space of initial momentum uniquely determining diffeomorphic
geodesic flows from the hyperatlas. Therefore, the shape prior of the HARDI
atlas can be modeled using a centered Gaussian random field (GRF) model of the
initial momentum. In order to construct the likelihood of observed HARDI
datasets, it is necessary to study the diffeomorphic transformation of
individual observations relative to the atlas and the probabilistic
distribution of orientation distribution functions (ODFs). To this end, we
construct the likelihood related to the transformation using the same
construction as discussed for the shape prior of the atlas. The probabilistic
distribution of ODFs is then constructed based on the ODF Riemannian manifold.
We assume that the observed ODFs are generated by an exponential map of random
tangent vectors at the deformed atlas ODF. Hence, the likelihood of the ODFs
can be modeled using a GRF of their tangent vectors in the ODF Riemannian
manifold. We solve for the maximum a posteriori using the
Expectation-Maximization algorithm and derive the corresponding update
equations. Finally, we illustrate the HARDI atlas constructed based on a
Chinese aging cohort of 94 adults and compare it with that generated by
averaging the coefficients of spherical harmonics of the ODF across subjects.
|
1310.2646 | Akshay Gadde | Sunil K. Narang, Akshay Gadde, Eduard Sanou and Antonio Ortega | Localized Iterative Methods for Interpolation in Graph Structured Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present two localized graph filtering based methods for
interpolating graph signals defined on the vertices of arbitrary graphs from
only a partial set of samples. The first method is an extension of previous
work on reconstructing bandlimited graph signals from partially observed
samples. The iterative graph filtering approach very closely approximates the
solution proposed in that work, while being computationally more efficient.
As an alternative, we propose a regularization based framework in which we
define the cost of reconstruction to be a combination of smoothness of the
graph signal and the reconstruction error with respect to the known samples,
and find solutions that minimize this cost. We provide both a closed form
solution and a computationally efficient iterative solution of the optimization
problem. The experimental results on the recommendation system datasets
demonstrate the effectiveness of the proposed methods.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2013 22:24:28 GMT"
}
] | 2013-10-11T00:00:00 | [
[
"Narang",
"Sunil K.",
""
],
[
"Gadde",
"Akshay",
""
],
[
"Sanou",
"Eduard",
""
],
[
"Ortega",
"Antonio",
""
]
] | TITLE: Localized Iterative Methods for Interpolation in Graph Structured Data
ABSTRACT: In this paper, we present two localized graph filtering based methods for
interpolating graph signals defined on the vertices of arbitrary graphs from
only a partial set of samples. The first method is an extension of previous
work on reconstructing bandlimited graph signals from partially observed
samples. The iterative graph filtering approach very closely approximates the
solution proposed in that work, while being computationally more efficient.
As an alternative, we propose a regularization based framework in which we
define the cost of reconstruction to be a combination of smoothness of the
graph signal and the reconstruction error with respect to the known samples,
and find solutions that minimize this cost. We provide both a closed form
solution and a computationally efficient iterative solution of the optimization
problem. The experimental results on the recommendation system datasets
demonstrate the effectiveness of the proposed methods.
|
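The regularization-based reconstruction described in record 1310.2646 above admits a closed-form solution; the sketch below shows one standard formulation, minimizing the squared error on the known samples plus a graph-Laplacian smoothness penalty. The ring graph, the signal and the weight mu are synthetic assumptions, and the quadratic regularizer is a stand-in for the paper's exact cost.

```python
# Minimal sketch: closed-form interpolation of a graph signal from partial
# samples, trading off reconstruction error against Laplacian smoothness.
import numpy as np

def interpolate_graph_signal(W, sampled_idx, y, mu=0.1):
    """Return the signal minimizing ||x[sampled_idx] - y||^2 + mu * x^T L x."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # combinatorial graph Laplacian
    S = np.zeros((len(sampled_idx), n))            # sampling (selection) matrix
    S[np.arange(len(sampled_idx)), sampled_idx] = 1.0
    A = S.T @ S + mu * L
    return np.linalg.solve(A, S.T @ y)

# Small ring graph with a smooth signal, observed on half of the vertices.
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
x_true = np.sin(2 * np.pi * np.arange(n) / n)
sampled_idx = np.arange(0, n, 2)
x_hat = interpolate_graph_signal(W, sampled_idx, x_true[sampled_idx], mu=0.05)
print("max reconstruction error:", np.abs(x_hat - x_true).max())
```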
1310.2409 | Ning Chen | Ning Chen, Jun Zhu, Fei Xia, Bo Zhang | Discriminative Relational Topic Models | null | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many scientific and engineering fields involve analyzing network data. For
document networks, relational topic models (RTMs) provide a probabilistic
generative process to describe both the link structure and document contents,
and they have shown promise on predicting network structures and discovering
latent topic representations. However, existing RTMs have limitations in both
the restricted model expressiveness and incapability of dealing with imbalanced
network data. To expand the scope and improve the inference accuracy of RTMs,
this paper presents three extensions: 1) unlike the common link likelihood with
a diagonal weight matrix that allows the-same-topic interactions only, we
generalize it to use a full weight matrix that captures all pairwise topic
interactions and is applicable to asymmetric networks; 2) instead of doing
standard Bayesian inference, we perform regularized Bayesian inference
(RegBayes) with a regularization parameter to deal with the imbalanced link
structure issue in common real networks and improve the discriminative ability
of learned latent representations; and 3) instead of doing variational
approximation with strict mean-field assumptions, we present collapsed Gibbs
sampling algorithms for the generalized relational topic models by exploring
data augmentation without making restricting assumptions. Under the generic
RegBayes framework, we carefully investigate two popular discriminative loss
functions, namely, the logistic log-loss and the max-margin hinge loss.
Experimental results on several real network datasets demonstrate the
significance of these extensions on improving the prediction performance, and
the time efficiency can be dramatically improved with a simple fast
approximation method.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2013 09:32:56 GMT"
}
] | 2013-10-10T00:00:00 | [
[
"Chen",
"Ning",
""
],
[
"Zhu",
"Jun",
""
],
[
"Xia",
"Fei",
""
],
[
"Zhang",
"Bo",
""
]
] | TITLE: Discriminative Relational Topic Models
ABSTRACT: Many scientific and engineering fields involve analyzing network data. For
document networks, relational topic models (RTMs) provide a probabilistic
generative process to describe both the link structure and document contents,
and they have shown promise on predicting network structures and discovering
latent topic representations. However, existing RTMs are limited by both their
restricted model expressiveness and their inability to deal with imbalanced
network data. To expand the scope and improve the inference accuracy of RTMs,
this paper presents three extensions: 1) unlike the common link likelihood with
a diagonal weight matrix that allows the-same-topic interactions only, we
generalize it to use a full weight matrix that captures all pairwise topic
interactions and is applicable to asymmetric networks; 2) instead of doing
standard Bayesian inference, we perform regularized Bayesian inference
(RegBayes) with a regularization parameter to deal with the imbalanced link
structure issue in common real networks and improve the discriminative ability
of learned latent representations; and 3) instead of doing variational
approximation with strict mean-field assumptions, we present collapsed Gibbs
sampling algorithms for the generalized relational topic models by exploring
data augmentation without making restricting assumptions. Under the generic
RegBayes framework, we carefully investigate two popular discriminative loss
functions, namely, the logistic log-loss and the max-margin hinge loss.
Experimental results on several real network datasets demonstrate the
significance of these extensions on improving the prediction performance, and
the time efficiency can be dramatically improved with a simple fast
approximation method.
|
1310.2053 | Kai Berger | Kai Berger | The role of RGB-D benchmark datasets: an overview | 6 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of the Microsoft Kinect three years ago stimulated not only the
computer vision community for new algorithms and setups to tackle well-known
problems in the community but also sparked the launch of several new benchmark
datasets to which future algorithms can be compared. This review of the
literature and industry developments concludes that the current RGB-D benchmark
datasets can be useful to determine the accuracy of a variety of applications
of a single or multiple RGB-D sensors.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2013 09:16:56 GMT"
}
] | 2013-10-09T00:00:00 | [
[
"Berger",
"Kai",
""
]
] | TITLE: The role of RGB-D benchmark datasets: an overview
ABSTRACT: The advent of the Microsoft Kinect three years ago stimulated not only the
computer vision community for new algorithms and setups to tackle well-known
problems in the community but also sparked the launch of several new benchmark
datasets to which future algorithms can be compared. This review of the
literature and industry developments concludes that the current RGB-D benchmark
datasets can be useful to determine the accuracy of a variety of applications
of a single or multiple RGB-D sensors.
|
1310.1498 | Nikolas Landia | Nikolas Landia, Stephan Doerfel, Robert J\"aschke, Sarabjot Singh
Anand, Andreas Hotho and Nathan Griffiths | Deeper Into the Folksonomy Graph: FolkRank Adaptations and Extensions
for Improved Tag Recommendations | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The information contained in social tagging systems is often modelled as a
graph of connections between users, items and tags. Recommendation algorithms
such as FolkRank have the potential to leverage complex relationships in the
data, corresponding to multiple hops in the graph. We present an in-depth
analysis and evaluation of graph models for social tagging data and propose
novel adaptations and extensions of FolkRank to improve tag recommendations. We
highlight implicit assumptions made by the widely used folksonomy model, and
propose an alternative and more accurate graph-representation of the data. Our
extensions of FolkRank address the new item problem by incorporating content
data into the algorithm, and significantly improve prediction results on
unpruned datasets. Our adaptations address issues in the iterative weight
spreading calculation that potentially hinder FolkRank's ability to leverage
the deep graph as an information source. Moreover, we evaluate the benefit of
considering each deeper level of the graph, and present important insights
regarding the characteristics of social tagging data in general. Our results
suggest that the base assumption made by conventional weight propagation
methods, that closeness in the graph always implies a positive relationship,
does not hold for the social tagging domain.
| [
{
"version": "v1",
"created": "Sat, 5 Oct 2013 17:27:42 GMT"
}
] | 2013-10-08T00:00:00 | [
[
"Landia",
"Nikolas",
""
],
[
"Doerfel",
"Stephan",
""
],
[
"Jäschke",
"Robert",
""
],
[
"Anand",
"Sarabjot Singh",
""
],
[
"Hotho",
"Andreas",
""
],
[
"Griffiths",
"Nathan",
""
]
] | TITLE: Deeper Into the Folksonomy Graph: FolkRank Adaptations and Extensions
for Improved Tag Recommendations
ABSTRACT: The information contained in social tagging systems is often modelled as a
graph of connections between users, items and tags. Recommendation algorithms
such as FolkRank have the potential to leverage complex relationships in the
data, corresponding to multiple hops in the graph. We present an in-depth
analysis and evaluation of graph models for social tagging data and propose
novel adaptations and extensions of FolkRank to improve tag recommendations. We
highlight implicit assumptions made by the widely used folksonomy model, and
propose an alternative and more accurate graph-representation of the data. Our
extensions of FolkRank address the new item problem by incorporating content
data into the algorithm, and significantly improve prediction results on
unpruned datasets. Our adaptations address issues in the iterative weight
spreading calculation that potentially hinder FolkRank's ability to leverage
the deep graph as an information source. Moreover, we evaluate the benefit of
considering each deeper level of the graph, and present important insights
regarding the characteristics of social tagging data in general. Our results
suggest that the base assumption made by conventional weight propagation
methods, that closeness in the graph always implies a positive relationship,
does not hold for the social tagging domain.
|
1310.1545 | Xuhui Fan | Xuhui Fan, Richard Yi Da Xu, Longbing Cao, Yin Song | Learning Hidden Structures with Relational Models by Adequately
Involving Rich Information in A Network | null | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effectively modelling hidden structures in a network is very practical but
theoretically challenging. Existing relational models only involve very limited
information, namely the binary directional link data, embedded in a network to
learn hidden networking structures. There is other rich and meaningful
information (e.g., various attributes of entities and more granular information
than binary elements such as "like" or "dislike") that is missed, which plays a critical
role in forming and understanding relations in a network. In this work, we
propose an informative relational model (InfRM) framework to adequately involve
rich information and its granularity in a network, including metadata
information about each entity and various forms of link data. Firstly, an
effective metadata information incorporation method is employed on the prior
information from relational models MMSB and LFRM. This is to encourage the
entities with similar metadata information to have similar hidden structures.
Secondly, we propose various solutions to cater for alternative forms of link
data. Substantial efforts have been made towards modelling appropriateness and
efficiency, for example, using conjugate priors. We evaluate our framework and
its inference algorithms in different datasets, which shows the generality and
effectiveness of our models in capturing implicit structures in networks.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2013 05:47:50 GMT"
}
] | 2013-10-08T00:00:00 | [
[
"Fan",
"Xuhui",
""
],
[
"Da Xu",
"Richard Yi",
""
],
[
"Cao",
"Longbing",
""
],
[
"Song",
"Yin",
""
]
] | TITLE: Learning Hidden Structures with Relational Models by Adequately
Involving Rich Information in A Network
ABSTRACT: Effectively modelling hidden structures in a network is very practical but
theoretically challenging. Existing relational models only involve very limited
information, namely the binary directional link data, embedded in a network to
learn hidden networking structures. There is other rich and meaningful
information (e.g., various attributes of entities and more granular information
than binary elements such as "like" or "dislike") that is missed, which plays a critical
role in forming and understanding relations in a network. In this work, we
propose an informative relational model (InfRM) framework to adequately involve
rich information and its granularity in a network, including metadata
information about each entity and various forms of link data. Firstly, an
effective metadata information incorporation method is employed on the prior
information from relational models MMSB and LFRM. This is to encourage the
entities with similar metadata information to have similar hidden structures.
Secondly, we propose various solutions to cater for alternative forms of link
data. Substantial efforts have been made towards modelling appropriateness and
efficiency, for example, using conjugate priors. We evaluate our framework and
its inference algorithms in different datasets, which shows the generality and
effectiveness of our models in capturing implicit structures in networks.
|
1310.1597 | Mengqiu Wang | Mengqiu Wang and Christopher D. Manning | Cross-lingual Pseudo-Projected Expectation Regularization for Weakly
Supervised Learning | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a multilingual weakly supervised learning scenario where
knowledge from annotated corpora in a resource-rich language is transferred via
bitext to guide the learning in other languages. Past approaches project labels
across bitext and use them as features or gold labels for training. We propose
a new method that projects model expectations rather than labels, which
facilities transfer of model uncertainty across language boundaries. We encode
expectations as constraints and train a discriminative CRF model using
Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on
standard Chinese-English and German-English NER datasets, our method
demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining
the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences.
Furthermore, when combined with labeled examples, our method yields significant
improvements over state-of-the-art supervised methods, achieving best reported
numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2013 16:34:30 GMT"
}
] | 2013-10-08T00:00:00 | [
[
"Wang",
"Mengqiu",
""
],
[
"Manning",
"Christopher D.",
""
]
] | TITLE: Cross-lingual Pseudo-Projected Expectation Regularization for Weakly
Supervised Learning
ABSTRACT: We consider a multilingual weakly supervised learning scenario where
knowledge from annotated corpora in a resource-rich language is transferred via
bitext to guide the learning in other languages. Past approaches project labels
across bitext and use them as features or gold labels for training. We propose
a new method that projects model expectations rather than labels, which
facilities transfer of model uncertainty across language boundaries. We encode
expectations as constraints and train a discriminative CRF model using
Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on
standard Chinese-English and German-English NER datasets, our method
demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining
the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences.
Furthermore, when combined with labeled examples, our method yields significant
improvements over state-of-the-art supervised methods, achieving best reported
numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.
|
1310.1811 | Ouais Alsharif | Ouais Alsharif and Joelle Pineau | End-to-End Text Recognition with Hybrid HMM Maxout Models | 9 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of detecting and recognizing text in natural scenes has proved to
be more challenging than its counterpart in documents, with most of the
previous work focusing on a single part of the problem. In this work, we
propose new solutions to the character and word recognition problems and then
show how to combine these solutions in an end-to-end text-recognition system.
We do so by leveraging the recently introduced Maxout networks along with
hybrid HMM models that have proven useful for voice recognition. Using these
elements, we build a tunable and highly accurate recognition system that beats
state-of-the-art results on all the sub-problems for both the ICDAR 2003 and
SVT benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2013 15:08:53 GMT"
}
] | 2013-10-08T00:00:00 | [
[
"Alsharif",
"Ouais",
""
],
[
"Pineau",
"Joelle",
""
]
] | TITLE: End-to-End Text Recognition with Hybrid HMM Maxout Models
ABSTRACT: The problem of detecting and recognizing text in natural scenes has proved to
be more challenging than its counterpart in documents, with most of the
previous work focusing on a single part of the problem. In this work, we
propose new solutions to the character and word recognition problems and then
show how to combine these solutions in an end-to-end text-recognition system.
We do so by leveraging the recently introduced Maxout networks along with
hybrid HMM models that have proven useful for voice recognition. Using these
elements, we build a tunable and highly accurate recognition system that beats
state-of-the-art results on all the sub-problems for both the ICDAR 2003 and
SVT benchmark datasets.
|
1304.0786 | Sameet Sreenivasan | Sameet Sreenivasan | Quantitative analysis of the evolution of novelty in cinema through
crowdsourced keywords | 23 pages, 12 figures (including supplementary material) | Scientific Reports 3, Article number: 2758 (2013) | 10.1038/srep02758 | null | physics.soc-ph cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generation of novelty is central to any creative endeavor. Novelty
generation and the relationship between novelty and individual hedonic value
have long been subjects of study in social psychology. However, few studies
have utilized large-scale datasets to quantitatively investigate these issues.
Here we consider the domain of American cinema and explore these questions
using a database of films spanning a 70 year period. We use crowdsourced
keywords from the Internet Movie Database as a window into the contents of
films, and prescribe novelty scores for each film based on occurrence
probabilities of individual keywords and keyword-pairs. These scores provide
revealing insights into the dynamics of novelty in cinema. We investigate how
novelty influences the revenue generated by a film, and find a relationship
that resembles the Wundt-Berlyne curve. We also study the statistics of keyword
occurrence and the aggregate distribution of keywords over a 100 year period.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2013 20:14:54 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jul 2013 12:00:21 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Oct 2013 17:03:22 GMT"
}
] | 2013-10-07T00:00:00 | [
[
"Sreenivasan",
"Sameet",
""
]
] | TITLE: Quantitative analysis of the evolution of novelty in cinema through
crowdsourced keywords
ABSTRACT: The generation of novelty is central to any creative endeavor. Novelty
generation and the relationship between novelty and individual hedonic value
have long been subjects of study in social psychology. However, few studies
have utilized large-scale datasets to quantitatively investigate these issues.
Here we consider the domain of American cinema and explore these questions
using a database of films spanning a 70 year period. We use crowdsourced
keywords from the Internet Movie Database as a window into the contents of
films, and prescribe novelty scores for each film based on occurrence
probabilities of individual keywords and keyword-pairs. These scores provide
revealing insights into the dynamics of novelty in cinema. We investigate how
novelty influences the revenue generated by a film, and find a relationship
that resembles the Wundt-Berlyne curve. We also study the statistics of keyword
occurrence and the aggregate distribution of keywords over a 100 year period.
|
1310.0883 | Srikumar Venugopal | Freddie Sunarso, Srikumar Venugopal and Federico Lauro | Scalable Protein Sequence Similarity Search using Locality-Sensitive
Hashing and MapReduce | null | null | null | UNSW CSE TR 201325 | cs.DC cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metagenomics is the study of environments through genetic sampling of their
microbiota. Metagenomic studies produce large datasets that are estimated to
grow at a faster rate than the available computational capacity. A key step in
the study of metagenome data is sequence similarity searching which is
computationally intensive over large datasets. Tools such as BLAST require
large dedicated computing infrastructure to perform such analysis and may not
be available to every researcher.
In this paper, we propose a novel approach called ScalLoPS that performs
searching on protein sequence datasets using LSH (Locality-Sensitive Hashing)
that is implemented using the MapReduce distributed framework. ScalLoPS is
designed to scale across computing resources sourced from cloud computing
providers. We present the design and implementation of ScalLoPS followed by
evaluation with datasets derived from both traditional as well as metagenomic
studies. Our experiments show that with this method approximates the quality of
BLAST results while improving the scalability of protein sequence search.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2013 03:11:06 GMT"
}
] | 2013-10-04T00:00:00 | [
[
"Sunarso",
"Freddie",
""
],
[
"Venugopal",
"Srikumar",
""
],
[
"Lauro",
"Federico",
""
]
] | TITLE: Scalable Protein Sequence Similarity Search using Locality-Sensitive
Hashing and MapReduce
ABSTRACT: Metagenomics is the study of environments through genetic sampling of their
microbiota. Metagenomic studies produce large datasets that are estimated to
grow at a faster rate than the available computational capacity. A key step in
the study of metagenome data is sequence similarity searching which is
computationally intensive over large datasets. Tools such as BLAST require
large dedicated computing infrastructure to perform such analysis and may not
be available to every researcher.
In this paper, we propose a novel approach called ScalLoPS that performs
searching on protein sequence datasets using LSH (Locality-Sensitive Hashing)
that is implemented using the MapReduce distributed framework. ScalLoPS is
designed to scale across computing resources sourced from cloud computing
providers. We present the design and implementation of ScalLoPS followed by
evaluation with datasets derived from both traditional and metagenomic
studies. Our experiments show that this method approximates the quality of
BLAST results while improving the scalability of protein sequence search.
|
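Record 1310.0883 above builds on locality-sensitive hashing over protein sequences; the sketch below shows the general MinHash-over-k-mers idea in plain Python. The k-mer length, number of hash functions and banding scheme are illustrative choices rather than the ScalLoPS configuration, and the MapReduce layer is omitted.

```python
# Minimal sketch: MinHash signatures over k-mer sets plus LSH banding to find
# candidate similar sequences. Parameters are illustrative, not ScalLoPS's.
import hashlib

def kmers(seq, k=3):
    """Set of overlapping k-length substrings (shingles) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def h(seed, shingle):
    """Deterministic 64-bit hash of a shingle under a given seed."""
    digest = hashlib.md5(f"{seed}:{shingle}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash_signature(shingles, seeds):
    """One minimum per seeded hash; matching minima approximate Jaccard similarity."""
    return tuple(min(h(seed, s) for s in shingles) for seed in seeds)

def lsh_buckets(signatures, bands=4):
    """Group sequences whose signatures collide in at least one band."""
    rows = len(next(iter(signatures.values()))) // bands
    buckets = {}
    for name, sig in signatures.items():
        for b in range(bands):
            key = (b, sig[b * rows:(b + 1) * rows])
            buckets.setdefault(key, []).append(name)
    return {k: v for k, v in buckets.items() if len(v) > 1}

seeds = list(range(16))                        # 16 hash functions, 4 bands of 4 rows
proteins = {
    "p1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "p2": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA",   # near-duplicate of p1
    "p3": "GSHMSLYDDLGVEALTTGQVSKALREAGLKVTG",
}
sigs = {name: minhash_signature(kmers(seq), seeds) for name, seq in proteins.items()}
print("candidate groups from colliding bands:", lsh_buckets(sigs))
```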
1310.0894 | Richard Chow | Richard Chow, Hongxia Jin, Bart Knijnenburg, Gokay Saldamli | Differential Data Analysis for Recommender Systems | Extended version of RecSys 2013 paper | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present techniques to characterize which data is important to a
recommender system and which is not. Important data is data that contributes
most to the accuracy of the recommendation algorithm, while less important data
contributes less to the accuracy or even decreases it. Characterizing the
importance of data has two potential direct benefits: (1) increased privacy and
(2) reduced data management costs, including storage. For privacy, we enable
increased recommendation accuracy for comparable privacy levels using existing
data obfuscation techniques. For storage, our results indicate that we can
achieve large reductions in recommendation data and yet maintain recommendation
accuracy.
Our main technique is called differential data analysis. The name is inspired
by other sorts of differential analysis, such as differential power analysis
and differential cryptanalysis, where insight comes through analysis of
slightly differing inputs. In differential data analysis we chunk the data and
compare results in the presence or absence of each chunk. We present results
applying differential data analysis to two datasets and three different kinds
of attributes. The first attribute is called user hardship. This is a novel
attribute, particularly relevant to location datasets, that indicates how
burdensome a data point was to achieve. The second and third attributes are
more standard: timestamp and user rating. For user rating, we confirm previous
work concerning the increased importance to the recommender of data
corresponding to high and low user ratings.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2013 04:47:47 GMT"
}
] | 2013-10-04T00:00:00 | [
[
"Chow",
"Richard",
""
],
[
"Jin",
"Hongxia",
""
],
[
"Knijnenburg",
"Bart",
""
],
[
"Saldamli",
"Gokay",
""
]
] | TITLE: Differential Data Analysis for Recommender Systems
ABSTRACT: We present techniques to characterize which data is important to a
recommender system and which is not. Important data is data that contributes
most to the accuracy of the recommendation algorithm, while less important data
contributes less to the accuracy or even decreases it. Characterizing the
importance of data has two potential direct benefits: (1) increased privacy and
(2) reduced data management costs, including storage. For privacy, we enable
increased recommendation accuracy for comparable privacy levels using existing
data obfuscation techniques. For storage, our results indicate that we can
achieve large reductions in recommendation data and yet maintain recommendation
accuracy.
Our main technique is called differential data analysis. The name is inspired
by other sorts of differential analysis, such as differential power analysis
and differential cryptanalysis, where insight comes through analysis of
slightly differing inputs. In differential data analysis we chunk the data and
compare results in the presence or absence of each chunk. We present results
applying differential data analysis to two datasets and three different kinds
of attributes. The first attribute is called user hardship. This is a novel
attribute, particularly relevant to location datasets, that indicates how
burdensome a data point was to achieve. The second and third attributes are
more standard: timestamp and user rating. For user rating, we confirm previous
work concerning the increased importance to the recommender of data
corresponding to high and low user ratings.
|
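The chunk-and-compare procedure in record 1310.0894 above can be sketched as a leave-one-chunk-out loop around any recommender. Below, a toy item-mean predictor and synthetic (user, item, rating) triples stand in for a real recommendation algorithm and dataset; only the differential structure of the analysis is being illustrated.

```python
# Minimal sketch of differential data analysis: drop one chunk of training
# ratings at a time and measure how a toy recommender's test error changes.
import numpy as np

def item_mean_rmse(train, test):
    """RMSE of predicting each test rating by its item's mean training rating."""
    preds = []
    global_mean = train[:, 2].mean()
    for user, item, rating in test:
        mask = train[:, 1] == item
        preds.append(train[mask, 2].mean() if mask.any() else global_mean)
    return float(np.sqrt(np.mean((np.array(preds) - test[:, 2]) ** 2)))

rng = np.random.default_rng(1)
# Synthetic (user, item, rating) triples.
ratings = np.column_stack([
    rng.integers(0, 50, 600), rng.integers(0, 30, 600), rng.integers(1, 6, 600)
]).astype(float)
train, test = ratings[:500], ratings[500:]

baseline = item_mean_rmse(train, test)
chunks = np.array_split(np.arange(len(train)), 5)       # 5 chunks of training data
for i, chunk in enumerate(chunks):
    reduced = np.delete(train, chunk, axis=0)
    delta = item_mean_rmse(reduced, test) - baseline
    print(f"chunk {i}: RMSE change when removed = {delta:+.4f}")
```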
1309.5275 | Dan Stowell | Dan Stowell and Mark D. Plumbley | An open dataset for research on audio field recording archives:
freefield1010 | null | null | null | null | cs.SD cs.DL | http://creativecommons.org/licenses/by/3.0/ | We introduce a free and open dataset of 7690 audio clips sampled from the
field-recording tag in the Freesound audio archive. The dataset is designed for
use in research related to data mining in audio archives of field recordings /
soundscapes. Audio is standardised, and audio and metadata are Creative Commons
licensed. We describe the data preparation process, characterise the dataset
descriptively, and illustrate its use through an auto-tagging experiment.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2013 14:12:04 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Oct 2013 21:29:13 GMT"
}
] | 2013-10-03T00:00:00 | [
[
"Stowell",
"Dan",
""
],
[
"Plumbley",
"Mark D.",
""
]
] | TITLE: An open dataset for research on audio field recording archives:
freefield1010
ABSTRACT: We introduce a free and open dataset of 7690 audio clips sampled from the
field-recording tag in the Freesound audio archive. The dataset is designed for
use in research related to data mining in audio archives of field recordings /
soundscapes. Audio is standardised, and audio and metadata are Creative Commons
licensed. We describe the data preparation process, characterise the dataset
descriptively, and illustrate its use through an auto-tagging experiment.
|
1310.0505 | Haiyan Wang | Haiyan Wang, Feng Wang, Kuai Xu | Modeling Information Diffusion in Online Social Networks with Partial
Differential Equations | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online social networks such as Twitter and Facebook have gained tremendous
popularity for information exchange. The availability of unprecedented amounts
of digital data has accelerated research on information diffusion in online
social networks. However, the mechanism of information spreading in online
social networks remains elusive due to the complexity of social interactions
and rapid change of online social networks. Much of prior work on information
diffusion over online social networks has been based on empirical and statistical
approaches. The majority of dynamical models arising from information diffusion
over online social networks involve ordinary differential equations which only
depend on time. In a number of recent papers, the authors propose to use
partial differential equations(PDEs) to characterize temporal and spatial
patterns of information diffusion over online social networks. Built on
intuitive cyber-distances such as friendship hops in online social networks,
the reaction-diffusion equations take into account influences from various
external out-of-network sources, such as the mainstream media, and provide a
new analytic framework to study the interplay of structural and topical
influences on information diffusion over online social networks. In this
survey, we discuss a number of PDE-based models that are validated with real
datasets collected from popular online social networks such as Digg and
Twitter. Some new developments including the conservation law of information
flow in online social networks and information propagation speeds based on
traveling wave solutions are presented to solidify the foundation of the PDE
models and highlight the new opportunities and challenges for mathematicians as
well as computer scientists and researchers in online social networks.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 22:17:30 GMT"
}
] | 2013-10-03T00:00:00 | [
[
"Wang",
"Haiyan",
""
],
[
"Wang",
"Feng",
""
],
[
"Xu",
"Kuai",
""
]
] | TITLE: Modeling Information Diffusion in Online Social Networks with Partial
Differential Equations
ABSTRACT: Online social networks such as Twitter and Facebook have gained tremendous
popularity for information exchange. The availability of unprecedented amounts
of digital data has accelerated research on information diffusion in online
social networks. However, the mechanism of information spreading in online
social networks remains elusive due to the complexity of social interactions
and rapid change of online social networks. Much of prior work on information
diffusion over online social networks has been based on empirical and statistical
approaches. The majority of dynamical models arising from information diffusion
over online social networks involve ordinary differential equations which only
depend on time. In a number of recent papers, the authors propose to use
partial differential equations(PDEs) to characterize temporal and spatial
patterns of information diffusion over online social networks. Built on
intuitive cyber-distances such as friendship hops in online social networks,
the reaction-diffusion equations take into account influences from various
external out-of-network sources, such as the mainstream media, and provide a
new analytic framework to study the interplay of structural and topical
influences on information diffusion over online social networks. In this
survey, we discuss a number of PDE-based models that are validated with real
datasets collected from popular online social networks such as Digg and
Twitter. Some new developments including the conservation law of information
flow in online social networks and information propagation speeds based on
traveling wave solutions are presented to solidify the foundation of the PDE
models and highlight the new opportunities and challenges for mathematicians as
well as computer scientists and researchers in online social networks.
|
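Record 1310.0505 above surveys reaction-diffusion PDE models over a friendship-hop distance; the sketch below integrates one such equation, u_t = d*u_xx + r*u*(1-u), with an explicit finite-difference scheme. The logistic reaction term, coefficients, grid and initial condition are illustrative assumptions rather than any specific model validated in the survey.

```python
# Minimal sketch: explicit finite-difference integration of a logistic
# reaction-diffusion equation over a 1-D "friendship hop" axis.
import numpy as np

def simulate(d=0.5, r=1.0, hops=10, dt=0.01, steps=300):
    u = np.zeros(hops)
    u[0] = 0.1                                      # information starts near the source
    for _ in range(steps):
        u_xx = np.zeros_like(u)
        u_xx[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # second difference, dx = 1
        u_xx[0] = u[1] - u[0]                       # zero-flux boundaries
        u_xx[-1] = u[-2] - u[-1]
        u = u + dt * (d * u_xx + r * u * (1.0 - u))
    return u

density = simulate()
print("influenced-user density by friendship hop:", np.round(density, 3))
```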
1206.2038 | Tien Tuan Anh Dinh | Dinh Tien Tuan Anh, Quach Vinh Thanh, Anwitaman Datta | CloudMine: Multi-Party Privacy-Preserving Data Analytics Service | null | null | null | null | cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing number of businesses are replacing their data storage and
computation infrastructure with cloud services. Likewise, there is an increased
emphasis on performing analytics based on multiple datasets obtained from
different data sources. While ensuring security of data and computation
outsourced to a third party cloud is in itself challenging, supporting
analytics using data distributed across multiple, independent clouds is even
further from trivial. In this paper we present CloudMine, a cloud-based service
which allows multiple data owners to perform privacy-preserved computation over
the joint data using their clouds as delegates. CloudMine protects data privacy
with respect to semi-honest data owners and semi-honest clouds. It furthermore
ensures the privacy of the computation outputs from the curious clouds. It
allows data owners to reliably detect if their cloud delegates have been lazy
when carrying out the delegated computation. CloudMine can run as a centralized
service on a single cloud, or as a distributed service over multiple,
independent clouds. CloudMine supports a set of basic computations that can be
used to construct a variety of highly complex, distributed privacy-preserving
data analytics. We demonstrate how a simple instance of CloudMine (secure sum
service) is used to implement three classical data mining tasks
(classification, association rule mining and clustering) in a cloud
environment. We experiment with a prototype of the service, the results of
which suggest its practicality for supporting privacy-preserving data analytics
as a (multi) cloud-based service.
| [
{
"version": "v1",
"created": "Sun, 10 Jun 2012 16:27:48 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Oct 2013 05:14:19 GMT"
}
] | 2013-10-02T00:00:00 | [
[
"Anh",
"Dinh Tien Tuan",
""
],
[
"Thanh",
"Quach Vinh",
""
],
[
"Datta",
"Anwitaman",
""
]
] | TITLE: CloudMine: Multi-Party Privacy-Preserving Data Analytics Service
ABSTRACT: An increasing number of businesses are replacing their data storage and
computation infrastructure with cloud services. Likewise, there is an increased
emphasis on performing analytics based on multiple datasets obtained from
different data sources. While ensuring security of data and computation
outsourced to a third party cloud is in itself challenging, supporting
analytics using data distributed across multiple, independent clouds is even
further from trivial. In this paper we present CloudMine, a cloud-based service
which allows multiple data owners to perform privacy-preserved computation over
the joint data using their clouds as delegates. CloudMine protects data privacy
with respect to semi-honest data owners and semi-honest clouds. It furthermore
ensures the privacy of the computation outputs from the curious clouds. It
allows data owners to reliably detect if their cloud delegates have been lazy
when carrying out the delegated computation. CloudMine can run as a centralized
service on a single cloud, or as a distributed service over multiple,
independent clouds. CloudMine supports a set of basic computations that can be
used to construct a variety of highly complex, distributed privacy-preserving
data analytics. We demonstrate how a simple instance of CloudMine (secure sum
service) is used to implement three classical data mining tasks
(classification, association rule mining and clustering) in a cloud
environment. We experiment with a prototype of the service, the results of
which suggest its practicality for supporting privacy-preserving data analytics
as a (multi) cloud-based service.
|
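Record 1206.2038 above builds its data-mining tasks on a secure sum service; the sketch below shows a textbook additive secret-sharing secure sum, one standard way such a primitive can be realized under a semi-honest model. It is not CloudMine's actual protocol; the party count, modulus and inputs are illustrative.

```python
# Minimal sketch: additive secret-sharing secure sum. Each owner splits its
# value into random shares; only the aggregate is revealed, not any input.
import random

MODULUS = 2**61 - 1

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(private_values):
    """Sum per-party share totals; the result equals the sum of all inputs."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # Party j receives the j-th share of every input and publishes their sum.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % MODULUS for j in range(n)]
    return sum(partial_sums) % MODULUS

random.seed(4)
inputs = [23, 41, 7, 19]                      # each owner's private count
print("secure sum:", secure_sum(inputs), "| plain sum:", sum(inputs))
```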
1309.7512 | Alexander Fix | Alexander Fix and Thorsten Joachims and Sam Park and Ramin Zabih | Structured learning of sum-of-submodular higher order energy functions | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Submodular functions can be exactly minimized in polynomial time, and the
special case that graph cuts solve with max flow \cite{KZ:PAMI04} has had
significant impact in computer vision
\cite{BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04}. In this paper we address
the important class of sum-of-submodular (SoS) functions
\cite{Arora:ECCV12,Kolmogorov:DAM12}, which can be efficiently minimized via a
variant of max flow called submodular flow \cite{Edmonds:ADM77}. SoS functions
can naturally express higher order priors involving, e.g., local image patches;
however, it is difficult to fully exploit their expressive power because they
have so many parameters. Rather than trying to formulate existing higher order
priors as an SoS function, we take a discriminative learning approach,
effectively searching the space of SoS functions for a higher order prior that
performs well on our training set. We adopt a structural SVM approach
\cite{Joachims/etal/09a,Tsochantaridis/etal/04} and formulate the training
problem in terms of quadratic programming; as a result we can efficiently
search the space of SoS priors via an extended cutting-plane algorithm. We also
show how the state-of-the-art max flow method for vision problems
\cite{Goldberg:ESA11} can be modified to efficiently solve the submodular flow
problem. Experimental comparisons are made against the OpenCV implementation of
the GrabCut interactive segmentation technique \cite{Rother:GrabCut04}, which
uses hand-tuned parameters instead of machine learning. On a standard dataset
\cite{Gulshan:CVPR10} our method learns higher order priors with hundreds of
parameter values, and produces significantly better segmentations. While our
focus is on binary labeling problems, we show that our techniques can be
naturally generalized to handle more than two labels.
| [
{
"version": "v1",
"created": "Sat, 28 Sep 2013 23:55:01 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Oct 2013 02:45:20 GMT"
}
] | 2013-10-02T00:00:00 | [
[
"Fix",
"Alexander",
""
],
[
"Joachims",
"Thorsten",
""
],
[
"Park",
"Sam",
""
],
[
"Zabih",
"Ramin",
""
]
] | TITLE: Structured learning of sum-of-submodular higher order energy functions
ABSTRACT: Submodular functions can be exactly minimized in polynomial time, and the
special case that graph cuts solve with max flow \cite{KZ:PAMI04} has had
significant impact in computer vision
\cite{BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04}. In this paper we address
the important class of sum-of-submodular (SoS) functions
\cite{Arora:ECCV12,Kolmogorov:DAM12}, which can be efficiently minimized via a
variant of max flow called submodular flow \cite{Edmonds:ADM77}. SoS functions
can naturally express higher order priors involving, e.g., local image patches;
however, it is difficult to fully exploit their expressive power because they
have so many parameters. Rather than trying to formulate existing higher order
priors as an SoS function, we take a discriminative learning approach,
effectively searching the space of SoS functions for a higher order prior that
performs well on our training set. We adopt a structural SVM approach
\cite{Joachims/etal/09a,Tsochantaridis/etal/04} and formulate the training
problem in terms of quadratic programming; as a result we can efficiently
search the space of SoS priors via an extended cutting-plane algorithm. We also
show how the state-of-the-art max flow method for vision problems
\cite{Goldberg:ESA11} can be modified to efficiently solve the submodular flow
problem. Experimental comparisons are made against the OpenCV implementation of
the GrabCut interactive segmentation technique \cite{Rother:GrabCut04}, which
uses hand-tuned parameters instead of machine learning. On a standard dataset
\cite{Gulshan:CVPR10} our method learns higher order priors with hundreds of
parameter values, and produces significantly better segmentations. While our
focus is on binary labeling problems, we show that our techniques can be
naturally generalized to handle more than two labels.
|
1310.0266 | Benjamin Laken | Benjamin A. Laken and Ja\v{s}a \v{C}alogovi\'c | Composite analysis with Monte Carlo methods: an example with cosmic rays
and clouds | 13 pages, 9 figures | Journal of Space Weather Space Climate, 3(A29) | 10.1051/swsc2013051 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The composite (superposed epoch) analysis technique has been frequently
employed to examine a hypothesized link between solar activity and the Earth's
atmosphere, often through an investigation of Forbush decrease (Fd) events
(sudden high-magnitude decreases in the flux of cosmic rays impinging on the
upper atmosphere lasting up to several days). This technique is useful for
isolating low-amplitude signals within data where background variability would
otherwise obscure detection. The application of composite analyses to
investigate the possible impacts of Fd events involves a statistical
examination of time-dependent atmospheric responses to Fds often from aerosol
and/or cloud datasets. Despite the publication of numerous results within this
field, clear conclusions have yet to be drawn and much ambiguity and
disagreement still remain. In this paper, we argue that the conflicting
findings of composite studies within this field relate to methodological
differences in the manner in which the composites have been constructed and
analyzed. Working from an example, we show how a composite may be objectively
constructed to maximize signal detection, robustly identify statistical
significance, and quantify the lower-limit uncertainty related to hypothesis
testing. Additionally, we also demonstrate how a seemingly significant false
positive may be obtained from non-significant data by minor alterations to
methodological approaches.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 12:29:06 GMT"
}
] | 2013-10-02T00:00:00 | [
[
"Laken",
"Benjamin A.",
""
],
[
"Čalogović",
"Jaša",
""
]
] | TITLE: Composite analysis with Monte Carlo methods: an example with cosmic rays
and clouds
ABSTRACT: The composite (superposed epoch) analysis technique has been frequently
employed to examine a hypothesized link between solar activity and the Earth's
atmosphere, often through an investigation of Forbush decrease (Fd) events
(sudden high-magnitude decreases in the flux of cosmic rays impinging on the
upper atmosphere lasting up to several days). This technique is useful for
isolating low-amplitude signals within data where background variability would
otherwise obscure detection. The application of composite analyses to
investigate the possible impacts of Fd events involves a statistical
examination of time-dependent atmospheric responses to Fds often from aerosol
and/or cloud datasets. Despite the publication of numerous results within this
field, clear conclusions have yet to be drawn and much ambiguity and
disagreement still remain. In this paper, we argue that the conflicting
findings of composite studies within this field relate to methodological
differences in the manner in which the composites have been constructed and
analyzed. Working from an example, we show how a composite may be objectively
constructed to maximize signal detection, robustly identify statistical
significance, and quantify the lower-limit uncertainty related to hypothesis
testing. Additionally, we also demonstrate how a seemingly significant false
positive may be obtained from non-significant data by minor alterations to
methodological approaches.
|
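The composite (superposed epoch) analysis with Monte Carlo significance testing discussed in record 1310.0266 above can be sketched in a few lines: average a time series around event dates, then compare the composite statistic against composites built from randomly drawn dates. The synthetic series, injected post-event dip, window length and test statistic are illustrative assumptions.

```python
# Minimal sketch: superposed-epoch composite with a Monte Carlo null
# distribution built from randomly chosen "event" dates.
import numpy as np

def composite(series, events, half_window=10):
    windows = [series[e - half_window:e + half_window + 1]
               for e in events
               if half_window <= e < len(series) - half_window]
    return np.mean(windows, axis=0)

rng = np.random.default_rng(2)
series = rng.normal(size=5000)                    # background variability
true_events = rng.integers(50, 4950, size=30)
for e in true_events:                             # bury a small post-event dip
    series[e:e + 4] -= 0.4

obs = composite(series, true_events)
post_event_mean = obs[11:15].mean()               # lags +1 to +4 of the composite

# Null distribution: composites of equally many randomly chosen "events".
null = np.array([composite(series, rng.integers(50, 4950, size=30))[11:15].mean()
                 for _ in range(2000)])
p_value = np.mean(null <= post_event_mean)        # one-sided Monte Carlo p-value
print(f"post-event anomaly = {post_event_mean:.3f}, Monte Carlo p = {p_value:.3f}")
```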
1310.0308 | Tomislav Petkovi\'c | Karla Brki\'c, Sr{\dj}an Ra\v{s}i\'c, Axel Pinz, Sini\v{s}a
\v{S}egvi\'c and Zoran Kalafati\'c | Combining Spatio-Temporal Appearance Descriptors and Optical Flow for
Human Action Recognition in Video Data | Part of the Proceedings of the Croatian Computer Vision Workshop,
CCVW 2013, Year 1 | null | null | UniZg-CRV-CCVW/2013/0011 | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper proposes combining spatio-temporal appearance (STA) descriptors
with optical flow for human action recognition. The STA descriptors are local
histogram-based descriptors of space-time, suitable for building a partial
representation of arbitrary spatio-temporal phenomena. Because of the
possibility of iterative refinement, they are interesting in the context of
online human action recognition. We investigate the use of dense optical flow
as the image function of the STA descriptor for human action recognition, using
two different algorithms for computing the flow: the Farneb\"ack algorithm and
the TVL1 algorithm. We provide a detailed analysis of the influence of optical
flow algorithm parameters on the produced optical flow fields. An extensive
experimental validation of optical flow-based STA descriptors in human action
recognition is performed on the KTH human action dataset. The encouraging
experimental results suggest the potential of our approach in online human
action recognition.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 14:13:40 GMT"
}
] | 2013-10-02T00:00:00 | [
[
"Brkić",
"Karla",
""
],
[
"Rašić",
"Srđan",
""
],
[
"Pinz",
"Axel",
""
],
[
"Šegvić",
"Siniša",
""
],
[
"Kalafatić",
"Zoran",
""
]
] | TITLE: Combining Spatio-Temporal Appearance Descriptors and Optical Flow for
Human Action Recognition in Video Data
ABSTRACT: This paper proposes combining spatio-temporal appearance (STA) descriptors
with optical flow for human action recognition. The STA descriptors are local
histogram-based descriptors of space-time, suitable for building a partial
representation of arbitrary spatio-temporal phenomena. Because of the
possibility of iterative refinement, they are interesting in the context of
online human action recognition. We investigate the use of dense optical flow
as the image function of the STA descriptor for human action recognition, using
two different algorithms for computing the flow: the Farneb\"ack algorithm and
the TVL1 algorithm. We provide a detailed analysis of the influence of optical
flow algorithm parameters on the produced optical flow fields. An extensive
experimental validation of optical flow-based STA descriptors in human action
recognition is performed on the KTH human action dataset. The encouraging
experimental results suggest the potential of our approach in online human
action recognition.
|
1310.0310 | Tomislav Petkovi\'c | Ivan Kre\v{s}o, Marko \v{S}evrovi\'c and Sini\v{s}a \v{S}egvi\'c | A Novel Georeferenced Dataset for Stereo Visual Odometry | Part of the Proceedings of the Croatian Computer Vision Workshop,
CCVW 2013, Year 1 | null | null | UniZg-CRV-CCVW/2013/0017 | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this work, we present a novel dataset for assessing the accuracy of stereo
visual odometry. The dataset has been acquired by a small-baseline stereo rig
mounted on the top of a moving car. The groundtruth is supplied by a consumer
grade GPS device without IMU. Synchronization and alignment between GPS
readings and stereo frames are recovered after the acquisition. We show that
the attained groundtruth accuracy allows us to draw useful conclusions in
practice. The presented experiments address the influence of camera calibration,
baseline distance and zero-disparity features on the achieved reconstruction
performance.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 14:15:48 GMT"
}
] | 2013-10-02T00:00:00 | [
[
"Krešo",
"Ivan",
""
],
[
"Ševrović",
"Marko",
""
],
[
"Šegvić",
"Siniša",
""
]
] | TITLE: A Novel Georeferenced Dataset for Stereo Visual Odometry
ABSTRACT: In this work, we present a novel dataset for assessing the accuracy of stereo
visual odometry. The dataset has been acquired by a small-baseline stereo rig
mounted on the top of a moving car. The groundtruth is supplied by a consumer
grade GPS device without IMU. Synchronization and alignment between GPS
readings and stereo frames are recovered after the acquisition. We show that
the attained groundtruth accuracy allows us to draw useful conclusions in
practice. The presented experiments address the influence of camera calibration,
baseline distance and zero-disparity features on the achieved reconstruction
performance.
|
1310.0316 | Tomislav Petkovi\'c | Ivan Sikiri\'c, Karla Brki\'c and Sini\v{s}a \v{S}egvi\'c | Classifying Traffic Scenes Using The GIST Image Descriptor | Part of the Proceedings of the Croatian Computer Vision Workshop,
CCVW 2013, Year 1 | null | null | UniZg-CRV-CCVW/2013/0013 | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper investigates classification of traffic scenes in a very low
bandwidth scenario, where an image should be coded by a small number of
features. We introduce a novel dataset, called the FM1 dataset, consisting of
5615 images of eight different traffic scenes: open highway, open road,
settlement, tunnel, tunnel exit, toll booth, heavy traffic and the overpass. We
evaluate the suitability of the GIST descriptor as a representation of these
images, first by exploring the descriptor space using PCA and k-means
clustering, and then by using an SVM classifier and recording its 10-fold
cross-validation performance on the introduced FM1 dataset. The obtained
recognition rates are very encouraging, indicating that the use of the GIST
descriptor alone could be sufficiently descriptive even when very high
performance is required.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 14:19:26 GMT"
}
] | 2013-10-02T00:00:00 | [
[
"Sikirić",
"Ivan",
""
],
[
"Brkić",
"Karla",
""
],
[
"Šegvić",
"Siniša",
""
]
] | TITLE: Classifying Traffic Scenes Using The GIST Image Descriptor
ABSTRACT: This paper investigates classification of traffic scenes in a very low
bandwidth scenario, where an image should be coded by a small number of
features. We introduce a novel dataset, called the FM1 dataset, consisting of
5615 images of eight different traffic scenes: open highway, open road,
settlement, tunnel, tunnel exit, toll booth, heavy traffic and the overpass. We
evaluate the suitability of the GIST descriptor as a representation of these
images, first by exploring the descriptor space using PCA and k-means
clustering, and then by using an SVM classifier and recording its 10-fold
cross-validation performance on the introduced FM1 dataset. The obtained
recognition rates are very encouraging, indicating that the use of the GIST
descriptor alone could be sufficiently descriptive even when very high
performance is required.
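Not part of the record above: a hedged sketch of the evaluation protocol the abstract describes, assuming the GIST descriptors have already been extracted into a matrix `X` with scene labels `y` (the GIST computation itself is not shown); the SVM kernel and other hyperparameters are illustrative assumptions.

```python
# Hedged sketch: explore a descriptor space with PCA and k-means, then report
# 10-fold cross-validation accuracy of an SVM. X is (n_images, n_dims), y holds
# integer scene labels; hyperparameters are illustrative.
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def explore_and_classify(X, y, n_scenes=8):
    X2 = PCA(n_components=2).fit_transform(X)          # 2-D view of the space
    clusters = KMeans(n_clusters=n_scenes, n_init=10,
                      random_state=0).fit_predict(X)    # unsupervised structure
    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=10)
    return X2, clusters, scores.mean()
```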
|
1211.4909 | Benyuan Liu | Benyuan Liu, Zhilin Zhang, Hongqi Fan, Qiang Fu | Fast Marginalized Block Sparse Bayesian Learning Algorithm | null | null | null | null | cs.IT cs.LG math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance of sparse signal recovery from noise corrupted,
underdetermined measurements can be improved if both sparsity and correlation
structure of signals are exploited. One typical correlation structure is the
intra-block correlation in block sparse signals. To exploit this structure, a
framework, called block sparse Bayesian learning (BSBL), has been proposed
recently. Algorithms derived from this framework showed superior performance
but they are not very fast, which limits their applications. This work derives
an efficient algorithm from this framework, using a marginalized likelihood
maximization method. Compared to existing BSBL algorithms, it has close
recovery performance but is much faster. Therefore, it is more suitable for
large scale datasets and applications requiring real-time implementation.
| [
{
"version": "v1",
"created": "Wed, 21 Nov 2012 01:06:49 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Nov 2012 01:24:49 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Jan 2013 02:28:09 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Jan 2013 01:51:17 GMT"
},
{
"version": "v5",
"created": "Mon, 4 Mar 2013 02:07:31 GMT"
},
{
"version": "v6",
"created": "Mon, 16 Sep 2013 22:58:17 GMT"
},
{
"version": "v7",
"created": "Sun, 29 Sep 2013 15:56:47 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Liu",
"Benyuan",
""
],
[
"Zhang",
"Zhilin",
""
],
[
"Fan",
"Hongqi",
""
],
[
"Fu",
"Qiang",
""
]
] | TITLE: Fast Marginalized Block Sparse Bayesian Learning Algorithm
ABSTRACT: The performance of sparse signal recovery from noise corrupted,
underdetermined measurements can be improved if both sparsity and correlation
structure of signals are exploited. One typical correlation structure is the
intra-block correlation in block sparse signals. To exploit this structure, a
framework, called block sparse Bayesian learning (BSBL), has been proposed
recently. Algorithms derived from this framework showed superior performance
but they are not very fast, which limits their applications. This work derives
an efficient algorithm from this framework, using a marginalized likelihood
maximization method. Compared to existing BSBL algorithms, it has close
recovery performance but is much faster. Therefore, it is more suitable for
large scale datasets and applications requiring real-time implementation.
|
1301.7619 | Gonzalo Mateos | Juan Andres Bazerque, Gonzalo Mateos, and Georgios B. Giannakis | Rank regularization and Bayesian inference for tensor completion and
extrapolation | 12 pages, submitted to IEEE Transactions on Signal Processing | null | 10.1109/TSP.2013.2278516 | null | cs.IT cs.LG math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel regularizer of the PARAFAC decomposition factors capturing the
tensor's rank is proposed in this paper, as the key enabler for completion of
three-way data arrays with missing entries. Set in a Bayesian framework, the
tensor completion method incorporates prior information to enhance its
smoothing and prediction capabilities. This probabilistic approach can
naturally accommodate general models for the data distribution, lending itself
to various fitting criteria that yield optimum estimates in the
maximum-a-posteriori sense. In particular, two algorithms are devised for
Gaussian- and Poisson-distributed data, that minimize the rank-regularized
least-squares error and Kullback-Leibler divergence, respectively. The proposed
technique is able to recover the "ground-truth" tensor rank when tested on
synthetic data, and to complete brain imaging and yeast gene expression
datasets with 50% and 15% of missing entries respectively, resulting in
recovery errors at -10dB and -15dB.
| [
{
"version": "v1",
"created": "Thu, 31 Jan 2013 14:17:28 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Bazerque",
"Juan Andres",
""
],
[
"Mateos",
"Gonzalo",
""
],
[
"Giannakis",
"Georgios B.",
""
]
] | TITLE: Rank regularization and Bayesian inference for tensor completion and
extrapolation
ABSTRACT: A novel regularizer of the PARAFAC decomposition factors capturing the
tensor's rank is proposed in this paper, as the key enabler for completion of
three-way data arrays with missing entries. Set in a Bayesian framework, the
tensor completion method incorporates prior information to enhance its
smoothing and prediction capabilities. This probabilistic approach can
naturally accommodate general models for the data distribution, lending itself
to various fitting criteria that yield optimum estimates in the
maximum-a-posteriori sense. In particular, two algorithms are devised for
Gaussian- and Poisson-distributed data, that minimize the rank-regularized
least-squares error and Kullback-Leibler divergence, respectively. The proposed
technique is able to recover the "ground-truth" tensor rank when tested on
synthetic data, and to complete brain imaging and yeast gene expression
datasets with 50% and 15% of missing entries respectively, resulting in
recovery errors at -10dB and -15dB.
|
1309.5594 | Chunhua Shen | Fumin Shen and Chunhua Shen | Generic Image Classification Approaches Excel on Face Recognition | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main finding of this work is that the standard image classification
pipeline, which consists of dictionary learning, feature encoding, spatial
pyramid pooling and linear classification, outperforms all state-of-the-art
face recognition methods on the tested benchmark datasets (we have tested on
AR, Extended Yale B, the challenging FERET, and LFW-a datasets). This
surprising and prominent result suggests that those advances in generic image
classification can be directly applied to improve face recognition systems. In
other words, face recognition may not need to be viewed as a separate object
classification problem.
While recently a large body of residual based face recognition methods focus
on developing complex dictionary learning algorithms, in this work we show that
a dictionary of randomly extracted patches (even from non-face images) can
achieve very promising results using the image classification pipeline. That
means, the choice of dictionary learning methods may not be important. Instead,
we find that learning multiple dictionaries using different low-level image
features often improves the final classification accuracy. Our proposed face
recognition approach offers the best reported results on the widely-used face
recognition benchmark datasets. In particular, on the challenging FERET and
LFW-a datasets, we improve the best reported accuracies in the literature by
about 20% and 30% respectively.
| [
{
"version": "v1",
"created": "Sun, 22 Sep 2013 11:52:03 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Sep 2013 03:23:36 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Shen",
"Fumin",
""
],
[
"Shen",
"Chunhua",
""
]
] | TITLE: Generic Image Classification Approaches Excel on Face Recognition
ABSTRACT: The main finding of this work is that the standard image classification
pipeline, which consists of dictionary learning, feature encoding, spatial
pyramid pooling and linear classification, outperforms all state-of-the-art
face recognition methods on the tested benchmark datasets (we have tested on
AR, Extended Yale B, the challenging FERET, and LFW-a datasets). This
surprising and prominent result suggests that those advances in generic image
classification can be directly applied to improve face recognition systems. In
other words, face recognition may not need to be viewed as a separate object
classification problem.
While recently a large body of residual based face recognition methods focus
on developing complex dictionary learning algorithms, in this work we show that
a dictionary of randomly extracted patches (even from non-face images) can
achieve very promising results using the image classification pipeline. That
means, the choice of dictionary learning methods may not be important. Instead,
we find that learning multiple dictionaries using different low-level image
features often improves the final classification accuracy. Our proposed face
recognition approach offers the best reported results on the widely-used face
recognition benchmark datasets. In particular, on the challenging FERET and
LFW-a datasets, we improve the best reported accuracies in the literature by
about 20% and 30% respectively.
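Not part of the record above: a hedged sketch of a generic "random patch dictionary" pipeline in the spirit of the abstract. The patch size, dictionary size, soft-threshold encoder and average pooling are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: random image patches form the dictionary, images are encoded by
# thresholded similarities, pooled, and classified with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

def sample_patch_dictionary(images, n_atoms=256, patch=8, seed=0):
    rng = np.random.default_rng(seed)
    atoms = []
    for _ in range(n_atoms):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - patch)
        x = rng.integers(img.shape[1] - patch)
        p = img[y:y + patch, x:x + patch].ravel().astype(float)
        atoms.append((p - p.mean()) / (p.std() + 1e-8))   # contrast-normalize
    return np.stack(atoms)

def encode_image(img, D, patch=8, stride=4, alpha=0.25):
    codes = []
    for y in range(0, img.shape[0] - patch + 1, stride):
        for x in range(0, img.shape[1] - patch + 1, stride):
            p = img[y:y + patch, x:x + patch].ravel().astype(float)
            p = (p - p.mean()) / (p.std() + 1e-8)
            codes.append(np.maximum(D @ p - alpha, 0.0))   # soft-threshold encoding
    return np.mean(codes, axis=0)                          # average pooling

def train(images, labels):
    D = sample_patch_dictionary(images)
    X = np.stack([encode_image(img, D) for img in images])
    return D, LinearSVC(C=1.0).fit(X, labels)
```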
|
1309.7434 | Omar Oreifej | Dong Zhang, Omar Oreifej, Mubarak Shah | Face Verification Using Boosted Cross-Image Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new approach for face verification, where a pair of
images needs to be classified as belonging to the same person or not. This
problem is relatively new and not well-explored in the literature. Current
methods mostly adopt techniques borrowed from face recognition, and process
each of the images in the pair independently, which is counter intuitive. In
contrast, we propose to extract cross-image features, i.e. features across the
pair of images, which, as we demonstrate, are more discriminative of the
similarity and the dissimilarity of faces. Our features are derived from the
popular Haar-like features, but are extended to handle the face verification
problem instead of face detection. We collect a large bank of cross-image
features using filters of different sizes, locations, and orientations.
Consequently, we use AdaBoost to select and weight the most discriminative
features. We carried out extensive experiments on the proposed ideas using
three standard face verification datasets, and obtained promising results
outperforming state-of-the-art.
| [
{
"version": "v1",
"created": "Sat, 28 Sep 2013 06:21:18 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Zhang",
"Dong",
""
],
[
"Oreifej",
"Omar",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Face Verification Using Boosted Cross-Image Features
ABSTRACT: This paper proposes a new approach for face verification, where a pair of
images needs to be classified as belonging to the same person or not. This
problem is relatively new and not well-explored in the literature. Current
methods mostly adopt techniques borrowed from face recognition, and process
each of the images in the pair independently, which is counter intuitive. In
contrast, we propose to extract cross-image features, i.e. features across the
pair of images, which, as we demonstrate, is more discriminative to the
similarity and the dissimilarity of faces. Our features are derived from the
popular Haar-like features, however, extended to handle the face verification
problem instead of face detection. We collect a large bank of cross-image
features using filters of different sizes, locations, and orientations.
Consequently, we use AdaBoost to select and weight the most discriminative
features. We carried out extensive experiments on the proposed ideas using
three standard face verification datasets, and obtained promising results
outperforming state-of-the-art.
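Not part of the record above: a hedged sketch of the cross-image idea, using simple block-mean statistics computed jointly over an aligned image pair as a stand-in for the paper's extended Haar-like filters, followed by AdaBoost. The feature definition and block size are illustrative assumptions.

```python
# Hedged sketch: joint features over a pair of aligned face images, classified
# with AdaBoost as same/not-same. Block-mean differences stand in for the
# paper's extended Haar-like cross-image filters.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def cross_image_features(img_a, img_b, block=8):
    h, w = img_a.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = img_a[y:y + block, x:x + block].mean()
            b = img_b[y:y + block, x:x + block].mean()
            feats.extend([abs(a - b), a * b])   # statistics of the pair, not of one image
    return np.array(feats)

def train_verifier(pairs, labels):
    # pairs: list of (img_a, img_b) grayscale arrays; labels: 1 same, 0 not same.
    X = np.stack([cross_image_features(a, b) for a, b in pairs])
    return AdaBoostClassifier(n_estimators=200).fit(X, labels)
```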
|
1309.7484 | Junzhou Chen | Chen Junzhou, Li Qing, Peng Qiang and Kin Hong Wong | CSIFT Based Locality-constrained Linear Coding for Image Classification | 9 pages, 5 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In the past decade, the SIFT descriptor has proven to be one of the most
robust local invariant feature descriptors and has been widely used in various vision
tasks. Most traditional image classification systems depend on the
luminance-based SIFT descriptors, which only analyze the gray level variations
of the images. Misclassification may happen since their color contents are
ignored. In this article, we concentrate on improving the performance of
existing image classification algorithms by adding color information. To
achieve this purpose, different kinds of colored SIFT descriptors are
introduced and implemented. Locality-constrained Linear Coding (LLC), a
state-of-the-art sparse coding technology, is employed to construct the image
classification system for the evaluation. The real experiments are carried out
on several benchmarks. With the enhancements of color SIFT, the proposed image
classification system obtains an approximately 3% improvement in classification
accuracy on the Caltech-101 dataset and an approximately 4% improvement in
classification accuracy on the Caltech-256 dataset.
| [
{
"version": "v1",
"created": "Sat, 28 Sep 2013 18:05:12 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Junzhou",
"Chen",
""
],
[
"Qing",
"Li",
""
],
[
"Qiang",
"Peng",
""
],
[
"Wong",
"Kin Hong",
""
]
] | TITLE: CSIFT Based Locality-constrained Linear Coding for Image Classification
ABSTRACT: In the past decade, the SIFT descriptor has proven to be one of the most
robust local invariant feature descriptors and has been widely used in various vision
tasks. Most traditional image classification systems depend on the
luminance-based SIFT descriptors, which only analyze the gray level variations
of the images. Misclassification may happen since their color contents are
ignored. In this article, we concentrate on improving the performance of
existing image classification algorithms by adding color information. To
achieve this purpose, different kinds of colored SIFT descriptors are
introduced and implemented. Locality-constrained Linear Coding (LLC), a
state-of-the-art sparse coding technology, is employed to construct the image
classification system for the evaluation. The real experiments are carried out
on several benchmarks. With the enhancements of color SIFT, the proposed image
classification system obtains an approximately 3% improvement in classification
accuracy on the Caltech-101 dataset and an approximately 4% improvement in
classification accuracy on the Caltech-256 dataset.
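Not part of the record above: a hedged sketch of Locality-constrained Linear Coding in its common approximated (k-nearest-neighbour) form, which the abstract relies on; the codebook size, k and the regularizer are illustrative, and the colored-SIFT extraction itself is not shown.

```python
# Hedged sketch of approximated LLC: encode a descriptor using its k nearest
# codebook atoms by solving a small regularized least-squares system whose
# codes sum to one.
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    # x: (d,) descriptor; codebook: (M, d) learned basis.
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]                          # k nearest atoms
    z = codebook[idx] - x                                # shift atoms to the descriptor
    C = z @ z.T                                          # local covariance
    C += beta * np.trace(C) * np.eye(k) + 1e-12 * np.eye(k)   # regularization
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                         # codes sum to one
    code = np.zeros(len(codebook))
    code[idx] = w
    return code
```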
|
1309.7517 | Modou Gueye M. | Modou Gueye and Talel Abdessalem and Hubert Naacke | Improving tag recommendation by folding in more consistency | 14 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tag recommendation is a major aspect of collaborative tagging systems. It
aims to recommend tags to a user for tagging an item. In this paper we present
a part of our work in progress which is a novel improvement of recommendations
by re-ranking the output of a tag recommender. We mine association rules
between candidate tags in order to determine a more consistent list of tags to
recommend.
Our method is an add-on which leads to better recommendations, as we show
in this paper. It is easily parallelizable and, moreover, it can be applied to
many tag recommenders. The experiments we did on five datasets with two kinds
of tag recommender demonstrated the efficiency of our method.
| [
{
"version": "v1",
"created": "Sun, 29 Sep 2013 01:43:40 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Gueye",
"Modou",
""
],
[
"Abdessalem",
"Talel",
""
],
[
"Naacke",
"Hubert",
""
]
] | TITLE: Improving tag recommendation by folding in more consistency
ABSTRACT: Tag recommendation is a major aspect of collaborative tagging systems. It
aims to recommend tags to a user for tagging an item. In this paper we present
a part of our work in progress which is a novel improvement of recommendations
by re-ranking the output of a tag recommender. We mine association rules
between candidate tags in order to determine a more consistent list of tags to
recommend.
Our method is an add-on which leads to better recommendations, as we show
in this paper. It is easily parallelizable and, moreover, it can be applied to
many tag recommenders. The experiments we did on five datasets with two kinds
of tag recommender demonstrated the efficiency of our method.
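Not part of the record above: a hedged sketch of the re-ranking idea, mining pairwise tag co-occurrence statistics offline and boosting candidate tags that are consistent with the other top candidates. The scoring formula and weight are illustrative assumptions, not the authors' exact association rules.

```python
# Hedged sketch: pairwise "association" confidences between tags, used to
# re-rank the output of any base tag recommender.
from collections import Counter
from itertools import combinations

def mine_pair_confidence(tag_sets):
    single, pair = Counter(), Counter()
    for tags in tag_sets:
        tags = set(tags)
        single.update(tags)
        pair.update(frozenset(p) for p in combinations(sorted(tags), 2))
    conf = {}
    for p, c in pair.items():
        a, b = tuple(p)
        conf[(a, b)] = c / single[a]     # confidence-like score a -> b
        conf[(b, a)] = c / single[b]     # and b -> a
    return conf

def rerank(candidates, conf, weight=0.5):
    # candidates: list of (tag, score) produced by the base recommender.
    tags = [t for t, _ in candidates]
    boosted = []
    for t, s in candidates:
        support = sum(conf.get((o, t), 0.0) for o in tags if o != t)
        boosted.append((t, s + weight * support))
    return sorted(boosted, key=lambda ts: ts[1], reverse=True)
```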
|
1309.7804 | Michael I. Jordan | Michael I. Jordan | On statistics, computation and scalability | Published in at http://dx.doi.org/10.3150/12-BEJSP17 the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm) | Bernoulli 2013, Vol. 19, No. 4, 1378-1390 | 10.3150/12-BEJSP17 | IMS-BEJ-BEJSP17 | stat.ML cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2013 11:51:23 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Jordan",
"Michael I.",
""
]
] | TITLE: On statistics, computation and scalability
ABSTRACT: How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
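Not part of the record above: a hedged sketch of the divide-and-conquer idea in its simplest form, fitting a cheap estimator on each block of a large dataset and averaging the estimates; the choice of ridge regression and the number of blocks are illustrative assumptions.

```python
# Hedged sketch of a time-data tradeoff via divide-and-conquer: split the data
# into blocks, fit an estimator per block, average the fitted coefficients.
import numpy as np
from sklearn.linear_model import Ridge

def divide_and_conquer_fit(X, y, n_blocks=10):
    coefs = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        coefs.append(Ridge(alpha=1.0).fit(Xb, yb).coef_)
    return np.mean(coefs, axis=0)   # averaged estimate across blocks
```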
|
1309.7912 | Ricardo Fabbri | Mauro de Amorim, Ricardo Fabbri, Lucia Maria dos Santos Pinto and
Francisco Duarte Moura Neto | An Image-Based Fluid Surface Pattern Model | a reduced version in Portuguese appears in proceedings of the XVI EMC
- Computational Modeling Meeting (Encontro de Modelagem Computacional), 2013 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | This work aims at generating a model of the ocean surface and its dynamics
from one or more video cameras. The idea is to model wave patterns from video
as a first step towards a larger system of photogrammetric monitoring of marine
conditions for use in offshore oil drilling platforms. The first part of the
proposed approach consists in reducing the dimensionality of sensor data made
up of the many pixels of each frame of the input video streams. This enables
finding a concise number of most relevant parameters to model the temporal
dataset, yielding an efficient data-driven model of the evolution of the
observed surface. The second part proposes stochastic modeling to better
capture the patterns embedded in the data. One can then draw samples from the
final model, which are expected to simulate the behavior of previously observed
flow, in order to determine conditions that match new observations. In this
paper we focus on proposing and discussing the overall approach and on
comparing two different techniques for dimensionality reduction in the first
stage: principal component analysis and diffusion maps. Work is underway on the
second stage of constructing better stochastic models of fluid surface dynamics
as proposed here.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2013 16:39:21 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"de Amorim",
"Mauro",
""
],
[
"Fabbri",
"Ricardo",
""
],
[
"Pinto",
"Lucia Maria dos Santos",
""
],
[
"Neto",
"Francisco Duarte Moura",
""
]
] | TITLE: An Image-Based Fluid Surface Pattern Model
ABSTRACT: This work aims at generating a model of the ocean surface and its dynamics
from one or more video cameras. The idea is to model wave patterns from video
as a first step towards a larger system of photogrammetric monitoring of marine
conditions for use in offshore oil drilling platforms. The first part of the
proposed approach consists in reducing the dimensionality of sensor data made
up of the many pixels of each frame of the input video streams. This enables
finding a concise number of most relevant parameters to model the temporal
dataset, yielding an efficient data-driven model of the evolution of the
observed surface. The second part proposes stochastic modeling to better
capture the patterns embedded in the data. One can then draw samples from the
final model, which are expected to simulate the behavior of previously observed
flow, in order to determine conditions that match new observations. In this
paper we focus on proposing and discussing the overall approach and on
comparing two different techniques for dimensionality reduction in the first
stage: principal component analysis and diffusion maps. Work is underway on the
second stage of constructing better stochastic models of fluid surface dynamics
as proposed here.
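Not part of the record above: a hedged sketch of the first (dimensionality-reduction) stage the abstract describes, flattening video frames into vectors and keeping a few principal components as the low-dimensional surface state; the number of components is an assumption, and the diffusion-maps alternative is not shown.

```python
# Hedged sketch: represent each video frame by a few PCA coordinates, giving a
# compact temporal state of the observed surface.
import numpy as np
from sklearn.decomposition import PCA

def reduce_frames(frames, n_components=10):
    # frames: array of shape (n_frames, height, width), grayscale.
    X = frames.reshape(len(frames), -1).astype(float)
    pca = PCA(n_components=n_components)
    states = pca.fit_transform(X)       # one low-dimensional state per frame
    return pca, states
```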
|
1309.7982 | Shou Chung Li scli | Zhung-Xun Liao, Shou-Chung Li, Wen-Chih Peng, Philip S Yu | On the Feature Discovery for App Usage Prediction in Smartphones | 10 pages, 17 figures, ICDM 2013 short paper | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing number of mobile Apps developed, they are now closely
integrated into daily life. In this paper, we develop a framework to predict
mobile Apps that are most likely to be used given the current device status
of a smartphone. Such an Apps usage prediction framework is a crucial
prerequisite for fast App launching, intelligent user experience, and power
management of smartphones. By analyzing real App usage log data, we discover
two kinds of features: The Explicit Feature (EF) from sensing readings of
built-in sensors, and the Implicit Feature (IF) from App usage relations. The
IF feature is derived by constructing the proposed App Usage Graph (abbreviated
as AUG) that models App usage transitions. In light of AUG, we are able to
discover usage relations among Apps. Since users may have different usage
behaviors on their smartphones, we further propose one personalized feature
selection algorithm. We explore minimum description length (MDL) from the
training data and select those features which need less length to describe the
training data. The personalized feature selection can successfully reduce the
log size and the prediction time. Finally, we adopt the kNN classification
model to predict Apps usage. Note that through the features selected by the
proposed personalized feature selection algorithm, we only need to keep these
features, which in turn reduces the prediction time and avoids the curse of
dimensionality when using the kNN classifier. We conduct a comprehensive
experimental study based on a real mobile App usage dataset. The results
demonstrate the effectiveness of the proposed framework and show the predictive
capability for App usage prediction.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 14:44:10 GMT"
}
] | 2013-10-01T00:00:00 | [
[
"Liao",
"Zhung-Xun",
""
],
[
"Li",
"Shou-Chung",
""
],
[
"Peng",
"Wen-Chih",
""
],
[
"Yu",
"Philip S",
""
]
] | TITLE: On the Feature Discovery for App Usage Prediction in Smartphones
ABSTRACT: With the increasing number of mobile Apps developed, they are now closely
integrated into daily life. In this paper, we develop a framework to predict
mobile Apps that are most likely to be used given the current device status
of a smartphone. Such an Apps usage prediction framework is a crucial
prerequisite for fast App launching, intelligent user experience, and power
management of smartphones. By analyzing real App usage log data, we discover
two kinds of features: The Explicit Feature (EF) from sensing readings of
built-in sensors, and the Implicit Feature (IF) from App usage relations. The
IF feature is derived by constructing the proposed App Usage Graph (abbreviated
as AUG) that models App usage transitions. In light of AUG, we are able to
discover usage relations among Apps. Since users may have different usage
behaviors on their smartphones, we further propose one personalized feature
selection algorithm. We explore minimum description length (MDL) from the
training data and select those features which need less length to describe the
training data. The personalized feature selection can successfully reduce the
log size and the prediction time. Finally, we adopt the kNN classification
model to predict Apps usage. Note that through the features selected by the
proposed personalized feature selection algorithm, we only need to keep these
features, which in turn reduces the prediction time and avoids the curse of
dimensionality when using the kNN classifier. We conduct a comprehensive
experimental study based on a real mobile App usage dataset. The results
demonstrate the effectiveness of the proposed framework and show the predictive
capability for App usage prediction.
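Not part of the record above: a hedged sketch of the final prediction step the abstract describes, concatenating explicit (sensor) and implicit (usage-graph) features and predicting the next App with a kNN classifier; k and the feature layout are illustrative assumptions.

```python
# Hedged sketch: kNN over concatenated explicit and implicit feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_app_predictor(explicit_feats, implicit_feats, next_app_ids, k=5):
    X = np.hstack([explicit_feats, implicit_feats])
    return KNeighborsClassifier(n_neighbors=k).fit(X, next_app_ids)

def predict_top_apps(model, ef, imf, top=3):
    probs = model.predict_proba(np.hstack([ef, imf]).reshape(1, -1))[0]
    return model.classes_[np.argsort(probs)[::-1][:top]]   # most likely Apps
```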
|
1309.7266 | Ahmed Abbasi | Ahmed Abbasi, Siddharth Kaza and F. Mariam Zahedi | Evaluating Link-Based Techniques for Detecting Fake Pharmacy Websites | Abbasi, A., Kaza, S., and Zahedi, F. M. "Evaluating Link-Based
Techniques for Detecting Fake Pharmacy Websites," In Proceedings of the 19th
Annual Workshop on Information Technologies and Systems, Phoenix, Arizona,
December 14-15, 2009 | null | null | null | cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fake online pharmacies have become increasingly pervasive, constituting over
90% of online pharmacy websites. There is a need for fake website detection
techniques capable of identifying fake online pharmacy websites with a high
degree of accuracy. In this study, we compared several well-known link-based
detection techniques on a large-scale test bed with the hyperlink graph
encompassing over 80 million links between 15.5 million web pages, including
1.2 million known legitimate and fake pharmacy pages. We found that the QoC and
QoL class propagation algorithms achieved an accuracy of over 90% on our
dataset. The results revealed that algorithms that incorporate dual class
propagation as well as inlink and outlink information, on page-level or
site-level graphs, are better suited for detecting fake pharmacy websites. In
addition, site-level analysis yielded significantly better results than
page-level analysis for most algorithms evaluated.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2013 15:09:24 GMT"
}
] | 2013-09-30T00:00:00 | [
[
"Abbasi",
"Ahmed",
""
],
[
"Kaza",
"Siddharth",
""
],
[
"Zahedi",
"F. Mariam",
""
]
] | TITLE: Evaluating Link-Based Techniques for Detecting Fake Pharmacy Websites
ABSTRACT: Fake online pharmacies have become increasingly pervasive, constituting over
90% of online pharmacy websites. There is a need for fake website detection
techniques capable of identifying fake online pharmacy websites with a high
degree of accuracy. In this study, we compared several well-known link-based
detection techniques on a large-scale test bed with the hyperlink graph
encompassing over 80 million links between 15.5 million web pages, including
1.2 million known legitimate and fake pharmacy pages. We found that the QoC and
QoL class propagation algorithms achieved an accuracy of over 90% on our
dataset. The results revealed that algorithms that incorporate dual class
propagation as well as inlink and outlink information, on page-level or
site-level graphs, are better suited for detecting fake pharmacy websites. In
addition, site-level analysis yielded significantly better results than
page-level analysis for most algorithms evaluated.
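Not part of the record above: a hedged sketch of generic class propagation over a hyperlink graph, spreading legitimate/fake scores from labelled seed pages along in- and out-links. This is a simple stand-in for the idea, not the QoC/QoL algorithms evaluated in the study; the damping factor and iteration count are assumptions.

```python
# Hedged sketch: iterative score propagation over a (dense) adjacency matrix.
import numpy as np

def propagate(adj, seed_labels, alpha=0.85, iters=50):
    # adj: (n, n) adjacency matrix; seed_labels: +1 legitimate, -1 fake, 0 unknown.
    A = adj + adj.T                                  # use both inlinks and outlinks
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1)                       # row-normalized transitions
    f = seed_labels.astype(float).copy()
    for _ in range(iters):
        f = alpha * (P @ f) + (1 - alpha) * seed_labels
    return f                                         # sign gives the predicted class
```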
|
1309.6691 | Chunhua Shen | Yao Li, Wenjing Jia, Chunhua Shen, Anton van den Hengel | Characterness: An Indicator of Text in the Wild | 11 pages; Appearing in IEEE Trans. on Image Processing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text in an image provides vital information for interpreting its contents,
and text in a scene can aid with a variety of tasks, from navigation to
obstacle avoidance and odometry. Despite its value, however, identifying
general text in images remains a challenging research problem. Motivated by the
need to consider the widely varying forms of natural text, we propose a
bottom-up approach to the problem which reflects the `characterness' of an
image region. In this sense our approach mirrors the move from saliency
detection methods to measures of `objectness'. In order to measure the
characterness we develop three novel cues that are tailored for character
detection, and a Bayesian method for their integration. Because text is made up
of sets of characters, we then design a Markov random field (MRF) model so as
to exploit the inherent dependencies between characters.
We experimentally demonstrate the effectiveness of our characterness cues as
well as the advantage of Bayesian multi-cue integration. The proposed text
detector outperforms state-of-the-art methods on a few benchmark scene text
detection datasets. We also show that our measurement of `characterness' is
superior to state-of-the-art saliency detection models when applied to the
same task.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2013 23:30:18 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Li",
"Yao",
""
],
[
"Jia",
"Wenjing",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Characterness: An Indicator of Text in the Wild
ABSTRACT: Text in an image provides vital information for interpreting its contents,
and text in a scene can aid with a variety of tasks, from navigation to
obstacle avoidance and odometry. Despite its value, however, identifying
general text in images remains a challenging research problem. Motivated by the
need to consider the widely varying forms of natural text, we propose a
bottom-up approach to the problem which reflects the `characterness' of an
image region. In this sense our approach mirrors the move from saliency
detection methods to measures of `objectness'. In order to measure the
characterness we develop three novel cues that are tailored for character
detection, and a Bayesian method for their integration. Because text is made up
of sets of characters, we then design a Markov random field (MRF) model so as
to exploit the inherent dependencies between characters.
We experimentally demonstrate the effectiveness of our characterness cues as
well as the advantage of Bayesian multi-cue integration. The proposed text
detector outperforms state-of-the-art methods on a few benchmark scene text
detection datasets. We also show that our measurement of `characterness' is
superior to state-of-the-art saliency detection models when applied to the
same task.
|
1309.6722 | Duyu Tang | Tang Duyu, Qin Bing, Zhou LanJun, Wong KamFai, Zhao Yanyan, Liu Ting | Domain-Specific Sentiment Word Extraction by Seed Expansion and Pattern
Generation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the automatic extraction of domain-specific sentiment
word (DSSW), which is a fundamental subtask of sentiment analysis. Most
previous work utilizes manual patterns for this task. However, the performance
of those methods highly relies on the labelled patterns or selected seeds. In
order to overcome the above problem, this paper presents an automatic framework
to detect large-scale domain-specific patterns for DSSW extraction. To this
end, sentiment seeds are extracted from a massive dataset of user comments.
Subsequently, these sentiment seeds are expanded by synonyms using a
bootstrapping mechanism. Simultaneously, a synonymy graph is built and the
graph propagation algorithm is applied on the built synonymy graph. Afterwards,
syntactic and sequential relations between target words and high-ranked
sentiment words are extracted automatically to construct large-scale patterns,
which are further used to extracte DSSWs. The experimental results in three
domains reveal the effectiveness of our method.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 05:18:12 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Duyu",
"Tang",
""
],
[
"Bing",
"Qin",
""
],
[
"LanJun",
"Zhou",
""
],
[
"KamFai",
"Wong",
""
],
[
"Yanyan",
"Zhao",
""
],
[
"Ting",
"Liu",
""
]
] | TITLE: Domain-Specific Sentiment Word Extraction by Seed Expansion and Pattern
Generation
ABSTRACT: This paper focuses on the automatic extraction of domain-specific sentiment
word (DSSW), which is a fundamental subtask of sentiment analysis. Most
previous work utilizes manual patterns for this task. However, the performance
of those methods highly relies on the labelled patterns or selected seeds. In
order to overcome the above problem, this paper presents an automatic framework
to detect large-scale domain-specific patterns for DSSW extraction. To this
end, sentiment seeds are extracted from a massive dataset of user comments.
Subsequently, these sentiment seeds are expanded by synonyms using a
bootstrapping mechanism. Simultaneously, a synonymy graph is built and the
graph propagation algorithm is applied on the built synonymy graph. Afterwards,
syntactic and sequential relations between target words and high-ranked
sentiment words are extracted automatically to construct large-scale patterns,
which are further used to extract DSSWs. The experimental results in three
domains reveal the effectiveness of our method.
|
1309.6811 | Tameem Adel | Tameem Adel, Benn Smith, Ruth Urner, Daniel Stashuk, Daniel J. Lizotte | Generative Multiple-Instance Learning Models For Quantitative
Electromyography | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-2-11 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a comprehensive study of the use of generative modeling approaches
for Multiple-Instance Learning (MIL) problems. In MIL a learner receives
training instances grouped together into bags with labels for the bags only
(which might not be correct for the comprised instances). Our work was
motivated by the task of facilitating the diagnosis of neuromuscular disorders
using sets of motor unit potential trains (MUPTs) detected within a muscle
which can be cast as a MIL problem. Our approach leads to a state-of-the-art
solution to the problem of muscle classification. By introducing and analyzing
generative models for MIL in a general framework and examining a variety of
model structures and components, our work also serves as a methodological guide
to modelling MIL tasks. We evaluate our proposed methods both on MUPT datasets
and on the MUSK1 dataset, one of the most widely used benchmarks for MIL.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:26:53 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Adel",
"Tameem",
""
],
[
"Smith",
"Benn",
""
],
[
"Urner",
"Ruth",
""
],
[
"Stashuk",
"Daniel",
""
],
[
"Lizotte",
"Daniel J.",
""
]
] | TITLE: Generative Multiple-Instance Learning Models For Quantitative
Electromyography
ABSTRACT: We present a comprehensive study of the use of generative modeling approaches
for Multiple-Instance Learning (MIL) problems. In MIL a learner receives
training instances grouped together into bags with labels for the bags only
(which might not be correct for the comprised instances). Our work was
motivated by the task of facilitating the diagnosis of neuromuscular disorders
using sets of motor unit potential trains (MUPTs) detected within a muscle
which can be cast as a MIL problem. Our approach leads to a state-of-the-art
solution to the problem of muscle classification. By introducing and analyzing
generative models for MIL in a general framework and examining a variety of
model structures and components, our work also serves as a methodological guide
to modelling MIL tasks. We evaluate our proposed methods both on MUPT datasets
and on the MUSK1 dataset, one of the most widely used benchmarks for MIL.
|
1309.6812 | Saeed Amizadeh | Saeed Amizadeh, Bo Thiesson, Milos Hauskrecht | The Bregman Variational Dual-Tree Framework | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-22-31 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-based methods provide a powerful tool set for many non-parametric
frameworks in Machine Learning. In general, the memory and computational
complexity of these methods is quadratic in the number of examples in the data
which makes them quickly infeasible for moderate to large scale datasets. A
significant effort to find more efficient solutions to the problem has been
made in the literature. One of the state-of-the-art methods that has been
recently introduced is the Variational Dual-Tree (VDT) framework. Despite some
of its unique features, VDT is currently restricted only to Euclidean spaces
where the Euclidean distance quantifies the similarity. In this paper, we
extend the VDT framework beyond the Euclidean distance to more general Bregman
divergences that include the Euclidean distance as a special case. By
exploiting the properties of the general Bregman divergence, we show how the
new framework can maintain all the pivotal features of the VDT framework and
yet significantly improve its performance in non-Euclidean domains. We apply
the proposed framework to different text categorization problems and
demonstrate its benefits over the original VDT.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:28:35 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Amizadeh",
"Saeed",
""
],
[
"Thiesson",
"Bo",
""
],
[
"Hauskrecht",
"Milos",
""
]
] | TITLE: The Bregman Variational Dual-Tree Framework
ABSTRACT: Graph-based methods provide a powerful tool set for many non-parametric
frameworks in Machine Learning. In general, the memory and computational
complexity of these methods is quadratic in the number of examples in the data
which makes them quickly infeasible for moderate to large scale datasets. A
significant effort to find more efficient solutions to the problem has been
made in the literature. One of the state-of-the-art methods that has been
recently introduced is the Variational Dual-Tree (VDT) framework. Despite some
of its unique features, VDT is currently restricted only to Euclidean spaces
where the Euclidean distance quantifies the similarity. In this paper, we
extend the VDT framework beyond the Euclidean distance to more general Bregman
divergences that include the Euclidean distance as a special case. By
exploiting the properties of the general Bregman divergence, we show how the
new framework can maintain all the pivotal features of the VDT framework and
yet significantly improve its performance in non-Euclidean domains. We apply
the proposed framework to different text categorization problems and
demonstrate its benefits over the original VDT.
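Not part of the record above: a hedged sketch of the key quantity behind the extension the abstract describes, the general Bregman divergence d_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, which recovers the squared Euclidean distance and the generalized KL divergence as special cases.

```python
# Hedged sketch: a general Bregman divergence and two standard special cases.
import numpy as np

def bregman(x, y, phi, grad_phi):
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# Squared Euclidean distance: phi(x) = <x, x> gives d(x, y) = ||x - y||^2.
sq_euclid = lambda x, y: bregman(x, y, lambda v: np.dot(v, v), lambda v: 2 * v)

# Generalized KL divergence (for positive vectors): phi(x) = sum x log x - x.
gen_kl = lambda x, y: bregman(x, y,
                              lambda v: np.sum(v * np.log(v) - v),
                              lambda v: np.log(v))
```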
|
1309.6829 | Qiang Fu | Qiang Fu, Huahua Wang, Arindam Banerjee | Bethe-ADMM for Tree Decomposition based Parallel MAP Inference | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-222-231 | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of maximum a posteriori (MAP) inference in discrete
graphical models. We present a parallel MAP inference algorithm called
Bethe-ADMM based on two ideas: tree-decomposition of the graph and the
alternating direction method of multipliers (ADMM). However, unlike the
standard ADMM, we use an inexact ADMM augmented with a Bethe-divergence based
proximal function, which makes each subproblem in ADMM easy to solve in
parallel using the sum-product algorithm. We rigorously prove global
convergence of Bethe-ADMM. The proposed algorithm is extensively evaluated on
both synthetic and real datasets to illustrate its effectiveness. Further, the
parallel Bethe-ADMM is shown to scale almost linearly with increasing number of
cores.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:38:09 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Fu",
"Qiang",
""
],
[
"Wang",
"Huahua",
""
],
[
"Banerjee",
"Arindam",
""
]
] | TITLE: Bethe-ADMM for Tree Decomposition based Parallel MAP Inference
ABSTRACT: We consider the problem of maximum a posteriori (MAP) inference in discrete
graphical models. We present a parallel MAP inference algorithm called
Bethe-ADMM based on two ideas: tree-decomposition of the graph and the
alternating direction method of multipliers (ADMM). However, unlike the
standard ADMM, we use an inexact ADMM augmented with a Bethe-divergence based
proximal function, which makes each subproblem in ADMM easy to solve in
parallel using the sum-product algorithm. We rigorously prove global
convergence of Bethe-ADMM. The proposed algorithm is extensively evaluated on
both synthetic and real datasets to illustrate its effectiveness. Further, the
parallel Bethe-ADMM is shown to scale almost linearly with increasing number of
cores.
|
1309.6830 | Ravi Ganti | Ravi Ganti, Alexander G. Gray | Building Bridges: Viewing Active Learning from the Multi-Armed Bandit
Lens | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-232-241 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a multi-armed bandit inspired, pool based active
learning algorithm for the problem of binary classification. By carefully
constructing an analogy between active learning and multi-armed bandits, we
utilize ideas such as lower confidence bounds, and self-concordant
regularization from the multi-armed bandit literature to design our proposed
algorithm. Our algorithm is a sequential algorithm, which in each round assigns
a sampling distribution on the pool, samples one point from this distribution,
and queries the oracle for the label of this sampled point. The design of this
sampling distribution is also inspired by the analogy between active learning
and multi-armed bandits. We show how to derive lower confidence bounds required
by our algorithm. Experimental comparisons to previously proposed active
learning algorithms show superior performance on some standard UCI datasets.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:39:01 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Ganti",
"Ravi",
""
],
[
"Gray",
"Alexander G.",
""
]
] | TITLE: Building Bridges: Viewing Active Learning from the Multi-Armed Bandit
Lens
ABSTRACT: In this paper we propose a multi-armed bandit inspired, pool based active
learning algorithm for the problem of binary classification. By carefully
constructing an analogy between active learning and multi-armed bandits, we
utilize ideas such as lower confidence bounds, and self-concordant
regularization from the multi-armed bandit literature to design our proposed
algorithm. Our algorithm is a sequential algorithm, which in each round assigns
a sampling distribution on the pool, samples one point from this distribution,
and queries the oracle for the label of this sampled point. The design of this
sampling distribution is also inspired by the analogy between active learning
and multi-armed bandits. We show how to derive lower confidence bounds required
by our algorithm. Experimental comparisons to previously proposed active
learning algorithms show superior performance on some standard UCI datasets.
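Not part of the record above: a hedged sketch of a bandit-flavoured pool-based sampling rule in the spirit of the abstract, trading off the current model's uncertainty against how often a point has been considered, lower-confidence-bound style. This is a generic illustration with an assumed logistic-regression learner, not the paper's algorithm.

```python
# Hedged sketch: pick the next pool point to label using an uncertainty term
# discounted by an exploration bonus, then record the query.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_next(pool_X, labeled_X, labeled_y, query_counts, explore=1.0):
    clf = LogisticRegression(max_iter=1000).fit(labeled_X, labeled_y)
    margins = np.abs(clf.decision_function(pool_X))       # small margin = uncertain
    scores = margins - explore / np.sqrt(query_counts + 1) # LCB-style score
    chosen = int(np.argmin(scores))                        # most promising point
    query_counts[chosen] += 1
    return chosen, clf
```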
|
1309.6867 | Yaniv Tenzer | Yaniv Tenzer, Gal Elidan | Speedy Model Selection (SMS) for Copula Models | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-625-634 | cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the challenge of efficiently learning the structure of expressive
multivariate real-valued densities of copula graphical models. We start by
theoretically substantiating the conjecture that for many copula families the
magnitude of Spearman's rank correlation coefficient is monotone in the
expected contribution of an edge in the network, namely the negative copula
entropy. We then build on this theory and suggest a novel Bayesian approach
that makes use of a prior over values of Spearman's rho for learning
copula-based models that involve a mix of copula families. We demonstrate the
generalization effectiveness of our highly efficient approach on sizable and
varied real-life datasets.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:51:22 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Tenzer",
"Yaniv",
""
],
[
"Elidan",
"Gal",
""
]
] | TITLE: Speedy Model Selection (SMS) for Copula Models
ABSTRACT: We tackle the challenge of efficiently learning the structure of expressive
multivariate real-valued densities of copula graphical models. We start by
theoretically substantiating the conjecture that for many copula families the
magnitude of Spearman's rank correlation coefficient is monotone in the
expected contribution of an edge in the network, namely the negative copula
entropy. We then build on this theory and suggest a novel Bayesian approach
that makes use of a prior over values of Spearman's rho for learning
copula-based models that involve a mix of copula families. We demonstrate the
generalization effectiveness of our highly efficient approach on sizable and
varied real-life datasets.
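Not part of the record above: a hedged sketch of the quantity the theory builds on, ranking candidate edges by the magnitude of Spearman's rho between the two variables as a cheap proxy for their expected contribution; the full Bayesian prior over rho is not shown.

```python
# Hedged sketch: score every candidate edge by |Spearman's rho|.
from itertools import combinations
from scipy.stats import spearmanr

def rank_candidate_edges(data, names):
    # data: (n_samples, n_vars) matrix; names: list of variable names.
    edges = []
    for i, j in combinations(range(data.shape[1]), 2):
        rho, _ = spearmanr(data[:, i], data[:, j])
        edges.append((abs(rho), names[i], names[j]))
    return sorted(edges, reverse=True)   # strongest candidate edges first
```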
|
1309.6874 | Pengtao Xie | Pengtao Xie, Eric P. Xing | Integrating Document Clustering and Topic Modeling | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-694-703 | cs.LG cs.CL cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document clustering and topic modeling are two closely related tasks which
can mutually benefit each other. Topic modeling can project documents into a
topic space which facilitates effective document clustering. Cluster labels
discovered by document clustering can be incorporated into topic models to
extract local topics specific to each cluster and global topics shared by all
clusters. In this paper, we propose a multi-grain clustering topic model
(MGCTM) which integrates document clustering and topic modeling into a unified
framework and jointly performs the two tasks to achieve the overall best
performance. Our model tightly couples two components: a mixture component used
for discovering latent groups in document collection and a topic model
component used for mining multi-grain topics including local topics specific to
each cluster and global topics shared across clusters. We employ variational
inference to approximate the posterior of hidden variables and learn model
parameters. Experiments on two datasets demonstrate the effectiveness of our
model.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:54:02 GMT"
}
] | 2013-09-27T00:00:00 | [
[
"Xie",
"Pengtao",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: Integrating Document Clustering and Topic Modeling
ABSTRACT: Document clustering and topic modeling are two closely related tasks which
can mutually benefit each other. Topic modeling can project documents into a
topic space which facilitates effective document clustering. Cluster labels
discovered by document clustering can be incorporated into topic models to
extract local topics specific to each cluster and global topics shared by all
clusters. In this paper, we propose a multi-grain clustering topic model
(MGCTM) which integrates document clustering and topic modeling into a unified
framework and jointly performs the two tasks to achieve the overall best
performance. Our model tightly couples two components: a mixture component used
for discovering latent groups in document collection and a topic model
component used for mining multi-grain topics including local topics specific to
each cluster and global topics shared across clusters. We employ variational
inference to approximate the posterior of hidden variables and learn model
parameters. Experiments on two datasets demonstrate the effectiveness of our
model.
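Not part of the record above: a hedged sketch that is not the joint MGCTM model, but the two coupled components run one after the other with scikit-learn, to illustrate how a topic-space projection can feed document clustering; topic and cluster counts are illustrative assumptions.

```python
# Hedged sketch: project documents into a topic space with LDA, then cluster
# the topic proportions with k-means.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

def cluster_via_topics(docs, n_topics=20, n_clusters=5):
    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    theta = lda.fit_transform(counts)        # documents in topic space
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(theta)
    return labels, lda
```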
|
1309.6379 | ANqi Qiu DR | Jia Du, A. Pasha Hosseinbor, Moo K. Chung, Barbara B. Bendlin, Gaurav
Suryawanshi, Andrew L. Alexander, Anqi Qiu | Diffeomorphic Metric Mapping and Probabilistic Atlas Generation of
Hybrid Diffusion Imaging based on BFOR Signal Basis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a large deformation diffeomorphic metric mapping algorithm to
align multiple b-value diffusion weighted imaging (mDWI) data, specifically
acquired via hybrid diffusion imaging (HYDI), denoted as LDDMM-HYDI. We then
propose a Bayesian model for estimating the white matter atlas from HYDIs. We
adopt the work given in Hosseinbor et al. (2012) and represent the q-space
diffusion signal with the Bessel Fourier orientation reconstruction (BFOR)
signal basis. The BFOR framework provides the representation of mDWI in the
q-space and thus reduces memory requirement. In addition, since the BFOR signal
basis is orthonormal, the L2 norm that quantifies the differences in the
q-space signals of any two mDWI datasets can be easily computed as the sum of
the squared differences in the BFOR expansion coefficients. In this work, we
show that the reorientation of the $q$-space signal due to spatial
transformation can be easily defined on the BFOR signal basis. We incorporate
the BFOR signal basis into the LDDMM framework and derive the gradient descent
algorithm for LDDMM-HYDI with explicit orientation optimization. Additionally,
we extend the previous Bayesian atlas estimation framework for scalar-valued
images to HYDIs and derive the expectation-maximization algorithm for solving
the HYDI atlas estimation problem. Using real HYDI datasets, we show the
Bayesian model generates the white matter atlas with anatomical details.
Moreover, we show that it is important to consider the variation of mDWI
reorientation due to a small change in diffeomorphic transformation in the
LDDMM-HYDI optimization and to incorporate the full information of HYDI for
aligning mDWI.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2013 01:57:50 GMT"
}
] | 2013-09-26T00:00:00 | [
[
"Du",
"Jia",
""
],
[
"Hosseinbor",
"A. Pasha",
""
],
[
"Chung",
"Moo K.",
""
],
[
"Bendlin",
"Barbara B.",
""
],
[
"Suryawanshi",
"Gaurav",
""
],
[
"Alexander",
"Andrew L.",
""
],
[
"Qiu",
"Anqi",
""
]
] | TITLE: Diffeomorphic Metric Mapping and Probabilistic Atlas Generation of
Hybrid Diffusion Imaging based on BFOR Signal Basis
ABSTRACT: We propose a large deformation diffeomorphic metric mapping algorithm to
align multiple b-value diffusion weighted imaging (mDWI) data, specifically
acquired via hybrid diffusion imaging (HYDI), denoted as LDDMM-HYDI. We then
propose a Bayesian model for estimating the white matter atlas from HYDIs. We
adopt the work given in Hosseinbor et al. (2012) and represent the q-space
diffusion signal with the Bessel Fourier orientation reconstruction (BFOR)
signal basis. The BFOR framework provides the representation of mDWI in the
q-space and thus reduces memory requirement. In addition, since the BFOR signal
basis is orthonormal, the L2 norm that quantifies the differences in the
q-space signals of any two mDWI datasets can be easily computed as the sum of
the squared differences in the BFOR expansion coefficients. In this work, we
show that the reorientation of the $q$-space signal due to spatial
transformation can be easily defined on the BFOR signal basis. We incorporate
the BFOR signal basis into the LDDMM framework and derive the gradient descent
algorithm for LDDMM-HYDI with explicit orientation optimization. Additionally,
we extend the previous Bayesian atlas estimation framework for scalar-valued
images to HYDIs and derive the expectation-maximization algorithm for solving
the HYDI atlas estimation problem. Using real HYDI datasets, we show the
Bayesian model generates the white matter atlas with anatomical details.
Moreover, we show that it is important to consider the variation of mDWI
reorientation due to a small change in diffeomorphic transformation in the
LDDMM-HYDI optimization and to incorporate the full information of HYDI for
aligning mDWI.
|
1212.2044 | Gabriel Kronberger | Gabriel Kronberger, Stefan Fink, Michael Kommenda and Michael
Affenzeller | Macro-Economic Time Series Modeling and Interaction Networks | The original publication is available at
http://link.springer.com/chapter/10.1007/978-3-642-20520-0_11 | Applications of Evolutionary Computation, LNCS 6625 (Springer
Berlin Heidelberg), pp. 101-110 (2011) | 10.1007/978-3-642-20520-0_11 | null | cs.NE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Macro-economic models describe the dynamics of economic quantities. The
estimations and forecasts produced by such models play a substantial role for
financial and political decisions. In this contribution we describe an approach
based on genetic programming and symbolic regression to identify variable
interactions in large datasets. In the proposed approach multiple symbolic
regression runs are executed for each variable of the dataset to find
potentially interesting models. The result is a variable interaction network
that describes which variables are most relevant for the approximation of each
variable of the dataset. This approach is applied to a macro-economic dataset
with monthly observations of important economic indicators in order to identify
potentially interesting dependencies of these indicators. The resulting
interaction network of macro-economic indicators is briefly discussed and two
of the identified models are presented in detail. The two models approximate
the help wanted index and the CPI inflation in the US.
| [
{
"version": "v1",
"created": "Mon, 10 Dec 2012 12:04:58 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Sep 2013 16:58:12 GMT"
}
] | 2013-09-24T00:00:00 | [
[
"Kronberger",
"Gabriel",
""
],
[
"Fink",
"Stefan",
""
],
[
"Kommenda",
"Michael",
""
],
[
"Affenzeller",
"Michael",
""
]
] | TITLE: Macro-Economic Time Series Modeling and Interaction Networks
ABSTRACT: Macro-economic models describe the dynamics of economic quantities. The
estimations and forecasts produced by such models play a substantial role for
financial and political decisions. In this contribution we describe an approach
based on genetic programming and symbolic regression to identify variable
interactions in large datasets. In the proposed approach multiple symbolic
regression runs are executed for each variable of the dataset to find
potentially interesting models. The result is a variable interaction network
that describes which variables are most relevant for the approximation of each
variable of the dataset. This approach is applied to a macro-economic dataset
with monthly observations of important economic indicators in order to identify
potentially interesting dependencies of these indicators. The resulting
interaction network of macro-economic indicators is briefly discussed and two
of the identified models are presented in detail. The two models approximate
the help wanted index and the CPI inflation in the US.
|
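A rough sketch of the variable-interaction-network idea described above, with ordinary linear regression standing in for the symbolic regression runs and a made-up relevance score; the indicator names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def interaction_network(data, names, top_k=2):
    """For each variable, fit a model on all other variables and keep the
    top_k most relevant inputs as directed edges (input -> target).
    Linear regression is only a stand-in for the symbolic regression runs."""
    edges = []
    for t, target in enumerate(names):
        idx = [i for i in range(len(names)) if i != t]
        X, y = data[:, idx], data[:, t]
        model = LinearRegression().fit(X, y)
        relevance = np.abs(model.coef_) * X.std(axis=0)  # crude relevance score
        for j in np.argsort(relevance)[::-1][:top_k]:
            edges.append((names[idx[j]], target, float(relevance[j])))
    return edges

# Toy data standing in for monthly macro-economic indicators.
rng = np.random.default_rng(1)
data = rng.normal(size=(120, 4))
print(interaction_network(data, ["cpi", "unemployment", "m1", "help_wanted"]))
```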
1309.5427 | Gang Chen | Gang Chen | Latent Fisher Discriminant Analysis | 12 pages | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by/3.0/ | Linear Discriminant Analysis (LDA) is a well-known method for dimensionality
reduction and classification. Previous studies have also extended the
binary-class case to the multi-class case. However, many applications, such as
object detection and keyframe extraction, cannot provide consistent
instance-label pairs, while LDA requires labels at the instance level for
training. Thus it cannot be directly applied to the semi-supervised
classification problem. In this paper, we overcome this limitation and propose
a latent variable Fisher discriminant analysis model. We relax the
instance-level labeling to bag-level labeling, which is a kind of
semi-supervised setting (video-level labels of event type are required for
semantic frame extraction), and incorporate a data-driven prior over the latent
variables. Hence, our method combines latent variable inference and dimension
reduction in a unified Bayesian framework. We test our
method on MUSK and Corel data sets and yield competitive results compared to
the baseline approach. We also demonstrate its capacity on the challenging
TRECVID MED11 dataset for semantic keyframe extraction and conduct a
human-factors ranking-based experimental evaluation, which clearly demonstrates
that our proposed method consistently extracts more semantically meaningful
keyframes than challenging baselines.
| [
{
"version": "v1",
"created": "Sat, 21 Sep 2013 03:42:04 GMT"
}
] | 2013-09-24T00:00:00 | [
[
"Chen",
"Gang",
""
]
] | TITLE: Latent Fisher Discriminant Analysis
ABSTRACT: Linear Discriminant Analysis (LDA) is a well-known method for dimensionality
reduction and classification. Previous studies have also extended the
binary-class case to the multi-class case. However, many applications, such as
object detection and keyframe extraction, cannot provide consistent
instance-label pairs, while LDA requires labels at the instance level for
training. Thus it cannot be directly applied to the semi-supervised
classification problem. In this paper, we overcome this limitation and propose
a latent variable Fisher discriminant analysis model. We relax the
instance-level labeling to bag-level labeling, which is a kind of
semi-supervised setting (video-level labels of event type are required for
semantic frame extraction), and incorporate a data-driven prior over the latent
variables. Hence, our method combines latent variable inference and dimension
reduction in a unified Bayesian framework. We test our
method on MUSK and Corel data sets and yield competitive results compared to
the baseline approach. We also demonstrate its capacity on the challenging
TRECVID MED11 dataset for semantic keyframe extraction and conduct a
human-factors ranking-based experimental evaluation, which clearly demonstrates
that our proposed method consistently extracts more semantically meaningful
keyframes than challenging baselines.
|
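For context on the building block the latent model extends, here is a small sketch of the classical two-class Fisher discriminant direction w = S_w^{-1}(m_1 - m_0); the paper's bag-level latent machinery is not reproduced, and the data are synthetic.

```python
import numpy as np

def fisher_direction(X0, X1, reg=1e-6):
    """Classical two-class Fisher discriminant direction w = Sw^{-1}(m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the two class scatter matrices.
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    Sw += reg * np.eye(Sw.shape[0])  # small ridge for invertibility
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = rng.normal(1.0, 1.0, size=(50, 3))
print(fisher_direction(X0, X1))
```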
1309.5657 | Tarek El-Shishtawy Ahmed | T.El-Shishtawy | A Hybrid Algorithm for Matching Arabic Names | null | International Journal of Computational Linguistics Research Volume
4 Number 2 June 2013 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new hybrid algorithm that combines token-based and
character-based approaches is presented. The basic Levenshtein approach has
been extended to a token-based distance metric. The distance metric is enhanced
to set the proper granularity-level behavior of the algorithm. It smoothly maps
a threshold of misspelling differences at the character level, and the
importance of token-level errors in terms of a token's position and frequency.
Using a large Arabic dataset, the experimental results show that the proposed
algorithm successfully overcomes many types of errors, such as typographical
errors, omission or insertion of middle name components, omission of
non-significant popular name components, and character variations caused by
different writing styles. When the results are compared with those of other
classical algorithms on the same dataset, the proposed algorithm is found to
increase the minimum success level of the best tested algorithms, while
achieving higher upper limits.
| [
{
"version": "v1",
"created": "Sun, 22 Sep 2013 22:06:26 GMT"
}
] | 2013-09-24T00:00:00 | [
[
"El-Shishtawy",
"T.",
""
]
] | TITLE: A Hybrid Algorithm for Matching Arabic Names
ABSTRACT: In this paper, a new hybrid algorithm that combines token-based and
character-based approaches is presented. The basic Levenshtein approach has
been extended to a token-based distance metric. The distance metric is enhanced
to set the proper granularity-level behavior of the algorithm. It smoothly maps
a threshold of misspelling differences at the character level, and the
importance of token-level errors in terms of a token's position and frequency.
Using a large Arabic dataset, the experimental results show that the proposed
algorithm successfully overcomes many types of errors, such as typographical
errors, omission or insertion of middle name components, omission of
non-significant popular name components, and character variations caused by
different writing styles. When the results are compared with those of other
classical algorithms on the same dataset, the proposed algorithm is found to
increase the minimum success level of the best tested algorithms, while
achieving higher upper limits.
|
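An illustrative sketch of the general hybrid idea, combining a character-level Levenshtein distance with token-level matching; the threshold and scoring rule are assumptions, not the paper's actual metric.

```python
def char_levenshtein(a, b):
    """Plain character-level Levenshtein distance (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def token_distance(name1, name2, typo_threshold=0.34):
    """Toy token-level matcher: two tokens count as matching when their
    normalized character edit distance stays under a misspelling threshold.
    Threshold value and scoring are illustrative only."""
    t1, t2 = name1.lower().split(), name2.lower().split()
    matched = 0
    for a in t1:
        best = min((char_levenshtein(a, b) / max(len(a), len(b)) for b in t2), default=1.0)
        matched += best <= typo_threshold
    return 1.0 - matched / max(len(t1), len(t2))

print(token_distance("Ahmed Mohamed Aly", "Ahmad Mohammed Ali"))
```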
1309.5843 | Marco Guerini | Marco Guerini, Lorenzo Gatti, Marco Turchi | Sentiment Analysis: How to Derive Prior Polarities from SentiWordNet | To appear in Proceedings of EMNLP 2013 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Assigning a positive or negative score to a word out of context (i.e. a
word's prior polarity) is a challenging task for sentiment analysis. In the
literature, various approaches based on SentiWordNet have been proposed. In
this paper, we compare the most often used techniques together with newly
proposed ones and incorporate all of them in a learning framework to see
whether blending them can further improve the estimation of prior polarity
scores. Using two different versions of SentiWordNet and testing regression and
classification models across tasks and datasets, our learning approach
consistently outperforms the single metrics, providing a new state-of-the-art
approach in computing words' prior polarity for sentiment analysis. We conclude
our investigation showing interesting biases in calculated prior polarity
scores when word Part of Speech and annotator gender are considered.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2013 15:26:09 GMT"
}
] | 2013-09-24T00:00:00 | [
[
"Guerini",
"Marco",
""
],
[
"Gatti",
"Lorenzo",
""
],
[
"Turchi",
"Marco",
""
]
] | TITLE: Sentiment Analysis: How to Derive Prior Polarities from SentiWordNet
ABSTRACT: Assigning a positive or negative score to a word out of context (i.e. a
word's prior polarity) is a challenging task for sentiment analysis. In the
literature, various approaches based on SentiWordNet have been proposed. In
this paper, we compare the most often used techniques together with newly
proposed ones and incorporate all of them in a learning framework to see
whether blending them can further improve the estimation of prior polarity
scores. Using two different versions of SentiWordNet and testing regression and
classification models across tasks and datasets, our learning approach
consistently outperforms the single metrics, providing a new state-of-the-art
approach in computing words' prior polarity for sentiment analysis. We conclude
our investigation showing interesting biases in calculated prior polarity
scores when word Part of Speech and annotator gender are considered.
|
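One of the commonly used prior-polarity formulas compared in this line of work, sketched with NLTK's SentiWordNet interface: the average of (positive minus negative) scores over a word's senses, weighted by 1/rank so that frequent senses dominate. This is not necessarily the paper's best-performing learner, and it assumes the NLTK wordnet and sentiwordnet corpora are installed.

```python
# Requires: pip install nltk, then nltk.download('wordnet'); nltk.download('sentiwordnet')
from nltk.corpus import sentiwordnet as swn

def prior_polarity(word, pos=None):
    """Rank-weighted average of (pos - neg) SentiWordNet scores over senses."""
    senses = list(swn.senti_synsets(word, pos)) if pos else list(swn.senti_synsets(word))
    if not senses:
        return 0.0
    weights = [1.0 / rank for rank in range(1, len(senses) + 1)]
    scores = [s.pos_score() - s.neg_score() for s in senses]
    return sum(w * sc for w, sc in zip(weights, scores)) / sum(weights)

print(prior_polarity("good", "a"), prior_polarity("terrible", "a"))
```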
1309.5931 | Gabriel Kronberger | Michael Kommenda and Gabriel Kronberger and Christoph Feilmayr and
Michael Affenzeller | Data Mining using Unguided Symbolic Regression on a Blast Furnace
Dataset | Presented at Workshop for Heuristic Problem Solving, Computer Aided
Systems Theory - EUROCAST 2011. The final publication is available at
http://link.springer.com/chapter/10.1007/978-3-642-27549-4_51 | Computer Aided Systems Theory - EUROCAST 2011, Lecture Notes in
Computer Science Volume 6927, 2012, pp 400-407 | 10.1007/978-3-642-27549-4_51 | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a data mining approach for variable selection and knowledge
extraction from datasets is presented. The approach is based on unguided
symbolic regression (every variable present in the dataset is treated as the
target variable in multiple regression runs) and a novel variable relevance
metric for genetic programming. The relevance of each input variable is
calculated and a model approximating the target variable is created. The
genetic programming configurations with different target variables are executed
multiple times to reduce stochastic effects and the aggregated results are
displayed as a variable interaction network. This interaction network
highlights important system components and implicit relations between the
variables. The whole approach is tested on a blast furnace dataset, because of
the complexity of the blast furnace and the many interrelations between the
variables. Finally the achieved results are discussed with respect to existing
knowledge about the blast furnace process.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2013 19:35:29 GMT"
}
] | 2013-09-24T00:00:00 | [
[
"Kommenda",
"Michael",
""
],
[
"Kronberger",
"Gabriel",
""
],
[
"Feilmayr",
"Christoph",
""
],
[
"Affenzeller",
"Michael",
""
]
] | TITLE: Data Mining using Unguided Symbolic Regression on a Blast Furnace
Dataset
ABSTRACT: In this paper a data mining approach for variable selection and knowledge
extraction from datasets is presented. The approach is based on unguided
symbolic regression (every variable present in the dataset is treated as the
target variable in multiple regression runs) and a novel variable relevance
metric for genetic programming. The relevance of each input variable is
calculated and a model approximating the target variable is created. The
genetic programming configurations with different target variables are executed
multiple times to reduce stochastic effects and the aggregated results are
displayed as a variable interaction network. This interaction network
highlights important system components and implicit relations between the
variables. The whole approach is tested on a blast furnace dataset, because of
the complexity of the blast furnace and the many interrelations between the
variables. Finally the achieved results are discussed with respect to existing
knowledge about the blast furnace process.
|
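A toy sketch of aggregating variable relevance across multiple unguided regression runs; the variables, run results, and scoring rule are invented for illustration and differ from the paper's GP-based relevance metric.

```python
from collections import defaultdict

def variable_relevance(models):
    """Each model contributes its quality (e.g. R^2) to every input variable
    it references; scores are then normalized. `models` is a list of
    (variables_used, quality) pairs from separate regression runs."""
    relevance = defaultdict(float)
    for variables, quality in models:
        for v in set(variables):
            relevance[v] += quality
    total = sum(relevance.values()) or 1.0
    return {v: s / total for v, s in sorted(relevance.items(), key=lambda kv: -kv[1])}

# Hypothetical results of three runs on blast-furnace variables.
runs = [({"coke_rate", "blast_temp"}, 0.82),
        ({"blast_temp", "o2_enrichment"}, 0.74),
        ({"coke_rate"}, 0.55)]
print(variable_relevance(runs))
```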
1111.5062 | Emiliano De Cristofaro | Carlo Blundo, Emiliano De Cristofaro, Paolo Gasti | EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set
Similarity | A preliminary version of this paper was published in the Proceedings
of the 7th ESORICS International Workshop on Digital Privacy Management (DPM
2012). This is the full version, appearing in the Journal of Computer
Security | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic information is increasingly often shared among entities without
complete mutual trust. To address related security and privacy issues, a few
cryptographic techniques have emerged that support privacy-preserving
information sharing and retrieval. One interesting open problem in this context
involves two parties that need to assess the similarity of their datasets, but
are reluctant to disclose their actual content. This paper presents an
efficient and provably-secure construction supporting the privacy-preserving
evaluation of sample set similarity, where similarity is measured as the
Jaccard index. We present two protocols: the first securely computes the
(Jaccard) similarity of two sets, and the second approximates it, using MinHash
techniques, with lower complexities. We show that our novel protocols are
attractive in many compelling applications, including document/multimedia
similarity, biometric authentication, and genetic tests. In the process, we
demonstrate that our constructions are appreciably more efficient than prior
work.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2011 23:35:47 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2011 21:51:04 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Apr 2012 02:09:27 GMT"
},
{
"version": "v4",
"created": "Fri, 20 Jul 2012 19:36:07 GMT"
},
{
"version": "v5",
"created": "Fri, 20 Sep 2013 00:43:44 GMT"
}
] | 2013-09-23T00:00:00 | [
[
"Blundo",
"Carlo",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Gasti",
"Paolo",
""
]
] | TITLE: EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set
Similarity
ABSTRACT: Electronic information is increasingly often shared among entities without
complete mutual trust. To address related security and privacy issues, a few
cryptographic techniques have emerged that support privacy-preserving
information sharing and retrieval. One interesting open problem in this context
involves two parties that need to assess the similarity of their datasets, but
are reluctant to disclose their actual content. This paper presents an
efficient and provably-secure construction supporting the privacy-preserving
evaluation of sample set similarity, where similarity is measured as the
Jaccard index. We present two protocols: the first securely computes the
(Jaccard) similarity of two sets, and the second approximates it, using MinHash
techniques, with lower complexities. We show that our novel protocols are
attractive in many compelling applications, including document/multimedia
similarity, biometric authentication, and genetic tests. In the process, we
demonstrate that our constructions are appreciably more efficient than prior
work.
|
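A plaintext sketch of the MinHash approximation of the Jaccard index that the second protocol builds on; the cryptographic (privacy-preserving) layer of EsPRESSo is omitted entirely.

```python
import random

def minhash_signature(items, num_hashes=128, seed=42):
    """MinHash signature: for each of num_hashes salted hash functions,
    keep the minimum hash value over the set's elements."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature positions approximates the Jaccard index."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set("the quick brown fox jumps over the lazy dog".split())
B = set("the quick brown cat sleeps near the lazy dog".split())
true_jaccard = len(A & B) / len(A | B)
print(true_jaccard, estimated_jaccard(minhash_signature(A), minhash_signature(B)))
```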
1302.4389 | Ian Goodfellow | Ian J. Goodfellow and David Warde-Farley and Mehdi Mirza and Aaron
Courville and Yoshua Bengio | Maxout Networks | This is the version of the paper that appears in ICML 2013 | JMLR WCP 28 (3): 1319-1327, 2013 | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of designing models to leverage a recently introduced
approximate model averaging technique called dropout. We define a simple new
model called maxout (so named because its output is the max of a set of inputs,
and because it is a natural companion to dropout) designed to both facilitate
optimization by dropout and improve the accuracy of dropout's fast approximate
model averaging technique. We empirically verify that the model successfully
accomplishes both of these tasks. We use maxout and dropout to demonstrate
state of the art classification performance on four benchmark datasets: MNIST,
CIFAR-10, CIFAR-100, and SVHN.
| [
{
"version": "v1",
"created": "Mon, 18 Feb 2013 18:59:07 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Feb 2013 04:39:48 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Feb 2013 22:33:13 GMT"
},
{
"version": "v4",
"created": "Fri, 20 Sep 2013 08:54:35 GMT"
}
] | 2013-09-23T00:00:00 | [
[
"Goodfellow",
"Ian J.",
""
],
[
"Warde-Farley",
"David",
""
],
[
"Mirza",
"Mehdi",
""
],
[
"Courville",
"Aaron",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Maxout Networks
ABSTRACT: We consider the problem of designing models to leverage a recently introduced
approximate model averaging technique called dropout. We define a simple new
model called maxout (so named because its output is the max of a set of inputs,
and because it is a natural companion to dropout) designed to both facilitate
optimization by dropout and improve the accuracy of dropout's fast approximate
model averaging technique. We empirically verify that the model successfully
accomplishes both of these tasks. We use maxout and dropout to demonstrate
state of the art classification performance on four benchmark datasets: MNIST,
CIFAR-10, CIFAR-100, and SVHN.
|
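A small NumPy sketch of a maxout unit as described above, i.e. the maximum over k affine maps per output unit; shapes and values are arbitrary.

```python
import numpy as np

def maxout_layer(x, W, b):
    """Maxout activation: k affine maps per output unit, take the max.
    x: (batch, d_in), W: (k, d_in, d_out), b: (k, d_out)."""
    z = np.einsum("bi,kio->bko", x, W) + b   # (batch, k, d_out)
    return z.max(axis=1)                      # (batch, d_out)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))
W = rng.normal(size=(3, 10, 5)) * 0.1         # k=3 pieces, 5 output units
b = np.zeros((3, 5))
print(maxout_layer(x, W, b).shape)            # (4, 5)
```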
1309.5047 | Sean Whalen | Sean Whalen and Gaurav Pandey | A Comparative Analysis of Ensemble Classifiers: Case Studies in Genomics | 10 pages, 3 figures, 8 tables, to appear in Proceedings of the 2013
International Conference on Data Mining | null | null | null | cs.LG q-bio.GN stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The combination of multiple classifiers using ensemble methods is
increasingly important for making progress in a variety of difficult prediction
problems. We present a comparative analysis of several ensemble methods through
two case studies in genomics, namely the prediction of genetic interactions and
protein functions, to demonstrate their efficacy on real-world datasets and
draw useful conclusions about their behavior. These methods include simple
aggregation, meta-learning, cluster-based meta-learning, and ensemble selection
using heterogeneous classifiers trained on resampled data to improve the
diversity of their predictions. We present a detailed analysis of these methods
across 4 genomics datasets and find that the best of these methods offer
statistically significant improvements over the state of the art in their
respective domains. In addition, we establish a novel connection between
ensemble selection and meta-learning, demonstrating how both of these disparate
methods establish a balance between ensemble diversity and performance.
| [
{
"version": "v1",
"created": "Thu, 19 Sep 2013 16:45:18 GMT"
}
] | 2013-09-20T00:00:00 | [
[
"Whalen",
"Sean",
""
],
[
"Pandey",
"Gaurav",
""
]
] | TITLE: A Comparative Analysis of Ensemble Classifiers: Case Studies in Genomics
ABSTRACT: The combination of multiple classifiers using ensemble methods is
increasingly important for making progress in a variety of difficult prediction
problems. We present a comparative analysis of several ensemble methods through
two case studies in genomics, namely the prediction of genetic interactions and
protein functions, to demonstrate their efficacy on real-world datasets and
draw useful conclusions about their behavior. These methods include simple
aggregation, meta-learning, cluster-based meta-learning, and ensemble selection
using heterogeneous classifiers trained on resampled data to improve the
diversity of their predictions. We present a detailed analysis of these methods
across 4 genomics datasets and find that the best of these methods offer
statistically significant improvements over the state of the art in their
respective domains. In addition, we establish a novel connection between
ensemble selection and meta-learning, demonstrating how both of these disparate
methods establish a balance between ensemble diversity and performance.
|
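A generic sketch of two of the ensemble styles compared in the paper, simple (voting) aggregation versus meta-learning (stacking) over heterogeneous classifiers, using scikit-learn on a synthetic dataset since the genomics data are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for a genomics prediction task.
X, y = make_classification(n_samples=500, n_features=30, n_informative=10, random_state=0)

base = [("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("nb", GaussianNB())]

vote = VotingClassifier(estimators=base, voting="soft")                       # simple aggregation
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))  # meta-learning

print("voting   :", cross_val_score(vote, X, y, cv=5).mean())
print("stacking :", cross_val_score(stack, X, y, cv=5).mean())
```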
1301.6847 | Zhilin Zhang | Taiyong Li, Zhilin Zhang | Robust Face Recognition via Block Sparse Bayesian Learning | Accepted by Mathematical Problems in Engineering in 2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition (FR) is an important task in pattern recognition and
computer vision. Sparse representation (SR) has been demonstrated to be a
powerful framework for FR. In general, an SR algorithm treats each face in a
training dataset as a basis function, and tries to find a sparse representation
of a test face under these basis functions. The sparse representation
coefficients then provide a recognition hint. Early SR algorithms are based on
a basic sparse model. Recently, it has been found that algorithms based on a
block sparse model can achieve better recognition rates. Based on this model,
in this study we use block sparse Bayesian learning (BSBL) to find a sparse
representation of a test face for recognition. BSBL is a recently proposed
framework, which has many advantages over existing block-sparse-model based
algorithms. Experimental results on the Extended Yale B, the AR and the CMU PIE
face databases show that using BSBL can achieve better recognition rates and
higher robustness than state-of-the-art algorithms in most cases.
| [
{
"version": "v1",
"created": "Tue, 29 Jan 2013 07:23:00 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2013 00:19:12 GMT"
}
] | 2013-09-19T00:00:00 | [
[
"Li",
"Taiyong",
""
],
[
"Zhang",
"Zhilin",
""
]
] | TITLE: Robust Face Recognition via Block Sparse Bayesian Learning
ABSTRACT: Face recognition (FR) is an important task in pattern recognition and
computer vision. Sparse representation (SR) has been demonstrated to be a
powerful framework for FR. In general, an SR algorithm treats each face in a
training dataset as a basis function, and tries to find a sparse representation
of a test face under these basis functions. The sparse representation
coefficients then provide a recognition hint. Early SR algorithms are based on
a basic sparse model. Recently, it has been found that algorithms based on a
block sparse model can achieve better recognition rates. Based on this model,
in this study we use block sparse Bayesian learning (BSBL) to find a sparse
representation of a test face for recognition. BSBL is a recently proposed
framework, which has many advantages over existing block-sparse-model based
algorithms. Experimental results on the Extended Yale B, the AR and the CMU PIE
face databases show that using BSBL can achieve better recognition rates and
higher robustness than state-of-the-art algorithms in most cases.
|
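A sketch of sparse-representation classification by minimal class residual; Lasso is used as a stand-in solver, so the block-sparse (BSBL) structure exploited in the paper is ignored, and the face data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train_faces, train_labels, test_face, alpha=0.01):
    """Represent the test face as a sparse combination of training faces and
    pick the class with the smallest reconstruction residual."""
    D = train_faces.T                                   # columns are training faces
    coef = Lasso(alpha=alpha, max_iter=10000).fit(D, test_face).coef_
    residuals = {}
    for c in np.unique(train_labels):
        mask = (train_labels == c)
        residuals[c] = np.linalg.norm(test_face - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 64))                       # 20 faces, 64-dim features
labels = np.repeat(np.arange(4), 5)
test = train[7] + 0.05 * rng.normal(size=64)            # noisy copy of a class-1 face
print(src_classify(train, labels, test))                # expected: 1
```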
1309.4496 | Thoralf Gutierrez | Thoralf Gutierrez, Gautier Krings, Vincent D. Blondel | Evaluating socio-economic state of a country analyzing airtime credit
and mobile phone datasets | 6 pages, 6 figures | null | null | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable statistical information is important to make political decisions on
a sound basis and to help measure the impact of policies. Unfortunately,
statistics offices in developing countries have scarce resources and
statistical censuses are therefore conducted sporadically. Based on mobile
phone communications and history of airtime credit purchases, we estimate the
relative income of individuals, the diversity and inequality of income, and an
indicator for socioeconomic segregation for fine-grained regions of an African
country. Our study shows how to use mobile phone datasets as a starting point
to understand the socio-economic state of a country, which can be especially
useful in countries with few resources to conduct large surveys.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2013 22:36:34 GMT"
}
] | 2013-09-19T00:00:00 | [
[
"Gutierrez",
"Thoralf",
""
],
[
"Krings",
"Gautier",
""
],
[
"Blondel",
"Vincent D.",
""
]
] | TITLE: Evaluating socio-economic state of a country analyzing airtime credit
and mobile phone datasets
ABSTRACT: Reliable statistical information is important to make political decisions on
a sound basis and to help measure the impact of policies. Unfortunately,
statistics offices in developing countries have scarce resources and
statistical censuses are therefore conducted sporadically. Based on mobile
phone communications and history of airtime credit purchases, we estimate the
relative income of individuals, the diversity and inequality of income, and an
indicator for socioeconomic segregation for fine-grained regions of an African
country. Our study shows how to use mobile phone datasets as a starting point
to understand the socio-economic state of a country, which can be especially
useful in countries with few resources to conduct large surveys.
|
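One plausible way to turn per-user relative-income estimates into a regional inequality indicator is a Gini coefficient; the numbers below are hypothetical and the paper's exact indicators may differ.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a sample of (relative) incomes, 0 = perfect equality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical relative-income estimates (e.g. derived from airtime purchases) for one region.
print(gini([1.0, 1.2, 0.8, 5.0, 0.5, 0.9]))
```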
1309.4157 | Rui Li | Rui Li and Kevin Chen-Chuan Chang | EgoNet-UIUC: A Dataset For Ego Network Research | DataSet Description | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report, we introduce version one of EgoNet-UIUC, which is a
dataset for ego network research. The dataset contains about 230 ego networks
in LinkedIn, which have about 33K users (with their attributes) and 283K
relationships (with their relationship types) in total. We name this dataset as
EgoNet-UIUC, which stands for Ego Network Dataset from University of Illinois
at Urbana-Champaign.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2013 02:28:25 GMT"
}
] | 2013-09-18T00:00:00 | [
[
"Li",
"Rui",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
]
] | TITLE: EgoNet-UIUC: A Dataset For Ego Network Research
ABSTRACT: In this report, we introduce version one of EgoNet-UIUC, which is a
dataset for ego network research. The dataset contains about 230 ego networks
in LinkedIn, which have about 33K users (with their attributes) and 283K
relationships (with their relationship types) in total. We name this dataset as
EgoNet-UIUC, which stands for Ego Network Dataset from University of Illinois
at Urbana-Champaign.
|
1309.3809 | Ishani Chakraborty | Ishani Chakraborty and Ahmed Elgammal | Visual-Semantic Scene Understanding by Sharing Labels in a Context
Network | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of naming objects in complex, natural scenes
containing widely varying object appearance and subtly different names.
Informed by cognitive research, we propose an approach based on sharing context
based object hypotheses between visual and lexical spaces. To this end, we
present the Visual Semantic Integration Model (VSIM) that represents object
labels as entities shared between semantic and visual contexts and infers a new
image by updating labels through context switching. At the core of VSIM is a
semantic Pachinko Allocation Model and a visual nearest neighbor Latent
Dirichlet Allocation Model. For inference, we derive an iterative Data
Augmentation algorithm that pools the label probabilities and maximizes the
joint label posterior of an image. Our model surpasses the performance of
state-of-the-art methods in several visual tasks on the challenging SUN09 dataset.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2013 00:22:01 GMT"
}
] | 2013-09-17T00:00:00 | [
[
"Chakraborty",
"Ishani",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Visual-Semantic Scene Understanding by Sharing Labels in a Context
Network
ABSTRACT: We consider the problem of naming objects in complex, natural scenes
containing widely varying object appearance and subtly different names.
Informed by cognitive research, we propose an approach based on sharing context
based object hypotheses between visual and lexical spaces. To this end, we
present the Visual Semantic Integration Model (VSIM) that represents object
labels as entities shared between semantic and visual contexts and infers a new
image by updating labels through context switching. At the core of VSIM is a
semantic Pachinko Allocation Model and a visual nearest neighbor Latent
Dirichlet Allocation Model. For inference, we derive an iterative Data
Augmentation algorithm that pools the label probabilities and maximizes the
joint label posterior of an image. Our model surpasses the performance of
state-of-the-art methods in several visual tasks on the challenging SUN09 dataset.
|
1309.3877 | Huyen Do | Huyen Do and Alexandros Kalousis | A Metric-learning based framework for Support Vector Machines and
Multiple Kernel Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most metric learning algorithms, as well as Fisher's Discriminant Analysis
(FDA), optimize some cost function of different measures of within- and
between-class distances. On the other hand, Support Vector Machines (SVMs) and
several Multiple Kernel Learning (MKL) algorithms are based on the SVM large
margin theory. Recently, SVMs have been analyzed from both SVM and metric
learning perspectives in order to develop new algorithms that build on the
strengths of each. Inspired by
the metric learning interpretation of SVM, we develop here a new
metric-learning based SVM framework in which we incorporate metric learning
concepts within SVM. We extend the optimization problem of SVM to include some
measure of the within-class distance and along the way we develop a new
within-class distance measure which is appropriate for SVM. In addition, we
adopt the same approach for MKL and show that it can be also formulated as a
Mahalanobis metric learning problem. Our end result is a number of SVM/MKL
algorithms that incorporate metric learning concepts. We experiment with them
on a set of benchmark datasets and observe important predictive performance
improvements.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2013 09:39:25 GMT"
}
] | 2013-09-17T00:00:00 | [
[
"Do",
"Huyen",
""
],
[
"Kalousis",
"Alexandros",
""
]
] | TITLE: A Metric-learning based framework for Support Vector Machines and
Multiple Kernel Learning
ABSTRACT: Most metric learning algorithms, as well as Fisher's Discriminant Analysis
(FDA), optimize some cost function of different measures of within- and
between-class distances. On the other hand, Support Vector Machines (SVMs) and
several Multiple Kernel Learning (MKL) algorithms are based on the SVM large
margin theory. Recently, SVMs have been analyzed from both SVM and metric
learning perspectives in order to develop new algorithms that build on the
strengths of each. Inspired by
the metric learning interpretation of SVM, we develop here a new
metric-learning based SVM framework in which we incorporate metric learning
concepts within SVM. We extend the optimization problem of SVM to include some
measure of the within-class distance and along the way we develop a new
within-class distance measure which is appropriate for SVM. In addition, we
adopt the same approach for MKL and show that it can be also formulated as a
Mahalanobis metric learning problem. Our end result is a number of SVM/MKL
algorithms that incorporate metric learning concepts. We experiment with them
on a set of benchmark datasets and observe important predictive performance
improvements.
|
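A generic sketch of a within-class distance measure of the kind the paper folds into the SVM objective: a Mahalanobis metric derived from the within-class scatter matrix. This is an illustration under simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def within_class_mahalanobis(X, y, reg=1e-3):
    """Return a Mahalanobis distance built from the within-class scatter matrix."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    M = np.linalg.inv(Sw / len(X) + reg * np.eye(d))
    return lambda a, b: float(np.sqrt((a - b) @ M @ (a - b)))

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.repeat([0, 1], 40)
dist = within_class_mahalanobis(X, y)
print(dist(X[0], X[1]), dist(X[0], X[50]))   # same-class vs. cross-class distance
```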
1309.4067 | Dima Kagan | Dima Kagan, Michael Fire, Aviad Elyashar, and Yuval Elovici | Facebook Applications' Installation and Removal: A Temporal Analysis | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facebook applications are one of the reasons for Facebook attractiveness.
Unfortunately, numerous users are not aware of the fact that many malicious
Facebook applications exist. To educate users, to raise users' awareness and to
improve Facebook users' security and privacy, we developed a Firefox add-on
that alerts users to the number of installed applications on their Facebook
profiles. In this study, we present the temporal analysis of the Facebook
applications' installation and removal dataset collected by our add-on. This
dataset consists of information from 2,945 users, collected during a period of
over a year. We used linear regression to analyze our dataset and discovered
the linear connection between the average percentage change of newly installed
Facebook applications and the number of days passed since the user initially
installed our add-on. Additionally, we found out that users who used our
Firefox add-on become more aware of their security and privacy installing on
average fewer new applications. Finally, we discovered that on average 86.4% of
Facebook users install an additional application every 4.2 days.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2013 18:56:45 GMT"
}
] | 2013-09-17T00:00:00 | [
[
"Kagan",
"Dima",
""
],
[
"Fire",
"Michael",
""
],
[
"Elyashar",
"Aviad",
""
],
[
"Elovici",
"Yuval",
""
]
] | TITLE: Facebook Applications' Installation and Removal: A Temporal Analysis
ABSTRACT: Facebook applications are one of the reasons for Facebook attractiveness.
Unfortunately, numerous users are not aware of the fact that many malicious
Facebook applications exist. To educate users, to raise users' awareness and to
improve Facebook users' security and privacy, we developed a Firefox add-on
that alerts users to the number of installed applications on their Facebook
profiles. In this study, we present the temporal analysis of the Facebook
applications' installation and removal dataset collected by our add-on. This
dataset consists of information from 2,945 users, collected during a period of
over a year. We used linear regression to analyze our dataset and discovered
the linear connection between the average percentage change of newly installed
Facebook applications and the number of days passed since the user initially
installed our add-on. Additionally, we found that users who used our
Firefox add-on became more aware of their security and privacy, installing on
average fewer new applications. Finally, we discovered that on average 86.4% of
Facebook users install an additional application every 4.2 days.
|
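A minimal sketch of the kind of linear fit described above, relating the average percentage change of newly installed applications to the days since the add-on was installed; the data points are invented.

```python
import numpy as np

# Hypothetical aggregate data: days since add-on installation vs. average
# percentage change of newly installed Facebook applications.
days = np.array([7, 14, 30, 60, 90, 180, 270, 365], dtype=float)
avg_pct_change = np.array([9.5, 8.8, 8.1, 6.9, 6.0, 4.3, 3.1, 2.2])

# Ordinary least squares fit of the linear relationship.
slope, intercept = np.polyfit(days, avg_pct_change, deg=1)
print(f"pct_change ~ {intercept:.2f} + {slope:.4f} * days")
```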
1309.3515 | Olga Ohrimenko | Joshua Brown, Olga Ohrimenko, Roberto Tamassia | Haze: Privacy-Preserving Real-Time Traffic Statistics | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider traffic-update mobile applications that let users learn traffic
conditions based on reports from other users. These applications are becoming
increasingly popular (e.g., Waze reported 30 million users in 2013) since they
aggregate real-time road traffic updates from actual users traveling on the
roads. However, the providers of these mobile services have access to such
sensitive information as timestamped locations and movements of its users. In
this paper, we describe Haze, a protocol for traffic-update applications that
supports the creation of traffic statistics from user reports while protecting
the privacy of the users. Haze relies on a small subset of users to jointly
aggregate encrypted speed and alert data and report the result to the service
provider. We use jury-voting protocols based on threshold cryptosystem and
differential privacy techniques to hide user data from anyone participating in
the protocol while allowing only aggregate information to be extracted and sent
to the service provider. We show that Haze is effective in practice by
developing a prototype implementation and performing experiments on a
real-world dataset of car trajectories.
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2013 17:17:29 GMT"
}
] | 2013-09-16T00:00:00 | [
[
"Brown",
"Joshua",
""
],
[
"Ohrimenko",
"Olga",
""
],
[
"Tamassia",
"Roberto",
""
]
] | TITLE: Haze: Privacy-Preserving Real-Time Traffic Statistics
ABSTRACT: We consider traffic-update mobile applications that let users learn traffic
conditions based on reports from other users. These applications are becoming
increasingly popular (e.g., Waze reported 30 million users in 2013) since they
aggregate real-time road traffic updates from actual users traveling on the
roads. However, the providers of these mobile services have access to such
sensitive information as timestamped locations and movements of its users. In
this paper, we describe Haze, a protocol for traffic-update applications that
supports the creation of traffic statistics from user reports while protecting
the privacy of the users. Haze relies on a small subset of users to jointly
aggregate encrypted speed and alert data and report the result to the service
provider. We use jury-voting protocols based on threshold cryptosystem and
differential privacy techniques to hide user data from anyone participating in
the protocol while allowing only aggregate information to be extracted and sent
to the service provider. We show that Haze is effective in practice by
developing a prototype implementation and performing experiments on a
real-world dataset of car trajectories.
|
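A sketch of only the differential-privacy step of an aggregate like Haze's: Laplace noise calibrated to the sensitivity of a mean of bounded speed reports. The threshold-cryptosystem and jury-voting parts are omitted, and the parameters are illustrative.

```python
import numpy as np

def dp_average_speed(speeds, epsilon=0.5, max_speed=130.0):
    """Differentially private mean of speed reports via the Laplace mechanism."""
    true_avg = float(np.mean(speeds))
    sensitivity = max_speed / len(speeds)   # one user changes the mean by at most this
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_avg + noise

reports = [62.0, 58.5, 65.2, 60.1, 59.8, 61.3, 64.0, 57.9]
print(dp_average_speed(reports))
```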
1212.4522 | Yunchao Gong | Yunchao Gong and Qifa Ke and Michael Isard and Svetlana Lazebnik | A Multi-View Embedding Space for Modeling Internet Images, Tags, and
their Semantics | To Appear: International Journal of Computer Vision | null | null | null | cs.CV cs.IR cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the problem of modeling Internet images and
associated text or tags for tasks such as image-to-image search, tag-to-image
search, and image-to-tag search (image annotation). We start with canonical
correlation analysis (CCA), a popular and successful approach for mapping
visual and textual features to the same latent space, and incorporate a third
view capturing high-level image semantics, represented either by a single
category or multiple non-mutually-exclusive concepts. We present two ways to
train the three-view embedding: supervised, with the third view coming from
ground-truth labels or search keywords; and unsupervised, with semantic themes
automatically obtained by clustering the tags. To ensure high accuracy for
retrieval tasks while keeping the learning process scalable, we combine
multiple strong visual features and use explicit nonlinear kernel mappings to
efficiently approximate kernel CCA. To perform retrieval, we use a specially
designed similarity function in the embedded space, which substantially
outperforms the Euclidean distance. The resulting system produces compelling
qualitative results and outperforms a number of two-view baselines on retrieval
tasks on three large-scale Internet image datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Dec 2012 22:02:43 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Sep 2013 19:14:58 GMT"
}
] | 2013-09-13T00:00:00 | [
[
"Gong",
"Yunchao",
""
],
[
"Ke",
"Qifa",
""
],
[
"Isard",
"Michael",
""
],
[
"Lazebnik",
"Svetlana",
""
]
] | TITLE: A Multi-View Embedding Space for Modeling Internet Images, Tags, and
their Semantics
ABSTRACT: This paper investigates the problem of modeling Internet images and
associated text or tags for tasks such as image-to-image search, tag-to-image
search, and image-to-tag search (image annotation). We start with canonical
correlation analysis (CCA), a popular and successful approach for mapping
visual and textual features to the same latent space, and incorporate a third
view capturing high-level image semantics, represented either by a single
category or multiple non-mutually-exclusive concepts. We present two ways to
train the three-view embedding: supervised, with the third view coming from
ground-truth labels or search keywords; and unsupervised, with semantic themes
automatically obtained by clustering the tags. To ensure high accuracy for
retrieval tasks while keeping the learning process scalable, we combine
multiple strong visual features and use explicit nonlinear kernel mappings to
efficiently approximate kernel CCA. To perform retrieval, we use a specially
designed similarity function in the embedded space, which substantially
outperforms the Euclidean distance. The resulting system produces compelling
qualitative results and outperforms a number of two-view baselines on retrieval
tasks on three large-scale Internet image datasets.
|
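A two-view toy sketch of the CCA embedding plus similarity-based retrieval idea; the third semantic view, the kernel approximation, and the paper's specially designed similarity function are not reproduced, and the features are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Two views of the same items: synthetic "visual" features and "tag" features.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 4))
visual = latent @ rng.normal(size=(4, 50)) + 0.1 * rng.normal(size=(200, 50))
tags = latent @ rng.normal(size=(4, 30)) + 0.1 * rng.normal(size=(200, 30))

cca = CCA(n_components=4).fit(visual, tags)
v_emb, t_emb = cca.transform(visual, tags)

def cosine(a, B):
    """Cosine similarity of one embedded query against all rows of B."""
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

# Image-to-tag retrieval: item 0's tag view should rank near the top.
print(np.argsort(-cosine(v_emb[0], t_emb))[:5])
```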
1309.3103 | Alex Susemihl | Chris H\"ausler, Alex Susemihl, Martin P Nawrot, Manfred Opper | Temporal Autoencoding Improves Generative Models of Time Series | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restricted Boltzmann Machines (RBMs) are generative models which can learn
useful representations from samples of a dataset in an unsupervised fashion.
They have been widely employed as an unsupervised pre-training method in
machine learning. RBMs have been modified to model time series in two main
ways: The Temporal RBM stacks a number of RBMs laterally and introduces
temporal dependencies between the hidden layer units; The Conditional RBM, on
the other hand, considers past samples of the dataset as a conditional bias and
learns a representation which takes these into account. Here we propose a new
training method for both the TRBM and the CRBM, which enforces the dynamic
structure of temporal datasets. We do so by treating the temporal models as
denoising autoencoders, considering past frames of the dataset as corrupted
versions of the present frame and minimizing the reconstruction error of the
present data by the model. We call this approach Temporal Autoencoding. This
leads to a significant improvement in the performance of both models in a
filling-in-frames task across a number of datasets. The error reduction for
motion capture data is 56\% for the CRBM and 80\% for the TRBM. Taking the
posterior mean prediction instead of single samples further improves the
model's estimates, decreasing the error by as much as 91\% for the CRBM on
motion capture data. We also trained the model to perform forecasting on a
large number of datasets and have found TA pretraining to consistently improve
the performance of the forecasts. Furthermore, by looking at the prediction
error across time, we can see that this improvement reflects a better
representation of the dynamics of the data as opposed to a bias towards
reconstructing the observed data on a short time scale.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2013 10:39:50 GMT"
}
] | 2013-09-13T00:00:00 | [
[
"Häusler",
"Chris",
""
],
[
"Susemihl",
"Alex",
""
],
[
"Nawrot",
"Martin P",
""
],
[
"Opper",
"Manfred",
""
]
] | TITLE: Temporal Autoencoding Improves Generative Models of Time Series
ABSTRACT: Restricted Boltzmann Machines (RBMs) are generative models which can learn
useful representations from samples of a dataset in an unsupervised fashion.
They have been widely employed as an unsupervised pre-training method in
machine learning. RBMs have been modified to model time series in two main
ways: The Temporal RBM stacks a number of RBMs laterally and introduces
temporal dependencies between the hidden layer units; The Conditional RBM, on
the other hand, considers past samples of the dataset as a conditional bias and
learns a representation which takes these into account. Here we propose a new
training method for both the TRBM and the CRBM, which enforces the dynamic
structure of temporal datasets. We do so by treating the temporal models as
denoising autoencoders, considering past frames of the dataset as corrupted
versions of the present frame and minimizing the reconstruction error of the
present data by the model. We call this approach Temporal Autoencoding. This
leads to a significant improvement in the performance of both models in a
filling-in-frames task across a number of datasets. The error reduction for
motion capture data is 56\% for the CRBM and 80\% for the TRBM. Taking the
posterior mean prediction instead of single samples further improves the
model's estimates, decreasing the error by as much as 91\% for the CRBM on
motion capture data. We also trained the model to perform forecasting on a
large number of datasets and have found TA pretraining to consistently improve
the performance of the forecasts. Furthermore, by looking at the prediction
error across time, we can see that this improvement reflects a better
representation of the dynamics of the data as opposed to a bias towards
reconstructing the observed data on a short time scale.
|
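A sketch of the data preparation behind Temporal Autoencoding: past frames are treated as a corrupted version of the present frame and used to reconstruct it. A ridge-regularized linear map stands in for the TRBM/CRBM models actually trained in the paper.

```python
import numpy as np

def temporal_pairs(series, order=3):
    """Inputs are the concatenated past `order` frames, targets the current frame."""
    X = np.stack([series[t - order:t].ravel() for t in range(order, len(series))])
    Y = series[order:]
    return X, Y

# Toy multivariate time series.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(300, 5)), axis=0)
X, Y = temporal_pairs(series, order=3)

# Ridge-regularized least-squares reconstruction of the present frame from the past.
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)
print("reconstruction MSE:", float(np.mean((X @ W - Y) ** 2)))
```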
1309.2648 | Hany SalahEldeen | Hany M. SalahEldeen and Michael L. Nelson | Resurrecting My Revolution: Using Social Link Neighborhood in Bringing
Context to the Disappearing Web | Published IN TPDL 2013 | null | null | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous work we reported that resources linked in tweets disappeared at
the rate of 11% in the first year followed by 7.3% each year afterwards. We
also found that in the first year 6.7%, and 14.6% in each subsequent year, of
the resources were archived in public web archives. In this paper we revisit
the same dataset of tweets and find that our prior model still holds and the
calculated error for estimating percentages missing was about 4%, but we found
the rate of archiving produced a higher error of about 11.5%. We also
discovered that resources have disappeared from the archives themselves (7.89%)
as well as reappeared on the live web after being declared missing (6.54%). We
have also tested the availability of the tweets themselves and found that
10.34% have disappeared from the live web. To mitigate the loss of resources on
the live web, we propose the use of a "tweet signature". Using the Topsy API,
we extract the top five most frequent terms from the union of all tweets about
a resource, and use these five terms as a query to Google. We found that using
tweet signatures results in discovering replacement resources with 70+% textual
similarity to the missing resource 41% of the time.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2013 20:00:55 GMT"
}
] | 2013-09-12T00:00:00 | [
[
"SalahEldeen",
"Hany M.",
""
],
[
"Nelson",
"Michael L.",
""
]
] | TITLE: Resurrecting My Revolution: Using Social Link Neighborhood in Bringing
Context to the Disappearing Web
ABSTRACT: In previous work we reported that resources linked in tweets disappeared at
the rate of 11% in the first year followed by 7.3% each year afterwards. We
also found that in the first year 6.7%, and 14.6% in each subsequent year, of
the resources were archived in public web archives. In this paper we revisit
the same dataset of tweets and find that our prior model still holds and the
calculated error for estimating percentages missing was about 4%, but we found
the rate of archiving produced a higher error of about 11.5%. We also
discovered that resources have disappeared from the archives themselves (7.89%)
as well as reappeared on the live web after being declared missing (6.54%). We
have also tested the availability of the tweets themselves and found that
10.34% have disappeared from the live web. To mitigate the loss of resources on
the live web, we propose the use of a "tweet signature". Using the Topsy API,
we extract the top five most frequent terms from the union of all tweets about
a resource, and use these five terms as a query to Google. We found that using
tweet signatures results in discovering replacement resources with 70+% textual
similarity to the missing resource 41% of the time.
|
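A small sketch of the "tweet signature" construction: the top five most frequent terms over the union of tweets about a resource, joined into a search query. Tokenization, the stopword list, and the example tweets are invented; the paper obtains the tweets via the Topsy API.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "for", "on", "rt", "from"}

def tweet_signature(tweets, k=5):
    """Top-k most frequent non-stopword terms across all tweets about a resource."""
    counts = Counter()
    for tweet in tweets:
        counts.update(w for w in re.findall(r"[a-z0-9#@']+", tweet.lower())
                      if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

tweets = ["Protests spreading tonight #jan25 in Cairo",
          "Cairo protests grow, incredible scenes #jan25",
          "Live feed covering the Cairo protests #jan25"]
print(" ".join(tweet_signature(tweets)))   # query to submit to a search engine
```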
1309.2675 | Robert McColl | Rob McColl, David Ediger, Jason Poovey, Dan Campbell, David Bader | A Brief Study of Open Source Graph Databases | WSSSPE13, 4 Pages, 18 Pages with Appendix, 25 figures | null | null | null | cs.DB cs.DS cs.SE | http://creativecommons.org/licenses/by/3.0/ | With the proliferation of large irregular sparse relational datasets, new
storage and analysis platforms have arisen to fill gaps in performance and
capability left by conventional approaches built on traditional database
technologies and query languages. Many of these platforms apply graph
structures and analysis techniques to enable users to ingest, update, query and
compute on the topological structure of these relationships represented as
set(s) of edges between set(s) of vertices. To store and process Facebook-scale
datasets, they must be able to support data sources with billions of edges,
update rates of millions of updates per second, and complex analysis kernels.
These platforms must provide intuitive interfaces that enable graph experts and
novice programmers to write implementations of common graph algorithms. In this
paper, we explore a variety of graph analysis and storage platforms. We compare
their capabilities, interfaces, and performance by implementing and computing
a set of real-world graph algorithms on synthetic graphs with up to 256 million
edges. In the spirit of full disclosure, several authors are affiliated with
the development of STINGER.
| [
{
"version": "v1",
"created": "Fri, 6 Sep 2013 18:36:33 GMT"
}
] | 2013-09-12T00:00:00 | [
[
"McColl",
"Rob",
""
],
[
"Ediger",
"David",
""
],
[
"Poovey",
"Jason",
""
],
[
"Campbell",
"Dan",
""
],
[
"Bader",
"David",
""
]
] | TITLE: A Brief Study of Open Source Graph Databases
ABSTRACT: With the proliferation of large irregular sparse relational datasets, new
storage and analysis platforms have arisen to fill gaps in performance and
capability left by conventional approaches built on traditional database
technologies and query languages. Many of these platforms apply graph
structures and analysis techniques to enable users to ingest, update, query and
compute on the topological structure of these relationships represented as
set(s) of edges between set(s) of vertices. To store and process Facebook-scale
datasets, they must be able to support data sources with billions of edges,
update rates of millions of updates per second, and complex analysis kernels.
These platforms must provide intuitive interfaces that enable graph experts and
novice programmers to write implementations of common graph algorithms. In this
paper, we explore a variety of graph analysis and storage platforms. We compare
their capabilities, interfaces, and performance by implementing and computing
a set of real-world graph algorithms on synthetic graphs with up to 256 million
edges. In the spirit of full disclosure, several authors are affiliated with
the development of STINGER.
|
1309.2199 | Przemyslaw Grabowicz Mr | Przemyslaw A. Grabowicz, Luca Maria Aiello, V\'ictor M. Egu\'iluz,
Alejandro Jaimes | Distinguishing Topical and Social Groups Based on Common Identity and
Bond Theory | 10 pages, 6 figures, 2 tables | 2013. In Proceedings of the sixth ACM international conference on
Web search and data mining (WSDM '13). ACM, New York, NY, USA, 627-636 | 10.1145/2433396.2433475 | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social groups play a crucial role in social media platforms because they form
the basis for user participation and engagement. Groups are created explicitly
by members of the community, but also form organically as members interact. Due
to their importance, they have been studied widely (e.g., community detection,
evolution, activity, etc.). One of the key questions for understanding how such
groups evolve is whether there are different types of groups and how they
differ. In Sociology, theories have been proposed to help explain how such
groups form. In particular, the common identity and common bond theory states
that people join groups based on identity (i.e., interest in the topics
discussed) or bond attachment (i.e., social relationships). The theory has been
applied qualitatively to small groups to classify them as either topical or
social. We use the identity and bond theory to define a set of features to
classify groups into those two categories. Using a dataset from Flickr, we
extract user-defined groups and automatically-detected groups, obtained from a
community detection algorithm. We discuss the process of manual labeling of
groups into social or topical and present results of predicting the group label
based on the defined features. We directly validate the predictions of the
theory showing that the metrics are able to forecast the group type with high
accuracy. In addition, we present a comparison between declared and detected
groups along topicality and sociality dimensions.
| [
{
"version": "v1",
"created": "Mon, 9 Sep 2013 15:47:00 GMT"
}
] | 2013-09-10T00:00:00 | [
[
"Grabowicz",
"Przemyslaw A.",
""
],
[
"Aiello",
"Luca Maria",
""
],
[
"Eguíluz",
"Víctor M.",
""
],
[
"Jaimes",
"Alejandro",
""
]
] | TITLE: Distinguishing Topical and Social Groups Based on Common Identity and
Bond Theory
ABSTRACT: Social groups play a crucial role in social media platforms because they form
the basis for user participation and engagement. Groups are created explicitly
by members of the community, but also form organically as members interact. Due
to their importance, they have been studied widely (e.g., community detection,
evolution, activity, etc.). One of the key questions for understanding how such
groups evolve is whether there are different types of groups and how they
differ. In Sociology, theories have been proposed to help explain how such
groups form. In particular, the common identity and common bond theory states
that people join groups based on identity (i.e., interest in the topics
discussed) or bond attachment (i.e., social relationships). The theory has been
applied qualitatively to small groups to classify them as either topical or
social. We use the identity and bond theory to define a set of features to
classify groups into those two categories. Using a dataset from Flickr, we
extract user-defined groups and automatically-detected groups, obtained from a
community detection algorithm. We discuss the process of manual labeling of
groups into social or topical and present results of predicting the group label
based on the defined features. We directly validate the predictions of the
theory showing that the metrics are able to forecast the group type with high
accuracy. In addition, we present a comparison between declared and detected
groups along topicality and sociality dimensions.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.