id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1211.5817 | Seyed-Mehdi-Reza Beheshti | Seyed-Mehdi-Reza Beheshti, Sherif Sakr, Boualem Benatallah, Hamid Reza
Motahari-Nezhad | Extending SPARQL to Support Entity Grouping and Path Queries | 23 pages. arXiv admin note: text overlap with arXiv:1211.5009 | null | null | UNSW-CSE-TR-1019 | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to efficiently find subgraphs and paths in a large graph that are
relevant to a given query is important in many applications including scientific data
analysis, social networks, and business intelligence. Currently, there is
little support and no efficient approaches for expressing and executing such
queries. This paper proposes a data model and a query language to address this
problem. The contributions include supporting the construction and selection
of: (i) folder nodes, representing a set of related entities, and (ii) path
nodes, representing a set of paths in which a path is the transitive
relationship of two or more entities in the graph. Folders and paths can be
stored and used for future queries. We introduce FPSPARQL which is an extension
of SPARQL supporting folder and path nodes. We have implemented a query
engine that supports FPSPARQL, and the evaluation results show its viability
and efficiency for querying large graph datasets.
| [
{
"version": "v1",
"created": "Wed, 21 Nov 2012 10:55:36 GMT"
}
] | 2012-11-27T00:00:00 | [
[
"Beheshti",
"Seyed-Mehdi-Reza",
""
],
[
"Sakr",
"Sherif",
""
],
[
"Benatallah",
"Boualem",
""
],
[
"Motahari-Nezhad",
"Hamid Reza",
""
]
] | TITLE: Extending SPARQL to Support Entity Grouping and Path Queries
ABSTRACT: The ability to efficiently find subgraphs and paths in a large graph that are
relevant to a given query is important in many applications including scientific data
analysis, social networks, and business intelligence. Currently, there is
little support and no efficient approaches for expressing and executing such
queries. This paper proposes a data model and a query language to address this
problem. The contributions include supporting the construction and selection
of: (i) folder nodes, representing a set of related entities, and (ii) path
nodes, representing a set of paths in which a path is the transitive
relationship of two or more entities in the graph. Folders and paths can be
stored and used for future queries. We introduce FPSPARQL which is an extension
of SPARQL supporting folder and path nodes. We have implemented a query
engine that supports FPSPARQL, and the evaluation results show its viability
and efficiency for querying large graph datasets.
|
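The record above concerns path queries over graphs. For reference, here is a minimal sketch of a transitive path query using standard SPARQL 1.1 property paths via rdflib; FPSPARQL's own syntax is not given in this record, so the graph, prefixes, and query below are illustrative assumptions only.

```python
# A sketch of the kind of path query FPSPARQL targets, expressed with standard
# SPARQL 1.1 property paths via rdflib (not FPSPARQL syntax).
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
ex:carol ex:knows ex:dave .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# Transitive closure of ex:knows starting from ex:alice (one or more hops).
query = """
PREFIX ex: <http://example.org/>
SELECT ?reachable WHERE { ex:alice ex:knows+ ?reachable . }
"""

for row in g.query(query):
    print(row.reachable)   # bob, carol, dave
```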
1211.5820 | Erjia Yan | Erjia Yan, Ying Ding, Blaise Cronin, Loet Leydesdorff | A bird's-eye view of scientific trading: Dependency relations among
fields of science | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use a trading metaphor to study knowledge transfer in the sciences as well
as the social sciences. The metaphor comprises four dimensions: (a) Discipline
Self-dependence, (b) Knowledge Exports/Imports, (c) Scientific Trading
Dynamics, and (d) Scientific Trading Impact. This framework is applied to a
dataset of 221 Web of Science subject categories. We find that: (i) the
Scientific Trading Impact and Dynamics of Materials Science and Transportation
Science have increased; (ii) Biomedical Disciplines, Physics, and Mathematics
are significant knowledge exporters, as is Statistics & Probability; (iii) in
the social sciences, Economics, Business, Psychology, Management, and Sociology
are important knowledge exporters; (iv) Discipline Self-dependence is
associated with specialized domains that have ties to professional practice
(e.g., Law, Ophthalmology, Dentistry, Oral Surgery & Medicine, Psychology,
Psychoanalysis, Veterinary Sciences, and Nursing).
| [
{
"version": "v1",
"created": "Sun, 25 Nov 2012 23:22:05 GMT"
}
] | 2012-11-27T00:00:00 | [
[
"Yan",
"Erjia",
""
],
[
"Ding",
"Ying",
""
],
[
"Cronin",
"Blaise",
""
],
[
"Leydesdorff",
"Loet",
""
]
] | TITLE: A bird's-eye view of scientific trading: Dependency relations among
fields of science
ABSTRACT: We use a trading metaphor to study knowledge transfer in the sciences as well
as the social sciences. The metaphor comprises four dimensions: (a) Discipline
Self-dependence, (b) Knowledge Exports/Imports, (c) Scientific Trading
Dynamics, and (d) Scientific Trading Impact. This framework is applied to a
dataset of 221 Web of Science subject categories. We find that: (i) the
Scientific Trading Impact and Dynamics of Materials Science and Transportation
Science have increased; (ii) Biomedical Disciplines, Physics, and Mathematics
are significant knowledge exporters, as is Statistics & Probability; (iii) in
the social sciences, Economics, Business, Psychology, Management, and Sociology
are important knowledge exporters; (iv) Discipline Self-dependence is
associated with specialized domains that have ties to professional practice
(e.g., Law, Ophthalmology, Dentistry, Oral Surgery & Medicine, Psychology,
Psychoanalysis, Veterinary Sciences, and Nursing).
|
1211.0191 | Branko Ristic | Branko Ristic, Jamie Sherrah and \'Angel F. Garc\'ia-Fern\'andez | Performance Evaluation of Random Set Based Pedestrian Tracking
Algorithms | 6 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper evaluates the error performance of three random finite set based
multi-object trackers in the context of pedestrian video tracking. The
evaluation is carried out using a publicly available video dataset of 4500
frames (town centre street) for which the ground truth is available. The input
to all pedestrian tracking algorithms is an identical set of head and body
detections, obtained using the Histogram of Oriented Gradients (HOG) detector.
The tracking error is measured using the recently proposed OSPA metric for
tracks, adopted as the only known mathematically rigorous metric for measuring
the distance between two sets of tracks. A comparative analysis is presented
under various conditions.
| [
{
"version": "v1",
"created": "Thu, 25 Oct 2012 23:21:46 GMT"
}
] | 2012-11-26T00:00:00 | [
[
"Ristic",
"Branko",
""
],
[
"Sherrah",
"Jamie",
""
],
[
"García-Fernández",
"Ángel F.",
""
]
] | TITLE: Performance Evaluation of Random Set Based Pedestrian Tracking
Algorithms
ABSTRACT: The paper evaluates the error performance of three random finite set based
multi-object trackers in the context of pedestrian video tracking. The
evaluation is carried out using a publicly available video dataset of 4500
frames (town centre street) for which the ground truth is available. The input
to all pedestrian tracking algorithms is an identical set of head and body
detections, obtained using the Histogram of Oriented Gradients (HOG) detector.
The tracking error is measured using the recently proposed OSPA metric for
tracks, adopted as the only known mathematically rigorous metric for measuring
the distance between two sets of tracks. A comparative analysis is presented
under various conditions.
|
1211.5245 | Benjamin Laken | Benjamin A. Laken, Enric Palle, Jasa Calogovic and Eimear M. Dunne | A cosmic ray-climate link and cloud observations | 13 pages, 6 figures | J. Space Weather Space Clim., 2, A18, 13pp, 2012 | 10.1051/swsc/2012018 | null | physics.ao-ph astro-ph.EP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite over 35 years of constant satellite-based measurements of cloud,
reliable evidence of a long-hypothesized link between changes in solar activity
and Earth's cloud cover remains elusive. This work examines evidence of a
cosmic ray cloud link from a range of sources, including satellite-based cloud
measurements and long-term ground-based climatological measurements. The
satellite-based studies can be divided into two categories: 1) monthly to
decadal timescale correlations, and 2) daily timescale epoch-superpositional
(composite) analysis. The latter analyses frequently focus on high-magnitude
reductions in the cosmic ray flux known as Forbush Decrease (FD) events. At
present, two long-term independent global satellite cloud datasets are
available (ISCCP and MODIS). Although the differences between them are
considerable, neither shows evidence of a solar-cloud link at either long or
short timescales. Furthermore, reports of observed correlations between solar
activity and cloud over the 1983 to 1995 period are attributed to the chance
agreement between solar changes and artificially induced cloud trends. It is
possible that the satellite cloud datasets and analysis methods may simply be
too insensitive to detect a small solar signal. Evidence from ground-based
studies suggests that some weak but statistically significant CR-cloud
relationships may exist at regional scales, involving mechanisms related to the
global electric circuit. However, a poor understanding of these mechanisms and
their effects on cloud make the net impacts of such links uncertain. Regardless
of this, it is clear that there is no robust evidence of a widespread link
between the cosmic ray flux and clouds.
| [
{
"version": "v1",
"created": "Thu, 22 Nov 2012 10:24:27 GMT"
}
] | 2012-11-26T00:00:00 | [
[
"Laken",
"Benjamin A.",
""
],
[
"Palle",
"Enric",
""
],
[
"Calogovic",
"Jasa",
""
],
[
"Dunne",
"Eimear M.",
""
]
] | TITLE: A cosmic ray-climate link and cloud observations
ABSTRACT: Despite over 35 years of constant satellite-based measurements of cloud,
reliable evidence of a long-hypothesized link between changes in solar activity
and Earth's cloud cover remains elusive. This work examines evidence of a
cosmic ray cloud link from a range of sources, including satellite-based cloud
measurements and long-term ground-based climatological measurements. The
satellite-based studies can be divided into two categories: 1) monthly to
decadal timescale correlations, and 2) daily timescale epoch-superpositional
(composite) analysis. The latter analyses frequently focus on high-magnitude
reductions in the cosmic ray flux known as Forbush Decrease (FD) events. At
present, two long-term independent global satellite cloud datasets are
available (ISCCP and MODIS). Although the differences between them are
considerable, neither shows evidence of a solar-cloud link at either long or
short timescales. Furthermore, reports of observed correlations between solar
activity and cloud over the 1983 to 1995 period are attributed to the chance
agreement between solar changes and artificially induced cloud trends. It is
possible that the satellite cloud datasets and analysis methods may simply be
too insensitive to detect a small solar signal. Evidence from ground-based
studies suggests that some weak but statistically significant CR-cloud
relationships may exist at regional scales, involving mechanisms related to the
global electric circuit. However, a poor understanding of these mechanisms and
their effects on cloud make the net impacts of such links uncertain. Regardless
of this, it is clear that there is no robust evidence of a widespread link
between the cosmic ray flux and clouds.
|
1211.5520 | Ashish Tendulkar Dr | Vivekanand Samant, Arvind Hulgeri, Alfonso Valencia, Ashish V.
Tendulkar | Accurate Demarcation of Protein Domain Linkers based on Structural
Analysis of Linker Probable Region | 18 pages, 2 figures | International Journal of Computational Biology, 0001:01-19, 2012 | null | null | cs.CE q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multi-domain proteins, the domains are connected by a flexible
unstructured region called a protein domain linker. The accurate demarcation
of these linkers holds the key to understanding their biochemical and
evolutionary attributes. This knowledge helps in designing a suitable linker
for engineering stable multi-domain chimeric proteins. Here we propose a novel
method for the demarcation of the linker based on a three-dimensional protein
structure and a domain definition. The proposed method is based on biological
knowledge about structural flexibility of the linkers. We performed structural
analysis on a linker probable region (LPR) around domain boundary points of
known SCOP domains. The LPR was described using a set of overlapping peptide
fragments of fixed size. Each peptide fragment was then described by geometric
invariants (GIs) and subjected to a clustering process in which the fragments
corresponding to the actual linker emerge as outliers. We then discover the actual
linkers by finding the longest continuous stretch of outlier fragments from
LPRs. This method was evaluated on a benchmark dataset of 51 continuous
multi-domain proteins, where it achieves an F1 score of 0.745 (0.83 precision and
0.66 recall). When the method was applied on 725 continuous multi-domain
proteins, it was able to identify novel linkers that were not reported
previously. This method can be used in combination with supervised / sequence
based linker prediction methods for accurate linker demarcation.
| [
{
"version": "v1",
"created": "Fri, 23 Nov 2012 14:53:54 GMT"
}
] | 2012-11-26T00:00:00 | [
[
"Samant",
"Vivekanand",
""
],
[
"Hulgeri",
"Arvind",
""
],
[
"Valencia",
"Alfonso",
""
],
[
"Tendulkar",
"Ashish V.",
""
]
] | TITLE: Accurate Demarcation of Protein Domain Linkers based on Structural
Analysis of Linker Probable Region
ABSTRACT: In multi-domain proteins, the domains are connected by a flexible
unstructured region called a protein domain linker. The accurate demarcation
of these linkers holds the key to understanding their biochemical and
evolutionary attributes. This knowledge helps in designing a suitable linker
for engineering stable multi-domain chimeric proteins. Here we propose a novel
method for the demarcation of the linker based on a three-dimensional protein
structure and a domain definition. The proposed method is based on biological
knowledge about structural flexibility of the linkers. We performed structural
analysis on a linker probable region (LPR) around domain boundary points of
known SCOP domains. The LPR was described using a set of overlapping peptide
fragments of fixed size. Each peptide fragment was then described by geometric
invariants (GIs) and subjected to a clustering process in which the fragments
corresponding to the actual linker emerge as outliers. We then discover the actual
linkers by finding the longest continuous stretch of outlier fragments from
LPRs. This method was evaluated on a benchmark dataset of 51 continuous
multi-domain proteins, where it achieves an F1 score of 0.745 (0.83 precision and
0.66 recall). When the method was applied on 725 continuous multi-domain
proteins, it was able to identify novel linkers that were not reported
previously. This method can be used in combination with supervised / sequence
based linker prediction methods for accurate linker demarcation.
|
1211.4888 | Tuhin Sahai | Tuhin Sahai, Stefan Klus and Michael Dellnitz | A Traveling Salesman Learns Bayesian Networks | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structure learning of Bayesian networks is an important problem that arises
in numerous machine learning applications. In this work, we present a novel
approach for learning the structure of Bayesian networks using the solution of
an appropriately constructed traveling salesman problem. In our approach, one
computes an optimal ordering (partially ordered set) of random variables using
methods for the traveling salesman problem. This ordering significantly reduces
the search space for the subsequent greedy optimization that computes the final
structure of the Bayesian network. We demonstrate our approach to learning
Bayesian networks on real-world census and weather datasets. In both cases, we
demonstrate that the approach very accurately captures dependencies between
random variables. We check the accuracy of the predictions based on independent
studies in both application domains.
| [
{
"version": "v1",
"created": "Tue, 20 Nov 2012 21:50:22 GMT"
}
] | 2012-11-22T00:00:00 | [
[
"Sahai",
"Tuhin",
""
],
[
"Klus",
"Stefan",
""
],
[
"Dellnitz",
"Michael",
""
]
] | TITLE: A Traveling Salesman Learns Bayesian Networks
ABSTRACT: Structure learning of Bayesian networks is an important problem that arises
in numerous machine learning applications. In this work, we present a novel
approach for learning the structure of Bayesian networks using the solution of
an appropriately constructed traveling salesman problem. In our approach, one
computes an optimal ordering (partially ordered set) of random variables using
methods for the traveling salesman problem. This ordering significantly reduces
the search space for the subsequent greedy optimization that computes the final
structure of the Bayesian network. We demonstrate our approach to learning
Bayesian networks on real-world census and weather datasets. In both cases, we
demonstrate that the approach very accurately captures dependencies between
random variables. We check the accuracy of the predictions based on independent
studies in both application domains.
|
1211.4658 | Monowar Bhuyan H | Monowar H. Bhuyan, Sarat Saharia, and Dhruba Kr Bhattacharyya | An Effective Method for Fingerprint Classification | 9 pages, 7 figures, 6 tables referred journal publication. arXiv
admin note: substantial text overlap with arXiv:1211.4503 | International A. Journal of e-Technology, Vol. 1, No. 3, pp.
89-97, January, 2010 | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an effective method for fingerprint classification using
a data mining approach. Initially, it generates a numeric code sequence for each
fingerprint image based on the ridge flow patterns. Then for each class, a seed
is selected by using a frequent itemsets generation technique. These seeds are
subsequently used for clustering the fingerprint images. The proposed method
was tested and evaluated on several real-life datasets, and a significant
improvement in reducing misclassification errors was observed in comparison
with other methods.
| [
{
"version": "v1",
"created": "Tue, 20 Nov 2012 03:25:57 GMT"
}
] | 2012-11-21T00:00:00 | [
[
"Bhuyan",
"Monowar H.",
""
],
[
"Saharia",
"Sarat",
""
],
[
"Bhattacharyya",
"Dhruba Kr",
""
]
] | TITLE: An Effective Method for Fingerprint Classification
ABSTRACT: This paper presents an effective method for fingerprint classification using
a data mining approach. Initially, it generates a numeric code sequence for each
fingerprint image based on the ridge flow patterns. Then for each class, a seed
is selected by using a frequent itemsets generation technique. These seeds are
subsequently used for clustering the fingerprint images. The proposed method
was tested and evaluated on several real-life datasets, and a significant
improvement in reducing misclassification errors was observed in comparison
with other methods.
|
1211.4142 | Shaina Race | Ralph Abbey, Jeremy Diepenbrock, Amy Langville, Carl Meyer, Shaina
Race, Dexin Zhou | Data Clustering via Principal Direction Gap Partitioning | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the geometrical interpretation of the PCA based clustering
algorithm Principal Direction Divisive Partitioning (PDDP). We give several
examples where this algorithm breaks down, and suggest a new method, gap
partitioning, which takes into account natural gaps in the data between
clusters. Geometric features of the PCA space are derived and illustrated, and
experimental results are given which show that our method is comparable on the
datasets used in the original paper on PDDP.
| [
{
"version": "v1",
"created": "Sat, 17 Nov 2012 18:28:30 GMT"
}
] | 2012-11-20T00:00:00 | [
[
"Abbey",
"Ralph",
""
],
[
"Diepenbrock",
"Jeremy",
""
],
[
"Langville",
"Amy",
""
],
[
"Meyer",
"Carl",
""
],
[
"Race",
"Shaina",
""
],
[
"Zhou",
"Dexin",
""
]
] | TITLE: Data Clustering via Principal Direction Gap Partitioning
ABSTRACT: We explore the geometrical interpretation of the PCA based clustering
algorithm Principal Direction Divisive Partitioning (PDDP). We give several
examples where this algorithm breaks down, and suggest a new method, gap
partitioning, which takes into account natural gaps in the data between
clusters. Geometric features of the PCA space are derived and illustrated, and
experimental results are given which show that our method is comparable on the
datasets used in the original paper on PDDP.
|
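The gap-partitioning idea in the record above can be illustrated compactly. The sketch below, on synthetic data and not the authors' code, projects points onto the first principal direction and splits at the widest gap rather than at the mean, as classic PDDP does.

```python
# A minimal sketch of principal-direction gap partitioning (assumed details:
# 2-D synthetic blobs, midpoint threshold at the widest gap).
import numpy as np

def principal_direction_gap_split(X):
    """Split rows of X into two clusters at the largest 1-D gap."""
    Xc = X - X.mean(axis=0)                       # center the data
    # First right singular vector = first principal direction.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]                             # 1-D projections
    order = np.argsort(proj)
    gaps = np.diff(proj[order])
    cut = np.argmax(gaps)                         # index of the widest gap
    threshold = (proj[order[cut]] + proj[order[cut + 1]]) / 2
    return proj <= threshold                      # boolean mask for cluster 1

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + [6, 0]])
mask = principal_direction_gap_split(X)
print(mask.sum(), (~mask).sum())                  # roughly 50 / 50
```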
1211.4503 | Monowar Bhuyan H | Monowar H. Bhuyan and D. K. Bhattacharyya | An Effective Fingerprint Classification and Search Method | 10 pages, 8 figures, 6 tables, referred journal publication | International Journal of Computer Science and Network Security,
Vol. 9, No.11, pp. 39-48, 2009 | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an effective fingerprint classification method designed
based on a hierarchical agglomerative clustering technique. The performance of
the technique was evaluated on several real-life datasets, and a significant
improvement in reducing the misclassification error was observed. This paper
also presents a faster, query-based fingerprint search method over the
clustered fingerprint databases. The retrieval accuracy of the search method
has been found to be effective on several real-life databases.
| [
{
"version": "v1",
"created": "Mon, 19 Nov 2012 17:13:26 GMT"
}
] | 2012-11-20T00:00:00 | [
[
"Bhuyan",
"Monowar H.",
""
],
[
"Bhattacharyya",
"D. K.",
""
]
] | TITLE: An Effective Fingerprint Classification and Search Method
ABSTRACT: This paper presents an effective fingerprint classification method designed
based on a hierarchical agglomerative clustering technique. The performance of
the technique was evaluated on several real-life datasets, and a significant
improvement in reducing the misclassification error was observed. This paper
also presents a faster, query-based fingerprint search method over the
clustered fingerprint databases. The retrieval accuracy of the search method
has been found to be effective on several real-life databases.
|
1211.4521 | Tyler Clemons Mr | Tyler Clemons, S. M. Faisal, Shirish Tatikonda, Charu Aggarawl, and
Srinivasan Parthasarathy | Hash in a Flash: Hash Tables for Solid State Devices | 16 pages 10 figures | null | null | null | cs.DB cs.DS cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, information retrieval algorithms have taken center stage for
extracting important data in ever larger datasets. Advances in hardware
technology have led to the increasingly widespread use of flash storage
devices. Such devices have clear benefits over traditional hard drives in terms
of latency of access, bandwidth and random access capabilities particularly
when reading data. There are however some interesting trade-offs to consider
when leveraging the advanced features of such devices. On a relative scale
writing to such devices can be expensive. This is because typical flash devices
(NAND technology) are updated in blocks. A minor update to a given block
requires the entire block to be erased, followed by a re-writing of the block.
On the other hand, sequential writes can be two orders of magnitude faster than
random writes. In addition, random writes degrade the lifetime of the flash
drive, since each block can support only a limited number of erasures.
TF-IDF can be implemented using a counting hash table. In general, hash tables
are a particularly challenging case for the flash drive because this data
structure is inherently dependent upon the randomness of the hash function, as
opposed to the spatial locality of the data. This makes it difficult to avoid
the random writes incurred during the construction of the counting hash table
for TF-IDF. In this paper, we will study the design landscape for the
development of a hash table for flash storage devices. We demonstrate how to
effectively design a hash table with two related hash functions, one of which
exhibits a data placement property with respect to the other. Specifically, we
focus on three designs based on this general philosophy and evaluate the
trade-offs among them along the axes of query performance, insert and update
times and I/O time through an implementation of the TF-IDF algorithm.
| [
{
"version": "v1",
"created": "Mon, 19 Nov 2012 17:55:01 GMT"
}
] | 2012-11-20T00:00:00 | [
[
"Clemons",
"Tyler",
""
],
[
"Faisal",
"S. M.",
""
],
[
"Tatikonda",
"Shirish",
""
],
[
"Aggarawl",
"Charu",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] | TITLE: Hash in a Flash: Hash Tables for Solid State Devices
ABSTRACT: In recent years, information retrieval algorithms have taken center stage for
extracting important data in ever larger datasets. Advances in hardware
technology have led to the increasingly widespread use of flash storage
devices. Such devices have clear benefits over traditional hard drives in terms
of latency of access, bandwidth and random access capabilities particularly
when reading data. There are however some interesting trade-offs to consider
when leveraging the advanced features of such devices. On a relative scale
writing to such devices can be expensive. This is because typical flash devices
(NAND technology) are updated in blocks. A minor update to a given block
requires the entire block to be erased, followed by a re-writing of the block.
On the other hand, sequential writes can be two orders of magnitude faster than
random writes. In addition, random writes degrade the lifetime of the flash
drive, since each block can support only a limited number of erasures.
TF-IDF can be implemented using a counting hash table. In general, hash tables
are a particularly challenging case for the flash drive because this data
structure is inherently dependent upon the randomness of the hash function, as
opposed to the spatial locality of the data. This makes it difficult to avoid
the random writes incurred during the construction of the counting hash table
for TF-IDF. In this paper, we will study the design landscape for the
development of a hash table for flash storage devices. We demonstrate how to
effectively design a hash table with two related hash functions, one of which
exhibits a data placement property with respect to the other. Specifically, we
focus on three designs based on this general philosophy and evaluate the
trade-offs among them along the axes of query performance, insert and update
times and I/O time through an implementation of the TF-IDF algorithm.
|
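As a rough illustration of the two-hash-function idea described above, the following in-memory sketch groups buffered term counts by a coarse placement hash so counts can be flushed block by block, mimicking sequential writes. It is not the paper's actual design; the block count, hash choice, and buffering policy here are assumptions.

```python
# Illustrative counting hash table with a coarse "block" hash for placement
# and buffered, block-ordered flushes (a stand-in for sequential flash writes).
from collections import Counter, defaultdict

NUM_BLOCKS = 8

def block_hash(term: str) -> int:          # coarse, placement-level hash
    return hash(term) % NUM_BLOCKS

def flush(buffer, blocks):
    """Group buffered term counts by block and apply them block by block."""
    per_block = defaultdict(Counter)
    for term, count in buffer.items():
        per_block[block_hash(term)][term] += count
    for block_id in sorted(per_block):      # sequential order over blocks
        blocks[block_id].update(per_block[block_id])
    buffer.clear()

blocks = [Counter() for _ in range(NUM_BLOCKS)]   # stand-in for flash blocks
buffer = Counter()                                # in-RAM write buffer

for doc in ["the cat sat", "the dog sat", "the cat ran"]:
    buffer.update(doc.split())                    # term-frequency counting
flush(buffer, blocks)

print(blocks[block_hash("the")]["the"])           # 3
```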
1211.4552 | Gabriel Synnaeve | Gabriel Synnaeve (LIG, LPPA), Pierre Bessiere (LPPA) | A Dataset for StarCraft AI \& an Example of Armies Clustering | Artificial Intelligence in Adversarial Real-Time Games 2012, Palo
Alto : United States (2012) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper advocates the exploration of the full state of recorded real-time
strategy (RTS) games, by human or robotic players, to discover how to reason
about tactics and strategy. We present a dataset of StarCraft games
encompassing most of the games' state (not only players' orders). We
explain one of the possible usages of this dataset by clustering armies based on
their compositions. This reduction of army compositions to mixtures of
Gaussians allows for strategic reasoning at the level of the components. We
evaluated this clustering method by predicting the outcomes of battles based on
the mixture components of army compositions.
| [
{
"version": "v1",
"created": "Mon, 19 Nov 2012 20:18:43 GMT"
}
] | 2012-11-20T00:00:00 | [
[
"Synnaeve",
"Gabriel",
"",
"LIG, LPPA"
],
[
"Bessiere",
"Pierre",
"",
"LPPA"
]
] | TITLE: A Dataset for StarCraft AI \& an Example of Armies Clustering
ABSTRACT: This paper advocates the exploration of the full state of recorded real-time
strategy (RTS) games, by human or robotic players, to discover how to reason
about tactics and strategy. We present a dataset of StarCraft games
encompassing most of the games' state (not only players' orders). We
explain one of the possible usages of this dataset by clustering armies based on
their compositions. This reduction of army compositions to mixtures of
Gaussians allows for strategic reasoning at the level of the components. We
evaluated this clustering method by predicting the outcomes of battles based on
the mixture components of army compositions.
|
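A minimal sketch of the armies-clustering usage described in the record above, using synthetic unit-composition vectors; the unit names and proportions are made up, not drawn from the released dataset.

```python
# Cluster "armies" by fitting a Gaussian mixture over unit-composition fractions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Fake armies: rows are fractions of (marine, medic, tank, goliath).
bio  = rng.dirichlet([8, 4, 1, 1], size=100)      # bio-heavy compositions
mech = rng.dirichlet([1, 1, 6, 4], size=100)      # mech-heavy compositions
armies = np.vstack([bio, mech])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(armies)

# Soft memberships let you reason about an army as a mixture of "styles".
print(gmm.predict_proba(armies[:3]).round(2))
print(np.bincount(labels))                        # roughly 100 / 100
```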
1011.4104 | Andri Mirzal | Andri Mirzal | Clustering and Latent Semantic Indexing Aspects of the Singular Value
Decomposition | 38 pages, submitted to Pattern Recognition | null | null | null | cs.LG cs.NA math.SP | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper discusses clustering and latent semantic indexing (LSI) aspects of
the singular value decomposition (SVD). The purpose of this paper is twofold.
The first is to give an explanation on how and why the singular vectors can be
used in clustering. And the second is to show that the two seemingly unrelated
SVD aspects actually originate from the same source: related vertices tend to
be more clustered in the graph representation of the lower-rank approximate matrix
using the SVD than in the original semantic graph. Accordingly, the SVD can
improve retrieval performance of an information retrieval system since queries
made to the approximate matrix can retrieve more relevant documents and filter
out more irrelevant documents than the same queries made to the original
matrix. By utilizing this fact, we will devise an LSI algorithm that mimics
SVD capability in clustering related vertices. Convergence analysis shows that
the algorithm is convergent and produces a unique solution for each input.
Experimental results using some standard datasets in LSI research show that
retrieval performances of the algorithm are comparable to the SVD's. In
addition, the algorithm is more practical and easier to use because there is no
need to determine the decomposition rank, which is crucial in driving retrieval
performance of the SVD.
| [
{
"version": "v1",
"created": "Wed, 17 Nov 2010 23:39:12 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2011 18:56:56 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Oct 2012 08:41:06 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Nov 2012 04:26:29 GMT"
}
] | 2012-11-19T00:00:00 | [
[
"Mirzal",
"Andri",
""
]
] | TITLE: Clustering and Latent Semantic Indexing Aspects of the Singular Value
Decomposition
ABSTRACT: This paper discusses clustering and latent semantic indexing (LSI) aspects of
the singular value decomposition (SVD). The purpose of this paper is twofold.
The first is to give an explanation on how and why the singular vectors can be
used in clustering. And the second is to show that the two seemingly unrelated
SVD aspects actually originate from the same source: related vertices tend to
be more clustered in the graph representation of the lower-rank approximate matrix
using the SVD than in the original semantic graph. Accordingly, the SVD can
improve retrieval performance of an information retrieval system since queries
made to the approximate matrix can retrieve more relevant documents and filter
out more irrelevant documents than the same queries made to the original
matrix. By utilizing this fact, we will devise an LSI algorithm that mimics
SVD capability in clustering related vertices. Convergence analysis shows that
the algorithm is convergent and produces a unique solution for each input.
Experimental results using some standard datasets in LSI research show that
retrieval performances of the algorithm are comparable to the SVD's. In
addition, the algorithm is more practical and easier to use because there is no
need to determine the decomposition rank, which is crucial in driving retrieval
performance of the SVD.
|
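For reference, the baseline that the record above builds on, LSI via a truncated SVD, can be sketched as follows with scikit-learn; the documents and rank are illustrative, and this is the standard SVD pipeline rather than the proposed algorithm.

```python
# Standard LSI: TF-IDF matrix -> truncated SVD -> retrieval by cosine similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "graph clustering with spectral methods",
    "singular value decomposition for semantic indexing",
    "latent semantic indexing of text collections",
    "pedestrian tracking in video sequences",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)              # document-term matrix

rank = 2                                        # decomposition rank (a tunable)
svd = TruncatedSVD(n_components=rank, random_state=0)
X_lsi = svd.fit_transform(X)                    # documents in the latent space

query = vectorizer.transform(["semantic indexing"])
q_lsi = svd.transform(query)

scores = cosine_similarity(q_lsi, X_lsi).ravel()
print(scores.argsort()[::-1])                   # documents ranked by relevance
```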
1211.3444 | Deanna Needell | B. Cung, T. Jin, J. Ramirez, A. Thompson, C. Boutsidis and D. Needell | Spectral Clustering: An empirical study of Approximation Algorithms and
its Application to the Attrition Problem | null | null | null | null | cs.LG math.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is the problem of separating a set of objects into groups (called
clusters) so that objects within the same cluster are more similar to each
other than to those in different clusters. Spectral clustering is a now
well-known method for clustering which utilizes the spectrum of the data
similarity matrix to perform this separation. Since the method relies on
solving an eigenvector problem, it is computationally expensive for large
datasets. To overcome this constraint, approximation methods have been
developed which aim to reduce running time while maintaining accurate
classification. In this article, we summarize and experimentally evaluate
several approximation methods for spectral clustering. From an applications
standpoint, we employ spectral clustering to solve the so-called attrition
problem, where one aims to identify from a set of employees those who are
likely to voluntarily leave the company from those who are not. Our study sheds
light on the empirical performance of existing approximate spectral clustering
methods and shows the applicability of these methods in an important business
optimization related problem.
| [
{
"version": "v1",
"created": "Wed, 14 Nov 2012 22:05:09 GMT"
}
] | 2012-11-16T00:00:00 | [
[
"Cung",
"B.",
""
],
[
"Jin",
"T.",
""
],
[
"Ramirez",
"J.",
""
],
[
"Thompson",
"A.",
""
],
[
"Boutsidis",
"C.",
""
],
[
"Needell",
"D.",
""
]
] | TITLE: Spectral Clustering: An empirical study of Approximation Algorithms and
its Application to the Attrition Problem
ABSTRACT: Clustering is the problem of separating a set of objects into groups (called
clusters) so that objects within the same cluster are more similar to each
other than to those in different clusters. Spectral clustering is a now
well-known method for clustering which utilizes the spectrum of the data
similarity matrix to perform this separation. Since the method relies on
solving an eigenvector problem, it is computationally expensive for large
datasets. To overcome this constraint, approximation methods have been
developed which aim to reduce running time while maintaining accurate
classification. In this article, we summarize and experimentally evaluate
several approximation methods for spectral clustering. From an applications
standpoint, we employ spectral clustering to solve the so-called attrition
problem, where one aims to identify from a set of employees those who are
likely to voluntarily leave the company from those who are not. Our study sheds
light on the empirical performance of existing approximate spectral clustering
methods and shows the applicability of these methods in an important business
optimization related problem.
|
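A compact reference implementation of exact spectral clustering, the computation the surveyed approximation methods try to avoid, is sketched below on synthetic data; the RBF similarity and parameter choices are assumptions, not the article's experimental setup.

```python
# Exact spectral clustering: RBF affinity -> normalized Laplacian -> k smallest
# eigenvectors -> k-means on the embedded rows.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    # Gaussian (RBF) similarity matrix.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    # eigh returns eigenvalues in ascending order; take the k smallest.
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

X = np.vstack([np.random.randn(40, 2), np.random.randn(40, 2) + 5])
print(np.bincount(spectral_clustering(X, k=2)))     # two groups of ~40
```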
1203.5387 | Vibhor Rastogi | Vibhor Rastogi, Ashwin Machanavajjhala, Laukik Chitnis, Anish Das
Sarma | Finding Connected Components on Map-reduce in Logarithmic Rounds | null | null | null | null | cs.DS cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a large graph G = (V,E) with millions of nodes and edges, how do we
compute its connected components efficiently? Recent work addresses this
problem in map-reduce, where a fundamental trade-off exists between the number
of map-reduce rounds and the communication of each round. Denoting d the
diameter of the graph, and n the number of nodes in the largest component, all
prior map-reduce techniques either require d rounds, or require about n|V| +
|E| communication per round. We propose two randomized map-reduce algorithms --
(i) Hash-Greater-To-Min, which provably requires at most 3log(n) rounds with
high probability, and at most 2(|V| + |E|) communication per round, and (ii)
Hash-to-Min, which has a worse theoretical complexity, but in practice
completes in at most 2log(d) rounds and 3(|V| + |E|) communication per round.
Our techniques for connected components can be applied to clustering as well.
We propose a novel algorithm for agglomerative single linkage clustering in
map-reduce. This is the first algorithm that can provably compute a clustering
in at most O(log(n)) rounds, where n is the size of the largest cluster. We
show the effectiveness of all our algorithms through detailed experiments on
large synthetic as well as real-world datasets.
| [
{
"version": "v1",
"created": "Sat, 24 Mar 2012 05:16:27 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Nov 2012 01:50:51 GMT"
}
] | 2012-11-14T00:00:00 | [
[
"Rastogi",
"Vibhor",
""
],
[
"Machanavajjhala",
"Ashwin",
""
],
[
"Chitnis",
"Laukik",
""
],
[
"Sarma",
"Anish Das",
""
]
] | TITLE: Finding Connected Components on Map-reduce in Logarithmic Rounds
ABSTRACT: Given a large graph G = (V,E) with millions of nodes and edges, how do we
compute its connected components efficiently? Recent work addresses this
problem in map-reduce, where a fundamental trade-off exists between the number
of map-reduce rounds and the communication of each round. Denoting d the
diameter of the graph, and n the number of nodes in the largest component, all
prior map-reduce techniques either require d rounds, or require about n|V| +
|E| communication per round. We propose two randomized map-reduce algorithms --
(i) Hash-Greater-To-Min, which provably requires at most 3log(n) rounds with
high probability, and at most 2(|V| + |E|) communication per round, and (ii)
Hash-to-Min, which has a worse theoretical complexity, but in practice
completes in at most 2log(d) rounds and 3(|V| + |E|) communication per round.
Our techniques for connected components can be applied to clustering as well.
We propose a novel algorithm for agglomerative single linkage clustering in
map-reduce. This is the first algorithm that can provably compute a clustering
in at most O(log(n)) rounds, where n is the size of the largest cluster. We
show the effectiveness of all our algorithms through detailed experiments on
large synthetic as well as real-world datasets.
|
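The Hash-to-Min scheme described in the record above can be simulated in memory to see its behavior; the sketch below is a single-machine simulation of the map and reduce steps, not the authors' map-reduce implementation.

```python
# In-memory simulation of the Hash-to-Min idea: each node keeps a set of known
# members, sends that whole set to the minimum member, and sends the minimum to
# everyone else, until nothing changes.
def hash_to_min(edges):
    # Initialize each node's cluster with itself and its neighbors.
    clusters = {}
    for u, v in edges:
        clusters.setdefault(u, {u}).add(v)
        clusters.setdefault(v, {v}).add(u)

    changed = True
    while changed:
        inbox = {v: set() for v in clusters}
        for v, C in clusters.items():           # "map" phase
            m = min(C)
            inbox[m] |= C                       # send the full set to the minimum
            for u in C:
                inbox[u].add(m)                 # send the minimum to everyone
        changed = False
        for v in clusters:                      # "reduce" phase: union inboxes
            if inbox[v] != clusters[v]:
                clusters[v] = inbox[v]
                changed = True

    # The minimum node of each component ends up holding the whole component.
    return [members for v, members in clusters.items() if v == min(members)]

print(hash_to_min([(1, 2), (2, 3), (5, 6)]))    # [{1, 2, 3}, {5, 6}]
```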
1211.2399 | Rustam Tagiew | Rustam Tagiew | Mining Determinism in Human Strategic Behavior | 8 pages, no figures, EEML 2012 | Experimental Economics and Machine Learning 2012, CEUR-WS Vol-870,
urn:nbn:de:0074-870-0 | null | null | cs.GT cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work lies in the fusion of experimental economics and data mining. It
continues the author's previous work on mining behaviour rules of human subjects
from experimental data, where game-theoretic predictions partially fail to
work. Game-theoretic predictions, a.k.a. equilibria, tend to succeed only with
experienced subjects on specific games, which is rarely the case. Apart from game
theory, contemporary experimental economics offers a number of alternative
models. In the relevant literature, these models are always biased by psychological
and near-psychological theories and are claimed to be proven by the data. This
work introduces a data mining approach to the problem without using vast
psychological background. Apart from determinism, no other biases are regarded.
Two datasets from different human subject experiments are taken for evaluation.
The first one is a repeated mixed-strategy zero-sum game and the second a
repeated ultimatum game. As the main result, the way of mining deterministic
regularities in human strategic behaviour is described and evaluated. As future
work, the design of a new representation formalism is discussed.
| [
{
"version": "v1",
"created": "Sun, 11 Nov 2012 11:27:01 GMT"
}
] | 2012-11-13T00:00:00 | [
[
"Tagiew",
"Rustam",
""
]
] | TITLE: Mining Determinism in Human Strategic Behavior
ABSTRACT: This work lies in the fusion of experimental economics and data mining. It
continues the author's previous work on mining behaviour rules of human subjects
from experimental data, where game-theoretic predictions partially fail to
work. Game-theoretic predictions, a.k.a. equilibria, tend to succeed only with
experienced subjects on specific games, which is rarely the case. Apart from game
theory, contemporary experimental economics offers a number of alternative
models. In the relevant literature, these models are always biased by psychological
and near-psychological theories and are claimed to be proven by the data. This
work introduces a data mining approach to the problem without using vast
psychological background. Apart from determinism, no other biases are regarded.
Two datasets from different human subject experiments are taken for evaluation.
The first one is a repeated mixed-strategy zero-sum game and the second a
repeated ultimatum game. As the main result, the way of mining deterministic
regularities in human strategic behaviour is described and evaluated. As future
work, the design of a new representation formalism is discussed.
|
1211.2556 | Fatai Anifowose | Fatai Adesina Anifowose | A Comparative Study of Gaussian Mixture Model and Radial Basis Function
for Voice Recognition | 9 pages, 10 figures; International Journal of Advanced Computer
Science and Applications (IJACSA), Vol. 1, No.3, September 2010 | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A comparative study of the application of Gaussian Mixture Model (GMM) and
Radial Basis Function (RBF) in biometric recognition of voice has been carried
out and presented. The application of machine learning techniques to biometric
authentication and recognition problems has gained widespread acceptance. In
this research, a GMM model was trained, using the Expectation Maximization (EM)
algorithm, on a dataset containing 10 classes of vowels and the model was used
to predict the appropriate classes using a validation dataset. For experimental
validity, the model was compared to the performance of two different versions
of the RBF model using the same learning and validation datasets. The results
showed very close recognition accuracy between the GMM and the standard RBF
model, but with GMM performing better than the standard RBF by less than 1% and
the two models outperformed similar models reported in literature. The DTREG
version of RBF outperformed the other two models by producing 94.8% recognition
accuracy. In terms of recognition time, the standard RBF was found to be the
fastest among the three models.
| [
{
"version": "v1",
"created": "Mon, 12 Nov 2012 10:42:58 GMT"
}
] | 2012-11-13T00:00:00 | [
[
"Anifowose",
"Fatai Adesina",
""
]
] | TITLE: A Comparative Study of Gaussian Mixture Model and Radial Basis Function
for Voice Recognition
ABSTRACT: A comparative study of the application of Gaussian Mixture Model (GMM) and
Radial Basis Function (RBF) in biometric recognition of voice has been carried
out and presented. The application of machine learning techniques to biometric
authentication and recognition problems has gained widespread acceptance. In
this research, a GMM model was trained, using the Expectation Maximization (EM)
algorithm, on a dataset containing 10 classes of vowels and the model was used
to predict the appropriate classes using a validation dataset. For experimental
validity, the model was compared to the performance of two different versions
of the RBF model using the same learning and validation datasets. The results
showed very close recognition accuracy between the GMM and the standard RBF
model, but with GMM performing better than the standard RBF by less than 1% and
the two models outperformed similar models reported in literature. The DTREG
version of RBF outperformed the other two models by producing 94.8% recognition
accuracy. In terms of recognition time, the standard RBF was found to be the
fastest among the three models.
|
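A minimal sketch of the per-class GMM classification scheme described in the record above, trained with EM on synthetic two-dimensional "vowel" features rather than the paper's dataset.

```python
# One GMM per class; a sample is labeled by the class whose mixture assigns it
# the highest log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
classes = {
    "a": rng.normal([0, 0], 0.5, size=(60, 2)),
    "i": rng.normal([3, 0], 0.5, size=(60, 2)),
    "u": rng.normal([0, 3], 0.5, size=(60, 2)),
}

# sklearn's GaussianMixture.fit runs EM internally.
models = {
    label: GaussianMixture(n_components=2, random_state=0).fit(X)
    for label, X in classes.items()
}

def classify(x):
    x = np.atleast_2d(x)
    scores = {label: m.score(x) for label, m in models.items()}
    return max(scores, key=scores.get)          # highest average log-likelihood

print(classify([2.9, 0.1]))                     # expected: "i"
```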
1211.1752 | Abhishek Anand Abhishek | Abhishek Anand and Sherwin Li | 3D Scene Grammar for Parsing RGB-D Pointclouds | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We pose 3D scene-understanding as a problem of parsing in a grammar. A
grammar helps us capture the compositional structure of real-world objects,
e.g., a chair is composed of a seat, a back-rest and some legs. Having multiple
rules for an object helps us capture structural variations in objects, e.g., a
chair can optionally also have arm-rests. Finally, having rules to capture
composition at different levels helps us formulate the entire scene-processing
pipeline as a single problem of finding the most likely parse-tree---small segments
combine to form parts of objects, parts to objects and objects to a scene. We
attach a generative probability model to our grammar by having a
feature-dependent probability function for every rule. We evaluated it by
extracting labels for every segment and comparing the results with the
state-of-the-art segment-labeling algorithm. Our algorithm was outperformed by
the state-of-the-art method. However, our model can be trained very efficiently
(within seconds), and it scales only linearly with the number of rules in
the grammar. Also, we think that this is an important problem for the 3D vision
community. So, we are releasing our dataset and related code.
| [
{
"version": "v1",
"created": "Thu, 8 Nov 2012 03:11:53 GMT"
}
] | 2012-11-09T00:00:00 | [
[
"Anand",
"Abhishek",
""
],
[
"Li",
"Sherwin",
""
]
] | TITLE: 3D Scene Grammar for Parsing RGB-D Pointclouds
ABSTRACT: We pose 3D scene-understanding as a problem of parsing in a grammar. A
grammar helps us capture the compositional structure of real-world objects,
e.g., a chair is composed of a seat, a back-rest and some legs. Having multiple
rules for an object helps us capture structural variations in objects, e.g., a
chair can optionally also have arm-rests. Finally, having rules to capture
composition at different levels helps us formulate the entire scene-processing
pipeline as a single problem of finding the most likely parse-tree---small segments
combine to form parts of objects, parts to objects and objects to a scene. We
attach a generative probability model to our grammar by having a
feature-dependent probability function for every rule. We evaluated it by
extracting labels for every segment and comparing the results with the
state-of-the-art segment-labeling algorithm. Our algorithm was outperformed by
the state-of-the-art method. However, our model can be trained very efficiently
(within seconds), and it scales only linearly with the number of rules in
the grammar. Also, we think that this is an important problem for the 3D vision
community. So, we are releasing our dataset and related code.
|
1208.2448 | Yong Zeng | Yong Zeng, Zhifeng Bao, Guoliang Li, Tok Wang Ling, Jiaheng Lu | Breaking Out The XML MisMatch Trap | The article is already withdrawn | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In keyword search, when a user cannot get what she wants, query refinement is
needed, and the reasons can vary. We first give a thorough categorization of
the reasons, then focus on solving one category of query refinement problem in
the context of XML keyword search, where what the user searches for does not exist
in the data. We refer to it as the MisMatch problem in this paper. Then we
propose a practical way to detect the MisMatch problem and generate helpful
suggestions to users. Our approach can be viewed as a post-processing job of
query evaluation, and has three main features: (1) it adopts both the suggested
queries and their sample results as the output to the user, helping the user judge
whether the MisMatch problem is solved without consuming all query results; (2)
it is portable in the sense that it can work with any LCA-based matching
semantics and orthogonal to the choice of result retrieval method adopted; (3)
it is lightweight in the way that it occupies a very small proportion of the
whole query evaluation time. Extensive experiments on three real datasets
verify the effectiveness, efficiency and scalability of our approach. An online
XML keyword search engine called XClear that embeds the MisMatch problem
detector and suggester has been built.
| [
{
"version": "v1",
"created": "Sun, 12 Aug 2012 18:51:23 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Nov 2012 03:09:15 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Nov 2012 07:34:13 GMT"
}
] | 2012-11-08T00:00:00 | [
[
"Zeng",
"Yong",
""
],
[
"Bao",
"Zhifeng",
""
],
[
"Li",
"Guoliang",
""
],
[
"Ling",
"Tok Wang",
""
],
[
"Lu",
"Jiaheng",
""
]
] | TITLE: Breaking Out The XML MisMatch Trap
ABSTRACT: In keyword search, when a user cannot get what she wants, query refinement is
needed, and the reasons can vary. We first give a thorough categorization of
the reasons, then focus on solving one category of query refinement problem in
the context of XML keyword search, where what the user searches for does not exist
in the data. We refer to it as the MisMatch problem in this paper. Then we
propose a practical way to detect the MisMatch problem and generate helpful
suggestions to users. Our approach can be viewed as a post-processing job of
query evaluation, and has three main features: (1) it adopts both the suggested
queries and their sample results as the output to the user, helping the user judge
whether the MisMatch problem is solved without consuming all query results; (2)
it is portable in the sense that it can work with any LCA-based matching
semantics and orthogonal to the choice of result retrieval method adopted; (3)
it is lightweight in the way that it occupies a very small proportion of the
whole query evaluation time. Extensive experiments on three real datasets
verify the effectiveness, efficiency and scalability of our approach. An online
XML keyword search engine called XClear that embeds the MisMatch problem
detector and suggester has been built.
|
1204.2169 | Hang-Hyun Jo | Hang-Hyun Jo, M\'arton Karsai, Juuso Karikoski, Kimmo Kaski | Spatiotemporal correlations of handset-based service usages | 11 pages, 15 figures | EPJ Data Science 1, 10 (2012) | 10.1140/epjds10 | null | physics.soc-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study spatiotemporal correlations and temporal diversities of
handset-based service usages by analyzing a dataset that includes detailed
information about locations and service usages of 124 users over 16 months. By
constructing the spatiotemporal trajectories of the users we detect several
meaningful places or contexts for each one of them and show how the context
affects the service usage patterns. We find that temporal patterns of service
usages are bound to the typical weekly cycles of humans, yet they show maximal
activities at different times. We first discuss their temporal correlations and
then investigate the time-ordering behavior of communication services like
calls being followed by the non-communication services like applications. We
also find that the behavioral overlap network based on the clustering of
temporal patterns is comparable to the communication network of users. Our
approach provides a useful framework for handset-based data analysis and helps
us to understand the complexities of information and communications technology
enabled human behavior.
| [
{
"version": "v1",
"created": "Tue, 10 Apr 2012 14:42:56 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jul 2012 15:06:23 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Sep 2012 12:16:40 GMT"
}
] | 2012-11-07T00:00:00 | [
[
"Jo",
"Hang-Hyun",
""
],
[
"Karsai",
"Márton",
""
],
[
"Karikoski",
"Juuso",
""
],
[
"Kaski",
"Kimmo",
""
]
] | TITLE: Spatiotemporal correlations of handset-based service usages
ABSTRACT: We study spatiotemporal correlations and temporal diversities of
handset-based service usages by analyzing a dataset that includes detailed
information about locations and service usages of 124 users over 16 months. By
constructing the spatiotemporal trajectories of the users we detect several
meaningful places or contexts for each one of them and show how the context
affects the service usage patterns. We find that temporal patterns of service
usages are bound to the typical weekly cycles of humans, yet they show maximal
activities at different times. We first discuss their temporal correlations and
then investigate the time-ordering behavior of communication services like
calls being followed by the non-communication services like applications. We
also find that the behavioral overlap network based on the clustering of
temporal patterns is comparable to the communication network of users. Our
approach provides a useful framework for handset-based data analysis and helps
us to understand the complexities of information and communications technology
enabled human behavior.
|
1209.0911 | Junming Huang Junming Huang | Junming Huang, Xue-Qi Cheng, Hua-Wei Shen, Xiaoming Sun, Tao Zhou,
Xiaolong Jin | Conquering the rating bound problem in neighborhood-based collaborative
filtering: a function recovery approach | 10 pages, 4 figures | null | null | null | cs.IR cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an important tool for information filtering in the era of socialized web,
recommender systems have witnessed rapid development in the last decade.
Benefiting from their better interpretability, neighborhood-based collaborative
filtering techniques, such as the item-based collaborative filtering adopted by
Amazon, have gained great success in many practical recommender systems.
However, the neighborhood-based collaborative filtering method suffers from the
rating bound problem, i.e., the rating on a target item that this method
estimates is bounded by the observed ratings of all its neighboring items.
Therefore, it cannot accurately estimate the unobserved rating on a target
item, if its ground truth rating is actually higher (lower) than the highest
(lowest) rating over all items in its neighborhood. In this paper, we address
this problem by formalizing rating estimation as a task of recovering a scalar
rating function. With a linearity assumption, we infer all the ratings by
optimizing the low-order norm, e.g., the $l_1/2$-norm, of the second derivative
of the target scalar function, while keeping its observed ratings unchanged.
Experimental results on three real datasets, namely Douban, Goodreads and
MovieLens, demonstrate that the proposed approach can well overcome the rating
bound problem. Particularly, it can significantly improve the accuracy of
rating estimation by 37% compared with conventional neighborhood-based methods.
| [
{
"version": "v1",
"created": "Wed, 5 Sep 2012 09:55:27 GMT"
}
] | 2012-11-07T00:00:00 | [
[
"Huang",
"Junming",
""
],
[
"Cheng",
"Xue-Qi",
""
],
[
"Shen",
"Hua-Wei",
""
],
[
"Sun",
"Xiaoming",
""
],
[
"Zhou",
"Tao",
""
],
[
"Jin",
"Xiaolong",
""
]
] | TITLE: Conquering the rating bound problem in neighborhood-based collaborative
filtering: a function recovery approach
ABSTRACT: As an important tool for information filtering in the era of socialized web,
recommender systems have witnessed rapid development in the last decade.
Benefiting from their better interpretability, neighborhood-based collaborative
filtering techniques, such as the item-based collaborative filtering adopted by
Amazon, have gained great success in many practical recommender systems.
However, the neighborhood-based collaborative filtering method suffers from the
rating bound problem, i.e., the rating on a target item that this method
estimates is bounded by the observed ratings of all its neighboring items.
Therefore, it cannot accurately estimate the unobserved rating on a target
item, if its ground truth rating is actually higher (lower) than the highest
(lowest) rating over all items in its neighborhood. In this paper, we address
this problem by formalizing rating estimation as a task of recovering a scalar
rating function. With a linearity assumption, we infer all the ratings by
optimizing the low-order norm, e.g., the $l_1/2$-norm, of the second derivative
of the target scalar function, while keeping its observed ratings unchanged.
Experimental results on three real datasets, namely Douban, Goodreads and
MovieLens, demonstrate that the proposed approach can well overcome the rating
bound problem. Particularly, it can significantly improve the accuracy of
rating estimation by 37% compared with conventional neighborhood-based methods.
|
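The rating bound problem described in the record above follows directly from the form of the neighborhood-based estimator; the tiny sketch below (illustrative numbers only, not the paper's method) shows why a similarity-weighted average can never leave the range of its neighbors' observed ratings.

```python
# Classic item-based kNN prediction: a similarity-weighted average of the
# observed ratings on neighboring items, hence bounded by their min and max.
import numpy as np

def knn_item_estimate(neighbor_ratings, similarities):
    """Item-based prediction: similarity-weighted average of neighbor ratings."""
    neighbor_ratings = np.asarray(neighbor_ratings, dtype=float)
    similarities = np.asarray(similarities, dtype=float)
    return similarities @ neighbor_ratings / similarities.sum()

ratings = [3, 4, 4]                 # observed ratings on the neighboring items
sims    = [0.9, 0.8, 0.5]           # similarities to the target item

pred = knn_item_estimate(ratings, sims)
print(round(pred, 2))               # ~3.59, necessarily within [3, 4]
# Even if the user's true rating for the target item were 5, no choice of
# non-negative weights could push this estimate above max(ratings) = 4.
```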
1211.1136 | Malathi Subramanian | S.Malathi and S.Sridhar | Estimation of Effort in Software Cost Analysis for Heterogenous Dataset
using Fuzzy Analogy | 5 pages,5 figures | Journal of IEEE Transactions on Software Engineering,2010 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the significant objectives of software engineering community is to use
effective and useful models for precise calculation of effort in software cost
estimation. Existing techniques, including the commonly used analogy method,
cannot efficiently handle datasets that contain categorical variables.
Moreover, the project attributes used in cost estimation are measured in terms
of linguistic values whose imprecision leads to confusion and ambiguity when
explaining the process. There is no definite set of models that can
efficiently handle datasets with categorical variables and withstand the
major hindrances, such as imprecision and uncertainty, without resorting to the
classical interval and numeric value approaches. In this paper, a new approach
based on fuzzy logic, linguistic quantifiers and analogy-based reasoning is
proposed to enhance the performance of effort estimation in software
projects dealing with numerical and categorical data. The performance of the
proposed method shows that the results are realistically validated on a
historical heterogeneous dataset. The results were analyzed using the Mean
Magnitude of Relative Error (MMRE) and indicate that the proposed method
can produce more explicable results than the methods currently in vogue.
| [
{
"version": "v1",
"created": "Tue, 6 Nov 2012 08:15:30 GMT"
}
] | 2012-11-07T00:00:00 | [
[
"Malathi",
"S.",
""
],
[
"Sridhar",
"S.",
""
]
] | TITLE: Estimation of Effort in Software Cost Analysis for Heterogenous Dataset
using Fuzzy Analogy
ABSTRACT: One of the significant objectives of software engineering community is to use
effective and useful models for precise calculation of effort in software cost
estimation. Existing techniques, including the commonly used analogy method,
cannot efficiently handle datasets that contain categorical variables.
Moreover, the project attributes used in cost estimation are measured in terms
of linguistic values whose imprecision leads to confusion and ambiguity when
explaining the process. There is no definite set of models that can
efficiently handle datasets with categorical variables and withstand the
major hindrances, such as imprecision and uncertainty, without resorting to the
classical interval and numeric value approaches. In this paper, a new approach
based on fuzzy logic, linguistic quantifiers and analogy-based reasoning is
proposed to enhance the performance of effort estimation in software
projects dealing with numerical and categorical data. The performance of the
proposed method shows that the results are realistically validated on a
historical heterogeneous dataset. The results were analyzed using the Mean
Magnitude of Relative Error (MMRE) and indicate that the proposed method
can produce more explicable results than the methods currently in vogue.
|
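Since the abstract above reports results in terms of MMRE, a short sketch of how the Mean Magnitude of Relative Error is commonly computed may be helpful; the toy effort values are invented and are not from the paper's dataset.

```python
import numpy as np

def mmre(actual, estimated):
    """Mean Magnitude of Relative Error: mean of |actual - estimated| / actual."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return float(np.mean(np.abs(actual - estimated) / actual))

# Toy effort values (person-months); purely illustrative.
print(mmre(actual=[120, 80, 200], estimated=[100, 90, 240]))
```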
1205.6326 | Iain Murray | Krzysztof Chalupka, Christopher K. I. Williams and Iain Murray | A Framework for Evaluating Approximation Methods for Gaussian Process
Regression | 19 pages, 4 figures | null | null | null | stat.ML cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian process (GP) predictors are an important component of many Bayesian
approaches to machine learning. However, even a straightforward implementation
of Gaussian process regression (GPR) requires O(n^2) space and O(n^3) time for
a dataset of n examples. Several approximation methods have been proposed, but
there is a lack of understanding of the relative merits of the different
approximations, and in what situations they are most useful. We recommend
assessing the quality of the predictions obtained as a function of the compute
time taken, and comparing to standard baselines (e.g., Subset of Data and
FITC). We empirically investigate four different approximation algorithms on
four different prediction problems, and make our code available to encourage
future comparisons.
| [
{
"version": "v1",
"created": "Tue, 29 May 2012 10:59:30 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Nov 2012 17:39:32 GMT"
}
] | 2012-11-06T00:00:00 | [
[
"Chalupka",
"Krzysztof",
""
],
[
"Williams",
"Christopher K. I.",
""
],
[
"Murray",
"Iain",
""
]
] | TITLE: A Framework for Evaluating Approximation Methods for Gaussian Process
Regression
ABSTRACT: Gaussian process (GP) predictors are an important component of many Bayesian
approaches to machine learning. However, even a straightforward implementation
of Gaussian process regression (GPR) requires O(n^2) space and O(n^3) time for
a dataset of n examples. Several approximation methods have been proposed, but
there is a lack of understanding of the relative merits of the different
approximations, and in what situations they are most useful. We recommend
assessing the quality of the predictions obtained as a function of the compute
time taken, and comparing to standard baselines (e.g., Subset of Data and
FITC). We empirically investigate four different approximation algorithms on
four different prediction problems, and make our code available to encourage
future comparisons.
|
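To make the O(n^2) space and O(n^3) time costs mentioned above concrete, here is a minimal numpy sketch of exact GP regression with an RBF kernel; the Cholesky factorisation of the n x n kernel matrix is the cubic step. The kernel hyperparameters and data are arbitrary, and this is not tied to the approximations compared in the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between row vectors of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xstar, noise=0.1):
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))   # O(n^2) memory
    L = np.linalg.cholesky(K)                           # O(n^3) time
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(Xstar, X) @ alpha                 # predictive mean

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(gp_predict(X, y, np.array([[0.0]])))
```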
1211.0498 | Rami Al-Rfou' | Rami Al-Rfou' | Detecting English Writing Styles For Non-native Speakers | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing writing styles of non-native speakers is a challenging task. In
this paper, we analyze the comments written in the discussion pages of the
English Wikipedia. Using learning algorithms, we are able to detect native
speakers' writing style with an accuracy of 74%. Given the diversity of the
English Wikipedia users and the large number of languages they speak, we
measure the similarities among their native languages by comparing the
influence they have on their English writing style. Our results show that
languages known to have the same origin and development path have similar
footprint on their speakers' English writing style. To enable further studies,
the dataset we extracted from Wikipedia will be made available publicly.
| [
{
"version": "v1",
"created": "Fri, 2 Nov 2012 17:37:06 GMT"
}
] | 2012-11-05T00:00:00 | [
[
"Al-Rfou'",
"Rami",
""
]
] | TITLE: Detecting English Writing Styles For Non-native Speakers
ABSTRACT: Analyzing writing styles of non-native speakers is a challenging task. In
this paper, we analyze the comments written in the discussion pages of the
English Wikipedia. Using learning algorithms, we are able to detect native
speakers' writing style with an accuracy of 74%. Given the diversity of the
English Wikipedia users and the large number of languages they speak, we
measure the similarities among their native languages by comparing the
influence they have on their English writing style. Our results show that
languages known to have the same origin and development path have similar
footprint on their speakers' English writing style. To enable further studies,
the dataset we extracted from Wikipedia will be made available publicly.
|
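The abstract above does not specify the learning algorithm, so the following is only a generic text-classification sketch of the kind of pipeline that could separate native from non-native writing; the tiny comments and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy comments and labels (1 = native speaker, 0 = non-native).
comments = ["I reckon this edit should stay.", "Please to revert my change soon."]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)
print(clf.predict(["Kindly do the needful and merge."]))
```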
1211.0210 | Sundararajan Sellamanickam | Sathiya Keerthi Selvaraj, Sundararajan Sellamanickam, Shirish Shevade | Extension of TSVM to Multi-Class and Hierarchical Text Classification
Problems With General Losses | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transductive SVM (TSVM) is a well known semi-supervised large margin learning
method for binary text classification. In this paper we extend this method to
multi-class and hierarchical classification problems. We point out that the
determination of labels of unlabeled examples with fixed classifier weights is
a linear programming problem. We devise an efficient technique for solving it.
The method is applicable to general loss functions. We demonstrate the value of
the new method using large margin loss on a number of multi-class and
hierarchical classification datasets. For maxent loss we show empirically that
our method is better than expectation regularization/constraint and posterior
regularization methods, and competitive with the version of entropy
regularization method which uses label constraints.
| [
{
"version": "v1",
"created": "Thu, 1 Nov 2012 15:52:11 GMT"
}
] | 2012-11-02T00:00:00 | [
[
"Selvaraj",
"Sathiya Keerthi",
""
],
[
"Sellamanickam",
"Sundararajan",
""
],
[
"Shevade",
"Shirish",
""
]
] | TITLE: Extension of TSVM to Multi-Class and Hierarchical Text Classification
Problems With General Losses
ABSTRACT: Transductive SVM (TSVM) is a well known semi-supervised large margin learning
method for binary text classification. In this paper we extend this method to
multi-class and hierarchical classification problems. We point out that the
determination of labels of unlabeled examples with fixed classifier weights is
a linear programming problem. We devise an efficient technique for solving it.
The method is applicable to general loss functions. We demonstrate the value of
the new method using large margin loss on a number of multi-class and
hierarchical classification datasets. For maxent loss we show empirically that
our method is better than expectation regularization/constraint and posterior
regularization methods, and competitive with the version of entropy
regularization method which uses label constraints.
|
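The abstract's key observation is that, with the classifier weights fixed, choosing labels for the unlabeled examples is a linear program. Below is a hedged sketch of one such LP, a transportation-style relaxation with fixed per-class counts; the cost matrix, counts, and formulation are my assumptions for illustration and are not necessarily the authors' exact construction.

```python
import numpy as np
from scipy.optimize import linprog

# cost[i, j]: loss of assigning unlabeled example i to class j under the fixed classifier.
rng = np.random.default_rng(0)
n, k = 6, 3
cost = rng.random((n, k))
class_counts = np.array([2, 2, 2])  # assumed target number of examples per class

# Variables y[i, j] >= 0, flattened row-major. Constraints: each example distributes
# one unit of label mass; each class receives its target count.
A_eq, b_eq = [], []
for i in range(n):                      # sum_j y[i, j] = 1
    row = np.zeros(n * k); row[i * k:(i + 1) * k] = 1; A_eq.append(row); b_eq.append(1)
for j in range(k):                      # sum_i y[i, j] = class_counts[j]
    col = np.zeros(n * k); col[j::k] = 1; A_eq.append(col); b_eq.append(class_counts[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
assignment = res.x.reshape(n, k).argmax(axis=1)
print(assignment)
```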
1211.0224 | Lorena Etcheverry | Lorena Etcheverry and Alejandro A. Vaisman | Views over RDF Datasets: A State-of-the-Art and Open Challenges | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Views on RDF datasets have been discussed in several works, nevertheless
there is no consensus on their definition nor the requirements they should
fulfill. In traditional data management systems, views have proved to be useful
in different application scenarios such as data integration, query answering,
data security, and query modularization.
In this work we have reviewed existing work on views over RDF datasets, and
discussed the application of existing view definition mechanisms to four
scenarios in which views have proved to be useful in traditional (relational)
data management systems. To give a framework for the discussion, we provide a
definition of views over RDF datasets, an issue on which there is no
consensus so far. We finally chose the three proposals closest to this
definition, and analyzed them with respect to four selected goals.
| [
{
"version": "v1",
"created": "Thu, 1 Nov 2012 17:00:27 GMT"
}
] | 2012-11-02T00:00:00 | [
[
"Etcheverry",
"Lorena",
""
],
[
"Vaisman",
"Alejandro A.",
""
]
] | TITLE: Views over RDF Datasets: A State-of-the-Art and Open Challenges
ABSTRACT: Views on RDF datasets have been discussed in several works, nevertheless
there is no consensus on their definition nor the requirements they should
fulfill. In traditional data management systems, views have proved to be useful
in different application scenarios such as data integration, query answering,
data security, and query modularization.
In this work we have reviewed existing work on views over RDF datasets, and
discussed the application of existing view definition mechanisms to four
scenarios in which views have proved to be useful in traditional (relational)
data management systems. To give a framework for the discussion, we provide a
definition of views over RDF datasets, an issue on which there is no
consensus so far. We finally chose the three proposals closest to this
definition, and analyzed them with respect to four selected goals.
|
1210.3926 | Julian McAuley | Julian McAuley, Jure Leskovec, Dan Jurafsky | Learning Attitudes and Attributes from Multi-Aspect Reviews | 11 pages, 6 figures, extended version of our ICDM 2012 submission | null | null | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The majority of online reviews consist of plain-text feedback together with a
single numeric score. However, there are multiple dimensions to products and
opinions, and understanding the `aspects' that contribute to users' ratings may
help us to better understand their individual preferences. For example, a
user's impression of an audiobook presumably depends on aspects such as the
story and the narrator, and knowing their opinions on these aspects may help us
to recommend better products. In this paper, we build models for rating systems
in which such dimensions are explicit, in the sense that users leave separate
ratings for each aspect of a product. By introducing new corpora consisting of
five million reviews, rated with between three and six aspects, we evaluate our
models on three prediction tasks: First, we use our model to uncover which
parts of a review discuss which of the rated aspects. Second, we use our model
to summarize reviews, which for us means finding the sentences that best
explain a user's rating. Finally, since aspect ratings are optional in many of
the datasets we consider, we use our model to recover those ratings that are
missing from a user's evaluation. Our model matches state-of-the-art approaches
on existing small-scale datasets, while scaling to the real-world datasets we
introduce. Moreover, our model is able to `disentangle' content and sentiment
words: we automatically learn content words that are indicative of a particular
aspect as well as the aspect-specific sentiment words that are indicative of a
particular rating.
| [
{
"version": "v1",
"created": "Mon, 15 Oct 2012 07:36:57 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Oct 2012 16:14:35 GMT"
}
] | 2012-11-01T00:00:00 | [
[
"McAuley",
"Julian",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Jurafsky",
"Dan",
""
]
] | TITLE: Learning Attitudes and Attributes from Multi-Aspect Reviews
ABSTRACT: The majority of online reviews consist of plain-text feedback together with a
single numeric score. However, there are multiple dimensions to products and
opinions, and understanding the `aspects' that contribute to users' ratings may
help us to better understand their individual preferences. For example, a
user's impression of an audiobook presumably depends on aspects such as the
story and the narrator, and knowing their opinions on these aspects may help us
to recommend better products. In this paper, we build models for rating systems
in which such dimensions are explicit, in the sense that users leave separate
ratings for each aspect of a product. By introducing new corpora consisting of
five million reviews, rated with between three and six aspects, we evaluate our
models on three prediction tasks: First, we use our model to uncover which
parts of a review discuss which of the rated aspects. Second, we use our model
to summarize reviews, which for us means finding the sentences that best
explain a user's rating. Finally, since aspect ratings are optional in many of
the datasets we consider, we use our model to recover those ratings that are
missing from a user's evaluation. Our model matches state-of-the-art approaches
on existing small-scale datasets, while scaling to the real-world datasets we
introduce. Moreover, our model is able to `disentangle' content and sentiment
words: we automatically learn content words that are indicative of a particular
aspect as well as the aspect-specific sentiment words that are indicative of a
particular rating.
|
1210.8353 | Alex Susemihl | Chris H\"ausler, Alex Susemihl | Temporal Autoencoding Restricted Boltzmann Machine | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much work has been done refining and characterizing the receptive fields
learned by deep learning algorithms. A lot of this work has focused on the
development of Gabor-like filters learned when enforcing sparsity constraints
on a natural image dataset. Little work however has investigated how these
filters might expand to the temporal domain, namely through training on natural
movies. Here we investigate exactly this problem in established temporal deep
learning algorithms as well as a new learning paradigm suggested here, the
Temporal Autoencoding Restricted Boltzmann Machine (TARBM).
| [
{
"version": "v1",
"created": "Wed, 31 Oct 2012 14:55:50 GMT"
}
] | 2012-11-01T00:00:00 | [
[
"Häusler",
"Chris",
""
],
[
"Susemihl",
"Alex",
""
]
] | TITLE: Temporal Autoencoding Restricted Boltzmann Machine
ABSTRACT: Much work has been done refining and characterizing the receptive fields
learned by deep learning algorithms. A lot of this work has focused on the
development of Gabor-like filters learned when enforcing sparsity constraints
on a natural image dataset. Little work however has investigated how these
filters might expand to the temporal domain, namely through training on natural
movies. Here we investigate exactly this problem in established temporal deep
learning algorithms as well as a new learning paradigm suggested here, the
Temporal Autoencoding Restricted Boltzmann Machine (TARBM).
|
1210.7657 | Antonio Giuliano Zippo Dr. | Antonio Giuliano Zippo | Text Classification with Compression Algorithms | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work concerns a comparison of SVM kernel methods in text categorization
tasks. In particular, I define a kernel function that estimates the similarity
between two objects from their compressed lengths. In fact, compression
algorithms can detect arbitrarily long dependencies within text strings.
Text vectorization loses information during feature extraction and is highly
sensitive to the textual language. Compression-based methods, by contrast, are
language independent and require no text preprocessing. Moreover, the accuracy
computed on the datasets (Web-KB, 20ng and Reuters-21578) is, in some cases,
greater than that of Gaussian, linear and polynomial kernels. The method's
limits are the computational time complexity of the Gram matrix and its very
poor performance on non-textual datasets.
| [
{
"version": "v1",
"created": "Mon, 29 Oct 2012 13:30:27 GMT"
}
] | 2012-10-30T00:00:00 | [
[
"Zippo",
"Antonio Giuliano",
""
]
] | TITLE: Text Classification with Compression Algorithms
ABSTRACT: This work concerns a comparison of SVM kernel methods in text categorization
tasks. In particular, I define a kernel function that estimates the similarity
between two objects from their compressed lengths. In fact, compression
algorithms can detect arbitrarily long dependencies within text strings.
Text vectorization loses information during feature extraction and is highly
sensitive to the textual language. Compression-based methods, by contrast, are
language independent and require no text preprocessing. Moreover, the accuracy
computed on the datasets (Web-KB, 20ng and Reuters-21578) is, in some cases,
greater than that of Gaussian, linear and polynomial kernels. The method's
limits are the computational time complexity of the Gram matrix and its very
poor performance on non-textual datasets.
|
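The compression-based similarity described above is in the spirit of the normalized compression distance; the zlib-based sketch below computes NCD specifically, which may differ from the kernel actually defined in the paper.

```python
import zlib

def clen(s: bytes) -> int:
    # Length of the zlib-compressed representation at maximum compression level.
    return len(zlib.compress(s, 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings (smaller = more similar)."""
    cx, cy, cxy = clen(x.encode()), clen(y.encode()), clen((x + y).encode())
    return (cxy - min(cx, cy)) / max(cx, cy)

a = "the stock market fell sharply amid recession fears"
b = "shares tumbled as investors feared a recession"
c = "the team won the championship after extra time"
print(ncd(a, b), ncd(a, c))  # a is expected to be closer to b than to c
```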
1210.7191 | Robert Dunn | Robert J. H. Dunn (1), Kate M. Willett (1), Peter W. Thorne (2,3),
Emma V. Woolley, Imke Durre (3), Aiguo Dai (4), David E. Parker (1), Russ E.
Vose (3) ((1) Met Office Hadley Centre, Exeter, UK, (2) CICS-NC, Asheville,
NC, (3) NOAA NCDC, Asheville, NC, (4) NCAR, Boulder, CO) | HadISD: a quality-controlled global synoptic report database for
selected variables at long-term stations from 1973--2011 | Published in Climate of the Past, www.clim-past.net/8/1649/2012/. 31
pages, 23 figures, 9 pages. For data see
http://www.metoffice.gov.uk/hadobs/hadisd | Clim. Past, 8, 1649-1679 (2012) | 10.5194/cp-8-1649-2012 | null | physics.ao-ph | http://creativecommons.org/licenses/by/3.0/ | [Abridged] This paper describes the creation of HadISD: an automatically
quality-controlled synoptic resolution dataset of temperature, dewpoint
temperature, sea-level pressure, wind speed, wind direction and cloud cover
from global weather stations for 1973--2011. The full dataset consists of over
6000 stations, with 3427 long-term stations deemed to have sufficient sampling
and quality for climate applications requiring sub-daily resolution. As with
other surface datasets, coverage is heavily skewed towards Northern Hemisphere
mid-latitudes.
The dataset is constructed from a large pre-existing ASCII flatfile data bank
that represents over a decade of substantial effort at data retrieval,
reformatting and provision. These raw data have had varying levels of quality
control applied to them by individual data providers. The work proceeded in
several steps: merging stations with multiple reporting identifiers;
reformatting to netCDF; quality control; and then filtering to form a final
dataset. Particular attention has been paid to maintaining true extreme values
where possible within an automated, objective process. Detailed validation has
been performed on a subset of global stations and also on UK data using known
extreme events to help finalise the QC tests. Further validation was performed
on a selection of extreme events world-wide (Hurricane Katrina in 2005, the
cold snap in Alaska in 1989 and heat waves in SE Australia in 2009). Although
the filtering has removed the poorest station records, no attempt has been made
to homogenise the data thus far. Hence non-climatic, time-varying errors may
still exist in many of the individual station records and care is needed in
inferring long-term trends from these data.
A version-control system has been constructed for this dataset to allow for
the clear documentation of any updates and corrections in the future.
| [
{
"version": "v1",
"created": "Fri, 26 Oct 2012 16:57:09 GMT"
}
] | 2012-10-29T00:00:00 | [
[
"Dunn",
"Robert J. H.",
""
],
[
"Willett",
"Kate M.",
""
],
[
"Thorne",
"Peter W.",
""
],
[
"Woolley",
"Emma V.",
""
],
[
"Durre",
"Imke",
""
],
[
"Dai",
"Aiguo",
""
],
[
"Parker",
"David E.",
""
],
[
"Vose",
"Russ E.",
""
]
] | TITLE: HadISD: a quality-controlled global synoptic report database for
selected variables at long-term stations from 1973--2011
ABSTRACT: [Abridged] This paper describes the creation of HadISD: an automatically
quality-controlled synoptic resolution dataset of temperature, dewpoint
temperature, sea-level pressure, wind speed, wind direction and cloud cover
from global weather stations for 1973--2011. The full dataset consists of over
6000 stations, with 3427 long-term stations deemed to have sufficient sampling
and quality for climate applications requiring sub-daily resolution. As with
other surface datasets, coverage is heavily skewed towards Northern Hemisphere
mid-latitudes.
The dataset is constructed from a large pre-existing ASCII flatfile data bank
that represents over a decade of substantial effort at data retrieval,
reformatting and provision. These raw data have had varying levels of quality
control applied to them by individual data providers. The work proceeded in
several steps: merging stations with multiple reporting identifiers;
reformatting to netCDF; quality control; and then filtering to form a final
dataset. Particular attention has been paid to maintaining true extreme values
where possible within an automated, objective process. Detailed validation has
been performed on a subset of global stations and also on UK data using known
extreme events to help finalise the QC tests. Further validation was performed
on a selection of extreme events world-wide (Hurricane Katrina in 2005, the
cold snap in Alaska in 1989 and heat waves in SE Australia in 2009). Although
the filtering has removed the poorest station records, no attempt has been made
to homogenise the data thus far. Hence non-climatic, time-varying errors may
still exist in many of the individual station records and care is needed in
inferring long-term trends from these data.
A version-control system has been constructed for this dataset to allow for
the clear documentation of any updates and corrections in the future.
|
1210.6891 | Clifton Phua | Clifton Phua, Hong Cao, Jo\~ao B\'artolo Gomes, Minh Nhut Nguyen | Predicting Near-Future Churners and Win-Backs in the Telecommunications
Industry | null | null | null | null | cs.CE cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this work, we presented the strategies and techniques that we have
developed for predicting the near-future churners and win-backs for a telecom
company. On a large-scale and real-world database containing customer profiles
and some transaction data from a telecom company, we first analyzed the data
schema, developed feature computation strategies and then extracted a large set
of relevant features that can be associated with the customer churning and
returning behaviors. Our features include both the original driver factors as
well as some derived features. We evaluated our features on the
imbalance-corrected (i.e., under-sampled) dataset and compared a large number
of existing machine learning tools, especially decision tree-based classifiers,
for predicting the churners and win-backs. In general, we find that the
RandomForest and SimpleCart learning algorithms perform well and tend to
provide highly competitive prediction performance. Among the top-15 driver
factors that signal the churn behavior, we find that service utilization, e.g.,
the last two months' download and upload volume, the last three months' average
upload and download, and the payment-related factors are the most indicative
features for predicting if churn will happen soon. Such features can
collectively reveal discrepancies between the service plans, payments and the
dynamically changing utilization needs of the customers. Our proposed features
and their computational strategy exhibit reasonable precision in predicting
churn behavior in the near future.
| [
{
"version": "v1",
"created": "Wed, 24 Oct 2012 05:56:45 GMT"
}
] | 2012-10-26T00:00:00 | [
[
"Phua",
"Clifton",
""
],
[
"Cao",
"Hong",
""
],
[
"Gomes",
"João Bártolo",
""
],
[
"Nguyen",
"Minh Nhut",
""
]
] | TITLE: Predicting Near-Future Churners and Win-Backs in the Telecommunications
Industry
ABSTRACT: In this work, we presented the strategies and techniques that we have
developed for predicting the near-future churners and win-backs for a telecom
company. On a large-scale and real-world database containing customer profiles
and some transaction data from a telecom company, we first analyzed the data
schema, developed feature computation strategies and then extracted a large set
of relevant features that can be associated with the customer churning and
returning behaviors. Our features include both the original driver factors as
well as some derived features. We evaluated our features on the
imbalance-corrected (i.e., under-sampled) dataset and compared a large number
of existing machine learning tools, especially decision tree-based classifiers,
for predicting the churners and win-backs. In general, we find that the
RandomForest and SimpleCart learning algorithms perform well and tend to
provide highly competitive prediction performance. Among the top-15 driver
factors that signal the churn behavior, we find that service utilization, e.g.,
the last two months' download and upload volume, the last three months' average
upload and download, and the payment-related factors are the most indicative
features for predicting if churn will happen soon. Such features can
collectively reveal discrepancies between the service plans, payments and the
dynamically changing utilization needs of the customers. Our proposed features
and their computational strategy exhibit reasonable precision in predicting
churn behavior in the near future.
|
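A minimal scikit-learn sketch of the kind of pipeline described above, i.e., undersampling the majority class and fitting a RandomForest; the synthetic features and labels stand in for the proprietary telecom data and the numbers carry no meaning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10))        # stand-in for usage/payment features
y = (rng.random(5000) < 0.05).astype(int)  # ~5% churners: imbalanced labels

# Simple random undersampling of the majority class to balance the training set.
churn_idx = np.where(y == 1)[0]
stay_idx = rng.choice(np.where(y == 0)[0], size=len(churn_idx), replace=False)
idx = np.concatenate([churn_idx, stay_idx])

X_tr, X_te, y_tr, y_te = train_test_split(X[idx], y[idx], test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(precision_score(y_te, clf.predict(X_te)))
```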
1210.6497 | Zhipeng Luo | Daifeng Li, Ying Ding, Xin Shuai, Golden Guo-zheng Sun, Jie Tang,
Zhipeng Luo, Jingwei Zhang and Guo Zhang | Topic-Level Opinion Influence Model(TOIM): An Investigation Using
Tencent Micro-Blogging | PLOS ONE Manuscript Draft | null | null | null | cs.SI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining user opinion from Micro-Blogging has been extensively studied on the
most popular social networking sites such as Twitter and Facebook in the U.S.,
but few studies have been done on Micro-Blogging websites in other countries
(e.g. China). In this paper, we analyze the social opinion influence on
Tencent, one of the largest Micro-Blogging websites in China, endeavoring to
unveil the behavior patterns of Chinese Micro-Blogging users. This paper
proposes a Topic-Level Opinion Influence Model (TOIM) that simultaneously
incorporates topic factor and social direct influence in a unified
probabilistic framework. Based on TOIM, two topic level opinion influence
propagation and aggregation algorithms are developed to consider the indirect
influence: CP (Conservative Propagation) and NCP (None Conservative
Propagation). Users' historical social interaction records are leveraged by
TOIM to construct their progressive opinions and neighbors' opinion influence
through a statistical learning process, which can be further utilized to
predict users' future opinions on some specific topics. To evaluate and test
this proposed model, an experiment was designed and a sub-dataset from Tencent
Micro-Blogging was used. The experimental results show that TOIM outperforms
baseline methods on predicting users' opinion. The applications of CP and NCP
have no significant differences and could significantly improve recall and
F1-measure of TOIM.
| [
{
"version": "v1",
"created": "Wed, 24 Oct 2012 11:51:21 GMT"
}
] | 2012-10-25T00:00:00 | [
[
"Li",
"Daifeng",
""
],
[
"Ding",
"Ying",
""
],
[
"Shuai",
"Xin",
""
],
[
"Sun",
"Golden Guo-zheng",
""
],
[
"Tang",
"Jie",
""
],
[
"Luo",
"Zhipeng",
""
],
[
"Zhang",
"Jingwei",
""
],
[
"Zhang",
"Guo",
""
]
] | TITLE: Topic-Level Opinion Influence Model(TOIM): An Investigation Using
Tencent Micro-Blogging
ABSTRACT: Mining user opinion from Micro-Blogging has been extensively studied on the
most popular social networking sites such as Twitter and Facebook in the U.S.,
but few studies have been done on Micro-Blogging websites in other countries
(e.g. China). In this paper, we analyze the social opinion influence on
Tencent, one of the largest Micro-Blogging websites in China, endeavoring to
unveil the behavior patterns of Chinese Micro-Blogging users. This paper
proposes a Topic-Level Opinion Influence Model (TOIM) that simultaneously
incorporates topic factor and social direct influence in a unified
probabilistic framework. Based on TOIM, two topic level opinion influence
propagation and aggregation algorithms are developed to consider the indirect
influence: CP (Conservative Propagation) and NCP (None Conservative
Propagation). Users' historical social interaction records are leveraged by
TOIM to construct their progressive opinions and neighbors' opinion influence
through a statistical learning process, which can be further utilized to
predict users' future opinions on some specific topics. To evaluate and test
this proposed model, an experiment was designed and a sub-dataset from Tencent
Micro-Blogging was used. The experimental results show that TOIM outperforms
baseline methods on predicting users' opinion. The applications of CP and NCP
have no significant differences and could significantly improve recall and
F1-measure of TOIM.
|
1210.6122 | Dr Munaga HM Krishna Prasad | Hazarath Munaga and Venkata Jarugumalli | Performance Evaluation: Ball-Tree and KD-Tree in the Context of MST | 4 pages | http://link.springer.com/chapter/10.1007%2F978-3-642-32573-1_38?LI=true 2012 | 10.1007/978-3-642-32573-1_38 | null | cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, many algorithms have been proposed, and more are being developed, to
solve the Euclidean Minimum Spanning Tree (EMST) problem, as its applicability
is increasing across a wide range of fields involving spatial or
spatio-temporal data, e.g., astronomy, which deals with millions of spatial
data points. To solve this problem, we present a technique that adopts the
dual-tree algorithm for finding the EMST efficiently, and we experiment on a
variety of real and synthetic datasets. This paper reports the experimental
observations and the efficiency of the dual-tree framework, in the context of
the kd-tree and the ball tree, on spatial datasets of different dimensions.
| [
{
"version": "v1",
"created": "Tue, 23 Oct 2012 04:09:30 GMT"
}
] | 2012-10-24T00:00:00 | [
[
"Munaga",
"Hazarath",
""
],
[
"Jarugumalli",
"Venkata",
""
]
] | TITLE: Performance Evaluation: Ball-Tree and KD-Tree in the Context of MST
ABSTRACT: Nowadays, many algorithms have been proposed, and more are being developed, to
solve the Euclidean Minimum Spanning Tree (EMST) problem, as its applicability
is increasing across a wide range of fields involving spatial or
spatio-temporal data, e.g., astronomy, which deals with millions of spatial
data points. To solve this problem, we present a technique that adopts the
dual-tree algorithm for finding the EMST efficiently, and we experiment on a
variety of real and synthetic datasets. This paper reports the experimental
observations and the efficiency of the dual-tree framework, in the context of
the kd-tree and the ball tree, on spatial datasets of different dimensions.
|
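As a rough illustration of the data structures compared above, the sketch below builds a k-nearest-neighbour graph with scikit-learn's KDTree (BallTree exposes the same query interface) and extracts a spanning tree with SciPy. This approximates the EMST when k is large enough and is not the paper's dual-tree algorithm; the points and k are arbitrary.

```python
import numpy as np
from sklearn.neighbors import KDTree  # BallTree offers the same interface
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
points = rng.random((500, 3))
k = 10

tree = KDTree(points)
dist, ind = tree.query(points, k=k + 1)   # first neighbour of each point is itself

# Build a sparse k-NN graph and take its minimum spanning tree.
rows = np.repeat(np.arange(len(points)), k)
cols = ind[:, 1:].ravel()
graph = csr_matrix((dist[:, 1:].ravel(), (rows, cols)), shape=(len(points),) * 2)
mst = minimum_spanning_tree(graph)
print(mst.nnz, mst.sum())                 # number of edges and total weight
```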
1210.5873 | Ayodeji Akinduko Mr | A. A. Akinduko and E. M. Mirkes | Initialization of Self-Organizing Maps: Principal Components Versus
Random Initialization. A Case Study | 18 pages, 6 figures | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/3.0/ | The performance of the Self-Organizing Map (SOM) algorithm is dependent on
the initial weights of the map. The different initialization methods can
broadly be classified into random and data analysis based initialization
approach. In this paper, the performance of random initialization (RI) approach
is compared to that of principal component initialization (PCI) in which the
initial map weights are chosen from the space of the principal component.
Performance is evaluated by the fraction of variance unexplained (FVU).
Datasets were classified into quasi-linear and non-linear and it was observed
that RI performed better for non-linear datasets; however the performance of
PCI approach remains inconclusive for quasi-linear datasets.
| [
{
"version": "v1",
"created": "Mon, 22 Oct 2012 11:17:31 GMT"
}
] | 2012-10-23T00:00:00 | [
[
"Akinduko",
"A. A.",
""
],
[
"Mirkes",
"E. M.",
""
]
] | TITLE: Initialization of Self-Organizing Maps: Principal Components Versus
Random Initialization. A Case Study
ABSTRACT: The performance of the Self-Organizing Map (SOM) algorithm is dependent on
the initial weights of the map. The different initialization methods can
broadly be classified into random and data analysis based initialization
approach. In this paper, the performance of random initialization (RI) approach
is compared to that of principal component initialization (PCI) in which the
initial map weights are chosen from the space of the principal component.
Performance is evaluated by the fraction of variance unexplained (FVU).
Datasets were classified into quasi-linear and non-linear and it was observed
that RI performed better for non-linear datasets; however the performance of
PCI approach remains inconclusive for quasi-linear datasets.
|
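Since the comparison above is scored by the fraction of variance unexplained, here is a small numpy sketch of FVU for a fitted map, together with one common variant of PCA-based initialization (spreading the initial weights over the plane of the first two principal components). The grid size, scaling, and data are illustrative assumptions.

```python
import numpy as np

def fvu(X, prototypes):
    """Fraction of variance unexplained: residual variance after mapping each
    point to its nearest prototype, divided by the total variance of X."""
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    residual = d2.min(axis=1).sum()
    total = ((X - X.mean(0)) ** 2).sum()
    return residual / total

def pca_init(X, grid_w, grid_h):
    """Principal-component initialization: place initial weights on a regular
    grid spanning the first two principal components (one PCI variant)."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    u = np.linspace(-1, 1, grid_w)[:, None, None] * Vt[0]
    v = np.linspace(-1, 1, grid_h)[None, :, None] * Vt[1]
    return (X.mean(0) + u + v).reshape(-1, X.shape[1])

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
W0 = pca_init(X, 6, 6)
print(fvu(X, W0))
```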
1210.4839 | Stephane Caron | Stephane Caron, Branislav Kveton, Marc Lelarge, Smriti Bhagat | Leveraging Side Observations in Stochastic Bandits | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-142-151 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers stochastic bandits with side observations, a model that
accounts for both the exploration/exploitation dilemma and relationships
between arms. In this setting, after pulling an arm i, the decision maker also
observes the rewards for some other actions related to i. We will see that this
model is suited to content recommendation in social networks, where users'
reactions may be endorsed or not by their friends. We provide efficient
algorithms based on upper confidence bounds (UCBs) to leverage this additional
information and derive new bounds improving on standard regret guarantees. We
also evaluate these policies in the context of movie recommendation in social
networks: experiments on real datasets show substantial learning rate speedups
ranging from 2.2x to 14x on dense networks.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:32:09 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Caron",
"Stephane",
""
],
[
"Kveton",
"Branislav",
""
],
[
"Lelarge",
"Marc",
""
],
[
"Bhagat",
"Smriti",
""
]
] | TITLE: Leveraging Side Observations in Stochastic Bandits
ABSTRACT: This paper considers stochastic bandits with side observations, a model that
accounts for both the exploration/exploitation dilemma and relationships
between arms. In this setting, after pulling an arm i, the decision maker also
observes the rewards for some other actions related to i. We will see that this
model is suited to content recommendation in social networks, where users'
reactions may be endorsed or not by their friends. We provide efficient
algorithms based on upper confidence bounds (UCBs) to leverage this additional
information and derive new bounds improving on standard regret guarantees. We
also evaluate these policies in the context of movie recommendation in social
networks: experiments on real datasets show substantial learning rate speedups
ranging from 2.2x to 14x on dense networks.
|
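The policies above build on upper confidence bounds; for reference, a bare-bones UCB1 loop on Bernoulli arms (without the side-observation sharing that is the paper's contribution) looks roughly like this.

```python
import math
import random

def ucb1(true_means, horizon=10000):
    k = len(true_means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # pull each arm once
        else:
            # Pick the arm with the highest empirical mean plus confidence bonus.
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if random.random() < true_means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
    return counts

random.seed(0)
print(ucb1([0.2, 0.5, 0.7]))   # most pulls should go to the best arm
```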
1210.4851 | Sreangsu Acharyya | Sreangsu Acharyya, Oluwasanmi Koyejo, Joydeep Ghosh | Learning to Rank With Bregman Divergences and Monotone Retargeting | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-15-25 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel approach for learning to rank (LETOR) based on
the notion of monotone retargeting. It involves minimizing a divergence between
all monotonic increasing transformations of the training scores and a
parameterized prediction function. The minimization is both over the
transformations as well as over the parameters. It is applied to Bregman
divergences, a large class of "distance like" functions that were recently
shown to be the unique class that is statistically consistent with the
normalized discounted cumulative gain (NDCG) criterion [19]. The algorithm uses
alternating-projection-style updates, in which one set of simultaneous
projections can be computed independently of the Bregman divergence and the
other reduces to parameter estimation of a generalized linear model. This
results in an easily implemented, efficiently parallelizable algorithm for the
LETOR task
that enjoys global optimum guarantees under mild conditions. We present
empirical results on benchmark datasets showing that this approach can
outperform the state of the art NDCG consistent techniques.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:35:52 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Acharyya",
"Sreangsu",
""
],
[
"Koyejo",
"Oluwasanmi",
""
],
[
"Ghosh",
"Joydeep",
""
]
] | TITLE: Learning to Rank With Bregman Divergences and Monotone Retargeting
ABSTRACT: This paper introduces a novel approach for learning to rank (LETOR) based on
the notion of monotone retargeting. It involves minimizing a divergence between
all monotonic increasing transformations of the training scores and a
parameterized prediction function. The minimization is both over the
transformations as well as over the parameters. It is applied to Bregman
divergences, a large class of "distance like" functions that were recently
shown to be the unique class that is statistically consistent with the
normalized discounted cumulative gain (NDCG) criterion [19]. The algorithm uses
alternating-projection-style updates, in which one set of simultaneous
projections can be computed independently of the Bregman divergence and the
other reduces to parameter estimation of a generalized linear model. This
results in an easily implemented, efficiently parallelizable algorithm for the
LETOR task
that enjoys global optimum guarantees under mild conditions. We present
empirical results on benchmark datasets showing that this approach can
outperform the state of the art NDCG consistent techniques.
|
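For readers unfamiliar with the "distance like" functions mentioned above, the Bregman divergence generated by a strictly convex, differentiable function $\phi$ is defined as follows; squared Euclidean distance and KL divergence are the usual special cases.

```latex
D_{\phi}(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle .
```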
1210.4854 | Hannaneh Hajishirzi | Hannaneh Hajishirzi, Mohammad Rastegari, Ali Farhadi, Jessica K.
Hodgins | Semantic Understanding of Professional Soccer Commentaries | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-326-335 | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach to the problem of semantic parsing via
learning the correspondences between complex sentences and rich sets of events.
Our main intuition is that correct correspondences tend to occur more
frequently. Our model benefits from a discriminative notion of similarity to
learn the correspondence between sentence and an event and a ranking machinery
that scores the popularity of each correspondence. Our method can discover a
group of events (called macro-events) that best describes a sentence. We
evaluate our method on our novel dataset of professional soccer commentaries.
The empirical results show that our method significantly outperforms the
state of the art.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:37:21 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Rastegari",
"Mohammad",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Hodgins",
"Jessica K.",
""
]
] | TITLE: Semantic Understanding of Professional Soccer Commentaries
ABSTRACT: This paper presents a novel approach to the problem of semantic parsing via
learning the correspondences between complex sentences and rich sets of events.
Our main intuition is that correct correspondences tend to occur more
frequently. Our model benefits from a discriminative notion of similarity to
learn the correspondence between sentence and an event and a ranking machinery
that scores the popularity of each correspondence. Our method can discover a
group of events (called macro-events) that best describes a sentence. We
evaluate our method on our novel dataset of professional soccer commentaries.
The empirical results show that our method significantly outperforms the
state of the art.
|
1210.4856 | Roger Grosse | Roger Grosse, Ruslan R Salakhutdinov, William T. Freeman, Joshua B.
Tenenbaum | Exploiting compositionality to explore a large space of model structures | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-306-315 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent proliferation of richly structured probabilistic models raises the
question of how to automatically determine an appropriate model for a dataset.
We investigate this question for a space of matrix decomposition models which
can express a variety of widely used models from unsupervised learning. To
enable model selection, we organize these models into a context-free grammar
which generates a wide variety of structures through the compositional
application of a few simple rules. We use our grammar to generically and
efficiently infer latent components and estimate predictive likelihood for
nearly 2500 structures using a small toolbox of reusable algorithms. Using a
greedy search over our grammar, we automatically choose the decomposition
structure from raw data by evaluating only a small fraction of all models. The
proposed method typically finds the correct structure for synthetic data and
backs off gracefully to simpler models under heavy noise. It learns sensible
structures for datasets as diverse as image patches, motion capture, 20
Questions, and U.S. Senate votes, all using exactly the same code.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:37:41 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Grosse",
"Roger",
""
],
[
"Salakhutdinov",
"Ruslan R",
""
],
[
"Freeman",
"William T.",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] | TITLE: Exploiting compositionality to explore a large space of model structures
ABSTRACT: The recent proliferation of richly structured probabilistic models raises the
question of how to automatically determine an appropriate model for a dataset.
We investigate this question for a space of matrix decomposition models which
can express a variety of widely used models from unsupervised learning. To
enable model selection, we organize these models into a context-free grammar
which generates a wide variety of structures through the compositional
application of a few simple rules. We use our grammar to generically and
efficiently infer latent components and estimate predictive likelihood for
nearly 2500 structures using a small toolbox of reusable algorithms. Using a
greedy search over our grammar, we automatically choose the decomposition
structure from raw data by evaluating only a small fraction of all models. The
proposed method typically finds the correct structure for synthetic data and
backs off gracefully to simpler models under heavy noise. It learns sensible
structures for datasets as diverse as image patches, motion capture, 20
Questions, and U.S. Senate votes, all using exactly the same code.
|
1210.4874 | Hoong Chuin Lau | Hoong Chuin Lau, William Yeoh, Pradeep Varakantham, Duc Thien Nguyen,
Huaxing Chen | Dynamic Stochastic Orienteering Problems for Risk-Aware Applications | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-448-458 | cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Orienteering problems (OPs) are a variant of the well-known prize-collecting
traveling salesman problem, where the salesman needs to choose a subset of
cities to visit within a given deadline. OPs and their extensions with
stochastic travel times (SOPs) have been used to model vehicle routing problems
and tourist trip design problems. However, they suffer from two limitations:
travel times between cities are assumed to be time independent, and the route
provided is independent of the risk preference (with respect to violating the
deadline) of the user. To address these issues, we make the following
contributions: We introduce (1) a dynamic SOP (DSOP) model, which is an
extension of SOPs with dynamic (time-dependent) travel times; (2) a
risk-sensitive criterion to allow for different risk preferences; and (3) a
local search algorithm to solve DSOPs with this risk-sensitive criterion. We
evaluated our algorithms on a real-world dataset for a theme park navigation
problem as well as synthetic datasets employed in the literature.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:42:27 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Lau",
"Hoong Chuin",
""
],
[
"Yeoh",
"William",
""
],
[
"Varakantham",
"Pradeep",
""
],
[
"Nguyen",
"Duc Thien",
""
],
[
"Chen",
"Huaxing",
""
]
] | TITLE: Dynamic Stochastic Orienteering Problems for Risk-Aware Applications
ABSTRACT: Orienteering problems (OPs) are a variant of the well-known prize-collecting
traveling salesman problem, where the salesman needs to choose a subset of
cities to visit within a given deadline. OPs and their extensions with
stochastic travel times (SOPs) have been used to model vehicle routing problems
and tourist trip design problems. However, they suffer from two limitations:
travel times between cities are assumed to be time independent, and the route
provided is independent of the risk preference (with respect to violating the
deadline) of the user. To address these issues, we make the following
contributions: We introduce (1) a dynamic SOP (DSOP) model, which is an
extension of SOPs with dynamic (time-dependent) travel times; (2) a
risk-sensitive criterion to allow for different risk preferences; and (3) a
local search algorithm to solve DSOPs with this risk-sensitive criterion. We
evaluated our algorithms on a real-world dataset for a theme park navigation
problem as well as synthetic datasets employed in the literature.
|
1210.4884 | Ankur P. Parikh | Ankur P. Parikh, Le Song, Mariya Ishteva, Gabi Teodoru, Eric P. Xing | A Spectral Algorithm for Latent Junction Trees | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-675-684 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latent variable models are an elegant framework for capturing rich
probabilistic dependencies in many applications. However, current approaches
typically parametrize these models using conditional probability tables, and
learning relies predominantly on local search heuristics such as Expectation
Maximization. Using tensor algebra, we propose an alternative parameterization
of latent variable models (where the model structures are junction trees) that
still allows for computation of marginals among observed variables. While this
novel representation leads to a moderate increase in the number of parameters
for junction trees of low treewidth, it lets us design a local-minimum-free
algorithm for learning this parameterization. The main computation of the
algorithm involves only tensor operations and SVDs which can be orders of
magnitude faster than EM algorithms for large datasets. To our knowledge, this
is the first provably consistent parameter learning technique for a large class
of low-treewidth latent graphical models beyond trees. We demonstrate the
advantages of our method on synthetic and real datasets.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:45:30 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Parikh",
"Ankur P.",
""
],
[
"Song",
"Le",
""
],
[
"Ishteva",
"Mariya",
""
],
[
"Teodoru",
"Gabi",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: A Spectral Algorithm for Latent Junction Trees
ABSTRACT: Latent variable models are an elegant framework for capturing rich
probabilistic dependencies in many applications. However, current approaches
typically parametrize these models using conditional probability tables, and
learning relies predominantly on local search heuristics such as Expectation
Maximization. Using tensor algebra, we propose an alternative parameterization
of latent variable models (where the model structures are junction trees) that
still allows for computation of marginals among observed variables. While this
novel representation leads to a moderate increase in the number of parameters
for junction trees of low treewidth, it lets us design a local-minimum-free
algorithm for learning this parameterization. The main computation of the
algorithm involves only tensor operations and SVDs which can be orders of
magnitude faster than EM algorithms for large datasets. To our knowledge, this
is the first provably consistent parameter learning technique for a large class
of low-treewidth latent graphical models beyond trees. We demonstrate the
advantages of our method on synthetic and real datasets.
|
1210.4896 | Daniel Lowd | Daniel Lowd | Closed-Form Learning of Markov Networks from Dependency Networks | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-533-542 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov networks (MNs) are a powerful way to compactly represent a joint
probability distribution, but most MN structure learning methods are very slow,
due to the high cost of evaluating candidate structures. Dependency networks
(DNs) represent a probability distribution as a set of conditional probability
distributions. DNs are very fast to learn, but the conditional distributions
may be inconsistent with each other and few inference algorithms support DNs.
In this paper, we present a closed-form method for converting a DN into an MN,
allowing us to enjoy both the efficiency of DN learning and the convenience of
the MN representation. When the DN is consistent, this conversion is exact. For
inconsistent DNs, we present averaging methods that significantly improve the
approximation. In experiments on 12 standard datasets, our methods are orders
of magnitude faster than and often more accurate than combining conditional
distributions using weight learning.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:48:08 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Lowd",
"Daniel",
""
]
] | TITLE: Closed-Form Learning of Markov Networks from Dependency Networks
ABSTRACT: Markov networks (MNs) are a powerful way to compactly represent a joint
probability distribution, but most MN structure learning methods are very slow,
due to the high cost of evaluating candidate structures. Dependency networks
(DNs) represent a probability distribution as a set of conditional probability
distributions. DNs are very fast to learn, but the conditional distributions
may be inconsistent with each other and few inference algorithms support DNs.
In this paper, we present a closed-form method for converting a DN into an MN,
allowing us to enjoy both the efficiency of DN learning and the convenience of
the MN representation. When the DN is consistent, this conversion is exact. For
inconsistent DNs, we present averaging methods that significantly improve the
approximation. In experiments on 12 standard datasets, our methods are orders
of magnitude faster than and often more accurate than combining conditional
distributions using weight learning.
|
1210.4909 | Jens Roeder | Jens Roeder, Boaz Nadler, Kevin Kunzmann, Fred A. Hamprecht | Active Learning with Distributional Estimates | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-715-725 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Active Learning (AL) is increasingly important in a broad range of
applications. Two main AL principles to obtain accurate classification with few
labeled data are refinement of the current decision boundary and exploration of
poorly sampled regions. In this paper we derive a novel AL scheme that balances
these two principles in a natural way. In contrast to many AL strategies, which
are based on an estimated class conditional probability $\hat{p}(y|x)$, a key
component of our approach is to view this quantity as a random variable, hence
explicitly considering the uncertainty in its estimated value. Our main
contribution is a novel mathematical framework for uncertainty-based AL, and a
corresponding AL scheme, where the uncertainty in $\hat{p}(y|x)$ is modeled by a
second-order distribution. On the practical side, we show how to approximate
such second-order distributions for kernel density classification. Finally, we
find that over a large number of UCI, USPS and Caltech4 datasets, our AL scheme
achieves significantly better learning curves than popular AL methods such as
uncertainty sampling and error reduction sampling, when all use the same kernel
density classifier.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:53:17 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Roeder",
"Jens",
""
],
[
"Nadler",
"Boaz",
""
],
[
"Kunzmann",
"Kevin",
""
],
[
"Hamprecht",
"Fred A.",
""
]
] | TITLE: Active Learning with Distributional Estimates
ABSTRACT: Active Learning (AL) is increasingly important in a broad range of
applications. Two main AL principles to obtain accurate classification with few
labeled data are refinement of the current decision boundary and exploration of
poorly sampled regions. In this paper we derive a novel AL scheme that balances
these two principles in a natural way. In contrast to many AL strategies, which
are based on an estimated class conditional probability $\hat{p}(y|x)$, a key
component of our approach is to view this quantity as a random variable, hence
explicitly considering the uncertainty in its estimated value. Our main
contribution is a novel mathematical framework for uncertainty-based AL, and a
corresponding AL scheme, where the uncertainty in $\hat{p}(y|x)$ is modeled by a
second-order distribution. On the practical side, we show how to approximate
such second-order distributions for kernel density classification. Finally, we
find that over a large number of UCI, USPS and Caltech4 datasets, our AL scheme
achieves significantly better learning curves than popular AL methods such as
uncertainty sampling and error reduction sampling, when all use the same kernel
density classifier.
|
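The key idea above, treating the estimated class probability as a random variable, can be illustrated with a simple Beta/binomial stand-in; this is my simplification for exposition, not the authors' kernel-density construction, and the counts are invented.

```python
from scipy.stats import beta

# Suppose the neighbourhood of a query point contains 3 positive and 2 negative labels.
pos, neg = 3, 2

# Second-order view: p(y=1|x) is itself uncertain, here Beta-distributed.
posterior = beta(pos + 1, neg + 1)

point_estimate = pos / (pos + neg)               # what a plain classifier would report
prob_positive_side = posterior.sf(0.5)           # P(p > 0.5): confidence about the side
print(point_estimate, posterior.mean(), prob_positive_side)
```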
1210.4913 | Changhe Yuan | Changhe Yuan, Brandon Malone | An Improved Admissible Heuristic for Learning Optimal Bayesian Networks | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-924-933 | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently two search algorithms, A* and breadth-first branch and bound
(BFBnB), were developed based on a simple admissible heuristic for learning
Bayesian network structures that optimize a scoring function. The heuristic
represents a relaxation of the learning problem such that each variable chooses
optimal parents independently. As a result, the heuristic may contain many
directed cycles and result in a loose bound. This paper introduces an improved
admissible heuristic that tries to avoid directed cycles within small groups of
variables. A sparse representation is also introduced to store only the unique
optimal parent choices. Empirical results show that the new techniques
significantly improved the efficiency and scalability of A* and BFBnB on most
of the datasets tested in this paper.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:55:57 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Yuan",
"Changhe",
""
],
[
"Malone",
"Brandon",
""
]
] | TITLE: An Improved Admissible Heuristic for Learning Optimal Bayesian Networks
ABSTRACT: Recently two search algorithms, A* and breadth-first branch and bound
(BFBnB), were developed based on a simple admissible heuristic for learning
Bayesian network structures that optimize a scoring function. The heuristic
represents a relaxation of the learning problem such that each variable chooses
optimal parents independently. As a result, the heuristic may contain many
directed cycles and result in a loose bound. This paper introduces an improved
admissible heuristic that tries to avoid directed cycles within small groups of
variables. A sparse representation is also introduced to store only the unique
optimal parent choices. Empirical results show that the new techniques
significantly improved the efficiency and scalability of A* and BFBnB on most
of the datasets tested in this paper.
|
1210.4919 | Mirwaes Wahabzada | Mirwaes Wahabzada, Kristian Kersting, Christian Bauckhage, Christoph
Roemer, Agim Ballvora, Francisco Pinto, Uwe Rascher, Jens Leon, Lutz Ploemer | Latent Dirichlet Allocation Uncovers Spectral Characteristics of Drought
Stressed Plants | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-852-862 | cs.LG cs.CE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the adaptation process of plants to drought stress is essential
in improving management practices, breeding strategies as well as engineering
viable crops for a sustainable agriculture in the coming decades.
Hyper-spectral imaging provides a particularly promising approach to gaining such understanding, since it allows the non-destructive discovery of spectral characteristics of plants, which are governed primarily by the scattering and absorption characteristics of the leaf internal structure and biochemical constituents.
Several drought stress indices have been derived using hyper-spectral imaging.
However, they are typically based on only a few hyper-spectral images, rely on expert interpretation, and consider only a few wavelengths. In this study,
we present the first data-driven approach to discovering spectral drought
stress indices, treating it as an unsupervised labeling problem at massive
scale. To make use of short-range dependencies of spectral wavelengths, we develop an online variational Bayes algorithm for latent Dirichlet allocation with a convolved Dirichlet regularizer. This approach scales to massive datasets
and, hence, provides a more objective complement to plant physiological
practices. The spectral topics found conform to plant physiological knowledge
and can be computed in a fraction of the time compared to existing LDA
approaches.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:57:06 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Wahabzada",
"Mirwaes",
""
],
[
"Kersting",
"Kristian",
""
],
[
"Bauckhage",
"Christian",
""
],
[
"Roemer",
"Christoph",
""
],
[
"Ballvora",
"Agim",
""
],
[
"Pinto",
"Francisco",
""
],
[
"Rascher",
"Uwe",
""
],
[
"Leon",
"Jens",
""
],
[
"Ploemer",
"Lutz",
""
]
] | TITLE: Latent Dirichlet Allocation Uncovers Spectral Characteristics of Drought
Stressed Plants
ABSTRACT: Understanding the adaptation process of plants to drought stress is essential
in improving management practices, breeding strategies as well as engineering
viable crops for a sustainable agriculture in the coming decades.
Hyper-spectral imaging provides a particularly promising approach to gaining such understanding, since it allows the non-destructive discovery of spectral characteristics of plants, which are governed primarily by the scattering and absorption characteristics of the leaf internal structure and biochemical constituents.
Several drought stress indices have been derived using hyper-spectral imaging.
However, they are typically based on only a few hyper-spectral images, rely on expert interpretation, and consider only a few wavelengths. In this study,
we present the first data-driven approach to discovering spectral drought
stress indices, treating it as an unsupervised labeling problem at massive
scale. To make use of short-range dependencies of spectral wavelengths, we develop an online variational Bayes algorithm for latent Dirichlet allocation with a convolved Dirichlet regularizer. This approach scales to massive datasets
and, hence, provides a more objective complement to plant physiological
practices. The spectral topics found conform to plant physiological knowledge
and can be computed in a fraction of the time compared to existing LDA
approaches.
|
1210.5135 | Yang Lu | Yang Lu, Mengying Wang, Menglu Li, Qili Zhu, Bo Yuan | LSBN: A Large-Scale Bayesian Structure Learning Framework for Model
Averaging | 13 pages, 6 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The motivation for this paper is to apply Bayesian structure learning using
Model Averaging in large-scale networks. Currently, the Bayesian model averaging algorithm is applicable to networks with only tens of variables, restrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), that makes it possible to handle networks of essentially unbounded size by following the principle of divide-and-conquer. The method of LSBN comprises three steps. First, LSBN performs the partition using a second-order partition strategy, which achieves more robust results. LSBN then conducts sampling and structure learning within each overlapping community after the community has been isolated from the other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four other state-of-the-art large-scale network structure learning algorithms, namely ARACNE, PC, Greedy Search and MMHC, LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and F-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by model averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. In addition, the complete information about the overlapping communities is obtained as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
| [
{
"version": "v1",
"created": "Thu, 18 Oct 2012 14:15:40 GMT"
}
] | 2012-10-19T00:00:00 | [
[
"Lu",
"Yang",
""
],
[
"Wang",
"Mengying",
""
],
[
"Li",
"Menglu",
""
],
[
"Zhu",
"Qili",
""
],
[
"Yuan",
"Bo",
""
]
] | TITLE: LSBN: A Large-Scale Bayesian Structure Learning Framework for Model
Averaging
ABSTRACT: The motivation for this paper is to apply Bayesian structure learning using
Model Averaging in large-scale networks. Currently, the Bayesian model averaging algorithm is applicable to networks with only tens of variables, restrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), that makes it possible to handle networks of essentially unbounded size by following the principle of divide-and-conquer. The method of LSBN comprises three steps. First, LSBN performs the partition using a second-order partition strategy, which achieves more robust results. LSBN then conducts sampling and structure learning within each overlapping community after the community has been isolated from the other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four other state-of-the-art large-scale network structure learning algorithms, namely ARACNE, PC, Greedy Search and MMHC, LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and F-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by model averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. In addition, the complete information about the overlapping communities is obtained as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
|
1210.3165 | Ayatullah Faruk Mollah | Ayatullah Faruk Mollah, Subhadip Basu, Mita Nasipuri | Computationally Efficient Implementation of Convolution-based Locally
Adaptive Binarization Techniques | null | Proc. of Int'l Conf. on Information Processing, Springer, CCIS
292, pp. 159-168, 2012 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most important steps of document image processing is binarization.
The computational requirements of locally adaptive binarization techniques make
them unsuitable for devices with limited computing facilities. In this paper,
we have presented a computationally efficient implementation of convolution-based locally adaptive binarization techniques while keeping the performance
comparable to the original implementation. The computational complexity has
been reduced from O(W^2 N^2) to O(W N^2), where WxW is the window size and NxN is the image size. Experiments over benchmark datasets show that the computation time
has been reduced by 5 to 15 times depending on the window size while memory
consumption remains the same with respect to the state-of-the-art algorithmic
implementation.
| [
{
"version": "v1",
"created": "Thu, 11 Oct 2012 10:04:44 GMT"
}
] | 2012-10-12T00:00:00 | [
[
"Mollah",
"Ayatullah Faruk",
""
],
[
"Basu",
"Subhadip",
""
],
[
"Nasipuri",
"Mita",
""
]
] | TITLE: Computationally Efficient Implementation of Convolution-based Locally
Adaptive Binarization Techniques
ABSTRACT: One of the most important steps of document image processing is binarization.
The computational requirements of locally adaptive binarization techniques make
them unsuitable for devices with limited computing facilities. In this paper,
we have presented a computationally efficient implementation of convolution-based locally adaptive binarization techniques while keeping the performance
comparable to the original implementation. The computational complexity has
been reduced from O(W^2 N^2) to O(W N^2), where WxW is the window size and NxN is the image size. Experiments over benchmark datasets show that the computation time
has been reduced by 5 to 15 times depending on the window size while memory
consumption remains the same with respect to the state-of-the-art algorithmic
implementation.
|
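The complexity reduction above comes from avoiding an explicit window scan per pixel. A standard way to achieve this, shown below as a hedged sketch rather than the paper's implementation, is the integral-image trick applied to a Sauvola-style local threshold: the local mean and standard deviation of every WxW window cost O(1) per pixel regardless of the window size. The threshold formula and the parameter values k and R are common defaults, not taken from the paper.

```python
import numpy as np

def _integral(a):
    """Integral image with a leading zero row/column so any window sum needs 4 lookups."""
    s = a.cumsum(axis=0).cumsum(axis=1)
    return np.pad(s, ((1, 0), (1, 0)))

def _window_sums(a, w):
    """Sum and pixel count of the w x w window centred on each pixel (clipped at borders)."""
    ii = _integral(a)
    r = w // 2
    H, W = a.shape
    y0 = np.clip(np.arange(H) - r, 0, H)
    y1 = np.clip(np.arange(H) + r + 1, 0, H)
    x0 = np.clip(np.arange(W) - r, 0, W)
    x1 = np.clip(np.arange(W) + r + 1, 0, W)
    sums = (ii[np.ix_(y1, x1)] - ii[np.ix_(y0, x1)]
            - ii[np.ix_(y1, x0)] + ii[np.ix_(y0, x0)])
    return sums, np.outer(y1 - y0, x1 - x0)

def sauvola_binarize(img, w=25, k=0.2, R=128.0):
    """Sauvola local thresholding in O(N^2) time, independent of the window size w."""
    img = np.asarray(img, dtype=np.float64)
    s1, n = _window_sums(img, w)
    s2, _ = _window_sums(img ** 2, w)
    mean = s1 / n
    std = np.sqrt(np.maximum(s2 / n - mean ** 2, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (img > threshold).astype(np.uint8) * 255

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    page = rng.integers(0, 256, size=(480, 640))
    print("white pixel fraction:", (sauvola_binarize(page) == 255).mean())
```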
1210.3266 | Marco Pellegrini | Marco Pellegrini, Filippo Geraci, Miriam Baglioni | Detecting dense communities in large social and information networks
with the Core & Peel algorithm | null | null | null | null | cs.SI cs.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting and characterizing dense subgraphs (tight communities) in social
and information networks is an important exploratory tool in social network
analysis. Several approaches have been proposed that either (i) partition the
whole network into clusters, even in low-density regions, or (ii) are aimed at
finding a single densest community (and need to be iterated to find the next
one). As social networks grow larger both approaches (i) and (ii) result in
algorithms too slow to be practical, in particular when speed in analyzing the
data is required. In this paper we propose an approach that aims at balancing
efficiency of computation and expressiveness and manageability of the output
community representation. We define the notion of a partial dense cover (PDC)
of a graph. Intuitively, a PDC of a graph is a collection of node sets such that (a) each set induces a dense subgraph disjoint from the others, and (b) their removal leaves the residual graph without dense regions. Exact computation of a PDC is an NP-complete problem; thus, we propose an efficient heuristic algorithm for computing a PDC, which we christen Core and Peel. Moreover, we propose a novel
benchmarking technique that allows us to evaluate algorithms for computing PDC
using the classical IR concepts of precision and recall even without a golden
standard. Tests on 25 social and technological networks from the Stanford Large
Network Dataset Collection confirm that Core and Peel is efficient and attains
very high precision and recall.
| [
{
"version": "v1",
"created": "Thu, 11 Oct 2012 15:17:35 GMT"
}
] | 2012-10-12T00:00:00 | [
[
"Pellegrini",
"Marco",
""
],
[
"Geraci",
"Filippo",
""
],
[
"Baglioni",
"Miriam",
""
]
] | TITLE: Detecting dense communities in large social and information networks
with the Core & Peel algorithm
ABSTRACT: Detecting and characterizing dense subgraphs (tight communities) in social
and information networks is an important exploratory tool in social network
analysis. Several approaches have been proposed that either (i) partition the
whole network into clusters, even in low-density regions, or (ii) are aimed at
finding a single densest community (and need to be iterated to find the next
one). As social networks grow larger both approaches (i) and (ii) result in
algorithms too slow to be practical, in particular when speed in analyzing the
data is required. In this paper we propose an approach that aims at balancing
efficiency of computation and expressiveness and manageability of the output
community representation. We define the notion of a partial dense cover (PDC)
of a graph. Intuitively, a PDC of a graph is a collection of node sets such that (a) each set induces a dense subgraph disjoint from the others, and (b) their removal leaves the residual graph without dense regions. Exact computation of a PDC is an NP-complete problem; thus, we propose an efficient heuristic algorithm for computing a PDC, which we christen Core and Peel. Moreover, we propose a novel
benchmarking technique that allows us to evaluate algorithms for computing PDC
using the classical IR concepts of precision and recall even without a golden
standard. Tests on 25 social and technological networks from the Stanford Large
Network Dataset Collection confirm that Core and Peel is efficient and attains
very high precision and recall.
|
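For intuition about extracting disjoint dense regions and peeling them away, here is a toy core-based heuristic built on networkx. It is not the Core and Peel algorithm itself, and the minimum-size and density thresholds are arbitrary choices.

```python
import networkx as nx

def greedy_dense_cover(G, min_size=4, min_density=0.5):
    """Greedily extract disjoint dense node sets: repeatedly take the main k-core
    of what is left, keep its largest connected component if it is large and dense
    enough, then peel it off. Stops when no sufficiently dense region remains."""
    H = G.copy()
    cover = []
    while H.number_of_nodes() >= min_size:
        core = nx.k_core(H)                         # subgraph of maximum core number
        component = max(nx.connected_components(core), key=len)
        candidate = H.subgraph(component)
        if len(candidate) < min_size or nx.density(candidate) < min_density:
            break
        nodes = list(candidate.nodes())
        cover.append(set(nodes))
        H.remove_nodes_from(nodes)                  # "peel" the dense region away
    return cover

if __name__ == "__main__":
    G = nx.gnp_random_graph(60, 0.05, seed=1)
    G.add_edges_from((u, v) for u in range(10) for v in range(u + 1, 10))  # planted clique
    print(greedy_dense_cover(G))
```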
1210.3288 | Willie Neiswanger | Willie Neiswanger, Frank Wood | Unsupervised Detection and Tracking of Arbitrary Objects with Dependent
Dirichlet Process Mixtures | 21 pages, 7 figures | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a technique for the unsupervised detection and tracking
of arbitrary objects in videos. It is intended to reduce the need for detection
and localization methods tailored to specific object types and serve as a
general framework applicable to videos with varied objects, backgrounds, and
image qualities. The technique uses a dependent Dirichlet process mixture
(DDPM) known as the Generalized Polya Urn (GPUDDPM) to model image pixel data
that can be easily and efficiently extracted from the regions in a video that
represent objects. This paper describes a specific implementation of the model
using spatial and color pixel data extracted via frame differencing and gives
two algorithms for performing inference in the model to accomplish detection
and tracking. This technique is demonstrated on multiple synthetic and
benchmark video datasets that illustrate its ability to, without modification,
detect and track objects with diverse physical characteristics moving over
non-uniform backgrounds and through occlusion.
| [
{
"version": "v1",
"created": "Thu, 11 Oct 2012 16:30:15 GMT"
}
] | 2012-10-12T00:00:00 | [
[
"Neiswanger",
"Willie",
""
],
[
"Wood",
"Frank",
""
]
] | TITLE: Unsupervised Detection and Tracking of Arbitrary Objects with Dependent
Dirichlet Process Mixtures
ABSTRACT: This paper proposes a technique for the unsupervised detection and tracking
of arbitrary objects in videos. It is intended to reduce the need for detection
and localization methods tailored to specific object types and serve as a
general framework applicable to videos with varied objects, backgrounds, and
image qualities. The technique uses a dependent Dirichlet process mixture
(DDPM) known as the Generalized Polya Urn (GPUDDPM) to model image pixel data
that can be easily and efficiently extracted from the regions in a video that
represent objects. This paper describes a specific implementation of the model
using spatial and color pixel data extracted via frame differencing and gives
two algorithms for performing inference in the model to accomplish detection
and tracking. This technique is demonstrated on multiple synthetic and
benchmark video datasets that illustrate its ability to, without modification,
detect and track objects with diverse physical characteristics moving over
non-uniform backgrounds and through occlusion.
|
1210.3312 | Juan Manuel Torres Moreno | Juan-Manuel Torres-Moreno | Artex is AnotheR TEXt summarizer | 11 pages, 5 figures. arXiv admin note: substantial text overlap with
arXiv:1209.3126 | null | null | null | cs.IR cs.AI cs.CL | http://creativecommons.org/licenses/by/3.0/ | This paper describes Artex, another algorithm for Automatic Text
Summarization. In order to rank sentences, a simple inner product is calculated
between each sentence, a document vector (text topic) and a lexical vector
(vocabulary used by a sentence). Summaries are then generated by assembling the
highest-ranked sentences. No rule-based linguistic post-processing is
necessary in order to obtain summaries. Tests over several datasets (coming
from Document Understanding Conferences (DUC), Text Analysis Conferences (TAC),
evaluation campaigns, etc.) in French, English and Spanish have shown that
the summarizer achieves interesting results.
| [
{
"version": "v1",
"created": "Thu, 11 Oct 2012 18:21:01 GMT"
}
] | 2012-10-12T00:00:00 | [
[
"Torres-Moreno",
"Juan-Manuel",
""
]
] | TITLE: Artex is AnotheR TEXt summarizer
ABSTRACT: This paper describes Artex, another algorithm for Automatic Text
Summarization. In order to rank sentences, a simple inner product is calculated
between each sentence, a document vector (text topic) and a lexical vector
(vocabulary used by a sentence). Summaries are then generated by assembling the
highest-ranked sentences. No rule-based linguistic post-processing is
necessary in order to obtain summaries. Tests over several datasets (coming
from Document Understanding Conferences (DUC), Text Analysis Conferences (TAC),
evaluation campaigns, etc.) in French, English and Spanish have shown that
the summarizer achieves interesting results.
|
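A minimal sketch of the inner-product ranking idea is given below: sentences are represented as term-frequency vectors, scored against an average document vector and a lexical-usage vector, and the top-scoring sentences are returned in document order. The exact weighting and normalisation used by Artex are not reproduced here; everything below is an illustrative assumption.

```python
import re
from collections import Counter
import numpy as np

def extractive_summary(text, n_sentences=2):
    """Artex-style sketch: score each sentence by the product of its inner product
    with the document vector (average sentence vector) and with a lexical vector
    (fraction of sentences using each word), then keep the top sentences."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    vocab = sorted(set(w for toks in tokenized for w in toks))
    index = {w: i for i, w in enumerate(vocab)}

    # Term-frequency matrix: one row per sentence
    A = np.zeros((len(sentences), len(vocab)))
    for i, toks in enumerate(tokenized):
        for w, c in Counter(toks).items():
            A[i, index[w]] = c

    doc_vec = A.mean(axis=0)            # "text topic"
    lex_vec = (A > 0).mean(axis=0)      # vocabulary usage across sentences
    scores = (A @ doc_vec) * (A @ lex_vec)
    top = sorted(np.argsort(scores)[::-1][:n_sentences])
    return " ".join(sentences[i] for i in top)

if __name__ == "__main__":
    txt = ("Summarization selects the most informative sentences. "
           "Vector space models score sentences with inner products. "
           "The weather today is pleasant.")
    print(extractive_summary(txt, 2))
```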
1210.2752 | Andrea Capocci | Andrea Capocci, Andrea Baldassarri, Vito D. P. Servedio, Vittorio
Loreto | Statistical Properties of Inter-arrival Times Distribution in Social
Tagging Systems | 6 pages, 10 figures; Proceedings of the 20th ACM conference on
Hypertext and hypermedia, 2009 | null | 10.1145/1557914.1557955 | null | physics.soc-ph cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Folksonomies provide a rich source of data to study social patterns taking
place on the World Wide Web. Here we study the temporal patterns of users'
tagging activity. We show that the statistical properties of inter-arrival
times between subsequent tagging events cannot be explained without taking into
account correlation in users' behaviors. This shows that social interaction in
collaborative tagging communities shapes the evolution of folksonomies. A
consensus formation process involving the usage of a small number of tags for a
given resource is observed through a numerical and analytical analysis of some
well-known folksonomy datasets.
| [
{
"version": "v1",
"created": "Tue, 9 Oct 2012 20:47:33 GMT"
}
] | 2012-10-11T00:00:00 | [
[
"Capocci",
"Andrea",
""
],
[
"Baldassarri",
"Andrea",
""
],
[
"Servedio",
"Vito D. P.",
""
],
[
"Loreto",
"Vittorio",
""
]
] | TITLE: Statistical Properties of Inter-arrival Times Distribution in Social
Tagging Systems
ABSTRACT: Folksonomies provide a rich source of data to study social patterns taking
place on the World Wide Web. Here we study the temporal patterns of users'
tagging activity. We show that the statistical properties of inter-arrival
times between subsequent tagging events cannot be explained without taking into
account correlation in users' behaviors. This shows that social interaction in
collaborative tagging communities shapes the evolution of folksonomies. A
consensus formation process involving the usage of a small number of tags for a
given resource is observed through a numerical and analytical analysis of some
well-known folksonomy datasets.
|
1210.2838 | Stefan Seer | Stefan Seer, Norbert Br\"andle, Carlo Ratti | Kinects and Human Kinetics: A New Approach for Studying Crowd Behavior | Preprint submitted to Transportation Research Part C: Emerging
Technologies, September 11, 2012 | null | null | null | cs.CV physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling crowd behavior relies on accurate data of pedestrian movements at a
high level of detail. Imaging sensors such as cameras provide a good basis for
capturing such detailed pedestrian motion data. However, currently available
computer vision technologies, when applied to conventional video footage, still
cannot automatically unveil accurate motions of groups of people or crowds from
the image sequences. We present a novel data collection approach for studying
crowd behavior which uses the increasingly popular low-cost sensor Microsoft
Kinect. The Kinect captures both standard camera data and a three-dimensional
depth map. Our human detection and tracking algorithm is based on agglomerative
clustering of depth data captured from an elevated view - in contrast to the
lateral view used for gesture recognition in Kinect gaming applications. Our
approach transforms local Kinect 3D data to a common world coordinate system in
order to stitch together human trajectories from multiple Kinects, which allows
for a scalable and flexible capturing area. At a testbed with real-world
pedestrian traffic we demonstrate that our approach can provide accurate
trajectories from three Kinects with a Pedestrian Detection Rate of up to 94%
and a Multiple Object Tracking Precision of 4 cm. Using a comprehensive dataset
of 2240 captured human trajectories we calibrate three variations of the Social
Force model. The results of our model validations indicate their particular
ability to reproduce the observed crowd behavior in microscopic simulations.
| [
{
"version": "v1",
"created": "Wed, 10 Oct 2012 09:06:04 GMT"
}
] | 2012-10-11T00:00:00 | [
[
"Seer",
"Stefan",
""
],
[
"Brändle",
"Norbert",
""
],
[
"Ratti",
"Carlo",
""
]
] | TITLE: Kinects and Human Kinetics: A New Approach for Studying Crowd Behavior
ABSTRACT: Modeling crowd behavior relies on accurate data of pedestrian movements at a
high level of detail. Imaging sensors such as cameras provide a good basis for
capturing such detailed pedestrian motion data. However, currently available
computer vision technologies, when applied to conventional video footage, still
cannot automatically unveil accurate motions of groups of people or crowds from
the image sequences. We present a novel data collection approach for studying
crowd behavior which uses the increasingly popular low-cost sensor Microsoft
Kinect. The Kinect captures both standard camera data and a three-dimensional
depth map. Our human detection and tracking algorithm is based on agglomerative
clustering of depth data captured from an elevated view - in contrast to the
lateral view used for gesture recognition in Kinect gaming applications. Our
approach transforms local Kinect 3D data to a common world coordinate system in
order to stitch together human trajectories from multiple Kinects, which allows
for a scalable and flexible capturing area. At a testbed with real-world
pedestrian traffic we demonstrate that our approach can provide accurate
trajectories from three Kinects with a Pedestrian Detection Rate of up to 94%
and a Multiple Object Tracking Precision of 4 cm. Using a comprehensive dataset
of 2240 captured human trajectories we calibrate three variations of the Social
Force model. The results of our model validations indicate their particular
ability to reproduce the observed crowd behavior in microscopic simulations.
|
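As a toy illustration of detecting people by agglomerative clustering of depth data seen from above (not the calibrated multi-Kinect pipeline of the paper), the sketch below keeps 3D points above a height threshold, clusters their ground-plane coordinates with single-linkage clustering, and reports one centroid per sufficiently large cluster. All parameter values are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def detect_people(points_xyz, min_height=1.2, merge_dist=0.35, min_points=30):
    """Top-view people detection sketch: drop floor and low clutter, agglomeratively
    cluster the remaining points in the ground plane, return one (x, y) centroid per
    sufficiently large cluster."""
    pts = points_xyz[points_xyz[:, 2] > min_height]
    if len(pts) < min_points:
        return np.empty((0, 2))
    Z = linkage(pts[:, :2], method="single")                  # agglomerative clustering
    labels = fcluster(Z, t=merge_dist, criterion="distance")
    centroids = []
    for lab in np.unique(labels):
        members = pts[labels == lab]
        if len(members) >= min_points:
            centroids.append(members[:, :2].mean(axis=0))
    return np.array(centroids)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    person_a = rng.normal([1.0, 2.0, 1.6], [0.1, 0.1, 0.05], size=(200, 3))
    person_b = rng.normal([3.0, 1.0, 1.5], [0.1, 0.1, 0.05], size=(200, 3))
    clutter = rng.uniform(0, 4, size=(300, 3)) * [1, 1, 0.3]
    print(detect_people(np.vstack([person_a, person_b, clutter])))
```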
1210.2872 | Kulthida Tuamsuk | Tipawan Silwattananusarn and Kulthida Tuamsuk | Data Mining and Its Applications for Knowledge Management: A Literature
Review from 2007 to 2012 | 12 pages, 4 figures | International Journal of Data Mining & Knowledge Management
Process (IJDKP) Vol.2, No.5, 2012, pp. 13-24 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data mining is one of the most important steps of the knowledge discovery in
databases process and is considered a significant subfield of knowledge management. Research in data mining continues to grow in business and in learning organizations, and is expected to keep growing over the coming decades. This review paper explores the
applications of data mining techniques which have been developed to support
knowledge management process. The journal articles indexed in ScienceDirect
Database from 2007 to 2012 are analyzed and classified. The discussion on the
findings is divided into 4 topics: (i) knowledge resource; (ii) knowledge types
and/or knowledge datasets; (iii) data mining tasks; and (iv) data mining
techniques and applications used in knowledge management. The article first
briefly describes the definition of data mining and data mining functionality.
Then the knowledge management rationale and major knowledge management tools
integrated in the knowledge management cycle are described. Finally, the
applications of data mining techniques in the process of knowledge management
are summarized and discussed.
| [
{
"version": "v1",
"created": "Wed, 10 Oct 2012 11:12:13 GMT"
}
] | 2012-10-11T00:00:00 | [
[
"Silwattananusarn",
"Tipawan",
""
],
[
"Tuamsuk",
"Kulthida",
""
]
] | TITLE: Data Mining and Its Applications for Knowledge Management: A Literature
Review from 2007 to 2012
ABSTRACT: Data mining is one of the most important steps of the knowledge discovery in
databases process and is considered a significant subfield of knowledge management. Research in data mining continues to grow in business and in learning organizations, and is expected to keep growing over the coming decades. This review paper explores the
applications of data mining techniques which have been developed to support
knowledge management process. The journal articles indexed in ScienceDirect
Database from 2007 to 2012 are analyzed and classified. The discussion on the
findings is divided into 4 topics: (i) knowledge resource; (ii) knowledge types
and/or knowledge datasets; (iii) data mining tasks; and (iv) data mining
techniques and applications used in knowledge management. The article first
briefly describes the definition of data mining and data mining functionality.
Then the knowledge management rationale and major knowledge management tools
integrated in the knowledge management cycle are described. Finally, the
applications of data mining techniques in the process of knowledge management
are summarized and discussed.
|
1210.2401 | Biao Xu | Biao Xu, Ruair\'i de Fr\'ein, Eric Robson and M\'iche\'al \'O Foghl\'u | Distributed Formal Concept Analysis Algorithms Based on an Iterative
MapReduce Framework | 17 pages, ICFCA 201, Formal Concept Analysis 2012 | null | 10.1007/978-3-642-29892-9_26 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While many existing formal concept analysis algorithms are efficient, they
are typically unsuitable for distributed implementation. Taking the MapReduce
(MR) framework as our inspiration we introduce a distributed approach for
performing formal concept mining. The novelty of our method is that we use a light-weight MapReduce runtime called Twister, which is better suited to
iterative algorithms than recent distributed approaches. First, we describe the
theoretical foundations underpinning our distributed formal concept analysis
approach. Second, we provide a representative exemplar of how a classic
centralized algorithm can be implemented in a distributed fashion using our
methodology: we modify Ganter's classic algorithm by introducing a family of
MR* algorithms, namely MRGanter and MRGanter+ where the prefix denotes the
algorithm's lineage. To evaluate the factors that impact distributed algorithm
performance, we compare our MR* algorithms with the state-of-the-art.
Experiments conducted on real datasets demonstrate that MRGanter+ is efficient,
scalable and an appealing algorithm for distributed problems.
| [
{
"version": "v1",
"created": "Fri, 5 Oct 2012 10:28:24 GMT"
}
] | 2012-10-10T00:00:00 | [
[
"Xu",
"Biao",
""
],
[
"de Fréin",
"Ruairí",
""
],
[
"Robson",
"Eric",
""
],
[
"Foghlú",
"Mícheál Ó",
""
]
] | TITLE: Distributed Formal Concept Analysis Algorithms Based on an Iterative
MapReduce Framework
ABSTRACT: While many existing formal concept analysis algorithms are efficient, they
are typically unsuitable for distributed implementation. Taking the MapReduce
(MR) framework as our inspiration we introduce a distributed approach for
performing formal concept mining. Our method has its novelty in that we use a
light-weight MapReduce runtime called Twister which is better suited to
iterative algorithms than recent distributed approaches. First, we describe the
theoretical foundations underpinning our distributed formal concept analysis
approach. Second, we provide a representative exemplar of how a classic
centralized algorithm can be implemented in a distributed fashion using our
methodology: we modify Ganter's classic algorithm by introducing a family of
MR* algorithms, namely MRGanter and MRGanter+ where the prefix denotes the
algorithm's lineage. To evaluate the factors that impact distributed algorithm
performance, we compare our MR* algorithms with the state-of-the-art.
Experiments conducted on real datasets demonstrate that MRGanter+ is efficient,
scalable and an appealing algorithm for distributed problems.
|
1210.2406 | Ali Tajer | Ali Tajer and H. Vincent Poor | Quick Search for Rare Events | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rare events can potentially occur in many applications. When manifested as
opportunities to be exploited, risks to be ameliorated, or certain features to
be extracted, such events become of paramount significance. Due to their
sporadic nature, the information-bearing signals associated with rare events
often lie in a large set of irrelevant signals and are not easily accessible.
This paper provides a statistical framework for detecting such events so that
an optimal balance between detection reliability and agility, as two opposing
performance measures, is established. The core component of this framework is a
sampling procedure that adaptively and quickly focuses the
information-gathering resources on the segments of the dataset that bear the
information pertinent to the rare events. Particular focus is placed on
Gaussian signals with the aim of detecting signals with rare mean and variance
values.
| [
{
"version": "v1",
"created": "Mon, 8 Oct 2012 20:15:32 GMT"
}
] | 2012-10-10T00:00:00 | [
[
"Tajer",
"Ali",
""
],
[
"Poor",
"H. Vincent",
""
]
] | TITLE: Quick Search for Rare Events
ABSTRACT: Rare events can potentially occur in many applications. When manifested as
opportunities to be exploited, risks to be ameliorated, or certain features to
be extracted, such events become of paramount significance. Due to their
sporadic nature, the information-bearing signals associated with rare events
often lie in a large set of irrelevant signals and are not easily accessible.
This paper provides a statistical framework for detecting such events so that
an optimal balance between detection reliability and agility, as two opposing
performance measures, is established. The core component of this framework is a
sampling procedure that adaptively and quickly focuses the
information-gathering resources on the segments of the dataset that bear the
information pertinent to the rare events. Particular focus is placed on
Gaussian signals with the aim of detecting signals with rare mean and variance
values.
|
1210.2515 | Peijun Zhu | Ting Huang, Peijun Zhu, Zengyou He | Protein Inference and Protein Quantification: Two Sides of the Same Coin | 14 Pages, This paper has submitted to RECOMB2013 | null | null | null | cs.CE cs.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: In mass spectrometry-based shotgun proteomics, protein
quantification and protein identification are two major computational problems.
To quantify the protein abundance, a list of proteins must first be inferred
from the sample. Then the relative or absolute protein abundance is estimated
with quantification methods, such as spectral counting. Until now, researchers
have been dealing with these two processes separately. In fact, they are two
sides of the same coin, in the sense that truly present proteins are those proteins with non-zero abundances. Then, one interesting question is: if we regard the
protein inference problem as a special protein quantification problem, is it
possible to achieve better protein inference performance?
Contribution: In this paper, we investigate the feasibility of using protein
quantification methods to solve the protein inference problem. Protein
inference is to determine whether each candidate protein is present in the
sample or not. Protein quantification is to calculate the abundance of each
protein. Naturally, the absent proteins should have zero abundances. Thus, we
argue that the protein inference problem can be viewed as a special case of
protein quantification problem: present proteins are those proteins with
non-zero abundances. Based on this idea, our paper tries to use three very
simple protein quantification methods to solve the protein inference problem
effectively.
Results: The experimental results on six datasets show that these three
methods are competitive with previous protein inference algorithms. This
demonstrates that it is plausible to take the protein inference problem as a
special case of protein quantification, which opens the door of devising more
effective protein inference algorithms from a quantification perspective.
| [
{
"version": "v1",
"created": "Tue, 9 Oct 2012 07:36:26 GMT"
}
] | 2012-10-10T00:00:00 | [
[
"Huang",
"Ting",
""
],
[
"Zhu",
"Peijun",
""
],
[
"He",
"Zengyou",
""
]
] | TITLE: Protein Inference and Protein Quantification: Two Sides of the Same Coin
ABSTRACT: Motivation: In mass spectrometry-based shotgun proteomics, protein
quantification and protein identification are two major computational problems.
To quantify the protein abundance, a list of proteins must first be inferred
from the sample. Then the relative or absolute protein abundance is estimated
with quantification methods, such as spectral counting. Until now, researchers
have been dealing with these two processes separately. In fact, they are two
sides of the same coin, in the sense that truly present proteins are those proteins with non-zero abundances. Then, one interesting question is: if we regard the
protein inference problem as a special protein quantification problem, is it
possible to achieve better protein inference performance?
Contribution: In this paper, we investigate the feasibility of using protein
quantification methods to solve the protein inference problem. Protein
inference is to determine whether each candidate protein is present in the
sample or not. Protein quantification is to calculate the abundance of each
protein. Naturally, the absent proteins should have zero abundances. Thus, we
argue that the protein inference problem can be viewed as a special case of
protein quantification problem: present proteins are those proteins with
non-zero abundances. Based on this idea, our paper tries to use three very
simple protein quantification methods to solve the protein inference problem
effectively.
Results: The experimental results on six datasets show that these three
methods are competitive with previous protein inference algorithms. This
demonstrates that it is plausible to take the protein inference problem as a
special case of protein quantification, which opens the door of devising more
effective protein inference algorithms from a quantification perspective.
|
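Spectral counting, the quantification method named above, can be sketched in a few lines: each peptide-spectrum match contributes one count, shared peptides are split among their candidate proteins, and proteins whose count stays near zero are treated as absent, so inference falls out of quantification. The equal split and the presence threshold are illustrative choices, not the three methods evaluated in the paper.

```python
from collections import defaultdict

def spectral_counts(psms, peptide_to_proteins):
    """Minimal spectral counting: each identified spectrum (PSM) adds one count,
    shared equally among the proteins that could have produced the peptide."""
    counts = defaultdict(float)
    for peptide in psms:
        proteins = peptide_to_proteins.get(peptide, [])
        for prot in proteins:
            counts[prot] += 1.0 / len(proteins)
    return dict(counts)

def infer_present(counts, threshold=0.5):
    """Treat proteins with a (near-)zero count as absent."""
    return sorted(p for p, c in counts.items() if c >= threshold)

if __name__ == "__main__":
    psms = ["PEPTIDEA", "PEPTIDEA", "PEPTIDEB", "PEPTIDEC"]
    mapping = {"PEPTIDEA": ["P1"], "PEPTIDEB": ["P1", "P2"], "PEPTIDEC": ["P3"]}
    counts = spectral_counts(psms, mapping)
    print(counts, infer_present(counts))
```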
1210.2695 | Lyndie Chiou | Lyndie Chiou | The Association of the Moon and the Sun with Large Earthquakes | null | null | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The role of the moon in triggering earthquakes has been studied since the
early 1900s. Theory states that as land tides swept by the moon cross fault
lines, stress in the Earth's plates intensifies, increasing the likelihood of
small earthquakes. This paper studied the association of the moon and sun with
larger magnitude earthquakes (magnitude 5 and greater) using a worldwide
dataset from the USGS.
Initially, the positions of the moon and sun were considered separately. The
moon showed a reduction of 1.74% (95% confidence) in earthquakes when it was 10
hours behind a longitude on earth and a 1.62% increase when it was 6 hours
behind. The sun revealed even weaker associations (<1%). Binning the data in 6
hours quadrants (matching natural tide cycles) reduced the associations
further.
However, combinations of moon-sun positions displayed significant
associations. Cycling the moon and sun in all possible quadrant permutations
showed a decrease in earthquakes when they were paired together on the East and
West horizons of an earthquake longitude (4.57% and 2.31% reductions). When the
moon and sun were on opposite sides of a longitude, there was often a small
(about 1%) increase in earthquakes.
Reducing the bin size from 6 hours to 1 hour produced noisy results. By
examining the outliers in the data, a pattern emerged that was independent of
earthquake longitude. The results showed a significant decrease (3.33% less
than expected) in earthquakes when the sun was located near the moon. There was
an increase (2.23%) when the moon and sun were on opposite sides of the Earth.
The association with earthquakes independent of terrestrial longitude
suggests that the combined moon-sun tidal forces act deep below the Earth's
crust where circumferential forces are weaker.
| [
{
"version": "v1",
"created": "Tue, 9 Oct 2012 19:12:29 GMT"
}
] | 2012-10-10T00:00:00 | [
[
"Chiou",
"Lyndie",
""
]
] | TITLE: The Association of the Moon and the Sun with Large Earthquakes
ABSTRACT: The role of the moon in triggering earthquakes has been studied since the
early 1900s. Theory states that as land tides swept by the moon cross fault
lines, stress in the Earth's plates intensifies, increasing the likelihood of
small earthquakes. This paper studied the association of the moon and sun with
larger magnitude earthquakes (magnitude 5 and greater) using a worldwide
dataset from the USGS.
Initially, the positions of the moon and sun were considered separately. The
moon showed a reduction of 1.74% (95% confidence) in earthquakes when it was 10
hours behind a longitude on Earth and a 1.62% increase when it was 6 hours behind. The sun revealed even weaker associations (<1%). Binning the data in 6-hour quadrants (matching natural tide cycles) reduced the associations
further.
However, combinations of moon-sun positions displayed significant
associations. Cycling the moon and sun in all possible quadrant permutations
showed a decrease in earthquakes when they were paired together on the East and
West horizons of an earthquake longitude (4.57% and 2.31% reductions). When the
moon and sun were on opposite sides of a longitude, there was often a small
(about 1%) increase in earthquakes.
Reducing the bin size from 6 hours to 1 hour produced noisy results. By
examining the outliers in the data, a pattern emerged that was independent of
earthquake longitude. The results showed a significant decrease (3.33% less
than expected) in earthquakes when the sun was located near the moon. There was
an increase (2.23%) when the moon and sun were on opposite sides of the Earth.
The association with earthquakes independent of terrestrial longitude
suggests that the combined moon-sun tidal forces act deep below the Earth's
crust where circumferential forces are weaker.
|
1210.0310 | Rahele Kafieh | Raheleh Kafieh, Hossein Rabbani, Michael D. Abramoff, Milan Sonka | Intra-Retinal Layer Segmentation of 3D Optical Coherence Tomography
Using Coarse Grained Diffusion Map | 30 pages,32 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optical coherence tomography (OCT) is a powerful and noninvasive method for
retinal imaging. In this paper, we introduce a fast segmentation method based
on a new variant of spectral graph theory named diffusion maps. The research is
performed on spectral domain (SD) OCT images depicting macular and optic nerve
head appearance. The presented approach does not require edge-based image
information and relies on regional image texture. Consequently, the proposed
method demonstrates robustness in situations of low image contrast or poor
layer-to-layer image gradients. Diffusion mapping is applied to 2D and 3D OCT datasets in two steps: one for partitioning the data into important and less important sections, and another for localization of the internal layers. In the first step, the pixels/voxels are grouped into rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences in their mean intensity. The first diffusion map clusters the data into three parts, the
second of which is the area of interest. The other two sections are eliminated
from the remaining calculations. In the second step, the remaining area is
subjected to another diffusion map assessment and the internal layers are
localized based on their textural similarities. The proposed method was tested
on 23 datasets from two patient groups (glaucoma and normals). The mean
unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 and 7.56 ± 2.95 micrometers for the 2D and 3D methods, respectively.
| [
{
"version": "v1",
"created": "Mon, 1 Oct 2012 08:52:29 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Oct 2012 11:05:28 GMT"
}
] | 2012-10-09T00:00:00 | [
[
"Kafieh",
"Raheleh",
""
],
[
"Rabbani",
"Hossein",
""
],
[
"Abramoff",
"Michael D.",
""
],
[
"Sonka",
"Milan",
""
]
] | TITLE: Intra-Retinal Layer Segmentation of 3D Optical Coherence Tomography
Using Coarse Grained Diffusion Map
ABSTRACT: Optical coherence tomography (OCT) is a powerful and noninvasive method for
retinal imaging. In this paper, we introduce a fast segmentation method based
on a new variant of spectral graph theory named diffusion maps. The research is
performed on spectral domain (SD) OCT images depicting macular and optic nerve
head appearance. The presented approach does not require edge-based image
information and relies on regional image texture. Consequently, the proposed
method demonstrates robustness in situations of low image contrast or poor
layer-to-layer image gradients. Diffusion mapping is applied to 2D and 3D OCT datasets in two steps: one for partitioning the data into important and less important sections, and another for localization of the internal layers. In the first step, the pixels/voxels are grouped into rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences in their mean intensity. The first diffusion map clusters the data into three parts, the
second of which is the area of interest. The other two sections are eliminated
from the remaining calculations. In the second step, the remaining area is
subjected to another diffusion map assessment and the internal layers are
localized based on their textural similarities. The proposed method was tested
on 23 datasets from two patient groups (glaucoma and normals). The mean
unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 and 7.56 ± 2.95 micrometers for the 2D and 3D methods, respectively.
|
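For readers unfamiliar with diffusion maps, here is a bare-bones version of the embedding step: Gaussian affinities are row-normalised into a Markov matrix, and the leading non-trivial eigenvectors, scaled by their eigenvalues, give coordinates in which well-connected regions cluster together. The OCT-specific graph construction (pixel blocks, intensity-based weights) is not reproduced; the bandwidth and the toy data are assumptions.

```python
import numpy as np

def diffusion_map(X, sigma=1.0, n_components=2, t=1):
    """Minimal diffusion map: Gaussian affinities, row-normalised into a Markov
    matrix, then the top non-trivial eigenvectors scaled by eigenvalue**t give the
    embedding coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    P = W / W.sum(axis=1, keepdims=True)            # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = np.vstack([rng.normal(m, 0.2, size=(30, 2)) for m in ([0, 0], [3, 0], [0, 3])])
    emb = diffusion_map(blocks, sigma=0.5)
    print(emb.shape)   # (90, 2): three well-separated groups in diffusion space
```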
1210.2162 | Peter Welinder | Peter Welinder and Max Welling and Pietro Perona | Semisupervised Classifier Evaluation and Recalibration | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How many labeled examples are needed to estimate a classifier's performance
on a new dataset? We study the case where data is plentiful, but labels are
expensive. We show that by making a few reasonable assumptions on the structure
of the data, it is possible to estimate performance curves, with confidence
bounds, using a small number of ground truth labels. Our approach, which we
call Semisupervised Performance Evaluation (SPE), is based on a generative
model for the classifier's confidence scores. In addition to estimating the
performance of classifiers on new datasets, SPE can be used to recalibrate a
classifier by re-estimating the class-conditional confidence distributions.
| [
{
"version": "v1",
"created": "Mon, 8 Oct 2012 07:15:57 GMT"
}
] | 2012-10-09T00:00:00 | [
[
"Welinder",
"Peter",
""
],
[
"Welling",
"Max",
""
],
[
"Perona",
"Pietro",
""
]
] | TITLE: Semisupervised Classifier Evaluation and Recalibration
ABSTRACT: How many labeled examples are needed to estimate a classifier's performance
on a new dataset? We study the case where data is plentiful, but labels are
expensive. We show that by making a few reasonable assumptions on the structure
of the data, it is possible to estimate performance curves, with confidence
bounds, using a small number of ground truth labels. Our approach, which we
call Semisupervised Performance Evaluation (SPE), is based on a generative
model for the classifier's confidence scores. In addition to estimating the
performance of classifiers on new datasets, SPE can be used to recalibrate a
classifier by re-estimating the class-conditional confidence distributions.
|
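The generative-model idea can be illustrated with a deliberately simplified sketch: model the classifier's scores as a two-component 1D Gaussian mixture and read estimated FPR/TPR at any threshold from the fitted components. SPE itself also uses a small number of ground-truth labels and produces confidence bounds; neither is attempted here, and the Gaussian score model is an assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def estimate_roc(scores, thresholds):
    """Label-free performance estimation sketch: fit a two-component Gaussian
    mixture to the scores, assume the higher-mean component is the positive class,
    and read off estimated FPR and TPR at each threshold."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())
    neg, pos = np.argsort(means)
    fpr = norm.sf(thresholds, means[neg], stds[neg])
    tpr = norm.sf(thresholds, means[pos], stds[pos])
    return fpr, tpr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(-1.0, 1.0, 1000), rng.normal(1.5, 1.0, 500)])
    fpr, tpr = estimate_roc(scores, thresholds=np.array([-1.0, 0.0, 1.0]))
    print(np.round(fpr, 2), np.round(tpr, 2))
```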
1210.2333 | Richard Davy | Richard Davy and Igor Esau | Surface air temperature variability in global climate models | 6 pages, 2 figures | null | null | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New results from the Coupled Model Inter-comparison Project phase 5 (CMIP5)
and multiple global reanalysis datasets are used to investigate the
relationship between the mean and standard deviation in the surface air
temperature. A combination of a land-sea mask and orographic filter were used
to investigate the geographic region with the strongest correlation and in all
cases this was found to be for low-lying over-land locations. This result is
consistent with the expectation that differences in the effective heat capacity
of the atmosphere are an important factor in determining the surface air
temperature response to forcing.
| [
{
"version": "v1",
"created": "Mon, 8 Oct 2012 16:47:26 GMT"
}
] | 2012-10-09T00:00:00 | [
[
"Davy",
"Richard",
""
],
[
"Esau",
"Igor",
""
]
] | TITLE: Surface air temperature variability in global climate models
ABSTRACT: New results from the Coupled Model Inter-comparison Project phase 5 (CMIP5)
and multiple global reanalysis datasets are used to investigate the
relationship between the mean and standard deviation in the surface air
temperature. A combination of a land-sea mask and orographic filter were used
to investigate the geographic region with the strongest correlation and in all
cases this was found to be for low-lying over-land locations. This result is
consistent with the expectation that differences in the effective heat capacity
of the atmosphere are an important factor in determining the surface air
temperature response to forcing.
|
1210.1714 | Andrew N. Jackson | Andrew N. Jackson | Formats over Time: Exploring UK Web History | 4 pages, 6 figures, presented at iPres 2012 in Toronto | null | null | null | cs.DL | http://creativecommons.org/licenses/by/3.0/ | Is software obsolescence a significant risk? To explore this issue, we
analysed a corpus of over 2.5 billion resources corresponding to the UK Web
domain, as crawled between 1996 and 2010. Using the DROID and Apache Tika
identification tools, we examined each resource and captured the results as
extended MIME types, embedding version, software and hardware identifiers
alongside the format information. The combined results form a detailed temporal
format profile of the corpus, which we have made available as open data. We
present the results of our initial analysis of this dataset. We look at image,
HTML and PDF resources in some detail, showing how the usage of different
formats, versions and software implementations has changed over time.
Furthermore, we show that software obsolescence is rare on the web and uncover
evidence indicating that network effects act to stabilise formats against
obsolescence.
| [
{
"version": "v1",
"created": "Fri, 5 Oct 2012 11:34:33 GMT"
}
] | 2012-10-08T00:00:00 | [
[
"Jackson",
"Andrew N.",
""
]
] | TITLE: Formats over Time: Exploring UK Web History
ABSTRACT: Is software obsolescence a significant risk? To explore this issue, we
analysed a corpus of over 2.5 billion resources corresponding to the UK Web
domain, as crawled between 1996 and 2010. Using the DROID and Apache Tika
identification tools, we examined each resource and captured the results as
extended MIME types, embedding version, software and hardware identifiers
alongside the format information. The combined results form a detailed temporal
format profile of the corpus, which we have made available as open data. We
present the results of our initial analysis of this dataset. We look at image,
HTML and PDF resources in some detail, showing how the usage of different
formats, versions and software implementations has changed over time.
Furthermore, we show that software obsolescence is rare on the web and uncover
evidence indicating that network effects act to stabilise formats against
obsolescence.
|
1210.1258 | Mariya Ishteva | Mariya Ishteva, Haesun Park, Le Song | Unfolding Latent Tree Structures using 4th Order Tensors | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering the latent structure from many observed variables is an important
yet challenging learning task. Existing approaches for discovering latent
structures often require the unknown number of hidden states as an input. In
this paper, we propose a quartet based approach which is \emph{agnostic} to
this number. The key contribution is a novel rank characterization of the
tensor associated with the marginal distribution of a quartet. This
characterization allows us to design a \emph{nuclear norm} based test for
resolving quartet relations. We then use the quartet test as a subroutine in a
divide-and-conquer algorithm for recovering the latent tree structure. Under
mild conditions, the algorithm is consistent and its error probability decays
exponentially with increasing sample size. We demonstrate that the proposed
approach compares favorably to alternatives. In a real world stock dataset, it
also discovers meaningful groupings of variables, and produces a model that
fits the data better.
| [
{
"version": "v1",
"created": "Wed, 3 Oct 2012 23:30:24 GMT"
}
] | 2012-10-05T00:00:00 | [
[
"Ishteva",
"Mariya",
""
],
[
"Park",
"Haesun",
""
],
[
"Song",
"Le",
""
]
] | TITLE: Unfolding Latent Tree Structures using 4th Order Tensors
ABSTRACT: Discovering the latent structure from many observed variables is an important
yet challenging learning task. Existing approaches for discovering latent
structures often require the unknown number of hidden states as an input. In
this paper, we propose a quartet based approach which is \emph{agnostic} to
this number. The key contribution is a novel rank characterization of the
tensor associated with the marginal distribution of a quartet. This
characterization allows us to design a \emph{nuclear norm} based test for
resolving quartet relations. We then use the quartet test as a subroutine in a
divide-and-conquer algorithm for recovering the latent tree structure. Under
mild conditions, the algorithm is consistent and its error probability decays
exponentially with increasing sample size. We demonstrate that the proposed
approach compares favorably to alternatives. In a real world stock dataset, it
also discovers meaningful groupings of variables, and produces a model that
fits the data better.
|
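A simplified reading of the nuclear-norm quartet test is sketched below: each candidate pairing of the four variables corresponds to a matrix unfolding of the empirical 4th-order probability tensor, and the pairing whose unfolding has the smallest nuclear norm is taken as the resolved quartet. The paper's actual statistic and its guarantees are more refined; the toy generative example and the decision rule are assumptions.

```python
import numpy as np

def quartet_unfolding_norms(P):
    """P is a 4th-order tensor of joint probabilities over variables (0, 1, 2, 3).
    Each pairing, e.g. {0,1}|{2,3}, corresponds to a matrix unfolding of P; the
    unfolding matching the true quartet topology is low-rank, so it tends to have
    the smallest nuclear norm (sum of singular values)."""
    d = P.shape
    pairings = {"01|23": (0, 1, 2, 3), "02|13": (0, 2, 1, 3), "03|12": (0, 3, 1, 2)}
    return {name: np.linalg.svd(np.transpose(P, perm).reshape(d[perm[0]] * d[perm[1]], -1),
                                compute_uv=False).sum()
            for name, perm in pairings.items()}

if __name__ == "__main__":
    # Two correlated hidden variables: h1 generates (x0, x1), h2 generates (x2, x3),
    # so the true quartet pairing is 01|23.
    ph1 = np.array([0.5, 0.5])
    T = np.array([[0.9, 0.1], [0.1, 0.9]])        # p(h2 | h1)
    E1 = np.array([[0.9, 0.1], [0.2, 0.8]])       # p(x0 | h1) = p(x1 | h1)
    E2 = np.array([[0.85, 0.15], [0.25, 0.75]])   # p(x2 | h2) = p(x3 | h2)
    P = np.einsum("a,ab,ai,aj,bk,bl->ijkl", ph1, T, E1, E1, E2, E2)
    print(quartet_unfolding_norms(P))
```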
1210.1317 | Phong Nguyen | Phong Nguyen, Jun Wang, Melanie Hilario and Alexandros Kalousis | Learning Heterogeneous Similarity Measures for Hybrid-Recommendations in
Meta-Mining | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The notion of meta-mining has appeared recently and extends the traditional
meta-learning in two ways. First it does not learn meta-models that provide
support only for the learning algorithm selection task but ones that support
the whole data-mining process. In addition it abandons the so called black-box
approach to algorithm description followed in meta-learning. Now, in addition to the datasets, the algorithms and workflows have descriptors as well. For the
latter two these descriptions are semantic, describing properties of the
algorithms. With the availability of descriptors both for datasets and data
mining workflows the traditional modelling techniques followed in
meta-learning, typically based on classification and regression algorithms, are
no longer appropriate. Instead we are faced with a problem the nature of which
is much more similar to the problems that appear in recommendation systems. The
most important meta-mining requirements are that suggestions should use only dataset and workflow descriptors, and that the cold-start problem should be handled, e.g. providing workflow suggestions for new datasets.
In this paper we take a different view on the meta-mining modelling problem
and treat it as a recommender problem. In order to account for the meta-mining
specificities we derive a novel metric-based-learning recommender approach. Our
method learns two homogeneous metrics, one in the dataset and one in the
workflow space, and a heterogeneous one in the dataset-workflow space. All
learned metrics reflect similarities established from the dataset-workflow
preference matrix. We demonstrate our method on meta-mining over biological
(microarray datasets) problems. The application of our method is not limited to
the meta-mining problem; its formulation is general enough that it can be applied to problems with similar requirements.
| [
{
"version": "v1",
"created": "Thu, 4 Oct 2012 07:17:37 GMT"
}
] | 2012-10-05T00:00:00 | [
[
"Nguyen",
"Phong",
""
],
[
"Wang",
"Jun",
""
],
[
"Hilario",
"Melanie",
""
],
[
"Kalousis",
"Alexandros",
""
]
] | TITLE: Learning Heterogeneous Similarity Measures for Hybrid-Recommendations in
Meta-Mining
ABSTRACT: The notion of meta-mining has appeared recently and extends the traditional
meta-learning in two ways. First it does not learn meta-models that provide
support only for the learning algorithm selection task but ones that support
the whole data-mining process. In addition it abandons the so called black-box
approach to algorithm description followed in meta-learning. Now, in addition to the datasets, the algorithms and workflows have descriptors as well. For the
latter two these descriptions are semantic, describing properties of the
algorithms. With the availability of descriptors both for datasets and data
mining workflows the traditional modelling techniques followed in
meta-learning, typically based on classification and regression algorithms, are
no longer appropriate. Instead we are faced with a problem the nature of which
is much more similar to the problems that appear in recommendation systems. The
most important meta-mining requirements are that suggestions should rely only
on dataset and workflow descriptors, and that the cold-start problem, i.e.
providing workflow suggestions for new datasets, must be addressed.
In this paper we take a different view on the meta-mining modelling problem
and treat it as a recommender problem. In order to account for the meta-mining
specificities we derive a novel metric-based-learning recommender approach. Our
method learns two homogeneous metrics, one in the dataset and one in the
workflow space, and a heterogeneous one in the dataset-workflow space. All
learned metrics reflect similarities established from the dataset-workflow
preference matrix. We demonstrate our method on meta-mining over biological
(microarray datasets) problems. The application of our method is not limited to
the meta-mining problem; its formulation is general enough that it can be
applied to problems with similar requirements.
|
1210.1461 | Shusen Wang | Shusen Wang, Zhihua Zhang, Jian Li | A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and
Tighter Bound | accepted by NIPS 2012 | Shusen Wang and Zhihua Zhang. A Scalable CUR Matrix Decomposition
Algorithm: Lower Time Complexity and Tighter Bound. In Advances in Neural
Information Processing Systems 25, 2012 | null | null | cs.LG cs.DM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The CUR matrix decomposition is an important extension of Nystr\"{o}m
approximation to a general matrix. It approximates any data matrix in terms of
a small number of its columns and rows. In this paper we propose a novel
randomized CUR algorithm with an expected relative-error bound. The proposed
algorithm has two advantages over the existing relative-error CUR algorithms: it
possesses a tighter theoretical bound and lower time complexity, and it can
avoid maintaining the whole data matrix in main memory. Finally,
experiments on several real-world datasets demonstrate significant improvement
over the existing relative-error algorithms.
| [
{
"version": "v1",
"created": "Thu, 4 Oct 2012 14:23:34 GMT"
}
] | 2012-10-05T00:00:00 | [
[
"Wang",
"Shusen",
""
],
[
"Zhang",
"Zhihua",
""
],
[
"Li",
"Jian",
""
]
] | TITLE: A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and
Tighter Bound
ABSTRACT: The CUR matrix decomposition is an important extension of Nystr\"{o}m
approximation to a general matrix. It approximates any data matrix in terms of
a small number of its columns and rows. In this paper we propose a novel
randomized CUR algorithm with an expected relative-error bound. The proposed
algorithm has two advantages over the existing relative-error CUR algorithms: it
possesses a tighter theoretical bound and lower time complexity, and it can
avoid maintaining the whole data matrix in main memory. Finally,
experiments on several real-world datasets demonstrate significant improvement
over the existing relative-error algorithms.
|
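As an illustration of the generic CUR idea behind the record above, here is a minimal Python/NumPy sketch that samples columns and rows with norm-proportional probabilities and fits U by least squares; the sampling scheme, the sample sizes c and r, and the pseudoinverse step are simplifying assumptions and do not reproduce the paper's algorithm (in particular, this toy version keeps the full matrix in memory when forming U).

import numpy as np

def simple_cur(A, c, r, seed=0):
    # Sample c columns and r rows with probability proportional to their
    # squared Euclidean norms, then choose U by least squares so that
    # C @ U @ R approximates A.
    rng = np.random.default_rng(seed)
    col_p = (A ** 2).sum(axis=0) / (A ** 2).sum()
    row_p = (A ** 2).sum(axis=1) / (A ** 2).sum()
    cols = rng.choice(A.shape[1], size=c, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # note: uses the full A
    return C, U, R

# Usage: relative Frobenius error on a roughly low-rank matrix.
rng = np.random.default_rng(42)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 150))
C, U, R = simple_cur(A, c=60, r=60)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))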
1210.0386 | Junlin Hu | Junlin Hu and Ping Guo | Combined Descriptors in Spatial Pyramid Domain for Image Classification | 9 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently spatial pyramid matching (SPM) with scale invariant feature
transform (SIFT) descriptor has been successfully used in image classification.
Unfortunately, the codebook generation and feature quantization procedures
using the SIFT feature have high complexity in both time and space. To address
this problem, in this paper, we propose an approach which combines local binary
patterns (LBP) and three-patch local binary patterns (TPLBP) in spatial pyramid
domain. The proposed method does not require codebook learning or feature
quantization, and hence is very efficient. Experiments on two popular
benchmark datasets demonstrate that the proposed method consistently and
significantly outperforms the widely used SPM-based SIFT descriptor method in
both time and classification accuracy.
| [
{
"version": "v1",
"created": "Mon, 1 Oct 2012 13:05:20 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Oct 2012 06:03:23 GMT"
},
{
"version": "v3",
"created": "Wed, 3 Oct 2012 02:48:47 GMT"
}
] | 2012-10-04T00:00:00 | [
[
"Hu",
"Junlin",
""
],
[
"Guo",
"Ping",
""
]
] | TITLE: Combined Descriptors in Spatial Pyramid Domain for Image Classification
ABSTRACT: Recently spatial pyramid matching (SPM) with scale invariant feature
transform (SIFT) descriptor has been successfully used in image classification.
Unfortunately, the codebook generation and feature quantization procedures
using the SIFT feature have high complexity in both time and space. To address
this problem, in this paper, we propose an approach which combines local binary
patterns (LBP) and three-patch local binary patterns (TPLBP) in spatial pyramid
domain. The proposed method does not require codebook learning or feature
quantization, and hence is very efficient. Experiments on two popular
benchmark datasets demonstrate that the proposed method consistently and
significantly outperforms the widely used SPM-based SIFT descriptor method in
both time and classification accuracy.
|
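As a rough illustration of the descriptor family used in the record above, here is a minimal Python/NumPy sketch of plain LBP codes pooled over a spatial pyramid; the 3x3 eight-neighbour pattern, the two-level pyramid and the per-block normalisation are illustrative assumptions, and the TPLBP component is omitted.

import numpy as np

def lbp_codes(gray):
    # Basic 3x3 local binary pattern: compare each interior pixel with its
    # 8 neighbours and pack the comparison bits into a code in [0, 255].
    g = gray.astype(np.float64)
    center = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (neigh >= center).astype(np.int32) << bit
    return codes

def spatial_pyramid_lbp(gray, levels=2):
    # Concatenate L1-normalised LBP histograms over 1x1, 2x2 and 4x4 grids.
    codes = lbp_codes(gray)
    feats = []
    for level in range(levels + 1):
        n = 2 ** level
        for rows in np.array_split(np.arange(codes.shape[0]), n):
            for cols in np.array_split(np.arange(codes.shape[1]), n):
                hist, _ = np.histogram(codes[np.ix_(rows, cols)],
                                       bins=256, range=(0, 256))
                feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

# Usage: a (1 + 4 + 16) * 256 = 5376-dimensional feature vector.
image = np.random.default_rng(0).random((64, 64))
print(spatial_pyramid_lbp(image).shape)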
1210.0564 | Tao Hu | Tao Hu, Juan Nunez-Iglesias, Shiv Vitaladevuni, Lou Scheffer, Shan Xu,
Mehdi Bolorizadeh, Harald Hess, Richard Fetter and Dmitri Chklovskii | Super-resolution using Sparse Representations over Learned Dictionaries:
Reconstruction of Brain Structure using Electron Microscopy | 12 pages, 11 figures | null | null | null | cs.CV q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central problem in neuroscience is reconstructing neuronal circuits on the
synapse level. Due to the wide range of scales in brain architecture, such
reconstruction requires imaging that is both high-resolution and
high-throughput. Existing electron microscopy (EM) techniques possess the
required resolution in the lateral plane and either high throughput or high
depth resolution, but not both. Here, we exploit recent advances in unsupervised
learning and signal processing to obtain high depth-resolution EM images
computationally without sacrificing throughput. First, we show that the brain
tissue can be represented as a sparse linear combination of localized basis
functions that are learned using high-resolution datasets. We then develop
compressive sensing-inspired techniques that can reconstruct the brain tissue
from very few (typically 5) tomographic views of each section. This enables
tracing of neuronal processes and, hence, high throughput reconstruction of
neural circuits on the level of individual synapses.
| [
{
"version": "v1",
"created": "Mon, 1 Oct 2012 20:30:36 GMT"
}
] | 2012-10-03T00:00:00 | [
[
"Hu",
"Tao",
""
],
[
"Nunez-Iglesias",
"Juan",
""
],
[
"Vitaladevuni",
"Shiv",
""
],
[
"Scheffer",
"Lou",
""
],
[
"Xu",
"Shan",
""
],
[
"Bolorizadeh",
"Mehdi",
""
],
[
"Hess",
"Harald",
""
],
[
"Fetter",
"Richard",
""
],
[
"Chklovskii",
"Dmitri",
""
]
] | TITLE: Super-resolution using Sparse Representations over Learned Dictionaries:
Reconstruction of Brain Structure using Electron Microscopy
ABSTRACT: A central problem in neuroscience is reconstructing neuronal circuits on the
synapse level. Due to the wide range of scales in brain architecture, such
reconstruction requires imaging that is both high-resolution and
high-throughput. Existing electron microscopy (EM) techniques possess the
required resolution in the lateral plane and either high throughput or high
depth resolution, but not both. Here, we exploit recent advances in unsupervised
learning and signal processing to obtain high depth-resolution EM images
computationally without sacrificing throughput. First, we show that the brain
tissue can be represented as a sparse linear combination of localized basis
functions that are learned using high-resolution datasets. We then develop
compressive sensing-inspired techniques that can reconstruct the brain tissue
from very few (typically 5) tomographic views of each section. This enables
tracing of neuronal processes and, hence, high throughput reconstruction of
neural circuits on the level of individual synapses.
|
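For context on the sparse-representation building block mentioned in the record above (image patches expressed as sparse combinations of learned atoms), here is a small scikit-learn sketch; the random stand-in image, the 8x8 patch size, the 64 atoms and the regularisation value are assumptions, and the compressive-sensing reconstruction from few tomographic views is not reproduced.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((128, 128))                      # stand-in for an EM section

# Extract 8x8 patches, remove the per-patch mean, and learn 64 atoms.
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches = patches - patches.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   random_state=0).fit(patches)
codes = dico.transform(patches)                     # sparse coefficients
print(dico.components_.shape,
      float(np.mean(np.count_nonzero(codes, axis=1))))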
1210.0595 | Amir Hosein Asiaee | Amir H. Asiaee, Prashant Doshi, Todd Minning, Satya Sahoo, Priti
Parikh, Amit Sheth, Rick L. Tarleton | From Questions to Effective Answers: On the Utility of Knowledge-Driven
Querying Systems for Life Sciences Data | null | null | null | null | cs.IR cs.DB | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We compare two distinct approaches for querying data in the context of the
life sciences. The first approach utilizes conventional databases to store the
data and intuitive form-based interfaces to facilitate easy querying of the
data. These interfaces could be seen as implementing a set of "pre-canned"
queries commonly used by the life science researchers that we study. The second
approach is based on semantic Web technologies and is knowledge (model) driven.
It utilizes a large OWL ontology and the same datasets as before, but associated as
RDF instances of the ontology concepts. An intuitive interface is provided that
allows the formulation of RDF triples-based queries. Both these approaches are
being used in parallel by a team of cell biologists in their daily research
activities, with the objective of gradually replacing the conventional approach
with the knowledge-driven one. This provides us with a valuable opportunity to
compare and qualitatively evaluate the two approaches. We describe several
benefits of the knowledge-driven approach in comparison to the traditional way
of accessing data, and highlight a few limitations as well. We believe that our
analysis not only explicitly highlights the specific benefits and limitations
of semantic Web technologies in our context but also contributes toward
effective ways of translating a question in a researcher's mind into precise
computational queries with the intent of obtaining effective answers from the
data. While researchers often assume the benefits of semantic Web technologies,
we explicitly illustrate these in practice.
| [
{
"version": "v1",
"created": "Mon, 1 Oct 2012 22:10:30 GMT"
}
] | 2012-10-03T00:00:00 | [
[
"Asiaee",
"Amir H.",
""
],
[
"Doshi",
"Prashant",
""
],
[
"Minning",
"Todd",
""
],
[
"Sahoo",
"Satya",
""
],
[
"Parikh",
"Priti",
""
],
[
"Sheth",
"Amit",
""
],
[
"Tarleton",
"Rick L.",
""
]
] | TITLE: From Questions to Effective Answers: On the Utility of Knowledge-Driven
Querying Systems for Life Sciences Data
ABSTRACT: We compare two distinct approaches for querying data in the context of the
life sciences. The first approach utilizes conventional databases to store the
data and intuitive form-based interfaces to facilitate easy querying of the
data. These interfaces could be seen as implementing a set of "pre-canned"
queries commonly used by the life science researchers that we study. The second
approach is based on semantic Web technologies and is knowledge (model) driven.
It utilizes a large OWL ontology and the same datasets as before, but associated as
RDF instances of the ontology concepts. An intuitive interface is provided that
allows the formulation of RDF triples-based queries. Both these approaches are
being used in parallel by a team of cell biologists in their daily research
activities, with the objective of gradually replacing the conventional approach
with the knowledge-driven one. This provides us with a valuable opportunity to
compare and qualitatively evaluate the two approaches. We describe several
benefits of the knowledge-driven approach in comparison to the traditional way
of accessing data, and highlight a few limitations as well. We believe that our
analysis not only explicitly highlights the specific benefits and limitations
of semantic Web technologies in our context but also contributes toward
effective ways of translating a question in a researcher's mind into precise
computational queries with the intent of obtaining effective answers from the
data. While researchers often assume the benefits of semantic Web technologies,
we explicitly illustrate these in practice.
|
1210.0758 | Daniele Cerra | Daniele Cerra and Mihai Datcu | A fast compression-based similarity measure with applications to
content-based image retrieval | Pre-print | Journal of Visual Communication and Image Representation, vol. 23,
no. 2, pp. 293-302, 2012 | 10.1016/j.jvcir.2011.10.009 | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compression-based similarity measures are effectively employed in
applications on diverse data types with a basically parameter-free approach.
Nevertheless, applying these techniques to medium-to-large datasets raises
problems that have seldom been addressed. This paper proposes
a similarity measure based on compression with dictionaries, the Fast
Compression Distance (FCD), which reduces the complexity of these methods,
without degradations in performance. On its basis a content-based color image
retrieval system is defined, which can be compared to state-of-the-art methods
based on invariant color features. Through the FCD a better understanding of
compression-based techniques is achieved by performing experiments on datasets
that are larger than the ones analyzed so far in the literature.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2012 13:04:49 GMT"
}
] | 2012-10-03T00:00:00 | [
[
"Cerra",
"Daniele",
""
],
[
"Datcu",
"Mihai",
""
]
] | TITLE: A fast compression-based similarity measure with applications to
content-based image retrieval
ABSTRACT: Compression-based similarity measures are effectively employed in
applications on diverse data types with a basically parameter-free approach.
Nevertheless, applying these techniques to medium-to-large datasets raises
problems that have seldom been addressed. This paper proposes
a similarity measure based on compression with dictionaries, the Fast
Compression Distance (FCD), which reduces the complexity of these methods,
without degradations in performance. On its basis a content-based color image
retrieval system is defined, which can be compared to state-of-the-art methods
based on invariant color features. Through the FCD a better understanding of
compression-based techniques is achieved by performing experiments on datasets
that are larger than the ones analyzed so far in the literature.
|
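For context, a tiny Python implementation of the classical Normalized Compression Distance (NCD), the kind of parameter-free compression-based measure the record above builds on; the FCD itself replaces the repeated joint compression below with precomputed dictionaries, which this sketch does not reproduce.

import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    # where C(.) is the length of the zlib-compressed input.
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Usage: related inputs compress well together and get a smaller distance.
a = b"the quick brown fox jumps over the lazy dog " * 50
b = b"the quick brown fox jumped over a lazy dog " * 50
c = bytes(range(256)) * 9
print(ncd(a, b), ncd(a, c))    # the first value should be clearly smaller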
1210.0866 | Aaron Adcock | Aaron Adcock and Daniel Rubin and Gunnar Carlsson | Classification of Hepatic Lesions using the Matching Metric | null | null | null | null | cs.CV cs.CG math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a methodology of classifying hepatic (liver) lesions
using multidimensional persistent homology, the matching metric (also called
the bottleneck distance), and a support vector machine. We present our
classification results on a dataset of 132 lesions that have been outlined and
annotated by radiologists. We find that topological features are useful in the
classification of hepatic lesions. We also find that two-dimensional persistent
homology outperforms one-dimensional persistent homology in this application.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2012 18:08:54 GMT"
}
] | 2012-10-03T00:00:00 | [
[
"Adcock",
"Aaron",
""
],
[
"Rubin",
"Daniel",
""
],
[
"Carlsson",
"Gunnar",
""
]
] | TITLE: Classification of Hepatic Lesions using the Matching Metric
ABSTRACT: In this paper we present a methodology of classifying hepatic (liver) lesions
using multidimensional persistent homology, the matching metric (also called
the bottleneck distance), and a support vector machine. We present our
classification results on a dataset of 132 lesions that have been outlined and
annotated by radiologists. We find that topological features are useful in the
classification of hepatic lesions. We also find that two-dimensional persistent
homology outperforms one-dimensional persistent homology in this application.
|
1207.3598 | Fabian Pedregosa | Fabian Pedregosa (INRIA Paris - Rocquencourt, INRIA Saclay - Ile de
France), Alexandre Gramfort (INRIA Saclay - Ile de France, LNAO), Ga\"el
Varoquaux (INRIA Saclay - Ile de France, LNAO), Elodie Cauvet (NEUROSPIN),
Christophe Pallier (NEUROSPIN), Bertrand Thirion (INRIA Saclay - Ile de
France) | Learning to rank from medical imaging data | null | MLMI 2012 - 3rd International Workshop on Machine Learning in
Medical Imaging (2012) | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical images can be used to predict a clinical score coding for the
severity of a disease, a pain level or the complexity of a cognitive task. In
all these cases, the predicted variable has a natural order. While a standard
classifier discards this information, we would like to take it into account in
order to improve prediction performance. A standard linear regression does
model such information; however, the linearity assumption is unlikely to be
satisfied when predicting from pixel intensities in an image. In this paper we
address these modeling challenges with a supervised learning procedure where
the model aims to order or rank images. We use a linear model for its
robustness in high dimension and its possible interpretation. We show on
simulations and two fMRI datasets that this approach is able to predict the
correct ordering on pairs of images, yielding higher prediction accuracy than
standard regression and multiclass classification techniques.
| [
{
"version": "v1",
"created": "Mon, 16 Jul 2012 08:22:36 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Sep 2012 17:04:22 GMT"
}
] | 2012-10-02T00:00:00 | [
[
"Pedregosa",
"Fabian",
"",
"INRIA Paris - Rocquencourt, INRIA Saclay - Ile de\n France"
],
[
"Gramfort",
"Alexandre",
"",
"INRIA Saclay - Ile de France, LNAO"
],
[
"Varoquaux",
"Gaël",
"",
"INRIA Saclay - Ile de France, LNAO"
],
[
"Cauvet",
"Elodie",
"",
"NEUROSPIN"
],
[
"Pallier",
"Christophe",
"",
"NEUROSPIN"
],
[
"Thirion",
"Bertrand",
"",
"INRIA Saclay - Ile de\n France"
]
] | TITLE: Learning to rank from medical imaging data
ABSTRACT: Medical images can be used to predict a clinical score coding for the
severity of a disease, a pain level or the complexity of a cognitive task. In
all these cases, the predicted variable has a natural order. While a standard
classifier discards this information, we would like to take it into account in
order to improve prediction performance. A standard linear regression does
model such information; however, the linearity assumption is unlikely to be
satisfied when predicting from pixel intensities in an image. In this paper we
address these modeling challenges with a supervised learning procedure where
the model aims to order or rank images. We use a linear model for its
robustness in high dimension and its possible interpretation. We show on
simulations and two fMRI datasets that this approach is able to predict the
correct ordering on pairs of images, yielding higher prediction accuracy than
standard regression and multiclass classification techniques.
|
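As a sketch of the pairwise-transform family of linear ranking methods that this line of work builds on, the following Python snippet turns an ordered target into labelled difference vectors and fits a linear classifier; the synthetic data, the choice of LinearSVC and the value of C are assumptions, not the estimator used in the record above.

import itertools
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_rank_fit(X, y, C=1.0):
    # Build a difference vector for every pair with different targets and
    # label it by which element of the pair has the larger target.
    diffs, labels = [], []
    for i, j in itertools.combinations(range(len(y)), 2):
        if y[i] == y[j]:
            continue
        diffs.append(X[i] - X[j])
        labels.append(1 if y[i] > y[j] else -1)
    clf = LinearSVC(C=C).fit(np.asarray(diffs), np.asarray(labels))
    return clf.coef_.ravel()            # scoring direction; rank items by X @ w

# Usage on synthetic data with an ordered (discretised) target.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 10))
y = np.round(X @ rng.standard_normal(10))
w = pairwise_rank_fit(X, y)
correct = [float((X[i] - X[j]) @ w > 0)
           for i, j in itertools.combinations(range(len(y)), 2) if y[i] > y[j]]
print(sum(correct) / len(correct))      # fraction of pairs ordered correctly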
1209.6419 | Xiaotong Yuan | Xiao-Tong Yuan and Tong Zhang | Partial Gaussian Graphical Model Estimation | 32 pages, 5 figures, 4tables | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the partial estimation of Gaussian graphical models from
high-dimensional empirical observations. We derive a convex formulation for
this problem using $\ell_1$-regularized maximum-likelihood estimation, which
can be solved via a block coordinate descent algorithm. Statistical estimation
performance can be established for our method. The proposed approach has
competitive empirical performance compared to existing methods, as demonstrated
by various experiments on synthetic and real datasets.
| [
{
"version": "v1",
"created": "Fri, 28 Sep 2012 04:12:14 GMT"
}
] | 2012-10-01T00:00:00 | [
[
"Yuan",
"Xiao-Tong",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: Partial Gaussian Graphical Model Estimation
ABSTRACT: This paper studies the partial estimation of Gaussian graphical models from
high-dimensional empirical observations. We derive a convex formulation for
this problem using $\ell_1$-regularized maximum-likelihood estimation, which
can be solved via a block coordinate descent algorithm. Statistical estimation
performance can be established for our method. The proposed approach has
competitive empirical performance compared to existing methods, as demonstrated
by various experiments on synthetic and real datasets.
|
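For context, a small scikit-learn sketch of standard l1-regularised precision-matrix estimation (the graphical lasso) on a synthetic chain-structured model; this is the full-graph analogue, not the partial-estimation formulation or the block coordinate descent solver of the record above, and the chain structure and alpha value are assumptions.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Chain-structured precision matrix (tridiagonal, positive definite).
prec = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(prec), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
# Zeros in the estimated precision matrix indicate conditional independence.
print(np.round(model.precision_, 2))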
1209.6540 | Gabor Sarkozy | G\'abor N. S\'ark\"ozy, Fei Song, Endre Szemer\'edi, Shubhendu Trivedi | A Practical Regularity Partitioning Algorithm and its Applications in
Clustering | null | null | null | null | math.CO cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce a new clustering technique called Regularity
Clustering. This new technique is based on the practical variants of the two
constructive versions of the Regularity Lemma, a very useful tool in graph
theory. The lemma claims that every graph can be partitioned into pseudo-random
graphs. While the Regularity Lemma has become very important in proving
theoretical results, it has no direct practical applications so far. An
important reason for this lack of practical applications is that the graph
under consideration has to be astronomically large. This requirement makes its
application restrictive in practice where graphs typically are much smaller. In
this paper we propose modifications of the constructive versions of the
Regularity Lemma that work for smaller graphs as well. We call this the
Practical Regularity partitioning algorithm. The partition obtained by this is
used to build the reduced graph which can be viewed as a compressed
representation of the original graph. Then we apply a pairwise clustering
method such as spectral clustering on this reduced graph to get a clustering of
the original graph that we call Regularity Clustering. We present results of
using Regularity Clustering on a number of benchmark datasets and compare them
with standard clustering techniques, such as $k$-means and spectral clustering.
These empirical results are very encouraging. Thus in this paper we report an
attempt to harness the power of the Regularity Lemma for real-world
applications.
| [
{
"version": "v1",
"created": "Fri, 28 Sep 2012 15:01:22 GMT"
}
] | 2012-10-01T00:00:00 | [
[
"Sárközy",
"Gábor N.",
""
],
[
"Song",
"Fei",
""
],
[
"Szemerédi",
"Endre",
""
],
[
"Trivedi",
"Shubhendu",
""
]
] | TITLE: A Practical Regularity Partitioning Algorithm and its Applications in
Clustering
ABSTRACT: In this paper we introduce a new clustering technique called Regularity
Clustering. This new technique is based on the practical variants of the two
constructive versions of the Regularity Lemma, a very useful tool in graph
theory. The lemma claims that every graph can be partitioned into pseudo-random
graphs. While the Regularity Lemma has become very important in proving
theoretical results, it has no direct practical applications so far. An
important reason for this lack of practical applications is that the graph
under consideration has to be astronomically large. This requirement makes its
application restrictive in practice where graphs typically are much smaller. In
this paper we propose modifications of the constructive versions of the
Regularity Lemma that work for smaller graphs as well. We call this the
Practical Regularity partitioning algorithm. The partition obtained by this is
used to build the reduced graph which can be viewed as a compressed
representation of the original graph. Then we apply a pairwise clustering
method such as spectral clustering on this reduced graph to get a clustering of
the original graph that we call Regularity Clustering. We present results of
using Regularity Clustering on a number of benchmark datasets and compare them
with standard clustering techniques, such as $k$-means and spectral clustering.
These empirical results are very encouraging. Thus in this paper we report an
attempt to harness the power of the Regularity Lemma for real-world
applications.
|
1209.6342 | Jie Cheng MS | Jie Cheng, Elizaveta Levina, Pei Wang and Ji Zhu | Sparse Ising Models with Covariates | 32 pages (including 5 pages of appendix), 3 figures, 2 tables | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a lot of work fitting Ising models to multivariate binary data
in order to understand the conditional dependency relationships between the
variables. However, additional covariates are frequently recorded together with
the binary data, and may influence the dependence relationships. Motivated by
such a dataset on genomic instability collected from tumor samples of several
types, we propose a sparse covariate dependent Ising model to study both the
conditional dependency within the binary data and its relationship with the
additional covariates. This results in subject-specific Ising models, where the
subject's covariates influence the strength of association between the genes.
As in all exploratory data analysis, interpretability of results is important,
and we use L1 penalties to induce sparsity in the fitted graphs and in the
number of selected covariates. Two algorithms to fit the model are proposed and
compared on a set of simulated data, and asymptotic results are established.
The results on the tumor dataset and their biological significance are
discussed in detail.
| [
{
"version": "v1",
"created": "Thu, 27 Sep 2012 19:43:44 GMT"
}
] | 2012-09-28T00:00:00 | [
[
"Cheng",
"Jie",
""
],
[
"Levina",
"Elizaveta",
""
],
[
"Wang",
"Pei",
""
],
[
"Zhu",
"Ji",
""
]
] | TITLE: Sparse Ising Models with Covariates
ABSTRACT: There has been a lot of work fitting Ising models to multivariate binary data
in order to understand the conditional dependency relationships between the
variables. However, additional covariates are frequently recorded together with
the binary data, and may influence the dependence relationships. Motivated by
such a dataset on genomic instability collected from tumor samples of several
types, we propose a sparse covariate dependent Ising model to study both the
conditional dependency within the binary data and its relationship with the
additional covariates. This results in subject-specific Ising models, where the
subject's covariates influence the strength of association between the genes.
As in all exploratory data analysis, interpretability of results is important,
and we use L1 penalties to induce sparsity in the fitted graphs and in the
number of selected covariates. Two algorithms to fit the model are proposed and
compared on a set of simulated data, and asymptotic results are established.
The results on the tumor dataset and their biological significance are
discussed in detail.
|
1209.5765 | Kevin Mote | Kevin Mote | Fast Point-Feature Label Placement for Dynamic Visualizations (2007) | null | Information Visualization (2007) 6, 249-260 | 10.1057/PALGRAVE.IVS.9500163 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a fast approach to automatic point label de-confliction
on interactive maps. The general Map Labeling problem is NP-hard and has been
the subject of much study for decades. Computerized maps have introduced
interactive zooming and panning, which has intensified the problem. Providing
dynamic labels for such maps typically requires a time-consuming pre-processing
phase. In the realm of visual analytics, however, the labeling of interactive
maps is further complicated by the use of massive datasets laid out in
arbitrary configurations, thus rendering reliance on a pre-processing phase
untenable. This paper offers a method for labeling point-features on dynamic
maps in real time without pre-processing. The algorithm presented is efficient,
scalable, and exceptionally fast; it can label interactive charts and diagrams
at speeds of multiple frames per second on maps with tens of thousands of
nodes. To accomplish this, the algorithm employs a novel geometric
de-confliction approach, the 'trellis strategy,' along with a unique label
candidate cost analysis to determine the 'least expensive' label configuration.
The speed and scalability of this approach make it well-suited for visual
analytic applications.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 20:58:42 GMT"
}
] | 2012-09-27T00:00:00 | [
[
"Mote",
"Kevin",
""
]
] | TITLE: Fast Point-Feature Label Placement for Dynamic Visualizations (2007)
ABSTRACT: This paper describes a fast approach to automatic point label de-confliction
on interactive maps. The general Map Labeling problem is NP-hard and has been
the subject of much study for decades. Computerized maps have introduced
interactive zooming and panning, which has intensified the problem. Providing
dynamic labels for such maps typically requires a time-consuming pre-processing
phase. In the realm of visual analytics, however, the labeling of interactive
maps is further complicated by the use of massive datasets laid out in
arbitrary configurations, thus rendering reliance on a pre-processing phase
untenable. This paper offers a method for labeling point-features on dynamic
maps in real time without pre-processing. The algorithm presented is efficient,
scalable, and exceptionally fast; it can label interactive charts and diagrams
at speeds of multiple frames per second on maps with tens of thousands of
nodes. To accomplish this, the algorithm employs a novel geometric
de-confliction approach, the 'trellis strategy,' along with a unique label
candidate cost analysis to determine the 'least expensive' label configuration.
The speed and scalability of this approach make it well-suited for visual
analytic applications.
|
1209.5766 | Kevin Mote | Kevin Mote | Fast Point-Feature Label Placement for Dynamic Visualizations (Thesis) | Master's Thesis, Washington State University | null | 10.1057/PALGRAVE.IVS.9500163 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a fast approach to automatic point label de-confliction
on interactive maps. The general Map Labeling problem is NP-hard and has been
the subject of much study for decades. Computerized maps have introduced
interactive zooming and panning, which has intensified the problem. Providing
dynamic labels for such maps typically requires a time-consuming pre-processing
phase. In the realm of visual analytics, however, the labeling of interactive
maps is further complicated by the use of massive datasets laid out in
arbitrary configurations, thus rendering reliance on a pre-processing phase
untenable. This paper offers a method for labeling point-features on dynamic
maps in real time without pre-processing. The algorithm presented is efficient,
scalable, and exceptionally fast; it can label interactive charts and diagrams
at speeds of multiple frames per second on maps with tens of thousands of
nodes. To accomplish this, the algorithm employs a novel geometric
de-confliction approach, the 'trellis strategy,' along with a unique label
candidate cost analysis to determine the "least expensive" label configuration.
The speed and scalability of this approach make it well-suited for visual
analytic applications.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 20:59:51 GMT"
}
] | 2012-09-27T00:00:00 | [
[
"Mote",
"Kevin",
""
]
] | TITLE: Fast Point-Feature Label Placement for Dynamic Visualizations (Thesis)
ABSTRACT: This paper describes a fast approach to automatic point label de-confliction
on interactive maps. The general Map Labeling problem is NP-hard and has been
the subject of much study for decades. Computerized maps have introduced
interactive zooming and panning, which has intensified the problem. Providing
dynamic labels for such maps typically requires a time-consuming pre-processing
phase. In the realm of visual analytics, however, the labeling of interactive
maps is further complicated by the use of massive datasets laid out in
arbitrary configurations, thus rendering reliance on a pre-processing phase
untenable. This paper offers a method for labeling point-features on dynamic
maps in real time without pre-processing. The algorithm presented is efficient,
scalable, and exceptionally fast; it can label interactive charts and diagrams
at speeds of multiple frames per second on maps with tens of thousands of
nodes. To accomplish this, the algorithm employs a novel geometric
de-confliction approach, the 'trellis strategy,' along with a unique label
candidate cost analysis to determine the "least expensive" label configuration.
The speed and scalability of this approach make it well-suited for visual
analytic applications.
|
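As a much-simplified stand-in for the grid-based de-confliction idea in the two records above, here is a toy greedy point-label placement sketch in Python that uses a uniform grid index for fast conflict tests; the four candidate positions, the fixed box size and the greedy processing order are assumptions, and there is no label-cost analysis.

import random
from collections import defaultdict

def place_labels(points, w, h, cell=None):
    # Greedily try four candidate boxes per point (upper-right, upper-left,
    # lower-right, lower-left); a grid index keeps conflict tests cheap.
    cell = cell or max(w, h)
    grid = defaultdict(list)                  # grid cell -> placed boxes
    placed = {}

    def cells(box):
        x0, y0, x1, y1 = box
        for cx in range(int(x0 // cell), int(x1 // cell) + 1):
            for cy in range(int(y0 // cell), int(y1 // cell) + 1):
                yield (cx, cy)

    def overlaps(a, b):
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    for idx, (x, y) in enumerate(points):
        for dx, dy in ((0, 0), (-w, 0), (0, -h), (-w, -h)):
            box = (x + dx, y + dy, x + dx + w, y + dy + h)
            if not any(overlaps(box, other)
                       for c in cells(box) for other in grid[c]):
                for c in cells(box):
                    grid[c].append(box)
                placed[idx] = box
                break
    return placed                             # points without a box stay unlabelled

# Usage: label 1000 random points with 20x8 label boxes.
random.seed(0)
pts = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(1000)]
print(len(place_labels(pts, w=20, h=8)), "of", len(pts), "points labelled")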
1209.6001 | Jonathan Shapiro | Ruefei He and Jonathan Shapiro | Bayesian Mixture Models for Frequent Itemset Discovery | null | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In binary-transaction data-mining, traditional frequent itemset mining often
produces results which are not straightforward to interpret. To overcome this
problem, probability models are often used to produce more compact and
conclusive results, albeit with some loss of accuracy. Bayesian statistics have
been widely used in the development of probability models in machine learning
in recent years and these methods have many advantages, including their
abilities to avoid overfitting. In this paper, we develop two Bayesian mixture
models with the Dirichlet distribution prior and the Dirichlet process (DP)
prior to improve the previous non-Bayesian mixture model developed for
transaction dataset mining. We implement the inference of both mixture models
using two methods: a collapsed Gibbs sampling scheme and a variational
approximation algorithm. Experiments in several benchmark problems have shown
that both mixture models achieve better performance than a non-Bayesian mixture
model. The variational algorithm is the faster of the two approaches while the
Gibbs sampling method achieves more accurate results. The Dirichlet process
mixture model can automatically grow to a proper complexity for a better
approximation. Once the model is built, it can be very fast to query and run
analysis on (typically 10 times faster than Eclat, as we will show in the
experiment section). However, these approaches also show that mixture models
underestimate the probabilities of frequent itemsets. Consequently, these
models have a higher sensitivity but a lower specificity.
| [
{
"version": "v1",
"created": "Wed, 26 Sep 2012 16:41:59 GMT"
}
] | 2012-09-27T00:00:00 | [
[
"He",
"Ruefei",
""
],
[
"Shapiro",
"Jonathan",
""
]
] | TITLE: Bayesian Mixture Models for Frequent Itemset Discovery
ABSTRACT: In binary-transaction data-mining, traditional frequent itemset mining often
produces results which are not straightforward to interpret. To overcome this
problem, probability models are often used to produce more compact and
conclusive results, albeit with some loss of accuracy. Bayesian statistics have
been widely used in the development of probability models in machine learning
in recent years and these methods have many advantages, including their
abilities to avoid overfitting. In this paper, we develop two Bayesian mixture
models with the Dirichlet distribution prior and the Dirichlet process (DP)
prior to improve the previous non-Bayesian mixture model developed for
transaction dataset mining. We implement the inference of both mixture models
using two methods: a collapsed Gibbs sampling scheme and a variational
approximation algorithm. Experiments in several benchmark problems have shown
that both mixture models achieve better performance than a non-Bayesian mixture
model. The variational algorithm is the faster of the two approaches while the
Gibbs sampling method achieves more accurate results. The Dirichlet process
mixture model can automatically grow to a proper complexity for a better
approximation. Once the model is built, it can be very fast to query and run
analysis on (typically 10 times faster than Eclat, as we will show in the
experiment section). However, these approaches also show that mixture models
underestimate the probabilities of frequent itemsets. Consequently, these
models have a higher sensitivity but a lower specificity.
|
1005.3063 | Diego Amancio Raphael | D.R. Amancio, M. G. V. Nunes, O. N. Oliveira Jr., L. da F. Costa | Good practices for a literature survey are not followed by authors while
preparing scientific manuscripts | null | Scientometrics, v. 90, p. 2, (2012) | 10.1007/s11192-012-0630-z | null | physics.soc-ph cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of citations received by authors in scientific journals has become
a major parameter to assess individual researchers and the journals themselves
through the impact factor. A fair assessment therefore requires that the
criteria for selecting references in a given manuscript should be unbiased with
respect to the authors or the journals cited. In this paper, we advocate that
authors should follow two mandatory principles to select papers (later
reflected in the list of references) while studying the literature for a given
research: i) consider similarity of content with the topics investigated, lest
very related work should be reproduced or ignored; ii) perform a systematic
search over the network of citations including seminal or very related papers.
We use formalisms of complex networks for two datasets of papers from the arXiv
repository to show that neither of these two criteria is fulfilled in practice.
| [
{
"version": "v1",
"created": "Mon, 17 May 2010 21:45:47 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Sep 2012 00:49:13 GMT"
}
] | 2012-09-25T00:00:00 | [
[
"Amancio",
"D. R.",
""
],
[
"Nunes",
"M. G. V.",
""
],
[
"Oliveira",
"O. N.",
"Jr."
],
[
"Costa",
"L. da F.",
""
]
] | TITLE: Good practices for a literature survey are not followed by authors while
preparing scientific manuscripts
ABSTRACT: The number of citations received by authors in scientific journals has become
a major parameter to assess individual researchers and the journals themselves
through the impact factor. A fair assessment therefore requires that the
criteria for selecting references in a given manuscript should be unbiased with
respect to the authors or the journals cited. In this paper, we advocate that
authors should follow two mandatory principles to select papers (later
reflected in the list of references) while studying the literature for a given
research: i) consider similarity of content with the topics investigated, lest
very related work should be reproduced or ignored; ii) perform a systematic
search over the network of citations including seminal or very related papers.
We use formalisms of complex networks for two datasets of papers from the arXiv
repository to show that neither of these two criteria is fulfilled in practice.
|
1209.5038 | Daniel Gordon | Daniel Gordon, Danny Hendler, Lior Rokach | Fast Randomized Model Generation for Shapelet-Based Time Series
Classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series classification is a field which has drawn much attention over the
past decade. A new approach for classification of time series uses
classification trees based on shapelets. A shapelet is a subsequence extracted
from one of the time series in the dataset. A disadvantage of this approach is
the time required for building the shapelet-based classification tree. The
search for the best shapelet requires examining all subsequences of all lengths
from all time series in the training set.
A key goal of this work was to find an evaluation order of the shapelets
space which enables fast convergence to an accurate model. The comparative
analysis we conducted clearly indicates that a random evaluation order yields
the best results. Our empirical analysis of the distribution of high-quality
shapelets within the shapelets space provides insights into why randomized
shapelets sampling is superior to alternative evaluation orders.
We present an algorithm for randomized model generation for shapelet-based
classification that converges extremely quickly to a model with surprisingly
high accuracy after evaluating only an exceedingly small fraction of the
shapelets space.
| [
{
"version": "v1",
"created": "Sun, 23 Sep 2012 07:50:42 GMT"
}
] | 2012-09-25T00:00:00 | [
[
"Gordon",
"Daniel",
""
],
[
"Hendler",
"Danny",
""
],
[
"Rokach",
"Lior",
""
]
] | TITLE: Fast Randomized Model Generation for Shapelet-Based Time Series
Classification
ABSTRACT: Time series classification is a field which has drawn much attention over the
past decade. A new approach for classification of time series uses
classification trees based on shapelets. A shapelet is a subsequence extracted
from one of the time series in the dataset. A disadvantage of this approach is
the time required for building the shapelet-based classification tree. The
search for the best shapelet requires examining all subsequences of all lengths
from all time series in the training set.
A key goal of this work was to find an evaluation order of the shapelets
space which enables fast convergence to an accurate model. The comparative
analysis we conducted clearly indicates that a random evaluation order yields
the best results. Our empirical analysis of the distribution of high-quality
shapelets within the shapelets space provides insights into why randomized
shapelets sampling is superior to alternative evaluation orders.
We present an algorithm for randomized model generation for shapelet-based
classification that converges extremely quickly to a model with surprisingly
high accuracy after evaluating only an exceedingly small fraction of the
shapelets space.
|
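A minimal Python sketch of randomized shapelet sampling in the spirit of the record above: draw random subsequences as candidates, score each by the information gain of its best distance threshold, and keep the best candidate; the candidate count, the candidate lengths and the exhaustive threshold scan are illustrative assumptions.

import numpy as np

def min_dist(series, shapelet):
    # Minimum Euclidean distance between the shapelet and every equal-length
    # subsequence of the series (the usual shapelet-to-series distance).
    windows = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(dists, labels, threshold):
    left, right = labels[dists <= threshold], labels[dists > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w = len(left) / len(labels)
    return entropy(labels) - w * entropy(left) - (1 - w) * entropy(right)

def random_shapelet_search(X, y, n_candidates=200, lengths=(10, 20, 30), seed=0):
    # X: sequence of equal-length 1-D series, y: class labels. The winning
    # (shapelet, threshold) pair would form one node of a shapelet tree.
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    best = (-1.0, None, None)                 # (gain, shapelet, threshold)
    for _ in range(n_candidates):
        L = int(rng.choice(lengths))
        i = int(rng.integers(len(X)))
        start = int(rng.integers(len(X[i]) - L + 1))
        cand = np.asarray(X[i][start:start + L], dtype=float)
        dists = np.array([min_dist(np.asarray(ts, dtype=float), cand) for ts in X])
        for thr in np.unique(dists):
            gain = info_gain(dists, y, thr)
            if gain > best[0]:
                best = (gain, cand, thr)
    return best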
1209.5335 | Arash Einolghozati | Erman Ayday, Arash Einolghozati, Faramarz Fekri | BPRS: Belief Propagation Based Iterative Recommender System | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce the first application of the Belief Propagation
(BP) algorithm in the design of recommender systems. We formulate the
recommendation problem as an inference problem and aim to compute the marginal
probability distributions of the variables which represent the ratings to be
predicted. However, computing these marginal probability functions is
computationally prohibitive for large-scale systems. Therefore, we utilize the
BP algorithm to efficiently compute these functions. Recommendations for each
active user are then iteratively computed by probabilistic message passing. As
opposed to the previous recommender algorithms, BPRS does not require solving
the recommendation problem for all the users if it wishes to update the
recommendations for only a single active. Further, BPRS computes the
recommendations for each user with linear complexity and without requiring a
training period. Via computer simulations (using the 100K MovieLens dataset),
we verify that BPRS iteratively reduces the error in the predicted ratings of
the users until it converges. Finally, we confirm that BPRS is comparable to
the state of art methods such as Correlation-based neighborhood model (CorNgbr)
and Singular Value Decomposition (SVD) in terms of rating and precision
accuracy. Therefore, we believe that the BP-based recommendation algorithm is a
new promising approach which offers a significant advantage on scalability
while providing competitive accuracy for the recommender systems.
| [
{
"version": "v1",
"created": "Mon, 24 Sep 2012 16:59:12 GMT"
}
] | 2012-09-25T00:00:00 | [
[
"Ayday",
"Erman",
""
],
[
"Einolghozati",
"Arash",
""
],
[
"Fekri",
"Faramarz",
""
]
] | TITLE: BPRS: Belief Propagation Based Iterative Recommender System
ABSTRACT: In this paper we introduce the first application of the Belief Propagation
(BP) algorithm in the design of recommender systems. We formulate the
recommendation problem as an inference problem and aim to compute the marginal
probability distributions of the variables which represent the ratings to be
predicted. However, computing these marginal probability functions is
computationally prohibitive for large-scale systems. Therefore, we utilize the
BP algorithm to efficiently compute these functions. Recommendations for each
active user are then iteratively computed by probabilistic message passing. As
opposed to the previous recommender algorithms, BPRS does not require solving
the recommendation problem for all the users if it wishes to update the
recommendations for only a single active. Further, BPRS computes the
recommendations for each user with linear complexity and without requiring a
training period. Via computer simulations (using the 100K MovieLens dataset),
we verify that BPRS iteratively reduces the error in the predicted ratings of
the users until it converges. Finally, we confirm that BPRS is comparable to
the state-of-the-art methods such as Correlation-based neighborhood model (CorNgbr)
and Singular Value Decomposition (SVD) in terms of rating and precision
accuracy. Therefore, we believe that the BP-based recommendation algorithm is a
new promising approach which offers a significant advantage on scalability
while providing competitive accuracy for the recommender systems.
|
1009.1380 | Stefano Marchesini | F. R. N. C. Maia, A. MacDowell, S. Marchesini, H. A. Padmore, D. Y.
Parkinson, J. Pien, A. Schirotzek, and C. Yang | Compressive Phase Contrast Tomography | 5 pages, "Image Reconstruction from Incomplete Data VI" conference
7800, SPIE Optical Engineering + Applications 1-5 August 2010 San Diego, CA
United States | Proc. SPIE 7800, 78000F (2010) | 10.1117/12.861946 | LBNL-3899E | physics.optics math.OC | http://creativecommons.org/licenses/publicdomain/ | When x-rays penetrate soft matter, their phase changes more rapidly than
their amplitude. Interference effects visible with high-brightness sources
create higher-contrast, edge-enhanced images. When the object is piecewise
smooth (made of big blocks of a few components), such higher-contrast
datasets have a sparse solution. We apply basis pursuit solvers to improve SNR,
remove ring artifacts, reduce the number of views and radiation dose from phase
contrast datasets collected at the Hard X-Ray Micro Tomography Beamline at the
Advanced Light Source. We report a GPU code for the most computationally
intensive task, the gridding and inverse gridding algorithm (non-uniformly
sampled Fourier transform).
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2010 19:55:40 GMT"
}
] | 2012-09-24T00:00:00 | [
[
"Maia",
"F. R. N. C.",
""
],
[
"MacDowell",
"A.",
""
],
[
"Marchesini",
"S.",
""
],
[
"Padmore",
"H. A.",
""
],
[
"Parkinson",
"D. Y.",
""
],
[
"Pien",
"J.",
""
],
[
"Schirotzek",
"A.",
""
],
[
"Yang",
"C.",
""
]
] | TITLE: Compressive Phase Contrast Tomography
ABSTRACT: When x-rays penetrate soft matter, their phase changes more rapidly than
their amplitude. Interference effects visible with high-brightness sources
create higher-contrast, edge-enhanced images. When the object is piecewise
smooth (made of big blocks of a few components), such higher-contrast
datasets have a sparse solution. We apply basis pursuit solvers to improve SNR,
remove ring artifacts, reduce the number of views and radiation dose from phase
contrast datasets collected at the Hard X-Ray Micro Tomography Beamline at the
Advanced Light Source. We report a GPU code for the most computationally
intensive task, the gridding and inverse gridding algorithm (non-uniformly
sampled Fourier transform).
|
1201.5338 | Xiang Wang | Xiang Wang, Buyue Qian, Ian Davidson | On Constrained Spectral Clustering and Its Applications | Data Mining and Knowledge Discovery, 2012 | null | 10.1007/s10618-012-0291-9 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constrained clustering has been well-studied for algorithms such as $K$-means
and hierarchical clustering. However, how to satisfy many constraints in these
algorithmic settings has been shown to be intractable. One alternative to
encode many constraints is to use spectral clustering, which remains a
developing area. In this paper, we propose a flexible framework for constrained
spectral clustering. In contrast to some previous efforts that implicitly
encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian
or constraining the underlying eigenspace, we present a more natural and
principled formulation, which explicitly encodes the constraints as part of a
constrained optimization problem. Our method offers several practical
advantages: it can encode the degree of belief in Must-Link and Cannot-Link
constraints; it guarantees to lower-bound how well the given constraints are
satisfied using a user-specified threshold; it can be solved deterministically
in polynomial time through generalized eigendecomposition. Furthermore, by
inheriting the objective function from spectral clustering and encoding the
constraints explicitly, much of the existing analysis of unconstrained spectral
clustering techniques remains valid for our formulation. We validate the
effectiveness of our approach by empirical results on both artificial and real
datasets. We also demonstrate an innovative use of encoding a large number of
constraints: transfer learning via constraints.
| [
{
"version": "v1",
"created": "Wed, 25 Jan 2012 18:36:11 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Sep 2012 06:04:35 GMT"
}
] | 2012-09-24T00:00:00 | [
[
"Wang",
"Xiang",
""
],
[
"Qian",
"Buyue",
""
],
[
"Davidson",
"Ian",
""
]
] | TITLE: On Constrained Spectral Clustering and Its Applications
ABSTRACT: Constrained clustering has been well-studied for algorithms such as $K$-means
and hierarchical clustering. However, how to satisfy many constraints in these
algorithmic settings has been shown to be intractable. One alternative to
encode many constraints is to use spectral clustering, which remains a
developing area. In this paper, we propose a flexible framework for constrained
spectral clustering. In contrast to some previous efforts that implicitly
encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian
or constraining the underlying eigenspace, we present a more natural and
principled formulation, which explicitly encodes the constraints as part of a
constrained optimization problem. Our method offers several practical
advantages: it can encode the degree of belief in Must-Link and Cannot-Link
constraints; it guarantees to lower-bound how well the given constraints are
satisfied using a user-specified threshold; it can be solved deterministically
in polynomial time through generalized eigendecomposition. Furthermore, by
inheriting the objective function from spectral clustering and encoding the
constraints explicitly, much of the existing analysis of unconstrained spectral
clustering techniques remains valid for our formulation. We validate the
effectiveness of our approach by empirical results on both artificial and real
datasets. We also demonstrate an innovative use of encoding a large number of
constraints: transfer learning via constraints.
|
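For contrast, a small scikit-learn sketch of the simpler baseline the record above argues against: encoding Must-Link and Cannot-Link constraints implicitly by editing the affinity matrix before ordinary spectral clustering, rather than solving the paper's constrained generalized eigenvalue problem; the toy data, the RBF affinity and the constraint lists are assumptions.

import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def constraint_modified_affinity(X, must_link, cannot_link, gamma=1.0):
    # Give must-link pairs the maximal affinity and cannot-link pairs zero
    # affinity, then cluster the modified graph with ordinary spectral clustering.
    W = rbf_kernel(X, gamma=gamma)
    for i, j in must_link:
        W[i, j] = W[j, i] = W.max()
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    return W

# Usage on two toy Gaussian blobs with a handful of pairwise constraints.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
W = constraint_modified_affinity(X, must_link=[(0, 1), (30, 31)],
                                 cannot_link=[(0, 30)])
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels[:5], labels[30:35])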
1206.5587 | Shafqat Shad Mr | Shafqat Ali Shad, Enhong Chen | Spatial Outlier Detection from GSM Mobility Data | null | International Journal of Advanced Research in Computer Science,
vol. 3, no. 3, pp. 68-74, 2012 | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper has been withdrawn by the authors. With the rigorous growth of
cellular networks, many mobility datasets are publicly available, which has
attracted researchers to study human mobility as a spatio-temporal phenomenon.
Mobility profile building is the main task in spatio-temporal trend analysis,
and profiles can be extracted from the location information available in the
dataset. The location information is usually gathered through GPS,
service-provider-assisted faux GPS, or Cell Global Identity (CGI). Because of
the high power consumption and extra resource installation required by
GPS-based methods, Cell Global Identity is the most inexpensive and readily
available solution for location information. CGI location information is a
four-part identifier, i.e. Mobile Country Code (MCC), Mobile Network Code
(MNC), Location Area Code (LAC) and Cell ID; location information is retrieved
in the form of longitude and latitude coordinates through any of the publicly
available Cell ID databases, e.g. the Google location API, using the CGI.
However, due to the fast growth of GSM networks, changes in topology made by
the GSM service provider and the technology shift toward 3G, exact spatial
extraction is problematic, so location extraction must first deal with the
spatial outlier problem before mobility profiles can be built. In this paper
we propose a methodology for the detection of spatial outliers from GSM CGI
data; the proposed methodology is based on hierarchical clustering and uses
the basic properties of the GSM network architecture.
| [
{
"version": "v1",
"created": "Mon, 25 Jun 2012 06:47:46 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Aug 2012 18:33:59 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Sep 2012 02:21:33 GMT"
}
] | 2012-09-24T00:00:00 | [
[
"Shad",
"Shafqat Ali",
""
],
[
"Chen",
"Enhong",
""
]
] | TITLE: Spatial Outlier Detection from GSM Mobility Data
ABSTRACT: This paper has been withdrawn by the authors. With the rigorous growth of
cellular networks, many mobility datasets are publicly available, which has
attracted researchers to study human mobility as a spatio-temporal phenomenon.
Mobility profile building is the main task in spatio-temporal trend analysis,
and profiles can be extracted from the location information available in the
dataset. The location information is usually gathered through GPS,
service-provider-assisted faux GPS, or Cell Global Identity (CGI). Because of
the high power consumption and extra resource installation required by
GPS-based methods, Cell Global Identity is the most inexpensive and readily
available solution for location information. CGI location information is a
four-part identifier, i.e. Mobile Country Code (MCC), Mobile Network Code
(MNC), Location Area Code (LAC) and Cell ID; location information is retrieved
in the form of longitude and latitude coordinates through any of the publicly
available Cell ID databases, e.g. the Google location API, using the CGI.
However, due to the fast growth of GSM networks, changes in topology made by
the GSM service provider and the technology shift toward 3G, exact spatial
extraction is problematic, so location extraction must first deal with the
spatial outlier problem before mobility profiles can be built. In this paper
we propose a methodology for the detection of spatial outliers from GSM CGI
data; the proposed methodology is based on hierarchical clustering and uses
the basic properties of the GSM network architecture.
|
1209.0835 | Neil Zhenqiang Gong | Neil Zhenqiang Gong, Wenchang Xu, Ling Huang, Prateek Mittal, Emil
Stefanov, Vyas Sekar, Dawn Song | Evolution of Social-Attribute Networks: Measurements, Modeling, and
Implications using Google+ | 14 pages, 19 figures. will appear in IMC'12 | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding social network structure and evolution has important
implications for many aspects of network and system design including
provisioning, bootstrapping trust and reputation systems via social networks,
and defenses against Sybil attacks. Several recent results suggest that
augmenting the social network structure with user attributes (e.g., location,
employer, communities of interest) can provide a more fine-grained
understanding of social networks. However, there have been few studies to
provide a systematic understanding of these effects at scale. We bridge this
gap using a unique dataset collected as the Google+ social network grew over
time since its release in late June 2011. We observe novel phenomena with
respect to both standard social network metrics and new attribute-related
metrics (that we define). We also observe interesting evolutionary patterns as
Google+ went from a bootstrap phase to a steady invitation-only stage before a
public release. Based on our empirical observations, we develop a new
generative model to jointly reproduce the social structure and the node
attributes. Using theoretical analysis and empirical evaluations, we show that
our model can accurately reproduce the social and attribute structure of real
social networks. We also demonstrate that our model provides more accurate
predictions for practical application contexts.
| [
{
"version": "v1",
"created": "Wed, 5 Sep 2012 01:01:47 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Sep 2012 04:12:28 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Sep 2012 17:17:36 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Sep 2012 02:24:50 GMT"
}
] | 2012-09-20T00:00:00 | [
[
"Gong",
"Neil Zhenqiang",
""
],
[
"Xu",
"Wenchang",
""
],
[
"Huang",
"Ling",
""
],
[
"Mittal",
"Prateek",
""
],
[
"Stefanov",
"Emil",
""
],
[
"Sekar",
"Vyas",
""
],
[
"Song",
"Dawn",
""
]
] | TITLE: Evolution of Social-Attribute Networks: Measurements, Modeling, and
Implications using Google+
ABSTRACT: Understanding social network structure and evolution has important
implications for many aspects of network and system design including
provisioning, bootstrapping trust and reputation systems via social networks,
and defenses against Sybil attacks. Several recent results suggest that
augmenting the social network structure with user attributes (e.g., location,
employer, communities of interest) can provide a more fine-grained
understanding of social networks. However, there have been few studies to
provide a systematic understanding of these effects at scale. We bridge this
gap using a unique dataset collected as the Google+ social network grew over
time since its release in late June 2011. We observe novel phenomena with
respect to both standard social network metrics and new attribute-related
metrics (that we define). We also observe interesting evolutionary patterns as
Google+ went from a bootstrap phase to a steady invitation-only stage before a
public release. Based on our empirical observations, we develop a new
generative model to jointly reproduce the social structure and the node
attributes. Using theoretical analysis and empirical evaluations, we show that
our model can accurately reproduce the social and attribute structure of real
social networks. We also demonstrate that our model provides more accurate
predictions for practical application contexts.
|
1209.2493 | Subhabrata Mukherjee | Subhabrata Mukherjee, Pushpak Bhattacharyya | WikiSent : Weakly Supervised Sentiment Analysis Through Extractive
Summarization With Wikipedia | The paper is available at
http://subhabrata-mukherjee.webs.com/publications.htm | Lecture Notes in Computer Science Volume 7523, 2012, pp 774-793 | 10.1007/978-3-642-33460-3_55 | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/3.0/ | This paper describes a weakly supervised system for sentiment analysis in the
movie review domain. The objective is to classify a movie review into a
polarity class, positive or negative, based on those sentences bearing opinion
on the movie alone. The irrelevant text, not directly related to the reviewer
opinion on the movie, is left out of analysis. Wikipedia incorporates the world
knowledge of movie-specific features in the system which is used to obtain an
extractive summary of the review, consisting of the reviewer's opinions about
the specific aspects of the movie. This filters out the concepts which are
irrelevant or objective with respect to the given movie. The proposed system,
WikiSent, does not require any labeled data for training. The only weak
supervision arises out of the usage of resources like WordNet, Part-of-Speech
Tagger and Sentiment Lexicons by virtue of their construction. WikiSent
achieves a considerable accuracy improvement over the baseline and has a better
or comparable accuracy to the existing semi-supervised and unsupervised systems
in the domain, on the same dataset. We also perform a general movie review
trend analysis using WikiSent to find the trend in movie-making and the public
acceptance in terms of movie genre, year of release and polarity.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2012 04:33:08 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Sep 2012 14:44:11 GMT"
}
] | 2012-09-19T00:00:00 | [
[
"Mukherjee",
"Subhabrata",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] | TITLE: WikiSent : Weakly Supervised Sentiment Analysis Through Extractive
Summarization With Wikipedia
ABSTRACT: This paper describes a weakly supervised system for sentiment analysis in the
movie review domain. The objective is to classify a movie review into a
polarity class, positive or negative, based on those sentences bearing opinion
on the movie alone. The irrelevant text, not directly related to the reviewer
opinion on the movie, is left out of analysis. Wikipedia incorporates the world
knowledge of movie-specific features in the system which is used to obtain an
extractive summary of the review, consisting of the reviewer's opinions about
the specific aspects of the movie. This filters out the concepts which are
irrelevant or objective with respect to the given movie. The proposed system,
WikiSent, does not require any labeled data for training. The only weak
supervision arises out of the usage of resources like WordNet, Part-of-Speech
Tagger and Sentiment Lexicons by virtue of their construction. WikiSent
achieves a considerable accuracy improvement over the baseline and has a better
or comparable accuracy to the existing semi-supervised and unsupervised systems
in the domain, on the same dataset. We also perform a general movie review
trend analysis using WikiSent to find the trend in movie-making and the public
acceptance in terms of movie genre, year of release and polarity.
|
1209.2495 | Subhabrata Mukherjee | Subhabrata Mukherjee, Akshat Malu, A.R. Balamurali, Pushpak
Bhattacharyya | TwiSent: A Multistage System for Analyzing Sentiment in Twitter | The paper is available at
http://subhabrata-mukherjee.webs.com/publications.htm | In Proceedings of The 21st ACM Conference on Information and
Knowledge Management (CIKM), 2012 as a poster | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/3.0/ | In this paper, we present TwiSent, a sentiment analysis system for Twitter.
Based on the topic searched, TwiSent collects tweets pertaining to it and
categorizes them into the different polarity classes positive, negative and
objective. However, analyzing micro-blog posts has many inherent challenges
compared to the other text genres. Through TwiSent, we address the problems of
1) Spams pertaining to sentiment analysis in Twitter, 2) Structural anomalies
in the text in the form of incorrect spellings, nonstandard abbreviations,
slangs etc., 3) Entity specificity in the context of the topic searched and 4)
Pragmatics embedded in text. The system performance is evaluated on manually
annotated gold standard data and on an automatically annotated tweet set based
on hashtags. It is a common practice to show the efficacy of a supervised
system on an automatically annotated dataset. However, we show that such a
system achieves lower classification accuracy when tested on a generic Twitter
dataset. We also show that our system performs much better than an existing
system.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2012 04:39:37 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Sep 2012 14:43:49 GMT"
}
] | 2012-09-19T00:00:00 | [
[
"Mukherjee",
"Subhabrata",
""
],
[
"Malu",
"Akshat",
""
],
[
"Balamurali",
"A. R.",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] | TITLE: TwiSent: A Multistage System for Analyzing Sentiment in Twitter
ABSTRACT: In this paper, we present TwiSent, a sentiment analysis system for Twitter.
Based on the topic searched, TwiSent collects tweets pertaining to it and
categorizes them into the different polarity classes positive, negative and
objective. However, analyzing micro-blog posts has many inherent challenges
compared to the other text genres. Through TwiSent, we address the problems of
1) Spams pertaining to sentiment analysis in Twitter, 2) Structural anomalies
in the text in the form of incorrect spellings, nonstandard abbreviations,
slangs etc., 3) Entity specificity in the context of the topic searched and 4)
Pragmatics embedded in text. The system performance is evaluated on manually
annotated gold standard data and on an automatically annotated tweet set based
on hashtags. It is a common practice to show the efficacy of a supervised
system on an automatically annotated dataset. However, we show that such a
system achieves lower classification accuracy when tested on a generic Twitter
dataset. We also show that our system performs much better than an existing
system.
|
1209.3873 | Erwin Lalik | Erwin Lalik | Chaos in oscillatory heat evolution accompanying the sorption of
hydrogen and deuterium in palladium | 17 pages, 5 figures | null | null | null | physics.chem-ph nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aperiodic oscillations in the sorption of H2 or D2 in metallic Pd powder have
been observed, and a novel method to confirm their deterministic rather than
random character has been devised. A theorem relating the square of a function,
with the derivative and integral with variable upper limit of the same function
has been proved and proposed to be used as a base for a chaos-vs-random test.
Both the experimental and the computed time series may be tested to detect
determinism. The result is a single number within the interval [0,2]. The test
is designed in such a way that its result is close to zero for the datasets
that are deterministic and smooth, and close to 2 for the datasets that are non
deterministic (random) or non smooth (discrete). A large variety of the test
results has been obtained for the calorimetric time series recorded in
thermokinetic oscillations, periodic and quasiperiodic, accompanying the
sorption of H2 or D2 with Pd as well as for several non oscillatory
calorimetric curves recorded in this reaction. These experimental datasets, all
coming from presumably deterministic processes, yielded the results clustering
around 0.001. On the other hand, certain databases that were presumably random
or non smooth yielded the test results from 0.7 to 1.9. Against these
benchmarks, the examined, experimental, aperiodic oscillations gave the test
results between 0.004 and 0.01, which appear to be much closer to the
deterministic behavior than to randomness. Consequently, it has been concluded
that the examined cases of aperiodic oscillations in the heat evolution
accompanying the sorption of H2 or D2 in palladium may represent an occurrence
of mathematical chaos in the behavior of this system. Further applicability and
limitations of the test have also been discussed, including its intrinsic
inability to detect determinism in discrete time series.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2012 08:51:11 GMT"
}
] | 2012-09-19T00:00:00 | [
[
"Lalik",
"Erwin",
""
]
] | TITLE: Chaos in oscillatory heat evolution accompanying the sorption of
hydrogen and deuterium in palladium
ABSTRACT: Aperiodic oscillations in the sorption of H2 or D2 in metallic Pd powder have
been observed, and a novel method to confirm their deterministic rather than
random character has been devised. A theorem relating the square of a function,
with the derivative and integral with variable upper limit of the same function
has been proved and proposed to be used as a base for a chaos-vs-random test.
Both the experimental and the computed time series may be tested to detect
determinism. The result is a single number within the interval [0,2]. The test
is designed in such a way that its result is close to zero for the datasets
that are deterministic and smooth, and close to 2 for the datasets that are non
deterministic (random) or non smooth (discrete). A large variety of the test
results has been obtained for the calorimetric time series recorded in
thermokinetic oscillations, periodic and quasiperiodic, accompanying the
sorption of H2 or D2 with Pd as well as for several non oscillatory
calorimetric curves recorded in this reaction. These experimental datasets, all
coming from presumably deterministic processes, yielded the results clustering
around 0.001. On the other hand, certain databases that were presumably random
or non smooth yielded the test results from 0.7 to 1.9. Against these
benchmarks, the examined, experimental, aperiodic oscillations gave the test
results between 0.004 and 0.01, which appear to be much closer to the
deterministic behavior than to randomness. Consequently, it has been concluded
that the examined cases of aperiodic oscillations in the heat evolution
accompanying the sorption of H2 or D2 in palladium may represent an occurrence
of mathematical chaos in the behavior of this system. Further applicability and
limitations of the test have also been discussed, including its intrinsic
inability to detect determinism in discrete time series.
|
1209.4056 | Kashyap Dixit | Kashyap Dixit and Madhav Jha and Abhradeep Thakurta | Testing Lipschitz Property over Product Distribution and its
Applications to Statistical Data Privacy | 17 pages | null | null | null | cs.CR | http://creativecommons.org/licenses/by/3.0/ | In this work, we present a connection between Lipschitz property testing and
a relaxed notion of differential privacy, where we assume that the datasets are
being sampled from a domain according to some distribution defined on it.
Specifically, we show that testing whether an algorithm is private can be
reduced to testing Lipschitz property in the distributional setting.
We also initiate the study of distribution Lipschitz testing. We present an
efficient Lipschitz tester for the hypercube domain when the "distance to
property" is measured with respect to product distribution. Most previous works
in property testing of functions (including prior works on Lipschitz testing)
work with uniform distribution.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2012 18:51:17 GMT"
}
] | 2012-09-19T00:00:00 | [
[
"Dixit",
"Kashyap",
""
],
[
"Jha",
"Madhav",
""
],
[
"Thakurta",
"Abhradeep",
""
]
] | TITLE: Testing Lipschitz Property over Product Distribution and its
Applications to Statistical Data Privacy
ABSTRACT: In this work, we present a connection between Lipschitz property testing and
a relaxed notion of differential privacy, where we assume that the datasets are
being sampled from a domain according to some distribution defined on it.
Specifically, we show that testing whether an algorithm is private can be
reduced to testing Lipschitz property in the distributional setting.
We also initiate the study of distribution Lipschitz testing. We present an
efficient Lipschitz tester for the hypercube domain when the "distance to
property" is measured with respect to product distribution. Most previous works
in property testing of functions (including prior works on Lipschitz testing)
work with uniform distribution.
|
1209.3286 | Nikolay Glazyrin | Nikolay Glazyrin | Music Recommendation System for Million Song Dataset Challenge | 4 pages | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a system that took 8th place in Million Song Dataset challenge
is described. Given the full listening history for 1 million users and half of the
listening history for 110,000 users, participants should predict the missing
half. The system proposed here uses a memory-based collaborative filtering
approach and user-based similarity. A MAP@500 score of 0.15037 was achieved.
| [
{
"version": "v1",
"created": "Fri, 14 Sep 2012 18:59:03 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Sep 2012 18:53:14 GMT"
}
] | 2012-09-18T00:00:00 | [
[
"Glazyrin",
"Nikolay",
""
]
] | TITLE: Music Recommendation System for Million Song Dataset Challenge
ABSTRACT: In this paper a system that took 8th place in Million Song Dataset challenge
is described. Given the full listening history for 1 million users and half of the
listening history for 110,000 users, participants should predict the missing
half. The system proposed here uses a memory-based collaborative filtering
approach and user-based similarity. A MAP@500 score of 0.15037 was achieved.
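As a rough illustration of memory-based, user-based collaborative filtering on implicit listening data (not the contest submission itself), here is a toy sketch with a hand-made binary user-song matrix and cosine similarity:

```python
import numpy as np

# Toy binary user-song listening matrix (rows: users, columns: songs).
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
], dtype=float)

# Cosine similarity between users (memory-based, user-based CF).
norms = np.linalg.norm(R, axis=1, keepdims=True)
S = (R @ R.T) / (norms * norms.T + 1e-12)
np.fill_diagonal(S, 0.0)

# Score unseen songs for user 0 as a similarity-weighted sum over other users.
u = 0
scores = S[u] @ R
scores[R[u] > 0] = -np.inf           # never re-recommend listened songs
top = np.argsort(-scores)[:2]        # the challenge asked for 500 per user
print("recommended song indices for user 0:", top)
```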
|
1209.3332 | George Teodoro | George Teodoro, Tony Pan, Tahsin M. Kurc, Jun Kong, Lee A. D. Cooper,
Joel H. Saltz | High-throughput Execution of Hierarchical Analysis Pipelines on Hybrid
Cluster Platforms | 12 pages, 14 figures | null | null | null | cs.DC cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose, implement, and experimentally evaluate a runtime middleware to
support high-throughput execution on hybrid cluster machines of large-scale
analysis applications. A hybrid cluster machine consists of computation nodes
which have multiple CPUs and general purpose graphics processing units (GPUs).
Our work targets scientific analysis applications in which datasets are
processed in application-specific data chunks, and the processing of a data
chunk is expressed as a hierarchical pipeline of operations. The proposed
middleware system combines a bag-of-tasks style execution with coarse-grain
dataflow execution. Data chunks and associated data processing pipelines are
scheduled across cluster nodes using a demand driven approach, while within a
node operations in a given pipeline instance are scheduled across CPUs and
GPUs. The runtime system implements several optimizations, including
performance aware task scheduling, architecture aware process placement, data
locality conscious task assignment, and data prefetching and asynchronous data
copy, to maximize utilization of the aggregate computing power of CPUs and GPUs
and minimize data copy overheads. The application and performance benefits of
the runtime middleware are demonstrated using an image analysis application,
which is employed in a brain cancer study, on a state-of-the-art hybrid cluster
in which each node has two 6-core CPUs and three GPUs. Our results show that
implementing and scheduling application data processing as a set of fine-grain
operations provide more opportunities for runtime optimizations and attain
better performance than a coarser-grain, monolithic implementation. The
proposed runtime system can achieve high-throughput processing of large
datasets - we were able to process an image dataset consisting of 36,848
4Kx4K-pixel image tiles at about 150 tiles/second rate on 100 nodes.
| [
{
"version": "v1",
"created": "Fri, 14 Sep 2012 21:56:51 GMT"
}
] | 2012-09-18T00:00:00 | [
[
"Teodoro",
"George",
""
],
[
"Pan",
"Tony",
""
],
[
"Kurc",
"Tahsin M.",
""
],
[
"Kong",
"Jun",
""
],
[
"Cooper",
"Lee A. D.",
""
],
[
"Saltz",
"Joel H.",
""
]
] | TITLE: High-throughput Execution of Hierarchical Analysis Pipelines on Hybrid
Cluster Platforms
ABSTRACT: We propose, implement, and experimentally evaluate a runtime middleware to
support high-throughput execution on hybrid cluster machines of large-scale
analysis applications. A hybrid cluster machine consists of computation nodes
which have multiple CPUs and general purpose graphics processing units (GPUs).
Our work targets scientific analysis applications in which datasets are
processed in application-specific data chunks, and the processing of a data
chunk is expressed as a hierarchical pipeline of operations. The proposed
middleware system combines a bag-of-tasks style execution with coarse-grain
dataflow execution. Data chunks and associated data processing pipelines are
scheduled across cluster nodes using a demand driven approach, while within a
node operations in a given pipeline instance are scheduled across CPUs and
GPUs. The runtime system implements several optimizations, including
performance aware task scheduling, architecture aware process placement, data
locality conscious task assignment, and data prefetching and asynchronous data
copy, to maximize utilization of the aggregate computing power of CPUs and GPUs
and minimize data copy overheads. The application and performance benefits of
the runtime middleware are demonstrated using an image analysis application,
which is employed in a brain cancer study, on a state-of-the-art hybrid cluster
in which each node has two 6-core CPUs and three GPUs. Our results show that
implementing and scheduling application data processing as a set of fine-grain
operations provide more opportunities for runtime optimizations and attain
better performance than a coarser-grain, monolithic implementation. The
proposed runtime system can achieve high-throughput processing of large
datasets - we were able to process an image dataset consisting of 36,848
4Kx4K-pixel image tiles at about 150 tiles/second rate on 100 nodes.
|
1209.3433 | Salah A. Aly | Hossam M. Zawbaa, Salah A. Aly, Adnan A. Gutub | A Hajj And Umrah Location Classification System For Video Crowded Scenes | 9 pages, 10 figures, 2 tables, 3 algorithms | null | null | null | cs.CV cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new automatic system for classifying ritual locations in
diverse Hajj and Umrah video scenes is investigated. This challenging subject
has mostly been ignored in the past due to several problems one of which is the
lack of realistic annotated video datasets. HUER Dataset is defined to model
six different Hajj and Umrah ritual locations[26].
The proposed Hajj and Umrah ritual location classification system consists of
four main phases: preprocessing, segmentation, feature extraction, and location
classification. Shot boundary detection and background/foreground
segmentation algorithms are applied to prepare the input video scenes for the
KNN, ANN, and SVM classifiers. The system improves on state-of-the-art results for
Hajj and Umrah location classification, and successfully recognizes the six
Hajj rituals with more than 90% accuracy. The demonstrated experiments
show promising results.
| [
{
"version": "v1",
"created": "Sat, 15 Sep 2012 20:57:51 GMT"
}
] | 2012-09-18T00:00:00 | [
[
"Zawbaa",
"Hossam M.",
""
],
[
"Aly",
"Salah A.",
""
],
[
"Gutub",
"Adnan A.",
""
]
] | TITLE: A Hajj And Umrah Location Classification System For Video Crowded Scenes
ABSTRACT: In this paper, a new automatic system for classifying ritual locations in
diverse Hajj and Umrah video scenes is investigated. This challenging subject
has mostly been ignored in the past due to several problems one of which is the
lack of realistic annotated video datasets. HUER Dataset is defined to model
six different Hajj and Umrah ritual locations[26].
The proposed Hajj and Umrah ritual location classification system consists of
four main phases: preprocessing, segmentation, feature extraction, and location
classification. Shot boundary detection and background/foreground
segmentation algorithms are applied to prepare the input video scenes for the
KNN, ANN, and SVM classifiers. The system improves on state-of-the-art results for
Hajj and Umrah location classification, and successfully recognizes the six
Hajj rituals with more than 90% accuracy. The demonstrated experiments
show promising results.
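The classification stage (KNN, ANN, SVM over features extracted from segmented frames) can be illustrated with a small scikit-learn sketch; the random feature vectors below merely stand in for whatever descriptors the HUER pipeline actually extracts, so this is an assumption-laden illustration rather than the reported system.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature vectors standing in for descriptors extracted from
# segmented video frames; six classes for the six ritual locations.
X = rng.normal(size=(600, 32))
y = rng.integers(0, 6, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```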
|
1209.3694 | Yifei Ma | Yifei Ma, Roman Garnett, Jeff Schneider | Submodularity in Batch Active Learning and Survey Problems on Gaussian
Random Fields | null | null | null | null | cs.LG cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world datasets can be represented in the form of a graph whose edge
weights designate similarities between instances. A discrete Gaussian random
field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior
covariance is the inverse of a graph Laplacian. Minimizing the trace of the
predictive covariance Sigma (V-optimality) on GRFs has proven successful in
batch active learning classification problems with budget constraints. However,
its worst-case bound has been missing. We show that the V-optimality on GRFs as
a function of the batch query set is submodular and hence its greedy selection
algorithm guarantees a (1-1/e) approximation ratio. Moreover, GRF models have
the absence-of-suppressor (AofS) condition. For active survey problems, we
propose a similar survey criterion which minimizes 1'(Sigma)1. In practice,
V-optimality criterion performs better than GPs with mutual information gain
criteria and allows nonuniform costs for different nodes.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2012 15:43:11 GMT"
}
] | 2012-09-18T00:00:00 | [
[
"Ma",
"Yifei",
""
],
[
"Garnett",
"Roman",
""
],
[
"Schneider",
"Jeff",
""
]
] | TITLE: Submodularity in Batch Active Learning and Survey Problems on Gaussian
Random Fields
ABSTRACT: Many real-world datasets can be represented in the form of a graph whose edge
weights designate similarities between instances. A discrete Gaussian random
field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior
covariance is the inverse of a graph Laplacian. Minimizing the trace of the
predictive covariance Sigma (V-optimality) on GRFs has proven successful in
batch active learning classification problems with budget constraints. However,
its worst-case bound has been missing. We show that the V-optimality on GRFs as
a function of the batch query set is submodular and hence its greedy selection
algorithm guarantees a (1-1/e) approximation ratio. Moreover, GRF models have
the absence-of-suppressor (AofS) condition. For active survey problems, we
propose a similar survey criterion which minimizes 1'(Sigma)1. In practice,
V-optimality criterion performs better than GPs with mutual information gain
criteria and allows nonuniform costs for different nodes.
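A minimal sketch of greedy V-optimal batch selection on a GRF, assuming a toy path graph and a small Laplacian regularization to make the prior covariance well defined; it illustrates the criterion the abstract analyzes, not the authors' code.

```python
import numpy as np

# Toy graph: a 6-node path. Regularize the Laplacian so it is invertible.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
Sigma = np.linalg.inv(L + 1e-2 * np.eye(n))  # GRF prior covariance

def posterior_trace(Sigma, queried):
    """Trace of the predictive covariance over the nodes not yet queried."""
    rest = [i for i in range(Sigma.shape[0]) if i not in queried]
    if not queried:
        return float(np.trace(Sigma[np.ix_(rest, rest)]))
    S_rr = Sigma[np.ix_(rest, rest)]
    S_rq = Sigma[np.ix_(rest, queried)]
    S_qq = Sigma[np.ix_(queried, queried)]
    return float(np.trace(S_rr - S_rq @ np.linalg.solve(S_qq, S_rq.T)))

# Greedy V-optimal batch selection; submodularity gives the (1-1/e) guarantee.
budget, queried = 3, []
for _ in range(budget):
    best = min((i for i in range(n) if i not in queried),
               key=lambda i: posterior_trace(Sigma, queried + [i]))
    queried.append(best)
print("greedy batch:", queried)
```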
|
1209.3026 | Hany SalahEldeen | Hany M. SalahEldeen and Michael L. Nelson | Losing My Revolution: How Many Resources Shared on Social Media Have
Been Lost? | 12 pages, Theory and Practice of Digital Libraries (TPDL) 2012 | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media content has grown exponentially in the recent years and the role
of social media has evolved from just narrating life events to actually shaping
them. In this paper we explore how many resources shared in social media are
still available on the live web or in public web archives. By analyzing six
different event-centric datasets of resources shared in social media in the
period from June 2009 to March 2012, we found about 11% lost and 20% archived
after just a year and an average of 27% lost and 41% archived after two and a
half years. Furthermore, we found a nearly linear relationship between time of
sharing of the resource and the percentage lost, with a slightly less linear
relationship between time of sharing and archiving coverage of the resource.
From this model we conclude that after the first year of publishing, nearly 11%
of shared resources will be lost and after that we will continue to lose 0.02%
per day.
| [
{
"version": "v1",
"created": "Thu, 13 Sep 2012 20:08:07 GMT"
}
] | 2012-09-17T00:00:00 | [
[
"SalahEldeen",
"Hany M.",
""
],
[
"Nelson",
"Michael L.",
""
]
] | TITLE: Losing My Revolution: How Many Resources Shared on Social Media Have
Been Lost?
ABSTRACT: Social media content has grown exponentially in the recent years and the role
of social media has evolved from just narrating life events to actually shaping
them. In this paper we explore how many resources shared in social media are
still available on the live web or in public web archives. By analyzing six
different event-centric datasets of resources shared in social media in the
period from June 2009 to March 2012, we found about 11% lost and 20% archived
after just a year and an average of 27% lost and 41% archived after two and a
half years. Furthermore, we found a nearly linear relationship between time of
sharing of the resource and the percentage lost, with a slightly less linear
relationship between time of sharing and archiving coverage of the resource.
From this model we conclude that after the first year of publishing, nearly 11%
of shared resources will be lost and after that we will continue to lose 0.02%
per day.
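Read literally, the abstract's rule of thumb (roughly 11% lost after the first year, then about 0.02 percentage points more per day) can be turned into a tiny estimator; the linear ramp over the first year is my own interpolation assumption, not something the paper states.

```python
def expected_loss_fraction(days_since_sharing: float) -> float:
    """Rough loss estimate implied by the abstract: ~11% after the first
    year, then roughly an additional 0.02 percentage points per day.
    The linear ramp inside the first year is an assumption."""
    if days_since_sharing <= 365:
        return 0.11 * days_since_sharing / 365
    return 0.11 + 0.0002 * (days_since_sharing - 365)

for days in (365, int(2.5 * 365)):
    print(days, "days ->", round(expected_loss_fraction(days), 3))
# 2.5 years gives ~0.22, the same ballpark as the reported 27% average.
```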
|
1209.3089 | Lei Wu Dr. | Mehdi Adda, Lei Wu, Sharon White, Yi Feng | Pattern Detection with Rare Item-set Mining | 17 pages, 5 figures, International Journal on Soft Computing,
Artificial Intelligence and Applications (IJSCAI), Vol.1, No.1, August 2012 | null | null | null | cs.SE cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of new and interesting patterns in large datasets, known as
data mining, draws more and more interest as the quantities of available data
are exploding. Data mining techniques may be applied to different domains and
fields such as computer science, health sector, insurances, homeland security,
banking and finance, etc. In this paper we are interested in the discovery of a
specific category of patterns, known as rare and non-present patterns. We
present a novel approach towards the discovery of non-present patterns using
rare item-set mining.
| [
{
"version": "v1",
"created": "Fri, 14 Sep 2012 04:25:56 GMT"
}
] | 2012-09-17T00:00:00 | [
[
"Adda",
"Mehdi",
""
],
[
"Wu",
"Lei",
""
],
[
"White",
"Sharon",
""
],
[
"Feng",
"Yi",
""
]
] | TITLE: Pattern Detection with Rare Item-set Mining
ABSTRACT: The discovery of new and interesting patterns in large datasets, known as
data mining, draws more and more interest as the quantities of available data
are exploding. Data mining techniques may be applied to different domains and
fields such as computer science, health sector, insurances, homeland security,
banking and finance, etc. In this paper we are interested in the discovery of a
specific category of patterns, known as rare and non-present patterns. We
present a novel approach towards the discovery of non-present patterns using
rare item-set mining.
|
1209.2868 | Georg Groh | Georg Groh and Florian Straub and Benjamin Koster | Spatio-Temporal Small Worlds for Decentralized Information Retrieval in
Social Networking | null | null | null | null | cs.SI cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss foundations and options for alternative, agent-based information
retrieval (IR) approaches in Social Networking, especially Decentralized and
Mobile Social Networking scenarios. In addition to usual semantic contexts,
these approaches make use of long-term social and spatio-temporal contexts in
order to satisfy conscious as well as unconscious information needs according
to Human IR heuristics. Using a large Twitter dataset, we investigate these
approaches and especially investigate to what extent spatio-temporal
contexts can act as a conceptual bracket implicating social and semantic
cohesion, giving rise to the concept of Spatio-Temporal Small Worlds.
| [
{
"version": "v1",
"created": "Thu, 13 Sep 2012 12:11:10 GMT"
}
] | 2012-09-14T00:00:00 | [
[
"Groh",
"Georg",
""
],
[
"Straub",
"Florian",
""
],
[
"Koster",
"Benjamin",
""
]
] | TITLE: Spatio-Temporal Small Worlds for Decentralized Information Retrieval in
Social Networking
ABSTRACT: We discuss foundations and options for alternative, agent-based information
retrieval (IR) approaches in Social Networking, especially Decentralized and
Mobile Social Networking scenarios. In addition to usual semantic contexts,
these approaches make use of long-term social and spatio-temporal contexts in
order to satisfy conscious as well as unconscious information needs according
to Human IR heuristics. Using a large Twitter dataset, we investigate these
approaches and especially investigate to what extent spatio-temporal
contexts can act as a conceptual bracket implicating social and semantic
cohesion, giving rise to the concept of Spatio-Temporal Small Worlds.
|
1209.2553 | Malathi Subramanian | S. Malathi and S. Sridhar | Optimization of fuzzy analogy in software cost estimation using
linguistic variables | 14 pages, 8 figures; Journal of Systems and Software, 2011. arXiv
admin note: text overlap with arXiv:1112.3877 by other authors | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most important objectives of software engineering community has
been the increase of useful models that beneficially explain the development of
life cycle and precisely calculate the effort of software cost estimation. In
analogy concept, there is deficiency in handling the datasets containing
categorical variables though there are innumerable methods to estimate the
cost. Due to the nature of software engineering domain, generally project
attributes are often measured in terms of linguistic values such as very low,
low, high and very high. The imprecise nature of such value represents the
uncertainty and vagueness in their elucidation. However, there is no efficient
method that can directly deal with the categorical variables and tolerate such
imprecision and uncertainty without taking the classical intervals and numeric
value approaches. In this paper, a new approach for optimization based on fuzzy
logic, linguistic quantifiers and analogy based reasoning is proposed to
improve the performance of the effort in software project when they are
described in either numerical or categorical data. The performance of this
proposed method exemplifies a pragmatic validation based on the historical NASA
dataset. The results were analyzed using the prediction criterion and indicate
that the proposed method can produce more explainable results than other
machine learning methods.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2012 10:35:01 GMT"
}
] | 2012-09-13T00:00:00 | [
[
"Malathi",
"S.",
""
],
[
"Sridhar",
"S.",
""
]
] | TITLE: Optimization of fuzzy analogy in software cost estimation using
linguistic variables
ABSTRACT: One of the most important objectives of software engineering community has
been the increase of useful models that beneficially explain the development of
life cycle and precisely calculate the effort of software cost estimation. In
analogy concept, there is deficiency in handling the datasets containing
categorical variables though there are innumerable methods to estimate the
cost. Due to the nature of software engineering domain, generally project
attributes are often measured in terms of linguistic values such as very low,
low, high and very high. The imprecise nature of such value represents the
uncertainty and vagueness in their elucidation. However, there is no efficient
method that can directly deal with the categorical variables and tolerate such
imprecision and uncertainty without taking the classical intervals and numeric
value approaches. In this paper, a new approach for optimization based on fuzzy
logic, linguistic quantifiers and analogy based reasoning is proposed to
improve the performance of the effort in software project when they are
described in either numerical or categorical data. The performance of this
proposed method exemplifies a pragmatic validation based on the historical NASA
dataset. The results were analyzed using the prediction criterion and indicate
that the proposed method can produce more explainable results than other
machine learning methods.
|
1209.1322 | Wahbeh Qardaji | Wahbeh Qardaji, Weining Yang, Ninghui Li | Differentially Private Grids for Geospatial Data | null | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle the problem of constructing a differentially private
synopsis for two-dimensional datasets such as geospatial datasets. The current
state-of-the-art methods work by performing recursive binary partitioning of
the data domains, and constructing a hierarchy of partitions. We show that the
key challenge in partition-based synopsis methods lies in choosing the right
partition granularity to balance the noise error and the non-uniformity error.
We study the uniform-grid approach, which applies an equi-width grid of a
certain size over the data domain and then issues independent count queries on
the grid cells. This method has received no attention in the literature,
probably due to the fact that no good method for choosing a grid size was
known. Based on an analysis of the two kinds of errors, we propose a method for
choosing the grid size. Experimental results validate our method, and show that
this approach performs as well as, and often times better than, the
state-of-the-art methods. We further introduce a novel adaptive-grid method.
The adaptive grid method lays a coarse-grained grid over the dataset, and then
further partitions each cell according to its noisy count. Both levels of
partitions are then used in answering queries over the dataset. This method
exploits the need to have finer granularity partitioning over dense regions
and, at the same time, coarse partitioning over sparse regions. Through
extensive experiments on real-world datasets, we show that this approach
consistently and significantly outperforms the uniform-grid method and other
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 6 Sep 2012 15:47:45 GMT"
}
] | 2012-09-07T00:00:00 | [
[
"Qardaji",
"Wahbeh",
""
],
[
"Yang",
"Weining",
""
],
[
"Li",
"Ninghui",
""
]
] | TITLE: Differentially Private Grids for Geospatial Data
ABSTRACT: In this paper, we tackle the problem of constructing a differentially private
synopsis for two-dimensional datasets such as geospatial datasets. The current
state-of-the-art methods work by performing recursive binary partitioning of
the data domains, and constructing a hierarchy of partitions. We show that the
key challenge in partition-based synopsis methods lies in choosing the right
partition granularity to balance the noise error and the non-uniformity error.
We study the uniform-grid approach, which applies an equi-width grid of a
certain size over the data domain and then issues independent count queries on
the grid cells. This method has received no attention in the literature,
probably due to the fact that no good method for choosing a grid size was
known. Based on an analysis of the two kinds of errors, we propose a method for
choosing the grid size. Experimental results validate our method, and show that
this approach performs as well as, and often times better than, the
state-of-the-art methods. We further introduce a novel adaptive-grid method.
The adaptive grid method lays a coarse-grained grid over the dataset, and then
further partitions each cell according to its noisy count. Both levels of
partitions are then used in answering queries over the dataset. This method
exploits the need to have finer granularity partitioning over dense regions
and, at the same time, coarse partitioning over sparse regions. Through
extensive experiments on real-world datasets, we show that this approach
consistently and significantly outperforms the uniform-grid method and other
state-of-the-art methods.
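The uniform-grid baseline is straightforward to sketch: overlay an equi-width m x m grid and add Laplace(1/epsilon) noise to every cell count. The paper's contribution is how to choose m; the sketch below simply fixes m by hand and uses synthetic points, so it is an illustration of the mechanism, not of the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_grid_synopsis(points, m, epsilon, bounds):
    """Equi-width m x m grid over `bounds` with Laplace noise on each count.
    Each point falls in exactly one cell, so the sensitivity of the count
    vector is 1 and Laplace(1/epsilon) noise per cell suffices."""
    (xmin, xmax), (ymin, ymax) = bounds
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=m,
                             range=[[xmin, xmax], [ymin, ymax]])
    return H + rng.laplace(scale=1.0 / epsilon, size=H.shape)

# Hypothetical 2-D points and a hand-picked grid size (the paper's point is
# precisely how to choose m; here it is just assumed).
pts = rng.uniform(0, 1, size=(10_000, 2))
noisy = uniform_grid_synopsis(pts, m=16, epsilon=0.5, bounds=((0, 1), (0, 1)))

# Answer a range query by summing the noisy cells it covers.
print("noisy count in [0,0.5] x [0,0.5]:", round(float(noisy[:8, :8].sum()), 1))
```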
|
1209.1323 | Sheng Yu | Sheng Yu and Subhash Kak | An Empirical Study of How Users Adopt Famous Entities | 7 pages, 10 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Users of social networking services construct their personal social networks
by creating asymmetric and symmetric social links. Users usually follow friends
and selected famous entities that include celebrities and news agencies. In
this paper, we investigate how users follow famous entities. We statically and
dynamically analyze data within a huge social networking service with a
manually classified set of famous entities. The results show that the in-degree
of famous entities does not fit to power-law distribution. Conversely, the
maximum number of famous followees in one category for each user shows
a power-law property. To the best of our knowledge, there is no research work on this
topic with a human-chosen famous entity dataset from real life. These findings
might be helpful in microblogging marketing and user classification.
| [
{
"version": "v1",
"created": "Thu, 6 Sep 2012 15:47:55 GMT"
}
] | 2012-09-07T00:00:00 | [
[
"Yu",
"Sheng",
""
],
[
"Kak",
"Subhash",
""
]
] | TITLE: An Empirical Study of How Users Adopt Famous Entities
ABSTRACT: Users of social networking services construct their personal social networks
by creating asymmetric and symmetric social links. Users usually follow friends
and selected famous entities that include celebrities and news agencies. In
this paper, we investigate how users follow famous entities. We statically and
dynamically analyze data within a huge social networking service with a
manually classified set of famous entities. The results show that the in-degree
of famous entities does not fit to power-law distribution. Conversely, the
maximum number of famous followees in one category for each user shows
a power-law property. To the best of our knowledge, there is no research work on this
topic with a human-chosen famous entity dataset from real life. These findings
might be helpful in microblogging marketing and user classification.
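The abstract does not say how the power-law property was assessed, so the following is only a crude illustrative check on synthetic data: a log-log fit of the empirical complementary CDF (a rigorous analysis would use a method such as Clauset-Shalizi-Newman).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for "maximum number of famous followees in one category"
# per user: a heavy-tailed Pareto-like sample.
x = rng.pareto(a=2.0, size=5000) + 1.0

# Empirical complementary CDF; a straight line in log-log space is
# consistent with (but does not prove) a power-law tail.
xs = np.sort(x)
ccdf = 1.0 - np.arange(1, len(xs) + 1) / len(xs)
mask = ccdf > 0
slope, _ = np.polyfit(np.log(xs[mask]), np.log(ccdf[mask]), 1)
print("fitted tail exponent (negative slope):", round(-slope, 2))
```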
|
1209.0913 | Jun Wang | Jun Wang and Alexandros Kalousis | Structuring Relevant Feature Sets with Multiple Model Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection is one of the most prominent learning tasks, especially in
high-dimensional datasets in which the goal is to understand the mechanisms
that underlie the learning dataset. However, most methods typically deliver just
a flat set of relevant features and provide no further information on what kind
of structures, e.g. feature groupings, might underlie the set of relevant
features. In this paper we propose a new learning paradigm in which our goal is
to uncover the structures that underlie the set of relevant features for a given
learning problem. We uncover two types of feature sets: non-replaceable
features that contain important information about the target variable and
cannot be replaced by other features, and functionally similar feature sets
that can be used interchangeably in learned models, given the presence of the
non-replaceable features, with no change in the predictive performance. To do
so we propose a new learning algorithm that learns a number of disjoint models
using a model disjointness regularization constraint together with a constraint
on the predictive agreement of the disjoint models. We explore the behavior of
our approach on a number of high-dimensional datasets, and show that, as
expected by their construction, these satisfy a number of properties. Namely,
model disjointness, a high predictive agreement, and a similar predictive
performance to models learned on the full set of relevant features. The ability
to structure the set of relevant features in such a manner can become a
valuable tool in different applications of scientific knowledge discovery.
| [
{
"version": "v1",
"created": "Wed, 5 Sep 2012 10:08:02 GMT"
}
] | 2012-09-06T00:00:00 | [
[
"Wang",
"Jun",
""
],
[
"Kalousis",
"Alexandros",
""
]
] | TITLE: Structuring Relevant Feature Sets with Multiple Model Learning
ABSTRACT: Feature selection is one of the most prominent learning tasks, especially in
high-dimensional datasets in which the goal is to understand the mechanisms
that underlie the learning dataset. However, most methods typically deliver just
a flat set of relevant features and provide no further information on what kind
of structures, e.g. feature groupings, might underlie the set of relevant
features. In this paper we propose a new learning paradigm in which our goal is
to uncover the structures that underlie the set of relevant features for a given
learning problem. We uncover two types of feature sets: non-replaceable
features that contain important information about the target variable and
cannot be replaced by other features, and functionally similar feature sets
that can be used interchangeably in learned models, given the presence of the
non-replaceable features, with no change in the predictive performance. To do
so we propose a new learning algorithm that learns a number of disjoint models
using a model disjointness regularization constraint together with a constraint
on the predictive agreement of the disjoint models. We explore the behavior of
our approach on a number of high-dimensional datasets, and show that, as
expected by their construction, these satisfy a number of properties. Namely,
model disjointness, a high predictive agreement, and a similar predictive
performance to models learned on the full set of relevant features. The ability
to structure the set of relevant features in such a manner can become a
valuable tool in different applications of scientific knowledge discovery.
|
1205.5407 | Deniz Yuret | Deniz Yuret | FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely
Lexical Substitutes Based on an N-gram Language Model | 4 pages, 1 figure, to appear in IEEE Signal Processing Letters | null | 10.1109/LSP.2012.2215587 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lexical substitutes have found use in areas such as paraphrasing, text
simplification, machine translation, word sense disambiguation, and part of
speech induction. However the computational complexity of accurately
identifying the most likely substitutes for a word has made large scale
experiments difficult. In this paper I introduce a new search algorithm,
FASTSUBS, that is guaranteed to find the K most likely lexical substitutes for
a given word in a sentence based on an n-gram language model. The computation
is sub-linear in both K and the vocabulary size V. An implementation of the
algorithm and a dataset with the top 100 substitutes of each token in the WSJ
section of the Penn Treebank are available at http://goo.gl/jzKH0.
| [
{
"version": "v1",
"created": "Thu, 24 May 2012 11:53:41 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Sep 2012 07:54:47 GMT"
}
] | 2012-09-04T00:00:00 | [
[
"Yuret",
"Deniz",
""
]
] | TITLE: FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely
Lexical Substitutes Based on an N-gram Language Model
ABSTRACT: Lexical substitutes have found use in areas such as paraphrasing, text
simplification, machine translation, word sense disambiguation, and part of
speech induction. However the computational complexity of accurately
identifying the most likely substitutes for a word has made large scale
experiments difficult. In this paper I introduce a new search algorithm,
FASTSUBS, that is guaranteed to find the K most likely lexical substitutes for
a given word in a sentence based on an n-gram language model. The computation
is sub-linear in both K and the vocabulary size V. An implementation of the
algorithm and a dataset with the top 100 substitutes of each token in the WSJ
section of the Penn Treebank are available at http://goo.gl/jzKH0.
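To make concrete the problem FASTSUBS speeds up, here is the brute-force baseline it avoids: score every vocabulary word with a toy, add-alpha smoothed bigram model and keep the top K. This is emphatically not the FASTSUBS algorithm, whose whole point is to avoid scanning the full vocabulary.

```python
import heapq
import math
from collections import defaultdict

# Toy bigram counts (in practice these come from a large n-gram LM).
bigrams, unigrams = defaultdict(int), defaultdict(int)
for sent in [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]:
    for w in sent:
        unigrams[w] += 1
    for a, b in zip(sent, sent[1:]):
        bigrams[(a, b)] += 1
V = list(unigrams)

def score(prev, w, nxt, alpha=0.1):
    """Add-alpha smoothed log-probability of placing w between prev and nxt."""
    def p(a, b):
        return (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * len(V))
    return math.log(p(prev, w)) + math.log(p(w, nxt))

def top_k_substitutes(prev, nxt, k=2):
    """Brute force over the whole vocabulary; FASTSUBS exists to avoid this."""
    return heapq.nlargest(k, V, key=lambda w: score(prev, w, nxt))

print(top_k_substitutes("the", "sat"))  # plausible fillers for "the ___ sat"
```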
|
1208.5801 | Nivan Ferreira Jr | Nivan Ferreira, James T. Klosowski, Carlos Scheidegger, Claudio Silva | Vector Field k-Means: Clustering Trajectories by Fitting Multiple Vector
Fields | 30 pages, 15 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientists study trajectory data to understand trends in movement patterns,
such as human mobility for traffic analysis and urban planning. There is a
pressing need for scalable and efficient techniques for analyzing this data and
discovering the underlying patterns. In this paper, we introduce a novel
technique which we call vector-field $k$-means.
The central idea of our approach is to use vector fields to induce a
similarity notion between trajectories. Other clustering algorithms seek a
representative trajectory that best describes each cluster, much like $k$-means
identifies a representative "center" for each cluster. Vector-field $k$-means,
on the other hand, recognizes that in all but the simplest examples, no single
trajectory adequately describes a cluster. Our approach is based on the premise
that movement trends in trajectory data can be modeled as flows within multiple
vector fields, and the vector field itself is what defines each of the
clusters. We also show how vector-field $k$-means connects techniques for
scalar field design on meshes and $k$-means clustering.
We present an algorithm that finds a locally optimal clustering of
trajectories into vector fields, and demonstrate how vector-field $k$-means can
be used to mine patterns from trajectory data. We present experimental evidence
of its effectiveness and efficiency using several datasets, including
historical hurricane data, GPS tracks of people and vehicles, and anonymous
call records from a large phone company. We compare our results to previous
trajectory clustering techniques, and find that our algorithm performs faster
in practice than the current state-of-the-art in trajectory clustering, in some
examples by a large margin.
| [
{
"version": "v1",
"created": "Tue, 28 Aug 2012 21:51:36 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Aug 2012 18:17:40 GMT"
}
] | 2012-09-03T00:00:00 | [
[
"Ferreira",
"Nivan",
""
],
[
"Klosowski",
"James T.",
""
],
[
"Scheidegger",
"Carlos",
""
],
[
"Silva",
"Claudio",
""
]
] | TITLE: Vector Field k-Means: Clustering Trajectories by Fitting Multiple Vector
Fields
ABSTRACT: Scientists study trajectory data to understand trends in movement patterns,
such as human mobility for traffic analysis and urban planning. There is a
pressing need for scalable and efficient techniques for analyzing this data and
discovering the underlying patterns. In this paper, we introduce a novel
technique which we call vector-field $k$-means.
The central idea of our approach is to use vector fields to induce a
similarity notion between trajectories. Other clustering algorithms seek a
representative trajectory that best describes each cluster, much like $k$-means
identifies a representative "center" for each cluster. Vector-field $k$-means,
on the other hand, recognizes that in all but the simplest examples, no single
trajectory adequately describes a cluster. Our approach is based on the premise
that movement trends in trajectory data can be modeled as flows within multiple
vector fields, and the vector field itself is what defines each of the
clusters. We also show how vector-field $k$-means connects techniques for
scalar field design on meshes and $k$-means clustering.
We present an algorithm that finds a locally optimal clustering of
trajectories into vector fields, and demonstrate how vector-field $k$-means can
be used to mine patterns from trajectory data. We present experimental evidence
of its effectiveness and efficiency using several datasets, including
historical hurricane data, GPS tracks of people and vehicles, and anonymous
call records from a large phone company. We compare our results to previous
trajectory clustering techniques, and find that our algorithm performs faster
in practice than the current state-of-the-art in trajectory clustering, in some
examples by a large margin.
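As a drastically simplified illustration of the alternating fit-and-assign structure (assuming each cluster's vector field is a single constant vector rather than a field defined on a mesh), here is a toy sketch on synthetic velocity samples; the actual method fits smooth fields and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy trajectories given as arrays of 2-D velocity samples: ten roughly
# eastbound and ten roughly northbound movement patterns.
trajs = [rng.normal([1.0, 0.0], 0.1, size=(20, 2)) for _ in range(10)] + \
        [rng.normal([0.0, 1.0], 0.1, size=(20, 2)) for _ in range(10)]

def constant_field_kmeans(trajs, k=2, iters=20):
    """Each cluster's 'vector field' is a single constant vector fitted to
    the velocities of its member trajectories (a gross simplification)."""
    fields = np.stack([trajs[i].mean(axis=0) for i in range(k)])  # init
    labels = [0] * len(trajs)
    for _ in range(iters):
        # Assign each trajectory to the field with the smallest fit error.
        labels = [int(np.argmin([((t - f) ** 2).sum() for f in fields]))
                  for t in trajs]
        # Refit each field as the mean velocity of its members.
        for c in range(k):
            members = [t for t, lab in zip(trajs, labels) if lab == c]
            if members:
                fields[c] = np.concatenate(members).mean(axis=0)
    return labels, fields

labels, fields = constant_field_kmeans(trajs)
print("fitted cluster fields:", np.round(fields, 2))
```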
|
1201.4481 | Vadim Zotev | Vadim Zotev, Han Yuan, Raquel Phillips, Jerzy Bodurka | EEG-assisted retrospective motion correction for fMRI: E-REMCOR | 19 pages, 10 figures, to appear in NeuroImage | NeuroImage 63 (2012) 698-712 | 10.1016/j.neuroimage.2012.07.031 | null | physics.med-ph physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method for retrospective motion correction of fMRI data in
simultaneous EEG-fMRI that employs the EEG array as a sensitive motion
detector. EEG motion artifacts are used to generate motion regressors
describing rotational head movements with millisecond temporal resolution.
These regressors are utilized for slice-specific motion correction of
unprocessed fMRI data. Performance of the method is demonstrated by correction
of fMRI data from five patients with major depressive disorder, who exhibited
head movements by 1-3 mm during a resting EEG-fMRI run. The fMRI datasets,
corrected using eight to ten EEG-based motion regressors, show significant
improvements in temporal SNR (TSNR) of fMRI time series, particularly in the
frontal brain regions and near the surface of the brain. The TSNR improvements
are as high as 50% for large brain areas in single-subject analysis and as high
as 25% when the results are averaged across the subjects. Simultaneous
application of the EEG-based motion correction and physiological noise
correction by means of RETROICOR leads to average TSNR enhancements as high as
35% for large brain regions. These TSNR improvements are largely preserved
after the subsequent fMRI volume registration and regression of fMRI motion
parameters. The proposed EEG-assisted method of retrospective fMRI motion
correction (referred to as E-REMCOR) can be used to improve quality of fMRI
data with severe motion artifacts and to reduce spurious correlations between
the EEG and fMRI data caused by head movements. It does not require any
specialized equipment beyond the standard EEG-fMRI instrumentation and can be
applied retrospectively to any existing EEG-fMRI data set.
| [
{
"version": "v1",
"created": "Sat, 21 Jan 2012 15:19:12 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Jul 2012 19:05:35 GMT"
}
] | 2012-08-31T00:00:00 | [
[
"Zotev",
"Vadim",
""
],
[
"Yuan",
"Han",
""
],
[
"Phillips",
"Raquel",
""
],
[
"Bodurka",
"Jerzy",
""
]
] | TITLE: EEG-assisted retrospective motion correction for fMRI: E-REMCOR
ABSTRACT: We propose a method for retrospective motion correction of fMRI data in
simultaneous EEG-fMRI that employs the EEG array as a sensitive motion
detector. EEG motion artifacts are used to generate motion regressors
describing rotational head movements with millisecond temporal resolution.
These regressors are utilized for slice-specific motion correction of
unprocessed fMRI data. Performance of the method is demonstrated by correction
of fMRI data from five patients with major depressive disorder, who exhibited
head movements by 1-3 mm during a resting EEG-fMRI run. The fMRI datasets,
corrected using eight to ten EEG-based motion regressors, show significant
improvements in temporal SNR (TSNR) of fMRI time series, particularly in the
frontal brain regions and near the surface of the brain. The TSNR improvements
are as high as 50% for large brain areas in single-subject analysis and as high
as 25% when the results are averaged across the subjects. Simultaneous
application of the EEG-based motion correction and physiological noise
correction by means of RETROICOR leads to average TSNR enhancements as high as
35% for large brain regions. These TSNR improvements are largely preserved
after the subsequent fMRI volume registration and regression of fMRI motion
parameters. The proposed EEG-assisted method of retrospective fMRI motion
correction (referred to as E-REMCOR) can be used to improve quality of fMRI
data with severe motion artifacts and to reduce spurious correlations between
the EEG and fMRI data caused by head movements. It does not require any
specialized equipment beyond the standard EEG-fMRI instrumentation and can be
applied retrospectively to any existing EEG-fMRI data set.
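The core numerical step of using motion regressors retrospectively, regressing them out of each voxel time series by least squares, can be sketched as below; the regressors and fMRI data are synthetic stand-ins, and the construction of the EEG-based regressors (the paper's actual contribution) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_vox, n_reg = 200, 50, 8   # time points, voxels, motion regressors

# Hypothetical EEG-derived motion regressors and fMRI time series in which
# part of the variance is motion-related.
M = rng.normal(size=(T, n_reg))
fmri = rng.normal(size=(T, n_vox)) + 0.5 * (M @ rng.normal(size=(n_reg, n_vox)))

# Regress the motion regressors (plus an intercept) out of every voxel.
X = np.column_stack([np.ones(T), M])
beta, *_ = np.linalg.lstsq(X, fmri, rcond=None)
corrected = fmri - X @ beta

print("variance before:", round(float(fmri.var()), 3),
      "after:", round(float(corrected.var()), 3))
```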
|
1208.6137 | Deepak Kumar | Deepak Kumar, M N Anil Prasad and A G Ramakrishnan | Benchmarking recognition results on word image datasets | 16 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have benchmarked the maximum obtainable recognition accuracy on various
word image datasets using manual segmentation and a currently available
commercial OCR. We have developed a Matlab program, with graphical user
interface, for semi-automated pixel level segmentation of word images. We
discuss the advantages of pixel level annotation. We have covered five
databases adding up to over 3600 word images. These word images have been
cropped from camera captured scene, born-digital and street view images. We
recognize the segmented word image using the trial version of Nuance Omnipage
OCR. We also discuss how the degradations introduced during acquisition or
inaccuracies introduced during creation of word images affect the recognition
of the word present in the image. Word images for different kinds of
degradations and correction for slant and curvy nature of words are also
discussed. The word recognition rates obtained on ICDAR 2003, Sign evaluation,
Street view, Born-digital and ICDAR 2011 datasets are 83.9%, 89.3%, 79.6%,
88.5% and 86.7% respectively.
| [
{
"version": "v1",
"created": "Thu, 30 Aug 2012 11:24:44 GMT"
}
] | 2012-08-31T00:00:00 | [
[
"Kumar",
"Deepak",
""
],
[
"Prasad",
"M N Anil",
""
],
[
"Ramakrishnan",
"A G",
""
]
] | TITLE: Benchmarking recognition results on word image datasets
ABSTRACT: We have benchmarked the maximum obtainable recognition accuracy on various
word image datasets using manual segmentation and a currently available
commercial OCR. We have developed a Matlab program, with graphical user
interface, for semi-automated pixel level segmentation of word images. We
discuss the advantages of pixel level annotation. We have covered five
databases adding up to over 3600 word images. These word images have been
cropped from camera captured scene, born-digital and street view images. We
recognize the segmented word image using the trial version of Nuance Omnipage
OCR. We also discuss how the degradations introduced during acquisition or
inaccuracies introduced during creation of word images affect the recognition
of the word present in the image. Word images for different kinds of
degradations and correction for slant and curvy nature of words are also
discussed. The word recognition rates obtained on ICDAR 2003, Sign evaluation,
Street view, Born-digital and ICDAR 2011 datasets are 83.9%, 89.3%, 79.6%,
88.5% and 86.7% respectively.
|