id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1402.2676 | Parameswaran Raman | Hyokun Yun, Parameswaran Raman, S.V.N. Vishwanathan | Ranking via Robust Binary Classification and Parallel Parameter
Estimation in Large-Scale Data | null | null | null | null | stat.ML cs.DC cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose RoBiRank, a ranking algorithm that is motivated by observing a
close connection between evaluation metrics for learning to rank and loss
functions for robust classification. The algorithm shows a very competitive
performance on standard benchmark datasets against other representative
algorithms in the literature. On the other hand, in large scale problems where
explicit feature vectors and scores are not given, our algorithm can be
efficiently parallelized across a large number of machines; for a task that
requires 386,133 x 49,824,519 pairwise interactions between items to be ranked,
our algorithm finds solutions that are of dramatically higher quality than those
that can be found by a state-of-the-art competitor algorithm, given the same amount
of wall-clock time for computation.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2014 21:39:54 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2014 21:08:34 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Apr 2014 06:19:04 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Aug 2014 06:00:32 GMT"
}
] | 2014-08-22T00:00:00 | [
[
"Yun",
"Hyokun",
""
],
[
"Raman",
"Parameswaran",
""
],
[
"Vishwanathan",
"S. V. N.",
""
]
] | TITLE: Ranking via Robust Binary Classification and Parallel Parameter
Estimation in Large-Scale Data
ABSTRACT: We propose RoBiRank, a ranking algorithm that is motivated by observing a
close connection between evaluation metrics for learning to rank and loss
functions for robust classification. The algorithm shows a very competitive
performance on standard benchmark datasets against other representative
algorithms in the literature. On the other hand, in large scale problems where
explicit feature vectors and scores are not given, our algorithm can be
efficiently parallelized across a large number of machines; for a task that
requires 386,133 x 49,824,519 pairwise interactions between items to be ranked,
our algorithm finds solutions that are of dramatically higher quality than those
that can be found by a state-of-the-art competitor algorithm, given the same amount
of wall-clock time for computation.
| no_new_dataset | 0.940572 |
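The abstract above builds a ranking objective from a robust transformation of a classification loss. Below is a minimal, hypothetical sketch of that general idea, not the authors' RoBiRank implementation: each pairwise logistic loss is passed through a bounded `log1p` transformation so that badly mis-ranked pairs cannot dominate the objective. The function names, the exact transformation, and the toy data are all assumptions for illustration.

```python
import numpy as np

def pairwise_logistic(scores, relevance):
    """Standard pairwise logistic loss over pairs where item i outranks item j."""
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if relevance[i] > relevance[j]:
                loss += np.log1p(np.exp(-(scores[i] - scores[j])))
    return loss

def robust_pairwise(scores, relevance):
    """Same pairs, but each logistic term is passed through log1p, bounding its influence."""
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if relevance[i] > relevance[j]:
                loss += np.log1p(np.log1p(np.exp(-(scores[i] - scores[j]))))
    return loss

scores = np.array([2.0, 0.5, -1.0])   # model scores for three items of one query
relevance = np.array([2, 1, 0])       # graded relevance labels
print(pairwise_logistic(scores, relevance), robust_pairwise(scores, relevance))
```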
1408.3863 | Christoph Lange | Christoph Lange and Angelo Di Iorio | Semantic Publishing Challenge -- Assessing the Quality of Scientific
Output | To appear in: Valentina Presutti and Milan Stankovic and Erik Cambria
and Reforgiato Recupero, Diego and Di Iorio, Angelo and Christoph Lange and
Di Noia, Tommaso and Ivan Cantador (eds.). Semantic Web Evaluation Challenges
2014. Number 457 in Communications in Computer and Information Science,
Springer, 2014 | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linked Open Datasets about scholarly publications enable the development and
integration of sophisticated end-user services; however, richer datasets are
still needed. The first goal of this Challenge was to investigate novel
approaches to obtain such semantic data. In particular, we were seeking methods
and tools to extract information from scholarly publications, to publish it as
LOD, and to use queries over this LOD to assess quality. This year we focused
on the quality of workshop proceedings, and of journal articles w.r.t. their
citation network. A third, open task, asked to showcase how such semantic data
could be exploited and how Semantic Web technologies could help in this
emerging context.
| [
{
"version": "v1",
"created": "Sun, 17 Aug 2014 21:33:10 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Aug 2014 22:23:41 GMT"
}
] | 2014-08-22T00:00:00 | [
[
"Lange",
"Christoph",
""
],
[
"Di Iorio",
"Angelo",
""
]
] | TITLE: Semantic Publishing Challenge -- Assessing the Quality of Scientific
Output
ABSTRACT: Linked Open Datasets about scholarly publications enable the development and
integration of sophisticated end-user services; however, richer datasets are
still needed. The first goal of this Challenge was to investigate novel
approaches to obtain such semantic data. In particular, we were seeking methods
and tools to extract information from scholarly publications, to publish it as
LOD, and to use queries over this LOD to assess quality. This year we focused
on the quality of workshop proceedings, and of journal articles w.r.t. their
citation network. A third, open task, asked to showcase how such semantic data
could be exploited and how Semantic Web technologies could help in this
emerging context.
| no_new_dataset | 0.934395 |
1408.4793 | Luca Matteis | Luca Matteis | Restpark: Minimal RESTful API for Retrieving RDF Triples | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/3.0/ | How do RDF datasets currently get published on the Web? They are either
available as large RDF files, which need to be downloaded and processed
locally, or they exist behind complex SPARQL endpoints. By providing a RESTful
API that can access triple data, we allow users to query a dataset through a
simple interface based on just a couple of HTTP parameters. If RDF resources
were published this way we could quickly build applications that depend on
these datasets, without having to download and process them locally. This is
what Restpark is: a set of HTTP GET parameters that servers need to handle, and
respond with JSON-LD.
| [
{
"version": "v1",
"created": "Tue, 19 Aug 2014 22:57:41 GMT"
}
] | 2014-08-22T00:00:00 | [
[
"Matteis",
"Luca",
""
]
] | TITLE: Restpark: Minimal RESTful API for Retrieving RDF Triples
ABSTRACT: How do RDF datasets currently get published on the Web? They are either
available as large RDF files, which need to be downloaded and processed
locally, or they exist behind complex SPARQL endpoints. By providing a RESTful
API that can access triple data, we allow users to query a dataset through a
simple interface based on just a couple of HTTP parameters. If RDF resources
were published this way we could quickly build applications that depend on
these datasets, without having to download and process them locally. This is
what Restpark is: a set of HTTP GET parameters that servers need to handle, and
respond with JSON-LD.
| no_new_dataset | 0.928344 |
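As a concrete illustration of the kind of service the abstract above describes, here is a minimal sketch of a single HTTP GET endpoint that filters triples by subject/predicate/object query parameters and answers with JSON-LD. The parameter names, the in-memory triple list, and the response shape are assumptions for illustration and are not taken from the Restpark specification.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Toy in-memory store of (subject, predicate, object) triples.
TRIPLES = [
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://example.org/bob"),
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
]

@app.route("/triples")
def triples():
    s = request.args.get("subject")
    p = request.args.get("predicate")
    o = request.args.get("object")
    matches = [t for t in TRIPLES
               if (s is None or t[0] == s)
               and (p is None or t[1] == p)
               and (o is None or t[2] == o)]
    # Answer with a simple JSON-LD document listing the matching triples.
    graph = [{"@id": subj, pred: obj} for subj, pred, obj in matches]
    return jsonify({"@context": {}, "@graph": graph})

if __name__ == "__main__":
    app.run()  # e.g. GET /triples?subject=http://example.org/alice
```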
1401.5836 | Vedran Sekara Mr. | Vedran Sekara and Sune Lehmann | The Strength of Friendship Ties in Proximity Sensor Data | Updated Introduction, added references. 12 pages, 7 figures | null | 10.1371/journal.pone.0100915 | PLoS One 9.7 (2014): e100915 | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how people interact and socialize is important in many contexts
from disease control to urban planning. Datasets that capture this specific
aspect of human life have increased in size and availability over the last few
years. We have yet to understand, however, to what extent such electronic
datasets may serve as a valid proxy for real life social interactions. For an
observational dataset, gathered using mobile phones, we analyze the problem of
identifying transient and non-important links, as well as how to highlight
important social interactions. Applying the Bluetooth signal strength parameter
to distinguish between observations, we demonstrate that weak links, compared
to strong links, have a lower probability of being observed at later times,
while such links--on average--also have lower link-weights and probability of
sharing an online friendship. Further, the role of link-strength is
investigated in relation to social network properties.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 00:29:51 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Feb 2014 23:51:14 GMT"
},
{
"version": "v3",
"created": "Tue, 27 May 2014 22:12:39 GMT"
}
] | 2014-08-21T00:00:00 | [
[
"Sekara",
"Vedran",
""
],
[
"Lehmann",
"Sune",
""
]
] | TITLE: The Strength of Friendship Ties in Proximity Sensor Data
ABSTRACT: Understanding how people interact and socialize is important in many contexts
from disease control to urban planning. Datasets that capture this specific
aspect of human life have increased in size and availability over the last few
years. We have yet to understand, however, to what extent such electronic
datasets may serve as a valid proxy for real life social interactions. For an
observational dataset, gathered using mobile phones, we analyze the problem of
identifying transient and non-important links, as well as how to highlight
important social interactions. Applying the Bluetooth signal strength parameter
to distinguish between observations, we demonstrate that weak links, compared
to strong links, have a lower probability of being observed at later times,
while such links--on average--also have lower link-weights and probability of
sharing an online friendship. Further, the role of link-strength is
investigated in relation to social network properties.
| new_dataset | 0.940243 |
1408.4504 | Mohammed Abdelsamea | Mohammed M. Abdelsamea | Unsupervised Parallel Extraction based Texture for Efficient Image
Representation | arXiv admin note: substantial text overlap with arXiv:1408.4143 | 2011 International Conference on Signal, Image Processing and
Applications With workshop of ICEEA 2011, IPCSIT vol.21 (2011), IACSIT Press,
Singapore | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SOM is a type of unsupervised learning where the goal is to discover some
underlying structure of the data. In this paper, a new extraction method based
on the main idea of Concurrent Self-Organizing Maps (CSOM), representing a
winner-takes-all collection of small SOM networks is proposed. Each SOM of the
system is trained individually to provide best results for one class only. The
experiments confirm that the proposed CSOM-based features are capable of
representing image content better than features extracted from a single large
SOM, and that these features improve the final decision of the CAD system.
Experiments were held on the Mammographic Image Analysis Society (MIAS) dataset.
| [
{
"version": "v1",
"created": "Wed, 20 Aug 2014 01:10:44 GMT"
}
] | 2014-08-21T00:00:00 | [
[
"Abdelsamea",
"Mohammed M.",
""
]
] | TITLE: Unsupervised Parallel Extraction based Texture for Efficient Image
Representation
ABSTRACT: SOM is a type of unsupervised learning where the goal is to discover some
underlying structure of the data. In this paper, a new extraction method based
on the main idea of Concurrent Self-Organizing Maps (CSOM), representing a
winner-takes-all collection of small SOM networks is proposed. Each SOM of the
system is trained individually to provide best results for one class only. The
experiments confirm that the proposed CSOM-based features are capable of
representing image content better than features extracted from a single large
SOM, and that these features improve the final decision of the CAD system.
Experiments were held on the Mammographic Image Analysis Society (MIAS) dataset.
| no_new_dataset | 0.949435 |
1408.4523 | Mohammed Al-Maolegi | Yahya Tashtoush, Mohammed Al-Maolegi and Bassam Arkok | The Correlation among Software Complexity Metrics with Case Study | 6 pages | International Journal of Advanced Computer Research, 2014 | null | null | cs.SE | http://creativecommons.org/licenses/by/3.0/ | People's demand for software quality is growing, and thus different
measurement scales for software are being developed rapidly to handle software quality. A
software complexity metric is one of the measurements that uses some of the
internal attributes or characteristics of software to determine how they affect
software quality. In this paper, we cover some of the more efficient software
complexity metrics, such as cyclomatic complexity, lines of code, and the Halstead
complexity metric. This paper presents their impacts on software quality.
It also discusses and analyzes the correlation between them. Finally, it reveals
their relation with the number of errors using a real dataset as a case study.
| [
{
"version": "v1",
"created": "Wed, 20 Aug 2014 05:08:32 GMT"
}
] | 2014-08-21T00:00:00 | [
[
"Tashtoush",
"Yahya",
""
],
[
"Al-Maolegi",
"Mohammed",
""
],
[
"Arkok",
"Bassam",
""
]
] | TITLE: The Correlation among Software Complexity Metrics with Case Study
ABSTRACT: People's demand for software quality is growing, and thus different
measurement scales for software are being developed rapidly to handle software quality. A
software complexity metric is one of the measurements that uses some of the
internal attributes or characteristics of software to determine how they affect
software quality. In this paper, we cover some of the more efficient software
complexity metrics, such as cyclomatic complexity, lines of code, and the Halstead
complexity metric. This paper presents their impacts on software quality.
It also discusses and analyzes the correlation between them. Finally, it reveals
their relation with the number of errors using a real dataset as a case study.
| no_new_dataset | 0.949856 |
1408.4143 | Mohammed Abdelsamea | Marghny H. Mohamed and Mohammed M. Abdelsamea | Self Organization Map based Texture Feature Extraction for Efficient
Medical Image Categorization | In Proceedings of the 4th ACM International Conference on Intelligent
Computing and Information Systems, ICICIS 2009, Cairo, Egypt 2009 | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Texture is one of the most important properties of visual surface that helps
in discriminating one object from another or an object from background. The
self-organizing map (SOM) is an excellent tool in exploratory phase of data
mining. It projects its input space on prototypes of a low-dimensional regular
grid that can be effectively utilized to visualize and explore properties of
the data. This paper proposes an enhanced extraction method, based on a SOM
neural network, for accurately extracting features for efficient image
representation. In this approach, we apply three different partitioning
approaches as region of interest (ROI) selection methods for extracting
different accurate textural features from medical images as a primary step of
our extraction method. Fisherfaces feature selection is used for selecting
discriminative features from the extracted textural features. Experimental
results showed the high accuracy of medical image categorization with our
proposed extraction method. Experiments were held on the Mammographic Image
Analysis Society (MIAS) dataset.
| [
{
"version": "v1",
"created": "Mon, 14 Jul 2014 13:43:19 GMT"
}
] | 2014-08-20T00:00:00 | [
[
"Mohamed",
"Marghny H.",
""
],
[
"Abdelsamea",
"Mohammed M.",
""
]
] | TITLE: Self Organization Map based Texture Feature Extraction for Efficient
Medical Image Categorization
ABSTRACT: Texture is one of the most important properties of visual surface that helps
in discriminating one object from another or an object from background. The
self-organizing map (SOM) is an excellent tool in exploratory phase of data
mining. It projects its input space on prototypes of a low-dimensional regular
grid that can be effectively utilized to visualize and explore properties of
the data. This paper proposes an enhanced extraction method, based on a SOM
neural network, for accurately extracting features for efficient image
representation. In this approach, we apply three different partitioning
approaches as region of interest (ROI) selection methods for extracting
different accurate textural features from medical images as a primary step of
our extraction method. Fisherfaces feature selection is used for selecting
discriminative features from the extracted textural features. Experimental
results showed the high accuracy of medical image categorization with our
proposed extraction method. Experiments were held on the Mammographic Image
Analysis Society (MIAS) dataset.
| no_new_dataset | 0.950869 |
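A minimal numpy sketch of using a small self-organizing map for texture description, in the spirit of the abstract above (this is not the paper's implementation): the SOM is trained on raw image patches, and an image is then summarized by the histogram of best-matching units over its patches. The grid size, learning schedule, and patch dimensionality are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(patches, grid=(5, 5), iters=2000, lr=0.5, sigma=1.5):
    """Train a small SOM on a set of patch vectors."""
    h, w = grid
    weights = rng.random((h * w, patches.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = patches[rng.integers(len(patches))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))        # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)           # grid distance to the BMU
        frac = 1.0 - t / iters                                   # shrink radius and rate over time
        neigh = np.exp(-d2 / (2 * (sigma * frac + 1e-3) ** 2))
        weights += (lr * frac) * neigh[:, None] * (x - weights)  # pull neighborhood toward x
    return weights

def texture_histogram(patches, weights):
    """Represent an image by the normalized histogram of BMU assignments of its patches."""
    bmus = np.argmin(((patches[:, None, :] - weights[None]) ** 2).sum(axis=2), axis=1)
    hist = np.bincount(bmus, minlength=len(weights)).astype(float)
    return hist / hist.sum()

patches = rng.random((500, 64))      # stand-in for 8x8 grayscale patches from one image
som = train_som(patches)
print(texture_histogram(patches, som)[:5])
```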
1408.4325 | Diane Larlus | Yangmuzi Zhang, Diane Larlus, Florent Perronnin | What makes an Image Iconic? A Fine-Grained Case Study | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A natural approach to teaching a visual concept, e.g. a bird species, is to
show relevant images. However, not all relevant images represent a concept
equally well. In other words, they are not necessarily iconic. This observation
raises three questions. Is iconicity a subjective property? If not, can we
predict iconicity? And what exactly makes an image iconic? We provide answers
to these questions through an extensive experimental study on a challenging
fine-grained dataset of birds. We first show that iconicity ratings are
consistent across individuals, even when they are not domain experts, thus
demonstrating that iconicity is not purely subjective. We then consider an
exhaustive list of properties that are intuitively related to iconicity and
measure their correlation with these iconicity ratings. We combine them to
predict iconicity of new unseen images. We also propose a direct iconicity
predictor that is discriminatively trained with iconicity ratings. By combining
both systems, we get an iconicity prediction that approaches human performance.
| [
{
"version": "v1",
"created": "Tue, 19 Aug 2014 13:26:01 GMT"
}
] | 2014-08-20T00:00:00 | [
[
"Zhang",
"Yangmuzi",
""
],
[
"Larlus",
"Diane",
""
],
[
"Perronnin",
"Florent",
""
]
] | TITLE: What makes an Image Iconic? A Fine-Grained Case Study
ABSTRACT: A natural approach to teaching a visual concept, e.g. a bird species, is to
show relevant images. However, not all relevant images represent a concept
equally well. In other words, they are not necessarily iconic. This observation
raises three questions. Is iconicity a subjective property? If not, can we
predict iconicity? And what exactly makes an image iconic? We provide answers
to these questions through an extensive experimental study on a challenging
fine-grained dataset of birds. We first show that iconicity ratings are
consistent across individuals, even when they are not domain experts, thus
demonstrating that iconicity is not purely subjective. We then consider an
exhaustive list of properties that are intuitively related to iconicity and
measure their correlation with these iconicity ratings. We combine them to
predict iconicity of new unseen images. We also propose a direct iconicity
predictor that is discriminatively trained with iconicity ratings. By combining
both systems, we get an iconicity prediction that approaches human performance.
| no_new_dataset | 0.939637 |
1408.2770 | Christopher Griffin | Sarah Rajtmajer and Christopher Griffin and Derek Mikesell and Anna
Squicciarini | A cooperate-defect model for the spread of deviant behavior in social
networks | 9 pages, 6 figures, corrects an oversight in Version 1 in which
equilibrium point analysis is insufficiently qualified | null | null | null | cs.GT cs.SI physics.soc-ph | http://creativecommons.org/licenses/publicdomain/ | We present a game-theoretic model for the spread of deviant behavior in
online social networks. We utilize a two-strategy framework wherein each
player's behavior is classified as normal or deviant and evolves according to
the cooperate-defect payoff scheme of the classic prisoner's dilemma game. We
demonstrate convergence of individual behavior over time to a final strategy
vector and indicate counterexamples to this convergence outside the context of
prisoner's dilemma. Theoretical results are validated on a real-world dataset
collected from a popular online forum.
| [
{
"version": "v1",
"created": "Tue, 12 Aug 2014 16:33:10 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Aug 2014 18:12:17 GMT"
}
] | 2014-08-19T00:00:00 | [
[
"Rajtmajer",
"Sarah",
""
],
[
"Griffin",
"Christopher",
""
],
[
"Mikesell",
"Derek",
""
],
[
"Squicciarini",
"Anna",
""
]
] | TITLE: A cooperate-defect model for the spread of deviant behavior in social
networks
ABSTRACT: We present a game-theoretic model for the spread of deviant behavior in
online social networks. We utilize a two-strategy framework wherein each
player's behavior is classified as normal or deviant and evolves according to
the cooperate-defect payoff scheme of the classic prisoner's dilemma game. We
demonstrate convergence of individual behavior over time to a final strategy
vector and indicate counterexamples to this convergence outside the context of
prisoner's dilemma. Theoretical results are validated on a real-world dataset
collected from a popular online forum.
| no_new_dataset | 0.940844 |
1408.3733 | Ehtesham Hassan | Ehtesham Hassan and Gautam Shroff and Puneet Agarwal | Multi-Sensor Event Detection using Shape Histograms | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicular sensor data consists of multiple time-series arising from a number
of sensors. Using such multi-sensor data we would like to detect occurrences of
specific events that vehicles encounter, e.g., corresponding to particular
maneuvers that a vehicle makes or conditions that it encounters. Events are
characterized by similar waveform patterns re-appearing within one or more
sensors. Further such patterns can be of variable duration. In this work, we
propose a method for detecting such events in time-series data using a novel
feature descriptor motivated by similar ideas in image processing. We define
the shape histogram: a constant dimension descriptor that nevertheless captures
patterns of variable duration. We demonstrate the efficacy of using shape
histograms as features to detect events in an SVM-based, multi-sensor,
supervised learning scenario, i.e., multiple time-series are used to detect an
event. We present results on real-life vehicular sensor data and show that our
technique performs better than available pattern detection implementations on
our data, and that it can also be used to combine features from multiple
sensors resulting in better accuracy than using any single sensor. Since
previous work on pattern detection in time-series has been in the single series
context, we also present results using our technique on multiple standard
time-series datasets and show that it is the most versatile in terms of how it
ranks compared to other published results.
| [
{
"version": "v1",
"created": "Sat, 16 Aug 2014 11:11:59 GMT"
}
] | 2014-08-19T00:00:00 | [
[
"Hassan",
"Ehtesham",
""
],
[
"Shroff",
"Gautam",
""
],
[
"Agarwal",
"Puneet",
""
]
] | TITLE: Multi-Sensor Event Detection using Shape Histograms
ABSTRACT: Vehicular sensor data consists of multiple time-series arising from a number
of sensors. Using such multi-sensor data we would like to detect occurrences of
specific events that vehicles encounter, e.g., corresponding to particular
maneuvers that a vehicle makes or conditions that it encounters. Events are
characterized by similar waveform patterns re-appearing within one or more
sensors. Further such patterns can be of variable duration. In this work, we
propose a method for detecting such events in time-series data using a novel
feature descriptor motivated by similar ideas in image processing. We define
the shape histogram: a constant dimension descriptor that nevertheless captures
patterns of variable duration. We demonstrate the efficacy of using shape
histograms as features to detect events in an SVM-based, multi-sensor,
supervised learning scenario, i.e., multiple time-series are used to detect an
event. We present results on real-life vehicular sensor data and show that our
technique performs better than available pattern detection implementations on
our data, and that it can also be used to combine features from multiple
sensors resulting in better accuracy than using any single sensor. Since
previous work on pattern detection in time-series has been in the single series
context, we also present results using our technique on multiple standard
time-series datasets and show that it is the most versatile in terms of how it
ranks compared to other published results.
| no_new_dataset | 0.953101 |
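A minimal sketch of the central idea in the abstract above: a variable-length time-series window is reduced to a constant-dimension histogram descriptor so that a standard SVM can be trained on it. Using a histogram of first differences ("local shape") with these particular bin settings is an assumption for illustration, not the paper's exact descriptor.

```python
import numpy as np
from sklearn.svm import SVC

def shape_histogram(window, bins=16, lo=-1.0, hi=1.0):
    """Histogram of local slopes; its dimension is `bins` regardless of window length."""
    slopes = np.diff(np.asarray(window, dtype=float))
    hist, _ = np.histogram(np.clip(slopes, lo, hi), bins=bins, range=(lo, hi))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
windows, labels = [], []
for _ in range(40):
    n = int(rng.integers(50, 150))                    # windows have variable length
    windows.append(rng.normal(size=n))                # class 0: pure noise
    labels.append(0)
    windows.append(np.linspace(0, 5, n) + 0.1 * rng.normal(size=n))  # class 1: noisy rising ramp
    labels.append(1)

X = np.array([shape_histogram(w) for w in windows])   # every row has exactly 16 dimensions
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```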
1408.4067 | Krishna Murthy A | Krishna Murthy A., Suresha, Anil Kumar K. M | Challenges and Issues in Adapting Web Contents on Small Screen Devices | null | International Journal of Information Processing Year 2014 Volume 8
Issue 1 | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In general, Web pages are intended for large screen devices using HTML
technology. Admittance of such Web pages on Small Screen Devices (SSDs) like
mobile phones, palmtops, tablets, PDA etc., is increasing with the support of
the current wireless technologies. However, SSDs have limited screen size,
memory capacity and bandwidth, which makes accessing the Website on SSDs
extremely difficult. Many approaches have been proposed in the literature
to regenerate HTML Web pages suitable for browsing on SSDs. These proposed
methods involve segmenting the Web page based on its semantic structure, followed
by noise removal based on block features, and utilizing the hierarchy of the
content elements to regenerate a page suitable for Small Screen Devices. However,
the World Wide Web Consortium has stated that HTML does not provide a good
description of the semantic structure of the Web page contents. To overcome these
drawbacks, Web developers have started to develop Web pages using new technologies
such as XML and Flash, which opens the way for new research methods. Therefore, we
require an approach to reconstruct these Web pages so that they are suitable for SSDs. However,
existing approaches in the literature do not perform well for Web pages built
using XML and Flash. In this paper, we highlight a few issues of the
existing approaches on XML and Flash datasets and propose an approach that
performs better on a dataset comprising Flash Web pages.
| [
{
"version": "v1",
"created": "Fri, 15 Aug 2014 04:35:59 GMT"
}
] | 2014-08-19T00:00:00 | [
[
"A.",
"Krishna Murthy",
""
],
[
"Suresha",
"",
""
],
[
"M",
"Anil Kumar K.",
""
]
] | TITLE: Challenges and Issues in Adapting Web Contents on Small Screen Devices
ABSTRACT: In general, Web pages are intended for large screen devices using HTML
technology. Admittance of such Web pages on Small Screen Devices (SSDs) like
mobile phones, palmtops, tablets, PDA etc., is increasing with the support of
the current wireless technologies. However, SSDs have limited screen size,
memory capacity and bandwidth, which makes accessing the Website on SSDs
extremely difficult. Many approaches have been proposed in the literature
to regenerate HTML Web pages suitable for browsing on SSDs. These proposed
methods involve segmenting the Web page based on its semantic structure, followed
by noise removal based on block features, and utilizing the hierarchy of the
content elements to regenerate a page suitable for Small Screen Devices. However,
the World Wide Web Consortium has stated that HTML does not provide a good
description of the semantic structure of the Web page contents. To overcome these
drawbacks, Web developers have started to develop Web pages using new technologies
such as XML and Flash, which opens the way for new research methods. Therefore, we
require an approach to reconstruct these Web pages so that they are suitable for SSDs. However,
existing approaches in the literature do not perform well for Web pages built
using XML and Flash. In this paper, we highlight a few issues of the
existing approaches on XML and Flash datasets and propose an approach that
performs better on a dataset comprising Flash Web pages.
| no_new_dataset | 0.946941 |
1408.3559 | Will Ball | William T. Ball, Daniel J. Mortlock, Jack S. Egerton and Joanna D.
Haigh | Assessing the relationship between spectral solar irradiance and
stratospheric ozone using Bayesian inference | 21 pages, 4 figures, Journal of Space Weather and Space Climate
(accepted), pdf version is in draft mode of Space Weather and Space Climate | null | null | null | physics.ao-ph astro-ph.EP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the relationship between spectral solar irradiance (SSI) and
ozone in the tropical upper stratosphere. We find that solar cycle (SC) changes
in ozone can be well approximated by considering the ozone response to SSI
changes in a small number individual wavelength bands between 176 and 310 nm,
operating independently of each other. Additionally, we find that the ozone
varies approximately linearly with changes in the SSI. Using these facts, we
present a Bayesian formalism for inferring SC SSI changes and uncertainties
from measured SC ozone profiles. Bayesian inference is a powerful,
mathematically self-consistent method of considering both the uncertainties of
the data and additional external information to provide the best estimate of
parameters being estimated. Using this method, we show that, given measurement
uncertainties in both ozone and SSI datasets, it is not currently possible to
distinguish between observed or modelled SSI datasets using available estimates
of ozone change profiles, although this might be possible by the inclusion of
other external constraints. Our methodology has the potential, using wider
datasets, to provide better understanding of both variations in SSI and the
atmospheric response.
| [
{
"version": "v1",
"created": "Thu, 14 Aug 2014 13:13:37 GMT"
}
] | 2014-08-18T00:00:00 | [
[
"Ball",
"William T.",
""
],
[
"Mortlock",
"Daniel J.",
""
],
[
"Egerton",
"Jack S.",
""
],
[
"Haigh",
"Joanna D.",
""
]
] | TITLE: Assessing the relationship between spectral solar irradiance and
stratospheric ozone using Bayesian inference
ABSTRACT: We investigate the relationship between spectral solar irradiance (SSI) and
ozone in the tropical upper stratosphere. We find that solar cycle (SC) changes
in ozone can be well approximated by considering the ozone response to SSI
changes in a small number of individual wavelength bands between 176 and 310 nm,
operating independently of each other. Additionally, we find that the ozone
varies approximately linearly with changes in the SSI. Using these facts, we
present a Bayesian formalism for inferring SC SSI changes and uncertainties
from measured SC ozone profiles. Bayesian inference is a powerful,
mathematically self-consistent method of considering both the uncertainties of
the data and additional external information to provide the best estimate of
parameters being estimated. Using this method, we show that, given measurement
uncertainties in both ozone and SSI datasets, it is not currently possible to
distinguish between observed or modelled SSI datasets using available estimates
of ozone change profiles, although this might be possible by the inclusion of
other external constraints. Our methodology has the potential, using wider
datasets, to provide better understanding of both variations in SSI and the
atmospheric response.
| no_new_dataset | 0.947284 |
1408.3170 | Eugene Ch'ng | Eugene Ch'ng | The Value of Using Big Data Technologies in Computational Social Science | 3rd ASE Big Data Science Conference, Tsinghua University Beijing, 3-7
August 2014 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of phenomena in social networks has prompted renewed interests
in the field. Data in social networks however can be massive, requiring
scalable Big Data architecture. Conversely, research in Big Data needs the
volume and velocity of social media data for testing its scalability. Not only
so, appropriate data processing and mining of acquired datasets involve complex
issues in the variety, veracity, and variability of the data, after which
visualisation must occur before we can see fruition in our efforts. This
article presents topical, multimodal, and longitudinal social media datasets
from the integration of various scalable open source technologies. The article
details the process that led to the discovery of social information landscapes
within the Twitter social network, highlighting the experience of dealing with
social media datasets, using a funneling approach so that data becomes
manageable. The article demonstrated the feasibility and value of using
scalable open source technologies for acquiring massive, connected datasets for
research in the social sciences.
| [
{
"version": "v1",
"created": "Thu, 14 Aug 2014 00:21:59 GMT"
}
] | 2014-08-15T00:00:00 | [
[
"Ch'ng",
"Eugene",
""
]
] | TITLE: The Value of Using Big Data Technologies in Computational Social Science
ABSTRACT: The discovery of phenomena in social networks has prompted renewed interests
in the field. Data in social networks however can be massive, requiring
scalable Big Data architecture. Conversely, research in Big Data needs the
volume and velocity of social media data for testing its scalability. Not only
so, appropriate data processing and mining of acquired datasets involve complex
issues in the variety, veracity, and variability of the data, after which
visualisation must occur before we can see fruition in our efforts. This
article presents topical, multimodal, and longitudinal social media datasets
from the integration of various scalable open source technologies. The article
details the process that led to the discovery of social information landscapes
within the Twitter social network, highlighting the experience of dealing with
social media datasets, using a funneling approach so that data becomes
manageable. The article demonstrated the feasibility and value of using
scalable open source technologies for acquiring massive, connected datasets for
research in the social sciences.
| no_new_dataset | 0.949153 |
1408.3337 | Ari Seff | Ari Seff, Le Lu, Kevin M. Cherry, Holger Roth, Jiamin Liu, Shijun
Wang, Joanne Hoffman, Evrim B. Turkbey, and Ronald M. Summers | 2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy
of Linear Classifiers | This article will be presented at MICCAI (Medical Image Computing and
Computer-Assisted Intervention) 2014 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/publicdomain/ | Enlarged lymph nodes (LNs) can provide important information for cancer
diagnosis, staging, and measuring treatment reactions, making automated
detection a highly sought goal. In this paper, we propose a new algorithm
representation of decomposing the LN detection problem into a set of 2D object
detection subtasks on sampled CT slices, largely alleviating the curse of
dimensionality issue. Our 2D detection can be effectively formulated as linear
classification on a single image feature type of Histogram of Oriented
Gradients (HOG), covering a moderate field-of-view of 45 by 45 voxels. We
exploit both simple pooling and sparse linear fusion schemes to aggregate these
2D detection scores for the final 3D LN detection. In this manner, detection is
more tractable and does not need to perform perfectly at instance level (as
weak hypotheses) since our aggregation process will robustly harness collective
information for LN detection. Two datasets (90 patients with 389 mediastinal
LNs and 86 patients with 595 abdominal LNs) are used for validation.
Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume
(FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10
FP/vol.), for the mediastinal and abdominal datasets respectively. Our results
compare favorably to previous state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 14 Aug 2014 16:47:34 GMT"
}
] | 2014-08-15T00:00:00 | [
[
"Seff",
"Ari",
""
],
[
"Lu",
"Le",
""
],
[
"Cherry",
"Kevin M.",
""
],
[
"Roth",
"Holger",
""
],
[
"Liu",
"Jiamin",
""
],
[
"Wang",
"Shijun",
""
],
[
"Hoffman",
"Joanne",
""
],
[
"Turkbey",
"Evrim B.",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: 2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy
of Linear Classifiers
ABSTRACT: Enlarged lymph nodes (LNs) can provide important information for cancer
diagnosis, staging, and measuring treatment reactions, making automated
detection a highly sought goal. In this paper, we propose a new algorithm
representation of decomposing the LN detection problem into a set of 2D object
detection subtasks on sampled CT slices, largely alleviating the curse of
dimensionality issue. Our 2D detection can be effectively formulated as linear
classification on a single image feature type of Histogram of Oriented
Gradients (HOG), covering a moderate field-of-view of 45 by 45 voxels. We
exploit both simple pooling and sparse linear fusion schemes to aggregate these
2D detection scores for the final 3D LN detection. In this manner, detection is
more tractable and does not need to perform perfectly at instance level (as
weak hypotheses) since our aggregation process will robustly harness collective
information for LN detection. Two datasets (90 patients with 389 mediastinal
LNs and 86 patients with 595 abdominal LNs) are used for validation.
Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume
(FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10
FP/vol.), for the mediastinal and abdominal datasets respectively. Our results
compare favorably to previous state-of-the-art methods.
| no_new_dataset | 0.950041 |
1408.2869 | Wojciech Czarnecki | Wojciech Marian Czarnecki, Jacek Tabor | Cluster based RBF Kernel for Support Vector Machines | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the classical Gaussian SVM classification we use the feature space
projection transforming points to normal distributions with fixed covariance
matrices (identity in the standard RBF and the covariance of the whole dataset
in Mahalanobis RBF). In this paper we add additional information to Gaussian
SVM by considering local geometry-dependent feature space projection. We
emphasize that our approach is in fact an algorithm for a construction of the
new Gaussian-type kernel.
We show that better (compared to standard RBF and Mahalanobis RBF)
classification results are obtained in the simple case when the space is
first divided by k-means into two sets and points are represented as
normal distributions with covariances calculated according to the dataset
partitioning.
We call the constructed method C$_k$RBF, where $k$ stands for the number of
clusters used in k-means. We show empirically on nine datasets from UCI
repository that C$_2$RBF increases the stability of the grid search (measured
as the probability of finding good parameters).
| [
{
"version": "v1",
"created": "Tue, 12 Aug 2014 22:30:11 GMT"
}
] | 2014-08-14T00:00:00 | [
[
"Czarnecki",
"Wojciech Marian",
""
],
[
"Tabor",
"Jacek",
""
]
] | TITLE: Cluster based RBF Kernel for Support Vector Machines
ABSTRACT: In the classical Gaussian SVM classification we use the feature space
projection transforming points to normal distributions with fixed covariance
matrices (identity in the standard RBF and the covariance of the whole dataset
in Mahalanobis RBF). In this paper we add additional information to Gaussian
SVM by considering local geometry-dependent feature space projection. We
emphasize that our approach is in fact an algorithm for a construction of the
new Gaussian-type kernel.
We show that better (compared to standard RBF and Mahalanobis RBF)
classification results are obtained in the simple case when the space is
first divided by k-means into two sets and points are represented as
normal distributions with covariances calculated according to the dataset
partitioning.
We call the constructed method C$_k$RBF, where $k$ stands for the number of
clusters used in k-means. We show empirically on nine datasets from UCI
repository that C$_2$RBF increases the stability of the grid search (measured
as the probability of finding good parameters).
| no_new_dataset | 0.952131 |
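Below is a minimal sketch of a cluster-dependent Gaussian-type kernel in the spirit of the abstract above, not the paper's exact construction: k-means with k=2 splits the data, each point inherits its cluster's covariance, and the kernel between two points uses the sum of their covariances, similar to a product-of-Gaussians kernel. The dataset, the regularization term, and the kernel form are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Split the data into k=2 clusters and assign each point its cluster's covariance.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
covs = [np.cov(X[km.labels_ == c].T) + 1e-6 * np.eye(X.shape[1]) for c in range(2)]
point_cov = np.array([covs[c] for c in km.labels_])

def cluster_rbf(idx_a, idx_b):
    """Precomputed Gaussian-type kernel using the summed per-point covariances."""
    K = np.zeros((len(idx_a), len(idx_b)))
    for i, a in enumerate(idx_a):
        for j, b in enumerate(idx_b):
            d = X[a] - X[b]
            S = point_cov[a] + point_cov[b]
            K[i, j] = np.exp(-0.5 * d @ np.linalg.solve(S, d))
    return K

idx = np.arange(len(X))
K = cluster_rbf(idx, idx)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```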
1310.0287 | null | A.J. Webster, R.O. Dendy, F.A. Calderon, S.C. Chapman, E. Delabie, D.
Dodt, R. Felton, T.N. Todd, F. Maviglia, J. Morris, V. Riccardo, B. Alper, S
Brezinsek, P. Coad, J. Likonen, M. Rubel, and JET EFDA Contributors | Time-Resonant Tokamak Plasma Edge Instabilities? | 10 pages, 4 figures | Plasma Physics and Controlled Fusion, Vol.56, No.7, July 2014,
pp.075017 | 10.1088/0741-3335/56/7/075017 | null | physics.plasm-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a two week period during the Joint European Torus (JET) 2012 experimental
campaign, the same high confinement plasma was repeated 151 times. The dataset
was analysed to produce a probability density function (pdf) for the waiting
times between edge-localised plasma instabilities ("ELMS"). The result was
entirely unexpected. Instead of a smooth single peaked pdf, a succession of 4-5
sharp maxima and minima uniformly separated by 7-8 millisecond intervals was
found. Here we explore the causes of this newly observed phenomenon, and
conclude that it is either due to a self-organised plasma phenomenon or an
interaction between the plasma and a real-time control system. If the maxima
are a result of "resonant" frequencies at which ELMs can be triggered more
easily, then future ELM control techniques can, and probably will, use them.
Either way, these results demand a deeper understanding of the ELMing process.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2013 13:31:06 GMT"
}
] | 2014-08-13T00:00:00 | [
[
"Webster",
"A. J.",
""
],
[
"Dendy",
"R. O.",
""
],
[
"Calderon",
"F. A.",
""
],
[
"Chapman",
"S. C.",
""
],
[
"Delabie",
"E.",
""
],
[
"Dodt",
"D.",
""
],
[
"Felton",
"R.",
""
],
[
"Todd",
"T. N.",
""
],
[
"Maviglia",
"F.",
""
],
[
"Morris",
"J.",
""
],
[
"Riccardo",
"V.",
""
],
[
"Alper",
"B.",
""
],
[
"Brezinsek",
"S",
""
],
[
"Coad",
"P.",
""
],
[
"Likonen",
"J.",
""
],
[
"Rubel",
"M.",
""
],
[
"Contributors",
"JET EFDA",
""
]
] | TITLE: Time-Resonant Tokamak Plasma Edge Instabilities?
ABSTRACT: For a two week period during the Joint European Torus (JET) 2012 experimental
campaign, the same high confinement plasma was repeated 151 times. The dataset
was analysed to produce a probability density function (pdf) for the waiting
times between edge-localised plasma instabilities ("ELMS"). The result was
entirely unexpected. Instead of a smooth single peaked pdf, a succession of 4-5
sharp maxima and minima uniformly separated by 7-8 millisecond intervals was
found. Here we explore the causes of this newly observed phenomenon, and
conclude that it is either due to a self-organised plasma phenomenon or an
interaction between the plasma and a real-time control system. If the maxima
are a result of "resonant" frequencies at which ELMs can be triggered more
easily, then future ELM control techniques can, and probably will, use them.
Either way, these results demand a deeper understanding of the ELMing process.
| no_new_dataset | 0.944331 |
1407.8041 | Petter Holme | Fariba Karimi, Ver\'onica C. Ramenzoni, Petter Holme | Structural differences between open and direct communication in an
online community | null | Physica A 414, 263-273 (2014) | 10.1016/j.physa.2014.07.037 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most research of online communication focuses on modes of communication that
are either open (like forums, bulletin boards, Twitter, etc.) or direct (like
e-mails). In this work, we study a dataset that has both types of communication
channels. We relate our findings to theories of social organization and human
dynamics. The data comprises 36,492 users of a movie discussion community. Our
results show that there are differences in the way users communicate in the two
channels that are reflected in the shape of degree- and interevent time
distributions. The open communication that is designed to facilitate
conversations with any member, shows a broader degree distribution and more of
the triangles in the network are primarily formed in this mode of
communication. The direct channel is presumably preferred by closer
communication and the response time in dialogues is shorter. On a more
coarse-grained level, there are common patterns in the two networks. The
differences and overlaps between communication networks, thus, provide a unique
window into how social and structural aspects of communication establish and
evolve.
| [
{
"version": "v1",
"created": "Wed, 30 Jul 2014 13:49:42 GMT"
}
] | 2014-08-13T00:00:00 | [
[
"Karimi",
"Fariba",
""
],
[
"Ramenzoni",
"Verónica C.",
""
],
[
"Holme",
"Petter",
""
]
] | TITLE: Structural differences between open and direct communication in an
online community
ABSTRACT: Most research of online communication focuses on modes of communication that
are either open (like forums, bulletin boards, Twitter, etc.) or direct (like
e-mails). In this work, we study a dataset that has both types of communication
channels. We relate our findings to theories of social organization and human
dynamics. The data comprises 36,492 users of a movie discussion community. Our
results show that there are differences in the way users communicate in the two
channels that are reflected in the shape of degree- and interevent time
distributions. The open communication that is designed to facilitate
conversations with any member, shows a broader degree distribution and more of
the triangles in the network are primarily formed in this mode of
communication. The direct channel is presumably preferred by closer
communication and the response time in dialogues is shorter. On a more
coarse-grained level, there are common patterns in the two networks. The
differences and overlaps between communication networks, thus, provide a unique
window into how social and structural aspects of communication establish and
evolve.
| no_new_dataset | 0.72027 |
1408.2810 | Roozbeh Rajabi | Roozbeh Rajabi, Hassan Ghassemian | Spectral Unmixing of Hyperspectral Imagery using Multilayer NMF | 5 pages, Journal | null | 10.1109/LGRS.2014.2325874 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Spectral unmixing problem refers to decomposing mixed
pixels into a set of endmembers and abundance fractions. Due to nonnegativity
constraint on abundance fractions, nonnegative matrix factorization (NMF)
methods have been widely used for solving spectral unmixing problem. In this
letter we proposed using multilayer NMF (MLNMF) for the purpose of
hyperspectral unmixing. In this approach, spectral signature matrix can be
modeled as a product of sparse matrices. In fact MLNMF decomposes the
observation matrix iteratively in a number of layers. In each layer, we applied
sparseness constraint on spectral signature matrix as well as on abundance
fractions matrix. In this way signatures matrix can be sparsely decomposed
despite the fact that it is not generally a sparse matrix. The proposed
algorithm is applied on synthetic and real datasets. Synthetic data is
generated based on endmembers from USGS spectral library. AVIRIS Cuprite
dataset has been used as a real dataset for evaluation of proposed method.
Results of experiments are quantified based on SAD and AAD measures. Results in
comparison with previously proposed methods show that the multilayer approach
can unmix data more effectively.
| [
{
"version": "v1",
"created": "Tue, 12 Aug 2014 19:07:23 GMT"
}
] | 2014-08-13T00:00:00 | [
[
"Rajabi",
"Roozbeh",
""
],
[
"Ghassemian",
"Hassan",
""
]
] | TITLE: Spectral Unmixing of Hyperspectral Imagery using Multilayer NMF
ABSTRACT: Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Spectral unmixing problem refers to decomposing mixed
pixels into a set of endmembers and abundance fractions. Due to nonnegativity
constraint on abundance fractions, nonnegative matrix factorization (NMF)
methods have been widely used for solving spectral unmixing problem. In this
letter we proposed using multilayer NMF (MLNMF) for the purpose of
hyperspectral unmixing. In this approach, spectral signature matrix can be
modeled as a product of sparse matrices. In fact MLNMF decomposes the
observation matrix iteratively in a number of layers. In each layer, we applied
sparseness constraint on spectral signature matrix as well as on abundance
fractions matrix. In this way signatures matrix can be sparsely decomposed
despite the fact that it is not generally a sparse matrix. The proposed
algorithm is applied on synthetic and real datasets. Synthetic data is
generated based on endmembers from USGS spectral library. AVIRIS Cuprite
dataset has been used as a real dataset for evaluation of proposed method.
Results of experiments are quantified based on SAD and AAD measures. Results in
comparison with previously proposed methods show that the multilayer approach
can unmix data more effectively.
| no_new_dataset | 0.945147 |
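A minimal sketch of a multilayer NMF in the spirit of the abstract above: the data matrix is factorized, and the coefficient matrix is factorized again at each subsequent layer, so that X is approximated by a product of several factors. The rank, the number of layers, the use of scikit-learn's NMF solver, and the absence of the paper's sparseness constraints are all simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 60))            # stand-in for a (pixels x bands) hyperspectral matrix

def multilayer_nmf(X, rank=4, layers=3):
    """Factorize X layer by layer so that X ~ W1 @ W2 @ ... @ WL @ H."""
    Ws, V = [], X
    H = None
    for _ in range(layers):
        model = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(V)   # V ~ W @ H
        H = model.components_
        Ws.append(W)
        V = H                        # the next layer factorizes the coefficients again
    return Ws, H

Ws, H = multilayer_nmf(X)
approx = Ws[0]
for W in Ws[1:]:
    approx = approx @ W
approx = approx @ H
print("relative reconstruction error:", np.linalg.norm(X - approx) / np.linalg.norm(X))
```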
1407.4833 | Nikita Zhiltsov | Olga Nevzorova, Nikita Zhiltsov, Alexander Kirillovich, and Evgeny
Lipachev | $OntoMath^{PRO}$ Ontology: A Linked Data Hub for Mathematics | 15 pages, 6 images, 1 table, Knowledge Engineering and the Semantic
Web - 5th International Conference | null | null | null | cs.AI cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an ontology of mathematical knowledge concepts that
covers a wide range of the fields of mathematics and introduces a balanced
representation between comprehensive and sensible models. We demonstrate the
applications of this representation in information extraction, semantic search,
and education. We argue that the ontology can be a core of future integration
of math-aware data sets in the Web of Data and, therefore, provide mappings
onto relevant datasets, such as DBpedia and ScienceWISE.
| [
{
"version": "v1",
"created": "Thu, 17 Jul 2014 20:36:36 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Aug 2014 06:54:16 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Nevzorova",
"Olga",
""
],
[
"Zhiltsov",
"Nikita",
""
],
[
"Kirillovich",
"Alexander",
""
],
[
"Lipachev",
"Evgeny",
""
]
] | TITLE: $OntoMath^{PRO}$ Ontology: A Linked Data Hub for Mathematics
ABSTRACT: In this paper, we present an ontology of mathematical knowledge concepts that
covers a wide range of the fields of mathematics and introduces a balanced
representation between comprehensive and sensible models. We demonstrate the
applications of this representation in information extraction, semantic search,
and education. We argue that the ontology can be a core of future integration
of math-aware data sets in the Web of Data and, therefore, provide mappings
onto relevant datasets, such as DBpedia and ScienceWISE.
| no_new_dataset | 0.945701 |
1408.0680 | Rafael Berri A | Rafael A. Berri, Alexandre G. Silva, Rafael S. Parpinelli, Elaine
Girardi, Rangel Arthur | A Pattern Recognition System for Detecting Use of Mobile Phones While
Driving | 8 pages, 9th International Conference on Computer Vision Theory and
Applications | null | 10.5220/0004684504110418 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is estimated that 80% of crashes and 65% of near collisions involved
drivers inattentive to traffic for three seconds before the event. This paper
develops an algorithm for extracting characteristics that allow the identification
of cell phone use while driving a vehicle. Experiments were performed on
sets of images with 100 positive images (with phone) and the other 100 negative
images (no phone), containing frontal images of the driver. Support Vector
Machine (SVM) with Polynomial kernel is the most advantageous classification
system to the features provided by the algorithm, obtaining a success rate of
91.57% for the vision system. Tests done on videos show that it is possible to
use the image datasets for training classifiers in real situations. Periods of
3 seconds were correctly classified at 87.43% of cases.
| [
{
"version": "v1",
"created": "Mon, 4 Aug 2014 13:35:24 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Berri",
"Rafael A.",
""
],
[
"Silva",
"Alexandre G.",
""
],
[
"Parpinelli",
"Rafael S.",
""
],
[
"Girardi",
"Elaine",
""
],
[
"Arthur",
"Rangel",
""
]
] | TITLE: A Pattern Recognition System for Detecting Use of Mobile Phones While
Driving
ABSTRACT: It is estimated that 80% of crashes and 65% of near collisions involved
drivers inattentive to traffic for three seconds before the event. This paper
develops an algorithm for extracting characteristics that allow the identification
of cell phone use while driving a vehicle. Experiments were performed on
sets of images with 100 positive images (with phone) and the other 100 negative
images (no phone), containing frontal images of the driver. Support Vector
Machine (SVM) with Polynomial kernel is the most advantageous classification
system to the features provided by the algorithm, obtaining a success rate of
91.57% for the vision system. Tests done on videos show that it is possible to
use the image datasets for training classifiers in real situations. Periods of
3 seconds were correctly classified at 87.43% of cases.
| no_new_dataset | 0.942188 |
1408.2015 | Mohammed Javed | Abdessamad Elboushaki, Rachida Hannane, P. Nagabhushan, Mohammed Javed | Automatic Removal of Marginal Annotations in Printed Text Document | Original Article Published by Elsevier at ERCICA-2014, Pages 123-131,
August 2014 | Proceedings of Second International Conference on Emerging
Research in Computing, Information,Communication and Applications
(ERCICA-14), pages 123-131, August 2014, Bangalore | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering the original printed texts from a document with added handwritten
annotations in the marginal area is one of the challenging problems, especially
when the original document is not available. Therefore, this paper aims at
salvaging automatically the original document from the annotated document by
detecting and removing any handwritten annotations that appear in the marginal
area of the document without any loss of information. Here a two stage
algorithm is proposed, where in the first stage due to approximate marginal
boundary detection with horizontal and vertical projection profiles, all of the
marginal annotations along with some part of the original printed text that may
appear very close to the marginal boundary are removed. Therefore as a second
stage, using the connected components, a strategy is applied to bring back the
printed text components cropped during the first stage. The proposed method is
validated using a dataset of 50 documents having complex handwritten
annotations, which gives an overall accuracy of 89.01% in removing the marginal
annotations and 97.74% in case of retrieving the original printed text
document.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 03:56:16 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Elboushaki",
"Abdessamad",
""
],
[
"Hannane",
"Rachida",
""
],
[
"Nagabhushan",
"P.",
""
],
[
"Javed",
"Mohammed",
""
]
] | TITLE: Automatic Removal of Marginal Annotations in Printed Text Document
ABSTRACT: Recovering the original printed texts from a document with added handwritten
annotations in the marginal area is one of the challenging problems, especially
when the original document is not available. Therefore, this paper aims at
salvaging automatically the original document from the annotated document by
detecting and removing any handwritten annotations that appear in the marginal
area of the document without any loss of information. Here a two stage
algorithm is proposed, where in the first stage due to approximate marginal
boundary detection with horizontal and vertical projection profiles, all of the
marginal annotations along with some part of the original printed text that may
appear very close to the marginal boundary are removed. Therefore as a second
stage, using the connected components, a strategy is applied to bring back the
printed text components cropped during the first stage. The proposed method is
validated using a dataset of 50 documents having complex handwritten
annotations, which gives an overall accuracy of 89.01% in removing the marginal
annotations and 97.74% in case of retrieving the original printed text
document.
| new_dataset | 0.95018 |
1408.2031 | Alina Beygelzimer | Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin,
Alexander L. Strehl | Conditional Probability Tree Estimation Analysis and Algorithms | Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty
in Artificial Intelligence (UAI2009) | null | null | UAI-P-2009-PG-51-58 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of estimating the conditional probability of a label
in time O(log n), where n is the number of possible labels. We analyze a
natural reduction of this problem to a set of binary regression problems
organized in a tree structure, proving a regret bound that scales with the
depth of the tree. Motivated by this analysis, we propose the first online
algorithm which provably constructs a logarithmic depth tree on the set of
labels to solve this problem. We test the algorithm empirically, showing that
it works successfully on a dataset with roughly 10^6 labels.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 05:25:07 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Beygelzimer",
"Alina",
""
],
[
"Langford",
"John",
""
],
[
"Lifshits",
"Yuri",
""
],
[
"Sorkin",
"Gregory",
""
],
[
"Strehl",
"Alexander L.",
""
]
] | TITLE: Conditional Probability Tree Estimation Analysis and Algorithms
ABSTRACT: We consider the problem of estimating the conditional probability of a label
in time O(log n), where n is the number of possible labels. We analyze a
natural reduction of this problem to a set of binary regression problems
organized in a tree structure, proving a regret bound that scales with the
depth of the tree. Motivated by this analysis, we propose the first online
algorithm which provably constructs a logarithmic depth tree on the set of
labels to solve this problem. We test the algorithm empirically, showing that
it works successfully on a dataset with roughly 10^6 labels.
| no_new_dataset | 0.944995 |
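The record above (arXiv:1408.2031) describes reducing conditional probability estimation over n labels to binary regressors arranged in a logarithmic-depth tree. The paper's contribution is an online tree construction; the hedged sketch below instead builds a static balanced tree and uses scikit-learn logistic regression as the node regressors, both of which are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_tree(labels):
    """Recursively halve the label set to obtain a balanced binary tree (depth O(log n))."""
    if len(labels) == 1:
        return {"labels": labels}
    mid = len(labels) // 2
    return {"labels": labels,
            "left": build_tree(labels[:mid]),
            "right": build_tree(labels[mid:])}

def train(node, X, y):
    """Fit one binary regressor per internal node: does the label fall in the left subtree?"""
    if "left" not in node:
        return
    mask = np.isin(y, node["labels"])
    target = np.isin(y[mask], node["left"]["labels"]).astype(int)
    if target.min() == target.max():               # degenerate node: only one side observed
        node["p_left"] = float(target.mean())
    else:
        node["clf"] = LogisticRegression(max_iter=1000).fit(X[mask], target)
    train(node["left"], X, y)
    train(node["right"], X, y)

def cond_prob(node, x, label):
    """P(label | x): product of branch probabilities along the root-to-leaf path."""
    if "left" not in node:
        return 1.0
    p_left = (node["clf"].predict_proba(x.reshape(1, -1))[0, 1]
              if "clf" in node else node["p_left"])
    if label in node["left"]["labels"]:
        return p_left * cond_prob(node["left"], x, label)
    return (1.0 - p_left) * cond_prob(node["right"], x, label)

# Tiny synthetic check with 4 labels; the setting in the abstract involves far larger label sets.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)
tree = build_tree(list(range(4)))
train(tree, X, y)
print(cond_prob(tree, X[0], int(y[0])))
```

Each query touches only one root-to-leaf path, which is where the O(log n) prediction time comes from.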
1408.2045 | Konstantin Voevodski | Konstantin Voevodski, Maria-Florina Balcan, Heiko Roglin, Shang-Hua
Teng, Yu Xia | Efficient Clustering with Limited Distance Information | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-632-640 | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a point set S and an unknown metric d on S, we study the problem of
efficiently partitioning S into k clusters while querying few distances between
the points. In our model we assume that we have access to one versus all
queries that, given a point s ∈ S, return the distances between s and all other
points. We show that given a natural assumption about the structure of the
instance, we can efficiently find an accurate clustering using only O(k)
distance queries. We use our algorithm to cluster proteins by sequence
similarity. This setting nicely fits our model because we can use a fast
sequence database search program to query a sequence against an entire dataset.
We conduct an empirical study that shows that even though we query a small
fraction of the distances between the points, we produce clusterings that are
close to a desired clustering given by manual classification.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 05:41:26 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Voevodski",
"Konstantin",
""
],
[
"Balcan",
"Maria-Florina",
""
],
[
"Roglin",
"Heiko",
""
],
[
"Teng",
"Shang-Hua",
""
],
[
"Xia",
"Yu",
""
]
] | TITLE: Efficient Clustering with Limited Distance Information
ABSTRACT: Given a point set S and an unknown metric d on S, we study the problem of
efficiently partitioning S into k clusters while querying few distances between
the points. In our model we assume that we have access to one versus all
queries that, given a point s ∈ S, return the distances between s and all other
points. We show that given a natural assumption about the structure of the
instance, we can efficiently find an accurate clustering using only O(k)
distance queries. We use our algorithm to cluster proteins by sequence
similarity. This setting nicely fits our model because we can use a fast
sequence database search program to query a sequence against an entire dataset.
We conduct an empirical study that shows that even though we query a small
fraction of the distances between the points, we produce clusterings that are
close to a desired clustering given by manual classification.
| no_new_dataset | 0.948442 |
1408.2060 | Jie Chen | Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan
Tan, Patrick Jaillet | Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-152-161 | cs.LG cs.DC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well with
large data nor perform real-time predictions due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to that of some centralized approximate GP regression
methods: The computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 05:58:33 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Chen",
"Jie",
""
],
[
"Cao",
"Nannan",
""
],
[
"Low",
"Kian Hsiang",
""
],
[
"Ouyang",
"Ruofei",
""
],
[
"Tan",
"Colin Keng-Yan",
""
],
[
"Jaillet",
"Patrick",
""
]
] | TITLE: Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations
ABSTRACT: Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well with
large data nor perform real-time predictions due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to that of some centralized approximate GP regression
methods: The computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
| no_new_dataset | 0.947527 |
1408.2061 | Tomoharu Iwata | Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani | Warped Mixtures for Nonparametric Cluster Shapes | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-311-320 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mixture of Gaussians fit to a single curved or heavy-tailed cluster will
report that the data contains many clusters. To produce more appropriate
clusterings, we introduce a model which warps a latent mixture of Gaussians to
produce nonparametric cluster shapes. The possibly low-dimensional latent
mixture model allows us to summarize the properties of the high-dimensional
clusters (or density manifolds) describing the data. The number of manifolds,
as well as the shape and dimension of each manifold is automatically inferred.
We derive a simple inference scheme for this model which analytically
integrates out both the mixture parameters and the warping function. We show
that our model is effective for density estimation, performs better than
infinite Gaussian mixture models at recovering the true number of clusters, and
produces interpretable summaries of high-dimensional datasets.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 06:00:05 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Iwata",
"Tomoharu",
""
],
[
"Duvenaud",
"David",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: Warped Mixtures for Nonparametric Cluster Shapes
ABSTRACT: A mixture of Gaussians fit to a single curved or heavy-tailed cluster will
report that the data contains many clusters. To produce more appropriate
clusterings, we introduce a model which warps a latent mixture of Gaussians to
produce nonparametric cluster shapes. The possibly low-dimensional latent
mixture model allows us to summarize the properties of the high-dimensional
clusters (or density manifolds) describing the data. The number of manifolds,
as well as the shape and dimension of each manifold is automatically inferred.
We derive a simple inference scheme for this model which analytically
integrates out both the mixture parameters and the warping function. We show
that our model is effective for density estimation, performs better than
infinite Gaussian mixture models at recovering the true number of clusters, and
produces interpretable summaries of high-dimensional datasets.
| no_new_dataset | 0.952442 |
1408.2064 | Krikamol Muandet | Krikamol Muandet, Bernhard Schoelkopf | One-Class Support Measure Machines for Group Anomaly Detection | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-449-458 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on the Sloan Digital
Sky Survey dataset and a High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 06:04:33 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Muandet",
"Krikamol",
""
],
[
"Schoelkopf",
"Bernhard",
""
]
] | TITLE: One-Class Support Measure Machines for Group Anomaly Detection
ABSTRACT: We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on the Sloan Digital
Sky Survey dataset and a High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
| no_new_dataset | 0.947624 |
1408.2430 | Boris Iolis | Boris Iolis, Gianluca Bontempi | Optimizing Component Combination in a Multi-Indexing Paragraph Retrieval
System | 5 pages, 1 figure, unpublished | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate a method to optimize the combination of distinct components in
a paragraph retrieval system. Our system makes use of several indices, query
generators and filters, each of them potentially contributing to the quality of
the returned list of results. The components are combined with a weighted sum,
and we optimize the weights using a heuristic optimization algorithm. This
allows us to maximize the quality of our results, but also to determine which
components are most valuable in our system. We evaluate our approach on the
paragraph selection task of a Question Answering dataset.
| [
{
"version": "v1",
"created": "Mon, 11 Aug 2014 14:58:30 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Iolis",
"Boris",
""
],
[
"Bontempi",
"Gianluca",
""
]
] | TITLE: Optimizing Component Combination in a Multi-Indexing Paragraph Retrieval
System
ABSTRACT: We demonstrate a method to optimize the combination of distinct components in
a paragraph retrieval system. Our system makes use of several indices, query
generators and filters, each of them potentially contributing to the quality of
the returned list of results. The components are combined with a weighted sum,
and we optimize the weights using a heuristic optimization algorithm. This
allows us to maximize the quality of our results, but also to determine which
components are most valuable in our system. We evaluate our approach on the
paragraph selection task of a Question Answering dataset.
| no_new_dataset | 0.948917 |
1408.2468 | Christoph Lange | Jeremy Debattista and Christoph Lange and S\"oren Auer | Representing Dataset Quality Metadata using Multi-Dimensional Views | Preprint of a paper submitted to the forthcoming SEMANTiCS 2014, 4-5
September 2014, Leipzig, Germany | null | null | null | cs.DB cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data quality is commonly defined as fitness for use. The problem of
identifying the quality of data is faced by many data consumers. Data publishers
often do not have the means to identify quality problems in their data. To make
the task for both stakeholders easier, we have developed the Dataset Quality
Ontology (daQ). daQ is a core vocabulary for representing the results of
quality benchmarking of a linked dataset. It represents quality metadata as
multi-dimensional and statistical observations using the Data Cube vocabulary.
Quality metadata are organised as a self-contained graph, which can, e.g., be
embedded into linked open datasets. We discuss the design considerations, give
examples for extending daQ by custom quality metrics, and present use cases
such as analysing data versions, browsing datasets by quality, and link
identification. We finally discuss how data cube visualisation tools enable
data publishers and consumers to analyse better the quality of their data.
| [
{
"version": "v1",
"created": "Mon, 11 Aug 2014 17:00:40 GMT"
}
] | 2014-08-12T00:00:00 | [
[
"Debattista",
"Jeremy",
""
],
[
"Lange",
"Christoph",
""
],
[
"Auer",
"Sören",
""
]
] | TITLE: Representing Dataset Quality Metadata using Multi-Dimensional Views
ABSTRACT: Data quality is commonly defined as fitness for use. The problem of
identifying the quality of data is faced by many data consumers. Data publishers
often do not have the means to identify quality problems in their data. To make
the task for both stakeholders easier, we have developed the Dataset Quality
Ontology (daQ). daQ is a core vocabulary for representing the results of
quality benchmarking of a linked dataset. It represents quality metadata as
multi-dimensional and statistical observations using the Data Cube vocabulary.
Quality metadata are organised as a self-contained graph, which can, e.g., be
embedded into linked open datasets. We discuss the design considerations, give
examples for extending daQ by custom quality metrics, and present use cases
such as analysing data versions, browsing datasets by quality, and link
identification. We finally discuss how data cube visualisation tools enable
data publishers and consumers to analyse better the quality of their data.
| no_new_dataset | 0.949482 |
1408.1276 | Kumar Sharad | Kumar Sharad and George Danezis | An Automated Social Graph De-anonymization Technique | 12 pages | null | null | null | cs.CR cs.SI | http://creativecommons.org/licenses/by/3.0/ | We present a generic and automated approach to re-identifying nodes in
anonymized social networks which enables novel anonymization techniques to be
quickly evaluated. It uses machine learning (decision forests) to match
pairs of nodes in disparate anonymized sub-graphs. The technique uncovers
artefacts and invariants of any black-box anonymization scheme from a small set
of examples. Despite a high degree of automation, classification succeeds with
significant true positive rates even when small false positive rates are
sought. Our evaluation uses publicly available real world datasets to study the
performance of our approach against real-world anonymization strategies, namely
the schemes used to protect datasets of The Data for Development (D4D)
Challenge. We show that the technique is effective even when only small numbers
of samples are used for training. Further, since it detects weaknesses in the
black-box anonymization scheme it can re-identify nodes in one social network
when trained on another.
| [
{
"version": "v1",
"created": "Wed, 6 Aug 2014 13:42:48 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Aug 2014 19:45:10 GMT"
}
] | 2014-08-08T00:00:00 | [
[
"Sharad",
"Kumar",
""
],
[
"Danezis",
"George",
""
]
] | TITLE: An Automated Social Graph De-anonymization Technique
ABSTRACT: We present a generic and automated approach to re-identifying nodes in
anonymized social networks which enables novel anonymization techniques to be
quickly evaluated. It uses machine learning (decision forests) to match
pairs of nodes in disparate anonymized sub-graphs. The technique uncovers
artefacts and invariants of any black-box anonymization scheme from a small set
of examples. Despite a high degree of automation, classification succeeds with
significant true positive rates even when small false positive rates are
sought. Our evaluation uses publicly available real world datasets to study the
performance of our approach against real-world anonymization strategies, namely
the schemes used to protect datasets of The Data for Development (D4D)
Challenge. We show that the technique is effective even when only small numbers
of samples are used for training. Further, since it detects weaknesses in the
black-box anonymization scheme it can re-identify nodes in one social network
when trained on another.
| no_new_dataset | 0.947039 |
1408.1489 | Amos J. Storkey | Amos J. Storkey, Nigel C. Hambly, Christopher K. I. Williams, Robert
G. Mann | Renewal Strings for Cleaning Astronomical Databases | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-559-566 | cs.AI astro-ph.IM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large astronomical databases obtained from sky surveys such as the
SuperCOSMOS Sky Surveys (SSS) invariably suffer from a small number of spurious
records coming from artefactual effects of the telescope, satellites and junk
objects in orbit around earth and physical defects on the photographic plate or
CCD. Though relatively small in number these spurious records present a
significant problem in many situations where they can become a large proportion
of the records potentially of interest to a given astronomer. In this paper we
focus on the four most common causes of unwanted records in the SSS: satellite
or aeroplane tracks, scratches, fibres and other linear phenomena introduced to
the plate, circular halos around bright stars due to internal reflections
within the telescope and diffraction spikes near to bright stars. Accurate and
robust techniques are needed for locating and flagging such spurious objects.
We have developed renewal strings, a probabilistic technique combining the
Hough transform, renewal processes and hidden Markov models which have proven
highly effective in this context. The methods are applied to the SSS data to
develop a dataset of spurious object detections, along with confidence
measures, which can allow this unwanted data to be removed from consideration.
These methods are general and can be adapted to any future astronomical survey
data.
| [
{
"version": "v1",
"created": "Thu, 7 Aug 2014 06:27:12 GMT"
}
] | 2014-08-08T00:00:00 | [
[
"Storkey",
"Amos J.",
""
],
[
"Hambly",
"Nigel C.",
""
],
[
"Williams",
"Christopher K. I.",
""
],
[
"Mann",
"Robert G.",
""
]
] | TITLE: Renewal Strings for Cleaning Astronomical Databases
ABSTRACT: Large astronomical databases obtained from sky surveys such as the
SuperCOSMOS Sky Surveys (SSS) invariably suffer from a small number of spurious
records coming from artefactual effects of the telescope, satellites and junk
objects in orbit around earth and physical defects on the photographic plate or
CCD. Though relatively small in number these spurious records present a
significant problem in many situations where they can become a large proportion
of the records potentially of interest to a given astronomer. In this paper we
focus on the four most common causes of unwanted records in the SSS: satellite
or aeroplane tracks, scratches, fibres and other linear phenomena introduced to
the plate, circular halos around bright stars due to internal reflections
within the telescope and diffraction spikes near to bright stars. Accurate and
robust techniques are needed for locating and flagging such spurious objects.
We have developed renewal strings, a probabilistic technique combining the
Hough transform, renewal processes and hidden Markov models which have proven
highly effective in this context. The methods are applied to the SSS data to
develop a dataset of spurious object detections, along with confidence
measures, which can allow this unwanted data to be removed from consideration.
These methods are general and can be adapted to any future astronomical survey
data.
| no_new_dataset | 0.881869 |
1408.1549 | Reza Azad | Reza Azad, Babak Azad, Nabil Belhaj Khalifa, Shahram Jamali | Real-Time Human-Computer Interaction Based on Face and Hand Gesture
Recognition | null | International Journal in Foundations of Computer Science &
Technology 07/2014; 4(4):37-48 | 10.5121/ijfcst.2014.4403 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At the present time, hand gesture recognition systems could be used as a more
expected and usable approach for human-computer interaction. Automatic hand
gesture recognition provides us with a new tactic for interacting with the
virtual environment. In this paper, a face and hand gesture recognition system
which is able to control a computer media player is offered. Hand gestures and
the human face are the key elements to interact with the smart system. We used
the face recognition scheme for viewer verification and the hand gesture
recognition to control the computer media player, for instance, volume down/up,
next music, etc. In the proposed technique, first, the hand gesture and face
location are extracted from the main image by a combination of skin and cascade
detectors and then sent to the recognition stage. In the recognition stage,
first, the threshold condition is inspected, then the extracted face and gesture
are recognized. In the result stage, the proposed technique is applied on the
video dataset and a high precision ratio is acquired. Additionally, the
recommended hand gesture recognition method is applied on a static American Sign
Language (ASL) database and the correctness rate achieved is nearly 99.40%. The
planned method could also be used in gesture-based computer games and virtual
reality.
| [
{
"version": "v1",
"created": "Thu, 7 Aug 2014 11:38:20 GMT"
}
] | 2014-08-08T00:00:00 | [
[
"Azad",
"Reza",
""
],
[
"Azad",
"Babak",
""
],
[
"Khalifa",
"Nabil Belhaj",
""
],
[
"Jamali",
"Shahram",
""
]
] | TITLE: Real-Time Human-Computer Interaction Based on Face and Hand Gesture
Recognition
ABSTRACT: At the present time, hand gesture recognition systems could be used as a more
expected and usable approach for human-computer interaction. Automatic hand
gesture recognition provides us with a new tactic for interacting with the
virtual environment. In this paper, a face and hand gesture recognition system
which is able to control a computer media player is offered. Hand gestures and
the human face are the key elements to interact with the smart system. We used
the face recognition scheme for viewer verification and the hand gesture
recognition to control the computer media player, for instance, volume down/up,
next music, etc. In the proposed technique, first, the hand gesture and face
location are extracted from the main image by a combination of skin and cascade
detectors and then sent to the recognition stage. In the recognition stage,
first, the threshold condition is inspected, then the extracted face and gesture
are recognized. In the result stage, the proposed technique is applied on the
video dataset and a high precision ratio is acquired. Additionally, the
recommended hand gesture recognition method is applied on a static American Sign
Language (ASL) database and the correctness rate achieved is nearly 99.40%. The
planned method could also be used in gesture-based computer games and virtual
reality.
| no_new_dataset | 0.949106 |
1408.0784 | Joseph Gardiner | Joseph Gardiner and Shishir Nagaraja | Blindspot: Indistinguishable Anonymous Communications | 13 Pages | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Communication anonymity is a key requirement for individuals under targeted
surveillance. Practical anonymous communications also require
indistinguishability - an adversary should be unable to distinguish between
anonymised and non-anonymised traffic for a given user. We propose Blindspot, a
design for high-latency anonymous communications that offers
indistinguishability and unobservability under a (qualified) global active
adversary. Blindspot creates anonymous routes between sender-receiver pairs by
subliminally encoding messages within the pre-existing communication behaviour
of users within a social network, specifically their organic image sharing
behaviour. Thus, channel bandwidth depends on the intensity of image
sharing behaviour of users along a route. A major challenge we successfully
overcome is that routing must be accomplished in the face of significant
restrictions - channel bandwidth is stochastic. We show that conventional
social network routing strategies do not work. To solve this problem, we
propose a novel routing algorithm. We evaluate Blindspot using a real-world
dataset. We find that it delivers reasonable results for applications requiring
low-volume unobservable communication.
| [
{
"version": "v1",
"created": "Mon, 4 Aug 2014 19:35:15 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Aug 2014 22:26:32 GMT"
}
] | 2014-08-07T00:00:00 | [
[
"Gardiner",
"Joseph",
""
],
[
"Nagaraja",
"Shishir",
""
]
] | TITLE: Blindspot: Indistinguishable Anonymous Communications
ABSTRACT: Communication anonymity is a key requirement for individuals under targeted
surveillance. Practical anonymous communications also require
indistinguishability - an adversary should be unable to distinguish between
anonymised and non-anonymised traffic for a given user. We propose Blindspot, a
design for high-latency anonymous communications that offers
indistinguishability and unobservability under a (qualified) global active
adversary. Blindspot creates anonymous routes between sender-receiver pairs by
subliminally encoding messages within the pre-existing communication behaviour
of users within a social network, specifically their organic image sharing
behaviour. Thus, channel bandwidth depends on the intensity of image
sharing behaviour of users along a route. A major challenge we successfully
overcome is that routing must be accomplished in the face of significant
restrictions - channel bandwidth is stochastic. We show that conventional
social network routing strategies do not work. To solve this problem, we
propose a novel routing algorithm. We evaluate Blindspot using a real-world
dataset. We find that it delivers reasonable results for applications requiring
low-volume unobservable communication.
| no_new_dataset | 0.936749 |
1408.1160 | Truyen Tran | Truyen Tran, Dinh Phung, Svetha Venkatesh | Mixed-Variate Restricted Boltzmann Machines | Originally published in Proceedings of ACML'11 | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern datasets are becoming heterogeneous. To this end, we present in this
paper Mixed-Variate Restricted Boltzmann Machines for simultaneously modelling
variables of multiple types and modalities, including binary and continuous
responses, categorical options, multicategorical choices, ordinal assessment
and category-ranked preferences. Dependency among variables is modeled using
latent binary variables, each of which can be interpreted as a particular
hidden aspect of the data. The proposed model, similar to the standard RBMs,
allows fast evaluation of the posterior for the latent variables. Hence, it is
naturally suitable for many common tasks including, but not limited to, (a) as
a pre-processing step to convert complex input data into a more convenient
vectorial representation through the latent posteriors, thereby offering a
dimensionality reduction capacity, (b) as a classifier supporting binary,
multiclass, multilabel, and label-ranking outputs, or a regression tool for
continuous outputs and (c) as a data completion tool for multimodal and
heterogeneous data. We evaluate the proposed model on a large-scale dataset
using the world opinion survey results on three tasks: feature extraction and
visualization, data completion and prediction.
| [
{
"version": "v1",
"created": "Wed, 6 Aug 2014 01:43:05 GMT"
}
] | 2014-08-07T00:00:00 | [
[
"Tran",
"Truyen",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Mixed-Variate Restricted Boltzmann Machines
ABSTRACT: Modern datasets are becoming heterogeneous. To this end, we present in this
paper Mixed-Variate Restricted Boltzmann Machines for simultaneously modelling
variables of multiple types and modalities, including binary and continuous
responses, categorical options, multicategorical choices, ordinal assessment
and category-ranked preferences. Dependency among variables is modeled using
latent binary variables, each of which can be interpreted as a particular
hidden aspect of the data. The proposed model, similar to the standard RBMs,
allows fast evaluation of the posterior for the latent variables. Hence, it is
naturally suitable for many common tasks including, but not limited to, (a) as
a pre-processing step to convert complex input data into a more convenient
vectorial representation through the latent posteriors, thereby offering a
dimensionality reduction capacity, (b) as a classifier supporting binary,
multiclass, multilabel, and label-ranking outputs, or a regression tool for
continuous outputs and (c) as a data completion tool for multimodal and
heterogeneous data. We evaluate the proposed model on a large-scale dataset
using the world opinion survey results on three tasks: feature extraction and
visualization, data completion and prediction.
| no_new_dataset | 0.953665 |
1408.1260 | Maxim Kolchin Mr. | Maxim Kolchin, Fedor Kozlov | Unstable markup: A template-based information extraction from web sites
with unstable markup | ESWC 2014 Semantic Publishing Challenge, Task 1 | null | null | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the results of our work on crawling the CEUR Workshop
proceedings web site into a Linked Open Data (LOD) dataset in the framework of ESWC 2014
Semantic Publishing Challenge 2014. Our approach is based on using an
extensible template-dependent crawler and DBpedia for linking extracted
entities, such as the names of universities and countries.
| [
{
"version": "v1",
"created": "Wed, 6 Aug 2014 12:36:23 GMT"
}
] | 2014-08-07T00:00:00 | [
[
"Kolchin",
"Maxim",
""
],
[
"Kozlov",
"Fedor",
""
]
] | TITLE: Unstable markup: A template-based information extraction from web sites
with unstable markup
ABSTRACT: This paper presents the results of our work on crawling the CEUR Workshop
proceedings web site into a Linked Open Data (LOD) dataset in the framework of ESWC 2014
Semantic Publishing Challenge 2014. Our approach is based on using an
extensible template-dependent crawler and DBpedia for linking extracted
entities, such as the names of universities and countries.
| no_new_dataset | 0.946151 |
1407.3268 | Michael Schreiber | Michael Schreiber | Examples for counterintuitive behavior of the new citation-rank
indicator P100 for bibliometric evaluations | 9 pages, 5 tables, 4 figures; accepted for publication in Journal of
Informetrics | J. Informetrics 8, 738-748 (2014) | null | null | cs.DL physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | A new percentile-based rating scale P100 has recently been proposed to
describe the citation impact in terms of the distribution of the unique
citation values. Here I investigate P100 for 5 example datasets, two simple
fictitious models and three larger empirical samples. Counterintuitive behavior
is demonstrated in the model datasets, pointing to difficulties when the
evolution with time of the indicator is analyzed or when different fields or
publication years are compared. It is shown that similar problems can occur for
the three larger datasets of empirical citation values. Further, it is observed
that the performance evaluation result in terms of percentiles can be influenced
by selecting different journals for publication of a manuscript.
| [
{
"version": "v1",
"created": "Fri, 11 Jul 2014 08:42:19 GMT"
}
] | 2014-08-06T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: Examples for counterintuitive behavior of the new citation-rank
indicator P100 for bibliometric evaluations
ABSTRACT: A new percentile-based rating scale P100 has recently been proposed to
describe the citation impact in terms of the distribution of the unique
citation values. Here I investigate P100 for 5 example datasets, two simple
fictitious models and three larger empirical samples. Counterintuitive behavior
is demonstrated in the model datasets, pointing to difficulties when the
evolution with time of the indicator is analyzed or when different fields or
publication years are compared. It is shown that similar problems can occur for
the three larger datasets of empirical citation values. Further, it is observed
that the performance evaluation result in terms of percentiles can be influenced
by selecting different journals for publication of a manuscript.
| no_new_dataset | 0.948442 |
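The record above (arXiv:1407.3268) concerns the citation-rank indicator P100, which places a paper on a 0-100 scale according to where its citation count falls among the unique citation values of the reference set. Below is a minimal sketch of that reading of the definition; the exact tie-handling and the choice of reference set are assumptions here, not taken from the abstract.

```python
import numpy as np

def p100(citations):
    """P100-style percentile rank based on unique citation values (hedged reading).

    The lowest unique citation count maps to 0 and the highest to 100; intermediate
    unique values are spaced evenly by their rank among the unique values, so tied
    papers share one rank.
    """
    citations = np.asarray(citations)
    unique_vals = np.unique(citations)                  # sorted ascending
    if unique_vals.size == 1:
        return np.full(citations.shape, 100.0)          # degenerate case: all counts equal
    ranks = np.searchsorted(unique_vals, citations)     # rank of each paper's unique value
    return 100.0 * ranks / (unique_vals.size - 1)

# Toy reference set; adding a single highly cited paper changes the unique-value grid
# and hence the scores of untouched papers -- the kind of behaviour the abstract probes.
print(p100([0, 1, 1, 2, 5, 5, 7]))
print(p100([0, 1, 1, 2, 5, 5, 7, 30]))
```

In the second call every previously existing paper keeps its citation count, yet its P100 score drops because the grid of unique values has gained an extra point.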
1408.0926 | James Cheney | Harry Halpin and James Cheney | Dynamic Provenance for SPARQL Update | Pre-publication version of ISWC 2014 paper | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the Semantic Web currently can exhibit provenance information by using
the W3C PROV standards, there is a "missing link" in connecting PROV to storing
and querying for dynamic changes to RDF graphs using SPARQL. Solving this
problem would be required for such clear use-cases as the creation of version
control systems for RDF. While some provenance models and annotation techniques
for storing and querying provenance data originally developed with databases or
workflows in mind transfer readily to RDF and SPARQL, these techniques do not
readily adapt to describing changes in dynamic RDF datasets over time. In this
paper we explore how to adapt the dynamic copy-paste provenance model of
Buneman et al. [2] to RDF datasets that change over time in response to SPARQL
updates, how to represent the resulting provenance records themselves as RDF in
a manner compatible with W3C PROV, and how the provenance information can be
defined by reinterpreting SPARQL updates. The primary contribution of this
paper is a semantic framework that enables the semantics of SPARQL Update to be
used as the basis for a 'cut-and-paste' provenance model in a principled
manner.
| [
{
"version": "v1",
"created": "Tue, 5 Aug 2014 11:17:13 GMT"
}
] | 2014-08-06T00:00:00 | [
[
"Halpin",
"Harry",
""
],
[
"Cheney",
"James",
""
]
] | TITLE: Dynamic Provenance for SPARQL Update
ABSTRACT: While the Semantic Web currently can exhibit provenance information by using
the W3C PROV standards, there is a "missing link" in connecting PROV to storing
and querying for dynamic changes to RDF graphs using SPARQL. Solving this
problem would be required for such clear use-cases as the creation of version
control systems for RDF. While some provenance models and annotation techniques
for storing and querying provenance data originally developed with databases or
workflows in mind transfer readily to RDF and SPARQL, these techniques do not
readily adapt to describing changes in dynamic RDF datasets over time. In this
paper we explore how to adapt the dynamic copy-paste provenance model of
Buneman et al. [2] to RDF datasets that change over time in response to SPARQL
updates, how to represent the resulting provenance records themselves as RDF in
a manner compatible with W3C PROV, and how the provenance information can be
defined by reinterpreting SPARQL updates. The primary contribution of this
paper is a semantic framework that enables the semantics of SPARQL Update to be
used as the basis for a 'cut-and-paste' provenance model in a principled
manner.
| no_new_dataset | 0.939081 |
1408.0972 | Shaina Race Ph.D | Shaina Race and Carl Meyer | A Flexible Iterative Framework for Consensus Clustering | null | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel framework for consensus clustering is presented which has the ability
to determine both the number of clusters and a final solution using multiple
algorithms. A consensus similarity matrix is formed from an ensemble using
multiple algorithms and several values for k. A variety of dimension reduction
techniques and clustering algorithms are considered for analysis. For noisy or
high-dimensional data, an iterative technique is presented to refine this
consensus matrix in a way that encourages the algorithms to agree upon a common
solution. We utilize the theory of nearly uncoupled Markov chains to determine
the number, k, of clusters in a dataset by considering a random walk on the
graph defined by the consensus matrix. The eigenvalues of the associated
transition probability matrix are used to determine the number of clusters.
This method succeeds at determining the number of clusters in many datasets
where previous methods fail. On every considered dataset, our consensus method
provides a final result with accuracy well above the average of the individual
algorithms.
| [
{
"version": "v1",
"created": "Tue, 5 Aug 2014 13:54:01 GMT"
}
] | 2014-08-06T00:00:00 | [
[
"Race",
"Shaina",
""
],
[
"Meyer",
"Carl",
""
]
] | TITLE: A Flexible Iterative Framework for Consensus Clustering
ABSTRACT: A novel framework for consensus clustering is presented which has the ability
to determine both the number of clusters and a final solution using multiple
algorithms. A consensus similarity matrix is formed from an ensemble using
multiple algorithms and several values for k. A variety of dimension reduction
techniques and clustering algorithms are considered for analysis. For noisy or
high-dimensional data, an iterative technique is presented to refine this
consensus matrix in a way that encourages the algorithms to agree upon a common
solution. We utilize the theory of nearly uncoupled Markov chains to determine
the number, k, of clusters in a dataset by considering a random walk on the
graph defined by the consensus matrix. The eigenvalues of the associated
transition probability matrix are used to determine the number of clusters.
This method succeeds at determining the number of clusters in many datasets
where previous methods fail. On every considered dataset, our consensus method
provides a final result with accuracy well above the average of the individual
algorithms.
| no_new_dataset | 0.949623 |
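The record above (arXiv:1408.0972) determines the number of clusters from the eigenvalues of the transition matrix of a random walk on the consensus graph. The hedged sketch below operationalises that idea via the largest eigengap, which is one common choice but not necessarily the paper's exact rule.

```python
import numpy as np

def estimate_k_from_consensus(C, tol=1e-12):
    """Estimate the number of clusters from a consensus similarity matrix.

    C[i, j] is assumed to be the fraction of ensemble clusterings that put points
    i and j together (symmetric, non-negative, C[i, i] = 1). A random walk on the
    consensus graph has transition matrix P = D^{-1} C; for a nearly uncoupled
    chain the k leading eigenvalues cluster near 1, so k is placed at the largest
    gap in the sorted eigenvalue magnitudes. Illustrative sketch only.
    """
    C = np.asarray(C, dtype=float)
    row_sums = C.sum(axis=1)
    P = C / np.maximum(row_sums[:, None], tol)              # row-stochastic transition matrix
    eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]   # descending magnitudes
    gaps = eigvals[:-1] - eigvals[1:]
    return int(np.argmax(gaps)) + 1                         # k = position of largest eigengap

# Toy consensus matrix with two well-separated blocks -> estimated k should be 2.
block = np.ones((3, 3))
C = np.block([[block, 0.05 * np.ones((3, 3))],
              [0.05 * np.ones((3, 3)), block]])
print(estimate_k_from_consensus(C))
```

The weak off-diagonal coupling (0.05) is what makes the chain "nearly uncoupled": two eigenvalues sit near 1 and the rest collapse towards 0, so the gap after the second eigenvalue dominates.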
1408.0985 | Lucas Lacasa | Jordi Luque, Bartolo Luque and Lucas Lacasa | Speech earthquakes: scaling and universality in human voice | Submitted for publication | null | null | null | physics.soc-ph cs.CL q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech is a distinctive complex feature of human capabilities. In order to
understand the physics underlying speech production, in this work we
empirically analyse the statistics of large human speech datasets spanning
several languages. We first show that during speech the energy is unevenly
released and power-law distributed, reporting a universal robust
Gutenberg-Richter-like law in speech. We further show that such earthquakes in
speech show temporal correlations, as the interevent statistics are again
power-law distributed. Since this feature takes place in the intra-phoneme
range, we conjecture that what is responsible for this complex phenomenon is not
cognitive, but resides in the physiological speech production mechanism.
Moreover, we show that these waiting time distributions are scale invariant
under a renormalisation group transformation, suggesting that the process of
speech generation is indeed operating close to a critical point. These results
are put in contrast with current paradigms in speech processing, which point
towards low dimensional deterministic chaos as the origin of nonlinear traits
in speech fluctuations. As these latter fluctuations are indeed the aspects
that humanize synthetic speech, these findings may have an impact in future
speech synthesis technologies. Results are robust and independent of the
communication language or the number of speakers, pointing towards a universal
pattern and yet another hint of complexity in human speech.
| [
{
"version": "v1",
"created": "Tue, 5 Aug 2014 14:34:20 GMT"
}
] | 2014-08-06T00:00:00 | [
[
"Luque",
"Jordi",
""
],
[
"Luque",
"Bartolo",
""
],
[
"Lacasa",
"Lucas",
""
]
] | TITLE: Speech earthquakes: scaling and universality in human voice
ABSTRACT: Speech is a distinctive complex feature of human capabilities. In order to
understand the physics underlying speech production, in this work we
empirically analyse the statistics of large human speech datasets spanning
several languages. We first show that during speech the energy is unevenly
released and power-law distributed, reporting a universal robust
Gutenberg-Richter-like law in speech. We further show that such earthquakes in
speech show temporal correlations, as the interevent statistics are again
power-law distributed. Since this feature takes place in the intra-phoneme
range, we conjecture that what is responsible for this complex phenomenon is not
cognitive, but resides in the physiological speech production mechanism.
Moreover, we show that these waiting time distributions are scale invariant
under a renormalisation group transformation, suggesting that the process of
speech generation is indeed operating close to a critical point. These results
are put in contrast with current paradigms in speech processing, which point
towards low dimensional deterministic chaos as the origin of nonlinear traits
in speech fluctuations. As these latter fluctuations are indeed the aspects
that humanize synthetic speech, these findings may have an impact in future
speech synthesis technologies. Results are robust and independent of the
communication language or the number of speakers, pointing towards a universal
pattern and yet another hint of complexity in human speech.
| no_new_dataset | 0.941975 |
1408.0427 | Akihiro Fujihara Dr. | Akihiro Fujihara, Hiroyoshi Miwa | Homesick L\'evy walk: A mobility model having Ichi-go Ichi-e and
scale-free properties of human encounters | 8 pages, 10 figures | null | 10.1109/COMPSAC.2014.81 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, mobility models have been reconsidered based on findings by
analyzing some big datasets collected by GPS sensors, cellphone call records,
and Geotagging. To understand the fundamental statistical properties of the
frequency of serendipitous human encounters, we conducted experiments to
collect long-term data on human contact using short-range wireless
communication devices which many people frequently carry in daily life. By
analyzing the data we showed that the majority of human encounters occur
once-in-an-experimental-period: they are Ichi-go Ichi-e. We also found that the
remaining more frequent encounters obey a power-law distribution: they are
scale-free. To theoretically find the origin of these properties, we introduced
the Homesick L\'evy walk as a minimal human mobility model, in which the walker
stochastically selects either moving long distances, as in a L\'evy walk, or
returning back home. Using numerical simulations and a simple mean-field
theory, we offer a theoretical explanation for the properties to validate the
mobility model. The proposed model is helpful for better evaluating the
long-term performance of routing protocols in delay tolerant networks and
mobile opportunistic networks, since some utility-based protocols select nodes
with frequent encounters for message transfer.
| [
{
"version": "v1",
"created": "Sat, 2 Aug 2014 21:53:22 GMT"
}
] | 2014-08-05T00:00:00 | [
[
"Fujihara",
"Akihiro",
""
],
[
"Miwa",
"Hiroyoshi",
""
]
] | TITLE: Homesick L\'evy walk: A mobility model having Ichi-go Ichi-e and
scale-free properties of human encounters
ABSTRACT: In recent years, mobility models have been reconsidered based on findings by
analyzing some big datasets collected by GPS sensors, cellphone call records,
and Geotagging. To understand the fundamental statistical properties of the
frequency of serendipitous human encounters, we conducted experiments to
collect long-term data on human contact using short-range wireless
communication devices which many people frequently carry in daily life. By
analyzing the data we showed that the majority of human encounters occur
once-in-an-experimental-period: they are Ichi-go Ichi-e. We also found that the
remaining more frequent encounters obey a power-law distribution: they are
scale-free. To theoretically find the origin of these properties, we introduced
the Homesick L\'evy walk as a minimal human mobility model, in which the walker
stochastically selects either moving long distances, as in a L\'evy walk, or
returning back home. Using numerical simulations and a simple mean-field
theory, we offer a theoretical explanation for the properties to validate the
mobility model. The proposed model is helpful for better evaluating the
long-term performance of routing protocols in delay tolerant networks and
mobile opportunistic networks, since some utility-based protocols select nodes
with frequent encounters for message transfer.
| no_new_dataset | 0.948058 |
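The record above (arXiv:1408.0427) introduces the Homesick Lévy walk, in which the walker either performs a heavy-tailed Lévy step or returns home. The following simulation is only an illustrative guess at such a model; the return probability `alpha`, the Pareto tail exponent `mu`, and the 2-D setting are assumptions, not parameters taken from the paper.

```python
import numpy as np

def homesick_levy_walk(n_steps=10_000, alpha=0.1, mu=1.5, rng=None):
    """Simulate a 2-D homesick Levy walk (sketch, not the paper's exact model).

    At every step the walker returns to its home location with probability `alpha`;
    otherwise it performs an ordinary Levy-walk step whose length is Pareto
    distributed with tail exponent `mu` and whose direction is uniform.
    """
    rng = np.random.default_rng(rng)
    home = np.zeros(2)
    pos = home.copy()
    trajectory = [pos.copy()]
    for _ in range(n_steps):
        if rng.random() < alpha:
            pos = home.copy()                        # homesick return
        else:
            step_len = rng.pareto(mu) + 1.0          # heavy-tailed flight length
            angle = rng.uniform(0.0, 2.0 * np.pi)
            pos = pos + step_len * np.array([np.cos(angle), np.sin(angle)])
        trajectory.append(pos.copy())
    return np.array(trajectory)

traj = homesick_levy_walk(n_steps=1000, rng=0)
print(traj.shape, np.abs(traj).max())
```

Logging which positions are revisited along such a trajectory is one simple way to reproduce, qualitatively, the mix of one-off and power-law-repeated encounters described in the abstract.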
1408.0677 | Chris Muelder | Chris W. Muelder, Nick Leaf, Carmen Sigovan, and Kwan-Liu Ma | A Moving Least Squares Based Approach for Contour Visualization of
Multi-Dimensional Data | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of high dimensional data is a common task. Often, small multiples
are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot
matrix. Associating data points between different views can be difficult
though, as the points are not fixed. Other times, dimensional reduction
techniques are employed to summarize the whole dataset in one image, but
individual dimensions are lost in this view. In this paper, we present a means
of augmenting a dimensional reduction plot with isocontours to reintroduce the
original dimensions. By applying this to each dimension in the original data,
we create multiple views where the points are consistent, which facilitates
their comparison. Our approach employs a combination of a novel, graph-based
projection technique with a GPU accelerated implementation of moving least
squares to interpolate space between the points. We also present evaluations of
this approach both with a case study and with a user study.
| [
{
"version": "v1",
"created": "Mon, 4 Aug 2014 13:27:17 GMT"
}
] | 2014-08-05T00:00:00 | [
[
"Muelder",
"Chris W.",
""
],
[
"Leaf",
"Nick",
""
],
[
"Sigovan",
"Carmen",
""
],
[
"Ma",
"Kwan-Liu",
""
]
] | TITLE: A Moving Least Squares Based Approach for Contour Visualization of
Multi-Dimensional Data
ABSTRACT: Analysis of high dimensional data is a common task. Often, small multiples
are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot
matrix. Associating data points between different views can be difficult
though, as the points are not fixed. Other times, dimensional reduction
techniques are employed to summarize the whole dataset in one image, but
individual dimensions are lost in this view. In this paper, we present a means
of augmenting a dimensional reduction plot with isocontours to reintroduce the
original dimensions. By applying this to each dimension in the original data,
we create multiple views where the points are consistent, which facilitates
their comparison. Our approach employs a combination of a novel, graph-based
projection technique with a GPU accelerated implementation of moving least
squares to interpolate space between the points. We also present evaluations of
this approach both with a case study and with a user study.
| no_new_dataset | 0.949106 |
1408.0751 | Amirali Abdullah | Amirali Abdullah, Alexandr Andoni, Ravindran Kannan, Robert
Krauthgamer | Spectral Approaches to Nearest Neighbor Search | Accepted in the proceedings of FOCS 2014. 30 pages and 4 figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study spectral algorithms for the high-dimensional Nearest Neighbor Search
problem (NNS). In particular, we consider a semi-random setting where a dataset
$P$ in $\mathbb{R}^d$ is chosen arbitrarily from an unknown subspace of low
dimension $k\ll d$, and then perturbed by fully $d$-dimensional Gaussian noise.
We design spectral NNS algorithms whose query time depends polynomially on $d$
and $\log n$ (where $n=|P|$) for large ranges of $k$, $d$ and $n$. Our
algorithms use a repeated computation of the top PCA vector/subspace, and are
effective even when the random-noise magnitude is {\em much larger} than the
interpoint distances in $P$. Our motivation is that in practice, a number of
spectral NNS algorithms outperform the random-projection methods that seem
otherwise theoretically optimal on worst case datasets. In this paper we aim to
provide theoretical justification for this disparity.
| [
{
"version": "v1",
"created": "Mon, 4 Aug 2014 17:51:17 GMT"
}
] | 2014-08-05T00:00:00 | [
[
"Abdullah",
"Amirali",
""
],
[
"Andoni",
"Alexandr",
""
],
[
"Kannan",
"Ravindran",
""
],
[
"Krauthgamer",
"Robert",
""
]
] | TITLE: Spectral Approaches to Nearest Neighbor Search
ABSTRACT: We study spectral algorithms for the high-dimensional Nearest Neighbor Search
problem (NNS). In particular, we consider a semi-random setting where a dataset
$P$ in $\mathbb{R}^d$ is chosen arbitrarily from an unknown subspace of low
dimension $k\ll d$, and then perturbed by fully $d$-dimensional Gaussian noise.
We design spectral NNS algorithms whose query time depends polynomially on $d$
and $\log n$ (where $n=|P|$) for large ranges of $k$, $d$ and $n$. Our
algorithms use a repeated computation of the top PCA vector/subspace, and are
effective even when the random-noise magnitude is {\em much larger} than the
interpoint distances in $P$. Our motivation is that in practice, a number of
spectral NNS algorithms outperform the random-projection methods that seem
otherwise theoretically optimal on worst case datasets. In this paper we aim to
provide theoretical justification for this disparity.
| no_new_dataset | 0.94366 |
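The record above (arXiv:1408.0751) studies nearest neighbor search after projecting onto top PCA directions in a semi-random model (low-dimensional data plus full-dimensional Gaussian noise). Below is a much-simplified single-shot sketch of that pipeline; the paper's algorithms use repeated PCA computations and come with guarantees this toy version does not have.

```python
import numpy as np

def pca_project(P, k):
    """Project the dataset onto its top-k PCA subspace (single-shot simplification)."""
    mean = P.mean(axis=0)
    X = P - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt are principal directions
    W = Vt[:k].T                                      # d x k projection matrix
    return X @ W, mean, W

def nearest_neighbor(P_proj, query_proj):
    """Brute-force nearest neighbor in the reduced space."""
    dists = np.linalg.norm(P_proj - query_proj, axis=1)
    return int(np.argmin(dists))

# Semi-random instance in the spirit of the abstract: points near a k-dimensional
# subspace of R^d, perturbed by full-dimensional Gaussian noise.
rng = np.random.default_rng(0)
n, d, k = 500, 100, 5
basis = rng.standard_normal((k, d))
P = rng.standard_normal((n, k)) @ basis + 0.1 * rng.standard_normal((n, d))
P_proj, mean, W = pca_project(P, k)
q = P[42] + 0.01 * rng.standard_normal(d)             # query close to point 42
print(nearest_neighbor(P_proj, (q - mean) @ W))       # expected: 42 with high probability
```

Searching in the k-dimensional projection rather than the ambient space is what removes most of the noise dimensions from the distance computation.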
1408.0047 | Truyen Tran | Truyen Tran, Dinh Phung, Svetha Venkatesh | Cumulative Restricted Boltzmann Machines for Ordinal Matrix Data
Analysis | JMLR: Workshop and Conference Proceedings 25:1-16, 2012; Asian
Conference on Machine Learning | null | null | null | stat.ML cs.IR cs.LG stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ordinal data is omnipresent in almost all multiuser-generated feedback -
questionnaires, preferences etc. This paper investigates modelling of ordinal
data with Gaussian restricted Boltzmann machines (RBMs). In particular, we
present the model architecture, learning and inference procedures for both
vector-variate and matrix-variate ordinal data. We show that our model is able
to capture the latent opinion profile of citizens around the world, and is
competitive against state-of-the-art collaborative filtering techniques on
large-scale public datasets. The model thus has the potential to extend
application of RBMs to diverse domains such as recommendation systems, product
reviews and expert assessments.
| [
{
"version": "v1",
"created": "Thu, 31 Jul 2014 23:54:16 GMT"
}
] | 2014-08-04T00:00:00 | [
[
"Tran",
"Truyen",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Cumulative Restricted Boltzmann Machines for Ordinal Matrix Data
Analysis
ABSTRACT: Ordinal data is omnipresent in almost all multiuser-generated feedback -
questionnaires, preferences etc. This paper investigates modelling of ordinal
data with Gaussian restricted Boltzmann machines (RBMs). In particular, we
present the model architecture, learning and inference procedures for both
vector-variate and matrix-variate ordinal data. We show that our model is able
to capture the latent opinion profile of citizens around the world, and is
competitive against state-of-the-art collaborative filtering techniques on
large-scale public datasets. The model thus has the potential to extend
application of RBMs to diverse domains such as recommendation systems, product
reviews and expert assessments.
| no_new_dataset | 0.949248 |
1311.6802 | Smriti Bhagat | Smriti Bhagat, Udi Weinsberg, Stratis Ioannidis, Nina Taft | Recommending with an Agenda: Active Learning of Private Attributes using
Matrix Factorization | This is the extended version of a paper that appeared in ACM RecSys
2014 | null | null | null | cs.LG cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems leverage user demographic information, such as age,
gender, etc., to personalize recommendations and better place their targeted
ads. Oftentimes, users do not volunteer this information due to privacy
concerns, or due to a lack of initiative in filling out their online profiles.
We illustrate a new threat in which a recommender learns private attributes of
users who do not voluntarily disclose them. We design both passive and active
attacks that solicit ratings for strategically selected items, and could thus
be used by a recommender system to pursue this hidden agenda. Our methods are
based on a novel usage of Bayesian matrix factorization in an active learning
setting. Evaluations on multiple datasets illustrate that such attacks are
indeed feasible and use significantly fewer rated items than static inference
methods. Importantly, they succeed without sacrificing the quality of
recommendations to users.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2013 20:48:59 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jul 2014 23:08:54 GMT"
}
] | 2014-08-01T00:00:00 | [
[
"Bhagat",
"Smriti",
""
],
[
"Weinsberg",
"Udi",
""
],
[
"Ioannidis",
"Stratis",
""
],
[
"Taft",
"Nina",
""
]
] | TITLE: Recommending with an Agenda: Active Learning of Private Attributes using
Matrix Factorization
ABSTRACT: Recommender systems leverage user demographic information, such as age,
gender, etc., to personalize recommendations and better place their targeted
ads. Oftentimes, users do not volunteer this information due to privacy
concerns, or due to a lack of initiative in filling out their online profiles.
We illustrate a new threat in which a recommender learns private attributes of
users who do not voluntarily disclose them. We design both passive and active
attacks that solicit ratings for strategically selected items, and could thus
be used by a recommender system to pursue this hidden agenda. Our methods are
based on a novel usage of Bayesian matrix factorization in an active learning
setting. Evaluations on multiple datasets illustrate that such attacks are
indeed feasible and use significantly fewer rated items than static inference
methods. Importantly, they succeed without sacrificing the quality of
recommendations to users.
| no_new_dataset | 0.947088 |
1312.7715 | Dan Banica | Dan Banica, Cristian Sminchisescu | Constrained Parametric Proposals and Pooling Methods for Semantic
Segmentation in RGB-D Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on the problem of semantic segmentation based on RGB-D data, with
emphasis on analyzing cluttered indoor scenes containing many instances from
many visual categories. Our approach is based on a parametric figure-ground
intensity and depth-constrained proposal process that generates spatial layout
hypotheses at multiple locations and scales in the image followed by a
sequential inference algorithm that integrates the proposals into a complete
scene estimate. Our contributions can be summarized as proposing the following:
(1) a generalization of parametric max flow figure-ground proposal methodology
to take advantage of intensity and depth information, in order to
systematically and efficiently generate the breakpoints of an underlying
spatial model in polynomial time, (2) new region description methods based on
second-order pooling over multiple features constructed using both intensity
and depth channels, (3) an inference procedure that can resolve conflicts in
overlapping spatial partitions, and handles scenes with a large number of
object category instances, of very different scales, (4) extensive evaluation
of the impact of depth, as well as the effectiveness of a large number of
descriptors, both pre-designed and automatically obtained using deep learning,
in a difficult RGB-D semantic segmentation problem with 92 classes. We report
state of the art results in the challenging NYU Depth v2 dataset, extended for
RMRC 2013 Indoor Segmentation Challenge, where currently the proposed model
ranks first, with an average score of 24.61%, winning 39 of the classes.
Moreover, we show that by combining second-order and deep learning features,
over 15% relative accuracy improvements can be additionally achieved. In a
scene classification benchmark, our methodology further improves the state of
the art by 24%.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2013 13:44:53 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Jul 2014 16:17:50 GMT"
}
] | 2014-08-01T00:00:00 | [
[
"Banica",
"Dan",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | TITLE: Constrained Parametric Proposals and Pooling Methods for Semantic
Segmentation in RGB-D Images
ABSTRACT: We focus on the problem of semantic segmentation based on RGB-D data, with
emphasis on analyzing cluttered indoor scenes containing many instances from
many visual categories. Our approach is based on a parametric figure-ground
intensity and depth-constrained proposal process that generates spatial layout
hypotheses at multiple locations and scales in the image followed by a
sequential inference algorithm that integrates the proposals into a complete
scene estimate. Our contributions can be summarized as proposing the following:
(1) a generalization of parametric max flow figure-ground proposal methodology
to take advantage of intensity and depth information, in order to
systematically and efficiently generate the breakpoints of an underlying
spatial model in polynomial time, (2) new region description methods based on
second-order pooling over multiple features constructed using both intensity
and depth channels, (3) an inference procedure that can resolve conflicts in
overlapping spatial partitions, and handles scenes with a large number of
object category instances, of very different scales, (4) extensive evaluation
of the impact of depth, as well as the effectiveness of a large number of
descriptors, both pre-designed and automatically obtained using deep learning,
in a difficult RGB-D semantic segmentation problem with 92 classes. We report
state of the art results in the challenging NYU Depth v2 dataset, extended for
RMRC 2013 Indoor Segmentation Challenge, where currently the proposed model
ranks first, with an average score of 24.61%, winning 39 of the classes.
Moreover, we show that by combining second-order and deep learning features,
over 15% relative accuracy improvements can be additionally achieved. In a
scene classification benchmark, our methodology further improves the state of
the art by 24%.
| no_new_dataset | 0.953057 |
1407.8176 | T.R. Gopalakrishnan Nair | T.R. Gopalakrishnan Nair, Richa Sharma | Accurate merging of images for predictive analysis using combined image | 5 pages, 4 figures,Signal Processing Image Processing & Pattern
Recognition (ICSIPR), 2013 International Conference on, Karunya University,
Coimbatore, India, pp.169,173, 7-8 Feb. 2013. arXiv admin note: substantial
text overlap with arXiv:1407.8123 | null | 10.1109/ICSIPR.2013.6497980 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several scientific and engineering applications require merging of sampled
images for complex perception development. In most cases, for such
requirements, images are merged at the intensity level. Even though this gives a
fairly good perception of the combined scenario of objects and scenes, it is
found to be insufficient for analyzing certain engineering cases. The main
problem is incoherent modulation of intensity arising from phase properties
being lost. In order to compensate for these losses, a combined phase and
amplitude merge is required. We present here a method which could be used in
precision engineering and biological applications where more precise prediction
of a combined phenomenon is required. When pixels are added, their original
properties are lost, but accurate merging of the intended pixels can be achieved
in high quality using the frequency-domain properties of an image. This paper
introduces a technique to merge various images which can be used as a simple but
effective way to obtain an overlapped view of a set of images and to produce a
reduced dataset for review purposes.
| [
{
"version": "v1",
"created": "Wed, 30 Jul 2014 07:08:31 GMT"
}
] | 2014-08-01T00:00:00 | [
[
"Nair",
"T. R. Gopalakrishnan",
""
],
[
"Sharma",
"Richa",
""
]
] | TITLE: Accurate merging of images for predictive analysis using combined image
ABSTRACT: Several scientific and engineering applications require merging of sampled
images for complex perception development. In most cases, for such
requirements, images are merged at the intensity level. Even though this gives a
fairly good perception of the combined scenario of objects and scenes, it is
found to be insufficient for analyzing certain engineering cases. The main
problem is incoherent modulation of intensity arising from phase properties
being lost. In order to compensate for these losses, a combined phase and
amplitude merge is required. We present here a method which could be used in
precision engineering and biological applications where more precise prediction
of a combined phenomenon is required. When pixels are added, their original
properties are lost, but accurate merging of the intended pixels can be achieved
in high quality using the frequency-domain properties of an image. This paper
introduces a technique to merge various images which can be used as a simple but
effective way to obtain an overlapped view of a set of images and to produce a
reduced dataset for review purposes.
| no_new_dataset | 0.950365 |
1407.8187 | Charles Fisher | Charles K. Fisher, Pankaj Mehta | Fast Bayesian Feature Selection for High Dimensional Linear Regression
in Genomics via the Ising Approximation | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection, identifying a subset of variables that are relevant for
predicting a response, is an important and challenging component of many
methods in statistics and machine learning. Feature selection is especially
difficult and computationally intensive when the number of variables approaches
or exceeds the number of samples, as is often the case for many genomic
datasets. Here, we introduce a new approach -- the Bayesian Ising Approximation
(BIA) -- to rapidly calculate posterior probabilities for feature relevance in
L2 penalized linear regression. In the regime where the regression problem is
strongly regularized by the prior, we show that computing the marginal
posterior probabilities for features is equivalent to computing the
magnetizations of an Ising model. Using a mean field approximation, we show it
is possible to rapidly compute the feature selection path described by the
posterior probabilities as a function of the L2 penalty. We present simulations
and analytical results illustrating the accuracy of the BIA on some simple
regression problems. Finally, we demonstrate the applicability of the BIA to
high dimensional regression by analyzing a gene expression dataset with nearly
30,000 features.
| [
{
"version": "v1",
"created": "Wed, 30 Jul 2014 20:00:14 GMT"
}
] | 2014-08-01T00:00:00 | [
[
"Fisher",
"Charles K.",
""
],
[
"Mehta",
"Pankaj",
""
]
] | TITLE: Fast Bayesian Feature Selection for High Dimensional Linear Regression
in Genomics via the Ising Approximation
ABSTRACT: Feature selection, identifying a subset of variables that are relevant for
predicting a response, is an important and challenging component of many
methods in statistics and machine learning. Feature selection is especially
difficult and computationally intensive when the number of variables approaches
or exceeds the number of samples, as is often the case for many genomic
datasets. Here, we introduce a new approach -- the Bayesian Ising Approximation
(BIA) -- to rapidly calculate posterior probabilities for feature relevance in
L2 penalized linear regression. In the regime where the regression problem is
strongly regularized by the prior, we show that computing the marginal
posterior probabilities for features is equivalent to computing the
magnetizations of an Ising model. Using a mean field approximation, we show it
is possible to rapidly compute the feature selection path described by the
posterior probabilities as a function of the L2 penalty. We present simulations
and analytical results illustrating the accuracy of the BIA on some simple
regression problems. Finally, we demonstrate the applicability of the BIA to
high dimensional regression by analyzing a gene expression dataset with nearly
30,000 features.
| no_new_dataset | 0.950457 |
1407.8518 | Roberto Rigamonti | Roberto Rigamonti, Vincent Lepetit, Pascal Fua | Beyond KernelBoost | null | null | null | EPFL-REPORT-200378 | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this Technical Report we propose a set of improvements with respect to the
KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with
a scheme inspired by Auto-Context, but that is suitable in situations where the
lack of large training sets poses a potential problem of overfitting. The aim
is to capture the interactions between neighboring image pixels to better
regularize the boundaries of segmented regions. As in Auto-Context [Tu et al.,
PAMI 2009] the segmentation process is iterative and, at each iteration, the
segmentation results for the previous iterations are taken into account in
conjunction with the image itself. However, unlike in [Tu et al., PAMI 2009],
we organize our recursion so that the classifiers can progressively focus on
difficult-to-classify locations. This lets us exploit the power of the
decision-tree paradigm while avoiding over-fitting. In the context of this
architecture, KernelBoost represents a powerful building block due to its
ability to learn on the score maps coming from previous iterations. We first
introduce two important mechanisms to empower the KernelBoost classifier,
namely pooling and the clustering of positive samples based on the appearance
of the corresponding ground-truth. These operations significantly contribute to
increasing the effectiveness of the system on biomedical images, where texture
plays a major role in the recognition of the different image components. We
then present some other techniques that can be easily integrated in the
KernelBoost framework to further improve the accuracy of the final
segmentation. We show extensive results on different medical image datasets,
including some multi-label tasks, on which our method is shown to outperform
state-of-the-art approaches. The resulting segmentations display high accuracy,
neat contours, and reduced noise.
| [
{
"version": "v1",
"created": "Mon, 28 Jul 2014 09:07:03 GMT"
}
] | 2014-08-01T00:00:00 | [
[
"Rigamonti",
"Roberto",
""
],
[
"Lepetit",
"Vincent",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: Beyond KernelBoost
ABSTRACT: In this Technical Report we propose a set of improvements with respect to the
KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with
a scheme inspired by Auto-Context, but that is suitable in situations where the
lack of large training sets poses a potential problem of overfitting. The aim
is to capture the interactions between neighboring image pixels to better
regularize the boundaries of segmented regions. As in Auto-Context [Tu et al.,
PAMI 2009] the segmentation process is iterative and, at each iteration, the
segmentation results for the previous iterations are taken into account in
conjunction with the image itself. However, unlike in [Tu et al., PAMI 2009],
we organize our recursion so that the classifiers can progressively focus on
difficult-to-classify locations. This lets us exploit the power of the
decision-tree paradigm while avoiding over-fitting. In the context of this
architecture, KernelBoost represents a powerful building block due to its
ability to learn on the score maps coming from previous iterations. We first
introduce two important mechanisms to empower the KernelBoost classifier,
namely pooling and the clustering of positive samples based on the appearance
of the corresponding ground-truth. These operations significantly contribute to
increasing the effectiveness of the system on biomedical images, where texture
plays a major role in the recognition of the different image components. We
then present some other techniques that can be easily integrated in the
KernelBoost framework to further improve the accuracy of the final
segmentation. We show extensive results on different medical image datasets,
including some multi-label tasks, on which our method is shown to outperform
state-of-the-art approaches. The resulting segmentations display high accuracy,
neat contours, and reduced noise.
| no_new_dataset | 0.943815 |
1406.6667 | Andrew Crotty | Andrew Crotty, Alex Galakatos, Kayhan Dursun, Tim Kraska, Ugur
Cetintemel, Stan Zdonik | Tupleware: Redefining Modern Analytics | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a fundamental discrepancy between the targeted and actual users of
current analytics frameworks. Most systems are designed for the data and
infrastructure of the Googles and Facebooks of the world---petabytes of data
distributed across large cloud deployments consisting of thousands of cheap
commodity machines. Yet, the vast majority of users operate clusters ranging
from a few to a few dozen nodes, analyze relatively small datasets of up to a
few terabytes, and perform primarily compute-intensive operations. Targeting
these users fundamentally changes the way we should build analytics systems.
This paper describes the design of Tupleware, a new system specifically aimed
at the challenges faced by the typical user. Tupleware's architecture brings
together ideas from the database, compiler, and programming languages
communities to create a powerful end-to-end solution for data analysis. We
propose novel techniques that consider the data, computations, and hardware
together to achieve maximum performance on a case-by-case basis. Our
experimental evaluation quantifies the impact of our novel techniques and shows
orders of magnitude performance improvement over alternative systems.
| [
{
"version": "v1",
"created": "Wed, 25 Jun 2014 19:06:15 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jul 2014 12:49:08 GMT"
}
] | 2014-07-31T00:00:00 | [
[
"Crotty",
"Andrew",
""
],
[
"Galakatos",
"Alex",
""
],
[
"Dursun",
"Kayhan",
""
],
[
"Kraska",
"Tim",
""
],
[
"Cetintemel",
"Ugur",
""
],
[
"Zdonik",
"Stan",
""
]
] | TITLE: Tupleware: Redefining Modern Analytics
ABSTRACT: There is a fundamental discrepancy between the targeted and actual users of
current analytics frameworks. Most systems are designed for the data and
infrastructure of the Googles and Facebooks of the world---petabytes of data
distributed across large cloud deployments consisting of thousands of cheap
commodity machines. Yet, the vast majority of users operate clusters ranging
from a few to a few dozen nodes, analyze relatively small datasets of up to a
few terabytes, and perform primarily compute-intensive operations. Targeting
these users fundamentally changes the way we should build analytics systems.
This paper describes the design of Tupleware, a new system specifically aimed
at the challenges faced by the typical user. Tupleware's architecture brings
together ideas from the database, compiler, and programming languages
communities to create a powerful end-to-end solution for data analysis. We
propose novel techniques that consider the data, computations, and hardware
together to achieve maximum performance on a case-by-case basis. Our
experimental evaluation quantifies the impact of our novel techniques and shows
orders of magnitude performance improvement over alternative systems.
| no_new_dataset | 0.944228 |
1407.4885 | Yves-Alexandre de Montjoye | Yves-Alexandre de Montjoye, Zbigniew Smoreda, Romain Trinquart, Cezary
Ziemlicki, Vincent D. Blondel | D4D-Senegal: The Second Mobile Phone Data for Development Challenge | null | null | null | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The D4D-Senegal challenge is an open innovation data challenge on anonymous
call patterns of Orange's mobile phone users in Senegal. The goal of the
challenge is to help address societal development questions in novel ways by
contributing to the socio-economic development and well-being of the Senegalese
population. Participants in the challenge are given access to three mobile
phone datasets. This paper describes the three datasets. The datasets are based
on Call Detail Records (CDR) of phone calls and text exchanges between more
than 9 million of Orange's customers in Senegal from January 1, 2013 to
December 31, 2013. The datasets are: (1) antenna-to-antenna traffic for 1666
antennas on an hourly basis, (2) fine-grained mobility data on a rolling 2-week
basis for a year with bandicoot behavioral indicators at the individual level
for about 300,000 randomly sampled users, (3) one year of coarse-grained
mobility data at the arrondissement level with bandicoot behavioral indicators
at the individual level for about 150,000 randomly sampled users.
| [
{
"version": "v1",
"created": "Fri, 18 Jul 2014 05:07:49 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jul 2014 13:13:59 GMT"
}
] | 2014-07-31T00:00:00 | [
[
"de Montjoye",
"Yves-Alexandre",
""
],
[
"Smoreda",
"Zbigniew",
""
],
[
"Trinquart",
"Romain",
""
],
[
"Ziemlicki",
"Cezary",
""
],
[
"Blondel",
"Vincent D.",
""
]
] | TITLE: D4D-Senegal: The Second Mobile Phone Data for Development Challenge
ABSTRACT: The D4D-Senegal challenge is an open innovation data challenge on anonymous
call patterns of Orange's mobile phone users in Senegal. The goal of the
challenge is to help address societal development questions in novel ways by
contributing to the socio-economic development and well-being of the Senegalese
population. Participants in the challenge are given access to three mobile
phone datasets. This paper describes the three datasets. The datasets are based
on Call Detail Records (CDR) of phone calls and text exchanges between more
than 9 million of Orange's customers in Senegal from January 1, 2013 to
December 31, 2013. The datasets are: (1) antenna-to-antenna traffic for 1666
antennas on an hourly basis, (2) fine-grained mobility data on a rolling 2-week
basis for a year with bandicoot behavioral indicators at the individual level
for about 300,000 randomly sampled users, (3) one year of coarse-grained
mobility data at the arrondissement level with bandicoot behavioral indicators
at the individual level for about 150,000 randomly sampled users.
| no_new_dataset | 0.815967 |
1407.7930 | EPTCS | Roi Blanco (Yahoo! Research Barcelona, Spain), Paolo Boldi
(Dipartimento di informatica, Università degli Studi di Milano), Andrea
Marino (Dipartimento di informatica, Università degli Studi di Milano) | Entity-Linking via Graph-Distance Minimization | In Proceedings GRAPHITE 2014, arXiv:1407.7671. The second and third
authors were supported by the EU-FET grant NADINE (GA 288956) | EPTCS 159, 2014, pp. 30-43 | 10.4204/EPTCS.159.4 | null | cs.DS cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity-linking is a natural-language-processing task that consists in
identifying the entities mentioned in a piece of text, linking each to an
appropriate item in some knowledge base; when the knowledge base is Wikipedia,
the problem comes to be known as wikification (in this case, items are
wikipedia articles). One instance of entity-linking can be formalized as an
optimization problem on the underlying concept graph, where the quantity to be
optimized is the average distance between chosen items. Inspired by this
application, we define a new graph problem which is a natural variant of the
Maximum Capacity Representative Set. We prove that our problem is NP-hard for
general graphs; nonetheless, under some restrictive assumptions, it turns out
to be solvable in linear time. For the general case, we propose two heuristics:
one tries to enforce the above assumptions and another one is based on the
notion of hitting distance; we show experimentally how these approaches perform
with respect to some baselines on a real-world dataset.
| [
{
"version": "v1",
"created": "Wed, 30 Jul 2014 03:22:51 GMT"
}
] | 2014-07-31T00:00:00 | [
[
"Blanco",
"Roi",
"",
"Yahoo! Research Barcelona, Spain"
],
[
"Boldi",
"Paolo",
"",
"Dipartimento di informatica, Università degli Studi di Milano"
],
[
"Marino",
"Andrea",
"",
"Dipartimento di informatica, Università degli Studi di Milano"
]
] | TITLE: Entity-Linking via Graph-Distance Minimization
ABSTRACT: Entity-linking is a natural-language-processing task that consists in
identifying the entities mentioned in a piece of text, linking each to an
appropriate item in some knowledge base; when the knowledge base is Wikipedia,
the problem comes to be known as wikification (in this case, items are
wikipedia articles). One instance of entity-linking can be formalized as an
optimization problem on the underlying concept graph, where the quantity to be
optimized is the average distance between chosen items. Inspired by this
application, we define a new graph problem which is a natural variant of the
Maximum Capacity Representative Set. We prove that our problem is NP-hard for
general graphs; nonetheless, under some restrictive assumptions, it turns out
to be solvable in linear time. For the general case, we propose two heuristics:
one tries to enforce the above assumptions and another one is based on the
notion of hitting distance; we show experimentally how these approaches perform
with respect to some baselines on a real-world dataset.
| no_new_dataset | 0.942612 |
1405.4095 | Xuzhen Zhu | Xuzhen Zhu, Hui Tian, Shimin Cai | Personalized recommendation with corrected similarity | 13 pages, 2 figures, 2 tables. arXiv admin note: text overlap with
arXiv:0805.4127 by other authors | null | 10.1088/1742-5468/2014/07/P07004 | null | cs.IR cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalized recommendation attracts a surge of interdisciplinary research.
In particular, similarity-based methods achieve great success in applications
to real recommendation systems. However, similarities are often markedly
overestimated or underestimated due to the defective strategy of unidirectional
similarity estimation. In this paper, we address this drawback by leveraging
mutual correction of forward and backward similarity estimations, and propose a
new personalized recommendation index, i.e., corrected similarity based
inference (CSI). Through extensive experiments on four benchmark datasets, the
results show a clear improvement of CSI over the mainstream baselines. A
detailed analysis is presented to unveil and understand the origin of the
differences between CSI and the mainstream indices.
| [
{
"version": "v1",
"created": "Fri, 16 May 2014 08:50:59 GMT"
}
] | 2014-07-30T00:00:00 | [
[
"Zhu",
"Xuzhen",
""
],
[
"Tian",
"Hui",
""
],
[
"Cai",
"Shimin",
""
]
] | TITLE: Personalized recommendation with corrected similarity
ABSTRACT: Personalized recommendation attracts a surge of interdisciplinary research.
In particular, similarity-based methods achieve great success in applications
to real recommendation systems. However, similarities are often markedly
overestimated or underestimated due to the defective strategy of unidirectional
similarity estimation. In this paper, we address this drawback by leveraging
mutual correction of forward and backward similarity estimations, and propose a
new personalized recommendation index, i.e., corrected similarity based
inference (CSI). Through extensive experiments on four benchmark datasets, the
results show a clear improvement of CSI over the mainstream baselines. A
detailed analysis is presented to unveil and understand the origin of the
differences between CSI and the mainstream indices.
| no_new_dataset | 0.946843 |
1407.7566 | Eric Strobl | Eric V. Strobl, Shyam Visweswaran | Dependence versus Conditional Dependence in Local Causal Discovery from
Gene Expression Data | 11 pages, 2 algorithms, 4 figures, 5 tables | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Algorithms that discover variables which are causally related to
a target may inform the design of experiments. With observational gene
expression data, many methods discover causal variables by measuring each
variable's degree of statistical dependence with the target using dependence
measures (DMs). However, other methods measure each variable's ability to
explain the statistical dependence between the target and the remaining
variables in the data using conditional dependence measures (CDMs), since this
strategy is guaranteed to find the target's direct causes, direct effects, and
direct causes of the direct effects in the infinite sample limit. In this
paper, we design a new algorithm in order to systematically compare the
relative abilities of DMs and CDMs in discovering causal variables from gene
expression data.
Results: The proposed algorithm using a CDM is sample efficient, since it
consistently outperforms other state-of-the-art local causal discovery
algorithms when sample sizes are small. However, the proposed algorithm using
a CDM outperforms the proposed algorithm using a DM only when sample sizes are
above several hundred. These results suggest that accurate causal discovery
from gene expression data using current CDM-based algorithms requires datasets
with at least several hundred samples.
Availability: The proposed algorithm is freely available at
https://github.com/ericstrobl/DvCD.
| [
{
"version": "v1",
"created": "Mon, 28 Jul 2014 20:52:18 GMT"
}
] | 2014-07-30T00:00:00 | [
[
"Strobl",
"Eric V.",
""
],
[
"Visweswaran",
"Shyam",
""
]
] | TITLE: Dependence versus Conditional Dependence in Local Causal Discovery from
Gene Expression Data
ABSTRACT: Motivation: Algorithms that discover variables which are causally related to
a target may inform the design of experiments. With observational gene
expression data, many methods discover causal variables by measuring each
variable's degree of statistical dependence with the target using dependence
measures (DMs). However, other methods measure each variable's ability to
explain the statistical dependence between the target and the remaining
variables in the data using conditional dependence measures (CDMs), since this
strategy is guaranteed to find the target's direct causes, direct effects, and
direct causes of the direct effects in the infinite sample limit. In this
paper, we design a new algorithm in order to systematically compare the
relative abilities of DMs and CDMs in discovering causal variables from gene
expression data.
Results: The proposed algorithm using a CDM is sample efficient, since it
consistently outperforms other state-of-the-art local causal discovery
algorithms when sample sizes are small. However, the proposed algorithm using
a CDM outperforms the proposed algorithm using a DM only when sample sizes are
above several hundred. These results suggest that accurate causal discovery
from gene expression data using current CDM-based algorithms requires datasets
with at least several hundred samples.
Availability: The proposed algorithm is freely available at
https://github.com/ericstrobl/DvCD.
| no_new_dataset | 0.946646 |
1407.7584 | Danushka Bollegala | Danushka Bollegala | Dynamic Feature Scaling for Online Learning of Binary Classifiers | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scaling feature values is an important step in numerous machine learning
tasks. Different features can have different value ranges and some form of a
feature scaling is often required in order to learn an accurate classifier.
However, feature scaling is conducted as a preprocessing task prior to
learning. This is problematic in an online setting for two reasons. First, it
might not be possible to accurately determine the value range of a feature at
the initial stages of learning, when we have observed only a small number of
training instances. Second, the distribution of the data can change over time,
which renders obsolete any feature scaling that we perform in a pre-processing
step. We propose a simple but effective method to dynamically
scale features at train time, thereby quickly adapting to any changes in the
data stream. We compare the proposed dynamic feature scaling method against
more complex methods for estimating scaling parameters using several benchmark
datasets for binary classification. Our proposed feature scaling method
consistently outperforms more complex methods on all of the benchmark datasets
and improves classification accuracy of a state-of-the-art online binary
classifier algorithm.
| [
{
"version": "v1",
"created": "Mon, 28 Jul 2014 21:59:06 GMT"
}
] | 2014-07-30T00:00:00 | [
[
"Bollegala",
"Danushka",
""
]
] | TITLE: Dynamic Feature Scaling for Online Learning of Binary Classifiers
ABSTRACT: Scaling feature values is an important step in numerous machine learning
tasks. Different features can have different value ranges and some form of a
feature scaling is often required in order to learn an accurate classifier.
However, feature scaling is conducted as a preprocessing task prior to
learning. This is problematic in an online setting for two reasons. First, it
might not be possible to accurately determine the value range of a feature at
the initial stages of learning, when we have observed only a small number of
training instances. Second, the distribution of the data can change over time,
which renders obsolete any feature scaling that we perform in a pre-processing
step. We propose a simple but effective method to dynamically
scale features at train time, thereby quickly adapting to any changes in the
data stream. We compare the proposed dynamic feature scaling method against
more complex methods for estimating scaling parameters using several benchmark
datasets for binary classification. Our proposed feature scaling method
consistently outperforms more complex methods on all of the benchmark datasets
and improves classification accuracy of a state-of-the-art online binary
classifier algorithm.
| no_new_dataset | 0.947527 |
1205.4418 | Alberto Baccini | Alberto Baccini, Lucio Barabesi, Marzia Marcheselli, Luca Pratelli | Statistical inference on the h-index with an application to
top-scientist performance | 14 pages, 3 tables | Journal of Informetrics, Volume 6, Issue 4, October 2012, Pages
721 - 728 | 10.1016/j.joi.2012.07.009 | null | stat.AP cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the huge amount of literature on h-index, few papers have been
devoted to the statistical analysis of h-index when a probabilistic
distribution is assumed for citation counts. The present contribution focuses
on presenting the available inferential techniques, providing the details for
proper point and set estimation of the theoretical h-index. Moreover, some
issues on simultaneous inference - aimed to produce suitable scholar
comparisons - are carried out. Finally, the analysis of the citation dataset
for the Nobel Laureates (in the last five years) and for the Fields medallists
(from 2002 onward) is proposed.
| [
{
"version": "v1",
"created": "Sun, 20 May 2012 13:30:26 GMT"
}
] | 2014-07-29T00:00:00 | [
[
"Baccini",
"Alberto",
""
],
[
"Barabesi",
"Lucio",
""
],
[
"Marcheselli",
"Marzia",
""
],
[
"Pratelli",
"Luca",
""
]
] | TITLE: Statistical inference on the h-index with an application to
top-scientist performance
ABSTRACT: Despite the huge amount of literature on h-index, few papers have been
devoted to the statistical analysis of h-index when a probabilistic
distribution is assumed for citation counts. The present contribution relies on
showing the available inferential techniques, by providing the details for
proper point and set estimation of the theoretical h-index. Moreover, some
issues on simultaneous inference - aimed to produce suitable scholar
comparisons - are carried out. Finally, the analysis of the citation dataset
for the Nobel Laureates (in the last five years) and for the Fields medallists
(from 2002 onward) is proposed.
| no_new_dataset | 0.945601 |
1312.0084 | Alberto Baccini | Alberto Baccini, Lucio Barabesi, Martina Cioni, Caterina Pisani | Crossing the hurdle: the determinants of individual scientific
performance | Revised version accepted for publication by Scientometrics | null | 10.1007/s11192-014-1395-3 | null | physics.soc-ph cs.DL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An original cross sectional dataset referring to a medium sized Italian
university is implemented in order to analyze the determinants of scientific
research production at individual level. The dataset includes 942 permanent
researchers of various scientific sectors for a three year time span (2008 -
2010). Three different indicators - based on the number of publications or
citations - are considered as response variables. The corresponding
distributions are highly skewed and display an excess of zero - valued
observations. In this setting, the goodness of fit of several Poisson mixture
regression models are explored by assuming an extensive set of explanatory
variables. As to the personal observable characteristics of the researchers,
the results emphasize the age effect and the gender productivity gap, as
previously documented by existing studies. Analogously, the analysis confirms
that productivity is strongly affected by the publication and citation
practices adopted in different scientific disciplines. The empirical evidence
on the connection between teaching and research activities suggests that no
univocal substitution or complementarity thesis can be claimed: a major
teaching load does not affect the odds of being a non-active researcher and does
not significantly reduce the number of publications for active researchers. In
addition, new evidence emerges on the effect of researchers' administrative
tasks, which seem to be negatively related to researchers' productivity, and
on the composition of departments. Researchers' productivity is apparently
enhanced by operating in departments with more administrative and
technical staff, and it is not significantly affected by the composition of the
department in terms of senior or junior researchers.
| [
{
"version": "v1",
"created": "Sat, 30 Nov 2013 10:20:15 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Jul 2014 14:30:20 GMT"
}
] | 2014-07-29T00:00:00 | [
[
"Baccini",
"Alberto",
""
],
[
"Barabesi",
"Lucio",
""
],
[
"Cioni",
"Martina",
""
],
[
"Pisani",
"Caterina",
""
]
] | TITLE: Crossing the hurdle: the determinants of individual scientific
performance
ABSTRACT: An original cross sectional dataset referring to a medium sized Italian
university is implemented in order to analyze the determinants of scientific
research production at individual level. The dataset includes 942 permanent
researchers of various scientific sectors for a three year time span (2008 -
2010). Three different indicators - based on the number of publications or
citations - are considered as response variables. The corresponding
distributions are highly skewed and display an excess of zero - valued
observations. In this setting, the goodness of fit of several Poisson mixture
regression models are explored by assuming an extensive set of explanatory
variables. As to the personal observable characteristics of the researchers,
the results emphasize the age effect and the gender productivity gap, as
previously documented by existing studies. Analogously, the analysis confirms
that productivity is strongly affected by the publication and citation
practices adopted in different scientific disciplines. The empirical evidence
on the connection between teaching and research activities suggests that no
univocal substitution or complementarity thesis can be claimed: a major
teaching load does not affect the odds of being a non-active researcher and does
not significantly reduce the number of publications for active researchers. In
addition, new evidence emerges on the effect of researchers' administrative
tasks, which seem to be negatively related to researchers' productivity, and
on the composition of departments. Researchers' productivity is apparently
enhanced by operating in departments with more administrative and
technical staff, and it is not significantly affected by the composition of the
department in terms of senior or junior researchers.
| no_new_dataset | 0.938463 |
1406.1881 | Leonid Pishchulin | Leonid Pishchulin, Mykhaylo Andriluka, Bernt Schiele | Fine-grained Activity Recognition with Holistic and Pose based Features | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Holistic methods based on dense trajectories are currently the de facto
standard for recognition of human activities in video. Whether holistic
representations will sustain or will be superseded by higher level video
encoding in terms of body pose and motion is the subject of an ongoing debate.
In this paper we aim to clarify the underlying factors responsible for good
performance of holistic and pose-based representations. To that end we build on
our recent dataset leveraging the existing taxonomy of human activities. This
dataset includes 24,920 video snippets covering 410 human activities in total.
Our analysis reveals that holistic and pose-based methods are highly
complementary, and their performance varies significantly depending on the
activity. We find that holistic methods are mostly affected by the number and
speed of trajectories, whereas pose-based methods are mostly influenced by
viewpoint of the person. We observe striking performance differences across
activities: for certain activities results with pose-based features are more
than twice as accurate compared to holistic features, and vice versa. The best
performing approach in our comparison is based on the combination of holistic
and pose-based approaches, which again underlines their complementarity.
| [
{
"version": "v1",
"created": "Sat, 7 Jun 2014 10:07:24 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jul 2014 14:55:23 GMT"
}
] | 2014-07-29T00:00:00 | [
[
"Pishchulin",
"Leonid",
""
],
[
"Andriluka",
"Mykhaylo",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Fine-grained Activity Recognition with Holistic and Pose based Features
ABSTRACT: Holistic methods based on dense trajectories are currently the de facto
standard for recognition of human activities in video. Whether holistic
representations will sustain or will be superseded by higher level video
encoding in terms of body pose and motion is the subject of an ongoing debate.
In this paper we aim to clarify the underlying factors responsible for good
performance of holistic and pose-based representations. To that end we build on
our recent dataset leveraging the existing taxonomy of human activities. This
dataset includes 24,920 video snippets covering 410 human activities in total.
Our analysis reveals that holistic and pose-based methods are highly
complementary, and their performance varies significantly depending on the
activity. We find that holistic methods are mostly affected by the number and
speed of trajectories, whereas pose-based methods are mostly influenced by
viewpoint of the person. We observe striking performance differences across
activities: for certain activities results with pose-based features are more
than twice as accurate compared to holistic features, and vice versa. The best
performing approach in our comparison is based on the combination of holistic
and pose-based approaches, which again underlines their complementarity.
| new_dataset | 0.958886 |
1407.3068 | Marijn Stollenga | Marijn Stollenga, Jonathan Masci, Faustino Gomez, Juergen Schmidhuber | Deep Networks with Internal Selective Attention through Feedback
Connections | 13 pages, 3 figures | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional convolutional neural networks (CNN) are stationary and
feedforward. They neither change their parameters during evaluation nor use
feedback from higher to lower layers. Real brains, however, do. So does our
Deep Attention Selective Network (dasNet) architecture. DasNet's feedback
structure can dynamically alter its convolutional filter sensitivities during
classification. It harnesses the power of sequential processing to improve
classification performance, by allowing the network to iteratively focus its
internal attention on some of its convolutional filters. Feedback is trained
through direct policy search in a huge million-dimensional parameter space,
through scalable natural evolution strategies (SNES). On the CIFAR-10 and
CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.
| [
{
"version": "v1",
"created": "Fri, 11 Jul 2014 08:56:54 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jul 2014 08:22:50 GMT"
}
] | 2014-07-29T00:00:00 | [
[
"Stollenga",
"Marijn",
""
],
[
"Masci",
"Jonathan",
""
],
[
"Gomez",
"Faustino",
""
],
[
"Schmidhuber",
"Juergen",
""
]
] | TITLE: Deep Networks with Internal Selective Attention through Feedback
Connections
ABSTRACT: Traditional convolutional neural networks (CNN) are stationary and
feedforward. They neither change their parameters during evaluation nor use
feedback from higher to lower layers. Real brains, however, do. So does our
Deep Attention Selective Network (dasNet) architecture. DasNet's feedback
structure can dynamically alter its convolutional filter sensitivities during
classification. It harnesses the power of sequential processing to improve
classification performance, by allowing the network to iteratively focus its
internal attention on some of its convolutional filters. Feedback is trained
through direct policy search in a huge million-dimensional parameter space,
through scalable natural evolution strategies (SNES). On the CIFAR-10 and
CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.
| no_new_dataset | 0.949482 |
1407.3535 | Arif Mahmood | Arif Mahmood, Ajmal Mian and Robyn Owens | Optimizing Auto-correlation for Fast Target Search in Large Search Space | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In remote sensing, image blurring is induced by many sources such as
atmospheric scatter, optical aberration, spatial and temporal sensor
integration. The natural blurring can be exploited to speed up target search by
fast template matching. In this paper, we synthetically induce additional
non-uniform blurring to further increase the speed of the matching process. To
avoid loss of accuracy, the amount of synthetic blurring is varied spatially
over the image according to the underlying content. We extend transitive
algorithm for fast template matching by incorporating controlled image blur. To
this end we propose an Efficient Group Size (EGS) algorithm which minimizes the
number of similarity computations for a particular search image. A larger
efficient group size guarantees fewer computations and more speedup. EGS
algorithm is used as a component in our proposed Optimizing auto-correlation
(OptA) algorithm. In OptA a search image is iteratively non-uniformly blurred
while ensuring no accuracy degradation at any image location. In each iteration
efficient group size and overall computations are estimated by using the
proposed EGS algorithm. The OptA algorithm stops when the number of
computations cannot be further decreased without accuracy degradation. The
proposed algorithm is compared with six existing state of the art exhaustive
accuracy techniques using correlation coefficient as the similarity measure.
Experiments on satellite and aerial image datasets demonstrate the
effectiveness of the proposed algorithm.
| [
{
"version": "v1",
"created": "Mon, 14 Jul 2014 03:57:57 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Jul 2014 00:47:47 GMT"
}
] | 2014-07-28T00:00:00 | [
[
"Mahmood",
"Arif",
""
],
[
"Mian",
"Ajmal",
""
],
[
"Owens",
"Robyn",
""
]
] | TITLE: Optimizing Auto-correlation for Fast Target Search in Large Search Space
ABSTRACT: In remote sensing, image blurring is induced by many sources such as
atmospheric scatter, optical aberration, spatial and temporal sensor
integration. The natural blurring can be exploited to speed up target search by
fast template matching. In this paper, we synthetically induce additional
non-uniform blurring to further increase the speed of the matching process. To
avoid loss of accuracy, the amount of synthetic blurring is varied spatially
over the image according to the underlying content. We extend transitive
algorithm for fast template matching by incorporating controlled image blur. To
this end we propose an Efficient Group Size (EGS) algorithm which minimizes the
number of similarity computations for a particular search image. A larger
efficient group size guarantees fewer computations and more speedup. EGS
algorithm is used as a component in our proposed Optimizing auto-correlation
(OptA) algorithm. In OptA a search image is iteratively non-uniformly blurred
while ensuring no accuracy degradation at any image location. In each iteration
efficient group size and overall computations are estimated by using the
proposed EGS algorithm. The OptA algorithm stops when the number of
computations cannot be further decreased without accuracy degradation. The
proposed algorithm is compared with six existing state of the art exhaustive
accuracy techniques using correlation coefficient as the similarity measure.
Experiments on satellite and aerial image datasets demonstrate the
effectiveness of the proposed algorithm.
| no_new_dataset | 0.947381 |
1008.4063 | Alexander Gorban | A. Zinovyev, A.N. Gorban | Nonlinear Quality of Life Index | 9 pages, 1 figure, 1 table with data for 171 countries. In this case
study we use only publicly available data taken from GAPMINDER online data
base for 2005 | null | null | null | cs.NE stat.AP | http://creativecommons.org/licenses/by/3.0/ | We present details of the analysis of the nonlinear quality of life index for
171 countries. This index is based on four indicators: GDP per capita by
Purchasing Power Parities, Life expectancy at birth, Infant mortality rate, and
Tuberculosis incidence. We analyze the structure of the data in order to find
an optimal way, independent of expert opinion, to map several numerical
indicators from a multidimensional space onto the one-dimensional space of the
quality of life. In the 4D space we found a principal curve that goes "through
the middle" of the dataset and projected the data points onto this curve. The order
along this principal curve gives us the ranking of countries. Projection onto
the principal curve provides a solution to the classical problem of
unsupervised ranking of objects. It allows us to find a way, independent of
expert opinion, to project several numerical indicators from a
multidimensional space onto the one-dimensional space of the index values. This
projection is, in some sense, optimal and preserves as much information as
possible. For computation we used ViDaExpert, a tool for visualization and
analysis of multidimensional vectorial data (arXiv:1406.5550).
| [
{
"version": "v1",
"created": "Tue, 24 Aug 2010 15:13:33 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Aug 2010 20:02:29 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Jul 2014 09:58:24 GMT"
}
] | 2014-07-25T00:00:00 | [
[
"Zinovyev",
"A.",
""
],
[
"Gorban",
"A. N.",
""
]
] | TITLE: Nonlinear Quality of Life Index
ABSTRACT: We present details of the analysis of the nonlinear quality of life index for
171 countries. This index is based on four indicators: GDP per capita by
Purchasing Power Parities, Life expectancy at birth, Infant mortality rate, and
Tuberculosis incidence. We analyze the structure of the data in order to find
an optimal way, independent of expert opinion, to map several numerical
indicators from a multidimensional space onto the one-dimensional space of the
quality of life. In the 4D space we found a principal curve that goes "through
the middle" of the dataset and projected the data points onto this curve. The order
along this principal curve gives us the ranking of countries. Projection onto
the principal curve provides a solution to the classical problem of
unsupervised ranking of objects. It allows us to find a way, independent of
expert opinion, to project several numerical indicators from a
multidimensional space onto the one-dimensional space of the index values. This
projection is, in some sense, optimal and preserves as much information as
possible. For computation we used ViDaExpert, a tool for visualization and
analysis of multidimensional vectorial data (arXiv:1406.5550).
| no_new_dataset | 0.951323 |
1407.6513 | Amir Hesam Salavati | Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi | Convolutional Neural Associative Memories: Massive Capacity with Noise
Tolerance | null | null | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of a neural associative memory is to retrieve a set of previously
memorized patterns from their noisy versions using a network of neurons. An
ideal network should have the ability to 1) learn a set of patterns as they
arrive, 2) retrieve the correct patterns from noisy queries, and 3) maximize
the pattern retrieval capacity while maintaining the reliability in responding
to queries. The majority of work on neural associative memories has focused on
designing networks capable of memorizing any set of randomly chosen patterns at
the expense of limiting the retrieval capacity. In this paper, we show that if
we target memorizing only those patterns that have inherent redundancy (i.e.,
belong to a subspace), we can obtain all the aforementioned properties. This is
in sharp contrast with the previous work that could only improve one or two
aspects at the expense of the third. More specifically, we propose a framework
based on a convolutional neural network along with an iterative algorithm that
learns the redundancy among the patterns. The resulting network has a retrieval
capacity that is exponential in the size of the network. Moreover, the
asymptotic error correction performance of our network is linear in the size of
the patterns. We then extend our approach to deal with patterns that lie
approximately in a subspace. This extension allows us to memorize datasets
containing natural patterns (e.g., images). Finally, we report experimental
results on both synthetic and real datasets to support our claims.
| [
{
"version": "v1",
"created": "Thu, 24 Jul 2014 10:06:24 GMT"
}
] | 2014-07-25T00:00:00 | [
[
"Karbasi",
"Amin",
""
],
[
"Salavati",
"Amir Hesam",
""
],
[
"Shokrollahi",
"Amin",
""
]
] | TITLE: Convolutional Neural Associative Memories: Massive Capacity with Noise
Tolerance
ABSTRACT: The task of a neural associative memory is to retrieve a set of previously
memorized patterns from their noisy versions using a network of neurons. An
ideal network should have the ability to 1) learn a set of patterns as they
arrive, 2) retrieve the correct patterns from noisy queries, and 3) maximize
the pattern retrieval capacity while maintaining the reliability in responding
to queries. The majority of work on neural associative memories has focused on
designing networks capable of memorizing any set of randomly chosen patterns at
the expense of limiting the retrieval capacity. In this paper, we show that if
we target memorizing only those patterns that have inherent redundancy (i.e.,
belong to a subspace), we can obtain all the aforementioned properties. This is
in sharp contrast with the previous work that could only improve one or two
aspects at the expense of the third. More specifically, we propose a framework
based on a convolutional neural network along with an iterative algorithm that
learns the redundancy among the patterns. The resulting network has a retrieval
capacity that is exponential in the size of the network. Moreover, the
asymptotic error correction performance of our network is linear in the size of
the patterns. We then extend our approach to deal with patterns that lie
approximately in a subspace. This extension allows us to memorize datasets
containing natural patterns (e.g., images). Finally, we report experimental
results on both synthetic and real datasets to support our claims.
| no_new_dataset | 0.947672 |
1407.6603 | Zaid Alyasseri | Zaid Abdi Alkareem Alyasseri, Kadhim Al-Attar, Mazin Nasser (ISMAIL) | Parallelize Bubble Sort Algorithm Using OpenMP | 4 pages, 5 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sorting has been a profound area for algorithmic researchers, and many
resources are invested in developing improved sorting algorithms. For this
purpose, many existing sorting algorithms have been studied in terms of their
algorithmic complexity. In this paper we implement the bubble sort algorithm
using multithreading (OpenMP). The proposed work is tested on two standard
datasets (text files) of different sizes. The main idea of the proposed
algorithm is to distribute the elements of the input datasets into many
additional temporary sub-arrays according to the number of characters in each
word. The size of each of these sub-arrays is decided by the number of elements
with the same number of characters in the input array. We implemented OpenMP on
an Intel Core i7-3610QM (8 CPUs), using two approaches (vectors of strings and a
3D array). Finally, we examine the effect of the data structure on the
performance of the algorithm and, on that basis, choose the second approach.
| [
{
"version": "v1",
"created": "Thu, 24 Jul 2014 14:47:48 GMT"
}
] | 2014-07-25T00:00:00 | [
[
"Alyasseri",
"Zaid Abdi Alkareem",
"",
"ISMAIL"
],
[
"Al-Attar",
"Kadhim",
"",
"ISMAIL"
],
[
"Nasser",
"Mazin",
"",
"ISMAIL"
]
] | TITLE: Parallelize Bubble Sort Algorithm Using OpenMP
ABSTRACT: Sorting has been a profound area for the algorithmic researchers and many
resources are invested to suggest more works for sorting algorithms. For this
purpose, many existing sorting algorithms were observed in terms of the
efficiency of the algorithmic complexity. In this paper we implemented the
bubble sort algorithm using multithreading (OpenMP). The proposed work was tested
on two standard datasets (text files) with different sizes. The main idea of the
proposed algorithm is distributing the elements of the input datasets into many
additional temporary sub-arrays according to a number of characters in each
word. The sizes of each of these sub-arrays are decided depending on a number
of elements with the same number of characters in the input array. We
implemented OpenMP on an Intel Core i7-3610QM (8 CPUs), using two approaches
(vectors of strings and 3D arrays). Finally, we examine the effect of the data structure
on the performance of the algorithm, and for that reason we chose the second approach.
| no_new_dataset | 0.948822 |
1312.6995 | Sourav Bhattacharya | Sourav Bhattacharya and Petteri Nurmi and Nils Hammerla and Thomas
Pl\"otz | Towards Using Unlabeled Data in a Sparse-coding Framework for Human
Activity Recognition | 18 pages, 12 figures, Pervasive and Mobile Computing, 2014 | null | 10.1016/j.pmcj.2014.05.006 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a sparse-coding framework for activity recognition in ubiquitous
and mobile computing that alleviates two fundamental problems of current
supervised learning approaches. (i) It automatically derives a compact, sparse
and meaningful feature representation of sensor data that does not rely on
prior expert knowledge and generalizes extremely well across domain boundaries.
(ii) It exploits unlabeled sample data for bootstrapping effective activity
recognizers, i.e., substantially reduces the amount of ground truth annotation
required for model estimation. Such unlabeled data is trivial to obtain, e.g.,
through contemporary smartphones carried by users as they go about their
everyday activities.
Based on the self-taught learning paradigm we automatically derive an
over-complete set of basis vectors from unlabeled data that captures inherent
patterns present within activity data. Through projecting raw sensor data onto
the feature space defined by such over-complete sets of basis vectors effective
feature extraction is pursued. Given these learned feature representations,
classification backends are then trained using small amounts of labeled
training data.
We study the new approach in detail using two datasets which differ in terms
of the recognition tasks and sensor modalities. Primarily we focus on
the transportation mode analysis task, a popular task in mobile-phone-based
sensing. The sparse-coding framework significantly outperforms the
state-of-the-art in supervised learning approaches. Furthermore, we demonstrate
the great practical potential of the new approach by successfully evaluating
its generalization capabilities across both domain and sensor modalities by
considering the popular Opportunity dataset. Our feature learning approach
outperforms state-of-the-art approaches to analyzing activities in daily
living.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2013 18:08:44 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Jul 2014 10:32:32 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Jul 2014 13:39:53 GMT"
}
] | 2014-07-24T00:00:00 | [
[
"Bhattacharya",
"Sourav",
""
],
[
"Nurmi",
"Petteri",
""
],
[
"Hammerla",
"Nils",
""
],
[
"Plötz",
"Thomas",
""
]
] | TITLE: Towards Using Unlabeled Data in a Sparse-coding Framework for Human
Activity Recognition
ABSTRACT: We propose a sparse-coding framework for activity recognition in ubiquitous
and mobile computing that alleviates two fundamental problems of current
supervised learning approaches. (i) It automatically derives a compact, sparse
and meaningful feature representation of sensor data that does not rely on
prior expert knowledge and generalizes extremely well across domain boundaries.
(ii) It exploits unlabeled sample data for bootstrapping effective activity
recognizers, i.e., substantially reduces the amount of ground truth annotation
required for model estimation. Such unlabeled data is trivial to obtain, e.g.,
through contemporary smartphones carried by users as they go about their
everyday activities.
Based on the self-taught learning paradigm we automatically derive an
over-complete set of basis vectors from unlabeled data that captures inherent
patterns present within activity data. Through projecting raw sensor data onto
the feature space defined by such over-complete sets of basis vectors effective
feature extraction is pursued. Given these learned feature representations,
classification backends are then trained using small amounts of labeled
training data.
We study the new approach in detail using two datasets which differ in terms
of the recognition tasks and sensor modalities. Primarily we focus on
the transportation mode analysis task, a popular task in mobile-phone-based
sensing. The sparse-coding framework significantly outperforms the
state-of-the-art in supervised learning approaches. Furthermore, we demonstrate
the great practical potential of the new approach by successfully evaluating
its generalization capabilities across both domain and sensor modalities by
considering the popular Opportunity dataset. Our feature learning approach
outperforms state-of-the-art approaches to analyzing activities in daily
living.
| no_new_dataset | 0.948394 |
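The self-taught-learning recipe summarised in the abstract above (learn an over-complete dictionary from unlabeled sensor windows, sparse-code the labeled windows, then train a lightweight classifier backend) can be sketched in a few lines. This is not the authors' implementation: the window length, dictionary size, regularisation and synthetic signals below are illustrative assumptions only.

```python
# Sketch (not the paper's code): self-taught learning for activity windows.
# Window length 32, 64 dictionary atoms and the synthetic signals are assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 32)

def windows(freq, n):
    # Toy "sensor" windows: a sinusoid of a given frequency plus noise.
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal((n, 32))

X_unlab = np.vstack([windows(2, 250), windows(5, 250)])   # plentiful, unlabeled
X_lab   = np.vstack([windows(2, 30),  windows(5, 30)])    # scarce, labeled
y_lab   = np.array([0] * 30 + [1] * 30)

# 1) Learn an over-complete basis from the unlabeled pool only.
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=15, random_state=0)
dico.fit(X_unlab)

# 2) Project labeled windows onto the basis; the sparse codes are the features.
codes = sparse_encode(X_lab, dico.components_, alpha=1.0)

# 3) Train a cheap classifier backend on the sparse codes.
clf = LogisticRegression(max_iter=1000).fit(codes, y_lab)
print("training accuracy on sparse codes:", clf.score(codes, y_lab))
```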
1407.6315 | Deepak Kumar | Deepak Kumar, A G Ramakrishnan | Quadratically constrained quadratic programming for classification using
particle swarms and applications | 17 pages, 3 figures | null | null | null | cs.AI cs.LG cs.NE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Particle swarm optimization is used in several combinatorial optimization
problems. In this work, particle swarms are used to solve quadratic programming
problems with quadratic constraints. The approach of particle swarms is an
example of interior point methods in optimization as an iterative technique.
This approach is novel and deals with classification problems without the use
of a traditional classifier. Our method determines the optimal hyperplane or
classification boundary for a data set. In a binary classification problem, we
constrain each class as a cluster, which is enclosed by an ellipsoid. The
estimation of the optimal hyperplane between the two clusters is posed as a
quadratically constrained quadratic problem. The optimization problem is solved
in distributed format using modified particle swarms. Our method has the
advantage of using the direction towards optimal solution rather than searching
the entire feasible region. Our results on the Iris, Pima, Wine, and Thyroid
datasets show that the proposed method works better than a neural network and
the performance is close to that of SVM.
| [
{
"version": "v1",
"created": "Wed, 23 Jul 2014 18:04:23 GMT"
}
] | 2014-07-24T00:00:00 | [
[
"Kumar",
"Deepak",
""
],
[
"Ramakrishnan",
"A G",
""
]
] | TITLE: Quadratically constrained quadratic programming for classification using
particle swarms and applications
ABSTRACT: Particle swarm optimization is used in several combinatorial optimization
problems. In this work, particle swarms are used to solve quadratic programming
problems with quadratic constraints. The approach of particle swarms is an
example of interior point methods in optimization as an iterative technique.
This approach is novel and deals with classification problems without the use
of a traditional classifier. Our method determines the optimal hyperplane or
classification boundary for a data set. In a binary classification problem, we
constrain each class as a cluster, which is enclosed by an ellipsoid. The
estimation of the optimal hyperplane between the two clusters is posed as a
quadratically constrained quadratic problem. The optimization problem is solved
in distributed format using modified particle swarms. Our method has the
advantage of using the direction towards the optimal solution rather than searching
the entire feasible region. Our results on the Iris, Pima, Wine, and Thyroid
datasets show that the proposed method works better than a neural network and
the performance is close to that of SVM.
| no_new_dataset | 0.949949 |
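As a rough illustration of the idea above, the sketch below runs a plain (unmodified, non-distributed) particle swarm on a toy quadratically constrained quadratic program, handling the constraint with a quadratic penalty. The matrices, penalty weight and swarm coefficients are assumptions; the paper's classification formulation and modified swarm are not reproduced.

```python
# Sketch: plain particle swarm on a toy QCQP, min x'Qx + c'x  s.t.  x'Ax <= 1.
# A quadratic penalty enforces the constraint; the modified/distributed swarm
# of the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[2.0, 0.3], [0.3, 1.0]])
c = np.array([-1.0, 0.5])
A = np.array([[1.0, 0.0], [0.0, 4.0]])

def cost(x, penalty=100.0):
    obj = x @ Q @ x + c @ x
    viol = max(0.0, x @ A @ x - 1.0)          # constraint violation, if any
    return obj + penalty * viol**2

n, dim, iters = 30, 2, 200
pos = rng.uniform(-2, 2, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                      # standard PSO coefficients
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("approx. minimiser:", gbest, "cost:", cost(gbest))
```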
1407.2098 | G\"unter J\"ager | G\"unter J\"ager, Alexander Peltzer and Kay Nieselt | inPHAP: Interactive visualization of genotype and phased haplotype data | BioVis 2014 conference | null | null | null | cs.CE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: To understand individual genomes it is necessary to look at the
variations that lead to changes in phenotype and possibly to disease. However,
genotype information alone is often not sufficient and additional knowledge
regarding the phase of the variation is needed to make correct interpretations.
Interactive visualizations, which allow the user to explore the data in various
ways, can be of great assistance in the process of making well informed
decisions. But currently there is a lack of visualizations that are able to
deal with phased haplotype data. Results: We present inPHAP, an interactive
visualization tool for genotype and phased haplotype data. inPHAP features a
variety of interaction possibilities such as zooming, sorting, filtering and
aggregation of rows in order to explore patterns hidden in large genetic data
sets. As a proof of concept, we apply inPHAP to the phased haplotype data set
of Phase 1 of the 1000 Genomes Project. Thereby, inPHAP's ability to show
genetic variations on the population as well as on the individuals level is
demonstrated for several disease related loci. Conclusions: As of today, inPHAP
is the only visual analytical tool that allows the user to explore unphased and
phased haplotype data interactively. Due to its highly scalable design, inPHAP
can be applied to large datasets with up to 100 GB of data, enabling users to
visualize even large scale input data. inPHAP closes the gap between common
visualization tools for unphased genotype data and introduces several new
features, such as the visualization of phased data.
| [
{
"version": "v1",
"created": "Tue, 8 Jul 2014 14:14:18 GMT"
}
] | 2014-07-23T00:00:00 | [
[
"Jäger",
"Günter",
""
],
[
"Peltzer",
"Alexander",
""
],
[
"Nieselt",
"Kay",
""
]
] | TITLE: inPHAP: Interactive visualization of genotype and phased haplotype data
ABSTRACT: Background: To understand individual genomes it is necessary to look at the
variations that lead to changes in phenotype and possibly to disease. However,
genotype information alone is often not sufficient and additional knowledge
regarding the phase of the variation is needed to make correct interpretations.
Interactive visualizations, which allow the user to explore the data in various
ways, can be of great assistance in the process of making well informed
decisions. But currently there is a lack of visualizations that are able to
deal with phased haplotype data. Results: We present inPHAP, an interactive
visualization tool for genotype and phased haplotype data. inPHAP features a
variety of interaction possibilities such as zooming, sorting, filtering and
aggregation of rows in order to explore patterns hidden in large genetic data
sets. As a proof of concept, we apply inPHAP to the phased haplotype data set
of Phase 1 of the 1000 Genomes Project. Thereby, inPHAP's ability to show
genetic variations on the population as well as on the individuals level is
demonstrated for several disease related loci. Conclusions: As of today, inPHAP
is the only visual analytical tool that allows the user to explore unphased and
phased haplotype data interactively. Due to its highly scalable design, inPHAP
can be applied to large datasets with up to 100 GB of data, enabling users to
visualize even large scale input data. inPHAP closes the gap between common
visualization tools for unphased genotype data and introduces several new
features, such as the visualization of phased data.
| no_new_dataset | 0.942401 |
1407.3386 | Sadegh Aliakbary | Sadegh Aliakbary, Jafar Habibi, Ali Movaghar | Feature Extraction from Degree Distribution for Comparison and Analysis
of Complex Networks | arXiv admin note: substantial text overlap with arXiv:1307.3625 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The degree distribution is an important characteristic of complex networks.
In many data analysis applications, the networks should be represented as
fixed-length feature vectors and therefore the feature extraction from the
degree distribution is a necessary step. Moreover, many applications need a
similarity function for comparison of complex networks based on their degree
distributions. Such a similarity measure has many applications including
classification and clustering of network instances, evaluation of network
sampling methods, anomaly detection, and study of epidemic dynamics. The
existing methods are unable to effectively capture the similarity of degree
distributions, particularly when the corresponding networks have different
sizes. Based on our observations about the structure of the degree
distributions in networks over time, we propose a feature extraction and a
similarity function for the degree distributions in complex networks. We
propose to calculate the feature values based on the mean and standard
deviation of the node degrees in order to decrease the effect of the network
size on the extracted features. The proposed method is evaluated using
different artificial and real network datasets, and it outperforms the state of
the art methods with respect to the accuracy of the distance function and the
effectiveness of the extracted features.
| [
{
"version": "v1",
"created": "Sat, 12 Jul 2014 13:58:03 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Jul 2014 08:13:45 GMT"
}
] | 2014-07-23T00:00:00 | [
[
"Aliakbary",
"Sadegh",
""
],
[
"Habibi",
"Jafar",
""
],
[
"Movaghar",
"Ali",
""
]
] | TITLE: Feature Extraction from Degree Distribution for Comparison and Analysis
of Complex Networks
ABSTRACT: The degree distribution is an important characteristic of complex networks.
In many data analysis applications, the networks should be represented as
fixed-length feature vectors and therefore the feature extraction from the
degree distribution is a necessary step. Moreover, many applications need a
similarity function for comparison of complex networks based on their degree
distributions. Such a similarity measure has many applications including
classification and clustering of network instances, evaluation of network
sampling methods, anomaly detection, and study of epidemic dynamics. The
existing methods are unable to effectively capture the similarity of degree
distributions, particularly when the corresponding networks have different
sizes. Based on our observations about the structure of the degree
distributions in networks over time, we propose a feature extraction and a
similarity function for the degree distributions in complex networks. We
propose to calculate the feature values based on the mean and standard
deviation of the node degrees in order to decrease the effect of the network
size on the extracted features. The proposed method is evaluated using
different artificial and real network datasets, and it outperforms the state of
the art methods with respect to the accuracy of the distance function and the
effectiveness of the extracted features.
| no_new_dataset | 0.948489 |
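A minimal sketch of the size-robust feature idea described above: node degrees are standardised by their mean and standard deviation before being summarised into a fixed-length vector, and two networks are compared by a distance on those vectors. The 8-bin histogram and L1 distance are illustrative assumptions, not the paper's exact feature definition or similarity function.

```python
# Sketch: fixed-length, size-normalised features from a degree distribution.
# Standardisation by mean/std follows the abstract; the 8-bin histogram and
# the L1 distance are illustrative choices only.
import numpy as np
import networkx as nx

def degree_features(g, bins=8, lo=-3.0, hi=3.0):
    deg = np.array([d for _, d in g.degree()], dtype=float)
    z = (deg - deg.mean()) / (deg.std() + 1e-12)   # remove size/scale effects
    hist, _ = np.histogram(z, bins=bins, range=(lo, hi))
    return hist / max(hist.sum(), 1)               # fixed-length feature vector

def dd_distance(g1, g2):
    return np.abs(degree_features(g1) - degree_features(g2)).sum()

small_ba = nx.barabasi_albert_graph(300, 3, seed=0)
large_ba = nx.barabasi_albert_graph(3000, 3, seed=1)
er_graph = nx.erdos_renyi_graph(1000, 0.006, seed=2)

print("BA vs larger BA :", round(dd_distance(small_ba, large_ba), 3))
print("BA vs ER        :", round(dd_distance(small_ba, er_graph), 3))
```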
1407.5661 | Scott Sawyer | Scott M. Sawyer and B. David O'Gwynn | Evaluating Accumulo Performance for a Scalable Cyber Data Processing
Pipeline | To appear at 2014 IEEE High Performance Extreme Computing Conference
(HPEC '14) | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Streaming, big data applications face challenges in creating scalable data
flow pipelines, in which multiple data streams must be collected, stored,
queried, and analyzed. These data sources are characterized by their volume (in
terms of dataset size), velocity (in terms of data rates), and variety (in
terms of fields and types). For many applications, distributed NoSQL databases
are effective alternatives to traditional relational database management
systems. This paper considers a cyber situational awareness system that uses
the Apache Accumulo database to provide scalable data warehousing, real-time
data ingest, and responsive querying for human users and analytic algorithms.
We evaluate Accumulo's ingestion scalability as a function of number of client
processes and servers. We also describe a flexible data model with effective
techniques for query planning and query batching to deliver responsive results.
Query performance is evaluated in terms of latency of the client receiving
initial result sets. Accumulo performance is measured on a database of up to 8
nodes using real cyber data.
| [
{
"version": "v1",
"created": "Mon, 21 Jul 2014 20:34:32 GMT"
}
] | 2014-07-23T00:00:00 | [
[
"Sawyer",
"Scott M.",
""
],
[
"O'Gwynn",
"B. David",
""
]
] | TITLE: Evaluating Accumulo Performance for a Scalable Cyber Data Processing
Pipeline
ABSTRACT: Streaming, big data applications face challenges in creating scalable data
flow pipelines, in which multiple data streams must be collected, stored,
queried, and analyzed. These data sources are characterized by their volume (in
terms of dataset size), velocity (in terms of data rates), and variety (in
terms of fields and types). For many applications, distributed NoSQL databases
are effective alternatives to traditional relational database management
systems. This paper considers a cyber situational awareness system that uses
the Apache Accumulo database to provide scalable data warehousing, real-time
data ingest, and responsive querying for human users and analytic algorithms.
We evaluate Accumulo's ingestion scalability as a function of number of client
processes and servers. We also describe a flexible data model with effective
techniques for query planning and query batching to deliver responsive results.
Query performance is evaluated in terms of latency of the client receiving
initial result sets. Accumulo performance is measured on a database of up to 8
nodes using real cyber data.
| no_new_dataset | 0.94743 |
1407.5908 | Mehrdad Mahdavi | Mehrdad Mahdavi | Exploiting Smoothness in Statistical Learning, Sequential Prediction,
and Stochastic Optimization | Ph.D. Thesis | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last several years, the intimate connection between convex
optimization and learning problems, in both statistical and sequential
frameworks, has shifted the focus of algorithmic machine learning to examine
this interplay. In particular, on one hand, this intertwinement brings forward
new challenges in reassessment of the performance of learning algorithms
including generalization and regret bounds under the assumptions imposed by
convexity such as analytical properties of loss functions (e.g., Lipschitzness,
strong convexity, and smoothness). On the other hand, emergence of datasets of
an unprecedented size demands the development of novel and more efficient
optimization algorithms to tackle large-scale learning problems.
The overarching goal of this thesis is to reassess the smoothness of loss
functions in statistical learning, sequential prediction/online learning, and
stochastic optimization and explicate its consequences. In particular we
examine how smoothness of the loss function could be beneficial or detrimental in
these settings in terms of sample complexity, statistical consistency, regret
analysis, and convergence rate, and investigate how smoothness can be leveraged
to devise more efficient learning algorithms.
| [
{
"version": "v1",
"created": "Sat, 19 Jul 2014 15:16:40 GMT"
}
] | 2014-07-23T00:00:00 | [
[
"Mahdavi",
"Mehrdad",
""
]
] | TITLE: Exploiting Smoothness in Statistical Learning, Sequential Prediction,
and Stochastic Optimization
ABSTRACT: In the last several years, the intimate connection between convex
optimization and learning problems, in both statistical and sequential
frameworks, has shifted the focus of algorithmic machine learning to examine
this interplay. In particular, on one hand, this intertwinement brings forward
new challenges in reassessment of the performance of learning algorithms
including generalization and regret bounds under the assumptions imposed by
convexity such as analytical properties of loss functions (e.g., Lipschitzness,
strong convexity, and smoothness). On the other hand, emergence of datasets of
an unprecedented size demands the development of novel and more efficient
optimization algorithms to tackle large-scale learning problems.
The overarching goal of this thesis is to reassess the smoothness of loss
functions in statistical learning, sequential prediction/online learning, and
stochastic optimization and explicate its consequences. In particular we
examine how smoothness of the loss function could be beneficial or detrimental in
these settings in terms of sample complexity, statistical consistency, regret
analysis, and convergence rate, and investigate how smoothness can be leveraged
to devise more efficient learning algorithms.
| no_new_dataset | 0.947866 |
1407.5242 | Ziming Zhang | Ziming Zhang and Philip H.S. Torr | Object Proposal Generation using Two-Stage Cascade SVMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object proposal algorithms have shown great promise as a first step for
object recognition and detection. Good object proposal generation algorithms
require high object recall rate as well as low computational cost, because
generating object proposals is usually utilized as a preprocessing step. The
problem of how to accelerate the object proposal generation and evaluation
process without decreasing recall is thus of great interest. In this paper, we
propose a new object proposal generation method using two-stage cascade SVMs,
where in the first stage linear filters are learned for predefined quantized
scales/aspect-ratios independently, and in the second stage a global linear
classifier is learned across all the quantized scales/aspect-ratios for
calibration, so that all the proposals can be compared properly. The proposals
with highest scores are our final output. Specifically, we explain our
scale/aspect-ratio quantization scheme, and investigate the effects of
combinations of $\ell_1$ and $\ell_2$ regularizers in cascade SVMs with/without
ranking constraints in learning. Comprehensive experiments on VOC2007 dataset
are conducted, and our results achieve the state-of-the-art performance with
high object recall rate and high computational efficiency. Besides, our method
has been demonstrated to be suitable for not only class-specific but also
generic object proposal generation.
| [
{
"version": "v1",
"created": "Sun, 20 Jul 2014 03:53:21 GMT"
}
] | 2014-07-22T00:00:00 | [
[
"Zhang",
"Ziming",
""
],
[
"Torr",
"Philip H. S.",
""
]
] | TITLE: Object Proposal Generation using Two-Stage Cascade SVMs
ABSTRACT: Object proposal algorithms have shown great promise as a first step for
object recognition and detection. Good object proposal generation algorithms
require high object recall rate as well as low computational cost, because
generating object proposals is usually utilized as a preprocessing step. The
problem of how to accelerate the object proposal generation and evaluation
process without decreasing recall is thus of great interest. In this paper, we
propose a new object proposal generation method using two-stage cascade SVMs,
where in the first stage linear filters are learned for predefined quantized
scales/aspect-ratios independently, and in the second stage a global linear
classifier is learned across all the quantized scales/aspect-ratios for
calibration, so that all the proposals can be compared properly. The proposals
with highest scores are our final output. Specifically, we explain our
scale/aspect-ratio quantization scheme, and investigate the effects of
combinations of $\ell_1$ and $\ell_2$ regularizers in cascade SVMs with/without
ranking constraints in learning. Comprehensive experiments on VOC2007 dataset
are conducted, and our results achieve the state-of-the-art performance with
high object recall rate and high computational efficiency. Besides, our method
has been demonstrated to be suitable for not only class-specific but also
generic object proposal generation.
| no_new_dataset | 0.951142 |
1407.5547 | Rossano Schifanella | Luca Maria Aiello, Rossano Schifanella, Bogdan State | Reading the Source Code of Social Ties | 10 pages, 8 figures, Proceedings of the 2014 ACM conference on Web
(WebSci'14) | null | 10.1145/2615569.2615672 | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though online social network research has exploded during the past years, not
much thought has been given to the exploration of the nature of social links.
Online interactions have been interpreted as indicative of one social process
or another (e.g., status exchange or trust), often with little systematic
justification regarding the relation between observed data and theoretical
concept. Our research aims to bridge this gap in computational social science
by proposing an unsupervised, parameter-free method to discover, with high
accuracy, the fundamental domains of interaction occurring in social networks.
By applying this method on two online datasets different by scope and type of
interaction (aNobii and Flickr) we observe the spontaneous emergence of three
domains of interaction representing the exchange of status, knowledge and
social support. By finding significant relations between the domains of
interaction and classic social network analysis issues (e.g., tie strength,
dyadic interaction over time) we show how the network of interactions induced
by the extracted domains can be used as a starting point for more nuanced
analysis of online social data that may one day incorporate the normative
grammar of social interaction. Our method finds applications in online social
media services ranging from recommendation to visual link summarization.
| [
{
"version": "v1",
"created": "Mon, 21 Jul 2014 16:16:44 GMT"
}
] | 2014-07-22T00:00:00 | [
[
"Aiello",
"Luca Maria",
""
],
[
"Schifanella",
"Rossano",
""
],
[
"State",
"Bogdan",
""
]
] | TITLE: Reading the Source Code of Social Ties
ABSTRACT: Though online social network research has exploded during the past years, not
much thought has been given to the exploration of the nature of social links.
Online interactions have been interpreted as indicative of one social process
or another (e.g., status exchange or trust), often with little systematic
justification regarding the relation between observed data and theoretical
concept. Our research aims to bridge this gap in computational social science
by proposing an unsupervised, parameter-free method to discover, with high
accuracy, the fundamental domains of interaction occurring in social networks.
By applying this method on two online datasets different by scope and type of
interaction (aNobii and Flickr) we observe the spontaneous emergence of three
domains of interaction representing the exchange of status, knowledge and
social support. By finding significant relations between the domains of
interaction and classic social network analysis issues (e.g., tie strength,
dyadic interaction over time) we show how the network of interactions induced
by the extracted domains can be used as a starting point for more nuanced
analysis of online social data that may one day incorporate the normative
grammar of social interaction. Our method finds applications in online social
media services ranging from recommendation to visual link summarization.
| no_new_dataset | 0.94428 |
1407.5581 | {\O}yvind Breivik PhD | {\O}yvind Breivik and Ole Johan Aarnes and Saleh Abdalla and
Jean-Raymond Bidlot and Peter A.E.M. Janssen | Wind and Wave Extremes over the World Oceans from Very Large Ensembles | 28 pages, 16 figures | Geophys Res Lett, 2014, 2014GL060997 | 10.1002/2014GL060997 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Global return values of marine wind speed and significant wave height are
estimated from very large aggregates of archived ensemble forecasts at +240-h
lead time. Long lead time ensures that the forecasts represent independent
draws from the model climate. Compared with ERA-Interim, a reanalysis, the
ensemble yields higher return estimates for both wind speed and significant
wave height. Confidence intervals are much tighter due to the large size of the
dataset. The period (9 yrs) is short enough to be considered stationary even
with climate change. Furthermore, the ensemble is large enough for
non-parametric 100-yr return estimates to be made from order statistics. These
direct return estimates compare well with extreme value estimates outside areas
with tropical cyclones. Like any method employing modeled fields, it is
sensitive to tail biases in the numerical model, but we find that the biases
are moderate outside areas with tropical cyclones.
| [
{
"version": "v1",
"created": "Mon, 21 Jul 2014 17:45:01 GMT"
}
] | 2014-07-22T00:00:00 | [
[
"Breivik",
"Øyvind",
""
],
[
"Aarnes",
"Ole Johan",
""
],
[
"Abdalla",
"Saleh",
""
],
[
"Bidlot",
"Jean-Raymond",
""
],
[
"Janssen",
"Peter A. E. M.",
""
]
] | TITLE: Wind and Wave Extremes over the World Oceans from Very Large Ensembles
ABSTRACT: Global return values of marine wind speed and significant wave height are
estimated from very large aggregates of archived ensemble forecasts at +240-h
lead time. Long lead time ensures that the forecasts represent independent
draws from the model climate. Compared with ERA-Interim, a reanalysis, the
ensemble yields higher return estimates for both wind speed and significant
wave height. Confidence intervals are much tighter due to the large size of the
dataset. The period (9 yrs) is short enough to be considered stationary even
with climate change. Furthermore, the ensemble is large enough for
non-parametric 100-yr return estimates to be made from order statistics. These
direct return estimates compare well with extreme value estimates outside areas
with tropical cyclones. Like any method employing modeled fields, it is
sensitive to tail biases in the numerical model, but we find that the biases
are moderate outside areas with tropical cyclones.
| no_new_dataset | 0.947235 |
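The non-parametric, order-statistics estimate mentioned above can be illustrated with a toy calculation: given a very large number of (approximately independent) states, an m-year return level is simply an empirical quantile. The synthetic Weibull draws and the 6-hourly spacing below are assumptions standing in for the archived ensemble output.

```python
# Sketch: non-parametric return levels read directly from order statistics.
# Synthetic Weibull "wind speeds" stand in for the archived ensemble forecasts;
# samples are treated as independent 6-hourly states (an assumption).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 2_000_000                      # huge aggregate, as with large ensembles
wind = 12.0 * rng.weibull(2.0, n_samples)  # toy marine wind speeds (m/s)

states_per_year = 365.25 * 4               # one state every 6 hours
def return_level(x, years):
    # Level exceeded on average once per `years`:
    # empirical quantile at 1 - 1 / (years * states_per_year).
    p = 1.0 - 1.0 / (years * states_per_year)
    return np.quantile(x, p)

for yrs in (10, 100):
    print(f"{yrs:>3}-yr return wind ~ {return_level(wind, yrs):.1f} m/s")
```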
1407.4832 | Ernesto Diaz-Aviles | Bernat Coma-Puig and Ernesto Diaz-Aviles and Wolfgang Nejdl | Collaborative Filtering Ensemble for Personalized Name Recommendation | Top-N recommendation; personalized ranking; given name recommendation | Proceedings of the ECML PKDD Discovery Challenge - Recommending
Given Names. Co-located with ECML PKDD 2013. Prague, Czech Republic,
September 27, 2013 | null | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Out of thousands of names to choose from, picking the right one for your
child is a daunting task. In this work, our objective is to help parents make
an informed decision while choosing a name for their baby. We follow a
recommender system approach and combine, in an ensemble, the individual
rankings produced by simple collaborative filtering algorithms in order to
produce a personalized list of names that meets the individual parents' taste.
Our experiments were conducted using real-world data collected from the query
logs of 'nameling' (nameling.net), an online portal for searching and exploring
names, which corresponds to the dataset released in the context of the ECML
PKDD Discovery Challenge 2013. Our approach is intuitive, easy to implement, and
features fast training and prediction steps.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2014 12:07:36 GMT"
}
] | 2014-07-21T00:00:00 | [
[
"Coma-Puig",
"Bernat",
""
],
[
"Diaz-Aviles",
"Ernesto",
""
],
[
"Nejdl",
"Wolfgang",
""
]
] | TITLE: Collaborative Filtering Ensemble for Personalized Name Recommendation
ABSTRACT: Out of thousands of names to choose from, picking the right one for your
child is a daunting task. In this work, our objective is to help parents make
an informed decision while choosing a name for their baby. We follow a
recommender system approach and combine, in an ensemble, the individual
rankings produced by simple collaborative filtering algorithms in order to
produce a personalized list of names that meets the individual parents' taste.
Our experiments were conducted using real-world data collected from the query
logs of 'nameling' (nameling.net), an online portal for searching and exploring
names, which corresponds to the dataset released in the context of the ECML
PKDD Discovery Challenge 2013. Our approach is intuitive, easy to implement, and
features fast training and prediction steps.
| no_new_dataset | 0.953966 |
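In the spirit of the ensemble described above, the sketch below combines two very simple recommenders (item co-occurrence and global popularity) into one ranked list with a Borda-style aggregation. The toy search histories and the equal-weight fusion are assumptions for illustration; they are not the challenge dataset or the authors' exact ensemble.

```python
# Sketch: tiny item-based CF + popularity ranker, fused by Borda-style aggregation.
# The toy "names searched together" histories are made up for illustration.
from collections import Counter, defaultdict

histories = [
    ["anna", "emma", "lena"],
    ["emma", "mia", "lena"],
    ["jonas", "finn", "emma"],
    ["mia", "anna", "emma"],
    ["finn", "jonas", "paul"],
]

def cooccurrence_scores(user_names):
    # Ranker 1: score candidates by co-occurrence with the user's own names.
    scores = Counter()
    for h in histories:
        overlap = len(set(h) & set(user_names))
        if overlap:
            for name in h:
                if name not in user_names:
                    scores[name] += overlap
    return scores

# Ranker 2: global popularity.
popularity = Counter(n for h in histories for n in h)

def borda_fuse(rankings, top_n=3):
    points = defaultdict(float)
    for ranking in rankings:
        for pos, name in enumerate(ranking):
            points[name] += len(ranking) - pos       # higher rank -> more points
    return sorted(points, key=points.get, reverse=True)[:top_n]

user = ["anna", "lena"]
cf_rank  = [n for n, _ in cooccurrence_scores(user).most_common()]
pop_rank = [n for n, _ in popularity.most_common() if n not in user]
print("recommended names:", borda_fuse([cf_rank, pop_rank]))
```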
1407.4958 | Stefan Westerlund | Stefan Westerlund and Christopher Harris | A Framework for HI Spectral Source Finding Using Distributed-Memory
Supercomputing | 15 pages, 6 figures | Publications of the Astronomical Society of Australia, 2014,
Volume 31 | 10.1017/pasa.2014.18 | null | astro-ph.IM cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The latest generation of radio astronomy interferometers will conduct all sky
surveys with data products consisting of petabytes of spectral line data.
Traditional approaches to identifying and parameterising the astrophysical
sources within this data will not scale to datasets of this magnitude, since
the performance of workstations will not keep up with the real-time generation
of data. For this reason, it is necessary to employ high performance computing
systems consisting of a large number of processors connected by a
high-bandwidth network. In order to make use of such supercomputers substantial
modifications must be made to serial source finding code. To ease the
transition, this work presents the Scalable Source Finder Framework, a
framework providing storage access, networking communication and data
composition functionality, which can support a wide range of source finding
algorithms provided they can be applied to subsets of the entire image.
Additionally, the Parallel Gaussian Source Finder was implemented using SSoFF,
utilising Gaussian filters, thresholding, and local statistics. PGSF was able
to search on a 256GB simulated dataset in under 24 minutes, significantly less
than the 8 to 12 hour observation that would generate such a dataset.
| [
{
"version": "v1",
"created": "Fri, 18 Jul 2014 11:36:57 GMT"
}
] | 2014-07-21T00:00:00 | [
[
"Westerlund",
"Stefan",
""
],
[
"Harris",
"Christopher",
""
]
] | TITLE: A Framework for HI Spectral Source Finding Using Distributed-Memory
Supercomputing
ABSTRACT: The latest generation of radio astronomy interferometers will conduct all sky
surveys with data products consisting of petabytes of spectral line data.
Traditional approaches to identifying and parameterising the astrophysical
sources within this data will not scale to datasets of this magnitude, since
the performance of workstations will not keep up with the real-time generation
of data. For this reason, it is necessary to employ high performance computing
systems consisting of a large number of processors connected by a
high-bandwidth network. In order to make use of such supercomputers substantial
modifications must be made to serial source finding code. To ease the
transition, this work presents the Scalable Source Finder Framework, a
framework providing storage access, networking communication and data
composition functionality, which can support a wide range of source finding
algorithms provided they can be applied to subsets of the entire image.
Additionally, the Parallel Gaussian Source Finder was implemented using SSoFF,
utilising Gaussian filters, thresholding, and local statistics. PGSF was able
to search on a 256GB simulated dataset in under 24 minutes, significantly less
than the 8 to 12 hour observation that would generate such a dataset.
| no_new_dataset | 0.936865 |
1407.4979 | Dong Yi | Dong Yi and Zhen Lei and Stan Z. Li | Deep Metric Learning for Practical Person Re-Identification | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various hand-crafted features and metric learning methods prevail in the
field of person re-identification. Compared to these methods, this paper
proposes a more general way that can learn a similarity metric from image
pixels directly. By using a "siamese" deep neural network, the proposed method
can jointly learn the color feature, texture feature and metric in a unified
framework. The network has a symmetry structure with two sub-networks which are
connected by Cosine function. To deal with the big variations of person images,
binomial deviance is used to evaluate the cost between similarities and labels,
which is proved to be robust to outliers.
Compared to existing research, a more practical setting is studied in the
experiments, that is, training and testing on different datasets (cross dataset
person re-identification). Both in "intra dataset" and "cross dataset"
settings, the superiority of the proposed method is illustrated on VIPeR and
PRID.
| [
{
"version": "v1",
"created": "Fri, 18 Jul 2014 13:07:16 GMT"
}
] | 2014-07-21T00:00:00 | [
[
"Yi",
"Dong",
""
],
[
"Lei",
"Zhen",
""
],
[
"Li",
"Stan Z.",
""
]
] | TITLE: Deep Metric Learning for Practical Person Re-Identification
ABSTRACT: Various hand-crafted features and metric learning methods prevail in the
field of person re-identification. Compared to these methods, this paper
proposes a more general way that can learn a similarity metric from image
pixels directly. By using a "siamese" deep neural network, the proposed method
can jointly learn the color feature, texture feature and metric in a unified
framework. The network has a symmetry structure with two sub-networks which are
connected by Cosine function. To deal with the big variations of person images,
binomial deviance is used to evaluate the cost between similarities and labels,
which is proved to be robust to outliers.
Compared to existing research, a more practical setting is studied in the
experiments, that is, training and testing on different datasets (cross dataset
person re-identification). Both in "intra dataset" and "cross dataset"
settings, the superiority of the proposed method is illustrated on VIPeR and
PRID.
| no_new_dataset | 0.941761 |
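One common form of the binomial deviance cost on cosine similarities, as used in this line of work, can be written down directly; the sketch below evaluates it on a matched and a mismatched toy pair. The alpha/beta values and random embeddings are assumptions and need not match the paper's exact parameterisation.

```python
# Sketch: binomial deviance on cosine similarities for verification pairs.
# One common parameterisation of the loss; not necessarily the paper's exact form.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def binomial_deviance(s, m, alpha=2.0, beta=0.5):
    # m = +1 for "same person" pairs, -1 for "different person" pairs.
    # Grows roughly linearly for badly violated pairs, hence softer on outliers
    # than a squared loss.
    return np.log1p(np.exp(-alpha * (s - beta) * m))

rng = np.random.default_rng(0)
emb_a, emb_b = rng.standard_normal(128), rng.standard_normal(128)
pos_pair = (emb_a, emb_a + 0.1 * rng.standard_normal(128))   # same identity
neg_pair = (emb_a, emb_b)                                     # different identity

for name, (x, y), m in [("positive", pos_pair, +1), ("negative", neg_pair, -1)]:
    s = cosine(x, y)
    print(f"{name}: cos={s:+.2f}  loss={binomial_deviance(s, m):.3f}")
```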
1404.4646 | Ping Li | Guangcan Liu and Ping Li | Advancing Matrix Completion by Modeling Extra Structures beyond
Low-Rankness | arXiv admin note: text overlap with arXiv:1404.4032 | null | null | null | stat.ME cs.IT cs.LG math.IT math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A well-known method for completing low-rank matrices based on convex
optimization has been established by Cand{\`e}s and Recht. Although
theoretically complete, the method may not entirely solve the low-rank matrix
completion problem. This is because the method captures only the low-rankness
property which gives merely a rough constraint that the data points locate on
some low-dimensional subspace, but generally ignores the extra structures which
specify in more detail how the data points locate on the subspace. Whenever the
geometric distribution of the data points is not uniform, the coherence
parameters of data might be large and, accordingly, the method might fail even
if the latent matrix we want to recover is fairly low-rank. To better handle
non-uniform data, in this paper we propose a method termed Low-Rank Factor
Decomposition (LRFD), which imposes an additional restriction that the data
points must be represented as linear combinations of the bases in a dictionary
constructed or learnt in advance. We show that LRFD can well handle non-uniform
data, provided that the dictionary is configured properly: We mathematically
prove that if the dictionary itself is low-rank then LRFD is immune to the
coherence parameters which might be large on non-uniform data. This provides an
elementary principle for learning the dictionary in LRFD and, naturally, leads
to a practical algorithm for advancing matrix completion. Extensive experiments
on randomly generated matrices and motion datasets show encouraging results.
| [
{
"version": "v1",
"created": "Thu, 17 Apr 2014 20:50:26 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jul 2014 18:04:35 GMT"
}
] | 2014-07-17T00:00:00 | [
[
"Liu",
"Guangcan",
""
],
[
"Li",
"Ping",
""
]
] | TITLE: Advancing Matrix Completion by Modeling Extra Structures beyond
Low-Rankness
ABSTRACT: A well-known method for completing low-rank matrices based on convex
optimization has been established by Cand{\`e}s and Recht. Although
theoretically complete, the method may not entirely solve the low-rank matrix
completion problem. This is because the method captures only the low-rankness
property which gives merely a rough constraint that the data points locate on
some low-dimensional subspace, but generally ignores the extra structures which
specify in more detail how the data points locate on the subspace. Whenever the
geometric distribution of the data points is not uniform, the coherence
parameters of data might be large and, accordingly, the method might fail even
if the latent matrix we want to recover is fairly low-rank. To better handle
non-uniform data, in this paper we propose a method termed Low-Rank Factor
Decomposition (LRFD), which imposes an additional restriction that the data
points must be represented as linear combinations of the bases in a dictionary
constructed or learnt in advance. We show that LRFD can well handle non-uniform
data, provided that the dictionary is configured properly: We mathematically
prove that if the dictionary itself is low-rank then LRFD is immune to the
coherence parameters which might be large on non-uniform data. This provides an
elementary principle for learning the dictionary in LRFD and, naturally, leads
to a practical algorithm for advancing matrix completion. Extensive experiments
on randomly generated matrices and motion datasets show encouraging results.
| no_new_dataset | 0.943712 |
1407.4179 | Paolo Gasti | Jaroslav Sedenka, Kiran Balagani, Vir Phoha, Paolo Gasti | Privacy-Preserving Population-Enhanced Biometric Key Generation from
Free-Text Keystroke Dynamics | null | Jaroslav Sedenka, Kiran Balagani, Vir Phoha and Paolo Gasti.
Privacy-Preserving Population-Enhanced Biometric Key Generation from
Free-Text Keystroke Dynamics. BTAS 2013 | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biometric key generation techniques are used to reliably generate
cryptographic material from biometric signals. Existing constructions require
users to perform a particular activity (e.g., type or say a password, or
provide a handwritten signature), and are therefore not suitable for generating
keys continuously. In this paper we present a new technique for biometric key
generation from free-text keystroke dynamics. This is the first technique
suitable for continuous key generation. Our approach is based on a scaled
parity code for key generation (and subsequent key reconstruction), and can be
augmented with the use of population data to improve security and reduce key
reconstruction error. In particular, we rely on linear discriminant analysis
(LDA) to obtain a better representation of discriminable biometric signals.
To update the LDA matrix without disclosing users' biometric information, we
design a provably secure privacy-preserving protocol (PP-LDA) based on
homomorphic encryption. Our biometric key generation with PP-LDA was evaluated
on a dataset of 486 users. We report equal error rate around 5% when using LDA,
and below 7% without LDA.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2014 01:47:59 GMT"
}
] | 2014-07-17T00:00:00 | [
[
"Sedenka",
"Jaroslav",
""
],
[
"Balagani",
"Kiran",
""
],
[
"Phoha",
"Vir",
""
],
[
"Gasti",
"Paolo",
""
]
] | TITLE: Privacy-Preserving Population-Enhanced Biometric Key Generation from
Free-Text Keystroke Dynamics
ABSTRACT: Biometric key generation techniques are used to reliably generate
cryptographic material from biometric signals. Existing constructions require
users to perform a particular activity (e.g., type or say a password, or
provide a handwritten signature), and are therefore not suitable for generating
keys continuously. In this paper we present a new technique for biometric key
generation from free-text keystroke dynamics. This is the first technique
suitable for continuous key generation. Our approach is based on a scaled
parity code for key generation (and subsequent key reconstruction), and can be
augmented with the use of population data to improve security and reduce key
reconstruction error. In particular, we rely on linear discriminant analysis
(LDA) to obtain a better representation of discriminable biometric signals.
To update the LDA matrix without disclosing users' biometric information, we
design a provably secure privacy-preserving protocol (PP-LDA) based on
homomorphic encryption. Our biometric key generation with PP-LDA was evaluated
on a dataset of 486 users. We report equal error rate around 5% when using LDA,
and below 7% without LDA.
| no_new_dataset | 0.946001 |
1407.4194 | Chaogui Kang | Chaogui Kang, Yu Liu, Lun Wu | Delineating Intra-Urban Spatial Connectivity Patterns by
Travel-Activities: A Case Study of Beijing, China | 6 pages, 4 figures | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Travel activities have been widely applied to quantify spatial interactions
between places, regions and nations. In this paper, we model the spatial
connectivities between 652 Traffic Analysis Zones (TAZs) in Beijing by a taxi
OD dataset. First, we unveil the gravitational structure of intra-urban spatial
connectivities of Beijing. On overall, the inter-TAZ interactions are well
governed by the Gravity Model $G_{ij} = {\lambda}p_{i}p_{j}/d_{ij}$, where
$p_{i}$, $p_{j}$ are degrees of TAZ $i$, $j$ and $d_{ij}$ the distance between
them, with a goodness-of-fit around 0.8. Second, the network based analysis
well reveals the polycentric form of Beijing. Last, we detect the semantics of
inter-TAZ connectivities based on their spatiotemporal patterns. We further
find that inter-TAZ connections deviating from the Gravity Model can be well
explained by link semantics.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2014 03:58:00 GMT"
}
] | 2014-07-17T00:00:00 | [
[
"Kang",
"Chaogui",
""
],
[
"Liu",
"Yu",
""
],
[
"Wu",
"Lun",
""
]
] | TITLE: Delineating Intra-Urban Spatial Connectivity Patterns by
Travel-Activities: A Case Study of Beijing, China
ABSTRACT: Travel activities have been widely applied to quantify spatial interactions
between places, regions and nations. In this paper, we model the spatial
connectivities between 652 Traffic Analysis Zones (TAZs) in Beijing by a taxi
OD dataset. First, we unveil the gravitational structure of intra-urban spatial
connectivities of Beijing. Overall, the inter-TAZ interactions are well
governed by the Gravity Model $G_{ij} = {\lambda}p_{i}p_{j}/d_{ij}$, where
$p_{i}$, $p_{j}$ are degrees of TAZ $i$, $j$ and $d_{ij}$ the distance between
them, with a goodness-of-fit around 0.8. Second, the network based analysis
well reveals the polycentric form of Beijing. Last, we detect the semantics of
inter-TAZ connectivities based on their spatiotemporal patterns. We further
find that inter-TAZ connections deviating from the Gravity Model can be well
explained by link semantics.
| no_new_dataset | 0.915658 |
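A compact sketch of the gravity-model fit described above: taking logarithms turns G_ij = lambda * p_i * p_j / d_ij into a linear regression whose R^2 plays the role of the reported goodness-of-fit (around 0.8 on the taxi data). The synthetic zones, degrees and flows below are stand-ins for the actual OD matrix.

```python
# Sketch: log-linear fit of G_ij = lam * p_i * p_j / d_ij on synthetic zone data.
# In real use p_i, p_j and d_ij would come from the taxi OD matrix; here they
# are randomly generated so the example is self-contained.
import numpy as np

rng = np.random.default_rng(0)
n = 80                                       # toy number of zones
p = rng.integers(50, 5000, n).astype(float)  # zone "degrees"
xy = rng.uniform(0, 30, (n, 2))              # zone centroids (km)

rows = []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        d = np.linalg.norm(xy[i] - xy[j]) + 0.5
        flow = 0.01 * p[i] * p[j] / d * rng.lognormal(0.0, 0.4)  # noisy gravity flows
        rows.append((np.log(p[i] * p[j]), np.log(d), np.log(flow)))

X = np.column_stack([np.ones(len(rows)),
                     [r[0] for r in rows],
                     [r[1] for r in rows]])
y = np.array([r[2] for r in rows])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # [ln lam, exponent of p_i*p_j, exponent of d]
r2 = 1 - ((y - X @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("fitted exponents (expect ~ +1, -1):", coef[1:].round(2), " R^2 =", round(r2, 3))
```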
1407.4378 | Cameron Mura | Marcin Cieslik and Cameron Mura | PaPy: Parallel and Distributed Data-processing Pipelines in Python | 7 pages, 5 figures, 2 tables, some use-cases; more at
http://muralab.org/PaPy | null | null | null | cs.PL q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PaPy, which stands for parallel pipelines in Python, is a highly flexible
framework that enables the construction of robust, scalable workflows for
either generating or processing voluminous datasets. A workflow is created from
user-written Python functions (nodes) connected by 'pipes' (edges) into a
directed acyclic graph. These functions are arbitrarily definable, and can make
use of any Python modules or external binaries. Given a user-defined topology
and collection of input data, functions are composed into nested higher-order
maps, which are transparently and robustly evaluated in parallel on a single
computer or on remote hosts. Local and remote computational resources can be
flexibly pooled and assigned to functional nodes, thereby allowing facile
load-balancing and pipeline optimization to maximize computational throughput.
Input items are processed by nodes in parallel, and traverse the graph in
batches of adjustable size -- a trade-off between lazy-evaluation, parallelism,
and memory consumption. The processing of a single item can be parallelized in
a scatter/gather scheme. The simplicity and flexibility of distributed
workflows using PaPy bridges the gap between desktop -> grid, enabling this new
computing paradigm to be leveraged in the processing of large scientific
datasets.
| [
{
"version": "v1",
"created": "Tue, 15 Jul 2014 03:13:00 GMT"
}
] | 2014-07-17T00:00:00 | [
[
"Cieslik",
"Marcin",
""
],
[
"Mura",
"Cameron",
""
]
] | TITLE: PaPy: Parallel and Distributed Data-processing Pipelines in Python
ABSTRACT: PaPy, which stands for parallel pipelines in Python, is a highly flexible
framework that enables the construction of robust, scalable workflows for
either generating or processing voluminous datasets. A workflow is created from
user-written Python functions (nodes) connected by 'pipes' (edges) into a
directed acyclic graph. These functions are arbitrarily definable, and can make
use of any Python modules or external binaries. Given a user-defined topology
and collection of input data, functions are composed into nested higher-order
maps, which are transparently and robustly evaluated in parallel on a single
computer or on remote hosts. Local and remote computational resources can be
flexibly pooled and assigned to functional nodes, thereby allowing facile
load-balancing and pipeline optimization to maximize computational throughput.
Input items are processed by nodes in parallel, and traverse the graph in
batches of adjustable size -- a trade-off between lazy-evaluation, parallelism,
and memory consumption. The processing of a single item can be parallelized in
a scatter/gather scheme. The simplicity and flexibility of distributed
workflows using PaPy bridges the gap between desktop -> grid, enabling this new
computing paradigm to be leveraged in the processing of large scientific
datasets.
| no_new_dataset | 0.941061 |
1407.4409 | Ming Jin | Ruoxi Jia, Ming Jin, Costas J. Spanos | SoundLoc: Acoustic Method for Indoor Localization without Infrastructure | BuildSys 2014 | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying locations of occupants is beneficial to energy management in
buildings. A key observation in indoor environment is that distinct functional
areas are typically controlled by separate HVAC and lighting systems and room
level localization is sufficient to provide a powerful tool for energy usage
reduction by occupancy-based actuation of the building facilities. Based upon
this observation, this paper focuses on identifying the room where a person or
a mobile device is physically present. Existing room localization methods,
however, require special infrastructure to annotate rooms.
SoundLoc is a room-level localization system that exploits the intrinsic
acoustic properties of individual rooms and obviates the need for
infrastructure. As we show in the study, rooms' acoustic properties can be
characterized by Room Impulse Response (RIR). Nevertheless, obtaining precise
RIRs is a time-consuming and expensive process. The main contributions of our
work are the following. First, a cost-effective RIR measurement system is
implemented and the Noise Adaptive Extraction of Reverberation (NAER) algorithm
is developed to estimate room acoustic parameters in noisy conditions. Second,
a comprehensive physical and statistical analysis of features extracted from
RIRs is performed. Also, SoundLoc is evaluated using the dataset consisting of
ten (10) different rooms. The overall accuracy of 97.8% achieved demonstrates
the potential to be integrated into automatic mapping of building space.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2014 18:16:00 GMT"
}
] | 2014-07-17T00:00:00 | [
[
"Jia",
"Ruoxi",
""
],
[
"Jin",
"Ming",
""
],
[
"Spanos",
"Costas J.",
""
]
] | TITLE: SoundLoc: Acoustic Method for Indoor Localization without Infrastructure
ABSTRACT: Identifying locations of occupants is beneficial to energy management in
buildings. A key observation in indoor environment is that distinct functional
areas are typically controlled by separate HVAC and lighting systems and room
level localization is sufficient to provide a powerful tool for energy usage
reduction by occupancy-based actuation of the building facilities. Based upon
this observation, this paper focuses on identifying the room where a person or
a mobile device is physically present. Existing room localization methods,
however, require special infrastructure to annotate rooms.
SoundLoc is a room-level localization system that exploits the intrinsic
acoustic properties of individual rooms and obviates the need for
infrastructure. As we show in the study, rooms' acoustic properties can be
characterized by Room Impulse Response (RIR). Nevertheless, obtaining precise
RIRs is a time-consuming and expensive process. The main contributions of our
work are the following. First, a cost-effective RIR measurement system is
implemented and the Noise Adaptive Extraction of Reverberation (NAER) algorithm
is developed to estimate room acoustic parameters in noisy conditions. Second,
a comprehensive physical and statistical analysis of features extracted from
RIRs is performed. Also, SoundLoc is evaluated using the dataset consisting of
ten (10) different rooms. The overall accuracy of 97.8% achieved demonstrates
the potential to be integrated into automatic mapping of building space.
| no_new_dataset | 0.951097 |
1407.4416 | Ping Li | Anshumali Shrivastava and Ping Li | In Defense of MinHash Over SimHash | null | null | null | null | stat.CO cs.DS cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MinHash and SimHash are the two widely adopted Locality Sensitive Hashing
(LSH) algorithms for large-scale data processing applications. Deciding which
LSH to use for a particular problem at hand is an important question, which has
no clear answer in the existing literature. In this study, we provide a
theoretical answer (validated by experiments) that MinHash virtually always
outperforms SimHash when the data are binary, as is common in practice, such as
search.
The collision probability of MinHash is a function of resemblance similarity
($\mathcal{R}$), while the collision probability of SimHash is a function of
cosine similarity ($\mathcal{S}$). To provide a common basis for comparison, we
evaluate retrieval results in terms of $\mathcal{S}$ for both MinHash and
SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH
with respect to $\mathcal{S}$, by using a general inequality $\mathcal{S}^2\leq
\mathcal{R}\leq \frac{\mathcal{S}}{2-\mathcal{S}}$. Our worst case analysis can
show that MinHash significantly outperforms SimHash in high similarity region.
Interestingly, our intensive experiments reveal that MinHash is also
substantially better than SimHash even in datasets where most of the data
points are not too similar to each other. This is partly because, in practical
data, often $\mathcal{R}\geq \frac{\mathcal{S}}{z-\mathcal{S}}$ holds where $z$
is only slightly larger than 2 (e.g., $z\leq 2.1$). Our restricted worst case
analysis by assuming $\frac{\mathcal{S}}{z-\mathcal{S}}\leq \mathcal{R}\leq
\frac{\mathcal{S}}{2-\mathcal{S}}$ shows that MinHash indeed significantly
outperforms SimHash even in low similarity region.
We believe the results in this paper will provide valuable guidelines for
search in practice, especially when the data are sparse.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2014 18:27:02 GMT"
}
] | 2014-07-17T00:00:00 | [
[
"Shrivastava",
"Anshumali",
""
],
[
"Li",
"Ping",
""
]
] | TITLE: In Defense of MinHash Over SimHash
ABSTRACT: MinHash and SimHash are the two widely adopted Locality Sensitive Hashing
(LSH) algorithms for large-scale data processing applications. Deciding which
LSH to use for a particular problem at hand is an important question, which has
no clear answer in the existing literature. In this study, we provide a
theoretical answer (validated by experiments) that MinHash virtually always
outperforms SimHash when the data are binary, as common in practice such as
search.
The collision probability of MinHash is a function of resemblance similarity
($\mathcal{R}$), while the collision probability of SimHash is a function of
cosine similarity ($\mathcal{S}$). To provide a common basis for comparison, we
evaluate retrieval results in terms of $\mathcal{S}$ for both MinHash and
SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH
with respect to $\mathcal{S}$, by using a general inequality $\mathcal{S}^2\leq
\mathcal{R}\leq \frac{\mathcal{S}}{2-\mathcal{S}}$. Our worst case analysis can
show that MinHash significantly outperforms SimHash in the high similarity region.
Interestingly, our intensive experiments reveal that MinHash is also
substantially better than SimHash even in datasets where most of the data
points are not too similar to each other. This is partly because, in practical
data, often $\mathcal{R}\geq \frac{\mathcal{S}}{z-\mathcal{S}}$ holds where $z$
is only slightly larger than 2 (e.g., $z\leq 2.1$). Our restricted worst case
analysis by assuming $\frac{\mathcal{S}}{z-\mathcal{S}}\leq \mathcal{R}\leq
\frac{\mathcal{S}}{2-\mathcal{S}}$ shows that MinHash indeed significantly
outperforms SimHash even in the low similarity region.
We believe the results in this paper will provide valuable guidelines for
search in practice, especially when the data are sparse.
| no_new_dataset | 0.944689 |
1403.4106 | Mario Vincenzo Tomasello | Mario Vincenzo Tomasello, Nicola Perra, Claudio Juan Tessone, M\'arton
Karsai, Frank Schweitzer | The role of endogenous and exogenous mechanisms in the formation of R&D
networks | 12 pages, 10 figures | Tomasello, M.V., Perra, N., Tessone, C.J., Karsai, M. &
Schweitzer, F. The role of endogenous and exogenous mechanisms in the
formation of R&D networks. Sci. Rep. 4, 5679 (2014) | 10.1038/srep05679 | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop an agent-based model of strategic link formation in Research and
Development (R&D) networks. Empirical evidence has shown that the growth of
these networks is driven by mechanisms which are both endogenous to the system
(that is, depending on existing alliances patterns) and exogenous (that is,
driven by an exploratory search for newcomer firms). Extant research to date
has not investigated both mechanisms simultaneously in a comparative manner. To
overcome this limitation, we develop a general modeling framework to shed light
on the relative importance of these two mechanisms. We test our model against a
comprehensive dataset, listing cross-country and cross-sectoral R&D alliances
from 1984 to 2009. Our results show that by fitting only three macroscopic
properties of the network topology, this framework is able to reproduce a
number of micro-level measures, including the distributions of degree, local
clustering, path length and component size, and the emergence of network
clusters. Furthermore, by estimating the link probabilities towards newcomers
and established firms from the data, we find that endogenous mechanisms are
predominant over the exogenous ones in the network formation, thus quantifying
the importance of existing structures in selecting partner firms.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2014 14:21:08 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jul 2014 08:28:40 GMT"
}
] | 2014-07-16T00:00:00 | [
[
"Tomasello",
"Mario Vincenzo",
""
],
[
"Perra",
"Nicola",
""
],
[
"Tessone",
"Claudio Juan",
""
],
[
"Karsai",
"Márton",
""
],
[
"Schweitzer",
"Frank",
""
]
] | TITLE: The role of endogenous and exogenous mechanisms in the formation of R&D
networks
ABSTRACT: We develop an agent-based model of strategic link formation in Research and
Development (R&D) networks. Empirical evidence has shown that the growth of
these networks is driven by mechanisms which are both endogenous to the system
(that is, depending on existing alliances patterns) and exogenous (that is,
driven by an exploratory search for newcomer firms). Extant research to date
has not investigated both mechanisms simultaneously in a comparative manner. To
overcome this limitation, we develop a general modeling framework to shed light
on the relative importance of these two mechanisms. We test our model against a
comprehensive dataset, listing cross-country and cross-sectoral R&D alliances
from 1984 to 2009. Our results show that by fitting only three macroscopic
properties of the network topology, this framework is able to reproduce a
number of micro-level measures, including the distributions of degree, local
clustering, path length and component size, and the emergence of network
clusters. Furthermore, by estimating the link probabilities towards newcomers
and established firms from the data, we find that endogenous mechanisms are
predominant over the exogenous ones in the network formation, thus quantifying
the importance of existing structures in selecting partner firms.
| no_new_dataset | 0.912358 |
1407.3867 | Ning Zhang | Ning Zhang, Jeff Donahue, Ross Girshick, Trevor Darrell | Part-based R-CNNs for Fine-grained Category Detection | 16 pages. To appear at European Conference on Computer Vision (ECCV),
2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic part localization can facilitate fine-grained categorization by
explicitly isolating subtle appearance differences associated with specific
object parts. Methods for pose-normalized representations have been proposed,
but generally presume bounding box annotations at test time due to the
difficulty of object detection. We propose a model for fine-grained
categorization that overcomes these limitations by leveraging deep
convolutional features computed on bottom-up region proposals. Our method
learns whole-object and part detectors, enforces learned geometric constraints
between them, and predicts a fine-grained category from a pose-normalized
representation. Experiments on the Caltech-UCSD bird dataset confirm that our
method outperforms state-of-the-art fine-grained categorization methods in an
end-to-end evaluation without requiring a bounding box at test time.
| [
{
"version": "v1",
"created": "Tue, 15 Jul 2014 02:32:16 GMT"
}
] | 2014-07-16T00:00:00 | [
[
"Zhang",
"Ning",
""
],
[
"Donahue",
"Jeff",
""
],
[
"Girshick",
"Ross",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Part-based R-CNNs for Fine-grained Category Detection
ABSTRACT: Semantic part localization can facilitate fine-grained categorization by
explicitly isolating subtle appearance differences associated with specific
object parts. Methods for pose-normalized representations have been proposed,
but generally presume bounding box annotations at test time due to the
difficulty of object detection. We propose a model for fine-grained
categorization that overcomes these limitations by leveraging deep
convolutional features computed on bottom-up region proposals. Our method
learns whole-object and part detectors, enforces learned geometric constraints
between them, and predicts a fine-grained category from a pose-normalized
representation. Experiments on the Caltech-UCSD bird dataset confirm that our
method outperforms state-of-the-art fine-grained categorization methods in an
end-to-end evaluation without requiring a bounding box at test time.
| no_new_dataset | 0.951006 |
1407.3950 | Anders Drachen Dr. | Anders Drachen, Christian Thurau, Rafet Sifa, Christian Bauckhage | A Comparison of Methods for Player Clustering via Behavioral Telemetry | Foundations of Digital Games 2013 | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of user behavior in digital games has been aided by the
introduction of user telemetry in game development, which provides
unprecedented access to quantitative data on user behavior from the installed
game clients of the entire population of players. Player behavior telemetry
datasets can be exceptionally complex, with features recorded for a varying
population of users over a temporal segment that can reach years in duration.
Categorization of behaviors, whether through descriptive methods (e.g.
segmentation) or unsupervised/supervised learning techniques, is valuable for
finding patterns in the behavioral data, and developing profiles that are
actionable to game developers. There are numerous methods for unsupervised
clustering of user behavior, e.g. k-means/c-means, Non-negative Matrix
Factorization, or Principal Component Analysis. Although all yield behavior
categorizations, interpretation of the resulting categories in terms of actual
play behavior can be difficult if not impossible. In this paper, a range of
unsupervised techniques are applied together with Archetypal Analysis to
develop behavioral clusters from playtime data of 70,014 World of Warcraft
players, covering a five year interval. The techniques are evaluated with
respect to their ability to develop actionable behavioral profiles from the
dataset.
| [
{
"version": "v1",
"created": "Tue, 15 Jul 2014 11:41:39 GMT"
}
] | 2014-07-16T00:00:00 | [
[
"Drachen",
"Anders",
""
],
[
"Thurau",
"Christian",
""
],
[
"Sifa",
"Rafet",
""
],
[
"Bauckhage",
"Christian",
""
]
] | TITLE: A Comparison of Methods for Player Clustering via Behavioral Telemetry
ABSTRACT: The analysis of user behavior in digital games has been aided by the
introduction of user telemetry in game development, which provides
unprecedented access to quantitative data on user behavior from the installed
game clients of the entire population of players. Player behavior telemetry
datasets can be exceptionally complex, with features recorded for a varying
population of users over a temporal segment that can reach years in duration.
Categorization of behaviors, whether through descriptive methods (e.g.
segmentation) or unsupervised/supervised learning techniques, is valuable for
finding patterns in the behavioral data, and developing profiles that are
actionable to game developers. There are numerous methods for unsupervised
clustering of user behavior, e.g. k-means/c-means, Non-negative Matrix
Factorization, or Principal Component Analysis. Although all yield behavior
categorizations, interpretation of the resulting categories in terms of actual
play behavior can be difficult if not impossible. In this paper, a range of
unsupervised techniques are applied together with Archetypal Analysis to
develop behavioral clusters from playtime data of 70,014 World of Warcraft
players, covering a five year interval. The techniques are evaluated with
respect to their ability to develop actionable behavioral profiles from the
dataset.
| no_new_dataset | 0.948058 |
1407.4075 | Grigori Fursin | Lianjie Luo and Yang Chen and Chengyong Wu and Shun Long and Grigori
Fursin | Finding representative sets of optimizations for adaptive
multiversioning applications | 3rd Workshop on Statistical and Machine Learning Approaches Applied
to Architectures and Compilation (SMART'09), co-located with HiPEAC'09
conference, Paphos, Cyprus, 2009 | null | null | null | cs.PL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iterative compilation is a widely adopted technique to optimize programs for
different constraints such as performance, code size and power consumption in
rapidly evolving hardware and software environments. However, in case of
statically compiled programs, it is often restricted to optimizations for a
specific dataset and may not be applicable to applications that exhibit
different run-time behavior across program phases, multiple datasets or when
executed in heterogeneous, reconfigurable and virtual environments. Several
frameworks have been recently introduced to tackle these problems and enable
run-time optimization and adaptation for statically compiled programs based on
static function multiversioning and monitoring of online program behavior. In
this article, we present a novel technique to select a minimal set of
representative optimization variants (function versions) for such frameworks
while avoiding performance loss across available datasets and code-size
explosion. We developed a novel mapping mechanism using popular decision tree
or rule induction based machine learning techniques to rapidly select best code
versions at run-time based on dataset features and minimize selection overhead.
These techniques enable creation of self-tuning static binaries or libraries
adaptable to changing behavior and environments at run-time using staged
compilation that do not require complex recompilation frameworks while
effectively outperforming traditional single-version non-adaptable code.
| [
{
"version": "v1",
"created": "Mon, 14 Jul 2014 17:55:07 GMT"
}
] | 2014-07-16T00:00:00 | [
[
"Luo",
"Lianjie",
""
],
[
"Chen",
"Yang",
""
],
[
"Wu",
"Chengyong",
""
],
[
"Long",
"Shun",
""
],
[
"Fursin",
"Grigori",
""
]
] | TITLE: Finding representative sets of optimizations for adaptive
multiversioning applications
ABSTRACT: Iterative compilation is a widely adopted technique to optimize programs for
different constraints such as performance, code size and power consumption in
rapidly evolving hardware and software environments. However, in case of
statically compiled programs, it is often restricted to optimizations for a
specific dataset and may not be applicable to applications that exhibit
different run-time behavior across program phases, multiple datasets or when
executed in heterogeneous, reconfigurable and virtual environments. Several
frameworks have been recently introduced to tackle these problems and enable
run-time optimization and adaptation for statically compiled programs based on
static function multiversioning and monitoring of online program behavior. In
this article, we present a novel technique to select a minimal set of
representative optimization variants (function versions) for such frameworks
while avoiding performance loss across available datasets and code-size
explosion. We developed a novel mapping mechanism using popular decision tree
or rule induction based machine learning techniques to rapidly select best code
versions at run-time based on dataset features and minimize selection overhead.
These techniques enable creation of self-tuning static binaries or libraries
adaptable to changing behavior and environments at run-time using staged
compilation that do not require complex recompilation frameworks while
effectively outperforming traditional single-version non-adaptable code.
| no_new_dataset | 0.940408 |
1402.0790 | Philipp Singer | Philipp Singer, Denis Helic, Behnam Taraghi and Markus Strohmaier | Detecting Memory and Structure in Human Navigation Patterns Using Markov
Chain Models of Varying Order | null | PLoS ONE, vol 9(7), 2014 | 10.1371/journal.pone.0102070 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most frequently used models for understanding human navigation on
the Web is the Markov chain model, where Web pages are represented as states
and hyperlinks as probabilities of navigating from one page to another.
Predominantly, human navigation on the Web has been thought to satisfy the
memoryless Markov property stating that the next page a user visits only
depends on her current page and not on previously visited ones. This idea has
found its way in numerous applications such as Google's PageRank algorithm and
others. Recently, new studies suggested that human navigation may better be
modeled using higher order Markov chain models, i.e., the next page depends on
a longer history of past clicks. Yet, this finding is preliminary and does not
account for the higher complexity of higher order Markov chain models which is
why the memoryless model is still widely used. In this work we thoroughly
present a diverse array of advanced inference methods for determining the
appropriate Markov chain order. We highlight strengths and weaknesses of each
method and apply them for investigating memory and structure of human
navigation on the Web. Our experiments reveal that the complexity of higher
order models grows faster than their utility, and thus we confirm that the
memoryless model represents a quite practical model for human navigation on a
page level. However, when we expand our analysis to a topical level, where we
abstract away from specific page transitions to transitions between topics, we
find that the memoryless assumption is violated and specific regularities can
be observed. We report results from experiments with two types of navigational
datasets (goal-oriented vs. free form) and observe interesting structural
differences that make a strong argument for more contextual studies of human
navigation in future work.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 16:25:46 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jun 2014 08:58:24 GMT"
}
] | 2014-07-15T00:00:00 | [
[
"Singer",
"Philipp",
""
],
[
"Helic",
"Denis",
""
],
[
"Taraghi",
"Behnam",
""
],
[
"Strohmaier",
"Markus",
""
]
] | TITLE: Detecting Memory and Structure in Human Navigation Patterns Using Markov
Chain Models of Varying Order
ABSTRACT: One of the most frequently used models for understanding human navigation on
the Web is the Markov chain model, where Web pages are represented as states
and hyperlinks as probabilities of navigating from one page to another.
Predominantly, human navigation on the Web has been thought to satisfy the
memoryless Markov property stating that the next page a user visits only
depends on her current page and not on previously visited ones. This idea has
found its way in numerous applications such as Google's PageRank algorithm and
others. Recently, new studies suggested that human navigation may better be
modeled using higher order Markov chain models, i.e., the next page depends on
a longer history of past clicks. Yet, this finding is preliminary and does not
account for the higher complexity of higher order Markov chain models which is
why the memoryless model is still widely used. In this work we thoroughly
present a diverse array of advanced inference methods for determining the
appropriate Markov chain order. We highlight strengths and weaknesses of each
method and apply them for investigating memory and structure of human
navigation on the Web. Our experiments reveal that the complexity of higher
order models grows faster than their utility, and thus we confirm that the
memoryless model represents a quite practical model for human navigation on a
page level. However, when we expand our analysis to a topical level, where we
abstract away from specific page transitions to transitions between topics, we
find that the memoryless assumption is violated and specific regularities can
be observed. We report results from experiments with two types of navigational
datasets (goal-oriented vs. free form) and observe interesting structural
differences that make a strong argument for more contextual studies of human
navigation in future work.
| no_new_dataset | 0.947721 |
1404.6635 | Gugan Thoppe | Gugan Thoppe, Vivek S. Borkar, Dinesh Garg | Greedy Block Coordinate Descent (GBCD) Method for High Dimensional
Quadratic Programs | 29 pages, 3 figures, New references added | null | null | null | math.OC cs.SY stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High dimensional unconstrained quadratic programs (UQPs) involving massive
datasets are now common in application areas such as web, social networks, etc.
Unless computational resources that match up to these datasets are available,
solving such problems using classical UQP methods is very difficult. This paper
discusses alternatives. We first define high dimensional compliant (HDC)
methods for UQPs---methods that can solve high dimensional UQPs by adapting to
available computational resources. We then show that the class of block
Kaczmarz and block coordinate descent (BCD) are the only existing methods that
can be made HDC. As a possible answer to the question of the `best' amongst BCD
methods for UQP, we propose a novel greedy BCD (GBCD) method with serial,
parallel and distributed variants. Convergence rates and numerical tests
confirm that the GBCD is indeed an effective method to solve high dimensional
UQPs. In fact, it sometimes beats even the conjugate gradient.
| [
{
"version": "v1",
"created": "Sat, 26 Apr 2014 11:36:46 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Jul 2014 12:05:55 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Jul 2014 08:04:36 GMT"
}
] | 2014-07-15T00:00:00 | [
[
"Thoppe",
"Gugan",
""
],
[
"Borkar",
"Vivek S.",
""
],
[
"Garg",
"Dinesh",
""
]
] | TITLE: Greedy Block Coordinate Descent (GBCD) Method for High Dimensional
Quadratic Programs
ABSTRACT: High dimensional unconstrained quadratic programs (UQPs) involving massive
datasets are now common in application areas such as web, social networks, etc.
Unless computational resources that match up to these datasets are available,
solving such problems using classical UQP methods is very difficult. This paper
discusses alternatives. We first define high dimensional compliant (HDC)
methods for UQPs---methods that can solve high dimensional UQPs by adapting to
available computational resources. We then show that the class of block
Kaczmarz and block coordinate descent (BCD) are the only existing methods that
can be made HDC. As a possible answer to the question of the `best' amongst BCD
methods for UQP, we propose a novel greedy BCD (GBCD) method with serial,
parallel and distributed variants. Convergence rates and numerical tests
confirm that the GBCD is indeed an effective method to solve high dimensional
UQPs. In fact, it sometimes beats even the conjugate gradient.
| no_new_dataset | 0.942295 |
1407.0342 | Jinwei Xu | Jinwei Xu, Jiankun Hu, Xiuping Jia | A New Path to Construct Parametric Orientation Field: Sparse FOMFE Model
and Compressed Sparse FOMFE Model | null | null | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Orientation field, representing the fingerprint ridge structure direction,
plays a crucial role in fingerprint-related image processing tasks. Orientation
field is able to be constructed by either non-parametric or parametric methods.
In this paper, the advantages and disadvantages of the existing
non-parametric and parametric approaches are briefly summarized. With the
further investigation for constructing the orientation field by parametric
technique, two new models - sparse FOMFE model and compressed sparse FOMFE
model are introduced, based on the rapidly developing signal sparse
representation and compressed sensing theories. The experiments on high-quality
fingerprint image dataset (plain and rolled print) and poor-quality fingerprint
image dataset (latent print) demonstrate their feasibility of constructing the
orientation field in a sparse or even compressed sparse mode. The comparisons
among the state-of-the-art orientation field modeling approaches show that the
two proposed models have potential for use in big data-oriented
fingerprint indexing tasks.
| [
{
"version": "v1",
"created": "Tue, 1 Jul 2014 18:18:39 GMT"
}
] | 2014-07-15T00:00:00 | [
[
"Xu",
"Jinwei",
""
],
[
"Hu",
"Jiankun",
""
],
[
"Jia",
"Xiuping",
""
]
] | TITLE: A New Path to Construct Parametric Orientation Field: Sparse FOMFE Model
and Compressed Sparse FOMFE Model
ABSTRACT: Orientation field, representing the fingerprint ridge structure direction,
plays a crucial role in fingerprint-related image processing tasks. Orientation
field is able to be constructed by either non-parametric or parametric methods.
In this paper, the advantages and disadvantages of the existing
non-parametric and parametric approaches are briefly summarized. With the
further investigation for constructing the orientation field by parametric
technique, two new models - sparse FOMFE model and compressed sparse FOMFE
model are introduced, based on the rapidly developing signal sparse
representation and compressed sensing theories. The experiments on high-quality
fingerprint image dataset (plain and rolled print) and poor-quality fingerprint
image dataset (latent print) demonstrate their feasibility of constructing the
orientation field in a sparse or even compressed sparse mode. The comparisons
among the state-of-the-art orientation field modeling approaches show that the
two proposed models have potential for use in big data-oriented
fingerprint indexing tasks.
| no_new_dataset | 0.951188 |
1407.3685 | Anthony Bagnall Dr | Anthony Bagnall, Jon Hills and Jason Lines | Finding Motif Sets in Time Series | null | null | null | CMPC14-03 | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-series motifs are representative subsequences that occur frequently in a
time series; a motif set is the set of subsequences deemed to be instances of a
given motif. We focus on finding motif sets. Our motivation is to detect motif
sets in household electricity-usage profiles, representing repeated patterns of
household usage.
We propose three algorithms for finding motif sets. Two are greedy algorithms
based on pairwise comparison, and the third uses a heuristic measure of set
quality to find the motif set directly. We compare these algorithms on
simulated datasets and on electricity-usage data. We show that Scan MK, the
simplest way of using the best-matching pair to find motif sets, is less
accurate on our synthetic data than Set Finder and Cluster MK, although the
latter is very sensitive to parameter settings. We qualitatively analyse the
outputs for the electricity-usage data and demonstrate that both Scan MK and
Set Finder can discover useful motif sets in such data.
| [
{
"version": "v1",
"created": "Mon, 14 Jul 2014 15:01:57 GMT"
}
] | 2014-07-15T00:00:00 | [
[
"Bagnall",
"Anthony",
""
],
[
"Hills",
"Jon",
""
],
[
"Lines",
"Jason",
""
]
] | TITLE: Finding Motif Sets in Time Series
ABSTRACT: Time-series motifs are representative subsequences that occur frequently in a
time series; a motif set is the set of subsequences deemed to be instances of a
given motif. We focus on finding motif sets. Our motivation is to detect motif
sets in household electricity-usage profiles, representing repeated patterns of
household usage.
We propose three algorithms for finding motif sets. Two are greedy algorithms
based on pairwise comparison, and the third uses a heuristic measure of set
quality to find the motif set directly. We compare these algorithms on
simulated datasets and on electricity-usage data. We show that Scan MK, the
simplest way of using the best-matching pair to find motif sets, is less
accurate on our synthetic data than Set Finder and Cluster MK, although the
latter is very sensitive to parameter settings. We qualitatively analyse the
outputs for the electricity-usage data and demonstrate that both Scan MK and
Set Finder can discover useful motif sets in such data.
| no_new_dataset | 0.949342 |
1407.3686 | Alejandro Gonz\'alez Alzate | Alejandro Gonz\'alez and Sebastian Ramos and David V\'azquez and
Antonio M. L\'opez and Jaume Amores | Spatiotemporal Stacked Sequential Learning for Pedestrian Detection | 8 pages, 5 figure, 1 table | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pedestrian classifiers decide which image windows contain a pedestrian. In
practice, such classifiers provide a relatively high response at neighbor
windows overlapping a pedestrian, while the responses around potential false
positives are expected to be lower. An analogous reasoning applies for image
sequences. If there is a pedestrian located within a frame, the same pedestrian
is expected to appear close to the same location in neighbor frames. Therefore,
such a location has chances of receiving high classification scores during
several frames, while false positives are expected to be more spurious. In this
paper we propose to exploit such correlations for improving the accuracy of
base pedestrian classifiers. In particular, we propose to use two-stage
classifiers which not only rely on the image descriptors required by the base
classifiers but also on the response of such base classifiers in a given
spatiotemporal neighborhood. More specifically, we train pedestrian classifiers
using a stacked sequential learning (SSL) paradigm. We use a new pedestrian
dataset we have acquired from a car to evaluate our proposal at different frame
rates. We also test on a well known dataset: Caltech. The obtained results show
that our SSL proposal boosts detection accuracy significantly with a minimal
impact on the computational cost. Interestingly, SSL improves the accuracy most
in the most dangerous situations, i.e. when a pedestrian is close to the
camera.
| [
{
"version": "v1",
"created": "Mon, 14 Jul 2014 15:03:01 GMT"
}
] | 2014-07-15T00:00:00 | [
[
"González",
"Alejandro",
""
],
[
"Ramos",
"Sebastian",
""
],
[
"Vázquez",
"David",
""
],
[
"López",
"Antonio M.",
""
],
[
"Amores",
"Jaume",
""
]
] | TITLE: Spatiotemporal Stacked Sequential Learning for Pedestrian Detection
ABSTRACT: Pedestrian classifiers decide which image windows contain a pedestrian. In
practice, such classifiers provide a relatively high response at neighbor
windows overlapping a pedestrian, while the responses around potential false
positives are expected to be lower. An analogous reasoning applies for image
sequences. If there is a pedestrian located within a frame, the same pedestrian
is expected to appear close to the same location in neighbor frames. Therefore,
such a location has chances of receiving high classification scores during
several frames, while false positives are expected to be more spurious. In this
paper we propose to exploit such correlations for improving the accuracy of
base pedestrian classifiers. In particular, we propose to use two-stage
classifiers which not only rely on the image descriptors required by the base
classifiers but also on the response of such base classifiers in a given
spatiotemporal neighborhood. More specifically, we train pedestrian classifiers
using a stacked sequential learning (SSL) paradigm. We use a new pedestrian
dataset we have acquired from a car to evaluate our proposal at different frame
rates. We also test on a well known dataset: Caltech. The obtained results show
that our SSL proposal boosts detection accuracy significantly with a minimal
impact on the computational cost. Interestingly, SSL improves the accuracy most
in the most dangerous situations, i.e. when a pedestrian is close to the
camera.
| new_dataset | 0.971483 |
1407.2987 | Eren Golge | Eren Golge and Pinar Duygulu | FAME: Face Association through Model Evolution | Draft version of the study | null | null | null | cs.CV cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We attack the problem of learning face models for public faces from
weakly-labelled images collected from the web by querying a name. The data is
very noisy even after face detection, with several irrelevant faces
corresponding to other people. We propose a novel method, Face Association
through Model Evolution (FAME), that is able to prune the data in an iterative
way, for the face models associated with a name to evolve. The idea is based on
capturing discriminativeness and representativeness of each instance and
eliminating the outliers. The final models are used to classify faces on novel
datasets with possibly different characteristics. On benchmark datasets, our
results are comparable to or better than state-of-the-art studies for the task
of face identification.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 23:52:44 GMT"
}
] | 2014-07-14T00:00:00 | [
[
"Golge",
"Eren",
""
],
[
"Duygulu",
"Pinar",
""
]
] | TITLE: FAME: Face Association through Model Evolution
ABSTRACT: We attack the problem of learning face models for public faces from
weakly-labelled images collected from the web by querying a name. The data is
very noisy even after face detection, with several irrelevant faces
corresponding to other people. We propose a novel method, Face Association
through Model Evolution (FAME), that is able to prune the data in an iterative
way, for the face models associated with a name to evolve. The idea is based on
capturing discriminativeness and representativeness of each instance and
eliminating the outliers. The final models are used to classify faces on novel
datasets with possibly different characteristics. On benchmark datasets, our
results are comparable to or better than state-of-the-art studies for the task
of face identification.
| new_dataset | 0.949295 |
1407.2649 | Alican Bozkurt | Alican Bozkurt, Pinar Duygulu, A. Enis Cetin | Classifying Fonts and Calligraphy Styles Using Complex Wavelet Transform | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Recognizing fonts has become an important task in document analysis, due to
the increasing number of available digital documents in different fonts and
emphases. A generic font-recognition system independent of language, script and
content is desirable for processing various types of documents. At the same
time, categorizing calligraphy styles in handwritten manuscripts is important
for palaeographic analysis, but has not been studied sufficiently in the
literature. We address the font-recognition problem as analysis and
categorization of textures. We extract features using complex wavelet transform
and use support vector machines for classification. Extensive experimental
evaluations on different datasets in four languages and comparisons with
state-of-the-art studies show that our proposed method achieves higher
recognition accuracy while being computationally simpler. Furthermore, on a new
dataset generated from Ottoman manuscripts, we show that the proposed method
can also be used for categorizing Ottoman calligraphy with high accuracy.
| [
{
"version": "v1",
"created": "Wed, 9 Jul 2014 22:25:32 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Bozkurt",
"Alican",
""
],
[
"Duygulu",
"Pinar",
""
],
[
"Cetin",
"A. Enis",
""
]
] | TITLE: Classifying Fonts and Calligraphy Styles Using Complex Wavelet Transform
ABSTRACT: Recognizing fonts has become an important task in document analysis, due to
the increasing number of available digital documents in different fonts and
emphases. A generic font-recognition system independent of language, script and
content is desirable for processing various types of documents. At the same
time, categorizing calligraphy styles in handwritten manuscripts is important
for palaeographic analysis, but has not been studied sufficiently in the
literature. We address the font-recognition problem as analysis and
categorization of textures. We extract features using complex wavelet transform
and use support vector machines for classification. Extensive experimental
evaluations on different datasets in four languages and comparisons with
state-of-the-art studies show that our proposed method achieves higher
recognition accuracy while being computationally simpler. Furthermore, on a new
dataset generated from Ottoman manuscripts, we show that the proposed method
can also be used for categorizing Ottoman calligraphy with high accuracy.
| new_dataset | 0.960952 |
1407.2683 | Jiaxing Shang | Jiaxing Shang, Lianchen Liu, Feng Xie, Zhen Chen, Jiajia Miao, Xuelin
Fang, Cheng Wu | A Real-Time Detecting Algorithm for Tracking Community Structure of
Dynamic Networks | 9 pages, 6 figures, 3 tables, 6th SNA-KDD Workshop (2012) | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a simple but efficient real-time detecting algorithm is
proposed for tracking community structure of dynamic networks. Community
structure is intuitively characterized as divisions of network nodes into
subgroups, within which nodes are densely connected while between which they
are sparsely connected. To evaluate the quality of community structure of a
network, a metric called modularity is proposed and many algorithms are
developed on optimizing it. However, most of the modularity based algorithms
deal with static networks and cannot be performed frequently, due to their high
computing complexity. In order to track the community structure of dynamic
networks in a fine-grained way, we propose a modularity based algorithm that is
incremental and has very low computing complexity. In our algorithm we adopt a
two-step approach. Firstly we apply the algorithm of Blondel et al for
detecting static communities to obtain an initial community structure. Then,
apply our incremental updating strategies to track the dynamic communities. The
performance of our algorithm is measured in terms of the modularity. We test
the algorithm on tracking community structure of Enron Email and three other
real world datasets. The experimental results show that our algorithm can keep
track of community structure in time and outperform the well known CNM
algorithm in terms of modularity.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 04:08:29 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Shang",
"Jiaxing",
""
],
[
"Liu",
"Lianchen",
""
],
[
"Xie",
"Feng",
""
],
[
"Chen",
"Zhen",
""
],
[
"Miao",
"Jiajia",
""
],
[
"Fang",
"Xuelin",
""
],
[
"Wu",
"Cheng",
""
]
] | TITLE: A Real-Time Detecting Algorithm for Tracking Community Structure of
Dynamic Networks
ABSTRACT: In this paper a simple but efficient real-time detecting algorithm is
proposed for tracking community structure of dynamic networks. Community
structure is intuitively characterized as divisions of network nodes into
subgroups, within which nodes are densely connected while between which they
are sparsely connected. To evaluate the quality of community structure of a
network, a metric called modularity is proposed and many algorithms are
developed on optimizing it. However, most of the modularity based algorithms
deal with static networks and cannot be performed frequently, due to their high
computing complexity. In order to track the community structure of dynamic
networks in a fine-grained way, we propose a modularity based algorithm that is
incremental and has very low computing complexity. In our algorithm we adopt a
two-step approach. Firstly we apply the algorithm of Blondel et al for
detecting static communities to obtain an initial community structure. Then,
we apply our incremental updating strategies to track the dynamic communities. The
performance of our algorithm is measured in terms of the modularity. We test
the algorithm on tracking community structure of Enron Email and three other
real world datasets. The experimental results show that our algorithm can keep
track of community structure in time and outperform the well known CNM
algorithm in terms of modularity.
| no_new_dataset | 0.944944 |
1407.2697 | Aaron Defazio Mr | Aaron J. Defazio and Tiberio S. Caetano | A Convex Formulation for Learning Scale-Free Networks via Submodular
Relaxation | null | Advances in Neural Information Processing Systems 25 (NIPS 2012) | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key problem in statistics and machine learning is the determination of
network structure from data. We consider the case where the structure of the
graph to be reconstructed is known to be scale-free. We show that in such cases
it is natural to formulate structured sparsity inducing priors using submodular
functions, and we use their Lov\'asz extension to obtain a convex relaxation.
For tractable classes such as Gaussian graphical models, this leads to a convex
optimization problem that can be efficiently solved. We show that our method
results in an improvement in the accuracy of reconstructed networks for
synthetic data. We also show how our prior encourages scale-free
reconstructions on a bioinformatics dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 05:45:17 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Defazio",
"Aaron J.",
""
],
[
"Caetano",
"Tiberio S.",
""
]
] | TITLE: A Convex Formulation for Learning Scale-Free Networks via Submodular
Relaxation
ABSTRACT: A key problem in statistics and machine learning is the determination of
network structure from data. We consider the case where the structure of the
graph to be reconstructed is known to be scale-free. We show that in such cases
it is natural to formulate structured sparsity inducing priors using submodular
functions, and we use their Lov\'asz extension to obtain a convex relaxation.
For tractable classes such as Gaussian graphical models, this leads to a convex
optimization problem that can be efficiently solved. We show that our method
results in an improvement in the accuracy of reconstructed networks for
synthetic data. We also show how our prior encourages scale-free
reconstructions on a bioinformatics dataset.
| no_new_dataset | 0.945801 |
1407.2736 | Hima Patel | Ramasubramanian Sundararajan, Hima Patel, Manisha Srivastava | A multi-instance learning algorithm based on a stacked ensemble of lazy
learners | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/3.0/ | This document describes a novel learning algorithm that classifies "bags" of
instances rather than individual instances. A bag is labeled positive if it
contains at least one positive instance (which may or may not be specifically
identified), and negative otherwise. This class of problems is known as
multi-instance learning problems, and is useful in situations where the class
label at an instance level may be unavailable or imprecise or difficult to
obtain, or in situations where the problem is naturally posed as one of
classifying instance groups. The algorithm described here is an ensemble-based
method, wherein the members of the ensemble are lazy learning classifiers
learnt using the Citation Nearest Neighbour method. Diversity among the
ensemble members is achieved by optimizing their parameters using a
multi-objective optimization method, with the objectives being to maximize
Class 1 accuracy and minimize false positive rate. The method has been found to
be effective on the Musk1 benchmark dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 09:39:24 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Sundararajan",
"Ramasubramanian",
""
],
[
"Patel",
"Hima",
""
],
[
"Srivastava",
"Manisha",
""
]
] | TITLE: A multi-instance learning algorithm based on a stacked ensemble of lazy
learners
ABSTRACT: This document describes a novel learning algorithm that classifies "bags" of
instances rather than individual instances. A bag is labeled positive if it
contains at least one positive instance (which may or may not be specifically
identified), and negative otherwise. This class of problems is known as
multi-instance learning problems, and is useful in situations where the class
label at an instance level may be unavailable or imprecise or difficult to
obtain, or in situations where the problem is naturally posed as one of
classifying instance groups. The algorithm described here is an ensemble-based
method, wherein the members of the ensemble are lazy learning classifiers
learnt using the Citation Nearest Neighbour method. Diversity among the
ensemble members is achieved by optimizing their parameters using a
multi-objective optimization method, with the objectives being to maximize
Class 1 accuracy and minimize false positive rate. The method has been found to
be effective on the Musk1 benchmark dataset.
| no_new_dataset | 0.948394 |
1407.2806 | Preux Philippe | J\'er\'emie Mary (INRIA Lille - Nord Europe, LIFL), Romaric Gaudel
(INRIA Lille - Nord Europe, LIFL), Preux Philippe (INRIA Lille - Nord Europe,
LIFL) | Bandits Warm-up Cold Recommender Systems | null | null | null | RR-8563 | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the cold start problem in recommendation systems assuming no
contextual information is available about either users or items. We consider
the case in which we only have access to a set of ratings of items by users.
Most of the existing works consider a batch setting, and use cross-validation
to tune parameters. The classical method consists in minimizing the root mean
square error over a training subset of the ratings which provides a
factorization of the matrix of ratings, interpreted as a latent representation
of items and users. Our contribution in this paper is 5-fold. First, we
make explicit the issues raised by this kind of batch setting for users or items
with very few ratings. Then, we propose an online setting closer to the actual
use of recommender systems; this setting is inspired by the bandit framework.
The proposed methodology can be used to turn any recommender system dataset
(such as Netflix, MovieLens,...) into a sequential dataset. Then, we make explicit a
strong and insightful link between contextual bandit algorithms and matrix
factorization; this leads us to a new algorithm that tackles the
exploration/exploitation dilemma associated to the cold start problem in a
strikingly new perspective. Finally, experimental evidence confirms that our
algorithm is effective in dealing with the cold start problem on publicly
available datasets. Overall, the goal of this paper is to bridge the gap
between recommender systems based on matrix factorizations and those based on
contextual bandits.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 14:32:37 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Mary",
"Jérémie",
"",
"INRIA Lille - Nord Europe, LIFL"
],
[
"Gaudel",
"Romaric",
"",
"INRIA Lille - Nord Europe, LIFL"
],
[
"Philippe",
"Preux",
"",
"INRIA Lille - Nord Europe,\n LIFL"
]
] | TITLE: Bandits Warm-up Cold Recommender Systems
ABSTRACT: We address the cold start problem in recommendation systems assuming no
contextual information is available about either users or items. We consider
the case in which we only have access to a set of ratings of items by users.
Most of the existing works consider a batch setting, and use cross-validation
to tune parameters. The classical method consists in minimizing the root mean
square error over a training subset of the ratings which provides a
factorization of the matrix of ratings, interpreted as a latent representation
of items and users. Our contribution in this paper is 5-fold. First, we
make explicit the issues raised by this kind of batch setting for users or items
with very few ratings. Then, we propose an online setting closer to the actual
use of recommender systems; this setting is inspired by the bandit framework.
The proposed methodology can be used to turn any recommender system dataset
(such as Netflix, MovieLens,...) into a sequential dataset. Then, we make explicit a
strong and insightful link between contextual bandit algorithms and matrix
factorization; this leads us to a new algorithm that tackles the
exploration/exploitation dilemma associated to the cold start problem in a
strikingly new perspective. Finally, experimental evidence confirms that our
algorithm is effective in dealing with the cold start problem on publicly
available datasets. Overall, the goal of this paper is to bridge the gap
between recommender systems based on matrix factorizations and those based on
contextual bandits.
| no_new_dataset | 0.946151 |
1407.2889 | John-Alexander Assael | Charalampos S. Kouzinopoulos, John-Alexander M. Assael, Themistoklis
K. Pyrgiotis, and Konstantinos G. Margaritis | A Hybrid Parallel Implementation of the Aho-Corasick and Wu-Manber
Algorithms Using NVIDIA CUDA and MPI Evaluated on a Biological Sequence
Database | null | null | null | null | cs.DC cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple matching algorithms are used to locate the occurrences of patterns
from a finite pattern set in a large input string. Aho-Corasick and Wu-Manber,
two of the most well-known algorithms for multiple matching, require increased
computing power, particularly in cases where large-size datasets must
be processed, as is common in computational biology applications. Over the past
years, Graphics Processing Units (GPUs) have evolved to powerful parallel
processors outperforming Central Processing Units (CPUs) in scientific
calculations. Moreover, multiple GPUs can be used in parallel, forming hybrid
computer cluster configurations to achieve an even higher processing
throughput. This paper evaluates the speedup of the parallel implementation of
the Aho-Corasick and Wu-Manber algorithms on a hybrid GPU cluster, when used to
process a snapshot of the Expressed Sequence Tags of the human genome and for
different problem parameters.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 18:15:18 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Kouzinopoulos",
"Charalampos S.",
""
],
[
"Assael",
"John-Alexander M.",
""
],
[
"Pyrgiotis",
"Themistoklis K.",
""
],
[
"Margaritis",
"Konstantinos G.",
""
]
] | TITLE: A Hybrid Parallel Implementation of the Aho-Corasick and Wu-Manber
Algorithms Using NVIDIA CUDA and MPI Evaluated on a Biological Sequence
Database
ABSTRACT: Multiple matching algorithms are used to locate the occurrences of patterns
from a finite pattern set in a large input string. Aho-Corasick and Wu-Manber,
two of the most well-known algorithms for multiple matching, require increased
computing power, particularly in cases where large-size datasets must
be processed, as is common in computational biology applications. Over the past
years, Graphics Processing Units (GPUs) have evolved to powerful parallel
processors outperforming Central Processing Units (CPUs) in scientific
calculations. Moreover, multiple GPUs can be used in parallel, forming hybrid
computer cluster configurations to achieve an even higher processing
throughput. This paper evaluates the speedup of the parallel implementation of
the Aho-Corasick and Wu-Manber algorithms on a hybrid GPU cluster, when used to
process a snapshot of the Expressed Sequence Tags of the human genome and for
different problem parameters.
| no_new_dataset | 0.947235 |
1407.2899 | Gabriela Montoya | Gabriela Montoya (LINA), Hala Skaf-Molli (LINA), Pascal Molli (LINA),
Maria-Esther Vidal | Fedra: Query Processing for SPARQL Federations with Divergence | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data replication and deployment of local SPARQL endpoints improve scalability
and availability of public SPARQL endpoints, making the consumption of Linked
Data a reality. This solution requires synchronization and specific query
processing strategies to take advantage of replication. However, existing
replication aware techniques in federations of SPARQL endpoints do not consider
data dynamicity. We propose Fedra, an approach for querying federations of
endpoints that benefits from replication. Participants in Fedra federations can
copy fragments of data from several datasets, and describe them using
provenance and views. These descriptions enable Fedra to reduce the number of
selected endpoints while satisfying user divergence requirements. Experiments
on real-world datasets suggest savings of up to three orders of magnitude.
| [
{
"version": "v1",
"created": "Thu, 10 Jul 2014 18:39:47 GMT"
}
] | 2014-07-11T00:00:00 | [
[
"Montoya",
"Gabriela",
"",
"LINA"
],
[
"Skaf-Molli",
"Hala",
"",
"LINA"
],
[
"Molli",
"Pascal",
"",
"LINA"
],
[
"Vidal",
"Maria-Esther",
""
]
] | TITLE: Fedra: Query Processing for SPARQL Federations with Divergence
ABSTRACT: Data replication and deployment of local SPARQL endpoints improve scalability
and availability of public SPARQL endpoints, making the consumption of Linked
Data a reality. This solution requires synchronization and specific query
processing strategies to take advantage of replication. However, existing
replication aware techniques in federations of SPARQL endpoints do not consider
data dynamicity. We propose Fedra, an approach for querying federations of
endpoints that benefits from replication. Participants in Fedra federations can
copy fragments of data from several datasets, and describe them using
provenance and views. These descriptions enable Fedra to reduce the number of
selected endpoints while satisfying user divergence requirements. Experiments
on real-world datasets suggest savings of up to three orders of magnitude.
| no_new_dataset | 0.950319 |
1407.2220 | Brian Thompson | Graham Cormode, Qiang Ma, S. Muthukrishnan, Brian Thompson | Modeling Collaboration in Academia: A Game Theoretic Approach | Presented at the 1st WWW Workshop on Big Scholarly Data (2014). 6
pages, 5 figures | Proceedings of the Companion Publication of the 23rd International
Conference on World Wide Web (WWW 2014), pgs 1177-1182 | null | null | cs.SI cs.DL cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we aim to understand the mechanisms driving academic
collaboration. We begin by building a model for how researchers split their
effort between multiple papers, and how collaboration affects the number of
citations a paper receives, supported by observations from a large real-world
publication and citation dataset, which we call the h-Reinvestment model. Using
tools from the field of Game Theory, we study researchers' collaborative
behavior over time under this model, with the premise that each researcher
wants to maximize his or her academic success. We find analytically that there
is a strong incentive to collaborate rather than work in isolation, and that
studying collaborative behavior through a game-theoretic lens is a promising
approach to help us better understand the nature and dynamics of academic
collaboration.
| [
{
"version": "v1",
"created": "Tue, 8 Jul 2014 19:09:31 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Jul 2014 04:34:58 GMT"
}
] | 2014-07-10T00:00:00 | [
[
"Cormode",
"Graham",
""
],
[
"Ma",
"Qiang",
""
],
[
"Muthukrishnan",
"S.",
""
],
[
"Thompson",
"Brian",
""
]
] | TITLE: Modeling Collaboration in Academia: A Game Theoretic Approach
ABSTRACT: In this work, we aim to understand the mechanisms driving academic
collaboration. We begin by building a model for how researchers split their
effort between multiple papers, and how collaboration affects the number of
citations a paper receives, supported by observations from a large real-world
publication and citation dataset, which we call the h-Reinvestment model. Using
tools from the field of Game Theory, we study researchers' collaborative
behavior over time under this model, with the premise that each researcher
wants to maximize his or her academic success. We find analytically that there
is a strong incentive to collaborate rather than work in isolation, and that
studying collaborative behavior through a game-theoretic lens is a promising
approach to help us better understand the nature and dynamics of academic
collaboration.
| no_new_dataset | 0.950365 |
1407.1976 | Shanta Phani | Shanta Phani, Shibamouli Lahiri and Arindam Biswas | Inter-Rater Agreement Study on Readability Assessment in Bengali | 6 pages, 4 tables, Accepted in ICCONAC, 2014 | International Journal on Natural Language Computing (IJNLC), 3(3),
2014 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An inter-rater agreement study is performed for readability assessment in
Bengali. A 1-7 rating scale was used to indicate different levels of
readability. We obtained moderate to fair agreement among seven independent
annotators on 30 text passages written by four eminent Bengali authors. As a by
product of our study, we obtained a readability-annotated ground truth dataset
in Bengali. .
| [
{
"version": "v1",
"created": "Tue, 8 Jul 2014 07:35:16 GMT"
}
] | 2014-07-09T00:00:00 | [
[
"Phani",
"Shanta",
""
],
[
"Lahiri",
"Shibamouli",
""
],
[
"Biswas",
"Arindam",
""
]
] | TITLE: Inter-Rater Agreement Study on Readability Assessment in Bengali
ABSTRACT: An inter-rater agreement study is performed for readability assessment in
Bengali. A 1-7 rating scale was used to indicate different levels of
readability. We obtained moderate to fair agreement among seven independent
annotators on 30 text passages written by four eminent Bengali authors. As a
by-product of our study, we obtained a readability-annotated ground truth
dataset in Bengali.
| new_dataset | 0.959459 |
1407.2107 | Raghu Machiraju | Hao Ding, Chao Wang, Kun Huang and Raghu Machiraju | iGPSe: A Visual Analytic System for Integrative Genomic Based Cancer
Patient Stratification | BioVis 2014 conference | null | null | null | cs.GR cs.HC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Cancers are highly heterogeneous with different subtypes. These
subtypes often possess different genetic variants, present different
pathological phenotypes, and most importantly, show various clinical outcomes
such as varied prognosis and response to treatment and likelihood for
recurrence and metastasis. Recently, integrative genomics (or panomics)
approaches are often adopted with the goal of combining multiple types of omics
data to identify integrative biomarkers for stratification of patients into
groups with different clinical outcomes. Results: In this paper we present a
visual analytic system called Interactive Genomics Patient Stratification
explorer (iGPSe) which significantly reduces the computing burden for
biomedical researchers in the process of exploring complicated integrative
genomics data. Our system integrates unsupervised clustering with graph and
parallel sets visualization and allows direct comparison of clinical outcomes
via survival analysis. Using a breast cancer dataset obtained from The
Cancer Genome Atlas (TCGA) project, we are able to quickly explore different
combinations of gene expression (mRNA) and microRNA features and identify
potential combined markers for survival prediction. Conclusions: Visualization
plays an important role in the process of stratifying a given population of
patients. Visual tools allowed for the selection of possible features across
various datasets for the given patient population. We essentially made a case
for visualization for a very important problem in translational informatics.
| [
{
"version": "v1",
"created": "Tue, 8 Jul 2014 14:30:15 GMT"
}
] | 2014-07-09T00:00:00 | [
[
"Ding",
"Hao",
""
],
[
"Wang",
"Chao",
""
],
[
"Huang",
"Kun",
""
],
[
"Machiraju",
"Raghu",
""
]
] | TITLE: iGPSe: A Visual Analytic System for Integrative Genomic Based Cancer
Patient Stratification
ABSTRACT: Background: Cancers are highly heterogeneous with different subtypes. These
subtypes often possess different genetic variants, present different
pathological phenotypes, and most importantly, show various clinical outcomes
such as varied prognosis and response to treatment and likelihood for
recurrence and metastasis. Recently, integrative genomics (or panomics)
approaches are often adopted with the goal of combining multiple types of omics
data to identify integrative biomarkers for stratification of patients into
groups with different clinical outcomes. Results: In this paper we present a
visual analytic system called Interactive Genomics Patient Stratification
explorer (iGPSe) which significantly reduces the computing burden for
biomedical researchers in the process of exploring complicated integrative
genomics data. Our system integrates unsupervised clustering with graph and
parallel sets visualization and allows direct comparison of clinical outcomes
via survival analysis. Using a breast cancer dataset obtained from The
Cancer Genome Atlas (TCGA) project, we are able to quickly explore different
combinations of gene expression (mRNA) and microRNA features and identify
potential combined markers for survival prediction. Conclusions: Visualization
plays an important role in the process of stratifying a given population of
patients. Visual tools allowed for the selection of possible features across
various datasets for the given patient population. We essentially made a case
for visualization for a very important problem in translational informatics.
| no_new_dataset | 0.951369 |
1404.1777 | Victor Lempitsky | Artem Babenko, Anton Slesarev, Alexandr Chigorin and Victor Lempitsky | Neural Codes for Image Retrieval | to appear at ECCV 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been shown that the activations invoked by an image within the top
layers of a large convolutional neural network provide a high-level descriptor
of the visual content of the image. In this paper, we investigate the use of
such descriptors (neural codes) within the image retrieval application. In the
experiments with several standard retrieval benchmarks, we establish that
neural codes perform competitively even when the convolutional neural network
has been trained for an unrelated classification task (e.g.\ Image-Net). We
also evaluate the improvement in the retrieval performance of neural codes,
when the network is retrained on a dataset of images that are similar to images
encountered at test time.
We further evaluate the performance of the compressed neural codes and show
that a simple PCA compression provides very good short codes that give
state-of-the-art accuracy on a number of datasets. In general, neural codes
turn out to be much more resilient to such compression in comparison to other
state-of-the-art descriptors. Finally, we show that discriminative
dimensionality reduction trained on a dataset of pairs of matched photographs
improves the performance of PCA-compressed neural codes even further. Overall,
our quantitative experiments demonstrate the promise of neural codes as visual
descriptors for image retrieval.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2014 13:08:08 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Jul 2014 07:51:04 GMT"
}
] | 2014-07-08T00:00:00 | [
[
"Babenko",
"Artem",
""
],
[
"Slesarev",
"Anton",
""
],
[
"Chigorin",
"Alexandr",
""
],
[
"Lempitsky",
"Victor",
""
]
] | TITLE: Neural Codes for Image Retrieval
ABSTRACT: It has been shown that the activations invoked by an image within the top
layers of a large convolutional neural network provide a high-level descriptor
of the visual content of the image. In this paper, we investigate the use of
such descriptors (neural codes) within the image retrieval application. In the
experiments with several standard retrieval benchmarks, we establish that
neural codes perform competitively even when the convolutional neural network
has been trained for an unrelated classification task (e.g.\ Image-Net). We
also evaluate the improvement in the retrieval performance of neural codes,
when the network is retrained on a dataset of images that are similar to images
encountered at test time.
We further evaluate the performance of the compressed neural codes and show
that a simple PCA compression provides very good short codes that give
state-of-the-art accuracy on a number of datasets. In general, neural codes
turn out to be much more resilient to such compression in comparison to other
state-of-the-art descriptors. Finally, we show that discriminative
dimensionality reduction trained on a dataset of pairs of matched photographs
improves the performance of PCA-compressed neural codes even further. Overall,
our quantitative experiments demonstrate the promise of neural codes as visual
descriptors for image retrieval.
| no_new_dataset | 0.942981 |