Column          Type          Range / values
--------------  ------------  ------------------------------------------
id              string        length 9 to 16
submitter       string        length 3 to 64
authors         string        length 5 to 6.63k
title           string        length 7 to 245
comments        string        length 1 to 482
journal-ref     string        length 4 to 382
doi             string        length 9 to 151
report-no       string        984 distinct values
categories      string        length 5 to 108
license         string        9 distinct values
abstract        string        length 83 to 3.41k
versions        list          length 1 to 20
update_date     timestamp[s]  2007-05-23 00:00:00 to 2025-04-11 00:00:00
authors_parsed  sequence      length 1 to 427
prompt          string        length 166 to 3.49k
label           string        2 distinct values
prob            float64       0.5 to 0.98
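For working with these records programmatically, here is a minimal sketch of loading and filtering a dataset with this schema via the Hugging Face `datasets` library. The Hub id `user/arxiv-new-dataset-labels` is a placeholder (the dump does not say where the data is hosted), and reading `prob` as the confidence attached to `label` is an assumption inferred from its 0.5 to 0.98 range.

```python
# A minimal sketch, assuming the records are published as a Hugging Face
# dataset. The Hub id below is hypothetical; substitute the real path.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

print(ds.features)  # should list id, submitter, ..., prompt, label, prob
row = ds[0]
print(row["title"], row["label"], row["prob"])

# `prob` appears to be the classifier's confidence in `label` (an
# assumption; the dump does not document its semantics, only its range).
confident = ds.filter(lambda r: r["prob"] >= 0.9)
print(f"{len(confident)} of {len(ds)} records have prob >= 0.9")
```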
id: 1409.7935
submitter: Lior Shamir
authors: Evan Kuminski, Joe George, John Wallin, Lior Shamir
title: Combining human and machine learning for morphological analysis of galaxy images
comments: PASP, accepted
journal-ref: null
doi: 10.1086/678977
report-no: null
categories: astro-ph.IM astro-ph.GA cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing importance of digital sky surveys collecting many millions of galaxy images has reinforced the need for robust methods that can perform morphological analysis of large galaxy image databases. Citizen science initiatives such as Galaxy Zoo showed that large datasets of galaxy images can be analyzed effectively by non-scientist volunteers, but since databases generated by robotic telescopes grow much faster than the processing power of any group of citizen scientists, it is clear that computer analysis is required. Here we propose to use citizen science data for training machine learning systems, and show experimental results demonstrating that machine learning systems can be trained with citizen science data. Our findings show that the performance of machine learning depends on the quality of the data, which can be improved by using samples that have a high degree of agreement between the citizen scientists. The source code of the method is publicly available.
[ { "version": "v1", "created": "Sun, 28 Sep 2014 17:47:35 GMT" } ]
2015-06-23T00:00:00
[ [ "Kuminski", "Evan", "" ], [ "George", "Joe", "" ], [ "Wallin", "John", "" ], [ "Shamir", "Lior", "" ] ]
TITLE: Combining human and machine learning for morphological analysis of galaxy images ABSTRACT: The increasing importance of digital sky surveys collecting many millions of galaxy images has reinforced the need for robust methods that can perform morphological analysis of large galaxy image databases. Citizen science initiatives such as Galaxy Zoo showed that large datasets of galaxy images can be analyzed effectively by non-scientist volunteers, but since databases generated by robotic telescopes grow much faster than the processing power of any group of citizen scientists, it is clear that computer analysis is required. Here we propose to use citizen science data for training machine learning systems, and show experimental results demonstrating that machine learning systems can be trained with citizen science data. Our findings show that the performance of machine learning depends on the quality of the data, which can be improved by using samples that have a high degree of agreement between the citizen scientists. The source code of the method is publicly available.
label: no_new_dataset
prob: 0.947914
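As the first record shows, the prompt field is evidently just the title and abstract joined with fixed TITLE:/ABSTRACT: markers. A minimal sketch of that mapping, assuming the single-space separator read off the records in this dump:

```python
def build_prompt(record: dict) -> str:
    # Pattern inferred from the records above:
    # "TITLE: <title> ABSTRACT: <abstract>" (the separator is an assumption).
    return f"TITLE: {record['title']} ABSTRACT: {record['abstract']}"

# Spot-check against record 1409.7935 above (abstract truncated here):
r = {
    "title": "Combining human and machine learning for morphological analysis of galaxy images",
    "abstract": "The increasing importance of digital sky surveys ...",
}
assert build_prompt(r).startswith("TITLE: Combining human and machine learning")
```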
id: 1410.1257
submitter: Abhronil Sengupta
authors: Abhronil Sengupta, Sri Harsha Choday, Yusung Kim, and Kaushik Roy
title: Spin Orbit Torque Based Electronic Neuron
comments: null
journal-ref: null
doi: 10.1063/1.4917011
report-no: null
categories: cs.ET
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A device based on current-induced spin-orbit torque (SOT) that functions as an electronic neuron is proposed in this work. The SOT device implements an artificial neuron's thresholding (transfer) function. In the first step of a two-step switching scheme, a charge current places the magnetization of a nano-magnet along the hard-axis i.e. an unstable point for the magnet. In the second step, the SOT device (neuron) receives a current (from the synapses) which moves the magnetization from the unstable point to one of the two stable states. The polarity of the synaptic current encodes the excitatory and inhibitory nature of the neuron input, and determines the final orientation of the magnetization. A resistive crossbar array, functioning as synapses, generates a bipolar current that is a weighted sum of the inputs. The simulation of a two layer feed-forward Artificial Neural Network (ANN) based on the SOT electronic neuron shows that it consumes ~3X lower power than a 45nm digital CMOS implementation, while reaching ~80% accuracy in the classification of one hundred images of handwritten digits from the MNIST dataset.
[ { "version": "v1", "created": "Mon, 6 Oct 2014 05:36:19 GMT" } ]
2015-06-23T00:00:00
[ [ "Sengupta", "Abhronil", "" ], [ "Choday", "Sri Harsha", "" ], [ "Kim", "Yusung", "" ], [ "Roy", "Kaushik", "" ] ]
TITLE: Spin Orbit Torque Based Electronic Neuron ABSTRACT: A device based on current-induced spin-orbit torque (SOT) that functions as an electronic neuron is proposed in this work. The SOT device implements an artificial neuron's thresholding (transfer) function. In the first step of a two-step switching scheme, a charge current places the magnetization of a nano-magnet along the hard-axis i.e. an unstable point for the magnet. In the second step, the SOT device (neuron) receives a current (from the synapses) which moves the magnetization from the unstable point to one of the two stable states. The polarity of the synaptic current encodes the excitatory and inhibitory nature of the neuron input, and determines the final orientation of the magnetization. A resistive crossbar array, functioning as synapses, generates a bipolar current that is a weighted sum of the inputs. The simulation of a two layer feed-forward Artificial Neural Network (ANN) based on the SOT electronic neuron shows that it consumes ~3X lower power than a 45nm digital CMOS implementation, while reaching ~80% accuracy in the classification of one hundred images of handwritten digits from the MNIST dataset.
label: no_new_dataset
prob: 0.958538
id: 1411.1343
submitter: Giuseppe Cataldo
authors: Giuseppe Cataldo, Edward J. Wollack, Emily M. Barrentine, Ari D. Brown, Samuel H. Moseley, and Kongpop U-Yen
title: Analysis and calibration techniques for superconducting resonators
comments: 12 pages, 4 figures
journal-ref: null
doi: 10.1063/1.4904972
report-no: null
categories: astro-ph.IM physics.ins-det
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A method is proposed and experimentally explored for in-situ calibration of complex transmission data for superconducting microwave resonators. This cryogenic calibration method accounts for the instrumental transmission response between the vector network analyzer reference plane and the device calibration plane. Once calibrated, the observed resonator response is analyzed in detail by two approaches. The first, a phenomenological model based on physically realizable rational functions, enables the extraction of multiple resonance frequencies and widths for coupled resonators without explicit specification of the circuit network. In the second, an ABCD-matrix representation for the distributed transmission line circuit is used to model the observed response from the characteristic impedance and propagation constant. When used in conjunction with electromagnetic simulations, the kinetic inductance fraction can be determined with this method with an accuracy of 2%. Datasets for superconducting microstrip and coplanar-waveguide resonator devices were investigated and a recovery within 1% of the observed complex transmission amplitude was achieved with both analysis approaches. The experimental configuration used in microwave characterization of the devices and self-consistent constraints for the electromagnetic constitutive relations for parameter extraction are also presented.
[ { "version": "v1", "created": "Wed, 5 Nov 2014 17:54:42 GMT" }, { "version": "v2", "created": "Mon, 17 Nov 2014 21:40:39 GMT" }, { "version": "v3", "created": "Thu, 4 Dec 2014 20:00:08 GMT" } ]
2015-06-23T00:00:00
[ [ "Cataldo", "Giuseppe", "" ], [ "Wollack", "Edward J.", "" ], [ "Barrentine", "Emily M.", "" ], [ "Brown", "Ari D.", "" ], [ "Moseley", "Samuel H.", "" ], [ "U-Yen", "Kongpop", "" ] ]
TITLE: Analysis and calibration techniques for superconducting resonators ABSTRACT: A method is proposed and experimentally explored for in-situ calibration of complex transmission data for superconducting microwave resonators. This cryogenic calibration method accounts for the instrumental transmission response between the vector network analyzer reference plane and the device calibration plane. Once calibrated, the observed resonator response is analyzed in detail by two approaches. The first, a phenomenological model based on physically realizable rational functions, enables the extraction of multiple resonance frequencies and widths for coupled resonators without explicit specification of the circuit network. In the second, an ABCD-matrix representation for the distributed transmission line circuit is used to model the observed response from the characteristic impedance and propagation constant. When used in conjunction with electromagnetic simulations, the kinetic inductance fraction can be determined with this method with an accuracy of 2%. Datasets for superconducting microstrip and coplanar-waveguide resonator devices were investigated and a recovery within 1% of the observed complex transmission amplitude was achieved with both analysis approaches. The experimental configuration used in microwave characterization of the devices and self-consistent constraints for the electromagnetic constitutive relations for parameter extraction are also presented.
label: no_new_dataset
prob: 0.953057
id: 1412.5027
submitter: Ali Borji
authors: Ali Borji
title: What is a salient object? A dataset and a baseline model for salient object detection
comments: IEEE Transactions on Image Processing, 2014
journal-ref: null
doi: 10.1109/TIP.2014.2383320
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Salient object detection or salient region detection models, diverging from fixation prediction models, have traditionally been dealing with locating and segmenting the most salient object or region in a scene. While the notion of most salient object is sensible when multiple objects exist in a scene, current datasets for evaluation of saliency detection approaches often have scenes with only one single object. We introduce three main contributions in this paper: First, we take an indepth look at the problem of salient object detection by studying the relationship between where people look in scenes and what they choose as the most salient object when they are explicitly asked. Based on the agreement between fixations and saliency judgments, we then suggest that the most salient object is the one that attracts the highest fraction of fixations. Second, we provide two new less biased benchmark datasets containing scenes with multiple objects that challenge existing saliency models. Indeed, we observed a severe drop in performance of 8 state-of-the-art models on our datasets (40% to 70%). Third, we propose a very simple yet powerful model based on superpixels to be used as a baseline for model evaluation and comparison. While on par with the best models on MSRA-5K dataset, our model wins over other models on our data highlighting a serious drawback of existing models, which is convoluting the processes of locating the most salient object and its segmentation. We also provide a review and statistical analysis of some labeled scene datasets that can be used for evaluating salient object detection models. We believe that our work can greatly help remedy the over-fitting of models to existing biased datasets and opens new venues for future research in this fast-evolving field.
[ { "version": "v1", "created": "Mon, 8 Dec 2014 23:51:50 GMT" } ]
2015-06-23T00:00:00
[ [ "Borji", "Ali", "" ] ]
TITLE: What is a salient object? A dataset and a baseline model for salient object detection ABSTRACT: Salient object detection or salient region detection models, diverging from fixation prediction models, have traditionally been dealing with locating and segmenting the most salient object or region in a scene. While the notion of most salient object is sensible when multiple objects exist in a scene, current datasets for evaluation of saliency detection approaches often have scenes with only one single object. We introduce three main contributions in this paper: First, we take an indepth look at the problem of salient object detection by studying the relationship between where people look in scenes and what they choose as the most salient object when they are explicitly asked. Based on the agreement between fixations and saliency judgments, we then suggest that the most salient object is the one that attracts the highest fraction of fixations. Second, we provide two new less biased benchmark datasets containing scenes with multiple objects that challenge existing saliency models. Indeed, we observed a severe drop in performance of 8 state-of-the-art models on our datasets (40% to 70%). Third, we propose a very simple yet powerful model based on superpixels to be used as a baseline for model evaluation and comparison. While on par with the best models on MSRA-5K dataset, our model wins over other models on our data highlighting a serious drawback of existing models, which is convoluting the processes of locating the most salient object and its segmentation. We also provide a review and statistical analysis of some labeled scene datasets that can be used for evaluating salient object detection models. We believe that our work can greatly help remedy the over-fitting of models to existing biased datasets and opens new venues for future research in this fast-evolving field.
label: new_dataset
prob: 0.926503
id: 1412.7156
submitter: Ludovic Denoyer
authors: Gabriella Contardo and Ludovic Denoyer and Thierry Artieres
title: Representation Learning for cold-start recommendation
comments: Accepted as workshop contribution at ICLR 2015
journal-ref: null
doi: null
report-no: null
categories: cs.IR cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A standard approach to Collaborative Filtering (CF), i.e. prediction of user ratings on items, relies on Matrix Factorization techniques. Representations for both users and items are computed from the observed ratings and used for prediction. Unfortunately, these transductive approaches cannot handle the case of new users arriving in the system, with no known rating, a problem known as user cold-start. A common approach in this context is to ask these incoming users for a few initialization ratings. This paper presents a model to tackle this twofold problem of (i) finding good questions to ask, (ii) building efficient representations from this small amount of information. The model can also be used in a more standard (warm) context. Our approach is evaluated on the classical CF problem and on the cold-start problem on four different datasets showing its ability to improve baseline performance in both cases.
[ { "version": "v1", "created": "Mon, 22 Dec 2014 21:58:06 GMT" }, { "version": "v2", "created": "Fri, 27 Feb 2015 18:56:23 GMT" }, { "version": "v3", "created": "Fri, 27 Mar 2015 09:59:25 GMT" }, { "version": "v4", "created": "Wed, 8 Apr 2015 15:37:19 GMT" }, { "version": "v5", "created": "Mon, 22 Jun 2015 14:01:33 GMT" } ]
2015-06-23T00:00:00
[ [ "Contardo", "Gabriella", "" ], [ "Denoyer", "Ludovic", "" ], [ "Artieres", "Thierry", "" ] ]
TITLE: Representation Learning for cold-start recommendation ABSTRACT: A standard approach to Collaborative Filtering (CF), i.e. prediction of user ratings on items, relies on Matrix Factorization techniques. Representations for both users and items are computed from the observed ratings and used for prediction. Unfortunately, these transductive approaches cannot handle the case of new users arriving in the system, with no known rating, a problem known as user cold-start. A common approach in this context is to ask these incoming users for a few initialization ratings. This paper presents a model to tackle this twofold problem of (i) finding good questions to ask, (ii) building efficient representations from this small amount of information. The model can also be used in a more standard (warm) context. Our approach is evaluated on the classical CF problem and on the cold-start problem on four different datasets showing its ability to improve baseline performance in both cases.
label: no_new_dataset
prob: 0.947721
id: 1501.03252
submitter: Ali Ajmi
authors: Ali Ajmi, S. Uma Sankar
title: Muonless Events in ICAL at INO
comments: 21 pages, 6 figures
journal-ref: null
doi: 10.1088/1748-0221/10/04/P04006
report-no: null
categories: physics.ins-det hep-ex
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primary physics signal events in the ICAL at INO are the ${\nu}_{\mu}$ charged current (CC) interactions with a well defined muon track. Apart from these events, ICAL can also detect other types of neutrino interactions, i.e. the electron neutrino charged current interactions and the neutral current events. It is possible to have a dataset containing mostly ${\nu}_e$CC events, by imposing appropriate selection cuts on the events. The ${\nu}_{\mu}$ CC and the neutral current events form the background to these events. This study uses the Monte Carlo generated neutrino events, to design the necessary selection cuts to obtain a ${\nu}_e$ CC rich dataset. An optimized set of constraints are developed which balance the need for improving the purity of the sample and having a large enough event sample. Depending on the constraints used, one can obtain a neutrino data sample, with the purity of ${\nu}_e$ events varying between 55% to 70%.
[ { "version": "v1", "created": "Wed, 14 Jan 2015 05:34:11 GMT" } ]
2015-06-23T00:00:00
[ [ "Ajmi", "Ali", "" ], [ "Sankar", "S. Uma", "" ] ]
TITLE: Muonless Events in ICAL at INO ABSTRACT: The primary physics signal events in the ICAL at INO are the ${\nu}_{\mu}$ charged current (CC) interactions with a well defined muon track. Apart from these events, ICAL can also detect other types of neutrino interactions, i.e. the electron neutrino charged current interactions and the neutral current events. It is possible to have a dataset containing mostly ${\nu}_e$CC events, by imposing appropriate selection cuts on the events. The ${\nu}_{\mu}$ CC and the neutral current events form the background to these events. This study uses the Monte Carlo generated neutrino events, to design the necessary selection cuts to obtain a ${\nu}_e$ CC rich dataset. An optimized set of constraints are developed which balance the need for improving the purity of the sample and having a large enough event sample. Depending on the constraints used, one can obtain a neutrino data sample, with the purity of ${\nu}_e$ events varying between 55% to 70%.
label: no_new_dataset
prob: 0.90389
id: 1501.06952
submitter: Brendon Brewer
authors: Brendon J. Brewer, Courtney P. Donovan
title: Fast Bayesian Inference for Exoplanet Discovery in Radial Velocity Data
comments: Accepted for publication in MNRAS. 9 pages, 12 figures. Code at http://www.github.com/eggplantbren/Exoplanet
journal-ref: null
doi: 10.1093/mnras/stv199
report-no: null
categories: astro-ph.IM astro-ph.EP physics.data-an stat.AP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring the number of planets $N$ in an exoplanetary system from radial velocity (RV) data is a challenging task. Recently, it has become clear that RV data can contain periodic signals due to stellar activity, which can be difficult to distinguish from planetary signals. However, even doing the inference under a given set of simplifying assumptions (e.g. no stellar activity) can be difficult. It is common for the posterior distribution for the planet parameters, such as orbital periods, to be multimodal and to have other awkward features. In addition, when $N$ is unknown, the marginal likelihood (or evidence) as a function of $N$ is required. Rather than doing separate runs with different trial values of $N$, we propose an alternative approach using a trans-dimensional Markov Chain Monte Carlo method within Nested Sampling. The posterior distribution for $N$ can be obtained with a single run. We apply the method to $\nu$ Oph and Gliese 581, finding moderate evidence for additional signals in $\nu$ Oph with periods of 36.11 $\pm$ 0.034 days, 75.58 $\pm$ 0.80 days, and 1709 $\pm$ 183 days; the posterior probability that at least one of these exists is 85%. The results also suggest Gliese 581 hosts many (7-15) "planets" (or other causes of other periodic signals), but only 4-6 have well determined periods. The analysis of both of these datasets shows phase transitions exist which are difficult to negotiate without Nested Sampling.
[ { "version": "v1", "created": "Tue, 27 Jan 2015 22:54:14 GMT" } ]
2015-06-23T00:00:00
[ [ "Brewer", "Brendon J.", "" ], [ "Donovan", "Courtney P.", "" ] ]
TITLE: Fast Bayesian Inference for Exoplanet Discovery in Radial Velocity Data ABSTRACT: Inferring the number of planets $N$ in an exoplanetary system from radial velocity (RV) data is a challenging task. Recently, it has become clear that RV data can contain periodic signals due to stellar activity, which can be difficult to distinguish from planetary signals. However, even doing the inference under a given set of simplifying assumptions (e.g. no stellar activity) can be difficult. It is common for the posterior distribution for the planet parameters, such as orbital periods, to be multimodal and to have other awkward features. In addition, when $N$ is unknown, the marginal likelihood (or evidence) as a function of $N$ is required. Rather than doing separate runs with different trial values of $N$, we propose an alternative approach using a trans-dimensional Markov Chain Monte Carlo method within Nested Sampling. The posterior distribution for $N$ can be obtained with a single run. We apply the method to $\nu$ Oph and Gliese 581, finding moderate evidence for additional signals in $\nu$ Oph with periods of 36.11 $\pm$ 0.034 days, 75.58 $\pm$ 0.80 days, and 1709 $\pm$ 183 days; the posterior probability that at least one of these exists is 85%. The results also suggest Gliese 581 hosts many (7-15) "planets" (or other causes of other periodic signals), but only 4-6 have well determined periods. The analysis of both of these datasets shows phase transitions exist which are difficult to negotiate without Nested Sampling.
label: no_new_dataset
prob: 0.942718
id: 1504.06044
submitter: Radoslaw Klimek
authors: Radoslaw Klimek and Leszek Kotulski
title: Towards a better understanding and behavior recognition of inhabitants in smart cities. A public transport case
comments: Proceedings of 14th International Conference on Artificial Intelligence and Soft Computing (ICAISC 2015), 14-18 June, 2015, Zakopane, Poland; Lecture Notes in Computer Science, vol. 9120, pp. 237-246. Springer Verlag 2015
journal-ref: null
doi: 10.1007/978-3-319-19369-4_22
report-no: null
categories: cs.CY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The idea of modern urban systems and smart cities requires monitoring and careful analysis of different signals. Such signals can originate from different sources and one of the most promising is the BTS, i.e. base transceiver station, an element of mobile carrier networks. This paper presents the fundamental problems of elicitation, classification and understanding of such signals so as to develop context-aware and pro-active systems in urban areas. These systems are characterized by the omnipresence of computing which is strongly focused on providing on-line support to users/inhabitants of smart cities. A method of analyzing selected elements of mobile phone datasets through understanding inhabitants' behavioral fingerprints to obtain smart scenarios for public transport is proposed. Some scenarios are outlined. A multi-agent system is proposed. A formalism based on graphs that allows reasoning about inhabitant behaviors is also proposed.
[ { "version": "v1", "created": "Thu, 23 Apr 2015 04:57:50 GMT" }, { "version": "v2", "created": "Sat, 20 Jun 2015 12:59:53 GMT" } ]
2015-06-23T00:00:00
[ [ "Klimek", "Radoslaw", "" ], [ "Kotulski", "Leszek", "" ] ]
TITLE: Towards a better understanding and behavior recognition of inhabitants in smart cities. A public transport case ABSTRACT: The idea of modern urban systems and smart cities requires monitoring and careful analysis of different signals. Such signals can originate from different sources and one of the most promising is the BTS, i.e. base transceiver station, an element of mobile carrier networks. This paper presents the fundamental problems of elicitation, classification and understanding of such signals so as to develop context-aware and pro-active systems in urban areas. These systems are characterized by the omnipresence of computing which is strongly focused on providing on-line support to users/inhabitants of smart cities. A method of analyzing selected elements of mobile phone datasets through understanding inhabitants' behavioral fingerprints to obtain smart scenarios for public transport is proposed. Some scenarios are outlined. A multi-agent system is proposed. A formalism based on graphs that allows reasoning about inhabitant behaviors is also proposed.
label: no_new_dataset
prob: 0.945551
id: 1505.00359
submitter: Harm de Vries
authors: Harm de Vries, Jason Yosinski
title: Can deep learning help you find the perfect match?
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Is he/she my type or not? The answer to this question depends on the personal preferences of the one asking it. The individual process of obtaining a full answer may generally be difficult and time consuming, but often an approximate answer can be obtained simply by looking at a photo of the potential match. Such approximate answers based on visual cues can be produced in a fraction of a second, a phenomenon that has led to a series of recently successful dating apps in which users rate others positively or negatively using primarily a single photo. In this paper we explore using convolutional networks to create a model of an individual's personal preferences based on rated photos. This introduced task is difficult due to the large number of variations in profile pictures and the noise in attractiveness labels. Toward this task we collect a dataset comprised of $9364$ pictures and binary labels for each. We compare performance of convolutional models trained in three ways: first directly on the collected dataset, second with features transferred from a network trained to predict gender, and third with features transferred from a network trained on ImageNet. Our findings show that ImageNet features transfer best, producing a model that attains $68.1\%$ accuracy on the test set and is moderately successful at predicting matches.
[ { "version": "v1", "created": "Sat, 2 May 2015 17:20:23 GMT" }, { "version": "v2", "created": "Sat, 20 Jun 2015 15:41:45 GMT" } ]
2015-06-23T00:00:00
[ [ "de Vries", "Harm", "" ], [ "Yosinski", "Jason", "" ] ]
TITLE: Can deep learning help you find the perfect match? ABSTRACT: Is he/she my type or not? The answer to this question depends on the personal preferences of the one asking it. The individual process of obtaining a full answer may generally be difficult and time consuming, but often an approximate answer can be obtained simply by looking at a photo of the potential match. Such approximate answers based on visual cues can be produced in a fraction of a second, a phenomenon that has led to a series of recently successful dating apps in which users rate others positively or negatively using primarily a single photo. In this paper we explore using convolutional networks to create a model of an individual's personal preferences based on rated photos. This introduced task is difficult due to the large number of variations in profile pictures and the noise in attractiveness labels. Toward this task we collect a dataset comprised of $9364$ pictures and binary labels for each. We compare performance of convolutional models trained in three ways: first directly on the collected dataset, second with features transferred from a network trained to predict gender, and third with features transferred from a network trained on ImageNet. Our findings show that ImageNet features transfer best, producing a model that attains $68.1\%$ accuracy on the test set and is moderately successful at predicting matches.
label: new_dataset
prob: 0.960361
id: 1506.06272
submitter: Fei Sha
authors: Junqi Jin, Kun Fu, Runpeng Cui, Fei Sha and Changshui Zhang
title: Aligning where to see and what to tell: image caption with region-based attention and scene factorization
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image caption system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifting among the visual regions imposes a thread of visual ordering. This alignment characterizes the flow of "abstract meaning", encoding what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast to published results on several popular datasets. We show that using either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
[ { "version": "v1", "created": "Sat, 20 Jun 2015 17:25:38 GMT" } ]
2015-06-23T00:00:00
[ [ "Jin", "Junqi", "" ], [ "Fu", "Kun", "" ], [ "Cui", "Runpeng", "" ], [ "Sha", "Fei", "" ], [ "Zhang", "Changshui", "" ] ]
TITLE: Aligning where to see and what to tell: image caption with region-based attention and scene factorization ABSTRACT: Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image caption system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifting among the visual regions imposes a thread of visual ordering. This alignment characterizes the flow of "abstract meaning", encoding what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast to published results on several popular datasets. We show that using either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
label: no_new_dataset
prob: 0.948298
id: 1506.06418
submitter: Raphael Hoffmann
authors: Raphael Hoffmann, Luke Zettlemoyer, Daniel S. Weld
title: Extreme Extraction: Only One Hour per Relation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI cs.IR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information Extraction (IE) aims to automatically generate a large knowledge base from natural language text, but progress remains slow. Supervised learning requires copious human annotation, while unsupervised and weakly supervised approaches do not deliver competitive accuracy. As a result, most fielded applications of IE, as well as the leading TAC-KBP systems, rely on significant amounts of manual engineering. Even "Extreme" methods, such as those reported in Freedman et al. 2011, require about 10 hours of expert labor per relation. This paper shows how to reduce that effort by an order of magnitude. We present a novel system, InstaRead, that streamlines authoring with an ensemble of methods: 1) encoding extraction rules in an expressive and compositional representation, 2) guiding the user to promising rules based on corpus statistics and mined resources, and 3) introducing a new interactive development cycle that provides immediate feedback --- even on large datasets. Experiments show that experts can create quality extractors in under an hour and even NLP novices can author good extractors. These extractors equal or outperform ones obtained by comparably supervised and state-of-the-art distantly supervised approaches.
[ { "version": "v1", "created": "Sun, 21 Jun 2015 22:04:39 GMT" } ]
2015-06-23T00:00:00
[ [ "Hoffmann", "Raphael", "" ], [ "Zettlemoyer", "Luke", "" ], [ "Weld", "Daniel S.", "" ] ]
TITLE: Extreme Extraction: Only One Hour per Relation ABSTRACT: Information Extraction (IE) aims to automatically generate a large knowledge base from natural language text, but progress remains slow. Supervised learning requires copious human annotation, while unsupervised and weakly supervised approaches do not deliver competitive accuracy. As a result, most fielded applications of IE, as well as the leading TAC-KBP systems, rely on significant amounts of manual engineering. Even "Extreme" methods, such as those reported in Freedman et al. 2011, require about 10 hours of expert labor per relation. This paper shows how to reduce that effort by an order of magnitude. We present a novel system, InstaRead, that streamlines authoring with an ensemble of methods: 1) encoding extraction rules in an expressive and compositional representation, 2) guiding the user to promising rules based on corpus statistics and mined resources, and 3) introducing a new interactive development cycle that provides immediate feedback --- even on large datasets. Experiments show that experts can create quality extractors in under an hour and even NLP novices can author good extractors. These extractors equal or outperform ones obtained by comparably supervised and state-of-the-art distantly supervised approaches.
label: no_new_dataset
prob: 0.941223
id: 1506.06490
submitter: Baotian Hu
authors: Xiaoqiang Zhou, Baotian Hu, Qingcai Chen, Buzhou Tang, Xiaolong Wang
title: Answer Sequence Learning with Neural Networks for Answer Selection in Community Question Answering
comments: 6 pages
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.IR cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach is proposed based on the recurrent architecture for this problem. Our approach applies convolution neural networks (CNNs) to learning the joint representation of question-answer pair firstly, and then uses the joint representation as input of the long short-term memory (LSTM) to learn the answer sequence of a question for labeling the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset shows the effectiveness of our approach.
[ { "version": "v1", "created": "Mon, 22 Jun 2015 07:26:51 GMT" } ]
2015-06-23T00:00:00
[ [ "Zhou", "Xiaoqiang", "" ], [ "Hu", "Baotian", "" ], [ "Chen", "Qingcai", "" ], [ "Tang", "Buzhou", "" ], [ "Wang", "Xiaolong", "" ] ]
TITLE: Answer Sequence Learning with Neural Networks for Answer Selection in Community Question Answering ABSTRACT: In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach is proposed based on the recurrent architecture for this problem. Our approach applies convolution neural networks (CNNs) to learning the joint representation of question-answer pair firstly, and then uses the joint representation as input of the long short-term memory (LSTM) to learn the answer sequence of a question for labeling the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset shows the effectiveness of our approach.
label: no_new_dataset
prob: 0.944689
id: 1506.06724
submitter: Yukun Zhu
authors: Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
title: Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.
[ { "version": "v1", "created": "Mon, 22 Jun 2015 19:26:56 GMT" } ]
2015-06-23T00:00:00
[ [ "Zhu", "Yukun", "" ], [ "Kiros", "Ryan", "" ], [ "Zemel", "Richard", "" ], [ "Salakhutdinov", "Ruslan", "" ], [ "Urtasun", "Raquel", "" ], [ "Torralba", "Antonio", "" ], [ "Fidler", "Sanja", "" ] ]
TITLE: Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books ABSTRACT: Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.
label: no_new_dataset
prob: 0.948106
id: 1506.06726
submitter: Ryan Kiros
authors: Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, Sanja Fidler
title: Skip-Thought Vectors
comments: 11 pages
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice. We will make our encoder publicly available.
[ { "version": "v1", "created": "Mon, 22 Jun 2015 19:33:40 GMT" } ]
2015-06-23T00:00:00
[ [ "Kiros", "Ryan", "" ], [ "Zhu", "Yukun", "" ], [ "Salakhutdinov", "Ruslan", "" ], [ "Zemel", "Richard S.", "" ], [ "Torralba", "Antonio", "" ], [ "Urtasun", "Raquel", "" ], [ "Fidler", "Sanja", "" ] ]
TITLE: Skip-Thought Vectors ABSTRACT: We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice. We will make our encoder publicly available.
label: no_new_dataset
prob: 0.944689
id: 1406.6909
submitter: Alexey Dosovitskiy
authors: Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller and Thomas Brox
title: Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks
comments: PAMI submission. Includes matching experiments as in arXiv:1405.5769v1. Also includes new network architectures, experiments on Caltech-256, experiment on combining Exemplar-CNN with clustering
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CV cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While such generic features cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
[ { "version": "v1", "created": "Thu, 26 Jun 2014 15:07:14 GMT" }, { "version": "v2", "created": "Fri, 19 Jun 2015 11:43:36 GMT" } ]
2015-06-22T00:00:00
[ [ "Dosovitskiy", "Alexey", "" ], [ "Fischer", "Philipp", "" ], [ "Springenberg", "Jost Tobias", "" ], [ "Riedmiller", "Martin", "" ], [ "Brox", "Thomas", "" ] ]
TITLE: Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks ABSTRACT: Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While such generic features cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
label: no_new_dataset
prob: 0.944125
id: 1406.7187
submitter: Maziar Hemati
authors: Maziar S. Hemati, Matthew O. Williams, and Clarence W. Rowley
title: Dynamic Mode Decomposition for Large and Streaming Datasets
comments: null
journal-ref: null
doi: 10.1063/1.4901016
report-no: null
categories: physics.flu-dyn
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We formulate a low-storage method for performing dynamic mode decomposition that can be updated inexpensively as new data become available; this formulation allows dynamical information to be extracted from large datasets and data streams. We present two algorithms: the first is mathematically equivalent to a standard "batch-processed" formulation; the second introduces a compression step that maintains computational efficiency, while enhancing the ability to isolate pertinent dynamical information from noisy measurements. Both algorithms reliably capture dominant fluid dynamic behaviors, as demonstrated on cylinder wake data collected from both direct numerical simulations and particle image velocimetry experiments.
[ { "version": "v1", "created": "Fri, 27 Jun 2014 14:07:11 GMT" } ]
2015-06-22T00:00:00
[ [ "Hemati", "Maziar S.", "" ], [ "Williams", "Matthew O.", "" ], [ "Rowley", "Clarence W.", "" ] ]
TITLE: Dynamic Mode Decomposition for Large and Streaming Datasets ABSTRACT: We formulate a low-storage method for performing dynamic mode decomposition that can be updated inexpensively as new data become available; this formulation allows dynamical information to be extracted from large datasets and data streams. We present two algorithms: the first is mathematically equivalent to a standard "batch-processed" formulation; the second introduces a compression step that maintains computational efficiency, while enhancing the ability to isolate pertinent dynamical information from noisy measurements. Both algorithms reliably capture dominant fluid dynamic behaviors, as demonstrated on cylinder wake data collected from both direct numerical simulations and particle image velocimetry experiments.
label: no_new_dataset
prob: 0.953362
id: 1408.0365
submitter: Will Ball
authors: William T. Ball, Natalie A. Krivova, Yvonne C. Unruh, Joanna D. Haigh, Sami K. Solanki
title: A new SATIRE-S spectral solar irradiance reconstruction for solar cycles 21--23 and its implications for stratospheric ozone
comments: 25 pages (18 pages in main article with 6 figures; 7 pages in supplementary materials with 6 figures) in draft mode using the American Meteorological Society package. Submitted to Journal of Atmospheric Sciences for publication
journal-ref: null
doi: 10.1175/JAS-D-13-0241.1
report-no: null
categories: physics.ao-ph astro-ph.SR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a revised and extended total and spectral solar irradiance (SSI) reconstruction, which includes a wavelength-dependent uncertainty estimate, spanning the last three solar cycles using the SATIRE-S model. The SSI reconstruction covers wavelengths between 115 and 160,000 nm and all dates between August 1974 and October 2009. This represents the first full-wavelength SATIRE-S reconstruction to cover the last three solar cycles without data gaps and with an uncertainty estimate. SATIRE-S is compared with the NRLSSI model and SORCE/SOLSTICE ultraviolet (UV) observations. SATIRE-S displays similar cycle behaviour to NRLSSI for wavelengths below 242 nm and almost twice the variability between 242 and 310 nm. During the decline of last solar cycle, between 2003 and 2008, SSI from SORCE/SOLSTICE version 12 and 10 typically displays more than three times the variability of SATIRE-S between 200 and 300 nm. All three datasets are used to model changes in stratospheric ozone within a 2D atmospheric model for a decline from high solar activity to solar minimum. The different flux changes result in different modelled ozone trends. Using NRLSSI leads to a decline in mesospheric ozone, while SATIRE-S and SORCE/SOLSTICE result in an increase. Recent publications have highlighted increases in mesospheric ozone when considering version 10 SORCE/SOLSTICE irradiances. The recalibrated SORCE/SOLSTICE version 12 irradiances result in a much smaller mesospheric ozone response than when using version 10 and now similar in magnitude to SATIRE-S. This shows that current knowledge of variations in spectral irradiance is not sufficient to warrant robust conclusions concerning the impact of solar variability on the atmosphere and climate.
[ { "version": "v1", "created": "Sat, 2 Aug 2014 12:40:51 GMT" } ]
2015-06-22T00:00:00
[ [ "Ball", "William T.", "" ], [ "Krivova", "Natalie A.", "" ], [ "Unruh", "Yvonne C.", "" ], [ "Haigh", "Joanna D.", "" ], [ "Solanki", "Sami K.", "" ] ]
TITLE: A new SATIRE-S spectral solar irradiance reconstruction for solar cycles 21--23 and its implications for stratospheric ozone ABSTRACT: We present a revised and extended total and spectral solar irradiance (SSI) reconstruction, which includes a wavelength-dependent uncertainty estimate, spanning the last three solar cycles using the SATIRE-S model. The SSI reconstruction covers wavelengths between 115 and 160,000 nm and all dates between August 1974 and October 2009. This represents the first full-wavelength SATIRE-S reconstruction to cover the last three solar cycles without data gaps and with an uncertainty estimate. SATIRE-S is compared with the NRLSSI model and SORCE/SOLSTICE ultraviolet (UV) observations. SATIRE-S displays similar cycle behaviour to NRLSSI for wavelengths below 242 nm and almost twice the variability between 242 and 310 nm. During the decline of last solar cycle, between 2003 and 2008, SSI from SORCE/SOLSTICE version 12 and 10 typically displays more than three times the variability of SATIRE-S between 200 and 300 nm. All three datasets are used to model changes in stratospheric ozone within a 2D atmospheric model for a decline from high solar activity to solar minimum. The different flux changes result in different modelled ozone trends. Using NRLSSI leads to a decline in mesospheric ozone, while SATIRE-S and SORCE/SOLSTICE result in an increase. Recent publications have highlighted increases in mesospheric ozone when considering version 10 SORCE/SOLSTICE irradiances. The recalibrated SORCE/SOLSTICE version 12 irradiances result in a much smaller mesospheric ozone response than when using version 10 and now similar in magnitude to SATIRE-S. This shows that current knowledge of variations in spectral irradiance is not sufficient to warrant robust conclusions concerning the impact of solar variability on the atmosphere and climate.
label: no_new_dataset
prob: 0.948298
id: 1408.1519
submitter: Chloë Brown
authors: Chloë Brown, Neal Lathia, Anastasios Noulas, Cecilia Mascolo, Vincent Blondel
title: Group colocation behavior in technological social networks
comments: 7 pages, 8 figures. Accepted for publication in PLOS One
journal-ref: null
doi: 10.1371/journal.pone.0105816
report-no: null
categories: cs.SI physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze two large datasets from technological networks with location and social data: user location records from an online location-based social networking service, and anonymized telecommunications data from a European cellphone operator, in order to investigate the differences between individual and group behavior with respect to physical location. We discover agreements between the two datasets: firstly, that individuals are more likely to meet with one friend at a place they have not visited before, but tend to meet at familiar locations when with a larger group. We also find that groups of individuals are more likely to meet at places that their other friends have visited, and that the type of a place strongly affects the propensity for groups to meet there. These differences between group and solo mobility have potential technological applications, for example, in venue recommendation in location-based social networks.
[ { "version": "v1", "created": "Thu, 7 Aug 2014 09:18:17 GMT" }, { "version": "v2", "created": "Fri, 8 Aug 2014 08:01:48 GMT" } ]
2015-06-22T00:00:00
[ [ "Brown", "Chloë", "" ], [ "Lathia", "Neal", "" ], [ "Noulas", "Anastasios", "" ], [ "Mascolo", "Cecilia", "" ], [ "Blondel", "Vincent", "" ] ]
TITLE: Group colocation behavior in technological social networks ABSTRACT: We analyze two large datasets from technological networks with location and social data: user location records from an online location-based social networking service, and anonymized telecommunications data from a European cellphone operator, in order to investigate the differences between individual and group behavior with respect to physical location. We discover agreements between the two datasets: firstly, that individuals are more likely to meet with one friend at a place they have not visited before, but tend to meet at familiar locations when with a larger group. We also find that groups of individuals are more likely to meet at places that their other friends have visited, and that the type of a place strongly affects the propensity for groups to meet there. These differences between group and solo mobility have potential technological applications, for example, in venue recommendation in location-based social networks.
label: no_new_dataset
prob: 0.940463
id: 1408.5240
submitter: Shimin Cai Dr
authors: Lili Miao, Qian-Ming Zhang, Da-Chen Nie, Shi-Min Cai
title: Whether Information Network Supplements Friendship Network
comments: 8 pages, 5 figures
journal-ref: Physica A 419, 301 (2015)
doi: 10.1016/j.physa.2014.10.021
report-no: null
categories: physics.soc-ph cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Homophily is a significant mechanism for link prediction in complex network, of which principle describes that people with similar profiles or experiences tend to tie with each other. In a multi-relationship network, friendship among people has been utilized to reinforce similarity of taste for recommendation system whose basic idea is similar to homophily, yet how the taste inversely affects friendship prediction is little discussed. This paper contributes to address the issue by analyzing two benchmark datasets both including user's behavioral information of taste and friendship based on the principle of homophily. It can be found that the creation of friendship tightly associates with personal taste. Especially, the behavioral information of taste involving with popular objects is much more effective to improve the performance of friendship prediction. However, this result seems to be contradictory to the finding in [Q.M. Zhang, et al., PLoS ONE 8(2013)e62624] that the behavior information of taste involving with popular objects is redundant in recommendation system. We thus discuss this inconformity to comprehensively understand the correlation between them.
[ { "version": "v1", "created": "Fri, 22 Aug 2014 09:31:48 GMT" } ]
2015-06-22T00:00:00
[ [ "Miao", "Lili", "" ], [ "Zhang", "Qian-Ming", "" ], [ "Nie", "Da-Chen", "" ], [ "Cai", "Shi-Min", "" ] ]
TITLE: Whether Information Network Supplements Friendship Network ABSTRACT: Homophily is a significant mechanism for link prediction in complex network, of which principle describes that people with similar profiles or experiences tend to tie with each other. In a multi-relationship network, friendship among people has been utilized to reinforce similarity of taste for recommendation system whose basic idea is similar to homophily, yet how the taste inversely affects friendship prediction is little discussed. This paper contributes to address the issue by analyzing two benchmark datasets both including user's behavioral information of taste and friendship based on the principle of homophily. It can be found that the creation of friendship tightly associates with personal taste. Especially, the behavioral information of taste involving with popular objects is much more effective to improve the performance of friendship prediction. However, this result seems to be contradictory to the finding in [Q.M. Zhang, et al., PLoS ONE 8(2013)e62624] that the behavior information of taste involving with popular objects is redundant in recommendation system. We thus discuss this inconformity to comprehensively understand the correlation between them.
label: no_new_dataset
prob: 0.945901
1409.2944
Hao Wang
Hao Wang and Naiyan Wang and Dit-Yan Yeung
Collaborative Deep Learning for Recommender Systems
null
null
null
null
cs.LG cs.CL cs.IR cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendations. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach, which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recent advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art.
[ { "version": "v1", "created": "Wed, 10 Sep 2014 03:05:22 GMT" }, { "version": "v2", "created": "Thu, 18 Jun 2015 09:23:37 GMT" } ]
2015-06-22T00:00:00
[ [ "Wang", "Hao", "" ], [ "Wang", "Naiyan", "" ], [ "Yeung", "Dit-Yan", "" ] ]
TITLE: Collaborative Deep Learning for Recommender Systems ABSTRACT: Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendations. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach, which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recent advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art.
no_new_dataset
0.948251
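The tight coupling CDL proposes, item latent factors regularized toward a learned content representation, can be illustrated with a deliberately simplified stand-in: plain matrix factorization in which each item factor is pulled toward a linear map of its content features. A minimal numpy sketch under that assumption (CDL itself uses a stacked denoising autoencoder and Bayesian inference; all names and hyperparameters here are hypothetical):

```python
import numpy as np

def cdl_like_mf(R, X, rank=10, lam_u=0.1, lam_v=0.1, lr=0.01, epochs=50, seed=0):
    """Simplified stand-in for collaborative deep learning: item factors are
    regularized toward a linear map of content features X (CDL uses an SDAE)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    d = X.shape[1]
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    W = 0.1 * rng.standard_normal((d, rank))      # content -> latent map
    obs = np.argwhere(R > 0)                      # observed (user, item) pairs
    for _ in range(epochs):
        for u, i in obs:
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - lam_u * U[u])
            V[i] += lr * (err * U[u] - lam_v * (V[i] - X[i] @ W))
        # refit the content map to the current item factors (plain least squares)
        W, *_ = np.linalg.lstsq(X, V, rcond=None)
    return U, V, W

# toy data: 30 users, 20 items, 5-dim content features
rng = np.random.default_rng(1)
R = (rng.random((30, 20)) > 0.8) * rng.integers(1, 6, (30, 20))
X = rng.standard_normal((20, 5))
U, V, W = cdl_like_mf(R, X)
print("predicted rating for user 0, item 3:", U[0] @ V[3])
```

Replacing the linear map `W` with the middle layer of an autoencoder recovers the flavor of the full model: sparse ratings shape the item factors where feedback exists, and content fills in where it does not.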
1506.02344
Anastasios Kyrillidis
Megasthenis Asteris, Anastasios Kyrillidis, Alexandros G. Dimakis, Han-Gyol Yi and Bharath Chandrasekaran
Stay on path: PCA along graph paths
12 pages, 5 figures, In Proceedings of International Conference on Machine Learning (ICML) 2015
null
null
null
stat.ML cs.IT cs.LG math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a variant of (sparse) PCA in which the set of feasible support sets is determined by a graph. In particular, we consider the following setting: given a directed acyclic graph $G$ on $p$ vertices corresponding to variables, the non-zero entries of the extracted principal component must coincide with vertices lying along a path in $G$. From a statistical perspective, information on the underlying network may potentially reduce the number of observations required to recover the population principal component. We consider the canonical estimator which optimally exploits the prior knowledge by solving a non-convex quadratic maximization on the empirical covariance. We introduce a simple network and analyze the estimator under the spiked covariance model. We show that side information potentially improves the statistical complexity. We propose two algorithms to approximate the solution of the constrained quadratic maximization, and recover a component with the desired properties. We empirically evaluate our schemes on synthetic and real datasets.
[ { "version": "v1", "created": "Mon, 8 Jun 2015 03:37:36 GMT" }, { "version": "v2", "created": "Fri, 19 Jun 2015 02:27:49 GMT" } ]
2015-06-22T00:00:00
[ [ "Asteris", "Megasthenis", "" ], [ "Kyrillidis", "Anastasios", "" ], [ "Dimakis", "Alexandros G.", "" ], [ "and", "Han-Gyol Yi", "" ], [ "Chandrasekaran", "Bharath", "" ] ]
TITLE: Stay on path: PCA along graph paths ABSTRACT: We introduce a variant of (sparse) PCA in which the set of feasible support sets is determined by a graph. In particular, we consider the following setting: given a directed acyclic graph $G$ on $p$ vertices corresponding to variables, the non-zero entries of the extracted principal component must coincide with vertices lying along a path in $G$. From a statistical perspective, information on the underlying network may potentially reduce the number of observations required to recover the population principal component. We consider the canonical estimator which optimally exploits the prior knowledge by solving a non-convex quadratic maximization on the empirical covariance. We introduce a simple network and analyze the estimator under the spiked covariance model. We show that side information potentially improves the statistical complexity. We propose two algorithms to approximate the solution of the constrained quadratic maximization, and recover a component with the desired properties. We empirically evaluate our schemes on synthetic and real datasets.
no_new_dataset
0.941708
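For a small DAG, the path constraint described above can be checked by brute force: enumerate every source-to-sink path, compute the leading eigenvector of the covariance restricted to that support, and keep the best. A sketch on a toy spiked covariance (the paper proposes far more efficient approximate algorithms; this only illustrates the feasible set):

```python
import numpy as np

def all_paths(adj, node, sinks, path=None):
    """Enumerate all directed paths from `node` to any sink in a small DAG."""
    path = (path or []) + [node]
    if node in sinks:
        yield path
    for nxt in adj.get(node, []):
        yield from all_paths(adj, nxt, sinks, path)

def path_pca(S, adj, sources, sinks):
    """Best principal component whose support is a path of the DAG:
    maximize x^T S x over unit vectors supported on a single path."""
    best, best_val = None, -np.inf
    for src in sources:
        for p in all_paths(adj, src, sinks):
            idx = np.array(p)
            w, V = np.linalg.eigh(S[np.ix_(idx, idx)])   # ascending eigenvalues
            if w[-1] > best_val:
                best_val = w[-1]
                x = np.zeros(S.shape[0]); x[idx] = V[:, -1]
                best = x
    return best, best_val

# toy spiked covariance whose spike lies on the path 0 -> 1 -> 3
spike = np.array([0.6, 0.6, 0.0, 0.52, 0.0])
S = np.outer(spike, spike) + 0.1 * np.eye(5)
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
x, val = path_pca(S, adj, sources=[0], sinks={3, 4})
print("support:", np.nonzero(x)[0], "value:", round(val, 3))
```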
1506.05247
Zhihai Yang
Zhihai Yang
Defending Grey Attacks by Exploiting Wavelet Analysis in Collaborative Filtering Recommender Systems
16 pages, 16 figures
null
null
null
cs.IR cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
"Shilling" attacks or "profile injection" attacks have always major challenges in collaborative filtering recommender systems (CFRSs). Many efforts have been devoted to improve collaborative filtering techniques which can eliminate the "shilling" attacks. However, most of them focused on detecting push attack or nuke attack which is rated with the highest score or lowest score on the target items. Few pay attention to grey attack when a target item is rated with a lower or higher score than the average score, which shows a more hidden rating behavior than push or nuke attack. In this paper, we present a novel detection method to make recommender systems resistant to such attacks. To characterize grey ratings, we exploit rating deviation of item to discriminate between grey attack profiles and genuine profiles. In addition, we also employ novelty and popularity of item to construct rating series. Since it is difficult to discriminate between the rating series of attacker and genuine users, we incorporate into discrete wavelet transform (DWT) to amplify these differences based on the rating series of rating deviation, novelty and popularity, respectively. Finally, we respectively extract features from rating series of rating deviation-based, novelty-based and popularity-based by using amplitude domain analysis method and combine all clustered results as our detection results. We conduct a list of experiments on both the Book-Crossing and HetRec-2011 datasets in diverse attack models. Experimental results were included to validate the effectiveness of our approach in comparison with the benchmarked methods.
[ { "version": "v1", "created": "Wed, 17 Jun 2015 08:54:04 GMT" }, { "version": "v2", "created": "Fri, 19 Jun 2015 07:30:47 GMT" } ]
2015-06-22T00:00:00
[ [ "Yang", "Zhihai", "" ] ]
TITLE: Defending Grey Attacks by Exploiting Wavelet Analysis in Collaborative Filtering Recommender Systems ABSTRACT: "Shilling" attacks or "profile injection" attacks have always been major challenges in collaborative filtering recommender systems (CFRSs). Many efforts have been devoted to improving collaborative filtering techniques so that they can eliminate "shilling" attacks. However, most of them focus on detecting push or nuke attacks, in which the target items are rated with the highest or lowest score. Few pay attention to grey attacks, in which a target item is rated with a score lower or higher than the average, a more hidden rating behavior than that of push or nuke attacks. In this paper, we present a novel detection method that makes recommender systems resistant to such attacks. To characterize grey ratings, we exploit the rating deviation of items to discriminate between grey attack profiles and genuine profiles. In addition, we employ the novelty and popularity of items to construct rating series. Since it is difficult to discriminate between the rating series of attackers and genuine users, we apply the discrete wavelet transform (DWT) to amplify their differences in the rating series based on rating deviation, novelty and popularity, respectively. Finally, we extract features from the rating-deviation-based, novelty-based and popularity-based rating series using an amplitude-domain analysis method and combine all clustering results to obtain our detection results. We conduct a series of experiments on both the Book-Crossing and HetRec-2011 datasets under diverse attack models. Experimental results validate the effectiveness of our approach in comparison with the benchmarked methods.
no_new_dataset
0.948822
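One plausible reading of the pipeline (rating-deviation series per profile, a wavelet transform, then amplitude-domain features) can be sketched with a hand-rolled single-level Haar DWT. The paper's exact series construction and feature set may differ, so everything below is an assumption for illustration:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (approx, detail)."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad odd-length series
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def deviation_features(profile, item_means):
    """Amplitude-domain features of a profile's rating-deviation series."""
    rated = np.nonzero(profile)[0]
    series = profile[rated] - item_means[rated]   # rating deviation per item
    _, detail = haar_dwt(series)
    return np.array([np.max(np.abs(detail)), np.mean(np.abs(detail)), np.std(detail)])

# toy: a genuine profile vs. a "grey" profile that rates target item 2 below its mean
item_means = np.array([3.5, 2.8, 4.1, 3.0, 3.9, 2.5])
genuine = np.array([4, 3, 4, 3, 4, 2], dtype=float)
grey    = np.array([4, 3, 3, 3, 4, 3], dtype=float)
for name, p in [("genuine", genuine), ("grey", grey)]:
    print(name, deviation_features(p, item_means))
```

In a full system these feature vectors would then be clustered, with small clusters of unusual amplitude profiles flagged as suspected attack profiles.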
1506.05870
Kuan-Wen Chen
Kuan-Wen Chen, Chun-Hsin Wang, Xiao Wei, Qiao Liang, Ming-Hsuan Yang, Chu-Song Chen, Yi-Ping Hung
To Know Where We Are: Vision-Based Positioning in Outdoor Environments
11 pages, 14 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Augmented reality (AR) displays have recently become increasingly popular because of their high intuitiveness for humans and the rapid development of high-quality head-mounted displays. Achieving such displays with augmented information requires highly accurate image registration or ego-positioning, yet little attention has been paid to outdoor environments. This paper presents a method for ego-positioning in outdoor environments with low-cost monocular cameras. To reduce the computational and memory requirements as well as the communication overhead, we formulate the model compression algorithm as a weighted k-cover problem that better preserves model structures. Specifically, for real-world vision-based positioning applications, we consider the issues arising from large scene changes and propose a model update algorithm to tackle them. A long-term positioning dataset spanning more than one month, with 106 sessions and 14,275 images, is constructed. Based on both the local and up-to-date models constructed in our approach, extensive experimental results show that high positioning accuracy (mean ~30.9 cm, std. dev. ~15.4 cm) can be achieved, outperforming existing vision-based algorithms.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 03:11:33 GMT" } ]
2015-06-22T00:00:00
[ [ "Chen", "Kuan-Wen", "" ], [ "Wang", "Chun-Hsin", "" ], [ "Wei", "Xiao", "" ], [ "Liang", "Qiao", "" ], [ "Yang", "Ming-Hsuan", "" ], [ "Chen", "Chu-Song", "" ], [ "Hung", "Yi-Ping", "" ] ]
TITLE: To Know Where We Are: Vision-Based Positioning in Outdoor Environments ABSTRACT: Augmented reality (AR) displays have recently become increasingly popular because of their high intuitiveness for humans and the rapid development of high-quality head-mounted displays. Achieving such displays with augmented information requires highly accurate image registration or ego-positioning, yet little attention has been paid to outdoor environments. This paper presents a method for ego-positioning in outdoor environments with low-cost monocular cameras. To reduce the computational and memory requirements as well as the communication overhead, we formulate the model compression algorithm as a weighted k-cover problem that better preserves model structures. Specifically, for real-world vision-based positioning applications, we consider the issues arising from large scene changes and propose a model update algorithm to tackle them. A long-term positioning dataset spanning more than one month, with 106 sessions and 14,275 images, is constructed. Based on both the local and up-to-date models constructed in our approach, extensive experimental results show that high positioning accuracy (mean ~30.9 cm, std. dev. ~15.4 cm) can be achieved, outperforming existing vision-based algorithms.
new_dataset
0.957991
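The greedy heuristic is the usual workhorse for weighted k-cover formulations like the model-compression step above: repeatedly keep the model image that covers the largest uncovered weight. A sketch with hypothetical per-point importance weights (the paper's actual weighting and tie-breaking are not specified here):

```python
import numpy as np

def greedy_weighted_k_cover(covers, weights, k):
    """Pick k model images greedily; `covers[i]` is the set of 3D points
    image i sees, `weights[p]` scores point p (e.g., structural importance)."""
    chosen, covered = [], set()
    for _ in range(k):
        gains = [sum(weights[p] for p in c - covered) for c in covers]
        best = int(np.argmax(gains))
        if gains[best] == 0:             # nothing new left to cover
            break
        chosen.append(best)
        covered |= covers[best]
    return chosen, covered

covers = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
weights = {p: w for p, w in enumerate([1.0, 1.0, 3.0, 1.0, 2.0, 2.0])}
chosen, covered = greedy_weighted_k_cover(covers, weights, k=2)
print("chosen images:", chosen, "covered points:", sorted(covered))
```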
1506.05908
Chris Piech
Chris Piech, Jonathan Spencer, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas Guibas, Jascha Sohl-Dickstein
Deep Knowledge Tracing
null
null
null
null
cs.AI cs.CY cs.LG
http://creativecommons.org/licenses/by-nc-sa/3.0/
Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 08:29:00 GMT" } ]
2015-06-22T00:00:00
[ [ "Piech", "Chris", "" ], [ "Spencer", "Jonathan", "" ], [ "Huang", "Jonathan", "" ], [ "Ganguli", "Surya", "" ], [ "Sahami", "Mehran", "" ], [ "Guibas", "Leonidas", "" ], [ "Sohl-Dickstein", "Jascha", "" ] ]
TITLE: Deep Knowledge Tracing ABSTRACT: Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
no_new_dataset
0.947721
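The input encoding described in the abstract (each step is a one-hot vector over (skill, correctness) pairs, and the network outputs a per-skill probability of answering the next question correctly) can be sketched as a forward pass with a vanilla RNN cell. The paper also uses LSTMs, and real use requires training; the weights below are random, so this only shows the data flow:

```python
import numpy as np

def dkt_forward(interactions, n_skills, hidden=16, seed=0):
    """Deep-knowledge-tracing style forward pass: each step consumes a one-hot
    encoding of (skill, correctness) and predicts P(correct) for every skill."""
    rng = np.random.default_rng(seed)
    d_in = 2 * n_skills                       # (skill, correct) one-hot
    Wx = 0.1 * rng.standard_normal((hidden, d_in))
    Wh = 0.1 * rng.standard_normal((hidden, hidden))
    Wy = 0.1 * rng.standard_normal((n_skills, hidden))
    h = np.zeros(hidden)
    preds = []
    for skill, correct in interactions:
        x = np.zeros(d_in)
        x[skill + n_skills * int(correct)] = 1.0
        h = np.tanh(Wx @ x + Wh @ h)          # vanilla RNN cell
        preds.append(1.0 / (1.0 + np.exp(-Wy @ h)))   # sigmoid per skill
    return np.array(preds)

# a student answers skill 0 wrong, then right, then answers skill 1 right
preds = dkt_forward([(0, 0), (0, 1), (1, 1)], n_skills=3)
print("P(correct) after each step:\n", np.round(preds, 3))
```

Training would fit `Wx`, `Wh`, `Wy` by backpropagation through time against the observed correctness of each next response.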
1506.05970
Dominique Jault
G. Hellio, N. Gillet, C. Bouligand, D. Jault
Stochastic modelling of regional archaeomagnetic series
null
Geophysical Journal International, Oxford University Press (OUP): Policy P - Oxford Open Option A, 2014, 199 (2), pp. 931-943
10.1093/gji/ggu303
null
physics.geo-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report a new method to infer continuous time series of the declination, inclination and intensity of the magnetic field from archeomagnetic data. Adopting a Bayesian perspective, we need to specify a priori knowledge about the time evolution of the magnetic field. It consists of a time correlation function that we choose to be compatible with present knowledge about the geomagnetic time spectra. The results are presented as distributions of possible values for the declination, inclination or intensity. We find that the methodology can be adapted to account for the age uncertainties of archeological artefacts, and we use Markov Chain Monte Carlo to explore the possible dates of observations. We apply the method to intensity datasets from Mari, Syria, and to intensity and directional datasets from Paris, France. Our reconstructions display more rapid variations than previous studies, and we find that the possible values of geomagnetic field elements are not necessarily normally distributed. Another output of the model is better age estimates of archeological artefacts.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 12:14:55 GMT" } ]
2015-06-22T00:00:00
[ [ "Hellio", "G.", "" ], [ "Gillet", "N.", "" ], [ "Bouligand", "C.", "" ], [ "Jault", "D.", "" ] ]
TITLE: Stochastic modelling of regional archaeomagnetic series ABSTRACT: We report a new method to infer continuous time series of the declination, inclination and intensity of the magnetic field from archeomagnetic data. Adopting a Bayesian perspective, we need to specify a priori knowledge about the time evolution of the magnetic field. It consists of a time correlation function that we choose to be compatible with present knowledge about the geomagnetic time spectra. The results are presented as distributions of possible values for the declination, inclination or intensity. We find that the methodology can be adapted to account for the age uncertainties of archeological artefacts, and we use Markov Chain Monte Carlo to explore the possible dates of observations. We apply the method to intensity datasets from Mari, Syria, and to intensity and directional datasets from Paris, France. Our reconstructions display more rapid variations than previous studies, and we find that the possible values of geomagnetic field elements are not necessarily normally distributed. Another output of the model is better age estimates of archeological artefacts.
no_new_dataset
0.947769
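The core conditioning step (a Gaussian prior with a chosen time-correlation function plus Gaussian measurement noise) reduces to standard Gaussian-process regression. A sketch with a squared-exponential correlation as a stand-in; the paper selects a correlation matched to geomagnetic spectra and additionally samples artefact ages by MCMC, both of which are omitted here, and all numbers are illustrative:

```python
import numpy as np

def gp_posterior(t_obs, y_obs, noise_var, t_grid, sigma2=1.0, ell=100.0):
    """Posterior mean/variance of a field element given noisy dated data and a
    prior time-correlation function (squared-exponential stand-in)."""
    corr = lambda a, b: sigma2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = corr(t_obs, t_obs) + np.diag(noise_var)   # data covariance
    Ks = corr(t_grid, t_obs)                      # grid-to-data covariance
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks @ alpha
    var = sigma2 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    return mean, var

# toy archaeointensity-like series (arbitrary units), dates in years
t_obs = np.array([-1800., -1500., -1200., -900., -600.])
y_obs = np.array([55., 62., 58., 50., 53.])
noise = np.full(5, 4.0)                     # per-sample measurement variance
t_grid = np.linspace(-1900, -500, 8)
mean, var = gp_posterior(t_obs, y_obs, noise, t_grid)
print(np.round(mean, 1), np.round(np.sqrt(var), 2))
```

Age uncertainty would enter by treating `t_obs` as unknowns and sampling them jointly with the field, which is where the MCMC step comes in.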
1506.05985
Xavier Bresson
Xavier Bresson and Thomas Laurent and James von Brecht
Enhanced Lasso Recovery on Graph
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work aims at recovering signals that are sparse on graphs. Compressed sensing offers techniques for signal recovery from a few linear measurements, and graph Fourier analysis provides a signal representation on graphs. In this paper, we leverage these two frameworks to introduce a new Lasso recovery algorithm on graphs. More precisely, we present a non-convex, non-smooth algorithm that outperforms the standard convex Lasso technique. We carry out numerical experiments on three benchmark graph datasets.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 12:59:18 GMT" } ]
2015-06-22T00:00:00
[ [ "Bresson", "Xavier", "" ], [ "Laurent", "Thomas", "" ], [ "von Brecht", "James", "" ] ]
TITLE: Enhanced Lasso Recovery on Graph ABSTRACT: This work aims at recovering signals that are sparse on graphs. Compressed sensing offers techniques for signal recovery from a few linear measurements, and graph Fourier analysis provides a signal representation on graphs. In this paper, we leverage these two frameworks to introduce a new Lasso recovery algorithm on graphs. More precisely, we present a non-convex, non-smooth algorithm that outperforms the standard convex Lasso technique. We carry out numerical experiments on three benchmark graph datasets.
no_new_dataset
0.950915
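As a point of reference for what this paper improves on, the standard convex Lasso in the graph Fourier basis can be written as ISTA on the spectral coefficients: the signal is `x = U c` with `U` the Laplacian eigenbasis and `c` sparse. A sketch of that convex baseline (the paper's own algorithm is non-convex and non-smooth and is not reproduced here):

```python
import numpy as np

def graph_lasso_ista(A, y, L, lam=0.1, step=None, iters=500):
    """Recover a graph-sparse signal x = U c, with c sparse in the graph
    Fourier basis U (eigenvectors of the Laplacian L), via ISTA on c."""
    _, U = np.linalg.eigh(L)           # graph Fourier basis
    M = A @ U                          # measurements act on coefficients
    if step is None:
        step = 1.0 / np.linalg.norm(M, 2)**2
    c = np.zeros(L.shape[0])
    for _ in range(iters):
        g = M.T @ (M @ c - y)                                      # gradient step
        c = c - step * g
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft-threshold
    return U @ c, c

# path graph on 8 nodes; signal = one smooth Fourier mode; 5 random measurements
n = 8
L = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1); L[0, 0] = L[-1, -1] = 1
_, U = np.linalg.eigh(L)
x_true = U[:, 1]
rng = np.random.default_rng(0)
A = rng.standard_normal((5, n))
x_hat, c_hat = graph_lasso_ista(A, A @ x_true, L, lam=0.01)
print("recovery error:", round(np.linalg.norm(x_hat - x_true), 3))
```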
1506.06006
Srinivas S S Kruthiventi
Srinivas S. S. Kruthiventi and R. Venkatesh Babu
Crowd Flow Segmentation in Compressed Domain using CRF
In IEEE International Conference on Image Processing (ICIP), 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crowd flow segmentation is an important step in many video surveillance tasks. In this work, we propose an algorithm for segmenting flows in H.264 compressed videos in a completely unsupervised manner. Our algorithm works on motion vectors, which can be obtained by partially decoding the compressed video without extracting any additional features. Our approach is based on modelling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments by finding the optimal labelling which minimises the global energy of the CRF. These oriented motion segments are recursively merged based on the gradient across their boundaries to obtain the final flow segments. This work in the compressed domain can be easily extended to the pixel domain by substituting motion vectors with motion-based features like optical flow. The proposed algorithm is experimentally evaluated on a standard crowd flow dataset, and its superior performance in both accuracy and computational time is demonstrated through quantitative results.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 14:01:24 GMT" } ]
2015-06-22T00:00:00
[ [ "Kruthiventi", "Srinivas S. S.", "" ], [ "Babu", "R. Venkatesh", "" ] ]
TITLE: Crowd Flow Segmentation in Compressed Domain using CRF ABSTRACT: Crowd flow segmentation is an important step in many video surveillance tasks. In this work, we propose an algorithm for segmenting flows in H.264 compressed videos in a completely unsupervised manner. Our algorithm works on motion vectors, which can be obtained by partially decoding the compressed video without extracting any additional features. Our approach is based on modelling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments by finding the optimal labelling which minimises the global energy of the CRF. These oriented motion segments are recursively merged based on the gradient across their boundaries to obtain the final flow segments. This work in the compressed domain can be easily extended to the pixel domain by substituting motion vectors with motion-based features like optical flow. The proposed algorithm is experimentally evaluated on a standard crowd flow dataset, and its superior performance in both accuracy and computational time is demonstrated through quantitative results.
no_new_dataset
0.957715
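The energy-minimization step can be sketched with iterated conditional modes (ICM) on a toy grid of motion-vector orientations: unary terms measure angular distance to a few canonical directions, and a Potts pairwise term enforces smoothness. The paper's exact potentials and optimizer are not specified here, so treat this as an assumed instantiation:

```python
import numpy as np

def icm_flow_labels(angles, n_labels=4, beta=1.0, sweeps=10):
    """Label a grid of motion-vector orientations with ICM: unary = angular
    distance to canonical directions, pairwise = Potts smoothness."""
    H, W = angles.shape
    canon = np.arange(n_labels) * 2*np.pi / n_labels          # canonical directions
    diff = np.angle(np.exp(1j*(angles[..., None] - canon)))   # wrapped difference
    unary = np.abs(diff)                                      # (H, W, n_labels)
    labels = unary.argmin(-1)
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (np.arange(n_labels) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels

# toy field: left half flows right (0 rad), right half flows up (pi/2), plus noise
rng = np.random.default_rng(0)
angles = np.where(np.arange(8)[None, :] < 4, 0.0, np.pi/2) + 0.3*rng.standard_normal((8, 8))
print(icm_flow_labels(angles))
```

The resulting oriented segments would then be merged recursively where the boundary gradient is weak, giving the final flow regions.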
1506.06068
Teng Qiu
Teng Qiu, Yongjie Li
A general framework for the IT-based clustering methods
17 pages
null
null
null
cs.CV cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/3.0/
Previously, we proposed a physically inspired rule to organize the data points into a sparse yet effective structure, called the in-tree (IT) graph, which is able to capture a wide class of underlying cluster structures in datasets, especially density-based ones. Although there are some redundant edges or lines between clusters that need to be removed by computer, the IT graph has a big advantage over the k-nearest-neighbor (k-NN) or minimal spanning tree (MST) graph in that its redundant edges are much more distinguishable and thus can be easily determined by several methods we previously proposed. In this paper, we propose a general framework to reconstruct the IT graph based on an initial neighborhood graph, such as the k-NN or MST graph, and the corresponding graph distances. Under this general framework, our previous way of constructing the IT graph turns out to be a special case. This general framework 1) can make the IT graph capture a wider class of underlying cluster structures in datasets, especially manifolds, and 2) should be more effective for clustering sparse or graph-based datasets.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 16:03:31 GMT" } ]
2015-06-22T00:00:00
[ [ "Qiu", "Teng", "" ], [ "Li", "Yongjie", "" ] ]
TITLE: A general framework for the IT-based clustering methods ABSTRACT: Previously, we proposed a physically inspired rule to organize the data points into a sparse yet effective structure, called the in-tree (IT) graph, which is able to capture a wide class of underlying cluster structures in datasets, especially density-based ones. Although there are some redundant edges or lines between clusters that need to be removed by computer, the IT graph has a big advantage over the k-nearest-neighbor (k-NN) or minimal spanning tree (MST) graph in that its redundant edges are much more distinguishable and thus can be easily determined by several methods we previously proposed. In this paper, we propose a general framework to reconstruct the IT graph based on an initial neighborhood graph, such as the k-NN or MST graph, and the corresponding graph distances. Under this general framework, our previous way of constructing the IT graph turns out to be a special case. This general framework 1) can make the IT graph capture a wider class of underlying cluster structures in datasets, especially manifolds, and 2) should be more effective for clustering sparse or graph-based datasets.
no_new_dataset
0.952706
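As one assumed instantiation of the idea (the paper's own physically inspired rule may differ), an in-tree can be built by estimating a density from k-NN distances and linking each point to its nearest higher-density point. The single root is then the global density peak, and the few long edges, which bridge clusters, are exactly the "redundant" edges to be cut:

```python
import numpy as np
from scipy.spatial.distance import cdist

def in_tree(X, k=5):
    """Sketch of an in-tree: each point gets one out-edge to its nearest
    higher-density point; long edges bridge clusters and are cut candidates."""
    D = cdist(X, X)
    knn_d = np.sort(D, axis=1)[:, 1:k+1]
    density = 1.0 / (knn_d.mean(axis=1) + 1e-12)   # inverse mean k-NN distance
    parent = np.full(len(X), -1)                   # -1 marks the root
    for i in range(len(X)):
        higher = np.where(density > density[i])[0]
        if len(higher):
            parent[i] = higher[np.argmin(D[i, higher])]
    return parent, density

# two Gaussian blobs; the longest edge is the inter-cluster link to remove
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
parent, density = in_tree(X)
edge_len = np.array([np.linalg.norm(X[i] - X[parent[i]]) if parent[i] >= 0 else 0
                     for i in range(len(X))])
print("root:", np.where(parent < 0)[0], "longest edge:", round(edge_len.max(), 2))
```

Cutting the longest edge splits the tree into two subtrees, one per blob, which is the clustering the IT framework is after.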
1403.4462
Andrzej Cichocki
A. Cichocki, D. Mandic, A-H. Phan, C. Caiafa, G. Zhou, Q. Zhao, and L. De Lathauwer
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
null
null
10.1109/MSP.2013.2297439
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
[ { "version": "v1", "created": "Mon, 17 Mar 2014 11:03:58 GMT" } ]
2015-06-19T00:00:00
[ [ "Cichocki", "A.", "" ], [ "Mandic", "D.", "" ], [ "Phan", "A-H.", "" ], [ "Caiafa", "C.", "" ], [ "Zhou", "G.", "" ], [ "Zhao", "Q.", "" ], [ "De Lathauwer", "L.", "" ] ]
TITLE: Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis ABSTRACT: The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
no_new_dataset
0.944382
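Of the models surveyed, the Canonical Polyadic decomposition has the most compact fitting loop: alternating least squares over the factor matrices. A minimal 3-way numpy sketch:

```python
import numpy as np

def cp_als(T, rank, iters=50, seed=0):
    """Canonical polyadic decomposition of a 3-way tensor by alternating
    least squares: T ~ sum_r a_r (outer) b_r (outer) c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(iters):
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T@B)*(C.T@C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T@A)*(C.T@C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T@A)*(B.T@B))
    return A, B, C

# rank-2 ground truth plus noise
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((6, 2)), rng.standard_normal((5, 2)), rng.standard_normal((4, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01*rng.standard_normal((6, 5, 4))
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", round(np.linalg.norm(T - T_hat)/np.linalg.norm(T), 4))
```

Each update solves a linear least-squares problem whose Gram matrix is the Hadamard product of the other factors' Grams, which is what keeps ALS cheap.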
1403.4590
Kareem Osman
K. T. Osman, W. H. Matthaeus, J. T. Gosling, A. Greco, S. Servidio, B. Hnat, S. C. Chapman, and T. D. Phan
Magnetic Reconnection and Intermittent Turbulence in the Solar Wind
5 pages, 3 figures, submitted to Physical Review Letters
null
10.1103/PhysRevLett.112.215002
null
physics.space-ph physics.data-an physics.plasm-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A statistical relationship between magnetic reconnection, current sheets and intermittent turbulence in the solar wind is reported for the first time using in-situ measurements from the Wind spacecraft at 1 AU. We identify intermittency as non-Gaussian fluctuations in increments of the magnetic field vector, $\mathbf{B}$, that are spatially and temporally non-uniform. The reconnection events and current sheets are found to be concentrated in intervals of intermittent turbulence, identified using the partial variance of increments method: within the most non-Gaussian 1% of fluctuations in $\mathbf{B}$, we find 87%-92% of reconnection exhausts and $\sim$9% of current sheets. Also, the likelihood that an identified current sheet will also correspond to a reconnection exhaust increases dramatically as the least intermittent fluctuations are removed from the dataset. Hence, the turbulent solar wind contains a hierarchy of intermittent magnetic field structures that are increasingly linked to current sheets, which in turn are progressively more likely to correspond to sites of magnetic reconnection. These results could have far reaching implications for laboratory and astrophysical plasmas where turbulence and magnetic reconnection are ubiquitous.
[ { "version": "v1", "created": "Tue, 18 Mar 2014 19:45:07 GMT" } ]
2015-06-19T00:00:00
[ [ "Osman", "K. T.", "" ], [ "Matthaeus", "W. H.", "" ], [ "Gosling", "J. T.", "" ], [ "Greco", "A.", "" ], [ "Servidio", "S.", "" ], [ "Hnat", "B.", "" ], [ "Chapman", "S. C.", "" ], [ "Phan", "T. D.", "" ] ]
TITLE: Magnetic Reconnection and Intermittent Turbulence in the Solar Wind ABSTRACT: A statistical relationship between magnetic reconnection, current sheets and intermittent turbulence in the solar wind is reported for the first time using in-situ measurements from the Wind spacecraft at 1 AU. We identify intermittency as non-Gaussian fluctuations in increments of the magnetic field vector, $\mathbf{B}$, that are spatially and temporally non-uniform. The reconnection events and current sheets are found to be concentrated in intervals of intermittent turbulence, identified using the partial variance of increments method: within the most non-Gaussian 1% of fluctuations in $\mathbf{B}$, we find 87%-92% of reconnection exhausts and $\sim$9% of current sheets. Also, the likelihood that an identified current sheet will also correspond to a reconnection exhaust increases dramatically as the least intermittent fluctuations are removed from the dataset. Hence, the turbulent solar wind contains a hierarchy of intermittent magnetic field structures that are increasingly linked to current sheets, which in turn are progressively more likely to correspond to sites of magnetic reconnection. These results could have far reaching implications for laboratory and astrophysical plasmas where turbulence and magnetic reconnection are ubiquitous.
no_new_dataset
0.949576
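The partial variance of increments (PVI) statistic used here to flag intermittent structures normalizes the magnitude of field increments by their r.m.s. value. A numpy sketch on synthetic data with one current-sheet-like jump (the lag, sampling, and field model are illustrative):

```python
import numpy as np

def pvi(B, lag=1):
    """Partial variance of increments: PVI(t) = |dB| / sqrt(<|dB|^2>),
    where dB(t) = B(t+lag) - B(t) for a vector field series B of shape (N, 3)."""
    dB = B[lag:] - B[:-lag]
    mag = np.linalg.norm(dB, axis=1)
    return mag / np.sqrt(np.mean(mag**2))

# synthetic field: smooth background plus one sharp current-sheet-like jump
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
B = np.stack([np.sin(t), np.cos(t), 0.1*t], axis=1) + 0.02*rng.standard_normal((2000, 3))
B[1000:, 0] += 1.5                         # abrupt rotation of one component
series = pvi(B)
print("max PVI:", round(series.max(), 1), "at index", int(series.argmax()))
```

Thresholding this series at a high percentile, as in the study's "most non-Gaussian 1%" selection, isolates the sharp discontinuities where current sheets and reconnection exhausts concentrate.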
1403.5156
Daniele Marinazzo
Sebastiano Stramaglia, Jesus M. Cortes, Daniele Marinazzo
Synergy and redundancy in the Granger causal analysis of dynamical networks
null
null
10.1088/1367-2630/16/10/105003
null
q-bio.QM cs.IT math.IT physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze by means of Granger causality the effect of synergy and redundancy in the inference (from time series data) of the information flow between subsystems of a complex network. Whilst we show that fully conditioned Granger causality is not affected by synergy, the pairwise analysis fails to reveal synergetic effects. In cases when the number of samples is low, thus making the fully conditioned approach unfeasible, we show that partially conditioned Granger causality is an effective approach if the set of conditioning variables is properly chosen. We consider here two different strategies (based either on informational content for the candidate driver or on selecting the variables with the highest pairwise influences) for partially conditioned Granger causality and show that, depending on the data structure, either one or the other might be valid. On the other hand, we observe that fully conditioned approaches do not work well in the presence of redundancy, thus suggesting the strategy of separating the pairwise links into two subsets: those corresponding to indirect connections of the fully conditioned Granger causality (which should thus be excluded), and links that can be ascribed to redundancy effects and that, together with the results from the fully conditioned approach, provide a better description of the causality pattern in the presence of redundancy. We finally apply these methods to two different real datasets. First, analyzing electrophysiological data from an epileptic brain, we show that synergetic effects are dominant just before seizure occurrences. Second, our analysis applied to gene expression time series from HeLa culture shows that the underlying regulatory networks are characterized by both redundancy and synergy.
[ { "version": "v1", "created": "Thu, 20 Mar 2014 14:49:27 GMT" }, { "version": "v2", "created": "Thu, 31 Jul 2014 22:38:24 GMT" } ]
2015-06-19T00:00:00
[ [ "Stramaglia", "Sebastiano", "" ], [ "Cortes", "Jesus M.", "" ], [ "Marinazzo", "Daniele", "" ] ]
TITLE: Synergy and redundancy in the Granger causal analysis of dynamical networks ABSTRACT: We analyze by means of Granger causality the effect of synergy and redundancy in the inference (from time series data) of the information flow between subsystems of a complex network. Whilst we show that fully conditioned Granger causality is not affected by synergy, the pairwise analysis fails to reveal synergetic effects. In cases when the number of samples is low, thus making the fully conditioned approach unfeasible, we show that partially conditioned Granger causality is an effective approach if the set of conditioning variables is properly chosen. We consider here two different strategies (based either on informational content for the candidate driver or on selecting the variables with the highest pairwise influences) for partially conditioned Granger causality and show that, depending on the data structure, either one or the other might be valid. On the other hand, we observe that fully conditioned approaches do not work well in the presence of redundancy, thus suggesting the strategy of separating the pairwise links into two subsets: those corresponding to indirect connections of the fully conditioned Granger causality (which should thus be excluded), and links that can be ascribed to redundancy effects and that, together with the results from the fully conditioned approach, provide a better description of the causality pattern in the presence of redundancy. We finally apply these methods to two different real datasets. First, analyzing electrophysiological data from an epileptic brain, we show that synergetic effects are dominant just before seizure occurrences. Second, our analysis applied to gene expression time series from HeLa culture shows that the underlying regulatory networks are characterized by both redundancy and synergy.
no_new_dataset
0.944177
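The pairwise-versus-conditioned distinction is easy to reproduce with linear Granger causality, computed as the log-ratio of restricted to full residual variances. In the sketch below, a chain x -> z -> y makes the pairwise measure report a spurious direct link that conditioning on z removes (the lag order and coefficients are illustrative):

```python
import numpy as np

def granger(y, predictors, lags=1):
    """Log-ratio Granger causality of the first predictor onto y, conditioned
    on the remaining predictors, using order-`lags` linear models."""
    def resid_var(target, sources):
        Xcols = [s[l:len(s)-lags+l] for s in sources for l in range(lags)]
        X = np.column_stack(Xcols + [np.ones(len(target) - lags)])
        t = target[lags:]
        beta, *_ = np.linalg.lstsq(X, t, rcond=None)
        return np.var(t - X @ beta)
    full = resid_var(y, [y] + predictors)          # with the candidate driver
    restricted = resid_var(y, [y] + predictors[1:])  # without it
    return np.log(restricted / full)

# chain x -> z -> y: pairwise GC(x->y) looks large, conditioning on z removes it
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n); z = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    z[t] = 0.8*x[t-1] + 0.1*rng.standard_normal()
    y[t] = 0.8*z[t-1] + 0.1*rng.standard_normal()
print("pairwise GC(x->y):", round(granger(y, [x], lags=2), 3))
print("conditioned on z: ", round(granger(y, [x, z], lags=2), 3))
```

The fully conditioned value collapses toward zero because z mediates the whole x-to-y pathway, which is precisely the indirect-link pruning the abstract describes.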
1403.7595
Zi-Ke Zhang Dr.
Da-Cheng Nie, Zi-Ke Zhang, Jun-lin Zhou, Yan Fu, Kui Zhang
Information Filtering on Coupled Social Networks
null
null
10.1371/journal.pone.0101675
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, based on coupled social networks (CSN), we propose a hybrid algorithm to nonlinearly integrate both the social and behavioral information of online users. The filtering algorithm, built on the coupled social networks, considers the effects of both social influence and personalized preference. Experimental results on two real datasets, \emph{Epinions} and \emph{Friendfeed}, show that the hybrid pattern can not only provide more accurate recommendations, but also enlarge the recommendation coverage when adopting a global metric. Further empirical analyses demonstrate that mutual reinforcement and the rich-club phenomenon can also be found in coupled social networks, where the identical individuals occupy the core positions of the online system. This work may shed some light on an in-depth understanding of the structure and function of coupled social networks.
[ { "version": "v1", "created": "Sat, 29 Mar 2014 06:20:25 GMT" } ]
2015-06-19T00:00:00
[ [ "Nie", "Da-Cheng", "" ], [ "Zhang", "Zi-Ke", "" ], [ "Zhou", "Jun-lin", "" ], [ "Fu", "Yan", "" ], [ "Zhang", "Kui", "" ] ]
TITLE: Information Filtering on Coupled Social Networks ABSTRACT: In this paper, based on coupled social networks (CSN), we propose a hybrid algorithm to nonlinearly integrate both the social and behavioral information of online users. The filtering algorithm, built on the coupled social networks, considers the effects of both social influence and personalized preference. Experimental results on two real datasets, \emph{Epinions} and \emph{Friendfeed}, show that the hybrid pattern can not only provide more accurate recommendations, but also enlarge the recommendation coverage when adopting a global metric. Further empirical analyses demonstrate that mutual reinforcement and the rich-club phenomenon can also be found in coupled social networks, where the identical individuals occupy the core positions of the online system. This work may shed some light on an in-depth understanding of the structure and function of coupled social networks.
no_new_dataset
0.949669
1404.2342
Ko-Jen Hsiao
Ko-Jen Hsiao, Alex Kulesza, Alfred Hero
Social Collaborative Retrieval
10 pages
null
10.1109/JSTSP.2014.2317286
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Socially-based recommendation systems have recently attracted significant interest, and a number of studies have shown that social information can dramatically improve a system's predictions of user interests. Meanwhile, there are now many potential applications that involve aspects of both recommendation and information retrieval, and the task of collaborative retrieval---a combination of these two traditional problems---has recently been introduced. Successful collaborative retrieval requires overcoming severe data sparsity, making additional sources of information, such as social graphs, particularly valuable. In this paper we propose a new model for collaborative retrieval, and show that our algorithm outperforms current state-of-the-art approaches by incorporating information from social networks. We also provide empirical analyses of the ways in which cultural interests propagate along a social graph using a real-world music dataset.
[ { "version": "v1", "created": "Wed, 9 Apr 2014 01:18:05 GMT" } ]
2015-06-19T00:00:00
[ [ "Hsiao", "Ko-Jen", "" ], [ "Kulesza", "Alex", "" ], [ "Hero", "Alfred", "" ] ]
TITLE: Social Collaborative Retrieval ABSTRACT: Socially-based recommendation systems have recently attracted significant interest, and a number of studies have shown that social information can dramatically improve a system's predictions of user interests. Meanwhile, there are now many potential applications that involve aspects of both recommendation and information retrieval, and the task of collaborative retrieval---a combination of these two traditional problems---has recently been introduced. Successful collaborative retrieval requires overcoming severe data sparsity, making additional sources of information, such as social graphs, particularly valuable. In this paper we propose a new model for collaborative retrieval, and show that our algorithm outperforms current state-of-the-art approaches by incorporating information from social networks. We also provide empirical analyses of the ways in which cultural interests propagate along a social graph using a real-world music dataset.
no_new_dataset
0.941601
1404.4667
Morteza Mardani
Morteza Mardani, Gonzalo Mateos, and Georgios B. Giannakis
Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors
null
null
10.1109/TSP.2015.2417491
null
stat.ML cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting latent low-dimensional structure from high-dimensional data is of paramount importance in timely inference tasks encountered with `Big Data' analytics. However, increasingly noisy, heterogeneous, and incomplete datasets as well as the need for {\em real-time} processing of streaming data pose major challenges to this end. In this context, the present paper brings the benefits of rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibly multi-way) structure from \emph{incomplete streaming} data. For low-rank matrix data, a subspace estimator is proposed based on an exponentially-weighted least-squares criterion regularized with the nuclear norm. After recasting the non-separable nuclear norm into a form amenable to online optimization, real-time algorithms with complementary strengths are developed and their convergence is established under simplifying technical assumptions. In a stationary setting, the asymptotic estimates obtained offer the well-documented performance guarantees of the {\em batch} nuclear-norm regularized estimator. Under the same unifying framework, a novel online (adaptive) algorithm is developed to obtain multi-way decompositions of \emph{low-rank tensors} with missing entries, and perform imputation as a byproduct. Simulated tests with both synthetic as well as real Internet and cardiac magnetic resonance imagery (MRI) data confirm the efficacy of the proposed algorithms, and their superior performance relative to state-of-the-art alternatives.
[ { "version": "v1", "created": "Thu, 17 Apr 2014 22:55:08 GMT" } ]
2015-06-19T00:00:00
[ [ "Mardani", "Morteza", "" ], [ "Mateos", "Gonzalo", "" ], [ "Giannakis", "Georgios B.", "" ] ]
TITLE: Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors ABSTRACT: Extracting latent low-dimensional structure from high-dimensional data is of paramount importance in timely inference tasks encountered with `Big Data' analytics. However, increasingly noisy, heterogeneous, and incomplete datasets as well as the need for {\em real-time} processing of streaming data pose major challenges to this end. In this context, the present paper brings the benefits of rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibly multi-way) structure from \emph{incomplete streaming} data. For low-rank matrix data, a subspace estimator is proposed based on an exponentially-weighted least-squares criterion regularized with the nuclear norm. After recasting the non-separable nuclear norm into a form amenable to online optimization, real-time algorithms with complementary strengths are developed and their convergence is established under simplifying technical assumptions. In a stationary setting, the asymptotic estimates obtained offer the well-documented performance guarantees of the {\em batch} nuclear-norm regularized estimator. Under the same unifying framework, a novel online (adaptive) algorithm is developed to obtain multi-way decompositions of \emph{low-rank tensors} with missing entries, and perform imputation as a byproduct. Simulated tests with both synthetic as well as real Internet and cardiac magnetic resonance imagery (MRI) data confirm the efficacy of the proposed algorithms, and their superior performance relative to state-of-the-art alternatives.
no_new_dataset
0.945601
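The online regime (fit the current subspace to the observed entries of each incoming vector, impute the rest, then nudge the subspace) can be sketched with a GROUSE-style rank-one update. The paper's estimator is different, an exponentially-weighted least-squares criterion with nuclear-norm regularization, so treat this as a structurally similar stand-in:

```python
import numpy as np

def track_and_impute(stream, rank, step=0.5, seed=0):
    """GROUSE-style sketch: maintain an orthonormal subspace U; per vector,
    fit coefficients on observed entries, impute the rest, nudge U along the
    residual. (The cited estimator uses a nuclear-norm-regularized criterion.)"""
    rng = np.random.default_rng(seed)
    dim = len(stream[0][0])
    U, _ = np.linalg.qr(rng.standard_normal((dim, rank)))
    imputed = []
    for y, mask in stream:                      # mask: True where observed
        w, *_ = np.linalg.lstsq(U[mask], y[mask], rcond=None)
        full = U @ w
        r = np.zeros(dim); r[mask] = y[mask] - full[mask]   # residual on observed
        U = U + step * np.outer(r, w) / (np.linalg.norm(w)**2 + 1e-12)
        U, _ = np.linalg.qr(U)                  # re-orthonormalize
        est = y.copy(); est[~mask] = full[~mask]
        imputed.append(est)
    return U, imputed

# stream from a rank-2 subspace with ~60% of entries observed
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.standard_normal((20, 2)))[0]
stream, truth = [], []
for _ in range(300):
    v = basis @ rng.standard_normal(2)
    truth.append(v)
    stream.append((v, rng.random(20) < 0.6))
U, imputed = track_and_impute(stream, rank=2)
err = np.mean([np.linalg.norm(e - t) / (np.linalg.norm(t) + 1e-12)
               for e, t in zip(imputed, truth)])
print("mean relative imputation error:", round(err, 3))
```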
1404.4923
Jie Shen
Jie Shen, Guangcan Liu, Jia Chen, Yuqiang Fang, Jianbin Xie, Yong Yu, Shuicheng Yan
Unified Structured Learning for Simultaneous Human Pose Estimation and Garment Attribute Classification
Accepted to IEEE Trans. on Image Processing
null
10.1109/TIP.2014.2358082
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we utilize structured learning to simultaneously address two intertwined problems: human pose estimation (HPE) and garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications. Unlike previous works that usually handle the two problems separately, our approach aims to produce a jointly optimal estimation for both HPE and GAC via a unified inference procedure. To this end, we adopt a preprocessing step to detect potential human parts from each image (i.e., a set of "candidates") that allows us to have a manageable input space. In this way, the simultaneous inference of HPE and GAC is converted to a structured learning problem, where the inputs are the collections of candidate ensembles, the outputs are the joint labels of human parts and garment attributes, and the joint feature representation involves various cues such as pose-specific features, garment-specific features, and cross-task features that encode correlations between human parts and garment attributes. Furthermore, we explore the "strong edge" evidence around the potential human parts so as to derive more powerful representations for oriented human parts. Such evidence can be seamlessly integrated into our structured learning model as a kind of energy function, and the learning process can be performed by a standard structured Support Vector Machine (SVM) algorithm. However, the joint structure of the two problems is a cyclic graph, which hinders efficient inference. To resolve this issue, we instead compute approximate optima using an iterative procedure in which, at each iteration, the variables of one problem are fixed. In this way, satisfactory solutions can be efficiently computed by dynamic programming. Experimental results on two benchmark datasets show the state-of-the-art performance of our approach.
[ { "version": "v1", "created": "Sat, 19 Apr 2014 04:51:06 GMT" }, { "version": "v2", "created": "Tue, 16 Sep 2014 19:50:41 GMT" }, { "version": "v3", "created": "Mon, 22 Sep 2014 19:09:38 GMT" } ]
2015-06-19T00:00:00
[ [ "Shen", "Jie", "" ], [ "Liu", "Guangcan", "" ], [ "Chen", "Jia", "" ], [ "Fang", "Yuqiang", "" ], [ "Xie", "Jianbin", "" ], [ "Yu", "Yong", "" ], [ "Yan", "Shuicheng", "" ] ]
TITLE: Unified Structured Learning for Simultaneous Human Pose Estimation and Garment Attribute Classification ABSTRACT: In this paper, we utilize structured learning to simultaneously address two intertwined problems: human pose estimation (HPE) and garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications. Unlike previous works that usually handle the two problems separately, our approach aims to produce a jointly optimal estimation for both HPE and GAC via a unified inference procedure. To this end, we adopt a preprocessing step to detect potential human parts from each image (i.e., a set of "candidates") that allows us to have a manageable input space. In this way, the simultaneous inference of HPE and GAC is converted to a structured learning problem, where the inputs are the collections of candidate ensembles, the outputs are the joint labels of human parts and garment attributes, and the joint feature representation involves various cues such as pose-specific features, garment-specific features, and cross-task features that encode correlations between human parts and garment attributes. Furthermore, we explore the "strong edge" evidence around the potential human parts so as to derive more powerful representations for oriented human parts. Such evidence can be seamlessly integrated into our structured learning model as a kind of energy function, and the learning process can be performed by a standard structured Support Vector Machine (SVM) algorithm. However, the joint structure of the two problems is a cyclic graph, which hinders efficient inference. To resolve this issue, we instead compute approximate optima using an iterative procedure in which, at each iteration, the variables of one problem are fixed. In this way, satisfactory solutions can be efficiently computed by dynamic programming. Experimental results on two benchmark datasets show the state-of-the-art performance of our approach.
no_new_dataset
0.942718
1404.7170
Paul Expert
Giovanni Petri, Paul Expert
Temporal stability of network partitions
15 pages, 12 figures
Phys. Rev. E 90, 022813, 2014
10.1103/PhysRevE.90.022813
null
physics.soc-ph cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method to find the best temporal partition at any time-scale and to rank the relevance of partitions found at different time-scales. This method is based on random walkers coevolving with the network and as such constitutes a generalization of partition stability to the case of temporal networks. We show that, when applied to a toy model and real datasets, temporal stability uncovers structures that are persistent over meaningful time-scales as well as important isolated events, making it an effective tool to study both abrupt changes and the gradual evolution of a network's mesoscopic structures.
[ { "version": "v1", "created": "Mon, 28 Apr 2014 21:18:49 GMT" }, { "version": "v2", "created": "Thu, 7 Aug 2014 09:25:50 GMT" } ]
2015-06-19T00:00:00
[ [ "Petri", "Giovanni", "" ], [ "Expert", "Paul", "" ] ]
TITLE: Temporal stability of network partitions ABSTRACT: We present a method to find the best temporal partition at any time-scale and to rank the relevance of partitions found at different time-scales. This method is based on random walkers coevolving with the network and as such constitutes a generalization of partition stability to the case of temporal networks. We show that, when applied to a toy model and real datasets, temporal stability uncovers structures that are persistent over meaningful time-scales as well as important isolated events, making it an effective tool to study both abrupt changes and the gradual evolution of a network's mesoscopic structures.
no_new_dataset
0.943608
1405.4574
Kristjan Greenewald
Kristjan H. Greenewald and Alfred O. Hero III
Kronecker PCA Based Spatio-Temporal Modeling of Video for Dismount Classification
8 pages. To appear in Proceedings of SPIE DSS. arXiv admin note: text overlap with arXiv:1402.5568
null
10.1117/12.2050184
null
cs.CV stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the application of KronPCA spatio-temporal modeling techniques [Greenewald et al 2013, Tsiligkaridis et al 2013] to the extraction of spatiotemporal features for video dismount classification. KronPCA performs a low-rank type of dimensionality reduction that is adapted to spatio-temporal data and is characterized by the T frame multiframe mean and covariance of p spatial features. For further regularization and improved inverse estimation, we also use the diagonally corrected KronPCA shrinkage methods we presented in [Greenewald et al 2013]. We apply this very general method to the modeling of the multivariate temporal behavior of HOG features extracted from pedestrian bounding boxes in video, with gender classification in a challenging dataset chosen as a specific application. The learned covariances for each class are used to extract spatiotemporal features which are then classified, achieving competitive classification performance.
[ { "version": "v1", "created": "Mon, 19 May 2014 01:22:34 GMT" } ]
2015-06-19T00:00:00
[ [ "Greenewald", "Kristjan H.", "" ], [ "Hero", "Alfred O.", "III" ] ]
TITLE: Kronecker PCA Based Spatio-Temporal Modeling of Video for Dismount Classification ABSTRACT: We consider the application of KronPCA spatio-temporal modeling techniques [Greenewald et al 2013, Tsiligkaridis et al 2013] to the extraction of spatiotemporal features for video dismount classification. KronPCA performs a low-rank type of dimensionality reduction that is adapted to spatio-temporal data and is characterized by the T frame multiframe mean and covariance of p spatial features. For further regularization and improved inverse estimation, we also use the diagonally corrected KronPCA shrinkage methods we presented in [Greenewald et al 2013]. We apply this very general method to the modeling of the multivariate temporal behavior of HOG features extracted from pedestrian bounding boxes in video, with gender classification in a challenging dataset chosen as a specific application. The learned covariances for each class are used to extract spatiotemporal features which are then classified, achieving competitive classification performance.
no_new_dataset
0.947575
1406.3295
Cesar Caiafa
Cesar F. Caiafa and Andrzej Cichocki
Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
Submitted to IEEE Transactions on Signal Processing
null
10.1109/TSP.2014.2385040
null
cs.IT cs.DS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the framework of multidimensional Compressed Sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an $N$th-order $(I_1\times I_2\times \cdots \times I_N)$ data tensor $\underline{\mathbf{X}}$ from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction based on compressive linear measurements taken only in two selected modes, independently of the tensor order $N$. In addition, it is proved that, in the matrix case and in a particular case with $3$rd-order tensors where the same 2D sensor operator is applied to all mode-3 slices, the proposed reconstruction $\underline{\mathbf{X}}_\tau$ is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation, where $\tau$ is a threshold parameter that controls the approximation error. Through the analysis of the upper bound of the approximation error, we show that, in the 2D case, an optimal value for the threshold parameter $\tau=\tau_0 > 0$ exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained using $\tau=0$, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when it is applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e. it is non-iterative, in contrast to all existing sparsity-based CS algorithms, thus providing super fast computations, even for large datasets.
[ { "version": "v1", "created": "Thu, 22 May 2014 18:35:07 GMT" }, { "version": "v2", "created": "Mon, 30 Jun 2014 17:05:36 GMT" } ]
2015-06-19T00:00:00
[ [ "Caiafa", "Cesar F.", "" ], [ "Cichocki", "Andrzej", "" ] ]
TITLE: Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections ABSTRACT: In the framework of multidimensional Compressed Sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an $N$th-order $(I_1\times I_2\times \cdots \times I_N)$ data tensor $\underline{\mathbf{X}}$ from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction based on compressive linear measurements taken only in two selected modes, independently of the tensor order $N$. In addition, it is proved that, in the matrix case and in a particular case with $3$rd-order tensors where the same 2D sensor operator is applied to all mode-3 slices, the proposed reconstruction $\underline{\mathbf{X}}_\tau$ is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation, where $\tau$ is a threshold parameter that controls the approximation error. Through the analysis of the upper bound of the approximation error we show that, in the 2D case, an optimal value for the threshold parameter $\tau=\tau_0 > 0$ exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained using $\tau=0$, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when it is applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e., it is non-iterative, in contrast to all existing sparsity-based CS algorithms, thus providing super fast computations, even for large datasets.
no_new_dataset
0.943191
1503.02128
Qingming Tang
Qingming Tang, Chao Yang, Jian Peng and Jinbo Xu
Exact Hybrid Covariance Thresholding for Joint Graphical Lasso
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the problem of estimating multiple related Gaussian graphical models from a $p$-dimensional dataset consisting of different classes. Our work is based upon the formulation of this problem as the group graphical lasso. This paper proposes a novel hybrid covariance thresholding algorithm that can effectively identify zero entries in the precision matrices and split a large joint graphical lasso problem into small subproblems. Our hybrid covariance thresholding method is superior to existing uniform thresholding methods in that our method can split the precision matrix of each individual class using different partition schemes and thus split the group graphical lasso problem into much smaller subproblems, each of which can be solved very quickly. In addition, this paper establishes necessary and sufficient conditions for our hybrid covariance thresholding algorithm. The superior performance of our thresholding method is thoroughly analyzed and illustrated by a few experiments on simulated data and real gene expression data.
[ { "version": "v1", "created": "Sat, 7 Mar 2015 03:34:48 GMT" }, { "version": "v2", "created": "Thu, 18 Jun 2015 02:52:51 GMT" } ]
2015-06-19T00:00:00
[ [ "Tang", "Qingming", "" ], [ "Yang", "Chao", "" ], [ "Peng", "Jian", "" ], [ "Xu", "Jinbo", "" ] ]
TITLE: Exact Hybrid Covariance Thresholding for Joint Graphical Lasso ABSTRACT: This paper considers the problem of estimating multiple related Gaussian graphical models from a $p$-dimensional dataset consisting of different classes. Our work is based upon the formulation of this problem as the group graphical lasso. This paper proposes a novel hybrid covariance thresholding algorithm that can effectively identify zero entries in the precision matrices and split a large joint graphical lasso problem into small subproblems. Our hybrid covariance thresholding method is superior to existing uniform thresholding methods in that our method can split the precision matrix of each individual class using different partition schemes and thus split the group graphical lasso problem into much smaller subproblems, each of which can be solved very quickly. In addition, this paper establishes necessary and sufficient conditions for our hybrid covariance thresholding algorithm. The superior performance of our thresholding method is thoroughly analyzed and illustrated by a few experiments on simulated data and real gene expression data.
no_new_dataset
0.949248
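The uniform covariance thresholding that the record above improves upon has a simple form for a single graphical lasso problem: off-diagonal entries with |S_ij| <= lambda cannot appear in the estimated precision matrix, so the problem decouples across connected components of the thresholded graph. A minimal sketch of that baseline screening step (not the paper's hybrid rule, whose class-specific partitions differ):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def screen_blocks(S, lam):
    """Uniform covariance thresholding: variables i and j can be linked in
    the graphical lasso solution only if |S_ij| > lam, so the problem splits
    along connected components of the thresholded adjacency graph."""
    A = np.abs(S) > lam
    np.fill_diagonal(A, False)  # self-loops are irrelevant to the split
    n_comp, labels = connected_components(csr_matrix(A), directed=False)
    # Each returned index set is an independent graphical lasso subproblem.
    return [np.flatnonzero(labels == c) for c in range(n_comp)]
```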
1506.04729
Shohreh Shaghaghian Ms
Shohreh Shaghaghian, Mark Coates
Optimal Forwarding in Opportunistic Delay Tolerant Networks with Meeting Rate Estimations
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data transfer in opportunistic Delay Tolerant Networks (DTNs) must rely on unscheduled sporadic meetings between nodes. The main challenge in these networks is to develop a mechanism by which nodes can learn to make nearly optimal forwarding decision rules despite having no a priori knowledge of the network topology. The forwarding mechanism should ideally result in a high delivery probability, low average latency and efficient usage of the network resources. In this paper, we propose both centralized and decentralized single-copy message forwarding algorithms that, under relatively strong assumptions about the network's behaviour, minimize the expected latencies from any node in the network to a particular destination. After proving the optimality of our proposed algorithms, we develop a decentralized algorithm that involves a recursive maximum likelihood procedure to estimate the meeting rates. We confirm the improvement that our proposed algorithms make in system performance through numerical simulations on datasets from synthetic and real-world opportunistic networks.
[ { "version": "v1", "created": "Mon, 15 Jun 2015 19:49:48 GMT" }, { "version": "v2", "created": "Tue, 16 Jun 2015 18:20:30 GMT" }, { "version": "v3", "created": "Wed, 17 Jun 2015 14:38:37 GMT" }, { "version": "v4", "created": "Thu, 18 Jun 2015 14:12:07 GMT" } ]
2015-06-19T00:00:00
[ [ "Shaghaghian", "Shohreh", "" ], [ "Coates", "Mark", "" ] ]
TITLE: Optimal Forwarding in Opportunistic Delay Tolerant Networks with Meeting Rate Estimations ABSTRACT: Data transfer in opportunistic Delay Tolerant Networks (DTNs) must rely on unscheduled sporadic meetings between nodes. The main challenge in these networks is to develop a mechanism by which nodes can learn to make nearly optimal forwarding decision rules despite having no a priori knowledge of the network topology. The forwarding mechanism should ideally result in a high delivery probability, low average latency and efficient usage of the network resources. In this paper, we propose both centralized and decentralized single-copy message forwarding algorithms that, under relatively strong assumptions about the network's behaviour, minimize the expected latencies from any node in the network to a particular destination. After proving the optimality of our proposed algorithms, we develop a decentralized algorithm that involves a recursive maximum likelihood procedure to estimate the meeting rates. We confirm the improvement that our proposed algorithms make in system performance through numerical simulations on datasets from synthetic and real-world opportunistic networks.
no_new_dataset
0.950319
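The decentralized algorithm in the record above relies on recursively estimated meeting rates. Under the common DTN assumption of exponentially distributed inter-meeting times, the maximum-likelihood rate estimate is simply the meeting count divided by observed time, which can be maintained recursively; a sketch (the paper's exact estimator may differ):

```python
class MeetingRateEstimator:
    """Recursive ML estimate of a pairwise meeting rate, assuming
    exponential inter-meeting times: lambda_hat = n_meetings / total_time.
    The expected direct-delivery latency to the peer is then 1 / rate."""
    def __init__(self):
        self.n_meetings = 0
        self.total_time = 0.0
        self.last_meeting = None

    def record_meeting(self, t):
        # Each observed inter-meeting interval updates the sufficient statistics.
        if self.last_meeting is not None:
            self.n_meetings += 1
            self.total_time += t - self.last_meeting
        self.last_meeting = t

    @property
    def rate(self):
        return self.n_meetings / self.total_time if self.total_time else 0.0
```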
1506.05514
Ubai Sandouk
Ubai Sandouk, Ke Chen
Learning Contextualized Semantics from Co-occurring Terms via a Siamese Architecture
null
null
null
2015-06-18
cs.IR cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the biggest challenges in Multimedia information retrieval and understanding is to bridge the semantic gap by properly modeling concept semantics in context. The presence of out of vocabulary (OOV) concepts exacerbates this difficulty. To address the semantic gap, we formulate the problem of learning contextualized semantics from descriptive terms and propose a novel Siamese architecture to model them. By means of pattern aggregation and probabilistic topic models, our Siamese architecture captures contextualized semantics from the co-occurring descriptive terms via unsupervised learning, which leads to a concept embedding space of the terms in context. Furthermore, the co-occurring OOV concepts can be easily represented in the learnt concept embedding space. The main properties of the concept embedding space are demonstrated via visualization. Using various settings in semantic priming, we have carried out a thorough evaluation by comparing our approach to a number of state-of-the-art methods on six annotation corpora in different domains, i.e., MagTag5K, CAL500 and Million Song Dataset in the music domain as well as Corel5K, LabelMe and SUNDatabase in the image domain. Experimental results on semantic priming suggest that our approach outperforms those state-of-the-art methods considerably in various aspects.
[ { "version": "v1", "created": "Wed, 17 Jun 2015 23:03:43 GMT" } ]
2015-06-19T00:00:00
[ [ "Sandouk", "Ubai", "" ], [ "Chen", "Ke", "" ] ]
TITLE: Learning Contextualized Semantics from Co-occurring Terms via a Siamese Architecture ABSTRACT: One of the biggest challenges in Multimedia information retrieval and understanding is to bridge the semantic gap by properly modeling concept semantics in context. The presence of out of vocabulary (OOV) concepts exacerbates this difficulty. To address the semantic gap, we formulate the problem of learning contextualized semantics from descriptive terms and propose a novel Siamese architecture to model them. By means of pattern aggregation and probabilistic topic models, our Siamese architecture captures contextualized semantics from the co-occurring descriptive terms via unsupervised learning, which leads to a concept embedding space of the terms in context. Furthermore, the co-occurring OOV concepts can be easily represented in the learnt concept embedding space. The main properties of the concept embedding space are demonstrated via visualization. Using various settings in semantic priming, we have carried out a thorough evaluation by comparing our approach to a number of state-of-the-art methods on six annotation corpora in different domains, i.e., MagTag5K, CAL500 and Million Song Dataset in the music domain as well as Corel5K, LabelMe and SUNDatabase in the image domain. Experimental results on semantic priming suggest that our approach outperforms those state-of-the-art methods considerably in various aspects.
no_new_dataset
0.946399
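A generic skeleton of the Siamese idea in the record above: two weight-shared branches embed co-occurring terms, and a contrastive loss shapes the embedding space. This PyTorch sketch reproduces only the architectural pattern; the paper's pattern aggregation and topic-model inputs are not modeled here, and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedder(nn.Module):
    """Twin network mapping term feature vectors into a shared embedding
    space; both branches share the same weights (hypothetical sizes)."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim))

    def forward(self, a, b):
        return self.net(a), self.net(b)

def contrastive_loss(za, zb, same, margin=1.0):
    """Pull co-occurring term pairs (same=1) together and push
    non-co-occurring pairs (same=0) apart by at least `margin`."""
    d = F.pairwise_distance(za, zb)
    return torch.mean(same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2))
```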
1506.05541
Yi Sun
Yi Sun, Xiaoqi Yin, Nanshu Wang, Junchen Jiang, Vyas Sekar, Yun Jin, Bruno Sinopoli
Analyzing TCP Throughput Stability and Predictability with Implications for Adaptive Video Streaming
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work suggests that TCP throughput stability and predictability within a video viewing session can inform the design of better video bitrate adaptation algorithms. Despite a rich tradition of Internet measurement, however, our understanding of throughput stability and predictability is quite limited. To bridge this gap, we present a measurement study of throughput stability using a large-scale dataset from a video service provider. Drawing on this analysis, we propose a simple-but-effective prediction mechanism based on a hidden Markov model and demonstrate that it outperforms other approaches. We also show the practical implications in improving the user experience of adaptive video streaming.
[ { "version": "v1", "created": "Thu, 18 Jun 2015 03:36:24 GMT" } ]
2015-06-19T00:00:00
[ [ "Sun", "Yi", "" ], [ "Yin", "Xiaoqi", "" ], [ "Wang", "Nanshu", "" ], [ "Jiang", "Junchen", "" ], [ "Sekar", "Vyas", "" ], [ "Jin", "Yun", "" ], [ "Sinopoli", "Bruno", "" ] ]
TITLE: Analyzing TCP Throughput Stability and Predictability with Implications for Adaptive Video Streaming ABSTRACT: Recent work suggests that TCP throughput stability and predictability within a video viewing session can inform the design of better video bitrate adaptation algorithms. Despite a rich tradition of Internet measurement, however, our understanding of throughput stability and predictability is quite limited. To bridge this gap, we present a measurement study of throughput stability using a large-scale dataset from a video service provider. Drawing on this analysis, we propose a simple-but-effective prediction mechanism based on a hidden Markov model and demonstrate that it outperforms other approaches. We also show the practical implications in improving the user experience of adaptive video streaming.
no_new_dataset
0.933309
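A sketch of an HMM-based throughput predictor in the spirit of the record above, using hmmlearn's GaussianHMM on a 1-D throughput trace. This is a generic hidden-Markov predictor, not a claim about the paper's exact model or features, and it assumes the hmmlearn package is available.

```python
import numpy as np
from hmmlearn import hmm

def predict_next_throughput(throughput, n_states=4):
    """Fit a Gaussian HMM to a throughput trace and predict the next
    epoch as the transition-weighted mean of the state emissions."""
    X = np.asarray(throughput, dtype=float).reshape(-1, 1)
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=0)
    model.fit(X)
    states = model.predict(X)                 # Viterbi state path
    next_dist = model.transmat_[states[-1]]   # P(next state | current state)
    return float(next_dist @ model.means_.ravel())
```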
1506.05751
Rob Fergus
Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al.). Samples drawn from our model are of significantly higher quality than those from alternative approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.
[ { "version": "v1", "created": "Thu, 18 Jun 2015 17:03:54 GMT" } ]
2015-06-19T00:00:00
[ [ "Denton", "Emily", "" ], [ "Chintala", "Soumith", "" ], [ "Szlam", "Arthur", "" ], [ "Fergus", "Rob", "" ] ]
TITLE: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks ABSTRACT: In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al.). Samples drawn from our model are of significantly higher quality than those from alternative approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.
no_new_dataset
0.953492
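The Laplacian pyramid underlying the record above is easy to state in code: repeatedly downsample, store the band-pass difference between each level and the upsampled next level, and reconstruct coarse-to-fine. In LAPGAN, each band-pass image is what a conditional GAN at that scale learns to generate. A sketch of the pyramid machinery only (no GAN), using OpenCV:

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Decompose an image into band-pass levels plus a coarse residual."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    # Band-pass level i = gaussian level i minus the upsampled level i+1.
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels)]
    return lp, gp[-1]

def reconstruct(lp, residual):
    """Invert the decomposition: upsample and add band-pass images back."""
    img = residual
    for band in reversed(lp):
        img = cv2.pyrUp(img, dstsize=band.shape[1::-1]) + band
    return img
```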
1312.0317
Chunxiao Jiang
Chunxiao Jiang and Yan Chen and K. J. Ray Liu
Evolutionary Dynamics of Information Diffusion over Social Networks
arXiv admin note: substantial text overlap with arXiv:1309.2920
null
10.1109/JSTSP.2014.2313024
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current social networks are extremely large-scale, generating tremendous information flows at every moment. How information diffuses over social networks has attracted much attention from both industry and academics. Most of the existing works on information diffusion analysis are based on machine learning methods focusing on social network structure analysis and empirical data mining. However, the dynamics of information diffusion, which are heavily influenced by network users' decisions, actions and their socio-economic interactions, is generally ignored by most existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we derive the information diffusion dynamics in complete networks, uniform degree and non-uniform degree networks, with the highlight of two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network. We find that the dynamics of information diffusion over these three kinds of networks are scale-free and the same as each other when the network scale is sufficiently large. To verify our theoretical analysis, we perform simulations of information diffusion over synthetic networks and real-world Facebook networks. Moreover, we also conduct experiments on a Twitter hashtags dataset, which show that the proposed game theoretic model can fit and predict information diffusion over real social networks well.
[ { "version": "v1", "created": "Mon, 2 Dec 2013 03:21:28 GMT" } ]
2015-06-18T00:00:00
[ [ "Jiang", "Chunxiao", "" ], [ "Chen", "Yan", "" ], [ "Liu", "K. J. Ray", "" ] ]
TITLE: Evolutionary Dynamics of Information Diffusion over Social Networks ABSTRACT: Current social networks are extremely large-scale, generating tremendous information flows at every moment. How information diffuses over social networks has attracted much attention from both industry and academics. Most of the existing works on information diffusion analysis are based on machine learning methods focusing on social network structure analysis and empirical data mining. However, the dynamics of information diffusion, which are heavily influenced by network users' decisions, actions and their socio-economic interactions, is generally ignored by most existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we derive the information diffusion dynamics in complete networks, uniform degree and non-uniform degree networks, with the highlight of two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network. We find that the dynamics of information diffusion over these three kinds of networks are scale-free and the same as each other when the network scale is sufficiently large. To verify our theoretical analysis, we perform simulations of information diffusion over synthetic networks and real-world Facebook networks. Moreover, we also conduct experiments on a Twitter hashtags dataset, which show that the proposed game theoretic model can fit and predict information diffusion over real social networks well.
no_new_dataset
0.949012
1401.0887
Dorina Thanou
Dorina Thanou, David I Shuman, Pascal Frossard
Learning parametric dictionaries for graph signals
null
null
10.1109/TSP.2014.2332441
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties -- the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
[ { "version": "v1", "created": "Sun, 5 Jan 2014 12:17:51 GMT" } ]
2015-06-18T00:00:00
[ [ "Thanou", "Dorina", "" ], [ "Shuman", "David I", "" ], [ "Frossard", "Pascal", "" ] ]
TITLE: Learning parametric dictionaries for graph signals ABSTRACT: In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties -- the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
no_new_dataset
0.947332
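The structured dictionary in the record above is a concatenation of subdictionaries, each a polynomial of the graph Laplacian, so that each column is a single pattern translated to a different vertex. Given coefficients alpha (learned in the paper; simply supplied and hypothetical here), building one subdictionary takes a few lines:

```python
import numpy as np
import networkx as nx

def polynomial_subdictionary(G, alpha):
    """Build D_s = sum_k alpha[k] * L**k for a graph G: column n of D_s
    is the pattern defined by alpha, localized around vertex n."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    D = np.zeros_like(L)
    Lk = np.eye(L.shape[0])      # running power L**k, starting at L**0
    for a in alpha:
        D += a * Lk
        Lk = Lk @ L
    return D
```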
1412.6583
Brian Cheung
Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen
Discovering Hidden Factors of Variation in Deep Networks
Presented at International Conference on Learning Representations 2015 Workshop
null
null
null
cs.LG cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification. But there has been less exploration in learning the factors of variation apart from the classification signal. By augmenting autoencoders with simple regularization terms during training, we demonstrate that standard deep architectures can discover and explicitly represent factors of variation beyond those relevant for categorization. We introduce a cross-covariance penalty (XCov) as a method to disentangle factors like handwriting style for digits and subject identity in faces. We demonstrate this on the MNIST handwritten digit database, the Toronto Faces Database (TFD) and the Multi-PIE dataset by generating manipulated instances of the data. Furthermore, we demonstrate these deep networks can extrapolate `hidden' variation in the supervised signal.
[ { "version": "v1", "created": "Sat, 20 Dec 2014 02:52:03 GMT" }, { "version": "v2", "created": "Fri, 27 Feb 2015 20:41:40 GMT" }, { "version": "v3", "created": "Fri, 17 Apr 2015 17:15:02 GMT" }, { "version": "v4", "created": "Wed, 17 Jun 2015 06:47:48 GMT" } ]
2015-06-18T00:00:00
[ [ "Cheung", "Brian", "" ], [ "Livezey", "Jesse A.", "" ], [ "Bansal", "Arjun K.", "" ], [ "Olshausen", "Bruno A.", "" ] ]
TITLE: Discovering Hidden Factors of Variation in Deep Networks ABSTRACT: Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification. But there has been less exploration in learning the factors of variation apart from the classification signal. By augmenting autoencoders with simple regularization terms during training, we demonstrate that standard deep architectures can discover and explicitly represent factors of variation beyond those relevant for categorization. We introduce a cross-covariance penalty (XCov) as a method to disentangle factors like handwriting style for digits and subject identity in faces. We demonstrate this on the MNIST handwritten digit database, the Toronto Faces Database (TFD) and the Multi-PIE dataset by generating manipulated instances of the data. Furthermore, we demonstrate these deep networks can extrapolate `hidden' variation in the supervised signal.
no_new_dataset
0.944944
1504.06852
Philipp Fischer
Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip H\"ausser, Caner Haz{\i}rba\c{s}, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox
FlowNet: Learning Optical Flow with Convolutional Networks
Added supplementary material
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
[ { "version": "v1", "created": "Sun, 26 Apr 2015 17:30:32 GMT" }, { "version": "v2", "created": "Mon, 4 May 2015 08:50:57 GMT" } ]
2015-06-18T00:00:00
[ [ "Fischer", "Philipp", "" ], [ "Dosovitskiy", "Alexey", "" ], [ "Ilg", "Eddy", "" ], [ "Häusser", "Philip", "" ], [ "Hazırbaş", "Caner", "" ], [ "Golkov", "Vladimir", "" ], [ "van der Smagt", "Patrick", "" ], [ "Cremers", "Daniel", "" ], [ "Brox", "Thomas", "" ] ]
TITLE: FlowNet: Learning Optical Flow with Convolutional Networks ABSTRACT: Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
new_dataset
0.960025
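The distinctive ingredient of the correlating architecture in the record above is a layer that compares feature vectors across the two images at different displacements. A stripped-down version of that cost-volume computation (patch size 1, no striding, for intuition only; FlowNet correlates larger patches):

```python
import numpy as np

def correlation_volume(f1, f2, max_disp=4):
    """For every pixel of feature map f1 (H x W x C), dot it with f2 at
    every displacement within +/- max_disp, yielding an
    H x W x (2*max_disp+1)**2 cost volume (zero-padded at borders)."""
    H, W, C = f1.shape
    d = max_disp
    f2p = np.pad(f2, ((d, d), (d, d), (0, 0)))
    out = np.empty((H, W, (2 * d + 1) ** 2), dtype=f1.dtype)
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = f2p[d + dy:d + dy + H, d + dx:d + dx + W]
            out[:, :, k] = (f1 * shifted).sum(axis=2)
            k += 1
    return out
```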
1505.06454
Qing Ke
Qing Ke, Emilio Ferrara, Filippo Radicchi, Alessandro Flammini
Defining and identifying Sleeping Beauties in science
40 pages, Supporting Information included, top examples listed at http://qke.github.io/projects/beauty/beauty.html
Proc. Natl. Acad. Sci. USA 112, 7426-7431 (2015)
10.1073/pnas.1424329112
null
physics.soc-ph cs.DL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Sleeping Beauty (SB) in science refers to a paper whose importance is not recognized for several years after publication. Its citation history exhibits a long hibernation period followed by a sudden spike of popularity. Previous studies suggest a relative scarcity of SBs. The reliability of this conclusion is, however, heavily dependent on identification methods based on arbitrary threshold parameters for sleeping time and number of citations, applied to small or monodisciplinary bibliographic datasets. Here we present a systematic, large-scale, and multidisciplinary analysis of the SB phenomenon in science. We introduce a parameter-free measure that quantifies the extent to which a specific paper can be considered an SB. We apply our method to 22 million scientific papers published in all disciplines of natural and social sciences over a time span longer than a century. Our results reveal that the SB phenomenon is not exceptional. There is a continuous spectrum of delayed recognition where both the hibernation period and the awakening intensity are taken into account. Although many cases of SBs can be identified by looking at monodisciplinary bibliographic data, the SB phenomenon becomes much more apparent with the analysis of multidisciplinary datasets, where we can observe many examples of papers achieving delayed yet exceptional importance in disciplines different from those where they were originally published. Our analysis emphasizes a complex feature of citation dynamics that so far has received little attention, and also provides empirical evidence against the use of short-term citation metrics in the quantification of scientific impact.
[ { "version": "v1", "created": "Sun, 24 May 2015 16:38:14 GMT" } ]
2015-06-18T00:00:00
[ [ "Ke", "Qing", "" ], [ "Ferrara", "Emilio", "" ], [ "Radicchi", "Filippo", "" ], [ "Flammini", "Alessandro", "" ] ]
TITLE: Defining and identifying Sleeping Beauties in science ABSTRACT: A Sleeping Beauty (SB) in science refers to a paper whose importance is not recognized for several years after publication. Its citation history exhibits a long hibernation period followed by a sudden spike of popularity. Previous studies suggest a relative scarcity of SBs. The reliability of this conclusion is, however, heavily dependent on identification methods based on arbitrary threshold parameters for sleeping time and number of citations, applied to small or monodisciplinary bibliographic datasets. Here we present a systematic, large-scale, and multidisciplinary analysis of the SB phenomenon in science. We introduce a parameter-free measure that quantifies the extent to which a specific paper can be considered an SB. We apply our method to 22 million scientific papers published in all disciplines of natural and social sciences over a time span longer than a century. Our results reveal that the SB phenomenon is not exceptional. There is a continuous spectrum of delayed recognition where both the hibernation period and the awakening intensity are taken into account. Although many cases of SBs can be identified by looking at monodisciplinary bibliographic data, the SB phenomenon becomes much more apparent with the analysis of multidisciplinary datasets, where we can observe many examples of papers achieving delayed yet exceptional importance in disciplines different from those where they were originally published. Our analysis emphasizes a complex feature of citation dynamics that so far has received little attention, and also provides empirical evidence against the use of short-term citation metrics in the quantification of scientific impact.
no_new_dataset
0.945349
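The parameter-free measure in the record above, commonly called the beauty coefficient B, compares each year's citations against the straight line joining the citations at publication to the citations at the peak year. A sketch following that published definition (notation may differ slightly from the paper):

```python
import numpy as np

def beauty_coefficient(c):
    """Beauty coefficient B from yearly citation counts c[0], c[1], ...:
    sum over years up to the peak t_m of (line_t - c_t) / max(1, c_t),
    where line_t interpolates from (0, c_0) to (t_m, c_{t_m})."""
    c = np.asarray(c, dtype=float)
    t_m = int(np.argmax(c))
    if t_m == 0:
        return 0.0  # peak at publication: no hibernation, B vanishes
    t = np.arange(t_m + 1)
    line = (c[t_m] - c[0]) / t_m * t + c[0]
    return float(np.sum((line - c[:t_m + 1]) / np.maximum(1.0, c[:t_m + 1])))
```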
1506.02085
Min Xu
Min Xu, Rudy Setiono
Gene selection for cancer classification using a hybrid of univariate and multivariate feature selection methods
null
Applied Genomics and Proteomics. 2003:2(2)79-91
null
null
q-bio.QM cs.CE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various approaches to gene selection for cancer classification based on microarray data can be found in the literature and they may be grouped into two categories: univariate methods and multivariate methods. Univariate methods look at each gene in the data in isolation from others. They measure the contribution of a particular gene to the classification without considering the presence of the other genes. In contrast, multivariate methods measure the relative contribution of a gene to the classification by taking the other genes in the data into consideration. Multivariate methods select fewer genes in general. However, the selection process of multivariate methods may be sensitive to the presence of irrelevant genes, noise in the expression values and outliers in the training data. At the same time, the computational cost of multivariate methods is high. To overcome the disadvantages of the two types of approaches, we propose a hybrid method to obtain gene sets that are small and highly discriminative. We devise our hybrid method from the univariate Maximum Likelihood method (LIK) and the multivariate Recursive Feature Elimination method (RFE). We analyze the properties of these methods and systematically test the effectiveness of our proposed method on two cancer microarray datasets. Our experiments on a leukemia dataset and a small, round blue cell tumors dataset demonstrate the effectiveness of our hybrid method. It is able to discover sets consisting of fewer genes than those reported in the literature and at the same time achieve the same or better prediction accuracy.
[ { "version": "v1", "created": "Fri, 5 Jun 2015 23:29:06 GMT" } ]
2015-06-18T00:00:00
[ [ "Xu", "Min", "" ], [ "Setiono", "Rudy", "" ] ]
TITLE: Gene selection for cancer classification using a hybrid of univariate and multivariate feature selection methods ABSTRACT: Various approaches to gene selection for cancer classification based on microarray data can be found in the literature and they may be grouped into two categories: univariate methods and multivariate methods. Univariate methods look at each gene in the data in isolation from others. They measure the contribution of a particular gene to the classification without considering the presence of the other genes. In contrast, multivariate methods measure the relative contribution of a gene to the classification by taking the other genes in the data into consideration. Multivariate methods select fewer genes in general. However, the selection process of multivariate methods may be sensitive to the presence of irrelevant genes, noise in the expression values and outliers in the training data. At the same time, the computational cost of multivariate methods is high. To overcome the disadvantages of the two types of approaches, we propose a hybrid method to obtain gene sets that are small and highly discriminative. We devise our hybrid method from the univariate Maximum Likelihood method (LIK) and the multivariate Recursive Feature Elimination method (RFE). We analyze the properties of these methods and systematically test the effectiveness of our proposed method on two cancer microarray datasets. Our experiments on a leukemia dataset and a small, round blue cell tumors dataset demonstrate the effectiveness of our hybrid method. It is able to discover sets consisting of fewer genes than those reported in the literature and at the same time achieve the same or better prediction accuracy.
no_new_dataset
0.946001
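A hybrid filter-then-wrapper pipeline of the kind the record above describes is easy to assemble with scikit-learn; here the univariate stage uses an ANOVA F score as a stand-in for the paper's maximum-likelihood score, and the multivariate stage is RFE with a linear SVM. Sizes are illustrative; fit with `hybrid_selector().fit(X, y)` on an expression matrix X and class labels y.

```python
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

def hybrid_selector(n_univariate=200, n_final=20):
    """Two-stage gene selection: a cheap univariate filter prunes most
    genes first, then multivariate RFE refines the survivors to a small,
    highly discriminative panel."""
    return Pipeline([
        ("filter", SelectKBest(f_classif, k=n_univariate)),
        ("rfe", RFE(LinearSVC(C=1.0, dual=False),
                    n_features_to_select=n_final)),
    ])
```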
1506.02087
Min Xu
Min Xu
Global Gene Expression Analysis Using Machine Learning Methods
Author's master thesis (National University of Singapore, May 2003). Adviser: Rudy Setiono
null
null
null
q-bio.QM cs.CE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microarray is a technology to quantitatively monitor the expression of a large number of genes in parallel. It has become one of the main tools for global gene expression analysis in molecular biology research in recent years. The large amount of expression data generated by this technology makes the study of certain complex biological problems possible, and machine learning methods are playing a crucial role in the analysis process. At present, many machine learning methods have been or have the potential to be applied to major areas of gene expression analysis. These areas include clustering, classification, dynamic modeling and reverse engineering. In this thesis, we focus our work on using machine learning methods to solve the classification problems arising from microarray data. We first identify the major types of the classification problems; then apply several machine learning methods to solve the problems and perform systematic tests on real and artificial datasets. We propose improvements to existing methods. Specifically, we develop a multivariate and a hybrid feature selection method to obtain high classification performance for high dimension classification problems. Using the hybrid feature selection method, we are able to identify small sets of features that give predictive accuracy as good as that of other methods which require many more features.
[ { "version": "v1", "created": "Fri, 5 Jun 2015 23:37:20 GMT" } ]
2015-06-18T00:00:00
[ [ "Xu", "Min", "" ] ]
TITLE: Global Gene Expression Analysis Using Machine Learning Methods ABSTRACT: Microarray is a technology to quantitatively monitor the expression of a large number of genes in parallel. It has become one of the main tools for global gene expression analysis in molecular biology research in recent years. The large amount of expression data generated by this technology makes the study of certain complex biological problems possible, and machine learning methods are playing a crucial role in the analysis process. At present, many machine learning methods have been or have the potential to be applied to major areas of gene expression analysis. These areas include clustering, classification, dynamic modeling and reverse engineering. In this thesis, we focus our work on using machine learning methods to solve the classification problems arising from microarray data. We first identify the major types of the classification problems; then apply several machine learning methods to solve the problems and perform systematic tests on real and artificial datasets. We propose improvements to existing methods. Specifically, we develop a multivariate and a hybrid feature selection method to obtain high classification performance for high dimension classification problems. Using the hybrid feature selection method, we are able to identify small sets of features that give predictive accuracy as good as that of other methods which require many more features.
no_new_dataset
0.950503
1506.04924
Seunghoon Hong
Seunghoon Hong, Hyeonwoo Noh, Bohyung Han
Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation
Added a link to the project page for more comprehensive illustration of results
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by the classification network, and binary segmentation is subsequently performed for each identified label by the segmentation network. The decoupled architecture enables us to learn the classification and segmentation networks separately based on training data with image-level and pixel-wise class labels, respectively. It also facilitates effectively reducing the search space for segmentation by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with far fewer training images with strong annotations on the PASCAL VOC dataset.
[ { "version": "v1", "created": "Tue, 16 Jun 2015 11:20:04 GMT" }, { "version": "v2", "created": "Wed, 17 Jun 2015 08:38:32 GMT" } ]
2015-06-18T00:00:00
[ [ "Hong", "Seunghoon", "" ], [ "Noh", "Hyeonwoo", "" ], [ "Han", "Bohyung", "" ] ]
TITLE: Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation ABSTRACT: We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by the classification network, and binary segmentation is subsequently performed for each identified label by the segmentation network. The decoupled architecture enables us to learn the classification and segmentation networks separately based on training data with image-level and pixel-wise class labels, respectively. It also facilitates effectively reducing the search space for segmentation by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with far fewer training images with strong annotations on the PASCAL VOC dataset.
no_new_dataset
0.953405
1506.05158
Taylor Arnold
Taylor Arnold
An Entropy Maximizing Geohash for Distributed Spatiotemporal Database Indexing
12 pages, 4 figures
null
null
null
cs.DB cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a modification of the standard geohash algorithm based on maximum entropy encoding in which the data volume is approximately constant for a given hash prefix length. Distributed spatiotemporal databases, which typically require interleaving spatial and temporal elements into a single key, reap large benefits from a balanced geohash by creating a consistent ratio between spatial and temporal precision even across areas of varying data density. This property is also useful for indexing purely spatial datasets, where the load distribution of large range scans is an important aspect of query performance. We apply our algorithm to data generated in proportion to population, as given by census block population counts provided by the US Census Bureau.
[ { "version": "v1", "created": "Tue, 16 Jun 2015 21:54:12 GMT" } ]
2015-06-18T00:00:00
[ [ "Arnold", "Taylor", "" ] ]
TITLE: An Entropy Maximizing Geohash for Distributed Spatiotemporal Database Indexing ABSTRACT: We present a modification of the standard geohash algorithm based on maximum entropy encoding in which the data volume is approximately constant for a given hash prefix length. Distributed spatiotemporal databases, which typically require interleaving spatial and temporal elements into a single key, reap large benefits from a balanced geohash by creating a consistent ratio between spatial and temporal precision even across areas of varying data density. This property is also useful for indexing purely spatial datasets, where the load distribution of large range scans is an important aspect of query performance. We apply our algorithm to data generated in proportion to population, as given by census block population counts provided by the US Census Bureau.
no_new_dataset
0.948155
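One way to realize the balanced-geohash idea in the record above: keep the usual bit-interleaved encoding, but split each cell at the empirical median of the indexed points rather than the geometric midpoint, so every hash prefix holds roughly equal data volume. A sketch under that reading (the paper's exact construction may differ):

```python
import numpy as np

def balanced_geohash(lat, lon, points, bits=24):
    """Geohash-style bit interleaving with data-adaptive (median) splits.
    points: iterable of (lat, lon) training pairs defining the data density."""
    pts = np.asarray(points, dtype=float).reshape(-1, 2)
    lo, hi = np.array([-90.0, -180.0]), np.array([90.0, 180.0])
    q, code = np.array([lat, lon]), []
    for b in range(bits):
        axis = b % 2                 # alternate latitude / longitude bits
        vals = pts[:, axis]
        # Median split balances data volume; fall back to the geometric
        # midpoint once no training points remain in the cell.
        split = float(np.median(vals)) if len(vals) else (lo[axis] + hi[axis]) / 2
        if q[axis] >= split:
            code.append("1"); lo[axis] = split; pts = pts[vals >= split]
        else:
            code.append("0"); hi[axis] = split; pts = pts[vals < split]
    return "".join(code)
```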
1506.05257
Daniel J Mankowitz
Daniel J. Mankowitz and Ehud Rivlin
CFORB: Circular FREAK-ORB Visual Odometry
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel Visual Odometry algorithm entitled Circular FREAK-ORB (CFORB). This algorithm detects features using the well-known ORB algorithm [12] and computes feature descriptors using the FREAK algorithm [14]. CFORB is invariant to both rotation and scale changes, and is suitable for use in environments with uneven terrain. Two visual geometric constraints have been utilized in order to remove invalid feature descriptor matches. These constraints have not previously been utilized in a Visual Odometry algorithm. A variation of circular matching [16] has also been implemented. This allows features to be matched between images without depending on the epipolar constraint. This algorithm has been run on the KITTI benchmark dataset and achieves a competitive average translational error of $3.73 \%$ and average rotational error of $0.0107 deg/m$. CFORB has also been run in an indoor environment and achieved an average translational error of $3.70 \%$. After running CFORB in a highly textured environment with an approximately uniform feature spread across the images, the algorithm achieves an average translational error of $2.4 \%$ and an average rotational error of $0.009 deg/m$.
[ { "version": "v1", "created": "Wed, 17 Jun 2015 09:44:42 GMT" } ]
2015-06-18T00:00:00
[ [ "Mankowitz", "Daniel J.", "" ], [ "Rivlin", "Ehud", "" ] ]
TITLE: CFORB: Circular FREAK-ORB Visual Odometry ABSTRACT: We present a novel Visual Odometry algorithm entitled Circular FREAK-ORB (CFORB). This algorithm detects features using the well-known ORB algorithm [12] and computes feature descriptors using the FREAK algorithm [14]. CFORB is invariant to both rotation and scale changes, and is suitable for use in environments with uneven terrain. Two visual geometric constraints have been utilized in order to remove invalid feature descriptor matches. These constraints have not previously been utilized in a Visual Odometry algorithm. A variation of circular matching [16] has also been implemented. This allows features to be matched between images without depending on the epipolar constraint. This algorithm has been run on the KITTI benchmark dataset and achieves a competitive average translational error of $3.73 \%$ and average rotational error of $0.0107 deg/m$. CFORB has also been run in an indoor environment and achieved an average translational error of $3.70 \%$. After running CFORB in a highly textured environment with an approximately uniform feature spread across the images, the algorithm achieves an average translational error of $2.4 \%$ and an average rotational error of $0.009 deg/m$.
no_new_dataset
0.948822
1309.0691
Zi-Ke Zhang Dr.
Chu-Xu Zhang, Zi-Ke Zhang, Lu Yu, Chuang Liu, Hao Liu, Xiao-Yong Yan
Information Filtering via Collaborative User Clustering Modeling
null
null
10.1016/j.physa.2013.11.024
null
cs.IR cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The past few years have witnessed the great success of recommender systems, which can significantly help users find personalized items in the information era. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most research on this topic has focused on mining the direct relationships between users and items. In this paper, we optimize the standard MF by integrating a user clustering regularization term. Our model considers not only the user-item rating information, but also takes the user interests into account. We compared the proposed model with three other typical methods: User-Mean (UM), Item-Mean (IM) and standard MF. Experimental results on a real-world dataset, \emph{MovieLens}, show that our method performs much better than the other three methods in recommendation accuracy.
[ { "version": "v1", "created": "Tue, 3 Sep 2013 14:20:00 GMT" }, { "version": "v2", "created": "Wed, 4 Sep 2013 09:20:30 GMT" }, { "version": "v3", "created": "Thu, 7 Nov 2013 16:29:26 GMT" }, { "version": "v4", "created": "Mon, 10 Feb 2014 08:40:21 GMT" } ]
2015-06-17T00:00:00
[ [ "Zhang", "Chu-Xu", "" ], [ "Zhang", "Zi-Ke", "" ], [ "Yu", "Lu", "" ], [ "Liu", "Chuang", "" ], [ "Liu", "Hao", "" ], [ "Yan", "Xiao-Yong", "" ] ]
TITLE: Information Filtering via Collaborative User Clustering Modeling ABSTRACT: The past few years have witnessed the great success of recommender systems, which can significantly help users find personalized items in the information era. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most research on this topic has focused on mining the direct relationships between users and items. In this paper, we optimize the standard MF by integrating a user clustering regularization term. Our model considers not only the user-item rating information, but also takes the user interests into account. We compared the proposed model with three other typical methods: User-Mean (UM), Item-Mean (IM) and standard MF. Experimental results on a real-world dataset, \emph{MovieLens}, show that our method performs much better than the other three methods in recommendation accuracy.
no_new_dataset
0.949482
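One plausible reading of the user-clustering regularization in the record above is an extra SGD term pulling each user factor toward its cluster centroid; the paper's exact objective may differ. A sketch assuming cluster ids 0..C-1 and ratings given as (user, item, rating) triples:

```python
import numpy as np

def mf_user_cluster(R, cluster_of, n_factors=16, lr=0.01,
                    lam=0.05, beta=0.05, epochs=20):
    """SGD matrix factorization with an added penalty beta*||p_u - c_u||^2
    drawing each user vector toward its cluster centroid c_u."""
    n_u = max(u for u, _, _ in R) + 1
    n_i = max(i for _, i, _ in R) + 1
    n_c = max(cluster_of) + 1
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_u, n_factors))
    Q = 0.1 * rng.standard_normal((n_i, n_factors))
    cids = np.asarray(cluster_of)
    for _ in range(epochs):
        # Recompute cluster centroids from the current user factors.
        cents = np.stack([P[cids == c].mean(axis=0) if (cids == c).any()
                          else np.zeros(n_factors) for c in range(n_c)])
        for u, i, r in R:
            e = r - P[u] @ Q[i]
            gu = -e * Q[i] + lam * P[u] + beta * (P[u] - cents[cids[u]])
            gi = -e * P[u] + lam * Q[i]
            P[u] -= lr * gu
            Q[i] -= lr * gi
    return P, Q
```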
1309.2920
Chunxiao Jiang
Chunxiao Jiang and Yan Chen and K. J. Ray Liu
Evolutionary Information Diffusion over Social Networks
null
null
10.1109/TSP.2014.2339799
null
cs.GT cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social networks have become ubiquitous in our daily life and, as such, have attracted great research interest recently. A key challenge is that they are of extremely large scale, with tremendous information flows, creating the phenomenon of "Big Data". Under such circumstances, understanding information diffusion over social networks has become an important research issue. Most of the existing works on information diffusion analysis are based on either network structure modeling or empirical approaches with dataset mining. However, information diffusion is also heavily influenced by network users' decisions, actions and their socio-economic connections, which is generally ignored in existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we analyze the framework in uniform degree and non-uniform degree networks and derive closed-form expressions for the evolutionarily stable network states. Moreover, information diffusion over two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network, is also highlighted. To verify our theoretical analysis, we conduct experiments using both synthetic networks and the real-world Facebook network, as well as real-world information spreading datasets from Twitter and Memetracker. Experiments show that the proposed game theoretic framework is effective and practical in modeling social network users' information forwarding behaviors.
[ { "version": "v1", "created": "Wed, 11 Sep 2013 19:22:33 GMT" } ]
2015-06-17T00:00:00
[ [ "Jiang", "Chunxiao", "" ], [ "Chen", "Yan", "" ], [ "Liu", "K. J. Ray", "" ] ]
TITLE: Evolutionary Information Diffusion over Social Networks ABSTRACT: Social networks have become ubiquitous in our daily life and, as such, have attracted great research interest recently. A key challenge is that they are of extremely large scale, with tremendous information flows, creating the phenomenon of "Big Data". Under such circumstances, understanding information diffusion over social networks has become an important research issue. Most of the existing works on information diffusion analysis are based on either network structure modeling or empirical approaches with dataset mining. However, information diffusion is also heavily influenced by network users' decisions, actions and their socio-economic connections, which is generally ignored in existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we analyze the framework in uniform degree and non-uniform degree networks and derive closed-form expressions for the evolutionarily stable network states. Moreover, information diffusion over two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network, is also highlighted. To verify our theoretical analysis, we conduct experiments using both synthetic networks and the real-world Facebook network, as well as real-world information spreading datasets from Twitter and Memetracker. Experiments show that the proposed game theoretic framework is effective and practical in modeling social network users' information forwarding behaviors.
no_new_dataset
0.952442
1309.3330
Aditya Vempaty
Aditya Vempaty, Lav R. Varshney and Pramod K. Varshney
Reliable Crowdsourcing for Multi-Class Labeling using Coding Theory
20 pages, 11 figures, under revision, IEEE Journal of Selected Topics in Signal Processing
null
10.1109/JSTSP.2014.2316116
null
cs.IT cs.SI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crowdsourcing systems often have crowd workers that perform unreliable work on the task they are assigned. In this paper, we propose the use of error-control codes and decoding algorithms to design crowdsourcing systems for reliable classification despite unreliable crowd workers. Coding-theory based techniques also allow us to pose easy-to-answer binary questions to the crowd workers. We consider three different crowdsourcing models: systems with independent crowd workers, systems with peer-dependent reward schemes, and systems where workers have common sources of information. For each of these models, we analyze classification performance with the proposed coding-based scheme. We develop an ordering principle for the quality of crowds and describe how system performance changes with the quality of the crowd. We also show that pairing among workers and diversification of the questions help in improving system performance. We demonstrate the effectiveness of the proposed coding-based scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that use of good codes may improve the performance of the crowdsourcing task over typical majority-voting approaches.
[ { "version": "v1", "created": "Thu, 12 Sep 2013 23:10:32 GMT" }, { "version": "v2", "created": "Wed, 22 Jan 2014 21:23:43 GMT" } ]
2015-06-17T00:00:00
[ [ "Vempaty", "Aditya", "" ], [ "Varshney", "Lav R.", "" ], [ "Varshney", "Pramod K.", "" ] ]
TITLE: Reliable Crowdsourcing for Multi-Class Labeling using Coding Theory ABSTRACT: Crowdsourcing systems often have crowd workers that perform unreliable work on the task they are assigned. In this paper, we propose the use of error-control codes and decoding algorithms to design crowdsourcing systems for reliable classification despite unreliable crowd workers. Coding-theory based techniques also allow us to pose easy-to-answer binary questions to the crowd workers. We consider three different crowdsourcing models: systems with independent crowd workers, systems with peer-dependent reward schemes, and systems where workers have common sources of information. For each of these models, we analyze classification performance with the proposed coding-based scheme. We develop an ordering principle for the quality of crowds and describe how system performance changes with the quality of the crowd. We also show that pairing among workers and diversification of the questions help in improving system performance. We demonstrate the effectiveness of the proposed coding-based scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that use of good codes may improve the performance of the crowdsourcing task over typical majority-voting approaches.
no_new_dataset
0.953449
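The coding-theoretic scheme in the record above boils down to: encode the K classes as rows of a binary code matrix, let worker j answer the easy binary question defined by column j, and decode by minimum Hamming distance, which tolerates some unreliable answers. A sketch with a hypothetical code:

```python
import numpy as np

def crowd_classify(answers, codebook):
    """Decode a multi-class label from binary worker answers.
    codebook: (K classes x W workers) 0/1 matrix; column j defines the
    yes/no question posed to worker j. answers: length-W 0/1 vector of
    (possibly noisy) worker responses."""
    answers = np.asarray(answers)
    dists = (codebook != answers).sum(axis=1)  # Hamming distance per class
    return int(np.argmin(dists))

# Example with a hypothetical 4-class, 7-worker code:
# codebook = np.array([[0,0,0,0,0,0,0],
#                      [0,1,1,1,1,0,0],
#                      [1,0,1,1,0,1,0],
#                      [1,1,0,1,0,0,1]])
# crowd_classify([0,1,1,0,1,0,0], codebook)  -> 1 (one worker error corrected)
```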
1309.4411
Ginestra Bianconi
Arda Halu, Satyam Mukherjee and Ginestra Bianconi
Emergence of overlap in ensembles of spatial multiplexes and statistical mechanics of spatial interacting networks ensembles
(12 pages, 4 figures) for downloading data see URL http://sites.google.com/site/satyammukherjee/pubs
Phys. Rev. E 89, 012806 (2014)
10.1103/PhysRevE.89.012806
null
physics.soc-ph cond-mat.dis-nn cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial networks range from brain networks to transportation networks and infrastructures. Recently, interacting and multiplex networks have been attracting great attention because their dynamics and robustness cannot be understood without treating several networks at the same time. Here we present maximum entropy ensembles of spatial multiplex and spatial interacting networks that can be used to model spatial multilayer network structures and to build null models of real datasets. We show that spatial multiplexes naturally develop a significant overlap of the links, a notable property of many multiplexes that can significantly affect the dynamics taking place on them. Additionally, we characterize ensembles of spatial interacting networks and analyse the structure of interacting airport and railway networks in India, showing the effect of space in determining the link probability.
[ { "version": "v1", "created": "Tue, 17 Sep 2013 18:05:29 GMT" }, { "version": "v2", "created": "Thu, 26 Dec 2013 17:59:58 GMT" }, { "version": "v3", "created": "Wed, 29 Apr 2015 19:49:40 GMT" } ]
2015-06-17T00:00:00
[ [ "Halu", "Arda", "" ], [ "Mukherjee", "Satyam", "" ], [ "Bianconi", "Ginestra", "" ] ]
TITLE: Emergence of overlap in ensembles of spatial multiplexes and statistical mechanics of spatial interacting networks ensembles ABSTRACT: Spatial networks range from brain networks to transportation networks and infrastructures. Recently, interacting and multiplex networks have been attracting great attention because their dynamics and robustness cannot be understood without treating several networks at the same time. Here we present maximal entropy ensembles of spatial multiplex and spatial interacting networks that can be used to model spatial multilayer network structures and to build null models of real datasets. We show that spatial multiplexes naturally develop a significant overlap of the links, a notable property of many multiplexes that can significantly affect the dynamics taking place on them. Additionally, we characterize ensembles of spatial interacting networks and analyse the structure of interacting airport and railway networks in India, showing the effect of space in determining the link probability.
no_new_dataset
0.949248
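A minimal numerical sketch of the overlap effect described above, under an assumed form for the distance-dependent link probability: two independently drawn spatial layers share far more links than two non-spatial Erdos-Renyi layers of the same density, simply because both favour short-range edges. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300
pos = rng.random((N, 2))                          # nodes embedded in the unit square
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
iu = np.triu_indices(N, k=1)                      # one entry per node pair

def layer(p_link):
    """One layer: each pair linked independently with probability p_link(d)."""
    return rng.random(len(iu[0])) < p_link(d[iu])

# Spatial layers: link probability decays with distance (assumed form).
spatial = lambda dist: np.minimum(1.0, 0.05 / (dist + 0.02))
A, B = layer(spatial), layer(spatial)
overlap_spatial = (A & B).sum() / min(A.sum(), B.sum())

# Null model: non-spatial layers with the same average link density.
p0 = 0.5 * (A.mean() + B.mean())
C = layer(lambda dist: np.full_like(dist, p0))
D = layer(lambda dist: np.full_like(dist, p0))
overlap_er = (C & D).sum() / min(C.sum(), D.sum())

print(f"edge overlap, spatial layers:  {overlap_spatial:.3f}")
print(f"edge overlap, non-spatial ER:  {overlap_er:.3f}")
```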
1309.7031
Nicola Perra
Suyu Liu, Nicola Perra, Marton Karsai, Alessandro Vespignani
Controlling Contagion Processes in Time-Varying Networks
null
null
10.1103/PhysRevLett.112.118702
null
physics.soc-ph cs.SI q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vast majority of strategies aimed at controlling contagion processes on networks consider the connectivity pattern of the system as either quenched or annealed. However, in the real world many networks are highly dynamical and evolve in time concurrently with the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for time-varying networks. We consider the removal/immunization of individual nodes according to their activity in the network and develop a block variable mean-field approach that allows the derivation of the equations describing the evolution of the contagion process concurrently with the network dynamics. We derive the critical immunization threshold and assess the effectiveness of the control strategies. Finally, we validate the theoretical picture by simulating numerically the information spreading process and control strategies in both synthetic networks and a large-scale, real-world mobile telephone call dataset.
[ { "version": "v1", "created": "Thu, 26 Sep 2013 19:50:15 GMT" } ]
2015-06-17T00:00:00
[ [ "Liu", "Suyu", "" ], [ "Perra", "Nicola", "" ], [ "Karsai", "Marton", "" ], [ "Vespignani", "Alessandro", "" ] ]
TITLE: Controlling Contagion Processes in Time-Varying Networks ABSTRACT: The vast majority of strategies aimed at controlling contagion processes on networks consider the connectivity pattern of the system as either quenched or annealed. However, in the real world many networks are highly dynamical and evolve in time concurrently with the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for time-varying networks. We consider the removal/immunization of individual nodes according to their activity in the network and develop a block variable mean-field approach that allows the derivation of the equations describing the evolution of the contagion process concurrently with the network dynamics. We derive the critical immunization threshold and assess the effectiveness of the control strategies. Finally, we validate the theoretical picture by simulating numerically the information spreading process and control strategies in both synthetic networks and a large-scale, real-world mobile telephone call dataset.
no_new_dataset
0.946547
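A rough simulation sketch of the control strategy discussed above, with all parameter values (activity distribution, infection and recovery rates, immunized fraction) chosen for illustration rather than taken from the paper: on an activity-driven network, immunizing the highest-activity nodes suppresses an SIS epidemic far more effectively than immunizing the same number of random nodes.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, steps = 2000, 3, 300            # nodes, links per active node, time steps
beta, mu = 0.6, 0.05                  # per-contact infection and recovery rates
act = 10 ** rng.uniform(-3, -1, N)    # heterogeneous node activity rates a_i

def run_sis(removed):
    """SIS spreading on an activity-driven network; `removed` nodes are immunized."""
    infected = np.zeros(N, bool)
    infected[rng.choice(np.where(~removed)[0], 20, replace=False)] = True
    for _ in range(steps):
        active = (rng.random(N) < act) & ~removed
        for i in np.where(active)[0]:
            partners = rng.choice(N, m, replace=False)   # network rebuilt each step
            for j in partners[~removed[partners]]:
                if infected[i] ^ infected[j] and rng.random() < beta:
                    infected[i] = infected[j] = True
        infected &= rng.random(N) >= mu                  # recoveries
    return infected.mean()

frac = 0.05
random_rm = np.zeros(N, bool)
random_rm[rng.choice(N, int(frac * N), replace=False)] = True
targeted_rm = np.zeros(N, bool)
targeted_rm[np.argsort(act)[-int(frac * N):]] = True     # highest-activity nodes
print(f"prevalence, random immunization:   {run_sis(random_rm):.3f}")
print(f"prevalence, targeted immunization: {run_sis(targeted_rm):.3f}")
```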
1310.2632
Philip Schniter
Jason T. Parker, Philip Schniter, and Volkan Cevher
Bilinear Generalized Approximate Message Passing
null
null
10.1109/TSP.2014.2357776
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
[ { "version": "v1", "created": "Wed, 9 Oct 2013 21:08:40 GMT" }, { "version": "v2", "created": "Fri, 1 Nov 2013 16:45:09 GMT" }, { "version": "v3", "created": "Thu, 5 Jun 2014 14:32:06 GMT" } ]
2015-06-17T00:00:00
[ [ "Parker", "Jason T.", "" ], [ "Schniter", "Philip", "" ], [ "Cevher", "Volkan", "" ] ]
TITLE: Bilinear Generalized Approximate Message Passing ABSTRACT: We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
no_new_dataset
0.947332
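BiG-AMP's message-passing updates are too long to reproduce here; purely as a point of reference for the matrix-completion setting it targets, the sketch below implements the much simpler alternating-least-squares baseline, which is not BiG-AMP itself. Rank, observation fraction, noise level, and the ridge term are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
nrows, ncols, r = 80, 60, 4
Z = rng.standard_normal((nrows, r)) @ rng.standard_normal((r, ncols))  # true low-rank matrix
mask = rng.random((nrows, ncols)) < 0.35                               # observed entries
Y = Z + 0.01 * rng.standard_normal((nrows, ncols))                     # noisy observations

lam = 1e-3                                    # small ridge term for numerical stability
A = rng.standard_normal((nrows, r))
B = rng.standard_normal((ncols, r))
for _ in range(50):
    for i in range(nrows):                    # refit each row factor on its observed entries
        Bi = B[mask[i]]
        A[i] = np.linalg.solve(Bi.T @ Bi + lam * np.eye(r), Bi.T @ Y[i, mask[i]])
    for j in range(ncols):                    # then each column factor
        Aj = A[mask[:, j]]
        B[j] = np.linalg.solve(Aj.T @ Aj + lam * np.eye(r), Aj.T @ Y[mask[:, j], j])

err = np.linalg.norm((A @ B.T - Z)[~mask]) / np.linalg.norm(Z[~mask])
print(f"relative error on held-out entries: {err:.2e}")
```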
1311.1753
Rolf Andreassen
R. Andreassen, B. T. Meadows, M. de Silva, M. D. Sokoloff, and K. Tomko
GooFit: A library for massively parallelising maximum-likelihood fits
Presented at the CHEP 2013 conference
null
10.1088/1742-6596/513/5/052003
null
cs.DC cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fitting complicated models to large datasets is a bottleneck of many analyses. We present GooFit, a library and tool for constructing arbitrarily-complex probability density functions (PDFs) to be evaluated on nVidia GPUs or on multicore CPUs using OpenMP. The massive parallelisation of dividing up event calculations between hundreds of processors can achieve speedups of factors 200-300 in real-world problems.
[ { "version": "v1", "created": "Thu, 7 Nov 2013 17:18:42 GMT" } ]
2015-06-17T00:00:00
[ [ "Andreassen", "R.", "" ], [ "Meadows", "B. T.", "" ], [ "de Silva", "M.", "" ], [ "Sokoloff", "M. D.", "" ], [ "Tomko", "K.", "" ] ]
TITLE: GooFit: A library for massively parallelising maximum-likelihood fits ABSTRACT: Fitting complicated models to large datasets is a bottleneck of many analyses. We present GooFit, a library and tool for constructing arbitrarily-complex probability density functions (PDFs) to be evaluated on nVidia GPUs or on multicore CPUs using OpenMP. The massive parallelisation of dividing up event calculations between hundreds of processors can achieve speedups of factors 200-300 in real-world problems.
no_new_dataset
0.944228
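GooFit's actual C++/CUDA interface is not shown here; the Python sketch below only illustrates the underlying pattern that makes such fits parallelisable: per-event log-likelihood terms are independent, so the NLL can be split into chunks evaluated on separate workers and summed. The toy Gaussian model, dataset size, and worker count are all invented stand-ins for a real physics PDF.

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
EVENTS = rng.normal(1.2, 0.3, size=400_000)    # toy dataset
CHUNKS = np.array_split(EVENTS, 8)             # independent event blocks

def chunk_nll(args):
    """Partial negative log-likelihood of one event block.
    Blocks are independent, so partial NLLs simply add up."""
    k, mu, sigma = args
    return -norm.logpdf(CHUNKS[k], mu, sigma).sum()

def nll(params, pool):
    mu, sigma = params
    if sigma <= 0:                             # keep the optimizer in bounds
        return np.inf
    return sum(pool.map(chunk_nll, [(k, mu, sigma) for k in range(len(CHUNKS))]))

if __name__ == "__main__":
    with Pool(8) as pool:
        fit = minimize(nll, x0=[1.0, 0.5], args=(pool,), method="Nelder-Mead")
    print("fitted (mu, sigma):", fit.x)        # should recover ~ (1.2, 0.3)
```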
1311.2911
Kevin Kung
Kevin S. Kung, Kael Greco, Stanislav Sobolevsky, and Carlo Ratti
Exploring universal patterns in human home-work commuting from mobile phone data
null
Kung KS, Greco K, Sobolevsky S, Ratti C (2014) Exploring Universal Patterns in Human Home-Work Commuting from Mobile Phone Data. PLoS ONE 9(6): e96180
10.1371/journal.pone.0096180
null
cs.SI cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Home-work commuting has always attracted significant research attention because of its impact on human mobility. One of the key assumptions in this domain of study is the universal uniformity of commute times. However, a true comparison of commute patterns has often been hindered by the intrinsic differences in data collection methods, which make observation from different countries potentially biased and unreliable. In the present work, we approach this problem through the use of mobile phone call detail records (CDRs), which offers a consistent method for investigating mobility patterns in wholly different parts of the world. We apply our analysis to a broad range of datasets, at both the country and city scale. Additionally, we compare these results with those obtained from vehicle GPS traces in Milan. While different regions have some unique commute time characteristics, we show that the home-work time distributions and average values within a single region are indeed largely independent of commute distance or country (Portugal, Ivory Coast, and Boston)--despite substantial spatial and infrastructural differences. Furthermore, a comparative analysis demonstrates that such distance-independence holds true only if we consider multimodal commute behaviors--as consistent with previous studies. In car-only (Milan GPS traces) and car-heavy (Saudi Arabia) commute datasets, we see that commute time is indeed influenced by commute distance.
[ { "version": "v1", "created": "Tue, 12 Nov 2013 20:29:14 GMT" }, { "version": "v2", "created": "Wed, 24 Sep 2014 20:46:56 GMT" } ]
2015-06-17T00:00:00
[ [ "Kung", "Kevin S.", "" ], [ "Greco", "Kael", "" ], [ "Sobolevsky", "Stanislav", "" ], [ "Ratti", "Carlo", "" ] ]
TITLE: Exploring universal patterns in human home-work commuting from mobile phone data ABSTRACT: Home-work commuting has always attracted significant research attention because of its impact on human mobility. One of the key assumptions in this domain of study is the universal uniformity of commute times. However, a true comparison of commute patterns has often been hindered by the intrinsic differences in data collection methods, which make observation from different countries potentially biased and unreliable. In the present work, we approach this problem through the use of mobile phone call detail records (CDRs), which offers a consistent method for investigating mobility patterns in wholly different parts of the world. We apply our analysis to a broad range of datasets, at both the country and city scale. Additionally, we compare these results with those obtained from vehicle GPS traces in Milan. While different regions have some unique commute time characteristics, we show that the home-work time distributions and average values within a single region are indeed largely independent of commute distance or country (Portugal, Ivory Coast, and Boston)--despite substantial spatial and infrastructural differences. Furthermore, a comparative analysis demonstrates that such distance-independence holds true only if we consider multimodal commute behaviors--as consistent with previous studies. In car-only (Milan GPS traces) and car-heavy (Saudi Arabia) commute datasets, we see that commute time is indeed influenced by commute distance.
no_new_dataset
0.921922
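A common way to operationalise this kind of analysis is sketched below; the night/daytime heuristics for home and work towers are standard in the CDR literature but are assumptions here, not necessarily the paper's exact criteria, and the column names and toy records are invented.

```python
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def home_work_commutes(cdr, towers):
    """cdr: columns [user, timestamp, tower]; towers: indexed by tower id,
    columns [lat, lon]. Home = modal tower at night, work = modal tower in
    weekday working hours (common heuristics, assumed here)."""
    cdr = cdr.assign(hour=cdr.timestamp.dt.hour, weekday=cdr.timestamp.dt.weekday)
    night = cdr[(cdr.hour >= 22) | (cdr.hour < 6)]
    office = cdr[cdr.hour.between(9, 17) & (cdr.weekday < 5)]
    modal = lambda df: df.groupby("user").tower.agg(lambda s: s.mode().iloc[0])
    locs = pd.DataFrame({"home": modal(night), "work": modal(office)}).dropna()
    h, w = towers.loc[locs.home], towers.loc[locs.work]
    locs["commute_km"] = haversine_km(h.lat.values, h.lon.values,
                                      w.lat.values, w.lon.values)
    return locs

if __name__ == "__main__":
    towers = pd.DataFrame({"lat": [48.85, 48.90, 48.80], "lon": [2.35, 2.30, 2.40]},
                          index=["t0", "t1", "t2"])
    cdr = pd.DataFrame({
        "user": ["u1"] * 4,
        "timestamp": pd.to_datetime(["2013-11-04 23:10", "2013-11-05 02:00",
                                     "2013-11-05 10:00", "2013-11-05 15:00"]),
        "tower": ["t0", "t0", "t1", "t1"],
    })
    print(home_work_commutes(cdr, towers))
```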
1407.7390
Jos\'e Ram\'on Padilla-L\'opez
Jos\'e Ram\'on Padilla-L\'opez and Alexandros Andr\'e Chaaraoui and Francisco Fl\'orez-Revuelta
A discussion on the validation tests employed to compare human action recognition methods using the MSR Action3D dataset
16 pages and 7 tables
null
null
hdl:10045/39889
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most widely used dataset that includes depth information acquired from an RGB-D device, has been performed. We found that the validation method used by each work differs from the others, so a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking this issue into account. Therefore, we present different rankings according to the methodology used for the validation, in order to clarify the existing confusion.
[ { "version": "v1", "created": "Mon, 28 Jul 2014 11:59:30 GMT" }, { "version": "v2", "created": "Mon, 12 Jan 2015 11:30:40 GMT" }, { "version": "v3", "created": "Tue, 16 Jun 2015 19:57:45 GMT" } ]
2015-06-17T00:00:00
[ [ "Padilla-López", "José Ramón", "" ], [ "Chaaraoui", "Alexandros André", "" ], [ "Flórez-Revuelta", "Francisco", "" ] ]
TITLE: A discussion on the validation tests employed to compare human action recognition methods using the MSR Action3D dataset ABSTRACT: This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most widely used dataset that includes depth information acquired from an RGB-D device, has been performed. We found that the validation method used by each work differs from the others, so a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking this issue into account. Therefore, we present different rankings according to the methodology used for the validation, in order to clarify the existing confusion.
no_new_dataset
0.867598
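The inflation caused by mixing validation protocols is easy to reproduce synthetically. In the sketch below (features, labels, and classifier are all placeholders, not MSR Action3D data), samples from the same subject share a subject-specific execution style, so a random split leaks that style into the test set, while a cross-subject split of the kind commonly used on MSR Action3D gives a lower, more honest score.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, d, n_subj, n_act = 600, 50, 10, 20          # MSR Action3D has 10 subjects, 20 actions
subj = rng.integers(0, n_subj, n)
y = rng.integers(0, n_act, n)
action_sig = rng.standard_normal((n_act, d))            # signal shared across subjects
style = rng.standard_normal((n_subj, n_act, d))         # subject-specific execution style
X = 0.6 * action_sig[y] + style[subj, y] + 0.3 * rng.standard_normal((n, d))

def accuracy(tr, te):
    return SVC().fit(X[tr], y[tr]).score(X[te], y[te])

# Protocol A: random split -- clips of the same subject+action leak into both sides.
tr, te = train_test_split(np.arange(n), test_size=0.5, random_state=0)
print(f"random split:        {accuracy(tr, te):.3f}")

# Protocol B: cross-subject split (odd subjects train, even subjects test).
print(f"cross-subject split: "
      f"{accuracy(np.where(subj % 2 == 1)[0], np.where(subj % 2 == 0)[0]):.3f}")
```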
1506.01071
Aleksey Buzmakov
Aleksey Buzmakov and Sergei O. Kuznetsov and Amedeo Napoli
Fast Generation of Best Interval Patterns for Nonmonotonic Constraints
18 pages; 2 figures; 2 tables; 1 algorithm; PKDD 2015 Conference Scientific Track
null
null
null
cs.AI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In pattern mining, the main challenge is the exponential explosion of the set of patterns. Typically, to solve this problem, a constraint for pattern selection is introduced. One of the first constraints proposed in pattern mining is the support (frequency) of a pattern in a dataset. Frequency is an anti-monotonic function, i.e., given an infrequent pattern, all its superpatterns are not frequent. However, many other constraints for pattern selection are not (anti-)monotonic, which makes it difficult to generate patterns satisfying these constraints. In this paper we introduce the notion of projection-antimonotonicity and the $\theta$-$\Sigma o\phi\iota\alpha$ algorithm, which allows efficient generation of the best patterns for some nonmonotonic constraints. In this paper we consider stability and the $\Delta$-measure, which are nonmonotonic constraints, and apply them to interval tuple datasets. In the experiments, we compute the best interval tuple patterns w.r.t. these measures and show the advantage of our approach over postfiltering approaches. KEYWORDS: Pattern mining, nonmonotonic constraints, interval tuple data
[ { "version": "v1", "created": "Tue, 2 Jun 2015 21:32:14 GMT" }, { "version": "v2", "created": "Tue, 16 Jun 2015 15:31:19 GMT" } ]
2015-06-17T00:00:00
[ [ "Buzmakov", "Aleksey", "" ], [ "Kuznetsov", "Sergei O.", "" ], [ "Napoli", "Amedeo", "" ] ]
TITLE: Fast Generation of Best Interval Patterns for Nonmonotonic Constraints ABSTRACT: In pattern mining, the main challenge is the exponential explosion of the set of patterns. Typically, to solve this problem, a constraint for pattern selection is introduced. One of the first constraints proposed in pattern mining is the support (frequency) of a pattern in a dataset. Frequency is an anti-monotonic function, i.e., given an infrequent pattern, all its superpatterns are not frequent. However, many other constraints for pattern selection are not (anti-)monotonic, which makes it difficult to generate patterns satisfying these constraints. In this paper we introduce the notion of projection-antimonotonicity and the $\theta$-$\Sigma o\phi\iota\alpha$ algorithm, which allows efficient generation of the best patterns for some nonmonotonic constraints. In this paper we consider stability and the $\Delta$-measure, which are nonmonotonic constraints, and apply them to interval tuple datasets. In the experiments, we compute the best interval tuple patterns w.r.t. these measures and show the advantage of our approach over postfiltering approaches. KEYWORDS: Pattern mining, nonmonotonic constraints, interval tuple data
no_new_dataset
0.953232
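As a concrete reminder of what anti-monotonicity licenses, the tiny sketch below checks on an invented interval-tuple dataset that support can only shrink as a pattern's intervals are narrowed, which is the property that enables apriori-style pruning and that the nonmonotonic measures discussed above lack.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.random((1000, 2))          # toy interval-tuple dataset: 2 numeric attributes

def support(pattern):
    """pattern: one (lo, hi) interval per attribute.
    An object matches when every attribute value falls inside its interval."""
    ok = np.ones(len(data), bool)
    for k, (lo, hi) in enumerate(pattern):
        ok &= (data[:, k] >= lo) & (data[:, k] <= hi)
    return int(ok.sum())

broad  = [(0.0, 1.0), (0.2, 0.9)]
narrow = [(0.1, 0.8), (0.2, 0.9)]     # more specific pattern: intervals only shrink
assert support(narrow) <= support(broad)   # anti-monotonicity of frequency
print(support(broad), support(narrow))
```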
1506.04757
Julian McAuley
Julian McAuley, Christopher Targett, Qinfeng Shi, Anton van den Hengel
Image-based Recommendations on Styles and Substitutes
11 pages, 10 figures, SIGIR 2015
null
null
null
cs.CV cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans inevitably develop a sense of the relationships between objects, some of which are based on their appearance. Some pairs of objects might be seen as being alternatives to each other (such as two pairs of jeans), while others may be seen as being complementary (such as a pair of jeans and a matching shirt). This information guides many of the choices that people make, from buying clothes to their interactions with each other. We seek here to model this human sense of the relationships between objects based on their appearance. Our approach is not based on fine-grained modeling of user annotations but rather on capturing the largest dataset possible and developing a scalable method for uncovering human notions of the visual relationships within. We cast this as a network inference problem defined on graphs of related images, and provide a large-scale dataset for the training and evaluation of the same. The system we develop is capable of recommending which clothes and accessories will go well together (and which will not), amongst a host of other applications.
[ { "version": "v1", "created": "Mon, 15 Jun 2015 20:01:49 GMT" } ]
2015-06-17T00:00:00
[ [ "McAuley", "Julian", "" ], [ "Targett", "Christopher", "" ], [ "Shi", "Qinfeng", "" ], [ "Hengel", "Anton van den", "" ] ]
TITLE: Image-based Recommendations on Styles and Substitutes ABSTRACT: Humans inevitably develop a sense of the relationships between objects, some of which are based on their appearance. Some pairs of objects might be seen as being alternatives to each other (such as two pairs of jeans), while others may be seen as being complementary (such as a pair of jeans and a matching shirt). This information guides many of the choices that people make, from buying clothes to their interactions with each other. We seek here to model this human sense of the relationships between objects based on their appearance. Our approach is not based on fine-grained modeling of user annotations but rather on capturing the largest dataset possible and developing a scalable method for uncovering human notions of the visual relationships within. We cast this as a network inference problem defined on graphs of related images, and provide a large-scale dataset for the training and evaluation of the same. The system we develop is capable of recommending which clothes and accessories will go well together (and which will not), amongst a host of other applications.
no_new_dataset
0.939582
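A stripped-down version of the basic recipe, learn a low-rank distance between image feature vectors and pass it through a logistic link to predict whether two items are related, is sketched below; this is far simpler than the paper's full system, and the random features, hidden-subspace ground truth, dimensions, and learning rate are all illustrative assumptions.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(7)
n, f, r = 400, 64, 8
feats = rng.standard_normal((n, f))            # stand-in for real image descriptors
# Toy ground truth: a pair is "related" when close in a hidden subspace.
H = rng.standard_normal((f, r)) / np.sqrt(f)
pairs = rng.integers(0, n, (5000, 2))
hidden = np.linalg.norm((feats[pairs[:, 0]] - feats[pairs[:, 1]]) @ H, axis=1)
labels = (hidden < np.median(hidden)).astype(float)

# Model: P(related) = sigmoid(c - ||(x_i - x_j) W||^2) with low-rank W,
# trained by full-batch gradient descent on the logistic loss.
W = 0.25 * rng.standard_normal((f, r)) / np.sqrt(f)
c, lr = 1.0, 0.1
diff = feats[pairs[:, 0]] - feats[pairs[:, 1]]
for _ in range(1000):
    z = diff @ W
    p = expit(c - (z ** 2).sum(axis=1))
    # dL/dW = 2 * diff^T ((y - p) * z); dL/dc = mean(p - y)
    W -= lr * 2 * diff.T @ ((labels - p)[:, None] * z) / len(pairs)
    c -= lr * (p - labels).mean()
print("training accuracy:", ((p > 0.5) == labels.astype(bool)).mean())
```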
1506.04776
Jeff Heaton
Jeff Heaton
Encog: Library of Interchangeable Machine Learning Models for Java and C#
null
null
null
null
cs.MS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the Encog library for Java and C#, a scalable, adaptable, multiplatform machine learning framework that was 1st released in 2008. Encog allows a variety of machine learning models to be applied to datasets using regression, classification, and clustering. Various supported machine learning models can be used interchangeably with minimal recoding. Encog uses efficient multithreaded code to reduce training time by exploiting modern multicore processors. The current version of Encog can be downloaded from http://www.encog.org.
[ { "version": "v1", "created": "Mon, 15 Jun 2015 21:20:06 GMT" } ]
2015-06-17T00:00:00
[ [ "Heaton", "Jeff", "" ] ]
TITLE: Encog: Library of Interchangeable Machine Learning Models for Java and C# ABSTRACT: This paper introduces the Encog library for Java and C#, a scalable, adaptable, multiplatform machine learning framework that was 1st released in 2008. Encog allows a variety of machine learning models to be applied to datasets using regression, classification, and clustering. Various supported machine learning models can be used interchangeably with minimal recoding. Encog uses efficient multithreaded code to reduce training time by exploiting modern multicore processors. The current version of Encog can be downloaded from http://www.encog.org.
no_new_dataset
0.947137
1506.04803
Afshin Rahimi
Afshin Rahimi, Duy Vu, Trevor Cohn, and Timothy Baldwin
Exploiting Text and Network Context for Geolocation of Social Media Users
null
null
null
null
cs.CL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on automatically geolocating social media users has conventionally been based on the text content of posts from a given user or the social network of the user, with very little crossover between the two, and no benchmarking of the two approaches over comparable datasets. We bring the two threads of research together in first proposing a text-based method based on adaptive grids, followed by a hybrid network- and text-based method. Evaluating over three Twitter datasets, we show that the empirical difference between text- and network-based methods is not great, and that hybridisation of the two is superior to the component methods, especially in contexts where the user graph is not well connected. We achieve state-of-the-art results on all three datasets.
[ { "version": "v1", "created": "Tue, 16 Jun 2015 00:32:33 GMT" } ]
2015-06-17T00:00:00
[ [ "Rahimi", "Afshin", "" ], [ "Vu", "Duy", "" ], [ "Cohn", "Trevor", "" ], [ "Baldwin", "Timothy", "" ] ]
TITLE: Exploiting Text and Network Context for Geolocation of Social Media Users ABSTRACT: Research on automatically geolocating social media users has conventionally been based on the text content of posts from a given user or the social network of the user, with very little crossover between the two, and no benchmarking of the two approaches over comparable datasets. We bring the two threads of research together in first proposing a text-based method based on adaptive grids, followed by a hybrid network- and text-based method. Evaluating over three Twitter datasets, we show that the empirical difference between text- and network-based methods is not great, and that hybridisation of the two is superior to the component methods, especially in contexts where the user graph is not well connected. We achieve state-of-the-art results on all three datasets.
no_new_dataset
0.948155
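The network side of such hybrids is often a form of spatial label propagation; the sketch below shows one hedged variant (the graph, coordinates, iteration count, and the coordinate-wise median update are all illustrative choices, not the paper's exact method).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
# Toy @mention graph; a subset of users has known (lat, lon) from training data.
G = nx.barabasi_albert_graph(500, 3, seed=8)
known = {int(u): (rng.uniform(-40, 60), rng.uniform(-120, 150))
         for u in rng.choice(500, 150, replace=False)}

def propagate(G, known, iters=5):
    """Spatial label propagation: each unlabeled user adopts the
    coordinate-wise median of its located neighbours; users with
    ground-truth locations stay fixed."""
    est = dict(known)
    for _ in range(iters):
        new = dict(est)
        for u in G:
            if u in known:
                continue
            pts = [est[v] for v in G[u] if v in est]
            if pts:
                new[u] = tuple(np.median(np.array(pts), axis=0))
        est = new
    return est

est = propagate(G, known)
print(f"located {len(est)} of {len(G)} users")
```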
1506.04815
Amit Chavan
Amit Chavan, Silu Huang, Amol Deshpande, Aaron Elmore, Samuel Madden and Aditya Parameswaran
Towards a unified query language for provenance and versioning
Theory and Practice of Provenance, 2015
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organizations and teams collect and acquire data from various sources, such as social interactions, financial transactions, sensor data, and genome sequencers. Different teams in an organization as well as different data scientists within a team are interested in extracting a variety of insights which require combining and collaboratively analyzing datasets in diverse ways. DataHub is a system that aims to provide robust version control and provenance management for such a scenario. To be truly useful for collaborative data science, one also needs the ability to specify queries and analysis tasks over the versioning and the provenance information in a unified manner. In this paper, we present an initial design of our query language, called VQuel, that aims to support such unified querying over both types of information, as well as the intermediate and final results of analyses. We also discuss some of the key language design and implementation challenges moving forward.
[ { "version": "v1", "created": "Tue, 16 Jun 2015 01:32:51 GMT" } ]
2015-06-17T00:00:00
[ [ "Chavan", "Amit", "" ], [ "Huang", "Silu", "" ], [ "Deshpande", "Amol", "" ], [ "Elmore", "Aaron", "" ], [ "Madden", "Samuel", "" ], [ "Parameswaran", "Aditya", "" ] ]
TITLE: Towards a unified query language for provenance and versioning ABSTRACT: Organizations and teams collect and acquire data from various sources, such as social interactions, financial transactions, sensor data, and genome sequencers. Different teams in an organization as well as different data scientists within a team are interested in extracting a variety of insights which require combining and collaboratively analyzing datasets in diverse ways. DataHub is a system that aims to provide robust version control and provenance management for such a scenario. To be truly useful for collaborative data science, one also needs the ability to specify queries and analysis tasks over the versioning and the provenance information in a unified manner. In this paper, we present an initial design of our query language, called VQuel, that aims to support such unified querying over both types of information, as well as the intermediate and final results of analyses. We also discuss some of the key language design and implementation challenges moving forward.
no_new_dataset
0.928926
1506.05101
Dhruba Bhattacharyya
Hirak Kashyap, Hasin Afzal Ahmed, Nazrul Hoque, Swarup Roy and Dhruba Kumar Bhattacharyya
Big Data Analytics in Bioinformatics: A Machine Learning Perspective
20 pages survey paper on Big data analytics in Bioinformatics
null
null
null
cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bioinformatics research is characterized by voluminous and incremental datasets and complex data analytics methods. The machine learning methods used in bioinformatics are iterative and parallel. These methods can be scaled to handle big data using distributed and parallel computing technologies. Usually, big data tools perform computation in batch mode and are not optimized for iterative processing and high data dependency among operations. In recent years, parallel, incremental, and multi-view machine learning algorithms have been proposed. Similarly, graph-based architectures and in-memory big data tools have been developed to minimize I/O cost and optimize iterative processing. However, standard big data architectures and tools are still lacking for many important bioinformatics problems, such as fast construction of co-expression and regulatory networks and salient module identification, detection of complexes over growing protein-protein interaction data, fast analysis of massive DNA, RNA, and protein sequence data, and fast querying on incremental and heterogeneous disease networks. This paper addresses the issues and challenges posed by several big data problems in bioinformatics, and gives an overview of the state of the art and future research opportunities.
[ { "version": "v1", "created": "Mon, 15 Jun 2015 11:32:00 GMT" } ]
2015-06-17T00:00:00
[ [ "Kashyap", "Hirak", "" ], [ "Ahmed", "Hasin Afzal", "" ], [ "Hoque", "Nazrul", "" ], [ "Roy", "Swarup", "" ], [ "Bhattacharyya", "Dhruba Kumar", "" ] ]
TITLE: Big Data Analytics in Bioinformatics: A Machine Learning Perspective ABSTRACT: Bioinformatics research is characterized by voluminous and incremental datasets and complex data analytics methods. The machine learning methods used in bioinformatics are iterative and parallel. These methods can be scaled to handle big data using distributed and parallel computing technologies. Usually, big data tools perform computation in batch mode and are not optimized for iterative processing and high data dependency among operations. In recent years, parallel, incremental, and multi-view machine learning algorithms have been proposed. Similarly, graph-based architectures and in-memory big data tools have been developed to minimize I/O cost and optimize iterative processing. However, standard big data architectures and tools are still lacking for many important bioinformatics problems, such as fast construction of co-expression and regulatory networks and salient module identification, detection of complexes over growing protein-protein interaction data, fast analysis of massive DNA, RNA, and protein sequence data, and fast querying on incremental and heterogeneous disease networks. This paper addresses the issues and challenges posed by several big data problems in bioinformatics, and gives an overview of the state of the art and future research opportunities.
no_new_dataset
0.944689
1306.6455
Sergio Servidio
S. Servidio, K.T. Osman, F. Valentini, D. Perrone, F. Califano, S. Chapman, W. H. Matthaeus, and P. Veltri
Proton Kinetic Effects in Vlasov and Solar Wind Turbulence
12 pages, 3 figures
null
10.1088/2041-8205/781/2/L27
null
physics.space-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kinetic plasma processes have been investigated in the framework of solar wind turbulence, employing Hybrid Vlasov-Maxwell (HVM) simulations. The dependency of proton temperature anisotropy T_{\perp}/T_{\parallel} on the parallel plasma beta \beta_{\parallel}, commonly observed in spacecraft data, has been recovered using an ensemble of HVM simulations. By varying plasma parameters, such as plasma beta and fluctuation level, the simulations explore distinct regions of the parameter space given by T_{\perp}/T_{\parallel} and \beta_{\parallel}, similar to solar wind sub-datasets. Moreover, both simulation and solar wind data suggest that temperature anisotropy is not only associated with magnetic intermittent events, but also with gradient-type structures in the flow and in the density. This connection between non-Maxwellian kinetic effects and various types of intermittency may be a key point for understanding the complex nature of plasma turbulence.
[ { "version": "v1", "created": "Thu, 27 Jun 2013 10:21:26 GMT" } ]
2015-06-16T00:00:00
[ [ "Servidio", "S.", "" ], [ "Osman", "K. T.", "" ], [ "Valentini", "F.", "" ], [ "Perrone", "D.", "" ], [ "Califano", "F.", "" ], [ "Chapman", "S.", "" ], [ "Matthaeus", "W. H.", "" ], [ "Veltri", "P.", "" ] ]
TITLE: Proton Kinetic Effects in Vlasov and Solar Wind Turbulence ABSTRACT: Kinetic plasma processes have been investigated in the framework of solar wind turbulence, employing Hybrid Vlasov-Maxwell (HVM) simulations. The dependency of proton temperature anisotropy T_{\perp}/T_{\parallel} on the parallel plasma beta \beta_{\parallel}, commonly observed in spacecraft data, has been recovered using an ensemble of HVM simulations. By varying plasma parameters, such as plasma beta and fluctuation level, the simulations explore distinct regions of the parameter space given by T_{\perp}/T_{\parallel} and \beta_{\parallel}, similar to solar wind sub-datasets. Moreover, both simulation and solar wind data suggest that temperature anisotropy is not only associated with magnetic intermittent events, but also with gradient-type structures in the flow and in the density. This connection between non-Maxwellian kinetic effects and various types of intermittency may be a key point for understanding the complex nature of plasma turbulence.
no_new_dataset
0.956227
1307.3756
Jean Golay
J. Golay, M. Kanevski, C. Vega Orozco, M. Leuenberger
The Multipoint Morisita Index for the Analysis of Spatial Patterns
null
null
10.1016/j.physa.2014.03.063
null
physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many fields, the spatial clustering of sampled data points has many consequences. Therefore, several indices have been proposed to assess the level of clustering affecting datasets (e.g. the Morisita index, Ripley's K-function and R\'enyi's generalized entropy). The classical Morisita index measures how many times more likely it is to select two measurement points from the same quadrat (the dataset is covered by a regular grid of changing size) than it would be in the case of a random distribution generated from a Poisson process. The multipoint version (k-Morisita) takes into account k points, with k greater than or equal to 2. The present research deals with a new development of the k-Morisita index for (1) monitoring network characterization and (2) the detection of patterns in monitored phenomena. From a theoretical perspective, a connection between the k-Morisita index and multifractality has also been found and highlighted on a mathematical multifractal set.
[ { "version": "v1", "created": "Sun, 14 Jul 2013 17:17:24 GMT" }, { "version": "v2", "created": "Thu, 5 Dec 2013 18:13:33 GMT" }, { "version": "v3", "created": "Mon, 13 Jan 2014 16:20:35 GMT" } ]
2015-06-16T00:00:00
[ [ "Golay", "J.", "" ], [ "Kanevski", "M.", "" ], [ "Orozco", "C. Vega", "" ], [ "Leuenberger", "M.", "" ] ]
TITLE: The Multipoint Morisita Index for the Analysis of Spatial Patterns ABSTRACT: In many fields, the spatial clustering of sampled data points has many consequences. Therefore, several indices have been proposed to assess the level of clustering affecting datasets (e.g. the Morisita index, Ripley's K-function and R\'enyi's generalized entropy). The classical Morisita index measures how many times more likely it is to select two measurement points from the same quadrat (the dataset is covered by a regular grid of changing size) than it would be in the case of a random distribution generated from a Poisson process. The multipoint version (k-Morisita) takes into account k points, with k greater than or equal to 2. The present research deals with a new development of the k-Morisita index for (1) monitoring network characterization and (2) the detection of patterns in monitored phenomena. From a theoretical perspective, a connection between the k-Morisita index and multifractality has also been found and highlighted on a mathematical multifractal set.
no_new_dataset
0.950134
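The index itself is compact enough to state in code. The sketch below implements the multipoint version on a unit square, I_k = Q^(k-1) * sum_i n_i(n_i-1)...(n_i-k+1) / [N(N-1)...(N-k+1)], where n_i counts points in quadrat i and Q is the number of quadrats, and checks that it sits near 1 for uniform points and well above 1 for clustered ones; the grid size and point patterns are invented.

```python
import numpy as np

def k_morisita(points, grid_size, k=2):
    """Multipoint Morisita index I_k for points in the unit square,
    with a grid_size x grid_size regular grid of quadrats."""
    N = len(points)
    q = grid_size
    idx = np.minimum((points * q).astype(int), q - 1)
    counts = np.bincount(idx[:, 0] * q + idx[:, 1], minlength=q * q)
    falling = lambda n, kk: np.prod([n - j for j in range(kk)], axis=0)
    return (q * q) ** (k - 1) * falling(counts, k).sum() / falling(N, k)

rng = np.random.default_rng(9)
uniform = rng.random((2000, 2))
clustered = ((rng.random((40, 1, 2))
              + 0.03 * rng.standard_normal((40, 50, 2))).reshape(-1, 2)) % 1.0
for name, pts in [("uniform", uniform), ("clustered", clustered)]:
    print(name, [round(k_morisita(pts, 8, k), 2) for k in (2, 3)])
```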
1506.04257
Matthew Malloy
Matthew L. Malloy, Scott Alfeld, Paul Barford
Contamination Estimation via Convex Relaxations
To appear, ISIT 2015
null
null
null
cs.IT cs.LG math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying anomalies and contamination in datasets is important in a wide variety of settings. In this paper, we describe a new technique for estimating contamination in large, discrete valued datasets. Our approach considers the normal condition of the data to be specified by a model consisting of a set of distributions. Our key contribution is in our approach to contamination estimation. Specifically, we develop a technique that identifies the minimum number of data points that must be discarded (i.e., the level of contamination) from an empirical data set in order to match the model to within a specified goodness-of-fit, controlled by a p-value. Appealing to results from large deviations theory, we show a lower bound on the level of contamination is obtained by solving a series of convex programs. Theoretical results guarantee the bound converges at a rate of $O(\sqrt{\log(p)/p})$, where p is the size of the empirical data set.
[ { "version": "v1", "created": "Sat, 13 Jun 2015 11:51:52 GMT" } ]
2015-06-16T00:00:00
[ [ "Malloy", "Matthew L.", "" ], [ "Alfeld", "Scott", "" ], [ "Barford", "Paul", "" ] ]
TITLE: Contamination Estimation via Convex Relaxations ABSTRACT: Identifying anomalies and contamination in datasets is important in a wide variety of settings. In this paper, we describe a new technique for estimating contamination in large, discrete valued datasets. Our approach considers the normal condition of the data to be specified by a model consisting of a set of distributions. Our key contribution is in our approach to contamination estimation. Specifically, we develop a technique that identifies the minimum number of data points that must be discarded (i.e., the level of contamination) from an empirical data set in order to match the model to within a specified goodness-of-fit, controlled by a p-value. Appealing to results from large deviations theory, we show a lower bound on the level of contamination is obtained by solving a series of convex programs. Theoretical results guarantee the bound converges at a rate of $O(\sqrt{\log(p)/p})$, where p is the size of the empirical data set.
no_new_dataset
0.947624
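The paper's convex relaxations are not reproduced here; as a deliberately simplified surrogate (a greedy heuristic, not their relaxation), the sketch below discards observations one at a time from the bin most in excess of the model until a chi-square goodness-of-fit test stops rejecting at the chosen level, which conveys the "minimum number of points to discard" idea on a toy multinomial example.

```python
import numpy as np
from scipy.stats import chisquare

def contamination_lower_bound(counts, model_p, alpha=0.05):
    """Greedy surrogate for contamination estimation: discard observations
    from the bin most in excess of the model until a chi-square GOF test
    no longer rejects at level alpha. Returns the number discarded."""
    counts = counts.astype(float)
    removed = 0
    while counts.sum() > 1:
        expected = model_p * counts.sum()
        if chisquare(counts, expected).pvalue >= alpha:
            return removed
        counts[np.argmax(counts - expected)] -= 1
        removed += 1
    return removed

rng = np.random.default_rng(10)
model_p = np.full(10, 0.1)                       # assumed model distribution
clean = rng.multinomial(900, model_p)
contam = clean + rng.multinomial(100, [0, 0, 0.5, 0.5, 0, 0, 0, 0, 0, 0])
print("estimated contamination (100 points injected):",
      contamination_lower_bound(contam, model_p))
```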
1506.04359
Yunwen Lei
Yunwen Lei and \"Ur\"un Dogan and Alexander Binder and Marius Kloft
Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis. The theoretical analysis motivates us to introduce a new multi-class classification machine based on $\ell_p$-norm regularization, where the parameter $p$ controls the complexity of the corresponding bounds. We derive an efficient optimization algorithm based on Fenchel duality theory. Benchmarks on several real-world datasets show that the proposed algorithm can achieve significant accuracy gains over the state of the art.
[ { "version": "v1", "created": "Sun, 14 Jun 2015 08:07:23 GMT" } ]
2015-06-16T00:00:00
[ [ "Lei", "Yunwen", "" ], [ "Dogan", "Ürün", "" ], [ "Binder", "Alexander", "" ], [ "Kloft", "Marius", "" ] ]
TITLE: Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms ABSTRACT: This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis. The theoretical analysis motivates us to introduce a new multi-class classification machine based on $\ell_p$-norm regularization, where the parameter $p$ controls the complexity of the corresponding bounds. We derive an efficient optimization algorithm based on Fenchel duality theory. Benchmarks on several real-world datasets show that the proposed algorithm can achieve significant accuracy gains over the state of the art.
no_new_dataset
0.946892
1506.04608
Daja Abdul
Javairia Nazir, Mehreen Sirshar
Flow Segmentation in Dense Crowds
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a framework for segmenting the flow of dense crowds. The flow field generated by the movement of the crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on this flow field and advected using a numerical integration scheme. Flow maps are then generated which associate the initial positions of the particles with their final positions. The gradient of the flow maps gives the amount of divergence of neighboring particles. Both the forward and the backward Finite-Time Lyapunov Exponent (FTLE) fields are calculated; together they give the Lagrangian Coherent Structures (LCS) of the flow in the crowd. The Lagrangian Coherent Structures divide the crowd flow into regions, and these regions have different dynamics. These regions are then used to obtain the boundaries between the different flow segments by applying the watershed algorithm. The experiment is conducted on the crowd dataset of UCF (University of Central Florida).
[ { "version": "v1", "created": "Mon, 15 Jun 2015 14:14:20 GMT" } ]
2015-06-16T00:00:00
[ [ "Nazir", "Javairia", "" ], [ "Sirshar", "Mehreen", "" ] ]
TITLE: Flow Segmentation in Dense Crowds ABSTRACT: This paper proposes a framework for segmenting the flow of dense crowds. The flow field generated by the movement of the crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on this flow field and advected using a numerical integration scheme. Flow maps are then generated which associate the initial positions of the particles with their final positions. The gradient of the flow maps gives the amount of divergence of neighboring particles. Both the forward and the backward Finite-Time Lyapunov Exponent (FTLE) fields are calculated; together they give the Lagrangian Coherent Structures (LCS) of the flow in the crowd. The Lagrangian Coherent Structures divide the crowd flow into regions, and these regions have different dynamics. These regions are then used to obtain the boundaries between the different flow segments by applying the watershed algorithm. The experiment is conducted on the crowd dataset of UCF (University of Central Florida).
no_new_dataset
0.954351
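The pipeline described above translates almost step by step into code. The sketch below runs it on a synthetic two-lane flow standing in for real crowd video (the velocity field, integration time, smoothing, and watershed marker rule are illustrative, and scipy/scikit-image are assumed available): particles are advected to build flow maps, the forward and backward FTLE fields are computed from the flow-map Jacobian, and the watershed of the ridge map yields the flow segments.

```python
import numpy as np
from scipy import ndimage as ndi
from scipy.ndimage import map_coordinates, gaussian_filter
from skimage.segmentation import watershed

ny, nx = 128, 128
Y, X = np.mgrid[0:ny, 0:nx].astype(float)
# Toy "crowd flow": two opposing lanes, standing in for optical flow of a video.
u = np.where(Y < ny / 2, 1.0, -1.0)      # horizontal velocity
v = np.zeros((ny, nx))                   # vertical velocity

def flow_map(sign, T=40.0, dt=0.5):
    """Advect a grid of particles through the (frozen) velocity field."""
    px, py = X.copy(), Y.copy()
    for _ in range(int(T / dt)):
        pu = map_coordinates(u, [py, px], order=1, mode="nearest")
        pv = map_coordinates(v, [py, px], order=1, mode="nearest")
        px += sign * dt * pu
        py += sign * dt * pv
    return px, py

def ftle(sign, T=40.0):
    """Finite-time Lyapunov exponent from the flow-map Jacobian."""
    px, py = flow_map(sign, T)
    dpx_dy, dpx_dx = np.gradient(px)
    dpy_dy, dpy_dx = np.gradient(py)
    c11 = dpx_dx ** 2 + dpy_dx ** 2          # Cauchy-Green tensor entries
    c22 = dpx_dy ** 2 + dpy_dy ** 2
    c12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
    lam = (c11 + c22) / 2 + np.sqrt(((c11 - c22) / 2) ** 2 + c12 ** 2)
    return np.log(np.maximum(lam, 1e-12)) / (2 * T)

# LCS ridges from forward and backward FTLE; watershed basins = flow segments.
ridges = gaussian_filter(np.maximum(ftle(+1), ftle(-1)), 2)
markers, _ = ndi.label(ridges < 0.1 * ridges.max())
segments = watershed(ridges, markers)
print("number of flow segments:", segments.max())
```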
1506.04693
Vincent Labatut
G\"unce Orman (BIT Lab), Vincent Labatut (LIA), Marc Plantevit (LIRIS), Jean-Fran\c{c}ois Boulicaut (LIRIS)
Interpreting communities based on the evolution of a dynamic attributed network
null
Social Network Analysis and Mining Journal (SNAM), 2015, 5, pp.20. \<http://link.springer.com/article/10.1007%2Fs13278-015-0262-4\>. \<10.1007/s13278-015-0262-4\>
10.1007/s13278-015-0262-4
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relatively to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle, or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset, and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
[ { "version": "v1", "created": "Mon, 15 Jun 2015 18:22:38 GMT" } ]
2015-06-16T00:00:00
[ [ "Orman", "Günce", "", "BIT Lab" ], [ "Labatut", "Vincent", "", "LIA" ], [ "Plantevit", "Marc", "", "LIRIS" ], [ "Boulicaut", "Jean-François", "", "LIRIS" ] ]
TITLE: Interpreting communities based on the evolution of a dynamic attributed network ABSTRACT: Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relatively to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle, or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset, and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
no_new_dataset
0.939192
1506.04720
Siqi Nie
Siqi Nie, Qiang Ji
Latent Regression Bayesian Network for Data Representation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep directed generative models have attracted much attention recently due to their expressive representation power and the ability of ancestral sampling. One major difficulty of learning directed models with many latent variables is the intractable inference. To address this problem, most existing algorithms make assumptions to render the latent variables independent of each other, either by designing specific priors, or by approximating the true posterior using a factorized distribution. We believe the correlations among latent variables are crucial for faithful data representation. Driven by this idea, we propose an inference method based on the conditional pseudo-likelihood that preserves the dependencies among the latent variables. For learning, we propose to employ the hard Expectation Maximization (EM) algorithm, which avoids the intractability of the traditional EM by max-out instead of sum-out to compute the data likelihood. Qualitative and quantitative evaluations of our model against state-of-the-art deep models on benchmark datasets demonstrate the effectiveness of the proposed algorithm in data representation and reconstruction.
[ { "version": "v1", "created": "Mon, 15 Jun 2015 19:34:59 GMT" } ]
2015-06-16T00:00:00
[ [ "Nie", "Siqi", "" ], [ "Ji", "Qiang", "" ] ]
TITLE: Latent Regression Bayesian Network for Data Representation ABSTRACT: Deep directed generative models have attracted much attention recently due to their expressive representation power and the ability of ancestral sampling. One major difficulty of learning directed models with many latent variables is the intractable inference. To address this problem, most existing algorithms make assumptions to render the latent variables independent of each other, either by designing specific priors, or by approximating the true posterior using a factorized distribution. We believe the correlations among latent variables are crucial for faithful data representation. Driven by this idea, we propose an inference method based on the conditional pseudo-likelihood that preserves the dependencies among the latent variables. For learning, we propose to employ the hard Expectation Maximization (EM) algorithm, which avoids the intractability of the traditional EM by max-out instead of sum-out to compute the data likelihood. Qualitative and quantitative evaluations of our model against state-of-the-art deep models on benchmark datasets demonstrate the effectiveness of the proposed algorithm in data representation and reconstruction.
no_new_dataset
0.946843
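Hard EM's max-out step is easiest to see on a much simpler latent-variable model than the paper's network; the sketch below applies it to a toy mixture of Bernoullis (all sizes, initializations, and the eps floor are invented): the E-step picks the single best latent assignment instead of summing over it, and the M-step refits parameters from those hard assignments.

```python
import numpy as np

rng = np.random.default_rng(11)
# Toy binary data from 3 latent prototypes (stand-in for a deep directed model).
true_mu = rng.random((3, 20))
h_true = rng.integers(0, 3, 1000)
X = (rng.random((1000, 20)) < true_mu[h_true]).astype(float)

K, eps = 3, 1e-3
mu = rng.uniform(0.3, 0.7, (K, 20))
pi = np.full(K, 1.0 / K)
for _ in range(50):
    # Hard E-step: max-out the latent variable instead of summing it out.
    loglik = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
    h = loglik.argmax(axis=1)
    # M-step: refit parameters from the hard assignments (eps avoids log(0)).
    for k in range(K):
        mask = h == k
        pi[k] = max(mask.mean(), eps)
        if mask.any():
            mu[k] = np.clip(X[mask].mean(axis=0), eps, 1 - eps)
print("cluster sizes:", np.bincount(h, minlength=K))
```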
1303.5577
Ilaria Ermolli
I. Ermolli, K. Matthes, T. Dudok de Wit, N. A. Krivova, K. Tourpali, M. Weber, Y. C. Unruh, L. Gray, U. Langematz, P. Pilewskie, E. Rozanov, W. Schmutz, A. Shapiro, S. K. Solanki, and T. N. Woods
Recent variability of the solar spectral irradiance and its impact on climate modelling
34 pages, 12 figures, accepted for publication in ACP
null
10.5194/acp-13-3945-2013
null
astro-ph.SR physics.ao-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lack of long and reliable time series of solar spectral irradiance (SSI) measurements makes an accurate quantification of solar contributions to recent climate change difficult. Whereas earlier SSI observations and models provided a qualitatively consistent picture of the SSI variability, recent measurements by the SORCE satellite suggest a significantly stronger variability in the ultraviolet (UV) spectral range and changes in the visible and near-infrared (NIR) bands in anti-phase with the solar cycle. A number of recent chemistry-climate model (CCM) simulations have shown that this might have significant implications for the Earth's atmosphere. Motivated by these results, we summarize here our current knowledge of SSI variability and its impact on Earth's climate. We present a detailed overview of existing SSI measurements and provide a thorough comparison of the models available to date. SSI changes influence the Earth's atmosphere, both directly, through changes in shortwave (SW) heating and, therefore, temperature and ozone distributions in the stratosphere, and indirectly, through dynamical feedbacks. We investigate these direct and indirect effects using several state-of-the-art CCM simulations forced with measured and modeled SSI changes. A unique asset of this study is the use of a common comprehensive approach for an issue that is usually addressed separately by different communities. Omissis. Finally, we discuss the reliability of the available data and we propose additional coordinated work, first to build composite SSI datasets out of scattered observations and to refine current SSI models, and second, to run coordinated CCM experiments.
[ { "version": "v1", "created": "Fri, 22 Mar 2013 10:51:01 GMT" } ]
2015-06-15T00:00:00
[ [ "Ermolli", "I.", "" ], [ "Matthes", "K.", "" ], [ "de Wit", "T. Dudok", "" ], [ "Krivova", "N. A.", "" ], [ "Tourpali", "K.", "" ], [ "Weber", "M.", "" ], [ "Unruh", "Y. C.", "" ], [ "Gray", "L.", "" ], [ "Langematz", "U.", "" ], [ "Pilewskie", "P.", "" ], [ "Rozanov", "E.", "" ], [ "Schmutz", "W.", "" ], [ "Shapiro", "A.", "" ], [ "Solanki", "S. K.", "" ], [ "Woods", "T. N.", "" ] ]
TITLE: Recent variability of the solar spectral irradiance and its impact on climate modelling ABSTRACT: The lack of long and reliable time series of solar spectral irradiance (SSI) measurements makes an accurate quantification of solar contributions to recent climate change difficult. Whereas earlier SSI observations and models provided a qualitatively consistent picture of the SSI variability, recent measurements by the SORCE satellite suggest a significantly stronger variability in the ultraviolet (UV) spectral range and changes in the visible and near-infrared (NIR) bands in anti-phase with the solar cycle. A number of recent chemistry-climate model (CCM) simulations have shown that this might have significant implications for the Earth's atmosphere. Motivated by these results, we summarize here our current knowledge of SSI variability and its impact on Earth's climate. We present a detailed overview of existing SSI measurements and provide a thorough comparison of the models available to date. SSI changes influence the Earth's atmosphere, both directly, through changes in shortwave (SW) heating and, therefore, temperature and ozone distributions in the stratosphere, and indirectly, through dynamical feedbacks. We investigate these direct and indirect effects using several state-of-the-art CCM simulations forced with measured and modeled SSI changes. A unique asset of this study is the use of a common comprehensive approach for an issue that is usually addressed separately by different communities. Omissis. Finally, we discuss the reliability of the available data and we propose additional coordinated work, first to build composite SSI datasets out of scattered observations and to refine current SSI models, and second, to run coordinated CCM experiments.
no_new_dataset
0.947866
1303.6170
Brandon Jones
Brandon Jones, Mark Campbell, Lang Tong
Maximum Likelihood Fusion of Stochastic Maps
10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing on 24-March-2013
null
10.1109/TSP.2014.2304435
null
stat.AP cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fusion of independently obtained stochastic maps by collaborating mobile agents is considered. The proposed approach includes two parts: matching of stochastic maps and maximum likelihood alignment. In particular, an affine invariant hypergraph is constructed for each stochastic map, and a bipartite matching via a linear program is used to establish landmark correspondence between stochastic maps. A maximum likelihood alignment procedure is proposed to determine rotation and translation between common landmarks in order to construct a global map within a common frame of reference. A main feature of the proposed approach is its scalability with respect to the number of landmarks: the matching step has polynomial complexity and the maximum likelihood alignment is obtained in closed form. Experimental validation of the proposed fusion approach is performed using the Victoria Park benchmark dataset.
[ { "version": "v1", "created": "Mon, 25 Mar 2013 15:34:26 GMT" } ]
2015-06-15T00:00:00
[ [ "Jones", "Brandon", "" ], [ "Campbell", "Mark", "" ], [ "Tong", "Lang", "" ] ]
TITLE: Maximum Likelihood Fusion of Stochastic Maps ABSTRACT: The fusion of independently obtained stochastic maps by collaborating mobile agents is considered. The proposed approach includes two parts: matching of stochastic maps and maximum likelihood alignment. In particular, an affine invariant hypergraph is constructed for each stochastic map, and a bipartite matching via a linear program is used to establish landmark correspondence between stochastic maps. A maximum likelihood alignment procedure is proposed to determine rotation and translation between common landmarks in order to construct a global map within a common frame of reference. A main feature of the proposed approach is its scalability with respect to the number of landmarks: the matching step has polynomial complexity and the maximum likelihood alignment is obtained in closed form. Experimental validation of the proposed fusion approach is performed using the Victoria Park benchmark dataset.
no_new_dataset
0.95222
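The closed-form alignment step is the classical orthogonal-Procrustes/Kabsch solution; the paper's maximum-likelihood version additionally accounts for landmark uncertainties, which the sketch below omits, and the landmark set, rotation angle, and noise level are invented.

```python
import numpy as np

def align_landmarks(P, Q):
    """Closed-form least-squares rotation R and translation t mapping
    landmark set P onto Q (classical Kabsch/Procrustes solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(12)
P = rng.standard_normal((12, 2))                     # common landmarks, map A frame
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([3.0, -1.5]) + 0.01 * rng.standard_normal((12, 2))
R, t = align_landmarks(P, Q)
print("recovered rotation angle:", np.arctan2(R[1, 0], R[0, 0]))   # ~0.7
```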
1304.5302
Satoshi Eguchi
Satoshi Eguchi
"Superluminal" FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique
25 pages, 9 figures, 12 tables, accepted for publication in PASP
null
10.1086/671105
null
astro-ph.IM cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FITS is the standard file format in astronomy, and it has been extended over the years to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~TB-scale 4-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is a few 10 MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region which a user is interested in should be cut out and transferred to the user (Eguchi et al., 2012). The system will be equipped with a very high-speed disk array to process a TB-scale data cube in a few tens of seconds, so the disk I/O, endian conversion and data processing speeds will be comparable. Hence reducing the endian conversion time is one of the issues in realizing our system. In this paper, I introduce a technique named "just-in-time endian conversion", which delays the endian conversion of each pixel until just before it is really needed, to sweep out the endian conversion time; by applying this method, the FITS processing speed increases by 20% for single threading, and by 40% for multi-threading, compared to CFITSIO. The speed-up provided by the method is tightly related to modern CPU architecture: it improves the efficiency of the instruction pipelines by breaking the "causality" of the programmed instruction code sequence.
[ { "version": "v1", "created": "Fri, 19 Apr 2013 03:29:36 GMT" } ]
2015-06-15T00:00:00
[ [ "Eguchi", "Satoshi", "" ] ]
TITLE: "Superluminal" FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique ABSTRACT: The FITS is the standard file format in astronomy, and it has been extended to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~ TB scale 4-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is a few 10 MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region which a user is interested in should be cut out and transferred to the user (Eguchi et al., 2012). The system will be equipped with a very high-speed disk array to process a TB scale data cube in a few 10 seconds, and the speeds of disk I/O, endian conversion and data processing will be comparable. Hence reducing the endian conversion time is one of the issues in realizing our system. In this paper, I introduce a technique named "just-in-time endian conversion", which delays the endian conversion for each pixel until just before it is really needed, to eliminate the endian conversion time; by applying this method, the FITS processing speed increases by 20% for single threading, and by 40% for multi-threading, compared to CFITSIO. The speed-up achieved by the method is tightly related to modern CPU architecture, which improves the efficiency of instruction pipelines by breaking the "causality" of a programmed instruction code sequence.
no_new_dataset
0.949809
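The paper's implementation is low-level C, but the just-in-time idea — keep pixels in their on-disk byte order and convert only what is actually accessed — can be illustrated with numpy's byte-order-aware dtypes. A hedged sketch; the buffer and requested region are made up:

```python
import numpy as np

# Simulate an on-disk FITS image: BITPIX = -32 means big-endian float32.
native = np.arange(12, dtype=np.float32).reshape(3, 4)
disk_bytes = native.astype('>f4').tobytes()

# Eager approach: convert the whole buffer to native byte order up front.
eager = np.frombuffer(disk_bytes, dtype='>f4').astype('=f4').reshape(3, 4)
assert np.array_equal(eager, native)

# Just-in-time approach: view the raw bytes with a big-endian dtype and
# byteswap only the small region a user actually requests.
lazy_view = np.frombuffer(disk_bytes, dtype='>f4').reshape(3, 4)
requested = np.ascontiguousarray(lazy_view[1:3, 0:2], dtype='=f4')
assert np.array_equal(requested, native[1:3, 0:2])
```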
1305.3532
Alain Barrat
Alain Barrat, Ciro Cattuto
Temporal networks of face-to-face human interactions
Chapter of the book "Temporal Networks", Springer, 2013. Series: Understanding Complex Systems. Holme, Petter; Saram\"aki, Jari (Eds.)
null
10.1007/978-3-642-36461-7_10
null
physics.soc-ph cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ever-increasing adoption of mobile technologies and ubiquitous services makes it possible to sense human behavior at unprecedented levels of detail and scale. Wearable sensors are opening up a new window on human mobility and proximity at the finest resolution of face-to-face proximity. As a consequence, empirical data describing social and behavioral networks are acquiring a longitudinal dimension that brings forth new challenges for analysis and modeling. Here we review recent work on the representation and analysis of temporal networks of face-to-face human proximity, based on large-scale datasets collected in the context of the SocioPatterns collaboration. We show that the raw behavioral data can be studied at various levels of coarse-graining, which turn out to be complementary to one another, with each level exposing different features of the underlying system. We briefly review a generative model of temporal contact networks that reproduces some statistical observables. Then, we shift our focus from surface statistical features to dynamical processes on empirical temporal networks. We discuss how simple dynamical processes can be used as probes to expose important features of the interaction patterns, such as burstiness and causal constraints. We show that simulating dynamical processes on empirical temporal networks can unveil differences between datasets that would otherwise look statistically similar. Moreover, we argue that, due to the temporal heterogeneity of human dynamics, in order to investigate the temporal properties of spreading processes it may be necessary to abandon the notion of wall-clock time in favour of an intrinsic notion of time for each individual node, defined in terms of its activity level. We conclude by highlighting several open research questions raised by the nature of the data at hand.
[ { "version": "v1", "created": "Wed, 15 May 2013 16:18:24 GMT" } ]
2015-06-15T00:00:00
[ [ "Barrat", "Alain", "" ], [ "Cattuto", "Ciro", "" ] ]
TITLE: Temporal networks of face-to-face human interactions ABSTRACT: The ever-increasing adoption of mobile technologies and ubiquitous services makes it possible to sense human behavior at unprecedented levels of detail and scale. Wearable sensors are opening up a new window on human mobility and proximity at the finest resolution of face-to-face proximity. As a consequence, empirical data describing social and behavioral networks are acquiring a longitudinal dimension that brings forth new challenges for analysis and modeling. Here we review recent work on the representation and analysis of temporal networks of face-to-face human proximity, based on large-scale datasets collected in the context of the SocioPatterns collaboration. We show that the raw behavioral data can be studied at various levels of coarse-graining, which turn out to be complementary to one another, with each level exposing different features of the underlying system. We briefly review a generative model of temporal contact networks that reproduces some statistical observables. Then, we shift our focus from surface statistical features to dynamical processes on empirical temporal networks. We discuss how simple dynamical processes can be used as probes to expose important features of the interaction patterns, such as burstiness and causal constraints. We show that simulating dynamical processes on empirical temporal networks can unveil differences between datasets that would otherwise look statistically similar. Moreover, we argue that, due to the temporal heterogeneity of human dynamics, in order to investigate the temporal properties of spreading processes it may be necessary to abandon the notion of wall-clock time in favour of an intrinsic notion of time for each individual node, defined in terms of its activity level. We conclude by highlighting several open research questions raised by the nature of the data at hand.
no_new_dataset
0.946001
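One way to realize the "intrinsic time" suggestion in this abstract is to re-index each node's contact events by its cumulative activity count rather than by wall-clock time. A toy sketch with invented events; the exact definition used in the chapter may differ:

```python
from collections import defaultdict

# Contact events as (wall_clock_time, node_i, node_j), e.g. 20-second
# proximity records from a SocioPatterns-like deployment (toy data here).
events = [(0, 'a', 'b'), (20, 'a', 'c'), (40, 'b', 'c'),
          (60, 'a', 'b'), (80, 'c', 'd')]

activity = defaultdict(int)        # events seen so far, per node
intrinsic = []                     # (tau_i, tau_j, i, j)
for t, i, j in sorted(events):
    activity[i] += 1
    activity[j] += 1
    # Each node's "clock" is its own cumulative activity level.
    intrinsic.append((activity[i], activity[j], i, j))

for row in intrinsic:
    print(row)
```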
1412.3421
Juan Eugenio Iglesias
Juan Eugenio Iglesias and Mert Rory Sabuncu
Multi-Atlas Segmentation of Biomedical Images: A Survey
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, Brandt, Menzel and Maurer Jr (2004), Klein, Mensh, Ghosh, Tourville and Hirsch (2005), and Heckemann, Hajnal, Aljabar, Rueckert and Hammers (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003 - 2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
[ { "version": "v1", "created": "Wed, 10 Dec 2014 19:28:09 GMT" }, { "version": "v2", "created": "Fri, 12 Jun 2015 14:35:30 GMT" } ]
2015-06-15T00:00:00
[ [ "Iglesias", "Juan Eugenio", "" ], [ "Sabuncu", "Mert Rory", "" ] ]
TITLE: Multi-Atlas Segmentation of Biomedical Images: A Survey ABSTRACT: Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, Brandt, Menzel and Maurer Jr (2004), Klein, Mensh, Ghosh, Tourville and Hirsch (2005), and Heckemann, Hajnal, Aljabar, Rueckert and Hammers (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003 - 2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
no_new_dataset
0.945801
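Among the label fusion rules this survey covers, the simplest is majority voting over labels propagated from registered atlases. A minimal sketch for binary labels, with synthetic stand-ins for the atlases already registered to the target:

```python
import numpy as np

rng = np.random.default_rng(1)
# Labels propagated from 5 registered atlases onto a 4x4 target grid
# (0 = background, 1 = structure); synthetic stand-ins here.
atlas_labels = rng.integers(0, 2, size=(5, 4, 4))

# Majority-vote fusion: a voxel is foreground when more than half of
# the atlases say so.
fused = (atlas_labels.sum(axis=0) > atlas_labels.shape[0] / 2).astype(int)
print(fused)
```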
1506.00815
Yuhuang Hu
Yuhuang Hu, M.S. Ishwarya, Chu Kiong Loo
Classify Images with Conceptor Network
This paper has been withdrawn by the author due to a crucial sign error in experiments
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article demonstrates a new conceptor-network-based classifier for classifying images. Mathematical descriptions and analysis are presented. Various tests are conducted using three benchmark datasets: MNIST, CIFAR-10 and CIFAR-100. The experiments show that the conceptor network can offer superior results and more flexible configurations than conventional classifiers such as Softmax Regression and Support Vector Machine (SVM).
[ { "version": "v1", "created": "Tue, 2 Jun 2015 09:49:45 GMT" }, { "version": "v2", "created": "Wed, 3 Jun 2015 13:57:14 GMT" }, { "version": "v3", "created": "Sat, 6 Jun 2015 16:58:41 GMT" }, { "version": "v4", "created": "Fri, 12 Jun 2015 01:13:06 GMT" } ]
2015-06-15T00:00:00
[ [ "Hu", "Yuhuang", "" ], [ "Ishwarya", "M. S.", "" ], [ "Loo", "Chu Kiong", "" ] ]
TITLE: Classify Images with Conceptor Network ABSTRACT: This article demonstrates a new conceptor-network-based classifier for classifying images. Mathematical descriptions and analysis are presented. Various tests are conducted using three benchmark datasets: MNIST, CIFAR-10 and CIFAR-100. The experiments show that the conceptor network can offer superior results and more flexible configurations than conventional classifiers such as Softmax Regression and Support Vector Machine (SVM).
no_new_dataset
0.952175
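This record gives no classifier details (and the paper was withdrawn), but the standard conceptor construction it builds on is C = R(R + alpha^-2 I)^-1 with R the state correlation matrix, and classification by the quadratic positive evidence z^T C z. A sketch of that standard construction only, with random stand-in features:

```python
import numpy as np

def conceptor(states, alpha=1.0):
    """Conceptor matrix C = R (R + alpha^-2 I)^-1 for states of shape (dim, n)."""
    dim, n = states.shape
    R = states @ states.T / n                      # state correlation matrix
    return R @ np.linalg.inv(R + alpha**-2 * np.eye(dim))

rng = np.random.default_rng(2)
class_states = rng.normal(size=(10, 200))          # features of one class
C = conceptor(class_states, alpha=2.0)

# Positive evidence that a new feature vector z belongs to this class;
# in a multi-class setup one would pick the class whose C maximizes it.
z = rng.normal(size=10)
evidence = z @ C @ z
print(evidence)
```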
1506.03837
Weinan Zhang
Weinan Zhang, Jun Wang
Statistical Arbitrage Mining for Display Advertising
In the proceedings of the 21st ACM SIGKDD international conference on Knowledge discovery and data mining (KDD 2015)
null
10.1145/2783258.2783269
null
cs.GT cs.DB
http://creativecommons.org/licenses/publicdomain/
We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices at different market segments or pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM) focusing on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost for an incoming CPM bid request against a portfolio of CPA campaigns based on the estimated conversion rate, bid landscape and other statistics learned from historical data. In SAM, (i) functional optimisation is utilised to seek for optimal bidding to maximise the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns to make a risk and return trade-off. We propose to jointly optimise both components in an EM fashion with high efficiency to help the meta-bidder successfully catch the transient statistical arbitrage opportunities in RTB. Both the offline experiments on a real-world large-scale dataset and online A/B tests on a commercial platform demonstrate the effectiveness of our proposed solution in exploiting arbitrage in various model settings and market environments.
[ { "version": "v1", "created": "Thu, 11 Jun 2015 21:05:26 GMT" } ]
2015-06-15T00:00:00
[ [ "Zhang", "Weinan", "" ], [ "Wang", "Jun", "" ] ]
TITLE: Statistical Arbitrage Mining for Display Advertising ABSTRACT: We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices at different market segments or pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM) focusing on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost for an incoming CPM bid request against a portfolio of CPA campaigns based on the estimated conversion rate, bid landscape and other statistics learned from historical data. In SAM, (i) functional optimisation is utilised to seek for optimal bidding to maximise the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns to make a risk and return trade-off. We propose to jointly optimise both components in an EM fashion with high efficiency to help the meta-bidder successfully catch the transient statistical arbitrage opportunities in RTB. Both the offline experiments on a real-world large-scale dataset and online A/B tests on a commercial platform demonstrate the effectiveness of our proposed solution in exploiting arbitrage in various model settings and market environments.
no_new_dataset
0.943608
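The paper's functional optimisation is richer than this, but the basic quantity such a meta-bidder optimises — the expected net profit of buying a CPM impression and monetising it through a CPA campaign — reduces to win probability times (expected CPA revenue minus cost). A toy sketch; the bid landscape, CVR and payout values are invented for illustration:

```python
import numpy as np

def expected_net_profit(bid, cvr, cpa_payout, win_prob):
    """Expected profit of buying one CPM impression and reselling it as CPA.

    bid:        CPM-side bid price for this impression (cost if we win)
    cvr:        estimated conversion rate of the impression for the campaign
    cpa_payout: advertiser's payment per conversion
    win_prob:   callable bid -> auction win probability (bid landscape)
    """
    return win_prob(bid) * (cvr * cpa_payout - bid)

def win_prob(b):
    # Illustrative bid landscape: win probability saturating with price.
    return b / (b + 2.0)

bids = np.linspace(0.0, 5.0, 501)
profits = [expected_net_profit(b, cvr=0.001, cpa_payout=2000.0,
                               win_prob=win_prob) for b in bids]
best = bids[int(np.argmax(profits))]
print(f"best bid ~ {best:.2f}, expected profit ~ {max(profits):.3f}")
```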
1506.04046
Jason Byrne
Jason P. Byrne
Investigating the Kinematics of Coronal Mass Ejections with the Automated CORIMP Catalog
23 pages, 11 figures, 1 table
null
null
null
astro-ph.SR astro-ph.EP physics.data-an physics.space-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studying coronal mass ejections (CMEs) in coronagraph data can be challenging due to their diffuse structure and transient nature, compounded by the variations in their dynamics, morphology, and frequency of occurrence. The large amounts of data available from missions like the Solar and Heliospheric Observatory (SOHO) make manual cataloging of CMEs tedious and prone to human error, and so a robust method of detection and analysis is required and often preferred. A new coronal image processing catalog called CORIMP has been developed in an effort to achieve this, through the implementation of a dynamic background separation technique and multiscale edge detection. These algorithms together isolate and characterise CME structure in the field-of-view of the Large Angle Spectrometric Coronagraph (LASCO) onboard SOHO. CORIMP also applies a Savitzky-Golay filter, along with quadratic and linear fits, to the height-time measurements for better revealing the true CME speed and acceleration profiles across the plane-of-sky. Here we present a sample of new results from the CORIMP CME catalog, and directly compare them with the other automated catalogs of Computer Aided CME Tracking (CACTus) and Solar Eruptive Events Detection System (SEEDS), as well as the manual CME catalog at the Coordinated Data Analysis Workshop (CDAW) Data Center and a previously published study of the sample events. We further investigate a form of unsupervised machine learning by using a k-means clustering algorithm to distinguish detections of multiple CMEs that occur close together in space and time. While challenges still exist, this investigation and comparison of results demonstrates the reliability and robustness of the CORIMP catalog, proving its effectiveness at detecting and tracking CMEs throughout the LASCO dataset.
[ { "version": "v1", "created": "Fri, 12 Jun 2015 15:39:27 GMT" } ]
2015-06-15T00:00:00
[ [ "Byrne", "Jason P.", "" ] ]
TITLE: Investigating the Kinematics of Coronal Mass Ejections with the Automated CORIMP Catalog ABSTRACT: Studying coronal mass ejections (CMEs) in coronagraph data can be challenging due to their diffuse structure and transient nature, compounded by the variations in their dynamics, morphology, and frequency of occurrence. The large amounts of data available from missions like the Solar and Heliospheric Observatory (SOHO) make manual cataloging of CMEs tedious and prone to human error, and so a robust method of detection and analysis is required and often preferred. A new coronal image processing catalog called CORIMP has been developed in an effort to achieve this, through the implementation of a dynamic background separation technique and multiscale edge detection. These algorithms together isolate and characterise CME structure in the field-of-view of the Large Angle Spectrometric Coronagraph (LASCO) onboard SOHO. CORIMP also applies a Savitzky-Golay filter, along with quadratic and linear fits, to the height-time measurements for better revealing the true CME speed and acceleration profiles across the plane-of-sky. Here we present a sample of new results from the CORIMP CME catalog, and directly compare them with the other automated catalogs of Computer Aided CME Tracking (CACTus) and Solar Eruptive Events Detection System (SEEDS), as well as the manual CME catalog at the Coordinated Data Analysis Workshop (CDAW) Data Center and a previously published study of the sample events. We further investigate a form of unsupervised machine learning by using a k-means clustering algorithm to distinguish detections of multiple CMEs that occur close together in space and time. While challenges still exist, this investigation and comparison of results demonstrates the reliability and robustness of the CORIMP catalog, proving its effectiveness at detecting and tracking CMEs throughout the LASCO dataset.
no_new_dataset
0.949856
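The unsupervised-learning step this abstract mentions — k-means used to split detections of CMEs that occur close together in space and time — can be sketched on synthetic (time, position angle) detection points; the cluster count and coordinates are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic detection points (time in hours, position angle in degrees)
# from two CMEs that occur close together in space and time.
cme1 = rng.normal(loc=[2.0, 90.0], scale=[0.5, 10.0], size=(60, 2))
cme2 = rng.normal(loc=[3.5, 130.0], scale=[0.5, 10.0], size=(60, 2))
points = np.vstack([cme1, cme2])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
for k in range(2):
    sel = points[labels == k]
    print(f"cluster {k}: mean time {sel[:, 0].mean():.2f} h, "
          f"mean PA {sel[:, 1].mean():.1f} deg")
```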
1506.04051
Lucia Maddalena
Lucia Maddalena and Alfredo Petrosino
Towards Benchmarking Scene Background Initialization
6 pages, SBI dataset, SBMI2015 Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/3.0/
Given a set of images of a scene taken at different times, the availability of an initial background model that describes the scene without foreground objects is the prerequisite for a wide range of applications, ranging from video surveillance to computational photography. Even though several methods have been proposed for scene background initialization, the lack of a common groundtruthed dataset and of a common set of metrics makes it difficult to compare their performance. To take the first steps towards an easy and fair comparison of these methods, we assembled a dataset of sequences frequently adopted for background initialization, selected or created ground truths for quantitative evaluation through a selected suite of metrics, and compared results obtained by some existing methods, making all the material publicly available.
[ { "version": "v1", "created": "Fri, 12 Jun 2015 15:52:46 GMT" } ]
2015-06-15T00:00:00
[ [ "Maddalena", "Lucia", "" ], [ "Petrosino", "Alfredo", "" ] ]
TITLE: Towards Benchmarking Scene Background Initialization ABSTRACT: Given a set of images of a scene taken at different times, the availability of an initial background model that describes the scene without foreground objects is the prerequisite for a wide range of applications, ranging from video surveillance to computational photography. Even though several methods have been proposed for scene background initialization, the lack of a common groundtruthed dataset and of a common set of metrics makes it difficult to compare their performance. To take the first steps towards an easy and fair comparison of these methods, we assembled a dataset of sequences frequently adopted for background initialization, selected or created ground truths for quantitative evaluation through a selected suite of metrics, and compared results obtained by some existing methods, making all the material publicly available.
new_dataset
0.955527
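Evaluation suites for background initialization typically include pixel-wise fidelity metrics such as PSNR against the ground-truth background; whether PSNR is in this paper's exact suite is an assumption. A minimal sketch:

```python
import numpy as np

def psnr(estimate, ground_truth, max_val=255.0):
    """Peak signal-to-noise ratio between an estimated background and GT."""
    mse = np.mean((estimate.astype(float) - ground_truth.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(4)
gt = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)       # toy GT background
est = np.clip(gt + rng.normal(0, 5, gt.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(est, gt):.1f} dB")
```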
1211.1073
Venkat Chandrasekaran
Venkat Chandrasekaran and Michael I. Jordan
Computational and Statistical Tradeoffs via Convex Relaxation
null
null
10.1073/pnas.1302293110
null
math.ST cs.IT math.IT math.OC stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In modern data analysis, one is frequently faced with statistical inference problems involving massive datasets. Processing such large datasets is usually viewed as a substantial computational challenge. However, if data are a statistician's main resource then access to more data should be viewed as an asset rather than as a burden. In this paper we describe a computational framework based on convex relaxation to reduce the computational complexity of an inference procedure when one has access to increasingly larger datasets. Convex relaxation techniques have been widely used in theoretical computer science as they give tractable approximation algorithms to many computationally intractable tasks. We demonstrate the efficacy of this methodology in statistical estimation in providing concrete time-data tradeoffs in a class of denoising problems. Thus, convex relaxation offers a principled approach to exploit the statistical gains from larger datasets to reduce the runtime of inference algorithms.
[ { "version": "v1", "created": "Mon, 5 Nov 2012 23:28:44 GMT" }, { "version": "v2", "created": "Mon, 26 Nov 2012 22:02:27 GMT" } ]
2015-06-12T00:00:00
[ [ "Chandrasekaran", "Venkat", "" ], [ "Jordan", "Michael I.", "" ] ]
TITLE: Computational and Statistical Tradeoffs via Convex Relaxation ABSTRACT: In modern data analysis, one is frequently faced with statistical inference problems involving massive datasets. Processing such large datasets is usually viewed as a substantial computational challenge. However, if data are a statistician's main resource then access to more data should be viewed as an asset rather than as a burden. In this paper we describe a computational framework based on convex relaxation to reduce the computational complexity of an inference procedure when one has access to increasingly larger datasets. Convex relaxation techniques have been widely used in theoretical computer science as they give tractable approximation algorithms to many computationally intractable tasks. We demonstrate the efficacy of this methodology in statistical estimation in providing concrete time-data tradeoffs in a class of denoising problems. Thus, convex relaxation offers a principled approach to exploit the statistical gains from larger datasets to reduce the runtime of inference algorithms.
no_new_dataset
0.946448
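A concrete instance of the denoising setting the abstract refers to is l1-relaxed sparse denoising, whose convex-relaxation solution is plain soft thresholding; this is a standard textbook example, not necessarily the paper's exact problem. A sketch:

```python
import numpy as np

def soft_threshold(y, lam):
    """Solution of min_x 0.5*||x - y||^2 + lam*||x||_1, the l1 convex
    relaxation of a combinatorial sparsity constraint."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(5)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.normal(0, 5, 5)
y = x_true + rng.normal(0, 0.5, 100)               # noisy observations

x_hat = soft_threshold(y, lam=1.0)
print("denoising error:", np.linalg.norm(x_hat - x_true))
```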
1211.6688
Jaroslav Hlinka
Jaroslav Hlinka, David Hartman, Martin Vejmelka, Dagmar Novotn\'a, Milan Palu\v{s}
Non-linear dependence and teleconnections in climate data: sources, relevance, nonstationarity
null
null
10.1007/s00382-013-1780-2
null
stat.ME physics.ao-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantification of relations between measured variables of interest by statistical measures of dependence is a common step in analysis of climate data. The term "connectivity" is used in the network context including the study of complex coupled dynamical systems. The choice of dependence measure is key for the results of the subsequent analysis and interpretation. The use of the linear Pearson's correlation coefficient is widespread and convenient. On the other hand, as the climate is widely acknowledged to be a nonlinear system, nonlinear connectivity quantification methods, such as those based on information-theoretical concepts, are increasingly used for this purpose. In this paper we outline an approach that enables a well-informed choice of connectivity method for a given type of data, improving the subsequent interpretation of the results. The presented multi-step approach includes statistical testing, quantification of the specific non-linear contribution to the interaction information, localization of nodes with the strongest nonlinear contribution and assessment of the role of specific temporal patterns, including signal nonstationarities. In detail, we study the consequences of the choice of a general nonlinear connectivity measure, namely mutual information, focusing on its relevance and potential alterations in the discovered dependence structure. We document the method by applying it on monthly mean temperature data from the NCEP/NCAR reanalysis dataset as well as the ERA dataset. We have been able to identify the main sources of observed non-linearity in inter-node couplings. Detailed analysis suggested an important role of several sources of nonstationarity within the climate data. The quantitative role of genuine nonlinear coupling at this scale has proven to be almost negligible, providing quantitative support for the use of linear methods for this type of data.
[ { "version": "v1", "created": "Wed, 28 Nov 2012 18:06:06 GMT" } ]
2015-06-12T00:00:00
[ [ "Hlinka", "Jaroslav", "" ], [ "Hartman", "David", "" ], [ "Vejmelka", "Martin", "" ], [ "Novotná", "Dagmar", "" ], [ "Paluš", "Milan", "" ] ]
TITLE: Non-linear dependence and teleconnections in climate data: sources, relevance, nonstationarity ABSTRACT: Quantification of relations between measured variables of interest by statistical measures of dependence is a common step in analysis of climate data. The term "connectivity" is used in the network context including the study of complex coupled dynamical systems. The choice of dependence measure is key for the results of the subsequent analysis and interpretation. The use of the linear Pearson's correlation coefficient is widespread and convenient. On the other hand, as the climate is widely acknowledged to be a nonlinear system, nonlinear connectivity quantification methods, such as those based on information-theoretical concepts, are increasingly used for this purpose. In this paper we outline an approach that enables a well-informed choice of connectivity method for a given type of data, improving the subsequent interpretation of the results. The presented multi-step approach includes statistical testing, quantification of the specific non-linear contribution to the interaction information, localization of nodes with the strongest nonlinear contribution and assessment of the role of specific temporal patterns, including signal nonstationarities. In detail, we study the consequences of the choice of a general nonlinear connectivity measure, namely mutual information, focusing on its relevance and potential alterations in the discovered dependence structure. We document the method by applying it on monthly mean temperature data from the NCEP/NCAR reanalysis dataset as well as the ERA dataset. We have been able to identify the main sources of observed non-linearity in inter-node couplings. Detailed analysis suggested an important role of several sources of nonstationarity within the climate data. The quantitative role of genuine nonlinear coupling at this scale has proven to be almost negligible, providing quantitative support for the use of linear methods for this type of data.
no_new_dataset
0.948632
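The linear-versus-nonlinear contrast at the heart of this paper can be reproduced in a few lines: Pearson correlation misses a purely nonlinear coupling that a naive histogram (plug-in) mutual information estimate picks up. The bin count and toy couplings below are arbitrary choices:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Naive histogram (plug-in) estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(6)
x = rng.normal(size=5000)
y_lin = 0.8 * x + 0.6 * rng.normal(size=5000)      # linear coupling
y_nl = x**2 + 0.2 * rng.normal(size=5000)          # nonlinear coupling

for name, y in [("linear", y_lin), ("nonlinear", y_nl)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: Pearson r = {r:+.2f}, MI ~ {mutual_information(x, y):.2f}")
```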
1212.3333
Ralf Kaehler
Ralf Kaehler, Tom Abel
Single-Pass GPU-Raycasting for Structured Adaptive Mesh Refinement Data
12 pages, 7 figures. submitted to Visualization and Data Analysis 2013
null
10.1117/12.2008552
null
astro-ph.IM cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology such simulations now can capture spatial scales ten orders of magnitude apart and more. The irregular locations and extensions of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven to be advantageous to subdivide the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present a single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D-textures, which allows complete rays to be adaptively sampled entirely on the GPU without any CPU interaction. We discuss two different data storage strategies to access the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
[ { "version": "v1", "created": "Thu, 13 Dec 2012 21:00:02 GMT" } ]
2015-06-12T00:00:00
[ [ "Kaehler", "Ralf", "" ], [ "Abel", "Tom", "" ] ]
TITLE: Single-Pass GPU-Raycasting for Structured Adaptive Mesh Refinement Data ABSTRACT: Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology such simulations now can capture spatial scales ten orders of magnitude apart and more. The irregular locations and extensions of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven to be advantageous to subdivide the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present a single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D-textures, which allows complete rays to be adaptively sampled entirely on the GPU without any CPU interaction. We discuss two different data storage strategies to access the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
no_new_dataset
0.949482
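The GPU texture encoding does not translate to a few lines, but the sampling rule it enables — take each ray sample from the non-overlapping kD-tree leaf block that contains it, at that block's own resolution — can be sketched on the CPU in 1-D; the blocks and cell values are invented:

```python
import numpy as np

# Non-overlapping leaf blocks from a kD-tree decomposition of a 1-D SAMR
# domain: (x_min, x_max, cell_size, data). Finer blocks have smaller cells.
blocks = [
    (0.0, 0.5, 0.125, np.array([1.0, 2.0, 3.0, 4.0])),
    (0.5, 1.0, 0.25,  np.array([10.0, 20.0])),
]

def sample(x):
    """Return the cell value at position x from the block that contains it."""
    for x0, x1, dx, data in blocks:
        if x0 <= x < x1:
            return data[int((x - x0) / dx)]
    raise ValueError("outside domain")

# Adaptively sample a 'ray' crossing the whole domain.
ray = np.linspace(0.0, 1.0, 9, endpoint=False)
print([sample(x) for x in ray])
```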
1301.4546
Akira Kageyama
Akira Kageyama and Tomoki Yamada
An Approach to Exascale Visualization: Interactive Viewing of In-Situ Visualization
Will appear in Comput. Phys. Comm
null
10.1016/j.cpc.2013.08.017
null
physics.comp-ph cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the coming era of exascale supercomputing, in-situ visualization will be a crucial approach for reducing the output data size. A problem of in-situ visualization is that it loses interactivity if a steering method is not adopted. In this paper, we propose a new method for the interactive analysis of in-situ visualization images produced by a batch simulation job. A key idea is to apply numerous (thousands to millions) in-situ visualizations simultaneously. The viewer then analyzes the image database interactively during postprocessing. If each movie can be compressed to 100 MB, one million movies will only require 100 TB, which is smaller than the size of the raw numerical data in exascale supercomputing. We performed a feasibility study using the proposed method. Multiple movie files were produced by a simulation and they were analyzed using a specially designed movie player. The user could change the viewing angle, the visualization method, and the parameters interactively by retrieving an appropriate sequence of images from the movie dataset.
[ { "version": "v1", "created": "Sat, 19 Jan 2013 08:39:58 GMT" }, { "version": "v2", "created": "Sat, 24 Aug 2013 00:21:14 GMT" }, { "version": "v3", "created": "Fri, 13 Sep 2013 08:55:28 GMT" } ]
2015-06-12T00:00:00
[ [ "Kageyama", "Akira", "" ], [ "Yamada", "Tomoki", "" ] ]
TITLE: An Approach to Exascale Visualization: Interactive Viewing of In-Situ Visualization ABSTRACT: In the coming era of exascale supercomputing, in-situ visualization will be a crucial approach for reducing the output data size. A problem of in-situ visualization is that it loses interactivity if a steering method is not adopted. In this paper, we propose a new method for the interactive analysis of in-situ visualization images produced by a batch simulation job. A key idea is to apply numerous (thousands to millions) in-situ visualizations simultaneously. The viewer then analyzes the image database interactively during postprocessing. If each movie can be compressed to 100 MB, one million movies will only require 100 TB, which is smaller than the size of the raw numerical data in exascale supercomputing. We performed a feasibility study using the proposed method. Multiple movie files were produced by a simulation and they were analyzed using a specially designed movie player. The user could change the viewing angle, the visualization method, and the parameters interactively by retrieving an appropriate sequence of images from the movie dataset.
no_new_dataset
0.943867
1409.5400
Tobias Weyand
Tobias Weyand and Bastian Leibe
Visual Landmark Recognition from Internet Photo Collections: A Large-Scale Evaluation
null
null
10.1016/j.cviu.2015.02.002
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world's landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures.
[ { "version": "v1", "created": "Thu, 18 Sep 2014 18:28:20 GMT" } ]
2015-06-12T00:00:00
[ [ "Weyand", "Tobias", "" ], [ "Leibe", "Bastian", "" ] ]
TITLE: Visual Landmark Recognition from Internet Photo Collections: A Large-Scale Evaluation ABSTRACT: The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world's landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures.
no_new_dataset
0.929951
1412.6632
Junhua Mao
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
Add a simple strategy to boost the performance of image captioning task significantly. More details are shown in Section 8 of the paper. The code and related data are available at https://github.com/mjhucla/mRNN-CR ;. arXiv admin note: substantial text overlap with arXiv:1410.1090
ICLR 2015
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvements over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html.
[ { "version": "v1", "created": "Sat, 20 Dec 2014 08:10:04 GMT" }, { "version": "v2", "created": "Fri, 26 Dec 2014 08:24:11 GMT" }, { "version": "v3", "created": "Tue, 10 Mar 2015 04:17:48 GMT" }, { "version": "v4", "created": "Fri, 10 Apr 2015 21:03:35 GMT" }, { "version": "v5", "created": "Thu, 11 Jun 2015 15:26:58 GMT" } ]
2015-06-12T00:00:00
[ [ "Mao", "Junhua", "" ], [ "Xu", "Wei", "" ], [ "Yang", "Yi", "" ], [ "Wang", "Jiang", "" ], [ "Huang", "Zhiheng", "" ], [ "Yuille", "Alan", "" ] ]
TITLE: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) ABSTRACT: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvements over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html.
no_new_dataset
0.950503
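A toy numpy forward step of the fusion this abstract describes — word embedding, recurrent state and a CNN image feature combined in one multimodal layer before a softmax over the vocabulary. All dimensions and random weights below are placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(7)
V, E, R, M, I = 1000, 64, 64, 128, 512    # vocab, embed, rnn, multimodal, image dims

embed = rng.normal(0, 0.1, (V, E))
W_r, U_r = rng.normal(0, 0.1, (R, E)), rng.normal(0, 0.1, (R, R))
W_mw, W_mr, W_mi = (rng.normal(0, 0.1, (M, E)), rng.normal(0, 0.1, (M, R)),
                    rng.normal(0, 0.1, (M, I)))
W_out = rng.normal(0, 0.1, (V, M))

def step(word_id, h_prev, img_feat):
    w = embed[word_id]
    h = np.tanh(W_r @ w + U_r @ h_prev)                  # recurrent layer
    m = np.tanh(W_mw @ w + W_mr @ h + W_mi @ img_feat)   # multimodal fusion
    logits = W_out @ m
    p = np.exp(logits - logits.max())                    # softmax over vocab
    return p / p.sum(), h

img_feat = rng.normal(size=I)                            # stand-in CNN feature
p, h = step(word_id=3, h_prev=np.zeros(R), img_feat=img_feat)
print(p.shape, p.sum())                                  # (1000,) 1.0
```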
1506.03623
Han Xiao Bookman
Han Xiao, Xiaoyan Zhu
Max-Entropy Feed-Forward Clustering Neural Network
This paper has been published in ICANN 2015
ICANN 2015: International Conference on Artificial Neural Networks, Amsterdam, The Netherlands, (May 14-15, 2015)
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outputs of a non-linear feed-forward neural network are positive, and can be treated as probabilities when they are normalized to sum to one. If we take the Entropy-Based Principle into consideration, the outputs for each sample can be represented as the distribution of this sample over the different clusters. The Entropy-Based Principle is the principle with which we can estimate an unknown distribution under some limited conditions. As this paper defines two processes in the Feed-Forward Neural Network, our limited condition is the abstracted features of the samples, which are worked out in the abstraction process, and the final outputs are the probability distributions over the different clusters in the clustering process. When the Entropy-Based Principle is incorporated into the feed-forward neural network, a clustering method is born. We have conducted experiments on six open UCI datasets, comparing with a few baselines and using purity as the measurement. The results illustrate that our method outperforms all the other baselines, which are among the most popular clustering methods.
[ { "version": "v1", "created": "Thu, 11 Jun 2015 11:01:40 GMT" } ]
2015-06-12T00:00:00
[ [ "Xiao", "Han", "" ], [ "Zhu", "Xiaoyan", "" ] ]
TITLE: Max-Entropy Feed-Forward Clustering Neural Network ABSTRACT: The outputs of a non-linear feed-forward neural network are positive, and can be treated as probabilities when they are normalized to sum to one. If we take the Entropy-Based Principle into consideration, the outputs for each sample can be represented as the distribution of this sample over the different clusters. The Entropy-Based Principle is the principle with which we can estimate an unknown distribution under some limited conditions. As this paper defines two processes in the Feed-Forward Neural Network, our limited condition is the abstracted features of the samples, which are worked out in the abstraction process, and the final outputs are the probability distributions over the different clusters in the clustering process. When the Entropy-Based Principle is incorporated into the feed-forward neural network, a clustering method is born. We have conducted experiments on six open UCI datasets, comparing with a few baselines and using purity as the measurement. The results illustrate that our method outperforms all the other baselines, which are among the most popular clustering methods.
no_new_dataset
0.948106
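The abstract's central object — softmax outputs of the clustering process read as a per-sample distribution over clusters, with entropy quantifying the assignment uncertainty — in a few lines; the feature and weight matrices are random stand-ins:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(8)
X = rng.normal(size=(5, 4))                  # abstracted features of 5 samples
W = rng.normal(size=(4, 3))                  # clustering-layer weights, 3 clusters

P = softmax(X @ W)                           # per-sample cluster distributions
entropy = -np.sum(P * np.log(P), axis=1)     # uncertainty of each assignment
print(P.round(2))
print("per-sample entropy:", entropy.round(3))
```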
1506.03626
Han Xiao Bookman
Han Xiao, Xiaoyan Zhu
Margin-Based Feed-Forward Neural Network Classifiers
This paper has been published in ICANN 2015: International Conference on Artificial Neural Networks, Amsterdam, The Netherlands, (May 14-15, 2015)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Margin-Based Principle was proposed a long time ago, and it has been proved that this principle can reduce the structural risk and improve performance in both theoretical and practical aspects. Meanwhile, the feed-forward neural network is a traditional classifier, which is currently very popular in its deeper-architecture form. However, the training algorithm of the feed-forward neural network is derived from the Widrow-Hoff Principle, which amounts to minimizing the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the Margin-Based Principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, high-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and requires almost no extra work.
[ { "version": "v1", "created": "Thu, 11 Jun 2015 11:10:25 GMT" } ]
2015-06-12T00:00:00
[ [ "Xiao", "Han", "" ], [ "Zhu", "Xiaoyan", "" ] ]
TITLE: Margin-Based Feed-Forward Neural Network Classifiers ABSTRACT: The Margin-Based Principle was proposed a long time ago, and it has been proved that this principle can reduce the structural risk and improve performance in both theoretical and practical aspects. Meanwhile, the feed-forward neural network is a traditional classifier, which is currently very popular in its deeper-architecture form. However, the training algorithm of the feed-forward neural network is derived from the Widrow-Hoff Principle, which amounts to minimizing the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the Margin-Based Principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, high-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and requires almost no extra work.
no_new_dataset
0.949529
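The contrast this abstract draws can be made concrete: Widrow-Hoff training minimises a squared error, while margin-based training penalises only the examples whose functional margin falls below a threshold. The hinge loss below is a stand-in; the paper's exact objective may differ:

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = np.sign(X @ w_true)                      # labels in {-1, +1}

w = np.zeros(5)
for _ in range(200):
    scores = X @ w
    # Widrow-Hoff would minimise the squared error np.mean((scores - y)**2).
    # Margin-based training instead penalises only examples whose
    # functional margin y * score falls below 1 (a hinge loss).
    viol = y * scores < 1.0
    grad = -(y[viol][:, None] * X[viol]).sum(axis=0) / len(X)
    w -= 0.1 * grad

print("training accuracy:", float(np.mean(np.sign(X @ w) == y)))
```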
1506.03668
Zolzaya Dashdorj
Zolzaya Dashdorj and Stanislav Sobolevsky
Impact of the spatial context on human communication activity
12 pages, 11 figures
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Technology development produces terabytes of data generated by human activity in space and time. This enormous amount of data, often called big data, becomes crucial for delivering new insights to decision makers. It contains behavioral information on different types of human activity influenced by many external factors such as geographic information and weather forecasts. Early recognition and prediction of those human behaviors are of great importance in many societal applications like health-care, risk management and urban planning. In this paper, we investigate relevant geographical areas based on their categories of human activities (i.e., working and shopping), which are identified from geographic information (i.e., Openstreetmap). We use spectral clustering followed by a k-means clustering algorithm based on a TF/IDF cosine similarity metric. We evaluate the quality of those observed clusters with the use of silhouette coefficients, which are estimated based on the similarities of the mobile communication activity temporal patterns. The area clusters are further used to explain typical or exceptional communication activities. We demonstrate the study using a real dataset containing 1 million Call Detailed Records. This type of analysis and its application are important for analyzing the dependency of human behaviors on external factors, and for uncovering hidden relationships, unknown correlations and other useful information that can support decision-making.
[ { "version": "v1", "created": "Thu, 11 Jun 2015 13:46:16 GMT" } ]
2015-06-12T00:00:00
[ [ "Dashdorj", "Zolzaya", "" ], [ "Sobolevsky", "Stanislav", "" ] ]
TITLE: Impact of the spatial context on human communication activity ABSTRACT: Technology development produces terabytes of data generated by human activity in space and time. This enormous amount of data, often called big data, becomes crucial for delivering new insights to decision makers. It contains behavioral information on different types of human activity influenced by many external factors such as geographic information and weather forecasts. Early recognition and prediction of those human behaviors are of great importance in many societal applications like health-care, risk management and urban planning. In this paper, we investigate relevant geographical areas based on their categories of human activities (i.e., working and shopping), which are identified from geographic information (i.e., Openstreetmap). We use spectral clustering followed by a k-means clustering algorithm based on a TF/IDF cosine similarity metric. We evaluate the quality of those observed clusters with the use of silhouette coefficients, which are estimated based on the similarities of the mobile communication activity temporal patterns. The area clusters are further used to explain typical or exceptional communication activities. We demonstrate the study using a real dataset containing 1 million Call Detailed Records. This type of analysis and its application are important for analyzing the dependency of human behaviors on external factors, and for uncovering hidden relationships, unknown correlations and other useful information that can support decision-making.
no_new_dataset
0.922273
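The clustering pipeline named in this abstract — TF/IDF vectors, cosine similarity, spectral clustering (scikit-learn's implementation itself runs k-means on the spectral embedding, matching the two-step structure), and silhouette evaluation — sketched with toy area-tag documents standing in for OpenStreetMap categories:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score
from sklearn.metrics.pairwise import cosine_similarity

# POI category tags of six areas (toy stand-ins for OpenStreetMap data).
areas = ["office bank office company", "office company office",
         "shop mall shop boutique", "shop boutique mall",
         "office bank company", "mall shop shop"]

X = TfidfVectorizer().fit_transform(areas)
S = cosine_similarity(X)                         # affinity from TF/IDF vectors

labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(S)
print("labels:", labels)
print("silhouette:", silhouette_score(X, labels, metric='cosine'))
```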
1208.3953
Vasyl Palchykov
Vasyl Palchykov, J\'anos Kert\'esz, Robin I. M. Dunbar, Kimmo Kaski
Close relationships: A study of mobile communication records
11 pages, 7 figures
J. Stat. Phys. 151 (2013) 735-744
10.1007/s10955-013-0705-0
null
physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile phone communication as digital service generates ever-increasing datasets of human communication actions, which in turn allow us to investigate the structure and evolution of social interactions and their networks. These datasets can be used to study the structuring of such egocentric networks with respect to the strength of the relationships by assuming direct dependence of the communication intensity on the strength of the social tie. Recently we have discovered that there are significant differences between the first and further "best friends" from the point of view of age and gender preferences. Here we introduce a control parameter $p_{\rm max}$ based on the statistics of communication with the first and second "best friend" and use it to filter the data. We find that when $p_{\rm max}$ is decreased the identification of the "best friend" becomes less ambiguous and the earlier observed effects get stronger, thus corroborating them.
[ { "version": "v1", "created": "Mon, 20 Aug 2012 09:18:55 GMT" }, { "version": "v2", "created": "Thu, 24 Jan 2013 16:52:20 GMT" } ]
2015-06-11T00:00:00
[ [ "Palchykov", "Vasyl", "" ], [ "Kertész", "János", "" ], [ "Dunbar", "Robin I. M.", "" ], [ "Kaski", "Kimmo", "" ] ]
TITLE: Close relationships: A study of mobile communication records ABSTRACT: Mobile phone communication as digital service generates ever-increasing datasets of human communication actions, which in turn allow us to investigate the structure and evolution of social interactions and their networks. These datasets can be used to study the structuring of such egocentric networks with respect to the strength of the relationships by assuming direct dependence of the communication intensity on the strength of the social tie. Recently we have discovered that there are significant differences between the first and further "best friends" from the point of view of age and gender preferences. Here we introduce a control parameter $p_{\rm max}$ based on the statistics of communication with the first and second "best friend" and use it to filter the data. We find that when $p_{\rm max}$ is decreased the identification of the "best friend" becomes less ambiguous and the earlier observed effects get stronger, thus corroborating them.
no_new_dataset
0.845369
1208.4122
Stephen Bailey
Stephen Bailey
Principal Component Analysis with Noisy and/or Missing Data
Accepted for publication in PASP; v2 with minor updates, mostly to bibliography
null
10.1086/668105
null
astro-ph.IM physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for performing Principal Component Analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that compared to classic PCA, the resulting eigenvectors are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data is simply the limiting case of weight=0. The underlying algorithm is a noise weighted Expectation Maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution. We present applications of this method on simulated data and QSO spectra from the Sloan Digital Sky Survey.
[ { "version": "v1", "created": "Mon, 20 Aug 2012 20:59:10 GMT" }, { "version": "v2", "created": "Fri, 14 Sep 2012 18:27:56 GMT" } ]
2015-06-11T00:00:00
[ [ "Bailey", "Stephen", "" ] ]
TITLE: Principal Component Analysis with Noisy and/or Missing Data ABSTRACT: We present a method for performing Principal Component Analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that compared to classic PCA, the resulting eigenvectors are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data is simply the limiting case of weight=0. The underlying algorithm is a noise weighted Expectation Maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution. We present applications of this method on simulated data and QSO spectra from the Sloan Digital Sky Survey.
no_new_dataset
0.954308
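The weighted EM PCA update at the core of this method, simplified to a single eigenvector: the E-step fits per-observation coefficients under the weights, the M-step refits the eigenvector, and weight 0 encodes a missing value. A one-component sketch of the structure, not the paper's full implementation:

```python
import numpy as np

def empca_one(X, W, n_iter=50):
    """Noise-weighted EM PCA for a single eigenvector.

    X: (n_obs, n_var) mean-subtracted data; W: same-shape weights,
    e.g. 1/sigma^2, with W = 0 marking missing values.
    """
    rng = np.random.default_rng(0)
    phi = rng.normal(size=X.shape[1])
    phi /= np.linalg.norm(phi)
    for _ in range(n_iter):
        # E-step: per-observation coefficients under the weights.
        c = (W * X) @ phi / np.maximum((W * phi**2).sum(axis=1), 1e-12)
        # M-step: per-variable eigenvector update.
        phi = (W * X * c[:, None]).sum(axis=0) / \
              np.maximum((W * c[:, None]**2).sum(axis=0), 1e-12)
        phi /= np.linalg.norm(phi)
    return phi

rng = np.random.default_rng(10)
true_vec = np.array([1.0, 0.5, -0.5, 0.0])
true_vec /= np.linalg.norm(true_vec)
X = rng.normal(size=(200, 1)) @ true_vec[None, :] + 0.05 * rng.normal(size=(200, 4))
W = np.ones_like(X)
W[rng.random(X.shape) < 0.2] = 0.0                 # 20% missing values
print(np.abs(empca_one(X, W) @ true_vec))          # ~ 1.0 up to sign
```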
1208.5582
Davide Faranda
Davide Faranda, Jorge Milhazes Freitas, Valerio Lucarini, Giorgio Turchetti and Sandro Vaienti
Extreme value statistics for dynamical systems with noise
34 pages, 8 figures
null
10.1088/0951-7715/26/9/2597
null
math.DS physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the distribution of maxima (Extreme Value Statistics) for sequences of observables computed along orbits generated by random transformations. The underlying, deterministic, dynamical system can be regular or chaotic. In the former case, we will show that by perturbing rational or irrational rotations with additive noise, an extreme value law appears, regardless of the intensity of the noise, while unperturbed rotations do not admit such limiting distributions. In the case of deterministic chaotic dynamics, we will consider observables specially designed to study the recurrence properties in the neighbourhood of periodic points. Hence, the exponential limiting law for the distribution of maxima is modified by the presence of the extremal index, a positive parameter not larger than one, whose inverse gives the average size of the clusters of extreme events. The theory predicts that such a parameter is unitary when the system is perturbed randomly. We perform sophisticated numerical tests to assess how strong the impact of the noise level is when finite time series are considered. We find agreement with the asymptotic theoretical results but also non-trivial behaviour in the finite range. In particular, our results suggest that in many applications where finite datasets can be produced or analysed, one must be careful in assuming that the smoothing nature of noise prevails over the underlying deterministic dynamics.
[ { "version": "v1", "created": "Tue, 28 Aug 2012 08:03:07 GMT" }, { "version": "v2", "created": "Mon, 25 Mar 2013 11:21:05 GMT" } ]
2015-06-11T00:00:00
[ [ "Faranda", "Davide", "" ], [ "Freitas", "Jorge Milhazes", "" ], [ "Lucarini", "Valerio", "" ], [ "Turchetti", "Giorgio", "" ], [ "Vaienti", "Sandro", "" ] ]
TITLE: Extreme value statistics for dynamical systems with noise ABSTRACT: We study the distribution of maxima (Extreme Value Statistics) for sequences of observables computed along orbits generated by random transformations. The underlying deterministic dynamical system can be regular or chaotic. In the former case, we will show that by perturbing rational or irrational rotations with additive noise, an extreme value law appears, regardless of the intensity of the noise, while unperturbed rotations do not admit such limiting distributions. In the case of deterministic chaotic dynamics, we will consider observables specially designed to study the recurrence properties in the neighbourhood of periodic points. Hence, the exponential limiting law for the distribution of maxima is modified by the presence of the extremal index, a positive parameter not larger than one, whose inverse gives the average size of the clusters of extreme events. The theory predicts that this parameter is equal to one when the system is perturbed randomly. We perform sophisticated numerical tests to assess how strong the impact of the noise level is when finite time series are considered. We find agreement with the asymptotic theoretical results but also non-trivial behaviour in the finite range. In particular, our results suggest that in many applications where finite datasets can be produced or analysed one must be careful before assuming that the smoothing nature of noise prevails over the underlying deterministic dynamics.
no_new_dataset
0.943919
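The finite-sample behaviour discussed in the record above can be probed in miniature: iterate an irrational rotation with additive noise, take an observable that peaks on close returns to a reference point, and fit a Generalized Extreme Value distribution to block maxima. The observable, noise law, and block size are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np
from scipy.stats import genextreme

alpha = (np.sqrt(5) - 1) / 2          # irrational rotation number
eps = 1e-3                            # additive noise intensity
n, block = 200_000, 2_000

rng = np.random.default_rng(1)
x = np.empty(n)
x[0] = 0.3
for t in range(1, n):                 # noisy rotation on the unit circle
    x[t] = (x[t - 1] + alpha + eps * rng.uniform(-1, 1)) % 1.0

x0 = 0.3                              # reference point of the observable
dist = np.minimum(np.abs(x - x0), 1.0 - np.abs(x - x0))
obs = -np.log(dist + 1e-15)           # large values <=> close returns to x0

maxima = obs[: n - n % block].reshape(-1, block).max(axis=1)
shape, loc, scale = genextreme.fit(maxima)
print(f"fitted GEV shape parameter: {shape:.3f}")
```

According to the theory summarized in the abstract, the unperturbed case (eps = 0) admits no limiting law, whereas the noisy orbit does; varying eps and block then exposes the finite-range effects the abstract warns about.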
1209.4826
Dr. Anirudh Pradhan
Anirudh Pradhan
Accelerating dark energy models with anisotropic fluid in Bianchi type-$VI_{0}$ space-time
22 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1010.1121, arXiv:1108.2133, arXiv:1010.2362
Res. Astron. Astrophys., Vol. 13, No. 2, (2013), 139-158
10.1088/1674-4527/13/2/002
null
physics.gen-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the increasing evidence for the need of a geometry that resembles the Bianchi morphology to explain the observed anisotropy in the WMAP data, we discuss some features of the Bianchi type-$VI_{0}$ universes in the presence of a fluid with an anisotropic equation of state (EoS) parameter in general relativity. We present two accelerating dark energy (DE) models with an anisotropic fluid in Bianchi type-$VI_{0}$ space-time. To obtain a deterministic solution we choose the scale factor $a(t) = \sqrt{t^{n}e^{t}}$, which yields a time-dependent deceleration parameter (DP), representing a class of models that generate a transition of the universe from the early decelerating phase to the recent accelerating phase. Under suitable conditions, the anisotropic models approach the isotropic scenario. The EoS parameter for dark energy $\omega$ is found to be time-dependent, and its range in the derived models is in good agreement with the recent observations of SNe Ia data (Knop et al. 2003), SNe Ia data with CMBR anisotropy and galaxy clustering statistics (Tegmark et al. 2004), and the latest combination of cosmological datasets coming from CMB anisotropies, luminosity distances of high-redshift type Ia supernovae and galaxy clustering (Hinshaw et al. 2009; Komatsu et al. 2009). For different values of $n$, we can generate a class of physically viable DE models. The cosmological constant $\Lambda$ is found to be a positive decreasing function of time that approaches a small positive value at late time (i.e. the present epoch), which is corroborated by results from recent type Ia supernovae observations. We also observe that our solutions are stable. The physical and geometric aspects of both models are also discussed in detail.
[ { "version": "v1", "created": "Mon, 17 Sep 2012 04:55:54 GMT" } ]
2015-06-11T00:00:00
[ [ "Pradhan", "Anirudh", "" ] ]
TITLE: Accelerating dark energy models with anisotropic fluid in Bianchi type-$VI_{0}$ space-time ABSTRACT: Motivated by the increasing evidence for the need of a geometry that resembles the Bianchi morphology to explain the observed anisotropy in the WMAP data, we discuss some features of the Bianchi type-$VI_{0}$ universes in the presence of a fluid with an anisotropic equation of state (EoS) parameter in general relativity. We present two accelerating dark energy (DE) models with an anisotropic fluid in Bianchi type-$VI_{0}$ space-time. To obtain a deterministic solution we choose the scale factor $a(t) = \sqrt{t^{n}e^{t}}$, which yields a time-dependent deceleration parameter (DP), representing a class of models that generate a transition of the universe from the early decelerating phase to the recent accelerating phase. Under suitable conditions, the anisotropic models approach the isotropic scenario. The EoS parameter for dark energy $\omega$ is found to be time-dependent, and its range in the derived models is in good agreement with the recent observations of SNe Ia data (Knop et al. 2003), SNe Ia data with CMBR anisotropy and galaxy clustering statistics (Tegmark et al. 2004), and the latest combination of cosmological datasets coming from CMB anisotropies, luminosity distances of high-redshift type Ia supernovae and galaxy clustering (Hinshaw et al. 2009; Komatsu et al. 2009). For different values of $n$, we can generate a class of physically viable DE models. The cosmological constant $\Lambda$ is found to be a positive decreasing function of time that approaches a small positive value at late time (i.e. the present epoch), which is corroborated by results from recent type Ia supernovae observations. We also observe that our solutions are stable. The physical and geometric aspects of both models are also discussed in detail.
no_new_dataset
0.952574
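As a consistency check on the scale factor quoted in the record above, the time-dependent deceleration parameter follows from a two-line computation (our own derivation under standard FRW conventions, not text quoted from the paper):

```latex
a(t) = \sqrt{t^{n} e^{t}}
\;\Rightarrow\;
H = \frac{\dot a}{a} = \frac{n+t}{2t},
\qquad
\dot H = -\frac{n}{2t^{2}},
\qquad
q = -1 - \frac{\dot H}{H^{2}} = -1 + \frac{2n}{(n+t)^{2}} .
```

For $0 < n < 2$ this gives $q > 0$ at early times and $q \to -1$ as $t \to \infty$, reproducing the decelerating-to-accelerating transition described in the abstract.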
1210.1095
Francesco Vezzi
Francesco Vezzi, Giuseppe Narzisi and Bud Mishra
Reevaluating Assembly Evaluations with Feature Response Curves: GAGE and Assemblathons
Submitted to PLoS One. Supplementary material available at http://www.nada.kth.se/~vezzi/publications/supplementary.pdf and http://cs.nyu.edu/mishra/PUBLICATIONS/12.supplementaryFRC.pdf
null
10.1371/journal.pone.0052210
null
q-bio.GN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In just the last decade, a multitude of bio-technologies and software pipelines have emerged to revolutionize genomics. To further their central goal, they aim to accelerate and improve the quality of de novo whole-genome assembly starting from short DNA reads. However, the performance of each of these tools is contingent on the length and quality of the sequencing data, the structure and complexity of the genome sequence, and the resolution and quality of long-range information. Furthermore, in the absence of any metric that captures the most fundamental "features" of a high-quality assembly, there is no obvious recipe for users to select the most desirable assembler/assembly. International competitions such as the Assemblathons or GAGE tried to identify the best assembler(s) and their features. Somewhat circuitously, the only available approach to gauge de novo assemblies and assemblers relies solely on the availability of a high-quality, fully assembled reference genome sequence. Still worse, reference-guided evaluations are often difficult to analyze, leading to conclusions that are difficult to interpret. In this paper, we circumvent many of these issues by relying upon a tool, dubbed FRCbam, which is capable of evaluating de novo assemblies from the read layouts even when no reference exists. We extend the FRCurve approach to cases where layout information may have been obscured, as is true in many de Bruijn-graph-based algorithms. As a by-product, FRCurve now expands its applicability to a much wider class of assemblers -- thus identifying higher-quality members of this group, their inter-relations, as well as their sensitivity to carefully selected features, with or without the support of a reference sequence or layout for the reads. The paper concludes by reevaluating several recently conducted assembly competitions and the datasets that have resulted from them.
[ { "version": "v1", "created": "Wed, 3 Oct 2012 13:02:30 GMT" } ]
2015-06-11T00:00:00
[ [ "Vezzi", "Francesco", "" ], [ "Narzisi", "Giuseppe", "" ], [ "Mishra", "Bud", "" ] ]
TITLE: Reevaluating Assembly Evaluations with Feature Response Curves: GAGE and Assemblathons ABSTRACT: In just the last decade, a multitude of bio-technologies and software pipelines have emerged to revolutionize genomics. To further their central goal, they aim to accelerate and improve the quality of de novo whole-genome assembly starting from short DNA reads. However, the performance of each of these tools is contingent on the length and quality of the sequencing data, the structure and complexity of the genome sequence, and the resolution and quality of long-range information. Furthermore, in the absence of any metric that captures the most fundamental "features" of a high-quality assembly, there is no obvious recipe for users to select the most desirable assembler/assembly. International competitions such as the Assemblathons or GAGE tried to identify the best assembler(s) and their features. Somewhat circuitously, the only available approach to gauge de novo assemblies and assemblers relies solely on the availability of a high-quality, fully assembled reference genome sequence. Still worse, reference-guided evaluations are often difficult to analyze, leading to conclusions that are difficult to interpret. In this paper, we circumvent many of these issues by relying upon a tool, dubbed FRCbam, which is capable of evaluating de novo assemblies from the read layouts even when no reference exists. We extend the FRCurve approach to cases where layout information may have been obscured, as is true in many de Bruijn-graph-based algorithms. As a by-product, FRCurve now expands its applicability to a much wider class of assemblers -- thus identifying higher-quality members of this group, their inter-relations, as well as their sensitivity to carefully selected features, with or without the support of a reference sequence or layout for the reads. The paper concludes by reevaluating several recently conducted assembly competitions and the datasets that have resulted from them.
no_new_dataset
0.939858
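A feature response curve in the sense sketched in the record above can be illustrated in a few lines: given each contig's length and a count of suspicious features derived from the read layout (e.g. mate-pair violations), plot the genome fraction covered by the cleanest contigs as the cumulative feature budget grows. The input format here is a hypothetical simplification of what FRCbam actually extracts from BAM files.

```python
import numpy as np

def frc_curve(lengths, features, genome_size):
    """Approximate Feature Response Curve.

    lengths  : per-contig lengths
    features : per-contig counts of suspicious features
    Returns (cumulative feature count, cumulative coverage fraction),
    admitting contigs in order of increasing feature density.
    """
    lengths = np.asarray(lengths, dtype=float)
    features = np.asarray(features, dtype=float)
    order = np.argsort(features / lengths)        # cleanest contigs first
    cum_feat = np.cumsum(features[order])
    cum_cov = np.cumsum(lengths[order]) / genome_size
    return cum_feat, cum_cov

# toy comparison: a curve that rises faster indicates a better assembly
thr, cov = frc_curve([50_000, 20_000, 5_000], [3, 10, 40], genome_size=100_000)
```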
1404.0466
Da Kuang
Da Kuang, Alex Gittens, Raffay Hamid
piCholesky: Polynomial Interpolation of Multiple Cholesky Factors for Efficient Approximate Cross-Validation
null
null
null
null
cs.LG cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dominant cost in solving least-squares problems using Newton's method is often that of factorizing the Hessian matrix over multiple values of the regularization parameter ($\lambda$). We propose an efficient way to interpolate the Cholesky factors of the Hessian matrix computed over a small set of $\lambda$ values. This approximation enables us to minimize the hold-out error optimally while incurring only a fraction of the cost of exact cross-validation. We provide a formal error bound for our approximation scheme and present solutions to a set of key implementation challenges that allow our approach to maximally exploit the compute power of modern architectures. We present a thorough empirical analysis over multiple datasets to show the effectiveness of our approach.
[ { "version": "v1", "created": "Wed, 2 Apr 2014 05:33:41 GMT" }, { "version": "v2", "created": "Wed, 10 Jun 2015 18:20:16 GMT" } ]
2015-06-11T00:00:00
[ [ "Kuang", "Da", "" ], [ "Gittens", "Alex", "" ], [ "Hamid", "Raffay", "" ] ]
TITLE: piCholesky: Polynomial Interpolation of Multiple Cholesky Factors for Efficient Approximate Cross-Validation ABSTRACT: The dominant cost in solving least-squares problems using Newton's method is often that of factorizing the Hessian matrix over multiple values of the regularization parameter ($\lambda$). We propose an efficient way to interpolate the Cholesky factors of the Hessian matrix computed over a small set of $\lambda$ values. This approximation enables us to minimize the hold-out error optimally while incurring only a fraction of the cost of exact cross-validation. We provide a formal error bound for our approximation scheme and present solutions to a set of key implementation challenges that allow our approach to maximally exploit the compute power of modern architectures. We present a thorough empirical analysis over multiple datasets to show the effectiveness of our approach.
no_new_dataset
0.942135
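The interpolation idea in the record above can be sketched directly: factorize $H(\lambda) = A^{T}A + \lambda I$ at a few anchor values of $\lambda$, fit a low-degree polynomial to each entry of the Cholesky factor, and evaluate those polynomials at intermediate $\lambda$. The anchor placement and degree below are illustrative choices; the paper's scheme and its error bound are more careful than this toy.

```python
import numpy as np

def interpolated_cholesky(A, anchors, degree=2):
    """Per-entry polynomial interpolation of Cholesky factors of A^T A + lam*I."""
    H = A.T @ A
    n = H.shape[0]
    Ls = np.stack([np.linalg.cholesky(H + lam * np.eye(n)) for lam in anchors])
    # one polynomial in lambda per entry of the factor (highest degree first)
    coeffs = np.polyfit(anchors, Ls.reshape(len(anchors), -1), degree)

    def factor(lam):
        powers = lam ** np.arange(degree, -1, -1)
        return (powers[:, None] * coeffs).sum(axis=0).reshape(n, n)

    return factor

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 30))
L_at = interpolated_cholesky(A, anchors=[0.1, 1.0, 10.0])
L_approx = L_at(3.0)                       # no refactorization needed here
L_exact = np.linalg.cholesky(A.T @ A + 3.0 * np.eye(30))
print("max entrywise error:", np.max(np.abs(L_approx - L_exact)))
```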
1408.2617
Ludwig Ritschl
Ludwig Ritschl, Jan Kuntz and Marc Kachelrie{\ss}
The rotate-plus-shift C-arm trajectory: Complete CT data with less than 180{\deg} rotation
null
null
10.1117/12.2081925
null
physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically, a C-arm scan is performed using a circle-like trajectory around a region of interest. Therefore, an angular range of at least 180{\deg} plus fan-angle must be covered to ensure a completely sampled data set. This fact imposes constraints on the geometry and technical specifications of a C-arm system, for example a larger C radius or a smaller C opening. These technical modifications are usually not beneficial in terms of handling and usability of the C-arm during classical 2D applications like fluoroscopy. The method proposed in this paper relaxes the constraint of a 180{\deg}-plus-fan-angle rotation to acquire a complete data set. The proposed C-arm trajectory requires a motorization of the orbital axis of the C and of, ideally, two orthogonal axes in the C plane. The trajectory consists of three parts: a rotation of the C around a defined iso-center and two translational movements parallel to the detector plane at the beginning and at the end of the rotation. Combining these three parts into one trajectory enables the acquisition of a completely sampled dataset using only 180{\deg} minus fan-angle of rotation. To evaluate the method we show animal scans acquired with a mobile C-arm prototype. We expect that the transition of this method into clinical routine will lead to a much broader use of intraoperative 3D imaging in a wide range of clinical applications.
[ { "version": "v1", "created": "Tue, 12 Aug 2014 05:01:18 GMT" } ]
2015-06-11T00:00:00
[ [ "Ritschl", "Ludwig", "" ], [ "Kuntz", "Jan", "" ], [ "Kachelrieß", "Marc", "" ] ]
TITLE: The rotate-plus-shift C-arm trajectory: Complete CT data with less than 180{\deg} rotation ABSTRACT: In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically, a C-arm scan is performed using a circle-like trajectory around a region of interest. Therefore, an angular range of at least 180{\deg} plus fan-angle must be covered to ensure a completely sampled data set. This fact imposes constraints on the geometry and technical specifications of a C-arm system, for example a larger C radius or a smaller C opening. These technical modifications are usually not beneficial in terms of handling and usability of the C-arm during classical 2D applications like fluoroscopy. The method proposed in this paper relaxes the constraint of a 180{\deg}-plus-fan-angle rotation to acquire a complete data set. The proposed C-arm trajectory requires a motorization of the orbital axis of the C and of, ideally, two orthogonal axes in the C plane. The trajectory consists of three parts: a rotation of the C around a defined iso-center and two translational movements parallel to the detector plane at the beginning and at the end of the rotation. Combining these three parts into one trajectory enables the acquisition of a completely sampled dataset using only 180{\deg} minus fan-angle of rotation. To evaluate the method we show animal scans acquired with a mobile C-arm prototype. We expect that the transition of this method into clinical routine will lead to a much broader use of intraoperative 3D imaging in a wide range of clinical applications.
no_new_dataset
0.95018
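A hedged geometric sketch of the trajectory described in the record above: a circular arc about the iso-center spanning 180{\deg} minus the fan angle, bracketed by two translations parallel to the detector plane (taken here as tangential to the arc at its end points). Radii, shift lengths, and sample counts are illustrative; the actual prototype's kinematics and sampling are more constrained.

```python
import numpy as np

def rotate_plus_shift(radius=600.0, fan_deg=20.0, shift=150.0,
                      n_rot=180, n_shift=20):
    """Source positions (mm) in the C plane for the rotate-plus-shift path."""
    half = np.radians(180.0 - fan_deg) / 2.0
    ang = np.linspace(-half, half, n_rot)
    arc = np.c_[radius * np.cos(ang), radius * np.sin(ang)]  # rotation part

    def tangential_shift(point, theta, sign):
        # translation parallel to the detector = tangent to the arc here
        t = np.array([-np.sin(theta), np.cos(theta)])
        s = np.linspace(0.0, sign * shift, n_shift)[:, None]
        return point + s * t

    pre = tangential_shift(arc[0], ang[0], -1.0)[::-1]       # shift, then rotate
    post = tangential_shift(arc[-1], ang[-1], +1.0)          # rotate, then shift
    return np.vstack([pre, arc, post])

traj = rotate_plus_shift()   # shape (n_shift + n_rot + n_shift, 2)
```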