Dataset schema (column, dtype, observed range):

  column          dtype          min                    max
  id              string         9 chars                16 chars
  submitter       string         3 chars                64 chars
  authors         string         5 chars                6.63k chars
  title           string         7 chars                245 chars
  comments        string         1 char                 482 chars
  journal-ref     string         4 chars                382 chars
  doi             string         9 chars                151 chars
  report-no       string         984 distinct values
  categories      string         5 chars                108 chars
  license         string         9 distinct values
  abstract        string         83 chars               3.41k chars
  versions        list           1 item                 20 items
  update_date     timestamp[s]   2007-05-23 00:00:00    2025-04-11 00:00:00
  authors_parsed  sequence       1 item                 427 items
  prompt          string         166 chars              3.49k chars
  label           string         2 distinct values
  prob            float64        0.5                    0.98
id: 1609.08716
submitter: Juan Francisco Saldarriaga
authors: Juan Francisco Saldarriaga (Columbia University), David A. King (Arizona State University)
title: Access to Taxicabs for Unbanked Households: An Exploratory Analysis in New York City
comments: Presented at the Data For Good Exchange 2016
journal-ref: null
doi: null
report-no: null
categories: cs.CY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Taxicabs are a critical aspect of the public transit system in New York City. The yellow cabs that are ubiquitous in Manhattan are as iconic as the city's subway system, and in recent years green taxicabs were introduced by the city to improve taxi service in areas outside of the central business districts and airports. Approximately 500,000 taxi trips are taken daily, carrying about 800,000 passengers, figures that do not include other livery firms such as Uber, Lyft, or Carmel. Since 2008 yellow taxis have been able to process fare payments with credit cards, and credit cards are a growing share of total fare payments. However, the use of credit cards to pay for taxi fares varies widely across neighborhoods, and there are strong correlations between cash payments for taxi fares and the presence of unbanked or underbanked populations. These issues are of concern for policymakers, as approximately ten percent of households in the city are unbanked, and in some neighborhoods the share of unbanked households is over 50 percent. In this paper we use multiple datasets to explore taxicab fare payments by neighborhood and examine how access to taxicab services is associated with use of conventional banking services. There is a clear spatial dimension to the propensity of riders to pay cash, and we find that both immigrant status and being 'unbanked' are strong predictors of cash transactions for taxicabs. These results have implications for local regulation of the for-hire vehicle industry, particularly in the context of the rapid growth of services that require credit cards. Without some type of cash-based payment option, taxi services will isolate certain neighborhoods. At the very least, existing and new providers of transit services must consider access to mainstream financial products as part of their equity analyses.
versions: [ { "version": "v1", "created": "Wed, 28 Sep 2016 00:34:02 GMT" } ]
update_date: 2016-09-29T00:00:00
authors_parsed: [ [ "Saldarriaga", "Juan Francisco", "", "Columbia University" ], [ "King", "David A.", "", "Arizona State University" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.906198
id: 1609.08740
submitter: Shifeng Zhang
authors: Shifeng Zhang, Jianmin Li, Jinma Guo, and Bo Zhang
title: Scalable Discrete Supervised Hash Learning with Asymmetric Matrix Factorization
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Hashing methods map similar data to binary hashcodes with smaller Hamming distance, and they have received broad attention due to their low storage cost and fast retrieval speed. However, existing limitations make it difficult for present algorithms to deal with large-scale datasets: (1) discrete constraints are involved in learning the hash function; (2) pairwise or triplet similarity is adopted to generate efficient hashcodes, so that both time and space complexity are at least O(n^2). To address these issues, we propose a novel discrete supervised hash learning framework that can scale to large-scale datasets. First, the discrete learning procedure is decomposed into a binary classifier learning scheme and a binary code learning scheme, which makes the learning procedure more efficient. Second, we adopt Asymmetric Low-rank Matrix Factorization and propose the Fast Clustering-based Batch Coordinate Descent method, such that the time and space complexity is reduced to O(n). The proposed framework also provides a flexible paradigm for incorporating arbitrary hash functions, including deep neural networks and kernel methods. Experiments on large-scale datasets demonstrate that the proposed method is superior or comparable to state-of-the-art hashing algorithms.
versions: [ { "version": "v1", "created": "Wed, 28 Sep 2016 02:37:23 GMT" } ]
update_date: 2016-09-29T00:00:00
authors_parsed: [ [ "Zhang", "Shifeng", "" ], [ "Li", "Jianmin", "" ], [ "Guo", "Jinma", "" ], [ "Zhang", "Bo", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.947186
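As context for the retrieval setting this abstract assumes, here is a minimal sketch of Hamming-distance ranking over binary hashcodes — the generic mechanism hashing methods rely on, not the paper's learning algorithm; the 64-bit code length and array shapes are illustrative assumptions:

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query hashcode.

    query_code: (bits,) array of {0,1}; db_codes: (n, bits) array of {0,1}.
    XOR counts differing bits, which is exactly the Hamming distance.
    """
    dists = np.bitwise_xor(db_codes, query_code).sum(axis=1)
    return np.argsort(dists, kind="stable"), dists

# Toy usage with random 64-bit codes for 1000 database items.
rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)
order, dists = hamming_rank(db[0], db)
assert dists[order[0]] == 0  # the query itself is its own nearest neighbor
```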
id: 1609.08758
submitter: Mayu Otani
authors: Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Naokazu Yokoya
title: Video Summarization using Deep Semantic Features
comments: 16 pages, the 13th Asian Conference on Computer Vision (ACCV'16)
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: This paper presents a video summarization technique for Internet videos that provides a quick way to overview their content. This is a challenging problem because finding important or informative parts of the original video requires understanding its content. Furthermore, the content of Internet videos is very diverse, ranging from home videos to documentaries, which makes video summarization much harder, as prior knowledge is almost never available. To tackle this problem, we propose to use deep video features that can encode various levels of content semantics, including objects, actions, and scenes, improving the efficiency of standard video summarization techniques. For this, we design a deep neural network that maps videos as well as descriptions to a common semantic space and jointly train it on associated pairs of videos and descriptions. To generate a video summary, we extract the deep features from each segment of the original video and apply a clustering-based summarization technique to them. We evaluate our video summaries using the SumMe dataset as well as baseline approaches. The results demonstrate the advantages of incorporating our deep semantic features in a video summarization technique.
versions: [ { "version": "v1", "created": "Wed, 28 Sep 2016 03:41:49 GMT" } ]
update_date: 2016-09-29T00:00:00
authors_parsed: [ [ "Otani", "Mayu", "" ], [ "Nakashima", "Yuta", "" ], [ "Rahtu", "Esa", "" ], [ "Heikkilä", "Janne", "" ], [ "Yokoya", "Naokazu", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.945601
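The final step this abstract describes — cluster per-segment features, then build a summary — can be sketched as follows; the deep feature extractor is abstracted away as a precomputed matrix, and `n_keyshots` is an assumed parameter rather than anything from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_keyshots(segment_features, n_keyshots=5):
    """Cluster segment-level deep features and pick, per cluster,
    the segment closest to the centroid as a summary keyshot."""
    km = KMeans(n_clusters=n_keyshots, n_init=10, random_state=0).fit(segment_features)
    keyshots = []
    for c in range(n_keyshots):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(segment_features[members] - km.cluster_centers_[c], axis=1)
        keyshots.append(int(members[np.argmin(d)]))
    return sorted(keyshots)

# Toy usage: 40 segments embedded in a 128-d semantic space.
feats = np.random.default_rng(1).normal(size=(40, 128))
print(select_keyshots(feats, n_keyshots=5))
```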
id: 1609.08824
submitter: Subhro Roy
authors: Subhro Roy, Shyam Upadhyay, Dan Roth
title: Equation Parsing: Mapping Sentences to Grounded Equations
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract: Identifying mathematical relations expressed in text is essential to understanding a broad range of natural language text, from election reports, to financial news, to sports commentaries, to mathematical word problems. This paper focuses on identifying and understanding mathematical relations described within a single sentence. We introduce the problem of Equation Parsing -- given a sentence, identify noun phrases which represent variables, and generate the mathematical equation expressing the relation described in the sentence. We introduce the notion of projective equation parsing and provide an efficient algorithm to parse text to projective equations. Our system makes use of a high-precision lexicon of mathematical expressions and a pipeline of structured predictors, and generates correct equations in $70\%$ of the cases. In $60\%$ of the cases, it also identifies the correct noun phrase $\rightarrow$ variable mapping, significantly outperforming baselines. We also release a new annotated dataset for task evaluation.
versions: [ { "version": "v1", "created": "Wed, 28 Sep 2016 08:54:05 GMT" } ]
update_date: 2016-09-29T00:00:00
authors_parsed: [ [ "Roy", "Subhro", "" ], [ "Upadhyay", "Shyam", "" ], [ "Roth", "Dan", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: new_dataset
prob: 0.956675
id: 1609.08938
submitter: Allison Del Giorno
authors: Allison Del Giorno, J. Andrew Bagnell, Martial Hebert
title: A Discriminative Framework for Anomaly Detection in Large Videos
comments: 14 pages without references, 16 pages with. 7 figures. Accepted to ECCV 2016
journal-ref: null
doi: null
report-no: null
categories: cs.CV stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We address an anomaly detection setting in which training sequences are unavailable and anomalies are scored independently of temporal ordering. Current algorithms in anomaly detection are based on the classical density estimation approach of learning high-dimensional models and finding low-probability events. These algorithms are sensitive to the order in which anomalies appear and require either training data or early context assumptions that do not hold for longer, more complex videos. By defining anomalies as examples that can be distinguished from other examples in the same video, our definition inspires a shift in approaches from classical density estimation to simple discriminative learning. Our contributions include a novel framework for anomaly detection that is (1) independent of temporal ordering of anomalies, and (2) unsupervised, requiring no separate training sequences. We show that our algorithm can achieve state-of-the-art results even when we adjust the setting by removing training sequences from standard datasets.
versions: [ { "version": "v1", "created": "Wed, 28 Sep 2016 14:48:32 GMT" } ]
update_date: 2016-09-29T00:00:00
authors_parsed: [ [ "Del Giorno", "Allison", "" ], [ "Bagnell", "J. Andrew", "" ], [ "Hebert", "Martial", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.950915
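One way to make the "anomalies are examples distinguishable from the rest of the same video" definition concrete is a sliding-window discriminative score. This is a loose sketch in that spirit, not the authors' exact procedure; the window size and the logistic-regression choice are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def discriminative_scores(frame_features, window=20):
    """Anomaly score per window: how well a linear classifier separates
    the window's frames from all other frames of the same video."""
    n = len(frame_features)
    scores = np.zeros(n)
    for start in range(0, n - window + 1, window):
        y = np.zeros(n)
        y[start:start + window] = 1
        clf = LogisticRegression(max_iter=1000).fit(frame_features, y)
        # Mean predicted probability of the window's own frames being "in-window":
        # easily separable (hence unusual) windows score high.
        scores[start:start + window] = clf.predict_proba(
            frame_features[start:start + window])[:, 1].mean()
    return scores

# Toy usage: 200 frames of 64-d features with an injected odd segment.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 64))
X[120:140] += 3.0  # the shifted segment should be easy to separate
print(discriminative_scores(X).round(2)[100:160])
```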
id: 1609.09018
submitter: Tobi Baumgartner
authors: Tobi Baumgartner and Jack Culpepper
title: Deep Architectures for Face Attributes
comments: 11 pages, 2 figures, accepted in "Workshop on Facial Informatics in conjunction with ACCV '16"
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We train a deep convolutional neural network to perform identity classification using a new dataset of public figures annotated with age, gender, ethnicity, and emotion labels, and then fine-tune it for attribute classification. An optimal sharing pattern of computational resources within this network is determined by experiment, requiring only 1 GFLOP to produce all predictions. Rather than fine-tuning by relearning weights in one additional layer after the penultimate layer of the identity network, we try several different depths for each attribute. We find that prediction of age and emotion is improved by fine-tuning from earlier layers onward, presumably because deeper layers are progressively invariant to non-identity-related changes in the input.
versions: [ { "version": "v1", "created": "Wed, 28 Sep 2016 17:57:46 GMT" } ]
update_date: 2016-09-29T00:00:00
authors_parsed: [ [ "Baumgartner", "Tobi", "" ], [ "Culpepper", "Jack", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: new_dataset
prob: 0.957238
id: 1506.01461
submitter: Matthew Burgess
authors: Matthew Burgess, Eytan Adar, Michael Cafarella
title: Link-Prediction Enhanced Consensus Clustering for Complex Networks
comments: null
journal-ref: null
doi: 10.1371/journal.pone.0153384
report-no: null
categories: cs.SI physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Many real networks that are inferred or collected from data are incomplete due to missing edges. Missing edges can be inherent to the dataset (Facebook friend links will never be complete) or the result of sampling (one may only have access to a portion of the data). The consequence is that downstream analyses that consume the network will often yield less accurate results than if the edges were complete. Community detection algorithms, in particular, often suffer when critical intra-community edges are missing. We propose a novel consensus clustering algorithm to enhance community detection on incomplete networks. Our framework utilizes existing community detection algorithms that process networks imputed by our link prediction based algorithm. The framework then merges their multiple outputs into a final consensus output. On average our method boosts performance of existing algorithms by 7% on artificial data and 17% on ego networks collected from Facebook.
versions: [ { "version": "v1", "created": "Thu, 4 Jun 2015 04:18:16 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Burgess", "Matthew", "" ], [ "Adar", "Eytan", "" ], [ "Cafarella", "Michael", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.951188
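A minimal sketch of the impute-then-cluster-then-merge pipeline described above, using Jaccard link prediction and greedy modularity as stand-ins for whatever specific components the paper uses; `n_rounds` and `top_k` are assumed parameters:

```python
import networkx as nx
import numpy as np
from itertools import combinations

def consensus_communities(G, n_rounds=5, top_k=20, seed=0):
    """Impute likely missing edges, cluster each imputed graph,
    and merge the runs into one co-clustering consensus matrix."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    idx = {u: i for i, u in enumerate(nodes)}
    consensus = np.zeros((len(nodes), len(nodes)))
    # Jaccard scores for currently unconnected pairs, best candidates first.
    cand = sorted(nx.jaccard_coefficient(G), key=lambda t: -t[2])[: 3 * top_k]
    for _ in range(n_rounds):
        H = G.copy()
        picks = rng.choice(len(cand), size=min(top_k, len(cand)), replace=False)
        H.add_edges_from((cand[i][0], cand[i][1]) for i in picks)
        for comm in nx.community.greedy_modularity_communities(H):
            for u, v in combinations(comm, 2):
                consensus[idx[u], idx[v]] += 1 / n_rounds
                consensus[idx[v], idx[u]] += 1 / n_rounds
    return nodes, consensus  # threshold/cluster this matrix for the final output

nodes, C = consensus_communities(nx.karate_club_graph())
```

Randomly sampling the imputed edges per round gives the consensus step distinct clusterings to merge; the paper's own imputation and merging rules will differ.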
id: 1511.02647
submitter: Samuel Martin
authors: Corentin Vande Kerckhove, Samuel Martin, Pascal Gend, Peter J. Rentfrow, Julien M. Hendrickx, and Vincent D. Blondel
title: Modelling influence and opinion evolution in online collective behaviour
comments: Accepted for publication in PLOS ONE (2016)
journal-ref: null
doi: 10.1371/journal.pone.0157685
report-no: null
categories: cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n=861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When the data is scarce, the data from previous participants is used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed. The measure is based on a specific control experiment. More than two thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection.
versions: [ { "version": "v1", "created": "Mon, 9 Nov 2015 11:52:08 GMT" }, { "version": "v2", "created": "Mon, 14 Dec 2015 23:01:57 GMT" }, { "version": "v3", "created": "Fri, 3 Jun 2016 16:33:18 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Kerckhove", "Corentin Vande", "" ], [ "Martin", "Samuel", "" ], [ "Gend", "Pascal", "" ], [ "Rentfrow", "Peter J.", "" ], [ "Hendrickx", "Julien M.", "" ], [ "Blondel", "Vincent D.", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.947914
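The "influenceability" parametrization lends itself to a one-line update rule. A minimal sketch follows; the update form is the standard DeGroot-style consensus step, assumed here rather than taken from the paper:

```python
import numpy as np

def revise_judgments(x, alpha, peers_mean):
    """One revision round: each individual moves toward the average
    judgment shown to them, by their own influenceability alpha in [0, 1]."""
    return (1 - alpha) * x + alpha * peers_mean

x = np.array([10.0, 40.0, 70.0])           # initial judgments
alpha = np.array([0.1, 0.5, 0.9])          # per-individual influenceability
social = np.full(3, x.mean())              # everyone sees the group mean
print(revise_judgments(x, alpha, social))  # low-alpha individuals barely move
```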
id: 1511.04156
submitter: Josh Merel
authors: Josh Merel, David Carlson, Liam Paninski, John P. Cunningham
title: Neuroprosthetic decoder training as imitation learning
comments: null
journal-ref: null
doi: 10.1371/journal.pcbi.1004948
report-no: null
categories: stat.ML cs.LG q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. We describe how training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger, [1]), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
versions: [ { "version": "v1", "created": "Fri, 13 Nov 2015 04:21:33 GMT" }, { "version": "v2", "created": "Mon, 14 Mar 2016 16:39:03 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Merel", "Josh", "" ], [ "Carlson", "David", "" ], [ "Paninski", "Liam", "" ], [ "Cunningham", "John P.", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.943138
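The dataset-aggregation loop the abstract refers to (DAgger) has a simple generic shape. Here is a sketch with stand-in `env`, `oracle`, and `policy` interfaces — all hypothetical placeholders, not the paper's BCI components:

```python
def dagger(env, oracle, policy, n_iters=10, horizon=100):
    """Generic DAgger: roll out the current policy, label the visited
    states with the oracle's actions, aggregate, and retrain."""
    states, actions = [], []
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(horizon):
            states.append(s)               # visit states under the learner...
            actions.append(oracle.act(s))  # ...but label them with the expert
            s, done = env.step(policy.act(s))
            if done:
                break
        policy.fit(states, actions)        # retrain on the aggregated dataset
    return policy
```

The key property, visible in the loop, is that training states come from the learner's own rollouts while labels come from the oracle, which is what the regret analysis in the abstract builds on.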
id: 1602.09108
submitter: Philipp Hövel
authors: Hartmut H. K. Lentz, Andreas Koher, Philipp Hövel, Jörn Gethmann, Carola Sauter-Louis, Thomas Selhorst, Franz J. Conraths
title: Disease spread through animal movements: a static and temporal network analysis of pig trade in Germany
comments: main text 33 pages, 17 figures; supporting information 7 pages, 7 figures
journal-ref: null
doi: 10.1371/journal.pone.0155196
report-no: null
categories: physics.soc-ph q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Background: Animal trade plays an important role in the spread of infectious diseases in livestock populations. As a case study, we consider pig trade in Germany, where trade actors (agricultural premises) form a complex network. The central question is how infectious diseases can potentially spread within the system of trade contacts. We address this question by analyzing the underlying network of animal movements. Methodology/Findings: The considered pig trade dataset spans several years and is analyzed with respect to its potential to spread infectious diseases. Focusing on measurements of network-topological properties, we avoid the use of external parameters, since these properties are independent of specific pathogens. They are, on the contrary, of great importance for understanding any general spreading process on this particular network. We analyze the system using different network models, which include varying amounts of information: (i) a static network, (ii) a network as a time series of uncorrelated snapshots, (iii) a temporal network, where causality is explicitly taken into account. Findings: Our approach provides a general framework for a topological-temporal characterization of livestock trade networks. We find that a static network view captures many relevant aspects of the trade system, and premises can be classified into two clearly defined risk classes. Moreover, our results allow for an efficient allocation strategy for intervention measures using centrality measures. Data on trade volume barely alters the results and is therefore of secondary importance. Although a static network description yields useful results, the temporal resolution of data plays an outstanding role for an in-depth understanding of spreading processes. This applies in particular to an accurate calculation of the maximum outbreak size.
versions: [ { "version": "v1", "created": "Mon, 29 Feb 2016 19:22:35 GMT" }, { "version": "v2", "created": "Wed, 2 Mar 2016 20:34:20 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Lentz", "Hartmut H. K.", "" ], [ "Koher", "Andreas", "" ], [ "Hövel", "Philipp", "" ], [ "Gethmann", "Jörn", "" ], [ "Sauter-Louis", "Carola", "" ], [ "Selhorst", "Thomas", "" ], [ "Conraths", "Franz J.", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.944434
id: 1604.00125
submitter: Ziqiang Cao
authors: Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei and Yanran Li
title: AttSum: Joint Learning of Focusing and Summarization with Neural Attention
comments: 10 pages, 1 figure
journal-ref: COLING 2016
doi: null
report-no: null
categories: cs.IR cs.CL
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Query relevance ranking and sentence saliency ranking are the two main tasks in extractive query-focused summarization. Previous supervised summarization systems often perform the two tasks in isolation. However, since reference summaries are a trade-off between relevance and saliency, neither of the two rankers can be trained well when using them as supervision. This paper proposes a novel summarization system called AttSum, which tackles the two tasks jointly. It automatically learns distributed representations for sentences as well as the document cluster. Meanwhile, it applies the attention mechanism to simulate the attentive reading behavior of humans when a query is given. Extensive experiments are conducted on DUC query-focused summarization benchmark datasets. Without using any hand-crafted features, AttSum achieves competitive performance. It is also observed that the sentences recognized as focusing on the query indeed meet the query need.
versions: [ { "version": "v1", "created": "Fri, 1 Apr 2016 04:18:39 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2016 02:22:33 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Cao", "Ziqiang", "" ], [ "Li", "Wenjie", "" ], [ "Li", "Sujian", "" ], [ "Wei", "Furu", "" ], [ "Li", "Yanran", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.944536
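The query-aware pooling that "simulates attentive reading" can be sketched as standard dot-product attention over sentence embeddings — a generic mechanism, not AttSum's exact scoring network; the embedding dimensions are illustrative:

```python
import numpy as np

def attention_pool(sentence_embs, query_emb):
    """Weight sentence embeddings by softmax similarity to the query,
    returning attention weights and the attended cluster representation."""
    scores = sentence_embs @ query_emb
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w, w @ sentence_embs

S = np.random.default_rng(3).normal(size=(12, 50))  # 12 sentences, 50-d
q = S[4] + 0.1                                      # query close to sentence 4
w, doc_rep = attention_pool(S, q)
print(int(w.argmax()))                              # highest weight lands near 4
```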
id: 1604.02123
submitter: Talayeh Razzaghi
authors: Talayeh Razzaghi, Oleg Roderick, Ilya Safro, Nicholas Marko
title: Multilevel Weighted Support Vector Machine for Classification on Healthcare Data with Missing Values
comments: arXiv admin note: substantial text overlap with arXiv:1503.06250
journal-ref: null
doi: 10.1371/journal.pone.0155119
report-no: null
categories: stat.ML cs.LG stat.AP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: This work is motivated by the needs of predictive analytics on healthcare data as represented by Electronic Medical Records. Such data is invariably problematic: noisy, with missing entries, and with imbalance in the classes of interest, leading to serious bias in predictive modeling. Since standard data mining methods often produce poor performance measures, we argue for the development of specialized data-preprocessing and classification techniques. In this paper, we propose a new method to simultaneously classify large datasets and reduce the effects of missing values. It is based on a multilevel framework of the cost-sensitive SVM and the expectation maximization imputation method for missing values, which relies on iterated regression analyses. We compare classification results of multilevel SVM-based algorithms on public benchmark datasets with imbalanced classes and missing values, as well as real data in health applications, and show that our multilevel SVM-based method produces faster, more accurate, and more robust classification results.
versions: [ { "version": "v1", "created": "Thu, 7 Apr 2016 19:19:52 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Razzaghi", "Talayeh", "" ], [ "Roderick", "Oleg", "" ], [ "Safro", "Ilya", "" ], [ "Marko", "Nicholas", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.949623
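The two ingredients named above — regression-based iterative imputation and a cost-sensitive SVM — both have off-the-shelf analogues. A sketch using scikit-learn stand-ins follows; the paper's multilevel framework itself is not reproduced, and the synthetic data is purely illustrative:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 1.2).astype(int)  # imbalanced labels
X[rng.random(X.shape) < 0.1] = np.nan                         # 10% missing entries

# Iterated regression-based imputation, in the spirit of EM imputation.
X_imp = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)

# Cost-sensitive SVM: the rarer class gets a proportionally larger weight.
clf = SVC(class_weight="balanced").fit(X_imp, y)
print(clf.score(X_imp, y))
```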
id: 1605.01584
submitter: Yongchao Liu
authors: Yongchao Liu, Tony Pan, Srinivas Aluru
title: Parallel Pairwise Correlation Computation On Intel Xeon Phi Clusters
comments: 9 pages, 2 figures, 2 tables, accepted by the SBAC-PAD 2016 conference
journal-ref: null
doi: null
report-no: null
categories: cs.DC q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Co-expression network analysis is a critical technique for the identification of inter-gene interactions, which usually relies on all-pairs correlation (or similar measure) computation between gene expression profiles across multiple samples. Pearson's correlation coefficient (PCC) is one widely used technique for gene co-expression network construction. However, all-pairs PCC computation is computationally demanding for large numbers of gene expression profiles, thus motivating our acceleration of its execution using high-performance computing. In this paper, we present LightPCC, the first parallel and distributed all-pairs PCC computation on Intel Xeon Phi (Phi) clusters. It achieves high speed by exploiting the SIMD-instruction-level and thread-level parallelism within Phis as well as accelerator-level parallelism among multiple Phis. To facilitate balanced workload distribution, we have proposed a general framework for symmetric all-pairs computation by building, for the first time, bijective functions between the job identifier and the coordinate space. We have evaluated LightPCC and compared it to two CPU-based counterparts: a sequential C++ implementation in ALGLIB and an implementation based on a parallel general matrix-matrix multiplication routine in Intel Math Kernel Library (MKL) (all use double precision), using a set of gene expression datasets. Performance evaluation revealed that with one 5110P Phi and 16 Phis, LightPCC runs up to $20.6\times$ and $218.2\times$ faster than ALGLIB, and up to $6.8\times$ and $71.4\times$ faster than single-threaded MKL, respectively. In addition, LightPCC demonstrated good parallel scalability in terms of number of Phis. Source code of LightPCC is publicly available at http://lightpcc.sourceforge.net.
versions: [ { "version": "v1", "created": "Thu, 5 May 2016 13:30:28 GMT" }, { "version": "v2", "created": "Fri, 10 Jun 2016 13:35:27 GMT" }, { "version": "v3", "created": "Tue, 27 Sep 2016 00:15:44 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Liu", "Yongchao", "" ], [ "Pan", "Tony", "" ], [ "Aluru", "Srinivas", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.951188
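The "bijective function between job identifier and coordinate space" for symmetric all-pairs work can be illustrated directly. Here is a row-major mapping over the strict upper triangle — a standard construction, assumed to differ in detail from LightPCC's own:

```python
def pair_to_id(i, j, n):
    """Map a pair (i < j) of n items to a linear job id in [0, n(n-1)/2)."""
    return i * n - i * (i + 1) // 2 + (j - i - 1)

def id_to_pair(k, n):
    """Inverse mapping: recover (i, j) from the linear job id."""
    i = 0
    while k >= n - 1 - i:   # row i of the triangle holds n-1-i pairs
        k -= n - 1 - i
        i += 1
    return i, i + 1 + k

n = 7
ids = [pair_to_id(i, j, n) for i in range(n) for j in range(i + 1, n)]
assert ids == list(range(n * (n - 1) // 2))             # ids are dense and ordered
assert all(id_to_pair(pair_to_id(i, j, n), n) == (i, j)
           for i in range(n) for j in range(i + 1, n))  # a true bijection
```

Because job ids form a dense range, they can be split evenly across workers without materializing the pair list, which is the load-balancing point the abstract makes.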
id: 1607.03316
submitter: Dirk Weissenborn
authors: Dirk Weissenborn
title: Separating Answers from Queries for Neural Reading Comprehension
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We present a novel neural architecture for answering queries, designed to optimally leverage explicit support in the form of query-answer memories. Our model is able to refine and update a given query while separately accumulating evidence for predicting the answer. Its architecture reflects this separation with dedicated embedding matrices and loosely connected information pathways (modules) for updating the query and accumulating evidence. This separation of responsibilities effectively decouples the search for query related support and the prediction of the answer. On recent benchmark datasets for reading comprehension, our model achieves state-of-the-art results. A qualitative analysis reveals that the model effectively accumulates weighted evidence from the query and over multiple support retrieval cycles which results in a robust answer prediction.
versions: [ { "version": "v1", "created": "Tue, 12 Jul 2016 11:43:15 GMT" }, { "version": "v2", "created": "Wed, 13 Jul 2016 11:54:46 GMT" }, { "version": "v3", "created": "Tue, 27 Sep 2016 13:37:41 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Weissenborn", "Dirk", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.947088
id: 1609.08210
submitter: Ferhan Ture
authors: Ferhan Ture and Elizabeth Boschee
title: Learning to Translate for Multilingual Question Answering
comments: 12 pages. To appear in EMNLP'16
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In multilingual question answering, either the question needs to be translated into the document language, or vice versa. In addition to direction, there are multiple methods to perform the translation, four of which we explore in this paper: word-based, 10-best, context-based, and grammar-based. We build a feature for each combination of translation direction and method, and train a model that learns optimal feature weights. On a large forum dataset consisting of posts in English, Arabic, and Chinese, our novel learn-to-translate approach was more effective than a strong baseline (p<0.05): translating all text into English, then training a classifier based only on English (original or translated) text.
versions: [ { "version": "v1", "created": "Mon, 26 Sep 2016 22:12:50 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Ture", "Ferhan", "" ], [ "Boschee", "Elizabeth", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.94545
id: 1609.08264
submitter: Zhao Kang
authors: Zhao Kang, Chong Peng, Ming Yang, Qiang Cheng
title: Top-N Recommendation on Graphs
comments: CIKM 2016
journal-ref: null
doi: 10.1145/2983323.2983649
report-no: null
categories: cs.IR cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Recommender systems play an increasingly important role in online applications to help users find what they need or prefer. Collaborative filtering algorithms that generate predictions by analyzing the user-item rating matrix perform poorly when the matrix is sparse. To alleviate this problem, this paper proposes a simple recommendation algorithm that fully exploits the similarity information among users and items and the intrinsic structural information of the user-item matrix. The proposed method constructs a new representation which preserves affinity and structure information in the user-item rating matrix and then performs the recommendation task. To capture proximity information about users and items, two graphs are constructed. The manifold learning idea is used to constrain the new representation to be smooth on these graphs, so as to enforce user and item proximities. Our model is formulated as a convex optimization problem, for which we only need to solve the well-known Sylvester equation. We carry out extensive empirical evaluations on six benchmark datasets to show the effectiveness of this approach.
versions: [ { "version": "v1", "created": "Tue, 27 Sep 2016 05:45:03 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Kang", "Zhao", "" ], [ "Peng", "Chong", "" ], [ "Yang", "Ming", "" ], [ "Cheng", "Qiang", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.945951
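The closing remark about reducing the optimization to a Sylvester equation, AX + XB = Q, can be made concrete with SciPy's solver. Random matrices stand in for the paper's graph-derived terms:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))   # in the paper's setting these would come from
B = rng.normal(size=(3, 3))   # user/item graph terms; random stand-ins here
Q = rng.normal(size=(4, 3))

X = solve_sylvester(A, B, Q)          # solves A X + X B = Q directly
assert np.allclose(A @ X + X @ B, Q)  # verify the residual vanishes
```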
id: 1609.08286
submitter: Weixiang Shao
authors: Weixiang Shao, Lifang He, Chun-Ta Lu, Xiaokai Wei, Philip S. Yu
title: Online Unsupervised Multi-view Feature Selection
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In the era of big data, it is becoming common to have data with multiple modalities or coming from multiple sources, known as "multi-view data". Because multi-view data are usually unlabeled and come from high-dimensional spaces (such as language vocabularies), unsupervised multi-view feature selection is crucial to many applications. However, it is nontrivial due to the following challenges. First, there are too many instances or the feature dimensionality is too large; thus, the data may not fit in memory. How to select useful features with limited memory space? Second, how to select features from streaming data and handle concept drift? Third, how to leverage the consistent and complementary information from different views to improve feature selection when the data are too big or arrive as streams? To the best of our knowledge, none of the previous works can solve all the challenges simultaneously. In this paper, we propose Online unsupervised Multi-View Feature Selection (OMVFS), which deals with large-scale/streaming multi-view data in an online fashion. OMVFS embeds unsupervised feature selection into a clustering algorithm via NMF with sparse learning. It further incorporates graph regularization to preserve the local structure information and help select discriminative features. Instead of storing all the historical data, OMVFS processes the multi-view data chunk by chunk and aggregates all the necessary information into several small matrices. By using the buffering technique, the proposed OMVFS can reduce the computational and storage cost while taking advantage of the structure information. Furthermore, OMVFS can capture the concept drifts in the data streams. Extensive experiments on four real-world datasets show the effectiveness and efficiency of the proposed OMVFS method. More importantly, OMVFS is about 100 times faster than the off-line methods.
versions: [ { "version": "v1", "created": "Tue, 27 Sep 2016 07:10:16 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Shao", "Weixiang", "" ], [ "He", "Lifang", "" ], [ "Lu", "Chun-Ta", "" ], [ "Wei", "Xiaokai", "" ], [ "Yu", "Philip S.", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.941331
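The memory trick described above — processing chunks while "aggregating all the necessary information into several small matrices" — is easiest to see on a simpler statistic. This sketch accumulates per-view Gram matrices chunk by chunk; the NMF and graph-regularization machinery of OMVFS is omitted:

```python
import numpy as np

def accumulate_grams(chunks_per_view):
    """Stream multi-view data chunk by chunk, keeping only d x d Gram
    matrices per view instead of the full n x d histories."""
    grams = None
    for chunk_views in chunks_per_view:           # one chunk per view at a time
        if grams is None:
            grams = [np.zeros((v.shape[1], v.shape[1])) for v in chunk_views]
        for g, v in zip(grams, chunk_views):
            g += v.T @ v                          # small, fixed-size summary
    return grams

# Toy stream: 10 chunks of 100 rows, two views of widths 8 and 5.
rng = np.random.default_rng(6)
stream = ([rng.normal(size=(100, 8)), rng.normal(size=(100, 5))] for _ in range(10))
g1, g2 = accumulate_grams(stream)
print(g1.shape, g2.shape)  # (8, 8) (5, 5): memory independent of stream length
```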
id: 1609.08313
submitter: Jun Yang
authors: Jun Yang, Zhenhua Tian
title: Unsupervised Co-segmentation of 3D Shapes via Functional Maps
comments: 14 pages, 8 figures
journal-ref: null
doi: null
report-no: null
categories: cs.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We present an unsupervised method for co-segmentation of a set of 3D shapes from the same class, with the aim of segmenting the input shapes into consistent semantic parts and establishing their correspondence across the set. Starting from a meaningful pre-segmentation of each given shape individually, we construct the correspondence between matching candidate parts and obtain the labels via functional maps. Then, we use these labels to mark the input shapes and obtain the co-segmentation results. The core of our algorithm is to seek an optimal correspondence between semantically similar parts through functional maps and to mark such shape parts. Experimental results on the benchmark datasets show the efficiency of this method and accuracy comparable to state-of-the-art algorithms.
versions: [ { "version": "v1", "created": "Tue, 27 Sep 2016 08:35:14 GMT" } ]
update_date: 2016-09-28T00:00:00
authors_parsed: [ [ "Yang", "Jun", "" ], [ "Tian", "Zhenhua", "" ] ]
prompt: [verbatim concatenation of the title and abstract above]
label: no_new_dataset
prob: 0.955068
1609.08399
Mohamed Moustafa
Eman Ahmed, Mohamed Moustafa
House price estimation from visual and textual features
NCTA 2016. Final paper is on SCITEPRESS digital library
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing automatic house price estimation systems rely only on some textual data like its neighborhood area and the number of rooms. The final price is estimated by a human agent who visits the house and assesses it visually. In this paper, we propose extracting visual features from house photographs and combining them with the house's textual information. The combined features are fed to a fully connected multilayer Neural Network (NN) that estimates the house price as its single output. To train and evaluate our network, we have collected the first houses dataset (to our knowledge) that combines both images and textual attributes. The dataset is composed of 535 sample houses from the state of California, USA. Our experiments showed that adding the visual features increased the R-value by a factor of 3 and decreased the Mean Square Error (MSE) by one order of magnitude compared with textual-only features. Additionally, when trained on the benchmark textual-only features housing dataset, our proposed NN still outperformed the existing model published results.
[ { "version": "v1", "created": "Tue, 27 Sep 2016 13:15:31 GMT" } ]
2016-09-28T00:00:00
[ [ "Ahmed", "Eman", "" ], [ "Moustafa", "Mohamed", "" ] ]
TITLE: House price estimation from visual and textual features ABSTRACT: Most existing automatic house price estimation systems rely only on textual data such as the neighborhood area and the number of rooms. The final price is estimated by a human agent who visits the house and assesses it visually. In this paper, we propose extracting visual features from house photographs and combining them with the house's textual information. The combined features are fed to a fully connected multilayer Neural Network (NN) that estimates the house price as its single output. To train and evaluate our network, we have collected the first houses dataset (to our knowledge) that combines both images and textual attributes. The dataset is composed of 535 sample houses from the state of California, USA. Our experiments showed that adding the visual features increased the R-value by a factor of 3 and decreased the Mean Square Error (MSE) by one order of magnitude compared with textual-only features. Additionally, when trained on the benchmark textual-only housing dataset, our proposed NN still outperformed the published results of the existing model.
new_dataset
0.964656
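An illustrative sketch of the early-fusion scheme in the house-price abstract above: concatenate visual and textual features and regress the price with a fully connected network. The arrays are synthetic stand-ins, and scikit-learn's MLPRegressor serves here only as a convenient multilayer NN, not the authors' exact model.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
visual_feats = rng.random((535, 128))    # hypothetical photo features
textual_feats = rng.random((535, 4))     # e.g. beds, baths, area, zip code
prices = rng.random(535) * 1e6           # hypothetical sale prices

X = np.hstack([visual_feats, textual_feats])   # early fusion of both views
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, prices)
print(model.predict(X[:3]))              # predicted prices for three houses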
1609.08409
Giovanni Montana
Savelie Cornegruta, Robert Bakewell, Samuel Withey, Giovanni Montana
Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks
LOUHI 2016 conference proceedings
null
null
null
cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the need to automate medical information extraction from free-text radiological reports, we present a bi-directional long short-term memory (BiLSTM) neural network architecture for modelling radiological language. The model has been used to address two NLP tasks: medical named-entity recognition (NER) and negation detection. We investigate whether learning several types of word embeddings improves BiLSTM's performance on those tasks. Using a large dataset of chest x-ray reports, we compare the proposed model to a baseline dictionary-based NER system and a negation detection system that leverages the hand-crafted rules of the NegEx algorithm and the grammatical relations obtained from the Stanford Dependency Parser. Compared to these more traditional rule-based systems, we argue that BiLSTM offers a strong alternative for both our tasks.
[ { "version": "v1", "created": "Tue, 27 Sep 2016 13:25:10 GMT" } ]
2016-09-28T00:00:00
[ [ "Cornegruta", "Savelie", "" ], [ "Bakewell", "Robert", "" ], [ "Withey", "Samuel", "" ], [ "Montana", "Giovanni", "" ] ]
TITLE: Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks ABSTRACT: Motivated by the need to automate medical information extraction from free-text radiological reports, we present a bi-directional long short-term memory (BiLSTM) neural network architecture for modelling radiological language. The model has been used to address two NLP tasks: medical named-entity recognition (NER) and negation detection. We investigate whether learning several types of word embeddings improves BiLSTM's performance on those tasks. Using a large dataset of chest x-ray reports, we compare the proposed model to a baseline dictionary-based NER system and a negation detection system that leverages the hand-crafted rules of the NegEx algorithm and the grammatical relations obtained from the Stanford Dependency Parser. Compared to these more traditional rule-based systems, we argue that BiLSTM offers a strong alternative for both our tasks.
no_new_dataset
0.944791
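A compact PyTorch sketch of the kind of BiLSTM token tagger described in the radiology-report record above; the hyperparameters and names are illustrative assumptions, not the authors' settings.

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # Produces per-token tag scores, usable for NER or negation detection.
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, n_tags)

    def forward(self, token_ids):             # (batch, seq_len)
        states, _ = self.bilstm(self.embed(token_ids))
        return self.proj(states)              # (batch, seq_len, n_tags)

model = BiLSTMTagger(vocab_size=20000, embed_dim=100, hidden_dim=128, n_tags=9)
scores = model(torch.randint(0, 20000, (2, 40)))   # two dummy reports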
1609.08496
Jipeng Qiang
Jipeng Qiang, Ping Chen, Tong Wang, Xindong Wu
Topic Modeling over Short Texts by Incorporating Word Embeddings
null
null
null
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring topics from the overwhelming amount of short texts becomes a critical but challenging task for many content analysis tasks, such as content characterization, user interest profiling, and emerging topic detection. Existing methods such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) cannot solve this problem very well since only very limited word co-occurrence information is available in short texts. This paper studies how to incorporate external word correlation knowledge into short texts to improve the coherence of topic modeling. Based on recent results in word embeddings that learn semantic representations for words from a large corpus, we introduce a novel method, Embedding-based Topic Model (ETM), to learn latent topics from short texts. ETM not only solves the problem of very limited word co-occurrence information by aggregating short texts into long pseudo-texts, but also utilizes a Markov Random Field regularized model that gives correlated words a better chance to be put into the same topic. Experiments on real-world datasets validate the effectiveness of our model compared with state-of-the-art models.
[ { "version": "v1", "created": "Tue, 27 Sep 2016 15:26:07 GMT" } ]
2016-09-28T00:00:00
[ [ "Qiang", "Jipeng", "" ], [ "Chen", "Ping", "" ], [ "Wang", "Tong", "" ], [ "Wu", "Xindong", "" ] ]
TITLE: Topic Modeling over Short Texts by Incorporating Word Embeddings ABSTRACT: Inferring topics from the overwhelming amount of short texts becomes a critical but challenging task for many content analysis tasks, such as content characterization, user interest profiling, and emerging topic detection. Existing methods such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) cannot solve this problem very well since only very limited word co-occurrence information is available in short texts. This paper studies how to incorporate external word correlation knowledge into short texts to improve the coherence of topic modeling. Based on recent results in word embeddings that learn semantic representations for words from a large corpus, we introduce a novel method, Embedding-based Topic Model (ETM), to learn latent topics from short texts. ETM not only solves the problem of very limited word co-occurrence information by aggregating short texts into long pseudo-texts, but also utilizes a Markov Random Field regularized model that gives correlated words a better chance to be put into the same topic. Experiments on real-world datasets validate the effectiveness of our model compared with state-of-the-art models.
no_new_dataset
0.94801
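For the ETM record above, a rough sketch of the pseudo-text aggregation step: embed each short text by its mean word vector, cluster the texts, and merge each cluster into one long pseudo-document for a standard topic model. This approximates only that single component, and the embedding dictionary is assumed given.

import numpy as np
from sklearn.cluster import KMeans

def build_pseudo_texts(texts, embeddings, n_pseudo=100):
    # embeddings: dict mapping word -> fixed-length vector (assumed input).
    dim = len(next(iter(embeddings.values())))
    vecs = np.array([
        np.mean([embeddings[w] for w in t.split() if w in embeddings]
                or [np.zeros(dim)], axis=0)     # fall back for OOV-only texts
        for t in texts
    ])
    labels = KMeans(n_clusters=n_pseudo, n_init=10).fit_predict(vecs)
    pseudo = ["" for _ in range(n_pseudo)]
    for text, lab in zip(texts, labels):
        pseudo[lab] += " " + text               # concatenate cluster members
    return pseudo   # feed these into a standard topic model such as LDA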
1609.08535
Peter Polack Jr
Peter J Polack Jr, Shang-Tse Chen, Minsuk Kahng, Kaya de Barbaro, Moushumi Sharmin, Rahul Basole, Duen Horng Chau
Chronodes: Interactive Multi-focus Exploration of Event Sequences
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of mobile health technologies presents new challenges that existing visualizations, interactive tools, and algorithms are not yet designed to support. In dealing with uncertainty in sensor data and high-dimensional physiological records, we must seek to improve current tools that make sense of health data from traditional perspectives in event-based trend discovery. With Chronodes, a system developed to help researchers collect, interpret, and model mobile health (mHealth) data, we posit a series of interaction techniques that enable new approaches to understanding and exploring event-based data. From numerous and discontinuous mobile health data streams, Chronodes finds and visualizes frequent event sequences that reveal common chronological patterns across participants and days. By then promoting the sequences as interactive elements, Chronodes presents opportunities for finding, defining, and comparing cohorts of participants that exhibit particular behaviors. We applied Chronodes to a real 40GB mHealth dataset capturing about 400 hours of data. Through our pilot study with 20 behavioral and biomedical health experts, we gained insights into Chronodes' efficacy, limitations, and potential applicability to a wide range of healthcare scenarios.
[ { "version": "v1", "created": "Tue, 27 Sep 2016 17:05:15 GMT" } ]
2016-09-28T00:00:00
[ [ "Polack", "Peter J", "Jr" ], [ "Chen", "Shang-Tse", "" ], [ "Kahng", "Minsuk", "" ], [ "de Barbaro", "Kaya", "" ], [ "Sharmin", "Moushumi", "" ], [ "Basole", "Rahul", "" ], [ "Chau", "Duen Horng", "" ] ]
TITLE: Chronodes: Interactive Multi-focus Exploration of Event Sequences ABSTRACT: The advent of mobile health technologies presents new challenges that existing visualizations, interactive tools, and algorithms are not yet designed to support. In dealing with uncertainty in sensor data and high-dimensional physiological records, we must seek to improve current tools that make sense of health data from traditional perspectives in event-based trend discovery. With Chronodes, a system developed to help researchers collect, interpret, and model mobile health (mHealth) data, we posit a series of interaction techniques that enable new approaches to understanding and exploring event-based data. From numerous and discontinuous mobile health data streams, Chronodes finds and visualizes frequent event sequences that reveal common chronological patterns across participants and days. By then promoting the sequences as interactive elements, Chronodes presents opportunities for finding, defining, and comparing cohorts of participants that exhibit particular behaviors. We applied Chronodes to a real 40GB mHealth dataset capturing about 400 hours of data. Through our pilot study with 20 behavioral and biomedical health experts, we gained insights into Chronodes' efficacy, limitations, and potential applicability to a wide range of healthcare scenarios.
no_new_dataset
0.942454
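A toy sketch of the frequent event-sequence mining that underlies the Chronodes record above: count fixed-length contiguous subsequences across participants' event streams and keep those above a support threshold. The real system is interactive and far richer; names and data are illustrative.

from collections import Counter

def frequent_event_sequences(streams, length=3, min_support=5):
    # streams: list of per-participant, time-ordered event label lists.
    counts = Counter()
    for stream in streams:
        for i in range(len(stream) - length + 1):
            counts[tuple(stream[i:i + length])] += 1
    return [(seq, c) for seq, c in counts.most_common() if c >= min_support]

streams = [["wake", "sms", "walk", "sms", "walk", "sms"],
           ["wake", "sms", "walk", "sms", "walk", "sit"]]
print(frequent_event_sequences(streams, length=2, min_support=2))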
1504.00241
Alex Borges Vieira
Eduardo Chinelate Costa and Alex Borges Vieira and Klaus Wehmuth and Artur Ziviani and Ana Paula Couto da Silva
Time Centrality in Dynamic Complex Networks
null
Advances in Complex Systems (ACS), vol. 18, no. 07n08, November & December 2015
10.1142/S021952591550023X
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is an ever-increasing interest in investigating dynamics in time-varying graphs (TVGs). Nevertheless, so far, the notion of centrality in TVG scenarios usually refers to metrics that assess the relative importance of nodes along the temporal evolution of the dynamic complex network. For some TVG scenarios, however, more important than identifying the central nodes under a given node centrality definition is identifying the key time instants for taking certain actions. In this paper, we thus introduce and investigate the notion of time centrality in TVGs. Analogously to node centrality, time centrality evaluates the relative importance of time instants in dynamic complex networks. In this context, we present two time centrality metrics related to diffusion processes. We evaluate the two defined metrics using both a real-world dataset representing an in-person contact dynamic network and a synthetically generated randomized TVG. We validate the concept of time centrality showing that diffusion starting at the best classified time instants (i.e. the most central ones), according to our metrics, can perform a faster and more efficient diffusion process.
[ { "version": "v1", "created": "Wed, 1 Apr 2015 14:16:10 GMT" }, { "version": "v2", "created": "Wed, 15 Apr 2015 15:13:09 GMT" }, { "version": "v3", "created": "Sun, 6 Sep 2015 02:20:53 GMT" } ]
2016-09-27T00:00:00
[ [ "Costa", "Eduardo Chinelate", "" ], [ "Vieira", "Alex Borges", "" ], [ "Wehmuth", "Klaus", "" ], [ "Ziviani", "Artur", "" ], [ "da Silva", "Ana Paula Couto", "" ] ]
TITLE: Time Centrality in Dynamic Complex Networks ABSTRACT: There is an ever-increasing interest in investigating dynamics in time-varying graphs (TVGs). Nevertheless, so far, the notion of centrality in TVG scenarios usually refers to metrics that assess the relative importance of nodes along the temporal evolution of the dynamic complex network. For some TVG scenarios, however, more important than identifying the central nodes under a given node centrality definition is identifying the key time instants for taking certain actions. In this paper, we thus introduce and investigate the notion of time centrality in TVGs. Analogously to node centrality, time centrality evaluates the relative importance of time instants in dynamic complex networks. In this context, we present two time centrality metrics related to diffusion processes. We evaluate the two defined metrics using both a real-world dataset representing an in-person contact dynamic network and a synthetically generated randomized TVG. We validate the concept of time centrality showing that diffusion starting at the best classified time instants (i.e. the most central ones), according to our metrics, can perform a faster and more efficient diffusion process.
no_new_dataset
0.819821
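A tiny sketch in the spirit of the time-centrality record above: replay a time-ordered contact list from different start times and measure how far a diffusion seeded at one node reaches. Earlier, better-placed start times reach more nodes; this is only a crude proxy for the paper's metrics.

def diffusion_reach(temporal_edges, seed, start_time):
    # temporal_edges: (t, u, v) contacts sorted by time t; a node infected
    # before a contact passes the infection over that contact.
    infected = {seed}
    for t, u, v in temporal_edges:
        if t >= start_time and (u in infected or v in infected):
            infected.update((u, v))
    return len(infected)

edges = [(1, "a", "b"), (2, "b", "c"), (3, "c", "d"), (4, "a", "d")]
print([diffusion_reach(edges, "a", t0) for t0 in (1, 3)])   # 4 vs 2 nodes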
1506.04304
Jeremy Maitin-Shepard
Jeremy Maitin-Shepard (1 and 2), Viren Jain (2), Michal Januszewski (2), Peter Li (2), Pieter Abbeel (1) ((1) UC Berkeley, (2) Google)
Combinatorial Energy Learning for Image Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new machine learning approach for image segmentation that uses a neural network to model the conditional energy of a segmentation given an image. Our approach, combinatorial energy learning for image segmentation (CELIS), places a particular emphasis on modeling the inherent combinatorial nature of dense image segmentation problems. We propose efficient algorithms for learning deep neural networks to model the energy function, and for local optimization of this energy in the space of supervoxel agglomerations. We extensively evaluate our method on a publicly available 3-D microscopy dataset with 25 billion voxels of ground truth data. On an 11 billion voxel test set, we find that our method improves volumetric reconstruction accuracy by more than 20% as compared to two state-of-the-art baseline methods: graph-based segmentation of the output of a 3-D convolutional neural network trained to predict boundaries, as well as a random forest classifier trained to agglomerate supervoxels that were generated by a 3-D convolutional neural network.
[ { "version": "v1", "created": "Sat, 13 Jun 2015 18:23:42 GMT" }, { "version": "v2", "created": "Tue, 16 Jun 2015 19:33:20 GMT" }, { "version": "v3", "created": "Fri, 23 Sep 2016 20:47:55 GMT" } ]
2016-09-27T00:00:00
[ [ "Maitin-Shepard", "Jeremy", "", "1 and 2" ], [ "Jain", "Viren", "", "Google" ], [ "Januszewski", "Michal", "", "Google" ], [ "Li", "Peter", "", "Google" ], [ "Abbeel", "Pieter", "", "UC Berkeley" ] ]
TITLE: Combinatorial Energy Learning for Image Segmentation ABSTRACT: We introduce a new machine learning approach for image segmentation that uses a neural network to model the conditional energy of a segmentation given an image. Our approach, combinatorial energy learning for image segmentation (CELIS), places a particular emphasis on modeling the inherent combinatorial nature of dense image segmentation problems. We propose efficient algorithms for learning deep neural networks to model the energy function, and for local optimization of this energy in the space of supervoxel agglomerations. We extensively evaluate our method on a publicly available 3-D microscopy dataset with 25 billion voxels of ground truth data. On an 11 billion voxel test set, we find that our method improves volumetric reconstruction accuracy by more than 20% as compared to two state-of-the-art baseline methods: graph-based segmentation of the output of a 3-D convolutional neural network trained to predict boundaries, as well as a random forest classifier trained to agglomerate supervoxels that were generated by a 3-D convolutional neural network.
no_new_dataset
0.949529
1605.04469
Ye Zhang
Ye Zhang, Iain Marshall, Byron C. Wallace
Rationale-Augmented Convolutional Neural Networks for Text Classification
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their component sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions.
[ { "version": "v1", "created": "Sat, 14 May 2016 21:30:57 GMT" }, { "version": "v2", "created": "Sat, 21 May 2016 01:05:59 GMT" }, { "version": "v3", "created": "Sat, 24 Sep 2016 16:35:57 GMT" } ]
2016-09-27T00:00:00
[ [ "Zhang", "Ye", "" ], [ "Marshall", "Iain", "" ], [ "Wallace", "Byron C.", "" ] ]
TITLE: Rationale-Augmented Convolutional Neural Networks for Text Classification ABSTRACT: We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their component sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions.
no_new_dataset
0.949059
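For the rationale-augmented CNN record above, a numerical sketch of the aggregation step: weight each sentence vector by its estimated rationale probability and sum into a document representation. The normalized weighting here is a simplifying assumption.

import numpy as np

def doc_vector(sentence_vecs, rationale_probs):
    # Scale each sentence vector by its rationale estimate, then aggregate.
    w = np.asarray(rationale_probs, dtype=float)
    w = w / (w.sum() + 1e-12)
    return w @ np.asarray(sentence_vecs)

sents = np.random.randn(4, 50)           # four sentence embeddings
probs = [0.9, 0.1, 0.05, 0.7]            # rationale estimates per sentence
doc = doc_vector(sents, probs)           # (50,) document vector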
1605.04502
Bing Wang
Bing Wang, Li Wang, Bing Shuai, Zhen Zuo, Ting Liu, Kap Luk Chan, Gang Wang
Joint Learning of Siamese CNNs and Temporally Constrained Metrics for Tracklet Association
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the challenging problem of multi-object tracking in a complex scene captured by a single camera. Different from the existing tracklet association-based tracking methods, we propose a novel and efficient way to obtain discriminative appearance-based tracklet affinity models. Our proposed method jointly learns the convolutional neural networks (CNNs) and temporally constrained metrics. In our method, a Siamese convolutional neural network (CNN) is first pre-trained on the auxiliary data. Then the Siamese CNN and temporally constrained metrics are jointly learned online to construct the appearance-based tracklet affinity models. The proposed method can jointly learn the hierarchical deep features and temporally constrained segment-wise metrics under a unified framework. For reliable association between tracklets, a novel loss function incorporating temporally constrained multi-task learning mechanism is proposed. By employing the proposed method, tracklet association can be accomplished even in challenging situations. Moreover, a new dataset with 40 fully annotated sequences is created to facilitate the tracking evaluation. Experimental results on five public datasets and the new large-scale dataset show that our method outperforms several state-of-the-art approaches in multi-object tracking.
[ { "version": "v1", "created": "Sun, 15 May 2016 07:09:28 GMT" }, { "version": "v2", "created": "Sun, 25 Sep 2016 09:58:32 GMT" } ]
2016-09-27T00:00:00
[ [ "Wang", "Bing", "" ], [ "Wang", "Li", "" ], [ "Shuai", "Bing", "" ], [ "Zuo", "Zhen", "" ], [ "Liu", "Ting", "" ], [ "Chan", "Kap Luk", "" ], [ "Wang", "Gang", "" ] ]
TITLE: Joint Learning of Siamese CNNs and Temporally Constrained Metrics for Tracklet Association ABSTRACT: In this paper, we study the challenging problem of multi-object tracking in a complex scene captured by a single camera. Different from the existing tracklet association-based tracking methods, we propose a novel and efficient way to obtain discriminative appearance-based tracklet affinity models. Our proposed method jointly learns the convolutional neural networks (CNNs) and temporally constrained metrics. In our method, a Siamese convolutional neural network (CNN) is first pre-trained on the auxiliary data. Then the Siamese CNN and temporally constrained metrics are jointly learned online to construct the appearance-based tracklet affinity models. The proposed method can jointly learn the hierarchical deep features and temporally constrained segment-wise metrics under a unified framework. For reliable association between tracklets, a novel loss function incorporating temporally constrained multi-task learning mechanism is proposed. By employing the proposed method, tracklet association can be accomplished even in challenging situations. Moreover, a new dataset with 40 fully annotated sequences is created to facilitate the tracking evaluation. Experimental results on five public datasets and the new large-scale dataset show that our method outperforms several state-of-the-art approaches in multi-object tracking.
new_dataset
0.960435
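A short PyTorch sketch of the standard contrastive objective that Siamese models of the kind described above are commonly trained with; the paper's actual loss adds temporally constrained multi-task terms not shown here.

import torch

def contrastive_loss(dist, same_identity, margin=1.0):
    # dist: pairwise embedding distances; same_identity: 1.0 for positive
    # pairs, 0.0 for negatives. Positives are pulled together, negatives
    # pushed beyond the margin.
    pos = same_identity * dist.pow(2)
    neg = (1.0 - same_identity) * torch.clamp(margin - dist, min=0).pow(2)
    return (pos + neg).mean()

d = torch.tensor([0.2, 1.4])
y = torch.tensor([1.0, 0.0])
print(contrastive_loss(d, y))    # small loss: pairs already well separated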
1605.08900
Duyu Tang
Duyu Tang, Bing Qin, Ting Liu
Aspect Level Sentiment Classification with Deep Memory Network
published in EMNLP 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degrees and text representations are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparably to the state-of-the-art feature-based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation.
[ { "version": "v1", "created": "Sat, 28 May 2016 14:47:49 GMT" }, { "version": "v2", "created": "Sat, 24 Sep 2016 06:04:15 GMT" } ]
2016-09-27T00:00:00
[ [ "Tang", "Duyu", "" ], [ "Qin", "Bing", "" ], [ "Liu", "Ting", "" ] ]
TITLE: Aspect Level Sentiment Classification with Deep Memory Network ABSTRACT: We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degrees and text representations are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparably to the state-of-the-art feature-based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation.
no_new_dataset
0.949902
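An illustrative NumPy sketch of one attention hop over an external memory, stacked into multiple computational layers as in the deep memory network record above; the linear transform W and all shapes are assumptions.

import numpy as np

def attention_hop(memory, aspect, W):
    # memory: (n_words, d) context embeddings; aspect: (d,) aspect vector.
    # One layer: attend over memory, add a linear transform of the aspect.
    scores = memory @ aspect
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # attention weights over context
    return alpha @ memory + W @ aspect

d = 8
mem, asp, W = np.random.randn(6, d), np.random.randn(d), np.eye(d)
out = asp
for _ in range(3):                       # stack three hops
    out = attention_hop(mem, out, W)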
1606.01847
Marcus Rohrbach
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
Accepted to EMNLP 2016
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.
[ { "version": "v1", "created": "Mon, 6 Jun 2016 17:59:56 GMT" }, { "version": "v2", "created": "Thu, 23 Jun 2016 19:52:41 GMT" }, { "version": "v3", "created": "Sat, 24 Sep 2016 01:58:59 GMT" } ]
2016-09-27T00:00:00
[ [ "Fukui", "Akira", "" ], [ "Park", "Dong Huk", "" ], [ "Yang", "Daylen", "" ], [ "Rohrbach", "Anna", "" ], [ "Darrell", "Trevor", "" ], [ "Rohrbach", "Marcus", "" ] ]
TITLE: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding ABSTRACT: Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.
no_new_dataset
0.943919
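A NumPy sketch of Multimodal Compact Bilinear pooling as described in the record above: count-sketch both feature vectors and combine them by circular convolution via FFTs. The hash indices and signs are drawn randomly here, and the dimensions are illustrative.

import numpy as np

def count_sketch(x, h, s, d):
    # Project x into d dimensions using hash indices h and random signs s.
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def mcb(v, q, d=16000, seed=0):
    # The outer product of v and q equals, in sketch space, the circular
    # convolution of their count sketches, computed here with FFTs.
    rng = np.random.default_rng(seed)
    hv, hq = rng.integers(0, d, v.size), rng.integers(0, d, q.size)
    sv, sq = rng.choice([-1, 1], v.size), rng.choice([-1, 1], q.size)
    fv = np.fft.rfft(count_sketch(v, hv, sv, d))
    fq = np.fft.rfft(count_sketch(q, hq, sq, d))
    return np.fft.irfft(fv * fq, n=d)

visual, question = np.random.randn(2048), np.random.randn(300)
pooled = mcb(visual, question)           # (16000,) fused feature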
1606.01933
Ankur Parikh
Ankur P. Parikh, Oscar T\"ackstr\"om, Dipanjan Das, Jakob Uszkoreit
A Decomposable Attention Model for Natural Language Inference
7 pages, 1 figure, Proceeedings of EMNLP 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
[ { "version": "v1", "created": "Mon, 6 Jun 2016 20:30:57 GMT" }, { "version": "v2", "created": "Sun, 25 Sep 2016 23:52:45 GMT" } ]
2016-09-27T00:00:00
[ [ "Parikh", "Ankur P.", "" ], [ "Täckström", "Oscar", "" ], [ "Das", "Dipanjan", "" ], [ "Uszkoreit", "Jakob", "" ] ]
TITLE: A Decomposable Attention Model for Natural Language Inference ABSTRACT: We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
no_new_dataset
0.951504
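For the decomposable-attention record above, a NumPy sketch of the attend step: score all word pairs, then softly align each sentence against the other without using word order. The subsequent compare and aggregate steps are omitted.

import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(a, b):
    # a: (len_a, d), b: (len_b, d) word vectors of the two sentences.
    e = a @ b.T                       # (len_a, len_b) alignment scores
    beta = softmax(e, axis=1) @ b     # subphrase of b aligned to each a_i
    alpha = softmax(e, axis=0).T @ a  # subphrase of a aligned to each b_j
    return beta, alpha

a, b = np.random.randn(5, 300), np.random.randn(7, 300)
beta, alpha = attend(a, b)            # shapes (5, 300) and (7, 300)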
1607.08378
Rahul Rama Varior Mr.
Rahul Rama Varior, Mrinal Haloi, and Gang Wang
Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification
Accepted to ECCV2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Matching pedestrians across multiple camera views, known as human re-identification, is a challenging research problem that has numerous applications in visual surveillance. With the resurgence of Convolutional Neural Networks (CNNs), several end-to-end deep Siamese CNN architectures have been proposed for human re-identification with the objective of projecting the images of similar pairs (i.e. same identity) to be closer to each other and those of dissimilar pairs to be distant from each other. However, current networks extract fixed representations for each image regardless of other images which are paired with it and the comparison with other images is done only at the final level. In this setting, the network is at risk of failing to extract finer local patterns that may be essential to distinguish positive pairs from hard negative pairs. In this paper, we propose a gating function to selectively emphasize such fine common local patterns by comparing the mid-level features across pairs of images. This produces flexible representations for the same image according to the images they are paired with. We conduct experiments on the CUHK03, Market-1501 and VIPeR datasets and demonstrate improved performance compared to a baseline Siamese CNN architecture.
[ { "version": "v1", "created": "Thu, 28 Jul 2016 09:40:18 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2016 16:28:58 GMT" } ]
2016-09-27T00:00:00
[ [ "Varior", "Rahul Rama", "" ], [ "Haloi", "Mrinal", "" ], [ "Wang", "Gang", "" ] ]
TITLE: Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification ABSTRACT: Matching pedestrians across multiple camera views, known as human re-identification, is a challenging research problem that has numerous applications in visual surveillance. With the resurgence of Convolutional Neural Networks (CNNs), several end-to-end deep Siamese CNN architectures have been proposed for human re-identification with the objective of projecting the images of similar pairs (i.e. same identity) to be closer to each other and those of dissimilar pairs to be distant from each other. However, current networks extract fixed representations for each image regardless of other images which are paired with it and the comparison with other images is done only at the final level. In this setting, the network is at risk of failing to extract finer local patterns that may be essential to distinguish positive pairs from hard negative pairs. In this paper, we propose a gating function to selectively emphasize such fine common local patterns by comparing the mid-level features across pairs of images. This produces flexible representations for the same image according to the images they are paired with. We conduct experiments on the CUHK03, Market-1501 and VIPeR datasets and demonstrate improved performance compared to a baseline Siamese CNN architecture.
no_new_dataset
0.951323
1609.04387
Elad Richardson
Elad Richardson, Matan Sela, Ron Kimmel
3D Face Reconstruction by Learning from Synthetic Data
The first two authors contributed equally to this work
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional Neural Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces, there are currently no large-volume datasets, and acquiring such big data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions.
[ { "version": "v1", "created": "Wed, 14 Sep 2016 19:47:12 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2016 12:12:34 GMT" } ]
2016-09-27T00:00:00
[ [ "Richardson", "Elad", "" ], [ "Sela", "Matan", "" ], [ "Kimmel", "Ron", "" ] ]
TITLE: 3D Face Reconstruction by Learning from Synthetic Data ABSTRACT: Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional Neural Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces, there are currently no large-volume datasets, and acquiring such big data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions.
no_new_dataset
0.945197
1609.07480
Stylianos Kampakis
Stylianos Kampakis
Predictive modelling of football injuries
PhD Thesis submitted and defended successfully at the Department of Computer Science at University College London
null
null
null
stat.AP cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this thesis is to investigate the potential of predictive modelling for football injuries. This work was conducted in close collaboration with Tottenham Hotspur FC (THFC), the PGA European tour and the participation of Wolverhampton Wanderers (WW). Three investigations were conducted: 1. Predicting the recovery time of football injuries using the UEFA injury recordings: The UEFA recordings are a common standard for recording injuries in professional football. For this investigation, three datasets of UEFA injury recordings were available. Different machine learning algorithms were used in order to build a predictive model. The performance of the machine learning models was then improved by using feature selection conducted through correlation-based subset feature selection and random forests. 2. Predicting injuries in professional football using exposure records: The relationship between exposure (in training hours and match hours) of professional football athletes and injury incidence was studied. A common problem in football is understanding how the training schedule of an athlete can affect his chance of getting injured. The task was to predict the number of days a player can train before he gets injured. 3. Predicting intrinsic injury incidence using in-training GPS measurements: A significant percentage of football injuries can be attributed to overtraining and fatigue. GPS data collected during training sessions might provide indicators of fatigue, or might be used to detect very intense training sessions which can lead to overtraining. This research used GPS data gathered during training sessions of the first team of THFC, in order to predict whether an injury would take place during a week.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 11:58:42 GMT" } ]
2016-09-27T00:00:00
[ [ "Kampakis", "Stylianos", "" ] ]
TITLE: Predictive modelling of football injuries ABSTRACT: The goal of this thesis is to investigate the potential of predictive modelling for football injuries. This work was conducted in close collaboration with Tottenham Hotspur FC (THFC), the PGA European tour and the participation of Wolverhampton Wanderers (WW). Three investigations were conducted: 1. Predicting the recovery time of football injuries using the UEFA injury recordings: The UEFA recordings are a common standard for recording injuries in professional football. For this investigation, three datasets of UEFA injury recordings were available. Different machine learning algorithms were used in order to build a predictive model. The performance of the machine learning models was then improved by using feature selection conducted through correlation-based subset feature selection and random forests. 2. Predicting injuries in professional football using exposure records: The relationship between exposure (in training hours and match hours) of professional football athletes and injury incidence was studied. A common problem in football is understanding how the training schedule of an athlete can affect his chance of getting injured. The task was to predict the number of days a player can train before he gets injured. 3. Predicting intrinsic injury incidence using in-training GPS measurements: A significant percentage of football injuries can be attributed to overtraining and fatigue. GPS data collected during training sessions might provide indicators of fatigue, or might be used to detect very intense training sessions which can lead to overtraining. This research used GPS data gathered during training sessions of the first team of THFC, in order to predict whether an injury would take place during a week.
no_new_dataset
0.931338
1609.07495
Matteo Ruggero Ronchi
Matteo Ruggero Ronchi, Joon Sik Kim and Yisong Yue
A Rotation Invariant Latent Factor Model for Moveme Discovery from Static Poses
Long version of the paper accepted at the IEEE ICDM 2016 conference. 10 pages, 9 figures, 1 table. Project page: http://www.vision.caltech.edu/~mronchi/projects/RotationInvariantMovemes/
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
We tackle the problem of learning a rotation invariant latent factor model when the training data consists of lower-dimensional projections of the original feature space. The main goal is the discovery of a set of 3-D basis poses that can characterize the manifold of primitive human motions, or movemes, from a training set of 2-D projected poses obtained from still images taken at various camera angles. The proposed technique for basis discovery is data-driven rather than hand-designed. The learned representation is rotation invariant, and can reconstruct any training instance from multiple viewing angles. We apply our method to modeling human poses in sports (via the Leeds Sports Dataset), and demonstrate the effectiveness of the learned bases in a range of applications such as activity classification, inference of dynamics from a single frame, and synthetic representation of movements.
[ { "version": "v1", "created": "Fri, 23 Sep 2016 20:00:23 GMT" } ]
2016-09-27T00:00:00
[ [ "Ronchi", "Matteo Ruggero", "" ], [ "Kim", "Joon Sik", "" ], [ "Yue", "Yisong", "" ] ]
TITLE: A Rotation Invariant Latent Factor Model for Moveme Discovery from Static Poses ABSTRACT: We tackle the problem of learning a rotation invariant latent factor model when the training data consists of lower-dimensional projections of the original feature space. The main goal is the discovery of a set of 3-D basis poses that can characterize the manifold of primitive human motions, or movemes, from a training set of 2-D projected poses obtained from still images taken at various camera angles. The proposed technique for basis discovery is data-driven rather than hand-designed. The learned representation is rotation invariant, and can reconstruct any training instance from multiple viewing angles. We apply our method to modeling human poses in sports (via the Leeds Sports Dataset), and demonstrate the effectiveness of the learned bases in a range of applications such as activity classification, inference of dynamics from a single frame, and synthetic representation of movements.
no_new_dataset
0.947721
1609.07569
Aming Li
Aming Li, Lei Zhou, Qi Su, Sean P. Cornelius, Yang-Yu Liu, Long Wang
Evolution of Cooperation on Temporal Networks
23 pages, 12 figures
null
null
null
physics.soc-ph physics.bio-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structure of social networks is a key determinant in fostering cooperation and other altruistic behavior among naturally selfish individuals. However, most real social interactions are temporal, being both finite in duration and spread out over time. This raises the question of whether stable cooperation can form despite an intrinsically fragmented social fabric. Here we develop a framework to study the evolution of cooperation on temporal networks in the setting of the classic Prisoner's Dilemma. By analyzing both real and synthetic datasets, we find that temporal networks generally facilitate the evolution of cooperation compared to their static counterparts. More interestingly, we find that intrinsic human interaction patterns, such as bursty behavior, impede the evolution of cooperation. Finally, we introduce a measure to quantify the temporality present in networks and demonstrate that there is an intermediate level of temporality that boosts cooperation most. Our results open a new avenue for investigating the evolution of cooperation in more realistic structured populations.
[ { "version": "v1", "created": "Sat, 24 Sep 2016 04:18:25 GMT" } ]
2016-09-27T00:00:00
[ [ "Li", "Aming", "" ], [ "Zhou", "Lei", "" ], [ "Su", "Qi", "" ], [ "Cornelius", "Sean P.", "" ], [ "Liu", "Yang-Yu", "" ], [ "Wang", "Long", "" ] ]
TITLE: Evolution of Cooperation on Temporal Networks ABSTRACT: The structure of social networks is a key determinant in fostering cooperation and other altruistic behavior among naturally selfish individuals. However, most real social interactions are temporal, being both finite in duration and spread out over time. This raises the question of whether stable cooperation can form despite an intrinsically fragmented social fabric. Here we develop a framework to study the evolution of cooperation on temporal networks in the setting of the classic Prisoner's Dilemma. By analyzing both real and synthetic datasets, we find that temporal networks generally facilitate the evolution of cooperation compared to their static counterparts. More interestingly, we find that intrinsic human interaction patterns, such as bursty behavior, impede the evolution of cooperation. Finally, we introduce a measure to quantify the temporality present in networks and demonstrate that there is an intermediate level of temporality that boosts cooperation most. Our results open a new avenue for investigating the evolution of cooperation in more realistic structured populations.
no_new_dataset
0.94545
1609.07599
Shenglan Liu
Shenglan Liu, Muxin Sun, Lin Feng, Yang Liu, Jun Wu
Three Tiers Neighborhood Graph and Multi-graph Fusion Ranking for Multi-feature Image Retrieval: A Manifold Aspect
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A single feature is insufficient to describe the content of an image, which is a shortcoming of traditional image retrieval. One image can, however, be described by several different features, and multi-feature fusion ranking can be utilized to improve the ranking list of a query. In this paper, we first analyze graph structure and multi-feature fusion re-ranking from a manifold aspect. Then, a Three Tiers Neighborhood Graph (TTNG) is constructed to re-rank the original ranking list produced by a single feature and to enhance the precision of that feature. Furthermore, we propose Multi-graph Fusion Ranking (MFR) for multi-feature ranking, which considers the correlation of all images in multiple neighborhood graphs. Evaluations are conducted on the UK-bench, Corel-1K, Corel-10K and Cifar-10 benchmark datasets. The experimental results show that our TTNG and MFR outperform other state-of-the-art methods. For example, we achieve competitive results of N-S score 3.91 and precision 65.00% on the UK-bench and Corel-10K datasets respectively.
[ { "version": "v1", "created": "Sat, 24 Sep 2016 10:34:36 GMT" } ]
2016-09-27T00:00:00
[ [ "Liu", "Shenglan", "" ], [ "Sun", "Muxin", "" ], [ "Feng", "Lin", "" ], [ "Liu", "Yang", "" ], [ "Wu", "Jun", "" ] ]
TITLE: Three Tiers Neighborhood Graph and Multi-graph Fusion Ranking for Multi-feature Image Retrieval: A Manifold Aspect ABSTRACT: A single feature is insufficient to describe the content of an image, which is a shortcoming of traditional image retrieval. One image can, however, be described by several different features, and multi-feature fusion ranking can be utilized to improve the ranking list of a query. In this paper, we first analyze graph structure and multi-feature fusion re-ranking from a manifold aspect. Then, a Three Tiers Neighborhood Graph (TTNG) is constructed to re-rank the original ranking list produced by a single feature and to enhance the precision of that feature. Furthermore, we propose Multi-graph Fusion Ranking (MFR) for multi-feature ranking, which considers the correlation of all images in multiple neighborhood graphs. Evaluations are conducted on the UK-bench, Corel-1K, Corel-10K and Cifar-10 benchmark datasets. The experimental results show that our TTNG and MFR outperform other state-of-the-art methods. For example, we achieve competitive results of N-S score 3.91 and precision 65.00% on the UK-bench and Corel-10K datasets respectively.
no_new_dataset
0.947914
1609.07603
Claus Brenner
Claus Brenner
Scalable Estimation of Precision Maps in a MapReduce Framework
ACM SIGSPATIAL'16, October 31-November 03, 2016, Burlingame, CA, USA
null
10.1145/2996913.2996990
null
cs.DC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a large-scale strip adjustment method for LiDAR mobile mapping data, yielding highly precise maps. It uses several concepts to achieve scalability. First, an efficient graph-based pre-segmentation is used, which directly operates on LiDAR scan strip data, rather than on point clouds. Second, observation equations are obtained from a dense matching, which is formulated in terms of an estimation of a latent map. As a result of this formulation, the number of observation equations is not quadratic, but rather linear in the number of scan strips. Third, the dynamic Bayes network, which results from all observation and condition equations, is partitioned into two sub-networks. Consequently, the estimation matrices for all position and orientation corrections are linear instead of quadratic in the number of unknowns and can be solved very efficiently using an alternating least squares approach. It is shown how this approach can be mapped to a standard key/value MapReduce implementation, where each of the processing nodes operates independently on small chunks of data, leading to essentially linear scalability. Results are demonstrated for a dataset of one billion measured LiDAR points and 278,000 unknowns, leading to maps with a precision of a few millimeters.
[ { "version": "v1", "created": "Sat, 24 Sep 2016 11:24:30 GMT" } ]
2016-09-27T00:00:00
[ [ "Brenner", "Claus", "" ] ]
TITLE: Scalable Estimation of Precision Maps in a MapReduce Framework ABSTRACT: This paper presents a large-scale strip adjustment method for LiDAR mobile mapping data, yielding highly precise maps. It uses several concepts to achieve scalability. First, an efficient graph-based pre-segmentation is used, which directly operates on LiDAR scan strip data, rather than on point clouds. Second, observation equations are obtained from a dense matching, which is formulated in terms of an estimation of a latent map. As a result of this formulation, the number of observation equations is not quadratic, but rather linear in the number of scan strips. Third, the dynamic Bayes network, which results from all observation and condition equations, is partitioned into two sub-networks. Consequently, the estimation matrices for all position and orientation corrections are linear instead of quadratic in the number of unknowns and can be solved very efficiently using an alternating least squares approach. It is shown how this approach can be mapped to a standard key/value MapReduce implementation, where each of the processing nodes operates independently on small chunks of data, leading to essentially linear scalability. Results are demonstrated for a dataset of one billion measured LiDAR points and 278,000 unknowns, leading to maps with a precision of a few millimeters.
no_new_dataset
0.938969
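A toy map/reduce sketch of the alternating least-squares pattern in the strip-adjustment record above: mappers emit per-strip normal-equation contributions, and a reducer sums them before a single solve. The per-strip matrices here are random stand-ins for the paper's observation equations.

import numpy as np
from functools import reduce

def mapper(strip):
    A, b = strip                        # hypothetical per-strip design/observations
    return A.T @ A, A.T @ b             # per-strip normal-equation pieces

def reducer(acc, part):
    return acc[0] + part[0], acc[1] + part[1]

def solve_corrections(strip_data):
    N, n = reduce(reducer, map(mapper, strip_data))
    return np.linalg.solve(N, n)        # corrections for all unknowns at once

rng = np.random.default_rng(0)
strip_data = [(rng.normal(size=(100, 6)), rng.normal(size=100)) for _ in range(4)]
x = solve_corrections(strip_data)       # (6,) corrections in this toy setup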
1609.07615
Shenglan Liu
Shenglan Liu, Jun Wu, Lin Feng, Yang Liu, Hong Qiao, Wenbo Luo, Muxin Sun, and Wei Wang
Perceptual uniform descriptor and Ranking on manifold: A bridge between image representation and ranking for image retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incompatibility between the image descriptor and the ranking method is often neglected in image retrieval. In this paper, manifold learning and Gestalt psychology theory are employed to solve this incompatibility problem. A new holistic descriptor called the Perceptual Uniform Descriptor (PUD), based on Gestalt psychology, is proposed, which combines color and gradient direction to imitate human visual uniformity. PUD features of images in the same class distribute on one manifold in most cases, because PUD improves the visual uniformity of traditional descriptors. Thus, we use manifold ranking and PUD to realize image retrieval. Experiments were carried out on five benchmark datasets, and the proposed method greatly improves the accuracy of image retrieval. Our experimental results on the Ukbench and Corel-1K datasets demonstrate that the N-S score reached 3.58 (HSV 3.4) and mAP reached 81.77% (ODBTC 77.9%) respectively by utilizing PUD, which has only 280 dimensions. These results are higher than those of other holistic image descriptors (even some local ones) and state-of-the-art retrieval methods.
[ { "version": "v1", "created": "Sat, 24 Sep 2016 13:13:38 GMT" } ]
2016-09-27T00:00:00
[ [ "Liu", "Shenglan", "" ], [ "Wu", "Jun", "" ], [ "Feng", "Lin", "" ], [ "Liu", "Yang", "" ], [ "Qiao", "Hong", "" ], [ "Sun", "Wenbo Luo Muxin", "" ], [ "Wang", "Wei", "" ] ]
TITLE: Perceptual uniform descriptor and Ranking on manifold: A bridge between image representation and ranking for image retrieval ABSTRACT: Incompatibility between the image descriptor and the ranking method is often neglected in image retrieval. In this paper, manifold learning and Gestalt psychology theory are employed to solve this incompatibility problem. A new holistic descriptor called the Perceptual Uniform Descriptor (PUD), based on Gestalt psychology, is proposed, which combines color and gradient direction to imitate human visual uniformity. PUD features of images in the same class distribute on one manifold in most cases, because PUD improves the visual uniformity of traditional descriptors. Thus, we use manifold ranking and PUD to realize image retrieval. Experiments were carried out on five benchmark datasets, and the proposed method greatly improves the accuracy of image retrieval. Our experimental results on the Ukbench and Corel-1K datasets demonstrate that the N-S score reached 3.58 (HSV 3.4) and mAP reached 81.77% (ODBTC 77.9%) respectively by utilizing PUD, which has only 280 dimensions. These results are higher than those of other holistic image descriptors (even some local ones) and state-of-the-art retrieval methods.
no_new_dataset
0.952175
1609.07826
Georgios Georgakis
Georgios Georgakis, Md Alimoor Reza, Arsalan Mousavian, Phi-Hung Le, Jana Kosecka
Multiview RGB-D Dataset for Object Instance Detection
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments, including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud. Also, an approach for detection and recognition is presented, which comprises two parts: i) a new multi-view 3D proposal generation method and ii) the development of several recognition baselines using AlexNet to score our proposals, which is trained either on crops of the dataset or on synthetically composited training images. Finally, we compare the performance of the object proposals and a detection baseline to the Washington RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset is more challenging for object detection and recognition. The dataset is available at: http://cs.gmu.edu/~robot/gmu-kitchens.html.
[ { "version": "v1", "created": "Mon, 26 Sep 2016 01:18:56 GMT" } ]
2016-09-27T00:00:00
[ [ "Georgakis", "Georgios", "" ], [ "Reza", "Md Alimoor", "" ], [ "Mousavian", "Arsalan", "" ], [ "Le", "Phi-Hung", "" ], [ "Kosecka", "Jana", "" ] ]
TITLE: Multiview RGB-D Dataset for Object Instance Detection ABSTRACT: This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments, including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud. Also, an approach for detection and recognition is presented, which comprises two parts: i) a new multi-view 3D proposal generation method and ii) the development of several recognition baselines using AlexNet to score our proposals, which is trained either on crops of the dataset or on synthetically composited training images. Finally, we compare the performance of the object proposals and a detection baseline to the Washington RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset is more challenging for object detection and recognition. The dataset is available at: http://cs.gmu.edu/~robot/gmu-kitchens.html.
new_dataset
0.95877
1609.08084
Yi Yang
Yi Yang, Ming-Wei Chang, Jacob Eisenstein
Toward Socially-Infused Information Extraction: Embedding Authors, Mentions, and Entities
Accepted to EMNLP 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity linking is the task of identifying mentions of entities in text, and linking them to entries in a knowledge base. This task is especially difficult in microblogs, as there is little additional text to provide disambiguating context; rather, authors rely on an implicit common ground of shared knowledge with their readers. In this paper, we attempt to capture some of this implicit context by exploiting the social network structure in microblogs. We build on the theory of homophily, which implies that socially linked individuals share interests, and are therefore likely to mention the same sorts of entities. We implement this idea by encoding authors, mentions, and entities in a continuous vector space, which is constructed so that socially-connected authors have similar vector representations. These vectors are incorporated into a neural structured prediction model, which captures structural constraints that are inherent in the entity linking task. Together, these design decisions yield F1 improvements of 1%-5% on benchmark datasets, as compared to the previous state-of-the-art.
[ { "version": "v1", "created": "Mon, 26 Sep 2016 17:19:07 GMT" } ]
2016-09-27T00:00:00
[ [ "Yang", "Yi", "" ], [ "Chang", "Ming-Wei", "" ], [ "Eisenstein", "Jacob", "" ] ]
TITLE: Toward Socially-Infused Information Extraction: Embedding Authors, Mentions, and Entities ABSTRACT: Entity linking is the task of identifying mentions of entities in text, and linking them to entries in a knowledge base. This task is especially difficult in microblogs, as there is little additional text to provide disambiguating context; rather, authors rely on an implicit common ground of shared knowledge with their readers. In this paper, we attempt to capture some of this implicit context by exploiting the social network structure in microblogs. We build on the theory of homophily, which implies that socially linked individuals share interests, and are therefore likely to mention the same sorts of entities. We implement this idea by encoding authors, mentions, and entities in a continuous vector space, which is constructed so that socially-connected authors have similar vector representations. These vectors are incorporated into a neural structured prediction model, which captures structural constraints that are inherent in the entity linking task. Together, these design decisions yield F1 improvements of 1%-5% on benchmark datasets, as compared to the previous state-of-the-art.
no_new_dataset
0.947624
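A minimal sketch of the scoring idea in the record above, once authors, mentions, and entities live in one embedding space: a candidate entity is scored by its textual compatibility with the mention plus a social prior from the author's embedding. The function names, the linear mixing weights, and the dictionary-of-candidates interface are illustrative assumptions; the paper's actual model is a neural structured predictor over the whole sentence.

```python
import numpy as np

def link_score(author_vec, mention_vec, entity_vec, w_text=1.0, w_social=1.0):
    """Textual compatibility (mention-entity) plus social prior
    (author-entity), both as dot products in the shared vector space."""
    return w_text * (mention_vec @ entity_vec) + w_social * (author_vec @ entity_vec)

def link(author_vec, mention_vec, candidates):
    """candidates: dict mapping entity name -> embedding; returns the argmax."""
    return max(candidates, key=lambda e: link_score(author_vec, mention_vec, candidates[e]))
```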
1609.08124
Atousa Torabi Atousa Torabi
Atousa Torabi, Niket Tandon, Leonid Sigal
Learning Language-Visual Embedding for Movie Understanding with Natural-Language
13 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning a joint language-visual embedding has a number of very appealing properties and can result in a variety of practical applications, including natural language image/video annotation and search. In this work, we study three different joint language-visual neural network model architectures. We evaluate our models on the large-scale LSMDC16 movie dataset for two tasks: 1) standard ranking for video annotation and retrieval, and 2) our proposed movie multiple-choice test. This test facilitates automatic evaluation of visual-language models for natural language video annotation based on human activities. In addition to the original Audio Description (AD) captions, provided as part of LSMDC16, we collected and will make available a) manually generated re-phrasings of those captions obtained using Amazon MTurk, and b) automatically generated human activity elements in "Predicate + Object" (PO) phrases based on "Knowlywood", an activity knowledge mining model. Our best model achieves a Recall@10 of 19.2% on the annotation task and 18.9% on the video retrieval task for a subset of 1000 samples. For the multiple-choice test, our best model achieves an accuracy of 58.11% on the whole LSMDC16 public test set.
[ { "version": "v1", "created": "Mon, 26 Sep 2016 19:14:12 GMT" } ]
2016-09-27T00:00:00
[ [ "Torabi", "Atousa", "" ], [ "Tandon", "Niket", "" ], [ "Sigal", "Leonid", "" ] ]
TITLE: Learning Language-Visual Embedding for Movie Understanding with Natural-Language ABSTRACT: Learning a joint language-visual embedding has a number of very appealing properties and can result in a variety of practical applications, including natural language image/video annotation and search. In this work, we study three different joint language-visual neural network model architectures. We evaluate our models on the large-scale LSMDC16 movie dataset for two tasks: 1) standard ranking for video annotation and retrieval, and 2) our proposed movie multiple-choice test. This test facilitates automatic evaluation of visual-language models for natural language video annotation based on human activities. In addition to the original Audio Description (AD) captions, provided as part of LSMDC16, we collected and will make available a) manually generated re-phrasings of those captions obtained using Amazon MTurk, and b) automatically generated human activity elements in "Predicate + Object" (PO) phrases based on "Knowlywood", an activity knowledge mining model. Our best model achieves a Recall@10 of 19.2% on the annotation task and 18.9% on the video retrieval task for a subset of 1000 samples. For the multiple-choice test, our best model achieves an accuracy of 58.11% on the whole LSMDC16 public test set.
no_new_dataset
0.944228
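Models like those in the record above are commonly trained with a bidirectional max-margin ranking loss over a batch of matched video/sentence pairs. The sketch below is such a loss under the assumption of L2-normalised embeddings and cosine scores; it is a plausible stand-in, not the paper's exact objective.

```python
import numpy as np

def ranking_loss(v, s, margin=0.2):
    """Bidirectional max-margin ranking loss: the matched pair (v[i], s[i])
    should outscore every mismatched pair by at least `margin`.
    v, s: L2-normalised embeddings of shape (n, d)."""
    scores = v @ s.T                  # cosine similarity matrix
    pos = np.diag(scores)
    cost_s = np.maximum(0.0, margin + scores - pos[:, None])  # rank sentences per video
    cost_v = np.maximum(0.0, margin + scores - pos[None, :])  # rank videos per sentence
    np.fill_diagonal(cost_s, 0.0)     # matched pairs incur no cost
    np.fill_diagonal(cost_v, 0.0)
    return cost_s.sum() + cost_v.sum()
```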
1511.02930
Vishesh Karwa
Vishesh Karwa and Pavel N. Krivitsky and Aleksandra B. Slavkovi\'c
Sharing Social Network Data: Differentially Private Estimation of Exponential-Family Random Graph Models
Updated, 39 pages
null
null
null
stat.CO cs.CR cs.SI stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy \emph{and} supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under $\epsilon$-edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.
[ { "version": "v1", "created": "Mon, 9 Nov 2015 23:36:30 GMT" }, { "version": "v2", "created": "Fri, 23 Sep 2016 16:48:20 GMT" } ]
2016-09-26T00:00:00
[ [ "Karwa", "Vishesh", "" ], [ "Krivitsky", "Pavel N.", "" ], [ "Slavković", "Aleksandra B.", "" ] ]
TITLE: Sharing Social Network Data: Differentially Private Estimation of Exponential-Family Random Graph Models ABSTRACT: Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy \emph{and} supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under $\epsilon$-edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.
no_new_dataset
0.943138
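The randomized response mechanism mentioned in the record above has a standard form: every dyad of the adjacency matrix keeps its true value with probability e^eps / (1 + e^eps) and is flipped otherwise, which satisfies eps-edge differential privacy. The sketch below implements only this release step; the likelihood-based ERGM inference on the noisy graph is omitted.

```python
import numpy as np

def randomized_response_graph(adj, epsilon, seed=None):
    """Release a symmetric 0/1 adjacency matrix under eps-edge DP:
    each dyad is kept with probability exp(eps) / (1 + exp(eps))
    and flipped otherwise."""
    rng = np.random.default_rng(seed)
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    iu = np.triu_indices(adj.shape[0], k=1)    # perturb the upper triangle only
    flips = rng.random(len(iu[0])) >= p_keep
    noisy = adj.copy()
    noisy[iu] = np.where(flips, 1 - adj[iu], adj[iu])
    noisy[(iu[1], iu[0])] = noisy[iu]          # mirror to keep the graph symmetric
    return noisy
```

For epsilon = ln 3, for instance, each edge indicator is reported truthfully with probability 0.75.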
1603.07771
David Grangier
Remi Lebret, David Grangier, Michael Auli
Neural Text Generation from Structured Data with Application to the Biography Domain
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a neural model for concept-to-text generation that scales to large, rich domains. We experiment with a new dataset of biographies from Wikipedia that is an order of magnitude larger than existing resources, with over 700k samples. The dataset is also vastly more diverse, with a 400k vocabulary, compared to a few hundred words for Weathergov or Robocup. Our model builds upon recent work on conditional neural language models for text generation. To deal with the large vocabulary, we extend these models to mix a fixed vocabulary with copy actions that transfer sample-specific words from the input database to the generated output sentence. Our neural model significantly outperforms a classical Kneser-Ney language model adapted to this task, by nearly 15 BLEU.
[ { "version": "v1", "created": "Thu, 24 Mar 2016 22:40:00 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2016 14:47:44 GMT" }, { "version": "v3", "created": "Fri, 23 Sep 2016 15:16:46 GMT" } ]
2016-09-26T00:00:00
[ [ "Lebret", "Remi", "" ], [ "Grangier", "David", "" ], [ "Auli", "Michael", "" ] ]
TITLE: Neural Text Generation from Structured Data with Application to the Biography Domain ABSTRACT: This paper introduces a neural model for concept-to-text generation that scales to large, rich domains. We experiment with a new dataset of biographies from Wikipedia that is an order of magnitude larger than existing resources, with over 700k samples. The dataset is also vastly more diverse, with a 400k vocabulary, compared to a few hundred words for Weathergov or Robocup. Our model builds upon recent work on conditional neural language models for text generation. To deal with the large vocabulary, we extend these models to mix a fixed vocabulary with copy actions that transfer sample-specific words from the input database to the generated output sentence. Our neural model significantly outperforms a classical Kneser-Ney language model adapted to this task, by nearly 15 BLEU.
new_dataset
0.952794
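The copy mechanism described above can be read as one softmax over the fixed vocabulary concatenated with one copy action per input table field, where taking a copy action emits the (possibly out-of-vocabulary) word stored in that field. A minimal sketch under that reading; the function names and interfaces are illustrative, not the paper's code:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # stabilised softmax
    e = np.exp(z)
    return e / e.sum()

def next_word_distribution(vocab_scores, copy_scores, vocab, table_words):
    """One joint softmax over fixed-vocabulary words and copy actions.
    vocab_scores: score per fixed-vocabulary word; copy_scores: score per
    input table field; copying field k emits the word table_words[k]."""
    joint = softmax(np.concatenate([vocab_scores, copy_scores]))
    dist = {}
    for word, p in zip(list(vocab) + list(table_words), joint):
        dist[word] = dist.get(word, 0.0) + p   # a copied word may also be in-vocab
    return dist
```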
1608.04117
Michal Drozdzal
Michal Drozdzal, Eugene Vorontsov, Gabriel Chartrand, Samuel Kadoury, Chris Pal
The Importance of Skip Connections in Biomedical Image Segmentation
Accepted to 2nd Workshop on Deep Learning in Medical Image Analysis (DLMIA 2016); Added references
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the influence of both long and short skip connections on Fully Convolutional Networks (FCN) for biomedical image segmentation. In standard FCNs, only long skip connections are used to skip features from the contracting path to the expanding path in order to recover spatial information lost during downsampling. We extend FCNs by adding short skip connections, that are similar to the ones introduced in residual networks, in order to build very deep FCNs (of hundreds of layers). A review of the gradient flow confirms that for a very deep FCN it is beneficial to have both long and short skip connections. Finally, we show that a very deep FCN can achieve near-to-state-of-the-art results on the EM dataset without any further post-processing.
[ { "version": "v1", "created": "Sun, 14 Aug 2016 17:10:30 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2016 20:14:09 GMT" } ]
2016-09-26T00:00:00
[ [ "Drozdzal", "Michal", "" ], [ "Vorontsov", "Eugene", "" ], [ "Chartrand", "Gabriel", "" ], [ "Kadoury", "Samuel", "" ], [ "Pal", "Chris", "" ] ]
TITLE: The Importance of Skip Connections in Biomedical Image Segmentation ABSTRACT: In this paper, we study the influence of both long and short skip connections on Fully Convolutional Networks (FCN) for biomedical image segmentation. In standard FCNs, only long skip connections are used to skip features from the contracting path to the expanding path in order to recover spatial information lost during downsampling. We extend FCNs by adding short skip connections, that are similar to the ones introduced in residual networks, in order to build very deep FCNs (of hundreds of layers). A review of the gradient flow confirms that for a very deep FCN it is beneficial to have both long and short skip connections. Finally, we show that a very deep FCN can achieve near-to-state-of-the-art results on the EM dataset without any further post-processing.
no_new_dataset
0.957755
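A toy PyTorch sketch of the two kinds of skip connections studied above: short (residual) skips inside each block, plus one long skip from the contracting path to the expanding path. Depth, widths, and the upsampling choice are illustrative assumptions; the paper's very deep FCNs are organised differently.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Short skip: identity plus two 3x3 convolutions, residual-style."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class TinyFCN(nn.Module):
    """One downsampling stage, one upsampling stage, one long skip."""
    def __init__(self, ch=16, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, 3, padding=1)
        self.down = nn.Sequential(ResBlock(ch), nn.MaxPool2d(2))
        self.bottom = ResBlock(ch)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):
        skip = self.stem(x)                        # kept for the long skip
        y = self.up(self.bottom(self.down(skip)))
        return self.head(y + skip)                 # long skip restores spatial detail
```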
1609.05158
Wenzhe Shi
Wenzhe Shi, Jose Caballero, Ferenc Husz\'ar, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert and Zehan Wang
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
CVPR 2016 paper with updated affiliations and supplemental material, fixed typo in equation 4
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
[ { "version": "v1", "created": "Fri, 16 Sep 2016 17:58:14 GMT" }, { "version": "v2", "created": "Fri, 23 Sep 2016 17:16:37 GMT" } ]
2016-09-26T00:00:00
[ [ "Shi", "Wenzhe", "" ], [ "Caballero", "Jose", "" ], [ "Huszár", "Ferenc", "" ], [ "Totz", "Johannes", "" ], [ "Aitken", "Andrew P.", "" ], [ "Bishop", "Rob", "" ], [ "Rueckert", "Daniel", "" ], [ "Wang", "Zehan", "" ] ]
TITLE: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network ABSTRACT: Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
no_new_dataset
0.952486
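The efficient sub-pixel convolution layer described above ends in a periodic shuffling step that rearranges an LR feature map of shape (C*r^2, H, W) into an HR output of shape (C, H*r, W*r); the learned upscaling filters live in the convolution that precedes it. A NumPy sketch of that rearrangement:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) array into (C, H*r, W*r):
    output[c, h*r + a, w*r + b] == x[c*r*r + a*r + b, h, w]."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    assert c * r * r == c_r2, "channel count must be divisible by r^2"
    x = x.reshape(c, r, r, h, w)       # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)     # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Framework implementations such as torch.nn.PixelShuffle follow the same index mapping.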
1609.06423
Mayank Singh
Mayank Singh, Barnopriyo Barua, Priyank Palod, Manvi Garg, Sidhartha Satapathy, Samuel Bushi, Kumar Ayush, Krishna Sai Rohith, Tulasi Gamidi, Pawan Goyal and Animesh Mukherjee
OCR++: A Robust Framework For Information Extraction from Scholarly Articles
null
null
null
null
cs.DL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes OCR++, an open-source framework designed for a variety of information extraction tasks from scholarly articles, including metadata (title, author names, affiliation and e-mail), structure (section headings and body text, table and figure headings, URLs and footnotes) and bibliography (citation instances and references). We analyze a diverse set of scientific articles written in English to understand generic writing patterns and formulate rules to develop this hybrid framework. Extensive evaluations show that the proposed framework outperforms the existing state-of-the-art tools by a huge margin in structural information extraction, along with improved performance in metadata and bibliography extraction tasks, both in terms of accuracy (around 50% improvement) and processing time (around 52% improvement). A user experience study conducted with the help of 30 researchers reveals that they found the system very helpful. As an additional objective, we discuss two novel use cases, including automatically extracting links to public datasets from the proceedings, which would further accelerate the advancement of digital libraries. The output of the framework can be exported as a whole into structured TEI-encoded documents. Our framework is accessible online at http://cnergres.iitkgp.ac.in/OCR++/home/.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 06:12:52 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2016 10:54:57 GMT" }, { "version": "v3", "created": "Fri, 23 Sep 2016 13:05:27 GMT" } ]
2016-09-26T00:00:00
[ [ "Singh", "Mayank", "" ], [ "Barua", "Barnopriyo", "" ], [ "Palod", "Priyank", "" ], [ "Garg", "Manvi", "" ], [ "Satapathy", "Sidhartha", "" ], [ "Bushi", "Samuel", "" ], [ "Ayush", "Kumar", "" ], [ "Rohith", "Krishna Sai", "" ], [ "Gamidi", "Tulasi", "" ], [ "Goyal", "Pawan", "" ], [ "Mukherjee", "Animesh", "" ] ]
TITLE: OCR++: A Robust Framework For Information Extraction from Scholarly Articles ABSTRACT: This paper proposes OCR++, an open-source framework designed for a variety of information extraction tasks from scholarly articles, including metadata (title, author names, affiliation and e-mail), structure (section headings and body text, table and figure headings, URLs and footnotes) and bibliography (citation instances and references). We analyze a diverse set of scientific articles written in English to understand generic writing patterns and formulate rules to develop this hybrid framework. Extensive evaluations show that the proposed framework outperforms the existing state-of-the-art tools by a huge margin in structural information extraction, along with improved performance in metadata and bibliography extraction tasks, both in terms of accuracy (around 50% improvement) and processing time (around 52% improvement). A user experience study conducted with the help of 30 researchers reveals that they found the system very helpful. As an additional objective, we discuss two novel use cases, including automatically extracting links to public datasets from the proceedings, which would further accelerate the advancement of digital libraries. The output of the framework can be exported as a whole into structured TEI-encoded documents. Our framework is accessible online at http://cnergres.iitkgp.ac.in/OCR++/home/.
no_new_dataset
0.948346
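The rules OCR++ builds on exploit tight lexical patterns in scholarly writing; for flavour, the sketch below recovers author e-mails from a header block with a single regular expression. This is an illustrative rule of our own (pattern, names, and sample addresses are all made up), not one taken from the OCR++ implementation.

```python
import re

# Loose e-mail pattern: local part, '@', domain with at least one dot.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(header_text):
    """Return all e-mail addresses found in an article's header block."""
    return EMAIL.findall(header_text)

print(extract_emails("Jane Doe ([email protected]), John Roe ([email protected])"))
# -> ['[email protected]', '[email protected]']
```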
1609.07170
Alexander Wong
Prajna Paramita Dash, Akshaya Mishra, and Alexander Wong
Deep Quality: A Deep No-reference Quality Assessment System
2 pages
null
null
null
cs.MM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image quality assessment (IQA) continues to garner great interest in the research community, particularly given the tremendous rise in consumer video capture and streaming. Despite significant research effort in IQA in the past few decades, the area of no-reference image quality assessment remains a great challenge and is largely unsolved. In this paper, we propose a novel no-reference image quality assessment system called Deep Quality, which leverages the power of deep learning to model the complex relationship between visual content and perceived quality. Deep Quality consists of a novel multi-scale deep convolutional neural network, trained to assess image quality based on training samples covering different distortions and degradations such as blur, Gaussian noise, and compression artifacts. Preliminary results using the CSIQ benchmark image quality dataset showed that Deep Quality achieves strong quality prediction performance (89% patch-level and 98% image-level prediction accuracy), on par with full-reference IQA methods.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 21:26:21 GMT" } ]
2016-09-26T00:00:00
[ [ "Dash", "Prajna Paramita", "" ], [ "Mishra", "Akshaya", "" ], [ "Wong", "Alexander", "" ] ]
TITLE: Deep Quality: A Deep No-reference Quality Assessment System ABSTRACT: Image quality assessment (IQA) continues to garner great interest in the research community, particularly given the tremendous rise in consumer video capture and streaming. Despite significant research effort in IQA in the past few decades, the area of no-reference image quality assessment remains a great challenge and is largely unsolved. In this paper, we propose a novel no-reference image quality assessment system called Deep Quality, which leverages the power of deep learning to model the complex relationship between visual content and perceived quality. Deep Quality consists of a novel multi-scale deep convolutional neural network, trained to assess image quality based on training samples covering different distortions and degradations such as blur, Gaussian noise, and compression artifacts. Preliminary results using the CSIQ benchmark image quality dataset showed that Deep Quality achieves strong quality prediction performance (89% patch-level and 98% image-level prediction accuracy), on par with full-reference IQA methods.
no_new_dataset
0.947088
1609.07215
Rajasekar Venkatesan
Mihika Dave, Sahil Tapiawala, Meng Joo Er, Rajasekar Venkatesan
A Novel Progressive Multi-label Classifier for Classincremental Data
5 pages, 3 figures, 4 tables
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a progressive learning algorithm for multi-label classification is designed to learn new labels while retaining the knowledge of previous labels. New output neurons corresponding to new labels are added, and the neural network connections and parameters are automatically restructured as if the label had been introduced from the beginning. This work is the first of its kind: a multi-label classifier for class-incremental learning. It is useful for real-world applications such as robotics, where streaming data are available and the number of labels is often unknown. Based on the Extreme Learning Machine framework, a novel universal classifier with plug-and-play capabilities for progressive multi-label classification is developed. Experimental results on various benchmark synthetic and real datasets validate the efficiency and effectiveness of the proposed algorithm.
[ { "version": "v1", "created": "Fri, 23 Sep 2016 03:09:24 GMT" } ]
2016-09-26T00:00:00
[ [ "Dave", "Mihika", "" ], [ "Tapiawala", "Sahil", "" ], [ "Er", "Meng Joo", "" ], [ "Venkatesan", "Rajasekar", "" ] ]
TITLE: A Novel Progressive Multi-label Classifier for Classincremental Data ABSTRACT: In this paper, a progressive learning algorithm for multi-label classification is designed to learn new labels while retaining the knowledge of previous labels. New output neurons corresponding to new labels are added, and the neural network connections and parameters are automatically restructured as if the label had been introduced from the beginning. This work is the first of its kind: a multi-label classifier for class-incremental learning. It is useful for real-world applications such as robotics, where streaming data are available and the number of labels is often unknown. Based on the Extreme Learning Machine framework, a novel universal classifier with plug-and-play capabilities for progressive multi-label classification is developed. Experimental results on various benchmark synthetic and real datasets validate the efficiency and effectiveness of the proposed algorithm.
no_new_dataset
0.953751
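A bare-bones sketch of the class-incremental idea above in the ELM setting: the random hidden layer is fixed, and when a batch introduces previously unseen labels, new output neurons (columns of the output weight matrix) are solved by least squares while the weights of already-learned labels are left untouched. The class, its interfaces, and the update rule are simplifying assumptions; the paper's exact restructuring of connections differs.

```python
import numpy as np

class ProgressiveELM:
    """ELM-style multi-label classifier that grows one output neuron
    per newly introduced label."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))  # fixed random hidden layer
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, 0))                 # no labels known yet

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, Y):
        """Y: (n_samples, n_labels_seen_so_far). Columns beyond the current
        width of beta are new labels; only their weights are solved here,
        so the knowledge of previous labels is retained untouched."""
        n_new = Y.shape[1] - self.beta.shape[1]
        if n_new > 0:
            H = self._hidden(X)
            new_cols = np.linalg.pinv(H) @ Y[:, -n_new:]    # least-squares fit
            self.beta = np.hstack([self.beta, new_cols])
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta > 0.5
```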
1609.07306
Helge Rhodin
Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
SIGGRAPH Asia 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often cause discomfort through the marker suits they may require, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion. This makes capturing independent of a confined volume, but requires substantial, often constraining, and hard-to-set-up body instrumentation. We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual reality headset. It combines the strengths of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, as well as crowded scenes with many people in close vicinity. The captured user can move around freely, which enables reconstruction of larger-scale activities and is particularly useful in virtual reality for roaming and interacting freely while seeing the fully motion-captured virtual body.
[ { "version": "v1", "created": "Fri, 23 Sep 2016 10:46:19 GMT" } ]
2016-09-26T00:00:00
[ [ "Rhodin", "Helge", "" ], [ "Richardt", "Christian", "" ], [ "Casas", "Dan", "" ], [ "Insafutdinov", "Eldar", "" ], [ "Shafiei", "Mohammad", "" ], [ "Seidel", "Hans-Peter", "" ], [ "Schiele", "Bernt", "" ], [ "Theobalt", "Christian", "" ] ]
TITLE: EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras ABSTRACT: Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often cause discomfort through the marker suits they may require, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion. This makes capturing independent of a confined volume, but requires substantial, often constraining, and hard-to-set-up body instrumentation. We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual reality headset. It combines the strengths of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, as well as crowded scenes with many people in close vicinity. The captured user can move around freely, which enables reconstruction of larger-scale activities and is particularly useful in virtual reality for roaming and interacting freely while seeing the fully motion-captured virtual body.
new_dataset
0.956836
1609.07349
Vincent Primault
Vincent Primault (INSA Lyon, DRIM), Antoine Boutet (DRIM, INSA Lyon), Sonia Ben Mokhtar (DRIM, INSA Lyon), Lionel Brunie (DRIM, INSA Lyon)
Adaptive Location Privacy with ALP
35th Symposium on Reliable Distributed Systems, Sep 2016, Budapest, Hungary
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing amount of mobility data being collected on a daily basis by location-based services (LBSs) comes a new range of threats for users, related to the over-sharing of their location information. To deal with this issue, several location privacy protection mechanisms (LPPMs) have been proposed in recent years. However, each of these mechanisms comes with different configuration parameters that have a direct impact both on the privacy guarantees offered to the users and on the resulting utility of the protected data. In this context, it can be difficult for non-expert system designers to choose the appropriate configuration to use. Moreover, these mechanisms are generally configured once and for all, which results in the same configuration for every protected piece of information. However, not all users have the same behaviour, and even the behaviour of a single user is likely to change over time. To address this issue, we present in this paper ALP, a new framework enabling the dynamic configuration of LPPMs. ALP can be used in two scenarios: (1) offline, where ALP enables a system designer to choose and automatically tune the most appropriate LPPM for the protection of a given dataset; (2) online, where ALP enables the user of a crowd sensing application to protect consecutive batches of her geolocated data by automatically tuning an existing LPPM to fulfil a set of privacy and utility objectives. We evaluate ALP in both scenarios with two real-life mobility datasets and two state-of-the-art LPPMs. Our experiments show that the adaptive LPPM configurations found by ALP outperform, in terms of both privacy and utility, a set of static configurations manually fixed by a system designer.
[ { "version": "v1", "created": "Fri, 23 Sep 2016 13:19:18 GMT" } ]
2016-09-26T00:00:00
[ [ "Primault", "Vincent", "", "INSA Lyon, DRIM" ], [ "Boutet", "Antoine", "", "DRIM, INSA Lyon" ], [ "Mokhtar", "Sonia Ben", "", "DRIM, INSA Lyon" ], [ "Brunie", "Lionel", "", "DRIM, INSA Lyon" ] ]
TITLE: Adaptive Location Privacy with ALP ABSTRACT: With the increasing amount of mobility data being collected on a daily basis by location-based services (LBSs) comes a new range of threats for users, related to the over-sharing of their location information. To deal with this issue, several location privacy protection mechanisms (LPPMs) have been proposed in the past years. However, each of these mechanisms comes with different configuration parameters that have a direct impact both on the privacy guarantees offered to the users and on the resulting utility of the protected data. In this context, it can be difficult for non-expert system designers to choose the appropriate configuration to use. Moreover, these mechanisms are generally configured once for all, which results in the same configuration for every protected piece of information. However, not all users have the same behaviour, and even the behaviour of a single user is likely to change over time. To address this issue, we present in this paper ALP, a new framework enabling the dynamic configuration of LPPMs. ALP can be used in two scenarios: (1) offline, where ALP enables a system designer to choose and automatically tune the most appropriate LPPM for the protection of a given dataset; (2) online, where ALP enables the user of a crowd sensing application to protect consecutive batches of her geolocated data by automatically tuning an existing LPPM to fulfil a set of privacy and utility objectives. We evaluate ALP on both scenarios with two real-life mobility datasets and two state-of-the-art LPPMs. Our experiments show that the adaptive LPPM configurations found by ALP outperform both in terms of privacy and utility a set of static configurations manually fixed by a system designer.
no_new_dataset
0.948917
1609.07420
Marko Linna
Marko Linna, Juho Kannala, Esa Rahtu
Real-time Human Pose Estimation from Video with Convolutional Neural Networks
16 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a method for real-time multi-person human pose estimation from video using convolutional neural networks. Our method is aimed at use-case-specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables us to use a generic network architecture, which is both accurate and fast. We divide the problem into two phases: (1) pre-training and (2) finetuning. In pre-training, the network is trained with highly diverse input data from publicly available datasets, while in finetuning we train with application-specific data, which we record with Kinect. Our method differs from most of the state-of-the-art methods in that we consider the whole system, including the person detector, the pose estimator and an automatic way to record application-specific training material for finetuning. Our method is considerably faster than many of the state-of-the-art methods. Our method can be thought of as a replacement for Kinect, and it can be used for higher-level tasks such as gesture control, games, person tracking, action recognition and action tracking. We achieved an accuracy of 96.8\% ([email protected]) with application-specific data.
[ { "version": "v1", "created": "Fri, 23 Sep 2016 16:22:59 GMT" } ]
2016-09-26T00:00:00
[ [ "Linna", "Marko", "" ], [ "Kannala", "Juho", "" ], [ "Rahtu", "Esa", "" ] ]
TITLE: Real-time Human Pose Estimation from Video with Convolutional Neural Networks ABSTRACT: In this paper, we present a method for real-time multi-person human pose estimation from video using convolutional neural networks. Our method is aimed at use-case-specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables us to use a generic network architecture, which is both accurate and fast. We divide the problem into two phases: (1) pre-training and (2) finetuning. In pre-training, the network is trained with highly diverse input data from publicly available datasets, while in finetuning we train with application-specific data, which we record with Kinect. Our method differs from most of the state-of-the-art methods in that we consider the whole system, including the person detector, the pose estimator and an automatic way to record application-specific training material for finetuning. Our method is considerably faster than many of the state-of-the-art methods. Our method can be thought of as a replacement for Kinect, and it can be used for higher-level tasks such as gesture control, games, person tracking, action recognition and action tracking. We achieved an accuracy of 96.8\% ([email protected]) with application-specific data.
no_new_dataset
0.948489
1609.07451
Linfeng Song
Linfeng Song, Yue Zhang, Xiaochang Peng, Zhiguo Wang and Daniel Gildea
AMR-to-text generation as a Traveling Salesman Problem
accepted by EMNLP 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of AMR-to-text generation is to generate grammatical text that sustains the semantic meaning for a given AMR graph. We attack the task by first partitioning the AMR graph into smaller fragments, and then generating the translation for each fragment, before finally deciding the order by solving an asymmetric generalized traveling salesman problem (AGTSP). A Maximum Entropy classifier is trained to estimate the traveling costs, and a TSP solver is used to find the optimized solution. The final model reports a BLEU score of 22.44 on the SemEval-2016 Task8 dataset.
[ { "version": "v1", "created": "Fri, 23 Sep 2016 18:12:12 GMT" } ]
2016-09-26T00:00:00
[ [ "Song", "Linfeng", "" ], [ "Zhang", "Yue", "" ], [ "Peng", "Xiaochang", "" ], [ "Wang", "Zhiguo", "" ], [ "Gildea", "Daniel", "" ] ]
TITLE: AMR-to-text generation as a Traveling Salesman Problem ABSTRACT: The task of AMR-to-text generation is to generate grammatical text that sustains the semantic meaning for a given AMR graph. We attack the task by first partitioning the AMR graph into smaller fragments, and then generating the translation for each fragment, before finally deciding the order by solving an asymmetric generalized traveling salesman problem (AGTSP). A Maximum Entropy classifier is trained to estimate the traveling costs, and a TSP solver is used to find the optimized solution. The final model reports a BLEU score of 22.44 on the SemEval-2016 Task8 dataset.
no_new_dataset
0.955068
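The ordering step above is a minimum-cost path problem over sentence fragments. In the sketch below, a brute-force search stands in for the TSP solver and a caller-supplied cost(a, b) stands in for the trained Maximum Entropy classifier; both names and the exhaustive search are illustrative assumptions that only make sense for a handful of fragments.

```python
from itertools import permutations

def order_fragments(fragments, cost):
    """Return the fragment order with minimal total transition cost,
    where cost(a, b) estimates the price of placing b right after a."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(fragments))):
        # Total cost of this ordering: sum over consecutive fragment pairs.
        c = sum(cost(fragments[a], fragments[b]) for a, b in zip(perm, perm[1:]))
        if c < best_cost:
            best, best_cost = perm, c
    return [fragments[i] for i in best]
```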
1309.1114
Marzia Rivi
Marzia Rivi, Claudio Gheller, Tim Dykes, Mel Krokos, Klaus Dolag
GPU Accelerated Particle Visualization with Splotch
25 pages, 9 figures. Astronomy and Computing (2014)
Astronomy and Computing 2014, 5: 9-18
10.1016/j.ascom.2014.03.001
null
astro-ph.IM cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are the production of high-quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch to exploit emerging HPC architectures, nowadays increasingly populated with GPUs. A performance model is introduced for data transfers, computations and memory access, to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organisation and classification of particles. We deploy a reference simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future developments, including possibilities for further optimisations and exploitation of emerging technologies.
[ { "version": "v1", "created": "Wed, 4 Sep 2013 17:36:46 GMT" }, { "version": "v2", "created": "Sun, 23 Mar 2014 18:18:03 GMT" } ]
2016-09-23T00:00:00
[ [ "Rivi", "Marzia", "" ], [ "Gheller", "Claudio", "" ], [ "Dykes", "Tim", "" ], [ "Krokos", "Mel", "" ], [ "Dolag", "Klaus", "" ] ]
TITLE: GPU Accelerated Particle Visualization with Splotch ABSTRACT: Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are the production of high-quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch to exploit emerging HPC architectures, nowadays increasingly populated with GPUs. A performance model is introduced for data transfers, computations and memory access, to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organisation and classification of particles. We deploy a reference simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future developments, including possibilities for further optimisations and exploitation of emerging technologies.
no_new_dataset
0.942401
1510.05727
Patrick Huck
Patrick Huck, Dan Gunter, Shreyas Cholia, Donald Winston, Alpha N'Diaye, Kristin Persson
User Applications Driven by the Community Contribution Framework MPContribs in the Materials Project
12 pages, 5 figures, Proceedings of 10th Gateway Computing Environments Workshop (2015), to be published in "Concurrency in Computation: Practice and Experience"
Concurrency and Computation: Practice and Experience Vol. 28 Nr. 7 p.1982-1993
10.1002/cpe.3698
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work discusses how the MPContribs framework in the Materials Project (MP) allows user-contributed data to be shown and analyzed alongside the core MP database. The Materials Project is a searchable database of electronic structure properties of over 65,000 bulk solid materials that is accessible through a web-based science-gateway. We describe the motivation for enabling user contributions to the materials data and present the framework's features and challenges in the context of two real applications. These use-cases illustrate how scientific collaborations can build applications with their own "user-contributed" data using MPContribs. The Nanoporous Materials Explorer application provides a unique search interface to a novel dataset of hundreds of thousands of materials, each with tables of user-contributed values related to material adsorption and density at varying temperature and pressure. The Unified Theoretical and Experimental x-ray Spectroscopy application discusses a full workflow for the association, dissemination and combined analyses of experimental data from the Advanced Light Source with MP's theoretical core data, using MPContribs tools for data formatting, management and exploration. The capabilities being developed for these collaborations are serving as the model for how new materials data can be incorporated into the Materials Project website with minimal staff overhead while giving powerful tools for data search and display to the user community.
[ { "version": "v1", "created": "Tue, 20 Oct 2015 00:55:50 GMT" } ]
2016-09-23T00:00:00
[ [ "Huck", "Patrick", "" ], [ "Gunter", "Dan", "" ], [ "Cholia", "Shreyas", "" ], [ "Winston", "Donald", "" ], [ "N'Diaye", "Alpha", "" ], [ "Persson", "Kristin", "" ] ]
TITLE: User Applications Driven by the Community Contribution Framework MPContribs in the Materials Project ABSTRACT: This work discusses how the MPContribs framework in the Materials Project (MP) allows user-contributed data to be shown and analyzed alongside the core MP database. The Materials Project is a searchable database of electronic structure properties of over 65,000 bulk solid materials that is accessible through a web-based science-gateway. We describe the motivation for enabling user contributions to the materials data and present the framework's features and challenges in the context of two real applications. These use-cases illustrate how scientific collaborations can build applications with their own "user-contributed" data using MPContribs. The Nanoporous Materials Explorer application provides a unique search interface to a novel dataset of hundreds of thousands of materials, each with tables of user-contributed values related to material adsorption and density at varying temperature and pressure. The Unified Theoretical and Experimental x-ray Spectroscopy application discusses a full workflow for the association, dissemination and combined analyses of experimental data from the Advanced Light Source with MP's theoretical core data, using MPContribs tools for data formatting, management and exploration. The capabilities being developed for these collaborations are serving as the model for how new materials data can be incorporated into the Materials Project website with minimal staff overhead while giving powerful tools for data search and display to the user community.
new_dataset
0.894789
1604.06318
Nikolay Savinov
Dmitry Laptev, Nikolay Savinov, Joachim M. Buhmann, Marc Pollefeys
TI-POOLING: transformation-invariant pooling for feature learning in Convolutional Neural Networks
Accepted at CVPR 2016. The first two authors assert equal contribution and joint first authorship
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a deep neural network topology that incorporates a simple-to-implement transformation-invariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods make use of dataset augmentation to address this issue, but this requires a larger number of model parameters and more training data, and results in significantly increased training time and a larger chance of under- or overfitting. The main reason for these drawbacks is that the learned model needs to capture adequate features for all the possible transformations of the input. We instead formulate features in convolutional neural networks to be transformation-invariant. We achieve this by using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the optimal "canonical" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with a smaller number of parameters, compared to standard convolutional neural networks with dataset augmentation and to other baselines.
[ { "version": "v1", "created": "Thu, 21 Apr 2016 14:17:05 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2016 14:42:28 GMT" } ]
2016-09-23T00:00:00
[ [ "Laptev", "Dmitry", "" ], [ "Savinov", "Nikolay", "" ], [ "Buhmann", "Joachim M.", "" ], [ "Pollefeys", "Marc", "" ] ]
TITLE: TI-POOLING: transformation-invariant pooling for feature learning in Convolutional Neural Networks ABSTRACT: In this paper we present a deep neural network topology that incorporates a simple-to-implement transformation-invariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods make use of dataset augmentation to address this issue, but this requires a larger number of model parameters and more training data, and results in significantly increased training time and a larger chance of under- or overfitting. The main reason for these drawbacks is that the learned model needs to capture adequate features for all the possible transformations of the input. We instead formulate features in convolutional neural networks to be transformation-invariant. We achieve this by using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the optimal "canonical" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with a smaller number of parameters, compared to standard convolutional neural networks with dataset augmentation and to other baselines.
no_new_dataset
0.949153
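TI-pooling itself is just an elementwise max over the outputs of weight-shared branches, one branch per considered transformation. The NumPy toy below uses 90-degree rotations as the transformation set and a one-layer stand-in for the shared siamese sub-network; the final assertion passes because the set of branch outputs, and hence their max, is identical for any rotated version of the input.

```python
import numpy as np

def rotations(img, n=4):
    """The transformation set: n successive 90-degree rotations."""
    return [np.rot90(img, k) for k in range(n)]

def shared_branch(img, w):
    """Stand-in for the shared siamese sub-network: linear layer + ReLU."""
    return np.maximum(img.ravel() @ w, 0.0)

def ti_pool(img, w):
    """Elementwise max over branch outputs -> rotation-invariant features."""
    feats = np.stack([shared_branch(t, w) for t in rotations(img)])
    return feats.max(axis=0)

rng = np.random.default_rng(0)
img, w = rng.random((8, 8)), rng.random((64, 16))
assert np.allclose(ti_pool(img, w), ti_pool(np.rot90(img), w))
```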
1604.06629
Giulio Cimini
Giulio Cimini and Matteo Serri
Entangling credit and funding shocks in interbank markets
null
PLoS ONE 11(8): e0161642 (2016)
10.1371/journal.pone.0161642
null
q-fin.RM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Credit and liquidity risks represent the main channels of financial contagion in interbank lending markets. On the one hand, banks face potential losses whenever their counterparties are under distress and thus unable to fulfill their obligations. On the other hand, solvency constraints may force banks to recover lost funding by selling their illiquid assets, resulting in effective losses in the presence of fire sales - that is, when funding shortfalls are widespread across the market. Because of the complex structure of the network of interbank exposures, these losses reverberate among banks and eventually get amplified, with potentially catastrophic consequences for the whole financial system. Building on Debt Rank [Battiston et al., 2012], in this work we define a systemic risk metric that estimates the potential amplification of losses in interbank markets accounting for both credit and liquidity contagion channels: the Debt-Solvency Rank. We implement this framework on a dataset of 183 European banks that were publicly traded between 2004 and 2013, showing indeed that liquidity spillovers substantially increase systemic risk, and thus cannot be neglected in stress-test scenarios. We also provide additional evidence that the interbank market was extremely fragile up to the 2008 financial crisis, becoming slightly more robust only afterwards.
[ { "version": "v1", "created": "Fri, 22 Apr 2016 12:39:36 GMT" } ]
2016-09-23T00:00:00
[ [ "Cimini", "Giulio", "" ], [ "Serri", "Matteo", "" ] ]
TITLE: Entangling credit and funding shocks in interbank markets ABSTRACT: Credit and liquidity risks represent the main channels of financial contagion in interbank lending markets. On the one hand, banks face potential losses whenever their counterparties are under distress and thus unable to fulfill their obligations. On the other hand, solvency constraints may force banks to recover lost funding by selling their illiquid assets, resulting in effective losses in the presence of fire sales - that is, when funding shortfalls are widespread across the market. Because of the complex structure of the network of interbank exposures, these losses reverberate among banks and eventually get amplified, with potentially catastrophic consequences for the whole financial system. Building on Debt Rank [Battiston et al., 2012], in this work we define a systemic risk metric that estimates the potential amplification of losses in interbank markets accounting for both credit and liquidity contagion channels: the Debt-Solvency Rank. We implement this framework on a dataset of 183 European banks that were publicly traded between 2004 and 2013, showing indeed that liquidity spillovers substantially increase systemic risk, and thus cannot be neglected in stress-test scenarios. We also provide additional evidence that the interbank market was extremely fragile up to the 2008 financial crisis, becoming slightly more robust only afterwards.
no_new_dataset
0.92157
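For intuition, a bare-bones DebtRank-style propagation of relative equity losses over an interbank exposure network is sketched below, with each bank propagating its distress exactly once. This covers only the credit channel in its simplest form; the paper's Debt-Solvency Rank additionally entangles the funding/liquidity channel, and its exact dynamics differ.

```python
import numpy as np

def debt_rank_sketch(W, shock, max_rounds=100):
    """Propagate relative equity losses h in [0, 1] over a network where
    W[i, j] is the impact of bank j's distress on bank i's equity;
    `shock` is the vector of initial losses."""
    h = shock.astype(float).copy()
    active = h > 0                        # initially shocked banks propagate first
    for _ in range(max_rounds):
        if not active.any():
            break
        delta = W @ (h * active)          # losses induced by currently active banks
        active = (h == 0) & (delta > 0)   # banks hit for the first time go next
        h = np.minimum(1.0, h + delta)
    return h                              # final relative equity loss per bank
```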
1609.06845
Sebastien Lefevre
Nicolas Audebert (OBELIX, Palaiseau), Bertrand Le Saux (Palaiseau), S\'ebastien Lef\`evre (OBELIX)
On the usability of deep networks for object-based image analysis
in International Conference on Geographic Object-Based Image Analysis (GEOBIA), Sep 2016, Enschede, Netherlands
null
null
null
cs.NE cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Like computer vision before it, remote sensing has been radically changed by the introduction of Convolutional Neural Networks. Land cover mapping, object detection and scene understanding in aerial images rely more and more on deep learning to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks (Long et al., 2015) can even produce pixel-level annotations for semantic mapping. In this work, we show how to use such deep networks to detect, segment and classify different varieties of wheeled vehicles in aerial images from the ISPRS Potsdam dataset. This allows us to tackle object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. In particular, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data. First, we train an FCN variant on the ISPRS Potsdam dataset and show how the learnt semantic maps can be used to extract precise segmentations of vehicles, which allow us to study the distribution of vehicles in the city. Second, we train a CNN to perform vehicle classification on the VEDAI (Razakarivony and Jurie, 2016) dataset, and transfer its knowledge to classify candidate segmented vehicles on the Potsdam dataset.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 07:39:37 GMT" } ]
2016-09-23T00:00:00
[ [ "Audebert", "Nicolas", "", "OBELIX, Palaiseau" ], [ "Saux", "Bertrand Le", "", "Palaiseau" ], [ "Lefèvre", "Sébastien", "", "OBELIX" ] ]
TITLE: On the usability of deep networks for object-based image analysis ABSTRACT: Like computer vision before it, remote sensing has been radically changed by the introduction of Convolutional Neural Networks. Land cover mapping, object detection and scene understanding in aerial images rely more and more on deep learning to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks (Long et al., 2015) can even produce pixel-level annotations for semantic mapping. In this work, we show how to use such deep networks to detect, segment and classify different varieties of wheeled vehicles in aerial images from the ISPRS Potsdam dataset. This allows us to tackle object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. In particular, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data. First, we train an FCN variant on the ISPRS Potsdam dataset and show how the learnt semantic maps can be used to extract precise segmentations of vehicles, which allow us to study the distribution of vehicles in the city. Second, we train a CNN to perform vehicle classification on the VEDAI (Razakarivony and Jurie, 2016) dataset, and transfer its knowledge to classify candidate segmented vehicles on the Potsdam dataset.
no_new_dataset
0.943243
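The step from learnt semantic maps to per-vehicle objects described above can be approximated with connected-component labelling of the pixels predicted as vehicle, one bounding box per component. A SciPy sketch; the class index and function name are assumptions, not the authors' code:

```python
import numpy as np
from scipy import ndimage

def extract_vehicles(semantic_map, vehicle_class):
    """Label connected components of the vehicle class in a pixel-wise
    semantic map and return one (y0, x0, y1, x1) bounding box each."""
    mask = semantic_map == vehicle_class
    labels, n_objects = ndimage.label(mask)
    boxes = ndimage.find_objects(labels)        # list of (slice_y, slice_x)
    return [(sy.start, sx.start, sy.stop, sx.stop) for sy, sx in boxes]
```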
1609.06846
Nicolas Audebert
Nicolas Audebert (OBELIX, Palaiseau), Bertrand Le Saux (Palaiseau), S\'ebastien Lef\`evre (OBELIX)
Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks
Asian Conference on Computer Vision (ACCV16), Nov 2016, Taipei, Taiwan
null
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. In particular, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: 1) we efficiently transfer a DFCNN from generic everyday images to remote sensing images; 2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; 3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 07:42:06 GMT" } ]
2016-09-23T00:00:00
[ [ "Audebert", "Nicolas", "", "OBELIX, Palaiseau" ], [ "Saux", "Bertrand Le", "", "Palaiseau" ], [ "Lefèvre", "Sébastien", "", "OBELIX" ] ]
TITLE: Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks ABSTRACT: This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. In particular, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: 1) we efficiently transfer a DFCNN from generic everyday images to remote sensing images; 2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; 3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.
no_new_dataset
0.950041
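Contribution 2) above, the multi-kernel convolutional layer, can be read as parallel convolutions of different kernel sizes whose outputs are merged to aggregate predictions at multiple scales. The PyTorch sketch below averages the branches; the kernel sizes and the averaging rule are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiKernelConv(nn.Module):
    """Parallel same-padding convolutions at several kernel sizes,
    averaged into one multi-scale output map."""
    def __init__(self, in_ch, out_ch, sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in sizes)

    def forward(self, x):
        return torch.stack([branch(x) for branch in self.branches]).mean(dim=0)
```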
1609.06896
Dominik Alexander Klein
Dominik Alexander Klein, Dirk Schulz, Armin Bernd Cremers
Realtime Hierarchical Clustering based on Boundary and Surface Statistics
Asian Conf. on Computer Vision (ACCV) 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual grouping is a key mechanism in human scene perception. There, it belongs to subconscious, early processing and is a key prerequisite for other high-level tasks such as recognition. In this paper, we introduce an efficient, realtime-capable algorithm which likewise agglomerates a valuable hierarchical clustering of a scene, using purely local appearance statistics. To speed up the processing, we first subdivide the image into meaningful, atomic segments using a fast Watershed transform. Starting from there, our rapid agglomerative clustering algorithm prunes and maintains the connectivity graph between clusters so that it contains only those pairs which directly touch in the image domain and are reciprocal nearest neighbors (RNN) with respect to a distance metric. The core of this approach is our novel cluster distance: it combines boundary and surface statistics in terms of both appearance and spatial linkage. This yields state-of-the-art performance, as we demonstrate in conclusive experiments conducted on the BSDS500 and Pascal-Context datasets.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 10:17:30 GMT" } ]
2016-09-23T00:00:00
[ [ "Klein", "Dominik Alexander", "" ], [ "Schulz", "Dirk", "" ], [ "Cremers", "Armin Bernd", "" ] ]
TITLE: Realtime Hierarchical Clustering based on Boundary and Surface Statistics ABSTRACT: Visual grouping is a key mechanism in human scene perception. There, it is part of subconscious, early processing and a key prerequisite for other high-level tasks such as recognition. In this paper, we introduce an efficient, realtime-capable algorithm which likewise agglomerates a valuable hierarchical clustering of a scene, using purely local appearance statistics. To speed up the processing, we first subdivide the image into meaningful, atomic segments using a fast Watershed transform. Starting from there, our rapid, agglomerative clustering algorithm prunes and maintains the connectivity graph between clusters to contain only those pairs which directly touch in the image domain and are reciprocal nearest neighbors (RNN) with respect to a distance metric. The core of this approach is our novel cluster distance: it combines boundary and surface statistics in terms of both appearance and spatial linkage. This yields state-of-the-art performance, as we demonstrate in conclusive experiments conducted on the BSDS500 and Pascal-Context datasets.
no_new_dataset
0.951188
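The reciprocal-nearest-neighbor merging rule at the heart of the record above can be illustrated with a toy agglomeration on 2-D points using plain centroid distance. This sketch ignores the paper's boundary/surface statistics and image-domain connectivity; it shows only the RNN merge criterion.

```python
# Toy reciprocal-nearest-neighbor (RNN) agglomeration with centroid distance.
import numpy as np

def rnn_agglomerate(points):
    clusters = [[i] for i in range(len(points))]
    centroids = [points[i].astype(float) for i in range(len(points))]
    merges = []
    while len(clusters) > 1:
        C = np.array(centroids)
        D = np.linalg.norm(C[:, None] - C[None, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        nn = D.argmin(axis=1)
        # find a reciprocal pair: i's nearest neighbor is j and vice versa
        # (the globally closest pair always qualifies, so one always exists)
        i = next(a for a in range(len(nn)) if nn[nn[a]] == a)
        j = nn[i]
        i, j = min(i, j), max(i, j)
        merges.append((clusters[i], clusters[j], D[i, j]))
        clusters[i] = clusters[i] + clusters[j]
        centroids[i] = points[clusters[i]].mean(axis=0)
        del clusters[j], centroids[j]
    return merges

pts = np.random.rand(20, 2)
dendrogram = rnn_agglomerate(pts)   # list of (cluster_a, cluster_b, dist)
```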
1609.06988
Yuan Gao
Yuan Gao, Alan Yuille
Symmetric Non-Rigid Structure from Motion for Category-Specific Object Structure Estimation
Accepted to ECCV 2016
null
null
null
cs.CV cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many objects, especially those made by humans, are symmetric, e.g. cars and aeroplanes. This paper addresses the estimation of 3D structures of symmetric objects from multiple images of the same object category, e.g. different cars, seen from various viewpoints. We assume that the deformation between different instances from the same object category is non-rigid and symmetric. In this paper, we extend two leading non-rigid structure from motion (SfM) algorithms to exploit symmetry constraints. We model both methods as energy minimization, in which we also recover the missing observations caused by occlusions. In particular, we show that by rotating the coordinate system, the energy can be decoupled into two independent terms, which still exploit symmetry, allowing us to apply matrix factorization separately to each of them for initialization. The results on the Pascal3D+ dataset show that our methods significantly improve performance over baseline methods.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 13:57:10 GMT" } ]
2016-09-23T00:00:00
[ [ "Gao", "Yuan", "" ], [ "Yuille", "Alan", "" ] ]
TITLE: Symmetric Non-Rigid Structure from Motion for Category-Specific Object Structure Estimation ABSTRACT: Many objects, especially those made by humans, are symmetric, e.g. cars and aeroplanes. This paper addresses the estimation of 3D structures of symmetric objects from multiple images of the same object category, e.g. different cars, seen from various viewpoints. We assume that the deformation between different instances from the same object category is non-rigid and symmetric. In this paper, we extend two leading non-rigid structure from motion (SfM) algorithms to exploit symmetry constraints. We model both methods as energy minimization, in which we also recover the missing observations caused by occlusions. In particular, we show that by rotating the coordinate system, the energy can be decoupled into two independent terms, which still exploit symmetry, allowing us to apply matrix factorization separately to each of them for initialization. The results on the Pascal3D+ dataset show that our methods significantly improve performance over baseline methods.
no_new_dataset
0.94868
1609.07034
Siddhartha Banerjee
Siddhartha Banerjee, Prasenjit Mitra and Kazunari Sugiyama
Multi-document abstractive summarization using ILP based multi-sentence compression
IJCAI'15 Proceedings of the 24th International Conference on Artificial Intelligence, Pages 1208-1214, AAAI Press
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstractive summarization is an ideal form of summarization since it can synthesize information from multiple documents to create concise informative summaries. In this work, we aim at developing an abstractive summarizer. First, our proposed approach identifies the most important document in the multi-document set. The sentences in the most important document are aligned to sentences in other documents to generate clusters of similar sentences. Second, we generate K-shortest paths from the sentences in each cluster using a word-graph structure. Finally, we select sentences from the set of shortest paths generated from all the clusters employing a novel integer linear programming (ILP) model with the objective of maximizing information content and readability of the final summary. Our ILP model represents the shortest paths as binary variables and considers the length of the path, information score and linguistic quality score in the objective function. Experimental results on the DUC 2004 and 2005 multi-document summarization datasets show that our proposed approach outperforms all the baselines and state-of-the-art extractive summarizers as measured by the ROUGE scores. Our method also outperforms a recent abstractive summarization technique. In manual evaluation, our approach also achieves promising results on informativeness and readability.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 15:51:43 GMT" } ]
2016-09-23T00:00:00
[ [ "Banerjee", "Siddhartha", "" ], [ "Mitra", "Prasenjit", "" ], [ "Sugiyama", "Kazunari", "" ] ]
TITLE: Multi-document abstractive summarization using ILP based multi-sentence compression ABSTRACT: Abstractive summarization is an ideal form of summarization since it can synthesize information from multiple documents to create concise informative summaries. In this work, we aim at developing an abstractive summarizer. First, our proposed approach identifies the most important document in the multi-document set. The sentences in the most important document are aligned to sentences in other documents to generate clusters of similar sentences. Second, we generate K-shortest paths from the sentences in each cluster using a word-graph structure. Finally, we select sentences from the set of shortest paths generated from all the clusters employing a novel integer linear programming (ILP) model with the objective of maximizing information content and readability of the final summary. Our ILP model represents the shortest paths as binary variables and considers the length of the path, information score and linguistic quality score in the objective function. Experimental results on the DUC 2004 and 2005 multi-document summarization datasets show that our proposed approach outperforms all the baselines and state-of-the-art extractive summarizers as measured by the ROUGE scores. Our method also outperforms a recent abstractive summarization technique. In manual evaluation, our approach also achieves promising results on informativeness and readability.
no_new_dataset
0.947137
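The ILP selection step in the record above lends itself to a compact sketch with PuLP: binary variables pick compressed-sentence paths so as to maximize a combined informativeness and readability score under a length budget. All scores, lengths, and path names below are invented placeholders, not values from the paper.

```python
# Hedged sketch of the ILP selection step: binary variables pick paths to
# maximize (informativeness + linguistic quality) under a length budget.
import pulp

paths = ["path_a", "path_b", "path_c", "path_d"]
info = {"path_a": 0.9, "path_b": 0.7, "path_c": 0.6, "path_d": 0.4}
ling = {"path_a": 0.5, "path_b": 0.8, "path_c": 0.6, "path_d": 0.9}
length = {"path_a": 18, "path_b": 12, "path_c": 15, "path_d": 9}
budget = 30  # max words in the summary

prob = pulp.LpProblem("summary_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("pick", paths, cat="Binary")
prob += pulp.lpSum((info[p] + ling[p]) * x[p] for p in paths)   # objective
prob += pulp.lpSum(length[p] * x[p] for p in paths) <= budget   # constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))

summary = [p for p in paths if x[p].value() == 1]
```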
1609.07061
Itay Hubara
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv and Yoshua Bengio
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
arXiv admin note: text overlap with arXiv:1602.02830
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 16:48:03 GMT" } ]
2016-09-23T00:00:00
[ [ "Hubara", "Itay", "" ], [ "Courbariaux", "Matthieu", "" ], [ "Soudry", "Daniel", "" ], [ "El-Yaniv", "Ran", "" ], [ "Bengio", "Yoshua", "" ] ]
TITLE: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations ABSTRACT: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
no_new_dataset
0.943919
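The core training trick above, using quantized weights in the forward pass while updating real-valued weights, is commonly implemented with a straight-through estimator. A minimal PyTorch sketch of 1-bit weight quantization follows; it mirrors the general recipe rather than the paper's released code.

```python
# Minimal sketch of 1-bit quantization with a straight-through estimator:
# sign() in the forward pass, clipped identity gradient in the backward pass.
import torch

class Binarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # straight-through: pass gradients where |w| <= 1, zero elsewhere
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)
wb = Binarize.apply(w)          # {-1, +1} weights used in the forward pass
loss = (wb ** 2).sum()
loss.backward()                 # gradients flow to the real-valued weights
```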
1609.07086
Arvind Saibaba
Jiani Zhang, Arvind K. Saibaba, Misha Kilmer, Shuchin Aeron
A Randomized Tensor Singular Value Decomposition based on the t-product
null
null
null
null
math.NA cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tensor Singular Value Decomposition (t-SVD) for third order tensors that was proposed by Kilmer and Martin (2011) has been applied successfully in many fields, such as computed tomography, facial recognition, and video completion. In this paper, we propose a method that extends a well-known randomized matrix method to the t-SVD. This method can produce a factorization with similar properties to the t-SVD, but is more computationally efficient on very large datasets. We present details of the algorithm, theoretical results, and provide numerical results that show the promise of our approach for compressing and analyzing datasets. We also present an improved analysis of the randomized subspace iteration for matrices, which may be of independent interest to the scientific community.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 17:55:21 GMT" } ]
2016-09-23T00:00:00
[ [ "Zhang", "Jiani", "" ], [ "Saibaba", "Arvind K.", "" ], [ "Kilmer", "Misha", "" ], [ "Aeron", "Shuchin", "" ] ]
TITLE: A Randomized Tensor Singular Value Decomposition based on the t-product ABSTRACT: The tensor Singular Value Decomposition (t-SVD) for third order tensors that was proposed by Kilmer and Martin (2011) has been applied successfully in many fields, such as computed tomography, facial recognition, and video completion. In this paper, we propose a method that extends a well-known randomized matrix method to the t-SVD. This method can produce a factorization with similar properties to the t-SVD, but is more computationally efficient on very large datasets. We present details of the algorithm, theoretical results, and provide numerical results that show the promise of our approach for compressing and analyzing datasets. We also present an improved analysis of the randomized subspace iteration for matrices, which may be of independent interest to the scientific community.
no_new_dataset
0.95018
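The "well-known randomized matrix method" referenced above is the randomized SVD of Halko et al.; the t-product version applies this idea slice-wise in the Fourier domain. A numpy sketch of the matrix case, including the subspace iteration whose analysis the paper improves:

```python
# Randomized SVD for matrices (Halko et al.): random range sketch, optional
# power/subspace iteration, QR, then an exact SVD of the small projection.
import numpy as np

def randomized_svd(A, rank, oversample=10, power_iters=2):
    m, n = A.shape
    Omega = np.random.randn(n, rank + oversample)   # random test matrix
    Y = A @ Omega
    for _ in range(power_iters):                    # subspace iteration
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                          # orthonormal range basis
    B = Q.T @ A                                     # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

A = np.random.randn(500, 300)
U, s, Vt = randomized_svd(A, rank=20)
```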
1512.02902
Makarand Tapaswi
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, Sanja Fidler
MovieQA: Understanding Stories in Movies through Question-Answering
CVPR 2016, Spotlight presentation. Benchmark @ http://movieqa.cs.toronto.edu/ Code @ https://github.com/makarandtapaswi/MovieQA_CVPR2016/
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceptive answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this dataset public along with an evaluation benchmark to encourage inspiring work in this challenging domain.
[ { "version": "v1", "created": "Wed, 9 Dec 2015 15:34:31 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2016 04:52:35 GMT" } ]
2016-09-22T00:00:00
[ [ "Tapaswi", "Makarand", "" ], [ "Zhu", "Yukun", "" ], [ "Stiefelhagen", "Rainer", "" ], [ "Torralba", "Antonio", "" ], [ "Urtasun", "Raquel", "" ], [ "Fidler", "Sanja", "" ] ]
TITLE: MovieQA: Understanding Stories in Movies through Question-Answering ABSTRACT: We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceptive answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this dataset public along with an evaluation benchmark to encourage inspiring work in this challenging domain.
new_dataset
0.960025
1602.03426
Aditya Joshi
Aditya Joshi, Pushpak Bhattacharyya, Mark James Carman
Automatic Sarcasm Detection: A Survey
This paper is likely to be submitted to ACM CSUR. This copy on arXiv is to obtain feedback from stakeholders
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step for sentiment analysis, considering the prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, sarcasm detection has witnessed great interest from the sentiment analysis community. This paper is the first known compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pattern extraction to identify implicit sentiment, use of hashtag-based supervision, and use of context beyond target text. In this paper, we describe datasets, approaches, trends and issues in sarcasm detection. We also discuss representative performance values, shared tasks and pointers to future work, as given in prior works. In terms of resources that could be useful for understanding the state-of-the-art, the survey presents several useful illustrations - most prominently, a table that summarizes past papers along different dimensions such as features, annotation techniques, data forms, etc.
[ { "version": "v1", "created": "Wed, 10 Feb 2016 16:02:46 GMT" }, { "version": "v2", "created": "Tue, 20 Sep 2016 22:15:52 GMT" } ]
2016-09-22T00:00:00
[ [ "Joshi", "Aditya", "" ], [ "Bhattacharyya", "Pushpak", "" ], [ "Carman", "Mark James", "" ] ]
TITLE: Automatic Sarcasm Detection: A Survey ABSTRACT: Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step for sentiment analysis, considering the prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, sarcasm detection has witnessed great interest from the sentiment analysis community. This paper is the first known compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pattern extraction to identify implicit sentiment, use of hashtag-based supervision, and use of context beyond target text. In this paper, we describe datasets, approaches, trends and issues in sarcasm detection. We also discuss representative performance values, shared tasks and pointers to future work, as given in prior works. In terms of resources that could be useful for understanding the state-of-the-art, the survey presents several useful illustrations - most prominently, a table that summarizes past papers along different dimensions such as features, annotation techniques, data forms, etc.
no_new_dataset
0.938969
1606.05741
Christian Jacobs
Christian T. Jacobs, Alexandros Avdis
Connecting web-based mapping services with scientific data repositories: collaborative curation and retrieval of simulation data via a geospatial interface
Submission withdrawn from the International Journal of Digital Curation on 9 September 2016 in order to prepare a joint paper with additional colleagues
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
Increasing quantities of scientific data are becoming readily accessible via online repositories such as those provided by Figshare and Zenodo. Geoscientific simulations in particular generate large quantities of data, with several research groups studying many, often overlapping areas of the world. When studying a particular area, being able to keep track of one's own simulations as well as those of collaborators can be challenging. This paper describes the design, implementation, and evaluation of a new tool for visually cataloguing and retrieving data associated with a given geographical location through a web-based Google Maps interface. Each data repository is pin-pointed on the map with a marker based on the geographical location that the dataset corresponds to. By clicking on the markers, users can quickly inspect the metadata of the repositories and download the associated data files. The crux of the approach lies in the ability to easily query and retrieve data from multiple sources via a common interface. While many advances are being made in terms of scientific data repositories, the development of this new tool has uncovered several issues and limitations of the current state-of-the-art which are discussed herein, along with some ideas for the future.
[ { "version": "v1", "created": "Sat, 18 Jun 2016 11:46:10 GMT" }, { "version": "v2", "created": "Tue, 20 Sep 2016 19:56:01 GMT" }, { "version": "v3", "created": "Wed, 21 Sep 2016 08:01:24 GMT" } ]
2016-09-22T00:00:00
[ [ "Jacobs", "Christian T.", "" ], [ "Avdis", "Alexandros", "" ] ]
TITLE: Connecting web-based mapping services with scientific data repositories: collaborative curation and retrieval of simulation data via a geospatial interface ABSTRACT: Increasing quantities of scientific data are becoming readily accessible via online repositories such as those provided by Figshare and Zenodo. Geoscientific simulations in particular generate large quantities of data, with several research groups studying many, often overlapping areas of the world. When studying a particular area, being able to keep track of one's own simulations as well as those of collaborators can be challenging. This paper describes the design, implementation, and evaluation of a new tool for visually cataloguing and retrieving data associated with a given geographical location through a web-based Google Maps interface. Each data repository is pin-pointed on the map with a marker based on the geographical location that the dataset corresponds to. By clicking on the markers, users can quickly inspect the metadata of the repositories and download the associated data files. The crux of the approach lies in the ability to easily query and retrieve data from multiple sources via a common interface. While many advances are being made in terms of scientific data repositories, the development of this new tool has uncovered several issues and limitations of the current state-of-the-art which are discussed herein, along with some ideas for the future.
no_new_dataset
0.943867
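The interface in the record above is built on the Google Maps API; as a stand-in, a few lines of folium (Leaflet) reproduce the core idea of pinning each repository at its geographic location with a popup link. The repository names, coordinates, and DOIs below are invented examples.

```python
# Speculative folium stand-in for the paper's Google Maps interface: one
# marker per data repository, with a popup linking to the archived dataset.
import folium

repos = [
    {"name": "Tidal simulation, Orkney", "lat": 59.0, "lon": -3.0,
     "url": "https://doi.org/10.xxxx/example-1"},
    {"name": "Mesh dataset, Thames estuary", "lat": 51.5, "lon": 0.6,
     "url": "https://doi.org/10.xxxx/example-2"},
]

m = folium.Map(location=[55.0, -2.0], zoom_start=5)
for r in repos:
    folium.Marker(
        [r["lat"], r["lon"]],
        popup='<a href="{}">{}</a>'.format(r["url"], r["name"]),
    ).add_to(m)
m.save("repository_map.html")
```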
1609.06265
Faizan Javed
Janani Balaji, Faizan Javed, Mayank Kejriwal, Chris Min, Sam Sander and Ozgur Ozturk
An Ensemble Blocking Scheme for Entity Resolution of Large and Sparse Datasets
null
null
null
null
cs.AI cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity Resolution, also called record linkage or deduplication, refers to the process of identifying and merging duplicate versions of the same entity into a unified representation. The standard practice is to use a rule-based or machine-learning-based model that compares entity pairs and assigns a score to represent the pairs' Match/Non-Match status. However, performing an exhaustive pair-wise comparison on all pairs of records leads to quadratic matcher complexity; hence, a Blocking step is performed before the Matching to group similar entities into smaller blocks that the matcher can then examine exhaustively. Several blocking schemes have been developed to efficiently and effectively block the input dataset into manageable groups. At CareerBuilder (CB), we perform deduplication on massive datasets of people profiles collected from disparate sources with varying informational content. We observed that employing a single blocking technique did not cover all possible scenarios, due to the multi-faceted nature of our data sources. In this paper, we describe our ensemble approach to blocking that combines two different blocking techniques to leverage their respective strengths.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 17:44:28 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2016 00:26:17 GMT" } ]
2016-09-22T00:00:00
[ [ "Balaji", "Janani", "" ], [ "Javed", "Faizan", "" ], [ "Kejriwal", "Mayank", "" ], [ "Min", "Chris", "" ], [ "Sander", "Sam", "" ], [ "Ozturk", "Ozgur", "" ] ]
TITLE: An Ensemble Blocking Scheme for Entity Resolution of Large and Sparse Datasets ABSTRACT: Entity Resolution, also called record linkage or deduplication, refers to the process of identifying and merging duplicate versions of the same entity into a unified representation. The standard practice is to use a rule-based or machine-learning-based model that compares entity pairs and assigns a score to represent the pairs' Match/Non-Match status. However, performing an exhaustive pair-wise comparison on all pairs of records leads to quadratic matcher complexity; hence, a Blocking step is performed before the Matching to group similar entities into smaller blocks that the matcher can then examine exhaustively. Several blocking schemes have been developed to efficiently and effectively block the input dataset into manageable groups. At CareerBuilder (CB), we perform deduplication on massive datasets of people profiles collected from disparate sources with varying informational content. We observed that employing a single blocking technique did not cover all possible scenarios, due to the multi-faceted nature of our data sources. In this paper, we describe our ensemble approach to blocking that combines two different blocking techniques to leverage their respective strengths.
no_new_dataset
0.94887
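A hedged illustration of the ensemble idea above: take the union of candidate pairs produced by two simple schemes, token blocking on names and a sorted-neighborhood pass, standing in for whichever two techniques CB actually combines. Records and fields are toy data.

```python
# Ensemble blocking sketch: union of candidate pairs from two schemes.
from itertools import combinations

records = [
    {"id": 1, "name": "jane doe", "email": "[email protected]"},
    {"id": 2, "name": "jane d.",  "email": "[email protected]"},
    {"id": 3, "name": "john roe", "email": "[email protected]"},
]

def token_blocking(recs):
    blocks = {}
    for r in recs:
        for tok in r["name"].split():       # one block per name token
            blocks.setdefault(tok, []).append(r["id"])
    pairs = set()
    for ids in blocks.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

def sorted_neighborhood(recs, window=2):
    order = sorted(recs, key=lambda r: r["email"])   # sort key: email
    pairs = set()
    for i in range(len(order)):
        for j in range(i + 1, min(i + window, len(order))):
            pairs.add(tuple(sorted((order[i]["id"], order[j]["id"]))))
    return pairs

# Union of both schemes: the matcher only examines these candidate pairs.
candidates = token_blocking(records) | sorted_neighborhood(records)
```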
1609.06335
Anthony Gitter
Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel, Animashree Anandkumar
Unsupervised learning of transcriptional regulatory networks via latent tree graphical models
37 pages, 9 figures
null
null
null
q-bio.MN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression is a readily-observed quantification of transcriptional activity and cellular state that enables the recovery of the relationships between regulators and their target genes. Reconstructing transcriptional regulatory networks from gene expression data is a problem that has attracted much attention, but previous work often makes the simplifying (but unrealistic) assumption that regulator activity is represented by mRNA levels. We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity. The latent tree model is a type of Markov random field that includes both observed gene variables and latent (hidden) variables, which factorize on a Markov tree. Through efficient unsupervised learning approaches, we determine which groups of genes are co-regulated by hidden regulators and the activity levels of those regulators. Post-processing annotates many of these discovered latent variables as specific transcription factors or groups of transcription factors. Other latent variables do not necessarily represent physical regulators but instead reveal hidden structure in the gene expression such as shared biological function. We apply the latent tree graphical model to a yeast stress response dataset. In addition to novel predictions, such as condition-specific binding of the transcription factor Msn4, our model recovers many known aspects of the yeast regulatory network. These include groups of co-regulated genes, condition-specific regulator activity, and combinatorial regulation among transcription factors. The latent tree graphical model is a general approach for analyzing gene expression data that requires no prior knowledge of which possible regulators exist, regulator activity, or where transcription factors physically bind.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 20:14:15 GMT" } ]
2016-09-22T00:00:00
[ [ "Gitter", "Anthony", "" ], [ "Huang", "Furong", "" ], [ "Valluvan", "Ragupathyraj", "" ], [ "Fraenkel", "Ernest", "" ], [ "Anandkumar", "Animashree", "" ] ]
TITLE: Unsupervised learning of transcriptional regulatory networks via latent tree graphical models ABSTRACT: Gene expression is a readily-observed quantification of transcriptional activity and cellular state that enables the recovery of the relationships between regulators and their target genes. Reconstructing transcriptional regulatory networks from gene expression data is a problem that has attracted much attention, but previous work often makes the simplifying (but unrealistic) assumption that regulator activity is represented by mRNA levels. We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity. The latent tree model is a type of Markov random field that includes both observed gene variables and latent (hidden) variables, which factorize on a Markov tree. Through efficient unsupervised learning approaches, we determine which groups of genes are co-regulated by hidden regulators and the activity levels of those regulators. Post-processing annotates many of these discovered latent variables as specific transcription factors or groups of transcription factors. Other latent variables do not necessarily represent physical regulators but instead reveal hidden structure in the gene expression such as shared biological function. We apply the latent tree graphical model to a yeast stress response dataset. In addition to novel predictions, such as condition-specific binding of the transcription factor Msn4, our model recovers many known aspects of the yeast regulatory network. These include groups of co-regulated genes, condition-specific regulator activity, and combinatorial regulation among transcription factors. The latent tree graphical model is a general approach for analyzing gene expression data that requires no prior knowledge of which possible regulators exist, regulator activity, or where transcription factors physically bind.
no_new_dataset
0.955194
1609.06380
Yang Liu
Yang Liu and Sujian Li
Recognizing Implicit Discourse Relations via Repeated Reading: Neural Networks with Multi-Level Attention
Accepted as long paper at EMNLP2016
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing implicit discourse relations is a challenging but important task in the field of Natural Language Processing. For such a complex text processing task, different from previous studies, we argue that it is necessary to repeatedly read the arguments and dynamically exploit the features useful for recognizing discourse relations. To mimic the repeated reading strategy, we propose neural networks with multi-level attention (NNMA), combining the attention mechanism and external memories to gradually fix the attention on specific words helpful for judging the discourse relations. Experiments on the PDTB dataset show that our proposed method achieves state-of-the-art results. The visualization of the attention weights also illustrates how our model observes the arguments at each level and progressively locates the important words.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 22:59:19 GMT" } ]
2016-09-22T00:00:00
[ [ "Liu", "Yang", "" ], [ "Li", "Sujian", "" ] ]
TITLE: Recognizing Implicit Discourse Relations via Repeated Reading: Neural Networks with Multi-Level Attention ABSTRACT: Recognizing implicit discourse relations is a challenging but important task in the field of Natural Language Processing. For such a complex text processing task, different from previous studies, we argue that it is necessary to repeatedly read the arguments and dynamically exploit the features useful for recognizing discourse relations. To mimic the repeated reading strategy, we propose neural networks with multi-level attention (NNMA), combining the attention mechanism and external memories to gradually fix the attention on specific words helpful for judging the discourse relations. Experiments on the PDTB dataset show that our proposed method achieves state-of-the-art results. The visualization of the attention weights also illustrates how our model observes the arguments at each level and progressively locates the important words.
no_new_dataset
0.948965
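One attention level over word representations can be sketched as follows; NNMA stacks several such levels and adds external memory, so this is only the basic building block, with dimensions chosen arbitrarily.

```python
# A single attention level: score each position, softmax over the sequence,
# and pool the weighted representations. The returned weights are what the
# paper visualizes to show where the model "looks" at each reading pass.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                          # h: (batch, seq, dim)
        a = torch.softmax(self.score(h), dim=1)    # attention weights
        return (a * h).sum(dim=1), a.squeeze(-1)   # pooled repr., weights

pool = AttentionPool(64)
rep, weights = pool(torch.randn(2, 30, 64))        # rep: (2, 64)
```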
1609.06434
Haoran Chen
Haoran Chen, Yanfeng Sun, Junbin Gao, Yongli Hu, Baocai Yin
Partial Least Squares Regression on Riemannian Manifolds and Its Application in Classifications
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partial least squares regression (PLSR) has been a popular technique to explore the linear relationship between two datasets. However, most algorithmic implementations of PLSR may only achieve a suboptimal solution through optimization in Euclidean space. In this paper, we propose several novel PLSR models on Riemannian manifolds and develop optimization algorithms based on the Riemannian geometry of manifolds. These algorithms can calculate all the factors of PLSR globally to avoid suboptimal solutions. In a number of experiments, we have demonstrated the benefits of applying the proposed models and algorithms to a variety of learning tasks in pattern recognition and object classification.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 06:48:07 GMT" } ]
2016-09-22T00:00:00
[ [ "Chen", "Haoran", "" ], [ "Sun", "Yanfeng", "" ], [ "Gao", "Junbin", "" ], [ "Hu", "Yongli", "" ], [ "Yin", "Baocai", "" ] ]
TITLE: Partial Least Squares Regression on Riemannian Manifolds and Its Application in Classifications ABSTRACT: Partial least squares regression (PLSR) has been a popular technique to explore the linear relationship between two datasets. However, most algorithmic implementations of PLSR may only achieve a suboptimal solution through optimization in Euclidean space. In this paper, we propose several novel PLSR models on Riemannian manifolds and develop optimization algorithms based on the Riemannian geometry of manifolds. These algorithms can calculate all the factors of PLSR globally to avoid suboptimal solutions. In a number of experiments, we have demonstrated the benefits of applying the proposed models and algorithms to a variety of learning tasks in pattern recognition and object classification.
no_new_dataset
0.949012
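The Euclidean baseline the paper improves upon is ordinary PLSR, available off the shelf in scikit-learn; the proposed Riemannian variants have no such implementation, so only the baseline is sketched here on synthetic data.

```python
# Ordinary (Euclidean) PLSR baseline via scikit-learn, on synthetic data
# where Y depends linearly on a 3-dimensional subspace of X plus noise.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y = X[:, :3] @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(100, 2))

pls = PLSRegression(n_components=3)
pls.fit(X, Y)
Y_hat = pls.predict(X)        # latent factors live in pls.x_scores_
```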
1609.06457
Pin-Yu Chen
Pin-Yu Chen and Thibaut Gensollen and Alfred O. Hero III
AMOS: An Automated Model Order Selection Algorithm for Spectral Graph Clustering
arXiv admin note: substantial text overlap with arXiv:1604.03159
null
null
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the longstanding problems in spectral graph clustering (SGC) is the so-called model order selection problem: automated selection of the correct number of clusters. This is equivalent to the problem of finding the number of connected components or communities in an undirected graph. In this paper, we propose AMOS, an automated model order selection algorithm for SGC. Based on a recent analysis of clustering reliability for SGC under the random interconnection model, AMOS works by incrementally increasing the number of clusters, estimating the quality of identified clusters, and providing a series of clustering reliability tests. Consequently, AMOS outputs clusters of minimal model order with statistical clustering reliability guarantees. Compared to three other automated graph clustering methods on real-world datasets, AMOS shows superior performance in terms of multiple external and internal clustering metrics.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 08:14:12 GMT" } ]
2016-09-22T00:00:00
[ [ "Chen", "Pin-Yu", "" ], [ "Gensollen", "Thibaut", "" ], [ "Hero", "Alfred O.", "III" ] ]
TITLE: AMOS: An Automated Model Order Selection Algorithm for Spectral Graph Clustering ABSTRACT: One of the longstanding problems in spectral graph clustering (SGC) is the so-called model order selection problem: automated selection of the correct number of clusters. This is equivalent to the problem of finding the number of connected components or communities in an undirected graph. In this paper, we propose AMOS, an automated model order selection algorithm for SGC. Based on a recent analysis of clustering reliability for SGC under the random interconnection model, AMOS works by incrementally increasing the number of clusters, estimating the quality of identified clusters, and providing a series of clustering reliability tests. Consequently, AMOS outputs clusters of minimal model order with statistical clustering reliability guarantees. Compared to three other automated graph clustering methods on real-world datasets, AMOS shows superior performance in terms of multiple external and internal clustering metrics.
no_new_dataset
0.953362
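A simplified stand-in for the AMOS loop: sweep the number of clusters in spectral clustering and keep the model order with the best silhouette score. AMOS itself uses statistical clustering reliability tests under the random interconnection model, not silhouette, so this is only an analogy for automated order selection.

```python
# Incrementally increase the model order and score each clustering; a
# silhouette criterion stands in for AMOS's reliability tests.
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
best_k, best_score = None, -1.0
for k in range(2, 9):                      # incrementally increase the order
    labels = SpectralClustering(n_clusters=k, affinity="nearest_neighbors",
                                random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k)                              # expected: 4 on this toy data
```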
1609.06532
Kar Wai Lim
Kar Wai Lim and Wray Buntine
Bibliographic Analysis on Research Publications using Authors, Categorical Labels and the Citation Network
Preprint for Journal Machine Learning
Machine Learning 103(2):185-213, 2016
10.1007/s10994-016-5554-z
null
cs.DL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bibliographic analysis considers the author's research areas, the citation network and the paper content, among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a nonparametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeerX. The publication datasets are organised into three corpora, totalling about 168k publications with about 62k authors. The queried datasets are made available online. On three publicly available corpora, in addition to the queried datasets, our proposed model demonstrates improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 12:44:37 GMT" } ]
2016-09-22T00:00:00
[ [ "Lim", "Kar Wai", "" ], [ "Buntine", "Wray", "" ] ]
TITLE: Bibliographic Analysis on Research Publications using Authors, Categorical Labels and the Citation Network ABSTRACT: Bibliographic analysis considers the author's research areas, the citation network and the paper content, among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a nonparametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeerX. The publication datasets are organised into three corpora, totalling about 168k publications with about 62k authors. The queried datasets are made available online. On three publicly available corpora, in addition to the queried datasets, our proposed model demonstrates improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
no_new_dataset
0.949995
1609.06570
Guillaume Lemaitre
Guillaume Lemaitre and Fernando Nogueira and Christos K. Aridas
Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imbalanced-learn is an open-source Python toolbox aiming at providing a wide range of methods to cope with the problem of imbalanced datasets frequently encountered in machine learning and pattern recognition. The implemented state-of-the-art methods can be categorized into 4 groups: (i) under-sampling, (ii) over-sampling, (iii) combination of over- and under-sampling, and (iv) ensemble learning methods. The proposed toolbox only depends on numpy, scipy, and scikit-learn and is distributed under the MIT license. Furthermore, it is fully compatible with scikit-learn and is part of the scikit-learn-contrib supported project. Documentation, unit tests as well as integration tests are provided to ease usage and contribution. The toolbox is publicly available on GitHub: https://github.com/scikit-learn-contrib/imbalanced-learn.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 14:16:14 GMT" } ]
2016-09-22T00:00:00
[ [ "Lemaitre", "Guillaume", "" ], [ "Nogueira", "Fernando", "" ], [ "Aridas", "Christos K.", "" ] ]
TITLE: Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning ABSTRACT: Imbalanced-learn is an open-source Python toolbox aiming at providing a wide range of methods to cope with the problem of imbalanced datasets frequently encountered in machine learning and pattern recognition. The implemented state-of-the-art methods can be categorized into 4 groups: (i) under-sampling, (ii) over-sampling, (iii) combination of over- and under-sampling, and (iv) ensemble learning methods. The proposed toolbox only depends on numpy, scipy, and scikit-learn and is distributed under the MIT license. Furthermore, it is fully compatible with scikit-learn and is part of the scikit-learn-contrib supported project. Documentation, unit tests as well as integration tests are provided to ease usage and contribution. The toolbox is publicly available on GitHub: https://github.com/scikit-learn-contrib/imbalanced-learn.
no_new_dataset
0.940735
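Typical usage of the toolbox, chaining over- and under-sampling, looks as follows. Note that current releases expose fit_resample, while very early versions used fit_sample instead.

```python
# Rebalance a skewed binary problem: SMOTE over-sampling of the minority
# class followed by random under-sampling of the majority class.
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=1000, weights=[0.95, 0.05],
                           random_state=0)
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X_over, y_over)
```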
1609.06604
Filippo Arcadu
Filippo Arcadu, Jakob Vogel, Marco Stampanoni and Federica Marone
Improving analytical tomographic reconstructions through consistency conditions
16 pages, 12 figures
null
null
null
physics.med-ph cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work introduces and characterizes a fast parameterless filter based on the Helgason-Ludwig consistency conditions, used to improve the accuracy of analytical reconstructions of tomographic undersampled datasets. The filter, acting in the Radon domain, extrapolates intermediate projections between those existing. The resulting sinogram, doubled in views, is then reconstructed by a standard analytical method. Experiments with simulated data prove that the peak-signal-to-noise ratio of the results computed by filtered backprojection is improved up to 5-6 dB, if the filter is used prior to reconstruction.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 15:34:39 GMT" } ]
2016-09-22T00:00:00
[ [ "Arcadu", "Filippo", "" ], [ "Vogel", "Jakob", "" ], [ "Stampanoni", "Marco", "" ], [ "Marone", "Federica", "" ] ]
TITLE: Improving analytical tomographic reconstructions through consistency conditions ABSTRACT: This work introduces and characterizes a fast parameterless filter based on the Helgason-Ludwig consistency conditions, used to improve the accuracy of analytical reconstructions of tomographic undersampled datasets. The filter, acting in the Radon domain, extrapolates intermediate projections between those existing. The resulting sinogram, doubled in views, is then reconstructed by a standard analytical method. Experiments with simulated data prove that the peak-signal-to-noise ratio of the results computed by filtered backprojection is improved up to 5-6 dB, if the filter is used prior to reconstruction.
no_new_dataset
0.95297
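A toy version of the view-doubling idea can be put together with scikit-image: insert an interpolated projection between each pair of measured ones, then reconstruct with standard filtered backprojection. Plain linear interpolation stands in for the paper's consistency-condition-based extrapolation, so this only shows the shape of the pipeline.

```python
# View doubling on an undersampled Shepp-Logan sinogram: interleave
# interpolated projections, then reconstruct with default (ramp) FBP.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

img = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0., 180., 45, endpoint=False)     # undersampled views
sino = radon(img, theta=theta)                        # (detector, angle)

mid = 0.5 * (sino[:, :-1] + sino[:, 1:])              # intermediate views
sino2 = np.empty((sino.shape[0], 2 * sino.shape[1] - 1))
sino2[:, 0::2], sino2[:, 1::2] = sino, mid
theta2 = np.linspace(theta[0], theta[-1], sino2.shape[1])

recon = iradon(sino2, theta=theta2)                   # standard FBP
```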
1609.06612
Edip Demirbilek
Edip Demirbilek and Jean-Charles Gr\'egoire
Multimedia Communication Quality Assessment Testbeds
9 pages, 5 figures. This has not been submitted to any conference yet; however, some part of it will be presented at GStreamer Conf 2016. As the GStreamer conf requires only an abstract submission, we thought it would be better to share the actual content via arxiv
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We make intensive use of multimedia frameworks in our research on modeling perceived quality estimation in streaming services and real-time communications. In our preliminary work, we used the VLC VOD software to generate reference audiovisual files with various degrees of coding and network degradations. We have successfully built machine learning based models on the subjective quality dataset we generated using these files. However, imperfections in the dataset introduced by the multimedia framework we used prevented us from achieving the full potential of these models. In order to develop better models, we re-created our end-to-end multimedia pipeline using the GStreamer framework for audio and video streaming. A GStreamer based pipeline proved to be significantly more robust to network degradations than the VLC VOD framework and allowed us to easily stream a video flow at packet loss rates of up to 5\%. GStreamer has also enabled us to collect the relevant RTCP statistics that proved to be more accurate than network-deduced information. This dataset is freely available to the public. The accuracy of the statistics eventually helped us to generate better performing perceived quality estimation models. In this paper, we present the implementation of these VLC and GStreamer-based multimedia communication quality assessment testbeds with references to their publicly available code bases.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 15:59:59 GMT" } ]
2016-09-22T00:00:00
[ [ "Demirbilek", "Edip", "" ], [ "Grégoire", "Jean-Charles", "" ] ]
TITLE: Multimedia Communication Quality Assessment Testbeds ABSTRACT: We make intensive use of multimedia frameworks in our research on modeling perceived quality estimation in streaming services and real-time communications. In our preliminary work, we used the VLC VOD software to generate reference audiovisual files with various degrees of coding and network degradations. We have successfully built machine learning based models on the subjective quality dataset we generated using these files. However, imperfections in the dataset introduced by the multimedia framework we used prevented us from achieving the full potential of these models. In order to develop better models, we re-created our end-to-end multimedia pipeline using the GStreamer framework for audio and video streaming. A GStreamer based pipeline proved to be significantly more robust to network degradations than the VLC VOD framework and allowed us to easily stream a video flow at packet loss rates of up to 5\%. GStreamer has also enabled us to collect the relevant RTCP statistics that proved to be more accurate than network-deduced information. This dataset is freely available to the public. The accuracy of the statistics eventually helped us to generate better performing perceived quality estimation models. In this paper, we present the implementation of these VLC and GStreamer-based multimedia communication quality assessment testbeds with references to their publicly available code bases.
no_new_dataset
0.824462
1609.06647
Oriol Vinyals
Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan
Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge
arXiv admin note: substantial text overlap with arXiv:1411.4555
IEEE Transactions on Pattern Analysis and Machine Intelligence ( Volume: PP, Issue: 99 , July 2016 )
10.1109/TPAMI.2016.2587640
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. Finally, given the recent surge of interest in this task, a competition was organized in 2015 using the newly released COCO dataset. We describe and analyze the various improvements we applied to our own baseline and show the resulting performance in the competition, which we won ex-aequo with a team from Microsoft Research, and provide an open source implementation in TensorFlow.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 17:40:57 GMT" } ]
2016-09-22T00:00:00
[ [ "Vinyals", "Oriol", "" ], [ "Toshev", "Alexander", "" ], [ "Bengio", "Samy", "" ], [ "Erhan", "Dumitru", "" ] ]
TITLE: Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge ABSTRACT: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. Finally, given the recent surge of interest in this task, a competition was organized in 2015 using the newly released COCO dataset. We describe and analyze the various improvements we applied to our own baseline and show the resulting performance in the competition, which we won ex-aequo with a team from Microsoft Research, and provide an open source implementation in TensorFlow.
new_dataset
0.537929
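The recipe in the record above, a CNN image embedding seeding a recurrent language model trained to maximize caption likelihood, can be skeletonized in a few lines. This PyTorch sketch uses invented dimensions and a toy vocabulary, and omits beam search and the paper's TensorFlow specifics.

```python
# Bare-bones show-and-tell skeleton: project CNN features into the word
# embedding space, prepend them as the first LSTM input, and train with
# per-step cross-entropy against the ground-truth caption (teacher forcing).
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab=1000, emb=256, hid=512, img_feat=2048):
        super().__init__()
        self.img_proj = nn.Linear(img_feat, emb)  # CNN features -> LSTM input
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, img_feat, captions):
        # prepend the projected image as the first "word" of the sequence
        x = torch.cat([self.img_proj(img_feat).unsqueeze(1),
                       self.embed(captions)], dim=1)
        h, _ = self.lstm(x)
        return self.out(h)                        # per-step vocab logits

model = CaptionModel()
caps = torch.randint(0, 1000, (2, 12))            # toy caption token ids
logits = model(torch.randn(2, 2048), caps)        # (2, 13, 1000)
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 1000),
                             caps.reshape(-1))    # position i predicts token i
```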
1609.06653
Yi Zhu
Yi Zhu and Shawn Newsam
Land Use Classification using Convolutional Neural Networks Applied to Ground-Level Images
ACM SIGSPATIAL 2015, Best Poster Award
null
null
null
cs.CV cs.CY cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Land use mapping is a fundamental yet challenging task in geographic science. In contrast to land cover mapping, it is generally not possible using overhead imagery. The recent, explosive growth of online geo-referenced photo collections suggests an alternate approach to geographic knowledge discovery. In this work, we present a general framework that uses ground-level images from Flickr for land use mapping. Our approach benefits from several novel aspects. First, we address the noisiness of the online photo collections, such as imprecise geolocation and uneven spatial distribution, by performing location and indoor/outdoor filtering, and semi-supervised dataset augmentation. Our indoor/outdoor classifier achieves state-of-the-art performance on several benchmark datasets and approaches human-level accuracy. Second, we utilize high-level semantic image features extracted using deep learning, specifically convolutional neural networks, which allow us to achieve upwards of 76% accuracy on a challenging eight-class land use mapping problem.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 18:01:24 GMT" } ]
2016-09-22T00:00:00
[ [ "Zhu", "Yi", "" ], [ "Newsam", "Shawn", "" ] ]
TITLE: Land Use Classification using Convolutional Neural Networks Applied to Ground-Level Images ABSTRACT: Land use mapping is a fundamental yet challenging task in geographic science. In contrast to land cover mapping, it is generally not possible using overhead imagery. The recent, explosive growth of online geo-referenced photo collections suggests an alternate approach to geographic knowledge discovery. In this work, we present a general framework that uses ground-level images from Flickr for land use mapping. Our approach benefits from several novel aspects. First, we address the noisiness of the online photo collections, such as imprecise geolocation and uneven spatial distribution, by performing location and indoor/outdoor filtering, and semi-supervised dataset augmentation. Our indoor/outdoor classifier achieves state-of-the-art performance on several benchmark datasets and approaches human-level accuracy. Second, we utilize high-level semantic image features extracted using deep learning, specifically convolutional neural networks, which allow us to achieve upwards of 76% accuracy on a challenging eight-class land use mapping problem.
no_new_dataset
0.954984
1609.06657
Andrew Shin
Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada
The Color of the Cat is Gray: 1 Million Full-Sentences Visual Question Answering (FSVQA)
null
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Visual Question Answering (VQA) task has showcased a new stage of interaction between language and vision, two of the most pivotal components of artificial intelligence. However, it has mostly focused on generating short and repetitive answers, mostly single words, which fall short of the rich linguistic capabilities of humans. We introduce the Full-Sentence Visual Question Answering (FSVQA) dataset, consisting of nearly 1 million pairs of questions and full-sentence answers for images, built by applying a number of rule-based natural language processing techniques to the original VQA dataset and to captions in the MS COCO dataset. This poses many additional complexities to the conventional VQA task, and we provide a baseline for approaching and evaluating the task, on top of which we invite the research community to build further improvements.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 18:12:04 GMT" } ]
2016-09-22T00:00:00
[ [ "Shin", "Andrew", "" ], [ "Ushiku", "Yoshitaka", "" ], [ "Harada", "Tatsuya", "" ] ]
TITLE: The Color of the Cat is Gray: 1 Million Full-Sentences Visual Question Answering (FSVQA) ABSTRACT: The Visual Question Answering (VQA) task has showcased a new stage of interaction between language and vision, two of the most pivotal components of artificial intelligence. However, it has mostly focused on generating short and repetitive answers, mostly single words, which fall short of the rich linguistic capabilities of humans. We introduce the Full-Sentence Visual Question Answering (FSVQA) dataset, consisting of nearly 1 million pairs of questions and full-sentence answers for images, built by applying a number of rule-based natural language processing techniques to the original VQA dataset and to captions in the MS COCO dataset. This poses many additional complexities to the conventional VQA task, and we provide a baseline for approaching and evaluating the task, on top of which we invite the research community to build further improvements.
new_dataset
0.962603
1609.06668
Ziyue Xu
Mario Buty, Ziyue Xu, Mingchen Gao, Ulas Bagci, Aaron Wu, and Daniel J. Mollura
Characterization of Lung Nodule Malignancy using Hybrid Shape and Appearance Features
Accepted to MICCAI 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computed tomography imaging is a standard modality for detecting and assessing lung cancer. In order to evaluate the malignancy of lung nodules, clinical practice often involves expert qualitative ratings on several criteria describing a nodule's appearance and shape. Translating these features for computer-aided diagnostics is challenging due to their subjective nature and the difficulties in gaining a complete description. In this paper, we propose a computerized approach to quantitatively evaluate both appearance distinctions and 3D surface variations. Nodule shape was modeled and parameterized using spherical harmonics, and appearance features were extracted using deep convolutional neural networks. Both sets of features were combined to estimate the nodule malignancy using a random forest classifier. The proposed algorithm was tested on the publicly available Lung Image Database Consortium dataset, achieving high accuracy. By providing lung nodule characterization, this method can provide a robust alternative reference opinion for lung cancer diagnosis.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 18:33:56 GMT" } ]
2016-09-22T00:00:00
[ [ "Buty", "Mario", "" ], [ "Xu", "Ziyue", "" ], [ "Gao", "Mingchen", "" ], [ "Bagci", "Ulas", "" ], [ "Wu", "Aaron", "" ], [ "Mollura", "Daniel J.", "" ] ]
TITLE: Characterization of Lung Nodule Malignancy using Hybrid Shape and Appearance Features ABSTRACT: Computed tomography imaging is a standard modality for detecting and assessing lung cancer. In order to evaluate the malignancy of lung nodules, clinical practice often involves expert qualitative ratings on several criteria describing a nodule's appearance and shape. Translating these features for computer-aided diagnostics is challenging due to their subjective nature and the difficulties in gaining a complete description. In this paper, we propose a computerized approach to quantitatively evaluate both appearance distinctions and 3D surface variations. Nodule shape was modeled and parameterized using spherical harmonics, and appearance features were extracted using deep convolutional neural networks. Both sets of features were combined to estimate the nodule malignancy using a random forest classifier. The proposed algorithm was tested on the publicly available Lung Image Database Consortium dataset, achieving high accuracy. By providing lung nodule characterization, this method can provide a robust alternative reference opinion for lung cancer diagnosis.
no_new_dataset
0.955527
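The lung-nodule record above combines shape and appearance features under a random forest. A minimal sketch of that fusion step, with placeholder feature arrays standing in for the paper's spherical-harmonic and CNN features:

```python
# Sketch of the hybrid-feature classification step: concatenate shape
# descriptors with appearance features and classify with a random forest.
# The feature arrays below are random placeholders, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_nodules = 200
shape_feats = rng.normal(size=(n_nodules, 64))        # e.g., spherical-harmonic coefficients
appearance_feats = rng.normal(size=(n_nodules, 256))  # e.g., deep CNN activations
labels = rng.integers(0, 2, size=n_nodules)           # benign = 0 / malignant = 1

X = np.hstack([shape_feats, appearance_feats])        # hybrid feature vector
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```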
1609.06676
Li Sun
Li Sun, Steven Versteeg, Serdar Boztas and Asha Rao
Detecting Anomalous User Behavior Using an Extended Isolation Forest Algorithm: An Enterprise Case Study
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomalous user behavior detection is the core component of many information security systems, such as intrusion detection, insider threat detection and authentication systems. Anomalous behavior will raise an alarm to the system administrator and can be further combined with other information to determine whether it constitutes an unauthorized or malicious use of a resource. This paper presents an anomalous user behavior detection framework that applies an extended version of the Isolation Forest algorithm. Our method is fast and scalable and does not require example anomalies in the training data set. We apply our method to an enterprise dataset. The experimental results show that the system is able to isolate anomalous instances from the baseline user model using a single feature or combined features.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 18:44:48 GMT" } ]
2016-09-22T00:00:00
[ [ "Sun", "Li", "" ], [ "Versteeg", "Steven", "" ], [ "Boztas", "Serdar", "" ], [ "Rao", "Asha", "" ] ]
TITLE: Detecting Anomalous User Behavior Using an Extended Isolation Forest Algorithm: An Enterprise Case Study ABSTRACT: Anomalous user behavior detection is the core component of many information security systems, such as intrusion detection, insider threat detection and authentication systems. Anomalous behavior will raise an alarm to the system administrator and can be further combined with other information to determine whether it constitutes an unauthorized or malicious use of a resource. This paper presents an anomalous user behavior detection framework that applies an extended version of the Isolation Forest algorithm. Our method is fast and scalable and does not require example anomalies in the training data set. We apply our method to an enterprise dataset. The experimental results show that the system is able to isolate anomalous instances from the baseline user model using a single feature or combined features.
no_new_dataset
0.939637
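The record above extends Isolation Forest for unsupervised anomaly detection. As an illustrative baseline only (the paper's extended variant is not in standard libraries), the stock scikit-learn Isolation Forest applied to synthetic behavior features:

```python
# Baseline anomaly scoring with the standard Isolation Forest; no labeled
# anomalies are required for fitting, mirroring the setting in the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 4))    # baseline user-behavior features
outliers = rng.normal(6, 1, size=(10, 4))   # anomalous sessions
X = np.vstack([normal, outliers])

iso = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
iso.fit(X)
scores = iso.decision_function(X)           # lower score = more anomalous
print("flagged:", int((iso.predict(X) == -1).sum()))
```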
1609.06686
Sebastian Ruder
Sebastian Ruder, Parsa Ghaffari, John G. Breslin
Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution
9 pages, 5 figures, 3 tables
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to Reddit.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 19:08:15 GMT" } ]
2016-09-22T00:00:00
[ [ "Ruder", "Sebastian", "" ], [ "Ghaffari", "Parsa", "" ], [ "Breslin", "John G.", "" ] ]
TITLE: Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution ABSTRACT: Convolutional neural networks (CNNs) have demonstrated superior capability for extracting information from raw signals in computer vision. Recently, character-level and multi-channel CNNs have exhibited excellent performance for sentence classification tasks. We apply CNNs to large-scale authorship attribution, which aims to determine an unknown text's author among many candidate authors, motivated by their ability to process character-level signals and to differentiate between a large number of classes, while making fast predictions in comparison to state-of-the-art approaches. We extensively evaluate CNN-based approaches that leverage word and character channels and compare them against state-of-the-art methods for a large range of author numbers, shedding new light on traditional approaches. We show that character-level CNNs outperform the state-of-the-art on four out of five datasets in different domains. Additionally, we present the first application of authorship attribution to Reddit.
no_new_dataset
0.951863
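For the character-level CNN record above, a minimal sketch of such a classifier in PyTorch. Vocabulary size, channel widths and the number of candidate authors are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, vocab_size=70, embed_dim=16, n_authors=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)        # max-over-time pooling
        self.fc = nn.Linear(128, n_authors)

    def forward(self, char_ids):                   # char_ids: (batch, seq_len)
        x = self.embed(char_ids).transpose(1, 2)   # -> (batch, embed, seq)
        x = torch.relu(self.conv(x))
        return self.fc(self.pool(x).squeeze(-1))

logits = CharCNN()(torch.randint(0, 70, (8, 300)))
print(logits.shape)  # torch.Size([8, 50])
```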
1609.06694
Aayush Bansal
Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, Deva Ramanan
PixelNet: Towards a General Pixel-level Architecture
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore architectures for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though such approaches are computationally efficient, we point out that they are not statistically efficient during learning, precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that (1) stratified sampling allows us to add diversity during batch updates and (2) sampled multi-scale features allow us to explore more nonlinear predictors (multiple fully-connected layers followed by ReLU) that improve overall accuracy. Finally, our objective is to show how an architecture can achieve performance better than (or comparable to) the architectures designed for a particular task. Interestingly, our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context, surface normal estimation on the NYUDv2 dataset, and edge detection on BSDS without contextual post-processing.
[ { "version": "v1", "created": "Wed, 21 Sep 2016 19:32:46 GMT" } ]
2016-09-22T00:00:00
[ [ "Bansal", "Aayush", "" ], [ "Chen", "Xinlei", "" ], [ "Russell", "Bryan", "" ], [ "Gupta", "Abhinav", "" ], [ "Ramanan", "Deva", "" ] ]
TITLE: PixelNet: Towards a General Pixel-level Architecture ABSTRACT: We explore architectures for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though such approaches are computationally efficient, we point out that they are not statistically efficient during learning, precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that (1) stratified sampling allows us to add diversity during batch updates and (2) sampled multi-scale features allow us to explore more nonlinear predictors (multiple fully-connected layers followed by ReLU) that improve overall accuracy. Finally, our objective is to show how an architecture can achieve performance better than (or comparable to) the architectures designed for a particular task. Interestingly, our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context, surface normal estimation on the NYUDv2 dataset, and edge detection on BSDS without contextual post-processing.
no_new_dataset
0.946498
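The PixelNet record above argues for stratified pixel sampling instead of using every (spatially redundant) pixel. A sketch of per-class stratified sampling from a label map; sizes and the sampling budget are illustrative:

```python
import numpy as np

def stratified_pixel_sample(label_map, pixels_per_class=100, rng=None):
    """Draw up to pixels_per_class pixel coordinates from each class present."""
    rng = rng if rng is not None else np.random.default_rng(0)
    rows, cols = [], []
    for cls in np.unique(label_map):
        r, c = np.nonzero(label_map == cls)
        idx = rng.choice(len(r), size=min(pixels_per_class, len(r)), replace=False)
        rows.append(r[idx])
        cols.append(c[idx])
    return np.concatenate(rows), np.concatenate(cols)

labels = np.random.default_rng(0).integers(0, 5, size=(224, 224))
rr, cc = stratified_pixel_sample(labels)
print(len(rr))  # up to 5 classes * 100 pixels per batch
```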
1406.2431
Oren Anava
Oren Anava, Shahar Golan, Nadav Golbandi, Zohar Karnin, Ronny Lempel, Oleg Rokhlenko, Oren Somekh
Budget-Constrained Item Cold-Start Handling in Collaborative Filtering Recommenders via Optimal Design
11 pages, 2 figures
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well known that collaborative filtering (CF) based recommender systems provide better modeling of users and items associated with considerable rating history. The lack of historical ratings results in the user and the item cold-start problems. The latter is the main focus of this work. Most of the current literature addresses this problem by integrating content-based recommendation techniques to model the new item. However, in many cases such content is not available, and the question arises whether this problem can be mitigated using CF techniques only. We formalize this problem as an optimization problem: given a new item, a pool of available users, and a budget constraint, select which users to assign with the task of rating the new item in order to minimize the prediction error of our model. We show that the objective function is monotone-supermodular, and propose efficient optimal-design-based algorithms that attain an approximation to its optimum. Our findings are verified by an empirical study using the Netflix dataset, where the proposed algorithms outperform several baselines for the problem at hand.
[ { "version": "v1", "created": "Tue, 10 Jun 2014 06:17:23 GMT" }, { "version": "v2", "created": "Wed, 19 Nov 2014 21:10:43 GMT" }, { "version": "v3", "created": "Tue, 20 Sep 2016 09:51:02 GMT" } ]
2016-09-21T00:00:00
[ [ "Anava", "Oren", "" ], [ "Golan", "Shahar", "" ], [ "Golbandi", "Nadav", "" ], [ "Karnin", "Zohar", "" ], [ "Lempel", "Ronny", "" ], [ "Rokhlenko", "Oleg", "" ], [ "Somekh", "Oren", "" ] ]
TITLE: Budget-Constrained Item Cold-Start Handling in Collaborative Filtering Recommenders via Optimal Design ABSTRACT: It is well known that collaborative filtering (CF) based recommender systems provide better modeling of users and items associated with considerable rating history. The lack of historical ratings results in the user and the item cold-start problems. The latter is the main focus of this work. Most of the current literature addresses this problem by integrating content-based recommendation techniques to model the new item. However, in many cases such content is not available, and the question arises whether this problem can be mitigated using CF techniques only. We formalize this problem as an optimization problem: given a new item, a pool of available users, and a budget constraint, select which users to assign with the task of rating the new item in order to minimize the prediction error of our model. We show that the objective function is monotone-supermodular, and propose efficient optimal-design-based algorithms that attain an approximation to its optimum. Our findings are verified by an empirical study using the Netflix dataset, where the proposed algorithms outperform several baselines for the problem at hand.
no_new_dataset
0.946843
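The cold-start record above selects raters under a budget via optimal design. A simplified stand-in for that idea, assuming a classic D-optimality surrogate (greedily pick users whose latent-factor vectors most increase log det of the design matrix); this is not the paper's exact objective:

```python
import numpy as np

def greedy_design(user_factors, budget, lam=1e-3):
    """Greedy budget-constrained user selection via a log-det design criterion."""
    d = user_factors.shape[1]
    A = lam * np.eye(d)                 # regularized information matrix
    chosen = []
    for _ in range(budget):
        best, best_gain = None, -np.inf
        for u in range(len(user_factors)):
            if u in chosen:
                continue
            x = user_factors[u]
            gain = np.linalg.slogdet(A + np.outer(x, x))[1]
            if gain > best_gain:
                best, best_gain = u, gain
        chosen.append(best)
        A += np.outer(user_factors[best], user_factors[best])
    return chosen

U = np.random.default_rng(0).normal(size=(50, 8))  # latent factors of 50 users
print(greedy_design(U, budget=5))
```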
1505.04870
Bryan Plummer
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik
Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models
null
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.
[ { "version": "v1", "created": "Tue, 19 May 2015 04:46:03 GMT" }, { "version": "v2", "created": "Mon, 5 Oct 2015 22:17:45 GMT" }, { "version": "v3", "created": "Fri, 15 Apr 2016 14:58:37 GMT" }, { "version": "v4", "created": "Mon, 19 Sep 2016 20:20:42 GMT" } ]
2016-09-21T00:00:00
[ [ "Plummer", "Bryan A.", "" ], [ "Wang", "Liwei", "" ], [ "Cervantes", "Chris M.", "" ], [ "Caicedo", "Juan C.", "" ], [ "Hockenmaier", "Julia", "" ], [ "Lazebnik", "Svetlana", "" ] ]
TITLE: Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models ABSTRACT: The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.
no_new_dataset
0.935524
1509.04491
Philip Schniter
Evan Byrne and Philip Schniter
Sparse Multinomial Logistic Regression via Approximate Message Passing
null
null
10.1109/TSP.2016.2593691
null
cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR). First, we propose two algorithms based on the Hybrid Generalized Approximate Message Passing (HyGAMP) framework: one finds the maximum a posteriori (MAP) linear classifier and the other finds an approximation of the test-error-rate minimizing linear classifier. Then we design computationally simplified variants of these two algorithms. Next, we detail methods to tune the hyperparameters of their assumed statistical models using Stein's unbiased risk estimate (SURE) and expectation-maximization (EM), respectively. Finally, using both synthetic and real-world datasets, we demonstrate improved error-rate and runtime performance relative to existing state-of-the-art approaches to sparse MLR.
[ { "version": "v1", "created": "Tue, 15 Sep 2015 11:08:33 GMT" }, { "version": "v2", "created": "Tue, 14 Jun 2016 19:11:23 GMT" } ]
2016-09-21T00:00:00
[ [ "Byrne", "Evan", "" ], [ "Schniter", "Philip", "" ] ]
TITLE: Sparse Multinomial Logistic Regression via Approximate Message Passing ABSTRACT: For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR). First, we propose two algorithms based on the Hybrid Generalized Approximate Message Passing (HyGAMP) framework: one finds the maximum a posteriori (MAP) linear classifier and the other finds an approximation of the test-error-rate minimizing linear classifier. Then we design computationally simplified variants of these two algorithms. Next, we detail methods to tune the hyperparameters of their assumed statistical models using Stein's unbiased risk estimate (SURE) and expectation-maximization (EM), respectively. Finally, using both synthetic and real-world datasets, we demonstrate improved error-rate and runtime performance relative to existing state-of-the-art approaches to sparse MLR.
no_new_dataset
0.949153
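The AMP-based solvers in the record above are not in standard libraries; as a conventional point of comparison only, sparse multinomial logistic regression can be fit with an l1 penalty in scikit-learn (the 'saga' solver supports multinomial targets with l1):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=100, n_informative=10,
                           n_classes=4, random_state=0)
# Smaller C = stronger l1 penalty = sparser multi-class weight matrix.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
clf.fit(X, y)
print("nonzero weights:", int((clf.coef_ != 0).sum()))
```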
1509.08970
Priyadarshini Panda
Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan and Kaushik Roy
Energy-Efficient Object Detection using Semantic Decomposition
10 pages, 13 figures, 3 algorithms, Submitted to IEEE TVLSI(Under Review)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine-learning algorithms offer immense possibilities in the development of several cognitive applications. In fact, large-scale machine-learning classifiers now represent the state-of-the-art in a wide range of object detection/classification problems. However, the network complexities of large-scale classifiers present them as one of the most challenging and energy-intensive workloads across the computing spectrum. In this paper, we present a new approach to optimize the energy efficiency of object detection tasks using semantic decomposition to build a hierarchical classification framework. We observe that certain semantic features like color/texture are common across various images in real-world datasets for object detection applications. We exploit these common semantic features to distinguish the objects of interest from the remaining inputs (non-objects of interest) in a dataset at a lower computational effort. We propose a 2-stage hierarchical classification framework, with increasing levels of complexity, wherein the first stage is trained to recognize the broad representative semantic features relevant to the object of interest. The first stage rejects the input instances that do not have the representative features and passes only the relevant instances to the second stage. Our methodology thus allows us to reject certain information at lower complexity and utilize the full computational effort of a network only on a smaller fraction of inputs to perform detection. We use color and texture as distinctive traits to carry out several experiments for object detection. Our experiments on the Caltech101/CIFAR10 datasets show that the proposed method yields 1.93x/1.46x improvement in average energy, respectively, over the traditional single classifier model.
[ { "version": "v1", "created": "Tue, 29 Sep 2015 22:56:33 GMT" }, { "version": "v2", "created": "Tue, 12 Apr 2016 23:21:51 GMT" }, { "version": "v3", "created": "Tue, 20 Sep 2016 14:38:32 GMT" } ]
2016-09-21T00:00:00
[ [ "Panda", "Priyadarshini", "" ], [ "Venkataramani", "Swagath", "" ], [ "Sengupta", "Abhronil", "" ], [ "Raghunathan", "Anand", "" ], [ "Roy", "Kaushik", "" ] ]
TITLE: Energy-Efficient Object Detection using Semantic Decomposition ABSTRACT: Machine-learning algorithms offer immense possibilities in the development of several cognitive applications. In fact, large-scale machine-learning classifiers now represent the state-of-the-art in a wide range of object detection/classification problems. However, the network complexities of large-scale classifiers present them as one of the most challenging and energy-intensive workloads across the computing spectrum. In this paper, we present a new approach to optimize the energy efficiency of object detection tasks using semantic decomposition to build a hierarchical classification framework. We observe that certain semantic features like color/texture are common across various images in real-world datasets for object detection applications. We exploit these common semantic features to distinguish the objects of interest from the remaining inputs (non-objects of interest) in a dataset at a lower computational effort. We propose a 2-stage hierarchical classification framework, with increasing levels of complexity, wherein the first stage is trained to recognize the broad representative semantic features relevant to the object of interest. The first stage rejects the input instances that do not have the representative features and passes only the relevant instances to the second stage. Our methodology thus allows us to reject certain information at lower complexity and utilize the full computational effort of a network only on a smaller fraction of inputs to perform detection. We use color and texture as distinctive traits to carry out several experiments for object detection. Our experiments on the Caltech101/CIFAR10 datasets show that the proposed method yields 1.93x/1.46x improvement in average energy, respectively, over the traditional single classifier model.
no_new_dataset
0.949201
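The semantic-decomposition record above is a 2-stage cascade: a cheap first stage rejects obvious non-objects so that only a fraction of inputs pays for the expensive second stage. A sketch of the control flow, with placeholder classifiers standing in for the paper's networks:

```python
import numpy as np

def cascade_predict(x, cheap_stage, full_stage, reject_threshold=0.2):
    """Return a prediction, running full_stage only when the cheap stage passes."""
    p_object = cheap_stage(x)           # P(object of interest | coarse cues)
    if p_object < reject_threshold:
        return "non-object"             # rejected early at low energy cost
    return full_stage(x)                # full computational effort

cheap = lambda x: float(x.mean() > 0)   # placeholder coarse-cue scorer (e.g., color)
full = lambda x: "object"               # placeholder expensive classifier
print(cascade_predict(np.ones(10), cheap, full))
```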
1604.01125
Hang-Hyun Jo
Eun-Kyeong Kim and Hang-Hyun Jo
Measuring burstiness for finite event sequences
7 pages, 3 figures
Phys. Rev. E 94, 032311 (2016)
10.1103/PhysRevE.94.032311
null
physics.soc-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Characterizing inhomogeneous temporal patterns in natural and social phenomena is important for understanding the underlying mechanisms behind such complex systems, and hence even for predicting and controlling them. Temporal inhomogeneities in event sequences have been described in terms of bursts, i.e., rapidly occurring events in short time periods alternating with long inactive periods. The bursts can be quantified by a simple measure, called the burstiness parameter, which was introduced by Goh and Barab\'asi [EPL \textbf{81}, 48002 (2008)]. The burstiness parameter has been widely used due to its simplicity, which, however, turns out to be strongly affected by the finite number of events in the time series. As these finite-size effects on the burstiness parameter have been largely ignored, we analytically investigate them and suggest an alternative definition of burstiness that is free from finite-size effects and yet simple. Using our alternative burstiness measure, one can distinguish the finite-size effects from the intrinsic bursty properties in the time series. We also demonstrate the advantages of our burstiness measure by analyzing empirical datasets.
[ { "version": "v1", "created": "Tue, 5 Apr 2016 03:42:19 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2016 17:04:20 GMT" }, { "version": "v3", "created": "Mon, 19 Sep 2016 02:10:48 GMT" } ]
2016-09-21T00:00:00
[ [ "Kim", "Eun-Kyeong", "" ], [ "Jo", "Hang-Hyun", "" ] ]
TITLE: Measuring burstiness for finite event sequences ABSTRACT: Characterizing inhomogeneous temporal patterns in natural and social phenomena is important for understanding the underlying mechanisms behind such complex systems, and hence even for predicting and controlling them. Temporal inhomogeneities in event sequences have been described in terms of bursts, i.e., rapidly occurring events in short time periods alternating with long inactive periods. The bursts can be quantified by a simple measure, called the burstiness parameter, which was introduced by Goh and Barab\'asi [EPL \textbf{81}, 48002 (2008)]. The burstiness parameter has been widely used due to its simplicity, which, however, turns out to be strongly affected by the finite number of events in the time series. As these finite-size effects on the burstiness parameter have been largely ignored, we analytically investigate them and suggest an alternative definition of burstiness that is free from finite-size effects and yet simple. Using our alternative burstiness measure, one can distinguish the finite-size effects from the intrinsic bursty properties in the time series. We also demonstrate the advantages of our burstiness measure by analyzing empirical datasets.
no_new_dataset
0.946843
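The classic burstiness parameter of Goh and Barabási referenced in the record above is B = (s - m) / (s + m), computed from the mean m and standard deviation s of the inter-event times. Only the classic definition is reproduced here; the paper's finite-size-free alternative is not:

```python
import numpy as np

def burstiness(event_times):
    """Classic burstiness B = (s - m) / (s + m) of the inter-event times."""
    iet = np.diff(np.sort(event_times))   # inter-event times
    m, s = iet.mean(), iet.std()
    return (s - m) / (s + m)              # -1 regular, ~0 Poisson, +1 bursty

# A Poisson process has B close to 0; the finite-sample bias the paper
# analyzes makes the estimate drift for short sequences.
poisson_like = np.cumsum(np.random.default_rng(0).exponential(1.0, size=1000))
print(round(burstiness(poisson_like), 3))
```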
1604.07480
Arsalan Mousavian
Arsalan Mousavian, Hamed Pirsiavash, Jana Kosecka
Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-scale deep CNNs have been used successfully for problems mapping each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can be used for multiple tasks. These networks are typically trained independently for each task by varying the output layer(s) and training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task and then fine-tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore, we couple the deep CNN with a fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues, improving the accuracy of the final results. The proposed model is trained and evaluated on the NYUDepth V2 dataset, outperforming the state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation.
[ { "version": "v1", "created": "Mon, 25 Apr 2016 23:58:00 GMT" }, { "version": "v2", "created": "Thu, 8 Sep 2016 15:10:54 GMT" }, { "version": "v3", "created": "Mon, 19 Sep 2016 21:57:28 GMT" } ]
2016-09-21T00:00:00
[ [ "Mousavian", "Arsalan", "" ], [ "Pirsiavash", "Hamed", "" ], [ "Kosecka", "Jana", "" ] ]
TITLE: Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks ABSTRACT: Multi-scale deep CNNs have been used successfully for problems mapping each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can be used for multiple tasks. These networks are typically trained independently for each task by varying the output layer(s) and training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task and then fine-tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore, we couple the deep CNN with a fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues, improving the accuracy of the final results. The proposed model is trained and evaluated on the NYUDepth V2 dataset, outperforming the state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation.
no_new_dataset
0.948728
1606.05579
Irina Higgins
Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, Alexander Lerchner
Early Visual Concept Learning with Unsupervised Deep Learning
null
null
null
null
stat.ML cs.LG q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".
[ { "version": "v1", "created": "Fri, 17 Jun 2016 16:19:46 GMT" }, { "version": "v2", "created": "Mon, 19 Sep 2016 19:50:49 GMT" }, { "version": "v3", "created": "Tue, 20 Sep 2016 09:30:26 GMT" } ]
2016-09-21T00:00:00
[ [ "Higgins", "Irina", "" ], [ "Matthey", "Loic", "" ], [ "Glorot", "Xavier", "" ], [ "Pal", "Arka", "" ], [ "Uria", "Benigno", "" ], [ "Blundell", "Charles", "" ], [ "Mohamed", "Shakir", "" ], [ "Lerchner", "Alexander", "" ] ]
TITLE: Early Visual Concept Learning with Unsupervised Deep Learning ABSTRACT: Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".
no_new_dataset
0.947721
1608.00762
Han Gong
Han Gong, Darren P. Cosker
Interactive Removal and Ground Truth for Difficult Shadow Scenes
Accepted by JOSA A
null
10.1364/JOSAA.33.001798
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A user-centric method for fast, interactive, robust and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of non-uniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multi-scene category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed new algorithm to outperform the state-of-the-art across several measures and shadow categories. To complement our dataset, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
[ { "version": "v1", "created": "Tue, 2 Aug 2016 10:51:07 GMT" } ]
2016-09-21T00:00:00
[ [ "Gong", "Han", "" ], [ "Cosker", "Darren P.", "" ] ]
TITLE: Interactive Removal and Ground Truth for Difficult Shadow Scenes ABSTRACT: A user-centric method for fast, interactive, robust and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of non-uniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multi-scene category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed new algorithm to outperform the state-of-the-art across several measures and shadow categories. To complement our dataset, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
new_dataset
0.958538
1608.05571
Martin Danelljan
Martin Danelljan, Gustav H\"ager, Fahad Shahbaz Khan, Michael Felsberg
Learning Spatially Regularized Correlation Filters for Visual Tracking
ICCV 2015
International Conference on Computer Vision, (ICCV) 2015, pp. 4310-4318. IEEE (2015)
10.1109/ICCV.2015.490
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.
[ { "version": "v1", "created": "Fri, 19 Aug 2016 11:11:49 GMT" } ]
2016-09-21T00:00:00
[ [ "Danelljan", "Martin", "" ], [ "Häger", "Gustav", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Felsberg", "Michael", "" ] ]
TITLE: Learning Spatially Regularized Correlation Filters for Visual Tracking ABSTRACT: Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.
no_new_dataset
0.947769
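The SRDCF record above penalizes correlation-filter coefficients by spatial location. In schematic form (a sketch with notation simplified from the paper: training samples x_k with d feature channels and weights alpha_k, desired responses y_k, * denoting circular convolution, and a spatial weight function w that grows outside the target region), the regularized objective reads:

```latex
\varepsilon(f) \;=\; \sum_{k=1}^{t} \alpha_k \,
  \Big\| \sum_{l=1}^{d} x_k^{\,l} \ast f^{\,l} \;-\; y_k \Big\|^2
  \;+\; \sum_{l=1}^{d} \big\| \, w \cdot f^{\,l} \, \big\|^2
```

Because w assigns large penalties to coefficients far from the target center, background coefficients are suppressed and the filter can be trained on much larger image regions without the usual boundary effects.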
1609.05103
Martin Theobald
Maximilian Dylla, Martin Theobald
Learning Tuple Probabilities
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from labeled lineage formulas. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, cast it into an optimization problem, and provide an algorithm based on stochastic gradient descent. Finally, we conclude with an experimental evaluation on three real-world datasets and one synthetic dataset, comparing our approach to various techniques from SRL, reasoning in information extraction, and optimization.
[ { "version": "v1", "created": "Fri, 16 Sep 2016 15:16:25 GMT" }, { "version": "v2", "created": "Tue, 20 Sep 2016 06:36:11 GMT" } ]
2016-09-21T00:00:00
[ [ "Dylla", "Maximilian", "" ], [ "Theobald", "Martin", "" ] ]
TITLE: Learning Tuple Probabilities ABSTRACT: Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from labeled lineage formulas. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, cast it into an optimization problem, and provide an algorithm based on stochastic gradient descent. Finally, we conclude with an experimental evaluation on three real-world datasets and one synthetic dataset, comparing our approach to various techniques from SRL, reasoning in information extraction, and optimization.
no_new_dataset
0.945197
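A toy version of the inverse problem in the record above: fit base-tuple probabilities by gradient descent so that the marginal of a lineage formula matches its label. Here a single answer with lineage t1 AND t2 (marginal p1*p2 under tuple independence) is fit to a label of 0.3; real lineages are arbitrary formulas:

```python
import numpy as np

p = np.array([0.9, 0.9])           # initial probabilities of tuples t1, t2
target, lr = 0.3, 0.3              # labeled marginal of the query answer
for _ in range(500):
    marginal = p[0] * p[1]         # P(t1 AND t2) assuming tuple independence
    # Gradient of the squared error (marginal - target)^2 w.r.t. p1, p2.
    grad = 2 * (marginal - target) * np.array([p[1], p[0]])
    p = np.clip(p - lr * grad, 1e-6, 1 - 1e-6)
print(p, p[0] * p[1])              # the product converges toward 0.3
```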
1609.06018
Junxuan Chen
Junxuan Chen, Baigui Sun, Hao Li, Hongtao Lu, Xian-Sheng Hua
Deep CTR Prediction in Display Advertising
This manuscript is the accepted version for ACM Multimedia Conference 2016
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Click through rate (CTR) prediction of image ads is the core task of online display advertising systems, and logistic regression (LR) has been frequently applied as the prediction model. However, the LR model lacks the ability to extract complex and intrinsic nonlinear features from handcrafted high-dimensional image features, which limits its effectiveness. To solve this issue, in this paper, we introduce a novel deep neural network (DNN) based model that directly predicts the CTR of an image ad based on raw image pixels and other basic features in one step. The DNN model employs convolution layers to automatically extract representative visual features from images, and nonlinear CTR features are then learned from visual features and other contextual features by using fully-connected layers. Empirical evaluations on a real-world dataset with over 50 million records demonstrate the effectiveness and efficiency of this method.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 04:50:03 GMT" } ]
2016-09-21T00:00:00
[ [ "Chen", "Junxuan", "" ], [ "Sun", "Baigui", "" ], [ "Li", "Hao", "" ], [ "Lu", "Hongtao", "" ], [ "Hua", "Xian-Sheng", "" ] ]
TITLE: Deep CTR Prediction in Display Advertising ABSTRACT: Click through rate (CTR) prediction of image ads is the core task of online display advertising systems, and logistic regression (LR) has been frequently applied as the prediction model. However, the LR model lacks the ability to extract complex and intrinsic nonlinear features from handcrafted high-dimensional image features, which limits its effectiveness. To solve this issue, in this paper, we introduce a novel deep neural network (DNN) based model that directly predicts the CTR of an image ad based on raw image pixels and other basic features in one step. The DNN model employs convolution layers to automatically extract representative visual features from images, and nonlinear CTR features are then learned from visual features and other contextual features by using fully-connected layers. Empirical evaluations on a real-world dataset with over 50 million records demonstrate the effectiveness and efficiency of this method.
no_new_dataset
0.951051
1609.06082
Yitong Li
Yitong Li and Trevor Cohn and Timothy Baldwin
Learning Robust Representations of Text
5 pages with 2 pages reference, 2 tables, 1 figure
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have achieved remarkable results across many language processing tasks; however, these methods are highly sensitive to noise and adversarial attacks. We present a regularization-based method for limiting network sensitivity to its inputs, inspired by ideas from computer vision, thus learning models that are more robust. Empirical evaluation over a range of sentiment datasets with a convolutional neural network shows that, compared to a baseline model and the dropout method, our method achieves superior performance over noisy inputs and out-of-domain data.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 10:23:47 GMT" } ]
2016-09-21T00:00:00
[ [ "Li", "Yitong", "" ], [ "Cohn", "Trevor", "" ], [ "Baldwin", "Timothy", "" ] ]
TITLE: Learning Robust Representations of Text ABSTRACT: Deep neural networks have achieved remarkable results across many language processing tasks; however, these methods are highly sensitive to noise and adversarial attacks. We present a regularization-based method for limiting network sensitivity to its inputs, inspired by ideas from computer vision, thus learning models that are more robust. Empirical evaluation over a range of sentiment datasets with a convolutional neural network shows that, compared to a baseline model and the dropout method, our method achieves superior performance over noisy inputs and out-of-domain data.
no_new_dataset
0.947332
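One standard instantiation of limiting input sensitivity, as the record above describes, is to penalize the norm of the loss gradient with respect to the (embedded) input. A PyTorch sketch of that penalty; the model and weighting here are placeholders, not necessarily the paper's exact regularizer:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(50, 2)                 # stand-in for a text CNN
x = torch.randn(8, 50, requires_grad=True)     # embedded input batch
y = torch.randint(0, 2, (8,))

task_loss = F.cross_entropy(model(x), y)
# Gradient of the loss w.r.t. the inputs, kept in the graph so the penalty
# itself is differentiable w.r.t. the model parameters.
(grad,) = torch.autograd.grad(task_loss, x, create_graph=True)
loss = task_loss + 0.1 * grad.pow(2).sum()     # input-sensitivity penalty
loss.backward()                                # gradients flow to model params
print(float(loss))
```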
1609.06118
Martin Danelljan
Martin Danelljan, Gustav H\"ager, Fahad Shahbaz Khan, Michael Felsberg
Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking
CVPR 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be down-weighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets. Code and supplementary material are available at http://www.cvl.isy.liu.se/research/objrec/visualtracking/decontrack/index.html .
[ { "version": "v1", "created": "Tue, 20 Sep 2016 11:46:17 GMT" } ]
2016-09-21T00:00:00
[ [ "Danelljan", "Martin", "" ], [ "Häger", "Gustav", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Felsberg", "Michael", "" ] ]
TITLE: Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking ABSTRACT: Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be down-weighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets. Code and supplementary material are available at http://www.cvl.isy.liu.se/research/objrec/visualtracking/decontrack/index.html .
no_new_dataset
0.944125
1609.06127
Diana Al Jlailaty
Diana Jlailaty and Daniela Grigori and Khalid Belhajjame
A framework for mining process models from emails logs
18 pages, 6 figures
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to its wide use in personal, and most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers investigated the problem of extracting process-oriented information from email logs in order to take advantage of the many available process mining techniques and tools. In this paper we go further in this direction by proposing a new method for mining process models from email logs that leverages unsupervised machine learning techniques with little human involvement. Moreover, our method allows emails to be semi-automatically labeled with activity names, which can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution using a modest-sized, yet real-world, dataset containing emails that belong to two different process models.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 12:29:15 GMT" } ]
2016-09-21T00:00:00
[ [ "Jlailaty", "Diana", "" ], [ "Grigori", "Daniela", "" ], [ "Belhajjame", "Khalid", "" ] ]
TITLE: A framework for mining process models from emails logs ABSTRACT: Due to its wide use in personal, and most importantly, professional contexts, email represents a valuable source of information that can be harvested for understanding, reengineering and repurposing undocumented business processes of companies and institutions. Towards this aim, a few researchers investigated the problem of extracting process-oriented information from email logs in order to take advantage of the many available process mining techniques and tools. In this paper we go further in this direction by proposing a new method for mining process models from email logs that leverages unsupervised machine learning techniques with little human involvement. Moreover, our method allows emails to be semi-automatically labeled with activity names, which can be used for activity recognition in new incoming emails. A use case demonstrates the usefulness of the proposed solution using a modest-sized, yet real-world, dataset containing emails that belong to two different process models.
no_new_dataset
0.939858
1609.06141
Martin Danelljan
Martin Danelljan, Gustav H\"ager, Fahad Shahbaz Khan, Michael Felsberg
Discriminative Scale Space Tracking
To appear in TPAMI. This is the journal extension of the VOT2014-winning DSST tracking method
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles when confronted with large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale-adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach. Extensive experiments are performed on the OTB and the VOT2014 datasets. Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5% in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50% higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 12:57:08 GMT" } ]
2016-09-21T00:00:00
[ [ "Danelljan", "Martin", "" ], [ "Häger", "Gustav", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Felsberg", "Michael", "" ] ]
TITLE: Discriminative Scale Space Tracking ABSTRACT: Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles when confronted with large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale-adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach. Extensive experiments are performed on the OTB and the VOT2014 datasets. Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5% in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50% higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014.
no_new_dataset
0.952042
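The DSST record above learns a one-dimensional correlation filter over scales. A sketch of the core idea: features extracted at S scales form a 1-D signal, and a ridge-regularized filter is solved per frequency in the Fourier domain (MOSSE-style); the random feature signal is a placeholder for per-scale HOG features:

```python
import numpy as np

S, lam = 33, 1e-2
scales = np.arange(S) - S // 2
y = np.exp(-0.5 * (scales / 1.5) ** 2)          # desired Gaussian response over scales
x = np.random.default_rng(0).normal(size=S)     # placeholder per-scale feature signal

X, Y = np.fft.fft(x), np.fft.fft(y)
Hconj = Y * np.conj(X) / (X * np.conj(X) + lam) # per-frequency ridge solution
response = np.real(np.fft.ifft(Hconj * np.fft.fft(x)))
print("estimated scale offset:", int(response.argmax()) - S // 2)  # ~0 on training signal
```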
1609.06192
Florian Dubost
Florian Dubost, Loic Peter, Christian Rupprecht, Benjamin Gutierrez-Becker, Nassir Navab
Hands-Free Segmentation of Medical Volumes via Binary Inputs
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel hands-free method to interactively segment 3D medical volumes. In our scenario, a human user progressively segments an organ by answering a series of questions of the form "Is this voxel inside the object to segment?". At each iteration, the chosen question is defined as the one halving a set of candidate segmentations given the answered questions. For a quick and efficient exploration, these segmentations are sampled according to the Metropolis-Hastings algorithm. Our sampling technique relies on a combination of relaxed shape prior, learnt probability map and consistency with previous answers. We demonstrate the potential of our strategy on a prostate segmentation MRI dataset. Through the study of failure cases with synthetic examples, we demonstrate the adaptation potential of our method. We also show that our method outperforms two intuitive baselines: one based on random questions, the other one being the thresholded probability map.
[ { "version": "v1", "created": "Tue, 20 Sep 2016 14:18:40 GMT" } ]
2016-09-21T00:00:00
[ [ "Dubost", "Florian", "" ], [ "Peter", "Loic", "" ], [ "Rupprecht", "Christian", "" ], [ "Gutierrez-Becker", "Benjamin", "" ], [ "Navab", "Nassir", "" ] ]
TITLE: Hands-Free Segmentation of Medical Volumes via Binary Inputs ABSTRACT: We propose a novel hands-free method to interactively segment 3D medical volumes. In our scenario, a human user progressively segments an organ by answering a series of questions of the form "Is this voxel inside the object to segment?". At each iteration, the chosen question is defined as the one halving a set of candidate segmentations given the answered questions. For a quick and efficient exploration, these segmentations are sampled according to the Metropolis-Hastings algorithm. Our sampling technique relies on a combination of relaxed shape prior, learnt probability map and consistency with previous answers. We demonstrate the potential of our strategy on a prostate segmentation MRI dataset. Through the study of failure cases with synthetic examples, we demonstrate the adaptation potential of our method. We also show that our method outperforms two intuitive baselines: one based on random questions, the other one being the thresholded probability map.
no_new_dataset
0.948917
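The hands-free segmentation record above samples candidate segmentations with Metropolis-Hastings. A generic sketch of that loop with a symmetric random-walk proposal; the scalar state and score function are placeholders for the paper's combination of shape prior, learnt probability map and answer consistency:

```python
import numpy as np

def metropolis_hastings(score, propose, init, n_steps=1000, rng=None):
    """Sample states with MH; `score` is an unnormalized target density."""
    rng = rng if rng is not None else np.random.default_rng(0)
    current, samples = init, []
    for _ in range(n_steps):
        candidate = propose(current, rng)
        # Symmetric proposal: accept with probability min(1, score ratio).
        if rng.random() < min(1.0, score(candidate) / score(current)):
            current = candidate
        samples.append(current)
    return samples

score = lambda s: np.exp(-abs(s - 3.0))          # placeholder target density
propose = lambda s, rng: s + rng.normal(0, 0.5)  # random-walk proposal
print(np.mean(metropolis_hastings(score, propose, init=0.0)))  # concentrates near 3
```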
1512.00296
Vinay Jayaram
Vinay Jayaram, Morteza Alamgir, Yasemin Altun, Bernhard Sch\"olkopf, Moritz Grosse-Wentrup
Transfer Learning in Brain-Computer Interfaces
To be published in IEEE Computational Intelligence Magazine, special BCI issue on January 15th online
null
10.1109/MCI.2015.2501545
null
cs.HC q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The performance of brain-computer interfaces (BCIs) improves with the amount of available training data; the statistical distribution of this data, however, varies across subjects as well as across sessions within individual subjects, limiting the transferability of training data or trained models between them. In this article, we review current transfer learning techniques in BCIs that exploit shared structure between training data of multiple subjects and/or sessions to increase performance. We then present a framework for transfer learning in the context of BCIs that can be applied to any arbitrary feature space, as well as a novel regression estimation method that is specifically designed for the structure of a system based on the electroencephalogram (EEG). We demonstrate the utility of our framework and method on subject-to-subject transfer in a motor-imagery paradigm as well as on session-to-session transfer in one patient diagnosed with amyotrophic lateral sclerosis (ALS), showing that it is able to outperform other comparable methods on an identical dataset.
[ { "version": "v1", "created": "Tue, 1 Dec 2015 15:33:24 GMT" } ]
2016-09-20T00:00:00
[ [ "Jayaram", "Vinay", "" ], [ "Alamgir", "Morteza", "" ], [ "Altun", "Yasemin", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Grosse-Wentrup", "Moritz", "" ] ]
TITLE: Transfer Learning in Brain-Computer Interfaces ABSTRACT: The performance of brain-computer interfaces (BCIs) improves with the amount of available training data; the statistical distribution of this data, however, varies across subjects as well as across sessions within individual subjects, limiting the transferability of training data or trained models between them. In this article, we review current transfer learning techniques in BCIs that exploit shared structure between training data of multiple subjects and/or sessions to increase performance. We then present a framework for transfer learning in the context of BCIs that can be applied to any arbitrary feature space, as well as a novel regression estimation method that is specifically designed for the structure of a system based on the electroencephalogram (EEG). We demonstrate the utility of our framework and method on subject-to-subject transfer in a motor-imagery paradigm as well as on session-to-session transfer in one patient diagnosed with amyotrophic lateral sclerosis (ALS), showing that it is able to outperform other comparable methods on an identical dataset.
no_new_dataset
0.949482
1603.06679
Wenya Wang
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier and Xiaokui Xiao
Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis
null
null
null
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion term co-extraction. The proposed model learns high-level discriminative features and simultaneously propagates information between aspect and opinion terms in both directions. Moreover, hand-crafted features can be flexibly incorporated into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge.
[ { "version": "v1", "created": "Tue, 22 Mar 2016 05:59:00 GMT" }, { "version": "v2", "created": "Wed, 8 Jun 2016 06:24:06 GMT" }, { "version": "v3", "created": "Mon, 19 Sep 2016 14:00:43 GMT" } ]
2016-09-20T00:00:00
[ [ "Wang", "Wenya", "" ], [ "Pan", "Sinno Jialin", "" ], [ "Dahlmeier", "Daniel", "" ], [ "Xiao", "Xiaokui", "" ] ]
TITLE: Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis ABSTRACT: In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion term co-extraction. The proposed model learns high-level discriminative features and simultaneously propagates information between aspect and opinion terms in both directions. Moreover, hand-crafted features can be flexibly incorporated into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge.
no_new_dataset
0.949342
1603.08981
Shuang Li
Shuang Li, Yao Xie, Mehrdad Farajtabar, Apurv Verma, and Le Song
Detecting weak changes in dynamic events over networks
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large volumes of networked streaming event data are becoming increasingly available in a wide variety of applications, such as social network analysis, Internet traffic monitoring, and healthcare analytics. Streaming event data are discrete observations occurring in continuous time, and the precise time interval between two events carries a great deal of information about the dynamics of the underlying systems. How can changes in these dynamic systems be promptly detected from such streaming event data? In this paper, we propose a novel change-point detection framework for multi-dimensional event data over networks. We cast the problem as a sequential hypothesis test and derive the likelihood ratios for point processes, which are computed efficiently via an EM-like algorithm that is parameter-free and can be run in a distributed fashion. We derive a highly accurate theoretical characterization of the false-alarm rate, and show that weak signals can be detected by aggregating local statistics over time and across the network. Finally, we demonstrate the good performance of our algorithm on numerical examples and on real-world datasets from Twitter and Memetracker.
[ { "version": "v1", "created": "Tue, 29 Mar 2016 21:54:56 GMT" }, { "version": "v2", "created": "Fri, 16 Sep 2016 20:09:56 GMT" } ]
2016-09-20T00:00:00
[ [ "Li", "Shuang", "" ], [ "Xie", "Yao", "" ], [ "Farajtabar", "Mehrdad", "" ], [ "Verma", "Apurv", "" ], [ "Song", "Le", "" ] ]
TITLE: Detecting weak changes in dynamic events over networks ABSTRACT: Large volumes of networked streaming event data are becoming increasingly available in a wide variety of applications, such as social network analysis, Internet traffic monitoring, and healthcare analytics. Streaming event data are discrete observations occurring in continuous time, and the precise time interval between two events carries a great deal of information about the dynamics of the underlying systems. How can changes in these dynamic systems be promptly detected from such streaming event data? In this paper, we propose a novel change-point detection framework for multi-dimensional event data over networks. We cast the problem as a sequential hypothesis test and derive the likelihood ratios for point processes, which are computed efficiently via an EM-like algorithm that is parameter-free and can be run in a distributed fashion. We derive a highly accurate theoretical characterization of the false-alarm rate, and show that weak signals can be detected by aggregating local statistics over time and across the network. Finally, we demonstrate the good performance of our algorithm on numerical examples and on real-world datasets from Twitter and Memetracker.
no_new_dataset
0.946151