Column schema (dataset viewer summary: field name, type, value range):

  id              stringlengths   9 to 10
  submitter       stringlengths   2 to 52
  authors         stringlengths   4 to 6.51k
  title           stringlengths   4 to 246
  comments        stringlengths   1 to 523
  journal-ref     stringlengths   4 to 345
  doi             stringlengths   11 to 120
  report-no       stringlengths   2 to 243
  categories      stringlengths   5 to 98
  license         stringclasses   9 values
  abstract        stringlengths   33 to 3.33k
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      stringclasses   1 value
  probability     float64         0.95 to 1
1605.07852
Javid Dadashkarimi
Javid Dadashkarimi, Hossein Nasr Esfahani, Heshaam Faili, and Azadeh Shakery
SS4MCT: A Statistical Stemmer for Morphologically Complex Texts
null
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There have been multiple attempts to resolve various inflection matching problems in information retrieval. Stemming is a common approach to this end. Among many techniques for stemming, statistical stemming has been shown to be effective in a number of languages, particularly highly inflected languages. In this paper we propose a method for finding affixes in different positions of a word. Common statistical techniques heavily rely on string similarity in terms of prefix and suffix matching. Since infixes are common in irregular/informal inflections in morphologically complex texts, infixes also need to be identified for stemming. The proposed method aims to find statistical inflectional rules based on the minimum edit distance table of word pairs and the likelihoods of the rules in a language. These rules are used to statistically stem words and can be used in different text mining tasks. Experimental results on the CLEF 2008 and CLEF 2009 English-Persian CLIR tasks indicate that the proposed method significantly outperforms all the baselines in terms of MAP.
[ { "version": "v1", "created": "Wed, 25 May 2016 12:25:26 GMT" }, { "version": "v2", "created": "Mon, 20 Jun 2016 21:37:19 GMT" } ]
2016-06-22T00:00:00
[ [ "Dadashkarimi", "Javid", "" ], [ "Esfahani", "Hossein Nasr", "" ], [ "Faili", "Heshaam", "" ], [ "Shakery", "Azadeh", "" ] ]
new_dataset
0.998578
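As a rough illustration of the rule extraction idea described in the SS4MCT abstract above (deriving candidate inflectional rules from the minimum edit distance table of a word pair), the sketch below fills a standard Levenshtein DP table, backtraces it into edit operations, and counts those operations over a corpus of word pairs as relative frequencies. The rule representation, function names, and the toy word pairs are assumptions made for illustration only; the paper's exact rule format and likelihood estimation are not specified in the abstract.

```python
from collections import Counter

def edit_ops(a, b):
    """Backtrace a standard Levenshtein DP table and return edit operations.

    Each operation is a tuple (kind, position, from_char, to_char); this
    representation of a candidate rule is an illustrative assumption."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match / substitution
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] and a[i - 1] == b[j - 1]:
            i, j = i - 1, j - 1                      # characters match, no rule
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            ops.append(("sub", i - 1, a[i - 1], b[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", i - 1, a[i - 1], "")); i -= 1
        else:
            ops.append(("ins", i, "", b[j - 1])); j -= 1
    return ops[::-1]

def rule_likelihoods(word_pairs):
    """Count edit operations over (inflected, stem) pairs and normalise the
    counts into relative frequencies serving as rule 'likelihoods'."""
    counts = Counter(op for a, b in word_pairs for op in edit_ops(a, b))
    total = sum(counts.values()) or 1
    return {rule: c / total for rule, c in counts.items()}

if __name__ == "__main__":
    pairs = [("running", "run"), ("ran", "run")]   # toy English stand-ins
    for rule, p in rule_likelihoods(pairs).items():
        print(rule, round(p, 3))
```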
1606.06369
Annamalai Narayanan
Annamalai Narayanan, Guozhu Meng, Liu Yang, Jinliang Liu and Lihui Chen
Contextual Weisfeiler-Lehman Graph Kernel For Malware Detection
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs, it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods. We observe that state-of-the-art graph kernels, such as the Weisfeiler-Lehman kernel (WLK), capture the structural information well but fail to capture contextual information. To address this, we develop the Contextual Weisfeiler-Lehman kernel (CWLK) which is capable of capturing both these types of information. We show that for the malware detection problem, CWLK is more expressive and hence more accurate than WLK while maintaining comparable efficiency. Through our large-scale experiments with more than 50,000 real-world Android apps, we demonstrate that CWLK outperforms two state-of-the-art graph kernels (including WLK) and three malware detection techniques by more than 5.27% and 4.87% F-measure, respectively, while maintaining high efficiency. This high accuracy and efficiency make CWLK suitable for large-scale real-world malware detection.
[ { "version": "v1", "created": "Tue, 21 Jun 2016 00:02:45 GMT" } ]
2016-06-22T00:00:00
[ [ "Narayanan", "Annamalai", "" ], [ "Meng", "Guozhu", "" ], [ "Yang", "Liu", "" ], [ "Liu", "Jinliang", "" ], [ "Chen", "Lihui", "" ] ]
new_dataset
0.999191
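For context on the record above: the Weisfeiler-Lehman kernel that CWLK extends compares graphs by iteratively relabelling each node with a compressed signature of its own label and its neighbours' labels, then counting label occurrences. The sketch below shows plain WL relabelling plus a naive "contextual" variant that simply prefixes each initial node label with a per-node context tag. The actual CWLK construction is not described in the abstract, so the context handling here is purely an assumed placeholder.

```python
from collections import Counter

def wl_relabel(adj, labels, iterations=2):
    """Iteratively compress (label, sorted neighbour labels) into new labels,
    as in the Weisfeiler-Lehman subtree kernel, and return label counts.
    Hashing is only stable within a single process run."""
    counts = Counter(labels)
    for _ in range(iterations):
        new_labels = []
        for v, neigh in enumerate(adj):
            signature = (labels[v], tuple(sorted(labels[u] for u in neigh)))
            new_labels.append(str(hash(signature)))   # label compression
        labels = new_labels
        counts.update(labels)
    return counts

def contextual_labels(labels, context):
    """Assumed placeholder: fold a per-node context tag into the initial labels."""
    return [f"{c}|{l}" for l, c in zip(labels, context)]

def kernel(counts_a, counts_b):
    """Linear kernel on the label-count feature maps of two graphs."""
    return sum(counts_a[k] * counts_b[k] for k in counts_a if k in counts_b)

if __name__ == "__main__":
    # Two toy 3-node graphs given as adjacency lists.
    g1 = [[1], [0, 2], [1]]
    g2 = [[1, 2], [0], [0]]
    l1 = contextual_labels(["api", "api", "io"], ["user", "user", "system"])
    l2 = contextual_labels(["api", "io", "io"], ["user", "system", "system"])
    print(kernel(wl_relabel(g1, l1), wl_relabel(g2, l2)))
```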
1606.06376
EPTCS
Tristan Crolard
A verified abstract machine for functional coroutines
In Proceedings WoC 2015, arXiv:1606.05839
EPTCS 212, 2016, pp. 1-17
10.4204/EPTCS.212.1
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional coroutines are a restricted form of control mechanism, where each coroutine is represented with both a continuation and an environment. This restriction was originally obtained by considering a constructive version of Parigot's classical natural deduction which is sound and complete for the Constant Domain logic. In this article, we present a refinement of de Groote's abstract machine for functional coroutines and we prove its correctness. Therefore, this abstract machine also provides a direct computational interpretation of the Constant Domain logic.
[ { "version": "v1", "created": "Tue, 21 Jun 2016 00:45:32 GMT" } ]
2016-06-22T00:00:00
[ [ "Crolard", "Tristan", "" ] ]
new_dataset
0.970201
1606.06428
Wei Zhao
Wei Zhao, Xilin Tang, Ze Gu
All $\alpha+u\beta$-constacyclic codes of length $np^{s}$ over $\mathbb{F}_{p^{m}}+u\mathbb{F}_{p^{m}}$
arXiv admin note: text overlap with arXiv:1512.01406 by other authors
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $\mathbb{F}_{p^{m}}$ be a finite field with cardinality $p^{m}$ and $R=\mathbb{F}_{p^{m}}+u\mathbb{F}_{p^{m}}$ with $u^{2}=0$. We aim to determine all $\alpha+u\beta$-constacyclic codes of length $np^{s}$ over $R$, where $\alpha,\beta\in\mathbb{F}_{p^{m}}^{*}$, $n, s\in\mathbb{N}_{+}$ and $\gcd(n,p)=1$. Let $\alpha_{0}\in\mathbb{F}_{p^{m}}^{*}$ and $\alpha_{0}^{p^{s}}=\alpha$. The residue ring $R[x]/\langle x^{np^{s}}-\alpha-u\beta\rangle$ is a chain ring with the maximal ideal $\langle x^{n}-\alpha_{0}\rangle$ in the case that $x^{n}-\alpha_{0}$ is irreducible in $\mathbb{F}_{p^{m}}[x]$. If $x^{n}-\alpha_{0}$ is reducible in $\mathbb{F}_{p^{m}}[x]$, we give the explicit expressions of the ideals of $R[x]/\langle x^{np^{s}}-\alpha-u\beta\rangle$. Besides, the number of codewords and the dual code of every $\alpha+u\beta$-constacyclic code are provided.
[ { "version": "v1", "created": "Tue, 21 Jun 2016 05:20:55 GMT" } ]
2016-06-22T00:00:00
[ [ "Zhao", "Wei", "" ], [ "Tang", "Xilin", "" ], [ "Gu", "Ze", "" ] ]
new_dataset
0.98658
1606.06483
Ho-Cheung Ng
Ho-Cheung Ng, Cheng Liu, Hayden Kwok-Hay So
A Soft Processor Overlay with Tightly-coupled FPGA Accelerator
Presented at 2nd International Workshop on Overlay Architectures for FPGAs (OLAF 2016) arXiv:1605.08149
null
null
OLAF/2016/07
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FPGA overlays are commonly implemented as coarse-grained reconfigurable architectures with a goal to improve designers' productivity through balancing flexibility and ease of configuration of the underlying fabric. To truly facilitate full application acceleration, it is often necessary to also include a highly efficient processor that integrates and collaborates with the accelerators while maintaining the benefits of being implemented within the same overlay framework. This paper presents an open-source soft processor that is designed to tightly-couple with FPGA accelerators as part of an overlay framework. RISC-V is chosen as the instruction set for its openness and portability, and the soft processor is designed as a 4-stage pipeline to balance resource consumption and performance when implemented on FPGAs. The processor is generically implemented so as to promote design portability and compatibility across different FPGA platforms. Experimental results show that integrated software-hardware applications using the proposed tightly-coupled architecture achieve comparable performance as hardware-only accelerators while the proposed architecture provides additional run-time flexibility. The processor has been synthesized to both low-end and high-performance FPGA families from different vendors, achieving the highest frequency of 268.67MHz and resource consumption comparable to existing RISC-V designs.
[ { "version": "v1", "created": "Tue, 21 Jun 2016 09:00:33 GMT" } ]
2016-06-22T00:00:00
[ [ "Ng", "Ho-Cheung", "" ], [ "Liu", "Cheng", "" ], [ "So", "Hayden Kwok-Hay", "" ] ]
new_dataset
0.951309
1606.06653
Francesco Grassi
Francesco Grassi, Nathanael Perraudin, Benjamin Ricaud
Tracking Time-Vertex Propagation using Dynamic Graph Wavelets
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Signal Processing generalizes classical signal processing to signals or data indexed by the vertices of a weighted graph. So far, the research efforts have been focused on static graph signals. However, numerous applications involve graph signals evolving in time, such as spreading or propagation of waves on a network. The analysis of this type of data requires a new set of methods that fully takes into account the time and graph dimensions. We propose a novel class of wavelet frames named Dynamic Graph Wavelets, whose time-vertex evolution follows a dynamic process. We demonstrate that this set of functions can be combined with sparsity based approaches such as compressive sensing to reveal information on the dynamic processes occurring on a graph. Experiments on real seismological data show the efficiency of the technique, making it possible to estimate the epicenter of earthquake events recorded by a seismic network.
[ { "version": "v1", "created": "Tue, 21 Jun 2016 16:48:25 GMT" } ]
2016-06-22T00:00:00
[ [ "Grassi", "Francesco", "" ], [ "Perraudin", "Nathanael", "" ], [ "Ricaud", "Benjamin", "" ] ]
new_dataset
0.985932
1507.02357
Jeremy Kepner
Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Lauren Edwards, Vijay Gadepally, Matthew Hubbell, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther
Lustre, Hadoop, Accumulo
6 pages; accepted to IEEE High Performance Extreme Computing conference, Waltham, MA, 2015
null
10.1109/HPEC.2015.7322476
null
cs.DC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data processing systems impose multiple views on data as it is processed by the system. These views include spreadsheets, databases, matrices, and graphs. There are a wide variety of technologies that can be used to store and process data through these different steps. The Lustre parallel file system, the Hadoop distributed file system, and the Accumulo database are all designed to address the largest and the most challenging data storage problems. There have been many ad-hoc comparisons of these technologies. This paper describes the foundational principles of each technology, provides simple models for assessing their capabilities, and compares the various technologies on a hypothetical common cluster. These comparisons indicate that Lustre provides 2x more storage capacity, is less likely to lose data during 3 simultaneous drive failures, and provides higher bandwidth on general purpose workloads. Hadoop can provide 4x greater read bandwidth on special purpose workloads. Accumulo provides 10,000x lower latency on random lookups than either Lustre or Hadoop but Accumulo's bulk bandwidth is 10x less. Significant recent work has been done to enable mix-and-match solutions that allow Lustre, Hadoop, and Accumulo to be combined in different ways.
[ { "version": "v1", "created": "Thu, 9 Jul 2015 03:00:06 GMT" } ]
2016-06-21T00:00:00
[ [ "Kepner", "Jeremy", "" ], [ "Arcand", "William", "" ], [ "Bestor", "David", "" ], [ "Bergeron", "Bill", "" ], [ "Byun", "Chansup", "" ], [ "Edwards", "Lauren", "" ], [ "Gadepally", "Vijay", "" ], [ "Hubbell", "Matthew", "" ], [ "Michaleas", "Peter", "" ], [ "Mullen", "Julie", "" ], [ "Prout", "Andrew", "" ], [ "Rosa", "Antonio", "" ], [ "Yee", "Charles", "" ], [ "Reuther", "Albert", "" ] ]
new_dataset
0.962591
1511.04897
Daniel Gruss
Moritz Lipp, Daniel Gruss, Raphael Spreitzer, Cl\'ementine Maurice, Stefan Mangard
ARMageddon: Cache Attacks on Mobile Devices
Original publication in the Proceedings of the 25th Annual USENIX Security Symposium (USENIX Security 2016). https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/lipp
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last 10 years, cache attacks on Intel x86 CPUs have gained increasing attention among the scientific community and powerful techniques to exploit cache side channels have been developed. However, modern smartphones use one or more multi-core ARM CPUs that have a different cache organization and instruction set than Intel x86 CPUs. So far, no cross-core cache attacks have been demonstrated on non-rooted Android smartphones. In this work, we demonstrate how to solve key challenges to perform the most powerful cross-core cache attacks Prime+Probe, Flush+Reload, Evict+Reload, and Flush+Flush on non-rooted ARM-based devices without any privileges. Based on our techniques, we demonstrate covert channels that outperform state-of-the-art covert channels on Android by several orders of magnitude. Moreover, we present attacks to monitor tap and swipe events as well as keystrokes, and even derive the lengths of words entered on the touchscreen. Eventually, we are the first to attack cryptographic primitives implemented in Java. Our attacks work across CPUs and can even monitor cache activity in the ARM TrustZone from the normal world. The techniques we present can be used to attack hundreds of millions of Android devices.
[ { "version": "v1", "created": "Mon, 16 Nov 2015 10:24:33 GMT" }, { "version": "v2", "created": "Sun, 19 Jun 2016 18:37:47 GMT" } ]
2016-06-21T00:00:00
[ [ "Lipp", "Moritz", "" ], [ "Gruss", "Daniel", "" ], [ "Spreitzer", "Raphael", "" ], [ "Maurice", "Clémentine", "" ], [ "Mangard", "Stefan", "" ] ]
new_dataset
0.999831
1512.09163
Gordon Love
Paul V. Johnson, Jared A.Q. Parnell, Joowan Kim, Christopher D. Saunter, Gordon D. Love, Martin S. Banks
Dynamic lens and monovision 3D displays to improve viewer comfort
null
null
10.1364/OE.24.011808
null
cs.HC physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect causing mismatches between accommodation and vergence: The eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques.
[ { "version": "v1", "created": "Wed, 30 Dec 2015 21:59:14 GMT" } ]
2016-06-21T00:00:00
[ [ "Johnson", "Paul V.", "" ], [ "Parnell", "Jared A. Q.", "" ], [ "Kim", "Joowan", "" ], [ "Saunter", "Christopher D.", "" ], [ "Love", "Gordon D.", "" ], [ "Banks", "Martin S.", "" ] ]
new_dataset
0.995436
1601.07140
Andreas Veit
Andreas Veit and Tomas Matera and Lukas Neumann and Jiri Matas and Serge Belongie
COCO-Text: Dataset and Benchmark for Text Detection and Recognition in Natural Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the COCO-Text dataset. In recent years large-scale datasets like SUN and Imagenet drove the advancement of scene understanding and object recognition. The goal of COCO-Text is to advance the state of the art in text detection and recognition in natural images. The dataset is based on the MS COCO dataset, which contains images of complex everyday scenes. The images were not collected with text in mind and thus contain a broad variety of text instances. To reflect the diversity of text in natural scenes, we annotate text with (a) location in terms of a bounding box, (b) fine-grained classification into machine printed text and handwritten text, (c) classification into legible and illegible text, (d) script of the text and (e) transcriptions of legible text. The dataset contains over 173k text annotations in over 63k images. We provide a statistical analysis of the accuracy of our annotations. In addition, we present an analysis of three leading state-of-the-art photo Optical Character Recognition (OCR) approaches on our dataset. While scene text detection and recognition has enjoyed strong advances in recent years, we identify significant shortcomings that motivate future work.
[ { "version": "v1", "created": "Tue, 26 Jan 2016 19:30:34 GMT" }, { "version": "v2", "created": "Sun, 19 Jun 2016 23:52:14 GMT" } ]
2016-06-21T00:00:00
[ [ "Veit", "Andreas", "" ], [ "Matera", "Tomas", "" ], [ "Neumann", "Lukas", "" ], [ "Matas", "Jiri", "" ], [ "Belongie", "Serge", "" ] ]
new_dataset
0.999881
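Given the annotation attributes listed in the COCO-Text abstract above (bounding box, machine-printed vs. handwritten, legibility, script, transcription), a downstream consumer typically filters annotations before training or evaluation. The dictionary keys used in this sketch are hypothetical stand-ins, not the dataset's actual JSON schema, which the abstract does not specify.

```python
def legible_machine_printed(annotations):
    """Filter a list of text annotations down to legible, machine-printed
    instances that carry a transcription.

    Each annotation is assumed to be a dict with hypothetical keys
    'bbox', 'legibility', 'class', and 'utf8_string'."""
    return [
        ann for ann in annotations
        if ann.get("legibility") == "legible"
        and ann.get("class") == "machine printed"
        and ann.get("utf8_string")
    ]

if __name__ == "__main__":
    toy = [
        {"bbox": [10, 20, 50, 14], "legibility": "legible",
         "class": "machine printed", "utf8_string": "EXIT"},
        {"bbox": [3, 7, 40, 12], "legibility": "illegible",
         "class": "handwritten", "utf8_string": ""},
    ]
    print(legible_machine_printed(toy))
```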
1602.04256
Yihan Gao
Yihan Gao, Aditya Parameswaran
Squish: Near-Optimal Compression for Archival of Relational Datasets
null
null
10.1145/2939672.2939867
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50\% in storage size relative to systems developed in prior work on a variety of real datasets.
[ { "version": "v1", "created": "Fri, 12 Feb 2016 22:46:57 GMT" }, { "version": "v2", "created": "Sun, 19 Jun 2016 16:09:39 GMT" } ]
2016-06-21T00:00:00
[ [ "Gao", "Yihan", "" ], [ "Parameswaran", "Aditya", "" ] ]
new_dataset
0.95214
1606.05660
Gwena\"el Richomme
Michelangelo Bucci, Gwena\"el Richomme
Greedy palindromic lengths
null
null
null
null
cs.FL cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In [A. Frid, S. Puzynina, L.Q. Zamboni, \textit{On palindromic factorization of words}, Adv. in Appl. Math. 50 (2013), 737-748], it was conjectured that any infinite word whose palindromic lengths of factors are bounded is ultimately periodic. We introduce variants of this conjecture and prove this conjecture in particular cases. Notably, we introduce left and right greedy palindromic lengths. These lengths are always greater than or equal to the initial palindromic length. When the greedy left (or right) palindromic lengths of prefixes of a word are bounded, then this word is ultimately periodic.
[ { "version": "v1", "created": "Fri, 17 Jun 2016 20:10:35 GMT" } ]
2016-06-21T00:00:00
[ [ "Bucci", "Michelangelo", "" ], [ "Richomme", "Gwenaël", "" ] ]
new_dataset
0.9909
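To make the notion in the abstract above concrete, here is a small sketch of one plausible reading of the "left greedy palindromic length": repeatedly remove the longest palindromic prefix of the word and count how many removals are needed. Whether this matches the paper's exact definition is an assumption; the code is only meant to illustrate the greedy factorisation idea on prefixes of a word.

```python
def left_greedy_palindromic_length(w):
    """Greedily strip the longest palindromic prefix and count the factors.

    Assumed reading of the 'left greedy' variant; the paper's formal
    definition may differ."""
    count = 0
    while w:
        # Longest palindromic prefix (quadratic scan; fine for small examples).
        k = max(i for i in range(1, len(w) + 1) if w[:i] == w[:i][::-1])
        w = w[k:]
        count += 1
    return count

if __name__ == "__main__":
    word = "abaabaaba" * 3   # prefix of an (ultimately) periodic word
    # Bounded values over all prefixes are what the conjecture relates to periodicity.
    print([left_greedy_palindromic_length(word[:n]) for n in range(1, len(word) + 1)])
```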
1606.05759
Nadir Durrani Dr
Hassan Sajjad, Nadir Durrani, Francisco Guzman, Preslav Nakov, Ahmed Abdelali, Stephan Vogel, Wael Salloum, Ahmed El Kholy, Nizar Habash
Egyptian Arabic to English Statistical Machine Translation System for NIST OpenMT'2015
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes the Egyptian Arabic-to-English statistical machine translation (SMT) system that the QCRI-Columbia-NYUAD (QCN) group submitted to the NIST OpenMT'2015 competition. The competition focused on informal dialectal Arabic, as used in SMS, chat, and speech. Thus, our efforts focused on processing and standardizing Arabic, e.g., using tools such as 3arrib and MADAMIRA. We further trained a phrase-based SMT system using state-of-the-art features and components such as operation sequence model, class-based language model, sparse features, neural network joint model, genre-based hierarchically-interpolated language model, unsupervised transliteration mining, phrase-table merging, and hypothesis combination. Our system ranked second on all three genres.
[ { "version": "v1", "created": "Sat, 18 Jun 2016 14:34:07 GMT" } ]
2016-06-21T00:00:00
[ [ "Sajjad", "Hassan", "" ], [ "Durrani", "Nadir", "" ], [ "Guzman", "Francisco", "" ], [ "Nakov", "Preslav", "" ], [ "Abdelali", "Ahmed", "" ], [ "Vogel", "Stephan", "" ], [ "Salloum", "Wael", "" ], [ "Kholy", "Ahmed El", "" ], [ "Habash", "Nizar", "" ] ]
new_dataset
0.977735
1606.05814
Kyle Krafka
Kyle Krafka and Aditya Khosla and Petr Kellnhofer and Harini Kannan and Suchendra Bhandarkar and Wojciech Matusik and Antonio Torralba
Eye Tracking for Everyone
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at http://gazecapture.csail.mit.edu.
[ { "version": "v1", "created": "Sat, 18 Jun 2016 23:53:54 GMT" } ]
2016-06-21T00:00:00
[ [ "Krafka", "Kyle", "" ], [ "Khosla", "Aditya", "" ], [ "Kellnhofer", "Petr", "" ], [ "Kannan", "Harini", "" ], [ "Bhandarkar", "Suchendra", "" ], [ "Matusik", "Wojciech", "" ], [ "Torralba", "Antonio", "" ] ]
new_dataset
0.999416
1606.05859
Shenglin Zhao
Shenglin Zhao, Tong Zhao, Irwin King and Michael R. Lyu
GT-SEER: Geo-Temporal SEquential Embedding Rank for Point-of-interest Recommendation
null
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or a Markov chain model, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the \textit{{\textbf{SE}}}quential \textit{{\textbf{E}}}mbedding \textit{{\textbf{R}}}ank (\textit{SEER}) model for POI recommendation. In particular, the \textit{SEER} model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the \textit{SEER} model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the \textit{T}emporal \textit{SEER} (\textit{T-SEER}) model. In addition, we incorporate the geographical influence into the \textit{T-SEER} model and develop the \textit{\textbf{Geo-Temporal}} \textit{{\textbf{SEER}}} (\textit{GT-SEER}) model.
[ { "version": "v1", "created": "Sun, 19 Jun 2016 11:45:57 GMT" } ]
2016-06-21T00:00:00
[ [ "Zhao", "Shenglin", "" ], [ "Zhao", "Tong", "" ], [ "King", "Irwin", "" ], [ "Lyu", "Michael R.", "" ] ]
new_dataset
0.992474
1606.05910
Daniel Doerr
Daniel Doerr, Pedro Feijao, Metin Balaban, Cedric Chauve
The gene family-free median of three
null
null
null
null
cs.DS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The gene family-free framework for comparative genomics aims at developing methods for gene order analysis that do not require prior gene family assignment, but work directly on a sequence similarity multipartite graph. We present a model for constructing a median of three genomes in this family-free setting, based on maximizing an objective function that generalizes the classical breakpoint distance by integrating sequence similarity in the score of a gene adjacency. We show that the corresponding computational problem is MAX SNP-hard and we present a 0-1 linear program for its exact solution. The result of our FF-median program is a median genome with median genes associated to extant genes, in which median adjacencies are assumed to define positional orthologs. We demonstrate through simulations and comparison with the OMA orthology database that the herein presented method is able to compute accurate medians and positional orthologs for genomes comparable in size to bacterial genomes.
[ { "version": "v1", "created": "Sun, 19 Jun 2016 21:21:30 GMT" } ]
2016-06-21T00:00:00
[ [ "Doerr", "Daniel", "" ], [ "Feijao", "Pedro", "" ], [ "Balaban", "Metin", "" ], [ "Chauve", "Cedric", "" ] ]
new_dataset
0.984557
1606.05927
Andrew Connor
Wilson S. Siringoringo, Andy M. Connor, Nick Clements and Nick Alexander
Minimum cost polygon overlay with rectangular shape stock panels
null
International Journal of Construction Education & Research, 4(3), 1-24 (2008)
10.1080/15578770802494516
null
cs.NE cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Minimum Cost Polygon Overlay (MCPO) is a unique two-dimensional optimization problem that involves the task of covering a polygon-shaped area with a series of rectangular-shaped panels. This has a number of applications in the construction industry. This work examines the MCPO problem in order to construct a model that captures essential parameters of the problem to be solved automatically using numerical optimization algorithms. Three algorithms have been implemented for the actual optimization task: the greedy search, the Monte Carlo (MC) method, and the Genetic Algorithm (GA). Results are presented to show the relative effectiveness of the algorithms. This is followed by a critical analysis of various findings of this research.
[ { "version": "v1", "created": "Sun, 19 Jun 2016 23:50:15 GMT" } ]
2016-06-21T00:00:00
[ [ "Siringoringo", "Wilson S.", "" ], [ "Connor", "Andy M.", "" ], [ "Clements", "Nick", "" ], [ "Alexander", "Nick", "" ] ]
new_dataset
0.958962
1606.05940
EPTCS
Tony Garnock-Jones (Northeastern University, Boston, USA)
From Events to Reactions: A Progress Report
In Proceedings PLACES 2016, arXiv:1606.05403
EPTCS 211, 2016, pp. 46-55
10.4204/EPTCS.211.5
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Syndicate is a new coordinated, concurrent programming language. It occupies a novel point on the spectrum between the shared-everything paradigm of threads and the shared-nothing approach of actors. Syndicate actors exchange messages and share common knowledge via a carefully controlled database that clearly scopes conversations. This approach clearly simplifies coordination of concurrent activities. Experience in programming with Syndicate, however, suggests a need to raise the level of linguistic abstraction. In addition to writing event handlers and managing event subscriptions directly, the language will have to support a reactive style of programming. This paper presents event-oriented Syndicate programming and then describes a preliminary design for augmenting it with new reactive programming constructs.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 01:09:16 GMT" } ]
2016-06-21T00:00:00
[ [ "Garnock-Jones", "Tony", "", "Northeastern University, Boston, USA" ] ]
new_dataset
0.96973
1606.05954
Si-Hyeon Lee
Si-Hyeon Lee and Ashish Khisti
The Wiretapped Diamond-Relay Channel
30 pages, 4 figures, Submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a diamond-relay channel where the source is connected to $M$ relays through orthogonal links and the relays transmit to the destination over a wireless multiple-access channel in the presence of an eavesdropper. The eavesdropper not only observes the relay transmissions through another multiple-access channel, but also observes a certain number of source-relay links. The legitimate terminals know neither the eavesdropper's channel state information nor the location of source-relay links revealed to the eavesdropper except the total number of such links. For this wiretapped diamond-relay channel, we establish the optimal secure degrees of freedom. In the achievability part, our proposed scheme uses the source-relay links to transmit a judiciously constructed combination of message symbols, artificial noise symbols as well as fictitious message symbols associated with secure network coding. The relays use a combination of beamforming and interference alignment in their transmission scheme. For the converse part, we take a genie-aided approach assuming that the location of wiretapped links is known.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 02:00:02 GMT" } ]
2016-06-21T00:00:00
[ [ "Lee", "Si-Hyeon", "" ], [ "Khisti", "Ashish", "" ] ]
new_dataset
0.995456
1606.05978
Da-Cheng Juan
Da-Cheng Juan, Neil Shah, Mingyu Tang, Zhiliang Qian, Diana Marculescu, Christos Faloutsos
M3A: Model, MetaModel, and Anomaly Detection in Web Searches
10 pages, 10 figures, 3 tables
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
'Alice' is submitting one web search per five minutes, for three hours in a row - is it normal? How to detect abnormal search behaviors, among Alice and other users? Is there any distinct pattern in Alice's (or other users') search behavior? We studied what is probably the largest, publicly available, query log that contains more than 30 million queries from 0.6 million users. In this paper, we present a novel, user- and group-level framework, M3A: Model, MetaModel and Anomaly detection. For each user, we discover and explain a surprising, bi-modal pattern of the inter-arrival time (IAT) of landed queries (queries with user click-through). Specifically, the model Camel-Log is proposed to describe such an IAT distribution; we then notice the correlations among its parameters at the group level. Thus, we further propose the metamodel Meta-Click, to capture and explain the two-dimensional, heavy-tail distribution of the parameters. Combining Camel-Log and Meta-Click, the proposed M3A has the following strong points: (1) the accurate modeling of marginal IAT distribution, (2) quantitative interpretations, and (3) anomaly detection.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 05:46:47 GMT" } ]
2016-06-21T00:00:00
[ [ "Juan", "Da-Cheng", "" ], [ "Shah", "Neil", "" ], [ "Tang", "Mingyu", "" ], [ "Qian", "Zhiliang", "" ], [ "Marculescu", "Diana", "" ], [ "Faloutsos", "Christos", "" ] ]
new_dataset
0.998799
1606.06031
Sandro Pezzelle
Denis Paperno (1), Germ\'an Kruszewski (1), Angeliki Lazaridou (1), Quan Ngoc Pham (1), Raffaella Bernardi (1), Sandro Pezzelle (1), Marco Baroni (1), Gemma Boleda (1), Raquel Fern\'andez (2) ((1) CIMeC - Center for Mind/Brain Sciences, University of Trento, (2) Institute for Logic, Language & Computation, University of Amsterdam)
The LAMBADA dataset: Word prediction requiring a broad discourse context
10 pages, Accepted as a long paper for ACL 2016
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 09:37:17 GMT" } ]
2016-06-21T00:00:00
[ [ "Paperno", "Denis", "" ], [ "Kruszewski", "Germán", "" ], [ "Lazaridou", "Angeliki", "" ], [ "Pham", "Quan Ngoc", "" ], [ "Bernardi", "Raffaella", "" ], [ "Pezzelle", "Sandro", "" ], [ "Baroni", "Marco", "" ], [ "Boleda", "Gemma", "" ], [ "Fernández", "Raquel", "" ] ]
new_dataset
0.992511
1606.06041
Jialin Liu Ph.D
Jialin Liu, Diego P\'erez-Li\'ebana, Simon M. Lucas
Bandit-Based Random Mutation Hill-Climbing
7 pages, 10 figures
null
null
null
cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Random Mutation Hill-Climbing algorithm is a direct search technique mostly used in discrete domains. It repeats the process of randomly selecting a neighbour of a best-so-far solution and accepts the neighbour if it is better than or equal to it. In this work, we propose to use a novel method to select the neighbour solution using a set of independent multi-armed bandit-style selection units, which results in a bandit-based Random Mutation Hill-Climbing algorithm. The new algorithm significantly outperforms Random Mutation Hill-Climbing in both OneMax (in noise-free and noisy cases) and Royal Road problems (in the noise-free case). The algorithm shows particular promise for discrete optimisation problems where each fitness evaluation is expensive.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 09:53:29 GMT" } ]
2016-06-21T00:00:00
[ [ "Liu", "Jialin", "" ], [ "Peŕez-Liebana", "Diego", "" ], [ "Lucas", "Simon M.", "" ] ]
new_dataset
0.994834
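The record above describes a bandit-based variant of Random Mutation Hill-Climbing. The abstract does not spell out the design of the selection units, so the sketch below makes an assumption: one UCB1-style bandit arm per bit position of a OneMax bit string, rewarded whenever flipping that position improved fitness. It is meant only to illustrate how bandit-style neighbour selection could replace uniform random mutation, not to reproduce the paper's algorithm.

```python
import math
import random

def onemax(bits):
    """Fitness of a bit string: number of ones."""
    return sum(bits)

def bandit_rmhc(n=30, budget=3000, c=1.4, seed=0):
    """RMHC where the position to mutate is picked by a UCB1-style bandit over
    bit positions (an assumed instantiation of the 'selection units')."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    pulls, rewards = [0] * n, [0.0] * n
    for t in range(1, budget + 1):
        # UCB1 score: favour positions with high average reward or few pulls.
        i = max(range(n), key=lambda j: float("inf") if pulls[j] == 0
                else rewards[j] / pulls[j] + c * math.sqrt(math.log(t) / pulls[j]))
        y = x[:]
        y[i] ^= 1                       # mutation restricted to the chosen position
        fy = onemax(y)
        improved = fy > fx
        if fy >= fx:                    # RMHC acceptance: better than or equal to best-so-far
            x, fx = y, fy
        pulls[i] += 1
        rewards[i] += 1.0 if improved else 0.0
        if fx == n:
            return t, fx
    return budget, fx

if __name__ == "__main__":
    evaluations, best = bandit_rmhc()
    print(f"fitness {best} reached after {evaluations} evaluations")
```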
1606.06136
Muhammad Nur Pratama
Daryl Haris Antoni Junior, Muhammad Nur Pratama, Yusrina Nur Dini, Ary Setijadi Prihatmanto
Desain dan Implementasi Sistem Digital Assistant Berbasis Google Glass pada Rumah Sakit
7 pages, in Indonesian. arXiv admin note: substantial text overlap with arXiv:1606.06129
null
10.13140/RG.2.1.4781.2726
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to improve hospital performance, one needed solution is a system that can facilitate functional medical staff in the discharge of their duties. This paper discusses the design and implementation of a digital assistant system designed for functional medical staff at the hospital. The system is implemented on Google Glass, PC, tablet, and smartphone devices. The Google Glass device is equipped with two modules: a patient data module with a face recognition feature and a communication module with a live streaming feature. Tablet devices and mobile phones are equipped with a personnel module and a medical record display module. Tablet devices are also equipped with a prescription writing feature. All processes on the four devices require a web service to be executed. The implementation results indicate that the four software components were built according to the design and integrated with the HIS (Hospital Information System) database via the web service. ---- Demi meningkatkan kinerja rumah sakit, salah satu solusi yang diperlukan adalah sebuah sistem yang dapat mempermudah staf medik fungsional dalam menunaikan tugasnya. Makalah ini membahas mengenai desain dan implementasi sistem digital assistant yang dirancang untuk staf medik fungsional di rumah sakit. Sistem ini diimplementasikan pada perangkat keras Google Glass, PC, tablet, dan ponsel pintar. Perangkat Google Glass dilengkapi dengan dua fitur yaitu face recognition dan live streaming. Perangkat tablet dan ponsel dilengkapi dengan modul personalia dan modul tampilan rekam medis. Perangkat tablet juga dilengkapi dengan fitur penulisan resep. Seluruh proses pada keempat perangkat tersebut membutuhkan web service untuk dapat dieksekusi. Hasil implementasi mengindikasikan bahwa keempat perangkat lunak telah sesuai dengan rancangan dan terintegrasikan dengan database SIRS tingkat rumah sakit melalui web service.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 14:24:12 GMT" } ]
2016-06-21T00:00:00
[ [ "Junior", "Daryl Haris Antoni", "" ], [ "Pratama", "Muhammad Nur", "" ], [ "Dini", "Yusrina Nur", "" ], [ "Prihatmanto", "Ary Setijadi", "" ] ]
new_dataset
0.980657
1606.06257
Xu Chen
Xu Chen and Xiaowen Gong and Lei Yang and Junshan Zhang
Amazon in the White Space: Social Recommendation Aided Distributed Spectrum Access
Xu Chen, Xiaowen Gong, Lei Yang, and Junshan Zhang, "Amazon in the White Space: Social Recommendation Aided Distributed Spectrum Access," IEEE/ACM Transactions Networking, 2016
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed spectrum access (DSA) is challenging since an individual secondary user often has limited sensing capabilities only. One key insight is that channel recommendation among secondary users can help to take advantage of the inherent correlation structure of spectrum availability in both time and space, and enable users to obtain more informed spectrum opportunities. With this insight, we advocate to leverage the wisdom of crowds, and devise social recommendation aided DSA mechanisms to orient secondary users to make more intelligent spectrum access decisions, for both strong and weak network information cases. We start with the strong network information case where secondary users have the statistical information. To mitigate the difficulty due to the curse of dimensionality in the stochastic game approach, we take the one-step Nash approach and cast the social recommendation aided DSA decision making problem at each time slot as a strategic game. We show that it is a potential game, and then devise an algorithm to achieve the Nash equilibrium by exploiting its finite improvement property. For the weak information case where secondary users do not have the statistical information, we develop a distributed reinforcement learning mechanism for social recommendation aided DSA based on the local observations of secondary users only. Appealing to the maximum-norm contraction mapping, we also derive the conditions under which the distributed mechanism converges and characterize the equilibrium therein. Numerical results reveal that the proposed social recommendation aided DSA mechanisms can achieve superior performance using real social data traces and its performance loss in the weak network information case is insignificant, compared with the strong network information case.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 19:20:37 GMT" } ]
2016-06-21T00:00:00
[ [ "Chen", "Xu", "" ], [ "Gong", "Xiaowen", "" ], [ "Yang", "Lei", "" ], [ "Zhang", "Junshan", "" ] ]
new_dataset
0.988057
1405.6162
Alan Gray
Alan Gray and Kevin Stratford
targetDP: an Abstraction of Lattice Based Parallelism with Portable Performance
4 pages, 1 figure, to appear in proceedings of HPCC 2014 conference
2014 IEEE Intl Conf on High Performance Computing and Communications, 2014
10.1109/HPCC.2014.212
null
cs.DC hep-lat physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To achieve high performance on modern computers, it is vital to map algorithmic parallelism to that inherent in the hardware. From an application developer's perspective, it is also important that code can be maintained in a portable manner across a range of hardware. Here we present targetDP (target Data Parallel), a lightweight programming layer that allows the abstraction of data parallelism for applications that employ structured grids. A single source code may be used to target both thread level parallelism (TLP) and instruction level parallelism (ILP) on either SIMD multi-core CPUs or GPU-accelerated platforms. targetDP is implemented via standard C preprocessor macros and library functions, can be added to existing applications incrementally, and can be combined with higher-level paradigms such as MPI. We present CPU and GPU performance results for a benchmark taken from the lattice Boltzmann application that motivated this work. These demonstrate not only performance portability, but also the optimisation resulting from the intelligent exposure of ILP.
[ { "version": "v1", "created": "Tue, 29 Apr 2014 07:51:59 GMT" }, { "version": "v2", "created": "Thu, 31 Jul 2014 13:45:44 GMT" } ]
2016-06-20T00:00:00
[ [ "Gray", "Alan", "" ], [ "Stratford", "Kevin", "" ] ]
new_dataset
0.998907
1507.01145
Tad Hogg
Tad Hogg
Energy Dissipation by Metamorphic Micro-Robots in Viscous Fluids
corrected typos
J. of Micro-Bio Robotics 11:85-95 (2016)
10.1007/s12213-015-0086-3
null
cs.RO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microscopic robots could perform tasks with high spatial precision, such as acting on precisely-targeted cells in biological tissues. Some tasks may benefit from robots that change shape, such as elongating to improve chemical gradient sensing or contracting to squeeze through narrow channels. This paper evaluates the energy dissipation for shape-changing (i.e., metamorphic) robots whose size is comparable to bacteria. Unlike larger robots, surface forces dominate the dissipation. Theoretical estimates indicate that the power likely to be available to the robots, as determined by previous studies, is sufficient to change shape fairly rapidly even in highly-viscous biological fluids. Achieving this performance will require significant improvements in manufacturing and material properties compared to current micromachines. Furthermore, optimally varying the speed of shape change only slightly reduces energy use compared to uniform speed, thereby simplifying robot controllers.
[ { "version": "v1", "created": "Sat, 4 Jul 2015 21:09:25 GMT" }, { "version": "v2", "created": "Fri, 17 Jun 2016 00:29:30 GMT" } ]
2016-06-20T00:00:00
[ [ "Hogg", "Tad", "" ] ]
new_dataset
0.961466
1606.03556
Abhishek Das
Abhishek Das, Harsh Agrawal, C. Lawrence Zitnick, Devi Parikh, Dhruv Batra
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
9 pages, 6 figures, 3 tables; Under review at EMNLP 2016
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We conduct large-scale studies on `human attention' in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Overall, our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans.
[ { "version": "v1", "created": "Sat, 11 Jun 2016 05:41:10 GMT" }, { "version": "v2", "created": "Fri, 17 Jun 2016 04:39:01 GMT" } ]
2016-06-20T00:00:00
[ [ "Das", "Abhishek", "" ], [ "Agrawal", "Harsh", "" ], [ "Zitnick", "C. Lawrence", "" ], [ "Parikh", "Devi", "" ], [ "Batra", "Dhruv", "" ] ]
new_dataset
0.989669
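The VQA-HAT record above mentions comparing model and human attention maps quantitatively via rank-order correlation. A minimal sketch of that kind of evaluation is shown below, using SciPy's Spearman rank correlation on flattened attention maps; the map shapes and the toy data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def attention_rank_correlation(model_map, human_map):
    """Spearman rank correlation between two attention maps of the same shape.

    Both maps are 2D arrays of non-negative importance scores; comparing ranks
    ignores absolute scale, which is why rank correlation is a natural choice."""
    model_map = np.asarray(model_map, dtype=float).ravel()
    human_map = np.asarray(human_map, dtype=float).ravel()
    rho, _ = spearmanr(model_map, human_map)
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    human = rng.random((14, 14))
    model = human + 0.3 * rng.random((14, 14))   # correlated toy "model" map
    print(round(attention_rank_correlation(model, human), 3))
```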
1606.05413
Chenchen Zhu
Chenchen Zhu, Yutong Zheng, Khoa Luu, Marios Savvides
CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust face detection in the wild is one of the ultimate components to support various facial related problems, i.e. unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades with various commercial applications, it still meets problems in some real-world scenarios due to numerous challenges, e.g. heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Similar to the region-based CNNs, our proposed network consists of the region proposal component and the region-of-interest (RoI) detection component. However, unlike those networks, our proposed network contains two main contributions that play a significant role in achieving state-of-the-art performance in face detection. Firstly, the multi-scale information is grouped both in region proposal and RoI detection to deal with tiny face regions. Secondly, our proposed network allows explicit body contextual reasoning in the network inspired by the intuition of the human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e. the WIDER FACE Dataset which contains a high degree of variability, as well as the Face Detection Dataset and Benchmark (FDDB). The experimental results show that our proposed approach trained on the WIDER FACE Dataset outperforms strong baselines on the WIDER FACE Dataset by a large margin, and consistently achieves competitive results on FDDB against the recent state-of-the-art face detection methods.
[ { "version": "v1", "created": "Fri, 17 Jun 2016 03:19:09 GMT" } ]
2016-06-20T00:00:00
[ [ "Zhu", "Chenchen", "" ], [ "Zheng", "Yutong", "" ], [ "Luu", "Khoa", "" ], [ "Savvides", "Marios", "" ] ]
new_dataset
0.952459
1606.05477
Seyed Hossein Ahmadpanah
Seyed Hossein Ahmadpanah, Abdullah Jafari Chashmi, Majidreza Yadollahi
4G Mobile Communication Systems: Key Technology and Evolution
3rd National Conference on Computer Engineering and IT Management , Tehran , June 02,2016
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
With third-generation mobile communication systems gradually being deployed worldwide, the future development of mobile communications has become a hot topic. This paper introduces the fourth-generation mobile communication system, its performance and network structure, and key technologies such as OFDM, software-defined radio, smart antennas, and IPv6. It then analyzes the relationship between the 4G and 3G mobile communication systems and offers a prospect on the evolution of communication systems.
[ { "version": "v1", "created": "Fri, 17 Jun 2016 11:08:27 GMT" } ]
2016-06-20T00:00:00
[ [ "Ahmadpanah", "Seyed Hossein", "" ], [ "Chashmi", "Abdullah Jafari", "" ], [ "Yadollahi", "Majidreza", "" ] ]
new_dataset
0.995691
1606.05562
Renato J Cintra
T. L. T. da Silveira, F. M. Bayer, R. J. Cintra, S. Kulasekera, A. Madanayake, A. J. Kozakevicius
An Orthogonal 16-point Approximate DCT for Image and Video Compression
18 pages, 7 figures, 6 tables
Multidimensional Systems and Signal Processing, vol. 27, no. 1, pp. 87-104, 2016
10.1007/s11045-014-0291-6
null
cs.IT cs.AR cs.NA math.IT stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A low-complexity orthogonal multiplierless approximation for the 16-point discrete cosine transform (DCT) was introduced. The proposed method was designed to possess a very low computational cost. A fast algorithm based on matrix factorization was proposed requiring only 60 additions. The proposed architecture outperforms classical and state-of-the-art algorithms when assessed as a tool for image and video compression. Digital VLSI hardware implementations were also proposed being physically realized in FPGA technology and implemented in 45 nm up to synthesis and place-route levels. Additionally, the proposed method was embedded into a high efficiency video coding (HEVC) reference software for actual proof-of-concept. Obtained results show negligible video degradation when compared to Chen DCT algorithm in HEVC.
[ { "version": "v1", "created": "Fri, 27 May 2016 01:19:46 GMT" } ]
2016-06-20T00:00:00
[ [ "da Silveira", "T. L. T.", "" ], [ "Bayer", "F. M.", "" ], [ "Cintra", "R. J.", "" ], [ "Kulasekera", "S.", "" ], [ "Madanayake", "A.", "" ], [ "Kozakevicius", "A. J.", "" ] ]
new_dataset
0.988917
1606.05576
Kleomenis Katevas
Kleomenis Katevas, Hamed Haddadi and Laurissa Tokarchuk
SensingKit: Evaluating the Sensor Power Consumption in iOS devices
4 pages, 2 figures, 3 tables. To be published in the 12th International Conference on Intelligent Environments (IE'16)
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today's smartphones come equipped with a range of advanced sensors capable of sensing motion, orientation, audio as well as environmental data with high accuracy. With the existence of application distribution channels such as the Apple App Store and the Google Play Store, researchers can distribute applications and collect large scale data in ways that previously were not possible. Motivated by the lack of a universal, multi-platform sensing library, in this work we present the design and implementation of SensingKit, an open-source continuous sensing system that supports both iOS and Android mobile devices. One of the unique features of SensingKit is the support of the latest beacon technologies based on Bluetooth Smart (BLE), such as iBeacon and Eddystone. We evaluate and compare the power consumption of each supported sensor individually, using an iPhone 5S device running on iOS 9. We believe that this platform will be beneficial to all researchers and developers who plan to use mobile sensing technology in large-scale experiments.
[ { "version": "v1", "created": "Fri, 17 Jun 2016 16:09:46 GMT" } ]
2016-06-20T00:00:00
[ [ "Katevas", "Kleomenis", "" ], [ "Haddadi", "Hamed", "" ], [ "Tokarchuk", "Laurissa", "" ] ]
new_dataset
0.998294
1606.05614
Cristiano Premebida
C. Premebida, L. Garrote, A. Asvadi, A. Pedro Ribeiro, and U. Nunes
High-resolution LIDAR-based Depth Mapping using Bilateral Filter
8 pages, 6 figures, submitted to IEEE-ITSC'16
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High resolution depth-maps, obtained by upsampling sparse range data from a 3D-LIDAR, find applications in many fields ranging from sensory perception to semantic segmentation and object detection. Upsampling is often based on combining data from a monocular camera to compensate the low-resolution of a LIDAR. This paper, on the other hand, introduces a novel framework to obtain dense depth-map solely from a single LIDAR point cloud; which is a research direction that has been barely explored. The formulation behind the proposed depth-mapping process relies on local spatial interpolation, using sliding-window (mask) technique, and on the Bilateral Filter (BF) where the variable of interest, the distance from the sensor, is considered in the interpolation problem. In particular, the BF is conveniently modified to perform depth-map upsampling such that the edges (foreground-background discontinuities) are better preserved by means of a proposed method which influences the range-based weighting term. Other methods for spatial upsampling are discussed, evaluated and compared in terms of different error measures. This paper also researches the role of the mask's size in the performance of the implemented methods. Quantitative and qualitative results from experiments on the KITTI Database, using LIDAR point clouds only, show very satisfactory performance of the approach introduced in this work.
[ { "version": "v1", "created": "Fri, 17 Jun 2016 18:14:59 GMT" } ]
2016-06-20T00:00:00
[ [ "Premebida", "C.", "" ], [ "Garrote", "L.", "" ], [ "Asvadi", "A.", "" ], [ "Ribeiro", "A. Pedro", "" ], [ "Nunes", "U.", "" ] ]
new_dataset
0.996531
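The depth-upsampling record above relies on a sliding-window bilateral filter applied directly to sparse LIDAR depth. The sketch below fills empty pixels of a sparse depth image by weighting nearby valid measurements with a spatial Gaussian and a range Gaussian; taking the range reference to be the spatially closest valid depth in the window is an assumption made here for illustration, not the paper's exact modification of the BF weighting term.

```python
import numpy as np

def bilateral_depth_upsample(depth, valid, half=3, sigma_s=2.0, sigma_r=0.5):
    """Fill invalid pixels of a sparse depth map with a bilateral-filter-style
    weighted average of valid neighbours inside a (2*half+1)^2 window."""
    h, w = depth.shape
    out = depth.copy()
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            if valid[i, j]:
                continue
            i0, i1 = max(0, i - half), min(h, i + half + 1)
            j0, j1 = max(0, j - half), min(w, j + half + 1)
            d = depth[i0:i1, j0:j1]
            m = valid[i0:i1, j0:j1]
            if not m.any():
                continue                  # no valid measurement to interpolate from
            s = spatial[i0 - i + half:i1 - i + half, j0 - j + half:j1 - j + half]
            # Range reference: the spatially best-weighted (nearest) valid depth.
            ref = d[m][np.argmax(s[m])]
            range_w = np.exp(-((d - ref) ** 2) / (2 * sigma_r ** 2))
            wgt = s * range_w * m
            out[i, j] = (wgt * d).sum() / wgt.sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dense = np.tile(np.linspace(1.0, 5.0, 32), (32, 1))   # toy ground-truth depth
    valid = rng.random((32, 32)) < 0.15                    # ~15% sparse samples
    sparse = np.where(valid, dense, 0.0)
    filled = bilateral_depth_upsample(sparse, valid)
    print(float(np.abs(filled[~valid] - dense[~valid]).mean()))
```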
1510.02460
Dongsoo Har
Dongsoo Har
Safety technology for train based on multi-sensors and braking system
arXiv admin note: text overlap with arXiv:1504.00549
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work deals with the integration and transmission of safety information for smart railway vehicles and the control and design of cooperative emergency brake systems for high-speed trains. Due to the increased speed of high-speed trains, the safety of passengers is becoming more critical. From this perspective, three different approaches to ensure the safety of passengers are useful. These approaches are based on the integrated use of multi-sensors and the emergency brake system. The methodology for the integrated use of sensors to obtain situation-aware safety-related information and for enhanced braking is discussed in detail in this work.
[ { "version": "v1", "created": "Thu, 8 Oct 2015 19:53:45 GMT" }, { "version": "v2", "created": "Thu, 16 Jun 2016 00:55:46 GMT" } ]
2016-06-17T00:00:00
[ [ "Har", "Dongsoo", "" ] ]
new_dataset
0.999062
1512.00932
S L Happy
S L Happy, Priyadarshi Patnaik, Aurobinda Routray, and Rajlakshmi Guha
The Indian Spontaneous Expression Database for Emotion Recognition
in IEEE Transactions on Affective Computing, 2016
null
10.1109/TAFFC.2015.2498174
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic recognition of spontaneous facial expressions is a major challenge in the field of affective computing. Head rotation, face pose, illumination variation, occlusion etc. are the attributes that increase the complexity of recognition of spontaneous expressions in practical applications. Effective recognition of expressions depends significantly on the quality of the database used. Most well-known facial expression databases consist of posed expressions. However, currently there is a huge demand for spontaneous expression databases for the pragmatic implementation of the facial expression recognition algorithms. In this paper, we propose and establish a new facial expression database containing spontaneous expressions of both male and female participants of Indian origin. The database consists of 428 segmented video clips of the spontaneous facial expressions of 50 participants. In our experiment, emotions were induced among the participants by using emotional videos and simultaneously their self-ratings were collected for each experienced emotion. Facial expression clips were annotated carefully by four trained decoders, which were further validated by the nature of stimuli used and self-report of emotions. An extensive analysis was carried out on the database using several machine learning algorithms and the results are provided for future reference. Such a spontaneous database will help in the development and validation of algorithms for recognition of spontaneous expressions.
[ { "version": "v1", "created": "Thu, 3 Dec 2015 02:51:08 GMT" }, { "version": "v2", "created": "Thu, 16 Jun 2016 02:01:16 GMT" } ]
2016-06-17T00:00:00
[ [ "Happy", "S L", "" ], [ "Patnaik", "Priyadarshi", "" ], [ "Routray", "Aurobinda", "" ], [ "Guha", "Rajlakshmi", "" ] ]
new_dataset
0.98224
1602.06056
Jiaji Zhou
Jiaji Zhou, Robert Paolini, J. Andrew Bagnell and Matthew T. Mason
A Convex Polynomial Force-Motion Model for Planar Sliding: Identification and Application
2016 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a polynomial force-motion model for planar sliding. The set of generalized friction loads is the 1-sublevel set of a polynomial whose gradient directions correspond to generalized velocities. Additionally, the polynomial is confined to be convex even-degree homogeneous in order to obey the maximum work inequality, symmetry, shape invariance in scale, and fast invertibility. We present a simple and statistically-efficient model identification procedure using a sum-of-squares convex relaxation. Simulation and robotic experiments validate the accuracy and efficiency of our approach. We also show practical applications of our model including stable pushing of objects and free sliding dynamic simulations.
[ { "version": "v1", "created": "Fri, 19 Feb 2016 07:07:54 GMT" }, { "version": "v2", "created": "Fri, 20 May 2016 12:54:26 GMT" }, { "version": "v3", "created": "Thu, 16 Jun 2016 03:28:59 GMT" } ]
2016-06-17T00:00:00
[ [ "Zhou", "Jiaji", "" ], [ "Paolini", "Robert", "" ], [ "Bagnell", "J. Andrew", "" ], [ "Mason", "Matthew T.", "" ] ]
new_dataset
0.980489
1604.07002
Somaiyeh Mahmoud Zadeh
Somaiyeh Mahmoud Zadeh, Amir Mehdi Yazdani, Karl Sammut, David M.W Powers
AUV Rendezvous Online Path Planning in a Highly Cluttered Undersea Environment Using Evolutionary Algorithms
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, a single autonomous underwater vehicle (AUV) aims to rendezvous with a submerged leader recovery vehicle through a cluttered and variable operating field. The rendezvous problem is transformed into a nonlinear optimal control problem (NOCP) and then numerical solutions are provided. A penalty function method is utilized to combine the boundary conditions, vehicular and environmental constraints with the performance index, which is the final rendezvous time. Four evolutionary path planning methods, namely particle swarm optimization (PSO), biogeography-based optimization (BBO), differential evolution (DE) and the Firefly algorithm (FA), are employed to establish a reactive planner module and provide a numerical solution for the proposed NOCP. The objective is to synthesize and analyse the performance and capability of the mentioned methods for guiding an AUV from a loitering point toward the rendezvous place through a comprehensive simulation study. The proposed planner module entails a heuristic for refining the path considering situational awareness of the underlying environment, encompassing static and dynamic obstacles immersed in spatiotemporal current vectors. This makes it possible to accommodate unforeseen changes in the operating field, such as the emergence of unpredicted obstacles or variability of the current vector field and turbulent regions. The simulation results demonstrate the inherent robustness and significant efficiency of the proposed planner in enhancing the vehicle's autonomy in terms of using the current force and coping with undesired current disturbances for the desired rendezvous purpose. Advantages and shortcomings of all utilized methods are also presented based on the obtained results.
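A minimal particle swarm optimization (PSO) loop of the kind this record lists among its candidate planners, included only to make the method concrete. The cost function, bounds and hyper-parameters below are hypothetical placeholders, not the paper's NOCP formulation.

```python
# Minimal PSO sketch; all names and values are illustrative assumptions.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: minimize a quadratic stand-in for the rendezvous-time objective.
best, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=5)
```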
[ { "version": "v1", "created": "Sun, 24 Apr 2016 09:25:25 GMT" }, { "version": "v2", "created": "Wed, 15 Jun 2016 23:14:42 GMT" } ]
2016-06-17T00:00:00
[ [ "Zadeh", "Somaiyeh Mahmoud", "" ], [ "Yazdani", "Amir Mehdi", "" ], [ "Sammut", "Karl", "" ], [ "Powers", "David M. W", "" ] ]
new_dataset
0.996927
1606.03180
Yoshihiko Kakutani
Yoshihiko Kakutani
Calculi for Intuitionistic Normal Modal Logic
null
null
null
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a call-by-name and a call-by-value term calculus, both of which have a Curry-Howard correspondence to the box fragment of the intuitionistic modal logic IK. The strong normalizability and the confluency of the calculi are shown. Moreover, we define a CPS transformation from the call-by-value calculus to the call-by-name calculus, and show its soundness and completeness.
[ { "version": "v1", "created": "Fri, 10 Jun 2016 05:19:27 GMT" } ]
2016-06-17T00:00:00
[ [ "Kakutani", "Yoshihiko", "" ] ]
new_dataset
0.998967
1606.04435
Kathrin Grosse
Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
version update: correcting typos, incorporating external feedback
null
null
null
cs.CR cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs. These inputs are derived from regular inputs by minor yet carefully selected perturbations that deceive machine learning models into desired misclassifications. Existing work in this emerging field was largely specific to the domain of image classification, since the high-entropy of images can be conveniently manipulated without changing the images' overall visual appearance. Yet, it remains unclear how such attacks translate to more security-sensitive applications such as malware detection - which may pose significant challenges in sample generation and arguably grave consequences for failure. In this paper, we show how to construct highly-effective adversarial sample crafting attacks for neural networks used as malware classifiers. The application domain of malware classification introduces additional constraints in the adversarial sample crafting problem when compared to the computer vision domain: (i) continuous, differentiable input domains are replaced by discrete, often binary inputs; and (ii) the loose condition of leaving visual appearance unchanged is replaced by requiring equivalent functional behavior. We demonstrate the feasibility of these attacks on many different instances of malware classifiers that we trained using the DREBIN Android malware data set. We furthermore evaluate to which extent potential defensive mechanisms against adversarial crafting can be leveraged to the setting of malware classification. While feature reduction did not prove to have a positive impact, distillation and re-training on adversarially crafted samples show promising results.
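A rough sketch of adversarial crafting under the binary-feature constraint this record describes: only feature additions (0 to 1 flips) are allowed so that functionality is preserved. For self-containment a linear scorer stands in for the neural network; the weights, budget and feature vector are hypothetical, and this greedy rule is not the paper's exact algorithm.

```python
# Greedy "add-only" adversarial perturbation on binary features (illustrative).
import numpy as np

def craft_adversarial(x, w, b, budget=10):
    """Greedily add features that most decrease the 'malicious' score of a linear model."""
    x_adv = x.copy()
    for _ in range(budget):
        score = float(w @ x_adv + b)
        if score < 0:                          # already classified benign
            break
        candidates = np.where(x_adv == 0)[0]   # only absent features may be added
        if candidates.size == 0:
            break
        best = candidates[np.argmin(w[candidates])]
        if w[best] >= 0:                       # no addition lowers the score further
            break
        x_adv[best] = 1
    return x_adv

rng = np.random.default_rng(0)
w, b = rng.normal(size=50), 0.0                # toy linear malware scorer
x = (rng.random(50) < 0.2).astype(int)         # toy binary feature vector
x_adv = craft_adversarial(x, w, b)
```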
[ { "version": "v1", "created": "Tue, 14 Jun 2016 16:01:52 GMT" }, { "version": "v2", "created": "Thu, 16 Jun 2016 08:14:12 GMT" } ]
2016-06-17T00:00:00
[ [ "Grosse", "Kathrin", "" ], [ "Papernot", "Nicolas", "" ], [ "Manoharan", "Praveen", "" ], [ "Backes", "Michael", "" ], [ "McDaniel", "Patrick", "" ] ]
new_dataset
0.982361
1606.04601
Yuan Cao
Cao Yuan, Li Qingguo
Cyclic codes over $\mathbb{Z}_4[u]/\langle u^k\rangle$ of odd length
arXiv admin note: substantial text overlap with arXiv:1511.05413
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $R=\mathbb{Z}_{4}[u]/\langle u^k\rangle=\mathbb{Z}_{4}+u\mathbb{Z}_{4}+\ldots+u^{k-1}\mathbb{Z}_{4}$ ($u^k=0$) where $k\in \mathbb{Z}^{+}$ satisfies $k\geq 2$. For any odd positive integer $n$, it is known that cyclic codes over $R$ of length $n$ are identified with ideals of the ring $R[x]/\langle x^{n}-1\rangle$. In this paper, an explicit representation for each cyclic code over $R$ of length $n$ is provided and a formula to count the number of codewords in each code is given. Then a formula to calculate the number of cyclic codes over $R$ of length $n$ is obtained. Precisely, the dual code of each cyclic code and self-dual cyclic codes over $R$ of length $n$ are investigated. When $k=4$, some optimal quasi-cyclic codes over $\mathbb{Z}_{4}$ of length $28$ and index $4$ are obtained from cyclic codes over $R=\mathbb{Z}_{4} [u]/\langle u^4\rangle$.
[ { "version": "v1", "created": "Wed, 15 Jun 2016 00:58:19 GMT" }, { "version": "v2", "created": "Thu, 16 Jun 2016 00:58:09 GMT" } ]
2016-06-17T00:00:00
[ [ "Yuan", "Cao", "" ], [ "Qingguo", "Li", "" ] ]
new_dataset
0.994545
1606.05017
Shreyas Sen
Shreyas Sen
SocialHBC: Social Networking and Secure Authentication using Interference-Robust Human Body Communication
Accepted for Publication in International Symposium on Low Power Electronics and Design (ISLPED)
null
10.1145/2934583.2934609
null
cs.HC cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of cheap computing through five decades of continued miniaturization following Moore's Law, wearable devices are becoming increasingly popular. These wearable devices are typically interconnected using a wireless body area network (WBAN). Human body communication (HBC) provides an alternative energy-efficient communication technique between on-body wearable devices by using the human body as a conducting medium. This allows an order of magnitude lower communication power, compared to WBAN, due to lower loss and broadband signaling. Moreover, HBC is significantly more secure than WBAN, as the information is contained within the human body and cannot be snooped on unless the person is physically touched. In this paper, we highlight applications of HBC as (1) Social Networking (e.g. LinkedIn/Facebook friend request sent during handshaking in a meeting/party), (2) Secure Authentication using human-human or human-machine dynamic HBC and (3) ultra-low power, secure BAN using intra-human HBC. One of the biggest technical bottlenecks of HBC has been the interference (e.g. FM) picked up by the human body acting like an antenna. In this work, for the first time, we introduce an integrating dual data rate (DDR) receiver technique that allows notch filtering (>20 dB) of the interference for interference-robust HBC.
[ { "version": "v1", "created": "Thu, 16 Jun 2016 00:48:13 GMT" } ]
2016-06-17T00:00:00
[ [ "Sen", "Shreyas", "" ] ]
new_dataset
0.997167
1311.3123
Rajai Nasser
Rajai Nasser and Emre Telatar
Polar Codes for Arbitrary DMCs and Arbitrary MACs
32 pages, 1 figure. arXiv admin note: text overlap with arXiv:1112.1770
IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 2917-2936, June 2016
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polar codes are constructed for arbitrary channels by imposing an arbitrary quasigroup structure on the input alphabet. Just as with "usual" polar codes, the block error probability under successive cancellation decoding is $o(2^{-N^{1/2-\epsilon}})$, where $N$ is the block length. Encoding and decoding for these codes can be implemented with a complexity of $O(N\log N)$. It is shown that the same technique can be used to construct polar codes for arbitrary multiple access channels (MAC) by using an appropriate Abelian group structure. Although the symmetric sum capacity is achieved by this coding scheme, some points in the symmetric capacity region may not be achieved. In the case where the channel is a combination of linear channels, we provide a necessary and sufficient condition characterizing the channels whose symmetric capacity region is preserved by the polarization process. We also provide a sufficient condition for having a maximal loss in the dominant face.
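For concreteness, the basic binary polar transform that "usual" polar codes are built on, matching x = u B_N F^{⊗n}; the quasigroup and Abelian-group generalizations described in this record are not reproduced. Pure Python over GF(2), with a toy input.

```python
# Recursive Arikan polar transform for a bit vector whose length is a power of two.
def polar_transform(u):
    n = len(u)
    if n == 1:
        return list(u)
    half = n // 2
    combined = [u[2 * i] ^ u[2 * i + 1] for i in range(half)]  # u_odd XOR u_even
    passed = [u[2 * i + 1] for i in range(half)]               # u_even passed through
    return polar_transform(combined) + polar_transform(passed)

print(polar_transform([1, 0, 1, 1]))   # toy 4-bit example
```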
[ { "version": "v1", "created": "Wed, 13 Nov 2013 13:24:56 GMT" } ]
2016-06-16T00:00:00
[ [ "Nasser", "Rajai", "" ], [ "Telatar", "Emre", "" ] ]
new_dataset
0.999514
1604.05492
Geoffroy Fouquier
Guillaume Pitel, Geoffroy Fouquier, Emmanuel Marchand and Abdul Mouhamadsultane
Count-Min Tree Sketch: Approximate counting for NLP
submitted to the second International Symposium on Web Algorithms (iSwag'2016). arXiv admin note: text overlap with arXiv:1502.04885, In the proceedings of the Second International Symposium on Web Algorithms (iSWAG 2016), June 9-10, 2016, Deauville, Normandy, France
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Count-Min Sketch is a widely adopted structure for approximate event counting in large-scale processing. In a previous work we improved the original version of the Count-Min Sketch (CMS) with conservative update by using approximate counters instead of linear counters. These structures are computationally efficient and improve the average relative error (ARE) of a CMS at a constant memory footprint. These improvements are well suited for NLP tasks, in which one is interested in the low-frequency items. However, although log counters improve the ARE, they produce a residual error due to the approximation. In this paper, we propose the Count-Min Tree Sketch (Copyright 2016 eXenSa. All rights reserved) variant with pyramidal counters, which is focused on taking advantage of the Zipfian distribution of text data.
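A minimal Count-Min Sketch with conservative update, the baseline this record builds on; the tree/pyramidal counters themselves are not reproduced. The hash choice and table sizes are illustrative assumptions.

```python
# Count-Min Sketch with conservative update (illustrative sketch).
import hashlib

class CountMinSketchCU:
    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _indices(self, item):
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
            yield row, int.from_bytes(h, "big") % self.width

    def update(self, item, count=1):
        cells = list(self._indices(item))
        new = min(self.table[r][c] for r, c in cells) + count
        for r, c in cells:                    # conservative update: only raise cells
            if self.table[r][c] < new:        # that are below the new estimate
                self.table[r][c] = new

    def query(self, item):
        return min(self.table[r][c] for r, c in self._indices(item))

cms = CountMinSketchCU()
for token in ["the", "the", "cat", "the"]:
    cms.update(token)
assert cms.query("the") >= 3                  # estimate never undercounts
```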
[ { "version": "v1", "created": "Tue, 19 Apr 2016 09:51:34 GMT" }, { "version": "v2", "created": "Thu, 21 Apr 2016 09:44:51 GMT" }, { "version": "v3", "created": "Wed, 15 Jun 2016 06:15:34 GMT" } ]
2016-06-16T00:00:00
[ [ "Pitel", "Guillaume", "" ], [ "Fouquier", "Geoffroy", "" ], [ "Marchand", "Emmanuel", "" ], [ "Mouhamadsultane", "Abdul", "" ] ]
new_dataset
0.998962
1606.04593
Guy Kloss
Guy Kloss
Strongvelope Multi-Party Encrypted Messaging Protocol design document
design whitepaper
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
In this document we describe the design of a multi-party messaging encryption protocol "Strongvelope". We hope that it will prove useful to people interested in understanding the inner workings of this protocol as well as cryptography and security experts to review the underlying concepts and assumptions. In this design paper we are outlining the perspective of chat message protection through the Strongvelope module. This is different from the product (the Mega chat) and the transport means which it will be used with. Aspects of the chat product and transport are only referred to where appropriate, but are not subject to discussion in this document.
[ { "version": "v1", "created": "Tue, 14 Jun 2016 23:38:41 GMT" } ]
2016-06-16T00:00:00
[ [ "Kloss", "Guy", "" ] ]
new_dataset
0.988688
1606.04598
Guy Kloss
Ximin Luo, Guy Kloss
mpENC Multi-Party Encrypted Messaging Protocol design document
technical whitepaper
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
This document is a technical overview and discussion of our work, a protocol for secure group messaging. By secure we mean for the actual users i.e. end-to-end security, as opposed to "secure" for irrelevant third parties. Our work provides everything needed to run a messaging session between real users on top of a real transport protocol. That is, we specify not just a key exchange, but when and how to run these relative to transport-layer events; how to achieve liveness properties such as reliability and consistency, that are time-sensitive and lie outside of the send-receive logic that cryptography-only protocols often restrict themselves to; and offer suggestions for displaying accurate (i.e. secure) but not overwhelming information in user interfaces. We aim towards a general-purpose unified protocol. In other words, we'd prefer to avoid creating a completely new protocol merely to support automation, or asynchronity, or a different transport protocol. This would add complexity to the overall ecosystem of communications protocols. It is simply unnecessary if the original protocol is designed well, as we have tried to do. That aim is not complete -- our full protocol system, as currently implemented, is suitable only for use with certain instant messaging protocols. However, we have tried to separate out conceptually-independent concerns, and solve these individually using minimal assumptions even if other components make extra assumptions. This means that many components of our full system can be reused in future protocol extensions, and we know exactly which components must be replaced in order to lift the existing constraints on our full system.
[ { "version": "v1", "created": "Wed, 15 Jun 2016 00:40:17 GMT" } ]
2016-06-16T00:00:00
[ [ "Luo", "Ximin", "" ], [ "Kloss", "Guy", "" ] ]
new_dataset
0.99824
1606.04599
Guy Kloss
Guy Kloss
Mega Key Authentication Mechanism
technical whitepaper
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
For secure communication it is not sufficient just to use strong cryptography with good and strong keys; one must also have the assurance that the keys in use are authentic and belong to the contact one is expecting to communicate with. Without that, it is possible to be subject to impersonation or man-in-the-middle (MitM) attacks. Mega addresses this problem by providing a hierarchical authentication mechanism for contacts and their keys. To avoid any hassle when using multiple types of keys and key pairs for different purposes, the whole authentication mechanism is brought down to a single "identity key".
[ { "version": "v1", "created": "Wed, 15 Jun 2016 00:40:31 GMT" } ]
2016-06-16T00:00:00
[ [ "Kloss", "Guy", "" ] ]
new_dataset
0.997149
1606.04853
Patrick Flynn
Kevin W. Bowyer and Patrick J. Flynn
The ND-IRIS-0405 Iris Image Dataset
13 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Computer Vision Research Lab at the University of Notre Dame began collecting iris images in the spring semester of 2004. The initial data collections used an LG 2200 iris imaging system for image acquisition. Image datasets acquired in 2004-2005 at Notre Dame with this LG 2200 have been used in the ICE 2005 and ICE 2006 iris biometric evaluations. The ICE 2005 iris image dataset has been distributed to over 100 research groups around the world. The purpose of this document is to describe the content of the ND-IRIS-0405 iris image dataset. This dataset is a superset of the iris image datasets used in ICE 2005 and ICE 2006. The ND 2004-2005 iris image dataset contains 64,980 images corresponding to 356 unique subjects, and 712 unique irises. The age range of the subjects is 18 to 75 years old. 158 of the subjects are female, and 198 are male. 250 of the subjects are Caucasian, 82 are Asian, and 24 are other ethnicities.
[ { "version": "v1", "created": "Wed, 15 Jun 2016 16:40:51 GMT" } ]
2016-06-16T00:00:00
[ [ "Bowyer", "Kevin W.", "" ], [ "Flynn", "Patrick J.", "" ] ]
new_dataset
0.999558
1505.04211
John Loverich
John Loverich
Discontinuous Piecewise Polynomial Neural Networks
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An artificial neural network is presented based on the idea of connections between units that are only active for a specific range of input values and zero outside that range (and so are not evaluated outside the active range). The connection function is represented by a polynomial with compact support. The finite range of activation allows for great activation sparsity in the network and means that theoretically you are able to add computational power to the network without increasing the computational time required to evaluate the network for a given input. The polynomial order ranges from first to fifth order. Unit dropout is used for regularization and a parameter free weight update is used. Better performance is obtained by moving from piecewise linear connections to piecewise quadratic, even better performance can be obtained by moving to higher order polynomials. The algorithm is tested on the MAGIC Gamma ray data set as well as the MNIST data set.
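An illustrative sketch of a connection function with compact support as described in this record: a low-order polynomial that is non-zero only inside its active input range and exactly zero (hence skippable) outside it. The bump form, coefficients and ranges are assumptions for illustration, not the paper's exact basis.

```python
# Compactly supported polynomial connection (illustrative).
import numpy as np

def compact_poly_unit(x, lo, hi, coeffs):
    """Evaluate a polynomial bump that vanishes outside the active range [lo, hi]."""
    x = np.asarray(x, dtype=float)
    t = (x - lo) / (hi - lo)                      # map active range to [0, 1]
    inside = (t >= 0.0) & (t <= 1.0)
    # The t*(1-t) envelope forces the value to zero at both ends of the support.
    poly = np.polyval(coeffs, t) * t * (1.0 - t)
    return np.where(inside, poly, 0.0)

x = np.linspace(-2, 2, 9)
y = compact_poly_unit(x, lo=0.0, hi=1.0, coeffs=[1.0, 0.5])  # only part of the inputs are active
```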
[ { "version": "v1", "created": "Fri, 15 May 2015 22:21:39 GMT" }, { "version": "v2", "created": "Tue, 14 Jun 2016 18:58:11 GMT" } ]
2016-06-15T00:00:00
[ [ "Loverich", "John", "" ] ]
new_dataset
0.997485
1507.02081
Michael Neunert
Michael Neunert, Michael Bloesch, Jonas Buchli
An Open Source, Fiducial Based, Visual-Inertial Motion Capture System
To appear in The International Conference on Information Fusion (FUSION) 2016
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many robotic tasks rely on the accurate localization of moving objects within a given workspace. This information about the objects' poses and velocities is used for control, motion planning, navigation, interaction with the environment or verification. Often motion capture systems are used to obtain such a state estimate. However, these systems are often costly, limited in workspace size and not suitable for outdoor usage. Therefore, we propose a lightweight and easy-to-use visual-inertial Simultaneous Localization and Mapping approach that leverages cost-efficient, paper-printable artificial landmarks, so-called fiducials. Results show that by fusing visual and inertial data, the system provides accurate estimates and is robust against fast motions and changing lighting conditions. Tight integration of the estimation of sensor and fiducial pose as well as extrinsics ensures accuracy, map consistency and avoids the requirement for pre-calibration. By providing an open source implementation and various datasets, partially with ground truth information, we enable community members to run, test, modify and extend the system either using these datasets or directly running the system on their own robotic setups.
[ { "version": "v1", "created": "Wed, 8 Jul 2015 09:38:13 GMT" }, { "version": "v2", "created": "Mon, 13 Jun 2016 20:02:20 GMT" } ]
2016-06-15T00:00:00
[ [ "Neunert", "Michael", "" ], [ "Bloesch", "Michael", "" ], [ "Buchli", "Jonas", "" ] ]
new_dataset
0.976418
1604.03793
Florian Lonsing
Tomas Balyo and Florian Lonsing
HordeQBF: A Modular and Massively Parallel QBF Solver
camera-ready version, 6-page tool paper, to appear in the proceedings of SAT 2016, LNCS, Springer
null
10.1007/978-3-319-40970-2_33
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recently developed massively parallel satisfiability (SAT) solver HordeSAT was designed in a modular way to allow the integration of any sequential CDCL-based SAT solver in its core. We integrated the QCDCL-based quantified Boolean formula (QBF) solver DepQBF in HordeSAT to obtain a massively parallel QBF solver---HordeQBF. In this paper we describe the details of this integration and report on results of the experimental evaluation of HordeQBF's performance. HordeQBF achieves superlinear average and median speedup on the hard application instances of the 2014 QBF Gallery.
[ { "version": "v1", "created": "Wed, 13 Apr 2016 14:21:34 GMT" } ]
2016-06-15T00:00:00
[ [ "Balyo", "Tomas", "" ], [ "Lonsing", "Florian", "" ] ]
new_dataset
0.99113
1604.05994
Florian Lonsing
Florian Lonsing, Uwe Egly, and Martina Seidl
Q-Resolution with Generalized Axioms
(minor fixes) camera-ready version + appendix; to appear in the proceedings of SAT 2016, LNCS, Springer
null
10.1007/978-3-319-40970-2_27
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Q-resolution is a proof system for quantified Boolean formulas (QBFs) in prenex conjunctive normal form (PCNF) which underlies search-based QBF solvers with clause and cube learning (QCDCL). With the aim to derive and learn stronger clauses and cubes earlier in the search, we generalize the axioms of the Q-resolution calculus resulting in an exponentially more powerful proof system. The generalized axioms introduce an interface of Q-resolution to any other QBF proof system allowing for the direct combination of orthogonal solving techniques. We implemented a variant of the Q-resolution calculus with generalized axioms in the QBF solver DepQBF. As two case studies, we apply integrated SAT solving and resource-bounded QBF preprocessing during the search to heuristically detect potential axiom applications. Experiments with application benchmarks indicate a substantial performance improvement.
[ { "version": "v1", "created": "Wed, 20 Apr 2016 15:07:24 GMT" }, { "version": "v2", "created": "Fri, 22 Apr 2016 12:17:47 GMT" } ]
2016-06-15T00:00:00
[ [ "Lonsing", "Florian", "" ], [ "Egly", "Uwe", "" ], [ "Seidl", "Martina", "" ] ]
new_dataset
0.994198
1605.05858
Moez AbdelGawad
Robert Cartwright, Rebecca Parsons, Moez AbdelGawad
Domain Theory: An Introduction
90 pages
null
null
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This monograph is an ongoing revision of "Lectures On A Mathematical Theory of Computation" by Dana Scott. Scott's monograph uses a formulation of domains called neighborhood systems in which finite elements are selected subsets of a master set of objects called "tokens". Since tokens have little intuitive significance, Scott has discarded neighborhood systems in favor of an equivalent formulation of domains called information systems. Unfortunately, he has not rewritten his monograph to reflect this change. We have rewritten Scott's monograph in terms of finitary bases instead of information systems. A finitary basis is an information system that is closed under least upper bounds on finite consistent subsets. This convention ensures that every finite answer is represented by a single basis object instead of a set of objects.
[ { "version": "v1", "created": "Thu, 19 May 2016 09:06:01 GMT" }, { "version": "v2", "created": "Mon, 23 May 2016 12:21:17 GMT" }, { "version": "v3", "created": "Thu, 26 May 2016 14:28:34 GMT" }, { "version": "v4", "created": "Tue, 14 Jun 2016 06:36:36 GMT" } ]
2016-06-15T00:00:00
[ [ "Cartwright", "Robert", "" ], [ "Parsons", "Rebecca", "" ], [ "AbdelGawad", "Moez", "" ] ]
new_dataset
0.962888
1606.04171
Xingqin Lin
Y.-P. Eric Wang, Xingqin Lin, Ansuman Adhikary, Asbj\"orn Gr\"ovlen, Yutao Sui, Yufei Blankenship, Johan Bergman, Hazhir S. Razaghi
A Primer on 3GPP Narrowband Internet of Things (NB-IoT)
8 pages, 5 figures, submitted for publication
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for the Internet of Things (IoT). This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of a massive number of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales behind the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.
[ { "version": "v1", "created": "Mon, 13 Jun 2016 23:13:47 GMT" } ]
2016-06-15T00:00:00
[ [ "Wang", "Y. -P. Eric", "" ], [ "Lin", "Xingqin", "" ], [ "Adhikary", "Ansuman", "" ], [ "Grövlen", "Asbjörn", "" ], [ "Sui", "Yutao", "" ], [ "Blankenship", "Yufei", "" ], [ "Bergman", "Johan", "" ], [ "Razaghi", "Hazhir S.", "" ] ]
new_dataset
0.99643
1606.04195
Zhi Wang Dr.
Zhi Wang, Lifeng Sun, Miao Zhang, Haitian Pang, Erfang Tian, Wenwu Zhu
Social- and Mobility-Aware Device-to-Device Content Delivery
null
null
null
null
cs.MM cs.NI cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile online social network services have seen a rapid increase, in which the huge amount of user-generated social media contents propagating between users via social connections has significantly challenged the traditional content delivery paradigm: First, replicating all of the contents generated by users to edge servers that well "fit" the receivers becomes difficult due to the limited bandwidth and storage capacities. Motivated by device-to-device (D2D) communication that allows users with smart devices to transfer content directly, we propose replicating bandwidth-intensive social contents in a device-to-device manner. Based on large-scale measurement studies on social content propagation and user mobility patterns in edge-network regions, we observe that (1) Device-to-device replication can significantly help users download social contents from nearby neighboring peers; (2) Both social propagation and mobility patterns affect how contents should be replicated; (3) The replication strategies depend on regional characteristics ({\em e.g.}, how users move across regions). Using these measurement insights, we propose a joint \emph{propagation- and mobility-aware} content replication strategy for edge-network regions, in which social contents are assigned to users in edge-network regions according to a joint consideration of social graph, content propagation and user mobility. We formulate the replication scheduling as an optimization problem and design distributed algorithm only using historical, local and partial information to solve it. Trace-driven experiments further verify the superiority of our proposal: compared with conventional pure movement-based and popularity-based approach, our design can significantly ($2-4$ times) improve the amount of social contents successfully delivered by device-to-device replication.
[ { "version": "v1", "created": "Tue, 14 Jun 2016 02:52:45 GMT" } ]
2016-06-15T00:00:00
[ [ "Wang", "Zhi", "" ], [ "Sun", "Lifeng", "" ], [ "Zhang", "Miao", "" ], [ "Pang", "Haitian", "" ], [ "Tian", "Erfang", "" ], [ "Zhu", "Wenwu", "" ] ]
new_dataset
0.966455
1606.04288
Polyvios Pratikakis
Alexandros Labrineas, Polyvios Pratikakis, Dimitrios S. Nikolopoulos, Angelos Bilas
BDDT-SCC: A Task-parallel Runtime for Non Cache-Coherent Multicores
null
null
null
null
cs.DC cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents BDDT-SCC, a task-parallel runtime system for non cache-coherent multicore processors, implemented for the Intel Single-Chip Cloud Computer. The BDDT-SCC runtime includes a dynamic dependence analysis and automatic synchronization, and executes OpenMP-Ss tasks on a non cache-coherent architecture. We design a runtime that uses fast on-chip inter-core communication with small messages. At the same time, we use non coherent shared memory to avoid large core-to-core data transfers that would incur a high volume of unnecessary copying. We evaluate BDDT-SCC on a set of representative benchmarks, in terms of task granularity, locality, and communication. We find that memory locality and allocation plays a very important role in performance, as the architecture of the SCC memory controllers can create strong contention effects. We suggest patterns that improve memory locality and thus the performance of applications, and measure their impact.
[ { "version": "v1", "created": "Tue, 14 Jun 2016 10:09:42 GMT" } ]
2016-06-15T00:00:00
[ [ "Labrineas", "Alexandros", "" ], [ "Pratikakis", "Polyvios", "" ], [ "Nikolopoulos", "Dimitrios S.", "" ], [ "Bilas", "Angelos", "" ] ]
new_dataset
0.999688
1606.04296
Foivos Zakkak
Foivos S. Zakkak and Polyvios Pratikakis
DiSquawk: 512 cores, 512 memories, 1 JVM
null
null
null
FORTH-ICS/TR-470, June 2016
cs.DC cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trying to cope with the constantly growing number of cores per processor, hardware architects are experimenting with modular non-cache-coherent architectures. Such architectures delegate the memory coherency to the software. On the contrary, high productivity languages, like Java, are designed to abstract away the hardware details and allow developers to focus on the implementation of their algorithm. Such programming languages rely on a process virtual machine to perform the necessary operations to implement the corresponding memory model. Arguing about the correctness of such implementations is not trivial though. In this work we present our implementation of the Java Memory Model in a Java Virtual Machine targeting a 512-core non-cache-coherent memory architecture. We shortly discuss design decisions and present early evaluation results, which demonstrate that our implementation scales with the number of cores. We model our implementation as the operational semantics of a Java Core Calculus that we extend with synchronization actions, and prove its adherence to the Java Memory Model.
[ { "version": "v1", "created": "Tue, 14 Jun 2016 10:43:18 GMT" } ]
2016-06-15T00:00:00
[ [ "Zakkak", "Foivos S.", "" ], [ "Pratikakis", "Polyvios", "" ] ]
new_dataset
0.999057
1412.5278
Jose M. Such
Jose M. Such, Michael Rovatsos
Privacy Policy Negotiation in Social Media
null
ACM Transactions on Autonomous and Adaptive Systems, 11(1):1-29, 2016
10.1145/2821512
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social Media involve many shared items, such as photos, which may concern more than one user. The first challenge we address in this paper is to develop a way for the users of such items to decide with whom to share these items. This is not an easy problem, as users' privacy preferences for the same item may conflict, so an approach that simply merges the users' privacy preferences in some way may provide unsatisfactory results. We propose a negotiation mechanism for users to agree on a compromise for the conflicts found. The second challenge we address in this paper relates to the exponential complexity of such a negotiation mechanism, which could make it too slow to be used in practice in a Social Media infrastructure. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism and show how substantial benefits can be derived from the use of these heuristics through extensive experimental evaluation that compares the performance of the negotiation mechanism with and without these heuristics. Moreover, we show that one such heuristic makes the negotiation mechanism produce results fast enough to be used in actual Social Media infrastructures with near-optimal results.
[ { "version": "v1", "created": "Wed, 17 Dec 2014 08:09:14 GMT" } ]
2016-06-14T00:00:00
[ [ "Such", "Jose M.", "" ], [ "Rovatsos", "Michael", "" ] ]
new_dataset
0.977789
1505.01810
Khalid Khan
Khalid Khan and D.K. Lobiyal
B$\acute{e}$zier curves based on Lupa\c{s} $(p,q)$-analogue of Bernstein polynomials in CAGD
24 pages, 9 figures, $(p,q)$-lupas operator and limit $(p,q)$-lupas operators and their property introduced, typo corrected
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we use the blending functions of Lupa\c{s} type (rational) $(p,q)$-Bernstein operators based on $(p,q)$-integers for construction of Lupa\c{s} $(p,q)$-B$\acute{e}$zier curves (rational curves) and surfaces (rational surfaces) with shape parameters. We study the nature of degree elevation and degree reduction for Lupa\c{s} $(p,q)$-B$\acute{e}$zier Bernstein functions. Parametric curves are represented using Lupa\c{s} $(p,q)$-Bernstein basis. We introduce affine de Casteljau algorithm for Lupa\c{s} type $(p,q)$-Bernstein B$\acute{e}$zier curves. The new curves have some properties similar to $q$-B$\acute{e}$zier curves. Moreover, we construct the corresponding tensor product surfaces over the rectangular domain $(u, v) \in [0, 1] \times [0, 1] $ depending on four parameters. We also study the de Casteljau algorithm and degree evaluation properties of the surfaces for these generalization over the rectangular domain. We get $q$-B$\acute{e}$zier surfaces for $(u, v) \in [0, 1] \times [0, 1] $ when we set the parameter $p_1=p_2=1.$ In comparison to $q$-B$\acute{e}$zier curves and surfaces based on Lupa\c{s} $q$-Bernstein polynomials, our generalization gives us more flexibility in controlling the shapes of curves and surfaces. We also show that the $(p,q)$-analogue of Lupa\c{s} Bernstein operator sequence $L^{n}_{p_n,q_n}(f,x)$ converges uniformly to $f(x)\in C[0,1]$ if and only if $0<q_n<p_n\leq1$ such that $\lim\limits_{n\to\infty} q_n=1, $ $\lim\limits_{n\to\infty} p_n=1$ and $\lim\limits_{n\to\infty}p_n^n=a,$ $\lim\limits_{n\to\infty}q_n^n=b$ with $0<a,b\leq1.$ On the other hand, for any $p>0$ fixed and $p \neq 1,$ the sequence $L^{n}_{p,q}(f,x)$ converges uniformly to $f(x)~ \in C[0,1]$ if and only if $f(x)=ax+b$ for some $a, b \in \mathbb{R}.$
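For readers unfamiliar with the recursion being generalized here, the classical de Casteljau evaluation of a Bezier curve is sketched below. This is the standard case only (not the Lupas (p,q)-analogue the record constructs), and the control points are toy values.

```python
# Classical de Casteljau evaluation by repeated linear interpolation.
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one interpolation pass
    return pts[0]

# Sample the curve defined by four toy 2-D control points.
curve = [de_casteljau([[0, 0], [1, 2], [3, 3], [4, 0]], t)
         for t in np.linspace(0.0, 1.0, 11)]
```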
[ { "version": "v1", "created": "Thu, 7 May 2015 18:36:37 GMT" }, { "version": "v2", "created": "Sun, 16 Aug 2015 12:07:50 GMT" }, { "version": "v3", "created": "Tue, 1 Dec 2015 20:14:26 GMT" }, { "version": "v4", "created": "Fri, 15 Apr 2016 20:46:06 GMT" }, { "version": "v5", "created": "Sat, 11 Jun 2016 11:47:23 GMT" } ]
2016-06-14T00:00:00
[ [ "Khan", "Khalid", "" ], [ "Lobiyal", "D. K.", "" ] ]
new_dataset
0.975237
1511.01756
Suzanne Patience Mpouli Njanga Seh
Suzanne Mpouli (ACASA), Jean-Gabriel Ganascia (ACASA)
"Pale as death" or "p\^ale comme la mort" : Frozen similes used as literary clich\'es
EUROPHRAS2015:COMPUTERISED AND CORPUS-BASED APPROACHES TO PHRASEOLOGY: MONOLINGUAL AND MULTILINGUAL PERSPECTIVES, Jun 2015, Malaga, Spain
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present study is focused on the automatic identification and description of frozen similes in British and French novels written between the 19 th century and the beginning of the 20 th century. Two main patterns of frozen similes were considered: adjectival ground + simile marker + nominal vehicle (e.g. happy as a lark) and eventuality + simile marker + nominal vehicle (e.g. sleep like a top). All potential similes and their components were first extracted using a rule-based algorithm. Then, frozen similes were identified based on reference lists of existing similes and semantic distance between the tenor and the vehicle. The results obtained tend to confirm the fact that frozen similes are not used haphazardly in literary texts. In addition, contrary to how they are often presented, frozen similes often go beyond the ground or the eventuality and the vehicle to also include the tenor.
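A rough sketch of the first extraction pattern mentioned in this record (adjectival ground + simile marker + nominal vehicle, e.g. "pale as death"). The tiny adjective list and the regular expression are hypothetical stand-ins for the paper's rule-based extractor.

```python
# Naive regex-based simile pattern matcher (illustrative only).
import re

ADJECTIVES = {"pale", "happy", "cold", "quiet"}          # toy ground lexicon
pattern = re.compile(
    r"\b(?P<ground>\w+)\s+(?:as|like)\s+(?:a\s+|an\s+|the\s+)?(?P<vehicle>\w+)\b",
    re.IGNORECASE,
)

def extract_similes(text):
    hits = []
    for m in pattern.finditer(text):
        if m.group("ground").lower() in ADJECTIVES:      # keep adjectival grounds only
            hits.append((m.group("ground"), m.group("vehicle")))
    return hits

print(extract_similes("She turned pale as death and sat happy as a lark."))
# [('pale', 'death'), ('happy', 'lark')]
```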
[ { "version": "v1", "created": "Thu, 5 Nov 2015 14:20:01 GMT" }, { "version": "v2", "created": "Mon, 13 Jun 2016 07:35:38 GMT" } ]
2016-06-14T00:00:00
[ [ "Mpouli", "Suzanne", "", "ACASA" ], [ "Ganascia", "Jean-Gabriel", "", "ACASA" ] ]
new_dataset
0.998817
1603.00982
Yu-An Chung
Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, Lin-Shan Lee
Audio Word2Vec: Unsupervised Learning of Audio Segment Representations using Sequence-to-sequence Autoencoder
null
null
null
null
cs.SD cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vector representations of fixed dimensionality for words (in text) offered by Word2Vec have been shown to be very useful in many application scenarios, in particular due to the semantic information they carry. This paper proposes a parallel version, the Audio Word2Vec. It offers vector representations of fixed dimensionality for variable-length audio segments. These vector representations are shown to describe the sequential phonetic structures of the audio segments to a good degree, with very attractive real-world applications such as query-by-example Spoken Term Detection (STD). In this STD application, the proposed approach significantly outperformed the conventional Dynamic Time Warping (DTW) based approaches with significantly lower computation requirements. We propose unsupervised learning of Audio Word2Vec from audio data without human annotation using a Sequence-to-sequence Autoencoder (SA). SA consists of two RNNs equipped with Long Short-Term Memory (LSTM) units: the first RNN (encoder) maps the input audio sequence into a vector representation of fixed dimensionality, and the second RNN (decoder) maps the representation back to the input audio sequence. The two RNNs are jointly trained by minimizing the reconstruction error. A Denoising Sequence-to-sequence Autoencoder (DSA) is further proposed, offering more robust learning.
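A minimal sketch of the sequence-to-sequence autoencoder described in this record, written with PyTorch as an assumed framework (the record does not prescribe one). Layer sizes, the decoder unrolling scheme and the toy data are illustrative choices, not the authors' exact architecture.

```python
# LSTM sequence-to-sequence autoencoder sketch (illustrative).
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, feat_dim=39, embed_dim=128):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, embed_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, feat_dim)

    def forward(self, x):                             # x: (batch, time, feat_dim)
        _, (h, _) = self.encoder(x)                   # h: (1, batch, embed_dim)
        z = h[-1]                                     # fixed-length audio embedding
        dec_in = z.unsqueeze(1).expand(-1, x.size(1), -1)  # feed z at every step
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), z

model = SeqAutoencoder()
x = torch.randn(4, 50, 39)                            # 4 toy segments of 50 frames
recon, embedding = model(x)
loss = nn.functional.mse_loss(recon, x)               # reconstruction objective
loss.backward()
```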
[ { "version": "v1", "created": "Thu, 3 Mar 2016 05:44:51 GMT" }, { "version": "v2", "created": "Tue, 15 Mar 2016 14:16:28 GMT" }, { "version": "v3", "created": "Thu, 17 Mar 2016 06:11:47 GMT" }, { "version": "v4", "created": "Sat, 11 Jun 2016 03:40:23 GMT" } ]
2016-06-14T00:00:00
[ [ "Chung", "Yu-An", "" ], [ "Wu", "Chao-Chung", "" ], [ "Shen", "Chia-Hao", "" ], [ "Lee", "Hung-Yi", "" ], [ "Lee", "Lin-Shan", "" ] ]
new_dataset
0.996994
1605.08859
Baokun Ding
Baokun Ding, Gennian Ge, Jun Zhang, Tao Zhang and Yiwei Zhang
New Constructions of MDS Symbol-Pair Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the application of high-density data storage technologies, symbol-pair codes are proposed to protect against pair-errors in symbol-pair channels, whose outputs are overlapping pairs of symbols. The research of symbol-pair codes with the largest minimum pair-distance is interesting since such codes have the best possible error-correcting capability. A symbol-pair code attaining the maximal minimum pair-distance is called a maximum distance separable (MDS) symbol-pair code. In this paper, we focus on constructing linear MDS symbol-pair codes over the finite field $\mathbb{F}_{q}$. We show that a linear MDS symbol-pair code over $\mathbb{F}_{q}$ with pair-distance $5$ exists if and only if the length $n$ ranges from $5$ to $q^2+q+1$. As for codes with pair-distance $6$, length ranging from $6$ to $q^{2}+1$, we construct linear MDS symbol-pair codes by using a configuration called ovoid in projective geometry. With the help of elliptic curves, we present a construction of linear MDS symbol-pair codes for any pair-distance $d+2$ with length $n$ satisfying $7\le d+2\leq n\le q+\lfloor 2\sqrt{q}\rfloor+\delta(q)-3$, where $\delta(q)=0$ or $1$.
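To make the pair-distance notion concrete, a short sketch computing the symbol-pair distance of a toy code: each word is read as overlapping (cyclic) symbol pairs and the Hamming distance is taken between the pair streams. The code below is an arbitrary example, not one of the constructions in the record.

```python
# Symbol-pair distance of a code (illustrative).
from itertools import combinations

def to_pairs(word):
    n = len(word)
    return [(word[i], word[(i + 1) % n]) for i in range(n)]   # cyclic pair reading

def pair_distance(u, v):
    return sum(a != b for a, b in zip(to_pairs(u), to_pairs(v)))

def minimum_pair_distance(code):
    return min(pair_distance(u, v) for u, v in combinations(code, 2))

code = [(0, 0, 0, 0, 0), (0, 1, 1, 0, 2), (2, 0, 1, 1, 0)]     # toy words
print(minimum_pair_distance(code))
```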
[ { "version": "v1", "created": "Sat, 28 May 2016 08:00:25 GMT" }, { "version": "v2", "created": "Mon, 13 Jun 2016 05:02:21 GMT" } ]
2016-06-14T00:00:00
[ [ "Ding", "Baokun", "" ], [ "Ge", "Gennian", "" ], [ "Zhang", "Jun", "" ], [ "Zhang", "Tao", "" ], [ "Zhang", "Yiwei", "" ] ]
new_dataset
0.996271
1606.03473
Huaizu Jiang
Huaizu Jiang and Erik Learned-Miller
Face Detection with the Faster R-CNN
technical report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A.
[ { "version": "v1", "created": "Fri, 10 Jun 2016 20:34:39 GMT" } ]
2016-06-14T00:00:00
[ [ "Jiang", "Huaizu", "" ], [ "Learned-Miller", "Erik", "" ] ]
new_dataset
0.998133
1606.03513
Sanjib Tiwari
Sanjib Tiwari, Michael Lane and Khorshed Alam
The challenges and opportunities of delivering wireless high speed broadband services in Rural and Remote Australia: A Case Study of Western Downs Region (WDR)
ISBN# 978-0-646-95337-3 Presented at the Australasian Conference on Information Systems 2015 (arXiv:1605.01032)
null
null
ACIS/2015/205
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper critically assesses wireless broadband internet infrastructure, in the rural and remote communities of WDR in terms of supply, demand and utilisation. Only 8 of 20 towns have ADSL/ADSL2+, and only 3 towns have 4G mobile network coverage. Conversely all of the towns have 2G/3G mobile network coverage but have problems with speed, reliability of service and capacity to handle data traffic loads at peak times. Satellite broadband internet for remote areas is also patchy at best. Satisfaction with existing wireless broadband internet services is highly variable across rural and remote communities in WDR. Finally we provide suggestions to improve broadband internet access for rural and remote communities. Public and private investment and sharing of wired and wireless broadband internet infrastructure is needed to provide the backhaul networks and 4G mobile and fixed wireless services to ensure high speed, reliable and affordable broadband services for rural and remote communities.
[ { "version": "v1", "created": "Sat, 11 Jun 2016 00:40:39 GMT" } ]
2016-06-14T00:00:00
[ [ "Tiwari", "Sanjib", "" ], [ "Lane", "Michael", "" ], [ "Alam", "Khorshed", "" ] ]
new_dataset
0.969409
1606.03542
Sanjib Tiwari
Sanjib Tiwari, Michael Lane and Khorshed Alam
Does Broadband Connectivity and Social networking sites build and maintain social capital in rural communities?
ISBN# 978-0-646-95337-3 Presented at the Australasian Conference on Information Systems 2015 (arXiv:1605.01032)
null
null
ACIS/2015/224
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Broadband internet access is a major enabling technology for building social capital (SC) by better connecting rural and regional communities, which are often geographically dispersed locally, nationally and internationally. The main objective of this paper was to determine to what extent Social Networking Sites (SNS) can build SC for households in a rural and regional context of household adoption and use of broadband internet. A large-scale survey of households was used to collect empirical data regarding household adoption and use of broadband internet services, including SNSs, and their contribution to building SC in rural communities. The results of this study confirmed that SNSs appear to build SC along two high-level dimensions, bonding and bridging, for households in rural communities such as the Western Downs Region. Moreover, SNS users appear to have significantly higher levels of SC than non-SNS users in rural communities.
[ { "version": "v1", "created": "Sat, 11 Jun 2016 03:34:05 GMT" } ]
2016-06-14T00:00:00
[ [ "Tiwari", "Sanjib", "" ], [ "Lane", "Michael", "" ], [ "Alam", "Khorshed", "" ] ]
new_dataset
0.987814
1606.03628
Jiaping Zhao
Jiaping Zhao, Zerong Xi and Laurent Itti
metricDTW: local distance metric learning in Dynamic Time Warping
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences. Temporal sequences are first aligned by dynamic time warping (DTW); given the alignment path, similarity between two sequences is measured by the DTW distance, which is computed as the accumulated distance between matched temporal point pairs along the alignment path. Traditionally, Euclidean metric is used for distance computation between matched pairs, which ignores the data regularities and might not be optimal for applications at hand. Here we propose to learn multiple Mahalanobis metrics, such that DTW distance becomes the sum of Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN) framework to our case, and formulate multiple metric learning as a linear programming problem. Extensive sequence classification results show that our proposed multiple metrics learning approach is effective, insensitive to the preceding alignment qualities, and reaches the state-of-the-art performances on UCR time series datasets.
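A compact DTW sketch in which the local distance between matched frames is a Mahalanobis distance d(a, b) = (a-b)^T M (a-b), as in the model described in this record. Here a single fixed M is used; the paper learns multiple such metrics, which this sketch does not attempt.

```python
# DTW with a Mahalanobis local distance (illustrative).
import numpy as np

def dtw_mahalanobis(X, Y, M):
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = X[i - 1] - Y[j - 1]
            cost = float(diff @ M @ diff)                       # local Mahalanobis cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

X = np.random.default_rng(0).normal(size=(20, 3))               # two toy multivariate series
Y = np.random.default_rng(1).normal(size=(25, 3))
dist = dtw_mahalanobis(X, Y, M=np.eye(3))                       # M = I gives squared-Euclidean costs
```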
[ { "version": "v1", "created": "Sat, 11 Jun 2016 21:14:08 GMT" } ]
2016-06-14T00:00:00
[ [ "Zhao", "Jiaping", "" ], [ "Xi", "Zerong", "" ], [ "Itti", "Laurent", "" ] ]
new_dataset
0.9982
1606.03774
Chenxia Wu
Chenxia Wu, Jiemi Zhang, Ashutosh Saxena, Silvio Savarese
Human Centred Object Co-Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Co-segmentation is the automatic extraction of the common semantic regions given a set of images. Unlike previous approaches based mainly on object visuals, in this paper we propose a human-centred object co-segmentation approach, which uses the human as another strong source of evidence. In order to discover the rich internal structure of the objects reflecting their human-object interactions and visual similarities, we propose an unsupervised fully connected CRF auto-encoder incorporating the rich object features and a novel human-object interaction representation. We propose an efficient learning and inference algorithm to allow the full connectivity of the CRF with the auto-encoder, which establishes pairwise relations on all pairs of the object proposals in the dataset. Moreover, the auto-encoder learns the parameters from the data itself rather than through supervised learning or the manually assigned parameters of the conventional CRF. In the extensive experiments on four datasets, we show that our approach is able to extract the common objects more accurately than the state-of-the-art co-segmentation algorithms.
[ { "version": "v1", "created": "Sun, 12 Jun 2016 22:36:53 GMT" } ]
2016-06-14T00:00:00
[ [ "Wu", "Chenxia", "" ], [ "Zhang", "Jiemi", "" ], [ "Saxena", "Ashutosh", "" ], [ "Savarese", "Silvio", "" ] ]
new_dataset
0.990096
1606.03788
Michael Jacobs
Vishwa S. Parekh, Jeremy R. Jacobs, Michael A. Jacobs
Unsupervised Non Linear Dimensionality Reduction Machine Learning methods applied to Multiparametric MRI in cerebral ischemia: Preliminary Results
9 pages
Proceedings of the SPIE, Volume 9034, id. 90342O 9 pp. (2014)
10.1117/12.2044001
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue that is at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map, which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed that there was a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC map and perfusion map. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
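An illustrative use of an off-the-shelf nonlinear dimensionality reduction method (Isomap from scikit-learn) to embed multiparametric voxel features into two dimensions, in the spirit of the NLDR pipeline described in this record. The random feature matrix stands in for stacked T1/T2/DWI/PWI intensities; it is not real MRI data, and the record does not state that Isomap is the specific method used.

```python
# Embedding multiparametric voxel features with a generic NLDR method (illustrative).
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
voxels = rng.normal(size=(500, 4))        # 500 toy voxels x 4 MRI parameters

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(voxels)
print(embedding.shape)                     # (500, 2) low-dimensional map
```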
[ { "version": "v1", "created": "Mon, 13 Jun 2016 01:29:04 GMT" } ]
2016-06-14T00:00:00
[ [ "Parekh", "Vishwa S.", "" ], [ "Jacobs", "Jeremy R.", "" ], [ "Jacobs", "Michael A.", "" ] ]
new_dataset
0.970971
1606.03838
Boyue Wang
Boyue Wang and Yongli Hu and Junbin Gao and Yanfeng Sun and Baocai Yin
Laplacian LRR on Product Grassmann Manifolds for Human Activity Clustering in Multi-Camera Video Surveillance
14pages,submitting to IEEE TCSVT with minor revision
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In multi-camera video surveillance, it is challenging to represent videos from different cameras properly and fuse them efficiently for specific applications such as human activity recognition and clustering. In this paper, a novel representation for multi-camera video data, namely the Product Grassmann Manifold (PGM), is proposed to model video sequences as points on the Grassmann manifold and integrate them as a whole in the product manifold form. Additionally, with a new geometry metric on the product manifold, the conventional Low Rank Representation (LRR) model is extended onto PGM and the new LRR model can be used for clustering non-linear data, such as multi-camera video data. To evaluate the proposed method, a number of clustering experiments are conducted on several multi-camera video datasets of human activity, including Dongzhimen Transport Hub Crowd action dataset, ACT 42 Human action dataset and SKIG action dataset. The experiment results show that the proposed method outperforms many state-of-the-art clustering methods.
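For intuition, a short sketch of how a video clip can be mapped to a point on a Grassmann manifold, the representation this record builds on: stack the frames as columns and keep an orthonormal basis of their span via a thin SVD. Frame sizes and the subspace dimension are arbitrary toy choices, and the product-manifold and LRR machinery are not reproduced.

```python
# Video clip -> Grassmann point via thin SVD (illustrative).
import numpy as np

def video_to_grassmann_point(frames, subspace_dim=5):
    """frames: (num_frames, height, width) -> orthonormal basis (pixels, subspace_dim)."""
    X = frames.reshape(frames.shape[0], -1).T            # pixels x frames data matrix
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :subspace_dim]                            # a point on Gr(subspace_dim, pixels)

clip = np.random.default_rng(0).random((30, 24, 32))      # 30 toy frames
P = video_to_grassmann_point(clip)
print(P.shape, np.allclose(P.T @ P, np.eye(P.shape[1])))  # columns are orthonormal
```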
[ { "version": "v1", "created": "Mon, 13 Jun 2016 07:09:39 GMT" } ]
2016-06-14T00:00:00
[ [ "Wang", "Boyue", "" ], [ "Hu", "Yongli", "" ], [ "Gao", "Junbin", "" ], [ "Sun", "Yanfeng", "" ], [ "Yin", "Baocai", "" ] ]
new_dataset
0.964312
1606.03846
Astrid Weiss
Astrid Weiss and Andreas Huber
User Experience of a Smart Factory Robot: Assembly Line Workers Demand Adaptive Robots
5th International Symposium on New Frontiers in Human-Robot Interaction 2016 (arXiv:1602.05456)
null
null
AISB-NFHRI/2016/02
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reports a case study on the User Experience (UX)of an industrial robotic prototype in the context of human-robot cooperation in an automotive assembly line. The goal was to find out what kinds of suggestions the assembly line workers, who actually use the new robotic system, propose in order to improve the human-robot interaction (HRI). The operators working with the robotic prototype were interviewed three weeks after the deployment using established UX narrative interview guidelines. Our results show that the cooperation with a robot that executes predefined working steps actually impedes the user in terms of flexibility and individual speed. This results in a change of working routine for the operators, impacts the UX, and potentially leads to a decrease in productivity. We present the results of the interviews as well as first thoughts on technical solutions in order to enhance the adaptivity and subsequently the UX of the human-robot cooperation.
[ { "version": "v1", "created": "Mon, 13 Jun 2016 07:43:24 GMT" } ]
2016-06-14T00:00:00
[ [ "Weiss", "Astrid", "" ], [ "Huber", "Andreas", "" ] ]
new_dataset
0.997768
1606.03875
Pablo Gomez Esteban
Pablo G\'omez Esteban, Hoang-Long Cao, Albert De Beir, Greet Van de Perre, Dirk Lefeber and Bram Vanderborght
A multilayer reactive system for robots interacting with children with autism
5th International Symposium on New Frontiers in Human-Robot Interaction 2016 (arXiv:1602.05456)
null
null
AISB-NFHRI/2016/10
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a lack of autonomy on traditional Robot-Assisted Therapy systems interacting with children with autism. To overcome this limitation a supervised autonomous robot controller is being built. In this paper we present a multilayer reactive system within such controller. The goal of this Reactive system is to allow the robot to appropriately react to the child's behavior creating the illusion of being alive.
[ { "version": "v1", "created": "Mon, 13 Jun 2016 09:42:14 GMT" } ]
2016-06-14T00:00:00
[ [ "Esteban", "Pablo Gómez", "" ], [ "Cao", "Hoang-Long", "" ], [ "De Beir", "Albert", "" ], [ "Van de Perre", "Greet", "" ], [ "Lefeber", "Dirk", "" ], [ "Vanderborght", "Bram", "" ] ]
new_dataset
0.99698
1606.03928
Konrad Siek
Konrad Siek and Pawe{\l} T. Wojciechowski
Atomic RMI 2: Highly Parallel Pessimistic Distributed Transactional Memory
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed Transactional Memory (DTM) is an emerging approach to distributed synchronization based on the application of the transaction abstraction to distributed computation. DTM comes in several system models, but the control flow model (CF) is particularly powerful, since it allows transactions to delegate computation to remote nodes as well as access shared data. However, there are no existing CF DTM systems that perform on par with state-of-the-art systems operating in other models. Hence, we introduce a CF DTM synchronization algorithm, OptSVA-CF. It supports fine-grained pessimistic concurrency control, so it avoids aborts, and thus avoids problems with irrevocable operations. Furthermore, it uses early release and asynchrony to parallelize concurrent transactions to a high degree, while retaining strong safety properties. We implement it as Atomic RMI 2, in effect producing a CF DTM system that, as our evaluation shows, can outperform a state-of-the-art non-CF DTM such as HyFlow2.
[ { "version": "v1", "created": "Mon, 13 Jun 2016 12:58:24 GMT" } ]
2016-06-14T00:00:00
[ [ "Siek", "Konrad", "" ], [ "Wojciechowski", "Paweł T.", "" ] ]
new_dataset
0.999336
1605.07168
Haibo Hong
Haibo Hong, Licheng Wang, Jun Shao, Haseeb Ahmad and Yixian Yang
A Miniature CCA2 Public key Encryption scheme based on non-Abelian factorization problems in Lie Groups
This paper has been withdrawn by the author due to substantial text overlap with arXiv:1605.06608
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the 1870s, scientists have been gaining deep insight into Lie groups and Lie algebras. With the development of Lie theory, Lie groups have acquired profound significance in many branches of mathematics and physics. In Lie theory, the exponential mapping between Lie groups and Lie algebras plays a crucial role. The exponential mapping is the mechanism for passing information from Lie algebras to Lie groups. Since many computations are performed much more easily by employing Lie algebras, the exponential mapping is indispensable when studying Lie groups. In this paper, we first put forward a novel idea of designing cryptosystems based on Lie groups and Lie algebras. Besides, combining the discrete logarithm problem (DLP) and the factorization problem (FP), we propose some new intractability assumptions based on the exponential mapping. Moreover, in analogy with Boyen's scheme (AsiaCrypt 2007), we design a public key encryption scheme based on non-Abelian factorization problems in Lie groups. Finally, our proposal is proved to be IND-CCA2 secure in the random oracle model.
[ { "version": "v1", "created": "Sat, 21 May 2016 09:10:15 GMT" }, { "version": "v2", "created": "Thu, 26 May 2016 02:56:37 GMT" }, { "version": "v3", "created": "Fri, 10 Jun 2016 10:55:39 GMT" } ]
2016-06-13T00:00:00
[ [ "Hong", "Haibo", "" ], [ "Wang", "Licheng", "" ], [ "Shao", "Jun", "" ], [ "Ahmad", "Haseeb", "" ], [ "Yang", "Yixian", "" ] ]
new_dataset
0.996141
1606.02562
Tiancheng Zhao
Tiancheng Zhao, Kyusong Lee, Maxine Eskenazi
DialPort: Connecting the Spoken Dialog Research Community to Real User Data
Under Peer Review of SigDial 2016
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper describes a new spoken dialog portal that connects systems produced by the spoken dialog academic research community and gives them access to real users. We introduce a distributed, multi-modal, multi-agent prototype dialog framework that affords easy integration with various remote resources, ranging from end-to-end dialog systems to external knowledge APIs. To date, the DialPort portal has successfully connected to the multi-domain spoken dialog system at Cambridge University, the NOAA (National Oceanic and Atmospheric Administration) weather API and the Yelp API.
[ { "version": "v1", "created": "Wed, 8 Jun 2016 14:08:21 GMT" } ]
2016-06-13T00:00:00
[ [ "Zhao", "Tiancheng", "" ], [ "Lee", "Kyusong", "" ], [ "Eskenazi", "Maxine", "" ] ]
new_dataset
0.999521
1606.03198
Annalisa De Bonis
Annalisa De Bonis
Conflict Resolution in Multiple Access Channels Supporting Simultaneous Successful Transmissions
null
null
null
null
cs.DS cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Conflict Resolution Problem in the context of a multiple-access system in which several stations can transmit their messages simultaneously to the channel. We assume that there are n stations and that at most k <= n stations are active at the same time, i.e., are willing to transmit a message. If in a certain instant at most d <= k active stations transmit to the channel then their messages are successfully transmitted, whereas if more than d active stations transmit simultaneously then their messages are lost. In this latter case we say that a conflict occurs. The present paper investigates non-adaptive conflict resolution algorithms working under the assumption that active stations receive feedback from the channel informing them whether their messages have been successfully transmitted. If a station becomes aware that its message has been correctly sent over the channel then it becomes immediately inactive. The measure to optimize is the number of time slots needed to solve conflicts among all active stations. The fundamental question is whether this measure decreases linearly with the number d of messages that can be simultaneously transmitted with success. We give a positive answer to this question by providing a conflict resolution algorithm that uses a 1/d fraction of the number of time slots used by the optimal conflict resolution algorithm for the case d=1. Moreover, we derive a lower bound on the number of time slots needed to solve conflicts non-adaptively which is within a log(k/d) factor of the upper bound. To this aim, we introduce a new combinatorial structure that consists of a generalization of Komlos and Greenberg codes. Constructions of these new codes are obtained via a new generalization of selectors, whereas the non-existence result is implied by a non-existence result for a new generalization of locally thin families.
[ { "version": "v1", "created": "Fri, 10 Jun 2016 06:09:42 GMT" } ]
2016-06-13T00:00:00
[ [ "De Bonis", "Annalisa", "" ] ]
new_dataset
0.987345
1606.03199
Zhiyong Sun
Zhiyong Sun, Myoung-Chul Park, Brian D. O. Anderson, Hyo-Sung Ahn
Distributed stabilization control of rigid formations with prescribed orientation
This paper was submitted to Automatica for publication. Compared to the submitted version, this arXiv version contains complete proofs, examples and remarks (some of which were removed from the submitted version due to space limits).
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most rigid formation controllers reported in the literature aim to only stabilize a rigid formation shape, while the formation orientation is not controlled. This paper studies the problem of controlling rigid formations with prescribed orientations in both 2-D and 3-D spaces. The proposed controllers involve the commonly-used gradient descent control for shape stabilization, and an additional term to control the directions of certain relative position vectors associated with certain chosen agents. In this control framework, we show the minimal number of agents which should have knowledge of a global coordinate system (2 agents for a 2-D rigid formation and 3 agents for a 3-D rigid formation), while all other agents do not require any global coordinate knowledge or any coordinate frame alignment to implement the proposed control. The exponential convergence to the desired rigid shape and formation orientation is also proved. Typical simulation examples are shown to support the analysis and performance of the proposed formation controllers.
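To make the control law referenced in the abstract concrete, here is a minimal simulation sketch of the commonly-used gradient-descent law for rigid-shape stabilization on a triangle. It is a sketch under stated assumptions, not the paper's full controller: the additional orientation-control term is omitted, and the graph, desired distances and step size are illustrative choices.

```python
import numpy as np

# Hedged sketch: the commonly-used gradient law for rigid-shape stabilization,
# u_i = -sum_j (||p_i - p_j||^2 - d_ij^2)(p_i - p_j).
# The paper's extra orientation-control term is not reproduced here.
edges = [(0, 1), (1, 2), (0, 2)]            # triangle formation graph (assumed)
d = {e: 1.0 for e in edges}                 # desired inter-agent distances (assumed)
p = np.random.rand(3, 2)                    # random initial 2-D positions
dt, steps = 0.01, 3000                      # Euler step and horizon (assumed)

for _ in range(steps):
    u = np.zeros_like(p)
    for (i, j) in edges:
        z = p[i] - p[j]
        err = z @ z - d[(i, j)] ** 2        # squared-distance error on edge (i, j)
        u[i] -= err * z
        u[j] += err * z
    p += dt * u                             # single-integrator agent dynamics

# Edge lengths should approach the desired value 1.0.
print([round(float(np.linalg.norm(p[i] - p[j])), 3) for (i, j) in edges])
```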
[ { "version": "v1", "created": "Fri, 10 Jun 2016 06:13:26 GMT" } ]
2016-06-13T00:00:00
[ [ "Sun", "Zhiyong", "" ], [ "Park", "Myoung-Chul", "" ], [ "Anderson", "Brian D. O.", "" ], [ "Ahn", "Hyo-Sung", "" ] ]
new_dataset
0.998794
1606.03302
Moustafa Elhamshary
Moustafa Elhamshary, Moustafa Youssef, Akira Uchiyama, Hirozumi Yamaguchi, Teruo Higashino
TransitLabel: A Crowd-Sensing System for Automatic Labeling of Transit Stations Semantics
14 pages, 17 figures, published in MobiSys 2016
null
10.1145/2906388.2906395
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present TransitLabel, a crowd-sensing system for automatic enrichment of transit stations indoor floorplans with different semantics like ticket vending machines, entrance gates, drink vending machines, platforms, cars' waiting lines, restrooms, lockers, waiting (sitting) areas, among others. Our key observations show that certain passengers' activities (e.g., purchasing tickets, crossing entrance gates, etc.) present identifiable signatures on one or more cell-phone sensors. TransitLabel leverages this fact to automatically and unobtrusively recognize different passengers' activities, which in turn are mined to infer their uniquely associated stations semantics. Furthermore, the locations of the discovered semantics are automatically estimated from the inaccurate passengers' positions when these semantics are identified. We evaluate TransitLabel through a field experiment in eight different train stations in Japan. Our results show that TransitLabel can detect the fine-grained stations semantics accurately with a 7.7% false positive rate and a 7.5% false negative rate on average. In addition, it can consistently detect the location of discovered semantics accurately, achieving an error within 2.5m on average for all semantics. Finally, we show that TransitLabel has a small energy footprint on cell-phones, could be generalized to other stations, and is robust to different phone placements, highlighting its promise as a ubiquitous indoor maps enriching service.
[ { "version": "v1", "created": "Fri, 10 Jun 2016 13:00:05 GMT" } ]
2016-06-13T00:00:00
[ [ "Elhamshary", "Moustafa", "" ], [ "Youssef", "Moustafa", "" ], [ "Uchiyama", "Akira", "" ], [ "Yamaguchi", "Hirozumi", "" ], [ "Higashino", "Teruo", "" ] ]
new_dataset
0.999111
1606.03335
Roman Bartusiak
Roman Bartusiak, {\L}ukasz Augustyniak, Tomasz Kajdanowicz, Przemys{\l}aw Kazienko, Maciej Piasecki
WordNet2Vec: Corpora Agnostic Word Vectorization Method
29 pages, 16 figures, submitted to journal
null
null
null
cs.CL cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The complex nature of big data resources demands new structuring methods, especially for textual content. WordNet is a good knowledge source for a comprehensive abstraction of natural language, as good implementations of it exist for many languages. Since WordNet embeds natural language in the form of a complex network, a transformation mechanism, WordNet2Vec, is proposed in this paper. It creates a vector for each word in WordNet. These vectors encapsulate the general position, or role, of a given word with respect to all other words in the language. Any list or set of such vectors contains knowledge about the context of its components within the whole language. Such a word representation can easily be applied to many analytic tasks, such as classification or clustering. The usefulness of the WordNet2Vec method was demonstrated in sentiment analysis, i.e. classification with transfer learning for the real Amazon opinion textual dataset.
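The abstract does not spell out the exact vectorization, so the sketch below shows one plausible reading, stated as an assumption rather than the authors' recipe: each word is mapped to the vector of its shortest-path distances to every other word in a tiny stand-in for the WordNet graph.

```python
from collections import deque

# Hedged sketch: encode a word's position relative to all other words as a
# vector of BFS shortest-path distances in a toy WordNet-like graph. The toy
# graph and the distance-based encoding are illustrative assumptions.
toy_graph = {
    "dog": ["canine", "pet"], "canine": ["dog", "animal"],
    "pet": ["dog", "cat"], "cat": ["pet", "animal"], "animal": ["canine", "cat"],
}
nodes = sorted(toy_graph)

def distance_vector(source):
    dist = {source: 0}
    queue = deque([source])
    while queue:                              # standard BFS over the word graph
        u = queue.popleft()
        for v in toy_graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [dist.get(n, float("inf")) for n in nodes]

vectors = {w: distance_vector(w) for w in nodes}
print(vectors["dog"])                         # one fixed-length vector per word
```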
[ { "version": "v1", "created": "Fri, 10 Jun 2016 14:12:47 GMT" } ]
2016-06-13T00:00:00
[ [ "Bartusiak", "Roman", "" ], [ "Augustyniak", "Łukasz", "" ], [ "Kajdanowicz", "Tomasz", "" ], [ "Kazienko", "Przemysław", "" ], [ "Piasecki", "Maciej", "" ] ]
new_dataset
0.978068
1111.2626
Sigal Oren
Moshe Babaioff and Shahar Dobzinski and Sigal Oren and Aviv Zohar
On Bitcoin and Red Balloons
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many large decentralized systems rely on information propagation to ensure their proper function. We examine a common scenario in which only participants that are aware of the information can compete for some reward, and thus informed participants have an incentive not to propagate information to others. One recent example in which such tension arises is the 2009 DARPA Network Challenge (finding red balloons). We focus on another prominent example: Bitcoin, a decentralized electronic currency system. Bitcoin represents a radical new approach to monetary systems. It has been getting a large amount of public attention over the last year, both in policy discussions and in the popular press. Its cryptographic fundamentals have largely held up even as its usage has become increasingly widespread. We find, however, that it exhibits a fundamental problem of a different nature, based on how its incentives are structured. We propose a modification to the protocol that can eliminate this problem. Bitcoin relies on a peer-to-peer network to track transactions that are performed with the currency. For this purpose, every transaction a node learns about should be transmitted to its neighbors in the network. The currently implemented protocol gives nodes an incentive not to broadcast transactions they are aware of. Our solution is to augment the protocol with a scheme that rewards information propagation. Since clones are easy to create in the Bitcoin system, an important feature of our scheme is Sybil-proofness. We show that our proposed scheme succeeds in setting the correct incentives, that it is Sybil-proof, and that it requires only a small payment overhead; all of this is achieved with iterated elimination of dominated strategies. We complement this result by showing that there are no reward schemes in which information propagation and no self-cloning is a dominant strategy.
[ { "version": "v1", "created": "Thu, 10 Nov 2011 22:07:26 GMT" }, { "version": "v2", "created": "Thu, 9 Jun 2016 07:11:16 GMT" } ]
2016-06-10T00:00:00
[ [ "Babaioff", "Moshe", "" ], [ "Dobzinski", "Shahar", "" ], [ "Oren", "Sigal", "" ], [ "Zohar", "Aviv", "" ] ]
new_dataset
0.999294
1506.04059
Pierre-Alain Reynier
Luc Dartois, Isma\"el Jecker, Pierre-Alain Reynier
Aperiodic String Transducers
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regular string-to-string functions enjoy a nice triple characterization through deterministic two-way transducers (2DFT), streaming string transducers (SST) and MSO definable functions. This result has recently been lifted to FO definable functions, with equivalent representations by means of aperiodic 2DFT and aperiodic 1-bounded SST, extending a well-known result on regular languages. In this paper, we give three direct transformations: i) from 1-bounded SST to 2DFT, ii) from 2DFT to copyless SST, and iii) from k-bounded to 1-bounded SST. We give the complexity of each construction and also prove that they preserve the aperiodicity of transducers. As corollaries, we obtain that FO definable string-to-string functions are equivalent to SST whose transition monoid is finite and aperiodic, and to aperiodic copyless SST.
[ { "version": "v1", "created": "Fri, 12 Jun 2015 16:18:21 GMT" }, { "version": "v2", "created": "Thu, 9 Jun 2016 12:58:56 GMT" } ]
2016-06-10T00:00:00
[ [ "Dartois", "Luc", "" ], [ "Jecker", "Ismaël", "" ], [ "Reynier", "Pierre-Alain", "" ] ]
new_dataset
0.988787
1507.02414
Angelo Fanelli
Angelo Fanelli and Gianluigi Greco
Ride Sharing with a Vehicle of Unlimited Capacity
null
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A ride sharing problem is considered where we are given a graph, whose edges are equipped with a travel cost, plus a set of objects, each associated with a transportation request given by a pair of origin and destination nodes. A vehicle travels through the graph, carrying each object from its origin to its destination without any bound on the number of objects that can be simultaneously transported. The vehicle starts and terminates its ride at given nodes, and the goal is to compute a minimum-cost ride satisfying all requests. This ride sharing problem is shown to be tractable on paths by designing a $O(h \log h+n)$ algorithm, with $h$ being the number of distinct requests and with $n$ being the number of nodes in the path. The algorithm is then used as a subroutine to efficiently solve instances defined over cycles, hence covering all graphs with maximum degree $2$. This traces the frontier of tractability, since $\bf NP$-hard instances are exhibited over trees whose maximum degree is $3$.
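As a concrete reference point for the problem statement above, the sketch below is a brute-force solver for the path case. It is emphatically not the O(h log h + n) algorithm from the paper; the instance at the end is an invented toy example, and the exhaustive search only scales to a handful of requests.

```python
from itertools import permutations

# Hedged sketch: brute-force minimum-cost ride on a path (nodes are points on a
# line). The vehicle starts at `start`, ends at `end`, and must visit each
# request's origin before its destination; capacity is unlimited.
def min_cost_ride(start, end, requests):
    events = [(i, "o") for i in range(len(requests))] + \
             [(i, "d") for i in range(len(requests))]
    best = float("inf")
    for order in permutations(events):
        picked = set()
        valid = True
        for i, kind in order:
            if kind == "d" and i not in picked:   # cannot deliver before pickup
                valid = False
                break
            if kind == "o":
                picked.add(i)
        if not valid:
            continue
        stops = [start] + [requests[i][0] if kind == "o" else requests[i][1]
                           for i, kind in order] + [end]
        cost = sum(abs(a - b) for a, b in zip(stops, stops[1:]))
        best = min(best, cost)
    return best

# Toy instance: two requests (2 -> 8) and (7 -> 3) on a path from 0 to 10.
print(min_cost_ride(0, 10, [(2, 8), (7, 3)]))     # -> 18
```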
[ { "version": "v1", "created": "Thu, 9 Jul 2015 08:31:35 GMT" }, { "version": "v2", "created": "Mon, 8 Feb 2016 15:05:00 GMT" }, { "version": "v3", "created": "Thu, 9 Jun 2016 10:55:38 GMT" } ]
2016-06-10T00:00:00
[ [ "Fanelli", "Angelo", "" ], [ "Greco", "Gianluigi", "" ] ]
new_dataset
0.999544
1606.01323
Kevin Clark
Kevin Clark and Christopher D. Manning
Improving Coreference Resolution by Learning Entity-Level Distributed Representations
Accepted for publication at the Association for Computational Linguistics (ACL), 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long-standing challenge in coreference resolution has been the incorporation of entity-level information - features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.
[ { "version": "v1", "created": "Sat, 4 Jun 2016 04:08:45 GMT" }, { "version": "v2", "created": "Wed, 8 Jun 2016 21:11:13 GMT" } ]
2016-06-10T00:00:00
[ [ "Clark", "Kevin", "" ], [ "Manning", "Christopher D.", "" ] ]
new_dataset
0.995138
1606.02711
Ferran Gal\'an
Ferran Gal\'an, Stuart N. Baker, Monica A. Perez
ChinMotion Rapidly Enables 3D Computer Interaction after Tetraplegia
The .ps file contains the main manuscript and supplementary information. It is accompanied by ancillary files (supplementary files)
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Individuals with severe paralysis require hands-free interfaces to control assistive devices that can improve their quality of life. We present ChinMotion, an interface that noninvasively harnesses preserved chin, lip and tongue sensorimotor function after tetraplegia to convey intuitive control commands. After two hours of practice, ChinMotion enables superior point-and-click performance over existing interfaces and it facilitates accurate 3D control of a virtual robotic arm.
[ { "version": "v1", "created": "Wed, 8 Jun 2016 14:21:53 GMT" } ]
2016-06-10T00:00:00
[ [ "Galán", "Ferran", "" ], [ "Baker", "Stuart N.", "" ], [ "Perez", "Monica A.", "" ] ]
new_dataset
0.998175
1606.02753
Michael McCann
Michael T. McCann and Matthew Fickus and Jelena Kovacevic
Rotation Invariant Angular Descriptor Via A Bandlimited Gaussian-like Kernel
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new smooth, Gaussian-like kernel that allows the kernel density estimate for an angular distribution to be exactly represented by a finite number of its Fourier series coefficients. Distributions of angular quantities, such as gradients, are a central part of several state-of-the-art image processing algorithms, but these distributions are usually described via histograms and therefore lack rotation invariance due to binning artifacts. Replacing histogramming with kernel density estimation removes these binning artifacts and can provide a finite-dimensional descriptor of the distribution, provided that the kernel is selected to be bandlimited. In this paper, we present a new bandlimited kernel that has the added advantage of being Gaussian-like in the angular domain. We then show that it compares favorably to gradient histograms for patch matching, person detection, and texture segmentation.
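A minimal sketch of the idea of representing an angular kernel density estimate by finitely many Fourier coefficients. The Gaussian-shaped, truncated kernel spectrum used below is an illustrative stand-in, not the specific kernel proposed in the paper; the bandwidth, order and sample data are assumptions.

```python
import numpy as np

# Hedged sketch: for a bandlimited kernel, the KDE's Fourier coefficients are the
# kernel's spectrum times the empirical characteristic function of the angles,
# so the descriptor is exactly finite-dimensional.
def angular_descriptor(angles, K=8, sigma=0.2):
    k = np.arange(-K, K + 1)
    g_hat = np.exp(-0.5 * (sigma * k) ** 2)            # bandlimited, Gaussian-shaped spectrum (assumed)
    data_hat = np.exp(-1j * np.outer(k, angles)).mean(axis=1)
    return g_hat * data_hat                             # Fourier coefficients of the angular KDE

angles = np.random.vonmises(mu=0.0, kappa=4.0, size=500)  # toy angular samples
desc = angular_descriptor(angles)
print(np.round(np.abs(desc), 3))
```

Because rotating every sample by alpha only multiplies coefficient k by exp(-i k alpha), the coefficient magnitudes printed above form a rotation-invariant signature of the distribution.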
[ { "version": "v1", "created": "Wed, 8 Jun 2016 20:51:23 GMT" } ]
2016-06-10T00:00:00
[ [ "McCann", "Michael T.", "" ], [ "Fickus", "Matthew", "" ], [ "Kovacevic", "Jelena", "" ] ]
new_dataset
0.999014
1606.02809
Anand Sivamalai
Anand Sivamalai and Jamie S. Evans
On Uplink User Capacity for Massive MIMO Cellular Networks
7 pages, 6 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under the conditions where performance in a massive MIMO network is limited by pilot contamination, the reverse link signal-to-interference ratio (SIR) exhibits different distributions when using different pilot allocation schemes. By utilising different sets of orthogonal pilot sequences, as opposed to reused sequences amongst adjacent cells, the resulting SIR distribution is more favourable with respect to maximising the number of users on the network while maintaining a given quality of service (QoS) for all users. This paper provides a simple expression for uplink user capacity on such networks and presents uplink user capacity figures for both pilot allocation schemes for a selection of quality of service targets.
[ { "version": "v1", "created": "Thu, 9 Jun 2016 03:13:37 GMT" } ]
2016-06-10T00:00:00
[ [ "Sivamalai", "Anand", "" ], [ "Evans", "Jamie S.", "" ] ]
new_dataset
0.997722
1606.02831
Farooq Aftab
Farooq Aftab, Muhammad Nafees Ulfat khan, Shahzad Ali
Light fidelity (LI-FI) based indoor communication system
11 Pages, 7 figures, May 2016, International Journal of Computer Networks & Communications (IJCNC)
null
10.5121/ijcnc.2016.8302
null
cs.NI cs.IT math.IT
http://creativecommons.org/licenses/by-sa/4.0/
Indoor wireless communication is an essential part of next-generation wireless communication systems. For indoor communication, the number of users and their devices is increasing very rapidly; as a result, the capacity of the frequency spectrum to accommodate further users in the future is limited, and it would be difficult for service providers to offer more users reliable and high-speed communication. This shortcoming can be addressed in the future by using a Li-Fi based indoor communication system. Li-Fi, an emerging branch of optical wireless communication, can be useful in the future as a replacement for and backup of Wireless Fidelity (Wi-Fi) for indoor communication because it can provide a high data transmission rate along with a high capacity to accommodate more users, as its spectrum bandwidth is much broader than the radio spectrum. In this paper we look at different aspects of Li-Fi based indoor communication systems, summarize some of the research conducted so far, and propose a Li-Fi based communication model, keeping in mind the coverage area for multiple users, and evaluate its performance under different scenarios.
[ { "version": "v1", "created": "Thu, 9 Jun 2016 06:17:20 GMT" } ]
2016-06-10T00:00:00
[ [ "Aftab", "Farooq", "" ], [ "khan", "Muhammad Nafees Ulfat", "" ], [ "Ali", "Shahzad", "" ] ]
new_dataset
0.999591
1606.02879
Martin Schuster
Martin Schuster
Transducer-based Rewriting Games for Active XML
Extended version of MFCS 2016 conference paper
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context-free games are two-player rewriting games that are played on nested strings representing XML documents with embedded function symbols. These games were introduced to model rewriting processes for intensional documents in the Active XML framework, where input documents are to be rewritten into a given target schema by calls to external services. This paper studies the setting where dependencies between inputs and outputs of service calls are modelled by transducers, which has not been examined previously. It defines transducer models operating on nested words and studies their properties, as well as the computational complexity of the winning problem for transducer-based context-free games in several scenarios. While the complexity of this problem is quite high in most settings (ranging from NP-complete to undecidable), some tractable restrictions are also identified.
[ { "version": "v1", "created": "Thu, 9 Jun 2016 09:19:26 GMT" } ]
2016-06-10T00:00:00
[ [ "Schuster", "Martin", "" ] ]
new_dataset
0.99959
1606.02976
Gayo Diallo
Khadim Dram\'e (UB), Fleur Mougin (UB), Gayo Diallo (UB)
Large scale biomedical texts classification: a kNN and an ESA-based approaches
Journal of Biomedical Semantics, BioMed Central, 2016
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the large and increasing volume of textual data, automated methods for identifying significant topics to classify textual documents have received growing interest. While many efforts have been made in this direction, it still remains a real challenge. Moreover, the issue is even more complex as full texts are not always freely available. Using only partial information to annotate these documents is therefore promising but remains a very ambitious issue. Methods: We propose two classification methods: a k-nearest neighbours (kNN)-based approach and an explicit semantic analysis (ESA)-based approach. Although the kNN-based approach is widely used in text classification, it needs to be improved to perform well in this specific classification problem, which deals with partial information. Compared to existing kNN-based methods, our method uses classical Machine Learning (ML) algorithms for ranking the labels. Additional features are also investigated in order to improve the classifiers' performance. In addition, the combination of several learning algorithms with various techniques for fixing the number of relevant topics is performed. On the other hand, ESA seems promising for this classification task as it yielded interesting results in related issues, such as semantic relatedness computation between texts and text classification. Unlike existing works, which use ESA for enriching the bag-of-words approach with additional knowledge-based features, our ESA-based method builds a standalone classifier. Furthermore, we investigate whether the results of this method could be useful as a complementary feature of our kNN-based approach. Results: Experimental evaluations performed on large standard annotated datasets, provided by the BioASQ organizers, show that the kNN-based method with the Random Forest learning algorithm achieves good performance compared with the current state-of-the-art methods, reaching a competitive f-measure of 0.55%, while the ESA-based approach surprisingly yielded reserved results. Conclusions: We have proposed simple classification methods suitable for annotating textual documents using only partial information. They are therefore adequate for large multi-label classification, particularly in the biomedical domain. Thus, our work contributes to the extraction of relevant information from unstructured documents in order to facilitate their automated processing. Consequently, it could be used for various purposes, including document indexing, information retrieval, etc.
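To illustrate the kNN flavour of label ranking described above, the following sketch scores candidate labels for a new document by aggregating the labels of its nearest training documents under cosine similarity. The feature vectors, label names, k and the cut-off are invented toy values; the paper's ML-based ranking and the ESA classifier are not reproduced.

```python
import numpy as np

# Hedged sketch of kNN multi-label ranking: neighbours vote for their labels,
# weighted by cosine similarity, and the top-ranked labels are returned.
def knn_label_ranking(X_train, Y_train, x, k=3, top=2):
    sims = X_train @ x / (np.linalg.norm(X_train, axis=1) * np.linalg.norm(x) + 1e-12)
    nearest = np.argsort(-sims)[:k]
    scores = {}
    for i in nearest:
        for label in Y_train[i]:
            scores[label] = scores.get(label, 0.0) + sims[i]
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Toy documents as 3-dimensional feature vectors with invented MeSH-like labels.
X_train = np.array([[1.0, 0.2, 0.0], [0.9, 0.1, 0.1], [0.0, 0.1, 1.0]])
Y_train = [{"Neoplasms"}, {"Neoplasms", "Genetics"}, {"Cardiology"}]
print(knn_label_ranking(X_train, Y_train, np.array([0.95, 0.15, 0.05])))
```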
[ { "version": "v1", "created": "Thu, 9 Jun 2016 14:32:50 GMT" } ]
2016-06-10T00:00:00
[ [ "Dramé", "Khadim", "", "UB" ], [ "Mougin", "Fleur", "", "UB" ], [ "Diallo", "Gayo", "", "UB" ] ]
new_dataset
0.960476
1606.03002
Dirk Weissenborn
Dirk Weissenborn and Tim Rockt\"aschel
MuFuRU: The Multi-Function Recurrent Unit
null
null
null
null
cs.NE cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrent neural networks such as the GRU and LSTM found wide adoption in natural language processing and achieve state-of-the-art results for many tasks. These models are characterized by a memory state that can be written to and read from by applying gated composition operations to the current input and the previous state. However, they only cover a small subset of potentially useful compositions. We propose Multi-Function Recurrent Units (MuFuRUs) that allow for arbitrary differentiable functions as composition operations. Furthermore, MuFuRUs allow for an input- and state-dependent choice of these composition operations that is learned. Our experiments demonstrate that the additional functionality helps in different sequence modeling tasks, including the evaluation of propositional logic formulae, language modeling and sentiment analysis.
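A hedged numpy sketch of the core idea of an input- and state-dependent soft mixture of composition operations. The operation set, vector shapes and gating parameterisation below are illustrative assumptions rather than the paper's exact architecture, and no learning is performed.

```python
import numpy as np

# Hedged sketch: the new state is a learned soft mixture of several
# differentiable composition operations applied to (state, transformed input).
rng = np.random.default_rng(0)
d = 4                                               # hidden size (assumed)
ops = [
    lambda s, x: x,                                 # replace
    lambda s, x: s,                                 # keep
    lambda s, x: 0.5 * (s + x),                     # mean
    lambda s, x: np.maximum(s, x),                  # max
]
W_p = rng.normal(scale=0.1, size=(len(ops), 2 * d))  # op-selection weights (untrained)
W_x = rng.normal(scale=0.1, size=(d, d))             # input feature map (untrained)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mufuru_step(s, x):
    x_t = np.tanh(W_x @ x)                           # transformed input
    p = softmax(W_p @ np.concatenate([s, x_t]))      # op weights depend on state and input
    return sum(p_i * op(s, x_t) for p_i, op in zip(p, ops))

state = np.zeros(d)
for x in rng.normal(size=(5, d)):                    # run over a toy sequence
    state = mufuru_step(state, x)
print(state)
```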
[ { "version": "v1", "created": "Thu, 9 Jun 2016 15:41:17 GMT" } ]
2016-06-10T00:00:00
[ [ "Weissenborn", "Dirk", "" ], [ "Rocktäschel", "Tim", "" ] ]
new_dataset
0.99852
1503.09016
G\'abor Ivanyos
Gabor Ivanyos, Miklos Santha
On solving systems of diagonal polynomial equations over finite fields
A preliminary extended abstract of this paper has appeared in Proceedings of FAW 2015, Springer LNCS vol. 9130, pp. 125-137 (2015)
null
10.1016/j.tcs.2016.04.045
null
cs.CC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an algorithm to solve a system of diagonal polynomial equations over finite fields when the number of variables is greater than some fixed polynomial of the number of equations whose degree depends only on the degree of the polynomial equations. Our algorithm works in time polynomial in the number of equations and the logarithm of the size of the field, whenever the degree of the polynomial equations is constant. As a consequence we design polynomial time quantum algorithms for two algebraic hidden structure problems: for the hidden subgroup problem in certain semidirect product p-groups of constant nilpotency class, and for the multi-dimensional univariate hidden polynomial graph problem when the degree of the polynomials is constant.
[ { "version": "v1", "created": "Tue, 31 Mar 2015 12:06:00 GMT" }, { "version": "v2", "created": "Wed, 8 Jun 2016 13:54:48 GMT" } ]
2016-06-09T00:00:00
[ [ "Ivanyos", "Gabor", "" ], [ "Santha", "Miklos", "" ] ]
new_dataset
0.971865
1603.06075
Kazuma Hashimoto
Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka
Tree-to-Sequence Attentional Neural Machine Translation
Accepted as a full paper at the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most of the existing Neural Machine Translation (NMT) models focus on the conversion of sequential data and do not directly use syntactic information. We propose a novel end-to-end syntactic NMT model, extending a sequence-to-sequence model with the source-side phrase structure. Our model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. Experimental results on the WAT'15 English-to-Japanese dataset demonstrate that our proposed model considerably outperforms sequence-to-sequence attentional NMT models and compares favorably with the state-of-the-art tree-to-string SMT system.
[ { "version": "v1", "created": "Sat, 19 Mar 2016 10:08:40 GMT" }, { "version": "v2", "created": "Tue, 22 Mar 2016 09:55:39 GMT" }, { "version": "v3", "created": "Wed, 8 Jun 2016 08:39:11 GMT" } ]
2016-06-09T00:00:00
[ [ "Eriguchi", "Akiko", "" ], [ "Hashimoto", "Kazuma", "" ], [ "Tsuruoka", "Yoshimasa", "" ] ]
new_dataset
0.988969
1606.02373
Meysam Ghaffari
Meysam Ghaffari, Nasser Ghadiri, Mohammad Hossein Manshaei, Mehran Sadeghi Lahijani
P4QS: A Peer to Peer Privacy Preserving Query Service for Location-Based Mobile Applications
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Location-based services provide an interesting combination of the cyber and physical worlds. However, they can also threaten users' privacy. Existing privacy-preserving protocols require trusted nodes, with serious security and computational bottlenecks. In this paper, we propose a novel distributed anonymizing protocol based on a peer-to-peer architecture. Each mobile node is responsible for anonymizing a specific zone. The mobile nodes collaborate in anonymizing their queries, without needing access to any information about each other. In the proposed protocol, each request is sent with a randomly chosen ticket. The encrypted response produced by the server is sent to a particular mobile node (called the broker node) over the network, based on the hash value of this ticket. The user then queries the broker to get the response. All parts of the messages are encrypted except the fields required for the anonymizer and the broker. This secures the packet exchange over the P2P network. The proposed protocol was implemented and tested successfully, and the experimental results showed that it can be deployed efficiently to achieve user privacy in location-based services.
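A minimal sketch of the ticket/broker mechanism described above: each query carries a random ticket, and the hash of that ticket selects the peer that acts as broker for the encrypted response. The modulo-based mapping, the ticket size and the peer count are assumptions for illustration; encryption and the P2P transport are omitted.

```python
import hashlib
import os

# Hedged sketch: route each response to a broker peer chosen by hashing the
# query's random ticket. Constants and the mapping rule are assumptions.
NUM_PEERS = 16

def new_ticket():
    return os.urandom(16)                        # random per-query ticket

def broker_for(ticket):
    digest = hashlib.sha256(ticket).digest()
    return int.from_bytes(digest, "big") % NUM_PEERS

ticket = new_ticket()
print("query ticket:", ticket.hex())
print("response delivered via broker node", broker_for(ticket))
```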
[ { "version": "v1", "created": "Wed, 8 Jun 2016 02:09:15 GMT" } ]
2016-06-09T00:00:00
[ [ "Ghaffari", "Meysam", "" ], [ "Ghadiri", "Nasser", "" ], [ "Manshaei", "Mohammad Hossein", "" ], [ "Lahijani", "Mehran Sadeghi", "" ] ]
new_dataset
0.95652
1606.02424
Yassine Hachaichi
Imen Ben Saad, Younes Lahbib, Yassine Hacha\"ichi (LAMSIN), Sonia Mami, Abdelkader Mami
Generic-Precision algorithm for DCT-Cordic architectures
null
null
null
null
cs.MM cs.DM cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a generic algorithm to calculate the rotation parameters of the CORDIC angles required for the Discrete Cosine Transform (DCT) algorithm. This allows the precision of the calculation to be increased to meet any required accuracy. Our contribution is to use this decomposition in a CORDIC-based DCT, which is appropriate for domains that require high quality and high precision. We then propose a hardware implementation of the novel transformation and, as expected, a substantial improvement in PSNR quality is found.
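For context, the sketch below shows the classic CORDIC rotation-mode iteration and how adding micro-rotations refines the angle approximation. It does not reproduce the paper's generic-precision decomposition or the DCT datapath; the target angle and iteration counts are illustrative.

```python
import math

# Hedged sketch: standard CORDIC rotation mode. Each micro-rotation uses only
# shift-and-add style updates; more iterations give a finer angle approximation.
def cordic_rotate(x, y, angle, iterations=20):
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    gain = 1.0
    for i in range(iterations):                  # accumulated CORDIC scaling factor
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x / gain, y / gain

# Rotating (1, 0) by pi/8 should approach (cos(pi/8), sin(pi/8)) as iterations grow.
for n in (8, 16, 24):
    print(n, cordic_rotate(1.0, 0.0, math.pi / 8, n))
```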
[ { "version": "v1", "created": "Wed, 8 Jun 2016 07:08:10 GMT" } ]
2016-06-09T00:00:00
[ [ "Saad", "Imen Ben", "", "LAMSIN" ], [ "Lahbib", "Younes", "", "LAMSIN" ], [ "Hachaïchi", "Yassine", "", "LAMSIN" ], [ "Mami", "Sonia", "" ], [ "Mami", "Abdelkader", "" ] ]
new_dataset
0.977008
1606.02534
Sara Boujaada
S. Boujaada, Y. Qaraai, S. Agoujil and M. Hajar
Protector Control PC-AODV-BH in The Ad Hoc Networks
submit 15 pages, 19 figures, 1 table, Journal Indexing team, AIRCC 2016
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we deal with the protector control that we use to secure the AODV routing protocol in ad hoc networks. The considered system can be vulnerable to several attacks because of mobility and the absence of infrastructure. While the disturbance is assumed to be of the black hole type, we propose a control named "PC-AODV-BH" in order to neutralize the effects of malicious nodes. Such a protocol is obtained by coupling hash functions, digital signatures and the fidelity concept. An implementation under the NS2 simulator is given to compare our proposed approach with the SAODV protocol, based on three performance metrics and taking into account the number of black hole malicious nodes.
[ { "version": "v1", "created": "Wed, 8 Jun 2016 12:54:22 GMT" } ]
2016-06-09T00:00:00
[ [ "Boujaada", "S.", "" ], [ "Qaraai", "Y.", "" ], [ "Agoujil", "S.", "" ], [ "Hajar", "M.", "" ] ]
new_dataset
0.992328
1606.02542
Christian Walder Dr
Christian Walder
Symbolic Music Data Version 1.0
arXiv admin note: substantial text overlap with arXiv:1606.01368
null
null
null
cs.SD cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this document, we introduce a new dataset designed for training machine learning models of symbolic music data. Five datasets are provided, one of which is from a newly collected corpus of 20K midi files. We describe our preprocessing and cleaning pipeline, which includes the exclusion of a number of files based on scores from a previously developed probabilistic machine learning model. We also define training, testing and validation splits for the new dataset, based on a clustering scheme which we also describe. Some simple histograms are included.
[ { "version": "v1", "created": "Wed, 8 Jun 2016 13:19:01 GMT" } ]
2016-06-09T00:00:00
[ [ "Walder", "Christian", "" ] ]
new_dataset
0.9955
1606.02599
Timothy Wood
Wei Zhang, Guyue Liu, Timothy Wood, K.K. Ramakrishnan, and Jinho Hwang
SDNFV: Flexible and Dynamic Software Defined Control of an Application- and Flow-Aware Data Plane
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software Defined Networking (SDN) promises greater flexibility for directing packet flows, and Network Function Virtualization promises to enable dynamic management of software-based network functions. However, the current divide between an intelligent control plane and an overly simple, stateless data plane results in the inability to exploit the flexibility of a software based network. In this paper we propose SDNFV, a framework that expands the capabilities of network processing-and-forwarding elements to flexibly manage packet flows, while retaining both a high performance data plane and an easily managed control plane. SDNFV proposes a hierarchical control framework where decisions are made across the SDN controller, a host-level manager, and individual VMs to best exploit state available at each level. This increases the network's flexibility compared to existing SDNs where controllers often make decisions solely based on the first packet header of a flow. SDNFV intelligently places network services across hosts and connects them in sequential and parallel chains, giving both the SDN controller and individual network functions the ability to enhance and update flow rules to adapt to changing conditions. Our prototype demonstrates how to efficiently and flexibly reroute flows based on data plane state such as packet payloads and traffic characteristics.
[ { "version": "v1", "created": "Wed, 8 Jun 2016 15:22:42 GMT" } ]
2016-06-09T00:00:00
[ [ "Zhang", "Wei", "" ], [ "Liu", "Guyue", "" ], [ "Wood", "Timothy", "" ], [ "Ramakrishnan", "K. K.", "" ], [ "Hwang", "Jinho", "" ] ]
new_dataset
0.998899
1606.02674
Bruna Soares Peres
Bruna Peres and Olga Goussevskaia
MHCL: IPv6 Multihop Host Configuration for Low-Power Wireless Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard routing protocols for Low power and Lossy Networks are typically designed to optimize bottom-up data flows, by maintaining a cycle-free network topology. The advantage of such topologies is a low memory footprint for storing routing information (only the parent's address needs to be known by each node). The disadvantage is that other communication patterns, like top-down and bidirectional data flows, are not easily implemented. In this work we propose MHCL: IPv6 Multihop Host Configuration for Low-Power Wireless Networks. MHCL employs hierarchical address allocation that exploits cycle-free network topologies and aims to enable top-down data communication with low message overhead and memory footprint. We evaluated the performance of MHCL both analytically and through simulations. We implemented MHCL as a subroutine of the RPL protocol on Contiki OS and showed that it significantly improves top-down message delivery in RPL, while using a constant amount of memory (i.e., independent of network size) and being efficient in terms of setup time and number of control messages.
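A hedged sketch of hierarchical address allocation on a tree topology in the spirit of the scheme above: each node receives a contiguous sub-range of its parent's range, so a parent can route a packet downward with a simple range test. The exact MHCL allocation rules are not taken from the paper; the tree, range size and split policy are illustrative assumptions.

```python
# Hedged sketch: allocate contiguous address sub-ranges down a routing tree and
# route top-down by checking which child's range contains the destination.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

def allocate(node, lo, hi, ranges):
    ranges[node] = (lo, hi)
    children = tree[node]
    if children:
        step = (hi - lo) // (len(children) + 1)   # keep part of the range for the node itself
        for i, child in enumerate(children):
            allocate(child, lo + (i + 1) * step, lo + (i + 2) * step - 1, ranges)
    return ranges

ranges = allocate("root", 0, 2 ** 16 - 1, {})

def next_hop(node, dest_addr):
    for child in tree[node]:
        lo, hi = ranges[child]
        if lo <= dest_addr <= hi:
            return child
    return None                                    # destination is not below this node

print(ranges)
print(next_hop("root", ranges["a2"][0]))           # routes toward subtree "a"
```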
[ { "version": "v1", "created": "Wed, 8 Jun 2016 18:18:57 GMT" } ]
2016-06-09T00:00:00
[ [ "Peres", "Bruna", "" ], [ "Goussevskaia", "Olga", "" ] ]
new_dataset
0.950678
1505.04364
Kai-Fu Yang
Kai-Fu Yang, Hui Li, Chao-Yi Li, and Yong-Jie Li
Salient Structure Detection by Context-Guided Visual Search
13 pages, 15 figures
IEEE Transactions on Image Processing (TIP), 2016
10.1109/TIP.2016.2572600
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define the task of salient structure (SS) detection to unify saliency-related tasks such as fixation prediction, salient object detection, and other detection of structures of interest. In this study, we propose a unified framework for SS detection by modeling the two-pathway-based guided search strategy of biological vision. Firstly, a context-based spatial prior (CBSP) is extracted based on the layout of edges in the given scene along a fast visual pathway, called the non-selective pathway. This is a rough and non-selective estimation of the locations where potential SSs are present. Secondly, another flow of local feature extraction is executed in parallel along the selective pathway. Finally, Bayesian inference is used to integrate local cues guided by CBSP, and to predict the exact locations of SSs in the input scene. The proposed model is invariant to the size and features of objects. Experimental results on four datasets (two fixation prediction datasets and two salient object datasets) demonstrate that our system achieves competitive performance for SS detection (i.e., both the tasks of fixation prediction and salient object detection) compared to the state-of-the-art methods.
[ { "version": "v1", "created": "Sun, 17 May 2015 07:15:25 GMT" } ]
2016-06-08T00:00:00
[ [ "Yang", "Kai-Fu", "" ], [ "Li", "Hui", "" ], [ "Li", "Chao-Yi", "" ], [ "Li", "Yong-Jie", "" ] ]
new_dataset
0.995732
1606.01941
Giovanni Interdonato
G. Interdonato, S. Pfletschinger, F. Vazquez-Gallego, J. Alonso-Zarate, G. Araniti
Intra-Slot Interference Cancellation for Collision Resolution in Irregular Repetition Slotted ALOHA
2015 IEEE International Conference on Communication Workshop (ICCW)
null
10.1109/ICCW.2015.7247486
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ALOHA-type protocols have become a popular solution for distributed and uncoordinated multiple random access in wireless networks. However, such distributed operation of the Medium Access Control (MAC) layer leads to sub-optimal utilization of the shared channel. One of the reasons is the occurrence of collisions when more than one packet is transmitted at the same time. These packets cannot be decoded and retransmissions are necessary. However, it has recently been shown that it is possible to apply signal processing techniques to these collided packets so that useful information can be decoded. This was recently proposed in Irregular Repetition Slotted ALOHA (IRSA), which achieves a throughput $T \simeq 0.97$ for very large MAC frame lengths as long as the number of active users is smaller than the number of slots per frame. In this paper, we extend the operation of IRSA with i) an iterative physical-layer decoding process that exploits the capture effect and ii) Successive Interference Cancellation (SIC) at the slot level, named intra-slot SIC, to decode more than one colliding packet per slot. We evaluate the performance of the proposed scheme, referred to as Extended IRSA (E-IRSA), in terms of throughput and channel capacity. Computer-based simulation results show that the E-IRSA protocol makes it possible to reach the maximum theoretically achievable throughput even in scenarios where the number of active users is higher than the number of slots per frame. Results also show that the E-IRSA protocol significantly improves performance even for the small MAC frame lengths used in practical scenarios.
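A small simulation sketch of the slot-level interference-cancellation idea: users place replicas of their packet in random slots, any slot holding at most d still-unresolved packets is decoded, and the decoded users' replicas are cancelled everywhere. The degree distribution, frame size and other parameters are illustrative assumptions, not the paper's evaluation setup.

```python
import random

# Hedged sketch: iterative (peeling) decoding of a repetition-slotted-ALOHA
# frame where up to d packets per slot can be resolved (capture / intra-slot SIC).
def simulate(n_users=60, n_slots=50, replicas=3, d=2, seed=1):
    random.seed(seed)
    placements = {u: random.sample(range(n_slots), replicas) for u in range(n_users)}
    unresolved = set(range(n_users))
    progress = True
    while progress:
        progress = False
        for s in range(n_slots):
            in_slot = [u for u in unresolved if s in placements[u]]
            if 0 < len(in_slot) <= d:              # decodable slot
                unresolved -= set(in_slot)         # cancel these users' replicas everywhere
                progress = True
    return 1.0 - len(unresolved) / n_users

print("fraction of users resolved:", simulate())
```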
[ { "version": "v1", "created": "Mon, 6 Jun 2016 21:03:10 GMT" } ]
2016-06-08T00:00:00
[ [ "Interdonato", "G.", "" ], [ "Pfletschinger", "S.", "" ], [ "Vazquez-Gallego", "F.", "" ], [ "Alonso-Zarate", "J.", "" ], [ "Araniti", "G.", "" ] ]
new_dataset
0.962992
1606.02019
EPTCS
Alexandre Madeira (HASLab INESC TEC and Universidade do Minho), Manuel A. Martins (CIDMA and Dep Matem\'atica Universidade de Aveiro), Lu\'is S. Barbosa (HASLab INESC TEC and Universidade do Minho)
A logic for n-dimensional hierarchical refinement
In Proceedings Refine'15, arXiv:1606.01344
EPTCS 209, 2016, pp. 40-56
10.4204/EPTCS.209.4
null
cs.LO cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical transition systems provide a popular mathematical structure to represent state-based software applications in which different layers of abstraction are represented by inter-related state machines. The decomposition of high-level states into inner sub-states, and of their transitions into inner sub-transitions, is a common refinement procedure adopted in a number of specification formalisms. This paper introduces a hybrid modal logic for k-layered transition systems, its first-order standard translation, a notion of bisimulation, and a modal invariance result. Layered and hierarchical notions of refinement are also discussed in this setting.
[ { "version": "v1", "created": "Tue, 7 Jun 2016 04:09:19 GMT" } ]
2016-06-08T00:00:00
[ [ "Madeira", "Alexandre", "", "HASLab INESC TEC and Universidade do Minho" ], [ "Martins", "Manuel A.", "", "CIDMA and Dep Matemática Universidade de Aveiro" ], [ "Barbosa", "Luís S.", "", "HASLab INESC TEC and Universidade do Minho" ] ]
new_dataset
0.987351
1606.02021
EPTCS
Alvaro Miyazawa (University of York), Ana Cavalcanti (University of York)
SCJ-Circus: a refinement-oriented formal notation for Safety-Critical Java
In Proceedings Refine'15, arXiv:1606.01344
EPTCS 209, 2016, pp. 71-86
10.4204/EPTCS.209.6
null
cs.LO cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety-Critical Java (SCJ) is a version of Java whose goal is to support the development of real-time, embedded, safety-critical software. In particular, SCJ supports certification of such software by introducing abstractions that enforce a simpler architecture, and simpler concurrency and memory models. In this paper, we present SCJ-Circus, a refinement-oriented formal notation that supports the specification and verification of low-level programming models that include the new abstractions introduced by SCJ. SCJ-Circus is part of the Circus family of state-rich process algebras; as such, SCJ-Circus includes the Circus constructs for modelling sequential and concurrent behaviour, real-time and object orientation. We present here the syntax and semantics of SCJ-Circus, which is defined by mapping SCJ-Circus constructs to those of standard Circus. This is based on an existing approach for modelling SCJ programs. We also extend an existing Circus-based refinement strategy that targets SCJ programs to account for the generation of SCJ-Circus models close to implementations in SCJ.
[ { "version": "v1", "created": "Tue, 7 Jun 2016 04:09:36 GMT" } ]
2016-06-08T00:00:00
[ [ "Miyazawa", "Alvaro", "", "University of York" ], [ "Cavalcanti", "Ana", "", "University of\n York" ] ]
new_dataset
0.999502
1606.02041
Nils Hammerla
Katherine Middleton, Mobasher Butt, Nils Hammerla, Steven Hamblin, Karan Mehta, Ali Parsa
Sorting out symptoms: design and evaluation of the 'babylon check' automated triage system
null
null
null
null
cs.AI cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prior to seeking professional medical care, it is increasingly common for patients to use online resources such as automated symptom checkers. Many such systems attempt to provide a differential diagnosis based on the symptoms elicited from the user, which may lead to anxiety if life- or limb-threatening conditions are part of the list, a phenomenon termed 'cyberchondria' [1]. Systems that provide advice on where to seek help, rather than a diagnosis, are equally popular, and in our view provide the most useful information. In this technical report we describe how such a triage system can be modelled computationally, how medical insights can be translated into triage flows, and how such systems can be validated and tested. We present babylon check, our commercially deployed automated triage system, as a case study, and illustrate its performance in a large, semi-naturalistic deployment study.
[ { "version": "v1", "created": "Tue, 7 Jun 2016 06:55:42 GMT" } ]
2016-06-08T00:00:00
[ [ "Middleton", "Katherine", "" ], [ "Butt", "Mobasher", "" ], [ "Hammerla", "Nils", "" ], [ "Hamblin", "Steven", "" ], [ "Mehta", "Karan", "" ], [ "Parsa", "Ali", "" ] ]
new_dataset
0.960962