Schema (field: type, with observed value ranges):

id: string (length 9–10)
submitter: string (length 2–52)
authors: string (length 4–6.51k)
title: string (length 4–246)
comments: string (length 1–523)
journal-ref: string (length 4–345)
doi: string (length 11–120)
report-no: string (length 2–243)
categories: string (length 5–98)
license: string (9 classes)
abstract: string (length 33–3.33k)
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string (1 class)
probability: float64 (range 0.95–1)
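Given the schema above, each record can be modeled as a simple typed structure. The following is a minimal sketch (not part of the original dump): field names follow the schema, with hyphens mapped to underscores for valid Python identifiers, and the sample values are taken from the first record below (the abstract is elided).

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of one record in this dump, using the field names from the
# schema above. Optional fields hold None where the dump shows "null".
@dataclass
class ArxivRecord:
    id: str
    submitter: str
    authors: str
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]   # "journal-ref" in the schema
    doi: Optional[str]
    report_no: Optional[str]     # "report-no" in the schema
    categories: str
    license: str
    abstract: str
    versions: list
    update_date: str             # timestamp[s] in the schema
    authors_parsed: list
    prediction: str
    probability: float

# Values from the first record in this dump (abstract elided).
record = ArxivRecord(
    id="1802.05884",
    submitter="Lee Prangnell",
    authors="Lee Prangnell, Miguel Hernandez-Cabronero, Victor Sanchez",
    title="Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC",
    comments="Preprint: 2017 IEEE International Conference on Image Processing (ICIP 2017)",
    journal_ref=None,
    doi=None,
    report_no=None,
    categories="cs.MM",
    license="http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    abstract="...",  # elided
    versions=[{"version": "v1", "created": "Fri, 16 Feb 2018 10:20:34 GMT"}],
    update_date="2018-02-19T00:00:00",
    authors_parsed=[["Prangnell", "Lee", ""],
                    ["Hernandez-Cabronero", "Miguel", ""],
                    ["Sanchez", "Victor", ""]],
    prediction="new_dataset",
    probability=0.990609,
)

# The schema bounds imply simple sanity checks:
assert 9 <= len(record.id) <= 10
assert 0.95 <= record.probability <= 1.0
```

The schema's length ranges double as lightweight validation rules when loading the dump programmatically.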
1802.05884
Lee Prangnell
Lee Prangnell, Miguel Hern\'andez-Cabronero, Victor Sanchez
Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC
Preprint: 2017 IEEE International Conference on Image Processing (ICIP 2017)
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is an increasing consumer demand for high bit-depth 4:4:4 HD video data playback due to its superior perceptual visual quality compared with standard 8-bit subsampled 4:2:0 video data. Due to vast file sizes and associated bitrates, it is desirable to compress raw high bit-depth 4:4:4 HD video sequences as much as possible without incurring a discernible decrease in visual quality. In this paper, we propose a Coding Block (CB)-level perceptual video coding technique for HEVC named Full Color Perceptual Quantization (FCPQ). FCPQ is designed to adjust the Quantization Parameter (QP) at the CB level (i.e., the luma CB and the chroma Cb and Cr CBs) according to the variances of pixel data in each CB. FCPQ is based on the default perceptual quantization method in HEVC called AdaptiveQP. AdaptiveQP adjusts the QP of an entire CU based only on the spatial activity of the constituent luma CB. As demonstrated in this paper, by not accounting for the spatial activity of the constituent chroma CBs, as is the case with AdaptiveQP, coding performance can be significantly affected; this is because the variance of pixel data in a luma CB is notably different from the variances of pixel data in chroma Cb and Cr CBs. FCPQ, therefore, addresses this problem. In terms of coding performance, FCPQ achieves BD-Rate improvements of up to 39.5% (Y), 16% (Cb) and 29.9% (Cr) compared with AdaptiveQP.
[ { "version": "v1", "created": "Fri, 16 Feb 2018 10:20:34 GMT" } ]
2018-02-19T00:00:00
[ [ "Prangnell", "Lee", "" ], [ "Hernández-Cabronero", "Miguel", "" ], [ "Sanchez", "Victor", "" ] ]
new_dataset
0.990609
1802.06063
Peter Kokol PhD
Peter Kokol, Milan Zorman, Grega Zlahtic, Bojan Zlahtic
Code smells
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Code smells are symptoms of poor design and implementation choices, and are often the result of so-called technical debt. Our study showed that interest in code smells research is increasing; however, most publications appear in conference proceedings, and most of the research is done in the G7 and other highly developed countries. Four main research themes were identified, namely code smell detection, bad-smell-based refactoring, software development, and anti-patterns. The results show that code smells can also have a positive connotation: we can develop software that smells good and attracts various customers, and good-smelling code could also serve as a pattern for future software development.
[ { "version": "v1", "created": "Fri, 16 Feb 2018 18:38:06 GMT" } ]
2018-02-19T00:00:00
[ [ "Kokol", "Peter", "" ], [ "Zorman", "Milan", "" ], [ "Zlahtic", "Grega", "" ], [ "Zlahtic", "Bojan", "" ] ]
new_dataset
0.997208
1603.04641
Jules Hedges
Neil Ghani, Jules Hedges, Viktor Winschel, Philipp Zahn
Compositional game theory
This version submitted to LiCS 2018
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment and to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment. Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses.
[ { "version": "v1", "created": "Tue, 15 Mar 2016 11:23:35 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2017 18:06:38 GMT" }, { "version": "v3", "created": "Thu, 15 Feb 2018 11:54:39 GMT" } ]
2018-02-16T00:00:00
[ [ "Ghani", "Neil", "" ], [ "Hedges", "Jules", "" ], [ "Winschel", "Viktor", "" ], [ "Zahn", "Philipp", "" ] ]
new_dataset
0.998319
1710.06824
Shervin Minaee
Shervin Minaee, Siyun Wang, Yao Wang, Sohae Chung, Xiuyuan Wang, Els Fieremans, Steven Flanagan, Joseph Rath, Yvonne W. Lui
Identifying Mild Traumatic Brain Injury Patients From MR Images Using Bag of Visual Words
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mild traumatic brain injury (mTBI) is a growing public health problem with an estimated incidence of one million people annually in the US. Neurocognitive tests are used both to assess the patient's condition and to monitor the patient's progress. This work aims to directly use MR images taken shortly after injury to detect whether a patient suffers from mTBI, by incorporating machine learning and computer vision techniques to learn features suitable for discriminating between mTBI and normal patients. We focus on three regions of the brain, extract multiple patches from them, and use the bag-of-visual-words technique to represent each subject as a histogram of representative patterns derived from patches from all training subjects. After extracting the features, we use greedy forward feature selection to choose the subset of features that achieves the highest accuracy. We show through experimental studies that BoW features perform better than the simple mean-value features used previously.
[ { "version": "v1", "created": "Wed, 18 Oct 2017 16:55:52 GMT" }, { "version": "v2", "created": "Thu, 30 Nov 2017 22:42:25 GMT" }, { "version": "v3", "created": "Wed, 14 Feb 2018 22:16:08 GMT" } ]
2018-02-16T00:00:00
[ [ "Minaee", "Shervin", "" ], [ "Wang", "Siyun", "" ], [ "Wang", "Yao", "" ], [ "Chung", "Sohae", "" ], [ "Wang", "Xiuyuan", "" ], [ "Fieremans", "Els", "" ], [ "Flanagan", "Steven", "" ], [ "Rath", "Joseph", "" ], [ "Lui", "Yvonne W.", "" ] ]
new_dataset
0.997979
1802.05323
Benedikt Brecht
Benedikt Brecht, Dean Therriault, Andr\'e Weimerskirch, William Whyte, Virendra Kumar, Thorsten Hehn, Roy Goudy
A Security Credential Management System for V2X Communications
Accepted at IEEE Transactions on Intelligent Transportation Systems (accepted version)
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The US Department of Transportation (USDOT) issued a proposed rule on January 12th, 2017 to mandate vehicle-to-vehicle (V2V) safety communications in light vehicles in the US. Cybersecurity and privacy are major challenges for such a deployment. The authors present a Security Credential Management System (SCMS) for vehicle-to-everything (V2X) communications in this paper, which has been developed by the Crash Avoidance Metrics Partners LLC (CAMP) under a Cooperative Agreement with the USDOT. This system design is currently transitioning from research to Proof-of-Concept, and is a leading candidate to support the establishment of a nationwide Public Key Infrastructure (PKI) for V2X security. It issues digital certificates to participating vehicles and infrastructure nodes for trustworthy communications among them, which is necessary for safety and mobility applications that are based on V2X communications. The main design goal is to provide both security and privacy to the largest extent reasonable and possible. To achieve a reasonable level of privacy in this context, vehicles are issued pseudonym certificates, and the generation and provisioning of those certificates are divided among multiple organizations. Given the large number of pseudonym certificates per vehicle, one of the main challenges is to facilitate efficient revocation of misbehaving or malfunctioning vehicles, while preserving privacy against attacks from insiders. The proposed SCMS supports all identified V2X use-cases and certificate types necessary for V2X communication security. This paper is based upon work supported by the USDOT. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the Authors ("we") and do not necessarily reflect the view of the USDOT.
[ { "version": "v1", "created": "Wed, 14 Feb 2018 21:05:58 GMT" } ]
2018-02-16T00:00:00
[ [ "Brecht", "Benedikt", "" ], [ "Therriault", "Dean", "" ], [ "Weimerskirch", "André", "" ], [ "Whyte", "William", "" ], [ "Kumar", "Virendra", "" ], [ "Hehn", "Thorsten", "" ], [ "Goudy", "Roy", "" ] ]
new_dataset
0.999583
1702.07339
Emmanouil Zampetakis
Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis
A Converse to Banach's Fixed Point Theorem and its CLS Completeness
null
null
null
null
cs.CC cs.LG math.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Banach's fixed point theorem for contraction maps has been widely used to analyze the convergence of iterative methods in non-convex problems. It is a common experience, however, that iterative maps fail to be globally contracting under the natural metric in their domain, making the applicability of Banach's theorem limited. We explore how generally we can apply Banach's fixed point theorem to establish the convergence of iterative methods when pairing it with carefully designed metrics. Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate. In other words, we show that, whenever an iterative map globally converges to a unique fixed point, there exists a metric under which the iterative map is contracting and which can be used to bound the number of iterations until convergence. We illustrate our approach in the widely used power method, providing a new way of bounding its convergence rate through contraction arguments. We next consider the computational complexity of Banach's fixed point theorem. Making the proof of our converse theorem constructive, we show that computing a fixed point whose existence is guaranteed by Banach's fixed point theorem is CLS-complete. We thus provide the first natural complete problem for the class CLS, which was defined in [Daskalakis, Papadimitriou 2011] to capture the complexity of problems such as P-matrix LCP, computing KKT-points, and finding mixed Nash equilibria in congestion and network coordination games.
[ { "version": "v1", "created": "Thu, 23 Feb 2017 18:52:31 GMT" }, { "version": "v2", "created": "Wed, 5 Apr 2017 20:25:27 GMT" }, { "version": "v3", "created": "Tue, 13 Feb 2018 23:33:13 GMT" } ]
2018-02-15T00:00:00
[ [ "Daskalakis", "Constantinos", "" ], [ "Tzamos", "Christos", "" ], [ "Zampetakis", "Manolis", "" ] ]
new_dataset
0.955334
1802.04853
Drahomira Herrmannova
Drahomira Herrmannova and Robert M. Patton and Petr Knoth and Christopher G. Stahl
Do Citations and Readership Identify Seminal Publications?
Accepted to journal Scientometrics
Herrmannova, D., Patton, R.M., Knoth, P. et al. Scientometrics (2018). https://doi.org/10.1007/s11192-018-2669-y
10.1007/s11192-018-2669-y
null
cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show that citation counts work better than a random baseline (by a margin of 10%) in distinguishing excellent research, while Mendeley reader counts do not work better than the baseline. Specifically, we study the potential of these metrics for distinguishing publications that caused a change in a research field from those that have not. The experiment has been conducted on a new dataset for bibliometric research called TrueImpactDataset. TrueImpactDataset is a collection of research publications of two types -- research papers which are considered seminal works in their area and papers which provide a literature review of a research area. We provide overview statistics of the dataset and propose to use it for validating research evaluation metrics. Using the dataset, we conduct a set of experiments to study how citation and reader counts perform in distinguishing these publication types, following the intuition that causing a change in a field signifies research contribution. We show that citation counts help in distinguishing research that strongly influenced later developments from works that predominantly discuss the current state of the art with a degree of accuracy (63%, i.e. 10% over the random baseline). In all setups, Mendeley reader counts perform worse than a random baseline.
[ { "version": "v1", "created": "Tue, 13 Feb 2018 20:53:28 GMT" } ]
2018-02-15T00:00:00
[ [ "Herrmannova", "Drahomira", "" ], [ "Patton", "Robert M.", "" ], [ "Knoth", "Petr", "" ], [ "Stahl", "Christopher G.", "" ] ]
new_dataset
0.998163
1802.04887
David Blum
David M. Blum, M. Elisabeth Pate-Cornell
Probabilistic Warnings in National Security Crises: Pearl Harbor Revisited
null
Decision Analysis 13:1 (2015) 1-25
10.1287/deca.2015.0321
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imagine a situation where a group of adversaries is preparing an attack on the United States or U.S. interests. An intelligence analyst has observed some signals, but the situation is rapidly changing. The analyst faces the decision to alert a principal decision maker that an attack is imminent, or to wait until more is known about the situation. This warning decision is based on the analyst's observation and evaluation of signals, independent or correlated, and on her updating of the prior probabilities of possible scenarios and their outcomes. The warning decision also depends on the analyst's assessment of the crisis' dynamics and perception of the preferences of the principal decision maker, as well as the lead time needed for an appropriate response. This article presents a model to support this analyst's dynamic warning decision. As with most problems involving warning, the key is to manage the tradeoffs between false positives and false negatives given the probabilities and the consequences of intelligence failures of both types. The model is illustrated by revisiting the case of the attack on Pearl Harbor in December 1941. It shows that the radio silence of the Japanese fleet carried considerable information (Sir Arthur Conan Doyle's "dog in the night" problem), which was misinterpreted at the time. Even though the probabilities of different attacks were relatively low, their consequences were such that the Bayesian dynamic reasoning described here may have provided valuable information to key decision makers.
[ { "version": "v1", "created": "Tue, 13 Feb 2018 22:54:28 GMT" } ]
2018-02-15T00:00:00
[ [ "Blum", "David M.", "" ], [ "Pate-Cornell", "M. Elisabeth", "" ] ]
new_dataset
0.992536
1802.05050
Chuka Oham
Chuka Oham, Salil S. Kanhere, Raja Jurdak and Sanjay Jha
A Blockchain Based Liability Attribution Framework for Autonomous Vehicles
null
null
null
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of autonomous vehicles is envisaged to disrupt the auto insurance liability model. Compared to the current model, where liability is largely attributed to the driver, autonomous vehicles necessitate the consideration of other entities in the automotive ecosystem, including the auto manufacturer, software provider, service technician and the vehicle owner. The proliferation of sensors and connecting technologies in autonomous vehicles enables an autonomous vehicle to gather sufficient data for liability attribution, yet increased connectivity exposes the vehicle to attacks from interacting entities. These possibilities motivate potentially liable entities to repudiate their involvement in a collision event to evade liability. While the data collected from vehicular sensors and vehicular communications is an integral part of the evidence for arbitrating liability in the event of an accident, there is also a need to record all interactions between the aforementioned entities to identify potential instances of negligence that may have played a role in the accident. In this paper, we propose a BlockChain (BC) based framework that integrates the concerned entities in the liability model and provides untampered evidence for liability attribution and adjudication. We first describe the liability attribution model, identify key requirements and describe the adversarial capabilities of entities. We also present a detailed description of the data contributing to evidence. Our framework uses a permissioned BC and partitions the BC to tailor data access to the relevant BC participants. Finally, we conduct a security analysis to verify that the identified requirements are met and that our proposed framework is resilient to the identified attacks.
[ { "version": "v1", "created": "Wed, 14 Feb 2018 11:50:42 GMT" } ]
2018-02-15T00:00:00
[ [ "Oham", "Chuka", "" ], [ "Kanhere", "Salil S.", "" ], [ "Jurdak", "Raja", "" ], [ "Jha", "Sanjay", "" ] ]
new_dataset
0.998184
1802.05079
Jan Dvorak
Jan Dvo\v{r}\'ak and Zden\v{e}k Hanz\'alek
Using Two Independent Channels with Gateway for FlexRay Static Segment Scheduling
null
IEEE Transactions on Industrial Informatics, 12(5), Oct 2016
10.1109/TII.2016.2571667
null
cs.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The FlexRay bus is a communication standard used in the automotive industry. It offers deterministic message transmission in the static segment following a time-triggered schedule. Even though its bandwidth is ten times higher than that of CAN, its throughput limits will soon be reached in high-class car models. A solution that could postpone this problem is an efficient scheduling algorithm that exploits both channels of the FlexRay bus. A significant and often neglected feature that can theoretically double the bandwidth is the possibility of using two independent communication channels that intercommunicate through a gateway. In this paper, we propose a heuristic algorithm that decomposes the scheduling problem into the ECU-to-channel assignment subproblem, which decides which channel each ECU (Electronic Control Unit) should be connected to, and the channel scheduling subproblem, which creates static segment communication schedules for both channels. The algorithm is able to create schedules for cases where the channels are configured in the independent mode as well as in the fault-tolerant mode, or where just part of the signals are fault-tolerant. Finally, the algorithm is evaluated on real and synthesized data, and the relation between the portion of fault-tolerant signals and the number of allocated slots is presented.
[ { "version": "v1", "created": "Wed, 14 Feb 2018 13:07:21 GMT" } ]
2018-02-15T00:00:00
[ [ "Dvořák", "Jan", "" ], [ "Hanzálek", "Zdeněk", "" ] ]
new_dataset
0.997384
1802.05131
William Savoie
Ross Warkentin, William Savoie, Daniel I. Goldman
Locomoting robots composed of immobile robots
4 pages 4 figures IRC 2018 conference paper
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic materials are multi-robot systems formulated to leverage the low-order computation and actuation of the constituents to manipulate the high-order behavior of the entire material. We study the behaviors of ensembles composed of smart active particles, smarticles. Smarticles are small, low cost robots equipped with basic actuation and sensing abilities that are individually incapable of rotating or displacing. We demonstrate that a "supersmarticle", composed of many smarticles constrained within a bounding membrane, can harness the internal collisions of the robotic material among the constituents and the membrane to achieve diffusive locomotion. The emergent diffusion can be directed by modulating the robotic material properties in response to a light source, analogous to biological phototaxis. The light source introduces asymmetries within the robotic material, resulting in modified populations of interaction modes and dynamics which ultimately result in supersmarticle biased locomotion. We present experimental methods and results for the robotic material which moves with a directed displacement in response to a light source.
[ { "version": "v1", "created": "Wed, 14 Feb 2018 14:53:36 GMT" } ]
2018-02-15T00:00:00
[ [ "Warkentin", "Ross", "" ], [ "Savoie", "William", "" ], [ "Goldman", "Daniel I.", "" ] ]
new_dataset
0.993007
1802.05176
Paulo Ferreira
Paulo Ferreira
Sampling Superquadric Point Clouds with Normals
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Superquadrics provide a compact representation of common shapes and have been used both for object/surface modelling in computer graphics and as object-part representations in computer vision and robotics. Superquadrics refer to a family of shapes: here we deal with superellipsoids and superparaboloids. Due to the strong non-linearities involved in the equations, uniform or close-to-uniform sampling is not attainable through a naive approach of sampling directly from the parametric formulation. This is especially true for more `cubic' superquadrics (with shape parameters close to $0.1$). We extend a previous solution for 2D close-to-uniform sampling of superellipses to the superellipsoid (3D) case and derive our own for the superparaboloid. Additionally, we are able to provide normals for each sampled point. To the best of our knowledge, this is the first complete approach for close-to-uniform sampling of superellipsoids and superparaboloids in one single framework. We present derivations, pseudocode, and qualitative and quantitative results using our code, which is available online.
[ { "version": "v1", "created": "Wed, 14 Feb 2018 16:04:27 GMT" } ]
2018-02-15T00:00:00
[ [ "Ferreira", "Paulo", "" ] ]
new_dataset
0.991828
1701.08347
Tadashi Wadayama
Yoju Fujino and Tadashi Wadayama
Construction of Fixed Rate Non-Binary WOM Codes based on Integer Programming
null
null
10.1587/transfun.E100.A.2654
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a construction of non-binary WOM (Write-Once-Memory) codes for WOM storages such as flash memories. The WOM codes discussed here are fixed rate WOM codes, where messages in a fixed alphabet of size $M$ can be sequentially written to the WOM storage at least $t^*$ times. The WOM storage is modeled by a state transition graph. The proposed construction has two features: first, it includes a systematic method to determine the encoding regions in the state transition graph; second, it includes a labeling method for states that uses integer programming. Several novel WOM codes for $q$-level flash memories with 2 cells are constructed by the proposed construction. They achieve worst-case numbers of writes $t^*$ that meet the known upper bound in many cases. In addition, we construct fixed rate non-binary WOM codes with the capability to reduce ICI (inter-cell interference) of flash cells. One of the advantages of the proposed construction is its flexibility: it can be applied to various storage devices, various dimensions (i.e., numbers of cells), and various kinds of additional constraints.
[ { "version": "v1", "created": "Sun, 29 Jan 2017 02:37:37 GMT" } ]
2018-02-14T00:00:00
[ [ "Fujino", "Yoju", "" ], [ "Wadayama", "Tadashi", "" ] ]
new_dataset
0.992549
1701.08492
Tadashi Wadayama
Takafumi Nakano and Tadashi Wadayama
On Zero Error Capacity of Nearest Neighbor Error Channels with Multilevel Alphabet
null
null
10.1587/transfun.E100.A.2647
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the zero error capacity of the Nearest Neighbor Error (NNE) channels with a multilevel alphabet. In the NNE channels, a transmitted symbol is a $d$-tuple of elements in $\{0,1,2,\dots, n-1 \}$. It is assumed that only one element error to a nearest neighbor element in a transmitted symbol can occur. The NNE channels can be considered as a special type of limited magnitude error channels, and it is closely related to error models for flash memories. In this paper, we derive a lower bound of the zero error capacity of the NNE channels based on a result of the perfect Lee codes. An upper bound of the zero error capacity of the NNE channels is also derived from a feasible solution of a linear programming problem defined based on the confusion graphs of the NNE channels. As a result, a concise formula of the zero error capacity is obtained using the lower and upper bounds.
[ { "version": "v1", "created": "Mon, 30 Jan 2017 06:44:11 GMT" } ]
2018-02-14T00:00:00
[ [ "Nakano", "Takafumi", "" ], [ "Wadayama", "Tadashi", "" ] ]
new_dataset
0.95113
1703.00121
Gong Cheng
Gong Cheng, Junwei Han, and Xiaoqiang Lu
Remote Sensing Image Scene Classification: Benchmark and State of the Art
This manuscript is the accepted version for Proceedings of the IEEE
Proceedings of the IEEE, 105 (10): 1865-1883, 2017
10.1109/JPROC.2017.2675998
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Remote sensing image scene classification plays an important role in a wide range of applications and hence has been receiving remarkable attention. During the past years, significant efforts have been made to develop various datasets or present a variety of approaches for scene classification from remote sensing images. However, a systematic review of the literature concerning datasets and methods for scene classification is still lacking. In addition, almost all existing datasets have a number of limitations, including the small scale of scene classes and the image numbers, the lack of image variations and diversity, and the saturation of accuracy. These limitations severely limit the development of new approaches especially deep learning-based methods. This paper first provides a comprehensive review of the recent progress. Then, we propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class. The proposed NWPU-RESISC45 (i) is large-scale on the scene classes and the total image number, (ii) holds big variations in translation, spatial resolution, viewpoint, object pose, illumination, background, and occlusion, and (iii) has high within-class diversity and between-class similarity. The creation of this dataset will enable the community to develop and evaluate various data-driven algorithms. Finally, several representative methods are evaluated using the proposed dataset and the results are reported as a useful baseline for future research.
[ { "version": "v1", "created": "Wed, 1 Mar 2017 03:38:13 GMT" } ]
2018-02-14T00:00:00
[ [ "Cheng", "Gong", "" ], [ "Han", "Junwei", "" ], [ "Lu", "Xiaoqiang", "" ] ]
new_dataset
0.999578
1707.05740
Jun Liu
Jun Liu, Gang Wang, Ling-Yu Duan, Kamila Abdiyeva and Alex C. Kot
Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks
null
null
10.1109/TIP.2017.2785279
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, Long Short-Term Memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics in sequential data. As not all skeletal joints are informative for action recognition, and the irrelevant joints often bring noise which can degrade the performance, we need to pay more attention to the informative ones. However, the original LSTM network does not have explicit attention ability. In this paper, we propose a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for skeleton based action recognition. This network is capable of selectively focusing on the informative joints in each frame of each skeleton sequence by using a global context memory cell. To further improve the attention capability of our network, we also introduce a recurrent attention mechanism, with which the attention performance of the network can be enhanced progressively. Moreover, we propose a stepwise training scheme in order to train our network effectively. Our approach achieves state-of-the-art performance on five challenging benchmark datasets for skeleton based action recognition.
[ { "version": "v1", "created": "Tue, 18 Jul 2017 17:03:53 GMT" }, { "version": "v2", "created": "Mon, 21 Aug 2017 05:34:53 GMT" }, { "version": "v3", "created": "Tue, 22 Aug 2017 02:36:41 GMT" }, { "version": "v4", "created": "Wed, 13 Dec 2017 09:49:38 GMT" }, { "version": "v5", "created": "Thu, 11 Jan 2018 15:36:27 GMT" } ]
2018-02-14T00:00:00
[ [ "Liu", "Jun", "" ], [ "Wang", "Gang", "" ], [ "Duan", "Ling-Yu", "" ], [ "Abdiyeva", "Kamila", "" ], [ "Kot", "Alex C.", "" ] ]
new_dataset
0.976283
1707.07816
George MacCartney Jr
Theodore S. Rappaport, George R. MacCartney Jr., Shu Sun, Hangsong Yan and Sijia Deng
Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications
To appear in the IEEE Transactions on Antennas and Propagation, Special Issue on 5G, Nov. 2017
null
10.1109/TAP.2017.2734159
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
[ { "version": "v1", "created": "Tue, 25 Jul 2017 05:40:38 GMT" }, { "version": "v2", "created": "Tue, 15 Aug 2017 06:18:14 GMT" } ]
2018-02-14T00:00:00
[ [ "Rappaport", "Theodore S.", "" ], [ "MacCartney", "George R.", "Jr." ], [ "Sun", "Shu", "" ], [ "Yan", "Hangsong", "" ], [ "Deng", "Sijia", "" ] ]
new_dataset
0.998537
1709.05590
Vasanthan Raghavan
Vasanthan Raghavan, Andrzej Partyka, Lida Akhoondzadehasl, Ali Tassoudji, Ozge Koymen, John Sanelli
Millimeter Wave Channel Measurements and Implications for PHY Layer Design
13 pages, 8 figures, Accepted for publication at the IEEE Transactions on Antennas and Propagation
null
10.1109/TAP.2017.2758198
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been an increasing interest in the millimeter wave (mmW) frequency regime in the design of next-generation wireless systems. The focus of this work is on understanding mmW channel properties that have an important bearing on the feasibility of mmW systems in practice and have a significant impact on physical (PHY) layer design. In this direction, simultaneous channel sounding measurements at 2.9, 29 and 61 GHz are performed at a number of transmit-receive location pairs in indoor office, shopping mall and outdoor environments. Based on these measurements, this paper first studies large-scale properties such as path loss and delay spread across different carrier frequencies in these scenarios. Towards the goal of understanding the feasibility of outdoor-to-indoor coverage, material measurements corresponding to mmW reflection and penetration are studied and significant notches in signal reception spread over a few GHz are reported. Finally, implications of these measurements on system design are discussed and multiple solutions are proposed to overcome these impairments.
[ { "version": "v1", "created": "Sun, 17 Sep 2017 01:25:31 GMT" } ]
2018-02-14T00:00:00
[ [ "Raghavan", "Vasanthan", "" ], [ "Partyka", "Andrzej", "" ], [ "Akhoondzadehasl", "Lida", "" ], [ "Tassoudji", "Ali", "" ], [ "Koymen", "Ozge", "" ], [ "Sanelli", "John", "" ] ]
new_dataset
0.999789
1712.00427
Alejandro Frery
Debanshu Ratha, Avik Bhattacharya, Alejandro C. Frery
Unsupervised Classification of PolSAR Data Using a Scattering Similarity Measure Derived from a Geodesic Distance
Accepted for publication at IEEE Geoscience and Remote Sensing Letters
null
10.1109/LGRS.2017.2778749
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this letter, we propose a novel technique for obtaining scattering components from Polarimetric Synthetic Aperture Radar (PolSAR) data using the geodesic distance on the unit sphere. This geodesic distance is obtained between an elementary target and the observed Kennaugh matrix, and it is further utilized to compute a similarity measure between scattering mechanisms. The normalized similarity measure for each elementary target is then modulated with the total scattering power (Span). This measure is used to categorize pixels into three categories, i.e., odd-bounce, double-bounce and volume, depending on which of the above scattering mechanisms dominates. Then the maximum likelihood classifier of [J.-S. Lee, M. R. Grunes, E. Pottier, and L. Ferro-Famil, Unsupervised terrain classification preserving polarimetric scattering characteristics, IEEE Trans. Geos. Rem. Sens., vol. 42, no. 4, pp. 722-731, April 2004.] based on the complex Wishart distribution is iteratively used for each category. Dominant scattering mechanisms are thus preserved in this classification scheme. We show results for L-band AIRSAR and ALOS-2 datasets acquired over San Francisco and Mumbai, respectively. The scattering mechanisms are better preserved using the proposed methodology than the unsupervised classification results using the Freeman-Durden scattering powers on an orientation angle (OA) corrected PolSAR image. Furthermore, (1) the scattering similarity is a completely non-negative quantity, unlike the negative powers that might occur in the double-bounce and odd-bounce scattering components under the Freeman-Durden decomposition (FDD), and (2) the methodology can be extended to more canonical targets as well as to bistatic scattering.
[ { "version": "v1", "created": "Fri, 1 Dec 2017 17:58:42 GMT" } ]
2018-02-14T00:00:00
[ [ "Ratha", "Debanshu", "" ], [ "Bhattacharya", "Avik", "" ], [ "Frery", "Alejandro C.", "" ] ]
new_dataset
0.96275
1802.02125
Jiahui Li
Jiahui Li, Yin Sun, Limin Xiao, Shidong Zhou, Ashutosh Sabharwal
How to Mobilize mmWave: A Joint Beam and Channel Tracking Approach
Technical report, part of which accepted by ICASSP 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maintaining reliable millimeter wave (mmWave) connections to many fast-moving mobiles is a key challenge in the theory and practice of 5G systems. In this paper, we develop a new algorithm that can jointly track the beam direction and channel coefficient of mmWave propagation paths using phased antenna arrays. Despite the significant difficulty of this problem, our algorithm can simultaneously achieve fast tracking speed, high tracking accuracy, and low pilot overhead. In static scenarios, this algorithm can converge to the minimum Cram\'er-Rao lower bound of beam direction with high probability. Simulations reveal that this algorithm greatly outperforms several existing algorithms. Even at SNRs as low as 5 dB, our algorithm is capable of tracking a mobile moving at an angular velocity of 5.45 degrees per second and achieving over 95\% of channel capacity with a 32-antenna phased array, by inserting only 10 pilots per second.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 18:40:40 GMT" }, { "version": "v2", "created": "Tue, 13 Feb 2018 08:56:52 GMT" } ]
2018-02-14T00:00:00
[ [ "Li", "Jiahui", "" ], [ "Sun", "Yin", "" ], [ "Xiao", "Limin", "" ], [ "Zhou", "Shidong", "" ], [ "Sabharwal", "Ashutosh", "" ] ]
new_dataset
0.996902
1802.03558
Tianyin Xu
Tianyin Xu and Darko Marinov
Mining Container Image Repositories for Software Configuration and Beyond
6 pages, an extended version of the short paper presented at ICSE-NIER '18
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the idea of mining container image repositories for configuration and other deployment information of software systems. Unlike traditional software repositories (e.g., source code repositories and app stores), image repositories encapsulate the entire execution ecosystem for running target software, including its configurations, dependent libraries and components, and OS-level utilities, which contributes to a wealth of data and information. We showcase the opportunities based on concrete software engineering tasks that can benefit from mining image repositories. To facilitate future mining efforts, we summarize the challenges of analyzing image repositories and the approaches that can address these challenges. We hope that this paper will stimulate exciting research agenda of mining this emerging type of software repositories.
[ { "version": "v1", "created": "Sat, 10 Feb 2018 09:31:11 GMT" }, { "version": "v2", "created": "Tue, 13 Feb 2018 07:32:10 GMT" } ]
2018-02-14T00:00:00
[ [ "Xu", "Tianyin", "" ], [ "Marinov", "Darko", "" ] ]
new_dataset
0.996339
1802.04252
Karthik R
Karthik R, Preetam Satapath, Srivatsa Patnaik, Saurabh Priyadarshi, Rajesh Kumar M
Automatic Phone Slip Detection System
Accepted for publication in Springer LNEE
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile phones are becoming increasingly advanced, and the latest ones are equipped with many diverse and powerful sensors. These sensors can be used to study different positions and orientations of the phone, which can help smartphone manufacturers learn about their customers' handling from the recorded log. The inbuilt sensors, such as the accelerometer and gyroscope present in our phones, are used to obtain acceleration and orientation data in the three axes for different vulnerable phone positions. From the data obtained, appropriate features are extracted using various feature extraction techniques. The extracted features are then given to a classifier, such as a neural network, to classify them and decide whether the phone is in a position vulnerable to a fall or in a safe position. In this paper we mainly concentrate on various cases of handling the smartphone, classified by training the neural network.
[ { "version": "v1", "created": "Sat, 10 Feb 2018 14:51:24 GMT" } ]
2018-02-14T00:00:00
[ [ "R", "Karthik", "" ], [ "Satapath", "Preetam", "" ], [ "Patnaik", "Srivatsa", "" ], [ "Priyadarshi", "Saurabh", "" ], [ "M", "Rajesh Kumar", "" ] ]
new_dataset
0.99359
1802.04328
Belal Amro
Belal Amro
Personal Mobile Malware Guard PMMG: a mobile malware detection technique based on user's preferences
7 pages, 4 figures. arXiv admin note: text overlap with arXiv:1801.02837
IJCSNS International Journal of Computer Science and Network Security, Vol. 18 No. 1 pp. 18-24, 2018
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile malware has increased rapidly over the last 10 years. This rapid increase is due to the rapid enhancement of mobile technology and its power to do most work for its users. Since mobile devices are personal devices, special action must be taken towards preserving the privacy and security of mobile data. Malware refers to all types of software applications with malicious behavior. In this paper, we propose a malware detection technique called Personal Mobile Malware Guard (PMMG) that classifies malware based on mobile user feedback. PMMG controls the permissions of different applications and their behavior according to the user's needs. These preferences are built incrementally on a personal basis according to the feedback of the user. Performance analysis showed that it is theoretically feasible to build a PMMG tool and use it on mobile devices.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 19:42:05 GMT" } ]
2018-02-14T00:00:00
[ [ "Amro", "Belal", "" ] ]
new_dataset
0.999261
1802.04335
Illia Polosukhin
Illia Polosukhin, Alexander Skidanov
Neural Program Search: Solving Programming Tasks from Description and Examples
9 pages, 3 figures, ICLR workshop
null
null
null
cs.AI cs.CL cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Neural Program Search, an algorithm to generate programs from a natural language description and a small number of input/output examples. The algorithm combines methods from the Deep Learning and Program Synthesis fields by designing a rich domain-specific language (DSL) and defining an efficient search algorithm on it, guided by a Seq2Tree model. To evaluate the quality of the approach we also present a semi-synthetic dataset of descriptions with test examples and corresponding programs. We show that our algorithm significantly outperforms a sequence-to-sequence model with attention baseline.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 20:05:26 GMT" } ]
2018-02-14T00:00:00
[ [ "Polosukhin", "Illia", "" ], [ "Skidanov", "Alexander", "" ] ]
new_dataset
0.998176
1802.04410
Yuanyu Zhang
Yuanyu Zhang, Shoji Kasahara, Yulong Shen, Xiaohong Jiang and Jianxiong Wan
Smart Contract-Based Access Control for the Internet of Things
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates a critical access control issue in the Internet of Things (IoT). In particular, we propose a smart contract-based framework, which consists of multiple access control contracts (ACCs), one judge contract (JC) and one register contract (RC), to achieve distributed and trustworthy access control for IoT systems. Each ACC provides one access control method for a subject-object pair, and implements both static access right validation based on predefined policies and dynamic access right validation by checking the behavior of the subject. The JC implements a misbehavior-judging method to facilitate the dynamic validation of the ACCs by receiving misbehavior reports from the ACCs, judging the misbehavior and returning the corresponding penalty. The RC registers the information of the access control and misbehavior-judging methods as well as their smart contracts, and also provides functions (e.g., register, update and delete) to manage these methods. To demonstrate the application of the framework, we provide a case study in an IoT system with one desktop computer, one laptop and two Raspberry Pi single-board computers, where the ACCs, JC and RC are implemented based on the Ethereum smart contract platform to achieve the access control.
[ { "version": "v1", "created": "Tue, 13 Feb 2018 00:42:31 GMT" } ]
2018-02-14T00:00:00
[ [ "Zhang", "Yuanyu", "" ], [ "Kasahara", "Shoji", "" ], [ "Shen", "Yulong", "" ], [ "Jiang", "Xiaohong", "" ], [ "Wan", "Jianxiong", "" ] ]
new_dataset
0.997445
1802.04559
Carlos-Emiliano Gonz\'alez-Gallardo
Carlos-Emiliano Gonz\'alez-Gallardo and Juan-Manuel Torres-Moreno
Sentence Boundary Detection for French with Subword-Level Information Vectors and Convolutional Neural Networks
In proceedings of the International Conference on Natural Language, Signal and Speech Processing (ICNLSSP) 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we tackle the problem of sentence boundary detection applied to French as a binary classification task ("sentence boundary" or "not sentence boundary"). We combine convolutional neural networks with subword-level information vectors, which are word embedding representations learned from Wikipedia that take advantage of the words' morphology; each word is represented as a bag of its character n-grams. We decided to use a large written dataset (French Gigaword) instead of standard-size transcriptions to train and evaluate the proposed architectures, with the intention of later using the trained models on real-life ASR transcriptions. Three different architectures are tested, showing similar results; general accuracy for all models surpasses 0.96. All three models have good F1 scores, reaching values over 0.97 for the "not sentence boundary" class. However, the "sentence boundary" class reflects lower scores, decreasing the F1 metric to 0.778 for one of the models. Using subword-level information vectors seems to be very effective, leading us to conclude that the morphology of words encoded in the embedding representations behaves like pixels in an image, making the use of convolutional neural network architectures feasible.
[ { "version": "v1", "created": "Tue, 13 Feb 2018 11:04:07 GMT" } ]
2018-02-14T00:00:00
[ [ "González-Gallardo", "Carlos-Emiliano", "" ], [ "Torres-Moreno", "Juan-Manuel", "" ] ]
new_dataset
0.999485
1802.04738
Sergio Caccamo S
Sergio Caccamo, Esra Ataer-Cansizoglu and Yuichi Taguchi
Joint 3D Reconstruction of a Static Scene and Moving Objects
This paper has been accepted and presented in 3DV-2017 conference held at Qingdao, China. Video experiments: https://youtu.be/goflUxzG2VI
Proceedings International Conference on 3D Vision 2017
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene. An RGB-D frame is represented as a collection of features, which are points and planes. We classify the features into static and dynamic regions and grow separate maps, static and object maps, for each of them. To robustly classify the features in each frame, we fuse multiple RANSAC-based registration results obtained by registering different groups of the features to different maps, including (1) all the features to the static map, (2) all the features to each object map, and (3) subsets of the features, each forming a segment, to each object map. This multi-group registration approach is designed to overcome the following challenges: scenes can be dominated by static regions, making object tracking more difficult; and moving object might have larger pose variation between frames compared to the static regions. We show qualitative results from indoor scenes with objects in various shapes. The technique enables on-the-fly object model generation to be used for robotic manipulation.
[ { "version": "v1", "created": "Tue, 13 Feb 2018 17:05:55 GMT" } ]
2018-02-14T00:00:00
[ [ "Caccamo", "Sergio", "" ], [ "Ataer-Cansizoglu", "Esra", "" ], [ "Taguchi", "Yuichi", "" ] ]
new_dataset
0.993802
1802.04749
Kevin Moran P
Kevin Moran, Michele Tufano, Carlos Bernal-C\'ardenas, Mario Linares-V\'asquez, Gabriele Bavota, Christopher Vendome, Massimiliano Di Penta and Denys Poshyvanyk
MDroid+: A Mutation Testing Framework for Android
4 Pages, Accepted to the Formal Tool Demonstration Track at the 40th International Conference on Software Engineering (ICSE'18)
null
10.1145/3183440.3183492
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutation testing has shown great promise in assessing the effectiveness of test suites while exhibiting additional applications to test-case generation, selection, and prioritization. Traditional mutation testing typically utilizes a set of simple language specific source code transformations, called operators, to introduce faults. However, empirical studies have shown that for mutation testing to be most effective, these simple operators must be augmented with operators specific to the domain of the software under test. One challenging software domain for the application of mutation testing is that of mobile apps. While mobile devices and accompanying apps have become a mainstay of modern computing, the frameworks and patterns utilized in their development make testing and verification particularly difficult. As a step toward helping to measure and ensure the effectiveness of mobile testing practices, we introduce MDroid+, an automated framework for mutation testing of Android apps. MDroid+ includes 38 mutation operators from ten empirically derived types of Android faults and has been applied to generate over 8,000 mutants for more than 50 apps.
[ { "version": "v1", "created": "Tue, 13 Feb 2018 17:18:10 GMT" } ]
2018-02-14T00:00:00
[ [ "Moran", "Kevin", "" ], [ "Tufano", "Michele", "" ], [ "Bernal-Cárdenas", "Carlos", "" ], [ "Linares-Vásquez", "Mario", "" ], [ "Bavota", "Gabriele", "" ], [ "Vendome", "Christopher", "" ], [ "Di Penta", "Massimiliano", "" ], [ "Poshyvanyk", "Denys", "" ] ]
new_dataset
0.994338
1802.04766
Yueming Liu
Peng Zhang, Yueming Liu and Meikang Qiu
SNC: A Cloud Service Platform for Symbolic-Numeric Computation using Just-In-Time Compilation
13 pages, 23 figures
IEEE Transactions on Cloud Computing, 2017
10.1109/TCC.2017.2656088
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloud services have been widely employed in the IT industry and scientific research. By using Cloud services users can move computing tasks and data away from local computers to remote datacenters. By accessing Internet-based services over lightweight and mobile devices, users deploy diversified Cloud applications on powerful machines. The key drivers towards this paradigm for the scientific computing field include the substantial computing capacity, on-demand provisioning and cross-platform interoperability. To fully harness Cloud services for scientific computing, however, we need to design an application-specific platform to help users efficiently migrate their applications. In this paper, we propose a Cloud service platform for symbolic-numeric computation - SNC. SNC allows Cloud users to describe tasks as symbolic expressions through C/C++, Python, Java APIs and SNC script. Just-In-Time (JIT) compilation using LLVM/JVM is used to compile the user code to machine code. We implemented the SNC design and tested a wide range of symbolic-numeric computation applications (including nonlinear minimization, Monte Carlo integration, finite element assembly and multibody dynamics) on several popular cloud platforms (including the Google Compute Engine, Amazon EC2, Microsoft Azure, Rackspace, HP Helion and VMWare vCloud). These results demonstrate that our approach can work across multiple cloud platforms, support different languages and significantly improve the performance of symbolic-numeric computation using cloud platforms. This offers a way to stimulate the use of cloud computing for symbolic-numeric computation in scientific research.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 20:20:14 GMT" } ]
2018-02-14T00:00:00
[ [ "Zhang", "Peng", "" ], [ "Liu", "Yueming", "" ], [ "Qiu", "Meikang", "" ] ]
new_dataset
0.997537
1309.0671
Ruben Martinez-Cantin
Ruben Martinez-Cantin
BayesOpt: A Library for Bayesian optimization with Robotics Applications
Robotics: Science and Systems, Workshop on Active Learning in Robotics: Exploration, Curiosity, and Interaction
Journal of Machine Learning Research, 15(Nov), 3915-3919, 2014
null
null
cs.RO cs.AI cs.LG cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this paper is twofold. On the one hand, we present a general framework for Bayesian optimization and we compare it with some related fields in active learning and Bayesian numerical analysis. On the other hand, Bayesian optimization and related problems (bandits, sequential experimental design) are highly dependent on the surrogate model that is selected. However, there is no clear standard in the literature. Thus, we present a fast and flexible toolbox that allows one to test and combine different models and criteria with little effort. It includes most of the state-of-the-art contributions, algorithms and models. Its speed also removes part of the stigma that Bayesian optimization methods are only good for "expensive functions". The software is free and it can be used in many operating systems and computer languages.
[ { "version": "v1", "created": "Tue, 3 Sep 2013 13:38:05 GMT" } ]
2018-02-13T00:00:00
[ [ "Martinez-Cantin", "Ruben", "" ] ]
new_dataset
0.994
1507.02178
Marcin Pilipczuk
Marcin Pilipczuk and Magnus Wahlstr\"om
Directed multicut is W[1]-hard, even for four terminal pairs
v2: Added almost tight ETH lower bounds
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that Multicut in directed graphs, parameterized by the size of the cutset, is W[1]-hard and hence unlikely to be fixed-parameter tractable even if restricted to instances with only four terminal pairs. This negative result almost completely resolves one of the central open problems in the area of parameterized complexity of graph separation problems, posted originally by Marx and Razgon [SIAM J. Comput. 43(2):355-388 (2014)], leaving only the case of three terminal pairs open. Our gadget methodology allows us also to prove W[1]-hardness of the Steiner Orientation problem parameterized by the number of terminal pairs, resolving an open problem of Cygan, Kortsarz, and Nutov [SIAM J. Discrete Math. 27(3):1503-1513 (2013)].
[ { "version": "v1", "created": "Wed, 8 Jul 2015 14:38:17 GMT" }, { "version": "v2", "created": "Fri, 17 Jun 2016 08:19:04 GMT" }, { "version": "v3", "created": "Mon, 12 Feb 2018 10:05:14 GMT" } ]
2018-02-13T00:00:00
[ [ "Pilipczuk", "Marcin", "" ], [ "Wahlström", "Magnus", "" ] ]
new_dataset
0.966756
1609.02985
Qifa Yan
Qifa Yan, Xiaohu Tang, Qingchun Chen and Minquan Cheng
Placement Delivery Array Design through Strong Edge Coloring of Bipartite Graphs
5 pages, 2 figures
IEEE Communications Letters, pp. 236-239, Vol. 22, No. 2, Feb. 2018
10.1109/LCOMM.2017.2765629
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The technique of coded caching proposed by Maddah-Ali and Niesen is a promising approach to alleviate the load of networks during busy times. Recently, the placement delivery array (PDA) was presented to characterize both the placement and delivery phases in a single array for the centralized coded caching algorithm. In this paper, we interpret PDAs from a new perspective, i.e., the strong edge coloring of bipartite graphs. We prove that a PDA is equivalent to a strong edge colored bipartite graph. Thus, we can construct a class of PDAs from existing structures in bipartite graphs. The class includes the scheme proposed by Maddah-Ali \textit{et al.} and a more general class of PDAs proposed by Shangguan \textit{et al.} as special cases. Moreover, it is capable of generating a lot of PDAs with a flexible tradeoff between the sub-packet level and load.
[ { "version": "v1", "created": "Sat, 10 Sep 2016 01:35:52 GMT" } ]
2018-02-13T00:00:00
[ [ "Yan", "Qifa", "" ], [ "Tang", "Xiaohu", "" ], [ "Chen", "Qingchun", "" ], [ "Cheng", "Minquan", "" ] ]
new_dataset
0.99583
1701.02379
Ali Dehghan
Ali Dehghan and Amir H. Banihashemi
On the Tanner Graph Cycle Distribution of Random LDPC, Random Protograph-Based LDPC, and Random Quasi-Cyclic LDPC Code Ensembles
To appear in IEEE Trans. Inform. Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the cycle distribution of random low-density parity-check (LDPC) codes, randomly constructed protograph-based LDPC codes, and random quasi-cyclic (QC) LDPC codes. We prove that for a random bipartite graph, with a given (irregular) degree distribution, the distributions of cycles of different length tend to independent Poisson distributions, as the size of the graph tends to infinity. We derive asymptotic upper and lower bounds on the expected values of the Poisson distributions that are independent of the size of the graph, and only depend on the degree distribution and the cycle length. For a random lift of a bi-regular protograph, we prove that the asymptotic cycle distributions are essentially the same as those of random bipartite graphs as long as the degree distributions are identical. For random QC-LDPC codes, however, we show that the cycle distribution can be quite different from the other two categories. In particular, depending on the protograph and the value of $c$, the expected number of cycles of length $c$, in this case, can be either $\Theta(N)$ or $\Theta(1)$, where $N$ is the lifting degree (code length). We also provide numerical results that match our theoretical derivations. Our results provide a theoretical foundation for empirical results that were reported in the literature but were not well-justified. They can also be used for the analysis and design of LDPC codes and associated algorithms that are based on cycles.
[ { "version": "v1", "created": "Mon, 9 Jan 2017 22:37:21 GMT" }, { "version": "v2", "created": "Sat, 10 Feb 2018 23:00:21 GMT" } ]
2018-02-13T00:00:00
[ [ "Dehghan", "Ali", "" ], [ "Banihashemi", "Amir H.", "" ] ]
new_dataset
0.99538
1702.00554
Tariq Ahmad Mir
Tariq Ahmad Mir, Marcel Ausloos
Benford's law: a 'sleeping beauty' sleeping in the dirty pages of logarithmic tables
18 pages, 4 figures, 3 tables, 79 references, Accepted for publication in Journal of the Association for Information Science and Technology
Journal of the Association for Information Science and Technology 69(3) (2018) 349-358
10.1002/asi.23845
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Benford's law is an empirical observation, first reported by Simon Newcomb in 1881 and then independently by Frank Benford in 1938: the first significant digits of numbers in large data are often distributed according to a logarithmically decreasing function. Being contrary to intuition, the law was forgotten as a mere curious observation. However, in the last two decades, relevant literature has grown exponentially, - an evolution typical of "Sleeping Beauties" (SBs) publications that go unnoticed (sleep) for a long time and then suddenly become center of attention (are awakened). Thus, in the present study, we show that Newcomb (1881) and Benford (1938) papers are clearly SBs. The former was in deep sleep for 110 years whereas the latter was in deep sleep for a comparatively lesser period of 31 years up to 1968, and in a state of less deep sleep for another 27 years up to 1995. Both SBs were awakened in the year 1995 by Hill (1995a). In so doing, we show that the waking prince (Hill, 1995a) is more often quoted than the SB whom he kissed, - in this Benford's law case, wondering whether this is a general effect, - to be usefully studied.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 07:08:27 GMT" } ]
2018-02-13T00:00:00
[ [ "Mir", "Tariq Ahmad", "" ], [ "Ausloos", "Marcel", "" ] ]
new_dataset
0.998932
1705.05767
Stephen Makonin
Stephen Makonin, Z. Jane Wang, and Chris Tumpach
RAE: The Rainforest Automation Energy Dataset for Smart Grid Meter Data Analysis
null
null
10.3390/data3010008
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Datasets are important for researchers to build models and test how well their machine learning algorithms perform. This paper presents the Rainforest Automation Energy (RAE) dataset to help smart grid researchers test their algorithms which make use of smart meter data. This initial release of RAE contains 1Hz data (mains and sub-meters) from two residential houses. In addition to power data, environmental and sensor data from the house's thermostat is included. Sub-meter data from one of the houses includes heat pump and rental suite captures, which are of interest to power utilities. We also show an energy breakdown of each house and show (by example) how RAE can be used to test non-intrusive load monitoring (NILM) algorithms.
[ { "version": "v1", "created": "Sun, 14 May 2017 04:57:27 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 16:11:47 GMT" }, { "version": "v3", "created": "Wed, 3 Jan 2018 02:01:57 GMT" }, { "version": "v4", "created": "Mon, 12 Feb 2018 10:09:36 GMT" } ]
2018-02-13T00:00:00
[ [ "Makonin", "Stephen", "" ], [ "Wang", "Z. Jane", "" ], [ "Tumpach", "Chris", "" ] ]
new_dataset
0.999794
1705.07750
Joao Carreira
Joao Carreira and Andrew Zisserman
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
Removed references to mini-kinetics dataset that was never made publicly available and repeated all experiments on the full Kinetics dataset
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
[ { "version": "v1", "created": "Mon, 22 May 2017 13:57:53 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2017 15:24:03 GMT" }, { "version": "v3", "created": "Mon, 12 Feb 2018 17:10:11 GMT" } ]
2018-02-13T00:00:00
[ [ "Carreira", "Joao", "" ], [ "Zisserman", "Andrew", "" ] ]
new_dataset
0.999711
1707.02264
Kyle Niemeyer
Arfon M Smith, Kyle E Niemeyer, Daniel S Katz, Lorena A Barba, George Githinji, Melissa Gymrek, Kathryn D Huff, Christopher R Madan, Abigail Cabunoc Mayes, Kevin M Moerman, Pjotr Prins, Karthik Ram, Ariel Rokem, Tracy K Teal, Roman Valls Guimera, Jacob T Vanderplas
Journal of Open Source Software (JOSS): design and first-year review
22 pages, 8 figures
PeerJ Computer Science 4 (2018) e147
10.7717/peerj-cs.147
null
cs.DL cs.SE
http://creativecommons.org/licenses/by/4.0/
This article describes the motivation, design, and progress of the Journal of Open Source Software (JOSS). JOSS is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, JOSS addresses the dearth of rewards for key contributions to science made in the form of software. JOSS publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A JOSS article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a JOSS submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, JOSS does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, JOSS gives it a DOI, deposits its metadata in Crossref, and the article can begin collecting citations on indexers like Google Scholar and other services. Authors retain copyright of their JOSS article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, JOSS published 111 articles, with more than 40 additional articles under review. JOSS is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative.
[ { "version": "v1", "created": "Fri, 7 Jul 2017 16:50:35 GMT" }, { "version": "v2", "created": "Wed, 27 Dec 2017 19:20:47 GMT" }, { "version": "v3", "created": "Wed, 24 Jan 2018 23:27:51 GMT" } ]
2018-02-13T00:00:00
[ [ "Smith", "Arfon M", "" ], [ "Niemeyer", "Kyle E", "" ], [ "Katz", "Daniel S", "" ], [ "Barba", "Lorena A", "" ], [ "Githinji", "George", "" ], [ "Gymrek", "Melissa", "" ], [ "Huff", "Kathryn D", "" ], [ "Madan", "Christopher R", "" ], [ "Mayes", "Abigail Cabunoc", "" ], [ "Moerman", "Kevin M", "" ], [ "Prins", "Pjotr", "" ], [ "Ram", "Karthik", "" ], [ "Rokem", "Ariel", "" ], [ "Teal", "Tracy K", "" ], [ "Guimera", "Roman Valls", "" ], [ "Vanderplas", "Jacob T", "" ] ]
new_dataset
0.998861
1708.06417
Lee Prangnell
Lee Prangnell
Visually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC
Preprint: Elsevier Signal Processing: Image Communication (Journal)
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the increasing prevalence of high bit depth and YCbCr 4:4:4 video data, it is desirable to develop a JND-based visually lossless coding technique which can account for high bit depth 4:4:4 data in addition to standard 8-bit precision chroma subsampled data. In this paper, we propose a Coding Block (CB)-level JND-based luma and chroma perceptual quantisation technique for HEVC named Pixel-PAQ. Pixel-PAQ exploits both luminance masking and chrominance masking to achieve JND-based visually lossless coding; the proposed method is compatible with high bit depth YCbCr 4:4:4 video data of any resolution. When applied to YCbCr 4:4:4 high bit depth video data, Pixel-PAQ can achieve vast bitrate reductions, of up to 75% (68.6% over four QP data points), compared with a state-of-the-art luma-based JND method for HEVC named IDSQ. Moreover, the participants in the subjective evaluations confirm that visually lossless coding is successfully achieved by Pixel-PAQ (at a PSNR value of 28.04 dB in one test).
[ { "version": "v1", "created": "Mon, 21 Aug 2017 20:46:54 GMT" }, { "version": "v2", "created": "Mon, 28 Aug 2017 14:04:14 GMT" }, { "version": "v3", "created": "Fri, 20 Oct 2017 09:51:32 GMT" }, { "version": "v4", "created": "Fri, 27 Oct 2017 08:54:43 GMT" }, { "version": "v5", "created": "Mon, 12 Feb 2018 18:44:15 GMT" } ]
2018-02-13T00:00:00
[ [ "Prangnell", "Lee", "" ] ]
new_dataset
0.960041
1709.00846
Alexander Wendel
Alexander Wendel and James Underwood
Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern
Published in MDPI Sensors, 30 October 2017
Sensors 2017, 17, 2491
10.3390/s17112491
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground based mobile or robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05$^{\circ}$ and 0.18 m / 2.39$^{\circ}$. We also propose several approaches to displaying and interpreting the 6D results in a human readable way.
[ { "version": "v1", "created": "Mon, 4 Sep 2017 07:46:43 GMT" }, { "version": "v2", "created": "Tue, 24 Oct 2017 00:56:14 GMT" }, { "version": "v3", "created": "Mon, 12 Feb 2018 06:26:47 GMT" } ]
2018-02-13T00:00:00
[ [ "Wendel", "Alexander", "" ], [ "Underwood", "James", "" ] ]
new_dataset
0.998259
1710.09919
Lee Prangnell
Lee Prangnell and Victor Sanchez
JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC
Preprint: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018)
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The JCT-VC standardized Screen Content Coding (SCC) extension in the HEVC HM RExt + SCM reference codec offers impressive coding efficiency compared with HM RExt alone; however, it is not significantly perceptually optimized. For instance, it does not include advanced HVS-based perceptual coding methods, such as JND-based spatiotemporal masking schemes. In this paper, we propose a novel JND-based perceptual video coding technique for HM RExt + SCM, named SC-PAQ. The proposed method is designed to further improve the compression performance of HM RExt + SCM when applied to YCbCr 4:4:4 SC video data. In the proposed technique, luminance masking and chrominance masking are exploited to perceptually adjust the Quantization Step Size (QStep) at the Coding Block (CB) level. Compared with HM RExt 16.10 + SCM 8.0, the proposed method considerably reduces bitrates (Kbps), with a maximum reduction of 48.3%. In addition, the subjective evaluations reveal that SC-PAQ achieves visually lossless coding at very low bitrates.
[ { "version": "v1", "created": "Thu, 26 Oct 2017 21:28:57 GMT" }, { "version": "v2", "created": "Mon, 12 Feb 2018 18:47:58 GMT" } ]
2018-02-13T00:00:00
[ [ "Prangnell", "Lee", "" ], [ "Sanchez", "Victor", "" ] ]
new_dataset
0.983443
1712.09872
Md Zahangir Alom
Md Zahangir Alom, Peheding Sidike, Mahmudul Hasan, Tark M. Taha and Vijayan K. Asari
Handwritten Bangla Character Recognition Using The State-of-Art Deep Convolutional Neural Networks
12 pages,22 figures, 5 tables. arXiv admin note: text overlap with arXiv:1705.02680
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In spite of advances in object recognition technology, Handwritten Bangla Character Recognition (HBCR) remains largely unsolved due to the presence of many ambiguous handwritten characters and excessively cursive Bangla handwriting. Even the best existing recognizers do not lead to satisfactory performance for practical applications related to Bangla character recognition and have much lower performance than those developed for English alpha-numeric characters. To improve the performance of HBCR, we herein present the application of state-of-the-art Deep Convolutional Neural Networks (DCNNs), including VGG Network, All Convolution Network (All-Conv Net), Network in Network (NiN), Residual Network, FractalNet, and DenseNet, to HBCR. Deep learning approaches have the advantage of extracting and using feature information, improving the recognition of 2D shapes with a high degree of invariance to translation, scaling and other distortions. We systematically evaluated the performance of DCNN models on the publicly available Bangla handwritten character dataset called CMATERdb and achieved superior recognition accuracy when using DCNN models. This improvement would help in building an automatic HBCR system for practical applications.
[ { "version": "v1", "created": "Thu, 28 Dec 2017 14:31:56 GMT" }, { "version": "v2", "created": "Wed, 7 Feb 2018 17:22:42 GMT" }, { "version": "v3", "created": "Sat, 10 Feb 2018 18:40:54 GMT" } ]
2018-02-13T00:00:00
[ [ "Alom", "Md Zahangir", "" ], [ "Sidike", "Peheding", "" ], [ "Hasan", "Mahmudul", "" ], [ "Taha", "Tark M.", "" ], [ "Asari", "Vijayan K.", "" ] ]
new_dataset
0.994554
1801.02911
Harsh Thakkar
Harsh Thakkar and Dharmen Punjani and Yashwant Keswani and Jens Lehmann and S\"oren Auer
A Stitch in Time Saves Nine -- SPARQL querying of Property Graphs using Gremlin Traversals
Author's draft -- submitted to SWJ
null
null
null
cs.DB cs.PF
http://creativecommons.org/licenses/by/4.0/
Knowledge graphs have become popular over the past years and frequently rely on the Resource Description Framework (RDF) or Property Graphs (PG) as underlying data models. However, the query languages for these two data models -- SPARQL for RDF and Gremlin for property graph traversal -- lack interoperability. We present Gremlinator, a novel SPARQL-to-Gremlin translator. Gremlinator translates SPARQL queries to Gremlin traversals for executing graph pattern matching queries over graph databases. This makes it possible to access and query a wide variety of Graph Data Management Systems (DMSs) using the W3C-standardized SPARQL query language and avoid the learning curve of a new graph query language. Gremlin is a system-agnostic traversal language covering both OLTP graph databases and OLAP graph processors, making it a desirable choice for supporting interoperability with respect to querying graph DMSs. We present a comprehensive empirical evaluation of Gremlinator and demonstrate its validity and applicability by executing SPARQL queries on top of the leading graph stores Neo4J, Sparksee, and Apache TinkerGraph, and compare the performance with the RDF stores Virtuoso, 4Store and JenaTDB. Our evaluation demonstrates the substantial performance gain obtained by the Gremlin counterparts of the SPARQL queries, especially for star-shaped and complex queries.
[ { "version": "v1", "created": "Tue, 9 Jan 2018 12:25:19 GMT" }, { "version": "v2", "created": "Mon, 12 Feb 2018 14:53:00 GMT" } ]
2018-02-13T00:00:00
[ [ "Thakkar", "Harsh", "" ], [ "Punjani", "Dharmen", "" ], [ "Keswani", "Yashwant", "" ], [ "Lehmann", "Jens", "" ], [ "Auer", "Sören", "" ] ]
new_dataset
0.993279
1802.01144
Cewu Lu
Bo Pang, Kaiwen Zha, Cewu Lu
Human Action Adverb Recognition: ADHA Dataset and A Three-Stream Hybrid Model
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the first benchmark for a new problem --- recognizing human action adverbs (HAA): "Adverbs Describing Human Actions" (ADHA). This is a first step for computer vision to move beyond pattern recognition towards real AI. We demonstrate some key features of ADHA: a semantically complete set of adverbs describing human actions, a set of common, describable human actions, and an exhaustive labeling of simultaneously emerging actions in each video. We conduct an in-depth analysis of how current effective models in action recognition and image captioning perform on adverb recognition, and the results show that such methods are unsatisfactory. Moreover, we propose a novel three-stream hybrid model to deal with the HAA problem, which achieves a better result.
[ { "version": "v1", "created": "Sun, 4 Feb 2018 15:25:52 GMT" }, { "version": "v2", "created": "Mon, 12 Feb 2018 06:49:38 GMT" } ]
2018-02-13T00:00:00
[ [ "Pang", "Bo", "" ], [ "Zha", "Kaiwen", "" ], [ "Lu", "Cewu", "" ] ]
new_dataset
0.99963
1802.03478
Bing Li
Bing Li
Programming Requests/Responses with GreatFree in the Cloud Environment
20 pages, 16 listings, 4 figures, 4 tables, International Journal of Distributed and Parallel Systems, 2018
null
10.5121/ijdps.2018.9101
null
cs.PL cs.DC cs.SE
http://creativecommons.org/publicdomain/zero/1.0/
Programming requests/responses with GreatFree is an efficient programming technique to implement distributed polling in the cloud computing environment. GreatFree is a distributed programming environment through which diverse distributed systems can be established through programming rather than configuring or scripting. GreatFree emphasizes the importance of programming since it offers developers the opportunity to leverage their distributed knowledge and programming skills. Additionally, programming is the unique way to construct creative, adaptive and flexible systems to accommodate various distributed computing environments. With the support of GreatFree code-level Distributed Infrastructure Patterns, Distributed Operation Patterns and APIs, this difficult procedure is accomplished in a programmable, rapid and highly-patterned manner, i.e., the programming behaviors are simplified to the repeatable operation of Copy-Paste-Replace. Since distributed polling is one of the fundamental techniques for constructing distributed systems, GreatFree provides developers with relevant APIs and patterns to program requests/responses in this novel programming environment.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 23:48:44 GMT" } ]
2018-02-13T00:00:00
[ [ "Li", "Bing", "" ] ]
new_dataset
0.99705
1802.03572
Philip Howard
John D. Gallacher, Vlad Barash, Philip N. Howard, John Kelly
Junk News on Military Affairs and National Security: Social Media Disinformation Campaigns Against US Military Personnel and Veterans
Data Memo
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social media provides political news and information for both active duty military personnel and veterans. We analyze the subgroups of Twitter and Facebook users who spend time consuming junk news from websites that target US military personnel and veterans with conspiracy theories, misinformation, and other forms of junk news about military affairs and national security issues. (1) Over Twitter we find that there are significant and persistent interactions between current and former military personnel and a broad network of extremist, Russia-focused, and international conspiracy subgroups. (2) Over Facebook, we find significant and persistent interactions between public pages for military and veterans and subgroups dedicated to political conspiracy, and both sides of the political spectrum. (3) Over Facebook, the users who are most interested in conspiracy theories and the political right seem to be distributing the most junk news, whereas users who are either in the military or are veterans are among the most sophisticated news consumers, and share very little junk news through the network.
[ { "version": "v1", "created": "Sat, 10 Feb 2018 12:16:12 GMT" } ]
2018-02-13T00:00:00
[ [ "Gallacher", "John D.", "" ], [ "Barash", "Vlad", "" ], [ "Howard", "Philip N.", "" ], [ "Kelly", "John", "" ] ]
new_dataset
0.999641
1802.03573
Philip Howard
Philip N. Howard, Bence Kollanyi, Samantha Bradshaw, Lisa-Maria Neudert
Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?
Data Memo
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
US voters shared large volumes of polarizing political news and information in the form of links to content from Russian, WikiLeaks and junk news sources. Was this low quality political information distributed evenly around the country, or concentrated in swing states and particular parts of the country? In this data memo we apply a tested dictionary of sources about political news and information being shared over Twitter over a ten day period around the 2016 Presidential Election. Using self-reported location information, we place a third of users by state and create a simple index for the distribution of polarizing content around the country. We find that (1) nationally, Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news. (2) Users in some states, however, shared more polarizing political news and information than users in other states. (3) Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state. We conclude with some observations about the impact of strategically disseminated polarizing information on public life.
[ { "version": "v1", "created": "Sat, 10 Feb 2018 12:22:59 GMT" } ]
2018-02-13T00:00:00
[ [ "Howard", "Philip N.", "" ], [ "Kollanyi", "Bence", "" ], [ "Bradshaw", "Samantha", "" ], [ "Neudert", "Lisa-Maria", "" ] ]
new_dataset
0.972749
1802.03611
Anatoly Plotnikov
Anatoly D. Plotnikov
Searching isomorphic graphs
17 pages, 11 figures
Transactions on Networks and Communications, Volume 5, No. 5, ISSN: 2054 -7420 (2017)
10.14738/tnc.55.3551
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To determine whether two given undirected graphs are isomorphic, we construct auxiliary digraphs for them using breadth-first search. This makes it possible to position the vertices of each digraph with respect to each other. If the given graphs are isomorphic, in each of them we can find positionally equivalent auxiliary digraphs that have the same mutual positioning of vertices. Obviously, if the given graphs are isomorphic, then such equivalent digraphs exist. Proceeding from the arrangement of vertices in one of the digraphs, we try to determine the corresponding vertices in the other digraph. As a result, we develop an algorithm for constructing a bijective mapping between the vertices of the given graphs if they are isomorphic. The running time of the algorithm is $O(n^5)$, where $n$ is the number of graph vertices.
[ { "version": "v1", "created": "Sat, 10 Feb 2018 15:51:52 GMT" } ]
2018-02-13T00:00:00
[ [ "Plotnikov", "Anatoly D.", "" ] ]
new_dataset
0.993555
1802.03625
Saravanakumar Shanmugam Sakthivadivel
Anupama Aggarwal, Saravana Kumar, Kushagra Bhargava, Ponnurangam Kumaraguru
The Follower Count Fallacy: Detecting Twitter Users with Manipulated Follower Count
Accepted at ACM SAC'18
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online Social Networks (OSNs) are increasingly being used as platforms for effective communication, to engage with other users, and to create social worth via numbers of likes, followers and shares. Such metrics and crowd-sourced ratings give an OSN user a sense of social reputation, which she tries to maintain and boost in order to be more influential. Users artificially bolster their social reputation via black-market web services. In this work, we identify users who manipulate their projected follower count using an unsupervised local neighborhood detection method. We identify a neighborhood of the user based on a robust set of features which reflect user similarity in terms of the expected follower count. We show that follower count estimation using our method has 84.2% accuracy with a low error rate. In addition, we estimate the follower count of a user under suspicion by finding its neighborhood drawn from a large random sample of Twitter. We show that our method is highly tolerant to synthetic manipulation of followers. Using the deviation of the predicted follower count from the displayed count, we are also able to detect customers with a high precision of 98.62%.
[ { "version": "v1", "created": "Sat, 10 Feb 2018 17:48:02 GMT" } ]
2018-02-13T00:00:00
[ [ "Aggarwal", "Anupama", "" ], [ "Kumar", "Saravana", "" ], [ "Bhargava", "Kushagra", "" ], [ "Kumaraguru", "Ponnurangam", "" ] ]
new_dataset
0.983152
1802.03674
Fatima Salahdine
Fatima Salahdine
Compressive Spectrum Sensing for Cognitive Radio Networks
PhD dissertation, Advisors: Dr. Naima Kaabouch and Dr. Hassan El Ghazi
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A cognitive radio system has the ability to observe and learn from the environment, adapt to the environmental conditions, and use the radio spectrum more efficiently. It allows secondary users (SUs) to use the primary users (PUs) channels when they are not being utilized. Cognitive radio involves three main processes: spectrum sensing, deciding, and acting. In the spectrum sensing process, the channel occupancy is measured with spectrum sensing techniques in order to detect unused channels. In the deciding process, sensing results are analyzed and decisions are made based on these results. In the acting process, actions are made by adjusting the transmission parameters to enhance the cognitive radio performance. One of the main challenges of cognitive radio is the wideband spectrum sensing. Existing spectrum sensing techniques are based on a set of observations sampled by an ADC at the Nyquist rate. However, those techniques can sense only one channel at a time because of the hardware limitations on the sampling rate. In addition, in order to sense a wideband spectrum, the wideband is divided into narrow bands or multiple frequency bands. SUs have to sense each band using multiple RF frontends simultaneously, which can result in a very high processing time, hardware cost, and computational complexity. In order to overcome this problem, the signal sampling should be as fast as possible even with high dimensional signals. Compressive sensing has been proposed as a low-cost solution to reduce the processing time and accelerate the scanning process. It allows reducing the number of samples required for high dimensional signal acquisition while keeping the essential information.
[ { "version": "v1", "created": "Sun, 11 Feb 2018 01:33:25 GMT" } ]
2018-02-13T00:00:00
[ [ "Salahdine", "Fatima", "" ] ]
new_dataset
0.96322
1802.03905
Xiaowei Wu
Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang and Xue Zhu
How to Match when All Vertices Arrive Online
25 pages, 10 figures, to appear in STOC 2018
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors' arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agency, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is $0.5211$-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanic into the randomized primal dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats $0.5$ on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between $0.5541$ and $0.5671$. Finally, we prove that the fully online model is strictly harder than the previous model as no online algorithm can be $0.6317 < 1-\frac{1}{e}$-competitive in our model even for bipartite graphs.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 06:31:58 GMT" } ]
2018-02-13T00:00:00
[ [ "Huang", "Zhiyi", "" ], [ "Kang", "Ning", "" ], [ "Tang", "Zhihao Gavin", "" ], [ "Wu", "Xiaowei", "" ], [ "Zhang", "Yuhao", "" ], [ "Zhu", "Xue", "" ] ]
new_dataset
0.995879
1802.03909
Sarani Bhattacharya
Manaar Alam, Sarani Bhattacharya, Debdeep Mukhopadhyay, Anupam Chattopadhyay
RAPPER: Ransomware Prevention via Performance Counters
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ransomware can produce direct and controllable economic loss, which makes it one of the most prominent threats in cyber security. As per the latest statistics, more than half of the malware reported in Q1 of 2017 was ransomware, and there is a potent threat of novice cybercriminals accessing ransomware-as-a-service. The concept of public-key-based data kidnapping and subsequent extortion was introduced in 1996. Since then, variants of ransomware have emerged with different cryptosystems and larger key sizes, though the underlying techniques remained the same. Though there are works in the literature which propose a generic framework to detect crypto ransomware, we present a two-step unsupervised detection tool which, when it suspects a process activity of being malicious, issues an alarm for further analysis to be carried out in the second step, and detects it with minimal traces. The two-step detection framework, RAPPER, uses Artificial Neural Networks and the Fast Fourier Transform to develop a highly accurate, fast and reliable solution to ransomware detection using minimal trace points.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 06:43:26 GMT" } ]
2018-02-13T00:00:00
[ [ "Alam", "Manaar", "" ], [ "Bhattacharya", "Sarani", "" ], [ "Mukhopadhyay", "Debdeep", "" ], [ "Chattopadhyay", "Anupam", "" ] ]
new_dataset
0.999695
1802.03998
Salvador Tamarit
David Insa, Sergio P\'erez, Josep Silva, Salvador Tamarit
Erlang Code Evolution Control (Use Cases)
null
null
null
null
cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main goal of this work is to show how SecEr can be used in different scenarios. Concretely, we demonstrate how a user can run SecEr to obtain reports about behaviour preservation between versions, as well as how a user can use SecEr to find the source of a discrepancy. Three use cases are presented: two completely different versions of the same program, an improvement in the performance of a function, and a program where an error has been introduced. A complete description of the technique and the tool is available at [1] and [2].
[ { "version": "v1", "created": "Mon, 12 Feb 2018 12:04:49 GMT" } ]
2018-02-13T00:00:00
[ [ "Insa", "David", "" ], [ "Pérez", "Sergio", "" ], [ "Silva", "Josep", "" ], [ "Tamarit", "Salvador", "" ] ]
new_dataset
0.992527
1802.04023
L. Elisa Celis
L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria and Nisheeth K. Vishnoi
Fair and Diverse DPP-based Data Summarization
A short version of this paper appeared in the workshop FAT/ML 2016 - arXiv:1610.07183
null
null
null
cs.LG cs.CY cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sampling methods that choose a subset of the data proportional to its diversity in the feature space are popular for data summarization. However, recent studies have noted the occurrence of bias (under- or over-representation of a certain gender or race) in such data summarization methods. In this paper we initiate a study of the problem of outputting a diverse and fair summary of a given dataset. We work with a well-studied determinantal measure of diversity and corresponding distributions (DPPs) and present a framework that allows us to incorporate a general class of fairness constraints into such distributions. Coming up with efficient algorithms to sample from these constrained determinantal distributions, however, suffers from a complexity barrier and we present a fast sampler that is provably good when the input vectors satisfy a natural property. Our experimental results on a real-world and an image dataset show that the diversity of the samples produced by adding fairness constraints is not too far from the unconstrained case, and we also provide a theoretical explanation of it.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 13:12:43 GMT" } ]
2018-02-13T00:00:00
[ [ "Celis", "L. Elisa", "" ], [ "Keswani", "Vijay", "" ], [ "Straszak", "Damian", "" ], [ "Deshpande", "Amit", "" ], [ "Kathuria", "Tarun", "" ], [ "Vishnoi", "Nisheeth K.", "" ] ]
new_dataset
0.966381
1802.04112
Swaminathan Gopalswamy
Swaminathan Gopalswamy, Sivakumar Rathinam
Infrastructure Enabled Autonomy: A Distributed Intelligence Architecture for Autonomous Vehicles
submitted to the IEEE Intelligent Vehicles Symposium 2018
null
null
null
cs.CY cs.DC cs.MA cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables "autonomy as a service" while still allowing for private ownership of automobiles.
[ { "version": "v1", "created": "Mon, 5 Feb 2018 23:33:53 GMT" } ]
2018-02-13T00:00:00
[ [ "Gopalswamy", "Swaminathan", "" ], [ "Rathinam", "Sivakumar", "" ] ]
new_dataset
0.963741
1802.04216
Gr\'egory Rogez
Gr\'egory Rogez and Cordelia Schmid
Image-based Synthesis for Deep 3D Human Pose Estimation
accepted to appear in IJCV (with minor revisions). Follow-up to NIPS 2016 arXiv:1607.02046
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D motion capture data. Given a candidate 3D pose, our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a $K$-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms most of the published works in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for real-world images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images. Compared to data generated from more classical rendering engines, our synthetic images do not require any domain adaptation or fine-tuning stage.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 17:59:47 GMT" } ]
2018-02-13T00:00:00
[ [ "Rogez", "Grégory", "" ], [ "Schmid", "Cordelia", "" ] ]
new_dataset
0.997638
1802.04236
Shayan Eskandari
Shayan Eskandari, Jeremy Clark, Abdelwahab Hamou-Lhadj
Buy your coffee with bitcoin: Real-world deployment of a bitcoin point of sale terminal
Advanced and Trusted Computing 2016 Intl IEEE Conferences, 8 pages
null
10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0073
null
cs.CR cs.CY cs.ET cs.HC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we discuss existing approaches for Bitcoin payments, as suitable for a small business for small-value transactions. We develop an evaluation framework utilizing security, usability, and deployability criteria, and examine several existing systems and tools. Following a requirements engineering approach, we designed and implemented a new Point of Sale (PoS) system that satisfies an optimal set of criteria within our evaluation framework. Our open source system, Aunja PoS, has been deployed in a real world cafe since October 2014.
[ { "version": "v1", "created": "Mon, 12 Feb 2018 18:38:35 GMT" } ]
2018-02-13T00:00:00
[ [ "Eskandari", "Shayan", "" ], [ "Clark", "Jeremy", "" ], [ "Hamou-Lhadj", "Abdelwahab", "" ] ]
new_dataset
0.999099
1502.01566
Helio M. de Oliveira
H.M. de Oliveira, R.M. Campello de Souza and R.C. de Oliveira
A Matrix Laurent Series-based Fast Fourier Transform for Blocklengths N=4 (mod 8)
6 pages, 2 figures, 2 tables. Conference: XXVII Simposio Brasileiro de Telecomunicacoes - SBrT'09, 2009, Blumenau, SC, Brazil
null
null
null
cs.DS cs.DM eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General guidelines for a new fast computation of blocklength 8m+4 DFTs are presented, which is based on a Laurent series involving matrices. Results of non-trivial real multiplicative complexity are presented for blocklengths N=64, achieving lower multiplication counts than previously published FFTs. A detailed description for the cases m=1 and m=2 is presented.
[ { "version": "v1", "created": "Thu, 5 Feb 2015 14:25:33 GMT" } ]
2018-02-12T00:00:00
[ [ "de Oliveira", "H. M.", "" ], [ "de Souza", "R. M. Campello", "" ], [ "de Oliveira", "R. C.", "" ] ]
new_dataset
0.99805
1802.00041
Leo Ferres
Mariano G. Beir\'o, Loreto Bravo, Diego Caro, Ciro Cattuto, Leo Ferres and Eduardo Graells-Garrido
Shopping Mall Attraction and Social Mixing at a City Scale
submitted to peer review
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The social inclusion aspects of shopping malls and their effects on our understanding of urban spaces have been a controversial argument largely discussed in the literature. Shopping malls offer an open, safe and democratic version of the public space. Many of their detractors suggest that malls target their customers in subtle ways, promoting social exclusion. In this work, we analyze whether malls offer opportunities for social mixing by analyzing the patterns of shopping mall visits in a large Latin-American city: Santiago de Chile. We use a large XDR (Data Detail Records) dataset from a telecommunication company to analyze the mobility of $387,152$ cell phones around $16$ large malls in Santiago de Chile during one month. We model the influx of people to malls in terms of a gravity model of mobility, and we are able to predict the customer profile distribution of each mall, explaining it in terms of mall location, the population distribution, and mall size. Then, we analyze the concept of social attraction, expressed as people from low and middle classes being attracted by malls that target high-income customers. We include a social attraction factor in our model and find that it is negligible in the process of choosing a mall. We observe that social mixing arises only in peripheral malls located farthest from the city center, which both low and middle class people visit. Using a co-visitation model we show that people tend to choose a restricted profile of malls according to their socio-economic status and their distance from the mall. We conclude that the potential for social mixing in malls could be capitalized by designing public policies regarding transportation and mobility.
[ { "version": "v1", "created": "Wed, 31 Jan 2018 19:56:23 GMT" }, { "version": "v2", "created": "Fri, 9 Feb 2018 16:33:49 GMT" } ]
2018-02-12T00:00:00
[ [ "Beiró", "Mariano G.", "" ], [ "Bravo", "Loreto", "" ], [ "Caro", "Diego", "" ], [ "Cattuto", "Ciro", "" ], [ "Ferres", "Leo", "" ], [ "Graells-Garrido", "Eduardo", "" ] ]
new_dataset
0.99787
1802.03086
Luciano Oliveira
Gil Jader, Luciano Oliveira, Matheus Pithon
Automatic segmenting teeth in X-ray images: Trends, a novel data set, benchmarking and future perspectives
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This review presents an in-depth study of the literature on segmentation methods applied in dental imaging. Ten segmentation methods were studied and categorized according to the type of the segmentation method (region-based, threshold-based, cluster-based, boundary-based or watershed-based), type of X-ray images used (intra-oral or extra-oral) and characteristics of the dataset used to evaluate the methods in the state-of-the-art works. We found that the literature has primarily focused on threshold-based segmentation methods (54%). 80% of the reviewed papers have used intra-oral X-ray images in their experiments, demonstrating a preference to perform segmentation on images of already isolated parts of the teeth, rather than using extra-oral X-rays, which show tooth structure of the mouth and bones of the face. To fill a scientific gap in the field, a novel data set based on extra-oral X-ray images is proposed here. A statistical comparison of the results found with the 10 image segmentation methods over our proposed data set comprised of 1,500 images is also carried out, providing a more comprehensive source of performance assessment. Discussion on limitations of the methods conceived over the past year as well as future perspectives on exploiting learning-based segmentation methods to improve performance are also provided.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 00:31:06 GMT" } ]
2018-02-12T00:00:00
[ [ "Jader", "Gil", "" ], [ "Oliveira", "Luciano", "" ], [ "Pithon", "Matheus", "" ] ]
new_dataset
0.988971
1802.03102
Themistoklis Mavridis
Themis Mavridis, Pablo Estevez, Lucas Bernardi
Learning to Match
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Booking.com is a virtual two-sided marketplace where guests and accommodation providers are the two distinct stakeholders. They meet to satisfy their respective and different goals. Guests want to be able to choose accommodations from a huge and diverse inventory, fast and reliably within their requirements and constraints. Accommodation providers desire to reach a reliable and large market that maximizes their revenue. Finding the best accommodation for the guests, a problem typically addressed by the recommender systems community, and finding the best audience for the accommodation providers, are key pieces of a good platform. This work describes how Booking.com extends such an approach, enabling the guests themselves to find the best accommodation by helping them to discover their needs and restrictions, what the market can actually offer, reinforcing good decisions, discouraging bad ones, etc., turning the platform into a decision process advisor, as opposed to a decision maker. Booking.com implements this idea with hundreds of Machine Learned Models, all of them validated through rigorous Randomized Controlled Experiments. We further elaborate on model types, techniques, methodological issues and challenges that we have faced.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 02:13:00 GMT" } ]
2018-02-12T00:00:00
[ [ "Mavridis", "Themis", "" ], [ "Estevez", "Pablo", "" ], [ "Bernardi", "Lucas", "" ] ]
new_dataset
0.961041
1802.03109
Carlos Toxtli
Carlos Toxtli, Andr\'es Monroy-Hern\'andez, Justin Cranshaw
Understanding Chatbot-mediated Task Management
5 pages, 2 figures, CHI 2018
null
10.1145/3173574.3173632
null
cs.HC
http://creativecommons.org/publicdomain/zero/1.0/
Effective task management is essential to successful team collaboration. While the past decade has seen considerable innovation in systems that track and manage group tasks, these innovations have typically been outside of the principal communication channels: email, instant messenger, and group chat. Teams formulate, discuss, refine, assign, and track the progress of their collaborative tasks over electronic communication channels, yet they must leave these channels to update their task-tracking tools, creating a source of friction and inefficiency. To address this problem, we explore how bots might be used to mediate task management for individuals and teams. We deploy a prototype bot to eight different teams of information workers to help them create, assign, and keep track of tasks, all within their main communication channel. We derived seven insights for the design of future bots for coordinating work.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 03:15:33 GMT" } ]
2018-02-12T00:00:00
[ [ "Toxtli", "Carlos", "" ], [ "Monroy-Hernández", "Andrés", "" ], [ "Cranshaw", "Justin", "" ] ]
new_dataset
0.992981
1802.03142
Laurent Besacier
Ali Can Kocabiyikoglu, Laurent Besacier, Olivier Kraif
Augmenting Librispeech with French Translations: A Multimodal Corpus for Direct Speech Translation Evaluation
LREC 2018, Japan
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent works in spoken language translation (SLT) have attempted to build end-to-end speech-to-text translation without using source language transcription during learning or decoding. However, while large quantities of parallel texts (such as Europarl, OpenSubtitles) are available for training machine translation systems, there are no large (100h) and open source parallel corpora that include speech in a source language aligned to text in a target language. This paper tries to fill this gap by augmenting an existing (monolingual) corpus: LibriSpeech. This corpus, used for automatic speech recognition, is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. After gathering French e-books corresponding to the English audio-books from LibriSpeech, we align speech segments at the sentence level with their respective translations and obtain 236h of usable parallel data. This paper presents the details of the processing as well as a manual evaluation conducted on a small subset of the corpus. This evaluation shows that the automatic alignments scores are reasonably correlated with the human judgments of the bilingual alignment quality. We believe that this corpus (which is made available online) is useful for replicable experiments in direct speech translation or more general spoken language translation experiments.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 06:29:43 GMT" } ]
2018-02-12T00:00:00
[ [ "Kocabiyikoglu", "Ali Can", "" ], [ "Besacier", "Laurent", "" ], [ "Kraif", "Olivier", "" ] ]
new_dataset
0.993746
1802.03159
Jan Seeger
Jan Seeger, Rohit A. Deshmukh and Arne Br\"oring
Running Distributed and Dynamic IoT Choreographies
Submitted to GIoTS 2018
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
IoT systems are growing larger and larger and are becoming suitable for basic automation tasks. One of the features IoT automation systems can provide is dealing with a dynamic system, i.e., devices leaving and joining the system during operation. Additionally, IoT automation systems operate in a decentralized manner. Current commercial automation systems have difficulty providing these features. Integrating new devices into an automation system takes manual intervention. Additionally, automation systems also require central entities to orchestrate the operation of participants. With smarter sensors and actors, we can move control operations into software deployed on a decentralized network of devices, and provide support for dynamic systems. In this paper, we present a framework for automation systems that demonstrates these two properties (distributed and dynamic). We represent applications as semantically described data flows that are run decentrally on participating devices, and connected at runtime via rules. This allows integrating new devices into applications without manual interaction and removes central controllers from the equation. This approach provides similar features to current automation systems (central engineering, multiple instantiation of applications), but enables distributed and dynamic operation. We demonstrate satisfying performance of the system via a quantitative evaluation.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 07:51:32 GMT" } ]
2018-02-12T00:00:00
[ [ "Seeger", "Jan", "" ], [ "Deshmukh", "Rohit A.", "" ], [ "Bröring", "Arne", "" ] ]
new_dataset
0.995678
1802.03279
Michael Ying Yang
Michael Ying Yang, Matthias Reso, Jun Tang, Wentong Liao, Bodo Rosenhahn
Temporally Object-based Video Co-Segmentation
ISVC 2015 (Oral)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an unsupervised video object co-segmentation framework based on the primary object proposals to extract the common foreground object(s) from a given video set. In addition to the objectness attributes and motion coherence our framework exploits the temporal consistency of the object-like regions between adjacent frames to enrich the set of original object proposals. We call the enriched proposal sets temporal proposal streams, as they are composed of the most similar proposals from each frame augmented with predicted proposals using temporally consistent superpixel information. The temporal proposal streams represent all the possible region tubes of the objects. Therefore, we formulate a graphical model to select a proposal stream for each object in which the pairwise potentials consist of the appearance dissimilarity between different streams in the same video and also the similarity between the streams in different videos. This model is suitable for single (multiple) foreground objects in two (more) videos, which can be solved by any existing energy minimization method. We evaluate our proposed framework by comparing it to other video co-segmentation algorithms. Our method achieves improved performance on state-of-the-art benchmark datasets.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 14:32:12 GMT" } ]
2018-02-12T00:00:00
[ [ "Yang", "Michael Ying", "" ], [ "Reso", "Matthias", "" ], [ "Tang", "Jun", "" ], [ "Liao", "Wentong", "" ], [ "Rosenhahn", "Bodo", "" ] ]
new_dataset
0.987451
1802.03367
Jedidiah Crandall
Jeffrey Knockel, Thomas Ristenpart, and Jedidiah Crandall
When Textbook RSA is Used to Protect the Privacy of Hundreds of Millions of Users
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We evaluate Tencent's QQ Browser, a popular mobile browser in China with hundreds of millions of users---including 16 million overseas, with respect to the threat model of a man-in-the-middle attacker with state actor capabilities. This is motivated by information in the Snowden revelations suggesting that another Chinese mobile browser, UC Browser, was being used to track users by Western nation-state adversaries. Among the many issues we found in QQ Browser that are presented in this paper, the use of "textbook RSA"---that is, RSA implemented as shown in textbooks, with no padding---is particularly interesting because it affords us the opportunity to contextualize existing research in breaking textbook RSA. We also present a novel attack on QQ Browser's use of textbook RSA that is distinguished from previous research by its simplicity. We emphasize that although QQ Browser's cryptography and our attacks on it are very simple, the impact is serious. Thus, research into how to break very poor cryptography (such as textbook RSA) has both pedagogical value and real-world impact.
[ { "version": "v1", "created": "Fri, 9 Feb 2018 18:02:22 GMT" } ]
2018-02-12T00:00:00
[ [ "Knockel", "Jeffrey", "" ], [ "Ristenpart", "Thomas", "" ], [ "Crandall", "Jedidiah", "" ] ]
new_dataset
0.998967
1408.6923
Bogdan Oancea
Bogdan Oancea, Tudorel Andrei, Raluca Mariana Dragoescu
GPGPU Computing
null
Proceedings of the CKS International Conference, 2012
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/3.0/
Since the first idea of using the GPU for general purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (STREAM), and the new framework, OpenCL, that tries to unify the GPGPU computing models.
[ { "version": "v1", "created": "Fri, 29 Aug 2014 05:24:20 GMT" } ]
2018-02-09T00:00:00
[ [ "Oancea", "Bogdan", "" ], [ "Andrei", "Tudorel", "" ], [ "Dragoescu", "Raluca Mariana", "" ] ]
new_dataset
0.99448
1708.05096
Zainab Zaidi
Zainab Zaidi, Vasilis Friderikos, Zarrar Yousaf, Simon Fletcher, Mischa Dohler, and Hamid Aghvami
Will SDN be part of 5G?
33 pages, 10 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For many, this is no longer a valid question and the case is considered settled with SDN/NFV (Software Defined Networking/Network Function Virtualization) providing the inevitable innovation enablers solving many outstanding management issues regarding 5G. However, given the monumental task of softwarization of the radio access network (RAN) while 5G is just around the corner and some companies have started unveiling their 5G equipment already, the concern is very realistic that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all important obstacles in the way and looks at the state of the art of the relevant solutions. This survey is different from the previous surveys on SDN-based RAN as it focuses on the salient problems and discusses solutions proposed within and outside SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN related upgrades, latency of general purpose processors (GPP), and the additional security vulnerabilities that softwarization brings to the RAN. We have also provided a summary of the architectural developments in the SDN-based RAN landscape as not all work can be covered under the focused issues. This paper provides a comprehensive survey on the state of the art of SDN-based RAN and clearly points out the gaps in the technology.
[ { "version": "v1", "created": "Wed, 16 Aug 2017 22:20:13 GMT" }, { "version": "v2", "created": "Wed, 7 Feb 2018 21:19:40 GMT" } ]
2018-02-09T00:00:00
[ [ "Zaidi", "Zainab", "" ], [ "Friderikos", "Vasilis", "" ], [ "Yousaf", "Zarrar", "" ], [ "Fletcher", "Simon", "" ], [ "Dohler", "Mischa", "" ], [ "Aghvami", "Hamid", "" ] ]
new_dataset
0.982658
1710.03109
Umberto Mart\'inez-Pe\~nas
Umberto Mart\'inez-Pe\~nas
Skew and linearized Reed-Solomon codes and maximum sum rank distance codes over any division ring
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reed-Solomon codes and Gabidulin codes have maximum Hamming distance and maximum rank distance, respectively. A general construction using skew polynomials, called skew Reed-Solomon codes, has already been introduced in the literature. In this work, we introduce a linearized version of such codes, called linearized Reed-Solomon codes. We prove that they have maximum sum-rank distance. Such distance is of interest in multishot network coding or in singleshot multi-network coding. To prove our result, we introduce new metrics defined by skew polynomials, which we call skew metrics, we prove that skew Reed-Solomon codes have maximum skew distance, and then we translate this scenario to linearized Reed-Solomon codes and the sum-rank metric. The theories of Reed-Solomon codes and Gabidulin codes are particular cases of our theory, and the sum-rank metric extends both the Hamming and rank metrics. We develop our theory over any division ring (commutative or non-commutative field). We also consider non-zero derivations, which give new maximum rank distance codes over infinite fields not considered before.
[ { "version": "v1", "created": "Mon, 9 Oct 2017 14:14:19 GMT" }, { "version": "v2", "created": "Thu, 8 Feb 2018 15:46:11 GMT" } ]
2018-02-09T00:00:00
[ [ "Martínez-Peñas", "Umberto", "" ] ]
new_dataset
0.999232
1710.03979
Stavros Tripakis
Viorel Preoteasa, Iulia Dragomir, Stavros Tripakis
The Refinement Calculus of Reactive Systems
null
null
null
null
cs.LO cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Refinement Calculus of Reactive Systems (RCRS) is a compositional formal framework for modeling and reasoning about reactive systems. RCRS provides a language for describing atomic components as symbolic transition systems or QLTL formulas, and composite components formed using three primitive composition operators: serial, parallel, and feedback. The semantics of the language is given in terms of monotonic property transformers, an extension to reactive systems of monotonic predicate transformers, which have been used to give compositional semantics to sequential programs. RCRS allows the specification of both safety and liveness properties. It also allows the modeling of input-output systems which are both non-deterministic and non-input-receptive (i.e., which may reject some inputs at some points in time), and can thus be seen as a behavioral type system. RCRS provides a set of techniques for symbolic computer-aided reasoning, including compositional static analysis and verification. RCRS comes with a publicly available implementation which includes a complete formalization of the RCRS theory in the Isabelle proof assistant.
[ { "version": "v1", "created": "Wed, 11 Oct 2017 09:41:59 GMT" }, { "version": "v2", "created": "Thu, 8 Feb 2018 11:19:27 GMT" } ]
2018-02-09T00:00:00
[ [ "Preoteasa", "Viorel", "" ], [ "Dragomir", "Iulia", "" ], [ "Tripakis", "Stavros", "" ] ]
new_dataset
0.99895
1712.05851
Shagun Jhaver
Shagun Jhaver, Larry Chan, Amy Bruckman
The View from the Other Side: The Border Between Controversial Speech and Harassment on Kotaku in Action
41 pages, 3 figures, under review at First Monday Journal
Jhaver, S., Chan, L., & Bruckman, A. (2018). The view from the other side: The border between controversial speech and harassment on Kotaku in Action. First Monday, 23(2)
10.5210/fm.v23i2.8232
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we use mixed methods to study a controversial Internet site: The Kotaku in Action (KiA) subreddit. Members of KiA are part of GamerGate, a distributed social movement. We present an emic account of what takes place on KiA: who its members are, what their goals and beliefs are, and what rules they follow. Members of GamerGate in general and KiA in particular have often been accused of harassment. However, KiA site policies explicitly prohibit such behavior, and members insist that they have been falsely accused. Underlying the controversy over whether KiA supports harassment is a complex disagreement about what "harassment" is, and where to draw the line between freedom of expression and censorship. We propose a model that characterizes perceptions of controversial speech, dividing it into four categories: criticism, insult, public shaming, and harassment. We also discuss design solutions that address the challenges of moderating harassment without impinging on free speech, and communicating across different ideologies.
[ { "version": "v1", "created": "Mon, 11 Dec 2017 20:56:27 GMT" }, { "version": "v2", "created": "Thu, 8 Feb 2018 06:57:34 GMT" } ]
2018-02-09T00:00:00
[ [ "Jhaver", "Shagun", "" ], [ "Chan", "Larry", "" ], [ "Bruckman", "Amy", "" ] ]
new_dataset
0.997793
1801.05271
Nitin Darkunde
N.S.Darkunde
On Some Linear Codes
I came to know later on that some results in this paper were wrong
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear codes with complementary duals ($LCD$) are those codes which meet their duals trivially. In this paper we will give an alternative proof of Massey's theorem\cite{Massey2}, which is one of the most important characterizations of $LCD$ codes. Let $LCD[n,k]_3$ denote the maximum of possible values of $d$ among $[n,k,d]$ ternary codes. We will give a bound on $LCD[n,k]_3$. We will also discuss the cases when this bound is attained.
[ { "version": "v1", "created": "Mon, 18 Dec 2017 11:18:10 GMT" }, { "version": "v2", "created": "Thu, 8 Feb 2018 13:23:05 GMT" } ]
2018-02-09T00:00:00
[ [ "Darkunde", "N. S.", "" ] ]
new_dataset
0.999149
1801.09992
Vi Tran
Phuong Hoai Ha, Vi Ngoc-Nha Tran, Ibrahim Umar, Philippas Tsigas, Anders Gidenstam, Paul Renaud-Goud, Ivan Walulya, Aras Atalar
D2.1 Models for energy consumption of data structures and algorithms
108 pages. arXiv admin note: text overlap with arXiv:1801.08761
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This deliverable reports our early energy models for data structures and algorithms based on both micro-benchmarks and concurrent algorithms. It reports the early results of Task 2.1 on investigating and modeling the trade-off between energy and performance in concurrent data structures and algorithms, which forms the basis for the whole work package 2 (WP2). The work has been conducted on the two main EXCESS platforms: (1) Intel platform with recent Intel multi-core CPUs and (2) Movidius embedded platform.
[ { "version": "v1", "created": "Mon, 29 Jan 2018 06:52:47 GMT" }, { "version": "v2", "created": "Thu, 8 Feb 2018 08:55:56 GMT" } ]
2018-02-09T00:00:00
[ [ "Ha", "Phuong Hoai", "" ], [ "Tran", "Vi Ngoc-Nha", "" ], [ "Umar", "Ibrahim", "" ], [ "Tsigas", "Philippas", "" ], [ "Gidenstam", "Anders", "" ], [ "Renaud-Goud", "Paul", "" ], [ "Walulya", "Ivan", "" ], [ "Atalar", "Aras", "" ] ]
new_dataset
0.990236
1801.10556
Vi Tran
Phuong Hoai Ha, Vi Ngoc-Nha Tran, Ibrahim Umar, Aras Atalar, Anders Gidenstam, Paul Renaud-Goud, Philippas Tsigas, Ivan Walulya
D2.3 Power models, energy models and libraries for energy-efficient concurrent data structures and algorithms
142 pages
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This deliverable reports the results of the power models, energy models and libraries for energy-efficient concurrent data structures and algorithms as available by project month 30 of Work Package 2 (WP2). It reports i) the latest results of Task 2.2-2.4 on providing programming abstractions and libraries for developing energy-efficient data structures and algorithms and ii) the improved results of Task 2.1 on investigating and modeling the trade-off between energy and performance of concurrent data structures and algorithms. The work has been conducted on two main EXCESS platforms: Intel platforms with recent Intel multicore CPUs and Movidius Myriad platforms.
[ { "version": "v1", "created": "Wed, 31 Jan 2018 17:17:30 GMT" }, { "version": "v2", "created": "Thu, 8 Feb 2018 08:51:50 GMT" } ]
2018-02-09T00:00:00
[ [ "Ha", "Phuong Hoai", "" ], [ "Tran", "Vi Ngoc-Nha", "" ], [ "Umar", "Ibrahim", "" ], [ "Atalar", "Aras", "" ], [ "Gidenstam", "Anders", "" ], [ "Renaud-Goud", "Paul", "" ], [ "Tsigas", "Philippas", "" ], [ "Walulya", "Ivan", "" ] ]
new_dataset
0.977845
1802.02573
Nandita Vijaykumar
Nandita Vijaykumar, Kevin Hsieh, Gennady Pekhimenko, Samira Khan, Ashish Shrestha, Saugata Ghose, Phillip B. Gibbons, Onur Mutlu
Zorua: Enhancing Programming Ease, Portability, and Performance in GPUs by Decoupling Programming Models from Resource Management
null
null
null
SAFARI Technical Report 2016-005
cs.DC cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The application resource specification--a static specification of several parameters such as the number of threads and the scratchpad memory usage per thread block--forms a critical component of the existing GPU programming models. This specification determines the performance of the application during execution because the corresponding on-chip hardware resources are allocated and managed purely based on this specification. This tight coupling between the software-provided resource specification and resource management in hardware leads to significant challenges in programming ease, portability, and performance, as we demonstrate in this work. Our goal in this work is to reduce the dependence of performance on the software-provided resource specification to simultaneously alleviate the above challenges. To this end, we introduce Zorua, a new resource virtualization framework, that decouples the programmer-specified resource usage of a GPU application from the actual allocation in the on-chip hardware resources. Zorua enables this decoupling by virtualizing each resource transparently to the programmer. We demonstrate that by providing the illusion of more resources than physically available, Zorua offers several important benefits: (i) Programming Ease: Zorua eases the burden on the programmer to provide code that is tuned to efficiently utilize the physically available on-chip resources. (ii) Portability: Zorua alleviates the necessity of re-tuning an application's resource usage when porting the application across GPU generations. (iii) Performance: By dynamically allocating resources and carefully oversubscribing them when necessary, Zorua improves or retains the performance of applications that are already highly tuned to best utilize the resources. The holistic virtualization provided by Zorua has many other potential uses which we describe in this paper.
[ { "version": "v1", "created": "Wed, 7 Feb 2018 20:07:48 GMT" } ]
2018-02-09T00:00:00
[ [ "Vijaykumar", "Nandita", "" ], [ "Hsieh", "Kevin", "" ], [ "Pekhimenko", "Gennady", "" ], [ "Khan", "Samira", "" ], [ "Shrestha", "Ashish", "" ], [ "Ghose", "Saugata", "" ], [ "Gibbons", "Phillip B.", "" ], [ "Mutlu", "Onur", "" ] ]
new_dataset
0.993431
1802.02668
Yi Zhu
Yi Zhu and Xueqing Deng and Shawn Newsam
Fine-Grained Land Use Classification at the City Scale Using Ground-Level Images
null
null
null
null
cs.CV cs.IR cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform fine-grained land use mapping at the city scale using ground-level images. Mapping land use is considerably more difficult than mapping land cover and is generally not possible using overhead imagery as it requires close-up views and seeing inside buildings. We postulate that the growing collections of georeferenced, ground-level images suggest an alternate approach to this geographic knowledge discovery problem. We develop a general framework that uses Flickr images to map 45 different land-use classes for the City of San Francisco. Individual images are classified using a novel convolutional neural network containing two streams, one for recognizing objects and another for recognizing scenes. This network is trained in an end-to-end manner directly on the labeled training images. We propose several strategies to overcome the noisiness of our user-generated data including search-based training set augmentation and online adaptive training. We derive a ground truth map of San Francisco in order to evaluate our method. We demonstrate the effectiveness of our approach through geo-visualization and quantitative analysis. Our framework achieves over 29% recall at the individual land parcel level which represents a strong baseline for the challenging 45-way land use classification problem especially given the noisiness of the image data.
[ { "version": "v1", "created": "Wed, 7 Feb 2018 23:01:13 GMT" } ]
2018-02-09T00:00:00
[ [ "Zhu", "Yi", "" ], [ "Deng", "Xueqing", "" ], [ "Newsam", "Shawn", "" ] ]
new_dataset
0.998241
1802.02700
Mordechai Guri
Mordechai Guri, Boris Zadov, Andrey Daidakulov, Yuval Elovici
ODINI : Escaping Sensitive Data from Faraday-Caged, Air-Gapped Computers via Magnetic Fields
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Air-gapped computers are computers which are kept isolated from the Internet because they store and process sensitive information. When highly sensitive data is involved, an air-gapped computer might also be kept secluded in a Faraday cage. The Faraday cage prevents the leakage of electromagnetic signals emanating from various computer parts, which may be picked up remotely by an eavesdropping adversary. The air-gap separation, coupled with the Faraday shield, provides a high level of isolation, preventing the potential leakage of sensitive data from the system. In this paper, we show how attackers can bypass Faraday cages and air-gaps in order to leak data from highly secure computers. Our method is based on an exploitation of the magnetic field generated by the computer CPU. Unlike electromagnetic radiation (EMR), low frequency magnetic radiation propagates through the air, penetrating metal shielding such as Faraday cages (e.g., a compass still works inside a Faraday cage). We introduce a malware code-named ODINI that can control the low frequency magnetic fields emitted from the infected computer by regulating the load of the CPU cores. Arbitrary data can be modulated and transmitted on top of the magnetic emission and received by a magnetic receiver (bug) placed nearby. We provide technical background and examine the characteristics of the magnetic fields. We implement a malware prototype and discuss the design considerations along with the implementation details. We also show that the malicious code does not require special privileges (e.g., root) and can successfully operate from within isolated virtual machines (VMs) as well.
[ { "version": "v1", "created": "Thu, 8 Feb 2018 03:22:45 GMT" } ]
2018-02-09T00:00:00
[ [ "Guri", "Mordechai", "" ], [ "Zadov", "Boris", "" ], [ "Daidakulov", "Andrey", "" ], [ "Elovici", "Yuval", "" ] ]
new_dataset
0.958011
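The ODINI entry above describes modulating data onto magnetic emissions by regulating CPU load. The signal-processing core of such a channel is simple on-off keying: high load for a 1 bit, low load for a 0 bit, with the receiver averaging over each bit period. A toy simulation in Python (the levels, bit period, and noise model are invented for illustration, not the paper's measured channel):

```python
import numpy as np

rng = np.random.default_rng(0)

def modulate(bits, samples_per_bit=50):
    """On-off keying: high 'CPU load' (field amplitude) for 1, low for 0,
    plus simulated sensor noise."""
    levels = np.repeat([1.0 if b else 0.1 for b in bits], samples_per_bit)
    return levels + 0.05 * rng.standard_normal(levels.size)

def demodulate(signal, samples_per_bit=50, threshold=0.55):
    """Average each bit period and threshold to recover the bit."""
    chunks = signal.reshape(-1, samples_per_bit)
    return [int(c.mean() > threshold) for c in chunks]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
received = demodulate(modulate(payload))
```

Averaging over the bit period is what makes the scheme robust to noise: the per-sample noise shrinks by roughly the square root of the number of samples per bit.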
1802.02926
George Christodoulides
George Christodoulides (ILC), Mathieu Avanzi, Jean-Philippe Goldman (UNIGE)
DisMo: A Morphosyntactic, Disfluency and Multi-Word Unit Annotator. An Evaluation on a Corpus of French Spontaneous and Read Speech
null
Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC), May 2014, Reykjavik, Iceland
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present DisMo, a multi-level annotator for spoken language corpora that integrates part-of-speech tagging with basic disfluency detection and annotation, and multi-word unit recognition. DisMo is a hybrid system that uses a combination of lexical resources, rules, and statistical models based on Conditional Random Fields (CRF). In this paper, we present the first public version of DisMo for French. The system is trained and its performance evaluated on a 57k-token corpus, including different varieties of French spoken in three countries (Belgium, France and Switzerland). DisMo supports a multi-level annotation scheme, in which the tokenisation to minimal word units is complemented with multi-word unit groupings (each having associated POS tags), as well as separate levels for annotating disfluencies and discourse phenomena. We present the system's architecture, linguistic resources and its hierarchical tag-set. Results show that DisMo achieves a precision of 95% (finest tag-set) to 96.8% (coarse tag-set) in POS-tagging non-punctuated, sound-aligned transcriptions of spoken French, while also offering substantial possibilities for automated multi-level annotation.
[ { "version": "v1", "created": "Thu, 8 Feb 2018 15:38:54 GMT" } ]
2018-02-09T00:00:00
[ [ "Christodoulides", "George", "", "ILC" ], [ "Avanzi", "Mathieu", "", "UNIGE" ], [ "Goldman", "Jean-Philippe", "", "UNIGE" ] ]
new_dataset
0.989911
1208.3124
Daniel Reem
Daniel Reem
On the computation of zone and double zone diagrams
Very slight improvements (mainly correction of a few typos); add DOI; Ref [51] points to a freely available computer application which implements the algorithms; to appear in Discrete & Computational Geometry (available online)
Discrete Comput. Geom. 59 (2018), 253--292
10.1007/s00454-017-9958-8
null
cs.CG math.FA math.MG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical objects in computational geometry are defined by explicit relations. Several years ago the pioneering works of T. Asano, J. Matousek and T. Tokuyama introduced "implicit computational geometry", in which the geometric objects are defined by implicit relations involving sets. An important member in this family is called "a zone diagram". The implicit nature of zone diagrams implies, as already observed in the original works, that their computation is a challenging task. In a continuous setting this task has been addressed (briefly) only by these authors in the Euclidean plane with point sites. We discuss the possibility to compute zone diagrams in a wide class of spaces and also shed new light on their computation in the original setting. The class of spaces, which is introduced here, includes, in particular, Euclidean spheres and finite dimensional strictly convex normed spaces. Sites of a general form are allowed and it is shown that a generalization of the iterative method suggested by Asano, Matousek and Tokuyama converges to a double zone diagram, another implicit geometric object whose existence is known in general. Occasionally a zone diagram can be obtained from this procedure. The actual (approximate) computation of the iterations is based on a simple algorithm which enables the approximate computation of Voronoi diagrams in a general setting. Our analysis also yields a few byproducts of independent interest, such as certain topological properties of Voronoi cells (e.g., that in the considered setting their boundaries cannot be "fat").
[ { "version": "v1", "created": "Tue, 14 Aug 2012 16:19:13 GMT" }, { "version": "v2", "created": "Tue, 4 Dec 2012 19:19:36 GMT" }, { "version": "v3", "created": "Mon, 29 Apr 2013 04:03:25 GMT" }, { "version": "v4", "created": "Tue, 25 Apr 2017 12:33:11 GMT" }, { "version": "v5", "created": "Wed, 29 Nov 2017 18:10:54 GMT" }, { "version": "v6", "created": "Sun, 31 Dec 2017 18:59:07 GMT" } ]
2018-02-08T00:00:00
[ [ "Reem", "Daniel", "" ] ]
new_dataset
0.981528
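The zone-diagram entry above describes an iterative method converging to a fixed point of the dominance map. For two point sites on a line, the fixed point is known in closed form (each region has length one third of the inter-site distance), which makes a tiny grid discretization easy to check. This is a 1D toy of the alternating iteration, not the paper's general algorithm:

```python
import numpy as np

# Grid approximation of the dominance map for two point sites in 1D.
grid = np.linspace(0.0, 10.0, 2001)
p, q = 0.0, 10.0

def dom(site, enemy_pts):
    """Grid points at least as close to `site` as to every enemy point."""
    d_enemy = np.min(np.abs(grid[:, None] - enemy_pts[None, :]), axis=1)
    return grid[np.abs(grid - site) <= d_enemy]

R_q = grid.copy()            # start: the enemy region is the whole space
for _ in range(30):          # alternate the two updates until the fixed point
    R_p = dom(p, R_q)
    R_q = dom(q, R_p)

# Known fixed point for two sites at distance L: each region has length L/3.
```

The alternating updates shrink/grow the two regions toward the unique fixed point; on this grid the region boundaries settle within a couple of grid steps of 10/3 and 20/3.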
1705.05590
Thang X. Vu
Thang X. Vu, Symeon Chatzinotas, Bjorn Ottersten
Edge-Caching Wireless Networks: Performance Analysis and Optimization
to appear in IEEE Trans. Wireless Commun
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Edge-caching has received much attention as an efficient technique to reduce delivery latency and network congestion during peak-traffic times by bringing data closer to end users. Existing works usually design caching algorithms separately from the physical layer design. In this paper, we analyse edge-caching wireless networks by taking the caching capability into account when designing the signal transmission. Particularly, we investigate multi-layer caching where both the base station (BS) and users are capable of storing content data in their local caches, and analyse the performance of edge-caching wireless networks under two notable caching strategies: uncoded and coded caching. Firstly, we propose a coded caching strategy that applies to arbitrary values of cache size. The required backhaul and access rates are derived as functions of the BS and user cache sizes. Secondly, closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Based on the derived formulas, the system EE is maximized via precoding vector design and optimization while satisfying a predefined user request rate. Thirdly, two optimization problems are proposed to minimize the content delivery time for the two caching strategies. Finally, numerical results are presented to verify the effectiveness of the two caching methods.
[ { "version": "v1", "created": "Tue, 16 May 2017 08:39:43 GMT" }, { "version": "v2", "created": "Sun, 28 May 2017 15:09:05 GMT" }, { "version": "v3", "created": "Tue, 6 Feb 2018 22:58:32 GMT" } ]
2018-02-08T00:00:00
[ [ "Vu", "Thang X.", "" ], [ "Chatzinotas", "Symeon", "" ], [ "Ottersten", "Bjorn", "" ] ]
new_dataset
0.998202
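The caching entry above contrasts uncoded and coded delivery. The textbook reference point for that contrast is the classical Maddah-Ali-Niesen coded caching rate, which quantifies the multicast gain of coding over plain local caching; the snippet below computes that classical formula (it is related to, but not identical to, the multi-layer scheme of the paper):

```python
def coded_caching_rate(K, M, N):
    """Classical Maddah-Ali-Niesen delivery rate (in files) over the shared
    link for K users, per-user cache size M, library of N files."""
    if M >= N:
        return 0.0
    return K * (1 - M / N) / (1 + K * M / N)

def uncoded_rate(K, M, N):
    """Conventional uncoded caching: each user still fetches the uncached part."""
    return K * (1 - M / N)

# A larger cache shrinks the delivery rate, and coding always helps:
rates = [(M, coded_caching_rate(10, M, 100), uncoded_rate(10, M, 100))
         for M in (0, 10, 50, 100)]
```

The denominator 1 + KM/N is the coded multicast gain: at M = N/K it already halves the uncoded rate.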
1802.02149
Monica Marra
Monica Marra
Astrophysicists and physicists as creators of ArXiv-based commenting resources for their research communities. An initial survey
Journal article 16 pages
Information Services and Use, vol.37, no.4, pp. 371-387, published 8 January 2018
10.3233/ISU-170856
null
cs.DL astro-ph.IM
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper conveys the outcomes of what results to be the first, though initial, overview of commenting platforms and related 2.0 resources born within and for the astrophysical community (from 2004 to 2016). Experiences were added, mainly in the physics domain, for a total of 22 major items, including four epijournals, and four supplementary resources, thus casting some light onto an unexpected richness and consonance of endeavours. These experiences rest almost entirely on the contents of the database ArXiv, which adds to its merits that of potentially setting the grounds for web 2.0 resources, and research behaviours, to be explored. Most of the experiences retrieved are UK and US based, but the resulting picture is international, as various European countries, China and Australia have been actively involved. Final remarks about creation patterns and outcome of these resources are outlined. The results integrate the previous studies according to which the web 2.0 is presently of limited use for communication in astrophysics and vouch for a role of researchers in the shaping of their own professional communication tools that is greater than expected. Collaterally, some aspects of ArXiv s recent pathway towards partial inclusion of web 2.0 features are touched upon. Further investigation is hoped for.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 16:35:51 GMT" } ]
2018-02-08T00:00:00
[ [ "Marra", "Monica", "" ] ]
new_dataset
0.992179
1802.02187
Vinh Ngo
Vinh Ngo, Arnau Casadevall, Marc Codina, David Castells-Rufas, Jordi Carrabina
A High-Performance HOG Extractor on FPGA
Presented at HIP3ES, 2018
null
null
HIP3ES/2018/5
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pedestrian detection is one of the key problems in the emerging self-driving car industry, and the HOG algorithm has proven to provide good accuracy for it. A considerable amount of research has been devoted to accelerating the HOG algorithm on FPGAs because of their low-power and high-throughput characteristics. In this paper, we present a high-performance HOG architecture for pedestrian detection on a low-cost FPGA platform. It achieves a maximum throughput of 526 FPS with 640x480 input images, which is 3.25 times faster than the state-of-the-art design. The accelerator is integrated with SVM-based prediction to realize a complete pedestrian detection system, and the power consumption of the whole system is comparable with the best existing implementations.
[ { "version": "v1", "created": "Fri, 12 Jan 2018 18:12:43 GMT" } ]
2018-02-08T00:00:00
[ [ "Ngo", "Vinh", "" ], [ "Casadevall", "Arnau", "" ], [ "Codina", "Marc", "" ], [ "Castells-Rufas", "David", "" ], [ "Carrabina", "Jordi", "" ] ]
new_dataset
0.984139
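The HOG entry above is about hardware acceleration, but the algorithm itself is compact: image gradients, unsigned orientation binning, and per-cell magnitude histograms. A minimal NumPy front end sketch (block normalization is omitted for brevity; this illustrates the descriptor, not the paper's FPGA pipeline):

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Minimal HOG front end: centered gradients, unsigned orientation
    binning, per-cell magnitude histograms (no block normalization)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]      # centered [-1, 0, 1] filter
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned gradient direction
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    for i in range(h // cell * cell):
        for j in range(w // cell * cell):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist

# A vertical edge yields purely horizontal gradients: all energy in bin 0.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
H = hog_cell_histograms(img)
```

The per-pixel independence of the gradient and binning stages is exactly what makes HOG attractive for a deeply pipelined FPGA implementation.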
1802.02204
Pawel Cyrta
Tomasz Trzcinski, Adam Bielski, Pawe{\l} Cyrta, Matthew Zak
SocialML: machine learning for social media video creators
2pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, social media have become one of the main places where creative content is published and consumed by billions of users. Contrary to traditional media, social media allow publishers to receive almost instantaneous feedback on their creative work at an unprecedented scale. This is a perfect use case for machine learning methods that can use these massive amounts of data to provide content creators with inspirational ideas and constructive criticism of their work. In this work, we present a comprehensive overview of machine learning-empowered tools we developed for video creators at Group Nine Media - one of the major social media companies that creates short-form videos with over three billion views per month. Our main contribution is a set of tools that allow creators to leverage massive amounts of data to improve their creation process, evaluate their videos before publication and improve content quality. These applications include an interactive conversational bot that allows access to material archives, a Web-based application for automatic selection of the optimal video thumbnail, as well as deep learning methods for optimizing headlines and predicting video popularity. Our A/B tests show that deployment of our tools leads to a significant increase of the average video view count by 12.9%. An additional contribution is a set of considerations collected during the deployment of those tools that can help.
[ { "version": "v1", "created": "Thu, 25 Jan 2018 08:15:54 GMT" } ]
2018-02-08T00:00:00
[ [ "Trzcinski", "Tomasz", "" ], [ "Bielski", "Adam", "" ], [ "Cyrta", "Paweł", "" ], [ "Zak", "Matthew", "" ] ]
new_dataset
0.990988
1802.02317
Mordechai Guri
Mordechai Guri, Andrey Daidakulov, Yuval Elovici
MAGNETO: Covert Channel between Air-Gapped Systems and Nearby Smartphones via CPU-Generated Magnetic Fields
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show that attackers can leak data from isolated, air-gapped computers to nearby smartphones via covert magnetic signals. The proposed covert channel works even if a smartphone is kept inside a Faraday shielding case, which aims to block any type of inbound and outbound wireless communication (Wi-Fi, cellular, Bluetooth, etc.). The channel also works if the smartphone is set in airplane mode in order to block any communication with the device. We implement a malware that controls the magnetic fields emanating from the computer by regulating workloads on the CPU cores. Sensitive data such as encryption keys, passwords, or keylogging data is encoded and transmitted over the magnetic signals. A smartphone located near the computer receives the covert signals with its magnetic sensor. We present technical background, and discuss signal generation, data encoding, and signal reception. We show that the proposed covert channel works from a user-level process, without requiring special privileges, and can successfully operate from within an isolated virtual machine (VM).
[ { "version": "v1", "created": "Wed, 7 Feb 2018 06:22:10 GMT" } ]
2018-02-08T00:00:00
[ [ "Guri", "Mordechai", "" ], [ "Daidakulov", "Andrey", "" ], [ "Elovici", "Yuval", "" ] ]
new_dataset
0.99877
1802.02360
Joaquin Garcia-Alfaro
Jose Rubio-Hernan, Rishikesh Sahay, Luca De Cicco, Joaquin Garcia-Alfaro
Cyber-Physical Architecture Assisted by Programmable Networking
8 pages, 3 figures, pre-print
null
null
null
cs.CR cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyber-physical technologies are prone to attacks, in addition to faults and failures. The issue of protecting cyber-physical systems should be tackled by jointly addressing security at both cyber and physical domains, in order to promptly detect and mitigate cyber-physical threats. Towards this end, this letter proposes a new architecture combining control-theoretic solutions together with programmable networking techniques to jointly handle crucial threats to cyber-physical systems. The architecture paves the way for new interesting techniques, research directions, and challenges which we discuss in our work.
[ { "version": "v1", "created": "Wed, 7 Feb 2018 09:17:39 GMT" } ]
2018-02-08T00:00:00
[ [ "Rubio-Hernan", "Jose", "" ], [ "Sahay", "Rishikesh", "" ], [ "De Cicco", "Luca", "" ], [ "Garcia-Alfaro", "Joaquin", "" ] ]
new_dataset
0.983148
1512.06632
Zeno Toffano
Zeno Toffano
Eigenlogic in the spirit of George Boole
24 pages, 3 tables
null
null
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents an operational and geometric approach to logic. It starts from the multilinear elective decomposition of binary logical functions in the original form introduced by George Boole. A justification on historical grounds is presented, bridging Boole's theory and the use of his arithmetical logical functions with the axioms of Boolean algebra using sets and quantum logic. It is shown that this algebraic polynomial formulation can be naturally extended to operators in finite vector spaces. Logical operators appear as commuting projection operators, and the truth values, which take the binary values {0,1}, are the respective eigenvalues. In this view the solution of a logical proposition resulting from the operation on a combination of arguments appears as a selection where the outcome can only be one of the eigenvalues. In this way propositional logic can be formalized in linear algebra by using elective developments, which correspond here to combinations of tensored elementary projection operators. The original and principal motivation of this work is applications in the new field of quantum information; differences from more traditional quantum logic approaches are outlined.
[ { "version": "v1", "created": "Mon, 21 Dec 2015 14:09:27 GMT" }, { "version": "v10", "created": "Thu, 16 Feb 2017 16:44:20 GMT" }, { "version": "v11", "created": "Mon, 5 Feb 2018 09:18:26 GMT" }, { "version": "v12", "created": "Tue, 6 Feb 2018 08:10:29 GMT" }, { "version": "v2", "created": "Tue, 22 Dec 2015 10:38:12 GMT" }, { "version": "v3", "created": "Wed, 23 Dec 2015 15:25:38 GMT" }, { "version": "v4", "created": "Tue, 29 Dec 2015 12:05:42 GMT" }, { "version": "v5", "created": "Fri, 1 Jan 2016 12:35:38 GMT" }, { "version": "v6", "created": "Thu, 16 Jun 2016 19:43:25 GMT" }, { "version": "v7", "created": "Sat, 18 Jun 2016 09:42:33 GMT" }, { "version": "v8", "created": "Wed, 1 Feb 2017 10:42:26 GMT" }, { "version": "v9", "created": "Fri, 10 Feb 2017 13:26:19 GMT" } ]
2018-02-07T00:00:00
[ [ "Toffano", "Zeno", "" ] ]
new_dataset
0.970728
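The Eigenlogic entry above says that connectives become commuting projection operators whose eigenvalues are the truth values. That claim can be checked numerically in a few lines: build the elective (polynomial) developments of AND, OR and XOR from tensored elementary projectors and read the truth tables off the diagonals. A toy sketch of the construction, not the paper's full development:

```python
import numpy as np

# Truth values {0,1} as the eigenvalues of a single-proposition projector.
P = np.diag([0.0, 1.0])

# Two propositions act on the 4-dim tensor-product space;
# basis ordered as (a, b) = 00, 01, 10, 11.
A = np.kron(P, np.eye(2))        # projector for "a is true"
B = np.kron(np.eye(2), P)        # projector for "b is true"

# Elective developments of the connectives, following Boole's arithmetic:
AND = A @ B                      # a * b
OR  = A + B - A @ B              # a + b - a*b
XOR = A + B - 2 * (A @ B)        # a + b - 2ab

truth_table_and = np.diag(AND)   # eigenvalues = truth table (0, 0, 0, 1)
```

Because A and B are diagonal in the same basis, all the resulting operators commute, and evaluating a proposition amounts to selecting one of the eigenvalues.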
1605.03639
Ali Mollahosseini
Ali Mollahosseini, Behzad Hassani, Michelle J. Salvador, Hojjat Abdollahi, David Chan, and Mohammad H. Mahoor
Facial Expression Recognition from World Wild Web
null
null
10.1109/CVPRW.2016.188
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing facial expressions in a wild setting has remained a challenging task in computer vision. The World Wide Web is a good source of facial images, most of which are captured under uncontrolled conditions. In fact, the Internet is a World Wild Web of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to six basic expressions and neutral. Deep neural networks and noise modeling were used in three different training scenarios to determine how accurately facial expressions can be recognized when trained on noisy images collected from the web using query terms (e.g., happy face, laughing man, etc.). The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.
[ { "version": "v1", "created": "Wed, 11 May 2016 23:45:00 GMT" }, { "version": "v2", "created": "Fri, 20 May 2016 04:38:42 GMT" }, { "version": "v3", "created": "Thu, 5 Jan 2017 18:07:46 GMT" } ]
2018-02-07T00:00:00
[ [ "Mollahosseini", "Ali", "" ], [ "Hassani", "Behzad", "" ], [ "Salvador", "Michelle J.", "" ], [ "Abdollahi", "Hojjat", "" ], [ "Chan", "David", "" ], [ "Mahoor", "Mohammad H.", "" ] ]
new_dataset
0.986829
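A common design for classifying such images (also used in the land-use entry earlier in this dump) is a two-stream network fused at the score level: each stream produces class probabilities and the final prediction averages them. A minimal late-fusion sketch in NumPy; the logits and fusion weight are hypothetical, not a trained model:

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # stabilized softmax
    e = np.exp(z)
    return e / e.sum()

def two_stream_predict(stream1_logits, stream2_logits, w=0.5):
    """Late fusion of two recognition streams by weighted averaging of
    their class probabilities (illustrative, not the paper's network)."""
    return w * softmax(stream1_logits) + (1 - w) * softmax(stream2_logits)

# Seven classes: six basic expressions + neutral (hypothetical logits).
s1 = np.array([2.0, 0.1, 0.0, -1.0, 0.3, 0.2, 0.1])
s2 = np.array([1.5, 0.4, 0.1, -0.5, 0.0, 0.2, 0.3])
probs = two_stream_predict(s1, s2)
label = int(np.argmax(probs))
```

Score-level fusion keeps the streams independently trainable and lets the fusion weight be tuned on a validation set.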
1607.00235
Tuvi Etzion
Simon Blackburn and Tuvi Etzion
PIR Array Codes with Optimal Virtual Server Rate
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been much recent interest in Private Information Retrieval (PIR) in models where a database is stored across several servers using coding techniques from distributed storage, rather than being simply replicated. In particular, a recent breakthrough result of Fazeli, Vardy and Yaakobi introduces the notion of a PIR code and a PIR array code, and uses this notion to produce efficient PIR protocols. In this paper we are interested in designing PIR array codes. We consider the case when we have $m$ servers, with each server storing a fraction $(1/s)$ of the bits of the database; here $s$ is a fixed rational number with $s > 1$. A PIR array code with the $k$-PIR property enables a $k$-server PIR protocol (with $k\leq m$) to be emulated on $m$ servers, with the overall storage requirements of the protocol being reduced. The communication complexity of a PIR protocol reduces as $k$ grows, so the virtual server rate, defined to be $k/m$, is an important parameter. We study the maximum virtual server rate of a PIR array code with the $k$-PIR property. We present upper bounds on the achievable virtual server rate, some constructions, and ideas on how to obtain PIR array codes with the highest possible virtual server rate. In particular, we present constructions that asymptotically meet our upper bounds, and the exact largest virtual server rate is obtained when $1 < s \leq 2$. A $k$-PIR code (and similarly a $k$-PIR array code) is also a locally repairable code with symbol availability $k-1$. Such a code ensures $k$ parallel reads for each information symbol. So the virtual server rate is very closely related to the symbol availability of the code when used as a locally repairable code. The results of this paper are also discussed in this context, where subspace codes also have an important role.
[ { "version": "v1", "created": "Fri, 1 Jul 2016 13:25:52 GMT" }, { "version": "v2", "created": "Wed, 6 Jul 2016 17:45:04 GMT" }, { "version": "v3", "created": "Thu, 25 Aug 2016 21:44:16 GMT" }, { "version": "v4", "created": "Wed, 28 Sep 2016 11:28:51 GMT" }, { "version": "v5", "created": "Thu, 2 Mar 2017 14:16:12 GMT" }, { "version": "v6", "created": "Tue, 6 Feb 2018 13:08:25 GMT" } ]
2018-02-07T00:00:00
[ [ "Blackburn", "Simon", "" ], [ "Etzion", "Tuvi", "" ] ]
new_dataset
0.993366
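The $k$-PIR property in the entry above can be checked mechanically on a small example: every database part must have $k$ pairwise disjoint server subsets from which it can be recovered. Below is the classic $s=2$ toy layout (parts $x_1, x_2$ stored as $x_1$, $x_2$, $x_1+x_2$ over GF(2)), verified by brute force; this is the textbook example, not a construction from the paper:

```python
from itertools import combinations

# m = 3 servers, each storing one GF(2) combination of s = 2 database parts,
# written as a coefficient vector over (x1, x2): x1, x2, and x1 + x2.
servers = [(1, 0), (0, 1), (1, 1)]

def xor(vectors):
    """Coordinatewise XOR (GF(2) sum) of coefficient vectors."""
    return tuple(sum(c) % 2 for c in zip(*vectors))

def recovery_sets(target):
    """All server subsets whose stored symbols XOR to the target part."""
    sets = []
    for r in range(1, len(servers) + 1):
        for idx in combinations(range(len(servers)), r):
            if xor([servers[i] for i in idx]) == target:
                sets.append(set(idx))
    return sets

def k_pir(k):
    """k-PIR property: every part has k pairwise disjoint recovery sets."""
    for part in [(1, 0), (0, 1)]:
        sets = recovery_sets(part)
        found = any(all(a.isdisjoint(b) for a, b in combinations(family, 2))
                    for family in combinations(sets, k))
        if not found:
            return False
    return True

virtual_server_rate = 2 / len(servers)   # k/m for the largest k with k_pir(k)
```

Here each part has the disjoint recovery sets {itself} and {the other two servers}, so $k=2$ servers are emulated by $m=3$, for a virtual server rate of 2/3.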
1711.05443
Zhiyuan Tang
Miao Zhang, Xiaofei Kang, Yanqing Wang, Lantian Li, Zhiyuan Tang, Haisheng Dai, Dong Wang
Human and Machine Speaker Recognition Based on Short Trivial Events
ICASSP 2018
null
null
null
cs.SD cs.CL cs.NE eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trivial events are ubiquitous in human-to-human conversations, e.g., cough, laugh and sniff. Compared to regular speech, these trivial events are usually short and unclear; they are generally regarded as not speaker discriminative and so are largely ignored by present speaker recognition research. However, these trivial events are highly valuable in some particular circumstances such as forensic examination, as they are less subject to intentional change and so can be used to discover the genuine speaker behind disguised speech. In this paper, we collect a trivial event speech database that involves 75 speakers and 6 types of events, and report preliminary speaker recognition results on this database, by both human listeners and machines. In particular, the deep feature learning technique recently proposed by our group is utilized to analyze and recognize the trivial events, which leads to acceptable equal error rates (EERs) despite the extremely short durations (0.2-0.5 seconds) of these events. Comparing different types of events, 'hmm' appears to be the most speaker discriminative.
[ { "version": "v1", "created": "Wed, 15 Nov 2017 08:21:20 GMT" }, { "version": "v2", "created": "Fri, 5 Jan 2018 10:27:00 GMT" }, { "version": "v3", "created": "Tue, 6 Feb 2018 04:13:27 GMT" } ]
2018-02-07T00:00:00
[ [ "Zhang", "Miao", "" ], [ "Kang", "Xiaofei", "" ], [ "Wang", "Yanqing", "" ], [ "Li", "Lantian", "" ], [ "Tang", "Zhiyuan", "" ], [ "Dai", "Haisheng", "" ], [ "Wang", "Dong", "" ] ]
new_dataset
0.976777
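The entry above reports results as equal error rates (EERs). An EER is the operating point where the false-accept rate of impostor trials equals the false-reject rate of genuine trials; it can be computed with a simple threshold sweep. A toy implementation on hypothetical scores (not the paper's deep-feature system):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over the observed scores and return the point
    where false-accept and false-reject rates are closest."""
    best_gap, eer = 2.0, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine trials wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Perfectly separable scores give EER 0; overlapping scores do not.
separable = equal_error_rate(np.array([0.9, 0.8, 0.7]),
                             np.array([0.1, 0.2, 0.3]))
overlapping = equal_error_rate(np.array([0.6, 0.4]),
                               np.array([0.5, 0.3]))
```

Sweeping only the observed score values is sufficient because FAR and FRR are step functions that change only at those values.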
1802.01621
Joseph O'Rourke
Joseph O'Rourke
Un-unzippable Convex Caps
14 pages, 14 figures, 10 references
null
null
null
cs.CG cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An unzipping of a polyhedron P is a cut-path through its vertices that unfolds P to a non-overlapping shape in the plane. It is an open problem to decide if every convex P has an unzipping. Here we show that there are nearly flat convex caps that have no unzipping. A convex cap is a "top" portion of a convex polyhedron; it has a boundary, i.e., it is not closed by a base.
[ { "version": "v1", "created": "Mon, 5 Feb 2018 19:48:19 GMT" } ]
2018-02-07T00:00:00
[ [ "O'Rourke", "Joseph", "" ] ]
new_dataset
0.997479
1802.01713
Sneha Mehta
Sneha Mehta
Spot that Bird: A Location Based Bird Game
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In today's age of pervasive computing and social media, people make extensive use of technology for communicating, sharing media and learning. Yet while in the outdoors, on a hike or a trail, we find ourselves bereft of information about the natural world surrounding us. In this paper I present in detail the design and technological considerations required to build a location-based mobile application for learning about the avian taxonomy present in the user's surroundings. It is designed as a game for better engagement and learning. The application suggests birds likely to be sighted in the vicinity of the user and requires the user to spot those birds and upload a photograph to the system. If spotted correctly, the user scores points. I also discuss some design methods and evaluation approaches for the application.
[ { "version": "v1", "created": "Mon, 5 Feb 2018 22:17:36 GMT" } ]
2018-02-07T00:00:00
[ [ "Mehta", "Sneha", "" ] ]
new_dataset
0.99563
1802.01738
Andrey Mokhov
Andrey Mokhov, Georgy Lukyanov, Jakob Lechner
Formal Verification of Spacecraft Control Programs Using a Metalanguage for State Transformers
Under review, feedback is sought
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Verification of functional correctness of control programs is an essential task for the development of space electronics; it is difficult and time-consuming and typically outweighs design and programming tasks in terms of development hours. We present a verification approach designed to help spacecraft engineers reduce the effort required for formal verification of low-level control programs executed on custom hardware. The approach uses a metalanguage to describe the semantics of a program as a state transformer, which can be compiled to multiple targets for testing, formal verification, and code generation. The metalanguage itself is embedded in a strongly-typed host language (Haskell), providing a way to prove program properties at the type level, which can shorten the feedback loop and further increase the productivity of engineers. The verification approach is demonstrated on an industrial case study. We present REDFIN, a processing core used in space missions, and its formal semantics expressed using the proposed metalanguage, followed by a detailed example of verification of a simple control program.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 00:18:20 GMT" } ]
2018-02-07T00:00:00
[ [ "Mokhov", "Andrey", "" ], [ "Lukyanov", "Georgy", "" ], [ "Lechner", "Jakob", "" ] ]
new_dataset
0.986641
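The REDFIN entry above treats a program's semantics as a state transformer: each instruction is a function from machine state to machine state, and a program is their composition, which makes exhaustive property checking over a small state space straightforward. The Python sketch below illustrates only that idea; the instruction set and state shape are invented and are not the REDFIN ISA:

```python
from itertools import product

# Each instruction denotes a state transformer (dict -> dict).
def LOAD(reg, value):
    return lambda s: {**s, reg: value}

def ADD(dst, src):
    return lambda s: {**s, dst: (s[dst] + s[src]) % 256}

def run(program, state):
    """A program's semantics: the composition of its instructions."""
    for instr in program:
        state = instr(state)
    return state

program = [LOAD('r0', 0), ADD('r0', 'r1'), ADD('r0', 'r1')]

# Exhaustive verification over a small state space: the program always
# leaves r0 == 2 * r1 (mod 256), regardless of the initial state.
ok = all(run(program, {'r0': a, 'r1': b})['r0'] == (2 * b) % 256
         for a, b in product(range(8), range(8)))
```

In the paper this role is played by a typed metalanguage embedded in Haskell, where the same transformer can also be compiled to SMT queries instead of being enumerated.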
1802.01752
Chenqi Mou Dr.
Chenqi Mou, Yang Bai
On the chordality of polynomial sets in triangular decomposition in top-down style
20 pages
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper the chordal graph structures of polynomial sets appearing in triangular decomposition in top-down style are studied for the case when the input polynomial set has a chordal associated graph. In particular, we prove two results. First, the associated graph of one specific triangular set computed in any algorithm for triangular decomposition in top-down style is a subgraph of the chordal graph of the input polynomial set. Second, all the polynomial sets, including all the computed triangular sets, appearing in one simply-structured algorithm for triangular decomposition in top-down style (Wang's method) have associated graphs that are subgraphs of the chordal graph of the input polynomial set. These subgraph structures in triangular decomposition in top-down style are a multivariate generalization of existing results for Gaussian elimination and may lead to specialized efficient algorithms and refined complexity analyses for the triangular decomposition of chordal polynomial sets.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 01:12:56 GMT" } ]
2018-02-07T00:00:00
[ [ "Mou", "Chenqi", "" ], [ "Bai", "Yang", "" ] ]
new_dataset
0.987934
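The associated graph in the entry above has the polynomial set's variables as vertices, with two variables adjacent iff they occur together in some polynomial; chordality of that graph is the paper's key hypothesis. For tiny examples, chordality can be verified by brute-force search for a perfect elimination ordering. A sketch (exponential in the number of variables, purely illustrative):

```python
from itertools import permutations

def associated_graph(polys):
    """Variables as vertices; edges between co-occurring variables."""
    vertices = sorted(set().union(*polys))
    adj = {v: set() for v in vertices}
    for p in polys:
        for u in p:
            adj[u] |= p - {u}
    return vertices, adj

def is_chordal(polys):
    """Brute-force: chordal iff some perfect elimination ordering exists,
    i.e. each vertex's later neighbours form a clique."""
    vertices, adj = associated_graph(polys)

    def ok(order):
        pos = {v: i for i, v in enumerate(order)}
        for i, v in enumerate(order):
            later = [u for u in adj[v] if pos[u] > i]
            if any(b not in adj[a] for a in later for b in later if a < b):
                return False
        return True

    return any(ok(o) for o in permutations(vertices))

# Polynomial sets given as the variable supports of their polynomials.
triangle_plus_leaf = [{'x1', 'x2'}, {'x2', 'x3'}, {'x1', 'x3'}, {'x3', 'x4'}]
four_cycle = [{'x1', 'x2'}, {'x2', 'x3'}, {'x3', 'x4'}, {'x4', 'x1'}]
```

A 4-cycle of variable co-occurrences is the smallest non-chordal case: whichever variable is eliminated first, its two neighbours are non-adjacent.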
1802.01886
Weinan Zhang
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, Yong Yu
Texygen: A Benchmarking Platform for Text Generation Models
4 pages
null
null
null
cs.CL cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce Texygen, a benchmarking platform to support research on open-domain text generation models. Texygen has not only implemented a majority of text generation models, but also covered a set of metrics that evaluate the diversity, the quality and the consistency of the generated texts. The Texygen platform could help standardize the research on text generation and facilitate the sharing of fine-tuned open-source implementations among researchers for their work. As a consequence, this would help in improving the reproducibility and reliability of future research work in text generation.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 11:30:32 GMT" } ]
2018-02-07T00:00:00
[ [ "Zhu", "Yaoming", "" ], [ "Lu", "Sidi", "" ], [ "Zheng", "Lei", "" ], [ "Guo", "Jiaxian", "" ], [ "Zhang", "Weinan", "" ], [ "Wang", "Jun", "" ], [ "Yu", "Yong", "" ] ]
new_dataset
0.995928
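A concrete example of the diversity metrics the Texygen entry mentions is distinct-n: the ratio of unique n-grams to total n-grams across generated samples (a low value signals mode collapse). A minimal implementation; this is one common diversity measure, offered as an illustration rather than a claim about Texygen's exact metric set:

```python
def distinct_n(texts, n):
    """Ratio of unique n-grams to total n-grams across generated texts.
    Returns 0.0 when no n-grams exist."""
    ngrams = []
    for t in texts:
        toks = t.split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

samples = ["the cat sat", "the cat ran", "a dog ran"]
d1 = distinct_n(samples, 1)   # unigram diversity
d2 = distinct_n(samples, 2)   # bigram diversity
```

Higher n-gram orders are stricter: repeated phrases lower distinct-2 even when the vocabulary (distinct-1) looks varied.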
1802.01983
Joan Pujol Roig
Joan S. Pujol Roig, Filippo Tosato and Deniz G\"und\"uz
Storage-Latency Trade-off in Cache-Aided Fog Radio Access Networks
6 pages, 7 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs) connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 14:58:24 GMT" } ]
2018-02-07T00:00:00
[ [ "Roig", "Joan S. Pujol", "" ], [ "Tosato", "Filippo", "" ], [ "Gündüz", "Deniz", "" ] ]
new_dataset
0.955235
1802.02026
Piotr Antonik
Piotr Antonik, Marc Haelterman, and Serge Massar
Brain-inspired photonic signal processor for periodic pattern generation and chaotic system emulation
16 pages, 18 figures
Phys. Rev. Applied 7, 054014 -- Published 24 May 2017
10.1103/PhysRevApplied.7.054014
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments the output was uncoupled from the system and in most cases simply computed offline on a post-processing computer. However, numerical investigations have shown that feeding the output back into the reservoir would open the possibility of long-horizon time series forecasting. Here we present a photonic reservoir computer with output feedback, and demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, this leads us to introduce several metrics, based on standard signal processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers, and therefore the range of applications they could potentially tackle. It also raises novel questions in nonlinear dynamics and chaos theory.
[ { "version": "v1", "created": "Tue, 6 Feb 2018 16:08:32 GMT" } ]
2018-02-07T00:00:00
[ [ "Antonik", "Piotr", "" ], [ "Haelterman", "Marc", "" ], [ "Massar", "Serge", "" ] ]
new_dataset
0.964589
1802.02053
Marwa Hadj Salah
Marwa Hadj Salah, Didier Schwab, Herv\'e Blanchon and Mounir Zrigui
Syst\`eme de traduction automatique statistique Anglais-Arabe
in French
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine translation (MT) is the process of translating text written in a source language into text in a target language. In this article, we present our English-Arabic statistical machine translation system. First, we present the general process for setting up a statistical machine translation system, then we describe the tools as well as the different corpora we used to build our MT system. Our system was evaluated in terms of the BLEU score (24.51%).
[ { "version": "v1", "created": "Tue, 6 Feb 2018 16:36:44 GMT" } ]
2018-02-07T00:00:00
[ [ "Salah", "Marwa Hadj", "" ], [ "Schwab", "Didier", "" ], [ "Blanchon", "Hervé", "" ], [ "Zrigui", "Mounir", "" ] ]
new_dataset
0.99786
1409.8230
Adrian Barbu
Josue Anaya, Adrian Barbu
RENOIR - A Dataset for Real Low-Light Image Noise Reduction
27 pages, 11 figures
Journal of Visual Communication and Image Representation 51, No. 2, 144-154, 2018
10.1016/j.jvcir.2018.01.012
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image denoising algorithms are evaluated using images corrupted by artificial noise, which may lead to incorrect conclusions about their performances on real noise. In this paper we introduce a dataset of color images corrupted by natural noise due to low-light conditions, together with spatially and intensity-aligned low noise images of the same scenes. We also introduce a method for estimating the true noise level in our images, since even the low noise images contain small amounts of noise. We evaluate the accuracy of our noise estimation method on real and artificial noise, and investigate the Poisson-Gaussian noise model. Finally, we use our dataset to evaluate six denoising algorithms: Active Random Field, BM3D, Bilevel-MRF, Multi-Layer Perceptron, and two versions of NL-means. We show that while the Multi-Layer Perceptron, Bilevel-MRF, and NL-means with soft threshold outperform BM3D on gray images with synthetic noise, they lag behind on our dataset.
[ { "version": "v1", "created": "Mon, 29 Sep 2014 18:38:52 GMT" }, { "version": "v2", "created": "Tue, 3 May 2016 17:43:06 GMT" }, { "version": "v3", "created": "Thu, 16 Jun 2016 17:11:16 GMT" }, { "version": "v4", "created": "Mon, 20 Jun 2016 15:12:13 GMT" }, { "version": "v5", "created": "Tue, 21 Jun 2016 10:38:40 GMT" }, { "version": "v6", "created": "Tue, 9 Aug 2016 21:23:51 GMT" }, { "version": "v7", "created": "Tue, 18 Oct 2016 13:59:43 GMT" }, { "version": "v8", "created": "Tue, 13 Dec 2016 21:36:38 GMT" }, { "version": "v9", "created": "Mon, 8 May 2017 22:24:44 GMT" } ]
2018-02-06T00:00:00
[ [ "Anaya", "Josue", "" ], [ "Barbu", "Adrian", "" ] ]
new_dataset
0.999798
1606.08333
Hans van Ditmarsch
Thomas {\AA}gotnes, Hans van Ditmarsch, Yanjing Wang
True Lies
null
null
10.1007/s11229-017-1423-y
null
cs.AI cs.LO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A true lie is a lie that becomes true when announced. In a logic of announcements, where the announcing agent is not modelled, a true lie is a formula (that is false and) that becomes true when announced. We investigate true lies and other types of interaction between announced formulas, their preconditions and their postconditions, in the setting of Gerbrandy's logic of believed announcements, wherein agents may have or obtain incorrect beliefs. Our results are on the satisfiability and validity of instantiations of these semantically defined categories, on iterated announcements, including arbitrarily often iterated announcements, and on syntactic characterization. We close with results for iterated announcements in the logic of knowledge (instead of belief), and for lying as private announcements (instead of public announcements) to different agents. Detailed examples illustrate our lying concepts.
[ { "version": "v1", "created": "Mon, 27 Jun 2016 15:59:32 GMT" }, { "version": "v2", "created": "Thu, 27 Apr 2017 16:08:35 GMT" } ]
2018-02-06T00:00:00
[ [ "Ågotnes", "Thomas", "" ], [ "van Ditmarsch", "Hans", "" ], [ "Wang", "Yanjing", "" ] ]
new_dataset
0.999555
1609.06795
Matthew A. Wright
Matthew A. Wright and Roberto Horowitz
Particle-Filter-Enabled Real-Time Sensor Fault Detection Without a Model of Faults
To appear at the 56th IEEE Conference on Decision and Control (CDC 2017)
Proceedings of the 56th IEEE Conference on Decision and Control (CDC 2017), pp. 5757-5763, Dec. 2017
10.1109/CDC.2017.8264529
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are experiencing an explosion in the amount of sensors measuring our activities and the world around us. These sensors are spread throughout the built environment and can help us perform state estimation and control of related systems, but they are often built and/or maintained by third parties or system users. As a result, by outsourcing system measurement to third parties, the controller must accept their measurements without being able to directly verify the sensors' correct operation. Instead, detection and rejection of measurements from faulty sensors must be done with the raw data only. Towards this goal, we present a method of detecting possibly faulty behavior of sensors. The method does not require that the control designer have any model of faulty sensor behavior. As we discuss, it turns out that the widely-used particle filter state estimation algorithm provides the ingredients necessary for a hypothesis test against all ranges of correct operating behavior, obviating the need for a fault model to compare measurements. We demonstrate the applicability of our method by showing its ability to reject faulty measurements and improve state estimation accuracy in a nonlinear vehicle traffic model, without any information about the characteristics of the generated faulty measurements. In our test, we correctly identify nearly 90% of measurements as faulty or non-faulty without having any fault model. This leads to only a 3% increase in state estimation error over a theoretical 100%-accurate fault detector.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 01:34:35 GMT" }, { "version": "v2", "created": "Tue, 21 Mar 2017 19:15:22 GMT" }, { "version": "v3", "created": "Fri, 22 Sep 2017 03:32:03 GMT" } ]
2018-02-06T00:00:00
[ [ "Wright", "Matthew A.", "" ], [ "Horowitz", "Roberto", "" ] ]
new_dataset
0.99263
1612.00137
Cewu Lu
Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu
RMPE: Regional Multi-person Pose Estimation
Models & Codes available at https://github.com/MVIG-SJTU/RMPE or https://github.com/Fang-Haoshu/RMPE
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-person pose estimation in the wild is challenging. Although state-of-the-art human detectors have demonstrated good performance, small errors in localization and recognition are inevitable. These errors can cause failures for a single-person pose estimator (SPPE), especially for methods that solely depend on human detection results. In this paper, we propose a novel regional multi-person pose estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes. Our framework consists of three components: Symmetric Spatial Transformer Network (SSTN), Parametric Pose Non-Maximum-Suppression (NMS), and Pose-Guided Proposals Generator (PGPG). Our method is able to handle inaccurate bounding boxes and redundant detections, allowing it to achieve a 17% increase in mAP over the state-of-the-art methods on the MPII (multi person) dataset. Our model and source codes are publicly available.
[ { "version": "v1", "created": "Thu, 1 Dec 2016 04:36:52 GMT" }, { "version": "v2", "created": "Tue, 7 Feb 2017 09:34:28 GMT" }, { "version": "v3", "created": "Wed, 19 Apr 2017 16:25:22 GMT" }, { "version": "v4", "created": "Sat, 2 Sep 2017 00:16:36 GMT" }, { "version": "v5", "created": "Sun, 4 Feb 2018 04:27:56 GMT" } ]
2018-02-06T00:00:00
[ [ "Fang", "Hao-Shu", "" ], [ "Xie", "Shuqin", "" ], [ "Tai", "Yu-Wing", "" ], [ "Lu", "Cewu", "" ] ]
new_dataset
0.998416