Dataset schema (column: type, with the observed value-length or class statistics from the dataset viewer):

id: string (length 9 to 10)
submitter: string (length 2 to 52)
authors: string (length 4 to 6.51k)
title: string (length 4 to 246)
comments: string (length 1 to 523)
journal-ref: string (length 4 to 345)
doi: string (length 11 to 120)
report-no: string (length 2 to 243)
categories: string (length 5 to 98)
license: string (9 classes)
abstract: string (length 33 to 3.33k)
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string (1 class)
probability: float64 (range 0.95 to 1)
id: 1203.5414
submitter: Stasys Jukna
authors: S. Jukna
title: Clique problem, cutting plane proofs and communication complexity
comments: 10 pages. Theorem 1 in the previous version holds only for bipartite graphs; the non-bipartite case remains open. I now separate the bipartite and non-bipartite cases (by switching from independent sets to cliques, hence a new title). Some new open problems as well as references are added
journal-ref: Information Processing Letters 112(20) (2012) 772-777
doi: 10.1016/j.ipl.2012.07.003
report-no: null
categories: cs.CC cs.DM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Motivated by its relation to the length of cutting plane proofs for the Maximum Biclique problem, we consider the following communication game on a given graph G, known to both players. Let K be the maximal number of vertices in a complete bipartite subgraph of G, which is not necessarily an induced subgraph if G is not bipartite. Alice gets a set A of vertices, and Bob gets a disjoint set B of vertices such that |A|+|B|>K. The goal is to find a nonedge of G between A and B. We show that O(\log n) bits of communication are enough for every n-vertex graph.
versions: [ { "version": "v1", "created": "Sat, 24 Mar 2012 13:51:15 GMT" }, { "version": "v2", "created": "Sun, 15 Apr 2012 15:20:40 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Jukna", "S.", "" ] ]
prediction: new_dataset
probability: 0.998748
id: 1406.3065
submitter: Stasys Jukna
authors: Stasys Jukna
title: Lower Bounds for Tropical Circuits and Dynamic Programs
comments: Corrected reduction to arithmetic circuits (holds only for multilinear polynomials, now Sect. 4). Solved Open Problem 3 about Min/Max gaps (now Lemma 10). Added lower bounds for the depth of tropical circuits (Sect. 15)
journal-ref: Theory of Computing Systems 57:1 (2015) 160-194
doi: 10.1007/s00224-014-9574-4
report-no: null
categories: cs.CC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Tropical circuits are circuits with Min and Plus, or Max and Plus, operations as gates. Their importance stems from their intimate relation to dynamic programming algorithms. The power of tropical circuits lies somewhere between that of monotone boolean circuits and monotone arithmetic circuits. In this paper we present some lower-bound arguments for tropical circuits, and hence for dynamic programs.
versions: [ { "version": "v1", "created": "Wed, 11 Jun 2014 20:58:10 GMT" }, { "version": "v2", "created": "Tue, 29 Jul 2014 11:47:33 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Jukna", "Stasys", "" ] ]
prediction: new_dataset
probability: 0.975018
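To make the tropical-semiring setting above concrete, here is a minimal sketch (not from the paper): in the (Min, Plus) semiring, "addition" is min and "multiplication" is +, so repeated min-plus matrix products implement a classic shortest-path dynamic program.

```python
# Sketch: evaluating a (Min, Plus) tropical computation as a DP.
INF = float("inf")

def min_plus_product(A, B):
    """One min-plus matrix product: C[i][j] = min_k A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shortest_paths(W):
    """Repeated min-plus squaring yields all-pairs shortest paths."""
    D, steps = W, 1
    while steps < len(W) - 1:
        D = min_plus_product(D, D)
        steps *= 2
    return D

W = [[0, 3, INF],
     [INF, 0, 1],
     [7, INF, 0]]
print(shortest_paths(W))  # entry [0][2] becomes 4 via the path 0 -> 1 -> 2
```

Swapping min for max (and negating edge weights appropriately) gives the (Max, Plus) variant; this is the sense in which tropical circuit lower bounds translate into lower bounds for dynamic programs.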
id: 1801.02728
submitter: Yuhua Chen
authors: Yuhua Chen, Yibin Xie, Zhengwei Zhou, Feng Shi, Anthony G. Christodoulou, Debiao Li
title: Brain MRI Super Resolution Using 3D Deep Densely Connected Neural Networks
comments: Accepted by ISBI'18
journal-ref: null
doi: 10.1109/ISBI.2018.8363679
report-no: null
categories: cs.CV eess.IV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Magnetic resonance imaging (MRI) at high spatial resolution provides detailed anatomical information and is often necessary for accurate quantitative analysis. However, high spatial resolution typically comes at the expense of longer scan time, less spatial coverage, and lower signal-to-noise ratio (SNR). Single Image Super-Resolution (SISR), a technique aimed at restoring high-resolution (HR) details from a single low-resolution (LR) input image, has been improved dramatically by recent breakthroughs in deep learning. In this paper, we introduce a new neural network architecture, the 3D Densely Connected Super-Resolution Network (DCSRN), to restore HR features of structural brain MR images. Through experiments on a dataset with 1,113 subjects, we demonstrate that our network outperforms bicubic interpolation as well as other deep learning methods in restoring 4x resolution-reduced images.
versions: [ { "version": "v1", "created": "Mon, 8 Jan 2018 23:56:32 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Chen", "Yuhua", "" ], [ "Xie", "Yibin", "" ], [ "Zhou", "Zhengwei", "" ], [ "Shi", "Feng", "" ], [ "Christodoulou", "Anthony G.", "" ], [ "Li", "Debiao", "" ] ]
prediction: new_dataset
probability: 0.978885
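Super-resolution methods such as the one above are typically compared against bicubic interpolation using a fidelity metric. As an illustrative sketch (the metric choice and peak value here are assumptions, not taken from the paper), PSNR between a restored image and the high-resolution ground truth is computed like this:

```python
import math

# Peak signal-to-noise ratio between a reference and a restored image,
# both given as 2D lists of pixel intensities.
def psnr(ref, test, peak=255.0):
    mse = sum((r - t) ** 2
              for row_r, row_t in zip(ref, test)
              for r, t in zip(row_r, row_t)) / (len(ref) * len(ref[0]))
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

ref  = [[100, 100], [100, 100]]
test = [[100, 100], [100, 110]]   # one pixel off by 10 -> MSE = 25
print(round(psnr(ref, test), 2))  # 34.15
```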
id: 1801.03069
submitter: Tingjun Chen
authors: Tingjun Chen, Mahmood Baraani Dastjerdi, Guy Farkash, Jin Zhou, Harish Krishnaswamy, Gil Zussman
title: Open-Access Full-Duplex Wireless in the ORBIT Testbed
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NI eess.SP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
In order to support experimentation with full-duplex (FD) wireless, we recently integrated an open-access FD transceiver in the ORBIT testbed. In this report, we present the design and implementation of the FD transceiver and interfaces, and provide examples and guidelines for experimentation. In particular, an ORBIT node with a National Instruments (NI)/Ettus Research Universal Software Radio Peripheral (USRP) N210 software-defined radio (SDR) was equipped with the Columbia FlexICoN Gen-1 customized RF self-interference (SI) canceller box. The RF canceller box includes an RF SI canceller that is implemented using discrete components on a printed circuit board (PCB) and achieves 40dB RF SI cancellation across 5MHz bandwidth. We provide an FD transceiver baseline program and present two example FD experiments where 90dB and 85dB overall SI cancellation is achieved for a simple waveform and PSK modulated signals across both the RF and digital domains. We also discuss potential FD wireless experiments that can be conducted based on the implemented open-access FD transceiver and baseline program.
versions: [ { "version": "v1", "created": "Tue, 9 Jan 2018 18:21:57 GMT" }, { "version": "v2", "created": "Tue, 29 May 2018 14:29:48 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Chen", "Tingjun", "" ], [ "Dastjerdi", "Mahmood Baraani", "" ], [ "Farkash", "Guy", "" ], [ "Zhou", "Jin", "" ], [ "Krishnaswamy", "Harish", "" ], [ "Zussman", "Gil", "" ] ]
prediction: new_dataset
probability: 0.999666
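The cancellation figures in the abstract above (40 dB in the RF domain, 90 dB overall across RF and digital domains) combine by simple dB arithmetic, since cascaded attenuation stages multiply as power ratios. A small sketch; the 50 dB digital-domain figure is an inference from 90 - 40, not a number stated in the report:

```python
# Cascaded self-interference cancellation stages expressed in dB simply add,
# because dB is logarithmic and power ratios multiply.
def db_to_ratio(db):
    return 10 ** (db / 10)

def total_cancellation_db(*stage_dbs):
    # Each stage attenuates the residual SI left by the previous one.
    return sum(stage_dbs)

rf_db, digital_db = 40, 50   # digital figure assumed for illustration
print(total_cancellation_db(rf_db, digital_db))  # 90
# A 0 dBm SI signal would be suppressed to 0 - 90 = -90 dBm,
# i.e. by a power ratio of 1e9:
print(db_to_ratio(90))
```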
id: 1803.00419
submitter: R. Baghdadi
authors: Riyadh Baghdadi, Jessica Ray, Malek Ben Romdhane, Emanuele Del Sozzo, Patricia Suriana, Shoaib Kamil, Saman Amarasinghe
title: Technical Report about Tiramisu: a Three-Layered Abstraction for Hiding Hardware Complexity from DSL Compilers
comments: This is a duplicate of 1804.10694. This version of the paper is outdated and should be deleted; only 1804.10694 should be kept. Future versions of the paper will replace 1804.10694 (as second, third version, ...), but for now we want to remove duplicates
journal-ref: null
doi: null
report-no: null
categories: cs.PL cs.PF
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
High-performance DSL developers work hard to take advantage of modern hardware. The DSL compilers have to build their own complex middle-ends before they can target a common back-end such as LLVM, which only handles single instruction streams with SIMD instructions. We introduce Tiramisu, a common middle-end that can generate efficient code for modern processors and accelerators such as multicores, GPUs, FPGAs and distributed clusters. Tiramisu introduces a novel three-level IR that separates the algorithm, how that algorithm is executed, and where intermediate data are stored. This separation simplifies optimization and makes targeting multiple hardware architectures from the same algorithm easier. As a result, DSL compilers can be made considerably less complex with no loss of performance while immediately targeting multiple hardware or hardware combinations such as distributed nodes with both CPUs and GPUs. We evaluated Tiramisu by creating a new middle-end for the Halide and Julia compilers. We show that Tiramisu extends Halide and Julia with many new capabilities including the ability to: express new algorithms (such as recurrent filters and non-rectangular iteration spaces), perform new complex loop nest transformations (such as wavefront parallelization, loop shifting and loop fusion) and generate efficient code for more architectures (such as combinations of distributed clusters, multicores, GPUs and FPGAs). Finally, we demonstrate that Tiramisu can generate very efficient code that matches the highly optimized Intel MKL gemm (generalized matrix multiplication) implementation; we also show speedups reaching 4X in Halide and 16X in Julia due to optimizations enabled by Tiramisu.
versions: [ { "version": "v1", "created": "Wed, 28 Feb 2018 17:05:22 GMT" }, { "version": "v2", "created": "Wed, 7 Mar 2018 13:48:28 GMT" }, { "version": "v3", "created": "Mon, 28 May 2018 19:49:55 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Baghdadi", "Riyadh", "" ], [ "Ray", "Jessica", "" ], [ "Romdhane", "Malek Ben", "" ], [ "Del Sozzo", "Emanuele", "" ], [ "Suriana", "Patricia", "" ], [ "Kamil", "Shoaib", "" ], [ "Amarasinghe", "Saman", "" ] ]
prediction: new_dataset
probability: 0.996151
id: 1805.11203
submitter: Xiang Zhang
authors: Xiang Zhang, Philip A. Chou, Ming-Ting Sun, Maolong Tang, Shanshe Wang, Siwei Ma, Wen Gao
title: Surface Light Field Compression using a Point Cloud Codec
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.MM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Light field (LF) representations aim to provide photo-realistic, free-viewpoint viewing experiences. However, the most popular LF representations are images from multiple views. Multi-view image-based representations generally need to restrict the range or degrees of freedom of the viewing experience to what can be interpolated in the image domain, essentially because they lack explicit geometry information. We present a new surface light field (SLF) representation based on explicit geometry, and a method for SLF compression. First, we map the multi-view images of a scene onto a 3D geometric point cloud. The color of each point in the point cloud is a function of viewing direction known as a view map. We represent each view map efficiently in a B-Spline wavelet basis. This representation is capable of modeling diverse surface materials and complex lighting conditions in a highly scalable and adaptive manner. The coefficients of the B-Spline wavelet representation are then compressed spatially. To increase the spatial correlation and thus improve compression efficiency, we introduce a smoothing term to make the coefficients more similar across the 3D space. We compress the coefficients spatially using existing point cloud compression (PCC) methods. On the decoder side, the scene is rendered efficiently from any viewing direction by reconstructing the view map at each point. In contrast to multi-view image-based LF approaches, our method supports photo-realistic rendering of real-world scenes from arbitrary viewpoints, i.e., with an unlimited six degrees of freedom (6DOF). In terms of rate and distortion, experimental results show that our method achieves superior performance with lighter decoder complexity compared with a reference image-plus-geometry compression (IGC) scheme, indicating its potential in practical virtual and augmented reality applications.
versions: [ { "version": "v1", "created": "Tue, 29 May 2018 00:08:30 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Zhang", "Xiang", "" ], [ "Chou", "Philip A.", "" ], [ "Sun", "Ming-Ting", "" ], [ "Tang", "Maolong", "" ], [ "Wang", "Shanshe", "" ], [ "Ma", "Siwei", "" ], [ "Gao", "Wen", "" ] ]
prediction: new_dataset
probability: 0.986685
id: 1805.11227
submitter: Chee Seng Chan
authors: Yuen Peng Loh, Chee Seng Chan
title: Getting to Know Low-light Images with The Exclusively Dark Dataset
comments: The Exclusively Dark (ExDARK) dataset is a collection of 7,363 low-light images from very low-light environments to twilight (i.e. 10 different conditions), with 12 object classes (as in PASCAL VOC) annotated on both the image class level and with local object bounding boxes. 16 pages, 13 figures, submitted to CVIU
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Low-light is an inescapable element of our daily surroundings that greatly affects the efficiency of our vision. Research on low-light imaging has seen steady growth, particularly in the field of image enhancement, but there is still no go-to database to serve as a benchmark. Moreover, research fields that could assist us in low-light environments, such as object detection, have glossed over this aspect even though breakthrough after breakthrough has been achieved in recent years, most noticeably because of the lack of low-light data (less than 2% of the total images) in successful public benchmark datasets such as PASCAL VOC, ImageNet, and Microsoft COCO. Thus, we propose the Exclusively Dark dataset to alleviate this data drought, consisting exclusively of ten different types of low-light images (i.e. low, ambient, object, single, weak, strong, screen, window, shadow and twilight) captured in visible light only, with image- and object-level annotations. Moreover, we share insightful findings regarding the effects of low-light on the object detection task by analyzing visualizations of both hand-crafted and learned features. Most importantly, we found that the effects of low-light reach far deeper into the features than can be solved by simple "illumination invariance". It is our hope that this analysis and the Exclusively Dark dataset can encourage growth in low-light domain research across different fields. The Exclusively Dark dataset with its annotations is available at https://github.com/cs-chan/Exclusively-Dark-Image-Dataset
versions: [ { "version": "v1", "created": "Tue, 29 May 2018 02:59:41 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Loh", "Yuen Peng", "" ], [ "Chan", "Chee Seng", "" ] ]
prediction: new_dataset
probability: 0.952786
id: 1805.11234
submitter: Junwei Bao
authors: Junwei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, Tiejun Zhao
title: Table-to-Text: Describing Table Region with Natural Language
comments: 9 pages, 4 figures. This paper has been published at AAAI 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.
versions: [ { "version": "v1", "created": "Tue, 29 May 2018 03:39:35 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Bao", "Junwei", "" ], [ "Tang", "Duyu", "" ], [ "Duan", "Nan", "" ], [ "Yan", "Zhao", "" ], [ "Lv", "Yuanhua", "" ], [ "Zhou", "Ming", "" ], [ "Zhao", "Tiejun", "" ] ]
prediction: new_dataset
probability: 0.998685
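The copying mechanism described in the abstract above can be sketched in miniature. This is a hedged, pointer-generator-style illustration (the paper's exact formulation may differ): a gate mixes the decoder's vocabulary distribution with attention weights over table cells, so rare words present in the table can be emitted even when out of vocabulary. The cell contents and probabilities below are hypothetical.

```python
# p_final(w) = gate * p_vocab(w) + (1 - gate) * (attention mass on cells containing w)
def copy_mixture(p_vocab, attn_over_cells, cell_words, gate):
    p_final = {w: gate * p for w, p in p_vocab.items()}
    for a, w in zip(attn_over_cells, cell_words):
        p_final[w] = p_final.get(w, 0.0) + (1 - gate) * a
    return p_final

p_vocab = {"born": 0.6, "in": 0.3, "<unk>": 0.1}   # decoder softmax (hypothetical)
cell_words = ["1946", "Hawaii"]                    # words in the table row
attn = [0.7, 0.3]                                  # attention over the two cells
out = copy_mixture(p_vocab, attn, cell_words, gate=0.5)
# "1946" receives probability 0.5 * 0.7 = 0.35 despite being out-of-vocabulary
print(out["1946"])
```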
id: 1805.11374
submitter: Yu Li
authors: Yu Li and Ya Zhang
title: Webpage Saliency Prediction with Two-stage Generative Adversarial Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Web page saliency prediction is a challenging problem in image transformation and computer vision. In this paper, we propose a new model that incorporates web page outline information to predict the regions of a web page that attract people's interest. For each web page image, our model generates a saliency map indicating the regions of interest. We propose a two-stage generative adversarial network and introduce image outline information for better transfer. Experimental results on the FIWI dataset show that our model achieves better saliency prediction performance.
versions: [ { "version": "v1", "created": "Tue, 29 May 2018 12:03:42 GMT" } ]
update_date: 2018-05-30T00:00:00
authors_parsed: [ [ "Li", "Yu", "" ], [ "Zhang", "Ya", "" ] ]
prediction: new_dataset
probability: 0.988194
id: 1402.4327
submitter: Marc Bagnol
authors: Clément Aubert, Marc Bagnol
title: Unification and Logarithmic Space
comments: null
journal-ref: International Conference on Rewriting Techniques and Applications RTA 2014: Rewriting and Typed Lambda Calculi pp 77-9
doi: 10.1007/978-3-319-08918-8_6
report-no: null
categories: cs.LO
license: http://creativecommons.org/licenses/by-nc-sa/3.0/
abstract:
We present an algebraic characterization of the complexity classes Logspace and NLogspace, using an algebra with a composition law based on unification. This new bridge between unification and complexity classes is inspired by proof theory, and more specifically by linear logic and the Geometry of Interaction. We show how unification can be used to build a model of computation by means of specific subalgebras associated to finite permutation groups. We then prove that whether an observation (the algebraic counterpart of a program) accepts a word can be decided within logarithmic space. We also show that the construction can naturally represent pointer machines, an intuitive way of understanding logarithmic space computing.
versions: [ { "version": "v1", "created": "Tue, 18 Feb 2014 13:32:50 GMT" }, { "version": "v2", "created": "Mon, 24 Mar 2014 23:20:13 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2014 11:13:25 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Aubert", "Clément", "" ], [ "Bagnol", "Marc", "" ] ]
prediction: new_dataset
probability: 0.9943
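For readers unfamiliar with the unification the abstract builds on, here is a minimal textbook sketch of first-order syntactic unification (the paper's algebraic machinery is far richer; this only illustrates the basic operation). Terms are strings (variables start uppercase) or (functor, args) tuples; the occurs-check is omitted for brevity.

```python
# Resolve a term through the current substitution s.
def walk(t, s):
    while isinstance(t, str) and t[0].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a substitution unifying a and b, or None on clash."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[0].isupper():   # a is an unbound variable
        s[a] = b
        return s
    if isinstance(b, str) and b[0].isupper():
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None  # functor or arity clash

# f(X, g(b)) unifies with f(a, g(Y)) under {X: a, Y: b}
print(unify(("f", "X", ("g", "b")), ("f", "a", ("g", "Y"))))
```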
id: 1710.07390
submitter: Omid Haji Maghsoudi
authors: Omid Haji Maghsoudi
title: Superpixel Based Segmentation and Classification of Polyps in Wireless Capsule Endoscopy
comments: This paper has been published in SPMB 2017
journal-ref: null
doi: 10.1109/SPMB.2017.8257027
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Wireless Capsule Endoscopy (WCE) is a relatively new technology for recording the entire GI tract, in vivo. The large number of frames captured during an examination makes it difficult for physicians to review them all, so reducing the reviewing time with intelligent methods has been a challenge. Polyps are tissues growing on the surface of the intestinal tract, not inside an organ. Most polyps are not cancerous, but once one grows larger than a centimeter, it has a high chance of turning into cancer. WCE frames offer the possibility of detecting polyps at an early stage. Here, the application of simple linear iterative clustering (SLIC) superpixels for segmentation of polyps in WCE frames is evaluated. Different SLIC superpixel numbers are examined to find the highest sensitivity for polyp detection. SLIC superpixel segmentation is promising for improving on the results of previous studies. Finally, the superpixels were classified using a support vector machine (SVM) after extracting some texture and color features. The classification results showed a sensitivity of 91%.
versions: [ { "version": "v1", "created": "Fri, 20 Oct 2017 01:32:53 GMT" }, { "version": "v2", "created": "Mon, 28 May 2018 15:59:16 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Maghsoudi", "Omid Haji", "" ] ]
prediction: new_dataset
probability: 0.997279
id: 1711.00199
submitter: Yu Xiang
authors: Yu Xiang, Tanner Schmidt, Venkatraman Narayanan and Dieter Fox
title: PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes
comments: Accepted to RSS 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at https://rse-lab.cs.washington.edu/projects/posecnn/.
versions: [ { "version": "v1", "created": "Wed, 1 Nov 2017 04:10:58 GMT" }, { "version": "v2", "created": "Tue, 20 Feb 2018 02:50:26 GMT" }, { "version": "v3", "created": "Sat, 26 May 2018 07:34:09 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Xiang", "Yu", "" ], [ "Schmidt", "Tanner", "" ], [ "Narayanan", "Venkatraman", "" ], [ "Fox", "Dieter", "" ] ]
prediction: new_dataset
probability: 0.984409
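The abstract above says PoseCNN regresses 3D rotation as a quaternion. A minimal sketch of what consuming such a regressed value involves (not the paper's code): normalizing the raw 4-vector to a unit quaternion and converting it to a rotation matrix.

```python
import math

# Normalize a raw regressed 4-vector (w, x, y, z) and convert to a 3x3
# rotation matrix using the standard unit-quaternion formula.
def quat_to_rot(q):
    w, x, y, z = q
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# The identity quaternion (1, 0, 0, 0) gives the identity matrix.
print(quat_to_rot((1, 0, 0, 0)))
```

Note that q and -q encode the same rotation, and symmetric objects admit many valid rotations; this ambiguity is one motivation for the symmetry-aware loss the paper introduces.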
id: 1711.03795
submitter: Ali Gholami Rudi
authors: Ali Gholami Rudi
title: Time-Windowed Contiguous Hotspot Queries
comments: Updates after ICCG 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
A hotspot of a moving entity is a region in which it spends a significant amount of time. Given the location of a moving object through a certain time interval, i.e. its trajectory, our goal is to find its hotspots. We consider axis-parallel square hotspots of fixed side length, which contain the longest contiguous portion of the trajectory. Gudmundsson, van Kreveld, and Staals (2013) presented an algorithm to find a hotspot of a trajectory in $O(n \log n)$, in which $n$ is the number of vertices of the trajectory. We present an algorithm for answering \emph{time-windowed} hotspot queries, to find a hotspot in any given time interval. The algorithm has an approximation factor of $1/2$ and answers each query with the time complexity $O(\log^2 n)$. The time complexity of the preprocessing step of the algorithm is $O(n)$. When the query contains the whole trajectory, it implies an $O(n)$ algorithm for finding approximate contiguous hotspots.
versions: [ { "version": "v1", "created": "Fri, 10 Nov 2017 12:36:17 GMT" }, { "version": "v2", "created": "Sat, 26 May 2018 11:51:19 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Rudi", "Ali Gholami", "" ] ]
prediction: new_dataset
probability: 0.973525
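To make the hotspot notion above concrete, here is a brute-force sketch of a simplified, vertex-based variant (an illustration only: the paper works with contiguous polyline portions and answers windowed queries far more efficiently): for fixed-side squares anchored with a corner at some trajectory vertex, find the longest contiguous run of vertices inside a square.

```python
# Longest contiguous run of trajectory vertices inside the axis-parallel
# square [x0, x0+s] x [y0, y0+s].
def longest_run_inside(points, x0, y0, s):
    best = run = 0
    for (x, y) in points:
        if x0 <= x <= x0 + s and y0 <= y <= y0 + s:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

def hotspot(points, s):
    """Try squares with a corner anchored at each vertex; return the best run."""
    best = 0
    for (px, py) in points:
        for (x0, y0) in ((px, py), (px - s, py), (px, py - s), (px - s, py - s)):
            best = max(best, longest_run_inside(points, x0, y0, s))
    return best

traj = [(0, 0), (1, 0), (1, 1), (5, 5), (6, 5)]
print(hotspot(traj, 2))  # the first three vertices fit in a 2x2 square -> 3
```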
id: 1801.07424
submitter: Wenguan Wang
authors: Wenguan Wang and Jianbing Shen and Fang Guo and Ming-Ming Cheng and Ali Borji
title: Revisiting Video Saliency: A Large-scale Benchmark and a New Model
comments: CVPR2018 paper. Website: https://github.com/wenguanwang/DHF1K. We have corrected some statistics of our results (baseline training setting (iii)) on the UCF sports dataset
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
In this work, we contribute to video saliency research in two ways. First, we introduce a new benchmark for predicting human eye movements during dynamic scene free-viewing, which has long been needed in this field. Our dataset, named DHF1K (Dynamic Human Fixation), consists of 1K high-quality, elaborately selected video sequences spanning a large range of scenes, motions, object types and background complexity. Existing video saliency datasets lack variety and generality of common dynamic scenes and fall short in covering challenging situations in unconstrained environments. In contrast, DHF1K makes a significant leap in terms of scalability, diversity and difficulty, and is expected to boost video saliency modeling. Second, we propose a novel video saliency model that augments the CNN-LSTM network architecture with an attention mechanism to enable fast, end-to-end saliency learning. The attention mechanism explicitly encodes static saliency information, thus allowing LSTM to focus on learning more flexible temporal saliency representation across successive frames. Such a design fully leverages existing large-scale static fixation datasets, avoids overfitting, and significantly improves training efficiency and testing performance. We thoroughly examine the performance of our model, with respect to state-of-the-art saliency models, on three large-scale datasets (i.e., DHF1K, Hollywood2, UCF sports). Experimental results over more than 1.2K testing videos containing 400K frames demonstrate that our model outperforms other competitors.
versions: [ { "version": "v1", "created": "Tue, 23 Jan 2018 08:01:50 GMT" }, { "version": "v2", "created": "Thu, 22 Mar 2018 22:35:33 GMT" }, { "version": "v3", "created": "Sat, 26 May 2018 05:07:41 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Wang", "Wenguan", "" ], [ "Shen", "Jianbing", "" ], [ "Guo", "Fang", "" ], [ "Cheng", "Ming-Ming", "" ], [ "Borji", "Ali", "" ] ]
prediction: new_dataset
probability: 0.999808
id: 1801.10100
submitter: Pavani Tripathi
authors: Aditya Lakra, Pavani Tripathi, Rohit Keshari, Mayank Vatsa, Richa Singh
title: SegDenseNet: Iris Segmentation for Pre and Post Cataract Surgery
comments: Corrected diagrams. Results remain the same!
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Cataract is caused by various factors such as age, trauma, genetics, smoking and substance consumption, and radiation. It is one of the most common ophthalmic diseases worldwide and can potentially affect iris-based biometric systems. India, which hosts the largest biometrics project in the world, has about 8 million people undergoing cataract surgery annually. While existing research shows that cataract does not have a major impact on iris recognition, our observations suggest that iris segmentation approaches are not well equipped to handle cataract or post-cataract surgery cases. Therefore, failure in iris segmentation affects the overall recognition performance. This paper presents an iris segmentation algorithm that is efficient under variations due to cataract and post-cataract surgery. The proposed algorithm, termed SegDenseNet, is a deep learning algorithm based on DenseNets. The experiments on the IIITD Cataract database show that improving the segmentation enhances the identification by up to 25% across different sensors and matchers.
versions: [ { "version": "v1", "created": "Tue, 30 Jan 2018 17:09:23 GMT" }, { "version": "v2", "created": "Thu, 19 Apr 2018 09:27:38 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Lakra", "Aditya", "" ], [ "Tripathi", "Pavani", "" ], [ "Keshari", "Rohit", "" ], [ "Vatsa", "Mayank", "" ], [ "Singh", "Richa", "" ] ]
prediction: new_dataset
probability: 0.999154
id: 1804.00525
submitter: Ismail Elezi
authors: Lukas Tuggener, Ismail Elezi, Jürgen Schmidhuber, Marcello Pelillo and Thilo Stadelmann
title: DeepScores -- A Dataset for Segmentation, Detection and Classification of Tiny Objects
comments: 6 pages, accepted at the IEEE International Conference on Pattern Recognition 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
We present the DeepScores dataset with the goal of advancing the state of the art in small-object recognition, and of placing the question of object recognition in the context of scene understanding. DeepScores contains high-quality images of musical scores, partitioned into 300,000 sheets of written music that contain symbols of different shapes and sizes. With close to a hundred million small objects, this makes our dataset not only unique, but also the largest public dataset of its kind. DeepScores comes with ground truth for object classification, detection and semantic segmentation. DeepScores thus poses a relevant challenge for computer vision in general, beyond the scope of optical music recognition (OMR) research. We present a detailed statistical analysis of the dataset, comparing it with other computer vision datasets such as Caltech101/256, PASCAL VOC, SUN, SVHN, ImageNet, MS-COCO and smaller computer vision datasets, as well as with other OMR datasets. Finally, we provide baseline performance for object classification and give pointers to future research based on this dataset.
versions: [ { "version": "v1", "created": "Tue, 27 Mar 2018 14:44:45 GMT" }, { "version": "v2", "created": "Sat, 26 May 2018 21:12:59 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Tuggener", "Lukas", "" ], [ "Elezi", "Ismail", "" ], [ "Schmidhuber", "Jürgen", "" ], [ "Pelillo", "Marcello", "" ], [ "Stadelmann", "Thilo", "" ] ]
prediction: new_dataset
probability: 0.999876
id: 1805.10490
submitter: Yusuf Said Eroglu
authors: Yusuf Said Eroglu, Chethan Kumar Anjinappa, Ismail Guvenc, Nezih Pala
title: Slow Beam Steering for Indoor Multi-User Visible Light Communications
comments: To be published in IEEE SPAWC 2018 proceedings
journal-ref: null
doi: null
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Visible light communications (VLC) is an emerging technology that enables broadband data rates using the visible spectrum. VLC beam steering has been studied in the literature to track mobile users and to improve coverage. However, in some scenarios it may be necessary to track and serve multiple users using a single beam, which has not been rigorously studied in the existing works to our best knowledge. In this paper, considering slow beam steering where beam directions are assumed to be fixed within a transmission frame, we find the optimum steering angles to simultaneously serve multiple users within the frame duration. This is achieved by solving a non-convex optimization problem using grid-based search and the majorization-minimization (MM) procedure. Additionally, we consider the case of multiple steerable beams with a larger number of users in the network, and propose an algorithm to cluster users and serve each cluster with a separate beam. The simulation results show that clustering users can provide higher rates compared to serving each user with a separate beam, and that using two user clusters maximizes the sum rate in a crowded room setting.
versions: [ { "version": "v1", "created": "Sat, 26 May 2018 14:46:20 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Eroglu", "Yusuf Said", "" ], [ "Anjinappa", "Chethan Kumar", "" ], [ "Guvenc", "Ismail", "" ], [ "Pala", "Nezih", "" ] ]
prediction: new_dataset
probability: 0.995987
id: 1805.10548
submitter: Ismail Elezi
authors: Lukas Tuggener, Ismail Elezi, Jurgen Schmidhuber and Thilo Stadelmann
title: Deep Watershed Detector for Music Object Recognition
comments: Accepted at the 19th International Society for Music Information Retrieval Conference 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Optical Music Recognition (OMR) is an important and challenging area within music information retrieval; the accurate detection of music symbols in digital images is a core functionality of any OMR pipeline. In this paper, we introduce a novel object detection method based on synthetic energy maps and the watershed transform, called the Deep Watershed Detector (DWD). Our method is specifically tailored to high-resolution images that contain a large number of very small objects and is therefore able to process full pages of written music. We present state-of-the-art detection results for common music symbols and show that DWD works equally well on synthetic scores and on handwritten music.
versions: [ { "version": "v1", "created": "Sat, 26 May 2018 22:13:16 GMT" } ]
update_date: 2018-05-29T00:00:00
authors_parsed: [ [ "Tuggener", "Lukas", "" ], [ "Elezi", "Ismail", "" ], [ "Schmidhuber", "Jurgen", "" ], [ "Stadelmann", "Thilo", "" ] ]
prediction: new_dataset
probability: 0.992886
id: 1805.10558
submitter: Honggang Chen
authors: Honggang Chen and Xiaohai He and Linbo Qing and Shuhua Xiong and Truong Q. Nguyen
title: DPW-SDNet: Dual Pixel-Wavelet Domain Deep CNNs for Soft Decoding of JPEG-Compressed Images
comments: CVPRW 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
JPEG is one of the widely used lossy compression methods. JPEG-compressed images usually suffer from compression artifacts including blocking and blurring, especially at low bit-rates. Soft decoding is an effective solution to improve the quality of compressed images without changing codec or introducing extra coding bits. Inspired by the excellent performance of the deep convolutional neural networks (CNNs) on both low-level and high-level computer vision problems, we develop a dual pixel-wavelet domain deep CNNs-based soft decoding network for JPEG-compressed images, namely DPW-SDNet. The pixel domain deep network takes the four downsampled versions of the compressed image to form a 4-channel input and outputs a pixel domain prediction, while the wavelet domain deep network uses the 1-level discrete wavelet transformation (DWT) coefficients to form a 4-channel input to produce a DWT domain prediction. The pixel domain and wavelet domain estimates are combined to generate the final soft decoded result. Experimental results demonstrate the superiority of the proposed DPW-SDNet over several state-of-the-art compression artifacts reduction algorithms.
[ { "version": "v1", "created": "Sun, 27 May 2018 00:27:25 GMT" } ]
2018-05-29T00:00:00
[ [ "Chen", "Honggang", "" ], [ "He", "Xiaohai", "" ], [ "Qing", "Linbo", "" ], [ "Xiong", "Shuhua", "" ], [ "Nguyen", "Truong Q.", "" ] ]
new_dataset
0.98646
1805.10564
Rajarshi Bhowmik
Rajarshi Bhowmik and Gerard de Melo
Generating Fine-Grained Open Vocabulary Entity Type Descriptions
Published in ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While large-scale knowledge graphs provide vast amounts of structured facts about entities, a short textual description can often be useful to succinctly characterize an entity and its type. Unfortunately, many knowledge graph entities lack such textual descriptions. In this paper, we introduce a dynamic memory-based network that generates a short open vocabulary description of an entity by jointly leveraging induced fact embeddings as well as the dynamic context of the generated sequence of words. We demonstrate the ability of our architecture to discern relevant information for more accurate generation of type descriptions by pitting the system against several strong baselines.
[ { "version": "v1", "created": "Sun, 27 May 2018 01:58:39 GMT" } ]
2018-05-29T00:00:00
[ [ "Bhowmik", "Rajarshi", "" ], [ "de Melo", "Gerard", "" ] ]
new_dataset
0.999324
1805.10708
Jason Li
Jason Li
Distributed Treewidth Computation
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Of all the restricted graph families out there, the family of low treewidth graphs has continuously proven to admit many algorithmic applications. For example, many NP-hard problems can be solved in polynomial time on graphs of constant treewidth. Other algorithmic techniques, such as Baker's technique, partition the graph into components of low treewidth. Therefore, computing the treewidth of a graph remains an important problem in algorithm design. For graphs of constant treewidth, linear-time algorithms are known in the classical setting, as well as $\text{polylog}(n)$-time parallel algorithms for computing an $O(1)$-approximation to treewidth. However, nothing is yet known in the distributed setting. In this paper, we give near-optimal algorithms for computing the treewidth on a distributed network. We show that for graphs of constant treewidth, an $O(1)$-approximation to the treewidth can be computed in near-optimal $\tilde O(D)$ time, where $D$ is the diameter of the network graph. In addition, we show that many NP-hard problems that are tractable on constant treewidth graphs can also be solved in $\tilde O(D)$ time on a distributed network of constant treewidth. Our algorithms make use of the shortcuts framework of Ghaffari and Haeupler [SODA'16], which has proven to be a powerful tool in designing near-optimal distributed algorithms for restricted graph networks, such as planar graphs, low-treewidth graphs, and excluded minor graphs.
[ { "version": "v1", "created": "Sun, 27 May 2018 23:01:25 GMT" } ]
2018-05-29T00:00:00
[ [ "Li", "Jason", "" ] ]
new_dataset
0.993837
1805.10710
Matt Luckcuck
Matt Luckcuck, Andy Wellings, Ana Cavalcanti
Safety-Critical Java: Level 2 in Practice
null
Concurrency and Computation: Practice and Experience 29 6 2017
10.1002/cpe.3951
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety Critical Java (SCJ) is a profile of the Real-Time Specification for Java that brings to the safety-critical industry the possibility of using Java. SCJ defines three compliance levels: Level 0, Level 1 and Level 2. The SCJ specification is clear on what constitutes a Level 2 application in terms of its use of the defined API, but not the occasions on which it should be used. This paper broadly classifies the features that are only available at Level 2 into three groups: nested mission sequencers, managed threads, and global scheduling across multiple processors. We explore the first two groups to elicit programming requirements that they support. We identify several areas where the SCJ specification needs modifications to support these requirements fully; these include: support for terminating managed threads, the ability to set a deadline on the transition between missions, and augmentation of the mission sequencer concept to support composability of timing constraints. We also propose simplifications to the termination protocol of missions and their mission sequencers. To illustrate the benefit of our changes, we present excerpts from a formal model of SCJ Level 2 written in Circus, a state-rich process algebra for refinement.
[ { "version": "v1", "created": "Sun, 27 May 2018 23:10:03 GMT" } ]
2018-05-29T00:00:00
[ [ "Luckcuck", "Matt", "" ], [ "Wellings", "Andy", "" ], [ "Cavalcanti", "Ana", "" ] ]
new_dataset
0.996648
1805.10783
Chuntao Ding
Shangguang Wang and Chuntao Ding and Ning Zhang and Nan Cheng and Jie Huang and Ying Liu
ECD: An Edge Content Delivery and Update Framework in Mobile Edge Computing
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes an edge content delivery framework (ECD) based on mobile edge computing in the era of the Internet of Things (IoT), to alleviate the load of the core network and improve the quality of experience (QoE) of mobile users. Considering that mobile devices become both content consumers and providers, and that the majority of contents are unnecessary to upload to the cloud datacenter, at the network edge we deploy a content server to store the raw contents generated by the mobile users, and a cache pool to store the contents that are frequently requested by mobile users in the ECD. The cache pools are ranked, and high-ranked cache pools store contents with higher popularity. Furthermore, we propose an edge content delivery scheme and an edge content update scheme, based on content popularity and cache pool ranking. The content delivery scheme efficiently delivers contents to mobile users, while the edge content update scheme migrates the content generated by users to appropriate cache pools based on its request frequency and cache pool ranking. The edge content delivery is completely different from the content delivery network and can further reduce the load on the core network. In addition, because the top-ranking cache pools are prioritized for higher-priority contents and the cache pools are in proximity to the mobile users, an immediate interactive response between mobile users and cache pools can be achieved. A representative case study of ECD is provided and open research issues are discussed.
[ { "version": "v1", "created": "Mon, 28 May 2018 06:21:56 GMT" } ]
2018-05-29T00:00:00
[ [ "Wang", "Shangguang", "" ], [ "Ding", "Chuntao", "" ], [ "Zhang", "Ning", "" ], [ "Cheng", "Nan", "" ], [ "Huang", "Jie", "" ], [ "Liu", "Ying", "" ] ]
new_dataset
0.998916
1805.10799
Hyemin Ahn
Hyemin Ahn, Sungjoon Choi, Nuri Kim, Geonho Cha, Songhwai Oh
Interactive Text2Pickup Network for Natural Language based Human-Robot Collaboration
8 pages, 9 figures
null
null
null
cs.RO cs.CL cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose the Interactive Text2Pickup (IT2P) network for human-robot collaboration which enables an effective interaction with a human user despite the ambiguity in user's commands. We focus on the task where a robot is expected to pick up an object instructed by a human, and to interact with the human when the given instruction is vague. The proposed network understands the command from the human user and estimates the position of the desired object first. To handle the inherent ambiguity in human language commands, a suitable question which can resolve the ambiguity is generated. The user's answer to the question is combined with the initial command and given back to the network, resulting in more accurate estimation. The experiment results show that given unambiguous commands, the proposed method can estimate the position of the requested object with an accuracy of 98.49% based on our test dataset. Given ambiguous language commands, we show that the accuracy of the pick up task increases by 1.94 times after incorporating the information obtained from the interaction.
[ { "version": "v1", "created": "Mon, 28 May 2018 07:52:42 GMT" } ]
2018-05-29T00:00:00
[ [ "Ahn", "Hyemin", "" ], [ "Choi", "Sungjoon", "" ], [ "Kim", "Nuri", "" ], [ "Cha", "Geonho", "" ], [ "Oh", "Songhwai", "" ] ]
new_dataset
0.996569
1805.10906
Giorgio Forcina
Carlo Castagnari, Flavio Corradini, Francesco De Angelis, Jacopo de Berardinis, Giorgio Forcina and Andrea Polini
Tangramob: an agent-based simulation framework for validating urban smart mobility solutions
null
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the effects of introducing a range of smart mobility solutions within an urban area is a crucial concern in urban planning. The lack of a Decision Support System (DSS) for the assessment of mobility initiatives forces local public authorities and mobility service providers to base their decisions on guidelines derived from common heuristics and best practices. These approaches can help planners in shaping mobility solutions, but given the high number of variables to consider, the effects are not guaranteed. Therefore, a solution conceived respecting the available guidelines can result in a failure in a different context. In particular, difficult aspects to consider are the interactions between different mobility services available in a given urban area, and the acceptance of a given mobility initiative by the inhabitants of the area. In order to fill this gap, we introduce Tangramob, an agent-based simulation framework capable of assessing the impacts of a Smart Mobility Initiative (SMI) within an urban area of interest. Tangramob simulates how urban traffic is expected to evolve as citizens start experiencing the newly offered traveling solutions. This allows decision makers to evaluate the efficacy of their initiatives taking into account the current urban system. In this paper we provide an overview of the simulation framework along with its design. To show the potential of Tangramob, three mobility initiatives are simulated and compared on the same scenario. This shows how it is possible to perform comparative experiments so as to align mobility initiatives with user goals.
[ { "version": "v1", "created": "Mon, 28 May 2018 13:15:52 GMT" } ]
2018-05-29T00:00:00
[ [ "Castagnari", "Carlo", "" ], [ "Corradini", "Flavio", "" ], [ "De Angelis", "Francesco", "" ], [ "de Berardinis", "Jacopo", "" ], [ "Forcina", "Giorgio", "" ], [ "Polini", "Andrea", "" ] ]
new_dataset
0.954438
1805.11060
Giulia Fanti
Giulia Fanti, Shaileshh Bojja Venkatakrishnan, Surya Bakshi, Bradley Denby, Shruti Bhargava, Andrew Miller, Pramod Viswanath
Dandelion++: Lightweight Cryptocurrency Networking with Formal Anonymity Guarantees
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has demonstrated significant anonymity vulnerabilities in Bitcoin's networking stack. In particular, the current mechanism for broadcasting Bitcoin transactions allows third-party observers to link transactions to the IP addresses that originated them. This lays the groundwork for low-cost, large-scale deanonymization attacks. In this work, we present Dandelion++, a first-principles defense against large-scale deanonymization attacks with near-optimal information-theoretic guarantees. Dandelion++ builds upon a recent proposal called Dandelion that exhibited similar goals. However, in this paper, we highlight simplifying assumptions made in Dandelion, and show how they can lead to serious deanonymization attacks when violated. In contrast, Dandelion++ defends against stronger adversaries that are allowed to disobey protocol. Dandelion++ is lightweight, scalable, and completely interoperable with the existing Bitcoin network. We evaluate it through experiments on Bitcoin's mainnet (i.e., the live Bitcoin network) to demonstrate its interoperability and low broadcast latency overhead.
[ { "version": "v1", "created": "Mon, 28 May 2018 17:12:33 GMT" } ]
2018-05-29T00:00:00
[ [ "Fanti", "Giulia", "" ], [ "Venkatakrishnan", "Shaileshh Bojja", "" ], [ "Bakshi", "Surya", "" ], [ "Denby", "Bradley", "" ], [ "Bhargava", "Shruti", "" ], [ "Miller", "Andrew", "" ], [ "Viswanath", "Pramod", "" ] ]
new_dataset
0.995193
1701.01580
Gabriele Fici
Alessandro De Luca, Gabriele Fici, Luca Q. Zamboni
The sequence of open and closed prefixes of a Sturmian word
Published in Advances in Applied Mathematics. Journal version of arXiv:1306.2254
Advances in Applied Mathematics Volume 90, September 2017, Pages 27-45
10.1016/j.aam.2017.04.007
null
cs.DM cs.FL math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A finite word is closed if it contains a factor that occurs both as a prefix and as a suffix but does not have internal occurrences, otherwise it is open. We are interested in the {\it oc-sequence} of a word, which is the binary sequence whose $n$-th element is $0$ if the prefix of length $n$ of the word is open, or $1$ if it is closed. We exhibit results showing that this sequence is deeply related to the combinatorial and periodic structure of a word. In the case of Sturmian words, we show that these are uniquely determined (up to renaming letters) by their oc-sequence. Moreover, we prove that the class of finite Sturmian words is a maximal element with this property in the class of binary factorial languages. We then discuss several aspects of Sturmian words that can be expressed through this sequence. Finally, we provide a linear-time algorithm that computes the oc-sequence of a finite word, and a linear-time algorithm that reconstructs a finite Sturmian word from its oc-sequence.
[ { "version": "v1", "created": "Fri, 6 Jan 2017 09:13:16 GMT" }, { "version": "v2", "created": "Thu, 1 Jun 2017 09:25:20 GMT" } ]
2018-05-28T00:00:00
[ [ "De Luca", "Alessandro", "" ], [ "Fici", "Gabriele", "" ], [ "Zamboni", "Luca Q.", "" ] ]
new_dataset
0.999501
1805.10047
Mamoru Komachi
Michiki Kurosawa, Yukio Matsumura, Hayahide Yamagishi, Mamoru Komachi
Japanese Predicate Conjugation for Neural Machine Translation
6 pages; NAACL 2018 Student Research Workshop
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Neural machine translation (NMT) has a drawback in that it can generate only high-frequency words owing to the computational costs of the softmax function in the output layer. In Japanese-English NMT, Japanese predicate conjugation causes an increase in vocabulary size. For example, one verb can have as many as 19 surface varieties. In this research, we focus on predicate conjugation for compressing the vocabulary size in Japanese. The vocabulary list is filled with the various forms of verbs. We propose methods using predicate conjugation information without discarding linguistic information. The proposed methods can generate low-frequency words and deal with unknown words. Two methods were considered to introduce conjugation information: the first considers it as a token (conjugation token) and the second considers it as an embedded vector (conjugation feature). The results using these methods demonstrate that the vocabulary size can be compressed by approximately 86.1% (Tanaka corpus) and that the NMT models can output words not in the training data set. Furthermore, BLEU scores improved by 0.91 points in Japanese-to-English translation, and by 0.32 points in English-to-Japanese translation with ASPEC.
[ { "version": "v1", "created": "Fri, 25 May 2018 08:56:43 GMT" } ]
2018-05-28T00:00:00
[ [ "Kurosawa", "Michiki", "" ], [ "Matsumura", "Yukio", "" ], [ "Yamagishi", "Hayahide", "" ], [ "Komachi", "Mamoru", "" ] ]
new_dataset
0.999002
1805.10082
Liu Zhou
Bowen Feng, Jian Jiao, Liu Zhou, Shaohua Wu, Bin Cao, and Qinyu Zhang
A Novel High-Rate Polar-Staircase Coding Scheme
6 pages, 4 figures. Accepted for publication at IEEE Vehicular Technology Conference (VTC), Fall 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-haul communication systems can offer ultra-high-speed data transfer rates but suffer from burst errors. The high-rate and high-performance staircase codes provide an efficient way for long-haul transmission. The staircase coding scheme is a concatenation structure, which provides the opportunity to improve the performance of high-rate polar codes. At the same time, the polar codes make the staircase structure more reliable. Thus, a high-rate polar-staircase coding scheme is proposed, where systematic polar codes are applied as the component codes. The soft cancellation decoding of the systematic polar codes is proposed as a basic ingredient. The encoding of the polar-staircase codes is designed with the help of density evolution, where the unreliable parts of the polar codes are enhanced. The corresponding decoding is proposed with low complexity, and is also optimized for burst error channels. With the well designed encoding and decoding algorithms, the polar-staircase codes perform well on both AWGN channels and burst error channels.
[ { "version": "v1", "created": "Fri, 25 May 2018 11:11:20 GMT" } ]
2018-05-28T00:00:00
[ [ "Feng", "Bowen", "" ], [ "Jiao", "Jian", "" ], [ "Zhou", "Liu", "" ], [ "Wu", "Shaohua", "" ], [ "Cao", "Bin", "" ], [ "Zhang", "Qinyu", "" ] ]
new_dataset
0.999505
1805.10107
Chao Zhai
Chao Zhai, Gaoxi Xiao, Hehong Zhang, Tso-Chien Pan
Cooperative Control of TCSC to Relieve the Stress of Cyber-physical Power System
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the cooperative control problem of Thyristor-Controlled Series Compensation (TCSC) to eliminate the stress of a cyber-physical power system. The cyber-physical power system is composed of the power network, the protection and control center, and the communication network. A cooperative control algorithm for TCSC is developed to adjust the branch impedance and regulate the power flow. To reduce the computational burden, an approximate method is adopted to estimate the Jacobian matrix for the generation of control signals. In addition, a performance index is introduced to quantify the stress level of the power system. Theoretical analysis is conducted to guarantee the convergence of the performance index when the proposed cooperative control algorithm is implemented. Finally, numerical simulations are carried out to validate the cooperative control approach on the IEEE 24-bus system in uncertain environments.
[ { "version": "v1", "created": "Fri, 25 May 2018 12:36:35 GMT" } ]
2018-05-28T00:00:00
[ [ "Zhai", "Chao", "" ], [ "Xiao", "Gaoxi", "" ], [ "Zhang", "Hehong", "" ], [ "Pan", "Tso-Chien", "" ] ]
new_dataset
0.996748
1805.10163
Elena Voita
Elena Voita, Pavel Serdyukov, Rico Sennrich, Ivan Titov
Context-Aware Neural Machine Translation Learns Anaphora Resolution
ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard machine translation systems process sentences in isolation and hence ignore extra-sentential information, even though extended context can both prevent mistakes in ambiguous cases and improve translation coherence. We introduce a context-aware neural machine translation model designed in such a way that the flow of information from the extended context to the translation model can be controlled and analyzed. We experiment with an English-Russian subtitles dataset, and observe that much of what is captured by our model deals with improving pronoun translation. We measure correspondences between induced attention distributions and coreference relations and observe that the model implicitly captures anaphora. It is consistent with gains for sentences where pronouns need to be gendered in translation. Besides improvements in anaphoric cases, the model also improves overall BLEU, both over its context-agnostic version (+0.7) and over simple concatenation of the context and source sentences (+0.6).
[ { "version": "v1", "created": "Fri, 25 May 2018 14:03:27 GMT" } ]
2018-05-28T00:00:00
[ [ "Voita", "Elena", "" ], [ "Serdyukov", "Pavel", "" ], [ "Sennrich", "Rico", "" ], [ "Titov", "Ivan", "" ] ]
new_dataset
0.957156
1805.10211
Laurent Risser
Camille Champion (IMT), Anne-Claire Brunet (IMT), Jean-Michel Loubes (IMT), Laurent Risser (IMT)
COREclust: a new package for a robust and scalable analysis of complex data
null
null
null
null
cs.MS stat.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a new R package, COREclust, dedicated to the detection of representative variables in high dimensional spaces with a potentially limited number of observations. Variable set detection is based on an original graph clustering strategy, denoted the CORE-clustering algorithm, which detects CORE-clusters, i.e. variable sets having a user-defined size range and in which each variable is very similar to at least one other variable. Representative variables are then robustly estimated as the CORE-cluster centers. This strategy is entirely coded in C++ and wrapped in R using the Rcpp package. A particular effort has been dedicated to keeping its algorithmic cost reasonable so that it can be used on large datasets. After motivating our work, we explain the CORE-clustering algorithm as well as a greedy extension of this algorithm. We then present how to use it and the results obtained on synthetic and real data.
[ { "version": "v1", "created": "Fri, 25 May 2018 15:50:15 GMT" } ]
2018-05-28T00:00:00
[ [ "Champion", "Camille", "", "IMT" ], [ "Brunet", "Anne-Claire", "", "IMT" ], [ "Loubes", "Jean-Michel", "", "IMT" ], [ "Risser", "Laurent", "", "IMT" ] ]
new_dataset
0.997743
1805.10271
Ted Pedersen
Arshia Z. Hassan and Manikya S. Vallabhajosyula and Ted Pedersen
UMDuluth-CS8761 at SemEval-2018 Task 9: Hypernym Discovery using Hearst Patterns, Co-occurrence frequencies and Word Embeddings
5 pages, to Appear in the Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval 2018), June 2018, New Orleans, LA
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Hypernym Discovery is the task of identifying potential hypernyms for a given term. A hypernym is a more generalized word that is super-ordinate to more specific words. This paper explores several approaches that rely on co-occurrence frequencies of word pairs, Hearst Patterns based on regular expressions, and word embeddings created from the UMBC corpus. Our system Babbage participated in Subtask 1A for English and placed 6th of 19 systems when identifying concept hypernyms, and 12th of 18 systems for entity hypernyms.
[ { "version": "v1", "created": "Fri, 25 May 2018 17:44:03 GMT" } ]
2018-05-28T00:00:00
[ [ "Hassan", "Arshia Z.", "" ], [ "Vallabhajosyula", "Manikya S.", "" ], [ "Pedersen", "Ted", "" ] ]
new_dataset
0.993834
1705.09569
Serge Kas Hanna
Serge Kas Hanna and Salim El Rouayheb
Guess & Check Codes for Deletions, Insertions, and Synchronization
Accepted to the IEEE Transactions on Information Theory. arXiv admin note: text overlap with arXiv:1702.04466
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of constructing codes that can correct $\delta$ deletions occurring in an arbitrary binary string of length $n$ bits. Varshamov-Tenengolts (VT) codes, dating back to 1965, are zero-error single deletion $(\delta=1)$ correcting codes, and have an asymptotically optimal redundancy. Finding similar codes for $\delta \geq 2$ deletions remains an open problem. In this work, we relax the standard zero-error (i.e., worst-case) decoding requirement by assuming that the positions of the $\delta$ deletions (or insertions) are independent of the codeword. Our contribution is a new family of explicit codes, that we call Guess & Check (GC) codes, that can correct with high probability up to a constant number of $\delta$ deletions (or insertions). GC codes are systematic; and have deterministic polynomial time encoding and decoding algorithms. We also describe the application of GC codes to file synchronization.
[ { "version": "v1", "created": "Wed, 24 May 2017 20:16:59 GMT" }, { "version": "v2", "created": "Fri, 3 Nov 2017 00:55:43 GMT" }, { "version": "v3", "created": "Thu, 24 May 2018 15:24:20 GMT" } ]
2018-05-25T00:00:00
[ [ "Hanna", "Serge Kas", "" ], [ "Rouayheb", "Salim El", "" ] ]
new_dataset
0.955686
1709.07158
Trung Pham
Trung Pham, Thanh-Toan Do, Niko S\"underhauf and Ian Reid
SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes
Published in ICRA 2018
ICRA 2018
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents SceneCut, a novel approach to jointly discover previously unseen objects and non-object surfaces using a single RGB-D image. SceneCut's joint reasoning over scene semantics and geometry allows a robot to detect and segment object instances in complex scenes where modern deep learning-based methods either fail to separate object instances, or fail to detect objects that were not seen during training. SceneCut automatically decomposes a scene into meaningful regions which either represent objects or scene surfaces. The decomposition is qualified by a unified energy function over objectness and geometric fitting. We show how this energy function can be optimized efficiently by utilizing hierarchical segmentation trees. Moreover, we leverage a pre-trained convolutional oriented boundary network to predict accurate boundaries from images, which are used to construct high-quality region hierarchies. We evaluate SceneCut on several different indoor environments, and the results show that SceneCut significantly outperforms all the existing methods.
[ { "version": "v1", "created": "Thu, 21 Sep 2017 05:08:35 GMT" }, { "version": "v2", "created": "Thu, 24 May 2018 06:44:56 GMT" } ]
2018-05-25T00:00:00
[ [ "Pham", "Trung", "" ], [ "Do", "Thanh-Toan", "" ], [ "Sünderhauf", "Niko", "" ], [ "Reid", "Ian", "" ] ]
new_dataset
0.988171
1805.09105
Xuan-Yu Wang
Xuan-Yu Wang, Wen-Xuan Liao, Dong An, Yao-Guang Wei
Maize Haploid Identification via LSTM-CNN and Hyperspectral Imaging Technology
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate and fast identification of seed cultivars is crucial to plant breeding, accelerating the breeding of new products and increasing their quality. In our study, the first attempt to design a highly accurate model for identifying maize haploid seeds from diploid ones based on optimum waveband selection with the LSTM-CNN algorithm is realized via deep learning and hyperspectral imaging technology, with accuracy reaching 97% in the determined optimum waveband of 1367.6-1526.4 nm. Verification on another cultivar achieved an accuracy of 93% in the same waveband. The model collected images of 256 wavebands of seeds in the spectral region of 862.9-1704.2 nm. The high-noise waveband intervals were found and deleted by the LSTM. The optimum-data waveband intervals were determined by the CNN's waveband-based detection. The optimum sample set for network training accounted for only 1/5 of the total sample data. The accuracy was significantly higher than that of full-waveband modeling or modeling with any other wavebands. Our study demonstrates that the proposed model has an outstanding effect on maize haploid identification and could be generalized to some extent.
[ { "version": "v1", "created": "Wed, 23 May 2018 13:01:15 GMT" }, { "version": "v2", "created": "Thu, 24 May 2018 08:17:39 GMT" } ]
2018-05-25T00:00:00
[ [ "Wang", "Xuan-Yu", "" ], [ "Liao", "Wen-Xuan", "" ], [ "An", "Dong", "" ], [ "Wei", "Yao-Guang", "" ] ]
new_dataset
0.983455
1805.09408
Iv\'an Ram\'irez D\'iaz
Iv\'an Ram\'irez, Gonzalo Galiano and Emanuele Schiavi
Non-convex non-local flows for saliency detection
null
null
null
null
cs.CV math.NA
http://creativecommons.org/licenses/by/4.0/
We propose and numerically solve a new variational model for automatic saliency detection in digital images. Using a non-local framework we consider a family of edge preserving functions combined with a new quadratic saliency detection term. Such term defines a constrained bilateral obstacle problem for image classification driven by p-Laplacian operators, including the so-called hyper-Laplacian case (0 < p < 1). The related non-convex non-local reactive flows are then considered and applied for glioblastoma segmentation in magnetic resonance fluid-attenuated inversion recovery (MRI-Flair) images. A fast convolutional-kernel-based approximate solution is computed. The numerical experiments show how the non-convexity related to the hyper-Laplacian operators provides monotonically better results in terms of the standard metrics.
[ { "version": "v1", "created": "Wed, 23 May 2018 20:03:06 GMT" } ]
2018-05-25T00:00:00
[ [ "Ramírez", "Iván", "" ], [ "Galiano", "Gonzalo", "" ], [ "Schiavi", "Emanuele", "" ] ]
new_dataset
0.970828
1805.09562
Julio Marco
Julio Marco, Ib\'on Guill\'en, Wojciech Jarosz, Diego Gutierrez, Adrian Jarabo
Progressive Transient Photon Beams
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and is able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that spatial continuity provides an increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend the beam steady-state radiance estimates to include the temporal domain. Then, we develop a progressive version of spatio-temporal density estimation, which converges to the correct solution with finite memory requirements by iteratively averaging several realizations of independent renders with a progressively reduced kernel bandwidth. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.
[ { "version": "v1", "created": "Thu, 24 May 2018 09:15:15 GMT" } ]
2018-05-25T00:00:00
[ [ "Marco", "Julio", "" ], [ "Guillén", "Ibón", "" ], [ "Jarosz", "Wojciech", "" ], [ "Gutierrez", "Diego", "" ], [ "Jarabo", "Adrian", "" ] ]
new_dataset
0.995838
1805.09583
Shan Zhang
Shan Zhang, Jiayin Chen, Feng Lyu, Nan Cheng, Weisen Shi, Xuemin (Sherman) Shen
Vehicular Communication Networks in Automated Driving Era
15 pages, 5 figures, IEEE Communications Magazine
null
null
null
cs.NI cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedded with advanced sensors, cameras and processors, the emerging automated driving vehicles are capable of sensing the environment and conducting automobile operation, paving the way to modern intelligent transportation systems (ITS) with high safety and efficiency. On the other hand, vehicular communication networks (VCNs) connect vehicles, infrastructures, clouds, and all other devices with communication modules, whereby vehicles can obtain local and global information to make intelligent operation decisions. Although the sensing-based automated driving technologies and VCNs have been investigated independently, their interactions and mutual benefits are still underdeveloped. In this article, we argue that VCNs have attractive potentials to enhance the on-board sensing-based automated vehicles from different perspectives, such as driving safety, transportation efficiency, as well as customer experiences. A case study is conducted to demonstrate that the traffic jam can be relieved at intersections with automated driving vehicles coordinated with each other through VCNs. Furthermore, we highlight the critical yet interesting issues for future research, based on the specific requirements posed by automated driving on VCNs.
[ { "version": "v1", "created": "Thu, 24 May 2018 09:59:54 GMT" } ]
2018-05-25T00:00:00
[ [ "Zhang", "Shan", "" ], [ "Chen", "Jiayin", "" ], [ "Lyu", "Feng", "" ], [ "Cheng", "Nan", "" ], [ "Shi", "Weisen", "" ], [ "Shen", "Xuemin", "", "Sherman" ] ]
new_dataset
0.998533
1805.09604
Julian Horsch
Mathias Morbitzer, Manuel Huber, Julian Horsch, Sascha Wessel
SEVered: Subverting AMD's Virtual Machine Encryption
Published in Proceedings of the 11th European Workshop on Systems Security (EuroSec'18)
null
10.1145/3193111.3193112
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
AMD SEV is a hardware feature designed for the secure encryption of virtual machines. SEV aims to protect virtual machine memory not only from other malicious guests and physical attackers, but also from a possibly malicious hypervisor. This relieves cloud and virtual server customers from fully trusting their server providers and the hypervisors they are using. We present the design and implementation of SEVered, an attack from a malicious hypervisor capable of extracting the full contents of main memory in plaintext from SEV-encrypted virtual machines. SEVered neither requires physical access nor colluding virtual machines, but only relies on a remote communication service, such as a web server, running in the targeted virtual machine. We verify the effectiveness of SEVered on a recent AMD SEV-enabled server platform running different services, such as web or SSH servers, in encrypted virtual machines. With these examples, we demonstrate that SEVered reliably and efficiently extracts all memory contents even in scenarios where the targeted virtual machine is under high load.
[ { "version": "v1", "created": "Thu, 24 May 2018 11:09:39 GMT" } ]
2018-05-25T00:00:00
[ [ "Morbitzer", "Mathias", "" ], [ "Huber", "Manuel", "" ], [ "Horsch", "Julian", "" ], [ "Wessel", "Sascha", "" ] ]
new_dataset
0.997871
1805.09635
Anneli Heimb\"urger Dr. Tech.
Anneli Heimb\"urger
When Cultures Meet: Modelling Cross-Cultural Knowledge Spaces
null
Frontiers in Artificial Intelligence and Applications, 2008, Vol. 166, Information Modelling and Knowledge Bases XIX. Amsterdam: IOS Press. Pp. 314-321
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-cultural research projects are becoming a norm in our global world. More and more projects are being executed using teams from eastern and western cultures. Cultural competence might help project managers to achieve project goals and avoid potential risks in cross-cultural project environments, and would also support them in promoting creativity and motivation through flexible leadership. In our paper we introduce an idea for constructing an information system, a cross-cultural knowledge space, which could support cross-cultural communication, collaborative learning experiences and time-based project management functions. The case cultures in our project are Finnish and Japanese. The system can be used both in virtual and in physical spaces, for example to clarify cultural business etiquette. The core of our system design will be based on a cross-cultural ontology, and the system implementation on XML technologies. Our approach is a practical, step-by-step example of constructive research. In our paper we briefly describe Hofstede's dimensions for assessing cultures as one example of a larger framework for our study. We also discuss the concept of time in a cultural context.
[ { "version": "v1", "created": "Thu, 24 May 2018 12:40:47 GMT" } ]
2018-05-25T00:00:00
[ [ "Heimbürger", "Anneli", "" ] ]
new_dataset
0.992685
1805.09678
Madhu Raka
Mokshi Goyal and Madhu Raka
Duadic negacyclic codes over a finite non-chain ring and their Gray images
arXiv admin note: text overlap with arXiv:1609.07862
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $f(u)$ be a polynomial of degree $m, m \geq 2,$ which splits into distinct linear factors over a finite field $\mathbb{F}_{q}$. Let $\mathcal{R}=\mathbb{F}_{q}[u]/\langle f(u)\rangle$ be a finite non-chain ring. In an earlier paper, we studied duadic and triadic codes over $\mathcal{R}$ and their Gray images. Here, we study duadic negacyclic codes of Type I and Type II over the ring $\mathcal{R}$, their extensions and their Gray images. As a consequence, some self-dual, isodual, self-orthogonal and complementary dual (LCD) codes over $\mathbb{F}_q$ are constructed. Some examples are also given to illustrate this.
[ { "version": "v1", "created": "Wed, 23 May 2018 07:41:18 GMT" } ]
2018-05-25T00:00:00
[ [ "Goyal", "Mokshi", "" ], [ "Raka", "Madhu", "" ] ]
new_dataset
0.999235
1805.09738
Hyrum Anderson
Jonathan Woodbridge, Hyrum S. Anderson, Anjum Ahuja, Daniel Grant
Detecting Homoglyph Attacks with a Siamese Neural Network
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A homoglyph (name spoofing) attack is a common technique used by adversaries to obfuscate file and domain names. This technique creates process or domain names that are visually similar to legitimate and recognized names. For instance, an attacker may create malware with the name svch0st.exe so that in a visual inspection of running processes or a directory listing, the process or file name might be mistaken as the Windows system process svchost.exe. There has been limited published research on detecting homoglyph attacks. Current approaches rely on string comparison algorithms (such as Levenshtein distance) that result in computationally heavy solutions with a high number of false positives. In addition, there is a deficiency in the number of publicly available datasets for reproducible research, with most datasets focused on phishing attacks, in which homoglyphs are not always used. This paper presents a fundamentally different solution to this problem using a Siamese convolutional neural network (CNN). Rather than leveraging similarity based on character swaps and deletions, this technique uses a learned metric on strings rendered as images: a CNN learns features that are optimized to detect visual similarity of the rendered strings. The trained model is used to convert thousands of potentially targeted process or domain names to feature vectors. These feature vectors are indexed using randomized KD-Trees to make similarity searches extremely fast with minimal computational processing. This technique shows a considerable 13% to 45% improvement over baseline techniques in terms of area under the receiver operating characteristic curve (ROC AUC). In addition, we provide both code and data to further future research.
[ { "version": "v1", "created": "Thu, 24 May 2018 15:43:34 GMT" } ]
2018-05-25T00:00:00
[ [ "Woodbridge", "Jonathan", "" ], [ "Anderson", "Hyrum S.", "" ], [ "Ahuja", "Anjum", "" ], [ "Grant", "Daniel", "" ] ]
new_dataset
0.998749
1601.03162
Ioannis Avramopoulos
Ioannis Avramopoulos
Jump-starting coordination in a stag hunt: Motivation, mechanisms, and their analysis
Some overlap with arXiv:1210.7789
null
null
null
cs.GT cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stag hunt (or assurance game) is a simple game that has been used as a prototype of a variety of social coordination problems (ranging from the social contract to the adoption of technical standards). Players have the option to either use a superior cooperative strategy whose payoff depends on the other players' choices or use an inferior strategy whose payoff is independent of what other players do; the cooperative strategy may incur a loss if sufficiently many other players do not cooperate. Stag hunts have two (strict) pure Nash equilibria, namely, universal cooperation and universal defection (as well as a mixed equilibrium of low predictive value). Selection of the inferior (pure) equilibrium is called a coordination failure. In this paper, we present and analyze using game-theoretic techniques mechanisms aiming to avert coordination failures and incite instead selection of the superior equilibrium. Our analysis is based on the solution concepts of Nash equilibrium, dominance solvability, as well as a formalization of the notion of "incremental deployability," which is shown to be keenly relevant to the sink equilibrium.
[ { "version": "v1", "created": "Wed, 13 Jan 2016 08:19:22 GMT" } ]
2018-05-24T00:00:00
[ [ "Avramopoulos", "Ioannis", "" ] ]
new_dataset
0.988574
1705.05828
Edlira Kuci
Edlira Kuci, Sebastian Erdweg, Oliver Bra\v{c}evac, Andi Bejleri, and Mira Mezini
A Co-contextual Type Checker for Featherweight Java (incl. Proofs)
54 pages, 10 figures, ECOOP 2017
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
This paper addresses compositional and incremental type checking for object-oriented programming languages. Recent work achieved incremental type checking for structurally typed functional languages through co-contextual typing rules, a constraint-based formulation that removes any context dependency for expression typings. However, that work does not cover key features of object-oriented languages: Subtype polymorphism, nominal typing, and implementation inheritance. Type checkers encode these features in the form of class tables, an additional form of typing context inhibiting incrementalization. In the present work, we demonstrate that an appropriate co-contextual notion to class tables exists, paving the way to efficient incremental type checkers for object-oriented languages. This yields a novel formulation of Igarashi et al.'s Featherweight Java (FJ) type system, where we replace class tables by the dual concept of class table requirements and class table operations by dual operations on class table requirements. We prove the equivalence of FJ's type system and our co-contextual formulation. Based on our formulation, we implemented an incremental FJ type checker and compared its performance against javac on a number of realistic example programs.
[ { "version": "v1", "created": "Tue, 16 May 2017 17:59:40 GMT" }, { "version": "v2", "created": "Wed, 23 May 2018 12:48:52 GMT" } ]
2018-05-24T00:00:00
[ [ "Kuci", "Edlira", "" ], [ "Erdweg", "Sebastian", "" ], [ "Bračevac", "Oliver", "" ], [ "Bejleri", "Andi", "" ], [ "Mezini", "Mira", "" ] ]
new_dataset
0.996997
1805.08846
David Ketcheson
H. Gorune Ohannessian and George Turkiyyah and Aron Ahmadia and David Ketcheson
CUDACLAW: A high-performance programmable GPU framework for the solution of hyperbolic PDEs
null
null
null
null
cs.MS cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present cudaclaw, a CUDA-based high-performance data-parallel framework for the solution of multidimensional hyperbolic partial differential equation (PDE) systems, i.e., equations describing wave motion. cudaclaw allows computational scientists to solve such systems on GPUs without being burdened by the need to write CUDA code, or to worry about thread and block details, data layout, and data movement between the different levels of the memory hierarchy. The user defines the set of PDEs to be solved via a CUDA-independent serial Riemann solver, and the framework takes care of orchestrating the computations and data transfers to maximize arithmetic throughput. cudaclaw treats the different spatial dimensions separately to allow suitable block sizes and dimensions to be used in the different directions, and includes a number of optimizations to minimize access to global memory.
[ { "version": "v1", "created": "Mon, 21 May 2018 14:21:51 GMT" } ]
2018-05-24T00:00:00
[ [ "Ohannessian", "H. Gorune", "" ], [ "Turkiyyah", "George", "" ], [ "Ahmadia", "Aron", "" ], [ "Ketcheson", "David", "" ] ]
new_dataset
0.999442
1805.08876
Berkay Celik
Z. Berkay Celik, Patrick McDaniel and Gang Tan
Soteria: Automated IoT Safety and Security Analysis
Accepted to the USENIX Annual Technical Conference (USENIX ATC), 2018
null
null
null
cs.CR cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Broadly defined as the Internet of Things (IoT), the growth of commodity devices that integrate physical processes with digital systems has changed the way we live, play and work. Yet existing IoT platforms cannot evaluate whether an IoT app or environment is safe, secure, and operates correctly. In this paper, we present Soteria, a static analysis system for validating whether an IoT app or IoT environment (collection of apps working in concert) adheres to identified safety, security, and functional properties. Soteria operates in three phases: (a) translation of platform-specific IoT source code into an intermediate representation (IR), (b) extraction of a state model from the IR, (c) application of model checking to verify desired properties. We evaluate Soteria on 65 SmartThings market apps through 35 properties and find nine (14%) individual apps violate ten (29%) properties. Further, our study of combined app environments uncovered eleven property violations not exhibited in the isolated apps. Lastly, we demonstrate Soteria on MalIoT, a novel open-source test suite containing 17 apps with 20 unique violations.
[ { "version": "v1", "created": "Tue, 22 May 2018 21:41:04 GMT" } ]
2018-05-24T00:00:00
[ [ "Celik", "Z. Berkay", "" ], [ "McDaniel", "Patrick", "" ], [ "Tan", "Gang", "" ] ]
new_dataset
0.99924
1805.08893
Michael Kenzel
Michael Kenzel, Bernhard Kerbl, Wolfgang Tatzgern, Elena Ivanchenko, Dieter Schmalstieg, Markus Steinberger
On-the-fly Vertex Reuse for Massively-Parallel Software Geometry Processing
null
null
null
null
cs.GR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compute-mode rendering is becoming more and more attractive for non-standard rendering applications, due to the high flexibility of compute-mode execution. These newly designed pipelines often include streaming vertex and geometry processing stages. In typical triangle meshes, the same transformed vertex is on average required six times during rendering. To avoid redundant computation, a post-transform cache is traditionally suggested to enable reuse of vertex processing results. However, traditional caching neither scales well as the hardware becomes more parallel, nor can it be efficiently implemented in a software design. We investigate alternative strategies for reusing vertex shading results on-the-fly for massively parallel software geometry processing. Forming static and dynamic batches on the input data stream, we analyze the effectiveness of identifying potential local reuse based on sorting, hashing, and efficient intra-thread-group communication. Altogether, we present four vertex reuse strategies, tailored to modern parallel architectures. Our simulations show that our batch-based strategies significantly outperform parallel caches in terms of reuse. On actual GPU hardware, our evaluation shows that our strategies not only lead to good reuse of processing results, but also boost performance by $2-3\times$ compared to na\"ively ignoring reuse in a variety of practical applications.
[ { "version": "v1", "created": "Tue, 22 May 2018 22:40:07 GMT" } ]
2018-05-24T00:00:00
[ [ "Kenzel", "Michael", "" ], [ "Kerbl", "Bernhard", "" ], [ "Tatzgern", "Wolfgang", "" ], [ "Ivanchenko", "Elena", "" ], [ "Schmalstieg", "Dieter", "" ], [ "Steinberger", "Markus", "" ] ]
new_dataset
0.96837
1805.08932
Chetan Singh Thakur
Chetan Singh Thakur, Jamal Molin, Gert Cauwenberghs, Giacomo Indiveri, Kundan Kumar, Ning Qiao, Johannes Schemmel, Runchun Wang, Elisabetta Chicca, Jennifer Olson Hasler, Jae-sun Seo, Shimeng Yu, Yu Cao, Andr\'e van Schaik, Ralph Etienne-Cummings
Large-Scale Neuromorphic Spiking Array Processors: A quest to mimic the brain
null
null
null
null
cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has twofold goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications such as the porting of deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers.
[ { "version": "v1", "created": "Wed, 23 May 2018 01:52:33 GMT" } ]
2018-05-24T00:00:00
[ [ "Thakur", "Chetan Singh", "" ], [ "Molin", "Jamal", "" ], [ "Cauwenberghs", "Gert", "" ], [ "Indiveri", "Giacomo", "" ], [ "Kumar", "Kundan", "" ], [ "Qiao", "Ning", "" ], [ "Schemmel", "Johannes", "" ], [ "Wang", "Runchun", "" ], [ "Chicca", "Elisabetta", "" ], [ "Hasler", "Jennifer Olson", "" ], [ "Seo", "Jae-sun", "" ], [ "Yu", "Shimeng", "" ], [ "Cao", "Yu", "" ], [ "van Schaik", "André", "" ], [ "Etienne-Cummings", "Ralph", "" ] ]
new_dataset
0.998347
1805.08955
Prasad Krishnan Dr
Prasad Krishnan
Coded Caching via Line Graphs of Bipartite Graphs
Keywords: coded caching based on projective geometry over finite fields
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a coded caching framework using line graphs of bipartite graphs. A clique cover of the line graph describes the uncached subfiles at users. A clique cover of the complement of the square of the line graph gives a transmission scheme that satisfies user demands. We then define a specific class of such caching line graphs, for which the subpacketization, rate, and uncached fraction of the coded caching problem can be captured via its graph theoretic parameters. We present a construction of such caching line graphs using projective geometry. The presented scheme has a rate bounded from above by a constant with subpacketization level $q^{O((\log_q K)^2)}$ and uncached fraction $\Theta(\frac{1}{\sqrt{K}})$, where $K$ is the number of users and $q$ is a prime power. We also present a subpacketization-dependent lower bound on the rate of coded caching schemes for a given broadcast setup.
[ { "version": "v1", "created": "Wed, 23 May 2018 04:14:34 GMT" } ]
2018-05-24T00:00:00
[ [ "Krishnan", "Prasad", "" ] ]
new_dataset
0.998077
1805.08962
Kensuke Harada
Kensuke Harada, Kento Nakayama, Weiwei Wan, Kazuyuki Nagata, Natsuki Yamanobe, and Ixchel G. Ramirez-Alpizar
Tool Exchangeable Grasp/Assembly Planner
To appear in Int. Conf. on Intelligent Autonomous Systems
Int. Conf. on Intelligent Autonomous Systems, 2018
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel assembly planner for a manipulator which can simultaneously plan assembly sequence, robot motion, grasping configuration, and exchange of grippers. Our assembly planner assumes multiple grippers and can automatically select a feasible one to assemble a part. For a given AND/OR graph of an assembly task, we consider generating the assembly graph from which assembly motion of a robot can be planned. The edges of the assembly graph are composed of three kinds of paths, i.e., transfer/assembly paths, transit paths and tool exchange paths. In this paper, we first explain the proposed method for planning assembly motion sequences, including the function of gripper exchange. Finally, the effectiveness of the proposed method is confirmed through some numerical examples and a physical experiment.
[ { "version": "v1", "created": "Wed, 23 May 2018 05:17:07 GMT" } ]
2018-05-24T00:00:00
[ [ "Harada", "Kensuke", "" ], [ "Nakayama", "Kento", "" ], [ "Wan", "Weiwei", "" ], [ "Nagata", "Kazuyuki", "" ], [ "Yamanobe", "Natsuki", "" ], [ "Ramirez-Alpizar", "Ixchel G.", "" ] ]
new_dataset
0.993121
1805.08982
Chenglong Li
Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, Jin Tang
RGB-T Object Tracking:Benchmark and Baseline
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RGB-Thermal (RGB-T) object tracking receives more and more attention due to the strongly complementary benefits of thermal information to visible data. However, RGB-T research is limited by the lack of a comprehensive evaluation platform. In this paper, we propose a large-scale video benchmark dataset for RGB-T tracking. It has three major advantages over existing ones: 1) Its size is sufficiently large for large-scale performance evaluation (total frame number: 234K, maximum frames per sequence: 8K). 2) The alignment between RGB-T sequence pairs is highly accurate, which does not need pre- or post-processing. 3) The occlusion levels are annotated for occlusion-sensitive performance analysis of different tracking algorithms. Moreover, we propose a novel graph-based approach to learn a robust object representation for RGB-T tracking. In particular, the tracked object is represented with a graph with image patches as nodes. This graph, including its structure, node weights and edge weights, is dynamically learned in a unified ADMM (alternating direction method of multipliers)-based optimization framework, in which the modality weights are also incorporated for adaptive fusion of multiple source data. Extensive experiments on the large-scale dataset are executed to demonstrate the effectiveness of the proposed tracker against other state-of-the-art tracking methods. We also provide new insights and potential research directions to the field of RGB-T object tracking.
[ { "version": "v1", "created": "Wed, 23 May 2018 07:13:39 GMT" } ]
2018-05-24T00:00:00
[ [ "Li", "Chenglong", "" ], [ "Liang", "Xinyan", "" ], [ "Lu", "Yijuan", "" ], [ "Zhao", "Nan", "" ], [ "Tang", "Jin", "" ] ]
new_dataset
0.999395
1805.09061
Kevin Eckenhoff
Indrajeet Yadav, Kevin Eckenhoff, Guoquan Huang, and Herbert G. Tanner
Visual-Inertial Target Tracking and Motion Planning for UAV-based Radiation Detection
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of detecting radioactive material in transit using a UAV with minimal sensing capability, where the objective is to classify the target's radioactivity as the vehicle plans its path through the workspace while tracking the target for a short time interval. To this end, we propose a motion planning framework that integrates tightly-coupled visual-inertial localization and target tracking. In this framework, the 3D workspace is known, and this information, together with the UAV dynamics, is used to construct a navigation function that generates dynamically feasible, safe paths which avoid obstacles and provably converge to the moving target. The efficacy of the proposed approach is validated through realistic simulations in Gazebo.
[ { "version": "v1", "created": "Wed, 23 May 2018 11:19:09 GMT" } ]
2018-05-24T00:00:00
[ [ "Yadav", "Indrajeet", "" ], [ "Eckenhoff", "Kevin", "" ], [ "Huang", "Guoquan", "" ], [ "Tanner", "Herbert G.", "" ] ]
new_dataset
0.996854
1805.09277
Sangha Lee
Sang-Ha Lee, Soon-Chul Kwon, Jin-Wook Shim, Jeong-Eun Lim, Jisang Yoo
WisenetMD: Motion Detection Using Dynamic Background Region Analysis
8 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion detection algorithms that can be applied to surveillance cameras such as CCTV (Closed Circuit Television) have been studied extensively. Motion detection algorithms are mostly based on background subtraction. One main issue with this technique is that false positives might occur on dynamic backgrounds such as wind-shaken trees and flowing rivers. In this paper, we propose a method to search for dynamic background regions by analyzing the video and to remove false positives by re-checking them. The proposed method was evaluated on the CDnet 2012/2014 dataset obtained from the changedetection.net site. We also compared its processing speed with that of other algorithms.
[ { "version": "v1", "created": "Wed, 23 May 2018 16:48:27 GMT" } ]
2018-05-24T00:00:00
[ [ "Lee", "Sang-Ha", "" ], [ "Kwon", "Soon-Chul", "" ], [ "Shim", "Jin-Wook", "" ], [ "Lim", "Jeong-Eun", "" ], [ "Yoo", "Jisang", "" ] ]
new_dataset
0.992113
1703.07938
Pengpeng Liang
Pengpeng Liang, Yifan Wu, Hu Lu, Liming Wang, Chunyuan Liao, Haibin Ling
Planar Object Tracking in the Wild: A Benchmark
Accepted by ICRA 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Planar object tracking is an actively studied problem in vision-based robotic applications. While several benchmarks have been constructed for evaluating state-of-the-art algorithms, there is a lack of video sequences captured in the wild rather than in constrained laboratory environment. In this paper, we present a carefully designed planar object tracking benchmark containing 210 videos of 30 planar objects sampled in the natural environment. In particular, for each object, we shoot seven videos involving various challenging factors, namely scale change, rotation, perspective distortion, motion blur, occlusion, out-of-view, and unconstrained. The ground truth is carefully annotated semi-manually to ensure the quality. Moreover, eleven state-of-the-art algorithms are evaluated on the benchmark using two evaluation metrics, with detailed analysis provided for the evaluation results. We expect the proposed benchmark to benefit future studies on planar object tracking.
[ { "version": "v1", "created": "Thu, 23 Mar 2017 05:21:24 GMT" }, { "version": "v2", "created": "Tue, 22 May 2018 06:54:43 GMT" } ]
2018-05-23T00:00:00
[ [ "Liang", "Pengpeng", "" ], [ "Wu", "Yifan", "" ], [ "Lu", "Hu", "" ], [ "Wang", "Liming", "" ], [ "Liao", "Chunyuan", "" ], [ "Ling", "Haibin", "" ] ]
new_dataset
0.999663
1709.05862
Mohammad Reza Loghmani
Mohammad Reza Loghmani and Barbara Caputo and Markus Vincze
Recognizing Objects In-the-wild: Where Do We Stand?
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to recognize objects is an essential skill for a robotic system acting in human-populated environments. Despite decades of effort from the robotic and vision research communities, robots are still missing good visual perceptual systems, preventing the use of autonomous agents for real-world applications. The progress is slowed down by the lack of a testbed able to accurately represent the world perceived by the robot in-the-wild. In order to fill this gap, we introduce a large-scale, multi-view object dataset collected with an RGB-D camera mounted on a mobile robot. The dataset embeds the challenges faced by a robot in a real-life application and provides a useful tool for validating object recognition algorithms. Besides describing the characteristics of the dataset, the paper evaluates the performance of a collection of well-established deep convolutional networks on the new dataset and analyzes the transferability of deep representations from Web images to robotic data. Despite the promising results obtained with such representations, the experiments demonstrate that object classification with real-life robotic data is far from being solved. Finally, we provide a comparative study to analyze and highlight the open challenges in robot vision, explaining the discrepancies in the performance.
[ { "version": "v1", "created": "Mon, 18 Sep 2017 11:11:31 GMT" }, { "version": "v2", "created": "Tue, 22 May 2018 11:55:27 GMT" } ]
2018-05-23T00:00:00
[ [ "Loghmani", "Mohammad Reza", "" ], [ "Caputo", "Barbara", "" ], [ "Vincze", "Markus", "" ] ]
new_dataset
0.999246
1802.08218
Danna Gurari
Danna Gurari, Qing Li, Abigale J. Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P. Bigham
VizWiz Grand Challenge: Answering Visual Questions from Blind People
null
null
null
null
cs.CV cs.CL cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of algorithms to automatically answer visual questions currently is motivated by visual question answering (VQA) datasets constructed in artificial VQA settings. We propose VizWiz, the first goal-oriented VQA dataset arising from a natural VQA setting. VizWiz consists of over 31,000 visual questions originating from blind people who each took a picture using a mobile phone and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. VizWiz differs from the many existing VQA datasets because (1) images are captured by blind photographers and so are often poor quality, (2) questions are spoken and so are more conversational, and (3) often visual questions cannot be answered. Evaluation of modern algorithms for answering visual questions and deciding if a visual question is answerable reveals that VizWiz is a challenging dataset. We introduce this dataset to encourage a larger community to develop more generalized algorithms that can assist blind people.
[ { "version": "v1", "created": "Thu, 22 Feb 2018 18:16:53 GMT" }, { "version": "v2", "created": "Thu, 29 Mar 2018 19:52:08 GMT" }, { "version": "v3", "created": "Mon, 2 Apr 2018 15:53:07 GMT" }, { "version": "v4", "created": "Wed, 9 May 2018 17:26:40 GMT" } ]
2018-05-23T00:00:00
[ [ "Gurari", "Danna", "" ], [ "Li", "Qing", "" ], [ "Stangl", "Abigale J.", "" ], [ "Guo", "Anhong", "" ], [ "Lin", "Chi", "" ], [ "Grauman", "Kristen", "" ], [ "Luo", "Jiebo", "" ], [ "Bigham", "Jeffrey P.", "" ] ]
new_dataset
0.997989
1803.05434
Pablo Barros
Pablo Barros, Nikhil Churamani, Egor Lakomkin, Henrique Siqueira, Alexander Sutherland and Stefan Wermter
The OMG-Emotion Behavior Dataset
Submited to WCCI/IJCNN 2018
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
This paper is the basis paper for the accepted IJCNN challenge One-Minute Gradual-Emotion Recognition (OMG-Emotion), by which we hope to foster long-emotion classification using neural models for the benefit of the IJCNN community. The novelty of the proposed corpus lies in its data collection and annotation strategy, based on emotion expressions that evolve over time within a specific context. Different from other corpora, we propose a novel multimodal corpus for emotion expression recognition, which uses gradual annotations with a focus on contextual emotion expressions. Our dataset was collected from Youtube videos using a specific search strategy based on restricted keywords and filtering, which guaranteed that the data follow a gradual emotion expression transition, i.e. emotion expressions evolve over time in a natural and continuous fashion. We also provide an experimental protocol and a series of unimodal baseline experiments which can be used to evaluate deep and recurrent neural models in a fair and standard manner.
[ { "version": "v1", "created": "Wed, 14 Mar 2018 15:31:03 GMT" }, { "version": "v2", "created": "Tue, 22 May 2018 14:00:37 GMT" } ]
2018-05-23T00:00:00
[ [ "Barros", "Pablo", "" ], [ "Churamani", "Nikhil", "" ], [ "Lakomkin", "Egor", "" ], [ "Siqueira", "Henrique", "" ], [ "Sutherland", "Alexander", "" ], [ "Wermter", "Stefan", "" ] ]
new_dataset
0.996767
1803.06854
Sebastian Meiling
Sebastian Meiling, Dorothea Purnomo, Julia-Ann Shiraishi, Michael Fischer, and Thomas C. Schmidt
MONICA in Hamburg: Towards Large-Scale IoT Deployments in a Smart City
6 pages
Proceedings of the European Conference on Networks and Communications, EuCNC, 2018
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern cities and metropolitan areas all over the world face new management challenges in the 21st century primarily due to increasing demands on living standards by the urban population. These challenges range from climate change, pollution, transportation, and citizen engagement, to urban planning, and security threats. The primary goal of a Smart City is to counteract these problems and mitigate their effects by means of modern ICT to improve urban administration and infrastructure. Key ideas are to utilise network communication to inter-connect public authorities; but also to deploy and integrate numerous sensors and actuators throughout the city infrastructure - which is also widely known as the Internet of Things (IoT). Thus, IoT technologies will be an integral part and key enabler to achieve many objectives of the Smart City vision. The contributions of this paper are as follows. We first examine a number of IoT platforms, technologies and network standards that can help to foster a Smart City environment. Second, we introduce the EU project MONICA which aims for demonstration of large-scale IoT deployments at public, inner-city events and give an overview on its IoT platform architecture. And third, we provide a case-study report on SmartCity activities by the City of Hamburg and provide insights on recent (on-going) field tests of a vertically integrated, end-to-end IoT sensor application.
[ { "version": "v1", "created": "Mon, 19 Mar 2018 10:05:41 GMT" }, { "version": "v2", "created": "Tue, 15 May 2018 11:11:44 GMT" } ]
2018-05-23T00:00:00
[ [ "Meiling", "Sebastian", "" ], [ "Purnomo", "Dorothea", "" ], [ "Shiraishi", "Julia-Ann", "" ], [ "Fischer", "Michael", "" ], [ "Schmidt", "Thomas C.", "" ] ]
new_dataset
0.999082
1805.08320
Sarah Ackerman
Sarah M. Ackerman, G. Matthew Fricke, Joshua P. Hecker, Kastro M. Hamed, Samantha R. Fowler, Antonio D. Griego, Jarett C. Jones, J. Jake Nichol, Kurt W. Leucht, and Melanie E. Moses
The Swarmathon: An Autonomous Swarm Robotics Competition
Paper presented May 2018 at ICRA 2018 Workshop: "Swarms: From Biology to Robotics and Back"
null
null
null
cs.MA cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Swarmathon is a swarm robotics programming challenge that engages college students from minority-serving institutions in NASA's Journey to Mars. Teams compete by programming a group of robots to search for, pick up, and drop off resources in a collection zone. The Swarmathon produces prototypes for robot swarms that would collect resources on the surface of Mars. Robots operate completely autonomously with no global map, and each team's algorithm must be sufficiently flexible to effectively find resources from a variety of unknown distributions. The Swarmathon includes Physical and Virtual Competitions. Physical competitors test their algorithms on robots they build at their schools; they then upload their code to run autonomously on identical robots during the three day competition in an outdoor arena at Kennedy Space Center. Virtual competitors complete an identical challenge in simulation. Participants mentor local teams to compete in a separate High School Division. In the first 2 years, over 1,100 students participated. 63% of students were from underrepresented ethnic and racial groups. Participants had significant gains in both interest and core robotic competencies that were equivalent across gender and racial groups, suggesting that the Swarmathon is effectively educating a diverse population of future roboticists.
[ { "version": "v1", "created": "Mon, 21 May 2018 23:18:58 GMT" } ]
2018-05-23T00:00:00
[ [ "Ackerman", "Sarah M.", "" ], [ "Fricke", "G. Matthew", "" ], [ "Hecker", "Joshua P.", "" ], [ "Hamed", "Kastro M.", "" ], [ "Fowler", "Samantha R.", "" ], [ "Griego", "Antonio D.", "" ], [ "Jones", "Jarett C.", "" ], [ "Nichol", "J. Jake", "" ], [ "Leucht", "Kurt W.", "" ], [ "Moses", "Melanie E.", "" ] ]
new_dataset
0.99978
1805.08399
Rudresh Dwivedi
Rudresh Dwivedi, Somnath Dey, Mukul Anand Sharma, Apurv Goel
A fingerprint based crypto-biometric system for secure communication
29 single column pages, 8 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To ensure the secure transmission of data, cryptography is treated as the most effective solution. Cryptographic key is an important entity in this procedure. In general, randomly generated cryptographic key (of 256 bits) is difficult to remember. However, such a key needs to be stored in a protected place or transported through a shared communication line which, in fact, poses another threat to security. As an alternative, researchers advocate the generation of cryptographic key using the biometric traits of both sender and receiver during the sessions of communication, thus avoiding key storing and at the same time without compromising the strength in security. Nevertheless, the biometric-based cryptographic key generation possesses few concerns such as privacy of biometrics, sharing of biometric data between both communicating users (i.e., sender and receiver), and generating revocable key from irrevocable biometric. This work addresses the above-mentioned concerns. In this work, a framework for secure communication between two users using fingerprint based crypto-biometric system has been proposed. For this, Diffie-Hellman (DH) algorithm is used to generate public keys from private keys of both sender and receiver which are shared and further used to produce a symmetric cryptographic key at both ends. In this approach, revocable key for symmetric cryptography is generated from irrevocable fingerprint. The biometric data is neither stored nor shared which ensures the security of biometric data, and perfect forward secrecy is achieved using session keys. This work also ensures the long-term security of messages communicated between two users. Based on the experimental evaluation over four datasets of FVC2002 and NIST special database, the proposed framework is privacy-preserving and could be utilized onto real access control systems.
[ { "version": "v1", "created": "Tue, 22 May 2018 05:22:24 GMT" } ]
2018-05-23T00:00:00
[ [ "Dwivedi", "Rudresh", "" ], [ "Dey", "Somnath", "" ], [ "Sharma", "Mukul Anand", "" ], [ "Goel", "Apurv", "" ] ]
new_dataset
0.994478
1805.08480
Jiangtao Wang
Jiangtao Wang, Feng Wang, Yasha Wang, Leye Wang, Zhaopeng Qiu, Daqing Zhang, Bin Guo, Qin Lv
HyTasker: Hybrid Task Allocation in Mobile Crowd Sensing
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Task allocation is a major challenge in Mobile Crowd Sensing (MCS). While previous task allocation approaches follow either the opportunistic or participatory mode, this paper proposes to integrate these two complementary modes in a two-phased hybrid framework called HyTasker. In the offline phase, a group of workers (called opportunistic workers) are selected, and they complete MCS tasks during their daily routines (i.e., opportunistic mode). In the online phase, we assign another set of workers (called participatory workers) and require them to move specifically to perform tasks that are not completed by the opportunistic workers (i.e., participatory mode). Instead of considering these two phases separately, HyTasker jointly optimizes them with a total incentive budget constraint. In particular, when selecting opportunistic workers in the offline phase of HyTasker, we propose a novel algorithm that simultaneously considers the predicted task assignment for the participatory workers, in which the density and mobility of participatory workers are taken into account. Experiments on a real-world mobility dataset demonstrate that HyTasker outperforms other methods with more completed tasks under the same budget constraint.
[ { "version": "v1", "created": "Tue, 22 May 2018 10:10:42 GMT" } ]
2018-05-23T00:00:00
[ [ "Wang", "Jiangtao", "" ], [ "Wang", "Feng", "" ], [ "Wang", "Yasha", "" ], [ "Wang", "Leye", "" ], [ "Qiu", "Zhaopeng", "" ], [ "Zhang", "Daqing", "" ], [ "Guo", "Bin", "" ], [ "Lv", "Qin", "" ] ]
new_dataset
0.967725
1805.08500
Renato Farias
Renato Farias, Marcelo Kallmann
Improved Shortest Path Maps with GPU Shaders
Work being submitted for peer review, 9 pages, 8 figures
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present in this paper several improvements for computing shortest path maps using OpenGL shaders. The approach explores GPU rasterization as a way to propagate optimal costs on a polygonal 2D environment, producing shortest path maps which can efficiently be queried at run-time. Our improved method relies on Compute Shaders for improved performance, does not require any CPU pre-computation, and handles shortest path maps both with source points and with line segment sources. The produced path maps partition the input environment into regions sharing a same parent point along the shortest path to the closest source point or segment source. Our method produces paths with global optimality, a characteristic which has been mostly neglected in animated virtual environments. The proposed approach is particularly suitable for the animation of multiple agents moving toward the entrances or exits of a virtual environment, a situation which is efficiently represented with the proposed path maps.
[ { "version": "v1", "created": "Tue, 22 May 2018 11:03:30 GMT" } ]
2018-05-23T00:00:00
[ [ "Farias", "Renato", "" ], [ "Kallmann", "Marcelo", "" ] ]
new_dataset
0.964557
1805.08520
Hannes M\"uhleisen
Mark Raasveldt, Hannes M\"uhleisen
MonetDBLite: An Embedded Analytical Database
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While traditional RDBMSes offer a lot of advantages, they require significant effort to setup and to use. Because of these challenges, many data scientists and analysts have switched to using alternative data management solutions. These alternatives, however, lack features that are standard for RDBMSes, e.g. out-of-core query execution. In this paper, we introduce the embedded analytical database MonetDBLite. MonetDBLite is designed to be both highly efficient and easy to use in conjunction with standard analytical tools. It can be installed using standard package managers, and requires no configuration or server management. It is designed for OLAP scenarios, and offers near-instantaneous data transfer between the database and analytical tools, all the while maintaining the transactional guarantees and ACID properties of a standard relational system. These properties make MonetDBLite highly suitable as a storage engine for data used in analytics, machine learning and classification tasks.
[ { "version": "v1", "created": "Tue, 22 May 2018 11:50:35 GMT" } ]
2018-05-23T00:00:00
[ [ "Raasveldt", "Mark", "" ], [ "Mühleisen", "Hannes", "" ] ]
new_dataset
0.996662
1805.08598
Ying Mao
Ying Mao, Jenna Oak, Anthony Pompili, Daniel Beer, Tao Han, Peizhao Hu
DRAPS: Dynamic and Resource-Aware Placement Scheme for Docker Containers in a Heterogeneous Cluster
The 36th IEEE International Performance Computing and Communications Conference(IPCCC'2017)
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtualization is a promising technology that has facilitated cloud computing to become the next wave of the Internet revolution. Adopted by data centers, millions of applications that are powered by various virtual machines improve the quality of services. Although virtual machines are well-isolated among each other, they suffer from redundant boot volumes and slow provisioning time. To address limitations, containers were born to deploy and run distributed applications without launching entire virtual machines. As a dominant player, Docker is an open-source implementation of container technology. When managing a cluster of Docker containers, the management tool, Swarmkit, does not take the heterogeneities in both physical nodes and virtualized containers into consideration. The heterogeneity lies in the fact that different nodes in the cluster may have various configurations, concerning resource types and availabilities, etc., and the demands generated by services are varied, such as CPU-intensive (e.g. Clustering services) as well as memory-intensive (e.g. Web services). In this paper, we target on investigating the Docker container cluster and developed, DRAPS, a resource-aware placement scheme to boost the system performance in a heterogeneous cluster.
[ { "version": "v1", "created": "Tue, 22 May 2018 14:18:46 GMT" } ]
2018-05-23T00:00:00
[ [ "Mao", "Ying", "" ], [ "Oak", "Jenna", "" ], [ "Pompili", "Anthony", "" ], [ "Beer", "Daniel", "" ], [ "Han", "Tao", "" ], [ "Hu", "Peizhao", "" ] ]
new_dataset
0.994219
1805.08645
Deniz Ozsoyeller
Deniz Ozsoyeller
Multi-robot Symmetric Rendezvous Search on the Line with an Unknown Initial Distance
null
null
null
null
cs.DC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the symmetric rendezvous search problem on the line with n > 2 robots that are unaware of their locations and the initial distances between them. In the symmetric version of this problem, the robots execute the same strategy. The multi-robot symmetric rendezvous algorithm, MSR presented in this paper is an extension of our symmetric rendezvous algorithm, SR presented in [23]. We study both the synchronous and asynchronous cases of the problem. The asynchronous version of MSR algorithm is called MASR algorithm. We consider that robots start executing MASR at different times. We perform the theoretical analysis of MSR and MASR, and show that their competitive ratios are $O(n^{0.67})$ and $O(n^{1.5})$, respectively. Finally, we confirm our theoretical results through simulations.
[ { "version": "v1", "created": "Mon, 21 May 2018 08:05:24 GMT" } ]
2018-05-23T00:00:00
[ [ "Ozsoyeller", "Deniz", "" ] ]
new_dataset
0.996857
1805.08706
Jignesh Bhatt Shashikant
Jignesh S. Bhatt and N. Padmanabhan
Automatic Data Registration of Geostationary Payloads for Meteorological Applications at ISRO
16 pages, 13 figures
null
null
null
cs.CV cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The launch of KALPANA-1 satellite in the year 2002 heralded the establishment of an indigenous operational payload for meteorological predictions. This was further enhanced in the year 2003 with the launching of INSAT-3A satellite. The software for generating products from the data of these two satellites was taken up subsequently in the year 2004 and the same was installed at the Indian Meteorological Department, New Delhi in January 2006. Registration has been one of the most fundamental operations to generate almost all the data products from the remotely sensed data. Registration is a challenging task due to inevitable radiometric and geometric distortions during the acquisition process. Besides the presence of clouds makes the problem more complicated. In this paper, we present an algorithm for multitemporal and multiband registration. In addition, India facing reference boundaries for the CCD data of INSAT-3A have also been generated. The complete implementation is made up of the following steps: 1) automatic identification of the ground control points (GCPs) in the sensed data, 2) finding the optimal transformation model based on the match-points, and 3) resampling the transformed imagery to the reference coordinates. The proposed algorithm is demonstrated using the real datasets from KALPANA-1 and INSAT-3A. Both KALPANA-1 and INSAT-3A have recently been decommissioned due to lack of fuel, however, the experience gained from them have given rise to a series of meteorological satellites and associated software; like INSAT-3D series which give continuous weather forecasting for the country. This paper is not so much focused on the theory (widely available in the literature) but concentrates on the implementation of operational software.
[ { "version": "v1", "created": "Thu, 17 May 2018 07:41:48 GMT" } ]
2018-05-23T00:00:00
[ [ "Bhatt", "Jignesh S.", "" ], [ "Padmanabhan", "N.", "" ] ]
new_dataset
0.977648
1608.03180
Jiangbin Lyu Dr.
Jiangbin Lyu, Yong Zeng and Rui Zhang
Cyclical Multiple Access in UAV-Aided Communications: A Throughput-Delay Tradeoff
5 pages, 3 figures, published in IEEE Wireless Communications Letters, https://ieeexplore.ieee.org/document/7556368/
null
10.1109/LWC.2016.2604306
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This letter studies a wireless system consisting of distributed ground terminals (GTs) communicating with an unmanned aerial vehicle (UAV) that serves as a mobile base station (BS). The UAV flies cyclically above the GTs at a fixed altitude, which results in a cyclical pattern of the strength of the UAV-GT channels. To exploit such periodic channel variations, we propose a new cyclical multiple access (CMA) scheme to schedule the communications between the UAV and GTs in a cyclical time-division manner based on the flying UAV's position. The time allocations to different GTs are optimized to maximize their minimum throughput. It is revealed that there is a fundamental tradeoff between throughput and access delay in the proposed CMA. Simulation results show significant throughput gains over the case of a static UAV BS in delay-tolerant applications.
[ { "version": "v1", "created": "Wed, 10 Aug 2016 14:04:43 GMT" }, { "version": "v2", "created": "Sun, 20 May 2018 04:30:51 GMT" } ]
2018-05-22T00:00:00
[ [ "Lyu", "Jiangbin", "" ], [ "Zeng", "Yong", "" ], [ "Zhang", "Rui", "" ] ]
new_dataset
0.984561
1707.08234
Jeremy Morton
Jeremy Morton, Tim A. Wheeler, Mykel J. Kochenderfer
Closed-Loop Policies for Operational Tests of Safety-Critical Systems
12 pages, 5 figures, 5 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manufacturers of safety-critical systems must make the case that their product is sufficiently safe for public deployment. Much of this case often relies upon critical event outcomes from real-world testing, requiring manufacturers to be strategic about how they allocate testing resources in order to maximize their chances of demonstrating system safety. This work frames the partially observable and belief-dependent problem of test scheduling as a Markov decision process, which can be solved efficiently to yield closed-loop manufacturer testing policies. By solving for policies over a wide range of problem formulations, we are able to provide high-level guidance for manufacturers and regulators on issues relating to the testing of safety-critical systems. This guidance spans an array of topics, including circumstances under which manufacturers should continue testing despite observed incidents, when manufacturers should test aggressively, and when regulators should increase or reduce the real-world testing requirements for an autonomous vehicle.
[ { "version": "v1", "created": "Tue, 25 Jul 2017 21:48:58 GMT" }, { "version": "v2", "created": "Wed, 13 Dec 2017 18:20:38 GMT" }, { "version": "v3", "created": "Sat, 19 May 2018 20:34:54 GMT" } ]
2018-05-22T00:00:00
[ [ "Morton", "Jeremy", "" ], [ "Wheeler", "Tim A.", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
new_dataset
0.998429
1709.04916
Rub\'en Saborido Infantes
Rub\'en Saborido, Foutse Khomh, Abram Hindle, Enrique Alba
An App Performance Optimization Advisor for Mobile Device App Marketplaces
18 pages, 8 figures
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On mobile phones, users and developers use official app marketplaces serving as repositories of apps. The Google Play Store and Apple Store are the official marketplaces of Android and Apple products which offer more than a million apps. Although both repositories offer description of apps, information concerning performance is not available. Due to the constrained hardware of mobile devices, users and developers have to meticulously manage the resources available and they should be given access to performance information about apps. Even if this information was available, the selection of apps would still depend on user preferences and it would require a huge cognitive effort to make optimal decisions. Considering this fact we propose APOA, a recommendation system which can be implemented in any marketplace for helping users and developers to compare apps in terms of performance. APOA uses as input metric values of apps and a set of metrics to optimize. It solves an optimization problem and it generates optimal sets of apps for different user's context. We show how APOA works over an Android case study. Out of 140 apps, we define typical usage scenarios and we collect measurements of power, CPU, memory, and network usages to demonstrate the benefit of using APOA.
[ { "version": "v1", "created": "Thu, 14 Sep 2017 01:08:53 GMT" }, { "version": "v2", "created": "Sun, 20 May 2018 16:02:59 GMT" } ]
2018-05-22T00:00:00
[ [ "Saborido", "Rubén", "" ], [ "Khomh", "Foutse", "" ], [ "Hindle", "Abram", "" ], [ "Alba", "Enrique", "" ] ]
new_dataset
0.996734
1710.02855
Anoop Kunchukuttan
Anoop Kunchukuttan, Pratik Mehta, Pushpak Bhattacharyya
The IIT Bombay English-Hindi Parallel Corpus
accepted for LREC 2018, 4 pages, parallel corpus for English-Hindi machine translation
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the IIT Bombay English-Hindi Parallel Corpus. The corpus is a compilation of parallel corpora previously available in the public domain as well as new parallel corpora we collected. The corpus contains 1.49 million parallel segments, of which 694k segments were not previously available in the public domain. The corpus has been pre-processed for machine translation, and we report baseline phrase-based SMT and NMT translation results on this corpus. This corpus has been used in two editions of shared tasks at the Workshop on Asian Language Translation (2016 and 2017). The corpus is freely available for non-commercial research. To the best of our knowledge, this is the largest publicly available English-Hindi parallel corpus.
[ { "version": "v1", "created": "Sun, 8 Oct 2017 16:56:05 GMT" }, { "version": "v2", "created": "Sat, 19 May 2018 20:00:21 GMT" } ]
2018-05-22T00:00:00
[ [ "Kunchukuttan", "Anoop", "" ], [ "Mehta", "Pratik", "" ], [ "Bhattacharyya", "Pushpak", "" ] ]
new_dataset
0.999408
1710.04783
Dwarikanath Mahapatra
Dwarikanath Mahapatra, Behzad Bozorgtabar
Retinal Vasculature Segmentation Using Local Saliency Maps and Generative Adversarial Networks For Image Super Resolution
Accepted in MICCAI 2017 conference
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an image super resolution (ISR) method using generative adversarial networks (GANs) that takes a low resolution input fundus image and generates a high resolution super resolved (SR) image up to a scaling factor of $16$. This facilitates more accurate automated image analysis, especially for small or blurred landmarks and pathologies. Local saliency maps, which define each pixel's importance, are used to define a novel saliency loss in the GAN cost function. Experimental results show the resulting SR images have perceptual quality very close to the original images and perform better than competing methods that do not weigh pixels according to their importance. When used for retinal vasculature segmentation, our SR images result in accuracy levels close to those obtained when using the original images.
[ { "version": "v1", "created": "Fri, 13 Oct 2017 02:17:05 GMT" }, { "version": "v2", "created": "Mon, 16 Oct 2017 23:59:28 GMT" }, { "version": "v3", "created": "Mon, 21 May 2018 05:24:11 GMT" } ]
2018-05-22T00:00:00
[ [ "Mahapatra", "Dwarikanath", "" ], [ "Bozorgtabar", "Behzad", "" ] ]
new_dataset
0.995226
1801.05948
Xiaohui Zhou
Xiaohui Zhou, Jing Guo, Salman Durrani, and Halim Yanikomeroglu
Uplink Coverage Performance of an Underlay Drone Cell for Temporary Events
This work is accepted to 2018 IEEE International Conference on Communications Workshops (ICC Workshops): Integrating UAVs into 5G
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a drone as an aerial base station (ABS) to provide coverage to users on the ground is envisaged as a promising solution for beyond fifth generation (beyond-5G) wireless networks. While the literature to date has examined downlink cellular networks with ABSs, we consider an uplink cellular network with an ABS. Specifically, we analyze the use of an underlay ABS to provide coverage for a temporary event, such as a sporting event or a concert in a stadium. Using stochastic geometry, we derive the analytical expressions for the uplink coverage probability of the terrestrial base station (TBS) and the ABS. The results are expressed in terms of (i) the Laplace transforms of the interference power distribution at the TBS and the ABS and (ii) the distance distribution between the ABS and an independently and uniformly distributed (i.u.d.) ABS-supported user equipment and between the ABS and an i.u.d. TBS-supported user equipment. The accuracy of the analytical results is verified by Monte Carlo simulations. Our results show that varying the ABS height leads to a trade-off between the uplink coverage probability of the TBS and the ABS. In addition, assuming a quality of service of 90% at the TBS, an uplink coverage probability of the ABS of over 85% can be achieved, with the ABS deployed at or below its optimal height of typically between 250-500 m for the considered setup.
[ { "version": "v1", "created": "Thu, 18 Jan 2018 06:11:18 GMT" }, { "version": "v2", "created": "Tue, 6 Mar 2018 05:41:36 GMT" }, { "version": "v3", "created": "Sun, 20 May 2018 23:44:01 GMT" } ]
2018-05-22T00:00:00
[ [ "Zhou", "Xiaohui", "" ], [ "Guo", "Jing", "" ], [ "Durrani", "Salman", "" ], [ "Yanikomeroglu", "Halim", "" ] ]
new_dataset
0.956493
1802.01273
S Ritika
S Ritika, Dattaraj Rao
Face recognition for monitoring operator shift in railways
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Train Pilot is a very tedious and stressful job. Pilots must be vigilant at all times and it is easy for them to lose track of time of shift. In countries like the USA the pilots are mandated by law to adhere to 8 hour shifts. If they exceed 8 hours of shift the railroads may be penalized for over-tiring their drivers. The problem happens when the 8 hour shift may end in middle of a journey. In such case, the new drivers must be moved to the location locomotive is operating for shift change. Hence accurate monitoring of drivers during their shift and making sure the shifts are scheduled correctly is very important for railroads. Here we propose an automated camera system that uses cameras mounted inside locomotive cabs to continuously record video feeds. These feeds are analyzed in real time to detect the face of the driver and recognize the driver using state-of-the-art deep learning techniques. The outcome is increased safety of train pilots. Cameras continuously capture video from inside the cab which is stored on an on board data acquisition device. Using advanced computer vision and deep learning techniques the videos are analyzed at regular intervals to detect presence of the pilot and identify the pilot. Using a time based analysis, it is identified for how long that shift has been active. If this time exceeds allocated shift time an alert is sent to the dispatch to adjust shift hours.
[ { "version": "v1", "created": "Mon, 5 Feb 2018 05:52:51 GMT" }, { "version": "v2", "created": "Mon, 21 May 2018 04:31:05 GMT" } ]
2018-05-22T00:00:00
[ [ "Ritika", "S", "" ], [ "Rao", "Dattaraj", "" ] ]
new_dataset
0.996839
1802.05412
Chan Woo Kim
Chan Woo Kim
NtMalDetect: A Machine Learning Approach to Malware Detection Using Native API System Calls
8 pages, Intel International Science and Engineering Fair Project - SOFT006T
null
null
null
cs.CR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As computing systems become increasingly advanced and as users increasingly engage themselves in technology, security has never been a greater concern. In malware detection, static analysis, the method of analyzing potentially malicious files, has been the prominent approach. This approach, however, quickly falls short as malicious programs become more advanced and adopt the capabilities of obfuscating its binaries to execute the same malicious functions, making static analysis extremely difficult for newer variants. The approach assessed in this paper is a novel dynamic malware analysis method, which may generalize better than static analysis to newer variants. Inspired by recent successes in Natural Language Processing (NLP), widely used document classification techniques were assessed in detecting malware by doing such analysis on system calls, which contain useful information about the operation of a program as requests that the program makes of the kernel. Features considered are extracted from system call traces of benign and malicious programs, and the task to classify these traces is treated as a binary document classification task of system call traces. The system call traces were processed to remove the parameters to only leave the system call function names. The features were grouped into various n-grams and weighted with Term Frequency-Inverse Document Frequency. This paper shows that Linear Support Vector Machines (SVM) optimized by Stochastic Gradient Descent and the traditional Coordinate Descent on the Wolfe Dual form of the SVM are effective in this approach, achieving a highest of 96% accuracy with 95% recall score. Additional contributions include the identification of significant system call sequences that could be avenues for further research.
[ { "version": "v1", "created": "Thu, 15 Feb 2018 05:34:21 GMT" }, { "version": "v2", "created": "Sat, 19 May 2018 19:27:36 GMT" } ]
2018-05-22T00:00:00
[ [ "Kim", "Chan Woo", "" ] ]
new_dataset
0.97509
1803.03042
Jonas Lef\`evre
Armando Casta\~neda (1), Jonas Lef\`evre (2) and Amitabh Trehan (2) ((1) Instituto de Matem\'aticas, UNAM, Mexico,(2) Computer Science, Loughborough University, UK)
Some Problems in Compact Message Passing
22 pages, 5 figures, submitted to DISC 2018
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper seeks to address the question of designing distributed algorithms for the setting of compact memory i.e. sublinear bits working memory for arbitrary connected networks. The nodes in our networks may have much lower internal memory as compared to the number of their possible neighbours implying that a node may not be able to store all the IDs of its neighbours. These algorithms are useful for large networks of small devices such as the Internet of Things, for wireless or ad-hoc networks, and, in general, as memory efficient algorithms. We introduce the Compact Message Passing (CMP) model; an extension of the standard message passing model considered at a finer granularity where a node can interleave reads and writes with internal computations, using a port only once in a round. The interleaving is required for meaningful computations due to the low memory requirement and is akin to a distributed network with nodes executing streaming algorithms. Note that the internal memory size upper bounds the message sizes and hence e.g. for log-memory, the model is weaker than the Congest model; for such models our algorithms will work directly too. We present early results in the CMP model for nodes with log^2-memory. We introduce the concepts of local compact functions and compact protocols and give solutions for some classic distributed problems (leader election, tree constructions and traversals). We build on these to solve the open problem of compact preprocessing for the compact self-healing routing algorithm CompactFTZ posed in Compact Routing Messages in Self-Healing Trees (TCS2017) by designing local compact functions for finding particular subtrees of labeled binary trees. Hence, we introduce the first fully compact self-healing routing algorithm. We also give independent fully compact versions of the Forgiving Tree [PODC08] and Thorup-Zwick's tree based compact routing [SPAA01].
[ { "version": "v1", "created": "Thu, 8 Mar 2018 11:20:08 GMT" }, { "version": "v2", "created": "Mon, 21 May 2018 12:44:51 GMT" } ]
2018-05-22T00:00:00
[ [ "Castañeda", "Armando", "" ], [ "Lefèvre", "Jonas", "" ], [ "Trehan", "Amitabh", "" ] ]
new_dataset
0.95997
1803.03917
Will Monroe
Will Monroe, Jennifer Hu, Andrew Jong, Christopher Potts
Generating Bilingual Pragmatic Color References
11 pages including appendices, 7 figures, 3 tables. NAACL-HLT 2018
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Contextual influences on language often exhibit substantial cross-lingual regularities; for example, we are more verbose in situations that require finer distinctions. However, these regularities are sometimes obscured by semantic and syntactic differences. Using a newly-collected dataset of color reference games in Mandarin Chinese (which we release to the public), we confirm that a variety of constructions display the same sensitivity to contextual difficulty in Chinese and English. We then show that a neural speaker agent trained on bilingual data with a simple multitask learning approach displays more human-like patterns of context dependence and is more pragmatically informative than its monolingual Chinese counterpart. Moreover, this is not at the expense of language-specific semantic understanding: the resulting speaker model learns the different basic color term systems of English and Chinese (with noteworthy cross-lingual influences), and it can identify synonyms between the two languages using vector analogy operations on its output layer, despite having no exposure to parallel data.
[ { "version": "v1", "created": "Sun, 11 Mar 2018 07:05:50 GMT" }, { "version": "v2", "created": "Sat, 19 May 2018 00:56:23 GMT" } ]
2018-05-22T00:00:00
[ [ "Monroe", "Will", "" ], [ "Hu", "Jennifer", "" ], [ "Jong", "Andrew", "" ], [ "Potts", "Christopher", "" ] ]
new_dataset
0.999211
1805.00889
Justin Salamon
Juan Pablo Bello, Claudio Silva, Oded Nov, R. Luke DuBois, Anish Arora, Justin Salamon, Charles Mydlarz, Harish Doraiswamy
SONYC: A System for the Monitoring, Analysis and Mitigation of Urban Noise Pollution
Accepted May 2018, Communications of the ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record will be published in Communications of the ACM
null
null
null
cs.SD cs.CY cs.HC eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the Sounds of New York City (SONYC) project, a smart cities initiative focused on developing a cyber-physical system for the monitoring, analysis and mitigation of urban noise pollution. Noise pollution is one of the topmost quality of life issues for urban residents in the U.S. with proven effects on health, education, the economy, and the environment. Yet, most cities lack the resources to continuously monitor noise and understand the contribution of individual sources, the tools to analyze patterns of noise pollution at city-scale, and the means to empower city agencies to take effective, data-driven action for noise mitigation. The SONYC project advances novel technological and socio-technical solutions that help address these needs. SONYC includes a distributed network of both sensors and people for large-scale noise monitoring. The sensors use low-cost, low-power technology, and cutting-edge machine listening techniques, to produce calibrated acoustic measurements and recognize individual sound sources in real time. Citizen science methods are used to help urban residents connect to city agencies and each other, understand their noise footprint, and facilitate reporting and self-regulation. Crucially, SONYC utilizes big data solutions to analyze, retrieve and visualize information from sensors and citizens, creating a comprehensive acoustic model of the city that can be used to identify significant patterns of noise pollution. These data can be used to drive the strategic application of noise code enforcement by city agencies to optimize the reduction of noise pollution. The entire system, integrating cyber, physical and social infrastructure, forms a closed loop of continuous sensing, analysis and actuation on the environment. SONYC provides a blueprint for the mitigation of noise pollution that can potentially be applied to other cities in the US and abroad.
[ { "version": "v1", "created": "Wed, 2 May 2018 16:07:39 GMT" }, { "version": "v2", "created": "Fri, 18 May 2018 19:23:01 GMT" } ]
2018-05-22T00:00:00
[ [ "Bello", "Juan Pablo", "" ], [ "Silva", "Claudio", "" ], [ "Nov", "Oded", "" ], [ "DuBois", "R. Luke", "" ], [ "Arora", "Anish", "" ], [ "Salamon", "Justin", "" ], [ "Mydlarz", "Charles", "" ], [ "Doraiswamy", "Harish", "" ] ]
new_dataset
0.998134
1805.07470
Stephen McAleer
Stephen McAleer, Forest Agostinelli, Alexander Shmakov, Pierre Baldi
Solving the Rubik's Cube Without Human Knowledge
First three authors contributed equally. Submitted to NIPS 2018
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision. Recently, deep reinforcement learning algorithms combined with self-play have achieved superhuman proficiency in Go, Chess, and Shogi without human data or domain knowledge. In these environments, a reward is always received at the end of the game; however, for many combinatorial optimization environments, rewards are sparse and episodes are not guaranteed to terminate. We introduce Autodidactic Iteration: a novel reinforcement learning algorithm that is able to teach itself how to solve the Rubik's Cube with no human assistance. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves -- less than or equal to solvers that employ human domain knowledge.
[ { "version": "v1", "created": "Fri, 18 May 2018 23:07:31 GMT" } ]
2018-05-22T00:00:00
[ [ "McAleer", "Stephen", "" ], [ "Agostinelli", "Forest", "" ], [ "Shmakov", "Alexander", "" ], [ "Baldi", "Pierre", "" ] ]
new_dataset
0.991623
1805.07486
Ke Feng
Ke Feng and Martin Haenggi
A Tunable Base Station Cooperation Scheme for Poisson Cellular Networks
6 pages, 6 figures, 52nd Annual Conference on Information Sciences and Systems
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a tunable location-dependent base station (BS) cooperation scheme by partitioning the plane into three regions: the cell centers, cell edges and cell corners. The area fraction of each region is tuned by the cooperation level $\gamma$ ranging from 0 to 1. Depending on the region a user resides in, he/she receives no cooperation, two-BS cooperation or three-BS cooperation. Here, we use a Poisson point process (PPP) to model BS locations and study a non-coherent joint transmission scheme, $\textit{i.e.}$, selected BSs jointly serve one user in the absence of channel state information (CSI). For the proposed scheme, we examine its performance as a function of the cooperation level using tools from stochastic geometry. We derive an analytical expression for the signal-to-interference ratio (SIR) distribution and its approximation based on the asymptotic SIR gain, along with the characterization of the normalized spectral efficiency per BS. Our result suggests that the proposed scheme with a moderate cooperation level can improve the SIR performance while maintaining the normalized spectral efficiency.
[ { "version": "v1", "created": "Sat, 19 May 2018 01:00:56 GMT" } ]
2018-05-22T00:00:00
[ [ "Feng", "Ke", "" ], [ "Haenggi", "Martin", "" ] ]
new_dataset
0.993033
1805.07541
Yu Zhang
Yu Zhang, Ying Wei, Qiang Yang
Learning to Multitask
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multitask learning has shown promising performance in many applications and many multitask models have been proposed. In order to identify an effective multitask model for a given multitask problem, we propose a learning framework called learning to multitask (L2MT). To achieve this goal, L2MT exploits historical multitask experience, which is organized as a training set consisting of several tuples, each of which contains a multitask problem with multiple tasks, a multitask model, and the relative test error. Based on such a training set, L2MT first uses a proposed layerwise graph neural network to learn task embeddings for all the tasks in a multitask problem and then learns an estimation function to estimate the relative test error based on task embeddings and the representation of the multitask model based on a unified formulation. Given a new multitask problem, the estimation function is used to identify a suitable multitask model. Experiments on benchmark datasets show the effectiveness of the proposed L2MT framework.
[ { "version": "v1", "created": "Sat, 19 May 2018 08:07:30 GMT" } ]
2018-05-22T00:00:00
[ [ "Zhang", "Yu", "" ], [ "Wei", "Ying", "" ], [ "Yang", "Qiang", "" ] ]
new_dataset
0.991071
1805.07565
Saeid Pourroostaei Ardakani
Saeid Pourroostaei Ardakani
ACR: a cluster-based routing protocol for VANET
15 pages, 6 figures
null
10.5121/ijwmn.2018.10204
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering is a technique used in network routing to enhance performance and conserve network resources. This paper presents a cluster-based routing protocol for VANET utilizing a new addressing scheme in which each node gets an address according to its mobility pattern. The Hamming distance technique is then used to partition the network in an address-centric manner. The simulation results show that this protocol enhances routing reachability while reducing routing end-to-end delay and received traffic compared with two benchmarks, namely AODV and DSDV.
[ { "version": "v1", "created": "Sat, 19 May 2018 10:13:21 GMT" } ]
2018-05-22T00:00:00
[ [ "Ardakani", "Saeid Pourroostaei", "" ] ]
new_dataset
0.993529
1805.07566
Yunus Can Bilge
Mehmet Kerim Yucel, Yunus Can Bilge, Oguzhan Oguz, Nazli Ikizler-Cinbis, Pinar Duygulu, Ramazan Gokberk Cinbis
Wildest Faces: Face Detection and Recognition in Violent Settings
Submitted to BMVC 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the introduction of large-scale datasets and deep learning models capable of learning complex representations, impressive advances have emerged in face detection and recognition tasks. Despite such advances, existing datasets do not capture the difficulty of face recognition in the wildest scenarios, such as hostile disputes or fights. Furthermore, existing datasets do not represent completely unconstrained cases of low resolution, high blur and large pose/occlusion variances. To this end, we introduce the Wildest Faces dataset, which focuses on such adverse effects through violent scenes. The dataset consists of an extensive set of violent scenes of celebrities from movies. Our experimental results demonstrate that state-of-the-art techniques are not well-suited for violent scenes, and therefore, Wildest Faces is likely to stir further interest in face detection and recognition research.
[ { "version": "v1", "created": "Sat, 19 May 2018 10:46:24 GMT" } ]
2018-05-22T00:00:00
[ [ "Yucel", "Mehmet Kerim", "" ], [ "Bilge", "Yunus Can", "" ], [ "Oguz", "Oguzhan", "" ], [ "Ikizler-Cinbis", "Nazli", "" ], [ "Duygulu", "Pinar", "" ], [ "Cinbis", "Ramazan Gokberk", "" ] ]
new_dataset
0.999865
1805.07667
Floriana Gargiulo
Ilaria Bertazzi, Sylvie Huet, Guillaume Deffuant, Floriana Gargiulo
The anatomy of a Web of Trust: the Bitcoin-OTC market
null
null
null
null
cs.CY cs.CR cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bitcoin-otc is a peer-to-peer (over-the-counter) marketplace for trading with the bitcoin crypto-currency. To mitigate the risks of the unsupervised p2p exchanges, the establishment of a reliable reputation system is needed: for this reason, a web of trust is implemented on the website. The availability of the full history of user interaction data makes this dataset a unique playground for studying reputation dynamics through others' evaluations. We analyze the structure and the dynamics of this web of trust with a multilayer network approach, distinguishing the rewarding and the punitive behaviors. We show that the rewarding and the punitive behavior have similar emergent topological properties (apart from the clustering coefficient being higher for the rewarding layer) and that the resultant reputation originates from the complex interaction of the more regular behaviors on the layers. We show which behaviors correlate with reputation (i.e., the rewarding activity) and which do not (i.e., the punitive activity). We show that the network activity presents bursty behaviors on both layers and that the inequality reaches a steady value (higher for the rewarding layer) as the network evolves. Finally, we characterize the reputation trajectories and identify prototypical behaviors associated with three classes of users: trustworthy, untrusted and controversial.
[ { "version": "v1", "created": "Sat, 19 May 2018 22:27:23 GMT" } ]
2018-05-22T00:00:00
[ [ "Bertazzi", "Ilaria", "" ], [ "Huet", "Sylvie", "" ], [ "Deffuant", "Guillaume", "" ], [ "Gargiulo", "Floriana", "" ] ]
new_dataset
0.999445
1805.07824
Javier \'Alvez
Javier \'Alvez and Itziar Gonzalez-Dios and German Rigau
Validating WordNet Meronymy Relations using Adimen-SUMO
14 pages, 10 tables
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we report on the practical application of a novel approach for validating the knowledge of WordNet using Adimen-SUMO. In particular, this paper focuses on cross-checking the WordNet meronymy relations against the knowledge encoded in Adimen-SUMO. Our validation approach tests a large set of competency questions (CQs), which are derived (semi-)automatically from the knowledge encoded in WordNet, SUMO and their mapping, by applying efficient first-order logic automated theorem provers. Unfortunately, despite being created manually, these knowledge resources are not free of errors and discrepancies. As a consequence, some of the resulting CQs are not plausible according to the knowledge included in Adimen-SUMO. Thus, we first focus on (semi-)automatically improving the alignment between these knowledge resources, and second, we perform a minimal set of corrections in the ontology. Our aim is to minimize the manual effort required for an extensive validation process. We report on the strategies followed, the changes made, the effort needed and its impact when validating the WordNet meronymy relations using improved versions of the mapping and the ontology. Based on the new results, we discuss the implications of the appropriate corrections and the need for future enhancements.
[ { "version": "v1", "created": "Sun, 20 May 2018 20:50:17 GMT" } ]
2018-05-22T00:00:00
[ [ "Álvez", "Javier", "" ], [ "Gonzalez-Dios", "Itziar", "" ], [ "Rigau", "German", "" ] ]
new_dataset
0.952366
1805.07907
Joy Bose
Kushal Singla, Joy Bose
IoT2Vec: Identification of Similar IoT Devices via Activity Footprints
5 pages, 4 figures
null
null
null
cs.HC cs.AI cs.NE cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a smart home or smart office environment with a number of IoT devices connected and passing data between one another. The footprints of the data transferred can provide valuable information about the devices, which can be used to (a) identify the IoT devices and (b) in case of failure, to identify the correct replacements for these devices. In this paper, we generate the embeddings for IoT devices in a smart home using Word2Vec, and explore the possibility of having a similar concept for IoT devices, aka IoT2Vec. These embeddings can be used in a number of ways, such as to find similar devices in an IoT device store, or as a signature of each type of IoT device. We show results of a feasibility study on the CASAS dataset of IoT device activity logs, using our method to identify the patterns in embeddings of various types of IoT devices in a household.
[ { "version": "v1", "created": "Mon, 21 May 2018 06:31:52 GMT" } ]
2018-05-22T00:00:00
[ [ "Singla", "Kushal", "" ], [ "Bose", "Joy", "" ] ]
new_dataset
0.980069
1805.07952
Deniz Yuret
Ozan Arkan Can, Deniz Yuret
A new dataset and model for learning to understand navigational instructions
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a state-of-the-art model and introduce a new dataset for grounded language learning. Our goal is to develop a model that can learn to follow new instructions given prior instruction-perception-action examples. We based our work on the SAIL dataset which consists of navigational instructions and actions in a maze-like environment. The new model we propose achieves the best results to date on the SAIL dataset by using an improved perceptual component that can represent relative positions of objects. We also analyze the problems with the SAIL dataset regarding its size and balance. We argue that performance on a small, fixed-size dataset is no longer a good measure to differentiate state-of-the-art models. We introduce SAILx, a synthetic dataset generator, and perform experiments where the size and balance of the dataset are controlled.
[ { "version": "v1", "created": "Mon, 21 May 2018 09:01:31 GMT" } ]
2018-05-22T00:00:00
[ [ "Can", "Ozan Arkan", "" ], [ "Yuret", "Deniz", "" ] ]
new_dataset
0.999594
1805.08009
Wenyan Yang
Wenyan Yang, Yanlin Qian, Francesco Cricri, Lixin Fan, Joni-Kristian Kamarainen
Object Detection in Equirectangular Panorama
6 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a high-resolution equirectangular panorama (360-degree, virtual reality) dataset for object detection and propose a multi-projection variant of the YOLO detector. The main challenges with equirectangular panorama images are i) the lack of annotated training data, ii) high-resolution imagery and iii) severe geometric distortions of objects near the panorama projection poles. In this work, we solve these challenges by i) using training examples available in the "conventional datasets" (ImageNet and COCO), ii) employing only low-resolution images that require only moderate GPU computing power and memory, and iii) our multi-projection YOLO, which handles projection distortions by making multiple stereographic sub-projections. In our experiments, YOLO outperforms the other state-of-the-art detector, Faster RCNN, and our multi-projection YOLO achieves the best accuracy with low-resolution input.
[ { "version": "v1", "created": "Mon, 21 May 2018 12:11:38 GMT" } ]
2018-05-22T00:00:00
[ [ "Yang", "Wenyan", "" ], [ "Qian", "Yanlin", "" ], [ "Cricri", "Francesco", "" ], [ "Fan", "Lixin", "" ], [ "Kamarainen", "Joni-Kristian", "" ] ]
new_dataset
0.998172
1805.08069
Rui Wang
Rui Wang, Olivier Renaudin, C. Umit Bas, Seun Sangodoyin, Andreas F. Molisch
On channel sounding with switched arrays in fast time-varying channels
11 pages, submitted to IEEE Transaction on Wireless Communications. arXiv admin note: text overlap with arXiv:1805.06611
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-division multiplexed (TDM) channel sounders, in which a single RF chain is connected sequentially via an electronic switch to different elements of an array, are widely used for the measurement of double-directional/MIMO propagation channels. This paper investigates the impact of array switching patterns on the accuracy of parameter estimation of multipath components (MPC) for a time-division multiplexed (TDM) channel sounder. The commonly-used sequential (uniform) switching pattern poses a fundamental limit on the number of antennas that a TDM channel sounder can employ in fast time-varying channels. We thus aim to design improved patterns that relax these constraints. To characterize the performance, we introduce a novel spatio-temporal ambiguity function, which can handle the non-idealities of real-world arrays. We formulate the sequence design problem as an optimization problem and propose an algorithm based on simulated annealing to obtain the optimal sequence. As a result we can extend the estimation range of Doppler shifts by eliminating ambiguities in parameter estimation. We show through Monte Carlo simulations that the root mean square errors of both direction of departure and Doppler are reduced significantly with the new switching sequence. Results are also verified with actual vehicle-to-vehicle (V2V) channel measurements.
[ { "version": "v1", "created": "Fri, 18 May 2018 04:27:55 GMT" } ]
2018-05-22T00:00:00
[ [ "Wang", "Rui", "" ], [ "Renaudin", "Olivier", "" ], [ "Bas", "C. Umit", "" ], [ "Sangodoyin", "Seun", "" ], [ "Molisch", "Andreas F.", "" ] ]
new_dataset
0.998486
1805.08144
Manish Gupta
Krishna Gopal Benerjee and Manish K Gupta
On Universally Good Flower Codes
18 pages, 2 Figures, submitted to SETA 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a Distributed Storage System (DSS), \textit{Fractional Repetition} (FR) codes are a class in which replicas of encoded data packets are stored on distributed chunk servers, where the encoding is done using a Maximum Distance Separable (MDS) code. FR codes allow for exact uncoded repair with minimum repair bandwidth. In this paper, FR codes are constructed using finite binary sequences, and the condition on such sequences for the resulting FR codes to be universally good is derived. For some sequences, the universally good FR codes are explored.
[ { "version": "v1", "created": "Mon, 21 May 2018 15:52:25 GMT" } ]
2018-05-22T00:00:00
[ [ "Benerjee", "Krishna Gopal", "" ], [ "Gupta", "Manish K", "" ] ]
new_dataset
0.993894
1805.08162
Yogesh Rawat
Kevin Duarte, Yogesh S Rawat, Mubarak Shah
VideoCapsuleNet: A Simplified Network for Action Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent advances in Deep Convolutional Neural Networks (DCNNs) have shown extremely good results for video human action classification; however, action detection is still a challenging problem. Current action detection approaches follow a complex pipeline which involves multiple tasks such as tube proposals, optical flow, and tube classification. In this work, we present a more elegant solution for action detection based on the recently developed capsule network. We propose a 3D capsule network for videos, called VideoCapsuleNet: a unified network for action detection which can jointly perform pixel-wise action segmentation along with action classification. The proposed network is a generalization of the capsule network from 2D to 3D, which takes a sequence of video frames as input. The 3D generalization drastically increases the number of capsules in the network, making capsule routing computationally expensive. We introduce capsule-pooling in the convolutional capsule layer to address this issue, which makes the voting algorithm tractable. The routing-by-agreement in the network inherently models the action representations and various action characteristics are captured by the predicted capsules. This inspired us to utilize the capsules for action localization, and the class-specific capsules predicted by the network are used to determine a pixel-wise localization of actions. The localization is further improved by parameterized skip connections with the convolutional capsule layers, and the network is trained end-to-end with a classification as well as a localization loss. The proposed network achieves state-of-the-art performance on multiple action detection datasets including UCF-Sports, J-HMDB, and UCF-101 (24 classes) with an impressive ~20% improvement on UCF-101 and ~15% improvement on J-HMDB in terms of v-mAP scores.
[ { "version": "v1", "created": "Mon, 21 May 2018 16:28:47 GMT" } ]
2018-05-22T00:00:00
[ [ "Duarte", "Kevin", "" ], [ "Rawat", "Yogesh S", "" ], [ "Shah", "Mubarak", "" ] ]
new_dataset
0.998867
1610.06924
KrishnaKanth Nakka
Krishna Kanth Nakka
Automatic Image De-fencing System
Master Thesis, EE IIT KGP, May 2015. arXiv admin note: text overlap with arXiv:1405.3531 by other authors
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tourists and wildlife photographers are often hindered in capturing their cherished images or videos by a fence that limits accessibility to the scene of interest. The situation has been exacerbated by growing concerns of security at public places, and a need exists for a tool that can post-process such fenced videos to produce a de-fenced image. There are several challenges in this problem; we identify them as robust detection of fences/occlusions, estimating pixel motion of background scenes, and filling in the fences/occlusions by utilizing information in multiple frames of the input video. In this work, we aim to build an automatic post-processing tool that can efficiently rid the input video of occlusion artifacts like fences. Our work is distinguished by two major contributions. The first is the introduction of a learning-based technique to detect fence patterns with complicated backgrounds. The second is the formulation of an objective function and its minimization through loopy belief propagation to fill in the fence pixels. We observe that grids of Histogram of Oriented Gradients descriptors with a Support Vector Machine classifier significantly outperform the detection accuracy of texels in a lattice. We present results of experiments using several real-world videos to demonstrate the effectiveness of the proposed fence detection and de-fencing algorithm.
[ { "version": "v1", "created": "Fri, 21 Oct 2016 19:59:41 GMT" } ]
2018-05-21T00:00:00
[ [ "Nakka", "Krishna Kanth", "" ] ]
new_dataset
0.966939
1801.09515
Maurice Herlihy
Maurice Herlihy
Atomic Cross-Chain Swaps
To appear, PODC 2018
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
An atomic cross-chain swap is a distributed coordination task where multiple parties exchange assets across multiple blockchains, for example, trading bitcoin for ether. An atomic swap protocol guarantees (1) if all parties conform to the protocol, then all swaps take place, (2) if some coalition deviates from the protocol, then no conforming party ends up worse off, and (3) no coalition has an incentive to deviate from the protocol. A cross-chain swap is modeled as a directed graph ${\cal D}$, whose vertexes are parties and whose arcs are proposed asset transfers. For any pair $({\cal D},L)$, where ${\cal D} = (V,A)$ is a strongly-connected directed graph and $L \subset V$ a feedback vertex set for ${\cal D}$, we give an atomic cross-chain swap protocol for ${\cal D}$, using a form of hashed timelock contracts, where the vertexes in $L$ generate the hashlocked secrets. We show that no such protocol is possible if ${\cal D}$ is not strongly connected, or if ${\cal D}$ is strongly connected but $L$ is not a feedback vertex set. The protocol has time complexity $O(diam({\cal D}))$ and space complexity (bits stored on all blockchains) $O(|A|^2)$.
[ { "version": "v1", "created": "Mon, 29 Jan 2018 14:10:22 GMT" }, { "version": "v2", "created": "Thu, 12 Apr 2018 18:04:27 GMT" }, { "version": "v3", "created": "Tue, 8 May 2018 01:30:49 GMT" }, { "version": "v4", "created": "Fri, 18 May 2018 11:54:44 GMT" } ]
2018-05-21T00:00:00
[ [ "Herlihy", "Maurice", "" ] ]
new_dataset
0.996679
1803.02232
Suttinee Sawadsitang
Suttinee Sawadsitang, Siwei Jiang, Dusit Niyato, Ping Wang
Optimal Stochastic Package Delivery Planning with Deadline: A Cardinality Minimization in Routing
7 pages, 6 figures, Vehicular Technology Conference (VTC fall), 2017 IEEE 86th
null
10.1109/VTCFall.2017.8288239
null
cs.AI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Vehicle Routing Problem with Private fleet and common Carrier (VRPPC) has been proposed to help a supplier manage package delivery services from a single depot to multiple customers. Most existing VRPPC works consider deterministic parameters, which may not be practical, and uncertainty has to be taken into account. In this paper, we propose Optimal Stochastic Delivery Planning with Deadline (ODPD) to help a supplier plan and optimize the package delivery. The aim of ODPD is to service all customers within a given deadline while considering the randomness in customer demands and traveling time. We formulate the ODPD as a stochastic integer program, and use the cardinality minimization approach for calculating the deadline violation probability. To accelerate computation, the L-shaped decomposition method is adopted. We conduct extensive performance evaluation based on real customer locations and traveling times from Google Maps.
[ { "version": "v1", "created": "Wed, 28 Feb 2018 02:01:43 GMT" } ]
2018-05-21T00:00:00
[ [ "Sawadsitang", "Suttinee", "" ], [ "Jiang", "Siwei", "" ], [ "Niyato", "Dusit", "" ], [ "Wang", "Ping", "" ] ]
new_dataset
0.974822
1805.06911
Nir Shlezinger
Nir Shlezinger, Roee Shaked, and Ron Dabora
On the Capacity of MIMO Broadband Power Line Communications Channels
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Communications over power lines in the frequency range above 2 MHz, commonly referred to as broadband (BB) power line communications (PLC), has been the focus of increasing research attention and standardization efforts in recent years. BB-PLC channels are characterized by a dominant colored non-Gaussian additive noise, as well as by periodic variations of the channel impulse response and the noise statistics. In this work we study the fundamental rate limits for BB-PLC channels by bounding their capacity while accounting for the unique properties of these channels. We obtain explicit expressions for the derived bounds for several BB-PLC noise models, and illustrate the resulting fundamental limits in a numerical analysis.
[ { "version": "v1", "created": "Thu, 17 May 2018 18:09:18 GMT" } ]
2018-05-21T00:00:00
[ [ "Shlezinger", "Nir", "" ], [ "Shaked", "Roee", "" ], [ "Dabora", "Ron", "" ] ]
new_dataset
0.991638
1805.06975
Peter Clark
Bhavana Dalvi Mishra, Lifu Huang, Niket Tandon, Wen-tau Yih, Peter Clark
Tracking State Changes in Procedural Text: A Challenge Dataset and Models for Process Paragraph Comprehension
In Proc. NAACL'2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new dataset and models for comprehending paragraphs about processes (e.g., photosynthesis), an important genre of text describing a dynamic world. The new dataset, ProPara, is the first to contain natural (rather than machine-generated) text about a changing world along with a full annotation of entity states (location and existence) during those changes (81k datapoints). The end-task, tracking the location and existence of entities through the text, is challenging because the causal effects of actions are often implicit and need to be inferred. We find that previous models that have worked well on synthetic data achieve only mediocre performance on ProPara, and introduce two new neural models that exploit alternative mechanisms for state prediction, in particular using LSTM input encoding and span prediction. The new models improve accuracy by up to 19%. The dataset and models are available to the community at http://data.allenai.org/propara.
[ { "version": "v1", "created": "Thu, 17 May 2018 21:42:04 GMT" } ]
2018-05-21T00:00:00
[ [ "Mishra", "Bhavana Dalvi", "" ], [ "Huang", "Lifu", "" ], [ "Tandon", "Niket", "" ], [ "Yih", "Wen-tau", "" ], [ "Clark", "Peter", "" ] ]
new_dataset
0.998608
1805.07069
Mahdi Shaghaghi
Mahdi Shaghaghi, Raviraj S. Adve, Zhen Ding
Multifunction Cognitive Radar Task Scheduling Using Monte Carlo Tree Search and Policy Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A modern radar may be designed to perform multiple functions, such as surveillance, tracking, and fire control. Each function requires the radar to execute a number of transmit-receive tasks. A radar resource management (RRM) module makes decisions on parameter selection, prioritization, and scheduling of such tasks. RRM becomes especially challenging in overload situations, where some tasks may need to be delayed or even dropped. In general, task scheduling is an NP-hard problem. In this work, we develop the branch-and-bound (B&B) method which obtains the optimal solution but at exponential computational complexity. On the other hand, heuristic methods have low complexity but provide relatively poor performance. We resort to machine learning-based techniques to address this issue; specifically we propose an approximate algorithm based on the Monte Carlo tree search method. Along with using bound and dominance rules to eliminate nodes from the search tree, we use a policy network to help to reduce the width of the search. Such a network can be trained using solutions obtained by running the B&B method offline on problems with feasible complexity. We show that the proposed method provides near-optimal performance, but with computational complexity orders of magnitude smaller than the B&B algorithm.
[ { "version": "v1", "created": "Fri, 18 May 2018 06:58:16 GMT" } ]
2018-05-21T00:00:00
[ [ "Shaghaghi", "Mahdi", "" ], [ "Adve", "Raviraj S.", "" ], [ "Ding", "Zhen", "" ] ]
new_dataset
0.995494
1805.07078
Peihong Yuan
Peihong Yuan, Fabian Steiner, Tobias Prinz, Georg B\"ocherer
Flexible IR-HARQ Scheme for Polar-Coded Modulation
6 pages, accepted to 2018 IEEE Wireless Communications and Networking Conference Workshops (WCNCW): Polar Coding for Future Networks: Theory and Practice, presented on 15. April
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A flexible incremental redundancy hybrid automated repeat request (IR-HARQ) scheme for polar codes is proposed based on dynamically frozen bits and the quasi-uniform puncturing (QUP) algorithm. The length of each transmission is not restricted to a power of two. It is applicable for the binary input additive white Gaussian noise (biAWGN) channel as well as higher-order modulation. Simulation results show that this scheme has similar performance as directly designed polar codes with QUP and outperforms LTE-turbo and 5G-LDPC codes with IR-HARQ.
[ { "version": "v1", "created": "Fri, 18 May 2018 07:30:44 GMT" } ]
2018-05-21T00:00:00
[ [ "Yuan", "Peihong", "" ], [ "Steiner", "Fabian", "" ], [ "Prinz", "Tobias", "" ], [ "Böcherer", "Georg", "" ] ]
new_dataset
0.998055
1805.07182
Shuowen Zhang
Shuowen Zhang, Yong Zeng, Rui Zhang
Cellular-Enabled UAV Communication: A Connectivity-Constrained Trajectory Optimization Perspective
Invited paper, submitted for publication, 55 pages, 11 figures
null
null
null
cs.IT cs.SY math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrating the unmanned aerial vehicles (UAVs) into the cellular network is envisioned to be a promising technology to significantly enhance the communication performance of both UAVs and existing terrestrial users. In this paper, we first provide an overview on the two main paradigms in cellular UAV communications, i.e., cellular-enabled UAV communication with UAVs as new aerial users served by the ground base stations (GBSs), and UAV-assisted cellular communication with UAVs as new aerial communication platforms serving the terrestrial users. Then, we focus on the former paradigm and study a new UAV trajectory design problem subject to practical communication connectivity constraints with the GBSs. Specifically, we consider a cellular-connected UAV in the mission of flying from an initial location to a final location, during which it needs to maintain reliable communication with the cellular network by associating with one GBS at each time instant. We aim to minimize the UAV's mission completion time by optimizing its trajectory, subject to a quality-of-connectivity constraint of the GBS-UAV link specified by a minimum receive signal-to-noise ratio target. To tackle this challenging non-convex problem, we first propose a graph connectivity based method to verify its feasibility. Next, by examining the GBS-UAV association sequence over time, we obtain useful structural results on the optimal UAV trajectory, based on which two efficient methods are proposed to find high-quality approximate trajectory solutions by leveraging graph theory and convex optimization techniques. The proposed methods are analytically shown to be capable of achieving a flexible trade-off between complexity and performance, and yielding a solution that is arbitrarily close to the optimal solution in polynomial time. Finally, we make concluding remarks and point out some promising directions for future work.
[ { "version": "v1", "created": "Fri, 18 May 2018 12:57:30 GMT" } ]
2018-05-21T00:00:00
[ [ "Zhang", "Shuowen", "" ], [ "Zeng", "Yong", "" ], [ "Zhang", "Rui", "" ] ]
new_dataset
0.995403
1805.07256
Petr Svarny
Petr \v{S}varn\'y and Mat\v{e}j Hoffmann
Safety of human-robot interaction through tactile sensors and peripersonal space representations
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Human-robot collaboration, including close physical human-robot interaction (pHRI), is a current trend in both industry and science. The safety guidelines prescribe two modes of safety: (i) power and force limitation and (ii) speed and separation monitoring. We examine the potential of robots equipped with artificial sensitive skin and a protective safety zone around it (peripersonal space) for safe pHRI.
[ { "version": "v1", "created": "Fri, 18 May 2018 14:55:08 GMT" } ]
2018-05-21T00:00:00
[ [ "Švarný", "Petr", "" ], [ "Hoffmann", "Matěj", "" ] ]
new_dataset
0.995713