Dataset schema (16 columns):
- id: string (length 9–10)
- submitter: string (length 2–52)
- authors: string (length 4–6.51k)
- title: string (length 4–246)
- comments: string (length 1–523)
- journal-ref: string (length 4–345)
- doi: string (length 11–120)
- report-no: string (length 2–243)
- categories: string (length 5–98)
- license: string (9 classes)
- abstract: string (length 33–3.33k)
- versions: list
- update_date: timestamp[s]
- authors_parsed: list
- prediction: string (1 class)
- probability: float64 (0.95–1)
id: 1806.11342
submitter: Qing Zhou
authors: Qing Zhou and Nan Liu
title: The Economics of Video Websites with Membership-Advertising Mode in Wireless Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
In this paper, we consider a novel business model of video websites via a Membership-Advertising Mode in wireless networks, where the video websites provide three video services for mobile users: VIP-Member, Regular-Member, and Non-Member. The VIP-Member (Regular-Member) service provides the highest-level (middle-level) quality, advertising-free video service at a high (low) price, while the Non-Member service provides the lowest-level quality, advertising-supported video service for free. Meanwhile, the video websites sell their advertising spaces to the advertiser to create extra revenue. We formulate the interactions among the advertiser, video websites, and mobile users as a three-stage Stackelberg game. Specifically, in Stage I, the advertiser decides the advertising budget; in Stage II, the video websites determine their advertising-space selling strategies for the advertiser and their membership pricing strategies for mobile users; in Stage III, the mobile users make their own decisions on video watching strategies for each video website. We analyze the equilibrium of each sub-game. In particular, we derive closed-form solutions for each mobile user's optimal video watching strategy, each video website's optimal membership price, and the optimal number of advertising spaces to sell. In addition, we investigate the piece-wise structure of the advertiser's utility function and propose an efficient algorithm to obtain the optimal advertising budget. Finally, numerical results show the impact of different parameter values on each entity's utility as well as on the key indicators.
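The three-stage game is solved by backward induction: Stage III users best-respond to prices, and Stage II websites price against that response. A toy numeric sketch of Stages II-III, with hypothetical linear utilities, parameter values, and a grid search standing in for the paper's closed-form analysis:

```python
# Toy backward-induction sketch of Stages II-III. The linear utility
# form and all parameter values are hypothetical stand-ins.

def best_service(theta, p_vip, p_reg, ad_cost, q=(1.0, 0.6, 0.3)):
    """Stage III: a user of type theta picks the utility-maximizing service."""
    q_vip, q_reg, q_non = q
    options = {
        "vip": theta * q_vip - p_vip,
        "regular": theta * q_reg - p_reg,
        "non": theta * q_non - ad_cost,   # free, but ads impose a cost
    }
    return max(options, key=options.get)

def website_revenue(p_vip, p_reg, users, ad_cost, ad_rev=0.2):
    """Stage II objective: membership payments plus per-viewer ad income."""
    rev = 0.0
    for theta in users:
        choice = best_service(theta, p_vip, p_reg, ad_cost)
        rev += {"vip": p_vip, "regular": p_reg, "non": ad_rev}[choice]
    return rev

users = [i / 100 for i in range(1, 101)]   # user types spread over (0, 1]
grid = [i / 20 for i in range(21)]         # candidate prices 0.00 .. 1.00
best = max((website_revenue(pv, pr, users, ad_cost=0.1), pv, pr)
           for pv in grid for pr in grid if pr <= pv)
```

Stage I (the advertiser's budget) would sit one level above this search; the paper derives closed-form solutions instead of searching.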
versions: [ { "version": "v1", "created": "Fri, 29 Jun 2018 10:38:56 GMT" } ]
update_date: 2018-07-02T00:00:00
authors_parsed: [ [ "Zhou", "Qing", "" ], [ "Liu", "Nan", "" ] ]
prediction: new_dataset
probability: 0.993087
id: 1806.11349
submitter: Richard Diehl Martinez
authors: Rooz Mahdavian, Richard Diehl Martinez
title: Ignition: An End-to-End Supervised Model for Training Simulated Self-Driving Vehicles
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
We introduce Ignition: an end-to-end neural network architecture for training unconstrained self-driving vehicles in simulated environments. The model is a ResNet-18 variant which is fed images from the front of a simulated F1 car and outputs optimal labels for steering, throttle, and braking. Importantly, we never explicitly train the model to detect road features like the outline of a track or the distance to other cars; instead, we illustrate that these latent features can be automatically encapsulated by the network.
versions: [ { "version": "v1", "created": "Fri, 29 Jun 2018 10:48:33 GMT" } ]
update_date: 2018-07-02T00:00:00
authors_parsed: [ [ "Mahdavian", "Rooz", "" ], [ "Martinez", "Richard Diehl", "" ] ]
prediction: new_dataset
probability: 0.997729
id: 1806.11423
submitter: Shreya Singh
authors: Shreya Singh, G Mohammed Abdulla, Sumit Borar, Sagar Arora
title: Footwear Size Recommendation System
comments: 7 pages, 5 figures, 5 tables, AI meets Fashion workshop, KDD 2018
journal-ref: null
doi: null
report-no: null
categories: cs.IR
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
While shopping for fashion products, customers usually prefer to try out products to examine fit, material, and overall look and feel. Due to the lack of try-out options during online shopping, it becomes pivotal to provide customers with as much of this information as possible to enhance their shopping experience. It is also essential to provide the same experience to new customers. Our work focuses on providing a production-ready size recommendation system for shoes and addresses the challenge of providing recommendations for users with no previous purchases on the platform. We present a probabilistic approach based on user co-purchase data, facilitated by generating a brand-brand relationship graph. Specifically, we address two challenges that are commonly faced while implementing such a solution: (1) sparse signals for less popular or new products in the system, and (2) extending the solution to new users. Further, we compare and contrast this approach with our previous work and show significant improvements in both recommendation precision and coverage.
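The brand-brand relationship graph described above can be sketched as a map from brand pairs to a distribution of size offsets observed in co-purchases. All data, brand names, and the offset model here are hypothetical placeholders, not the paper's actual formulation:

```python
from collections import defaultdict, Counter

# Hypothetical co-purchase records: (brand_a, size_a, brand_b, size_b)
co_purchases = [
    ("nike", 9, "adidas", 9.5), ("nike", 8, "adidas", 8.5),
    ("nike", 10, "adidas", 10.5), ("nike", 9, "adidas", 9.5),
    ("adidas", 9.5, "puma", 9), ("adidas", 8.5, "puma", 8),
]

# Edge weights of the brand-brand graph: distribution of size offsets.
offsets = defaultdict(Counter)
for ba, sa, bb, sb in co_purchases:
    offsets[(ba, bb)][sb - sa] += 1
    offsets[(bb, ba)][sa - sb] += 1  # reverse edge carries the negated offset

def recommend(known_brand, known_size, target_brand):
    """Most probable size in target_brand given a good fit in known_brand."""
    dist = offsets[(known_brand, target_brand)]
    if not dist:
        return None  # no direct co-purchase signal between these brands
    offset, _ = dist.most_common(1)[0]
    return known_size + offset
```

A two-hop walk over the graph (e.g. nike to puma via adidas) is one way to handle the sparse-signal case; the paper's probabilistic treatment is more involved.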
versions: [ { "version": "v1", "created": "Thu, 28 Jun 2018 14:22:49 GMT" } ]
update_date: 2018-07-02T00:00:00
authors_parsed: [ [ "Singh", "Shreya", "" ], [ "Abdulla", "G Mohammed", "" ], [ "Borar", "Sumit", "" ], [ "Arora", "Sagar", "" ] ]
prediction: new_dataset
probability: 0.998452
id: 1806.11552
submitter: Li Lin
authors: Li Lin, Xiaofei Liao
title: Echo: An Edge-Centric Code Offloading System with Quality of Service Guarantee
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Code offloading is a promising way to accelerate mobile applications and save energy on mobile devices by shifting some computation to the cloud. However, existing code offloading systems suffer from a long communication delay between mobile devices and the cloud. To address this challenge, in this paper, we consider deploying edge nodes in the proximity of mobile devices and study how they benefit code offloading. We design an edge-centric code offloading system, called Echo, over a three-layer computing hierarchy consisting of mobile devices, edge, and cloud. A critical problem Echo must address is deciding which method should be offloaded to which computing platform (edge or cloud). Different from existing offloading systems that let mobile devices individually make offloading decisions, Echo implements a centralized decision engine at the edge node. This edge-centric design can fully exploit the limited hardware resources at the edge to provide an offloading service with a Quality of Service guarantee. Furthermore, we propose some novel mechanisms, e.g., lazy object transmission and differential object update, to further improve system performance. The results of a small-scale real deployment and trace-driven simulations show that Echo significantly outperforms existing code offloading systems.
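A minimal sketch of the kind of per-method placement decision a centralized engine makes, using an illustrative latency model. All speeds, RTTs, and bandwidths below are made-up numbers, and Echo's real engine additionally accounts for QoS deadlines and edge load:

```python
# Estimated completion time = network round-trip + input transfer + compute.
# The three platform profiles are hypothetical.
PLATFORMS = {
    "local": {"speed": 1e9,  "rtt": 0.0,   "bw": float("inf")},
    "edge":  {"speed": 4e9,  "rtt": 0.002, "bw": 100e6},
    "cloud": {"speed": 16e9, "rtt": 0.050, "bw": 10e6},
}

def completion_time(platform, cycles, input_bytes):
    p = PLATFORMS[platform]
    return p["rtt"] + input_bytes / p["bw"] + cycles / p["speed"]

def decide(cycles, input_bytes):
    """Pick the platform with the lowest estimated completion time."""
    return min(PLATFORMS, key=lambda name: completion_time(name, cycles, input_bytes))
```

Under this model, tiny methods stay local, compute-heavy methods with large inputs go to the nearby edge, and compute-heavy methods with small inputs go to the faster cloud.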
versions: [ { "version": "v1", "created": "Mon, 4 Jun 2018 16:38:44 GMT" } ]
update_date: 2018-07-02T00:00:00
authors_parsed: [ [ "Lin", "Li", "" ], [ "Liao", "Xiaofei", "" ] ]
prediction: new_dataset
probability: 0.997006
id: 1801.04179
submitter: Andres Gomez Ramirez
authors: A. Gomez Ramirez and C. Lara and L. Betev and D. Bilanovic and U. Kebschull (and for the ALICE Collaboration)
title: Arhuaco: Deep Learning and Isolation Based Security for Distributed High-Throughput Computing
comments: Manuscript submitted to the Journal of Grid Computing
journal-ref: null
doi: null
report-no: null
categories: cs.DC cs.CR cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Grid computing systems require innovative methods and tools to identify cybersecurity incidents and perform autonomous actions, i.e., without administrator intervention. They also require methods to isolate and trace job payload activity in order to protect users and find evidence of malicious behavior. We introduce an integrated approach to security monitoring via Security by Isolation with Linux Containers and Deep Learning methods for the analysis of real-time data from Grid jobs running inside a virtualized High-Throughput Computing infrastructure, in order to detect and prevent intrusions. A dataset for malware detection in Grid computing is described. In addition, we show the use of generative methods with Recurrent Neural Networks to improve the collected dataset. We present Arhuaco, a prototype implementation of the proposed methods, and empirically study the performance of our technique. The results show that Arhuaco outperforms other methods used in Intrusion Detection Systems for Grid computing. The study is carried out in the ALICE Collaboration Grid, part of the Worldwide LHC Computing Grid.
versions: [ { "version": "v1", "created": "Fri, 12 Jan 2018 14:35:19 GMT" } ]
update_date: 2018-07-01T00:00:00
authors_parsed: [ [ "Ramirez", "A. Gomez", "", "and for the ALICE Collaboration" ], [ "Lara", "C.", "", "and for the ALICE Collaboration" ], [ "Betev", "L.", "", "and for the ALICE Collaboration" ], [ "Bilanovic", "D.", "", "and for the ALICE Collaboration" ], [ "Kebschull", "U.", "", "and for the ALICE Collaboration" ] ]
prediction: new_dataset
probability: 0.9969
id: 1312.3876
submitter: Mine Alsan Ms
authors: Mine Alsan
title: The Symmetric Convex Ordering: A Novel Partial Order for B-DMCs Ordering the Information Sets of Polar Codes
comments: This manuscript was submitted to IEEE Transactions on Information Theory on 01-Nov-2015 as a revision of an earlier version submitted on 21-Aug-2014
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
In this paper, we propose a novel partial order for binary discrete memoryless channels that we call the symmetric convex ordering. We show that Arıkan's polar transform preserves 'symmetric convex orders'. Furthermore, we show that while for symmetric channels this ordering turns out to be equivalent to the stochastic degradation ordering already known to order the information sets of polar codes, a strictly weaker partial order is obtained when at least one of the channels is asymmetric. In between, we also discuss two tools which can be useful for verifying this ordering: a criterion known as the cut criterion and channel symmetrization. Finally, we discuss potential applications of the results to polar coding over non-stationary channels.
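For the special case of the binary erasure channel, the polar transform and its order preservation can be seen directly on erasure probabilities. This BEC illustration is standard textbook material, not the paper's general symmetric convex ordering:

```python
# For a BEC(eps), one step of Arikan's polar transform synthesizes
# BEC(2*eps - eps**2) (the "minus"/worse channel) and BEC(eps**2)
# (the "plus"/better channel). Degradation between BECs reduces to
# comparing erasure probabilities, so order preservation is visible.

def polar_step(eps):
    return 2 * eps - eps ** 2, eps ** 2

def polarize(eps, levels):
    """Erasure probabilities of all 2**levels synthesized channels."""
    chans = [eps]
    for _ in range(levels):
        chans = [e for c in chans for e in polar_step(c)]
    return chans

# If BEC(a) is degraded w.r.t. BEC(b) (i.e. a >= b), the order survives
# the transform on both branches:
a, b = 0.5, 0.3
am, ap = polar_step(a)
bm, bp = polar_step(b)
assert am >= bm and ap >= bp
```

Note that the total erasure probability is conserved at each level ((2e - e^2) + e^2 = 2e), which is the BEC face of capacity conservation under polarization.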
versions: [ { "version": "v1", "created": "Fri, 13 Dec 2013 17:03:04 GMT" }, { "version": "v2", "created": "Thu, 28 Jun 2018 06:53:36 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Alsan", "Mine", "" ] ]
prediction: new_dataset
probability: 0.998901
id: 1711.11499
submitter: Leonardo Ermann
authors: Leonardo Ermann, Klaus M. Frahm and Dima L. Shepelyansky
title: Google matrix of Bitcoin network
comments: 12 pages, 15 figures
journal-ref: Eur. Phys. J. B 91, 127 (2018)
doi: 10.1140/epjb/e2018-80674-y
report-no: null
categories: cs.SI physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
We construct and study the Google matrix of Bitcoin transactions during the time period from the very beginning in 2009 until April 2013. The Bitcoin network has up to a few million Bitcoin users, and we present its main characteristics, including the PageRank and CheiRank probability distributions, the spectrum of eigenvalues of the Google matrix, and the related eigenvectors. We find that the spectrum has an unusual circle-type structure, which we attribute to existing hidden communities of nodes linked between their members. We show that the Gini coefficient of the transactions for the whole period is close to unity, showing that the main part of the wealth of the network is captured by a small fraction of users.
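The Google matrix construction itself is standard: column-stochastic transition probabilities damped toward a uniform teleportation term. A sketch on a tiny hypothetical transaction graph (not Bitcoin data):

```python
# Google matrix of a toy directed transaction graph, and its PageRank
# vector by power iteration (alpha = 0.85 is the usual damping factor).

def google_matrix(adj, n, alpha=0.85):
    """Column-stochastic Google matrix, stored column by column."""
    cols = []
    for j in range(n):
        out = [i for i in range(n) if (j, i) in adj]
        if out:
            col = [alpha / len(out) if i in out else 0.0 for i in range(n)]
        else:                       # dangling node: spread rank uniformly
            col = [alpha / n] * n
        cols.append([c + (1 - alpha) / n for c in col])
    return cols                     # cols[j][i] is the (i, j) entry

def pagerank(adj, n, iters=100):
    G = google_matrix(adj, n)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(G[j][i] * p[j] for j in range(n)) for i in range(n)]
    return p

edges = {(0, 1), (1, 2), (2, 0), (3, 0)}   # who-pays-whom arcs (made up)
p = pagerank(edges, 4)                     # node 0 collects the most rank
```

CheiRank is the same computation on the graph with all arcs reversed.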
versions: [ { "version": "v1", "created": "Thu, 30 Nov 2017 16:35:43 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Ermann", "Leonardo", "" ], [ "Frahm", "Klaus M.", "" ], [ "Shepelyansky", "Dima L.", "" ] ]
prediction: new_dataset
probability: 0.994533
id: 1802.00756
submitter: Reuben Rowe
authors: Liron Cohen and Reuben N. S. Rowe
title: Infinitary and Cyclic Proof Systems for Transitive Closure Logic
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LO
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Transitive closure logic is a known extension of first-order logic obtained by introducing a transitive closure operator. While other extensions of first-order logic with inductive definitions are a priori parametrized by a set of inductive definitions, the addition of the transitive closure operator uniformly captures all finitary inductive definitions. In this paper we present an infinitary proof system for transitive closure logic which is an infinite descent-style counterpart to the existing (explicit induction) proof system for the logic. We show that, as for similar systems for first-order logic with inductive definitions, our infinitary system is complete for the standard semantics and subsumes the explicit system. Moreover, the uniformity of the transitive closure operator allows semantically meaningful complete restrictions to be defined using simple syntactic criteria. Consequently, the restriction to regular infinitary (i.e. cyclic) proofs provides the basis for an effective system for automating inductive reasoning.
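Semantically, the operator the logic adds is the ordinary transitive closure of a binary relation: TC(E)(x, y) holds iff some finite E-path joins x to y. That one uniform inductive definition is computable by a Floyd-Warshall-style sweep, sketched here on a small finite relation:

```python
def transitive_closure(edges, n):
    """Floyd-Warshall-style closure of a relation on {0, ..., n-1}."""
    reach = [[(i, j) in edges for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return {(i, j) for i in range(n) for j in range(n) if reach[i][j]}

# On the path 0 -> 1 -> 2 -> 3, the closure contains every forward pair.
closure = transitive_closure({(0, 1), (1, 2), (2, 3)}, 4)
```

The proof systems in the paper reason about this operator over arbitrary (possibly infinite) structures; the finite computation above is only the motivating special case.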
versions: [ { "version": "v1", "created": "Fri, 2 Feb 2018 16:26:04 GMT" }, { "version": "v2", "created": "Mon, 5 Feb 2018 13:31:22 GMT" }, { "version": "v3", "created": "Thu, 28 Jun 2018 13:45:58 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Cohen", "Liron", "" ], [ "Rowe", "Reuben N. S.", "" ] ]
prediction: new_dataset
probability: 0.999285
id: 1803.04054
submitter: Azad Aminpour
authors: Kamyar Nazeri, Azad Aminpour, Mehran Ebrahimi
title: Two-Stage Convolutional Neural Network for Breast Cancer Histology Image Classification
comments: 10 pages, 5 figures, ICIAR 2018 conference
journal-ref: LNCS 10882 (2018) 717-726
doi: 10.1007/978-3-319-93000-8_81
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first "patch-wise" network acts as an auto-encoder that extracts the most salient features of image patches while the second "image-wise" network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018
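The patch-based pipeline can be sketched at the bookkeeping level: tile each large histology image into patches, classify each patch, and aggregate to an image-level label. The patch size and stride below are placeholders, and majority voting is a simpler stand-in for the paper's learned image-wise network:

```python
from collections import Counter

def extract_patches(h, w, patch, stride):
    """Top-left corners of the patches tiling an h x w image."""
    return [(y, x) for y in range(0, h - patch + 1, stride)
                   for x in range(0, w - patch + 1, stride)]

def image_label(patch_predictions):
    """Aggregate per-patch class predictions by majority vote."""
    return Counter(patch_predictions).most_common(1)[0][0]

# Hypothetical numbers: a 1536 x 2048 slide tiled by 512-pixel patches
# with 50% overlap yields 5 x 7 = 35 patches to classify.
corners = extract_patches(1536, 2048, 512, 256)
```

In the paper, each patch is first encoded by the pre-trained patch-wise CNN and the image-wise CNN consumes those features; the vote above only illustrates the aggregation step.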
versions: [ { "version": "v1", "created": "Sun, 11 Mar 2018 22:05:33 GMT" }, { "version": "v2", "created": "Sun, 1 Apr 2018 16:37:19 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Nazeri", "Kamyar", "" ], [ "Aminpour", "Azad", "" ], [ "Ebrahimi", "Mehran", "" ] ]
prediction: new_dataset
probability: 0.999408
id: 1803.09615
submitter: Behnam Montazeri
authors: Behnam Montazeri, Yilong Li, Mohammad Alizadeh, and John Ousterhout
title: Homa: A Receiver-Driven Low-Latency Transport Protocol Using Network Priorities (Complete Version)
comments: This paper is an extended version of the paper on Homa that was published in ACM SIGCOMM 2018. Material had to be removed from Sections 5.1 and 5.2 to meet the SIGCOMM page restrictions; this version restores the missing material. This paper is 18 pages, plus two pages of references
journal-ref: Behnam Montazeri, Yilong Li, Mohammad Alizadeh, and John Ousterhout. Homa: A Receiver-Driven Low-Latency Transport Protocol Using Network Priorities. In Proceedings of ACM SIGCOMM 2018 (SIGCOMM 18). ACM, New York, NY, USA, 15 pages
doi: 10.1145/3230543.3230564
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Homa is a new transport protocol for datacenter networks. It provides exceptionally low latency, especially for workloads with a high volume of very short messages, and it also supports large messages and high network utilization. Homa uses in-network priority queues to ensure low latency for short messages; priority allocation is managed dynamically by each receiver and integrated with a receiver-driven flow control mechanism. Homa also uses controlled overcommitment of receiver downlinks to ensure efficient bandwidth utilization at high load. Our implementation of Homa delivers 99th percentile round-trip times less than 15 μs for short messages on a 10 Gbps network running at 80% load. These latencies are almost 100x lower than the best published measurements of an implementation. In simulations, Homa's latency is roughly equal to pFabric and significantly better than pHost, PIAS, and NDP for almost all message sizes and workloads. Homa can also sustain higher network loads than pFabric, pHost, or PIAS.
versions: [ { "version": "v1", "created": "Mon, 26 Mar 2018 14:24:45 GMT" }, { "version": "v2", "created": "Wed, 27 Jun 2018 22:09:56 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Montazeri", "Behnam", "" ], [ "Li", "Yilong", "" ], [ "Alizadeh", "Mohammad", "" ], [ "Ousterhout", "John", "" ] ]
prediction: new_dataset
probability: 0.995529
id: 1805.02400
submitter: Mika Juuti Mr
authors: Mika Juuti, Bo Sun, Tatsuya Mori, and N. Asokan
title: Stay On-Topic: Generating Context-specific Fake Restaurant Reviews
comments: 21 pages, 5 figures, 6 tables. Accepted for publication in the European Symposium on Research in Computer Security (ESORICS) 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in that work (char-LSTM) has one drawback: it has difficulty staying in context, i.e., when it generates a review for a specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with skeptical users and show that our method evades detection more frequently compared to the state of the art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level α = 1% (Section 4.3). We develop very effective detection tools and reach an average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible.
versions: [ { "version": "v1", "created": "Mon, 7 May 2018 08:37:04 GMT" }, { "version": "v2", "created": "Tue, 8 May 2018 14:46:09 GMT" }, { "version": "v3", "created": "Wed, 9 May 2018 06:44:38 GMT" }, { "version": "v4", "created": "Thu, 28 Jun 2018 07:55:31 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Juuti", "Mika", "" ], [ "Sun", "Bo", "" ], [ "Mori", "Tatsuya", "" ], [ "Asokan", "N.", "" ] ]
prediction: new_dataset
probability: 0.991096
id: 1805.05983
submitter: Xuan-Bach Dinh Le
authors: Xuan Bach D. Le, Lingfeng Bao, David Lo, Xin Xia, Shanping Li
title: On Reliability of Patch Correctness Assessment
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Current state-of-the-art automatic software repair (ASR) techniques rely heavily on incomplete specifications, e.g., test suites, to generate repairs. This, however, may render ASR tools to generate incorrect repairs that do not generalize. To assess patch correctness, researchers have been following two typical ways separately: (1) Automated annotation, wherein patches are automatically labeled by an independent test suite (ITS) - a patch passing the ITS is regarded as correct or generalizable, and incorrect otherwise, (2) Author annotation, wherein authors of ASR techniques annotate correctness labels of patches generated by their and competing tools by themselves. While automated annotation fails to prove that a patch is actually correct, author annotation is prone to subjectivity. This concern has caused an on-going debate on appropriate ways to assess the effectiveness of numerous ASR techniques proposed recently. To address this concern, we propose to assess reliability of author and automated annotations on patch correctness assessment. We do this by first constructing a gold set of correctness labels for 189 randomly selected patches generated by 8 state-of-the-art ASR techniques through a user study involving 35 professional developers as independent annotators. By measuring inter-rater agreement as a proxy for annotation quality - as commonly done in the literature - we demonstrate that our constructed gold set is on par with other high-quality gold sets. We then compare labels generated by author and automated annotations with this gold set to assess reliability of the patch assessment methodologies. We subsequently report several findings and highlight implications for future studies.
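Inter-rater agreement of the kind used to vet the gold set is commonly quantified with chance-corrected statistics. A minimal Cohen's kappa for two raters is shown below with made-up labels; a study with 35 annotators would typically use a multi-rater statistic such as Fleiss' kappa, so this is only illustrative:

```python
def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed
    cats = set(labels_a) | set(labels_b)
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical correctness labels from two annotators for five patches.
a = ["correct", "correct", "incorrect", "correct", "incorrect"]
b = ["correct", "incorrect", "incorrect", "correct", "incorrect"]
kappa = cohens_kappa(a, b)   # observed 0.8, expected-by-chance 0.48
```

Here kappa = (0.8 - 0.48) / (1 - 0.48) = 8/13, i.e. moderate-to-substantial agreement on conventional scales.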
versions: [ { "version": "v1", "created": "Tue, 15 May 2018 18:32:14 GMT" }, { "version": "v2", "created": "Wed, 27 Jun 2018 22:22:22 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Le", "Xuan Bach D.", "" ], [ "Bao", "Lingfeng", "" ], [ "Lo", "David", "" ], [ "Xia", "Xin", "" ], [ "Li", "Shanping", "" ] ]
prediction: new_dataset
probability: 0.998622
id: 1806.04410
submitter: Anupam Saraph
authors: Anupam Saraph, Lalit Kathpalia, Anab Kidwai, Aniruddha Joshi
title: Is India's Unique Identification Number a legally valid identification?
comments: 21 pages
journal-ref: null
doi: null
report-no: null
categories: cs.CY cs.CR cs.SI
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract:
A legally valid identification document allows impartial arbitration of the identification of individuals. It protects individuals from a violation of their dignity, justice, liberty and equality. It protects the nation from a destruction of its republican, democratic, sovereign status. In order to test the ability of an identification document to establish impartial identification of individuals, it must be evaluated for its ability to establish identity, undertake identification and build confidence in impartial, reliable and valid identification. The processes of issuing, using and validating identification documents alter the ability of the document to establish identity, undertake identification and build confidence in impartial and valid identification. These processes alter the ability of the document to serve as proof of identity, proof of address, proof of being a resident, or even proof of the existence of a person. We examine the ability of the UID number to serve as an identification document with the ability to impartially arbitrate the identification of individuals and serve as proof of identity and address, and to demonstrate the existence of a person. We evaluate the implications of the continued use of the UID system on our ability to undertake legally valid identification and to ensure the integrity of identity and address databases across the world.
versions: [ { "version": "v1", "created": "Tue, 12 Jun 2018 09:30:33 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Saraph", "Anupam", "" ], [ "Kathpalia", "Lalit", "" ], [ "Kidwai", "Anab", "" ], [ "Joshi", "Aniruddha", "" ] ]
prediction: new_dataset
probability: 0.999201
id: 1806.10642
submitter: Asaf Hecht
authors: Asaf Hecht, Adi Sagi, Yuval Elovici
title: PIDS - A Behavioral Framework for Analysis and Detection of Network Printer Attacks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Nowadays, every organization might be attacked through its network printers. The malicious exploitation of printing protocols is a dangerous and underestimated threat against every printer today, as highlighted by recently published research. This article presents PIDS (Printers' IDS), an intrusion detection system for detecting attacks on printing protocols. PIDS continuously captures various features and events obtained from traffic produced by printing protocols in order to detect attacks. As part of this research, we conducted thousands of automatic and manual printing protocol attacks on various printers and recorded thousands of the printers' benign network sessions. Then we applied various supervised machine learning (ML) algorithms to classify the collected data as normal (benign) or abnormal (malicious). We evaluated several detection algorithms, feature selection methods, and the features needed in order to obtain the best detection results for the protocol traffic of printers. Our empirical results suggest that the proposed framework is effective in detecting printing protocol attacks, providing an accuracy of 99.9% with a negligible false-positive rate.
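The two reported figures, accuracy and false-positive rate, reduce to confusion-matrix counts over a labeled session set. A minimal sketch with made-up labels:

```python
def detection_metrics(y_true, y_pred, positive="malicious"):
    """Accuracy and false-positive rate from predicted vs. true labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return accuracy, fpr

# Hypothetical evaluation: 3 attack sessions, 5 benign sessions,
# and one benign session misclassified as an attack.
y_true = ["malicious"] * 3 + ["benign"] * 5
y_pred = ["malicious"] * 3 + ["malicious"] + ["benign"] * 4
acc, fpr = detection_metrics(y_true, y_pred)
```

On this toy data the accuracy is 7/8 and the false-positive rate is 1/5; the paper's 99.9% accuracy claim corresponds to the same computation over its full session corpus.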
versions: [ { "version": "v1", "created": "Wed, 27 Jun 2018 18:43:30 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Hecht", "Asaf", "" ], [ "Sagi", "Adi", "" ], [ "Elovici", "Yuval", "" ] ]
prediction: new_dataset
probability: 0.980657
id: 1806.10658
submitter: Soheil Khorram
authors: Soheil Khorram, Mimansa Jaiswal, John Gideon, Melvin McInnis, Emily Mower Provost
title: The PRIORI Emotion Dataset: Linking Mood to Emotion Detected In-the-Wild
comments: Interspeech 2018
journal-ref: null
doi: null
report-no: null
categories: cs.HC
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Bipolar Disorder is a chronic psychiatric illness characterized by pathological mood swings associated with severe disruptions in emotion regulation. Clinical monitoring of mood is key to the care of these dynamic and incapacitating mood states. Frequent and detailed monitoring improves clinical sensitivity to detect mood state changes, but typically requires costly and limited resources. Speech characteristics change during both depressed and manic states, suggesting automatic methods applied to the speech signal can be effectively used to monitor mood state changes. However, speech is modulated by many factors, which renders mood state prediction challenging. We hypothesize that emotion can be used as an intermediary step to improve mood state prediction. This paper presents critical steps in developing this pipeline, including (1) a new in the wild emotion dataset, the PRIORI Emotion Dataset, collected from everyday smartphone conversational speech recordings, (2) activation/valence emotion recognition baselines on this dataset (PCC of 0.71 and 0.41, respectively), and (3) significant correlation between predicted emotion and mood state for individuals with bipolar disorder. This provides evidence and a working baseline for the use of emotion as a meta-feature for mood state monitoring.
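The activation/valence baselines above are reported as Pearson correlation (PCC) between predicted and annotated values; the coefficient itself is a short computation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)   # assumes neither sequence is constant
```

A PCC of 0.71 (activation) thus means predictions track annotations well up to an affine rescaling, while 0.41 (valence) indicates a substantially weaker linear relationship.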
versions: [ { "version": "v1", "created": "Tue, 19 Jun 2018 23:28:12 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Khorram", "Soheil", "" ], [ "Jaiswal", "Mimansa", "" ], [ "Gideon", "John", "" ], [ "McInnis", "Melvin", "" ], [ "Provost", "Emily Mower", "" ] ]
prediction: new_dataset
probability: 0.999686
id: 1806.10836
submitter: Alessia Amelio Dr.
authors: Lucio Amelio and Alessia Amelio
title: CT Image Registration in Acute Stroke Monitoring
comments: 10 pages, 9 figures, Accepted at the 41th Jubilee International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
We present a new system based on tracking the temporal evolution of stroke lesions using an image registration technique on CT exams of the patient's brain. The system is able to compare past CT exams with the most recent one related to the stroke event in order to evaluate past lesions which are not related to the stroke. Then, it can compare recent CT exams related to the current stroke to assess the evolution of the lesion over time. A new similarity measure is also introduced for the comparison of the source and target images during image registration. The result is a cheaper, faster and more accessible evaluation of the acute phase of the stroke, overcoming the limitations of current state-of-the-art systems.
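Intensity-based registration scores candidate alignments with a similarity (or dissimilarity) measure and keeps the best one. The sketch below uses plain sum-of-squared-differences and an exhaustive translation search as a generic stand-in for the paper's new similarity measure:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size 2-D images."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def shift(img, dy, dx, fill=0):
    """Translate an image by (dy, dx), padding exposed pixels with fill."""
    h, w = len(img), len(img[0])
    return [[img[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else fill
             for x in range(w)] for y in range(h)]

def register(fixed, moving, max_shift=2):
    """Exhaustive search for the translation aligning moving onto fixed."""
    candidates = ((ssd(fixed, shift(moving, dy, dx)), (dy, dx))
                  for dy in range(-max_shift, max_shift + 1)
                  for dx in range(-max_shift, max_shift + 1))
    return min(candidates)[1]

# A bright blob at (1, 1) in `moving` matches the blob at (2, 2) in
# `fixed` after shifting by (dy, dx) = (1, 1).
fixed = [[0] * 5 for _ in range(5)]; fixed[2][2] = 9
moving = [[0] * 5 for _ in range(5)]; moving[1][1] = 9
```

Real CT registration additionally handles rotation, scaling, and interpolation; only the score-and-search skeleton is shown here.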
versions: [ { "version": "v1", "created": "Thu, 28 Jun 2018 09:11:57 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Amelio", "Lucio", "" ], [ "Amelio", "Alessia", "" ] ]
prediction: new_dataset
probability: 0.999282
id: 1806.10899
submitter: Mandy Neumann
authors: Ruslan R. Fayzrakhmanov, Christopher Michels, Mandy Neumann
title: Introduction to OXPath
comments: 63 pages
journal-ref: null
doi: null
report-no: null
categories: cs.PL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Contemporary web pages with increasingly sophisticated interfaces rival traditional desktop applications in interface complexity and are often called web applications or RIAs (Rich Internet Applications). They often require the execution of JavaScript in a web browser and can issue AJAX requests to dynamically generate content in reaction to user interaction. From the automatic data acquisition point of view, it is thus essential to be able to correctly render web pages and mimic user actions to obtain relevant data from the web page content. Briefly, to obtain data through existing Web interfaces and transform it into structured form, contemporary wrappers should be able to: 1) interact with sophisticated interfaces of web applications; 2) precisely acquire relevant data; 3) scale with the number of crawled web pages or states of a web application; 4) have an embeddable programming API for integration with existing web technologies. OXPath is a state-of-the-art technology which is compliant with these requirements and has demonstrated its efficiency in comprehensive experiments. OXPath integrates Firefox for correct rendering of web pages and extends XPath 1.0 for DOM node selection, interaction, and extraction. It provides means for converting extracted data into different formats, such as XML, JSON, and CSV, and for saving data into relational databases. This tutorial explains the main features of the OXPath language and the setup of a suitable working environment. Guidelines for using OXPath are provided in the form of prototypical examples.
versions: [ { "version": "v1", "created": "Thu, 28 Jun 2018 11:58:05 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Fayzrakhmanov", "Ruslan R.", "" ], [ "Michels", "Christopher", "" ], [ "Neumann", "Mandy", "" ] ]
prediction: new_dataset
probability: 0.999152
id: 1806.10968
submitter: Hamza Ahmad Madni
authors: Zain Mumtaz, Saleem Ullah, Zeeshan Ilyas, Shuo Liu, Naila Aslam, Jehangir Arshad Meo, Hamza Ahmad Madni
title: Automatic streetlights that glow on detecting night and object using Arduino
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.OH
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Our manuscript aims to develop a system that will lead to energy conservation; by doing so, we would be able to light a few more homes. The proposed work is accomplished using an Arduino microcontroller and sensors that control the electricity based on night-time and object detection. Meanwhile, a counter counts the number of objects that pass along the road. The beauty of the proposed work is that the wastage of unused electricity is reduced, the lifetime of the streetlights is extended because the lights do not stay on during the whole night, and safety is improved. We are confident that the proposed idea will be beneficial in future applications of microcontrollers and sensors.
versions: [ { "version": "v1", "created": "Thu, 28 Jun 2018 13:47:37 GMT" } ]
update_date: 2018-06-29T00:00:00
authors_parsed: [ [ "Mumtaz", "Zain", "" ], [ "Ullah", "Saleem", "" ], [ "Ilyas", "Zeeshan", "" ], [ "Liu", "Shuo", "" ], [ "Aslam", "Naila", "" ], [ "Meo", "Jehangir Arshad", "" ], [ "Madni", "Hamza Ahmad", "" ] ]
prediction: new_dataset
probability: 0.974954
id: 1611.08024
submitter: Vernon Lawhern
authors: Vernon J. Lawhern, Amelia J. Solon, Nicholas R. Waytowich, Stephen M. Gordon, Chou P. Hung, Brent J. Lance
title: EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces
comments: 30 pages, 10 figures. Added additional feature relevance analyses. Minor change to EEGNet architecture. Source code can be found at https://github.com/vlawhern/arl-eegmodels
journal-ref: null
doi: 10.1088/1741-2552/aace8c
report-no: null
categories: cs.LG q-bio.NC stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Brain computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional Neural Networks (CNNs), which have been used in computer vision and speech recognition, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible. In this work we introduce EEGNet, a compact convolutional network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR). We show that EEGNet generalizes across paradigms better than the reference algorithms when only limited training data is available. We demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features. Our results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks, suggesting that the observed performances were not due to artifact or noise sources in the data.
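The parameter savings from depthwise and separable convolutions, which keep EEGNet compact, follow from a simple count. The channel and kernel sizes below are illustrative, not EEGNet's exact hyperparameters:

```python
# Parameter counts for a single 1-D convolution layer, biases omitted.

def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k        # every output channel sees every input

def separable_conv_params(c_in, c_out, k):
    depthwise = c_in * k           # one length-k filter per input channel
    pointwise = c_in * c_out       # 1x1 convolution then mixes channels
    return depthwise + pointwise

standard = standard_conv_params(16, 16, 16)    # 4096 weights
separable = separable_conv_params(16, 16, 16)  # 512 weights, 8x fewer
```

The depthwise step also has a neurophysiological reading in EEGNet: per-channel temporal filters resemble learning a spatial filter for each frequency band.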
[ { "version": "v1", "created": "Wed, 23 Nov 2016 22:36:58 GMT" }, { "version": "v2", "created": "Tue, 9 May 2017 16:03:13 GMT" }, { "version": "v3", "created": "Fri, 9 Mar 2018 01:02:21 GMT" }, { "version": "v4", "created": "Wed, 16 May 2018 01:14:34 GMT" } ]
2018-06-28T00:00:00
[ [ "Lawhern", "Vernon J.", "" ], [ "Solon", "Amelia J.", "" ], [ "Waytowich", "Nicholas R.", "" ], [ "Gordon", "Stephen M.", "" ], [ "Hung", "Chou P.", "" ], [ "Lance", "Brent J.", "" ] ]
new_dataset
0.976085
1804.05258
Ross M. McConnell
Pavol Hell and Jing Huang and Ross M. McConnell and Arash Rafiey
Interval-Like Graphs and Digraphs
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We unify several seemingly different graph and digraph classes under one umbrella. These classes are all broadly speaking different generalizations of interval graphs, and include, in addition to interval graphs, also adjusted interval digraphs, threshold graphs, complements of threshold tolerance graphs (known as `co-TT' graphs), bipartite interval containment graphs, bipartite co-circular arc graphs, and two-directional orthogonal ray graphs. (The last three classes coincide, but have been investigated in different contexts.) This common view is made possible by introducing loops. We also show that all the above classes are united by a common ordering characterization, the existence of a min ordering. We propose a common generalization of all these graph and digraph classes, namely signed-interval digraphs, and show that they are precisely the digraphs that are characterized by the existence of a min ordering. We also offer an alternative geometric characterization of these digraphs. For most of the above example graph and digraph classes, we show that they are exactly those signed-interval digraphs that satisfy a suitable natural restriction on the digraph, like having all loops, or having a symmetric edge-set, or being bipartite. (For instance co-TT graphs are precisely those signed-interval digraphs that have each edge symmetric.) We also offer some discussion of recognition algorithms and characterizations, saving the details for future papers.
[ { "version": "v1", "created": "Sat, 14 Apr 2018 18:19:33 GMT" }, { "version": "v2", "created": "Tue, 26 Jun 2018 21:41:10 GMT" } ]
2018-06-28T00:00:00
[ [ "Hell", "Pavol", "" ], [ "Huang", "Jing", "" ], [ "McConnell", "Ross M.", "" ], [ "Rafiey", "Arash", "" ] ]
new_dataset
0.979103
1806.10173
Mitsuo Yoshida
Mitsuo Yoshida, Fujio Toriumi
Do Political Detachment Users Receive Various Political Information on Social Media?
AAAI ICWSM 2018 Workshop : The 3rd International Workshop on Event Analytics using Social Media Data (EASM 2018)
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During elections, political parties communicate political information to people through social media. Their followers receive this information, but can users who are not followers, i.e., political detachment users, receive it? We focus on political detachment users who do not follow any political party, and tackle the following research question: do political detachment users receive varied political information during the election period? The results indicate that the answer is no: we determined that political detachment users receive information from only a few political parties.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 19:08:04 GMT" } ]
2018-06-28T00:00:00
[ [ "Yoshida", "Mitsuo", "" ], [ "Toriumi", "Fujio", "" ] ]
new_dataset
0.980571
1806.10278
Ramanpreet Pahwa Singh
Ramanpreet Singh Pahwa, Wei Kiat Leong, Shaohui Foong, Karianto Leman, Minh N. Do
Feature-less Stitching of Cylindrical Tunnel
6 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional image stitching algorithms use transforms such as homography to combine different views of a scene. They usually work well when the scene is planar or when the camera is only rotated, keeping its position static. This severely limits their use in real world scenarios where an unmanned aerial vehicle (UAV) potentially hovers around and flies in an enclosed area while rotating to capture a video sequence. We utilize known scene geometry along with recorded camera trajectory to create cylindrical images captured in a given environment such as a tunnel where the camera rotates around its center. The captured images of the inner surface of the given scene are combined to create a composite panoramic image that is textured onto a 3D geometrical object in Unity graphical engine to create an immersive environment for end users.
[ { "version": "v1", "created": "Wed, 27 Jun 2018 02:56:19 GMT" } ]
2018-06-28T00:00:00
[ [ "Pahwa", "Ramanpreet Singh", "" ], [ "Leong", "Wei Kiat", "" ], [ "Foong", "Shaohui", "" ], [ "Leman", "Karianto", "" ], [ "Do", "Minh N.", "" ] ]
new_dataset
0.996979
1806.10419
Shervin Minaee
Shervin Minaee, Yao Wang, Alp Aygar, Sohae Chung, Xiuyuan Wang, Yvonne W. Lui, Els Fieremans, Steven Flanagan, Joseph Rath
MTBI Identification From Diffusion MR Images Using Bag of Adversarial Visual Features
IEEE Transactions on Medical Imaging
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose bag of adversarial features (BAF) for identifying mild traumatic brain injury (MTBI) patients from their diffusion magnetic resonance images (MRI) (obtained within one month of injury) by incorporating unsupervised feature learning techniques. MTBI is a growing public health problem with an estimated incidence of over 1.7 million people annually in the US. Diagnosis is based on clinical history and symptoms, and accurate, concrete measures of injury are lacking. Unlike most previous works, which use hand-crafted features extracted from different parts of the brain for MTBI classification, we employ feature learning algorithms to learn more discriminative representations for this task. A major challenge in this field thus far is the relatively small number of subjects available for training. This makes it difficult to use an end-to-end convolutional neural network to directly classify a subject from MR images. To overcome this challenge, we first apply an adversarial auto-encoder (with convolutional structure) to learn patch-level features from overlapping image patches extracted from different brain regions. We then aggregate these features through a bag-of-words approach. We perform an extensive experimental study on a dataset of 227 subjects (including 109 MTBI patients, and 118 age- and sex-matched healthy controls), and compare the bag-of-deep-features with several previous approaches. Our experimental results show that the BAF significantly outperforms earlier works relying on the mean values of MR metrics in selected brain regions.
[ { "version": "v1", "created": "Wed, 27 Jun 2018 11:41:34 GMT" } ]
2018-06-28T00:00:00
[ [ "Minaee", "Shervin", "" ], [ "Wang", "Yao", "" ], [ "Aygar", "Alp", "" ], [ "Chung", "Sohae", "" ], [ "Wang", "Xiuyuan", "" ], [ "Lui", "Yvonne W.", "" ], [ "Fieremans", "Els", "" ], [ "Flanagan", "Steven", "" ], [ "Rath", "Joseph", "" ] ]
new_dataset
0.992183
1806.10447
Alexey Gruzdev
Sergey Zherzdev and Alexey Gruzdev
LPRNet: License Plate Recognition via Deep Neural Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes LPRNet, an end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on an nVIDIA GeForce GTX 1080 and 1.3 ms/plate on an Intel Core i7-6700K CPU. LPRNet consists of a lightweight Convolutional Neural Network, so it can be trained in an end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that achieve high accuracy even on challenging Chinese license plates.
[ { "version": "v1", "created": "Wed, 27 Jun 2018 12:57:17 GMT" } ]
2018-06-28T00:00:00
[ [ "Zherzdev", "Sergey", "" ], [ "Gruzdev", "Alexey", "" ] ]
new_dataset
0.999081
1806.10464
Shenjie Huang
Shenjie Huang, Vahid Shah-Mansouri, Majid Safari
Game-Theoretic Spectrum Trading in RF Relay-Assisted Free-Space Optical Communications
null
null
null
null
cs.IT cs.GT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a novel hybrid RF/FSO system based on a game-theoretic spectrum trading process. It is assumed that no RF spectrum is preallocated to the FSO link; only when the link availability is severely impaired by infrequent adverse weather conditions, e.g., fog, can the source borrow a portion of licensed RF spectrum from one of the surrounding RF nodes. Using the leased spectrum, the source establishes a dual-hop RF/FSO hybrid link to maintain its throughput to the destination. The proposed system is considered to be both spectrum- and power-efficient. A market-equilibrium-based pricing process is proposed for the spectrum trading between the source and RF nodes. Through extensive performance analysis, it is demonstrated that the proposed scheme can significantly improve the average capacity of the system, especially when the surrounding RF nodes have low traffic loads. In addition, the system benefits from involving more RF nodes in the spectrum trading process by means of diversity, particularly when the surrounding RF nodes have a high probability of carrying heavy traffic loads. Furthermore, the application of the proposed system in a realistic scenario is presented based on weather statistics from the city of Edinburgh, UK. It is demonstrated that the proposed system can substantially enhance the link availability towards the carrier-class requirement.
[ { "version": "v1", "created": "Wed, 27 Jun 2018 13:21:48 GMT" } ]
2018-06-28T00:00:00
[ [ "Huang", "Shenjie", "" ], [ "Shah-Mansouri", "Vahid", "" ], [ "Safari", "Majid", "" ] ]
new_dataset
0.974018
1806.10521
Florian Kauer
Florian Kauer and Maximilian K\"ostler and Volker Turau
Reliable Wireless Multi-Hop Networks with Decentralized Slot Management: An Analysis of IEEE 802.15.4 DSME
27 pages, 18 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless communication is a key element in the realization of the Industrial Internet of Things for flexible and cost-efficient monitoring and control of industrial processes. Wireless mesh networks using IEEE 802.15.4 have a high potential for executing monitoring and control tasks with low energy consumption and low costs for deployment and maintenance. However, conventional medium access techniques based on carrier sensing cannot provide the required reliability for industrial applications. Therefore, the standard was extended with techniques for time-slotted medium access on multiple channels. In this paper, we present openDSME, a comprehensive implementation of the Deterministic and Synchronous Multi-channel Extension (DSME) and propose a method for traffic-aware and decentralized slot scheduling to enable scalable wireless industrial networks. The performance of DSME and our implementation is demonstrated in the OMNeT++ simulator and on a physically deployed wireless network in the FIT/IoT-LAB. It is shown that in the given scenarios, twice as much traffic can be delivered reliably by using DSME instead of CSMA/CA and that the energy consumption can be reduced significantly. The paper is completed by presenting important trade-offs for parameter selection and by uncovering open issues of the current specification that call for further effort in research and standardization.
[ { "version": "v1", "created": "Wed, 27 Jun 2018 15:07:43 GMT" } ]
2018-06-28T00:00:00
[ [ "Kauer", "Florian", "" ], [ "Köstler", "Maximilian", "" ], [ "Turau", "Volker", "" ] ]
new_dataset
0.997461
1611.01428
Laura Luzzi
Laura Luzzi, Roope Vehkalahti, Cong Ling
Almost universal codes for MIMO wiretap channels
23 pages (double column), 3 figures. Final version. Appendix II has been removed from the final version due to a bug in Corollary II.5
null
null
null
cs.IT math.IT math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite several works on secrecy coding for fading and MIMO wiretap channels from an error probability perspective, the construction of information-theoretically secure codes over such channels remains an open problem. In this paper, we consider a fading wiretap channel model where the transmitter has only partial statistical channel state information. Our channel model includes static channels, i.i.d. block fading channels, and ergodic stationary fading with fast decay of large deviations for the eavesdropper's channel. We extend the flatness factor criterion from the Gaussian wiretap channel to fading and MIMO wiretap channels, and establish a simple design criterion where the normalized product distance / minimum determinant of the lattice and its dual should be maximized simultaneously. Moreover, we propose concrete lattice codes satisfying this design criterion, which are built from algebraic number fields with constant root discriminant in the single-antenna case, and from division algebras centered at such number fields in the multiple-antenna case. The proposed lattice codes achieve strong secrecy and semantic security for all rates $R<C_b-C_e-\kappa$, where $C_b$ and $C_e$ are Bob and Eve's channel capacities respectively, and $\kappa$ is an explicit constant gap. Furthermore, these codes are almost universal in the sense that a fixed code is good for secrecy for a wide range of fading models. Finally, we consider a compound wiretap model with a more restricted uncertainty set, and show that rates $R<\bar{C}_b-\bar{C}_e-\kappa$ are achievable, where $\bar{C}_b$ is a lower bound for Bob's capacity and $\bar{C}_e$ is an upper bound for Eve's capacity for all the channels in the set.
[ { "version": "v1", "created": "Fri, 4 Nov 2016 15:48:01 GMT" }, { "version": "v2", "created": "Fri, 15 Jun 2018 08:25:44 GMT" }, { "version": "v3", "created": "Tue, 26 Jun 2018 11:39:47 GMT" } ]
2018-06-27T00:00:00
[ [ "Luzzi", "Laura", "" ], [ "Vehkalahti", "Roope", "" ], [ "Ling", "Cong", "" ] ]
new_dataset
0.990652
1703.08348
Vaneet Aggarwal
Abubakr O. Al-Abbasi and Vaneet Aggarwal
Video Streaming in Distributed Erasure-coded Storage Systems: Stall Duration Analysis
18 pages, accepted to IEEE/ACM Transactions on Networking
null
null
null
cs.NI cs.DC cs.IT cs.MM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The demand for global video has been burgeoning across industries. With the expansion and improvement of video-streaming services, cloud-based video is evolving into a necessary feature of any successful business for reaching internal and external audiences. This paper considers video streaming over distributed systems where the video segments are encoded using an erasure code for better reliability; to the best of our knowledge, this is the first work to consider video streaming over erasure-coded distributed cloud systems. The download time of each coded chunk of each video segment is characterized, and ordered statistics over the choice of the erasure-coded chunks are used to obtain the playback time of different video segments. Using the playback times, bounds on the moment generating function of the stall duration are used to bound the mean stall duration. Moment-generating-function-based bounds on the ordered statistics are also used to bound the stall duration tail probability, which determines the probability that the stall time is greater than a pre-defined number. These two metrics, mean stall duration and stall duration tail probability, are important quality of experience (QoE) measures for the end users. Based on these metrics, we formulate an optimization problem to jointly minimize a convex combination of both QoE metrics, averaged over all requests, over the placement and access of the video content. The non-convex problem is solved using an efficient iterative algorithm. Numerical results show significant improvement in QoE metrics for cloud-based video as compared to the considered baselines.
[ { "version": "v1", "created": "Fri, 24 Mar 2017 10:39:05 GMT" }, { "version": "v2", "created": "Tue, 26 Jun 2018 11:32:48 GMT" } ]
2018-06-27T00:00:00
[ [ "Al-Abbasi", "Abubakr O.", "" ], [ "Aggarwal", "Vaneet", "" ] ]
new_dataset
0.967847
1705.09413
David Bau iii
David Bau, Jeff Gray, Caitlin Kelleher, Josh Sheldon, Franklyn Turbak
Learnable Programming: Blocks and Beyond
null
Communications of the ACM, June 2017, pp. 72-80
10.1145/3015455
null
cs.PL cs.CY cs.HC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blocks-based programming has become the lingua franca for introductory coding. Studies have found that experience with blocks-based programming can help beginners learn more traditional text-based languages. We explore how blocks environments improve learnability for novices by 1) favoring recognition over recall, 2) reducing cognitive load, and 3) preventing errors. Increased usability of blocks programming has led to widespread adoption within introductory programming contexts across a range of ages. Ongoing work explores further reducing barriers to programming, supporting novice programmers in expanding their programming skills, and transitioning to textual programming. New blocks frameworks are making it easier to access a variety of APIs through blocks environments, opening the doors to a greater diversity of programming domains and supporting greater experimentation for novices and professionals alike.
[ { "version": "v1", "created": "Fri, 26 May 2017 02:25:19 GMT" } ]
2018-06-27T00:00:00
[ [ "Bau", "David", "" ], [ "Gray", "Jeff", "" ], [ "Kelleher", "Caitlin", "" ], [ "Sheldon", "Josh", "" ], [ "Turbak", "Franklyn", "" ] ]
new_dataset
0.990485
1707.02650
Qingyu Liu
Qingyu Liu, Lei Deng, Haibo Zeng, Minghua Chen
On the Min-Max-Delay Problem: NP-completeness, Algorithm, and Integrality Gap
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
We study a delay-sensitive information flow problem where a source streams information to a sink over a directed graph G(V,E) at a fixed rate R possibly using multiple paths to minimize the maximum end-to-end delay, denoted as the Min-Max-Delay problem. Transmission over an edge incurs a constant delay within the capacity. We prove that Min-Max-Delay is weakly NP-complete, and demonstrate that it becomes strongly NP-complete if we require integer flow solution. We propose an optimal pseudo-polynomial time algorithm for Min-Max-Delay, with time complexity O(\log (Nd_{\max}) (N^5d_{\max}^{2.5})(\log R+N^2d_{\max}\log(N^2d_{\max}))), where N = \max\{|V|,|E|\} and d_{\max} is the maximum edge delay. Besides, we show that the integrality gap, which is defined as the ratio of the maximum delay of an optimal integer flow to the maximum delay of an optimal fractional flow, could be arbitrarily large.
[ { "version": "v1", "created": "Sun, 9 Jul 2017 22:34:44 GMT" }, { "version": "v2", "created": "Tue, 11 Jul 2017 14:32:42 GMT" }, { "version": "v3", "created": "Fri, 16 Feb 2018 22:01:59 GMT" }, { "version": "v4", "created": "Mon, 25 Jun 2018 21:42:27 GMT" } ]
2018-06-27T00:00:00
[ [ "Liu", "Qingyu", "" ], [ "Deng", "Lei", "" ], [ "Zeng", "Haibo", "" ], [ "Chen", "Minghua", "" ] ]
new_dataset
0.989392
1711.05368
Bao Zhao
Bao Zhao, Xinyi Le, Juntong Xi
A Novel SDASS Descriptor for Fully Encoding the Information of 3D Local Surface
21 pages, 15 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local feature description is a fundamental yet challenging task in 3D computer vision. This paper proposes a novel descriptor, named Statistic of Deviation Angles on Subdivided Space (SDASS), for encoding the geometrical and spatial information of a local surface on a Local Reference Axis (LRA). In terms of encoding geometrical information, considering that surface normals, which are usually used for encoding the geometrical information of a local surface, are vulnerable to various nuisances (e.g., noise, varying mesh resolutions, etc.), we propose a robust geometrical attribute, called Local Minimum Axis (LMA), to replace the normals for generating the geometrical feature in our SDASS descriptor. For encoding spatial information, we use two spatial features to fully encode the spatial information of a local surface based on the LRA, which usually presents higher overall repeatability than a Local Reference Frame (LRF). Besides, an improved LRA is proposed for increasing the robustness of our SDASS to noise and varying mesh resolutions. The performance of the SDASS descriptor is rigorously tested on four popular datasets. The results show that our descriptor has high descriptiveness and strong robustness, and that it outperforms existing algorithms by a large margin. Finally, the proposed descriptor is applied to 3D registration. The accurate results further confirm the effectiveness of our SDASS method.
[ { "version": "v1", "created": "Wed, 15 Nov 2017 00:50:16 GMT" }, { "version": "v2", "created": "Mon, 25 Jun 2018 13:39:53 GMT" }, { "version": "v3", "created": "Tue, 26 Jun 2018 13:06:34 GMT" } ]
2018-06-27T00:00:00
[ [ "Zhao", "Bao", "" ], [ "Le", "Xinyi", "" ], [ "Xi", "Juntong", "" ] ]
new_dataset
0.999841
1806.09755
Xingchao Peng
Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, Kate Saenko
Syn2Real: A New Benchmark for Synthetic-to-Real Visual Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised transfer of object recognition models from synthetic to real data is an important problem with many potential applications. The challenge is how to "adapt" a model trained on simulated images so that it performs well on real-world data without any additional supervision. Unfortunately, current benchmarks for this problem are limited in size and task diversity. In this paper, we present a new large-scale benchmark called Syn2Real, which consists of a synthetic domain rendered from 3D object models and two real-image domains containing the same object categories. We define three related tasks on this benchmark: closed-set object classification, open-set object classification, and object detection. Our evaluation of multiple state-of-the-art methods reveals a large gap in adaptation performance between the easier closed-set classification task and the more difficult open-set and detection tasks. We conclude that developing adaptation methods that work well across all three tasks presents a significant future challenge for syn2real domain transfer.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 01:53:13 GMT" } ]
2018-06-27T00:00:00
[ [ "Peng", "Xingchao", "" ], [ "Usman", "Ben", "" ], [ "Saito", "Kuniaki", "" ], [ "Kaushik", "Neela", "" ], [ "Hoffman", "Judy", "" ], [ "Saenko", "Kate", "" ] ]
new_dataset
0.999542
1806.09771
Zhengxing Chen
Zhengxing Chen, Chris Amato, Truong-Huy Nguyen, Seth Cooper, Yizhou Sun, Magy Seif El-Nasr
Q-DeckRec: A Fast Deck Recommendation System for Collectible Card Games
CIG 2018
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deck building is a crucial component of playing Collectible Card Games (CCGs). The goal of deck building is to choose a fixed-size subset of cards from a large card pool, so that they work well together in-game against specific opponents. Existing methods either lack the flexibility to adapt to different opponents or require large computational resources, making them unsuitable for any real-time or large-scale application. We propose a new deck recommendation system, named Q-DeckRec, which learns a deck search policy during a training phase and uses it to solve deck building problem instances. Our experimental results demonstrate that, after a training phase, Q-DeckRec requires fewer computational resources to build winning-effective decks compared to several baseline methods.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 02:55:16 GMT" } ]
2018-06-27T00:00:00
[ [ "Chen", "Zhengxing", "" ], [ "Amato", "Chris", "" ], [ "Nguyen", "Truong-Huy", "" ], [ "Cooper", "Seth", "" ], [ "Sun", "Yizhou", "" ], [ "El-Nasr", "Magy Seif", "" ] ]
new_dataset
0.998239
1806.09793
Khuong Vo An
Khanh Dang, Khuong Vo and Josef K\"ung
A NoSQL Data-based Personalized Recommendation System for C2C e-Commerce
Accepted to DEXA 2017
null
10.1007/978-3-319-64471-4_25
null
cs.IR cs.DB cs.LG
http://creativecommons.org/licenses/by/4.0/
With the considerable development of customer-to-customer (C2C) e-commerce in recent years, there is a big demand for an effective recommendation system that suggests suitable websites for users to sell their items with some specified needs. Nonetheless, e-commerce recommendation systems are mostly designed for business-to-customer (B2C) websites, where the systems offer the consumers the products that they might like to buy. Almost none of the related research works focus on choosing selling sites for target items. In this paper, we introduce an approach that recommends selling websites based upon the item's description, category, and desired selling price. This approach employs NoSQL data-based machine learning techniques for building and training topic models and classification models. The trained models can then be used to rank the websites dynamically with respect to the user needs. The experimental results with real-world datasets from Vietnamese C2C websites demonstrate the effectiveness of our proposed method.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 05:02:30 GMT" } ]
2018-06-27T00:00:00
[ [ "Dang", "Khanh", "" ], [ "Vo", "Khuong", "" ], [ "Küng", "Josef", "" ] ]
new_dataset
0.991184
1806.09852
EPTCS
Kasper Dokter (CWI), Farhad Arbab (CWI)
Treo: Textual Syntax for Reo Connectors
In Proceedings MeTRiD 2018, arXiv:1806.09330
EPTCS 272, 2018, pp. 121-135
10.4204/EPTCS.272.10
null
cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reo is an interaction-centric model of concurrency for compositional specification of communication and coordination protocols. Formal verification tools exist to ensure correctness and compliance of protocols specified in Reo, which can readily be (re)used in different applications, or composed into more complex protocols. Recent benchmarks show that compiling such high-level Reo specifications produces executable code that can compete with or even beat the performance of hand-crafted programs written in languages such as C or Java using conventional concurrency constructs. The original declarative graphical syntax of Reo does not support intuitive constructs for parameter passing, iteration, recursion, or conditional specification. This shortcoming hinders Reo's uptake in large-scale practical applications. Although a number of Reo-inspired syntax alternatives have appeared in the past, none of them follows the primary design principles of Reo: a) declarative specification; b) all channel types and their sorts are user-defined; and c) channels compose via shared nodes. In this paper, we offer a textual syntax for Reo that respects these principles and supports flexible parameter passing, iteration, recursion, and conditional specification. In on-going work, we use this textual syntax to compile Reo into target languages such as Java, Promela, and Maude.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 08:55:13 GMT" } ]
2018-06-27T00:00:00
[ [ "Dokter", "Kasper", "", "CWI" ], [ "Arbab", "Farhad", "", "CWI" ] ]
new_dataset
0.987737
1806.09859
Mengyu Liu
Mengyu Liu and Yuan Liu
Charge-then-Forward: Wireless Powered Communication for Multiuser Relay Networks
13 Pages, 9 figures, accepted by IEEE Transactions on Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies a relay-assisted wireless powered communication network (R-WPCN) consisting of multiple source-destination pairs and a hybrid relay node (HRN). We consider a "charge-then-forward" protocol at the HRN, in which the HRN with constant energy supply first acts as an energy transmitter to charge the sources, and then forwards the information from the sources to their destinations through time division multiple access (TDMA) or frequency division multiple access (FDMA). Processing costs at the wireless-powered sources are taken into account. Our goal is to maximize the sum-rate of all transmission pairs by jointly optimizing the time, frequency and power resources. The formulated optimization problems for both TDMA and FDMA are non-convex. For the TDMA scheme, by appropriate transformation, the problem is reformulated as a convex problem and be optimally solved. For the FDMA case, we find the asymptotically optimal solution in the dual domain. Furthermore, suboptimal algorithms are proposed for both schemes to tradeoff the complexity and performance. Finally, the simulation results validate the effectiveness of the proposed schemes.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 09:11:03 GMT" } ]
2018-06-27T00:00:00
[ [ "Liu", "Mengyu", "" ], [ "Liu", "Yuan", "" ] ]
new_dataset
0.978522
1806.09894
Mohammad Mohammadi Amiri Mr.
Mohammad Mohammadi Amiri and Deniz Gunduz
On the Capacity Region of a Cache-Aided Gaussian Broadcast Channel with Multi-Layer Messages
Part of this work was presented at the IEEE International Symposium on Information Theory, Colorado, USA, June 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A cache-aided $K$-user Gaussian broadcast channel (BC) is studied. The transmitter has a library of $N$ files, from which each user requests one. The users are equipped with caches of different sizes, which are filled without the knowledge of the user requests in a centralized manner. Differently from the literature, it is assumed that each file can be delivered to different users at different rates, which may correspond to different quality representations of the underlying content, e.g., scalable coded video segments. Accordingly, instead of a single achievable rate, the system performance is characterized by a rate tuple, which corresponds to the vector of rates users' requests can be delivered at. The goal is to characterize the set of all achievable rate tuples for a given total cache capacity by designing joint cache and channel coding schemes together with cache allocation across users. Assuming that the users are ordered in increasing channel quality, each file is coded into $K$ layers, and only the first $k$ layers of the requested file are delivered to user $k$, $k=1,...,K$. Three different coding schemes are proposed, which differ in the way they deliver the coded contents over the BC; in particular, time-division, superposition, and dirty paper coding schemes are studied. Corresponding achievable rate regions are characterized, and compared with a novel outer bound. To the best of our knowledge, this is the first work studying the delivery of files at different rates over a cache-aided noisy BC.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 10:49:15 GMT" } ]
2018-06-27T00:00:00
[ [ "Amiri", "Mohammad Mohammadi", "" ], [ "Gunduz", "Deniz", "" ] ]
new_dataset
0.997254
1806.10025
Youssouf Oualhadj
Youssouf Oualhadj, L\'eo Tible and Daniele Varacca
Banach-Mazur Parity Games and Almost-sure Winning Strategies
null
null
null
null
cs.GT cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two-player stochastic games are games with two players and a randomised entity called "nature". A natural question to ask in this framework is the existence of strategies that ensure that an event happens with probability 1 (almost-sure strategies). In the case of Markov decision processes, when the event of interest is given as a parity condition, we can replace "nature" by two more players that play according to the rules of what is known as a Banach-Mazur game [1]. In this paper we continue this research program by extending the above result to two-player stochastic parity games. As in [1], the basic idea is that, under the correct hypothesis, we can replace the randomised player with two players playing a Banach-Mazur game. This requires a few technical observations and a non-trivial proof, which this paper sets out to provide.
[ { "version": "v1", "created": "Tue, 26 Jun 2018 14:39:48 GMT" } ]
2018-06-27T00:00:00
[ [ "Oualhadj", "Youssouf", "" ], [ "Tible", "Léo", "" ], [ "Varacca", "Daniele", "" ] ]
new_dataset
0.980511
1701.07501
Moshe Schwartz
Natalia Silberstein and Tuvi Etzion and Moshe Schwartz
Locality and Availability of Array Codes Constructed from Subspaces
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ever-increasing amounts of data are created and processed in internet-scale companies such as Google, Facebook, and Amazon. The efficient storage of such copious amounts of data has thus become a fundamental and acute problem in modern computing. No single machine can possibly satisfy such immense storage demands. Therefore, distributed storage systems (DSS), which rely on tens of thousands of storage nodes, are the only viable solution. Such systems are broadly used in all modern internet-scale systems. However, the design of a DSS poses a number of crucial challenges, markedly different from those of single-user storage systems. Such systems must be able to reconstruct the data efficiently, to overcome failures of servers, to correct errors, etc. Much research has been done in the last few years to address these challenges, and this research is growing in parallel with the increasing amount of stored data. The main goal of this paper is to consider codes which have two of the most important features of distributed storage systems, namely, locality and availability. Our codes are array codes which are based on subspaces of a linear space over a finite field. We present several constructions of such codes which are $q$-analogs of some of the known block codes. Some of these codes possess independent intellectual merit. We examine the locality and availability of the constructed codes. In particular, we distinguish between two types of locality and availability: node locality and availability vs. symbol locality and availability. To our knowledge, this is the first time that such a distinction is made in the literature.
[ { "version": "v1", "created": "Wed, 25 Jan 2017 21:55:29 GMT" }, { "version": "v2", "created": "Tue, 7 Feb 2017 21:01:31 GMT" }, { "version": "v3", "created": "Sat, 23 Jun 2018 08:03:57 GMT" } ]
2018-06-26T00:00:00
[ [ "Silberstein", "Natalia", "" ], [ "Etzion", "Tuvi", "" ], [ "Schwartz", "Moshe", "" ] ]
new_dataset
0.996535
1706.00765
Yiannis Kantaros
Yiannis Kantaros, Meng Guo, Michael M. Zavlanos
Temporal Logic Task Planning and Intermittent Connectivity Control of Mobile Robot Networks
null
null
null
null
cs.RO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a distributed intermittent communication and task planning framework for mobile robot teams. The goal of the robots is to accomplish complex tasks, captured by local Linear Temporal Logic formulas, and share the collected information with all other robots and possibly also with a user. Specifically, we consider situations where the robot communication capabilities are not sufficient to form reliable and connected networks while the robots move to accomplish their tasks. In this case, intermittent communication protocols are necessary that allow the robots to temporarily disconnect from the network in order to accomplish their tasks free of communication constraints. We assume that the robots can only communicate with each other when they meet at common locations in space. Our distributed control framework jointly determines local plans that allow all robots to fulfill their assigned temporal tasks, sequences of communication events that guarantee information exchange infinitely often, and optimal communication locations that minimize a desired distance metric. Simulation results verify the efficacy of the proposed controllers.
[ { "version": "v1", "created": "Fri, 2 Jun 2017 17:26:22 GMT" }, { "version": "v2", "created": "Sun, 24 Dec 2017 16:38:13 GMT" }, { "version": "v3", "created": "Sun, 24 Jun 2018 08:56:54 GMT" } ]
2018-06-26T00:00:00
[ [ "Kantaros", "Yiannis", "" ], [ "Guo", "Meng", "" ], [ "Zavlanos", "Michael M.", "" ] ]
new_dataset
0.999137
1802.00752
Alexey Shvets
Alexander Rakhlin, Alexey Shvets, Vladimir Iglovikov and Alexandr A. Kalinin
Deep Convolutional Neural Networks for Breast Cancer Histology Image Analysis
8 pages, 4 figures
null
10.1007/978-3-319-93000-8_83
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Breast cancer is one of the main causes of cancer death worldwide. Early diagnosis significantly increases the chances of correct treatment and survival, but this process is tedious and often leads to disagreement between pathologists. Computer-aided diagnosis systems have shown potential for improving diagnostic accuracy. In this work, we develop a computational approach based on deep convolutional neural networks for breast cancer histology image classification. A hematoxylin and eosin stained breast histology microscopy image dataset is provided as part of the ICIAR 2018 Grand Challenge on Breast Cancer Histology Images. Our approach utilizes several deep neural network architectures and a gradient boosted trees classifier. For the 4-class classification task, we report 87.2% accuracy. For the 2-class classification task to detect carcinomas, we report 93.8% accuracy, AUC 97.3%, and sensitivity/specificity of 96.5%/88.0% at the high-sensitivity operating point. To our knowledge, this approach outperforms other common methods in automated histopathological image classification. The source code for our approach is made publicly available at https://github.com/alexander-rakhlin/ICIAR2018
[ { "version": "v1", "created": "Fri, 2 Feb 2018 16:20:58 GMT" }, { "version": "v2", "created": "Tue, 3 Apr 2018 13:59:59 GMT" } ]
2018-06-26T00:00:00
[ [ "Rakhlin", "Alexander", "" ], [ "Shvets", "Alexey", "" ], [ "Iglovikov", "Vladimir", "" ], [ "Kalinin", "Alexandr A.", "" ] ]
new_dataset
0.986555
1805.03852
Yanjing Wang
Yanjing Wang and Jeremy Seligman
When Names Are Not Commonly Known: Epistemic Logic with Assignments
18 pages, to appear in proceedings of AiML2018
null
null
null
cs.AI cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In standard epistemic logic, agent names are usually assumed to be common knowledge implicitly. This is unreasonable for various applications. Inspired by term modal logic and assignment operators in dynamic logic, we introduce a lightweight modal predicate logic where names can be non-rigid. The language can handle various de dicto and de re distinctions in a natural way. The main technical result is a complete axiomatisation of this logic over S5 models.
[ { "version": "v1", "created": "Thu, 10 May 2018 06:56:53 GMT" }, { "version": "v2", "created": "Sun, 24 Jun 2018 16:51:53 GMT" } ]
2018-06-26T00:00:00
[ [ "Wang", "Yanjing", "" ], [ "Seligman", "Jeremy", "" ] ]
new_dataset
0.995855
1805.04262
Yi-Min Chou
Yi-Min Chou, Chien-Hung Chen, Keng-Hao Liu, and Chu-Song Chen
Stingray Detection of Aerial Images Using Augmented Training Images Generated by A Conditional Generative Model
to appear in CVPR 2018 Workshop (CVPR 2018 Workshop and Challenge: Automated Analysis of Marine Video for Environmental Monitoring)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an object detection method that tackles the stingray detection problem in aerial images. In this problem, the images are captured aerially over a sea-surface area by an Unmanned Aerial Vehicle (UAV), and the stingrays swimming under (but close to) the sea surface are the targets we want to detect and locate. To this end, we use a deep object detection method, Faster R-CNN, to train a stingray detector on a limited training set of images. To boost the performance, we develop a new generative approach, conditional GLO, to increase the number of stingray training samples; it is an extension of the Generative Latent Optimization (GLO) approach. Unlike traditional data augmentation methods that generate new data only for image classification, our proposed method, which mixes foreground and background together, can generate new data for an object detection task, and thus improve the training efficacy of a CNN detector. Experimental results show that satisfactory performance can be obtained by using our approach for stingray detection in aerial images.
[ { "version": "v1", "created": "Fri, 11 May 2018 07:29:23 GMT" }, { "version": "v2", "created": "Thu, 14 Jun 2018 03:24:23 GMT" }, { "version": "v3", "created": "Mon, 25 Jun 2018 06:44:41 GMT" } ]
2018-06-26T00:00:00
[ [ "Chou", "Yi-Min", "" ], [ "Chen", "Chien-Hung", "" ], [ "Liu", "Keng-Hao", "" ], [ "Chen", "Chu-Song", "" ] ]
new_dataset
0.993607
1806.08862
Yaman Umuroglu
Yaman Umuroglu, Lahiru Rasnayake, Magnus Sjalander
BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing
To appear at FPL'18
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Matrix-matrix multiplication is a key computational kernel for numerous applications in science and engineering, with ample parallelism and data locality that lends itself well to high-performance implementations. Many matrix multiplication-dependent applications can use reduced-precision integer or fixed-point representations to increase their performance and energy efficiency while still offering adequate quality of results. However, precision requirements may vary between different application phases or depend on input data, rendering constant-precision solutions ineffective. We present BISMO, a vectorized bit-serial matrix multiplication overlay for reconfigurable computing. BISMO utilizes the excellent binary-operation performance of FPGAs to offer a matrix multiplication performance that scales with required precision and parallelism. We characterize the resource usage and performance of BISMO across a range of parameters to build a hardware cost model, and demonstrate a peak performance of 6.5 TOPS on the Xilinx PYNQ-Z1 board.
[ { "version": "v1", "created": "Fri, 22 Jun 2018 21:30:05 GMT" } ]
2018-06-26T00:00:00
[ [ "Umuroglu", "Yaman", "" ], [ "Rasnayake", "Lahiru", "" ], [ "Sjalander", "Magnus", "" ] ]
new_dataset
0.999646
1806.08928
Zhiyong Chen
Yaping Sun, Zhiyong Chen, Meixia Tao and Hui Liu
Communications, Caching and Computing for Mobile Virtual Reality: Modeling and Tradeoff
submitted to IEEE JSAC, and the paper was presented in part at IEEE ICC 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual reality (VR) over wireless is emerging as an important use case of 5G networks. An immersive VR experience requires the delivery of huge amounts of data at ultra-low latency, thus demanding an ultra-high transmission rate. This challenge can be largely addressed by the recent network architecture known as mobile edge computing (MEC), which enables caching and computing capabilities at the edge of wireless networks. This paper presents a novel MEC-based mobile VR delivery framework that is able to cache parts of the field of views (FOVs) in advance and run certain post-processing procedures on the mobile VR device. To optimize resource allocation at the mobile VR device, we formulate a joint caching and computing decision problem to minimize the average required transmission rate while meeting a given latency constraint. When FOVs are homogeneous, we obtain a closed-form expression for the optimal joint policy, which reveals interesting communications-caching-computing tradeoffs. When FOVs are heterogeneous, we obtain a local optimum of the problem by transforming it into a linearly constrained indefinite quadratic problem and then applying the concave-convex procedure. Numerical results demonstrate the great promise of the proposed mobile VR delivery framework in saving communication bandwidth while meeting low latency requirements.
[ { "version": "v1", "created": "Sat, 23 Jun 2018 08:22:08 GMT" } ]
2018-06-26T00:00:00
[ [ "Sun", "Yaping", "" ], [ "Chen", "Zhiyong", "" ], [ "Tao", "Meixia", "" ], [ "Liu", "Hui", "" ] ]
new_dataset
0.973273
1806.09063
Xi Chen
Xi Chen, Yiqun Liu, Liang Zhang, Krishnaram Kenthapadi
How LinkedIn Economic Graph Bonds Information and Product: Applications in LinkedIn Salary
10 pages, 5 figures
null
10.1145/3219819.3219921
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The LinkedIn Salary product was launched in late 2016 with the goal of providing insights on compensation distribution to job seekers, so that they can make more informed decisions when discovering and assessing career opportunities. The compensation insights are provided based on data collected from LinkedIn members and aggregated in a privacy-preserving manner. Given the simultaneous desire for computing robust, reliable insights and for having insights to satisfy as many job seekers as possible, a key challenge is to reliably infer the insights at the company level when there is limited or no data at all. We propose a two-step framework that utilizes a novel, semantic representation of companies (Company2vec) and a Bayesian statistical model to address this problem. Our approach makes use of the rich information present in the LinkedIn Economic Graph, and in particular, uses the intuition that two companies are likely to be similar if employees are very likely to transition from one company to the other and vice versa. We compute embeddings for companies by analyzing the LinkedIn members' company transition data using machine learning algorithms, then compute pairwise similarities between companies based on these embeddings, and finally incorporate company similarities in the form of peer company groups as part of the proposed Bayesian statistical model to predict insights at the company level. We perform extensive validation using several different evaluation techniques, and show that we can significantly increase the coverage of insights while, in fact, even improving the quality of the obtained insights. For example, we were able to compute salary insights for 35 times as many title-region-company combinations in the U.S. as compared to previous work, corresponding to 4.9 times as many monthly active users. Finally, we highlight the lessons learned from deployment of our system.
[ { "version": "v1", "created": "Sun, 24 Jun 2018 01:31:33 GMT" } ]
2018-06-26T00:00:00
[ [ "Chen", "Xi", "" ], [ "Liu", "Yiqun", "" ], [ "Zhang", "Liang", "" ], [ "Kenthapadi", "Krishnaram", "" ] ]
new_dataset
0.983702
1806.09111
Marco Squarcina
Stefano Calzavara (1), Riccardo Focardi (1), Matteo Maffei (2), Clara Schneidewind (2), Marco Squarcina (1), Mauro Tempesta (1) ((1) Universit\`a Ca' Foscari Venezia, (2) TU Wien)
WPSE: Fortifying Web Protocols via Browser-Side Security Monitoring
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present WPSE, a browser-side security monitor for web protocols designed to ensure compliance with the intended protocol flow, as well as confidentiality and integrity properties of messages. We formally prove that WPSE is expressive enough to protect web applications from a wide range of protocol implementation bugs and web attacks. We discuss concrete examples of attacks which can be prevented by WPSE on OAuth 2.0 and SAML 2.0, including a novel attack on the Google implementation of SAML 2.0 which we discovered by formalizing the protocol specification in WPSE. Moreover, we use WPSE to carry out an extensive experimental evaluation of OAuth 2.0 in the wild. Out of 90 tested websites, we identify security flaws in 55 websites (61.1%), including new critical vulnerabilities introduced by tracking libraries such as Facebook Pixel, all of which are fixable by WPSE. Finally, we show that WPSE works flawlessly on 83 websites (92.2%), with the 7 compatibility issues being caused by custom implementations deviating from the OAuth 2.0 specification, one of which introduces a critical vulnerability.
[ { "version": "v1", "created": "Sun, 24 Jun 2018 09:18:34 GMT" } ]
2018-06-26T00:00:00
[ [ "Calzavara", "Stefano", "" ], [ "Focardi", "Riccardo", "" ], [ "Maffei", "Matteo", "" ], [ "Schneidewind", "Clara", "" ], [ "Squarcina", "Marco", "" ], [ "Tempesta", "Mauro", "" ] ]
new_dataset
0.986496
1806.09115
Ahmed Arif
Ohoud Alharbi, Ahmed Sabbir Arif
The Perception of Humanoid Robots for Domestic Use in Saudi Arabia
In CHI 2018 Workshop on Exploring Participatory Design Methods to Engage with Arab Communities (April 22, 2018). Montr\'eal, QC, Canada, 6 pages
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose research to investigate Saudi people's perception of humanoid domestic robots and their attitude towards the possibility of having one in their house. Through a series of questionnaires, semi-structured interviews, focus groups, and participatory design sessions, this research will explore Saudi people's level of acceptance of domestic robots, the tasks and responsibilities they would feel comfortable assigning to these robots, their preferred appearance of domestic robots, and the cultural stereotypes they feel a domestic robot must mimic.
[ { "version": "v1", "created": "Sun, 24 Jun 2018 09:42:43 GMT" } ]
2018-06-26T00:00:00
[ [ "Alharbi", "Ohoud", "" ], [ "Arif", "Ahmed Sabbir", "" ] ]
new_dataset
0.998341
1806.09285
Michael Haythorpe
Pouya Baniasadi, Vladimir Ejov, Michael Haythorpe and Serguei Rossomakhine
A new benchmark set for Traveling salesman problem and Hamiltonian cycle problem
21 pages, 11 tables
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a benchmark set for the Traveling Salesman Problem (TSP) with characteristics that differ from those of the existing benchmark sets. In particular, we focus on small instances which prove to be challenging for one or more state-of-the-art TSP algorithms. These instances are based on difficult instances of the Hamiltonian Cycle Problem (HCP). This includes instances from the literature, specially modified randomly generated instances, and instances arising from the conversion of other difficult problems to HCP. We demonstrate that such benchmark instances are helpful in understanding the weaknesses and strengths of algorithms. In particular, we conduct a benchmarking exercise for this new benchmark set totalling over five years of CPU time, comparing the TSP algorithms Concorde, Chained Lin-Kernighan, and LKH. We also include the HCP heuristic SLH in the benchmarking exercise. A discussion of the benefits of specifically considering outlying instances, and in particular instances which are unusually difficult relative to their size, is also included.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 04:48:34 GMT" } ]
2018-06-26T00:00:00
[ [ "Baniasadi", "Pouya", "" ], [ "Ejov", "Vladimir", "" ], [ "Haythorpe", "Michael", "" ], [ "Rossomakhine", "Serguei", "" ] ]
new_dataset
0.999508
1806.09339
Peng Gao
Peng Gao, Xusheng Xiao, Ding Li, Zhichun Li, Kangkook Jee, Zhenyu Wu, Chung Hwan Kim, Sanjeev R. Kulkarni, Prateek Mittal
SAQL: A Stream-based Query System for Real-Time Abnormal System Behavior Detection
Accepted paper at USENIX Security Symposium 2018
null
null
null
cs.CR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, advanced cyber attacks, which consist of a sequence of steps involving many vulnerabilities and hosts, have compromised the security of many well-protected businesses. This has led to solutions that ubiquitously monitor system activities in each host (big data) as a series of events and search for anomalies (abnormal behaviors) to triage risky events. Since fighting these attacks is a time-critical mission to prevent further damage, these solutions face challenges in incorporating expert knowledge to perform timely anomaly detection over the large-scale provenance data. To address these challenges, we propose a novel stream-based query system that takes as input a real-time event feed aggregated from multiple hosts in an enterprise, and provides an anomaly query engine that queries the event feed to identify abnormal behaviors based on the specified anomalies. To facilitate the task of expressing anomalies based on expert knowledge, our system provides a domain-specific query language, SAQL, which allows analysts to express models for (1) rule-based anomalies, (2) time-series anomalies, (3) invariant-based anomalies, and (4) outlier-based anomalies. We deployed our system in NEC Labs America, comprising 150 hosts, and evaluated it using 1.1TB of real system monitoring data (containing 3.3 billion events). Our evaluations on a broad set of attack behaviors and micro-benchmarks show that our system has low detection latency (<2s) and high system throughput (110,000 events/s; supporting ~4000 hosts), and is more efficient in memory utilization than existing stream-based complex event processing systems.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 09:15:11 GMT" } ]
2018-06-26T00:00:00
[ [ "Gao", "Peng", "" ], [ "Xiao", "Xusheng", "" ], [ "Li", "Ding", "" ], [ "Li", "Zhichun", "" ], [ "Jee", "Kangkook", "" ], [ "Wu", "Zhenyu", "" ], [ "Kim", "Chung Hwan", "" ], [ "Kulkarni", "Sanjeev R.", "" ], [ "Mittal", "Prateek", "" ] ]
new_dataset
0.998681
1806.09455
Hector Geffner
Tomas Geffner and Hector Geffner
Compact Policies for Fully-Observable Non-Deterministic Planning as SAT
null
Proc. ICAPS 2018
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fully observable non-deterministic (FOND) planning is becoming increasingly important as an approach for computing proper policies in probabilistic planning, extended temporal plans in LTL planning, and general plans in generalized planning. In this work, we introduce a SAT encoding for FOND planning that is compact and can produce compact strong cyclic policies. Simple variations of the encoding are also introduced for strong planning and for what we call dual FOND planning, where some non-deterministic actions are assumed to be fair (e.g., probabilistic) and others unfair (e.g., adversarial). The resulting FOND planners are compared empirically with existing planners over existing and new benchmarks. The notion of "probabilistic interesting problems" is also revisited to yield a more comprehensive picture of the strengths and limitations of current FOND planners and the proposed SAT approach.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 13:51:04 GMT" } ]
2018-06-26T00:00:00
[ [ "Geffner", "Tomas", "" ], [ "Geffner", "Hector", "" ] ]
new_dataset
0.998601
1806.09458
Yan Ding
Yan Ding
Eco-Route: Recommending Economical Driving Routes For Plug-in Hybrid Electric Vehicles
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High fuel costs place an economic burden on drivers. Plug-In Hybrid Electric Vehicles (PHEVs) consume two fuel sources (i.e., gasoline and electricity) with floating prices. To reduce drivers' total fuel cost, recommending economical routes to them is an effective approach. In this paper, we present a novel economical path-planning framework called Eco-Route, which consists of two phases. In the first phase, we build a driving route cost model (DRCM) for each PHEV (and driver) under the energy management strategy, based on driving conditions and the vehicle's parameters. In the second phase, with real-time traffic information collected in a mobile crowdsensing manner, we are able to estimate and compare the driving cost of the shortest and the fastest routes for a given PHEV, and then recommend the more economical one to the driver. We evaluate the two-phase framework using 8 different PHEVs simulated in Matlab/Simulink, and real-world datasets consisting of the road network, POI, and GPS trajectory data generated by 559 taxis over seven days in Beijing, China. Experimental results demonstrate that the proposed model achieves good accuracy, with a mean cost error of less than 8% when the path length exceeds 5 km. Moreover, users could save about 9% of driving cost on average by driving along the suggested routes in our case studies.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 13:56:52 GMT" } ]
2018-06-26T00:00:00
[ [ "Ding", "Yan", "" ] ]
new_dataset
0.998628
1806.09476
Christian Attiogb\'e
Christian Attiogb\'e
Building Correct SDN-Based Components from a Global Formal Model
16 pages; 2 figures (under polishing for submission)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software Defined Networking (SDN) brings flexibility to the construction and management of distributed applications by reducing the constraints imposed by physical networks and by moving the control of networks closer to the applications. However, mastering SDN still poses numerous challenges, among which is the design of correct SDN components (more specifically, controllers and switches). In this work we use a formal stepwise approach to model and reason about SDN. Although formal approaches have already been used in this area, this contribution is the first state-based approach; it is based on the Event-B formal method, and it enables the correct-by-construction design of SDN components. We provide the steps to build, using several refinements, a global formal model of an SDN system; correct SDN components are then systematically built from the global formal model satisfying the desired properties. Event-B is used to experiment with the approach.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 14:07:43 GMT" } ]
2018-06-26T00:00:00
[ [ "Attiogbé", "Christian", "" ] ]
new_dataset
0.997186
1806.09545
Oliver Sander
Christian Engwer, Carsten Gr\"aser, Steffen M\"uthing, Oliver Sander
Function space bases in the dune-functions module
null
null
null
null
cs.MS cs.NA
http://creativecommons.org/licenses/by/4.0/
The dune-functions Dune module provides interfaces for functions and function space bases. It forms one abstraction level above grids, shape functions, and linear algebra, and provides infrastructure for full discretization frameworks like dune-pdelab and dune-fem. This document describes the function space bases provided by dune-functions. These are based on an abstract description of bases for product spaces as trees of simpler bases. From this description, many different numberings of degrees of freedom by multi-indices can be derived in a natural way. We describe the abstract concepts, document the programmer interface, and give a complete example program that solves the stationary Stokes equation using Taylor-Hood elements.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 15:58:53 GMT" } ]
2018-06-26T00:00:00
[ [ "Engwer", "Christian", "" ], [ "Gräser", "Carsten", "" ], [ "Müthing", "Steffen", "" ], [ "Sander", "Oliver", "" ] ]
new_dataset
0.999481
1806.09565
Shuo Liu
Shuo Liu, Vijay John, Erik Blasch, Zheng Liu, Ying Huang
IR2VI: Enhanced Night Environmental Perception by Unsupervised Thermal Image Translation
Present at CVPR Workshops 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context enhancement is critical for night vision (NV) applications, especially for the dark night situation without any artificial lights. In this paper, we present the infrared-to-visual (IR2VI) algorithm, a novel unsupervised thermal-to-visible image translation framework based on generative adversarial networks (GANs). IR2VI is able to learn the intrinsic characteristics from VI images and integrate them into IR images. Since the existing unsupervised GAN-based image translation approaches face several challenges, such as incorrect mapping and lack of fine details, we propose a structure connection module and a region-of-interest (ROI) focal loss method to address the current limitations. Experimental results show the superiority of the IR2VI algorithm over baseline methods.
[ { "version": "v1", "created": "Mon, 25 Jun 2018 16:57:00 GMT" } ]
2018-06-26T00:00:00
[ [ "Liu", "Shuo", "" ], [ "John", "Vijay", "" ], [ "Blasch", "Erik", "" ], [ "Liu", "Zheng", "" ], [ "Huang", "Ying", "" ] ]
new_dataset
0.981603
1802.04995
Jeremy Frey
J\'er\'emy Frey (IDC), May Grabli, Ronit Slyper (IDC), Jessica Cauchard (IDC)
Breeze: Sharing Biofeedback Through Wearable Technologies
null
CHI '18 - SIGCHI Conference on Human Factors in Computing System, Apr 2018, Montreal, Canada. 2018, https://chi2018.acm.org/
10.1145/3173574.3174219
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Digitally presenting physiological signals as biofeedback to users raises awareness of both body and mind. This paper describes the effectiveness of conveying a physiological signal often overlooked for communication: breathing. We present the design and development of digital breathing patterns and their evaluation along three output modalities: visual, audio, and haptic. We also present Breeze, a wearable pendant placed around the neck that measures breathing and sends biofeedback in real-time. We evaluated how the breathing patterns were interpreted in a fixed environment and gathered qualitative data on the wearable device's design. We found that participants intentionally modified their own breathing to match the biofeedback, as a technique for understanding the underlying emotion. Our results describe how the features of the breathing patterns and the feedback modalities influenced participants' perception. We include guidelines and suggested use cases, such as Breeze being used by loved ones to increase connectedness and empathy.
[ { "version": "v1", "created": "Wed, 14 Feb 2018 09:12:31 GMT" } ]
2018-06-25T00:00:00
[ [ "Frey", "Jérémy", "", "IDC" ], [ "Grabli", "May", "", "IDC" ], [ "Slyper", "Ronit", "", "IDC" ], [ "Cauchard", "Jessica", "", "IDC" ] ]
new_dataset
0.997168
1806.08379
Aditya Asgaonkar
Aditya Asgaonkar, Bhaskar Krishnamachari
Solving the Buyer and Seller's Dilemma: A Dual-Deposit Escrow Smart Contract for Provably Cheat-Proof Delivery and Payment for a Digital Good without a Trusted Mediator
null
null
null
null
cs.CR cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental problem for electronic commerce is the buying and selling of digital goods between individuals who may not know or trust each other. Traditionally, this problem has been addressed by the use of trusted third parties such as credit-card companies, mediated escrows, legal adjudication, or reputation systems. Despite the rise of blockchain protocols as a way to send payments without trusted third parties, the important problem of exchanging a digital good for payment without trusted third parties has received much less attention. We refer to this problem as the Buyer and Seller's Dilemma and present for it a dual-deposit escrow trade protocol which uses double-sided payment deposits in conjunction with simple cryptographic primitives, and which can be implemented using a blockchain-based smart contract. We analyze our protocol as an extensive-form game and prove that the subgame perfect Nash equilibrium of this game is for both the buyer and seller to cooperate and behave honestly. We address this problem under the assumption that the digital good being traded is known and verifiable, with a fixed price known to both parties.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 18:14:52 GMT" } ]
2018-06-25T00:00:00
[ [ "Asgaonkar", "Aditya", "" ], [ "Krishnamachari", "Bhaskar", "" ] ]
new_dataset
0.996064
1806.08420
Aqsa Kashaf
Aqsa Kashaf, Carolina Zarate, Hanrou Wang, Yuvraj Agarwal, Vyas Sekar
Oh, What a Fragile Web We Weave: Third-party Service Dependencies In Modern Webservices and Implications
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent October 2016 DDoS attack on Dyn served as a wake-up call to the security community, as many popular and independent webservices (e.g., Twitter, Spotify) were impacted. This incident raises a larger question about the fragility of modern webservices due to their dependence on third-party services. In this paper, we characterize the dependencies of popular webservices on third-party services and how these can lead to DoS and RoQ attacks, and a reduction in security posture. In particular, we focus on three critical infrastructure services: DNS, CDNs, and certificate authorities (CAs). We analyze both direct relationships (e.g., Twitter uses Dyn) and indirect dependencies (e.g., Netflix uses Symantec as OCSP and Symantec, in turn, uses Verisign for DNS). Our key findings are: (1) 73.14% of the top 100,000 popular services are vulnerable to reduction in availability due to potential attacks on the third-party DNS, CDN, or CA services that they exclusively rely on; (2) the use of third-party services is concentrated, so that if the top-10 providers of CDN, DNS, and OCSP services go down, they can potentially impact 25%-46% of the top 100K most popular webservices; (3) transitive dependencies significantly increase the set of webservices that exclusively depend on popular CDN and DNS service providers, in some cases by ten times; (4) targeting even less popular webservices can potentially cause significant collateral damage, affecting up to 20% of the top-100K webservices due to their shared dependencies. Based on our findings, we present a number of key implications and guidelines to guard against such Internet-scale incidents in the future.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 20:27:36 GMT" } ]
2018-06-25T00:00:00
[ [ "Kashaf", "Aqsa", "" ], [ "Zarate", "Carolina", "" ], [ "Wang", "Hanrou", "" ], [ "Agarwal", "Yuvraj", "" ], [ "Sekar", "Vyas", "" ] ]
new_dataset
0.997062
1806.08457
David Kavaler
David Kavaler, Premkumar Devanbu, Vladimir Filkov
Whom Are You Going to Call?: Determinants of @-Mentions in GitHub Discussions
12 pages, 5 figures, 2 tables
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open Source Software (OSS) project success relies on crowd contributions. When an issue arises in pull-request based systems, @-mentions are used to call on people to task; previous studies have shown that @-mentions in discussions are associated with faster issue resolution. In most projects there may be many developers who could technically handle a variety of tasks. But OSS supports dynamic teams distributed across a wide variety of social and geographic backgrounds, as well as levels of involvement. It is, then, important to know whom to call on, i.e., who can be relied or trusted with important task-related duties, and why. In this paper, we sought to understand which observable socio-technical attributes of developers can be used to build good models of them being future @-mentioned in GitHub issues and pull request discussions. We built overall and project-specific predictive models of future @-mentions, in order to capture the determinants of @-mentions in each of two hundred GitHub projects, and to understand if and how those determinants differ between projects. We found that visibility, expertise, and productivity are associated with an increase in @-mentions, while responsiveness is not, in the presence of a number of control variables. Also, we find that though project-specific differences exist, the overall model can be used for cross-project prediction, indicating its GitHub-wide utility.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 23:52:47 GMT" } ]
2018-06-25T00:00:00
[ [ "Kavaler", "David", "" ], [ "Devanbu", "Premkumar", "" ], [ "Filkov", "Vladimir", "" ] ]
new_dataset
0.97089
1806.08463
Alexander Wong
Rene Bidart and Alexander Wong
TriResNet: A Deep Triple-stream Residual Network for Histopathology Grading
9 pages
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While microscopic analysis of histopathological slides is generally considered the gold standard method for performing cancer diagnosis and grading, the current method of analysis is extremely time-consuming and labour-intensive, as it requires pathologists to visually inspect tissue samples in a detailed fashion for the presence of cancer. As such, there has been significant recent interest in computer-aided diagnosis systems for analysing histopathological slides for cancer grading, to help pathologists perform cancer diagnosis and grading in a more efficient, accurate, and consistent manner. In this work, we investigate and explore a deep triple-stream residual network (TriResNet) architecture for the purpose of tile-level histopathology grading, which is the critical first step to computer-aided whole-slide histopathology grading. In particular, the design mentality behind the proposed TriResNet network architecture is to facilitate the learning of a more diverse set of quantitative features to better characterize the complex tissue characteristics found in histopathology samples. Experimental results on two widely-used computer-aided histopathology benchmark datasets (CAMELYON16 dataset and Invasive Ductal Carcinoma (IDC) dataset) demonstrated that the proposed TriResNet network architecture was able to achieve noticeably improved accuracies when compared with two other state-of-the-art deep convolutional neural network architectures. Based on these promising results, the hope is that the proposed TriResNet network architecture could become a useful tool to aid pathologists in increasing the consistency, speed, and accuracy of the histopathology grading process.
[ { "version": "v1", "created": "Fri, 22 Jun 2018 01:18:14 GMT" } ]
2018-06-25T00:00:00
[ [ "Bidart", "Rene", "" ], [ "Wong", "Alexander", "" ] ]
new_dataset
0.960512
1806.08471
Jasmine DeHart
Jasmine DeHart and Christan Grant
Visual Content Privacy Leaks on Social Media Networks
2 pages, 3 figures, IEEE Security and Privacy Conference, Poster
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the growth and accessibility of mobile devices and internet, the ease of posting and sharing content on social media networks (SMNs) has increased exponentially. Many users post images that contain "privacy leaks" regarding themselves or someone else. Privacy leaks include any instance in which personally identifying visual content is shared on SMNs. Private visual content (images and videos) exposes intimate information that can be detrimental to your finances, personal life, and reputation. Private visual content can include baby faces, credit cards, social security cards, house keys and others. The Hawaii Emergency Agency example provides evidence that visual content privacy leaks can happen at an individual or organization level. We find that monitoring techniques are essential for the improvement of private life and the development of future techniques. More extensive and enduring techniques will allow typical users, organizations, and the government to have a positive social media footprint.
[ { "version": "v1", "created": "Fri, 22 Jun 2018 02:39:45 GMT" } ]
2018-06-25T00:00:00
[ [ "DeHart", "Jasmine", "" ], [ "Grant", "Christan", "" ] ]
new_dataset
0.992336
1806.08544
Simon Lucas
Simon M. Lucas
Game AI Research with Fast Planet Wars Variants
To appear in Proceedings of IEEE Conference on Computational Intelligence and Games, 2018
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a new implementation of Planet Wars, designed from the outset for Game AI research. The skill-depth of the game makes it a challenge for game-playing agents, and the speed of more than 1 million game ticks per second enables rapid experimentation and prototyping. The parameterised nature of the game together with an interchangeable actuator model make it well suited to automated game tuning. The game is designed to be fun to play for humans, and is directly playable by General Video Game AI agents.
[ { "version": "v1", "created": "Fri, 22 Jun 2018 08:18:53 GMT" } ]
2018-06-25T00:00:00
[ [ "Lucas", "Simon M.", "" ] ]
new_dataset
0.995105
1806.08612
Shervin Minaee
Shervin Minaee, Imed Bouazizi, Prakash Kolan, Hossein Najafzadeh
Ad-Net: Audio-Visual Convolutional Neural Network for Advertisement Detection In Videos
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Personalized advertisement is a crucial task for many online businesses and video broadcasters. Many of today's broadcasters use the same commercial for all customers, but as one can imagine, different viewers have different interests, and it seems reasonable to have customized commercials for different groups of people, chosen based on their demographic features and history. In this project, we propose a framework which gets the broadcast videos, analyzes them, detects the commercial, and replaces it with a more suitable commercial. We propose a two-stream audio-visual convolutional neural network, in which one branch analyzes the visual information and the other analyzes the audio information; the audio and visual embeddings are then fused together and used for commercial detection and content categorization. We show that using both the visual and audio content of the videos significantly improves the model performance for video analysis. This network is trained on a dataset of more than 50k regular video and commercial shots, and achieved much better performance compared to models based on hand-crafted features.
[ { "version": "v1", "created": "Fri, 22 Jun 2018 11:52:57 GMT" } ]
2018-06-25T00:00:00
[ [ "Minaee", "Shervin", "" ], [ "Bouazizi", "Imed", "" ], [ "Kolan", "Prakash", "" ], [ "Najafzadeh", "Hossein", "" ] ]
new_dataset
0.997308
1806.08730
Nitish Shirish Keskar
Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher
The Natural Language Decathlon: Multitask Learning as Question Answering
null
null
null
null
cs.CL cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, zero-shot relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new Multitask Question Answering Network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters in the multitask setting. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that the MQAN's multi-pointer-generator decoder is key to this success and that performance further improves with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting. We also release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 16:39:26 GMT" } ]
2018-06-25T00:00:00
[ [ "McCann", "Bryan", "" ], [ "Keskar", "Nitish Shirish", "" ], [ "Xiong", "Caiming", "" ], [ "Socher", "Richard", "" ] ]
new_dataset
0.999159
1508.02521
Sajid Ullah
Sajid Ullah, Mussarat Wahid
Topology Control of wireless sensor network using Quantum Inspired Genetic algorithm
4 Figures/6 pages
International Journal of Swarm Intelligence and Evolutionary Computation :2015
10.4172/2090-4908.1000121
null
cs.NE cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, an evolving Linked Quantum register is introduced: a group vector of binary gene pairs which, in its local proximity, represents those nodes that have high connectivity while keeping energy consumption low, and which is taken into account for topology control. The register works in a higher dimension. Here an order-2 Quantum-inspired genetic algorithm is used; higher orders can also be used to achieve greater versatility in topology control of nodes. Numerical results are obtained and analyzed against results previously obtained with a Quantum genetic algorithm, and the results are compared. For future work, a factor is hinted at which would allow the algorithm to work on more computationally intensive problems.
[ { "version": "v1", "created": "Tue, 11 Aug 2015 08:53:06 GMT" }, { "version": "v2", "created": "Thu, 3 May 2018 14:01:52 GMT" }, { "version": "v3", "created": "Wed, 20 Jun 2018 19:11:15 GMT" } ]
2018-06-22T00:00:00
[ [ "Ullah", "Sajid", "" ], [ "Wahid", "Mussarat", "" ] ]
new_dataset
0.982681
1709.01152
L\'aszl\'o Kozma
Dani Dorfman, Haim Kaplan, L\'aszl\'o Kozma, Uri Zwick
Pairing heaps: the forward variant
small fixes
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pairing heap is a classical heap data structure introduced in 1986 by Fredman, Sedgewick, Sleator, and Tarjan. It is remarkable both for its simplicity and for its excellent performance in practice. The "magic" of pairing heaps lies in the restructuring that happens after the deletion of the smallest item. The resulting collection of trees is consolidated in two rounds: a left-to-right pairing round, followed by a right-to-left accumulation round. Fredman et al. showed, via an elegant correspondence to splay trees, that in a pairing heap of size $n$ all operations take $O(\log{n})$ amortized time. They also proposed an arguably more natural variant, where both pairing and accumulation are performed in a combined left-to-right round (called the forward variant of pairing heaps). The analogy to splaying breaks down in this case, and the analysis of the forward variant was left open. In this paper we show that inserting an item and deleting the minimum in a forward-variant pairing heap both take amortized time $O(\log{n} \cdot 4^{\sqrt{\log{n}}} )$. This is the first improvement over the $O(\sqrt{n})$ bound showed by Fredman et al. three decades ago. Our analysis relies on a new potential function that tracks parent-child rank-differences in the heap.
[ { "version": "v1", "created": "Mon, 4 Sep 2017 20:57:44 GMT" }, { "version": "v2", "created": "Thu, 21 Jun 2018 10:56:43 GMT" } ]
2018-06-22T00:00:00
[ [ "Dorfman", "Dani", "" ], [ "Kaplan", "Haim", "" ], [ "Kozma", "László", "" ], [ "Zwick", "Uri", "" ] ]
new_dataset
0.997463
1710.01269
Christian Samuel Perone
Christian S. Perone, Evan Calabrese, Julien Cohen-Adad
Spinal cord gray matter segmentation using deep dilated convolutions
13 pages, 8 figures
null
10.1038/s41598-018-24304-3
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Gray matter (GM) tissue changes have been associated with a wide range of neurological disorders and were also recently found relevant as a biomarker for disability in amyotrophic lateral sclerosis. The ability to automatically segment the GM is, therefore, an important task for modern studies of the spinal cord. In this work, we devise a modern, simple and end-to-end fully automated human spinal cord gray matter segmentation method using Deep Learning, that works both on in vivo and ex vivo MRI acquisitions. We evaluate our method against six independently developed methods on a GM segmentation challenge and report state-of-the-art results in 8 out of 10 different evaluation metrics as well as major network parameter reduction when compared to the traditional medical imaging architectures such as U-Nets.
[ { "version": "v1", "created": "Mon, 2 Oct 2017 16:25:14 GMT" } ]
2018-06-22T00:00:00
[ [ "Perone", "Christian S.", "" ], [ "Calabrese", "Evan", "" ], [ "Cohen-Adad", "Julien", "" ] ]
new_dataset
0.960867
1710.07774
Shaofeng Jiang
T-H. Hubert Chan, Haotian Jiang, Shaofeng H.-C. Jiang
A Unified PTAS for Prize Collecting TSP and Steiner Tree Problem in Doubling Metrics
Appeared in ESA 2018. This is the full version
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a unified polynomial-time approximation scheme (PTAS) for the prize collecting traveling salesman problem (PCTSP) and the prize collecting Steiner tree problem (PCSTP) in doubling metrics. Given a metric space and a penalty function on a subset of points known as terminals, a solution is a subgraph on points in the metric space, whose cost is the weight of its edges plus the penalty due to terminals not covered by the subgraph. Under our unified framework, the solution subgraph needs to be Eulerian for PCTSP, while it needs to be connected for PCSTP. Before our work, even a QPTAS for the problems in doubling metrics is not known. Our unified PTAS is based on the previous dynamic programming frameworks proposed in [Talwar STOC 2004] and [Bartal, Gottlieb, Krauthgamer STOC 2012]. However, since it is unknown which part of the optimal cost is due to edge lengths and which part is due to penalties of uncovered terminals, we need to develop new techniques to apply previous divide-and-conquer strategies and sparse instance decompositions.
[ { "version": "v1", "created": "Sat, 21 Oct 2017 08:33:37 GMT" }, { "version": "v2", "created": "Wed, 20 Jun 2018 18:29:18 GMT" } ]
2018-06-22T00:00:00
[ [ "Chan", "T-H. Hubert", "" ], [ "Jiang", "Haotian", "" ], [ "Jiang", "Shaofeng H. -C.", "" ] ]
new_dataset
0.998567
1806.03133
Cyril Banderier
Cyril Banderier (LIPN), Philippe Marchal (LRGP), Michael Wallner (TU WIEN)
Periodic P\'olya urns and an application to Young tableaux
null
Leibniz International Proceedings in Informatics (LIPIcs), 29th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2018), pp.1-12
10.4230/LIPIcs.AofA.2018.11
null
cs.DM math.CO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
P{\'o}lya urns are urns where at each unit of time a ball is drawn and is replaced with some other balls according to its colour. We introduce a more general model: The replacement rule depends on the colour of the drawn ball and the value of the time (mod p). We discuss some intriguing properties of the differential operators associated to the generating functions encoding the evolution of these urns. The initial partial differential equation indeed leads to ordinary linear differential equations and we prove that the moment generating functions are D-finite. For a subclass, we exhibit a closed form for the corresponding generating functions (giving the exact state of the urns at time n). When the time goes to infinity, we show that these periodic P{\'o}lya urns follow a rich variety of behaviours: their asymptotic fluctuations are described by a family of distributions, the generalized Gamma distributions, which can also be seen as powers of Gamma distributions. En passant, we establish some enumerative links with other combinatorial objects, and we give an application for a new result on the asymptotics of Young tableaux: This approach allows us to prove that the law of the lower right corner in a triangular Young tableau follows asymptotically a product of generalized Gamma distributions.
[ { "version": "v1", "created": "Fri, 8 Jun 2018 13:11:56 GMT" } ]
2018-06-22T00:00:00
[ [ "Banderier", "Cyril", "", "LIPN" ], [ "Marchal", "Philippe", "", "LRGP" ], [ "Wallner", "Michael", "", "TU\n WIEN" ] ]
new_dataset
0.99904
1806.04932
Josep L. Rossello
Alejandro Mor\'an, Christiam F. Frasser and Josep L. Rossell\'o
Reservoir Computing Hardware with Cellular Automata
20 pages, 11 figures, draft of an article currently submitted to IEEE journal
null
null
null
cs.NE cs.CV nlin.CG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Elementary cellular automata (ECA) are a widely studied one-dimensional processing methodology in which the successive iteration of the automaton may produce rich pattern dynamics. Recently, cellular automata have been proposed as a feasible way to implement Reservoir Computing (RC) systems, in which the automaton rule is fixed and the training is performed using a linear regression. In this work we perform an exhaustive study of the performance of the different ECA rules when applied to pattern recognition of time-independent input signals using an RC scheme. Once the different ECA rules have been tested, the most accurate one (rule 90) is selected to implement a digital circuit. Rule 90 is easily reproduced using a reduced set of XOR gates and shift registers, thus representing a high-performance alternative for RC hardware implementation in terms of processing time, circuit area, power dissipation and system accuracy. The model (both in software and its hardware implementation) has been tested using a pattern recognition task of handwritten numbers (the MNIST database), for which we obtained competitive results in terms of accuracy, speed and power dissipation. The proposed model can be considered to be a low-cost method to implement fast pattern recognition digital circuits.
[ { "version": "v1", "created": "Wed, 13 Jun 2018 10:28:44 GMT" }, { "version": "v2", "created": "Thu, 21 Jun 2018 09:23:43 GMT" } ]
2018-06-22T00:00:00
[ [ "Morán", "Alejandro", "" ], [ "Frasser", "Christiam F.", "" ], [ "Rosselló", "Josep L.", "" ] ]
new_dataset
0.989135
1806.07916
Sean MacAvaney
Sean MacAvaney, Bart Desmet, Arman Cohan, Luca Soldaini, Andrew Yates, Ayah Zirikly, Nazli Goharian
RSDD-Time: Temporal Annotation of Self-Reported Mental Health Diagnoses
6 pages, accepted for publication at the CLPsych workshop at NAACL-HLT 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-reported diagnosis statements have been widely employed in studying language related to mental health in social media. However, existing research has largely ignored the temporality of mental health diagnoses. In this work, we introduce RSDD-Time: a new dataset of 598 manually annotated self-reported depression diagnosis posts from Reddit that include temporal information about the diagnosis. Annotations include whether a mental health condition is present and how recently the diagnosis happened. Furthermore, we include exact temporal spans that relate to the date of diagnosis. This information is valuable for various computational methods to examine mental health through social media because one's mental health state is not static. We also test several baseline classification and extraction approaches, which suggest that extracting temporal information from self-reported diagnosis statements is challenging.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 18:18:52 GMT" } ]
2018-06-22T00:00:00
[ [ "MacAvaney", "Sean", "" ], [ "Desmet", "Bart", "" ], [ "Cohan", "Arman", "" ], [ "Soldaini", "Luca", "" ], [ "Yates", "Andrew", "" ], [ "Zirikly", "Ayah", "" ], [ "Goharian", "Nazli", "" ] ]
new_dataset
0.99702
1806.07977
Gabriela Ramirez-De-La-Rosa
Gabriela Ram\'irez-de-la-Rosa, Esa\'u Villatoro-Tello, H\'ector Jim\'enez-Salazar
TxPI-u: A Resource for Personality Identification of Undergraduates
null
Journal of Intelligent & Fuzzy Systems, vol. 34, no. 5, pp. 2991-3001, 2018
10.3233/JIFS-169484
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resources such as labeled corpora are necessary to train automatic models within the natural language processing (NLP) field. Historically, a large number of resources regarding a broad range of problems are available mostly in English. One such problem is known as Personality Identification, where based on a psychological model (e.g. the Big Five Model), the goal is to find the traits of a subject's personality given, for instance, a text written by that subject. In this paper we introduce a new corpus in Spanish called Texts for Personality Identification (TxPI). This corpus will help to develop models to automatically assign a personality trait to an author of a text document. Our corpus, TxPI-u, contains information on 416 Mexican undergraduate students with some demographic information such as age, gender, and the academic program they are enrolled in. Finally, as an additional contribution, we present a set of baselines to provide a comparison scheme for further research.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 20:31:47 GMT" } ]
2018-06-22T00:00:00
[ [ "Ramírez-de-la-Rosa", "Gabriela", "" ], [ "Villatoro-Tello", "Esaú", "" ], [ "Jiménez-Salazar", "Héctor", "" ] ]
new_dataset
0.985765
1806.08027
Tong-Xing Zheng
Tong-Xing Zheng, Hui-Ming Wang, and Jinhong Yuan
Physical-Layer Security in Cache-Enabled Cooperative Small Cell Networks Against Randomly Distributed Eavesdroppers
14 pages, 10 figures, accepted for publication on IEEE Transactions on Wireless Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the physical-layer security in a small cell network (SCN) with cooperative cache-enabled small base stations (SBSs) in the presence of randomly distributed eavesdroppers. We propose a joint design on the caching placement and the physical-layer transmission to improve the secure content delivery probability (SCDP). We first put forward a hybrid caching placement strategy in which a proportion of the cache unit in each SBS is assigned to store the most popular files (MPFs), while the remaining is used to cache the disjoint subfiles (DSFs) of the less popular files in different SBSs as a means to enhance transmission secrecy and content diversity. We then introduce two coordinated multi-point (CoMP) techniques, namely, joint transmission (JT) and orthogonal transmission (OT), to deliver the MPFs and DSFs, respectively. We derive analytical expressions for the SCDP in each transmission scheme, considering both non-colluding and colluding eavesdropping scenarios. Based on the obtained analytical results, we jointly design the optimal transmission rates and the optimal caching assignment for maximizing the overall SCDP. Various insights into the optimal transmission and caching designs are further provided. Numerical results are also presented to verify our theoretical findings and to demonstrate the superiority of the proposed caching and transmission strategies.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 00:52:36 GMT" } ]
2018-06-22T00:00:00
[ [ "Zheng", "Tong-Xing", "" ], [ "Wang", "Hui-Ming", "" ], [ "Yuan", "Jinhong", "" ] ]
new_dataset
0.959654
1806.08136
Nicolas Gastineau
Nicolas Gastineau (Le2i), Olivier Togni (Le2i)
Coloring of the dth power of the face-centered cubic grid
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The face-centered cubic grid is a three-dimensional 12-regular infinite grid. This graph represents an optimal way to pack spheres in three-dimensional space. In this grid, the vertices represent the spheres and the edges represent the contact between spheres. We give lower and upper bounds on the chromatic number of the dth power of the face-centered cubic grid. In particular, in the case d = 2 we prove that the chromatic number of this grid is 13. We also determine sharper bounds for d = 3 and for subgraphs of the face-centered cubic grid.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 09:37:20 GMT" } ]
2018-06-22T00:00:00
[ [ "Gastineau", "Nicolas", "", "Le2i" ], [ "Togni", "Olivier", "", "Le2i" ] ]
new_dataset
0.982433
1806.08152
Alessandro Masullo
Alessandro Masullo, Tilo Burghardt, Dima Damen, Sion Hannuna, Victor Ponce-L\'opez, Majid Mirmehdi
CaloriNet: From silhouettes to calorie estimation in private environments
11 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel deep fusion architecture, CaloriNet, for the online estimation of energy expenditure for free living monitoring in private environments, where RGB data is discarded and replaced by silhouettes. Our fused convolutional neural network architecture is trainable end-to-end, to estimate calorie expenditure, using temporal foreground silhouettes alongside accelerometer data. The network is trained and cross-validated on a publicly available dataset, SPHERE_RGBD + Inertial_calorie. Results show state-of-the-art minimum error on the estimation of energy expenditure (calories per minute), outperforming alternative, standard and single-modal techniques.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 10:09:28 GMT" } ]
2018-06-22T00:00:00
[ [ "Masullo", "Alessandro", "" ], [ "Burghardt", "Tilo", "" ], [ "Damen", "Dima", "" ], [ "Hannuna", "Sion", "" ], [ "Ponce-López", "Victor", "" ], [ "Mirmehdi", "Majid", "" ] ]
new_dataset
0.998275
1806.08170
Patrick Totzke
Parosh Aziz Abdulla, Mohamed Faouzi Atig, Radu Ciobanu, Richard Mayr, Patrick Totzke
Universal Safety for Timed Petri Nets is PSPACE-complete
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
A timed network consists of an arbitrary number of initially identical 1-clock timed automata, interacting via hand-shake communication. In this setting there is no unique central controller, since all automata are initially identical. We consider the universal safety problem for such controller-less timed networks, i.e., verifying that a bad event (enabling some given transition) is impossible regardless of the size of the network. This universal safety problem is dual to the existential coverability problem for timed-arc Petri nets, i.e., does there exist a number $m$ of tokens, such that starting with $m$ tokens in a given place, and none in the other places, some given transition is eventually enabled. We show that these problems are PSPACE-complete.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 11:12:10 GMT" } ]
2018-06-22T00:00:00
[ [ "Abdulla", "Parosh Aziz", "" ], [ "Atig", "Mohamed Faouzi", "" ], [ "Ciobanu", "Radu", "" ], [ "Mayr", "Richard", "" ], [ "Totzke", "Patrick", "" ] ]
new_dataset
0.992875
1806.08269
Krishnendu Rarhi
Krishnendu Rarhi, Rhea Bonnerji, Simanta Sarkar, Abhishek Bhattacharya
COZMO-A New Lightweight Stream Cipher
null
null
10.7287/peerj.preprints.6571
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the merger of two lightweight stream ciphers: A5/1 and Trivium. The idea is to make the key stream generation more secure and to remove the attacks on the individual algorithms. The bits generated by the Trivium cipher will act as the input of the A5/1 cipher. The registers used in the A5/1 cipher will be filled with the output bits of the Trivium cipher. The three registers will then be connected to generate an output, which will be our required key stream.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 14:41:02 GMT" } ]
2018-06-22T00:00:00
[ [ "Rarhi", "Krishnendu", "" ], [ "Bonnerji", "Rhea", "" ], [ "Sarkar", "Simanta", "" ], [ "Bhattacharya", "Abhishek", "" ] ]
new_dataset
0.999136
1806.08282
Pablo Arag\'on
Pablo Arag\'on, Diego S\'aez-Trumper, Miriam Redi, Scott A. Hale, Vicen\c{c} G\'omez, Andreas Kaltenbrunner
Online Petitioning Through Data Exploration and What We Found There: A Dataset of Petitions from Avaaz.org
Accepted as a dataset paper at the 12th International AAAI Conference on Web and Social Media (ICWSM-18). This preprint includes an additional appendix with the reasons, provided by Avaaz.org, about the anomalies detected when exploring the dataset. For academic purposes, please cite the ICWSM version
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Internet has become a fundamental resource for activism as it facilitates political mobilization at a global scale. Petition platforms are a clear example of how thousands of people around the world can contribute to social change. Avaaz.org, with a presence in over 200 countries, is one of the most popular of this type. However, little research has focused on this platform, probably due to a lack of available data. In this work we retrieved more than 350K petitions, standardized their field values, and added new information using language detection and named-entity recognition. To motivate future research with this unique repository of global protest, we present a first exploration of the dataset. In particular, we examine how social media campaigning is related to the success of petitions, as well as some geographic and linguistic findings about the worldwide community of Avaaz.org. We conclude with example research questions that could be addressed with our dataset.
[ { "version": "v1", "created": "Thu, 21 Jun 2018 15:02:56 GMT" } ]
2018-06-22T00:00:00
[ [ "Aragón", "Pablo", "" ], [ "Sáez-Trumper", "Diego", "" ], [ "Redi", "Miriam", "" ], [ "Hale", "Scott A.", "" ], [ "Gómez", "Vicenç", "" ], [ "Kaltenbrunner", "Andreas", "" ] ]
new_dataset
0.965687
1702.02263
Emilio Ferrara
Adam Badawy, Emilio Ferrara
The Rise of Jihadist Propaganda on Social Networks
22 pages, 9 figures, 7 tables
Journal of Computational Social Science, 2018
10.1007/s42001-018-0015-z
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a dataset of over 1.9 million messages posted on Twitter by about 25,000 ISIS members, we explore how ISIS makes use of social media to spread its propaganda and to recruit militants from the Arab world and across the globe. By distinguishing between violence-driven, theological, and sectarian content, we trace the connection between online rhetoric and key events on the ground. To the best of our knowledge, ours is one of the first studies to focus on Arabic content, while most literature focuses on English content. Our findings yield new important insights about how social media is used by radical militant groups to target the Arab-speaking world, and reveal important patterns in their propaganda efforts.
[ { "version": "v1", "created": "Wed, 8 Feb 2017 03:28:55 GMT" } ]
2018-06-21T00:00:00
[ [ "Badawy", "Adam", "" ], [ "Ferrara", "Emilio", "" ] ]
new_dataset
0.999831
1704.00112
Yixin Zhu
Chenfanfu Jiang, Siyuan Qi, Yixin Zhu, Siyuan Huang, Jenny Lin, Lap-Fai Yu, Demetri Terzopoulos, Song-Chun Zhu
Configurable 3D Scene Synthesis and 2D Image Rendering with Per-Pixel Ground Truth using Stochastic Grammars
Accepted in IJCV 2018
null
10.1007/s11263-018-1103-5
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms. In particular, we devise a learning-based pipeline of algorithms capable of automatically generating and rendering a potentially infinite variety of indoor scenes by using a stochastic grammar, represented as an attributed Spatial And-Or Graph, in conjunction with state-of-the-art physics-based rendering. Our pipeline is capable of synthesizing scene layouts with high diversity, and it is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. It renders photorealistic RGB images of the generated scenes while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity, and material information (detailed to object parts), as well as environments (e.g., illuminations and camera viewpoints). We demonstrate the value of our synthesized dataset, by improving performance in certain machine-learning-based scene understanding tasks--depth and surface normal prediction, semantic segmentation, reconstruction, etc.--and by providing benchmarks for and diagnostics of trained models by modifying object attributes and scene properties in a controllable manner.
[ { "version": "v1", "created": "Sat, 1 Apr 2017 03:05:29 GMT" }, { "version": "v2", "created": "Tue, 4 Apr 2017 00:50:58 GMT" }, { "version": "v3", "created": "Wed, 20 Jun 2018 15:24:55 GMT" } ]
2018-06-21T00:00:00
[ [ "Jiang", "Chenfanfu", "" ], [ "Qi", "Siyuan", "" ], [ "Zhu", "Yixin", "" ], [ "Huang", "Siyuan", "" ], [ "Lin", "Jenny", "" ], [ "Yu", "Lap-Fai", "" ], [ "Terzopoulos", "Demetri", "" ], [ "Zhu", "Song-Chun", "" ] ]
new_dataset
0.999233
1806.00844
Alexey Shvets
Vladimir I. Iglovikov, Selim Seferbekov, Alexander V. Buslaev and Alexey Shvets
TernausNetV2: Fully Convolutional Network for Instance Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The most common approaches to instance segmentation are complex and use two-stage networks with object proposals, conditional random fields, template matching, or recurrent neural networks. In this work we present TernausNetV2 - a simple fully convolutional network that allows extracting objects from high-resolution satellite imagery at the instance level. The network has a popular encoder-decoder type of architecture with skip connections, but has a few essential modifications that allow it to be used for semantic as well as instance segmentation tasks. This approach is universal and allows extending any network that has been successfully applied for semantic segmentation to perform the instance segmentation task. In addition, we generalize the network encoder that was pre-trained for RGB images to use additional input channels. This makes it possible to use transfer learning from the visual to a wider spectral range. For the DeepGlobe-CVPR 2018 building detection sub-challenge, based on the public leaderboard score, our approach shows superior performance in comparison to other methods. The source code and corresponding pre-trained weights are publicly available at https://github.com/ternaus/TernausNetV2
[ { "version": "v1", "created": "Sun, 3 Jun 2018 17:55:13 GMT" }, { "version": "v2", "created": "Tue, 19 Jun 2018 19:13:47 GMT" } ]
2018-06-21T00:00:00
[ [ "Iglovikov", "Vladimir I.", "" ], [ "Seferbekov", "Selim", "" ], [ "Buslaev", "Alexander V.", "" ], [ "Shvets", "Alexey", "" ] ]
new_dataset
0.991686
1806.07480
Julian Stecklina
Julian Stecklina and Thomas Prescher
LazyFP: Leaking FPU Register State using Microarchitectural Side-Channels
null
null
null
null
cs.OS cs.AR cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern processors utilize an increasingly large register set to facilitate efficient floating point and SIMD computation. This large register set is a burden for operating systems, as its content needs to be saved and restored when the operating system context switches between tasks. As an optimization, the operating system can defer the context switch of the FPU and SIMD register set until the first instruction is executed that needs access to these registers. Meanwhile, the old content is left in place with the hope that the current task might not use these registers at all. This optimization is commonly called lazy FPU context switching. To make it possible, a processor offers the ability to toggle the availability of instructions utilizing floating point and SIMD registers. If the instructions are turned off, any attempt of executing them will generate a fault. In this paper, we present an attack that exploits lazy FPU context switching and allows an adversary to recover the FPU and SIMD register set of arbitrary processes or VMs. The attack works on processors that transiently execute FPU or SIMD instructions that follow an instruction generating the fault indicating the first use of FPU or SIMD instructions. On operating systems using lazy FPU context switching, the FPU and SIMD register content of other processes or virtual machines can then be reconstructed via cache side effects. With SIMD registers not only being used for cryptographic computation, but also increasingly for simple operations, such as copying memory, we argue that lazy FPU context switching is a dangerous optimization that needs to be turned off in all operating systems, if there is a chance that they run on affected processors.
[ { "version": "v1", "created": "Tue, 19 Jun 2018 21:59:59 GMT" } ]
2018-06-21T00:00:00
[ [ "Stecklina", "Julian", "" ], [ "Prescher", "Thomas", "" ] ]
new_dataset
0.996164
1806.07507
Shan Luo Dr
Shan Luo, Wenxuan Mou, Kaspar Althoefer and Hongbin Liu
iCLAP: Shape Recognition by Combining Proprioception and Touch Sensing
10 pages, 12 figures, accepted to Autonomous Robots
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For humans, both the proprioception and touch sensing are highly utilized when performing haptic perception. However, most approaches in robotics use only either proprioceptive data or touch data in haptic object recognition. In this paper, we present a novel method named Iterative Closest Labeled Point (iCLAP) to link the kinesthetic cues and tactile patterns fundamentally and also introduce its extensions to recognize object shapes. In the training phase, the iCLAP first clusters the features of tactile readings into a codebook and assigns these features with distinct label numbers. A 4D point cloud of the object is then formed by taking the label numbers of the tactile features as an additional dimension to the 3D sensor positions; hence, the two sensing modalities are merged to achieve a synthesized perception of the touched object. Furthermore, we developed and validated hybrid fusion strategies, product based and weighted sum based, to combine decisions obtained from iCLAP and single sensing modalities. Extensive experimentation demonstrates a dramatic improvement of object recognition using the proposed methods and it shows great potential to enhance robot perception ability.
[ { "version": "v1", "created": "Tue, 19 Jun 2018 23:44:20 GMT" } ]
2018-06-21T00:00:00
[ [ "Luo", "Shan", "" ], [ "Mou", "Wenxuan", "" ], [ "Althoefer", "Kaspar", "" ], [ "Liu", "Hongbin", "" ] ]
new_dataset
0.983968
1806.07530
Raj Gaire
Raj Gaire, Chigulapalli Sriharsha, Deepak Puthal, Hendra Wijaya, Jongkil Kim, Prateeksha Keshari, Rajiv Ranjan, Rajkumar Buyya, Ratan K. Ghosh, R.K. Shyamasundar and Surya Nepal
Internet of Things (IoT) and Cloud Computing Enabled Disaster Management
Submitted for the book titled "Integration of Cyber-Physical Systems, Cloud, and Internet of Things"
null
null
null
cs.CY cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
Disaster management demands near real-time information dissemination so that emergency services can be provided to the right people at the right time. Recent advances in information and communication technologies enable the collection of real-time information from various sources. For example, sensors deployed in the field collect data about the environment. Similarly, social networks like Twitter and Facebook can help to collect data from people in the disaster zone. On one hand, inadequate situation awareness in disasters has been identified as one of the primary factors in human errors with grave consequences, such as loss of lives and destruction of critical infrastructure. On the other hand, the growing ubiquity of social media and mobile devices, and the pervasive nature of the Internet of Things, means that there are more sources of outbound traffic, which ultimately results in the creation of a data deluge, beginning shortly after the onset of disaster events and leading to the problem of an information tsunami. In addition, security and privacy have a crucial role in preventing the misuse of the system, whether through intrusions into data or the misuse of information that was meant for a specified purpose. .... In this chapter, we provide such a situation-aware application to support the disaster management data lifecycle, i.e. from data ingestion and processing to alert dissemination. We utilize cloud computing, Internet of Things and social computing technologies to achieve a scalable, efficient, and usable situation-aware application called Cloud4BigData.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 03:00:29 GMT" } ]
2018-06-21T00:00:00
[ [ "Gaire", "Raj", "" ], [ "Sriharsha", "Chigulapalli", "" ], [ "Puthal", "Deepak", "" ], [ "Wijaya", "Hendra", "" ], [ "Kim", "Jongkil", "" ], [ "Keshari", "Prateeksha", "" ], [ "Ranjan", "Rajiv", "" ], [ "Buyya", "Rajkumar", "" ], [ "Ghosh", "Ratan K.", "" ], [ "Shyamasundar", "R. K.", "" ], [ "Nepal", "Surya", "" ] ]
new_dataset
0.986819
1806.07570
Fazel Sharifi
Sepher Tabrizchi, Fazel Sharifi, and Abdel-Hameed A. Badawy
Energy Efficient Tri-State CNFET Ternary Logic Gates
null
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional silicon binary circuits continue to face challenges such as high leakage power dissipation and a large area of interconnections. Multiple-Valued Logic (MVL) and nano devices are two feasible solutions to overcome these problems. In this paper, a novel method is presented to design ternary logic circuits based on Carbon Nanotube Field Effect Transistors (CNFETs). The proposed designs use the unique properties of CNFETs, for example, adjusting the Carbon Nanotube (CNT) diameters to obtain the desired threshold voltage and to achieve the same mobility for P-FET and N-FET transistors. Each of our designed logic circuits implements a logic function and its complement via a control signal. Also, these circuits have a high impedance state which saves power while the circuits are not in use. In an effort to show a more detailed application of our approach, we design a 2-digit adder-subtractor circuit. We simulate the proposed ternary circuits using HSPICE via standard 32nm CNFET technology. The simulation results indicate the correct operation of the designs under different process, voltage and temperature (PVT) variations. Moreover, a power-efficient ternary logic ALU has been designed based on the proposed gates.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 06:21:20 GMT" } ]
2018-06-21T00:00:00
[ [ "Tabrizchi", "Sepher", "" ], [ "Sharifi", "Fazel", "" ], [ "Badawy", "Abdel-Hameed A.", "" ] ]
new_dataset
0.991776
1806.07579
Maosheng Xiong
Maosheng Xiong, Nian Li, Zhengchun Zhou and Cunsheng Ding
Weight distribution of cyclic codes with arbitrary number of generalized Niho type zeroes with corrigendum
The paper was published in Designs Codes Cryptogr., vol. 78, no. 3, pp. 713-730, 2016. Here in the last page we correct a mistake in the paper
Designs Codes Cryptogr., vol. 78, no. 3, pp. 713-730, 2016
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyclic codes are an important class of linear codes, whose weight distributions have been extensively studied. Most previous results obtained so far were for cyclic codes with no more than three zeroes. Inspired by the works \cite{Li-Zeng-Hu} and \cite{gegeng2}, we study two families of cyclic codes over $\mathbb{F}_p$ with an arbitrary number of zeroes of generalized Niho type, more precisely $\ca$ (for $p=2$) with $t+1$ zeroes, and $\cb$ (for any prime $p$) with $t$ zeroes, for any $t$. We find that the first family has at most $(2t+1)$ non-zero weights, and the second has at most $2t$ non-zero weights. Their weight distributions are also determined in the paper.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 07:12:30 GMT" } ]
2018-06-21T00:00:00
[ [ "Xiong", "Maosheng", "" ], [ "Li", "Nian", "" ], [ "Zhou", "Zhengchun", "" ], [ "Ding", "Cunsheng", "" ] ]
new_dataset
0.990902
1806.07586
Andreas Bjorklund
Andreas Bj\"orklund and Thore Husfeldt
Counting Shortest Two Disjoint Paths in Cubic Planar Graphs with an NC Algorithm
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an undirected graph and two disjoint vertex pairs $s_1,t_1$ and $s_2,t_2$, the Shortest two disjoint paths problem (S2DP) asks for the minimum total length of two vertex disjoint paths connecting $s_1$ with $t_1$, and $s_2$ with $t_2$, respectively. We show that for cubic planar graphs there are NC algorithms, uniform circuits of polynomial size and polylogarithmic depth, that compute the S2DP and moreover also output the number of such minimum length path pairs. Previously, to the best of our knowledge, no deterministic polynomial time algorithm was known for S2DP in cubic planar graphs with arbitrary placement of the terminals. In contrast, the randomized polynomial time algorithm by Bj\"orklund and Husfeldt, ICALP 2014, for general graphs is much slower, is serial in nature, and cannot count the solutions. Our results are built on an approach by Hirai and Namba, Algorithmica 2017, for a generalisation of S2DP, and fast algorithms for counting perfect matchings in planar graphs.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 07:26:37 GMT" } ]
2018-06-21T00:00:00
[ [ "Björklund", "Andreas", "" ], [ "Husfeldt", "Thore", "" ] ]
new_dataset
0.969038
1806.07789
Titouan Parcollet
Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linar\`es, Renato De Mori and Yoshua Bengio
Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition
Accepted at INTERSPEECH 2018
null
null
null
cs.SD cs.LG eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the connectionist temporal classification (CTC) model coupled with recurrent (RNN) or convolutional neural networks (CNN) made it easier to train speech recognition systems in an end-to-end fashion. However, in real-valued models, time frame components such as mel-filter-bank energies and the cepstral coefficients obtained from them, together with their first and second order derivatives, are processed as individual elements, while a natural alternative is to process such components as composed entities. We propose to group such elements in the form of quaternions and to process these quaternions using the established quaternion algebra. Quaternion numbers and quaternion neural networks have shown their efficiency in processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with fewer learning parameters than real-valued models. This paper proposes to integrate multiple feature views in a quaternion-valued convolutional neural network (QCNN), to be used for sequence-to-sequence mapping with the CTC model. Promising results are reported using simple QCNNs in phoneme recognition experiments with the TIMIT corpus. More precisely, QCNNs obtain a lower phoneme error rate (PER) with fewer learning parameters than a competing model based on real-valued CNNs.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 15:16:43 GMT" } ]
2018-06-21T00:00:00
[ [ "Parcollet", "Titouan", "" ], [ "Zhang", "Ying", "" ], [ "Morchid", "Mohamed", "" ], [ "Trabelsi", "Chiheb", "" ], [ "Linarès", "Georges", "" ], [ "De Mori", "Renato", "" ], [ "Bengio", "Yoshua", "" ] ]
new_dataset
0.997851
1806.07848
Ghurumuruhan Ganesan
Ghurumuruhan Ganesan
Correcting an ordered deletion-erasure
null
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show that the single deletion correcting Varshamov-Tenengolts code, with minor modifications, can also correct an ordered deletion-erasure pattern where one deletion and at most one erasure occur and the deletion always occurs before the erasure. For large code lengths, the constructed code has the same logarithmic redundancy as optimal codes.
[ { "version": "v1", "created": "Wed, 20 Jun 2018 17:17:36 GMT" } ]
2018-06-21T00:00:00
[ [ "Ganesan", "Ghurumuruhan", "" ] ]
new_dataset
0.986655
1607.00662
Danilo Jimenez Rezende
Danilo Jimenez Rezende and S. M. Ali Eslami and Shakir Mohamed and Peter Battaglia and Max Jaderberg and Nicolas Heess
Unsupervised Learning of 3D Structure from Images
Appears in Advances in Neural Information Processing Systems 29 (NIPS 2016)
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
[ { "version": "v1", "created": "Sun, 3 Jul 2016 17:53:11 GMT" }, { "version": "v2", "created": "Tue, 19 Jun 2018 17:26:53 GMT" } ]
2018-06-20T00:00:00
[ [ "Rezende", "Danilo Jimenez", "" ], [ "Eslami", "S. M. Ali", "" ], [ "Mohamed", "Shakir", "" ], [ "Battaglia", "Peter", "" ], [ "Jaderberg", "Max", "" ], [ "Heess", "Nicolas", "" ] ]
new_dataset
0.984835
1805.01374
Baibhab Chatterjee
Baibhab Chatterjee, Debayan Das, Shovan Maity and Shreyas Sen
RF-PUF: Enhancing IoT Security through Authentication of Wireless Nodes using In-situ Machine Learning
Accepted: in the IEEE Internet of Things Journal (JIoT), 2018
null
null
null
cs.CR cs.AI cs.NE eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional authentication in radio-frequency (RF) systems enables secure data communication within a network through techniques such as digital signatures and hash-based message authentication codes (HMAC), which suffer from key recovery attacks. State-of-the-art IoT networks such as Nest also use Open Authentication (OAuth 2.0) protocols that are vulnerable to cross-site request forgery (CSRF), which shows that these techniques may not prevent an adversary from copying or modeling the secret IDs or encryption keys using invasive, side-channel, learning or software attacks. Physical unclonable functions (PUF), on the other hand, can exploit manufacturing process variations to uniquely identify silicon chips, which makes a PUF-based system extremely robust and secure at low cost, as it is practically impossible to replicate the same silicon characteristics across dies. Taking inspiration from human communication, which utilizes inherent variations in voice signatures to identify a certain speaker, we present RF-PUF: a deep neural network-based framework that allows real-time authentication of wireless nodes, using the effects of inherent process variation on RF properties of the wireless transmitters (Tx), detected through in-situ machine learning at the receiver (Rx) end. The proposed method utilizes the already-existing asymmetric RF communication framework and does not require any additional circuitry for PUF generation or feature extraction. Simulation results involving the process variations in a standard 65 nm technology node, and features such as LO offset and I-Q imbalance detected with a neural network having 50 neurons in the hidden layer, indicate that the framework can distinguish up to 4800 transmitters with an accuracy of 99.9% (~ 99% for 10,000 transmitters) under varying channel conditions, and without the need for traditional preambles.
[ { "version": "v1", "created": "Thu, 3 May 2018 15:28:44 GMT" }, { "version": "v2", "created": "Fri, 18 May 2018 20:15:40 GMT" }, { "version": "v3", "created": "Tue, 19 Jun 2018 02:00:32 GMT" } ]
2018-06-20T00:00:00
[ [ "Chatterjee", "Baibhab", "" ], [ "Das", "Debayan", "" ], [ "Maity", "Shovan", "" ], [ "Sen", "Shreyas", "" ] ]
new_dataset
0.99721
1806.01023
Hongwei Li
Hongwei Li, Kanru Lin, Maximilian Reichert, Lina Xu, Rickmer Braren, Deliang Fu, Roland Schmid, Ji Li, Bjoern Menze and Kuangyu Shi
Differential Diagnosis for Pancreatic Cysts in CT Scans Using Densely-Connected Convolutional Networks
submitted to miccai 2017, *corresponding author: [email protected]
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lethal nature of pancreatic ductal adenocarcinoma (PDAC) calls for early differential diagnosis of pancreatic cysts, which are identified in up to 16% of normal subjects, and some of which may develop into PDAC. Previous computer-aided developments have achieved certain accuracy for classification on segmented cystic lesions in CT. However, pancreatic cysts have a large variation in size and shape, and their precise segmentation remains rather challenging, which restricts the computer-aided interpretation of CT images acquired for differential diagnosis. We propose a computer-aided framework for early differential diagnosis of pancreatic cysts without pre-segmenting the lesions, using densely-connected convolutional networks (Dense-Net). The Dense-Net learns high-level features from the whole abnormal pancreas and builds mappings from medical imaging appearance to different pathological types of pancreatic cysts. To enhance the clinical applicability, we integrate saliency maps in the framework to assist physicians in understanding the decision of the deep learning method. The test on a cohort of 206 patients with 4 pathologically confirmed subtypes of pancreatic cysts has achieved an overall accuracy of 72.8%, significantly higher than the baseline accuracy of 48.1%, which strongly supports the clinical potential of our developed method.
[ { "version": "v1", "created": "Mon, 4 Jun 2018 09:25:59 GMT" }, { "version": "v2", "created": "Thu, 7 Jun 2018 14:13:10 GMT" }, { "version": "v3", "created": "Tue, 19 Jun 2018 07:38:11 GMT" } ]
2018-06-20T00:00:00
[ [ "Li", "Hongwei", "" ], [ "Lin", "Kanru", "" ], [ "Reichert", "Maximilian", "" ], [ "Xu", "Lina", "" ], [ "Braren", "Rickmer", "" ], [ "Fu", "Deliang", "" ], [ "Schmid", "Roland", "" ], [ "Li", "Ji", "" ], [ "Menze", "Bjoern", "" ], [ "Shi", "Kuangyu", "" ] ]
new_dataset
0.9898
1806.06902
Xuan-Thuan Nguyen Dr
Xuan-Thuan Nguyen, Trong-Thuc Hoang, Hong-Thu Nguyen, Katsumi Inoue, and Cong-Kha Pham
A 1.2-V 162.9-pJ/cycle Bitmap Index Creation Core with 0.31-pW/bit Standby Power on 65-nm SOTB
Submitted to IEEE Transactions on Circuits and Systems II: Express Brief
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to maximize performance during peak workload hours and minimize power consumption during off-peak time plays a significant role in energy-efficient systems. Our previous work proposed a high-performance multi-core bitmap index creator (BIC) in a field-programmable gate array that could deliver higher indexing throughput than central processing units and graphics processing units. This brief extends the previous study by focusing on the application-specific integrated circuit implementation of the proposed BIC in a 65-nm silicon-on-thin-buried-oxide (SOTB) CMOS process. The BIC chip can operate with different supply voltages from 0.4 V to 1.2 V. In the active mode with a supply voltage of 1.2 V, the BIC chip is fully operational at 41 MHz and consumes 162.9 pJ/cycle. In the standby mode with a supply voltage of 0.4 V and a clock-gating technique, the power consumption is reduced to 10.6 uW. The standby power is further dramatically reduced to 2.64 nW due to the utilization of a reverse back-gate biasing technique. This achievement is of considerable importance for energy-efficient systems.
[ { "version": "v1", "created": "Mon, 18 Jun 2018 19:52:34 GMT" } ]
2018-06-20T00:00:00
[ [ "Nguyen", "Xuan-Thuan", "" ], [ "Hoang", "Trong-Thuc", "" ], [ "Nguyen", "Hong-Thu", "" ], [ "Inoue", "Katsumi", "" ], [ "Pham", "Cong-Kha", "" ] ]
new_dataset
0.996537
1806.06978
Isaac Sheff
Isaac Sheff, Xinwen Wang, Andrew C. Myers, Robbert van Renesse
A Web of Blocks
null
null
null
null
cs.DC cs.DB
http://creativecommons.org/licenses/by/4.0/
Blockchains offer a useful abstraction: a trustworthy, decentralized log of totally ordered transactions. Traditional blockchains have problems with scalability and efficiency, preventing their use for many applications. These limitations arise from the requirement that all participants agree on the total ordering of transactions. To address this fundamental shortcoming, we introduce Charlotte, a system for maintaining decentralized, authenticated data structures, including transaction logs. Each data structure -- indeed, each block -- specifies its own availability and integrity properties, allowing Charlotte applications to retain the full benefits of permissioned or permissionless blockchains. In Charlotte, a block can be atomically appended to multiple logs, allowing applications to be interoperable when they want to, without inefficiently forcing all applications to share one big log. We call this open graph of interconnected blocks a blockweb. We allow new kinds of blockweb applications that operate beyond traditional chains. We demonstrate the viability of Charlotte applications with proof-of-concept servers running interoperable blockchains. Using performance data from our prototype, we estimate that when compared with traditional blockchains, Charlotte offers multiple orders of magnitude improvement in speed and energy efficiency.
[ { "version": "v1", "created": "Mon, 18 Jun 2018 23:01:41 GMT" } ]
2018-06-20T00:00:00
[ [ "Sheff", "Isaac", "" ], [ "Wang", "Xinwen", "" ], [ "Myers", "Andrew C.", "" ], [ "van Renesse", "Robbert", "" ] ]
new_dataset
0.987569
1806.07011
Xavier Puig
Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, Antonio Torralba
VirtualHome: Simulating Household Activities via Programs
CVPR 2018 (Oral)
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we are interested in modeling complex activities that occur in a typical household. We propose to use programs, i.e., sequences of atomic actions and interactions, as a high level representation of complex tasks. Programs are interesting because they provide a non-ambiguous representation of a task, and allow agents to execute them. However, nowadays, there is no database providing this type of information. Towards this goal, we first crowd-source programs for a variety of activities that happen in people's homes, via a game-like interface used for teaching kids how to code. Using the collected dataset, we show how we can learn to extract programs directly from natural language descriptions or from videos. We then implement the most common atomic (inter)actions in the Unity3D game engine, and use our programs to "drive" an artificial agent to execute tasks in a simulated household environment. Our VirtualHome simulator allows us to create a large activity video dataset with rich ground-truth, enabling training and testing of video understanding models. We further showcase examples of our agent performing tasks in our VirtualHome based on language descriptions.
[ { "version": "v1", "created": "Tue, 19 Jun 2018 02:16:44 GMT" } ]
2018-06-20T00:00:00
[ [ "Puig", "Xavier", "" ], [ "Ra", "Kevin", "" ], [ "Boben", "Marko", "" ], [ "Li", "Jiaman", "" ], [ "Wang", "Tingwu", "" ], [ "Fidler", "Sanja", "" ], [ "Torralba", "Antonio", "" ] ]
new_dataset
0.997965
1806.07041
Taro Sekiyama
Taro Sekiyama, Atsushi Igarashi
Reasoning about Polymorphic Manifest Contracts
null
null
null
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manifest contract calculi, which integrate cast-based dynamic contract checking and refinement type systems, have been studied as foundations for hybrid contract checking. In this article, we study techniques for reasoning about a polymorphic manifest contract calculus, including a few program transformations related to static contract verification. We first define a polymorphic manifest contract calculus $\mathrm{F}_{H}$, which is much simpler than a previously studied one with delayed substitution, and a logical relation for it, and prove that the logical relation is sound with respect to contextual equivalence. Next, we show that the upcast elimination property, which has been studied as correctness of subtyping-based static cast verification, holds for $\mathrm{F}_{H}$. More specifically, we give a subtyping relation (which is not part of the calculus) for $\mathrm{F}_{H}$ types and prove that a term obtained by eliminating upcasts---casts from one type to a supertype of it---is logically related and so contextually equivalent to the original one. We also justify two other program transformations for casts: selfification and static cast decomposition, which help upcast elimination. A challenge is that, due to the subsumption-free approach to manifest contracts, these program transformations do not always preserve well-typedness of terms. To address it, the logical relation and contextual equivalence in this work are defined as semityped relations: only one side of the relations is required to be well typed and the other side may be ill typed.
[ { "version": "v1", "created": "Tue, 19 Jun 2018 05:08:27 GMT" } ]
2018-06-20T00:00:00
[ [ "Sekiyama", "Taro", "" ], [ "Igarashi", "Atsushi", "" ] ]
new_dataset
0.999506
1806.07059
Vuk Marojevic
Vuk Marojevic, Shem Kikamaze, Randall Nealy, Carl Dietrich
5G-CORNET: Platform as a Service
IEEE 5G World Forum 2018
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Practical testing of the latest wireless communications standards requires the availability of flexible radio frequency hardware, networking and computing resources. We provide a Cloud-based infrastructure which offers the necessary resources to carry out tests of the latest 5G standards. The testbed provides a Cloud-based Infrastructure as a Service. The research community can access hardware and software resources through a virtual platform that enables isolation and customization of experiments. In other words, researchers have control over the preferred experimental architecture and can run concurrent experiments on the same testbed. This paper introduces the resources that can be used to develop 5G testbeds and experiments.
[ { "version": "v1", "created": "Tue, 19 Jun 2018 06:14:00 GMT" } ]
2018-06-20T00:00:00
[ [ "Marojevic", "Vuk", "" ], [ "Kikamaze", "Shem", "" ], [ "Nealy", "Randall", "" ], [ "Dietrich", "Carl", "" ] ]
new_dataset
0.982124
1806.07072
Sauradip Nag
Sauradip Nag, Palaiahnakote Shivakumara, Wu Yirui, Umapada Pal, and Tong Lu
A New COLD Feature based Handwriting Analysis for Ethnicity/Nationality Identification
Accepted in ICFHR18
null
null
null
cs.CV cs.AI cs.CG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying crime is challenging for forensic investigation teams when crimes involve people of different nationalities. This paper proposes a new method for ethnicity (nationality) identification based on Cloud of Line Distribution (COLD) features of handwriting components. The proposed method, at first, explores the tangent angle of the contour pixels in each row and the mean of intensity values of each row in an image for segmenting text lines. For segmented text lines, we use tangent angle and direction of base lines to remove rule lines in the image. We use polygonal approximation for finding dominant points on the contours of edge components. Then the proposed method connects every dominant point to its nearest dominant point, which results in line segments of dominant point pairs. For each line segment, the proposed method estimates angle and length, which gives a point in the polar domain. For all the line segments, the proposed method generates dense points in the polar domain, which results in the COLD distribution. As character component shapes change according to nationality, the shape of the distribution changes. This observation is captured based on the distance from the pixels of the distribution to the principal axis of the distribution. Then the features are subjected to an SVM classifier for identifying nationality. Experiments are conducted on a complex dataset, which show that the proposed method is effective and outperforms the existing method.
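The core of the COLD construction described above — each line segment between a pair of dominant points contributes one (angle, length) point in the polar domain, and the cloud of all such points is the distribution — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the toy square contour are assumptions.

```python
import math

def cold_distribution(dominant_points, pairs):
    """For each pair (i, j) of dominant-point indices, emit the polar-domain
    point (angle, length) of the segment joining them; the resulting cloud
    of points is the COLD distribution."""
    cloud = []
    for i, j in pairs:
        (x1, y1), (x2, y2) = dominant_points[i], dominant_points[j]
        length = math.hypot(x2 - x1, y2 - y1)
        angle = math.atan2(y2 - y1, x2 - x1)
        cloud.append((angle, length))
    return cloud

# Toy contour: the corners of a unit square, each connected to its neighbour.
points = [(0, 0), (1, 0), (1, 1), (0, 1)]
pairs = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cold_distribution(points, pairs))
```

On real handwriting the dominant points come from polygonal approximation of edge contours, and classification then works on shape statistics of the cloud (e.g. distances to its principal axis) rather than on the raw points.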
[ { "version": "v1", "created": "Tue, 19 Jun 2018 07:14:54 GMT" } ]
2018-06-20T00:00:00
[ [ "Nag", "Sauradip", "" ], [ "Shivakumara", "Palaiahnakote", "" ], [ "Yirui", "Wu", "" ], [ "Pal", "Umapada", "" ], [ "Lu", "Tong", "" ] ]
new_dataset
0.991707
1806.07080
He Jiang
He Jiang, Dong Liu, Zhilei Ren, and Tao Zhang
Blockchain in the Eyes of Developers
10 pages, 5 figures
null
null
null
cs.SE cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The popularity of blockchain technology continues to grow rapidly in both industrial and academic fields. Most studies of blockchain focus on the improvements of security, usability, or efficiency of blockchain protocols, or the applications of blockchain in finance, Internet of Things, or public services. However, few of them reveal the concerns of front-line developers and the situations of blockchain in practice. In this article, we investigate how developers use and discuss blockchain with a case study of Stack Overflow posts. We find that blockchain is a relatively new topic in Stack Overflow but is rising in popularity. We detect 13 types of questions that developers post in Stack Overflow and identify 45 blockchain-relevant entities (e.g., frameworks, libraries, or tools) for building blockchain applications. These findings may help blockchain project communities to know where to improve and help novices to know where to start.
[ { "version": "v1", "created": "Tue, 19 Jun 2018 07:39:47 GMT" } ]
2018-06-20T00:00:00
[ [ "Jiang", "He", "" ], [ "Liu", "Dong", "" ], [ "Ren", "Zhilei", "" ], [ "Zhang", "Tao", "" ] ]
new_dataset
0.978074
1806.07111
Giuseppe De Nittis
Giuseppe De Nittis and Nicola Gatti
Facing Multiple Attacks in Adversarial Patrolling Games with Alarmed Targets
null
null
null
null
cs.AI cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on adversarial patrolling games on arbitrary graphs, where the Defender can control a mobile resource, the targets are alarmed by an alarm system, and the Attacker can observe the actions of the mobile resource of the Defender and perform different attacks exploiting multiple resources. This scenario can be modeled as a zero-sum extensive-form game in which each player can play multiple times. The game tree is exponentially large both in the size of the graph and in the number of attacking resources. We show that when the number of the Attacker's resources is not fixed, the problem of computing the equilibrium path is NP-hard, while when the number of resources is fixed, the equilibrium path can be computed in poly-time. We provide a dynamic-programming algorithm that, given the number of the Attacker's resources, computes the equilibrium path requiring poly-time in the size of the graph and exponential time in the number of the resources. Furthermore, since in real-world scenarios it is implausible that the Defender knows the number of attacking resources, we study the robustness of the Defender's strategy when she makes a wrong guess about that number. We show that even an error of just a single resource can lead to an arbitrary inefficiency, when the inefficiency is defined as the ratio of the Defender's utilities obtained with a wrong guess and a correct guess. However, a more suitable definition of inefficiency is given by the difference of the Defender's utilities: this way, we observe that the higher the error in the estimation, the higher the loss for the Defender. Then, we investigate the performance of online algorithms when no information about the Attacker's resources is available. Finally, we resort to randomized online algorithms, showing that we can obtain a competitive factor that is twice as good as the one achievable by any deterministic online algorithm.
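The contrast between the two inefficiency measures in the abstract can be made concrete with a small numeric sketch. The utility values below are hypothetical, not taken from the paper; they only illustrate why a ratio can look arbitrarily bad while the absolute loss stays bounded.

```python
def inefficiency_ratio(u_wrong, u_correct):
    # Ratio of the Defender's utility with a wrong guess of the number of
    # attacking resources to her utility with a correct guess.
    return u_wrong / u_correct

def inefficiency_difference(u_wrong, u_correct):
    # Absolute utility loss caused by the wrong guess.
    return u_correct - u_wrong

# Hypothetical utilities: the ratio makes a 10x loss look as severe for
# small stakes as for large ones, whereas the difference scales with the
# actual loss -- the intuition behind preferring the difference measure.
print(inefficiency_ratio(1, 10))        # 0.1
print(inefficiency_difference(1, 10))   # 9
print(inefficiency_ratio(0.1, 1.0))     # still 0.1
print(inefficiency_difference(0.1, 1.0))  # only 0.9
```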
[ { "version": "v1", "created": "Tue, 19 Jun 2018 08:57:03 GMT" } ]
2018-06-20T00:00:00
[ [ "De Nittis", "Giuseppe", "" ], [ "Gatti", "Nicola", "" ] ]
new_dataset
0.995793