Schema (column: type, value range):
id: string, lengths 9-10
submitter: string, lengths 2-52
authors: string, lengths 4-6.51k
title: string, lengths 4-246
comments: string, lengths 1-523
journal-ref: string, lengths 4-345
doi: string, lengths 11-120
report-no: string, lengths 2-243
categories: string, lengths 5-98
license: string, 9 classes
abstract: string, lengths 33-3.33k
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string, 1 class
probability: float64, 0.95-1
1708.01806
Jan Haji\v{c} Jr
Jan Haji\v{c} Jr., Pavel Pecina
Detecting Noteheads in Handwritten Scores with ConvNets and Bounding Box Regression
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Noteheads are the interface between the written score and music. Each notehead on the page signifies one note to be played, and detecting noteheads is thus an unavoidable step for Optical Music Recognition. Noteheads are clearly distinct objects; however, the variety of music notation handwriting makes noteheads harder to identify, and while handwritten music notation symbol {\em classification} is a well-studied task, symbol {\em detection} has usually been limited to heuristics and rule-based systems instead of machine learning methods better suited to deal with the uncertainties in handwriting. We present ongoing work on a simple notehead detector using convolutional neural networks for pixel classification and bounding box regression that achieves a detection f-score of 0.97 on binary score images in the MUSCIMA++ dataset, does not require staff removal, and is applicable to a variety of handwriting styles and levels of musical complexity.
[ { "version": "v1", "created": "Sat, 5 Aug 2017 18:54:06 GMT" } ]
2017-08-08T00:00:00
[ [ "Hajič", "Jan", "Jr." ], [ "Pecina", "Pavel", "" ] ]
new_dataset
0.990224
1708.01872
Ding Zhao
Ding Zhao, Yaohui Guo, Yunhan Jack Jia
TrafficNet: An Open Naturalistic Driving Scenario Library
IEEE 20th International Conference on Intelligent Transportation
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The enormous efforts spent on collecting naturalistic driving data in recent years have resulted in an expansion of publicly available traffic datasets, which have the potential to assist the development of self-driving vehicles. However, we found that many attempts to utilize these datasets have failed in practice due to a lack of usability concern from the organizations that host the collected data. For example, extracting data associated with certain critical conditions from naturalistic driving data organized in chronological order may not be convenient for a vehicle engineer who does not have big data analytics experience. To address the general usability challenges of these publicly available traffic datasets, we propose TrafficNet, a large-scale and extensible library of naturalistic driving scenarios, aiming at bridging the gap between research datasets and practically usable information for vehicle engineers and researchers. The proposed web-based driving scenario database preprocesses massive raw traffic data collected in chronological order into an organized scenario-based dataset by applying a set of categorization algorithms to label the naturalistic driving data with six different critical driving scenarios. TrafficNet opens not only the scenario library but also the source code of these categorization methods to the public, which will foster more sophisticated and accurate scenario-based categorization algorithms to advance intelligent transportation research. The source code and the scenario database can be accessed at https://github.com/TrafficNet.
[ { "version": "v1", "created": "Tue, 1 Aug 2017 03:33:37 GMT" } ]
2017-08-08T00:00:00
[ [ "Zhao", "Ding", "" ], [ "Guo", "Yaohui", "" ], [ "Jia", "Yunhan Jack", "" ] ]
new_dataset
0.996609
1708.01928
Moi Hoon Yap
Manu Goyal, Neil D. Reeves, Satyan Rajbhandari, Jennifer Spragg and Moi Hoon Yap
Fully Convolutional Networks for Diabetic Foot Ulcer Segmentation
7 pages, 5 figures, 2017 IEEE SMC International Conference (To appear)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diabetic Foot Ulcer (DFU) is a major complication of diabetes which, if not managed properly, can lead to amputation. DFU can appear anywhere on the foot and can vary in size, colour, and contrast depending on various pathologies. Current clinical approaches to DFU treatment rely on patient and clinician vigilance, which has significant limitations such as the high cost involved in the diagnosis, treatment and lengthy care of the DFU. We introduce a dataset of 705 foot images. We provide the ground truth of the ulcer region and the surrounding skin, which is an important indicator for clinicians to assess the progress of the ulcer. Then, we propose a two-tier transfer learning from bigger datasets to train the Fully Convolutional Networks (FCNs) to automatically segment the ulcer and surrounding skin. Using 5-fold cross-validation, the proposed two-tier transfer learning FCN models achieve a Dice Similarity Coefficient of 0.794 ($\pm$0.104) for the ulcer region, 0.851 ($\pm$0.148) for the surrounding skin region, and 0.899 ($\pm$0.072) for the combination of both regions. This demonstrates the potential of FCNs in DFU segmentation, which can be further improved with a larger dataset.
[ { "version": "v1", "created": "Sun, 6 Aug 2017 19:45:37 GMT" } ]
2017-08-08T00:00:00
[ [ "Goyal", "Manu", "" ], [ "Reeves", "Neil D.", "" ], [ "Rajbhandari", "Satyan", "" ], [ "Spragg", "Jennifer", "" ], [ "Yap", "Moi Hoon", "" ] ]
new_dataset
0.996764
1708.01956
Hanwang Zhang
Hanwang Zhang, Zawlin Kyaw, Jinyang Yu, Shih-Fu Chang
PPR-FCN: Weakly Supervised Visual Relation Detection via Parallel Pairwise R-FCN
To appear in International Conference on Computer Vision (ICCV) 2017, Venice, Italy
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We aim to tackle a novel vision task called Weakly Supervised Visual Relation Detection (WSVRD) to detect "subject-predicate-object" relations in an image with object relation groundtruths available only at the image level. This is motivated by the fact that it is extremely expensive to label the combinatorial relations between objects at the instance level. Compared to the extensively studied problem of Weakly Supervised Object Detection (WSOD), WSVRD is more challenging as it needs to examine a large set of region pairs, which is computationally prohibitive and more likely to get stuck in a local optimum, such as one involving the wrong spatial context. To this end, we present a Parallel, Pairwise Region-based, Fully Convolutional Network (PPR-FCN) for WSVRD. It uses a parallel FCN architecture that simultaneously performs pair selection and classification of single regions and region pairs for object and relation detection, while sharing almost all computation over the entire image. In particular, we propose a novel position-role-sensitive score map with pairwise RoI pooling to efficiently capture the crucial context associated with a pair of objects. We demonstrate the superiority of PPR-FCN over all baselines in solving the WSVRD challenge through extensive experiments on two visual relation benchmarks.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 01:07:20 GMT" } ]
2017-08-08T00:00:00
[ [ "Zhang", "Hanwang", "" ], [ "Kyaw", "Zawlin", "" ], [ "Yu", "Jinyang", "" ], [ "Chang", "Shih-Fu", "" ] ]
new_dataset
0.963553
1708.02030
Faisal Shahzad
Faisal Shahzad, Jonas Thies, Moritz Kreutzer, Thomas Zeiser, Georg Hager, Gerhard Wellein
CRAFT: A library for easier application-level Checkpoint/Restart and Automatic Fault Tolerance
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to efficiently use future generations of supercomputers, fault tolerance and power consumption are two of the prime challenges anticipated by the High Performance Computing (HPC) community. Checkpoint/Restart (CR) has been and still is the most widely used technique to deal with hard failures. Application-level CR is the most effective CR technique in terms of overhead efficiency, but it takes a lot of implementation effort. This work presents the implementation of our C++ based library CRAFT (Checkpoint-Restart and Automatic Fault Tolerance), which serves two purposes. First, it provides an extendable library that significantly eases the implementation of application-level checkpointing. The most basic and frequently used checkpoint data types are already part of CRAFT and can be used directly out of the box. The library can be easily extended to add more data types. As a means of overhead reduction, the library offers a built-in asynchronous checkpointing mechanism and also supports the Scalable Checkpoint/Restart (SCR) library for node-level checkpointing. Second, CRAFT provides an easier interface for User-Level Failure Mitigation (ULFM) based dynamic process recovery, which significantly reduces the complexity and effort of failure detection and communication recovery mechanisms. By utilizing both functionalities together, applications can write application-level checkpoints and recover dynamically from process failures with very limited programming effort. This work presents the design and use of our library in detail. The associated overheads are thoroughly analyzed using several benchmarks.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 08:17:56 GMT" } ]
2017-08-08T00:00:00
[ [ "Shahzad", "Faisal", "" ], [ "Thies", "Jonas", "" ], [ "Kreutzer", "Moritz", "" ], [ "Zeiser", "Thomas", "" ], [ "Hager", "Georg", "" ], [ "Wellein", "Gerhard", "" ] ]
new_dataset
0.994013
1708.02044
Ziwei Liu
Sijie Yan, Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, Xiaoou Tang
Unconstrained Fashion Landmark Detection via Hierarchical Recurrent Transformer Networks
To appear in ACM Multimedia (ACM MM) 2017 as a full research paper. More details at the project page: http://personal.ie.cuhk.edu.hk/~lz013/projects/UnconstrainedLandmarks.html
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fashion landmarks are functional key points defined on clothes, such as corners of the neckline, hemline, and cuff. They have recently been introduced as an effective visual representation for fashion image understanding. However, detecting fashion landmarks is challenging due to background clutter, human poses, and scales. To remove the above variations, previous works usually assumed that bounding boxes of clothes are provided in training and test as additional annotations, which are expensive to obtain and inapplicable in practice. This work addresses unconstrained fashion landmark detection, where clothing bounding boxes are not provided in either training or test. To this end, we present a novel Deep LAndmark Network (DLAN), where bounding boxes and landmarks are jointly estimated and trained iteratively in an end-to-end manner. DLAN contains two dedicated modules: a Selective Dilated Convolution for handling scale discrepancies, and a Hierarchical Recurrent Spatial Transformer for handling background clutter. To evaluate DLAN, we present a large-scale fashion landmark dataset, namely the Unconstrained Landmark Database (ULD), consisting of 30K images. Statistics show that ULD is more challenging than existing datasets in terms of image scales, background clutter, and human poses. Extensive experiments demonstrate the effectiveness of DLAN over state-of-the-art methods. DLAN also exhibits excellent generalization across different clothing categories and modalities, making it extremely suitable for real-world fashion analysis.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 09:02:52 GMT" } ]
2017-08-08T00:00:00
[ [ "Yan", "Sijie", "" ], [ "Liu", "Ziwei", "" ], [ "Luo", "Ping", "" ], [ "Qiu", "Shi", "" ], [ "Wang", "Xiaogang", "" ], [ "Tang", "Xiaoou", "" ] ]
new_dataset
0.986038
1708.02048
Chao Zhang
Chao Zhang, Samson Lasaulce, and Vineeth S. Varma
Using Continuous Power Modulation for Exchanging Local Channel State Information
null
IEEE Communications Letters ( Volume: 21, Issue: 5, May 2017 )
10.1109/LCOMM.2017.2650919
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This letter provides a simple but efficient technique which allows each transmitter of an interference network to exchange local channel state information with the other transmitters. One salient feature of the proposed technique is that a transmitter only needs measurements of the signal power at its intended receiver to implement it, making direct inter-transmitter signaling channels unnecessary. The key idea to achieve this is to use a transient period during which the continuous power level of a transmitter is taken to be a linear combination of the channel gains to be exchanged.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 09:20:16 GMT" } ]
2017-08-08T00:00:00
[ [ "Zhang", "Chao", "" ], [ "Lasaulce", "Samson", "" ], [ "Varma", "Vineeth S.", "" ] ]
new_dataset
0.996857
1708.02052
Fabrizio Pastore
Fabrizio Pastore, Leonardo Mariani
VART: A Tool for the Automatic Detection of Regression Faults
null
null
10.1145/3106237.3122819
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present VART, a tool for automatically revealing regression faults missed by regression test suites. Interestingly, VART is not limited to faults causing crashing or exceptions, but can reveal faults that cause the violation of application-specific correctness properties. VART achieves this goal by combining static and dynamic program analysis.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 09:43:54 GMT" } ]
2017-08-08T00:00:00
[ [ "Pastore", "Fabrizio", "" ], [ "Mariani", "Leonardo", "" ] ]
new_dataset
0.997896
1708.02091
Christian Weinert
Christian Weinert, Denise Demirel, Mart\'in Vigil, Matthias Geihs, Johannes Buchmann
MoPS: A Modular Protection Scheme for Long-Term Storage
Original Publication (in the same form): ASIACCS 2017
ASIACCS 2017, pages 436-448
10.1145/3052973.3053025
TUD-CS-2017-0033
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current trends in technology, such as cloud computing, allow outsourcing the storage, backup, and archiving of data. This provides efficiency and flexibility, but also poses new risks for data security. In particular, it has become crucial to develop protection schemes that ensure security even in the long term, i.e., beyond the lifetime of keys, certificates, and cryptographic primitives. However, all current solutions fail to provide optimal performance for different application scenarios. Thus, in this work, we present MoPS, a modular protection scheme to ensure authenticity and integrity for data stored over long periods of time. MoPS does not come with any requirements regarding the storage architecture and can therefore be used together with existing archiving or storage systems. It supports a set of techniques which can be plugged together, combined, and migrated in order to create customized solutions that fulfill the requirements of different application scenarios in the best possible way. As a proof of concept we implemented MoPS and provide performance measurements. Furthermore, our implementation provides additional features, such as guidance for non-expert users and export functionalities for external verifiers.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 12:27:51 GMT" } ]
2017-08-08T00:00:00
[ [ "Weinert", "Christian", "" ], [ "Demirel", "Denise", "" ], [ "Vigil", "Martín", "" ], [ "Geihs", "Matthias", "" ], [ "Buchmann", "Johannes", "" ] ]
new_dataset
0.998034
1708.02174
Mehran Maghoumi
Pooya Khaloo, Mehran Maghoumi, Eugene Taranta II, David Bettner, Joseph Laviola Jr
Code Park: A New 3D Code Visualization Tool
Accepted for publication in 2017 IEEE Working Conference on Software Visualization (VISSOFT 2017); Supplementary video: https://www.youtube.com/watch?v=LUiy1M9hUKU
null
null
null
cs.HC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Code Park, a novel tool for visualizing codebases in a 3D game-like environment. Code Park aims to improve a programmer's understanding of an existing codebase in a manner that is both engaging and intuitive, appealing to novice users such as students. It achieves these goals by laying out the codebase in a 3D park-like environment. Each class in the codebase is represented as a 3D room-like structure. Constituent parts of the class (variables, member functions, etc.) are laid out on the walls, resembling a syntax-aware "wallpaper". The users can interact with the codebase using an overview mode and a first-person viewer mode. We conducted two user studies to evaluate Code Park's usability and suitability for organizing an existing project. Our results indicate that Code Park is easy to get familiar with and significantly helps in code understanding compared to a traditional IDE. Further, the users unanimously believed that Code Park was a fun tool to work with.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 15:53:10 GMT" } ]
2017-08-08T00:00:00
[ [ "Khaloo", "Pooya", "" ], [ "Maghoumi", "Mehran", "" ], [ "Taranta", "Eugene", "II" ], [ "Bettner", "David", "" ], [ "Laviola", "Joseph", "Jr" ] ]
new_dataset
0.999
1708.02209
Ying Tai
Ying Tai, Jian Yang, Xiaoming Liu, Chunyan Xu
MemNet: A Persistent Memory Network for Image Restoration
Accepted by ICCV 2017 (Spotlight presentation)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the long-term dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denoising, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the art. Code is available at https://github.com/tyshiwo/MemNet.
[ { "version": "v1", "created": "Mon, 7 Aug 2017 17:20:58 GMT" } ]
2017-08-08T00:00:00
[ [ "Tai", "Ying", "" ], [ "Yang", "Jian", "" ], [ "Liu", "Xiaoming", "" ], [ "Xu", "Chunyan", "" ] ]
new_dataset
0.988441
1603.06477
Umberto Mart\'inez-Pe\~nas
Umberto Mart\'inez-Pe\~nas
Generalized rank weights of reducible codes, optimal cases and related properties
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reducible codes for the rank metric were introduced for cryptographic purposes. They have fast encoding and decoding algorithms, include maximum rank distance (MRD) codes and can correct many rank errors beyond half of their minimum rank distance, which makes them suitable for error-correction in network coding. In this paper, we study their security behaviour against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst-case information leakage to the wire-tapper, 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal), and use the previous bounds to estimate their higher GRWs, 3) we show that all linear (over the extension field) codes whose GRWs are all optimal for fixed packet and code sizes but varying length are reducible codes up to rank equivalence, and 4) we show that the information leaked to a wire-tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: Conditions to be rank equivalent to cartesian products of linear codes, conditions to be rank degenerate, duality properties and MRD ranks.
[ { "version": "v1", "created": "Mon, 21 Mar 2016 16:01:37 GMT" }, { "version": "v2", "created": "Fri, 4 Aug 2017 03:19:40 GMT" } ]
2017-08-07T00:00:00
[ [ "Martínez-Peñas", "Umberto", "" ] ]
new_dataset
0.994441
1703.08093
Sven Puchinger
Sven Puchinger and Johan Rosenkilde n\'e Nielsen and John Sheekey
Further Generalisations of Twisted Gabidulin Codes
10 pages, accepted at the International Workshop on Coding and Cryptography (WCC) 2017
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.
[ { "version": "v1", "created": "Thu, 23 Mar 2017 14:56:59 GMT" }, { "version": "v2", "created": "Fri, 4 Aug 2017 05:49:53 GMT" } ]
2017-08-07T00:00:00
[ [ "Puchinger", "Sven", "" ], [ "Nielsen", "Johan Rosenkilde né", "" ], [ "Sheekey", "John", "" ] ]
new_dataset
0.960029
1704.04086
Rui Huang
Rui Huang, Shu Zhang, Tianyu Li, Ran He
Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis
accepted at ICCV 2017, main paper & supplementary material, 11 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark-located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Besides the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribute estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.
[ { "version": "v1", "created": "Thu, 13 Apr 2017 12:18:13 GMT" }, { "version": "v2", "created": "Fri, 4 Aug 2017 03:44:37 GMT" } ]
2017-08-07T00:00:00
[ [ "Huang", "Rui", "" ], [ "Zhang", "Shu", "" ], [ "Li", "Tianyu", "" ], [ "He", "Ran", "" ] ]
new_dataset
0.983826
1708.01302
Y\"uksel Arslan
Yuksel Arslan
A solution for ARP spoofing: Layer-2 MAC and protocol filtering and arpserver
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most attacks are launched inside companies by employees of the same company. These kinds of attacks are generally against layer-2, not against layer-3 or IP, and abuse the switch operation at layer-2. One attack of this kind is Address Resolution Protocol (ARP) spoofing (sometimes called ARP poisoning), which is classified as a man-in-the-middle (MITM) attack. Usual security systems such as (personal) firewalls or virus protection software cannot recognize this type of attack. By tapping into the communication between two hosts, one can access confidential data. Malicious software to run internal attacks on a network, such as Ettercap, is freely available on the Internet. In this paper a solution is proposed and implemented to prevent ARP spoofing. The proposal uses access control lists (ACLs) for layer-2 Media Access Control (MAC) address and protocol filtering, together with an application called ARPserver which replies to all ARP requests.
[ { "version": "v1", "created": "Thu, 3 Aug 2017 20:38:24 GMT" } ]
2017-08-07T00:00:00
[ [ "Arslan", "Yuksel", "" ] ]
new_dataset
0.980192
1708.01321
Sergey Bereg
S. Bereg, J. M. D\'iaz-B\'a\~nez, R. Fabila-Monroy, P. P\'erez-Lantero, A. Ram\'irez-Vigueras, T. Sakai, J. Urrutia, I. Ventura
On balanced 4-holes in bichromatic point sets
this is an arxiv version of our paper
Computational Geometry: Theory and Applications, 48 (3): 169-179 (2015)
10.1016/j.comgeo.2014.09.004
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $S=R\cup B$ be a point set in the plane in general position such that each of its elements is colored either red or blue, where $R$ and $B$ denote the points colored red and the points colored blue, respectively. A quadrilateral with vertices in $S$ is called a $4$-hole if its interior is empty of elements of $S$. We say that a $4$-hole of $S$ is balanced if it has $2$ red and $2$ blue points of $S$ as vertices. In this paper, we prove that if $R$ and $B$ contain $n$ points each then $S$ has at least $\frac{n^2-4n}{12}$ balanced $4$-holes, and this bound is tight up to a constant factor. Since there are two-colored point sets with no balanced {\em convex} $4$-holes, we further provide a characterization of the two-colored point sets having this type of $4$-holes.
[ { "version": "v1", "created": "Thu, 3 Aug 2017 22:12:15 GMT" } ]
2017-08-07T00:00:00
[ [ "Bereg", "S.", "" ], [ "Díaz-Báñez", "J. M.", "" ], [ "Fabila-Monroy", "R.", "" ], [ "Pérez-Lantero", "P.", "" ], [ "Ramírez-Vigueras", "A.", "" ], [ "Sakai", "T.", "" ], [ "Urrutia", "J.", "" ], [ "Ventura", "I.", "" ] ]
new_dataset
0.95566
1708.01335
Chaitanya Swamy
Zachary Friggstad and Chaitanya Swamy
Compact, Provably-Good LPs for Orienteering and Regret-Bounded Vehicle Routing
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop polynomial-size LP-relaxations for {\em orienteering} and the {\em regret-bounded vehicle routing problem} (\rvrp) and devise suitable LP-rounding algorithms that lead to various new insights and approximation results for these problems. In orienteering, the goal is to find a maximum-reward $r$-rooted path, possibly ending at a specified node, of length at most some given budget $B$. In \rvrp, the goal is to find the minimum number of $r$-rooted paths of {\em regret} at most a given bound $R$ that cover all nodes, where the regret of an $r$-$v$ path is its length $-$ $c_{rv}$. For {\em rooted orienteering}, we introduce a natural bidirected LP-relaxation and obtain a simple $3$-approximation algorithm via LP-rounding. This is the {\em first LP-based} guarantee for this problem. We also show that {\em point-to-point} (\ptp) {\em orienteering} can be reduced to a regret-version of rooted orienteering at the expense of a factor-2 loss in approximation. For \rvrp, we propose two compact LPs that lead to significant improvements, in both approximation ratio and running time, over the approach in~\cite{FriggstadS14}. One of these is a natural modification of the LP for rooted orienteering; the other is an unconventional formulation that is motivated by certain structural properties of an \rvrp-solution, which leads to a $15$-approximation algorithm for \rvrp.
[ { "version": "v1", "created": "Fri, 4 Aug 2017 00:06:38 GMT" } ]
2017-08-07T00:00:00
[ [ "Friggstad", "Zachary", "" ], [ "Swamy", "Chaitanya", "" ] ]
new_dataset
0.992079
1708.01336
Yannis Kalantidis
Lu Jiang, Junwei Liang, Liangliang Cao, Yannis Kalantidis, Sachin Farfade, Alexander Hauptmann
MemexQA: Visual Memex Question Answering
https://memexqa.cs.cmu.edu/
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a new task, MemexQA: given a collection of photos or videos from a user, the goal is to automatically answer questions that help users recover their memory about events captured in the collection. Towards solving the task, we 1) present the MemexQA dataset, a large, realistic multimodal dataset consisting of real personal photos and crowd-sourced questions/answers, 2) propose MemexNet, a unified, end-to-end trainable network architecture for image, text and video question answering. Experimental results on the MemexQA dataset demonstrate that MemexNet outperforms strong baselines and yields the state-of-the-art on this novel and challenging task. The promising results on TextQA and VideoQA suggest MemexNet's efficacy and scalability across various QA tasks.
[ { "version": "v1", "created": "Fri, 4 Aug 2017 00:17:48 GMT" } ]
2017-08-07T00:00:00
[ [ "Jiang", "Lu", "" ], [ "Liang", "Junwei", "" ], [ "Cao", "Liangliang", "" ], [ "Kalantidis", "Yannis", "" ], [ "Farfade", "Sachin", "" ], [ "Hauptmann", "Alexander", "" ] ]
new_dataset
0.999655
1708.01353
Qian Chen
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference
RepEval 2017 workshop paper at EMNLP 2017, Copenhagen
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and on the cross-domain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to the cross-domain data. Our model is equipped with intra-sentence gated-attention composition which helps achieve a better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.
[ { "version": "v1", "created": "Fri, 4 Aug 2017 01:55:18 GMT" } ]
2017-08-07T00:00:00
[ [ "Chen", "Qian", "" ], [ "Zhu", "Xiaodan", "" ], [ "Ling", "Zhen-Hua", "" ], [ "Wei", "Si", "" ], [ "Jiang", "Hui", "" ], [ "Inkpen", "Diana", "" ] ]
new_dataset
0.964312
1708.01401
Zheng Li
Zheng Li and He Zhang and Liam O'Brien and Shu Jiang and You Zhou and Maria Kihl and Rajiv Ranjan
Spot Pricing in the Cloud Ecosystem: A Comparative Investigation
null
Journal of Systems and Software, vol. 114, pp. 1-19 (2016)
10.1016/j.jss.2015.10.042
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Spot pricing is considered a significant supplement for building a full-fledged market economy for the Cloud ecosystem. However, it seems that both providers and consumers are still hesitant to enter the Cloud spot market. The relevant academic community also has conflicting opinions about Cloud spot pricing in terms of revenue generation. Aim: This work aims to systematically identify, assess, synthesize and report the published evidence in favor of or against the spot-price scheme compared with the fixed-price scheme of Cloud computing, so as to help resolve the aforementioned conflict. Method: We employed the systematic literature review (SLR) method to collect and investigate the empirical studies of Cloud spot pricing indexed by major electronic libraries. Results: This SLR identified 61 primary studies that either delivered discussions or conducted experiments to compare spot pricing and fixed pricing in the Cloud domain. The reported benefits and limitations were summarized to facilitate cost-benefit analysis of being a Cloud spot pricing player, while four types of theories were distinguished to help both researchers and practitioners better understand the Cloud spot market. Conclusions: This SLR shows that the academic community strongly advocates the emerging Cloud spot market. Although there is still a lack of practical and easily deployable market-driven mechanisms, the overall findings of our work indicate that spot pricing plays a promising role in the sustainability of Cloud resource exploitation.
[ { "version": "v1", "created": "Fri, 4 Aug 2017 07:23:05 GMT" } ]
2017-08-07T00:00:00
[ [ "Li", "Zheng", "" ], [ "Zhang", "He", "" ], [ "O'Brien", "Liam", "" ], [ "Jiang", "Shu", "" ], [ "Zhou", "You", "" ], [ "Kihl", "Maria", "" ], [ "Ranjan", "Rajiv", "" ] ]
new_dataset
0.983221
1708.01405
Marcelo Saval Calvo
Marcelo Saval-Calvo and Jorge Azorin-Lopez and Andres Fuster-Guillo and Higinio Mora-Mora
{\mu}-MAR: Multiplane 3D Marker based Registration for Depth-sensing Cameras
null
Expert Systems with Applications, Volume 42, Issue 23, Pages 9353-9365 (2015)
10.1016/j.eswa.2015.08.011
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many applications including object reconstruction, robot guidance, and scene mapping require the registration of multiple views from a scene to generate a complete geometric and appearance model of it. In real situations, transformations between views are unknown and it is necessary to apply expert inference to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened the research on this topic, motivating a plethora of new applications. Although they have enough resolution and accuracy for many applications, some situations may not be solved with general state-of-the-art registration methods due to the Signal-to-Noise ratio (SNR) and the resolution of the data provided. The problem of working with low SNR data, in general terms, may appear in any 3D system, so it is necessary to propose novel solutions in this aspect. In this paper, we propose a method, {\mu}-MAR, able to both coarsely and finely register sets of 3D points provided by low-cost depth-sensing cameras, although it is not restricted to these sensors, into a common coordinate system. The method is able to overcome the noisy data problem by means of using a model-based solution of multiplane registration. Specifically, it iteratively registers 3D markers composed of multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scenario, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allows a qualitative and quantitative evaluation by means of visual inspection and Hausdorff distance respectively. The real data experiments show the performance of the proposal using data acquired by a Primesense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The ...
[ { "version": "v1", "created": "Fri, 4 Aug 2017 07:35:22 GMT" } ]
2017-08-07T00:00:00
[ [ "Saval-Calvo", "Marcelo", "" ], [ "Azorin-Lopez", "Jorge", "" ], [ "Fuster-Guillo", "Andres", "" ], [ "Mora-Mora", "Higinio", "" ] ]
new_dataset
0.997937
1708.01461
Hamid Hoorfar
Hamid Hoorfar and Alireza Bagheri
A Linear-time Algorithm for Orthogonal Watchman Route Problem with Minimum Bends
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an orthogonal polygon $ P $ with $ n $ vertices, the goal of the watchman route problem is finding a path $ S $ of minimum length in $ P $ such that every point of the polygon $ P $ is visible from at least one point of $ S $. In other words, in the watchman route problem we must compute a shortest watchman route inside a simple polygon of $ n $ vertices such that all the points interior to the polygon and on its boundary are visible to at least one point on the route. If the route and polygon are orthogonal, it is called the orthogonal watchman route problem. One of the targets of this problem is finding the orthogonal path with as few bends as possible. We present a linear-time algorithm for the orthogonal watchman route problem, in which the given polygon is monotone. Our algorithm can also be used for the problem on simple orthogonal polygons $ P $ for which the dual graph induced by the vertical decomposition of $ P $ is a path, which is called a path polygon.
[ { "version": "v1", "created": "Fri, 4 Aug 2017 11:49:52 GMT" } ]
2017-08-07T00:00:00
[ [ "Hoorfar", "Hamid", "" ], [ "Bagheri", "Alireza", "" ] ]
new_dataset
0.979791
1708.01524
Mehrdad Shariat
Marcin Rybakowski, Krystian Safjan, Venkatkumar Venkatasubramanian, Arnesh Vijay, Laurent Dussopt, Ali Zaidi, Michael Peter, Jian Luo, Maria Fresia, Mehrdad Shariat
Challenges & Solutions for above 6 GHz Radio Access Network Integration for Future Mobile Communication Systems
6 pages, 4 figures
null
10.1109/ICCW.2016.7503855
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile communication technology has been rapidly evolving ever since its first introduction in the late 1980s. The development witnessed is not just in the refinement of the radio access techniques, but also in the progression towards offering sophisticated features and services to the mobile phone users. To fulfill this ever-growing user demand and market trends, frequency ranges in millimeter wave bands are envisioned for wireless radio transmission. To respond to these trends, the EU-funded mmMAGIC project has been launched and its main objective is to design and develop radio access techniques operating in 6-100 GHz bands. When it comes to developing technologies for systems operating in these frequency ranges, a major challenge encountered will be in terms of radio access network integration. Unquestionably, issues at various aspects of physical layer design, channel modelling, architecture, network functions and deployment will be encountered; problems in multi-node and multi-antenna transceiver designs will surface as well. The work carried out in this project will address those challenges and propose solutions; but additionally, measure its efficiency against the project-specific KPIs set to meet the requirements of future operational 5G systems. The main intention of this paper is to outline some of the challenges, more specifically to highlight the network integration challenges, and discuss some of their technical solutions. The primary purpose here is to focus towards integrated 5G technology, thereby opening further research avenues for the exploration of new and alternate frequency bands in the electromagnetic spectrum.
[ { "version": "v1", "created": "Fri, 4 Aug 2017 14:35:29 GMT" } ]
2017-08-07T00:00:00
[ [ "Rybakowski", "Marcin", "" ], [ "Safjan", "Krystian", "" ], [ "Venkatasubramanian", "Venkatkumar", "" ], [ "Vijay", "Arnesh", "" ], [ "Dussopt", "Laurent", "" ], [ "Zaidi", "Ali", "" ], [ "Peter", "Michael", "" ], [ "Luo", "Jian", "" ], [ "Fresia", "Maria", "" ], [ "Shariat", "Mehrdad", "" ] ]
new_dataset
0.959443
1311.4096
Vaneet Aggarwal
Vaneet Aggarwal and Chao Tian and Vinay A. Vaishampayan and Yih-Farn R. Chen
Distributed Data Storage Systems with Opportunistic Repair
18 pages, revision from Infocom paper. arXiv admin note: text overlap with arXiv:0803.0632 by other authors
IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, Toronto, ON, pp. 1833-1841 (2014)
10.1109/INFOCOM.2014.6848122
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The reliability of erasure-coded distributed storage systems, as measured by the mean time to data loss (MTTDL), depends on the repair bandwidth of the code. Repair-efficient codes provide reliability values several orders of magnitude better than conventional erasure codes. Current state of the art codes fix the number of helper nodes (nodes participating in repair) a priori. In practice, however, it is desirable to allow the number of helper nodes to be adaptively determined by the network traffic conditions. In this work, we propose an opportunistic repair framework to address this issue. It is shown that there exists a threshold on the storage overhead, below which such an opportunistic approach does not lose any efficiency from the optimal storage-repair-bandwidth tradeoff; i.e. it is possible to construct a code simultaneously optimal for different numbers of helper nodes. We further examine the benefits of such opportunistic codes, and derive the MTTDL improvement for two repair models: one with limited total repair bandwidth and the other with limited individual-node repair bandwidth. In both settings, we show orders of magnitude improvement in MTTDL. Finally, the proposed framework is examined in a network setting where a significant improvement in MTTDL is observed.
[ { "version": "v1", "created": "Sat, 16 Nov 2013 21:05:32 GMT" }, { "version": "v2", "created": "Thu, 6 Nov 2014 19:15:42 GMT" } ]
2017-08-04T00:00:00
[ [ "Aggarwal", "Vaneet", "" ], [ "Tian", "Chao", "" ], [ "Vaishampayan", "Vinay A.", "" ], [ "Chen", "Yih-Farn R.", "" ] ]
new_dataset
0.998112
1607.04339
Geordie George
Geordie George, Kiran Venugopal, Angel Lozano and Robert W. Heath Jr
Enclosed mmWave Wearable Networks: Feasibility and Performance
33 pages, 17 figures, Submitted to IEEE Transactions on Wireless Communications
null
10.1109/TWC.2017.2662681
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the feasibility of mmWave frequencies for personal networks of wireless wearable devices in enclosed settings (e.g., commuter trains, subways, airplanes, airports, or offices). At these frequencies, specular reflections off surfaces are expected to contribute intended signal power and, simultaneously, to aggravate the interference at the receivers. Meanwhile, blockages by obstacles and people---including the individuals wearing the devices---are expected to shield receivers from interference. With the aid of stochastic geometry and random shape theory, we assess the interplay of surface reflections and blockages for dense deployments of wearable networks equipped with directional antenna arrays in relevant indoor settings.
[ { "version": "v1", "created": "Thu, 14 Jul 2016 22:56:22 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2016 14:32:07 GMT" } ]
2017-08-04T00:00:00
[ [ "George", "Geordie", "" ], [ "Venugopal", "Kiran", "" ], [ "Lozano", "Angel", "" ], [ "Heath", "Robert W.", "Jr" ] ]
new_dataset
0.995903
1703.08628
Stavros Tsogkas
Stavros Tsogkas, Sven Dickinson
AMAT: Medial Axis Transform for Natural Images
10 pages (including references), 5 figures, accepted at ICCV 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Appearance-MAT (AMAT), a generalization of the medial axis transform for natural images, that is framed as a weighted geometric set cover problem. We make the following contributions: i) we extend previous medial point detection methods for color images, by associating each medial point with a local scale; ii) inspired by the invertibility property of the binary MAT, we also associate each medial point with a local encoding that allows us to invert the AMAT, reconstructing the input image; iii) we describe a clustering scheme that takes advantage of the additional scale and appearance information to group individual points into medial branches, providing a shape decomposition of the underlying image regions. In our experiments, we show state-of-the-art performance in medial point detection on Berkeley Medial AXes (BMAX500), a new dataset of medial axes based on the BSDS500 database, and good generalization on the SK506 and WH-SYMMAX datasets. We also measure the quality of reconstructed images from BMAX500, obtained by inverting their computed AMAT. Our approach delivers significantly better reconstruction quality with respect to three baselines, using just 10% of the image pixels. Our code and annotations are available at https://github.com/tsogkas/amat .
[ { "version": "v1", "created": "Fri, 24 Mar 2017 23:50:52 GMT" }, { "version": "v2", "created": "Wed, 2 Aug 2017 23:21:18 GMT" } ]
2017-08-04T00:00:00
[ [ "Tsogkas", "Stavros", "" ], [ "Dickinson", "Sven", "" ] ]
new_dataset
0.994531
1708.00997
Hualu Liu
Xiusheng Liu and Hualu Liu
Rank-metric LCD codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate which of the rank-metric codes proposed by Delsarte and Gabidulin are complementary dual codes. We point out the relationship between Delsarte complementary dual codes and Gabidulin complementary dual codes. In the finite field $\mathbb{F}_{q}^{m}$, we construct two classes of Gabidulin LCD MRD codes by a self-dual basis (or almost self-dual basis) of $\mathbb{F}_{q}^{m}$ over $\mathbb{F}_{q}$. Under a suitable condition, we determine a sufficient condition for Delsarte optimal anticodes to be LCD codes over $\mathbb{F}_{q}$.
[ { "version": "v1", "created": "Thu, 3 Aug 2017 04:42:23 GMT" } ]
2017-08-04T00:00:00
[ [ "Liu", "Xiusheng", "" ], [ "Liu", "Hualu", "" ] ]
new_dataset
0.99654
1708.01135
Eike Hermann M\"uller
William R. Saunders, James Grant, Eike H. M\"uller
Long range forces in a performance portable Molecular Dynamics framework
9 pages, 3 figures, submitted to ParCo 2017 Parallel Computing Conference
null
null
null
cs.DC cs.SE physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular Dynamics (MD) codes predict the fundamental properties of matter by following the trajectories of a collection of interacting model particles. To exploit diverse modern manycore hardware, efficient codes must use all available parallelism. At the same time they need to be portable and easily extendible by the domain specialist (physicist/chemist) without detailed knowledge of this hardware. To address this challenge, we recently described a new Domain Specific Language (DSL) for the development of performance portable MD codes based on a "Separation of Concerns": a Python framework automatically generates efficient parallel code for a range of target architectures. Electrostatic interactions between charged particles are important in many physical systems and often dominate the runtime. Here we discuss the inclusion of long-range interaction algorithms in our code generation framework. These algorithms require global communications and careful consideration has to be given to any impact on parallel scalability. We implemented an Ewald summation algorithm for electrostatic forces, present scaling comparisons for different system sizes and compare to the performance of existing codes. We also report on further performance optimisations delivered with OpenMP shared memory parallelism.
[ { "version": "v1", "created": "Thu, 3 Aug 2017 13:46:07 GMT" } ]
2017-08-04T00:00:00
[ [ "Saunders", "William R.", "" ], [ "Grant", "James", "" ], [ "Müller", "Eike H.", "" ] ]
new_dataset
0.996862
1611.01477
Enis Ulqinaku
Enis Ulqinaku, Luka Malisa, Julinda Stefa, Alessandro Mei and Srdjan Capkun
Using Hover to Compromise the Confidentiality of User Input on Android
11 pages
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the new hover (floating touch) technology, available in a number of today's smartphone models, can be abused by any Android application running with a common SYSTEM_ALERT_WINDOW permission to record all touchscreen input into other applications. Leveraging this attack, a malicious application running on the system is therefore able to profile user's behavior, capture sensitive input such as passwords and PINs as well as record all user's social interactions. To evaluate our attack we implemented Hoover, a proof-of-concept malicious application that runs in the system background and records all input to foreground applications. We evaluated Hoover with 40 users, across two different Android devices and two input methods, stylus and finger. In the case of touchscreen input by finger, Hoover estimated the positions of users' clicks within an error of 100 pixels and keyboard input with an accuracy of 79%. Hoover captured users' input by stylus even more accurately, estimating users' clicks within 2 pixels and keyboard input with an accuracy of 98%. We discuss ways of mitigating this attack and show that this cannot be done by simply restricting access to permissions or imposing additional cognitive load on the users since this would significantly constrain the intended use of the hover technology.
[ { "version": "v1", "created": "Fri, 4 Nov 2016 18:18:38 GMT" }, { "version": "v2", "created": "Wed, 2 Aug 2017 09:06:37 GMT" } ]
2017-08-03T00:00:00
[ [ "Ulqinaku", "Enis", "" ], [ "Malisa", "Luka", "" ], [ "Stefa", "Julinda", "" ], [ "Mei", "Alessandro", "" ], [ "Capkun", "Srdjan", "" ] ]
new_dataset
0.997916
1701.05648
Christoph Treude
Brock Angus Campbell and Christoph Treude
NLP2Code: Code Snippet Content Assist via Natural Language Tasks
tool demo video available at https://www.youtube.com/watch?v=h-gaVYtCznI; to appear as a tool demo paper at ICSME 2017 (https://icsme2017.github.io/)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developers increasingly take to the Internet for code snippets to integrate into their programs. To save developers the time required to switch from their development environments to a web browser in the quest for a suitable code snippet, we introduce NLP2Code, a content assist for code snippets. Unlike related tools, NLP2Code integrates directly into the source code editor and provides developers with a content assist feature to close the vocabulary gap between developers' needs and code snippet meta data. Our preliminary evaluation of NLP2Code shows that the majority of invocations lead to code snippets rated as helpful by users and that the tool is able to support a wide range of tasks.
[ { "version": "v1", "created": "Fri, 20 Jan 2017 00:38:53 GMT" }, { "version": "v2", "created": "Sat, 29 Apr 2017 12:53:02 GMT" }, { "version": "v3", "created": "Wed, 2 Aug 2017 08:55:58 GMT" } ]
2017-08-03T00:00:00
[ [ "Campbell", "Brock Angus", "" ], [ "Treude", "Christoph", "" ] ]
new_dataset
0.999626
1706.05406
Mark Kibanov
Mark Kibanov, Gerd Stumme, Imaduddin Amin and Jong Gun Lee
Mining Social Media to Inform Peatland Fire and Haze Disaster Management
null
null
10.1007/s13278-017-0446-1
null
cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Peatland fires and haze events are disasters with national, regional and international implications. The phenomena lead to direct damage to local assets, as well as broader economic and environmental losses. Satellite imagery is still the main and often the only available source of information for disaster management. In this article, we test the potential of social media to assist disaster management. To this end, we compare insights from two datasets: fire hotspots detected via NASA satellite imagery and almost all GPS-stamped tweets from Sumatra Island, Indonesia, posted during 2014. Sumatra Island is chosen as it regularly experiences a significant number of haze events, which affect citizens in Indonesia as well as in nearby countries including Malaysia and Singapore. We analyse temporal correlations between the datasets and their geo-spatial interdependence. Furthermore, we show how Twitter data reveals changes in users' behavior during severe haze events. Overall, we demonstrate that social media is a valuable source of complementary and supplementary information for haze disaster management. Based on our methodology and findings, an analytics tool to improve peatland fire and haze disaster management by the Indonesian authorities is under development.
[ { "version": "v1", "created": "Fri, 16 Jun 2017 18:57:06 GMT" }, { "version": "v2", "created": "Wed, 2 Aug 2017 14:44:17 GMT" } ]
2017-08-03T00:00:00
[ [ "Kibanov", "Mark", "" ], [ "Stumme", "Gerd", "" ], [ "Amin", "Imaduddin", "" ], [ "Lee", "Jong Gun", "" ] ]
new_dataset
0.999501
1707.01032
Carlos Sarraute
Carlos Sarraute, Carolina Lang, Nicolas B. Ponieman, Sebastian Anapolsky
The City Pulse of Buenos Aires
Published in NetMob 2015 (Fourth Conference on the Scientific Analysis of Mobile Phone Datasets), MIT Media Lab, Cambridge, USA, 8-10 April 2015
null
null
null
cs.SI cs.CY physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cell phone technology generates massive amounts of data. Although this data has been gathered for billing and logging purposes, today it has a much higher value, because its volume makes it very useful for big data analyses. In this project, we analyze the viability of using cell phone records to lower the cost of urban and transportation planning, in particular, to find out how people travel in a specific city (in this case, Buenos Aires, in Argentina). We use anonymized cell phone data to estimate the distribution of the population in the city using different periods of time. We compare those results with traditional methods (urban polling) using data from Buenos Aires origin-destination surveys. Traditional polling methods have a much smaller sample, in the order of tens of thousands (or even less for smaller cities), to maintain reasonable costs. Furthermore, these studies are performed at most once per decade, in the best cases, in Argentina and many other countries. Our objective is to prove that new methods based on cell phone data are reliable, and can be used indirectly to keep a real-time track of the flow of people among different parts of a city. We also go further to explore new possibilities opened by these methods.
[ { "version": "v1", "created": "Tue, 4 Jul 2017 15:18:06 GMT" }, { "version": "v2", "created": "Tue, 1 Aug 2017 18:31:46 GMT" } ]
2017-08-03T00:00:00
[ [ "Sarraute", "Carlos", "" ], [ "Lang", "Carolina", "" ], [ "Ponieman", "Nicolas B.", "" ], [ "Anapolsky", "Sebastian", "" ] ]
new_dataset
0.998319
1708.00308
Igor Melnyk
Ramesh Nallapati, Igor Melnyk, Abhishek Kumar and Bowen Zhou
SenGen: Sentence Generating Neural Variational Topic Model
null
null
null
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new topic model that generates documents by sampling a topic for one whole sentence at a time, and generating the words in the sentence using an RNN decoder that is conditioned on the topic of the sentence. We argue that this novel formalism will help us not only visualize and model the topical discourse structure in a document better, but also potentially lead to more interpretable topics since we can now illustrate topics by sampling representative sentences instead of bag of words or phrases. We present a variational auto-encoder approach for learning in which we use a factorized variational encoder that independently models the posterior over topical mixture vectors of documents using a feed-forward network, and the posterior over topic assignments to sentences using an RNN. Our preliminary experiments on two different datasets indicate early promise, but also expose many challenges that remain to be addressed.
[ { "version": "v1", "created": "Tue, 1 Aug 2017 13:31:24 GMT" } ]
2017-08-03T00:00:00
[ [ "Nallapati", "Ramesh", "" ], [ "Melnyk", "Igor", "" ], [ "Kumar", "Abhishek", "" ], [ "Zhou", "Bowen", "" ] ]
new_dataset
0.996964
1708.00497
Chiara Boldrini
Chiara Boldrini, Raffaele Bruno and Haitam Laarabi
Car sharing through the data analysis lens
Accepted for KNOWMe: 1st International Workshop on Knowledge Discovery from Mobility and Transportation Systems (colocated with PKDD 2017)
null
null
null
cs.CY cs.DB cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Car sharing is one of the pillars of a smart transportation infrastructure, as it is expected to reduce traffic congestion, parking demands and pollution in our cities. From the point of view of demand modelling, car sharing is a weak signal in the city landscape: only a small percentage of the population uses it, and thus it is difficult to study reliably with traditional techniques such as households travel diaries. In this work, we depart from these traditional approaches and we rely on web-based, digital records about vehicle availability in 10 European cities for one of the major active car sharing operators. We discuss how vehicles are used, what are the main characteristics of car sharing trips, whether events happening in certain areas are predictable or not, and how the spatio-temporal information about vehicle availability can be used to infer how different zones in a city are used by customers. We conclude the paper by presenting a direct application of the analysis of the dataset, aimed at identifying where to locate maintenance facilities within the car sharing operational area.
[ { "version": "v1", "created": "Tue, 25 Jul 2017 13:07:47 GMT" } ]
2017-08-03T00:00:00
[ [ "Boldrini", "Chiara", "" ], [ "Bruno", "Raffaele", "" ], [ "Laarabi", "Haitam", "" ] ]
new_dataset
0.966892
1708.00551
Kartik Chandra
Kartik Chandra and Rastislav Bodik
Bonsai: Synthesis-Based Reasoning for Type Systems
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe algorithms for symbolic reasoning about executable models of type systems, supporting three queries intended for designers of type systems. First, we check for type soundness bugs and synthesize a counterexample program if such a bug is found. Second, we compare two versions of a type system, synthesizing a program accepted by one but rejected by the other. Third, we minimize the size of synthesized counterexample programs. These algorithms symbolically evaluate typecheckers and interpreters, producing formulas that characterize the set of programs that fail or succeed in the typechecker and the interpreter. However, symbolically evaluating interpreters poses efficiency challenges, which are caused by having to merge execution paths of the various possible input programs. Our main contribution is the Bonsai tree, a novel symbolic representation of programs and program states which addresses these challenges. Bonsai trees encode complex syntactic information in terms of logical constraints, enabling more efficient merging. We implement these algorithms in the Bonsai tool, an assistant for type system designers. We perform case studies on how Bonsai helps test and explore a variety of type systems. Bonsai efficiently synthesizes counterexamples for soundness bugs that have been inaccessible to automatic tools, and is the first automated tool to find a counterexample for the recently discovered Scala soundness bug SI-9633.
[ { "version": "v1", "created": "Tue, 1 Aug 2017 23:31:35 GMT" } ]
2017-08-03T00:00:00
[ [ "Chandra", "Kartik", "" ], [ "Bodik", "Rastislav", "" ] ]
new_dataset
0.999474
1708.00586
Sifat Ibne Mushfique
Sifat Ibne Mushfique, Prabath Palathingal, Yusuf Said Eroglu, Murat Yuksel, Ismail Guvenc and Nezih Pala
A Software-Defined Multi-Element VLC Architecture
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the modern era of radio frequency (RF) spectrum crunch, visible light communication (VLC) is a recent and promising alternative technology that operates at the visible light spectrum. Thanks to its unlicensed and large bandwidth, VLC can deliver high throughput, better energy efficiency, and low cost data communications. In this article, a hybrid RF/VLC architecture is considered that can simultaneously provide lighting and communication coverage across a room. The considered architecture involves a novel multi-element hemispherical bulb design, which can transmit multiple data streams over light emitting diode (LED) modules. Simulations considering various VLC transmitter configurations and topologies show that good link quality and high spatial reuse can be maintained in typical indoor communication scenarios.
[ { "version": "v1", "created": "Wed, 2 Aug 2017 03:06:55 GMT" } ]
2017-08-03T00:00:00
[ [ "Mushfique", "Sifat Ibne", "" ], [ "Palathingal", "Prabath", "" ], [ "Eroglu", "Yusuf Said", "" ], [ "Yuksel", "Murat", "" ], [ "Guvenc", "Ismail", "" ], [ "Pala", "Nezih", "" ] ]
new_dataset
0.997271
1708.00666
Yuan Yuan
Yuan Yuan, Xiaodan Liang, Xiaolong Wang, Dit-Yan Yeung, Abhinav Gupta
Temporal Dynamic Graph LSTM for Action-driven Video Object Detection
To appear in ICCV 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate a weakly-supervised object detection framework. Most existing frameworks focus on using static images to learn object detectors. However, these detectors often fail to generalize to videos because of the existing domain shift. Therefore, we investigate learning these detectors directly from boring videos of daily activities. Instead of using bounding boxes, we explore the use of action descriptions as supervision since they are relatively easy to gather. A common issue, however, is that objects of interest that are not involved in human actions are often absent in global action descriptions, a problem known as "missing label". To tackle this problem, we propose a novel temporal dynamic graph Long Short-Term Memory network (TD-Graph LSTM). TD-Graph LSTM enables global temporal reasoning by constructing a dynamic graph that is based on temporal correlations of object proposals and spans the entire video. The missing label issue for each individual frame can thus be significantly alleviated by transferring knowledge across correlated object proposals in the whole video. Extensive evaluations on a large-scale daily-life action dataset (i.e., Charades) demonstrate the superiority of our proposed method. We also release object bounding-box annotations for more than 5,000 frames in Charades. We believe this annotated data can also benefit other research on video-based object recognition in the future.
[ { "version": "v1", "created": "Wed, 2 Aug 2017 09:38:26 GMT" } ]
2017-08-03T00:00:00
[ [ "Yuan", "Yuan", "" ], [ "Liang", "Xiaodan", "" ], [ "Wang", "Xiaolong", "" ], [ "Yeung", "Dit-Yan", "" ], [ "Gupta", "Abhinav", "" ] ]
new_dataset
0.996139
1708.00726
Rico Sennrich
Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone and Philip Williams
The University of Edinburgh's Neural MT Systems for WMT17
WMT 2017 shared task track; for Bibtex, see http://homepages.inf.ed.ac.uk/rsennric/bib.html#uedin-nmt:2017
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper describes the University of Edinburgh's submissions to the WMT17 shared news translation and biomedical translation tasks. We participated in 12 translation directions for news, translating between English and Czech, German, Latvian, Russian, Turkish and Chinese. For the biomedical task we submitted systems for English to Czech, German, Polish and Romanian. Our systems are neural machine translation systems trained with Nematus, an attentional encoder-decoder. We follow our setup from last year and build BPE-based models with parallel and back-translated monolingual training data. Novelties this year include the use of deep architectures, layer normalization, and more compact models due to weight tying and improvements in BPE segmentations. We perform extensive ablative experiments, reporting on the effectiveness of layer normalization, deep architectures, and different ensembling techniques.
[ { "version": "v1", "created": "Wed, 2 Aug 2017 12:48:32 GMT" } ]
2017-08-03T00:00:00
[ [ "Sennrich", "Rico", "" ], [ "Birch", "Alexandra", "" ], [ "Currey", "Anna", "" ], [ "Germann", "Ulrich", "" ], [ "Haddow", "Barry", "" ], [ "Heafield", "Kenneth", "" ], [ "Barone", "Antonio Valerio Miceli", "" ], [ "Williams", "Philip", "" ] ]
new_dataset
0.998181
1708.00783
Stuart Golodetz
Victor Adrian Prisacariu, Olaf K\"ahler, Stuart Golodetz, Michael Sapienza, Tommaso Cavallari, Philip H S Torr, David W Murray
InfiniTAM v3: A Framework for Large-Scale 3D Reconstruction with Loop Closure
This article largely supersedes arxiv:1410.0925 (it describes version 3 of the InfiniTAM framework)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Volumetric models have become a popular representation for 3D scenes in recent years. One breakthrough leading to their popularity was KinectFusion, which focuses on 3D reconstruction using RGB-D sensors. However, monocular SLAM has since also been tackled with very similar approaches. Representing the reconstruction volumetrically as a TSDF leads to most of the simplicity and efficiency that can be achieved with GPU implementations of these systems. However, this representation is memory-intensive and limits applicability to small-scale reconstructions. Several avenues have been explored to overcome this. With the aim of summarizing them and providing for a fast, flexible 3D reconstruction pipeline, we propose a new, unifying framework called InfiniTAM. The idea is that steps like camera tracking, scene representation and integration of new data can easily be replaced and adapted to the user's needs. This report describes the technical implementation details of InfiniTAM v3, the third version of our InfiniTAM system. We have added various new features, as well as making numerous enhancements to the low-level code that significantly improve our camera tracking performance. The new features that we expect to be of most interest are (i) a robust camera tracking module; (ii) an implementation of Glocker et al.'s keyframe-based random ferns camera relocaliser; (iii) a novel approach to globally-consistent TSDF-based reconstruction, based on dividing the scene into rigid submaps and optimising the relative poses between them; and (iv) an implementation of Keller et al.'s surfel-based reconstruction approach.
[ { "version": "v1", "created": "Wed, 2 Aug 2017 14:50:02 GMT" } ]
2017-08-03T00:00:00
[ [ "Prisacariu", "Victor Adrian", "" ], [ "Kähler", "Olaf", "" ], [ "Golodetz", "Stuart", "" ], [ "Sapienza", "Michael", "" ], [ "Cavallari", "Tommaso", "" ], [ "Torr", "Philip H S", "" ], [ "Murray", "David W", "" ] ]
new_dataset
0.953027
1708.00818
Jo\~ao Sedoc
Grishma Jena, Mansi Vashisht, Abheek Basu, Lyle Ungar, Jo\~ao Sedoc
Enterprise to Computer: Star Trek chatbot
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human interactions and human-computer interactions are strongly influenced by style as well as content. Adding a persona to a chatbot makes it more human-like and contributes to a better and more engaging user experience. In this work, we propose a design for a chatbot that captures the "style" of Star Trek by incorporating references from the show along with peculiar tones of the fictional characters therein. Our Enterprise to Computer bot (E2Cbot) treats Star Trek dialog style and general dialog style differently, using two recurrent neural network Encoder-Decoder models. The Star Trek dialog style uses sequence to sequence (SEQ2SEQ) models (Sutskever et al., 2014; Bahdanau et al., 2014) trained on Star Trek dialogs. The general dialog style uses Word Graph to shift the response of the SEQ2SEQ model into the Star Trek domain. We evaluate the bot both in terms of perplexity and word overlap with Star Trek vocabulary and subjectively using human evaluators.
[ { "version": "v1", "created": "Wed, 2 Aug 2017 16:51:01 GMT" } ]
2017-08-03T00:00:00
[ [ "Jena", "Grishma", "" ], [ "Vashisht", "Mansi", "" ], [ "Basu", "Abheek", "" ], [ "Ungar", "Lyle", "" ], [ "Sedoc", "João", "" ] ]
new_dataset
0.998034
1606.04200
Mrinal Kumar
Suryajith Chillara, Mrinal Kumar, Ramprasad Saptharishi, V Vinay
The Chasm at Depth Four, and Tensor Rank : Old results, new insights
Correction - tensor rank is sub-multiplicative. The earlier version incorrectly mentioned that it is multiplicative
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agrawal and Vinay [AV08] showed how any polynomial size arithmetic circuit can be thought of as a depth four arithmetic circuit of subexponential size. The resulting circuit size in this simulation was more carefully analyzed by Koiran [Koiran] and subsequently by Tavenas [Tav13]. We provide a simple proof of this chain of results. We then abstract the main ingredient to apply it to formulas and constant depth circuits, and show more structured depth reductions for them. In an a priori surprising result, Raz [Raz10] showed that for any $n$ and $d$, such that $ \omega(1) \leq d \leq O\left(\frac{\log n}{\log\log n}\right)$, constructing explicit tensors $T:[n]^d \rightarrow F$ of high enough rank would imply superpolynomial lower bounds for arithmetic formulas over the field $F$. Using the additional structure we obtain from our proof of the depth reduction for arithmetic formulas, we give a new and arguably simpler proof of this connection. We also extend this result for homogeneous formulas to show that, in fact, the connection holds for any $d$ such that $\omega(1) \leq d \leq n^{o(1)}$.
[ { "version": "v1", "created": "Tue, 14 Jun 2016 04:37:17 GMT" }, { "version": "v2", "created": "Tue, 1 Aug 2017 03:42:53 GMT" } ]
2017-08-02T00:00:00
[ [ "Chillara", "Suryajith", "" ], [ "Kumar", "Mrinal", "" ], [ "Saptharishi", "Ramprasad", "" ], [ "Vinay", "V", "" ] ]
new_dataset
0.959645
1607.07247
Weihua Hu
Weihua Hu, Hirosuke Yamamoto, Junya Honda
Worst-case Redundancy of Optimal Binary AIFV Codes and their Extended Codes
IEEE Transactions on Information Theory, vol.63, no.8, pp.5074-5086, Aug. 2017
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binary AIFV codes are lossless codes that generalize the class of instantaneous FV codes. The code uses two code trees and assigns source symbols to incomplete internal nodes as well as to leaves. AIFV codes are empirically shown to attain a better compression ratio than Huffman codes. Nevertheless, an upper bound on the redundancy of optimal binary AIFV codes is only known to be 1, which is the same as the bound of Huffman codes. In this paper, the upper bound is improved to 1/2, which is shown to coincide with the worst-case redundancy of the codes. Along with this, the worst-case redundancy is derived in terms of $p_{\max} \geq 1/2$, where $p_{\max}$ is the probability of the most likely source symbol. Additionally, we propose an extension of binary AIFV codes, which use $m$ code trees and allow at most $m$-bit decoding delay. We show that the worst-case redundancy of the extended binary AIFV codes is $1/m$ for $m \leq 4$.
[ { "version": "v1", "created": "Mon, 25 Jul 2016 12:44:10 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2016 03:20:48 GMT" }, { "version": "v3", "created": "Mon, 3 Apr 2017 05:44:24 GMT" }, { "version": "v4", "created": "Tue, 1 Aug 2017 05:05:15 GMT" } ]
2017-08-02T00:00:00
[ [ "Hu", "Weihua", "" ], [ "Yamamoto", "Hirosuke", "" ], [ "Honda", "Junya", "" ] ]
new_dataset
0.990811
1608.07470
Qi Jia
Qi Jia, Xin Fan, Zhongxuan Luo, Lianbo Song, and Tie Qiu
A Fast Ellipse Detector Using Projective Invariant Pruning
14 pages, 34 figures, journal
null
10.1109/TIP.2017.2704660
null
cs.CV cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting elliptical objects from an image is a central task in robot navigation and industrial diagnosis where the detection time is always a critical issue. Existing methods are hardly applicable to these real-time scenarios of limited hardware resource due to the huge number of fragment candidates (edges or arcs) for fitting ellipse equations. In this paper, we present a fast algorithm detecting ellipses with high accuracy. The algorithm leverages a newly developed projective invariant to significantly prune the undesired candidates and to pick out elliptical ones. The invariant is able to reflect the intrinsic geometry of a planar curve, giving the value of -1 on any three collinear points and +1 for any six points on an ellipse. Thus, we apply the pruning and picking by simply comparing these binary values. Moreover, the calculation of the invariant only involves the determinant of a 3*3 matrix. Extensive experiments on three challenging data sets with 650 images demonstrate that our detector runs 20%-50% faster than the state-of-the-art algorithms with comparable or higher precision.
[ { "version": "v1", "created": "Fri, 26 Aug 2016 14:25:15 GMT" } ]
2017-08-02T00:00:00
[ [ "Jia", "Qi", "" ], [ "Fan", "Xin", "" ], [ "Luo", "Zhongxuan", "" ], [ "Song", "Lianbo", "" ], [ "Qiu", "Tie", "" ] ]
new_dataset
0.99298
1609.07878
Sujoy Kumar Biswas
Sujoy Kumar Biswas and Peyman Milanfar
Linear Support Tensor Machine: Pedestrian Detection in Thermal Infrared Images
null
null
10.1109/TIP.2017.2705426
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pedestrian detection in thermal infrared images poses unique challenges because of the low resolution and noisy nature of the image. Here we propose a mid-level attribute in the form of multidimensional template, or tensor, using Local Steering Kernel (LSK) as low-level descriptors for detecting pedestrians in far infrared images. LSK is specifically designed to deal with intrinsic image noise and pixel level uncertainty by capturing local image geometry succinctly instead of collecting local orientation statistics (e.g., histograms in HOG). Our second contribution is the introduction of a new image similarity kernel in the popular maximum margin framework of support vector machines that results in a relatively short and simple training phase for building a rigid pedestrian detector. Our third contribution is to replace the sluggish but de facto sliding window based detection methodology with multichannel discrete Fourier transform, facilitating very fast and efficient pedestrian localization. The experimental studies on publicly available thermal infrared images justify our proposals and model assumptions. In addition, the proposed work also involves the release of our in-house annotations of pedestrians in more than 17000 frames of OSU Color Thermal database for the purpose of sharing with the research community.
[ { "version": "v1", "created": "Mon, 26 Sep 2016 07:54:00 GMT" } ]
2017-08-02T00:00:00
[ [ "Biswas", "Sujoy Kumar", "" ], [ "Milanfar", "Peyman", "" ] ]
new_dataset
0.980498
1705.01359
Moin Nabi
Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurelie Herbelot, Moin Nabi, Enver Sangineto, Raffaella Bernardi
FOIL it! Find One mismatch between Image and Language caption
To appear at ACL 2017
null
10.18653/v1/P17-1024
null
cs.CV cs.CL cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MSCOCO dataset, FOIL-COCO, which associates images with both correct and "foil" captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ("foil word"). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image.
[ { "version": "v1", "created": "Wed, 3 May 2017 11:07:13 GMT" } ]
2017-08-02T00:00:00
[ [ "Shekhar", "Ravi", "" ], [ "Pezzelle", "Sandro", "" ], [ "Klimovich", "Yauhen", "" ], [ "Herbelot", "Aurelie", "" ], [ "Nabi", "Moin", "" ], [ "Sangineto", "Enver", "" ], [ "Bernardi", "Raffaella", "" ] ]
new_dataset
0.991322
1707.01489
Filippo Vella
Agnese Augello, Emanuele Cipolla, Ignazio Infantino, Adriano Manfre, Giovanni Pilato and Filippo Vella
Creative Robot Dance with Variational Encoder
This paper is an extended version of a paper published on the eighth International Conference on Computational Creativity (ICCC), held in Atlanta, GA, June 20th-June 22nd, 2017
null
null
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What we appreciate in dance is the ability of people to spontaneously improvise new movements and choreographies, surrendering to the music rhythm, being inspired by the current perceptions and sensations and by previous experiences, deeply stored in their memory. Like other human abilities, this, of course, is challenging to reproduce in an artificial entity such as a robot. Recent generations of anthropomorphic robots, the so-called humanoids, however, exhibit more and more sophisticated skills and have raised interest in robotics communities to design and experiment with systems devoted to automatic dance generation. In this work, we highlight the importance of modeling a computational creativity behavior in dancing robots to avoid a mere execution of preprogrammed dances. In particular, we exploit a deep learning approach that allows a robot to generate in real time new dancing movements according to the music it is listening to.
[ { "version": "v1", "created": "Wed, 5 Jul 2017 17:42:42 GMT" } ]
2017-08-02T00:00:00
[ [ "Augello", "Agnese", "" ], [ "Cipolla", "Emanuele", "" ], [ "Infantino", "Ignazio", "" ], [ "Manfre", "Adriano", "" ], [ "Pilato", "Giovanni", "" ], [ "Vella", "Filippo", "" ] ]
new_dataset
0.992704
1707.08690
Yixin Cao
Yixin Cao, Yuping Ke, Yota Otachi and Jie You
Vertex Deletion Problems on Chordal Graphs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Containing many classic optimization problems, the family of vertex deletion problems occupies an important position in the study of algorithms and complexity. The celebrated result of Lewis and Yannakakis gives a complete dichotomy of their complexity. It however has nothing to say about the case when the input graph is also special. This paper initiates a systematic study of vertex deletion problems from one subclass of chordal graphs to another. We give polynomial-time algorithms or proofs of NP-completeness for most of the problems. In particular, we show that the vertex deletion problem from chordal graphs to interval graphs is NP-complete.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 02:57:15 GMT" }, { "version": "v2", "created": "Tue, 1 Aug 2017 13:05:06 GMT" } ]
2017-08-02T00:00:00
[ [ "Cao", "Yixin", "" ], [ "Ke", "Yuping", "" ], [ "Otachi", "Yota", "" ], [ "You", "Jie", "" ] ]
new_dataset
0.992782
1707.09476
Shanghang Zhang
Shanghang Zhang, Guanhang Wu, Jo\~ao P. Costeira, Jos\'e M. F. Moura
FCN-rLSTM: Deep Spatio-Temporal Neural Networks for Vehicle Counting in City Cameras
Accepted by International Conference on Computer Vision (ICCV), 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop deep spatio-temporal neural networks to sequentially count vehicles from low quality videos captured by city cameras (citycams). Citycam videos have low resolution, low frame rate, high occlusion and large perspective, making most existing methods lose their efficacy. To overcome limitations of existing methods and incorporate the temporal information of traffic video, we design a novel FCN-rLSTM network to jointly estimate vehicle density and vehicle count by connecting fully convolutional neural networks (FCN) with long short term memory networks (LSTM) in a residual learning fashion. Such design leverages the strengths of FCN for pixel-level prediction and the strengths of LSTM for learning complex temporal dynamics. The residual learning connection reformulates the vehicle count regression as learning residual functions with reference to the sum of densities in each frame, which significantly accelerates the training of networks. To preserve feature map resolution, we propose a Hyper-Atrous combination to integrate atrous convolution in FCN and combine feature maps of different convolution layers. FCN-rLSTM enables refined feature representation and a novel end-to-end trainable mapping from pixels to vehicle count. We extensively evaluated the proposed method on different counting tasks with three datasets, with experimental results demonstrating their effectiveness and robustness. In particular, FCN-rLSTM reduces the mean absolute error (MAE) from 5.31 to 4.21 on TRANCOS, and reduces the MAE from 2.74 to 1.53 on WebCamT. Training process is accelerated by 5 times on average.
[ { "version": "v1", "created": "Sat, 29 Jul 2017 07:22:48 GMT" }, { "version": "v2", "created": "Tue, 1 Aug 2017 00:33:29 GMT" } ]
2017-08-02T00:00:00
[ [ "Zhang", "Shanghang", "" ], [ "Wu", "Guanhang", "" ], [ "Costeira", "João P.", "" ], [ "Moura", "José M. F.", "" ] ]
new_dataset
0.996527
1708.00045
Rudrasis Chakraborty Mr
Rudrasis Chakraborty and Baba Vemuri
Statistics on the (compact) Stiefel manifold: Theory and Applications
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Stiefel manifold of the compact type is often encountered in many fields of Engineering including signal and image processing, machine learning, numerical optimization and others. The Stiefel manifold is a Riemannian homogeneous space but not a symmetric space. In previous work, researchers have defined probability distributions on symmetric spaces and performed statistical analysis of data residing in these spaces. In this paper, we present original work involving definition of Gaussian distributions on a homogeneous space and show that the maximum-likelihood estimate of the location parameter of a Gaussian distribution on the homogeneous space yields the Fr\'echet mean (FM) of the samples drawn from this distribution. Further, we present an algorithm to sample from the Gaussian distribution on the Stiefel manifold and recursively compute the FM of these samples. We also prove the weak consistency of this recursive FM estimator. Several synthetic and real data experiments are then presented, demonstrating the superior computational performance of this estimator over the gradient descent based non-recursive counterpart as well as the stochastic gradient descent based method prevalent in literature.
[ { "version": "v1", "created": "Mon, 31 Jul 2017 19:32:18 GMT" } ]
2017-08-02T00:00:00
[ [ "Chakraborty", "Rudrasis", "" ], [ "Vemuri", "Baba", "" ] ]
new_dataset
0.971481
1708.00157
Jos\'e I. Orlicki
Jose I. Orlicki
A Stable Coin with Pro-rated Rebasement and Price Manipulation Protection
9 pages, 4 figures, draft
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An existing pseudo-commodity and a smart contracts framework allow the creation of a purely automatic and self-sufficient price-stable cryptocurrency, without human intervention. This new currency, which we denominate Toroid or TRD, can be used more extensively for commerce than pseudo-commodity cryptocurrencies due to its lower volatility. It is also suitable for investment, as the tokens in each account multiply, returning interest, when the market grows. It behaves like the controlled fiat money of a central bank plus the benefits of an inflation-adjusted perpetuity bond. Collateral in a base coin, for example BTC or ETH, can be added to bootstrap your own Toroid investment or withdrawn after a very small investment period. So, the Toroids are not created from nothing nor have a limited monetary base. The minimum investment period can be very small, for example one day, and you keep the interest but you can return the Toroids and refund your collateral. That is a one-side-only peg to a deflationary crypto-commodity. The stability is guaranteed by endogenous measurements of the number of transactions and wallet pro-rated rebasement of balance to reduce volatility of price. Each account has its own rebasement due to the account creation timestamp. The rebasement control mechanism is progressive during the initial bootstrap period because price manipulation protection is more severe when the capital involved is smaller. Rebasement has a quick positive start to incentivize early adopters, who see only big growth in their TRD account during the bootstrap period. Finally, the new rebasement control makes it economically infeasible for an attacker targeting the coin with manipulated transaction volume if we set the minimum rebasement greater than the profits from massive currency manipulation.
[ { "version": "v1", "created": "Tue, 1 Aug 2017 04:59:12 GMT" } ]
2017-08-02T00:00:00
[ [ "Orlicki", "Jose I.", "" ] ]
new_dataset
0.964889
1708.00391
Wuwei Lan
Wuwei Lan, Siyu Qiu, Hua He and Wei Xu
A Continuously Growing Dataset of Sentential Paraphrases
11 pages, accepted to EMNLP 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at ~70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available.
[ { "version": "v1", "created": "Tue, 1 Aug 2017 15:41:51 GMT" } ]
2017-08-02T00:00:00
[ [ "Lan", "Wuwei", "" ], [ "Qiu", "Siyu", "" ], [ "He", "Hua", "" ], [ "Xu", "Wei", "" ] ]
new_dataset
0.99886
1308.6384
Carola Doerr
Benjamin Doerr and Carola Doerr
Collecting Coupons with Random Initial Stake
null
Algorithmica 75 (2016), 529-553
null
null
cs.DM cs.DS cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
[ { "version": "v1", "created": "Thu, 29 Aug 2013 07:45:51 GMT" } ]
2017-08-01T00:00:00
[ [ "Doerr", "Benjamin", "" ], [ "Doerr", "Carola", "" ] ]
new_dataset
0.953199
1505.03227
Keze Wang
Keze Wang and Liang Lin and Jiangbo Lu and Chenglong Li and Keyang Shi
PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Edge-Preserving Coherence
14 pages, 14 figures, 1 table, to appear in IEEE Transactions on Image Processing
IEEE Transactions on Image Processing (TIP), volume. 24, Issue. 10, page. 3019 - 3033, Oct. 2015
10.1109/TIP.2015.2432712
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly important. In this paper, we propose a unified framework called PISA, which stands for Pixelwise Image Saliency Aggregating various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate and fine-grained saliency, and overcomes the limitations of previous methods which use homogeneous superpixel-based and color only treatment. PISA aggregates multiple saliency cues in a global context such as complementary color and structure contrast measures with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation, in which each pixel will be evaluated with multiple hypothetical saliency levels. Instead of using global discrete optimization methods, we employ the cost-volume filtering technique to solve our formulation, assigning the saliency levels smoothly while preserving the edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image sub-sampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy. Extensive experiments on a number of public datasets suggest that PISA convincingly outperforms other state-of-the-art approaches. In addition, with this work we also create a new dataset containing $800$ commodity images for evaluating saliency detection. The dataset and source code of PISA can be downloaded at http://vision.sysu.edu.cn/project/PISA/
[ { "version": "v1", "created": "Wed, 13 May 2015 03:05:46 GMT" } ]
2017-08-01T00:00:00
[ [ "Wang", "Keze", "" ], [ "Lin", "Liang", "" ], [ "Lu", "Jiangbo", "" ], [ "Li", "Chenglong", "" ], [ "Shi", "Keyang", "" ] ]
new_dataset
0.950447
1603.09185
\"Ozlem Salehi
\"Ozlem Salehi, A.C. Cem Say, Flavio D'Alessandro
Homing Vector Automata
This is the extended version of our paper homing vector automata arXiv:1504.04859
RAIRO-Theoretical Informatics and Applications 50.4 (2016): 371-386
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce homing vector automata, which are finite automata augmented by a vector that is multiplied at each step by a matrix determined by the current transition, and have to return the vector to its original setting in order to accept the input. The computational power and properties of deterministic, nondeterministic, blind, non-blind, real-time and one-way versions of these machines are examined and compared to various related types of automata. A generalized version of the Stern-Brocot encoding method, suitable for representing strings on arbitrary alphabets, is also developed.
[ { "version": "v1", "created": "Wed, 30 Mar 2016 13:35:01 GMT" }, { "version": "v2", "created": "Thu, 4 Aug 2016 20:13:26 GMT" } ]
2017-08-01T00:00:00
[ [ "Salehi", "Özlem", "" ], [ "Say", "A. C. Cem", "" ], [ "D'Alessandro", "Flavio", "" ] ]
new_dataset
0.995416
1611.01579
Mohammad Mohammadi Amiri Mr.
Mohammad Mohammadi Amiri, Qianqian Yang, and Deniz Gunduz
Decentralized Caching and Coded Delivery with Distinct Cache Capacities
to appear, IEEE Transactions on Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decentralized proactive caching and coded delivery is studied in a content delivery network, where each user is equipped with a cache memory, not necessarily of equal capacity. Cache memories are filled in advance during the off-peak traffic period in a decentralized manner, i.e., without the knowledge of the number of active users, their identities, or their particular demands. User demands are revealed during the peak traffic period, and are served simultaneously through an error-free shared link. The goal is to find the minimum delivery rate during the peak traffic period that is sufficient to satisfy all possible demand combinations. A group-based decentralized caching and coded delivery scheme is proposed, and it is shown to improve upon the state-of-the-art in terms of the minimum required delivery rate when there are more users in the system than files. Numerical results indicate that the improvement is more significant as the cache capacities of the users become more skewed. A new lower bound on the delivery rate is also presented, which provides a tighter bound than the classical cut-set bound.
[ { "version": "v1", "created": "Sat, 5 Nov 2016 00:43:05 GMT" }, { "version": "v2", "created": "Mon, 31 Jul 2017 11:21:10 GMT" } ]
2017-08-01T00:00:00
[ [ "Amiri", "Mohammad Mohammadi", "" ], [ "Yang", "Qianqian", "" ], [ "Gunduz", "Deniz", "" ] ]
new_dataset
0.977287
1701.03338
Tom Kocmi
Tom Kocmi, Ond\v{r}ej Bojar
LanideNN: Multilingual Language Identification on Character Window
Accepted to EACL 2017
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, 927-936 (2017)
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In language identification, a common first step in natural language processing, we want to automatically determine the language of some input text. Monolingual language identification assumes that the given document is written in one language. In multilingual language identification, the document is usually in two or three languages and we just want their names. We go one step further and propose a method for textual language identification where languages can change arbitrarily and the goal is to identify the spans of each of the languages. Our method is based on Bidirectional Recurrent Neural Networks and it performs well in monolingual and multilingual language identification tasks on six datasets covering 131 languages. The method retains its accuracy for short documents and across domains, so it is ideal for off-the-shelf use without preparation of training data.
[ { "version": "v1", "created": "Thu, 12 Jan 2017 13:41:08 GMT" }, { "version": "v2", "created": "Sat, 29 Jul 2017 15:52:00 GMT" } ]
2017-08-01T00:00:00
[ [ "Kocmi", "Tom", "" ], [ "Bojar", "Ondřej", "" ] ]
new_dataset
0.998519
1701.08104
Tal Mizrahi
Tal Mizrahi, Yoram Revah, Yehonathan Refael Kalim, Elad Kapuza, Yuval Cassuto
FM-Delta: Fault Management Packet Compression
This technical report is an extended version of "FM-Delta: Fault Management Packet Compression", which was accepted to the IFIP/IEEE International Symposium on Integrated Network Management, IM 2017
null
10.23919/INM.2017.7987338
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fault Management (FM) is a cardinal feature in communication networks. One of the most common FM approaches is to use periodic keepalive messages. Hence, switches and routers are required to transmit a large number of FM messages periodically, requiring a hardware-based packet generator that periodically transmits a set of messages that are stored in an expensive on-chip memory. With the rapid growth of carrier networks, and as 5G technologies emerge, the number of users and the traffic rates are expected to significantly increase over the next few years. Consequently, we expect the on-chip memories used for FM to become a costly component in switch and router chips. We introduce a novel approach in which FM messages are stored in compressed form in the on-chip memory, allowing to significantly reduce the memory size. We present FM-Delta, a simple hardware-friendly delta encoding algorithm that allows FM messages to be compressed by a factor of 2.6. We show that this compression ratio is very close to the results of the zlib compression library, which requires much higher implementation complexity.
[ { "version": "v1", "created": "Fri, 27 Jan 2017 16:32:05 GMT" } ]
2017-08-01T00:00:00
[ [ "Mizrahi", "Tal", "" ], [ "Revah", "Yoram", "" ], [ "Kalim", "Yehonathan Refael", "" ], [ "Kapuza", "Elad", "" ], [ "Cassuto", "Yuval", "" ] ]
new_dataset
0.999336
1707.04312
Guillaume Lagarde
Guillaume Lagarde, Sylvain Perifel
Lempel-Ziv: a "one-bit catastrophe" but not a tragedy
42 pages, 6 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The so-called "one-bit catastrophe" for the compression algorithm LZ'78 asks whether the compression ratio of an infinite word can change when a single bit is added in front of it. We answer positively this open question raised by Lutz and others: we show that there exists an infinite word $w$ such that $\rho_{sup}(w)=0$ but $\rho_{inf}(0w)>0$, where $\rho_{sup}$ and $\rho_{inf}$ are respectively the $\limsup$ and the $\liminf$ of the compression ratios $\rho$ of the prefixes. To that purpose we explore the behaviour of LZ'78 on finite words and show the following results: - There is a constant $C>0$ such that, for any finite word $w$ and any letter $a$, $\rho(aw)\leq C\sqrt{\rho(w)\log|w|}$. Thus, sufficiently compressible words ($\rho(w)=o(1/\log|w|)$) remain compressible with a letter in front; - The previous result is tight up to a multiplicative constant for any compression ratio $\rho(w)=O(1/\log|w|)$. In particular, there are infinitely many words $w$ satisfying $\rho(w)=O(1/\log|w|)$ but $\rho(0w)=\Omega(1)$.
[ { "version": "v1", "created": "Thu, 13 Jul 2017 20:37:25 GMT" }, { "version": "v2", "created": "Mon, 31 Jul 2017 10:17:02 GMT" } ]
2017-08-01T00:00:00
[ [ "Lagarde", "Guillaume", "" ], [ "Perifel", "Sylvain", "" ] ]
new_dataset
0.975556
1707.06763
Yue-Li Wang
Tzong-Huei Shiau, Yue-Li Wang and Kung-Jui Pai
On the Orbits of Crossed Cubes
15 pages
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An orbit of $G$ is a subset $S$ of $V(G)$ such that $\phi(u)=v$ for any two vertices $u,v\in S$, where $\phi$ is an isomorphism of $G$. The orbit number of a graph $G$, denoted by $\text{Orb}(G)$, is the number of orbits of $G$. In [A Note on Path Embedding in Crossed Cubes with Faulty Vertices, Information Processing Letters 121 (2017) pp. 34--38], Chen et al. conjectured that $\text{Orb}(\text{CQ}_n)=2^{\lceil\frac{n}{2}\rceil-2}$ for $n\geqslant 3$, where $\text{CQ}_n$ denotes an $n$-dimensional crossed cube. In this paper, we settle the conjecture.
[ { "version": "v1", "created": "Fri, 21 Jul 2017 05:52:17 GMT" }, { "version": "v2", "created": "Mon, 31 Jul 2017 09:05:04 GMT" } ]
2017-08-01T00:00:00
[ [ "Shiau", "Tzong-Huei", "" ], [ "Wang", "Yue-Li", "" ], [ "Pai", "Kung-Jui", "" ] ]
new_dataset
0.993907
1707.08207
Xiang Lan
Xiang Lan, Wei Liu
A Fully Quaternion-Valued Capon Beamformer Based on Crossed-Dipole Arrays
5 pages, 5 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Quaternion models have been developed for both direction of arrival estimation and beamforming based on crossed-dipole arrays in the past. However, for almost all the models, especially for adaptive beamforming, the desired signal is still complex-valued, one example being the quaternion-Capon beamformer. Since the complex-valued desired signal only has two components, while a quaternion has four, only two components of the quaternion-valued beamformer output are used and the remaining two are simply discarded. This leads to significant redundancy in its implementation. In this work, we consider a quaternion-valued desired signal and develop a fully quaternion-valued Capon beamformer, which has better performance and much lower complexity, and is shown to be more robust against array pointing errors.
[ { "version": "v1", "created": "Mon, 26 Jun 2017 12:59:59 GMT" }, { "version": "v2", "created": "Mon, 31 Jul 2017 16:30:05 GMT" } ]
2017-08-01T00:00:00
[ [ "Lan", "Xiang", "" ], [ "Liu", "Wei", "" ] ]
new_dataset
0.965987
1707.09383
Matthew Johnson
Marthe Bonamy, Konrad K. Dabrowski, Carl Feghali, Matthew Johnson, Daniel Paulusma
Independent Feedback Vertex Sets for Graphs of Bounded Diameter
null
null
null
null
cs.DS cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Near-Bipartiteness problem is that of deciding whether or not the vertices of a graph can be partitioned into sets $A$ and $B$, where $A$ is an independent set and $B$ induces a forest. The set $A$ in such a partition is said to be an independent feedback vertex set. Yang and Yuan proved that Near-Bipartiteness is polynomial-time solvable for graphs of diameter 2 and NP-complete for graphs of diameter 4. We show that Near-Bipartiteness is NP-complete for graphs of diameter 3, resolving their open problem. We also generalise their result for diameter 2 by proving that even the problem of computing a minimum independent feedback vertex set is polynomial-time solvable for graphs of diameter 2.
[ { "version": "v1", "created": "Fri, 28 Jul 2017 19:26:46 GMT" } ]
2017-08-01T00:00:00
[ [ "Bonamy", "Marthe", "" ], [ "Dabrowski", "Konrad K.", "" ], [ "Feghali", "Carl", "" ], [ "Johnson", "Matthew", "" ], [ "Paulusma", "Daniel", "" ] ]
new_dataset
0.999413
1707.09402
Matthew Johnson
Marthe Bonamy, Konrad K. Dabrowski, Carl Feghali, Matthew Johnson, Daniel Paulusma
Independent Feedback Vertex Set for $P_5$-free Graphs
null
null
null
null
cs.DS cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The NP-complete problem Feedback Vertex Set is that of deciding whether or not it is possible, for a given integer $k\geq 0$, to delete at most $k$ vertices from a given graph so that what remains is a forest. The variant in which the deleted vertices must form an independent set is called Independent Feedback Vertex Set and is also NP-complete. In fact, even deciding if an independent feedback vertex set exists is NP-complete and this problem is closely related to the $3$-Colouring problem, or equivalently, to the problem of deciding whether or not a graph has an independent odd cycle transversal, that is, an independent set of vertices whose deletion makes the graph bipartite. We initiate a systematic study of the complexity of Independent Feedback Vertex Set for $H$-free graphs. We prove that it is NP-complete if $H$ contains a claw or cycle. Tamura, Ito and Zhou proved that it is polynomial-time solvable for $P_4$-free graphs. We show that it remains polynomial-time solvable for $P_5$-free graphs. We prove analogous results for the Independent Odd Cycle Transversal problem, which asks whether or not a graph has an independent odd cycle transversal of size at most $k$ for a given integer $k\geq 0$. Finally, in line with our underlying research aim, we compare the complexity of Independent Feedback Vertex Set for $H$-free graphs with the complexity of $3$-Colouring, Independent Odd Cycle Transversal and other related problems.
[ { "version": "v1", "created": "Fri, 28 Jul 2017 20:17:45 GMT" } ]
2017-08-01T00:00:00
[ [ "Bonamy", "Marthe", "" ], [ "Dabrowski", "Konrad K.", "" ], [ "Feghali", "Carl", "" ], [ "Johnson", "Matthew", "" ], [ "Paulusma", "Daniel", "" ] ]
new_dataset
0.9995
1707.09487
Nikolaos K Tselios
Nikolaos Tselios, Manolis Maragoudakis
Method and apparatus for automatic text input insertion in digital devices with a restricted number of keys
European patent office
null
null
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A device which contains a number of symbol input keys, where the number of available keys is less than the number of symbols of the alphabet of a given language, a screen, and a dynamic reordering table of the symbols mapped onto those keys, according to a disambiguation method based on the previously entered symbols. The device incorporates a mechanism tracking the previously entered keystrokes, a detector of the key selected by the user, and a mechanism to select the dynamic reordering of the symbols mapped onto this key according to the information contained in the reordering table. The reordering table is produced by a disambiguation method that reorders the symbol appearance. The reordering information is obtained from a Bayesian belief network constructed and trained on text corpora of the specific language.
[ { "version": "v1", "created": "Sat, 29 Jul 2017 09:39:17 GMT" } ]
2017-08-01T00:00:00
[ [ "Tselios", "Nikolaos", "" ], [ "Maragoudakis", "Manolis", "" ] ]
new_dataset
0.987084
1707.09489
Poonam Yadav Dr
Poonam Yadav and Jeremy Cohen and John Darlington
CitizenGrid: An Online Middleware for Crowdsourcing Scientific Research
11 pages
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last few years, contributions of the general public to scientific projects have increased due to the advancement of communication and computing technologies. The Internet has played an important role in connecting scientists and volunteers who are interested in participating in their scientific projects. However, despite potential benefits, only a limited number of crowdsourcing-based large-scale science (citizen science) projects have been deployed due to the complexity involved in setting them up and running them. In this paper, we present CitizenGrid - an online middleware platform which addresses security and deployment complexity issues by making use of cloud computing and virtualisation technologies. CitizenGrid incentivises scientists to make their small-to-medium scale applications available as citizen science projects by: 1) providing a directory of projects through a web-based portal that makes applications easy to discover; 2) providing flexibility to participate in, monitor, and control multiple citizen science projects from a common interface; 3) supporting diverse categories of citizen science projects. The paper describes the design, development and evaluation of CitizenGrid and its use cases.
[ { "version": "v1", "created": "Sat, 29 Jul 2017 09:48:14 GMT" } ]
2017-08-01T00:00:00
[ [ "Yadav", "Poonam", "" ], [ "Cohen", "Jeremy", "" ], [ "Darlington", "John", "" ] ]
new_dataset
0.990051
1707.09593
Kai Chen
Kai Chen, Hang Song, Chen Change Loy, Dahua Lin
Discover and Learn New Objects from Documentaries
Published on CVPR 2017 (spotlight)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the remarkable progress in recent years, detecting objects in a new context remains a challenging task. Detectors learned from a public dataset can only work with a fixed list of categories, while training from scratch usually requires a large amount of training data with detailed annotations. This work aims to explore a novel approach -- learning object detectors from documentary films in a weakly supervised manner. This is inspired by the observation that documentaries often provide dedicated exposition of certain object categories, where visual presentations are aligned with subtitles. We believe that object detectors can be learned from such a rich source of information. Towards this goal, we develop a joint probabilistic framework, where individual pieces of information, including video frames and subtitles, are brought together via both visual and linguistic links. On top of this formulation, we further derive a weakly supervised learning algorithm, where object model learning and training set mining are unified in an optimization procedure. Experimental results on a real world dataset demonstrate that this is an effective approach to learning new object detectors.
[ { "version": "v1", "created": "Sun, 30 Jul 2017 07:52:29 GMT" } ]
2017-08-01T00:00:00
[ [ "Chen", "Kai", "" ], [ "Song", "Hang", "" ], [ "Loy", "Chen Change", "" ], [ "Lin", "Dahua", "" ] ]
new_dataset
0.993285
1707.09597
Hao Chen
Huangjing Lin, Hao Chen, Qi Dou, Liansheng Wang, Jing Qin, Pheng-Ann Heng
ScanNet: A Fast and Dense Scanning Framework for Metastatic Breast Cancer Detection from Whole-Slide Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lymph node metastasis is one of the most significant diagnostic indicators in breast cancer, which is traditionally observed under the microscope by pathologists. In recent years, computerized histology diagnosis has become one of the most rapidly expanding fields in medical image computing, which alleviates pathologists' workload and reduces misdiagnosis rate. However, automatic detection of lymph node metastases from whole slide images remains a challenging problem, due to the large-scale data with enormous resolutions and existence of hard mimics. In this paper, we propose a novel framework by leveraging fully convolutional networks for efficient inference to meet the speed requirement for clinical practice, while reconstructing dense predictions under different offsets for ensuring accurate detection on both micro- and macro-metastases. Incorporating with the strategies of asynchronous sample prefetching and hard negative mining, the network can be effectively trained. Extensive experiments on the benchmark dataset of 2016 Camelyon Grand Challenge corroborated the efficacy of our method. Compared with the state-of-the-art methods, our method achieved superior performance with a faster speed on the tumor localization task and surpassed human performance on the WSI classification task.
[ { "version": "v1", "created": "Sun, 30 Jul 2017 08:51:32 GMT" } ]
2017-08-01T00:00:00
[ [ "Lin", "Huangjing", "" ], [ "Chen", "Hao", "" ], [ "Dou", "Qi", "" ], [ "Wang", "Liansheng", "" ], [ "Qin", "Jing", "" ], [ "Heng", "Pheng-Ann", "" ] ]
new_dataset
0.990432
1707.09661
Michael Cook
Michael Cook
A Vision For Continuous Automated Game Design
Published in the proceedings of the Experimental AI in Games workshop at AIIDE 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ANGELINA is an automated game design system which has previously been built as a single software block which designs games from start to finish. In this paper we outline a roadmap for the development of a new version of ANGELINA, designed to iterate on games in different ways to produce a continuous creative process that will improve the quality of its work, but more importantly improve the perception of the software as being an independently creative piece of software. We provide an initial report of the system's structure here as well as results from the first working module of the system.
[ { "version": "v1", "created": "Sun, 30 Jul 2017 19:53:40 GMT" } ]
2017-08-01T00:00:00
[ [ "Cook", "Michael", "" ] ]
new_dataset
0.991936
1707.09695
Liang Lin
Mude Lin and Liang Lin and Xiaodan Liang and Keze Wang and Hui Cheng
Recurrent 3D Pose Sequence Machines
Published in CVPR 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recovering 3D human articulated pose from monocular image sequences is very challenging due to diverse appearances, viewpoints, and occlusions, and because the 3D human pose is inherently ambiguous from monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design elaborate prior terms and human body kinematic constraints to capture structures, which are often insufficient to exploit all intrinsic structures and do not scale to all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine (RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context by using a multi-stage sequential refinement. At each stage, our RPSM is composed of three modules to predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses and (iii) a feature adaption module serving as a bridge between modules (i) and (ii) to enable the representation transformation from the 2D to the 3D domain. These three modules are then assembled into a sequential prediction framework to refine the predicted poses with multiple recurrent stages. Extensive evaluations on the Human3.6M dataset and HumanEva-I dataset show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation.
[ { "version": "v1", "created": "Mon, 31 Jul 2017 02:06:45 GMT" } ]
2017-08-01T00:00:00
[ [ "Lin", "Mude", "" ], [ "Lin", "Liang", "" ], [ "Liang", "Xiaodan", "" ], [ "Wang", "Keze", "" ], [ "Cheng", "Hui", "" ] ]
new_dataset
0.995515
1707.09813
Shubham Jain
Jay Patravali, Shubham Jain and Sasank Chilamkurthy
2D-3D Fully Convolutional Neural Networks for Cardiac MR Segmentation
Accepted in STACOM '17
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using Deep Convolutional Neural Networks (CNNs). Our models are trained end-to-end from scratch using the ACD Challenge 2017 dataset, comprising 100 studies, each containing cardiac MR images in the end-diastole and end-systole phases. We show that both our segmentation models achieve near state-of-the-art performance scores in terms of distance metrics and have convincing accuracy in terms of clinical parameters. A comparative analysis is provided by introducing a novel dice loss function and its combination with cross entropy loss. By exploring different network structures and comprehensive experiments, we discuss several key insights to obtain optimal model performance, which is also central to the theme of this challenge.
[ { "version": "v1", "created": "Mon, 31 Jul 2017 12:17:23 GMT" } ]
2017-08-01T00:00:00
[ [ "Patravali", "Jay", "" ], [ "Jain", "Shubham", "" ], [ "Chilamkurthy", "Sasank", "" ] ]
new_dataset
0.996112
1707.09823
Chen Li
Di Jiang, Zeyu Chen, Rongzhong Lian, Siqi Bao, Chen Li
Familia: An Open-Source Toolkit for Industrial Topic Modeling
null
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Familia is an open-source toolkit for pragmatic topic modeling in industry. Familia abstracts the utilities of topic modeling in industry as two paradigms: semantic representation and semantic matching. Efficient implementations of the two paradigms are made publicly available for the first time. Furthermore, we provide off-the-shelf topic models trained on large-scale industrial corpora, including Latent Dirichlet Allocation (LDA), SentenceLDA and Topical Word Embedding (TWE). We further describe typical applications which are successfully powered by topic modeling, in order to ease the confusions and difficulties of software engineers during topic model selection and utilization.
[ { "version": "v1", "created": "Mon, 31 Jul 2017 12:48:45 GMT" } ]
2017-08-01T00:00:00
[ [ "Jiang", "Di", "" ], [ "Chen", "Zeyu", "" ], [ "Lian", "Rongzhong", "" ], [ "Bao", "Siqi", "" ], [ "Li", "Chen", "" ] ]
new_dataset
0.986734
1707.09848
Giulio Ruffini
Giulio Ruffini
Lempel-Ziv Complexity Reference
For the Luminous Project (FET Open); Zip file includes Python code and Jupiter notebook
null
null
Starlab Technical Note TN00344
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this note is to provide some reference facts for LZW---mostly from Thomas and Cover \cite{Cover:2006aa}---and to provide a reference for some metrics that can be derived from it. LZW is an algorithm to compute a Kolmogorov complexity estimate derived from a limited programming language that only allows copy and insertion in strings (not a Turing-complete instruction set). Despite its delightful simplicity, it is rather powerful and fast. We then focus on definitions of LZW-derived complexity metrics consistent with the notion of description length, and discuss different normalizations, which result in a set of metrics we call $\rho_0$, $\rho_1$ and $\rho_2$, in addition to the Description Length $l_{LZW}$ and the Entropy Rate.
[ { "version": "v1", "created": "Fri, 28 Jul 2017 16:21:25 GMT" } ]
2017-08-01T00:00:00
[ [ "Ruffini", "Giulio", "" ] ]
new_dataset
0.971731
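As a rough illustration of an LZ-derived description length like the note's $l_{LZW}$: a common choice is $c \log_2 c$ bits for a parsing with $c$ phrases, so more regular strings get shorter descriptions, and normalizing by the string length gives $\rho$-style complexity ratios. This is a hedged sketch with illustrative constants, not the note's exact definitions of $\rho_0$, $\rho_1$, $\rho_2$:

```python
import math

def lzw_description_length(word: str) -> float:
    """c * log2(c) bits, where c is the LZ78/LZW phrase count (illustrative)."""
    dictionary = {"": 0}
    current = ""
    for ch in word:
        if current + ch in dictionary:
            current += ch
        else:
            dictionary[current + ch] = len(dictionary)
            current = ""
    # phrases added to the dictionary, plus any leftover suffix phrase
    c = len(dictionary) - 1 + (1 if current else 0)
    return c * math.log2(c) if c > 1 else 0.0

periodic = "0101010101010101"     # highly regular
aperiodic = "0110100110010110"    # Thue-Morse-like, less regular
print(lzw_description_length(periodic), lzw_description_length(aperiodic))
```

On these two 16-bit strings the regular one yields fewer phrases, hence a shorter description; for very short strings such comparisons are noisy, which is why the note's metrics are stated asymptotically.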
1707.09972
Haixia Peng
Haixia Peng, Le Liang, Xuemin Shen, Geoffrey Ye Li
Vehicular Communications: A Network Layer Perspective
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicular communications, referring to information exchange among vehicles, pedestrians, and infrastructures, have become very popular and been widely studied recently due to its great potential to support intelligent transportation and various safety applications. Via vehicular communications, manually driving vehicles and autonomous vehicles can collect useful information to improve traffic safety and support infotainment services. In this paper, we provide a comprehensive overview of recent research on enabling efficient vehicular communications from the network layer perspective. First, we introduce general applications and unique characteristics of vehicular networks and the corresponding classifications. Based on different driving patterns of vehicles, we divide vehicular networks into two categories, i.e., manually driving vehicular networks and automated driving vehicular networks, and then discuss the available communication techniques, network structures, routing protocols, and handoff strategies applied in these vehicular networks. Finally, we identify the challenges confronted by the current vehicular communications and present the corresponding research opportunities.
[ { "version": "v1", "created": "Mon, 31 Jul 2017 17:34:16 GMT" } ]
2017-08-01T00:00:00
[ [ "Peng", "Haixia", "" ], [ "Liang", "Le", "" ], [ "Shen", "Xuemin", "" ], [ "Li", "Geoffrey Ye", "" ] ]
new_dataset
0.999106
1610.07914
Roberto Bagnara
Roberto Bagnara, Abramo Bagnara, Alessandro Benedetti, Patricia M. Hill
The ACPATH Metric: Precise Estimation of the Number of Acyclic Paths in C-like Languages
62 pages, 10 figures, 7 tables
null
null
null
cs.SE cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
NPATH is a metric introduced by Brian A. Nejmeh in [13] that is aimed at overcoming some important limitations of McCabe's cyclomatic complexity. Despite the fact that the declared NPATH objective is to count the number of acyclic execution paths through a function, the definition given for the C language in [13] fails to do so even for very simple programs. We show that counting the number of acyclic paths in a CFG is infeasible in general. We then define a new metric for C-like languages, called ACPATH, that allows one to quickly compute a very good estimate of the number of acyclic execution paths through a given function. We show that, if the function body contains neither backward gotos nor jumps into a loop from outside the loop, then this estimate is actually exact.
[ { "version": "v1", "created": "Tue, 25 Oct 2016 15:11:46 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2016 05:16:08 GMT" }, { "version": "v3", "created": "Fri, 28 Jul 2017 08:21:24 GMT" } ]
2017-07-31T00:00:00
[ [ "Bagnara", "Roberto", "" ], [ "Bagnara", "Abramo", "" ], [ "Benedetti", "Alessandro", "" ], [ "Hill", "Patricia M.", "" ] ]
new_dataset
0.999495
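For intuition on what NPATH/ACPATH estimate: in an acyclic control-flow graph the exact number of entry-to-exit execution paths can be counted by dynamic programming over the DAG. This is a generic sketch of that counting, not the paper's ACPATH algorithm (graph encoding and names are mine):

```python
from collections import defaultdict

def count_paths(edges, entry, exit_):
    """Count distinct entry-to-exit paths in a DAG given as an edge list."""
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
    memo = {}
    def paths_from(u):
        if u == exit_:
            return 1
        if u not in memo:
            memo[u] = sum(paths_from(v) for v in succ[u])
        return memo[u]
    return paths_from(entry)

# Two if/else statements in sequence: 2 * 2 = 4 acyclic paths.
cfg = [("entry", "a"), ("entry", "b"), ("a", "m"), ("b", "m"),
       ("m", "c"), ("m", "d"), ("c", "exit"), ("d", "exit")]
print(count_paths(cfg, "entry", "exit"))  # → 4
```

Syntax-directed metrics like NPATH multiply per-construct counts exactly as this sequence example suggests; the paper's point is that such rules must be defined carefully to match the true path count.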
1707.08833
Claire Pennarun
Daniel Gon\c{c}alves (ALGCO), Lucas Isenmann (ALGCO), Claire Pennarun (LaBRI)
Planar graphs as L-intersection or L-contact graphs
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The L-intersection graphs are the graphs that have a representation as intersection graphs of axis-parallel L shapes in the plane. A subfamily of these graphs are the {L, |, --}-contact graphs, which are the contact graphs of axis-parallel L, |, and -- shapes in the plane. We prove here two results that were conjectured by Chaplick and Ueckerdt in 2013. We show that planar graphs are L-intersection graphs, and that triangle-free planar graphs are {L, |, --}-contact graphs. These results are obtained by a new and simple decomposition technique for 4-connected triangulations. Our results also provide a much simpler proof of the known fact that planar graphs are segment intersection graphs.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 12:28:05 GMT" }, { "version": "v2", "created": "Fri, 28 Jul 2017 08:53:55 GMT" } ]
2017-07-31T00:00:00
[ [ "Gonçalves", "Daniel", "", "ALGCO" ], [ "Isenmann", "Lucas", "", "ALGCO" ], [ "Pennarun", "Claire", "", "LaBRI" ] ]
new_dataset
0.999912
1703.02743
Tomasz Jurdzinski
Tomasz Jurdzinski and Krzysztof Nowicki
MSF and Connectivity in Limited Variants of the Congested Clique
null
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The congested clique is a synchronous, message-passing model of distributed computing in which each computational unit (node) in each round can send a message of O(log n) bits to each other node of the network, where n is the number of nodes. This model has been considered under two extreme scenarios: unicast or broadcast. In the unicast model, a node can send (possibly) different messages to each other node of the network. In contrast, in the broadcast model each node sends a single (the same) message to all other nodes. We study the congested clique model parametrized by the range r, the maximum number of different messages a node can send in one round. Following recent progress in the design of algorithms for graph connectivity and minimum spanning forest (MSF) in the unicast congested clique, we study these problems in limited variants of the congested clique. We present the first sub-logarithmic algorithm for connected components in the broadcast congested clique. Then, we show that an efficient unicast deterministic algorithm for MSF and a randomized algorithm for connected components can be efficiently implemented in the rcast model with range r = 2, the weakest model of the congested clique above the broadcast variant (r = 1) in the hierarchy with respect to range. More importantly, our algorithms give the first solutions with optimal capacity of communication edges, while preserving small round complexity.
[ { "version": "v1", "created": "Wed, 8 Mar 2017 08:15:40 GMT" } ]
2017-07-28T00:00:00
[ [ "Jurdzinski", "Tomasz", "" ], [ "Nowicki", "Krzysztof", "" ] ]
new_dataset
0.984151
1703.07290
Sergey Polyakovskiy
S. Polyakovskiy and A. Makarowsky and R. M'Hallah
Just-in-Time Batch Scheduling Problem with Two-dimensional Bin Packing Constraints
null
Proceedings of the Genetic and Evolutionary Computation Conference, 2017, pages 321-328
10.1145/3071178.3071223
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces and approximately solves a multi-component problem where small rectangular items are produced from large rectangular bins via guillotine cuts. An item is characterized by its width, height, due date, and earliness and tardiness penalties per unit time. Each item induces a cost that is proportional to its earliness and tardiness. Items cut from the same bin form a batch, whose processing and completion times depend on its assigned items. The items of a batch have the completion time of their bin. The objective is to find a cutting plan that minimizes the weighted sum of earliness and tardiness penalties. We address this problem via a constraint programming based heuristic (CP) and an agent based modelling heuristic (AB). CP is an impact-based search strategy, implemented in the general-purpose solver IBM CP Optimizer. AB is constructive. It builds a solution through repeated negotiations between the set of agents representing the items and the set representing the bins. The agents cooperate to minimize the weighted earliness-tardiness penalties. The computational investigation shows that CP outperforms AB on small-sized instances while the opposite prevails for larger instances.
[ { "version": "v1", "created": "Tue, 21 Mar 2017 15:57:42 GMT" }, { "version": "v2", "created": "Thu, 27 Jul 2017 11:43:36 GMT" } ]
2017-07-28T00:00:00
[ [ "Polyakovskiy", "S.", "" ], [ "Makarowsky", "A.", "" ], [ "M'Hallah", "R.", "" ] ]
new_dataset
0.986551
1707.08652
Kathrin Hanauer
Christian Bachmaier, Franz J. Brandenburg, and Kathrin Hanauer
A Note on IC-Planar Graphs
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A graph is IC-planar if it admits a drawing in the plane with at most one crossing per edge and such that two pairs of crossing edges share no common end vertex. IC-planarity specializes both NIC-planarity, which allows a pair of crossing edges to share at most one vertex, and 1-planarity, where each edge may be crossed at most once. We show that there are infinitely many maximal IC-planar graphs with n vertices and 3n-5 edges and thereby prove a tight lower bound on the density of this class of graphs.
[ { "version": "v1", "created": "Wed, 26 Jul 2017 21:39:45 GMT" } ]
2017-07-28T00:00:00
[ [ "Bachmaier", "Christian", "" ], [ "Brandenburg", "Franz J.", "" ], [ "Hanauer", "Kathrin", "" ] ]
new_dataset
0.998175
1707.08718
Saman Naderiparizi
Saman Naderiparizi, Mehrdad Hessar, Vamsi Talla, Shyamnath Gollakota and Joshua R. Smith
Ultra-low-power Wireless Streaming Cameras
9 pages, 11 figures
null
null
null
cs.ET cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless video streaming has traditionally been considered an extremely power-hungry operation. Existing approaches optimize the camera and communication modules individually to minimize their power consumption. However, it is the joint redesign and optimization of the wireless communication and the camera that provides greater power savings. We present an ultra-low-power wireless video streaming camera. To achieve this, we present a novel "analog" video backscatter technique that feeds analog pixels from the photo-diodes directly to the backscatter hardware, thereby eliminating power-consuming hardware components such as ADCs and amplifiers. We prototype our wireless camera using off-the-shelf hardware and show that our design can stream video at up to 13 FPS and can operate up to a distance of 150 feet from the access point. Our COTS prototype consumes 2.36mW. Finally, to demonstrate the potential of our design, we built two proof-of-concept applications: video streaming for micro-robots and security cameras for face detection.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 06:43:18 GMT" } ]
2017-07-28T00:00:00
[ [ "Naderiparizi", "Saman", "" ], [ "Hessar", "Mehrdad", "" ], [ "Talla", "Vamsi", "" ], [ "Gollakota", "Shyamnath", "" ], [ "Smith", "Joshua R.", "" ] ]
new_dataset
0.993866
1707.08735
EPTCS
Francesco Belardinelli (Laboratoire IBISC, UEVE and IRIT Toulouse), Hans van Ditmarsch (LORIA -- CNRS, Universit\'e de Lorraine, Vandoeuvre-l\`es-Nancy, France), Wiebe van der Hoek (Department of Computing, University of Liverpool, Liverpool, UK)
A Logic for Global and Local Announcements
In Proceedings TARK 2017, arXiv:1707.08250
EPTCS 251, 2017, pp. 28-42
10.4204/EPTCS.251.3
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce {\em global and local announcement logic} (GLAL), a dynamic epistemic logic with two distinct announcement operators -- $[\phi]^+_A$ and $[\phi]^-_A$ indexed to a subset $A$ of the set $Ag$ of all agents -- for global and local announcements respectively. The boundary case $[\phi]^+_{Ag}$ corresponds to the public announcement of $\phi$, as known from the literature. Unlike standard public announcements, which are {\em model transformers}, the global and local announcements are {\em pointed model transformers}. In particular, the update induced by the announcement may be different in different states of the model. Therefore, the resulting computations are trees of models, rather than the typical sequences. A consequence of our semantics is that modally bisimilar states may be distinguished in our logic. Then, we provide a stronger notion of bisimilarity and we show that it preserves modal equivalence in GLAL. Additionally, we show that GLAL is strictly more expressive than public announcement logic with common knowledge. We prove a wide range of validities for GLAL involving the interaction between dynamics and knowledge, and show that the satisfiability problem for GLAL is decidable. We illustrate the formal machinery by means of detailed epistemic scenarios.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 07:45:23 GMT" } ]
2017-07-28T00:00:00
[ [ "Belardinelli", "Francesco", "", "Laboratoire IBISC, UEVE and IRIT Toulouse" ], [ "van Ditmarsch", "Hans", "", "LORIA -- CNRS, Université de Lorraine,\n Vandoeuvre-lès-Nancy, France" ], [ "van der Hoek", "Wiebe", "", "Department of Computing,\n University of Liverpool, Liverpool, UK" ] ]
new_dataset
0.99772
1707.08742
EPTCS
Ivano Ciardelli (ILLC, University of Amsterdam), Martin Otto (Technische Universit\"at Darmstadt)
Bisimulation in Inquisitive Modal Logic
In Proceedings TARK 2017, arXiv:1707.08250
EPTCS 251, 2017, pp. 151-166
10.4204/EPTCS.251.11
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inquisitive modal logic, InqML, is a generalisation of standard Kripke-style modal logic. In its epistemic incarnation, it extends standard epistemic logic to capture not just the information that agents have, but also the questions that they are interested in. Technically, InqML fits within the family of logics based on team semantics. From a model-theoretic perspective, it takes us a step in the direction of monadic second-order logic, as inquisitive modal operators involve quantification over sets of worlds. We introduce and investigate the natural notion of bisimulation equivalence in the setting of InqML. We compare the expressiveness of InqML and first-order logic, and characterise inquisitive modal logic as the bisimulation invariant fragments of first-order logic over various classes of two-sorted relational structures. These results crucially require non-classical methods in studying bisimulations and first-order expressiveness over non-elementary classes.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 07:47:49 GMT" } ]
2017-07-28T00:00:00
[ [ "Ciardelli", "Ivano", "", "ILLC, University of Amsterdam" ], [ "Otto", "Martin", "", "Technische Universität Darmstadt" ] ]
new_dataset
0.993154
1707.08789
Chunming Tang
Claude Carlet, Sihem Mesnager, Chunming Tang, Yanfeng Qi
On {\sigma}-LCD codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear complementary pairs (LCP) of codes play an important role in armoring implementations against side-channel attacks and fault injection attacks. One of the most common ways to construct LCP of codes is to use Euclidean linear complementary dual (LCD) codes. In this paper, we first introduce the concept of linear codes with $\sigma$ complementary dual ($\sigma$-LCD), which includes known Euclidean LCD codes, Hermitian LCD codes, and Galois LCD codes. As Euclidean LCD codes, $\sigma$-LCD codes can also be used to construct LCP of codes. We show that, for $q > 2$, all q-ary linear codes are $\sigma$-LCD and that, for every binary linear code $\mathcal C$, the code $\{0\}\times \mathcal C$ is $\sigma$-LCD. Further, we study deeply $\sigma$-LCD generalized quasi-cyclic (GQC) codes. In particular, we provide characterizations of $\sigma$-LCD GQC codes, self-orthogonal GQC codes and self-dual GQC codes, respectively. Moreover, we provide constructions of asymptotically good $\sigma$-LCD GQC codes. Finally, we focus on $\sigma$-LCD Abelian codes and prove that all Abelian codes in a semi-simple group algebra are $\sigma$-LCD. The results derived in this paper extend those on the classical LCD codes and show that $\sigma$-LCD codes allow the construction of LCP of codes more easily and with more flexibility.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 09:23:11 GMT" } ]
2017-07-28T00:00:00
[ [ "Carlet", "Claude", "" ], [ "Mesnager", "Sihem", "" ], [ "Tang", "Chunming", "" ], [ "Qi", "Yanfeng", "" ] ]
new_dataset
0.999793
1707.08932
Ezio Biglieri
Ezio Biglieri and Emanuele Viterbo
Line codes generated by finite Coxeter groups
19 pages, 10 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using an algebraic approach based on the theory of Coxeter groups, we design, and describe the performance of, a class of line codes for parallel transmission of $b$ bits over $b+1$ wires that admit especially simple encoding and decoding algorithms. A number of designs are exhibited, some of them being novel or improving on previously obtained codes.
[ { "version": "v1", "created": "Thu, 27 Jul 2017 16:53:37 GMT" } ]
2017-07-28T00:00:00
[ [ "Biglieri", "Ezio", "" ], [ "Viterbo", "Emanuele", "" ] ]
new_dataset
0.986953
1605.04486
Abhinav Aggarwal
Abhinav Aggarwal, Varsha Dani, Thomas Hayes, Jared Saia
Sending a Message with Unknown Noise
10 pages, 3 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alice and Bob are connected via a two-way channel, and Alice wants to send a message of $L$ bits to Bob. An adversary flips an arbitrary but finite number of bits, $T$, on the channel. This adversary knows our algorithm and Alice's message, but does not know any private random bits generated by Alice or Bob, nor the bits sent over the channel, except when these bits can be predicted by knowledge of Alice's message or our algorithm. We want Bob to receive Alice's message and for both players to terminate, with error probability at most $\delta > 0$, where $\delta$ is a parameter known to both Alice and Bob. Unfortunately, the value $T$ is unknown in advance to either Alice or Bob, and the value $L$ is unknown in advance to Bob. We describe an algorithm to solve the above problem while sending an expected $L + O \left( T + \min \left(T+1,\frac{L}{\log L} \right) \log \left( \frac{L}{\delta} \right) \right)$ bits. A special case is when $\delta = O(1/L^c)$, for some constant $c$. Then when $T = o(L/\log L)$, the expected number of bits sent is $L + o(L)$, and when $T = \Omega(L)$, the expected number of bits sent is $L + O\left( T \right)$, which is asymptotically optimal.
[ { "version": "v1", "created": "Sun, 15 May 2016 01:33:47 GMT" }, { "version": "v2", "created": "Wed, 26 Jul 2017 05:54:20 GMT" } ]
2017-07-27T00:00:00
[ [ "Aggarwal", "Abhinav", "" ], [ "Dani", "Varsha", "" ], [ "Hayes", "Thomas", "" ], [ "Saia", "Jared", "" ] ]
new_dataset
0.997543
1701.05141
Wouter van Toll
Wouter van Toll, Atlas F. Cook IV, Marc J. van Kreveld, Roland Geraerts
The Medial Axis of a Multi-Layered Environment and its Application as a Navigation Mesh
34 pages
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Path planning for walking characters in complicated virtual environments is a fundamental task in simulations and games. A navigation mesh is a data structure that allows efficient path planning. The Explicit Corridor Map (ECM) is a navigation mesh based on the medial axis. It enables path planning for disk-shaped characters of any radius. In this paper, we formally extend the medial axis (and therefore the ECM) to 3D environments in which characters are constrained to walkable surfaces. Typical examples of such environments are multi-storey buildings, train stations, and sports stadiums. We give improved definitions of a walkable environment (WE: a description of walkable surfaces in 3D) and a multi-layered environment (MLE: a subdivision of a WE into connected layers). We define the medial axis of such environments based on projected distances on the ground plane. For an MLE with $n$ boundary vertices and $k$ connections, we show that the medial axis has size $O(n)$, and we present an improved algorithm that constructs the medial axis in $O(n \log n \log k)$ time. The medial axis can be annotated with nearest-obstacle information to obtain the ECM navigation mesh. Our implementations show that the ECM can be computed efficiently for large 2D and multi-layered environments, and that it can be used to compute paths within milliseconds. This enables simulations of large virtual crowds of heterogeneous characters in real-time.
[ { "version": "v1", "created": "Wed, 18 Jan 2017 16:46:08 GMT" }, { "version": "v2", "created": "Wed, 26 Jul 2017 08:01:38 GMT" } ]
2017-07-27T00:00:00
[ [ "van Toll", "Wouter", "" ], [ "Cook", "Atlas F.", "IV" ], [ "van Kreveld", "Marc J.", "" ], [ "Geraerts", "Roland", "" ] ]
new_dataset
0.999659
1702.08256
Florian Lonsing
Florian Lonsing and Uwe Egly
DepQBF 6.0: A Search-Based QBF Solver Beyond Traditional QCDCL
12 pages + appendix; to appear in the proceedings of CADE-26, LNCS, Springer, 2017
null
10.1007/978-3-319-63046-5_23
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the latest major release version 6.0 of the quantified Boolean formula (QBF) solver DepQBF, which is based on QCDCL. QCDCL is an extension of the conflict-driven clause learning (CDCL) paradigm implemented in state of the art propositional satisfiability (SAT) solvers. The Q-resolution calculus (QRES) is a QBF proof system which underlies QCDCL. QCDCL solvers can produce QRES proofs of QBFs in prenex conjunctive normal form (PCNF) as a byproduct of the solving process. In contrast to traditional QCDCL based on QRES, DepQBF 6.0 implements a variant of QCDCL which is based on a generalization of QRES. This generalization is due to a set of additional axioms and leaves the original Q-resolution rules unchanged. The generalization of QRES enables QCDCL to potentially produce exponentially shorter proofs than the traditional variant. We present an overview of the features implemented in DepQBF and report on experimental results which demonstrate the effectiveness of generalized QRES in QCDCL.
[ { "version": "v1", "created": "Mon, 27 Feb 2017 12:42:33 GMT" }, { "version": "v2", "created": "Tue, 30 May 2017 08:54:32 GMT" } ]
2017-07-27T00:00:00
[ [ "Lonsing", "Florian", "" ], [ "Egly", "Uwe", "" ] ]
new_dataset
0.999442
1703.09471
Seong Joon Oh
Seong Joon Oh, Mario Fritz, Bernt Schiele
Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective
To appear at ICCV'17
null
null
null
cs.CV cs.CR cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Users like sharing personal photos with others through social media. At the same time, they might want to make automatic identification in such photos difficult or even impossible. Classic obfuscation methods such as blurring are not only unpleasant but also not as effective as one would expect. Recent studies on adversarial image perturbations (AIP) suggest that it is possible to confuse recognition systems effectively without unpleasant artifacts. However, in the presence of counter measures against AIPs, it is unclear how effective AIP would be in particular when the choice of counter measure is unknown. Game theory provides tools for studying the interaction between agents with uncertainties in the strategies. We introduce a general game theoretical framework for the user-recogniser dynamics, and present a case study that involves current state of the art AIP and person recognition techniques. We derive the optimal strategy for the user that assures an upper bound on the recognition rate independent of the recogniser's counter measure. Code is available at https://goo.gl/hgvbNK.
[ { "version": "v1", "created": "Tue, 28 Mar 2017 09:17:47 GMT" }, { "version": "v2", "created": "Wed, 26 Jul 2017 10:01:43 GMT" } ]
2017-07-27T00:00:00
[ [ "Oh", "Seong Joon", "" ], [ "Fritz", "Mario", "" ], [ "Schiele", "Bernt", "" ] ]
new_dataset
0.982636
1707.04413
Nor Jaafari
Jan van den Brand, Nor Jaafari
The Mutual information of LDGM codes
null
null
null
null
cs.IT math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide matching upper and lower bounds on the mutual information in noisy reconstruction of parity check codes and thereby prove a long-standing conjecture by Montanari [IEEE Transactions on Information Theory 2005]. Besides extending a prior concentration result of Abbe and Montanari [Theory of Computing 2015] to the case of odd check degrees, we precisely determine the conjectured formula for code ensembles of arbitrary degree distribution, thus capturing a broad class of capacity approaching codes.
[ { "version": "v1", "created": "Fri, 14 Jul 2017 08:37:50 GMT" }, { "version": "v2", "created": "Wed, 26 Jul 2017 09:31:51 GMT" } ]
2017-07-27T00:00:00
[ [ "Brand", "Jan van den", "" ], [ "Jaafari", "Nor", "" ] ]
new_dataset
0.997885
1707.07540
Ramviyas Parasuraman
Danilo Tardioli, Ramviyas Parasuraman and Petter \"Ogren
Pound: A ROS node for Reducing Delay and Jitter in Wireless Multi-Robot Networks
16 pages
null
null
null
cs.NI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Robot Operating System (ROS) is rapidly becoming the de facto framework for building robotics systems, thanks to its flexibility and the large acceptance that it has received in the robotics community. With the growth of its popularity, it has started to be used in multi-robot systems as well. However, the TCP connections that the platform relies on for connecting the so-called ROS nodes, presents several issues in terms of limited-bandwidth, delays and jitter, when used in wireless ad-hoc networks. In this paper, we present a thorough analysis of the problem and propose a new ROS node called Pound to improve the wireless communication performance. Pound allows the use of multiple ROS cores and introduces a priority scheme favoring more important flows over less important ones, thus reducing delay and jitter over single-hop and multihop networks. We compare Pound to the state-of-the-art solutions and show that it performs equally well, or better in all the test cases, including a control-over-network example.
[ { "version": "v1", "created": "Thu, 20 Jul 2017 23:00:25 GMT" }, { "version": "v2", "created": "Wed, 26 Jul 2017 14:52:26 GMT" } ]
2017-07-27T00:00:00
[ [ "Tardioli", "Danilo", "" ], [ "Parasuraman", "Ramviyas", "" ], [ "Ögren", "Petter", "" ] ]
new_dataset
0.998704
1707.08209
Jaydeep Chipalkatti
Jaydeep Chipalkatti, Mihir Kulkarni
On the letter frequencies and entropy of written Marathi
null
null
null
null
cs.IT cs.CL math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We carry out a comprehensive analysis of letter frequencies in contemporary written Marathi. We determine sets of letters which statistically predominate any large generic Marathi text, and use these sets to estimate the entropy of Marathi.
[ { "version": "v1", "created": "Tue, 11 Jul 2017 19:52:56 GMT" } ]
2017-07-27T00:00:00
[ [ "Chipalkatti", "Jaydeep", "" ], [ "Kulkarni", "Mihir", "" ] ]
new_dataset
0.995792
1707.08212
Ilker Yildirim
Ilker Yildirim, Tobias Gerstenberg, Basil Saeed, Marc Toussaint, Josh Tenenbaum
Physical problem solving: Joint planning with symbolic, geometric, and dynamic constraints
null
null
null
null
cs.AI cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a new task that investigates how people interact with and make judgments about towers of blocks. In Experiment~1, participants in the lab solved a series of problems in which they had to re-configure three blocks from an initial to a final configuration. We recorded whether they used one hand or two hands to do so. In Experiment~2, we asked participants online to judge whether they think the person in the lab used one or two hands. The results revealed a close correspondence between participants' actions in the lab, and the mental simulations of participants online. To explain participants' actions and mental simulations, we develop a model that plans over a symbolic representation of the situation, executes the plan using a geometric solver, and checks the plan's feasibility by taking into account the physical constraints of the scene. Our model explains participants' actions and judgments to a high degree of quantitative accuracy.
[ { "version": "v1", "created": "Tue, 25 Jul 2017 20:44:18 GMT" } ]
2017-07-27T00:00:00
[ [ "Yildirim", "Ilker", "" ], [ "Gerstenberg", "Tobias", "" ], [ "Saeed", "Basil", "" ], [ "Toussaint", "Marc", "" ], [ "Tenenbaum", "Josh", "" ] ]
new_dataset
0.977095
1707.08250
EPTCS
J\'er\^ome Lang (CNRS)
Proceedings Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge
null
EPTCS 251, 2017
10.4204/EPTCS.251
null
cs.GT cs.AI cs.CR cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume consists of papers presented at the Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK) held at the University of Liverpool, UK, from July 24 to 26, 2017. TARK conferences bring together researchers from a wide variety of fields, including Computer Science (especially, Artificial Intelligence, Cryptography, Distributed Computing), Economics (especially, Decision Theory, Game Theory, Social Choice Theory), Linguistics, Philosophy (especially, Philosophical Logic), and Cognitive Psychology, in order to further understand the issues involving reasoning about rationality and knowledge.
[ { "version": "v1", "created": "Tue, 25 Jul 2017 23:32:51 GMT" } ]
2017-07-27T00:00:00
[ [ "Lang", "Jérôme", "", "CNRS" ] ]
new_dataset
0.990469
1707.08262
Siddharth Biswal
Siddharth Biswal, Joshua Kulas, Haoqi Sun, Balaji Goparaju, M Brandon Westover, Matt T Bianchi, Jimeng Sun
SLEEPNET: Automated Sleep Staging System via Deep Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep disorders, such as sleep apnea, parasomnias, and hypersomnia, affect 50-70 million adults in the United States (Hillman et al., 2006). Overnight polysomnography (PSG), including brain monitoring using electroencephalography (EEG), is a central component of the diagnostic evaluation for sleep disorders. While PSG is conventionally performed by trained technologists, the recent rise of powerful neural network learning algorithms combined with large physiological datasets offers the possibility of automation, potentially making expert-level sleep analysis more widely available. We propose SLEEPNET (Sleep EEG neural network), a deployed annotation tool for sleep staging. SLEEPNET uses a deep recurrent neural network trained on the largest sleep physiology database assembled to date, consisting of PSGs from over 10,000 patients from the Massachusetts General Hospital (MGH) Sleep Laboratory. SLEEPNET achieves human-level annotation performance on an independent test set of 1,000 EEGs, with an average accuracy of 85.76% and algorithm-expert inter-rater agreement (IRA) of kappa = 79.46%, comparable to expert-expert IRA.
[ { "version": "v1", "created": "Wed, 26 Jul 2017 00:39:59 GMT" } ]
2017-07-27T00:00:00
[ [ "Biswal", "Siddharth", "" ], [ "Kulas", "Joshua", "" ], [ "Sun", "Haoqi", "" ], [ "Goparaju", "Balaji", "" ], [ "Westover", "M Brandon", "" ], [ "Bianchi", "Matt T", "" ], [ "Sun", "Jimeng", "" ] ]
new_dataset
0.996086
1707.08287
Kevin Xu
Yuning Zhang, Maysam Haghdan, and Kevin S. Xu
Unsupervised Motion Artifact Detection in Wrist-Measured Electrodermal Activity Data
To appear at International Symposium on Wearable Computers (ISWC) 2017
null
null
null
cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main benefits of a wrist-worn computer is its ability to collect a variety of physiological data in a minimally intrusive manner. Among these data, electrodermal activity (EDA) is readily collected and provides a window into a person's emotional and sympathetic responses. EDA data collected using a wearable wristband are easily influenced by motion artifacts (MAs) that may significantly distort the data and degrade the quality of analyses performed on the data if not identified and removed. Prior work has demonstrated that MAs can be successfully detected using supervised machine learning algorithms on a small data set collected in a lab setting. In this paper, we demonstrate that unsupervised learning algorithms perform competitively with supervised algorithms for detecting MAs on EDA data collected in both a lab-based setting and a real-world setting comprising about 23 hours of data. We also find, somewhat surprisingly, that incorporating accelerometer data as well as EDA improves detection accuracy only slightly for supervised algorithms and significantly degrades the accuracy of unsupervised algorithms.
[ { "version": "v1", "created": "Wed, 26 Jul 2017 05:02:45 GMT" } ]
2017-07-27T00:00:00
[ [ "Zhang", "Yuning", "" ], [ "Haghdan", "Maysam", "" ], [ "Xu", "Kevin S.", "" ] ]
new_dataset
0.996752
1707.08347
Xialei Liu
Xialei Liu, Joost van de Weijer and Andrew D. Bagdanov
RankIQA: Learning from Rankings for No-reference Image Quality Assessment
Accepted by ICCV 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images. We demonstrate how our approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch. Experiments on the TID2013 benchmark show that we improve the state-of-the-art by over 5%. Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
[ { "version": "v1", "created": "Wed, 26 Jul 2017 10:02:40 GMT" } ]
2017-07-27T00:00:00
[ [ "Liu", "Xialei", "" ], [ "van de Weijer", "Joost", "" ], [ "Bagdanov", "Andrew D.", "" ] ]
new_dataset
0.991438
1707.08360
Michael Rabinovich
Michael Rabinovich, Tim Hoffmann, Olga Sorkine-Hornung
Discrete Geodesic Nets for Modeling Developable Surfaces
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a discrete theory for modeling developable surfaces as quadrilateral meshes satisfying simple angle constraints. The basis of our model is a lesser known characterization of developable surfaces as manifolds that can be parameterized through orthogonal geodesics. Our model is simple, local, and, unlike previous works, it does not directly encode the surface rulings. This allows us to model continuous deformations of discrete developable surfaces independently of their decomposition into torsal and planar patches or the surface topology. We prove and experimentally demonstrate strong ties to smooth developable surfaces, including a theorem stating that every sampling of the smooth counterpart satisfies our constraints up to second order. We further present an extension of our model that enables a local definition of discrete isometry. We demonstrate the effectiveness of our discrete model in a developable surface editing system, as well as computation of an isometric interpolation between isometric discrete developable shapes.
[ { "version": "v1", "created": "Wed, 26 Jul 2017 10:30:34 GMT" } ]
2017-07-27T00:00:00
[ [ "Rabinovich", "Michael", "" ], [ "Hoffmann", "Tim", "" ], [ "Sorkine-Hornung", "Olga", "" ] ]
new_dataset
0.991141
1707.08559
Cheng-Yang Fu
Cheng-Yang Fu, Joon Lee, Mohit Bansal, Alexander C. Berg
Video Highlight Prediction Using Audience Chat Reactions
EMNLP 2017
null
null
null
cs.CL cs.AI cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sports channel video portals offer an exciting domain for research on multimodal, multilingual analysis. We present methods addressing the problem of automatic video highlight prediction based on joint visual features and textual analysis of the real-world audience discourse with complex slang, in both English and traditional Chinese. We present a novel dataset based on League of Legends championships recorded from North American and Taiwanese Twitch.tv channels (will be released for further research), and demonstrate strong results on these using multimodal, character-level CNN-RNN model architectures.
[ { "version": "v1", "created": "Wed, 26 Jul 2017 17:44:38 GMT" } ]
2017-07-27T00:00:00
[ [ "Fu", "Cheng-Yang", "" ], [ "Lee", "Joon", "" ], [ "Bansal", "Mohit", "" ], [ "Berg", "Alexander C.", "" ] ]
new_dataset
0.984358
1612.05601
Christian Baumgartner
Christian F. Baumgartner, Konstantinos Kamnitsas, Jacqueline Matthew, Tara P. Fletcher, Sandra Smith, Lisa M. Koch, Bernhard Kainz and Daniel Rueckert
SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound
12 pages, 8 figures, published in IEEE Transactions in Medical Imaging
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying and interpreting fetal standard scan planes during 2D ultrasound mid-pregnancy examinations are highly complex tasks which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks which can automatically detect 13 fetal standard views in freehand 2D ultrasound data as well as provide a localisation of the fetal structures via a bounding box. An important contribution is that the network learns to localise the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localisation task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localisation on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modelling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localisation task.
[ { "version": "v1", "created": "Fri, 16 Dec 2016 19:20:20 GMT" }, { "version": "v2", "created": "Tue, 25 Jul 2017 16:12:50 GMT" } ]
2017-07-26T00:00:00
[ [ "Baumgartner", "Christian F.", "" ], [ "Kamnitsas", "Konstantinos", "" ], [ "Matthew", "Jacqueline", "" ], [ "Fletcher", "Tara P.", "" ], [ "Smith", "Sandra", "" ], [ "Koch", "Lisa M.", "" ], [ "Kainz", "Bernhard", "" ], [ "Rueckert", "Daniel", "" ] ]
new_dataset
0.976469
1707.01541
Yue Li
Ekaterina Komendantskaya, Yue Li
Productive Corecursion in Logic Programming
Paper presented at the 33nd International Conference on Logic Programming (ICLP 2017), Melbourne, Australia, August 28 to September 1, 2017 16 pages, LaTeX, no figures
null
null
null
cs.LO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Logic Programming is a Turing complete language. As a consequence, designing algorithms that decide termination and non-termination of programs or decide inductive/coinductive soundness of formulae is a challenging task. For example, the existing state-of-the-art algorithms can only semi-decide coinductive soundness of queries in logic programming for regular formulae. Another, less famous, but equally fundamental and important undecidable property is productivity. If a derivation is infinite and coinductively sound, we may ask whether the computed answer it determines actually computes an infinite formula. If it does, the infinite computation is productive. This intuition was first expressed under the name of computations at infinity in the 80s. In modern days of the Internet and stream processing, its importance lies in connection to infinite data structure processing. Recently, an algorithm was presented that semi-decides a weaker property -- of productivity of logic programs. A logic program is productive if it can give rise to productive derivations. In this paper we strengthen these recent results. We propose a method that semi-decides productivity of individual derivations for regular formulae. Thus we at last give an algorithmic counterpart to the notion of productivity of derivations in logic programming. This is the first algorithmic solution to the problem since it was raised more than 30 years ago. We also present an implementation of this algorithm.
[ { "version": "v1", "created": "Wed, 5 Jul 2017 19:06:52 GMT" }, { "version": "v2", "created": "Fri, 7 Jul 2017 06:40:58 GMT" }, { "version": "v3", "created": "Wed, 12 Jul 2017 18:20:29 GMT" }, { "version": "v4", "created": "Tue, 25 Jul 2017 15:46:56 GMT" } ]
2017-07-26T00:00:00
[ [ "Komendantskaya", "Ekaterina", "" ], [ "Li", "Yue", "" ] ]
new_dataset
0.993329
1707.04941
Haewoon Kwak
Haewoon Kwak and Jisun An
Multiplex Media Attention and Disregard Network among 129 Countries
To appear in the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2017), Sydney, Australia, 31 July - 03 August, 2017
null
null
null
cs.SI cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We built a multiplex media attention and disregard network (MADN) among 129 countries over 212 days. By characterizing the MADN from multiple levels, we found that it is formed primarily by skewed, hierarchical, and asymmetric relationships. Also, we found strong evidence that our news world is becoming a "global village." However, at the same time, unique attention blocks of the Middle East and North Africa (MENA) region, as well as Russia and its neighbors, still exist.
[ { "version": "v1", "created": "Sun, 16 Jul 2017 20:20:03 GMT" }, { "version": "v2", "created": "Tue, 25 Jul 2017 11:24:55 GMT" } ]
2017-07-26T00:00:00
[ [ "Kwak", "Haewoon", "" ], [ "An", "Jisun", "" ] ]
new_dataset
0.961868
1707.07671
Eshan Singh
Eshan Singh, Clark Barrett, Subhasish Mitra
E-QED: Electrical Bug Localization During Post-Silicon Validation Enabled by Quick Error Detection and Formal Methods
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During post-silicon validation, manufactured integrated circuits are extensively tested in actual system environments to detect design bugs. Bug localization involves identification of a bug trace (a sequence of inputs that activates and detects the bug) and a hardware design block where the bug is located. Existing bug localization practices during post-silicon validation are mostly manual and ad hoc, and, hence, extremely expensive and time consuming. This is particularly true for subtle electrical bugs caused by unexpected interactions between a design and its electrical state. We present E-QED, a new approach that automatically localizes electrical bugs during post-silicon validation. Our results on the OpenSPARC T2, an open-source 500-million-transistor multicore chip design, demonstrate the effectiveness and practicality of E-QED: starting with a failed post-silicon test, in a few hours (9 hours on average) we can automatically narrow the location of the bug to (the fan-in logic cone of) a handful of candidate flip-flops (18 flip-flops on average for a design with ~ 1 Million flip-flops) and also obtain the corresponding bug trace. The area impact of E-QED is ~2.5%. In contrast, deter-mining this same information might take weeks (or even months) of mostly manual work using traditional approaches.
[ { "version": "v1", "created": "Sun, 23 Jul 2017 20:56:29 GMT" } ]
2017-07-26T00:00:00
[ [ "Singh", "Eshan", "" ], [ "Barrett", "Clark", "" ], [ "Mitra", "Subhasish", "" ] ]
new_dataset
0.99195