id (string, 9–10) | submitter (string, 2–52, nullable) | authors (string, 4–6.51k) | title (string, 4–246) | comments (string, 1–523, nullable) | journal-ref (string, 4–345, nullable) | doi (string, 11–120, nullable) | report-no (string, 2–243, nullable) | categories (string, 5–98) | license (string, 9 classes) | abstract (string, 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
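The schema above appears to come from a dataset-viewer export of arXiv metadata records annotated with a `prediction` label and a `probability` score. As a minimal sketch of how such records could be loaded and filtered in Python (the file name `arxiv_new_dataset.jsonl` and the JSON Lines format are assumptions, not part of this dump):

```python
import pandas as pd

# Assumption: each record below is exported as one JSON object per line,
# with the column names listed in the header above.
df = pd.read_json("arxiv_new_dataset.jsonl", lines=True)

# Keep high-confidence "new_dataset" predictions and a few useful columns.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]
print(confident[["id", "title", "categories", "probability"]].head())
```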
1807.02728
|
Neeraj Varshney
|
Neeraj Varshney, Adarsh Patel, Yansha Deng, Werner Haselmayr, Pramod
K. Varshney, Arumugam Nallanathan
|
Abnormality Detection inside Blood Vessels with Mobile Nanomachines
|
Submitted to IEEE Transactions on Molecular, Biological, and
Multi-Scale Communications Letters for possible publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the numerous healthcare applications of molecular communication
within Internet of Bio-Nano Things (IoBNT), this work addresses the problem of
abnormality detection in a blood vessel using multiple biological embedded
computing devices called cooperative biological nanomachines (CNs), and a
common receiver called the fusion center (FC). Due to blood flow inside a
vessel, each CN and the FC are assumed to be mobile. In this work, each of the
CNs performs abnormality detection with certain probabilities of detection and
false alarm by counting the number of molecules received from a source, e.g.,
infected tissue. These CNs subsequently report their local decisions to a FC
over a diffusion-advection blood flow channel using different types of
molecules in the presence of inter-symbol interference, multi-source
interference, and counting errors. Due to limited computational capability at
the FC, OR and AND logic based fusion rules are employed to make the final
decision after obtaining each local decision based on the optimal likelihood
ratio test. For the aforementioned system, probabilities of detection and false
alarm at the FC are derived for OR and AND fusion rules. Finally, simulation
results are presented to validate the derived analytical results, which provide
important insights.
|
[
{
"version": "v1",
"created": "Sun, 8 Jul 2018 00:07:48 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Varshney",
"Neeraj",
""
],
[
"Patel",
"Adarsh",
""
],
[
"Deng",
"Yansha",
""
],
[
"Haselmayr",
"Werner",
""
],
[
"Varshney",
"Pramod K.",
""
],
[
"Nallanathan",
"Arumugam",
""
]
] |
new_dataset
| 0.973858 |
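The record above derives detection and false-alarm probabilities at the fusion center under OR and AND rules. For independent local decisions, the standard fusion formulas can be sketched as follows; this is a textbook illustration, not the paper's derivation, which additionally models the diffusion-advection reporting channel:

```python
import numpy as np

def fuse_or(p):
    """OR rule: the FC declares an abnormality if any CN reports a detection."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.prod(1.0 - p)

def fuse_and(p):
    """AND rule: the FC declares an abnormality only if every CN reports one."""
    p = np.asarray(p, dtype=float)
    return np.prod(p)

# Illustrative per-CN probabilities (not values from the paper).
p_detect = [0.90, 0.85, 0.80]
p_false  = [0.05, 0.10, 0.08]
print(fuse_or(p_detect), fuse_or(p_false))    # detection / false alarm, OR rule
print(fuse_and(p_detect), fuse_and(p_false))  # detection / false alarm, AND rule
```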
1807.02804
|
Xiaomeng Li
|
Xiaomeng Li, Lequan Yu, Chi-Wing Fu, Pheng-Ann Heng
|
Deeply Supervised Rotation Equivariant Network for Lesion Segmentation
in Dermoscopy Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic lesion segmentation in dermoscopy images is an essential step for
computer-aided diagnosis of melanoma. Dermoscopy images exhibit rotational
and reflectional symmetry; however, this geometric property has not been
encoded in state-of-the-art convolutional neural network based skin lesion
segmentation methods. In this paper, we present a deeply supervised rotation
equivariant network for skin lesion segmentation by extending the recent group
rotation equivariant network~\cite{cohen2016group}. Specifically, we propose
the G-upsampling and G-projection operations to adapt the rotation equivariant
classification network for our skin lesion segmentation problem. To further
increase the performance, we integrate the deep supervision scheme into our
proposed rotation equivariant segmentation architecture. The whole framework is
equivariant to input transformations, including rotation and reflection, which
improves the network efficiency and thus contributes to the segmentation
performance. We extensively evaluate our method on the ISIC 2017 skin lesion
challenge dataset. The experimental results show that our rotation equivariant
networks consistently outperform their regular counterparts with the same model
complexity under different experimental settings. Our best model achieves
77.23\%(JA) on the test dataset, outperforming the state-of-the-art challenging
methods and further demonstrating the effectiveness of our proposed deeply
supervised rotation equivariant segmentation network.
|
[
{
"version": "v1",
"created": "Sun, 8 Jul 2018 11:49:49 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Li",
"Xiaomeng",
""
],
[
"Yu",
"Lequan",
""
],
[
"Fu",
"Chi-Wing",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.975073 |
1807.02947
|
Snehasis Mukherjee
|
Snehasis Mukherjee, Leburu Anvitha and T. Mohana Lahari
|
Human Activity Recognition in RGB-D Videos by Dynamic Images
|
Submitted in ICARCV 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human Activity Recognition in RGB-D videos has been an active research topic
during the last decade. However, no efforts have been found in the literature,
for recognizing human activity in RGB-D videos where several performers are
performing simultaneously. In this paper we introduce such a challenging
dataset with several performers performing the activities. We present a novel
method for recognizing human activities in such videos. The proposed method
aims to capture the motion information of the whole video by producing a
dynamic image corresponding to the input video. We use two parallel ResNext-101
networks to produce the dynamic images for the RGB video and depth video
separately. The dynamic images contain only the motion information; hence, the
unnecessary background information is eliminated. We send the two dynamic images extracted
from the RGB and Depth videos respectively, through a fully connected layer of
neural networks. The proposed dynamic image reduces the complexity of the
recognition process by extracting a sparse matrix from a video. However, the
proposed system maintains the required motion information for recognizing the
activity. The proposed method has been tested on the MSR Action 3D dataset and
has shown performance comparable to the state-of-the-art. We also
apply the proposed method on our own dataset, where the proposed method
outperforms the state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Mon, 9 Jul 2018 05:28:19 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Mukherjee",
"Snehasis",
""
],
[
"Anvitha",
"Leburu",
""
],
[
"Lahari",
"T. Mohana",
""
]
] |
new_dataset
| 0.999123 |
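The dynamic image in the record above collapses a video's motion into a single image via a temporally weighted combination of frames. The sketch below uses the simplified approximate rank-pooling weights alpha_t = 2t - T - 1 sometimes cited in the dynamic-image literature; the exact weighting is an assumption here, not necessarily the one used in this paper:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a (T, H, W, C) clip into one image via a weighted frame sum.

    Assumed weighting: alpha_t = 2t - T - 1 for t = 1..T, so later frames
    contribute more, emphasizing motion over static background.
    """
    frames = np.asarray(frames, dtype=np.float32)
    T = frames.shape[0]
    alphas = 2.0 * np.arange(1, T + 1) - T - 1       # shape (T,)
    di = np.tensordot(alphas, frames, axes=(0, 0))   # weighted sum over time
    di = (di - di.min()) / (di.max() - di.min() + 1e-8) * 255.0
    return di.astype(np.uint8)

clip = np.random.rand(16, 120, 160, 3)   # stand-in for a short RGB clip
print(dynamic_image(clip).shape)         # (120, 160, 3)
```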
1807.03088
|
Chunlei Li
|
Zibi Xiao, Xiangyong Zeng, Chunlei Li and Tor Helleseth
|
Corrigendum to New Generalized Cyclotomic Binary Sequences of Period
$p^2$
|
In the appended corrigendum, we pointed out that the proof of Lemma 6
in the paper only holds for $f=2$ and gave a proof for any $f=2^r$ when $p$
is a non-Wieferich prime
| null |
10.1007/s10623-017-0408-7
| null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
New generalized cyclotomic binary sequences of period $p^2$ are proposed in
this paper, where $p$ is an odd prime. The sequences are almost balanced and
their linear complexity is determined. The result shows that the proposed
sequences have very large linear complexity if $p$ is a non-Wieferich prime.
|
[
{
"version": "v1",
"created": "Mon, 9 Jul 2018 13:03:57 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Xiao",
"Zibi",
""
],
[
"Zeng",
"Xiangyong",
""
],
[
"Li",
"Chunlei",
""
],
[
"Helleseth",
"Tor",
""
]
] |
new_dataset
| 0.999605 |
1807.03099
|
Ayse Ipek Akin Atalay
|
Ayse Ipek Akin, Nafiseh Janatian, Ivan Stupia, and Luc Vandendorpe
|
SWIPT-based Real-Time Mobile Computing Systems: A Stochastic Geometry
Perspective
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driven by the Internet of Things vision, recent years have seen the rise of
new horizons for the wireless ecosystem in which a very large number of mobile
low power devices interact to run sophisticated applications. The main
hindrance to the massive deployment of low power nodes is most probably the
prohibitive maintenance cost of battery replacement and the ecotoxicity of the
battery production/end-of-life. An emerging research direction to avoid battery
replacement is the combination of radio frequency energy harvesting and mobile
computing (MC). In this paper, we propose the use of simultaneous information
and power transfer (SWIPT) to control the distributed computation process while
delivering power to perform the computation tasks requested. A real-time MC
system is considered, meaning that the trade-off between the information rate
and the energy harvested must be carefully chosen to guarantee that the CPU may
perform tasks of given complexity before receiving a new control signal. In
order to provide a system-level perspective on the performance of SWIPT-MC
networks, we propose a mathematical framework based on stochastic geometry to
characterise the rate-energy trade-off of the system. The resulting achievable
performance region is then put in relation with the CPU energy consumption to
investigate the operating conditions of real-time computing systems. Finally,
numerical results illustrate the joint effect of the network densification and
the propagation environment on the optimisation of the CPU usage.
|
[
{
"version": "v1",
"created": "Mon, 9 Jul 2018 13:17:33 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Akin",
"Ayse Ipek",
""
],
[
"Janatian",
"Nafiseh",
""
],
[
"Stupia",
"Ivan",
""
],
[
"Vandendorpe",
"Luc",
""
]
] |
new_dataset
| 0.99895 |
1807.03128
|
Diederik Paul Moeys
|
Diederik Paul Moeys, Daniel Neil, Federico Corradi, Emmett Kerr,
Philip Vance, Gautham Das, Sonya A. Coleman, Thomas M. McGinnity, Dermot
Kerr, Tobi Delbruck
|
PRED18: Dataset and Further Experiments with DAVIS Event Camera in
Predator-Prey Robot Chasing
|
8 pages
|
IEEE EBCCSP 2018
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine vision systems using convolutional neural networks (CNNs) for robotic
applications are increasingly being developed. Conventional vision CNNs are
driven by camera frames at constant sample rate, thus achieving a fixed latency
and power consumption tradeoff. This paper describes further work on the first
experiments of a closed-loop robotic system integrating a CNN together with a
Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The
DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15
Hz frame-rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and
OFF events at a variable frame-rate ranging from 15-500 Hz depending on the
robot speeds. In contrast to conventional frame-based systems, the latency and
processing cost depends on the rate of change of the image. The CNN is trained
offline on the 1.25h labeled dataset to recognize the position and size of the
prey robot, in the field of view of the predator. During inference, combining
the ten output classes of the CNN allows extracting the analog position vector
of the prey relative to the predator with a mean 8.7% error in angular
estimation. The system is compatible with conventional deep learning
technology, but achieves a variable latency-power tradeoff that adapts
automatically to the dynamics. Finally, investigations on the robustness of the
algorithm, a human performance comparison and a deconvolution analysis are also
explored.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 18:07:18 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Moeys",
"Diederik Paul",
""
],
[
"Neil",
"Daniel",
""
],
[
"Corradi",
"Federico",
""
],
[
"Kerr",
"Emmett",
""
],
[
"Vance",
"Philip",
""
],
[
"Das",
"Gautham",
""
],
[
"Coleman",
"Sonya A.",
""
],
[
"McGinnity",
"Thomas M.",
""
],
[
"Kerr",
"Dermot",
""
],
[
"Delbruck",
"Tobi",
""
]
] |
new_dataset
| 0.997199 |
1807.03130
|
Dov Danon
|
Dov Danon, Hadar Averbuch-Elor, Ohad Fried, Daniel Cohen-Or
|
Unsupervised Natural Image Patch Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning a metric of natural image patches is an important tool for analyzing
images. An efficient means is to train a deep network to map an image patch to
a vector space, in which the Euclidean distance reflects patch similarity.
Previous attempts learned such an embedding in a supervised manner, requiring
the availability of many annotated images. In this paper, we present an
unsupervised embedding of natural image patches, avoiding the need for
annotated images. The key idea is that the similarity of two patches can be
learned from the prevalence of their spatial proximity in natural images.
Clearly, when relying on this simple principle, many spatially nearby pairs are
outliers; however, as we show, the outliers do not harm the convergence of the
metric learning. We show that our unsupervised embedding approach is more
effective than a supervised one or one that uses deep patch representations.
Moreover, we show that it naturally lends itself to an efficient
self-supervised domain adaptation technique for a target domain that contains
a common foreground object.
|
[
{
"version": "v1",
"created": "Thu, 28 Jun 2018 18:21:43 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Danon",
"Dov",
""
],
[
"Averbuch-Elor",
"Hadar",
""
],
[
"Fried",
"Ohad",
""
],
[
"Cohen-Or",
"Daniel",
""
]
] |
new_dataset
| 0.953124 |
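The key idea in the record above is that spatially nearby patches in natural images tend to be similar, so proximity can supply positive training pairs without annotations. A minimal, hypothetical sampler in that spirit (patch size, offset range, and the negative-sampling strategy are assumptions, not the paper's exact procedure):

```python
import numpy as np

def sample_patch_triplet(image, patch=32, max_offset=8, rng=None):
    """Return (anchor, positive, negative) patches from one image.

    The positive is taken a few pixels away from the anchor (spatial
    proximity); the negative comes from a random, likely distant location.
    """
    rng = rng or np.random.default_rng()
    H, W = image.shape[:2]
    y = rng.integers(max_offset, H - patch - max_offset)
    x = rng.integers(max_offset, W - patch - max_offset)
    dy, dx = rng.integers(-max_offset, max_offset + 1, size=2)
    anchor   = image[y:y + patch, x:x + patch]
    positive = image[y + dy:y + dy + patch, x + dx:x + dx + patch]
    ny, nx = rng.integers(0, H - patch), rng.integers(0, W - patch)
    negative = image[ny:ny + patch, nx:nx + patch]
    return anchor, positive, negative

img = np.random.rand(240, 320, 3)        # stand-in for a natural image
a, p, n = sample_patch_triplet(img)
print(a.shape, p.shape, n.shape)         # (32, 32, 3) each
```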
1807.03168
|
Maksym Zavershynskyi
|
Maksym Zavershynskyi, Alex Skidanov, Illia Polosukhin
|
NAPS: Natural Program Synthesis Dataset
|
4 pages, 5 tables in 2nd Workshop on Neural Abstract Machines &
Program Induction (NAMPI), @ICML 2018
| null | null | null |
cs.LG cs.AI cs.PL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a program synthesis-oriented dataset consisting of human written
problem statements and solutions for these problems. The problem statements
were collected via crowdsourcing and the program solutions were extracted from
human-written solutions in programming competitions, accompanied by
input/output examples. We propose using this dataset for the program synthesis
tasks aimed at working with real user-generated data. As a baseline we present
a few models, with the best model achieving 8.8% accuracy, showcasing both the
complexity of the dataset and the large room for future research.
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 02:59:34 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Zavershynskyi",
"Maksym",
""
],
[
"Skidanov",
"Alex",
""
],
[
"Polosukhin",
"Illia",
""
]
] |
new_dataset
| 0.999766 |
1807.03170
|
Mehdi Tavan
|
Mehdi Tavan and Kamel Sabahi
|
Input Current Sensorless Control for an AC-DC Converter with Unknown Load
and Source Voltage Amplitude
|
arXiv admin note: substantial text overlap with arXiv:1804.00342
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Input current estimation is indispensable in the sensorless control
algorithms for the problem of power factor compensation (PFC) of an AC-DC boost
converter. The system estimator design is challenged by the bilinear form
dynamics and uncertain parameters of the system. In this paper, the system
dynamics is immersed to a proper form by a new filtered transformation. Thanks
to the proposed transformation, the input current, input voltage amplitude, and
load conductance are globally estimated. The exponential convergent of the
estimates is established in normal converter operation. An application of the
proposed estimator is presented in conjunction with a well-known dynamic
controller.
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 06:35:07 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Tavan",
"Mehdi",
""
],
[
"Sabahi",
"Kamel",
""
]
] |
new_dataset
| 0.996453 |
1807.03280
|
Chao Wang
|
Shengjian Guo and Meng Wu and Chao Wang
|
Adversarial Symbolic Execution for Detecting Concurrency-Related Cache
Timing Leaks
| null | null | null | null |
cs.CR cs.DC cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  The timing characteristics of the cache, a high-speed storage between the fast
CPU and the slow memory, may reveal sensitive information about a program, thus
allowing an adversary to conduct side-channel attacks. Existing methods for
detecting timing leaks either ignore the cache altogether or focus only on
passive leaks generated by the program itself, without considering leaks that
are made possible by concurrently running some other threads. In this work, we
show that timing-leak-freedom is not a compositional property: a program that
is not leaky when running alone may become leaky when interleaved with other
threads. Thus, we develop a new method, named adversarial symbolic execution,
to detect such leaks. It systematically explores both the feasible program
paths and their interleavings while modeling the cache, and leverages an SMT
solver to decide if there are timing leaks. We have implemented our method in
LLVM and evaluated it on a set of real-world ciphers with 14,455 lines of C
code in total. Our experiments demonstrate both the efficiency of our method
and its effectiveness in detecting side-channel leaks.
|
[
{
"version": "v1",
"created": "Mon, 9 Jul 2018 17:32:09 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Guo",
"Shengjian",
""
],
[
"Wu",
"Meng",
""
],
[
"Wang",
"Chao",
""
]
] |
new_dataset
| 0.998533 |
1807.03296
|
Jan K\v{r}et\'insk\'y
|
Jan K\v{r}et\'insk\'y and Tobias Meggendorfer and Salomon Sickert
|
LTL Store: Repository of LTL formulae from literature and case studies
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This continuously extended technical report collects and compares commonly
used formulae from the literature and provides them in a machine readable way.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 14:41:32 GMT"
}
] | 2018-07-10T00:00:00 |
[
[
"Křetínský",
"Jan",
""
],
[
"Meggendorfer",
"Tobias",
""
],
[
"Sickert",
"Salomon",
""
]
] |
new_dataset
| 0.999291 |
1803.06064
|
Chao-Chun Liang
|
Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin and Keh-Yih Su
|
A Meaning-based Statistical English Math Word Problem Solver
|
Accepted as a long paper at NAACL HLT 2018
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MeSys, a meaning-based approach, for solving English math word
problems (MWPs) via understanding and reasoning in this paper. It first
analyzes the text, transforms both body and question parts into their
corresponding logic forms, and then performs inference on them. The associated
context of each quantity is represented with proposed role-tags (e.g., nsubj,
verb, etc.), which provides the flexibility for annotating an extracted math
quantity with its associated context information (i.e., the physical meaning of
this quantity). Statistical models are proposed to select the operator and
operands. A noisy dataset is designed to assess if a solver solves MWPs mainly
via understanding or mechanical pattern matching. Experimental results show
that our approach outperforms existing systems on both the benchmark datasets and
the noisy dataset, which demonstrates that the proposed approach better understands
the meaning of each quantity in the text.
|
[
{
"version": "v1",
"created": "Fri, 16 Mar 2018 03:07:06 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jul 2018 00:37:36 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Liang",
"Chao-Chun",
""
],
[
"Wong",
"Yu-Shiang",
""
],
[
"Lin",
"Yi-Chung",
""
],
[
"Su",
"Keh-Yih",
""
]
] |
new_dataset
| 0.996314 |
1804.07954
|
Radu Tudor Ionescu
|
M\u{a}d\u{a}lina Cozma and Andrei M. Butnaru and Radu Tudor Ionescu
|
Automated essay scoring with string kernels and word embeddings
|
Accepted at ACL 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an approach based on combining string kernels and
word embeddings for automatic essay scoring. String kernels capture the
similarity among strings based on counting common character n-grams, which are
a low-level yet powerful type of feature, demonstrating state-of-the-art
results in various text classification tasks such as Arabic dialect
identification or native language identification. To our best knowledge, we are
the first to apply string kernels to automatically score essays. We are also
the first to combine them with a high-level semantic feature representation,
namely the bag-of-super-word-embeddings. We report the best performance on the
Automated Student Assessment Prize data set, in both in-domain and cross-domain
settings, surpassing recent state-of-the-art deep learning approaches.
|
[
{
"version": "v1",
"created": "Sat, 21 Apr 2018 12:26:29 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jul 2018 12:49:40 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Cozma",
"Mădălina",
""
],
[
"Butnaru",
"Andrei M.",
""
],
[
"Ionescu",
"Radu Tudor",
""
]
] |
new_dataset
| 0.981704 |
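The string kernels referenced in the record above measure similarity by counting common character n-grams. A small self-contained sketch of such a spectrum-style kernel, shown as a generic illustration rather than the paper's exact kernel or normalization:

```python
from collections import Counter

def ngram_counts(text, n=3):
    """Character n-gram histogram of a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def string_kernel(a, b, n=3):
    """Spectrum kernel: sum over shared n-grams of the product of their counts."""
    ca, cb = ngram_counts(a, n), ngram_counts(b, n)
    return sum(count * cb[g] for g, count in ca.items())

essay_1 = "the essay argues that string kernels are effective features"
essay_2 = "string kernels count shared character n-grams between essays"
print(string_kernel(essay_1, essay_2, n=3))
```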
1807.02205
|
Adib Rastegarnia
|
Douglas Comer, Adib Rastegarnia
|
OSDF: An Intent-based Software Defined Network Programming Framework
|
9 pages, accepted as a full paper in LCN 2018
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software Defined Networking (SDN) offers flexibility to program a network
based on a set of network requirements. Programming the networks using SDN is
not completely straightforward because a programmer must deal with low level
details. To solve the problem, researchers proposed a set of network
programming languages that provide a set of high level abstractions to hide low
level hardware details. Most of the proposed languages provide abstractions
related to packet processing and flows, and still require a programmer to
specify low-level match-action fields to configure and monitor a network.
Recently, in an attempt to raise the level at which programmers work,
researchers have begun to investigate Intent-based, descriptive northbound
interfaces. The work is still in early stages, and further investigation is
required before intent-based systems will be adopted by enterprise networks. To
help achieve the goal of moving to an intent-based design, we propose an
SDN-based network programming framework, the Open Software Defined Framework
(OSDF). OSDF provides a high level Application Programming Interface (API) that
can be used by managers and network administrators to express network
requirements for applications and policies for multiple domains. OSDF also
provides a set of high level network operation services that handle common
network configuration, monitoring, and Quality of Service (QoS) provisioning.
OSDF is equipped with a policy conflict management module to help a network
administrator detect and resolve policy conflicts. The paper shows how OSDF can
be used and explains application-based policies. Finally, the paper reports the
results of both testbed measurements and simulations that are used to evaluate
the framework from multiple perspectives, including functionality and
performance.
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 00:27:00 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Comer",
"Douglas",
""
],
[
"Rastegarnia",
"Adib",
""
]
] |
new_dataset
| 0.999737 |
1807.02247
|
Fan Yang
|
Kevin Lin, Fan Yang, Qiaosong Wang, Robinson Piramuthu
|
Adversarial Learning for Fine-grained Image Search
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-grained image search is still a challenging problem due to the
difficulty in capturing subtle differences regardless of pose variations of
objects from fine-grained categories. In practice, a dynamic inventory with new
fine-grained categories adds another dimension to this challenge. In this work,
we propose an end-to-end network, called FGGAN, that learns discriminative
representations by implicitly learning a geometric transformation from
multi-view images for fine-grained image search. We integrate a generative
adversarial network (GAN) that can automatically handle complex view and pose
variations by converting them to a canonical view without any predefined
transformations. Moreover, in an open-set scenario, our network is able to
better match images from unseen and unknown fine-grained categories. Extensive
experiments on two public datasets and a newly collected dataset have
demonstrated the outstanding robust performance of the proposed FGGAN in both
closed-set and open-set scenarios, providing as much as 10% relative
improvement compared to baselines.
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 04:03:11 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Lin",
"Kevin",
""
],
[
"Yang",
"Fan",
""
],
[
"Wang",
"Qiaosong",
""
],
[
"Piramuthu",
"Robinson",
""
]
] |
new_dataset
| 0.990942 |
1807.02251
|
WajihUllah Baig
|
Wajih Ullah Baig, Umar Munir, Waqas Ellahi, Adeel Ejaz, Kashif Sardar
(National Database and Registration Authority)
|
Minutia Texture Cylinder Codes for fingerprint matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Minutia Cylinder Codes (MCC) are minutiae based fingerprint descriptors that
take into account minutiae information in a fingerprint image for fingerprint
matching. In this paper, we present a modification to the underlying
information of the MCC descriptor and show that using different features, the
accuracy of matching is highly affected by such changes. MCC, originally a
minutiae-only descriptor, is transformed into a texture descriptor. The
transformation is from minutiae angular information to orientation, frequency
and energy information using Short Time Fourier Transform (STFT) analysis. The
minutia cylinder codes are converted to minutiae texture cylinder codes (MTCC).
Based on a fixed set of parameters, the proposed changes to MCC show improved
performance on FVC 2002 and 2004 data sets and surpass the traditional MCC
performance.
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 04:25:18 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Baig",
"Wajih Ullah",
"",
"National Database and Registration Authority"
],
[
"Munir",
"Umar",
"",
"National Database and Registration Authority"
],
[
"Ellahi",
"Waqas",
"",
"National Database and Registration Authority"
],
[
"Ejaz",
"Adeel",
"",
"National Database and Registration Authority"
],
[
"Sardar",
"Kashif",
"",
"National Database and Registration Authority"
]
] |
new_dataset
| 0.999774 |
1807.02256
|
Mohammad Masudur Rahman
|
Mohammad Masudur Rahman and Chanchal K. Roy
|
SurfClipse: Context-Aware Meta Search in the IDE
|
The 30th International Conference on Software Maintenance and
Evolution (ICSME 2014), pp. 617--620, Victoria, Canada, September 2014
|
Proc. ICSME 2014, pp. 617--620
|
10.1109/ICSME.2014.109
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Despite various debugging supports of the existing IDEs for programming
errors and exceptions, software developers often look to the web for working
solutions or up-to-date information. Traditional web search does not
consider the context of the problems that they search solutions for, and thus
it often does not help much in problem solving. In this paper, we propose a
context-aware meta search tool, SurfClipse, that analyzes an encountered
exception and its context in the IDE, and recommends not only suitable search
queries but also relevant web pages for the exception (and its context). The
tool collects results from three popular search engines and a programming Q & A
site against the exception in the IDE, refines the results for relevance
against the context of the exception, and then ranks them before
recommendation. It provides two working modes--interactive and proactive to
meet the versatile needs of the developers, and one can browse the result pages
using a customized embedded browser provided by the tool.
Tool page: www.usask.ca/~masud.rahman/surfclipse
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 05:18:43 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Rahman",
"Mohammad Masudur",
""
],
[
"Roy",
"Chanchal K.",
""
]
] |
new_dataset
| 0.999256 |
1807.02282
|
Yi Qin
|
Yi Qin, Tao Xie, Chang Xu, Angello Astorga, and Jian Lu
|
CoMID: Context-based Multi-Invariant Detection for Monitoring
Cyber-Physical Software
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyber-physical software continually interacts with its physical environment
for adaptation in order to deliver smart services. However, the interactions
can be subject to various errors when the software's assumption on its
environment no longer holds, thus leading to unexpected misbehavior or even
failure. To address this problem, one promising way is to conduct runtime
monitoring of invariants, so as to prevent cyber-physical software from
entering such errors (a.k.a. abnormal states). To effectively detect abnormal
states, we in this article present an approach, named Context-based
Multi-Invariant Detection (CoMID), which consists of two techniques:
context-based trace grouping and multi-invariant detection. The former infers
contexts to distinguish different effective scopes for CoMID's derived
invariants, and the latter conducts ensemble evaluation of multiple invariants
to detect abnormal states. We experimentally evaluate CoMID on real-world
cyber-physical software. The results show that CoMID achieves a 5.7-28.2%
higher true-positive rate and a 6.8-37.6% lower false-positive rate in
detecting abnormal states, as compared with state-of-the-art approaches (i.e.,
Daikon and ZoomIn). When deployed in field tests, CoMID's runtime monitoring
improves the success rate of cyber-physical software in its task executions by
15.3-31.7%.
|
[
{
"version": "v1",
"created": "Fri, 6 Jul 2018 06:55:02 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Qin",
"Yi",
""
],
[
"Xie",
"Tao",
""
],
[
"Xu",
"Chang",
""
],
[
"Astorga",
"Angello",
""
],
[
"Lu",
"Jian",
""
]
] |
new_dataset
| 0.999621 |
1807.02478
|
Cunchao Tu
|
Chaojun Xiao and Haoxi Zhong and Zhipeng Guo and Cunchao Tu and
Zhiyuan Liu and Maosong Sun and Yansong Feng and Xianpei Han and Zhen Hu and
Heng Wang and Jianfeng Xu
|
CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction
|
4 pages, 2 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce the \textbf{C}hinese \textbf{AI} and \textbf{L}aw
challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for
judgment prediction. CAIL2018 contains more than $2.6$ million criminal cases
published by the Supreme People's Court of China, which is several times
larger than other datasets in existing works on judgment prediction. Moreover,
the annotations of judgment results are more detailed and rich. It consists of
applicable law articles, charges, and prison terms, which are expected to be
inferred according to the fact descriptions of cases. For comparison, we
implement several conventional text classification baselines for judgment
prediction and experimental results show that it is still a challenge for
current models to predict the judgment results of legal cases, especially on
prison terms. To help researchers make improvements on legal judgment
prediction, both CAIL2018 and the baselines will be released after the CAIL
competition\footnote{http://cail.cipsc.org.cn/}.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 02:09:06 GMT"
}
] | 2018-07-09T00:00:00 |
[
[
"Xiao",
"Chaojun",
""
],
[
"Zhong",
"Haoxi",
""
],
[
"Guo",
"Zhipeng",
""
],
[
"Tu",
"Cunchao",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
],
[
"Feng",
"Yansong",
""
],
[
"Han",
"Xianpei",
""
],
[
"Hu",
"Zhen",
""
],
[
"Wang",
"Heng",
""
],
[
"Xu",
"Jianfeng",
""
]
] |
new_dataset
| 0.99952 |
1312.2048
|
Brian Hanley
|
Brian P. Hanley
|
The False Premises and Promises of Bitcoin
|
28 pages, 6 figures. JEL: E21, E22, E42, E51, G21, G29, G28 Section
2.6 has been broken out into a separate paper, and that unwieldy section is
replaced by a short bit referencing that new paper titled, "A zero-sum
monetary system, interest rates, and implications."
| null | null | null |
cs.CE q-fin.GN
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Designed to compete with fiat currencies, bitcoin proposes it is a
crypto-currency alternative. Bitcoin makes a number of false claims, including:
solving the double-spending problem is a good thing; bitcoin can be a reserve
currency for banking; hoarding equals saving, and that we should believe
bitcoin can expand by deflation to become a global transactional currency
supply. Bitcoin's developers combine technical implementation proficiency with
ignorance of currency and banking fundamentals. This has resulted in a failed
attempt to change finance. A set of recommendations to change finance are
provided in the Afterword: Investment/venture banking for the masses; Venture
banking to bring back what investment banks once were; Open-outcry exchange for
all CDS contracts; Attempting to develop CDS type contracts on investments in
startup and existing enterprises; and Improving the connection between startup
tech/ideas, business organization and investment.
|
[
{
"version": "v1",
"created": "Sat, 7 Dec 2013 01:41:50 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Dec 2013 18:09:35 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Dec 2013 18:55:19 GMT"
},
{
"version": "v4",
"created": "Tue, 31 Dec 2013 01:10:46 GMT"
},
{
"version": "v5",
"created": "Fri, 7 Feb 2014 20:49:25 GMT"
},
{
"version": "v6",
"created": "Tue, 25 Feb 2014 01:46:41 GMT"
},
{
"version": "v7",
"created": "Fri, 26 Jun 2015 22:31:15 GMT"
},
{
"version": "v8",
"created": "Wed, 4 Jul 2018 20:15:21 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Hanley",
"Brian P.",
""
]
] |
new_dataset
| 0.9988 |
1703.05443
|
Rijul Magu
|
Rijul Magu, Kshitij Joshi, Jiebo Luo
|
Detecting the Hate Code on Social Media
| null |
Eleventh International AAAI Conference on Weblogs and Social
Media., 2017, 608-612
| null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media has become an indispensable part of the everyday lives of
millions of people around the world. It provides a platform for expressing
opinions and beliefs, communicated to a massive audience. However, this ease
with which people can express themselves has also allowed for the large scale
spread of propaganda and hate speech. To prevent violating the abuse policies
of social media platforms and also to avoid detection by automatic systems like
Google's Conversation AI, racists have begun to use a code (a movement termed
Operation Google). This involves substituting references to communities by
benign words that seem out of context, in hate filled posts or Tweets. For
example, users have used the words Googles and Bings to represent the
African-American and Asian communities, respectively. By generating the list of
users who post such content, we move a step forward from classifying tweets by
allowing us to study the usage pattern of this concentrated set of users.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2017 01:03:49 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Magu",
"Rijul",
""
],
[
"Joshi",
"Kshitij",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.998731 |
1706.08302
|
Jo\~ao Paulo de Araujo
|
Jo\~ao Paulo de Araujo and Luciana Arantes and Elias P. Duarte Jr. and
Luiz A. Rodrigues and Pierre Sens
|
VCube-PS: A Causal Broadcast Topic-based Publish/Subscribe System
|
Improved text and performance evaluation. Added proof for the
algorithms (Section 3.4)
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we present VCube-PS, a topic-based Publish/Subscribe system
built on the top of a virtual hypercube-like topology. Membership information
and published messages are broadcast to subscribers (members) of a topic group
over dynamically built spanning trees rooted at the publisher. For a given
topic, the delivery of published messages respects the causal order. VCube-PS
was implemented on the PeerSim simulator, and experiments are reported
including a comparison with the traditional Publish/Subscribe approach that
employs a single rooted static spanning-tree for message distribution. Results
confirm the efficiency of VCube-PS in terms of scalability, latency, number and
size of messages.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2017 09:57:12 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jul 2018 21:48:16 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"de Araujo",
"João Paulo",
""
],
[
"Arantes",
"Luciana",
""
],
[
"Duarte",
"Elias P.",
"Jr."
],
[
"Rodrigues",
"Luiz A.",
""
],
[
"Sens",
"Pierre",
""
]
] |
new_dataset
| 0.983082 |
1707.08380
|
Alberto Giaretta
|
Nicola Dragoni, Alberto Giaretta and Manuel Mazzara
|
The Internet of Hackable Things
| null |
Proceedings of 5th International Conference in Software
Engineering for Defence Applications. SEDA 2016. Advances in Intelligent
Systems and Computing, vol 717. Springer, Cham
|
10.1007/978-3-319-70578-1_13
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet of Things makes possible to connect each everyday object to the
Internet, making computing pervasive like never before. From a security and
privacy perspective, this tsunami of connectivity represents a disaster, which
makes each object remotely hackable. We claim that, in order to tackle this
issue, we need to address a new challenge in security: education.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2017 11:21:23 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Dragoni",
"Nicola",
""
],
[
"Giaretta",
"Alberto",
""
],
[
"Mazzara",
"Manuel",
""
]
] |
new_dataset
| 0.995863 |
1712.08362
|
Daniel Paulusma
|
Matthew Johnson and Giacomo Paesani and Daniel Paulusma
|
Connected Vertex Cover for $(sP_1+P_5)$-Free Graphs
| null | null | null | null |
cs.DS cs.CC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  The Connected Vertex Cover problem is to decide if a graph $G$ has a vertex
cover of size at most $k$ that induces a connected subgraph of $G$. This is a
well-studied problem, known to be NP-complete for restricted graph classes,
and, in particular, for $H$-free graphs if $H$ is not a linear forest (a graph
is $H$-free if it does not contain $H$ as an induced subgraph). It is easy to
see that Connected Vertex Cover is polynomial-time solvable for $P_4$-free
graphs. We continue the search for tractable graph classes: we prove that it is
also polynomial-time solvable for $(sP_1+P_5)$-free graphs for every integer
$s\geq 0$.
|
[
{
"version": "v1",
"created": "Fri, 22 Dec 2017 09:18:52 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Feb 2018 14:43:30 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jul 2018 16:37:04 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Johnson",
"Matthew",
""
],
[
"Paesani",
"Giacomo",
""
],
[
"Paulusma",
"Daniel",
""
]
] |
new_dataset
| 0.999328 |
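As a concrete illustration of the problem definition in the record above (not of the paper's polynomial-time algorithm for $(sP_1+P_5)$-free graphs), the following sketch checks whether a vertex set S is a connected vertex cover of a graph given as an adjacency list:

```python
from collections import deque

def is_connected_vertex_cover(adj, S):
    """True if S covers every edge and induces a connected subgraph.

    adj: dict mapping each vertex to an iterable of its neighbours.
    """
    S = set(S)
    # Vertex cover: every edge needs at least one endpoint in S.
    for u, nbrs in adj.items():
        for v in nbrs:
            if u not in S and v not in S:
                return False
    if not S:
        return all(not nbrs for nbrs in adj.values())  # only an edgeless graph
    # Connectivity of the induced subgraph G[S], via BFS restricted to S.
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in S and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == S

# Path 0-1-2-3: {1, 2} is a connected vertex cover, {0, 3} is not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_connected_vertex_cover(path, {1, 2}))  # True
print(is_connected_vertex_cover(path, {0, 3}))  # False (edge 1-2 uncovered)
```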
1803.02380
|
Pedro F. Proen\c{c}a
|
Pedro F. Proen\c{c}a and Yang Gao
|
Fast Cylinder and Plane Extraction from Depth Cameras for Visual
Odometry
|
Accepted to IROS 2018
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents CAPE, a method to extract planes and cylinder segments
from organized point clouds, which processes 640x480 depth images on a single
CPU core at an average of 300 Hz, by operating on a grid of planar cells.
While, compared to state-of-the-art plane extraction, the latency of CAPE is
more consistent and 4-10 times faster, depending on the scene, we also
demonstrate empirically that applying CAPE to visual odometry can improve
trajectory estimation on scenes made of cylindrical surfaces (e.g. tunnels),
whereas using a plane extraction approach that is not curve-aware deteriorates
performance on these scenes. To use these geometric primitives in visual
odometry, we propose extending a probabilistic RGB-D odometry framework based
on points, lines and planes to cylinder primitives. Following this framework,
CAPE runs on fused depth maps and the parameters of cylinders are modelled
probabilistically to account for uncertainty and weight accordingly the pose
optimization residuals.
|
[
{
"version": "v1",
"created": "Tue, 6 Mar 2018 19:07:45 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Mar 2018 11:36:45 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jul 2018 11:17:01 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Proença",
"Pedro F.",
""
],
[
"Gao",
"Yang",
""
]
] |
new_dataset
| 0.998355 |
1807.01726
|
Ze Wang
|
Ze Wang, Weiqiang Ren, Qiang Qiu
|
LaneNet: Real-Time Lane Detection Networks for Autonomous Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Lane detection aims to detect lanes on the road and provide the accurate
location and shape of each lane. It serves as one of the key techniques to
enable modern assisted and autonomous driving systems. However, several unique
properties of lanes challenge the detection methods. The lack of distinctive
features makes lane detection algorithms tend to be confused by other objects
with similar local appearance. Moreover, the inconsistent number of lanes on a
road as well as diverse lane line patterns, e.g. solid, broken, single, double,
merging, and splitting lines further hamper the performance. In this paper, we
propose a deep neural network based method, named LaneNet, to break down the
lane detection into two stages: lane edge proposal and lane line localization.
Stage one uses a lane edge proposal network for pixel-wise lane edge
classification, and the lane line localization network in stage two then
detects lane lines based on lane edge proposals. Note that LaneNet is built to
detect lane lines only, which makes it harder to suppress false detections on
similar lane markings on the road, such as arrows and characters. Despite all
the difficulties, our lane detection method is shown to be robust to both
highway and urban road scenarios without relying on any assumptions on the lane
number or the lane line patterns. The high running speed and low computational cost endow our LaneNet
the capability of being deployed on vehicle-based systems. Experiments validate
that our LaneNet consistently delivers outstanding performances on real world
traffic scenarios.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 18:05:04 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Wang",
"Ze",
""
],
[
"Ren",
"Weiqiang",
""
],
[
"Qiu",
"Qiang",
""
]
] |
new_dataset
| 0.994708 |
1807.01788
|
Siddhant Rao
|
Siddhant Rao
|
MITOS-RCNN: A Novel Approach to Mitotic Figure Detection in Breast
Cancer Histopathology Images using Region Based Convolutional Neural Networks
|
Submitted to Elsevier Medical Image Analysis journal. 17 pages. 3
tables. 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studies estimate that there will be 266,120 new cases of invasive breast
cancer and 40,920 breast cancer induced deaths in the year of 2018 alone.
Despite the pervasiveness of this affliction, the current process to obtain an
accurate breast cancer prognosis is tedious and time consuming, requiring a
trained pathologist to manually examine histopathological images in order to
identify the features that characterize various cancer severity levels. We
propose MITOS-RCNN: a novel region based convolutional neural network (RCNN)
geared for small object detection to accurately grade one of the three factors
that characterize tumor belligerence described by the Nottingham Grading
System: mitotic count. Other computational approaches to mitotic figure
counting and detection do not demonstrate ample recall or precision to be
clinically viable. Our models outperformed all previous participants in the
ICPR 2012 challenge, the AMIDA 2013 challenge and the MITOS-ATYPIA-14 challenge
along with recently published works. Our model achieved an F-measure score of
0.955, a 6.11% improvement in accuracy from the most accurate of the previously
proposed models.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 21:29:53 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Rao",
"Siddhant",
""
]
] |
new_dataset
| 0.973065 |
1807.01829
|
Yin Yang
|
Yin Yang
|
LinBFT: Linear-Communication Byzantine Fault Tolerance for Public
Blockchains
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents LinBFT, a novel Byzantine fault tolerance (BFT) protocol
for blockchain systems that achieves amortized O(n) communication volume per
block under reasonable conditions (where n is the number of participants),
while satisfying determinist guarantees on safety and liveness. This
significantly improves previous results, which either incurs quadratic
communication complexity, or only satisfies safety in a probabilistic sense.
LinBFT is based on the popular PBFT protocol, and cuts down its $O(n^4)$
complexity with three tricks, each by $O(n)$: linear view change, threshold
signatures, and verifiable random functions. All three are known, i.e., the
solutions are right in front of our eyes, and yet LinBFT is the first $O(n)$
solution with deterministic security guarantees.
Further, LinBFT also addresses issues that are specific to permission-less,
public blockchain systems, such as anonymous participants without a public-key
infrastructure, proof-of-stake with slashing, rotating leader, and a dynamic
participant set. In addition, LinBFT contains no proof-of-work module, reaches
consensus for every block, and tolerates changing honesty of the participants
for different blocks.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 02:26:23 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Yang",
"Yin",
""
]
] |
new_dataset
| 0.996666 |
1807.01844
|
Benyamin Ghojogh
|
Benyamin Ghojogh, Saeed Sharifian
|
Pontogammarus Maeoticus Swarm Optimization: A Metaheuristic Optimization
Algorithm
|
15 pages, 13 figures, 11 tables, key words: Pontogammarus Maeoticus,
Gammarus swarm, metaheuristic optimization
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, metaheuristic optimization algorithms are used to find the global
optima in difficult search spaces. Pontogammarus Maeoticus Swarm Optimization
(PMSO) is a metaheuristic algorithm imitating aquatic nature and foraging
behavior. Pontogammarus Maeoticus, also called Gammarus in short, is a tiny
creature found mostly on the coast of the Caspian Sea in Iran. In this algorithm,
the global optimum is modeled as the sea edge (coast) to which Gammarus creatures
are willing to move in order to rest from sea waves and forage in the sand. Sea
waves provide exploration, while foraging models exploitation. The strength of a
sea wave is determined according to the distance of the Gammarus from the sea
edge. The angles of the waves applied to the particles are set randomly, helping
the algorithm avoid getting stuck in local optima. Meanwhile, the neighborhood of
each particle changes adaptively, resulting in more efficient search progress.
The proposed algorithm, although applicable to any optimization problem, is
evaluated on a partially shaded solar PV array. Experiments on CEC05 benchmarks, as well
as solar PV array, show the effectiveness of this optimization algorithm.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 04:36:18 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Ghojogh",
"Benyamin",
""
],
[
"Sharifian",
"Saeed",
""
]
] |
new_dataset
| 0.995491 |
1807.01855
|
Haitao Liu
|
Shuiyuan Yu, Chunshan Xu, Haitao Liu
|
Zipf's law in 50 languages: its structural pattern, linguistic
interpretation, and cognitive motivation
|
18 pages, 3 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Zipf's law has been found in many human-related fields, including language,
where the frequency of a word is persistently found to be a power-law function of
its frequency rank. However, there is much dispute over whether
it is a universal law or a statistical artifact, and little is known about what
mechanisms may have shaped it. To answer these questions, this study conducted
a large scale cross language investigation into Zipf's law. The statistical
results show that Zipf's laws in 50 languages all share a 3-segment structural
pattern, with each segment demonstrating distinctive linguistic properties and
the lower segment invariably bending downwards to deviate from theoretical
expectation. This finding indicates that this deviation is a fundamental and
universal feature of word frequency distributions in natural languages, not the
statistical error of low frequency words. A computer simulation based on the
dual-process theory yields Zipf's law with the same structural pattern,
suggesting that Zipf's law of natural languages are motivated by common
cognitive mechanisms. These results show that Zipf's law in languages is
motivated by cognitive mechanisms like dual-processing that govern human verbal
behaviors.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 06:03:39 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Yu",
"Shuiyuan",
""
],
[
"Xu",
"Chunshan",
""
],
[
"Liu",
"Haitao",
""
]
] |
new_dataset
| 0.990336 |
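Zipf's law, as discussed in the record above, says word frequency decays roughly as a power of frequency rank, f(r) ∝ r^(-α). A minimal sketch that estimates α by a least-squares fit in log-log space; this is a generic fitting method, not the three-segment analysis performed in the paper:

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Estimate the Zipf exponent alpha from a list of word tokens."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    # Fit log f = -alpha * log r + c by ordinary least squares.
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
    return -slope

toy_corpus = ("the quick brown fox jumps over the lazy dog near the old dog " * 50).split()
print(round(zipf_exponent(toy_corpus), 3))
```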
1807.01857
|
Mohammad Masudur Rahman
|
Mohammad Masudur Rahman, Shamima Yeasmin and Chanchal K. Roy
|
An IDE-Based Context-Aware Meta Search Engine
|
20th Working Conference on Reverse Engineering (WCRE 2013), Koblenz,
Germany, October 2013, pp. 467--471
| null |
10.1109/WCRE.2013.6671324
| null |
cs.SE cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional web search forces the developers to leave their working
environments and look for solutions in the web browsers. It often does not
consider the context of their programming problems. The context-switching
between the web browser and the working environment is time-consuming and
distracting, and the keyword-based traditional search often does not help much
in problem solving. In this paper, we propose an Eclipse IDE-based web search
solution that collects the data from three web search APIs-- Google, Yahoo,
Bing and a programming Q & A site-- Stack Overflow. It then provides search
results within IDE taking not only the content of the selected error into
account but also the problem context, popularity and search engine
recommendation of the result links. Experiments with 25 run time errors and
exceptions show that the proposed approach outperforms the keyword-based search
approaches with a recommendation accuracy of 96%. We also validate the results
with a user study involving five prospective participants where we get a result
agreement of 64.28%. While the preliminary results are promising, the approach
needs to be further validated with more errors and exceptions followed by a
user study with more participants to establish itself as a complete IDE-based
web search solution.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 06:05:46 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Rahman",
"Mohammad Masudur",
""
],
[
"Yeasmin",
"Shamima",
""
],
[
"Roy",
"Chanchal K.",
""
]
] |
new_dataset
| 0.999526 |
1807.01868
|
TonTon Huang
|
TonTon Hsien-De Huang
|
Hunting the Ethereum Smart Contract: Color-inspired Inspection of
Potential Attacks
|
2018/07/04 Draft Version
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain and Cryptocurrencies are gaining unprecedented popularity and
understanding. Meanwhile, Ethereum is gaining a significant popularity in the
blockchain community, mainly due to the fact that it is designed in a way that
enables developers to write smart contract and decentralized applications
(Dapps). This new paradigm of applications opens the door to many possibilities
and opportunities. However, the security of Ethereum smart contracts has not
received much attention; several incidents of Ethereum smart contracts
malfunctioning have recently been reported. Unlike many previous works that have applied static and
dynamic analyses to find bugs in smart contracts, we do not attempt to define
and extract any features; instead we focus on reducing the expert's labor
costs. We first present a new in-depth analysis methodology for potential
attacks and then translate Solidity bytecode into RGB color codes.
After that, we transform them to a fixed-sized encoded image. Finally, the
encoded image is fed to convolutional neural network (CNN) for automatic
feature extraction and learning, detecting compiler bugs of Ethereum smart
contract.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 07:06:36 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Huang",
"TonTon Hsien-De",
""
]
] |
new_dataset
| 0.996909 |
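The pipeline in the record above maps contract bytecode to RGB pixels and then to a fixed-size image for a CNN. A minimal, hypothetical version of that encoding step (the zero-padding/truncation policy and the 64x64 image size are assumptions, not the paper's exact procedure):

```python
import numpy as np

def bytecode_to_rgb_image(hex_bytecode, size=64):
    """Encode EVM bytecode (hex string) as a size x size RGB image.

    Consecutive bytes are grouped into (R, G, B) triples; the pixel buffer is
    zero-padded or truncated to exactly size * size pixels.
    """
    raw = bytes.fromhex(hex_bytecode.removeprefix("0x"))
    buf = np.zeros(size * size * 3, dtype=np.uint8)
    data = np.frombuffer(raw, dtype=np.uint8)[: buf.size]
    buf[: data.size] = data
    return buf.reshape(size, size, 3)

# Tiny illustrative bytecode fragment (not a real deployed contract).
img = bytecode_to_rgb_image("0x6080604052348015600f57600080fd5b50")
print(img.shape, img.dtype)   # (64, 64, 3) uint8
```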
1807.01869
|
EPTCS
|
Carlo Angiuli, Evan Cavallo, Kuen-Bang Hou (Favonia), Robert Harper,
Jonathan Sterling
|
The RedPRL Proof Assistant (Invited Paper)
|
In Proceedings LFMTP 2018, arXiv:1807.01352
|
EPTCS 274, 2018, pp. 1-10
|
10.4204/EPTCS.274.1
| null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RedPRL is an experimental proof assistant based on Cartesian cubical
computational type theory, a new type theory for higher-dimensional
constructions inspired by homotopy type theory. In the style of Nuprl, RedPRL
users employ tactics to establish behavioral properties of cubical functional
programs embodying the constructive content of proofs. Notably, RedPRL
implements a two-level type theory, allowing an extensional, proof-irrelevant
notion of exact equality to coexist with a higher-dimensional proof-relevant
notion of paths.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 07:08:44 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Angiuli",
"Carlo",
"",
"Favonia"
],
[
"Cavallo",
"Evan",
"",
"Favonia"
],
[
"Hou",
"Kuen-Bang",
"",
"Favonia"
],
[
"Harper",
"Robert",
""
],
[
"Sterling",
"Jonathan",
""
]
] |
new_dataset
| 0.961266 |
1807.01884
|
Bingwang Zhang
|
Qi Yuan and Bingwang Zhang and Haojie Li and Zhihui Wang and Zhongxuan
Luo
|
A Single Shot Text Detector with Scale-adaptive Anchors
|
8 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, most top-performing text detection networks tend to employ
fixed-size anchor boxes to guide the search for text instances. They usually
rely on a large number of anchors with different scales to discover texts in
scene images, thus leading to high computational cost. In this paper, we
propose an end-to-end box-based text detector with scale-adaptive anchors,
which can dynamically adjust the scales of anchors according to the sizes of
underlying texts by introducing an additional scale regression layer. The
proposed scale-adaptive anchors allow us to use a small number of anchors to
handle multi-scale texts and therefore significantly improve the computational
efficiency. Moreover, compared to discrete scales used in previous methods, the
learned continuous scales are more reliable, especially for small texts
detection. Additionally, we propose Anchor convolution to better exploit
necessary feature information by dynamically adjusting the sizes of receptive
fields according to the learned scales. Extensive experiments demonstrate that
the proposed detector is fast, taking only $0.28$ second per image, while
outperforming most state-of-the-art methods in accuracy.
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 07:48:18 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Yuan",
"Qi",
""
],
[
"Zhang",
"Bingwang",
""
],
[
"Li",
"Haojie",
""
],
[
"Wang",
"Zhihui",
""
],
[
"Luo",
"Zhongxuan",
""
]
] |
new_dataset
| 0.969252 |
1807.01980
|
Regio Michelin
|
Regio A. Michelin and Ali Dorri and Roben C. Lunardi and Marco Steger
and Salil S. Kanhere and Raja Jurdak and Avelino F. Zorzo
|
SpeedyChain: A framework for decoupling data from blockchain for smart
cities
|
10 pages
| null | null | null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is increased interest in smart vehicles acting as both data consumers
and producers in smart cities. Vehicles can use smart city data for
decision-making, such as dynamic routing based on traffic conditions. Moreover,
the multitude of embedded sensors in vehicles can collectively produce a rich
data set of the urban landscape that can be used to provide a range of
services. Key to the success of this vision is a scalable and private
architecture for trusted data sharing. This paper proposes a framework called
SpeedyChain, that leverages blockchain technology to allow smart vehicles to
share their data while maintaining privacy, integrity, resilience and
non-repudiation in a decentralized, and tamper-resistant manner. Differently
from traditional blockchain usage (e.g., Bitcoin and Ethereum), the proposed
framework uses a blockchain design that decouples the data stored in the
transactions from the block header, thus allowing for fast addition of data to
the blocks. Furthermore, an expiration time for each block is proposed to avoid
oversized blocks. This paper also presents an evaluation of the proposed
framework in a network emulator to demonstrate its benefits.
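To make the header/data decoupling concrete, here is a small illustrative sketch, assuming a block that fixes its header at creation, appends transactions afterwards, and expires after a set lifetime; it is not the SpeedyChain implementation:

```python
# Illustrative sketch (assumed structure, not the SpeedyChain codebase) of a block
# whose header is fixed at creation while transactions are appended afterwards,
# and which expires after a fixed lifetime to keep blocks from growing too large.
import hashlib, time

class Block:
    def __init__(self, device_pubkey, prev_header_hash, lifetime_s=60.0):
        self.created = time.time()
        self.lifetime_s = lifetime_s
        self.header = f"{device_pubkey}|{prev_header_hash}|{self.created}"
        self.transactions = []                      # data kept outside the header

    def header_hash(self):
        return hashlib.sha256(self.header.encode()).hexdigest()

    def expired(self):
        return time.time() - self.created > self.lifetime_s

    def append(self, tx):
        # Fast path: appending data does not touch (or re-hash) the header.
        if self.expired():
            raise RuntimeError("block expired; a new block must be created")
        self.transactions.append(tx)

# Usage: one block per vehicle key, with transactions added as data is produced.
genesis = Block("vehicle-key-123", "0" * 64)
genesis.append({"speed_kmh": 42, "t": time.time()})
print(genesis.header_hash(), len(genesis.transactions))
```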
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 13:12:02 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Michelin",
"Regio A.",
""
],
[
"Dorri",
"Ali",
""
],
[
"Lunardi",
"Roben C.",
""
],
[
"Steger",
"Marco",
""
],
[
"Kanhere",
"Salil S.",
""
],
[
"Jurdak",
"Raja",
""
],
[
"Zorzo",
"Avelino F.",
""
]
] |
new_dataset
| 0.985028 |
1807.02009
|
Sanaa Sharafeddine
|
Sanaa Sharafeddine and Rania Islambouli
|
On-Demand Deployment of Multiple Aerial Base Stations for Traffic
Offloading and Network Recovery
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) are being utilized for a wide spectrum of
applications in wireless networks leading to attractive business opportunities.
In the case of abrupt disruption to existing cellular network operation or
infrastructure, e.g., due to an unexpected surge in user demand or a natural
disaster, UAVs can be deployed to provide instant recovery via temporary
wireless coverage in designated areas. A major challenge is to determine
efficiently how many UAVs are needed and where to position them in a relatively
large 3D search space. To this end, we formulate the problem of 3D deployment
of a fleet of UAVs as a mixed integer linear program, and present a greedy
approach that mimics the optimal behavior assuming a grid composed of a finite
set of possible UAV locations. In addition, we propose and evaluate a novel low
complexity algorithm for multiple UAV deployment in a continuous 3D space,
based on an unsupervised learning technique that relies on the notion of
electrostatics with repulsion and attraction forces. We present performance
results for the proposed algorithm as a function of various system parameters
and demonstrate its effectiveness compared to the close-to-optimal greedy
approach and its superiority compared to recent related work from the
literature.
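A rough sketch of the attraction/repulsion placement idea, with assumed force definitions and coverage radius; the paper's actual algorithm and parameters may differ:

```python
# Rough sketch (assumed force definitions, not the paper's exact algorithm) of
# placing UAVs in continuous 3D space with attraction toward the users each UAV
# currently covers and repulsion between UAVs, iterated until the layout settles.
import numpy as np

rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(200, 2))          # ground users on a 1 km square
uavs = np.column_stack([rng.uniform(0, 1000, (3, 2)), np.full(3, 100.0)])  # x, y, z

for _ in range(200):
    forces = np.zeros_like(uavs)
    for i, u in enumerate(uavs):
        # Attraction: pull the UAV toward the centroid of nearby (covered) users.
        d = np.linalg.norm(users - u[:2], axis=1)
        covered = users[d < 300.0]                     # assumed coverage radius
        if len(covered):
            forces[i, :2] += 0.05 * (covered.mean(axis=0) - u[:2])
        # Repulsion: push UAVs apart so they do not serve the same area.
        for j, v in enumerate(uavs):
            if i != j:
                diff = u - v
                forces[i] += 5000.0 * diff / (np.linalg.norm(diff) ** 3 + 1e-9)
    uavs += forces                                     # gradient-like update step

print(np.round(uavs, 1))
```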
|
[
{
"version": "v1",
"created": "Thu, 5 Jul 2018 13:56:58 GMT"
}
] | 2018-07-06T00:00:00 |
[
[
"Sharafeddine",
"Sanaa",
""
],
[
"Islambouli",
"Rania",
""
]
] |
new_dataset
| 0.990082 |
1507.01988
|
Abuzer Yakaryilmaz
|
Andris Ambainis and Abuzer Yakary{\i}lmaz
|
Automata and Quantum Computing
|
33 pages. A revised and updated version (June 2018). To appear in
Automata: From Mathematics to Applications edited by Jean-\'Eric Pin
| null | null | null |
cs.FL cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum computing is a new model of computation, based on quantum physics.
Quantum computers can be exponentially faster than conventional computers for
problems such as factoring. Besides full-scale quantum computers, more
restricted models such as quantum versions of finite automata have been
studied. In this paper, we survey various models of quantum finite automata and
their properties. We also provide some open questions and new directions for
researchers.
Keywords: quantum finite automata, probabilistic finite automata,
nondeterminism, bounded error, unbounded error, state complexity, decidability
and undecidability, computational complexity
|
[
{
"version": "v1",
"created": "Tue, 7 Jul 2015 23:40:48 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jul 2018 10:21:57 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Ambainis",
"Andris",
""
],
[
"Yakaryılmaz",
"Abuzer",
""
]
] |
new_dataset
| 0.999425 |
1703.04088
|
Yang Zhao
|
Yang Zhao, Ronggang Wang, Wei Jia, Jianchao Yang, Wenmin Wang, Wen Gao
|
Local Patch Encoding-Based Method for Single Image Super-Resolution
|
20 pages, 8 figures
|
Y. Zhao, R. Wang, W. Jia, J. Yang, W. Wang , W. Gao, Local patch
encoding-based method for single image super-resolution, Information
Sciences, vol.433, pp.292-305, 2018
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent learning-based super-resolution (SR) methods often focus on dictionary
learning or network training. In this paper, we discuss in detail a new SR
method based on local patch encoding (LPE) instead of traditional dictionary
learning. The proposed method consists of a learning stage and a reconstructing
stage. In the learning stage, image patches are classified into different
classes by means of the proposed LPE, and then a projection matrix is computed
for each class by utilizing a simple constraint. In the reconstructing stage,
an input LR patch can be simply reconstructed by computing its LPE code and
then multiplying it by the corresponding projection matrix. Furthermore, we discuss
the relationship between the proposed method and the anchored neighborhood
regression methods; we also analyze the extendibility of the proposed method.
The experimental results on several image sets demonstrate the effectiveness of
the LPE-based methods.
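A simplified sketch of the learn-then-reconstruct pipeline, with a toy stand-in for the LPE code and ridge-regression projection matrices; it is meant only to illustrate the structure of the method, not to reproduce the paper's encoding:

```python
# Simplified sketch of the two-stage idea (toy stand-ins, not the paper's exact
# LPE definition): patches are grouped by a cheap code, one linear projection is
# fit per group by regularized least squares, and reconstruction is a single
# code lookup plus a matrix multiply.
import numpy as np

def toy_code(lr_patch, n_classes=8):
    # Hypothetical stand-in for the local patch encoding: a 3-bit gradient-sign code.
    bits = (np.diff(lr_patch[:4]) > 0).astype(int)
    return int(bits.dot([1, 2, 4])) % n_classes

def learn_projections(lr_patches, hr_patches, n_classes=8, lam=0.1):
    codes = np.array([toy_code(p) for p in lr_patches])
    projections = {}
    for c in range(n_classes):
        X, Y = lr_patches[codes == c], hr_patches[codes == c]
        if len(X) == 0:
            continue
        # Ridge solution P minimising ||X P - Y||^2 + lam ||P||^2 for this class.
        P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
        projections[c] = P
    return projections

def reconstruct(lr_patch, projections):
    P = projections.get(toy_code(lr_patch))
    return lr_patch @ P if P is not None else lr_patch   # fall back for unseen classes

rng = np.random.default_rng(1)
lr = rng.normal(size=(500, 4))            # toy "LR patches" of 4 values each
hr = 2.0 * lr + 0.1                       # toy LR -> HR relationship to be learned
proj = learn_projections(lr, hr)
print(np.round(reconstruct(lr[0], proj), 3), np.round(hr[0], 3))
```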
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2017 09:47:51 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jul 2018 01:45:24 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Zhao",
"Yang",
""
],
[
"Wang",
"Ronggang",
""
],
[
"Jia",
"Wei",
""
],
[
"Yang",
"Jianchao",
""
],
[
"Wang",
"Wenmin",
""
],
[
"Gao",
"Wen",
""
]
] |
new_dataset
| 0.976588 |
1712.01493
|
Zhou Yin
|
Zhou Yin, Wei-Shi Zheng, Ancong Wu, Hong-Xing Yu, Hai Wan, Xiaowei
Guo, Feiyue Huang, Jianhuang Lai
|
Adversarial Attribute-Image Person Re-identification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While attributes have been widely used for person re-identification (Re-ID)
which aims at matching the same person images across disjoint camera views,
they are used either as extra features or for performing multi-task learning to
assist the image-image matching task. However, how to find a set of person
images according to a given attribute description, which is very practical in
many surveillance applications, remains a rarely investigated cross-modality
matching problem in person Re-ID. In this work, we present this challenge and
formulate this task as a joint space learning problem. By imposing an
attribute-guided attention mechanism for images and a semantic consistent
adversary strategy for attributes, each modality, i.e., images and attributes,
successfully learns semantically correlated concepts under the guidance of the
other. We conducted extensive experiments on three attribute datasets and
demonstrated that the proposed joint space learning method is so far the most
effective method for the attribute-image cross-modality person Re-ID problem.
|
[
{
"version": "v1",
"created": "Tue, 5 Dec 2017 06:06:32 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Feb 2018 07:57:07 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Jul 2018 16:49:39 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Yin",
"Zhou",
""
],
[
"Zheng",
"Wei-Shi",
""
],
[
"Wu",
"Ancong",
""
],
[
"Yu",
"Hong-Xing",
""
],
[
"Wan",
"Hai",
""
],
[
"Guo",
"Xiaowei",
""
],
[
"Huang",
"Feiyue",
""
],
[
"Lai",
"Jianhuang",
""
]
] |
new_dataset
| 0.999735 |
1712.09359
|
Renato Fabbri
|
Renato Fabbri
|
Basic concepts and tools for the Toki Pona minimal and constructed
language: description of the language and main issues; analysis of the
vocabulary; text synthesis and syntax highlighting; Wordnet synsets
|
Python and Vim scripts in this repository:
https://github.com/ttm/prv/
| null | null | null |
cs.CY cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A minimal constructed language (conlang) is useful for experiments and
comfortable for making tools. The Toki Pona (TP) conlang is minimal both in the
vocabulary (with only 14 letters and 124 lemmas) and in the (about) 10 syntax
rules. The language is useful as an actively used and somewhat established minimal
conlang with at least hundreds of fluent speakers. This article presents current
concepts and resources for TP, and makes available Python (and Vim) scripted
routines for the analysis of the language, synthesis of texts, syntax
highlighting schemes, and the achievement of a preliminary TP Wordnet. Focus is
on the analysis of the basic vocabulary, as corpus analyses were found. The
synthesis is based on sentence templates, relates to context by keeping track
of used words, and renders larger texts by using a fixed number of phonemes
(e.g. for poems) and number of sentences, words and letters (e.g. for
paragraphs). Syntax highlighting reflects morphosyntactic classes given in the
official dictionary and different solutions are described and implemented in
the well-established Vim text editor. The tentative TP Wordnet is made
available in three patterns of relations between synsets and word lemmas. In
summary, this text holds potentially novel conceptualizations about, and tools
and results in analyzing, synthesizing and syntax highlighting the TP language.
|
[
{
"version": "v1",
"created": "Tue, 26 Dec 2017 18:43:32 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jul 2018 14:10:05 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Jul 2018 00:18:33 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Fabbri",
"Renato",
""
]
] |
new_dataset
| 0.999207 |
1803.10369
|
Jie Xu
|
Jie Xu, Wei Ding, Jian Gong, Xiaoyan Hu
|
SRLA: A real time sliding time window super point cardinality estimation
algorithm for high speed network based on GPU
|
11 pages, 11 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A super point is a special host in a network that communicates with many
other hosts within a certain time period. The number of hosts contacting a
super point is called its cardinality. Cardinality estimation plays an
important role in network management and security. All existing works focus on
how to estimate a super point's cardinality under a discrete time window. But a
discrete time window causes a large delay, and the accuracy of the estimate
depends on where the window starts. A sliding time window, moving forward by a
small slice at a time, offers a more accurate and timely scale for monitoring a
super point's cardinality. On the other hand, estimating a super point's
cardinality under a sliding time window is more difficult because it requires
an algorithm to record cardinalities incrementally and report them immediately
at the end of each sliding duration. This paper is the first to solve this
problem, devising SRLA, an algorithm that works under a sliding time window.
SRLA records host cardinalities in a novel structure that can be updated
incrementally. In order to reduce the cardinality estimation time at the end of
every sliding time window, SRLA generates a super point candidate list while
scanning packets and calculates the cardinality only for hosts in the candidate
list. It can also run in parallel to handle high-speed networks at line speed.
This paper gives the way to deploy SRLA on a common GPU. Experiments on
real-world traffic with 40 GB/s bandwidth show that SRLA successfully estimates
super point cardinalities within 100 milliseconds under a sliding time window
when running on a low-cost Nvidia GPU, a GTX650 with 1 GB of memory. The
estimation time of SRLA is much smaller than that of other algorithms, which
consume more than 2000 milliseconds under a discrete time window.
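As a small, exact illustration of what incremental update and immediate reporting under a sliding window mean (SRLA itself uses a GPU-friendly approximate structure), consider the following sketch with assumed class and threshold names:

```python
# Tiny exact illustration (not SRLA's probabilistic GPU structure) of what
# "incremental update under a sliding window" means: per-host contact sets are
# kept per time slice, the window slides by one slice, and cardinalities of
# candidate super points are reported at the end of each slide.
from collections import defaultdict, deque

class SlidingCardinality:
    def __init__(self, window_slices=5, threshold=3):
        self.window = deque(maxlen=window_slices)   # one dict of contact sets per slice
        self.threshold = threshold                  # candidate super-point cutoff

    def new_slice(self):
        self.window.append(defaultdict(set))        # oldest slice drops out automatically

    def observe(self, src, dst):
        self.window[-1][src].add(dst)               # record one packet's contact

    def report(self):
        totals = defaultdict(set)
        for slice_ in self.window:                  # union over the current window
            for host, peers in slice_.items():
                totals[host] |= peers
        return {h: len(p) for h, p in totals.items() if len(p) >= self.threshold}

# Usage on a toy packet stream.
est = SlidingCardinality()
est.new_slice()
for dst in range(4):
    est.observe("10.0.0.1", f"peer{dst}")
est.observe("10.0.0.2", "peer0")
print(est.report())                                  # {'10.0.0.1': 4}
```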
|
[
{
"version": "v1",
"created": "Wed, 28 Mar 2018 01:10:24 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jul 2018 11:22:45 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Xu",
"Jie",
""
],
[
"Ding",
"Wei",
""
],
[
"Gong",
"Jian",
""
],
[
"Hu",
"Xiaoyan",
""
]
] |
new_dataset
| 0.99439 |
1804.10363
|
Sambaran Bandyopadhyay
|
Sambaran Bandyopadhyay, Harsh Kara, Anirban Biswas, M N Murty
|
SaC2Vec: Information Network Representation with Structure and Content
|
10 Pages, Submitted to a conference for publication
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network representation learning (also known as information network embedding)
has been the central piece of research in social and information network
analysis for the last couple of years. An information network can be viewed as
a linked structure of a set of entities. A set of linked web pages and
documents, or a set of users in a social network, are common examples of
information networks. Network embedding learns low-dimensional representations
of the nodes, which can further be used for downstream network mining
applications such as community detection or node clustering. Information
network representation techniques traditionally use only the link structure of
the network. But in real world networks, nodes come with additional content
such as textual descriptions or associated images. This content is semantically
correlated with the network structure and hence using the content along with
the topological structure of the network can facilitate the overall network
representation. In this paper, we propose Sac2Vec, a network representation
technique that exploits both the structure and content. We convert the network
into a multi-layered graph and use random walk and language modeling technique
to generate the embedding of the nodes. Our approach is simple and
computationally fast, yet able to use the content as a complement to structure
and vice-versa. We also generalize the approach for networks having multiple
types of content in each node. Experimental evaluations on four real world
publicly available datasets show the merit of our approach compared to
state-of-the-art algorithms in the domain.
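An illustrative sketch of the multi-layer random-walk step, with a hand-made structure layer and content layer; the walks would then feed a skip-gram style model, and none of the names below come from the authors' code:

```python
# Sketch (assumed layer construction, not the authors' code) of the multi-layer
# random walk: at each step the walker either follows a structure edge or a
# content-similarity edge, and the resulting node sequences can then be fed to
# any skip-gram style embedding model.
import random

structure = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}          # link structure
content   = {"a": ["c"], "b": ["c"], "c": ["a", "b"]}          # content similarity

def walk(start, length=10, p_content=0.3, rng=random.Random(0)):
    path, node = [start], start
    for _ in range(length - 1):
        layer = content if rng.random() < p_content else structure
        neighbors = layer.get(node) or structure[node]          # fall back if isolated
        node = rng.choice(neighbors)
        path.append(node)
    return path

corpus = [walk(n) for n in structure for _ in range(5)]
print(corpus[0])   # these "sentences" would be the input to a word2vec-style model
```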
|
[
{
"version": "v1",
"created": "Fri, 27 Apr 2018 07:14:59 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jul 2018 14:24:49 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Bandyopadhyay",
"Sambaran",
""
],
[
"Kara",
"Harsh",
""
],
[
"Biswas",
"Anirban",
""
],
[
"Murty",
"M N",
""
]
] |
new_dataset
| 0.99342 |
1805.01937
|
Jeffrey Shainline
|
Jeffrey M. Shainline, Adam N. McCaughan, Sonia M. Buckley, Christine
A. Donnelly, Manuel Castellanos-Beltran, Michael L. Schneider, Richard P.
Mirin, and Sae Woo Nam
|
Superconducting Optoelectronic Neurons III: Synaptic Plasticity
|
17 pages, 12 figures
| null | null | null |
cs.NE cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a means of dynamically reconfiguring the synaptic weight of a
superconducting optoelectronic loop neuron, a superconducting flux storage loop
is inductively coupled to the synaptic current bias of the neuron. A standard
flux memory cell is used to achieve a binary synapse, and loops capable of
storing many flux quanta are used to enact multi-stable synapses. Circuits are
designed to implement supervised learning wherein current pulses add or remove
flux from the loop to strengthen or weaken the synaptic weight. Designs are
presented for circuits with hundreds of intermediate synaptic weights between
minimum and maximum strengths. Circuits for implementing unsupervised learning
are modeled using two photons to strengthen and two photons to weaken the
synaptic weight via Hebbian and anti-Hebbian learning rules, and techniques are
proposed to control the learning rate. Implementation of short-term plasticity,
homeostatic plasticity, and metaplasticity in loop neurons is discussed.
|
[
{
"version": "v1",
"created": "Fri, 4 May 2018 21:06:40 GMT"
},
{
"version": "v2",
"created": "Tue, 8 May 2018 17:11:51 GMT"
},
{
"version": "v3",
"created": "Tue, 15 May 2018 19:57:45 GMT"
},
{
"version": "v4",
"created": "Tue, 3 Jul 2018 22:41:53 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Shainline",
"Jeffrey M.",
""
],
[
"McCaughan",
"Adam N.",
""
],
[
"Buckley",
"Sonia M.",
""
],
[
"Donnelly",
"Christine A.",
""
],
[
"Castellanos-Beltran",
"Manuel",
""
],
[
"Schneider",
"Michael L.",
""
],
[
"Mirin",
"Richard P.",
""
],
[
"Nam",
"Sae Woo",
""
]
] |
new_dataset
| 0.987588 |
1805.05727
|
Tae Joon Jun
|
Tae Joon Jun, Dohyeun Kim, Hoang Minh Nguyen, Daeyoung Kim, Youngsub
Eom
|
2sRanking-CNN: A 2-stage ranking-CNN for diagnosis of glaucoma from
fundus images using CAM-extracted ROI as an intermediate input
|
Accepted at BMVC 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Glaucoma is a disease in which the optic nerve is chronically damaged by the
elevation of the intra-ocular pressure, resulting in visual field defects.
Therefore, it is important to monitor and treat suspected patients before they
are confirmed with glaucoma. In this paper, we propose a 2-stage ranking-CNN
that classifies fundus images as normal, suspicious, and glaucoma. Furthermore,
we propose a method of using the class activation map as a mask filter and
combining it with the original fundus image as an intermediate input. Our
results have improved the average accuracy by about 10% over the existing
3-class CNN and ranking-CNN, and especially improved the sensitivity of the
suspicious class by more than 20% over the 3-class CNN. In addition, the extracted
ROI was also found to overlap with the diagnostic criteria of the physician.
The method we propose is expected to be efficiently applied to any medical data
where there is a suspicious condition between normal and disease.
|
[
{
"version": "v1",
"created": "Tue, 15 May 2018 12:27:00 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jul 2018 05:56:39 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Jun",
"Tae Joon",
""
],
[
"Kim",
"Dohyeun",
""
],
[
"Nguyen",
"Hoang Minh",
""
],
[
"Kim",
"Daeyoung",
""
],
[
"Eom",
"Youngsub",
""
]
] |
new_dataset
| 0.979364 |
1807.01401
|
Henry Kvinge
|
Elin Farnell, Henry Kvinge, Michael Kirby, Chris Peterson
|
Endmember Extraction on the Grassmannian
|
To appear in Proceedings of the 2018 IEEE Data Science Workshop,
Lausanne, Switzerland
| null | null | null |
cs.CV cs.LG eess.IV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Endmember extraction plays a prominent role in a variety of data analysis
problems as endmembers often correspond to data representing the purest or best
representative of some feature. Identifying endmembers then can be useful for
further identification and classification tasks. In settings with
high-dimensional data, such as hyperspectral imagery, it can be useful to
consider endmembers that are subspaces as they are capable of capturing a wider
range of variations of a signature. The endmember extraction problem in this
setting thus translates to finding the vertices of the convex hull of a set of
points on a Grassmannian. In the presence of noise, it can be less clear
whether a point should be considered a vertex. In this paper, we propose an
algorithm to extract endmembers on a Grassmannian, identify subspaces of
interest that lie near the boundary of a convex hull, and demonstrate the use
of the algorithm on a synthetic example and on the 220 spectral band AVIRIS
Indian Pines hyperspectral image.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 23:35:47 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Farnell",
"Elin",
""
],
[
"Kvinge",
"Henry",
""
],
[
"Kirby",
"Michael",
""
],
[
"Peterson",
"Chris",
""
]
] |
new_dataset
| 0.982104 |
1807.01410
|
Pavol Hell
|
Tomas Feder, Pavol Hell, and Carlos Subi
|
Distance-Two Colorings of Barnette Graphs
|
Expanded version of CCCG 2018 paper
| null | null | null |
cs.CG cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Barnette identified two interesting classes of cubic polyhedral graphs for
which he conjectured the existence of a Hamiltonian cycle. Goodey proved the
conjecture for the intersection of the two classes. We examine these classes
from the point of view of distance-two colorings. A distance-two $r$-coloring
of a graph $G$ is an assignment of $r$ colors to the vertices of $G$ so that
any two vertices at distance at most two have different colors. Note that a
cubic graph needs at least four colors. The distance-two four-coloring problem
for cubic planar graphs is known to be NP-complete. We claim the problem
remains NP-complete for tri-connected bipartite cubic planar graphs, which we
call type-one Barnette graphs, since they are the first class identified by
Barnette. By contrast, we claim the problem is polynomial for cubic plane
graphs with face sizes $3, 4, 5,$ or $6$, which we call type-two Barnette
graphs, because of their relation to Barnette's second conjecture. We call
Goodey graphs those type-two Barnette graphs all of whose faces have size $4$
or $6$. We fully describe all Goodey graphs that admit a distance-two
four-coloring, and characterize the remaining type-two Barnette graphs that
admit a distance-two four-coloring according to their face size.
For quartic plane graphs, the analogue of type-two Barnette graphs are graphs
with face sizes $3$ or $4$. For this class, the distance-two four-coloring
problem is also polynomial; in fact, we can again fully describe all colorable
instances -- there are exactly two such graphs.
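A short checker for the distance-two coloring definition used above (not taken from the paper), together with the complete graph $K_4$ as a small cubic example:

```python
# Small checker for the definition given above (not from the paper): a coloring
# is a valid distance-two coloring if any two vertices at distance at most two
# receive different colors.
def is_distance_two_coloring(adj, color):
    for u in adj:
        # Vertices within distance two of u: its neighbours and their neighbours.
        near = (set(adj[u]) | {w for v in adj[u] for w in adj[v]}) - {u}
        if any(color[u] == color[v] for v in near):
            return False
    return True

# K4 is cubic, planar and 3-connected; every pair of its vertices is within
# distance two, so a valid distance-two coloring must use all four colors.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(is_distance_two_coloring(k4, {0: 0, 1: 1, 2: 2, 3: 3}))   # True
print(is_distance_two_coloring(k4, {0: 0, 1: 1, 2: 2, 3: 0}))   # False
```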
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 00:33:52 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Feder",
"Tomas",
""
],
[
"Hell",
"Pavol",
""
],
[
"Subi",
"Carlos",
""
]
] |
new_dataset
| 0.998037 |
1807.01431
|
Naveen Kumar Macha
|
Naveen Kumar Macha, Sandeep Geedipally, Bhavana Repalle, Md Arif
Iqbal, Wafi Danesh, Mostafizur Rahman
|
Crosstalk based Fine-Grained Reconfiguration Techniques for Polymorphic
Circuits
|
7 pages, 6 figures, 2 tables, Nanoarch 2018
| null | null | null |
cs.AR cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Truly polymorphic circuits, whose functionality/circuit behavior can be
altered using a control variable, can provide tremendous benefits in
multi-functional system design and resource sharing. For secure and fault
tolerant hardware designs these can be crucial as well. Polymorphic circuit
works in the literature so far rely either on environmental parameters such as
temperature and variation, or on special devices such as ambipolar FETs and
configurable magnetic devices, which often result in inefficiencies in
performance and/or realization. In this paper, we introduce a novel polymorphic
circuit design approach where deterministic interference between nano-metal
lines is leveraged for logic computing and configuration. For computing, the
proposed approach relies on nano-metal lines, their interference and commonly
used FETs, and for polymorphism, it requires only an extra metal line that
carries the control signal. In this paper, we show a wide range of crosstalk
polymorphic (CT-P) logic gates and their evaluation results. We also show an
example of a large circuit that performs both the functionalities of multiplier
and sorter depending on the configuration signal. Our benchmarking results are
presented in this paper. For CT-P, the transistor count was found to be
significantly less compared to other existing approaches, ranging from 25% to
83%. For example, the CT-P AOI21-OA21 cell shows 83%, 85% and 50% transistor count
reduction, and the MultiplierSorter circuit shows 40%, 36% and 28% transistor count
reduction with respect to CMOS, genetically evolved, and ambipolar transistor
based polymorphic circuits respectively.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 02:43:33 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Macha",
"Naveen Kumar",
""
],
[
"Geedipally",
"Sandeep",
""
],
[
"Repalle",
"Bhavana",
""
],
[
"Iqbal",
"Md Arif",
""
],
[
"Danesh",
"Wafi",
""
],
[
"Rahman",
"Mostafizur",
""
]
] |
new_dataset
| 0.99644 |
1807.01451
|
Pengcheng Qiu
|
Xiaocheng Liu, Qifan Zhang, Pengcheng Qiu, Jiajie Tong, Huazi Zhang,
Changyong Zhao, Jun Wang
|
A 5.16Gbps decoder ASIC for Polar Code in 16nm FinFET
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes have been selected for the 5G standard. However, only a couple of
ASICs featuring polar decoders have been fabricated, and none of them supports
list sizes L > 4 and code lengths N > 1024. This paper presents an ASIC
implementation of three decoders for polar codes: a successive cancellation (SC)
decoder, a flexible decoder, and an ultra-reliable decoder. These decoders are
all SC-based, supporting list sizes up to 1, 8, and 32 and code lengths up to
2^15, 2^14, and 2^11, respectively. The chip is fabricated in a 16nm TSMC FinFET
technology and can be clocked at 1 GHz. Optimization techniques are proposed and
employed to increase throughput. Experimental results show that the throughput
reaches up to 5.16 Gbps. Compared with fabricated ASIC decoders and synthesized
decoders in the literature, the flexible decoder achieves higher area efficiency.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 05:20:42 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Liu",
"Xiaocheng",
""
],
[
"Zhang",
"Qifan",
""
],
[
"Qiu",
"Pengcheng",
""
],
[
"Tong",
"Jiajie",
""
],
[
"Zhang",
"Huazi",
""
],
[
"Zhao",
"Changyong",
""
],
[
"Wang",
"Jun",
""
]
] |
new_dataset
| 0.998936 |
1807.01569
|
Michael Schmitt
|
Michael Schmitt and Lloyd Haydn Hughes and Xiao Xiang Zhu
|
The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion
|
accepted for publication in the ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences (online from October 2018)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While deep learning techniques have an increasing impact on many technical
fields, gathering sufficient amounts of training data is a challenging problem
in remote sensing. In particular, this holds for applications involving data
from multiple sensors with heterogeneous characteristics. One example for that
is the fusion of synthetic aperture radar (SAR) data and optical imagery. With
this paper, we publish the SEN1-2 dataset to foster deep learning research in
SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image
patches, collected from across the globe and throughout all meteorological
seasons. Besides a detailed description of the dataset, we show exemplary
results for several possible applications, such as SAR image colorization,
SAR-optical image matching, and creation of artificial optical images from SAR
input data. Since SEN1-2 is the first large open dataset of this kind, we
believe it will support further developments in the field of deep learning for
remote sensing as well as multi-sensor data fusion.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 13:29:14 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Schmitt",
"Michael",
""
],
[
"Hughes",
"Lloyd Haydn",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.999391 |
1807.01577
|
Mario Corsolini
|
Mario Corsolini and Andrea Carta
|
VideoKifu, or the automatic transcription of a Go game
|
14 pages, 6 figures. Accepted for the "International Conference on
Research in Mind Games" (August 7-8, 2018) at the EGC in Pisa, Italy.
Datasets available from http://www.oipaz.net/VideoKifu.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In two previous papers [arXiv:1508.03269, arXiv:1701.05419] we described the
techniques we employed for reconstructing the whole move sequence of a Go game.
That task was at first accomplished by means of a series of photographs,
manually shot, as explained during the scientific conference held within the
LIX European Go Congress (Liberec, CZ). The photographs were subsequently
replaced by a possibly unattended video live stream (provided by webcams,
videocameras, smartphones and so on) or, were the live stream not available, by
means of a pre-recorded video of the game itself, on condition that the goban
and the stones were clearly visible more often than not. As we hinted in the
latter paper, in the last two years we have improved both the algorithms
employed for reconstructing the grid and detecting the stones, making extensive
usage of the multicore capabilities offered by modern CPUs. Those capabilities
prompted us to develop some asynchronous routines, capable of double-checking
the position of the grid and the number and colour of any stone previously
detected, in order to get rid of minor errors possibly occurred during the main
analysis, and that may pass undetected especially in the course of an
unattended live streaming. Those routines will be described in details, as they
address some problems that are of general interest when reconstructing the move
sequence, for example what to do when large movements of the whole goban occur
(deliberate or not) and how to deal with captures of dead stones, which could
be wrongly detected and recorded as "fresh" moves if not promptly removed.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 13:52:10 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Corsolini",
"Mario",
""
],
[
"Carta",
"Andrea",
""
]
] |
new_dataset
| 0.998295 |
1807.01599
|
Satoshi Takabe
|
Satoshi Takabe, Tadashi Wadayama, and Masahito Hayashi
|
Asymptotic Analysis of Spatial Coupling Coding for Compute-and-Forward
Relaying
|
8 pages, 6 figures. arXiv admin note: substantial text overlap with
arXiv:1801.06328
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compute-and-forward (CAF) relaying is effective to increase bandwidth
efficiency of wireless two-way relay channels. In a CAF scheme, a relay is
designed to decode a linear combination composed of transmitted messages from
other terminals or relays. Design for error-correcting codes and its decoding
algorithms suitable for CAF relaying schemes remain as an important issue to be
studied. As described in this paper, we will present an asymptotic performance
analysis of LDPC codes over two-way relay channels based on density evolution
(DE). Because of the asymmetric characteristics of the channel, we use the
population dynamics DE combined with DE formulas for asymmetric channels to
obtain BP thresholds. Additionally, we also evaluate the asymptotic performance
of spatially coupled LDPC codes for two-way relay channels. The results
indicate that the spatial coupling codes yield improvements in the BP threshold
compared with corresponding uncoupled codes for two-way relay channels.
Finally, we compare the mutual information rate and rate achievability
of the CAF scheme and the MAC separation decoding scheme. We demonstrate
the possibility that the CAF scheme has higher reliability in the high-rate
region.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 13:02:10 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Takabe",
"Satoshi",
""
],
[
"Wadayama",
"Tadashi",
""
],
[
"Hayashi",
"Masahito",
""
]
] |
new_dataset
| 0.992857 |
1807.01624
|
Vladimir Kiriansky
|
Vladimir Kiriansky, Haoran Xu, Martin Rinard, Saman Amarasinghe
|
Cimple: Instruction and Memory Level Parallelism
|
To appear in PACT'18
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern out-of-order processors have increased capacity to exploit instruction
level parallelism (ILP) and memory level parallelism (MLP), e.g., by using wide
superscalar pipelines and vector execution units, as well as deep buffers for
in-flight memory requests. These resources, however, often exhibit poor
utilization rates on workloads with large working sets, e.g., in-memory
databases, key-value stores, and graph analytics, as compilers and hardware
struggle to expose ILP and MLP from the instruction stream automatically.
In this paper, we introduce the IMLP (Instruction and Memory Level
Parallelism) task programming model. IMLP tasks execute as coroutines that
yield execution at annotated long-latency operations, e.g., memory accesses,
divisions, or unpredictable branches. IMLP tasks are interleaved on a single
thread, and integrate well with thread parallelism and vectorization. Our DSL
embedded in C++, Cimple, allows exploration of task scheduling and
transformations, such as buffering, vectorization, pipelining, and prefetching.
We demonstrate state-of-the-art performance on core algorithms used in
in-memory databases that operate on arrays, hash tables, trees, and skip lists.
Cimple applications reach 2.5x throughput gains over hardware multithreading on
a multi-core, and 6.4x single thread speedup.
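The coroutine interleaving pattern can be illustrated with Python generators; this is only the control-flow idea, not Cimple's C++ DSL syntax, and the hash-table example is hypothetical:

```python
# Illustration of the IMLP scheduling idea using Python generators (this is not
# Cimple's C++ DSL, just the control-flow pattern): each task yields at a point
# that would be a long-latency memory access, and a round-robin scheduler
# interleaves many tasks on one thread so their "misses" overlap.
from collections import deque

def probe(table, key):
    # Task: hash-table probe. The yield marks where a prefetch would be issued
    # and the cache miss would be overlapped with other tasks' work.
    slot = hash(key) % len(table)
    yield                       # "wait" for table[slot] to arrive from memory
    bucket = table[slot]
    return key in bucket

def run_interleaved(tasks):
    ready, results = deque(enumerate(tasks)), {}
    while ready:
        i, t = ready.popleft()
        try:
            next(t)             # advance the coroutine to its next yield point
            ready.append((i, t))
        except StopIteration as done:
            results[i] = done.value
    return results

table = [set() for _ in range(8)]
for k in ("apple", "pear", "plum"):
    table[hash(k) % 8].add(k)
lookups = [probe(table, k) for k in ("apple", "kiwi", "plum")]
print(run_interleaved(lookups))   # {0: True, 1: False, 2: True}
```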
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 14:50:13 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Kiriansky",
"Vladimir",
""
],
[
"Xu",
"Haoran",
""
],
[
"Rinard",
"Martin",
""
],
[
"Amarasinghe",
"Saman",
""
]
] |
new_dataset
| 0.982376 |
1807.01633
|
Rusheng Zhang
|
Rusheng Zhang, Frank Schmutz, Kyle Gerard, Aur\'elien Pomini, Louis
Basseto, Sami Ben Hassen, Akihiro Ishikawa, Inci Ozgunes, Ozan Tonguz
|
Virtual Traffic Lights: System Design and Implementation
|
5 pages, 7 figures Accepted by Vehicular Technology Conference 2018
(2018)
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic congestion is a daunting problem that is affecting the daily lives of
billions of people across the world. Recently, a promising new traffic control
scheme known as Virtual Traffic Lights (VTL) has been proposed for mitigating
traffic congestion. VTL is an infrastructure free traffic control scheme that
leverages the presence of Vehicle-to-Vehicle (V2V) communications. Such
infrastructure free scheme has several benefits, such as alleviating traffic
congestion; reducing the large cost of traffic lights and traffic control
systems; and reducing carbon emissions. This paper reports a prototype design
effort on VTL using Dedicated Short Range Communications (DSRC) technology.
The experiments performed show the feasibility of
implementing VTL with DSRC technology. Preliminary results of the field tests
conducted in Pittsburgh with vehicles using VTL equipment indicate that VTL is
capable of coordinating traffic at intersections and reducing the commute time
of people.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 15:16:59 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Zhang",
"Rusheng",
""
],
[
"Schmutz",
"Frank",
""
],
[
"Gerard",
"Kyle",
""
],
[
"Pomini",
"Aurélien",
""
],
[
"Basseto",
"Louis",
""
],
[
"Hassen",
"Sami Ben",
""
],
[
"Ishikawa",
"Akihiro",
""
],
[
"Ozgunes",
"Inci",
""
],
[
"Tonguz",
"Ozan",
""
]
] |
new_dataset
| 0.999355 |
1807.01679
|
Sreekavitha Parupalli
|
Sreekavitha Parupalli, Vijjini Anvesh Rao and Radhika Mamidi
|
BCSAT : A Benchmark Corpus for Sentiment Analysis in Telugu Using
Word-level Annotations
|
Accepted as Long Paper at Student Research Workshop in 56th Annual
Meeting of the Association for Computational Linguistics, ACL-2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The presented work aims at generating a systematically annotated corpus that
can support the enhancement of sentiment analysis tasks in Telugu using
word-level sentiment annotations. From OntoSenseNet, we extracted 11,000
adjectives, 253 adverbs, and 8,483 verbs, and sentiment annotation is carried
out by language experts. We discuss the methodology followed for the polarity
annotations and validate the developed resource. This work aims at developing a
benchmark corpus, as an extension to SentiWordNet, and baseline accuracy for a
model where lexeme annotations are applied for sentiment predictions. The
fundamental aim of this paper is to validate and study the possibility of
utilizing machine learning algorithms, word-level sentiment annotations in the
task of automated sentiment identification. Furthermore, accuracy is improved
by annotating the bi-grams extracted from the target corpus.
|
[
{
"version": "v1",
"created": "Wed, 4 Jul 2018 16:56:50 GMT"
}
] | 2018-07-05T00:00:00 |
[
[
"Parupalli",
"Sreekavitha",
""
],
[
"Rao",
"Vijjini Anvesh",
""
],
[
"Mamidi",
"Radhika",
""
]
] |
new_dataset
| 0.99873 |
1707.03095
|
Demival Vasques Filho
|
Ben Curran, Kyle Higham, Elisenda Ortiz, Demival Vasques Filho
|
Look Who's Talking: Bipartite Networks as Representations of a Topic
Model of New Zealand Parliamentary Speeches
|
28 pages, 12 figures, 3 tables
| null |
10.1371/journal.pone.0199072
| null |
cs.CL cs.DL cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantitative methods to measure the participation in parliamentary debate and
discourse of elected Members of Parliament (MPs) and the parties they belong to
are lacking. This is an exploratory study in which we propose the development
of a new approach for a quantitative analysis of such participation. We utilize
the New Zealand government's digital Hansard database to construct a topic
model of parliamentary speeches consisting of nearly 40 million words in the
period 2003-2016. A Latent Dirichlet Allocation topic model is implemented in
order to reveal the thematic structure of our set of documents. This generative
statistical model enables the detection of major themes or topics that are
publicly discussed in the New Zealand parliament, as well as permitting their
classification by MP. Information on topic proportions is subsequently analyzed
using a combination of statistical methods. We observe patterns arising from
time-series analysis of topic frequencies which can be related to specific
social, economic and legislative events. We then construct a bipartite network
representation, linking MPs to topics, for each of four parliamentary terms in
this time frame. We build projected networks (onto the set of nodes represented
by MPs) and proceed to the study of the dynamical changes of their topology,
including community structure. By performing this longitudinal network
analysis, we can observe the evolution of the New Zealand parliamentary topic
network and its main parties in the period studied.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2017 01:25:31 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2017 01:40:56 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2017 03:12:32 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Curran",
"Ben",
""
],
[
"Higham",
"Kyle",
""
],
[
"Ortiz",
"Elisenda",
""
],
[
"Filho",
"Demival Vasques",
""
]
] |
new_dataset
| 0.988483 |
1801.04290
|
Markus Giftthaler
|
Markus Giftthaler, Michael Neunert, Markus St\"auble, Jonas Buchli
|
The Control Toolbox - An Open-Source C++ Library for Robotics, Optimal
and Model Predictive Control
| null | null |
10.1109/SIMPAR.2018.8376281
| null |
cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Control Toolbox (CT), an open-source C++ library for
efficient modeling, control, estimation, trajectory optimization and Model
Predictive Control. The CT is applicable to a broad class of dynamic systems
but features interfaces to modeling tools specifically designed for robotic
applications. This paper outlines the general concept of the toolbox, its main
building blocks, and highlights selected application examples. The library
contains several tools to design and evaluate controllers, model dynamical
systems and solve optimal control problems. The CT was designed for intuitive
modeling of systems governed by ordinary differential or difference equations.
It supports rapid prototyping of cost functions and constraints and provides
standard interfaces for different optimal control solvers. To date, we support
Single Shooting, the iterative Linear-Quadratic Regulator, Gauss-Newton
Multiple Shooting and classical Direct Multiple Shooting. We provide interfaces
to general purpose NLP solvers and Riccati-based linear-quadratic optimal
control solvers. The CT was designed to solve large-scale optimal control and
estimation problems efficiently and allows for online control of dynamic
systems. Some of the key features to enable fast run-time performance are full
compatibility with Automatic Differentiation, derivative code generation, and
multi-threading. Still, the CT is designed as a modular framework whose
building blocks can also be used for other control and estimation applications
such as inverse dynamics control, extended Kalman filters or kinematic
planning.
|
[
{
"version": "v1",
"created": "Fri, 12 Jan 2018 19:08:37 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Mar 2018 16:23:43 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Giftthaler",
"Markus",
""
],
[
"Neunert",
"Michael",
""
],
[
"Stäuble",
"Markus",
""
],
[
"Buchli",
"Jonas",
""
]
] |
new_dataset
| 0.969076 |
1802.04206
|
Foad Sohrabi
|
Foad Sohrabi, Ya-Feng Liu, Wei Yu
|
One-Bit Precoding and Constellation Range Design for Massive MIMO with
QAM Signaling
|
14 pages, 9 figures, to be published in IEEE Journal on Selected
Topics on Signal Processing
| null |
10.1109/JSTSP.2018.2823267
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of low-resolution digital-to-analog converters (DACs) for transmit
precoding provides crucial energy efficiency advantage for massive
multiple-input multiple-output (MIMO) implementation. This paper formulates a
quadrature amplitude modulation (QAM) constellation range and one-bit
symbol-level precoding design problem for minimizing the average symbol error
rate (SER) in downlink massive MIMO transmission. A tight upper-bound for SER
with low-resolution DAC precoding is first derived. The derived expression
suggests that the performance degradation of one-bit precoding can be
interpreted as a decrease in the effective minimum distance of the QAM
constellation. Using the obtained SER expression, we propose a QAM
constellation range design for the single-user case. It is shown that in the
massive MIMO limit, a reasonable choice for constellation range with one-bit
precoding is that of the infinite-resolution precoding with per-symbol power
constraint, but reduced by a factor of $\sqrt{2/\pi}$ or about $0.8$. The
corresponding minimum distance reduction translates to about 2dB gap between
the performance of one-bit precoding and infinite-resolution precoding. This
paper further proposes a low-complexity heuristic algorithm for one-bit
precoder design. Finally, the proposed QAM constellation range and precoder
design are generalized to the multi-user downlink. We propose to scale the
constellation range for infinite-resolution zero-forcing (ZF) precoding with
per-symbol power constraint by the same factor of $\sqrt{2/\pi}$ for one-bit
precoding. The proposed one-bit precoding scheme is shown to be within 2dB of
infinite-resolution ZF. In terms of the number of antennas, one-bit precoding
requires about 50% more antennas to achieve the same performance as
infinite-resolution precoding.
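A back-of-envelope check of the quoted figures, under the assumption that the SNR penalty follows directly from the $\sqrt{2/\pi}$ shrinkage of the minimum distance:

```python
# Back-of-envelope check of the gap quoted above: shrinking the minimum distance
# of the QAM constellation by sqrt(2/pi) costs roughly 20*log10(sqrt(pi/2)) in SNR.
import math

scale = math.sqrt(2 / math.pi)            # ~0.7979, the one-bit constellation scaling
gap_db = 20 * math.log10(1 / scale)       # distance enters the error rate via its square
print(round(scale, 4), round(gap_db, 2))  # 0.7979, ~1.96 dB -- the "about 2dB" gap
```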
|
[
{
"version": "v1",
"created": "Mon, 12 Feb 2018 17:47:48 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Mar 2018 20:48:59 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Sohrabi",
"Foad",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Yu",
"Wei",
""
]
] |
new_dataset
| 0.979173 |
1805.10258
|
Raja Naeem Akram
|
Freya Sheer Hardwick, Apostolos Gioulis, Raja Naeem Akram,
Konstantinos Markantonakis
|
E-Voting with Blockchain: An E-Voting Protocol with Decentralisation and
Voter Privacy
|
9 Pages, 6 Figures, 3 Tables, 5 Algorithms, Conference
| null | null | null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Technology has positive impacts on many aspects of our social life. Designing
a 24-hour globally connected architecture enables ease of access to a variety of
resources and services. Furthermore, technology like the Internet has been a
fertile ground for innovation and creativity. One such disruptive innovation
is blockchain -- a keystone of cryptocurrencies. Blockchain technology is
presented as a game changer for many existing and emerging
technologies/services. With its immutability property and decentralised
architecture, it is taking centre stage in many services as an equalisation
factor for the current disparity between consumers and large
corporations/governments. One such potential application of the blockchain
is in e-voting schemes. The objective of such a scheme would be to provide a
decentralised architecture to run and support a voting scheme that is open,
fair and independently verifiable. In this paper, we propose a potentially new
e-voting protocol that utilises the blockchain as a transparent ballot box. The
protocol has been designed to adhere to the fundamental e-voting properties
while offering a degree of decentralisation and allowing the voter to
change/update their vote (within the permissible voting period). The paper
highlights the pros and cons of using blockchain for such a proposal from a
practical point of view, in both development/deployment and usage contexts.
The paper concludes with a potential roadmap for blockchain technology to
support complex applications.
|
[
{
"version": "v1",
"created": "Fri, 25 May 2018 17:18:25 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jul 2018 08:21:29 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Hardwick",
"Freya Sheer",
""
],
[
"Gioulis",
"Apostolos",
""
],
[
"Akram",
"Raja Naeem",
""
],
[
"Markantonakis",
"Konstantinos",
""
]
] |
new_dataset
| 0.997904 |
1807.00220
|
Nitish Mital
|
Nitish Mital, Katina Kralevska, Deniz Gunduz and Cong Ling
|
Storage-Repair Bandwidth Trade-off for Wireless Caching with Partial
Failure and Broadcast Repair
|
Conference version of this paper has been submitted for review in ITW
2018. This submission includes the proof of theorem 1
| null | null | null |
cs.DC cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Repair of multiple partially failed cache nodes is studied in a distributed
wireless content caching system, where $r$ out of a total of $n$ cache nodes
lose part of their cached data. Broadcast repair of failed cache contents at
the network edge is studied; that is, the surviving cache nodes transmit
broadcast messages to the failed ones, which are then used, together with the
surviving data in their local cache memories, to recover the lost content. The
trade-off between the storage capacity and the repair bandwidth is derived. It
is shown that utilizing the broadcast nature of the wireless medium and the
surviving cache contents at partially failed nodes significantly reduces the
required repair bandwidth per node.
|
[
{
"version": "v1",
"created": "Sat, 30 Jun 2018 19:33:37 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Mital",
"Nitish",
""
],
[
"Kralevska",
"Katina",
""
],
[
"Gunduz",
"Deniz",
""
],
[
"Ling",
"Cong",
""
]
] |
new_dataset
| 0.996652 |
1807.00858
|
Yongqiang Huang
|
Yongqiang Huang, Yu Sun
|
A Dataset of Daily Interactive Manipulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots that succeed in factories stumble over the simplest daily tasks
humans take for granted, because the changing environment makes such tasks
exceedingly difficult. Aiming to teach robots to perform daily interactive
manipulation in a changing environment using human demonstrations, we collected
our own dataset of interactive manipulation. The dataset focuses on position,
orientation, force, and torque of objects manipulated in daily tasks. The
dataset includes 1,593 trials of 32 types of daily motions and 1,596 trials of
pouring alone, as well as helper code. We present our dataset to facilitate the
research on task-oriented interactive manipulation.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 19:04:32 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Huang",
"Yongqiang",
""
],
[
"Sun",
"Yu",
""
]
] |
new_dataset
| 0.99974 |
1807.00920
|
Haiqiang Wang
|
Haiqiang Wang, Xinfeng Zhang, Chao Yang and C.-C. Jay Kuo
|
A JND-based Video Quality Assessment Model and Its Application
|
v3
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Based on the Just-Noticeable-Difference (JND) criterion, a subjective video
quality assessment (VQA) dataset, called the VideoSet, was constructed
recently. In this work, we propose a JND-based VQA model using a probabilistic
framework to analyze and clean collected subjective test data. While most
traditional VQA models focus on content variability, our proposed VQA model
takes both subject and content variabilities into account. The model parameters
used to describe subject and content variabilities are jointly optimized by
solving a maximum likelihood estimation (MLE) problem. As an application, the
new subjective VQA model is used to filter out unreliable video quality scores
collected in the VideoSet. Experiments are conducted to demonstrate the
effectiveness of the proposed model.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 23:17:07 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Wang",
"Haiqiang",
""
],
[
"Zhang",
"Xinfeng",
""
],
[
"Yang",
"Chao",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
new_dataset
| 0.99928 |
1807.00996
|
Boris Galkin Mr
|
Boris Galkin and Luiz A. DaSilva
|
UAVs as Mobile Infrastructure: Addressing Battery Lifetime
|
Under Submission
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) are expected to play an important role in
next generation cellular networks, acting as flying infrastructure which can
serve ground users when regular infrastructure is overloaded or unavailable. As
these devices are expected to operate wirelessly they will rely on an internal
battery for their power supply, which will limit the amount of time they can
operate over an area of interest before having to recharge. In this article, we
outline three battery charging options that may be considered by a network
operator and use simulations to demonstrate the performance impact of
incorporating those options into a cellular network where UAV infrastructure
provides wireless service.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 07:06:02 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Galkin",
"Boris",
""
],
[
"DaSilva",
"Luiz A.",
""
]
] |
new_dataset
| 0.992807 |
1807.01081
|
Sergio Hernandez
|
Sergio Hernandez Cerezo, Guillem Duran Ballester, Spiros Baxevanakis
|
Solving Atari Games Using Fractals And Entropy
|
7 pages, 1 figure, submitted to NIPS-2018
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a novel MCTS-based approach that is derived from
the laws of thermodynamics. The algorithm, coined Fractal Monte Carlo (FMC),
allows us to create an agent that takes intelligent actions in both continuous
and discrete environments while providing control over every aspect of the
agent behavior. Results show that FMC is several orders of magnitude more
efficient than similar techniques, such as MCTS, in the Atari games tested.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 10:59:26 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Cerezo",
"Sergio Hernandez",
""
],
[
"Ballester",
"Guillem Duran",
""
],
[
"Baxevanakis",
"Spiros",
""
]
] |
new_dataset
| 0.99472 |
1807.01166
|
Sai Vikneshwar Mani Jayaraman
|
Venkatesan Guruswami, Satyanarayana V. Lokam, Sai Vikneshwar Mani
Jayaraman
|
$\epsilon$-MSR Codes: Contacting Fewer Code Blocks for Exact Repair
|
A preliminary conference version of this work was presented at ISIT
2018
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
$\epsilon$-Minimum Storage Regenerating ($\epsilon$-MSR) codes form a special
class of Maximum Distance Separable (MDS) codes, providing mechanisms for exact
regeneration of a single code block in their codewords by downloading a slightly
sub-optimal amount of information from the remaining code blocks. The key
advantage of these codes is a significantly lower sub-packetization that grows
only logarithmically with the length of the code, while providing optimality in
storage and error-correcting capacity. However, from an implementation point of
view, these codes require each remaining code block to be available for the
repair of any single code block. In this paper, we address this issue by
constructing $\epsilon$-MSR codes that can repair a failed code block by
contacting a fewer number of available code blocks. When a code block fails,
our repair procedure needs to contact a few compulsory code blocks and is free
to choose any subset of available code blocks for the remaining choices.
Further, our construction requires a field size linear in the code length and
ensures load balancing among the contacted code blocks in terms of the information
downloaded from them for a single repair.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 13:27:55 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Guruswami",
"Venkatesan",
""
],
[
"Lokam",
"Satyanarayana V.",
""
],
[
"Jayaraman",
"Sai Vikneshwar Mani",
""
]
] |
new_dataset
| 0.988679 |
1807.01226
|
J\'er\'emie Decouchant
|
David Kozhaya and J\'er\'emie Decouchant and Paulo Esteves-Verissimo
|
RT-ByzCast: Byzantine-Resilient Real-Time Reliable Broadcast
|
19 pages
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's cyber-physical systems face various impediments to achieving their
intended goals: communication uncertainties and faults, arising from the
increased integration of networked and wireless devices, hinder the synchronism
needed to meet real-time deadlines. Moreover, being critical, these systems are
also exposed to significant security threats. This threat combination increases
the risk of physical damage. This paper addresses these problems by studying
how to build the first real-time Byzantine reliable broadcast protocol (RTBRB)
tolerating network uncertainties, faults, and attacks. Previous literature
describes either real-time reliable broadcast protocols, or asynchronous (non
real-time) Byzantine ones.
We first prove that it is impossible to implement RTBRB using traditional
distributed computing paradigms, e.g., where the error/failure detection
mechanisms of processes are decoupled from the broadcast algorithm itself, even
with the help of the most powerful failure detectors. We circumvent this
impossibility by proposing RT-ByzCast, an algorithm based on aggregating
digital signatures in a sliding time-window and on empowering processes with
self-crashing capabilities to mask and bound losses. We show that RT-ByzCast
(i) operates in real-time by proving that messages broadcast by correct
processes are delivered within a known bounded delay, and (ii) is reliable by
demonstrating that correct processes using our algorithm crash themselves with
a negligible probability, even with message loss rates as high as 60%.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 15:04:46 GMT"
}
] | 2018-07-04T00:00:00 |
[
[
"Kozhaya",
"David",
""
],
[
"Decouchant",
"Jérémie",
""
],
[
"Esteves-Verissimo",
"Paulo",
""
]
] |
new_dataset
| 0.996773 |
1408.3030
|
Fabian Reiter
|
Fabian Reiter
|
Distributed Graph Automata and Verification of Distributed Algorithms
|
26 pages, 6 figures, includes a condensed version of the author's
Master's thesis arXiv:1404.6503. (This version of the article (v2) is
identical to the previous one (v1), except for minor changes in phrasing.)
| null |
10.1109/LICS.2015.27
| null |
cs.FL cs.DC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combining ideas from distributed algorithms and alternating automata, we
introduce a new class of finite graph automata that recognize precisely the
languages of finite graphs definable in monadic second-order logic. By
restricting transitions to be nondeterministic or deterministic, we also obtain
two strictly weaker variants of our automata for which the emptiness problem is
decidable. As an application, we suggest how suitable graph automata might be
useful in formal verification of distributed algorithms, using Floyd-Hoare
logic.
|
[
{
"version": "v1",
"created": "Wed, 13 Aug 2014 15:26:35 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Sep 2014 14:02:50 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Reiter",
"Fabian",
""
]
] |
new_dataset
| 0.990492 |
1602.07365
|
Andr\'e van Renssen
|
Prosenjit Bose, Jean-Lou De Carufel, Andr\'e van Renssen
|
Constrained Generalized Delaunay Graphs Are Plane Spanners
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We look at generalized Delaunay graphs in the constrained setting by
introducing line segments which the edges of the graph are not allowed to
cross. Given an arbitrary convex shape $C$, a constrained Delaunay graph is
constructed by adding an edge between two vertices $p$ and $q$ if and only if
there exists a homothet of $C$ with $p$ and $q$ on its boundary that does not
contain any other vertices visible to $p$ and $q$. We show that, regardless of
the convex shape $C$ used to construct the constrained Delaunay graph, there
exists a constant $t$ (that depends on $C$) such that it is a plane $t$-spanner
of the visibility graph. Furthermore, we reduce the upper bound on the spanning
ratio for the special case where the empty convex shape is an arbitrary
rectangle to $\sqrt{2} \cdot \left( 2 l/s + 1 \right)$, where $l$ and $s$ are
the lengths of the long and short sides of the rectangle.
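For a quick numerical reading of this bound (illustration only):

```python
# Quick numeric reading of the rectangle bound stated above (illustration only).
import math

def spanning_ratio_upper_bound(l, s):
    return math.sqrt(2) * (2 * l / s + 1)

print(round(spanning_ratio_upper_bound(1, 1), 3))   # square: ~4.243
print(round(spanning_ratio_upper_bound(2, 1), 3))   # 2:1 rectangle: ~7.071
```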
|
[
{
"version": "v1",
"created": "Wed, 24 Feb 2016 01:29:19 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Mar 2018 03:08:11 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Jul 2018 00:13:17 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Bose",
"Prosenjit",
""
],
[
"De Carufel",
"Jean-Lou",
""
],
[
"van Renssen",
"André",
""
]
] |
new_dataset
| 0.998656 |
1612.00123
|
Minjia Shi
|
Minjia Shi, Hongwei Zhu, Patrick Sol\'e
|
Optimal three-weight cubic codes
| null |
Applied and Computational Mathematics,2018,17(2):175-184
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we construct an infinite family of three-weight binary codes
from linear codes over the ring $R=\mathbb{F}_2+v\mathbb{F}_2+v^2\mathbb{F}_2$,
where $v^3=1.$ These codes are defined as trace codes. They have the algebraic
structure of abelian codes. Their Lee weight distributions are computed by
employing character sums. The three-weight binary linear codes which we
construct are shown to be optimal when $m$ is odd and $m>1$. They are cubic,
that is to say quasi-cyclic of co-index three. An application to secret sharing
schemes is given.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2016 03:08:31 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jun 2018 02:35:00 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Zhu",
"Hongwei",
""
],
[
"Solé",
"Patrick",
""
]
] |
new_dataset
| 0.999607 |
1705.03667
|
Mikl\'os Homolya
|
Mikl\'os Homolya, Lawrence Mitchell, Fabio Luporini, David A. Ham
|
TSFC: a structure-preserving form compiler
|
Accepted version. 28 pages plus 5 pages supplement
|
SIAM Journal on Scientific Computing, 40 (2018), pp. C401-C428
|
10.1137/17M1130642
| null |
cs.MS cs.NA
|
http://creativecommons.org/licenses/by/4.0/
|
A form compiler takes a high-level description of the weak form of partial
differential equations and produces low-level code that carries out the finite
element assembly. In this paper we present the Two-Stage Form Compiler (TSFC),
a new form compiler with the main motivation to maintain the structure of the
input expression as long as possible. This facilitates the application of
optimizations at the highest possible level of abstraction. TSFC features a
novel, structure-preserving method for separating the contributions of a form
to the subblocks of the local tensor in discontinuous Galerkin problems. This
enables us to preserve the tensor structure of expressions longer through the
compilation process than other form compilers. This is also achieved in part by
a two-stage approach that cleanly separates the lowering of finite element
constructs to tensor algebra in the first stage, from the scheduling of those
tensor operations in the second stage. TSFC also efficiently traverses
complicated expressions, and experimental evaluation demonstrates good
compile-time performance even for highly complex forms.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2017 09:21:24 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Apr 2018 13:51:11 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Homolya",
"Miklós",
""
],
[
"Mitchell",
"Lawrence",
""
],
[
"Luporini",
"Fabio",
""
],
[
"Ham",
"David A.",
""
]
] |
new_dataset
| 0.99981 |
1708.05091
|
Berna Bulut
|
Berna Bulut, Thomas Barratt, Di Kong, Jue Cao, Alberto Loaiza Freire,
Simon Armour, Mark Beach
|
Millimeter Wave Channel Measurements in a Railway Depot
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter wave (mmWave) communication is a key enabling technology with the
potential to deliver high capacity, high peak data rate communications for
future railway services. Knowledge of the radio characteristics is of paramount
importance for the successful deployment of such systems. In this paper mmWave
channel measurements are reported for a railway environment using a wideband
channel sounder operating at 60GHz. Highly directional antennas are deployed at
both ends of the link. Data is reported for path loss, root mean square (RMS)
delay spread and K-factor. Static and mobile measurements are considered.
Analysis shows that the signal strength is strongly dependent (up to 25dB) on
the azimuth orientation of the directional transmit and receive antennas. A
path loss exponent of n=2.04 was extracted from the Line-of-Sight measurements
with optimally aligned antennas. RMS delay spreads ranged from 1ns to 22ns
depending on antenna alignment. 50% of the measured K-factors were found to be
less than 6dB. We conclude this is the result of ground reflections in the
vertical Tx-Rx plane.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2017 21:56:41 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jun 2018 08:12:51 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Bulut",
"Berna",
""
],
[
"Barratt",
"Thomas",
""
],
[
"Kong",
"Di",
""
],
[
"Cao",
"Jue",
""
],
[
"Freire",
"Alberto Loaiza",
""
],
[
"Armour",
"Simon",
""
],
[
"Beach",
"Mark",
""
]
] |
new_dataset
| 0.999804 |
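Illustrative aside to the record above (arXiv:1708.05091): the abstract reports a path loss exponent of n = 2.04 extracted from line-of-sight measurements. The sketch below shows how such an exponent can be fitted to (distance, path loss) data with the standard log-distance model; the reference distance, synthetic measurements and noise level are assumptions for demonstration only, not values from the paper.

```python
# Minimal sketch: fitting a log-distance path loss exponent n from
# (distance, path loss) measurements, PL(d) = PL(d0) + 10 n log10(d / d0).
# The synthetic data below is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d0 = 1.0                      # reference distance in metres (assumed)
n_true, pl_d0 = 2.04, 68.0    # values used only to synthesise example data
d = np.linspace(2.0, 50.0, 40)                    # measurement distances
pl = pl_d0 + 10 * n_true * np.log10(d / d0) + rng.normal(0, 1.5, d.size)

# Least-squares fit of PL against 10*log10(d/d0): slope = n, intercept = PL(d0).
x = 10 * np.log10(d / d0)
A = np.column_stack([x, np.ones_like(x)])
(n_hat, pl0_hat), *_ = np.linalg.lstsq(A, pl, rcond=None)
print(f"estimated path loss exponent n = {n_hat:.2f}, PL(d0) = {pl0_hat:.1f} dB")
```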
1709.00309
|
Saeed Gholami Shahbandi
|
Saeed Gholami Shahbandi and Martin Magnusson
|
2D Map Alignment With Region Decomposition
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many applications of autonomous mobile robots the following problem is
encountered. Two maps of the same environment are available, one a prior map
and the other a sensor map built by the robot. To benefit from all available
information in both maps, the robot must find the correct alignment between the
two maps. There exist many approaches to address this challenge; however, most
of the previous methods rely on assumptions such as similar map modalities, the
same scale, or the existence of an initial guess for the alignment. In this
work we propose a decomposition-based method for 2D spatial map alignment which
does not rely on those assumptions. Our proposed method is validated and
compared with other approaches, including generic data association approaches
and map alignment algorithms. Real-world examples of four different
environments, with thirty-six sensor maps and four layout maps, are used for
this analysis. The maps, along with an implementation of the method, are made
publicly available online.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2017 13:40:50 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Dec 2017 04:15:27 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Apr 2018 09:56:02 GMT"
},
{
"version": "v4",
"created": "Sat, 30 Jun 2018 22:11:10 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Shahbandi",
"Saeed Gholami",
""
],
[
"Magnusson",
"Martin",
""
]
] |
new_dataset
| 0.965824 |
1709.02463
|
Ayan Kumar Bhunia
|
Prithaj Banerjee, Ayan Kumar Bhunia, Avirup Bhattacharyya, Partha
Pratim Roy, Subrahmanyam Murala
|
Local Neighborhood Intensity Pattern: A new texture feature descriptor
for image retrieval
|
Expert Systems with Applications(Elsevier)
| null |
10.1016/j.eswa.2018.06.044
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a new texture descriptor based on the local neighborhood
intensity difference is proposed for content based image retrieval (CBIR). For
computation of texture features like Local Binary Pattern (LBP), the center
pixel in a 3*3 window of an image is compared with all the remaining neighbors,
one pixel at a time to generate a binary bit pattern. It ignores the effect of
the adjacent neighbors of a particular pixel for its binary encoding and also
for texture description. The proposed method is based on the concept that
neighbors of a particular pixel hold a significant amount of texture
information that can be considered for efficient texture representation for
CBIR. Taking this into account, we develop a new texture descriptor, named
Local Neighborhood Intensity Pattern (LNIP), which considers the relative
intensity difference between a particular pixel and the center pixel by
considering its adjacent neighbors, and generates a sign and a magnitude pattern.
Since sign and magnitude patterns hold complementary information to each other,
these two patterns are concatenated into a single feature descriptor to
generate a more concrete and useful feature descriptor. The proposed descriptor
has been tested for image retrieval on four databases, including three texture
image databases (the Brodatz texture image database, the MIT VisTex database
and the Salzburg texture database) and one face database (the AT&T face
database). The precision and recall values observed on these databases are
compared with several state-of-the-art local patterns. The proposed method showed a significant
improvement over many other existing methods.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2017 21:56:32 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Jun 2018 18:25:08 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Jul 2018 05:49:52 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Banerjee",
"Prithaj",
""
],
[
"Bhunia",
"Ayan Kumar",
""
],
[
"Bhattacharyya",
"Avirup",
""
],
[
"Roy",
"Partha Pratim",
""
],
[
"Murala",
"Subrahmanyam",
""
]
] |
new_dataset
| 0.950879 |
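As a hedged illustration of the kind of per-pixel encoding discussed in the record above (arXiv:1709.02463), the sketch below computes a classical LBP-style sign pattern together with a simple magnitude pattern for one 3x3 window. It is not the authors' LNIP descriptor; the neighbour ordering and the mean-based magnitude threshold are assumptions chosen only for the example.

```python
# Minimal sketch of an LBP-style encoding of a 3x3 neighbourhood into a
# sign pattern and a magnitude pattern. This is NOT the LNIP descriptor of
# the record above; it only illustrates the style of per-pixel binary coding.
import numpy as np

def sign_magnitude_patterns(window):
    """window: 3x3 array; returns (sign_code, magnitude_code) as integers."""
    center = window[1, 1]
    # Neighbours in a fixed clockwise order starting at the top-left pixel.
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = np.array([int(window[r, c]) - int(center) for r, c in idx])
    sign_bits = (diffs >= 0).astype(int)                              # classic LBP sign bits
    mag_bits = (np.abs(diffs) >= np.mean(np.abs(diffs))).astype(int)  # threshold by mean |diff|
    to_int = lambda bits: int("".join(map(str, bits)), 2)
    return to_int(sign_bits), to_int(mag_bits)

if __name__ == "__main__":
    patch = np.array([[52, 60, 61],
                      [55, 57, 70],
                      [40, 56, 58]], dtype=np.uint8)
    print(sign_magnitude_patterns(patch))   # (sign code, magnitude code) for this window
```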
1710.00386
|
Panos Giannopoulos
|
\'Edouard Bonnet and Panos Giannopoulos
|
Orthogonal Terrain Guarding is NP-complete
|
17 pages, 18 figures. Note: In the previous arXiv version (and the
conference version), we erroneously claim a $2^{\Omega(n^{1/2})}$ lower bound
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A terrain is an x-monotone polygonal curve, i.e., successive vertices have
increasing x-coordinates.
Terrain Guarding can be seen as a special case of the famous art gallery
problem where one has to place at most $k$ guards on a terrain made of $n$
vertices in order to fully see it.
In 2010, King and Krohn showed that Terrain Guarding is NP-complete [SODA
'10, SIAM J. Comput. '11], thereby solving a long-standing open question.
They observe that their proof does not settle the complexity of Orthogonal
Terrain Guarding where the terrain only consists of horizontal or vertical
segments; those terrains are called rectilinear or orthogonal.
Recently, Ashok et al. [SoCG'17] presented an FPT algorithm running in time
$k^{O(k)}n^{O(1)}$ for Dominating Set in the visibility graphs of rectilinear
terrains without 180-degree vertices.
They ask if Orthogonal Terrain Guarding is in P or NP-hard.
In the same paper, they give a subexponential-time algorithm running in
$n^{O(\sqrt n)}$ (actually even $n^{O(\sqrt k)}$) for the general Terrain
Guarding and notice that the hardness proof of King and Krohn only disproves a
running time $2^{o(n^{1/4})}$ under the ETH.
Hence, there is a significant gap between their $2^{O(n^{1/2} \log
n)}$-algorithm and the no $2^{o(n^{1/4})}$ ETH-hardness implied by King and
Krohn's result.
In this paper, we adapt the gadgets of King and Krohn to rectilinear terrains
in order to prove that even Orthogonal Terrain Guarding is NP-complete.
Then, we show how to obtain an improved ETH lower bound of
$2^{\Omega(n^{1/3})}$ by refining the quadratic reduction from Planar 3-SAT
into a cubic reduction from 3-SAT.
This works for both Orthogonal Terrain Guarding and Terrain Guarding.
|
[
{
"version": "v1",
"created": "Sun, 1 Oct 2017 18:17:04 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jul 2018 12:15:06 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Bonnet",
"Édouard",
""
],
[
"Giannopoulos",
"Panos",
""
]
] |
new_dataset
| 0.999224 |
1802.05330
|
Nitin J. Sanket
|
Nitin J Sanket, Chahat Deep Singh, Kanishka Ganguly, Cornelia
Ferm\"uller, Yiannis Aloimonos
|
GapFlyt: Active Vision Based Minimalist Structure-less Gap Detection For
Quadrotor Flight
|
11 pages, 15 figures, 4 tables. Published in IEEE Robotics and
Automation Letters (2018)
| null |
10.1109/LRA.2018.2843445
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although quadrotors, and aerial robots in general, are inherently active
agents, their perceptual capabilities in literature so far have been mostly
passive in nature. Researchers and practitioners today use traditional computer
vision algorithms with the aim of building a representation of general
applicability: a 3D reconstruction of the scene. Using this representation,
planning tasks are constructed and accomplished to allow the quadrotor to
demonstrate autonomous behavior. These methods are inefficient as they are not
task driven and such methodologies are not utilized by flying insects and
birds. Such agents have been solving the problem of navigation and complex
control for ages without the need to build a 3D map and are highly task driven.
In this paper, we propose this framework of bio-inspired perceptual design for
quadrotors. We use this philosophy to design a minimalist sensori-motor
framework for a quadrotor to fly through unknown gaps without a 3D
reconstruction of the scene using only a monocular camera and onboard sensing.
We successfully evaluate and demonstrate the proposed approach in many
real-world experiments with different settings and window shapes, achieving a
success rate of 85% at 2.5ms$^{-1}$ even with a minimum tolerance of just 5cm.
To our knowledge, this is the first paper which addresses the problem of gap
detection of an unknown shape and location with a monocular camera and onboard
sensing.
|
[
{
"version": "v1",
"created": "Wed, 14 Feb 2018 21:40:23 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Feb 2018 22:04:22 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Jun 2018 16:41:40 GMT"
},
{
"version": "v4",
"created": "Sun, 1 Jul 2018 18:00:33 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Sanket",
"Nitin J",
""
],
[
"Singh",
"Chahat Deep",
""
],
[
"Ganguly",
"Kanishka",
""
],
[
"Fermüller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.980211 |
1804.03582
|
Fabian Reiter
|
Olivier Carton, Bruno Guillon, and Fabian Reiter
|
Counter Machines and Distributed Automata: A Story about Exchanging
Space and Time
|
15 pages (+ 13 pages of appendices), 5 figures; To appear in the
proceedings of AUTOMATA 2018;
| null |
10.1007/978-3-319-92675-9_2
| null |
cs.FL cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove the equivalence of two classes of counter machines and one class of
distributed automata. Our counter machines operate on finite words, which they
read from left to right while incrementing or decrementing a fixed number of
counters. The two classes differ in the extra features they offer: one allows
to copy counter values, whereas the other allows to compute copyless sums of
counters. Our distributed automata, on the other hand, operate on directed path
graphs that represent words. All nodes of a path synchronously execute the same
finite-state machine, whose state diagram must be acyclic except for
self-loops, and each node receives as input the state of its direct
predecessor. These devices form a subclass of linear-time one-way cellular
automata.
|
[
{
"version": "v1",
"created": "Tue, 10 Apr 2018 15:12:40 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Carton",
"Olivier",
""
],
[
"Guillon",
"Bruno",
""
],
[
"Reiter",
"Fabian",
""
]
] |
new_dataset
| 0.99932 |
1805.09924
|
Ritu Kundu
|
Tomasz Kociumaka, Ritu Kundu, Manal Mohamed, and Solon P. Pissis
|
Longest Unbordered Factor in Quasilinear Time
|
17 pages, 5 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A border u of a word w is a proper factor of w occurring both as a prefix and
as a suffix. The maximal unbordered factor of w is the longest factor of w
which does not have a border. Here, an algorithm is presented that computes the
Longest Unbordered Factor Array of w for general alphabets in O(n log n) time
with high probability (or in O(n log n log^2 log n) deterministic time), where
n is the length of w. This array specifies the length of the maximal unbordered factor
starting at each position of w. This is a major improvement on the running time
of the currently best worst-case algorithm working in O(n^{1.5} ) time for
integer alphabets [Gawrychowski et al., 2015].
|
[
{
"version": "v1",
"created": "Thu, 24 May 2018 22:14:27 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Jul 2018 07:39:26 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Kociumaka",
"Tomasz",
""
],
[
"Kundu",
"Ritu",
""
],
[
"Mohamed",
"Manal",
""
],
[
"Pissis",
"Solon P.",
""
]
] |
new_dataset
| 0.995467 |
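To make the object computed in the record above (arXiv:1805.09924) concrete, here is a brute-force baseline for the Longest Unbordered Factor Array; it checks every factor with a KMP failure function and runs in roughly cubic time, so it only illustrates the definition and is in no way the quasilinear algorithm of the paper.

```python
# Brute-force baseline for the Longest Unbordered Factor Array: luf[i] is the
# length of the longest factor starting at position i that has no border
# (no proper non-empty prefix that is also a suffix). Illustration only.
def is_unbordered(s):
    # A non-empty word is unbordered iff its KMP failure value at the last position is 0.
    fail = [0] * len(s)
    k = 0
    for i in range(1, len(s)):
        while k and s[i] != s[k]:
            k = fail[k - 1]
        if s[i] == s[k]:
            k += 1
        fail[i] = k
    return fail[-1] == 0 if s else True

def luf_array(w):
    n = len(w)
    out = []
    for i in range(n):
        best = 1  # a single letter is always unbordered
        for j in range(i + 1, n):
            if is_unbordered(w[i:j + 1]):
                best = j + 1 - i
        out.append(best)
    return out

print(luf_array("abaab"))   # lengths of the maximal unbordered factors at each position
```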
1805.11833
|
Yuanfu Luo
|
Yuanfu Luo, Panpan Cai, Aniket Bera, David Hsu, Wee Sun Lee, Dinesh
Manocha
|
PORCA: Modeling and Planning for Autonomous Driving among Many
Pedestrians
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a planning system for autonomous driving among many
pedestrians. A key ingredient of our approach is PORCA, a pedestrian motion
prediction model that accounts for both a pedestrian's global navigation
intention and local interactions with the vehicle and other pedestrians.
Unfortunately, the autonomous vehicle does not know the pedestrian's intention
a priori and requires a planning algorithm that hedges against the uncertainty
in pedestrian intentions. Our planning system combines a POMDP algorithm with
the pedestrian motion model and runs in near real time. Experiments show that
it enables a robot vehicle to drive safely, efficiently, and smoothly among a
crowd with a density of nearly one person per square meter.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 07:19:20 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Jul 2018 04:34:14 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Luo",
"Yuanfu",
""
],
[
"Cai",
"Panpan",
""
],
[
"Bera",
"Aniket",
""
],
[
"Hsu",
"David",
""
],
[
"Lee",
"Wee Sun",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.986827 |
1806.11314
|
Nevrez Imamoglu
|
Nevrez Imamoglu, Yu Oishi, Xiaoqiang Zhang, Guanqun Ding, Yuming Fang,
Toru Kouyama, Ryosuke Nakamura
|
Hyperspectral Image Dataset for Benchmarking on Salient Object Detection
|
3 pages, 3 figures. 2 tables, appeared in the Proceedings of the 10th
International Conference on Quality of Multimedia Experience (QoMEX 2018)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many works have been done on salient object detection using supervised or
unsupervised approaches on colour images. Recently, a few studies demonstrated
that efficient salient object detection can also be implemented by using
spectral features in the visible spectrum of hyperspectral images from natural
scenes. However, these hyperspectral salient object detection models were
tested on only a small number of images selected from various online public
datasets, which were not specifically created for object detection purposes.
Therefore, here, we aim to contribute to the field by releasing a hyperspectral
salient object detection dataset with a collection of 60 hyperspectral images
with their respective ground-truth binary images and representative rendered
colour images (sRGB). We took several aspects into consideration during the
data collection, such as variation in object size, number of objects,
foreground-background contrast, and object position in the image. Then, we
prepared a ground-truth binary image for each hyperspectral image, in which the
salient objects are labelled. Finally, we evaluated several existing
hyperspectral saliency detection models from the literature using the Area
Under Curve (AUC) metric.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 09:31:56 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jul 2018 01:25:04 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Imamoglu",
"Nevrez",
""
],
[
"Oishi",
"Yu",
""
],
[
"Zhang",
"Xiaoqiang",
""
],
[
"Ding",
"Guanqun",
""
],
[
"Fang",
"Yuming",
""
],
[
"Kouyama",
"Toru",
""
],
[
"Nakamura",
"Ryosuke",
""
]
] |
new_dataset
| 0.999797 |
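As a small companion to the evaluation protocol mentioned in the record above (arXiv:1806.11314), the snippet below computes the Area Under Curve (AUC) of a saliency map against a binary ground-truth mask using scikit-learn; the synthetic maps are placeholders, not images from the released dataset.

```python
# Illustrative AUC evaluation of a saliency map against a binary ground-truth
# mask, as is commonly done for salient object detection benchmarks.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
h, w = 64, 64
ground_truth = np.zeros((h, w), dtype=int)
ground_truth[20:40, 25:50] = 1                      # a rectangular "salient object"

# A noisy saliency prediction that is higher inside the object region.
saliency = 0.3 * rng.random((h, w)) + 0.6 * ground_truth

auc = roc_auc_score(ground_truth.ravel(), saliency.ravel())
print(f"AUC = {auc:.3f}")
```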
1807.00022
|
Hanshen Xiao
|
Hanshen Xiao and Guoqiang Xiao
|
On Solving Ambiguity Resolution with Robust Chinese Remainder Theorem
for Multiple Numbers
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Chinese Remainder Theorem (CRT) is a powerful approach to solving ambiguity
resolution related problems, such as undersampling frequency estimation and
phase unwrapping, which are widely applied in localization. Recently, the
deterministic robust CRT for multiple numbers (RCRTMN) was proposed, which can
reconstruct multiple integers with unknown relationship of residue
correspondence via generalized CRT and achieves robustness to bounded errors
simultaneously. Naturally, RCRTMN sheds light on CRT-based estimation for
multiple objectives. In this paper, two open problems are solved: how to
introduce statistical methods into RCRTMN, and how to deal with arbitrary
errors introduced in the residues. We propose an extended version of RCRTMN
assisted by Maximum Likelihood Estimation (MLE), which can tolerate
unrestricted errors and brings a considerable improvement in robustness.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 18:17:05 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Xiao",
"Hanshen",
""
],
[
"Xiao",
"Guoqiang",
""
]
] |
new_dataset
| 0.974223 |
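For readers unfamiliar with the building block behind the record above (arXiv:1807.00022), the snippet below is the textbook Chinese Remainder Theorem reconstruction of a single integer from its residues with pairwise coprime moduli; it covers only the classical, error-free case and is not the robust multi-number RCRTMN scheme or its MLE-assisted extension.

```python
# Textbook Chinese Remainder Theorem reconstruction of a single integer from
# its residues modulo pairwise coprime moduli.
from math import prod

def crt(residues, moduli):
    """Return x with x == residues[i] (mod moduli[i]) for all i, 0 <= x < prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

moduli = [5, 7, 9, 11]          # pairwise coprime
secret = 1234
residues = [secret % m for m in moduli]
assert crt(residues, moduli) == secret % prod(moduli)
print(crt(residues, moduli))
```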
1807.00141
|
Jiasong Wu
|
Li Liu, Jiasong Wu, Dengwang Li, Lotfi Senhadji, Huazhong Shu
|
Fractional Wavelet Scattering Network and Applications
|
11 pages, 6 figures, 3 tables, IEEE Transactions on Biomedical
Engineering, 2018
| null |
10.1109/TBME.2018.2850356
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objective: The present study introduces a fractional wavelet scattering
network (FrScatNet), which is a generalized translation invariant version of
the classical wavelet scattering network (ScatNet). Methods: In our approach,
the FrScatNet is constructed based on the fractional wavelet transform (FRWT).
The fractional scattering coefficients are iteratively computed using FRWTs and
modulus operators. The feature vectors constructed by fractional scattering
coefficients are usually used for signal classification. In this work, an
application example of FrScatNet is provided in order to assess its performance
on pathological images. Firstly, the FrScatNet extracts feature vectors from
patches of the original histological images under different orders. Then, we
classify those patches into target (benign or malignant) and background groups,
and the properties of FrScatNet are analyzed by comparing the error rates
computed for different fractional orders. Based on the above pathological image
classification, a gland segmentation algorithm is proposed by combining the
boundary information and the gland location. Results: The error rates for
different fractional orders of FrScatNet are examined and show that the
classification accuracy is significantly improved in fractional scattering
domain. We also compare the FrScatNet based gland segmentation method with
those proposed in the 2015 MICCAI Gland Segmentation Challenge and our method
achieves comparable results. Conclusion: The FrScatNet is shown to achieve
accurate and robust results. More stable and discriminative fractional
scattering coefficients are obtained by the FrScatNet in this work.
Significance: The added fractional order parameter is able to analyze the image
in the fractional scattering domain.
|
[
{
"version": "v1",
"created": "Sat, 30 Jun 2018 08:38:22 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Liu",
"Li",
""
],
[
"Wu",
"Jiasong",
""
],
[
"Li",
"Dengwang",
""
],
[
"Senhadji",
"Lotfi",
""
],
[
"Shu",
"Huazhong",
""
]
] |
new_dataset
| 0.983452 |
1807.00224
|
Farshid Alambeigi
|
Farshid Alambeigi, Mahsan Bakhtiarinejad, Armina Azizi, Rachel
Hegeman, Iulian Iordachita, Harpal Khanuja and Mehran Armand
|
Inroads Toward Robot-Assisted Internal Fixation of Bone Fractures Using
a Bendable Medical Screw and the Curved Drilling Technique
|
6 pages, To be appeared in the 7th IEEE RAS/EMBS International
Conference on Biomedical Robotics and Biomechatronics (BIOROB 2018)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Internal fixation is a common orthopedic procedure in which a rigid screw is
used to fix fragments of a fractured bone together and expedite the healing
process. However, the rigidity of the screw, geometry of the fractured anatomy
(e.g. femur and pelvis), and patient age can cause an array of complications
during screw placement, such as improper fracture healing due to misalignment
of the bone fragments, lengthy procedure times and consequently high radiation
exposure. To address these issues, we propose a minimally invasive
robot-assisted procedure comprising a continuum robot, called ortho-snake,
together with a novel bendable medical screw (BMS) for fixating the fractures.
We describe the implementation of a curved drilling technique and focus on the
design, manufacturing, and evaluation of a novel BMS, which can passively morph
into the drilled curved tunnels with various curvatures. We evaluate the
performance and efficacy of the proposed BMS using both finite element
simulations as well as experiments conducted on synthetic bone samples.
|
[
{
"version": "v1",
"created": "Sat, 30 Jun 2018 20:42:37 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Alambeigi",
"Farshid",
""
],
[
"Bakhtiarinejad",
"Mahsan",
""
],
[
"Azizi",
"Armina",
""
],
[
"Hegeman",
"Rachel",
""
],
[
"Iordachita",
"Iulian",
""
],
[
"Khanuja",
"Harpal",
""
],
[
"Armand",
"Mehran",
""
]
] |
new_dataset
| 0.96885 |
1807.00253
|
Chaojing Duan
|
Chaojing Duan, Siheng Chen, Jelena Kova\v{c}evi\'c
|
Weighted Multi-projection: 3D Point Cloud Denoising with Estimated
Tangent Planes
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a collection of 3D points sampled from surfaces of objects, a 3D point
cloud is widely used in robotics, autonomous driving and augmented reality. Due
to the physical limitations of 3D sensing devices, 3D point clouds are usually
noisy, which influences subsequent computations, such as surface
reconstruction, recognition and many others. To denoise a 3D point cloud, we
present a novel algorithm, called weighted multi-projection. Compared to many
previous works on denoising, instead of directly smoothing the coordinates of
3D points, we use a two-fold smoothing: We first estimate a local tangent plane
at each 3D point and then reconstruct each 3D point by weighted averaging of
its projections on multiple tangent planes. We also provide the theoretical
analysis for the surface normal estimation and achieve a tighter bound than in
a previous work. We validate the empirical performance on the dataset of
ShapeNetCore and show that weighted multi-projection outperforms its
competitors in all nine classes.
|
[
{
"version": "v1",
"created": "Sun, 1 Jul 2018 01:42:04 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Duan",
"Chaojing",
""
],
[
"Chen",
"Siheng",
""
],
[
"Kovačević",
"Jelena",
""
]
] |
new_dataset
| 0.990559 |
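The record above (arXiv:1807.00253) describes smoothing a point cloud by projecting each point onto estimated tangent planes. The sketch below implements a simplified version of that idea with PCA-based normal estimation and uniform averaging of the projections; the neighbourhood size and the uniform weights are assumptions, not the paper's weighting scheme.

```python
# Simplified sketch of tangent-plane-based point cloud smoothing: estimate a
# local tangent plane at each point via PCA over its k nearest neighbours,
# project the point onto the planes of its neighbours, and average.
import numpy as np

def smooth_point_cloud(points, k=10):
    n = len(points)
    # Pairwise distances and k-nearest-neighbour indices (O(n^2), fine for small clouds).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]

    centers, normals = np.empty_like(points), np.empty_like(points)
    for i in range(n):
        nb = points[knn[i]]
        c = nb.mean(axis=0)
        # The right singular vector of the smallest singular value approximates the normal.
        _, _, vt = np.linalg.svd(nb - c, full_matrices=False)
        centers[i], normals[i] = c, vt[-1]

    smoothed = np.empty_like(points)
    for i in range(n):
        projs = []
        for j in knn[i]:
            p, c, m = points[i], centers[j], normals[j]
            projs.append(p - np.dot(p - c, m) * m)   # projection onto neighbour j's plane
        smoothed[i] = np.mean(projs, axis=0)
    return smoothed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.02 * rng.normal(size=200)                  # noisy samples of the plane z = 0
    noisy = np.column_stack([xy, z])
    # Mean |z| should shrink after smoothing compared with the noisy input.
    print(np.abs(smooth_point_cloud(noisy)[:, 2]).mean(), np.abs(z).mean())
```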
1807.00280
|
Shahriar Sefati
|
Shahriar Sefati, Ryan Murphy, Farshid Alambeigi, Michael Pozin, Iulian
Iordachita, Russell Taylor, Mehran Armand
|
FBG-Based Control of a Continuum Manipulator Interacting With Obstacles
|
Accepted for IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS) 2018
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Tracking and controlling the shape of continuum dexterous manipulators (CDM)
in constrained environments is a challenging task. The imposed constraints and
the interaction with unknown obstacles may change the CDM's shape and therefore
demand shape-sensing methods that do not rely on a direct line of sight. To
address these issues, we integrate a novel Fiber Bragg Grating (FBG) shape
sensing unit into a CDM, reconstruct the shape in real-time, and develop an
optimization-based control algorithm using FBG tip position feedback. The CDM
is designed for less-invasive treatment of osteolysis (bone degradation). To
evaluate the performance of the feedback control algorithm when the CDM
interacts with obstacles, we perform a set of experiments similar to the real
scenario of the CDM interaction with soft and hard lesions during the treatment
of osteolysis. In addition, we propose methods for identifying CDM collisions
with soft or hard obstacles using the Jacobian information. Results
demonstrate successful control of the CDM tip based on the FBG feedback and
indicate repeatability and robustness of the proposed method when interacting
with unknown obstacles.
|
[
{
"version": "v1",
"created": "Sun, 1 Jul 2018 06:59:45 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Sefati",
"Shahriar",
""
],
[
"Murphy",
"Ryan",
""
],
[
"Alambeigi",
"Farshid",
""
],
[
"Pozin",
"Michael",
""
],
[
"Iordachita",
"Iulian",
""
],
[
"Taylor",
"Russell",
""
],
[
"Armand",
"Mehran",
""
]
] |
new_dataset
| 0.998269 |
1807.00462
|
Jiankai Sun
|
Jiankai Sun, Abhinav Vishnu, Aniket Chakrabarti, Charles Siegel, and
Srinivasan Parthasarathy
|
ColdRoute: Effective Routing of Cold Questions in Stack Exchange Sites
|
Accepted to the Journal Track of The European Conference on Machine
Learning and Principles and Practice of Knowledge Discovery in Databases
(ECML PKDD 2018); Published by Springer:
https://link.springer.com/article/10.1007%2Fs10618-018-0577-7
|
@Article{Sun2018, author="Sun, Jiankai and Vishnu, A. and
Chakrabarti, A. and Siegel, C. and Parthasarathy, S.", title="ColdRoute:
effective routing of cold questions in stack exchange sites", journal="ECML
PKDD", year="2018"}
|
10.1007/s10618-018-0577-7
| null |
cs.AI cs.HC cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Routing questions in Community Question Answer services (CQAs) such as Stack
Exchange sites is a well-studied problem. Yet cold-start, a phenomenon observed
when a new question is posted, is not well addressed by existing approaches.
Additionally, cold questions posted by new askers present
significant challenges to state-of-the-art approaches. We propose ColdRoute to
address these challenges. ColdRoute is able to handle the task of routing cold
questions posted by new or existing askers to matching experts. Specifically,
we use Factorization Machines on the one-hot encoding of critical features such
as question tags and compare our approach to well-studied techniques such as
CQARank and semantic matching (LDA, BoW, and Doc2Vec). Using data from eight
stack exchange sites, we are able to improve upon the routing metrics
(Precision$@1$, Accuracy, MRR) over the state-of-the-art models such as
semantic matching by $159.5\%$,$31.84\%$, and $40.36\%$ for cold questions
posted by existing askers, and $123.1\%$, $27.03\%$, and $34.81\%$ for cold
questions posted by new askers respectively.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 05:08:05 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Sun",
"Jiankai",
""
],
[
"Vishnu",
"Abhinav",
""
],
[
"Chakrabarti",
"Aniket",
""
],
[
"Siegel",
"Charles",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] |
new_dataset
| 0.987774 |
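Since the record above (arXiv:1807.00462) relies on Factorization Machines over one-hot encoded features, the sketch below shows a minimal second-order FM prediction with the usual O(kN) pairwise trick; the random parameters and the hypothetical feature slots (question tag, asker id, expert id) are illustrative assumptions, not ColdRoute's trained model or feature design.

```python
# Minimal second-order Factorization Machine prediction over a sparse
# (e.g. one-hot) feature vector:
#   y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j
import numpy as np

def fm_predict(x, w0, w, V):
    linear = w0 + x @ w
    # Pairwise term: 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    s = x @ V                     # shape (k,)
    s2 = (x ** 2) @ (V ** 2)      # shape (k,)
    return linear + 0.5 * np.sum(s ** 2 - s2)

rng = np.random.default_rng(0)
n_features, k = 12, 4
x = np.zeros(n_features)
x[[1, 5, 9]] = 1.0                # one-hot style slots (hypothetical: tag, asker, expert)
w0, w, V = 0.1, rng.normal(0, 0.1, n_features), rng.normal(0, 0.1, (n_features, k))
print(fm_predict(x, w0, w, V))
```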
1807.00507
|
Michael Codish
|
Michael Codish
|
A SAT Encoding for the $n$-Fractions Problem
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This note describes a SAT encoding for the $n$-fractions puzzle which is
problem 041 of the CSPLib. Using a SAT solver we obtain a solution for two of
the six remaining open instances of this problem.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 07:54:17 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Codish",
"Michael",
""
]
] |
new_dataset
| 0.983162 |
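As background to the record above (arXiv:1807.00507), the snippet below brute-forces the classical fractions puzzle from which the n-fractions problem (CSPLib prob041) is generalised: assign distinct non-zero digits so that A/BC + D/EF + G/HI = 1. This exhaustive search only illustrates the constraint; it is not the SAT encoding used in the note, and the larger open instances are far beyond brute force.

```python
# Brute-force solver for the classical fractions puzzle: distinct non-zero
# digits A..I with A/BC + D/EF + G/HI = 1, where BC denotes 10*B + C.
from fractions import Fraction
from itertools import permutations

def solutions():
    for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        total = (Fraction(a, 10 * b + c)
                 + Fraction(d, 10 * e + f)
                 + Fraction(g, 10 * h + i))
        if total == 1:
            yield (a, b, c, d, e, f, g, h, i)

print(next(solutions()))   # one assignment of the digits A..I
```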
1807.00518
|
Martin Monperrus
|
Mar\'ia G\'omez, Bram Adams, Walid Maalej, Martin Monperrus, Romain
Rouvoy
|
App Store 2.0: From Crowd Information to Actionable Feedback in Mobile
Ecosystems
| null |
IEEE Software, Institute of Electrical and Electronics Engineers,
2017, 34, pp.81-89
|
10.1109/MS.2017.46
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the increasing competition in mobile app ecosystems, improving the
experience of users has become a major goal for app vendors. This article
introduces a visionary app store, called APP STORE 2.0, which exploits
crowdsourced information about apps, devices and users to increase the overall
quality of the delivered mobile apps. We sketch a blueprint architecture of the
envisioned app stores and discuss the different kinds of actionable feedback
that app stores can generate using crowdsourced information.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 08:15:57 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Gómez",
"María",
""
],
[
"Adams",
"Bram",
""
],
[
"Maalej",
"Walid",
""
],
[
"Monperrus",
"Martin",
""
],
[
"Rouvoy",
"Romain",
""
]
] |
new_dataset
| 0.999425 |
1807.00556
|
Julia Lasserre
|
Julia Lasserre, Katharina Rasch, Roland Vollgraf
|
Studio2Shop: from studio photo shoots to fashion articles
|
12 pages, 9 figures (Figure 1 has 5 subfigures, Figure 2 has 3
subfigures), 7 tables
|
Proceedings of the 7th International Conference on Pattern
Recognition Applications and Methods (January 16-18, 2018, in Funchal,
Madeira, Portugal), Vol. 1 (ISBN 978-989-758-276-9), P. 37-48
|
10.5220/0006544500370048
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fashion is an increasingly important topic in computer vision, in particular
the so-called street-to-shop task of matching street images with shop images
containing similar fashion items. Solving this problem promises new means of
making fashion searchable and helping shoppers find the articles they are
looking for. This paper focuses on finding pieces of clothing worn by a person
in full-body or half-body images with neutral backgrounds. Such images are
ubiquitous on the web and in fashion blogs, and are typically studio photos; we
refer to this setting as studio-to-shop. Recent advances in computational
fashion include the development of domain-specific numerical representations.
Our model Studio2Shop builds on top of such representations and uses a deep
convolutional network trained to match a query image to the numerical feature
vectors of all the articles annotated in this image. Top-$k$ retrieval
evaluation on test query images shows that the correct items are most often
found within a range that is sufficiently small for building realistic visual
search engines for the studio-to-shop setting.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 09:26:58 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Lasserre",
"Julia",
""
],
[
"Rasch",
"Katharina",
""
],
[
"Vollgraf",
"Roland",
""
]
] |
new_dataset
| 0.999813 |
1807.00602
|
Evgeniy Gryaznov
|
Evgeniy Gryaznov
|
Semantic Query Language for Temporal Genealogical Trees
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Computers play a crucial role in modern ancestry management; they are used to
collect, store, analyze, sort and display genealogical data. However, current
applications do not take into account the kinship structure of a natural
language.
In this paper we propose a new domain-specific language, KISP, which is based
on a formalization of the English kinship system, for accessing and querying
traditional genealogical trees. KISP is a dynamically typed LISP-like
programming language with a rich set of features, such as kinship term
reduction and temporal information expression.
Our solution provides a user with a coherent genealogical framework that
allows for a natural navigation over any traditional family tree.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 11:27:51 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Gryaznov",
"Evgeniy",
""
]
] |
new_dataset
| 0.995034 |
1807.00637
|
Shaked Perek
|
Shaked Perek, Alon Hazan, Ella Barkan, Ayelet Akselrod-Ballin
|
Mammography Dual View Mass Correspondence
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Standard breast cancer screening involves the acquisition of two mammography
X-ray projections for each breast. Typically, a comparison of both views
supports the challenging task of tumor detection and localization. We introduce
a deep learning, patch-based Siamese network for lesion matching in dual-view
mammography. Our locally-fitted approach generates a joint patch pair
representation and comparison with a shared configuration between the two
views. We performed a comprehensive set of experiments with the network on
standard datasets, among them the large Digital Database for Screening
Mammography (DDSM). We analyzed the effect of transfer learning with the
network between different types of datasets and compared the network-based
matching to using Euclidean distance by template matching. Finally, we
evaluated the contribution of the matching network in a full detection
pipeline. Experimental results demonstrate the promise of improved detection
accuracy using our approach.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 12:52:24 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Perek",
"Shaked",
""
],
[
"Hazan",
"Alon",
""
],
[
"Barkan",
"Ella",
""
],
[
"Akselrod-Ballin",
"Ayelet",
""
]
] |
new_dataset
| 0.99618 |
1807.00686
|
Ting Yao
|
Ting Yao and Xue Li
|
YH Technologies at ActivityNet Challenge 2018
|
Rank 2 in both Temporal Activity Detection Task & Kinetics Task @
ActivityNet 2018. arXiv admin note: substantial text overlap with
arXiv:1710.08011 by other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This notebook paper presents an overview and comparative analysis of our
systems designed for the following five tasks in ActivityNet Challenge 2018:
temporal action proposals, temporal action localization, dense-captioning
events in videos, trimmed action recognition, and spatio-temporal action
localization.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 07:49:08 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Yao",
"Ting",
""
],
[
"Li",
"Xue",
""
]
] |
new_dataset
| 0.991267 |
1807.00703
|
Fabio Ferreira
|
Fabio Ferreira, Jonas Rothfuss, Eren Erdal Aksoy, You Zhou, Tamim
Asfour
|
Introducing the Simulated Flying Shapes and Simulated Planar Manipulator
Datasets
|
technical documentation, 2 figures, links to repositories
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We release two artificial datasets, Simulated Flying Shapes and Simulated
Planar Manipulator, that allow testing of the learning ability of video
processing systems. In particular, the datasets are meant as a tool that makes
it easy to assess the sanity of deep neural network models that aim to encode,
reconstruct or predict video frame sequences. The datasets each consist of 90000 videos.
The Simulated Flying Shapes dataset comprises scenes showing two objects of
equal shape (rectangle, triangle and circle) and size in which one object
approaches its counterpart. The Simulated Planar Manipulator shows a 3-DOF
planar manipulator that executes a pick-and-place task in which it has to place
a size-varying circle on a square platform. Unlike other widely used
datasets such as moving MNIST [1], [2], the two presented datasets involve
goal-oriented tasks (e.g. the manipulator grasping an object and placing it on
a platform), rather than showing random movements. This makes our datasets more
suitable for testing prediction capabilities and the learning of sophisticated
motions by a machine learning model. This technical document aims at providing
an introduction into the usage of both datasets.
|
[
{
"version": "v1",
"created": "Mon, 2 Jul 2018 14:20:24 GMT"
}
] | 2018-07-03T00:00:00 |
[
[
"Ferreira",
"Fabio",
""
],
[
"Rothfuss",
"Jonas",
""
],
[
"Aksoy",
"Eren Erdal",
""
],
[
"Zhou",
"You",
""
],
[
"Asfour",
"Tamim",
""
]
] |
new_dataset
| 0.999628 |
1706.01560
|
Md Mizanur Rahman
|
Mizanur Rahman, Ruben Recabarren, Bogdan Carbunar, Dongwon Lee
|
Stateless Puzzles for Real Time Online Fraud Preemption
| null |
The 9th International ACM Web Science Conference, 2017
|
10.1145/3091478.3091507
| null |
cs.SI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The profitability of fraud in online systems such as app markets and social
networks marks the failure of existing defense mechanisms. In this paper, we
propose FraudSys, a real-time fraud preemption approach that imposes
Bitcoin-inspired computational puzzles on the devices that post online system
activities, such as reviews and likes. We introduce and leverage several novel
concepts that include (i) stateless, verifiable computational puzzles, that
impose minimal performance overhead, but enable the efficient verification of
their authenticity, (ii) a real-time, graph-based solution to assign fraud
scores to user activities, and (iii) mechanisms to dynamically adjust puzzle
difficulty levels based on fraud scores and the computational capabilities of
devices. FraudSys does not alter the experience of users in online systems, but
delays fraudulent actions and consumes significant computational resources of
the fraudsters. Using real datasets from Google Play and Facebook, we
demonstrate the feasibility of FraudSys by showing that the devices of honest
users are minimally impacted, while fraudster controlled devices receive daily
computational penalties of up to 3,079 hours. In addition, we show that with
FraudSys, fraud does not pay off, as a user equipped with mining hardware
(e.g., AntMiner S7) will earn less than half through fraud than from honest
Bitcoin mining.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2017 23:25:55 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Rahman",
"Mizanur",
""
],
[
"Recabarren",
"Ruben",
""
],
[
"Carbunar",
"Bogdan",
""
],
[
"Lee",
"Dongwon",
""
]
] |
new_dataset
| 0.998192 |
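To give a concrete feel for the computational puzzles described in the record above (arXiv:1706.01560), here is a generic hash-preimage puzzle with adjustable difficulty in the spirit of Bitcoin-style proof-of-work; it is not FraudSys's stateless, verifiable puzzle construction, and the difficulty value and activity string are arbitrary choices for the example.

```python
# Generic hash-based computational puzzle with adjustable difficulty. It only
# illustrates how puzzle difficulty throttles how fast a device can post
# activities; it is not the FraudSys construction.
import hashlib
import time

def solve_puzzle(activity: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(activity || nonce) starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(activity + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(activity: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(activity + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

if __name__ == "__main__":
    activity = b"review:app123:user456:5 stars"
    start = time.time()
    nonce = solve_puzzle(activity, difficulty_bits=18)   # higher = slower to solve
    print(f"nonce={nonce}, solved in {time.time() - start:.2f}s,",
          "valid" if verify(activity, nonce, 18) else "invalid")
```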
1801.07777
|
Mohsen Heidari Khoozani
|
Mohsen Heidari, Achilleas Anastasopoulos, and S. Sandeep Pradhan
|
On The Reliability Function of Discrete Memoryless Multiple-Access
Channel with Feedback
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We derive a lower and upper bound on the reliability function of discrete
memoryless multiple-access channel (MAC) with noiseless feedback and
variable-length codes (VLCs). For the upper-bound, we use proof techniques of
Burnashev for the point-to-point case. Also, we adopt the techniques used to
prove the converse for the feedback-capacity of MAC. For the lower-bound on the
error exponent, we present a coding scheme consisting of a data and a
confirmation stage. In the data stage, any arbitrary feedback
capacity-achieving code is used. In the confirmation stage, each transmitter
sends one bit of information to the receiver using a pair of codebooks of size
two, one for each transmitter. The codewords at this stage are selected
randomly according to an appropriately optimized joint probability
distribution. The bounds increase linearly with respect to a specific Euclidean
distance measure defined between the transmission rate pair and the capacity
boundary. The lower and upper bounds match for a class of MACs.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 21:28:58 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Apr 2018 01:04:26 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jun 2018 06:34:21 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Heidari",
"Mohsen",
""
],
[
"Anastasopoulos",
"Achilleas",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] |
new_dataset
| 0.994744 |
1805.10049
|
Niranjan Saikumar
|
Lennart van Duist, Gijs van der Gugten, Daan Toten, Niranjan Saikumar,
Hassan HosseinNia
|
FLOreS - Fractional order loop shaping MATLAB toolbox
|
3rd IFAC Conference on Advances in Proportional-Integral-Derivative
Control 2018
| null |
10.1016/j.ifacol.2018.06.152
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel toolbox named FLOreS is presented for intuitive design of fractional
order controllers (FOC) using the industry-standard loop-shaping technique. This
will allow control engineers to use frequency response data (FRD) of the plant
to design FOCs by shaping the open loop to meet the necessary specifications of
stability, robustness, tracking, precision and bandwidth. FLOreS provides a
graphical approach using closed-loop sensitivity functions for overall insight
into system performance. The main advantage over existing optimization
toolboxes for FOC is that the engineer can use prior knowledge and expertise of
the plant during the design of the FOC. Different approximation methods for
fractional order filters are also included for greater freedom in the final
implementation. This, combined with the included example plants, additionally
enables the toolbox to be used as an educational tool. FLOreS has been used for the design and implementation of
both integer and fractional order controllers on a precision stage to prove
industry readiness.
|
[
{
"version": "v1",
"created": "Fri, 25 May 2018 09:03:38 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"van Duist",
"Lennart",
""
],
[
"van der Gugten",
"Gijs",
""
],
[
"Toten",
"Daan",
""
],
[
"Saikumar",
"Niranjan",
""
],
[
"HosseinNia",
"Hassan",
""
]
] |
new_dataset
| 0.968001 |
1805.12280
|
Haomiao Wang Mr.
|
Haomiao Wang, Prabu Thiagaraj and Oliver Sinnen
|
FPGA-based Acceleration of FT Convolution for Pulsar Search Using OpenCL
|
25 page, 13 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Square Kilometre Array (SKA) project will be the world's largest radio
telescope array. With its large number of antennas, the number of signals that
need to be processed is enormous. One important element of the SKA's Central
Signal Processor package is pulsar search. This paper focuses on the FPGA-based
acceleration of the Frequency-Domain Acceleration Search module, which is a
part of the SKA pulsar search engine. In this module, the frequency-domain input
signals have to be processed by 85 Finite Impulse Response (FIR) filters within
a short time limit and for thousands of input arrays. Because of the
large scale of the input length and FIR filter size, even high-end FPGA devices
cannot parallelise the task completely. We start by investigating both
time-domain FIR filter (TDFIR) and frequency-domain FIR filter (FDFIR) to
tackle this task. We applied the overlap-add algorithm to split the coefficient
array of TDFIR and the overlap-save algorithm to split the input signals of
FDFIR. To achieve fast prototyping design, we employed OpenCL, which is a
high-level FPGA development technique. The performance and power consumption
are evaluated using multiple FPGA devices simultaneously and compared with GPU
results, which are obtained by porting the FPGA-based OpenCL kernels. The
experimental evaluation shows that the FDFIR solution is very competitive in
terms of performance, with a clear energy consumption advantage over the GPU
solution.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 01:18:35 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jun 2018 00:17:54 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Wang",
"Haomiao",
""
],
[
"Thiagaraj",
"Prabu",
""
],
[
"Sinnen",
"Oliver",
""
]
] |
new_dataset
| 0.987774 |
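The record above (arXiv:1805.12280) splits long inputs with the overlap-save algorithm for frequency-domain FIR filtering. The NumPy sketch below is a reference implementation of that splitting, checked against direct time-domain convolution; the block size and test signal are assumptions, and this is in no way the OpenCL/FPGA kernel of the paper.

```python
# Reference overlap-save implementation of FIR filtering via the FFT.
import numpy as np

def fir_overlap_save(x, h, block_len=256):
    m = len(h)
    step = block_len - (m - 1)                 # new samples consumed per block
    H = np.fft.rfft(h, block_len)
    # Prepend m-1 zeros of history and pad the tail to a whole number of steps.
    padded = np.concatenate([np.zeros(m - 1), x, np.zeros((-len(x)) % step)])
    y = np.empty(0)
    for start in range(0, len(padded) - (m - 1), step):
        block = padded[start:start + block_len]
        conv = np.fft.irfft(np.fft.rfft(block) * H, block_len)
        y = np.concatenate([y, conv[m - 1:]])  # discard the m-1 aliased samples
    return y[:len(x)]                          # same-length output as the input

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4000)
    h = np.ones(32) / 32                       # simple moving-average FIR filter
    ref = np.convolve(x, h)[:len(x)]           # direct time-domain result
    out = fir_overlap_save(x, h)
    print(np.max(np.abs(out - ref)))           # should be at numerical precision
```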
1806.11195
|
Seyyed Ali Hashemi
|
Nghia Doan, Seyyed Ali Hashemi, Marco Mondelli, Warren J. Gross
|
On the Decoding of Polar Codes on Permuted Factor Graphs
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes are a channel coding scheme for the next-generation wireless
communications standard (5G). The belief propagation (BP) decoder allows for
parallel decoding of polar codes, making it suitable for high throughput
applications. However, the error-correction performance of polar codes under BP
decoding is far from the requirements of 5G. It has been shown that the
error-correction performance of BP can be improved if the decoding is performed
on multiple permuted factor graphs of polar codes. However, a different BP
decoding scheduling is required for each factor graph permutation which results
in the design of a different decoder for each permutation. Moreover, the
selection of the different factor graph permutations is random, which
prevents the decoder from achieving a desirable error-correction performance with a
small number of permutations. In this paper, we first show that the
permutations on the factor graph can be mapped into suitable permutations on
the codeword positions. As a result, we can make use of a single decoder for
all the permutations. In addition, we introduce a method to construct a set of
predetermined permutations which can provide the correct codeword if the
decoding fails on the original permutation. We show that for the 5G polar code
of length $1024$, the error-correction performance of the proposed decoder is
more than $0.25$ dB better than that of the BP decoder with the same number of
random permutations at the frame error rate of $10^{-4}$.
|
[
{
"version": "v1",
"created": "Thu, 28 Jun 2018 21:12:57 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Doan",
"Nghia",
""
],
[
"Hashemi",
"Seyyed Ali",
""
],
[
"Mondelli",
"Marco",
""
],
[
"Gross",
"Warren J.",
""
]
] |
new_dataset
| 0.997579 |
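The record above (arXiv:1806.11195) maps factor-graph permutations to permutations of codeword positions. One common way to realise a permutation of the log2(N) decoding stages as a position permutation is to permute the bits of each position's binary index; the sketch below shows that index-bit permutation, with the stage-ordering convention (bit 0 = least significant) taken as an assumption for the example rather than from the paper.

```python
# Illustrative sketch: realise a permutation of the n = log2(N) stages as a
# permutation of the N codeword positions by permuting each index's bits.
def index_permutation(n, stage_perm):
    """Map each position i in [0, 2^n) to the position whose index bits are permuted."""
    N = 1 << n
    mapping = []
    for i in range(N):
        bits = [(i >> b) & 1 for b in range(n)]          # bit b of i
        j = sum(bits[stage_perm[b]] << b for b in range(n))
        mapping.append(j)
    return mapping

if __name__ == "__main__":
    n = 3                              # N = 8
    identity = list(range(n))
    reversal = identity[::-1]          # full bit reversal, a familiar special case
    print(index_permutation(n, identity))   # [0, 1, 2, 3, 4, 5, 6, 7]
    print(index_permutation(n, reversal))   # classic bit-reversal order [0, 4, 2, 6, 1, 5, 3, 7]
```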
1806.11216
|
Maximilian Seitzer
|
Maximilian Seitzer and Guang Yang and Jo Schlemper and Ozan Oktay and
Tobias W\"urfl and Vincent Christlein and Tom Wong and Raad Mohiaddin and
David Firmin and Jennifer Keegan and Daniel Rueckert and Andreas Maier
|
Adversarial and Perceptual Refinement for Compressed Sensing MRI
Reconstruction
|
To be published at MICCAI 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning approaches have shown promising performance for compressed
sensing-based Magnetic Resonance Imaging. While deep neural networks trained
with mean squared error (MSE) loss functions can achieve high peak signal to
noise ratio, the reconstructed images are often blurry and lack sharp details,
especially for higher undersampling rates. Recently, adversarial and perceptual
loss functions have been shown to achieve more visually appealing results.
However, it remains an open question how to (1) optimally combine these loss
functions with the MSE loss function and (2) evaluate such a perceptual
enhancement. In this work, we propose a hybrid method, in which a visual
refinement component is learnt on top of an MSE loss-based reconstruction
network. In addition, we introduce a semantic interpretability score, measuring
the visibility of the region of interest in both ground truth and reconstructed
images, which allows us to objectively quantify the usefulness of the image
quality for image post-processing and analysis. Applied on a large cardiac MRI
dataset simulated with 8-fold undersampling, we demonstrate significant
improvements ($p<0.01$) over the state-of-the-art in both a human observer
study and the semantic interpretability score.
|
[
{
"version": "v1",
"created": "Thu, 28 Jun 2018 22:12:39 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Seitzer",
"Maximilian",
""
],
[
"Yang",
"Guang",
""
],
[
"Schlemper",
"Jo",
""
],
[
"Oktay",
"Ozan",
""
],
[
"Würfl",
"Tobias",
""
],
[
"Christlein",
"Vincent",
""
],
[
"Wong",
"Tom",
""
],
[
"Mohiaddin",
"Raad",
""
],
[
"Firmin",
"David",
""
],
[
"Keegan",
"Jennifer",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Maier",
"Andreas",
""
]
] |
new_dataset
| 0.972007 |
1806.11226
|
Kamelia Aryafar
|
Murium Iqbal, Adair Kovac, Kamelia Aryafar
|
A Multimodal Recommender System for Large-scale Assortment Generation in
E-commerce
|
SIGIR eComm Accepted Paper
| null | null | null |
cs.IR cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
E-commerce platforms surface interesting products largely through product
recommendations that capture users' styles and aesthetic preferences. Curating
recommendations as a complete complementary set, or assortment, is critical for
a successful e-commerce experience, especially for product categories such as
furniture, where items are selected together with the overall theme, style or
ambiance of a space in mind. In this paper, we propose two visually-aware
recommender systems that can automatically curate an assortment of living room
furniture around a couple of pre-selected seed pieces for the room. The first
system aims to maximize the visual-based style compatibility of the entire
selection by making use of transfer learning and topic modeling. The second
system extends the first by incorporating text data and applying polylingual
topic modeling to infer style over both modalities. We review the production
pipeline for surfacing these visually-aware recommender systems and compare
them through offline validations and large-scale online A/B tests on Overstock.
Our experimental results show that complementary style is best discovered over
product sets when both visual and textual data are incorporated.
|
[
{
"version": "v1",
"created": "Thu, 28 Jun 2018 23:11:54 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Iqbal",
"Murium",
""
],
[
"Kovac",
"Adair",
""
],
[
"Aryafar",
"Kamelia",
""
]
] |
new_dataset
| 0.992383 |
1806.11263
|
DaeHun Nyang
|
DaeHun Nyang
|
Gruut: A Fully-Decentralized P2P Public Ledger
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Owing to Satoshi Nakamoto's brilliant idea, a P2P public ledger has been shown
to be implementable in an anonymous network. Any Internet user can then join the
anonymous network and contribute to the P2P public ledger by providing their
computing power or proof-of-work. The proof-of-work is a clever implementation
of one-CPU-one-vote by anonymous participants, and it protects the Bitcoin
ledger from illegal modification. To compensate the nodes for their work, a
cryptocurrency called Bitcoin is issued and given to nodes. However, the very
nature of anonymity of the ledger and the cryptocurrency prevent the technology
from being used in fiat money economy. Cryptocurrencies are not traceable even
if they are used for money laundering or tax evasion, and the value of
cryptocurrencies is not stable but fluctuates wildly. In this white paper, we
introduce Gruut, a P2P ledger to implement a universal financial platform for
fiat money. For this purpose, we introduce a new consensus algorithm called
`proof-of-population,' which is one instance of `proof of public
collaboration.' It can be used for multiple purposes; as a P2P ledger for
banks, as a powerful tool for payment, including micropayment, and as a tool
for any type of financial transaction. Even better, it distributes the profit
obtained from transaction fees, currently dominated by a third party, to peers
that cannot be centralized. Energy requirements of Gruut are so low that it is
possible to run our software on a smartphone or on a personal computer without
a graphic card.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 04:20:02 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Nyang",
"DaeHun",
""
]
] |
new_dataset
| 0.998653 |
1806.11301
|
Chenyang Xia
|
YouZhe Fan, ChenYang Xia, Ji Chen, Chi-Ying Tsui, Jie Jin, Hui Shen,
Bin Li
|
A Low-Latency List Successive-Cancellation Decoding Implementation for
Polar Codes
|
15 pages, 13 figures, 5 tables
|
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 34, NO. 2,
FEBRUARY 2016
|
10.1109/JSAC.2015.2504318
| null |
cs.IT cs.AR eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to their provably capacity-achieving performance, polar codes have
attracted a lot of research interest recently. For a good error-correcting
performance, list successive-cancellation decoding (LSCD) with large list size
is used to decode polar codes. However, as the complexity and delay of the list
management operation rapidly increase with the list size, the overall latency
of LSCD becomes large and limits the applicability of polar codes in
high-throughput and latency-sensitive applications. Therefore, in this work,
the low-latency implementation for LSCD with large list size is studied.
Specifically, at the system level, a selective expansion method is proposed
such that some of the reliable bits are not expanded to reduce the computation
and latency. At the algorithmic level, a double thresholding scheme is proposed
as a fast approximate-sorting method for the list management operation to
reduce the LSCD latency for large list size. A VLSI architecture of the LSCD
implementing the selective expansion and double thresholding scheme is then
developed, and implemented using a UMC 90 nm CMOS technology. Experimental
results show that, even for a large list size of 16, the proposed LSCD achieves
a decoding throughput of 460 Mbps at a clock frequency of 658 MHz.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 08:30:09 GMT"
}
] | 2018-07-02T00:00:00 |
[
[
"Fan",
"YouZhe",
""
],
[
"Xia",
"ChenYang",
""
],
[
"Chen",
"Ji",
""
],
[
"Tsui",
"Chi-Ying",
""
],
[
"Jin",
"Jie",
""
],
[
"Shen",
"Hui",
""
],
[
"Li",
"Bin",
""
]
] |
new_dataset
| 0.986395 |