id
stringlengths 9
10
| submitter
stringlengths 2
52
⌀ | authors
stringlengths 4
6.51k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
345
⌀ | doi
stringlengths 11
120
⌀ | report-no
stringlengths 2
243
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | abstract
stringlengths 33
3.33k
| versions
list | update_date
timestamp[s] | authors_parsed
list | prediction
stringclasses 1
value | probability
float64 0.95
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.10958
|
Kevin Kilgour
|
Blaise Ag\"uera y Arcas, Beat Gfeller, Ruiqi Guo, Kevin Kilgour,
Sanjiv Kumar, James Lyon, Julian Odell, Marvin Ritter, Dominik Roblek,
Matthew Sharifi, Mihajlo Velimirovi\'c
|
Now Playing: Continuous low-power music recognition
|
Authors are listed in alphabetical order by last name
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing music recognition applications require a connection to a server that
performs the actual recognition. In this paper we present a low-power music
recognizer that runs entirely on a mobile device and automatically recognizes
music without user interaction. To reduce battery consumption, a small music
detector runs continuously on the mobile device's DSP chip and wakes up the
main application processor only when it is confident that music is present.
Once woken, the recognizer on the application processor is provided with a few
seconds of audio which is fingerprinted and compared to the stored fingerprints
in the on-device fingerprint database of tens of thousands of songs. Our
presented system, Now Playing, has a daily battery usage of less than 1% on
average, respects user privacy by running entirely on-device and can passively
recognize a wide range of music.
|
[
{
"version": "v1",
"created": "Wed, 29 Nov 2017 16:42:52 GMT"
}
] | 2017-11-30T00:00:00 |
[
[
"Arcas",
"Blaise Agüera y",
""
],
[
"Gfeller",
"Beat",
""
],
[
"Guo",
"Ruiqi",
""
],
[
"Kilgour",
"Kevin",
""
],
[
"Kumar",
"Sanjiv",
""
],
[
"Lyon",
"James",
""
],
[
"Odell",
"Julian",
""
],
[
"Ritter",
"Marvin",
""
],
[
"Roblek",
"Dominik",
""
],
[
"Sharifi",
"Matthew",
""
],
[
"Velimirović",
"Mihajlo",
""
]
] |
new_dataset
| 0.997176 |
1711.11017
|
Ethan Perez
|
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca
Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville
|
HoME: a Household Multimodal Environment
|
Presented at NIPS 2017's Visually-Grounded Interaction and Language
Workshop
| null | null | null |
cs.AI cs.CL cs.CV cs.RO cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce HoME: a Household Multimodal Environment for artificial agents
to learn from vision, audio, semantics, physics, and interaction with objects
and other agents, all within a realistic context. HoME integrates over 45,000
diverse 3D house layouts based on the SUNCG dataset, a scale which may
facilitate learning, generalization, and transfer. HoME is an open-source,
OpenAI Gym-compatible platform extensible to tasks in reinforcement learning,
language grounding, sound-based navigation, robotics, multi-agent learning, and
more. We hope HoME better enables artificial agents to learn as humans do: in
an interactive, multimodal, and richly contextualized setting.
|
[
{
"version": "v1",
"created": "Wed, 29 Nov 2017 18:45:59 GMT"
}
] | 2017-11-30T00:00:00 |
[
[
"Brodeur",
"Simon",
""
],
[
"Perez",
"Ethan",
""
],
[
"Anand",
"Ankesh",
""
],
[
"Golemo",
"Florian",
""
],
[
"Celotti",
"Luca",
""
],
[
"Strub",
"Florian",
""
],
[
"Rouat",
"Jean",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] |
new_dataset
| 0.999814 |
1607.06140
|
Rafael Reisenhofer
|
Rafael Reisenhofer, Sebastian Bosse, Gitta Kutyniok and Thomas Wiegand
|
A Haar Wavelet-Based Perceptual Similarity Index for Image Quality
Assessment
| null |
Signal Processing: Image Communication 61 (2018) 33-43
|
10.1016/j.image.2017.11.001
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In most practical situations, the compression or transmission of images and
videos creates distortions that will eventually be perceived by a human
observer. Vice versa, image and video restoration techniques, such as
inpainting or denoising, aim to enhance the quality of experience of human
viewers. Correctly assessing the similarity between an image and an undistorted
reference image as subjectively experienced by a human viewer can thus lead to
significant improvements in any transmission, compression, or restoration
system. This paper introduces the Haar wavelet-based perceptual similarity
index (HaarPSI), a novel and computationally inexpensive similarity measure for
full reference image quality assessment. The HaarPSI utilizes the coefficients
obtained from a Haar wavelet decomposition to assess local similarities between
two images, as well as the relative importance of image areas. The consistency
of the HaarPSI with the human quality of experience was validated on four large
benchmark databases containing thousands of differently distorted images. On
these databases, the HaarPSI achieves higher correlations with human opinion
scores than state-of-the-art full reference similarity measures like the
structural similarity index (SSIM), the feature similarity index (FSIM), and
the visual saliency-based index (VSI). Along with the simple computational
structure and the short execution time, these experimental results suggest a
high applicability of the HaarPSI in real world tasks.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2016 22:30:31 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2017 19:11:14 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Aug 2017 11:16:29 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Nov 2017 01:33:21 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Reisenhofer",
"Rafael",
""
],
[
"Bosse",
"Sebastian",
""
],
[
"Kutyniok",
"Gitta",
""
],
[
"Wiegand",
"Thomas",
""
]
] |
new_dataset
| 0.998426 |
1702.05512
|
Parminder Bhatia
|
Parminder Bhatia, Marsal Gavalda and Arash Einolghozati
|
soc2seq: Social Embedding meets Conversation Model
| null | null | null | null |
cs.SI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While liking or upvoting a post on a mobile app is easy to do, replying with
a written note is much more difficult, due to both the cognitive load of coming
up with a meaningful response as well as the mechanics of entering the text.
Here we present a novel textual reply generation model that goes beyond the
current auto-reply and predictive text entry models by taking into account the
content preferences of the user, the idiosyncrasies of their conversational
style, and even the structure of their social graph. Specifically, we have
developed two types of models for personalized user interactions: a
content-based conversation model, which makes use of location together with
user information, and a social-graph-based conversation model, which combines
content-based conversation models with social graphs.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2017 20:26:50 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2017 15:14:22 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Nov 2017 22:21:52 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Bhatia",
"Parminder",
""
],
[
"Gavalda",
"Marsal",
""
],
[
"Einolghozati",
"Arash",
""
]
] |
new_dataset
| 0.981118 |
1704.02792
|
Yuxin Peng
|
Xiangteng He and Yuxin Peng
|
Fine-graind Image Classification via Combining Vision and Language
|
9 pages, to appear in CVPR 2017
| null |
10.1109/CVPR.2017.775
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-grained image classification is a challenging task due to the large
intra-class variance and small inter-class variance, aiming at recognizing
hundreds of sub-categories belonging to the same basic-level category. Most
existing fine-grained image classification methods generally learn part
detection models to obtain the semantic parts for better classification
accuracy. Despite achieving promising results, these methods mainly have two
limitations: (1) not all the parts which obtained through the part detection
models are beneficial and indispensable for classification, and (2)
fine-grained image classification requires more detailed visual descriptions
which could not be provided by the part locations or attribute annotations. For
addressing the above two limitations, this paper proposes the two-stream model
combining vision and language (CVL) for learning latent semantic
representations. The vision stream learns deep representations from the
original visual information via deep convolutional neural network. The language
stream utilizes the natural language descriptions which could point out the
discriminative parts or characteristics for each image, and provides a flexible
and compact way of encoding the salient visual aspects for distinguishing
sub-categories. Since the two streams are complementary, combining the two
streams can further achieves better classification accuracy. Comparing with 12
state-of-the-art methods on the widely used CUB-200-2011 dataset for
fine-grained image classification, the experimental results demonstrate our CVL
approach achieves the best performance.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2017 10:34:06 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2017 03:01:38 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"He",
"Xiangteng",
""
],
[
"Peng",
"Yuxin",
""
]
] |
new_dataset
| 0.999598 |
1711.10002
|
Priya Arora
|
Dhanasekar Sundararaman, Priya Arora, Vishwanath Seshagiri
|
TweetIT- Analyzing Topics for Twitter Users to garner Maximum Attention
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Twitter, a microblogging service, is todays most popular platform for
communication in the form of short text messages, called Tweets. Users use
Twitter to publish their content either for expressing concerns on information
news or views on daily conversations. When this expression emerges, they are
experienced by the worldwide distribution network of users and not only by the
interlocutor(s). Depending upon the impact of the tweet in the form of the
likes, retweets and percentage of followers increases for the user considering
a window of time frame, we compute attention factor for each tweet for the
selected user profiles. This factor is used to select the top 1000 Tweets, from
each user profile, to form a document. Topic modelling is then applied to this
document to determine the intent of the user behind the Tweets. After topics
are modelled, the similarity is determined between the BBC news data-set
containing the modelled topic, and the user document under evaluation. Finally,
we determine the top words for a user which would enable us to find the topics
which garnered attention and has been posted recently. The experiment is
performed using more than 1.1M Tweets from around 500 Twitter profiles spanning
Politics, Entertainment, Sports etc. and hundreds of BBC news articles. The
results show that our analysis is efficient enough to enable us to find the
topics which would act as a suggestion for users to get higher popularity
rating for the user in the future.
|
[
{
"version": "v1",
"created": "Mon, 27 Nov 2017 21:10:48 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Sundararaman",
"Dhanasekar",
""
],
[
"Arora",
"Priya",
""
],
[
"Seshagiri",
"Vishwanath",
""
]
] |
new_dataset
| 0.99169 |
1711.10006
|
Fabian Manhardt
|
Wadim Kehl, Fabian Manhardt, Federico Tombari, Slobodan Ilic, Nassir
Navab
|
SSD-6D: Making RGB-based 3D detection and 6D pose estimation great again
|
The first two authors contributed equally to this work
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel method for detecting 3D model instances and estimating
their 6D poses from RGB data in a single shot. To this end, we extend the
popular SSD paradigm to cover the full 6D pose space and train on synthetic
model data only. Our approach competes or surpasses current state-of-the-art
methods that leverage RGB-D data on multiple challenging datasets. Furthermore,
our method produces these results at around 10Hz, which is many times faster
than the related methods. For the sake of reproducibility, we make our trained
networks and detection code publicly available.
|
[
{
"version": "v1",
"created": "Mon, 27 Nov 2017 21:17:51 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Kehl",
"Wadim",
""
],
[
"Manhardt",
"Fabian",
""
],
[
"Tombari",
"Federico",
""
],
[
"Ilic",
"Slobodan",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.97475 |
1711.10093
|
Jherez Taylor
|
Jherez Taylor, Melvyn Peignon, Yi-Shin Chen
|
Surfacing contextual hate speech words within social media
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media platforms have recently seen an increase in the occurrence of
hate speech discourse which has led to calls for improved detection methods.
Most of these rely on annotated data, keywords, and a classification technique.
While this approach provides good coverage, it can fall short when dealing with
new terms produced by online extremist communities which act as original
sources of words which have alternate hate speech meanings. These code words
(which can be both created and adopted words) are designed to evade automatic
detection and often have benign meanings in regular discourse. As an example,
"skypes", "googles", and "yahoos" are all instances of words which have an
alternate meaning that can be used for hate speech. This overlap introduces
additional challenges when relying on keywords for both the collection of data
that is specific to hate speech, and downstream classification. In this work,
we develop a community detection approach for finding extremist hate speech
communities and collecting data from their members. We also develop a word
embedding model that learns the alternate hate speech meaning of words and
demonstrate the candidacy of our code words with several annotation
experiments, designed to determine if it is possible to recognize a word as
being used for hate speech without knowing its alternate meaning. We report an
inter-annotator agreement rate of K=0.871, and K=0.676 for data drawn from our
extremist community and the keyword approach respectively, supporting our claim
that hate speech detection is a contextual task and does not depend on a fixed
list of keywords. Our goal is to advance the domain by providing a high quality
hate speech dataset in addition to learned code words that can be fed into
existing classification approaches, thus improving the accuracy of automated
detection.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 02:56:12 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Taylor",
"Jherez",
""
],
[
"Peignon",
"Melvyn",
""
],
[
"Chen",
"Yi-Shin",
""
]
] |
new_dataset
| 0.998237 |
1711.10104
|
Kasturi Vasudevan
|
K. Vasudevan
|
Near Capacity Signaling over Fading Channels using Coherent Turbo Coded
OFDM and Massive MIMO
|
16 pages, 12 figures, 5 tables, journal
|
International Journal On Advances in Telecommunications, issn
1942-2601, vol. 10, no. 1 & 2, year 2017, 22:37,
http://www.iariajournals.org/telecommunications/
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The minimum average signal-to-noise ratio (SNR) per bit required for
error-free transmission over a fading channel is derived, and is shown to be
equal to that of the additive white Gaussian noise (AWGN) channel, which is
$-1.6$ dB. Discrete-time algorithms are presented for timing and carrier
synchronization, as well as channel estimation, for turbo coded multiple input
multiple output (MIMO) orthogonal frequency division multiplexed (OFDM)
systems. Simulation results show that it is possible to achieve a bit error
rate of $10^{-5}$ at an average SNR per bit of 5.5 dB, using two transmit and
two receive antennas. We then propose a near-capacity signaling method in which
each transmit antenna uses a different carrier frequency. Using the
near-capacity approach, we show that it is possible to achieve a BER of
$2\times 10^{-5}$ at an average SNR per bit of just 2.5 dB, with one receive
antenna for each transmit antenna. When the number of receive antennas for each
transmit antenna is increased to 128, then a BER of $2\times 10^{-5}$ is
attained at an average SNR per bit of 1.25 dB. In all cases, the number of
transmit antennas is two and the spectral efficiency is 1 bit/transmission or 1
bit/sec/Hz. In other words, each transmit antenna sends 0.5 bit/transmission.
It is possible to obtain higher spectral efficiency by increasing the number of
transmit antennas, with no loss in BER performance, as long as each transmit
antenna uses a different carrier frequency. The transmitted signal spectrum for
the near-capacity approach can be restricted by pulse-shaping. In all the
simulations, a four-state turbo code is used. The corresponding turbo decoder
uses eight iterations. The algorithms can be implemented on programmable
hardware and there is a large scope for parallel processing.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 03:44:25 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Vasudevan",
"K.",
""
]
] |
new_dataset
| 0.993097 |
1711.10131
|
Shan Suthaharan
|
Shan Suthaharan
|
A fatal point concept and a low-sensitivity quantitative measure for
traffic safety analytics
| null | null | null | null |
cs.CV stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The variability of the clusters generated by clustering techniques in the
domain of latitude and longitude variables of fatal crash data are
significantly unpredictable. This unpredictability, caused by the randomness of
fatal crash incidents, reduces the accuracy of crash frequency (i.e., counts of
fatal crashes per cluster) which is used to measure traffic safety in practice.
In this paper, a quantitative measure of traffic safety that is not
significantly affected by the aforementioned variability is proposed. It
introduces a fatal point -- a segment with the highest frequency of fatality --
concept based on cluster characteristics and detects them by imposing rounding
errors to the hundredth decimal place of the longitude. The frequencies of the
cluster and the cluster's fatal point are combined to construct a low-sensitive
quantitative measure of traffic safety for the cluster. The performance of the
proposed measure of traffic safety is then studied by varying the parameter k
of k-means clustering with the expectation that other clustering techniques can
be adopted in a similar fashion. The 2015 North Carolina fatal crash dataset of
Fatality Analysis Reporting System (FARS) is used to evaluate the proposed
fatal point concept and perform experimental analysis to determine the
effectiveness of the proposed measure. The empirical study shows that the
average traffic safety, measured by the proposed quantitative measure over
several clusters, is not significantly affected by the variability, compared to
that of the standard crash frequency.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 05:37:37 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Suthaharan",
"Shan",
""
]
] |
new_dataset
| 0.963191 |
1711.10188
|
Emilio Mart\'inez-Pa\~neda
|
George Papazafeiropoulos, Miguel Mu\~niz-Calvente, Emilio
Mart\'inez-Pa\~neda
|
Abaqus2Matlab: A suitable tool for finite element post-processing
| null |
Advances in Engineering Software 105, pp. 9-16 (2017)
|
10.1016/j.advengsoft.2017.01.006
| null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A suitable piece of software is presented to connect Abaqus, a sophisticated
finite element package, with Matlab, the most comprehensive program for
mathematical analysis. This interface between these well-known codes not only
benefits from the image processing and the integrated graph-plotting features
of Matlab but also opens up new opportunities in results post-processing,
statistical analysis and mathematical optimization, among many other
possibilities. The software architecture and usage are appropriately described
and two problems of particular engineering significance are addressed to
demonstrate its capabilities. Firstly, the software is employed to assess
cleavage fracture through a novel 3-parameter Weibull probabilistic framework.
Then, its potential to create and train neural networks is used to identify
damage parameters through a hybrid experimental-numerical scheme, and model
crack propagation in structural materials by means of a cohesive zone approach.
The source code, detailed documentation and a large number of tutorials can be
freely downloaded from www.abaqus2matlab.com.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 09:06:44 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Papazafeiropoulos",
"George",
""
],
[
"Muñiz-Calvente",
"Miguel",
""
],
[
"Martínez-Pañeda",
"Emilio",
""
]
] |
new_dataset
| 0.992645 |
1711.10192
|
Asaf Shabtai
|
Edan Habler, Asaf Shabtai
|
Using LSTM Encoder-Decoder Algorithm for Detecting Anomalous ADS-B
Messages
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although the ADS-B system is going to play a major role in the safe
navigation of airplanes and air traffic control (ATC) management, it is also
well known for its lack of security mechanisms. Previous research has proposed
various methods for improving the security of the ADS-B system and mitigating
associated risks. However, these solutions typically require the use of
additional participating nodes (or sensors) (e.g., to verify the location of
the airplane by analyzing the physical signal) or modification of the current
protocol architecture (e.g., adding encryption or authentication mechanisms.)
Due to the regulation process regarding avionic systems and the fact that the
ADS-B system is already deployed in most airplanes, applying such modifications
to the current protocol at this stage is impractical. In this paper we propose
an alternative security solution for detecting anomalous ADS-B messages aimed
at the detection of spoofed or manipulated ADS- B messages sent by an attacker
or compromised airplane. The proposed approach utilizes an LSTM encoder-decoder
algorithm for modeling flight routes by analyzing sequences of legitimate ADS-B
messages. Using these models, aircraft can autonomously evaluate received ADS-B
messages and identify deviations from the legitimate flight path (i.e.,
anomalies). We examined our approach on six different flight route datasets to
which we injected different types of anomalies. Using our approach we were able
to detect all of the injected attacks with an average false alarm rate of 4.3%
for all of datasets.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 09:09:54 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Habler",
"Edan",
""
],
[
"Shabtai",
"Asaf",
""
]
] |
new_dataset
| 0.983495 |
1711.10201
|
Marco Peressotti
|
Lu\'is Cruz-Filipe, Fabrizio Montesi, Marco Peressotti
|
Communications in Choreographies, Revisited
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Choreographic Programming is a paradigm for developing
correct-by-construction concurrent programs, by writing high-level descriptions
of the desired communications and then synthesising process implementations
automatically. So far, choreographic programming has been explored in the
monadic setting: interaction terms express point-to-point communications of a
single value. However, real-world systems often rely on interactions of
polyadic nature, where multiple values are communicated among two or more
parties, like multicast, scatter-gather, and atomic exchanges. We introduce a
new model for choreographic programming equipped with a primitive for grouped
interactions that subsumes all the above scenarios. Intuitively, grouped
interactions can be thought of as being carried out as one single interaction.
In practice, they are implemented by processes that carry them out in a
concurrent fashion. After formalising the intuitive semantics of grouped
interactions, we prove that choreographic programs and their implementations
are correct and deadlock-free by construction.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 09:37:02 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Cruz-Filipe",
"Luís",
""
],
[
"Montesi",
"Fabrizio",
""
],
[
"Peressotti",
"Marco",
""
]
] |
new_dataset
| 0.96957 |
1711.10400
|
Simon Kohl
|
Simon Kohl, David Bonekamp, Heinz-Peter Schlemmer, Kaneschka Yaqubi,
Markus Hohenfellner, Boris Hadaschik, Jan-Philipp Radtke, Klaus Maier-Hein
|
Adversarial Networks for Prostate Cancer Detection
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The large number of trainable parameters of deep neural networks renders them
inherently data hungry. This characteristic heavily challenges the medical
imaging community and to make things even worse, many imaging modalities are
ambiguous in nature leading to rater-dependant annotations that current loss
formulations fail to capture. We propose employing adversarial training for
segmentation networks in order to alleviate aforementioned problems. We learn
to segment aggressive prostate cancer utilizing challenging MRI images of 152
patients and show that the proposed scheme is superior over the de facto
standard in terms of the detection sensitivity and the dice-score for
aggressive prostate cancer. The achieved relative gains are shown to be
particularly pronounced in the small dataset limit.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 16:53:33 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Kohl",
"Simon",
""
],
[
"Bonekamp",
"David",
""
],
[
"Schlemmer",
"Heinz-Peter",
""
],
[
"Yaqubi",
"Kaneschka",
""
],
[
"Hohenfellner",
"Markus",
""
],
[
"Hadaschik",
"Boris",
""
],
[
"Radtke",
"Jan-Philipp",
""
],
[
"Maier-Hein",
"Klaus",
""
]
] |
new_dataset
| 0.991206 |
1711.10433
|
A\"aron van den Oord
|
Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol
Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis
C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury,
Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen
King, Tom Walters, Dan Belov, Demis Hassabis
|
Parallel WaveNet: Fast High-Fidelity Speech Synthesis
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently-developed WaveNet architecture is the current state of the art
in realistic speech synthesis, consistently rated as more natural sounding for
many different languages than any previous system. However, because WaveNet
relies on sequential generation of one audio sample at a time, it is poorly
suited to today's massively parallel computers, and therefore hard to deploy in
a real-time production setting. This paper introduces Probability Density
Distillation, a new method for training a parallel feed-forward network from a
trained WaveNet with no significant difference in quality. The resulting system
is capable of generating high-fidelity speech samples at more than 20 times
faster than real-time, and is deployed online by Google Assistant, including
serving multiple English and Japanese voices.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 17:48:11 GMT"
}
] | 2017-11-29T00:00:00 |
[
[
"Oord",
"Aaron van den",
""
],
[
"Li",
"Yazhe",
""
],
[
"Babuschkin",
"Igor",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Kavukcuoglu",
"Koray",
""
],
[
"Driessche",
"George van den",
""
],
[
"Lockhart",
"Edward",
""
],
[
"Cobo",
"Luis C.",
""
],
[
"Stimberg",
"Florian",
""
],
[
"Casagrande",
"Norman",
""
],
[
"Grewe",
"Dominik",
""
],
[
"Noury",
"Seb",
""
],
[
"Dieleman",
"Sander",
""
],
[
"Elsen",
"Erich",
""
],
[
"Kalchbrenner",
"Nal",
""
],
[
"Zen",
"Heiga",
""
],
[
"Graves",
"Alex",
""
],
[
"King",
"Helen",
""
],
[
"Walters",
"Tom",
""
],
[
"Belov",
"Dan",
""
],
[
"Hassabis",
"Demis",
""
]
] |
new_dataset
| 0.992172 |
1612.02095
|
Evan Racah Mr.
|
Evan Racah, Christopher Beckham, Tegan Maharaj, Samira Ebrahimi Kahou,
Prabhat, Christopher Pal
|
ExtremeWeather: A large-scale climate dataset for semi-supervised
detection, localization, and understanding of extreme weather events
| null | null | null | null |
cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Then detection and identification of extreme weather events in large-scale
climate simulations is an important problem for risk management, informing
governmental policy decisions and advancing our basic understanding of the
climate system. Recent work has shown that fully supervised convolutional
neural networks (CNNs) can yield acceptable accuracy for classifying well-known
types of extreme weather events when large amounts of labeled data are
available. However, many different types of spatially localized climate
patterns are of interest including hurricanes, extra-tropical cyclones, weather
fronts, and blocking events among others. Existing labeled data for these
patterns can be incomplete in various ways, such as covering only certain years
or geographic areas and having false negatives. This type of climate data
therefore poses a number of interesting machine learning challenges. We present
a multichannel spatiotemporal CNN architecture for semi-supervised bounding box
prediction and exploratory data analysis. We demonstrate that our approach is
able to leverage temporal information and unlabeled data to improve the
localization of extreme weather events. Further, we explore the representations
learned by our model in order to better understand this important data. We
present a dataset, ExtremeWeather, to encourage machine learning research in
this area and to help facilitate further work in understanding and mitigating
the effects of climate change. The dataset is available at
extremeweatherdataset.github.io and the code is available at
https://github.com/eracah/hur-detect.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2016 01:46:09 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Nov 2017 23:44:46 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Racah",
"Evan",
""
],
[
"Beckham",
"Christopher",
""
],
[
"Maharaj",
"Tegan",
""
],
[
"Kahou",
"Samira Ebrahimi",
""
],
[
"Prabhat",
"",
""
],
[
"Pal",
"Christopher",
""
]
] |
new_dataset
| 0.999757 |
1708.08086
|
Dimitris Chatzopoulos
|
Dimitris Chatzopoulos, Sujit Gujar, Boi Faltings, Pan Hui
|
LocalCoin: An Ad-hoc Payment Scheme for Areas with High Connectivity
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The popularity of digital currencies, especially cryptocurrencies, has been
continuously growing since the appearance of Bitcoin. Bitcoin's security lies
in a proof-of-work scheme, which requires high computational resources at the
miners. Despite advances in mobile technology, existing cryptocurrencies cannot
be maintained by mobile devices due to their low processing capabilities.
Mobile devices can only accommodate mobile applications (wallets) that allow
users to exchange credits of cryptocurrencies. In this work, we propose
LocalCoin, an alternative cryptocurrency that requires minimal computational
resources, produces low data traffic and works with off-the-shelf mobile
devices. LocalCoin replaces the computational hardness that is at the root of
Bitcoin's security with the social hardness of ensuring that all witnesses to a
transaction are colluders. Localcoin features (i) a lightweight proof-of-work
scheme and (ii) a distributed blockchain. We analyze LocalCoin for double
spending for passive and active attacks and prove that under the assumption of
sufficient number of users and properly selected tuning parameters the
probability of double spending is close to zero. Extensive simulations on real
mobility traces, realistic urban settings, and random geometric graphs show
that the probability of success of one transaction converges to 1 and the
probability of the success of a double spending attempt converges to 0.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2017 13:39:43 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Nov 2017 10:41:12 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Chatzopoulos",
"Dimitris",
""
],
[
"Gujar",
"Sujit",
""
],
[
"Faltings",
"Boi",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.999687 |
1709.00551
|
Yong Xu Dr
|
Yong Xu, Qiuqiang Kong, Wenwu Wang, Mark D. Plumbley
|
Surrey-cvssp system for DCASE2017 challenge task4
|
DCASE2017 challenge ranked 1st system, task4, tech report
| null | null | null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technique report, we present a bunch of methods for the task 4 of
Detection and Classification of Acoustic Scenes and Events 2017 (DCASE2017)
challenge. This task evaluates systems for the large-scale detection of sound
events using weakly labeled training data. The data are YouTube video excerpts
focusing on transportation and warnings due to their industry applications.
There are two tasks, audio tagging and sound event detection from weakly
labeled data. Convolutional neural network (CNN) and gated recurrent unit (GRU)
based recurrent neural network (RNN) are adopted as our basic framework. We
proposed a learnable gating activation function for selecting informative local
features. Attention-based scheme is used for localizing the specific events in
a weakly-supervised mode. A new batch-level balancing strategy is also proposed
to tackle the data unbalancing problem. Fusion of posteriors from different
systems are found effective to improve the performance. In a summary, we get
61% F-value for the audio tagging subtask and 0.73 error rate (ER) for the
sound event detection subtask on the development set. While the official
multilayer perceptron (MLP) based baseline just obtained 13.1% F-value for the
audio tagging and 1.02 for the sound event detection.
|
[
{
"version": "v1",
"created": "Sat, 2 Sep 2017 09:40:06 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Nov 2017 20:21:32 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Xu",
"Yong",
""
],
[
"Kong",
"Qiuqiang",
""
],
[
"Wang",
"Wenwu",
""
],
[
"Plumbley",
"Mark D.",
""
]
] |
new_dataset
| 0.998694 |
1710.08315
|
Jinhua Tao
|
Jinhua Tao, Zidong Du, Qi Guo, Huiying Lan, Lei Zhang, Shengyuan Zhou,
Lingjie Xu, Cong Liu, Haifeng Liu, Shan Tang, Allen Rush, Willian Chen,
Shaoli Liu, Yunji Chen, Tianshi Chen
|
BENCHIP: Benchmarking Intelligence Processors
|
37pages, 14 figures
| null | null | null |
cs.PF cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing attention on deep learning has tremendously spurred the design
of intelligence processing hardware. The variety of emerging intelligence
processors requires standard benchmarks for fair comparison and system
optimization (in both software and hardware). However, existing benchmarks are
unsuitable for benchmarking intelligence processors due to their non-diversity
and nonrepresentativeness. Also, the lack of a standard benchmarking
methodology further exacerbates this problem. In this paper, we propose
BENCHIP, a benchmark suite and benchmarking methodology for intelligence
processors. The benchmark suite in BENCHIP consists of two sets of benchmarks:
microbenchmarks and macrobenchmarks. The microbenchmarks consist of
single-layer networks. They are mainly designed for bottleneck analysis and
system optimization. The macrobenchmarks contain state-of-the-art industrial
networks, so as to offer a realistic comparison of different platforms. We also
propose a standard benchmarking methodology built upon an industrial software
stack and evaluation metrics that comprehensively reflect the various
characteristics of the evaluated intelligence processors. BENCHIP is utilized
for evaluating various hardware platforms, including CPUs, GPUs, and
accelerators. BENCHIP will be open-sourced soon.
|
[
{
"version": "v1",
"created": "Mon, 23 Oct 2017 14:53:54 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Nov 2017 10:37:09 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Tao",
"Jinhua",
""
],
[
"Du",
"Zidong",
""
],
[
"Guo",
"Qi",
""
],
[
"Lan",
"Huiying",
""
],
[
"Zhang",
"Lei",
""
],
[
"Zhou",
"Shengyuan",
""
],
[
"Xu",
"Lingjie",
""
],
[
"Liu",
"Cong",
""
],
[
"Liu",
"Haifeng",
""
],
[
"Tang",
"Shan",
""
],
[
"Rush",
"Allen",
""
],
[
"Chen",
"Willian",
""
],
[
"Liu",
"Shaoli",
""
],
[
"Chen",
"Yunji",
""
],
[
"Chen",
"Tianshi",
""
]
] |
new_dataset
| 0.990858 |
1711.08521
|
Ibrahim Aljarah
|
Wadi' Hijawi, Hossam Faris, Ja'far Alqatawna, Ibrahim Aljarah, Ala' M.
Al-Zoubi, and Maria Habib
|
EMFET: E-mail Features Extraction Tool
| null | null |
10.13140/RG.2.2.32995.45603
| null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
EMFET is an open source and flexible tool that can be used to extract a large
number of features from any email corpus with emails saved in EML format. The
extracted features can be categorized into three main groups: header features,
payload (body) features, and attachment features. The purpose of the tool is to
help practitioners and researchers to build datasets that can be used for
training machine learning models for spam detection. So far, 140 features can
be extracted using EMFET. EMFET is extensible and easy to use. The source code
of EMFET is publicly available at GitHub
(https://github.com/WadeaHijjawi/EmailFeaturesExtraction)
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 22:24:20 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Hijawi",
"Wadi'",
""
],
[
"Faris",
"Hossam",
""
],
[
"Alqatawna",
"Ja'far",
""
],
[
"Aljarah",
"Ibrahim",
""
],
[
"Al-Zoubi",
"Ala' M.",
""
],
[
"Habib",
"Maria",
""
]
] |
new_dataset
| 0.973663 |
1711.09281
|
Milod Kazerounian
|
Milod Kazerounian, Niki Vazou, Austin Bourgerie, Jeffrey S. Foster,
Emina Torlak
|
Refinement Types for Ruby
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Refinement types are a popular way to specify and reason about key program
properties. In this paper, we introduce RTR, a new system that adds refinement
types to Ruby. RTR is built on top of RDL, a Ruby type checker that provides
basic type information for the verification process. RTR works by encoding its
verification problems into Rosette, a solver-aided host language. RTR handles
mixins through assume-guarantee reasoning and uses just-in-time verification
for metaprogramming. We formalize RTR by showing a translation from a core,
Ruby-like language with refinement types into Rosette. We apply RTR to check a
range of functional correctness properties on six Ruby programs. We find that
RTR can successfully verify key methods in these programs, taking only a few
minutes to perform verification.
|
[
{
"version": "v1",
"created": "Sat, 25 Nov 2017 20:18:50 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Kazerounian",
"Milod",
""
],
[
"Vazou",
"Niki",
""
],
[
"Bourgerie",
"Austin",
""
],
[
"Foster",
"Jeffrey S.",
""
],
[
"Torlak",
"Emina",
""
]
] |
new_dataset
| 0.967852 |
1711.09299
|
Jiankang Zhang
|
Jiankang Zhang, Sheng Chen, Robert G. Maunder, Rong Zhang, Lajos Hanzo
|
Adaptive Coding and Modulation for Large-Scale Antenna Array Based
Aeronautical Communications in the Presence of Co-channel Interference
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to meet the demands of `Internet above the clouds', we propose a
multiple-antenna aided adaptive coding and modulation (ACM) for aeronautical
communications. The proposed ACM scheme switches its coding and modulation mode
according to the distance between the communicating aircraft, which is readily
available with the aid of the airborne radar or the global positioning system.
We derive an asymptotic closed-form expression of the
signal-to-interference-plus-noise ratio (SINR) as the number of transmitting
antennas tends to infinity, in the presence of realistic co-channel
interference and channel estimation errors. The achievable transmission rates
and the corresponding mode-switching distance-thresholds are readily obtained
based on this closed-form SINR formula. Monte-Carlo simulation results are used
to validate our theoretical analysis. For the specific example of 32 transmit
antennas and 4 receive antennas communicating at a 5 GHz carrier frequency and
using 6 MHz bandwidth, which are reused by multiple other pairs of
communicating aircraft, the proposed distance-based ACM is capable of providing
as high as 65.928 Mbps data rate when the communication distance is less than
25\,km.
|
[
{
"version": "v1",
"created": "Sat, 25 Nov 2017 21:48:31 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Zhang",
"Jiankang",
""
],
[
"Chen",
"Sheng",
""
],
[
"Maunder",
"Robert G.",
""
],
[
"Zhang",
"Rong",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.996857 |
1711.09327
|
Anastasia Mavridou
|
Anastasia Mavridou, Aron Laszka
|
Designing Secure Ethereum Smart Contracts: A Finite State Machine Based
Approach
| null | null | null | null |
cs.CR cs.FL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adoption of blockchain-based distributed computation platforms is growing
fast. Some of these platforms, such as Ethereum, provide support for
implementing smart contracts, which are envisioned to have novel applications
in a broad range of areas, including finance and Internet-of-Things. However, a
significant number of smart contracts deployed in practice suffer from security
vulnerabilities, which enable malicious users to steal assets from a contract
or to cause damage. Vulnerabilities present a serious issue since contracts may
handle financial assets of considerable value, and contract bugs are
non-fixable by design. To help developers create more secure smart contracts,
we introduce FSolidM, a framework rooted in rigorous semantics for designing
con- tracts as Finite State Machines (FSM). We present a tool for creating FSM
on an easy-to-use graphical interface and for automatically generating Ethereum
contracts. Further, we introduce a set of design patterns, which we implement
as plugins that developers can easily add to their contracts to enhance
security and functionality.
|
[
{
"version": "v1",
"created": "Sun, 26 Nov 2017 03:05:42 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Mavridou",
"Anastasia",
""
],
[
"Laszka",
"Aron",
""
]
] |
new_dataset
| 0.988131 |
1711.09400
|
Elham Taghizadeh
|
Elham Taghizadeh and Mostafa Abedzadeh and Mostafa Setak
|
A Multi Objective Reliable Location-Inventory Capacitated Disruption
Facility Problem with Penalty Cost Solve with Efficient Meta Historic
Algorithms
| null | null | null | null |
cs.DS stat.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Logistics network is expected that opened facilities work continuously for a
long time horizon without any failure, but in real world problems, facilities
may face disruptions. This paper studies a reliable joint inventory location
problem to optimize the cost of facility locations, customers assignment, and
inventory management decisions when facilities face failure risks and do not
work. In our model we assume when a facility is out of work, its customers may
be reassigned to other operational facilities otherwise they must endure high
penalty costs associated with losing service. For defining the model closer to
real world problems, the model is proposed based on pmedian problem and the
facilities are considered to have limited capacities. We define a new binary
variable for showing that customers are not assigned to any facilities. Our
problem involves a biobjective model, the first one minimizes the sum of
facility construction costs and expected inventory holding costs, the second
one function that mentions for the first one is minimized maximum expected
customer costs under normal and failure scenarios. For solving this model we
use NSGAII and MOSS algorithms have been applied to find the Pareto archive
solution. Also, Response Surface Methodology (RSM) is applied for optimizing
the NSGAII Algorithm Parameters. We compare the performance of two algorithms
with three metrics and the results show NSGAII is more suitable for our model.
|
[
{
"version": "v1",
"created": "Sun, 26 Nov 2017 15:04:06 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Taghizadeh",
"Elham",
""
],
[
"Abedzadeh",
"Mostafa",
""
],
[
"Setak",
"Mostafa",
""
]
] |
new_dataset
| 0.988152 |
1711.09411
|
Jiawei Zhang
|
Jiawei Zhang, Limeng Cui, Philip S. Yu and Yuanhua Lv
|
BL-ECD: Broad Learning based Enterprise Community Detection via
Hierarchical Structure Fusion
|
10 Pages, 12 Figures. Full paper has been accepted by CIKM 2017,In:
Proceedings of the 2017 International Conference on Information and Knowledge
Management
| null | null | null |
cs.SI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Employees in companies can be divided into di erent communities, and those
who frequently socialize with each other will be treated as close friends and
are grouped in the same community. In the enterprise context, a large amount of
information about the employees is available in both (1) o ine company internal
sources and (2) online enterprise social networks (ESNs). Each of the
information sources also contain multiple categories of employees'
socialization activities at the same time. In this paper, we propose to detect
the social communities of the employees in companies based on the broad
learning se ing with both these online and o ine information sources
simultaneously, and the problem is formally called the "Broad Learning based
Enterprise Community Detection" (BL-Ecd) problem. To address the problem, a
novel broad learning based community detection framework named "HeterogeneoUs
Multi-sOurce ClusteRing" (Humor) is introduced in this paper. Based on the
various enterprise social intimacy measures introduced in this paper, Humor
detects a set of micro community structures of the employees based on each of
the socialization activities respectively. To obtain the (globally) consistent
community structure of employees in the company, Humor further fuses these
micro community structures via two broad learning phases: (1) intra-fusion of
micro community structures to obtain the online and o ine (locally) consistent
communities respectively, and (2) inter-fusion of the online and o ine
communities to achieve the (globally) consistent community structure of
employees. Extensive experiments conducted on real-world enterprise datasets
demonstrate our method can perform very well in addressing the BL-Ecd problem.
|
[
{
"version": "v1",
"created": "Sun, 26 Nov 2017 15:56:06 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Zhang",
"Jiawei",
""
],
[
"Cui",
"Limeng",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Lv",
"Yuanhua",
""
]
] |
new_dataset
| 0.986851 |
1711.09414
|
Boyu Liu
|
Boyu Liu, Yanzhao Wang, Yu-Wing Tai, Chi-Keung Tang
|
MAVOT: Memory-Augmented Video Object Tracking
|
Submitted to CVPR2018
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a one-shot learning approach for video object tracking. The
proposed algorithm requires seeing the object to be tracked only once, and
employs an external memory to store and remember the evolving features of the
foreground object as well as backgrounds over time during tracking. With the
relevant memory retrieved and updated in each tracking, our tracking model is
capable of maintaining long-term memory of the object, and thus can naturally
deal with hard tracking scenarios including partial and total occlusion, motion
changes and large scale and shape variations. In our experiments we use the
ImageNet ILSVRC2015 video detection dataset to train and use the VOT-2016
benchmark to test and compare our Memory-Augmented Video Object Tracking
(MAVOT) model. From the results, we conclude that given its oneshot property
and simplicity in design, MAVOT is an attractive approach in visual tracking
because it shows good performance on VOT-2016 benchmark and is among the top 5
performers in accuracy and robustness in occlusion, motion changes and empty
target.
|
[
{
"version": "v1",
"created": "Sun, 26 Nov 2017 16:20:45 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Liu",
"Boyu",
""
],
[
"Wang",
"Yanzhao",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
] |
new_dataset
| 0.998042 |
1711.09464
|
Iuliia Kotseruba
|
Iuliia Kotseruba, John K. Tsotsos
|
STAR-RT: Visual attention for real-time video game playing
|
21 page, 13 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present STAR-RT - the first working prototype of Selective
Tuning Attention Reference (STAR) model and Cognitive Programs (CPs). The
Selective Tuning (ST) model received substantial support through psychological
and neurophysiological experiments. The STAR framework expands ST and applies
it to practical visual tasks. In order to do so, similarly to many cognitive
architectures, STAR combines the visual hierarchy (based on ST) with the
executive controller, working and short-term memory components and fixation
controller. CPs in turn enable the communication among all these elements for
visual task execution. To test the relevance of the system in a realistic
context, we implemented the necessary components of STAR and designed CPs for
playing two closed-source video games - Canabaltand Robot Unicorn Attack. Since
both games run in a browser window, our algorithm has the same amount of
information and the same amount of time to react to the events on the screen as
a human player would. STAR-RT plays both games in real time using only visual
input and achieves scores comparable to human expert players. It thus provides
an existence proof for the utility of the particular CP structure and
primitives used and the potential for continued experimentation and
verification of their utility in broader scenarios.
|
[
{
"version": "v1",
"created": "Sun, 26 Nov 2017 21:24:52 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Kotseruba",
"Iuliia",
""
],
[
"Tsotsos",
"John K.",
""
]
] |
new_dataset
| 0.996281 |
1711.09543
|
Ning Gao
|
Ning Gao, Zhang Liu, Dirk Grunwald
|
DTranx: A SEDA-based Distributed and Transactional Key Value Store with
Persistent Memory Log
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current distributed key value stores achieve scalability by trading off
consistency. As persistent memory technologies evolve tremendously, it is not
necessary to sacrifice consistency for performance. This paper proposes DTranx,
a distributed key value store based on a persistent memory aware log. DTranx
integrates a state transition based garbage collection mechanism in the log
design to effectively and efficiently reclaim old logs. In addition, DTranx
adopts the SEDA architecture to exploit higher concurrency in multi-core
environments and employs the optimal core binding strategy to minimize context
switch overhead. Moreover, we customize a hybrid commit protocol that combines
optimistic concurrency control and two-phase commit to reduce critical section
of distributed locking and introduce a locking mechanism to avoid deadlocks and
livelocks.
In our evaluations, DTranx reaches 514.11k transactions per second with 36
servers and 95\% read workloads. The persistent memory aware log is 30 times
faster than the SSD based system. And, our state transition based garbage
collection mechanism is efficient and effective. It does not affect normal
transactions and log space usage is steadily low.
|
[
{
"version": "v1",
"created": "Mon, 27 Nov 2017 05:38:10 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Gao",
"Ning",
""
],
[
"Liu",
"Zhang",
""
],
[
"Grunwald",
"Dirk",
""
]
] |
new_dataset
| 0.999 |
1711.09666
|
Eli (Omid) David
|
Ishai Rosenberg, Guillaume Sicard, Eli David
|
DeepAPT: Nation-State APT Attribution Using End-to-End Deep Neural
Networks
| null |
International Conference on Artificial Neural Networks (ICANN),
Springer LNCS, Vol. 10614, pp. 91-99, Alghero, Italy, September, 2017
|
10.1007/978-3-319-68612-7_11
| null |
cs.CR cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years numerous advanced malware, aka advanced persistent threats
(APT) are allegedly developed by nation-states. The task of attributing an APT
to a specific nation-state is extremely challenging for several reasons. Each
nation-state has usually more than a single cyber unit that develops such
advanced malware, rendering traditional authorship attribution algorithms
useless. Furthermore, those APTs use state-of-the-art evasion techniques,
making feature extraction challenging. Finally, the dataset of such available
APTs is extremely small.
In this paper we describe how deep neural networks (DNN) could be
successfully employed for nation-state APT attribution. We use sandbox reports
(recording the behavior of the APT when run dynamically) as raw input for the
neural network, allowing the DNN to learn high level feature abstractions of
the APTs itself. Using a test set of 1,000 Chinese and Russian developed APTs,
we achieved an accuracy rate of 94.6%.
|
[
{
"version": "v1",
"created": "Mon, 27 Nov 2017 13:04:46 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Rosenberg",
"Ishai",
""
],
[
"Sicard",
"Guillaume",
""
],
[
"David",
"Eli",
""
]
] |
new_dataset
| 0.999622 |
1711.09723
|
Lei Lin
|
Zhenhua Zhang, Lei Lin, Lei Zhu, Anuj Sharma
|
Bi-National Delay Pattern Analysis For Commercial and Passenger Vehicles
at Niagara Frontier Border
|
Accepted for Presentation at 2018 TRB Annual Meeting
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Border crossing delays between New York State and Southern Ontario cause
problems like enormous economic loss and massive environmental pollutions. In
this area, there are three border-crossing ports: Peace Bridge (PB), Rainbow
Bridge (RB) and Lewiston-Queenston Bridge (LQ) at Niagara Frontier border. The
goals of this paper are to figure out whether the distributions of bi-national
wait times for commercial and passenger vehicles are evenly distributed among
the three ports and uncover the hidden significant influential factors that
result in the possible insufficient utilization. The historical border wait
time data from 7:00 to 21:00 between 08/22/2016 and 06/20/2017 are archived, as
well as the corresponding temporal and weather data. For each vehicle type
towards each direction, a Decision Tree is built to identify the various border
delay patterns over the three bridges. We find that for the passenger vehicles
to the USA, the convenient connections between the Canada freeways with USA
I-190 by LQ and PB may cause these two bridges more congested than RB,
especially when it is a holiday in Canada. For the passenger vehicles in the
other bound, RB is much more congested than LQ and PB in some cases, and the
visitors to Niagara Falls in the USA in summer may be a reason. For the
commercial trucks to the USA, the various delay patterns show PB is always more
congested than LQ. Hour interval and weekend are the most significant factors
appearing in all the four Decision Trees. These Decision Trees can help the
authorities to make specific routing suggestions when the corresponding
conditions are satisfied.
|
[
{
"version": "v1",
"created": "Mon, 13 Nov 2017 20:43:02 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Zhang",
"Zhenhua",
""
],
[
"Lin",
"Lei",
""
],
[
"Zhu",
"Lei",
""
],
[
"Sharma",
"Anuj",
""
]
] |
new_dataset
| 0.993831 |
1711.09756
|
Ad\'an S\'anchez de Pedro Crespo
|
Ad\'an S\'anchez de Pedro and Daniele Levi and Luis Iv\'an Cuende
|
Witnet: A Decentralized Oracle Network Protocol
|
Version 0.1 - 58 pages, 18 figures - Reviewed and edited by D. Levi
and L.I. Cuende
| null |
10.13140/RG.2.2.28152.34560
| null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Witnet is a decentralized oracle network (DON) that connects smart contracts
to the outer world. Generally speaking, it allows any piece of software to
retrieve the contents published at any web address at a certain point in time,
with complete and verifiable proof of its integrity and without blindly
trusting any third party. Witnet runs on a blockchain with a native protocol
token (called Wit), which miners-called witnesses-earn by retrieving, attesting
and delivering web contents for clients. On the other hand, clients spend Wit
to pay witnesses for their Retrieve-Attest-Deliver (RAD) work. Witnesses also
compete to mine blocks with considerable rewards, but Witnet mining power is
proportional to their previous performance in terms of honesty and
trustworthiness-this is, their reputation as witnesses. This creates a powerful
incentive for witnesses to do their work honestly, protect their reputation and
not to deceive the network. The Witnet protocol is designed to assign the RAD
tasks to witnesses in a way that mitigates most attack vectors to the greatest
extent. At the same time, it includes a novel 'sharding' feature that (1)
guarantees the efficiency and scalability of the network, (2) keeps the price
of RAD tasks within reasonable bounds and (3) gives clients the freedom to
adjust certainty and price by letting them choose how many witnesses will work
on their RAD tasks. When coupled with a Decentralized Storage Network (DSN),
Witnet also gives us the possibility to build the Digital Knowledge Ark: a
decentralized, immutable, censorship-resistant and eternal archive of
humanity's most relevant digital data. A truth vault aimed to ensure that
knowledge will remain democratic and verifiable forever and to prevent history
from being written by the victors.
|
[
{
"version": "v1",
"created": "Mon, 27 Nov 2017 15:23:42 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"de Pedro",
"Adán Sánchez",
""
],
[
"Levi",
"Daniele",
""
],
[
"Cuende",
"Luis Iván",
""
]
] |
new_dataset
| 0.999699 |
1711.09758
|
Andrea Pinna
|
Andrea Pinna and Simona Ibba
|
A blockchain-based Decentralized System for proper handling of temporary
Employment contracts
|
Accepted for publication in the proceedings of the "Computing
Conference 2018" - 10-12 July 2018 - London, United Kingdom
| null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporary work is an employment situation useful and suitable in all
occasions in which business needs to adjust more easily and quickly to workload
fluctuations or maintain staffing flexibility. Temporary workers play therefore
an important role in many companies, but this kind of activity is subject to a
special form of legal protections and many aspects and risks must be taken into
account both employers and employees. In this work we propose a
blockchain-based system that aims to ensure respect for the rights for all
actors involved in a temporary employment, in order to provide employees with
the fair and legal remuneration (including taxes) of work performances and a
protection in the case employer becomes insolvent. At the same time, our system
wants to assist the employer in processing contracts with a fully automated and
fast procedure. To resolve these problems we propose the D-ES (Decentralized
Employment System). We first model the employment relationship as a state
system. Then we describe the enabling technology that makes us able to realize
the D-ES. In facts, we propose the implementation of a DLT (Decentralized
Ledger Technology) based system, consisting in a blockchain system and of a
web-based environment. Thanks the decentralized application platforms that
makes us able to develop smart contracts, we define a discrete event control
system that works inside the blockchain. In addition, we discuss the temporary
work in agriculture as a interesting case of study.
|
[
{
"version": "v1",
"created": "Thu, 23 Nov 2017 10:52:22 GMT"
}
] | 2017-11-28T00:00:00 |
[
[
"Pinna",
"Andrea",
""
],
[
"Ibba",
"Simona",
""
]
] |
new_dataset
| 0.999309 |
1609.00062
|
Wanchun Liu
|
Wanchun Liu, Kaibin Huang, Xiangyun Zhou and Salman Durrani
|
Full-Duplex Backscatter Interference Networks Based on Time-Hopping
Spread Spectrum
|
submitted for possible journal publication
|
IEEE Transactions on Wireless Communications, vol. 16, no. 7, pp.
4361-4377, Jul. 2017
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Future Internet-of-Things (IoT) is expected to wirelessly connect billions of
low-complexity devices. For wireless information transfer (WIT) in IoT, high
density of IoT devices and their ad hoc communication result in strong
interference which acts as a bottleneck on WIT. Furthermore, battery
replacement for the massive number of IoT devices is difficult if not
infeasible, making wireless energy transfer (WET) desirable. This motivates:
(i) the design of full-duplex WIT to reduce latency and enable efficient
spectrum utilization, and (ii) the implementation of passive IoT devices using
backscatter antennas that enable WET from one device (reader) to another (tag).
However, the resultant increase in the density of simultaneous links
exacerbates the interference issue. This issue is addressed in this paper by
proposing the design of full-duplex backscatter communication (BackCom)
networks, where a novel multiple-access scheme based on time-hopping
spread-spectrum (TH-SS) is designed to enable both one-way WET and two-way WIT
in coexisting backscatter reader-tag links. Comprehensive performance analysis
of BackCom networks is presented in this paper, including forward/backward
bit-error rates and WET efficiency and outage probabilities, which accounts for
energy harvesting at tags, non-coherent and coherent detection at tags and
readers, respectively, and the effects of asynchronous transmissions.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2016 22:50:32 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2017 04:52:01 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Liu",
"Wanchun",
""
],
[
"Huang",
"Kaibin",
""
],
[
"Zhou",
"Xiangyun",
""
],
[
"Durrani",
"Salman",
""
]
] |
new_dataset
| 0.965273 |
1611.06159
|
Yipei Wang
|
Yan Xu, Siyuan Shan, Ziming Qiu, Zhipeng Jia, Zhengyang Shen, Yipei
Wang, Mengfei Shi, Eric I-Chao Chang
|
End-to-End Subtitle Detection and Recognition for Videos in East Asian
Languages via CNN Ensemble with Near-Human-Level Performance
|
35 pages
| null |
10.1016/j.image.2017.09.013
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an innovative end-to-end subtitle detection and
recognition system for videos in East Asian languages. Our end-to-end system
consists of multiple stages. Subtitles are firstly detected by a novel image
operator based on the sequence information of consecutive video frames. Then,
an ensemble of Convolutional Neural Networks (CNNs) trained on synthetic data
is adopted for detecting and recognizing East Asian characters. Finally, a
dynamic programming approach leveraging language models is applied to
constitute results of the entire body of text lines. The proposed system
achieves average end-to-end accuracies of 98.2% and 98.3% on 40 videos in
Simplified Chinese and 40 videos in Traditional Chinese respectively, which is
a significant outperformance of other existing methods. The near-perfect
accuracy of our system dramatically narrows the gap between human cognitive
ability and state-of-the-art algorithms used for such a task.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2016 17:09:14 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Xu",
"Yan",
""
],
[
"Shan",
"Siyuan",
""
],
[
"Qiu",
"Ziming",
""
],
[
"Jia",
"Zhipeng",
""
],
[
"Shen",
"Zhengyang",
""
],
[
"Wang",
"Yipei",
""
],
[
"Shi",
"Mengfei",
""
],
[
"Chang",
"Eric I-Chao",
""
]
] |
new_dataset
| 0.994216 |
1612.05974
|
Francesco Conti
|
Francesco Conti, Robert Schilling, Pasquale Davide Schiavone, Antonio
Pullini, Davide Rossi, Frank Kagan G\"urkaynak, Michael Muehlberghuber,
Michael Gautschi, Igor Loi, Germain Haugou, Stefan Mangard, Luca Benini
|
An IoT Endpoint System-on-Chip for Secure and Energy-Efficient
Near-Sensor Analytics
|
15 pages, 12 figures, accepted for publication to the IEEE
Transactions on Circuits and Systems - I: Regular Papers
| null |
10.1109/TCSI.2017.2698019
| null |
cs.AR cs.CR cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Near-sensor data analytics is a promising direction for IoT endpoints, as it
minimizes energy spent on communication and reduces network load - but it also
poses security concerns, as valuable data is stored or sent over the network at
various stages of the analytics pipeline. Using encryption to protect sensitive
data at the boundary of the on-chip analytics engine is a way to address data
security issues. To cope with the combined workload of analytics and encryption
in a tight power envelope, we propose Fulmine, a System-on-Chip based on a
tightly-coupled multi-core cluster augmented with specialized blocks for
compute-intensive data processing and encryption functions, supporting software
programmability for regular computing tasks. The Fulmine SoC, fabricated in
65nm technology, consumes less than 20mW on average at 0.8V achieving an
efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to
25MIPS/mW in software. As a strong argument for real-life flexible application
of our platform, we show experimental results for three secure analytics use
cases: secure autonomous aerial surveillance with a state-of-the-art deep CNN
consuming 3.16pJ per equivalent RISC op; local CNN-based face detection with
secured remote recognition in 5.74pJ/op; and seizure detection with encrypted
data collection from EEG within 12.7pJ/op.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2016 19:20:42 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Apr 2017 22:55:15 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Apr 2017 17:39:09 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Conti",
"Francesco",
""
],
[
"Schilling",
"Robert",
""
],
[
"Schiavone",
"Pasquale Davide",
""
],
[
"Pullini",
"Antonio",
""
],
[
"Rossi",
"Davide",
""
],
[
"Gürkaynak",
"Frank Kagan",
""
],
[
"Muehlberghuber",
"Michael",
""
],
[
"Gautschi",
"Michael",
""
],
[
"Loi",
"Igor",
""
],
[
"Haugou",
"Germain",
""
],
[
"Mangard",
"Stefan",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.999452 |
1705.06942
|
Akhilesh Jaiswal
|
Akhilesh Jaiswal, Amogh Agrawal, Priyadarshini Panda, Kaushik Roy
|
Voltage-Driven Domain-Wall Motion based Neuro-Synaptic Devices for
Dynamic On-line Learning
| null | null | null | null |
cs.ET cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional von-Neumann computing models have achieved remarkable feats for
the past few decades. However, they fail to deliver the required efficiency for
certain basic tasks like image and speech recognition when compared to
biological systems. As such, taking cues from biological systems, novel
computing paradigms are being explored for efficient hardware implementations
of recognition/classification tasks. The basic building blocks of such
neuromorphic systems are neurons and synapses. Towards that end, we propose a
leaky-integrate-fire (LIF) neuron and a programmable non-volatile synapse using
domain wall motion induced by magneto-electric effect. Due to a strong elastic
pinning between the ferro-magnetic domain wall (FM-DW) and the underlying
ferro-electric domain wall (FE-DW), the FM-DW gets dragged by the FE-DW on
application of a voltage pulse. The fact that FE materials are insulators
allows for pure voltage-driven FM-DW motion, which in turn can be used to mimic
the behaviors of biological spiking neurons and synapses. The voltage driven
nature of the proposed devices allows energy-efficient operation. A detailed
device to system level simulation framework based on micromagnetic simulations
has been developed to analyze the feasibility of the proposed neuro-synaptic
devices. We also demonstrate that the energy-efficient voltage-controlled
behavior of the proposed devices make them suitable for dynamic on-line and
lifelong learning in spiking neural networks (SNNs).
|
[
{
"version": "v1",
"created": "Fri, 19 May 2017 11:37:04 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Nov 2017 03:47:06 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Jaiswal",
"Akhilesh",
""
],
[
"Agrawal",
"Amogh",
""
],
[
"Panda",
"Priyadarshini",
""
],
[
"Roy",
"Kaushik",
""
]
] |
new_dataset
| 0.992401 |
1706.00682
|
Zongtao Liu
|
Yang Yang, Chenhao Tan, Zongtao Liu, Fei Wu and Yueting Zhuang
|
Urban Dreams of Migrants: A Case Study of Migrant Integration in
Shanghai
|
A modified version. The paper was accepted by AAAI 2018
| null | null | null |
cs.CY cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unprecedented human mobility has driven the rapid urbanization around the
world. In China, the fraction of population dwelling in cities increased from
17.9% to 52.6% between 1978 and 2012. Such large-scale migration poses
challenges for policymakers and important questions for researchers. To
investigate the process of migrant integration, we employ a one-month complete
dataset of telecommunication metadata in Shanghai with 54 million users and 698
million call logs. We find systematic differences between locals and migrants
in their mobile communication networks and geographical locations. For
instance, migrants have more diverse contacts and move around the city with a
larger radius than locals after they settle down. By distinguishing new
migrants (who recently moved to Shanghai) from settled migrants (who have been
in Shanghai for a while), we demonstrate the integration process of new
migrants in their first three weeks. Moreover, we formulate classification
problems to predict whether a person is a migrant. Our classifier is able to
achieve an F1-score of 0.82 when distinguishing settled migrants from locals,
but it remains challenging to identify new migrants because of class imbalance.
This classification setup holds promise for identifying new migrants who will
successfully integrate into locals (new migrants that misclassified as locals).
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2017 13:24:37 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2017 14:57:13 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2017 03:11:34 GMT"
},
{
"version": "v4",
"created": "Tue, 21 Nov 2017 05:54:00 GMT"
},
{
"version": "v5",
"created": "Wed, 22 Nov 2017 07:40:00 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Yang",
"Yang",
""
],
[
"Tan",
"Chenhao",
""
],
[
"Liu",
"Zongtao",
""
],
[
"Wu",
"Fei",
""
],
[
"Zhuang",
"Yueting",
""
]
] |
new_dataset
| 0.999614 |
1706.02447
|
Raquel Aoki
|
Raquel YS Aoki, Renato M Assuncao, Pedro OS Vaz de Melo
|
Luck is Hard to Beat: The Difficulty of Sports Prediction
|
10 pages, KDD2017, Applied Data Science track
|
Proceedings of the 23rd ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, 2017
|
10.1145/3097983.3098045
| null |
cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting the outcome of sports events is a hard task. We quantify this
difficulty with a coefficient that measures the distance between the observed
final results of sports leagues and idealized perfectly balanced competitions
in terms of skill. This indicates the relative presence of luck and skill. We
collected and analyzed all games from 198 sports leagues comprising 1503
seasons from 84 countries of 4 different sports: basketball, soccer, volleyball
and handball. We measured the competitiveness by countries and sports. We also
identify in each season which teams, if removed from its league, result in a
completely random tournament. Surprisingly, not many of them are needed. As
another contribution of this paper, we propose a probabilistic graphical model
to learn about the teams' skills and to decompose the relative weights of luck
and skill in each game. We break down the skill component into factors
associated with the teams' characteristics. The model also allows to estimate
as 0.36 the probability that an underdog team wins in the NBA league, with a
home advantage adding 0.09 to this probability. As shown in the first part of
the paper, luck is substantially present even in the most competitive
championships, which partially explains why sophisticated and complex
feature-based models hardly beat simple models in the task of forecasting
sports' outcomes.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2017 03:38:27 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Aoki",
"Raquel YS",
""
],
[
"Assuncao",
"Renato M",
""
],
[
"de Melo",
"Pedro OS Vaz",
""
]
] |
new_dataset
| 0.982108 |
1711.07312
|
Muktabh Mayank Srivastava
|
Muktabh Mayank Srivastava, Pratyush Kumar, Lalit Pradhan, Srikrishna
Varadarajan
|
Detection of Tooth caries in Bitewing Radiographs using Deep Learning
|
Accepted at NIPS 2017 workshop on Machine Learning for Health (NIPS
2017 ML4H)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We develop a Computer Aided Diagnosis (CAD) system, which enhances the
performance of dentists in detecting wide range of dental caries. The CAD
System achieves this by acting as a second opinion for the dentists with way
higher sensitivity on the task of detecting cavities than the dentists
themselves. We develop annotated dataset of more than 3000 bitewing radiographs
and utilize it for developing a system for automated diagnosis of dental
caries. Our system consists of a deep fully convolutional neural network (FCNN)
consisting 100+ layers, which is trained to mark caries on bitewing
radiographs. We have compared the performance of our proposed system with three
certified dentists for marking dental caries. We exceed the average performance
of the dentists in both recall (sensitivity) and F1-Score (agreement with
truth) by a very large margin. Working example of our system is shown in Figure
1.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 14:12:32 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Nov 2017 16:08:27 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Srivastava",
"Muktabh Mayank",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Pradhan",
"Lalit",
""
],
[
"Varadarajan",
"Srikrishna",
""
]
] |
new_dataset
| 0.996463 |
1711.08336
|
Eli (Omid) David
|
Eli David, Nathan S. Netanyahu
|
DeepSign: Deep Learning for Automatic Malware Signature Generation and
Classification
| null |
International Joint Conference on Neural Networks (IJCNN), pages
1-8, Killarney, Ireland, July 2015
|
10.1109/IJCNN.2015.7280815
| null |
cs.CR cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel deep learning based method for automatic malware
signature generation and classification. The method uses a deep belief network
(DBN), implemented with a deep stack of denoising autoencoders, generating an
invariant compact representation of the malware behavior. While conventional
signature and token based methods for malware detection do not detect a
majority of new variants for existing malware, the results presented in this
paper show that signatures generated by the DBN allow for an accurate
classification of new malware variants. Using a dataset containing hundreds of
variants for several major malware families, our method achieves 98.6%
classification accuracy using the signatures generated by the DBN. The
presented method is completely agnostic to the type of malware behavior that is
logged (e.g., API calls and their parameters, registry entries, websites and
ports accessed, etc.), and can use any raw input from a sandbox to successfully
train the deep neural network which is used to generate malware signatures.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 07:22:58 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Nov 2017 16:27:18 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"David",
"Eli",
""
],
[
"Netanyahu",
"Nathan S.",
""
]
] |
new_dataset
| 0.958235 |
1711.08528
|
Luiz Capretz Dr.
|
Marwan Darwish, Abdelkader Ouda, Luiz Fernando Capretz
|
Cloud-Based Secure Authentication (CSA) Protocol Suite for Defense
against DoS Attacks
| null |
Journal of Information Security and Applications, Volume 20, pp.
90-98, Elsevier, April 2015
|
10.1016/jisa.2014.12.001
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud-based services have become part of our day-to-day software solutions.
The identity authentication process is considered to be the main gateway to
these services. As such, these gates have become increasingly susceptible to
aggressive attackers, who may use Denial of Service (DoS) attacks to close
these gates permanently. There are a number of authentication protocols that
are strong enough to verify identities and protect traditional networked
applications. However, these authentication protocols may themselves introduce
DoS risks when used in cloud-based applications. This risk introduction is due
to the utilization of a heavy verification process that may consume the cloud
resources and disable the application service. In this work, we propose a novel
cloud-based authentication protocol suite that not only is aware of the
internal DoS threats but is also capable of defending against external DoS
attackers. The proposed solution uses a multilevel adaptive technique to
dictate the efforts of the protocol participants. This technique is capable of
identifying a legitimate users requests and placing them at the front of the
authentication process queue. The authentication process was designed in such a
way that the cloud-based servers become footprint-free and completely aware of
the risks of any DoS attack.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 22:42:56 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Darwish",
"Marwan",
""
],
[
"Ouda",
"Abdelkader",
""
],
[
"Capretz",
"Luiz Fernando",
""
]
] |
new_dataset
| 0.991962 |
1711.08572
|
SeyedMohammad Seyedzadeh
|
Seyed Mohammad Seyedzadeh, Alex K. Jones, Rami Melhem
|
Enabling Fine-Grain Restricted Coset Coding Through Word-Level
Compression for PCM
|
12 pages
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phase change memory (PCM) has recently emerged as a promising technology to
meet the fast growing demand for large capacity memory in computer systems,
replacing DRAM that is impeded by physical limitations. Multi-level cell (MLC)
PCM offers high density with low per-byte fabrication cost. However, despite
many advantages, such as scalability and low leakage, the energy for
programming intermediate states is considerably larger than programing
single-level cell PCM. In this paper, we study encoding techniques to reduce
write energy for MLC PCM when the encoding granularity is lowered below the
typical cache line size. We observe that encoding data blocks at small
granularity to reduce write energy actually increases the write energy because
of the auxiliary encoding bits. We mitigate this adverse effect by 1) designing
suitable codeword mappings that use fewer auxiliary bits and 2) proposing a new
Word-Level Compression (WLC) which compresses more than 91% of the memory lines
and provides enough room to store the auxiliary data using a novel restricted
coset encoding applied at small data block granularities.
Experimental results show that the proposed encoding at 16-bit data
granularity reduces the write energy by 39%, on average, versus the leading
encoding approach for write energy reduction. Furthermore, it improves
endurance by 20% and is more reliable than the leading approach. Hardware
synthesis evaluation shows that the proposed encoding can be implemented
on-chip with only a nominal area overhead.
|
[
{
"version": "v1",
"created": "Thu, 23 Nov 2017 04:32:45 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Seyedzadeh",
"Seyed Mohammad",
""
],
[
"Jones",
"Alex K.",
""
],
[
"Melhem",
"Rami",
""
]
] |
new_dataset
| 0.993574 |
1711.08710
|
Pascal Ochem
|
Fran\c{c}ois Dross and Pascal Ochem
|
Vertex partitions of $(C_3,C_4,C_6)$-free planar graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph is $(k_1,k_2)$-colorable if its vertex set can be partitioned into a
graph with maximum degree at most $k_1$ and and a graph with maximum degree at
most $k_2$. We show that every $(C_3,C_4,C_6)$-free planar graph is
$(0,6)$-colorable. We also show that deciding whether a $(C_3,C_4,C_6)$-free
planar graph is $(0,3)$-colorable is NP-complete.
|
[
{
"version": "v1",
"created": "Thu, 23 Nov 2017 14:36:15 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Dross",
"François",
""
],
[
"Ochem",
"Pascal",
""
]
] |
new_dataset
| 0.998598 |
1711.08767
|
Mike Thelwall Prof
|
Mike Thelwall
|
Microsoft Academic: A multidisciplinary comparison of citation counts
with Scopus and Mendeley for 29 journals
| null |
Thelwall, M. (2017). Microsoft Academic: A multidisciplinary
comparison of citation counts with Scopus and Mendeley for 29 journals.
Journal of Informetrics, 11(4), 1201-1212
|
10.1016/j.joi.2017.10.006
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Microsoft Academic is a free citation index that allows large scale data
collection. This combination makes it useful for scientometric research.
Previous studies have found that its citation counts tend to be slightly larger
than those of Scopus but smaller than Google Scholar, with disciplinary
variations. This study reports the largest and most systematic analysis so far,
of 172,752 articles in 29 large journals chosen from different specialisms.
From Scopus citation counts, Microsoft Academic citation counts and Mendeley
reader counts for articles published 2007-2017, Microsoft Academic found a
slightly more (6%) citations than Scopus overall and especially for the current
year (51%). It found fewer citations than Mendeley readers overall (59%), and
only 7% as many for the current year. Differences between journals were
probably due to field preprint sharing cultures or journal policies rather than
broad disciplinary differences.
|
[
{
"version": "v1",
"created": "Thu, 23 Nov 2017 16:42:21 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Thelwall",
"Mike",
""
]
] |
new_dataset
| 0.993537 |
1711.09008
|
Yuming Jiang
|
Atef Abdelkefi and Yuming Jiang and Sachin Sharma
|
SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause
Analysis
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel approach, called SENATUS, for joint traffic
anomaly detection and root-cause analysis. Inspired from the concept of a
senate, the key idea of the proposed approach is divided into three stages:
election, voting and decision. At the election stage, a small number of
\nop{traffic flow sets (termed as senator flows)}senator flows are chosen\nop{,
which are used} to represent approximately the total (usually huge) set of
traffic flows. In the voting stage, anomaly detection is applied on the senator
flows and the detected anomalies are correlated to identify the most possible
anomalous time bins. Finally in the decision stage, a machine learning
technique is applied to the senator flows of each anomalous time bin to find
the root cause of the anomalies. We evaluate SENATUS using traffic traces
collected from the Pan European network, GEANT, and compare against another
approach which detects anomalies using lossless compression of traffic
histograms. We show the effectiveness of SENATUS in diagnosing anomaly types:
network scans and DoS/DDoS attacks.
|
[
{
"version": "v1",
"created": "Fri, 24 Nov 2017 15:14:50 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Abdelkefi",
"Atef",
""
],
[
"Jiang",
"Yuming",
""
],
[
"Sharma",
"Sachin",
""
]
] |
new_dataset
| 0.957291 |
1711.09017
|
Xucong Zhang
|
Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling
|
MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning-based methods are believed to work well for unconstrained gaze
estimation, i.e. gaze estimation from a monocular RGB camera without
assumptions regarding user, environment, or camera. However, current gaze
datasets were collected under laboratory conditions and methods were not
evaluated across multiple datasets. Our work makes three contributions towards
addressing these limitations. First, we present the MPIIGaze that contains
213,659 full face images and corresponding ground-truth gaze positions
collected from 15 users during everyday laptop use over several months. An
experience sampling approach ensured continuous gaze and head poses and
realistic variation in eye appearance and illumination. To facilitate
cross-dataset evaluations, 37,667 images were manually annotated with eye
corners, mouth corners, and pupil centres. Second, we present an extensive
evaluation of state-of-the-art gaze estimation methods on three current
datasets, including MPIIGaze. We study key challenges including target gaze
range, illumination conditions, and facial appearance variation. We show that
image resolution and the use of both eyes affect gaze estimation performance
while head pose and pupil centre information are less informative. Finally, we
propose GazeNet, the first deep appearance-based gaze estimation method.
GazeNet improves the state of the art by 22% percent (from a mean error of 13.9
degrees to 10.8 degrees) for the most challenging cross-dataset evaluation.
|
[
{
"version": "v1",
"created": "Fri, 24 Nov 2017 15:20:22 GMT"
}
] | 2017-11-27T00:00:00 |
[
[
"Zhang",
"Xucong",
""
],
[
"Sugano",
"Yusuke",
""
],
[
"Fritz",
"Mario",
""
],
[
"Bulling",
"Andreas",
""
]
] |
new_dataset
| 0.999583 |
1703.02847
|
Andre Ebert
|
Andre Ebert, Marie Kiermeier, Chadly Marouane, and Claudia
Linnhoff-Popien
|
SensX: About Sensing and Assessment of Complex Human Motion
|
Published within the Proceedings of 14th IEEE International
Conference on Networking, Sensing and Control (ICNSC), May 16th-18th, 2017,
Calabria Italy 6 pages, 5 figures
| null |
10.1109/ICNSC.2017.8000113
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The great success of wearables and smartphone apps for provision of extensive
physical workout instructions boosts a whole industry dealing with consumer
oriented sensors and sports equipment. But with these opportunities there are
also new challenges emerging. The unregulated distribution of instructions
about ambitious exercises enables unexperienced users to undertake demanding
workouts without professional supervision which may lead to suboptimal training
success or even serious injuries. We believe, that automated supervision and
realtime feedback during a workout may help to solve these issues. Therefore we
introduce four fundamental steps for complex human motion assessment and
present SensX, a sensor-based architecture for monitoring, recording, and
analyzing complex and multi-dimensional motion chains. We provide the results
of our preliminary study encompassing 8 different body weight exercises, 20
participants, and more than 9,220 recorded exercise repetitions. Furthermore,
insights into SensXs classification capabilities and the impact of specific
sensor configurations onto the analysis process are given.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2017 13:50:41 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Nov 2017 14:02:01 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Ebert",
"Andre",
""
],
[
"Kiermeier",
"Marie",
""
],
[
"Marouane",
"Chadly",
""
],
[
"Linnhoff-Popien",
"Claudia",
""
]
] |
new_dataset
| 0.99519 |
1711.02254
|
Jiajun Zhang
|
Jiajun Zhang, Jinkun Tao, Jiangtao Huangfu and Zhiguo Shi
|
Doppler-Radar Based Hand Gesture Recognition System Using Convolutional
Neural Networks
|
Best Paper Award of International Conference on Communications,
Signal Processing, and Systems 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hand gesture recognition has long been a hot topic in human computer
interaction. Traditional camera-based hand gesture recognition systems cannot
work properly under dark circumstances. In this paper, a Doppler Radar based
hand gesture recognition system using convolutional neural networks is
proposed. A cost-effective Doppler radar sensor with dual receiving channels at
5.8GHz is used to acquire a big database of four standard gestures. The
received hand gesture signals are then processed with time-frequency analysis.
Convolutional neural networks are used to classify different gestures.
Experimental results verify the effectiveness of the system with an accuracy of
98%. Besides, related factors such as recognition distance and gesture scale
are investigated.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 01:58:11 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 10:13:21 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Nov 2017 11:53:47 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Zhang",
"Jiajun",
""
],
[
"Tao",
"Jinkun",
""
],
[
"Huangfu",
"Jiangtao",
""
],
[
"Shi",
"Zhiguo",
""
]
] |
new_dataset
| 0.999113 |
1711.06710
|
Ashkan Yousefpour
|
Ashkan Yousefpour, Caleb Fung, Tam Nguyen, David Hong, Daniel Zhang
|
Instant Accident Reporting and Crowdsensed Road Condition Analytics for
Smart Cities
|
8 pages, 7 figures, submitted to "Communication Technology Changing
the World Competition", Sponsored by IEEE Communication Society
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The following report contains information about a proposed technology by the
authors, which consists of a device that sits inside of a vehicle and
constantly monitors the car information. It can determine speed, g-force, and
location coordinates. Using these data, the device can detect a car crash or
pothole on the road. The data collected from the car is forwarded to a server
to for more in-depth analytics. If there is an accident, the server promptly
contacts the emergency services with the location of the crash. Moreover, the
pothole information is used for analytics of road conditions.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 19:58:52 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Yousefpour",
"Ashkan",
""
],
[
"Fung",
"Caleb",
""
],
[
"Nguyen",
"Tam",
""
],
[
"Hong",
"David",
""
],
[
"Zhang",
"Daniel",
""
]
] |
new_dataset
| 0.999795 |
1711.08007
|
Ramviyas Parasuraman
|
Byung-Cheol Min, Ramviyas Parasuraman, Sangjun Lee, Jin-Woo Jung, Eric
T. Matson
|
A Directional Antenna based Leader-Follower Relay System for End-to-End
Robot Communications
| null | null | null | null |
cs.RO cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a directional antenna-based leader-follower robotic
relay system capable of building end-to-end communication in complicated and
dynamically changing environments. The proposed system consists of multiple
networked robots - one is a mobile end node and the others are leaders or
followers acting as radio relays. Every follower uses directional antennas to
relay a communication radio and to estimate the location of the leader robot as
a sensory device.
For bearing estimation, we employ a weight centroid algorithm (WCA) and
present a theoretical analysis of the use of WCA for this work. Using a robotic
convoy method, we develop online, distributed control strategies that satisfy
the scalability requirements of robotic network systems and enable cooperating
robots to work independently. The performance of the proposed system is
evaluated by conducting extensive real-world experiments that successfully
build actual communication between two end nodes.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 07:24:00 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Min",
"Byung-Cheol",
""
],
[
"Parasuraman",
"Ramviyas",
""
],
[
"Lee",
"Sangjun",
""
],
[
"Jung",
"Jin-Woo",
""
],
[
"Matson",
"Eric T.",
""
]
] |
new_dataset
| 0.999738 |
1711.08057
|
Erel Segal-Halevi
|
Erel Segal-Halevi and Avinatan Hassidim
|
Truthful Bilateral Trade is Impossible even with Fixed Prices
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A seminal theorem of Myerson and Satterthwaite (1983) proves that, in a game
of bilateral trade between a single buyer and a single seller, no mechanism can
be simultaneously individually-rational, budget-balanced, incentive-compatible
and socially-efficient. However, the impossibility disappears if the price is
fixed exogenously and the social-efficiency goal is subject to
individual-rationality at the given price.
We show that the impossibility comes back if there are multiple units of the
same good, or multiple types of goods, even when the prices are fixed
exogenously. Particularly, if there are $M$ units of the same good or $M$ kinds
of goods, for some $M\geq 2$, then no truthful mechanism can guarantee more
than $1/M$ of the optimal gain-from-trade. In the single-good multi-unit case,
if both agents have submodular valuations (decreasing marginal returns), then
no truthful mechanism can guarantee more than $1/H_M$ of the optimal
gain-from-trade, where $H_M$ is the $M$-th harmonic number ($H_M\approx
\ln{M}+1/2$). All upper bounds are tight.
|
[
{
"version": "v1",
"created": "Sun, 19 Nov 2017 13:50:36 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Segal-Halevi",
"Erel",
""
],
[
"Hassidim",
"Avinatan",
""
]
] |
new_dataset
| 0.988377 |
1711.08076
|
Marijn Heule
|
Marijn J.H. Heule
|
Schur Number Five
|
accepted by AAAI 2018
| null | null | null |
cs.LO cs.DC cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the solution of a century-old problem known as Schur Number Five:
What is the largest (natural) number $n$ such that there exists a five-coloring
of the positive numbers up to $n$ without a monochromatic solution of the
equation $a + b = c$? We obtained the solution, $n = 160$, by encoding the
problem into propositional logic and applying massively parallel satisfiability
solving techniques on the resulting formula. We constructed and validated a
proof of the solution to increase trust in the correctness of the
multi-CPU-year computations. The proof is two petabytes in size and was
certified using a formally verified proof checker, demonstrating that any
result by satisfiability solvers---no matter how large---can now be validated
using highly trustworthy systems.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 22:54:59 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Heule",
"Marijn J. H.",
""
]
] |
new_dataset
| 0.978002 |
1711.08103
|
Marc Walton
|
Johanna Salvant, Marc Walton, Dale Kronkright, Chia-Kai Yeh, Fengqiang
Li, Oliver Cossairt, Aggelos K. Katsaggelos
|
Photometric Stereo by UV-Induced Fluorescence to Detect Protrusions on
Georgia O'Keeffe's Paintings
|
Accepted for publication in the Springer Nature book: Metal Soaps in
Art-Conservation & Research
| null | null | null |
cs.GR physics.app-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A significant number of oil paintings produced by Georgia O'Keeffe
(1887-1986) show surface protrusions of varying width, up to several hundreds
of microns. These protrusions are similar to those described in the art
conservation literature as metallic soaps. Since the presence of these
protrusions raises questions about the state of conservation and long-term
prospects for deterioration of these artworks, a 3D-imaging technique,
photometric stereo using ultraviolet illumination, was developed for the
long-term monitoring of the surface-shape of the protrusions and the
surrounding paint. Because the UV fluorescence response of painting materials
is isotropic, errors typically caused by non-Lambertian (anisotropic)
specularities when using visible reflected light can be avoided providing a
more accurate estimation of shape. As an added benefit, fluorescence provides
additional contrast information contributing to materials characterization. The
developed methodology aims to detect, characterize, and quantify the
distribution of micro-protrusions and their development over the surface of
entire artworks. Combined with a set of analytical in-situ techniques, and
computational tools, this approach constitutes a novel methodology to
investigate the selective distribution of protrusions in correlation with the
composition of painting materials at the macro-scale. While focused on
O'Keeffe's paintings as a case study, we expect the proposed approach to have
broader significance by providing a non-invasive protocol to the conservation
community to probe topological changes for any relatively flat painted surface
of an artwork, and more specifically to monitor the dynamic formation of
protrusions, in relation to paint composition and modifications of
environmental conditions, loans, exhibitions and storage over the long-term.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 01:47:04 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Salvant",
"Johanna",
""
],
[
"Walton",
"Marc",
""
],
[
"Kronkright",
"Dale",
""
],
[
"Yeh",
"Chia-Kai",
""
],
[
"Li",
"Fengqiang",
""
],
[
"Cossairt",
"Oliver",
""
],
[
"Katsaggelos",
"Aggelos K.",
""
]
] |
new_dataset
| 0.995738 |
1711.08118
|
Mohammad Saidur Rahman
|
Mohammad Saidur Rahman, Ashfaqur Rahman
|
Channel Transition Invariant Fast Broadcasting Scheme
|
2014 9th International Forum on Strategic Technology (IFOST)
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fast broadcasting (FB) is a popular near video-on-demand system where a video
is divided into equal size segments those are repeatedly transmitted over a
number of channels following a pattern. For user satisfaction, it is required
to reduce the initial user waiting time and client side buffer requirement at
streaming. Use of additional channels can achieve the objective. However, some
augmentation is required to the basic FB scheme as it lacks any mechanism to
realise a well defined relationship among the segment sizes at channel
transition. Lack of correspondence between the segments causes intermediate
waiting for the clients while watching videos. Use of additional channel
requires additional bandwidth. In this paper, we propose a modified FB scheme
that achieves zero initial clients waiting time and provides a mechanism to
control client side buffer requirement at streaming without requiring
additional channels. We present several results to demonstrate the
effectiveness of the proposed FB scheme over the existing ones.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 03:02:28 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Rahman",
"Mohammad Saidur",
""
],
[
"Rahman",
"Ashfaqur",
""
]
] |
new_dataset
| 0.989082 |
1711.08153
|
Vaishali Dhare
|
Usha Mehta and Vaishali Dhare
|
Quantum-dot Cellular Automata (QCA): A Survey
|
10 pages 11 figures, 3 tables
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the near future the era of Beyond CMOS will start as the scaling of the
current CMOS technology will reach the fundamental limit. QCA (Quantum-dot
Cellular Automata) is the transistor less computation paradigm and viable
candidate for Beyond CMOS device technology. The complete state of art survey
on QCA is presented in this paper. This paper addresses the QCA background, its
possible implementation and available simulation and synthesis tools. In depth
survey is carried out for the QCA oriented defects and testing. Also, need of
development and possible research areas in various sides of QCA are discussed.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 07:17:58 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Mehta",
"Usha",
""
],
[
"Dhare",
"Vaishali",
""
]
] |
new_dataset
| 0.994227 |
1711.08199
|
He Chen
|
Yifan Gu, He Chen, Yonghui Li, Branka Vucetic
|
Ultra-Reliable Short-Packet Communications: Half-Duplex or Full-Duplex
Relaying?
|
Accepted to appear in IEEE Wireless Communication Letters
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter analyzes and compares the performance of full-duplex relaying
(FDR) and half-duplex relaying (HDR) for ultra-reliable short-packet
communications. Specifically, we derive both approximate and asymptotic
closed-form expressions of the block error rate (BLER) for FDR and HDR using
short packets with finite blocklength codes. We define and attain a closed-form
expression of a critical BLER, which can be used to efficiently determine the
optimal duplex mode for ultra-reliable low latency communication scenarios. Our
results unveil that FDR is more appealing to the system with relatively lower
transmit power constraint, less stringent BLER requirement and stronger loop
interference suppression.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 09:57:05 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Gu",
"Yifan",
""
],
[
"Chen",
"He",
""
],
[
"Li",
"Yonghui",
""
],
[
"Vucetic",
"Branka",
""
]
] |
new_dataset
| 0.999405 |
1711.08314
|
Adriano Peron
|
Laura Bozzelli, Aniello Murano, Adriano Peron
|
Event-Clock Nested Automata
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce and study Event-Clock Nested Automata (ECNA), a
formalism that combines Event Clock Automata (ECA) and Visibly Pushdown
Automata (VPA). ECNA allow to express real-time properties over non-regular
patterns of recursive programs. We prove that ECNA retain the same closure and
decidability properties of ECA and VPA being closed under Boolean operations
and having a decidable language-inclusion problem. In particular, we prove that
emptiness, universality, and language-inclusion for ECNA are EXPTIME-complete
problems. As for the expressiveness, we have that ECNA properly extend any
previous attempt in the literature of combining ECA and VPA.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 15:01:22 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Bozzelli",
"Laura",
""
],
[
"Murano",
"Aniello",
""
],
[
"Peron",
"Adriano",
""
]
] |
new_dataset
| 0.994052 |
1711.08406
|
Sandra Scott-Hayward
|
Sandra Scott-Hayward
|
Trailing the Snail: SDN Controller Security Evolution
|
7 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The first OpenFlow Software-Defined Network (SDN) Controller, NOX, was
developed by Nicira Networks and donated to the research community in 2008.
Almost 10 years later, there are at least 29 open-source SDN Controllers and
many more proprietary solutions. Two of the open-source SDN controllers stand
out in terms of broad deployment and strong contributor base; Open Network
Operating System (ONOS) and OpenDaylight (ODL). Both have been deployed in live
networks. However, despite increasing adoption of SDN, the security of the SDN
control plane has developed at a snail's pace. In this paper, the evolution of
ONOS and ODL security is discussed. The reflection of this on secure SDN
Controller design is analyzed.
|
[
{
"version": "v1",
"created": "Fri, 3 Nov 2017 17:35:06 GMT"
}
] | 2017-11-23T00:00:00 |
[
[
"Scott-Hayward",
"Sandra",
""
]
] |
new_dataset
| 0.975874 |
1707.03804
|
Hao Tan
|
Hao Tan, Mohit Bansal
|
Source-Target Inference Models for Spatial Instruction Understanding
|
Accepted to AAAI 2018 (8 pages)
| null | null | null |
cs.CL cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Models that can execute natural language instructions for situated robotic
tasks such as assembly and navigation have several useful applications in
homes, offices, and remote scenarios. We study the semantics of
spatially-referred configuration and arrangement instructions, based on the
challenging Bisk-2016 blank-labeled block dataset. This task involves finding a
source block and moving it to the target position (mentioned via a reference
block and offset), where the blocks have no names or colors and are just
referred to via spatial location features. We present novel models for the
subtasks of source block classification and target position regression, based
on joint-loss language and spatial-world representation learning, as well as
CNN-based and dual attention models to compute the alignment between the world
blocks and the instruction phrases. For target position prediction, we compare
two inference approaches: annealed sampling via policy gradient versus
expectation inference via supervised regression. Our models achieve the new
state-of-the-art on this task, with an improvement of 47% on source block
accuracy and 22% on target position distance.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2017 17:15:57 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Nov 2017 16:57:02 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Tan",
"Hao",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.988353 |
1709.03856
|
Jason Weston
|
Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes and
Jason Weston
|
StarSpace: Embed All The Things!
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present StarSpace, a general-purpose neural embedding model that can solve
a wide variety of problems: labeling tasks such as text classification, ranking
tasks such as information retrieval/web search, collaborative filtering-based
or content-based recommendation, embedding of multi-relational graphs, and
learning word, sentence or document level embeddings. In each case the model
works by embedding those entities comprised of discrete features and comparing
them against each other -- learning similarities dependent on the task.
Empirical results on a number of tasks show that StarSpace is highly
competitive with existing methods, whilst also being generally applicable to
new cases where those methods are not.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2017 14:16:56 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2017 12:19:23 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2017 13:06:43 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Sep 2017 15:00:56 GMT"
},
{
"version": "v5",
"created": "Tue, 21 Nov 2017 02:59:57 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Wu",
"Ledell",
""
],
[
"Fisch",
"Adam",
""
],
[
"Chopra",
"Sumit",
""
],
[
"Adams",
"Keith",
""
],
[
"Bordes",
"Antoine",
""
],
[
"Weston",
"Jason",
""
]
] |
new_dataset
| 0.999559 |
1709.09455
|
Simon Duque Anton
|
Simon Duque Anton, Daniel Fraunholz, Hans Dieter Schotten
|
Angriffserkennung f\"ur industrielle Netzwerke innerhalb des Projektes
IUNO
|
Paper is written in German, presented on the 22. ITG Fachtagung
Mobilkommunikation in Osnabrueck
| null | null | null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing interconnectivity of industrial networks is one of the central
current hot topics. It is adressed by research institutes, as well as industry.
In order to perform the fourth industrial revolution, a full connectivity
between production facilities is necessary. Due to this connectivity, however,
an abundance of new attack vectors emerges. In the National Reference Project
for Industrial IT-Security (IUNO), these risks and threats are addressed and
solutions are developed. These solutions are especially applicable for small
and medium sized enterprises that have not as much means in staff as well as
money as larger companies. These enterprises should be able to implement the
solutions without much effort. The security solutions are derived from four use
cases and implemented prototypically. A further topic of this work are the
research areas of the German Research Center for Artificial Intelligence that
address the given challenges, as well as the solutions developed in the context
of IUNO. Aside from the project itself, a method for distributed network data
collection aggregation is presented, as a prerequisite for anomaly detection
for network security.
|
[
{
"version": "v1",
"created": "Wed, 27 Sep 2017 11:24:56 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Nov 2017 15:01:34 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Anton",
"Simon Duque",
""
],
[
"Fraunholz",
"Daniel",
""
],
[
"Schotten",
"Hans Dieter",
""
]
] |
new_dataset
| 0.977776 |
1710.05172
|
Runmin Cong
|
Runmin Cong, Jianjun Lei, Huazhu Fu, Qingming Huang, Xiaochun Cao,
Chunping Hou
|
Co-saliency Detection for RGBD Images Based on Multi-constraint Feature
Matching and Cross Label Propagation
|
11 pages, 8 figures, Accepted by IEEE Transactions on Image
Processing, Project URL: https://rmcong.github.io/proj_RGBD_cosal.html
| null |
10.1109/TIP.2017.2763819
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Co-saliency detection aims at extracting the common salient regions from an
image group containing two or more relevant images. It is a newly emerging
topic in computer vision community. Different from the most existing
co-saliency methods focusing on RGB images, this paper proposes a novel
co-saliency detection model for RGBD images, which utilizes the depth
information to enhance identification of co-saliency. First, the intra saliency
map for each image is generated by the single image saliency model, while the
inter saliency map is calculated based on the multi-constraint feature
matching, which represents the constraint relationship among multiple images.
Then, the optimization scheme, namely Cross Label Propagation (CLP), is used to
refine the intra and inter saliency maps in a cross way. Finally, all the
original and optimized saliency maps are integrated to generate the final
co-saliency result. The proposed method introduces the depth information and
multi-constraint feature matching to improve the performance of co-saliency
detection. Moreover, the proposed method can effectively exploit any existing
single image saliency model to work well in co-saliency scenarios. Experiments
on two RGBD co-saliency datasets demonstrate the effectiveness of our proposed
model.
|
[
{
"version": "v1",
"created": "Sat, 14 Oct 2017 12:28:35 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Cong",
"Runmin",
""
],
[
"Lei",
"Jianjun",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Huang",
"Qingming",
""
],
[
"Cao",
"Xiaochun",
""
],
[
"Hou",
"Chunping",
""
]
] |
new_dataset
| 0.999082 |
1711.07611
|
Noah Weber
|
Noah Weber, Niranjan Balasubramanian, Nathanael Chambers
|
Event Representations with Tensor-based Compositions
|
Accepted at AAAI 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust and flexible event representations are important to many core areas in
language understanding. Scripts were proposed early on as a way of representing
sequences of events for such understanding, and has recently attracted renewed
attention. However, obtaining effective representations for modeling
script-like event sequences is challenging. It requires representations that
can capture event-level and scenario-level semantics. We propose a new
tensor-based composition method for creating event representations. The method
captures more subtle semantic interactions between an event and its entities
and yields representations that are effective at multiple event-related tasks.
With the continuous representations, we also devise a simple schema generation
method which produces better schemas compared to a prior discrete
representation based method. Our analysis shows that the tensors capture
distinct usages of a predicate even when there are only subtle differences in
their surface realizations.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 03:04:02 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Weber",
"Noah",
""
],
[
"Balasubramanian",
"Niranjan",
""
],
[
"Chambers",
"Nathanael",
""
]
] |
new_dataset
| 0.969668 |
1711.07689
|
Alessandro Guidotti
|
A. Guidotti, A. Vanelli-Coralli, T. Foggi, G. Colavolpe, M. Caus, J.
Bas, S. Cioni, A. Modenini
|
LTE-based Satellite Communications in LEO Mega-Constellations
|
Submitted to IJSCN Special Issue on SatNEx IV
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The integration of satellite and terrestrial networks is a promising solution
for extending broadband coverage to areas not connected to a terrestrial
infrastructure, as also demonstrated by recent commercial and standardisation
endeavours. However, the large delays and Doppler shifts over the satellite
channel pose severe technical challenges to traditional terrestrial systems, as
LTE or 5G. In this paper, two architectures are proposed for a LEO
mega-constellation realising a satellite-enabled LTE system, in which the on-
ground LTE entity is either an eNB (Sat-eNB) or a Relay Node (Sat-RN). The
impact of satellite channel impairments as large delays and Doppler shifts on
LTE PHY/MAC procedures is discussed and assessed. The proposed analysis shows
that, while carrier spacings, Random Access, and RN attach procedures do not
pose specific issues, HARQ requires substantial modifications. Moreover,
advanced handover procedures will be also required due to the satellites'
movement.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 09:46:04 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Guidotti",
"A.",
""
],
[
"Vanelli-Coralli",
"A.",
""
],
[
"Foggi",
"T.",
""
],
[
"Colavolpe",
"G.",
""
],
[
"Caus",
"M.",
""
],
[
"Bas",
"J.",
""
],
[
"Cioni",
"S.",
""
],
[
"Modenini",
"A.",
""
]
] |
new_dataset
| 0.999391 |
1711.07838
|
Quanyu Dai
|
Quanyu Dai, Qiang Li, Jian Tang, Dan Wang
|
Adversarial Network Embedding
|
AAAI 2018
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning low-dimensional representations of networks has proved effective in
a variety of tasks such as node classification, link prediction and network
visualization. Existing methods can effectively encode different structural
properties into the representations, such as neighborhood connectivity
patterns, global structural role similarities and other high-order proximities.
However, except for objectives to capture network structural properties, most
of them suffer from lack of additional constraints for enhancing the robustness
of representations. In this paper, we aim to exploit the strengths of
generative adversarial networks in capturing latent features, and investigate
its contribution in learning stable and robust graph representations.
Specifically, we propose an Adversarial Network Embedding (ANE) framework,
which leverages the adversarial learning principle to regularize the
representation learning. It consists of two components, i.e., a structure
preserving component and an adversarial learning component. The former
component aims to capture network structural properties, while the latter
contributes to learning robust representations by matching the posterior
distribution of the latent representations to given priors. As shown by the
empirical results, our method is competitive with or superior to
state-of-the-art approaches on benchmark network embedding tasks.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 15:19:31 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Dai",
"Quanyu",
""
],
[
"Li",
"Qiang",
""
],
[
"Tang",
"Jian",
""
],
[
"Wang",
"Dan",
""
]
] |
new_dataset
| 0.968692 |
1711.07876
|
Luiz Capretz Dr.
|
Luiz Fernando Capretz, Fahem Ahmed, Fabio Queda Bueno da Silva
|
Soft Sides of Software
| null |
Information and Software Technology, 92(2017):92-94, Elsevier,
July 2017
|
10.1016/j.infsof.2017.07.011,
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software is a field of rapid changes: the best technology today becomes
obsolete in the near future. If we review the graduate attributes of any of the
software engineering programs across the world, life-long learning is one of
them. The social and psychological aspects of professional development is
linked with rewards. In organizations, where people are provided with learning
opportunities and there is a culture that rewards learning, people embrace
changes easily. However, the software industry tends to be short-sighted and
its primary focus is more on current project success; it usually ignores the
capacity building of the individual or team. It is hoped that our software
engineering colleagues will be motivated to conduct more research into the area
of software psychology so as to understand more completely the possibilities
for increased effectiveness and personal fulfillment among software engineers
working alone and in teams.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 16:20:53 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Capretz",
"Luiz Fernando",
""
],
[
"Ahmed",
"Fahem",
""
],
[
"da Silva",
"Fabio Queda Bueno",
""
]
] |
new_dataset
| 0.984092 |
1711.07951
|
Olivier Van Acker
|
Olivier Van Acker and Oded Lachish and Graeme Burnett
|
Cellular Automata Simulation on FPGA for Training Neural Networks with
Virtual World Imagery
|
Published as a short paper at IEEE CIG2017
| null |
10.1109/CIG.2017.8080450
| null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ongoing work on a tool that consists of two parts: (i) A raw
micro-level abstract world simulator with an interface to (ii) a 3D game
engine, translator of raw abstract simulator data to photorealistic graphics.
Part (i) implements a dedicated cellular automata (CA) on reconfigurable
hardware (FPGA) and part (ii) interfaces with a deep learning framework for
training neural networks. The bottleneck of such an architecture usually lies
in the fact that transferring the state of the whole CA significantly slows
down the simulation. We bypass this by sending only a small subset of the
general state, which we call a 'locus of visibility', akin to a torchlight in a
darkened 3D space, into the simulation. The torchlight concept exists in many
games but these games generally only simulate what is in or near the locus. Our
chosen architecture will enable us to simulate on a micro level outside the
locus. This will give us the advantage of being able to create a larger and
more fine-grained simulation which can be used to train neural networks for use
in games.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 18:22:05 GMT"
}
] | 2017-11-22T00:00:00 |
[
[
"Van Acker",
"Olivier",
""
],
[
"Lachish",
"Oded",
""
],
[
"Burnett",
"Graeme",
""
]
] |
new_dataset
| 0.997196 |
1308.0219
|
Kees Middelburg
|
J. A. Bergstra, C. A. Middelburg
|
Instruction sequence expressions for the secure hash algorithm SHA-256
|
14 pages; several minor errors corrected; counting error corrected;
instruction sequence fault repaired; misunderstanding cleared up; a minor
error corrected; 15 pages, presentation improved, a minor error corrected.
preliminaries have text overlap with arXiv:1301.3297
| null | null | null |
cs.PL cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The secure hash function SHA-256 is a function on bit strings. This means
that its restriction to the bit strings of any given length can be computed by
a finite instruction sequence that contains only instructions to set and get
the content of Boolean registers, forward jump instructions, and a termination
instruction. We describe such instruction sequences for the restrictions to bit
strings of the different possible lengths by means of uniform terms from an
algebraic theory.
|
[
{
"version": "v1",
"created": "Thu, 1 Aug 2013 14:19:28 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Aug 2013 12:20:41 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Aug 2013 11:40:13 GMT"
},
{
"version": "v4",
"created": "Mon, 16 Sep 2013 11:58:04 GMT"
},
{
"version": "v5",
"created": "Sat, 2 Nov 2013 09:45:55 GMT"
},
{
"version": "v6",
"created": "Tue, 11 Nov 2014 15:40:30 GMT"
},
{
"version": "v7",
"created": "Sat, 18 Nov 2017 11:01:35 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Bergstra",
"J. A.",
""
],
[
"Middelburg",
"C. A.",
""
]
] |
new_dataset
| 0.966367 |
1610.07393
|
Samuele Capobianco
|
Samuele Capobianco, Simone Marinai
|
Record Counting in Historical Handwritten Documents with Convolutional
Neural Networks
|
Accepted to ICPR workshop on Deep Learning for Pattern Recognition
(DLPR 2016)
| null |
10.1016/j.patrec.2017.10.023
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the use of Convolutional Neural Networks for
counting the number of records in historical handwritten documents. With this
work we demonstrate that training the networks only with synthetic images
allows us to perform a near perfect evaluation of the number of records printed
on historical documents. The experiments have been performed on a benchmark
dataset composed by marriage records and outperform previous results on this
dataset.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2016 12:56:20 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2016 10:23:02 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Capobianco",
"Samuele",
""
],
[
"Marinai",
"Simone",
""
]
] |
new_dataset
| 0.997159 |
1612.02916
|
Ling Ren
|
Ittai Abraham and Dahlia Malkhi and Kartik Nayak and Ling Ren and
Alexander Spiegelman
|
Solida: A Blockchain Protocol Based on Reconfigurable Byzantine
Consensus
| null | null | null | null |
cs.CR cs.DC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The decentralized cryptocurrency Bitcoin has experienced great success but
also encountered many challenges. One of the challenges has been the long
confirmation time. Another challenge is the lack of incentives at certain steps
of the protocol, raising concerns for transaction withholding, selfish mining,
etc. To address these challenges, we propose Solida, a decentralized blockchain
protocol based on reconfigurable Byzantine consensus augmented by
proof-of-work. Solida improves on Bitcoin in confirmation time, and provides
safety and liveness assuming the adversary control less than (roughly)
one-third of the total mining power.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 04:59:22 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Nov 2017 21:47:49 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Abraham",
"Ittai",
""
],
[
"Malkhi",
"Dahlia",
""
],
[
"Nayak",
"Kartik",
""
],
[
"Ren",
"Ling",
""
],
[
"Spiegelman",
"Alexander",
""
]
] |
new_dataset
| 0.997468 |
1612.04433
|
Emiliano De Cristofaro
|
Enrico Mariconti, Lucky Onwuzurike, Panagiotis Andriotis, Emiliano De
Cristofaro, Gordon Ross, Gianluca Stringhini
|
MaMaDroid: Detecting Android Malware by Building Markov Chains of
Behavioral Models
|
This paper appears in the Proceedings of 24th Network and Distributed
System Security Symposium (NDSS 2017). Some experiments have been slightly
updated in this version
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise in popularity of the Android platform has resulted in an explosion
of malware threats targeting it. As both Android malware and the operating
system itself constantly evolve, it is very challenging to design robust
malware mitigation techniques that can operate for long periods of time without
the need for modifications or costly re-training. In this paper, we present
MaMaDroid, an Android malware detection system that relies on app behavior.
MaMaDroid builds a behavioral model, in the form of a Markov chain, from the
sequence of abstracted API calls performed by an app, and uses it to extract
features and perform classification. By abstracting calls to their packages or
families, MaMaDroid maintains resilience to API changes and keeps the feature
set size manageable. We evaluate its accuracy on a dataset of 8.5K benign and
35.5K malicious apps collected over a period of six years, showing that it not
only effectively detects malware (with up to 99% F-measure), but also that the
model built by the system keeps its detection capabilities for long periods of
time (on average, 86% and 75% F-measure, respectively, one and two years after
training). Finally, we compare against DroidAPIMiner, a state-of-the-art system
that relies on the frequency of API calls performed by apps, showing that
MaMaDroid significantly outperforms it.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 23:57:28 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2017 12:12:11 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Nov 2017 10:51:40 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Mariconti",
"Enrico",
""
],
[
"Onwuzurike",
"Lucky",
""
],
[
"Andriotis",
"Panagiotis",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Ross",
"Gordon",
""
],
[
"Stringhini",
"Gianluca",
""
]
] |
new_dataset
| 0.999259 |
1612.04787
|
Arthur Willis
|
Arthur W. Wetzel, Jennifer Bakal, Markus Dittrich, David G. C.
Hildebrand, Josh L. Morgan, Jeff W. Lichtman
|
Registering large volume serial-section electron microscopy image sets
for neural circuit reconstruction using FFT signal whitening
|
10 pages, 4 figures as submitted for the 2016 IEEE Applied Imagery
and Pattern Recognition Workshop proceedings, Oct 18-20, 2016
| null |
10.1109/AIPR.2016.8010595
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The detailed reconstruction of neural anatomy for connectomics studies
requires a combination of resolution and large three-dimensional data capture
provided by serial section electron microscopy (ssEM). The convergence of high
throughput ssEM imaging and improved tissue preparation methods now allows ssEM
capture of complete specimen volumes up to cubic millimeter scale. The
resulting multi-terabyte image sets span thousands of serial sections and must
be precisely registered into coherent volumetric forms in which neural circuits
can be traced and segmented. This paper introduces a Signal Whitening Fourier
Transform Image Registration approach (SWiFT-IR) under development at the
Pittsburgh Supercomputing Center and its use to align mouse and zebrafish brain
datasets acquired using the wafer mapper ssEM imaging technology recently
developed at Harvard University. Unlike other methods now used for ssEM
registration, SWiFT-IR modifies its spatial frequency response during image
matching to maximize a signal-to-noise measure used as its primary indicator of
alignment quality. This alignment signal is more robust to rapid variations in
biological content and unavoidable data distortions than either phase-only or
standard Pearson correlation, thus allowing more precise alignment and
statistical confidence. These improvements in turn enable an iterative
registration procedure based on projections through multiple sections rather
than more typical adjacent-pair matching methods. This projection approach,
when coupled with known anatomical constraints and iteratively applied in a
multi-resolution pyramid fashion, drives the alignment into a smooth form that
properly represents complex and widely varying anatomical content such as the
full cross-section zebrafish data.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2016 20:03:05 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Wetzel",
"Arthur W.",
""
],
[
"Bakal",
"Jennifer",
""
],
[
"Dittrich",
"Markus",
""
],
[
"Hildebrand",
"David G. C.",
""
],
[
"Morgan",
"Josh L.",
""
],
[
"Lichtman",
"Jeff W.",
""
]
] |
new_dataset
| 0.973668 |
1704.01389
|
Gregory Gutin
|
Gregory Gutin and Ruijuan Li
|
Seymour's second neighbourhood conjecture for quasi-transitive oriented
graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Seymour's second neighbourhood conjecture asserts that every oriented graph
has a vertex whose second out-neighbourhood is at least as large as its
out-neighbourhood. In this paper, we prove that the conjecture holds for
quasi-transitive oriented graphs, which is a superclass of tournaments and
transitive acyclic digraphs. A digraph $D$ is called quasi-transitive is for
every pair $xy,yz$ of arcs between distinct vertices $x,y,z$, $xz$ or $zx$
("or" is inclusive here) is in $D$.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2017 12:51:58 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Nov 2017 08:49:01 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Gutin",
"Gregory",
""
],
[
"Li",
"Ruijuan",
""
]
] |
new_dataset
| 0.999078 |
1706.04277
|
Mahmoud Afifi
|
Mahmoud Afifi, Abdelrahman Abdelhamed
|
AFIF4: Deep Gender Classification based on AdaBoost-based Fusion of
Isolated Facial Features and Foggy Faces
|
26 pages, 7 figures, 7 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gender classification aims at recognizing a person's gender. Despite the high
accuracy achieved by state-of-the-art methods for this task, there is still
room for improvement in generalized and unrestricted datasets. In this paper,
we advocate a new strategy inspired by the behavior of humans in gender
recognition. Instead of dealing with the face image as a sole feature, we rely
on the combination of isolated facial features and a holistic feature which we
call the foggy face. Then, we use these features to train deep convolutional
neural networks followed by an AdaBoost-based score fusion to infer the final
gender class. We evaluate our method on four challenging datasets to
demonstrate its efficacy in achieving better or on-par accuracy with
state-of-the-art methods. In addition, we present a new face dataset that
intensifies the challenges of occluded faces and illumination changes, which we
believe to be a much-needed resource for gender classification research.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2017 23:15:14 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2017 00:48:27 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Sep 2017 02:54:38 GMT"
},
{
"version": "v4",
"created": "Sat, 30 Sep 2017 01:00:35 GMT"
},
{
"version": "v5",
"created": "Sat, 18 Nov 2017 02:26:50 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Afifi",
"Mahmoud",
""
],
[
"Abdelhamed",
"Abdelrahman",
""
]
] |
new_dataset
| 0.975635 |
1706.04652
|
Ulrich Viereck
|
Ulrich Viereck, Andreas ten Pas, Kate Saenko, Robert Platt
|
Learning a visuomotor controller for real world robotic grasping using
simulated depth images
|
1st Conference on Robot Learning (CoRL), 13-15 November 2017,
Mountain View, CA
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We want to build robots that are useful in unstructured real world
applications, such as doing work in the household. Grasping in particular is an
important skill in this domain, yet it remains a challenge. One of the key
hurdles is handling unexpected changes or motion in the objects being grasped
and kinematic noise or other errors in the robot. This paper proposes an
approach to learning a closed-loop controller for robotic grasping that
dynamically guides the gripper to the object. We use a wrist-mounted sensor to
acquire depth images in front of the gripper and train a convolutional neural
network to learn a distance function to true grasps for grasp configurations
over an image. The training sensor data is generated in simulation, a major
advantage over previous work that uses real robot experience, which is costly
to obtain. Despite being trained in simulation, our approach works well on real
noisy sensor images. We compare our controller in simulated and real robot
experiments to a strong baseline for grasp pose detection, and find that our
approach significantly outperforms the baseline in the presence of kinematic
noise, perceptual errors and disturbances of the object during grasping.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2017 19:50:09 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2017 21:18:20 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Nov 2017 20:09:12 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Viereck",
"Ulrich",
""
],
[
"Pas",
"Andreas ten",
""
],
[
"Saenko",
"Kate",
""
],
[
"Platt",
"Robert",
""
]
] |
new_dataset
| 0.977826 |
1708.06822
|
Mehmet Turan
|
Mehmet Turan, Yasin Almalioglu, Helder Araujo, Ender Konukoglu, Metin
Sitti
|
Deep EndoVO: A Recurrent Convolutional Neural Network (RCNN) based
Visual Odometry Approach for Endoscopic Capsule Robots
| null | null |
10.1016/j.neucom.2017.10.014
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ingestible wireless capsule endoscopy is an emerging minimally invasive
diagnostic technology for inspection of the GI tract and diagnosis of a wide
range of diseases and pathologies. Medical device companies and many research
groups have recently made substantial progresses in converting passive capsule
endoscopes to active capsule robots, enabling more accurate, precise, and
intuitive detection of the location and size of the diseased areas. Since a
reliable real time pose estimation functionality is crucial for actively
controlled endoscopic capsule robots, in this study, we propose a monocular
visual odometry (VO) method for endoscopic capsule robot operations. Our method
lies on the application of the deep Recurrent Convolutional Neural Networks
(RCNNs) for the visual odometry task, where Convolutional Neural Networks
(CNNs) and Recurrent Neural Networks (RNNs) are used for the feature extraction
and inference of dynamics across the frames, respectively. Detailed analyses
and evaluations made on a real pig stomach dataset proves that our system
achieves high translational and rotational accuracies for different types of
endoscopic capsule robot trajectories.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2017 21:13:18 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2017 13:47:53 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Turan",
"Mehmet",
""
],
[
"Almalioglu",
"Yasin",
""
],
[
"Araujo",
"Helder",
""
],
[
"Konukoglu",
"Ender",
""
],
[
"Sitti",
"Metin",
""
]
] |
new_dataset
| 0.994184 |
1711.01030
|
Fangguo Zhang
|
Huige Li, Fangguo Zhang, Jiejie He and Haibo Tian
|
A Searchable Symmetric Encryption Scheme using BlockChain
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At present, the cloud storage used in searchable symmetric encryption schemes
(SSE) is provided in a private way, which cannot be seen as a true cloud.
Moreover, the cloud server is thought to be credible, because it always returns
the search result to the user, even they are not correct. In order to really
resist this malicious adversary and accelerate the usage of the data, it is
necessary to store the data on a public chain, which can be seen as a
decentralized system. As the increasing amount of the data, the search problem
becomes more and more intractable, because there does not exist any effective
solution at present.
In this paper, we begin by pointing out the importance of storing the data in
a public chain. We then innovatively construct a model of SSE using
blockchain(SSE-using-BC) and give its security definition to ensure the privacy
of the data and improve the search efficiency. According to the size of data,
we consider two different cases and propose two corresponding schemes. Lastly,
the security and performance analyses show that our scheme is feasible and
secure.
|
[
{
"version": "v1",
"created": "Fri, 3 Nov 2017 05:14:11 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Nov 2017 08:59:53 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Li",
"Huige",
""
],
[
"Zhang",
"Fangguo",
""
],
[
"He",
"Jiejie",
""
],
[
"Tian",
"Haibo",
""
]
] |
new_dataset
| 0.998077 |
1711.04915
|
Zhe Gan
|
Yunchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan
Li, Lawrence Carin
|
Adversarial Symmetric Variational Autoencoder
|
Accepted to NIPS 2017
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new form of variational autoencoder (VAE) is developed, in which the joint
distribution of data and codes is considered in two (symmetric) forms: ($i$)
from observed data fed through the encoder to yield codes, and ($ii$) from
latent codes drawn from a simple prior and propagated through the decoder to
manifest data. Lower bounds are learned for marginal log-likelihood fits
observed data and latent codes. When learning with the variational bound, one
seeks to minimize the symmetric Kullback-Leibler divergence of joint density
functions from ($i$) and ($ii$), while simultaneously seeking to maximize the
two marginal log-likelihoods. To facilitate learning, a new form of adversarial
training is developed. An extensive set of experiments is performed, in which
we demonstrate state-of-the-art data reconstruction and generation on several
image benchmark datasets.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 02:48:01 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Nov 2017 18:29:28 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Pu",
"Yunchen",
""
],
[
"Wang",
"Weiyao",
""
],
[
"Henao",
"Ricardo",
""
],
[
"Chen",
"Liqun",
""
],
[
"Gan",
"Zhe",
""
],
[
"Li",
"Chunyuan",
""
],
[
"Carin",
"Lawrence",
""
]
] |
new_dataset
| 0.999462 |
1711.06768
|
Eli (Omid) David
|
Dror Sholomon, Eli David, Nathan S. Netanyahu
|
A Generalized Genetic Algorithm-Based Solver for Very Large Jigsaw
Puzzles of Complex Types
| null |
AAAI Conference on Artificial Intelligence, pages 2839-2845,
Quebec City, Canada, July 2014
| null | null |
cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce new types of square-piece jigsaw puzzles, where in
addition to the unknown location and orientation of each piece, a piece might
also need to be flipped. These puzzles, which are associated with a number of
real world problems, are considerably harder, from a computational standpoint.
Specifically, we present a novel generalized genetic algorithm (GA)-based
solver that can handle puzzle pieces of unknown location and orientation (Type
2 puzzles) and (two-sided) puzzle pieces of unknown location, orientation, and
face (Type 4 puzzles). To the best of our knowledge, our solver provides a new
state-of-the-art, solving previously attempted puzzles faster and far more
accurately, handling puzzle sizes that have never been attempted before, and
assembling the newly introduced two-sided puzzles automatically and
effectively. This paper also presents, among other results, the most extensive
set of experimental results, compiled as of yet, on Type 2 puzzles.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 23:17:29 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Sholomon",
"Dror",
""
],
[
"David",
"Eli",
""
],
[
"Netanyahu",
"Nathan S.",
""
]
] |
new_dataset
| 0.966987 |
1711.06769
|
Eli (Omid) David
|
Dror Sholomon, Eli David, Nathan S. Netanyahu
|
A Genetic Algorithm-Based Solver for Very Large Jigsaw Puzzles
|
arXiv admin note: substantial text overlap with arXiv:1711.06767
|
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
pages 1767-1774, Portland, OR, June 2013
|
10.1109/CVPR.2013.231
| null |
cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose the first effective automated, genetic algorithm
(GA)-based jigsaw puzzle solver. We introduce a novel procedure of merging two
"parent" solutions to an improved "child" solution by detecting, extracting,
and combining correctly assembled puzzle segments. The solver proposed exhibits
state-of-the-art performance solving previously attempted puzzles faster and
far more accurately, and also puzzles of size never before attempted. Other
contributions include the creation of a benchmark of large images, previously
unavailable. We share the data sets and all of our results for future testing
and comparative evaluation of jigsaw puzzle solvers.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 23:17:33 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Sholomon",
"Dror",
""
],
[
"David",
"Eli",
""
],
[
"Netanyahu",
"Nathan S.",
""
]
] |
new_dataset
| 0.974178 |
1711.06819
|
Vishal Saxena
|
Vishal Saxena
|
A Compact CMOS Memristor Emulator Circuit and its Applications
|
Submitted to International Symposium of Circuits and Systems (ISCAS)
2018
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conceptual memristors have recently gathered wider interest due to their
diverse application in non-von Neumann computing, machine learning,
neuromorphic computing, and chaotic circuits. We introduce a compact CMOS
circuit that emulates idealized memristor characteristics and can bridge the
gap between concepts to chip-scale realization by transcending device
challenges. The CMOS memristor circuit embodies a two-terminal variable
resistor whose resistance is controlled by the voltage applied across its
terminals. The memristor 'state' is held in a capacitor that controls the
resistor value. This work presents the design and simulation of the memristor
emulation circuit, and applies it to a memcomputing application of maze solving
using analog parallelism. Furthermore, the memristor emulator circuit can be
designed and fabricated using standard commercial CMOS technologies and opens
doors to interesting applications in neuromorphic and machine learning
circuits.
|
[
{
"version": "v1",
"created": "Sat, 18 Nov 2017 06:43:25 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Saxena",
"Vishal",
""
]
] |
new_dataset
| 0.999845 |
1711.06862
|
Rajnikant Sharma
|
Ishmaal Erekson, Rajnikant Sharma, Ashwini Ratnoo, Ryan Gerdes
|
Multi-vehicle Path Following using Modified Trajectory Shaping Guidance
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we formulate a virtual target-based path following guidance
law aimed towards multi-vehicle path following problem. The guidance law is
well suited to precisely follow circular paths while minting desired distance
between two adjacent vehicles where path information is only available to the
lead vehicle. We analytically show lateral and longitudnal stability and
convergence on the path. This is also validated through simulation and
experimental results.
|
[
{
"version": "v1",
"created": "Sat, 18 Nov 2017 13:39:16 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Erekson",
"Ishmaal",
""
],
[
"Sharma",
"Rajnikant",
""
],
[
"Ratnoo",
"Ashwini",
""
],
[
"Gerdes",
"Ryan",
""
]
] |
new_dataset
| 0.970614 |
1711.06895
|
Hai Hu
|
Hai Hu
|
Is China Entering WTO or shijie maoyi zuzhi--a Corpus Study of English
Acronyms in Chinese Newspapers
|
To appear in Proceedings of the 28th North American Conference on
Chinese Linguistics
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is one of the first studies that quantitatively examine the usage of
English acronyms (e.g. WTO) in Chinese texts. Using newspaper corpora, I try to
answer 1) for all instances of a concept that has an English acronym (e.g.
World Trade Organization), what percentage is expressed in the English acronym
(WTO), and what percentage in its Chinese translation (shijie maoyi zuzhi), and
2) what factors are at play in language users' choice between the English and
Chinese forms? Results show that different concepts have different percentage
for English acronyms (PercentOfEn), ranging from 2% to 98%. Linear models show
that PercentOfEn for individual concepts can be predicted by language economy
(how long the Chinese translation is), concept frequency, and whether the first
appearance of the concept in Chinese newspapers is the English acronym or its
Chinese translation (all p < .05).
|
[
{
"version": "v1",
"created": "Sat, 18 Nov 2017 17:01:24 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Hu",
"Hai",
""
]
] |
new_dataset
| 0.999413 |
1711.06964
|
Amitabha Roy
|
Amitabha Roy, Subramanya R. Dulloor
|
Cyclone: High Availability for Persistent Key Value Stores
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Persistent key value stores are an important component of many distributed
data serving solutions with innovations targeted at taking advantage of growing
flash speeds. Unfortunately their performance is hampered by the need to
maintain and replicate a write ahead log to guarantee availability in the face
of machine and storage failures. Cyclone is a replicated log plug-in for key
value stores that systematically addresses various sources of this bottleneck.
It uses a small amount of non-volatile memory directly addressable by the CPU -
such as in the form of NVDIMMs or Intel 3DXPoint - to remove block oriented IO
devices such as SSDs from the critical path for appending to the log. This
enables it to address network overheads using an implementation of the RAFT
consensus protocol that is designed around a userspace network stack to relieve
the CPU of the burden of data copies. Finally, it provides a way to efficiently
map the commutativity in key-value store APIs to the parallelism available in
commodity NICs. Cyclone is able to replicate millions of small updates per
second using only commodity 10 gigabit ethernet adapters. As a practical
application, we use it to improve the performance (and availability) of
RocksDB, a popular persistent key value store by an order of magnitude when
compared to its own write ahead log without replication.
|
[
{
"version": "v1",
"created": "Sun, 19 Nov 2017 04:07:34 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Roy",
"Amitabha",
""
],
[
"Dulloor",
"Subramanya R.",
""
]
] |
new_dataset
| 0.980519 |
1711.07208
|
Yelena Mejova
|
Yelena Mejova, Youcef Benkhedda, Khairani
|
#Halal Culture on Instagram
| null | null |
10.3389/fdigh.2017.00021
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Halal is a notion that applies to both objects and actions, and means
permissible according to Islamic law. It may be most often associated with food
and the rules of selecting, slaughtering, and cooking animals. In the
globalized world, halal can be found in street corners of New York and beauty
shops of Manila. In this study, we explore the cultural diversity of the
concept, as revealed through social media, and specifically the way it is
expressed by different populations around the world, and how it relates to
their perception of (i) religious and (ii) governmental authority, and (iii)
personal health. Here, we analyze two Instagram datasets, using Halal in Arabic
(325,665 posts) and in English (1,004,445 posts), which provide a global view
of major Muslim populations around the world. We find a great variety in the
use of halal within Arabic, English, and Indonesian-speaking populations, with
animal trade emphasized in first (making up 61% of the language's stream), food
in second (80%), and cosmetics and supplements in third (70%). The
commercialization of the term halal is a powerful signal of its detraction from
its traditional roots. We find a complex social engagement around posts
mentioning religious terms, such that when a food-related post is accompanied
by a religious term, it on average gets more likes in English and Indonesian,
but not in Arabic, indicating a potential shift out of its traditional moral
framing.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 08:59:12 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Mejova",
"Yelena",
""
],
[
"Benkhedda",
"Youcef",
""
],
[
"Khairani",
"",
""
]
] |
new_dataset
| 0.997547 |
1711.07224
|
Olivier Van Acker
|
Olivier Van Acker
|
SCTP in Go
|
Published in the proceedings of AsiaBSD 2013, at Tokyo, Japan
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a successful attempt to combine two relatively new
technologies: Stream Control Transmission Protocol (SCTP) and the programming
language Go, achieved by extending the existing Go network library with SCTP.
SCTP is a reliable, message-oriented transport layer protocol, similar to TCP
and UDP. It offers sequenced delivery of messages over multiple streams,
network fault tolerance via multihoming support, resistance against flooding
and masquerade attacks and congestion avoidance procedures. It has improvements
over wider-established network technologies and is gradually gaining traction
in the telecom and Internet industries. Go is an open source, concurrent,
statically typed, compiled and garbage-collected language, developed by Google
Inc. Go's main design goals are simplicity and ease of use and it has a syntax
broadly similar to C. Go has good support for networked and multicore computing
and as a system language is often used for networked applications, however it
doesn't yet support SCTP. By combining SCTP and Go, software engineers can
exploit the advantages of both technologies. The implementation of SCTP
extending the Go network library was done on FreeBSD and Mac OS X, the two
operating systems that contain the most up to date implementation of the SCTP
specification.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 09:39:56 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Van Acker",
"Olivier",
""
]
] |
new_dataset
| 0.993425 |
1711.07231
|
Alexis Arnaudon Mr
|
Alexis Arnaudon, Darryl Holm, Stefan Sommer
|
Stochastic metamorphosis with template uncertainties
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate two stochastic perturbations of the
metamorphosis equations of image analysis, in the geometrical context of the
Euler-Poincar\'e theory. In the metamorphosis of images, the Lie group of
diffeomorphisms deforms a template image that is undergoing its own internal
dynamics as it deforms. This type of deformation allows more freedom for image
matching and has analogies with complex fluids when the template properties are
regarded as order parameters (coset spaces of broken symmetries). The first
stochastic perturbation we consider corresponds to uncertainty due to random
errors in the reconstruction of the deformation map from its vector field. We
also consider a second stochastic perturbation, which compounds the uncertainty
in of the deformation map with the uncertainty in the reconstruction of the
template position from its velocity field. We apply this general geometric
theory to several classical examples, including landmarks, images, and closed
curves, and we discuss its use for functional data analysis.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 09:55:15 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Arnaudon",
"Alexis",
""
],
[
"Holm",
"Darryl",
""
],
[
"Sommer",
"Stefan",
""
]
] |
new_dataset
| 0.993297 |
1711.07325
|
Wiktor Daszczuk
|
W{\l}odzimierz Choroma\'nski, Wiktor Daszczuk, Jaros{\l}aw Dyduch,
Mariusz Maciejewski, Pawe{\l} Brach, Waldemar Grabski
|
PRT (Personal Rapid Transit) network simulation
|
17 pages, 6 figures
|
Proceedings of the 13th World Conference on Transportation
Research, Rio de Janeiro, Brasil, 7-10 July 2013, Joao Victor (ed.), 2014,
Federal University of Rio de Janeiro, ISBN 978-85-285-0232-9
| null | null |
cs.DC cs.CE cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transportation problems of large urban conurbations inspire search for new
transportation systems, that meet high environmental standards, are relatively
cheap and user friendly. The latter element also includes the needs of disabled
and elderly people. This article concerns a new transportation system PRT -
Personal Rapid Transit. In this article the attention is focused on the
analysis of the efficiency of the PRT transport network. The simulator of
vehicle movement in PRT network as well as algorithms for traffic management
and control will be presented. The proposal of its physical implementation will
be also included.
|
[
{
"version": "v1",
"created": "Wed, 18 Oct 2017 19:09:48 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Choromański",
"Włodzimierz",
""
],
[
"Daszczuk",
"Wiktor",
""
],
[
"Dyduch",
"Jarosław",
""
],
[
"Maciejewski",
"Mariusz",
""
],
[
"Brach",
"Paweł",
""
],
[
"Grabski",
"Waldemar",
""
]
] |
new_dataset
| 0.994663 |
1711.07361
|
Kathleen Hamilton
|
Kathleen E. Hamilton, Neena Imam, Travis S. Humble
|
Community detection with spiking neural networks for neuromorphic
hardware
|
Conference paper presented at ORNL Neuromorphic Workshop 2017, 7
pages, 6 figures
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present results related to the performance of an algorithm for community
detection which incorporates event-driven computation. We define a mapping
which takes a graph G to a system of spiking neurons. Using a fully connected
spiking neuron system, with both inhibitory and excitatory synaptic
connections, the firing patterns of neurons within the same community can be
distinguished from firing patterns of neurons in different communities. On a
random graph with 128 vertices and known community structure we show that by
using binary decoding and a Hamming-distance based metric, individual
communities can be identified from spike train similarities. Using bipolar
decoding and finite rate thresholding, we verify that inhibitory connections
prevent the spread of spiking patterns.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 15:10:54 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Hamilton",
"Kathleen E.",
""
],
[
"Imam",
"Neena",
""
],
[
"Humble",
"Travis S.",
""
]
] |
new_dataset
| 0.986768 |
1711.07459
|
Alexander Wong
|
Mohammad Javad Shafiee, Francis Li, Brendan Chwyl, and Alexander Wong
|
SquishedNets: Squishing SqueezeNet further for edge device scenarios via
deep evolutionary synthesis
|
4 pages
| null | null | null |
cs.NE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While deep neural networks have been shown in recent years to outperform
other machine learning methods in a wide range of applications, one of the
biggest challenges with enabling deep neural networks for widespread deployment
on edge devices such as mobile and other consumer devices is high computational
and memory requirements. Recently, there has been greater exploration into
small deep neural network architectures that are more suitable for edge
devices, with one of the most popular architectures being SqueezeNet, with an
incredibly small model size of 4.8MB. Taking further advantage of the notion
that many applications of machine learning on edge devices are often
characterized by a low number of target classes, this study explores the
utility of combining architectural modifications and an evolutionary synthesis
strategy for synthesizing even smaller deep neural architectures based on the
more recent SqueezeNet v1.1 macroarchitecture for applications with fewer
target classes. In particular, architectural modifications are first made to
SqueezeNet v1.1 to accommodate for a 10-class ImageNet-10 dataset, and then an
evolutionary synthesis strategy is leveraged to synthesize more efficient deep
neural networks based on this modified macroarchitecture. The resulting
SquishedNets possess model sizes ranging from 2.4MB to 0.95MB (~5.17X smaller
than SqueezeNet v1.1, or 253X smaller than AlexNet). Furthermore, the
SquishedNets are still able to achieve accuracies ranging from 81.2% to 77%,
and able to process at speeds of 156 images/sec to as much as 256 images/sec on
a Nvidia Jetson TX1 embedded chip. These preliminary results show that a
combination of architectural modifications and an evolutionary synthesis
strategy can be a useful tool for producing very small deep neural network
architectures that are well-suited for edge device scenarios.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 18:50:05 GMT"
}
] | 2017-11-21T00:00:00 |
[
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Li",
"Francis",
""
],
[
"Chwyl",
"Brendan",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.9992 |
1611.00096
|
Ambuj Varshney
|
Ambuj Varshney, Oliver Harms, Carlos Perez Penichet, Christian Rohner,
Frederik Hermans, Thiemo Voigt
|
LoRea: A Backscatter Architecture that Achieves a Long Communication
Range
|
Accepted and presented at ACM SenSys 2017, Delft, Netherlands
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is the long-standing assumption that radio communication in the range
of hundreds of meters needs to consume mWs of power at the transmitting device.
In this paper, we demonstrate that this is not necessarily the case for some
devices equipped with backscatter radios. We present LoRea an architecture
consisting of a tag, a reader and multiple carrier generators that overcomes
the power, cost and range limitations of existing systems such as Computational
Radio Frequency Identification~(CRFID). LoRea achieves this by: First,
generating narrow-band backscatter transmissions that improve receiver
sensitivity. Second, mitigating self-interference without the complex designs
employed on RFID readers by keeping carrier signal and backscattered signal
apart in frequency. Finally, decoupling carrier generation from the reader and
using devices such as WiFi routers and sensor nodes as a source of the carrier
signal. An off-the-shelf implementation of LoRea costs 70 USD, a drastic
reduction in price considering commercial RFID readers cost 2000 USD. LoRea's
range scales with the carrier strength, and proximity to the carrier source and
achieves a maximum range of 3.4 kilometre when the tag is located at 1 meter
distance from a 28 dBm carrier source while consuming 70 microwatts at the tag.
When the tag is equidistant from the carrier source and the receiver, we can
communicate upto 75 meter, a significant improvement over existing RFID
readers.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2016 01:10:39 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Nov 2017 22:05:28 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Varshney",
"Ambuj",
""
],
[
"Harms",
"Oliver",
""
],
[
"Penichet",
"Carlos Perez",
""
],
[
"Rohner",
"Christian",
""
],
[
"Hermans",
"Frederik",
""
],
[
"Voigt",
"Thiemo",
""
]
] |
new_dataset
| 0.996362 |
1711.05824
|
Clyde Meli
|
Robert Buttigieg, Mario Farrugia, Clyde Meli
|
Security Issues in Controller Area Networks in Automobiles
|
6 pages. 18th international conference on Sciences and Techniques of
Automatic control & computer engineering - STA'2017, Monastir, Tunisia,
December 21-23, 2017
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern vehicles may contain a considerable number of ECUs (Electronic Control
Units) which are connected through various means of communication, with the CAN
(Controller Area Network) protocol being the most widely used. However, several
vulnerabilities such as the lack of authentication and the lack of data
encryption have been pointed out by several authors, which ultimately render
vehicles unsafe to their users and surroundings. Moreover, the lack of security
in modern automobiles has been studied and analyzed by other researchers as
well as several reports about modern car hacking have (already) been published.
The contribution of this work aimed to analyze and test the level of security
and how resilient is the CAN protocol by taking a BMW E90 (3-series) instrument
cluster as a sample for a proof of concept study. This investigation was
carried out by building and developing a rogue device using cheap commercially
available components while being connected to the same CAN-Bus as a man in the
middle device in order to send spoofed messages to the instrument cluster.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 22:03:36 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Nov 2017 09:27:02 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Buttigieg",
"Robert",
""
],
[
"Farrugia",
"Mario",
""
],
[
"Meli",
"Clyde",
""
]
] |
new_dataset
| 0.9868 |
1711.06264
|
Zsuzsanna Lipt\'ak
|
P\'eter Burcsi and Zsuzsanna Lipt\'ak and W.F. Smyth
|
On the Parikh-de-Bruijn grid
|
18 pages, 3 figures, 1 table
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Parikh-de-Bruijn grid, a graph whose vertices are
fixed-order Parikh vectors, and whose edges are given by a simple shift
operation. This graph gives structural insight into the nature of sets of
Parikh vectors as well as that of the Parikh set of a given string. We show its
utility by proving some results on Parikh-de-Bruijn strings, the abelian analog
of de-Bruijn sequences.
|
[
{
"version": "v1",
"created": "Thu, 16 Nov 2017 17:41:07 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Burcsi",
"Péter",
""
],
[
"Lipták",
"Zsuzsanna",
""
],
[
"Smyth",
"W. F.",
""
]
] |
new_dataset
| 0.987183 |
1711.06317
|
Ehsan Hemmati
|
Mansour Sheikhan, Ehsan Hemmati, Reza Shahnazi
|
GA-PSO-Optimized Neural-Based Control Scheme for Adaptive Congestion
Control to Improve Performance in Multimedia Applications
|
arXiv admin note: text overlap with arXiv:1711.06356
|
Majlesi Journal of Electrical Engineering, [S.l.], v. 6, n. 1,
jan. 2012
| null | null |
cs.NE cs.AI cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active queue control aims to improve the overall communication network
throughput while providing lower delay and small packet loss rate. The basic
idea is to actively trigger packet dropping (or marking provided by explicit
congestion notification (ECN)) before buffer overflow. In this paper, two
artificial neural networks (ANN)-based control schemes are proposed for
adaptive queue control in TCP communication networks. The structure of these
controllers is optimized using genetic algorithm (GA) and the output weights of
ANNs are optimized using particle swarm optimization (PSO) algorithm. The
controllers are radial bias function (RBF)-based, but to improve the robustness
of RBF controller, an error-integral term is added to RBF equation in the
second scheme. Experimental results show that GA- PSO-optimized improved RBF
(I-RBF) model controls network congestion effectively in terms of link
utilization with a low packet loss rate and outperform Drop Tail,
proportional-integral (PI), random exponential marking (REM), and adaptive
random early detection (ARED) controllers.
|
[
{
"version": "v1",
"created": "Thu, 16 Nov 2017 20:52:37 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Sheikhan",
"Mansour",
""
],
[
"Hemmati",
"Ehsan",
""
],
[
"Shahnazi",
"Reza",
""
]
] |
new_dataset
| 0.952448 |
1711.06356
|
Ehsan Hemmati
|
Mansour Sheikhan, Reza Shahnazi, Ehasn Hemmati
|
Adaptive active queue management controller for TCP communication
networks using PSO-RBF models
|
arXiv admin note: text overlap with arXiv:1711.06317
|
Neural Computing and Applications, Volume 22, Issue 5, Pages
933-94, 2012
|
10.1007/s00521-011-0786-0
| null |
cs.NI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Addressing performance degradations in end-to-end congestion control has been
one of the most active research areas in the last decade. Active queue
management (AQM) aims to improve the overall network throughput, while
providing lower delay and reduce packet loss and improving network. The basic
idea is to actively trigger packet dropping (or marking provided by explicit
congestion notification (ECN)) before buffer overflow. Radial bias function
(RBF)-based AQM controller is proposed in this paper. RBF controller is
suitable as an AQM scheme to control congestion in TCP communication networks
since it is nonlinear. Particle swarm optimization (PSO) algorithm is also
employed to derive RBF parameters such that the integrated-absolute error (IAE)
is minimized. Furthermore, in order to improve the robustness of RBF
controller, an error-integral term is added to RBF equation. The results of the
comparison with Drop Tail, adaptive random early detection (ARED), random
exponential marking (REM), and proportional-integral (PI) controllers are
presented. Integral-RBF has better performance not only in comparison with RBF
but also with ARED, REM and PI controllers in the case of link utilization
while packet loss rate is small.
|
[
{
"version": "v1",
"created": "Thu, 16 Nov 2017 23:46:16 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Sheikhan",
"Mansour",
""
],
[
"Shahnazi",
"Reza",
""
],
[
"Hemmati",
"Ehasn",
""
]
] |
new_dataset
| 0.981894 |
1711.06396
|
Yin Zhou
|
Yin Zhou and Oncel Tuzel
|
VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate detection of objects in 3D point clouds is a central problem in many
applications, such as autonomous navigation, housekeeping robots, and
augmented/virtual reality. To interface a highly sparse LiDAR point cloud with
a region proposal network (RPN), most existing efforts have focused on
hand-crafted feature representations, for example, a bird's eye view
projection. In this work, we remove the need of manual feature engineering for
3D point clouds and propose VoxelNet, a generic 3D detection network that
unifies feature extraction and bounding box prediction into a single stage,
end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud
into equally spaced 3D voxels and transforms a group of points within each
voxel into a unified feature representation through the newly introduced voxel
feature encoding (VFE) layer. In this way, the point cloud is encoded as a
descriptive volumetric representation, which is then connected to a RPN to
generate detections. Experiments on the KITTI car detection benchmark show that
VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a
large margin. Furthermore, our network learns an effective discriminative
representation of objects with various geometries, leading to encouraging
results in 3D detection of pedestrians and cyclists, based on only LiDAR.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 04:25:24 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Zhou",
"Yin",
""
],
[
"Tuzel",
"Oncel",
""
]
] |
new_dataset
| 0.995836 |
1711.06484
|
Saikat Chatterjee
|
Antoine Honor\'e and Veronica Siljehav and Saikat Chatterjee and Eric
Herlenius
|
Large Neural Network Based Detection of Apnea, Bradycardia and
Desaturation Events
|
Accepted for NIPS Workshop ML4H, 2017
|
Neural Information Processing Systems (NIPS) 2017 Workshop on
Machine Learning for Health, Long Beach, CA, USA
| null | null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Apnea, bradycardia and desaturation (ABD) events often precede
life-threatening events including sepsis in newborn babies. Here, we explore
machine learning for detection of ABD events as a binary classification
problem. We investigate the use of a large neural network to achieve a good
detection performance. To be user friendly, the chosen neural network does not
require a high level of parameter tuning. Furthermore, a limited amount of
training data is available and the training dataset is unbalanced. Comparing
with two widely used state-of-the-art machine learning algorithms, the large
neural network is found to be efficient. Even with a limited and unbalanced
training data, the large neural network provides a detection performance level
that is feasible to use in clinical care.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 10:38:51 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Honoré",
"Antoine",
""
],
[
"Siljehav",
"Veronica",
""
],
[
"Chatterjee",
"Saikat",
""
],
[
"Herlenius",
"Eric",
""
]
] |
new_dataset
| 0.991408 |
1711.06504
|
Luke Oakden-Rayner
|
William Gale, Luke Oakden-Rayner, Gustavo Carneiro, Andrew P. Bradley,
Lyle J. Palmer
|
Detecting hip fractures with radiologist-level performance using deep
neural networks
|
6 pages
| null | null | null |
cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We developed an automated deep learning system to detect hip fractures from
frontal pelvic x-rays, an important and common radiological task. Our system
was trained on a decade of clinical x-rays (~53,000 studies) and can be applied
to clinical data, automatically excluding inappropriate and technically
unsatisfactory studies. We demonstrate diagnostic performance equivalent to a
human radiologist and an area under the ROC curve of 0.994. Translated to
clinical practice, such a system has the potential to increase the efficiency
of diagnosis, reduce the need for expensive additional testing, expand access
to expert level medical image interpretation, and improve overall patient
outcomes.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 11:56:07 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Gale",
"William",
""
],
[
"Oakden-Rayner",
"Luke",
""
],
[
"Carneiro",
"Gustavo",
""
],
[
"Bradley",
"Andrew P.",
""
],
[
"Palmer",
"Lyle J.",
""
]
] |
new_dataset
| 0.984808 |
1711.06541
|
Eshan Singh
|
Eshan Singh, David Lin, Clark Barrett, and Subhasish Mitra
|
Logic Bug Detection and Localization Using Symbolic Quick Error
Detection
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Symbolic Quick Error Detection (Symbolic QED), a structured
approach for logic bug detection and localization which can be used both during
pre-silicon design verification as well as post-silicon validation and debug.
This new methodology leverages prior work on Quick Error Detection (QED) which
has been demonstrated to drastically reduce the latency, in terms of the number
of clock cycles, of error detection following the activation of a logic (or
electrical) bug. QED works through software transformations, including
redundant execution and control flow checking, of the applied tests. Symbolic
QED combines these error-detecting QED transformations with bounded model
checking-based formal analysis to generate minimal-length bug activation traces
that detect and localize any logic bugs in the design. We demonstrate the
practicality and effectiveness of Symbolic QED using the OpenSPARC T2, a
500-million-transistor open-source multicore System-on-Chip (SoC) design, and
using "difficult" logic bug scenarios observed in various state-of-the-art
commercial multicore SoCs. Our results show that Symbolic QED: (i) is fully
automatic, unlike manual techniques in use today that can be extremely
time-consuming and expensive; (ii) requires only a few hours in contrast to
manual approaches that might take days (or even months) or formal techniques
that often take days or fail completely for large designs; and (iii) generates
counter-examples (for activating and detecting logic bugs) that are up to 6
orders of magnitude shorter than those produced by traditional techniques.
Significantly, this new approach does not require any additional hardware.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 22:03:25 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Singh",
"Eshan",
""
],
[
"Lin",
"David",
""
],
[
"Barrett",
"Clark",
""
],
[
"Mitra",
"Subhasish",
""
]
] |
new_dataset
| 0.985649 |
1711.06605
|
Francesco Corucci
|
Francesco Corucci, Nick Cheney, Francesco Giorgio-Serchi, Josh Bongard
and Cecilia Laschi
|
Evolving soft locomotion in aquatic and terrestrial environments:
effects of material properties and environmental transitions
|
37 pages, 22 figures, currently under review (journal)
| null | null | null |
cs.AI cs.NE cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing soft robots poses considerable challenges: automated design
approaches may be particularly appealing in this field, as they promise to
optimize complex multi-material machines with very little or no human
intervention. Evolutionary soft robotics is concerned with the application of
optimization algorithms inspired by natural evolution in order to let soft
robots (both morphologies and controllers) spontaneously evolve within
physically-realistic simulated environments, figuring out how to satisfy a set
of objectives defined by human designers. In this paper a powerful evolutionary
system is put in place in order to perform a broad investigation on the
free-form evolution of walking and swimming soft robots in different
environments. Three sets of experiments are reported, tackling different
aspects of the evolution of soft locomotion. The first two sets explore the
effects of different material properties on the evolution of terrestrial and
aquatic soft locomotion: particularly, we show how different materials lead to
the evolution of different morphologies, behaviors, and energy-performance
tradeoffs. It is found that within our simplified physics world stiffer robots
evolve more sophisticated and effective gaits and morphologies on land, while
softer ones tend to perform better in water. The third set of experiments
starts investigating the effect and potential benefits of major environmental
transitions (land - water) during evolution. Results provide interesting
morphological exaptation phenomena, and point out a potential asymmetry between
land-water and water-land transitions: while the first type of transition
appears to be detrimental, the second one seems to have some beneficial
effects.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 16:01:27 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Corucci",
"Francesco",
""
],
[
"Cheney",
"Nick",
""
],
[
"Giorgio-Serchi",
"Francesco",
""
],
[
"Bongard",
"Josh",
""
],
[
"Laschi",
"Cecilia",
""
]
] |
new_dataset
| 0.954013 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.