id (string) | submitter (string, nullable) | authors (string) | title (string) | comments (string, nullable) | journal-ref (string, nullable) | doi (string, nullable) | report-no (string, nullable) | categories (string) | license (string, 9 classes) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.06616
|
Omid Haji Maghsoudi
|
Omid Haji Maghsoudi
|
Superpixels Based Segmentation and SVM Based Classification Method to
Distinguish Five Diseases from Normal Regions in Wireless Capsule Endoscopy
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless Capsule Endoscopy (WCE) is a relatively new technology for examining
the entire GI tract. During an examination, it captures more than 55,000
frames. Reviewing all these images is time-consuming and prone to human error,
so it has been a challenge to develop intelligent methods that assist
physicians in reviewing the frames. WCE frames are captured at 8-bit color
depth, which provides a color range sufficient to detect abnormalities. Here,
superpixel-based methods are proposed to segment five diseases: bleeding,
Crohn's disease, lymphangiectasia, xanthoma, and lymphoid hyperplasia. Two
superpixel methods are compared for semantic segmentation of these prolific
diseases: simple linear iterative clustering (SLIC) and quick shift (QS). The
segmented superpixels were classified into two classes (normal and abnormal) by
a support vector machine (SVM) using texture and color features. For both
superpixel methods, the accuracy, specificity, sensitivity, and precision
were around 92%, 93%, 93%, and 88%, respectively. However, SLIC was
dramatically faster than QS.
|
[
{
"version": "v1",
"created": "Fri, 17 Nov 2017 16:25:34 GMT"
}
] | 2017-11-20T00:00:00 |
[
[
"Maghsoudi",
"Omid Haji",
""
]
] |
new_dataset
| 0.996505 |
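The pipeline in the abstract above (per-superpixel color/texture features fed to a binary classifier) can be sketched in miniature. This is an illustrative toy, not the paper's method: fixed grid blocks stand in for SLIC/QS superpixels, and a nearest-centroid rule stands in for the SVM; all names and values are invented for the sketch.

```python
def block_features(img, bs):
    """Mean colour of each bs x bs block -- a crude stand-in for
    the per-superpixel colour features used in the paper."""
    h, w = len(img), len(img[0])
    feats = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            px = [img[y][x] for y in range(by, by + bs)
                            for x in range(bx, bx + bs)]
            feats[(by // bs, bx // bs)] = tuple(
                sum(p[c] for p in px) / len(px) for c in range(3))
    return feats

def classify(feat, centroids):
    """Nearest-centroid rule standing in for the paper's SVM."""
    return min(centroids, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(feat, centroids[label])))
```

A real implementation would replace the grid with SLIC/QS superpixels and the centroid rule with a trained SVM.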
1610.08693
|
Mohsen Mohammadkhani Razlighi
|
Mohsen Mohammadkhani Razlighi and Nikola Zlatanov
|
Buffer-Aided Relaying For The Two-Hop Full-Duplex Relay Channel With
Self-Interference
| null | null |
10.1109/TWC.2017.2767582
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the two-hop full-duplex (FD) relay channel with
self-interference and fading, which is comprised of a source, an FD relay, and
a destination, where a direct source-destination link does not exist and the FD
relay is impaired by self-interference. For this channel, we propose three
buffer-aided relaying schemes with adaptive reception-transmission at the FD
relay for the cases when the source and the relay both perform adaptive-power
allocation, fixed-power allocation, and fixed-rate transmission, respectively.
The proposed buffer-aided relaying schemes significantly improve the achievable
rate and the throughput of the considered relay channel by enabling the FD
relay to adaptively select to either receive, transmit, or simultaneously
receive and transmit in a given time slot based on the qualities of the
receiving, transmitting, and self-interference channels. Our numerical results
show that significant performance gains are achieved using the proposed
buffer-aided relaying schemes compared to conventional FD relaying, where the
FD relay is forced to always simultaneously receive and transmit, and to
buffer-aided half-duplex relaying, where the half-duplex relay cannot
simultaneously receive and transmit.
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2016 10:42:34 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2017 01:54:52 GMT"
}
] | 2017-11-17T00:00:00 |
[
[
"Razlighi",
"Mohsen Mohammadkhani",
""
],
[
"Zlatanov",
"Nikola",
""
]
] |
new_dataset
| 0.99824 |
1711.05683
|
Antonio Augusto Alves Jr
|
A. A. Alves Jr and M. D. Sokoloff
|
Hydra: a C++11 framework for data analysis in massively parallel
platforms
|
ACAT 2017 Proceedings
| null | null | null |
cs.MS hep-ex physics.comp-ph physics.data-an
|
http://creativecommons.org/licenses/by/4.0/
|
Hydra is a header-only, templated and C++11-compliant framework designed to
perform the typical bottleneck calculations found in common HEP data analyses
on massively parallel platforms. The framework is implemented on top of the
C++11 Standard Library and a variadic version of the Thrust library and is
designed to run on Linux systems, using OpenMP, CUDA and TBB enabled devices.
This contribution summarizes the main features of Hydra. A basic description of
the overall design, functionality and user interface is provided, along with
some code examples and measurements of performance.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 17:19:29 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Nov 2017 01:44:12 GMT"
}
] | 2017-11-17T00:00:00 |
[
[
"Alves",
"A. A.",
"Jr"
],
[
"Sokoloff",
"M. D.",
""
]
] |
new_dataset
| 0.999608 |
1711.05789
|
Yuan Yang
|
Yuan Yang, Jingcheng Yu, Ye Hu, Xiaoyao Xu and Eric Nyberg
|
CMU LiveMedQA at TREC 2017 LiveQA: A Consumer Health Question Answering
System
|
To appear in Proceedings of TREC 2017
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present LiveMedQA, a question answering system that is
optimized for consumer health questions. On top of the general QA system
pipeline, we introduce several new features that aim to exploit domain-specific
knowledge and entity structures for better performance. These include a
question type/focus analyzer based on a deep text classification model, a
tree-based knowledge graph for answer generation, and a complementary
structure-aware searcher for answer retrieval. The LiveMedQA system was
evaluated in the TREC 2017 LiveQA medical subtask, where it received an average
score of 0.356 on a 3-point scale. Evaluation results revealed three
substantial drawbacks in the current LiveMedQA system, based on which we
provide a detailed discussion and propose a few solutions that constitute the
main focus of our subsequent work.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 20:26:42 GMT"
}
] | 2017-11-17T00:00:00 |
[
[
"Yang",
"Yuan",
""
],
[
"Yu",
"Jingcheng",
""
],
[
"Hu",
"Ye",
""
],
[
"Xu",
"Xiaoyao",
""
],
[
"Nyberg",
"Eric",
""
]
] |
new_dataset
| 0.99687 |
1711.05860
|
Yufeng Hao
|
Yufeng Hao
|
A General Neural Network Hardware Architecture on FPGA
| null | null | null | null |
cs.CV cs.AR cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Field Programmable Gate Arrays (FPGAs) play an increasingly important role
in data sampling and processing industries due to their highly parallel
architecture, low power consumption, and flexibility in implementing custom
algorithms. In the artificial intelligence field in particular, training and
deploying neural networks and machine learning algorithms demand
energy-efficient hardware implementations and massively parallel computing
capacity. Therefore, many global companies have applied FPGAs to AI and
machine learning fields such as autonomous driving and Automatic Spoken
Language Recognition (Baidu) [1] [2] and Bing search (Microsoft) [3].
Considering the great potential of FPGAs in these fields, we implement a
general neural network hardware architecture on the XILINX ZU9CG System On Chip
(SOC) platform [4], which offers abundant hardware resources and powerful
processing capacity. The general neural network architecture on the FPGA SOC
platform can perform forward and backward algorithms in deep neural networks
(DNN) with high performance and can easily be adjusted to the type and
scale of the neural networks.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 19:17:58 GMT"
}
] | 2017-11-17T00:00:00 |
[
[
"Hao",
"Yufeng",
""
]
] |
new_dataset
| 0.987478 |
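The forward and backward passes mentioned above reduce, at their core, to multiply-accumulates plus an activation, which is exactly what an FPGA fabric parallelizes. As a minimal reference model (a single sigmoid neuron, nothing like a full DNN; all values illustrative), the gradients can be written out by hand and checked against finite differences:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, b, x):
    """One sigmoid neuron: the multiply-accumulate core an FPGA parallelizes."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def loss(w, b, x, t):
    """Squared error against target t."""
    return 0.5 * (forward(w, b, x) - t) ** 2

def backward(w, b, x, t):
    """Hand-derived gradients of the squared loss w.r.t. w and b."""
    y = forward(w, b, x)
    delta = (y - t) * y * (1.0 - y)          # dL/dz through the sigmoid
    return [delta * xi for xi in x], delta   # (dL/dw, dL/db)
```

A hardware design pipelines the same arithmetic across many neurons at once.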
1711.06246
|
Wei Zhu
|
Wei Zhu, Qiang Qiu, Jiaji Huang, Robert Calderbank, Guillermo Sapiro,
Ingrid Daubechies
|
LDMNet: Low Dimensional Manifold Regularized Neural Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks have proved very successful on archetypal tasks for
which large training sets are available, but when the training data are scarce,
their performance suffers from overfitting. Many existing methods of reducing
overfitting are data-independent, and their efficacy is often limited when the
training set is very small. Data-dependent regularizations are mostly motivated
by the observation that data of interest lie close to a manifold, which is
typically hard to parametrize explicitly and often requires human input of
tangent vectors. These methods typically only focus on the geometry of the
input data, and do not necessarily encourage the networks to produce
geometrically meaningful features. To resolve this, we propose a new framework,
the Low-Dimensional-Manifold-regularized neural Network (LDMNet), which
incorporates a feature regularization method that focuses on the geometry of
both the input data and the output features. In LDMNet, we regularize the
network by encouraging the combination of the input data and the output
features to sample a collection of low dimensional manifolds, which are
searched efficiently without explicit parametrization. To achieve this, we
directly use the manifold dimension as a regularization term in a variational
functional. The resulting Euler-Lagrange equation is a Laplace-Beltrami
equation over a point cloud, which is solved by the point integral method
without increasing the computational complexity. We demonstrate two benefits of
LDMNet in the experiments. First, we show that LDMNet significantly outperforms
widely-used network regularizers such as weight decay and DropOut. Second, we
show that LDMNet can be designed to extract common features of an object imaged
via different modalities, which proves to be very useful in real-world
applications such as cross-spectral face recognition.
|
[
{
"version": "v1",
"created": "Thu, 16 Nov 2017 18:48:01 GMT"
}
] | 2017-11-17T00:00:00 |
[
[
"Zhu",
"Wei",
""
],
[
"Qiu",
"Qiang",
""
],
[
"Huang",
"Jiaji",
""
],
[
"Calderbank",
"Robert",
""
],
[
"Sapiro",
"Guillermo",
""
],
[
"Daubechies",
"Ingrid",
""
]
] |
new_dataset
| 0.989974 |
1703.09778
|
\c{C}a\u{g}lar Aytekin
|
Caglar Aytekin, Jarno Nikkanen, Moncef Gabbouj
|
INTEL-TUT Dataset for Camera Invariant Color Constancy Research
|
Download Link for the Dataset:
https://etsin.avointiede.fi/dataset/urn-nbn-fi-csc-kata20170321084219004008
Submission Info: Submitted to IEEE TIP
|
Published in: IEEE Transactions on Image Processing ( Volume: 27,
Issue: 2, Feb. 2018 )
|
10.1109/TIP.2017.2764264
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we provide a novel dataset designed for camera invariant color
constancy research. Camera invariance corresponds to the robustness of an
algorithm's performance when run on images of the same scene taken by different
cameras. Accordingly, images in the database correspond to several lab and
field scenes each of which are captured by three different cameras with minimal
registration errors. The lab scenes are also captured under five different
illuminations. The spectral responses of cameras and the spectral power
distributions of the lab light sources are also provided, as they may prove
beneficial for training future algorithms to achieve color constancy. For a
fair evaluation of future methods, we provide guidelines for supervised methods
with indicated training, validation and testing partitions. Accordingly, we
evaluate a recently proposed convolutional neural network based color constancy
algorithm as a baseline for future research. As a side contribution, this
dataset also includes images taken by a mobile camera with color shading
corrected and uncorrected results. This allows research on the effect of color
shading as well.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2017 13:07:45 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 07:29:34 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Aytekin",
"Caglar",
""
],
[
"Nikkanen",
"Jarno",
""
],
[
"Gabbouj",
"Moncef",
""
]
] |
new_dataset
| 0.999681 |
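For context, the simplest color-constancy baseline long predating the CNN evaluated above is the gray-world assumption: the average scene color is achromatic, so each channel is rescaled toward the global mean. A pure-Python sketch (a classic baseline, not the paper's method):

```python
def gray_world(img):
    """Gray-world correction: rescale each channel so its mean equals the
    mean over all channels (assumes the average scene colour is grey)."""
    n = sum(len(row) for row in img)
    means = [sum(p[c] for row in img for p in row) / n for c in range(3)]
    grey = sum(means) / 3.0
    return [[tuple(p[c] * grey / means[c] for c in range(3)) for p in row]
            for row in img]
```

Camera invariance asks precisely that such corrections behave consistently when the same scene is shot with different sensors.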
1705.03227
|
Guido Giunti
|
Guido Giunti, Estefania Guisado-Fernandez, Brian Caulfield
|
Connected Health in Multiple Sclerosis: a mobile applications review
|
Article submitted to the 30th IEEE International Symposium on
Computer-Based Medical Systems - IEEE CBMS 2017, Thessaloniki, Greece. 6
pages, 2 figures, 5 tables
| null |
10.1109/CBMS.2017.27
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple Sclerosis (MS) is an unpredictable, often disabling disease that can
adversely affect any body function; this often requires persons with MS to be
active patients who are able to self-manage. There are currently thousands of
health applications available but it is unknown how many concern MS. We
conducted a systematic review of all MS apps present in the most popular app
stores (iTunes and Google Play store) in June 2016 to identify all relevant MS
apps. After discarding non-MS related apps and duplicates, only a total of 25
MS apps were identified. App description contents and features were explored to
assess target audience, functionalities, and developing entities. The vast
majority of apps were focused on disease and treatment information with disease
management being a close second. This is the first study that reviews MS apps
and it highlights an interesting gap in the current repertoire of MS mHealth
resources.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 08:35:42 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Giunti",
"Guido",
""
],
[
"Guisado-Fernandez",
"Estefania",
""
],
[
"Caulfield",
"Brian",
""
]
] |
new_dataset
| 0.997642 |
1711.02276
|
Mark Zhandry
|
Mark Zhandry
|
Quantum Lightning Never Strikes the Same State Twice
| null | null | null | null |
cs.CR cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public key quantum money can be seen as a version of the quantum no-cloning
theorem that holds even when the quantum states can be verified by the
adversary. In this work, we investigate quantum lightning, a formalization of
"collision-free quantum money" defined by Lutomirski et al. [ICS'10], where
no-cloning holds even when the adversary herself generates the quantum state to
be cloned. We then study quantum money and quantum lightning, showing the
following results:
- We demonstrate the usefulness of quantum lightning by showing several
potential applications, ranging from generating random strings with a proof of
entropy to completely decentralized cryptocurrency without a blockchain,
where transactions are instant and local.
- We give win-win results for quantum money/lightning, showing that either
signatures/hash functions/commitment schemes meet very strong recently proposed
notions of security, or they yield quantum money or lightning.
- We construct quantum lightning under the assumed multi-collision resistance
of random degree-2 systems of polynomials.
- We show that instantiating the quantum money scheme of Aaronson and
Christiano [STOC'12] with indistinguishability obfuscation that is secure
against quantum computers yields a secure quantum money scheme.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 04:08:48 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Nov 2017 18:15:57 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Nov 2017 18:44:18 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Zhandry",
"Mark",
""
]
] |
new_dataset
| 0.998658 |
1711.02715
|
Lichao Sun
|
Lichao Sun, Xiaokai Wei, Jiawei Zhang, Lifang He, Philip S. Yu and
Witawas Srisa-an
|
Contaminant Removal for Android Malware Detection Systems
|
2017 IEEE International Conference on Big Data
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A recent report indicates that there is a new malicious app introduced every
4 seconds. This rapid malware distribution rate causes existing malware
detection systems to fall far behind, allowing malicious apps to escape vetting
efforts and be distributed by even legitimate app stores. When trusted
downloading sites distribute malware, several negative consequences ensue.
First, the popularity of these sites would allow such malicious apps to quickly
and widely infect devices. Second, analysts and researchers who rely on machine
learning based detection techniques may also download these apps and mistakenly
label them as benign since they have not been disclosed as malware. These apps
are then used as part of their benign dataset during model training and
testing. The presence of contaminants in the benign dataset can compromise the
effectiveness and accuracy of their detection and classification techniques. To
address this issue, we introduce PUDROID (Positive and Unlabeled learning-based
malware detection for Android) to automatically and effectively remove
contaminants from training datasets, allowing machine learning based malware
classifiers and detectors to be more effective and accurate. To further improve
the performance of such detectors, we apply a feature selection strategy to
select pertinent features from a variety of features. We then compare the
detection rates and accuracy of detection systems using two datasets; one using
PUDROID to remove contaminants and the other without removing contaminants. The
results indicate that once we remove contaminants from the datasets, we can
significantly improve both the malware detection rate and detection accuracy.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 20:35:41 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Nov 2017 20:40:48 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Sun",
"Lichao",
""
],
[
"Wei",
"Xiaokai",
""
],
[
"Zhang",
"Jiawei",
""
],
[
"He",
"Lifang",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Srisa-an",
"Witawas",
""
]
] |
new_dataset
| 0.996695 |
1711.03488
|
Oscar Carrasco
|
Shahid Mumtaz, Kazi Saidul, Huq Jonathan Rodriguez, Paulo Marques,
Ayman Radwan, Keith Briggs Michael Fitch BT, Andreas Georgakopoulos,
Ioannis-Prodromos Belikaidis, Panagiotis Vlacheas, Dimitrios Kelaidonis,
Evangelos Kosmatos, Serafim Kotrotsos, Stavroula Vassaki, Yiouli Kritikou,
Panagiotis Demestichas, Kostas Tsagkaris, Evangelia Tzifa, Aikaterini
Demesticha, Vera Stavroulaki, Athina Ropodi, Evangelos Argoudelis, Marinos
Galiatsatos, Aristotelis Margaris, George Paitaris, Dimitrios Kardaris,
Ioannis Kaffes, Haeyoung Lee Klaus, Moessner Unis Valerio, Frascolla Bismark,
Okyere Intel, Salva D\'iaz, Oscar Carrasco, Federico Miatton, Sistel Antonio,
Dedomenico Benoit, Miscopein Cea, Thanasis Oikonomou, Dimitrios Kritharidis,
Harald Weigold
|
D3.2: SPEED-5G enhanced functional and system architecture, scenarios
and performance evaluation metrics
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This deliverable contains a detailed description of the use cases considered
in SPEED-5G, which will be used as a basis for demonstration in the project.
These use cases are dynamic channel selection, load balancing, and carrier
aggregation. This deliverable also explains the SPEED-5G architecture design
principles, which are based on software-defined networking and network function
virtualisation. The degree of virtualisation is further illustrated by a number
of novel contributions from the involved partners. Finally, KPIs for each use
case are presented, along with a description of how these KPIs can support the
5G-PPP KPIs.
|
[
{
"version": "v1",
"created": "Thu, 9 Nov 2017 17:38:07 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Nov 2017 08:23:40 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Mumtaz",
"Shahid",
""
],
[
"Saidul",
"Kazi",
""
],
[
"Rodriguez",
"Huq Jonathan",
""
],
[
"Marques",
"Paulo",
""
],
[
"Radwan",
"Ayman",
""
],
[
"BT",
"Keith Briggs Michael Fitch",
""
],
[
"Georgakopoulos",
"Andreas",
""
],
[
"Belikaidis",
"Ioannis-Prodromos",
""
],
[
"Vlacheas",
"Panagiotis",
""
],
[
"Kelaidonis",
"Dimitrios",
""
],
[
"Kosmatos",
"Evangelos",
""
],
[
"Kotrotsos",
"Serafim",
""
],
[
"Vassaki",
"Stavroula",
""
],
[
"Kritikou",
"Yiouli",
""
],
[
"Demestichas",
"Panagiotis",
""
],
[
"Tsagkaris",
"Kostas",
""
],
[
"Tzifa",
"Evangelia",
""
],
[
"Demesticha",
"Aikaterini",
""
],
[
"Stavroulaki",
"Vera",
""
],
[
"Ropodi",
"Athina",
""
],
[
"Argoudelis",
"Evangelos",
""
],
[
"Galiatsatos",
"Marinos",
""
],
[
"Margaris",
"Aristotelis",
""
],
[
"Paitaris",
"George",
""
],
[
"Kardaris",
"Dimitrios",
""
],
[
"Kaffes",
"Ioannis",
""
],
[
"Klaus",
"Haeyoung Lee",
""
],
[
"Valerio",
"Moessner Unis",
""
],
[
"Bismark",
"Frascolla",
""
],
[
"Intel",
"Okyere",
""
],
[
"Díaz",
"Salva",
""
],
[
"Carrasco",
"Oscar",
""
],
[
"Miatton",
"Federico",
""
],
[
"Antonio",
"Sistel",
""
],
[
"Benoit",
"Dedomenico",
""
],
[
"Cea",
"Miscopein",
""
],
[
"Oikonomou",
"Thanasis",
""
],
[
"Kritharidis",
"Dimitrios",
""
],
[
"Weigold",
"Harald",
""
]
] |
new_dataset
| 0.994622 |
1711.05251
|
Serhii Nazarovets
|
Serhii Nazarovets
|
War and Peace: The Peculiarities of Ukrainian-Russian Scientific
Cooperation Dynamics Against the Background of Russian Military Aggression in
Ukraine, in 2014-2016
| null |
Nauka innov. 2017, 13(5):38-43
|
10.15407/scin13.05.038
| null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
The paper presents the results of bibliometric analysis of publications that
were co-written by authors affiliated with Ukrainian and Russian institutions
in 2007-2016 according to Scopus. Results of the study show that Ukrainian and
Russian scientists have not refused to carry out joint research in major
international projects, but a decrease in the number of works, written by
staff members of Ukrainian and Russian scientific institutions in 2016
provides evidence of the threat and negative impact that the Russian military
intervention poses to scientific cooperation. The findings are important for
shaping the
science development programs in Ukraine.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 18:51:46 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Nazarovets",
"Serhii",
""
]
] |
new_dataset
| 0.999113 |
1711.05303
|
Rishab Nithyanand
|
Rishab Nithyanand, Brian Schaffner, Phillipa Gill
|
Online Political Discourse in the Trump Era
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We identify general trends in the (in)civility and complexity of political
discussions occurring on Reddit between January 2007 and May 2017 -- a period
spanning both terms of Barack Obama's presidency and the first 100 days of
Donald Trump's presidency. We then investigate four factors that are frequently
hypothesized as having contributed to the declining quality of American
political discourse -- (1) the rising popularity of Donald Trump, (2)
increasing polarization and negative partisanship, (3) the democratization of
news media and the rise of fake news, and (4) merging of fringe groups into
mainstream political discussions.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 20:12:29 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Nithyanand",
"Rishab",
""
],
[
"Schaffner",
"Brian",
""
],
[
"Gill",
"Phillipa",
""
]
] |
new_dataset
| 0.999509 |
1711.05332
|
Yunxi Guo
|
Yunxi Guo and Timothy Dee and Akhilesh Tyagi
|
Barrel Shifter Physical Unclonable Function Based Encryption
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical Unclonable Functions (PUFs) are circuits designed to extract
physical randomness from the underlying circuit. This randomness depends on the
manufacturing process. It differs for each device enabling chip-level
authentication and key generation applications. We present a protocol utilizing
a PUF for secure data transmission. Parties each have a PUF used for encryption
and decryption; this is facilitated by constraining the PUF to be commutative.
This framework is evaluated with a primitive permutation network - a barrel
shifter. Physical randomness is derived from the delay of different shift
paths. Barrel shifter (BS) PUF captures the delay of different shift paths.
This delay is entangled with message bits before they are sent across an
insecure channel. BS-PUF is implemented using transmission gates; their
characteristics ensure same-chip reproducibility, a necessary property of PUFs.
Post-layout simulations of a common centroid layout 8-level barrel shifter in
0.13 {\mu}m technology assess uniqueness, stability and randomness properties.
BS-PUFs pass all selected NIST statistical randomness tests. Stability similar
to Ring Oscillator (RO) PUFs under environment variation is shown. Logistic
regression of 100,000 plaintext-ciphertext pairs (PCPs) failed to successfully
model BS-PUF behavior.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 22:19:26 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Guo",
"Yunxi",
""
],
[
"Dee",
"Timothy",
""
],
[
"Tyagi",
"Akhilesh",
""
]
] |
new_dataset
| 0.997844 |
1711.05457
|
Hakaru Tamukoh
|
Sansei Hori, Yutaro Ishida, Yuta Kiyama, Yuichiro Tanaka, Yuki Kuroda,
Masataka Hisano, Yuto Imamura, Tomotaka Himaki, Yuma Yoshimoto, Yoshiya
Aratani, Kouhei Hashimoto, Gouki Iwamoto, Hiroto Fujita, Takashi Morie,
Hakaru Tamukoh
|
Hibikino-Musashi@Home 2017 Team Description Paper
|
8 pages; RoboCup 2017 @Home Open Platform League team description
paper
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our team Hibikino-Musashi@Home was founded in 2010. It is based in Kitakyushu
Science and Research Park, Japan. Since 2010, we have participated in the
RoboCup@Home Japan open competition open-platform league every year. Currently,
the Hibikino-Musashi@Home team has 24 members from seven different laboratories
based in the Kyushu Institute of Technology. Our home-service robots are used
as platforms for both education and implementation of our research outcomes. In
this paper, we introduce our team and the technologies that we have implemented
in our robots.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 08:55:11 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Hori",
"Sansei",
""
],
[
"Ishida",
"Yutaro",
""
],
[
"Kiyama",
"Yuta",
""
],
[
"Tanaka",
"Yuichiro",
""
],
[
"Kuroda",
"Yuki",
""
],
[
"Hisano",
"Masataka",
""
],
[
"Imamura",
"Yuto",
""
],
[
"Himaki",
"Tomotaka",
""
],
[
"Yoshimoto",
"Yuma",
""
],
[
"Aratani",
"Yoshiya",
""
],
[
"Hashimoto",
"Kouhei",
""
],
[
"Iwamoto",
"Gouki",
""
],
[
"Fujita",
"Hiroto",
""
],
[
"Morie",
"Takashi",
""
],
[
"Tamukoh",
"Hakaru",
""
]
] |
new_dataset
| 0.999531 |
1711.05458
|
Mads Dyrmann
|
Thomas Mosgaard Giselsson, Rasmus Nyholm J{\o}rgensen, Peter Kryger
Jensen, Mads Dyrmann, Henrik Skov Midtiby
|
A Public Image Database for Benchmark of Plant Seedling Classification
Algorithms
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A database of images of approximately 960 unique plants belonging to 12
species at several growth stages is made publicly available. It comprises
annotated RGB images with a physical resolution of roughly 10 pixels per mm. To
standardise the evaluation of classification results obtained with the
database, a benchmark based on $f_{1}$ scores is proposed. The dataset is
available at https://vision.eng.au.dk/plant-seedlings-dataset
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 08:56:25 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Giselsson",
"Thomas Mosgaard",
""
],
[
"Jørgensen",
"Rasmus Nyholm",
""
],
[
"Jensen",
"Peter Kryger",
""
],
[
"Dyrmann",
"Mads",
""
],
[
"Midtiby",
"Henrik Skov",
""
]
] |
new_dataset
| 0.999812 |
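The $f_{1}$ benchmark proposed above is the harmonic mean of precision and recall; for a multi-species benchmark one commonly reports the unweighted average of per-class scores (macro-$f_1$). A small sketch (the counts in the test are invented, not from the dataset):

```python
def f1(tp, fp, fn):
    """Per-class f1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)

def macro_f1(counts):
    """Unweighted mean of per-species f1 over (tp, fp, fn) triples."""
    return sum(f1(*c) for c in counts) / len(counts)
```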
1711.05586
|
Mark Marsden
|
Mark Marsden, Kevin McGuinness, Suzanne Little, Ciara E. Keogh, Noel
E. O'Connor
|
People, Penguins and Petri Dishes: Adapting Object Counting Models To
New Visual Domains And Object Types Without Forgetting
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose a technique to adapt a convolutional neural network
(CNN) based object counter to additional visual domains and object types while
still preserving the original counting function. Domain-specific normalisation
and scaling operators are trained to allow the model to adjust to the
statistical distributions of the various visual domains. The developed
adaptation technique is used to produce a singular patch-based counting
regressor capable of counting various object types including people, vehicles,
cell nuclei and wildlife. As part of this study a challenging new cell counting
dataset in the context of tissue culture and patient diagnosis is constructed.
This new collection, referred to as the Dublin Cell Counting (DCC) dataset, is
the first of its kind to be made available to the wider computer vision
community. State-of-the-art object counting performance is achieved in both the
Shanghaitech (parts A and B) and Penguins datasets while competitive
performance is observed on the TRANCOS and Modified Bone Marrow (MBM) datasets,
all using a shared counting model.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 14:25:20 GMT"
}
] | 2017-11-16T00:00:00 |
[
[
"Marsden",
"Mark",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"Little",
"Suzanne",
""
],
[
"Keogh",
"Ciara E.",
""
],
[
"O'Connor",
"Noel E.",
""
]
] |
new_dataset
| 0.955959 |
1507.00315
|
Ou Liu
|
Ali Assarpour, Amotz Barnoy, Ou Liu
|
Counting Skolem Sequences
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We compute the number of solutions to the Skolem pairings problem, S(n), and
to the Langford variant of the problem, L(n). These numbers correspond to the
sequences A059106, and A014552 in Sloane's Online Encyclopedia of Integer
Sequences. The exact value of these numbers were known for any positive integer
n < 24 for the first sequence and for any positive integer n < 27 for the
second sequence. Our first contribution is computing the exact number of
solutions for both sequences for any n < 30. In particular, we report that
S(24) = 102,388,058,845,620,672; S(25) = 1,317,281,759,888,482,688;
S(28) = 3,532,373,626,038,214,732,032; S(29) = 52,717,585,747,603,598,276,736;
L(27) = 111,683,611,098,764,903,232; L(28) = 1,607,383,260,609,382,393,152.
Next we present a parallel tempering algorithm for
approximately counting the number of pairings. We show that the error is less
than one percent for known exact numbers, and obtain approximate values for
S(32) ~ 2.2x10^26, S(33) ~ 3.6x10^27, L(31) ~ 5.3x10^24, and L(32) ~ 8.8x10^25.
|
[
{
"version": "v1",
"created": "Wed, 1 Jul 2015 19:05:46 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jul 2015 04:07:55 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jul 2015 15:29:56 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Nov 2017 22:03:24 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Assarpour",
"Ali",
""
],
[
"Barnoy",
"Amotz",
""
],
[
"Liu",
"Ou",
""
]
] |
new_dataset
| 0.999777 |
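The exact counts above require large-scale search, but the object being counted is easy to state: a Skolem sequence of order n places each value k in {1, ..., n} at two positions exactly k apart in a row of length 2n. A naive backtracking counter, feasible only for small n (this sketch is nothing like the paper's parallel computation):

```python
def count_skolem(n):
    """Count Skolem sequences of order n by backtracking.

    Fill positions 0..2n-1 so that each value k occupies exactly
    two positions i and i+k."""
    seq = [0] * (2 * n)

    def place(k):
        if k == 0:
            return 1  # every value placed: one complete sequence
        total = 0
        for i in range(2 * n - k):
            if seq[i] == 0 and seq[i + k] == 0:
                seq[i] = seq[i + k] = k
                total += place(k - 1)  # largest value first prunes earliest
                seq[i] = seq[i + k] = 0
        return total

    return place(n)
```

Solutions exist only when n is congruent to 0 or 1 mod 4, which the counter reproduces for small orders.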
1611.04822
|
Harshita Sahijwani
|
Gaurav Maheshwari, Priyansh Trivedi, Harshita Sahijwani, Kunal Jha,
Sourish Dasgupta and Jens Lehmann
|
SimDoc: Topic Sequence Alignment based Document Similarity Framework
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document similarity is the problem of estimating the degree to which a given
pair of documents has similar semantic content. An accurate document similarity
measure can improve several enterprise relevant tasks such as document
clustering, text mining, and question-answering. In this paper, we show that a
document's thematic flow, which is often disregarded by bag-of-word techniques,
is pivotal in estimating their similarity. To this end, we propose a novel
semantic document similarity framework, called SimDoc. We model documents as
topic-sequences, where topics represent latent generative clusters of related
words. Then, we use a sequence alignment algorithm to estimate their semantic
similarity. We further conceptualize a novel mechanism to compute topic-topic
similarity to fine-tune our system. In our experiments, we show that SimDoc
outperforms many contemporary bag-of-words techniques in accurately computing
document similarity, and on practical applications such as document clustering.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2016 13:31:28 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Nov 2017 23:07:54 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Maheshwari",
"Gaurav",
""
],
[
"Trivedi",
"Priyansh",
""
],
[
"Sahijwani",
"Harshita",
""
],
[
"Jha",
"Kunal",
""
],
[
"Dasgupta",
"Sourish",
""
],
[
"Lehmann",
"Jens",
""
]
] |
new_dataset
| 0.972632 |
1612.04164
|
Stefan Wagner
|
Sebastian V\"ost and Stefan Wagner
|
Keeping Continuous Deliveries Safe
|
4 pages, 3 figures
|
ICSE-C '17 Proceedings of the 39th International Conference on
Software Engineering Companion, pages 259-261. IEEE, 2017
|
10.1109/ICSE-C.2017.135
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Allowing swift release cycles, Continuous Delivery has become popular in
application software development and is starting to be applied in
safety-critical domains such as the automotive industry. These domains require
thorough analysis regarding safety constraints, which can be achieved by formal
verification and the execution of safety tests resulting from a safety analysis
on the product. With continuous delivery in place, such tests need to be
executed with every build to ensure the latest software still fulfills all
safety requirements. Moreover, the safety analysis has to be updated
with every change to ensure the safety test suite is still up-to-date. We thus
propose that a safety analysis should be treated no differently from other
deliverables such as source-code and dependencies, formulate guidelines on how
to achieve this, and point out areas where future research is needed.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 13:38:24 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Vöst",
"Sebastian",
""
],
[
"Wagner",
"Stefan",
""
]
] |
new_dataset
| 0.999008 |
1702.04080
|
Mostafa El-Khamy
|
Mostafa El-Khamy, Hsien-Ping Lin, Jungwon Lee, and Inyup Kang
|
Circular Buffer Rate-Matched Polar Codes
| null | null |
10.1109/TCOMM.2017.2762664
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A practical rate-matching system for constructing rate-compatible polar codes
is proposed. The proposed polar code circular buffer rate-matching is suitable
for transmissions on communication channels that support hybrid automatic
repeat request (HARQ) communications, as well as for flexible resource-element
rate-matching on single transmission channels. Our proposed circular buffer
rate matching scheme also incorporates a bit-mapping scheme for transmission on
bit-interleaved coded modulation (BICM) channels using higher order
modulations. An interleaver is derived from a puncturing order obtained with a
low complexity progressive puncturing search algorithm on a base code of short
length, and has the flexibility to achieve any desired rate at the desired code
length, through puncturing or repetition. The rate-matching scheme is implied
by a two-stage polarization, for transmission at any desired code length, code
rate, and modulation order, and is shown to achieve the symmetric capacity of
BICM channels. Numerical results on AWGN and fast fading channels show that the
rate-matched polar codes have a competitive performance when compared to the
spatially-coupled quasi-cyclic LDPC codes or LTE turbo codes, while having
similar rate-dematching storage and computational complexities.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2017 04:56:06 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"El-Khamy",
"Mostafa",
""
],
[
"Lin",
"Hsien-Ping",
""
],
[
"Lee",
"Jungwon",
""
],
[
"Kang",
"Inyup",
""
]
] |
new_dataset
| 0.987333 |
1703.06712
|
Stefan Wagner
|
Stefan Wagner
|
Scrum for cyber-physical systems: a process proposal
|
6 pages, 3 figures, RCoSE 2014 Proceedings of the 1st International
Workshop on Rapid Continuous Software Engineering. ACM, 2014
|
RCoSE 2014 Proceedings of the 1st International Workshop on Rapid
Continuous Software Engineering. ACM, 2014
|
10.1145/2593812.2593819
| null |
cs.SE cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Agile development processes and especially Scrum are changing the state of
the practice in software development. Many companies in the classical IT sector
have adopted them to successfully tackle various challenges from the rapidly
changing environments and increasingly complex software systems. Companies
developing software for embedded or cyber-physical systems, however, are still
hesitant to adopt such processes. Despite successful applications of Scrum and
other agile methods for cyber-physical systems, there is still no complete
process that maps their specific challenges to practices in Scrum. We propose
to fill this gap by treating all design artefacts in such a development in the
same way: In software development, the final design is already the product, in
hardware and mechanics it is the starting point of production. We sketch the
Scrum extension Scrum CPS by showing how Scrum could be used to develop all
design artefacts for a cyber physical system. Hardware and mechanical parts
that might not be available yet are simulated. With this approach, we can
directly and iteratively build the final software and produce detailed models
for the hardware and mechanics production in parallel. We plan to further
detail Scrum CPS and apply it first in a series of student projects to gather
more experience before testing it in an industrial case study.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2017 12:45:43 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Wagner",
"Stefan",
""
]
] |
new_dataset
| 0.997871 |
1706.01656
|
Petros Aristidou
|
Nicolas Pilatte and Petros Aristidou and Gabriela Hug
|
TDNetGen: An open-source, parametrizable, large-scale, transmission and
distribution test system
|
\c{opyright} 2017 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works, IEEE Systems Journal, 2017
| null |
10.1109/JSYST.2017.2772914
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, an open-source MATLAB toolbox is presented that is able to
generate synthetic, combined transmission and distribution network models.
These can be used to analyse the interactions between transmission and multiple
distribution systems, such as the provision of ancillary services by active
distribution grids, the co-optimization of planning and operation, the
development of emergency control and protection schemes spanning over different
voltage levels, the analysis of combined market aspects, etc. The generated
test-system models are highly customizable, providing the user with the
flexibility to easily choose the desired characteristics, such as the level of
renewable energy penetration, the size of the final system, etc.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2017 08:39:07 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Nov 2017 13:42:12 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Pilatte",
"Nicolas",
""
],
[
"Aristidou",
"Petros",
""
],
[
"Hug",
"Gabriela",
""
]
] |
new_dataset
| 0.99976 |
1707.06140
|
Mikhail Egorov
|
Michael Egorov, MacLane Wilkison and David Nunez
|
NuCypher KMS: Decentralized key management system
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NuCypher KMS is a decentralized Key Management System (KMS) that addresses
the limitations of using consensus networks to securely store and manipulate
private, encrypted data. It provides encryption and cryptographic access
controls, performed by a decentralized network, leveraging proxy re-encryption.
Unlike centralized KMS-as-a-service solutions, it does not require trusting a
service provider. NuCypher KMS enables sharing of sensitive data for both
decentralized and centralized applications, providing security infrastructure
for applications from healthcare to identity management to decentralized
content marketplaces. NuCypher KMS will be an essential part of decentralized
applications, just as SSL/TLS is an essential part of every secure web
application.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2017 15:10:12 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jul 2017 22:38:06 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2017 21:06:26 GMT"
},
{
"version": "v4",
"created": "Sun, 3 Sep 2017 02:03:34 GMT"
},
{
"version": "v5",
"created": "Sun, 12 Nov 2017 21:12:38 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Egorov",
"Michael",
""
],
[
"Wilkison",
"MacLane",
""
],
[
"Nunez",
"David",
""
]
] |
new_dataset
| 0.998175 |
1707.09100
|
Ernest C. H. Cheung
|
Ernest C. Cheung, Tsan Kwong Wong, Aniket Bera, Dinesh Manocha
|
MixedPeds: Pedestrian Detection in Unannotated Videos using
Synthetically Generated Human-agents for Training
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new method for training pedestrian detectors on an unannotated
set of images. We produce a mixed reality dataset that is composed of
real-world background images and synthetically generated static human-agents.
Our approach is general, robust, and makes no other assumptions about the
unannotated dataset regarding the number or location of pedestrians. We
automatically extract from the dataset: i) the vanishing point to calibrate the
virtual camera, and ii) the pedestrians' scales to generate a Spawn Probability
Map, which is a novel concept that guides our algorithm to place the
pedestrians at appropriate locations. After putting synthetic human-agents in
the unannotated images, we use these augmented images to train a Pedestrian
Detector, with the annotations generated along with the synthetic agents. We
conducted our experiments using Faster R-CNN, comparing the detection results
on the unannotated dataset between the detector trained using our approach
and detectors trained on other, manually labeled datasets. We showed that our
approach improves the average precision by 5-13% over these detectors.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2017 04:05:33 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Nov 2017 19:12:55 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Cheung",
"Ernest C.",
""
],
[
"Wong",
"Tsan Kwong",
""
],
[
"Bera",
"Aniket",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.99851 |
1711.00210
|
Minglong Qi
|
Minglong Qi, Shengwu Xiong, Jingling Yuan, Wenbi Rao and Luo Zhong
|
On the complete weight enumerators of some linear codes with a few
weights
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear codes with a few weights have important applications in authentication
codes, secret sharing, consumer electronics, etc. The determination of
parameters such as Hamming weight distributions and complete weight enumerators
of linear codes is an important research topic. In this paper, we consider some
classes of linear codes with a few weights and determine their complete weight
enumerators, from which the corresponding Hamming weight distributions are
derived with the help of some sums involving the Legendre symbol.
|
[
{
"version": "v1",
"created": "Wed, 1 Nov 2017 04:59:27 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Nov 2017 00:41:24 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Qi",
"Minglong",
""
],
[
"Xiong",
"Shengwu",
""
],
[
"Yuan",
"Jingling",
""
],
[
"Rao",
"Wenbi",
""
],
[
"Zhong",
"Luo",
""
]
] |
new_dataset
| 0.997844 |
1711.04062
|
Luca De Feo
|
Luca De Feo
|
Mathematics of Isogeny Based Cryptography
| null | null | null | null |
cs.CR math.NT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
These lecture notes were written for a summer school on Mathematics for
post-quantum cryptography in Thi\`es, Senegal. They try to provide a guide for
Masters' students to get through the vast literature on elliptic curves,
without getting lost on their way to learning isogeny based cryptography. They
are by no means a reference text on the theory of elliptic curves, nor on
cryptography; students are encouraged to complement these notes with some of
the books recommended in the bibliography.
The presentation is divided in three parts, roughly corresponding to the
three lectures given. In an effort to keep the reader interested, each part
alternates between the fundamental theory of elliptic curves, and applications
in cryptography. We often prefer to have the main ideas flow smoothly, rather
than having a rigorous presentation as one would have in a more classical book.
The reader will excuse us for the inaccuracies and the omissions.
|
[
{
"version": "v1",
"created": "Sat, 11 Nov 2017 02:26:34 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"De Feo",
"Luca",
""
]
] |
new_dataset
| 0.999342 |
1711.04150
|
Supriya Pandhre
|
Supriya Pandhre, Himangi Mittal, Manish Gupta, Vineeth N
Balasubramanian
|
STWalk: Learning Trajectory Representations in Temporal Graphs
|
10 pages, 5 figures, 2 tables
| null |
10.1145/3152494.3152512
| null |
cs.SI cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Analyzing the temporal behavior of nodes in time-varying graphs is useful for
many applications such as targeted advertising, community evolution and outlier
detection. In this paper, we present a novel approach, STWalk, for learning
trajectory representations of nodes in temporal graphs. The proposed framework
makes use of structural properties of graphs at current and previous time-steps
to learn effective node trajectory representations. STWalk performs random
walks on a graph at a given time step (called space-walk) as well as on graphs
from past time-steps (called time-walk) to capture the spatio-temporal behavior
of nodes. We propose two variants of STWalk to learn trajectory
representations. In one algorithm, we perform space-walk and time-walk as part
of a single step. In the other variant, we perform space-walk and time-walk
separately and combine the learned representations to get the final trajectory
embedding. Extensive experiments on three real-world temporal graph datasets
validate the effectiveness of the learned representations when compared to
three baseline methods. We also show the goodness of the learned trajectory
embeddings for change point detection, as well as demonstrate that arithmetic
operations on these trajectory representations yield interesting and
interpretable results.
|
[
{
"version": "v1",
"created": "Sat, 11 Nov 2017 15:19:27 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Pandhre",
"Supriya",
""
],
[
"Mittal",
"Himangi",
""
],
[
"Gupta",
"Manish",
""
],
[
"Balasubramanian",
"Vineeth N",
""
]
] |
new_dataset
| 0.995628 |
1711.04412
|
Jackson Abascal
|
Jackson Abascal, Shir Maimon
|
A Refutation of Guinea's "Understanding SAT is in P"
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we summarize and critique the paper "Understanding SAT is in P"
by Alejandro S\'anchez Guinea [arXiv:1504.00337]. The paper claims to present a
polynomial-time solution for the NP-complete language 3-SAT. We show that
Guinea's algorithm is flawed and does not prove 3-SAT is in P.
|
[
{
"version": "v1",
"created": "Mon, 13 Nov 2017 03:48:03 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Abascal",
"Jackson",
""
],
[
"Maimon",
"Shir",
""
]
] |
new_dataset
| 0.997795 |
1711.04502
|
Joshua I. James
|
Nikolay Akatyev, Joshua I. James
|
United Nations Digital Blue Helmets as a Starting Point for Cyber
Peacekeeping
| null |
European Conference on Information Warfare and Security, ECCWS.
p.8-16 (2017)
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior works, such as the Tallinn manual on the international law applicable
to cyber warfare, focus on the circumstances of cyber warfare. Many
organizations are considering how to conduct cyber warfare, but few have
discussed methods to reduce, or even prevent, cyber conflict. A recent series
of publications started developing the framework of Cyber Peacekeeping (CPK)
and its legal requirements. These works assessed the current state of
organizations such as ITU IMPACT, NATO CCDCOE and Shanghai Cooperation
Organization, and found that they did not satisfy requirements to effectively
host CPK activities. An assessment of organizations currently working in the
areas related to CPK found that the United Nations (UN) has mandates and
organizational structures that appear to somewhat overlap the needs of CPK.
However, the UN's current approach to Peacekeeping cannot be directly mapped to
cyberspace. In this research we analyze the development of traditional
Peacekeeping in the United Nations, and current initiatives in cyberspace.
Specifically, we will compare the proposed CPK framework with the recent
initiative of the United Nations named the 'Digital Blue Helmets' as well as
with other UN projects which help to predict and mitigate conflicts.
Our goal is to find practical recommendations for the implementation of the CPK
framework in the United Nations, and to examine how responsibilities defined in
the CPK framework overlap with those of the 'Digital Blue Helmets' and the
Global Pulse program.
|
[
{
"version": "v1",
"created": "Mon, 13 Nov 2017 10:29:38 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Akatyev",
"Nikolay",
""
],
[
"James",
"Joshua I.",
""
]
] |
new_dataset
| 0.999648 |
1711.04503
|
Aris Filos-Ratsikas
|
Aris Filos-Ratsikas, Paul W. Goldberg
|
Consensus Halving is PPA-Complete
| null | null | null | null |
cs.CC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that the computational problem CONSENSUS-HALVING is PPA-complete, the
first PPA-completeness result for a problem whose definition does not involve
an explicit circuit. We also show that an approximate version of this problem
is polynomial-time equivalent to NECKLACE SPLITTING, which establishes
PPAD-hardness for NECKLACE SPLITTING, and suggests that it is also
PPA-complete.
|
[
{
"version": "v1",
"created": "Mon, 13 Nov 2017 10:29:43 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Filos-Ratsikas",
"Aris",
""
],
[
"Goldberg",
"Paul W.",
""
]
] |
new_dataset
| 0.956621 |
1711.04592
|
Jan Egger
|
Jan Egger, Christopher Nimsky, Xiaojun Chen
|
Vertebral body segmentation with GrowCut: Initial experience, workflow
and practical application
|
10 pages
|
SAGE Open Medicine, Volume 5, pp. 1-10, Nov. 2017
|
10.1177/2050312117740984
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this contribution, we used the GrowCut segmentation algorithm publicly
available in 3D Slicer for three-dimensional segmentation of
vertebral bodies. To the best of our knowledge, this is the first time that the
GrowCut method has been studied for the usage of vertebral body segmentation.
In brief, we found that the GrowCut segmentation times were consistently less
than the manual segmentation times. Hence, GrowCut provides an alternative to a
manual slice-by-slice segmentation process.
|
[
{
"version": "v1",
"created": "Mon, 13 Nov 2017 14:19:05 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Egger",
"Jan",
""
],
[
"Nimsky",
"Christopher",
""
],
[
"Chen",
"Xiaojun",
""
]
] |
new_dataset
| 0.983593 |
1711.04618
|
Jason Dou
|
Jason Dou
|
Impartial redistricting: a Markov chain approach to the "Gerrymandering
problem"
|
Bachelor's thesis, Beijing Univ (2014)
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
After every U.S. national census, a state legislature is required to redraw
the boundaries of congressional districts in order to account for changes in
population. At the moment this is done in a highly partisan way, with
districting done in order to maximize the benefits to the party in power. This
is a threat to U.S. democracy. There have been proposals to take
redistricting out of the hands of political parties and give it to an
"independent" commission. Independence is hard to come by, and in this thesis
we explore the possibility of computer-generated districts that, as far as
possible, avoid partisan "gerrymandering". The idea is to treat every
possible redistricting as a state in a Markov chain: every state is obtained
from its former state in a random way. Under some technical conditions, we will
get a near-uniform member of the states after running for a sufficiently long
time (the mixing time). Then we can say this near-uniform member is an
impartial districting. Based on geographical and statistical data for
Pennsylvania, I have implemented the Markov chain algorithm with several
constraints and performed optimization experiments; a web interface will be
made to show the results.
|
[
{
"version": "v1",
"created": "Mon, 30 Oct 2017 20:40:27 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Dou",
"Jason",
""
]
] |
new_dataset
| 0.995184 |
1711.05032
|
Rahul Vaze
|
Rahul Vaze, Shreyas Chaudhari, Akshat Choube, Nitin Aggarwal
|
Energy-Delay-Distortion Problem
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An energy-limited source trying to transmit multiple packets to a destination
with possibly different sizes is considered. With limited energy, the source
cannot potentially transmit all bits of all packets. In addition, there is a
delay cost associated with each packet. Thus, the source has to choose how
many bits to transmit for each packet, and the order in which to transmit these
bits, to minimize the cost of distortion (introduced by transmitting a lower
number of bits) and queueing plus transmission delay, across all packets.
Assuming an exponential metric for distortion loss and linear delay cost, we
show that the optimization problem is jointly convex. Hence, the problem can be
exactly solved using convex solvers. However, because of the complicated
expression derived from the KKT conditions, no closed-form solution can be
found even with the simplest cost-function choice made in the paper; moreover,
the optimal order in which packets should be transmitted needs to be found via
brute force. To facilitate a more structured solution, a discretized version of
the problem is also considered, where time and energy are divided in discrete
amounts. In any time slot (fixed length), bits belonging to any one packet can
be transmitted, while any discrete number of energy quanta can be used in any
slot corresponding to any one packet, such that the total energy constraint is
satisfied. The discretized problem is a special case of a multi-partitioning
problem, where each packet's utility is super-modular and the proposed greedy
solution is shown to incur cost that is at most $2$-times of the optimal cost.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 09:58:16 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Vaze",
"Rahul",
""
],
[
"Chaudhari",
"Shreyas",
""
],
[
"Choube",
"Akshat",
""
],
[
"Aggarwal",
"Nitin",
""
]
] |
new_dataset
| 0.988159 |
1711.05091
|
Pawe{\l} Rz\k{a}\.zewski
|
Micha{\l} D\k{e}bski, Zbigniew Lonc, Pawe{\l} Rz\k{a}\.zewski
|
Sequences of radius $k$ for complete bipartite graphs
| null |
Discrete Applied Mathematics 225, pp. 51--63. 2017
|
10.1016/j.dam.2017.03.017
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A \emph{$k$-radius sequence} for a graph $G$ is a sequence of vertices of $G$
(typically with repetitions) such that for every edge $uv$ of $G$ vertices $u$
and $v$ appear at least once within distance $k$ in the sequence. The length of
a shortest $k$-radius sequence for $G$ is denoted by $f_k(G)$. We give an
asymptotically tight estimation on $f_k(G)$ for complete bipartite graphs
which matches a lower bound valid for all bipartite graphs. We also show
that determining $f_k(G)$ for an arbitrary graph $G$ is NP-hard for every
constant $k>1$.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 14:09:59 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Dębski",
"Michał",
""
],
[
"Lonc",
"Zbigniew",
""
],
[
"Rzążewski",
"Paweł",
""
]
] |
new_dataset
| 0.999067 |
1711.05128
|
Eduardo Aguilar
|
Eduardo Aguilar, Beatriz Remeseiro, Marc Bola\~nos, and Petia Radeva
|
Grab, Pay and Eat: Semantic Food Detection for Smart Restaurants
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increase in awareness of people towards their nutritional habits has
drawn considerable attention to the field of automatic food analysis. Focusing
on the self-service restaurant environment, automatic food analysis is not only
useful for extracting nutritional information from the foods selected by
customers, but also of high interest for speeding up service, solving the
bottleneck produced at the cashiers in times of high demand. In this paper, we
address the problem of automatic food tray analysis in canteens and
restaurants, which consists in predicting the multiple foods placed on a tray
image. We propose a new approach for food analysis based on convolutional
neural networks, which we name Semantic Food Detection, that integrates food
localization, recognition and segmentation in the same framework.
our method improves the state of the art food detection by a considerable
margin on the public dataset UNIMIB2016 achieving about 90% in terms of
F-measure, and thus provides a significant technological advance towards the
automatic billing in restaurant environments.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 14:49:13 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Aguilar",
"Eduardo",
""
],
[
"Remeseiro",
"Beatriz",
""
],
[
"Bolaños",
"Marc",
""
],
[
"Radeva",
"Petia",
""
]
] |
new_dataset
| 0.973601 |
1711.05165
|
Sean Welleck
|
Sean Welleck, Jialin Mao, Kyunghyun Cho, Zheng Zhang
|
Saliency-based Sequential Image Attention with Multiset Prediction
|
To appear in Advances in Neural Information Processing Systems 30
(NIPS 2017)
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans process visual scenes selectively and sequentially using attention.
Central to models of human visual attention is the saliency map. We propose a
hierarchical visual architecture that operates on a saliency map and uses a
novel attention mechanism to sequentially focus on salient regions and take
additional glimpses within those regions. The architecture is motivated by
human visual attention, and is used for multi-label image classification on a
novel multiset task, demonstrating that it achieves high precision and recall
while localizing objects with its attention. Unlike conventional multi-label
image classification models, the model supports multiset prediction due to a
reinforcement-learning based training process that allows for arbitrary label
permutation and multiple instances per label.
|
[
{
"version": "v1",
"created": "Tue, 14 Nov 2017 16:16:36 GMT"
}
] | 2017-11-15T00:00:00 |
[
[
"Welleck",
"Sean",
""
],
[
"Mao",
"Jialin",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Zhang",
"Zheng",
""
]
] |
new_dataset
| 0.998742 |
1505.05404
|
Alexios Balatsoukas-Stimming
|
Alexios Balatsoukas-Stimming and Andreas Burg
|
Faulty Successive Cancellation Decoding of Polar Codes for the Binary
Erasure Channel
|
Accepted for publication in the IEEE Transactions on Communications
| null |
10.1109/TCOMM.2017.2771243
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, faulty successive cancellation decoding of polar codes for the
binary erasure channel is studied. To this end, a simple erasure-based fault
model is introduced to represent errors in the decoder and it is shown that,
under this model, polarization does not happen, meaning that fully reliable
communication is not possible at any rate. Furthermore, a lower bound on the
frame error rate of polar codes under faulty SC decoding is provided, which is
then used, along with a well-known upper bound, in order to choose a
blocklength that minimizes the erasure probability under faulty decoding.
Finally, an unequal error protection scheme that can re-enable asymptotically
erasure-free transmission at a small rate loss and by protecting only a
constant fraction of the decoder is proposed. The same scheme is also shown to
significantly improve the finite-length performance of the faulty successive
cancellation decoder by protecting as little as 1.5% of the decoder.
|
[
{
"version": "v1",
"created": "Wed, 20 May 2015 14:42:57 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 11:40:05 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Nov 2017 09:22:11 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Balatsoukas-Stimming",
"Alexios",
""
],
[
"Burg",
"Andreas",
""
]
] |
new_dataset
| 0.998977 |
1702.07578
|
Florian Kurpicz
|
Johannes Fischer, Florian Kurpicz, Marvin L\"obel
|
Simple, Fast and Lightweight Parallel Wavelet Tree Construction
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The wavelet tree (Grossi et al. [SODA, 2003]) and wavelet matrix (Claude et
al. [Inf. Syst., 47:15--32, 2015]) are compact indices for texts over an
alphabet $[0,\sigma)$ that support rank, select and access queries in $O(\lg
\sigma)$ time. We first present new practical sequential and parallel
algorithms for wavelet tree construction. Their unifying characteristic is
that they construct the wavelet tree bottom-up, i.e., they compute the last
level first. We also show that this bottom-up construction can easily be
adapted to wavelet matrices. In practice, our best sequential algorithm is up
to twice as fast as the currently fastest sequential wavelet tree construction
algorithm (Shun [DCC, 2015]), simultaneously saving a factor of 2 in space.
This scales up to 32 cores, where we are about equally fast as the currently
fastest parallel wavelet tree construction algorithm (Labeit et al. [DCC,
2016]), but still use only about 75% of the space. An additional theoretical
result shows how to adapt any wavelet tree construction algorithm to the
wavelet matrix in the same (asymptotic) time, using only little extra space.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2017 13:43:20 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2017 10:54:22 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Nov 2017 14:22:23 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Fischer",
"Johannes",
""
],
[
"Kurpicz",
"Florian",
""
],
[
"Löbel",
"Marvin",
""
]
] |
new_dataset
| 0.965436 |
1709.03526
|
Mikkel Fly Kragh
|
Mikkel Fly Kragh, Peter Christiansen, Morten Stigaard Laursen, Morten
Larsen, Kim Arild Steen, Ole Green, Henrik Karstoft, Rasmus Nyholm
J{\o}rgensen
|
FieldSAFE: Dataset for Obstacle Detection in Agriculture
|
Submitted to special issue of MDPI Sensors: Sensors in Agriculture
|
Sensors 2017, 17(11), 2579
|
10.3390/s17112579
|
1424-8220
|
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel multi-modal dataset for obstacle detection
in agriculture. The dataset comprises approximately 2 hours of raw sensor data
from a tractor-mounted sensor system in a grass mowing scenario in Denmark,
October 2016. Sensing modalities include stereo camera, thermal camera, web
camera, 360-degree camera, lidar, and radar, while precise localization is
available from fused IMU and GNSS. Both static and moving obstacles are present
including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and
vegetation. All obstacles have ground truth object labels and geographic
coordinates.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2017 18:12:50 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Kragh",
"Mikkel Fly",
""
],
[
"Christiansen",
"Peter",
""
],
[
"Laursen",
"Morten Stigaard",
""
],
[
"Larsen",
"Morten",
""
],
[
"Steen",
"Kim Arild",
""
],
[
"Green",
"Ole",
""
],
[
"Karstoft",
"Henrik",
""
],
[
"Jørgensen",
"Rasmus Nyholm",
""
]
] |
new_dataset
| 0.999857 |
1711.03543
|
Anush Sankaran
|
Akshay Sethi, Anush Sankaran, Naveen Panwar, Shreya Khare, Senthil
Mani
|
DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers
|
AAAI2018
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With an abundance of research papers in deep learning, reproducibility or
adoption of the existing works becomes a challenge. This is due to the lack of
open source implementations provided by the authors. Further, re-implementing
research papers in a different library is a daunting task. To address these
challenges, we propose a novel extensible approach, DLPaper2Code, to extract
and understand deep learning design flow diagrams and tables available in a
research paper and convert them to an abstract computational graph. The
extracted computational graph is then converted into execution ready source
code in both Keras and Caffe, in real-time. An arXiv-like website is created
where the automatically generated designs are made publicly available for 5,000
research papers. The generated designs could be rated and edited using an
intuitive drag-and-drop UI framework in a crowdsourced manner. To evaluate our
approach, we create a simulated dataset with over 216,000 valid design
visualizations using a manually defined grammar. Experiments on the simulated
dataset show that the proposed framework provides more than $93\%$ accuracy in
flow diagram content extraction.
|
[
{
"version": "v1",
"created": "Thu, 9 Nov 2017 10:00:19 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Sethi",
"Akshay",
""
],
[
"Sankaran",
"Anush",
""
],
[
"Panwar",
"Naveen",
""
],
[
"Khare",
"Shreya",
""
],
[
"Mani",
"Senthil",
""
]
] |
new_dataset
| 0.972059 |
1711.03676
|
Patrick M. Pilarski
|
Patrick M. Pilarski, Richard S. Sutton, Kory W. Mathewson, Craig
Sherstan, Adam S. R. Parker, Ann L. Edwards
|
Communicative Capital for Prosthetic Agents
|
33 pages, 10 figures; unpublished technical report undergoing peer
review
| null | null | null |
cs.AI cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents an overarching perspective on the role that machine
intelligence can play in enhancing human abilities, especially those that have
been diminished due to injury or illness. As a primary contribution, we develop
the hypothesis that assistive devices, and specifically artificial arms and
hands, can and should be viewed as agents in order for us to most effectively
improve their collaboration with their human users. We believe that increased
agency will enable more powerful interactions between human users and next
generation prosthetic devices, especially when the sensorimotor space of the
prosthetic technology greatly exceeds the conventional control and
communication channels available to a prosthetic user. To more concretely
examine an agency-based view on prosthetic devices, we propose a new schema for
interpreting the capacity of a human-machine collaboration as a function of
both the human's and machine's degrees of agency. We then introduce the idea of
communicative capital as a way of thinking about the communication resources
developed by a human and a machine during their ongoing interaction. Using this
schema of agency and capacity, we examine the benefits and disadvantages of
increasing the agency of a prosthetic limb. To do so, we present an analysis of
examples from the literature where building communicative capital has enabled a
progression of fruitful, task-directed interactions between prostheses and
their human users. We then describe further work that is needed to concretely
evaluate the hypothesis that prostheses are best thought of as agents. The
agent-based viewpoint developed in this article significantly extends current
thinking on how best to support the natural, functional use of increasingly
complex prosthetic enhancements, and opens the door for more powerful
interactions between humans and their assistive technologies.
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 03:19:59 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Pilarski",
"Patrick M.",
""
],
[
"Sutton",
"Richard S.",
""
],
[
"Mathewson",
"Kory W.",
""
],
[
"Sherstan",
"Craig",
""
],
[
"Parker",
"Adam S. R.",
""
],
[
"Edwards",
"Ann L.",
""
]
] |
new_dataset
| 0.953596 |
1711.03684
|
Jinsong Hu
|
Jinsong Hu, Khurram Shahzad, Shihao Yan, Xiangyun Zhou, Feng Shu, Jun
Li
|
Covert Communications with A Full-Duplex Receiver over Wireless Fading
Channels
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a covert communication scheme where the transmitter
attempts to hide its transmission to a full-duplex receiver from a warden that
is to detect this covert transmission using a radiometer. Specifically, we
first derive the detection error rate at the warden, based on which the optimal
detection threshold for its radiometer is analytically determined and its
expected detection error rate over wireless fading channels is achieved in a
closed-form expression. Our analysis indicates that the artificial noise
deliberately produced by the receiver with a random transmit power, although
causes self-interference, offers the capability of achieving a positive
effective covert rate for any transmit power (can be infinity) subject to any
given covertness requirement on the expected detection error rate. This work is
the first study on the use of the full-duplex receiver with controlled
artificial noise for achieving covert communications and invites further
investigation in this regard.
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 04:11:48 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Hu",
"Jinsong",
""
],
[
"Shahzad",
"Khurram",
""
],
[
"Yan",
"Shihao",
""
],
[
"Zhou",
"Xiangyun",
""
],
[
"Shu",
"Feng",
""
],
[
"Li",
"Jun",
""
]
] |
new_dataset
| 0.994881 |
1711.03774
|
Fred Phillips
|
Fred Phillips
|
The Sad State of Entrepreneurship in America: What Educators Can Do
About It
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The entrepreneurial scene suffers from a sick venture capital industry, a
number of imponderable illogics, and, maybe, misplaced adulation from students
and the public. The paper details these problems, finds root causes, and
prescribes action for higher education professionals and institutions.
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 11:35:24 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Phillips",
"Fred",
""
]
] |
new_dataset
| 0.997107 |
1711.03808
|
George Fragulis
|
Christos Tolis and George F. Fragulis
|
An experimental mechatronic design and control of a 5 DOF Robotic arm
for identification and sorting of different sized objects
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of this paper is to present the construction and programming of a
five degrees of freedom robotic arm which interacts with an infrared sensor for
the identification and sorting of different-sized objects. The construction
design is structured around the three main branches of science that make up
Mechatronics: Mechanical Engineering, Electronic-Electrical Engineering
and Computer Engineering. The methods that have been used for the construction
are presented as well as the methods for the programming of the arm in
cooperation with the sensor. The aim is to present the manual and automatic
control of the arm for the recognition and the installation of the objects
through a simple (in operation) and low-cost sensor like the one that was
used by this paper. Furthermore, this paper presents the significance of this
robotic arm design and its further applications in contemporary industrial
forms of production.
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 13:21:27 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Tolis",
"Christos",
""
],
[
"Fragulis",
"George F.",
""
]
] |
new_dataset
| 0.995621 |
1711.03871
|
Daniel Patterson
|
Daniel Patterson, Jamie Perconti, Christos Dimoulas, Amal Ahmed
|
FunTAL: Reasonably Mixing a Functional Language with Assembly
|
15 pages; implementation at https://dbp.io/artifacts/funtal/;
published in PLDI '17, Proceedings of the 38th ACM SIGPLAN Conference on
Programming Language Design and Implementation, June 18 - 23, 2017,
Barcelona, Spain
| null |
10.1145/3140587.3062347
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FunTAL, the first multi-language system to formalize safe
interoperability between a high-level functional language and low-level
assembly code while supporting compositional reasoning about the mix. A central
challenge in developing such a multi-language is bridging the gap between
assembly, which is staged into jumps to continuations, and high-level code,
where subterms return a result. We present a compositional stack-based typed
assembly language that supports components, comprised of one or more basic
blocks, that may be embedded in high-level contexts. We also present a logical
relation for FunTAL that supports reasoning about equivalence of high-level
components and their assembly replacements, mixed-language programs with
callbacks between languages, and assembly components comprised of different
numbers of basic blocks.
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 15:25:50 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Patterson",
"Daniel",
""
],
[
"Perconti",
"Jamie",
""
],
[
"Dimoulas",
"Christos",
""
],
[
"Ahmed",
"Amal",
""
]
] |
new_dataset
| 0.997379 |
1711.03910
|
Zhao Chen Mr.
|
Zhao Chen, Liuguo Yin and Jianhua Lu
|
LDPC-Based Code Hopping for Gaussian Wiretap Channel With Limited
Feedback
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a scheme named code hopping (CodeHop) for Gaussian
wiretap channels based on nonsystematic low-density parity-check (LDPC) codes.
Different from traditional communications, in the CodeHop scheme, the
legitimate receiver (Bob) rapidly switches the parity-check matrix upon each
correctly received source message block. Given an authenticated public feedback
channel, the transmitter's (Alice) parity-check matrix can also be synchronized
with the receiver's. As a result, once an eavesdropper (Eve) erroneously
decodes a message block, she may not be able to follow the update of subsequent
parity-check matrices. Thus, the average BER of Eve will be very close to $0.5$
if the transmitted number of message blocks is large enough. Focused on the
measure of security gap defined as the difference of channel quality between
Bob and Eve, numerical results show that the CodeHop scheme outperforms other
solutions by sufficiently reducing the security gap without sacrificing the
error-correcting performance of Bob.
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 16:33:18 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Chen",
"Zhao",
""
],
[
"Yin",
"Liuguo",
""
],
[
"Lu",
"Jianhua",
""
]
] |
new_dataset
| 0.99794 |
1711.03938
|
Alexey Dosovitskiy
|
Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez,
Vladlen Koltun
|
CARLA: An Open Urban Driving Simulator
|
Published at the 1st Conference on Robot Learning (CoRL)
| null | null | null |
cs.LG cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CARLA, an open-source simulator for autonomous driving research.
CARLA has been developed from the ground up to support development, training,
and validation of autonomous urban driving systems. In addition to open-source
code and protocols, CARLA provides open digital assets (urban layouts,
buildings, vehicles) that were created for this purpose and can be used freely.
The simulation platform supports flexible specification of sensor suites and
environmental conditions. We use CARLA to study the performance of three
approaches to autonomous driving: a classic modular pipeline, an end-to-end
model trained via imitation learning, and an end-to-end model trained via
reinforcement learning. The approaches are evaluated in controlled scenarios of
increasing difficulty, and their performance is examined via metrics provided
by CARLA, illustrating the platform's utility for autonomous driving research.
The supplementary video can be viewed at https://youtu.be/Hp8Dz-Zek2E
|
[
{
"version": "v1",
"created": "Fri, 10 Nov 2017 17:54:40 GMT"
}
] | 2017-11-13T00:00:00 |
[
[
"Dosovitskiy",
"Alexey",
""
],
[
"Ros",
"German",
""
],
[
"Codevilla",
"Felipe",
""
],
[
"Lopez",
"Antonio",
""
],
[
"Koltun",
"Vladlen",
""
]
] |
new_dataset
| 0.997989 |
1312.0788
|
Guillermo Gallego Bonet
|
Guillermo Gallego and Anthony Yezzi
|
A compact formula for the derivative of a 3-D rotation in exponential
coordinates
|
6 pages
|
Journal of Mathematical Imaging and Vision, vol. 51, no. 3, pp
378-384, Mar. 2015
|
10.1007/s10851-014-0528-x
| null |
cs.CV math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a compact formula for the derivative of a 3-D rotation matrix with
respect to its exponential coordinates. A geometric interpretation of the
resulting expression is provided, as well as its agreement with other
less-compact but better-known formulas. To the best of our knowledge, this
simpler formula does not appear anywhere in the literature. We hope by
providing this more compact expression to alleviate the common pressure to
reluctantly resort to alternative representations in various computational
applications simply as a means to avoid the complexity of differential analysis
in exponential coordinates.
|
[
{
"version": "v1",
"created": "Tue, 3 Dec 2013 12:09:01 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Aug 2014 08:00:26 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Gallego",
"Guillermo",
""
],
[
"Yezzi",
"Anthony",
""
]
] |
new_dataset
| 0.999567 |
1407.5035
|
Judy Hoffman
|
Judy Hoffman, Sergio Guadarrama, Eric Tzeng, Ronghang Hu, Jeff
Donahue, Ross Girshick, Trevor Darrell, and Kate Saenko
|
LSDA: Large Scale Detection Through Adaptation
| null |
Neural Information Processing Systems (NIPS) 2014
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major challenge in scaling object detection is the difficulty of obtaining
labeled images for large numbers of categories. Recently, deep convolutional
neural networks (CNNs) have emerged as clear winners on object classification
benchmarks, in part due to training with 1.2M+ labeled classification images.
Unfortunately, only a small fraction of those labels are available for the
detection task. It is much cheaper and easier to collect large quantities of
image-level labels from search engines than it is to collect detection data and
label it with precise bounding boxes. In this paper, we propose Large Scale
Detection through Adaptation (LSDA), an algorithm which learns the difference
between the two tasks and transfers this knowledge to classifiers for
categories without bounding box annotated data, turning them into detectors.
Our method has the potential to enable detection for the tens of thousands of
categories that lack bounding box annotations, yet have plenty of
classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge
demonstrates the efficacy of our approach. This algorithm enables us to produce
a >7.6K detector by using available classification data from leaf nodes in the
ImageNet tree. We additionally demonstrate how to modify our architecture to
produce a fast detector (running at 2fps for the 7.6K detector). Models and
software are available at
|
[
{
"version": "v1",
"created": "Fri, 18 Jul 2014 17:08:02 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Aug 2014 00:38:38 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Nov 2014 01:48:26 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Hoffman",
"Judy",
""
],
[
"Guadarrama",
"Sergio",
""
],
[
"Tzeng",
"Eric",
""
],
[
"Hu",
"Ronghang",
""
],
[
"Donahue",
"Jeff",
""
],
[
"Girshick",
"Ross",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Saenko",
"Kate",
""
]
] |
new_dataset
| 0.999591 |
1610.00155
|
Herman Haverkort
|
Herman Haverkort
|
How many three-dimensional Hilbert curves are there?
|
No change since previous arXiv version. Please check out the version
in Journal of Computational Geometry: the text has been thoroughly
restructured, including some new information. This article extends, improves
and replaces most of my brief preliminary manuscript "An inventory of
three-dimensional Hilbert space-filling curves" (arXiv:1109.2323)
|
Journal of Computational Geometry 8(1):206-281 (2017)
| null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hilbert's two-dimensional space-filling curve is appreciated for its good
locality-preserving properties and easy implementation for many applications.
However, Hilbert did not describe how to generalize his construction to higher
dimensions. In fact, the number of ways in which this may be done ranges from
zero to infinite, depending on what properties of the Hilbert curve one
considers to be essential.
In this work we take the point of view that a Hilbert curve should at least
be self-similar and traverse cubes octant by octant. We organize and explore
the space of possible three-dimensional Hilbert curves and the potentially
useful properties which they may have. We discuss a notation system that allows
us to distinguish the curves from one another and enumerate them. This system
has been implemented in a software prototype, available from the author's
website. Several examples of possible three-dimensional Hilbert curves are
presented, including a curve that visits the points on most sides of the unit
cube in the order of the two-dimensional Hilbert curve; curves of which not
only the eight octants are similar to each other, but also the four quarters; a
curve with excellent locality-preserving properties and endpoints that are not
vertices of the cube; a curve in which all but two octants are each other's
images with respect to reflections in axis-parallel planes; and curves that can
be sketched on a grid without using vertical line segments. In addition, we
discuss several four-dimensional Hilbert curves.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2016 16:18:55 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 15:52:53 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Haverkort",
"Herman",
""
]
] |
new_dataset
| 0.974959 |
1610.08336
|
Elias Mueggler
|
Elias Mueggler, Henri Rebecq, Guillermo Gallego, Tobi Delbruck, Davide
Scaramuzza
|
The Event-Camera Dataset and Simulator: Event-based Data for Pose
Estimation, Visual Odometry, and SLAM
|
7 pages, 4 figures, 3 tables
|
International Journal of Robotics Research, Vol. 36, Issue 2, pp.
142-149, Feb. 2017
|
10.1177/0278364917691115
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
New vision sensors, such as the Dynamic and Active-pixel Vision sensor
(DAVIS), incorporate a conventional global-shutter camera and an event-based
sensor in the same pixel array. These sensors have great potential for
high-speed robotics and computer vision because they allow us to combine the
benefits of conventional cameras with those of event-based sensors: low
latency, high temporal resolution, and very high dynamic range. However, new
algorithms are required to exploit the sensor characteristics and cope with its
unconventional output, which consists of a stream of asynchronous brightness
changes (called "events") and synchronous grayscale frames. For this purpose,
we present and release a collection of datasets captured with a DAVIS in a
variety of synthetic and real environments, which we hope will motivate
research on new algorithms for high-speed and high-dynamic-range robotics and
computer-vision applications. In addition to global-shutter intensity images
and asynchronous events, we provide inertial measurements and ground-truth
camera poses from a motion-capture system. The latter allows comparing the pose
accuracy of ego-motion estimation algorithms quantitatively. All the data are
released both as standard text files and binary files (i.e., rosbag). This
paper provides an overview of the available data and describes a simulator that
we release open-source to create synthetic event-camera data.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2016 13:59:39 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2016 13:38:04 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Nov 2016 13:51:11 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Nov 2017 08:40:14 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Mueggler",
"Elias",
""
],
[
"Rebecq",
"Henri",
""
],
[
"Gallego",
"Guillermo",
""
],
[
"Delbruck",
"Tobi",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.99956 |
1702.04815
|
Konstantinos Bougiatiotis
|
Konstantinos Bougiatiotis and Theodore Giannakopoulos
|
Multimodal Content Representation and Similarity Ranking of Movies
|
Preliminary work
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we examine the existence of correlation between movie
similarity and low level features from respective movie content. In particular,
we demonstrate the extraction of multi-modal representation models of movies
based on subtitles, audio and metadata mining. We emphasize our research in
topic modeling of movies based on their subtitles. In order to demonstrate the
proposed content representation approach, we have built a small dataset of 160
widely known movies. We assert movie similarities, as propagated by the
singular modalities and fusion models, in the form of recommendation rankings.
We showcase a novel topic model browser for movies that allows for exploration
of the different aspects of similarities between movies and an information
retrieval system for movie similarity based on multi-modal content.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2017 23:31:44 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 13:34:21 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Bougiatiotis",
"Konstantinos",
""
],
[
"Giannakopoulos",
"Theodore",
""
]
] |
new_dataset
| 0.99728 |
1705.00538
|
Emil Bj\"ornson
|
Emil Bj\"ornson, Jakob Hoydis, Luca Sanguinetti
|
Massive MIMO has Unlimited Capacity
|
To appear in IEEE Transactions on Wireless Communications, 17 pages,
7 figures
| null |
10.1109/TWC.2017.2768423
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The capacity of cellular networks can be improved by the unprecedented array
gain and spatial multiplexing offered by Massive MIMO. Since its inception, the
coherent interference caused by pilot contamination has been believed to create
a finite capacity limit, as the number of antennas goes to infinity. In this
paper, we prove that this is incorrect and an artifact from using simplistic
channel models and suboptimal precoding/combining schemes. We show that with
multicell MMSE precoding/combining and a tiny amount of spatial channel
correlation or large-scale fading variations over the array, the capacity
increases without bound as the number of antennas increases, even under pilot
contamination. More precisely, the result holds when the channel covariance
matrices of the contaminating users are asymptotically linearly independent,
which is generally the case. If also the diagonals of the covariance matrices
are linearly independent, it is sufficient to know these diagonals (and not the
full covariance matrices) to achieve an unlimited asymptotic capacity.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2017 14:24:47 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2017 15:21:34 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Sep 2017 07:08:45 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Nov 2017 07:59:00 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Björnson",
"Emil",
""
],
[
"Hoydis",
"Jakob",
""
],
[
"Sanguinetti",
"Luca",
""
]
] |
new_dataset
| 0.996162 |
1706.03358
|
Mathieu Carri\`ere
|
Mathieu Carri\`ere and Marco Cuturi and Steve Oudot
|
Sliced Wasserstein Kernel for Persistence Diagrams
|
Minor modifications
| null | null | null |
cs.CG math.AT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Persistence diagrams (PDs) play a key role in topological data analysis
(TDA), in which they are routinely used to describe topological properties of
complicated shapes. PDs enjoy strong stability properties and have proven their
utility in various learning contexts. They do not, however, live in a space
naturally endowed with a Hilbert structure and are usually compared with
specific distances, such as the bottleneck distance. To incorporate PDs in a
learning pipeline, several kernels have been proposed for PDs with a strong
emphasis on the stability of the RKHS distance w.r.t. perturbations of the PDs.
In this article, we use the Sliced Wasserstein approximation SW of the
Wasserstein distance to define a new kernel for PDs, which is not only provably
stable but also provably discriminative (depending on the number of points in
the PDs) w.r.t. the Wasserstein distance $d_1$ between PDs. We also demonstrate
its practicality, by developing an approximation technique to reduce kernel
computation time, and show that our proposal compares favorably to existing
kernels for PDs on several benchmarks.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2017 14:47:19 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2017 08:44:30 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Nov 2017 14:47:06 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Carrière",
"Mathieu",
""
],
[
"Cuturi",
"Marco",
""
],
[
"Oudot",
"Steve",
""
]
] |
new_dataset
| 0.985389 |
1708.03387
|
Fatemeh Shirazi
|
Fatemeh Shirazi, Elena Andreeva, Markulf Kohlweiss, and Claudia Diaz
|
Multiparty Routing: Secure Routing for Mixnets
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anonymous communication networks are important building blocks for online
privacy protection. One approach to achieve anonymity is to relay messages
through multiple routers, where each router shuffles messages independently. To
achieve anonymity, at least one router needs to be honest. In the presence of
an adversary that is controlling a subset of the routers unbiased routing is
important for guaranteeing anonymity. However, the routing strategy also
influences other factors such as the scalability and the performance of the
system. One solution is to use a fixed route for relaying all messages with
many routers. If the route is not fixed the routing decision can either be made
by the communication initiator or the intermediate routers. However, the
existing routing types each have limitations. For example, one faces
scalability issues when increasing the throughput of systems with fixed routes.
Moreover, when the routing decision is left to the initiator, the initiator
needs to maintain an up-to-date view of the system at all times, which also
does not scale. If the routing decision is left to intermediate routers the
routing of the communication can be influenced by an adversary. In this work,
we propose a novel multiparty routing approach for anonymous communication that
addresses these shortcomings. We distribute the routing decision and verify the
correctness of routing to achieve routing integrity. More concretely, we
provide a mixnet design that uses our routing approach and that in addition,
addresses load balancing. We show that our system is secure against a global
active adversary.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2017 21:09:54 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 10:31:36 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Shirazi",
"Fatemeh",
""
],
[
"Andreeva",
"Elena",
""
],
[
"Kohlweiss",
"Markulf",
""
],
[
"Diaz",
"Claudia",
""
]
] |
new_dataset
| 0.995714 |
1711.00046
|
Vikash Singh
|
Vikash Singh
|
Replace or Retrieve Keywords In Documents at Scale
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce the FlashText algorithm for replacing keywords or
finding keywords in a given text. FlashText can search or replace keywords in
one pass over a document. The time complexity of this algorithm is not
dependent on the number of terms being searched or replaced. For a document of
size N (characters) and a dictionary of M keywords, the time complexity will be
O(N). This algorithm is much faster than Regex, because regex time complexity
is O(MxN). It is also different from the Aho-Corasick algorithm, as it doesn't
match substrings. FlashText is designed to only match complete words (words
with boundary characters on both sides). For an input dictionary of {Apple},
this algorithm won't match it to 'I like Pineapple'. This algorithm is also
designed to go for the longest match first. For an input dictionary {Machine,
Learning, Machine learning} on a string 'I like Machine learning', it will only
consider the longest match, which is Machine Learning. We have made python
implementation of this algorithm available as open-source on GitHub, released
under the permissive MIT License.
|
[
{
"version": "v1",
"created": "Tue, 31 Oct 2017 18:34:03 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 18:56:44 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Singh",
"Vikash",
""
]
] |
new_dataset
| 0.984302 |
1711.02294
|
Dinesh Subhraveti
|
Dinesh Subhraveti, Sri Goli, Serge Hallyn, Ravi Chamarthy, Christos
Kozyrakis
|
AppSwitch: Resolving the Application Identity Crisis
| null | null | null | null |
cs.NI cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Networked applications traditionally derive their identity from the identity
of the host on which they run. The default application identity acquired from
the host results in subtle and substantial problems related to application
deployment, discovery and access, especially for modern distributed
applications. A number of mechanisms and workarounds, often quite elaborate,
are used to address those problems but they only address them indirectly and
incompletely.
This paper presents AppSwitch, a novel transport layer network element that
decouples applications from underlying network at the system call layer and
enables them to be identified independently of the network. Without requiring
changes to existing applications or infrastructure, it removes the cost and
complexity associated with operating distributed applications while offering a
number of benefits including an efficient implementation of common network
functions such as application firewall and load balancer. Experiments with our
implementation show that AppSwitch model also effectively removes the
performance penalty associated with unnecessary data path processing that is
typical in those application environments.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 05:41:15 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Nov 2017 19:08:26 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Subhraveti",
"Dinesh",
""
],
[
"Goli",
"Sri",
""
],
[
"Hallyn",
"Serge",
""
],
[
"Chamarthy",
"Ravi",
""
],
[
"Kozyrakis",
"Christos",
""
]
] |
new_dataset
| 0.998894 |
1711.03129
|
Jiajun Wu
|
Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, William T Freeman,
Joshua B Tenenbaum
|
MarrNet: 3D Shape Reconstruction via 2.5D Sketches
|
NIPS 2017. The first two authors contributed equally to this paper.
Project page: http://marrnet.csail.mit.edu
| null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object reconstruction from a single image is a highly under-determined
problem, requiring strong prior knowledge of plausible 3D shapes. This
introduces challenges for learning-based approaches, as 3D object annotations
are scarce in real images. Previous work chose to train on synthetic data with
ground truth 3D information, but suffered from domain adaptation when tested on
real data. In this work, we propose MarrNet, an end-to-end trainable model that
sequentially estimates 2.5D sketches and 3D object shape. Our disentangled,
two-step formulation has three advantages. First, compared to full 3D shape,
2.5D sketches are much easier to recover from a 2D image; models that
recover 2.5D sketches are also more likely to transfer from synthetic to real
data. Second, for 3D reconstruction from 2.5D sketches, systems can learn
purely from synthetic data. This is because we can easily render realistic 2.5D
sketches without modeling object appearance variations in real images,
including lighting, texture, etc. This further relieves the domain adaptation
problem. Third, we derive differentiable projective functions from 3D shape to
2.5D sketches; the framework is therefore end-to-end trainable on real images,
requiring no human annotations. Our model achieves state-of-the-art performance
on 3D shape reconstruction.
|
[
{
"version": "v1",
"created": "Wed, 8 Nov 2017 19:29:01 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Wu",
"Jiajun",
""
],
[
"Wang",
"Yifan",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Sun",
"Xingyuan",
""
],
[
"Freeman",
"William T",
""
],
[
"Tenenbaum",
"Joshua B",
""
]
] |
new_dataset
| 0.999315 |
1711.03179
|
Yun Gu
|
Yang Hu, Yun Gu, Jie Yang and Guang-Zhong Yang
|
Multi-stage Suture Detection for Robot Assisted Anastomosis based on
Deep Learning
|
Submitted to ICRA 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In robotic surgery, task automation and learning from demonstration combined
with human supervision is an emerging trend for many new surgical robot
platforms. One such task is automated anastomosis, which requires bimanual
needle handling and suture detection. Due to the complexity of the surgical
environment and varying patient anatomies, reliable suture detection is
difficult, which is further complicated by occlusion and thread topologies. In
this paper, we propose a multi-stage framework for suture thread detection
based on deep learning. Fully convolutional neural networks are used to obtain
the initial detection and the overlapping status of suture thread, which are
later fused with the original image to learn a gradient road map of the thread.
Based on the gradient road map, multiple segments of the thread are extracted
and linked to form the whole thread using a curvilinear structure detector.
Experiments on two different types of sutures demonstrate the accuracy of the
proposed framework.
|
[
{
"version": "v1",
"created": "Wed, 8 Nov 2017 21:44:14 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Hu",
"Yang",
""
],
[
"Gu",
"Yun",
""
],
[
"Yang",
"Jie",
""
],
[
"Yang",
"Guang-Zhong",
""
]
] |
new_dataset
| 0.977378 |
1711.03307
|
Ismail Aydogdu
|
Ismail Aydogdu, Taher Abualrub
|
Self-Dual Cyclic and Quantum Codes Over Z2^{\alpha} x (Z2 + uZ2)^{\beta}
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce self-dual cyclic and quantum codes over
Z2^{\alpha} x (Z2 + uZ2)^{\beta}. We determine the conditions for any
Z2Z2[u]-cyclic code to be self-dual, that is, C = C^{\perp}. Since the binary
image of a self-orthogonal Z2Z2[u]-linear code is also a self-orthogonal binary
linear code, we introduce quantum codes over Z2^{\alpha} x (Z2 + uZ2)^{\beta}.
Finally, we present some examples of self-dual cyclic and quantum codes that
have good parameters.
|
[
{
"version": "v1",
"created": "Thu, 9 Nov 2017 10:05:57 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Aydogdu",
"Ismail",
""
],
[
"Abualrub",
"Taher",
""
]
] |
new_dataset
| 0.997158 |
1711.03310
|
Hsuan-Yin Lin
|
Hsuan-Yin Lin, Stefan M. Moser, and Po-Ning Chen
|
Weak Flip Codes and their Optimality on the Binary Erasure Channel
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates fundamental properties of nonlinear binary codes by
looking at the codebook matrix not row-wise (codewords), but column-wise. The
family of weak flip codes is presented and shown to contain many beautiful
properties. In particular the subfamily fair weak flip codes, which goes back
to Berlekamp, Gallager, and Shannon and which was shown to achieve the error
exponent with a fixed number of codewords $M$, can be seen as a generalization
of linear codes to an arbitrary number of codewords. Based on the column-wise
approach, the $r$-wise Hamming distance is introduced as a generalization to
the widely used (pairwise) Hamming distance. It is shown that the minimum
$r$-wise Hamming distance satisfies a generalized $r$-wise Plotkin bound. The
$r$-wise Hamming distance structure of the nonlinear fair weak flip codes is
analyzed and shown to be superior to many codes. In particular, it is proven
that the fair weak flip codes achieve the $r$-wise Plotkin bound with equality
for all $r$. In the second part of the paper, these insights are applied to a
binary erasure channel (BEC) with an arbitrary erasure probability. An exact
formula for the average error probability of an arbitrary code using maximum
likelihood decoding is derived and shown to be expressible using only the
$r$-wise Hamming distance structure of the code. For a number of codewords
$M\leq4$ and an arbitrary blocklength $n$, the globally optimal codes (in the
sense of minimizing the average error probability) are found. For $M=5,6$ and
an arbitrary blocklength $n$, the optimal codes are conjectured. For larger
$M$, observations regarding the optimal design are presented, e.g., that good
codes have a large $r$-wise Hamming distance structure for all $r$. Numerical
results validate our code design criteria and show the superiority of our best
found nonlinear weak flip codes compared to the best linear codes.
|
[
{
"version": "v1",
"created": "Thu, 9 Nov 2017 10:13:57 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Lin",
"Hsuan-Yin",
""
],
[
"Moser",
"Stefan M.",
""
],
[
"Chen",
"Po-Ning",
""
]
] |
new_dataset
| 0.992006 |
1711.03397
|
Zhen Huang
|
Zhen Huang and David Lie
|
SAIC: Identifying Configuration Files for System Configuration
Management
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Systems can become misconfigured for a variety of reasons such as operator
errors or buggy patches. When a misconfiguration is discovered, usually the
first order of business is to restore availability, often by undoing the
misconfiguration. To simplify this task, we propose the Statistical Analysis
for Identifying Configuration Files (SAIC), which analyzes how the contents of
a file changes over time to automatically determine which files contain
configuration state. In this way, SAIC reduces the number of files a user must
manually examine during recovery and allows versioning file systems to make
more efficient use of their versioning storage.
The two key insights that enable SAIC to identify configuration files are
that configuration state must persist across executions of an application and
that configuration state changes at a slower rate than other types of
application state. SAIC applies these insights through a set of filters, which
eliminate non-persistent files from consideration, and a novel similarity
metric, which measures how similar a file's versions are to each other.
Together, these two mechanisms enable SAIC to identify all 72 configuration
files out of 2363 versioned files from 6 common applications in two user
traces, while mistaking only 33 non-configuration files for configuration files,
which allows a versioning file system to eliminate roughly 66% of
non-configuration file versions from its logs, thus reducing the number of file
versions that a user must try to recover from a misconfiguration.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 18:58:44 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Huang",
"Zhen",
""
],
[
"Lie",
"David",
""
]
] |
new_dataset
| 0.988061 |
1711.03433
|
Cynthia Kop
|
Cynthia Kop, Kristoffer Rose
|
h: A Plank for Higher-order Attribute Contraction Schemes
|
workshop proceedings for HOR 2016
| null | null | null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present and formalize h, a core (or "plank") calculus that can serve as
the foundation for several compiler specification languages, notably CRSX
(Combinatory Reductions Systems with eXtensions), HACS (Higher-order Attribute
Contraction Schemes), and TransScript. We discuss how the h typing and
formation rules introduce the necessary restrictions to ensure that rewriting
is well-defined, even in the presence of h's powerful extensions for
manipulating free variables and environments as first class elements (including
in pattern matching).
|
[
{
"version": "v1",
"created": "Thu, 9 Nov 2017 15:52:18 GMT"
}
] | 2017-11-10T00:00:00 |
[
[
"Kop",
"Cynthia",
""
],
[
"Rose",
"Kristoffer",
""
]
] |
new_dataset
| 0.98528 |
1606.05621
|
P Balasubramanian
|
P Balasubramanian
|
Design of Synchronous Section-Carry Based Carry Lookahead Adders with
Improved Figure of Merit
|
arXiv admin note: text overlap with arXiv:1603.07961
|
WSEAS Transactions on Circuits and Systems, vol. 15, Article #18,
pp. 155-164, 2016
| null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The section-carry based carry lookahead adder (SCBCLA) architecture was
proposed as an efficient alternative to the conventional carry lookahead adder
(CCLA) architecture for the physical implementation of computer arithmetic. In
previous related works, self-timed SCBCLA architectures and synchronous SCBCLA
architectures were realized using standard cells and FPGAs. In this work, we
deal with improved realizations of synchronous SCBCLA architectures designed in
a semi-custom fashion using standard cells. The improvement is quantified in
terms of a figure of merit (FOM), where the FOM is defined as the inverse
product of power, delay and area. Since power, delay and area of digital
designs are desirable to be minimized, the FOM is desirable to be maximized.
Starting from an efficient conventional carry lookahead generator, we show how
an optimized section-carry based carry lookahead generator is realized. In
comparison with our recent work dealing with standard cells based
implementation of SCBCLAs to perform 32-bit addition of two binary operands, we
show in this work that with improved section-carry based carry lookahead
generators, the resulting SCBCLAs exhibit significant improvements in FOM.
Compared to the earlier optimized hybrid SCBCLA, the proposed optimized hybrid
SCBCLA improves the FOM by 88.3%. Even the optimized hybrid CCLA features
improvement in FOM by 77.3% over the earlier optimized hybrid CCLA. However,
the proposed optimized hybrid SCBCLA is still the winner and has a better FOM
than the currently optimized hybrid CCLA by 15.3%. All the CCLAs and SCBCLAs
are implemented to realize 32-bit dual-operand binary addition using a 32/28nm
CMOS process.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2016 18:54:48 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Nov 2017 06:44:29 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Balasubramanian",
"P",
""
]
] |
new_dataset
| 0.971712 |
1608.01031
|
Eugene Goltsman
|
Jarrod A. Chapman, Isaac Y. Ho, Eugene Goltsman, Daniel S. Rokhsar
|
Meraculous2: fast accurate short-read assembly of large polymorphic
genomes
|
Supplementary notes included with the manuscript
| null | null | null |
cs.DS q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Meraculous2, an update to the Meraculous short-read assembler that
includes (1) handling of allelic variation using "bubble" structures within the
de Bruijn graph, (2) improved gap closing, and (3) an improved scaffolding
algorithm that produces more complete assemblies without compromising
scaffolding accuracy. The speed and bandwidth efficiency of the new parallel
implementation have also been substantially improved, allowing the assembly of
a human genome to be accomplished in 24 hours on the JGI/NERSC Genepool system.
To highlight the features of Meraculous2 we present here the assembly of the
diploid human genome NA12878, and compare it with previously published
assemblies of the same data using other algorithms. The Meraculous2 assemblies
are shown to have better completeness, contiguity, and accuracy than other
published assemblies for these data. Practical considerations including
pre-assembly analyses of polymorphism and repetitiveness are described.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2016 23:49:21 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Nov 2017 19:59:13 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Chapman",
"Jarrod A.",
""
],
[
"Ho",
"Isaac Y.",
""
],
[
"Goltsman",
"Eugene",
""
],
[
"Rokhsar",
"Daniel S.",
""
]
] |
new_dataset
| 0.998912 |
1609.01045
|
Md Noor-A-Rahim
|
Md. Noor-A-Rahim, MD Nashid Anjum and Guan Yong Liang
|
Two-Time-Slot Bidirectional Relaying in Molecular Communication
|
We found a mistake. We will upload a new version, once we resolve the
issue
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the bidirectional/two-way relaying of molecular
communication and propose a relaying scheme with two time slots. Compared to
the four-time-slot and three-time-slot schemes, the proposed two-time-slot
scheme significantly improves the throughput by allowing the end
nodes to transmit simultaneously at the very first time slot. In contrast to
the existing techniques, the proposed scheme employs a homogeneous molecular
communication for bidirectional relaying where all the nodes (i.e., end nodes
and relay node) are allowed to operate on the same type of molecule instead of
utilizing different types of molecules for different nodes. As a result, this
homogeneous molecular relaying scheme remarkably improves the resource
reuse capability. This paper generically characterizes the transmission and
detection strategies of the proposed scheme. Moreover, we derive the analytical
bit error probabilities for the multiple access and broadcast phases and
present the end-to-end bit error probability of the proposed scheme. It is
noteworthy that we take into account the effect of molecular interference in
the theoretical derivations. Extensive simulations are carried out, and the
results are shown to match the derived theoretical analysis very well.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2016 07:57:22 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Nov 2017 09:33:48 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Noor-A-Rahim",
"Md.",
""
],
[
"Anjum",
"MD Nashid",
""
],
[
"Liang",
"Guan Yong",
""
]
] |
new_dataset
| 0.994219 |
1702.03449
|
Christoph Studer
|
Oscar Casta\~neda, Sven Jacobsson, Giuseppe Durisi, Mikael Coldrey,
Tom Goldstein, Christoph Studer
|
1-bit Massive MU-MIMO Precoding in VLSI
|
15 pages, 6 figures, 2 tables; to appear in the IEEE Journal on
Emerging and Selected Topics in Circuits and Systems
| null | null | null |
cs.IT cs.AR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive multiuser (MU) multiple-input multiple-output (MIMO) will be a core
technology in fifth-generation (5G) wireless systems as it offers significant
improvements in spectral efficiency compared to existing multi-antenna
technologies. The presence of hundreds of antenna elements at the base station
(BS), however, results in excessively high hardware costs and power
consumption, and requires high interconnect throughput between the
baseband-processing unit and the radio unit. Massive MU-MIMO that uses
low-resolution analog-to-digital and digital-to-analog converters (DACs) has
the potential to address all these issues. In this paper, we focus on downlink
precoding for massive MU-MIMO systems with 1-bit DACs at the BS. The objective
is to design precoders that simultaneously mitigate multi-user interference
(MUI) and quantization artifacts. We propose two nonlinear 1-bit precoding
algorithms and corresponding very-large scale integration (VLSI) designs. Our
algorithms rely on biconvex relaxation, which enables the design of efficient
1-bit precoding algorithms that achieve superior error-rate performance
compared to that of linear precoding algorithms followed by quantization. To
showcase the efficacy of our algorithms, we design VLSI architectures that
enable efficient 1-bit precoding for massive MU-MIMO systems in which hundreds
of antennas serve tens of user equipments. We present corresponding
field-programmable gate array (FPGA) implementations to demonstrate that 1-bit
precoding enables reliable and high-rate downlink data transmission in
practical systems.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2017 19:36:49 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Nov 2017 20:24:12 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Castañeda",
"Oscar",
""
],
[
"Jacobsson",
"Sven",
""
],
[
"Durisi",
"Giuseppe",
""
],
[
"Coldrey",
"Mikael",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Studer",
"Christoph",
""
]
] |
new_dataset
| 0.991204 |
1708.02380
|
Venkatesh-Prasad Ranganath
|
Joydeep Mitra, Venkatesh-Prasad Ranganath
|
Ghera: A Repository of Android App Vulnerability Benchmarks
|
10 pages. Accepted at PROMISE'17
| null |
10.1145/3127005.3127010
| null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security of mobile apps affects the security of their users. This has fueled
the development of techniques to automatically detect vulnerabilities in mobile
apps and help developers secure their apps; specifically, in the context of
the Android platform due to its openness and ubiquity. Despite a
slew of research efforts in this space, there is no comprehensive repository of
up-to-date and lean benchmarks that contain most of the known Android app
vulnerabilities and, consequently, can be used to rigorously evaluate both
existing and new vulnerability detection techniques and help developers learn
about Android app vulnerabilities. In this paper, we describe Ghera, an open
source repository of benchmarks that capture 25 known vulnerabilities in
Android apps (as pairs of exploited/benign and exploiting/malicious apps). We
also present desirable characteristics of vulnerability benchmarks and
repositories that we uncovered while creating Ghera.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2017 06:29:02 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Mitra",
"Joydeep",
""
],
[
"Ranganath",
"Venkatesh-Prasad",
""
]
] |
new_dataset
| 0.999822 |
1709.01054
|
Dylan Hutchison
|
Dylan Hutchison
|
Distributed Triangle Counting in the Graphulo Matrix Math Library
|
Honorable mention in the 2017 IEEE HPEC's Graph Challenge
| null |
10.1109/HPEC.2017.8091041
| null |
cs.DC cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Triangle counting is a key algorithm for large graph analysis. The Graphulo
library provides a framework for implementing graph algorithms on the Apache
Accumulo distributed database. In this work we adapt two algorithms for
counting triangles, one that uses the adjacency matrix and another that also
uses the incidence matrix, to the Graphulo library for server-side processing
inside Accumulo. Cloud-based experiments show a similar performance profile for
these different approaches on the family of power-law Graph500 graphs, for
which data skew increasingly becomes the bottleneck. These results motivate the design of
skew-aware hybrid algorithms that we propose for future work.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2017 06:03:31 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2017 04:37:43 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Hutchison",
"Dylan",
""
]
] |
new_dataset
| 0.99906 |
1710.06785
|
Ramviyas Parasuraman
|
Ramviyas Parasuraman, Sergio Caccamo, Fredrik B{\aa}berg, Petter
\"Ogren and Mark Neerincx
|
A New UGV Teleoperation Interface for Improved Awareness of Network
Connectivity and Physical Surroundings
|
Accepted for publication in the Journal of Human-Robot Interaction
(JHRI)
| null | null | null |
cs.RO cs.HC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A reliable wireless connection between the operator and the teleoperated
Unmanned Ground Vehicle (UGV) is critical in many Urban Search and Rescue
(USAR) missions. Unfortunately, as was seen in e.g. the Fukushima disaster, the
networks available in areas where USAR missions take place are often severely
limited in range and coverage. Therefore, during mission execution, the
operator needs to keep track of not only the physical parts of the mission,
such as navigating through an area or searching for victims, but also the
variations in network connectivity across the environment. In this paper, we
propose and evaluate a new teleoperation User Interface (UI) that includes a
way of estimating the Direction of Arrival (DoA) of the Radio Signal Strength
(RSS) and integrating the DoA information in the interface. The evaluation
shows that using the interface results in more objects found and fewer aborted
missions due to connectivity problems, as compared to a standard interface. The
proposed interface is an extension to an existing interface centered around the
video stream captured by the UGV. But instead of just showing the network
signal strength in terms of percent and a set of bars, the additional
information of DoA is added in terms of a color bar surrounding the video feed.
With this information, the operator knows what movement directions are safe,
even when moving in regions close to the connectivity threshold.
|
[
{
"version": "v1",
"created": "Wed, 4 Oct 2017 21:39:53 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Oct 2017 17:50:45 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Nov 2017 08:57:19 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Nov 2017 19:34:45 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Parasuraman",
"Ramviyas",
""
],
[
"Caccamo",
"Sergio",
""
],
[
"Båberg",
"Fredrik",
""
],
[
"Ögren",
"Petter",
""
],
[
"Neerincx",
"Mark",
""
]
] |
new_dataset
| 0.991199 |
1710.09635
|
Quang-Trung Ta
|
Quang-Trung Ta, Ton Chanh Le, Siau-Cheng Khoo, Wei-Ngan Chin
|
Automated Lemma Synthesis in Symbolic-Heap Separation Logic
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
The symbolic-heap fragment of separation logic has been actively developed
and advocated for verifying the memory-safety property of computer programs. At
present, one of its biggest challenges is to effectively prove entailments
containing inductive heap predicates. These entailments are usually proof
obligations generated when verifying programs that manipulate complex data
structures like linked lists, trees, or graphs.
To assist in proving such entailments, this paper introduces a lemma
synthesis framework, which automatically discovers lemmas to serve as eureka
steps in the proofs. Mathematical induction and template-based constraint
solving are two pillars of our framework. To derive the supporting lemmas for a
given entailment, the framework firstly identifies possible lemma templates
from the entailment's heap structure. It then sets up unknown relations among
each template's variables and conducts structural induction proof to generate
constraints about these relations. Finally, it solves the constraints to find
out actual definitions of the unknown relations, thus discovers the lemmas. We
have integrated this framework into a prototype prover and have experimented it
on various entailment benchmarks. The experimental results show that our
lemma-synthesis-assisted prover can prove many entailments that could not be
handled by existing techniques. This new proposal opens up more opportunities
to automatically reason with complex inductive heap predicates.
|
[
{
"version": "v1",
"created": "Thu, 26 Oct 2017 10:49:29 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Nov 2017 15:41:02 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Nov 2017 07:01:50 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Ta",
"Quang-Trung",
""
],
[
"Le",
"Ton Chanh",
""
],
[
"Khoo",
"Siau-Cheng",
""
],
[
"Chin",
"Wei-Ngan",
""
]
] |
new_dataset
| 0.987329 |
1711.02068
|
Sandeep Vidyapu
|
Vidyapu Sandeep, V Vijaya Saradhi, Samit Bhattacharya
|
From Multimodal to Unimodal Webpages for Developing Countries
|
Presented at NIPS 2017 Workshop on Machine Learning for the
Developing World
| null | null | null |
cs.HC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multimodal web elements such as text and images are associated with
inherent memory costs to store and transfer over the Internet. With the limited
network connectivity in developing countries, webpage rendering gets delayed in
the presence of high-memory demanding elements such as images (relative to
text). To overcome this limitation, we propose a Canonical Correlation Analysis
(CCA) based computational approach to replace high-cost modality with an
equivalent low-cost modality. Our model learns a common subspace for low-cost
and high-cost modalities that maximizes the correlation between their visual
features. The obtained common subspace is used for determining the low-cost
(text) element of a given high-cost (image) element for the replacement. We
analyze the cost-saving performance of the proposed approach through an
eye-tracking experiment conducted on real-world webpages. Our approach reduces
the memory cost by at least 83.35% by replacing images with text.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 18:32:59 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Sandeep",
"Vidyapu",
""
],
[
"Saradhi",
"V Vijaya",
""
],
[
"Bhattacharya",
"Samit",
""
]
] |
new_dataset
| 0.975038 |
1711.02712
|
Bart van Merri\"enboer
|
Bart van Merri\"enboer, Alexander B. Wiltschko and Dan Moldovan
|
Tangent: Automatic Differentiation Using Source Code Transformation in
Python
| null | null | null | null |
cs.MS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic differentiation (AD) is an essential primitive for machine learning
programming systems. Tangent is a new library that performs AD using source
code transformation (SCT) in Python. It takes numeric functions written in a
syntactic subset of Python and NumPy as input, and generates new Python
functions which calculate a derivative. This approach to automatic
differentiation is different from existing packages popular in machine
learning, such as TensorFlow and Autograd. Advantages are that Tangent
generates gradient code in Python which is readable by the user, easy to
understand and debug, and has no runtime overhead. Tangent also introduces
abstractions for easily injecting logic into the generated gradient code,
further improving usability.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 20:15:24 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"van Merriënboer",
"Bart",
""
],
[
"Wiltschko",
"Alexander B.",
""
],
[
"Moldovan",
"Dan",
""
]
] |
new_dataset
| 0.991847 |
1711.02757
|
Lin Bai
|
Yecheng Lyu, Lin Bai, and Xinming Huang
|
Real-Time Road Segmentation Using LiDAR Data Processing on an FPGA
|
Under review at ISCAS 2018
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the FPGA design of a convolutional neural network (CNN)
based road segmentation algorithm for real-time processing of LiDAR data. For
autonomous vehicles, it is important to perform road segmentation and obstacle
detection such that the drivable region can be identified for path planning.
Traditional road segmentation algorithms are mainly based on image data from
cameras, which is subjected to the light condition as well as the quality of
road markings. LiDAR sensor can obtain the 3D geometry information of the
vehicle surroundings with very high accuracy. However, it is a computational
challenge to process a large amount of LiDAR data at real-time. In this work, a
convolutional neural network model is proposed and trained to perform semantic
segmentation using the LiDAR sensor data. Furthermore, an efficient hardware
design is implemented on the FPGA that can process each LiDAR scan in 16.9ms,
which is much faster than the previous works. Evaluated using KITTI road
benchmarks, the proposed solution achieves high accuracy of road segmentation.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 22:42:09 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Lyu",
"Yecheng",
""
],
[
"Bai",
"Lin",
""
],
[
"Huang",
"Xinming",
""
]
] |
new_dataset
| 0.998965 |
1711.02824
|
Nour Moustafa
|
Nour Moustafa, Jill Slay
|
RCNF: Real-time Collaborative Network Forensic Scheme for Evidence
Analysis
| null | null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Network forensic techniques help in tracking different types of cyber attack
by monitoring and inspecting network traffic. However, with the high speed and
large sizes of current networks, and the sophisticated strategies of attackers,
in particular mimicking normal behaviour and/or erasing traces to avoid
detection, investigating such crimes demands intelligent network forensic
techniques. This paper suggests a real-time collaborative network Forensic
scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme
includes three components of capturing and storing network data, selecting
important network features using the chi-square method and investigating abnormal
events using a new technique called correntropy-variation. We provide a case
study using the UNSW-NB15 dataset for evaluating the scheme, showing its high
performance in terms of accuracy and false alarm rate compared with three
recent state-of-the-art mechanisms.
|
[
{
"version": "v1",
"created": "Wed, 8 Nov 2017 04:26:50 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Moustafa",
"Nour",
""
],
[
"Slay",
"Jill",
""
]
] |
new_dataset
| 0.985208 |
1711.03066
|
Leonid Boytsov
|
Leonid Boytsov
|
A Simple Derivation of the Heap's Law from the Generalized Zipf's Law
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
I reproduce a rather simple formal derivation of the Heaps' law from the
generalized Zipf's law, which I previously published in Russian.
|
[
{
"version": "v1",
"created": "Wed, 8 Nov 2017 17:45:46 GMT"
}
] | 2017-11-09T00:00:00 |
[
[
"Boytsov",
"Leonid",
""
]
] |
new_dataset
| 0.97906 |
1508.02517
|
Herman Haverkort
|
Arie Bos and Herman Haverkort
|
Hyperorthogonal well-folded Hilbert curves
|
Manuscript submitted to Journal of Computational Geometry. An
abstract appeared in the 31st Int Symp on Computational Geometry (SoCG 2015),
LIPIcs 34:812-826
|
Journal of Computational Geometry 7(2):145-190 (2016)
| null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
R-trees can be used to store and query sets of point data in two or more
dimensions. An easy way to construct and maintain R-trees for two-dimensional
points, due to Kamel and Faloutsos, is to keep the points in the order in which
they appear along the Hilbert curve. The R-tree will then store bounding boxes
of points along contiguous sections of the curve, and the efficiency of the
R-tree depends on the size of the bounding boxes---smaller is better. Since
there are many different ways to generalize the Hilbert curve to higher
dimensions, this raises the question which generalization results in the
smallest bounding boxes. Familiar methods, such as the one by Butz, can result
in curve sections whose bounding boxes are a factor $\Omega(2^{d/2})$ larger
than the volume traversed by that section of the curve. Most of the volume
bounded by such bounding boxes would not contain any data points. In this paper
we present a new way of generalizing Hilbert's curve to higher dimensions,
which results in much tighter bounding boxes: they have at most 4 times the
volume of the part of the curve covered, independent of the number of
dimensions. Moreover, we prove that a factor 4 is asymptotically optimal.
|
[
{
"version": "v1",
"created": "Tue, 11 Aug 2015 08:33:54 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Bos",
"Arie",
""
],
[
"Haverkort",
"Herman",
""
]
] |
new_dataset
| 0.998351 |
1607.00145
|
Paul Springer
|
Paul Springer and Paolo Bientinesi
|
Design of a high-performance GEMM-like Tensor-Tensor Multiplication
| null | null | null | null |
cs.MS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present "GEMM-like Tensor-Tensor multiplication" (GETT), a novel approach
to tensor contractions that mirrors the design of a high-performance general
matrix-matrix multiplication (GEMM). The critical insight behind GETT is the
identification of three index sets, involved in the tensor contraction, which
enable us to systematically reduce an arbitrary tensor contraction to loops
around a highly tuned "macro-kernel". This macro-kernel operates on suitably
prepared ("packed") sub-tensors that reside in a specified level of the cache
hierarchy. In contrast to previous approaches to tensor contractions, GETT
exhibits desirable features such as unit-stride memory accesses,
cache-awareness, as well as full vectorization, without requiring auxiliary
memory. To compare our technique with other modern tensor contractions, we
integrate GETT alongside the so called Transpose-Transpose-GEMM-Transpose and
Loops-over-GEMM approaches into an open source "Tensor Contraction Code
Generator" (TCCG). The performance results for a wide range of tensor
contractions suggest that GETT has the potential of becoming the method of
choice: While GETT exhibits excellent performance across the board, its
effectiveness for bandwidth-bound tensor contractions is especially impressive,
outperforming existing approaches by up to $12.4\times$. More precisely, GETT
achieves speedups of up to $1.41\times$ over an equivalent-sized GEMM for
bandwidth-bound tensor contractions while attaining up to $91.3\%$ of peak
floating-point performance for compute-bound tensor contractions.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 08:13:50 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2016 07:28:12 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Nov 2017 08:21:02 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Springer",
"Paul",
""
],
[
"Bientinesi",
"Paolo",
""
]
] |
new_dataset
| 0.969583 |
1709.05883
|
George MacCartney Jr
|
George R. MacCartney Jr., Theodore S. Rappaport, Sundeep Rangan
|
Rapid Fading Due to Human Blockage in Pedestrian Crowds at 5G
Millimeter-Wave Frequencies
|
To be published in 2017 IEEE Global Communications Conference
(GLOBECOM), Singapore, Dec. 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapidly fading channels caused by pedestrians in dense urban environments
will have a significant impact on millimeter-wave (mmWave) communications
systems that employ electrically-steerable and narrow beamwidth antenna arrays.
A peer-to-peer (P2P) measurement campaign was conducted with 7-degree,
15-degree, and 60-degree half-power beamwidth (HPBW) antenna pairs at 73.5 GHz
and with 1 GHz of RF null-to-null bandwidth in a heavily populated open square
scenario in Brooklyn, New York, to study blockage events caused by typical
pedestrian traffic. Antenna beamwidths that range approximately an order of
magnitude were selected to gain knowledge of fading events for antennas with
different beamwidths since antenna patterns for mmWave systems will be
electronically-adjustable. Two simple modeling approaches in the literature are
introduced to characterize the blockage events by either a two-state Markov
model or a four-state piecewise linear modeling approach. Transition
probability rates are determined from the measurements and it is shown that
average fade durations with a -5 dB threshold are 299.0 ms for 7-degree HPBW
antennas and 260.2 ms for 60-degree HPBW antennas. The four-state piecewise
linear modeling approach shows that signal strength decay and rise times are
asymmetric for blockage events and that mean signal attenuations (average fade
depths) are inversely proportional to antenna HPBW, where 7-degree and
60-degree HPBW antennas resulted in mean signal fades of 15.8 dB and 11.5 dB,
respectively. The models presented herein are valuable for extending
statistical channel models at mmWave to accurately simulate real-world
pedestrian blockage events when designing fifth-generation (5G) wireless
systems.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2017 12:06:53 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Nov 2017 16:55:47 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"MacCartney",
"George R.",
"Jr."
],
[
"Rappaport",
"Theodore S.",
""
],
[
"Rangan",
"Sundeep",
""
]
] |
new_dataset
| 0.998568 |
1711.02181
|
Gregory Rehm
|
Gregory B Rehm, Michael Thompson, Brad Busenius, Jennifer Fowler
|
Mobile Encryption Gateway (MEG) for Email Encryption
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Email cryptography applications often suffer from major problems that prevent
their widespread implementation. MEG, or the Mobile Encryption Gateway aims to
fix the issues associated with email encryption by ensuring that encryption is
easy to perform while still maintaining data security. MEG performs automatic
decryption and encryption of all emails using PGP. Users do not need to
understand the internal workings of the encryption process to use the
application. MEG is meant to be email-client-agnostic, enabling users to employ
virtually any email service to send messages. Encryption actions are performed
on the user's mobile device, which means their keys and data remain personal.
MEG can also tackle network effect problems by inviting non-users to join. Most
importantly, MEG uses end-to-end encryption, which ensures that all aspects of
the encrypted information remain private. As a result, we are hopeful that MEG
will finally solve the problem of practical email encryption.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 21:32:50 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Rehm",
"Gregory B",
""
],
[
"Thompson",
"Michael",
""
],
[
"Busenius",
"Brad",
""
],
[
"Fowler",
"Jennifer",
""
]
] |
new_dataset
| 0.999419 |
1711.02201
|
Alireza Partovi
|
Alireza Partovi, Rafael Rodrigues da Silva, Hai Lin
|
Reactive Integrated Mission and Motion planning
|
ACC 2018 Conference
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correct-by-construction manipulation planning in a dynamic environment, where
other agents can manipulate objects in the workspace, is a challenging problem.
The tight coupling of actions and motions between agents and complexity of
mission specifications makes the problem computationally intractable.
This paper presents a reactive integrated mission and motion planning for
mobile-robot manipulator systems operating in a partially known environment. We
introduce a multi-layered synergistic framework that receives high-level
mission specifications expressed in linear temporal logic and generates
dynamically-feasible and collision-free motion trajectories to achieve it. In
the high-level layer, a mission planner constructs a symbolic two-player game
between the robots and their environment to synthesize a strategy that adapts to
changes in the workspace imposed by other robots. A bilateral synergistic layer
is developed to map the designed mission plan to an integrated task and motion
planner, constructing a set of robot tasks to move the objects according to the
mission strategy. In the low-level planning stage, verifiable motion
controllers are designed that can be incrementally composed to guarantee a safe
motion planning for each high-level induced task. The proposed framework is
illustrated with a multi-robot warehouse example with the mission of moving
objects to various locations.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 22:33:59 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Partovi",
"Alireza",
""
],
[
"da Silva",
"Rafael Rodrigues",
""
],
[
"Lin",
"Hai",
""
]
] |
new_dataset
| 0.99414 |
1711.02396
|
Mohit Jain
|
Mohit Jain, Minesh Mathew and C.V. Jawahar
|
Unconstrained Scene Text and Video Text Recognition for Arabic Script
|
5 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building robust recognizers for Arabic has always been challenging. We
demonstrate the effectiveness of an end-to-end trainable CNN-RNN hybrid
architecture in recognizing Arabic text in videos and natural scenes. We
outperform previous state-of-the-art on two publicly available video text
datasets - ALIF and ACTIV. For the scene text recognition task, we introduce a
new Arabic scene text dataset and establish baseline results. For scripts like
Arabic, a major challenge in developing robust recognizers is the lack of large
quantity of annotated data. We overcome this by synthesising millions of Arabic
text images from a large vocabulary of Arabic words and phrases. Our
implementation is built on top of the model introduced here [37] which is
proven quite effective for English scene text recognition. The model follows a
segmentation-free, sequence to sequence transcription approach. The network
transcribes a sequence of convolutional features from the input image to a
sequence of target labels. This does away with the need for segmenting input
image into constituent characters/glyphs, which is often difficult for Arabic
script. Further, the ability of RNNs to model contextual dependencies yields
superior recognition results.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 11:07:48 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Jain",
"Mohit",
""
],
[
"Mathew",
"Minesh",
""
],
[
"Jawahar",
"C. V.",
""
]
] |
new_dataset
| 0.999878 |
1711.02413
|
Chaoyun Zhang
|
Chaoyun Zhang, Xi Ouyang, Paul Patras
|
ZipNet-GAN: Inferring Fine-grained Mobile Traffic Patterns via a
Generative Adversarial Neural Network
|
To appear ACM CoNEXT 2017
| null | null | null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale mobile traffic analytics is becoming essential to digital
infrastructure provisioning, public transportation, events planning, and other
domains. Monitoring city-wide mobile traffic is however a complex and costly
process that relies on dedicated probes. Some of these probes have limited
precision or coverage, others gather tens of gigabytes of logs daily, which
independently offer limited insights. Extracting fine-grained patterns involves
expensive spatial aggregation of measurements, storage, and post-processing. In
this paper, we propose a mobile traffic super-resolution technique that
overcomes these problems by inferring narrowly localised traffic consumption
from coarse measurements. We draw inspiration from image processing and design
a deep-learning architecture tailored to mobile networking, which combines
Zipper Network (ZipNet) and Generative Adversarial neural Network (GAN) models.
This makes it possible to uniquely capture spatio-temporal relations between traffic
volume snapshots routinely monitored over broad coverage areas
(`low-resolution') and the corresponding consumption at 0.05 km $^2$ level
(`high-resolution') usually obtained after intensive computation. Experiments
we conduct with a real-world data set demonstrate that the proposed
ZipNet(-GAN) infers traffic consumption with remarkable accuracy and up to
100$\times$ higher granularity as compared to standard probing, while
outperforming existing data interpolation techniques. To our knowledge, this is
the first time super-resolution concepts are applied to large-scale mobile
traffic analysis and our solution is the first to infer fine-grained urban
traffic patterns from coarse aggregates.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 11:38:11 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Zhang",
"Chaoyun",
""
],
[
"Ouyang",
"Xi",
""
],
[
"Patras",
"Paul",
""
]
] |
new_dataset
| 0.995357 |
1711.02427
|
Carlos Eduardo Cancino-Chac\'on
|
Carlos Cancino-Chac\'on and Martin Bonev and Amaury Durand and Maarten
Grachten and Andreas Arzt and Laura Bishop and Werner Goebl and Gerhard
Widmer
|
The ACCompanion v0.1: An Expressive Accompaniment System
|
Presented at the Late-Breaking Demo Session of the 18th International
Society for Music Information Retrieval Conference (ISMIR 2017), Suzhou,
China, 2017
| null | null | null |
cs.SD cs.HC eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present a preliminary version of the ACCompanion, an
expressive accompaniment system for MIDI input. The system uses a probabilistic
monophonic score follower to track the position of the soloist in the score,
and a linear Gaussian model to compute tempo updates. The expressiveness of the
system is powered by the Basis-Mixer, a state-of-the-art computational model of
expressive music performance. The system allows for expressive dynamics, timing
and articulation.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 12:13:30 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Cancino-Chacón",
"Carlos",
""
],
[
"Bonev",
"Martin",
""
],
[
"Durand",
"Amaury",
""
],
[
"Grachten",
"Maarten",
""
],
[
"Arzt",
"Andreas",
""
],
[
"Bishop",
"Laura",
""
],
[
"Goebl",
"Werner",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
new_dataset
| 0.998778 |
1711.02450
|
Christian Sch\"uller
|
Christian Sch\"uller, Roi Poranne, Olga Sorkine-Hornung
|
Shape Representation by Zippable Ribbons
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shape fabrication from developable parts is the basis for arts such as
papercraft and needlework, as well as modern architecture and CAD in general,
and it has inspired much research. We observe that the assembly of complex 3D
shapes created by existing methods often requires first fabricating many small
flat parts and then carefully following instructions to assemble them together.
Despite its significance, this error-prone and tedious process is generally
neglected in the discussion. We propose an approach for shape representation
through a single developable part that attaches to itself and requires no
assembly instructions. Our inspiration comes from the so-called zipit bags,
which are made of a single, long ribbon with a zipper around its boundary. In
order to "assemble" the bag, one simply needs to zip up the ribbon. Our method
operates in the same fashion, but it can be used to approximate any shape.
Given a 3D model, our algorithm produces plans for a single 2D shape that can
be laser cut in few parts from flat fabric or paper. We can then attach a
zipper along the boundary for quick assembly and disassembly, or apply more
traditional approaches, such as gluing and stitching. We show physical and
virtual results that demonstrate the capabilities of our method and the ease
with which shapes can be assembled.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 13:09:18 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Schüller",
"Christian",
""
],
[
"Poranne",
"Roi",
""
],
[
"Sorkine-Hornung",
"Olga",
""
]
] |
new_dataset
| 0.996716 |
1711.02483
|
Lin Xiang
|
Lin Xiang, Derrick Wing Kwan Ng, Robert Schober, and Vincent W.S. Wong
|
Cache-Enabled Physical Layer Security for Video Streaming in
Backhaul-Limited Cellular Networks
|
Accepted for publication in IEEE Trans. Wireless Commun.; 17 pages, 5
figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel wireless caching scheme to enhance the
physical layer security of video streaming in cellular networks with limited
backhaul capacity. By proactively sharing video data across a subset of base
stations (BSs) through both caching and backhaul loading, secure cooperative
joint transmission of several BSs can be dynamically enabled in accordance with
the cache status, the channel conditions, and the backhaul capacity. Assuming
imperfect channel state information (CSI) at the transmitters, we formulate a
two-stage non-convex mixed-integer robust optimization problem for minimizing
the total transmit power while providing quality of service (QoS) and
guaranteeing communication secrecy during video delivery, where the caching and
the cooperative transmission policy are optimized in an offline video caching
stage and an online video delivery stage, respectively. Although the formulated
optimization problem turns out to be NP-hard, low-complexity polynomial-time
algorithms, whose solutions are globally optimal under certain conditions, are
proposed for cache training and video delivery control. Caching is shown to be
beneficial as it reduces the data sharing overhead imposed on the
capacity-constrained backhaul links, introduces additional secure degrees of
freedom, and enables a power-efficient communication system design. Simulation
results confirm that the proposed caching scheme achieves simultaneously a low
secrecy outage probability and a high power efficiency. Furthermore, due to the
proposed robust optimization, the performance loss caused by imperfect CSI
knowledge can be significantly reduced when the cache capacity becomes large.
|
[
{
"version": "v1",
"created": "Tue, 7 Nov 2017 14:27:06 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Xiang",
"Lin",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Schober",
"Robert",
""
],
[
"Wong",
"Vincent W. S.",
""
]
] |
new_dataset
| 0.993349 |
1711.02508
|
Joan Sola
|
Joan Sol\`a
|
Quaternion kinematics for the error-state Kalman filter
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article is an exhaustive revision of concepts and formulas related to
quaternions and rotations in 3D space, and their proper use in estimation
engines such as the error-state Kalman filter.
The paper includes an in-depth study of the rotation group and its Lie
structure, with formulations using both quaternions and rotation matrices. It
pays special attention to the definition of rotation perturbations,
derivatives, and integrals. It provides numerous intuitions and geometrical
interpretations to help the reader grasp the inner mechanisms of 3D rotation.
The whole material is used to devise precise formulations for error-state
Kalman filters suited for real applications using integration of signals from
an inertial measurement unit (IMU).
|
[
{
"version": "v1",
"created": "Fri, 3 Nov 2017 11:53:43 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Solà",
"Joan",
""
]
] |
new_dataset
| 0.955427 |
1711.02510
|
Juan Quiroz
|
Juan C. Quiroz, Norman Mariun, Mohammad Rezazadeh Mehrjou, Mahdi
Izadi, Norhisam Misron, Mohd Amran Mohd Radzi
|
Fault Detection of Broken Rotor Bar in LS-PMSM Using Random Forests
|
Elsevier Measurement
| null |
10.1016/j.measurement.2017.11.004
| null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper proposes a new approach to diagnose broken rotor bar failure in a
line start-permanent magnet synchronous motor (LS-PMSM) using random forests.
The transient current signal during the motor startup was acquired from a
healthy motor and a faulty motor with a broken rotor bar fault. We extracted 13
statistical time domain features from the startup transient current signal, and
used these features to train and test a random forest to determine whether the
motor was operating under normal or faulty conditions. For feature selection,
we used the feature importances from the random forest to reduce the number of
features to two features. The results showed that the random forest classifies
the motor condition as healthy or faulty with an accuracy of 98.8% using all
features and with an accuracy of 98.4% by using only the mean-index and
impulsion features. The performance of the random forest was compared with a
decision tree, Na\"ive Bayes classifier, logistic regression, linear ridge, and
a support vector machine, with the random forest consistently having a higher
accuracy than the other algorithms. The proposed approach can be used in
industry for online monitoring and fault diagnostic of LS-PMSM motors and the
results can be helpful for the establishment of preventive maintenance plans in
factories.
|
[
{
"version": "v1",
"created": "Fri, 3 Nov 2017 19:18:26 GMT"
}
] | 2017-11-08T00:00:00 |
[
[
"Quiroz",
"Juan C.",
""
],
[
"Mariun",
"Norman",
""
],
[
"Mehrjou",
"Mohammad Rezazadeh",
""
],
[
"Izadi",
"Mahdi",
""
],
[
"Misron",
"Norhisam",
""
],
[
"Radzi",
"Mohd Amran Mohd",
""
]
] |
new_dataset
| 0.979856 |
1307.2965
|
Quan Wang
|
Quan Wang, Dijia Wu, Le Lu, Meizhu Liu, Kim L. Boyer and Shaohua Kevin
Zhou
|
Semantic Context Forests for Learning-Based Knee Cartilage Segmentation
in 3D MR Images
|
MICCAI 2013: Workshop on Medical Computer Vision
| null |
10.1007/978-3-319-05530-5_11
| null |
cs.CV cs.LG q-bio.TO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automatic segmentation of human knee cartilage from 3D MR images is a
useful yet challenging task due to the thin sheet structure of the cartilage
with diffuse boundaries and inhomogeneous intensities. In this paper, we
present an iterative multi-class learning method to segment the femoral, tibial
and patellar cartilage simultaneously, which effectively exploits the spatial
contextual constraints between bone and cartilage, and also between different
cartilages. First, based on the fact that the cartilage grows in only certain
area of the corresponding bone surface, we extract the distance features of not
only to the surface of the bone, but more informatively, to the densely
registered anatomical landmarks on the bone surface. Second, we introduce a set
of iterative discriminative classifiers that at each iteration, probability
comparison features are constructed from the class confidence maps derived by
previously learned classifiers. These features automatically embed the semantic
context information between different cartilages of interest. Validated on a
total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the
proposed approach demonstrates high robustness and accuracy of segmentation in
comparison with existing state-of-the-art MR cartilage segmentation methods.
|
[
{
"version": "v1",
"created": "Thu, 11 Jul 2013 03:29:51 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Apr 2014 16:01:12 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Wang",
"Quan",
""
],
[
"Wu",
"Dijia",
""
],
[
"Lu",
"Le",
""
],
[
"Liu",
"Meizhu",
""
],
[
"Boyer",
"Kim L.",
""
],
[
"Zhou",
"Shaohua Kevin",
""
]
] |
new_dataset
| 0.974627 |
1611.03921
|
Ver\'onica Becher
|
Ver\'onica Becher and Olivier Carton and Pablo Ariel Heiber
|
Finite-state independence
| null | null | null | null |
cs.FL cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we introduce a notion of independence based on finite-state
automata: two infinite words are independent if no one helps to compress the
other using one-to-one finite-state transducers with auxiliary input. We prove
that, as expected, the set of independent pairs of infinite words has Lebesgue
measure 1. We show that the join of two independent normal words is normal.
However, the independence of two normal words is not guaranteed if we just
require that their join is normal. To prove this we construct a normal word
$x_1x_2x_3\ldots$ where $x_{2n}=x_n$ for every $n$.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2016 00:34:28 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Nov 2017 05:50:08 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Becher",
"Verónica",
""
],
[
"Carton",
"Olivier",
""
],
[
"Heiber",
"Pablo Ariel",
""
]
] |
new_dataset
| 0.999461 |
1701.05940
|
Curtis Rueden
|
Curtis T. Rueden, Johannes Schindelin, Mark C. Hiner, Barry E.
DeZonia, Alison E. Walter, Ellen T. Arena, Kevin W. Eliceiri
|
ImageJ2: ImageJ for the next generation of scientific image data
| null | null | null | null |
cs.SE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plug-in architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats, to scripting languages, to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs.
|
[
{
"version": "v1",
"created": "Fri, 20 Jan 2017 22:25:27 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2017 14:02:52 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Sep 2017 19:51:59 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Nov 2017 18:14:46 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Rueden",
"Curtis T.",
""
],
[
"Schindelin",
"Johannes",
""
],
[
"Hiner",
"Mark C.",
""
],
[
"DeZonia",
"Barry E.",
""
],
[
"Walter",
"Alison E.",
""
],
[
"Arena",
"Ellen T.",
""
],
[
"Eliceiri",
"Kevin W.",
""
]
] |
new_dataset
| 0.999505 |
1704.08535
|
Chia-Wen Lin
|
Chao Zhou, Chia-Wen Lin, Xinggong Zhang, and Zongming Guo
|
TFDASH: A Fairness, Stability, and Efficiency Aware Rate Control
Approach for Multiple Clients over DASH
|
15 pages
| null | null | null |
cs.MM cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic adaptive streaming over HTTP (DASH) has recently been widely deployed
in the Internet and adopted in the industry. It, however, does not impose any
adaptation logic for selecting the quality of video fragments requested by
clients and suffers from lackluster performance with respect to a number of
desirable properties: efficiency, stability, and fairness when multiple players
compete for a bottleneck link. In this paper, we propose a throughput-friendly
DASH (TFDASH) rate control scheme for video streaming with multiple clients
over DASH to well balance the trade-offs among efficiency, stability, and
fairness. The core idea behind guaranteeing fairness and high efficiency
(bandwidth utilization) is to avoid OFF periods during the downloading process
for all clients, i.e., the bandwidth is in perfect-subscription or
over-subscription with bandwidth utilization approaching 100\%. We also propose
a dual-threshold buffer model to solve the instability problem caused by the
above idea. As a result, by integrating these novel components, we also propose
a probability-driven rate adaption logic taking into account several key
factors that most influence visual quality, including buffer occupancy, video
playback quality, video bit-rate switching frequency and amplitude, to
guarantee high-quality video streaming. Our experiments evidently demonstrate
the superior performance of the proposed method.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2017 12:35:30 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2017 12:05:59 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Zhou",
"Chao",
""
],
[
"Lin",
"Chia-Wen",
""
],
[
"Zhang",
"Xinggong",
""
],
[
"Guo",
"Zongming",
""
]
] |
new_dataset
| 0.994954 |
1708.07029
|
Longguang Wang
|
Longguang Wang, Zaiping Lin, Jinyan Gao, Xinpu Deng, Wei An
|
Fast single image super-resolution based on sigmoid transformation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single image super-resolution aims to generate a high-resolution image from a
single low-resolution image, which is of great significance in extensive
applications. Since the problem is ill-posed, numerous methods have been
proposed to reconstruct the missing image details based on exemplars or priors. In this
paper, we propose a fast and simple single image super-resolution strategy
utilizing patch-wise sigmoid transformation as an imposed sharpening
regularization term in the reconstruction, which realizes amazing
reconstruction performance. Extensive experiments compared with other
state-of-the-art approaches demonstrate the superior effectiveness and
efficiency of the proposed algorithm.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2017 14:46:26 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Oct 2017 14:59:23 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Nov 2017 04:42:11 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Wang",
"Longguang",
""
],
[
"Lin",
"Zaiping",
""
],
[
"Gao",
"Jinyan",
""
],
[
"Deng",
"Xinpu",
""
],
[
"An",
"Wei",
""
]
] |
new_dataset
| 0.996414 |
1710.07817
|
Stefano Buzzi
|
Mario Alonzo and Stefano Buzzi
|
Cell-Free and User-Centric Massive MIMO at Millimeter Wave Frequencies
|
presented at the 28th Annual IEEE International Symposium on
Personal, Indoor and Mobile Radio Communications (IEEE PIMRC 2017), Montreal
(CA), October 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a cell-free (CF) massive MIMO architecture a very large number of
distributed access points (APs) simultaneously and jointly serves a much
smaller number of mobile stations (MSs); a variant of the cell-free technique
is the user-centric (UC) approach, wherein each AP just decodes a reduced set
of MSs, practically the ones that are received best. This paper introduces and
analyzes the CF and UC architectures at millimeter wave (mmWave) frequencies.
First of all, a multiuser clustered channel model is introduced in order to
account for the correlation among the channels of nearby users; then, an uplink
multiuser channel estimation scheme is described along with low-complexity
hybrid analog/digital beamforming architectures. Interestingly, in the proposed
scheme no channel estimation is needed at the MSs, and the beamforming schemes
used at the MSs are channel-independent and have a very simple structure.
Numerical results show that the considered architectures provide good
performance, especially in lightly loaded systems, with the UC approach
outperforming the CF one.
|
[
{
"version": "v1",
"created": "Sat, 21 Oct 2017 15:54:46 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Nov 2017 12:38:41 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Alonzo",
"Mario",
""
],
[
"Buzzi",
"Stefano",
""
]
] |
new_dataset
| 0.997622 |
1710.07819
|
Alberto Paoluzzi
|
Francesco Furiani, Giulio Martella, Alberto Paoluzzi
|
Geometric Computing with Chain Complexes: Design and Features of a Julia
Package
|
Submitted paper
| null | null | null |
cs.CG cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geometric computing with chain complexes allows for the computation of the
whole chain of linear spaces and (co)boundary operators generated by a space
decomposition into a cell complex. The space decomposition is stored and
handled with LAR (Linear Algebraic Representation), i.e. with sparse integer
arrays, and allows for using cells of a very general type, even non-convex and
with internal holes. In this paper we discuss the features and the merits of
this approach, and describe the goals and the implementation of a software
package aiming at providing for simple and efficient computational support of
geometric computing with any kind of meshes, using linear algebra tools with
sparse matrices. The library is being written in Julia, the novel efficient and
parallel language for scientific computing. This software, that is being ported
on hybrid architectures (CPU+GPU) of last generation, is yet under development.
|
[
{
"version": "v1",
"created": "Sat, 21 Oct 2017 16:03:17 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2017 09:00:54 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Furiani",
"Francesco",
""
],
[
"Martella",
"Giulio",
""
],
[
"Paoluzzi",
"Alberto",
""
]
] |
new_dataset
| 0.951807 |
1710.10836
|
Jithin Mathews
|
Jithin Mathews, Priya Mehta, S.V. Kasi Visweswara Rao, Ch. Sobhan Babu
|
An algorithmic approach to handle circular trading in commercial taxing
system
|
10 pages, 7 figures, 1 table, 3 algorithms
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tax manipulation comes in a variety of forms with different motivations and
of varying complexities. In this paper, we deal with a specific technique used
by tax evaders known as circular trading. In particular, we define algorithms
for the detection and analysis of circular trade. To achieve this, we have
modelled the whole system as a directed graph with the actors being vertices
and the transactions among them as directed edges. We illustrate the results
obtained after running the proposed algorithm on the commercial tax dataset of
the government of Telangana, India, which contains the transaction details of a
set of participants involved in a known circular trade.
|
[
{
"version": "v1",
"created": "Mon, 30 Oct 2017 10:00:26 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Nov 2017 09:29:37 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Mathews",
"Jithin",
""
],
[
"Mehta",
"Priya",
""
],
[
"Rao",
"S. V. Kasi Visweswara",
""
],
[
"Babu",
"Ch. Sobhan",
""
]
] |
new_dataset
| 0.993536 |
1711.01416
|
Vasily Pestun
|
Vasily Pestun, John Terilla, Yiannis Vlassopoulos
|
Language as a matrix product state
|
10 pages
| null | null | null |
cs.CL cond-mat.dis-nn cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a statistical model for natural language that begins by
considering language as a monoid, then representing it in complex matrices with
a compatible translation invariant probability measure. We interpret the
probability measure as arising via the Born rule from a translation invariant
matrix product state.
|
[
{
"version": "v1",
"created": "Sat, 4 Nov 2017 09:11:18 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Pestun",
"Vasily",
""
],
[
"Terilla",
"John",
""
],
[
"Vlassopoulos",
"Yiannis",
""
]
] |
new_dataset
| 0.992594 |
1711.01458
|
Jonathan Binas
|
Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
|
DDD17: End-To-End DAVIS Driving Dataset
|
Presented at the ICML 2017 Workshop on Machine Learning for
Autonomous Vehicles
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras, such as dynamic vision sensors (DVS), and dynamic and
active-pixel vision sensors (DAVIS) can supplement other autonomous driving
sensors by providing a concurrent stream of standard active pixel sensor (APS)
images and DVS temporal contrast events. The APS stream is a sequence of
standard grayscale global-shutter image sensor frames. The DVS events represent
brightness changes occurring at a particular moment, with a jitter of about a
millisecond under most lighting conditions. They have a dynamic range of >120
dB and effective frame rates >1 kHz at data rates comparable to 30 fps
(frames/second) image sensors. To overcome some of the limitations of current
image acquisition technology, we investigate in this work the use of the
combined DVS and APS streams in end-to-end driving applications. The dataset
DDD17 accompanying this paper is the first open dataset of annotated DAVIS
driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor
recording highway and city driving in daytime, evening, night, dry and wet
weather conditions, along with vehicle speed, GPS position, driver steering,
throttle, and brake captured from the car's on-board diagnostics interface. As
an example application, we performed a preliminary end-to-end learning study of
using a convolutional neural network that is trained to predict the
instantaneous steering angle from DVS and APS visual data.
|
[
{
"version": "v1",
"created": "Sat, 4 Nov 2017 16:19:56 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Binas",
"Jonathan",
""
],
[
"Neil",
"Daniel",
""
],
[
"Liu",
"Shih-Chii",
""
],
[
"Delbruck",
"Tobi",
""
]
] |
new_dataset
| 0.999712 |
1711.01478
|
Anne Edmundson
|
Anne Edmundson, Paul Schmitt, Nick Feamster, Jennifer Rexford
|
OCDN: Oblivious Content Distribution Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As publishers increasingly use Content Distribution Networks (CDNs) to
distribute content across geographically diverse networks, CDNs themselves are
becoming unwitting targets of requests for both access to user data and content
takedown. From copyright infringement to moderation of online speech, CDNs have
found themselves at the forefront of many recent legal quandaries. At the heart
of the tension, however, is the fact that CDNs have rich information both about
the content they are serving and the users who are requesting that content.
This paper offers a technical contribution that is relevant to this ongoing
tension with the design of an Oblivious CDN (OCDN); the system is both
compatible with the existing Web ecosystem of publishers and clients and hides
from the CDN both the content it is serving and the users who are requesting
that content. OCDN is compatible with the way that publishers currently host
content on CDNs. Using OCDN, publishers can use multiple CDNs to publish
content; clients retrieve content through a peer-to-peer anonymizing network of
proxies. Our prototype implementation and evaluation of OCDN show that the
system can obfuscate both content and clients from the CDN operator while still
delivering content with good performance.
|
[
{
"version": "v1",
"created": "Sat, 4 Nov 2017 19:12:31 GMT"
}
] | 2017-11-07T00:00:00 |
[
[
"Edmundson",
"Anne",
""
],
[
"Schmitt",
"Paul",
""
],
[
"Feamster",
"Nick",
""
],
[
"Rexford",
"Jennifer",
""
]
] |
new_dataset
| 0.999233 |