id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1506.02345
|
Vladimir Saveljev
|
Vladimir Saveljev
|
Wavelets and continuous wavelet transform for autostereoscopic multiview
images
|
4 pages, 10 figures
|
Applied Optics, Vol. 55, Issue 23, pp. 6275-6284 (2016)
|
10.1364/AO.55.006275
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, reference functions for the synthesis and analysis of
autostereoscopic multiview and integral images in three-dimensional displays
were introduced. In the current paper, we propose wavelets to analyze such
images. The wavelets are built on the reference functions as the scaling
functions of the wavelet analysis. The continuous wavelet transform was
successfully applied to test wireframe binary objects. The restored
locations correspond to the structure of the test wireframe binary objects.
|
[
{
"version": "v1",
"created": "Mon, 8 Jun 2015 03:47:17 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Saveljev",
"Vladimir",
""
]
] |
new_dataset
| 0.999377 |
1610.03543
|
Guy Kindler
|
Guy Kindler and Ryan O'Donnell
|
Quantum automata cannot detect biased coins, even in the limit
|
preprint
| null | null | null |
cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aaronson and Drucker (2011) asked whether there exists a quantum finite
automaton that can distinguish fair coin tosses from biased ones by spending
significantly more time in accepting states, on average, given an infinite
sequence of tosses. We answer this question negatively.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2016 21:52:05 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Kindler",
"Guy",
""
],
[
"O`Donnell",
"Ryan",
""
]
] |
new_dataset
| 0.992187 |
1610.03614
|
Xiaodong Zhuang
|
Xiaodong Zhuang, N. E. Mastorakis
|
A Model of Virtual Carrier Immigration in Digital Images for Region
Segmentation
|
11 pages, 17 figures. arXiv admin note: text overlap with
arXiv:1610.02760
|
WSEAS TRANSACTIONS on COMPUTERS, pp. 708-718, Volume 14, 2015
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel model for image segmentation is proposed, which is inspired by the
carrier immigration mechanism in physical P-N junction. The carrier diffusing
and drifting are simulated in the proposed model, which imitates the physical
self-balancing mechanism in P-N junction. The effect of virtual carrier
immigration in digital images is analyzed and studied by experiments on test
images and real world images. The sign distribution of net carrier at the
model's balance state is exploited for region segmentation. The experimental
results for both test images and real-world images demonstrate self-adaptive
and meaningful gathering of pixels to suitable regions, which prove the
effectiveness of the proposed method for image region segmentation.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2016 06:43:34 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Zhuang",
"Xiaodong",
""
],
[
"Mastorakis",
"N. E.",
""
]
] |
new_dataset
| 0.988141 |
1610.03628
|
Carlos Ciller Mr.
|
Stefanos Apostolopoulos, Carlos Ciller, Sandro I. De Zanet, Sebastian
Wolf and Raphael Sznitman
|
RetiNet: Automatic AMD identification in OCT volumetric data
|
14 pages, 10 figures, Code available
| null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical Coherence Tomography (OCT) provides a unique ability to image the eye
retina in 3D at micrometer resolution and gives ophthalmologist the ability to
visualize retinal diseases such as Age-Related Macular Degeneration (AMD).
While visual inspection of OCT volumes remains the main method for AMD
identification, doing so is time consuming as each cross-section within the
volume must be inspected individually by the clinician. In much the same way,
acquiring ground truth information for each cross-section is expensive and time
consuming. This fact heavily limits the ability to acquire large amounts of
ground truth, which subsequently impacts the performance of learning-based
methods geared at automatic pathology identification. To avoid this burden, we
propose a novel strategy for automatic analysis of OCT volumes where only
volume labels are needed. That is, we train a classifier in a semi-supervised
manner to conduct this task. Our approach uses a novel Convolutional Neural
Network (CNN) architecture, that only needs volume-level labels to be trained
to automatically assess whether an OCT volume is healthy or contains AMD. Our
architecture first learns a cross-section pathology classifier using
pseudo-labels that may be corrupted, and then leverages these towards a more
accurate volume-level classification. We then show that our approach provides
excellent performance on a publicly available dataset and outperforms a number
of existing automatic techniques.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2016 07:56:24 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Apostolopoulos",
"Stefanos",
""
],
[
"Ciller",
"Carlos",
""
],
[
"De Zanet",
"Sandro I.",
""
],
[
"Wolf",
"Sebastian",
""
],
[
"Sznitman",
"Raphael",
""
]
] |
new_dataset
| 0.98618 |
1610.03736
|
Mahmoud Ferdosizade Naeiny
|
Somaye Bazin, Mahmoud Ferdosizade Naeiny, Roya Khanzade
|
Burst Transmission Symbol Synchronization in the Presence of Cycle Slip
Arising from Different Clock Frequencies
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In digital communication systems, different clock frequencies at the
transmitter and receiver usually translate into cycle slips. A receiver might
experience a sampling frequency different from the transmitter's due to
manufacturing imperfections, the Doppler effect introduced by the channel, or
wrong estimation of the symbol rate. Timing synchronization in the presence of
cycle slips for a burst sequence of received information leads to severe
degradation in system performance, manifesting as shortening or prolonging of
the bit stream. Therefore, prior detection and elimination of cycle slips is
unavoidable. Accordingly, the main idea introduced in this paper is to employ
the Gardner Detector (GAD) not only to recover a fixed timing offset: its
output is also processed in a way such that timing drifts can be estimated and
corrected. We derive a two-step algorithm that first eliminates the cycle
slips arising from wrong estimation of the symbol rate, and then iteratively
synchronizes the symbol timing of a burst received signal by applying the GAD
in a feed-forward structure, with the additional benefit that the convergence
and stability problems typical of the feedback schemes normally used with the
GAD are avoided. The proposed algorithm is able to compensate considerable
symbol rate offsets at the receiver side. Results in terms of BER confirm the
algorithm's proficiency.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2016 14:55:21 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Bazin",
"Somaye",
""
],
[
"Naeiny",
"Mahmoud Ferdosizade",
""
],
[
"Khanzade",
"Roya",
""
]
] |
new_dataset
| 0.988293 |
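For reference, a minimal sketch of the classical Gardner timing-error detector that the abstract above builds on. It operates on two samples per symbol and needs no carrier phase; this is the textbook detector only, not the paper's two-step feed-forward algorithm.

```python
import numpy as np

def gardner_ted(samples):
    """Classical Gardner timing-error detector at 2 samples/symbol.

    samples: real-valued array ordered [strobe, midpoint, strobe, ...].
    Returns one timing-error estimate per symbol; a consistent bias in
    these estimates indicates a sampling-phase drift.
    """
    errors = []
    for n in range(2, len(samples), 2):
        # (current strobe - previous strobe) * midpoint sample
        errors.append((samples[n] - samples[n - 2]) * samples[n - 1])
    return np.array(errors)
```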
1610.03771
|
Marzieh Saeidi
|
Marzieh Saeidi, Guillaume Bouchard, Maria Liakata, Sebastian Riedel
|
SentiHood: Targeted Aspect Based Sentiment Analysis Dataset for Urban
Neighbourhoods
|
Accepted at COLING 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce the task of targeted aspect-based sentiment
analysis. The goal is to extract fine-grained information with respect to
entities mentioned in user comments. This work extends both aspect-based
sentiment analysis that assumes a single entity per document and targeted
sentiment analysis that assumes a single sentiment towards a target entity. In
particular, we identify the sentiment towards each aspect of one or more
entities. As a testbed for this task, we introduce the SentiHood dataset,
extracted from a question answering (QA) platform where urban neighbourhoods
are discussed by users. In this context, units of text often mention several
aspects of one or more neighbourhoods. This is the first time that a generic
social media platform, in this case a QA platform, is used for fine-grained
opinion mining. Text coming from QA platforms is far less constrained than
text from review-specific platforms, on which current datasets are based. We
develop several strong baselines, relying on logistic regression and
state-of-the-art recurrent neural networks.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2016 16:23:11 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Saeidi",
"Marzieh",
""
],
[
"Bouchard",
"Guillaume",
""
],
[
"Liakata",
"Maria",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
new_dataset
| 0.999801 |
1610.03792
|
Mohammad Mohammadi Amiri Mr.
|
Mohammad Mohammadi Amiri, Qianqian Yang, Deniz G\"und\"uz
|
Decentralized Coded Caching with Distinct Cache Capacities
|
To be presented in ASILOMAR conference, 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decentralized coded caching is studied for a content server with $N$ files,
each of size $F$ bits, serving $K$ active users, each equipped with a cache of
distinct capacity. It is assumed that the users' caches are filled in advance
during the off-peak traffic period without the knowledge of the number of
active users, their identities, or the particular demands. User demands are
revealed during the peak traffic period, and are served simultaneously through
an error-free shared link. A new decentralized coded caching scheme is proposed
for this scenario, and it is shown to improve upon the state-of-the-art in
terms of the required delivery rate over the shared link, when there are more
users in the system than the number of files. Numerical results indicate that
the improvement becomes more significant as the cache capacities of the users
become more skewed.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2016 17:13:50 GMT"
}
] | 2016-10-13T00:00:00 |
[
[
"Amiri",
"Mohammad Mohammadi",
""
],
[
"Yang",
"Qianqian",
""
],
[
"Gündüz",
"Deniz",
""
]
] |
new_dataset
| 0.975888 |
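As a sketch of the decentralized placement phase that the abstract above starts from: each user independently fills its own cache with a random subset of the bits of every file, without coordination. The names are illustrative; the paper's contribution, the coded delivery phase for distinct cache capacities, is not reproduced here.

```python
import random

def decentralized_placement(num_files, file_bits, cache_bits):
    """Each user stores cache_bits/num_files uniformly random bits of every
    file, independently of all other users (no knowledge of who is active)."""
    bits_per_file = cache_bits // num_files
    return {f: set(random.sample(range(file_bits), bits_per_file))
            for f in range(num_files)}

# Users with distinct capacities simply call this with different cache_bits.
caches = [decentralized_placement(num_files=4, file_bits=1000, cache_bits=c)
          for c in (400, 800, 1200)]
```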
1410.8158
|
Haobo Wang
|
Haobo Wang, Tsung-Yi Chen, and Richard D. Wesel
|
Histogram-Based Flash Channel Estimation
|
6 pages, 8 figures, Submitted to the IEEE International
Communications Conference (ICC) 2015
|
IEEE International Conference on Communications (ICC), London,
2015, pp. 283-288
|
10.1109/ICC.2015.7248335
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current generation Flash devices experience significant read-channel
degradation from damage to the oxide layer during program and erase operations.
Information about the read-channel degradation drives advanced signal
processing methods in Flash to mitigate its effect. In this context, channel
estimation must be ongoing since channel degradation evolves over time and as a
function of the number of program/erase (P/E) cycles. This paper proposes a
framework for ongoing model-based channel estimation using limited channel
measurements (reads). This paper uses a channel model characterizing
degradation resulting from retention time and the amount of charge programmed
and erased. For channel histogram measurements, bin selection to achieve
approximately equal-probability bins yields a good approximation to the
original distribution using only ten bins (i.e. nine reads). With the channel
model and binning strategy in place, this paper explores candidate numerical
least squares algorithms and ultimately demonstrates the effectiveness of the
Levenberg-Marquardt algorithm which provides both speed and accuracy.
|
[
{
"version": "v1",
"created": "Wed, 29 Oct 2014 20:46:02 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Wang",
"Haobo",
""
],
[
"Chen",
"Tsung-Yi",
""
],
[
"Wesel",
"Richard D.",
""
]
] |
new_dataset
| 0.997556 |
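A small sketch of the equal-probability binning idea in the abstract above: place the nine read thresholds at the deciles of an assumed cell-voltage distribution, so the ten bins are roughly equally probable. The Gaussian model and its parameters are placeholders, not the paper's channel model; the subsequent model fit could use scipy.optimize.least_squares(method='lm'), which implements the Levenberg-Marquardt algorithm the paper found effective.

```python
import numpy as np
from scipy.stats import norm

def equal_probability_thresholds(quantile_fn, n_bins=10):
    """Thresholds at equal-probability quantiles: n_bins bins need
    n_bins - 1 thresholds (nine reads for ten bins)."""
    probs = np.arange(1, n_bins) / n_bins  # 0.1, 0.2, ..., 0.9
    return np.array([quantile_fn(p) for p in probs])

# Placeholder Gaussian cell-voltage model; parameters are illustrative.
thresholds = equal_probability_thresholds(norm(loc=2.0, scale=0.4).ppf)
```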
1509.00399
|
Erik Steinmetz
|
Erik Steinmetz, Matthias Wildemeersch, Tony Q.S. Quek and Henk
Wymeersch
|
Packet Reception Probabilities in Vehicular Communications Close to
Intersections
| null | null | null | null |
cs.SY cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicular networks allow vehicles to share information and are expected to be
an integral part of future intelligent transportation systems (ITS). In order to
guide and validate the design process, analytical expressions of key
performance metrics such as packet reception probabilities and throughput are
necessary, in particular for accident-prone scenarios such as intersections. In
this paper, we analyze the impact of interference in an intersection scenario
with two perpendicular roads using tools from stochastic geometry. We present a
general procedure to analytically determine the packet reception probability
and throughput of a selected link, taking into account the geographical
clustering of vehicles close to the intersection. We consider both Aloha and
CSMA MAC protocols, and show how the procedure can be used to model different
propagation environments of practical relevance. We show how different path
loss functions and fading distributions can be incorporated in the analysis to
model propagation conditions typical to both rural and urban intersections. Our
results indicate that the procedure is general and flexible enough to deal with a
variety of scenarios. Thus, it can serve as a useful design tool for
communication system engineers, complementing simulations and experiments, to
obtain quick insights into the network performance.
|
[
{
"version": "v1",
"created": "Tue, 1 Sep 2015 17:21:07 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2016 11:30:51 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Steinmetz",
"Erik",
""
],
[
"Wildemeersch",
"Matthias",
""
],
[
"Quek",
"Tony Q. S.",
""
],
[
"Wymeersch",
"Henk",
""
]
] |
new_dataset
| 0.990642 |
1511.00561
|
Alex Kendall
|
Vijay Badrinarayanan and Alex Kendall and Roberto Cipolla
|
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image
Segmentation
| null | null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel and practical deep fully convolutional neural network
architecture for semantic pixel-wise segmentation termed SegNet. This core
trainable segmentation engine consists of an encoder network and a corresponding
decoder network, followed by a pixel-wise classification layer. The architecture
of the encoder network is topologically identical to the 13 convolutional
layers in the VGG16 network. The role of the decoder network is to map the low
resolution encoder feature maps to full input resolution feature maps for
pixel-wise classification. The novelty of SegNet lies in the manner in which
the decoder upsamples its lower resolution input feature map(s). Specifically,
the decoder uses pooling indices computed in the max-pooling step of the
corresponding encoder to perform non-linear upsampling. This eliminates the
need for learning to upsample. The upsampled maps are sparse and are then
convolved with trainable filters to produce dense feature maps. We compare our
proposed architecture with the widely adopted FCN and also with the well known
DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory
versus accuracy trade-off involved in achieving good segmentation performance.
SegNet was primarily motivated by scene understanding applications. Hence, it
is designed to be efficient both in terms of memory and computational time
during inference. It is also significantly smaller in the number of trainable
parameters than other competing architectures. We also performed a controlled
benchmark of SegNet and other architectures on both road scenes and SUN RGB-D
indoor scene segmentation tasks. We show that SegNet provides good performance
with competitive inference time and more efficient inference memory-wise as
compared to other architectures. We also provide a Caffe implementation of
SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
|
[
{
"version": "v1",
"created": "Mon, 2 Nov 2015 15:51:03 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2015 13:56:56 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Oct 2016 21:11:59 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Badrinarayanan",
"Vijay",
""
],
[
"Kendall",
"Alex",
""
],
[
"Cipolla",
"Roberto",
""
]
] |
new_dataset
| 0.979714 |
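The index-based upsampling described in the abstract above maps directly onto standard deep-learning primitives. A minimal PyTorch sketch of the encoder/decoder handshake (illustrative shapes, not the full 13-layer SegNet):

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # encoder side
unpool = nn.MaxUnpool2d(2, stride=2)                    # decoder side

x = torch.randn(1, 64, 32, 32)       # an encoder feature map
pooled, indices = pool(x)            # downsample and record max locations
upsampled = unpool(pooled, indices)  # sparse, non-learned upsampling
# The sparse maps are then densified with trainable convolutions:
dense = nn.Conv2d(64, 64, kernel_size=3, padding=1)(upsampled)
```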
1606.05250
|
Pranav Rajpurkar
|
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
|
SQuAD: 100,000+ Questions for Machine Comprehension of Text
|
To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2016 16:36:00 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2016 03:48:29 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2016 02:42:36 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Rajpurkar",
"Pranav",
""
],
[
"Zhang",
"Jian",
""
],
[
"Lopyrev",
"Konstantin",
""
],
[
"Liang",
"Percy",
""
]
] |
new_dataset
| 0.984311 |
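The released dataset is a single JSON file of articles, paragraphs, and question-answer pairs. A minimal loader sketch; the field names follow the published SQuAD v1.x schema and should be treated as assumptions if the format changes:

```python
import json

def load_squad(path):
    """Flatten a SQuAD v1.x JSON file into (context, question, answer) triples.

    Assumed schema: data -> articles -> paragraphs -> {context, qas};
    each qa carries a question, an id, and answers with text/answer_start.
    """
    with open(path) as f:
        dataset = json.load(f)["data"]
    triples = []
    for article in dataset:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                for answer in qa["answers"]:
                    triples.append((context, qa["question"], answer["text"]))
    return triples
```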
1610.02091
|
Dmitri Strukov B
|
F. Merrikh Bayat, X. Guo, M. Klachko, M. Prezioso, K. K. Likharev, and
D. B. Strukov
|
Sub-1-us, Sub-20-nJ Pattern Classification in a Mixed-Signal Circuit
Based on Embedded 180-nm Floating-Gate Memory Cell Arrays
|
4 pages, 10 figures
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have designed, fabricated, and successfully tested a prototype
mixed-signal, 28x28-binary-input, 10-output, 3-layer neuromorphic network ("MLP
perceptron"). It is based on embedded nonvolatile floating-gate cell arrays
redesigned from a commercial 180-nm NOR flash memory. The arrays allow precise
(~1%) individual tuning of all memory cells, having long-term analog-level
retention and low noise. Each array performs a very fast and energy-efficient
analog vector-by-matrix multiplication, which is the bottleneck for signal
propagation in most neuromorphic networks. All functional components of the
prototype circuit, including 2 synaptic arrays with 101,780 floating-gate
synaptic cells, 74 analog neurons, and the peripheral circuitry for weight
adjustment and I/O operations, have a total area below 1 mm^2. Its testing on
the common MNIST benchmark set (at this stage, with a relatively low weight
import precision) has shown a classification fidelity of 94.65%, close to the
96.2% obtained in simulation. The classification of one pattern takes less than
1 us time and ~20 nJ energy - both numbers much better than for digital
implementations of the same task. Estimates show that this performance may be
further improved using a better neuron design and a more advanced memory
technology, leading to a >10^2 advantage in speed and a >10^4 advantage in
energy efficiency over the state-of-the-art purely digital (GPU and custom)
circuits, at classification of large, complex patterns.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2016 22:50:47 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2016 23:27:06 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Bayat",
"F. Merrikh",
""
],
[
"Guo",
"X.",
""
],
[
"Klachko",
"M.",
""
],
[
"Prezioso",
"M.",
""
],
[
"Likharev",
"K. K.",
""
],
[
"Strukov",
"D. B.",
""
]
] |
new_dataset
| 0.9986 |
1610.03129
|
Aditya Tatu Dr.
|
Aditya Tatu
|
Tangled Splines
|
12 pages, To be sent to a Journal/Conference
| null | null | null |
cs.CV cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extracting shape information from object boundaries is a well-studied
problem in vision, and has found tremendous use in applications like object
recognition. Conversely, studying the space of shapes represented by curves
satisfying certain constraints is also intriguing. In this paper, we model and
analyze the space of shapes represented by a 3D curve (space curve) formed by
connecting n pieces of quarter of a unit circle. Such a space curve is what we
call a Tangle, the name coming from a toy built on the same principle. We
provide two models for the shape space of n-link open and closed tangles, and
we show that tangles are a subset of trigonometric splines of a certain order.
We give algorithms for curve approximation using open/closed tangles, computing
geodesics on these shape spaces, and to find the deformation that takes one
given tangle to another given tangle, i.e., the Log map. The algorithms
provided yield tangles up to a small and acceptable tolerance, as shown by the
results given in the paper.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2016 23:31:18 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Tatu",
"Aditya",
""
]
] |
new_dataset
| 0.998666 |
1610.03176
|
Md. Khaledur Rahman
|
Md. Khaledur Rahman
|
NEDindex: A new metric for community structure in networks
|
In Proceedings of 18th ICCIT, Dhaka, Bangladesh
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are several metrics (Modularity, Mutual Information, Conductance, etc.)
to evaluate the strength of graph clustering in large graphs. These metrics
have great significance to measure the effectiveness and they are often used to
find the strongly connected clusters with respect to the whole graph. In this
paper, we propose a new metric to evaluate the strength of graph clustering and
also study its applications. We show that our proposed metric offers
consistency similar to other metrics and is easy to calculate. Our
proposed metric also shows consistency where other metrics fail in some special
cases. We demonstrate that our metric has reasonable strength while extracting
strongly connected communities in both simulated (in silico) data and real data
networks. We also show some comparative results of our proposed metric with
other popular metric(s) for Online Social Networks (OSN) and Gene Regulatory
Networks (GRN).
|
[
{
"version": "v1",
"created": "Mon, 25 Jan 2016 13:22:59 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Rahman",
"Md. Khaledur",
""
]
] |
new_dataset
| 0.973372 |
1610.03337
|
Travis Gagie
|
Amihood Amir, Alberto Apostolico, Travis Gagie and Gad M. Landau
|
String Cadences
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We say a string has a cadence if a certain character is repeated at regular
intervals, possibly with intervening occurrences of that character. We call the
cadence anchored if the first interval must be the same length as the others.
We give a sub-quadratic algorithm for determining whether a string has any
cadence consisting of at least three occurrences of a character, and a nearly
linear algorithm for finding all anchored cadences.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2016 13:51:00 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Amir",
"Amihood",
""
],
[
"Apostolico",
"Alberto",
""
],
[
"Gagie",
"Travis",
""
],
[
"Landau",
"Gad M.",
""
]
] |
new_dataset
| 0.999757 |
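To make the abstract's definition concrete, a brute-force sketch of anchored cadence detection under a direct reading of the abstract above (character c recurs with period d starting at position d, so the first interval equals all the others). The paper's algorithm is nearly linear; this sketch is roughly O(n log n) by the harmonic sum.

```python
def anchored_cadences(s):
    """Return (character, period) pairs such that s holds the same
    character at positions d, 2d, 3d, ... (1-indexed) for period d."""
    n = len(s)
    result = []
    for d in range(1, n + 1):
        positions = range(d, n + 1, d)  # d, 2d, ..., largest multiple <= n
        c = s[d - 1]
        if all(s[p - 1] == c for p in positions):
            result.append((c, d))
    return result

assert ('a', 2) in anchored_cadences("bababa")  # 'a' at positions 2, 4, 6
```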
1610.03342
|
Grzegorz Chrupa{\l}a
|
Lieke Gelderloos and Grzegorz Chrupa{\l}a
|
From phonemes to images: levels of representation in a recurrent neural
model of visually-grounded language learning
|
Accepted at COLING 2016
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a model of visually-grounded language learning based on stacked
gated recurrent neural networks which learns to predict visual features given
an image description in the form of a sequence of phonemes. The learning task
resembles that faced by human language learners who need to discover both
structure and meaning from noisy and ambiguous data across modalities. We show
that our model indeed learns to predict features of the visual context given
phonetically transcribed image descriptions, and show that it represents
linguistic information in a hierarchy of levels: lower layers in the stack are
comparatively more sensitive to form, whereas higher layers are more sensitive
to meaning.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2016 14:00:28 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Gelderloos",
"Lieke",
""
],
[
"Chrupała",
"Grzegorz",
""
]
] |
new_dataset
| 0.989825 |
1610.03393
|
Nahum Kiryati
|
Adi Perry, Dor Verbin, Nahum Kiryati
|
Crossing the Road Without Traffic Lights: An Android-based Safety Device
|
Planned submission to "Pattern Recognition Letters"
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the absence of pedestrian crossing lights, finding a safe moment to cross
the road is often hazardous and challenging, especially for people with visual
impairments. We present a reliable low-cost solution, an Android device
attached to a traffic sign or lighting pole near the crossing, indicating
whether it is safe to cross the road. The indication can be by sound, display,
vibration, and various communication modalities provided by the Android device.
The integral system camera is aimed at approaching traffic. Optical flow is
computed from the incoming video stream, and projected onto an influx map,
automatically acquired during a brief training period. The crossing safety is
determined based on a 1-dimensional temporal signal derived from the
projection. We implemented the complete system on a Samsung Galaxy K-Zoom
Android smartphone, and obtained real-time operation. The system achieves
promising experimental results, providing pedestrians with sufficiently early
warning of approaching vehicles. The system can serve as a stand-alone safety
device, that can be installed where pedestrian crossing lights are ruled out.
Requiring no dedicated infrastructure, it can be powered by a solar panel and
remotely maintained via the cellular network.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2016 15:33:00 GMT"
}
] | 2016-10-12T00:00:00 |
[
[
"Perry",
"Adi",
""
],
[
"Verbin",
"Dor",
""
],
[
"Kiryati",
"Nahum",
""
]
] |
new_dataset
| 0.999555 |
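One way to picture the pipeline in the abstract above, sketched with OpenCV's dense optical flow. The influx map (a per-pixel unit-vector field pointing along the learned traffic direction) is assumed given, and the Farneback parameters are illustrative, not the authors' choices:

```python
import cv2
import numpy as np

def approach_signal(prev_gray, curr_gray, influx_map):
    """Project dense optical flow onto an influx direction field and reduce
    it to a scalar per-frame 'approaching traffic' signal.

    prev_gray, curr_gray: uint8 grayscale frames; influx_map: (H, W, 2)
    unit vectors (hypothetical, acquired during the training period).
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    projection = (flow[..., 0] * influx_map[..., 0]
                  + flow[..., 1] * influx_map[..., 1])
    return float(np.sum(np.maximum(projection, 0.0)))
```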
1112.2495
|
Simon Perdrix
|
Sylvain Gravier, J\'er\^ome Javelle, Mehdi Mhalla, Simon Perdrix
|
On Weak Odd Domination and Graph-based Quantum Secret Sharing
|
Subsumes arXiv:1109.6181: Optimal accessing and non-accessing
structures for graph protocols
|
TCS Theoretical Computer Science 598, 129-137. 2015
|
10.1016/j.tcs.2015.05.038
| null |
cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A weak odd dominated (WOD) set in a graph is a subset B of vertices for which
there exists a distinct set of vertices C such that every vertex in B has an
odd number of neighbors in C. We point out the connections of weak odd
domination with odd domination, [sigma,rho]-domination, and perfect codes. We
introduce bounds on \kappa(G), the maximum size of WOD sets of a graph G, and
on \kappa'(G), the minimum size of non WOD sets of G. Moreover, we prove that
the corresponding decision problems are NP-complete. The study of weak odd
domination is mainly motivated by the design of graph-based quantum secret
sharing protocols: a graph G of order n corresponds to a secret sharing
protocol whose threshold is \kappa_Q(G) = max(\kappa(G), n-\kappa'(G)). These
graph-based protocols are very promising in terms of physical implementation,
however all such graph-based protocols studied in the literature have
quasi-unanimity thresholds (i.e. \kappa_Q(G)=n-o(n) where n is the order of the
graph G underlying the protocol). In this paper, we show using probabilistic
methods, the existence of graphs with smaller \kappa_Q (i.e. \kappa_Q(G)<
0.811n where n is the order of G). We also prove that deciding for a given
graph G whether \kappa_Q(G) < k is NP-complete, which means that one cannot
efficiently double-check that a randomly generated graph actually has a
\kappa_Q smaller than 0.811n.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2011 10:16:19 GMT"
},
{
"version": "v2",
"created": "Mon, 21 May 2012 14:54:10 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Gravier",
"Sylvain",
""
],
[
"Javelle",
"Jérôme",
""
],
[
"Mhalla",
"Mehdi",
""
],
[
"Perdrix",
"Simon",
""
]
] |
new_dataset
| 0.99727 |
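A brute-force check of the defining property, reading the abstract's "distinct set of vertices C" as a set disjoint from B. The exhaustive search over C is exponential, consistent with the NP-completeness results stated above; this is a definition-checker, not the paper's method.

```python
from itertools import combinations

def is_wod_set(adj, B):
    """Is B weak odd dominated? True iff some C, disjoint from B, gives
    every vertex of B an odd number of neighbours in C.

    adj: dict mapping each vertex to its set of neighbours.
    """
    others = [v for v in adj if v not in B]
    for r in range(len(others) + 1):
        for C in combinations(others, r):
            Cset = set(C)
            if all(len(adj[b] & Cset) % 2 == 1 for b in B):
                return True
    return False
```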
1505.03421
|
Tal Mizrahi
|
Tal Mizrahi, Yoram Moses
|
Time4: Time for SDN
|
This report is an extended version of "Software Defined Networks:
It's About Time", which was accepted to IEEE INFOCOM 2016. A preliminary
version of this report was published in arXiv in May, 2015
|
IEEE Transactions on Network and Service Management 13(3):
433-446, 2016
|
10.1109/TNSM.2016.2599640
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of Software Defined Networks (SDN), there is growing interest
in dynamic and centralized traffic engineering, where decisions about
forwarding paths are taken dynamically from a network-wide perspective.
Frequent path reconfiguration can significantly improve the network
performance, but should be handled with care, so as to minimize disruptions
that may occur during network updates.
In this paper we introduce Time4, an approach that uses accurate time to
coordinate network updates. Time4 is a powerful tool in softwarized
environments that can be used for various network update scenarios.
Specifically, we characterize a set of update scenarios called flow swaps, for
which Time4 is the optimal update approach, yielding less packet loss than
existing update approaches. We define the lossless flow allocation problem, and
formally show that in environments with frequent path allocation, scenarios
that require simultaneous changes at multiple network devices are inevitable.
We present the design, implementation, and evaluation of a Time4-enabled
OpenFlow prototype. The prototype is publicly available as open source. Our
work includes an extension to the OpenFlow protocol that has been adopted by
the Open Networking Foundation (ONF), and is now included in OpenFlow 1.5. Our
experimental results show the significant advantages of Time4 compared to other
network update approaches, and demonstrate an SDN use case that is infeasible
without Time4.
|
[
{
"version": "v1",
"created": "Wed, 13 May 2015 15:18:38 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2016 14:08:56 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Mizrahi",
"Tal",
""
],
[
"Moses",
"Yoram",
""
]
] |
new_dataset
| 0.952334 |
1602.05045
|
Felix Klein
|
Felix Klein and Martin Zimmermann
|
Prompt Delay
| null | null | null | null |
cs.GT cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Delay games are two-player games of infinite duration in which one player may
delay her moves to obtain a lookahead on her opponent's moves. Recently, such
games with quantitative winning conditions in weak MSO with the unbounding
quantifier were studied, but their properties turned out to be unsatisfactory.
In particular, unbounded lookahead is in general necessary. Here, we study
delay games with winning conditions given by Prompt-LTL, Linear Temporal Logic
equipped with a parameterized eventually operator whose scope is bounded. Our
main result shows that solving Prompt-LTL delay games is complete for
triply-exponential time. Furthermore, we give tight triply-exponential bounds
on the necessary lookahead and on the scope of the parameterized eventually
operator. Thus, we identify Prompt-LTL as the first known class of well-behaved
quantitative winning conditions for delay games. Finally, we show that applying
our techniques to delay games with \omega-regular winning conditions answers
open questions in the cases where the winning conditions are given by
non-deterministic, universal, or alternating automata.
|
[
{
"version": "v1",
"created": "Tue, 16 Feb 2016 15:07:23 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2016 06:15:54 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Klein",
"Felix",
""
],
[
"Zimmermann",
"Martin",
""
]
] |
new_dataset
| 0.992447 |
1607.02555
|
Jakob Engel
|
Jakob Engel and Vladyslav Usenko and Daniel Cremers
|
A Photometrically Calibrated Benchmark For Monocular Visual Odometry
|
* Corrected a bug in the evaluation setup, which caused the real-time
results for ORB-SLAM (dashed lines in Figure 8) to be much worse than they
should be. * https://vision.in.tum.de/data/datasets/mono-dataset
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a dataset for evaluating the tracking accuracy of monocular visual
odometry and SLAM methods. It contains 50 real-world sequences comprising more
than 100 minutes of video, recorded across dozens of different environments --
ranging from narrow indoor corridors to wide outdoor scenes. All sequences
contain mostly exploring camera motion, starting and ending at the same
position. This allows evaluating tracking accuracy via the accumulated drift
from start to end, without requiring ground truth for the full sequence. In
contrast to existing datasets, all sequences are photometrically calibrated. We
provide exposure times for each frame as reported by the sensor, the camera
response function, and dense lens attenuation factors. We also propose a novel,
simple approach to non-parametric vignette calibration, which requires minimal
set-up and is easy to reproduce. Finally, we thoroughly evaluate two existing
methods (ORB-SLAM and DSO) on the dataset, including an analysis of the effect
of image resolution, camera field of view, and the camera motion direction.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2016 00:11:14 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2016 20:06:10 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Engel",
"Jakob",
""
],
[
"Usenko",
"Vladyslav",
""
],
[
"Cremers",
"Daniel",
""
]
] |
new_dataset
| 0.999849 |
1608.08658
|
Navjot Kukreja
|
Navjot Kukreja, Mathias Louboutin, Felippe Vieira, Fabio Luporini,
Michael Lange, Gerard Gorman
|
Devito: automated fast finite difference computation
|
Accepted at WolfHPC 2016
| null | null | null |
cs.MS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain specific languages have successfully been used in a variety of fields
to cleanly express scientific problems as well as to simplify implementation
and performance optimization on different computer architectures. Although a
large number of stencil languages are available, finite difference domain
specific languages have proved challenging to design because most practical use
cases require additional features that fall outside the finite difference
abstraction. Inspired by the complexity of real-world seismic imaging problems,
we introduce Devito, a domain specific language in which high level equations
are expressed using symbolic expressions from the SymPy package. Complex
equations are automatically manipulated, optimized, and translated into highly
optimized C code that aims to perform comparably to or better than hand-tuned
code. All this is transparent to users, who only see concise symbolic
mathematical expressions.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2016 21:05:21 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2016 13:15:52 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Kukreja",
"Navjot",
""
],
[
"Louboutin",
"Mathias",
""
],
[
"Vieira",
"Felippe",
""
],
[
"Luporini",
"Fabio",
""
],
[
"Lange",
"Michael",
""
],
[
"Gorman",
"Gerard",
""
]
] |
new_dataset
| 0.996696 |
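To make the "symbolic equations in, optimized code out" workflow concrete, here is the kind of SymPy input the abstract above describes: a 1-D heat-equation stencil written in plain SymPy (Devito's actual API is not reproduced here).

```python
from sympy import Eq, Function, solve, symbols

x, t, dx, dt, nu = symbols('x t dx dt nu')
u = Function('u')

# Forward-in-time, centred-in-space discretization of u_t = nu * u_xx.
stencil = Eq((u(x, t + dt) - u(x, t)) / dt,
             nu * (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2)

# Solving for the future value yields the update rule that a code
# generator would lower into an optimized C loop nest.
update = solve(stencil, u(x, t + dt))[0]
```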
1610.01585
|
Mingzhe Chen
|
Mingzhe Chen, Mohammad Mozaffari, Walid Saad, Changchuan Yin,
M\'erouane Debbah and Choong-Seon Hong
|
Caching in the Sky: Proactive Deployment of Cache-Enabled Unmanned
Aerial Vehicles for Optimized Quality-of-Experience
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, the problem of proactive deployment of cache-enabled unmanned
aerial vehicles (UAVs) for optimizing the quality-of-experience (QoE) of
wireless devices in a cloud radio access network (CRAN) is studied. In the
considered model, the network can leverage human-centric information such as
users' visited locations, requested contents, gender, job, and device type to
predict the content request distribution and mobility pattern of each user.
Then, given these behavior predictions, the proposed approach seeks to find the
user-UAV associations, the optimal UAVs' locations, and the contents to cache
at UAVs. This problem is formulated as an optimization problem whose goal is to
maximize the users' QoE while minimizing the transmit power used by the UAVs.
To solve this problem, a novel algorithm based on the machine learning
framework of conceptor-based echo state networks (ESNs) is proposed. Using
ESNs, the network can effectively predict each user's content request
distribution and its mobility pattern when limited information on the states of
users and the network is available. Based on the predictions of the users'
content request distribution and their mobility patterns, we derive the optimal
user-UAV association, optimal locations of the UAVs as well as the content to
cache at UAVs. Simulation results using real pedestrian mobility patterns from
BUPT and actual content transmission data from Youku show that the proposed
algorithm can yield 40% and 61% gains, respectively, in terms of the average
transmit power and the percentage of the users with satisfied QoE compared to a
benchmark algorithm without caching and a benchmark solution without UAVs.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2016 19:41:12 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Chen",
"Mingzhe",
""
],
[
"Mozaffari",
"Mohammad",
""
],
[
"Saad",
"Walid",
""
],
[
"Yin",
"Changchuan",
""
],
[
"Debbah",
"Mérouane",
""
],
[
"Hong",
"Choong-Seon",
""
]
] |
new_dataset
| 0.992186 |
1610.02431
|
Andrea Vedaldi
|
A. Mahendran and H. Bilen and J. F. Henriques and A. Vedaldi
|
ResearchDoom and CocoDoom: Learning Computer Vision with Games
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this short note we introduce ResearchDoom, an implementation of the Doom
first-person shooter that can extract detailed metadata from the game. We also
introduce the CocoDoom dataset, a collection of pre-recorded data extracted
from Doom gaming sessions along with annotations in the MS Coco format.
ResearchDoom and CocoDoom can be used to train and evaluate a variety of
computer vision methods such as object recognition, detection and segmentation
at the level of instances and categories, tracking, ego-motion estimation,
monocular depth estimation and scene segmentation. The code and data are
available at http://www.robots.ox.ac.uk/~vgg/research/researchdoom.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 21:35:02 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Mahendran",
"A.",
""
],
[
"Bilen",
"H.",
""
],
[
"Henriques",
"J. F.",
""
],
[
"Vedaldi",
"A.",
""
]
] |
new_dataset
| 0.999778 |
1610.02442
|
Steve Chang
|
Steve Chang
|
InfraNotes: Inconspicuous Handwritten Trajectory Tracking for Lecture
Note Recording with Infrared Sensors
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lecture notes are important for students to review and understand the key
points in the class. Unfortunately, the students often miss or lose part of the
lecture notes. In this paper, we design and implement an infrared sensor based
system, InfraNotes, to automatically record the notes on the board by sensing
and analyzing hand gestures of the lecturer. Compared with existing techniques,
our system does not require special accessories with lecturers such as
sensor-facilitated pens, writing surfaces or the video-taping infrastructure.
Instead, it only has an infrared-sensor module on the eraser holder of the
black/white board to capture handwritten trajectories. With a lightweight
framework for handwritten trajectory processing, clear lecture notes can be
generated automatically. We evaluate the quality of lecture notes by three
standard character recognition techniques. The results indicate that InfraNotes
is a promising solution for creating clear and complete lecture notes,
promoting education.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 22:57:55 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Chang",
"Steve",
""
]
] |
new_dataset
| 0.993261 |
1610.02488
|
Jean-Marc Valin
|
Yushin Cho, Thomas J. Daede, Nathan E. Egge, Guillaume Martres,
Tristan Matthews, Christopher Montgomery, Timothy B. Terriberry, Jean-Marc
Valin
|
Perceptually-Driven Video Coding with the Daala Video Codec
|
19 pages, Proceedings of SPIE Workshop on Applications of Digital
Image Processing (ADIP), 2016
| null |
10.1117/12.2238417
| null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
The Daala project is a royalty-free video codec that attempts to compete with
the best patent-encumbered codecs. Part of our strategy is to replace core
tools of traditional video codecs with alternative approaches, many of them
designed to take perceptual aspects into account, rather than optimizing for
simple metrics like PSNR. This paper documents some of our experiences with
these tools, which ones worked and which did not. We evaluate which tools are
easy to integrate into a more traditional codec design, and show results in the
context of the codec being developed by the Alliance for Open Media.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2016 05:34:56 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Cho",
"Yushin",
""
],
[
"Daede",
"Thomas J.",
""
],
[
"Egge",
"Nathan E.",
""
],
[
"Martres",
"Guillaume",
""
],
[
"Matthews",
"Tristan",
""
],
[
"Montgomery",
"Christopher",
""
],
[
"Terriberry",
"Timothy B.",
""
],
[
"Valin",
"Jean-Marc",
""
]
] |
new_dataset
| 0.981121 |
1610.02742
|
Benda Xu
|
Guilherme Amadio and Benda Xu
|
Portage: Bringing Hackers' Wisdom to Science
| null | null | null | null |
cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Providing users of HPC systems with a wide variety of up-to-date software
packages is a challenging task. Large software stacks built from source are
difficult to manage, requiring powerful package management tools. The Portage
package manager from Gentoo is a highly flexible tool that offers a mature
solution to this otherwise daunting task. The Gentoo Prefix project develops
and maintains a way of installing Gentoo systems in non-standard locations,
bringing the virtues of Gentoo to other operating systems. Here we demonstrate
how a Gentoo Prefix installation can be used to cross compile software packages
for the Intel Xeon Phi known as Knights Corner, as well as to manage large
software stacks in HPC environments.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2016 00:19:32 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Amadio",
"Guilherme",
""
],
[
"Xu",
"Benda",
""
]
] |
new_dataset
| 0.986351 |
1610.02816
|
Changyang She
|
Changyang She and Chenyang Yang and Tony Q. S. Quek
|
Uplink Transmission Design with Massive Machine Type Devices in Tactile
Internet
|
Accepted by IEEE Globecom 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we study how to design uplink transmission with massive machine
type devices in tactile internet, where ultra-short delay and ultra-high
reliability are required. To characterize the transmission reliability
constraint, we employ a two-state transmission model based on the achievable
rate with finite blocklength channel codes. If the channel gain exceeds a
threshold, a short packet can be transmitted with a small error probability;
otherwise there is a packet loss. To exploit frequency diversity, we assign
multiple subchannels to each active device, from which the device selects a
subchannel with channel gain exceeding the threshold for transmission. To show
the total bandwidth required to ensure the reliability, we optimize the number
of subchannels and bandwidth of each subchannel and the threshold for each
device to minimize the total bandwidth of the system with a given number of
antennas at the base station. Numerical results show that with 1000 devices in
one cell, the required bandwidth of the optimized policy is acceptable even for
prevalent cellular systems. Furthermore, we show that as the number of antennas
at the BS increases, frequency diversity becomes unnecessary, and the required bandwidth is
reduced.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2016 09:22:50 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"She",
"Changyang",
""
],
[
"Yang",
"Chenyang",
""
],
[
"Quek",
"Tony Q. S.",
""
]
] |
new_dataset
| 0.976542 |
1610.02869
|
Ziyuan Wang
|
Ziyuan Wang, Jianbin Tang, Yini Wang, Bo Han, Xi Liang
|
LeaveNow: A Social Network-based Smart Evacuation System for Disaster
Management
|
2 pages, 3 figures, SWDM2016
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Timely response to natural disasters and the evacuation of affected people to
safe areas are paramount to saving lives. Emergency services are often
handicapped by the amount of rescue resources at their disposal. We
present a system that leverages the power of a social network forming new
connections among people based on \textit{real-time location} and expands the
rescue resources pool by adding private sector cars. We also introduce a
car-sharing algorithm to identify safe routes in an emergency with the aim of
minimizing evacuation time, maximizing pick-up of people without cars, and
avoiding traffic congestion.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2016 11:59:08 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Wang",
"Ziyuan",
""
],
[
"Tang",
"Jianbin",
""
],
[
"Wang",
"Yini",
""
],
[
"Han",
"Bo",
""
],
[
"Liang",
"Xi",
""
]
] |
new_dataset
| 0.999195 |
1610.02953
|
Michael Fredman
|
Michael L. Fredman
|
Comments on Dumitrescu's "A Selectable Sloppy Heap"
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dumitrescu [arXiv:1607.07673] describes a data structure referred to as a
Selectable Sloppy Heap. We present a simplified approach, and also point out
aspects of Dumitrescu's exposition that require scrutiny.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2016 15:13:10 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Fredman",
"Michael L.",
""
]
] |
new_dataset
| 0.973559 |
1610.02997
|
Joan Boyar
|
Joan Boyar, Leah Epstein, Lene M. Favrholdt, Kim S. Larsen, Asaf Levin
|
Batch Coloring of Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In graph coloring problems, the goal is to assign a positive integer color to
each vertex of an input graph such that adjacent vertices do not receive the
same color assignment. For classic graph coloring, the goal is to minimize the
maximum color used, and for the sum coloring problem, the goal is to minimize
the sum of colors assigned to all input vertices. In the offline variant, the
entire graph is presented at once, and in online problems, one vertex is
presented for coloring at a time, and the only information given is the identity
of its neighbors among previously known vertices. In batched graph coloring,
vertices are presented in k batches, for a fixed integer k > 1, such that the
vertices of a batch are presented as a set, and must be colored before the
vertices of the next batch are presented. This last model is an intermediate
model, which bridges between the two extreme scenarios of the online and
offline models. We provide several results, including a general result for sum
coloring and results for the classic graph coloring problem on restricted graph
classes: We show tight bounds for any graph class containing trees as a
subclass (e.g., forests, bipartite graphs, planar graphs, and perfect graphs),
and a surprising result for interval graphs and k = 2, where the value of the
(strict and asymptotic) competitive ratio depends on whether the graph is
presented with its interval representation or not.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2016 17:00:39 GMT"
}
] | 2016-10-11T00:00:00 |
[
[
"Boyar",
"Joan",
""
],
[
"Epstein",
"Leah",
""
],
[
"Favrholdt",
"Lene M.",
""
],
[
"Larsen",
"Kim S.",
""
],
[
"Levin",
"Asaf",
""
]
] |
new_dataset
| 0.997399 |
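As a baseline for the batched model described above, First-Fit extends naturally: each batch arrives as a set, and every vertex in it must be coloured before the next batch appears. This is a natural reference strategy, not one of the paper's algorithms.

```python
def batch_first_fit(batches, adj):
    """Colour vertices batch by batch with First-Fit.

    batches: list of lists of vertices, revealed one batch at a time;
    adj: dict mapping each vertex to its set of neighbours.
    """
    colour = {}
    for batch in batches:
        for v in batch:                      # any order within the batch
            used = {colour[u] for u in adj[v] if u in colour}
            c = 1
            while c in used:
                c += 1
            colour[v] = c
    return colour
```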
1510.04015
|
Jiawei Li
|
Jiawei Li
|
On Equilibria of N-seller and N-buyer Bargaining Games
|
17 pages, 3 figures
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A group of players containing n sellers and n buyers bargains over the
partitions of n pies. A seller(/buyer) has to reach an agreement with a buyer
(/seller) on the division of a pie. The players bargain in a system like the
stock market: each seller(buyer) can either offer a selling(buying) price to
all buyers(sellers) or accept a price offered by another buyer(seller). The
offered prices are known to all. Once a player accepts a price offered by
another one, the division of a pie between them is determined. Each player has
a constant discounting factor and the discounting factors of all players are
common knowledge. In this article, we prove that the equilibrium of this
bargaining problem is a unanimous division rate, which is equivalent to Nash
bargaining equilibrium of a two-player bargaining game in which the discounting
factors of two players are the average of n buyers and the average of n sellers
respectively. This result shows the connection between bargaining equilibrium
and the general equilibrium of markets.
|
[
{
"version": "v1",
"created": "Wed, 14 Oct 2015 09:20:08 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2016 15:34:28 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"Li",
"Jiawei",
""
]
] |
new_dataset
| 0.998837 |
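As a worked restatement of the reduction claimed above, write \bar{\delta}_s and \bar{\delta}_b for the average discounting factors of the sellers and buyers. The final share formula is the classical alternating-offers (Rubinstein) division of the reduced two-player game, quoted from the standard literature rather than from the paper:

```latex
\bar{\delta}_s = \frac{1}{n}\sum_{i=1}^{n}\delta_{s,i}, \qquad
\bar{\delta}_b = \frac{1}{n}\sum_{j=1}^{n}\delta_{b,j},
\qquad\text{proposer's share:}\quad
x^{*} = \frac{1-\bar{\delta}_b}{1-\bar{\delta}_s\,\bar{\delta}_b}.
```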
1610.02055
|
Bolei Zhou
|
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Antonio Torralba, Aude
Oliva
|
Places: An Image Database for Deep Scene Understanding
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of multi-million-item dataset initiatives has enabled data-hungry
machine learning algorithms to reach near-human semantic classification at
tasks such as object and scene recognition. Here we describe the Places
Database, a repository of 10 million scene photographs, labeled with scene
semantic categories and attributes, comprising a quasi-exhaustive list of the
types of environments encountered in the world. Using state-of-the-art
Convolutional Neural Networks, we provide impressive baseline performances at
scene classification. With its high-coverage and high-diversity of exemplars,
the Places Database offers an ecosystem to guide future progress on currently
intractable visual recognition problems.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2016 20:14:13 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"Zhou",
"Bolei",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Lapedriza",
"Agata",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Oliva",
"Aude",
""
]
] |
new_dataset
| 0.992894 |
1610.02060
|
Adrian Benton
|
Adrian Benton (Johns Hopkins University), Braden Hancock (Stanford
University), Glen Coppersmith (Qntfy), John W. Ayers (San Diego State
University), Mark Dredze (Johns Hopkins University)
|
After Sandy Hook Elementary: A Year in the Gun Control Debate on Twitter
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The mass shooting at Sandy Hook elementary school on December 14, 2012
catalyzed a year of active debate and legislation on gun control in the United
States. Social media hosted an active public discussion where people expressed
their support and opposition to a variety of issues surrounding gun
legislation. In this paper, we show how a content-based analysis of Twitter
data can provide insights and understanding into this debate. We estimate the
relative support and opposition to gun control measures, along with a topic
analysis of each camp by analyzing over 70 million gun-related tweets from
2013. We focus on spikes in conversation surrounding major events related to
guns throughout the year. Our general approach can be applied to other
important public health and political issues to analyze the prevalence and
nature of public opinion.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2016 20:34:34 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"Benton",
"Adrian",
"",
"Johns Hopkins University"
],
[
"Hancock",
"Braden",
"",
"Stanford\n University"
],
[
"Coppersmith",
"Glen",
"",
"Qntfy"
],
[
"Ayers",
"John W.",
"",
"San Diego State\n University"
],
[
"Dredze",
"Mark",
"",
"Johns Hopkins University"
]
] |
new_dataset
| 0.998882 |
1610.02144
|
Robert O'Callahan
|
Robert O'Callahan and Chris Jones and Nathan Froyd and Kyle Huey and
Albert Noll and Nimrod Partush
|
Lightweight User-Space Record And Replay
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to record and replay program executions with low overhead enables
many applications, such as reverse-execution debugging, debugging of
hard-to-reproduce test failures, and "black box" forensic analysis of failures
in deployed systems. Existing record-and-replay approaches rely on recording an
entire virtual machine (which is heavyweight), modifying the OS kernel (which
adds deployment and maintenance costs), or pervasive code instrumentation
(which imposes significant performance and complexity overhead). We
investigated whether it is possible to build a practical record-and-replay
system avoiding all these issues. The answer turns out to be yes --- if the CPU
and operating system meet certain non-obvious constraints. Fortunately modern
Intel CPUs, Linux kernels and user-space frameworks meet these constraints,
although this has only become true recently. With some novel optimizations, our
system RR records and replays real-world workloads with low overhead with an
entirely user-space implementation running on stock hardware and operating
systems. RR forms the basis of an open-source reverse-execution debugger seeing
significant use in practice. We present the design and implementation of RR,
describe its performance on a variety of workloads, and identify constraints on
hardware and operating system design required to support our approach.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 05:11:32 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"O'Callahan",
"Robert",
""
],
[
"Jones",
"Chris",
""
],
[
"Froyd",
"Nathan",
""
],
[
"Huey",
"Kyle",
""
],
[
"Noll",
"Albert",
""
],
[
"Partush",
"Nimrod",
""
]
] |
new_dataset
| 0.990285 |
1610.02175
|
Nilanjan De
|
Nilanjan De
|
F-index and coindex of some derived graphs
|
8 pages
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, the explicit expressions for F-index and coindex of derived
graphs such as a line graph, subdivision graph, vertex-semitotal graph,
edge-semitotal graph, total graph and paraline graph (line graph of the
subdivision graph) are obtained.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 08:17:41 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"De",
"Nilanjan",
""
]
] |
new_dataset
| 0.999426 |
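For concreteness, the two quantities named in the title under their usual definitions (the F-index as the sum of cubed vertex degrees, the F-coindex summed over non-adjacent vertex pairs); these definitions come from the standard literature, since the abstract does not restate them:

```python
from itertools import combinations
import networkx as nx

def f_index(G):
    """F-index: sum of the cubes of the vertex degrees."""
    return sum(d ** 3 for _, d in G.degree())

def f_coindex(G):
    """F-coindex: sum of deg(u)^2 + deg(v)^2 over non-adjacent pairs."""
    return sum(G.degree(u) ** 2 + G.degree(v) ** 2
               for u, v in combinations(G.nodes(), 2)
               if not G.has_edge(u, v))

P4 = nx.path_graph(4)                 # degrees 1, 2, 2, 1
assert f_index(P4) == 1 + 8 + 8 + 1   # = 18
```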
1610.02228
|
Wanita Sherchan
|
Wanita Sherchan, Shaila Pervin, Christopher J. Butler, Jennifer C. Lai
|
Project ACT: Social Media Analytics in Disaster Response
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In large-scale emergencies social media has become a key source of
information for public awareness, government authorities and relief agencies.
However, the sheer volume of data and the low signal-to-noise ratio limit the
effectiveness and the efficiency of using social media as an intelligence
resource. We describe Australian Crisis Tracker (ACT), a tool designed for
agencies responding to large-scale emergency events, to facilitate the
understanding of critical information in Twitter. ACT was piloted by the
Australian Red Cross (ARC) during the 2013-2014 Australian bushfires season.
Video is available at: https://www.youtube.com/watch?v=Y-1rtNFqQbE
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 11:27:16 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"Sherchan",
"Wanita",
""
],
[
"Pervin",
"Shaila",
""
],
[
"Butler",
"Christopher J.",
""
],
[
"Lai",
"Jennifer C.",
""
]
] |
new_dataset
| 0.999398 |
1610.02358
|
Asmelash Teka Hadgu
|
Asmelash Teka Hadgu, Kaweh Djafari Naini, Claudia Nieder\'ee
|
Welcome or Not-Welcome: Reactions to Refugee Situation on Social Media
|
6 pages, 6 figures, swdm16
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
For many European countries, in 2015 the refugee situation developed from a
remote tragedy reported upon in the news to a situation they have to deal with
in their own neighborhood. Driven by this observation, we investigated the
development of the perception of the refugee situation during 2015 in Twitter.
Starting from a dataset of 1.7 million tweets covering refugee-related topics
from May to December 2015, we investigated how the discussion on refugees
changed over time, in different countries as well as in relationship with the
evolution of the actual situation. In this paper we report and discuss our
findings from checking a set of hypotheses, such as that the closeness to the
actual situation would influence the intensity and polarity of discussions and
that news media takes a mediating role between the actual and perceived refugee
situation.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 17:52:59 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"Hadgu",
"Asmelash Teka",
""
],
[
"Naini",
"Kaweh Djafari",
""
],
[
"Niederée",
"Claudia",
""
]
] |
new_dataset
| 0.998569 |
1610.02374
|
Igor Polkovnikov
|
Igor Polkovnikov
|
Unified Control and Data Flow Diagrams Applied to Software Engineering
and other Systems
|
23 pages, 22 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
More often than not, there is a need to understand the structure of complex
computer code: what functions and in what order they are called, how
information travels around static, input, and output variables, what depends on
what. As a rule, executable code and data are scattered among multiple files
and even multiple modules. Information is transmitted among variables which
often change names. These tangled relations greatly complicate the development,
maintenance, and redevelopment of code, its analysis for complexity and its
robustness. As of now, there is no tool which is capable of presenting the
real-life, useful diagram of actual code. Conventional flowcharts fail.
We propose a method that overcomes these difficulties. The main idea is that
the functionality of software can be described through flows of control, which
are essentially flows of time, and flows of data; the two are inseparable. The
second idea is to follow very strict system boundaries and distinctions with
respect to modules, functions, blocks, and operators, as well as data holders,
showing them all as subsystems, in other words, by clearly expressing the
system structure when every piece of executable code and every variable may
have its own graphical representation. The third is defining timelines as the
entities clearly separated from the connected blocks of code. Timelines allow
presentation of nesting of the control flow as deep as necessary. As a proof of
concept, the same methods successfully describe production systems. Keywords:
flowchart, UML, software diagram, visual programming, extreme programming,
extreme modeling, control flow, data flow.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2016 19:00:04 GMT"
}
] | 2016-10-10T00:00:00 |
[
[
"Polkovnikov",
"Igor",
""
]
] |
new_dataset
| 0.979591 |
1504.01842
|
Hendra Gunadi
|
Hendra Gunadi, Alwen Tiu, and Rajeev Gore
|
Formal Certification of Android Bytecode
|
12 pages content, 43 pages total including Appendices, double-column
IEEE
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Android is an operating system that has been used in a majority of mobile
devices. Each application in Android runs in an instance of the Dalvik virtual
machine, which is a register-based virtual machine (VM). Most applications for
Android are developed using Java, compiled to Java bytecode and then translated
to DEX bytecode using the dx tool in the Android SDK. In this work, we aim to
develop a type-based method for certifying non-interference properties of DEX
bytecode, following a methodology that has been developed for Java bytecode
certification by Barthe et al. To this end, we develop a formal operational
semantics of the Dalvik VM, a type system for DEX bytecode, and prove the
soundness of the type system with respect to a notion of non-interference. We
then study the translation process from Java bytecode to DEX bytecode, as
implemented in the dx tool in the Android SDK. We show that an abstracted
version of the translation from Java bytecode to DEX bytecode preserves the
non-interference property. More precisely, we show that if the Java bytecode is
typable in Barthe et al.'s type system (which guarantees non-interference), then
its translation is typable in our type system. This result opens up the
possibility to leverage existing bytecode verifiers for Java to certify
non-interference properties of Android bytecode.
|
[
{
"version": "v1",
"created": "Wed, 8 Apr 2015 06:24:38 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Apr 2015 05:22:08 GMT"
},
{
"version": "v3",
"created": "Wed, 4 May 2016 04:02:19 GMT"
},
{
"version": "v4",
"created": "Mon, 9 May 2016 05:13:48 GMT"
},
{
"version": "v5",
"created": "Thu, 6 Oct 2016 11:54:26 GMT"
}
] | 2016-10-07T00:00:00 |
[
[
"Gunadi",
"Hendra",
""
],
[
"Tiu",
"Alwen",
""
],
[
"Gore",
"Rajeev",
""
]
] |
new_dataset
| 0.990091 |
1603.07916
|
Remi Imbach
|
R\'emi Imbach (VEGAS)
|
A Subdivision Solver for Systems of Large Dense Polynomials
| null | null | null | null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe here the package {\tt subdivision\_solver} for the mathematical
software {\tt SageMath}. It provides a solver over the real numbers for square
systems of large dense polynomials. By large polynomials we mean multivariate
polynomials with large degrees, whose coefficients have large bit-size. While
staying robust, symbolic approaches to solving systems of polynomials see
their performance dramatically affected by the high degree and bit-size of the
input polynomials. Available numeric approaches suffer from the cost of
evaluating large polynomials and their derivatives. Our solver is based on
interval analysis and bisections of an initial compact domain of
$\mathbb{R}^n$ where solutions are sought. Evaluation on intervals with the
Horner scheme is performed by the package {\tt fast\_polynomial} for {\tt
SageMath}. The non-existence of a solution within a box is certified by an
evaluation scheme that uses a Taylor expansion at order 2, and existence and
uniqueness of a solution within a box is certified with the Krawczyk operator.
The precision of the working arithmetic is adapted on the fly during the
subdivision process, and we present a new heuristic criterion to decide if the
arithmetic precision has to be increased.
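Editor's note: a toy one-dimensional sketch (not the package itself) of the
bisection-and-exclusion principle described above: a box is discarded as soon
as interval Horner evaluation of the polynomial excludes zero. The Taylor-based
exclusion test, the Krawczyk existence certificate, and the adaptive precision
are all omitted here.

def iv_add(a, b): return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def horner(coeffs, x):
    # Interval Horner evaluation; coefficients given highest degree first.
    acc = (coeffs[0], coeffs[0])
    for c in coeffs[1:]:
        acc = iv_add(iv_mul(acc, x), (c, c))
    return acc

def solve(coeffs, box, eps=1e-8):
    lo, hi = box
    val = horner(coeffs, box)
    if val[0] > 0 or val[1] < 0:          # 0 not in f(box): no root inside
        return []
    if hi - lo < eps:                     # tiny box: keep as root candidate
        return [box]
    mid = 0.5 * (lo + hi)
    return solve(coeffs, (lo, mid), eps) + solve(coeffs, (mid, hi), eps)

boxes = solve([1.0, 0.0, -2.0], (-3.0, 3.0))   # x^2 - 2 on [-3, 3]
print(len(boxes), boxes[0])                    # candidates near +/- sqrt(2)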
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2016 14:07:49 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2016 14:38:23 GMT"
}
] | 2016-10-07T00:00:00 |
[
[
"Imbach",
"Rémi",
"",
"VEGAS"
]
] |
new_dataset
| 0.987414 |
1610.01670
|
Ellie Pavlick
|
Ellie Pavlick (University of Pennsylvania), Chris Callison-Burch
(University of Pennsylvania)
|
The Gun Violence Database
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the Gun Violence Database (GVDB), a large and growing database of
gun violence incidents in the United States. The GVDB is built from the
detailed information found in local news reports about gun violence, and is
constructed via a large-scale crowdsourced annotation effort through our web
site, http://gun-violence.org/. We argue that centralized and publicly
available data about gun violence can facilitate scientific, fact-based
discussion about a topic that is often dominated by politics and emotion. We
describe our efforts to automate the construction of the database using
state-of-the-art natural language processing (NLP) technologies, eventually
enabling a fully-automated, highly-scalable resource for research on this
important public health problem.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2016 22:00:14 GMT"
}
] | 2016-10-07T00:00:00 |
[
[
"Pavlick",
"Ellie",
"",
"Uiversity of Pennsylvania"
],
[
"Callison-Burch",
"Chris",
"",
"Uiversity of Pennsylvania"
]
] |
new_dataset
| 0.997557 |
1610.01757
|
Mohamad Ivan Fanany
|
Endang Purnama Giri, Mohamad Ivan Fanany, Aniati Murni Arymurthy
|
Ischemic Stroke Identification Based on EEG and EOG using 1D
Convolutional Neural Network and Batch Normalization
|
13 pages. To be published in ICACSIS 2016
| null | null | null |
cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 2015, stroke was the number one cause of death in Indonesia. The most
common type of stroke is ischemic. The standard tool for diagnosing stroke is
CT-Scan. For developing countries like Indonesia, the availability of CT-Scan
is very limited and still relatively expensive. Given this limited
availability, another device with potential for diagnosing stroke in Indonesia
is EEG. Ischemic stroke occurs because of an obstruction that makes the
cerebral blood flow (CBF) of a person with stroke lower than the CBF of a
normal person (control), so that the EEG signal shows a deceleration. In this
study, we examine the ability of a 1D Convolutional Neural Network (1DCNN) to
construct a classification model that can distinguish the EEG and EOG stroke
data from the EEG and EOG control data. To accelerate the training process of
our model we use Batch Normalization. Involving data from 62 subjects, and
using a leave-one-out scenario with five repetitions of measurement, we obtain
an average accuracy of 0.86 (F-Score 0.861) at only 200 epochs. This result is
better than all of the shallow and popular classifiers used as comparators
(whose best result was an accuracy of 0.69 and an F-Score of 0.72). The
features used in our study were only 24 handcrafted features obtained with a
simple feature extraction process.
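Editor's note: a hedged PyTorch sketch of a 1D CNN with Batch Normalization in
the spirit of the abstract; the layer sizes, and treating the 24 handcrafted
features as a length-24 one-channel signal, are placeholder choices, not the
authors' architecture.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),   # one channel over 24 features
    nn.BatchNorm1d(16),                           # accelerates/stabilizes training
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # stroke vs. control
)

x = torch.randn(8, 1, 24)    # batch of 8 subjects, 24 features each
print(model(x).shape)        # torch.Size([8, 2])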
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2016 07:19:27 GMT"
}
] | 2016-10-07T00:00:00 |
[
[
"Giri",
"Endang Purnama",
""
],
[
"Fanany",
"Mohamad Ivan",
""
],
[
"Arymurthy",
"Aniati Murni",
""
]
] |
new_dataset
| 0.997412 |
1610.01832
|
Andreas Olofsson
|
Andreas Olofsson
|
Epiphany-V: A 1024 processor 64-bit RISC System-On-Chip
|
15 pages, 7 figures
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the design of a 1024-core processor chip in 16nm FinFet
technology. The chip ("Epiphany-V") contains an array of 1024 64-bit RISC
processors, 64MB of on-chip SRAM, three 136-bit wide mesh Networks-On-Chip, and
1024 programmable IO pins. The chip has taped out and is being manufactured by
TSMC.
This research was developed with funding from the Defense Advanced Research
Projects Agency (DARPA). The views, opinions and/or findings expressed are
those of the author and should not be interpreted as representing the official
views or policies of the Department of Defense or the U.S. Government.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2016 12:05:14 GMT"
}
] | 2016-10-07T00:00:00 |
[
[
"Olofsson",
"Andreas",
""
]
] |
new_dataset
| 0.999664 |
1502.07242
|
Albert Y.S. Lam
|
Albert Y.S. Lam, Yiu-Wing Leung, Xiaowen Chu
|
Autonomous Vehicle Public Transportation System: Scheduling and
Admission Control
|
16 pages, 10 figures
| null |
10.1109/TITS.2015.2513071
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The technology of autonomous vehicles (AVs) is maturing, and many AVs will
appear on the roads in the near future. AVs become connected with the support
of various vehicular communication technologies, and they possess a high
degree of control to respond to instantaneous situations cooperatively with
high
efficiency and flexibility. In this paper, we propose a new public
transportation system based on AVs. It manages a fleet of AVs to accommodate
transportation requests, offering point-to-point services with ride sharing. We
focus on the two major problems of the system: scheduling and admission
control. The former is to configure the most economical schedules and routes
for the AVs to satisfy the admissible requests while the latter is to determine
the set of admissible requests among all requests to produce maximum profit.
The scheduling problem is formulated as a mixed-integer linear program and the
admission control problem is cast as a bilevel optimization, which embeds the
scheduling problem as the major constraint. By utilizing the analytical
properties of the problem, we develop an effective genetic-algorithm-based
method to tackle the admission control problem. We validate the performance of
the algorithm with real-world transportation service data.
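Editor's note: a toy genetic-algorithm skeleton for the admission-control layer
only: a bitstring marks which requests are admitted, and the inner scheduling
cost is replaced by a crude stand-in function (the paper solves a mixed-integer
linear program there instead). All numbers are invented.

import random

random.seed(1)
profit = [random.uniform(1, 5) for _ in range(10)]   # per-request revenue

def fitness(bits):
    cost = 0.8 * sum(bits) ** 1.3                    # stand-in for scheduling cost
    return sum(p for p, b in zip(profit, bits) if b) - cost

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 10)
        child = a[:cut] + b[cut:]                    # one-point crossover
        i = random.randrange(10); child[i] ^= 1      # bit-flip mutation
        children.append(child)
    pop = parents + children
print(max(fitness(bits) for bits in pop))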
|
[
{
"version": "v1",
"created": "Wed, 25 Feb 2015 16:57:08 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Sep 2015 07:34:03 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Lam",
"Albert Y. S.",
""
],
[
"Leung",
"Yiu-Wing",
""
],
[
"Chu",
"Xiaowen",
""
]
] |
new_dataset
| 0.990801 |
1604.08685
|
Jiajun Wu
|
Jiajun Wu, Tianfan Xue, Joseph J. Lim, Yuandong Tian, Joshua B.
Tenenbaum, Antonio Torralba, William T. Freeman
|
Single Image 3D Interpreter Network
|
ECCV 2016 (oral). The first two authors contributed equally to this
work
| null |
10.1007/978-3-319-46466-4_22
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding 3D object structure from a single image is an important but
difficult task in computer vision, mostly due to the lack of 3D object
annotations in real images. Previous work tackles this problem by either
solving an optimization task given 2D keypoint positions, or training on
synthetic data with ground truth 3D information. In this work, we propose 3D
INterpreter Network (3D-INN), an end-to-end framework which sequentially
estimates 2D keypoint heatmaps and 3D object structure, trained on both real
2D-annotated images and synthetic 3D data. This is made possible mainly by two
technical innovations. First, we propose a Projection Layer, which projects
estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D
structural parameters supervised by 2D annotations on real images. Second,
heatmaps of keypoints serve as an intermediate representation connecting real
and synthetic data, enabling 3D-INN to benefit from the variation and abundance
of synthetic 3D objects, without suffering from the difference between the
statistics of real and synthesized images due to imperfect rendering. The
network achieves state-of-the-art performance on both 2D keypoint estimation
and 3D structure recovery. We also show that the recovered 3D information can
be used in other vision applications, such as 3D rendering and image retrieval.
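Editor's note: a minimal PyTorch sketch of the key idea behind a projection
layer: differentiably map estimated 3D keypoints to 2D so that 2D annotations
can supervise the 3D structure. A scaled-orthographic camera is assumed here;
the paper's exact parameterization may differ.

import torch

def project(points3d, rotation, scale, translation):
    # points3d: (B, N, 3); rotation: (B, 3, 3); scale: (B, 1, 1); translation: (B, 1, 2)
    cam = torch.bmm(points3d, rotation.transpose(1, 2))   # rotate to camera frame
    return scale * cam[..., :2] + translation             # drop depth, scale, shift

B, N = 4, 10
pts = torch.randn(B, N, 3, requires_grad=True)
R = torch.eye(3).unsqueeze(0).expand(B, 3, 3)
proj = project(pts, R, torch.ones(B, 1, 1), torch.zeros(B, 1, 2))
proj.sum().backward()      # gradients flow back into the 3D structure
print(pts.grad.shape)      # torch.Size([4, 10, 3])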
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2016 04:52:46 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2016 19:35:54 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Wu",
"Jiajun",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Lim",
"Joseph J.",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Freeman",
"William T.",
""
]
] |
new_dataset
| 0.99636 |
1606.01299
|
Yaniv Romano
|
Yaniv Romano, John Isidoro, and Peyman Milanfar
|
RAISR: Rapid and Accurate Image Super Resolution
|
Supplementary material can be found at https://goo.gl/D0ETxG
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an image, we wish to produce an image of larger size with significantly
more pixels and higher image quality. This is generally known as the Single
Image Super-Resolution (SISR) problem. The idea is that with sufficient
training data (corresponding pairs of low and high resolution images) we can
learn a set of filters (i.e., a mapping) that, when applied to a given image
that is not in the training set, will produce a higher resolution version of
it, where
the learning is preferably low complexity. In our proposed approach, the
run-time is more than one to two orders of magnitude faster than the best
competing methods currently available, while producing results comparable or
better than state-of-the-art.
A closely related topic is image sharpening and contrast enhancement, i.e.,
improving the visual quality of a blurry image by amplifying the underlying
details (a wide range of frequencies). Our approach additionally includes an
extremely efficient way to produce an image that is significantly sharper than
the input blurry one, without introducing artifacts such as halos and noise
amplification. We illustrate how this effective sharpening algorithm, in
addition to being of independent interest, can be used as a pre-processing step
to induce the learning of more effective upscaling filters with built-in
sharpening and contrast enhancement effect.
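Editor's note: RAISR hashes patches by local structure and learns one filter
per bucket; the sketch below shows only the core regression step, assuming a
single global filter learned by least squares from low-resolution patches to
the corresponding high-resolution center pixels. The data are synthetic
stand-ins.

import numpy as np

def learn_filter(lr, hr, k=5):
    r = k // 2
    A, b = [], []
    for i in range(r, lr.shape[0] - r):
        for j in range(r, lr.shape[1] - r):
            A.append(lr[i-r:i+r+1, j-r:j+r+1].ravel())   # LR patch
            b.append(hr[i, j])                           # target HR pixel
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return h.reshape(k, k)

hr = np.random.rand(32, 32)
lr = hr + 0.05 * np.random.randn(32, 32)   # stand-in for an upscaled blurry image
print(learn_filter(lr, hr).shape)          # (5, 5) learned filter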
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 22:56:49 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Aug 2016 08:39:18 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Oct 2016 21:22:51 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Romano",
"Yaniv",
""
],
[
"Isidoro",
"John",
""
],
[
"Milanfar",
"Peyman",
""
]
] |
new_dataset
| 0.956206 |
1607.06797
|
Fariborz Taherkhani
|
Fariborz Taherkhani
|
A probabilistic patch based image representation using Conditional
Random Field model for image classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper we propose an ordered patch-based method using a Conditional
Random Field (CRF) to encode local properties and their spatial relationships
in images, addressing texture classification, face recognition, and scene
classification problems. Typical image classification approaches represent
images in feature space without considering the spatial causality among the
distinctive properties of an image. In this method, first, each image is
encoded as a sequence of ordered patches that capture local properties.
Second, the sequence of these ordered patches is modeled as a probabilistic
feature vector by a CRF to model the spatial relationship of these local
properties. Finally, image classification is performed on this probabilistic
image representation. Experimental results on several standard image datasets
indicate that the proposed method outperforms some existing image
classification methods.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2016 19:19:47 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2016 06:24:06 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Taherkhani",
"Fariborz",
""
]
] |
new_dataset
| 0.994364 |
1610.00662
|
Andrea Tassi
|
Andrea Tassi, Malcolm Egan, Robert J. Piechocki, Andrew Nix
|
Wireless Vehicular Networks in Emergencies: A Single Frequency Network
Approach
|
The invited paper will be presented in the Telecommunications Systems
and Networks symposium of SigTelCom 2017
| null | null | null |
cs.IT cs.NI cs.PF math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obtaining high quality sensor information is critical in vehicular
emergencies. However, existing standards such as IEEE 802.11p/DSRC and LTE-A
cannot support either the required data rates or the latency requirements. One
solution to this problem is for municipalities to invest in dedicated base
stations to ensure that drivers have the information they need to make safe
decisions in or near accidents. In this paper we further propose that these
municipality-owned base stations form a Single Frequency Network (SFN). In
order to ensure that transmissions are reliable, we derive tight bounds on the
outage probability when the SFN is overlaid on an existing cellular network.
Using our bounds, we propose a transmission power allocation algorithm. We show
that our power allocation model can reduce the total instantaneous SFN
transmission power up to $20$ times compared to a static uniform power
allocation solution, for the considered scenarios. The result is particularly
important when base stations rely on an off-grid power source (i.e.,
batteries).
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2016 18:25:53 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2016 07:29:20 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Tassi",
"Andrea",
""
],
[
"Egan",
"Malcolm",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Nix",
"Andrew",
""
]
] |
new_dataset
| 0.997466 |
1610.01314
|
Doron Zarchy
|
Doron Zarchy, Amogh Dhamdhere, Constantine Dovrolis, Michael Schapira
|
Nash-Peering: A New Techno-Economic Framework for Internet
Interconnections
| null | null | null | null |
cs.GT cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The current framework of Internet interconnections, based on transit and
settlement-free peering relations, has systemic problems that often cause
peering disputes. We propose a new techno-economic interconnection framework
called Nash-Peering, which is based on the principles of Nash Bargaining in
game theory and economics. Nash-Peering constitutes a radical departure from
current interconnection practices, providing a broader and more economically
efficient set of interdomain relations. In particular, the direction of payment
is not determined by the direction of traffic or by rigid customer-provider
relationships but based on which AS benefits more from the interconnection. We
argue that Nash-Peering can address the root cause of various types of peering
disputes.
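Editor's note: a toy illustration of the Nash bargaining principle the
framework builds on: choose the payment that maximizes the product of the two
ASes' gains over their disagreement utilities. The utility shapes and numbers
below are invented for illustration.

import numpy as np

u_a = lambda p: 10.0 - p   # AS A's benefit from interconnecting, minus payment p
u_b = lambda p: 4.0 + p    # AS B's benefit, plus the payment it receives
d_a, d_b = 2.0, 3.0        # disagreement points (no interconnection)

payments = np.linspace(-5, 8, 2601)
best = max((max(u_a(p) - d_a, 0) * max(u_b(p) - d_b, 0), p) for p in payments)
print(best[1])             # optimal payment; its sign reflects who benefits more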
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2016 09:01:05 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Zarchy",
"Doron",
""
],
[
"Dhamdhere",
"Amogh",
""
],
[
"Dovrolis",
"Constantine",
""
],
[
"Schapira",
"Michael",
""
]
] |
new_dataset
| 0.998968 |
1610.01367
|
Mahdi Khademian
|
Mahdi Khademian and Mohammad Mehdi Homayounpour
|
Monaural Multi-Talker Speech Recognition using Factorial Speech
Processing Models
| null | null | null | null |
cs.CL cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Pascal challenge entitled monaural multi-talker speech recognition was
developed, targeting the problem of robust automatic speech recognition against
speech-like noises, which significantly degrade the performance of automatic
speech recognition systems. In this challenge, two competing speakers say a
simple command simultaneously and the objective is to recognize the speech of
the target speaker. Surprisingly, during the challenge, a team from IBM
Research achieved a performance better than human listeners on this task. The
method proposed by the IBM team consists of an intermediate speech separation
followed by single-talker speech recognition. This paper reconsiders the task
of this challenge based on gain-adapted factorial speech processing models. It
develops a joint-token passing algorithm for direct utterance decoding of both
target and masker speakers simultaneously. Compared to the challenge winner, it
retains maximum uncertainty during decoding, which cannot be exploited in the
earlier two-phase method. It provides a detailed derivation of inference on
these models based on general inference procedures for probabilistic graphical
models. As another improvement, it uses deep neural networks for joint speaker
identification and gain estimation, which makes these two steps easier than
before while producing competitive results for them. The proposed method of
this work outperforms past super-human results and even the results achieved
recently by Microsoft Research using deep neural networks. It achieves a 5.5%
absolute task performance improvement over the first super-human system and a
2.7% absolute improvement over its recent competitor.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2016 11:34:36 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Khademian",
"Mahdi",
""
],
[
"Homayounpour",
"Mohammad Mehdi",
""
]
] |
new_dataset
| 0.974514 |
1610.01518
|
Giorgio Sonnino
|
Alberto Sonnino, Giorgio Sonnino
|
Elliptic-Curves Cryptography on High-Dimensional Surfaces
|
10 pages, 4 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss the use of elliptic curves in cryptography on high-dimensional
surfaces. In particular, instead of a Diffie-Hellman key exchange protocol
written in the form of a bi-dimensional row, where the elements are made up
of 256 bits, we propose a key exchange protocol given in matrix form, with
four independent entries, each constructed with 64 bits. Apart from the
great advantage of significantly reducing the number of used bits, this
methodology appears to be immune to attacks of the style of Western, Miller,
and Adleman, and at the same time it is also able to reach the same level of
security as the cryptographic system presently used by Microsoft Digital
Rights Management. A nonlinear differential equation (NDE) admitting
the elliptic curves as a special case is also proposed. The study of the class
of solutions of this NDE is in progress.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2016 15:33:34 GMT"
}
] | 2016-10-06T00:00:00 |
[
[
"Sonnino",
"Alberto",
""
],
[
"Sonnino",
"Giorgio",
""
]
] |
new_dataset
| 0.995689 |
1505.08003
|
Ulrich Breunig
|
Ulrich Breunig, Verena Schmid, Richard F. Hartl, Thibaut Vidal
|
A large neighbourhood based heuristic for two-echelon routing problems
| null |
Computers & Operations Research 2016; 76: 208-225
|
10.1016/j.cor.2016.06.014
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address two optimisation problems arising in the context of
city logistics and two-level transportation systems. The two-echelon vehicle
routing problem and the two-echelon location routing problem seek to produce
vehicle itineraries to deliver goods to customers, with transits through
intermediate facilities. To efficiently solve these problems, we propose a
hybrid metaheuristic which combines enumerative local searches with
destroy-and-repair principles, as well as some tailored operators to optimise
the selections of intermediate facilities. We conduct extensive computational
experiments to investigate the contribution of these operators to the search
performance, and measure the performance of the method on both problem classes.
The proposed algorithm finds the current best known solutions, or better ones,
for 95% of the two-echelon vehicle routing problem benchmark instances.
Overall, for both problems, it achieves high-quality solutions within short
computing times. Finally, for future reference, we resolve inconsistencies
between different versions of benchmark instances, document their differences,
and provide them all online in a unified format.
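Editor's note: a bare-bones destroy-and-repair loop on a toy single-echelon
tour, to illustrate the hybrid principle in the abstract; the enumerative local
searches and the tailored operators for selecting intermediate facilities are
not shown, and all coordinates are random.

import math, random

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def destroy(tour, k=3):
    removed = random.sample(tour, k)                 # remove k random customers
    return [c for c in tour if c not in removed], removed

def repair(tour, removed, pts):
    for c in removed:                                # greedy cheapest insertion
        best = min(range(len(tour) + 1),
                   key=lambda i: tour_len(tour[:i] + [c] + tour[i:], pts))
        tour = tour[:best] + [c] + tour[best:]
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
tour = list(range(12))
for _ in range(200):
    cand = repair(*destroy(tour), pts)
    if tour_len(cand, pts) < tour_len(tour, pts):    # accept improving moves
        tour = cand
print(round(tour_len(tour, pts), 3))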
|
[
{
"version": "v1",
"created": "Fri, 29 May 2015 11:53:20 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2016 11:59:58 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Breunig",
"Ulrich",
""
],
[
"Schmid",
"Verena",
""
],
[
"Hartl",
"Richard F.",
""
],
[
"Vidal",
"Thibaut",
""
]
] |
new_dataset
| 0.997774 |
1511.07033
|
Andrew Kent
|
Andrew M. Kent, David Kempe, Sam Tobin-Hochstadt
|
Occurrence Typing Modulo Theories
| null |
SIGPLAN Not. 51, 6 (June 2016), 296-309
|
10.1145/2980983.2908091
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new type system combining occurrence typing, previously used to
type check programs in dynamically-typed languages such as Racket, JavaScript,
and Ruby, with dependent refinement types. We demonstrate that the addition of
refinement types allows the integration of arbitrary solver-backed reasoning
about logical propositions from external theories. By building on occurrence
typing, we can add our enriched type system as an extension of Typed
Racket---adding dependency and refinement reuses the existing formalism while
increasing its expressiveness.
Dependent refinement types allow Typed Racket programmers to express rich
type relationships, ranging from data structure invariants such as red-black
tree balance to preconditions such as vector bounds. Refinements allow
programmers to embed the propositions that occurrence typing in Typed Racket
already reasons about into their types. Further, extending occurrence typing to
refinements allows us to make the underlying formalism simpler and more
powerful.
In addition to presenting the design of our system, we present a formal model
of the system, show how to integrate it with theories over both linear
arithmetic and bitvectors, and evaluate the system in the context of the full
Typed Racket implementation. Specifically, we take safe vector access as a case
study, and examine all vector accesses in a 56,000 line corpus of Typed Racket
programs. Our system is able to prove that 50% of these are safe with no new
annotation, and with a few annotations and modifications, we can capture close
to 80%.
|
[
{
"version": "v1",
"created": "Sun, 22 Nov 2015 16:54:32 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2016 17:57:24 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Kent",
"Andrew M.",
""
],
[
"Kempe",
"David",
""
],
[
"Tobin-Hochstadt",
"Sam",
""
]
] |
new_dataset
| 0.979346 |
1602.02070
|
Nauman Shahid
|
Nauman Shahid, Nathanael Perraudin, Gilles Puy, Pierre Vandergheynst
|
Compressive PCA for Low-Rank Matrices on Graphs
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel framework for an approximate recovery of data matrices
which are low-rank on graphs, from sampled measurements. The rows and columns
of such matrices belong to the span of the first few eigenvectors of the graphs
constructed between their rows and columns. We leverage this property to
recover the non-linear low-rank structures efficiently from sampled data
measurements, with a low cost (linear in n). First, a Resrtricted Isometry
Property (RIP) condition is introduced for efficient uniform sampling of the
rows and columns of such matrices based on the cumulative coherence of graph
eigenvectors. Secondly, a state-of-the-art fast low-rank recovery method is
suggested for the sampled data. Finally, several efficient, parallel and
parameter-free decoders are presented along with their theoretical analysis for
decoding the low-rank and cluster indicators for the full data matrix. Thus, we
overcome the computational limitations of the standard linear low-rank recovery
methods for big datasets. Our method can also be seen as a major step towards
efficient recovery of non-linear low-rank structures. For a matrix of size n X
p, on a single core machine, our method gains a speed up of $p^2/k$ over Robust
Principal Component Analysis (RPCA), where k << p is the subspace dimension.
Numerically, we can recover a low-rank matrix of size 10304 X 1000, 100 times
faster than Robust PCA.
|
[
{
"version": "v1",
"created": "Fri, 5 Feb 2016 15:51:34 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2016 10:51:25 GMT"
},
{
"version": "v3",
"created": "Mon, 2 May 2016 13:49:40 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Oct 2016 08:35:35 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Shahid",
"Nauman",
""
],
[
"Perraudin",
"Nathanael",
""
],
[
"Puy",
"Gilles",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] |
new_dataset
| 0.951643 |
1602.04650
|
Pauli Miettinen
|
Saskia Metzler, Stephan G\"unnemann and Pauli Miettinen
|
Hyperbolae Are No Hyperbole: Modelling Communities That Are Not Cliques
|
31 pages, 18 figures. This is an extended version of a paper of the
same title accepted for publication in the proceedings of the 2016 IEEE
International Conference on Data Mining (ICDM). For source code, see
http://people.mpi-inf.mpg.de/~pmiettin/hybobo/
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cliques are frequently used to model communities: a community is a set of
nodes where each pair is equally likely to be connected. But studying
real-world communities reveals that they have more structure than that. In
particular, the nodes can be ordered in such a way that (almost) all edges in
the community lie below a hyperbola. In this paper we present three new models
for communities that capture this phenomenon. Our models explain the structure
of the communities differently, but we also prove that they are identical in
their expressive power. Our models fit to real-world data much better than
traditional block models or previously-proposed hyperbolic models, both of
which are a special case of our model. Our models also allow for intuitive
interpretation of the parameters, enabling us to summarize the shapes of the
communities in graphs effectively.
|
[
{
"version": "v1",
"created": "Mon, 15 Feb 2016 12:28:20 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2016 14:25:32 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Metzler",
"Saskia",
""
],
[
"Günnemann",
"Stephan",
""
],
[
"Miettinen",
"Pauli",
""
]
] |
new_dataset
| 0.992455 |
1610.00527
|
Nal Kalchbrenner
|
Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka,
Oriol Vinyals, Alex Graves, Koray Kavukcuoglu
|
Video Pixel Networks
|
16 pages
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a probabilistic video model, the Video Pixel Network (VPN), that
estimates the discrete joint distribution of the raw pixel values in a video.
The model and the neural architecture reflect the time, space and color
structure of video tensors and encode it as a four-dimensional dependency
chain. The VPN approaches the best possible performance on the Moving MNIST
benchmark, a leap over the previous state of the art, and the generated videos
show only minor deviations from the ground truth. The VPN also produces
detailed samples on the action-conditional Robotic Pushing benchmark and
generalizes to the motion of novel objects.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2016 13:06:40 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Kalchbrenner",
"Nal",
""
],
[
"Oord",
"Aaron van den",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Danihelka",
"Ivo",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Graves",
"Alex",
""
],
[
"Kavukcuoglu",
"Koray",
""
]
] |
new_dataset
| 0.997679 |
1610.00845
|
Yun Fan
|
Yun Fan, Liang Zhang
|
Isometrically Self-dual Cyclic Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
General isometries of cyclic codes, including multipliers and translations,
are introduced; and isometrically self-dual cyclic codes are defined. In terms
of Type-I duadic splittings given by multipliers and translations, a necessary
and sufficient condition for the existence of isometrically self-dual cyclic
codes is obtained. A program to construct isometrically self-dual cyclic codes
is provided, and illustrated by several examples. In particular, a class of
isometrically self-dual MDS cyclic codes, which are alternant codes from a
class of generalized Reed-Solomon codes, is presented.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2016 05:08:56 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Fan",
"Yun",
""
],
[
"Zhang",
"Liang",
""
]
] |
new_dataset
| 0.985115 |
1610.00889
|
Jiayu Shu
|
Jiayu Shu, Rui Zheng, and Pan Hui
|
Cardea: Context-Aware Visual Privacy Protection from Pervasive Cameras
|
10 pages
| null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growing popularity of mobile and wearable devices with built-in cameras,
the bright prospect of camera related applications such as augmented reality
and life-logging system, the increased ease of taking and sharing photos, and
advances in computer vision techniques have greatly facilitated people's lives
in many aspects, but have also inevitably raised people's concerns about visual
privacy at the same time. Motivated by recent user studies that people's
privacy concerns are dependent on the context, in this paper, we propose
Cardea, a context-aware and interactive visual privacy protection framework
that enforces privacy protection according to people's privacy preferences. The
framework provides people with fine-grained visual privacy protection using:
i) personal privacy profiles, with which people can define their
context-dependent privacy preferences; ii) visual indicators, namely face
features, for devices to automatically locate individuals who request privacy
protection; and iii) hand gestures, for people to flexibly interact with
cameras to temporarily change
their privacy preferences. We design and implement the framework consisting of
the client app on Android devices and the cloud server. Our evaluation results
confirm that this framework is practical and effective, with 86% overall
accuracy, showing a promising future for context-aware visual privacy
protection from
pervasive cameras.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2016 08:01:27 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Shu",
"Jiayu",
""
],
[
"Zheng",
"Rui",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.985068 |
1610.00900
|
Long Yu
|
Long Yu, Qiong Huang, Hongwei Liu, Xiusheng Liu
|
Self-Dual Codes over $\mathbb{Z}_2\times (\mathbb{Z}_2+u\mathbb{Z}_2)$
|
18 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study self-dual codes over $\mathbb{Z}_2 \times
(\mathbb{Z}_2+u\mathbb{Z}_2) $, where $u^2=0$. Three types of self-dual codes
are defined. For each type, the possible values $\alpha,\beta$ such that there
exists a code $\mathcal{C}\subseteq \mathbb{Z}_{2}^\alpha\times
(\mathbb{Z}_2+u\mathbb{Z}_2)^\beta$ are established. We also present several
approaches to construct self-dual codes over $\mathbb{Z}_2 \times
(\mathbb{Z}_2+u\mathbb{Z}_2) $. Moreover, the structure of two-weight self-dual
codes is completely obtained for $\alpha \cdot\beta\neq 0$.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2016 09:05:01 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Yu",
"Long",
""
],
[
"Huang",
"Qiong",
""
],
[
"Liu",
"Hongwei",
""
],
[
"Liu",
"Xiusheng",
""
]
] |
new_dataset
| 0.98527 |
1610.00956
|
Ondrej Bajgar
|
Ondrej Bajgar, Rudolf Kadlec and Jan Kleindienst
|
Embracing data abundance: BookTest Dataset for Reading Comprehension
|
The first two authors contributed equally to this work. Submitted to
EACL 2017. Code and dataset are publicly available
| null | null | null |
cs.CL cs.AI cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a practically unlimited amount of natural language data available.
Still, recent work in text comprehension has focused on datasets which are
small relative to current computing possibilities. This article makes a case
for the community to move to larger data, and as a step in that direction it
proposes the BookTest, a new dataset similar to the popular Children's Book
Test (CBT) but more than 60 times larger. We show that training on
the new data improves the accuracy of our Attention-Sum Reader model on the
original CBT test data by a much larger margin than many recent attempts to
improve the model architecture. On one version of the dataset our ensemble even
exceeds the human baseline provided by Facebook. We then show in our own human
study that there is still space for further improvement.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2016 12:48:51 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Bajgar",
"Ondrej",
""
],
[
"Kadlec",
"Rudolf",
""
],
[
"Kleindienst",
"Jan",
""
]
] |
new_dataset
| 0.997753 |
1610.01096
|
Neil Shah
|
Neil Shah
|
FLOCK: Combating Astroturfing on Livestreaming Platforms
| null | null | null | null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Livestreaming platforms have become increasingly popular in recent years as a
means of sharing and advertising creative content. Popular content streamers
who attract large viewership to their live broadcasts can earn a living by
means of ad revenue, donations and channel subscriptions. Unfortunately, this
incentivized popularity has simultaneously resulted in incentive for fraudsters
to provide services to astroturf, or artificially inflate viewership metrics by
providing fake "live" views to customers. Our work provides a number of major
contributions: (a) formulation: we are the first to introduce and characterize
the viewbot fraud problem in livestreaming platforms, (b) methodology: we
propose FLOCK, a principled and unsupervised method which efficiently and
effectively identifies botted broadcasts and their constituent botted views,
and (c) practicality: our approach achieves over 98% precision in identifying
botted broadcasts and over 90% precision/recall against sizable synthetically
generated viewbot attacks on a real-world livestreaming workload of over 16
million views and 92 thousand broadcasts. FLOCK successfully operates on larger
datasets in practice and is regularly used at a large, undisclosed
livestreaming corporation.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2016 17:16:25 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Shah",
"Neil",
""
]
] |
new_dataset
| 0.999178 |
1610.01117
|
Pedro Neto
|
Mahmoud Tavakoli, Rafael Batista, Pedro Neto
|
A compact two-phase twisted string actuation system: Modeling and
validation
|
in Mechanism and Machine Theory, 2016
| null |
10.1016/j.mechmachtheory.2016.03.001
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a compact twisted string actuation system that
achieves a high contraction percentage (81%) on two phases: multi string twist
and overtwist. This type of system can be used in many robotic applications,
such as robotic hands and exoskeletons. The overtwist phase enables the
development of more compact actuators based on the twisted string systems.
Furthermore, by analyzing the previously developed mathematical models, we
found out that a constant radius model should be applied for the overtwisting
phase. Moreover, we propose an improvement of an existing model for prediction
of the radius of the multi string system after they twist around each other.
This model helps to better estimate the bundle diameter which results in a more
precise mathematical model for multi string systems. The model was validated by
performing experiments with 2, 4, 6 and 8 string systems. Finally, we performed
extensive life cycle tests with different loads and contractions to find out
the expected life of the system.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2016 18:19:05 GMT"
}
] | 2016-10-05T00:00:00 |
[
[
"Tavakoli",
"Mahmoud",
""
],
[
"Batista",
"Rafael",
""
],
[
"Neto",
"Pedro",
""
]
] |
new_dataset
| 0.998959 |
1512.01515
|
Or Litany
|
Or Litany, Tal Remez, Daniel Freedman, Lior Shapira, Alex Bronstein,
Ran Gal
|
ASIST: Automatic Semantically Invariant Scene Transformation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ASIST, a technique for transforming point clouds by replacing
objects with their semantically equivalent counterparts. Transformations of
this kind have applications in virtual reality, repair of fused scans, and
robotics. ASIST is based on a unified formulation of semantic labeling and
object replacement; both result from minimizing a single objective. We present
numerical tools for the efficient solution of this optimization problem. The
method is experimentally assessed on new datasets of both synthetic and real
point clouds, and is additionally compared to two recent works on object
replacement on data from the corresponding papers.
|
[
{
"version": "v1",
"created": "Fri, 4 Dec 2015 19:14:57 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Litany",
"Or",
""
],
[
"Remez",
"Tal",
""
],
[
"Freedman",
"Daniel",
""
],
[
"Shapira",
"Lior",
""
],
[
"Bronstein",
"Alex",
""
],
[
"Gal",
"Ran",
""
]
] |
new_dataset
| 0.962301 |
1610.00043
|
Felice Manganiello
|
Shuhong Gao, Fiona Knoll, Felice Manganiello and Gretchen Matthews
|
Codes for distributed storage from 3-regular graphs
|
13 pages, 4 figures, 1 table
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers distributed storage systems (DSSs) from a graph
theoretic perspective. A DSS is constructed by means of the path decomposition
of a 3-regular graph into P4 paths. The paths represent the disks of the DSS
and the edges of the graph act as the blocks of storage. We deduce the
properties of the DSS from a related graph and show their optimality.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 22:04:03 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Gao",
"Shuhong",
""
],
[
"Knoll",
"Fiona",
""
],
[
"Manganiello",
"Felice",
""
],
[
"Matthews",
"Gretchen",
""
]
] |
new_dataset
| 0.987831 |
1610.00311
|
Matilde Marcolli
|
Kevin Shu and Matilde Marcolli
|
Syntactic Structures and Code Parameters
|
14 pages, LaTeX, 12 png figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We assign binary and ternary error-correcting codes to the data of syntactic
structures of world languages and we study the distribution of code points in
the space of code parameters. We show that, while most codes populate the lower
region approximating a superposition of Thomae functions, there is a
substantial presence of codes above the Gilbert-Varshamov bound and even above
the asymptotic bound and the Plotkin bound. We investigate the dynamics induced
on the space of code parameters by spin glass models of language change, and
show that, in the presence of entailment relations between syntactic parameters
the dynamics can sometimes improve the code. For large sets of languages and
syntactic data, one can gain information on the spin glass dynamics from the
induced dynamics in the space of code parameters.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2016 16:54:41 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Shu",
"Kevin",
""
],
[
"Marcolli",
"Matilde",
""
]
] |
new_dataset
| 0.994324 |
1610.00318
|
Hamid Tizhoosh
|
H.R. Tizhoosh, Shujin Zhu, Hanson Lo, Varun Chaudhari, Tahmid Mehdi
|
MinMax Radon Barcodes for Medical Image Retrieval
|
To appear in proceedings of the 12th International Symposium on
Visual Computing, December 12-14, 2016, Las Vegas, Nevada, USA
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content-based medical image retrieval can support diagnostic decisions by
clinical experts. Examining similar images may provide clues to the expert to
remove uncertainties in his/her final diagnosis. Beyond conventional feature
descriptors, binary features in different ways have been recently proposed to
encode the image content. A recent proposal is "Radon barcodes" that employ
binarized Radon projections to tag/annotate medical images with content-based
binary vectors, called barcodes. In this paper, MinMax Radon barcodes are
introduced which are superior to "local thresholding" scheme suggested in the
literature. Using IRMA dataset with 14,410 x-ray images from 193 different
classes, the advantage of using MinMax Radon barcodes over \emph{thresholded}
Radon barcodes is demonstrated. The retrieval error for direct search drops by
more than 15\%. As well, SURF, as a well-established non-binary approach, and
BRISK, as a recent binary method are examined to compare their results with
MinMax Radon barcodes when retrieving images from IRMA dataset. The results
demonstrate that MinMax Radon barcodes are faster and more accurate when
applied on IRMA images.
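Editor's note: a hedged sketch of Radon-projection barcoding using
scikit-image; each projection is binarized against its own median here purely
as a simplified stand-in, since the abstract does not spell out the MinMax rule
(which differs from both this and local thresholding).

import numpy as np
from skimage.transform import radon

def radon_barcode(image, angles=(0, 45, 90, 135), bins=32):
    sino = radon(image, theta=list(angles), circle=False)
    code = []
    for k in range(sino.shape[1]):               # one barcode chunk per angle
        proj = np.interp(np.linspace(0, 1, bins),
                         np.linspace(0, 1, sino.shape[0]), sino[:, k])
        code.append(proj > np.median(proj))      # simplified binarization
    return np.concatenate(code).astype(np.uint8)

img = np.zeros((64, 64)); img[20:44, 28:36] = 1.0   # toy "x-ray"
print(radon_barcode(img).shape)                     # (128,) binary vector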
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2016 17:29:01 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Tizhoosh",
"H. R.",
""
],
[
"Zhu",
"Shujin",
""
],
[
"Lo",
"Hanson",
""
],
[
"Chaudhari",
"Varun",
""
],
[
"Mehdi",
"Tahmid",
""
]
] |
new_dataset
| 0.999482 |
1610.00320
|
Hamid Tizhoosh
|
S. Sharma, I. Umar, L. Ospina, D. Wong, H.R. Tizhoosh
|
Stacked Autoencoders for Medical Image Search
|
To appear in proceedings of the 12th International Symposium on
Visual Computing, December 12-14, 2016, Las Vegas, Nevada, USA
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical images can be a valuable resource for reliable information to support
medical diagnosis. However, the large volume of medical images makes it
challenging to retrieve relevant information given a particular scenario. To
solve this challenge, content-based image retrieval (CBIR) attempts to
characterize images (or image regions) with invariant content information in
order to facilitate image search. This work presents a feature extraction
technique for medical images using stacked autoencoders, which encode images to
binary vectors. The technique is applied to the IRMA dataset, a collection of
14,410 x-ray images in order to demonstrate the ability of autoencoders to
retrieve similar x-rays given test queries. Using IRMA dataset as a benchmark,
it was found that stacked autoencoders gave excellent results with a retrieval
error of 376 for 1,733 test images with a compression of 74.61%.
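Editor's note: a compact PyTorch sketch of an autoencoder whose sigmoid
bottleneck is thresholded into a binary code for retrieval; the depths and
sizes are illustrative, not the authors' configuration, and only a single
training step is shown.

import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(),
                    nn.Linear(256, 64), nn.Sigmoid())
dec = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                    nn.Linear(256, 1024))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(16, 1024)                        # stand-in for flattened x-ray crops
loss = nn.functional.mse_loss(dec(enc(x)), x)   # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()

codes = (enc(x) > 0.5).to(torch.uint8)          # binary codes used for search
print(codes.shape)                              # torch.Size([16, 64])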
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2016 17:34:02 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Sharma",
"S.",
""
],
[
"Umar",
"I.",
""
],
[
"Ospina",
"L.",
""
],
[
"Wong",
"D.",
""
],
[
"Tizhoosh",
"H. R.",
""
]
] |
new_dataset
| 0.99662 |
1610.00323
|
Victor Poupet
|
Ana\"el Grandjean, Victor Poupet
|
L-Convex Polyominoes are Recognizable in Real Time by 2D Cellular
Automata
| null |
Automata 2015: 127-140
|
10.1007/978-3-662-47221-7_10
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A polyomino is said to be L-convex if any two of its cells are connected by a
4-connected inner path that changes direction at most once. The 2-dimensional
language representing such polyominoes has been recently proved to be
recognizable by tiling systems by S. Brocchi, A. Frosini, R. Pinzani and S.
Rinaldi. In an attempt to compare recognition power of tiling systems and
cellular automata, we have proved that this language can be recognized by
2-dimensional cellular automata working on the von Neumann neighborhood in real
time.
Although the construction uses a characterization of L-convex polyominoes
that is similar to the one used for tiling systems, the real time constraint
which has no equivalent in terms of tilings requires the use of techniques that
are specific to cellular automata.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2016 17:43:38 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Grandjean",
"Anaël",
""
],
[
"Poupet",
"Victor",
""
]
] |
new_dataset
| 0.99475 |
1610.00333
|
Victor Poupet
|
Katsunobu Imai, Hisamichi Ishizaka, Victor Poupet
|
5-State Rotation-Symmetric Number-Conserving Cellular Automata are not
Strongly Universal
| null |
Automata 2014: 31-43
|
10.1007/978-3-319-18812-6_3
| null |
cs.FL nlin.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study two-dimensional rotation-symmetric number-conserving cellular
automata working on the von Neumann neighborhood (RNCA). It is known that such
automata with 4 states or less are trivial, so we investigate the possible
rules with 5 states. We give a full characterization of these automata and show
that they cannot be strongly Turing universal. However, we give examples of
constructions that allow embedding some Boolean circuit elements in a 5-state
RNCA.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2016 18:40:18 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Imai",
"Katsunobu",
""
],
[
"Ishizaka",
"Hisamichi",
""
],
[
"Poupet",
"Victor",
""
]
] |
new_dataset
| 0.999022 |
1610.00338
|
Victor Poupet
|
Ana\"el Grandjean, Victor Poupet
|
A Linear Acceleration Theorem for 2D Cellular Automata on all Complete
Neighborhoods
| null |
ICALP 2016: 115:1-115:12
|
10.4230/LIPIcs.ICALP.2016.115
| null |
cs.FL nlin.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear acceleration theorems are known for most computational models.
Although such results have been proved for two-dimensional cellular automata
working on specific neighborhoods, no general construction was known. We
present here a technique of linear acceleration for all two-dimensional
languages recognized by cellular automata working on complete neighborhoods.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2016 19:11:29 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Grandjean",
"Anaël",
""
],
[
"Poupet",
"Victor",
""
]
] |
new_dataset
| 0.966844 |
1610.00427
|
Chang-Hwan Son
|
Chang-Hwan Son, Xiao-Ping Zhang
|
Rain structure transfer using an exemplar rain image for synthetic rain
image generation
|
6 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter proposes a simple method of transferring rain structures of a
given exemplar rain image into a target image. Given the exemplar rain image
and its corresponding masked rain image, rain patches including rain structures
are extracted randomly, and then residual rain patches are obtained by
subtracting those rain patches from their mean patches. Next, residual rain
patches are selected randomly, and then added to the given target image along a
raster scanning direction. To decrease boundary artifacts around the added
patches on the target image, minimum error boundary cuts are found using
dynamic programming, and then blending is conducted between overlapping
patches. Our experiment shows that the proposed method can generate realistic
rain images that have similar rain structures in the exemplar images. Moreover,
it is expected that the proposed method can be used for rain removal. More
specifically, natural images and synthetic rain images generated via the
proposed method can be used to learn classifiers, for example, deep neural
networks, in a supervised manner.
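Editor's note: a minimal sketch of the residual-patch step only: subtract the
mean from randomly chosen exemplar rain patches and add the residuals to the
target along a raster scan. The minimum-error boundary cuts and blending
described above are omitted, and all images are synthetic stand-ins.

import numpy as np

def add_rain(target, rain, patch=16, seed=0):
    rng = np.random.default_rng(seed)
    H, W = rain.shape
    out = target.astype(np.float64).copy()
    for i in range(0, out.shape[0] - patch + 1, patch):       # raster scan
        for j in range(0, out.shape[1] - patch + 1, patch):
            y = rng.integers(0, H - patch)                    # random rain patch
            x = rng.integers(0, W - patch)
            p = rain[y:y+patch, x:x+patch]
            out[i:i+patch, j:j+patch] += p - p.mean()         # residual patch
    return np.clip(out, 0.0, 1.0)

rain = np.clip(np.random.default_rng(1).normal(0.5, 0.2, (128, 128)), 0, 1)
print(add_rain(np.full((64, 64), 0.4), rain).shape)           # (64, 64)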
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2016 06:58:43 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Son",
"Chang-Hwan",
""
],
[
"Zhang",
"Xiao-Ping",
""
]
] |
new_dataset
| 0.959018 |
1610.00552
|
Minjae Lee
|
Minjae Lee, Kyuyeon Hwang, Jinhwan Park, Sungwook Choi, Sungho Shin,
Wonyong Sung
|
FPGA-Based Low-Power Speech Recognition with Recurrent Neural Networks
|
Accepted to SiPS 2016
| null | null | null |
cs.CL cs.LG cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a neural network based real-time speech recognition (SR)
system is developed using an FPGA for very low-power operation. The implemented
system employs two recurrent neural networks (RNNs); one is a
speech-to-character RNN for acoustic modeling (AM) and the other is for
character-level language modeling (LM). The system also employs a statistical
word-level LM to improve the recognition accuracy. The results of the AM, the
character-level LM, and the word-level LM are combined using a fairly simple
N-best search algorithm instead of the hidden Markov model (HMM) based network.
The RNNs are implemented using massively parallel processing elements (PEs) for
low latency and high throughput. The weights are quantized to 6 bits to store
all of them in the on-chip memory of an FPGA. The proposed algorithm is
implemented on a Xilinx XC7Z045, and the system can operate much faster than
real-time.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 10:44:32 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Lee",
"Minjae",
""
],
[
"Hwang",
"Kyuyeon",
""
],
[
"Park",
"Jinhwan",
""
],
[
"Choi",
"Sungwook",
""
],
[
"Shin",
"Sungho",
""
],
[
"Sung",
"Wonyong",
""
]
] |
new_dataset
| 0.998582 |
1610.00572
|
Mauro Cettolo
|
Mauro Cettolo
|
An Arabic-Hebrew parallel corpus of TED talks
|
To appear in Proceedings of the AMTA 2016 Workshop on Semitic Machine
Translation (SeMaT)
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT3,
the Web inventory that repurposes the original content of the TED website in a
way which is more convenient for MT researchers. The benchmark consists of
about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately
aligned and rearranged in sentences, for a total of about 3.5M tokens per
language. Talks have been partitioned in train, development and test sets
similarly in all respects to the MT tasks of the IWSLT 2016 evaluation
campaign. In addition to describing the benchmark, we list the problems
encountered in preparing it and the novel methods designed to solve them.
Baseline MT results and some measures on sentence length are provided as an
extrinsic evaluation of the quality of the benchmark.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2016 14:44:58 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Cettolo",
"Mauro",
""
]
] |
new_dataset
| 0.999807 |
1610.00580
|
Jacob Abernethy
|
Jacob Abernethy (University of Michigan), Cyrus Anderson (University
of Michigan), Chengyu Dai (University of Michigan), Arya Farahi (University
of Michigan), Linh Nguyen (University of Michigan), Adam Rauh (University of
Michigan), Eric Schwartz (University of Michigan), Wenbo Shen (University of
Michigan), Guangsha Shi (University of Michigan), Jonathan Stroud (University
of Michigan), Xinyu Tan (University of Michigan), Jared Webb (University of
Michigan), Sheng Yang (University of Michigan)
|
Flint Water Crisis: Data-Driven Risk Assessment Via Residential Water
Testing
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recovery from the Flint Water Crisis has been hindered by uncertainty in both
the water testing process and the causes of contamination. In this work, we
develop an ensemble of predictive models to assess the risk of lead
contamination in individual homes and neighborhoods. To train these models, we
utilize a wide range of data sources, including voluntary residential water
tests, historical records, and city infrastructure data. Additionally, we use
our models to identify the most prominent factors that contribute to a high
risk of lead contamination. In this analysis, we find that lead service lines
are not the only factor that is predictive of the risk of lead contamination of
water. These results could be used to guide the long-term recovery efforts in
Flint, minimize the immediate damages, and improve resource-allocation
decisions for similar water infrastructure crises.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 14:31:11 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Abernethy",
"Jacob",
"",
"University of Michigan"
],
[
"Anderson",
"Cyrus",
"",
"University\n of Michigan"
],
[
"Dai",
"Chengyu",
"",
"University of Michigan"
],
[
"Farahi",
"Arya",
"",
"University\n of Michigan"
],
[
"Nguyen",
"Linh",
"",
"University of Michigan"
],
[
"Rauh",
"Adam",
"",
"University of\n Michigan"
],
[
"Schwartz",
"Eric",
"",
"University of Michigan"
],
[
"Shen",
"Wenbo",
"",
"University of\n Michigan"
],
[
"Shi",
"Guangsha",
"",
"University of Michigan"
],
[
"Stroud",
"Jonathan",
"",
"University\n of Michigan"
],
[
"Tan",
"Xinyu",
"",
"University of Michigan"
],
[
"Webb",
"Jared",
"",
"University of\n Michigan"
],
[
"Yang",
"Sheng",
"",
"University of Michigan"
]
] |
new_dataset
| 0.993053 |
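The ensemble risk model described in record 1610.00580 can be sketched as averaging the predicted probabilities of several scikit-learn classifiers over per-home features. The features and labels below are synthetic stand-ins, not the Flint data, and the choice of base models is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical per-home features: home age, service-line material flag,
# neighborhood income, nearby test results (all synthetic here).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

# Small ensemble: average the predicted lead-contamination probabilities.
models = [RandomForestClassifier(n_estimators=100, random_state=0),
          GradientBoostingClassifier(random_state=0)]
for m in models:
    m.fit(X, y)
risk = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
print("highest-risk homes:", np.argsort(risk)[-5:])
```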
1610.00620
|
Bechir Hamdaoui
|
Sherif Abdelwahab and Bechir Hamdaoui
|
FogMQ: A Message Broker System for Enabling Distributed, Internet-Scale
IoT Applications over Heterogeneous Cloud Platforms
| null | null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Excessive tail end-to-end latency occurs with conventional message brokers as
a result of having massive numbers of geographically distributed devices
communicate through a message broker. On the other hand, broker-less messaging
systems, though ensure low latency, are highly dependent on the limitation of
direct device-to-device (D2D) communication technologies, and cannot scale well
as large numbers of resource-limited devices exchange messages. In this paper,
we propose FogMQ, a cloud-based message broker system that overcomes the
limitations of conventional systems by enabling autonomous discovery,
self-deployment, and online migration of message brokers across heterogeneous
cloud platforms. For each device, FogMQ provides a high capacity device cloning
service that subscribes to device messages. The clones facilitate near-the-edge
data analytics in resourceful cloud compute nodes. Clones in FogMQ apply Flock,
an algorithm that mimics flocking behavior, allowing clones to dynamically
select and autonomously migrate to different heterogeneous cloud platforms in a
distributed manner (a toy decision-rule sketch follows this record).
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2016 16:23:05 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Abdelwahab",
"Sherif",
""
],
[
"Hamdaoui",
"Bechir",
""
]
] |
new_dataset
| 0.995469 |
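The abstract of record 1610.00620 does not detail the Flock algorithm, but the clone-migration idea can be hinted at with a toy latency-and-load-driven host selection rule. The host names, cost weights, and hysteresis penalty below are all hypothetical, not FogMQ's actual mechanism.

```python
def choose_host(current, hosts, rtt_ms, load, migrate_penalty_ms=5.0):
    """Pick the cloud host minimizing device RTT plus a load term.

    hosts: list of host names; rtt_ms / load: dicts keyed by host.
    """
    def cost(h):
        c = rtt_ms[h] + 10.0 * load[h]
        if h != current:
            c += migrate_penalty_ms   # hysteresis: avoid needless migrations
        return c
    return min(hosts, key=cost)

hosts = ["edge-a", "edge-b", "core"]
print(choose_host("core", hosts,
                  rtt_ms={"edge-a": 8, "edge-b": 12, "core": 40},
                  load={"edge-a": 0.7, "edge-b": 0.2, "core": 0.3}))
# -> "edge-b": low latency and lightly loaded, despite the migration penalty
```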
1610.00634
|
Anoop Kunchukuttan
|
Anoop Kunchukuttan and Pushpak Bhattacharyya
|
Orthographic Syllable as basic unit for SMT between Related Languages
|
7 pages, 1 figure, compiled with XeTex, to be published at the
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the use of the orthographic syllable, a variable-length
consonant-vowel sequence, as a basic unit of translation between related
languages which use abugida or alphabetic scripts. We show that orthographic
syllable level translation significantly outperforms models trained over other
basic units (word, morpheme and character) when training over small parallel
corpora.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2016 16:53:10 GMT"
}
] | 2016-10-04T00:00:00 |
[
[
"Kunchukuttan",
"Anoop",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
new_dataset
| 0.992642 |
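A rough sketch of the orthographic-syllable unit from record 1610.00634: maximal consonant*-vowel+ chunks of a word. This Latin-script regex approximation is an assumption for illustration; the paper segments abugida and alphabetic scripts using their native vowel inventories.

```python
import re

def orthographic_syllables(word, vowels="aeiou"):
    """Split a word into orthographic syllables: maximal C*V+ chunks.

    Trailing consonants with no vowel attach to the last syllable.
    """
    pattern = re.compile(rf"[^{vowels}]*[{vowels}]+", re.IGNORECASE)
    chunks = pattern.findall(word)
    rest = word[sum(map(len, chunks)):]
    if chunks and rest:
        chunks[-1] += rest
    return chunks or [word]

print(orthographic_syllables("translation"))  # ['tra', 'nsla', 'tion']
```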
1607.06029
|
Jordan Malof
|
Jordan M. Malof and Kyle Bradbury and Leslie M. Collins and Richard G.
Newell
|
Automatic Detection of Solar Photovoltaic Arrays in High Resolution
Aerial Imagery
|
11-page manuscript and 1 page of supplemental information, 10
figures; currently under review as a journal publication
| null |
10.1016/j.apenergy.2016.08.191
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The quantity of small scale solar photovoltaic (PV) arrays in the United
States has grown rapidly in recent years. As a result, there is substantial
interest in high quality information about the quantity, power capacity, and
energy generated by such arrays, including at a high spatial resolution (e.g.,
counties, cities, or even smaller regions). Unfortunately, existing methods for
obtaining this information, such as surveys and utility interconnection
filings, are limited in their completeness and spatial resolution. This work
presents a computer algorithm that automatically detects PV panels using very
high resolution color satellite imagery. The approach potentially offers a
fast, scalable method for obtaining accurate information on PV array location
and size, and at much higher spatial resolutions than are currently available.
The method is validated using a very large (135 km^2) collection of publicly
available [1] aerial imagery, with over 2,700 human annotated PV array
locations. The results demonstrate the algorithm is highly effective on a
per-pixel basis. It is likewise effective at object-level PV array detection,
but with significant potential for improvement in estimating the precise
shape/size of the PV arrays. These results are the first of their kind for the
detection of solar PV in aerial imagery, demonstrating the feasibility of the
approach and establishing a baseline performance for future investigations.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2016 17:07:53 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Malof",
"Jordan M.",
""
],
[
"Bradbury",
"Kyle",
""
],
[
"Collins",
"Leslie M.",
""
],
[
"Newell",
"Richard G.",
""
]
] |
new_dataset
| 0.99797 |
1609.09270
|
Bjorn Stenger
|
Jiu Xu, Bjorn Stenger, Tommi Kerola, Tony Tung
|
Pano2CAD: Room Layout From A Single Panorama Image
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a method of estimating the geometry of a room and the 3D
pose of objects from a single 360-degree panorama image. Assuming Manhattan
World geometry, we formulate the task as a Bayesian inference problem in which
we estimate positions and orientations of walls and objects. The method
combines surface normal estimation, 2D object detection and 3D object pose
estimation. Quantitative results are presented on a dataset of synthetically
generated 3D rooms containing objects, as well as on a subset of hand-labeled
images from the public SUN360 dataset.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 09:35:29 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2016 08:33:25 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Xu",
"Jiu",
""
],
[
"Stenger",
"Bjorn",
""
],
[
"Kerola",
"Tommi",
""
],
[
"Tung",
"Tony",
""
]
] |
new_dataset
| 0.999823 |
1609.09562
|
Edward Haeusler
|
Lew Gordeev and Edward Hermann Haeusler
|
NP vs PSPACE
|
30 pages, 6 figures
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a proof of the conjecture $\mathcal{NP}$ = $\mathcal{PSPACE}$ by
showing that arbitrary tautologies of Johansson's minimal propositional logic
admit "small" polynomial-size dag-like natural deductions in Prawitz's system
for minimal propositional logic. These "small" deductions arise from standard
"large"\ tree-like inputs by horizontal dag-like compression that is obtained
by merging distinct nodes labeled with identical formulas occurring in
horizontal sections of deductions involved. The underlying "geometric" idea: if
the height, $h\left( \partial \right) $ , and the total number of distinct
formulas, $\phi \left( \partial \right) $ , of a given tree-like deduction
$\partial$ of a minimal tautology $\rho$ are both polynomial in the length of
$\rho$, $\left| \rho \right|$, then the size of the horizontal dag-like
compression is at most $h\left( \partial \right) \times \phi \left( \partial
\right) $, and hence polynomial in $\left| \rho \right|$. The attached proof is
due to the first author, but it was the second author who proposed an initial
idea to attack a weaker conjecture $\mathcal{NP}= \mathcal{\mathit{co}NP}$ by
reductions in diverse natural deduction formalisms for propositional logic.
That idea included interactive use of minimal, intuitionistic and classical
formalisms, so its practical implementation was too involved. The attached
proof of $ \mathcal{NP}=\mathcal{PSPACE}$ runs inside the natural deduction
interpretation of Hudelmaier's cutfree sequent calculus for minimal logic.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 01:20:56 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Gordeev",
"Lew",
""
],
[
"Haeusler",
"Edward Hermann",
""
]
] |
new_dataset
| 0.998305 |
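The horizontal dag-like compression described in record 1609.09562 - merging nodes labeled with identical formulas at the same depth - can be illustrated with a toy node count. This only illustrates the h(∂) × φ(∂) size bound on a small tree; it is not a proof-checking procedure.

```python
from collections import defaultdict

def horizontal_compress(root):
    """Count dag nodes after merging equal-formula nodes at each depth.

    Nodes are (formula, [children]) tuples; the result is bounded by
    height * number-of-distinct-formulas, as in the abstract.
    """
    merged = defaultdict(set)              # depth -> set of formulas
    stack = [(root, 0)]
    while stack:
        (formula, children), depth = stack.pop()
        merged[depth].add(formula)
        stack.extend((c, depth + 1) for c in children)
    return sum(len(s) for s in merged.values())

# Tiny tree-like "deduction" with repeated subproofs of the same formula B.
b = ("B", [("A", [])])
proof = ("A -> B", [b, b, ("B", [("C", [])])])
print(horizontal_compress(proof))  # 4 dag nodes vs 7 tree nodes
```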
1609.09669
|
Chinnappillai Durairajan
|
N. Annamalai and C. Durairajan
|
Relative two-weight $\mathbb{Z}_2 \mathbb{Z}_4$-additive Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study relative two-weight $\mathbb{Z}_2
\mathbb{Z}_4$-additive codes. It is shown that the Gray image of a two-distance
$\mathbb{Z}_2 \mathbb{Z}_4$-additive code is a binary two-distance code and
that the Gray image of a relative two-weight $\mathbb{Z}_2
\mathbb{Z}_4$-additive code, with nontrivial binary part, is a linear binary
relative two-weight code. The structure of relative two-weight $\mathbb{Z}_2
\mathbb{Z}_4$-additive codes is described. Finally, we discuss the permutation
automorphism group of $\mathbb{Z}_2 \mathbb{Z}_4$-additive codes.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 11:01:40 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Annamalai",
"N.",
""
],
[
"Durairajan",
"C.",
""
]
] |
new_dataset
| 0.99818 |
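For record 1609.09669, the Gray map underlying the cited Gray-image results is the standard one on Z4 and is easy to state in code; the example codeword below is arbitrary.

```python
# Standard Gray map on Z4: 0->00, 1->01, 2->11, 3->10. A Z2Z4-additive
# codeword (x | y), x binary and y quaternary, maps to (x | gray(y)).
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(x, y):
    """Gray image of a Z2Z4 codeword: binary part x, quaternary part y."""
    out = list(x)
    for s in y:
        out.extend(GRAY[s % 4])
    return tuple(out)

print(gray_image((1, 0), (0, 3, 2)))  # (1, 0, 0, 0, 1, 0, 1, 1)
```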
1609.09718
|
Larisa Safina
|
Alexey Bandura, Nikita Kurilenko, Manuel Mazzara, Victor Rivera,
Larisa Safina, Alexander Tchitchigin
|
Jolie Community on the Rise
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Jolie is a programming language that follows the microservices paradigm. As
an open source project, it has built a worldwide community of developers - both
in industry and in academia - who take care of its development, continuously
improve its usability, and thereby broaden its adoption. In this paper, we
present some of the most recent results and work in progress made within our
research team.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 13:25:05 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Bandura",
"Alexey",
""
],
[
"Kurilenko",
"Nikita",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Rivera",
"Victor",
""
],
[
"Safina",
"Larisa",
""
],
[
"Tchitchigin",
"Alexander",
""
]
] |
new_dataset
| 0.955137 |
1609.09756
|
Katie O'Connell
|
Katie O'Connell (Georgia Institute of Technology), Yeji Lee (Georgia
Institute of Technology), Firaz Peer (Georgia Institute of Technology), Shawn
M. Staudaher (University of Wyoming), Alex Godwin (Georgia Institute of
Technology), Mackenzie Madden (Georgia Institute of Technology), Ellen Zegura
(Georgia Institute of Technology)
|
Making Public Safety Data Accessible in the Westside Atlanta Data
Dashboard
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Individual neighborhoods within large cities can benefit from independent
analysis of public data in the context of ongoing efforts to improve the
community. Yet existing tools for public data analysis and visualization are
often mismatched to community needs, for reasons including geographic
granularity that does not correspond to community boundaries, siloed data sets,
inaccurate assumptions about data literacy, and limited user input in design
and implementation phases. In Atlanta this need is being addressed through a
Data Dashboard developed under the auspices of the Westside Communities
Alliance (WCA), a partnership between Georgia Tech and community stakeholders.
In this paper we present an interactive analytic and visualization tool for
public safety data within the WCA Data Dashboard. We describe a human-centered
approach to understand the needs of users and to build accessible mapping tools
for visualization and analysis. The tools include a variety of overlays that
allow users to spatially correlate features of the built environment, such as
vacant properties with criminal activity as well as crime prevention efforts.
We are in the final stages of developing the first version of the tool, with
plans for a public release in fall of 2016.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 14:40:22 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"O'Connell",
"Katie",
"",
"Georgia Institute of Technology"
],
[
"Lee",
"Yeji",
"",
"Georgia\n Institute of Technology"
],
[
"Peer",
"Firaz",
"",
"Georgia Institute of Technology"
],
[
"Staudaher",
"Shawn M.",
"",
"University of Wyoming"
],
[
"Godwin",
"Alex",
"",
"Georgia Institute of\n Technology"
],
[
"Madden",
"Mackenzie",
"",
"Georgia Institute of Technology"
],
[
"Zegura",
"Ellen",
"",
"Georgia Institute of Technology"
]
] |
new_dataset
| 0.995663 |
1609.09786
|
Saurabha Tavildar
|
Saurabha R Tavildar
|
Bit-permuted coded modulation for polar codes
|
6 pages; 14 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of using polar codes with higher order modulation
over AWGN channels. Unlike prior work, we focus on using modulation independent
polar codes. That is, the polar codes are not re-designed based on the
modulation used. Instead, we propose bit-permuted coded modulation (BPCM): a
technique for using the multilevel coding (MLC) approach for an arbitrary polar
code. The BPCM technique exploits a natural connection between MLC and polar
codes. It involves applying bit permutations prior to mapping the polar code to
a higher order modulation. The bit permutations are designed, via density
evolution, to match the rates provided by various bit levels of the higher
order modulation to that of the polar code.
We demonstrate the performance of the BPCM technique using link simulations
and density evolution for the AWGN channel. We compare the BPCM technique with
the bit-interleaved coded modulation (BICM) technique. When using polar codes
designed for BPSK modulation, we show gains for BPCM over BICM with a random
interleaver of up to 0.2 dB, 0.7 dB, and 1.4 dB for 4-ASK, 8-ASK, and 16-ASK,
respectively. (A toy bit-permutation and mapping sketch follows this record.)
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2016 15:53:39 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Tavildar",
"Saurabha R",
""
]
] |
new_dataset
| 0.995712 |
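The BPCM idea in record 1609.09786 - permute the coded bits, then map bit pairs onto a higher-order constellation - can be sketched for 4-ASK with Gray labeling. The permutation below is arbitrary; in the paper it is designed via density evolution.

```python
import numpy as np

def map_4ask_gray(bits, perm):
    """Permute coded bits, then Gray-map bit pairs to 4-ASK {-3,-1,+1,+3}."""
    b = np.asarray(bits)[np.asarray(perm)]          # bit permutation
    pairs = b.reshape(-1, 2)                        # two bits per 4-ASK symbol
    gray = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
    return np.array([gray[tuple(p)] for p in pairs])

bits = [0, 1, 1, 0, 1, 1, 0, 0]
perm = [3, 0, 5, 2, 7, 4, 1, 6]                     # example permutation only
print(map_4ask_gray(bits, perm))                    # [-3  1 -1  3]
```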
1609.09796
|
Eranda Cela
|
Eranda Cela, Vladimir Deineko, Gerhard J. Woeginger
|
The multi-stripe travelling salesman problem
| null | null | null | null |
cs.DM math.CO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the classical Travelling Salesman Problem (TSP), the objective function
sums the costs for travelling from one city to the next city along the tour. In
the q-stripe TSP with q larger than 1, the objective function sums the costs
for travelling from one city to each of the next q cities along the tour. The
resulting q-stripe TSP generalizes the TSP and forms a special case of the
quadratic assignment problem. We analyze the computational complexity of the
q-stripe TSP for various classes of specially structured distance matrices. We
derive NP-hardness results as well as polynomially solvable cases. One of our
main results generalizes a well-known theorem of Kalmanson from the classical
TSP to the q-stripe TSP.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2016 14:27:54 GMT"
}
] | 2016-10-03T00:00:00 |
[
[
"Cela",
"Eranda",
""
],
[
"Deineko",
"Vladimir",
""
],
[
"Woeginger",
"Gerhard J.",
""
]
] |
new_dataset
| 0.998047 |
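The q-stripe objective in record 1609.09796 is simple to state in code: every city contributes the travel costs to each of its next q successors along the cyclic tour, and q = 1 recovers the classical TSP.

```python
def q_stripe_cost(tour, dist, q):
    """Objective of the q-stripe TSP: for each city, sum the costs to each
    of the next q cities along the (cyclic) tour."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + k) % n]]
               for i in range(n) for k in range(1, q + 1))

dist = [[0, 2, 9, 4],
        [2, 0, 6, 3],
        [9, 6, 0, 5],
        [4, 3, 5, 0]]
tour = [0, 1, 2, 3]
print(q_stripe_cost(tour, dist, q=1))  # 2 + 6 + 5 + 4 = 17 (classic TSP)
print(q_stripe_cost(tour, dist, q=2))  # 17 + (9 + 3 + 9 + 3) = 41
```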
1609.03176
|
Nikita Jain
|
Nikita Jain, Swati Gupta, Dhaval Patel
|
E3 : Keyphrase based News Event Exploration Engine
| null | null |
10.1145/2914586.2914611
| null |
cs.IR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents a novel system E3 for extracting keyphrases from news
content for the purpose of offering the news audience a broad overview of news
events, with especially high content volume. Given an input query, E3 extracts
keyphrases and enriches them by tagging, ranking, and finding roles for
frequently associated keyphrases. E3 also uses news publication dates to
assess the novelty and activeness of keyphrases, identifying the most
interesting and informative ones.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2016 15:59:35 GMT"
}
] | 2016-10-02T00:00:00 |
[
[
"Jain",
"Nikita",
""
],
[
"Gupta",
"Swati",
""
],
[
"Patel",
"Dhaval",
""
]
] |
new_dataset
| 0.994548 |
1609.09066
|
Unaiza Ahsan
|
Unaiza Ahsan (Georgia Institute of Technology), Oleksandra Sopova
(Kansas State University), Wes Stayton (Georgia Institute of Technology),
Bistra Dilkina (Georgia Institute of Technology)
|
Refugee Resettlement Housing Scout
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
According to the United Nations High Commissioner for Refugees (UNHCR), there
are 65.3 million forcibly displaced people in the world today, 21.5 million of
whom are refugees. This unprecedented refugee crisis has led countries to
accept refugee families and to resettle them. Diverse agencies are helping
refugees coming to the US resettle and start their new life in the country.
One of the first and most challenging steps of this process is to find
affordable housing that also meets a suite of additional constraints and
priorities. These include being within a mile of public transportation and near
schools, faith centers and international grocery stores. We detail an
interactive data-driven web-based tool, which incorporates in one consolidated
platform most of the needed information. The tool searches, filters and
demonstrates a list of possible housing locations, and allows for the dynamic
prioritization based on user-specified importance weights on the diverse
criteria. The platform was created in a partnership with New American Pathways,
a nonprofit that supports refugee resettlement in the metro Atlanta, but
exemplifies a methodology that can help many other organizations with similar
goals.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 04:48:20 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Ahsan",
"Unaiza",
"",
"Georgia Institute of Technology"
],
[
"Sopova",
"Oleksandra",
"",
"Kansas State University"
],
[
"Stayton",
"Wes",
"",
"Georgia Institute of Technology"
],
[
"Dilkina",
"Bistra",
"",
"Georgia Institute of Technology"
]
] |
new_dataset
| 0.99956 |
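The dynamic prioritization in record 1609.09066 amounts to ranking housing options by a user-weighted sum of criterion scores. A minimal sketch follows; the criterion names are illustrative, not the tool's actual fields.

```python
def rank_housing(options, weights):
    """Rank options by a user-weighted sum of criterion scores in [0, 1].

    options: {name: {criterion: score}}; weights: {criterion: importance}.
    """
    def score(feats):
        return sum(weights.get(c, 0.0) * v for c, v in feats.items())
    return sorted(options, key=lambda name: score(options[name]), reverse=True)

options = {
    "apt-1": {"affordability": 0.9, "transit": 0.4, "schools": 0.7},
    "apt-2": {"affordability": 0.6, "transit": 0.9, "schools": 0.8},
}
print(rank_housing(options, {"affordability": 0.5, "transit": 0.3, "schools": 0.2}))
# -> ['apt-2', 'apt-1']
```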
1609.09068
|
Christopher Engstr\"om
|
Christopher Engstr\"om, Sergei Silvestrov
|
Graph partitioning and a componentwise PageRank algorithm
|
25 pages, 7 figures (10 including subfigures)
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article we will present a graph partitioning algorithm which
partitions a graph into two different types of components: the well-known
`strongly connected components' as well as another type of components we call
`connected acyclic component'. We will give an algorithm based on Tarjan's
algorithm for finding strongly connected components used to find such a
partitioning. We will also show that the partitioning given by the algorithm is
unique and that the underlying graph can be represented as a directed acyclic
graph (similar to a pure strongly connected component partitioning).
In the second part we will show how such a partitioning of a graph can be
used to calculate PageRank of a graph effectively by calculating PageRank for
different components on the same `level' in parallel as well as allowing for
the use of different types of PageRank algorithms for different types of
components.
To evaluate the method we have calculated PageRank on four large example
graphs, comparing a basic approach with both serial and parallel
implementations of our algorithm. (A basic power-iteration sketch follows this
record.)
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 14:21:24 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Engström",
"Christopher",
""
],
[
"Silvestrov",
"Sergei",
""
]
] |
new_dataset
| 0.970177 |
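For record 1609.09068, the per-component solver referenced in the abstract can be the standard power iteration shown below; the Tarjan-based component partitioning itself is omitted from this sketch.

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    """Basic power-iteration PageRank on a float adjacency matrix (rows = from)."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    P[deg[:, 0] == 0] = 1.0 / n            # dangling nodes -> uniform jump
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

A = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
print(pagerank(A))
```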
1609.09167
|
Yiwei Zhang
|
Yiwei Zhang, Xin Wang, Hengjia Wei and Gennian Ge
|
On private information retrieval array codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a database, the private information retrieval (PIR) protocol allows a
user to make queries to several servers and retrieve a certain item of the
database via the feedbacks, without revealing the privacy of the specific item
to any single server. Classical models of PIR protocols require that each
server stores a whole copy of the database. Recently new PIR models are
proposed with coding techniques arising from distributed storage systems. In
these new models each server only stores a fraction $1/s$ of the whole
database, where $s>1$ is a given rational number. PIR array codes are recently
proposed by Fazeli, Vardy and Yaakobi to characterize the new models. Consider
a PIR array code with $m$ servers and the $k$-PIR property (which indicates
that these $m$ servers may emulate any efficient $k$-PIR protocol). The central
problem is to design PIR array codes with optimal rate $k/m$. Our contribution
to this problem is three-fold. First, for the case $1<s\le 2$, although PIR
array codes with optimal rate have been constructed recently by Blackburn and
Etzion, the number of servers in their construction is impractically large. We
determine the minimum number of servers admitting the existence of a PIR array
code with optimal rate for a certain range of parameters. Second, for the case
$s>2$, we derive a new upper bound on the rate of a PIR array code. Finally,
for the case $s>2$, we analyze a new construction by Blackburn and Etzion and
show that its rate is better than all the other existing constructions.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 01:40:05 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Zhang",
"Yiwei",
""
],
[
"Wang",
"Xin",
""
],
[
"Wei",
"Hengjia",
""
],
[
"Ge",
"Gennian",
""
]
] |
new_dataset
| 0.998651 |
1609.09211
|
Rohit Verma
|
Rohit Verma and Abhishek Srivastava
|
A Dynamic Web Service Registry Framework for Mobile Environments
|
Preprint Submitted to Arxiv
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advancements in technology have transformed mobile devices from being mere
communication widgets to versatile computing devices. Proliferation of these
hand held devices has made them a common means to access and process digital
information. Most web based applications are today available in a form that can
conveniently be accessed over mobile devices. However, webservices
(applications meant for consumption by other applications rather than humans)
are not as commonly provided and consumed over mobile devices. Facilitating
this and in effect realizing a service-oriented system over mobile devices has
the potential to further enhance the potential of mobile devices. One of the
major challenges in this integration is the lack of an efficient service
registry system that caters to issues associated with the dynamic and volatile
mobile environments. Existing service registry technologies designed for
traditional systems fall short of accommodating such issues. In this paper, we
propose a novel approach to manage service registry systems provided 'solely'
over mobile devices, and thus realising an SOA without the need for high-end
computing systems. The approach manages a dynamic service registry system in
the form of light weight and distributed registries. We assess the feasibility
of our approach by engineering and deploying a working prototype of the
proposed registry system over actual mobile devices. A comparative study of the
proposed approach and the traditional UDDI (Universal Description, Discovery,
and Integration) registry is also included. The evaluation of our framework has
shown promising results in terms of battery cost, scalability, and
interference with native applications.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 06:09:15 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Verma",
"Rohit",
""
],
[
"Srivastava",
"Abhishek",
""
]
] |
new_dataset
| 0.98542 |
1609.09236
|
Baokun Ding
|
Baokun Ding, Tao Zhang and Gennian Ge
|
Maximum Distance Separable Codes for $b$-Symbol Read Channels
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Yaakobi et al. introduced codes for $b$-symbol read channels, where
the read operation is performed as a consecutive sequence of $b>2$ symbols. In
this paper, we establish a Singleton-type bound on $b$-symbol codes. Codes
meeting the Singleton-type bound are called maximum distance separable (MDS)
codes, and they are optimal in the sense that they attain the maximal minimum
$b$-distance. Based on projective geometry and constacyclic codes, we construct
new families of linear MDS $b$-symbol codes over finite fields, and in some
sense completely determine the existence of linear MDS $b$-symbol codes over
finite fields for certain parameters. (A toy $b$-distance computation follows
this record.)
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 07:39:31 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Ding",
"Baokun",
""
],
[
"Zhang",
"Tao",
""
],
[
"Ge",
"Gennian",
""
]
] |
new_dataset
| 0.999456 |
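The b-symbol metric underlying record 1609.09236 can be computed directly by comparing all cyclic length-b windows of two codewords; a toy minimum 2-distance computation follows (the example code is arbitrary, not one of the paper's constructions).

```python
from itertools import combinations

def b_distance(u, v, b):
    """b-symbol distance: number of positions whose length-b cyclic reads
    (u_i, ..., u_{i+b-1}) and (v_i, ..., v_{i+b-1}) differ; b = 1 is Hamming."""
    n = len(u)
    return sum(
        tuple(u[(i + j) % n] for j in range(b))
        != tuple(v[(i + j) % n] for j in range(b))
        for i in range(n)
    )

code = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 1, 1, 0)]
d = min(b_distance(u, v, b=2) for u, v in combinations(code, 2))
print("minimum 2-distance:", d)  # 3 for this toy code
```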
1609.09253
|
Ivan Grechikhin
|
Ivan S. Grechikhin
|
Heuristic with elements of tabu search for Truck and Trailer Routing
Problem
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle Routing Problem is a well-known problem in logistics and
transportation, and the variety of such problems is explained by the fact that
it occurs in many real-life situations. It is an NP-hard combinatorial
optimization problem and finding an exact optimal solution is practically
impossible. In this work, the Site-Dependent Truck and Trailer Routing Problem
with hard and soft Time Windows and Split Deliveries (SDTTRPTWSD) is
considered. In
this article, we develop a heuristic with the elements of Tabu Search for
solving SDTTRPTWSD. The heuristic uses the concept of neighborhoods and visits
infeasible solutions during the search. A greedy heuristic is applied to
construct an initial solution.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 08:37:48 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Grechikhin",
"Ivan S.",
""
]
] |
new_dataset
| 0.959385 |
1609.09294
|
Pengfei Xuan
|
Pengfei Xuan, Feng Luo, Rong Ge, Pradip K Srimani
|
DynIMS: A Dynamic Memory Controller for In-memory Storage on HPC Systems
|
5 pages, 8 figures, short paper
| null | null | null |
cs.PF cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to boost the performance of data-intensive computing on HPC systems,
in-memory computing frameworks, such as Apache Spark and Flink, use local DRAM
for data storage. Optimizing the memory allocation to data storage is critical
to delivering performance to traditional HPC compute jobs and throughput to
data-intensive applications sharing the HPC resources. Current practices that
statically configure in-memory storage may leave inadequate space for compute
jobs or lose the opportunity to utilize more available space for data-intensive
applications. In this paper, we explore techniques to dynamically adjust
in-memory storage, leaving the right amount of space for compute jobs. We have
developed a dynamic memory controller, DynIMS, which infers memory demands of
compute tasks online and employs a feedback-based control model to adapt the
capacity of in-memory storage. We test DynIMS using mixed HPCC and Spark
workloads on an HPC cluster. Experimental results show that DynIMS can achieve
up to 5X performance improvement compared to systems with static memory
allocations.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 10:41:26 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Xuan",
"Pengfei",
""
],
[
"Luo",
"Feng",
""
],
[
"Ge",
"Rong",
""
],
[
"Srimani",
"Pradip K",
""
]
] |
new_dataset
| 0.994016 |
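The feedback-based control model in record 1609.09294 is not specified in the abstract; a proportional controller over observed free memory is one plausible sketch. The gain, target, and bounds below are made up for illustration and may differ from DynIMS's actual controller.

```python
def adjust_storage(capacity_gb, free_gb, target_free_gb=4.0, gain=0.5,
                   min_gb=1.0, max_gb=64.0):
    """One proportional-control step: shrink in-memory storage when compute
    tasks leave too little free memory, grow it back when there is slack."""
    error = free_gb - target_free_gb          # >0: slack, <0: memory pressure
    new_cap = capacity_gb + gain * error
    return max(min_gb, min(max_gb, new_cap))

cap = 32.0
for free in [6.0, 2.0, 1.0, 5.0]:             # observed free memory over time
    cap = adjust_storage(cap, free)
    print(f"free={free:>4} GB -> storage capacity={cap:.1f} GB")
```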
1609.09340
|
Elena Alfaro Martinez
|
Elena Alfaro Martinez (BBVA Data & Analytics), Maria Hernandez Rubio
(BBVA Data & Analytics), Roberto Maestre Martinez (BBVA Data & Analytics),
Juan Murillo Arias (BBVA Data & Analytics), Dario Patane (BBVA Data &
Analytics), Amanda Zerbe (United Nations Global Pulse), Robert Kirkpatrick
(United Nations Global Pulse), Miguel Luengo-Oroz (United Nations Global
Pulse), Amanda Zerbe (United Nations Global Pulse)
|
Measuring Economic Resilience to Natural Disasters with Big Economic
Transaction Data
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This research explores the potential to analyze bank card payments and ATM
cash withdrawals in order to map and quantify how people are impacted by and
recover from natural disasters. Our approach defines a disaster-affected
community's economic recovery time as the time needed to return to baseline
activity levels in terms of number of bank card payments and ATM cash
withdrawals. For Hurricane Odile, which hit the state of Baja California Sur
(BCS) in Mexico between 15 and 17 September 2014, we measured and mapped
communities' economic recovery time, which ranged from 2 to 40 days in
different locations. We found that -- among individuals with a bank account --
the lower the income level, the shorter the time needed for economic activity
to return to normal levels. Gender differences in recovery times were also
detected and quantified. In addition, our approach evaluated how communities
prepared for the disaster by quantifying expenditure growth in food or gasoline
before the hurricane struck. We believe this approach opens a new frontier in
measuring the economic impact of disasters with high temporal and spatial
resolution, and in understanding how populations bounce back and adapt.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 01:20:23 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Martinez",
"Elena Alfaro",
"",
"BBVA Data & Analytics"
],
[
"Rubio",
"Maria Hernandez",
"",
"BBVA Data & Analytics"
],
[
"Martinez",
"Roberto Maestre",
"",
"BBVA Data & Analytics"
],
[
"Arias",
"Juan Murillo",
"",
"BBVA Data & Analytics"
],
[
"Patane",
"Dario",
"",
"BBVA Data &\n Analytics"
],
[
"Zerbe",
"Amanda",
"",
"United Nations Global Pulse"
],
[
"Kirkpatrick",
"Robert",
"",
"United Nations Global Pulse"
],
[
"Luengo-Oroz",
"Miguel",
"",
"United Nations Global\n Pulse"
],
[
"Zerbe",
"Amanda",
"",
"United Nations Global Pulse"
]
] |
new_dataset
| 0.991086 |
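The recovery-time definition in record 1609.09340 translates directly into code: count the days until daily transaction counts return to the pre-event baseline. The 90% threshold and the series below are assumptions for illustration, not the study's data.

```python
import numpy as np

def recovery_time(series, event_idx, baseline_idx, frac=0.9):
    """Days until activity returns to `frac` of the pre-event baseline mean.

    series: daily counts (card payments or ATM withdrawals).
    """
    baseline = np.mean(series[baseline_idx])
    for day, value in enumerate(series[event_idx:]):
        if value >= frac * baseline:
            return day
    return None                                # never recovered in the window

daily = np.array([100, 98, 103, 101, 30, 42, 65, 80, 95, 102])
print(recovery_time(daily, event_idx=4, baseline_idx=slice(0, 4)))  # 4
```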
1609.09454
|
Eric Graves
|
Eric Graves, Paul Yu, Predrag Spasojevic
|
Keyless authentication in the presence of a simultaneously transmitting
adversary
|
Pre-print. Paper presented at ITW 2016 Cambridge
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
If Alice must communicate with Bob over a channel shared with the adversarial
Eve, then Bob must be able to validate the authenticity of the message. In
particular we consider the model where Alice and Eve share a discrete
memoryless multiple access channel with Bob, thus allowing simultaneous
transmissions from Alice and Eve. By traditional random coding arguments, we
demonstrate an inner bound on the rate at which Alice may transmit, while still
granting Bob the ability to authenticate. Furthermore, this is accomplished
even though Alice and Bob lack a pre-shared key, and even though Eve has prior
knowledge of both the codebook Alice and Bob share and the messages Alice
transmits.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 18:24:35 GMT"
}
] | 2016-09-30T00:00:00 |
[
[
"Graves",
"Eric",
""
],
[
"Yu",
"Paul",
""
],
[
"Spasojevic",
"Predrag",
""
]
] |
new_dataset
| 0.994633 |
1301.2715
|
Joseph Antonides
|
Joseph Antonides and Toshiro Kubota
|
Binocular disparity as an explanation for the moon illusion
| null | null | null | null |
cs.CV physics.pop-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present another explanation for the moon illusion, the phenomenon in which
the moon looks larger near the horizon than near the zenith. In our model of
the moon illusion, the sky is considered a spatially-contiguous and
geometrically-smooth surface. When an object such as the moon breaks the
contiguity of the surface, instead of perceiving the object as appearing
through a hole in the surface, humans perceive an occlusion of the surface.
Binocular vision dictates that the moon is distant, but this perception model
contradicts our binocular vision, dictating that the moon is closer than the
sky. To resolve the contradiction, the brain distorts the projections of the
moon to increase the binocular disparity, which results in an increase in the
perceived size of the moon. The degree of distortion depends upon the apparent
distance to the sky, which is influenced by the surrounding objects and the
condition of the sky. As the apparent distance to the sky decreases, the
illusion becomes stronger. At the horizon, apparent distance to the sky is
minimal, whereas at the zenith, few distance cues are present, causing
difficulty with distance estimation and weakening the illusion.
|
[
{
"version": "v1",
"created": "Sat, 12 Jan 2013 20:12:09 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2016 04:56:31 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Antonides",
"Joseph",
""
],
[
"Kubota",
"Toshiro",
""
]
] |
new_dataset
| 0.974988 |
1506.02306
|
Shibamouli Lahiri
|
Shibamouli Lahiri
|
SQUINKY! A Corpus of Sentence-level Formality, Informativeness, and
Implicature
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a corpus of 7,032 sentences rated by human annotators for
formality, informativeness, and implicature on a 1-7 scale. The corpus was
annotated using Amazon Mechanical Turk. Reliability in the obtained judgments
was examined by comparing mean ratings across two MTurk experiments, and
correlation with pilot annotations (on sentence formality) conducted in a more
controlled setting. Despite the subjectivity and inherent difficulty of the
annotation task, correlations between mean ratings were quite encouraging,
especially on formality and informativeness. We further explored correlation
between the three linguistic variables, genre-wise variation of ratings and
correlations within genres, compatibility with automatic stylistic scoring, and
sentential make-up of a document in terms of style. To date, our corpus is the
largest sentence-level annotated corpus released for formality,
informativeness, and implicature.
|
[
{
"version": "v1",
"created": "Sun, 7 Jun 2015 19:54:00 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2016 23:54:06 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Lahiri",
"Shibamouli",
""
]
] |
new_dataset
| 0.998156 |
1510.03232
|
St\'ephane Caron
|
St\'ephane Caron, Quang-Cuong Pham and Yoshihiko Nakamura
|
ZMP support areas for multi-contact mobility under frictional
constraints
|
14 pages, 10 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for checking and enforcing multi-contact stability based
on the Zero-tilting Moment Point (ZMP). The key to our development is the
generalization of ZMP support areas to take into account (a) frictional
constraints and (b) multiple non-coplanar contacts. We introduce and
investigate two kinds of ZMP support areas. First, we characterize and provide
a fast geometric construction for the support area generated by valid contact
forces, with no other constraint on the robot motion. We call this set the full
support area. Next, we consider the control of humanoid robots using the Linear
Pendulum Mode (LPM). We observe that the constraints stemming from the LPM
induce a shrinking of the support area, even for walking on horizontal floors.
We propose an algorithm to compute the new area, which we call pendular support
area. We show that, in the LPM, having the ZMP in the pendular support area is
a necessary and sufficient condition for contact stability. Based on these
developments, we implement a whole-body controller and generate feasible
multi-contact motions where an HRP-4 humanoid locomotes in challenging
multi-contact scenarios.
|
[
{
"version": "v1",
"created": "Mon, 12 Oct 2015 11:19:36 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2016 07:53:28 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Caron",
"Stéphane",
""
],
[
"Pham",
"Quang-Cuong",
""
],
[
"Nakamura",
"Yoshihiko",
""
]
] |
new_dataset
| 0.98536 |
1609.08650
|
Harishchandra Dubey
|
P. K. Ray, B. K. Panigrahi, P. K. Rout, A. Mohanty, H. Dubey
|
Detection of Faults in Power System Using Wavelet Transform and
Independent Component Analysis
|
5 pages, 6 figures, Table 1
|
First International Conference on Advancement of Computer
Communication & Electrical Technology, October 2016, Murshidabad, India
|
10.13140/RG.2.2.20394.82882
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maintaining an uninterruptible power supply is a central goal of power utility
companies; it motivates them to identify and locate different types of faults
as quickly as possible, using intelligent techniques, so as to protect the
power system and prevent complete blackouts. Accordingly, the present work
presents a novel method for detecting fault disturbances based on the Wavelet
Transform (WT) and Independent Component Analysis (ICA). The voltage signal is
taken offline under fault conditions and processed through WT and ICA for
detection. The time-frequency resolution of the WT detects the fault
initiation instant in the signal. In addition, a performance index calculated
from the independent component analysis under fault conditions is used to
detect the fault disturbance in the voltage signal. The proposed
approach is tested to be robust enough under various operating scenarios like
without noise, with 20-dB noise and variation in frequency. Further, the
detection study is carried out using a performance index, energy content, by
applying the existing Fourier transform (FT), short time Fourier transform
(STFT) and the proposed wavelet transform. Fault disturbances are detected if
the energy calculated in each scenario is greater than the corresponding
threshold value. The fault detection study is simulated in MATLAB/Simulink for
a typical power system.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2016 07:17:42 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Ray",
"P. K.",
""
],
[
"Panigrahi",
"B. K.",
""
],
[
"Rout",
"P. K.",
""
],
[
"Mohanty",
"A.",
""
],
[
"Dubey",
"H.",
""
]
] |
new_dataset
| 0.997906 |
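The WT-based detection step of record 1609.08650 can be sketched with PyWavelets (assumed available): flag a disturbance when the energy of the finest-scale detail coefficients exceeds a threshold. The wavelet, level, threshold, and test signal below are assumptions, not the paper's calibration.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def fault_detected(signal, threshold, wavelet="db4", level=4):
    """Flag a fault when finest-scale wavelet detail energy exceeds a threshold.

    In practice the threshold would be calibrated on healthy recordings.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    d1 = coeffs[-1]                       # finest-scale detail coefficients
    energy = float(np.sum(d1 ** 2))
    return energy > threshold, energy

fs = 3200
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[320:] *= 0.3                            # synthetic voltage sag at t = 0.1 s
print(fault_detected(v, threshold=1e-3))
```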
1609.08675
|
Sami Abu-El-Haija
|
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George
Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
|
YouTube-8M: A Large-Scale Video Classification Benchmark
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no video classification datasets of comparable size.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and made both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 21:21:49 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Abu-El-Haija",
"Sami",
""
],
[
"Kothari",
"Nisarg",
""
],
[
"Lee",
"Joonseok",
""
],
[
"Natsev",
"Paul",
""
],
[
"Toderici",
"George",
""
],
[
"Varadarajan",
"Balakrishnan",
""
],
[
"Vijayanarasimhan",
"Sudheendra",
""
]
] |
new_dataset
| 0.999853 |
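The video-level baseline implied by record 1609.08675 - mean-pool per-frame CNN features, then train a simple classifier - can be sketched as follows. The random features and the binary label are toy stand-ins for the released frame features and Knowledge Graph labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_videos, n_frames, dim = 200, 30, 128        # toy sizes, not the real 8M scale
frame_feats = rng.normal(size=(n_videos, n_frames, dim)).astype(np.float32)
video_feats = frame_feats.mean(axis=1)        # mean pooling over 1-fps frames
labels = (video_feats[:, 0] > 0).astype(int)  # synthetic stand-in label

# One-vs-rest logistic regression per label is a common simple baseline.
clf = LogisticRegression(max_iter=1000).fit(video_feats, labels)
print("train accuracy:", clf.score(video_feats, labels))
```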