id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2307.14469
|
Emily Escamilla
|
Emily Escamilla, Lamia Salsabil, Martin Klein, Jian Wu, Michele C.
Weigle, Michael L. Nelson
|
It's Not Just GitHub: Identifying Data and Software Sources Included in
Publications
|
13 pages, 7 figures, pre-print of publication for Theory and Practice
of Digital Libraries 2023
| null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Paper publications are no longer the only form of research product. Due to
recent initiatives by publication venues and funding institutions, open access
datasets and software products are increasingly considered research products
and URIs to these products are growing more prevalent in scholarly
publications. However, as with all URIs, resources found on the live Web are
not permanent. Archivists and institutions including Software Heritage,
Internet Archive, and Zenodo are working to preserve data and software products
as valuable parts of reproducibility, a cornerstone of scientific research.
While some hosting platforms are well-known and can be identified with regular
expressions, there are a vast number of smaller, more niche hosting platforms
utilized by researchers to host their data and software. If it is not feasible
to manually identify all hosting platforms used by researchers, how can we
identify URIs to open-access data and software (OADS) to aid in their
preservation? We used a hybrid classifier to classify URIs as OADS URIs and
non-OADS URIs. We found that URIs to Git hosting platforms (GHPs) including
GitHub, GitLab, SourceForge, and Bitbucket accounted for 33\% of OADS URIs.
Non-GHP OADS URIs are distributed across almost 50,000 unique hostnames. We
determined that using a hybrid classifier allows for the identification of OADS
URIs in less common hosting platforms which can benefit discoverability for
preserving datasets and software products as research products for
reproducibility.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 19:17:02 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Escamilla",
"Emily",
""
],
[
"Salsabil",
"Lamia",
""
],
[
"Klein",
"Martin",
""
],
[
"Wu",
"Jian",
""
],
[
"Weigle",
"Michele C.",
""
],
[
"Nelson",
"Michael L.",
""
]
] |
new_dataset
| 0.994223 |
2307.14487
|
Haipeng Yu
|
Jin Wang, Yu Hu, Lirong Xiang, Gota Morota, Samantha A. Brooks,
Carissa L. Wickens, Emily K. Miller-Cushon, and Haipeng Yu
|
Technical note: ShinyAnimalCV: open-source cloud-based web application
for object detection, segmentation, and three-dimensional visualization of
animals using computer vision
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Computer vision (CV), a non-intrusive and cost-effective technology, has
furthered the development of precision livestock farming by enabling optimized
decision-making through timely and individualized animal care. The availability
of affordable two- and three-dimensional camera sensors, combined with various
machine learning and deep learning algorithms, has provided a valuable
opportunity to improve livestock production systems. However, despite the
availability of various CV tools in the public domain, applying these tools to
animal data can be challenging, often requiring users to have programming and
data analysis skills, as well as access to computing resources. Moreover, the
rapid expansion of precision livestock farming is creating a growing need to
educate and train animal science students in CV. This presents educators with
the challenge of efficiently demonstrating the complex algorithms involved in
CV. Thus, the objective of this study was to develop ShinyAnimalCV, an
open-source cloud-based web application. This application provides a
user-friendly interface for performing CV tasks, including object segmentation,
detection, three-dimensional surface visualization, and extraction of two- and
three-dimensional morphological features. Nine pre-trained CV models using
top-view animal data are included in the application. ShinyAnimalCV has been
deployed online using cloud computing platforms. The source code of
ShinyAnimalCV is available on GitHub, along with detailed documentation on
training CV models using custom data and deploying ShinyAnimalCV locally to
allow users to fully leverage the capabilities of the application.
ShinyAnimalCV can contribute to CV research and teaching in the animal science
community.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 20:25:29 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Wang",
"Jin",
""
],
[
"Hu",
"Yu",
""
],
[
"Xiang",
"Lirong",
""
],
[
"Morota",
"Gota",
""
],
[
"Brooks",
"Samantha A.",
""
],
[
"Wickens",
"Carissa L.",
""
],
[
"Miller-Cushon",
"Emily K.",
""
],
[
"Yu",
"Haipeng",
""
]
] |
new_dataset
| 0.974409 |
2307.14489
|
Canyu Zhang
|
Canyu Zhang, Qing Guo, Xiaoguang Li, Renjie Wan, Hongkai Yu, Ivor
Tsang, Song Wang
|
SuperInpaint: Learning Detail-Enhanced Attentional Implicit
Representation for Super-resolutional Image Inpainting
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce a challenging image restoration task, referred to
as SuperInpaint, which aims to reconstruct missing regions in low-resolution
images and generate completed images with arbitrarily higher resolutions. We
have found that this task cannot be effectively addressed by stacking
state-of-the-art super-resolution and image inpainting methods as they amplify
each other's flaws, leading to noticeable artifacts. To overcome these
limitations, we propose the detail-enhanced attentional implicit representation
(DEAR) that can achieve SuperInpaint with a single model, resulting in
high-quality completed images with arbitrary resolutions. Specifically, we use
a deep convolutional network to extract the latent embedding of an input image
and then enhance the high-frequency components of the latent embedding via an
adaptive high-pass filter. This leads to detail-enhanced semantic embedding. We
further feed the semantic embedding into an unmask-attentional module that
suppresses embeddings from ineffective masked pixels. Additionally, we extract
a pixel-wise importance map that indicates which pixels should be used for
image reconstruction. Given the coordinates of a pixel we want to reconstruct,
we first collect its neighboring pixels in the input image and extract their
detail-enhanced semantic embeddings, unmask-attentional semantic embeddings,
importance values, and spatial distances to the desired pixel. Then, we feed
all the above terms into an implicit representation and generate the color of
the specified pixel. To evaluate our method, we extend three existing datasets
for this new task and build 18 meaningful baselines using SOTA inpainting and
super-resolution methods. Extensive experimental results demonstrate that our
method outperforms all existing methods by a significant margin on four widely
used metrics.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 20:28:58 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Zhang",
"Canyu",
""
],
[
"Guo",
"Qing",
""
],
[
"Li",
"Xiaoguang",
""
],
[
"Wan",
"Renjie",
""
],
[
"Yu",
"Hongkai",
""
],
[
"Tsang",
"Ivor",
""
],
[
"Wang",
"Song",
""
]
] |
new_dataset
| 0.996866 |
2307.14541
|
Cristina Gena
|
Davide D'Adamo, Emiliano Robert, Cristina Gena, Silvestro Roatta
|
Novel BCI paradigm for ALS patients based on EEG and Pupillary
Accommodative Response
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Brain-computer interfaces (BCIs) are one of the few alternatives to enable
locked-in syndrome (LIS) patients to communicate with the external world, while
they are the only solution for complete locked-in syndrome (CLIS) patients, who
lost the ability to control eye movements. However, successful usage of
endogenous electroencephalogram (EEG)-based BCI applications is often not
trivial, due to EEG variations between and within sessions and long user
training required. In this work we suggest an approach to deal with this two
main limitations of EEG-BCIs by inserting a progressive and expandable
neurofeedback training program, able to continuously tailor the classifier to
the specific user, into a multimodal BCI paradigm. We propose indeed the
integration of EEG with a non-brain signal: the pupillary accommodative
response (PAR). The PAR is a change in pupil size associated with gaze shifts
from far to close targets; it is not governed by the somatic nervous system and
is thus potentially preserved after the evolution from LIS to CLIS, which often
occurs in neurodegenerative diseases, such as amyotrophic lateral sclerosis.
Multimodal BCIs have been broadly investigated in the literature, due to their
ability to yield better overall control performances, but this would be the
first attempt combining EEG and PAR. In the context of the BciPar4Sla, we are
exploiting these two signals, with the aim of developing a more reliable BCI,
adaptive to the extent of evolving together with the user's ability to elicit
the brain phenomena needed for optimal control, and providing support even in
the transition from LIS to CLIS.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 23:15:50 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"D'Adamo",
"Davide",
""
],
[
"Robert",
"Emiliano",
""
],
[
"Gena",
"Cristina",
""
],
[
"Roatta",
"Silvestro",
""
]
] |
new_dataset
| 0.98245 |
2307.14549
|
Jianjun Yuan
|
Jianjun Yuan and Wei Lee Woon and Ludovik Coba
|
Adversarial Sleeping Bandit Problems with Multiple Plays: Algorithm and
Ranking Application
|
Accepted by RecSys 2023 conference
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents an efficient algorithm to solve the sleeping bandit with
multiple plays problem in the context of an online recommendation system. The
problem involves bounded, adversarial loss and unknown i.i.d. distributions for
arm availability. The proposed algorithm extends the sleeping bandit algorithm
for single arm selection and is guaranteed to achieve theoretical performance
with regret upper bounded by $\mathcal{O}(kN^2\sqrt{T\log T})$, where $k$ is the
number of arms selected per time step, $N$ is the total number of arms, and $T$
is the time horizon.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 00:11:59 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Yuan",
"Jianjun",
""
],
[
"Woon",
"Wei Lee",
""
],
[
"Coba",
"Ludovik",
""
]
] |
new_dataset
| 0.994044 |
2307.14570
|
Sandika Biswas
|
Sandika Biswas, Kejie Li, Biplab Banerjee, Subhasis Chaudhuri, Hamid
Rezatofighi
|
Physically Plausible 3D Human-Scene Reconstruction from Monocular RGB
Image using an Adversarial Learning Approach
|
Accepted in RAL 2023
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Holistic 3D human-scene reconstruction is a crucial and emerging research
area in robot perception. A key challenge in holistic 3D human-scene
reconstruction is to generate a physically plausible 3D scene from a single
monocular RGB image. The existing research mainly proposes optimization-based
approaches for reconstructing the scene from a sequence of RGB frames with
explicitly defined physical laws and constraints between different scene
elements (humans and objects). However, it is hard to explicitly define and
model every physical law in every scenario. This paper proposes using an
implicit feature representation of the scene elements to distinguish a
physically plausible alignment of humans and objects from an implausible one.
We propose using a graph-based holistic representation with an encoded physical
representation of the scene to analyze the human-object and object-object
interactions within the scene. Using this graphical representation, we
adversarially train our model to learn the feasible alignments of the scene
elements from the training data itself without explicitly defining the laws and
constraints between them. Unlike the existing inference-time optimization-based
approaches, we use this adversarially trained model to produce a per-frame 3D
reconstruction of the scene that abides by the physical laws and constraints.
Our learning-based method achieves comparable 3D reconstruction quality to
existing optimization-based holistic human-scene reconstruction methods and
does not require inference-time optimization. This makes it better suited than
existing methods for potential use in robotic applications such as robot
navigation.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 01:07:15 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Biswas",
"Sandika",
""
],
[
"Li",
"Kejie",
""
],
[
"Banerjee",
"Biplab",
""
],
[
"Chaudhuri",
"Subhasis",
""
],
[
"Rezatofighi",
"Hamid",
""
]
] |
new_dataset
| 0.995375 |
2307.14575
|
Rongqin Liang
|
Rongqin Liang, Yuanman Li, Yingxin Yi, Jiantao Zhou, Xia Li
|
A Memory-Augmented Multi-Task Collaborative Framework for Unsupervised
Traffic Accident Detection in Driving Videos
|
12 pages, 5 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying traffic accidents in driving videos is crucial to ensuring the
safety of autonomous driving and driver assistance systems. To address the
potential danger caused by the long-tailed distribution of driving events,
existing traffic accident detection (TAD) methods mainly rely on unsupervised
learning. However, TAD is still challenging due to the rapid movement of
cameras and dynamic scenes in driving scenarios. Existing unsupervised TAD
methods mainly rely on a single pretext task, i.e., an appearance-based or
future object localization task, to detect accidents. However, appearance-based
approaches are easily disturbed by the rapid movement of the camera and changes
in illumination, which significantly reduce the performance of traffic accident
detection. Methods based on future object localization may fail to capture
appearance changes in video frames, making it difficult to detect ego-involved
accidents (e.g., out of control of the ego-vehicle). In this paper, we propose
a novel memory-augmented multi-task collaborative framework (MAMTCF) for
unsupervised traffic accident detection in driving videos. Different from
previous approaches, our method can more accurately detect both ego-involved
and non-ego accidents by simultaneously modeling appearance changes and object
motions in video frames through the collaboration of optical flow
reconstruction and future object localization tasks. Further, we introduce a
memory-augmented motion representation mechanism to fully explore the
interrelation between different types of motion representations and exploit the
high-level features of normal traffic patterns stored in memory to augment
motion representations, thus enlarging the difference from anomalies.
Experimental results on a recently published large-scale dataset demonstrate that
our method achieves better performance compared to previous state-of-the-art
approaches.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 01:45:13 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Liang",
"Rongqin",
""
],
[
"Li",
"Yuanman",
""
],
[
"Yi",
"Yingxin",
""
],
[
"Zhou",
"Jiantao",
""
],
[
"Li",
"Xia",
""
]
] |
new_dataset
| 0.986938 |
2307.14580
|
Guilherme Christmann
|
Hanjaya Mandala, Guilherme Christmann
|
The BARN Challenge 2023 -- Autonomous Navigation in Highly Constrained
Spaces -- Inventec Team
|
The BARN Challenge 2023, ICRA 2023, Technical Report
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Navigation in the real world is hard and filled with complex scenarios. The
Benchmark Autonomous Robot Navigation (BARN) Challenge is a competition that
focuses on highly constrained spaces. Teams compete using a standard platform
in a simulation and a real-world stage, with scenarios ranging from easy to
challenging. This technical report presents the system and methods employed by
the Inventec Team during the BARN Challenge 2023
(https://cs.gmu.edu/~xiao/Research/BARN_Challenge/BARN_Challenge23.html). At
its core, our method uses the baseline learning-based controller LfLH. We
developed extensions using a finite state machine to trigger recovery
behaviors, and introduced two alternatives for forward safety collision checks,
based on footprint inflation and model-predictive control. Moreover, we also
present a backtrack safety check based on costmap region-of-interest. Compared
to the original baseline, we managed a significant increase in the navigation
score, from 0.2334 to 0.2445 (4.76%). Overall, our team took second place
both in simulation and in the real-world stage. Our code is publicly available
at:
(https://github.com/inventec-ai-center/inventec-team-barn-challenge-2023.git)
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 02:01:06 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Mandala",
"Hanjaya",
""
],
[
"Christmann",
"Guilherme",
""
]
] |
new_dataset
| 0.994355 |
2307.14630
|
Huajian Huang
|
Huajian Huang, Yinzhe Xu, Yingshu Chen, and Sai-Kit Yeung
|
360VOT: A New Benchmark Dataset for Omnidirectional Visual Object
Tracking
|
ICCV 2023. Homepage: https://360vot.hkustvgd.com The toolkit of the
benchmark is available at: https://github.com/HuajianUP/360VOT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
360{\deg} images can provide an omnidirectional field of view which is
important for stable and long-term scene perception. In this paper, we explore
360{\deg} images for visual object tracking and perceive new challenges caused
by large distortion, stitching artifacts, and other unique attributes of
360{\deg} images. To alleviate these problems, we take advantage of novel
representations of target localization, i.e., bounding field-of-view, and then
introduce a general 360 tracking framework that can adopt typical trackers for
omnidirectional tracking. More importantly, we propose a new large-scale
omnidirectional tracking benchmark dataset, 360VOT, in order to facilitate
future research. 360VOT contains 120 sequences with up to 113K high-resolution
frames in equirectangular projection. The tracking targets cover 32 categories
in diverse scenarios. Moreover, we provide 4 types of unbiased ground truth,
including (rotated) bounding boxes and (rotated) bounding field-of-views, as
well as new metrics tailored for 360{\deg} images which allow for the accurate
evaluation of omnidirectional tracking performance. Finally, we extensively
evaluated 20 state-of-the-art visual trackers and provided a new baseline for
future comparisons. Homepage: https://360vot.hkustvgd.com
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 05:32:01 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Huang",
"Huajian",
""
],
[
"Xu",
"Yinzhe",
""
],
[
"Chen",
"Yingshu",
""
],
[
"Yeung",
"Sai-Kit",
""
]
] |
new_dataset
| 0.999655 |
2307.14637
|
Zhifeng Wang Mr
|
Zhifeng Wang and Kaihao Zhang and Wenhan Luo and Ramesh
Sankaranarayana
|
HTNet for micro-expression recognition
|
35 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Facial expression is related to facial muscle contractions and different
muscle movements correspond to different emotional states. For micro-expression
recognition, the muscle movements are usually subtle, which has a negative
impact on the performance of current facial emotion recognition algorithms.
Most existing methods use self-attention mechanisms to capture relationships
between tokens in a sequence, but they do not take into account the inherent
spatial relationships between facial landmarks. This can result in sub-optimal
performance on micro-expression recognition tasks. Therefore, learning to
recognize facial muscle movements is a key challenge in the area of
micro-expression recognition. In this paper, we propose a Hierarchical
Transformer Network (HTNet) to identify critical areas of facial muscle
movement. HTNet includes two major components: a transformer layer that
leverages the local temporal features and an aggregation layer that extracts
local and global semantic facial features. Specifically, HTNet divides the
face into four different facial areas: left lip area, left eye area, right eye
area and right lip area. The transformer layer is used to focus on representing
local minor muscle movement with local self-attention in each area. The
aggregation layer is used to learn the interactions between eye areas and lip
areas. The experiments on four publicly available micro-expression datasets
show that the proposed approach outperforms previous methods by a large margin.
The codes and models are available at:
\url{https://github.com/wangzhifengharrison/HTNet}
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 06:04:20 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Wang",
"Zhifeng",
""
],
[
"Zhang",
"Kaihao",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Sankaranarayana",
"Ramesh",
""
]
] |
new_dataset
| 0.999067 |
2307.14662
|
Xusheng Zhu
|
Xusheng Zhu, Wen Chen, Zhendong Li, Qingqing Wu, Ziheng Zhang, Kunlun
Wang, and Jun Li
|
RIS-Aided Spatial Scattering Modulation for mmWave MIMO Transmissions
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the reconfigurable intelligent surface (RIS) assisted
spatial scattering modulation (SSM) scheme for millimeter-wave (mmWave)
multiple-input multiple-output (MIMO) systems, in which line-of-sight (LoS) and
non-line-of-sight (NLoS) paths are respectively considered in the
transmitter-RIS and RIS-receiver channels. Based on the maximum likelihood
detector, the conditional pairwise error probability (CPEP) expression for the
RIS-SSM scheme is derived for the two cases of correct and erroneous
received-beam demodulation. Furthermore, we derive the closed-form expressions of the
unconditional pairwise error probability (UPEP) by employing two different
methods: the probability density function and the moment-generating function
expressions with a descending order of scatterer gains. To provide more useful
insights, we derive the asymptotic UPEP and the diversity gain of the RIS-SSM
scheme in the high SNR region. Depending on UPEP and the corresponding
Euclidean distance, we get the union upper bound of the average bit error
probability (ABEP). A new framework for ergodic capacity analysis is also
provided to acquire the proposed system's effective capacity. Finally, all
derivation results are validated via extensive Monte Carlo simulations,
revealing that the proposed RIS-SSM scheme outperforms the benchmarks in terms
of reliability.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 07:35:19 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Zhu",
"Xusheng",
""
],
[
"Chen",
"Wen",
""
],
[
"Li",
"Zhendong",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Zhang",
"Ziheng",
""
],
[
"Wang",
"Kunlun",
""
],
[
"Li",
"Jun",
""
]
] |
new_dataset
| 0.974669 |
2307.14669
|
Gian Carlo Milanese
|
Gian Carlo Milanese, Gabriella Pasi
|
Fuzzy order-sorted feature logic
|
Submitted to Fuzzy Sets and Systems
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Order-Sorted Feature (OSF) logic is a knowledge representation and reasoning
language based on function-denoting feature symbols and set-denoting sort
symbols ordered in a subsumption lattice. OSF logic allows the construction of
record-like terms that represent classes of entities and that are themselves
ordered in a subsumption relation. The unification algorithm for such
structures provides an efficient calculus of type subsumption, which has been
applied in computational linguistics and implemented in constraint logic
programming languages such as LOGIN and LIFE and automated reasoners such as
CEDAR. This work generalizes OSF logic to a fuzzy setting. We give a flexible
definition of a fuzzy subsumption relation which generalizes Zadeh's inclusion
between fuzzy sets. Based on this definition we define a fuzzy semantics of OSF
logic where sort symbols and OSF terms denote fuzzy sets. We extend the
subsumption relation to OSF terms and prove that it constitutes a fuzzy partial
order with the property that two OSF terms are subsumed by one another in the
crisp sense if and only if their subsumption degree is greater than 0. We show
how to find the greatest lower bound of two OSF terms by unifying them and how
to compute the subsumption degree between two OSF terms, and we provide the
complexity of these operations.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 07:47:54 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Milanese",
"Gian Carlo",
""
],
[
"Pasi",
"Gabriella",
""
]
] |
new_dataset
| 0.971624 |
2307.14679
|
Rui Song
|
Rui Song, BB CC
|
LinkDID: A Privacy-Preserving, Sybil-Resistant and Key-Recoverable
Decentralized Identity Scheme
|
20 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decentralized identity mechanisms endeavor to endow users with complete
sovereignty over their digital assets within the Web3 ecosystem. Unfortunately,
this benefit frequently comes at the expense of users' credential and identity
privacy. Additionally, existing schemes fail to resist Sybil attacks that have
long plagued Web3, and lack reasonable key recovery mechanisms to regain
control of digital assets after loss. In this work, we propose LinkDID, a
privacy-preserving, Sybil-resistant, and key-recoverable decentralized identity
scheme that supports selective disclosure of credentials for arbitrary
predicates while maintaining privacy for credentials and identities. Through an
identifier association mechanism, LinkDID can privately and forcibly aggregate
users' identifiers, providing Sybil resistance without relying on any external
data or collateral from benign users. To enable key recovery, LinkDID permits
users to establish proofs of ownership for identifiers with lost keys and
request an update of corresponding keys from the decentralized ledger. We
provide a detailed theoretical analysis and security proofs of LinkDID, along
with an exhaustive performance evaluation that shows its ability to complete
interactions in less than 10 seconds on consumer-grade devices.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 08:08:02 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Song",
"Rui",
""
],
[
"CC",
"BB",
""
]
] |
new_dataset
| 0.990027 |
2307.14682
|
Yao Huang
|
Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu
|
Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in
the Physical World
|
13 pages, 16 figures. arXiv admin note: substantial text overlap with
arXiv:2307.07859
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical adversarial attacks have put a severe threat to DNN-based object
detectors. To enhance security, a combination of visible and infrared sensors
is deployed in various scenarios, which has proven effective in disabling
existing single-modal physical attacks. To further demonstrate the potential
risks in such cases, we design a unified adversarial patch that can perform
cross-modal physical attacks, achieving evasion in both modalities
simultaneously with a single patch. Given the different imaging mechanisms of
visible and infrared sensors, our work manipulates patches' shape features,
which can be captured in different modalities when they undergo changes. To
deal with challenges, we propose a novel boundary-limited shape optimization
approach that aims to achieve compact and smooth shapes for the adversarial
patch, making it easy to implement in the physical world. And a score-aware
iterative evaluation method is also introduced to balance the fooling degree
between visible and infrared detectors during optimization, which guides the
adversarial patch to iteratively reduce the predicted scores of the multi-modal
sensors. Furthermore, we propose an Affine-Transformation-based enhancement
strategy that makes the learnable shape robust to various angles, thus
mitigating the issue of shape deformation caused by different shooting angles
in the real world. Our method is evaluated against several state-of-the-art
object detectors, achieving an Attack Success Rate (ASR) of over 80%. We also
demonstrate the effectiveness of our approach in physical-world scenarios under
various settings, including different angles, distances, postures, and scenes
for both visible and infrared sensors.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 08:14:22 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Wei",
"Xingxing",
""
],
[
"Huang",
"Yao",
""
],
[
"Sun",
"Yitong",
""
],
[
"Yu",
"Jie",
""
]
] |
new_dataset
| 0.999834 |
2307.14686
|
Josep Marti-Saumell
|
Josep Mart\'i-Saumell, Hugo Duarte, Patrick Grosch, Juan
Andrade-Cetto, Angel Santamaria-Navarro, Joan Sol\`a
|
Borinot: an open thrust-torque-controlled robot for research on agile
aerial-contact motion
|
14 pages, 13 figures. See related video at
https://youtu.be/Ob7IIVB6P_A
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces Borinot, an open-source aerial robotic platform
designed to conduct research on hybrid agile locomotion and manipulation using
flight and contacts. This platform features an agile and powerful hexarotor
that can be outfitted with torque-actuated limbs of diverse architecture,
allowing for whole-body dynamic control. As a result, Borinot can perform agile
tasks such as aggressive or acrobatic maneuvers that engage the whole-body
dynamics.
The limbs attached to Borinot can be utilized in various ways; during
contact, they can be used as legs to create contact-based locomotion, or as
arms to manipulate objects. In free flight, they can be used as tails to
contribute to dynamics, mimicking the movements of many animals. This allows
for any hybridization of these dynamic modes, making Borinot an ideal
open-source platform for research on hybrid aerial-contact agile motion.
To demonstrate the key capabilities of Borinot in terms of agility with
hybrid motion modes, we have fitted a planar 2DoF limb and implemented a
whole-body torque-level model-predictive-control. The result is a capable and
adaptable platform that, we believe, opens up new avenues of research in the
field of agile robotics. Documentation: www.iri.upc.edu/borinot. Video:
https://youtu.be/Ob7IIVB6P_A.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 08:19:47 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Martí-Saumell",
"Josep",
""
],
[
"Duarte",
"Hugo",
""
],
[
"Grosch",
"Patrick",
""
],
[
"Andrade-Cetto",
"Juan",
""
],
[
"Santamaria-Navarro",
"Angel",
""
],
[
"Solà",
"Joan",
""
]
] |
new_dataset
| 0.999711 |
2307.14707
|
Benjamin Monmege
|
Dhruv Nevatia and Benjamin Monmege
|
An Automata Theoretic Characterization of Weighted First-Order Logic
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Since the 1970s with the work of McNaughton, Papert and Sch\"utzenberger, a
regular language is known to be definable in the first-order logic if and only
if its syntactic monoid is aperiodic. This algebraic characterisation of a
fundamental logical fragment has been extended in the quantitative case by
Droste and Gastin, dealing with polynomially ambiguous weighted automata and a
restricted fragment of weighted first-order logic. In the quantitative setting,
the full weighted first-order logic (without the restriction that Droste and
Gastin use, about the quantifier alternation) is more powerful than weighted
automata, and extensions of the automata with two-way navigation, and pebbles
or nested capabilities have been introduced to deal with it. In this work, we
characterise the fragment of these extended weighted automata that recognises
exactly the full weighted first-order logic, under the condition that automata
are polynomially ambiguous.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 08:56:53 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Nevatia",
"Dhruv",
""
],
[
"Monmege",
"Benjamin",
""
]
] |
new_dataset
| 0.985455 |
2307.14723
|
Bo Yang
|
Bo Yang, Xinyu Zhang, Jiahao Zhu, Jian Zhang, Dongjian Tian, Jun Luo,
Mingliang Zhou, Yangjun Pi
|
EFLNet: Enhancing Feature Learning for Infrared Small Target Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single-frame infrared small target detection is considered a challenging
task: the target and background are extremely imbalanced, bounding box
regression is extremely sensitive to infrared small targets, and small target
information is easily lost in the high-level semantic layer. In
this paper, we propose an enhancing feature learning network (EFLNet) based on
YOLOv7 framework to solve these problems. First, we notice that there is an
extreme imbalance between the target and the background in the infrared
image, which makes the model pay more attention to the background features,
resulting in missed detection. To address this problem, we propose a new
adaptive threshold focal loss function that adjusts the loss weight
automatically, compelling the model to allocate greater attention to target
features. Second, we introduce the normalized Gaussian Wasserstein distance to
alleviate the difficulty of model convergence caused by the extreme sensitivity
of the bounding box regression to infrared small targets. Finally, we
incorporate a dynamic head mechanism into the network to enable adaptive
learning of the relative importance of each semantic layer. Experimental
results demonstrate that our method achieves better detection performance on
infrared small targets than state-of-the-art deep-learning-based methods.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 09:23:22 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Yang",
"Bo",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Zhu",
"Jiahao",
""
],
[
"Zhang",
"Jian",
""
],
[
"Tian",
"Dongjian",
""
],
[
"Luo",
"Jun",
""
],
[
"Zhou",
"Mingliang",
""
],
[
"Pi",
"Yangjun",
""
]
] |
new_dataset
| 0.975947 |
2307.14749
|
Simone Scalabrino
|
Emanuela Guglielmi, Simone Scalabrino, Gabriele Bavota, Rocco Oliveto
|
Using Gameplay Videos for Detecting Issues in Video Games
|
Accepted at Empirical Software Engineering journal (EMSE). arXiv
admin note: text overlap with arXiv:2204.04182
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Context. The game industry is increasingly growing in recent years. Every
day, millions of people play video games, not only as a hobby, but also for
professional competitions (e.g., e-sports or speed-running) or for making
business by entertaining others (e.g., streamers). The latter daily produce a
large number of gameplay videos in which they also comment live on what they
experience. But no software and, thus, no video game is perfect: streamers may
encounter several problems (such as bugs, glitches, or performance issues)
while they play. Also, it is unlikely that they explicitly report such issues
to developers. The identified problems may negatively impact the user's gaming
experience and, in turn, can harm the reputation of the game and of the
producer. Objective. In this paper, we propose and empirically evaluate GELID,
an approach for automatically extracting relevant information from gameplay
videos by (i) identifying video segments in which streamers experienced
anomalies; (ii) categorizing them based on their type (e.g., logic or
presentation); and clustering them based on (iii) the context in which they
appear (e.g., level or game area) and (iv) the specific issue type (e.g., game
crashes). Method. We manually defined a training set for step 2 of GELID
(categorization) and a test set for validating in isolation the four components
of GELID. In total, we manually segmented, labeled, and clustered 170 videos
related to 3 video games, defining a dataset containing 604 segments. Results.
While in steps 1 (segmentation) and 4 (specific issue clustering) GELID
achieves satisfactory results, it shows limitations on step 3 (game context
clustering) and, above all, step 2 (categorization).
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 10:16:04 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Guglielmi",
"Emanuela",
""
],
[
"Scalabrino",
"Simone",
""
],
[
"Bavota",
"Gabriele",
""
],
[
"Oliveto",
"Rocco",
""
]
] |
new_dataset
| 0.996482 |
2307.14757
|
Luca Wilke
|
Luca Wilke, Jan Wichelmann, Anja Rabich, Thomas Eisenbarth
|
SEV-Step: A Single-Stepping Framework for AMD-SEV
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ever-increasing popularity and availability of Trusted Execution
Environments (TEEs) had a stark influence on microarchitectural attack research
in academia, as their strong attacker model both boosts existing attack vectors
and introduces several new ones. While many works have focused on Intel SGX,
other TEEs like AMD SEV have recently also started to receive more attention. A
common technique when attacking SGX enclaves is single-stepping, where the
system's APIC timer is used to interrupt the enclave after every instruction.
Single-stepping increases the temporal resolution of subsequent
microarchitectural attacks to a maximum. A key driver in the proliferation of
this complex attack technique was the SGX-Step framework, which offered a
stable reference implementation for single-stepping and a relatively easy
setup. In this paper, we demonstrate that SEV VMs can also be reliably
single-stepped. To lay the foundation for further microarchitectural attack
research against SEV, we introduce the reusable SEV-Step framework. Besides
reliable single-stepping, SEV-Step provides easy access to common attack
primitives like page fault tracking and cache attacks against SEV. All features
can be used interactively from user space. We demonstrate SEV-Step's
capabilities by carrying out an end-to-end cache attack against SEV that leaks
the volume key of a LUKS2-encrypted disk. Finally, we show for the first time
that SEV is vulnerable to Nemesis-style attacks, which allow an attacker to extract
information about the type and operands of single-stepped instructions from
SEV-protected VMs.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 10:31:54 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Wilke",
"Luca",
""
],
[
"Wichelmann",
"Jan",
""
],
[
"Rabich",
"Anja",
""
],
[
"Eisenbarth",
"Thomas",
""
]
] |
new_dataset
| 0.955255 |
2307.14773
|
Yuying Du
|
Xueyan Tang, Lingzhi Shi, Alan Lai, Yuying Du, Jing Deng, Jialu Fu,
Jiayi Li
|
Smart Contract Migration: Security Analysis and Recommendations from
Ethereum to Arbitrum
|
18 pages,23 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This research aims to explore the security risks posed by compatibility and
protocol differences in smart contract migration, using the migration of smart
contracts from Ethereum to Arbitrum as a case study. Through literature review,
online data collection, expert participation, and analysis of smart contract
vulnerability cases, this paper conducts an in-depth study of the
differences between Ethereum and Arbitrum in areas such as Messaging, Block
Properties, Contract Address Alias, and Gas Fees. The research findings
indicate the presence of certain security issues during the migration process
from Ethereum to Arbitrum, such as abnormal operation of the sequencer
resulting in outdated off-chain data retrieval, time-based logical errors,
failed permission checks, DOS attacks, and gas loss due to L1-to-L2 transaction
failures. To address these security issues, this paper proposes corresponding
solutions and recommendations to ensure the security and meet the requirements
of the migration process. Additionally, this research underscores the need for
continued attention to the security issues of smart contract migration,
illustrated by the case of migration from Ethereum to Arbitrum. It is
worth noting that this is the first in-depth study of secure smart
contract migration from Ethereum to Arbitrum.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 11:05:29 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Tang",
"Xueyan",
""
],
[
"Shi",
"Lingzhi",
""
],
[
"Lai",
"Alan",
""
],
[
"Du",
"Yuying",
""
],
[
"Deng",
"Jing",
""
],
[
"Fu",
"Jialu",
""
],
[
"Li",
"Jiayi",
""
]
] |
new_dataset
| 0.993364 |
2307.14855
|
Victor Iwaniack
|
Victor Iwaniack
|
Automata in toposes, and general Myhill-Nerode theorems
|
34 pages with appendix. Any comments welcome
| null | null | null |
cs.FL math.CT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We extend the functorial approach to automata by Colcombet and Petri\c{s}an
[arXiv:1712.07121] from the category of sets to any elementary topos with a
natural number object and establish general Myhill-Nerode theorems in our
setting. As a special case we recover the result of Boja\'nczyk, Klin and
Lasota [arXiv:1402.0897] for orbit-finite nominal automata by considering
automata in the Myhill-Schanuel topos of nominal sets.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 13:35:42 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Iwaniack",
"Victor",
""
]
] |
new_dataset
| 0.997823 |
2307.14876
|
Elena Rener
|
Elena Rener, Fabio Salassa, Vincent T'kindt
|
Single machine rescheduling for new orders: properties and complexity
results
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rescheduling problems arise in a variety of situations where a previously
planned schedule needs to be adjusted to deal with unforeseen events. A common
problem is the arrival of new orders, i.e. jobs, which have to be integrated
into the schedule of the so-called old jobs. The maximum and total absolute
time deviations of the completion times of these jobs are modeled as a
disruption constraint to limit the change in the original schedule. Disruption
constraints affect the shape of an optimal schedule, particularly with respect
to the sequencing of old jobs and the insertion of idle time. We therefore give
a classification into idle and no-idle problems for a set of single-machine
rescheduling problems with different objective functions. We then prove the
complexity of five rescheduling problems that have been left open in the
literature.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 14:06:36 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Rener",
"Elena",
""
],
[
"Salassa",
"Fabio",
""
],
[
"T'kindt",
"Vincent",
""
]
] |
new_dataset
| 0.98441 |
2307.14882
|
Altan Berdan Kilic
|
Altan B. Kilic, Anne Nijsten, Ruud Pellikaan, Alberto Ravagnani
|
Knot Theory and Error-Correcting Codes
| null | null | null | null |
cs.IT math.AT math.GN math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper builds a novel bridge between algebraic coding theory and
mathematical knot theory, with applications in both directions. We give methods
to construct error-correcting codes starting from the colorings of a knot,
describing through a series of results how the properties of the knot translate
into code parameters. We show that knots can be used to obtain error-correcting
codes with prescribed parameters and an efficient decoding algorithm.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 14:12:27 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Kilic",
"Altan B.",
""
],
[
"Nijsten",
"Anne",
""
],
[
"Pellikaan",
"Ruud",
""
],
[
"Ravagnani",
"Alberto",
""
]
] |
new_dataset
| 0.997964 |
2307.14912
|
Cagri Toraman
|
Umitcan Sahin, Izzet Emre Kucukkaya, Cagri Toraman
|
ARC-NLP at PAN 2023: Hierarchical Long Text Classification for Trigger
Detection
|
Accepted by PAN at CLEF 2023
| null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Fanfiction, a popular form of creative writing set within established
fictional universes, has gained a substantial online following. However,
ensuring the well-being and safety of participants has become a critical
concern in this community. The detection of triggering content, material that
may cause emotional distress or trauma to readers, poses a significant
challenge. In this paper, we describe our approach for the Trigger Detection
shared task at PAN CLEF 2023, where the goal is to detect multiple types of
triggering content in a given fanfiction document. For this, we build a hierarchical model
that uses recurrence over Transformer-based language models. In our approach,
we first split long documents into smaller segments and use them to
fine-tune a Transformer model. Then, we extract feature embeddings from the
fine-tuned Transformer model, which are used as input in the training of
multiple LSTM models for trigger detection in a multi-label setting. Our model
achieves an F1-macro score of 0.372 and F1-micro score of 0.736 on the
validation set, which are higher than the baseline results shared at PAN CLEF
2023.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 14:55:10 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Sahin",
"Umitcan",
""
],
[
"Kucukkaya",
"Izzet Emre",
""
],
[
"Toraman",
"Cagri",
""
]
] |
new_dataset
| 0.993966 |
2307.14913
|
Cagri Toraman
|
Izzet Emre Kucukkaya, Umitcan Sahin, Cagri Toraman
|
ARC-NLP at PAN 2023: Transition-Focused Natural Language Inference for
Writing Style Detection
|
Accepted by PAN at CLEF 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The task of multi-author writing style detection aims at finding any
positions of writing style change in a given text document. We formulate the
task as a natural language inference problem where two consecutive paragraphs
are paired. Our approach focuses on transitions between paragraphs while
truncating input tokens for the task. As backbone models, we employ different
Transformer-based encoders with a warmup phase during training. We submit the
model version that outperforms baselines and other proposed model versions in
our experiments. For the easy and medium setups, we submit transition-focused
natural language inference based on DeBERTa with warmup training, and the same
model without transition for the hard setup.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 14:56:06 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Kucukkaya",
"Izzet Emre",
""
],
[
"Sahin",
"Umitcan",
""
],
[
"Toraman",
"Cagri",
""
]
] |
new_dataset
| 0.966808 |
2307.14927
|
Mingming Zhang
|
Mingming Zhang, Youlong Wu, Minquan Cheng, and Dianhua Wu
|
Cascaded Code Distributed Computing With Low Complexity and Improved
Flexibility
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coded distributed computing, proposed by Li et al., offers significant
potential for reducing the communication load in MapReduce computing systems.
In the setting of \emph{cascaded} coded distributed computing,
consisting of $K$ nodes, $N$ input files, and $Q$ output functions, the
objective is to compute each output function through $s\geq 1$ nodes with a
computation load $r\geq 1$, enabling the application of coding techniques
during the Shuffle phase to achieve minimum communication load. However, for
most existing coded distributed computing schemes, a major limitation lies in
their demand for splitting the original data into an exponentially growing
number of input files in terms of $N/\binom{K}{r} \in\mathbb{N}$ and requiring
an exponentially large number of output functions $Q/\binom{K}{s}
\in\mathbb{N}$, which imposes stringent requirements for implementation and
results in significant coding complexity when $K$ is large. In this paper, we
focus on the cascaded case of $K/s\in\mathbb{N}$, deliberately designing the
input file storage and output function assignment strategy based on a
grouping method, such that a low-complexity two-round Shuffle phase is
available. The main advantages of our proposed scheme contains: 1) the
communication load is quilt close to or surprisingly better than the optimal
state-of-the-art scheme proposed by Li et al.; 2) our scheme requires
significantly less number of input files and output functions; 3) all the
operations are implemented over the minimum binary field $\mathbb{F}_2$.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 15:16:40 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Zhang",
"Mingming",
""
],
[
"Wu",
"Youlong",
""
],
[
"Cheng",
"Minquan",
""
],
[
"Wu",
"Dianhua",
""
]
] |
new_dataset
| 0.992677 |
2307.14980
|
Carlos Barroso Fern\'andez
|
Carlos Barroso-Fern\'andez, Jorge Mart\'in-P\'erez, Constantine
Ayimba, Antonio de la Oliva
|
Aligning rTWT with 802.1Qbv: a Network Calculus Approach
|
3 pages, 3 figures, workshop submission
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Industry 4.0 applications impose the challenging demand of delivering packets
with bounded latencies via a wireless network. This is further complicated if
the network is not dedicated to the time-critical application. In this paper we
use network calculus analysis to derive closed-form expressions of latency
bounds for time-critical traffic when 802.11 Target Wake Time (TWT) and
802.1Qbv work together in a shared 802.11 network.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 16:19:04 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Barroso-Fernández",
"Carlos",
""
],
[
"Martín-Pérez",
"Jorge",
""
],
[
"Ayimba",
"Constantine",
""
],
[
"de la Oliva",
"Antonio",
""
]
] |
new_dataset
| 0.995827 |
2307.15005
|
Jin Heo
|
Jin Heo, Christopher Phillips, Ada Gavrilovska
|
FLiCR: A Fast and Lightweight LiDAR Point Cloud Compression Based on
Lossy RI
|
12 pages, 11 figures, conference paper
|
In 2022 IEEE/ACM 7th Symposium on Edge Computing (SEC) (pp.
54-67). IEEE 2022
|
10.1109/SEC54971.2022.00012
| null |
cs.MM cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light detection and ranging (LiDAR) sensors are becoming available on modern
mobile devices and provide a 3D sensing capability. This new capability is
beneficial for perceptions in various use cases, but it is challenging for
resource-constrained mobile devices to use the perceptions in real-time because
of their high computational complexity. In this context, edge computing can be
used to enable LiDAR online perceptions, but offloading the perceptions on the
edge server requires a low-latency, lightweight, and efficient compression due
to the large volume of LiDAR point cloud data.
This paper presents FLiCR, a fast and lightweight LiDAR point cloud
compression method for enabling edge-assisted online perceptions. FLiCR is
based on range images (RI) as an intermediate representation (IR), and
dictionary coding for compressing RIs. FLiCR achieves its benefits by
leveraging lossy RIs, and we show the efficiency of bytestream compression is
largely improved with quantization and subsampling. In addition, we identify
the limitation of current quality metrics for presenting the entropy of a point
cloud, and introduce a new metric that reflects both point-wise and
entropy-wise qualities for lossy IRs. The evaluation results show FLiCR is more
suitable for edge-assisted real-time perceptions than the existing LiDAR
compressions, and we demonstrate the effectiveness of our compression and
metric with the evaluations on 3D object detection and LiDAR SLAM.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 17:04:05 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Heo",
"Jin",
""
],
[
"Phillips",
"Christopher",
""
],
[
"Gavrilovska",
"Ada",
""
]
] |
new_dataset
| 0.999665 |
2307.15020
|
Liang Xu
|
Liang Xu, Anqi Li, Lei Zhu, Hang Xue, Changtai Zhu, Kangkang Zhao,
Haonan He, Xuanwei Zhang, Qiyue Kang, Zhenzhong Lan
|
SuperCLUE: A Comprehensive Chinese Large Language Model Benchmark
|
13 pages, 12 figures, 5 tables
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have shown the potential to be integrated into
human daily lives. Therefore, user preference is the most critical criterion
for assessing LLMs' performance in real-world scenarios. However, existing
benchmarks mainly focus on measuring models' accuracy using multiple-choice
questions, which limits the understanding of their capabilities in real
applications. We fill this gap by proposing a comprehensive Chinese benchmark
SuperCLUE, named after another popular Chinese LLM benchmark CLUE. SuperCLUE
encompasses three sub-tasks: actual users' queries and ratings derived from an
LLM battle platform (CArena), open-ended questions with single and
multiple-turn dialogues (OPEN), and closed-ended questions with the same stems
as open-ended single-turn ones (CLOSE). Our study shows that accuracy on
closed-ended questions is insufficient to reflect human preferences achieved on
open-ended ones. At the same time, they can complement each other to predict
actual user preferences. We also demonstrate that GPT-4 is a reliable judge to
automatically evaluate human preferences on open-ended questions in a Chinese
context. Our benchmark will be released at https://www.CLUEbenchmarks.com
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 17:24:09 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Xu",
"Liang",
""
],
[
"Li",
"Anqi",
""
],
[
"Zhu",
"Lei",
""
],
[
"Xue",
"Hang",
""
],
[
"Zhu",
"Changtai",
""
],
[
"Zhao",
"Kangkang",
""
],
[
"He",
"Haonan",
""
],
[
"Zhang",
"Xuanwei",
""
],
[
"Kang",
"Qiyue",
""
],
[
"Lan",
"Zhenzhong",
""
]
] |
new_dataset
| 0.999511 |
2307.15055
|
Adam Harley
|
Yang Zheng and Adam W. Harley and Bokui Shen and Gordon Wetzstein and
Leonidas J. Guibas
|
PointOdyssey: A Large-Scale Synthetic Dataset for Long-Term Point
Tracking
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PointOdyssey, a large-scale synthetic dataset, and data
generation framework, for the training and evaluation of long-term fine-grained
tracking algorithms. Our goal is to advance the state-of-the-art by placing
emphasis on long videos with naturalistic motion. Toward the goal of
naturalism, we animate deformable characters using real-world motion capture
data, we build 3D scenes to match the motion capture environments, and we
render camera viewpoints using trajectories mined via structure-from-motion on
real videos. We create combinatorial diversity by randomizing character
appearance, motion profiles, materials, lighting, 3D assets, and atmospheric
effects. Our dataset currently includes 104 videos, averaging 2,000 frames
long, with orders of magnitude more correspondence annotations than prior work.
We show that existing methods can be trained from scratch on our dataset and
outperform the published variants. Finally, we introduce modifications to the
PIPs point tracking method, greatly widening its temporal receptive field,
which improves its performance on PointOdyssey as well as on two real-world
benchmarks. Our data and code are publicly available at:
https://pointodyssey.com
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 17:58:11 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Zheng",
"Yang",
""
],
[
"Harley",
"Adam W.",
""
],
[
"Shen",
"Bokui",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Guibas",
"Leonidas J.",
""
]
] |
new_dataset
| 0.999716 |
2307.15057
|
Erik Rye
|
Erik Rye, Dave Levin
|
IPv6 Hitlists at Scale: Be Careful What You Wish For
|
Accepted to ACM SIGCOMM 2023
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Today's network measurements rely heavily on Internet-wide scanning,
employing tools like ZMap that are capable of quickly iterating over the entire
IPv4 address space. Unfortunately, IPv6's vast address space poses an
existential threat for Internet-wide scans and traditional network measurement
techniques. To address this reality, efforts are underway to develop
``hitlists'' of known-active IPv6 addresses to reduce the search space for
would-be scanners. As a result, there is an inexorable push for constructing as
large and complete a hitlist as possible.
This paper asks: what are the potential benefits and harms when IPv6 hitlists
grow larger? To answer this question, we obtain the largest IPv6 active-address
list to date: 7.9 billion addresses, 898 times larger than the current
state-of-the-art hitlist. Although our list is not comprehensive, it is a
significant step forward and provides a glimpse into the type of analyses
possible with more complete hitlists.
We compare our dataset to prior IPv6 hitlists and show both benefits and
dangers. The benefits include improved insight into client devices (prior
datasets consist primarily of routers), outage detection, IPv6 roll-out,
previously unknown aliased networks, and address assignment strategies. The
dangers, unfortunately, are severe: we expose widespread instances of addresses
that permit user tracking and device geolocation, and a dearth of firewalls in
home networks. We discuss ethics and security guidelines to ensure a safe path
towards more complete hitlists.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 17:58:56 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Rye",
"Erik",
""
],
[
"Levin",
"Dave",
""
]
] |
new_dataset
| 0.995005 |
2008.13583
|
Isabelle Tingzon
|
Isabelle Tingzon, Niccolo Dejito, Ren Avell Flores, Rodolfo De Guzman,
Liliana Carvajal, Katerine Zapata Erazo, Ivan Enrique Contreras Cala, Jeffrey
Villaveces, Daniela Rubio, Rayid Ghani
|
Mapping New Informal Settlements using Machine Learning and Time Series
Satellite Images: An Application in the Venezuelan Migration Crisis
| null | null |
10.1109/AI4G50087.2020.9311041
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since 2014, nearly 2 million Venezuelans have fled to Colombia to escape an
economically devastated country during what is one of the largest humanitarian
crises in modern history. Non-government organizations and local government
units are faced with the challenge of identifying, assessing, and monitoring
rapidly growing migrant communities in order to provide urgent humanitarian
aid. However, with many of these displaced populations living in informal
settlement areas across the country, locating migrant settlements across large
territories can be a major challenge. To address this problem, we propose a
novel approach for rapidly and cost-effectively locating new and emerging
informal settlements using machine learning and publicly accessible Sentinel-2
time-series satellite imagery. We demonstrate the effectiveness of the approach
in identifying potential Venezuelan migrant settlements in Colombia that have
emerged between 2015 and 2020. Finally, we emphasize the importance of
post-classification verification and present a two-step validation approach
consisting of (1) remote validation using Google Earth and (2) on-the-ground
validation through the Premise App, a mobile crowdsourcing platform.
|
[
{
"version": "v1",
"created": "Thu, 27 Aug 2020 04:42:45 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2020 18:59:49 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Dec 2020 02:35:56 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Tingzon",
"Isabelle",
""
],
[
"Dejito",
"Niccolo",
""
],
[
"Flores",
"Ren Avell",
""
],
[
"De Guzman",
"Rodolfo",
""
],
[
"Carvajal",
"Liliana",
""
],
[
"Erazo",
"Katerine Zapata",
""
],
[
"Cala",
"Ivan Enrique Contreras",
""
],
[
"Villaveces",
"Jeffrey",
""
],
[
"Rubio",
"Daniela",
""
],
[
"Ghani",
"Rayid",
""
]
] |
new_dataset
| 0.99624 |
2204.01828
|
David Alejo
|
S. Mart\'inez-Rozas, D. Alejo, F. Caballero and L. Merino
|
Path and trajectory planning of a tethered UAV-UGV marsupial robotic
system
|
8 pages, 4 figures, 3 tables. Version accepted in IEEE-Robotics and
Automation Letters. "Copyright 2023 IEEE. Personal use of this material is
permitted. Permission from IEEE must be obtained for all other uses..."
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter addresses the problem of trajectory planning in a marsupial
robotic system consisting of an unmanned aerial vehicle (UAV) linked to an
unmanned ground vehicle (UGV) through a non-taut tether with controllable
length. To the best of our knowledge, this is the first method that addresses
the trajectory planning of a marsupial UGV-UAV with a non-taut tether. The
objective is to determine a synchronized collision-free trajectory for the
three marsupial system agents: UAV, UGV, and tether. First, we present a path
planning solution based on optimal Rapidly-exploring Random Trees (RRT*) with
novel sampling and steering techniques to speed up the computation. This
algorithm is able to obtain collision-free paths for the UAV and the UGV,
taking into account the 3D environment and the tether. Then, the letter
presents a trajectory planner based on non-linear least squares. The optimizer
takes into account aspects not considered in the path planning, like temporal
constraints of the motion imposed by limits on the velocities and accelerations
of the robots, or raising the tether's clearance. Simulated and field test
results demonstrate that the approach generates obstacle-free, smooth, and
feasible trajectories for the marsupial system.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 20:28:51 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 08:55:43 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Jul 2023 08:53:02 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Martínez-Rozas",
"S.",
""
],
[
"Alejo",
"D.",
""
],
[
"Caballero",
"F.",
""
],
[
"Merino",
"L.",
""
]
] |
new_dataset
| 0.999708 |
2208.05732
|
Hao Chen
|
Hao Chen
|
Many Non-Reed-Solomon Type MDS Codes From Arbitrary Genus Algebraic
Curves
|
26 pages, new non-RS type MDS codes from higher genus curves are
included
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
It is always interesting and important to construct non-Reed-Solomon type MDS
codes in coding theory and finite geometries. In this paper, we prove that
there are non-Reed-Solomon type MDS codes from arbitrary genus algebraic
curves. It is proved that MDS algebraic geometry (AG) codes from higher genus
curves are not equivalent to MDS AG codes from lower genus curves. For genus
one case, we construct MDS AG codes of small consecutive lengths from elliptic
curves. New self-dual MDS AG codes over ${\bf F}_{{2^s}}$ from elliptic curves
are also constructed. These MDS AG codes are not equivalent to Reed-Solomon
codes, not equivalent to known MDS twisted Reed-Solomon codes and not
equivalent to Roth-Lempel MDS codes.
Hence many non-equivalent MDS AG codes, which are not equivalent to
Reed-Solomon codes and known MDS twisted-Reed-Solomon codes, can be obtained
from arbitrary genus algebraic curves. It is an interesting open problem to
construct explicit longer MDS AG codes from maximal curves.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 09:57:25 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 07:20:55 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jul 2023 01:23:06 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Chen",
"Hao",
""
]
] |
new_dataset
| 0.998768 |
2210.03713
|
Daniel Marew
|
Daniel Marew, Misha Lvovsky, Shangqun Yu, Shotaro Sessions, and
Donghyun Kim
|
Riemannian Motion Policy for Robust Balance Control in Dynamic Legged
Locomotion
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a Riemannian Motion Policy flow (RMPflow)-based
whole-body control framework for improved dynamic legged locomotion. RMPflow is
a differential geometry-inspired algorithm for fusing multiple task-space
policies (RMPs) into a configuration space policy in a geometrically consistent
manner. RMP-based approaches are especially suited for designing simultaneous
tracking and collision avoidance behaviors and have been successfully deployed
on serial manipulators. However, one caveat of RMPflow is that it is designed
with fully actuated systems in mind. In this work, we, for the first time,
extend it to the domain of dynamic-legged systems, which have unforgiving
under-actuation and limited control input. Thorough push recovery experiments
are conducted in simulation to validate the overall framework. We show that
expanding the valid stepping region with an RMP-based collision-avoidance swing
leg controller improves balance robustness against external disturbances by up
to 53\% compared to a baseline approach using a restricted stepping region.
Furthermore, a point-foot biped robot is purpose-built for experimental studies
of dynamic biped locomotion. A preliminary unassisted in-place stepping
experiment is conducted to show the viability of the control framework and
hardware.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 17:34:36 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 22:19:16 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Marew",
"Daniel",
""
],
[
"Lvovsky",
"Misha",
""
],
[
"Yu",
"Shangqun",
""
],
[
"Sessions",
"Shotaro",
""
],
[
"Kim",
"Donghyun",
""
]
] |
new_dataset
| 0.998554 |
2210.03829
|
Seyed Mojtaba Marvasti-Zadeh
|
Seyed Mojtaba Marvasti-Zadeh, Devin Goodsman, Nilanjan Ray, Nadir
Erbilgin
|
Early Detection of Bark Beetle Attack Using Remote Sensing and Machine
Learning: A Review
|
Under review
| null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides a comprehensive review of past and current advances in
the early detection of bark beetle-induced tree mortality from three primary
perspectives: bark beetle & host interactions, RS, and ML/DL. In contrast to
prior efforts, this review encompasses all RS systems and emphasizes ML/DL
methods to investigate their strengths and weaknesses. We parse existing
literature based on multi- or hyper-spectral analyses and distill their
knowledge based on: bark beetle species & attack phases with a primary emphasis
on early stages of attacks, host trees, study regions, RS platforms & sensors,
spectral/spatial/temporal resolutions, spectral signatures, spectral vegetation
indices (SVIs), ML approaches, learning schemes, task categories, models,
algorithms, classes/clusters, features, and DL networks & architectures.
Although DL-based methods and the random forest (RF) algorithm showed promising
results, highlighting their potential to detect subtle changes across visible,
thermal, and short-wave infrared (SWIR) spectral regions, they still have
limited effectiveness and high uncertainties. To inspire novel solutions to
these shortcomings, we delve into the principal challenges & opportunities from
different perspectives, enabling a deeper understanding of the current state of
research and guiding future research directions.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 21:49:26 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 16:26:52 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Marvasti-Zadeh",
"Seyed Mojtaba",
""
],
[
"Goodsman",
"Devin",
""
],
[
"Ray",
"Nilanjan",
""
],
[
"Erbilgin",
"Nadir",
""
]
] |
new_dataset
| 0.995942 |
2211.10413
|
Stefan Senk
|
Marian Ulbricht and Stefan Senk and Hosein K. Nazari and How-Hang Liu
and Martin Reisslein and Giang T. Nguyen and Frank H. P. Fitzek
|
TSN-FlexTest: Flexible TSN Measurement Testbed (Extended Version)
|
30 pages, 18 figures, 6 tables
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Robust, reliable, and deterministic networks are essential for a variety of
applications. In order to provide guaranteed communication network services,
Time-Sensitive Networking (TSN) unites a set of standards for
time-synchronization, flow control, enhanced reliability, and management. We
design the TSN-FlexTest testbed with generic commodity hardware and open-source
software components to enable flexible TSN measurements. We have conducted
extensive measurements to validate the TSN-FlexTest testbed and to examine TSN
characteristics. The measurements provide insights into the effects of TSN
configurations, such as increasing the number of synchronization messages for
the Precision Time Protocol, indicating that a measurement accuracy of 15 ns
can be achieved. The TSN measurements included extensive evaluations of the
Time-aware Shaper (TAS) for sets of Tactile Internet (TI) packet traffic
streams. The measurements elucidate the effects of different scheduling and
shaping approaches, while revealing the need for pervasive network control that
synchronizes the sending nodes with the network switches. We present the first
measurements of distributed TAS with synchronized senders on a commodity
hardware testbed, demonstrating the same Quality-of-Service as with dedicated
wires for high-priority TI streams despite a 200% over-saturation cross traffic
load. The testbed is provided as an open-source project to facilitate future
TSN research.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 18:30:53 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jul 2023 09:38:20 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jul 2023 06:19:55 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Ulbricht",
"Marian",
""
],
[
"Senk",
"Stefan",
""
],
[
"Nazari",
"Hosein K.",
""
],
[
"Liu",
"How-Hang",
""
],
[
"Reisslein",
"Martin",
""
],
[
"Nguyen",
"Giang T.",
""
],
[
"Fitzek",
"Frank H. P.",
""
]
] |
new_dataset
| 0.999697 |
2211.16480
|
Ashwin Rao
|
Ashwin Rao, Fred Morstatter and Kristina Lerman
|
Retweets Amplify the Echo Chamber Effect
|
8 pages, 8 figures
| null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The growing prominence of social media in public discourse has led to a
greater scrutiny of the quality of online information and the role it plays in
amplifying political polarization. However, studies of polarization on social
media platforms like Twitter have been hampered by the difficulty of collecting
data about the social graph, specifically follow links that shape the echo
chambers users join as well as what they see in their timelines. As a proxy of
the follower graph, researchers use retweets, although it is not clear how this
choice affects analysis. Using a sample of the Twitter follower graph and the
tweets posted by users within it, we reconstruct the retweet graph and quantify
its impact on the measures of echo chambers and exposure. While we find that
echo chambers exist in both graphs, they are more pronounced in the retweet
graph. We compare the information users see via their follower and retweet
networks to show that retweeted accounts share systematically more polarized
content. This bias cannot be explained by the activity or polarization within
users' own follower graph neighborhoods but by the increased attention they pay
to accounts that are ideologically aligned with their own views. Our results
suggest that studies relying on the retweet graphs overestimate the echo
chamber effects and exposure to polarized information.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 18:51:54 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 09:01:40 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Rao",
"Ashwin",
""
],
[
"Morstatter",
"Fred",
""
],
[
"Lerman",
"Kristina",
""
]
] |
new_dataset
| 0.968264 |
2302.01876
|
Qiong Li
|
Qiong Li, Chao Fang, Zhongfeng Wang
|
PDPU: An Open-Source Posit Dot-Product Unit for Deep Learning
Applications
|
Accepted by 2023 IEEE International Symposium on Circuits and Systems
| null |
10.1109/ISCAS46773.2023.10182007
| null |
cs.AR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Posit has been a promising alternative to the IEEE-754 floating point format
for deep learning applications due to its better trade-off between dynamic
range and accuracy. However, hardware implementation of posit arithmetic
requires further exploration, especially for the dot-product operations
dominated in deep neural networks (DNNs). It has been implemented by either the
combination of multipliers and an adder tree or cascaded fused multiply-add
units, leading to poor computational efficiency and excessive hardware
overhead. To address this issue, we propose an open-source posit dot-product
unit, namely PDPU, that facilitates resource-efficient and high-throughput
dot-product hardware implementation. PDPU not only features the fused and
mixed-precision architecture that eliminates redundant latency and hardware
resources, but also has a fine-grained 6-stage pipeline, improving
computational efficiency. A configurable PDPU generator is further developed to
meet the diverse needs of various DNNs for computational accuracy. Experimental
results evaluated under the 28nm CMOS process show that PDPU reduces area,
latency, and power by up to 43%, 64%, and 70%, respectively, compared to the
existing implementations. Hence, PDPU has great potential as the computing core
of posit-based accelerators for deep learning applications.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 17:26:12 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Li",
"Qiong",
""
],
[
"Fang",
"Chao",
""
],
[
"Wang",
"Zhongfeng",
""
]
] |
new_dataset
| 0.996952 |
2302.14831
|
Meshia C\'edric Oveneke
|
Meshia C\'edric Oveneke, Rucha Vaishampayan, Deogratias Lukamba
Nsadisa, Jenny Ambukiyenyi Onya
|
FacEDiM: A Face Embedding Distribution Model for Few-Shot Biometric
Authentication of Cattle
|
4 pages, 1 figure, 1 table, paper accepted at Black In AI at the 36th
Conference on Neural Information Processing Systems (NeurIPS 2022), New
Orleans, USA
| null | null | null |
cs.CV cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work proposes to solve the problem of few-shot biometric authentication
by computing the Mahalanobis distance between testing embeddings and a
multivariate Gaussian distribution of training embeddings obtained using
pre-trained CNNs. Experimental results show that models pre-trained on the
ImageNet dataset significantly outperform models pre-trained on human faces.
With a VGG16 model, we obtain a FRR of 1.25% for a FAR of 1.18% on a dataset of
20 cattle identities.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 18:28:35 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 13:20:49 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Oveneke",
"Meshia Cédric",
""
],
[
"Vaishampayan",
"Rucha",
""
],
[
"Nsadisa",
"Deogratias Lukamba",
""
],
[
"Onya",
"Jenny Ambukiyenyi",
""
]
] |
new_dataset
| 0.955431 |
2303.00307
|
Saud Khan
|
Saud Khan, Chandra Thapa, Salman Durrani and Seyit Camtepe
|
Access-based Lightweight Physical Layer Authentication for the Internet
of Things Devices
|
Submitted to IEEE for possible publication
| null | null | null |
cs.CR cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Physical-layer authentication is a popular alternative to the conventional
key-based authentication for internet of things (IoT) devices due to their
limited computational capacity and battery power. However, this approach has
limitations due to poor robustness under channel fluctuations, reconciliation
overhead, and no clear safeguard distance to ensure the secrecy of the
generated authentication keys. In this regard, we propose a novel, secure, and
lightweight continuous authentication scheme for IoT device authentication. Our
scheme utilizes the inherent properties of the IoT devices' transmission model
as its source for seed generation and device authentication. Specifically, our
proposed scheme provides continuous authentication by checking the access time
slots and spreading sequences of the IoT devices instead of repeatedly
generating and verifying shared keys. Due to this, access to a coherent key is
not required in our proposed scheme, resulting in the concealment of the seed
information from attackers. Our proposed authentication scheme for IoT devices
demonstrates improved performance compared to the benchmark schemes relying on
physical-channel. Our empirical results find a near threefold decrease in
misdetection rate of illegitimate devices and close to zero false alarm rate in
various system settings with varied numbers of active devices up to 200 and
signal-to-noise ratio from 0 dB to 30 dB. Our proposed authentication scheme
also has a lower computational complexity of at least half the computational
cost of the benchmark schemes based on support vector machine and binary
hypothesis testing in our studies. This further corroborates the practicality
of our scheme for IoT deployments.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 08:11:52 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 07:03:56 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Khan",
"Saud",
""
],
[
"Thapa",
"Chandra",
""
],
[
"Durrani",
"Salman",
""
],
[
"Camtepe",
"Seyit",
""
]
] |
new_dataset
| 0.998265 |
2303.16109
|
Sajjad Mozaffari
|
Sajjad Mozaffari, Mreza Alipour Sormoli, Konstantinos Koufos, and
Mehrdad Dianati
|
Multimodal Manoeuvre and Trajectory Prediction for Automated Driving on
Highways Using Transformer Networks
|
8 pages, 3 figures, submitted to IEEE RAL
| null | null | null |
cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Predicting the behaviour (i.e., manoeuvre/trajectory) of other road users,
including vehicles, is critical for the safe and efficient operation of
autonomous vehicles (AVs), a.k.a., automated driving systems (ADSs). Due to the
uncertain future behaviour of vehicles, multiple future behaviour modes are
often plausible for a vehicle in a given driving scene. Therefore, multimodal
prediction can provide richer information than single-mode prediction, enabling
AVs to perform a better risk assessment. To this end, we propose a novel
multimodal prediction framework that can predict multiple plausible behaviour
modes and their likelihoods. The proposed framework includes a bespoke problem
formulation for manoeuvre prediction, a novel transformer-based prediction
model, and a tailored training method for multimodal manoeuvre and trajectory
prediction. The performance of the framework is evaluated using three public
highway driving datasets, namely NGSIM, highD, and exiD. The results show that
our framework outperforms the state-of-the-art multimodal methods in terms of
prediction error and is capable of predicting plausible manoeuvre and
trajectory modes.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 16:25:16 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 16:58:06 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Mozaffari",
"Sajjad",
""
],
[
"Sormoli",
"Mreza Alipour",
""
],
[
"Koufos",
"Konstantinos",
""
],
[
"Dianati",
"Mehrdad",
""
]
] |
new_dataset
| 0.970445 |
2305.11990
|
Eduardo Garcia Do Nascimento
|
Eduardo Nascimento, John Just, Jurandy Almeida, and Tiago Almeida
|
Productive Crop Field Detection: A New Dataset and Deep Learning
Benchmark Results
|
Preprint of the paper https://doi.org/10.1109/lgrs.2023.3296064
published in IEEE Geoscience and Remote Sensing Letters
| null |
10.1109/lgrs.2023.3296064
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In precision agriculture, detecting productive crop fields is an essential
practice that allows the farmer to evaluate operating performance separately
and compare different seed varieties, pesticides, and fertilizers. However,
manually identifying productive fields is often a time-consuming and
error-prone task. Previous studies explore different methods to detect crop
fields using advanced machine learning algorithms, but they often lack good
quality labeled data. In this context, we propose a high-quality dataset
generated by machine operation combined with Sentinel-2 images tracked over
time. As far as we know, it is the first one to overcome the lack of labeled
samples by using this technique. In sequence, we apply a semi-supervised
classification of unlabeled data and state-of-the-art supervised and
self-supervised deep learning methods to detect productive crop fields
automatically. Finally, the results demonstrate high accuracy in Positive
Unlabeled learning, which perfectly fits the problem where we have high
confidence in the positive samples. Best performances have been found in
Triplet Loss Siamese given the existence of an accurate dataset and Contrastive
Learning considering situations where we do not have a comprehensive labeled
dataset available.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 20:30:59 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 23:43:35 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Nascimento",
"Eduardo",
""
],
[
"Just",
"John",
""
],
[
"Almeida",
"Jurandy",
""
],
[
"Almeida",
"Tiago",
""
]
] |
new_dataset
| 0.99978 |
2305.18120
|
Sanaz Sabzevari
|
Reza Dadfar, Sanaz Sabzevari, M\r{a}rten Bj\"orkman, Danica Kragic
|
TD-GEM: Text-Driven Garment Editing Mapper
|
The first two authors contributed equally
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Language-based fashion image editing allows users to try out variations of
desired garments through provided text prompts. Inspired by research on
manipulating latent representations in StyleCLIP and HairCLIP, we focus on
these latent spaces for editing fashion items of full-body human datasets.
Currently, there is a gap in handling fashion image editing due to the
complexity of garment shapes and textures and the diversity of human poses. In
this paper, we propose an editing optimizer scheme method called Text-Driven
Garment Editing Mapper (TD-GEM), aiming to edit fashion items in a disentangled
way. To this end, we initially obtain a latent representation of an image
through generative adversarial network inversions such as Encoder for Editing
(e4e) or Pivotal Tuning Inversion (PTI) for more accurate results. An
optimization-based Contrastive Language-Image Pre-training (CLIP) is then
utilized to guide the latent representation of a fashion image in the direction
of a target attribute expressed in terms of a text prompt. Our TD-GEM
manipulates the image accurately according to the target attribute, while other
parts of the image are kept untouched. In the experiments, we evaluate TD-GEM
on two different attributes (i.e., "color" and "sleeve length"), which
effectively generates realistic images compared to the recent manipulation
schemes.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 14:31:54 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 09:19:29 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Dadfar",
"Reza",
""
],
[
"Sabzevari",
"Sanaz",
""
],
[
"Björkman",
"Mårten",
""
],
[
"Kragic",
"Danica",
""
]
] |
new_dataset
| 0.999682 |
2307.02100
|
Siyi Du
|
Siyi Du, Nourhan Bayasi, Ghassan Harmarneh, Rafeef Garbi
|
MDViT: Multi-domain Vision Transformer for Small Medical Image
Segmentation Datasets
|
10 pages, 2 figures, accepted by 26th International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite its clinical utility, medical image segmentation (MIS) remains a
daunting task due to images' inherent complexity and variability. Vision
transformers (ViTs) have recently emerged as a promising solution to improve
MIS; however, they require larger training datasets than convolutional neural
networks. To overcome this obstacle, data-efficient ViTs were proposed, but
they are typically trained using a single source of data, which overlooks the
valuable knowledge that could be leveraged from other available datasets.
Naively combining datasets from different domains can result in negative
knowledge transfer (NKT), i.e., a decrease in model performance on some domains
with non-negligible inter-domain heterogeneity. In this paper, we propose
MDViT, the first multi-domain ViT that includes domain adapters to mitigate
data-hunger and combat NKT by adaptively exploiting knowledge in multiple small
data resources (domains). Further, to enhance representation learning across
domains, we integrate a mutual knowledge distillation paradigm that transfers
knowledge between a universal network (spanning all the domains) and auxiliary
domain-specific branches. Experiments on 4 skin lesion segmentation datasets
show that MDViT outperforms state-of-the-art algorithms, with superior
segmentation performance and a fixed model size, at inference time, even as
more domains are added. Our code is available at
https://github.com/siyi-wind/MDViT.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 08:19:29 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 02:13:29 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Du",
"Siyi",
""
],
[
"Bayasi",
"Nourhan",
""
],
[
"Harmarneh",
"Ghassan",
""
],
[
"Garbi",
"Rafeef",
""
]
] |
new_dataset
| 0.998314 |
2307.04956
|
Jian Zhang
|
Jian Zhang, Runwei Ding, Miaoju Ban, Ge Yang
|
PKU-GoodsAD: A Supermarket Goods Dataset for Unsupervised Anomaly
Detection and Segmentation
|
8 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual anomaly detection is essential and commonly used for many tasks in the
field of computer vision. Recent anomaly detection datasets mainly focus on
industrial automated inspection, medical image analysis and video surveillance.
In order to broaden the application and research of anomaly detection in
unmanned supermarkets and smart manufacturing, we introduce the supermarket
goods anomaly detection (GoodsAD) dataset. It contains 6124 high-resolution
images of 484 different appearance goods divided into 6 categories. Each
category contains several common different types of anomalies such as
deformation, surface damage and opened. Anomalies contain both texture changes
and structural changes. It follows the unsupervised setting and only normal
(defect-free) images are used for training. Pixel-precise ground truth regions
are provided for all anomalies. Moreover, we also conduct a thorough evaluation
of current state-of-the-art unsupervised anomaly detection methods. This
initial benchmark indicates that some methods which perform well on the
industrial anomaly detection dataset (e.g., MVTec AD), show poor performance on
our dataset. This is a comprehensive, multi-object dataset for supermarket
goods anomaly detection that focuses on real-world applications.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 01:17:00 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 13:11:41 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Zhang",
"Jian",
""
],
[
"Ding",
"Runwei",
""
],
[
"Ban",
"Miaoju",
""
],
[
"Yang",
"Ge",
""
]
] |
new_dataset
| 0.999864 |
2307.09754
|
Mohamed Elnoor
|
Mohamed Elnoor, Adarsh Jagan Sathyamoorthy, Kasun Weerakoon, Dinesh
Manocha
|
ProNav: Proprioceptive Traversability Estimation for Legged Robot
Navigation in Outdoor Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel method, ProNav, which uses proprioceptive signals for
traversability estimation in challenging outdoor terrains for autonomous legged
robot navigation. Our approach uses sensor data from a legged robot's joint
encoders, force, and current sensors to measure the joint positions, forces,
and current consumption respectively to accurately assess a terrain's
stability, resistance to the robot's motion, risk of entrapment, and crash.
Based on these factors, we compute the appropriate robot trajectories and gait
to maximize stability and minimize energy consumption. Our approach can also be
used to predict imminent crashes in challenging terrains and execute behaviors
to preemptively avoid them. We integrate ProNav with a vision-based method to
navigate dense vegetation and demonstrate our method's benefits in real-world
terrains with dense bushes, high granularity, negative obstacles, etc. Our
method shows an improvement of up to 50% in terms of success rate and up to
22.5% reduction in terms of energy consumption compared to exteroceptive-based
methods.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2023 05:34:15 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 03:05:35 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Elnoor",
"Mohamed",
""
],
[
"Sathyamoorthy",
"Adarsh Jagan",
""
],
[
"Weerakoon",
"Kasun",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.983418 |
2307.13699
|
David Woo
|
David James Woo, Hengky Susanto and Kai Guo
|
EFL Students' Attitudes and Contradictions in a Machine-in-the-loop
Activity System
|
38 pages, 4 figures
| null | null | null |
cs.HC cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study applies Activity Theory and investigates the attitudes and
contradictions of 67 English as a foreign language (EFL) students from four
Hong Kong secondary schools towards machine-in-the-loop writing, where
artificial intelligence (AI) suggests ideas during composition. Students
answered an open-ended question about their feelings on writing with AI.
Results revealed mostly positive attitudes, with some negative or mixed
feelings. From a thematic analysis, contradictions or points of tension between
students and AI stemmed from AI inadequacies, students' balancing enthusiasm
with preference, and their striving for language autonomy. The research
highlights the benefits and challenges of implementing machine-in-the-loop
writing in EFL classrooms, suggesting educators align activity goals with
students' values, language abilities, and AI capabilities to enhance students'
activity systems.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 07:38:11 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Woo",
"David James",
""
],
[
"Susanto",
"Hengky",
""
],
[
"Guo",
"Kai",
""
]
] |
new_dataset
| 0.98864 |
2307.13700
|
Muhammad Sohaib Ayub
|
Muhammad Sohaib Ayub, Naimat Ullah, Sarwan Ali, Imdad Ullah Khan, Mian
Muhammad Awais, Muhammad Asad Khan and Safiullah Faizullah
|
CAMP: A Context-Aware Cricket Players Performance Metric
| null |
Journal of the Operational Research Society (2023) 1-27
|
10.1080/01605682.2023.2237530
| null |
cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Cricket is the second most popular sport after soccer in terms of viewership.
However, the assessment of individual player performance, a fundamental task in
team sports, is currently primarily based on aggregate performance statistics,
including average runs and wickets taken. We propose Context-Aware Metric of
player Performance, CAMP, to quantify individual players' contributions toward
a cricket match outcome. CAMP employs data mining methods and enables effective
data-driven decision-making for selection and drafting, coaching and training,
team line-ups, and strategy development. CAMP incorporates the exact context of
performance, such as opponents' strengths and specific circumstances of games,
such as pressure situations. We empirically evaluate CAMP on data of
limited-over cricket matches between 2001 and 2019. In every match, a committee
of experts declares one player as the best player, called Man of the Match
(MoM). The top two rated players by CAMP match with MoM in 83\% of the 961
games. Thus, the CAMP rating of the best player closely matches that of the
domain experts. By this measure, CAMP significantly outperforms the current
best-known players' contribution measure based on the Duckworth-Lewis-Stern
(DLS) method.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 15:12:10 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Ayub",
"Muhammad Sohaib",
""
],
[
"Ullah",
"Naimat",
""
],
[
"Ali",
"Sarwan",
""
],
[
"Khan",
"Imdad Ullah",
""
],
[
"Awais",
"Mian Muhammad",
""
],
[
"Khan",
"Muhammad Asad",
""
],
[
"Faizullah",
"Safiullah",
""
]
] |
new_dataset
| 0.995933 |
2307.13706
|
Mathieu d'Aquin
|
Annanda Sousa (NUI Galway), Karen Young (NUI Galway), Mathieu D'aquin
(Data Science, Knowledge, Reasoning and Engineering, LORIA, LORIA - NLPKD),
Manel Zarrouk (LIPN), Jennifer Holloway (ASK)
|
Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection
in Children with Autism
| null |
HCII 2023: Universal Access in Human-Computer Interaction,
Margherita Antona; Constantine Stephanidis, Jul 2023, Copenhagen, Denmark.
pp.657-677
|
10.1007/978-3-031-35681-0_43
| null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic Emotion Detection (ED) aims to build systems to identify users'
emotions automatically. This field has the potential to enhance HCI, creating
an individualised experience for the user. However, ED systems tend to perform
poorly on people with Autism Spectrum Disorder (ASD). Hence, the need to create
ED systems tailored to how people with autism express emotions. Previous works
have created ED systems tailored for children with ASD but did not share the
resulting dataset. Sharing annotated datasets is essential to enable the
development of more advanced computer models for ED within the research
community. In this paper, we describe our experience establishing a process to
create a multimodal annotated dataset featuring children with a level 1
diagnosis of autism. In addition, we introduce CALMED (Children, Autism,
Multimodal, Emotion, Detection), the resulting multimodal emotion detection
dataset featuring children with autism aged 8-12. CALMED includes audio and
video features extracted from recording files of study sessions with
participants, together with annotations provided by their parents into four
target classes. The generated dataset includes a total of 57,012 examples, with
each example representing a time window of 200ms (0.2s). Our experience and
methods described here, together with the dataset shared, aim to contribute to
future research applications of affective computing in ASD, which has the
potential to create systems to improve the lives of people with ASD.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 11:52:05 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Sousa",
"Annanda",
"",
"NUI Galway"
],
[
"Young",
"Karen",
"",
"NUI Galway"
],
[
"D'aquin",
"Mathieu",
"",
"Data Science, Knowledge, Reasoning and Engineering, LORIA, LORIA - NLPKD"
],
[
"Zarrouk",
"Manel",
"",
"LIPN"
],
[
"Holloway",
"Jennifer",
"",
"ASK"
]
] |
new_dataset
| 0.999637 |
2307.13746
|
Muhammad Ali Farooq
|
Muhammad Ali Farooq, Wang Yao, Gabriel Costache, Peter Corcoran
|
ChildGAN: Large Scale Synthetic Child Facial Data Using Domain
Adaptation in StyleGAN
|
The Paper is submitted in IEEE Access Journal
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this research work, we proposed a novel ChildGAN, a pair of GAN networks
for generating synthetic boys and girls facial data derived from StyleGAN2.
ChildGAN is built by performing smooth domain transfer using transfer learning.
It provides photo-realistic, high-quality data samples. A large-scale dataset
is rendered with a variety of smart facial transformations: facial expressions,
age progression, eye blink effects, head pose, skin and hair color variations,
and variable lighting conditions. The dataset comprises more than 300k distinct
data samples. Further, the uniqueness and characteristics of the rendered
facial features are validated by running different computer vision application
tests which include CNN-based child gender classifier, face localization and
facial landmarks detection test, identity similarity evaluation using ArcFace,
and lastly running eye detection and eye aspect ratio tests. The results
demonstrate that synthetic child facial data of high quality offers an
alternative to the cost and complexity of collecting a large-scale dataset from
real children.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 18:04:52 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Farooq",
"Muhammad Ali",
""
],
[
"Yao",
"Wang",
""
],
[
"Costache",
"Gabriel",
""
],
[
"Corcoran",
"Peter",
""
]
] |
new_dataset
| 0.99944 |
2307.13815
|
Jiajun Zhang
|
Jiajun Zhang, Georgina Cosma, Sarah Bugby, Jason Watkins
|
ForestMonkey: Toolkit for Reasoning with AI-based Defect Detection and
Classification Models
|
6 pages, 5 figures, submitted to 2023 IEEE symposium series on
computational intelligence (SSCI)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial intelligence (AI) reasoning and explainable AI (XAI) tasks have
gained popularity recently, enabling users to explain the predictions or
decision processes of AI models. This paper introduces Forest Monkey (FM), a
toolkit designed to reason about the outputs of any AI-based defect detection
and/or classification model with data explainability. Implemented as a Python package,
FM takes input in the form of dataset folder paths (including original images,
ground truth labels, and predicted labels) and provides a set of charts and a
text file to illustrate the reasoning results and suggest possible
improvements. The FM toolkit consists of processes such as feature extraction
from predictions to reasoning targets, feature extraction from images to defect
characteristics, and a decision tree-based AI-Reasoner. Additionally, this
paper investigates the time performance of the FM toolkit when applied to four
AI models with different datasets. Lastly, a tutorial is provided to guide
users in performing reasoning tasks using the FM toolkit.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 20:53:31 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Zhang",
"Jiajun",
""
],
[
"Cosma",
"Georgina",
""
],
[
"Bugby",
"Sarah",
""
],
[
"Watkins",
"Jason",
""
]
] |
new_dataset
| 0.999783 |
2307.13826
|
Eric Vigoda
|
Daniel Stefankovic and Eric Vigoda
|
Spectral Independence Lecture Notes
|
Comments appreciated. These notes are based on the lectures and notes
from the UCSB Summer School on Spectral Independence in August 2022
| null | null | null |
cs.DM math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
These are self-contained lecture notes for spectral independence. For an
$n$-vertex graph, the spectral independence condition is a bound on the maximum
eigenvalue of the $n\times n$ influence matrix whose entries capture the
influence between pairs of vertices; it is closely related to the covariance
matrix. We will present recent results showing that spectral independence
implies that the mixing time of the Glauber dynamics is polynomial (where the degree
of the polynomial depends on certain parameters). The proof utilizes
local-to-global theorems which we will detail in these notes. Finally, we will
present more recent results showing that spectral independence implies an
optimal bound on the relaxation time (inverse spectral gap) and with some
additional conditions implies an optimal mixing time bound of $O(n\log{n})$ for
the Glauber dynamics. Our focus is on the analysis of the spectral gap of the
Glauber dynamics from a functional analysis perspective of analyzing the
associated local and global variance, and we present proofs of the associated
local-to-global theorems from this same Markov chain perspective.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 21:39:41 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Stefankovic",
"Daniel",
""
],
[
"Vigoda",
"Eric",
""
]
] |
new_dataset
| 0.990902 |
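The lecture-notes abstract above describes spectral independence as a bound on the maximum eigenvalue of an $n\times n$ influence matrix. A toy numeric illustration of checking such a bound (the matrix entries and the parameter `eta` below are invented for the example, not derived from any spin system in the notes):

```python
import numpy as np

# A made-up symmetric 3x3 "influence matrix" for a 3-vertex graph;
# entry (i, j) would capture the influence of vertex i on vertex j.
influence = np.array([
    [0.0, 0.3, 0.1],
    [0.3, 0.0, 0.2],
    [0.1, 0.2, 0.0],
])

# Spectral independence with parameter eta asks that the maximum
# eigenvalue of the influence matrix is at most eta.
lambda_max = max(np.linalg.eigvalsh(influence))
eta = 1.0  # hypothetical spectral-independence parameter
print(f"max eigenvalue = {lambda_max:.4f}, "
      f"spectrally independent w.r.t. eta={eta}: {lambda_max <= eta}")
```

In the notes' setting the influence matrix need not be symmetric; `eigvalsh` is used here only because the toy example is.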
2307.13829
|
Cagri Toraman
|
Umitcan Sahin, Izzet Emre Kucukkaya, Oguzhan Ozcelik, Cagri Toraman
|
ARC-NLP at Multimodal Hate Speech Event Detection 2023: Multimodal
Methods Boosted by Ensemble Learning, Syntactical and Entity Features
|
Submitted to CASE at RANLP 2023
| null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Text-embedded images can serve as a means of spreading hate speech,
propaganda, and extremist beliefs. Throughout the Russia-Ukraine war, both
opposing factions heavily relied on text-embedded images as a vehicle for
spreading propaganda and hate speech. Ensuring the effective detection of hate
speech and propaganda is of utmost importance to mitigate the negative effect
of hate speech dissemination. In this paper, we outline our methodologies for
two subtasks of Multimodal Hate Speech Event Detection 2023. For the first
subtask, hate speech detection, we utilize multimodal deep learning models
boosted by ensemble learning and syntactical text attributes. For the second
subtask, target detection, we employ multimodal deep learning models boosted by
named entity features. Through experimentation, we demonstrate the superior
performance of our models compared to all textual, visual, and text-visual
baselines employed in multimodal hate speech detection. Furthermore, our models
achieve the first place in both subtasks on the final leaderboard of the shared
task.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 21:56:14 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Sahin",
"Umitcan",
""
],
[
"Kucukkaya",
"Izzet Emre",
""
],
[
"Ozcelik",
"Oguzhan",
""
],
[
"Toraman",
"Cagri",
""
]
] |
new_dataset
| 0.99351 |
2307.13848
|
Mahyar Daneshpajooh
|
Mahyar Daneshpajooh, Niusha Moshrefi, Mahdi Darabi, Sina Hashemi,
Mehrafarin Kazemi
|
TeleBTC: Trustless Wrapped Bitcoin
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces TeleBTC, a fully decentralized protocol designed to
wrap Bitcoin (BTC) on programmable blockchains. The creation of a decentralized
wrapped BTC presents challenges due to the non-programmable nature of Bitcoin,
making it difficult to custody BTCs in a decentralized way. Existing solutions
have addressed this challenge by introducing an external layer of validators
who take custody of users' BTCs. However, the security and decentralization of
this layer are inferior to the underlying blockchains on which wrapped BTC is
built. Moreover, the process for a validator to join or leave has become
overly complex and expensive. To overcome these limitations, we propose a novel
approach that eliminates the need for such an external layer by leveraging the
light client bridge protocol. Additionally, we employ economic mechanisms such
as incentivization and slashing, resulting in a secure and trust-minimized
wrapped BTC solution. With TeleBTC, users can seamlessly transfer their BTC to
other blockchains and utilize it within decentralized applications.
Furthermore, they can unwrap their TeleBTC and reclaim the native BTC. To
address the high costs associated with light client bridges, we present an
optimistic approach that minimizes the cost. This approach significantly
reduces the operational expenses of running the protocol.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 22:46:42 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Daneshpajooh",
"Mahyar",
""
],
[
"Moshrefi",
"Niusha",
""
],
[
"Darabi",
"Mahdi",
""
],
[
"Hashemi",
"Sina",
""
],
[
"Kazemi",
"Mehrafarin",
""
]
] |
new_dataset
| 0.997192 |
2307.13854
|
Frank F. Xu
|
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek
Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Work in progress
| null | null | null |
cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With generative AI advances, the exciting potential for autonomous agents to
manage daily tasks via natural language commands has emerged. However, current
agents are primarily created and tested in simplified synthetic environments,
substantially limiting real-world scenario representation. In this paper, we
build an environment for agent command and control that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on websites,
and we create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and are
designed to emulate tasks that humans routinely perform on the internet. We
design and implement several autonomous agents, integrating recent techniques
such as reasoning before acting. The results demonstrate that solving complex
tasks is challenging: our best GPT-4-based agent only achieves an end-to-end
task success rate of 10.59%. These results highlight the need for further
development of robust agents, show that current state-of-the-art LMs are far
from perfect performance on these real-life tasks, and demonstrate that WebArena can be used to
measure such progress. Our code, data, environment reproduction resources, and
video demonstrations are publicly available at https://webarena.dev/.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 22:59:32 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Zhou",
"Shuyan",
""
],
[
"Xu",
"Frank F.",
""
],
[
"Zhu",
"Hao",
""
],
[
"Zhou",
"Xuhui",
""
],
[
"Lo",
"Robert",
""
],
[
"Sridhar",
"Abishek",
""
],
[
"Cheng",
"Xianyi",
""
],
[
"Bisk",
"Yonatan",
""
],
[
"Fried",
"Daniel",
""
],
[
"Alon",
"Uri",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.999775 |
2307.13861
|
Dmitrii Krylov
|
Dmitrii Krylov, Pooya Khajeh, Junhan Ouyang, Thomas Reeves, Tongkai
Liu, Hiba Ajmal, Hamidreza Aghasi, Roy Fox
|
Learning to Design Analog Circuits to Meet Threshold Specifications
|
in proceedings of ICML 23
| null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated design of analog and radio-frequency circuits using supervised or
reinforcement learning from simulation data has recently been studied as an
alternative to manual expert design. It is straightforward for a design agent
to learn an inverse function from desired performance metrics to circuit
parameters. However, it is more common for a user to have threshold performance
criteria rather than an exact target vector of feasible performance measures.
In this work, we propose a method for generating from simulation data a dataset
on which a system can be trained via supervised learning to design circuits to
meet threshold specifications. Moreover, we perform the most extensive
evaluation of automated analog circuit design to date, experimenting on a
significantly more diverse set of circuits than in prior work, covering linear,
nonlinear, and autonomous circuit configurations, and show that our method
consistently reaches a success rate better than 90% at a 5% error margin, while
also improving data efficiency by upward of an order of magnitude. A demo of
this system is available at circuits.streamlit.app
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 23:25:05 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Krylov",
"Dmitrii",
""
],
[
"Khajeh",
"Pooya",
""
],
[
"Ouyang",
"Junhan",
""
],
[
"Reeves",
"Thomas",
""
],
[
"Liu",
"Tongkai",
""
],
[
"Ajmal",
"Hiba",
""
],
[
"Aghasi",
"Hamidreza",
""
],
[
"Fox",
"Roy",
""
]
] |
new_dataset
| 0.973153 |
2307.13882
|
Hao Wang
|
Hao Wang
|
Human Culture: A History Irrelevant and Predictable Experience
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Human culture research has witnessed a revolutionary opportunity thanks to
the big data and social network revolution. Websites such as Douban.com,
Goodreads.com, Pandora and IMDB have become the new gold mine for cultural
researchers. In 2021 and 2022, the author of this paper invented two data-free
recommender systems for the AI cold-start problem. The algorithms can recommend
cultural and commercial products to users without reference to users' past
preferences. The social implication of these inventions is that human cultural
tastes can be predicted very precisely without any information related to human
individuals. In this paper, we analyze these AI technologies and their cultural
implications together with other AI algorithms. We show that human culture is
(mostly) a history-irrelevant and predictable experience.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 01:07:24 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Wang",
"Hao",
""
]
] |
new_dataset
| 0.986578 |
2307.13900
|
Hyunjong Ok
|
Hyunjong Ok
|
FinTree: Financial Dataset Pretrain Transformer Encoder for Relation
Extraction
|
4pages, 2 figures, The SIGIR'23 Workshop on Knowledge Discovery from
Unstructured Data in Financial Services
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FinTree, Financial Dataset Pretrain Transformer Encoder for
Relation Extraction. Utilizing an encoder language model, we further pretrain
FinTree on the financial dataset, adapting the model in financial domain tasks.
FinTree stands out with its novel structure that predicts a masked token
instead of the conventional [CLS] token, inspired by the Pattern Exploiting
Training methodology. This structure allows for more accurate relation
predictions between two given entities. The model is trained with a unique
input pattern to provide contextual and positional information about the
entities of interest, and a post-processing step ensures accurate predictions
in line with the entity types. Our experiments demonstrate that FinTree
outperforms competing approaches on REFinD, a large-scale financial relation extraction dataset.
The code and pretrained models are available at
https://github.com/HJ-Ok/FinTree.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 01:48:52 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Ok",
"Hyunjong",
""
]
] |
new_dataset
| 0.998946 |
2307.13915
|
Jose Damian Lopez Diaz
|
Jose Damian Lopez Diaz
|
Algoritmo Concurrente por Conjuntos de Pilas con Multiplicidad:
SetStackLogic
|
23 pages, in Spanish language, 7 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article aims to describe and explain the theoretical foundations of
concurrent and set-concurrent algorithms, considering an asynchronous shared
memory system where any number of processes can crash. Verification of
concurrent algorithms is often described in terms of their progress condition,
which guarantees that eventually something good will happen, also called the
security of the algorithms, and correctness, which guarantees that nothing bad
will happen, also called liveliness. of the algorithms. The meaning of
correctness of a concurrent algorithm is explained in detail, focusing on
linearizability, and a generalization is addressed, concurrency by sets; which
is much more recent and less well known. The {\it SetStackLogic} algorithm is
shown, which is a set-concurrent algorithm and is also an implementation of a
stack with multiplicity. The properties of the algorithm {\it SetStackLogic}
are demonstrated in a formal and detailed way, in order to present a rigorous
scheme in the formalization of this type of algorithm; same that could be used
for other algorithms. In addition, the operation of the algorithm is explained
through scenario examples that illustrate its dynamics in some possible
executions.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 02:32:56 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Diaz",
"Jose Damian Lopez",
""
]
] |
new_dataset
| 0.995755 |
2307.13924
|
Boris Ivanovic
|
Boris Ivanovic, Guanyu Song, Igor Gilitschenski, Marco Pavone
|
trajdata: A Unified Interface to Multiple Human Trajectory Datasets
|
15 pages, 15 figures, 3 tables
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of trajectory forecasting has grown significantly in recent years,
partially owing to the release of numerous large-scale, real-world human
trajectory datasets for autonomous vehicles (AVs) and pedestrian motion
tracking. While such datasets have been a boon for the community, they each use
custom and unique data formats and APIs, making it cumbersome for researchers
to train and evaluate methods across multiple datasets. To remedy this, we
present trajdata: a unified interface to multiple human trajectory datasets. At
its core, trajdata provides a simple, uniform, and efficient representation and
API for trajectory and map data. As a demonstration of its capabilities, in
this work we conduct a comprehensive empirical evaluation of existing
trajectory datasets, providing users with a rich understanding of the data
underpinning much of current pedestrian and AV motion forecasting research, and
proposing suggestions for future datasets from these insights. trajdata is
permissively licensed (Apache 2.0) and can be accessed online at
https://github.com/NVlabs/trajdata
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 02:45:59 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Ivanovic",
"Boris",
""
],
[
"Song",
"Guanyu",
""
],
[
"Gilitschenski",
"Igor",
""
],
[
"Pavone",
"Marco",
""
]
] |
new_dataset
| 0.986786 |
2307.14021
|
Huzheng Yang
|
Huzheng Yang, Jianbo Shi, James Gee
|
Retinotopy Inspired Brain Encoding Model and the All-for-One Training
Recipe
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Brain encoding models aim to predict brain voxel-wise responses to stimuli
images, replicating brain signals captured by neuroimaging techniques. There is
a large volume of publicly available data, but training a comprehensive brain
encoding model is challenging. The main difficulties stem from a) diversity
within individual brain, with functional heterogeneous brain regions; b)
diversity of brains from different subjects, due to genetic and developmental
differences; c) diversity of imaging modalities and processing pipelines. We
use this diversity to our advantage by introducing the All-for-One training
recipe, which divides the challenging one-big-model problem into multiple small
models, with the small models aggregating the knowledge while preserving the
distinction between the different functional regions. Agnostic of the training
recipe, we use biological knowledge of the brain, specifically retinotopy, to
introduce an inductive bias to learn a 3D brain-to-image mapping that ensures a)
each neuron knows from which image regions and semantic levels to gather
information, and b) no neurons are left behind in the model.
We pre-trained a brain encoding model using over one million data points from
five public datasets spanning three imaging modalities. To the best of our
knowledge, this is the most comprehensive brain encoding model to date. We
demonstrate the effectiveness of the pre-trained model as a drop-in replacement
for commonly used vision backbone models. Furthermore, we demonstrate the
application of the model to brain decoding. Code and the model checkpoint will
be made available.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 08:06:40 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Yang",
"Huzheng",
""
],
[
"Shi",
"Jianbo",
""
],
[
"Gee",
"James",
""
]
] |
new_dataset
| 0.989919 |
2307.14031
|
Songbo Hu
|
Songbo Hu, Han Zhou, Mete Hergul, Milan Gritta, Guchun Zhang, Ignacio
Iacobacci, Ivan Vuli\'c, Anna Korhonen
|
Multi3WOZ: A Multilingual, Multi-Domain, Multi-Parallel Dataset for
Training and Evaluating Culturally Adapted Task-Oriented Dialog Systems
|
A pre-MIT Press publication version for TACL
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Creating high-quality annotated data for task-oriented dialog (ToD) is known
to be notoriously difficult, and the challenges are amplified when the goal is
to create equitable, culturally adapted, and large-scale ToD datasets for
multiple languages. Therefore, the current datasets are still very scarce and
suffer from limitations such as translation-based non-native dialogs with
translation artefacts, small scale, or lack of cultural adaptation, among
others. In this work, we first take stock of the current landscape of
multilingual ToD datasets, offering a systematic overview of their properties
and limitations. Aiming to reduce all the detected limitations, we then
introduce Multi3WOZ, a novel multilingual, multi-domain, multi-parallel ToD
dataset. It is large-scale and offers culturally adapted dialogs in 4 languages
to enable training and evaluation of multilingual and cross-lingual ToD
systems. We describe a complex bottom-up data collection process that yielded
the final dataset, and offer the first sets of baseline scores across different
ToD-related tasks for future reference, also highlighting its challenging
nature.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 08:29:42 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Hu",
"Songbo",
""
],
[
"Zhou",
"Han",
""
],
[
"Hergul",
"Mete",
""
],
[
"Gritta",
"Milan",
""
],
[
"Zhang",
"Guchun",
""
],
[
"Iacobacci",
"Ignacio",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Korhonen",
"Anna",
""
]
] |
new_dataset
| 0.998878 |
2307.14036
|
Nao Hirokawa
|
Nao Hirokawa and Aart Middeldorp
|
Hydra Battles and AC Termination, Revisited
|
Presented at WST 2023
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a termination proof for the Battle of Hercules and Hydra
represented as a rewrite system with AC symbols. Our proof employs type
introduction in connection with many-sorted semantic labeling for AC rewriting
and AC-RPO.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 08:40:21 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Hirokawa",
"Nao",
""
],
[
"Middeldorp",
"Aart",
""
]
] |
new_dataset
| 0.995809 |
2307.14057
|
Amit Dvir Dr.
|
Eli Belkind, Ran Dubin, Amit Dvir
|
Open Image Content Disarm And Reconstruction
|
14 pages
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the advance in malware technology, attackers create new ways to hide
their malicious code from antivirus services. One way to obfuscate an attack is
to use common files as cover to hide the malicious scripts, so the malware will
look like a legitimate file. Although cutting-edge Artificial Intelligence and
content-signature detection methods exist, evasive malware successfully bypasses next-generation
malware detection using advanced methods like steganography. Some of the files
commonly used to hide malware are image files (e.g., JPEG). In addition, some
malware use steganography to hide malicious scripts or sensitive data in
images. Steganography in images is difficult to detect even with specialized
tools. Image-based attacks try to attack the user's device using malicious
payloads or utilize image steganography to hide sensitive data inside
legitimate images and leak it outside the user's device. Therefore in this
paper, we present a novel Image Content Disarm and Reconstruction (ICDR). Our
ICDR system removes potential malware, with a zero trust approach, while
maintaining high image quality and file usability. By extracting the image
data, removing it from the rest of the file, and manipulating the image pixels,
it is possible to disable or remove the hidden malware inside the file.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 09:09:48 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Belkind",
"Eli",
""
],
[
"Dubin",
"Ran",
""
],
[
"Dvir",
"Amit",
""
]
] |
new_dataset
| 0.999617 |
2307.14111
|
Fernando Alonso-Fernandez
|
Fernando Alonso-Fernandez, Josef Bigun
|
Periocular biometrics: databases, algorithms and directions
|
Published in: 2016 4th International Conference on Biometrics and
Forensics (IWBF). arXiv admin note: substantial text overlap with
arXiv:1810.03360
| null |
10.1109/IWBF.2016.7449688
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Periocular biometrics has been established as an independent modality due to
concerns on the performance of iris or face systems in uncontrolled conditions.
Periocular refers to the facial region in the eye vicinity, including eyelids,
lashes and eyebrows. It is available over a wide range of acquisition
distances, representing a trade-off between the whole face (which can be
occluded at close distances) and the iris texture (which does not have enough
resolution at long distances). Since the periocular region appears in face or
iris images, it can be used also in conjunction with these modalities. Features
extracted from the periocular region have been also used successfully for
gender classification and ethnicity classification, and to study the impact of
gender transformation or plastic surgery in the recognition performance. This
paper presents a review of the state of the art in periocular biometric
research, providing insight into the most relevant issues and giving a
thorough coverage of the existing literature. Future research trends are also
briefly discussed.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 11:14:36 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Bigun",
"Josef",
""
]
] |
new_dataset
| 0.969807 |
2307.14149
|
Johannes Waldmann
|
Dieter Hofbauer, Johannes Waldmann
|
Old and New Benchmarks for Relative Termination of String Rewrite
Systems
|
Presented at WST 2023
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We provide a critical assessment of the current set of benchmarks for
relative SRS termination in the Termination Problems Database (TPDB): most of
the benchmarks in Waldmann_19 and ICFP_10_relative are, in fact, strictly
terminating (i.e., terminating when non-strict rules are considered strict),
so these benchmarks should be removed or relabelled.
To fill this gap, we enumerate small relative string rewrite systems. At
present, we have complete enumerations for a 2-letter alphabet up to size 11,
and for a 3-letter alphabet up to size 8.
For some selected benchmarks, old and new, we discuss how to prove
termination, automated or not.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 12:27:06 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Hofbauer",
"Dieter",
""
],
[
"Waldmann",
"Johannes",
""
]
] |
new_dataset
| 0.962662 |
2307.14213
|
Ciera McFarland
|
Michael R. Mitchell, Ciera McFarland, Margaret M. Coad
|
Soft Air Pocket Force Sensors for Large Scale Flexible Robots
|
M. R. Mitchell, C. McFarland, and M. M. Coad, "Soft Air Pocket Force
Sensors for Large Scale Flexible Robots," in IEEE International Conference on
Soft Robotics, 2023, pp. 1-8. Video: https://youtu.be/2De0htilW74
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flexible robots have advantages over rigid robots in their ability to conform
physically to their environment and to form a wide variety of shapes. Sensing
the force applied by or to flexible robots is useful for both navigation and
manipulation tasks, but it is challenging due to the need for the sensors to
withstand the robots' shape change without encumbering their functionality.
Also, for robots with long or large bodies, the number of sensors required to
cover the entire surface area of the robot body can be prohibitive due to high
cost and complexity. We present a novel soft air pocket force sensor that is
highly flexible, lightweight, relatively inexpensive, and easily scalable to
various sizes. Our sensor produces a change in internal pressure that is linear
with the applied force. We present results of experimental testing of how
uncontrollable factors (contact location and contact area) and controllable
factors (initial internal pressure, thickness, size, and number of interior
seals) affect the sensitivity. We demonstrate our sensor applied to a vine
robot, a soft inflatable robot that "grows" from the tip via eversion, and we
show that the robot can successfully grow and steer towards an object with
which it senses contact.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 14:28:37 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Mitchell",
"Michael R.",
""
],
[
"McFarland",
"Ciera",
""
],
[
"Coad",
"Margaret M.",
""
]
] |
new_dataset
| 0.998796 |
2307.14243
|
Luca Clissa
|
Luca Clissa, Antonio Macaluso, Roberto Morelli, Alessandra Occhinegro,
Emiliana Piscitiello, Ludovico Taddei, Marco Luppi, Roberto Amici, Matteo
Cerri, Timna Hitrec, Lorenzo Rinaldi, Antonio Zoccoli
|
Fluorescent Neuronal Cells v2: Multi-Task, Multi-Format Annotations for
Deep Learning in Microscopy
|
11 pages; 5 figures; 2 tables
| null | null | null |
cs.CV cs.LG physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Fluorescent Neuronal Cells v2 is a collection of fluorescence microscopy
images and the corresponding ground-truth annotations, designed to foster
innovative research in the domains of Life Sciences and Deep Learning. This
dataset encompasses three image collections in which rodent neuronal cells'
nuclei and cytoplasm are stained with diverse markers to highlight their
anatomical or functional characteristics. Alongside the images, we provide
ground-truth annotations for several learning tasks, including semantic
segmentation, object detection, and counting. The contribution is two-fold.
First, given the variety of annotations and their accessible formats, we
envision our work facilitating methodological advancements in computer vision
approaches for segmentation, detection, feature learning, unsupervised and
self-supervised learning, transfer learning, and related areas. Second, by
enabling extensive exploration and benchmarking, we hope Fluorescent Neuronal
Cells v2 will catalyze breakthroughs in fluorescence microscopy analysis and
promote cutting-edge discoveries in life sciences. The data are available at:
https://amsacta.unibo.it/id/eprint/7347
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 15:14:10 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Clissa",
"Luca",
""
],
[
"Macaluso",
"Antonio",
""
],
[
"Morelli",
"Roberto",
""
],
[
"Occhinegro",
"Alessandra",
""
],
[
"Piscitiello",
"Emiliana",
""
],
[
"Taddei",
"Ludovico",
""
],
[
"Luppi",
"Marco",
""
],
[
"Amici",
"Roberto",
""
],
[
"Cerri",
"Matteo",
""
],
[
"Hitrec",
"Timna",
""
],
[
"Rinaldi",
"Lorenzo",
""
],
[
"Zoccoli",
"Antonio",
""
]
] |
new_dataset
| 0.999593 |
2307.14300
|
Virginio Fratianni
|
Virginio Fratianni
|
Dual and Hull code in the first two generic constructions and
relationship with the Walsh transform of cryptographic functions
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We contribute to the knowledge of linear codes from special polynomials and
functions, which have been studied intensively in the past few years. Such
codes have several applications in secret sharing, authentication codes,
association schemes and strongly regular graphs.
This is the first work in which we study the dual codes in the framework of
the two generic constructions; in particular, we propose a Gram-Schmidt
process (with complexity $\mathcal{O}(n^3)$) to compute them explicitly. The
originality of this contribution lies in studying whether defining sets $D'$
exist that can be used as ingredients to construct the dual code
$\mathcal{C}'$ for a given code $\mathcal{C}$ in the context of the second
generic construction. We also determine a necessary condition expressed by
employing the Walsh transform for a codeword of $\mathcal{C}$ to belong to the
dual. This is achieved both in general and when the involved functions are
weakly regularly bent. We shall give a novel description of the Hull code in
the framework of the two generic constructions. Our primary interest is
constructing linear codes of fixed Hull dimension and determining the (Hamming)
weight of the codewords in their duals.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 17:01:46 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Fratianni",
"Virginio",
""
]
] |
new_dataset
| 0.996835 |
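The abstract above proposes a Gram-Schmidt process of complexity $\mathcal{O}(n^3)$ for computing dual codes. The sketch below shows only the classical Gram-Schmidt orthogonalization over the reals, as a generic illustration of how the procedure and its cubic cost look, not the authors' finite-field construction:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt over the reals: for n vectors of dimension n
    the nested projections give O(n^3) arithmetic operations overall.
    Returns an orthonormal basis for the span of the input vectors."""
    basis = []
    for v in np.asarray(vectors, dtype=float):
        # Subtract the projections onto the basis vectors found so far.
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:  # skip vectors already in the current span
            basis.append(w / norm)
    return np.array(basis)

Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(np.round(Q @ Q.T, 6))  # rows are orthonormal
```

For codes over finite fields the inner product and the handling of self-orthogonal vectors differ substantially from the real case, so this should be read only as a shape-of-the-algorithm analogy.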
2307.14302
|
Chayanon Wichitrnithed
|
Chayanon Wichitrnithed, Eirik Valseth, Ethan J. Kubatko, Younghun
Kang, Mackenzie Hudson, Clint Dawson
|
A Discontinuous Galerkin Finite Element Model for Compound Flood
Simulations
| null | null | null | null |
cs.CE physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent tropical cyclones, e.g., Hurricane Harvey (2017), have led to
significant rainfall and resulting runoff with accompanying flooding. When the
runoff interacts with storm surge, the resulting floods can be greatly
amplified and lead to effects that cannot be modeled by simple superposition of
their distinct sources. In an effort to develop accurate numerical simulations
of runoff, surge, and compounding floods, we develop a local discontinuous
Galerkin method for modified shallow water equations. In this modification,
nonzero sources to the continuity equation are included to incorporate rainfall
into the model using parametric rainfall models from literature as well as
hindcast data. The discontinuous Galerkin spatial discretization is accompanied
with a strong-stability-preserving explicit Runge-Kutta time integrator. Hence,
temporal stability is ensured through the CFL condition and we exploit the
embarrassingly parallel nature of the developed method using MPI
parallelization. We demonstrate the capabilities of the developed method through
a sequence of physically relevant numerical tests, including small scale test
cases based on laboratory measurements and large scale experiments with
Hurricane Harvey in the Gulf of Mexico. The results highlight the conservation
properties and robustness of the developed method and show the potential of
compound flood modeling using our approach.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 17:05:18 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Wichitrnithed",
"Chayanon",
""
],
[
"Valseth",
"Eirik",
""
],
[
"Kubatko",
"Ethan J.",
""
],
[
"Kang",
"Younghun",
""
],
[
"Hudson",
"Mackenzie",
""
],
[
"Dawson",
"Clint",
""
]
] |
new_dataset
| 0.994549 |
2307.14313
|
Tomasz Kryjak
|
Pawel Miera, Hubert Szolc, Tomasz Kryjak
|
LiDAR-based drone navigation with reinforcement learning
|
Accepted for the XXVII Automation 2023 conference
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Reinforcement learning is of increasing importance in the field of robot
control, and simulation plays a key role in this process. In the field of
unmanned aerial vehicles (UAVs, drones), there is also an increase in the
number of published scientific papers involving this approach. In this work, an
autonomous drone control system was prepared to fly forward (according to its
coordinate system) and pass the trees encountered in the forest based on the data from a
rotating LiDAR sensor. The Proximal Policy Optimization (PPO) algorithm, an
example of reinforcement learning (RL), was used to prepare it. A custom
simulator in the Python language was developed for this purpose. The Gazebo
environment, integrated with the Robot Operating System (ROS), was also used to
test the resulting control algorithm. Finally, the prepared solution was
implemented on the Nvidia Jetson Nano eGPU and verified in real test
scenarios. During these tests, the drone successfully completed the set task and was
able to repeatably avoid trees and fly through the forest.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 17:23:33 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Miera",
"Pawel",
""
],
[
"Szolc",
"Hubert",
""
],
[
"Kryjak",
"Tomasz",
""
]
] |
new_dataset
| 0.993686 |
2307.14335
|
Xubo Liu
|
Xubo Liu, Zhongkai Zhu, Haohe Liu, Yi Yuan, Meng Cui, Qiushi Huang,
Jinhua Liang, Yin Cao, Qiuqiang Kong, Mark D. Plumbley, Wenwu Wang
|
WavJourney: Compositional Audio Creation with Large Language Models
|
Project Page: https://audio-agi.github.io/WavJourney_demopage/
| null | null | null |
cs.SD cs.AI cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) have shown great promise in integrating diverse
expert models to tackle intricate language and vision tasks. Despite their
significance in advancing the field of Artificial Intelligence Generated
Content (AIGC), their potential in intelligent audio content creation remains
unexplored. In this work, we tackle the problem of creating audio content with
storylines encompassing speech, music, and sound effects, guided by text
instructions. We present WavJourney, a system that leverages LLMs to connect
various audio models for audio content generation. Given a text description of
an auditory scene, WavJourney first prompts LLMs to generate a structured
script dedicated to audio storytelling. The audio script incorporates diverse
audio elements, organized based on their spatio-temporal relationships. As a
conceptual representation of audio, the audio script provides an interactive
and interpretable rationale for human engagement. Afterward, the audio script
is fed into a script compiler, converting it into a computer program. Each line
of the program calls a task-specific audio generation model or computational
operation function (e.g., concatenate, mix). The computer program is then
executed to obtain an explainable solution for audio generation. We demonstrate
the practicality of WavJourney across diverse real-world scenarios, including
science fiction, education, and radio play. The explainable and interactive
design of WavJourney fosters human-machine co-creation in multi-round
dialogues, enhancing creative control and adaptability in audio production.
WavJourney audiolizes the human imagination, opening up new avenues for
creativity in multimedia content creation.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 17:54:04 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Liu",
"Xubo",
""
],
[
"Zhu",
"Zhongkai",
""
],
[
"Liu",
"Haohe",
""
],
[
"Yuan",
"Yi",
""
],
[
"Cui",
"Meng",
""
],
[
"Huang",
"Qiushi",
""
],
[
"Liang",
"Jinhua",
""
],
[
"Cao",
"Yin",
""
],
[
"Kong",
"Qiuqiang",
""
],
[
"Plumbley",
"Mark D.",
""
],
[
"Wang",
"Wenwu",
""
]
] |
new_dataset
| 0.999465 |
2307.14341
|
Diego Royo
|
Diego Royo and Talha Sultan and Adolfo Mu\~noz and Khadijeh
Masumnia-Bisheh and Eric Brandt and Diego Gutierrez and Andreas Velten and
Julio Marco
|
Virtual Mirrors: Non-Line-of-Sight Imaging Beyond the Third Bounce
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Non-line-of-sight (NLOS) imaging methods are capable of reconstructing
complex scenes that are not visible to an observer using indirect illumination.
However, they assume only third-bounce illumination, so they are currently
limited to single-corner configurations, and present limited visibility when
imaging surfaces at certain orientations. To reason about and tackle these
limitations, we make the key observation that planar diffuse surfaces behave
specularly at wavelengths used in the computational wave-based NLOS imaging
domain. We call such surfaces virtual mirrors. We leverage this observation to
expand the capabilities of NLOS imaging using illumination beyond the third
bounce, addressing two problems: imaging single-corner objects at limited
visibility angles, and imaging objects hidden behind two corners. To image
objects at limited visibility angles, we first analyze the reflections of the
known illuminated point on surfaces of the scene as an estimator of the
position and orientation of objects with limited visibility. We then image
those limited visibility objects by computationally building secondary
apertures at other surfaces that observe the target object from a direct
visibility perspective. Beyond single-corner NLOS imaging, we exploit the
specular behavior of virtual mirrors to image objects hidden behind a second
corner by imaging the space behind such virtual mirrors, where the mirror image
of objects hidden around two corners is formed. No specular surfaces were
involved in the making of this paper.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 17:59:20 GMT"
}
] | 2023-07-27T00:00:00 |
[
[
"Royo",
"Diego",
""
],
[
"Sultan",
"Talha",
""
],
[
"Muñoz",
"Adolfo",
""
],
[
"Masumnia-Bisheh",
"Khadijeh",
""
],
[
"Brandt",
"Eric",
""
],
[
"Gutierrez",
"Diego",
""
],
[
"Velten",
"Andreas",
""
],
[
"Marco",
"Julio",
""
]
] |
new_dataset
| 0.983486 |
2202.09981
|
Lakshmi Natarajan Dr
|
Lakshmi Prasad Natarajan and Prasad Krishnan
|
Berman Codes: A Generalization of Reed-Muller Codes that Achieve BEC
Capacity
|
Accepted for publication in the IEEE Transactions on Information
Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We identify a family of binary codes whose structure is similar to
Reed-Muller (RM) codes and which include RM codes as a strict subclass. The
codes in this family are denoted as $C_n(r,m)$, and their duals are denoted as
$B_n(r,m)$. The length of these codes is $n^m$, where $n \geq 2$, and $r$ is
their `order'. When $n=2$, $C_n(r,m)$ is the RM code of order $r$ and length
$2^m$. The special case of these codes corresponding to $n$ being an odd prime
was studied by Berman (1967) and Blackmore and Norton (2001). Following the
terminology introduced by Blackmore and Norton, we refer to $B_n(r,m)$ as the
Berman code and $C_n(r,m)$ as the dual Berman code. We identify these codes
using a recursive Plotkin-like construction, and we show that these codes have
a rich automorphism group, they are generated by the minimum weight codewords,
and that they can be decoded up to half the minimum distance efficiently. Using
a result of Kumar et al. (2016), we show that these codes achieve the capacity
of the binary erasure channel (BEC) under bit-MAP decoding. Furthermore, except
for double transitivity, they satisfy all the code properties used by Reeves and
Pfister to show that RM codes achieve the capacity of binary-input memoryless
symmetric channels. Finally, when $n$ is odd, we identify a large class of
abelian codes that includes $B_n(r,m)$ and $C_n(r,m)$ and which achieves BEC
capacity.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 04:21:30 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jul 2022 10:52:59 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jul 2023 12:36:25 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Natarajan",
"Lakshmi Prasad",
""
],
[
"Krishnan",
"Prasad",
""
]
] |
new_dataset
| 0.998965 |
2202.12626
|
Zhenyang Li
|
Zhenyang Li, Yangyang Guo, Kejie Wang, Yinwei Wei, Liqiang Nie, Mohan
Kankanhalli
|
Joint Answering and Explanation for Visual Commonsense Reasoning
| null | null |
10.1109/TIP.2023.3286259
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Commonsense Reasoning (VCR), deemed as one challenging extension of
the Visual Question Answering (VQA), endeavors to pursue a more high-level
visual comprehension. It is composed of two indispensable processes: question
answering over a given image and rationale inference for answer explanation.
Over the years, a variety of methods tackling VCR have advanced the performance
on the benchmark dataset. Significant as these methods are, they often
treat the two processes separately and hence decompose the VCR into
two unrelated VQA instances. As a result, the pivotal connection between
question answering and rationale inference is interrupted, rendering existing
efforts less faithful in visual reasoning. To empirically study this issue, we
perform some in-depth explorations in terms of both language shortcuts and
generalization capability to verify the pitfalls of this treatment. Based on
our findings, in this paper, we present a plug-and-play knowledge distillation
enhanced framework to couple the question answering and rationale inference
processes. The key contribution is the introduction of a novel branch, which
serves as a bridge connecting the two processes. Given that our framework
is model-agnostic, we apply it to the existing popular baselines and validate
its effectiveness on the benchmark dataset. As detailed in the experimental
results, when equipped with our framework, these baselines achieve consistent
and significant performance improvements, demonstrating the viability of
processes coupling, as well as the superiority of the proposed framework.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 11:26:52 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 13:47:43 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Li",
"Zhenyang",
""
],
[
"Guo",
"Yangyang",
""
],
[
"Wang",
"Kejie",
""
],
[
"Wei",
"Yinwei",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] |
new_dataset
| 0.964963 |
2209.03277
|
Jianfeng Gao
|
Jianfeng Gao, Zhi Tao, No\'emie Jaquier, and Tamim Asfour
|
K-VIL: Keypoints-based Visual Imitation Learning
| null |
IEEE Transactions on Robotics, (2023) 1-21
|
10.1109/TRO.2023.3286074
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual imitation learning provides efficient and intuitive solutions for
robotic systems to acquire novel manipulation skills. However, simultaneously
learning geometric task constraints and control policies from visual inputs
alone remains a challenging problem. In this paper, we propose an approach for
keypoint-based visual imitation (K-VIL) that automatically extracts sparse,
object-centric, and embodiment-independent task representations from a small
number of human demonstration videos. The task representation is composed of
keypoint-based geometric constraints on principal manifolds, their associated
local frames, and the movement primitives that are then needed for the task
execution. Our approach is capable of extracting such task representations from
a single demonstration video, and of incrementally updating them when new
demonstrations become available. To reproduce manipulation skills using the
learned set of prioritized geometric constraints in novel scenes, we introduce
a novel keypoint-based admittance controller. We evaluate our approach in
several real-world applications, showcasing its ability to deal with cluttered
scenes, viewpoint mismatch, new instances of categorical objects, and large
object pose and shape variations, as well as its efficiency and robustness in
both one-shot and few-shot imitation learning settings. Videos and source code
are available at https://sites.google.com/view/k-vil.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 16:30:06 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 13:57:13 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jul 2023 11:30:33 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Gao",
"Jianfeng",
""
],
[
"Tao",
"Zhi",
""
],
[
"Jaquier",
"Noémie",
""
],
[
"Asfour",
"Tamim",
""
]
] |
new_dataset
| 0.996149 |
2212.06524
|
Chenyangguang Zhang
|
Chenyangguang Zhang, Zhiqiang Lou, Yan Di, Federico Tombari and
Xiangyang Ji
|
SST: Real-time End-to-end Monocular 3D Reconstruction via Sparse
Spatial-Temporal Guidance
|
ICME 2023 (oral)
| null | null |
camera ready for ICME 2023
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time monocular 3D reconstruction is a challenging problem that remains
unsolved. Although recent end-to-end methods have demonstrated promising
results, tiny structures and geometric boundaries are hardly captured due to
their insufficient supervision neglecting spatial details and oversimplified
feature fusion ignoring temporal cues. To address the problems, we propose an
end-to-end 3D reconstruction network SST, which utilizes Sparse estimated
points from visual SLAM system as additional Spatial guidance and fuses
Temporal features via a novel cross-modal attention mechanism, achieving more
detailed reconstruction results. We propose a Local Spatial-Temporal Fusion
module to exploit more informative spatial-temporal cues from multi-view color
information and sparse priors, as well as a Global Spatial-Temporal Fusion module
to refine the local TSDF volumes with the world-frame model from coarse to
fine. Extensive experiments on ScanNet and 7-Scenes demonstrate that SST
outperforms all state-of-the-art competitors, whilst keeping a high inference
speed at 59 FPS, enabling real-world applications with real-time requirements.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 12:17:13 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 02:22:16 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Zhang",
"Chenyangguang",
""
],
[
"Lou",
"Zhiqiang",
""
],
[
"Di",
"Yan",
""
],
[
"Tombari",
"Federico",
""
],
[
"Ji",
"Xiangyang",
""
]
] |
new_dataset
| 0.997507 |
2303.04738
|
Parvez Mahbub
|
Parvez Mahbub and Ohiduzzaman Shuvo and Mohammad Masudur Rahman
|
Defectors: A Large, Diverse Python Dataset for Defect Prediction
| null |
2023 IEEE/ACM 20th International Conference on Mining Software
Repositories (MSR), Melbourne, Australia, 2023, pp. 393-397
|
10.1109/MSR59073.2023.00085
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Defect prediction has been a popular research topic where machine learning
(ML) and deep learning (DL) have found numerous applications. However, these
ML/DL-based defect prediction models are often limited by the quality and size
of their datasets. In this paper, we present Defectors, a large dataset for
just-in-time and line-level defect prediction. Defectors consists of $\approx$
213K source code files ($\approx$ 93K defective and $\approx$ 120K defect-free)
that span across 24 popular Python projects. These projects come from 18
different domains, including machine learning, automation, and
internet-of-things. Such a scale and diversity make Defectors a suitable
dataset for training ML/DL models, especially transformer models that require
large and diverse datasets. We also foresee several application areas of our
dataset including defect prediction and defect explanation.
Dataset link: https://doi.org/10.5281/zenodo.7708984
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 17:23:24 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 18:32:18 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Apr 2023 11:17:22 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Jul 2023 05:59:59 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Mahbub",
"Parvez",
""
],
[
"Shuvo",
"Ohiduzzaman",
""
],
[
"Rahman",
"Mohammad Masudur",
""
]
] |
new_dataset
| 0.999873 |
2303.05086
|
Kunfeng Wang
|
Kunfeng Wang, Kaichun Zhao and Zheng You
|
Stereo Event-based Visual-Inertial Odometry
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event-based cameras are a new type of vision sensor whose pixels work
independently and respond asynchronously to brightness changes with microsecond
resolution, instead of providing standard intensity frames. Compared with
traditional cameras, event-based cameras have low latency, no motion blur, and
high dynamic range (HDR), which provide possibilities for robots to deal with
some challenging scenes. We propose a visual-inertial odometry for stereo
event-based cameras based on the Error-State Kalman Filter (ESKF). The visual
module updates the pose by relying on the edge alignment of a semi-dense 3D map to
a 2D image, and the IMU module updates the pose by median integration. We evaluate our
method on public datasets with general 6-DoF motion and compare the results
against ground truth. We show that our proposed pipeline provides improved
accuracy over the result of the state-of-the-art visual odometry for stereo
event-based cameras, while running in real-time on a standard CPU
(low-resolution cameras). To the best of our knowledge, this is the first
published visual-inertial odometry for stereo event-based cameras.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 07:50:30 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 07:27:17 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Jul 2023 14:54:35 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Jul 2023 08:10:29 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Wang",
"Kunfeng",
""
],
[
"Zhao",
"Kaichun",
""
],
[
"You",
"Zheng",
""
]
] |
new_dataset
| 0.999057 |
2303.06872
|
Jiyong Oh Dr.
|
Jieun Lee, Hakjun Lee, Jiyong Oh
|
FusionLoc: Camera-2D LiDAR Fusion Using Multi-Head Self-Attention for
End-to-End Serving Robot Relocalization
|
13 pages, 9 figures
| null |
10.1109/ACCESS.2023.3297202
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As technology advances in autonomous mobile robots, mobile service robots
have been actively used more and more for various purposes. In particular, serving
robots have no longer been surprising products since the COVID-19 pandemic.
One of the practical problems in operating a serving robot is that it often
fails to estimate its pose on the map it moves around. Whenever this failure
happens, servers must bring the serving robot to its initial location and
reboot it manually. In this paper, we focus on end-to-end relocalization of
serving robots to address the problem, i.e., predicting the robot pose directly
from the onboard sensor data alone using neural networks. In particular, we
propose a deep neural network architecture for the relocalization based on
camera-2D LiDAR sensor fusion. We call the proposed method FusionLoc. In the
proposed method, multi-head self-attention complements different types of
information captured by the two sensors to regress the robot pose. Our
experiments on a dataset collected by a commercial serving robot demonstrate
that FusionLoc can provide better performance than previous end-to-end
relocalization methods that take only a single image or a 2D LiDAR point cloud, as
well as a straightforward fusion method concatenating their features.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 05:46:21 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2023 15:24:15 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2023 02:23:23 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Jul 2023 07:07:12 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Lee",
"Jieun",
""
],
[
"Lee",
"Hakjun",
""
],
[
"Oh",
"Jiyong",
""
]
] |
new_dataset
| 0.989219 |
2304.14108
|
Samir Yitzhak Gadre
|
Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase,
Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh,
Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek
Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu,
Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner,
Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh,
Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt
|
DataComp: In search of the next generation of multimodal datasets
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal datasets are a critical component in recent breakthroughs such as
Stable Diffusion and GPT-4, yet their design does not receive the same research
attention as model architectures or training algorithms. To address this
shortcoming in the ML ecosystem, we introduce DataComp, a testbed for dataset
experiments centered around a new candidate pool of 12.8 billion image-text
pairs from Common Crawl. Participants in our benchmark design new filtering
techniques or curate new data sources and then evaluate their new dataset by
running our standardized CLIP training code and testing the resulting model on
38 downstream test sets. Our benchmark consists of multiple compute scales
spanning four orders of magnitude, which enables the study of scaling trends
and makes the benchmark accessible to researchers with varying resources. Our
baseline experiments show that the DataComp workflow leads to better training
sets. In particular, our best baseline, DataComp-1B, enables training a CLIP
ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming
OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training
procedure and compute. We release DataComp and all accompanying code at
www.datacomp.ai.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 11:37:18 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 18:06:23 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 18:16:31 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Jul 2023 14:07:03 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Gadre",
"Samir Yitzhak",
""
],
[
"Ilharco",
"Gabriel",
""
],
[
"Fang",
"Alex",
""
],
[
"Hayase",
"Jonathan",
""
],
[
"Smyrnis",
"Georgios",
""
],
[
"Nguyen",
"Thao",
""
],
[
"Marten",
"Ryan",
""
],
[
"Wortsman",
"Mitchell",
""
],
[
"Ghosh",
"Dhruba",
""
],
[
"Zhang",
"Jieyu",
""
],
[
"Orgad",
"Eyal",
""
],
[
"Entezari",
"Rahim",
""
],
[
"Daras",
"Giannis",
""
],
[
"Pratt",
"Sarah",
""
],
[
"Ramanujan",
"Vivek",
""
],
[
"Bitton",
"Yonatan",
""
],
[
"Marathe",
"Kalyani",
""
],
[
"Mussmann",
"Stephen",
""
],
[
"Vencu",
"Richard",
""
],
[
"Cherti",
"Mehdi",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Koh",
"Pang Wei",
""
],
[
"Saukh",
"Olga",
""
],
[
"Ratner",
"Alexander",
""
],
[
"Song",
"Shuran",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Beaumont",
"Romain",
""
],
[
"Oh",
"Sewoong",
""
],
[
"Dimakis",
"Alex",
""
],
[
"Jitsev",
"Jenia",
""
],
[
"Carmon",
"Yair",
""
],
[
"Shankar",
"Vaishaal",
""
],
[
"Schmidt",
"Ludwig",
""
]
] |
new_dataset
| 0.998079 |
2305.00281
|
Simon Martinez-Rozas
|
S. Mart\'inez-Rozas, D. Alejo, F. Caballero and L. Merino
|
Path and trajectory planning of a tethered UAV-UGV marsupial robotic
system
|
This work is a duplicate; the article uploaded by my colleague David
Alejo (arXiv:2204.01828) should be considered instead. Only the article
arXiv:2204.01828 will be kept and updated going forward
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This letter addresses the problem of trajectory planning in a marsupial
robotic system consisting of an unmanned aerial vehicle (UAV) linked to an
unmanned ground vehicle (UGV) through a non-taut tether with controllable
length. To the best of our knowledge, this is the first method that addresses
the trajectory planning of a marsupial UGV-UAV with a non-taut tether. The
objective is to determine a synchronized collision-free trajectory for the
three marsupial system agents: UAV, UGV, and tether. First, we present a path
planning solution based on optimal Rapidly-exploring Random Trees (RRT*) with
novel sampling and steering techniques to speed up the computation. This
algorithm is able to obtain collision-free paths for the UAV and the UGV,
taking into account the 3D environment and the tether. Then, the paper presents
a trajectory planner based on non-linear least squares. The optimizer takes
into account aspects not considered in the path planning, like temporal
constraints of the motion imposed by limits on the velocities and accelerations
of the robots, or raising the tether's clearance. Simulated and field test
results demonstrate that the approach generates obstacle-free, smooth, and
feasible trajectories for the marsupial system.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 15:36:21 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 15:24:28 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Martínez-Rozas",
"S.",
""
],
[
"Alejo",
"D.",
""
],
[
"Caballero",
"F.",
""
],
[
"Merino",
"L.",
""
]
] |
new_dataset
| 0.999623 |
2305.17008
|
Caleb Ziems
|
Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, Alon Halevy and Diyi Yang
|
NormBank: A Knowledge Bank of Situational Social Norms
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present NormBank, a knowledge bank of 155k situational norms. This
resource is designed to ground flexible normative reasoning for interactive,
assistive, and collaborative AI systems. Unlike prior commonsense resources,
NormBank grounds each inference within a multivalent sociocultural frame, which
includes the setting (e.g., restaurant), the agents' contingent roles (waiter,
customer), their attributes (age, gender), and other physical, social, and
cultural constraints (e.g., the temperature or the country of operation). In
total, NormBank contains 63k unique constraints from a taxonomy that we
introduce and iteratively refine here. Constraints then apply in different
combinations to frame social norms. Under these manipulations, norms are
non-monotonic - one can cancel an inference by updating its frame even
slightly. Still, we find evidence that neural models can help reliably extend
the scope and coverage of NormBank. We further demonstrate the utility of this
resource with a series of transfer experiments.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 15:09:11 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 19:18:25 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Ziems",
"Caleb",
""
],
[
"Dwivedi-Yu",
"Jane",
""
],
[
"Wang",
"Yi-Chia",
""
],
[
"Halevy",
"Alon",
""
],
[
"Yang",
"Diyi",
""
]
] |
new_dataset
| 0.982487 |
2307.00599
|
Zihong Yan
|
Zihong Yan, Xiaoyi Wu, Zhuozhu Jian, Bin Lan, Xueqian Wang, and Bin
Liang
|
RH-Map: Online Map Construction Framework of Dynamic Objects Removal
Based on Region-wise Hash Map Structure
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile robots navigating in outdoor environments frequently encounter the
issue of undesired traces left by dynamic objects, manifested as obstacles
on the map, impeding robots from achieving accurate localization and effective
navigation. To tackle the problem, a novel map construction framework based on
3D region-wise hash map structure (RH-Map) is proposed, consisting of front-end
scan fresher and back-end removal modules, which realizes real-time map
construction and online dynamic object removal (DOR). First, a two-layer 3D
region-wise hash map structure of map management is proposed for effective
online DOR. Then, in scan fresher, region-wise ground plane estimation (R-GPE)
is adopted for estimating and preserving ground information, and Scan-to-Map
Removal (S2M-R) is proposed to discriminate and remove dynamic regions.
Moreover, the lightweight back-end removal module maintaining keyframes is
proposed for further DOR. As experimentally verified on SemanticKITTI, our
proposed framework yields promising performance on online DOR of map
construction compared with the state-of-the-art methods. And we also validate
the proposed framework in real-world environments.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 15:50:36 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 00:44:59 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Yan",
"Zihong",
""
],
[
"Wu",
"Xiaoyi",
""
],
[
"Jian",
"Zhuozhu",
""
],
[
"Lan",
"Bin",
""
],
[
"Wang",
"Xueqian",
""
],
[
"Liang",
"Bin",
""
]
] |
new_dataset
| 0.976133 |
2307.03726
|
Diana Gabriela Morillo Fueltala
|
Gabriela Morillo, John Cosmas
|
LTE SFBC MIMO Transmitter Modelling and Performance Evaluation
| null | null | null | null |
cs.IT cs.NI eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
High data rates are one of the most prevalent requirements in current mobile
communications. To meet this and other demanding requirements on performance,
coverage, capacity, and reliability, numerous works have proposed
the development of systems employing the combination of several techniques such
as Multiple Input Multiple Output (MIMO) wireless technologies with Orthogonal
Frequency Division Multiplexing (OFDM) in the evolving 4G wireless
communications. Our proposed system is based on the 2x2 MIMO antenna technique,
which is defined to enhance the performance of radio communication systems in
terms of capacity and spectral efficiency, and the OFDM technique, which can be
implemented using two types of sub-carrier mapping modes: Space-Time Block
Coding (STBC) and Space-Frequency Block Coding (SFBC). SFBC has been considered in our
developed model. The main advantage of SFBC over STBC is that SFBC encodes two
modulated symbols over two subcarriers of the same OFDM symbol, whereas STBC
encodes two modulated symbols over the same subcarrier of two consecutive OFDM
symbols; thus, in SFBC the coding is performed in the frequency domain. Our solution aims to
demonstrate the performance analysis of the Space Frequency Block Codes scheme,
increasing the Signal Noise Ratio (SNR) at the receiver and decreasing the Bit
Error Rate (BER) through the use of 4-QAM, 16-QAM and 64-QAM modulation over a
2x2 MIMO channel for an LTE downlink transmission, in different channel radio
environments. In this work, an analytical tool to evaluate the performance of
SFBC - Orthogonal Frequency Division Multiplexing, using two transmit antennas
and two receive antennas has been implemented, and the analysis using the
average SNR has been considered as a sufficient statistic to describe the
performance of SFBC in the 3GPP Long Term Evolution system over Multiple Input
Multiple Output channels.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 17:29:59 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 16:07:29 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Morillo",
"Gabriela",
""
],
[
"Cosmas",
"John",
""
]
] |
new_dataset
| 0.979936 |
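The SFBC scheme evaluated in the abstract above is, for two transmit antennas, the Alamouti block code mapped across two adjacent subcarriers of one OFDM symbol. A minimal NumPy sketch of that mapping (illustrative only, not the authors' implementation; the 4-QAM symbol values are assumed for the example):

```python
import numpy as np

def sfbc_encode(s1, s2):
    """Alamouti-style SFBC mapping of two modulated symbols onto two
    adjacent subcarriers of the same OFDM symbol.
    Rows: transmit antennas; columns: subcarriers k and k+1."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# Two example 4-QAM symbols (unit average energy)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
C = sfbc_encode(s1, s2)

# The code matrix is orthogonal: C @ C^H = (|s1|^2 + |s2|^2) * I,
# which is what enables simple linear combining at the receiver.
G = C @ C.conj().T
print(np.allclose(G, (abs(s1)**2 + abs(s2)**2) * np.eye(2)))  # True
```

The orthogonality check is the property that lets a 2x2 SFBC receiver decouple the two symbols without matrix inversion.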
2307.07768
|
Sarosij Bose
|
Sarosij Bose, Saikat Sarkar, Amlan Chakrabarti
|
SoccerKDNet: A Knowledge Distillation Framework for Action Recognition
in Soccer Videos
|
Accepted to 10th Springer PReMI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Classifying player actions from soccer videos is a challenging problem, which
has become increasingly important in sports analytics over the years. Most
state-of-the-art methods employ highly complex offline networks, which makes it
difficult to deploy such models in resource constrained scenarios. Here, in
this paper we propose a novel end-to-end knowledge distillation based transfer
learning network pre-trained on the Kinetics400 dataset and then perform
extensive analysis on the learned framework by introducing a unique loss
parameterization. We also introduce a new dataset named SoccerDB1, containing
448 videos of players playing soccer across 4 diverse action classes. This loss
parameter lets us linearly weigh the extent to which the predictions of each
network are utilized. Finally, we perform a thorough performance study across
various hyperparameter settings and benchmark the first classification results
on the new
SoccerDB1 dataset obtaining 67.20% validation accuracy. Apart from
outperforming prior arts significantly, our model also generalizes to new
datasets easily. The dataset has been made publicly available at:
https://bit.ly/soccerdb1
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 10:43:24 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 04:47:14 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Bose",
"Sarosij",
""
],
[
"Sarkar",
"Saikat",
""
],
[
"Chakrabarti",
"Amlan",
""
]
] |
new_dataset
| 0.972422 |
2307.08851
|
Shion Fukuzawa
|
Shion Fukuzawa, Michael T. Goodrich, Sandy Irani
|
Quantum Tutte Embeddings
|
19 pages, 6 figures
| null | null | null |
cs.DS quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Using the framework of Tutte embeddings, we begin an exploration of
\emph{quantum graph drawing}, which uses quantum computers to visualize graphs.
The main contributions of this paper include formulating a model for quantum
graph drawing, describing how to create a graph-drawing quantum circuit from a
given graph, and showing how a Tutte embedding can be calculated as a quantum
state in this circuit that can then be sampled to extract the embedding. To
evaluate the complexity of our quantum Tutte embedding circuits, we compare
them to theoretical bounds established in the classical computing setting
derived from a well-known classical algorithm for solving the types of linear
systems that arise from Tutte embeddings. We also present empirical results
obtained from experimental quantum simulations.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 21:23:28 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 17:29:30 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Fukuzawa",
"Shion",
""
],
[
"Goodrich",
"Michael T.",
""
],
[
"Irani",
"Sandy",
""
]
] |
new_dataset
| 0.97421 |
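For contrast with the quantum formulation in the abstract above, the classical Tutte embedding pins the outer face to a convex polygon and places every interior vertex at the average of its neighbours, which reduces to solving a linear system. A hedged sketch on a toy graph (one interior vertex; coordinates chosen for illustration):

```python
import numpy as np

# Classical Tutte embedding on a toy planar graph: a triangle forms the
# outer face (pinned to fixed convex positions) and one interior vertex
# is joined to all three corners. Each interior vertex must sit at the
# average of its neighbours, which is the linear system (I - W) x = b.
outer = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}
neighbors = {3: [0, 1, 2]}            # interior vertex 3 and its neighbours

interior = sorted(neighbors)
A = np.eye(len(interior))
bx = np.zeros(len(interior))
by = np.zeros(len(interior))
for i, v in enumerate(interior):
    deg = len(neighbors[v])
    for u in neighbors[v]:
        if u in outer:                # pinned neighbour: moves to the RHS
            bx[i] += outer[u][0] / deg
            by[i] += outer[u][1] / deg
        else:                         # free neighbour: stays on the LHS
            A[i, interior.index(u)] -= 1.0 / deg
x = np.linalg.solve(A, bx)
y = np.linalg.solve(A, by)
print(x[0], y[0])   # the interior vertex lands at the triangle's centroid
```

It is linear systems of exactly this shape whose classical solution cost the paper compares against its quantum circuits.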
2307.11754
|
Yujin Kwon
|
Yujin Kwon, Kornrapat Pongmala, Kaihua Qin, Ariah Klages-Mundt,
Philipp Jovanovic, Christine Parlour, Arthur Gervais, Dawn Song
|
What Drives the (In)stability of a Stablecoin?
| null | null | null | null |
cs.GT cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In May 2022, an apparent speculative attack, followed by market panic, led to
the precipitous downfall of UST, one of the most popular stablecoins at that
time. However, UST is not the only stablecoin to have been depegged in the
past. Designing resilient and long-term stable coins, therefore, appears to
present a hard challenge.
To further scrutinize existing stablecoin designs and ultimately lead to more
robust systems, we need to understand where volatility emerges. Our work
provides a game-theoretical model aiming to help identify why stablecoins
suffer from a depeg. This game-theoretical model reveals that stablecoins have
different price equilibria depending on the coin's architecture and mechanism
to minimize volatility. Moreover, our theory is supported by extensive
empirical data, spanning $1$ year. To that end, we collect daily prices for 22
stablecoins and on-chain data from five blockchains including the Ethereum and
the Terra blockchain.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 03:08:35 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 17:45:30 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Kwon",
"Yujin",
""
],
[
"Pongmala",
"Kornrapat",
""
],
[
"Qin",
"Kaihua",
""
],
[
"Klages-Mundt",
"Ariah",
""
],
[
"Jovanovic",
"Philipp",
""
],
[
"Parlour",
"Christine",
""
],
[
"Gervais",
"Arthur",
""
],
[
"Song",
"Dawn",
""
]
] |
new_dataset
| 0.989198 |
2307.12204
|
David Noever
|
Forrest McKee and David Noever
|
Adversarial Agents For Attacking Inaudible Voice Activated Devices
| null | null | null | null |
cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The paper applies reinforcement learning to novel Internet of Things (IoT)
configurations. Our analysis of inaudible attacks on voice-activated devices
confirms the alarming risk factor of 7.6 out of 10, underlining significant
security vulnerabilities scored independently by NIST National Vulnerability
Database (NVD). Our baseline network model showcases a scenario in which an
attacker uses inaudible voice commands to gain unauthorized access to
confidential information on a secured laptop. We simulated many attack
scenarios on this baseline network model, revealing the potential for mass
exploitation of interconnected devices to discover and own privileged
information through physical access without adding new hardware or amplifying
device skills. Using Microsoft's CyberBattleSim framework, we evaluated six
reinforcement learning algorithms and found that Deep-Q learning with
exploitation proved optimal, leading to rapid ownership of all nodes in fewer
steps. Our findings underscore the critical need for understanding
non-conventional networks and new cybersecurity measures in an ever-expanding
digital landscape, particularly those characterized by mobile devices, voice
activation, and non-linear microphones susceptible to malicious actors
operating stealth attacks in the near-ultrasound or inaudible ranges. By 2024,
this new attack surface might encompass more digital voice assistants than
people on the planet yet offer fewer remedies than conventional patching or
firmware fixes since the inaudible attacks arise inherently from the microphone
design and digital signal processing.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 02:18:30 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 15:16:40 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"McKee",
"Forrest",
""
],
[
"Noever",
"David",
""
]
] |
new_dataset
| 0.967028 |
2307.13128
|
Jugal Kalita
|
Abby Newcomb and Jugal Kalita
|
Explaining Math Word Problem Solvers
| null |
Published in 6th International Conference on Natural Language
Processing and Information Retrieval (NLPIR 2022)
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automated math word problem solvers based on neural networks have
successfully managed to obtain 70-80% accuracy in solving arithmetic word
problems. However, it has been shown that these solvers may rely on superficial
patterns to obtain their equations. In order to determine what information math
word problem solvers use to generate solutions, we remove parts of the input
and measure the model's performance on the perturbed dataset. Our results show
that the model is not sensitive to the removal of many words from the input and
can still manage to find a correct answer when given a nonsense question. This
indicates that automatic solvers do not follow the semantic logic of math word
problems, and may be overfitting to the presence of specific words.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 21:05:47 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Newcomb",
"Abby",
""
],
[
"Kalita",
"Jugal",
""
]
] |
new_dataset
| 0.998877 |
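The probing methodology the abstract above describes, removing parts of the input and re-measuring performance, can be illustrated with a deliberately shallow toy "solver" (a stand-in for illustration only, not the paper's model or data):

```python
import re

def toy_solver(question):
    """A deliberately shallow 'solver' that ignores almost every word
    and simply sums the numbers it sees (illustrative stand-in)."""
    return sum(int(n) for n in re.findall(r"\d+", question))

def ablate(question, words_to_drop):
    """Remove selected words from the input, as in the probing setup."""
    return " ".join(w for w in question.split()
                    if w.lower() not in words_to_drop)

q = "Sam had 3 apples and bought 4 more . How many apples does Sam have ?"
perturbed = ablate(q, {"apples", "bought", "more", "how", "many"})

# A model insensitive to removing the content words answers identically
# on both inputs -- exactly the failure mode the ablation exposes.
print(toy_solver(q), toy_solver(perturbed))  # 7 7
```

If a real solver behaves like this toy under word removal, it is relying on superficial patterns rather than the semantics of the problem.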
2307.13153
|
Konstantinos Georgiou
|
Konstantinos Georgiou, Somnath Kundu, Pawel Pralat
|
The Fagnano Triangle Patrolling Problem
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate a combinatorial optimization problem that involves patrolling
the edges of an acute triangle using a unit-speed agent. The goal is to
minimize the maximum (1-gap) idle time of any edge, which is defined as the
time gap between consecutive visits to that edge. This problem has roots in a
centuries-old optimization problem posed by Fagnano in 1775, who sought to
determine the inscribed triangle of an acute triangle with the minimum
perimeter. It is well-known that the orthic triangle, giving rise to a periodic
and cyclic trajectory obeying the laws of geometric optics, is the optimal
solution to Fagnano's problem. Such trajectories are known as Fagnano orbits,
or more generally as billiard trajectories. We demonstrate that the orthic
triangle is also an optimal solution to the patrolling problem.
Our main contributions pertain to new connections between billiard
trajectories and optimal patrolling schedules in combinatorial optimization. In
particular, as an artifact of our arguments, we introduce a novel 2-gap
patrolling problem that seeks to minimize the visitation time of objects every
three visits. We prove that there exist infinitely many well-structured
billiard-type optimal trajectories for this problem, including the orthic
trajectory, which has the special property of minimizing the visitation time
gap between any two consecutively visited edges. Complementary to that, we also
examine the cost of dynamic, sub-optimal trajectories to the 1-gap patrolling
optimization problem. These trajectories result from a greedy algorithm and can
be implemented by a computationally primitive mobile agent.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 22:39:39 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Georgiou",
"Konstantinos",
""
],
[
"Kundu",
"Somnath",
""
],
[
"Pralat",
"Pawel",
""
]
] |
new_dataset
| 0.996235 |
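The orthic triangle central to the abstract above is easy to compute directly: its vertices are the feet of the three altitudes. A short NumPy sketch checking Fagnano's classical minimality claim against another inscribed triangle (coordinates chosen for illustration):

```python
import numpy as np

def foot(p, q, r):
    """Foot of the perpendicular dropped from p onto the line through q and r."""
    d = r - q
    return q + (np.dot(p - q, d) / np.dot(d, d)) * d

def perimeter(pts):
    return sum(np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3))

# An acute triangle (illustrative coordinates)
A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])

orthic = [foot(A, B, C), foot(B, C, A), foot(C, A, B)]  # feet of the altitudes
medial = [(A + B) / 2, (B + C) / 2, (C + A) / 2]        # another inscribed triangle

# Fagnano's result: among all inscribed triangles of an acute triangle,
# the orthic triangle has the smallest perimeter.
print(perimeter(orthic) < perimeter(medial))  # True
```

The orthic trajectory traced by joining these feet is the billiard-type orbit the paper shows is also optimal for patrolling.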
2307.13172
|
Abhiroop Sarkar
|
Abhiroop Sarkar, Robert Krook, Alejandro Russo, Koen Claessen
|
HasTEE: Programming Trusted Execution Environments with Haskell
|
To appear in Haskell Symposium 2023
| null |
10.1145/3609026.3609731
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Trusted Execution Environments (TEEs) are hardware-enforced memory isolation
units, emerging as a pivotal security solution for security-critical
applications. TEEs, like Intel SGX and ARM TrustZone, allow the isolation of
confidential code and data within an untrusted host environment, such as the
cloud and IoT. Despite strong security guarantees, TEE adoption has been
hindered by an awkward programming model. This model requires manual
application partitioning and the use of error-prone, memory-unsafe, and
potentially information-leaking low-level C/C++ libraries.
We address the above with \textit{HasTEE}, a domain-specific language (DSL)
embedded in Haskell for programming TEE applications. HasTEE includes a port of
the GHC runtime for the Intel-SGX TEE. HasTEE uses Haskell's type system to
automatically partition an application and to enforce \textit{Information Flow
Control} on confidential data. The DSL, being embedded in Haskell, allows for
the usage of higher-order functions, monads, and a restricted set of I/O
operations to write any standard Haskell application. Contrary to previous
work, HasTEE is lightweight, simple, and is provided as a \emph{simple security
library}; thus avoiding any GHC modifications. We show the applicability of
HasTEE by implementing case studies on federated learning, an encrypted
password wallet, and a differentially-private data clean room.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 23:37:50 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Sarkar",
"Abhiroop",
""
],
[
"Krook",
"Robert",
""
],
[
"Russo",
"Alejandro",
""
],
[
"Claessen",
"Koen",
""
]
] |
new_dataset
| 0.997565 |
2307.13178
|
Agnimitra Sengupta
|
Agnimitra Sengupta, S. Ilgin Guler, Vikash V. Gayah, Shannon Warchol
|
Evaluating the reliability of automatically generated pedestrian and
bicycle crash surrogates
| null | null | null | null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vulnerable road users (VRUs), such as pedestrians and bicyclists, are at a
higher risk of being involved in crashes with motor vehicles, and crashes
involving VRUs also are more likely to result in severe injuries or fatalities.
Signalized intersections are a major safety concern for VRUs due to their
complex and dynamic nature, highlighting the need to understand how these road
users interact with motor vehicles and deploy evidence-based countermeasures to
improve safety performance. Crashes involving VRUs are relatively infrequent,
making it difficult to understand the underlying contributing factors. An
alternative is to identify and use conflicts between VRUs and motorized
vehicles as a surrogate for safety performance. Automatically detecting these
conflicts using a video-based systems is a crucial step in developing smart
infrastructure to enhance VRU safety. The Pennsylvania Department of
Transportation conducted a study using video-based event monitoring system to
assess VRU and motor vehicle interactions at fifteen signalized intersections
across Pennsylvania to improve VRU safety performance. This research builds on
that study to assess the reliability of automatically generated surrogates in
predicting confirmed conflicts using advanced data-driven models. The surrogate
data used for analysis include automatically collectable variables such as
vehicular and VRU speeds, movements, post-encroachment time, in addition to
manually collected variables like signal states, lighting, and weather
conditions. The findings highlight the varying importance of specific
surrogates in predicting true conflicts, some being more informative than
others. The findings can assist transportation agencies to collect the right
types of data to help prioritize infrastructure investments, such as bike lanes
and crosswalks, and evaluate their effectiveness.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 23:57:29 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Sengupta",
"Agnimitra",
""
],
[
"Guler",
"S. Ilgin",
""
],
[
"Gayah",
"Vikash V.",
""
],
[
"Warchol",
"Shannon",
""
]
] |
new_dataset
| 0.997264 |
2307.13183
|
Travis Morrison
|
Gretchen L. Matthews, Travis Morrison, Aidan W. Murphy
|
Curve-lifted codes for local recovery using lines
|
22 pages. Comments welcome
| null | null | null |
cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce curve-lifted codes over fields of arbitrary
characteristic, inspired by Hermitian-lifted codes over $\mathbb{F}_{2^r}$.
These codes are designed for locality and availability, and their particular
parameters depend on the choice of curve and its properties. Due to the
construction, the numbers of rational points of intersection between curves and
lines play a key role. To demonstrate that and generate new families of locally
recoverable codes (LRCs) with high availability, we focus on norm-trace-lifted
codes. In some cases, they are easier to define than their Hermitian
counterparts and consequently have a better asymptotic bound on the code rate.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 00:25:15 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Matthews",
"Gretchen L.",
""
],
[
"Morrison",
"Travis",
""
],
[
"Murphy",
"Aidan W.",
""
]
] |
new_dataset
| 0.993905 |
2307.13184
|
Robin Hankin Dr
|
Robin K. S. Hankin
|
The free Abelian group in R: the frab package
|
9 pages
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this short article I introduce the frab package which provides an
alternative interpretation of named vectors in the R programming language; it
is available on CRAN. The underlying mathematical object is the free Abelian
group.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 00:31:40 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Hankin",
"Robin K. S.",
""
]
] |
new_dataset
| 0.995807 |
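The mathematical object behind the frab package described above, the free Abelian group on named generators, can be sketched in a few lines of Python: named vectors add by combining like-named coefficients, and zero coefficients vanish (this is a sketch of the idea only; the R package's actual API differs):

```python
from collections import defaultdict

def frab_add(x, y):
    """Add two named vectors as elements of the free Abelian group:
    coefficients of like-named generators sum, and zeros are dropped."""
    out = defaultdict(int)
    for vec in (x, y):
        for name, coeff in vec.items():
            out[name] += coeff
    return {k: c for k, c in out.items() if c != 0}

a = {"x": 2, "y": -1}
b = {"y": 1, "z": 3}
print(frab_add(a, b))        # {'x': 2, 'z': 3}

# Every element has an inverse obtained by negating its coefficients:
neg_a = {k: -c for k, c in a.items()}
print(frab_add(a, neg_a))    # {} -- the identity element
```

Dropping zero coefficients is what makes two representations of the same group element compare equal.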
2307.13225
|
Han Hu
|
Han Hu, Haolan Zhan, Yujin Huang, Di Liu
|
A Pairwise Dataset for GUI Conversion and Retrieval between Android
Phones and Tablets
|
10 pages, 9 figures
| null | null | null |
cs.HC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
With the popularity of smartphones and tablets, users have become accustomed
to using different devices for different tasks, such as using their phones to
play games and tablets to watch movies. To conquer the market, one app is often
available on both smartphones and tablets. However, although one app has
similar graphical user interfaces (GUIs) and functionality on phones and tablets,
current app developers typically start from scratch when developing a
tablet-compatible version of their app, which drives up development costs and
wastes existing design resources. Researchers are attempting to employ deep
learning in automated GUIs development to enhance developers' productivity.
Deep learning models rely heavily on high-quality datasets. There are currently
several publicly accessible GUI page datasets for phones, but none for pairwise
GUIs between phones and tablets. This poses a significant barrier to the
employment of deep learning in automated GUI development. In this paper, we
collect and make public the Papt dataset, which is a pairwise dataset for GUI
conversion and retrieval between Android phones and tablets. The dataset
contains 10,035 phone-tablet GUI page pairs from 5,593 phone-tablet app pairs.
We illustrate the approaches of collecting pairwise data and statistical
analysis of this dataset. We also illustrate the advantages of our dataset
compared to other current datasets. Through preliminary experiments on this
dataset, we analyse the present challenges of utilising deep learning in
automated GUI development and find that our dataset can assist the application
of some deep learning models to tasks involving automatic GUI development.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 03:25:56 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Hu",
"Han",
""
],
[
"Zhan",
"Haolan",
""
],
[
"Huang",
"Yujin",
""
],
[
"Liu",
"Di",
""
]
] |
new_dataset
| 0.99989 |
2307.13251
|
Tuan Ngo
|
Tuan Duc Ngo, Binh-Son Hua, Khoi Nguyen
|
GaPro: Box-Supervised 3D Point Cloud Instance Segmentation Using
Gaussian Processes as Pseudo Labelers
|
Accepted to ICCV 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instance segmentation on 3D point clouds (3DIS) is a longstanding challenge
in computer vision, where state-of-the-art methods are mainly based on full
supervision. As annotating ground truth dense instance masks is tedious and
expensive, solving 3DIS with weak supervision has become more practical. In
this paper, we propose GaPro, a new instance segmentation method for 3D point clouds
using axis-aligned 3D bounding box supervision. Our two-step approach involves
generating pseudo labels from box annotations and training a 3DIS network with
the resulting labels. Additionally, we employ the self-training strategy to
improve the performance of our method further. We devise an effective Gaussian
Process to generate pseudo instance masks from the bounding boxes and resolve
ambiguities when they overlap, resulting in pseudo instance masks with their
uncertainty values. Our experiments show that GaPro outperforms previous weakly
supervised 3D instance segmentation methods and has competitive performance
compared to state-of-the-art fully supervised ones. Furthermore, we demonstrate
the robustness of our approach, where we can adapt various state-of-the-art
fully supervised methods to the weak supervision task by using our pseudo
labels for training. The source code and trained models are available at
https://github.com/VinAIResearch/GaPro.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 04:43:22 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Ngo",
"Tuan Duc",
""
],
[
"Hua",
"Binh-Son",
""
],
[
"Nguyen",
"Khoi",
""
]
] |
new_dataset
| 0.996709 |
2307.13285
|
Gyuyeong Kim
|
Gyuyeong Kim
|
NetClone: Fast, Scalable, and Dynamic Request Cloning for
Microsecond-Scale RPCs
|
13 pages, ACM SIGCOMM 2023
| null |
10.1145/3603269.3604820
| null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Spawning duplicate requests, called cloning, is a powerful technique to
reduce tail latency by masking service-time variability. However, traditional
client-based cloning is static and harmful to performance under high load,
while a recent coordinator-based approach is slow and not scalable. Both
approaches are insufficient to serve modern microsecond-scale Remote Procedure
Calls (RPCs). To this end, we present NetClone, a request cloning system that
performs cloning decisions dynamically within nanoseconds at scale. Rather than
the client or the coordinator, NetClone performs request cloning in the network
switch by leveraging the capability of programmable switch ASICs. Specifically,
NetClone replicates requests based on server states and blocks redundant
responses using request fingerprints in the switch data plane. To realize the
idea while satisfying the strict hardware constraints, we address several
technical challenges when designing a custom switch data plane. NetClone can be
integrated with emerging in-network request schedulers like RackSched. We
implement a NetClone prototype with an Intel Tofino switch and a cluster of
commodity servers. Our experimental results show that NetClone can improve the
tail latency of microsecond-scale RPCs for synthetic and real-world application
workloads and is robust to various system conditions.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 06:48:14 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Kim",
"Gyuyeong",
""
]
] |
new_dataset
| 0.968647 |
2307.13300
|
Chuanyu Luo
|
Chuanyu Luo, Nuo Cheng, Sikun Ma, Jun Xiang, Xiaohan Li, Shengguang
Lei, Pu Li
|
Mini-PointNetPlus: a local feature descriptor in deep learning model for
3d environment perception
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Common deep learning models for 3D environment perception often use
pillarization/voxelization methods to convert point cloud data into
pillars/voxels and then process it with a 2D/3D convolutional neural network
(CNN). The pioneer work PointNet has been widely applied as a local feature
descriptor, a fundamental component in deep learning models for 3D perception,
to extract features of a point cloud. This is achieved by using a symmetric
max-pooling operator which provides unique pillar/voxel features. However, by
ignoring most of the points, the max-pooling operator causes an information
loss, which reduces the model performance. To address this issue, we propose a
novel local feature descriptor, mini-PointNetPlus, as a plug-and-play
alternative to PointNet. Our basic idea is to separately project the data
points to the individual features considered, each leading to a permutation
invariant. Thus, the proposed descriptor transforms an unordered point cloud to
a stable order. The vanilla PointNet is proved to be a special case of our
mini-PointNetPlus. Due to fully utilizing the features by the proposed
descriptor, we demonstrate in experiments a considerable performance improvement
for 3D perception.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 07:30:28 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Luo",
"Chuanyu",
""
],
[
"Cheng",
"Nuo",
""
],
[
"Ma",
"Sikun",
""
],
[
"Xiang",
"Jun",
""
],
[
"Li",
"Xiaohan",
""
],
[
"Lei",
"Shengguang",
""
],
[
"Li",
"Pu",
""
]
] |
new_dataset
| 0.994969 |
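The information-loss argument against symmetric max-pooling in the abstract above can be seen in a few lines of NumPy (a schematic of the pooling step only, not the proposed mini-PointNetPlus descriptor):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 8))   # 100 points in a pillar, 8 features each

# PointNet-style symmetric max-pooling: one value per feature channel.
pooled = points.max(axis=0)

# Permutation invariance: shuffling the points leaves the descriptor unchanged.
shuffled = points[rng.permutation(len(points))]
print(np.allclose(shuffled.max(axis=0), pooled))   # True

# Information loss: at most one point per channel reaches the output, so
# here no more than 8 of the 100 points contribute to the descriptor at all.
contributing = np.unique(points.argmax(axis=0))
print(len(contributing) <= points.shape[1])        # True
```

The proposed descriptor aims to keep the permutation invariance shown in the first check while avoiding the discard of points exposed by the second.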