id (string, length 9-10) | submitter (string, length 2-52, nullable) | authors (string, length 4-6.51k) | title (string, length 4-246) | comments (string, length 1-523, nullable) | journal-ref (string, length 4-345, nullable) | doi (string, length 11-120, nullable) | report-no (string, length 2-243, nullable) | categories (string, length 5-98) | license (string, 9 classes) | abstract (string, length 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.00708
|
Shailja Thakur
|
Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan
Dolan-Gavitt, Ramesh Karri, Siddharth Garg
|
VeriGen: A Large Language Model for Verilog Code Generation
|
arXiv admin note: text overlap with arXiv:2212.11140
| null | null | null |
cs.PL cs.LG cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this study, we explore the capability of Large Language Models (LLMs) to
automate hardware design by generating high-quality Verilog code, a common
language for designing and modeling digital systems. We fine-tune pre-existing
LLMs on Verilog datasets compiled from GitHub and Verilog textbooks. We
evaluate the functional correctness of the generated Verilog code using a
specially designed test suite, featuring a custom problem set and testing
benches. Here, our fine-tuned open-source CodeGen-16B model outperforms the
commercial state-of-the-art GPT-3.5-turbo model with a 1.1% overall increase.
Upon testing with a more diverse and complex problem set, we find that the
fine-tuned model shows competitive performance against state-of-the-art
GPT-3.5-turbo, excelling in certain scenarios. Notably, it demonstrates a 41%
improvement in generating syntactically correct Verilog code across various
problem categories compared to its pre-trained counterpart, highlighting the
potential of smaller, in-house LLMs in hardware design automation.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 02:57:14 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Thakur",
"Shailja",
""
],
[
"Ahmad",
"Baleegh",
""
],
[
"Pearce",
"Hammond",
""
],
[
"Tan",
"Benjamin",
""
],
[
"Dolan-Gavitt",
"Brendan",
""
],
[
"Karri",
"Ramesh",
""
],
[
"Garg",
"Siddharth",
""
]
] |
new_dataset
| 0.997128 |
2308.00719
|
Samir Katte
|
Samir R Katte
|
Communication systems using LabVIEW
| null | null | null | null |
cs.HC cs.SY eess.SP eess.SY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
LabVIEW enables engineers to simulate various communication and control
systems. LabVIEW helps to create Virtual Instruments (VIs) which are the files
with which the user interacts to accomplish the required task. In this paper,
the AM system implementation in LabVIEW is explained in detail along with the
observed waveforms. The AM system is implemented using two separate VIs, i.e.,
Transmitter_AM.vi and Receiver_AM.vi. Each VI has two parts: the Front Panel and
the Block Diagram. The Front Panel is usually the interface with which the user
interacts and observes results. The Block Diagram contains the blocks used to
implement the functionality required for the operation of the VI. The
individual blocks in the block diagram are called the sub VIs. The user may or
may not need to make changes in the block diagram of the VI during the
execution of the LabVIEW program.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 04:38:12 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Katte",
"Samir R",
""
]
] |
new_dataset
| 0.991408 |
2308.00770
|
Giselle Zeno
|
Giselle Zeno, Timothy La Fond, Jennifer Neville
|
DYMOND: DYnamic MOtif-NoDes Network Generative Model
|
In Proceedings of the Web Conference 2021 (WWW '21)
|
Proceedings of the Web Conference 2021, Pages 718-729
|
10.1145/3442381.3450102
| null |
cs.SI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Motifs, which have been established as building blocks for network structure,
move beyond pair-wise connections to capture longer-range correlations in
connections and activity. In spite of this, there are few generative graph
models that consider higher-order network structures and even fewer that focus
on using motifs in models of dynamic graphs. Most existing generative models
for temporal graphs strictly grow the networks via edge addition, and the
models are evaluated using static graph structure metrics -- which do not
adequately capture the temporal behavior of the network. To address these
issues, in this work we propose DYnamic MOtif-NoDes (DYMOND) -- a generative
model that considers (i) the dynamic changes in overall graph structure using
temporal motif activity and (ii) the roles nodes play in motifs (e.g., one node
plays the hub role in a wedge, while the remaining two act as spokes). We
compare DYMOND to three dynamic graph generative model baselines on real-world
networks and show that DYMOND performs better at generating graph structure and
node behavior similar to the observed network. We also propose a new
methodology to adapt graph structure metrics to better evaluate the temporal
aspect of the network. These metrics take into account the changes in overall
graph structure and the individual nodes' behavior over time.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 18:20:05 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Zeno",
"Giselle",
""
],
[
"La Fond",
"Timothy",
""
],
[
"Neville",
"Jennifer",
""
]
] |
new_dataset
| 0.992078 |
2308.00797
|
Albert Gran Alcoz
|
Albert Gran Alcoz, Bal\'azs Vass, G\'abor R\'etv\'ari, Laurent
Vanbever
|
Everything Matters in Programmable Packet Scheduling
|
12 pages, 12 figures (without references and appendices)
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programmable packet scheduling allows the deployment of scheduling algorithms
into existing switches without the need for hardware redesign. Scheduling
algorithms are programmed by tagging packets with ranks, indicating their
desired priority. Programmable schedulers then execute these algorithms by
serving packets in the order described in their ranks.
The ideal programmable scheduler is a Push-In First-Out (PIFO) queue, which
achieves perfect packet sorting by pushing packets into arbitrary positions in
the queue, while only draining packets from the head. Unfortunately,
implementing PIFO queues in hardware is challenging due to the need to
arbitrarily sort packets at line rate based on their ranks.
In recent years, various techniques have been proposed to approximate PIFO
behaviors using the available resources of existing data planes. While
promising, approaches to date only approximate one of the characteristic
behaviors of PIFO queues (i.e., their scheduling behavior or their admission
control).
We propose PACKS, the first programmable scheduler that fully approximates
PIFO queues on all their behaviors. PACKS does so by smartly using a set of
strict-priority queues. It uses packet-rank information and queue-occupancy
levels at enqueue to decide: whether to admit packets to the scheduler, and how
to map admitted packets to the different queues.
We fully implement PACKS in P4 and evaluate it on real workloads. We show
that PACKS better approximates PIFO than state-of-the-art approaches and
scales. We also show that PACKS runs at line rate on existing hardware (Intel
Tofino).
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 19:15:10 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Alcoz",
"Albert Gran",
""
],
[
"Vass",
"Balázs",
""
],
[
"Rétvári",
"Gábor",
""
],
[
"Vanbever",
"Laurent",
""
]
] |
new_dataset
| 0.998467 |
2308.00801
|
Abhinav Benagi
|
Abhinav Benagi, Dhanyatha Narayan, Charith Rage, A Sushmitha
|
Artificial Eye for the Blind
|
23 pages , 16 figures
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main backbone of our Artificial Eye model is the Raspberry Pi 3, which is
connected to the webcam, ultrasonic proximity sensor, and speaker, and on which
we also run all our software models, i.e., object detection, Optical Character
Recognition, Google text-to-speech conversion, and the Mycroft voice assistant
model. At first, the ultrasonic proximity sensor measures the distance between
itself and any obstacle in front of it. When the proximity sensor detects an
obstacle within its specified range, the blind person will hear an audio prompt
about an obstacle in his way at a certain distance. At this time the webcam
will capture an image in front of it, and the object detection model and the
Optical Character Recognition model will begin to run on the Raspberry Pi. The
captured image is first sent through the Tesseract OCR module to detect any
text in the image and then through the object detection model to detect the
objects in front of the blind person. The text and the objects detected are
conveyed to the blind person by converting the text to speech using the gTTS
module. Alongside this process, there is an active Mycroft voice assistant
model which can be used to interact with the blind person. The blind person can
ask about the weather, daily news, any information on the internet, etc.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 10:00:50 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Benagi",
"Abhinav",
""
],
[
"Narayan",
"Dhanyatha",
""
],
[
"Rage",
"Charith",
""
],
[
"Sushmitha",
"A",
""
]
] |
new_dataset
| 0.964301 |
2308.00878
|
Qingyang Wu
|
Qingyang Wu, James Gung, Raphael Shu, Yi Zhang
|
DiactTOD: Learning Generalizable Latent Dialogue Acts for Controllable
Task-Oriented Dialogue Systems
|
SIGDial 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue act annotations are important to improve response generation quality
in task-oriented dialogue systems. However, it can be challenging to use
dialogue acts to control response generation in a generalizable way because
different datasets and tasks may have incompatible annotations. While
alternative methods that utilize latent action spaces or reinforcement learning
do not require explicit annotations, they may lack interpretability or face
difficulties defining task-specific rewards. In this work, we present a novel
end-to-end latent dialogue act model (DiactTOD) that represents dialogue acts
in a latent space. DiactTOD, when pre-trained on a large corpus, is able to
predict and control dialogue acts to generate controllable responses using
these latent representations in a zero-shot fashion. Our approach demonstrates
state-of-the-art performance across a wide range of experimental settings on
the MultiWOZ dataset, including zero-shot, few-shot, and full data fine-tuning
with both end-to-end and policy optimization configurations.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 23:29:16 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Wu",
"Qingyang",
""
],
[
"Gung",
"James",
""
],
[
"Shu",
"Raphael",
""
],
[
"Zhang",
"Yi",
""
]
] |
new_dataset
| 0.994915 |
2308.00923
|
Keran Ye
|
Keran Ye, Kenneth Chung, Konstantinos Karydis
|
A Novel Lockable Spring-loaded Prismatic Spine to Support Agile
Quadrupedal Locomotion
|
To appear in 2023 IEEE IROS
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a way to systematically investigate the effect of
compliant prismatic spines in quadrupedal robot locomotion. We develop a novel
spring-loaded lockable spine module, together with a new Spinal
Compliance-Integrated Quadruped (SCIQ) platform for both empirical and
numerical research. Individual spine tests reveal beneficial spinal
characteristics like a degressive spring, and validate the efficacy of a
proposed compact locking/unlocking mechanism for the spine. Benchmark vertical
jumping and landing tests with our robot show comparable jumping performance
between the rigid and compliant spines. An observed advantage of the compliant
spine module is that it can alleviate more challenging landing conditions by
absorbing impact energy and dissipating the remainder via feet slipping, much
in a cat-like stretching fashion.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 03:46:32 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Ye",
"Keran",
""
],
[
"Chung",
"Kenneth",
""
],
[
"Karydis",
"Konstantinos",
""
]
] |
new_dataset
| 0.998793 |
2308.01000
|
Louis Soum-Fontez
|
Louis Soum-Fontez, Jean-Emmanuel Deschaud, Fran\c{c}ois Goulette
|
MDT3D: Multi-Dataset Training for LiDAR 3D Object Detection
Generalization
|
Accepted for publication at IROS 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Supervised 3D Object Detection models have been displaying increasingly
better performance in single-domain cases where the training data comes from
the same environment and sensor as the testing data. However, in real-world
scenarios data from the target domain may not be available for finetuning or
for domain adaptation methods. Indeed, 3D object detection models trained on a
source dataset with a specific point distribution have shown difficulties in
generalizing to unseen datasets. Therefore, we decided to leverage the
information available from several annotated source datasets with our
Multi-Dataset Training for 3D Object Detection (MDT3D) method to increase the
robustness of 3D object detection models when tested in a new environment with
a different sensor configuration. To tackle the labelling gap between datasets,
we used a new label mapping based on coarse labels. Furthermore, we show how we
managed the mix of datasets during training and finally introduce a new
cross-dataset augmentation method: cross-dataset object injection. We
demonstrate that this training paradigm shows improvements for different types
of 3D object detection models. The source code and additional results for this
research project will be publicly available on GitHub for interested parties to
access and utilize: https://github.com/LouisSF/MDT3D
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 08:20:00 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Soum-Fontez",
"Louis",
""
],
[
"Deschaud",
"Jean-Emmanuel",
""
],
[
"Goulette",
"François",
""
]
] |
new_dataset
| 0.999712 |
2308.01035
|
Khadidja Delloul
|
Leyla Benhamida and Khadidja Delloul and Slimane Larabi
|
TS-RGBD Dataset: a Novel Dataset for Theatre Scenes Description for
People with Visual Impairments
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Computer vision has long been a tool for aiding visually impaired people in
moving around their environment and avoiding obstacles and falls. Solutions are
limited to either indoor or outdoor scenes, which restricts the kinds of places
and scenes visually disabled people can be in, including entertainment venues
such as theatres. Furthermore, most of the proposed computer-vision-based
methods rely on RGB benchmarks to train their models, resulting in limited
performance due to the absence of the depth modality.
In this paper, we propose a novel RGB-D dataset containing theatre scenes
with ground-truth human actions and dense caption annotations for image
captioning and human action recognition: the TS-RGBD dataset. It includes three
types of data: RGB, depth, and skeleton sequences, captured by Microsoft
Kinect.
We test image captioning models on our dataset as well as some skeleton-based
human action recognition models in order to extend the range of environment
types where a visually disabled person can be, by detecting human actions and
textually describing appearances of regions of interest in theatre scenes.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 09:28:35 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Benhamida",
"Leyla",
""
],
[
"Delloul",
"Khadidja",
""
],
[
"Larabi",
"Slimane",
""
]
] |
new_dataset
| 0.999874 |
2308.01042
|
Xingjian Wang
|
Xingjian Wang, Li Chai, Jiming Chen, Zhiguo Shi
|
WCCNet: Wavelet-integrated CNN with Crossmodal Rearranging Fusion for
Fast Multispectral Pedestrian Detection
|
Submitted to TPAMI
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multispectral pedestrian detection achieves better visibility in challenging
conditions and thus has a broad application in various tasks, for which both
the accuracy and computational cost are of paramount importance. Most existing
approaches treat RGB and infrared modalities equally, typically adopting two
symmetrical CNN backbones for multimodal feature extraction, which ignores the
substantial differences between modalities and brings great difficulty for the
reduction of the computational cost as well as effective crossmodal fusion. In
this work, we propose a novel and efficient framework named WCCNet that is able
to differentially extract rich features of different spectra with lower
computational complexity and semantically rearranges these features for
effective crossmodal fusion. Specifically, the discrete wavelet transform (DWT)
allowing fast inference and training speed is embedded to construct a
dual-stream backbone for efficient feature extraction. The DWT layers of WCCNet
extract frequency components for infrared modality, while the CNN layers
extract spatial-domain features for RGB modality. This methodology not only
significantly reduces the computational complexity, but also improves the
extraction of infrared features to facilitate the subsequent crossmodal fusion.
Based on the well extracted features, we elaborately design the crossmodal
rearranging fusion module (CMRF), which can mitigate spatial misalignment and
merge semantically complementary features of spatially-related local regions to
amplify the crossmodal complementary information. We conduct comprehensive
evaluations on KAIST and FLIR benchmarks, in which WCCNet outperforms
state-of-the-art methods with considerable computational efficiency and
competitive accuracy. We also perform the ablation study and analyze thoroughly
the impact of different components on the performance of WCCNet.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 09:35:21 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Wang",
"Xingjian",
""
],
[
"Chai",
"Li",
""
],
[
"Chen",
"Jiming",
""
],
[
"Shi",
"Zhiguo",
""
]
] |
new_dataset
| 0.988896 |
2308.01053
|
Peijun Zhang
|
Peijun Zhang, Chuanzeng Zhang, Yan Gu, Wenzhen Qu, Shengdong Zhao
|
Boundary integrated neural networks (BINNs) for 2D elastostatic and
piezoelectric problems: Theory and MATLAB code
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we make the first attempt to apply the boundary integrated
neural networks (BINNs) for the numerical solution of two-dimensional (2D)
elastostatic and piezoelectric problems. BINNs combine artificial neural
networks with the well-established boundary integral equations (BIEs) to
effectively solve partial differential equations (PDEs). The BIEs are utilized
to map all the unknowns onto the boundary, after which these unknowns are
approximated using artificial neural networks and resolved via a training
process. In contrast to traditional neural network-based methods, the current
BINNs offer several distinct advantages. First, by embedding BIEs into the
learning procedure, BINNs only need to discretize the boundary of the solution
domain, which can lead to a faster and more stable learning process (only the
boundary conditions need to be fitted during the training). Second, the
differential operator with respect to the PDEs is substituted by an integral
operator, which effectively eliminates the need for additional differentiation
of the neural networks (high-order derivatives of neural networks may lead to
instability in learning). Third, the loss function of the BINNs only contains
the residuals of the BIEs, as all the boundary conditions have been inherently
incorporated within the formulation. Therefore, there is no necessity for
employing any weighting functions, which are commonly used in traditional
methods to balance the gradients among different objective functions. Moreover,
BINNs possess the ability to tackle PDEs in unbounded domains since the
integral representation remains valid for both bounded and unbounded domains.
Extensive numerical experiments show that BINNs are much easier to train and
usually give more accurate learning solutions as compared to traditional neural
network-based methods.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 09:57:01 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Zhang",
"Peijun",
""
],
[
"Zhang",
"Chuanzeng",
""
],
[
"Gu",
"Yan",
""
],
[
"Qu",
"Wenzhen",
""
],
[
"Zhao",
"Shengdong",
""
]
] |
new_dataset
| 0.996088 |
2308.01117
|
Chen Peng
|
Chen Peng, Peng Wei, Zhenghao Fei, Yuankai Zhu, Stavros G. Vougioukas
|
Optimization-Based Motion Planning for Autonomous Agricultural Vehicles
Turning in Constrained Headlands
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Headland maneuvering is a crucial aspect of unmanned field operations for
autonomous agricultural vehicles (AAVs). While motion planning for headland
turning in open fields has been extensively studied and integrated into
commercial auto-guidance systems, the existing methods primarily address
scenarios with ample headland space and thus may not work in more constrained
headland geometries. Commercial orchards often contain narrow and irregularly
shaped headlands, which may include static obstacles, rendering the task of
planning a smooth and collision-free turning trajectory difficult. To address
this challenge, we propose an optimization-based motion planning algorithm for
headland turning under geometrical constraints imposed by field geometry and
obstacles.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 12:56:05 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Peng",
"Chen",
""
],
[
"Wei",
"Peng",
""
],
[
"Fei",
"Zhenghao",
""
],
[
"Zhu",
"Yuankai",
""
],
[
"Vougioukas",
"Stavros G.",
""
]
] |
new_dataset
| 0.997117 |
2308.01125
|
Shenbagaraj Kannapiran
|
Shenbagaraj Kannapiran, Nalin Bendapudi, Ming-Yuan Yu, Devarth Parikh,
Spring Berman, Ankit Vora, and Gaurav Pandey
|
Stereo Visual Odometry with Deep Learning-Based Point and Line Feature
Matching using an Attention Graph Neural Network
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robust feature matching forms the backbone for most Visual Simultaneous
Localization and Mapping (vSLAM), visual odometry, 3D reconstruction, and
Structure from Motion (SfM) algorithms. However, recovering feature matches
from texture-poor scenes is a major challenge and still remains an open area of
research. In this paper, we present a Stereo Visual Odometry (StereoVO)
technique based on point and line features which uses a novel feature-matching
mechanism based on an Attention Graph Neural Network that is designed to
perform well even under adverse weather conditions such as fog, haze, rain, and
snow, and dynamic lighting conditions such as nighttime illumination and glare
scenarios. We perform experiments on multiple real and synthetic datasets to
validate the ability of our method to perform StereoVO under low visibility
weather and lighting conditions through robust point and line matches. The
results demonstrate that our method achieves more line feature matches than
state-of-the-art line matching algorithms, which when complemented with point
feature matches perform consistently well in adverse weather and dynamic
lighting conditions.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 13:09:12 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Kannapiran",
"Shenbagaraj",
""
],
[
"Bendapudi",
"Nalin",
""
],
[
"Yu",
"Ming-Yuan",
""
],
[
"Parikh",
"Devarth",
""
],
[
"Berman",
"Spring",
""
],
[
"Vora",
"Ankit",
""
],
[
"Pandey",
"Gaurav",
""
]
] |
new_dataset
| 0.96532 |
2308.01152
|
Jo\"el Ouaknine
|
Florian Luca, James Maynard, Armand Noubissie, Jo\"el Ouaknine, James
Worrell
|
Skolem Meets Bateman-Horn
| null | null | null | null |
cs.DM math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
The Skolem Problem asks to determine whether a given integer linear
recurrence sequence has a zero term. This problem arises across a wide range of
topics in computer science, including loop termination, (weighted) automata
theory, and the analysis of linear dynamical systems, amongst many others.
Decidability of the Skolem Problem is notoriously open. The state of the art is
a decision procedure for recurrences of order at most 4: an advance achieved
some 40 years ago based on Baker's theorem on linear forms in logarithms of
algebraic numbers.
Recently, a new approach to the Skolem Problem was initiated via the notion
of a Universal Skolem Set: a set $\mathbf{S}$ of positive integers such that it
is decidable whether a given non-degenerate linear recurrence sequence has a
zero in $\mathbf{S}$. Clearly, proving decidability of the Skolem Problem is
equivalent to showing that $\mathbb{N}$ is a Universal Skolem Set. The main
contribution of the present paper is to exhibit a Universal Skolem Set of
positive density that moreover has density one subject to the Bateman-Horn
conjecture in number theory. The latter is a central unifying hypothesis
concerning the frequency of prime numbers among the values of systems of
polynomials, and provides a far-reaching generalisation of many classical
results and conjectures on the distribution of primes.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 13:57:01 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Luca",
"Florian",
""
],
[
"Maynard",
"James",
""
],
[
"Noubissie",
"Armand",
""
],
[
"Ouaknine",
"Joël",
""
],
[
"Worrell",
"James",
""
]
] |
new_dataset
| 0.988036 |
2308.01164
|
Lingxiao Meng
|
Lingxiao Meng, Jiangshan Liu, Wei Chai, Jiankun Wang, Max Q.-H. Meng
|
Virtual Reality Based Robot Teleoperation via Human-Scene Interaction
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robot teleoperation has achieved great success in various situations, including
chemical pollution rescue, disaster relief, and long-distance manipulation. In
this article, we propose a virtual reality (VR) based robot teleoperation
system to achieve more efficient and natural interaction with humans in
different scenes. A user-friendly VR interface is designed to help users
interact with a desktop scene using their hands efficiently and intuitively. To
improve user experience and reduce workload, we simulate the process in the
physics engine to help build a preview of the scene after manipulation in the
virtual scene before execution. We conduct experiments with different users and
compare our system with a direct control method across several teleoperation
tasks. The user study demonstrates that the proposed system enables users to
perform operations more instinctively with a lighter mental workload. Users can
perform pick-and-place and object-stacking tasks in a considerably short time,
even for beginners. Our code is available at
https://github.com/lingxiaomeng/VR_Teleoperation_Gen3.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 14:08:10 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Meng",
"Lingxiao",
""
],
[
"Liu",
"Jiangshan",
""
],
[
"Chai",
"Wei",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
new_dataset
| 0.993647 |
2308.01180
|
Yiyang Sun
|
Yiyang Sun, Xiaonian Wang, Yangyang Zhang, Jiagui Tang, Xiaqiang Tang,
Jing Yao
|
Interpretable End-to-End Driving Model for Implicit Scene Understanding
|
Accepted by 26th IEEE International Conference on Intelligent
Transportation Systems (ITSC 2023)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driving scene understanding is to obtain comprehensive scene information
through the sensor data and provide a basis for downstream tasks, which is
indispensable for the safety of self-driving vehicles. Specific perception
tasks, such as object detection and scene graph generation, are commonly used.
However, the results of these tasks are only equivalent to the characterization
of sampling from high-dimensional scene features, which are not sufficient to
represent the scenario. In addition, the goal of perception tasks is
inconsistent with human driving that just focuses on what may affect the
ego-trajectory. Therefore, we propose an end-to-end Interpretable Implicit
Driving Scene Understanding (II-DSU) model to extract implicit high-dimensional
scene features as scene understanding results guided by a planning module and
to validate the plausibility of scene understanding using auxiliary perception
tasks for visualization. Experimental results on CARLA benchmarks show that our
approach achieves the new state-of-the-art and is able to obtain scene features
that embody richer scene information relevant to driving, enabling superior
performance of the downstream planning.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 14:43:08 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Sun",
"Yiyang",
""
],
[
"Wang",
"Xiaonian",
""
],
[
"Zhang",
"Yangyang",
""
],
[
"Tang",
"Jiagui",
""
],
[
"Tang",
"Xiaqiang",
""
],
[
"Yao",
"Jing",
""
]
] |
new_dataset
| 0.972176 |
2308.01217
|
Kaibin Tian
|
Kaibin Tian, Ruixiang Zhao, Hu Hu, Runquan Xie, Fengzong Lian, Zhanhui
Kang and Xirong Li
|
TeachCLIP: Multi-Grained Teaching for Efficient Text-to-Video Retrieval
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For text-to-video retrieval (T2VR), which aims to retrieve unlabeled videos
by ad-hoc textual queries, CLIP-based methods are dominating. Compared to
CLIP4Clip which is efficient and compact, the state-of-the-art models tend to
compute video-text similarity by fine-grained cross-modal feature interaction
and matching, putting their scalability for large-scale T2VR into doubt. For
efficient T2VR, we propose TeachCLIP with multi-grained teaching to let a
CLIP4Clip based student network learn from more advanced yet computationally
heavy models such as X-CLIP, TS2-Net, and X-Pool. To improve the student's
learning capability, we add an Attentional frame-Feature Aggregation (AFA)
block, which by design adds no extra storage/computation overhead at the
retrieval stage. While attentive weights produced by AFA are commonly used for
combining frame-level features, we propose a novel use of the weights to let
them imitate frame-text relevance estimated by the teacher network. As such,
AFA provides a fine-grained learning (teaching) channel for the student
(teacher). Extensive experiments on multiple public datasets justify the
viability of the proposed method.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 15:22:00 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Tian",
"Kaibin",
""
],
[
"Zhao",
"Ruixiang",
""
],
[
"Hu",
"Hu",
""
],
[
"Xie",
"Runquan",
""
],
[
"Lian",
"Fengzong",
""
],
[
"Kang",
"Zhanhui",
""
],
[
"Li",
"Xirong",
""
]
] |
new_dataset
| 0.955425 |
2308.01263
|
Paul R\"ottger
|
Paul R\"ottger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio,
Federico Bianchi, Dirk Hovy
|
XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in
Large Language Models
|
v1 to document initial data release
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Without proper safeguards, large language models will readily follow
malicious instructions and generate toxic content. This motivates safety
efforts such as red-teaming and large-scale feedback learning, which aim to
make models both helpful and harmless. However, there is a tension between
these two objectives, since harmlessness requires models to refuse complying
with unsafe prompts, and thus not be helpful. Recent anecdotal evidence
suggests that some models may have struck a poor balance, so that even clearly
safe prompts are refused if they use similar language to unsafe prompts or
mention sensitive topics. In this paper, we introduce a new test suite called
XSTest to identify such eXaggerated Safety behaviours in a structured and
systematic way. In its current form, XSTest comprises 200 safe prompts across
ten prompt types that well-calibrated models should not refuse to comply with.
We describe XSTest's creation and composition, and use the test suite to
highlight systematic failure modes in a recently-released state-of-the-art
language model.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 16:30:40 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Röttger",
"Paul",
""
],
[
"Kirk",
"Hannah Rose",
""
],
[
"Vidgen",
"Bertie",
""
],
[
"Attanasio",
"Giuseppe",
""
],
[
"Bianchi",
"Federico",
""
],
[
"Hovy",
"Dirk",
""
]
] |
new_dataset
| 0.999558 |
2308.01312
|
Debosmita Bhaumik
|
Debosmita Bhaumik, Ahmed Khalifa, Julian Togelius
|
Lode Encoder: AI-constrained co-creativity
| null |
2021 IEEE Conference on Games (CoG), Copenhagen, Denmark, 2021,
pp. 01-08
|
10.1109/CoG52621.2021.9619009
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Lode Encoder, a gamified mixed-initiative level creation system
for the classic platform-puzzle game Lode Runner. The system is built around
several autoencoders which are trained on sets of Lode Runner levels. When fed
with the user's design, each autoencoder produces a version of that design
which is closer in style to the levels that it was trained on. The Lode Encoder
interface allows the user to build and edit levels through 'painting' from the
suggestions provided by the autoencoders. Crucially, in order to encourage
designers to explore new possibilities, the system does not include more
traditional editing tools. We report on the system design and training
procedure, as well as on the evolution of the system itself and user tests.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 17:56:29 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Bhaumik",
"Debosmita",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.998934 |
1910.05190
|
David Koisser
|
Tigist Abera (1), Ferdinand Brasser (1), Lachlan J. Gunn (2), Patrick
Jauernig (1), David Koisser (1), Ahmad-Reza Sadeghi (1) ((1) Technical
University of Darmstadt, (2) Aalto University)
|
GrandDetAuto: Detecting Malicious Nodes in Large-Scale Autonomous
Networks
| null |
RAID '21: Proceedings of the 24th International Symposium on
Research in Attacks, Intrusions and Defenses, October 2021, Pages 220-234
|
10.1145/3471621.3471868
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous collaborative networks of devices are rapidly emerging in numerous
domains, such as self-driving cars, smart factories, critical infrastructure,
and Internet of Things in general. Although autonomy and self-organization are
highly desired properties, they increase vulnerability to attacks. Hence,
autonomous networks need dependable mechanisms to detect malicious devices in
order to prevent compromise of the entire network. However, current mechanisms
to detect malicious devices either require a trusted central entity or scale
poorly.
In this paper, we present GrandDetAuto, the first scheme to identify
malicious devices efficiently within large autonomous networks of collaborating
entities. GrandDetAuto functions without relying on a central trusted entity,
works reliably for very large networks of devices, and is adaptable to a wide
range of application scenarios thanks to interchangeable components. Our scheme
uses random elections to embed integrity validation schemes in distributed
consensus, providing a solution supporting tens of thousands of devices. We
implemented and evaluated a concrete instance of GrandDetAuto on a network of
embedded devices and conducted large-scale network simulations with up to
100000 nodes. Our results show the effectiveness and efficiency of our scheme,
revealing logarithmic growth in run-time and message complexity with increasing
network size. Moreover, we provide an extensive evaluation of key parameters
showing that GrandDetAuto is applicable to many scenarios with diverse
requirements.
|
[
{
"version": "v1",
"created": "Fri, 11 Oct 2019 13:54:08 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Dec 2019 10:26:51 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Aug 2023 09:07:28 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Abera",
"Tigist",
""
],
[
"Brasser",
"Ferdinand",
""
],
[
"Gunn",
"Lachlan J.",
""
],
[
"Jauernig",
"Patrick",
""
],
[
"Koisser",
"David",
""
],
[
"Sadeghi",
"Ahmad-Reza",
""
]
] |
new_dataset
| 0.999074 |
2002.09534
|
Eryk Kopczynski
|
Eryk Kopczy\'nski
|
Hyperbolic Minesweeper is in P
|
fixed an error in Corollary 5.6: planar graph -> (r,d)-hyperbolic
graph
|
10th International Conference on Fun with Algorithms (FUN 2021)
|
10.4230/LIPIcs.FUN.2021.18
| null |
cs.CC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We show that, while Minesweeper is NP-complete, its hyperbolic variant is in
P. Our proof does not rely on the rules of Minesweeper, but is valid for any
puzzle based on satisfying local constraints on a graph embedded in the
hyperbolic plane.
|
[
{
"version": "v1",
"created": "Fri, 21 Feb 2020 20:05:04 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 21:26:04 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Kopczyński",
"Eryk",
""
]
] |
new_dataset
| 0.999238 |
2204.05184
|
Mingxin Zhang
|
Mingxin Zhang, Zipei Fan, Ryosuke Shibasaki and Xuan Song
|
Domain Adversarial Graph Convolutional Network Based on RSSI and
Crowdsensing for Indoor Localization
|
IEEE Internet of Things Journal
|
IEEE Internet of Things Journal, vol. 10, no. 15, pp. 13662-13672,
2023
|
10.1109/JIOT.2023.3262740
| null |
cs.NI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the use of WiFi fingerprints for indoor positioning has
grown in popularity, largely due to the widespread availability of WiFi and the
proliferation of mobile communication devices. However, many existing methods
for constructing fingerprint datasets rely on labor-intensive and
time-consuming processes of collecting large amounts of data. Additionally,
these methods often focus on ideal laboratory environments, rather than
considering the practical challenges of large multi-floor buildings. To address
these issues, we present a novel WiDAGCN model that can be trained using a
small number of labeled site survey data and large amounts of unlabeled
crowdsensed WiFi fingerprints. By constructing heterogeneous graphs based on
received signal strength indicators (RSSIs) between waypoints and WiFi access
points (APs), our model is able to effectively capture the topological
structure of the data. We also incorporate graph convolutional networks (GCNs)
to extract graph-level embeddings, a feature that has been largely overlooked
in previous WiFi indoor localization studies. To deal with the challenges of
large amounts of unlabeled data and multiple data domains, we employ a
semi-supervised domain adversarial training scheme to effectively utilize
unlabeled data and align the data distributions across domains. Our system is
evaluated using a public indoor localization dataset that includes multiple
buildings, and the results show that it performs competitively in terms of
localization accuracy in large buildings.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 08:06:27 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2022 09:36:45 GMT"
},
{
"version": "v3",
"created": "Fri, 31 Mar 2023 13:10:05 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhang",
"Mingxin",
""
],
[
"Fan",
"Zipei",
""
],
[
"Shibasaki",
"Ryosuke",
""
],
[
"Song",
"Xuan",
""
]
] |
new_dataset
| 0.999249 |
2208.08195
|
Josef Valvoda
|
Josef Valvoda, Naomi Saphra, Jonathan Rawski, Adina Williams, Ryan
Cotterell
|
Benchmarking Compositionality with Formal Languages
|
Published at COLING 2022. This version fixes a mistake in Figure 4
and adds a clarifying note in teal. Code is available at
https://github.com/valvoda/neuralTransducer
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recombining known primitive concepts into larger novel combinations is a
quintessentially human cognitive capability. Whether large neural models in NLP
can acquire this ability while learning from data is an open question. In this
paper, we investigate this problem from the perspective of formal languages. We
use deterministic finite-state transducers to make an unbounded number of
datasets with controllable properties governing compositionality. By randomly
sampling over many transducers, we explore which of their properties contribute
to learnability of a compositional relation by a neural network. We find that
the models either learn the relations completely or not at all. The key is
transition coverage, setting a soft learnability limit at 400 examples per
transition.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 10:03:18 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 11:40:32 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Aug 2023 15:19:55 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Valvoda",
"Josef",
""
],
[
"Saphra",
"Naomi",
""
],
[
"Rawski",
"Jonathan",
""
],
[
"Williams",
"Adina",
""
],
[
"Cotterell",
"Ryan",
""
]
] |
new_dataset
| 0.995526 |
2210.08111
|
Yu-Ming Chen
|
Yu-Ming Chen, Gabriel Nelson, Robert Griffin, Michael Posa and Jerry
Pratt
|
Integrable Whole-body Orientation Coordinates for Legged Robots
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Complex multibody legged robots can have complex rotational control
challenges. In this paper, we propose a concise way to understand and formulate
a \emph{whole-body orientation} that (i) depends on system configuration only
and not a history of motion, (ii) can be representative of the orientation of
the entire system while not being attached to any specific link, and (iii) has
a rate of change that approximates total system angular momentum. We relate
this orientation coordinate to past work, and discuss and demonstrate,
including on hardware, several different uses for it.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 21:13:19 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 21:02:03 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Chen",
"Yu-Ming",
""
],
[
"Nelson",
"Gabriel",
""
],
[
"Griffin",
"Robert",
""
],
[
"Posa",
"Michael",
""
],
[
"Pratt",
"Jerry",
""
]
] |
new_dataset
| 0.985323 |
2211.12972
|
Chenxu Ke
|
Chenxu Ke, Kai-Yuan Cai, Quan Quan
|
Uniform Passive Fault-Tolerant Control of a Quadcopter with One, Two, or
Three Rotor Failure
|
We found some important errors in the paper that need to be corrected
|
2023
|
10.1109/TRO.2023.3297048
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study proposes a uniform passive fault-tolerant control (FTC) method for
a quadcopter that does not rely on fault information subject to one, two
adjacent, two opposite, or three rotor failures. The uniform control implies
that the passive FTC is able to cover conditions from a fault-free quadcopter
to rotor failure without controller switching. To achieve the purpose of the
passive FTC, the rotors' fault is modeled as a disturbance acting on the
virtual control of the quadcopter system. The disturbance estimate is used
directly for the passive FTC with rotor failure. To avoid controller switching
between normal control and FTC, a dynamic control allocation is used. In
addition, the closed-loop stability has been analyzed and a virtual control
feedback is adopted to achieve the passive FTC for the quadcopter with two and
three rotor failure. To validate the proposed uniform passive FTC method,
outdoor experiments are performed for the first time, which have demonstrated
that the hovering quadcopter is able to recover from one rotor failure by the
proposed controller and continue to fly even if two adjacent, two opposite, or
three rotors fail, without any rotor fault information and controller
switching.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 14:27:46 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Dec 2022 01:50:10 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Ke",
"Chenxu",
""
],
[
"Cai",
"Kai-Yuan",
""
],
[
"Quan",
"Quan",
""
]
] |
new_dataset
| 0.999341 |
2212.00048
|
Denis Krotov
|
Minjia Shi, Yuhong Xia, Denis S. Krotov
|
A family of diameter perfect constant-weight codes from Steiner systems
|
v2: revised, accepted version
|
J. Comb. Theory, Ser. A 200 2023, 105790
|
10.1016/j.jcta.2023.105790
| null |
cs.IT cs.DM math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
If $S$ is a transitive metric space, then $|C|\cdot|A| \le |S|$ for any
distance-$d$ code $C$ and a set $A$, ``anticode'', of diameter less than $d$.
For every Steiner S$(t,k,n)$ system $S$, we show the existence of a $q$-ary
constant-weight code $C$ of length~$n$, weight~$k$ (or $n-k$), and distance
$d=2k-t+1$ (respectively, $d=n-t+1$) and an anticode $A$ of diameter $d-1$ such
that the pair $(C,A)$ attains the code--anticode bound and the supports of the
codewords of $C$ are the blocks of $S$ (respectively, the complements of the
blocks of $S$). We study the problem of estimating the minimum value of $q$ for
which such a code exists, and find that minimum for small values of $t$.
Keywords: diameter perfect codes, anticodes, constant-weight codes,
code--anticode bound, Steiner systems.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 19:00:06 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 21:00:30 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Xia",
"Yuhong",
""
],
[
"Krotov",
"Denis S.",
""
]
] |
new_dataset
| 0.997759 |
2303.08268
|
Xufeng Zhao
|
Xufeng Zhao, Mengdi Li, Cornelius Weber, Muhammad Burhan Hafez, and
Stefan Wermter
|
Chat with the Environment: Interactive Multimodal Perception Using Large
Language Models
|
Accepted at IROS2023, Detroit. See the project website at
https://matcha-model.github.io
| null | null | null |
cs.RO cs.AI cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programming robot behavior in a complex world faces challenges on multiple
levels, from dextrous low-level skills to high-level planning and reasoning.
Recent pre-trained Large Language Models (LLMs) have shown remarkable reasoning
ability in few-shot robotic planning. However, it remains challenging to ground
LLMs in multimodal sensory input and continuous action output, while enabling a
robot to interact with its environment and acquire novel information as its
policies unfold. We develop a robot interaction scenario with a partially
observable state, which necessitates a robot to decide on a range of epistemic
actions in order to sample sensory information among multiple modalities,
before being able to execute the task correctly. An interactive perception
framework is therefore proposed with an LLM as its backbone, whose ability is
exploited to instruct epistemic actions and to reason over the resulting
multimodal sensations (vision, sound, haptics, proprioception), as well as to
plan an entire task execution based on the interactively acquired information.
Our study demonstrates that LLMs can provide high-level planning and reasoning
skills and control interactive robot behavior in a multimodal environment,
while multimodal modules with the context of the environmental state help
ground the LLMs and extend their processing ability. The project website can be
found at
https://matcha-model.github.io/.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 23:01:27 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 10:22:21 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhao",
"Xufeng",
""
],
[
"Li",
"Mengdi",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Hafez",
"Muhammad Burhan",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.987033 |
2303.11020
|
Yangfu Li
|
Yangfu Li, Jiapan Gan, Xiaodan Lin
|
DS-TDNN: Dual-stream Time-delay Neural Network with Global-aware Filter
for Speaker Verification
|
13 pages 4 figures
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional time-delay neural networks (TDNNs) struggle to handle long-range
context, so their ability to represent speaker information is limited in long
utterances. Existing solutions either depend on increasing model
complexity or try to balance between local features and global context to
address this issue. To effectively leverage the long-term dependencies of audio
signals and constrain model complexity, we introduce a novel module called
Global-aware Filter layer (GF layer) in this work, which employs a set of
learnable transform-domain filters between a 1D discrete Fourier transform and
its inverse transform to capture global context. Additionally, we develop a
dynamic filtering strategy and a sparse regularization method to enhance the
performance of the GF layer and prevent overfitting. Based on the GF layer, we
present a dual-stream TDNN architecture called DS-TDNN for automatic speaker
verification (ASV), which utilizes two unique branches to extract both local
and global features in parallel and employs an efficient strategy to fuse
different-scale information. Experiments on the Voxceleb and SITW databases
demonstrate that the DS-TDNN achieves a relative improvement of 10% together
with a relative decline of 20% in computational cost over the ECAPA-TDNN in the
speaker verification task. This improvement will become more evident as the
utterance's duration grows. Furthermore, the DS-TDNN also beats popular deep
residual models and attention-based systems on utterances of arbitrary length.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 10:58:12 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 04:32:23 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Aug 2023 07:09:50 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Li",
"Yangfu",
""
],
[
"Gan",
"Jiapan",
""
],
[
"Lin",
"Xiaodan",
""
]
] |
new_dataset
| 0.998851 |
2303.12280
|
Yuki Fujimura
|
Yuki Fujimura, Takahiro Kushida, Takuya Funatomi, Yasuhiro Mukaigawa
|
NLOS-NeuS: Non-line-of-sight Neural Implicit Surface
|
ICCV 2023
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-line-of-sight (NLOS) imaging is conducted to infer invisible scenes from
indirect light on visible objects. The neural transient field (NeTF) was
proposed for representing scenes as neural radiance fields in NLOS scenes. We
propose NLOS neural implicit surface (NLOS-NeuS), which extends the NeTF to
neural implicit surfaces with a signed distance function (SDF) for
reconstructing three-dimensional surfaces in NLOS scenes. We introduce two
constraints as loss functions for correctly learning an SDF to avoid non-zero
level-set surfaces. We also introduce a lower bound constraint of an SDF based
on the geometry of the first-returning photons. The experimental results
indicate that these constraints are essential for learning a correct SDF in
NLOS scenes. Compared with previous methods with discretized representation,
NLOS-NeuS with the neural continuous representation enables us to reconstruct
smooth surfaces while preserving fine details in NLOS scenes. To the best of
our knowledge, this is the first study on neural implicit surfaces with volume
rendering in NLOS scenes.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 03:13:55 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 05:11:18 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Fujimura",
"Yuki",
""
],
[
"Kushida",
"Takahiro",
""
],
[
"Funatomi",
"Takuya",
""
],
[
"Mukaigawa",
"Yasuhiro",
""
]
] |
new_dataset
| 0.997508 |
2305.07336
|
Jiapeng Xie
|
Bo Zhou, Jiapeng Xie, Yan Pan, Jiajie Wu, and Chuanzhao Lu
|
MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with
Bird's Eye View based Appearance and Motion Features
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying moving objects is an essential capability for autonomous systems,
as it provides critical information for pose estimation, navigation, collision
avoidance, and static map construction. In this paper, we present MotionBEV, a
fast and accurate framework for LiDAR moving object segmentation, which
segments moving objects with appearance and motion features in the bird's eye
view (BEV) domain. Our approach converts 3D LiDAR scans into a 2D polar BEV
representation to improve computational efficiency. Specifically, we learn
appearance features with a simplified PointNet and compute motion features
through the height differences of consecutive frames of point clouds projected
onto vertical columns in the polar BEV coordinate system. We employ a
dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM)
to adaptively fuse the spatio-temporal information from appearance and motion
features. Our approach achieves state-of-the-art performance on the
SemanticKITTI-MOS benchmark. Furthermore, to demonstrate the practical
effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a
solid-state LiDAR, which features non-repetitive scanning patterns and a small
field of view.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 09:28:09 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 09:16:32 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhou",
"Bo",
""
],
[
"Xie",
"Jiapeng",
""
],
[
"Pan",
"Yan",
""
],
[
"Wu",
"Jiajie",
""
],
[
"Lu",
"Chuanzhao",
""
]
] |
new_dataset
| 0.999476 |
2305.10534
|
Vasileios Vasilopoulos
|
Vasileios Vasilopoulos, Suveer Garg, Pedro Piacenza, Jinwook Huh,
Volkan Isler
|
RAMP: Hierarchical Reactive Motion Planning for Manipulation Tasks Using
Implicit Signed Distance Functions
|
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS 2023) - 8 pages, 6 figures
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Reactive Action and Motion Planner (RAMP), which combines the
strengths of sampling-based and reactive approaches for motion planning. In
essence, RAMP is a hierarchical approach where a novel variant of a Model
Predictive Path Integral (MPPI) controller is used to generate trajectories
which are then followed asynchronously by a local vector field controller. We
demonstrate, in the context of a table clearing application, that RAMP can
rapidly find paths in the robot's configuration space, satisfy task and
robot-specific constraints, and provide safety by reacting to static or
dynamically moving obstacles. RAMP achieves superior performance through a
number of key innovations: we use Signed Distance Function (SDF)
representations directly from the robot configuration space, both for collision
checking and reactive control. The use of SDFs allows for a smoother definition
of collision cost when planning for a trajectory, and is critical in ensuring
safety while following trajectories. In addition, we introduce a novel variant
of MPPI which, combined with the safety guarantees of the vector field
trajectory follower, performs incremental real-time global trajectory planning.
Simulation results establish that our method can generate paths that are
comparable to traditional and state-of-the-art approaches in terms of total
trajectory length while being up to 30 times faster. Real-world experiments
demonstrate the safety and effectiveness of our approach in challenging table
clearing scenarios. Videos and code are available at:
https://samsunglabs.github.io/RAMP-project-page/
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 19:42:05 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 19:02:41 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Vasilopoulos",
"Vasileios",
""
],
[
"Garg",
"Suveer",
""
],
[
"Piacenza",
"Pedro",
""
],
[
"Huh",
"Jinwook",
""
],
[
"Isler",
"Volkan",
""
]
] |
new_dataset
| 0.990299 |
2306.02760
|
Weiyue Zhao
|
Weiyue Zhao, Hao Lu, Zhiguo Cao, Xin Li
|
A2B: Anchor to Barycentric Coordinate for Robust Correspondence
|
Accepted by International Journal of Computer Vision
| null |
10.1007/s11263-023-01827-5
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a long-standing problem of repeated patterns in correspondence
problems, where mismatches frequently occur because of inherent ambiguity. The
unique position information associated with repeated patterns makes coordinate
representations a useful supplement to appearance representations for improving
feature correspondences. However, the issue of appropriate coordinate
representation has remained unresolved. In this study, we demonstrate that
geometric-invariant coordinate representations, such as barycentric
coordinates, can significantly reduce mismatches between features. The first
step is to establish a theoretical foundation for geometrically invariant
coordinates. We present a seed matching and filtering network (SMFNet) that
combines feature matching and consistency filtering with a coarse-to-fine
matching strategy in order to acquire reliable sparse correspondences. We then
introduce DEGREE, a novel anchor-to-barycentric (A2B) coordinate encoding
approach, which generates multiple affine-invariant correspondence coordinates
from paired images. DEGREE can be used as a plug-in with standard descriptors,
feature matchers, and consistency filters to improve the matching quality.
Extensive experiments in synthesized indoor and outdoor datasets demonstrate
that DEGREE alleviates the problem of repeated patterns and helps achieve
state-of-the-art performance. Furthermore, DEGREE also reports competitive
performance in the third Image Matching Challenge at CVPR 2021. This approach
offers a new perspective to alleviate the problem of repeated patterns and
emphasizes the importance of choosing coordinate representations for feature
correspondences.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 10:28:53 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 05:21:09 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhao",
"Weiyue",
""
],
[
"Lu",
"Hao",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Li",
"Xin",
""
]
] |
new_dataset
| 0.984661 |
2306.07467
|
Richard Wesel
|
Richard Wesel, Amaael Antonini, Linfang Wang, Wenhui Sui, Brendan
Towell, Holden Grissett
|
ELF Codes: Concatenated Codes with an Expurgating Linear Function as the
Outer Code
|
6 arXiv pages (actual ISTC paper is 5 pages with more compressed
spacing), 6 figures, accepted to the 2023 International Symposium on
Topics in Coding. The latest version is the camera-ready version for ISTC,
edited for clarity to reflect reviewer suggestions, with references added
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An expurgating linear function (ELF) is a linear outer code that disallows
the low-weight codewords of the inner code. ELFs can be designed either to
maximize the minimum distance or to minimize the codeword error rate (CER) of
the expurgated code. A list-decoding sieve of the inner code starting from the
noiseless all-zeros codeword is an efficient way to identify ELFs that maximize
the minimum distance of the expurgated code. For convolutional inner codes,
this paper provides distance spectrum union (DSU) upper bounds on the CER of
the concatenated code.
For short codeword lengths, ELFs transform a good inner code into a great
concatenated code. For a constant message size of $K=64$ bits or constant
codeword blocklength of $N=152$ bits, an ELF can reduce the gap at CER
$10^{-6}$ between the DSU and the random-coding union (RCU) bounds from over 1
dB for the inner code alone to 0.23 dB for the concatenated code. The DSU
bounds can also characterize puncturing that mitigates the rate overhead of the
ELF while maintaining the DSU-to-RCU gap.
The reduction in DSU-to-RCU gap comes with a minimal increase in average
complexity at desired CER operating points. List Viterbi decoding guided by the
ELF approaches maximum likelihood (ML) decoding of the concatenated code, and
average list size converges to 1 as SNR increases. Thus, average complexity is
similar to Viterbi decoding on the trellis of the inner code at high SNR. For
rare large-magnitude noise events, which occur less often than the FER of the
inner code, a deep search in the list finds the ML codeword.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 23:56:20 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 04:10:02 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Wesel",
"Richard",
""
],
[
"Antonini",
"Amaael",
""
],
[
"Wang",
"Linfang",
""
],
[
"Sui",
"Wenhui",
""
],
[
"Towell",
"Brendan",
""
],
[
"Grissett",
"Holden",
""
]
] |
new_dataset
| 0.992247 |
2306.17358
|
Li Niu
|
Xinhao Tao, Junyan Cao, Li Niu
|
RdSOBA: Rendered Shadow-Object Association Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image composition refers to inserting a foreground object into a background
image to obtain a composite image. In this work, we focus on generating
plausible shadows for the inserted foreground object to make the composite
image more realistic. To supplement the existing small-scale dataset DESOBA, we
created a large-scale dataset called RdSOBA with 3D rendering techniques.
Specifically, we place a group of 3D objects in the 3D scene and obtain the
images with or without object shadows using controllable rendering techniques.
The dataset is available at
https://github.com/bcmi/Rendered-Shadow-Generation-Dataset-RdSOBA.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 01:32:16 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 05:15:35 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Tao",
"Xinhao",
""
],
[
"Cao",
"Junyan",
""
],
[
"Niu",
"Li",
""
]
] |
new_dataset
| 0.999883 |
2307.01105
|
Emilio Mart\'inez-Pa\~neda
|
T. Hageman, E. Mart\'inez-Pa\~neda
|
A phase field-based framework for electro-chemo-mechanical fracture:
crack-contained electrolytes, chemical reactions and stabilisation
| null | null |
10.1016/j.cma.2023.116235
| null |
cs.CE physics.app-ph physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new theoretical and computational framework for modelling
electro-chemo-mechanical fracture. The model combines a phase field description
of fracture with a fully coupled characterisation of electrolyte behaviour,
surface chemical reactions and stress-assisted diffusion. Importantly, a new
physics-based formulation is presented to describe electrolyte-containing phase
field cracks, appropriately capturing the sensitivity of electrochemical
transport and reaction kinetics to the crack opening height. Unlike other
existing methods, this approach is shown to accurately capture the results
obtained with discrete fracture simulations. The potential of the
electro-chemo-mechanical model presented is demonstrated by particularising it
to the analysis of hydrogen embrittlement in metallic samples exposed to
aqueous electrolytes. The finite element implementation takes as nodal
degrees-of-freedom the electrolyte potential, the concentrations of relevant
ionic species, the surface coverage, the concentration of diluted species, the
displacement field and the phase field order parameter. Particular attention is
devoted to improving stability and efficiency, resulting in the development of
strategies for avoiding ill-constrained degrees of freedom and lumped
integration schemes that eliminate numerical oscillations. The numerical
experiments conducted showcase the ability of the model to deliver
assumptions-free predictions for systems involving both free-flowing and
crack-contained electrolytes. The results obtained highlight the role of
electrolyte behaviour in driving the cracking process, evidencing the
limitations of existing models.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 15:32:55 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Hageman",
"T.",
""
],
[
"Martínez-Pañeda",
"E.",
""
]
] |
new_dataset
| 0.998567 |
2307.11224
|
Michael G\"unther
|
Michael G\"unther, Louis Milliken, Jonathan Geuter, Georgios
Mastrapas, Bo Wang, Han Xiao
|
Jina Embeddings: A Novel Set of High-Performance Sentence Embedding
Models
|
9 pages, 2 page appendix
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Jina Embeddings constitutes a set of high-performance sentence embedding
models adept at translating various textual inputs into numerical
representations, thereby capturing the semantic essence of the text. The models
excel in applications such as dense retrieval and semantic textual similarity.
This paper details the development of Jina Embeddings, starting with the
creation of high-quality pairwise and triplet datasets. It underlines the
crucial role of data cleaning in dataset preparation, gives in-depth insights
into the model training process, and concludes with a comprehensive performance
evaluation using the Massive Textual Embedding Benchmark (MTEB). To increase
the model's awareness of negations, we constructed a novel training and
evaluation dataset of negated and non-negated statements, which we make
publicly available to the community.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 20:37:24 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 13:40:31 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Günther",
"Michael",
""
],
[
"Milliken",
"Louis",
""
],
[
"Geuter",
"Jonathan",
""
],
[
"Mastrapas",
"Georgios",
""
],
[
"Wang",
"Bo",
""
],
[
"Xiao",
"Han",
""
]
] |
new_dataset
| 0.977504 |
2307.13753
|
Ahana Biswas
|
Ahana Biswas, Tim Niven, Yu-Ru Lin
|
The Dynamics of Political Narratives During the Russian Invasion of
Ukraine
|
To be published in International Conference on Social Computing,
Behavioral-Cultural Modeling, & Prediction and Behavior Representation in
Modeling and Simulation (SBP-BRiMS), 2023
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Russian invasion of Ukraine has elicited a diverse array of responses
from nations around the globe. During a global conflict, polarized narratives
are spread on social media to sway public opinion. We examine the dynamics of
the political narratives surrounding the Russia-Ukraine war during the first
two months of the Russian invasion of Ukraine (RU) using the Chinese Twitter
space as a case study. Since the beginning of the RU, pro-Chinese-state and
anti-Chinese-state users have spread divisive opinions, rumors, and conspiracy
theories. We investigate how the pro- and anti-state camps contributed to the
evolution of RU-related narratives, as well as how a few influential accounts
drove the narrative evolution. We identify pro-state and anti-state actors on
Twitter using network analysis and text-based classifiers, and we leverage text
analysis, along with the users' social interactions (e.g., retweeting), to
extract narrative coordination and evolution. We find evidence that both
pro-state and anti-state camps spread propaganda narratives about RU. Our
analysis illuminates how actors coordinate to advance particular viewpoints or
act against one another in the context of global conflict.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 18:21:36 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 18:23:07 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Biswas",
"Ahana",
""
],
[
"Niven",
"Tim",
""
],
[
"Lin",
"Yu-Ru",
""
]
] |
new_dataset
| 0.992821 |
2307.13933
|
Dingkang Yang
|
Dingkang Yang, Shuai Huang, Zhi Xu, Zhenpeng Li, Shunli Wang,
Mingcheng Li, Yuzheng Wang, Yang Liu, Kun Yang, Zhaoyu Chen, Yan Wang, Jing
Liu, Peixuan Zhang, Peng Zhai, Lihua Zhang
|
AIDE: A Vision-Driven Multi-View, Multi-Modal, Multi-Tasking Dataset for
Assistive Driving Perception
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driver distraction has become a significant cause of severe traffic accidents
over the past decade. Despite the growing development of vision-driven driver
monitoring systems, the lack of comprehensive perception datasets restricts
road safety and traffic security. In this paper, we present an AssIstive
Driving pErception dataset (AIDE) that considers context information both
inside and outside the vehicle in naturalistic scenarios. AIDE facilitates
holistic driver monitoring through three distinctive characteristics, including
multi-view settings of driver and scene, multi-modal annotations of face, body,
posture, and gesture, and four pragmatic task designs for driving
understanding. To thoroughly explore AIDE, we provide experimental benchmarks
on three kinds of baseline frameworks via extensive methods. Moreover, two
fusion strategies are introduced to give new insights into learning effective
multi-stream/modal representations. We also systematically investigate the
importance and rationality of the key components in AIDE and benchmarks. The
project link is https://github.com/ydk122024/AIDE.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 03:12:05 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 09:29:51 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Yang",
"Dingkang",
""
],
[
"Huang",
"Shuai",
""
],
[
"Xu",
"Zhi",
""
],
[
"Li",
"Zhenpeng",
""
],
[
"Wang",
"Shunli",
""
],
[
"Li",
"Mingcheng",
""
],
[
"Wang",
"Yuzheng",
""
],
[
"Liu",
"Yang",
""
],
[
"Yang",
"Kun",
""
],
[
"Chen",
"Zhaoyu",
""
],
[
"Wang",
"Yan",
""
],
[
"Liu",
"Jing",
""
],
[
"Zhang",
"Peixuan",
""
],
[
"Zhai",
"Peng",
""
],
[
"Zhang",
"Lihua",
""
]
] |
new_dataset
| 0.999857 |
2307.16160
|
Yusheng Wang
|
Yusheng Wang, Yonghoon Ji, Chujie Wu, Hiroshi Tsuchiya, Hajime Asama,
Atsushi Yamashita
|
Motion Degeneracy in Self-supervised Learning of Elevation Angle
Estimation for 2D Forward-Looking Sonar
|
IROS2023
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
2D forward-looking sonar is a crucial sensor for underwater robotic
perception. A well-known problem in this field is estimating missing
information in the elevation direction during sonar imaging. There is a demand
to estimate 3D information per image for 3D mapping and robot navigation during
fly-through missions. Recent learning-based methods have demonstrated their
strengths, but there are still drawbacks. Supervised learning methods have
achieved high-quality results but may require further efforts to acquire 3D
ground-truth labels. The existing self-supervised method requires pretraining
using synthetic images with 3D supervision. This study aims to realize stable
self-supervised learning of elevation angle estimation without pretraining
using synthetic images. Failures during self-supervised learning may be caused
by motion degeneracy problems. We first analyze the motion field of 2D
forward-looking sonar, which is related to the main supervision signal. We
utilize a modern learning framework and prove that if the training dataset is
built with effective motions, the network can be trained in a self-supervised
manner without the knowledge of synthetic data. Both simulation and real
experiments validate the proposed method.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 08:06:11 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 01:48:25 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Wang",
"Yusheng",
""
],
[
"Ji",
"Yonghoon",
""
],
[
"Wu",
"Chujie",
""
],
[
"Tsuchiya",
"Hiroshi",
""
],
[
"Asama",
"Hajime",
""
],
[
"Yamashita",
"Atsushi",
""
]
] |
new_dataset
| 0.975794 |
2308.00013
|
Luyao Zhang
|
Haoyang Yu, Yutong Sun, Yulin Liu, Luyao Zhang
|
Bitcoin Gold, Litecoin Silver: An Introduction to Cryptocurrency's
Valuation and Trading Strategy
| null | null | null | null |
cs.CE cs.CR econ.GN q-fin.CP q-fin.EC q-fin.TR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Historically, gold and silver have played distinct roles in traditional
monetary systems. While gold has primarily been revered as a superior store of
value, prompting individuals to hoard it, silver has commonly been used as a
medium of exchange. As the financial world evolves, the emergence of
cryptocurrencies has introduced a new paradigm of value and exchange. However,
the store-of-value characteristic of these digital assets remains largely
uncharted. Charlie Lee, the founder of Litecoin, once likened Bitcoin to gold
and Litecoin to silver. To validate this analogy, our study employs several
metrics, including unspent transaction outputs (UTXO), spent transaction
outputs (STXO), Weighted Average Lifespan (WAL), CoinDaysDestroyed (CDD), and
public on-chain transaction data. Furthermore, we've devised trading strategies
centered around the Price-to-Utility (PU) ratio, offering a fresh perspective
on crypto-asset valuation beyond traditional utilities. Our back-testing
results not only display trading indicators for both Bitcoin and Litecoin but
also substantiate Lee's metaphor, underscoring Bitcoin's superior
store-of-value proposition relative to Litecoin. We anticipate that our
findings will drive further exploration into the valuation of crypto assets.
For enhanced transparency and to promote future research, we've made our
datasets available on Harvard Dataverse and shared our Python code on GitHub as
open source.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 23:14:20 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Yu",
"Haoyang",
""
],
[
"Sun",
"Yutong",
""
],
[
"Liu",
"Yulin",
""
],
[
"Zhang",
"Luyao",
""
]
] |
new_dataset
| 0.999724 |
2308.00078
|
Zhaoyuan Su
|
Jamie Lee, Zhaoyuan Su, Yunan Chen
|
Mobile Apps for Children's Health and Wellbeing: Design Features and
Future Opportunities
|
Paper accepted for the proceedings of the 2023 American Medical
Informatics Association Annual Symposium (AMIA)
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile health apps hold great potential for promoting children's health and
wellbeing. However, there is limited understanding of how these technologies
are currently designed to support children with their health concerns or
wellness goals. To gain insight into the current landscape of mobile apps
designed for children's health, we retrieved and reviewed 43 apps from the iOS
and Google Play stores that are specifically marketed for children. Our qualitative
analysis identified the dominant health focuses and goals of children's mobile
health apps. We analyzed the primary users and their expectations as well as
the methods of engagement and involvement adopted. Based on our findings, we
discussed the opportunities to support children with chronic illnesses through
mobile apps, design for dual use, and design for age appropriateness and
digital health safety. This study provides insights and recommendations for app
designers, health researchers, and policymakers on strategies for engaging
children and parents while also promoting children's health and wellbeing
through mobile technology.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 18:52:26 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Lee",
"Jamie",
""
],
[
"Su",
"Zhaoyuan",
""
],
[
"Chen",
"Yunan",
""
]
] |
new_dataset
| 0.999502 |
2308.00130
|
D\v{z}enan Lapandi\'c
|
D\v{z}enan Lapandi\'c, Christos K. Verginis, Dimos V. Dimarogonas, Bo
Wahlberg
|
Kinodynamic Motion Planning via Funnel Control for Underactuated
Unmanned Surface Vehicles
|
11 pages, 10 figures, submitted to IEEE T-CST
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop an algorithm to control an underactuated unmanned surface vehicle
(USV) using kinodynamic motion planning with funnel control (KDF). KDF has two
key components: motion planning used to generate trajectories with respect to
kinodynamic constraints, and funnel control, also referred to as prescribed
performance control, which enables trajectory tracking in the presence of
uncertain dynamics and disturbances. We extend prescribed performance control
to address the challenges posed by underactuation and control-input saturation
present on the USV. The proposed scheme guarantees stability under user-defined
prescribed performance functions where model parameters and exogenous
disturbances are unknown. Furthermore, we present an optimization problem to
obtain smooth, collision-free trajectories while respecting kinodynamic
constraints. We deploy the algorithm on a USV and verify its efficiency in
real-world open-water experiments.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 19:53:55 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Lapandić",
"Dženan",
""
],
[
"Verginis",
"Christos K.",
""
],
[
"Dimarogonas",
"Dimos V.",
""
],
[
"Wahlberg",
"Bo",
""
]
] |
new_dataset
| 0.989636 |
2308.00144
|
Sanjay Lall
|
Sanjay Lall, Calin Cascaval, Martin Izzard, Tammo Spalink
|
Logical Synchrony and the bittide Mechanism
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce logical synchrony, a framework that allows distributed computing
to be coordinated as tightly as in synchronous systems without the distribution
of a global clock or any reference to universal time. We develop a model of
events called a logical synchrony network, in which nodes correspond to
processors and every node has an associated local clock which generates the
events. We construct a measure of logical latency and develop its properties. A
further model, called a multiclock network, is then analyzed and shown to be a
refinement of the logical synchrony network. We present the bittide mechanism
as an instantiation of multiclock networks, and discuss the clock control
mechanism that ensures that buffers do not overflow or underflow. Finally we
give conditions under which a logical synchrony network has an equivalent
synchronous realization.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 20:25:30 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Lall",
"Sanjay",
""
],
[
"Cascaval",
"Calin",
""
],
[
"Izzard",
"Martin",
""
],
[
"Spalink",
"Tammo",
""
]
] |
new_dataset
| 0.998972 |
2308.00154
|
Vikram Jain
|
Vikram Jain, Matheus Cavalcante, Nazareno Bruschi, Michael Rogenmoser,
Thomas Benz, Andreas Kurth, Davide Rossi, Luca Benini, Marian Verhelst
|
PATRONoC: Parallel AXI Transport Reducing Overhead for Networks-on-Chip
targeting Multi-Accelerator DNN Platforms at the Edge
|
Accepted and presented at 60th DAC
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Emerging deep neural network (DNN) applications require high-performance
multi-core hardware acceleration with large data bursts. Classical
networks-on-chip (NoCs) use serial packet-based protocols suffering from
significant protocol translation overheads towards the endpoints. This paper
proposes PATRONoC, an open-source fully AXI-compliant NoC fabric to better
address the specific needs of multi-core DNN computing platforms. Evaluation of
PATRONoC in a 2D-mesh topology shows 34% higher area efficiency compared to a
state-of-the-art classical NoC at 1 GHz. PATRONoC's throughput outperforms a
baseline NoC by 2-8X on uniform random traffic and provides a high aggregated
throughput of up to 350 GiB/s on synthetic and DNN workload traffic.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 21:08:37 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Jain",
"Vikram",
""
],
[
"Cavalcante",
"Matheus",
""
],
[
"Bruschi",
"Nazareno",
""
],
[
"Rogenmoser",
"Michael",
""
],
[
"Benz",
"Thomas",
""
],
[
"Kurth",
"Andreas",
""
],
[
"Rossi",
"Davide",
""
],
[
"Benini",
"Luca",
""
],
[
"Verhelst",
"Marian",
""
]
] |
new_dataset
| 0.961762 |
2308.00174
|
Ankit Agrawal
|
Bohan Zhang, Yashaswini Shivalingaiah, Ankit Agrawal
|
DroneReqValidator: Facilitating High Fidelity Simulation Testing for
Uncrewed Aerial Systems Developers
|
ASE-2023 Tool Demo Track
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Rigorous testing of small Uncrewed Aerial Systems (sUAS) is crucial to ensure
their safe and reliable deployment in the real world. sUAS developers aim to
validate the reliability and safety of their applications through simulation
testing. However, the dynamic nature of the real-world environment, including
factors such as challenging weather conditions and wireless interference,
causes unique software faults that may only be revealed through field testing.
Considering the high cost and impracticality of conducting field testing in
thousands of environmental contexts and conditions, there exists a pressing
need to develop automated techniques that can generate high-fidelity, realistic
environments enabling sUAS developers to deploy their applications and conduct
thorough simulation testing in close-to-reality environmental conditions. To
address this need, DroneReqValidator (DRV) offers a comprehensive small
Unmanned Aerial Vehicle (sUAV) simulation ecosystem that automatically
generates realistic environments based on developer-specified constraints,
monitors sUAV activities against predefined safety parameters, and generates
detailed acceptance test reports for effective debugging and analysis of sUAV
applications. Providing these capabilities, DRV offers a valuable solution for
enhancing the testing and development process of sUAS. The comprehensive demo
of DRV is available at https://www.youtube.com/watch?v=Fd9ft55gbO8
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 22:13:57 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhang",
"Bohan",
""
],
[
"Shivalingaiah",
"Yashaswini",
""
],
[
"Agrawal",
"Ankit",
""
]
] |
new_dataset
| 0.994392 |
2308.00187
|
Yujia Li
|
Chiyu Zhang, Ji Han, Yao Zou, Kexin Dong, Yujia Li, Junchun Ding,
Xiaoling Han
|
Detecting the Anomalies in LiDAR Pointcloud
| null | null | null | null |
cs.RO cs.CV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR sensors play an important role in the perception stack of modern
autonomous driving systems. Adverse weather conditions such as rain, fog, and
dust, as well as some (occasional) LiDAR hardware faults, may cause the LiDAR to
produce pointclouds with abnormal patterns such as scattered noise points and
uncommon intensity values. In this paper, we propose a novel approach to detect
whether a LiDAR is generating an anomalous pointcloud by analyzing the
pointcloud characteristics. Specifically, we develop a pointcloud quality
metric based on the LiDAR points' spatial and intensity distribution to
characterize the noise level of the pointcloud, which relies on pure
mathematical analysis and does not require any labeling or training as
learning-based methods do. Therefore, the method is scalable and can be quickly
deployed either online to improve autonomy safety by monitoring anomalies in
the LiDAR data or offline to perform an in-depth study of LiDAR behavior over
large amounts of data. The proposed approach is studied with extensive real
public road data collected by LiDARs with different scanning mechanisms and
laser spectra, and is proven to be able to effectively handle various known and
unknown sources of pointcloud anomalies.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 22:53:42 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhang",
"Chiyu",
""
],
[
"Han",
"Ji",
""
],
[
"Zou",
"Yao",
""
],
[
"Dong",
"Kexin",
""
],
[
"Li",
"Yujia",
""
],
[
"Ding",
"Junchun",
""
],
[
"Han",
"Xiaoling",
""
]
] |
new_dataset
| 0.995944 |
2308.00224
|
Liwenhan Xie
|
Liwenhan Xie and Zhaoyu Zhou and Kerun Yu and Yun Wang and Huamin Qu
and Siming Chen
|
Wakey-Wakey: Animate Text by Mimicking Characters in a GIF
|
Accepted in the 36th Annual ACM Symposium on User Interface Software
and Technology (UIST'23)
| null |
10.1145/3586183.3606813
| null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With appealing visual effects, kinetic typography (animated text) has
prevailed in movies, advertisements, and social media. However, it remains
challenging and time-consuming to craft its animation scheme. We propose an
automatic framework to transfer the animation scheme of a rigid body on a given
meme GIF to text in vector format. First, the trajectories of key points on the
GIF anchor are extracted and mapped to the text's control points based on local
affine transformation. Then the temporal positions of the control points are
optimized to maintain the text topology. We also develop an authoring tool that
allows intuitive human control in the generation process. A questionnaire study
provides evidence that the output results are aesthetically pleasing and
preserve the animation patterns of the original GIF well, with participants
noting that the animated text conveys emotional semantics similar to those of
the original GIF. In addition, we
evaluate the utility and effectiveness of our approach through a workshop with
general users and designers.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 01:37:37 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Xie",
"Liwenhan",
""
],
[
"Zhou",
"Zhaoyu",
""
],
[
"Yu",
"Kerun",
""
],
[
"Wang",
"Yun",
""
],
[
"Qu",
"Huamin",
""
],
[
"Chen",
"Siming",
""
]
] |
new_dataset
| 0.997467 |
2308.00240
|
Geyang Guo
|
Geyang Guo, Jiarong Yang, Fengyuan Lu, Jiaxin Qin, Tianyi Tang, Wayne
Xin Zhao
|
Towards Effective Ancient Chinese Translation: Dataset, Model, and
Evaluation
|
Accepted by NLPCC 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interpreting ancient Chinese has been the key to comprehending vast Chinese
literature, tradition, and civilization. In this paper, we propose Erya for
ancient Chinese translation. From a dataset perspective, we collect, clean, and
classify ancient Chinese materials from various sources, forming the most
extensive ancient Chinese resource to date. From a model perspective, we devise
Erya training method oriented towards ancient Chinese. We design two
jointly-working tasks: disyllabic aligned substitution (DAS) and dual masked
language model (DMLM). From an evaluation perspective, we build a benchmark to
judge ancient Chinese translation quality in different scenarios and evaluate
the ancient Chinese translation capacities of various existing models. Our
model exhibits remarkable zero-shot performance across five domains, with over
+12.0 BLEU against GPT-3.5 models and better human evaluation results than
ERNIE Bot. Subsequent fine-tuning further shows the superior transfer
capability of Erya model with +6.2 BLEU gain. We release all the
above-mentioned resources at https://github.com/RUCAIBox/Erya.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 02:43:27 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Guo",
"Geyang",
""
],
[
"Yang",
"Jiarong",
""
],
[
"Lu",
"Fengyuan",
""
],
[
"Qin",
"Jiaxin",
""
],
[
"Tang",
"Tianyi",
""
],
[
"Zhao",
"Wayne Xin",
""
]
] |
new_dataset
| 0.98909 |
2308.00259
|
Jiawei Xu
|
Jiawei Xu, Diego S D'antonio, Dominic J Ammirato, David Salda\~na
|
SBlimp: Design, Model, and Translational Motion Control for a
Swing-Blimp
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present an aerial vehicle composed of a custom quadrotor with tilted
rotors and a helium balloon, called SBlimp. We propose a novel control strategy
that takes advantage of the natural stable attitude of the blimp to control
translational motion. Different from cascade controllers in the literature that
controls attitude to achieve desired translational motion, our approach
directly controls the linear velocity regardless of the heading orientation of
the vehicle. As a result, the vehicle swings during the translational motion.
We provide a planar analysis of the dynamic model, demonstrating stability for
our controller. Our design is evaluated in numerical simulations with different
physical factors and validated with experiments using a real-world prototype,
showing that the SBlimp is able to achieve stable translation regardless of its
orientation.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 03:41:50 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Xu",
"Jiawei",
""
],
[
"D'antonio",
"Diego S",
""
],
[
"Ammirato",
"Dominic J",
""
],
[
"Saldaña",
"David",
""
]
] |
new_dataset
| 0.975202 |
2308.00262
|
Xuan Bac Nguyen
|
Xuan-Bac Nguyen, Xudong Liu, Xin Li, Khoa Luu
|
The Algonauts Project 2023 Challenge: UARK-UAlbany Team Solution
|
The Algonauts Project 2023 Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents our solutions to the Algonauts Project 2023 Challenge. The
primary objective of the challenge revolves around employing computational
models to anticipate brain responses captured during participants' observation
of intricate natural visual scenes. The goal is to predict brain responses
across the entire visual brain, as it is the region where the most reliable
responses to images have been observed. We constructed an image-based brain
encoder through a two-step training process to tackle this challenge.
Initially, we created a pretrained encoder using data from all subjects. Next,
we proceeded to fine-tune individual subjects. Each step employed different
training strategies, such as different loss functions and objectives, to
introduce diversity. Ultimately, our solution constitutes an ensemble of
multiple unique encoders. The code is available at
https://github.com/uark-cviu/Algonauts2023
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 03:46:59 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Nguyen",
"Xuan-Bac",
""
],
[
"Liu",
"Xudong",
""
],
[
"Li",
"Xin",
""
],
[
"Luu",
"Khoa",
""
]
] |
new_dataset
| 0.997523 |
2308.00288
|
Zian Liu
|
Zian Liu, Lei Pan, Chao Chen, Ejaz Ahmed, Shigang Liu, Jun Zhang,
Dongxi Liu
|
VulMatch: Binary-level Vulnerability Detection Through Signature
|
15 pages IEEE journal template
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Similar vulnerabilities repeat in real-world software products because of code
reuse, especially in widely reused third-party code and libraries. Detecting
repeating vulnerabilities like 1-day and N-day vulnerabilities is an important
cyber security task. Unfortunately, the state-of-the-art methods suffer from
poor performance because they detect patch existence instead of vulnerability
existence and infer the vulnerability signature directly from binary code. In
this paper, we propose VulMatch to extract precise vulnerability-related binary
instructions to generate the vulnerability-related signature. VulMatch detects
vulnerability existence based on binary signatures. Unlike previous approaches,
VulMatch accurately locates vulnerability-related instructions by utilizing
source and binary codes. Our experiments were conducted using over 1000
vulnerable instances across seven open-source projects. VulMatch significantly
outperformed the baseline tools Asm2vec and Palmtree. Besides the performance
advantages over the baseline tools, VulMatch offers a better feature by
providing explainable reasons during vulnerability detection. Our empirical
studies demonstrate that VulMatch detects fine-grained vulnerability that the
state-of-the-art tools struggle with. Our experiment on commercial firmware
demonstrates that VulMatch is able to find vulnerabilities in real-world scenarios.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 05:04:24 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Liu",
"Zian",
""
],
[
"Pan",
"Lei",
""
],
[
"Chen",
"Chao",
""
],
[
"Ahmed",
"Ejaz",
""
],
[
"Liu",
"Shigang",
""
],
[
"Zhang",
"Jun",
""
],
[
"Liu",
"Dongxi",
""
]
] |
new_dataset
| 0.999791 |
2308.00294
|
Yuntong Zhang
|
Yuntong Zhang, Andreea Costea, Ridwan Shariffdeen, Davin McCall, Abhik
Roychoudhury
|
Patch Space Exploration using Static Analysis Feedback
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated Program Repair (APR) techniques typically rely on a given
test-suite to guide the repair process. Apart from the need to provide test
oracles, this makes the produced patches prone to test data over-fitting. In
this work, instead of relying on test cases, we show how to automatically
repair memory safety issues, by leveraging static analysis (specifically
Incorrectness Separation Logic) to guide repair. Our proposed approach learns
what a desirable patch is by inspecting how close a patch is to fixing the bug
based on the feedback from incorrectness separation logic based static analysis
(specifically the Pulse analyser), and turning this information into a
distribution of probabilities over context free grammars. Furthermore, instead
of focusing on heuristics for reducing the search space of patches, we make
repair scalable by creating classes of equivalent patches according to the
effect they have on the symbolic heap, and then invoking the validation oracle
only once per class of patch equivalence. This allows us to efficiently
discover repairs even in the presence of a large pool of patch candidates
offered by our generic patch synthesis mechanism. Experimental evaluation of
our approach was conducted by repairing real world memory errors in OpenSSL,
swoole and other subjects. The evaluation results show the scalability and
efficacy of our approach in automatically producing high quality patches.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 05:22:10 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Zhang",
"Yuntong",
""
],
[
"Costea",
"Andreea",
""
],
[
"Shariffdeen",
"Ridwan",
""
],
[
"McCall",
"Davin",
""
],
[
"Roychoudhury",
"Abhik",
""
]
] |
new_dataset
| 0.998529 |
2308.00295
|
Shamanthak Hegde
|
Shamanthak Hegde, Soumya Jahagirdar and Shankar Gangisetty
|
Making the V in Text-VQA Matter
|
Accepted for the CVPR 2023 Workshop on Open-Domain Reasoning Under
Multi-Modal Settings
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Text-based VQA aims at answering questions by reading the text present in the
images. It requires a deeper understanding of scene-text relationships than the
standard VQA task. Recent studies have shown that the question-answer pairs in
the dataset focus more on the text present in the image, give less importance
to visual features, and include some questions that do not require
understanding the image at all. The models trained on this dataset predict
biased answers due to the lack of understanding of visual context. For example,
in questions like "What is written on the signboard?", the answer predicted by
the model is always "STOP", which shows that the model ignores the image. To
address these issues, we propose a method to learn visual features (making V
matter in TextVQA) along with the OCR features and question features, using the
VQA dataset as external knowledge for Text-based VQA. Specifically, we combine
the TextVQA
dataset and VQA dataset and train the model on this combined dataset. Such a
simple, yet effective approach increases the understanding and correlation
between the image features and text present in the image, which helps in the
better answering of questions. We further test the model on different datasets
and compare their qualitative and quantitative results.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 05:28:13 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Hegde",
"Shamanthak",
""
],
[
"Jahagirdar",
"Soumya",
""
],
[
"Gangisetty",
"Shankar",
""
]
] |
new_dataset
| 0.997818 |
2308.00323
|
Asish Bera
|
Asish Bera, Mita Nasipuri, Ondrej Krejcar, and Debotosh Bhattacharjee
|
Fine-Grained Sports, Yoga, and Dance Postures Recognition: A Benchmark
Analysis
|
12 pages, 12 figures, 10 tables
|
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023
|
10.1109/TIM.2023.3293564
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human body-pose estimation is a complex problem in computer vision. Recent
research interests have been widened specifically on the Sports, Yoga, and
Dance (SYD) postures for maintaining health conditions. The SYD pose categories
are regarded as a fine-grained image classification task due to the complex
movement of body parts. Deep Convolutional Neural Networks (CNNs) have attained
significantly improved performance in solving various human body-pose
estimation problems. Though decent progress has been achieved in yoga postures
recognition using deep learning techniques, fine-grained sports, and dance
recognition necessitates ample research attention. However, no benchmark public
image dataset with sufficient inter-class and intra-class variations is
available yet to address sports and dance postures classification. To solve
this limitation, we have proposed two image datasets, one for 102 sport
categories and another for 12 dance styles. Two public datasets, Yoga-82 which
contains 82 classes and Yoga-107 represents 107 classes are collected for yoga
postures. These four SYD datasets are experimented with the proposed deep
model, SYD-Net, which integrates a patch-based attention (PbA) mechanism on top
of standard backbone CNNs. The PbA module leverages the self-attention
mechanism that learns contextual information from a set of uniform and
multi-scale patches and emphasizes discriminative features to understand the
semantic correlation among patches. Moreover, random erasing data augmentation
is applied to improve performance. The proposed SYD-Net has achieved
state-of-the-art accuracy on Yoga-82 using five base CNNs. SYD-Net's accuracy
on other datasets is remarkable, implying its efficiency. Our Sports-102 and
Dance-12 datasets are publicly available at
https://sites.google.com/view/syd-net/home.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 07:00:13 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Bera",
"Asish",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Krejcar",
"Ondrej",
""
],
[
"Bhattacharjee",
"Debotosh",
""
]
] |
new_dataset
| 0.999493 |
2308.00353
|
Runyu Ding
|
Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, Xiaojuan
Qi
|
Lowis3D: Language-Driven Open-World Instance-Level 3D Scene
Understanding
|
Submitted to TPAMI
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-world instance-level scene understanding aims to locate and recognize
unseen object categories that are not present in the annotated dataset. This
task is challenging because the model needs to both localize novel 3D objects
and infer their semantic categories. A key factor for the recent progress in 2D
open-world perception is the availability of large-scale image-text pairs from
the Internet, which cover a wide range of vocabulary concepts. However, this
success is hard to replicate in 3D scenarios due to the scarcity of 3D-text
pairs. To address this challenge, we propose to harness pre-trained
vision-language (VL) foundation models that encode extensive knowledge from
image-text pairs to generate captions for multi-view images of 3D scenes. This
allows us to establish explicit associations between 3D shapes and
semantic-rich captions. Moreover, to enhance the fine-grained visual-semantic
representation learning from captions for object-level categorization, we
design hierarchical point-caption association methods to learn semantic-aware
embeddings that exploit the 3D geometry between 3D points and multi-view
images. In addition, to tackle the localization challenge for novel classes in
the open-world setting, we develop debiased instance localization, which
involves training object grouping modules on unlabeled data using
instance-level pseudo supervision. This significantly improves the
generalization capabilities of instance grouping and thus the ability to
accurately locate novel objects. We conduct extensive experiments on 3D
semantic, instance, and panoptic segmentation tasks, covering indoor and
outdoor scenes across three datasets. Our method outperforms baseline methods
by a significant margin in semantic segmentation (e.g. 34.5%$\sim$65.3%),
instance segmentation (e.g. 21.8%$\sim$54.0%) and panoptic segmentation (e.g.
14.7%$\sim$43.3%). Code will be available.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 07:50:14 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Ding",
"Runyu",
""
],
[
"Yang",
"Jihan",
""
],
[
"Xue",
"Chuhui",
""
],
[
"Zhang",
"Wenqing",
""
],
[
"Bai",
"Song",
""
],
[
"Qi",
"Xiaojuan",
""
]
] |
new_dataset
| 0.999842 |
2308.00378
|
Paolo Santonastaso
|
Paolo Santonastaso and John Sheekey
|
On MSRD codes, h-designs and disjoint maximum scattered linear sets
| null | null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we study geometric aspects of codes in the sum-rank metric. We
establish the geometric description of generalised weights, and analyse the
Delsarte and geometric dual operations. We establish a correspondence between
maximum sum-rank distance codes and h-designs, extending the well-known
correspondence between MDS codes and arcs in projective spaces and between MRD
codes and h-scattered subspaces. We use the geometric setting to construct new
h-designs and new MSRD codes via new families of pairwise disjoint maximum
scattered linear sets.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 08:42:56 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Santonastaso",
"Paolo",
""
],
[
"Sheekey",
"John",
""
]
] |
new_dataset
| 0.999258 |
2308.00380
|
Andre Schulz
|
Andr\'e Schulz
|
Side-Contact Representations with Convex Polygons in 3D: New Results for
Complete Bipartite Graphs
|
Appears in the Proceedings of the 31st International Symposium on
Graph Drawing and Network Visualization (GD 2023)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
A polyhedral surface~$\mathcal{C}$ in $\mathbb{R}^3$ with convex polygons as
faces is a side-contact representation of a graph~$G$ if there is a bijection
between the vertices of $G$ and the faces of~$\mathcal{C}$ such that the
polygons of adjacent vertices are exactly the polygons sharing an entire common
side in~$\mathcal{C}$.
We show that $K_{3,8}$ has a side-contact representation but $K_{3,250}$ has
not. The latter result implies that the number of edges of a graph with
side-contact representation and $n$ vertices is bounded by $O(n^{5/3})$.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 08:48:20 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Schulz",
"André",
""
]
] |
new_dataset
| 0.995998 |
2308.00406
|
Shanqi Pang
|
Shanqi Pang, Chaomeng Zhang, Mengqian Chen, Miaomiao Zhang
|
Near MDS and near quantum MDS codes via orthogonal arrays
|
13 pages, 0 figures
| null | null | null |
cs.IT math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Near MDS (NMDS) codes are closely related to interesting objects in finite
geometry and have nice applications in combinatorics and cryptography. However,
many problems concerning the construction of NMDS codes remain unsolved. In this
paper, by using symmetrical orthogonal arrays (OAs), we construct a lot of
NMDS, $m$-MDS and almost extremal NMDS codes. We establish a relation between
asymmetrical OAs and quantum error correcting codes (QECCs) over mixed
alphabets. Since quantum maximum distance separable (QMDS) codes over mixed
alphabets with dimension equal to one have not been found in the literature so
far, the definition of a near quantum maximum distance separable
(NQMDS) code over mixed alphabets is proposed. By using asymmetrical OAs, we
obtain many such codes.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 09:36:48 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Pang",
"Shanqi",
""
],
[
"Zhang",
"Chaomeng",
""
],
[
"Chen",
"Mengqian",
""
],
[
"Zhang",
"Miaomiao",
""
]
] |
new_dataset
| 0.997765 |
2308.00431
|
Samuel Coward
|
Samuel Coward, Emiliano Morini, Bryan Tan, Theo Drane, George
Constantinides
|
Datapath Verification via Word-Level E-Graph Rewriting
| null | null | null | null |
cs.LO cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Formal verification of datapath circuits is challenging as they are subject
to intense optimization effort in the design phase. Industrial vendors and
design companies deploy equivalence checking against a golden or existing
reference design to satisfy correctness concerns. State-of-the-art datapath
equivalence checking tools deploy a suite of techniques, including rewriting.
We propose a rewriting framework deploying bitwidth dependent rewrites based on
the e-graph data structure, providing a powerful assistant to existing tools.
The e-graph can generate a path of rewrites between the reference and
implementation designs that can be checked by a trusted industry tool. We will
demonstrate how the intermediate proofs generated by the assistant enable
convergence in a state-of-the-art tool, without which the industrial tool runs
for 24 hours without making progress. The intermediate proofs automatically
introduced by the assistant also reduce the total proof runtime by up to 6x.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 10:20:07 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Coward",
"Samuel",
""
],
[
"Morini",
"Emiliano",
""
],
[
"Tan",
"Bryan",
""
],
[
"Drane",
"Theo",
""
],
[
"Constantinides",
"George",
""
]
] |
new_dataset
| 0.993329 |
2308.00465
|
Yanxin Xi
|
Yanxin Xi, Yu Liu, Tong Li, Jintao Ding, Yunke Zhang, Sasu Tarkoma,
Yong Li, and Pan Hui
|
A Satellite Imagery Dataset for Long-Term Sustainable Development in
United States Cities
|
20 pages, 5 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cities play an important role in achieving sustainable development goals
(SDGs) to promote economic growth and meet social needs. Satellite imagery in
particular is a potential data source for studying sustainable urban development.
However, a comprehensive dataset in the United States (U.S.) covering multiple
cities, multiple years, multiple scales, and multiple indicators for SDG
monitoring is lacking. To support the research on SDGs in U.S. cities, we
develop a satellite imagery dataset using deep learning models for five SDGs
containing 25 sustainable development indicators. The proposed dataset covers
the 100 most populated U.S. cities and corresponding Census Block Groups from
2014 to 2023. Specifically, we collect satellite imagery and identify objects
with state-of-the-art object detection and semantic segmentation models to
observe cities from a bird's-eye view. We further gather population, nighttime light,
survey, and built environment data to depict SDGs regarding poverty, health,
education, inequality, and living environment. We anticipate the dataset to
help urban policymakers and researchers to advance SDGs-related studies,
especially applying satellite imagery to monitor long-term and multi-scale SDGs
in cities.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 11:40:19 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Xi",
"Yanxin",
""
],
[
"Liu",
"Yu",
""
],
[
"Li",
"Tong",
""
],
[
"Ding",
"Jintao",
""
],
[
"Zhang",
"Yunke",
""
],
[
"Tarkoma",
"Sasu",
""
],
[
"Li",
"Yong",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.999728 |
2308.00477
|
Eric Goubault
|
Eric Goubault and Roman Kniazev and J\'er\'emy Ledent
|
A many-sorted epistemic logic for chromatic hypergraphs
| null | null | null | null |
cs.LO cs.MA math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a many-sorted modal logic for reasoning about knowledge in
multi-agent systems. Our logic introduces a clear distinction between
participating agents and the environment. This allows us to express local
properties of agents and global properties of worlds in a uniform way, as well
as to talk about the presence or absence of agents in a world. The logic
subsumes the standard epistemic logic and is a conservative extension of it.
The semantics is given in chromatic hypergraphs, a generalization of chromatic
simplicial complexes, which were recently used to model knowledge in
distributed systems. We show that the logic is sound and complete with respect
to the intended semantics. We also show a further connection of chromatic
hypergraphs with neighborhood frames.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 12:02:17 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Goubault",
"Eric",
""
],
[
"Kniazev",
"Roman",
""
],
[
"Ledent",
"Jérémy",
""
]
] |
new_dataset
| 0.99759 |
2308.00514
|
Daniella Tola
|
Daniella Tola and Peter Corke
|
Understanding URDF: A Dataset and Analysis
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
As the complexity of robot systems increases, it becomes more effective to
simulate them before deployment. To do this, a model of the robot's kinematics
or dynamics is required, and the most commonly used format is the Unified Robot
Description Format (URDF). This article presents, to our knowledge, the first
dataset of URDF files from various industrial and research organizations, with
metadata describing each robot, its type, manufacturer, and the source of the
model. The dataset contains 322 URDF files of which 195 are unique robot
models, meaning the excess URDFs are either of a robot that is multiply defined
across sources or URDF variants of the same robot. We analyze the files in the
dataset, where we, among other things, provide information on how they were
generated, which mesh file types are most commonly used, and compare models of
multiply defined robots. The intention of this article is to build a foundation
of knowledge on URDF and how it is used based on publicly available URDF files.
Publishing the dataset, analysis, and the scripts and tools used enables others
using, researching or developing URDFs to easily access this data and use it in
their own work.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 12:54:12 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Tola",
"Daniella",
""
],
[
"Corke",
"Peter",
""
]
] |
new_dataset
| 0.99987 |
2308.00531
|
Wentao Gong
|
Wentao Gong, Haonan Tong, Sihua Wang, Zhaohui Yang, Xinxin He,
Changchuan Yin
|
Adaptive Bitrate Video Semantic Communication over Wireless Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the adaptive bitrate (ABR) video semantic
communication over wireless networks. In the considered model, video sensing
devices must transmit video semantic information to an edge server, to
facilitate ubiquitous video sensing services such as road environment
monitoring at the edge server in autonomous driving scenario. However, due to
the varying wireless network conditions, it is challenging to guarantee both
low transmission delay and high semantic accuracy at the same time if devices
continuously transmit fixed-bitrate video semantic information. To address
this challenge, we develop an adaptive bitrate video semantic communication
(ABRVSC) system, in which devices adaptively adjust the bitrate of video
semantic information according to network conditions. Specifically, we first
define the quality of experience (QoE) for video semantic communication.
Subsequently, a Swin Transformer-based semantic codec is proposed to extract
semantic information with considering the influence of QoE. Then, we propose an
Actor-Critic based ABR algorithm for the semantic codec to enhance the
robustness of the proposed ABRVSC scheme against network variations. Simulation
results demonstrate that at low bitrates, the mean intersection over union
(MIoU) of the proposed ABRVSC scheme is nearly twice that of the traditional
scheme. Moreover, the proposed ABRVSC scheme, which increases the QoE in video
semantic communication by 36.57%, exhibits more robustness against network
variations compared to both the fixed bitrate schemes and traditional ABR
schemes.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 13:25:10 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Gong",
"Wentao",
""
],
[
"Tong",
"Haonan",
""
],
[
"Wang",
"Sihua",
""
],
[
"Yang",
"Zhaohui",
""
],
[
"He",
"Xinxin",
""
],
[
"Yin",
"Changchuan",
""
]
] |
new_dataset
| 0.95107 |
2308.00538
|
Lala Shakti Swarup Ray
|
Lala Shakti Swarup Ray, Vitor Fortes Rey, Bo Zhou, Sungho Suh, Paul
Lukowicz
|
PressureTransferNet: Human Attribute Guided Dynamic Ground Pressure
Profile Transfer using 3D simulated Pressure Maps
|
Activity and Behavior Computing 2023
| null | null | null |
cs.CV cs.AI cs.GR eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose PressureTransferNet, a novel method for Human Activity Recognition
(HAR) using ground pressure information. Our approach generates body-specific
dynamic ground pressure profiles for specific activities by leveraging existing
pressure data from different individuals. PressureTransferNet is an
encoder-decoder model taking a source pressure map and a target human attribute
vector as inputs, producing a new pressure map reflecting the target attribute.
To train the model, we use a sensor simulation to create a diverse dataset with
various human attributes and pressure profiles. Evaluation on a real-world
dataset shows its effectiveness in accurately transferring human attributes to
ground pressure profiles across different scenarios. We visually confirm the
fidelity of the synthesized pressure shapes using a physics-based deep learning
model and achieve a binary R-square value of 0.79 on areas with ground contact.
Validation through classification with F1 score (0.911$\pm$0.015) on physical
pressure mat data demonstrates the correctness of the synthesized pressure
maps, making our method valuable for data augmentation, denoising, sensor
simulation, and anomaly detection. Applications span sports science,
rehabilitation, and bio-mechanics, contributing to the development of HAR
systems.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 13:31:25 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Ray",
"Lala Shakti Swarup",
""
],
[
"Rey",
"Vitor Fortes",
""
],
[
"Zhou",
"Bo",
""
],
[
"Suh",
"Sungho",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.999727 |
2308.00555
|
Shay Solomon
|
Hsien-Chih Chang, Jonathan Conroy, Hung Le, Lazar Milenkovic, Shay
Solomon, Cuong Than
|
Shortcut Partitions in Minor-Free Graphs: Steiner Point Removal,
Distance Oracles, Tree Covers, and More
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The notion of shortcut partition, introduced recently by Chang, Conroy, Le,
Milenkovi\'c, Solomon, and Than [CCLMST23], is a new type of graph partition
into low-diameter clusters. Roughly speaking, the shortcut partition guarantees
that for every two vertices $u$ and $v$ in the graph, there exists a path
between $u$ and $v$ that intersects only a few clusters. They proved that any
planar graph admits a shortcut partition and gave several applications,
including a construction of tree cover for arbitrary planar graphs with stretch
$1+\varepsilon$ and $O(1)$ many trees for any fixed $\varepsilon \in (0,1)$.
However, the construction heavily exploits planarity in multiple steps, and is
thus inherently limited to planar graphs.
In this work, we breach the "planarity barrier" to construct a shortcut
partition for $K_r$-minor-free graphs for any $r$. To this end, we take a
completely different approach -- our key contribution is a novel deterministic
variant of the cop decomposition in minor-free graphs [And86, AGG14]. Our
shortcut partition for $K_r$-minor-free graphs yields several direct
applications. Most notably, we construct the first optimal distance oracle for
$K_r$-minor-free graphs, with $1+\varepsilon$ stretch, linear space, and
constant query time for any fixed $\varepsilon \in (0,1)$. The previous best
distance oracle [AG06] uses $O(n\log n)$ space and $O(\log n)$ query time, and
its construction relies on Robertson-Seymour structural theorem and other
sophisticated tools. We also obtain the first tree cover of $O(1)$ size for
minor-free graphs with stretch $1+\varepsilon$, while the previous best
$(1+\varepsilon)$-tree cover has size $O(\log^2 n)$ [BFN19].
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:51:00 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Chang",
"Hsien-Chih",
""
],
[
"Conroy",
"Jonathan",
""
],
[
"Le",
"Hung",
""
],
[
"Milenkovic",
"Lazar",
""
],
[
"Solomon",
"Shay",
""
],
[
"Than",
"Cuong",
""
]
] |
new_dataset
| 0.99571 |
2308.00565
|
Sunyou Hwang
|
Sunyou Hwang, Bart D. W. Remes, Guido C. H. E. de Croon
|
AOSoar: Autonomous Orographic Soaring of a Micro Air Vehicle
|
8 pages, 11 figures, accepted to IROS 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Utilizing wind hovering techniques of soaring birds can save energy
expenditure and improve the flight endurance of micro air vehicles (MAVs).
Here, we present a novel method for fully autonomous orographic soaring without
a priori knowledge of the wind field. Specifically, we devise an Incremental
Nonlinear Dynamic Inversion (INDI) controller with control allocation, adapting
it for autonomous soaring. This allows for both soaring and the use of the
throttle if necessary, without changing any gain or parameter during the
flight. Furthermore, we propose a simulated-annealing-based optimization method
to search for soaring positions. This enables for the first time an MAV to
autonomously find a feasible soaring position while minimizing throttle usage
and other control efforts. Autonomous orographic soaring was performed in the
wind tunnel. The wind speed and incline of a ramp were changed during the
soaring flight. The MAV was able to perform autonomous orographic soaring for
flight times of up to 30 minutes. The mean throttle usage was only 0.25% for
the entire soaring flight, whereas normal powered flight requires 38%. Also, it
was shown that the MAV can find a new soaring spot when the wind field changes
during the flight.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 14:09:19 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Hwang",
"Sunyou",
""
],
[
"Remes",
"Bart D. W.",
""
],
[
"de Croon",
"Guido C. H. E.",
""
]
] |
new_dataset
| 0.997845 |
2308.00596
|
Marcelo Eduardo Pederiva
|
Marcelo Eduardo Pederiva, Jos\'e Mario De Martino and Alessandro
Zimmer
|
MonoNext: A 3D Monocular Object Detection with ConvNext
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Autonomous driving perception tasks rely heavily on cameras as the primary
sensor for Object Detection, Semantic Segmentation, Instance Segmentation, and
Object Tracking. However, RGB images captured by cameras lack depth
information, which poses a significant challenge in 3D detection tasks. To
supplement this missing data, mapping sensors such as LIDAR and RADAR are used
for accurate 3D Object Detection. Despite their significant accuracy,
multi-sensor models are expensive and computationally demanding. In
contrast, Monocular 3D Object Detection models are becoming increasingly
popular, offering a faster, cheaper, and easier-to-implement solution for 3D
detections. This paper introduces a different Multi-Tasking Learning approach
called MonoNext that utilizes a spatial grid to map objects in the scene.
MonoNext employs a straightforward approach based on the ConvNext network and
requires only 3D bounding box annotated data. In our experiments with the KITTI
dataset, MonoNext achieved high precision and performance competitive
with state-of-the-art approaches. Furthermore, by adding more
training data, MonoNext surpassed itself and achieved higher accuracies.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 15:15:40 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Pederiva",
"Marcelo Eduardo",
""
],
[
"De Martino",
"José Mario",
""
],
[
"Zimmer",
"Alessandro",
""
]
] |
new_dataset
| 0.999668 |
2308.00624
|
Wenchao Gu
|
Qinhua Duan, Wenchao Gu, Yujia Chen, Wenxin Mao, Zewen Tian, Hui Cao
|
JIANG: Chinese Open Foundation Language Model
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Advancements in large language model technology have showcased
capabilities that come close to those of human beings across various tasks.
This achievement has garnered significant interest from companies and
scientific research institutions, leading to substantial investments in the
research and development of these models. While numerous large models have
emerged during this period, the majority of them have been trained primarily on
English data. Although they exhibit decent performance in other languages, such
as Chinese, their potential remains limited due to factors like vocabulary
design and training corpus. Consequently, their ability to fully express their
capabilities in Chinese falls short. To address this issue, we introduce the
model named JIANG (Chinese pinyin of ginger) specifically designed for the
Chinese language. We have gathered a substantial amount of Chinese corpus to
train the model and have also optimized its structure. The extensive
experimental results demonstrate the excellent performance of our model.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 15:51:41 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Duan",
"Qinhua",
""
],
[
"Gu",
"Wenchao",
""
],
[
"Chen",
"Yujia",
""
],
[
"Mao",
"Wenxin",
""
],
[
"Tian",
"Zewen",
""
],
[
"Cao",
"Hui",
""
]
] |
new_dataset
| 0.99819 |
2308.00640
|
Yuhao Lu
|
Yuhao Lu, Yixuan Fan, Beixing Deng, Fangfu Liu, Yali Li, Shengjin Wang
|
VL-Grasp: a 6-Dof Interactive Grasp Policy for Language-Oriented Objects
in Cluttered Indoor Scenes
|
8 pages, 4 figures, IROS 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic grasping faces new challenges in human-robot-interaction scenarios.
We consider the task in which the robot grasps a target object designated by
a human's language directives. The robot not only needs to locate a target based
on vision-and-language information, but also needs to predict the reasonable
grasp pose candidate at various views and postures. In this work, we propose a
novel interactive grasp policy, named Visual-Lingual-Grasp (VL-Grasp), to grasp
the target specified by human language. First, we build a new challenging
visual grounding dataset to provide functional training data for robotic
interactive perception in indoor environments. Second, we propose a 6-Dof
interactive grasp policy combined with visual grounding and 6-Dof grasp pose
detection to extend the universality of interactive grasping. Third, we design
a grasp pose filter module to enhance the performance of the policy.
Experiments demonstrate the effectiveness and extendibility of the VL-Grasp in
real world. The VL-Grasp achieves a success rate of 72.5\% in different indoor
scenes. The code and dataset are available at
https://github.com/luyh20/VL-Grasp.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 16:13:35 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Lu",
"Yuhao",
""
],
[
"Fan",
"Yixuan",
""
],
[
"Deng",
"Beixing",
""
],
[
"Liu",
"Fangfu",
""
],
[
"Li",
"Yali",
""
],
[
"Wang",
"Shengjin",
""
]
] |
new_dataset
| 0.999271 |
2308.00642
|
Monika Dalal
|
Monika Dalal, Sucheta Dutt, Ranjeet Sehmi
|
Reversible complement cyclic codes over finite chain rings
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Let k be an arbitrary element of a finite commutative chain ring R and u be a
unit in R. In this work, we present necessary and sufficient conditions
for a cyclic code to be a (u,k) reversible complement code over R.
Using these conditions, all principally generated cyclic codes over the ring
Z_{2}+vZ_{2}+v^{2}Z_{2}, v^{3}=0 of length 4 have been checked to find whether
they are (1,1) reversible complement or not.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 16:15:45 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Dalal",
"Monika",
""
],
[
"Dutt",
"Sucheta",
""
],
[
"Sehmi",
"Ranjeet",
""
]
] |
new_dataset
| 0.99799 |
2308.00682
|
Tinghao Feng
|
Tinghao Feng, Yueqi Hu, Jing Yang, Tom Polk, Ye Zhao, Shixia Liu,
Zhaocong Yang
|
TimePool: Visually Answer "Which and When" Questions On Univariate Time
Series
| null | null | null | null |
cs.HC cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
When exploring time series datasets, analysts often pose "which and when"
questions. For example, with world life expectancy data over one hundred years,
they may inquire about the top 10 countries in life expectancy and the time
period when they achieved this status, or which countries have had longer life
expectancy than Ireland and when. This paper proposes TimePool, a new
visualization prototype, to address this need for univariate time series
analysis. It allows users to construct interactive "which and when" queries and
visually explore the results for insights.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 17:37:24 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Feng",
"Tinghao",
""
],
[
"Hu",
"Yueqi",
""
],
[
"Yang",
"Jing",
""
],
[
"Polk",
"Tom",
""
],
[
"Zhao",
"Ye",
""
],
[
"Liu",
"Shixia",
""
],
[
"Yang",
"Zhaocong",
""
]
] |
new_dataset
| 0.999394 |
2308.00688
|
Nikhil Keetha
|
Nikhil Keetha, Avneesh Mishra, Jay Karhade, Krishna Murthy
Jatavallabhula, Sebastian Scherer, Madhava Krishna, Sourav Garg
|
AnyLoc: Towards Universal Visual Place Recognition
| null | null | null | null |
cs.CV cs.AI cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Visual Place Recognition (VPR) is vital for robot localization. To date, the
most performant VPR approaches are environment- and task-specific: while they
exhibit strong performance in structured environments (predominantly urban
driving), their performance degrades severely in unstructured environments,
rendering most approaches brittle to robust real-world deployment. In this
work, we develop a universal solution to VPR -- a technique that works across a
broad range of structured and unstructured environments (urban, outdoors,
indoors, aerial, underwater, and subterranean environments) without any
re-training or fine-tuning. We demonstrate that general-purpose feature
representations derived from off-the-shelf self-supervised models with no
VPR-specific training are the right substrate upon which to build such a
universal VPR solution. Combining these derived features with unsupervised
feature aggregation enables our suite of methods, AnyLoc, to achieve up to 4X
significantly higher performance than existing approaches. We further obtain a
6% improvement in performance by characterizing the semantic properties of
these features, uncovering unique domains which encapsulate datasets from
similar environments. Our detailed experiments and analysis lay a foundation
for building VPR solutions that may be deployed anywhere, anytime, and across
anyview. We encourage the readers to explore our project page and interactive
demos: https://anyloc.github.io/.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 17:45:13 GMT"
}
] | 2023-08-02T00:00:00 |
[
[
"Keetha",
"Nikhil",
""
],
[
"Mishra",
"Avneesh",
""
],
[
"Karhade",
"Jay",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Scherer",
"Sebastian",
""
],
[
"Krishna",
"Madhava",
""
],
[
"Garg",
"Sourav",
""
]
] |
new_dataset
| 0.992549 |
2203.04838
|
Kailun Yang
|
Jiaming Zhang, Huayao Liu, Kailun Yang, Xinxin Hu, Ruiping Liu, Rainer
Stiefelhagen
|
CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with
Transformers
|
Accepted to IEEE Transactions on Intelligent Transportation Systems
(T-ITS). The source code of CMX is publicly available at
https://github.com/huaaaliu/RGBX_Semantic_Segmentation
| null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene understanding based on image segmentation is a crucial component of
autonomous vehicles. Pixel-wise semantic segmentation of RGB images can be
advanced by exploiting complementary features from the supplementary modality
(X-modality). However, covering a wide variety of sensors with a
modality-agnostic model remains an unresolved problem due to variations in
sensor characteristics among different modalities. Unlike previous
modality-specific methods, in this work, we propose a unified fusion framework,
CMX, for RGB-X semantic segmentation. To generalize well across different
modalities, which often include supplements as well as uncertainties, a unified
cross-modal interaction is crucial for modality fusion. Specifically, we design
a Cross-Modal Feature Rectification Module (CM-FRM) to calibrate bi-modal
features by leveraging the features from one modality to rectify the features
of the other modality. With rectified feature pairs, we deploy a Feature Fusion
Module (FFM) to perform sufficient exchange of long-range contexts before
mixing. To verify CMX, for the first time, we unify five modalities
complementary to RGB, i.e., depth, thermal, polarization, event, and LiDAR.
Extensive experiments show that CMX generalizes well to diverse multi-modal
fusion, achieving state-of-the-art performances on five RGB-Depth benchmarks,
as well as RGB-Thermal, RGB-Polarization, and RGB-LiDAR datasets. Besides, to
investigate the generalizability to dense-sparse data fusion, we establish an
RGB-Event semantic segmentation benchmark based on the EventScape dataset, on
which CMX sets the new state-of-the-art. The source code of CMX is publicly
available at https://github.com/huaaaliu/RGBX_Semantic_Segmentation.
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 16:12:08 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 13:37:24 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Mar 2023 13:30:43 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Jul 2023 13:47:17 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhang",
"Jiaming",
""
],
[
"Liu",
"Huayao",
""
],
[
"Yang",
"Kailun",
""
],
[
"Hu",
"Xinxin",
""
],
[
"Liu",
"Ruiping",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
new_dataset
| 0.996819 |
2204.13499
|
Andrei Bytes
|
Andrei Bytes, Prashant Hari Narayan Rajput, Constantine Doumanidis,
Nils Ole Tippenhauer, Michail Maniatakos, Jianying Zhou
|
FieldFuzz: In Situ Blackbox Fuzzing of Proprietary Industrial Automation
Runtimes via the Network
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Networked Programmable Logic Controllers (PLCs) are proprietary industrial
devices utilized in critical infrastructure that execute control logic
applications in complex proprietary runtime environments that provide
standardized access to the hardware resources in the PLC. These control
applications are programmed in domain-specific IEC 61131-3 languages, compiled
into a proprietary binary format, and process data provided via industrial
protocols. Control applications present an attack surface threatened by
manipulated traffic. For example, remote code injection in a control
application would directly allow an attacker to take over the PLC, threatening physical
process damage and the safety of human operators. However, assessing the
security of control applications is challenging due to domain-specific
challenges and the limited availability of suitable methods. Network-based
fuzzing is often the only way to test such devices but is inefficient without
guidance from execution tracing. This work presents the FieldFuzz framework
that analyzes the security risks posed by the Codesys runtime (used by over 400
devices from 80 industrial PLC vendors). FieldFuzz leverages efficient
network-based fuzzing based on three main contributions: i) reverse-engineering
enabled remote control of control applications and runtime components, ii)
automated command discovery and status code extraction via network traffic and
iii) a monitoring setup to allow on-system tracing and coverage computation. We
use FieldFuzz to run fuzzing campaigns, which uncover multiple vulnerabilities,
leading to three reported CVE IDs. To study the cross-platform applicability of
FieldFuzz, we reproduce the findings on a diverse set of Industrial Control
System (ICS) devices, showing a significant improvement over the
state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 13:42:46 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2022 10:46:20 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2023 19:38:45 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Jul 2023 10:33:25 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Bytes",
"Andrei",
""
],
[
"Rajput",
"Prashant Hari Narayan",
""
],
[
"Doumanidis",
"Constantine",
""
],
[
"Tippenhauer",
"Nils Ole",
""
],
[
"Maniatakos",
"Michail",
""
],
[
"Zhou",
"Jianying",
""
]
] |
new_dataset
| 0.999619 |
2209.03320
|
Sarah Pratt
|
Sarah Pratt, Ian Covert, Rosanne Liu, Ali Farhadi
|
What does a platypus look like? Generating customized prompts for
zero-shot image classification
|
Accepted at ICCV 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-vocabulary models are a promising new paradigm for image classification.
Unlike traditional classification models, open-vocabulary models classify among
any arbitrary set of categories specified with natural language during
inference. This natural language, called "prompts", typically consists of a set
of hand-written templates (e.g., "a photo of a {}") which are completed with
each of the category names. This work introduces a simple method to generate
higher accuracy prompts, without relying on any explicit knowledge of the task
domain and with far fewer hand-constructed sentences. To achieve this, we
combine open-vocabulary models with large language models (LLMs) to create
Customized Prompts via Language models (CuPL, pronounced "couple"). In
particular, we leverage the knowledge contained in LLMs in order to generate
many descriptive sentences that contain important discriminating
characteristics of the image categories. This allows the model to place a
greater importance on these regions in the image when making predictions. We
find that this straightforward and general approach improves accuracy on a
range of zero-shot image classification benchmarks, including over one
percentage point gain on ImageNet. Finally, this simple baseline requires no
additional training and remains completely zero-shot. Code available at
https://github.com/sarahpratt/CuPL.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 17:27:08 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 14:39:12 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Pratt",
"Sarah",
""
],
[
"Covert",
"Ian",
""
],
[
"Liu",
"Rosanne",
""
],
[
"Farhadi",
"Ali",
""
]
] |
new_dataset
| 0.989132 |
2209.13042
|
Justin Kerr
|
Justin Kerr, Huang Huang, Albert Wilcox, Ryan Hoque, Jeffrey
Ichnowski, Roberto Calandra, and Ken Goldberg
|
Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment
Features
|
RSS 2023, site: https://sites.google.com/berkeley.edu/ssvtp
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Humans make extensive use of vision and touch as complementary senses, with
vision providing global information about the scene and touch measuring local
information during manipulation without suffering from occlusions. While prior
work demonstrates the efficacy of tactile sensing for precise manipulation of
deformables, they typically rely on supervised, human-labeled datasets. We
propose Self-Supervised Visuo-Tactile Pretraining (SSVTP), a framework for
learning multi-task visuo-tactile representations in a self-supervised manner
through cross-modal supervision. We design a mechanism that enables a robot to
autonomously collect precisely spatially-aligned visual and tactile image
pairs, then train visual and tactile encoders to embed these pairs into a
shared latent space using cross-modal contrastive loss. We apply this latent
space to downstream perception and control of deformable garments on flat
surfaces, and evaluate the flexibility of the learned representations without
fine-tuning on 5 tasks: feature classification, contact localization, anomaly
detection, feature search from a visual query (e.g., garment feature
localization under occlusion), and edge following along cloth edges. The
pretrained representations achieve a 73-100% success rate on these 5 tasks.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 21:50:39 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 17:47:27 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Kerr",
"Justin",
""
],
[
"Huang",
"Huang",
""
],
[
"Wilcox",
"Albert",
""
],
[
"Hoque",
"Ryan",
""
],
[
"Ichnowski",
"Jeffrey",
""
],
[
"Calandra",
"Roberto",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.966979 |
2210.07601
|
Weiming Li
|
Weiming Li, Lihui Xue, Xueqian Wang, and Gang Li
|
MCTNet: A Multi-Scale CNN-Transformer Network for Change Detection in
Optical Remote Sensing Images
|
5 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the task of change detection (CD) in remote sensing images, deep
convolutional neural network (CNN)-based methods have recently aggregated
transformer modules to improve the capability of global feature extraction.
However, they suffer from degraded CD performance on small changed areas due to the
simple single-scale integration of deep CNNs and transformer modules. To
address this issue, we propose a hybrid network based on multi-scale
CNN-transformer structure, termed MCTNet, where the multi-scale global and
local information are exploited to enhance the robustness of the CD performance
on changed areas with different sizes. Especially, we design the ConvTrans
block to adaptively aggregate global features from transformer modules and
local features from CNN layers, which provides abundant global-local features
with different scales. Experimental results demonstrate that our MCTNet
achieves better detection performance than existing state-of-the-art CD
methods.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 07:54:28 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 08:57:28 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Jul 2023 03:13:05 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Li",
"Weiming",
""
],
[
"Xue",
"Lihui",
""
],
[
"Wang",
"Xueqian",
""
],
[
"Li",
"Gang",
""
]
] |
new_dataset
| 0.995279 |
2212.14454
|
Zhuo Chen
|
Zhuo Chen, Jiaoyan Chen, Wen Zhang, Lingbing Guo, Yin Fang, Yufeng
Huang, Yichi Zhang, Yuxia Geng, Jeff Z. Pan, Wenting Song, Huajun Chen
|
MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality
Hybrid
|
ACM Multimedia 2023 Accepted, Repo:
https://github.com/zjukg/MEAformer
|
ACM MM 2023
| null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal entity alignment (MMEA) aims to discover identical entities
across different knowledge graphs (KGs) whose entities are associated with
relevant images. However, current MMEA algorithms rely on KG-level modality
fusion strategies for multi-modal entity representation, which ignores the
variations of modality preferences of different entities, thus compromising
robustness against noise in modalities such as blurry images and relations.
This paper introduces MEAformer, a multi-modal entity alignment transformer
approach for meta modality hybrid, which dynamically predicts the mutual
correlation coefficients among modalities for more fine-grained entity-level
modality fusion and alignment. Experimental results demonstrate that our model
not only achieves SOTA performance in multiple training scenarios, including
supervised, unsupervised, iterative, and low-resource settings, but also has a
limited number of parameters, efficient runtime, and interpretability. Our code
is available at https://github.com/zjukg/MEAformer.
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 20:49:58 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Jan 2023 13:39:59 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Apr 2023 09:36:26 GMT"
},
{
"version": "v4",
"created": "Sun, 30 Jul 2023 14:39:36 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Chen",
"Zhuo",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Zhang",
"Wen",
""
],
[
"Guo",
"Lingbing",
""
],
[
"Fang",
"Yin",
""
],
[
"Huang",
"Yufeng",
""
],
[
"Zhang",
"Yichi",
""
],
[
"Geng",
"Yuxia",
""
],
[
"Pan",
"Jeff Z.",
""
],
[
"Song",
"Wenting",
""
],
[
"Chen",
"Huajun",
""
]
] |
new_dataset
| 0.997804 |
2301.03944
|
Yunbo Lyu
|
Yunbo Lyu, Thanh Le-Cong, Hong Jin Kang, Ratnadira Widyasari, Zhipeng
Zhao, Xuan-Bach D. Le, Ming Li, David Lo
|
CHRONOS: Time-Aware Zero-Shot Identification of Libraries from
Vulnerability Reports
|
Accepted to the Technical Track of ICSE 2023
| null | null | null |
cs.SE cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tools that alert developers about library vulnerabilities depend on accurate,
up-to-date vulnerability databases which are maintained by security
researchers. These databases record the libraries related to each
vulnerability. However, the vulnerability reports may not explicitly list every
library and human analysis is required to determine all the relevant libraries.
Human analysis may be slow and expensive, which motivates the need for
automated approaches. Researchers and practitioners have proposed to
automatically identify libraries from vulnerability reports using extreme
multi-label learning (XML).
While state-of-the-art XML techniques showed promising performance, their
experiment settings do not practically fit what happens in reality. Previous
studies randomly split the vulnerability reports data for training and testing
their models without considering the chronological order of the reports. This
may unduly train the models on chronologically newer reports while testing the
models on chronologically older ones. However, in practice, one often receives
chronologically new reports, which may be related to previously unseen
libraries. Under this practical setting, we observe that the performance of
current XML techniques declines substantially, e.g., F1 decreased from 0.7 to
0.28 under experiments without and with consideration of chronological order of
vulnerability reports.
We propose a practical library identification approach, namely CHRONOS, based
on zero-shot learning. The novelty of CHRONOS is three-fold. First, CHRONOS
fits into the practical pipeline by considering the chronological order of
vulnerability reports. Second, CHRONOS enriches the data of the vulnerability
descriptions and labels using a carefully designed data enhancement step.
Third, CHRONOS exploits the temporal ordering of the vulnerability reports
using a cache to prioritize prediction of...
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 12:57:10 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Feb 2023 12:48:51 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Mar 2023 07:29:49 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Jul 2023 04:33:44 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Lyu",
"Yunbo",
""
],
[
"Le-Cong",
"Thanh",
""
],
[
"Kang",
"Hong Jin",
""
],
[
"Widyasari",
"Ratnadira",
""
],
[
"Zhao",
"Zhipeng",
""
],
[
"Le",
"Xuan-Bach D.",
""
],
[
"Li",
"Ming",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.999352 |
2302.08207
|
Lang Nie
|
Lang Nie, Chunyu Lin, Kang Liao, Shuaicheng Liu, Yao Zhao
|
Parallax-Tolerant Unsupervised Deep Image Stitching
|
Accepted to ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional image stitching approaches tend to leverage increasingly complex
geometric features (point, line, edge, etc.) for better performance. However,
these hand-crafted features are only suitable for specific natural scenes with
adequate geometric structures. In contrast, deep stitching schemes overcome the
adverse conditions by adaptively learning robust semantic features, but they
cannot handle large-parallax cases due to homography-based registration. To
solve these issues, we propose UDIS++, a parallax-tolerant unsupervised deep
image stitching technique. First, we propose a robust and flexible warp to
model the image registration from global homography to local thin-plate spline
motion. It provides accurate alignment for overlapping regions and shape
preservation for non-overlapping regions by joint optimization concerning
alignment and distortion. Subsequently, to improve the generalization
capability, we design a simple but effective iterative strategy to enhance the
warp adaption in cross-dataset and cross-resolution applications. Finally, to
further eliminate the parallax artifacts, we propose to composite the stitched
image seamlessly by unsupervised learning for seam-driven composition masks.
Compared with existing methods, our solution is parallax-tolerant and free from
laborious designs of complicated geometric features for specific scenes.
Extensive experiments show our superiority over the SoTA methods, both
quantitatively and qualitatively. The code is available at
https://github.com/nie-lang/UDIS2.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 10:40:55 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 03:47:27 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Nie",
"Lang",
""
],
[
"Lin",
"Chunyu",
""
],
[
"Liao",
"Kang",
""
],
[
"Liu",
"Shuaicheng",
""
],
[
"Zhao",
"Yao",
""
]
] |
new_dataset
| 0.980906 |
2302.10023
|
Linh K\"astner
|
Linh K\"astner, Reyk Carstens, Huajian Zeng, Jacek Kmiecik, Teham
Bhuiyan, Niloufar Khorsandi, Volodymyr Shcherbyna, and Jens Lambrecht
|
Arena-Rosnav 2.0: A Development and Benchmarking Platform for Robot
Navigation in Highly Dynamic Environments
|
8 pages, 5 figures
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Following up on our previous works, in this paper, we present Arena-Rosnav
2.0, an extension of our previous works Arena-Bench and Arena-Rosnav, which adds
a variety of additional modules for developing and benchmarking robotic
navigation approaches. The platform is fundamentally restructured and provides
unified APIs to add additional functionalities such as planning algorithms,
simulators, or evaluation functionalities. We have included more realistic
simulation and pedestrian behavior and provide thorough documentation to
lower the entry barrier. We evaluated our system by first, conducting a user
study in which we asked experienced researchers as well as new practitioners
and students to test our system. The feedback was mostly positive and a high
number of participants are utilizing our system for other research endeavors.
Finally, we demonstrate the feasibility of our system by integrating two new
simulators and a variety of state-of-the-art navigation approaches and
benchmark them against one another. The platform is openly available at
https://github.com/Arena-Rosnav.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 15:10:16 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 07:20:27 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Kästner",
"Linh",
""
],
[
"Carstens",
"Reyk",
""
],
[
"Zeng",
"Huajian",
""
],
[
"Kmiecik",
"Jacek",
""
],
[
"Bhuiyan",
"Teham",
""
],
[
"Khorsandi",
"Niloufar",
""
],
[
"Shcherbyna",
"Volodymyr",
""
],
[
"Lambrecht",
"Jens",
""
]
] |
new_dataset
| 0.996801 |
2303.00920
|
Tamzidul Mina
|
Tamzidul Mina, Wonse Jo, Shyam S. Kannan, and Byung-Cheol Min
|
Beacon-based Distributed Structure Formation in Multi-agent Systems
|
8 pages, 6 figures, accepted for publication in IROS 2023. A link to
the simulation videos is provided under the Validation section
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous shape and structure formation is an important problem in the
domain of large-scale multi-agent systems. In this paper, we propose a 3D
structure representation method and a distributed structure formation strategy
where settled agents guide free moving agents to a prescribed location to
settle in the structure. Agents at the structure formation frontier looking for
neighbors to settle act as beacons, generating a surface gradient throughout
the formed structure propagated by settled agents. Free-moving agents follow
the surface gradient along the formed structure surface to the formation
frontier, where they eventually reach the closest beacon and settle to continue
the structure formation following a local bidding process. Agent behavior is
governed by a finite state machine implementation, along with potential
field-based motion control laws. We also discuss appropriate rules for
recovering from stagnation points. Simulation experiments are presented to show
planar and 3D structure formations with continuous and discontinuous
boundary/surfaces, which validate the proposed strategy, followed by a
scalability analysis.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 02:40:29 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jul 2023 02:27:27 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Mina",
"Tamzidul",
""
],
[
"Jo",
"Wonse",
""
],
[
"Kannan",
"Shyam S.",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.996237 |
2303.03566
|
Jialin Lin
|
Jialin Lin (1), Xiaoqing Guo (1), Wen Fan (1), Wei Li (2), Yuanyi Wang
(3), Jiaming Liang (3), Weiru Liu (1), Lei Wei (3), Dandan Zhang (1) ((1)
Engineering Mathematics, University of Bristol, affiliated with the Bristol
Robotics Lab, United Kingdom.(2) the Hamlyn Centre for Robotic Surgery,
Imperial College London, United Kingdom.(3) Tencent Robotics X)
|
TIMS: A Tactile Internet-Based Micromanipulation System with Haptic
Guidance for Surgical Training
|
8 pages, 7 figures. For more details of this project, please view our
website: https://sites.google.com/view/viewtims/home
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Microsurgery involves the dexterous manipulation of delicate tissue or
fragile structures such as small blood vessels, nerves, etc., under a
microscope. To address the limitation of imprecise manipulation of human hands,
robotic systems have been developed to assist surgeons in performing complex
microsurgical tasks with greater precision and safety. However, the steep
learning curve for robot-assisted microsurgery (RAMS) and the shortage of
well-trained surgeons pose significant challenges to the widespread adoption of
RAMS. Therefore, the development of a versatile training system for RAMS is
necessary, which can bring tangible benefits to both surgeons and patients.
In this paper, we present a Tactile Internet-Based Micromanipulation System
(TIMS) based on a ROS-Django web-based architecture for microsurgical training.
This system can provide tactile feedback to operators via a wearable tactile
display (WTD), while real-time data is transmitted through the internet via a
ROS-Django framework. In addition, TIMS integrates haptic guidance to `guide'
the trainees to follow a desired trajectory provided by expert surgeons.
Learning from demonstration based on Gaussian Process Regression (GPR) was used
to generate the desired trajectory. User studies were also conducted to verify
the effectiveness of our proposed TIMS, comparing users' performance with and
without tactile feedback and/or haptic guidance.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 00:26:19 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jul 2023 14:57:25 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Lin",
"Jialin",
""
],
[
"Guo",
"Xiaoqing",
""
],
[
"Fan",
"Wen",
""
],
[
"Li",
"Wei",
""
],
[
"Wang",
"Yuanyi",
""
],
[
"Liang",
"Jiaming",
""
],
[
"Liu",
"Weiru",
""
],
[
"Wei",
"Lei",
""
],
[
"Zhang",
"Dandan",
""
]
] |
new_dataset
| 0.994456 |
2303.05162
|
Kirill Ivanov
|
Kirill Ivanov, Gonzalo Ferrer, Anastasiia Kornilova
|
EVOLIN Benchmark: Evaluation of Line Detection and Association
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lines are interesting geometrical features commonly seen in indoor and urban
environments. A complete benchmark in which one can evaluate
lines from a sequential stream of images at all stages (line detection,
line association, and pose error) is still missing. To fill this gap, we present a complete and exhaustive
benchmark for visual lines in a SLAM front-end, both for RGB and RGBD, by
providing a plethora of complementary metrics. We have also labelled data from
well-known SLAM datasets in order to have poses and accurately
annotated lines all in one place. In particular, we have evaluated 17 line detection algorithms,
5 line association methods and the resultant pose error for aligning a pair of
frames with several combinations of detector-association. We have packaged all
methods and evaluations metrics and made them publicly available on web-page
https://prime-slam.github.io/evolin/.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 10:39:43 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 11:36:22 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Ivanov",
"Kirill",
""
],
[
"Ferrer",
"Gonzalo",
""
],
[
"Kornilova",
"Anastasiia",
""
]
] |
new_dataset
| 0.998688 |
2304.03323
|
Amit Kumar Singh Yadav
|
Amit Kumar Singh Yadav, Kratika Bhagtani, Ziyue Xiang, Paolo
Bestagini, Stefano Tubaro, Edward J. Delp
|
DSVAE: Interpretable Disentangled Representation for Synthetic Speech
Detection
| null | null | null | null |
cs.SD cs.CV cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tools to generate high-quality synthetic speech signals that are perceptually
indistinguishable from speech recorded from human speakers are easily
available. Several approaches have been proposed for detecting synthetic
speech. Many of these approaches use deep learning methods as a black box
without providing reasoning for the decisions they make. This limits the
interpretability of these approaches. In this paper, we propose Disentangled
Spectrogram Variational Auto Encoder (DSVAE), a variational autoencoder trained
in two stages that processes spectrograms of speech using
disentangled representation learning to generate interpretable representations
of a speech signal for detecting synthetic speech. DSVAE also creates an
activation map to highlight the spectrogram regions that discriminate synthetic
and bona fide human speech signals. We evaluated the representations obtained
from DSVAE using the ASVspoof2019 dataset. Our experimental results show high
accuracy (>98%) on detecting synthetic speech from 6 known and 10 out of 11
unknown speech synthesizers. We also visualize the representation obtained from
DSVAE for 17 different speech synthesizers and verify that they are indeed
interpretable and discriminate bona fide and synthetic speech from each of the
synthesizers.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 18:37:26 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 20:38:31 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Yadav",
"Amit Kumar Singh",
""
],
[
"Bhagtani",
"Kratika",
""
],
[
"Xiang",
"Ziyue",
""
],
[
"Bestagini",
"Paolo",
""
],
[
"Tubaro",
"Stefano",
""
],
[
"Delp",
"Edward J.",
""
]
] |
new_dataset
| 0.978049 |
2305.04411
|
Samuel Armstrong
|
Samuel E. Armstrong (1), Aaron D. Mullen (1), V. K. Cody Bumgardner
(1) ((1) University of Kentucky)
|
SmartState: A Protocol-driven Human Interface
|
8 pages, 8 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Since the inception of human research studies, researchers have often had to
interact with participants on a set schedule to collect data. Researchers
manually perform many interactions, leading to considerable time and financial
expenses. Usually, user-provided data collection consists of surveys
administered via telephone or email. These methods are tedious for the survey
administrators, which could cause fatigue and potentially lead to collection
mistakes. This project leverages recent advancements in automatic speech
recognition, speech-to-text, natural language understanding (NLU), and
finite-state machines to automate research protocols. This generalized
application is fully customizable and irrespective of any research study. New
research protocols can be quickly created based on these parameters once
envisioned. Thus, we present SmartState, a fully-customizable, state-driven
protocol manager combined with supporting AI components to autonomously manage
user data and intelligently determine users' intentions through chat and
end-device interactions.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 01:38:26 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 14:28:36 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Jul 2023 16:25:02 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Armstrong",
"Samuel E.",
"",
"University of Kentucky"
],
[
"Mullen",
"Aaron D.",
"",
"University of Kentucky"
],
[
"Bumgardner",
"V. K. Cody",
"",
"University of Kentucky"
]
] |
new_dataset
| 0.99899 |
2305.07805
|
Krithika Iyer
|
Krithika Iyer, Shireen Elhabian
|
Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Statistical shape modeling is the computational process of discovering
significant shape parameters from segmented anatomies captured by medical
images (such as MRI and CT scans), which can fully describe subject-specific
anatomy in the context of a population. The presence of substantial non-linear
variability in human anatomy often makes the traditional shape modeling process
challenging. Deep learning techniques can learn complex non-linear
representations of shapes and generate statistical shape models that are more
faithful to the underlying population-level variability. However, existing deep
learning models still have limitations and require established/optimized shape
models for training. We propose Mesh2SSM, a new approach that leverages
unsupervised, permutation-invariant representation learning to estimate how to
deform a template point cloud to subject-specific meshes, forming a
correspondence-based shape model. Mesh2SSM can also learn a population-specific
template, reducing any bias due to template selection. The proposed method
operates directly on meshes and is computationally efficient, making it an
attractive alternative to traditional and deep learning-based SSM approaches.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 00:03:59 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jul 2023 06:10:16 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Iyer",
"Krithika",
""
],
[
"Elhabian",
"Shireen",
""
]
] |
new_dataset
| 0.997433 |
2305.11461
|
IokTong Lei
|
Ioktong Lei and Zhidong Deng
|
SelfzCoT: a Self-Prompt Zero-shot CoT from Semantic-level to Code-level
for a Better Utilization of LLMs
|
preprint, under review
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents SelfzCoT, a self-prompt
zero-shot CoT, for better utilization of LLMs. Specifically, on the zero-shot arithmetic reasoning tasks, the
accuracy of the proposed SelfzCoT is improved with GSM8K from 40.50% to 82.34%,
with MultiArith from 79.3% to 94.7%, with ADDSUB from 74.70% to 94.10%, with
SingleEq from 78.70% to 91.30%, with AQUA from 31.90% to 82.33%, and with SVAMP
from 63.70% to 79.70%. Overall, using the first two lasting path activations to
the LLM, and particularly the code-level self-prompt, SelfzCoT achieves a large
improvement on all six zero-shot arithmetic reasoning tasks. Additionally, our
modified zero-shot CoT (MzCoT) also achieves remarkable performance in the
reasoning tasks. The accuracy of the proposed MzCoT is enhanced with GSM8K from
40.50% to 76.32%, with MultiArith from 79.3% to 96.97%, with ADDSUB from 74.70%
to 92.39%, with SingleEq from 78.70% to 94.60%, with AQUA from 31.90% to
79.90%, and with SVAMP from 63.70% to 81.50%. Notably, SelfzCoT has the best
performance on GSM8K among all the recent zero-shot methods.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 06:30:17 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 06:18:16 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Jul 2023 05:46:46 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Lei",
"Ioktong",
""
],
[
"Deng",
"Zhidong",
""
]
] |
new_dataset
| 0.992133 |
2305.14758
|
Tianlun Zheng
|
Tianlun Zheng, Zhineng Chen, BingChen Huang, Wei Zhang and Yu-Gang
Jiang
|
MRN: Multiplexed Routing Network for Incremental Multilingual Text
Recognition
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multilingual text recognition (MLTR) systems typically focus on a fixed set
of languages, which makes it difficult to handle newly added languages or adapt
to ever-changing data distribution. In this paper, we propose the Incremental
MLTR (IMLTR) task in the context of incremental learning (IL), where different
languages are introduced in batches. IMLTR is particularly challenging due to
rehearsal-imbalance, which refers to the uneven distribution of sample
characters in the rehearsal set, used to retain a small amount of old data as
past memories. To address this issue, we propose a Multiplexed Routing Network
(MRN). MRN trains a recognizer for each language that is currently seen.
Subsequently, a language domain predictor is learned based on the rehearsal set
to weigh the recognizers. Since the recognizers are derived from the original
data, MRN effectively reduces the reliance on older data and better fights
against catastrophic forgetting, the core issue in IL. We extensively evaluate
MRN on MLT17 and MLT19 datasets. It outperforms existing general-purpose IL
methods by large margins, with average accuracy improvements ranging from 10.3%
to 35.8% under different settings. Code is available at
https://github.com/simplify23/MRN.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 06:03:34 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Jul 2023 16:25:37 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Jul 2023 07:40:29 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zheng",
"Tianlun",
""
],
[
"Chen",
"Zhineng",
""
],
[
"Huang",
"BingChen",
""
],
[
"Zhang",
"Wei",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
new_dataset
| 0.986642 |
2306.03686
|
Jiang Yuncheng
|
Yuncheng Jiang, Zixun Zhang, Ruimao Zhang, Guanbin Li, Shuguang Cui,
Zhen Li
|
YONA: You Only Need One Adjacent Reference-frame for Accurate and Fast
Video Polyp Detection
|
11 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate polyp detection is essential for assisting clinical rectal cancer
diagnoses. Colonoscopy videos contain richer information than still images,
making them a valuable resource for deep learning methods. Great efforts have
been made to conduct video polyp detection through multi-frame temporal/spatial
aggregation. However, unlike common fixed-camera video, the camera-moving scene
in colonoscopy videos can cause rapid video jitters, leading to unstable
training for existing video detection models. Additionally, the concealed
nature of some polyps and the complex background environment further hinder the
performance of existing video detectors. In this paper, we propose the
\textbf{YONA} (\textbf{Y}ou \textbf{O}nly \textbf{N}eed one \textbf{A}djacent
Reference-frame) method, an efficient end-to-end training framework for video
polyp detection. YONA fully exploits the information of one previous adjacent
frame and conducts polyp detection on the current frame without multi-frame
collaborations. Specifically, for the foreground, YONA adaptively aligns the
current frame's channel activation patterns with its adjacent reference frames
according to their foreground similarity. For the background, YONA conducts
background dynamic alignment guided by inter-frame difference to eliminate the
invalid features produced by drastic spatial jitters. Moreover, YONA applies
cross-frame contrastive learning during training, leveraging the ground truth
bounding box to improve the model's perception of polyp and background.
Quantitative and qualitative experiments on three public challenging benchmarks
demonstrate that our proposed YONA outperforms previous state-of-the-art
competitors by a large margin in both accuracy and speed.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 13:53:15 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jul 2023 14:14:38 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Jiang",
"Yuncheng",
""
],
[
"Zhang",
"Zixun",
""
],
[
"Zhang",
"Ruimao",
""
],
[
"Li",
"Guanbin",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Li",
"Zhen",
""
]
] |
new_dataset
| 0.984169 |
2306.10286
|
Xiao-Feng Zhang
|
Qihan Zhao, Xiaofeng Zhang, Hao Tang, Chaochen Gu, Shanying Zhu
|
Enlighten Anything: When Segment Anything Model Meets Low-Light Image
Enhancement
|
it will be revised
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image restoration is a low-level visual task, and most CNN methods are
designed as black boxes, lacking transparency and intrinsic aesthetics. Many
unsupervised approaches ignore the degradation of visible information in
low-light scenes, which will seriously affect the aggregation of complementary
information and also make the fusion algorithm unable to produce satisfactory
fusion results under extreme conditions. In this paper, we propose
Enlighten-anything, which is able to enhance and fuse the semantic intent of
SAM segmentation with low-light images to obtain fused images with good visual
perception. The generalization ability of unsupervised learning is greatly
improved, and experiments on the LOL dataset are conducted to show that our method
improves over the baseline by 3 dB in PSNR and by 8 in SSIM. Zero-shot learning of SAM
introduces a powerful aid for unsupervised low-light enhancement. The source
code of Enlighten Anything can be obtained from
https://github.com/zhangbaijin/enlighten-anything
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 07:58:44 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 03:09:34 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jun 2023 03:20:02 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Jul 2023 07:38:06 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhao",
"Qihan",
""
],
[
"Zhang",
"Xiaofeng",
""
],
[
"Tang",
"Hao",
""
],
[
"Gu",
"Chaochen",
""
],
[
"Zhu",
"Shanying",
""
]
] |
new_dataset
| 0.998864 |
2306.10561
|
Pengcheng Shi
|
Yongjun Zhang, Pengcheng Shi, Jiayuan Li
|
LiDAR-Based Place Recognition For Autonomous Driving: A Survey
|
26 pages,13 figures, 5 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR-based place recognition (LPR) plays a pivotal role in autonomous
driving, which assists Simultaneous Localization and Mapping (SLAM) systems in
reducing accumulated errors and achieving reliable localization. However,
existing reviews predominantly concentrate on visual place recognition (VPR)
methods. Despite the recent remarkable progress in LPR, to the best of our
knowledge, there is no dedicated systematic review in this area. This paper
bridges the gap by providing a comprehensive review of place recognition
methods employing LiDAR sensors, thus facilitating and encouraging further
research. We commence by delving into the problem formulation of place
recognition, exploring existing challenges, and describing relations to
previous surveys. Subsequently, we conduct an in-depth review of related
research, which offers detailed classifications, strengths and weaknesses, and
architectures. Finally, we summarize existing datasets, commonly used
evaluation metrics, and comprehensive evaluation results from various methods
on public datasets. This paper can serve as a valuable tutorial for newcomers
entering the field of place recognition and for researchers interested in
long-term robot localization. We pledge to maintain an up-to-date project on
our website https://github.com/ShiPC-AI/LPR-Survey.
|
[
{
"version": "v1",
"created": "Sun, 18 Jun 2023 13:51:40 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jul 2023 12:36:36 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhang",
"Yongjun",
""
],
[
"Shi",
"Pengcheng",
""
],
[
"Li",
"Jiayuan",
""
]
] |
new_dataset
| 0.995255 |
2306.15464
|
Triantafyllos Kefalas
|
Triantafyllos Kefalas, Yannis Panagakis, Maja Pantic
|
Large-scale unsupervised audio pre-training for video-to-speech
synthesis
|
Corrected typos. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible
| null | null | null |
cs.SD cs.CV cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-to-speech synthesis is the task of reconstructing the speech signal
from a silent video of a speaker. Most established approaches to date involve a
two-step process, whereby an intermediate representation from the video, such
as a spectrogram, is extracted first and then passed to a vocoder to produce
the raw audio. Some recent work has focused on end-to-end synthesis, whereby
the generation of raw audio and any intermediate representations is performed
jointly. All such approaches involve training on data from almost exclusively
audio-visual datasets, i.e. every audio sample has a corresponding video
sample. This precludes the use of abundant audio-only datasets which may not
have a corresponding visual modality (e.g. audiobooks, radio podcasts, speech
recognition datasets etc.), as well as audio-only architectures that have been
developed by the audio machine learning community over the years. In this paper
we propose to train encoder-decoder models on more than 3,500 hours of audio
data at 24kHz, and then use the pre-trained decoders to initialize the audio
decoders for the video-to-speech synthesis task. The pre-training step uses
audio samples only and does not require labels or corresponding samples from
other modalities (visual, text). We demonstrate that this pre-training step
improves the reconstructed speech and that it is an unexplored way to improve
the quality of the generator in a cross-modal task while only requiring samples
from one of the modalities. We conduct experiments using both raw audio and mel
spectrograms as target outputs and benchmark our models with existing work.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 13:31:33 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 12:09:18 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Kefalas",
"Triantafyllos",
""
],
[
"Panagakis",
"Yannis",
""
],
[
"Pantic",
"Maja",
""
]
] |
new_dataset
| 0.99757 |
2307.03558
|
Kangjin Kim
|
Seungwan Woo and Jeongseok Kim and Kangjin Kim
|
We, Vertiport 6, are temporarily closed: Interactional Ontological
Methods for Changing the Destination
|
8 pages, 1 figure, submitted to IEEE RO-MAN (RO-MAN 2023) Workshop on
Ontologies for Autonomous Robotics (RobOntics)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a continuation of the previous research on the
interaction between a human traffic manager and the UATMS. In particular, we
focus on the automation of the process of handling a vertiport outage, which
was partially covered in the previous work. Once the manager reports that a
vertiport is out of service, which means landings for all corresponding agents
are prohibited, the air traffic system automates what it has to handle for this
event. The entire process is simulated through knowledge representation and
reasoning. Moreover, two distinct perspectives are respected for the human
supervisor and the management system, and the related ontologies and rules
address their interactions. We believe that applying non-monotonic reasoning
can verify each step of the process and explain how the system works. After a
short introduction with related works, this paper continues with problem
formulation, primary solution, discussion, and conclusions.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 12:47:47 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Woo",
"Seungwan",
""
],
[
"Kim",
"Jeongseok",
""
],
[
"Kim",
"Kangjin",
""
]
] |
new_dataset
| 0.986479 |
2307.03864
|
Tianwei Ni
|
Tianwei Ni, Michel Ma, Benjamin Eysenbach, Pierre-Luc Bacon
|
When Do Transformers Shine in RL? Decoupling Memory from Credit
Assignment
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Reinforcement learning (RL) algorithms face two distinct challenges: learning
effective representations of past and present observations, and determining how
actions influence future returns. Both challenges involve modeling long-term
dependencies. The Transformer architecture has been very successful at solving
problems that involve long-term dependencies, including in the RL domain.
However, the underlying reason for the strong performance of Transformer-based
RL methods remains unclear: is it because they learn effective memory, or
because they perform effective credit assignment? After introducing formal
definitions of memory length and credit assignment length, we design simple
configurable tasks to measure these distinct quantities. Our empirical results
reveal that Transformers can enhance the memory capacity of RL algorithms,
scaling up to tasks that require remembering observations from $1500$ steps earlier.
However, Transformers do not improve long-term credit assignment. In summary,
our results provide an explanation for the success of Transformers in RL, while
also highlighting an important area for future research and benchmark design.
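The abstract's notions of memory length and credit assignment length can be illustrated with a toy configurable task: a cue shown only at reset must be recalled at the final step (memory length ≈ horizon), and the resulting reward can be delivered with a delay (credit assignment length ≈ delay). The sketch below is only illustrative and is not the paper's actual benchmark.

```python
import random

class CueRecallEnv:
    # Cue observed at reset must be recalled after `horizon` steps; the reward
    # for that decisive action arrives `reward_delay` steps later.
    def __init__(self, horizon=10, reward_delay=0):
        self.horizon, self.reward_delay = horizon, reward_delay

    def reset(self):
        self.t, self.cue = 0, random.choice([0, 1])
        self.pending = []                    # (due_step, reward) pairs
        return self.cue                      # cue is visible only now

    def step(self, action):
        self.t += 1
        if self.t == self.horizon:           # decisive step: recall the cue
            reward = 1.0 if action == self.cue else 0.0
            self.pending.append((self.t + self.reward_delay, reward))
        due = sum(r for (s, r) in self.pending if s <= self.t)
        self.pending = [(s, r) for (s, r) in self.pending if s > self.t]
        done = self.t >= self.horizon + self.reward_delay
        return 0, due, done                  # observation is blank after reset

env = CueRecallEnv(horizon=5, reward_delay=3)
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(random.choice([0, 1]))
    total += reward
print("episode return:", total)
```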
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 23:34:12 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 03:25:18 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Ni",
"Tianwei",
""
],
[
"Ma",
"Michel",
""
],
[
"Eysenbach",
"Benjamin",
""
],
[
"Bacon",
"Pierre-Luc",
""
]
] |
new_dataset
| 0.963905 |
2307.06113
|
Kasper Green Larsen
|
Noga Alon, Allan Gr{\o}nlund, S{\o}ren Fuglede J{\o}rgensen, Kasper
Green Larsen
|
Sublinear Time Shortest Path in Expander Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computing a shortest path between two nodes in an undirected unweighted graph
is among the most basic algorithmic tasks. Breadth first search solves this
problem in linear time, which is clearly also a lower bound in the worst case.
However, several works have shown how to solve this problem in sublinear time
in expectation when the input graph is drawn from one of several classes of
random graphs. In this work, we extend these results by giving sublinear time
shortest path (and short path) algorithms for expander graphs. We thus identify
a natural deterministic property of a graph (that is satisfied by typical
random regular graphs) which suffices for sublinear time shortest paths. The
algorithms are very simple, involving only bidirectional breadth first search
and short random walks. We also complement our new algorithms by near-matching
lower bounds.
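For intuition, here is a minimal sketch of the bidirectional breadth-first search component: layers are expanded alternately from both endpoints (always growing the smaller frontier) until the two searches meet, so on an expander only a small fraction of the graph is visited. The random-walk component and the sublinear-time analysis are omitted.

```python
from collections import deque

def bidirectional_bfs_distance(adj, s, t):
    # adj: dict mapping node -> list of neighbours (undirected, unweighted).
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    q_s, q_t = deque([s]), deque([t])
    while q_s and q_t:
        # Expand the smaller frontier by one full BFS layer.
        if len(q_s) <= len(q_t):
            q, dist, other = q_s, dist_s, dist_t
        else:
            q, dist, other = q_t, dist_t, dist_s
        for _ in range(len(q)):
            u = q.popleft()
            for v in adj[u]:
                if v in other:                      # the two searches meet
                    return dist[u] + 1 + other[v]
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return None                                     # s and t are disconnected

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bidirectional_bfs_distance(adj, 0, 4))        # 3
```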
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 12:13:33 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 06:05:58 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Alon",
"Noga",
""
],
[
"Grønlund",
"Allan",
""
],
[
"Jørgensen",
"Søren Fuglede",
""
],
[
"Larsen",
"Kasper Green",
""
]
] |
new_dataset
| 0.985106 |
2307.06647
|
Oskar Natan
|
Oskar Natan, Jun Miura
|
DeepIPCv2: LiDAR-powered Robust Environmental Perception and
Navigational Control for Autonomous Vehicle
| null | null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present DeepIPCv2, an autonomous driving model that perceives the
environment using a LiDAR sensor for more robust drivability, especially when
driving under poor illumination conditions where everything is not clearly
visible. DeepIPCv2 takes a set of LiDAR point clouds as the main perception
input. Since point clouds are not affected by illumination changes, they can
provide a clear observation of the surroundings regardless of the conditions.
This results in better scene understanding and more stable features provided
by the perception module to support the controller module in estimating
navigational control properly. To evaluate its performance, we conduct several
tests by deploying the model to predict a set of driving records and perform
real automated driving under three different conditions. We also conduct
ablation and comparative studies with some recent models to justify its
performance. Based on the experimental results, DeepIPCv2 shows a robust
performance by achieving the best drivability in all driving scenarios.
Furthermore, we will release the code at
https://github.com/oskarnatan/DeepIPCv2.
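As background on why LiDAR input is illumination-invariant, the sketch below shows one common pre-processing step for driving models of this kind: projecting a point cloud into a bird's-eye-view occupancy grid. This is a generic illustration with made-up ranges and resolutions, not DeepIPCv2's actual perception module.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    # points: (N, 3) array with x forward, y left, z up, in metres.
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((nx, ny), dtype=np.float32)
    xi = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    yi = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (xi >= 0) & (xi < nx) & (yi >= 0) & (yi < ny)
    bev[xi[keep], yi[keep]] = 1.0                  # mark occupied cells
    return bev

cloud = np.random.uniform([-5, -25, -2], [45, 25, 2], size=(1000, 3))
print(points_to_bev(cloud).shape)                  # (80, 80)
```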
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 09:23:21 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 02:54:17 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Natan",
"Oskar",
""
],
[
"Miura",
"Jun",
""
]
] |
new_dataset
| 0.998986 |
2307.13397
|
Miguel Costa
|
Miguel Costa, Manuel Marques, Felix Wilhelm Siebert, Carlos Lima
Azevedo, Filipe Moura
|
Scoring Cycling Environments Perceived Safety using Pairwise Image
Comparisons
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Today, many cities seek to transition to more sustainable transportation
systems. Cycling is critical in this transition for shorter trips, including
first-and-last-mile links to transit. Yet, if individuals perceive cycling as
unsafe, they will not cycle and will choose other transportation modes. This
study presents a novel approach to analyzing and understanding the perception
of cycling safety and the impact of the built environment and cycling contexts
on such perceptions. We base our work on other perception
studies and pairwise comparisons, using real-world images to survey
respondents. We repeatedly show respondents two road environments and ask them
to select the one they perceive as safer for cycling. We compare several
methods capable of rating cycling environments from pairwise comparisons and
classify cycling environments perceived as safe or unsafe. Urban planners can
use this score to improve the effectiveness of interventions and cycling
promotion campaigns. Furthermore, this approach facilitates the continuous
assessment of changing cycling environments, allows for a short-term evaluation
of measures, and can be efficiently deployed in different locations or contexts.
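One standard way to turn pairwise "which environment looks safer?" judgments into a per-image score is a Bradley-Terry-style fit; the sketch below shows the idea on synthetic comparisons. The paper compares several rating methods, so this particular formulation is only an illustrative assumption.

```python
import numpy as np

def bradley_terry(n_items, comparisons, iters=200):
    # comparisons: list of (winner, loser) index pairs from pairwise judgments.
    wins = np.zeros((n_items, n_items))
    for w, l in comparisons:
        wins[w, l] += 1.0
    p = np.ones(n_items)                        # latent "safety" scores
    for _ in range(iters):                      # Zermelo / MM updates
        for i in range(n_items):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            if denom > 0:
                p[i] = total_wins / denom
        p = np.maximum(p, 1e-12)
        p /= p.sum()                            # fix the arbitrary scale
    return p

# Each pair means "left image judged safer for cycling than right image".
comparisons = [(0, 1), (1, 2), (2, 0), (0, 1), (0, 2), (1, 2)]
print(bradley_terry(3, comparisons))            # higher = perceived safer
```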
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 10:31:45 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 13:50:20 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Costa",
"Miguel",
""
],
[
"Marques",
"Manuel",
""
],
[
"Siebert",
"Felix Wilhelm",
""
],
[
"Azevedo",
"Carlos Lima",
""
],
[
"Moura",
"Filipe",
""
]
] |
new_dataset
| 0.998967 |
2307.14074
|
Wenxue Li
|
Wenxue Li (1), Junyi Zhang (2), Gaoxiong Zeng (2), Yufei Liu (2),
Zilong Wang (1), Chaoliang Zeng (1), Pengpeng Zhou (2), Qiaoling Wang (2),
Kai Chen (1) ((1) Hong Kong University of Science and Technology, (2) Huawei
Technologies Co., Ltd.)
|
Gleam: An RDMA-accelerated Multicast Protocol for Datacenter Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RDMA has been widely adopted for high-speed datacenter networks. However,
native RDMA merely supports one-to-one reliable connection, which mismatches
various applications with group communication patterns (e.g., one-to-many).
While there are some multicast enhancements to address it, they all fail to
simultaneously achieve optimal multicast forwarding and fully unleash the
distinctive capabilities of RDMA.
In this paper, we present Gleam, an RDMA-accelerated multicast protocol that
simultaneously supports optimal multicast forwarding, efficient utilization of
the distinctive RDMA capabilities, and compatibility with commodity RNICs. At
its core, Gleam re-purposes the existing RDMA RC logic with careful switch
coordination as an efficient multicast transport. Gleam performs the
one-to-many connection maintenance and many-to-one feedback aggregation, based
on an extended multicast forwarding table structure, to achieve integration
between standard RC logic and in-fabric multicast. We implement a fully
functional Gleam prototype. With extensive testbed experiments and simulations,
we demonstrate Gleam's significant improvement in accelerating multicast
communication of realistic applications. For instance, Gleam achieves 2.9X
lower communication time for an HPC benchmark application and 2.7X higher data
replication throughput.
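The many-to-one feedback aggregation mentioned above can be illustrated generically: an aggregated ACK is released upstream only once every downstream receiver has acknowledged a given sequence number. The sketch below is a host-language illustration of that bookkeeping, not Gleam's actual switch implementation, and all names are hypothetical.

```python
class MulticastGroupState:
    # Per-group state kept alongside an (extended) multicast forwarding entry.
    def __init__(self, group_id, downstream_ports):
        self.group_id = group_id
        self.downstream = set(downstream_ports)
        self.acked = {}                        # seq -> set of ports that ACKed

    def on_downstream_ack(self, port, seq):
        self.acked.setdefault(seq, set()).add(port)
        if self.acked[seq] == self.downstream:
            del self.acked[seq]
            return seq                         # aggregated ACK to send upstream
        return None                            # still waiting for other receivers

group = MulticastGroupState("mgid-7", downstream_ports=[1, 2, 3])
print(group.on_downstream_ack(1, seq=10))      # None
print(group.on_downstream_ack(2, seq=10))      # None
print(group.on_downstream_ack(3, seq=10))      # 10 -> one ACK goes upstream
```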
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 09:54:47 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jul 2023 07:59:16 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Li",
"Wenxue",
""
],
[
"Zhang",
"Junyi",
""
],
[
"Zeng",
"Gaoxiong",
""
],
[
"Liu",
"Yufei",
""
],
[
"Wang",
"Zilong",
""
],
[
"Zeng",
"Chaoliang",
""
],
[
"Zhou",
"Pengpeng",
""
],
[
"Wang",
"Qiaoling",
""
],
[
"Chen",
"Kai",
""
]
] |
new_dataset
| 0.991157 |
2307.15042
|
Zihan Zhang
|
Zihan Zhang, Richard Liu, Kfir Aberman, Rana Hanocka
|
TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis
|
Project page: https://threedle.github.io/TEDi/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The gradual nature of a diffusion process that synthesizes samples in small
increments constitutes a key ingredient of Denoising Diffusion Probabilistic
Models (DDPM), which have achieved unprecedented quality in image synthesis
and have recently been explored in the motion domain. In this work, we propose to
adapt the gradual diffusion concept (operating along a diffusion time-axis)
into the temporal-axis of the motion sequence. Our key idea is to extend the
DDPM framework to support temporally varying denoising, thereby entangling the
two axes. Using our special formulation, we iteratively denoise a motion buffer
that contains a set of increasingly-noised poses, which auto-regressively
produces an arbitrarily long stream of frames. With a stationary diffusion
time-axis, each diffusion step increments only the temporal-axis of the motion,
so the framework produces a new, clean frame that is removed from the beginning
of the buffer, while a newly drawn noise vector is appended to its end. This
mechanism paves the way towards a new framework for
long-term motion synthesis with applications to character animation and other
domains.
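Schematically, the motion buffer works as follows: frames carry increasing noise levels along the temporal axis; each iteration denoises the whole buffer once, emits the now-clean first frame, shifts the buffer, and appends fresh noise. In the sketch below the denoiser is a placeholder, not the paper's trained network, and all shapes are arbitrary.

```python
import numpy as np

F, D, T = 8, 6, 8                              # buffer length, pose dim, steps

def denoise_step(buffer):
    # Placeholder for the learned denoiser: nudge every frame toward "clean";
    # a real model would predict and subtract the noise instead.
    return buffer * (1.0 - 1.0 / T)

noise_levels = np.linspace(0.0, 1.0, F)        # later frames are noisier
buffer = np.random.randn(F, D) * noise_levels[:, None]
stream = []
for _ in range(20):                            # arbitrarily long generation
    buffer = denoise_step(buffer)
    stream.append(buffer[0])                   # oldest frame is (nearly) clean
    buffer = np.vstack([buffer[1:], np.random.randn(1, D)])  # shift, add noise
print(len(stream), stream[0].shape)            # 20 frames of dimension D
```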
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 17:48:44 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jul 2023 05:26:37 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhang",
"Zihan",
""
],
[
"Liu",
"Richard",
""
],
[
"Aberman",
"Kfir",
""
],
[
"Hanocka",
"Rana",
""
]
] |
new_dataset
| 0.97209 |