id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.16372 | Tao Lv | Tao Lv, Hao Ye, Quan Yuan, Zhan Shi, Yibo Wang, Shuming Wang, Xun Cao | Aperture Diffraction for Compact Snapshot Spectral Imaging | accepted by International Conference on Computer Vision (ICCV) 2023 | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | We demonstrate a compact, cost-effective snapshot spectral imaging system
named Aperture Diffraction Imaging Spectrometer (ADIS), which consists only of
an imaging lens with an ultra-thin orthogonal aperture mask and a mosaic filter
sensor, requiring no additional physical footprint compared to common RGB
cameras. We then introduce a new optical design in which each point in the object
space is multiplexed to discrete encoding locations on the mosaic filter sensor
by diffraction-based spatial-spectral projection engineering generated from the
orthogonal mask. The orthogonal projection is uniformly accepted to obtain a
weakly calibration-dependent data form to enhance modulation robustness.
Meanwhile, the Cascade Shift-Shuffle Spectral Transformer (CSST) with strong
perception of the diffraction degeneration is designed to solve a
sparsity-constrained inverse problem, realizing the volume reconstruction from
2D measurements with a large amount of aliasing. We evaluate our system by
elaborating the imaging optical theory and the reconstruction algorithm, and by
demonstrating experimental imaging under a single exposure. Ultimately, we
achieve sub-super-pixel spatial resolution and high-spectral-resolution
imaging. The code will be available at: https://github.com/Krito-ex/CSST.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 16:48:46 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Lv",
"Tao",
""
],
[
"Ye",
"Hao",
""
],
[
"Yuan",
"Quan",
""
],
[
"Shi",
"Zhan",
""
],
[
"Wang",
"Yibo",
""
],
[
"Wang",
"Shuming",
""
],
[
"Cao",
"Xun",
""
]
]
| new_dataset | 0.998744 |
2309.16382 | Mingqi Yuan | Mingqi Yuan, Zequn Zhang, Yang Xu, Shihao Luo, Bo Li, Xin Jin, Wenjun
Zeng | RLLTE: Long-Term Evolution Project of Reinforcement Learning | 22 pages, 15 figures | null | null | null | cs.AI cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | We present RLLTE: a long-term evolution, extremely modular, and open-source
framework for reinforcement learning (RL) research and application. Beyond
delivering top-notch algorithm implementations, RLLTE also serves as a toolkit
for developing algorithms. More specifically, RLLTE decouples the RL algorithms
completely from the exploitation-exploration perspective, providing a large
number of components to accelerate algorithm development and evolution. In
particular, RLLTE is the first RL framework to build a complete and rich
ecosystem, which includes model training, evaluation, deployment, a benchmark
hub, and a large language model (LLM)-empowered copilot. RLLTE is expected to
set standards for RL engineering practice and to be highly stimulating for
industry and academia.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 12:30:37 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Yuan",
"Mingqi",
""
],
[
"Zhang",
"Zequn",
""
],
[
"Xu",
"Yang",
""
],
[
"Luo",
"Shihao",
""
],
[
"Li",
"Bo",
""
],
[
"Jin",
"Xin",
""
],
[
"Zeng",
"Wenjun",
""
]
]
| new_dataset | 0.998115 |
2309.16395 | Johannes Zirngibl | Benedikt Jaeger, Johannes Zirngibl, Marcel Kempf, Kevin Ploch, Georg
Carle | QUIC on the Highway: Evaluating Performance on High-rate Links | Presented at the 2023 IFIP Networking Conference (IFIP Networking) | null | 10.23919/IFIPNetworking57963.2023.10186365 | null | cs.NI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | QUIC is a new protocol standardized in 2021 designed to improve on the widely
used TCP / TLS stack. The main goal is to speed up web traffic via HTTP, but it
is also used in other areas like tunneling. Based on UDP, it offers features
like reliable in-order delivery, flow and congestion control, stream-based
multiplexing, and always-on encryption using TLS 1.3. Unlike TCP, QUIC
implements all these features in user space, only requiring kernel interaction
for UDP. While running in user space provides more flexibility, it benefits
less from kernel-level efficiency optimizations. Multiple implementations
exist, differing in programming language, architecture, and design choices.
This paper presents an extension to the QUIC Interop Runner, a framework for
testing interoperability of QUIC implementations. Our contribution enables
reproducible QUIC benchmarks on dedicated hardware. We provide baseline results
on 10G links, including multiple implementations, evaluate how OS features like
buffer sizes and NIC offloading impact QUIC performance, and show which data
rates can be achieved with QUIC compared to TCP. Our results show that QUIC
performance varies widely between client and server implementations from 90
Mbit/s to 4900 Mbit/s. We show that the OS generally sets the default buffer
size too small, which should be increased by at least an order of magnitude
based on our findings. Furthermore, QUIC benefits less from NIC offloading and
AES NI hardware acceleration while both features improve the goodput of TCP to
around 8000 Mbit/s. Our framework can be applied to evaluate the effects of
future improvements to the protocol or the OS.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 12:42:26 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Jaeger",
"Benedikt",
""
],
[
"Zirngibl",
"Johannes",
""
],
[
"Kempf",
"Marcel",
""
],
[
"Ploch",
"Kevin",
""
],
[
"Carle",
"Georg",
""
]
]
| new_dataset | 0.99828 |
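The preceding record reports that operating systems generally set the default socket buffers too small for high-rate QUIC and that they should be increased by at least an order of magnitude. Below is a minimal, hedged sketch of how a user-space UDP endpoint can request larger kernel buffers through the standard socket API; the 16 MiB value is an illustrative assumption, not a recommendation taken from the paper.

```python
import socket

def make_udp_socket(rcvbuf_bytes: int = 16 * 1024 * 1024,
                    sndbuf_bytes: int = 16 * 1024 * 1024) -> socket.socket:
    """Create a UDP socket with enlarged kernel buffers.

    User-space QUIC stacks only touch the kernel through UDP, so the
    SO_RCVBUF / SO_SNDBUF limits directly bound how much data can be
    queued between the NIC and the application.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf_bytes)
    # The kernel may clamp the request (e.g. to net.core.rmem_max on Linux),
    # so read the value back to see what was actually granted.
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"requested {rcvbuf_bytes} B receive buffer, granted {granted} B")
    return sock

if __name__ == "__main__":
    make_udp_socket()
```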
2309.16422 | Panos Kostakos Dr | Mehrdad Kaheh, Danial Khosh Kholgh and Panos Kostakos | Cyber Sentinel: Exploring Conversational Agents in Streamlining Security
Tasks with GPT-4 | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | In an era where cyberspace is both a battleground and a backbone of modern
society, the urgency of safeguarding digital assets against ever-evolving
threats is paramount. This paper introduces Cyber Sentinel, an innovative
task-oriented cybersecurity dialogue system that is effectively capable of
managing two core functions: explaining potential cyber threats within an
organization to the user, and taking proactive/reactive security actions when
instructed by the user. Cyber Sentinel embodies the fusion of artificial
intelligence, cybersecurity domain expertise, and real-time data analysis to
combat the multifaceted challenges posed by cyber adversaries. This article
delves into the process of creating such a system and how it can interact with
other components typically found in cybersecurity organizations. Our work is a
novel approach to task-oriented dialogue systems, leveraging the power of
chaining GPT-4 models combined with prompt engineering across all sub-tasks. We
also highlight its pivotal role in enhancing cybersecurity communication and
interaction, concluding that not only does this framework enhance the system's
transparency (Explainable AI) but also streamlines the decision-making process
and the response to threats (Actionable AI), thereby marking a significant
advancement in the realm of cybersecurity communication.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 13:18:33 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Kaheh",
"Mehrdad",
""
],
[
"Kholgh",
"Danial Khosh",
""
],
[
"Kostakos",
"Panos",
""
]
]
| new_dataset | 0.990051 |
2309.16426 | Xinyu Chen | Xinyu Chen, Jian Yang, Zonghan He, Haobin Yang, Qi Zhao, Yuhui Shi | QwenGrasp: A Usage of Large Vision Language Model for Target-oriented
Grasping | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability for robotic systems to understand human language and execute
grasping actions is a pivotal challenge in the field of robotics. In
target-oriented grasping, prior research matches human textual
commands with images of target objects. However, these works struggle to
understand complex or flexible instructions. Moreover, they lack the
capability to autonomously assess the feasibility of instructions, leading them
to blindly execute grasping tasks even when there is no target object. In this paper,
we introduce a combination model called QwenGrasp, which combines a large
vision language model with a 6-DoF grasp network. By leveraging a pre-trained
large vision language model, our approach is capable of working in open-world
with natural human language environments, accepting complex and flexible
instructions. Furthermore, the specialized grasp network ensures the
effectiveness of the generated grasp pose. A series of experiments conducted in
a real-world environment show that our method exhibits a superior ability to
comprehend human intent. Additionally, when accepting erroneous instructions,
our approach has the capability to suspend task execution and provide feedback
to humans, improving safety.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 13:23:23 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Chen",
"Xinyu",
""
],
[
"Yang",
"Jian",
""
],
[
"He",
"Zonghan",
""
],
[
"Yang",
"Haobin",
""
],
[
"Zhao",
"Qi",
""
],
[
"Shi",
"Yuhui",
""
]
]
| new_dataset | 0.999561 |
2309.16445 | Akmaral Moldagalieva | Akmaral Moldagalieva, Joaquim Ortiz-Haro, Marc Toussaint, Wolfgang
H\"onig | db-CBS: Discontinuity-Bounded Conflict-Based Search for Multi-Robot
Kinodynamic Motion Planning | submitted to ICRA 2024 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a multi-robot kinodynamic motion planner that enables a
team of robots with different dynamics, actuation limits, and shapes to reach
their goals in challenging environments. We solve this problem by combining
Conflict-Based Search (CBS), a multi-agent path finding method, and
discontinuity-bounded A*, a single-robot kinodynamic motion planner. Our
method, db-CBS, operates in three levels. Initially, we compute trajectories
for individual robots using a graph search that allows bounded discontinuities
between precomputed motion primitives. The second level identifies inter-robot
collisions and resolves them by imposing constraints on the first level. The
third and final level uses the resulting solution with discontinuities as an
initial guess for a joint space trajectory optimization. The procedure is
repeated with a reduced discontinuity bound. Our approach is anytime,
probabilistically complete, asymptotically optimal, and finds near-optimal
solutions quickly. Experimental results with robot dynamics such as unicycle,
double integrator, and car with trailer in different settings show that our
method is capable of solving challenging tasks with a higher success rate and
lower cost than the existing state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 13:55:42 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Moldagalieva",
"Akmaral",
""
],
[
"Ortiz-Haro",
"Joaquim",
""
],
[
"Toussaint",
"Marc",
""
],
[
"Hönig",
"Wolfgang",
""
]
]
| new_dataset | 0.996525 |
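The db-CBS record above describes trajectories assembled from precomputed motion primitives whose endpoints may disagree by a bounded discontinuity, which is tightened in later iterations. The snippet below is only a small illustrative check of such a discontinuity bound on a chain of primitives; the 2D state layout and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def max_discontinuity(primitives: list[np.ndarray]) -> float:
    """Largest gap between the end of one primitive and the start of the next.

    Each primitive is a (T_i, 2) array of 2D states; a db-CBS-style planner
    accepts a chain only if every gap stays below the current bound delta.
    """
    gaps = [np.linalg.norm(b[0] - a[-1])
            for a, b in zip(primitives, primitives[1:])]
    return max(gaps, default=0.0)

def chain_is_valid(primitives: list[np.ndarray], delta: float) -> bool:
    return max_discontinuity(primitives) <= delta

if __name__ == "__main__":
    p1 = np.array([[0.0, 0.0], [1.0, 0.0]])
    p2 = np.array([[1.05, 0.0], [2.0, 0.0]])    # starts 0.05 away from p1's end
    print(chain_is_valid([p1, p2], delta=0.1))   # True
    print(chain_is_valid([p1, p2], delta=0.01))  # False
```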
2309.16457 | Hui Zheng | Hui Zheng, Zhongtao Chen, Haiteng Wang, Jianyang Zhou, Lin Zheng,
Yunzhe Liu | Universal Sleep Decoder: Aligning awake and sleep neural representation
across subjects | null | null | null | null | cs.LG eess.SP q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decoding memory content from brain activity during sleep has long been a goal
in neuroscience. While spontaneous reactivation of memories during sleep in
rodents is known to support memory consolidation and offline learning,
capturing memory replay in humans is challenging due to the absence of
well-annotated sleep datasets and the substantial differences in neural
patterns between wakefulness and sleep. To address these challenges, we
designed a novel cognitive neuroscience experiment and collected a
comprehensive, well-annotated electroencephalography (EEG) dataset from 52
subjects during both wakefulness and sleep. Leveraging this benchmark dataset,
we developed the Universal Sleep Decoder (USD) to align neural representations
between wakefulness and sleep across subjects. Our model achieves up to 16.6%
top-1 zero-shot accuracy on unseen subjects, comparable to decoding
performance using individual sleep data. Furthermore, fine-tuning USD on test
subjects enhances decoding accuracy to 25.9% top-1 accuracy, a substantial
improvement over the baseline chance of 6.7%. Model comparison and ablation
analyses reveal that our design choices, including the use of (i) an additional
contrastive objective to integrate awake and sleep neural signals and (ii) the
pretrain-finetune paradigm to incorporate different subjects, significantly
contribute to these performances. Collectively, our findings and methodologies
represent a significant advancement in the field of sleep decoding.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 14:06:34 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Zheng",
"Hui",
""
],
[
"Chen",
"Zhongtao",
""
],
[
"Wang",
"Haiteng",
""
],
[
"Zhou",
"Jianyang",
""
],
[
"Zheng",
"Lin",
""
],
[
"Liu",
"Yunzhe",
""
]
]
| new_dataset | 0.993984 |
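The ablations in the preceding record credit, among other design choices, a contrastive objective that aligns awake and sleep neural signals. Below is a generic InfoNCE-style alignment loss in NumPy, included only to illustrate that kind of objective; the batch size, embedding dimension, and temperature are assumptions, and this is not the authors' code.

```python
import numpy as np

def info_nce_loss(z_awake: np.ndarray, z_sleep: np.ndarray,
                  temperature: float = 0.1) -> float:
    """Symmetric InfoNCE loss for paired awake/sleep embeddings.

    z_awake, z_sleep: (N, D) arrays where row i of each matrix comes from the
    same stimulus; matching rows are positives, all other rows are negatives.
    """
    za = z_awake / np.linalg.norm(z_awake, axis=1, keepdims=True)
    zs = z_sleep / np.linalg.norm(z_sleep, axis=1, keepdims=True)
    logits = za @ zs.T / temperature                   # (N, N) similarity matrix
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_a2s = -np.mean(np.diag(log_probs))            # awake -> sleep direction
    log_probs_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_s2a = -np.mean(np.diag(log_probs_t))          # sleep -> awake direction
    return 0.5 * (loss_a2s + loss_s2a)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(8, 32))
    z2 = z1 + 0.01 * rng.normal(size=(8, 32))          # nearly aligned pairs
    print(round(info_nce_loss(z1, z2), 3))
```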
2309.16486 | Sining Chen | Sining Chen, Yilei Shi, Zhitong Xiong, Xiao Xiang Zhu | HTC-DC Net: Monocular Height Estimation from Single Remote Sensing
Images | 18 pages, 10 figures, submitted to IEEE Transactions on Geoscience
and Remote Sensing | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D geo-information is of great significance for understanding the living
environment; however, 3D perception from remote sensing data, especially on a
large scale, is restricted. To tackle this problem, we propose a method for
monocular height estimation from optical imagery, which is currently one of the
richest sources of remote sensing data. As an ill-posed problem, monocular
height estimation requires well-designed networks for enhanced representations
to improve performance. Moreover, the distribution of height values is
long-tailed with the low-height pixels, e.g., the background, as the head, and
thus trained networks are usually biased and tend to underestimate building
heights. To solve the problems, instead of formalizing the problem as a
regression task, we propose HTC-DC Net following the classification-regression
paradigm, with the head-tail cut (HTC) and the distribution-based constraints
(DCs) as the main contributions. HTC-DC Net is composed of the backbone network
as the feature extractor, the HTC-AdaBins module, and the hybrid regression
process. The HTC-AdaBins module serves as the classification phase to determine
bins adaptive to each input image. It is equipped with a vision transformer
encoder to incorporate local context with holistic information and involves an
HTC to address the long-tailed problem in monocular height estimation for
balancing the performances of foreground and background pixels. The hybrid
regression process does the regression via the smoothing of bins from the
classification phase, which is trained via DCs. The proposed network is tested
on three datasets of different resolutions, namely ISPRS Vaihingen (0.09 m),
DFC19 (1.3 m) and GBH (3 m). Experimental results show the superiority of the
proposed network over existing methods by large margins. Extensive ablation
studies demonstrate the effectiveness of each design component.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 14:50:32 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Chen",
"Sining",
""
],
[
"Shi",
"Yilei",
""
],
[
"Xiong",
"Zhitong",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
]
| new_dataset | 0.999234 |
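The HTC-DC Net record above follows a classification-regression paradigm: per-image height bins are predicted, and the final height is regressed by smoothing over those bins. The sketch below shows the generic AdaBins-style step of converting bin probabilities into a continuous value via a probability-weighted sum of bin centers; the array shapes, bin edges, and function name are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def heights_from_bins(bin_edges: np.ndarray, bin_probs: np.ndarray) -> np.ndarray:
    """Hybrid regression: expected height under a per-pixel bin distribution.

    bin_edges: (K + 1,) increasing height-bin boundaries in metres, e.g. adapted
               to the input image by a classification head.
    bin_probs: (H, W, K) per-pixel probabilities over the K bins (sum to 1).
    Returns an (H, W) map of continuous height estimates.
    """
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])   # (K,) bin midpoints
    return bin_probs @ centers                          # weighted sum per pixel

if __name__ == "__main__":
    edges = np.array([0.0, 2.0, 5.0, 10.0, 30.0])       # 4 bins
    probs = np.zeros((2, 2, 4))
    probs[..., 0] = 0.7                                  # mostly low background
    probs[..., 3] = 0.3                                  # some tall structure
    print(heights_from_bins(edges, probs))               # 0.7*1 + 0.3*20 = 6.7
```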
2309.16511 | Dmitry Ustalov | Dmitry Ustalov and Nikita Pavlichenko and Sergey Koshelev and Daniil
Likhobaba and Alisa Smirnova | Toloka Visual Question Answering Benchmark | 16 pages; see https://toloka.ai/challenges/wsdm2023/ for more details | null | null | null | cs.CV cs.AI cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present Toloka Visual Question Answering, a new
crowdsourced dataset that allows comparing the performance of machine learning
systems against human-level expertise in the grounding visual question answering
task. In this task, given an image and a textual question, one has to draw the
bounding box around the object correctly responding to that question. Every
image-question pair contains the response, with only one correct response per
image. Our dataset contains 45,199 pairs of images and questions in English,
provided with ground truth bounding boxes, split into train and two test
subsets. Besides describing the dataset and releasing it under a CC BY license,
we conducted a series of experiments on open source zero-shot baseline models
and organized a multi-phase competition at WSDM Cup that attracted 48
participants worldwide. However, by the time of paper submission, no machine
learning model outperformed the non-expert crowdsourcing baseline according to
the intersection over union evaluation score.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 15:18:35 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Ustalov",
"Dmitry",
""
],
[
"Pavlichenko",
"Nikita",
""
],
[
"Koshelev",
"Sergey",
""
],
[
"Likhobaba",
"Daniil",
""
],
[
"Smirnova",
"Alisa",
""
]
]
| new_dataset | 0.999787 |
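Submissions in the preceding record were ranked by the intersection-over-union score between predicted and ground-truth bounding boxes. The helper below is the standard IoU computation for axis-aligned boxes in (x_min, y_min, x_max, y_max) format, shown as a generic reference rather than the competition's exact scorer.

```python
def bbox_iou(box_a, box_b) -> float:
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

if __name__ == "__main__":
    print(bbox_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 = 0.333...
```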
2309.16520 | Wenqi Jiang | Wenqi Jiang, Martin Parvanov, Gustavo Alonso | SwiftSpatial: Spatial Joins on Modern Hardware | null | null | null | null | cs.DB cs.AR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Spatial joins are among the most time-consuming queries in spatial data
management systems. In this paper, we propose SwiftSpatial, a specialized
accelerator architecture tailored for spatial joins. SwiftSpatial contains
multiple high-performance join units with innovative hybrid parallelism,
several efficient memory management units, and an integrated on-chip join
scheduler. We prototype SwiftSpatial on an FPGA and incorporate the R-tree
synchronous traversal algorithm as the control flow. Benchmarked against
various CPU and GPU-based spatial data processing systems, SwiftSpatial
demonstrates a latency reduction of up to 5.36x relative to the best-performing
baseline, while requiring 6.16x less power. The remarkable performance and
energy efficiency of SwiftSpatial lay a solid foundation for its future
integration into spatial data management systems, both in data centers and at
the edge.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 15:26:36 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Jiang",
"Wenqi",
""
],
[
"Parvanov",
"Martin",
""
],
[
"Alonso",
"Gustavo",
""
]
]
| new_dataset | 0.993768 |
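As context for the spatial joins accelerated in the preceding record, here is the naive software baseline such hardware is designed to beat: pairing tuples from two tables whose bounding boxes intersect. It is a deliberately simple O(n·m) sketch, not SwiftSpatial's R-tree synchronous traversal.

```python
from typing import Iterable

Box = tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def boxes_intersect(a: Box, b: Box) -> bool:
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def nested_loop_spatial_join(r: Iterable[Box], s: Iterable[Box]) -> list[tuple[int, int]]:
    """Return index pairs (i, j) whose bounding boxes overlap.

    Every R tuple is compared with every S tuple; index structures such as
    R-trees exist precisely to prune most of these comparisons.
    """
    s = list(s)
    return [(i, j) for i, a in enumerate(r) for j, b in enumerate(s)
            if boxes_intersect(a, b)]

if __name__ == "__main__":
    r = [(0, 0, 2, 2), (10, 10, 12, 12)]
    s = [(1, 1, 3, 3), (20, 20, 21, 21)]
    print(nested_loop_spatial_join(r, s))  # [(0, 0)]
```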
2309.16524 | Esteve Valls Mascar\'o | Esteve Valls Mascaro, Daniel Sliwowski, Dongheui Lee | HOI4ABOT: Human-Object Interaction Anticipation for Human Intention
Reading Collaborative roBOTs | Proceedings in Conference on Robot Learning 2023 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Robots are becoming increasingly integrated into our lives, assisting us in
various tasks. To ensure effective collaboration between humans and robots, it
is essential that they understand our intentions and anticipate our actions. In
this paper, we propose a Human-Object Interaction (HOI) anticipation framework
for collaborative robots. We propose an efficient and robust transformer-based
model to detect and anticipate HOIs from videos. This enhanced anticipation
empowers robots to proactively assist humans, resulting in more efficient and
intuitive collaborations. Our model outperforms state-of-the-art results in HOI
detection and anticipation on the VidHOI dataset with an increase of 1.76% and
1.04% in mAP respectively while being 15.4 times faster. We showcase the
effectiveness of our approach through experimental results in a real robot,
demonstrating that the robot's ability to anticipate HOIs is key for better
Human-Robot Interaction. More information can be found on our project webpage:
https://evm7.github.io/HOI4ABOT_page/
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 15:34:49 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Mascaro",
"Esteve Valls",
""
],
[
"Sliwowski",
"Daniel",
""
],
[
"Lee",
"Dongheui",
""
]
]
| new_dataset | 0.996911 |
2309.16535 | Yiming Ju | Yiming Ju, Zheng Zhang | KLoB: a Benchmark for Assessing Knowledge Locating Methods in Language
Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the Locate-Then-Edit paradigm has emerged as one of the main approaches
for changing factual knowledge stored in language models. However, there is
a lack of research on whether present locating methods can pinpoint the exact
parameters embedding the desired knowledge. Moreover, although many researchers
have questioned the validity of the locality hypothesis of factual knowledge, no
method has been provided to test this hypothesis for more in-depth discussion and
research. Therefore, we introduce KLoB, a benchmark examining three essential
properties that a reliable knowledge locating method should satisfy. KLoB can
serve as a benchmark for evaluating existing locating methods in language
models, and can contribute a method for reassessing the validity of the locality
hypothesis of factual knowledge. Our benchmark is publicly available at
\url{https://github.com/juyiming/KLoB}.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 15:47:03 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Ju",
"Yiming",
""
],
[
"Zhang",
"Zheng",
""
]
]
| new_dataset | 0.993882 |
2309.16553 | Yixuan Li | Yixuan Li, Lihan Jiang, Linning Xu, Yuanbo Xiangli, Zhenzhi Wang,
Dahua Lin, Bo Dai | MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering
and Beyond | Accepted to ICCV 2023. Project page:
$\href{https://city-super.github.io/matrixcity/}{this\, https\, URL}$ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural radiance fields (NeRF) and its subsequent variants have led to
remarkable progress in neural rendering. While most recent neural rendering
works focus on objects and small-scale scenes, developing neural rendering
methods for city-scale scenes is of great potential in many real-world
applications. However, this line of research is impeded by the absence of a
comprehensive and high-quality dataset, yet collecting such a dataset over real
city-scale scenes is costly, sensitive, and technically difficult. To this end,
we build a large-scale, comprehensive, and high-quality synthetic dataset for
city-scale neural rendering research. Leveraging the Unreal Engine 5 City
Sample project, we develop a pipeline to easily collect aerial and street city
views, accompanied by ground-truth camera poses and a range of additional data
modalities. Flexible controls over environmental factors like light, weather,
human and car crowd are also available in our pipeline, supporting the need of
various tasks covering city-scale neural rendering and beyond. The resulting
pilot dataset, MatrixCity, contains 67k aerial images and 452k street images
from two city maps of total size $28km^2$. On top of MatrixCity, a thorough
benchmark is also conducted, which not only reveals unique challenges of the
task of city-scale neural rendering, but also highlights potential improvements
for future works. The dataset and code will be publicly available at our
project page: https://city-super.github.io/matrixcity/.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 16:06:02 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Li",
"Yixuan",
""
],
[
"Jiang",
"Lihan",
""
],
[
"Xu",
"Linning",
""
],
[
"Xiangli",
"Yuanbo",
""
],
[
"Wang",
"Zhenzhi",
""
],
[
"Lin",
"Dahua",
""
],
[
"Dai",
"Bo",
""
]
]
| new_dataset | 0.999858 |
2309.16575 | Garrett Tanzer | Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, Luke
Melas-Kyriazi | A Benchmark for Learning to Translate a New Language from One Grammar
Book | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) can perform impressive feats with in-context
learning or lightweight finetuning. It is natural to wonder how well these
models adapt to genuinely new tasks, but how does one find tasks that are
unseen in internet-scale training sets? We turn to a field that is explicitly
motivated and bottlenecked by a scarcity of web data: low-resource languages.
In this paper, we introduce MTOB (Machine Translation from One Book), a
benchmark for learning to translate between English and Kalamang -- a language
with fewer than 200 speakers and therefore virtually no presence on the web --
using several hundred pages of field linguistics reference materials. This task
framing is novel in that it asks a model to learn a language from a single
human-readable book of grammar explanations, rather than a large mined corpus
of in-domain data, more akin to L2 learning than L1 acquisition. We demonstrate
that baselines using current LLMs are promising but fall short of human
performance, achieving 44.7 chrF on Kalamang to English translation and 45.8
chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by a
human who learned Kalamang from the same reference materials. We hope that MTOB
will help measure LLM capabilities along a new dimension, and that the methods
developed to solve it could help expand access to language technology for
underserved communities by leveraging qualitatively different kinds of data
than traditional machine translation.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 16:32:28 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Tanzer",
"Garrett",
""
],
[
"Suzgun",
"Mirac",
""
],
[
"Visser",
"Eline",
""
],
[
"Jurafsky",
"Dan",
""
],
[
"Melas-Kyriazi",
"Luke",
""
]
]
| new_dataset | 0.999159 |
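The MTOB record above reports translation quality in chrF, a character n-gram F-score. The sketch below computes corpus-level chrF with the sacreBLEU library; it assumes sacrebleu 2.x is installed (`pip install sacrebleu`), the example sentences are invented, and this is a generic scoring illustration rather than the benchmark's official evaluation script.

```python
# Minimal chrF scoring sketch using sacreBLEU (assumed installed, version 2.x).
from sacrebleu.metrics import CHRF

def corpus_chrf(hypotheses: list[str], references: list[str]) -> float:
    """Corpus-level chrF between system outputs and single references."""
    metric = CHRF()  # default character n-gram order and beta
    # sacreBLEU expects one list of references per reference stream.
    return metric.corpus_score(hypotheses, [references]).score

if __name__ == "__main__":
    hyps = ["the cat sits on the mat"]
    refs = ["the cat sat on the mat"]
    print(round(corpus_chrf(hyps, refs), 1))
```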
2309.16583 | Yuyu Zhang | Shen Zheng, Yuyu Zhang, Yijie Zhu, Chenguang Xi, Pengyang Gao, Xun
Zhou, Kevin Chen-Chuan Chang | GPT-Fathom: Benchmarking Large Language Models to Decipher the
Evolutionary Path towards GPT-4 and Beyond | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of large language models (LLMs), there is a
pressing need for a comprehensive evaluation suite to assess their capabilities
and limitations. Existing LLM leaderboards often reference scores reported in
other papers without consistent settings and prompts, which may inadvertently
encourage cherry-picking favored settings and prompts for better results. In
this work, we introduce GPT-Fathom, an open-source and reproducible LLM
evaluation suite built on top of OpenAI Evals. We systematically evaluate 10+
leading LLMs as well as OpenAI's legacy models on 20+ curated benchmarks across
7 capability categories, all under aligned settings. Our retrospective study on
OpenAI's earlier models offers valuable insights into the evolutionary path
from GPT-3 to GPT-4. Currently, the community is eager to know how GPT-3
progressively improves to GPT-4, including technical details like whether
adding code data improves LLM's reasoning capability, which aspects of LLM
capability can be improved by SFT and RLHF, how much is the alignment tax, etc.
Our analysis sheds light on many of these questions, aiming to improve the
transparency of advanced LLMs.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 16:43:35 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Zheng",
"Shen",
""
],
[
"Zhang",
"Yuyu",
""
],
[
"Zhu",
"Yijie",
""
],
[
"Xi",
"Chenguang",
""
],
[
"Gao",
"Pengyang",
""
],
[
"Zhou",
"Xun",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
]
]
| new_dataset | 0.99797 |
2309.16594 | Adam Karczmarz | Jan van den Brand, Adam Karczmarz | Deterministic Fully Dynamic SSSP and More | Extended abstract to appear in FOCS 2023 | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | We present the first non-trivial fully dynamic algorithm maintaining exact
single-source distances in unweighted graphs. This resolves an open problem
stated by Sankowski [COCOON 2005] and van den Brand and Nanongkai [FOCS 2019].
Previous fully dynamic single-source distances data structures were all
approximate, but so far, non-trivial dynamic algorithms for the exact setting
could only be ruled out for polynomially weighted graphs (Abboud and
Vassilevska Williams, [FOCS 2014]). The exact unweighted case remained the main
case for which neither a subquadratic dynamic algorithm nor a quadratic lower
bound was known.
Our dynamic algorithm works on directed graphs, is deterministic, and can
report a single-source shortest paths tree in subquadratic time as well. Thus
we also obtain the first deterministic fully dynamic data structure for
reachability (transitive closure) with subquadratic update and query time. This
answers an open problem of van den Brand, Nanongkai, and Saranurak [FOCS 2019].
Finally, using the same framework we obtain the first fully dynamic data
structure maintaining all-pairs $(1+\epsilon)$-approximate distances within
non-trivial sub-$n^\omega$ worst-case update time while supporting optimal-time
approximate shortest path reporting at the same time. This data structure is
also deterministic and therefore implies the first known non-trivial
deterministic worst-case bound for recomputing the transitive closure of a
digraph.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 16:58:23 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Brand",
"Jan van den",
""
],
[
"Karczmarz",
"Adam",
""
]
]
| new_dataset | 0.980987 |
2309.16609 | An Yang | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng,
Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang
Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma,
Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu,
Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang,
Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng
Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou,
Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | Qwen Technical Report | 59 pages, 5 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 17:07:49 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Bai",
"Jinze",
""
],
[
"Bai",
"Shuai",
""
],
[
"Chu",
"Yunfei",
""
],
[
"Cui",
"Zeyu",
""
],
[
"Dang",
"Kai",
""
],
[
"Deng",
"Xiaodong",
""
],
[
"Fan",
"Yang",
""
],
[
"Ge",
"Wenbin",
""
],
[
"Han",
"Yu",
""
],
[
"Huang",
"Fei",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Ji",
"Luo",
""
],
[
"Li",
"Mei",
""
],
[
"Lin",
"Junyang",
""
],
[
"Lin",
"Runji",
""
],
[
"Liu",
"Dayiheng",
""
],
[
"Liu",
"Gao",
""
],
[
"Lu",
"Chengqiang",
""
],
[
"Lu",
"Keming",
""
],
[
"Ma",
"Jianxin",
""
],
[
"Men",
"Rui",
""
],
[
"Ren",
"Xingzhang",
""
],
[
"Ren",
"Xuancheng",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Tan",
"Sinan",
""
],
[
"Tu",
"Jianhong",
""
],
[
"Wang",
"Peng",
""
],
[
"Wang",
"Shijie",
""
],
[
"Wang",
"Wei",
""
],
[
"Wu",
"Shengguang",
""
],
[
"Xu",
"Benfeng",
""
],
[
"Xu",
"Jin",
""
],
[
"Yang",
"An",
""
],
[
"Yang",
"Hao",
""
],
[
"Yang",
"Jian",
""
],
[
"Yang",
"Shusheng",
""
],
[
"Yao",
"Yang",
""
],
[
"Yu",
"Bowen",
""
],
[
"Yuan",
"Hongyi",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Zhang",
"Jianwei",
""
],
[
"Zhang",
"Xingxuan",
""
],
[
"Zhang",
"Yichang",
""
],
[
"Zhang",
"Zhenru",
""
],
[
"Zhou",
"Chang",
""
],
[
"Zhou",
"Jingren",
""
],
[
"Zhou",
"Xiaohuan",
""
],
[
"Zhu",
"Tianhang",
""
]
]
| new_dataset | 0.984633 |
2309.16650 | Krishna Murthy Jatavallabhula | Qiao Gu, Alihusein Kuwajerwala, Sacha Morin, Krishna Murthy
Jatavallabhula, Bipasha Sen, Aditya Agarwal, Corban Rivera, William Paul,
Kirsty Ellis, Rama Chellappa, Chuang Gan, Celso Miguel de Melo, Joshua B.
Tenenbaum, Antonio Torralba, Florian Shkurti, Liam Paull | ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and
Planning | Project page: https://concept-graphs.github.io/ Explainer video:
https://youtu.be/mRhNkQwRYnc | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For robots to perform a wide variety of tasks, they require a 3D
representation of the world that is semantically rich, yet compact and
efficient for task-driven perception and planning. Recent approaches have
attempted to leverage features from large vision-language models to encode
semantics in 3D representations. However, these approaches tend to produce maps
with per-point feature vectors, which do not scale well in larger environments,
nor do they contain semantic spatial relationships between entities in the
environment, which are useful for downstream planning. In this work, we propose
ConceptGraphs, an open-vocabulary graph-structured representation for 3D
scenes. ConceptGraphs is built by leveraging 2D foundation models and fusing
their output to 3D by multi-view association. The resulting representations
generalize to novel semantic classes, without the need to collect large 3D
datasets or finetune models. We demonstrate the utility of this representation
through a number of downstream planning tasks that are specified through
abstract (language) prompts and require complex reasoning over spatial and
semantic concepts. (Project page: https://concept-graphs.github.io/ Explainer
video: https://youtu.be/mRhNkQwRYnc )
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 17:53:38 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Gu",
"Qiao",
""
],
[
"Kuwajerwala",
"Alihusein",
""
],
[
"Morin",
"Sacha",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Sen",
"Bipasha",
""
],
[
"Agarwal",
"Aditya",
""
],
[
"Rivera",
"Corban",
""
],
[
"Paul",
"William",
""
],
[
"Ellis",
"Kirsty",
""
],
[
"Chellappa",
"Rama",
""
],
[
"Gan",
"Chuang",
""
],
[
"de Melo",
"Celso Miguel",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Shkurti",
"Florian",
""
],
[
"Paull",
"Liam",
""
]
]
| new_dataset | 0.999801 |
2309.16661 | Mustansar Fiaz | Mustansar Fiaz, Moein Heidari, Rao Muhammad Anwer, Hisham Cholakkal | SA2-Net: Scale-aware Attention Network for Microscopic Image
Segmentation | BMVC 2023 accepted as oral | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microscopic image segmentation is a challenging task, wherein the objective
is to assign semantic labels to each pixel in a given microscopic image. While
convolutional neural networks (CNNs) form the foundation of many existing
frameworks, they often struggle to explicitly capture long-range dependencies.
Although transformers were initially devised to address this issue using
self-attention, it has been proven that both local and global features are
crucial for addressing diverse challenges in microscopic images, including
variations in shape, size, appearance, and target region density. In this
paper, we introduce SA2-Net, an attention-guided method that leverages
multi-scale feature learning to effectively handle diverse structures within
microscopic images. Specifically, we propose a scale-aware attention (SA2) module
designed to capture inherent variations in scales and shapes of microscopic
regions, such as cells, for accurate segmentation. This module incorporates
local attention at each level of multi-stage features, as well as global
attention across multiple resolutions. Furthermore, we address the issue of
blurred region boundaries (e.g., cell boundaries) by introducing a novel
upsampling strategy called the Adaptive Up-Attention (AuA) module. This module
enhances the discriminative ability for improved localization of microscopic
regions using an explicit attention mechanism. Extensive experiments on five
challenging datasets demonstrate the benefits of our SA2-Net model. Our source
code is publicly available at \url{https://github.com/mustansarfiaz/SA2-Net}.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 17:58:05 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Fiaz",
"Mustansar",
""
],
[
"Heidari",
"Moein",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Cholakkal",
"Hisham",
""
]
]
| new_dataset | 0.995404 |
2309.16670 | Soshi Shimada | Soshi Shimada, Vladislav Golyanik, Patrick P\'erez, Christian Theobalt | Decaf: Monocular Deformation Capture for Face and Hand Interactions | null | null | null | null | cs.CV cs.GR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing methods for 3D tracking from monocular RGB videos predominantly
consider articulated and rigid objects. Modelling dense non-rigid object
deformations in this setting has remained largely unaddressed so far, although such
effects can improve the realism of the downstream applications such as AR/VR
and avatar communications. This is due to the severe ill-posedness of the
monocular view setting and the associated challenges. While it is possible to
naively track multiple non-rigid objects independently using 3D templates or
parametric 3D models, such an approach would suffer from multiple artefacts in
the resulting 3D estimates such as depth ambiguity, unnatural intra-object
collisions and missing or implausible deformations. Hence, this paper
introduces the first method that addresses the fundamental challenges depicted
above and that allows tracking human hands interacting with human faces in 3D
from single monocular RGB videos. We model hands as articulated objects
inducing non-rigid face deformations during an active interaction. Our method
relies on a new hand-face motion and interaction capture dataset with realistic
face deformations acquired with a markerless multi-view camera system. As a
pivotal step in its creation, we process the reconstructed raw 3D shapes with
position-based dynamics and an approach for non-uniform stiffness estimation of
the head tissues, which results in plausible annotations of the surface
deformations, hand-face contact regions and head-hand positions. At the core of
our neural approach are a variational auto-encoder supplying the hand-face
depth prior and modules that guide the 3D tracking by estimating the contacts
and the deformations. Our final 3D hand and face reconstructions are realistic
and more plausible compared to several baselines applicable in our setting,
both quantitatively and qualitatively.
https://vcai.mpi-inf.mpg.de/projects/Decaf
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 17:59:51 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Shimada",
"Soshi",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Pérez",
"Patrick",
""
],
[
"Theobalt",
"Christian",
""
]
]
| new_dataset | 0.966812 |
2108.09483 | Fabrizio Frati | Giuseppe Di Battista and Fabrizio Frati | From Tutte to Floater and Gotsman: On the Resolution of Planar
Straight-line Drawings and Morphs | Appears in the Proceedings of the 29th International Symposium on
Graph Drawing and Network Visualization (GD 2021) | null | null | null | cs.CG cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The algorithm of Tutte for constructing convex planar straight-line drawings
and the algorithm of Floater and Gotsman for constructing planar straight-line
morphs are among the most popular graph drawing algorithms. Quite surprisingly,
little is known about the resolution of the drawings produced by these
algorithms. In this paper, focusing on maximal plane graphs, we prove tight
bounds on the resolution of the planar straight-line drawings produced by
Floater's algorithm, which is a broad generalization of Tutte's algorithm.
Further, we use such a result in order to prove a lower bound on the resolution
of the drawings of maximal plane graphs produced by Floater and Gotsman's
morphing algorithm. Finally, we show that such a morphing algorithm might
produce drawings with exponentially-small resolution, even when transforming
drawings with polynomial resolution.
| [
{
"version": "v1",
"created": "Sat, 21 Aug 2021 10:19:21 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Aug 2021 09:46:40 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Aug 2021 07:28:55 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Sep 2023 15:34:32 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Di Battista",
"Giuseppe",
""
],
[
"Frati",
"Fabrizio",
""
]
]
| new_dataset | 0.999255 |
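The record above concerns drawings produced by Tutte's algorithm and Floater's generalization, where each interior vertex is placed at a convex combination (for Tutte, the plain average) of its neighbours while the outer face is pinned to a convex polygon. Below is a compact NumPy sketch of that linear system for the classical Tutte case; the example graph and the polygon coordinates are arbitrary assumptions for illustration.

```python
import numpy as np

def tutte_embedding(n, edges, boundary, boundary_pos):
    """Barycentric (Tutte) embedding: interior vertices at the average of neighbours.

    n            : number of vertices (0..n-1)
    edges        : list of undirected edges (u, v)
    boundary     : vertex ids pinned to the outer face
    boundary_pos : dict vertex -> (x, y) position on a convex polygon
    Returns an (n, 2) array of coordinates.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    fixed = set(boundary)
    interior = [v for v in range(n) if v not in fixed]
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(adj[v])          # deg(v) * x_v ...
        for w in adj[v]:
            if w in idx:
                A[idx[v], idx[w]] -= 1.0         # ... minus interior neighbours
            else:
                b[idx[v]] += boundary_pos[w]     # ... equals fixed neighbours
    pos = np.zeros((n, 2))
    for v in boundary:
        pos[v] = boundary_pos[v]
    if interior:
        pos[interior] = np.linalg.solve(A, b)
    return pos

if __name__ == "__main__":
    # K4: triangle 0-1-2 as the outer face, vertex 3 inside, adjacent to all.
    edges = [(0, 1), (1, 2), (2, 0), (3, 0), (3, 1), (3, 2)]
    corners = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}
    print(tutte_embedding(4, edges, [0, 1, 2], corners)[3])  # triangle centroid
```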
2211.14045 | Leonardo Bacciottini | Leonardo Bacciottini, Luciano Lenzini, Enzo Mingozzi and Giuseppe
Anastasi | A Configurable Protocol for Quantum Entanglement Distribution to End
Nodes | 6 pages, 6 figures, accepted for publication at IEEE ICC 2023 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary task of a quantum repeater network is to deliver entanglement
among end nodes. Most of existing entanglement distribution protocols do not
consider purification, which is thus delegated to an upper layer. This is a
major drawback since, once an end-to-end entangled connection (or a portion
thereof) is established it cannot be purified if its fidelity (F) does not fall
within an interval bounded by Fmin (greater than 0.5) and Fmax (less than 1).
In this paper, we propose the Ranked Entanglement Distribution Protocol
(REDiP), a connection-oriented protocol that overcomes the above drawback. This
result was achieved by including in our protocol two mechanisms for carrying
out jointly purification and entanglement swapping. We use simulations to
investigate the impact of these mechanisms on the performance of a repeater
network, in terms of throughput and fidelity. Moreover, we show how REDiP can
easily be configured to implement custom entanglement swapping and purification
strategies, including (but not restricted to) those adopted in two recent
works.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2022 12:01:48 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 15:14:05 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Bacciottini",
"Leonardo",
""
],
[
"Lenzini",
"Luciano",
""
],
[
"Mingozzi",
"Enzo",
""
],
[
"Anastasi",
"Giuseppe",
""
]
]
| new_dataset | 0.998068 |
2303.13873 | Yongwei Chen | Rui Chen, Yongwei Chen, Ningxin Jiao, Kui Jia | Fantasia3D: Disentangling Geometry and Appearance for High-quality
Text-to-3D Content Creation | Accepted by ICCV 2023. Project page: https://fantasia3d.github.io/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Automatic 3D content creation has achieved rapid progress recently due to the
availability of pre-trained, large language models and image diffusion models,
forming the emerging topic of text-to-3D content creation. Existing text-to-3D
methods commonly use implicit scene representations, which couple the geometry
and appearance via volume rendering and are suboptimal in terms of recovering
finer geometries and achieving photorealistic rendering; consequently, they are
less effective for generating high-quality 3D assets. In this work, we propose
a new method of Fantasia3D for high-quality text-to-3D content creation. Key to
Fantasia3D is the disentangled modeling and learning of geometry and
appearance. For geometry learning, we rely on a hybrid scene representation,
and propose to encode surface normal extracted from the representation as the
input of the image diffusion model. For appearance modeling, we introduce the
spatially varying bidirectional reflectance distribution function (BRDF) into
the text-to-3D task, and learn the surface material for photorealistic
rendering of the generated surface. Our disentangled framework is more
compatible with popular graphics engines, supporting relighting, editing, and
physical simulation of the generated 3D assets. We conduct thorough experiments
that show the advantages of our method over existing ones under different
text-to-3D task settings. Project page and source codes:
https://fantasia3d.github.io/.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2023 09:30:09 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 14:18:56 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Sep 2023 10:35:49 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Chen",
"Rui",
""
],
[
"Chen",
"Yongwei",
""
],
[
"Jiao",
"Ningxin",
""
],
[
"Jia",
"Kui",
""
]
]
| new_dataset | 0.993996 |
2304.00790 | Guang Yang | Guang Yang, Mingyu Cai, Ahmad Ahmad, Amanda Prorok, Roberto Tron,
Calin Belta | LQR-CBF-RRT*: Safe and Optimal Motion Planning | null | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | We present LQR-CBF-RRT*, an incremental sampling-based algorithm for offline
motion planning. Our framework leverages the strength of Control Barrier
Functions (CBFs) and Linear Quadratic Regulators (LQR) to generate
safety-critical and optimal trajectories for a robot with dynamics described by
an affine control system. CBFs are used for safety guarantees, while LQRs are
employed for optimal control synthesis during edge extensions. Popular
CBF-based formulations for safety critical control require solving Quadratic
Programs (QPs), which can be computationally expensive. Moreover, LQR-based
controllers require repetitive applications of first-order Taylor
approximations for nonlinear systems, which can also create an additional
computational burden. To improve the motion planning efficiency, we verify the
satisfaction of the CBF constraints directly in edge extension to avoid the
burden of solving the QPs. We store computed optimal LQR gain matrices in a
hash table to avoid re-computation during the local linearization of the
rewiring procedure. Lastly, we utilize the Cross-Entropy Method for importance
sampling to improve sampling efficiency. Our results show that the proposed
planner surpasses its counterparts in computational efficiency and performs
well in an experimental setup.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2023 08:23:53 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 05:38:56 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 07:00:30 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Sep 2023 06:42:23 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Yang",
"Guang",
""
],
[
"Cai",
"Mingyu",
""
],
[
"Ahmad",
"Ahmad",
""
],
[
"Prorok",
"Amanda",
""
],
[
"Tron",
"Roberto",
""
],
[
"Belta",
"Calin",
""
]
]
| new_dataset | 0.993949 |
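The preceding record combines LQR gains (stored in a hash table to avoid recomputation) with CBF checks during edge extension. Below is a generic sketch of computing and memoizing a discrete-time LQR gain with SciPy; the double-integrator dynamics, cost weights, and barrier function are assumptions for illustration, and the CBF part is reduced to a plain h(x) ≥ 0 membership check rather than the planner's full constraint verification.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

_gain_cache: dict[bytes, np.ndarray] = {}  # hash table of LQR gains, keyed by (A, B)

def lqr_gain(A: np.ndarray, B: np.ndarray, Q: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Discrete-time LQR gain K (u = -K x), memoized so repeated edge
    extensions with the same local linearization reuse the Riccati solve."""
    key = A.tobytes() + B.tobytes()
    if key not in _gain_cache:
        P = solve_discrete_are(A, B, Q, R)
        _gain_cache[key] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return _gain_cache[key]

def cbf_ok(x: np.ndarray, h) -> bool:
    """Accept a state only if the barrier function is non-negative (safe set)."""
    return h(x) >= 0.0

if __name__ == "__main__":
    dt = 0.1                                 # double integrator: position, velocity
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))
    x = np.array([1.0, 0.0])
    print("u =", -K @ x)
    print("safe:", cbf_ok(x, h=lambda s: 4.0 - s[0] ** 2))  # stay within |pos| <= 2
```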
2304.02008 | Remi Pautrat | R\'emi Pautrat, Iago Su\'arez, Yifan Yu, Marc Pollefeys, Viktor
Larsson | GlueStick: Robust Image Matching by Sticking Points and Lines Together | Accepted at ICCV 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Line segments are powerful features complementary to points. They offer
structural cues, robust to drastic viewpoint and illumination changes, and can
be present even in texture-less areas. However, describing and matching them is
more challenging compared to points due to partial occlusions, lack of texture,
or repetitiveness. This paper introduces a new matching paradigm, where points,
lines, and their descriptors are unified into a single wireframe structure. We
propose GlueStick, a deep matching Graph Neural Network (GNN) that takes two
wireframes from different images and leverages the connectivity information
between nodes to better glue them together. In addition to the increased
efficiency brought by the joint matching, we also demonstrate a large boost of
performance when leveraging the complementary nature of these two features in a
single architecture. We show that our matching strategy outperforms the
state-of-the-art approaches independently matching line segments and points for
a wide variety of datasets and tasks. The code is available at
https://github.com/cvg/GlueStick.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2023 17:58:14 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 07:00:19 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Pautrat",
"Rémi",
""
],
[
"Suárez",
"Iago",
""
],
[
"Yu",
"Yifan",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Larsson",
"Viktor",
""
]
]
| new_dataset | 0.99933 |
2304.05979 | Weizheng Wang | Weizheng Wang, Ruiqi Wang, Le Mao, Byung-Cheol Min | NaviSTAR: Socially Aware Robot Navigation with Hybrid Spatio-Temporal
Graph Transformer and Preference Learning | To appear in IROS 2023 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing robotic technologies for use in human society requires ensuring
the safety of robots' navigation behaviors while adhering to pedestrians'
expectations and social norms. However, maintaining real-time communication
between robots and pedestrians to avoid collisions can be challenging. To
address these challenges, we propose a novel socially-aware navigation
benchmark called NaviSTAR, which utilizes a hybrid Spatio-Temporal grAph
tRansformer (STAR) to understand interactions in human-rich environments fusing
potential crowd multi-modal information. We leverage off-policy reinforcement
learning algorithm with preference learning to train a policy and a reward
function network with supervisor guidance. Additionally, we design a social
score function to evaluate the overall performance of social navigation. To
compare, we train and test our algorithm and other state-of-the-art methods in
both simulator and real-world scenarios independently. Our results show that
NaviSTAR outperforms previous methods with outstanding performance\footnote{The
source code and experiment videos of this work are available at:
https://sites.google.com/view/san-navistar}
| [
{
"version": "v1",
"created": "Wed, 12 Apr 2023 17:01:35 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 20:16:28 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Wang",
"Weizheng",
""
],
[
"Wang",
"Ruiqi",
""
],
[
"Mao",
"Le",
""
],
[
"Min",
"Byung-Cheol",
""
]
]
| new_dataset | 0.987012 |
2304.14880 | Sayan Deb Sarkar | Sayan Deb Sarkar, Ondrej Miksik, Marc Pollefeys, Daniel Barath, Iro
Armeni | SGAligner : 3D Scene Alignment with Scene Graphs | Accepted at ICCV 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building 3D scene graphs has recently emerged as a topic in scene
representation for several embodied AI applications to represent the world in a
structured and rich manner. With their increased use in solving downstream
tasks (e.g., navigation and room rearrangement), can we leverage and recycle them
for creating 3D maps of environments, a pivotal step in agent operation? We
focus on the fundamental problem of aligning pairs of 3D scene graphs whose
overlap can range from zero to partial and can contain arbitrary changes. We
propose SGAligner, the first method for aligning pairs of 3D scene graphs that
is robust to in-the-wild scenarios (i.e., unknown overlap -- if any -- and
changes in the environment). We get inspired by multi-modality knowledge graphs
and use contrastive learning to learn a joint, multi-modal embedding space. We
evaluate on the 3RScan dataset and further showcase that our method can be used
for estimating the transformation between pairs of 3D scenes. Since benchmarks
for these tasks are missing, we create them on this dataset. The code,
benchmark, and trained models are available on the project website.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2023 14:39:22 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 22:21:06 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Sarkar",
"Sayan Deb",
""
],
[
"Miksik",
"Ondrej",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Barath",
"Daniel",
""
],
[
"Armeni",
"Iro",
""
]
]
| new_dataset | 0.998912 |
2305.00584 | Jan Wichelmann | Jan Wichelmann and Christopher Peredy and Florian Sieck and Anna
P\"atschke and Thomas Eisenbarth | MAMBO-V: Dynamic Side-Channel Leakage Analysis on RISC-V | 20 pages | Detection of Intrusions and Malware, and Vulnerability Assessment-
20th International Conference, DIMVA 2023 | 10.1007/978-3-031-35504-2_1 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RISC-V is an emerging technology, with applications ranging from embedded
devices to high-performance servers. Therefore, more and more security-critical
workloads will be conducted with code that is compiled for RISC-V. Well-known
microarchitectural side-channel attacks against established platforms like x86
apply to RISC-V CPUs as well. As RISC-V does not mandate any hardware-based
side-channel countermeasures, a piece of code compiled for a generic RISC-V CPU
in a cloud server cannot make safe assumptions about the microarchitecture on
which it is running. Existing tools for aiding software-level precautions by
checking side-channel vulnerabilities on source code or x86 binaries are not
compatible with RISC-V machine code.
In this work, we study the requirements and goals of architecture-specific
leakage analysis for RISC-V and illustrate how to achieve these goals with the
help of fast and precise dynamic binary analysis. We implement all necessary
building blocks for finding side-channel leakages on RISC-V, while relying on
existing mature solutions when possible. Our leakage analysis builds upon the
modular side-channel analysis framework Microwalk, that examines execution
traces for leakage through secret-dependent memory accesses or branches. To
provide suitable traces, we port the ARM dynamic binary instrumentation tool
MAMBO to RISC-V. Our port named MAMBO-V can instrument arbitrary binaries which
use the 64-bit general purpose instruction set. We evaluate our toolchain on
several cryptographic libraries with RISC-V support and identify multiple
exploitable leakages.
| [
{
"version": "v1",
"created": "Sun, 30 Apr 2023 21:28:35 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 10:55:16 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Wichelmann",
"Jan",
""
],
[
"Peredy",
"Christopher",
""
],
[
"Sieck",
"Florian",
""
],
[
"Pätschke",
"Anna",
""
],
[
"Eisenbarth",
"Thomas",
""
]
]
| new_dataset | 0.998502 |
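The MAMBO-V record above searches execution traces for leakage through secret-dependent memory accesses or branches. As a language-agnostic illustration of the branch case (not tied to the paper's RISC-V tooling), the sketch below contrasts an early-exit comparison, whose running time depends on how many leading bytes of a guess match the secret, with Python's constant-time hmac.compare_digest.

```python
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    """Returns early on the first mismatching byte, so timing reveals how
    many leading bytes of the guess are correct (a classic side channel)."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:        # secret-dependent branch
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    """Examines every byte regardless of where mismatches occur."""
    return hmac.compare_digest(secret, guess)

if __name__ == "__main__":
    secret = b"correct-horse-battery"
    print(leaky_compare(secret, b"correct-horse-batterx"))   # False (late exit)
    print(constant_time_compare(secret, secret))             # True
```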
2306.06531 | Yongchao Chen | Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy,
Chuchu Fan | AutoTAMP: Autoregressive Task and Motion Planning with LLMs as
Translators and Checkers | 8 pages, 4 figures | null | null | null | cs.RO cs.CL cs.HC | http://creativecommons.org/publicdomain/zero/1.0/ | For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
| [
{
"version": "v1",
"created": "Sat, 10 Jun 2023 21:58:29 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 17:43:42 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Chen",
"Yongchao",
""
],
[
"Arkin",
"Jacob",
""
],
[
"Dawson",
"Charles",
""
],
[
"Zhang",
"Yang",
""
],
[
"Roy",
"Nicholas",
""
],
[
"Fan",
"Chuchu",
""
]
]
| new_dataset | 0.997975 |
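The autoregressive re-prompting loop described in the AutoTAMP abstract above (translate natural language to an intermediate task representation, detect syntactic and semantic errors, and re-prompt until the specification is valid) can be sketched as below. The `llm`, `parse_spec`, and `check_semantics` callables are hypothetical placeholders rather than the authors' API; in the real system the resulting specification is handed to a TAMP solver.

```python
# Minimal translate-check-reprompt loop, assuming hypothetical helpers:
#   llm(prompt) -> str                  : any LLM completion call
#   parse_spec(text) -> spec            : raises SyntaxError on malformed output
#   check_semantics(spec) -> str | None : returns an error message or None
def translate_with_reprompting(instruction, llm, parse_spec, check_semantics,
                               max_rounds=3):
    prompt = f"Translate to the intermediate task representation:\n{instruction}"
    for _ in range(max_rounds):
        raw = llm(prompt)
        try:
            spec = parse_spec(raw)                     # syntactic check
        except SyntaxError as err:
            prompt += f"\nYour output was:\n{raw}\nSyntax error: {err}. Fix it."
            continue
        issue = check_semantics(spec)                  # semantic check
        if issue is None:
            return spec                                # ready for the TAMP solver
        prompt += f"\nYour output was:\n{raw}\nSemantic issue: {issue}. Fix it."
    raise RuntimeError("no valid specification after re-prompting")
```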
2306.15620 | Ninad Khargonkar | Ninad Khargonkar, Sai Haneesh Allu, Yangxiao Lu, Jishnu Jaykumar P,
Balakrishnan Prabhakaran, Yu Xiang | SCENEREPLICA: Benchmarking Real-World Robot Manipulation by Creating
Replicable Scenes | Project page is available at https://irvlutd.github.io/SceneReplica | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a new reproducible benchmark for evaluating robot manipulation in
the real world, specifically focusing on pick-and-place. Our benchmark uses the
YCB objects, a commonly used dataset in the robotics community, to ensure that
our results are comparable to other studies. Additionally, the benchmark is
designed to be easily reproducible in the real world, making it accessible to
researchers and practitioners. We also provide our experimental results and
analyses for model-based and model-free 6D robotic grasping on the benchmark,
where representative algorithms are evaluated for object perception, grasp
planning, and motion planning. We believe that our benchmark will be a valuable
tool for advancing the field of robot manipulation. By providing a standardized
evaluation framework, researchers can more easily compare different techniques
and algorithms, leading to faster progress in developing robot manipulation
methods.
| [
{
"version": "v1",
"created": "Tue, 27 Jun 2023 16:59:15 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 22:17:31 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Khargonkar",
"Ninad",
""
],
[
"Allu",
"Sai Haneesh",
""
],
[
"Lu",
"Yangxiao",
""
],
[
"P",
"Jishnu Jaykumar",
""
],
[
"Prabhakaran",
"Balakrishnan",
""
],
[
"Xiang",
"Yu",
""
]
]
| new_dataset | 0.999846 |
2309.09514 | Yu-Cheng Hsieh | Yu-Cheng Hsieh, Cheng Sun, Suraj Dengale, Min Sun | PanoMixSwap Panorama Mixing via Structural Swapping for Indoor Scene
Understanding | BMVC'23; project page:https://yuchenghsieh.github.io/PanoMixSwap | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The volume and diversity of training data are critical for modern deep
learning-based methods. Compared to the massive amount of labeled perspective
images, 360° panoramic images fall short in both volume and diversity. In this
paper, we propose PanoMixSwap, a novel data augmentation technique specifically
designed for indoor panoramic images. PanoMixSwap explicitly mixes various
background styles, foreground furniture, and room layouts from the existing
indoor panorama datasets and generates a diverse set of new panoramic images to
enrich the datasets. We first decompose each panoramic image into its
constituent parts: background style, foreground furniture, and room layout.
Then, we generate an augmented image by mixing these three parts from three
different images, such as the foreground furniture from one image, the
background style from another image, and the room structure from the third
image. Our method yields high diversity since there is a cubic increase in
image combinations. We also evaluate the effectiveness of PanoMixSwap on two
indoor scene understanding tasks: semantic segmentation and layout estimation.
Our experiments demonstrate that state-of-the-art methods trained with
PanoMixSwap outperform their original setting on both tasks consistently.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2023 06:52:13 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 04:32:41 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Hsieh",
"Yu-Cheng",
""
],
[
"Sun",
"Cheng",
""
],
[
"Dengale",
"Suraj",
""
],
[
"Sun",
"Min",
""
]
]
| new_dataset | 0.987882 |
2309.14877 | Petra J\"a\"askel\"ainen | Petra J\"a\"askel\"ainen | Explainable Sustainability for AI in the Arts | null | null | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | AI is becoming increasingly popular in artistic practices, but the tools for
informing practitioners about the environmental impact (and other
sustainability implications) of AI are adapted for other contexts than creative
practices -- making the tools and sustainability implications of AI not
accessible for artists and creative practitioners. In this position paper, I
describe two empirical studies that aim to develop environmental sustainability
reflection systems for AI Arts, and discuss and introduce Explainable
Sustainability for AI Arts.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 12:20:18 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 11:40:43 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Jääskeläinen",
"Petra",
""
]
]
| new_dataset | 0.992742 |
2309.15203 | Chenpei Huang | Chenpei Huang, Hui Zhong, Jie Lian, Pavana Prakash, Dian Shi, Yuan Xu,
and Miao Pan | Eve Said Yes: AirBone Authentication for Head-Wearable Smart Voice
Assistant | 13 pages, 12 figures | null | null | null | cs.CR cs.HC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in machine learning and natural language processing have
fostered the enormous prosperity of smart voice assistants and their services,
e.g., Alexa, Google Home, Siri, etc. However, voice spoofing attacks are deemed
to be one of the major challenges of voice control security, and they keep
evolving, with techniques such as deep-learning-based voice conversion and
speech synthesis. To solve this problem outside the acoustic domain, we focus on
head-wearable devices, such as earbuds and virtual reality (VR) headsets, which
can continuously monitor the bone-conducted voice in the vibration
domain. Specifically, we identify that air and bone conduction (AC/BC) from the
same vocalization are coupled (or concurrent) and user-level unique, which
makes them suitable behavioral and biometric factors for multi-factor
authentication (MFA). The legitimate user can defeat acoustic domain and even
cross-domain spoofing samples with the proposed two-stage AirBone
authentication. The first stage answers whether air and bone conduction
utterances are time domain consistent (TC) and the second stage runs
bone conduction speaker recognition (BC-SR). The security level is
hence increased for two reasons: (1) current acoustic attacks on smart voice
assistants cannot affect bone conduction, which is in the vibration domain; (2)
even for advanced cross-domain attacks, the unique bone conduction features can
detect adversary's impersonation and machine-induced vibration. Finally,
AirBone authentication has good usability (the same level as voice
authentication) compared with traditional MFA and those specially designed to
enhance smart voice security. Our experimental results show that the proposed
AirBone authentication is usable and secure, and can be easily deployed on
commercial off-the-shelf head wearables with a good user experience.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 19:03:45 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Huang",
"Chenpei",
""
],
[
"Zhong",
"Hui",
""
],
[
"Lian",
"Jie",
""
],
[
"Prakash",
"Pavana",
""
],
[
"Shi",
"Dian",
""
],
[
"Xu",
"Yuan",
""
],
[
"Pan",
"Miao",
""
]
]
| new_dataset | 0.981235 |
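A minimal sketch of the two-stage idea from the AirBone abstract above, assuming a simple cross-correlation test for air/bone time-domain consistency and a hypothetical `bc_embed` speaker-embedding function; the thresholds and signal processing are illustrative only, not the paper's method.

```python
# Toy two-stage check: stage 1 tests air/bone time consistency via peak
# normalized cross-correlation; stage 2 compares a bone-conduction speaker
# embedding (hypothetical `bc_embed`) against an enrolled template.
import numpy as np

def time_consistent(ac, bc, threshold=0.6):
    ac = (ac - ac.mean()) / (ac.std() + 1e-8)
    bc = (bc - bc.mean()) / (bc.std() + 1e-8)
    xcorr = np.correlate(ac, bc, mode="full") / len(ac)
    return float(np.max(np.abs(xcorr))) >= threshold

def authenticate(ac, bc, bc_embed, enrolled, sim_threshold=0.8):
    if not time_consistent(ac, bc):                    # stage 1: AC/BC concurrency
        return False
    emb = bc_embed(bc)                                 # stage 2: BC speaker check
    cos = float(np.dot(emb, enrolled) /
                (np.linalg.norm(emb) * np.linalg.norm(enrolled) + 1e-8))
    return cos >= sim_threshold
```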
2309.15204 | Ben-Zion Bobrovsky | Sapir Kontente, Roy Orfaig and Ben-Zion Bobrovsky | CLRmatchNet: Enhancing Curved Lane Detection with Deep Matching Process | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lane detection plays a crucial role in autonomous driving by providing vital
data to ensure safe navigation. Modern algorithms rely on anchor-based
detectors, which are then followed by a label assignment process to categorize
training detections as positive or negative instances based on learned
geometric attributes. The current methods, however, have limitations and might
not be optimal since they rely on predefined classical cost functions that are
based on a low-dimensional model. Our research introduces MatchNet, a deep
learning sub-module-based approach aimed at enhancing the label assignment
process. Integrated into a state-of-the-art lane detection network like the
Cross Layer Refinement Network for Lane Detection (CLRNet), MatchNet replaces
the conventional label assignment process with a sub-module network. This
integration results in significant improvements in scenarios involving curved
lanes, with gains across all backbones of +2.8% for ResNet34,
+2.3% for ResNet101, and +2.96% for DLA34. In addition, it maintains comparable
results, or even improves them, in the other sections. Our method boosts the confidence
level in lane detection, allowing an increase in the confidence threshold. The
code will be available soon: https://github.com/sapirkontente/CLRmatchNet.git
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 19:05:18 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Kontente",
"Sapir",
""
],
[
"Orfaig",
"Roy",
""
],
[
"Bobrovsky",
"Ben-Zion",
""
]
]
| new_dataset | 0.987493 |
2309.15242 | Yi Wang | Yi Wang, Jieliang Luo, Adam Gaier, Evan Atherton, Hilmar Koch | PlotMap: Automated Layout Design for Building Game Worlds | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | World-building, the process of developing both the narrative and physical
world of a game, plays a vital role in the game's experience. Critically
acclaimed independent and AAA video games are praised for strong world
building, with game maps that masterfully intertwine with and elevate the
narrative, captivating players and leaving a lasting impression. However,
designing game maps that support a desired narrative is challenging, as it
requires satisfying complex constraints from various considerations. Most
existing map generation methods focus on considerations about gameplay
mechanics or map topography, while the need to support the story is typically
neglected. As a result, extensive manual adjustment is still required to design
a game world that facilitates particular stories. In this work, we approach
this problem by introducing an extra layer of plot facility layout design that
is independent of the underlying map generation method in a world-building
pipeline. Concretely, we present a system that leverages Reinforcement Learning
(RL) to automatically assign concrete locations on a game map to abstract
locations mentioned in a given story (plot facilities), following spatial
constraints derived from the story. A decision-making agent moves the plot
facilities around, considering their relationship to the map and each other, to
locations on the map that best satisfy the constraints of the story. Our system
considers input from multiple modalities: map images as pixels, facility
locations as real values, and story constraints expressed in natural language.
We develop a method of generating datasets of facility layout tasks, create an
RL environment to train and evaluate RL models, and further analyze the
behaviors of the agents through a group of comprehensive experiments and
ablation studies, aiming to provide insights for RL-based plot facility layout
design.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 20:13:10 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Wang",
"Yi",
""
],
[
"Luo",
"Jieliang",
""
],
[
"Gaier",
"Adam",
""
],
[
"Atherton",
"Evan",
""
],
[
"Koch",
"Hilmar",
""
]
]
| new_dataset | 0.999332 |
2309.15251 | Jiachen Sun | Jiachen Sun, Mark Ibrahim, Melissa Hall, Ivan Evtimov, Z. Morley Mao,
Cristian Canton Ferrer, Caner Hazirbas | VPA: Fully Test-Time Visual Prompt Adaptation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Textual prompt tuning has demonstrated significant performance improvements
in adapting natural language processing models to a variety of downstream tasks
by treating hand-engineered prompts as trainable parameters. Inspired by the
success of textual prompting, several studies have investigated the efficacy of
visual prompt tuning. In this work, we present Visual Prompt Adaptation (VPA),
the first framework that generalizes visual prompting with test-time
adaptation. VPA introduces a small number of learnable tokens, enabling fully
test-time and storage-efficient adaptation without necessitating source-domain
information. We examine our VPA design under diverse adaptation settings,
encompassing single-image, batched-image, and pseudo-label adaptation. We
evaluate VPA on multiple tasks, including out-of-distribution (OOD)
generalization, corruption robustness, and domain adaptation. Experimental
results reveal that VPA effectively enhances OOD generalization by 3.3% across
various models, surpassing previous test-time approaches. Furthermore, we show
that VPA improves corruption robustness by 6.5% compared to strong baselines.
Finally, we demonstrate that VPA also boosts domain adaptation performance by
a relative 5.2%. Our VPA also exhibits marked effectiveness in improving the
robustness of zero-shot recognition for vision-language models.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 20:25:51 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Sun",
"Jiachen",
""
],
[
"Ibrahim",
"Mark",
""
],
[
"Hall",
"Melissa",
""
],
[
"Evtimov",
"Ivan",
""
],
[
"Mao",
"Z. Morley",
""
],
[
"Ferrer",
"Cristian Canton",
""
],
[
"Hazirbas",
"Caner",
""
]
]
| new_dataset | 0.99912 |
2309.15252 | Zhiyun Deng | Zhiyun Deng, Yanjun Shi, Weiming Shen | V2X-Lead: LiDAR-based End-to-End Autonomous Driving with
Vehicle-to-Everything Communication Integration | To be published in IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), 2023 | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a LiDAR-based end-to-end autonomous driving method with
Vehicle-to-Everything (V2X) communication integration, termed V2X-Lead, to
address the challenges of navigating unregulated urban scenarios under
mixed-autonomy traffic conditions. The proposed method aims to handle imperfect
partial observations by fusing the onboard LiDAR sensor and V2X communication
data. A model-free and off-policy deep reinforcement learning (DRL) algorithm
is employed to train the driving agent, which incorporates a carefully designed
reward function and multi-task learning technique to enhance generalization
across diverse driving tasks and scenarios. Experimental results demonstrate
the effectiveness of the proposed approach in improving safety and efficiency
in the task of traversing unsignalized intersections in mixed-autonomy traffic,
and its generalizability to previously unseen scenarios, such as roundabouts.
The integration of V2X communication offers a significant data source for
autonomous vehicles (AVs) to perceive their surroundings beyond onboard
sensors, resulting in a more accurate and comprehensive perception of the
driving environment and more safe and robust driving behavior.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 20:26:03 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Deng",
"Zhiyun",
""
],
[
"Shi",
"Yanjun",
""
],
[
"Shen",
"Weiming",
""
]
]
| new_dataset | 0.998245 |
2309.15268 | Amanda Adkins | Amanda Adkins, Taijing Chen, Joydeep Biswas | ObVi-SLAM: Long-Term Object-Visual SLAM | 8 pages, 7 figures, 1 table | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Robots responsible for tasks over long time scales must be able to localize
consistently and scalably amid geometric, viewpoint, and appearance changes.
Existing visual SLAM approaches rely on low-level feature descriptors that are
not robust to such environmental changes and result in large map sizes that
scale poorly over long-term deployments. In contrast, object detections are
robust to environmental variations and lead to more compact representations,
but most object-based SLAM systems target short-term indoor deployments with
close objects. In this paper, we introduce ObVi-SLAM to overcome these
challenges by leveraging the best of both approaches. ObVi-SLAM uses low-level
visual features for high-quality short-term visual odometry; and to ensure
global, long-term consistency, ObVi-SLAM builds an uncertainty-aware long-term
map of persistent objects and updates it after every deployment. By evaluating
ObVi-SLAM on data from 16 deployment sessions spanning different weather and
lighting conditions, we empirically show that ObVi-SLAM generates accurate
localization estimates consistent over long time scales in spite of varying
appearance conditions.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 20:57:35 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Adkins",
"Amanda",
""
],
[
"Chen",
"Taijing",
""
],
[
"Biswas",
"Joydeep",
""
]
]
| new_dataset | 0.99895 |
2309.15324 | Jin Wang | Jin Wang and Zishan Huang and Hengli Liu and Nianyi Yang and Yinhao
Xiao | DefectHunter: A Novel LLM-Driven Boosted-Conformer-based Code
Vulnerability Detection Mechanism | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most pressing threats to computing systems is software
vulnerabilities, which can compromise both hardware and software components.
Existing methods for vulnerability detection remain suboptimal. Traditional
techniques are both time-consuming and labor-intensive, while
machine-learning-based approaches often underperform when applied to complex
datasets, due to their inability to capture high-dimensional relationships.
Previous deep-learning strategies also fall short in capturing sufficient
feature information. Although self-attention mechanisms can process information
over long distances, they fail to capture structural information. In this
paper, we introduce DefectHunter, an innovative model for vulnerability
identification that employs the Conformer mechanism. This mechanism fuses
self-attention with convolutional networks to capture both local, position-wise
features and global, content-based interactions. Furthermore, we optimize the
self-attention mechanisms to mitigate the issue of excessive attention heads
introducing extraneous noise by adjusting the denominator. We evaluated
DefectHunter against ten baseline methods using six industrial and two highly
complex datasets. On the QEMU dataset, DefectHunter exhibited a 20.62%
improvement in accuracy over Pongo-70B, and for the CWE-754 dataset, its
accuracy was 14.64% higher. To investigate how DefectHunter comprehends
vulnerabilities, we conducted a case study, which revealed that our model
effectively understands the mechanisms underlying vulnerabilities.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 00:10:29 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Wang",
"Jin",
""
],
[
"Huang",
"Zishan",
""
],
[
"Liu",
"Hengli",
""
],
[
"Yang",
"Nianyi",
""
],
[
"Xiao",
"Yinhao",
""
]
]
| new_dataset | 0.964896 |
2309.15334 | Sehan Lee | Sehan Lee, Jaechang Lim and Woo Youn Kim | C3Net: interatomic potential neural network for prediction of
physicochemical properties in heterogenous systems | 7 pages, 6 figures, 2 tables | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Understanding the interactions of a solute with its environment is of
fundamental importance in chemistry and biology. In this work, we propose a
deep neural network architecture for atom type embeddings in its molecular
context and interatomic potential that follows fundamental physical laws. The
architecture is applied to predict physicochemical properties in heterogeneous
systems including solvation in diverse solvents, 1-octanol-water partitioning,
and PAMPA with a single set of network weights. We show that our architecture
generalizes well to the physicochemical properties and outperforms
state-of-the-art approaches based on quantum mechanics and neural networks in
the task of solvation free energy prediction. The interatomic potentials at
each atom in a solute obtained from the model allow quantitative analysis of
the physicochemical properties at atomic resolution consistent with chemical
and physical reasoning. The software is available at
https://github.com/SehanLee/C3Net.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 00:51:24 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Lee",
"Sehan",
""
],
[
"Lim",
"Jaechang",
""
],
[
"Kim",
"Woo Youn",
""
]
]
| new_dataset | 0.987049 |
2309.15375 | Khuong Vo | Khuong Vo, Mostafa El-Khamy, Yoojin Choi | PPG to ECG Signal Translation for Continuous Atrial Fibrillation
Detection via Attention-based Deep State-Space Modeling | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | An electrocardiogram (ECG or EKG) is a medical test that measures the heart's
electrical activity. ECGs are often used to diagnose and monitor a wide range
of heart conditions, including arrhythmias, heart attacks, and heart failure.
On the one hand, the conventional ECG requires clinical measurement, which
restricts its deployment to medical facilities. On the other hand, single-lead
ECG has become popular on wearable devices using administered procedures. An
alternative to ECG is Photoplethysmography (PPG), which uses non-invasive,
low-cost optical methods to measure cardiac physiology, making it a suitable
option for capturing vital heart signs in daily life. As a result, it has
become increasingly popular in health monitoring and is used in various
clinical and commercial wearable devices. While ECG and PPG correlate strongly,
the latter does not offer significant clinical diagnostic value. Here, we
propose a subject-independent attention-based deep state-space model to
translate PPG signals to corresponding ECG waveforms. The model is highly
data-efficient by incorporating prior knowledge in terms of probabilistic
graphical models. Notably, the model enables the detection of atrial
fibrillation (AFib), the most common heart rhythm disorder in adults, by
complementing ECG's accuracy with continuous PPG monitoring. We evaluated the
model on 55 subjects from the MIMIC III database. Quantitative and qualitative
experimental results demonstrate the effectiveness and efficiency of our
approach.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 03:07:46 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Vo",
"Khuong",
""
],
[
"El-Khamy",
"Mostafa",
""
],
[
"Choi",
"Yoojin",
""
]
]
| new_dataset | 0.998466 |
2309.15378 | Xibai Lou | Xibai Lou, Houjian Yu, Ross Worobel, Yang Yang, Changhyun Choi | Adversarial Object Rearrangement in Constrained Environments with
Heterogeneous Graph Neural Networks | Accepted for publication in IROS 2023 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Adversarial object rearrangement in the real world (e.g., previously unseen
or oversized items in kitchens and stores) could benefit from understanding
task scenes, which inherently entail heterogeneous components such as current
objects, goal objects, and environmental constraints. The semantic
relationships among these components are distinct from each other and crucial
for multi-skilled robots to perform efficiently in everyday scenarios. We
propose a hierarchical robotic manipulation system that learns the underlying
relationships and maximizes the collaborative power of its diverse skills
(e.g., pick-place, push) for rearranging adversarial objects in constrained
environments. The high-level coordinator employs a heterogeneous graph neural
network (HetGNN), which reasons about the current objects, goal objects, and
environmental constraints; the low-level 3D Convolutional Neural Network-based
actors execute the action primitives. Our approach was trained entirely in
simulation, and achieved an average success rate of 87.88% and a planning cost
of 12.82 in real-world experiments, surpassing all baseline methods.
Supplementary material is available at
https://sites.google.com/umn.edu/versatile-rearrangement.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 03:15:45 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Lou",
"Xibai",
""
],
[
"Yu",
"Houjian",
""
],
[
"Worobel",
"Ross",
""
],
[
"Yang",
"Yang",
""
],
[
"Choi",
"Changhyun",
""
]
]
| new_dataset | 0.98849 |
2309.15379 | Yongxin Ni | Yongxin Ni and Yu Cheng and Xiangyan Liu and Junchen Fu and Youhua Li
and Xiangnan He and Yongfeng Zhang and Fajie Yuan | A Content-Driven Micro-Video Recommendation Dataset at Scale | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Micro-videos have recently gained immense popularity, sparking critical
research in micro-video recommendation with significant implications for the
entertainment, advertising, and e-commerce industries. However, the lack of
large-scale public micro-video datasets poses a major challenge for developing
effective recommender systems. To address this challenge, we introduce a very
large micro-video recommendation dataset, named "MicroLens", consisting of one
billion user-item interaction behaviors, 34 million users, and one million
micro-videos. This dataset also contains various raw modality information about
videos, including titles, cover images, audio, and full-length videos.
MicroLens serves as a benchmark for content-driven micro-video recommendation,
enabling researchers to utilize various modalities of video information for
recommendation, rather than relying solely on item IDs or off-the-shelf video
features extracted from a pre-trained network. Our benchmarking of multiple
recommender models and video encoders on MicroLens has yielded valuable
insights into the performance of micro-video recommendation. We believe that
this dataset will not only benefit the recommender system community but also
promote the development of the video understanding field. Our datasets and code
are available at https://github.com/westlake-repl/MicroLens.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 03:15:52 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Ni",
"Yongxin",
""
],
[
"Cheng",
"Yu",
""
],
[
"Liu",
"Xiangyan",
""
],
[
"Fu",
"Junchen",
""
],
[
"Li",
"Youhua",
""
],
[
"He",
"Xiangnan",
""
],
[
"Zhang",
"Yongfeng",
""
],
[
"Yuan",
"Fajie",
""
]
]
| new_dataset | 0.999526 |
2309.15394 | Renlang Huang | Renlang Huang, Minglei Zhao, Jiming Chen, and Liang Li | KDD-LOAM: Jointly Learned Keypoint Detector and Descriptors Assisted
LiDAR Odometry and Mapping | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse keypoint matching based on distinct 3D feature representations can
improve the efficiency and robustness of point cloud registration. Existing
learning-based 3D descriptors and keypoint detectors are either independent or
loosely coupled, so they cannot fully adapt to each other. In this work, we
propose a tightly coupled keypoint detector and descriptor (TCKDD) based on a
multi-task fully convolutional network with a probabilistic detection loss. In
particular, this self-supervised detection loss fully adapts the keypoint
detector to any jointly learned descriptors and benefits the self-supervised
learning of descriptors. Extensive experiments on both indoor and outdoor
datasets show that our TCKDD achieves state-of-the-art performance in point
cloud registration. Furthermore, we design a keypoint detector and
descriptors-assisted LiDAR odometry and mapping framework (KDD-LOAM), whose
real-time odometry relies on keypoint descriptor matching-based RANSAC. The
sparse keypoints are further used for efficient scan-to-map registration and
mapping. Experiments on KITTI dataset demonstrate that KDD-LOAM significantly
surpasses LOAM and shows competitive performance in odometry.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 04:10:52 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Huang",
"Renlang",
""
],
[
"Zhao",
"Minglei",
""
],
[
"Chen",
"Jiming",
""
],
[
"Li",
"Liang",
""
]
]
| new_dataset | 0.986179 |
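Keypoint descriptor matching followed by RANSAC, as used in KDD-LOAM's real-time odometry described above, is commonly bootstrapped with mutual nearest-neighbour correspondences. The sketch below shows only that generic matching step, with arbitrary descriptor dimensionality; it is not the paper's implementation.

```python
# Generic mutual nearest-neighbour matching of keypoint descriptors; the
# resulting candidate correspondences would then be fed to RANSAC-based
# registration. Descriptor dimensionality here is arbitrary.
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """desc_a: (Na, D), desc_b: (Nb, D) descriptors."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    a_to_b = dists.argmin(axis=1)
    b_to_a = dists.argmin(axis=0)
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

a = np.random.randn(100, 32)
b = np.random.randn(120, 32)
print(len(mutual_nn_matches(a, b)), "candidate correspondences for RANSAC")
```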
2309.15426 | Zhang Chen | Zhang Chen, Zhong Li, Liangchen Song, Lele Chen, Jingyi Yu, Junsong
Yuan, Yi Xu | NeuRBF: A Neural Fields Representation with Adaptive Radial Basis
Functions | Accepted to ICCV 2023 Oral. Project page:
https://oppo-us-research.github.io/NeuRBF-website/ | null | null | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel type of neural fields that uses general radial bases for
signal representation. State-of-the-art neural fields typically rely on
grid-based representations for storing local neural features and N-dimensional
linear kernels for interpolating features at continuous query points. The
spatial positions of their neural features are fixed on grid nodes and cannot
well adapt to target signals. Our method instead builds upon general radial
bases with flexible kernel position and shape, which have higher spatial
adaptivity and can more closely fit target signals. To further improve the
channel-wise capacity of radial basis functions, we propose to compose them
with multi-frequency sinusoid functions. This technique extends a radial basis
to multiple Fourier radial bases of different frequency bands without requiring
extra parameters, facilitating the representation of details. Moreover, by
marrying adaptive radial bases with grid-based ones, our hybrid combination
inherits both adaptivity and interpolation smoothness. We carefully designed
weighting schemes to let radial bases adapt to different types of signals
effectively. Our experiments on 2D image and 3D signed distance field
representation demonstrate the higher accuracy and compactness of our method
than prior arts. When applied to neural radiance field reconstruction, our
method achieves state-of-the-art rendering quality, with small model size and
comparable training speed.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 06:32:05 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Chen",
"Zhang",
""
],
[
"Li",
"Zhong",
""
],
[
"Song",
"Liangchen",
""
],
[
"Chen",
"Lele",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Yuan",
"Junsong",
""
],
[
"Xu",
"Yi",
""
]
]
| new_dataset | 0.996977 |
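The composition of a radial basis with multi-frequency sinusoids described in the NeuRBF abstract above can be illustrated in a few lines of numpy. The kernel form, phase term, and frequency schedule below are simplifications chosen for the sketch and may differ from the paper's parameterization.

```python
# Minimal numpy illustration: one radial kernel is expanded into several
# feature channels, one per frequency band, by multiplying it with sinusoids
# of increasing frequency, without adding kernel parameters.
import numpy as np

def multi_freq_rbf(x, center, sigma, freqs=(1.0, 2.0, 4.0, 8.0)):
    """x: (N, D) query points; center: (D,); returns (N, len(freqs)) features."""
    d2 = np.sum((x - center) ** 2, axis=-1)       # squared distance to the center
    radial = np.exp(-d2 / (2.0 * sigma ** 2))     # base radial response
    phase = x @ np.ones(x.shape[-1])              # simple linear phase term
    return np.stack([radial * np.sin(f * phase) for f in freqs], axis=-1)

pts = np.random.rand(5, 2)
print(multi_freq_rbf(pts, center=np.array([0.5, 0.5]), sigma=0.2).shape)  # (5, 4)
```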
2309.15432 | Aiden Grossman | Aiden Grossman, Ludger Paehler, Konstantinos Parasyris, Tal Ben-Nun,
Jacob Hegna, William Moses, Jose M Monsalve Diaz, Mircea Trofin, Johannes
Doerfert | ComPile: A Large IR Dataset from Production Sources | null | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code is increasingly becoming a core data modality of modern machine learning
research, impacting not only the way we write code with conversational agents
like OpenAI's ChatGPT, Google's Bard, or Anthropic's Claude, and the way we
translate code from one language into another, but also the compiler
infrastructure underlying the language. While modeling approaches may vary and
representations differ, the targeted tasks often remain the same within the
individual classes of models. Relying solely on the ability of modern models to
extract information from unstructured code does not take advantage of 70 years
of programming language and compiler development, as it leaves the structure
inherent to programs unused during data collection. This detracts from the performance
of models working over a tokenized representation of input code and precludes
the use of these models in the compiler itself. To work towards the first
intermediate representation (IR) based models, we fully utilize the LLVM
compiler infrastructure, shared by a number of languages, to generate a 182B
token dataset of LLVM IR. We generated this dataset from programming languages
built on the shared LLVM infrastructure, including Rust, Swift, Julia, and
C/C++, by hooking into LLVM code generation either through the language's
package manager or the compiler directly to extract the dataset of intermediate
representations from production grade programs. Statistical analysis proves the
utility of our dataset not only for large language model training, but also for
the introspection into the code generation process itself with the dataset
showing great promise for machine-learned compiler components.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 06:50:48 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Grossman",
"Aiden",
""
],
[
"Paehler",
"Ludger",
""
],
[
"Parasyris",
"Konstantinos",
""
],
[
"Ben-Nun",
"Tal",
""
],
[
"Hegna",
"Jacob",
""
],
[
"Moses",
"William",
""
],
[
"Diaz",
"Jose M Monsalve",
""
],
[
"Trofin",
"Mircea",
""
],
[
"Doerfert",
"Johannes",
""
]
]
| new_dataset | 0.999824 |
2309.15461 | Mengyuan Liu | June M. Liu, Donghao Li, He Cao, Tianhe Ren, Zeyi Liao and Jiamin Wu | ChatCounselor: A Large Language Models for Mental Health Support | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents ChatCounselor, a large language model (LLM) solution
designed to provide mental health support. Unlike generic chatbots,
ChatCounselor is distinguished by its foundation in real conversations between
consulting clients and professional psychologists, enabling it to possess
specialized knowledge and counseling skills in the field of psychology. The
training dataset, Psych8k, was constructed from 260 in-depth interviews, each
spanning an hour. To assess the quality of counseling responses, the Counseling
Bench was devised. Leveraging GPT-4 and meticulously crafted prompts based on
seven metrics of psychological counseling assessment, the model underwent
evaluation using a set of real-world counseling questions. Impressively,
ChatCounselor surpasses existing open-source models on the Counseling Bench and
approaches the performance level of ChatGPT, showcasing the remarkable
enhancement in model capability attained through high-quality domain-specific
data.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 07:57:21 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Liu",
"June M.",
""
],
[
"Li",
"Donghao",
""
],
[
"Cao",
"He",
""
],
[
"Ren",
"Tianhe",
""
],
[
"Liao",
"Zeyi",
""
],
[
"Wu",
"Jiamin",
""
]
]
| new_dataset | 0.99971 |
2309.15474 | Xin Zhou | Xin Zhou, Bowen Xu, DongGyun Han, Zhou Yang, Junda He and David Lo | CCBERT: Self-Supervised Code Change Representation Learning | 12 Pages; Accepted in the Main Track of The International Conference
on Software Maintenance and Evolution (ICSME) 2023 | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous code changes are made by developers in their daily work, and a
superior representation of code changes is desired for effective code change
analysis. Recently, Hoang et al. proposed CC2Vec, a neural network-based
approach that learns a distributed representation of code changes to capture
the semantic intent of the changes. Despite demonstrated effectiveness in
multiple tasks, CC2Vec has several limitations: 1) it considers only
coarse-grained information about code changes, and 2) it relies on log messages
rather than the self-contained content of the code changes. In this work, we
propose CCBERT (Code Change BERT), a new
Transformer-based pre-trained model that learns a generic representation of
code changes based on a large-scale dataset containing massive unlabeled code
changes. CCBERT is pre-trained on four proposed self-supervised objectives that
are specialized for learning code change representations based on the contents
of code changes. CCBERT perceives fine-grained code changes at the token level
by learning from the old and new versions of the content, along with the edit
actions. Our experiments demonstrate that CCBERT significantly outperforms
CC2Vec or the state-of-the-art approaches of the downstream tasks by
7.7%–14.0% in terms of different metrics and tasks. CCBERT consistently
outperforms large pre-trained code models, such as CodeBERT, while requiring
6–10× less training time, 5–30× less inference time, and
7.9× less GPU memory.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 08:17:03 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Zhou",
"Xin",
""
],
[
"Xu",
"Bowen",
""
],
[
"Han",
"DongGyun",
""
],
[
"Yang",
"Zhou",
""
],
[
"He",
"Junda",
""
],
[
"Lo",
"David",
""
]
]
| new_dataset | 0.998381 |
2309.15492 | Phillip Karle | Phillip Karle, Tobias Betz, Marcin Bosk, Felix Fent, Nils Gehrke,
Maximilian Geisslinger, Luis Gressenbuch, Philipp Hafemann, Sebastian Huber,
Maximilian H\"ubner, Sebastian Huch, Gemb Kaljavesi, Tobias Kerbl, Dominik
Kulmer, Tobias Mascetta, Sebastian Maierhofer, Florian Pfab, Filip Rezabek,
Esteban Rivera, Simon Sagmeister, Leander Seidlitz, Florian Sauerbeck, Ilir
Tahiraj, Rainer Trauth, Nico Uhlemann, Gerald W\"ursching, Baha Zarrouki,
Matthias Althoff, Johannes Betz, Klaus Bengler, Georg Carle, Frank Diermeyer,
J\"org Ott, Markus Lienkamp | EDGAR: An Autonomous Driving Research Platform -- From Feature
Development to Real-World Application | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | While current research and development of autonomous driving primarily
focuses on developing new features and algorithms, the transfer from isolated
software components into an entire software stack has been covered sparsely.
Besides that, due to the complexity of autonomous software stacks and public
road traffic, the optimal validation of entire stacks is an open research
problem. Our paper targets these two aspects. We present our autonomous
research vehicle EDGAR and its digital twin, a detailed virtual duplication of
the vehicle. While the vehicle's setup is closely related to the state of the
art, its virtual duplication is a valuable contribution as it is crucial for a
consistent validation process from simulation to real-world tests. In addition,
different development teams can work with the same model, making integration
and testing of the software stacks much easier, significantly accelerating the
development process. The real and virtual vehicles are embedded in a
comprehensive development environment, which is also introduced. All parameters
of the digital twin are provided open-source at
https://github.com/TUMFTM/edgar_digital_twin.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 08:43:40 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Karle",
"Phillip",
""
],
[
"Betz",
"Tobias",
""
],
[
"Bosk",
"Marcin",
""
],
[
"Fent",
"Felix",
""
],
[
"Gehrke",
"Nils",
""
],
[
"Geisslinger",
"Maximilian",
""
],
[
"Gressenbuch",
"Luis",
""
],
[
"Hafemann",
"Philipp",
""
],
[
"Huber",
"Sebastian",
""
],
[
"Hübner",
"Maximilian",
""
],
[
"Huch",
"Sebastian",
""
],
[
"Kaljavesi",
"Gemb",
""
],
[
"Kerbl",
"Tobias",
""
],
[
"Kulmer",
"Dominik",
""
],
[
"Mascetta",
"Tobias",
""
],
[
"Maierhofer",
"Sebastian",
""
],
[
"Pfab",
"Florian",
""
],
[
"Rezabek",
"Filip",
""
],
[
"Rivera",
"Esteban",
""
],
[
"Sagmeister",
"Simon",
""
],
[
"Seidlitz",
"Leander",
""
],
[
"Sauerbeck",
"Florian",
""
],
[
"Tahiraj",
"Ilir",
""
],
[
"Trauth",
"Rainer",
""
],
[
"Uhlemann",
"Nico",
""
],
[
"Würsching",
"Gerald",
""
],
[
"Zarrouki",
"Baha",
""
],
[
"Althoff",
"Matthias",
""
],
[
"Betz",
"Johannes",
""
],
[
"Bengler",
"Klaus",
""
],
[
"Carle",
"Georg",
""
],
[
"Diermeyer",
"Frank",
""
],
[
"Ott",
"Jörg",
""
],
[
"Lienkamp",
"Markus",
""
]
]
| new_dataset | 0.998262 |
2309.15495 | Debanjali Bhattacharya Dr. | Naveen Kanigiri, Manohar Suggula, Debanjali Bhattacharya and Neelam
Sinha | Investigating the changes in BOLD responses during viewing of images
with varied complexity: An fMRI time-series based analysis on human vision | The paper is accepted for publication in 3rd International Conference
on AI-ML Systems (AIMLSystems 2023), to be held on 25-28 October 2023,
Bengaluru, India. arXiv admin note: text overlap with arXiv:2309.03590 | null | null | null | cs.CV eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Functional MRI (fMRI) is widely used to examine brain functionality by
detecting alteration in oxygenated blood flow that arises with brain activity.
This work aims to investigate the neurological variation of human brain
responses during viewing of images with varied complexity using fMRI time
series (TS) analysis. Publicly available BOLD5000 dataset is used for this
purpose which contains fMRI scans while viewing 5254 distinct images of diverse
categories, drawn from three standard computer vision datasets: COCO, Imagenet
and SUN. To understand vision, it is important to study how brain functions
while looking at images of diverse complexities. Our first study employs
classical machine learning and deep learning strategies to classify image
complexity-specific fMRI TS, which represent instances when images from COCO,
Imagenet and SUN datasets are seen. The implementation of this classification
across visual datasets holds great significance, as it provides valuable
insights into the fluctuations in BOLD signals when perceiving images of
varying complexities. Subsequently, temporal semantic segmentation is also
performed on whole fMRI TS to segment these time instances. The obtained result
of this analysis establishes a baseline for studying how differently the human
brain functions while looking at images of diverse complexities. Therefore,
accurate identification and distinguishing of variations in BOLD signals from
fMRI TS data serves as a critical initial step in vision studies, providing
insightful explanations for how static images with diverse complexities are
perceived.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 08:46:09 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Kanigiri",
"Naveen",
""
],
[
"Suggula",
"Manohar",
""
],
[
"Bhattacharya",
"Debanjali",
""
],
[
"Sinha",
"Neelam",
""
]
]
| new_dataset | 0.992268 |
2309.15500 | Lehao Wang | Lehao Wang, Zhiwen Yu, Haoyi Yu, Sicong Liu, Yaxiong Xie, Bin Guo,
Yunxin Liu | AdaEvo: Edge-Assisted Continuous and Timely DNN Model Evolution for
Mobile Devices | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile video applications today have attracted significant attention. Deep
learning model (e.g. deep neural network, DNN) compression is widely used to
enable on-device inference for facilitating robust and private mobile video
applications. The compressed DNN, however, is vulnerable to the agnostic data
drift of the live video captured from the dynamically changing mobile
scenarios. To combat the data drift, mobile ends rely on edge servers to
continuously evolve and re-compress the DNN with freshly collected data. We
design a framework, AdaEvo, that efficiently supports the resource-limited edge
server handling mobile DNN evolution tasks from multiple mobile ends. The key
goal of AdaEvo is to maximize the average quality of experience (QoE), e.g. the
proportion of high-quality DNN service time to the entire life cycle, for all
mobile ends. Specifically, it estimates the DNN accuracy drops at the mobile
end without labels and performs a dedicated video frame sampling strategy to
control the size of retraining data. In addition, it balances the limited
computing and memory resources on the edge server and the competition between
asynchronous tasks initiated by different mobile users. With an extensive
evaluation of real-world videos from mobile scenarios and across four diverse
mobile tasks, experimental results show that AdaEvo enables up to 34% accuracy
improvement and 32% average QoE improvement.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 08:52:28 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Wang",
"Lehao",
""
],
[
"Yu",
"Zhiwen",
""
],
[
"Yu",
"Haoyi",
""
],
[
"Liu",
"Sicong",
""
],
[
"Xie",
"Yaxiong",
""
],
[
"Guo",
"Bin",
""
],
[
"Liu",
"Yunxin",
""
]
]
| new_dataset | 0.991734 |
2309.15508 | Li Niu | Lingxiao Lu, Bo Zhang, Li Niu | DreamCom: Finetuning Text-guided Inpainting Model for Image Composition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of image composition is merging a foreground object into a
background image to obtain a realistic composite image. Recently, generative
composition methods are built on large pretrained diffusion models, due to
their unprecedented image generation ability. They train a model on abundant
pairs of foregrounds and backgrounds, so that it can be directly applied to a
new pair of foreground and background at test time. However, the generated
results often lose the foreground details and exhibit noticeable artifacts. In
this work, we propose an embarrassingly simple approach named DreamCom inspired
by DreamBooth. Specifically, given a few reference images for a subject, we
finetune text-guided inpainting diffusion model to associate this subject with
a special token and inpaint this subject in the specified bounding box. We also
construct a new dataset named MureCom well-tailored for this task.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 09:23:50 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Lu",
"Lingxiao",
""
],
[
"Zhang",
"Bo",
""
],
[
"Niu",
"Li",
""
]
]
| new_dataset | 0.996845 |
2309.15519 | Futa Waseda | Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, and Isao
Echizen | Defending Against Physical Adversarial Patch Attacks on Infrared Human
Detection | Lukas Strack and Futa Waseda contributed equally. 4 pages, 2 figures,
Under-review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Infrared detection is an emerging technique for safety-critical tasks owing
to its remarkable anti-interference capability. However, recent studies have
revealed that it is vulnerable to physically-realizable adversarial patches,
posing risks in its real-world applications. To address this problem, we are
the first to investigate defense strategies against adversarial patch attacks
on infrared detection, especially human detection. We have devised a
straightforward defense strategy, patch-based occlusion-aware detection (POD),
which efficiently augments training samples with random patches and
subsequently detects them. POD not only robustly detects people but also
identifies adversarial patch locations. Surprisingly, while being extremely
computationally efficient, POD easily generalizes to state-of-the-art
adversarial patch attacks that are unseen during training. Furthermore, POD
improves detection precision even in a clean (i.e., no-patch) situation due to
the data augmentation effect. Evaluation demonstrated that POD is robust to
adversarial patches of various shapes and sizes. The effectiveness of our
baseline approach is shown to be a viable defense mechanism for real-world
infrared human detection systems, paving the way for exploring future research
directions.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 09:37:29 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Strack",
"Lukas",
""
],
[
"Waseda",
"Futa",
""
],
[
"Nguyen",
"Huy H.",
""
],
[
"Zheng",
"Yinqiang",
""
],
[
"Echizen",
"Isao",
""
]
]
| new_dataset | 0.998917 |
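The core of the POD defense above, augmenting training samples with random patches and learning to detect them, can be sketched as a simple augmentation routine. The patch statistics, box format, and label name below are arbitrary choices for illustration, not the authors' settings.

```python
# Toy occlusion-aware augmentation: paste a random noise rectangle into a
# training frame and emit a box label for it, so the detector learns to
# localize patches in addition to people. Box format and label are arbitrary.
import numpy as np

def add_random_patch(img, min_frac=0.1, max_frac=0.3, rng=np.random):
    h, w = img.shape[:2]
    ph = int(h * rng.uniform(min_frac, max_frac))
    pw = int(w * rng.uniform(min_frac, max_frac))
    y, x = rng.randint(0, h - ph), rng.randint(0, w - pw)
    out = img.copy()
    out[y:y + ph, x:x + pw] = rng.uniform(0.0, 1.0, size=(ph, pw))  # fake patch
    return out, (x, y, x + pw, y + ph, "adversarial_patch")

frame = np.zeros((240, 320), dtype=np.float32)   # stand-in infrared frame
aug, box = add_random_patch(frame)
print(box)
```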
2309.15526 | Xujie Kang | Xujie Kang and Kanglin Liu and Jiang Duan and Yuanhao Gong and Guoping
Qiu | P2I-NET: Mapping Camera Pose to Image via Adversarial Learning for New
View Synthesis in Real Indoor Environments | null | null | 10.1145/3581783.3612356 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Given a new 6DoF camera pose in an indoor environment, we study the
challenging problem of predicting the view from that pose based on a set of
reference RGBD views. Existing explicit or implicit 3D geometry construction
methods are computationally expensive while those based on learning have
predominantly focused on isolated views of object categories with regular
geometric structure. Differing from the traditional render-inpaint
approach to new view synthesis in the real indoor environment, we propose a
conditional generative adversarial neural network (P2I-NET) to directly predict
the new view from the given pose. P2I-NET learns the conditional distribution
of the images of the environment for establishing the correspondence between
the camera pose and its view of the environment, and achieves this through a
number of innovative designs in its architecture and training loss function.
Two auxiliary discriminator constraints are introduced for enforcing the
consistency between the pose of the generated image and that of the
corresponding real world image in both the latent feature space and the real
world pose space. Additionally a deep convolutional neural network (CNN) is
introduced to further reinforce this consistency in the pixel space. We have
performed extensive new view synthesis experiments on real indoor datasets.
Results show that P2I-NET has superior performance against a number of NeRF
based strong baseline models. In particular, we show that P2I-NET is 40 to 100
times faster than these competitor techniques while synthesising similar
quality images. Furthermore, we contribute a new publicly available indoor
environment dataset containing 22 high resolution RGBD videos where each frame
also has accurate camera pose parameters.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 09:44:14 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Kang",
"Xujie",
""
],
[
"Liu",
"Kanglin",
""
],
[
"Duan",
"Jiang",
""
],
[
"Gong",
"Yuanhao",
""
],
[
"Qiu",
"Guoping",
""
]
]
| new_dataset | 0.990302 |
2309.15535 | Mikolaj Czerkawski | Mikolaj Czerkawski, Alistair Francis | From LAION-5B to LAION-EO: Filtering Billions of Images Using Anchor
Datasets for Satellite Image Extraction | Accepted at the ICCV 2023 Workshop "Towards the Next Generation of
Computer Vision Datasets: DataComp Track" | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large datasets, such as LAION-5B, contain a diverse distribution of images
shared online. However, extraction of domain-specific subsets of large image
corpora is challenging. The extraction approach based on an anchor dataset,
combined with further filtering, is proposed here and demonstrated for the
domain of satellite imagery. This results in the release of LAION-EO, a dataset
sourced from the web containing pairs of text and satellite images in high
(pixel-wise) resolution. The paper outlines the acquisition procedure as well
as some of the features of the dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 09:53:38 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Czerkawski",
"Mikolaj",
""
],
[
"Francis",
"Alistair",
""
]
]
| new_dataset | 0.999552 |
2309.15569 | Marcus De Ree | Marcus de Ree, Georgios Mantas, Jonathan Rodriguez | Grain-128PLE: Generic Physical-Layer Encryption for IoT Networks | Paper accepted to the GLOBECOM 2023 conference | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physical layer security (PLS) encompasses techniques proposed at the physical
layer to achieve information security objectives while requiring a minimal
resource footprint. The channel coding-based secrecy and signal
modulation-based encryption approaches are reliant on certain channel
conditions or a certain communications protocol stack to operate on, which
prevents them from being a generic solution. This paper presents Grain-128PLE,
a lightweight physical layer encryption (PLE) scheme that is derived from the
Grain-128AEAD v2 stream cipher. The Grain-128PLE stream cipher performs
encryption and decryption at the physical layer, in between the channel coding
and signal modulation processes. This placement, like that of the A5 stream
cipher that had been used in the GSM communications standard, makes it a
generic solution for providing data confidentiality in IoT networks. The design
of Grain-128PLE maintains the structure of the main building blocks of the
original Grain-128AEAD v2 stream cipher, evaluated for its security strength
during NIST's recent Lightweight Cryptography competition, and is therefore
expected to achieve similar levels of security.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 10:48:52 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"de Ree",
"Marcus",
""
],
[
"Mantas",
"Georgios",
""
],
[
"Rodriguez",
"Jonathan",
""
]
]
| new_dataset | 0.999755 |
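The placement of physical-layer encryption between channel coding and signal modulation, as described in the Grain-128PLE abstract above, can be illustrated with a toy BPSK pipeline. The keystream below is a placeholder PRNG, not Grain-128AEAD v2 or Grain-128PLE, and the modulation is deliberately simplistic.

```python
# Toy pipeline showing where physical-layer encryption sits: keystream bits
# are XORed with channel-coded bits right before modulation and removed right
# after demodulation. The keystream is a placeholder PRNG, NOT Grain-128AEAD v2.
import numpy as np

def keystream(seed, n):
    return np.random.default_rng(seed).integers(0, 2, size=n, dtype=np.uint8)

def bpsk_mod(bits):
    return 1.0 - 2.0 * bits.astype(np.float64)   # 0 -> +1, 1 -> -1

def bpsk_demod(symbols):
    return (symbols < 0).astype(np.uint8)

coded_bits = np.random.randint(0, 2, 64, dtype=np.uint8)  # output of channel coding
ks = keystream(seed=42, n=coded_bits.size)
tx_symbols = bpsk_mod(coded_bits ^ ks)           # encrypt, then modulate
rx_bits = bpsk_demod(tx_symbols) ^ ks            # demodulate, then decrypt
assert np.array_equal(rx_bits, coded_bits)       # handed to the channel decoder
```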
2309.15572 | Yuhang Liu | Yuhang Liu and Boyi Sun and Yuke Li and Yuzheng Hu and Fei-Yue Wang | HPL-ViT: A Unified Perception Framework for Heterogeneous Parallel
LiDARs in V2V | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To develop the next generation of intelligent LiDARs, we propose a novel
framework of parallel LiDARs and construct a hardware prototype in our
experimental platform, DAWN (Digital Artificial World for Natural). It
emphasizes the tight integration of physical and digital space in LiDAR
systems, with networking being one of its supported core features. In the
context of autonomous driving, V2V (Vehicle-to-Vehicle) technology enables
efficient information sharing between different agents which significantly
promotes the development of LiDAR networks. However, current research operates
under an ideal situation where all vehicles are equipped with identical LiDAR,
ignoring the diversity of LiDAR categories and operating frequencies. In this
paper, we first utilize OpenCDA and RLS (Realistic LiDAR Simulation) to
construct a novel heterogeneous LiDAR dataset named OPV2V-HPL. Additionally, we
present HPL-ViT, a pioneering architecture designed for robust feature fusion
in heterogeneous and dynamic scenarios. It uses a graph-attention Transformer
to extract domain-specific features for each agent, coupled with a
cross-attention mechanism for the final fusion. Extensive experiments on
OPV2V-HPL demonstrate that HPL-ViT achieves SOTA (state-of-the-art) performance
in all settings and exhibits outstanding generalization capabilities.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 10:55:44 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Liu",
"Yuhang",
""
],
[
"Sun",
"Boyi",
""
],
[
"Li",
"Yuke",
""
],
[
"Hu",
"Yuzheng",
""
],
[
"Wang",
"Fei-Yue",
""
]
]
| new_dataset | 0.998756 |
2309.15578 | Roberto Casula | Marco Micheletto and Roberto Casula and Giulia Orr\`u and Simone Carta
and Sara Concas and Simone Maurizio La Cava and Julian Fierrez and Gian Luca
Marcialis | LivDet2023 -- Fingerprint Liveness Detection Competition: Advancing
Generalization | 9 pages, 10 tables, IEEE International Joint Conference on Biometrics
(IJCB 2023) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The International Fingerprint Liveness Detection Competition (LivDet) is a
biennial event that invites academic and industry participants to prove their
advancements in Fingerprint Presentation Attack Detection (PAD). This edition,
LivDet2023, proposed two challenges, Liveness Detection in Action and
Fingerprint Representation, to evaluate the efficacy of PAD embedded in
verification systems and the effectiveness and compactness of feature sets. A
third, hidden challenge is the inclusion of two subsets in the training set
whose sensor information is unknown, testing participants' ability to generalize
their models. Only bona fide fingerprint samples were provided to participants,
and the competition reports and assesses the performance of their algorithms
under this limitation in data availability.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 11:24:01 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Micheletto",
"Marco",
""
],
[
"Casula",
"Roberto",
""
],
[
"Orrù",
"Giulia",
""
],
[
"Carta",
"Simone",
""
],
[
"Concas",
"Sara",
""
],
[
"La Cava",
"Simone Maurizio",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Marcialis",
"Gian Luca",
""
]
]
| new_dataset | 0.988903 |
2309.15596 | Shizhe Chen | Shizhe Chen, Ricardo Garcia, Cordelia Schmid, Ivan Laptev | PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation | Accepted to CoRL 2023. Project website:
https://www.di.ens.fr/willow/research/polarnet/ | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability for robots to comprehend and execute manipulation tasks based on
natural language instructions is a long-term goal in robotics. The dominant
approaches for language-guided manipulation use 2D image representations, which
face difficulties in combining multi-view cameras and inferring precise 3D
positions and relationships. To address these limitations, we propose a 3D
point cloud based policy called PolarNet for language-guided manipulation. It
leverages carefully designed point cloud inputs, efficient point cloud
encoders, and multimodal transformers to learn 3D point cloud representations
and integrate them with language instructions for action prediction. PolarNet
is shown to be effective and data efficient in a variety of experiments
conducted on the RLBench benchmark. It outperforms state-of-the-art 2D and 3D
approaches in both single-task and multi-task learning. It also achieves
promising results on a real robot.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 11:50:43 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Chen",
"Shizhe",
""
],
[
"Garcia",
"Ricardo",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Laptev",
"Ivan",
""
]
]
| new_dataset | 0.9996 |
2309.15599 | Quentin Febvre | J. Emmanuel Johnson, Quentin Febvre, Anastasia Gorbunova, Sammy
Metref, Maxime Ballarotta, Julien Le Sommer, Ronan Fablet | OceanBench: The Sea Surface Height Edition | J. Emmanuel Johnson and Quentin Febvre contributed equally to this
work | null | null | null | cs.LG physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | The ocean profoundly influences human activities and plays a critical role in
climate regulation. Our understanding has improved over the last decades with
the advent of satellite remote sensing data, allowing us to capture essential
quantities over the globe, e.g., sea surface height (SSH). However, ocean
satellite data presents challenges for information extraction due to its
sparsity and irregular sampling, signal complexity, and noise. Machine learning
(ML) techniques have demonstrated their capabilities in dealing with
large-scale, complex signals. Therefore we see an opportunity for ML models to
harness the information contained in ocean satellite data. However, data
representation and relevant evaluation metrics can be the defining factors when
determining the success of applied ML. The processing steps from the raw
observation data to a ML-ready state and from model outputs to interpretable
quantities require domain expertise, which can be a significant barrier to
entry for ML researchers. OceanBench is a unifying framework that provides
standardized processing steps that comply with domain-expert standards. It
provides plug-and-play data and pre-configured pipelines for ML researchers to
benchmark their models and a transparent configurable framework for researchers
to customize and extend the pipeline for their tasks. In this work, we
demonstrate the OceanBench framework through a first edition dedicated to SSH
interpolation challenges. We provide datasets and ML-ready benchmarking
pipelines for the long-standing problem of interpolating observations from
simulated ocean satellite data, multi-modal and multi-sensor fusion issues, and
transfer-learning to real ocean satellite observations. The OceanBench
framework is available at github.com/jejjohnson/oceanbench and the dataset
registry is available at github.com/quentinf00/oceanbench-data-registry.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 12:00:40 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Johnson",
"J. Emmanuel",
""
],
[
"Febvre",
"Quentin",
""
],
[
"Gorbunova",
"Anastasia",
""
],
[
"Metref",
"Sammy",
""
],
[
"Ballarotta",
"Maxime",
""
],
[
"Sommer",
"Julien Le",
""
],
[
"Fablet",
"Ronan",
""
]
]
| new_dataset | 0.973727 |
2309.15656 | Ildiko Pilan | Ildik\'o Pil\'an, Laurent Pr\'evot, Hendrik Buschmeier, Pierre Lison | Conversational Feedback in Scripted versus Spontaneous Dialogues: A
Comparative Analysis | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Scripted dialogues such as movie and TV subtitles constitute a widespread
source of training data for conversational NLP models. However, the linguistic
characteristics of those dialogues are notably different from those observed in
corpora of spontaneous interactions. This difference is particularly marked for
communicative feedback and grounding phenomena such as backchannels,
acknowledgments, or clarification requests. Such signals are known to
constitute a key part of the conversation flow and are used by the dialogue
participants to provide feedback to one another on their perception of the
ongoing interaction. This paper presents a quantitative analysis of such
communicative feedback phenomena in both subtitles and spontaneous
conversations. Based on dialogue data in English, French, German, Hungarian,
Italian, Japanese, Norwegian and Chinese, we extract both lexical statistics
and classification outputs obtained with a neural dialogue act tagger. Two main
findings of this empirical study are that (1) conversational feedback is
markedly less frequent in subtitles than in spontaneous dialogues and (2)
subtitles contain a higher proportion of negative feedback. Furthermore, we
show that dialogue responses generated by large language models also follow the
same underlying trends and include comparatively few occurrences of
communicative feedback, except when those models are explicitly fine-tuned on
spontaneous dialogues.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 13:45:38 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Pilán",
"Ildikó",
""
],
[
"Prévot",
"Laurent",
""
],
[
"Buschmeier",
"Hendrik",
""
],
[
"Lison",
"Pierre",
""
]
]
| new_dataset | 0.956555 |
2309.15670 | Sumit Banshal Mr | Sumit Kumar Banshal, Sajal Das, Shumaiya Akter Shammi and Narayan
Ranjan Chakraborty | MONOVAB : An Annotated Corpus for Bangla Multi-label Emotion Detection | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, Sentiment Analysis (SA) and Emotion Recognition (ER) have
become increasingly popular in the Bangla language, which is the seventh most
spoken language throughout the entire world. However, the language is
structurally complicated, which makes it arduous to extract emotions in
an accurate manner. Several distinct approaches, such as the extraction of
positive and negative sentiments as well as multiclass emotions, have been
implemented in this field of study. Nevertheless, the extraction of multiple
sentiments, which involves identifying several feelings in a single piece of
text, remains an almost untouched area in this language. Therefore, this
study demonstrates a thorough method for constructing an annotated corpus based
on data scraped from Facebook to bridge the gaps in this subject area and
overcome the challenges. To make this annotation more fruitful, the
context-based approach has been used. Bidirectional Encoder Representations
from Transformers (BERT), a well-known transformer-based methodology, has shown
the best results of all the methods implemented. Finally, a web application
has been developed to demonstrate the performance of the pre-trained
top-performer model (BERT) for multi-label ER in Bangla.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 14:10:57 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Banshal",
"Sumit Kumar",
""
],
[
"Das",
"Sajal",
""
],
[
"Shammi",
"Shumaiya Akter",
""
],
[
"Chakraborty",
"Narayan Ranjan",
""
]
]
| new_dataset | 0.985244 |
2309.15675 | Bingyang Cui | Bingyang Cui and Qi Yang and Kaifa Yang and Yiling Xu and Xiaozhong Xu
and Shan Liu | SJTU-TMQA: A quality assessment database for static mesh with texture
map | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, static meshes with texture maps have become one of the most
prevalent digital representations of 3D shapes in various applications, such as
animation, gaming, medical imaging, and cultural heritage applications.
However, little research has been done on the quality assessment of textured
meshes, which hinders the development of quality-oriented applications, such as
mesh compression and enhancement. In this paper, we create a large-scale
textured mesh quality assessment database, namely SJTU-TMQA, which includes 21
reference meshes and 945 distorted samples. The meshes are rendered into
processed video sequences, and subjective experiments are then conducted to
obtain mean opinion scores (MOS). The diversity of the content and the accuracy
of the MOS have been shown to validate its heterogeneity and reliability. The impact of various
types of distortion on human perception is demonstrated. 13 state-of-the-art
objective metrics are evaluated on SJTU-TMQA. The results report the highest
correlation of around 0.6, indicating the need for more effective objective
metrics. The SJTU-TMQA is available at https://ccccby.github.io
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 14:18:04 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Cui",
"Bingyang",
""
],
[
"Yang",
"Qi",
""
],
[
"Yang",
"Kaifa",
""
],
[
"Xu",
"Yiling",
""
],
[
"Xu",
"Xiaozhong",
""
],
[
"Liu",
"Shan",
""
]
]
| new_dataset | 0.999799 |
2309.15700 | Jingpei Lu | Jingpei Lu, Florian Richter, Shan Lin, Michael C. Yip | Tracking Snake-like Robots in the Wild Using Only a Single Camera | 8 pages, 5 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot navigation within complex environments requires precise state
estimation and localization to ensure robust and safe operations. For
ambulating mobile robots like robot snakes, traditional methods for sensing
require multiple embedded sensors or markers, leading to increased complexity,
cost, and increased points of failure. Alternatively, deploying an external
camera in the environment is very easy to do, and marker-less state estimation
of the robot from this camera's images is an ideal solution: both simple and
cost-effective. However, the challenge in this process is in tracking the robot
in larger environments where the cameras may be moved around without
extrinsic calibration, or may even be in motion (e.g., a drone following the
robot). The scenario itself presents a complex challenge: single-image
reconstruction of robot poses under noisy observations. In this paper, we
address the problem of tracking ambulatory mobile robots from a single camera.
The method combines differentiable rendering with the Kalman filter. This
synergy allows for simultaneous estimation of the robot's joint angle and pose
while also providing state uncertainty which could be used later on for robust
control. We demonstrate the efficacy of our approach on a snake-like robot in
both stationary and non-stationary (moving) cameras, validating its performance
in both structured and unstructured scenarios. The results achieved show an
average error of 0.05 m in localizing the robot's base position and 6 degrees
in joint state estimation. We believe this novel technique opens up
possibilities for enhanced robot mobility and navigation in future exploratory
and search-and-rescue missions.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 14:42:30 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Lu",
"Jingpei",
""
],
[
"Richter",
"Florian",
""
],
[
"Lin",
"Shan",
""
],
[
"Yip",
"Michael C.",
""
]
]
| new_dataset | 0.991504 |
2309.15701 | Huck Yang | Chen Chen, Yuchen Hu, Chao-Han Huck Yang, Sabato Macro Siniscalchi,
Pin-Yu Chen, Eng Siong Chng | HyPoradise: An Open Baseline for Generative Speech Recognition with
Large Language Models | Accepted to NeurIPS 2023, 24 pages. Datasets and Benchmarks Track | null | null | null | cs.CL cs.AI cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Advancements in deep neural networks have allowed automatic speech
recognition (ASR) systems to attain human parity on several publicly available
clean speech datasets. However, even state-of-the-art ASR systems experience
performance degradation when confronted with adverse conditions, as a
well-trained acoustic model is sensitive to variations in the speech domain,
e.g., background noise. Intuitively, humans address this issue by relying on
their linguistic knowledge: the meaning of ambiguous spoken terms is usually
inferred from contextual cues thereby reducing the dependency on the auditory
system. Inspired by this observation, we introduce the first open-source
benchmark to utilize external large language models (LLMs) for ASR error
correction, where N-best decoding hypotheses provide informative elements for
true transcription prediction. This approach is a paradigm shift from the
traditional language model rescoring strategy that can only select one
candidate hypothesis as the output transcription. The proposed benchmark
contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs
of N-best hypotheses and corresponding accurate transcriptions across prevalent
speech domains. Given this dataset, we examine three types of error correction
techniques based on LLMs with varying amounts of labeled
hypotheses-transcription pairs, which yields a significant word error rate (WER)
reduction. Experimental evidence demonstrates that the proposed technique achieves a
breakthrough by surpassing the upper bound of traditional re-ranking based
methods. More surprisingly, an LLM with a reasonable prompt and its generative
capability can even correct tokens that are missing from the N-best list. We
make our results publicly accessible for reproducible pipelines with released
pre-trained models, thus providing a new evaluation paradigm for ASR error
correction with LLMs.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 14:44:10 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Chen",
"Chen",
""
],
[
"Hu",
"Yuchen",
""
],
[
"Yang",
"Chao-Han Huck",
""
],
[
"Siniscalchi",
"Sabato Macro",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Chng",
"Eng Siong",
""
]
]
| new_dataset | 0.997857 |
2309.15702 | Sebastian Koch | Sebastian Koch, Pedro Hermosilla, Narunas Vaskevicius, Mirco Colosi,
Timo Ropinski | SGRec3D: Self-Supervised 3D Scene Graph Learning via Object-Level Scene
Reconstruction | 8 pages, 4 figures, 6 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of 3D scene understanding, 3D scene graphs have emerged as a new
scene representation that combines geometric and semantic information about
objects and their relationships. However, learning semantic 3D scene graphs in
a fully supervised manner is inherently difficult as it requires not only
object-level annotations but also relationship labels. While pre-training
approaches have helped to boost the performance of many methods in various
fields, pre-training for 3D scene graph prediction has received little
attention. Furthermore, we find in this paper that classical contrastive point
cloud-based pre-training approaches are ineffective for 3D scene graph
learning. To this end, we present SGRec3D, a novel self-supervised pre-training
method for 3D scene graph prediction. We propose to reconstruct the 3D input
scene from a graph bottleneck as a pretext task. Pre-training SGRec3D does not
require object relationship labels, making it possible to exploit large-scale
3D scene understanding datasets, which were off-limits for 3D scene graph
learning before. Our experiments demonstrate that in contrast to recent point
cloud-based pre-training approaches, our proposed pre-training improves the 3D
scene graph prediction considerably, which results in SOTA performance,
outperforming other 3D scene graph models by +10% on object prediction and +4%
on relationship prediction. Additionally, we show that only using a small
subset of 10% labeled data during fine-tuning is sufficient to outperform the
same model without pre-training.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 14:45:29 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Koch",
"Sebastian",
""
],
[
"Hermosilla",
"Pedro",
""
],
[
"Vaskevicius",
"Narunas",
""
],
[
"Colosi",
"Mirco",
""
],
[
"Ropinski",
"Timo",
""
]
]
| new_dataset | 0.994495 |
2309.15703 | Rama Krishna Kandukuri | Rama Krishna Kandukuri, Michael Strecke and Joerg Stueckler | Physics-Based Rigid Body Object Tracking and Friction Filtering From
RGB-D Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physics-based understanding of object interactions from sensory observations
is an essential capability in augmented reality and robotics. It enables
capturing the properties of a scene for simulation and control. In this paper,
we propose a novel approach for real-to-sim which tracks rigid objects in 3D
from RGB-D images and infers physical properties of the objects. We use a
differentiable physics simulation as state-transition model in an Extended
Kalman Filter which can model contact and friction for arbitrary mesh-based
shapes and in this way estimate physically plausible trajectories. We
demonstrate that our approach can filter position, orientation, velocities, and
concurrently can estimate the coefficient of friction of the objects. We
analyse our approach on various sliding scenarios in synthetic image sequences
of single objects and colliding objects. We also demonstrate and evaluate our
approach on a real-world dataset. We will make our novel benchmark datasets
publicly available to foster future research in this novel problem setting and
comparison with our method.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 14:46:01 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Kandukuri",
"Rama Krishna",
""
],
[
"Strecke",
"Michael",
""
],
[
"Stueckler",
"Joerg",
""
]
]
| new_dataset | 0.983197 |
2309.15742 | Reza Gharibi | Reza Gharibi, Mohammad Hadi Sadreddini, Seyed Mostafa Fakhrahmad | T5APR: Empowering Automated Program Repair across Languages through
Checkpoint Ensemble | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Automated program repair (APR) using deep learning techniques has become an
important area of research in recent years, aiming to automatically generate
bug-fixing patches that can improve software reliability and maintainability.
However, most existing methods either target a single language or require high
computational resources to train multilingual models. In this paper, we propose
T5APR, a novel neural program repair approach that provides a unified solution
for bug fixing across multiple programming languages. T5APR leverages CodeT5, a
powerful pre-trained text-to-text transformer model, and adopts a checkpoint
ensemble strategy to improve patch recommendation. We conduct comprehensive
evaluations on six well-known benchmarks in four programming languages (Java,
Python, C, JavaScript), demonstrating T5APR's competitiveness against
state-of-the-art techniques. T5APR correctly fixes 1,985 bugs, including 1,442
bugs that none of the compared techniques has fixed. We further support the
effectiveness of our approach by conducting detailed analyses, such as
comparing the correct patch ranking among different techniques. The findings of
this study demonstrate the potential of T5APR for use in real-world
applications and highlight the importance of multilingual approaches in the
field of APR.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 15:54:08 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Gharibi",
"Reza",
""
],
[
"Sadreddini",
"Mohammad Hadi",
""
],
[
"Fakhrahmad",
"Seyed Mostafa",
""
]
]
| new_dataset | 0.99744 |
2309.15751 | Xuanlong Yu | Gianni Franchi, Marwane Hariat, Xuanlong Yu, Nacim Belkhir, Antoine
Manzanera and David Filliat | InfraParis: A multi-modal and multi-task autonomous driving dataset | 15 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current deep neural networks (DNNs) for autonomous driving computer vision
are typically trained on specific datasets that only involve a single type of
data and urban scenes. Consequently, these models struggle to handle new
objects, noise, nighttime conditions, and diverse scenarios, which is essential
for safety-critical applications. Despite ongoing efforts to enhance the
resilience of computer vision DNNs, progress has been sluggish, partly due to
the absence of benchmarks featuring multiple modalities. We introduce a novel
and versatile dataset named InfraParis that supports multiple tasks across
three modalities: RGB, depth, and infrared. We assess various state-of-the-art
baseline techniques, encompassing models for the tasks of semantic
segmentation, object detection, and depth estimation.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 16:07:43 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Franchi",
"Gianni",
""
],
[
"Hariat",
"Marwane",
""
],
[
"Yu",
"Xuanlong",
""
],
[
"Belkhir",
"Nacim",
""
],
[
"Manzanera",
"Antoine",
""
],
[
"Filliat",
"David",
""
]
]
| new_dataset | 0.99984 |
2309.15763 | Thomas Studer | Federico L. G. Faroldi and Meghdad Ghari and Eveline Lehmann and
Thomas Studer | Consistency and Permission in Deontic Justification Logic | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Different notions of the consistency of obligations collapse in standard
deontic logic. In justification logics, which feature explicit reasons for
obligations, the situation is different. Their strength depends on a constant
specification and on the available set of operations for combining different
reasons. We present different consistency principles in justification logic and
compare their logical strength. We propose a novel semantics for which
justification logics with the explicit version of axiom D, jd, are complete for
arbitrary constant specifications. Consistency is sometimes formulated in terms
of permission. We therefore study permission in the context of justification
logic, introducing a notion of free-choice permission for the first time. We
then discuss the philosophical implications with regard to some deontic
paradoxes.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 16:24:11 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Faroldi",
"Federico L. G.",
""
],
[
"Ghari",
"Meghdad",
""
],
[
"Lehmann",
"Eveline",
""
],
[
"Studer",
"Thomas",
""
]
]
| new_dataset | 0.993348 |
2309.15776 | Yanqing Ren | Yanqing Ren, Mingyong Zhou, Xiaokun Teng, Shengguo Meng, Wankai Tang,
Xiao Li, Shi Jin, and Michail Matthaiou | Time-Domain Channel Measurements and Small-Scale Fading Characterization
for RIS-Assisted Wireless Communication Systems | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a potentially revolutionary enabling technology for the sixth generation
(6G) mobile communication system, reconfigurable intelligent surfaces (RISs)
have attracted extensive attention from industry and academia. In RIS-assisted
wireless communication systems, practical channel measurements and modeling
serve as the foundation for system design, network optimization, and
performance evaluation. In this paper, a RIS time-domain channel measurement
system, based on a software defined radio (SDR) platform, is developed for the
first time to investigate the small-scale fading characteristics of
RIS-assisted channels. We present RIS channel measurements in corridor and
laboratory scenarios and compare the power delay profile (PDP) of the channel
without RIS, with RIS specular reflection, and with RIS intelligent reflection.
The multipath component parameters and cluster parameters based on the
Saleh-Valenzuela model are extracted. We find that the PDPs of the RIS-assisted
channel fit the power-law decay model and approximate the law of square decay.
Through intelligent reflection, the RIS can decrease the delay and concentrate
the energy of the virtual line-of-sight (VLOS) path, thereby reducing delay
spread and mitigating multipath fading. Furthermore, the cluster
characteristics of RIS-assisted channels are highly related to the measurement
environment. In the laboratory scenario, a single cluster dominated by the VLOS
path with smooth envelope is observed. On the other hand, in the corridor
scenario, some additional clusters introduced by the RIS reflection are
created.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 16:50:47 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Ren",
"Yanqing",
""
],
[
"Zhou",
"Mingyong",
""
],
[
"Teng",
"Xiaokun",
""
],
[
"Meng",
"Shengguo",
""
],
[
"Tang",
"Wankai",
""
],
[
"Li",
"Xiao",
""
],
[
"Jin",
"Shi",
""
],
[
"Matthaiou",
"Michail",
""
]
]
| new_dataset | 0.996393 |
2309.15782 | Vipin Gautam | Vipin Gautam, Shitala Prasad and Sharad Sinha | Joint-YODNet: A Light-weight Object Detector for UAVs to Achieve Above
100fps | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Small object detection via UAV (Unmanned Aerial Vehicle) images captured from
drones and radar is a complex task with several formidable challenges. This
domain encompasses numerous complexities that impede the accurate detection and
localization of small objects. To address these challenges, we propose a novel
method called JointYODNet for UAVs to detect small objects, leveraging a joint
loss function specifically designed for this task. Our method revolves around
the development of a joint loss function tailored to enhance the detection
performance of small objects. Through extensive experimentation on a diverse
dataset of UAV images captured under varying environmental conditions, we
evaluated different variations of the loss function and determined the most
effective formulation. The results demonstrate that our proposed joint loss
function outperforms existing methods in accurately localizing small objects.
Specifically, our method achieves a recall of 0.971 and an F1-score of 0.975,
surpassing state-of-the-art techniques. Additionally, our method achieves an
mAP@0.5(%) of 98.6, indicating its robustness in detecting small objects across
varying scales.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 16:57:04 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Gautam",
"Vipin",
""
],
[
"Prasad",
"Shitala",
""
],
[
"Sinha",
"Sharad",
""
]
]
| new_dataset | 0.996289 |
2309.15803 | Amit Mathapati | Amit Mathapati | ANNCRIPS: Artificial Neural Networks for Cancer Research In Prediction &
Survival | 13 pages, 25 figures, 2 tables. arXiv admin note: text overlap with
arXiv:cs/0405016 by other authors | null | null | null | cs.LG cs.AI cs.CE cs.NE | http://creativecommons.org/licenses/by/4.0/ | Prostate cancer is a prevalent malignancy among men aged 50 and older.
Current diagnostic methods primarily rely on blood tests, Prostate-Specific
Antigen (PSA) levels, and Digital Rectal Examinations (DRE). However, these methods
suffer from a significant rate of false positive results. This study focuses on
the development and validation of an intelligent mathematical model utilizing
Artificial Neural Networks (ANNs) to enhance the early detection of prostate
cancer. The primary objective of this research paper is to present a novel
mathematical model designed to aid in the early detection of prostate cancer,
facilitating prompt intervention by healthcare professionals. The model's
implementation demonstrates promising potential in reducing the incidence of
false positives, thereby improving patient outcomes. Furthermore, we envision
that, with further refinement, extensive testing, and validation, this model
can evolve into a robust, marketable solution for prostate cancer detection.
The long-term goal is to make this solution readily available for deployment in
various screening centers, hospitals, and research institutions, ultimately
contributing to more effective cancer screening and patient care.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 08:11:35 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Mathapati",
"Amit",
""
]
]
| new_dataset | 0.984673 |
2309.15821 | Haonan Chang | Haonan Chang, Kai Gao, Kowndinya Boyalakuntla, Alex Lee, Baichuan
Huang, Harish Udhaya Kumar, Jinjin Yu, Abdeslam Boularias | LGMCTS: Language-Guided Monte-Carlo Tree Search for Executable Semantic
Object Rearrangement | Our code and supplementary materials are accessible at
https://github.com/changhaonan/LG-MCTS | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel approach to the executable semantic object rearrangement
problem. In this challenge, a robot seeks to create an actionable plan that
rearranges objects within a scene according to a pattern dictated by a natural
language description. Unlike existing methods such as StructFormer and
StructDiffusion, which tackle the issue in two steps by first generating poses
and then leveraging a task planner for action plan formulation, our method
concurrently addresses pose generation and action planning. We achieve this
integration using a Language-Guided Monte-Carlo Tree Search (LGMCTS).
Quantitative evaluations are provided on two simulation datasets, and
complemented by qualitative tests with a real robot.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 17:45:49 GMT"
}
]
| 2023-09-28T00:00:00 | [
[
"Chang",
"Haonan",
""
],
[
"Gao",
"Kai",
""
],
[
"Boyalakuntla",
"Kowndinya",
""
],
[
"Lee",
"Alex",
""
],
[
"Huang",
"Baichuan",
""
],
[
"Kumar",
"Harish Udhaya",
""
],
[
"Yu",
"Jinjin",
""
],
[
"Boularias",
"Abdeslam",
""
]
]
| new_dataset | 0.966605 |
1212.5210 | Luca Saiu | Luca Saiu | GNU epsilon -- an extensible programming language | 172 pages, PhD thesis | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reductionism is a viable strategy for designing and implementing practical
programming languages, leading to solutions which are easier to extend,
experiment with and formally analyze. We formally specify and implement an
extensible programming language, based on a minimalistic first-order imperative
core language plus strong abstraction mechanisms, reflection and
self-modification features. The language can be extended to very high levels:
by using Lisp-style macros and code-to-code transforms which automatically
rewrite high-level expressions into core forms, we define closures and
first-class continuations on top of the core. Non-self-modifying programs can
be analyzed and formally reasoned upon, thanks to the language's simple
semantics. We formally develop a static analysis and prove a soundness property
with respect to the dynamic semantics. We develop a parallel garbage collector
suitable for multi-core machines to permit efficient execution of parallel
programs.
| [
{
"version": "v1",
"created": "Thu, 20 Dec 2012 19:56:38 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Dec 2012 14:53:12 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Jan 2013 15:13:35 GMT"
},
{
"version": "v4",
"created": "Mon, 11 Mar 2013 12:27:10 GMT"
},
{
"version": "v5",
"created": "Sun, 31 Mar 2013 15:52:33 GMT"
},
{
"version": "v6",
"created": "Mon, 25 Sep 2023 21:41:37 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Saiu",
"Luca",
""
]
]
| new_dataset | 0.994788 |
2006.16039 | Adam \'O Conghaile | Adam \'O Conghaile and Anuj Dawar | Game Comonads & Generalised Quantifiers | 31 pages | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Game comonads, introduced by Abramsky, Dawar and Wang and developed by
Abramsky and Shah, give an interesting categorical semantics to some
Spoiler-Duplicator games that are common in finite model theory. In particular
they expose connections between one-sided and two-sided games, and parameters
such as treewidth and treedepth and corresponding notions of decomposition. In
the present paper, we expand the realm of game comonads to logics with
generalised quantifiers. In particular, we introduce a comonad graded by two
parameters $n \leq k$ such that isomorphisms in the resulting Kleisli category
are exactly Duplicator winning strategies in Hella's $n$-bijection game with
$k$ pebbles. We define a one-sided version of this game which allows us to
provide a categorical semantics for a number of logics with generalised
quantifiers. We also give a novel notion of tree decomposition that emerges
from the construction.
| [
{
"version": "v1",
"created": "Mon, 29 Jun 2020 13:33:18 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jul 2020 16:49:37 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jul 2021 11:16:55 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Sep 2023 20:32:34 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Conghaile",
"Adam Ó",
""
],
[
"Dawar",
"Anuj",
""
]
]
| new_dataset | 0.980555 |
2110.10510 | Matteo Saveriano | Fares J. Abu-Dakka, Matteo Saveriano, Luka Peternel | Periodic DMP formulation for Quaternion Trajectories | 2021 20th International Conference on Advanced Robotics (ICAR) | null | 10.1109/ICAR53236.2021.9659319 | null | cs.RO cs.LG | http://creativecommons.org/licenses/by/4.0/ | Imitation learning techniques have been used as a way to transfer skills to
robots. Among them, dynamic movement primitives (DMPs) have been widely
exploited as an effective and efficient technique to learn and reproduce
complex discrete and periodic skills. While DMPs have been properly formulated
for learning point-to-point movements for both translation and orientation,
periodic ones are missing a formulation to learn the orientation. To address
this gap, we propose a novel DMP formulation that enables encoding of periodic
orientation trajectories. Within this formulation we develop two approaches:
Riemannian metric-based projection approach and unit quaternion based periodic
DMP. Both formulations exploit unit quaternions to represent the orientation.
However, the first exploits the properties of Riemannian manifolds to work in
the tangent space of the unit sphere. The second encodes directly the unit
quaternion trajectory while guaranteeing the unitary norm of the generated
quaternions. We validated the technical aspects of the proposed methods in
simulation. Then we performed experiments on a real robot to execute daily
tasks that involve periodic orientation changes (i.e., surface polishing/wiping
and liquid mixing by shaking).
| [
{
"version": "v1",
"created": "Wed, 20 Oct 2021 11:43:01 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Abu-Dakka",
"Fares J.",
""
],
[
"Saveriano",
"Matteo",
""
],
[
"Peternel",
"Luka",
""
]
]
| new_dataset | 0.994935 |
2205.04454 | Fanta Camara | Fanta Camara, Chris Waltham, Grey Churchill, and Charles Fox | OpenPodcar: an Open Source Vehicle for Self-Driving Car Research | Published in the Journal of Open Hardware | Journal of Open Hardware, 7(1): 8, pp. 1-17 (2023) | 10.5334/joh.46 | null | cs.RO cs.AI cs.AR cs.CV | http://creativecommons.org/licenses/by/4.0/ | OpenPodcar is a low-cost, open source hardware and software, autonomous
vehicle research platform based on an off-the-shelf, hard-canopy, mobility
scooter donor vehicle. Hardware and software build instructions are provided to
convert the donor vehicle into a low-cost and fully autonomous platform. The
open platform consists of (a) hardware components: CAD designs, bill of
materials, and build instructions; (b) Arduino, ROS and Gazebo control and
simulation software files which provide standard ROS interfaces and simulation
of the vehicle; and (c) higher-level ROS software implementations and
configurations of standard robot autonomous planning and control, including the
move_base interface with Timed-Elastic-Band planner which enacts commands to
drive the vehicle from a current to a desired pose around obstacles. The
vehicle is large enough to transport a human passenger or similar load at
speeds up to 15km/h, for example for use as a last-mile autonomous taxi service
or to transport delivery containers similarly around a city center. It is small
and safe enough to be parked in a standard research lab and be used for
realistic human-vehicle interaction studies. System build cost from new
components is around USD7,000 in total in 2022. OpenPodcar thus provides a good
balance between real world utility, safety, cost and research convenience.
| [
{
"version": "v1",
"created": "Mon, 9 May 2022 17:55:56 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 15:48:19 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Camara",
"Fanta",
""
],
[
"Waltham",
"Chris",
""
],
[
"Churchill",
"Grey",
""
],
[
"Fox",
"Charles",
""
]
]
| new_dataset | 0.99985 |
2208.04609 | Tu Anh Dinh | Tu Anh Dinh, Jeroen den Boef, Joran Cornelisse, Paul Groth | E2EG: End-to-End Node Classification Using Graph Topology and Text-based
Node Attributes | Accepted to MLoG - IEEE International Conference on Data Mining
Workshops ICDMW 2023 | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Node classification utilizing text-based node attributes has many real-world
applications, ranging from prediction of paper topics in academic citation
graphs to classification of user characteristics in social media networks.
State-of-the-art node classification frameworks, such as GIANT, use a two-stage
pipeline: first embedding the text attributes of graph nodes and then feeding the
resulting embeddings into a node classification model. In this paper, we
eliminate these two stages and develop an end-to-end node classification model
that builds upon GIANT, called End-to-End-GIANT (E2EG). The tandem utilization
of a main and an auxiliary classification objectives in our approach results in
a more robust model, enabling the BERT backbone to be switched out for a
distilled encoder with a 25% - 40% reduction in the number of parameters.
Moreover, the model's end-to-end nature increases ease of use, as it avoids the
need of chaining multiple models for node classification. Compared to a
GIANT+MLP baseline on the ogbn-arxiv and ogbn-products datasets, E2EG obtains
slightly better accuracy in the transductive setting (+0.5%), while reducing
model training time by up to 40%. Our model is also applicable in the inductive
setting, outperforming GIANT+MLP by up to +2.23%.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2022 09:05:10 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 17:39:40 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Dinh",
"Tu Anh",
""
],
[
"Boef",
"Jeroen den",
""
],
[
"Cornelisse",
"Joran",
""
],
[
"Groth",
"Paul",
""
]
]
| new_dataset | 0.99822 |
2208.13040 | Xinyi Zou | Ziheng Wu, Xinyi Zou, Wenmeng Zhou, Jun Huang | YOLOX-PAI: An Improved YOLOX, Stronger and Faster than YOLOv6 | 5 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop an all-in-one computer vision toolbox named EasyCV to facilitate
the use of various SOTA computer vision methods. Recently, we add YOLOX-PAI, an
improved version of YOLOX, into EasyCV. We conduct ablation studies to
investigate the influence of some detection methods on YOLOX. We also provide
an easy use for PAI-Blade which is used to accelerate the inference process
based on BladeDISC and TensorRT. Finally, we achieve 42.8 mAP on the COCO dataset
within 1.0 ms on a single NVIDIA V100 GPU, which is slightly faster than YOLOv6. A
simple but efficient predictor API is also designed in EasyCV to conduct
end-to-end object detection. Codes and models are now available at:
https://github.com/alibaba/EasyCV.
| [
{
"version": "v1",
"created": "Sat, 27 Aug 2022 15:37:26 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 09:07:01 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Sep 2023 15:05:48 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Wu",
"Ziheng",
""
],
[
"Zou",
"Xinyi",
""
],
[
"Zhou",
"Wenmeng",
""
],
[
"Huang",
"Jun",
""
]
]
| new_dataset | 0.995948 |
2209.12160 | Peiyu Chen | Weipeng Guan, Peiyu Chen, Yuhan Xie, Peng Lu | PL-EVIO: Robust Monocular Event-based Visual Inertial Odometry with
Point and Line Features | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras are motion-activated sensors that capture pixel-level
illumination changes instead of the intensity image with a fixed frame rate.
Compared with the standard cameras, it can provide reliable visual perception
during high-speed motions and in high dynamic range scenarios. However, event
cameras output only a little information or even noise when the relative motion
between the camera and the scene is limited, such as in a still state. In
contrast, standard cameras can provide rich perception information in most
scenarios, especially in good lighting conditions. These two cameras are exactly
complementary. In this paper, we propose a robust, highly accurate, and
real-time optimization-based monocular event-based visual-inertial odometry
(VIO) method with event-corner features, line-based event features, and
point-based image features. The proposed method leverages point-based features
in natural scenes and line-based features in human-made scenes to provide
additional structure or constraint information through well-designed feature
management. Experiments on the public
benchmark datasets show that our method can achieve superior performance
compared with the state-of-the-art image-based or event-based VIO. Finally, we
used our method to demonstrate an onboard closed-loop autonomous quadrotor
flight and large-scale outdoor experiments. Videos of the evaluations are
presented on our project website: https://b23.tv/OE3QM6j
| [
{
"version": "v1",
"created": "Sun, 25 Sep 2022 06:14:12 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 09:46:23 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Guan",
"Weipeng",
""
],
[
"Chen",
"Peiyu",
""
],
[
"Xie",
"Yuhan",
""
],
[
"Lu",
"Peng",
""
]
]
| new_dataset | 0.999634 |
2210.13904 | Alexander Mock | Alexander Mock, Sebastian P\"utz, Thomas Wiemann, Joachim Hertzberg | MICP-L: Mesh-based ICP for Robot Localization using Hardware-Accelerated
Ray Casting | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Triangle mesh maps have proven to be a versatile 3D environment
representation for robots to navigate in challenging indoor and outdoor
environments exhibiting tunnels, hills and varying slopes. To make use of these
mesh maps, methods are needed that allow robots to accurately localize
themselves to perform typical tasks like path planning and navigation. We
present Mesh ICP Localization (MICP-L), a novel and computationally efficient
method for registering one or more range sensors to a triangle mesh map to
continuously localize a robot in 6D, even in GPS-denied environments. We
accelerate the computation of ray casting correspondences (RCC) between range
sensors and mesh maps by supporting different parallel computing devices like
multicore CPUs, GPUs and the latest NVIDIA RTX hardware. By additionally
transforming the covariance computation into a reduction operation, we can
optimize the initial guessed poses in parallel on CPUs or GPUs, making our
implementation applicable in real-time on a variety of target architectures. We
demonstrate the robustness of our localization approach with datasets from
agriculture, drones, and automotive domains.
| [
{
"version": "v1",
"created": "Tue, 25 Oct 2022 10:39:42 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 09:10:22 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Sep 2023 12:10:26 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Mock",
"Alexander",
""
],
[
"Pütz",
"Sebastian",
""
],
[
"Wiemann",
"Thomas",
""
],
[
"Hertzberg",
"Joachim",
""
]
]
| new_dataset | 0.993507 |
2302.01235 | Suthee Ruangwises | Suthee Ruangwises | Physical Zero-Knowledge Proofs for Five Cells | This paper has appeared at LATINCRYPT 2023 | null | 10.1007/978-3-031-44469-2_16 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Five Cells is a logic puzzle consisting of a rectangular grid, with some
cells containing a number. The player has to partition the grid into pentominoes
such that the number in each cell equals the number of edges of that
cell that are borders of pentominoes. In this paper, we propose two physical
zero-knowledge proof protocols for Five Cells using a deck of playing cards,
which allow a prover to physically show that he/she knows a solution of the
puzzle without revealing it. In the optimization of our first protocol, we also
develop a technique to reduce the number of required cards from quadratic to
linear in the number of cells, which can be used in other zero-knowledge proof
protocols related to graph coloring as well.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 17:16:32 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 17:10:03 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Jul 2023 18:01:47 GMT"
},
{
"version": "v4",
"created": "Sun, 6 Aug 2023 18:58:40 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Ruangwises",
"Suthee",
""
]
]
| new_dataset | 0.990706 |
2302.09167 | Weizi Li | Michael Villarreal, Bibek Poudel, Jia Pan, Weizi Li | Mixed Traffic Control and Coordination from Pixels | null | null | null | null | cs.MA cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | Traffic congestion is a persistent problem in our society. Existing methods
for traffic control have proven futile in alleviating current congestion levels,
leading researchers to explore ideas with robot vehicles, given the increased
emergence of vehicles with different levels of autonomy on our roads. This
gives rise to mixed traffic control, where robot vehicles regulate human-driven
vehicles through reinforcement learning (RL). However, most existing studies
use precise observations that involve global information, such as environment
outflow, and local information, i.e., vehicle positions and velocities.
Obtaining this information requires updating existing road infrastructure with
vast sensor environments and communication to potentially unwilling human
drivers. We consider image observations as the alternative for mixed traffic
control via RL: 1) images are ubiquitous through satellite imagery, in-car
camera systems, and traffic monitoring systems; 2) images do not require a
complete re-imagination of the observation space from environment to
environment; and 3) images only require communication to equipment. In this
work, we show robot vehicles using image observations can achieve similar
performance to using precise information on environments, including ring,
figure eight, intersection, merge, and bottleneck. In certain scenarios, our
approach even outperforms using precision observations, e.g., up to 26%
increase in average vehicle velocity in the merge environment and a 6% increase
in outflow in the bottleneck environment, despite only using local traffic
information as opposed to global traffic information.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2023 22:40:07 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 20:01:50 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 21:56:51 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Villarreal",
"Michael",
""
],
[
"Poudel",
"Bibek",
""
],
[
"Pan",
"Jia",
""
],
[
"Li",
"Weizi",
""
]
]
| new_dataset | 0.994742 |
2302.09429 | Pramod Abichandani Dr | Craig Iaboni, Thomas Kelly, Pramod Abichandani | NU-AIR -- A Neuromorphic Urban Aerial Dataset for Detection and
Localization of Pedestrians and Vehicles | 20 pages, 5 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper presents an open-source aerial neuromorphic dataset that captures
pedestrians and vehicles moving in an urban environment. The dataset, titled
NU-AIR, features 70.75 minutes of event footage acquired with a 640 x 480
resolution neuromorphic sensor mounted on a quadrotor operating in an urban
environment. Crowds of pedestrians, different types of vehicles, and street
scenes featuring busy urban environments are captured at different elevations
and illumination conditions. Manual bounding box annotations of vehicles and
pedestrians contained in the recordings are provided at a frequency of 30 Hz,
yielding 93,204 labels in total. Evaluation of the dataset's fidelity is
performed through a comprehensive ablation study of three Spiking Neural
Networks (SNNs) and by training ten Deep Neural Networks (DNNs) to validate the
quality and reliability of both the dataset and corresponding annotations. All
data and Python code to voxelize the data and subsequently train SNNs/DNNs has
been open-sourced.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 21:48:18 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 19:41:58 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Iaboni",
"Craig",
""
],
[
"Kelly",
"Thomas",
""
],
[
"Abichandani",
"Pramod",
""
]
]
| new_dataset | 0.999878 |
2303.15181 | Zhengzhe Liu | Zhengzhe Liu, Peng Dai, Ruihui Li, Xiaojuan Qi, Chi-Wing Fu | DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation | IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a new text-guided 3D shape generation approach
DreamStone that uses images as a stepping stone to bridge the gap between text
and shape modalities for generating 3D shapes without requiring paired text and
3D data. The core of our approach is a two-stage feature-space alignment
strategy that leverages a pre-trained single-view reconstruction (SVR) model to
map CLIP features to shapes: to begin with, map the CLIP image feature to the
detail-rich 3D shape space of the SVR model, then map the CLIP text feature to
the 3D shape space by encouraging the CLIP-consistency between rendered
images and the input text. Besides, to extend beyond the generative capability
of the SVR model, we design a text-guided 3D shape stylization module that can
enhance the output shapes with novel structures and textures. Further, we
exploit pre-trained text-to-image diffusion models to enhance the generative
diversity, fidelity, and stylization capability. Our approach is generic,
flexible, and scalable, and it can be easily integrated with various SVR models
to expand the generative space and improve the generative fidelity. Extensive
experimental results demonstrate that our approach outperforms the
state-of-the-art methods in terms of generative quality and consistency with
the input text. Codes and models are released at
https://github.com/liuzhengzhe/DreamStone-ISS.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2023 03:56:23 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Sep 2023 23:01:02 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Sep 2023 15:20:07 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Liu",
"Zhengzhe",
""
],
[
"Dai",
"Peng",
""
],
[
"Li",
"Ruihui",
""
],
[
"Qi",
"Xiaojuan",
""
],
[
"Fu",
"Chi-Wing",
""
]
]
| new_dataset | 0.999219 |
2303.16975 | Rishi Hazra | Rishi Hazra, Brian Chen, Akshara Rai, Nitin Kamra, Ruta Desai | EgoTV: Egocentric Task Verification from Natural Language Task
Descriptions | Accepted at ICCV 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | To enable progress towards egocentric agents capable of understanding
everyday tasks specified in natural language, we propose a benchmark and a
synthetic dataset called Egocentric Task Verification (EgoTV). The goal in
EgoTV is to verify the execution of tasks from egocentric videos based on the
natural language description of these tasks. EgoTV contains pairs of videos and
their task descriptions for multi-step tasks -- these tasks contain multiple
sub-task decompositions, state changes, object interactions, and sub-task
ordering constraints. In addition, EgoTV also provides abstracted task
descriptions that contain only partial details about ways to accomplish a task.
Consequently, EgoTV requires causal, temporal, and compositional reasoning of
video and language modalities, which is missing in existing datasets. We also
find that existing vision-language models struggle at such all-round reasoning
needed for task verification in EgoTV. Inspired by the needs of EgoTV, we
propose a novel Neuro-Symbolic Grounding (NSG) approach that leverages symbolic
representations to capture the compositional and temporal structure of tasks.
We demonstrate NSG's capability towards task tracking and verification on our
EgoTV dataset and a real-world dataset derived from CrossTask (CTV). We
open-source the EgoTV and CTV datasets and the NSG model for future research on
egocentric assistive agents.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2023 19:16:49 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 18:41:24 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 18:04:27 GMT"
},
{
"version": "v4",
"created": "Tue, 2 May 2023 15:26:28 GMT"
},
{
"version": "v5",
"created": "Mon, 25 Sep 2023 19:20:58 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Hazra",
"Rishi",
""
],
[
"Chen",
"Brian",
""
],
[
"Rai",
"Akshara",
""
],
[
"Kamra",
"Nitin",
""
],
[
"Desai",
"Ruta",
""
]
]
| new_dataset | 0.999838 |
2304.10049 | Lukas Schmid | Lukas Schmid, Olov Andersson, Aurelio Sulser, Patrick Pfreundschuh,
and Roland Siegwart | Dynablox: Real-time Detection of Diverse Dynamic Objects in Complex
Environments | Code released at https://github.com/ethz-asl/dynablox | in IEEE Robotics and Automation Letters, vol. 8, no. 10, pp.
6259-6266, Oct. 2023 | 10.1109/LRA.2023.3305239 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time detection of moving objects is an essential capability for robots
acting autonomously in dynamic environments. We thus propose Dynablox, a novel
online mapping-based approach for robust moving object detection in complex
environments. The central idea of our approach is to incrementally estimate
high confidence free-space areas by modeling and accounting for sensing, state
estimation, and mapping limitations during online robot operation. The
spatio-temporally conservative free space estimate enables robust detection of
moving objects without making any assumptions on the appearance of objects or
environments. This allows deployment in complex scenes such as multi-storied
buildings or staircases, and for diverse moving objects such as people carrying
various items, doors swinging or even balls rolling around. We thoroughly
evaluate our approach on real-world data sets, achieving 86% IoU at 17 FPS in
typical robotic settings. The method outperforms a recent appearance-based
classifier and approaches the performance of offline methods. We demonstrate
its generality on a novel data set with rare moving objects in complex
environments. We make our efficient implementation and the novel data set
available as open-source.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2023 02:16:36 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 15:21:22 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Sep 2023 05:25:35 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Schmid",
"Lukas",
""
],
[
"Andersson",
"Olov",
""
],
[
"Sulser",
"Aurelio",
""
],
[
"Pfreundschuh",
"Patrick",
""
],
[
"Siegwart",
"Roland",
""
]
]
| new_dataset | 0.998829 |
2305.07517 | Pragathi Praveena | Pragathi Praveena, Yeping Wang, Emmanuel Senft, Michael Gleicher,
Bilge Mutlu | Periscope: A Robotic Camera System to Support Remote Physical
Collaboration | This is a pre-print of the article accepted for publication in PACM
HCI and will be presented at CSCW 2023 | null | 10.1145/3610199 | null | cs.RO cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We investigate how robotic camera systems can offer new capabilities to
computer-supported cooperative work through the design, development, and
evaluation of a prototype system called Periscope. With Periscope, a local
worker completes manipulation tasks with guidance from a remote helper who
observes the workspace through a camera mounted on a semi-autonomous robotic
arm that is co-located with the worker. Our key insight is that the helper, the
worker, and the robot should all share responsibility of the camera view--an
approach we call shared camera control. Using this approach, we present a set
of modes that distribute the control of the camera between the human
collaborators and the autonomous robot depending on task needs. We demonstrate
the system's utility and the promise of shared camera control through a
preliminary study where 12 dyads collaboratively worked on assembly tasks.
Finally, we discuss design and research implications of our work for future
robotic camera systems that facilitate remote collaboration.
| [
{
"version": "v1",
"created": "Fri, 12 May 2023 14:34:14 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 20:45:32 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Praveena",
"Pragathi",
""
],
[
"Wang",
"Yeping",
""
],
[
"Senft",
"Emmanuel",
""
],
[
"Gleicher",
"Michael",
""
],
[
"Mutlu",
"Bilge",
""
]
]
| new_dataset | 0.999497 |
2306.09351 | Md Ataur Rahman | Sheikh Mohammad Jubaer, Nazifa Tabassum, Md. Ataur Rahman, Mohammad
Khairul Islam | BN-DRISHTI: Bangla Document Recognition through Instance-level
Segmentation of Handwritten Text Images | Will be published under the Springer Springer Lecture Notes in
Computer Science (LNCS) series, as part of ICDAR WML 2023 | ICDAR 2023 Workshops | 10.1007/978-3-031-41501-2_14 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Handwriting recognition remains challenging for some of the most spoken
languages, like Bangla, due to the complexity of line and word segmentation
brought by the curvilinear nature of writing and lack of quality datasets. This
paper solves the segmentation problem by introducing a state-of-the-art method
(BN-DRISHTI) that combines a deep learning-based object detection framework
(YOLO) with Hough and Affine transformation for skew correction. However,
training deep learning models requires a massive amount of data. Thus, we also
present an extended version of the BN-HTRd dataset comprising 786 full-page
handwritten Bangla document images, line and word-level annotation for
segmentation, and corresponding ground truths for word recognition. Evaluation
on the test portion of our dataset resulted in an F-score of 99.97% for line
and 98% for word segmentation. For comparative analysis, we used three external
Bangla handwritten datasets, namely BanglaWriting, WBSUBNdb_text, and ICDAR
2013, on which our system outperformed existing methods by a significant margin,
further demonstrating the performance of our approach on completely unseen samples.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 04:08:57 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Jubaer",
"Sheikh Mohammad",
""
],
[
"Tabassum",
"Nazifa",
""
],
[
"Rahman",
"Md. Ataur",
""
],
[
"Islam",
"Mohammad Khairul",
""
]
]
| new_dataset | 0.999291 |
2306.10322 | Xiwen Liang | Xiwen Liang, Liang Ma, Shanshan Guo, Jianhua Han, Hang Xu, Shikui Ma,
Xiaodan Liang | MO-VLN: A Multi-Task Benchmark for Open-set Zero-Shot
Vision-and-Language Navigation | 23 pages | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a natural language, a general robot has to comprehend the instruction
and find the target object or location based on visual observations even in
unexplored environments. Most agents rely on massive diverse training data to
achieve better generalization, which requires expensive labor. These agents
often focus on common objects and fewer tasks, thus are not intelligent enough
to handle different types of instructions. To facilitate research in open-set
vision-and-language navigation, we propose a benchmark named MO-VLN, aiming at
testing the effectiveness and generalization of the agent in the multi-task
setting. First, we develop a 3D simulator rendered by realistic scenarios using
Unreal Engine 5, containing more realistic lights and details. The simulator
contains three scenes, i.e., cafe, restaurant, and nursing house, of high value
in the industry. Besides, our simulator involves multiple uncommon objects,
such as takeaway cup and medical adhesive tape, which are more complicated
compared with existing environments. Inspired by the recent success of large
language models (e.g., ChatGPT, Vicuna), we construct diverse high-quality data
of instruction type without human annotation. Our benchmark MO-VLN provides
four tasks: 1) goal-conditioned navigation given a specific object category
(e.g., "fork"); 2) goal-conditioned navigation given simple instructions (e.g.,
"Search for and move towards a tennis ball"); 3) step-by-step instruction
following; 4) finding abstract object based on high-level instruction (e.g., "I
am thirsty").
| [
{
"version": "v1",
"created": "Sat, 17 Jun 2023 11:44:04 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 05:18:49 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Liang",
"Xiwen",
""
],
[
"Ma",
"Liang",
""
],
[
"Guo",
"Shanshan",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Ma",
"Shikui",
""
],
[
"Liang",
"Xiaodan",
""
]
]
| new_dataset | 0.999756 |
2306.15516 | Nina Pardal | Nina Pardal and Jonni Virtema | A fine-grained framework for database repairs | 16 pages + 2 pages references | null | null | null | cs.DB cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a general abstract framework for database repairing that
differentiates between integrity constraints and the so-called query
constraints. The former are used to model consistency and desirable properties
of the data (such as functional dependencies and independencies), while the
latter relates two database instances according to their answers for the query
constraints. The framework also admits a distinction between hard and soft
queries, allowing to preserve the answers of a core set of queries as well as
defining a distance between instances based on query answers. Finally, we
present an instantiation of this framework by defining logic-based metrics in
K-teams (a notion recently defined for logical modelling of relational data
with semiring annotations). We exemplify how various notions of repairs from
the literature can be modelled in our unifying framework.
| [
{
"version": "v1",
"created": "Tue, 27 Jun 2023 14:41:47 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 19:16:21 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Pardal",
"Nina",
""
],
[
"Virtema",
"Jonni",
""
]
]
| new_dataset | 0.986797 |
2307.00595 | Haoshu Fang | Hao-Shu Fang, Hongjie Fang, Zhenyu Tang, Jirong Liu, Chenxi Wang,
Junbo Wang, Haoyi Zhu, Cewu Lu | RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in
One-Shot | RSS 2023 workshop on LTAMP. The project page is at rh20t.github.io | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | A key challenge in robotic manipulation in open domains is how to acquire
diverse and generalizable skills for robots. Recent research in one-shot
imitation learning has shown promise in transferring trained policies to new
tasks based on demonstrations. This feature is attractive for enabling robots
to acquire new skills and improving task and motion planning. However, due to
limitations in the training dataset, the current focus of the community has
mainly been on simple cases, such as push or pick-place tasks, relying solely
on visual guidance. In reality, there are many complex skills, some of which
may even require both visual and tactile perception to solve. This paper aims
to unlock the potential for an agent to generalize to hundreds of real-world
skills with multi-modal perception. To achieve this, we have collected a
dataset comprising over 110,000 contact-rich robot manipulation sequences
across diverse skills, contexts, robots, and camera viewpoints, all collected
in the real world. Each sequence in the dataset includes visual, force, audio,
and action information. Moreover, we also provide a corresponding human
demonstration video and a language description for each robot sequence. We have
invested significant efforts in calibrating all the sensors and ensuring a
high-quality dataset. The dataset is made publicly available at rh20t.github.io
| [
{
"version": "v1",
"created": "Sun, 2 Jul 2023 15:33:31 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 10:47:35 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Fang",
"Hao-Shu",
""
],
[
"Fang",
"Hongjie",
""
],
[
"Tang",
"Zhenyu",
""
],
[
"Liu",
"Jirong",
""
],
[
"Wang",
"Chenxi",
""
],
[
"Wang",
"Junbo",
""
],
[
"Zhu",
"Haoyi",
""
],
[
"Lu",
"Cewu",
""
]
]
| new_dataset | 0.999826 |
2307.03190 | Aniruddha Mahapatra | Aniruddha Mahapatra, Aliaksandr Siarohin, Hsin-Ying Lee, Sergey
Tulyakov, Jun-Yan Zhu | Text-Guided Synthesis of Eulerian Cinemagraphs | Project website: https://text2cinemagraph.github.io/website/ | null | null | null | cs.CV cs.GR cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce Text2Cinemagraph, a fully automated method for creating
cinemagraphs from text descriptions - an especially challenging task when
prompts feature imaginary elements and artistic styles, given the complexity of
interpreting the semantics and motions of these images. We focus on
cinemagraphs of fluid elements, such as flowing rivers, and drifting clouds,
which exhibit continuous motion and repetitive textures. Existing single-image
animation methods fall short on artistic inputs, and recent text-based video
methods frequently introduce temporal inconsistencies, struggling to keep
certain regions static. To address these challenges, we propose an idea of
synthesizing image twins from a single text prompt - a pair of an artistic
image and its pixel-aligned corresponding natural-looking twin. While the
artistic image depicts the style and appearance detailed in our text prompt,
the realistic counterpart greatly simplifies layout and motion analysis.
Leveraging existing natural image and video datasets, we can accurately segment
the realistic image and predict plausible motion given the semantic
information. The predicted motion can then be transferred to the artistic image
to create the final cinemagraph. Our method outperforms existing approaches in
creating cinemagraphs for natural landscapes as well as artistic and
other-worldly scenes, as validated by automated metrics and user studies.
Finally, we demonstrate two extensions: animating existing paintings and
controlling motion directions using text.
| [
{
"version": "v1",
"created": "Thu, 6 Jul 2023 17:59:31 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 17:45:01 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Sep 2023 02:46:02 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Mahapatra",
"Aniruddha",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Zhu",
"Jun-Yan",
""
]
]
| new_dataset | 0.999159 |
2308.01237 | Pengzhou Cheng | Pengzhou Cheng, Lei Hua, Haobin Jiang, Gongshen Liu | LSF-IDM: Automotive Intrusion Detection Model with Lightweight
Attribution and Semantic Fusion | 18 pages, 8 figures | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Autonomous vehicles (AVs) are more vulnerable to network attacks due to the
high connectivity and diverse communication modes between vehicles and external
networks. Deep learning-based Intrusion detection, an effective method for
detecting network attacks, can provide functional safety as well as a real-time
communication guarantee for vehicles, thereby being widely used for AVs.
Existing methods work well for simple-mode cyber-attacks but produce higher
false-alarm rates under the resource-limited conditions of vehicles when the attack is
concealed within contextual features. In this paper, we present a novel
automotive intrusion detection model with lightweight attribution and semantic
fusion, named LSF-IDM. Our motivation is based on the observation that, when
malicious packets are injected into the in-vehicle networks (IVNs), the packet
log exhibits a strict contextual ordering because of the periodicity and
broadcast nature of the CAN bus. Therefore, this model first captures the
context as the semantic feature of messages with the BERT language framework.
Thereafter, a lightweight model (e.g., BiLSTM) learns the fused feature from
an input packet's classification and its output distribution in BERT via
knowledge distillation. Experimental results demonstrate the effectiveness of our
method in defending against several representative attacks on IVNs. We also
analyze the differences between the proposed method, the lightweight models,
and BERT to attain a deeper understanding of how the model balances detection
performance and model complexity.
| [
{
"version": "v1",
"created": "Wed, 2 Aug 2023 15:48:33 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Aug 2023 01:02:02 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Sep 2023 04:06:05 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Cheng",
"Pengzhou",
""
],
[
"Hua",
"Lei",
""
],
[
"Jiang",
"Haobin",
""
],
[
"Liu",
"Gongshen",
""
]
]
| new_dataset | 0.999559 |
2308.05345 | Zhiyao Xie | Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie | RTLLM: An Open-Source Benchmark for Design RTL Generation with Large
Language Model | null | Asia and South Pacific Design Automation Conference (ASP-DAC) 2024 | null | null | cs.LG cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by the recent success of large language models (LLMs) like ChatGPT,
researchers have started to explore the adoption of LLMs for agile hardware design,
such as generating design RTL from natural-language instructions. However, in
existing works, the target designs are all relatively simple, small in scale,
and proposed by the authors themselves, making a fair comparison
among different LLM solutions challenging. In addition, many prior works only
focus on the design correctness, without evaluating the design qualities of
generated design RTL. In this work, we propose an open-source benchmark named
RTLLM, for generating design RTL with natural language instructions. To
systematically evaluate the auto-generated design RTL, we summarized three
progressive goals, named syntax goal, functionality goal, and design quality
goal. This benchmark can automatically provide a quantitative evaluation of any
given LLM-based solution. Furthermore, we propose an easy-to-use yet
surprisingly effective prompt engineering technique named self-planning, which
proves to significantly boost the performance of GPT-3.5 in our proposed
benchmark.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 05:24:41 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 12:33:51 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Lu",
"Yao",
""
],
[
"Liu",
"Shang",
""
],
[
"Zhang",
"Qijun",
""
],
[
"Xie",
"Zhiyao",
""
]
]
| new_dataset | 0.999 |
2309.13079 | Fukai Shang | Yidong Liu, FuKai Shang, Fang Wang, Rui Xu, Jun Wang, Wei Li, Yao Li,
Conghui He | MiChao-HuaFen 1.0: A Specialized Pre-trained Corpus Dataset for
Domain-specific Large Models | 4 pages,2 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | With the advancement of deep learning technologies, general-purpose large
models such as GPT-4 have demonstrated exceptional capabilities across various
domains. Nevertheless, there remains a demand for high-quality, domain-specific
outputs in areas like healthcare, law, and finance. This paper first evaluates
the existing large models for specialized domains and discusses their
limitations. To cater to the specific needs of certain domains, we introduce
the ``MiChao-HuaFen 1.0'' pre-trained corpus dataset, tailored for the news and
governmental sectors. The dataset, sourced from publicly available internet
data from 2022, underwent multiple rounds of cleansing and processing to ensure
high quality and reliable origins, with provisions for consistent and stable
updates. This dataset not only supports the pre-training of large models for
Chinese vertical domains but also aids in propelling deep learning research and
applications in related fields.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 09:02:28 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 10:38:19 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Liu",
"Yidong",
""
],
[
"Shang",
"FuKai",
""
],
[
"Wang",
"Fang",
""
],
[
"Xu",
"Rui",
""
],
[
"Wang",
"Jun",
""
],
[
"Li",
"Wei",
""
],
[
"Li",
"Yao",
""
],
[
"He",
"Conghui",
""
]
]
| new_dataset | 0.99984 |
2309.13226 | Guoyang Xie | Jiaqi Liu, Guoyang Xie, Ruitao Chen, Xinpeng Li, Jinbao Wang, Yong
Liu, Chengjie Wang, Feng Zheng | Real3D-AD: A Dataset of Point Cloud Anomaly Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | High-precision point cloud anomaly detection is the gold standard for
identifying the defects of advancing machining and precision manufacturing.
Despite some methodological advances in this area, the scarcity of datasets and
the lack of a systematic benchmark hinder its development. We introduce
Real3D-AD, a challenging high-precision point cloud anomaly detection dataset,
addressing the limitations in the field. With 1,254 high-resolution 3D items,
each containing from forty thousand to millions of points, Real3D-AD is the
largest dataset for high-precision 3D industrial anomaly detection to date.
Real3D-AD surpasses the existing 3D anomaly detection datasets available in
point cloud resolution (0.0010 mm-0.0015 mm), 360-degree coverage, and perfect
prototypes. Additionally, we present a comprehensive benchmark for Real3D-AD,
revealing the absence of baseline methods for high-precision point cloud
anomaly detection. To address this, we propose Reg3D-AD, a registration-based
3D anomaly detection method incorporating a novel feature memory bank that
preserves local and global representations. Extensive experiments on the
Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility
and accessibility, we provide the Real3D-AD dataset, benchmark source code, and
Reg3D-AD on our website: https://github.com/M-3LAB/Real3D-AD.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2023 00:43:38 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 03:01:43 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Liu",
"Jiaqi",
""
],
[
"Xie",
"Guoyang",
""
],
[
"Chen",
"Ruitao",
""
],
[
"Li",
"Xinpeng",
""
],
[
"Wang",
"Jinbao",
""
],
[
"Liu",
"Yong",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Zheng",
"Feng",
""
]
]
| new_dataset | 0.999822 |
2309.13457 | Wai Tong Chung | Wai Tong Chung, Bassem Akoush, Pushan Sharma, Alex Tamkin, Ki Sung
Jung, Jacqueline H. Chen, Jack Guo, Davy Brouzet, Mohsen Talei, Bruno Savard,
Alexei Y. Poludnenko, Matthias Ihme | Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric
Super-Resolution with BLASTNet 2.0 Data | Accepted in Advances in Neural Information Processing Systems 36
(NeurIPS 2023). 55 pages, 21 figures. v2: Corrected co-author name. Keywords:
Super-resolution, 3D, Neural Scaling, Physics-informed Loss, Computational
Fluid Dynamics, Partial Differential Equations, Turbulent Reacting Flows,
Direct Numerical Simulation, Fluid Mechanics, Combustion | null | null | null | cs.LG cs.CV physics.comp-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of compressible turbulent flows is essential for applications
related to propulsion, energy generation, and the environment. Here, we present
BLASTNet 2.0, a 2.2 TB network-of-datasets containing 744 full-domain samples
from 34 high-fidelity direct numerical simulations, which addresses the current
limited availability of 3D high-fidelity reacting and non-reacting compressible
turbulent flow simulation data. With this data, we benchmark a total of 49
variations of five deep learning approaches for 3D super-resolution - which can
be applied for improving scientific imaging, simulations, turbulence models, as
well as in computer vision applications. We perform neural scaling analysis on
these models to examine the performance of different machine learning (ML)
approaches, including two scientific ML techniques. We demonstrate that (i)
predictive performance can scale with model size and cost, (ii) architecture
matters significantly, especially for smaller models, and (iii) the benefits of
physics-based losses can persist with increasing model size. The outcomes of
this benchmark study are anticipated to offer insights that can aid the design
of 3D super-resolution models, especially for turbulence models, while this
data is expected to foster ML methods for a broad range of flow physics
applications. This data is publicly available with download links and browsing
tools consolidated at https://blastnet.github.io.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2023 18:57:02 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 16:06:47 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Chung",
"Wai Tong",
""
],
[
"Akoush",
"Bassem",
""
],
[
"Sharma",
"Pushan",
""
],
[
"Tamkin",
"Alex",
""
],
[
"Jung",
"Ki Sung",
""
],
[
"Chen",
"Jacqueline H.",
""
],
[
"Guo",
"Jack",
""
],
[
"Brouzet",
"Davy",
""
],
[
"Talei",
"Mohsen",
""
],
[
"Savard",
"Bruno",
""
],
[
"Poludnenko",
"Alexei Y.",
""
],
[
"Ihme",
"Matthias",
""
]
]
| new_dataset | 0.987179 |
2309.13737 | Xiaobin Xiong | Yi Wang, Jiarong Kang, Zhiheng Chen, and Xiaobin Xiong | Terrestrial Locomotion of PogoX: From Hardware Design to Energy Shaping
and Step-to-step Dynamics Based Control | 7 pages, 7 figures | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel controller design on a robotic locomotor that combines an
aerial vehicle with a spring-loaded leg. The main motivation is to enable the
terrestrial locomotion capability on aerial vehicles so that they can carry
heavy loads: heavy enough that flying is no longer possible, e.g., when the
thrust-to-weight ratio (TWR) is small. The robot is designed with a pogo-stick
leg and a quadrotor, and thus it is named PogoX. We show that with a simple
and lightweight spring-loaded leg, the robot is capable of hopping with TWR
$<1$. The control of hopping is realized via two components: a vertical height
control via control Lyapunov function-based energy shaping, and a step-to-step
(S2S) dynamics based horizontal velocity control that is inspired by the
hopping of the Spring-Loaded Inverted Pendulum (SLIP). The controller is
successfully realized on the physical robot, showing dynamic terrestrial
locomotion of PogoX which can hop at variable heights and different horizontal
velocities with robustness to ground height variations and external pushes.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2023 19:44:24 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 04:07:11 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Wang",
"Yi",
""
],
[
"Kang",
"Jiarong",
""
],
[
"Chen",
"Zhiheng",
""
],
[
"Xiong",
"Xiaobin",
""
]
]
| new_dataset | 0.99967 |