id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.16463 | Yupan Huang | Yupan Huang and Zaiqiao Meng and Fangyu Liu and Yixuan Su and Nigel
Collier and Yutong Lu | Sparkles: Unlocking Chats Across Multiple Images for Multimodal
Instruction-Following Models | Reduced main content to 9 pages; typos corrected | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models exhibit enhanced zero-shot performance on various tasks
when fine-tuned with instruction-following data. Multimodal
instruction-following models extend these capabilities by integrating both text
and images. However, existing models such as MiniGPT-4 face challenges in
maintaining dialogue coherence in scenarios involving multiple images. A
primary reason is the lack of a specialized dataset for this critical
application. To bridge these gaps, we present SparklesChat, a multimodal
instruction-following model for open-ended dialogues across multiple images. To
support the training, we introduce SparklesDialogue, the first
machine-generated dialogue dataset tailored for word-level interleaved
multi-image and text interactions. Furthermore, we construct SparklesEval, a
GPT-assisted benchmark for quantitatively assessing a model's conversational
competence across multiple images and dialogue turns. Our experiments validate
the effectiveness of SparklesChat in understanding and reasoning across
multiple images and dialogue turns. Specifically, SparklesChat outperformed
MiniGPT-4 on established vision-and-language benchmarks, including the BISON
binary image selection task and the NLVR2 visual reasoning task. Moreover,
SparklesChat scored 8.56 out of 10 on SparklesEval, substantially exceeding
MiniGPT-4's score of 3.91 and nearing GPT-4's score of 9.26. Qualitative
evaluations further demonstrate SparklesChat's generality in handling
real-world applications. All resources are available at
https://github.com/HYPJUDY/Sparkles.
| [
{
"version": "v1",
"created": "Thu, 31 Aug 2023 05:15:27 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 03:31:17 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Huang",
"Yupan",
""
],
[
"Meng",
"Zaiqiao",
""
],
[
"Liu",
"Fangyu",
""
],
[
"Su",
"Yixuan",
""
],
[
"Collier",
"Nigel",
""
],
[
"Lu",
"Yutong",
""
]
]
| new_dataset | 0.996578 |
2308.16512 | Yichun Shi | Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, Xiao Yang | MVDream: Multi-view Diffusion for 3D Generation | Our project page is https://MV-Dream.github.io | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce MVDream, a multi-view diffusion model that is able to generate
consistent multi-view images from a given text prompt. Learning from both 2D
and 3D data, a multi-view diffusion model can achieve the generalizability of
2D diffusion models and the consistency of 3D renderings. We demonstrate that
such a multi-view prior can serve as a generalizable 3D prior that is agnostic
to 3D representations. It can be applied to 3D generation via Score
Distillation Sampling, significantly enhancing the consistency and stability of
existing 2D-lifting methods. It can also learn new concepts from a few 2D
examples, akin to DreamBooth, but for 3D generation.
| [
{
"version": "v1",
"created": "Thu, 31 Aug 2023 07:49:06 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 10:42:28 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Shi",
"Yichun",
""
],
[
"Wang",
"Peng",
""
],
[
"Ye",
"Jianglong",
""
],
[
"Long",
"Mai",
""
],
[
"Li",
"Kejie",
""
],
[
"Yang",
"Xiao",
""
]
]
| new_dataset | 0.992535 |
2309.03038 | Seongjoon Kang Mr. | Seongjoon Kang, Marco Mezzavilla, Sundeep Rangan, Arjuna Madanayake,
Satheesh Bojja Venkatakrishnan, Gregory Hellbourg, Monisha Ghosh, Hamed
Rahmani, Aditya Dhananjay | Cellular Wireless Networks in the Upper Mid-Band | 11 pages | null | null | null | cs.NI eess.SP | http://creativecommons.org/licenses/by/4.0/ | The upper mid-band -- roughly from 7 to 24 GHz -- has attracted considerable
recent interest for new cellular services. This frequency range has vastly more
spectrum than the highly congested bands below 7 GHz while offering more
favorable propagation and coverage than the millimeter wave (mmWave)
frequencies. Realizing the full potential of these bands, however, will require
fundamental changes to the design of cellular systems. Most importantly,
spectrum will likely need to be shared with incumbents including communication
satellites, military RADAR, and radio astronomy. Also, due to the wide
bandwidth, directional nature of transmission, and intermittent occupancy of
incumbents, cellular systems will need to be agile to sense and intelligently
use large spatial and bandwidth degrees of freedom. This paper attempts to
provide an initial assessment of the feasibility and potential gains of
wideband cellular systems operating in the upper mid-band. The study includes:
(1) a system study to assess potential gains of multi-band systems in a
representative dense urban environment; (2) propagation calculations to assess
potential cross interference between satellites and terrestrial cellular
services; and (3) design and evaluation of a compact multi-band antenna array
structure. Leveraging these preliminary results, we identify potential future
research directions to realize next-generation systems in these frequencies.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 14:30:29 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 15:39:29 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 20:57:06 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Kang",
"Seongjoon",
""
],
[
"Mezzavilla",
"Marco",
""
],
[
"Rangan",
"Sundeep",
""
],
[
"Madanayake",
"Arjuna",
""
],
[
"Venkatakrishnan",
"Satheesh Bojja",
""
],
[
"Hellbourg",
"Gregory",
""
],
[
"Ghosh",
"Monisha",
""
],
[
"Rahmani",
"Hamed",
""
],
[
"Dhananjay",
"Aditya",
""
]
]
| new_dataset | 0.998633 |
2309.07915 | HaoZhe Zhao | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang
Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | MMICL: Empowering Vision-language Model with Multi-Modal In-Context
Learning | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | null | null | cs.CL cs.AI cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context.
| [
{
"version": "v1",
"created": "Thu, 14 Sep 2023 17:59:17 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 14:46:01 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhao",
"Haozhe",
""
],
[
"Cai",
"Zefan",
""
],
[
"Si",
"Shuzheng",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"An",
"Kaikai",
""
],
[
"Chen",
"Liang",
""
],
[
"Liu",
"Zixuan",
""
],
[
"Wang",
"Sheng",
""
],
[
"Han",
"Wenjuan",
""
],
[
"Chang",
"Baobao",
""
]
]
| new_dataset | 0.978042 |
2309.08448 | Chan-Jan Hsu | Chan-Jan Hsu, Chang-Le Liu, Feng-Ting Liao, Po-Chun Hsu, Yi-Chang
Chen, Da-shan Shiu | Advancing the Evaluation of Traditional Chinese Language Models: Towards
a Comprehensive Benchmark Suite | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The evaluation of large language models is an essential task in the field of
language understanding and generation. As language models continue to advance,
the need for effective benchmarks to assess their performance has become
imperative. In the context of Traditional Chinese, there is a scarcity of
comprehensive and diverse benchmarks to evaluate the capabilities of language
models, despite the existence of certain benchmarks such as DRCD, TTQA, CMDQA,
and FGC dataset. To address this gap, we propose a novel set of benchmarks that
leverage existing English datasets and are tailored to evaluate language models
in Traditional Chinese. These benchmarks encompass a wide range of tasks,
including contextual question-answering, summarization, classification, and
table understanding. The proposed benchmarks offer a comprehensive evaluation
framework, enabling the assessment of language models' capabilities across
different tasks. In this paper, we evaluate the performance of GPT-3.5,
Taiwan-LLaMa-v1.0, and Model 7-C, our proprietary model, on these benchmarks.
The evaluation results highlight that our model, Model 7-C, achieves
performance comparable to GPT-3.5 with respect to a part of the evaluated
capabilities. In an effort to advance the evaluation of language models in
Traditional Chinese and stimulate further research in this field, we have
open-sourced our benchmark and opened the model for trial.
| [
{
"version": "v1",
"created": "Fri, 15 Sep 2023 14:52:23 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 15:22:42 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Hsu",
"Chan-Jan",
""
],
[
"Liu",
"Chang-Le",
""
],
[
"Liao",
"Feng-Ting",
""
],
[
"Hsu",
"Po-Chun",
""
],
[
"Chen",
"Yi-Chang",
""
],
[
"Shiu",
"Da-shan",
""
]
]
| new_dataset | 0.985892 |
2309.11998 | Lianmin Zheng | Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang,
Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric. P Xing, Joseph E.
Gonzalez, Ion Stoica, Hao Zhang | LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studying how people interact with large language models (LLMs) in real-world
scenarios is increasingly important due to their widespread use in various
applications. In this paper, we introduce LMSYS-Chat-1M, a large-scale dataset
containing one million real-world conversations with 25 state-of-the-art LLMs.
This dataset is collected from 210K unique IP addresses in the wild on our
Vicuna demo and Chatbot Arena website. We offer an overview of the dataset's
content, including its curation process, basic statistics, and topic
distribution, highlighting its diversity, originality, and scale. We
demonstrate its versatility through four use cases: developing content
moderation models that perform similarly to GPT-4, building a safety benchmark,
training instruction-following models that perform similarly to Vicuna, and
creating challenging benchmark questions. We believe that this dataset will
serve as a valuable resource for understanding and advancing LLM capabilities.
The dataset is publicly available at
https://huggingface.co/datasets/lmsys/lmsys-chat-1m.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 12:13:55 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 00:53:35 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Sep 2023 00:30:51 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zheng",
"Lianmin",
""
],
[
"Chiang",
"Wei-Lin",
""
],
[
"Sheng",
"Ying",
""
],
[
"Li",
"Tianle",
""
],
[
"Zhuang",
"Siyuan",
""
],
[
"Wu",
"Zhanghao",
""
],
[
"Zhuang",
"Yonghao",
""
],
[
"Li",
"Zhuohan",
""
],
[
"Lin",
"Zi",
""
],
[
"Xing",
"Eric. P",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Stoica",
"Ion",
""
],
[
"Zhang",
"Hao",
""
]
]
| new_dataset | 0.999768 |
2309.12668 | Quan Dung Pham | Quan-Dung Pham, Yipeng Zhu, Tan-Sang Ha, K.H. Long Nguyen, Binh-Son
Hua, and Sai-Kit Yeung | UWA360CAM: A 360$^{\circ}$ 24/7 Real-Time Streaming Camera System for
Underwater Applications | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Omnidirectional camera is a cost-effective and information-rich sensor highly
suitable for many marine applications and the ocean scientific community,
encompassing several domains such as augmented reality, mapping, motion
estimation, visual surveillance, and simultaneous localization and mapping.
However, designing and constructing such a high-quality 360$^{\circ}$ real-time
streaming camera system for underwater applications is a challenging problem
due to the technical complexity in several aspects including sensor resolution,
wide field of view, power supply, optical design, system calibration, and
overheating management. This paper presents a novel and comprehensive system
that addresses the complexities associated with the design, construction, and
implementation of a fully functional 360$^{\circ}$ real-time streaming camera
system specifically tailored for underwater environments. Our proposed system,
UWA360CAM, can stream video in real time, operate 24/7, and capture
360$^{\circ}$ underwater panorama images. Notably, our work is the pioneering
effort in providing a detailed and replicable account of this system. The
experiments provide a comprehensive analysis of our proposed system.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 07:24:58 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 06:37:18 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Pham",
"Quan-Dung",
""
],
[
"Zhu",
"Yipeng",
""
],
[
"Ha",
"Tan-Sang",
""
],
[
"Nguyen",
"K. H. Long",
""
],
[
"Hua",
"Binh-Son",
""
],
[
"Yeung",
"Sai-Kit",
""
]
]
| new_dataset | 0.985437 |
2309.13396 | Pirouz Nourian | Pirouz Nourian, Shervin Azadi, Nan Bai, Bruno de Andrade, Nour Abu
Zaid, Samaneh Rezvani, and Ana Pereira Roders | EquiCity Game: A mathematical serious game for participatory design of
spatial configurations | 16 pages (the paper), 15 pages (supplemental materials), references
missing in the supplemental document | null | null | null | cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | We propose mechanisms for a mathematical social-choice game that is designed
to mediate decision-making processes for city planning, urban area
redevelopment, and architectural design (massing) of urban housing complexes.
The proposed game is effectively a multi-player generative configurator
equipped with automated appraisal/scoring mechanisms for revealing the
aggregate impact of alternatives; featuring a participatory digital process to
support transparent and inclusive decision-making processes in spatial design
for ensuring an equitable balance of sustainable development goals. As such,
the game effectively empowers a group of decision-makers to reach a fair
consensus by mathematically simulating many rounds of trade-offs between their
decisions, with different levels of interest or control over various types of
investments. Our proposed gamified design process encompasses decision-making
about the most idiosyncratic aspects of a site related to its heritage status
and cultural significance to the physical aspects such as balancing access to
sunlight and the right to sunlight of the neighbours of the site, ensuring
coherence of the entire configuration with regards to a network of desired
closeness ratings, the satisfaction of a programme of requirements, and
intricately balancing individual development goals in conjunction with communal
goals and environmental design codes. The game is developed fully based on an
algebraic computational process on our own digital twinning platform, using
open geospatial data and open-source computational tools such as NumPy. The
mathematical process consists of a Markovian design machine for balancing the
decisions of actors, a massing configurator equipped with Fuzzy Logic and
Multi-Criteria Decision Analysis, algebraic graph-theoretical accessibility
evaluators, and automated solar-climatic evaluators using geospatial
computational geometry.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2023 15:01:52 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 17:47:32 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Nourian",
"Pirouz",
""
],
[
"Azadi",
"Shervin",
""
],
[
"Bai",
"Nan",
""
],
[
"de Andrade",
"Bruno",
""
],
[
"Zaid",
"Nour Abu",
""
],
[
"Rezvani",
"Samaneh",
""
],
[
"Roders",
"Ana Pereira",
""
]
]
| new_dataset | 0.999278 |
2309.13549 | Arthur Zhang | Arthur Zhang, Chaitanya Eranki, Christina Zhang, Ji-Hwan Park, Raymond
Hong, Pranav Kalyani, Lochana Kalyanaraman, Arsh Gamare, Arnav Bagad, Maria
Esteva, Joydeep Biswas | Towards Robust Robot 3D Perception in Urban Environments: The UT Campus
Object Dataset | 19 pages, 18 figures, 12 tables | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce the UT Campus Object Dataset (CODa), a mobile robot egocentric
perception dataset collected on the University of Texas Austin Campus. Our
dataset contains 8.5 hours of multimodal sensor data: synchronized 3D point
clouds and stereo RGB video from a 128-channel 3D LiDAR and two 1.25MP RGB
cameras at 10 fps; RGB-D videos from an additional 0.5MP sensor at 7 fps, and a
9-DOF IMU sensor at 40 Hz. We provide 58 minutes of ground-truth annotations
containing 1.3 million 3D bounding boxes with instance IDs for 53 semantic
classes, 5000 frames of 3D semantic annotations for urban terrain, and
pseudo-ground truth localization. We repeatedly traverse identical geographic
locations for a wide range of indoor and outdoor areas, weather conditions, and
times of the day. Using CODa, we empirically demonstrate that: 1) 3D object
detection performance in urban settings is significantly higher when trained
using CODa compared to existing datasets even when employing state-of-the-art
domain adaptation approaches, 2) sensor-specific fine-tuning improves 3D object
detection accuracy and 3) pretraining on CODa improves cross-dataset 3D object
detection performance in urban settings compared to pretraining on AV datasets.
Using our dataset and annotations, we release benchmarks for 3D object
detection and 3D semantic segmentation using established metrics. In the
future, the CODa benchmark will include additional tasks like unsupervised
object discovery and re-identification. We publicly release CODa on the Texas
Data Repository, pre-trained models, dataset development package, and
interactive dataset viewer on our website at https://amrl.cs.utexas.edu/coda.
We expect CODa to be a valuable dataset for research in egocentric 3D
perception and planning for autonomous navigation in urban environments.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2023 04:43:39 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 04:01:04 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhang",
"Arthur",
""
],
[
"Eranki",
"Chaitanya",
""
],
[
"Zhang",
"Christina",
""
],
[
"Park",
"Ji-Hwan",
""
],
[
"Hong",
"Raymond",
""
],
[
"Kalyani",
"Pranav",
""
],
[
"Kalyanaraman",
"Lochana",
""
],
[
"Gamare",
"Arsh",
""
],
[
"Bagad",
"Arnav",
""
],
[
"Esteva",
"Maria",
""
],
[
"Biswas",
"Joydeep",
""
]
]
| new_dataset | 0.999148 |
2309.17446 | Ansong Ni | Ansong Ni, Pengcheng Yin, Yilun Zhao, Martin Riddell, Troy Feng, Rui
Shen, Stephen Yin, Ye Liu, Semih Yavuz, Caiming Xiong, Shafiq Joty, Yingbo
Zhou, Dragomir Radev, Arman Cohan | L2CEval: Evaluating Language-to-Code Generation Capabilities of Large
Language Models | Project Website: https://l2c-eval.github.io/ | null | null | null | cs.CL cs.LG cs.PL cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recently, large language models (LLMs), especially those that are pretrained
on code, have demonstrated strong capabilities in generating programs from
natural language inputs in a few-shot or even zero-shot manner. Despite
promising results, there is a notable lack of a comprehensive evaluation of
these models' language-to-code generation capabilities. Existing studies often
focus on specific tasks, model architectures, or learning paradigms, leading to
a fragmented understanding of the overall landscape. In this work, we present
L2CEval, a systematic evaluation of the language-to-code generation
capabilities of LLMs on 7 tasks across the domain spectrum of semantic parsing,
math reasoning and Python programming, analyzing the factors that potentially
affect their performance, such as model size, pretraining data, instruction
tuning, and different prompting methods. In addition to assessing model
performance, we measure confidence calibration for the models and conduct human
evaluations of the output programs. This enables us to identify and analyze the
typical failure modes across various tasks and models. L2CEval offers a
comprehensive understanding of the capabilities and limitations of LLMs in
language-to-code generation. We also release the evaluation framework and all
model outputs, hoping to lay the groundwork for further future research in this
domain.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 17:57:00 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 09:54:50 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Ni",
"Ansong",
""
],
[
"Yin",
"Pengcheng",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Riddell",
"Martin",
""
],
[
"Feng",
"Troy",
""
],
[
"Shen",
"Rui",
""
],
[
"Yin",
"Stephen",
""
],
[
"Liu",
"Ye",
""
],
[
"Yavuz",
"Semih",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Radev",
"Dragomir",
""
],
[
"Cohan",
"Arman",
""
]
]
| new_dataset | 0.99658 |
2310.00001 | Joao P. A. Dantas | Joao P. A. Dantas, Samara R. Silva, Vitor C. F. Gomes, Andre N. Costa,
Adrisson R. Samersla, Diego Geraldo, Marcos R. O. A. Maximo, Takashi Yoneyama | AsaPy: A Python Library for Aerospace Simulation Analysis | null | null | null | null | cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AsaPy is a custom-made Python library designed to simplify and optimize the
analysis of simulation data. It offers a range of features, including the
design of experiment methods, statistical analysis techniques, machine learning
algorithms, and data visualization tools. AsaPy's flexibility and
customizability make it a viable solution for engineers and researchers who
need to quickly gain insights into constructive simulations. AsaPy is built on
top of popular scientific computing libraries, ensuring high performance and
scalability. In this work, we provide an overview of the key features and
capabilities of AsaPy, followed by an exposition of its architecture and
demonstrations of its effectiveness through some use cases applied in military
operational simulations. We also evaluate how other simulation tools deal with
data science, highlighting AsaPy's strengths and advantages. Finally, we
discuss potential use cases and applications of AsaPy and outline future
directions for the development and improvement of the library.
| [
{
"version": "v1",
"created": "Wed, 12 Jul 2023 00:02:37 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Dantas",
"Joao P. A.",
""
],
[
"Silva",
"Samara R.",
""
],
[
"Gomes",
"Vitor C. F.",
""
],
[
"Costa",
"Andre N.",
""
],
[
"Samersla",
"Adrisson R.",
""
],
[
"Geraldo",
"Diego",
""
],
[
"Maximo",
"Marcos R. O. A.",
""
],
[
"Yoneyama",
"Takashi",
""
]
]
| new_dataset | 0.999189 |
2310.00008 | Shreyansh Pitroda | Shreyansh Pitroda | Dynamic Multimodal Locomotion: A Quick Overview of Hardware and Control | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bipedal robots are a fascinating and advanced category of robots designed to
mimic human form and locomotion. The development of the bipedal robots is a
significant milestone in robotics. However, even the most advanced bipedal
robots are susceptible to changes in terrain, obstacle negotiation, payload,
and weight distribution, and the ability to recover after stumbles. These
problems can be circumvented by introducing thrusters. Thrusters will allow the
robot to stabilize on various uneven terrain. The robot can easily avoid
obstacles and will be able to recover after stumbling. Harpy is a bipedal robot
that has 6 joints and 2 thrusters and serves as a hardware platform for
implementing advanced control algorithms. This thesis explores manufacturing
Harpy hardware such that the overall system is lightweight and strong.
It then presents simulation results demonstrating thruster-assisted walking and,
finally, describes the firmware and communication network development
implemented on the actual hardware.
| [
{
"version": "v1",
"created": "Thu, 31 Aug 2023 23:07:47 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Pitroda",
"Shreyansh",
""
]
]
| new_dataset | 0.985683 |
2310.00014 | Yong Ren | Yong Ren, Tao Wang, Jiangyan Yi, Le Xu, Jianhua Tao, Chuyuan Zhang,
Junzuo Zhou | Fewer-token Neural Speech Codec with Time-invariant Codes | Submitted to ICASSP 2024 | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language model based text-to-speech (TTS) models, like VALL-E, have gained
attention for their outstanding in-context learning capability in zero-shot
scenarios. Neural speech codec is a critical component of these models, which
can convert speech into discrete token representations. However, excessive
token sequences from the codec may negatively affect prediction accuracy and
restrict the progression of language model based TTS models. To address this
issue, this paper proposes a novel neural speech codec with time-invariant
codes named TiCodec. By encoding and quantizing time-invariant information into
a separate code, TiCodec can reduce the amount of frame-level information that
needs encoding, effectively decreasing the number of tokens as codes of speech.
Furthermore, this paper introduces a time-invariant encoding consistency loss
to enhance the consistency of time-invariant code within an utterance and force
it to capture more global information, which can benefit the zero-shot TTS
task. Experimental results demonstrate that TiCodec can not only enhance the
quality of reconstruction speech with fewer tokens but also increase the
similarity and naturalness, as well as reduce the word error rate of the
synthesized speech by the TTS model.
| [
{
"version": "v1",
"created": "Fri, 15 Sep 2023 04:32:26 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Ren",
"Yong",
""
],
[
"Wang",
"Tao",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Xu",
"Le",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Zhang",
"Chuyuan",
""
],
[
"Zhou",
"Junzuo",
""
]
]
| new_dataset | 0.983145 |
2310.00023 | Saptarshi Sengupta | Gaurav Shinde, Rohan Mohapatra, Pooja Krishan and Saptarshi Sengupta | De-SaTE: Denoising Self-attention Transformer Encoders for Li-ion
Battery Health Prognostics | 10 pages, 6 figures, 3 tables, 17 equations | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lithium Ion (Li-ion) batteries have gained widespread popularity across
various industries, from powering portable electronic devices to propelling
electric vehicles and supporting energy storage systems. A central challenge in
managing Li-ion batteries effectively is accurately predicting their Remaining
Useful Life (RUL), which is a critical measure for proactive maintenance and
predictive analytics. This study presents a novel approach that harnesses the
power of multiple denoising modules, each trained to address specific types of
noise commonly encountered in battery data. Specifically, we use a denoising
auto-encoder and a wavelet denoiser to generate encoded/decomposed
representations, which are subsequently processed through dedicated
self-attention transformer encoders. After extensive experimentation on the
NASA and CALCE datasets, we are able to characterize a broad spectrum of health
indicator estimations under a set of diverse noise patterns. We find that our
reported error metrics on these datasets are on par or better with the best
reported in recent literature.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 19:17:13 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Shinde",
"Gaurav",
""
],
[
"Mohapatra",
"Rohan",
""
],
[
"Krishan",
"Pooja",
""
],
[
"Sengupta",
"Saptarshi",
""
]
]
| new_dataset | 0.993891 |
2310.00033 | Jie Liu | Jie Liu, Zufeng Pang, Zhiyong Li, Guilin Wen, Zhoucheng Su, Junfeng
He, Kaiyue Liu, Dezheng Jiang, Zenan Li, Shouyan Chen, Yang Tian, Yi Min Xie,
Zhenpei Wang, Zhuangjian Liu | OriWheelBot: An origami-wheeled robot | 23 pages, 7 figures | null | null | null | cs.RO physics.app-ph | http://creativecommons.org/licenses/by/4.0/ | Origami-inspired robots with multiple advantages, such as being lightweight,
requiring less assembly, and exhibiting exceptional deformability, have
received substantial and sustained attention. However, the existing
origami-inspired robots are usually of limited functionalities and developing
feature-rich robots is very challenging. Here, we report an origami-wheeled
robot (OriWheelBot) with variable width and outstanding sand walking
versatility. The OriWheelBot's ability to adjust wheel width over obstacles is
achieved by origami wheels made of Miura origami. An improved version, called
iOriWheelBot, is also developed to automatically judge the width of the
obstacles. Three actions, namely direct pass, variable width pass, and direct
return, will be carried out depending on the width of the channel between the
obstacles. We have identified two motion mechanisms, i.e., sand-digging and
sand-pushing, with the latter being more conducive to walking on the sand. We
have systematically examined numerous sand walking characteristics, including
carrying loads, climbing a slope, walking on a slope, and navigating sand pits,
small rocks, and sand traps. The OriWheelBot can change its width by 40%, has a
loading-carrying ratio of 66.7% on flat sand and can climb a 17-degree sand
incline. The OriWheelBot can be useful for planetary subsurface exploration and
disaster area rescue.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 13:42:50 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Liu",
"Jie",
""
],
[
"Pang",
"Zufeng",
""
],
[
"Li",
"Zhiyong",
""
],
[
"Wen",
"Guilin",
""
],
[
"Su",
"Zhoucheng",
""
],
[
"He",
"Junfeng",
""
],
[
"Liu",
"Kaiyue",
""
],
[
"Jiang",
"Dezheng",
""
],
[
"Li",
"Zenan",
""
],
[
"Chen",
"Shouyan",
""
],
[
"Tian",
"Yang",
""
],
[
"Xie",
"Yi Min",
""
],
[
"Wang",
"Zhenpei",
""
],
[
"Liu",
"Zhuangjian",
""
]
]
| new_dataset | 0.999647 |
2310.00068 | Luchuan Song | Luchuan Song, Guojun Yin, Zhenchao Jin, Xiaoyi Dong, Chenliang Xu | Emotional Listener Portrait: Realistic Listener Motion Simulation in
Conversation | Accepted by ICCV2023 | null | null | null | cs.GR cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Listener head generation centers on generating non-verbal behaviors (e.g.,
smile) of a listener in reference to the information delivered by a speaker. A
significant challenge when generating such responses is the non-deterministic
nature of fine-grained facial expressions during a conversation, which varies
depending on the emotions and attitudes of both the speaker and the listener.
To tackle this problem, we propose the Emotional Listener Portrait (ELP), which
treats each fine-grained facial motion as a composition of several discrete
motion-codewords and explicitly models the probability distribution of the
motions under different emotions in conversation. Benefiting from the
``explicit'' and ``discrete'' design, our ELP model can not only automatically
generate natural and diverse responses toward a given speaker via sampling from
the learned distribution but also generate controllable responses with a
predetermined attitude. Under several quantitative metrics, our ELP exhibits
significant improvements compared to previous methods.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 18:18:32 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Song",
"Luchuan",
""
],
[
"Yin",
"Guojun",
""
],
[
"Jin",
"Zhenchao",
""
],
[
"Dong",
"Xiaoyi",
""
],
[
"Xu",
"Chenliang",
""
]
]
| new_dataset | 0.975432 |
2310.00129 | Sina Shaham | Sina Shaham, Bhaskar Krishnamachari, Matthew Kahn | ILB: Graph Neural Network Enabled Emergency Demand Response Program For
Electricity | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Demand Response (DR) programs have become a crucial component of smart
electricity grids as they shift the flexibility of electricity consumption from
supply to demand in response to the ever-growing demand for electricity. In
particular, in times of crisis, an emergency DR program is required to manage
unexpected spikes in energy demand. In this paper, we propose the
Incentive-Driven Load Balancer (ILB), a program designed to efficiently manage
demand and response during crisis situations. By offering incentives to
flexible households likely to reduce demand, the ILB facilitates effective
demand reduction and prepares them for unexpected events. To enable ILB, we
introduce a two-step machine learning-based framework for participant
selection, which employs a graph-based approach to identify households capable
of easily adjusting their electricity consumption. This framework utilizes two
Graph Neural Networks (GNNs): one for pattern recognition and another for
household selection. Through extensive experiments on household-level
electricity consumption in California, Michigan, and Texas, we demonstrate the
ILB program's significant effectiveness in supporting communities during
emergencies.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 20:38:04 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Shaham",
"Sina",
""
],
[
"Krishnamachari",
"Bhaskar",
""
],
[
"Kahn",
"Matthew",
""
]
]
| new_dataset | 0.996528 |
2310.00142 | Xiaofeng Guo | Xiaofeng Guo, Guanqi He, Mohammadreza Mousaei, Junyi Geng, Guanya Shi,
Sebastian Scherer | Aerial Interaction with Tactile Sensing | 7 pages, 5 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While autonomous Uncrewed Aerial Vehicles (UAVs) have grown rapidly, most
applications only focus on passive visual tasks. Aerial interaction aims to
execute tasks involving physical interactions, which offers a way to assist
humans in high-risk, high-altitude operations, thereby reducing cost, time, and
potential hazards. The coupled dynamics between the aerial vehicle and
manipulator, however, pose challenges for precision control. Previous research
has typically employed either position control, which often fails to meet
mission accuracy, or force control using expensive, heavy, and cumbersome
force/torque sensors that also lack local semantic information. Conversely,
tactile sensors, being both cost-effective and lightweight, are capable of
sensing contact information including force distribution, as well as
recognizing local textures. Existing work on tactile sensing mainly focuses on
tabletop manipulation tasks within a quasi-static process. In this paper, we
pioneer the use of vision-based tactile sensors on a fully-actuated UAV to
improve the accuracy of the more dynamic aerial manipulation tasks. We
introduce a pipeline utilizing tactile feedback for real-time force tracking
via a hybrid motion-force controller and a method for wall texture detection
during aerial interactions. Our experiments demonstrate that our system can
effectively replace or complement traditional force/torque sensors, improving
flight performance by approximately 16% in position tracking error when using
the fused force estimate compared to relying on a single sensor. Our tactile
sensor achieves 93.4% accuracy in real-time texture recognition and 100%
accuracy post-contact. To the best of our knowledge, this is the first work to
incorporate a vision-based tactile sensor into aerial interaction tasks.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 21:04:16 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Guo",
"Xiaofeng",
""
],
[
"He",
"Guanqi",
""
],
[
"Mousaei",
"Mohammadreza",
""
],
[
"Geng",
"Junyi",
""
],
[
"Shi",
"Guanya",
""
],
[
"Scherer",
"Sebastian",
""
]
]
| new_dataset | 0.970177 |
2310.00163 | Angelos Mavrogiannis | Angelos Mavrogiannis, Christoforos Mavrogiannis, Yiannis Aloimonos | Cook2LTL: Translating Cooking Recipes to LTL Formulae using Large
Language Models | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooking recipes are especially challenging to translate to robot plans as
they feature rich linguistic complexity, temporally-extended interconnected
tasks, and an almost infinite space of possible actions. Our key insight is
that combining a source of background cooking domain knowledge with a formalism
capable of handling the temporal richness of cooking recipes could enable the
extraction of unambiguous, robot-executable plans. In this work, we use Linear
Temporal Logic (LTL) as a formal language expressible enough to model the
temporal nature of cooking recipes. Leveraging pre-trained Large Language
Models (LLMs), we present a system that translates instruction steps from an
arbitrary cooking recipe found on the internet to a series of LTL formulae,
grounding high-level cooking actions to a set of primitive actions that are
executable by a manipulator in a kitchen environment. Our approach makes use of
a caching scheme, dynamically building a queryable action library at runtime,
significantly decreasing LLM API calls (-51%), latency (-59%) and cost (-42%)
compared to a baseline that queries the LLM for every newly encountered action
at runtime. We demonstrate the transferability of our system in a realistic
simulation platform through showcasing a set of simple cooking tasks.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 21:59:13 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Mavrogiannis",
"Angelos",
""
],
[
"Mavrogiannis",
"Christoforos",
""
],
[
"Aloimonos",
"Yiannis",
""
]
]
| new_dataset | 0.999798 |
2310.00184 | Calvin Joyce | Calvin Joyce, Jason Lim, Roger Nguyen, Michael Owens, Sara
Wickenhiser, Elizabeth Peiros, Florian Richter, Michael C. Yip | NASU -- Novel Actuating Screw Unit: Origami-inspired Screw-based
Propulsion on Mobile Ground Robots | 6 pages, 9 Figures, submitted to ICRA 2024 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Screw-based locomotion is a robust method of locomotion across a wide range
of media including water, sand, and gravel. A challenge with screws is their
significant number of impactful design parameters that affect locomotion
performance in varying environments. One crucial parameter is the angle of
attack, also referred to as the lead angle. The angle of attack has a
significant impact on the screw's performance as it creates a trade-off between
efficiency and forward velocity. This trend is consistent across various types
of media. In this work, we present a Novel Actuating Screw Unit. It is the
first screw-based propulsion design that enables the reconfiguration of the
angle of attack dynamically for optimized locomotion across multiple media. The
design is inspired by the Kresling unit, which is a widespread mechanism in
origami robotics, and the angle of attack is adjusted with a linear actuator,
while the entire unit is spun on its axis as an Archimedean screw. NASU is
integrated onto a mobile test-bed and experiments are conducted in a large
variety of media including gravel, grass, and sand. Our experiments show the
proposed design is a promising direction for reconfigurable screws by allowing
control to optimize for efficiency or velocity.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 23:15:01 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Joyce",
"Calvin",
""
],
[
"Lim",
"Jason",
""
],
[
"Nguyen",
"Roger",
""
],
[
"Owens",
"Michael",
""
],
[
"Wickenhiser",
"Sara",
""
],
[
"Peiros",
"Elizabeth",
""
],
[
"Richter",
"Florian",
""
],
[
"Yip",
"Michael C.",
""
]
]
| new_dataset | 0.997303 |
2310.00194 | Taylor Webb | Taylor Webb, Shanka Subhra Mondal, Chi Wang, Brian Krabach, Ida
Momennejad | A Prefrontal Cortex-inspired Architecture for Planning in Large Language
Models | null | null | null | null | cs.AI cs.NE | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) demonstrate impressive performance on a wide
variety of tasks, but they often struggle with tasks that require multi-step
reasoning or goal-directed planning. To address this, we take inspiration from
the human brain, in which planning is accomplished via the recurrent
interaction of specialized modules in the prefrontal cortex (PFC). These
modules perform functions such as conflict monitoring, state prediction, state
evaluation, task decomposition, and task coordination. We find that LLMs are
sometimes capable of carrying out these functions in isolation, but struggle to
autonomously coordinate them in the service of a goal. Therefore, we propose a
black box architecture with multiple LLM-based (GPT-4) modules. The
architecture improves planning through the interaction of specialized
PFC-inspired modules that break down a larger problem into multiple brief
automated calls to the LLM. We evaluate the combined architecture on two
challenging planning tasks -- graph traversal and Tower of Hanoi -- finding
that it yields significant improvements over standard LLM methods (e.g.,
zero-shot prompting or in-context learning). These results demonstrate the
benefit of utilizing knowledge from cognitive neuroscience to improve planning
in LLMs.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 00:10:14 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Webb",
"Taylor",
""
],
[
"Mondal",
"Shanka Subhra",
""
],
[
"Wang",
"Chi",
""
],
[
"Krabach",
"Brian",
""
],
[
"Momennejad",
"Ida",
""
]
]
| new_dataset | 0.998011 |
2310.00196 | Lee Kezar | Lee Kezar, Elana Pontecorvo, Adele Daniels, Connor Baer, Ruth Ferster,
Lauren Berger, Jesse Thomason, Zed Sevcikova Sehyr, Naomi Caselli | The Sem-Lex Benchmark: Modeling ASL Signs and Their Phonemes | In Proceedings of the ACM Conference on Accessibility (ASSETS) 2023 | null | 10.1145/3597638.3608408 | null | cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Sign language recognition and translation technologies have the potential to
increase access and inclusion of deaf signing communities, but research
progress is bottlenecked by a lack of representative data. We introduce a new
resource for American Sign Language (ASL) modeling, the Sem-Lex Benchmark. The
Benchmark is the current largest of its kind, consisting of over 84k videos of
isolated sign productions from deaf ASL signers who gave informed consent and
received compensation. Human experts aligned these videos with other sign
language resources including ASL-LEX, SignBank, and ASL Citizen, enabling
useful expansions for sign and phonological feature recognition. We present a
suite of experiments which make use of the linguistic information in ASL-LEX,
evaluating the practicality and fairness of the Sem-Lex Benchmark for isolated
sign recognition (ISR). We use an SL-GCN model to show that the phonological
features are recognizable with 85% accuracy, and that they are effective as an
auxiliary target to ISR. Learning to recognize phonological features alongside
gloss results in a 6% improvement for few-shot ISR accuracy and a 2%
improvement for ISR accuracy overall. Instructions for downloading the data can
be found at https://github.com/leekezar/SemLex.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 00:25:43 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Kezar",
"Lee",
""
],
[
"Pontecorvo",
"Elana",
""
],
[
"Daniels",
"Adele",
""
],
[
"Baer",
"Connor",
""
],
[
"Ferster",
"Ruth",
""
],
[
"Berger",
"Lauren",
""
],
[
"Thomason",
"Jesse",
""
],
[
"Sehyr",
"Zed Sevcikova",
""
],
[
"Caselli",
"Naomi",
""
]
]
| new_dataset | 0.999725 |
2310.00214 | Ruhao Wan | Ruhao Wan | Quantum MDS Codes with length $n\equiv 0,1($mod$\,\frac{q\pm1}{2})$ | 21 pages, 2 tables | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important family of quantum codes is the quantum
maximum-distance-separable (MDS) codes. In this paper, we construct some new
classes of quantum MDS codes by generalized Reed-Solomon (GRS) codes and
Hermitian construction. In addition, the length $n$ of most of the quantum MDS
codes we constructed satisfies $n\equiv 0,1($mod$\,\frac{q\pm1}{2})$, which is
different from previously known code lengths. At the same time, the quantum MDS
codes we construct have large minimum distances that are greater than $q/2+1$.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 01:33:41 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wan",
"Ruhao",
""
]
]
| new_dataset | 0.999737 |
2310.00215 | Alexandra Bremers | Itay Grinberg, Alexandra Bremers, Louisa Pancoast, Wendy Ju | Implicit collaboration with a drawing machine through dance movements | null | null | null | null | cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | In this demonstration, we exhibit the initial results of an ongoing body of
exploratory work, investigating the potential for creative machines to
communicate and collaborate with people through movement as a form of implicit
interaction. The paper describes a Wizard-of-Oz demo, where a hidden wizard
controls an AxiDraw drawing robot while a participant collaborates with it to
draw a custom postcard. This demonstration aims to gather perspectives from the
computational fabrication community regarding how practitioners of fabrication
with machines experience interacting with a mixed-initiative collaborative
machine.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 01:34:03 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Grinberg",
"Itay",
""
],
[
"Bremers",
"Alexandra",
""
],
[
"Pancoast",
"Louisa",
""
],
[
"Ju",
"Wendy",
""
]
]
| new_dataset | 0.988355 |
2310.00228 | Takuma Adams | Takuma Adams, Timothy McLennan-Smith | King of the Hill: C2 for Next Generation Swarm Warfare | null | null | null | null | cs.MA nlin.AO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As the reliability of cheap, off-the-shelf autonomous platforms increases, so
does the risk posed by intelligent multi-agent systems to military operations.
In the contemporary context of the Russo-Ukrainian war alone, we have seen
autonomous aerial vehicles and surface vessels deployed both individually and
in multitude to deliver critical effects to both sides. While there is a large
body of literature on tactical level communications and interactions between
agents, the exploration of high-level command and control (C2) structures that
will underpin future autonomous multi-agent military operations is a less
explored area of research. We propose a quantitative game-theoretic framework
to study effective C2 structures in cooperative and competitive multi-agent
swarming scenarios. To test our framework, we construct a virtual environment
where two adversarial swarms compete to achieve outcomes comparable to
real-world scenarios. The framework we present in this paper enables us to
quickly test and interrogate different C2 configurations in multi-agent systems
to explore C2 as a force multiplier when at a force disadvantage.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 02:17:42 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Adams",
"Takuma",
""
],
[
"McLennan-Smith",
"Timothy",
""
]
]
| new_dataset | 0.999346 |
2310.00249 | Yuze He | Yuze He, Peng Wang, Yubin Hu, Wang Zhao, Ran Yi, Yong-Jin Liu, Wenping
Wang | MMPI: a Flexible Radiance Field Representation by Multiple Multi-plane
Images Blending | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a flexible representation of neural radiance fields based
on multi-plane images (MPI), for high-quality view synthesis of complex scenes.
MPI with Normalized Device Coordinate (NDC) parameterization is widely used in
NeRF learning for its simple definition, easy calculation, and powerful ability
to represent unbounded scenes. However, existing NeRF works that adopt MPI
representation for novel view synthesis can only handle simple forward-facing
unbounded scenes, where the input cameras are all observing in similar
directions with small relative translations. Hence, extending these MPI-based
methods to more complex scenes like large-range or even 360-degree scenes is
very challenging. In this paper, we explore the potential of MPI and show that
MPI can synthesize high-quality novel views of complex scenes with diverse
camera distributions and view directions, which are not only limited to simple
forward-facing scenes. Our key idea is to encode the neural radiance field with
multiple MPIs facing different directions and blend them with an adaptive
blending operation. For each region of the scene, the blending operation gives
larger blending weights to those advantaged MPIs with stronger local
representation abilities while giving lower weights to those with weaker
representation abilities. Such blending operation automatically modulates the
multiple MPIs to appropriately represent the diverse local density and color
information. Experiments on the KITTI dataset and ScanNet dataset demonstrate
that our proposed MMPI synthesizes high-quality images from diverse camera pose
distributions and is fast to train, outperforming the previous fast-training
NeRF methods for novel view synthesis. Moreover, we show that MMPI can encode
extremely long trajectories and produce novel view renderings, demonstrating
its potential in applications like autonomous driving.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 04:36:43 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"He",
"Yuze",
""
],
[
"Wang",
"Peng",
""
],
[
"Hu",
"Yubin",
""
],
[
"Zhao",
"Wang",
""
],
[
"Yi",
"Ran",
""
],
[
"Liu",
"Yong-Jin",
""
],
[
"Wang",
"Wenping",
""
]
]
| new_dataset | 0.958618 |
2310.00263 | Enyu Shi | Enyu Shi, Jiayi Zhang, Hongyang Du, Bo Ai, Chau Yuen, Dusit Niyato,
Khaled B. Letaief, and Xuemin (Sherman) Shen | RIS-Aided Cell-Free Massive MIMO Systems for 6G: Fundamentals, System
Design, and Applications | 30 pages, 15 figures | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An introduction of intelligent interconnectivity for people and things has
posed higher demands and more challenges for sixth-generation (6G) networks,
such as high spectral efficiency and energy efficiency, ultra-low latency, and
ultra-high reliability. Cell-free (CF) massive multiple-input multiple-output
(mMIMO) and reconfigurable intelligent surface (RIS), also called intelligent
reflecting surface (IRS), are two promising technologies for coping with these
unprecedented demands. Given their distinct capabilities, integrating the two
technologies to further enhance wireless network performances has received
great research and development attention. In this paper, we provide a
comprehensive survey of research on RIS-aided CF mMIMO wireless communication
systems. We first introduce system models focusing on system architecture and
application scenarios, channel models, and communication protocols.
Subsequently, we summarize the relevant studies on system operation and
resource allocation, providing in-depth analyses and discussions. Following
this, we present practical challenges faced by RIS-aided CF mMIMO systems,
particularly those introduced by RIS, such as hardware impairments and
electromagnetic interference. We summarize corresponding analyses and solutions
to further facilitate the implementation of RIS-aided CF mMIMO systems.
Furthermore, we explore an interplay between RIS-aided CF mMIMO and other
emerging 6G technologies, such as next-generation multiple-access (NGMA),
simultaneous wireless information and power transfer (SWIPT), and millimeter
wave (mmWave). Finally, we outline several research directions for future
RIS-aided CF mMIMO systems.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 05:32:41 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Shi",
"Enyu",
"",
"Sherman"
],
[
"Zhang",
"Jiayi",
"",
"Sherman"
],
[
"Du",
"Hongyang",
"",
"Sherman"
],
[
"Ai",
"Bo",
"",
"Sherman"
],
[
"Yuen",
"Chau",
"",
"Sherman"
],
[
"Niyato",
"Dusit",
"",
"Sherman"
],
[
"Letaief",
"Khaled B.",
"",
"Sherman"
],
[
"Xuemin",
"",
"",
"Sherman"
],
[
"Shen",
"",
""
]
]
| new_dataset | 0.957987 |
2310.00268 | Zhenwei Zhang | Zhenwei Zhang, Ruiqi Wang, Ran Ding, Yuantao Gu | Unravel Anomalies: An End-to-end Seasonal-Trend Decomposition Approach
for Time Series Anomaly Detection | submitted to ICASSP 2024 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traditional Time-series Anomaly Detection (TAD) methods often struggle with
the composite nature of complex time-series data and a diverse array of
anomalies. We introduce TADNet, an end-to-end TAD model that leverages
Seasonal-Trend Decomposition to link various types of anomalies to specific
decomposition components, thereby simplifying the analysis of complex
time-series and enhancing detection performance. Our training methodology,
which includes pre-training on a synthetic dataset followed by fine-tuning,
strikes a balance between effective decomposition and precise anomaly
detection. Experimental validation on real-world datasets confirms TADNet's
state-of-the-art performance across a diverse range of anomalies.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 06:08:37 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhang",
"Zhenwei",
""
],
[
"Wang",
"Ruiqi",
""
],
[
"Ding",
"Ran",
""
],
[
"Gu",
"Yuantao",
""
]
]
| new_dataset | 0.978763 |
2310.00273 | Kehan Long | Kehan Long, Khoa Tran, Melvin Leok, Nikolay Atanasov | Safe Stabilizing Control for Polygonal Robots in Dynamic Elliptical
Environments | null | null | null | null | cs.RO math.OC | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the challenge of safe navigation for rigid-body mobile
robots in dynamic environments. We introduce an analytic approach to compute
the distance between a polygon and an ellipse, and employ it to construct a
control barrier function (CBF) for safe control synthesis. Existing CBF design
methods for mobile robot obstacle avoidance usually assume point or circular
robots, preventing their applicability to more realistic robot body geometries.
Our work enables CBF designs that capture complex robot and obstacle shapes. We
demonstrate the effectiveness of our approach in simulations highlighting
real-time obstacle avoidance in constrained and dynamic environments for both
mobile robots and multi-joint robot arms.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 06:26:12 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Long",
"Kehan",
""
],
[
"Tran",
"Khoa",
""
],
[
"Leok",
"Melvin",
""
],
[
"Atanasov",
"Nikolay",
""
]
]
| new_dataset | 0.994093 |
2310.00274 | Bonaventure F. P. Dossou | Tobi Olatunji, Tejumade Afonja, Aditya Yadavalli, Chris Chinenye
Emezue, Sahib Singh, Bonaventure F.P. Dossou, Joanne Osuchukwu, Salomey Osei,
Atnafu Lambebo Tonja, Naome Etori, Clinton Mbataku | AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and
General Domain ASR | Accepted to TACL 2023. This is a pre-MIT Press publication version | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Africa has a very low doctor-to-patient ratio. At very busy clinics, doctors
could see 30+ patients per day -- a heavy patient burden compared with
developed countries -- but productivity tools such as clinical automatic speech
recognition (ASR) are lacking for these overworked clinicians. However,
clinical ASR is mature, even ubiquitous, in developed nations, and
clinician-reported performance of commercial clinical ASR systems is generally
satisfactory. Furthermore, the recent performance of general domain ASR is
approaching human accuracy. However, several gaps exist. Several publications
have highlighted racial bias with speech-to-text algorithms and performance on
minority accents lags significantly. To our knowledge, there is no publicly
available research or benchmark on accented African clinical ASR, and speech
data is non-existent for the majority of African accents. We release
AfriSpeech, 200hrs of Pan-African English speech comprising 67,577 clips from
2,463 unique speakers across 120 indigenous accents from 13 countries, for
clinical and general domain ASR, together with a benchmark test set and publicly
available pre-trained models achieving SOTA performance on the AfriSpeech benchmark.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 06:38:43 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Olatunji",
"Tobi",
""
],
[
"Afonja",
"Tejumade",
""
],
[
"Yadavalli",
"Aditya",
""
],
[
"Emezue",
"Chris Chinenye",
""
],
[
"Singh",
"Sahib",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Osuchukwu",
"Joanne",
""
],
[
"Osei",
"Salomey",
""
],
[
"Tonja",
"Atnafu Lambebo",
""
],
[
"Etori",
"Naome",
""
],
[
"Mbataku",
"Clinton",
""
]
]
| new_dataset | 0.999854 |
2310.00287 | Syed Sameen Ahmad Rizvi | Syed Sameen Ahmad Rizvi, Preyansh Agrawal, Jagat Sesh Challa and
Pratik Narang | InFER: A Multi-Ethnic Indian Facial Expression Recognition Dataset | In Proceedings of the 15th International Conference on Agents and
Artificial Intelligence Volume 3: ICAART; ISBN 978-989-758-623-1; ISSN
2184-433X, SciTePress, pages 550-557. DOI: 10.5220/0011699400003393 | Volume 3: ICAART, 2023, pages - 550-557 | 10.5220/0011699400003393 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement in deep learning over the past decade has transformed
Facial Expression Recognition (FER) systems, as newer methods have been
proposed that outperform the existing traditional handcrafted techniques.
However, such a supervised learning approach requires a sufficiently large
training dataset covering all the possible scenarios. And since most people
exhibit facial expressions based upon their age group, gender, and ethnicity, a
diverse facial expression dataset is needed. This becomes even more crucial
while developing a FER system for the Indian subcontinent, which comprises a
diverse multi-ethnic population. In this work, we present InFER, a real-world
multi-ethnic Indian Facial Expression Recognition dataset consisting of 10,200
images and 4,200 short videos of seven basic facial expressions. The dataset
has posed expressions of 600 human subjects, and spontaneous/acted expressions
of 6000 images crowd-sourced from the internet. To the best of our knowledge
InFER is the first of its kind, consisting of images from 600 subjects from the
very diverse ethnicities of the Indian Subcontinent. We also present the experimental
results of baseline & deep FER methods on our dataset to substantiate its
usability in real-world practical applications.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 07:36:29 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Rizvi",
"Syed Sameen Ahmad",
""
],
[
"Agrawal",
"Preyansh",
""
],
[
"Challa",
"Jagat Sesh",
""
],
[
"Narang",
"Pratik",
""
]
]
| new_dataset | 0.999652 |
2310.00288 | Cong Wang | Cong Wang, Gong-Jie Ruan, Zai-Zheng Yang, Xing-Jian Yangdong, Yixiang
Li, Liang Wu, Yingmeng Ge, Yichen Zhao, Chen Pan, Wei Wei, Li-Bo Wang, Bin
Cheng, Zaichen Zhang, Chuan Zhang, Shi-Jun Liang, Feng Miao | Parallel in-memory wireless computing | null | Nat Electron 6, 381-389 (2023) | 10.1038/s41928-023-00965-5 | null | cs.AR cs.ET cs.SY eess.SY physics.app-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parallel wireless digital communication with ultralow power consumption is
critical for emerging edge technologies such as 5G and Internet of Things.
However, the physical separation between digital computing units and analogue
transmission units in traditional wireless technology leads to high power
consumption. Here we report a parallel in-memory wireless computing scheme. The
approach combines in-memory computing with wireless communication using
memristive crossbar arrays. We show that the system can be used for the radio
transmission of a binary stream of 480 bits with a bit error rate of 0. The
in-memory wireless computing uses two orders of magnitude less power than
conventional technology (based on digital-to-analogue and analogue-to-digital
converters). We also show that the approach can be applied to acoustic and
optical wireless communications.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 07:45:10 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wang",
"Cong",
""
],
[
"Ruan",
"Gong-Jie",
""
],
[
"Yang",
"Zai-Zheng",
""
],
[
"Yangdong",
"Xing-Jian",
""
],
[
"Li",
"Yixiang",
""
],
[
"Wu",
"Liang",
""
],
[
"Ge",
"Yingmeng",
""
],
[
"Zhao",
"Yichen",
""
],
[
"Pan",
"Chen",
""
],
[
"Wei",
"Wei",
""
],
[
"Wang",
"Li-Bo",
""
],
[
"Cheng",
"Bin",
""
],
[
"Zhang",
"Zaichen",
""
],
[
"Zhang",
"Chuan",
""
],
[
"Liang",
"Shi-Jun",
""
],
[
"Miao",
"Feng",
""
]
]
| new_dataset | 0.993873 |
2310.00294 | Suyu Lv | Suyu Lv, Yuanwei Liu, Xiaodong Xu, Arumugam Nallanathan and A. Lee
Swindlehurst | RIS-aided Near-Field MIMO Communications: Codebook and Beam Training
Design | 13 pages, 11 figures | null | null | null | cs.IT cs.NI eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Downlink reconfigurable intelligent surface (RIS)-assisted
multi-input-multi-output (MIMO) systems are considered with far-field,
near-field, and hybrid-far-near-field channels. According to the angular or
distance information contained in the received signals, 1) a distance-based
codebook is designed for near-field MIMO channels, based on which a
hierarchical beam training scheme is proposed to reduce the training overhead;
2) a combined angular-distance codebook is designed for mixed-far-near-field
MIMO channels, based on which a two-stage beam training scheme is proposed to
achieve alignment in the angular and distance domains separately. For
maximizing the achievable rate while reducing the complexity, an alternating
optimization algorithm is proposed to carry out the joint optimization
iteratively. Specifically, the RIS coefficient matrix is optimized through the
beam training process, the optimal combining matrix is obtained from the
closed-form solution for the mean square error (MSE) minimization problem, and
the active beamforming matrix is optimized by exploiting the relationship
between the achievable rate and MSE. Numerical results reveal that: 1) the
proposed beam training schemes achieve near-optimal performance with a
significantly decreased training overhead; 2) compared to the angular-only
far-field channel model, taking the additional distance information into
consideration will effectively improve the achievable rate when carrying out
beam design for near-field communications.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 08:07:10 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Lv",
"Suyu",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Xu",
"Xiaodong",
""
],
[
"Nallanathan",
"Arumugam",
""
],
[
"Swindlehurst",
"A. Lee",
""
]
]
| new_dataset | 0.998389 |
2310.00299 | Asahi Ushio | Asahi Ushio, Jose Camacho-Collados, Steven Schockaert | RelBERT: Embedding Relations with Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Many applications need access to background knowledge about how different
concepts and entities are related. Although Knowledge Graphs (KG) and Large
Language Models (LLM) can address this need to some extent, KGs are inevitably
incomplete and their relational schema is often too coarse-grained, while LLMs
are inefficient and difficult to control. As an alternative, we propose to
extract relation embeddings from relatively small language models. In
particular, we show that masked language models such as RoBERTa can be
straightforwardly fine-tuned for this purpose, using only a small amount of
training data. The resulting model, which we call RelBERT, captures relational
similarity in a surprisingly fine-grained way, allowing us to set a new
state-of-the-art in analogy benchmarks. Crucially, RelBERT is capable of
modelling relations that go well beyond what the model has seen during
training. For instance, we obtained strong results on relations between named
entities with a model that was only trained on lexical relations between
concepts, and we observed that RelBERT can recognise morphological analogies
despite not being trained on such examples. Overall, we find that RelBERT
significantly outperforms strategies based on prompting language models that
are several orders of magnitude larger, including recent GPT-based models and
open source models.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 08:15:36 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Ushio",
"Asahi",
""
],
[
"Camacho-Collados",
"Jose",
""
],
[
"Schockaert",
"Steven",
""
]
]
| new_dataset | 0.994765 |
2310.00328 | Joe O'Brien | Joe O'Brien, Shaun Ee, Zoe Williams | Deployment Corrections: An incident response framework for frontier AI
models | 53 pages; 1 figure; 1 table | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | A comprehensive approach to addressing catastrophic risks from AI models
should cover the full model lifecycle. This paper explores contingency plans
for cases where pre-deployment risk management falls short: where either very
dangerous models are deployed, or deployed models become very dangerous.
Informed by incident response practices from industries including
cybersecurity, we describe a toolkit of deployment corrections that AI
developers can use to respond to dangerous capabilities, behaviors, or use
cases of AI models that develop or are detected after deployment. We also
provide a framework for AI developers to prepare and implement this toolkit.
We conclude by recommending that frontier AI developers should (1) maintain
control over model access, (2) establish or grow dedicated teams to design and
maintain processes for deployment corrections, including incident response
plans, and (3) establish these deployment corrections as allowable actions with
downstream users. We also recommend frontier AI developers, standard-setting
organizations, and regulators should collaborate to define a standardized
industry-wide approach to the use of deployment corrections in incident
response.
Caveat: This work applies to frontier AI models that are made available
through interfaces (e.g., API) that provide the AI developer or another
upstream party means of maintaining control over access (e.g., GPT-4 or
Claude). It does not apply to management of catastrophic risk from open-source
models (e.g., BLOOM or Llama-2), for which the restrictions we discuss are
largely unenforceable.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 10:07:39 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"O'Brien",
"Joe",
""
],
[
"Ee",
"Shaun",
""
],
[
"Williams",
"Zoe",
""
]
]
| new_dataset | 0.994392 |
2310.00349 | Mathias-Felipe de-Lima-Santos | Mathias-Felipe de-Lima-Santos, Isabella Gon\c{c}alves, Marcos G.
Quiles, Lucia Mesquita, Wilson Ceron | Visual Political Communication in a Polarized Society: A Longitudinal
Study of Brazilian Presidential Elections on Instagram | null | null | null | null | cs.CY cs.AI cs.CV cs.LG cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In today's digital age, images have emerged as powerful tools for politicians
to engage with their voters on social media platforms. Visual content possesses
a unique emotional appeal that often leads to increased user engagement.
However, research on visual communication remains relatively limited,
particularly in the Global South. This study aims to bridge this gap by
employing a combination of computational methods and qualitative approach to
investigate the visual communication strategies employed in a dataset of 11,263
Instagram posts by 19 Brazilian presidential candidates in 2018 and 2022
national elections. Through two studies, we observed consistent patterns across
these candidates on their use of visual political communication. Notably, we
identify a prevalence of celebratory and positively toned images. They also
exhibit a strong sense of personalization, portraying candidates connected with
their voters on a more emotional level. Our research also uncovers unique
contextual nuances specific to the Brazilian political landscape. We note a
substantial presence of screenshots from news websites and other social media
platforms. Furthermore, text-edited images with portrayals emerge as a
prominent feature. In light of these results, we engage in a discussion
regarding the implications for the broader field of visual political
communication. This article serves as a testament to the pivotal role that
Instagram has played in shaping the narrative of two fiercely polarized
Brazilian elections, casting a revealing light on the ever-evolving dynamics of
visual political communication in the digital age. Finally, we propose avenues
for future research in the realm of visual political communication.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 12:11:11 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"de-Lima-Santos",
"Mathias-Felipe",
""
],
[
"Gonçalves",
"Isabella",
""
],
[
"Quiles",
"Marcos G.",
""
],
[
"Mesquita",
"Lucia",
""
],
[
"Ceron",
"Wilson",
""
]
]
| new_dataset | 0.992987 |
2310.00354 | Javier P\'erez de Frutos | Javier P\'erez de Frutos, Ragnhild Holden Helland, Shreya Desai, Line
Cathrine Nymoen, Thomas Lang{\o}, Theodor Remman, Abhijit Sen | AI-Dentify: Deep learning for proximal caries detection on bitewing
x-ray -- HUNT4 Oral Health Study | 22 pages, 4 figure, 6 tables | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Background: Dental caries diagnosis requires the manual inspection of
diagnostic bitewing images of the patient, followed by a visual inspection and
probing of the identified dental pieces with potential lesions. Yet the use of
artificial intelligence, and in particular deep-learning, has the potential to
aid in the diagnosis by providing a quick and informative analysis of the
bitewing images.
Methods: A dataset of 13,887 bitewings from the HUNT4 Oral Health Study were
annotated individually by six different experts, and used to train three
different object detection deep-learning architectures: RetinaNet (ResNet50),
YOLOv5 (M size), and EfficientDet (D0 and D1 sizes). A consensus dataset of 197
images, annotated jointly by the same six dentists, was used for evaluation. A
five-fold cross validation scheme was used to evaluate the performance of the
AI models.
Results: The trained models show an increase in average precision and
F1-score, and a decrease in false negative rate, with respect to the dental
clinicians. Out of the three architectures studied, YOLOv5 shows the largest
improvement, reporting 0.647 mean average precision, 0.548 mean F1-score, and
0.149 mean false negative rate, whereas the best annotators on each of these
metrics reported 0.299, 0.495, and 0.164, respectively.
Conclusion: Deep-learning models have shown the potential to assist dental
professionals in the diagnosis of caries. Yet, the task remains challenging due
to the artifacts natural to the bitewings.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 12:17:36 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"de Frutos",
"Javier Pérez",
""
],
[
"Helland",
"Ragnhild Holden",
""
],
[
"Desai",
"Shreya",
""
],
[
"Nymoen",
"Line Cathrine",
""
],
[
"Langø",
"Thomas",
""
],
[
"Remman",
"Theodor",
""
],
[
"Sen",
"Abhijit",
""
]
]
| new_dataset | 0.999116 |
2310.00367 | Jonas Belouadi | Jonas Belouadi, Anne Lauscher, Steffen Eger | AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with
TikZ | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generating bitmap graphics from text has gained considerable attention, yet
for scientific figures, vector graphics are often preferred. Given that vector
graphics are typically encoded using low-level graphics primitives, generating
them directly is difficult. To address this, we propose the use of TikZ, a
well-known abstract graphics language that can be compiled to vector graphics,
as an intermediate representation of scientific figures. TikZ offers
human-oriented, high-level commands, thereby facilitating conditional language
modeling with any large language model. To this end, we introduce DaTikZ, the
first large-scale TikZ dataset, consisting of 120k TikZ drawings aligned with
captions. We fine-tune LLaMA on DaTikZ, as well as our new model CLiMA, which
augments LLaMA with multimodal CLIP embeddings. In both human and automatic
evaluation, CLiMA and LLaMA outperform commercial GPT-4 and Claude 2 in terms
of similarity to human-created figures, with CLiMA additionally improving
text-image alignment. Our detailed analysis shows that all models generalize
well and are not susceptible to memorization. GPT-4 and Claude 2, however, tend
to generate more simplistic figures compared to both humans and our models. We
make our framework, AutomaTikZ, along with model weights and datasets, publicly
available.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 13:15:49 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Belouadi",
"Jonas",
""
],
[
"Lauscher",
"Anne",
""
],
[
"Eger",
"Steffen",
""
]
]
| new_dataset | 0.993343 |
2310.00371 | Kartik Ramachandruni | Kartik Ramachandruni, Max Zuo, Sonia Chernova | ConSOR: A Context-Aware Semantic Object Rearrangement Framework for
Partially Arranged Scenes | Accepted to IROS 2023 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Object rearrangement is the problem of enabling a robot to identify the
correct object placement in a complex environment. Prior work on object
rearrangement has explored a diverse set of techniques for following user
instructions to achieve some desired goal state. Logical predicates, images of
the goal scene, and natural language descriptions have all been used to
instruct a robot in how to arrange objects. In this work, we argue that
burdening the user with specifying goal scenes is not necessary in
partially-arranged environments, such as common household settings. Instead, we
show that contextual cues from partially arranged scenes (i.e., the placement
of some number of pre-arranged objects in the environment) provide sufficient
context to enable robots to perform object rearrangement \textit{without any
explicit user goal specification}. We introduce ConSOR, a Context-aware
Semantic Object Rearrangement framework that utilizes contextual cues from a
partially arranged initial state of the environment to complete the arrangement
of new objects, without explicit goal specification from the user. We
demonstrate that ConSOR strongly outperforms two baselines in generalizing to
novel object arrangements and unseen object categories. The code and data can
be found at https://github.com/kartikvrama/consor.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 13:24:26 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Ramachandruni",
"Kartik",
""
],
[
"Zuo",
"Max",
""
],
[
"Chernova",
"Sonia",
""
]
]
| new_dataset | 0.997797 |
2310.00385 | Fei Zhao | Fei Zhao, Taotian Pang, Zhen Wu, Zheng Ma, Shujian Huang, Xinyu Dai | Dynamic Demonstrations Controller for In-Context Learning | Under review | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In-Context Learning (ICL) is a new paradigm for natural language processing
(NLP), where a large language model (LLM) observes a small number of
demonstrations and a test instance as its input, and directly makes predictions
without updating model parameters. Previous studies have revealed that ICL is
sensitive to the selection and the ordering of demonstrations. However, there
are few studies regarding the impact of the demonstration number on the ICL
performance within a limited input length of LLM, because it is commonly
believed that the number of demonstrations is positively correlated with model
performance. In this paper, we find that this conclusion does not always hold true.
Through pilot experiments, we discover that increasing the number of
demonstrations does not necessarily lead to improved performance. Building upon
this insight, we propose a Dynamic Demonstrations Controller (D$^2$Controller),
which can improve the ICL performance by adjusting the number of demonstrations
dynamically. The experimental results show that D$^2$Controller yields a 5.4%
relative improvement on eight different sizes of LLMs across ten datasets.
Moreover, we also extend our method to previous ICL models and achieve
competitive results.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 14:04:22 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhao",
"Fei",
""
],
[
"Pang",
"Taotian",
""
],
[
"Wu",
"Zhen",
""
],
[
"Ma",
"Zheng",
""
],
[
"Huang",
"Shujian",
""
],
[
"Dai",
"Xinyu",
""
]
]
| new_dataset | 0.996507 |
2310.00400 | Lei Yang | Lei Yang, Jiaxin Yu, Xinyu Zhang, Jun Li, Li Wang, Yi Huang, Chuang
Zhang, Hong Wang, Yiming Li | MonoGAE: Roadside Monocular 3D Object Detection with Ground-Aware
Embeddings | 12 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the majority of recent autonomous driving systems concentrate on
developing perception methods based on ego-vehicle sensors, there is an
overlooked alternative approach that involves leveraging intelligent roadside
cameras to help extend the ego-vehicle perception ability beyond the visual
range. We discover that most existing monocular 3D object detectors rely on the
ego-vehicle prior assumption that the optical axis of the camera is parallel to
the ground. However, the roadside camera is installed on a pole with a pitched
angle, which makes the existing methods not optimal for roadside scenes. In
this paper, we introduce a novel framework for Roadside Monocular 3D object
detection with ground-aware embeddings, named MonoGAE. Specifically, the ground
plane is a stable and strong prior knowledge due to the fixed installation of
cameras in roadside scenarios. In order to reduce the domain gap between the
ground geometry information and high-dimensional image features, we employ a
supervised training paradigm with a ground plane to predict high-dimensional
ground-aware embeddings. These embeddings are subsequently integrated with
image features through cross-attention mechanisms. Furthermore, to improve the
detector's robustness to the divergences in cameras' installation poses, we
replace the ground plane depth map with a novel pixel-level refined ground
plane equation map. Our approach demonstrates a substantial performance
advantage over all previous monocular 3D object detectors on widely recognized
3D detection benchmarks for roadside cameras. The code and pre-trained models
will be released soon.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 14:52:26 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Yang",
"Lei",
""
],
[
"Yu",
"Jiaxin",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Li",
"Jun",
""
],
[
"Wang",
"Li",
""
],
[
"Huang",
"Yi",
""
],
[
"Zhang",
"Chuang",
""
],
[
"Wang",
"Hong",
""
],
[
"Li",
"Yiming",
""
]
]
| new_dataset | 0.999034 |
2310.00430 | Vivek Nair | Vivek Nair, Wenbo Guo, Rui Wang, James F. O'Brien, Louis Rosenberg,
Dawn Song | Berkeley Open Extended Reality Recordings 2023 (BOXRR-23): 4.7 Million
Motion Capture Recordings from 105,852 Extended Reality Device Users | Learn more at https://rdi.berkeley.edu/metaverse/boxrr-23 | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extended reality (XR) devices such as the Meta Quest and Apple Vision Pro
have seen a recent surge in attention, with motion tracking "telemetry" data
lying at the core of nearly all XR and metaverse experiences. Researchers are
just beginning to understand the implications of this data for security,
privacy, usability, and more, but currently lack large-scale human motion
datasets to study. The BOXRR-23 dataset contains 4,717,215 motion capture
recordings, voluntarily submitted by 105,852 XR device users from over 50
countries. BOXRR-23 is over 200 times larger than the largest existing motion
capture research dataset and uses a new, highly efficient purpose-built XR Open
Recording (XROR) file format.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 16:43:20 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Nair",
"Vivek",
""
],
[
"Guo",
"Wenbo",
""
],
[
"Wang",
"Rui",
""
],
[
"O'Brien",
"James F.",
""
],
[
"Rosenberg",
"Louis",
""
],
[
"Song",
"Dawn",
""
]
]
| new_dataset | 0.999237 |
2310.00431 | Christian Koke | Christian Koke, Abhishek Saroha, Yuesong Shen, Marvin Eisenberger,
Daniel Cremers | ResolvNet: A Graph Convolutional Network with multi-scale Consistency | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | It is by now a well known fact in the graph learning community that the
presence of bottlenecks severely limits the ability of graph neural networks to
propagate information over long distances. What so far has not been appreciated
is that, counter-intuitively, also the presence of strongly connected
sub-graphs may severely restrict information flow in common architectures.
Motivated by this observation, we introduce the concept of multi-scale
consistency. At the node level this concept refers to the retention of a
connected propagation graph even if connectivity varies over a given graph. At
the graph-level, multi-scale consistency refers to the fact that distinct
graphs describing the same object at different resolutions should be assigned
similar feature vectors. As we show, neither property is satisfied by popular
graph neural network architectures. To remedy these shortcomings, we
introduce ResolvNet, a flexible graph neural network based on the mathematical
concept of resolvents. We rigorously establish its multi-scale consistency
theoretically and verify it in extensive experiments on real-world data: here,
networks based on this ResolvNet architecture prove expressive, significantly
outperforming baselines on many tasks, both inside and outside the multi-scale setting.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 16:46:45 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Koke",
"Christian",
""
],
[
"Saroha",
"Abhishek",
""
],
[
"Shen",
"Yuesong",
""
],
[
"Eisenberger",
"Marvin",
""
],
[
"Cremers",
"Daniel",
""
]
]
| new_dataset | 0.983779 |
2310.00454 | Fadillah Maani | Fadillah Maani, Asim Ukaye, Nada Saadi, Numan Saeed, Mohammad Yaqub | UniLVSeg: Unified Left Ventricular Segmentation with Sparsely Annotated
Echocardiogram Videos through Self-Supervised Temporal Masking and Weakly
Supervised Training | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Echocardiography has become an indispensable clinical imaging modality for
general heart health assessment. From calculating biomarkers such as ejection
fraction to the probability of a patient's heart failure, accurate segmentation
of the heart and its structures allows doctors to plan and execute treatments
with greater precision and accuracy. However, achieving accurate and robust
left ventricle segmentation is time-consuming and challenging due to different
reasons. This work introduces a novel approach for consistent left ventricular
(LV) segmentation from sparsely annotated echocardiogram videos. We achieve
this through (1) self-supervised learning (SSL) using temporal masking followed
by (2) weakly supervised training. We investigate two different segmentation
approaches: 3D segmentation and a novel 2D superimage (SI). We demonstrate how
our proposed method outperforms the state-of-the-art solutions by achieving a
93.32% (95%CI 93.21-93.43%) dice score on a large-scale dataset
(EchoNet-Dynamic) while being more efficient. To show the effectiveness of our
approach, we provide extensive ablation studies, including pre-training
settings and various deep learning backbones. Additionally, we discuss how our
proposed methodology achieves high data utility by incorporating unlabeled
frames in the training process. To help support the AI in medicine community,
the complete solution with the source code will be made publicly available upon
acceptance.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 18:13:41 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Maani",
"Fadillah",
""
],
[
"Ukaye",
"Asim",
""
],
[
"Saadi",
"Nada",
""
],
[
"Saeed",
"Numan",
""
],
[
"Yaqub",
"Mohammad",
""
]
]
| new_dataset | 0.998117 |
2310.00455 | Wenjie Yin | Wenjie Yin, Qingyuan Yao, Yi Yu, Hang Yin, Danica Kragic, M{\aa}rten
Bj\"orkman | Music- and Lyrics-driven Dance Synthesis | null | null | null | null | cs.MM cs.GR cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lyrics often convey information about the songs that are beyond the auditory
dimension, enriching the semantic meaning of movements and musical themes. Such
insights are important in the dance choreography domain. However, most existing
dance synthesis methods mainly focus on music-to-dance generation, without
considering the semantic information. To complement it, we introduce JustLMD, a
new multimodal dataset of 3D dance motion with music and lyrics. To the best of
our knowledge, this is the first dataset with triplet information including
dance motion, music, and lyrics. Additionally, we showcase a cross-modal
diffusion-based network designed to generate 3D dance motion conditioned on
music and lyrics. The proposed JustLMD dataset encompasses 4.6 hours of 3D
dance motion in 1867 sequences, accompanied by musical tracks and their
corresponding English lyrics.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 18:27:14 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Yin",
"Wenjie",
""
],
[
"Yao",
"Qingyuan",
""
],
[
"Yu",
"Yi",
""
],
[
"Yin",
"Hang",
""
],
[
"Kragic",
"Danica",
""
],
[
"Björkman",
"Mårten",
""
]
]
| new_dataset | 0.999883 |
2310.00463 | Stan Birchfield | Jonathan Tremblay, Bowen Wen, Valts Blukis, Balakumar Sundaralingam,
Stephen Tyree, Stan Birchfield | Diff-DOPE: Differentiable Deep Object Pose Estimation | Submitted to ICRA 2023. Project page is at https://diffdope.github.io | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a
3D textured model of an object, and an initial pose of the object. The method
uses differentiable rendering to update the object pose to minimize the visual
error between the image and the projection of the model. We show that this
simple, yet effective, idea is able to achieve state-of-the-art results on pose
estimation datasets. Our approach is a departure from recent methods in which
the pose refiner is a deep neural network trained on a large synthetic dataset
to map inputs to refinement steps. Rather, our use of differentiable rendering
allows us to avoid training altogether. Our approach performs multiple gradient
descent optimizations in parallel with different random learning rates to avoid
local minima from symmetric objects, similar appearances, or wrong step size.
Various modalities can be used, e.g., RGB, depth, intensity edges, and object
segmentation masks. We present experiments examining the effect of various
choices, showing that the best results are found when the RGB image is
accompanied by an object mask and depth image to guide the optimization
process.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 18:52:57 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Tremblay",
"Jonathan",
""
],
[
"Wen",
"Bowen",
""
],
[
"Blukis",
"Valts",
""
],
[
"Sundaralingam",
"Balakumar",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Birchfield",
"Stan",
""
]
]
| new_dataset | 0.997098 |
2310.00483 | Vincent Li | Vincent Li, Nick Doiron | Prompting Code Interpreter to Write Better Unit Tests on Quixbugs
Functions | 13 pages (including appendices), 0 figures, 1 table. First authored
by Vincent Li; edited by Nick Doiron | null | null | null | cs.SE cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Unit testing is a commonly-used approach in software engineering to test the
correctness and robustness of written code. Unit tests are tests designed to
test small components of a codebase in isolation, such as an individual
function or method. Although unit tests have historically been written by human
programmers, recent advancements in AI, particularly LLMs, have shown
corresponding advances in automatic unit test generation. In this study, we
explore the effect of different prompts on the quality of unit tests generated
by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the
Quixbugs dataset, and we focus on prompting due to the ease with which users
can make use of our findings and observations. We find that the quality of the
generated unit tests is not sensitive to changes in minor details in the
prompts provided. However, we observe that Code Interpreter is often able to
effectively identify and correct mistakes in code that it writes, suggesting
that providing it runnable code to check the correctness of its outputs would
be beneficial, even though we find that it is already often able to generate
correctly-formatted unit tests. Our findings suggest that, when prompting
models similar to Code Interpreter, it is important to include the basic
information necessary to generate unit tests, but minor details are not as
important.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 20:36:23 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Li",
"Vincent",
""
],
[
"Doiron",
"Nick",
""
]
]
| new_dataset | 0.995854 |
2310.00491 | Gaurav Jain | Gaurav Jain, Basel Hindi, Zihao Zhang, Koushik Srinivasula, Mingyu
Xie, Mahshid Ghasemi, Daniel Weiner, Sophie Ana Paris, Xin Yi Therese Xu,
Michael Malcolm, Mehmet Turkcan, Javad Ghaderi, Zoran Kostic, Gil Zussman,
Brian A. Smith | StreetNav: Leveraging Street Cameras to Support Precise Outdoor
Navigation for Blind Pedestrians | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by-sa/4.0/ | Blind and low-vision (BLV) people rely on GPS-based systems for outdoor
navigation. GPS's inaccuracy, however, causes them to veer off track, run into
unexpected obstacles, and struggle to reach precise destinations. While prior
work has made precise navigation possible indoors via additional hardware
installations, enabling precise navigation outdoors remains a challenge.
Ironically, many outdoor environments of interest such as downtown districts
are already instrumented with hardware such as street cameras. In this work, we
explore the idea of repurposing street cameras for outdoor navigation, and
investigate the effectiveness of such an approach. Our resulting system,
StreetNav, processes the cameras' video feeds using computer vision and gives
BLV pedestrians real-time navigation assistance. Our user evaluations in the
COSMOS testbed with eight BLV pedestrians show that StreetNav guides them more
precisely than GPS, but its performance is sensitive to lighting conditions and
environmental occlusions. We discuss future implications for deploying such
systems at scale.
| [
{
"version": "v1",
"created": "Sat, 30 Sep 2023 21:16:05 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Jain",
"Gaurav",
""
],
[
"Hindi",
"Basel",
""
],
[
"Zhang",
"Zihao",
""
],
[
"Srinivasula",
"Koushik",
""
],
[
"Xie",
"Mingyu",
""
],
[
"Ghasemi",
"Mahshid",
""
],
[
"Weiner",
"Daniel",
""
],
[
"Paris",
"Sophie Ana",
""
],
[
"Xu",
"Xin Yi Therese",
""
],
[
"Malcolm",
"Michael",
""
],
[
"Turkcan",
"Mehmet",
""
],
[
"Ghaderi",
"Javad",
""
],
[
"Kostic",
"Zoran",
""
],
[
"Zussman",
"Gil",
""
],
[
"Smith",
"Brian A.",
""
]
]
| new_dataset | 0.999404 |
2310.00546 | Jiancheng Huang | Jiancheng Huang, Yifan Liu, Yi Huang, Shifeng Chen | Seal2Real: Prompt Prior Learning on Diffusion Model for Unsupervised
Document Seal Data Generation and Realisation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In document processing, seal-related tasks have very large commercial
applications, such as seal segmentation, seal authenticity discrimination, seal
removal, and text recognition under seals. However, these seal-related tasks
are highly dependent on labelled document seal datasets, resulting in very
little work on these tasks. To address the lack of labelled datasets for these
seal-related tasks, we propose Seal2Real, a generative method that generates a
large amount of labelled document seal data, and construct a Seal-DB dataset
containing 20K images with labels. In Seal2Real, we propose a prompt prior
learning architecture based on a pre-trained Stable Diffusion Model that
migrates its prior generative power to our seal generation task with
unsupervised training. The realistic seal generation capability greatly
facilitates the performance of downstream seal-related tasks on real data.
Experimental results on the Seal-DB dataset demonstrate the effectiveness of
Seal2Real.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 02:12:49 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Huang",
"Jiancheng",
""
],
[
"Liu",
"Yifan",
""
],
[
"Huang",
"Yi",
""
],
[
"Chen",
"Shifeng",
""
]
]
| new_dataset | 0.999693 |
2310.00564 | Ole Richter | Ole Richter, Chenxi Wu, Adrian M. Whatley, German K\"ostinger, Carsten
Nielsen, Ning Qiao and Giacomo Indiveri | DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous
spiking neural network processor | *Ole Richter and Chenxi Wu contributed equally | null | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the remarkable progress that technology has made, the need for
processing data near the sensors at the edge has increased dramatically. The
electronic systems used in these applications must process data continuously,
in real-time, and extract relevant information using the smallest possible
energy budgets. A promising approach for implementing always-on processing of
sensory signals that supports on-demand, sparse, and edge-computing is to take
inspiration from the biological nervous system. Following this approach, we present
a brain-inspired platform for prototyping real-time event-based Spiking Neural
Networks (SNNs). The system proposed supports the direct emulation of dynamic
and realistic neural processing phenomena such as short-term plasticity, NMDA
gating, AMPA diffusion, homeostasis, spike frequency adaptation,
conductance-based dendritic compartments and spike transmission delays. The
analog circuits that implement such primitives are paired with low-latency
asynchronous digital circuits for routing and mapping events. This asynchronous
infrastructure enables the definition of different network architectures, and
provides direct event-based interfaces to convert and encode data from
event-based and continuous-signal sensors. Here we describe the overall system
architecture, we characterize the mixed signal analog-digital circuits that
emulate neural dynamics, demonstrate their features with experimental
measurements, and present a low- and high-level software ecosystem that can be
used for configuring the system. The flexibility to emulate different
biologically plausible neural networks, and the chip's ability to monitor both
population and single-neuron signals in real time, make it possible to develop
and validate complex models of neural processing for both basic research and
edge-computing applications.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 03:48:16 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Richter",
"Ole",
""
],
[
"Wu",
"Chenxi",
""
],
[
"Whatley",
"Adrian M.",
""
],
[
"Köstinger",
"German",
""
],
[
"Nielsen",
"Carsten",
""
],
[
"Qiao",
"Ning",
""
],
[
"Indiveri",
"Giacomo",
""
]
]
| new_dataset | 0.986782 |
2310.00569 | Haotian Wu | Yubo Gao, Haotian Wu | TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph-based recommendation methods have achieved great success in
the field of recommender systems. However, over-reliance on high-quality
knowledge graphs is a bottleneck for such methods. Specifically, the
long-tailed distribution of entities of KG and noise issues in the real world
will make item-entity dependent relations deviate from reflecting true
characteristics and significantly harm the performance of modeling user
preference. Contrastive learning, as a novel method that is employed for data
augmentation and denoising, provides inspiration to fill this research gap.
However, the mainstream work only focuses on the long-tail properties of the
number of items clicked, while ignoring that the long-tail properties of the
total number of clicks per user may also affect the performance of the recommendation
model. Therefore, to tackle these problems, motivated by the Debiased
Contrastive Learning of Unsupervised Sentence Representations (DCLR), we
propose Two-Level Debiased Contrastive Graph Learning (TDCGL) model.
Specifically, we design the Two-Level Debiased Contrastive Learning (TDCL) and
deploy it in the KG, which is conducted not only on User-Item pairs but also on
User-User pairs for modeling higher-order relations. Also, to reduce the bias
caused by random sampling in contrastive learning, in addition to the negative
samples obtained by random sampling, we add noise-based negatives to ensure
spatial uniformity. Extensive experiments on
open-source datasets demonstrate that our method has excellent anti-noise
capability and significantly outperforms state-of-the-art baselines. In
addition, ablation studies about the necessity for each level of TDCL are
conducted.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 03:56:38 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Gao",
"Yubo",
""
],
[
"Wu",
"Haotian",
""
]
]
| new_dataset | 0.988839 |
2310.00623 | Wenqi Song | Wenqi Song, Yan Gao and Quan Quan | Speed and Density Planning for a Speed-Constrained Robot Swarm Through a
Virtual Tube | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The planning and control of a robot swarm in a complex environment have
attracted increasing attention. To this end, the idea of virtual tubes has been
taken up in our previous work. Specifically, a virtual tube with varying widths
has been planned to avoid collisions with obstacles in a complex environment.
Based on the planned virtual tube for a large number of speed-constrained
robots, the average forward speed and density along the virtual tube are
further planned in this paper to ensure safety and improve efficiency. Compared
with the existing methods, the proposed method is based on global information
and can be applied to traversing narrow spaces for speed-constrained robot
swarms. Numerical simulations and experiments are conducted to show that the
safety and efficiency of the passing-through process are improved. A video
of the simulations and experiments is available at https://youtu.be/lJHdMQMqSpc.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 09:21:10 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Song",
"Wenqi",
""
],
[
"Gao",
"Yan",
""
],
[
"Quan",
"Quan",
""
]
]
| new_dataset | 0.974573 |
2310.00629 | Ekta Gavas | Ekta Gavas and Anoop Namboodiri | Finger-UNet: A U-Net based Multi-Task Architecture for Deep Fingerprint
Enhancement | 8 pages, 5 figures, Accepted at 18th VISIGRAPP 2023: VISAPP
conference | Proceedings of the 18th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP
2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress,
pages 309-316 | 10.5220/0011687400003417 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For decades, fingerprint recognition has been prevalent for security,
forensics, and other biometric applications. However, the availability of
good-quality fingerprints is challenging, making recognition difficult.
Fingerprint images might be degraded with a poor ridge structure and noisy or
less contrasting backgrounds. Hence, fingerprint enhancement plays a vital role
in the early stages of the fingerprint recognition/verification pipeline. In
this paper, we investigate and improvise the encoder-decoder style architecture
and suggest intuitive modifications to U-Net to enhance low-quality
fingerprints effectively. We investigate the use of Discrete Wavelet Transform
(DWT) for fingerprint enhancement and use a wavelet attention module instead of
max pooling which proves advantageous for our task. Moreover, we replace
regular convolutions with depthwise separable convolutions, which significantly
reduces the memory footprint of the model without degrading the performance. We
also demonstrate that incorporating domain knowledge via a fingerprint minutiae
prediction task can improve fingerprint reconstruction through multi-task
learning. Furthermore, we also integrate the orientation estimation task to
propagate the knowledge of ridge orientations to enhance the performance
further. We present the experimental results and evaluate our model on FVC 2002
and NIST SD302 databases to show the effectiveness of our approach compared to
previous works.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 09:49:10 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Gavas",
"Ekta",
""
],
[
"Namboodiri",
"Anoop",
""
]
]
| new_dataset | 0.991823 |
2310.00655 | Zeying Gong | Zeying Gong, Yujin Tang, Junwei Liang | PatchMixer: A Patch-Mixing Architecture for Long-Term Time Series
Forecasting | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the Transformer has been the dominant architecture for time series
forecasting tasks in recent years, a fundamental challenge remains: the
permutation-invariant self-attention mechanism within Transformers leads to a
loss of temporal information. To tackle these challenges, we propose
PatchMixer, a novel CNN-based model. It introduces a permutation-variant
convolutional structure to preserve temporal information. Diverging from
conventional CNNs in this field, which often employ multiple scales or numerous
branches, our method relies exclusively on depthwise separable convolutions.
This allows us to extract both local features and global correlations using a
single-scale architecture. Furthermore, we employ dual forecasting heads that
encompass both linear and nonlinear components to better model future curve
trends and details. Our experimental results on seven time-series forecasting
benchmarks indicate that compared with the state-of-the-art method and the
best-performing CNN, PatchMixer yields $3.9\%$ and $21.2\%$ relative
improvements, respectively, while being 2-3x faster than the most advanced
method. We will release our code and model.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 12:47:59 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Gong",
"Zeying",
""
],
[
"Tang",
"Yujin",
""
],
[
"Liang",
"Junwei",
""
]
]
| new_dataset | 0.999397 |
2310.00656 | Huajian Xin | Huajian Xin, Haiming Wang, Chuanyang Zheng, Lin Li, Zhengying Liu,
Qingxing Cao, Yinya Huang, Jing Xiong, Han Shi, Enze Xie, Jian Yin, Zhenguo
Li, Xiaodan Liang | LEGO-Prover: Neural Theorem Proving with Growing Libraries | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite the success of large language models (LLMs), the task of theorem
proving still remains one of the hardest reasoning tasks that is far from being
fully solved. Prior methods using language models have demonstrated promising
results, but they still struggle to prove even middle school level theorems.
One common limitation of these methods is that they assume a fixed theorem
library during the whole theorem proving process. However, as we all know,
creating new useful theorems or even new theories is not only helpful but
crucial and necessary for advancing mathematics and proving harder and deeper
results. In this work, we present LEGO-Prover, which employs a growing skill
library containing verified lemmas as skills to augment the capability of LLMs
used in theorem proving. By constructing the proof modularly, LEGO-Prover
enables LLMs to utilize existing skills retrieved from the library and to
create new skills during the proving process. These skills are further evolved
(by prompting an LLM) to enrich the library on another scale. Modular and
reusable skills are constantly added to the library to enable tackling
increasingly intricate mathematical problems. Moreover, the learned library
further bridges the gap between human proofs and formal proofs by making it
easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass
rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%).
During the proving process, LEGO-Prover also manages to generate over 20,000
skills (theorems/lemmas) and adds them to the growing library. Our ablation
study indicates that these newly added skills are indeed helpful for proving
theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We
also release our code and all the generated skills.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 12:47:59 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Xin",
"Huajian",
""
],
[
"Wang",
"Haiming",
""
],
[
"Zheng",
"Chuanyang",
""
],
[
"Li",
"Lin",
""
],
[
"Liu",
"Zhengying",
""
],
[
"Cao",
"Qingxing",
""
],
[
"Huang",
"Yinya",
""
],
[
"Xiong",
"Jing",
""
],
[
"Shi",
"Han",
""
],
[
"Xie",
"Enze",
""
],
[
"Yin",
"Jian",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Liang",
"Xiaodan",
""
]
]
| new_dataset | 0.959808 |
2310.00659 | Sandip Purnapatra | Sandip Purnapatra, Humaira Rezaie, Bhavin Jawade, Yu Liu, Yue Pan,
Luke Brosell, Mst Rumana Sumi, Lambert Igene, Alden Dimarco, Srirangaraj
Setlur, Soumyabrata Dey, Stephanie Schuckers, Marco Huber, Jan Niklas Kolf,
Meiling Fang, Naser Damer, Banafsheh Adami, Raul Chitic, Karsten Seelert,
Vishesh Mistry, Rahul Parthe, Umit Kacar | Liveness Detection Competition -- Noncontact-based Fingerprint
Algorithms and Systems (LivDet-2023 Noncontact Fingerprint) | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Liveness Detection (LivDet) is an international competition series open to
academia and industry with the objective to assess and report state-of-the-art
in Presentation Attack Detection (PAD). LivDet-2023 Noncontact Fingerprint is
the first edition of the noncontact fingerprint-based PAD competition for
algorithms and systems. The competition serves as an important benchmark in
noncontact-based fingerprint PAD, offering (a) independent assessment of the
state-of-the-art in noncontact-based fingerprint PAD for algorithms and
systems, and (b) common evaluation protocol, which includes finger photos of a
variety of Presentation Attack Instruments (PAIs) and live fingers to the
biometric research community (c) provides standard algorithm and system
evaluation protocols, along with the comparative analysis of state-of-the-art
algorithms from academia and industry with both old and new android
smartphones. The winning algorithm achieved an APCER of 11.35% averaged overall
PAIs and a BPCER of 0.62%. The winning system achieved an APCER of 13.04%,
averaged over all PAIs tested over all the smartphones, and a BPCER of 1.68%
over all smartphones tested. Four-finger systems that make individual
finger-based PAD decisions were also tested. The dataset used for the competition
will be available to all researchers as per the data-sharing protocol.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 12:59:30 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Purnapatra",
"Sandip",
""
],
[
"Rezaie",
"Humaira",
""
],
[
"Jawade",
"Bhavin",
""
],
[
"Liu",
"Yu",
""
],
[
"Pan",
"Yue",
""
],
[
"Brosell",
"Luke",
""
],
[
"Sumi",
"Mst Rumana",
""
],
[
"Igene",
"Lambert",
""
],
[
"Dimarco",
"Alden",
""
],
[
"Setlur",
"Srirangaraj",
""
],
[
"Dey",
"Soumyabrata",
""
],
[
"Schuckers",
"Stephanie",
""
],
[
"Huber",
"Marco",
""
],
[
"Kolf",
"Jan Niklas",
""
],
[
"Fang",
"Meiling",
""
],
[
"Damer",
"Naser",
""
],
[
"Adami",
"Banafsheh",
""
],
[
"Chitic",
"Raul",
""
],
[
"Seelert",
"Karsten",
""
],
[
"Mistry",
"Vishesh",
""
],
[
"Parthe",
"Rahul",
""
],
[
"Kacar",
"Umit",
""
]
]
| new_dataset | 0.97277 |
2310.00679 | Joseph Marvin Imperial | Ma. Beatrice Emanuela Pilar, Ellyza Mari Papas, Mary Loise
Buenaventura, Dane Dedoroy, Myron Darrel Montefalcon, Jay Rhald Padilla, Lany
Maceda, Mideth Abisado, Joseph Marvin Imperial | CebuaNER: A New Baseline Cebuano Named Entity Recognition Model | Accepted for PACLIC2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite being one of the most linguistically diverse groups of countries,
computational linguistics and language processing research in Southeast Asia
has struggled to match the level of countries from the Global North. Thus,
initiatives such as open-sourcing corpora and the development of baseline
models for basic language processing tasks are important stepping stones to
encourage the growth of research efforts in the field. To answer this call, we
introduce CebuaNER, a new baseline model for named entity recognition (NER) in
the Cebuano language. Cebuano is the second most-used native language in the
Philippines, with over 20 million speakers. To build the model, we collected
and annotated over 4,000 news articles, the largest of any work in the
language, retrieved from online local Cebuano platforms to train algorithms
such as Conditional Random Field and Bidirectional LSTM. Our findings show
promising results as a new baseline model, achieving over 70% performance on
precision, recall, and F1 across all entity tags, as well as potential efficacy
in a crosslingual setup with Tagalog.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 14:09:42 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Pilar",
"Ma. Beatrice Emanuela",
""
],
[
"Papas",
"Ellyza Mari",
""
],
[
"Buenaventura",
"Mary Loise",
""
],
[
"Dedoroy",
"Dane",
""
],
[
"Montefalcon",
"Myron Darrel",
""
],
[
"Padilla",
"Jay Rhald",
""
],
[
"Maceda",
"Lany",
""
],
[
"Abisado",
"Mideth",
""
],
[
"Imperial",
"Joseph Marvin",
""
]
]
| new_dataset | 0.999387 |
2310.00698 | Reshma Ramaprasad | Reshma Ramaprasad | Comics for Everyone: Generating Accessible Text Descriptions for Comic
Strips | Accepted at CLVL: 5th Workshop On Closing The Loop Between Vision And
Language (ICCV 2023 Workshop) | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comic strips are a popular and expressive form of visual storytelling that
can convey humor, emotion, and information. However, they are inaccessible to
the BLV (Blind or Low Vision) community, who cannot perceive the images,
layouts, and text of comics. Our goal in this paper is to create natural
language descriptions of comic strips that are accessible to the visually
impaired community. Our method consists of two steps: first, we use computer
vision techniques to extract information about the panels, characters, and text
of the comic images; second, we use this information as additional context to
prompt a multimodal large language model (MLLM) to produce the descriptions. We
test our method on a collection of comics that have been annotated by human
experts and measure its performance using both quantitative and qualitative
metrics. The outcomes of our experiments are encouraging and promising.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 15:13:48 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Ramaprasad",
"Reshma",
""
]
]
| new_dataset | 0.984151 |
2310.00718 | Matteo Paltenghi | Matteo Paltenghi, Michael Pradel | LintQ: A Static Analysis Framework for Qiskit Quantum Programs | 21 pages, 11 figures | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As quantum computing is rising in popularity, the amount of quantum programs
and the number of developers writing them are increasing rapidly.
Unfortunately, writing correct quantum programs is challenging due to various
subtle rules developers need to be aware of. Empirical studies show that 40-82%
of all bugs in quantum software are specific to the quantum domain. Yet,
existing static bug detection frameworks are mostly unaware of quantum-specific
concepts, such as circuits, gates, and qubits, and hence miss many bugs. This
paper presents LintQ, a comprehensive static analysis framework for detecting
bugs in quantum programs. Our approach is enabled by a set of abstractions
designed to reason about common concepts in quantum computing without referring
to the details of the underlying quantum computing platform. Built on top of
these abstractions, LintQ offers an extensible set of nine analyses that detect
likely bugs, such as operating on corrupted quantum states, redundant
measurements, and incorrect compositions of sub-circuits. We apply the approach
to a newly collected dataset of 7,568 real-world Qiskit-based quantum programs,
showing that LintQ effectively identifies various programming problems with a
precision of 80.5%. A comparison with a general-purpose linter and two existing
quantum-aware techniques shows that all problems found by LintQ during our
evaluation are missed by prior work. LintQ hence takes an important step toward
reliable software in the growing field of quantum computing.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 16:36:09 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Paltenghi",
"Matteo",
""
],
[
"Pradel",
"Michael",
""
]
]
| new_dataset | 0.999337 |
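The LintQ entry above targets quantum-specific bugs such as redundant measurements in Qiskit programs. The toy check below is not LintQ (which analyzes program source statically); it only illustrates the kind of pattern such an analysis flags, walking an already-built circuit and assuming a recent Qiskit where `QuantumCircuit.data` yields instructions with `.operation` and `.qubits`.

```python
# Naive illustration of a "redundant measurement" pattern on a Qiskit circuit.
from qiskit import QuantumCircuit

def redundant_measurements(qc: QuantumCircuit):
    """Return indices of qubits measured twice with no operation in between."""
    last_was_measure = {}          # qubit index -> True if the last op was a measure
    flagged = set()
    for instr in qc.data:
        idxs = [qc.qubits.index(q) for q in instr.qubits]
        if instr.operation.name == "measure":
            for i in idxs:
                if last_was_measure.get(i):
                    flagged.add(i)
                last_was_measure[i] = True
        else:
            for i in idxs:
                last_was_measure[i] = False
    return sorted(flagged)

qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)
qc.measure(0, 0)                   # back-to-back measurement: likely redundant
print(redundant_measurements(qc))  # -> [0]
```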
2310.00723 | Noah Wiederhold | Noah Wiederhold, Ava Megyeri, DiMaggio Paris, Sean Banerjee, Natasha
Kholgade Banerjee | HOH: Markerless Multimodal Human-Object-Human Handover Dataset with
Large Object Count | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present the HOH (Human-Object-Human) Handover Dataset, a large object
count dataset with 136 objects, to accelerate data-driven research on handover
studies, human-robot handover implementation, and artificial intelligence (AI)
on handover parameter estimation from 2D and 3D data of person interactions.
HOH contains multi-view RGB and depth data, skeletons, fused point clouds,
grasp type and handedness labels, object, giver hand, and receiver hand 2D and
3D segmentations, giver and receiver comfort ratings, and paired object
metadata and aligned 3D models for 2,720 handover interactions spanning 136
objects and 20 giver-receiver pairs (40 with role-reversal) organized from 40
participants. We also show experimental results of neural networks trained
using HOH to perform grasp, orientation, and trajectory prediction. As the only
fully markerless handover capture dataset, HOH represents natural human-human
handover interactions, overcoming challenges with markered datasets that
require specific suiting for body tracking, and lack high-resolution hand
tracking. To date, HOH is the largest handover dataset in number of objects,
participants, pairs with role reversal accounted for, and total interactions
captured.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 16:48:48 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wiederhold",
"Noah",
""
],
[
"Megyeri",
"Ava",
""
],
[
"Paris",
"DiMaggio",
""
],
[
"Banerjee",
"Sean",
""
],
[
"Banerjee",
"Natasha Kholgade",
""
]
]
| new_dataset | 0.999812 |
2310.00851 | Mijail Mendoza Flores | Mija\'il Ja\'en Mendoza, Nicholas D. Naclerio and Elliot W. Hawkes | High-curvature, high-force, vine robot for inspection | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Robot performance has advanced considerably both in and out of the factory,
however in tightly constrained, unknown environments such as inside a jet
engine or the human heart, current robots are less adept. In such cases where a
borescope or endoscope can't reach, disassembly or surgery are costly. One
promising inspection device inspired by plant growth is the "vine robot", which can
navigate cluttered environments by extending from its tip. Yet, these vine
robots are currently limited in their ability to simultaneously steer into
tight curvatures and apply substantial forces to the environment. Here, we
propose a plant-inspired method of steering by asymmetrically lengthening one
side of the vine robot to enable high curvature and large force application.
Our key development is the introduction of an extremely anisotropic, composite,
wrinkled film with elastic moduli 400x different in orthogonal directions. The
film is used as the vine robot body, oriented such that it can stretch over
120% axially, but only 3% circumferentially. With the addition of controlled
layer jamming, this film enables a steering method inspired by plants in which
the circumference of the robot is inextensible, but the sides can stretch to
allow turns. This steering method and body pressure do not work against each
other, allowing the robot to exhibit higher forces and tighter curvatures than
previous vine robot architectures. This work advances the abilities of vine
robots--and robots more generally--to not only access tightly constrained
environments, but perform useful work once accessed.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 02:15:11 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Mendoza",
"Mijaíl Jaén",
""
],
[
"Naclerio",
"Nicholas D.",
""
],
[
"Hawkes",
"Elliot W.",
""
]
]
| new_dataset | 0.998969 |
2310.00874 | Xiuzhong Hu | Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, and
Junyi Ma | PC-NeRF: Parent-Child Neural Radiance Fields under Partial Sensor Data
Loss in Autonomous Driving Environments | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing large-scale 3D scenes is essential for autonomous vehicles,
especially when partial sensor data is lost. Although the recently developed
neural radiance fields (NeRF) have shown compelling results in implicit
representations, large-scale 3D scene reconstruction using partially lost
LiDAR point cloud data still needs to be explored. To bridge this gap, we
propose a novel 3D scene reconstruction framework called parent-child neural
radiance field (PC-NeRF). The framework comprises two modules, the parent NeRF
and the child NeRF, to simultaneously optimize scene-level, segment-level, and
point-level scene representations. Sensor data can be utilized more efficiently
by leveraging the segment-level representation capabilities of child NeRFs, and
an approximate volumetric representation of the scene can be quickly obtained
even with limited observations. With extensive experiments, our proposed
PC-NeRF is proven to achieve high-precision 3D reconstruction in large-scale
scenes. Moreover, PC-NeRF can effectively tackle situations where partial
sensor data is lost and has high deployment efficiency with limited training
time. Our approach implementation and the pre-trained models will be available
at https://github.com/biter0088/pc-nerf.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 03:32:35 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Hu",
"Xiuzhong",
""
],
[
"Xiong",
"Guangming",
""
],
[
"Zang",
"Zheng",
""
],
[
"Jia",
"Peng",
""
],
[
"Han",
"Yuxuan",
""
],
[
"Ma",
"Junyi",
""
]
]
| new_dataset | 0.999176 |
2310.00897 | Ashok Kumar S | Ashok S Kumar, Sheetal Kalyani | Practical Radar Sensing Using Two Stage Neural Network for Denoising
OTFS Signals | null | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Noise contamination affects the performance of orthogonal time frequency
space (OTFS) signals in real-world environments, making radar sensing
challenging. Our objective is to derive the range and velocity from the
delay-Doppler (DD) domain for radar sensing by using OTFS signaling. This work
introduces a two-stage approach to tackle this issue. In the first stage, we
use a convolutional neural network (CNN) model to classify the noise levels as
moderate or severe. Subsequently, if the noise level is severe, the OTFS
samples are denoised using a generative adversarial network (GAN). The proposed
approach achieves notable levels of accuracy in the classification of noisy
signals and mean absolute error (MAE) for the entire system even in low
signal-to-noise ratio (SNR) scenarios.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 04:29:04 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Kumar",
"Ashok S",
""
],
[
"Kalyani",
"Sheetal",
""
]
]
| new_dataset | 0.972912 |
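The OTFS radar-sensing entry above uses a two-stage design: a CNN classifies the noise level, and a GAN generator denoises only the severely corrupted samples. The PyTorch sketch below illustrates that control flow with deliberately tiny stand-in networks; the layer sizes and the `denoise_if_needed` wrapper are illustrative assumptions, not the paper's architecture.

```python
# Illustrative two-stage flow: CNN classifies noise severity, a GAN-style
# generator denoises only the severely corrupted OTFS samples (toy layer sizes).
import torch
import torch.nn as nn

class NoiseClassifier(nn.Module):
    """Tiny CNN labelling a DD-domain grid as moderate (0) or severe (1) noise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),
        )
    def forward(self, x):
        return self.net(x)

class Denoiser(nn.Module):
    """Tiny convolutional generator standing in for the GAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def denoise_if_needed(x, classifier, denoiser):
    """Run the denoiser only on samples the classifier marks as severely noisy."""
    with torch.no_grad():
        severe = classifier(x).argmax(dim=1) == 1
        out = x.clone()
        if severe.any():
            out[severe] = denoiser(x[severe])
    return out

# A batch of 4 complex DD-domain grids stored as 2 real channels.
x = torch.randn(4, 2, 32, 32)
print(denoise_if_needed(x, NoiseClassifier(), Denoiser()).shape)  # (4, 2, 32, 32)
```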
2310.00905 | Wenxuan Wang | Wenxuan Wang, Zhaopeng Tu, Chang Chen, Youliang Yuan, Jen-tse Huang,
Wenxiang Jiao, Michael R. Lyu | All Languages Matter: On the Multilingual Safety of Large Language
Models | The first multilingual safety benchmark for large language models | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Safety lies at the core of developing and deploying large language models
(LLMs). However, previous safety benchmarks only concern safety in a single
language, e.g., the majority language in the pretraining data, such as English.
In this work, we build the first multilingual safety benchmark for LLMs,
XSafety, in response to the global deployment of LLMs in practice. XSafety
covers 14 kinds of commonly used safety issues across 10 languages that span
several language families. We utilize XSafety to empirically study the
multilingual safety for 4 widely-used LLMs, including both close-API and
open-source models. Experimental results show that all LLMs produce
significantly more unsafe responses for non-English queries than English ones,
indicating the necessity of developing safety alignment for non-English
languages. In addition, we propose several simple and effective prompting
methods to improve the multilingual safety of ChatGPT by evoking safety
knowledge and improving cross-lingual generalization of safety alignment. Our
prompting method can significantly reduce the ratio of unsafe responses from
19.1% to 9.7% for non-English queries. We release our data at
https://github.com/Jarviswang94/Multilingual_safety_benchmark.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 05:23:34 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wang",
"Wenxuan",
""
],
[
"Tu",
"Zhaopeng",
""
],
[
"Chen",
"Chang",
""
],
[
"Yuan",
"Youliang",
""
],
[
"Huang",
"Jen-tse",
""
],
[
"Jiao",
"Wenxiang",
""
],
[
"Lyu",
"Michael R.",
""
]
]
| new_dataset | 0.99009 |
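The XSafety entry above reduces unsafe responses partly through prompting that evokes safety knowledge. The loop below sketches how such an evaluation could be scripted; `query_model` and `is_unsafe` are hypothetical placeholders for a model API and an unsafe-response judge, not components released with the benchmark, and the safety prefix is only an example.

```python
# Hypothetical evaluation loop: measure the unsafe-response rate with and
# without a safety-evoking system prompt.
SAFETY_PREFIX = (
    "You are a helpful assistant. Think about whether the request could cause "
    "harm, and refuse or answer safely if it could.\n\n"
)

def unsafe_rate(queries, query_model, is_unsafe, use_safety_prompt=False):
    unsafe = 0
    for q in queries:
        prompt = SAFETY_PREFIX + q if use_safety_prompt else q
        response = query_model(prompt)
        unsafe += int(is_unsafe(q, response))
    return unsafe / max(len(queries), 1)

# Usage sketch: compare the two settings on one language's query set.
# rate_plain    = unsafe_rate(queries_zh, query_model, is_unsafe)
# rate_prompted = unsafe_rate(queries_zh, query_model, is_unsafe, use_safety_prompt=True)
```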
2310.00935 | Yike Wang | Yike Wang, Shangbin Feng, Heng Wang, Weijia Shi, Vidhisha
Balachandran, Tianxing He, Yulia Tsvetkov | Resolving Knowledge Conflicts in Large Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) often encounter knowledge conflicts, scenarios
where discrepancy arises between the internal parametric knowledge of LLMs and
non-parametric information provided in the prompt context. In this work we ask
what are the desiderata for LLMs when a knowledge conflict arises and whether
existing LLMs fulfill them. We posit that LLMs should 1) identify knowledge
conflicts, 2) pinpoint conflicting information segments, and 3) provide
distinct answers or viewpoints in conflicting scenarios. To this end, we
introduce KNOWLEDGE CONFLICT, an evaluation framework for simulating contextual
knowledge conflicts and quantitatively evaluating to what extent LLMs achieve
these goals. KNOWLEDGE CONFLICT includes diverse and complex situations of
knowledge conflict, knowledge from diverse entities and domains, two synthetic
conflict creation methods, and settings with progressively increasing
difficulty to reflect realistic knowledge conflicts. Extensive experiments with
the KNOWLEDGE CONFLICT framework reveal that while LLMs perform well in
identifying the existence of knowledge conflicts, they struggle to determine
the specific conflicting knowledge and produce a response with distinct answers
amidst conflicting information. To address these challenges, we propose new
instruction-based approaches that augment LLMs to better achieve the three
goals. Further analysis shows that abilities to tackle knowledge conflicts are
greatly impacted by factors such as knowledge domain and prompt text, while
generating robust responses to knowledge conflict scenarios remains an open
research question.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 06:57:45 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wang",
"Yike",
""
],
[
"Feng",
"Shangbin",
""
],
[
"Wang",
"Heng",
""
],
[
"Shi",
"Weijia",
""
],
[
"Balachandran",
"Vidhisha",
""
],
[
"He",
"Tianxing",
""
],
[
"Tsvetkov",
"Yulia",
""
]
]
| new_dataset | 0.977404 |
2310.00938 | Heng Guo | Weiming Feng and Heng Guo | An FPRAS for two terminal reliability in directed acyclic graphs | 26 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a fully polynomial-time randomized approximation scheme (FPRAS) for
two terminal reliability in directed acyclic graphs.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 07:06:37 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Feng",
"Weiming",
""
],
[
"Guo",
"Heng",
""
]
]
| new_dataset | 0.999281 |
2310.00973 | Toshiaki Aoki | Toshiaki Aoki (1), Aritoshi Hata (2), Kazusato Kanamori (2), Satoshi
Tanaka (2), Yuta Kawamoto (3), Yasuhiro Tanase (3), Masumi Imai (3), Fumiya
Shigemitsu (4), Masaki Gondo (4), Tomoji Kishi (5) ((1) JAIST, (2) DENSO
CORPORATION, (3) DENSO CREATE INC., (4) eSOL Co., Ltd, (5) Waseda University) | Model-Checking in the Loop Model-Based Testing for Automotive Operating
Systems | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While vehicles have primarily been controlled through mechanical means in
years past, an increasing number of embedded control systems are being
installed and used, keeping pace with advances in electronic control technology
and performance. Automotive systems consist of multiple components developed by
a range of vendors. To accelerate developments in embedded control systems,
industrial standards such as AUTOSAR are being defined for automotive systems,
including the design of operating system and middleware technologies. Crucial
to ensuring the safety of automotive systems, the operating system is
foundational software on which many automotive applications are executed. In
this paper, we propose an integrated model-based method for verifying
automotive operating systems; our method is called Model-Checking in the Loop
Model-Based Testing (MCIL-MBT). In MCIL-MBT, we create a model that formalizes
specifications of automotive operating systems and verifies the specifications
via model-checking. Next, we conduct model-based testing with the verified
model to ensure that a specific operating system implementation conforms to the
model. These verification and testing stages are iterated over until no flaws
are detected. Our method has already been introduced to an automotive system
supplier and an operating system vendor. Through our approach, we successfully
identified flaws that were not detected by conventional review and testing
methods.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 08:29:59 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Aoki",
"Toshiaki",
""
],
[
"Hata",
"Aritoshi",
""
],
[
"Kanamori",
"Kazusato",
""
],
[
"Tanaka",
"Satoshi",
""
],
[
"Kawamoto",
"Yuta",
""
],
[
"Tanase",
"Yasuhiro",
""
],
[
"Imai",
"Masumi",
""
],
[
"Shigemitsu",
"Fumiya",
""
],
[
"Gondo",
"Masaki",
""
],
[
"Kishi",
"Tomoji",
""
]
]
| new_dataset | 0.983118 |
2310.00996 | Zhivar Sourati | Zhivar Sourati, Filip Ilievski, Pia Sommerauer | ARN: A Comprehensive Framework and Dataset for Analogical Reasoning on
Narratives | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analogical reasoning is one of the prime abilities of humans and is linked to
creativity and scientific discoveries. This ability has been studied
extensively in natural language processing (NLP) as well as in cognitive
psychology by proposing various benchmarks and evaluation setups. Yet, a
substantial gap exists between evaluations of analogical reasoning in cognitive
psychology and NLP. Our aim is to bridge this gap by computationally adapting
theories of analogical reasoning from cognitive psychology to the context of
narratives and developing a large-scale evaluation framework.
More concretely, we propose the task of matching narratives based on system
mappings and release the Analogical Reasoning on Narratives (ARN) dataset. To
create the dataset, we devise a framework inspired by cognitive psychology
theories about analogical reasoning to utilize narratives and their components
to form mappings of different abstractness levels. These mappings are then
leveraged to create pairs of analogies and disanalogies/distractors with more
than 1k triples of query narratives, analogies, and distractors. We cover four
categories of far/near analogies and far/near distractors that allow us to
study analogical reasoning in models from distinct perspectives. In this study,
we evaluate different large language models (LLMs) on this task. Our results
demonstrate that LLMs struggle to recognize higher-order mappings when they are
not accompanied by lower-order mappings (far analogies) and show better
performance when all mappings are present simultaneously (near analogies). We
observe that in all the settings, the analogical reasoning abilities of LLMs
can be easily impaired by near distractors that form lower-order mappings with
the query narratives.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 08:58:29 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Sourati",
"Zhivar",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Sommerauer",
"Pia",
""
]
]
| new_dataset | 0.999781 |
2310.00999 | EPTCS | Falke B. {\O}. Carlsen, Lars Bo P. Frydenskov, Nicolaj {\O}. Jensen,
Jener Rasmussen, Mathias M. S{\o}rensen, Asger G. Weirs{\o}e, Mathias C.
Jensen, Kim G. Larsen | CGAAL: Distributed On-The-Fly ATL Model Checker with Heuristics | In Proceedings GandALF 2023, arXiv:2309.17318 | EPTCS 390, 2023, pp. 99-114 | 10.4204/EPTCS.390.7 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | We present CGAAL, our efficient on-the-fly model checker for alternating-time
temporal logic (ATL) on concurrent game structures (CGS). We present how our
tool encodes ATL as extended dependency graphs with negation edges and employs
the distributed on-the-fly algorithm by Dalsgaard et al. Our tool offers
multiple novel search strategies for the algorithm, including DHS which is
inspired by PageRank and uses the in-degree of configurations as a heuristic,
IHS which estimates instability of assignment values, and LPS which estimates
the distance to a state satisfying the constituent property using linear
programming. CGS are input using our modelling language LCGS, where composition
and synchronisation are easily described. We prove the correctness of our
encoding, and our experiments show that our tool CGAAL is often one to three
orders of magnitude faster than the popular tool PRISM-games on case studies
from PRISM's documentation and among case studies we have developed. In our
evaluation, we also compare and evaluate our search strategies, and find that
our custom search strategies are often significantly faster than the usual
breadth-first and depth-first search strategies.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 08:59:13 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Carlsen",
"Falke B. Ø.",
""
],
[
"Frydenskov",
"Lars Bo P.",
""
],
[
"Jensen",
"Nicolaj Ø.",
""
],
[
"Rasmussen",
"Jener",
""
],
[
"Sørensen",
"Mathias M.",
""
],
[
"Weirsøe",
"Asger G.",
""
],
[
"Jensen",
"Mathias C.",
""
],
[
"Larsen",
"Kim G.",
""
]
]
| new_dataset | 0.998066 |
2310.01015 | Qian Wang | Qian Wang, Zhen Zhang, Zemin Liu, Shengliang Lu, Bingqiao Luo,
Bingsheng He | ETGraph: A Pioneering Dataset Bridging Ethereum and Twitter | null | null | null | null | cs.SI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While numerous public blockchain datasets are available, their utility is
constrained by a singular focus on blockchain data. This constraint limits the
incorporation of relevant social network data into blockchain analysis, thereby
diminishing the breadth and depth of insight that can be derived. To address
the above limitation, we introduce ETGraph, a novel dataset that authentically
links Ethereum and Twitter, marking the first and largest dataset of its kind.
ETGraph combines Ethereum transaction records (2 million nodes and 30 million
edges) and Twitter following data (1 million nodes and 3 million edges),
bonding 30,667 Ethereum addresses with verified Twitter accounts sourced from
OpenSea. Detailed statistical analysis on ETGraph highlights the structural
differences between Twitter-matched and non-Twitter-matched Ethereum addresses.
Extensive experiments, including Ethereum link prediction, wash-trading
Ethereum addresses detection, and Twitter-Ethereum matching link prediction,
emphasize the significant role of Twitter data in enhancing Ethereum analysis.
ETGraph is available at https://etgraph.deno.dev/.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 09:07:01 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wang",
"Qian",
""
],
[
"Zhang",
"Zhen",
""
],
[
"Liu",
"Zemin",
""
],
[
"Lu",
"Shengliang",
""
],
[
"Luo",
"Bingqiao",
""
],
[
"He",
"Bingsheng",
""
]
]
| new_dataset | 0.999409 |
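The ETGraph entry above combines an Ethereum transaction graph with a Twitter follower graph through matched address-account pairs. A minimal sketch of assembling such a combined graph with NetworkX is below; the file names and column layout are assumptions for illustration, not ETGraph's released format.

```python
# Illustrative assembly of a combined Ethereum + Twitter graph with NetworkX.
import csv
import networkx as nx

G = nx.MultiDiGraph()

# Ethereum transactions: one directed edge per transfer between addresses.
with open("eth_transactions.csv") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["from_address"], row["to_address"],
                   layer="ethereum", value=float(row["value"]))

# Twitter follows: one directed edge per follower -> followee pair.
with open("twitter_follows.csv") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["follower_id"], row["followee_id"], layer="twitter")

# Matched pairs bond an address node to its verified Twitter account node.
with open("address_twitter_matches.csv") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["eth_address"], row["twitter_id"], layer="match")
        G.add_edge(row["twitter_id"], row["eth_address"], layer="match")

print(G.number_of_nodes(), G.number_of_edges())
```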
2310.01020 | Alexandra Duminil | Alexandra Duminil, Jean-Philippe Tarel, Roland Br\'emond | A New Real-World Video Dataset for the Comparison of Defogging
Algorithms | null | Advances in Signal Processing and Artificial Intelligence (ASPAI'
2022), Oct 2022, Corfu, Greece | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video restoration for noise removal, deblurring or super-resolution is
attracting more and more attention in the fields of image processing and
computer vision. Works on video restoration with data-driven approaches for fog
removal are rare however, due to the lack of datasets containing videos in both
clear and foggy conditions which are required for deep learning and
benchmarking. A new dataset, called REVIDE, was recently proposed for just that
purpose. In this paper, we implement the same approach by proposing a new
REal-world VIdeo dataset for the comparison of Defogging Algorithms (VIREDA),
with various fog densities and ground truths without fog. This small database
can serve as a test base for defogging algorithms. A video defogging algorithm
is also mentioned (still under development), with the key idea of using
temporal redundancy to minimize artefacts and exposure variations between
frames. Inspired by the success of Transformers architecture in deep learning
for various applications, we select this kind of architecture in a neural
network to show the relevance of the proposed dataset.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 09:12:39 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Duminil",
"Alexandra",
""
],
[
"Tarel",
"Jean-Philippe",
""
],
[
"Brémond",
"Roland",
""
]
]
| new_dataset | 0.995098 |
2310.01024 | Hungpu Chou | Xinchao Zhong, Sean Longyu Ma, Hong-fu Chou, Arsham Mostaani, Thang X.
Vu, Symeon Chatzinotas | Joint Source-Channel Coding System for 6G Communication: Design,
Prototype and Future Directions | 14 pages, 9 figures, Journal | null | null | null | cs.IT cs.NI eess.SP math.IT | http://creativecommons.org/publicdomain/zero/1.0/ | The goal of semantic communication is to surpass the optimal Shannon criterion
on a notable problem for future communication: the integration of collaborative
efforts between the intelligence of the transmission source and the joint
design of source coding and channel coding.
The convergence of scholarly investigation and applicable products in the field
of semantic communication is facilitated by the utilization of flexible
structural hardware design, which is constrained by the computational
capabilities of edge devices. This characteristic represents a significant
benefit of joint source-channel coding (JSCC), as it enables the generation of
source alphabets with diverse lengths and achieves a code rate of unity.
Moreover, JSCC exhibits near-capacity performance while maintaining low
complexity. Therefore, we leverage quasi-cyclic (QC) characteristics to propose
a QC-LDPC code-based JSCC scheme, together with Unequal Error Protection (UEP)
to ensure the recovery of semantically important information. In this study,
the feasibility of using a UEP-aware semantic encoder/decoder is explored on
top of the existing JSCC system. This approach is aimed at protecting the
significance of semantic, task-oriented information.
Additionally, the deployment of a JSCC system can be facilitated by employing
Low-Density Parity-Check (LDPC) codes on a reconfigurable device. This is
achieved by reconstructing the LDPC codes as QC-LDPC codes. The QC-LDPC layered
decoding technique, which has been specifically optimized for hardware
parallelism and tailored for channel decoding applications, can be suitably
adapted to accommodate the JSCC system. The performance of the proposed system
is evaluated by conducting BER measurements using both floating-point and 6-bit
quantization.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 09:17:55 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhong",
"Xinchao",
""
],
[
"Ma",
"Sean Longyu",
""
],
[
"Chou",
"Hong-fu",
""
],
[
"Mostaani",
"Arsham",
""
],
[
"Vu",
"Thang X.",
""
],
[
"Chatzinotas",
"Symeon",
""
]
]
| new_dataset | 0.982717 |
2310.01061 | Linhao Luo | Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan | Reasoning on Graphs: Faithful and Interpretable Large Language Model
Reasoning | 22 pages, 4 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated impressive reasoning abilities
in complex tasks. However, they lack up-to-date knowledge and experience
hallucinations during reasoning, which can lead to incorrect reasoning
processes and diminish their performance and trustworthiness. Knowledge graphs
(KGs), which capture vast amounts of facts in a structured format, offer a
reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM
reasoning methods only treat KGs as factual knowledge bases and overlook the
importance of their structural information for reasoning. In this paper, we
propose a novel method called reasoning on graphs (RoG) that synergizes LLMs
with KGs to enable faithful and interpretable reasoning. Specifically, we
present a planning-retrieval-reasoning framework, where RoG first generates
relation paths grounded by KGs as faithful plans. These plans are then used to
retrieve valid reasoning paths from the KGs for LLMs to conduct faithful
reasoning. Furthermore, RoG not only distills knowledge from KGs to improve the
reasoning ability of LLMs through training but also allows seamless integration
with any arbitrary LLMs during inference. Extensive experiments on two
benchmark KGQA datasets demonstrate that RoG achieves state-of-the-art
performance on KG reasoning tasks and generates faithful and interpretable
reasoning results.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 10:14:43 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Luo",
"Linhao",
""
],
[
"Li",
"Yuan-Fang",
""
],
[
"Haffari",
"Gholamreza",
""
],
[
"Pan",
"Shirui",
""
]
]
| new_dataset | 0.997095 |
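The RoG entry above follows a planning-retrieval-reasoning loop: the LLM proposes relation paths, valid paths are retrieved from the knowledge graph, and the LLM reasons over them. The sketch below shows only the retrieval step, walking a NetworkX knowledge graph along a given relation path; the graph schema is an assumption for illustration, and the plan itself would come from an LLM planner.

```python
# Illustrative retrieval of KG paths that instantiate a planned relation path.
import networkx as nx

def retrieve_paths(kg: nx.MultiDiGraph, start_entity: str, relation_path):
    """Return entity sequences reachable from start_entity along relation_path."""
    frontier = [[start_entity]]
    for relation in relation_path:
        next_frontier = []
        for path in frontier:
            head = path[-1]
            for _, tail, data in kg.out_edges(head, data=True):
                if data.get("relation") == relation:
                    next_frontier.append(path + [tail])
        frontier = next_frontier
    return frontier

kg = nx.MultiDiGraph()
kg.add_edge("Alice", "UniX", relation="educated_at")
kg.add_edge("UniX", "CityY", relation="located_in")

# A plan such as ["educated_at", "located_in"] would be produced by the planner.
print(retrieve_paths(kg, "Alice", ["educated_at", "located_in"]))
# [['Alice', 'UniX', 'CityY']]
```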
2310.01067 | Weixiao Gao | Weixiao Gao, Ravi Peters, Jantien Stoter | Unsupervised Roofline Extraction from True Orthophotos for LoD2 Building
Model Reconstruction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper discusses the reconstruction of LoD2 building models from 2D and
3D data for large-scale urban environments. Traditional methods involve the use
of LiDAR point clouds, but due to high costs and long intervals associated with
acquiring such data for rapidly developing areas, researchers have started
exploring the use of point clouds generated from (oblique) aerial images.
However, using such point clouds for traditional plane detection-based methods
can result in significant errors and introduce noise into the reconstructed
building models. To address this, this paper presents a method for extracting
rooflines from true orthophotos using line detection for the reconstruction of
building models at the LoD2 level. The approach is able to extract relatively
complete rooflines without the need for pre-labeled training data or
pre-trained models. These lines can directly be used in the LoD2 building model
reconstruction process. The method is superior to existing plane
detection-based methods and state-of-the-art deep learning methods in terms of
the accuracy and completeness of the reconstructed building. Our source code is
available at https://github.com/tudelft3d/Roofline-extraction-from-orthophotos.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 10:23:08 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Gao",
"Weixiao",
""
],
[
"Peters",
"Ravi",
""
],
[
"Stoter",
"Jantien",
""
]
]
| new_dataset | 0.995545 |
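The roofline-extraction entry above relies on unsupervised line detection in true orthophotos. The snippet below is a generic line-detection baseline using OpenCV's probabilistic Hough transform, shown only to make the idea concrete; the thresholds are arbitrary and this is not the paper's detector.

```python
# Generic line detection on an orthophoto tile with OpenCV (illustrative only).
import cv2
import numpy as np

img = cv2.imread("orthophoto_tile.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                 # edge map feeding the Hough step

lines = cv2.HoughLinesP(
    edges,
    rho=1, theta=np.pi / 180, threshold=80,     # accumulator resolution / votes
    minLineLength=30, maxLineGap=5,             # drop short or broken segments
)

segments = [] if lines is None else [tuple(l[0]) for l in lines]
print(f"{len(segments)} candidate roofline segments")
# Each segment is (x1, y1, x2, y2) in pixel coordinates of the orthophoto tile.
```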
2310.01089 | Jianan Zhao | Jianan Zhao, Le Zhuo, Yikang Shen, Meng Qu, Kai Liu, Michael
Bronstein, Zhaocheng Zhu, Jian Tang | GraphText: Graph Reasoning in Text Space | Preprint. Work in progress | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have gained the ability to assimilate human
knowledge and facilitate natural language interactions with both humans and
other LLMs. However, despite their impressive achievements, LLMs have not made
significant advancements in the realm of graph machine learning. This
limitation arises because graphs encapsulate distinct relational data, making
it challenging to transform them into natural language that LLMs understand. In
this paper, we bridge this gap with a novel framework, GraphText, that
translates graphs into natural language. GraphText derives a graph-syntax tree
for each graph that encapsulates both the node attributes and inter-node
relationships. Traversal of the tree yields a graph text sequence, which is
then processed by an LLM to treat graph tasks as text generation tasks.
Notably, GraphText offers multiple advantages. It introduces training-free
graph reasoning: even without training on graph data, GraphText with ChatGPT
can perform on par with, or even surpass, supervised-trained graph neural
networks through in-context learning (ICL).
Furthermore, GraphText paves the way for interactive graph reasoning, allowing
both humans and LLMs to communicate with the model seamlessly using natural
language. These capabilities underscore the vast, yet-to-be-explored potential
of LLMs in the domain of graph machine learning.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 11:03:57 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhao",
"Jianan",
""
],
[
"Zhuo",
"Le",
""
],
[
"Shen",
"Yikang",
""
],
[
"Qu",
"Meng",
""
],
[
"Liu",
"Kai",
""
],
[
"Bronstein",
"Michael",
""
],
[
"Zhu",
"Zhaocheng",
""
],
[
"Tang",
"Jian",
""
]
]
| new_dataset | 0.997592 |
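The GraphText entry above turns a graph into natural language by traversing a graph-syntax tree over node attributes and relations. The helper below is a much simpler flattening of a node's neighborhood into text, included only to make the graph-to-text idea concrete; it is not the paper's graph-syntax-tree construction.

```python
# Simplistic graph-to-text flattening of a node's neighborhood (illustrative).
import networkx as nx

def node_to_text(g: nx.Graph, node) -> str:
    attrs = ", ".join(f"{k}={v}" for k, v in g.nodes[node].items()) or "no attributes"
    neighbor_parts = []
    for nbr in g.neighbors(node):
        nbr_attrs = ", ".join(f"{k}={v}" for k, v in g.nodes[nbr].items())
        neighbor_parts.append(f"{nbr} ({nbr_attrs})")
    neighbors = "; ".join(neighbor_parts) or "no neighbors"
    return f"Node {node} has {attrs}. It is connected to: {neighbors}."

g = nx.Graph()
g.add_node("p1", label="paper", topic="graphs")
g.add_node("a1", label="author")
g.add_edge("p1", "a1")

# The resulting text can be placed in an LLM prompt to pose a node-level task.
print(node_to_text(g, "p1"))
```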
2310.01142 | Viswesh N | Viswesh N, Kaushal Jadhav, Avi Amalanshu, Bratin Mondal, Sabaris
Waran, Om Sadhwani, Apoorv Kumar, Debashish Chakravarty | [Re] CLRNet: Cross Layer Refinement Network for Lane Detection | 17 pages | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | The following work is a reproducibility report for CLRNet: Cross Layer
Refinement Network for Lane Detection. The basic code was made available by the
author. The paper proposes a novel Cross Layer Refinement Network to utilize
both high- and low-level features for lane detection. The authors assert that
the proposed technique sets a new state of the art on three lane-detection
benchmarks.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 12:31:10 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"N",
"Viswesh",
""
],
[
"Jadhav",
"Kaushal",
""
],
[
"Amalanshu",
"Avi",
""
],
[
"Mondal",
"Bratin",
""
],
[
"Waran",
"Sabaris",
""
],
[
"Sadhwani",
"Om",
""
],
[
"Kumar",
"Apoorv",
""
],
[
"Chakravarty",
"Debashish",
""
]
]
| new_dataset | 0.99692 |
2310.01146 | Andreea Iana | Andreea Iana, Goran Glava\v{s}, Heiko Paulheim | NewsRecLib: A PyTorch-Lightning Library for Neural News Recommendation | Accepted at the 2023 Conference on Empirical Methods in Natural
Language Processing (EMNLP 2023) | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | NewsRecLib is an open-source library based on Pytorch-Lightning and Hydra
developed for training and evaluating neural news recommendation models. The
foremost goals of NewsRecLib are to promote reproducible research and rigorous
experimental evaluation by (i) providing a unified and highly configurable
framework for exhaustive experimental studies and (ii) enabling a thorough
analysis of the performance contribution of different model architecture
components and training regimes. NewsRecLib is highly modular, allows
specifying experiments in a single configuration file, and includes extensive
logging facilities. Moreover, NewsRecLib provides out-of-the-box
implementations of several prominent neural models, training methods, standard
evaluation benchmarks, and evaluation metrics for news recommendation.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 12:33:01 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Iana",
"Andreea",
""
],
[
"Glavaš",
"Goran",
""
],
[
"Paulheim",
"Heiko",
""
]
]
| new_dataset | 0.996596 |
2310.01160 | Petar Durdevic | Petar Durdevic and Shaobao Li and Daniel Ortiz-Arroyo | Design, Modelling and Control of an Amphibious Quad-Rotor for Pipeline
Inspection | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Regular inspections are crucial to maintaining waste-water pipelines in good
condition. The challenge is that inside a pipeline the space is narrow and may
have a complex structure. The conventional methods that use pipe robots with
heavy cables are expensive, time-consuming, and difficult to operate. In this
work, we develop an amphibious system that combines a quad-copter with a
surface vehicle, creating a hybrid unmanned aerial floating vehicle (HUAFV).
Nonlinear dynamics of the HUAFV are modeled based on the dynamic models of both
operating modes. The model is validated through experiments and simulations. A
PI controller designed and tuned on the developed model is implemented on a
prototype platform. Our experiments demonstrate the effectiveness of the new
HUAFV's modeling and design.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 12:45:19 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Durdevic",
"Petar",
""
],
[
"Li",
"Shaobao",
""
],
[
"Ortiz-Arroyo",
"Daniel",
""
]
]
| new_dataset | 0.988678 |
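The HUAFV entry above closes the loop with a PI controller tuned on the identified model. A discrete-time PI loop in its simplest form looks like the sketch below; the gains, sample time, and the `read_speed`/`apply_thrust` hooks are placeholders, not values from the paper.

```python
# Minimal discrete-time PI controller (gains and I/O hooks are placeholders).
class PIController:
    def __init__(self, kp: float, ki: float, dt: float, u_min: float, u_max: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Saturate and apply simple anti-windup by undoing the integration at limits.
        if u > self.u_max or u < self.u_min:
            self.integral -= error * self.dt
            u = max(self.u_min, min(self.u_max, u))
        return u

# Usage sketch inside a control loop running every dt seconds:
# pi = PIController(kp=1.2, ki=0.4, dt=0.02, u_min=0.0, u_max=1.0)
# u = pi.step(setpoint=0.5, measurement=read_speed()); apply_thrust(u)
```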
2310.01208 | Zongxi Li | Zongxi Li, Xianming Li, Yuzhang Liu, Haoran Xie, Jing Li, Fu-lee Wang,
Qing Li, Xiaoqin Zhong | Label Supervised LLaMA Finetuning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent success of Large Language Models (LLMs) has gained significant
attention in both academia and industry. Substantial efforts have been made to
enhance the zero- and few-shot generalization capabilities of open-source LLMs
through finetuning. Currently, the prevailing approach is instruction-tuning,
which trains LLMs to complete real-world tasks by generating responses guided
by natural language instructions. It is worth noticing that such an approach
may underperform in sequence and token classification tasks. Unlike text
generation tasks, classification tasks have a limited label space, where
precise label prediction is more appreciated than generating diverse and
human-like responses. Prior research has unveiled that instruction-tuned LLMs
cannot outperform BERT, prompting us to explore the potential of leveraging
latent representations from LLMs for supervised label prediction. In this
paper, we introduce a label-supervised adaptation for LLMs, which aims to
finetune the model with discriminant labels. We evaluate this approach with
Label Supervised LLaMA (LS-LLaMA), based on LLaMA-2-7B, a relatively
small-scale LLM that can be finetuned on a single GeForce RTX 4090 GPU. We
extract latent representations from the final LLaMA layer and project them into
the label space to compute the cross-entropy loss. The model is finetuned by
Low-Rank Adaptation (LoRA) to minimize this loss. Remarkably, without intricate
prompt engineering or external knowledge, LS-LLaMA substantially outperforms
LLMs ten times its size and demonstrates consistent improvements
compared to robust baselines like BERT-Large and RoBERTa-Large in text
classification. Moreover, by removing the causal mask from decoders, LS-unLLaMA
achieves the state-of-the-art performance in named entity recognition (NER).
Our work will shed light on a novel approach to adapting LLMs for various
downstream tasks.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 13:53:03 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Li",
"Zongxi",
""
],
[
"Li",
"Xianming",
""
],
[
"Liu",
"Yuzhang",
""
],
[
"Xie",
"Haoran",
""
],
[
"Li",
"Jing",
""
],
[
"Wang",
"Fu-lee",
""
],
[
"Li",
"Qing",
""
],
[
"Zhong",
"Xiaoqin",
""
]
]
| new_dataset | 0.99165 |
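The LS-LLaMA entry above pools latent representations from the final decoder layer, projects them into the label space, and trains with cross-entropy under LoRA. The sketch below shows that general recipe with Hugging Face transformers and peft; the small `gpt2` stand-in backbone, the pooling choice, and the hyperparameters are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of label-supervised finetuning: pool a decoder LM's final hidden states,
# project to label logits, and train with cross-entropy under LoRA.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"                      # stand-in; the paper uses LLaMA-2-7B
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

backbone = AutoModel.from_pretrained(model_name)
hidden_size = backbone.config.hidden_size
backbone = get_peft_model(backbone, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05))

num_labels = 4
head = nn.Linear(hidden_size, num_labels)
loss_fn = nn.CrossEntropyLoss()

def forward_loss(texts, labels):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    hidden = backbone(**batch).last_hidden_state               # (B, T, H)
    last = batch["attention_mask"].sum(dim=1) - 1               # last non-pad token
    pooled = hidden[torch.arange(hidden.size(0)), last]         # (B, H)
    logits = head(pooled)
    return loss_fn(logits, torch.tensor(labels))

print(forward_loss(["great movie", "terrible plot"], [1, 0]))
```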
2310.01230 | Marcello Cellina | Marcello Cellina, Silvia Strada and Sergio Matteo Savaresi | Vehicle Fuel Consumption Virtual Sensing from GNSS and IMU Measurements | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a vehicle-independent, non-intrusive, and light
monitoring system for accurately measuring fuel consumption in road vehicles
from longitudinal speed and acceleration derived continuously in time from GNSS
and IMU sensors mounted inside the vehicle. In parallel to boosting the
transition to zero-carbon cars, there is an increasing interest in low-cost
instruments for precise measurement of the environmental impact of the many
internal combustion engine vehicles still in circulation. The main contribution
of this work is the design and comparison of two innovative black-box
algorithms, one based on a reduced complexity physics modeling while the other
relying on a feedforward neural network for black-box fuel consumption
estimation using only velocity and acceleration measurements. Based on suitable
metrics, the developed algorithms outperform the best state-of-the-art
approach in both instantaneous and integral fuel consumption
estimation, with errors smaller than 1% with respect to the fuel flow ground
truth. The data used for model identification, testing, and experimental
validation is composed of GNSS velocity and IMU acceleration measurements
collected during several trips using a diesel fuel vehicle on different roads,
in different seasons, and with varying numbers of passengers. Compared to
built-in vehicle monitoring systems, this methodology is not customized, uses
off-the-shelf sensors, and is based on two simple algorithms that have been
validated offline and could be easily implemented in a real-time environment.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 14:20:00 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Cellina",
"Marcello",
""
],
[
"Strada",
"Silvia",
""
],
[
"Savaresi",
"Sergio Matteo",
""
]
]
| new_dataset | 0.993002 |
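The fuel-consumption entry above includes a feedforward neural network that maps longitudinal speed and acceleration to instantaneous fuel flow. A minimal PyTorch version of that black-box mapping is sketched below; the network width, window length, and training details are placeholder choices, not the paper's identified model.

```python
# Minimal feedforward regressor from (speed, acceleration) windows to fuel flow.
import torch
import torch.nn as nn

WINDOW = 10          # number of consecutive (v, a) samples fed to the network
model = nn.Sequential(
    nn.Linear(2 * WINDOW, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),            # instantaneous fuel flow estimate
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(batch_va, batch_fuel):
    """batch_va: (B, 2*WINDOW) stacked speed/acceleration; batch_fuel: (B, 1)."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch_va), batch_fuel)
    loss.backward()
    optimizer.step()
    return loss.item()

# Integral consumption over a trip is then the sum of predictions times the
# sampling period, which is what the integral-error figure in the entry refers to.
va = torch.randn(32, 2 * WINDOW)
fuel = torch.rand(32, 1)
print(train_step(va, fuel))
```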
2310.01235 | Patrick Pfreundschuh | Patrick Pfreundschuh, Helen Oleynikova, Cesar Cadena, Roland Siegwart,
Olov Andersson | COIN-LIO: Complementary Intensity-Augmented LiDAR Inertial Odometry | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present COIN-LIO, a LiDAR Inertial Odometry pipeline that tightly couples
information from LiDAR intensity with geometry-based point cloud registration.
The focus of our work is to improve the robustness of LiDAR-inertial odometry
in geometrically degenerate scenarios, like tunnels or flat fields. We project
LiDAR intensity returns into an intensity image, and propose an image
processing pipeline that produces filtered images with improved brightness
consistency within the image as well as across different scenes. To effectively
leverage intensity as an additional modality, we present a novel feature
selection scheme that detects uninformative directions in the point cloud
registration and explicitly selects patches with complementary image
information. Photometric error minimization in the image patches is then fused
with inertial measurements and point-to-plane registration in an iterated
Extended Kalman Filter. The proposed approach improves accuracy and robustness
on a public dataset. We additionally publish a new dataset, that captures five
real-world environments in challenging, geometrically degenerate scenes. By
using the additional photometric information, our approach shows drastically
improved robustness against geometric degeneracy in environments where all
compared baseline approaches fail.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 14:24:38 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Pfreundschuh",
"Patrick",
""
],
[
"Oleynikova",
"Helen",
""
],
[
"Cadena",
"Cesar",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Andersson",
"Olov",
""
]
]
| new_dataset | 0.999627 |
2310.01271 | Yiran Hu | Xue Zongyue, Liu Huanghai, Hu Yiran, Kong Kangle, Wang Chenlu, Liu Yun
and Shen Weixing | LEEC: A Legal Element Extraction Dataset with an Extensive
Domain-Specific Label System | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a pivotal task in natural language processing, element extraction has
gained significance in the legal domain. Extracting legal elements from
judicial documents helps enhance the interpretative and analytical capacities of
legal cases, thereby facilitating a wide array of downstream applications
in various domains of law. Yet existing element extraction datasets are limited
by their restricted access to legal knowledge and insufficient coverage of
labels. To address this shortfall, we introduce a more comprehensive,
large-scale criminal element extraction dataset, comprising 15,831 judicial
documents and 159 labels. This dataset was constructed through two main steps:
First, designing the label system by our team of legal experts based on prior
legal research which identified critical factors driving and processes
generating sentencing outcomes in criminal cases; Second, employing the legal
knowledge to annotate judicial documents according to the label system and
annotation guideline. The Legal Element ExtraCtion dataset (LEEC) represents
the most extensive and domain-specific legal element extraction dataset for the
Chinese legal system. Leveraging the annotated data, we employed various SOTA
models that validates the applicability of LEEC for Document Event Extraction
(DEE) task. The LEEC dataset is available at https://github.com/THUlawtech/LEEC.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 15:16:31 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zongyue",
"Xue",
""
],
[
"Huanghai",
"Liu",
""
],
[
"Yiran",
"Hu",
""
],
[
"Kangle",
"Kong",
""
],
[
"Chenlu",
"Wang",
""
],
[
"Yun",
"Liu",
""
],
[
"Weixing",
"Shen",
""
]
]
| new_dataset | 0.999863 |
2310.01291 | Jonathan Samuel Lumentut | Jonathan Samuel Lumentut and Kyoung Mu Lee | 3DHR-Co: A Collaborative Test-time Refinement Framework for In-the-Wild
3D Human-Body Reconstruction Task | 12 pages, 7 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The field of 3D human-body reconstruction (abbreviated as 3DHR) that utilizes
parametric pose and shape representations has witnessed significant
advancements in recent years. However, the application of 3DHR techniques to
handle real-world, diverse scenes, known as in-the-wild data, still faces
limitations. The primary challenge is that accurate 3D human pose
ground truth (GT) for in-the-wild scenes is still difficult to obtain due to
various factors. Recent test-time refinement approaches on 3DHR leverage
initial 2D off-the-shelf human keypoints information to support the lack of 3D
supervision on in-the-wild data. However, we observed that additional 2D
supervision alone could cause the overfitting issue on common 3DHR backbones,
making the 3DHR test-time refinement task seem intractable. We answer this
challenge by proposing a strategy that complements 3DHR test-time refinement
work under a collaborative approach. Specifically, we initially apply a
pre-adaptation approach that works by collaborating various 3DHR models in a
single framework to directly improve their initial outputs. This approach is
then further combined with the test-time adaptation work under specific
settings that minimize the overfitting issue to further boost the 3DHR
performance. The whole framework is termed 3DHR-Co, and on the experimental
side, we show that the proposed work can significantly enhance the scores of
common classic 3DHR backbones up to -34 mm pose error suppression, putting them
among the top list on the in-the-wild benchmark data. Such achievement shows
that our approach helps unveil the true potential of the common classic 3DHR
backbones. Based on these findings, we further investigate various settings on
the proposed framework to better elaborate the capability of our collaborative
approach in the 3DHR task.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 15:46:25 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Lumentut",
"Jonathan Samuel",
""
],
[
"Lee",
"Kyoung Mu",
""
]
]
| new_dataset | 0.986797 |
2310.01301 | Bidhayak Goswami | Bidhayak Goswami, K. R. Jayaprakash, Anindya Chatterjee | Short Time Angular Impulse Response of Rayleigh Beams | null | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the dynamics of linear structures, the impulse response function is of
fundamental interest. In some cases one examines the short term response
wherein the disturbance is still local and the boundaries have not yet come
into play, and for such short-time analysis the geometrical extent of the
structure may be taken as unbounded. Here we examine the response of slender
beams to angular impulses. The Euler-Bernoulli model, which does not include
rotary inertia of cross sections, predicts an unphysical and unbounded initial
rotation at the point of application. A finite length Euler-Bernoulli beam,
when modelled using finite elements, predicts a mesh-dependent response that
shows fast large-amplitude oscillations setting in very quickly. The simplest
introduction of rotary inertia yields the Rayleigh beam model, which has more
reasonable behaviour including a finite wave speed at all frequencies. If a
Rayleigh beam is given an impulsive moment at a location away from its
boundaries, then the predicted behaviour has an instantaneous finite jump in
local slope or rotation, followed by smooth evolution of the slope for a finite
time interval until reflections arrive from the boundary, causing subsequent
slope discontinuities in time. We present a detailed study of the angular
impulse response of a simply supported Rayleigh beam, starting with dimensional
analysis, followed by modal expansion including all natural frequencies,
culminating with an asymptotic formula for the short-time response. The
asymptotic formula is obtained by breaking the series solution into two parts
to be treated independently term by term, and leads to a polynomial in time.
The polynomial matches the response from refined finite element (FE)
simulations.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 16:02:12 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Goswami",
"Bidhayak",
""
],
[
"Jayaprakash",
"K. R.",
""
],
[
"Chatterjee",
"Anindya",
""
]
]
| new_dataset | 0.991091 |
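The Rayleigh-beam entry above is built on the rotary-inertia correction to the Euler-Bernoulli model. For reference, a standard form of the uniform Rayleigh beam equation is given below in LaTeX; the impulse-forcing comment is a common modeling convention rather than a formula quoted from the paper.

```latex
% Transverse deflection w(x,t) of a uniform Rayleigh beam:
% EI = bending stiffness, \rho A = mass per unit length, \rho I = rotary inertia term.
\[
  EI\,\frac{\partial^{4} w}{\partial x^{4}}
  + \rho A\,\frac{\partial^{2} w}{\partial t^{2}}
  - \rho I\,\frac{\partial^{4} w}{\partial x^{2}\,\partial t^{2}}
  = q(x,t)
\]
% Dropping the \rho I term recovers the Euler-Bernoulli beam. A concentrated
% angular impulse of magnitude M_0 applied at x_0 and t = 0 is commonly modeled
% as q(x,t) = M_0\,\delta'(x - x_0)\,\delta(t) in the distributional sense.
```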
2310.01336 | Ahmad Houraniah | Ahmad Houraniah, H. Fatih Ugurdag, Furkan Aydin | JugglePAC: A Pipelined Accumulation Circuit | 9 pages, 6 figures | null | null | null | cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Summing a set of numbers, namely, "Accumulation," is a subtask within many
computational tasks. If the numbers to sum arrive non-stop in back-to-back
clock cycles at high clock frequencies, summing them without allowing them to
pile up can be quite a challenge, that is, when the latency of addition (i.e.,
summing two numbers) is longer than one clock cycle, which is always the case
for floating-point numbers. This could also be the case for integer summations
with high clock frequencies. In the case of floating-point numbers, this is
handled by pipelining the adder, but that does not solve all problems. The
challenges include optimization of speed, area, and latency. As well as the
adaptability of the design to different application requirements, such as the
ability to handle variable-size subsequent data sets with no time gap in
between and with results produced in the input-order. All these factors make
designing an efficient floating-point accumulator a non-trivial problem.
Integer accumulation is a relatively simpler problem, where high frequencies
can be achieved by using carry-save tree adders. This can then be further
improved by efficient resource-sharing. In this paper, we present two fast and
area-efficient accumulation circuits, JugglePAC and INTAC. JugglePAC is
tailored for floating-point reduction operations (such as accumulation) and
offers significant advantages with respect to the literature in terms of speed,
area, and adaptability to various application requirements. INTAC is designed
for fast integer accumulation. Using carry-save adders and resource-sharing, it
can achieve very high clock frequencies while maintaining a low area
complexity.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 16:53:00 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Houraniah",
"Ahmad",
""
],
[
"Ugurdag",
"H. Fatih",
""
],
[
"Aydin",
"Furkan",
""
]
]
| new_dataset | 0.994804 |
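The accumulation-circuit entry above deals with adders whose latency exceeds one clock cycle. A common baseline (not JugglePAC's or INTAC's actual scheme) is to keep one partial sum per pipeline slot and fold them at the end; the cycle-level Python model below only illustrates why that works when one operand arrives per cycle.

```python
# Cycle-level model of accumulating one value per cycle through an adder whose
# latency is L cycles, using L interleaved partial sums (generic baseline only).
def pipelined_accumulate(values, latency=4):
    partial = [0.0] * latency      # one running partial sum per pipeline slot
    for cycle, v in enumerate(values):
        slot = cycle % latency     # each slot is free again after `latency` cycles
        partial[slot] += v
    return sum(partial)            # final reduction of the partial sums

data = [float(i) for i in range(1, 101)]
print(pipelined_accumulate(data), sum(data))   # both 5050.0
```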
2310.01358 | Shu Zhao | Shu Zhao, Huijuan Xu | NEUCORE: Neural Concept Reasoning for Composed Image Retrieval | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Composed image retrieval which combines a reference image and a text modifier
to identify the desired target image is a challenging task, and requires the
model to comprehend both vision and language modalities and their interactions.
Existing approaches focus on holistic multi-modal interaction modeling, and
ignore the composed and complementary property between the reference image and
text modifier. In order to better utilize the complementarity of multi-modal
inputs for effective information fusion and retrieval, we move the multi-modal
understanding to fine-granularity at concept-level, and learn the multi-modal
concept alignment to identify the visual location in reference or target images
corresponding to the text modifier. To this end, we propose a NEUral COncept
REasoning (NEUCORE) model which incorporates multi-modal concept alignment and
progressive multimodal fusion over aligned concepts. Specifically, considering
that text modifier may refer to semantic concepts not existing in the reference
image and requiring to be added into the target image, we learn the multi-modal
concept alignment between the text modifier and the concatenation of reference
and target images, under multiple-instance learning framework with image and
sentence level weak supervision. Furthermore, based on aligned concepts, to
form discriminative fusion features of the input modalities for accurate target
image retrieval, we propose a progressive fusion strategy with unified
execution architecture instantiated by the attended language semantic concepts.
Our proposed approach is evaluated on three datasets and achieves
state-of-the-art results.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:21:25 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhao",
"Shu",
""
],
[
"Xu",
"Huijuan",
""
]
]
| new_dataset | 0.997953 |
2310.01361 | Lirui Wang | Lirui Wang, Yiyang Ling, Zhecheng Yuan, Mohit Shridhar, Chen Bao,
Yuzhe Qin, Bailin Wang, Huazhe Xu, Xiaolong Wang | GenSim: Generating Robotic Simulation Tasks via Large Language Models | See our project website (https://liruiw.github.io/gensim), demo
(https://huggingface.co/spaces/Gen-Sim/Gen-Sim), and code
(https://github.com/liruiw/GenSim) for visualizations and open-source models
and datasets | null | null | null | cs.LG cs.CL cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collecting large amounts of real-world interaction data to train general
robotic policies is often prohibitively expensive, thus motivating the use of
simulation data. However, existing methods for data generation have generally
focused on scene-level diversity (e.g., object instances and poses) rather than
task-level diversity, due to the human effort required to come up with and
verify novel tasks. This has made it challenging for policies trained on
simulation data to demonstrate significant task-level generalization. In this
paper, we propose to automatically generate rich simulation environments and
expert demonstrations by exploiting a large language models' (LLM) grounding
and coding ability. Our approach, dubbed GenSim, has two modes: goal-directed
generation, wherein a target task is given to the LLM and the LLM proposes a
task curriculum to solve the target task, and exploratory generation, wherein
the LLM bootstraps from previous tasks and iteratively proposes novel tasks
that would be helpful in solving more complex tasks. We use GPT4 to expand the
existing benchmark by ten times to over 100 tasks, on which we conduct
supervised finetuning and evaluate several LLMs including finetuned GPTs and
Code Llama on code generation for robotic simulation tasks. Furthermore, we
observe that LLMs-generated simulation programs can enhance task-level
generalization significantly when used for multitask policy training. We
further find that with minimal sim-to-real adaptation, the multitask policies
pretrained on GPT4-generated simulation tasks exhibit stronger transfer to
unseen long-horizon tasks in the real world and outperform baselines by 25%.
See the project website (https://liruiw.github.io/gensim) for code, demos, and
videos.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:23:48 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wang",
"Lirui",
""
],
[
"Ling",
"Yiyang",
""
],
[
"Yuan",
"Zhecheng",
""
],
[
"Shridhar",
"Mohit",
""
],
[
"Bao",
"Chen",
""
],
[
"Qin",
"Yuzhe",
""
],
[
"Wang",
"Bailin",
""
],
[
"Xu",
"Huazhe",
""
],
[
"Wang",
"Xiaolong",
""
]
]
| new_dataset | 0.997041 |
2310.01386 | Jen-Tse Huang | Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren,
Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using
PsychoBench | 15 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have recently showcased their remarkable
capacities, not only in natural language processing tasks but also across
diverse domains such as clinical medicine, legal consultation, and education.
LLMs become more than mere applications, evolving into assistants capable of
addressing diverse user requests. This narrows the distinction between human
beings and artificial intelligence agents, raising intriguing questions
regarding the potential manifestation of personalities, temperaments, and
emotions within LLMs. In this paper, we propose a framework, PsychoBench, for
evaluating diverse psychological aspects of LLMs. Comprising thirteen scales
commonly used in clinical psychology, PsychoBench further classifies these
scales into four distinct categories: personality traits, interpersonal
relationships, motivational tests, and emotional abilities. Our study examines
five popular models, namely text-davinci-003, ChatGPT, GPT-4,
LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to
bypass the safety alignment protocols and test the intrinsic natures of LLMs.
We have made PsychoBench openly accessible via
https://github.com/CUHK-ARISE/PsychoBench.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:46:09 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Huang",
"Jen-tse",
""
],
[
"Wang",
"Wenxuan",
""
],
[
"Li",
"Eric John",
""
],
[
"Lam",
"Man Ho",
""
],
[
"Ren",
"Shujie",
""
],
[
"Yuan",
"Youliang",
""
],
[
"Jiao",
"Wenxiang",
""
],
[
"Tu",
"Zhaopeng",
""
],
[
"Lyu",
"Michael R.",
""
]
]
| new_dataset | 0.982409 |
2310.01412 | Zhenhua Xu | Zhenhua Xu, Yujia Zhang, Enze Xie, Zhen Zhao, Yong Guo, Kenneth K.Y.
Wong, Zhenguo Li, Hengshuang Zhao | DriveGPT4: Interpretable End-to-end Autonomous Driving via Large
Language Model | The project page is available at
https://tonyxuqaq.github.io/projects/DriveGPT4/ | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past decade, autonomous driving has experienced rapid development in
both academia and industry. However, its limited interpretability remains a
significant unsolved problem, severely hindering autonomous vehicle
commercialization and further development. Previous approaches utilizing small
language models have failed to address this issue due to their lack of
flexibility, generalization ability, and robustness. Recently, multimodal large
language models (LLMs) have gained considerable attention from the research
community for their capability to process and reason non-text data (e.g.,
images and videos) by text. In this paper, we present DriveGPT4, an
interpretable end-to-end autonomous driving system utilizing LLMs. DriveGPT4 is
capable of interpreting vehicle actions and providing corresponding reasoning,
as well as answering diverse questions posed by human users for enhanced
interaction. Additionally, DriveGPT4 predicts vehicle low-level control signals
in an end-to-end fashion. These capabilities stem from a customized visual
instruction tuning dataset specifically designed for autonomous driving. To the
best of our knowledge, DriveGPT4 is the first work focusing on interpretable
end-to-end autonomous driving. When evaluated on multiple tasks alongside
conventional methods and video understanding LLMs, DriveGPT4 demonstrates
superior qualitative and quantitative performance. Additionally, DriveGPT4 can
be generalized in a zero-shot fashion to accommodate more unseen scenarios. The
project page is available at https://tonyxuqaq.github.io/projects/DriveGPT4/ .
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:59:52 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Xu",
"Zhenhua",
""
],
[
"Zhang",
"Yujia",
""
],
[
"Xie",
"Enze",
""
],
[
"Zhao",
"Zhen",
""
],
[
"Guo",
"Yong",
""
],
[
"Wong",
"Kenneth K. Y.",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Zhao",
"Hengshuang",
""
]
]
| new_dataset | 0.96543 |
2201.01940 | Mohsen Amini Salehi | Chavit Denninnart, Mohsen Amini Salehi | SMSE: A Serverless Platform for Multimedia Cloud Systems | Accepted in the Journal of Concurrency and Computation: Practice and
Experience (CCPE) | null | null | null | cs.DC cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Along with the rise of domain-specific computing (ASICs hardware) and
domain-specific programming languages, we envision that the next step is the
emergence of domain-specific cloud platforms. Developing such platforms for
popular applications in a serverless manner not only offers higher efficiency
to both users and providers, but also expedites application development cycles
and enables users to become solution-oriented and focus on their specific
business logic. Considering multimedia streaming as one of the
most trendy applications in the IT industry, the goal of this study is to
develop SMSE, the first domain-specific serverless platform for multimedia
streaming. SMSE democratizes multimedia service development by enabling
content providers (or even end-users) to rapidly develop their desired
functionalities on their multimedia content. Upon developing SMSE, the next
goal of this study is to deal with its efficiency challenges and develop a
function container provisioning method that can efficiently utilize cloud
resources and improve the users' QoS. In particular, we develop a dynamic
method that provisions durable or ephemeral containers depending on the
spatiotemporal and data-dependency characteristics of the functions. Evaluating
the prototype implementation of SMSE under real-world settings demonstrates its
capability to reduce both the containerization overhead and the makespan of
serving multimedia processing functions (by up to 30%) compared to the
function provisioning methods used in general-purpose serverless cloud systems.
| [
{
"version": "v1",
"created": "Thu, 6 Jan 2022 06:53:07 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 05:09:55 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Denninnart",
"Chavit",
""
],
[
"Salehi",
"Mohsen Amini",
""
]
]
| new_dataset | 0.996215 |
2203.01974 | Allan Wang | Allan Wang, Abhijat Biswas, Henny Admoni, Aaron Steinfeld | Towards Rich, Portable, and Large-Scale Pedestrian Data Collection | IROS 2022 Workshop paper (Evaluating Motion Planning Performance:
Metrics, Tools, Datasets, and Experimental Design) | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Recently, pedestrian behavior research has shifted towards machine learning
based methods and converged on the topic of modeling pedestrian interactions.
For this, a large-scale dataset that contains rich information is needed. We
propose a data collection system that is portable, which facilitates accessible
large-scale data collection in diverse environments. We also couple the system
with a semi-autonomous labeling pipeline for fast trajectory label production.
We further introduce the first batch of the dataset from the ongoing data
collection effort -- the TBD pedestrian dataset. Compared with existing
pedestrian datasets, our dataset contains three components: human-verified
labels grounded in the metric space, a combination of top-down and perspective
views, and naturalistic human behavior in the presence of a socially
appropriate "robot".
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2022 19:28:10 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 12:29:29 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Wang",
"Allan",
""
],
[
"Biswas",
"Abhijat",
""
],
[
"Admoni",
"Henny",
""
],
[
"Steinfeld",
"Aaron",
""
]
]
| new_dataset | 0.999067 |
2203.09337 | Matthias Mayer | Matthias Mayer, Jonathan K\"ulz, and Matthias Althoff | CoBRA: A Composable Benchmark for Robotics Applications | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, selecting an optimal robot, its base pose, and trajectory for a given
task is mainly done by human expertise or trial and error. To
evaluate automatic approaches to this combined optimization problem, we
introduce a benchmark suite encompassing a unified format for robots,
environments, and task descriptions. Our benchmark suite is especially useful
for modular robots, where the multitude of robots that can be assembled creates
a host of additional parameters to optimize. We include tasks such as machine
tending and welding in completely synthetic environments and 3D scans of
real-world machine shops. The benchmark suite defines these optimization
problems and facilitates the comparison of solution algorithms. All benchmarks
are accessible through cobra.cps.cit.tum.de, a platform to conveniently share,
reference, and compare tasks, robot models, and solutions.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2022 14:13:19 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 17:03:54 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 11:45:45 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Mayer",
"Matthias",
""
],
[
"Külz",
"Jonathan",
""
],
[
"Althoff",
"Matthias",
""
]
]
| new_dataset | 0.999779 |
2207.10793 | Prashant Jayaprakash Nair | Swamit Tannu and Prashant J. Nair | The Dirty Secret of SSDs: Embodied Carbon | null | Energy Informatics Review (Volume 3 Issue 3, October 2023) | null | null | cs.AR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scalable Solid-State Drives (SSDs) have ushered in a transformative era in
data storage and accessibility, spanning both data centers and portable
devices. However, the strides made in scaling this technology can bear
significant environmental consequences. On a global scale, a notable portion of
semiconductor manufacturing relies on electricity derived from coal and natural
gas sources. A striking example of this is the manufacturing process for a
single Gigabyte of Flash memory, which emits approximately 0.16 Kg of CO2 - a
considerable fraction of the total carbon emissions attributed to the system.
Remarkably, the manufacturing of storage devices alone contributed to an
estimated 20 million metric tonnes of CO2 emissions in the year 2021.
In light of these environmental concerns, this paper delves into an analysis
of the sustainability trade-offs inherent in Solid-State Drives (SSDs) when
compared to traditional Hard Disk Drives (HDDs). Moreover, this study proposes
methodologies to gauge the embodied carbon costs associated with storage
systems effectively. The research encompasses four key strategies to enhance
the sustainability of storage systems. In summation, this paper critically
addresses the embodied carbon issues associated with SSDs, comparing them with
HDDs, and proposes a comprehensive framework of strategies to enhance the
sustainability of storage systems.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2022 12:45:11 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 22:07:19 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Tannu",
"Swamit",
""
],
[
"Nair",
"Prashant J.",
""
]
]
| new_dataset | 0.994114 |
2209.14007 | Siqi Tan | Siqi Tan, Xiaoya Zhang, Jingyao Li, Ruitao Jing, Mufan Zhao, Yang Liu,
and Quan Quan | OA-Bug: An Olfactory-Auditory Augmented Bug Algorithm for Swarm Robots
in a Denied Environment | 7 pages, 6 figures, accepted by 2023 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS) | null | null | null | cs.RO cs.MA | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Searching in a denied environment is challenging for swarm robots as no
assistance from GNSS, mapping, data sharing, and central processing is allowed.
However, using olfactory and auditory signals to cooperate like animals could
be an important way to improve the collaboration of swarm robots. In this
paper, an Olfactory-Auditory augmented Bug algorithm (OA-Bug) is proposed for a
swarm of autonomous robots to explore a denied environment. A simulation
environment is built to measure the performance of OA-Bug. The coverage of the
search task can reach 96.93% using OA-Bug, which is significantly improved
compared with a similar algorithm, SGBA. Furthermore, experiments are conducted
on real swarm robots to prove the validity of OA-Bug. Results show that OA-Bug
can improve the performance of swarm robots in a denied environment.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2022 11:29:28 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2022 13:57:27 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Mar 2023 04:13:48 GMT"
},
{
"version": "v4",
"created": "Fri, 29 Sep 2023 13:49:37 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Tan",
"Siqi",
""
],
[
"Zhang",
"Xiaoya",
""
],
[
"Li",
"Jingyao",
""
],
[
"Jing",
"Ruitao",
""
],
[
"Zhao",
"Mufan",
""
],
[
"Liu",
"Yang",
""
],
[
"Quan",
"Quan",
""
]
]
| new_dataset | 0.999573 |
2210.11983 | Jonas Bundschuh | Jonas Bundschuh, M. Greta Ruppert, Yvonne Sp\"ack-Leigsnering | Pyrit: A Finite Element Based Field Simulation Software Written in
Python | 6 pages, 6 figures, Published in COMPEL - The international journal
for computation and mathematics in electrical and electronic engineering.
This preprint offers a more precise formatting and includes software parts | null | 10.1108/COMPEL-01-2023-0013 | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Pyrit is a field simulation software based on the finite element method
written in Python to solve coupled systems of partial differential equations.
It is designed as modular software that is easily modifiable and extendable.
The framework can, therefore, be adapted to various activities, i.e. research,
education and industry collaboration.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2022 14:18:22 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 15:06:31 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Bundschuh",
"Jonas",
""
],
[
"Ruppert",
"M. Greta",
""
],
[
"Späck-Leigsnering",
"Yvonne",
""
]
]
| new_dataset | 0.999001 |
2303.04068 | Maureen Daum | Maureen Daum, Enhao Zhang, Dong He, Stephen Mussmann, Brandon Haynes,
Ranjay Krishna, and Magdalena Balazinska | VOCALExplore: Pay-as-You-Go Video Data Exploration and Model Building
[Technical Report] | null | null | null | null | cs.DB cs.CV cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce VOCALExplore, a system designed to support users in building
domain-specific models over video datasets. VOCALExplore supports interactive
labeling sessions and trains models using user-supplied labels. VOCALExplore
maximizes model quality by automatically deciding how to select samples based
on observed skew in the collected labels. It also selects the optimal video
representations to use when training models by casting feature selection as a
rising bandit problem. Finally, VOCALExplore implements optimizations to
achieve low latency without sacrificing model performance. We demonstrate that
VOCALExplore achieves close to the best possible model quality given candidate
acquisition functions and feature extractors, and it does so with low visible
latency (~1 second per iteration) and no expensive preprocessing.
| [
{
"version": "v1",
"created": "Tue, 7 Mar 2023 17:26:04 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 16:48:18 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jul 2023 20:09:55 GMT"
},
{
"version": "v4",
"created": "Fri, 29 Sep 2023 04:09:55 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Daum",
"Maureen",
""
],
[
"Zhang",
"Enhao",
""
],
[
"He",
"Dong",
""
],
[
"Mussmann",
"Stephen",
""
],
[
"Haynes",
"Brandon",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Balazinska",
"Magdalena",
""
]
]
| new_dataset | 0.999117 |
2303.07960 | Alexander Wolff | Grzegorz Gutowski, Konstanty Junosza-Szaniawski, Felix Klesen,
Pawe{\l} Rz\k{a}\.zewski, Alexander Wolff, Johannes Zink | Coloring and Recognizing Directed Interval Graphs | To appear in Proc. ISAAC 2023 | null | null | null | cs.DM | http://creativecommons.org/licenses/by/4.0/ | A \emph{mixed interval graph} is an interval graph that has, for every pair
of intersecting intervals, either an arc (directed arbitrarily) or an
(undirected) edge. We are particularly interested in scenarios where edges and
arcs are defined by the geometry of intervals. In a proper coloring of a mixed
interval graph $G$, an interval $u$ receives a lower (different) color than an
interval $v$ if $G$ contains arc $(u,v)$ (edge $\{u,v\}$). Coloring of mixed
graphs has applications, for example, in scheduling with precedence
constraints; see a survey by Sotskov [Mathematics, 2020]. For coloring general
mixed interval graphs, we present a $\min \{\omega(G), \lambda(G)+1
\}$-approximation algorithm, where $\omega(G)$ is the size of a largest clique
and $\lambda(G)$ is the length of a longest directed path in $G$. For the
subclass of \emph{bidirectional interval graphs} (introduced recently for an
application in graph drawing), we show that optimal coloring is NP-hard. This
was known for general mixed interval graphs. We introduce a new natural class
of mixed interval graphs, which we call \emph{containment interval graphs}. In
such a graph, there is an arc $(u,v)$ if interval $u$ contains interval $v$,
and there is an edge $\{u,v\}$ if $u$ and $v$ overlap. We show that these
graphs can be recognized in polynomial time, that coloring them with the
minimum number of colors is NP-hard, and that there is a 2-approximation
algorithm for coloring.
| [
{
"version": "v1",
"created": "Tue, 14 Mar 2023 15:04:15 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 20:54:42 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Gutowski",
"Grzegorz",
""
],
[
"Junosza-Szaniawski",
"Konstanty",
""
],
[
"Klesen",
"Felix",
""
],
[
"Rzążewski",
"Paweł",
""
],
[
"Wolff",
"Alexander",
""
],
[
"Zink",
"Johannes",
""
]
]
| new_dataset | 0.951315 |
2303.17057 | Mohammad Askari | Mohammad Askari, Won Dong Shin, Damian Lenherr, William Stewart, Dario
Floreano | Avian-Inspired Claws Enable Robot Perching or Walking | 15 pages, 12 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than
two modalities, i.e., flying and walking or flying and perching. However, being
able to fly, perch, and walk could further improve their usefulness by
expanding their operating envelope. For instance, an aerial robot could fly a
long distance, perch in a high place to survey the surroundings, then walk to
avoid obstacles that could potentially inhibit flight. Birds are capable of
these three tasks, and so offer a practical example of how a robot might be
developed to do the same. In this paper, we present a specialized
avian-inspired claw design to enable UAVs to perch passively or walk. The key
innovation is the combination of a Hoberman linkage leg with Fin Ray claw that
uses the weight of the UAV to wrap the claw around a perch, or hyperextend it
in the opposite direction to form a curved-up shape for stable terrestrial
locomotion. Because the design uses the weight of the vehicle, the
underactuated design is lightweight and low power. With the inclusion of
talons, the 45g claws are capable of holding a 700g UAV to an almost 20-degree
angle on a perch. In scenarios where cluttered environments impede flight and
long mission times are required, such a combination of flying, perching, and
walking is critical.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2023 23:16:10 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 06:44:16 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Askari",
"Mohammad",
""
],
[
"Shin",
"Won Dong",
""
],
[
"Lenherr",
"Damian",
""
],
[
"Stewart",
"William",
""
],
[
"Floreano",
"Dario",
""
]
]
| new_dataset | 0.998499 |
2304.10728 | Shengqian Wang | Shengqian Wang, Amirali Salehi-Abari, Julie Thorpe | PiXi: Password Inspiration by Exploring Information | 16 pages | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Passwords, a first line of defense against unauthorized access, must be
secure and memorable. However, people often struggle to create secure passwords
they can recall. To address this problem, we design Password inspiration by
eXploring information (PiXi), a novel approach to nudge users towards creating
secure passwords. PiXi is the first of its kind that employs a password
creation nudge to support users in the task of generating a unique secure
password themselves. PiXi prompts users to explore unusual information right
before creating a password, to shake them out of their typical habits and
thought processes, and to inspire them to create unique (and therefore
stronger) passwords. PiXi's design aims to create an engaging, interactive, and
effective nudge to improve secure password creation. We conducted a user study
($N=238$) to compare the efficacy of PiXi to typical password creation. Our
findings indicate that PiXi's nudges do influence users' password choices such
that passwords are significantly longer and more secure (less predictable and
guessable).
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2023 03:47:37 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 20:13:07 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Wang",
"Shengqian",
""
],
[
"Salehi-Abari",
"Amirali",
""
],
[
"Thorpe",
"Julie",
""
]
]
| new_dataset | 0.999566 |
2304.12175 | Mason Peterson | Mason B. Peterson, Parker C. Lusk, Jonathan P. How | MOTLEE: Distributed Mobile Multi-Object Tracking with Localization Error
Elimination | 8 pages, 8 figures, accepted to IROS 2023 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present MOTLEE, a distributed mobile multi-object tracking algorithm that
enables a team of robots to collaboratively track moving objects in the
presence of localization error. Existing approaches to distributed tracking
make limiting assumptions regarding the relative spatial relationship of
sensors, including assuming a static sensor network or that perfect
localization is available. Instead, we develop an algorithm based on the
Kalman-Consensus filter for distributed tracking that properly leverages
localization uncertainty in collaborative tracking. Further, our method allows
the team to maintain an accurate understanding of dynamic objects in the
environment by realigning robot frames and incorporating frame alignment
uncertainty into our object tracking formulation. We evaluate our method in
hardware on a team of three mobile ground robots tracking four people. Compared
to previous works that do not account for localization error, we show that
MOTLEE is resilient to localization uncertainties, enabling accurate tracking
in distributed, dynamic settings with mobile tracking sensors.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2023 15:38:07 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 18:00:01 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Peterson",
"Mason B.",
""
],
[
"Lusk",
"Parker C.",
""
],
[
"How",
"Jonathan P.",
""
]
]
| new_dataset | 0.971135 |