Dataset columns, in record order (⌀ = nullable): id (string, 9-10 chars), submitter (string, 2-52 chars, ⌀),
authors (string, 4-6.51k chars), title (string, 4-246 chars), comments (string, 1-523 chars, ⌀),
journal-ref (string, 4-345 chars, ⌀), doi (string, 11-120 chars, ⌀), report-no (string, 2-243 chars, ⌀),
categories (string, 5-98 chars), license (string, 9 classes), abstract (string, 33-3.33k chars),
versions (list), update_date (timestamp[s]), authors_parsed (list), prediction (string, 1 class),
probability (float64, 0.95-1)
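The records below follow the column order given above, one pipe-separated field per column. As a minimal sketch of how such records might be consumed, assume they have been exported to a JSON Lines file with one JSON object per record, keyed by the column names; the filename `arxiv_predictions.jsonl` and the 0.97 probability cutoff are illustrative assumptions, not part of the original data.

```python
import json

def load_records(path):
    """Yield one metadata record (a dict keyed by the columns above) per line."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Keep only records predicted to introduce a new dataset with high confidence.
# The 0.97 cutoff is arbitrary; every record shown below already carries
# prediction == "new_dataset" and probability >= 0.95.
selected = [
    r for r in load_records("arxiv_predictions.jsonl")  # hypothetical export path
    if r.get("prediction") == "new_dataset"
    and float(r.get("probability", 0.0)) >= 0.97
]

for r in selected:
    # Titles in the raw records may contain line breaks; flatten them for display.
    print(r["id"], "-", r["title"].replace("\n", " "))
```

Since every record in this dump is labeled prediction = new_dataset with probability at or above 0.95, the filter above mainly illustrates how the last two columns are meant to be used.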
2307.08487
|
Huachuan Qiu
|
Huachuan Qiu, Shuai Zhang, Anqi Li, Hongliang He, Zhenzhong Lan
|
Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output
Robustness of Large Language Models
|
Code and data are available at
https://github.com/qiuhuachuan/latent-jailbreak
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Considerable research efforts have been devoted to ensuring that large
language models (LLMs) align with human values and generate safe text. However,
an excessive focus on sensitivity to certain topics can compromise the model's
robustness in following instructions, thereby impacting its overall performance
in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily
focused on evaluating the safety of the models without considering their
robustness. In this paper, we propose a benchmark that assesses both the safety
and robustness of LLMs, emphasizing the need for a balanced approach. To
comprehensively study text safety and output robustness, we introduce a latent
jailbreak prompt dataset, in which each prompt embeds a malicious instruction.
Specifically, we instruct the model to complete a regular task, such as
translation, with the text to be translated containing malicious instructions.
To further analyze safety and robustness, we design a hierarchical annotation
framework. We present a systematic analysis of the safety and robustness of
LLMs regarding the position of explicit normal instructions, word replacements
(verbs in explicit normal instructions, target groups in malicious
instructions, cue words for explicit normal instructions), and instruction
replacements (different explicit normal instructions). Our results demonstrate
that current LLMs not only prioritize certain instruction verbs but also
exhibit varying jailbreak rates for different instruction verbs in explicit
normal instructions. Code and data are available at
https://github.com/qiuhuachuan/latent-jailbreak.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 13:49:52 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 07:52:53 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Aug 2023 08:35:28 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Qiu",
"Huachuan",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Li",
"Anqi",
""
],
[
"He",
"Hongliang",
""
],
[
"Lan",
"Zhenzhong",
""
]
] |
new_dataset
| 0.961624 |
2307.15984
|
Zhiyu Pang
|
Zhiyu Pang
|
VATP360: Viewport Adaptive 360-Degree Video Streaming based on Tile
Priority
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
360-degree video is becoming increasingly popular among users. With current
network bandwidth, serving high-resolution 360-degree video to users is quite
difficult. Most prior work has been devoted to the prediction of user
viewports or to tile-based adaptive algorithms. However, it is difficult to
predict user viewports accurately using only information such as the user's
historical viewports or video saliency maps. In this paper, we propose a
viewport-adaptive 360-degree video streaming method based on tile priority
(VATP360), which aims to balance performance and overhead. The
proposed VATP360 consists of three main modules: viewport prediction, tile
priority classification and bitrate allocation. In the viewport prediction
module, object motion trajectory and predicted user's region-of-interest (ROI)
are used to achieve accurate prediction of the user's future viewport. Then,
the predicted viewport and the object motion trajectory are fed into
the proposed tile priority classification algorithm to assign different
priorities to tiles, which would reduce the computational complexity of the
bitrate allocation module. Finally, in the bitrate allocation stage, we
adaptively assign bitrates to tiles of different priorities by reinforcement
learning. Experimental results on publicly available datasets have demonstrated
the effectiveness of the proposed method.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 13:12:40 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Aug 2023 12:45:33 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Pang",
"Zhiyu",
""
]
] |
new_dataset
| 0.984459 |
2308.02559
|
Eric Roberts
|
Eric J Roberts, Tanny Chavez, Alexander Hexemer, Petrus H. Zwart
|
DLSIA: Deep Learning for Scientific Image Analysis
|
10 pages, two column, 9 figures, 1 Supplementary section
| null | null | null |
cs.CV cs.LG hep-ex
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce DLSIA (Deep Learning for Scientific Image Analysis), a
Python-based machine learning library that empowers scientists and researchers
across diverse scientific domains with a range of customizable convolutional
neural network (CNN) architectures for a wide variety of tasks in image
analysis to be used in downstream data processing, or for
experiment-in-the-loop computing scenarios. DLSIA features easy-to-use
architectures such as autoencoders, tunable U-Nets, and parameter-lean
mixed-scale dense networks (MSDNets). Additionally, we introduce sparse
mixed-scale networks (SMSNets), generated using random graphs and sparse
connections. As experimental data continues to grow in scale and complexity,
DLSIA provides accessible CNN construction and abstracts CNN complexities,
allowing scientists to tailor their machine learning approaches, accelerate
discoveries, foster interdisciplinary collaboration, and advance research in
scientific image analysis.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 21:32:41 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Aug 2023 18:03:39 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Roberts",
"Eric J",
""
],
[
"Chavez",
"Tanny",
""
],
[
"Hexemer",
"Alexander",
""
],
[
"Zwart",
"Petrus H.",
""
]
] |
new_dataset
| 0.974244 |
2308.06966
|
Yangning Li
|
Yangning Li, Shirong Ma, Xiaobin Wang, Shen Huang, Chengyue Jiang,
Hai-Tao Zheng, Pengjun Xie, Fei Huang, Yong Jiang
|
EcomGPT: Instruction-tuning Large Language Models with Chain-of-Task
Tasks for E-commerce
|
Initial version of EcomGPT
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, instruction-following Large Language Models (LLMs), represented by
ChatGPT, have exhibited exceptional performance in general Natural Language
Processing (NLP) tasks. However, the unique characteristics of E-commerce data
pose significant challenges to general LLMs. An LLM tailored specifically for
E-commerce scenarios, possessing robust cross-dataset/task generalization
capabilities, is a pressing necessity. To address this issue, in this work we
propose EcomInstruct, the first e-commerce instruction dataset, with a total of
2.5 million instruction examples. EcomInstruct scales up the data size and task
diversity by constructing atomic tasks from basic E-commerce data types, such
as product information and user reviews. Atomic tasks are defined as intermediate
tasks implicitly involved in solving a final task, which we also call
Chain-of-Task tasks. We developed EcomGPT at different parameter scales by
training the backbone model BLOOMZ on EcomInstruct. Benefiting from the
fundamental semantic understanding capabilities acquired from the Chain-of-Task
tasks, EcomGPT exhibits excellent zero-shot generalization capabilities.
Extensive experiments and human evaluations demonstrate that EcomGPT
outperforms ChatGPT in terms of cross-dataset/task generalization on E-commerce
tasks.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 06:49:53 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Aug 2023 04:12:30 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Li",
"Yangning",
""
],
[
"Ma",
"Shirong",
""
],
[
"Wang",
"Xiaobin",
""
],
[
"Huang",
"Shen",
""
],
[
"Jiang",
"Chengyue",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Huang",
"Fei",
""
],
[
"Jiang",
"Yong",
""
]
] |
new_dataset
| 0.998242 |
2308.12238
|
Christian Lenz
|
Christian Lenz, Max Schwarz, Andre Rochow, Bastian Pätzold, Raphael
Memmesheimer, Michael Schreiber, and Sven Behnke
|
NimbRo wins ANA Avatar XPRIZE Immersive Telepresence Competition:
Human-Centric Evaluation and Lessons Learned
|
C. Lenz and M. Schwarz contributed equally. Accepted for
International Journal of Social Robotics (SORO), Springer, to appear 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic avatar systems can enable immersive telepresence with locomotion,
manipulation, and communication capabilities. We present such an avatar system,
based on the key components of immersive 3D visualization and transparent
force-feedback telemanipulation. Our avatar robot features an anthropomorphic
upper body with dexterous hands. The remote human operator drives the arms and
fingers through an exoskeleton-based operator station, which provides force
feedback both at the wrist and for each finger. The robot torso is mounted on a
holonomic base, providing omnidirectional locomotion on flat floors, controlled
using a 3D rudder device. Finally, the robot features a 6D movable head with
stereo cameras, which stream images to a VR display worn by the operator.
Movement latency is hidden using spherical rendering. The head also carries a
telepresence screen displaying an animated image of the operator's face,
enabling direct interaction with remote persons. Our system won the $10M ANA
Avatar XPRIZE competition, which challenged teams to develop intuitive and
immersive avatar systems that could be operated by briefly trained judges. We
analyze our successful participation in the semifinals and finals and provide
insight into our operator training and lessons learned. In addition, we
evaluate our system in a user study that demonstrates its intuitive and easy
usability.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 16:25:13 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Aug 2023 17:30:14 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Lenz",
"Christian",
""
],
[
"Schwarz",
"Max",
""
],
[
"Rochow",
"Andre",
""
],
[
"Pätzold",
"Bastian",
""
],
[
"Memmesheimer",
"Raphael",
""
],
[
"Schreiber",
"Michael",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.979847 |
2308.13628
|
Jiayin Zhu
|
Jiayin Zhu, Zhuoran Zhao, Linlin Yang, Angela Yao
|
HiFiHR: Enhancing 3D Hand Reconstruction from a Single Image via
High-Fidelity Texture
|
Accepted to DAGM German Conference on Pattern Recognition 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present HiFiHR, a high-fidelity hand reconstruction approach that applies
render-and-compare within a learning-based framework to a single input image, capable
of generating visually plausible and accurate 3D hand meshes while recovering
realistic textures. Our method achieves superior texture reconstruction by
employing a parametric hand model with predefined texture assets, and by
establishing a texture reconstruction consistency between the rendered and
input images during training. Moreover, starting from a network pretrained on an
annotated dataset, we apply varying degrees of supervision in our pipeline,
i.e., self-supervision, weak supervision, and full supervision, and discuss the
various levels of contributions of the learned high-fidelity textures in
enhancing hand pose and shape estimation. Experimental results on public
benchmarks including FreiHAND and HO-3D demonstrate that our method outperforms
the state-of-the-art hand reconstruction methods in texture reconstruction
quality while maintaining comparable accuracy in pose and shape estimation. Our
code is available at https://github.com/viridityzhu/HiFiHR.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 18:48:40 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Zhu",
"Jiayin",
""
],
[
"Zhao",
"Zhuoran",
""
],
[
"Yang",
"Linlin",
""
],
[
"Yao",
"Angela",
""
]
] |
new_dataset
| 0.975698 |
2308.13694
|
Matthew McDermott
|
Matthew McDermott and Jason Rife
|
Correcting Motion Distortion for LIDAR HD-Map Localization
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because scanning-LIDAR sensors require finite time to create a point cloud,
sensor motion during a scan warps the resulting image, a phenomenon known as
motion distortion or rolling shutter. Motion-distortion correction methods
exist, but they rely on external measurements or Bayesian filtering over
multiple LIDAR scans. In this paper we propose a novel algorithm that performs
snapshot processing to obtain a motion-distortion correction. Snapshot
processing, which registers a current LIDAR scan to a reference image without
using external sensors or Bayesian filtering, is particularly relevant for
localization to a high-definition (HD) map. Our approach, which we call
Velocity-corrected Iterative Compact Ellipsoidal Transformation (VICET),
extends the well-known Normal Distributions Transform (NDT) algorithm to solve
jointly for both a 6 Degree-of-Freedom (DOF) rigid transform between two LIDAR
scans and a set of 6DOF motion states that describe distortion within the
current LIDAR scan. Using experiments, we show that VICET achieves
significantly higher accuracy than NDT or Iterative Closest Point (ICP)
algorithms when localizing a distorted raw LIDAR scan against an undistorted HD
Map. We recommend the reader explore our open-source code and visualizations at
https://github.com/mcdermatt/VICET, which supplements this manuscript.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 22:39:00 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"McDermott",
"Matthew",
""
],
[
"Rife",
"Jason",
""
]
] |
new_dataset
| 0.99561 |
2308.13710
|
Muskan Garg
|
Muskan Garg
|
WellXplain: Wellness Concept Extraction and Classification in Reddit
Posts for Mental Health Analysis
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During the current mental health crisis, the importance of identifying
potential indicators of mental issues from social media content has surged.
Overlooking the multifaceted nature of mental and social well-being can have
detrimental effects on one's mental state. In traditional therapy sessions,
professionals manually pinpoint the origins and outcomes of underlying mental
challenges, a process both detailed and time-intensive. We introduce an
approach to this intricate mental health analysis by framing the identification
of wellness dimensions in Reddit content as a wellness concept extraction and
categorization challenge. We've curated a unique dataset named WELLXPLAIN,
comprising 3,092 entries and totaling 72,813 words. Drawing from Halbert L.
Dunn's well-regarded wellness theory, our team formulated an annotation
framework along with guidelines. This dataset also includes human-marked
textual segments, offering clear reasoning for decisions made in the wellness
concept categorization process. Our aim in publishing this dataset and
analyzing initial benchmarks is to spearhead the creation of advanced language
models tailored for healthcare-focused concept extraction and categorization.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 23:50:05 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Garg",
"Muskan",
""
]
] |
new_dataset
| 0.961019 |
2308.13711
|
Ishan Rajendrakumar Dave
|
Tristan de Blegiers, Ishan Rajendrakumar Dave, Adeel Yousaf, Mubarak
Shah
|
EventTransAct: A video transformer-based framework for Event-camera
based action recognition
|
IROS 2023; The first two authors contributed equally
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing and comprehending human actions and gestures is a crucial
perception requirement for robots to interact with humans and carry out tasks
in diverse domains, including service robotics, healthcare, and manufacturing.
Event cameras, with their ability to capture fast-moving objects at a high
temporal resolution, offer new opportunities compared to standard action
recognition in RGB videos. However, previous research on event camera action
recognition has primarily focused on sensor-specific network architectures and
image encoding, which may not be suitable for new sensors and limit the use of
recent advancements in transformer-based architectures. In this study, we
employ a computationally efficient model, namely the video transformer network
(VTN), which initially acquires spatial embeddings per event-frame and then
utilizes a temporal self-attention mechanism. In order to better adapt the VTN
to the sparse and fine-grained nature of event data, we design
Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
Proposed $\mathcal{L}_{EC}$ promotes learning fine-grained spatial cues in the
spatial backbone of VTN by contrasting temporally misaligned frames. We
evaluate our method on real-world action recognition on the N-EPIC Kitchens
dataset, and achieve state-of-the-art results on both protocols - testing in
seen kitchen (\textbf{74.9\%} accuracy) and testing in unseen kitchens
(\textbf{42.43\% and 46.66\% Accuracy}). Our approach also takes less
computation time compared to competitive prior approaches, which demonstrates
the potential of our framework \textit{EventTransAct} for real-world
applications of event-camera based action recognition. Project Page:
\url{https://tristandb8.github.io/EventTransAct_webpage/}
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 23:51:07 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"de Blegiers",
"Tristan",
""
],
[
"Dave",
"Ishan Rajendrakumar",
""
],
[
"Yousaf",
"Adeel",
""
],
[
"Shah",
"Mubarak",
""
]
] |
new_dataset
| 0.988102 |
2308.13739
|
Xuhang Chen
|
Shenghong Luo, Xuhang Chen, Weiwen Chen, Zinuo Li, Shuqiang Wang,
Chi-Man Pun
|
Devignet: High-Resolution Vignetting Removal via a Dual Aggregated
Fusion Transformer With Adaptive Channel Expansion
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vignetting commonly occurs as a degradation in images resulting from factors
such as lens design, improper lens hood usage, and limitations in camera
sensors. This degradation affects image details, color accuracy, and presents
challenges in computational photography. Existing vignetting removal algorithms
predominantly rely on ideal physics assumptions and hand-crafted parameters,
resulting in ineffective removal of irregular vignetting and suboptimal
results. Moreover, the substantial lack of real-world vignetting datasets
hinders the objective and comprehensive evaluation of vignetting removal. To
address these challenges, we present Vigset, a pioneering dataset for vignetting
removal. Vigset includes 983 pairs of vignetting and vignetting-free
high-resolution ($5340\times3697$) real-world images under various conditions.
In addition, we introduce DeVigNet, a novel frequency-aware Transformer
architecture designed for vignetting removal. Through the Laplacian Pyramid
decomposition, we propose the Dual Aggregated Fusion Transformer to handle
global features and remove vignetting in the low-frequency domain.
Additionally, we introduce the Adaptive Channel Expansion Module to enhance
details in the high-frequency domain. The experiments demonstrate that the
proposed model outperforms existing state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 02:55:12 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Luo",
"Shenghong",
""
],
[
"Chen",
"Xuhang",
""
],
[
"Chen",
"Weiwen",
""
],
[
"Li",
"Zinuo",
""
],
[
"Wang",
"Shuqiang",
""
],
[
"Pun",
"Chi-Man",
""
]
] |
new_dataset
| 0.972687 |
2308.13759
|
Yizhe Zhang
|
Yizhe Zhang, Tao Zhou, Shuo Wang, Ye Wu, Pengfei Gu, Danny Z. Chen
|
SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge
for Semi-Supervised Learning in Medical Image Segmentation
|
15 pages, 7 figures, Github: https://github.com/yizhezhang2000/SamDSK
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The Segment Anything Model (SAM) exhibits a capability to segment a wide
array of objects in natural images, serving as a versatile perceptual tool for
various downstream image segmentation tasks. In contrast, medical image
segmentation tasks often rely on domain-specific knowledge (DSK). In this
paper, we propose a novel method that combines the segmentation foundation
model (i.e., SAM) with domain-specific knowledge for reliable utilization of
unlabeled images in building a medical image segmentation model. Our new method
is iterative and consists of two main stages: (1) segmentation model training;
(2) expanding the labeled set by using the trained segmentation model, an
unlabeled set, SAM, and domain-specific knowledge. These two stages are
repeated until no more samples are added to the labeled set. A novel
optimal-matching-based method is developed for combining the SAM-generated
segmentation proposals and pixel-level and image-level DSK for constructing
annotations of unlabeled images in the iterative stage (2). In experiments, we
demonstrate the effectiveness of our proposed method for breast cancer
segmentation in ultrasound images, polyp segmentation in endoscopic images, and
skin lesion segmentation in dermoscopic images. Our work initiates a new
direction of semi-supervised learning for medical image segmentation: the
segmentation foundation model can be harnessed as a valuable tool for
label-efficient segmentation learning in medical image segmentation.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 04:46:10 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Zhang",
"Yizhe",
""
],
[
"Zhou",
"Tao",
""
],
[
"Wang",
"Shuo",
""
],
[
"Wu",
"Ye",
""
],
[
"Gu",
"Pengfei",
""
],
[
"Chen",
"Danny Z.",
""
]
] |
new_dataset
| 0.994623 |
2308.13769
|
Md Ataullha Saim
|
Md Ataullha and Mahedi Hassan Rabby and Mushfiqur Rahman and Tahsina
Bintay Azam
|
Bengali Document Layout Analysis with Detectron2
|
DL Sprint 2.0 - BUET CSE Fest 2023, 4 pages, 2 figures, 2 tables
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Document digitization is vital for preserving historical records, efficient
document management, and advancing OCR (Optical Character Recognition)
research. Document Layout Analysis (DLA) involves segmenting documents into
meaningful units like text boxes, paragraphs, images, and tables. Challenges
arise when dealing with diverse layouts, historical documents, and unique
scripts like Bengali, and progress is hindered by the lack of comprehensive Bengali DLA
datasets. We improved the accuracy of the DLA model for Bengali documents by
utilizing advanced Mask R-CNN models available in the Detectron2 library. Our
evaluation involved three variants: Mask R-CNN R-50, R-101, and X-101, both
with and without pretrained weights from PubLayNet, on the BaDLAD dataset,
which contains human-annotated Bengali documents in four categories: text
boxes, paragraphs, images, and tables. Results show the effectiveness of these
models in accurately segmenting Bengali documents. We discuss speed-accuracy
tradeoffs and underscore the significance of pretrained weights. Our findings
expand the applicability of Mask R-CNN in document layout analysis, efficient
document management, and OCR research while suggesting future avenues for
fine-tuning and data augmentation.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 05:29:09 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Ataullha",
"Md",
""
],
[
"Rabby",
"Mahedi Hassan",
""
],
[
"Rahman",
"Mushfiqur",
""
],
[
"Azam",
"Tahsina Bintay",
""
]
] |
new_dataset
| 0.998693 |
2308.13785
|
Minheng Ni
|
Minheng Ni, Chenfei Wu, Xiaodong Wang, Shengming Yin, Lijuan Wang,
Zicheng Liu, Nan Duan
|
ORES: Open-vocabulary Responsible Visual Synthesis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Avoiding synthesizing specific visual concepts is an essential challenge in
responsible visual synthesis. However, the visual concepts that need to be
avoided for responsible visual synthesis tend to be diverse, depending on the
region, context, and usage scenarios. In this work, we formalize a new task,
Open-vocabulary Responsible Visual Synthesis (ORES), where the synthesis model
is able to avoid forbidden visual concepts while allowing users to input any
desired content. To address this problem, we present a Two-stage Intervention
(TIN) framework. By introducing 1) rewriting with learnable instruction through
a large-scale language model (LLM) and 2) synthesizing with prompt intervention
on a diffusion synthesis model, it can effectively synthesize images that avoid
the forbidden concepts while following the user's query as much as possible. To
evaluate on ORES, we provide a publicly available dataset, baseline models, and a benchmark.
Experimental results demonstrate the effectiveness of our method in reducing
risks of image generation. Our work highlights the potential of LLMs in
responsible visual synthesis. Our code and dataset are publicly available.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 06:47:34 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Ni",
"Minheng",
""
],
[
"Wu",
"Chenfei",
""
],
[
"Wang",
"Xiaodong",
""
],
[
"Yin",
"Shengming",
""
],
[
"Wang",
"Lijuan",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.953484 |
2308.13795
|
Trung Nghia Le
|
Minh-Hien Le and Chi-Bien Chu and Khanh-Duy Le and Tam V. Nguyen and
Minh-Triet Tran and Trung-Nghia Le
|
VIDES: Virtual Interior Design via Natural Language and Visual Guidance
|
Accepted to ISMAR 2023 (Poster paper)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Interior design is crucial in creating aesthetically pleasing and functional
indoor spaces. However, developing and editing interior design concepts
requires significant time and expertise. We propose the Virtual Interior DESign
(VIDES) system in response to this challenge. Leveraging cutting-edge
technology in generative AI, our system can assist users in generating and
editing indoor scene concepts quickly, given user text description and visual
guidance. Using both visual guidance and language as the conditional inputs
significantly enhances the accuracy and coherence of the generated scenes,
resulting in visually appealing designs. Through extensive experimentation, we
demonstrate the effectiveness of VIDES in developing new indoor concepts,
changing indoor styles, and replacing and removing interior objects. The system
successfully captures the essence of users' descriptions while providing
flexibility for customization. Consequently, this system can potentially reduce
the entry barrier for indoor design, making it more accessible to users with
limited technical skills and reducing the time required to create high-quality
images. Individuals who have a background in design can now easily communicate
their ideas visually and effectively present their design concepts.
https://sites.google.com/view/ltnghia/research/VIDES
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 07:41:42 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Le",
"Minh-Hien",
""
],
[
"Chu",
"Chi-Bien",
""
],
[
"Le",
"Khanh-Duy",
""
],
[
"Nguyen",
"Tam V.",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Le",
"Trung-Nghia",
""
]
] |
new_dataset
| 0.999521 |
2308.13798
|
Trung Nghia Le
|
Khoi-Nguyen Nguyen-Ngoc and Thanh-Tung Phan-Nguyen and Khanh-Duy Le
and Tam V. Nguyen and Minh-Triet Tran and Trung-Nghia Le
|
DM-VTON: Distilled Mobile Real-time Virtual Try-On
|
Accepted to ISMAR 2023 (Poster paper)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The fashion e-commerce industry has witnessed significant growth in recent
years, prompting the exploration of image-based virtual try-on techniques to incorporate
Augmented Reality (AR) experiences into online shopping platforms. However,
existing research has primarily overlooked a crucial aspect - the runtime of
the underlying machine-learning model. While existing methods prioritize
enhancing output quality, they often disregard the execution time, which
restricts their applications on a limited range of devices. To address this
gap, we propose Distilled Mobile Real-time Virtual Try-On (DM-VTON), a novel
virtual try-on framework designed to achieve simplicity and efficiency. Our
approach is based on a knowledge distillation scheme that leverages a strong
Teacher network as supervision to guide a Student network without relying on
human parsing. Notably, we introduce an efficient Mobile Generative Module
within the Student network, significantly reducing the runtime while ensuring
high-quality output. Additionally, we propose Virtual Try-on-guided Pose for
Data Synthesis to address the limited pose variation observed in training
images. Experimental results show that the proposed method can achieve 40
frames per second on a single Nvidia Tesla T4 GPU and only take up 37 MB of
memory while producing almost the same output quality as other state-of-the-art
methods. DM-VTON stands poised to facilitate the advancement of real-time AR
applications, in addition to the generation of lifelike attired human figures
tailored for diverse specialized training tasks.
https://sites.google.com/view/ltnghia/research/DMVTON
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 07:46:27 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Nguyen-Ngoc",
"Khoi-Nguyen",
""
],
[
"Phan-Nguyen",
"Thanh-Tung",
""
],
[
"Le",
"Khanh-Duy",
""
],
[
"Nguyen",
"Tam V.",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Le",
"Trung-Nghia",
""
]
] |
new_dataset
| 0.997282 |
2308.13808
|
Claudio Di Sipio
|
Juri Di Rocco and Claudio Di Sipio
|
ResyDuo: Combining data models and CF-based recommender systems to
develop Arduino projects
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While specifying an IoT-based system, software developers have to face a set
of challenges, spanning from selecting the hardware components to writing the
actual source code. Even though dedicated development environments are in
place, a nonexpert user might struggle with the over-choice problem in
selecting the proper component. By combining MDE and recommender systems, this
paper proposes an initial prototype, called ResyDuo, to assist Arduino
developers by providing two different artifacts, i.e., hardware components
and software libraries. In particular, we make use of a widely adopted
collaborative filtering algorithm by collecting relevant information by means
of a dedicated data model. ResyDuo can retrieve hardware components by using
tags or existing Arduino projects stored on the ProjectHub repository. Then,
the system can eventually retrieve corresponding software libraries based on
the identified hardware devices. ResyDuo is equipped with a web-based interface
that allows users to easily select and configure the Arduino project under
development. To assess ResyDuo's performance, we run ten-fold cross-validation
with a grid search strategy to optimize the hyperparameters of the
CF-based algorithm. The conducted evaluation shows encouraging results even
though there is still room for improvement in terms of the examined metrics.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 08:21:31 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Di Rocco",
"Juri",
""
],
[
"Di Sipio",
"Claudio",
""
]
] |
new_dataset
| 0.998774 |
2308.13820
|
Qi Shen
|
Zichen Yuan, Qi Shen, Bingyi Zheng, Yuting Liu, Linying Jiang, Guibing
Guo
|
Video and Audio are Images: A Cross-Modal Mixer for Original Data on
Video-Audio Retrieval
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-modal retrieval has become popular in recent years, particularly with
the rise of multimedia. Generally, the information from each modality exhibits
distinct representations and semantics, so features encoded with a dual-tower
architecture tend to lie in separate latent spaces, which makes it
difficult to establish semantic relationships between modalities and results in
poor retrieval performance. To address this issue, we propose a novel framework
for cross-modal retrieval which consists of a cross-modal mixer, a masked
autoencoder for pre-training, and a cross-modal retriever for downstream
tasks. Specifically, we first adopt a cross-modal mixer and mask modeling to fuse
the original modalities and eliminate redundancy. Then, an encoder-decoder
architecture is applied to achieve a fuse-then-separate task in the
pre-training phase. We feed masked fused representations into the encoder and
reconstruct them with the decoder, ultimately separating the original data of
two modalities. In downstream tasks, we use the pre-trained encoder to build
the cross-modal retrieval method. Extensive experiments on 2 real-world
datasets show that our approach outperforms previous state-of-the-art methods
in video-audio matching tasks, improving retrieval accuracy by up to 2 times.
Furthermore, we prove our model performance by transferring it to other
downstream tasks as a universal model.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 09:02:21 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Yuan",
"Zichen",
""
],
[
"Shen",
"Qi",
""
],
[
"Zheng",
"Bingyi",
""
],
[
"Liu",
"Yuting",
""
],
[
"Jiang",
"Linying",
""
],
[
"Guo",
"Guibing",
""
]
] |
new_dataset
| 0.963373 |
2308.13823
|
Kian Wei Ng Mr
|
Kian Wei Ng, Yujia Gao, Shaheryar Mohammed Furqan, Zachery Yeo, Joel
Lau, Kee Yuan Ngiam, Eng Tat Khoo
|
HoloPOCUS: Portable Mixed-Reality 3D Ultrasound Tracking, Reconstruction
and Overlay
|
Accepted in "The 4th International Workshop of Advances in
Simplifying Medical UltraSound" (ASMUS) - a workshop held in conjunction with
MICCAI 2023
| null | null | null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ultrasound (US) imaging provides a safe and accessible solution to procedural
guidance and diagnostic imaging. The effective usage of conventional 2D US for
interventional guidance requires extensive experience to project the image
plane onto the patient, and the interpretation of images in diagnostics suffers
from high intra- and inter-user variability. 3D US reconstruction allows for
more consistent diagnosis and interpretation, but existing solutions are
limited in terms of equipment and applicability in real-time navigation. To
address these issues, we propose HoloPOCUS - a mixed reality US system (MR-US)
that overlays rich US information onto the user's vision in a point-of-care
setting. HoloPOCUS extends existing MR-US methods beyond placing a US plane in
the user's vision to include a 3D reconstruction and projection that can aid in
procedural guidance using conventional probes. We validated a tracking pipeline
that demonstrates higher accuracy compared to existing MR-US works.
Furthermore, user studies conducted via a phantom task showed significant
improvements in navigation duration when using our proposed methods.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 09:28:20 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Ng",
"Kian Wei",
""
],
[
"Gao",
"Yujia",
""
],
[
"Furqan",
"Shaheryar Mohammed",
""
],
[
"Yeo",
"Zachery",
""
],
[
"Lau",
"Joel",
""
],
[
"Ngiam",
"Kee Yuan",
""
],
[
"Khoo",
"Eng Tat",
""
]
] |
new_dataset
| 0.999575 |
2308.13836
|
Aljoscha Meyer MSc
|
Aljoscha Meyer
|
SoK: Authenticated Prefix Relations -- A Unified Perspective On Relative
Time-Stamping and Append-Only Logs
|
16 pages, 12 figures
| null | null | null |
cs.CR cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Secure relative timestamping and secure append-only logs are two historically
mostly independent lines of research, which we show to be sides of the same
coin -- the authentication of prefix relations. From this more general
viewpoint, we derive several complexity criteria not yet considered in previous
literature. We define transitive prefix authentication graphs, a graph class
that captures all hash-based timestamping and log designs we know of. We survey
existing schemes by expressing them as transitive prefix authentication graphs,
which yields more compact definitions and more complete evaluations than in the
existing literature.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 10:04:37 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Meyer",
"Aljoscha",
""
]
] |
new_dataset
| 0.994002 |
2308.13839
|
Guopeng Li
|
Guopeng Li, Yiru Jiao, Simeon C. Calvert, J.W.C. van Lint
|
A Comparative Conflict Resolution Dataset Derived from Argoverse-2:
Scenarios with vs. without Autonomous Vehicles
|
7 pages, 11 figures
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As the deployment of autonomous vehicles (AVs) becomes increasingly
prevalent, ensuring safe and smooth interactions between AVs and other human
agents is of critical importance. In the urban environment, how vehicles
resolve conflicts has significant impacts on both driving safety and traffic
efficiency. To expedite the studies on evaluating conflict resolution in
AV-involved and AV-free scenarios at intersections, this paper presents a
high-quality dataset derived from the open Argoverse-2 motion forecasting data.
First, scenarios of interest are selected by applying a set of heuristic rules
regarding post-encroachment time (PET), minimum distance, trajectory crossing,
and speed variation. Next, the quality of the raw data is carefully examined.
We found that position and speed data are not consistent in Argoverse-2 data
and its improper processing induced unnecessary errors. To address these
specific problems, we propose and apply a data processing pipeline to correct
and enhance the raw data. As a result, 5k+ AV-involved scenarios and 16k+
AV-free scenarios with smooth and consistent position, speed, acceleration, and
heading direction data are obtained. Further assessments show that this dataset
comprises diverse and balanced conflict resolution regimes. This informative
dataset provides a valuable resource for researchers and practitioners in the
field of autonomous vehicle assessment and regulation. The dataset is openly
available via https://github.com/RomainLITUD/conflict_resolution_dataset.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 10:15:52 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Li",
"Guopeng",
""
],
[
"Jiao",
"Yiru",
""
],
[
"Calvert",
"Simeon C.",
""
],
[
"van Lint",
"J. W. C.",
""
]
] |
new_dataset
| 0.999787 |
2308.13841
|
Wanrong He
|
Wanrong He, Mitchell L. Gordon, Lindsay Popowski, Michael S. Bernstein
|
Cura: Curation at Social Media Scale
|
CSCW 2023
| null |
10.1145/3610186
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can online communities execute a focused vision for their space? Curation
offers one approach, where community leaders manually select content to share
with the community. Curation enables leaders to shape a space that matches
their taste, norms, and values, but the practice is often intractable at social
media scale: curators cannot realistically sift through hundreds or thousands
of submissions daily. In this paper, we contribute algorithmic and interface
foundations enabling curation at scale, and manifest these foundations in a
system called Cura. Our approach draws on the observation that, while curators'
attention is limited, other community members' upvotes are plentiful and
informative of curators' likely opinions. We thus contribute a
transformer-based curation model that predicts whether each curator will upvote
a post based on previous community upvotes. Cura applies this curation model to
create a feed of content that it predicts the curator would want in the
community. Evaluations demonstrate that the curation model accurately estimates
opinions of diverse curators, that changing curators for a community results in
clearly recognizable shifts in the community's content, and that, consequently,
curation can reduce anti-social behavior by half without extra moderation
effort. By sampling different types of curators, Cura lowers the threshold to
genres of curated social media ranging from editorial groups to stakeholder
roundtables to democracies.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 10:25:05 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"He",
"Wanrong",
""
],
[
"Gordon",
"Mitchell L.",
""
],
[
"Popowski",
"Lindsay",
""
],
[
"Bernstein",
"Michael S.",
""
]
] |
new_dataset
| 0.996985 |
2308.13879
|
Sicheng Yang
|
Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu,
Xiaofei Wu, Songcen Xu, Zonghong Dai
|
The DiffuseStyleGesture+ entry to the GENEA Challenge 2023
|
7 pages, 8 figures, ICMI 2023
| null |
10.1145/3577190.3616114
| null |
cs.HC cs.AI cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce the DiffuseStyleGesture+, our solution for the
Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA)
Challenge 2023, which aims to foster the development of realistic, automated
systems for generating conversational gestures. Participants are provided with
a pre-processed dataset and their systems are evaluated through crowdsourced
scoring. Our proposed model, DiffuseStyleGesture+, leverages a diffusion model
to generate gestures automatically. It incorporates a variety of modalities,
including audio, text, speaker ID, and seed gestures. These diverse modalities
are mapped to a hidden space and processed by a modified diffusion model to
produce the corresponding gesture for a given speech input. Upon evaluation,
the DiffuseStyleGesture+ demonstrated performance on par with the top-tier
models in the challenge, showing no significant differences with those models
in human-likeness, appropriateness for the interlocutor, and achieving
competitive performance with the best model on appropriateness for agent
speech. This indicates that our model is competitive and effective in
generating realistic and appropriate gestures for given speech. The code,
pre-trained models, and demos are available at
https://github.com/YoungSeng/DiffuseStyleGesture/tree/DiffuseStyleGesturePlus/BEAT-TWH-main.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 13:34:17 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Yang",
"Sicheng",
""
],
[
"Xue",
"Haiwei",
""
],
[
"Zhang",
"Zhensong",
""
],
[
"Li",
"Minglei",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Wu",
"Xiaofei",
""
],
[
"Xu",
"Songcen",
""
],
[
"Dai",
"Zonghong",
""
]
] |
new_dataset
| 0.982707 |
2308.13903
|
Raja Kumar
|
Raja Kumar, Jiahao Luo, Alex Pang, James Davis
|
Disjoint Pose and Shape for 3D Face Reconstruction
|
ICCV workshops 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing methods for 3D face reconstruction from a few casually captured
images employ deep learning based models along with a 3D Morphable Model (3DMM)
as a face geometry prior. Structure from Motion (SfM), followed by Multi-View
Stereo (MVS), on the other hand, uses dozens of high-resolution images to
reconstruct accurate 3D faces. However, it produces noisy and stretched-out
results when only two views are available. In this paper, taking inspiration from
both these methods, we propose an end-to-end pipeline that disjointly solves
for pose and shape to make the optimization stable and accurate. We use a face
shape prior to estimate face pose and use stereo matching followed by a 3DMM to
solve for the shape. The proposed method achieves end-to-end topological
consistency, enables an iterative face pose refinement procedure, and shows
remarkable improvement in both quantitative and qualitative results over
existing state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 15:18:32 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Kumar",
"Raja",
""
],
[
"Luo",
"Jiahao",
""
],
[
"Pang",
"Alex",
""
],
[
"Davis",
"James",
""
]
] |
new_dataset
| 0.987815 |
2308.13929
|
Avishai Sintov
|
Alon Mizrahi and Avishai Sintov
|
TeleFMG: A Wearable Force-Myography Device for Natural Teleoperation of
Multi-finger Robotic Hands
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Teleoperation enables a user to perform tasks from a remote location. Hence,
the user can interact with a long-distance environment through the operation of
a robotic system. Often, teleoperation is required in order to perform
dangerous tasks (e.g., work in disaster zones or in chemical plants) while
keeping the user out of harm's way. Nevertheless, common approaches often
provide cumbersome and unnatural usage. In this letter, we propose TeleFMG, an
approach for teleoperation of a multi-finger robotic hand through natural
motions of the user's hand. By using a low-cost wearable Force-Myography (FMG)
device, musculoskeletal activities on the user's forearm are mapped to hand
poses which, in turn, are mimicked by a robotic hand. The mapping is performed
by a data-based model that considers spatial positions of the sensors on the
forearm along with temporal dependencies of the FMG signals. A set of
experiments show the ability of a teleoperator to control a multi-finger hand
through intuitive and natural finger motion. Furthermore, transfer to new users
is demonstrated.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 18:08:32 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Mizrahi",
"Alon",
""
],
[
"Sintov",
"Avishai",
""
]
] |
new_dataset
| 0.99919 |
2308.13934
|
Guying Lin
|
Guying Lin (1), Lei Yang (1), Congyi Zhang (1), Hao Pan (2), Yuhan
Ping (1), Guodong Wei (1), Taku Komura (1), John Keyser (3), Wenping Wang (3)
((1) The University of Hong Kong, (2) Microsoft Research Asia, (3) Texas A&M
University)
|
Patch-Grid: An Efficient and Feature-Preserving Neural Implicit Surface
Representation
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural implicit representations are known to be more compact for depicting 3D
shapes than traditional discrete representations. However, the neural
representations tend to round sharp corners or edges and struggle to represent
surfaces with open boundaries. Moreover, they are slow to train. We present a
unified neural implicit representation, called Patch-Grid, that fits to complex
shapes efficiently, preserves sharp features, and effectively models surfaces
with open boundaries and thin geometric features. Our superior efficiency comes
from embedding each surface patch into a local latent volume and decoding it
using a shared MLP decoder, which is pretrained on various local surface
geometries. With this pretrained decoder fixed, fitting novel shapes and local
shape updates can be done efficiently. The faithful preservation of sharp
features is enabled by adopting a novel merge grid to perform local
constructive solid geometry (CSG) combinations of surface patches in the cells
of an adaptive Octree, yielding better robustness than using a global CSG
construction as proposed in the literature. Experiments show that our
Patch-Grid method faithfully captures shapes with complex sharp features, open
boundaries and thin structures, and outperforms existing learning-based methods
in both efficiency and quality for surface fitting and local shape updates.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 18:20:38 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Lin",
"Guying",
""
],
[
"Yang",
"Lei",
""
],
[
"Zhang",
"Congyi",
""
],
[
"Pan",
"Hao",
""
],
[
"Ping",
"Yuhan",
""
],
[
"Wei",
"Guodong",
""
],
[
"Komura",
"Taku",
""
],
[
"Keyser",
"John",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.994189 |
2308.13941
|
Alexander Sepúlveda
|
Margareth Castillo, Felipe Rubio, Dagoberto Porras, Sonia H.
Contreras-Ortiz, Alexander Sepúlveda
|
A small vocabulary database of ultrasound image sequences of vocal tract
dynamics
| null |
STSIVA-2019, Bucaramanga, Colombia, 2019
|
10.1109/STSIVA.2019.8730224
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new database consisting of concurrent articulatory and
acoustic speech data. The articulatory data correspond to ultrasound videos of
the vocal tract dynamics, which allow the visualization of the tongue upper
contour during the speech production process. Acoustic data is composed of 30
short sentences that were acquired by a directional cardioid microphone. This
database includes data from 17 young subjects (8 male and 9 female) from the
Santander region in Colombia, who reported not having any speech pathology.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 18:58:10 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Castillo",
"Margareth",
""
],
[
"Rubio",
"Felipe",
""
],
[
"Porras",
"Dagoberto",
""
],
[
"Contreras-Ortiz",
"Sonia H.",
""
],
[
"Sepúlveda",
"Alexander",
""
]
] |
new_dataset
| 0.998964 |
2308.13988
|
Haizhou Zhao
|
Lei Yu, Haizhou Zhao, Siying Qin, Yuqing Chen
|
A Robot Leg with Compact Variable Stiffness Joint based on Leaf-Spring
Mechanism
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Legged robots with variable stiffness actuators (VSAs) can achieve
energy-efficient and versatile locomotion. However, equipping legged robots
with VSAs in real-world applications is usually restricted by (i) redundant
mechanical structure design, (ii) limited stiffness variation range and speed,
and (iii) high energy consumption in stiffness modulation. In this paper, we
present a novel Variable-Length Leaf-Spring Actuator (VLLSA) for legged robots
that aims to address the aforementioned limitations. The design is based on a
leaf-spring mechanism, and we improve the structural design to make the proposed
VSA (i) compact and lightweight in mechanical structure, (ii) precise in
theoretical modeling, and (iii) capable of modulating stiffness with a wide
range, fast speed, and low energy consumption. Hardware experiments validate
that the legged robot equipped with the proposed VLLSA has compact structure,
high dynamic performance and low energy consumption.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 02:49:47 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Yu",
"Lei",
""
],
[
"Zhao",
"Haizhou",
""
],
[
"Qin",
"Siying",
""
],
[
"Chen",
"Yuqing",
""
]
] |
new_dataset
| 0.997874 |
2308.13989
|
Junho Kim
|
Junho Kim, Changwoon Choi, Hojun Jang, Young Min Kim
|
LDL: Line Distance Functions for Panoramic Localization
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce LDL, a fast and robust algorithm that localizes a panorama to a
3D map using line segments. LDL focuses on the sparse structural information of
lines in the scene, which is robust to illumination changes and can potentially
enable efficient computation. While previous line-based localization approaches
tend to sacrifice accuracy or computation time, our method effectively observes
the holistic distribution of lines within panoramic images and 3D maps.
Specifically, LDL matches the distribution of lines with 2D and 3D line
distance functions, which are further decomposed along principal directions of
lines to increase the expressiveness. The distance functions provide coarse
pose estimates by comparing the distributional information, where the poses are
further optimized using conventional local feature matching. As our pipeline
solely leverages line geometry and local features, it does not require costly
additional training of line-specific features or correspondence matching.
Nevertheless, our method demonstrates robust performance on challenging
scenarios including object layout changes, illumination shifts, and large-scale
scenes, while exhibiting fast pose search terminating within a matter of
milliseconds. We thus expect our method to serve as a practical solution for
line-based localization, and complement the well-established point-based
paradigm. The code for LDL is available through the following link:
https://github.com/82magnolia/panoramic-localization.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 02:57:07 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Kim",
"Junho",
""
],
[
"Choi",
"Changwoon",
""
],
[
"Jang",
"Hojun",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.999688 |
2308.14007
|
Orian Leitersdorf
|
Orian Leitersdorf, Ronny Ronen, Shahar Kvatinsky
|
CUDA-PIM: End-to-End Integration of Digital Processing-in-Memory from
High-Level C++ to Microarchitectural Design
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Digital processing-in-memory (PIM) architectures mitigate the memory wall
problem by facilitating parallel bitwise operations directly within memory.
Recent works have demonstrated their algorithmic potential for accelerating
data-intensive applications; however, there remains a significant gap in the
programming model and microarchitectural design. This is further exacerbated by
the emerging model of partitions, which significantly complicates control and
periphery. Therefore, inspired by NVIDIA CUDA, this paper provides an
end-to-end architectural integration of digital memristive PIM from an abstract
high-level C++ programming interface for vector operations to the low-level
microarchitecture.
We begin by proposing an efficient microarchitecture and instruction set
architecture (ISA) that bridge the gap between the low-level control periphery
and an abstraction of PIM parallelism into warps and threads. We subsequently
propose a PIM compilation library that converts high-level C++ to ISA
instructions, and a PIM driver that translates ISA instructions into PIM
micro-operations. This drastically simplifies the development of PIM
applications and enables PIM integration within larger existing C++ CPU/GPU
programs for heterogeneous computing with significant ease.
Lastly, we present an efficient GPU-accelerated simulator for the proposed
PIM microarchitecture. Although slower than a theoretical PIM chip, this
simulator provides an accessible platform for developers to start executing and
debugging PIM algorithms. To validate our approach, we implement
state-of-the-art matrix operations and FFT PIM-based algorithms as case
studies. These examples demonstrate drastically simplified development without
compromising performance, showing the potential and significance of CUDA-PIM.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 05:12:54 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Leitersdorf",
"Orian",
""
],
[
"Ronen",
"Ronny",
""
],
[
"Kvatinsky",
"Shahar",
""
]
] |
new_dataset
| 0.994081 |
2308.14016
|
Gabriele Oligeri
|
Bader Al-Sada, Alireza Sadighian, Gabriele Oligeri
|
MITRE ATT&CK: State of the Art and Way Forward
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MITRE ATT&CK is a comprehensive framework of adversary tactics, techniques
and procedures based on real-world observations. It has been used as a
foundation for threat modelling in different sectors, such as government,
academia and industry. To the best of our knowledge, no previous work has been
devoted to the comprehensive collection, study and investigation of the current
state of the art leveraging the MITRE ATT&CK framework. We select and inspect
more than fifty major research contributions, while conducting a detailed
analysis of their methodology and objectives in relation to the MITRE ATT&CK
framework. We provide a categorization of the identified papers according to
different criteria such as use cases, application scenarios, adopted
methodologies and the use of additional data. Finally, we discuss open issues
and future research directions involving not only the MITRE ATT&CK framework
but also the fields of risk analysis and cyber-threat intelligence at large.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 06:26:35 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Al-Sada",
"Bader",
""
],
[
"Sadighian",
"Alireza",
""
],
[
"Oligeri",
"Gabriele",
""
]
] |
new_dataset
| 0.998404 |
2308.14050
|
Santosh Sanjeev Mr.
|
Santosh Sanjeev, Salwa K. Al Khatib, Mai A. Shaaban, Ibrahim Almakky,
Vijay Ram Papineni and Mohammad Yaqub
|
PECon: Contrastive Pretraining to Enhance Feature Alignment between CT
and EHR Data for Improved Pulmonary Embolism Diagnosis
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Previous deep learning efforts have focused on improving the performance of
Pulmonary Embolism (PE) diagnosis from Computed Tomography (CT) scans using
Convolutional Neural Networks (CNNs). However, the features from CT scans alone
are not always sufficient for the diagnosis of PE. CT scans along with
electronic health records (EHR) can provide better insight into the patient's
condition and can lead to more accurate PE diagnosis. In this paper, we propose
Pulmonary Embolism Detection using Contrastive Learning (PECon), a supervised
contrastive pretraining strategy that employs both the patient's CT scans as
well as the EHR data, aiming to enhance the alignment of feature
representations between the two modalities and leverage information to improve
the PE diagnosis. In order to achieve this, we make use of the class labels and
pull the sample features of the same class together, while pushing away those
of the other class. Results show that the proposed work outperforms the
existing techniques and achieves state-of-the-art performance on the RadFusion
dataset with an F1-score of 0.913, accuracy of 0.90 and an AUROC of 0.943.
Furthermore, we also explore the explainability of our approach in comparison
to other methods. Our code is publicly available at
https://github.com/BioMedIA-MBZUAI/PECon.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 09:07:26 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Sanjeev",
"Santosh",
""
],
[
"Khatib",
"Salwa K. Al",
""
],
[
"Shaaban",
"Mai A.",
""
],
[
"Almakky",
"Ibrahim",
""
],
[
"Papineni",
"Vijay Ram",
""
],
[
"Yaqub",
"Mohammad",
""
]
] |
new_dataset
| 0.991734 |
2308.14075
|
Gil Shapira
|
Gil Shapira and Yosi Keller
|
FaceCoresetNet: Differentiable Coresets for Face Set Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In set-based face recognition, we aim to compute the most discriminative
descriptor from an unbounded set of images and videos showing a single person.
A discriminative descriptor balances two policies when aggregating information
from a given set. The first is a quality-based policy: emphasizing high-quality
and down-weighting low-quality images. The second is a diversity-based policy:
emphasizing unique images in the set and down-weighting multiple occurrences of
similar images as found in video clips which can overwhelm the set
representation. This work frames face-set representation as a differentiable
coreset selection problem. Our model learns how to select a small coreset of
the input set that balances quality and diversity policies using a learned
metric parameterized by the face quality, optimized end-to-end. The selection
process is a differentiable farthest-point sampling (FPS) realized by
approximating the non-differentiable Argmax operation with differentiable
sampling from the Gumbel-Softmax distribution of distances. The small coreset
is later used as queries in a self and cross-attention architecture to enrich
the descriptor with information from the whole set. Our model is
order-invariant and linear in the input set size. We set a new SOTA for set face
verification on the IJB-B and IJB-C datasets. Our code is publicly available.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 11:38:42 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Shapira",
"Gil",
""
],
[
"Keller",
"Yosi",
""
]
] |
new_dataset
| 0.996507 |
2308.14083
|
Yangang Wang
|
Xiaohan Yuan, Cong Liu and Yangang Wang
|
4D Myocardium Reconstruction with Decoupled Motion and Shape Model
|
Accepted by ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating the shape and motion state of the myocardium is essential in
diagnosing cardiovascular diseases. However, cine magnetic resonance (CMR)
imaging is dominated by 2D slices, whose large slice spacing challenges
inter-slice shape reconstruction and motion acquisition. To address this
problem, we propose a 4D reconstruction method that decouples motion and shape,
which can predict the inter-/intra- shape and motion estimation from a given
sparse point cloud sequence obtained from limited slices. Our framework
comprises a neural motion model and an end-diastolic (ED) shape model. The
implicit ED shape model can learn a continuous boundary and encourage the
motion model to predict without the supervision of ground truth deformation,
and the motion model enables canonical input of the shape model by deforming
any point from any phase to the ED phase. Additionally, the constructed
ED-space enables pre-training of the shape model, thereby guiding the motion
model and addressing the issue of data scarcity. We propose, to the best of our
knowledge, the first 4D myocardial dataset and verify our method on the proposed, public,
and cross-modal datasets, showing superior reconstruction performance and
enabling various clinical applications.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 12:08:49 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Yuan",
"Xiaohan",
""
],
[
"Liu",
"Cong",
""
],
[
"Wang",
"Yangang",
""
]
] |
new_dataset
| 0.985824 |
2308.14089
|
Scott Fleming
|
Scott L. Fleming, Alejandro Lozano, William J. Haberkorn, Jenelle A.
Jindal, Eduardo P. Reis, Rahul Thapa, Louis Blankemeier, Julian Z. Genkins,
Ethan Steinberg, Ashwin Nayak, Birju S. Patel, Chia-Chun Chiang, Alison
Callahan, Zepeng Huo, Sergios Gatidis, Scott J. Adams, Oluseyi Fayanju,
Shreya J. Shah, Thomas Savage, Ethan Goh, Akshay S. Chaudhari, Nima
Aghaeepour, Christopher Sharp, Michael A. Pfeffer, Percy Liang, Jonathan H.
Chen, Keith E. Morse, Emma P. Brunskill, Jason A. Fries, Nigam H. Shah
|
MedAlign: A Clinician-Generated Dataset for Instruction Following with
Electronic Medical Records
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The ability of large language models (LLMs) to follow natural language
instructions with human-level fluency suggests many opportunities in healthcare
to reduce administrative burden and improve quality of care. However,
evaluating LLMs on realistic text generation tasks for healthcare remains
challenging. Existing question answering datasets for electronic health record
(EHR) data fail to capture the complexity of information needs and
documentation burdens experienced by clinicians. To address these challenges,
we introduce MedAlign, a benchmark dataset of 983 natural language instructions
for EHR data. MedAlign is curated by 15 clinicians (7 specialities), includes
clinician-written reference responses for 303 instructions, and provides 276
longitudinal EHRs for grounding instruction-response pairs. We used MedAlign to
evaluate 6 general domain LLMs, having clinicians rank the accuracy and quality
of each LLM response. We found high error rates, ranging from 35% (GPT-4) to
68% (MPT-7B-Instruct), and an 8.3% drop in accuracy moving from 32k to 2k
context lengths for GPT-4. Finally, we report correlations between clinician
rankings and automated natural language generation metrics as a way to rank
LLMs without human review. We make MedAlign available under a research data use
agreement to enable LLM evaluations on tasks aligned with clinician needs and
preferences.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 12:24:39 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Fleming",
"Scott L.",
""
],
[
"Lozano",
"Alejandro",
""
],
[
"Haberkorn",
"William J.",
""
],
[
"Jindal",
"Jenelle A.",
""
],
[
"Reis",
"Eduardo P.",
""
],
[
"Thapa",
"Rahul",
""
],
[
"Blankemeier",
"Louis",
""
],
[
"Genkins",
"Julian Z.",
""
],
[
"Steinberg",
"Ethan",
""
],
[
"Nayak",
"Ashwin",
""
],
[
"Patel",
"Birju S.",
""
],
[
"Chiang",
"Chia-Chun",
""
],
[
"Callahan",
"Alison",
""
],
[
"Huo",
"Zepeng",
""
],
[
"Gatidis",
"Sergios",
""
],
[
"Adams",
"Scott J.",
""
],
[
"Fayanju",
"Oluseyi",
""
],
[
"Shah",
"Shreya J.",
""
],
[
"Savage",
"Thomas",
""
],
[
"Goh",
"Ethan",
""
],
[
"Chaudhari",
"Akshay S.",
""
],
[
"Aghaeepour",
"Nima",
""
],
[
"Sharp",
"Christopher",
""
],
[
"Pfeffer",
"Michael A.",
""
],
[
"Liang",
"Percy",
""
],
[
"Chen",
"Jonathan H.",
""
],
[
"Morse",
"Keith E.",
""
],
[
"Brunskill",
"Emma P.",
""
],
[
"Fries",
"Jason A.",
""
],
[
"Shah",
"Nigam H.",
""
]
] |
new_dataset
| 0.999836 |
2308.14164
|
Francesco Intoci
|
Francesco Intoci and Julian Sturm and Daniel Fraunholz and Apostolos
Pyrgelis and Colin Barschel
|
P3LI5: Practical and Confidential Lawful Interception on the 5G Core
|
Accepted in the proceedings of IEEE Computer and Network Security
(IEEE CNS) 2023. Subject to IEEE copyright policy
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Lawful Interception (LI) is a legal obligation of Communication Service
Providers (CSPs) to provide interception capabilities to Law Enforcement
Agencies (LEAs) in order to gain insightful data from network communications
for criminal proceedings, e.g., network identifiers for tracking suspects. With
the privacy-enhancements of network identifiers in the 5th generation of mobile
networks (5G), LEAs need to interact with CSPs for network identifier
resolution. This raises new privacy issues, as untrusted CSPs are able to infer
sensitive information about ongoing investigations, e.g., the identities of
their subscribers under suspicion. In this work, we propose P3LI5, a novel
system that enables LEAs to privately query CSPs for network identifier
resolution by leveraging an information retrieval protocol, SparseWPIR, that is
based on private information retrieval and its weakly private version. As such,
P3LI5 can be adapted to various operational scenarios with different
confidentiality or latency requirements, by selectively allowing a bounded
information leakage for improved performance. We implement P3LI5 on the 5G LI
infrastructure using well known open-source projects and demonstrate its
scalability to large databases while retaining low latency. To the best of our
knowledge, P3LI5 is the first proposal for addressing the privacy issues raised
by the mandatory requirement for LI on the 5G core network.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 17:57:30 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Intoci",
"Francesco",
""
],
[
"Sturm",
"Julian",
""
],
[
"Fraunholz",
"Daniel",
""
],
[
"Pyrgelis",
"Apostolos",
""
],
[
"Barschel",
"Colin",
""
]
] |
new_dataset
| 0.997982 |
2308.14256
|
Yang Liu
|
Yang Liu, Cheng Yu, Lei Shang, Ziheng Wu, Xingjun Wang, Yuze Zhao, Lin
Zhu, Chen Cheng, Weitao Chen, Chao Xu, Haoyu Xie, Yuan Yao, Wenmeng Zhou,
Yingda Chen, Xuansong Xie, Baigui Sun
|
FaceChain: A Playground for Identity-Preserving Portrait Generation
|
This is an ongoing work that will be consistently refined and
improved upon
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in personalized image generation have unveiled the
intriguing capability of pre-trained text-to-image models to learn identity
information from a collection of portrait images. However, existing solutions
can be vulnerable in producing truthful details, and usually suffer from
several defects such as (i) the generated face exhibits its own unique
characteristics, \ie facial shape and facial feature positioning may not
resemble key characteristics of the input, and (ii) the synthesized face may
contain warped, blurred or corrupted regions. In this paper, we present
FaceChain, a personalized portrait generation framework that combines a series
of customized image-generation model and a rich set of face-related perceptual
understanding models (\eg, face detection, deep face embedding extraction, and
facial attribute recognition), to tackle aforementioned challenges and to
generate truthful personalized portraits, with only a handful of portrait
images as input. Concretely, we inject several SOTA face models into the
generation procedure, achieving a more efficient label-tagging,
data-processing, and model post-processing compared to previous solutions, such
as DreamBooth~\cite{ruiz2023dreambooth}, InstantBooth~\cite{shi2023instantbooth},
or other LoRA-only approaches~\cite{hu2021lora}. Through the development of
FaceChain, we have identified several potential
directions to accelerate development of Face/Human-Centric AIGC research and
application. We have designed FaceChain as a framework comprised of pluggable
components that can be easily adjusted to accommodate different styles and
personalized needs. We hope it can grow to serve the burgeoning needs from the
communities. FaceChain is open-sourced under Apache-2.0 license at
\url{https://github.com/modelscope/facechain}.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 02:20:44 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Liu",
"Yang",
""
],
[
"Yu",
"Cheng",
""
],
[
"Shang",
"Lei",
""
],
[
"Wu",
"Ziheng",
""
],
[
"Wang",
"Xingjun",
""
],
[
"Zhao",
"Yuze",
""
],
[
"Zhu",
"Lin",
""
],
[
"Cheng",
"Chen",
""
],
[
"Chen",
"Weitao",
""
],
[
"Xu",
"Chao",
""
],
[
"Xie",
"Haoyu",
""
],
[
"Yao",
"Yuan",
""
],
[
"Zhou",
"Wenmeng",
""
],
[
"Chen",
"Yingda",
""
],
[
"Xie",
"Xuansong",
""
],
[
"Sun",
"Baigui",
""
]
] |
new_dataset
| 0.999677 |
2308.14266
|
Wen Yu Chang Morris
|
Wen-Yu Chang, Yun-Nung Chen
|
SalesBot 2.0: A Human-Like Intent-Guided Chit-Chat Dataset
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent research on dialogue systems and corpora, there has been a
significant focus on two distinct categories: task-oriented (TOD) and
open-domain (chit-chat) dialogues. TOD systems aim to satisfy specific user
goals, such as finding a movie to watch, whereas open-domain systems primarily
focus on generating engaging conversations. A recent study by Chiu et al.
(2022) introduced SalesBot, which provides simulators and a dataset with
one-turn transition from chit-chat to task-oriented dialogues. However, the
previously generated data solely relied on BlenderBot, which raised concerns
about its long-turn naturalness and consistency during a conversation. To
address this issue, this paper aims to build SalesBot 2.0, a revised version of
the published data, by leveraging the commonsense knowledge of large language
models (LLMs) through proper prompting. The objective is to gradually bridge
the gap between chit-chat and TOD towards better naturalness and consistency.
The newly released large-scale dataset with detailed annotations exhibits
smoother transitions between topics and is more human-like in terms of
naturalness and consistency. It can serve as a valuable resource for both
academic research and commercial applications. Furthermore, our proposed
framework can be applied to generate numerous dialogues with various target
intents.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 02:48:49 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Chang",
"Wen-Yu",
""
],
[
"Chen",
"Yun-Nung",
""
]
] |
new_dataset
| 0.999807 |
2308.14277
|
Changyi Lin
|
Changyi Lin, Han Zhang, Jikai Xu, Lei Wu, Huazhe Xu
|
9DTact: A Compact Vision-Based Tactile Sensor for Accurate 3D Shape
Reconstruction and Generalizable 6D Force Estimation
|
Project Website: https://linchangyi1.github.io/9DTact/
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The advancements in vision-based tactile sensors have boosted the aptitude of
robots to perform contact-rich manipulation, particularly when precise
positioning and contact state of the manipulated objects are crucial for
successful execution. In this work, we present 9DTact, a straightforward yet
versatile tactile sensor that offers 3D shape reconstruction and 6D force
estimation capabilities. Conceptually, 9DTact is designed to be highly compact,
robust, and adaptable to various robotic platforms. Moreover, it is low-cost
and DIY-friendly, requiring minimal assembly skills. Functionally, 9DTact
builds upon the optical principles of DTact and is optimized to achieve 3D
shape reconstruction with enhanced accuracy and efficiency. Remarkably, we
leverage the optical and deformable properties of the translucent gel so that
9DTact can perform 6D force estimation without the participation of auxiliary
markers or patterns on the gel surface. More specifically, we collect a dataset
consisting of approximately 100,000 image-force pairs from 175 complex objects
and train a neural network to regress the 6D force, which can generalize to
unseen objects. To promote the development and applications of vision-based
tactile sensors, we open-source both the hardware and software of 9DTact as
well as present a 1-hour video tutorial.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 03:17:54 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Lin",
"Changyi",
""
],
[
"Zhang",
"Han",
""
],
[
"Xu",
"Jikai",
""
],
[
"Wu",
"Lei",
""
],
[
"Xu",
"Huazhe",
""
]
] |
new_dataset
| 0.999586 |
2308.14301
|
Chirag Shah
|
Muhammad Rahman, Sachi Figliolini, Joyce Kim, Eivy Cedeno, Charles
Kleier, Chirag Shah, Aman Chadha
|
Artificial Intelligence in Career Counseling: A Test Case with ResumAI
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The rise of artificial intelligence (AI) has led to various means of
integrating AI aimed at providing efficiency in tasks, one of which is career
counseling. A key part of getting a job is having a solid resume that passes
through the first round of programs and recruiters. It is difficult to find
good resources or schedule an appointment with a career counselor to help with
editing a resume for a specific role. With the rise of ChatGPT, Bard, and
several other AI chat programs, it is possible to provide specific, automated
feedback on various concerns to suggest places for improvement within the
context of career counseling. This paper begins with a quick literature review
on the ethical considerations and limitations of AI in career counseling. The
authors have also created their own website service, called ResumAI, to test
and review the functionality of an AI career counselor. The findings of this
study will contribute to the understanding of AI chat resume reviewer programs
and sites such as ResumAI. The implications of the findings for the field of career counseling,
AI development, and ethical practice will be discussed.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 04:35:20 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Rahman",
"Muhammad",
""
],
[
"Figliolini",
"Sachi",
""
],
[
"Kim",
"Joyce",
""
],
[
"Cedeno",
"Eivy",
""
],
[
"Kleier",
"Charles",
""
],
[
"Shah",
"Chirag",
""
],
[
"Chadha",
"Aman",
""
]
] |
new_dataset
| 0.992189 |
2308.14324
|
Pengcheng Dong
|
Pengcheng Dong, Xiaojin Mao, Lixia Fan, Wenbo Wan, Jiande Sun
|
CPFES: Physical Fitness Evaluation Based on Canadian Agility and
Movement Skill Assessment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the assessment of fundamental movement skills integrated
with physical education has focused on both teaching practice and the
feasibility of assessment. The object of assessment has shifted from multiple
ages to subdivided ages, while the content of assessment has changed from
complex and time-consuming to concise and efficient. Therefore, we apply deep
learning to physical fitness evaluation and propose a system based on the
Canadian Agility and Movement Skill Assessment (CAMSA) Physical Fitness
Evaluation System (CPFES), which evaluates children's physical fitness based on
CAMSA, and gives recommendations based on the scores obtained by CPFES to help
children grow. We have designed a landmark detection module and a pose
estimation module, and we have also designed a pose evaluation module for the
CAMSA criteria that can effectively evaluate the actions of the child being
tested. Our experimental results demonstrate the high accuracy of the proposed
system.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 06:09:25 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Dong",
"Pengcheng",
""
],
[
"Mao",
"Xiaojin",
""
],
[
"Fan",
"Lixia",
""
],
[
"Wan",
"Wenbo",
""
],
[
"Sun",
"Jiande",
""
]
] |
new_dataset
| 0.991317 |
2308.14329
|
Jin Bok Park
|
Jin Bok Park, Jinkyu Lee, Muhyun Back, Hyunmin Han, David T. Ma, Sang
Min Won, Sung Soo Hwang, Il Yong Chun
|
End-to-End Driving via Self-Supervised Imitation Learning Using Camera
and LiDAR Data
|
20 pages, 8 figures
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In autonomous driving, the end-to-end (E2E) driving approach that predicts
vehicle control signals directly from sensor data is rapidly gaining attention.
To learn a safe E2E driving system, one needs an extensive amount of driving
data and human intervention. Vehicle control data is constructed by many hours
of human driving, and it is challenging to construct large vehicle control
datasets. Often, publicly available driving datasets are collected with limited
driving scenes, and collecting vehicle control data is only available by
vehicle manufacturers. To address these challenges, this paper proposes the
first self-supervised learning framework, self-supervised imitation learning
(SSIL), that can learn E2E driving networks without using driving command data.
To construct pseudo steering angle data, the proposed SSIL predicts a pseudo target
from the vehicle's poses at the current and previous time points that are
estimated with light detection and ranging sensors. Our numerical experiments
demonstrate that the proposed SSIL framework achieves E2E driving accuracy
comparable to that of the supervised learning counterpart. In addition, our
qualitative analyses using a conventional visual explanation tool show that
NNs trained by the proposed SSIL and its supervised counterpart attend to
similar objects in making predictions.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 06:17:15 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Park",
"Jin Bok",
""
],
[
"Lee",
"Jinkyu",
""
],
[
"Back",
"Muhyun",
""
],
[
"Han",
"Hyunmin",
""
],
[
"Ma",
"David T.",
""
],
[
"Won",
"Sang Min",
""
],
[
"Hwang",
"Sung Soo",
""
],
[
"Chun",
"Il Yong",
""
]
] |
new_dataset
| 0.996073 |
2308.14353
|
Baoli Zhang
|
Baoli Zhang, Haining Xie, Pengfan Du, Junhao Chen, Pengfei Cao, Yubo
Chen, Shengping Liu, Kang Liu, Jun Zhao
|
ZhuJiu: A Multi-dimensional, Multi-faceted Chinese Benchmark for Large
Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The unprecedented performance of large language models (LLMs) requires
comprehensive and accurate evaluation. We argue that for LLM evaluation,
benchmarks need to be comprehensive and systematic. To this end, we propose the
ZhuJiu benchmark, which has the following strengths: (1) Multi-dimensional
ability coverage: We comprehensively evaluate LLMs across 7 ability dimensions
covering 51 tasks. In particular, we also propose a new benchmark that focuses on
the knowledge ability of LLMs. (2) Multi-faceted evaluation methods collaboration:
We use 3 different yet complementary evaluation methods to comprehensively
evaluate LLMs, which can ensure the authority and accuracy of the evaluation
results. (3) Comprehensive Chinese benchmark: ZhuJiu is the pioneering
benchmark that fully assesses LLMs in Chinese, while also providing equally
robust evaluation abilities in English. (4) Avoiding potential data leakage: To
avoid data leakage, we construct evaluation data specifically for 37 tasks. We
evaluate 10 current mainstream LLMs and conduct an in-depth discussion and
analysis of their results. The ZhuJiu benchmark and open-participation
leaderboard are publicly released at http://www.zhujiu-benchmark.com/ and we
also provide a demo video at https://youtu.be/qypkJ89L1Ic.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 06:56:44 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Zhang",
"Baoli",
""
],
[
"Xie",
"Haining",
""
],
[
"Du",
"Pengfan",
""
],
[
"Chen",
"Junhao",
""
],
[
"Cao",
"Pengfei",
""
],
[
"Chen",
"Yubo",
""
],
[
"Liu",
"Shengping",
""
],
[
"Liu",
"Kang",
""
],
[
"Zhao",
"Jun",
""
]
] |
new_dataset
| 0.999499 |
2308.14378
|
Ruijie Yao
|
Ruijie Yao, Sheng Jin, Lumin Xu, Wang Zeng, Wentao Liu, Chen Qian,
Ping Luo, Ji Wu
|
GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for
Multi-Label Image Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Label Image Recognition (MLIR) is a challenging task that aims to
predict multiple object labels in a single image while modeling the complex
relationships between labels and image regions. Although convolutional neural
networks and vision transformers have succeeded in processing images as regular
grids of pixels or patches, these representations are sub-optimal for capturing
irregular and discontinuous regions of interest. In this work, we present the
first fully graph convolutional model, Group K-nearest neighbor based Graph
convolutional Network (GKGNet), which models the connections between semantic
label embeddings and image patches in a flexible and unified graph structure.
To address the scale variance of different objects and to capture information
from multiple perspectives, we propose the Group KGCN module for dynamic graph
construction and message passing. Our experiments demonstrate that GKGNet
achieves state-of-the-art performance with significantly lower computational
costs on the challenging multi-label datasets, \ie MS-COCO and VOC2007
datasets. We will release the code and models to facilitate future research in
this area.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 07:50:04 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Yao",
"Ruijie",
""
],
[
"Jin",
"Sheng",
""
],
[
"Xu",
"Lumin",
""
],
[
"Zeng",
"Wang",
""
],
[
"Liu",
"Wentao",
""
],
[
"Qian",
"Chen",
""
],
[
"Luo",
"Ping",
""
],
[
"Wu",
"Ji",
""
]
] |
new_dataset
| 0.978376 |
2308.14395
|
Rui Zhang
|
Rui Zhang, Hongxia Wang, Mingshan Du, Hanqing Liu, Yang Zhou, Qiang
Zeng
|
UMMAFormer: A Universal Multimodal-adaptive Transformer Framework for
Temporal Forgery Localization
|
11 pages, 8 figures, 66 references. This paper has been accepted for
ACM MM 2023
|
Proceedings of the 31st ACM International Conference on Multimedia
(MM '23), October 29-November 3, 2023
|
10.1145/3581783.3613767
| null |
cs.MM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of artificial intelligence-generated content (AIGC) has raised
concerns about the authenticity of multimedia content in various fields.
However, existing research for forgery content detection has focused mainly on
binary classification tasks of complete videos, which has limited applicability
in industrial settings. To address this gap, we propose UMMAFormer, a novel
universal transformer framework for temporal forgery localization (TFL) that
predicts forgery segments with multimodal adaptation. Our approach introduces a
Temporal Feature Abnormal Attention (TFAA) module based on temporal feature
reconstruction to enhance the detection of temporal differences. We also design
a Parallel Cross-Attention Feature Pyramid Network (PCA-FPN) to optimize the
Feature Pyramid Network (FPN) for subtle feature enhancement. To evaluate the
proposed method, we contribute a novel Temporal Video Inpainting Localization
(TVIL) dataset specifically tailored for video inpainting scenes. Our
experiments show that our approach achieves state-of-the-art performance on
benchmark datasets, including Lav-DF, TVIL, and Psynd, significantly
outperforming previous methods. The code and data are available at
https://github.com/ymhzyj/UMMAFormer/.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 08:20:30 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Zhang",
"Rui",
""
],
[
"Wang",
"Hongxia",
""
],
[
"Du",
"Mingshan",
""
],
[
"Liu",
"Hanqing",
""
],
[
"Zhou",
"Yang",
""
],
[
"Zeng",
"Qiang",
""
]
] |
new_dataset
| 0.976973 |
2308.14401
|
Zhensu Sun
|
Zhensu Sun, Xiaoning Du, Fu Song, Li Li
|
CodeMark: Imperceptible Watermarking for Code Datasets against Neural
Code Completion Models
|
Accepted to FSE 2023
| null |
10.1145/3611643.3616297
| null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Code datasets are of immense value for training neural-network-based code
completion models, where companies or organizations have made substantial
investments to establish and process these datasets. Unfortunately, these datasets,
whether built for proprietary or public usage, face a high risk of
unauthorized exploitation resulting from data leakages, license violations, etc.
Even worse, the ``black-box'' nature of neural models sets a high barrier for
external parties to audit their training datasets, which further abets such
unauthorized usage. Currently, watermarking methods have been proposed to
prohibit inappropriate usage of image and natural language datasets. However,
due to domain specificity, they are not directly applicable to code datasets,
leaving the copyright protection of this emerging and important field of code
data still exposed to threats. To fill this gap, we propose a method, named
CodeMark, to embed user-defined imperceptible watermarks into code datasets to
trace their usage in training neural code completion models. CodeMark is based
on adaptive semantic-preserving transformations, which preserve the exact
functionality of the code data and keep the changes covert against
rule-breakers. We implement CodeMark in a toolkit and conduct an extensive
evaluation of code completion models. CodeMark is validated to fulfill all
desired properties of practical watermarks, including harmlessness to model
accuracy, verifiability, robustness, and imperceptibility.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 08:36:53 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Sun",
"Zhensu",
""
],
[
"Du",
"Xiaoning",
""
],
[
"Song",
"Fu",
""
],
[
"Li",
"Li",
""
]
] |
new_dataset
| 0.995017 |
2308.14423
|
Andrei Catalin Coman
|
Andrei C. Coman, Christos Theodoropoulos, Marie-Francine Moens, James
Henderson
|
GADePo: Graph-Assisted Declarative Pooling Transformers for
Document-Level Relation Extraction
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document-level relation extraction aims to identify relationships between
entities within a document. Current methods rely on text-based encoders and
employ various hand-coded pooling heuristics to aggregate information from
entity mentions and associated contexts. In this paper, we replace these rigid
pooling functions with explicit graph relations by leveraging the intrinsic
graph processing capabilities of the Transformer model. We propose a joint
text-graph Transformer model, and a graph-assisted declarative pooling (GADePo)
specification of the input which provides explicit and high-level instructions
for information aggregation. This allows the pooling process to be guided by
domain-specific knowledge or desired outcomes but still learned by the
Transformer, leading to more flexible and customizable pooling strategies. We
extensively evaluate our method across diverse datasets and models, and show
that our approach yields promising results that are comparable to those
achieved by the hand-coded pooling functions.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 09:04:03 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Coman",
"Andrei C.",
""
],
[
"Theodoropoulos",
"Christos",
""
],
[
"Moens",
"Marie-Francine",
""
],
[
"Henderson",
"James",
""
]
] |
new_dataset
| 0.987882 |
2308.14492
|
Zhongang Cai
|
Zhongang Cai, Liang Pan, Chen Wei, Wanqi Yin, Fangzhou Hong, Mingyuan
Zhang, Chen Change Loy, Lei Yang, Ziwei Liu
|
PointHPS: Cascaded 3D Human Pose and Shape Estimation from Point Clouds
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human pose and shape estimation (HPS) has attracted increasing attention in
recent years. While most existing studies focus on HPS from 2D images or videos
with inherent depth ambiguity, there is a surging need to investigate HPS from
3D point clouds as depth sensors have been frequently employed in commercial
devices. However, real-world sensory 3D points are usually noisy and
incomplete, and human bodies can exhibit highly diverse poses.
To tackle these challenges, we propose a principled framework, PointHPS, for
accurate 3D HPS from point clouds captured in real-world settings, which
iteratively refines point features through a cascaded architecture.
Specifically, each stage of PointHPS performs a series of downsampling and
upsampling operations to extract and collate both local and global cues, which
are further enhanced by two novel modules: 1) Cross-stage Feature Fusion (CFF)
for multi-scale feature propagation that allows information to flow effectively
through the stages, and 2) Intermediate Feature Enhancement (IFE) for
body-aware feature aggregation that improves feature quality after each stage.
To facilitate a comprehensive study under various scenarios, we conduct our
experiments on two large-scale benchmarks, comprising i) a dataset that
features diverse subjects and actions captured by real commercial sensors in a
laboratory environment, and ii) controlled synthetic data generated with
realistic considerations such as clothed humans in crowded outdoor scenes.
Extensive experiments demonstrate that PointHPS, with its powerful point
feature extraction and processing scheme, outperforms State-of-the-Art methods
by significant margins across the board. Homepage:
https://caizhongang.github.io/projects/PointHPS/.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 11:10:14 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Cai",
"Zhongang",
""
],
[
"Pan",
"Liang",
""
],
[
"Wei",
"Chen",
""
],
[
"Yin",
"Wanqi",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Zhang",
"Mingyuan",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Yang",
"Lei",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.973666 |
2308.14498
|
Sueda Taner
|
Sueda Taner, Victoria Palhares, and Christoph Studer
|
Channel Charting in Real-World Coordinates
|
To be presented at IEEE GLOBECOM 2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Channel charting is an emerging self-supervised method that maps channel
state information (CSI) to a low-dimensional latent space, which represents
pseudo-positions of user equipments (UEs). While this latent space preserves
local geometry, i.e., nearby UEs are nearby in latent space, the
pseudo-positions are in arbitrary coordinates and global geometry is not
preserved. In order to enable channel charting in real-world coordinates, we
propose a novel bilateration loss for multipoint wireless systems in which only
the access point (AP) locations are known--no geometrical models or
ground-truth UE position information is required. The idea behind this
bilateration loss is to compare the received power at pairs of APs in order to
determine whether a UE should be placed closer to one AP or the other in latent
space. We demonstrate the efficacy of our method using channel vectors from a
commercial ray-tracer.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 11:19:20 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Taner",
"Sueda",
""
],
[
"Palhares",
"Victoria",
""
],
[
"Studer",
"Christoph",
""
]
] |
new_dataset
| 0.998272 |
2308.14508
|
Yushi Bai
|
Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian
Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang,
Juanzi Li
|
LongBench: A Bilingual, Multitask Benchmark for Long Context
Understanding
|
18 pages, 6 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although large language models (LLMs) demonstrate impressive performance for
many language tasks, most of them can only handle texts a few thousand tokens
long, limiting their applications on longer sequence inputs, such as books,
reports, and codebases. Recent works have proposed methods to improve LLMs'
long context capabilities by extending context windows and more sophisticated
memory mechanisms. However, comprehensive benchmarks tailored for evaluating
long context understanding are lacking. In this paper, we introduce LongBench,
the first bilingual, multi-task benchmark for long context understanding,
enabling a more rigorous evaluation of long context understanding. LongBench
comprises 21 datasets across 6 task categories in both English and Chinese,
with an average length of 6,711 words (English) and 13,386 characters
(Chinese). These tasks cover key long-text application areas including
single-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks,
and code completion. All datasets in LongBench are standardized into a unified
format, allowing for effortless automatic evaluation of LLMs. Upon
comprehensive evaluation of 8 LLMs on LongBench, we find that: (1) Commercial
model (GPT-3.5-Turbo-16k) outperforms other open-sourced models, but still
struggles on longer contexts. (2) Scaled position embedding and fine-tuning on
longer sequences lead to substantial improvement on long context understanding.
(3) Context compression techniques such as retrieval bring improvements for
models with weak ability on long contexts, but their performance still lags behind
models that have strong long context understanding capability. The code and
datasets are available at https://github.com/THUDM/LongBench.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 11:53:40 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Bai",
"Yushi",
""
],
[
"Lv",
"Xin",
""
],
[
"Zhang",
"Jiajie",
""
],
[
"Lyu",
"Hongchang",
""
],
[
"Tang",
"Jiankai",
""
],
[
"Huang",
"Zhidian",
""
],
[
"Du",
"Zhengxiao",
""
],
[
"Liu",
"Xiao",
""
],
[
"Zeng",
"Aohan",
""
],
[
"Hou",
"Lei",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Tang",
"Jie",
""
],
[
"Li",
"Juanzi",
""
]
] |
new_dataset
| 0.999575 |
2308.14527
|
Jie Li
|
Jie Li, Yi Liu, Xiaohu Tang, Yunghsiang S. Han, Bo Bai, and Gong Zhang
|
MDS Array Codes With Small Sub-packetization Levels and Small Repair
Degrees
|
Submitted to the IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-rate minimum storage regenerating (MSR) codes are known to require a
large sub-packetization level, which can make meta-data management difficult
and hinder implementation in practical systems. A few maximum distance
separable (MDS) array code constructions have been proposed to attain a much
smaller sub-packetization level by sacrificing a bit of repair bandwidth.
However, to the best of our knowledge, only one construction by Guruswami et
al. can support the repair of a failed node without contacting all the
surviving nodes. This construction is certainly of theoretical interest but not
yet practical due to its requirement for very large code parameters. In this
paper, we propose a generic transformation that can convert any $(\overline{n},
\overline{k})$ MSR code with a repair degree of $\overline{d}<\overline{n}-1$
into another $(n=s\overline{n},k)$ MDS array code that supports $d<n-1$ with a
small sub-packetization level and $(1+\epsilon)$-optimal repair bandwidth
(i.e., $1+\epsilon$ times the optimal value) under a specific condition. We
obtain three MDS array codes with small sub-packetization levels and
$(1+\epsilon)$-optimal repair bandwidth by applying this transformation to
three known MSR codes. All the new MDS array codes have a small repair degree
of $d<n-1$ and work for both small and large code parameters.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 12:29:01 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Li",
"Jie",
""
],
[
"Liu",
"Yi",
""
],
[
"Tang",
"Xiaohu",
""
],
[
"Han",
"Yunghsiang S.",
""
],
[
"Bai",
"Bo",
""
],
[
"Zhang",
"Gong",
""
]
] |
new_dataset
| 0.998015 |
2308.14541
|
Alexandre Benatti
|
Alexandre Benatti, Luciano da Fontoura Costa
|
Multilayer Multiset Neuronal Networks -- MMNNs
|
32 pages, 21 figures
| null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The coincidence similarity index, based on a combination of the Jaccard and
overlap similarity indices, has noticeable properties in comparing and
classifying data, including enhanced selectivity and sensitivity, intrinsic
normalization, and robustness to data perturbations and outliers. These
features allow multiset neurons, which are based on the coincidence similarity
operation, to perform effective pattern recognition applications, including the
challenging task of image segmentation. A few prototype points have been used
in previous related approaches to represent each pattern to be identified, each
of them being associated with respective multiset neurons. The segmentation of
the regions can then proceed by taking into account the outputs of these
neurons. The present work describes multilayer multiset neuronal networks
incorporating two or more layers of coincidence similarity neurons. In
addition, as a means to improve performance, this work also explores the
utilization of counter-prototype points, which are assigned to the image
regions to be avoided. This approach is shown to allow effective segmentation
of complex regions despite considering only one prototype and one
counter-prototype point. As reported here, the balanced accuracy landscapes to
be optimized in order to identify the weight of the neurons in subsequent
layers have been found to be relatively smooth, while typically involving more
than one attraction basin. The use of a simple gradient-based optimization
methodology has been demonstrated to effectively train the considered neural
networks with several architectures, at least for the given data type,
configuration of parameters, and network architecture.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 12:55:13 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Benatti",
"Alexandre",
""
],
[
"Costa",
"Luciano da Fontoura",
""
]
] |
new_dataset
| 0.997265 |
2308.14558
|
Alexander Barg
|
Alexander Barg, Ohad Elishco, Ryan Gabrys, Geyang Wang, Eitan Yaakobi
|
Storage codes and recoverable systems on lines and grids
| null | null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A storage code is an assignment of symbols to the vertices of a connected
graph $G(V,E)$ with the property that the value of each vertex is a function of
the values of its neighbors, or more generally, of a certain neighborhood of
the vertex in $G$. In this work we introduce a new construction method of
storage codes, enabling one to construct new codes from known ones via an
interleaving procedure driven by resolvable designs. We also study storage
codes on $\mathbb Z$ and ${\mathbb Z}^2$ (lines and grids), finding closed-form
expressions for the capacity of several one- and two-dimensional systems
depending on their recovery set, using connections between storage codes,
graphs, anticodes, and difference-avoiding sets.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 13:20:00 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Barg",
"Alexander",
""
],
[
"Elishco",
"Ohad",
""
],
[
"Gabrys",
"Ryan",
""
],
[
"Wang",
"Geyang",
""
],
[
"Yaakobi",
"Eitan",
""
]
] |
new_dataset
| 0.999111 |
2308.14577
|
Thomas Manzini
|
Thomas Manzini, Robin Murphy, David Merrick
|
Quantitative Data Analysis: CRASAR Small Unmanned Aerial Systems at
Hurricane Ian
|
6 pages, 4 figures, 3 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides a summary of the 281 sorties that were flown by the 10
different models of small unmanned aerial systems (sUAS) at Hurricane Ian, and
the failures made in the field. These 281 sorties, supporting 44 missions,
represent the largest use of sUAS in a disaster to date (previously Hurricane
Florence with 260 sorties). The sUAS operations at Hurricane Ian differ
slightly from prior operations as they included the first documented uses of
drones performing interior search for victims, and the first use of a VTOL
fixed wing aircraft during a large scale disaster. However, there are
substantive similarities to prior drone operations. Most notably, rotorcraft
continue to perform the vast majority of flights, wireless data transmission
capacity continues to be a limitation, and the lack of centralized control for
unmanned and manned aerial systems continues to cause operational friction.
This work continues by documenting the failures, both human and technological,
made in the field, and concludes with a discussion summarizing potential areas
for further work to improve sUAS response to large scale disasters.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 13:43:24 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Manzini",
"Thomas",
""
],
[
"Murphy",
"Robin",
""
],
[
"Merrick",
"David",
""
]
] |
new_dataset
| 0.977125 |
2308.14679
|
Gabriela Acevedo
|
Gabriela T. Acevedo Trebbau, Andrea Bandini, Diego L. Guarin
|
Video-Based Hand Pose Estimation for Remote Assessment of Bradykinesia
in Parkinson's Disease
|
12 pages, 3 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There is a growing interest in using pose estimation algorithms for
video-based assessment of Bradykinesia in Parkinson's Disease (PD) to
facilitate remote disease assessment and monitoring. However, the accuracy of
pose estimation algorithms in videos from video streaming services during
Telehealth appointments has not been studied. In this study, we used seven
off-the-shelf hand pose estimation models to estimate the movement of the thumb
and index fingers in videos of the finger-tapping (FT) test recorded from
Healthy Controls (HC) and participants with PD under two different
conditions: streaming (videos recorded during a live Zoom meeting) and
on-device (videos recorded locally with high-quality cameras). The accuracy and
reliability of the models were estimated by comparing the models' output with
manual results. Three of the seven models demonstrated good accuracy for
on-device recordings, and the accuracy decreased significantly for streaming
recordings. We observed a negative correlation between movement speed and the
model's accuracy for the streaming recordings. Additionally, we evaluated the
reliability of ten movement features related to bradykinesia extracted from
video recordings of PD patients performing the FT test. While most of the
features demonstrated excellent reliability for on-device recordings, most of
the features demonstrated poor to moderate reliability for streaming
recordings. Our findings highlight the limitations of pose estimation
algorithms when applied to video recordings obtained during Telehealth visits,
and demonstrate that on-device recordings can be used for automatic
video-assessment of bradykinesia in PD.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 16:15:23 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Trebbau",
"Gabriela T. Acevedo",
""
],
[
"Bandini",
"Andrea",
""
],
[
"Guarin",
"Diego L.",
""
]
] |
new_dataset
| 0.997906 |
2308.14710
|
Xudong Wang
|
Xudong Wang and Ishan Misra and Ziyun Zeng and Rohit Girdhar and
Trevor Darrell
|
VideoCutLER: Surprisingly Simple Unsupervised Video Instance
Segmentation
|
Preprint. Code: https://github.com/facebookresearch/CutLER
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing approaches to unsupervised video instance segmentation typically
rely on motion estimates and experience difficulties tracking small or
divergent motions. We present VideoCutLER, a simple method for unsupervised
multi-instance video segmentation without using motion-based learning signals
like optical flow or training on natural videos. Our key insight is that using
high-quality pseudo masks and a simple video synthesis method for model
training is surprisingly sufficient to enable the resulting video model to
effectively segment and track multiple instances across video frames. We show
the first competitive unsupervised learning results on the challenging
YouTubeVIS-2019 benchmark, achieving 50.7% APvideo^50, surpassing the previous
state-of-the-art by a large margin. VideoCutLER can also serve as a strong
pretrained model for supervised video instance segmentation tasks, exceeding
DINO by 15.9% on YouTubeVIS-2019 in terms of APvideo.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:10:12 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Wang",
"Xudong",
""
],
[
"Misra",
"Ishan",
""
],
[
"Zeng",
"Ziyun",
""
],
[
"Girdhar",
"Rohit",
""
],
[
"Darrell",
"Trevor",
""
]
] |
new_dataset
| 0.991898 |
2308.14713
|
Aron Schmied
|
Aron Schmied, Tobias Fischer, Martin Danelljan, Marc Pollefeys, Fisher
Yu
|
R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras
|
Accepted to ICCV 2023. Project page is available at
https://www.vis.xyz/pub/r3d3/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense 3D reconstruction and ego-motion estimation are key challenges in
autonomous driving and robotics. Compared to the complex, multi-modal systems
deployed today, multi-camera systems provide a simpler, low-cost alternative.
However, camera-based 3D reconstruction of complex dynamic scenes has proven
extremely difficult, as existing solutions often produce incomplete or
incoherent results. We propose R3D3, a multi-camera system for dense 3D
reconstruction and ego-motion estimation. Our approach iterates between
geometric estimation that exploits spatial-temporal information from multiple
cameras, and monocular depth refinement. We integrate multi-camera feature
correlation and dense bundle adjustment operators that yield robust geometric
depth and pose estimates. To improve reconstruction where geometric depth is
unreliable, e.g. for moving objects or low-textured regions, we introduce
learnable scene priors via a depth refinement network. We show that this design
enables a dense, consistent 3D reconstruction of challenging, dynamic outdoor
environments. Consequently, we achieve state-of-the-art dense depth prediction
on the DDAD and NuScenes benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:13:49 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Schmied",
"Aron",
""
],
[
"Fischer",
"Tobias",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Yu",
"Fisher",
""
]
] |
new_dataset
| 0.994604 |
2308.14726
|
Zhen Xing
|
Zhixin Ling, Zhen Xing, Xiangdong Zhou, Manliang Cao, Guichun Zhou
|
PanoSwin: a Pano-style Swin Transformer for Panorama Understanding
|
CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In panorama understanding, the widely used equirectangular projection (ERP)
entails boundary discontinuity and spatial distortion. It severely degrades
conventional CNNs and vision Transformers on panoramas. In this paper, we
propose a simple yet effective architecture named PanoSwin to learn panorama
representations with ERP. To deal with the challenges brought by
equirectangular projection, we explore a pano-style shift windowing scheme and
novel pitch attention to address the boundary discontinuity and the spatial
distortion, respectively. Besides, based on spherical distance and Cartesian
coordinates, we adapt absolute positional embeddings and relative positional
biases for panoramas to enhance panoramic geometry information. Realizing that
planar image understanding might share some common knowledge with panorama
understanding, we devise a novel two-stage learning framework to facilitate
knowledge transfer from the planar images to panoramas. We conduct experiments
against the state-of-the-art on various panoramic tasks, i.e., panoramic object
detection, panoramic classification, and panoramic layout estimation. The
experimental results demonstrate the effectiveness of PanoSwin in panorama
understanding.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:30:14 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Ling",
"Zhixin",
""
],
[
"Xing",
"Zhen",
""
],
[
"Zhou",
"Xiangdong",
""
],
[
"Cao",
"Manliang",
""
],
[
"Zhou",
"Guichun",
""
]
] |
new_dataset
| 0.996555 |
2308.14731
|
Chia-Yi Su
|
Chia-Yi Su and Collin McMillan
|
Distilled GPT for Source Code Summarization
|
15 pages + 3 figures + 5 references. Preprint In Review Aug. 2023
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A code summary is a brief natural language description of source code.
Summaries are usually only a single sentence long, and yet form the backbone of
developer documentation. A short description such as "changes all visible
polygons to the color blue" can give a programmer a high-level idea of what
code does without the effort of reading the code itself. Recently, products
based on Large Language Models such as ChatGPT have demonstrated a strong
ability to write these descriptions automatically. However, to use these tools,
programmers must send their code to untrusted third parties for processing
(e.g., via an API call). This loss of custody is not acceptable to many
organizations. In this paper, we present an alternative: we train an open
source model using sample output generated by GPT-3.5 in a process related to
knowledge distillation. Our model is small enough (350m parameters) to be run
on a single 16gb GPU, yet we show in our evaluation that it is large enough to
mimic GPT-3.5 on this task.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:34:07 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Su",
"Chia-Yi",
""
],
[
"McMillan",
"Collin",
""
]
] |
new_dataset
| 0.998003 |
2308.14748
|
Jianfeng Zhang
|
Jianfeng Zhang and Hanshu Yan and Zhongcong Xu and Jiashi Feng and Jun
Hao Liew
|
MagicAvatar: Multimodal Avatar Generation and Animation
|
Project page: https://magic-avatar.github.io/
| null | null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report presents MagicAvatar, a framework for multimodal video generation
and animation of human avatars. Unlike most existing methods that generate
avatar-centric videos directly from multimodal inputs (e.g., text prompts),
MagicAvatar explicitly disentangles avatar video generation into two stages:
(1) multimodal-to-motion and (2) motion-to-video generation. The first stage
translates the multimodal inputs into motion/ control signals (e.g., human
pose, depth, DensePose); while the second stage generates avatar-centric video
guided by these motion signals. Additionally, MagicAvatar supports avatar
animation by simply providing a few images of the target person. This
capability enables the animation of the provided human identity according to
the specific motion derived from the first stage. We demonstrate the
flexibility of MagicAvatar through various applications, including text-guided
and video-guided avatar generation, as well as multimodal avatar animation.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:56:18 GMT"
}
] | 2023-08-29T00:00:00 |
[
[
"Zhang",
"Jianfeng",
""
],
[
"Yan",
"Hanshu",
""
],
[
"Xu",
"Zhongcong",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Liew",
"Jun Hao",
""
]
] |
new_dataset
| 0.999234 |
2206.15097
|
Adri\'an Goga
|
Adri\'an Goga and Andrej Bal\'a\v{z}
|
Prefix-free parsing for building large tunnelled Wheeler graphs
|
12 pages, 3 figures, 2 tables, to be published in the WABI (Workshop
on Algorithms in Bioinformatics) 2022 conference proceedings
| null |
10.4230/LIPIcs.WABI.2022.18
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new technique for creating a space-efficient index for large
repetitive text collections, such as pangenomic databases containing sequences
of many individuals from the same species. We combine two recent techniques
from this area: Wheeler graphs (Gagie et al., 2017) and prefix-free parsing
(PFP, Boucher et al., 2019). Wheeler graphs (WGs) are a general framework
encompassing several indexes based on the Burrows-Wheeler transform (BWT), such
as the FM-index. Wheeler graphs admit a succinct representation which can be
further compacted by employing the idea of tunnelling, which exploits
redundancies in the form of parallel, equally-labelled paths called blocks that
can be merged into a single path. The problem of finding the optimal set of
blocks for tunnelling, i.e. the one that minimizes the size of the resulting
WG, is known to be NP-complete and remains the most computationally challenging
part of the tunnelling process.
To find an adequate set of blocks in less time, we propose a new method based
on the prefix-free parsing (PFP). The idea of PFP is to divide the input text
into phrases of roughly equal sizes that overlap by a fixed number of
characters. The original text is represented by a sequence of phrase ranks (the
parse) and a list of all used phrases (the dictionary). In repetitive texts,
the PFP of the text is generally much shorter than the original. To speed up
the block selection for tunnelling, we apply the PFP to obtain the parse and
the dictionary of the text, tunnel the WG of the parse using existing
heuristics and subsequently use this tunnelled parse to construct a compact WG
of the original text. Compared with constructing a WG from the original text
without PFP, our method is much faster and uses less memory on collections of
pangenomic sequences. Therefore, our method enables the use of WGs as a
pangenomic reference for real-world datasets.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 07:55:50 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Goga",
"Adrián",
""
],
[
"Baláž",
"Andrej",
""
]
] |
new_dataset
| 0.99663 |
2211.04154
|
Dominique Geissler
|
Dominique Geissler, Dominik B\"ar, Nicolas Pr\"ollochs, and Stefan
Feuerriegel
|
Russian propaganda on social media during the 2022 invasion of Ukraine
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Russian invasion of Ukraine in February 2022 was accompanied by practices
of information warfare, yet existing evidence is largely anecdotal while
large-scale empirical evidence is lacking. Here, we analyze the spread of
pro-Russian support on social media. For this, we collected N = 349,455
messages from Twitter with pro-Russian support. Our findings suggest that
pro-Russian messages received ~251,000 retweets and thereby reached around 14.4
million users. We further provide evidence that bots played a disproportionate
role in the dissemination of pro-Russian messages and amplified their
proliferation in early-stage diffusion. Countries that abstained from voting on
the United Nations Resolution ES-11/1 such as India, South Africa, and Pakistan
showed pronounced activity of bots. Overall, 20.28% of the spreaders are
classified as bots, most of which were created at the beginning of the
invasion. Together, our findings suggest the presence of a large-scale Russian
propaganda campaign on social media and highlight the new threats to society
that originate from it. Our results also suggest that curbing bots may be an
effective strategy to mitigate such campaigns.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 10:52:15 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 15:10:41 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Mar 2023 09:13:11 GMT"
},
{
"version": "v4",
"created": "Thu, 4 May 2023 14:33:32 GMT"
},
{
"version": "v5",
"created": "Fri, 25 Aug 2023 15:25:33 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Geissler",
"Dominique",
""
],
[
"Bär",
"Dominik",
""
],
[
"Pröllochs",
"Nicolas",
""
],
[
"Feuerriegel",
"Stefan",
""
]
] |
new_dataset
| 0.979162 |
2212.14632
|
Hashim A. Hashim
|
Hashim A. Hashim, Abdelrahman E.E. Eltoukhy, and Akos Odry
|
Observer-based Controller for VTOL-UAVs Tracking using Direct
Vision-Aided Inertial Navigation Measurements
| null | null |
10.1016/j.isatra.2022.12.014
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel observer-based controller for Vertical Take-Off
and Landing (VTOL) Unmanned Aerial Vehicle (UAV) designed to directly receive
measurements from a Vision-Aided Inertial Navigation System (VA-INS) and
produce the required thrust and rotational torque inputs. The VA-INS is
composed of a vision unit (monocular or stereo camera) and a typical low-cost
6-axis Inertial Measurement Unit (IMU) equipped with an accelerometer and a
gyroscope. A major benefit of this approach is its applicability for
environments where the Global Positioning System (GPS) is inaccessible. The
proposed VTOL-UAV observer utilizes IMU and feature measurements to accurately
estimate attitude (orientation), gyroscope bias, position, and linear velocity.
The ability to use VA-INS measurements directly makes the proposed observer design
more computationally efficient as it obviates the need for attitude and
position reconstruction. Once the motion components are estimated, the
observer-based controller is used to control the VTOL-UAV attitude, angular
velocity, position, and linear velocity guiding the vehicle along the desired
trajectory in six degrees of freedom (6 DoF). The closed-loop estimation and
the control errors of the observer-based controller are proven to be
exponentially stable starting from almost any initial condition. To achieve
global and unique VTOL-UAV representation in 6 DoF, the proposed approach is
posed on the Lie Group and the design in unit-quaternion is presented. Although
the proposed approach is described in a continuous form, the discrete version
is provided and tested. Keywords: Vision-aided inertial navigation system,
unmanned aerial vehicle, vertical take-off and landing, stochastic, noise,
Robotics, control systems, air mobility, observer-based controller algorithm,
landmark measurement, exponential stability.
|
[
{
"version": "v1",
"created": "Fri, 30 Dec 2022 11:02:17 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 14:33:27 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Hashim",
"Hashim A.",
""
],
[
"Eltoukhy",
"Abdelrahman E. E.",
""
],
[
"Odry",
"Akos",
""
]
] |
new_dataset
| 0.997484 |
2302.02343
|
Michael Pradel
|
Beatriz Souza and Michael Pradel
|
LExecutor: Learning-Guided Execution
|
Accepted in research track of the ACM Joint European Software
Engineering Conference and Symposium on the Foundations of Software
Engineering (ESEC/FSE) 2023
| null | null | null |
cs.SE cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Executing code is essential for various program analysis tasks, e.g., to
detect bugs that manifest through exceptions or to obtain execution traces for
further dynamic analysis. However, executing an arbitrary piece of code is
often difficult in practice, e.g., because of missing variable definitions,
missing user inputs, and missing third-party dependencies. This paper presents
LExecutor, a learning-guided approach for executing arbitrary code snippets in
an underconstrained way. The key idea is to let a neural model predict missing
values that otherwise would cause the program to get stuck, and to inject these
values into the execution. For example, LExecutor injects likely values for
otherwise undefined variables and likely return values of calls to otherwise
missing functions. We evaluate the approach on Python code from popular
open-source projects and on code snippets extracted from Stack Overflow. The
neural model predicts realistic values with an accuracy between 79.5% and
98.2%, allowing LExecutor to closely mimic real executions. As a result, the
approach successfully executes significantly more code than any available
technique, such as simply executing the code as-is. For example, executing the
open-source code snippets as-is covers only 4.1% of all lines, because the code
crashes early on, whereas LExecutor achieves a coverage of 51.6%.
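The core idea of injecting predicted values can be illustrated with a few lines of Python. The heuristic predictor below is a deliberately trivial stand-in for the neural model, and the retry loop re-runs the snippet from the start after each injection; both are assumptions made only for this sketch, not the tool's actual mechanism.

```python
# A minimal sketch of learning-guided execution in the spirit of LExecutor:
# run a snippet, and whenever an undefined name would crash it, inject a
# predicted value and retry. The real system queries a neural model; a
# trivial name-based heuristic stands in here.

def predict_value(name: str):
    """Stand-in for the neural predictor: guess a plausible value from the name."""
    if name.endswith("s") or "list" in name:
        return []
    if "count" in name or "num" in name or name.endswith("_id"):
        return 0
    if "name" in name or "path" in name:
        return "dummy"
    return 1  # generic fallback so arithmetic on injected values still works

def guided_exec(snippet: str, max_injections: int = 20) -> dict:
    """Run `snippet`, injecting predicted values for undefined names."""
    env: dict = {}
    for _ in range(max_injections):
        try:
            exec(snippet, env)       # note: the snippet restarts after each injection
            return env
        except NameError as e:
            missing = str(e).split("'")[1]          # "name 'x' is not defined"
            env[missing] = predict_value(missing)   # inject and retry
    raise RuntimeError("too many injections")

if __name__ == "__main__":
    code = "total = item_count * price\nprint('total =', total)"
    guided_exec(code)   # item_count and price receive injected values, then it runs
```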
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 09:12:07 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 10:30:53 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 14:44:06 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Souza",
"Beatriz",
""
],
[
"Pradel",
"Michael",
""
]
] |
new_dataset
| 0.966475 |
2303.05501
|
Jiayuan Mao
|
Jiayuan Mao, Tom\'as Lozano-P\'erez, Joshua B. Tenenbaum, Leslie Pack
Kaelbling
|
PDSketch: Integrated Planning Domain Programming and Learning
|
Minor typo fixes. NeurIPS 2022. Project page:
https://pdsketch.csail.mit.edu
| null | null | null |
cs.AI cs.LG cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies a model learning and online planning approach towards
building flexible and general robots. Specifically, we investigate how to
exploit the locality and sparsity structures in the underlying environmental
transition model to improve model generalization, data-efficiency, and
runtime-efficiency. We present a new domain definition language, named
PDSketch. It allows users to flexibly define high-level structures in the
transition models, such as object and feature dependencies, in a way similar to
how programmers use TensorFlow or PyTorch to specify kernel sizes and hidden
dimensions of a convolutional neural network. The details of the transition
model will be filled in by trainable neural networks. Based on the defined
structures and learned parameters, PDSketch automatically generates
domain-independent planning heuristics without additional training. The derived
heuristics accelerate the performance-time planning for novel goals.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 18:54:12 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 17:48:05 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Mao",
"Jiayuan",
""
],
[
"Lozano-Pérez",
"Tomás",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Kaelbling",
"Leslie Pack",
""
]
] |
new_dataset
| 0.979482 |
2303.07106
|
Junichiro Sugihara
|
Junichiro Sugihara, Takuzumi Nishio, Keisuke Nagato, Masayuki Nakao,
and Moju Zhao
|
Design, Control, and Motion Strategy of TRADY: Tilted-Rotor-Equipped
Aerial Robot With Autonomous In-flight Assembly and Disassembly Ability
| null |
Adv. Intell. Syst. 2023
|
10.1002/aisy.202300191
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In previous research, various types of aerial robots were developed to
improve maneuverability or manipulation abilities. However, there was a
challenge in achieving both mobility and manipulation capabilities
simultaneously. This is because aerial robots with high mobility lack the
necessary rotors to perform manipulation tasks, while those with manipulation
ability are too large to achieve high mobility. To address this issue, a new
aerial robot called TRADY was introduced in this article. TRADY is a
tilted-rotor-equipped aerial robot that can autonomously assemble and
disassemble in-flight, allowing for a switch in control model between
under-actuated and fully-actuated models. The system features a novel docking
mechanism and optimized rotor configuration, as well as a control system that
can transition between under-actuated and fully-actuated modes and compensate
for discrete changes. Additionally, a new motion strategy for
assembly/disassembly motion that includes recovery behavior from hazardous
conditions was introduced. Experimental results showed that TRADY can
successfully execute aerial assembly/disassembly motions with a 90% success
rate and generate more than nine times the torque of a single unit in the
assembly state. This is the first robot system capable of performing both
assembly and disassembly while seamlessly transitioning between fully-actuated
and under-actuated models.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 13:42:57 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 02:45:14 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Sugihara",
"Junichiro",
""
],
[
"Nishio",
"Takuzumi",
""
],
[
"Nagato",
"Keisuke",
""
],
[
"Nakao",
"Masayuki",
""
],
[
"Zhao",
"Moju",
""
]
] |
new_dataset
| 0.999184 |
2303.08254
|
Tomasz Winiarski
|
Tomasz Winiarski
|
MeROS: SysML-based Metamodel for ROS-based Systems
| null |
IEEE Access, vol. 11, pp. 82802-82815, 2023
|
10.1109/ACCESS.2023.3301727
| null |
cs.RO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The complexity of today's robot control systems implies difficulty in
developing them efficiently and reliably. Systems engineering (SE) and
frameworks come to help. The framework metamodels are needed to support the
standardisation and correctness of the created application models. Although the
use of frameworks is widespread nowadays, for the most popular of them, Robot
Operating System (ROS), a contemporary metamodel has been missing so far. This
article proposes a new metamodel for ROS called MeROS, which addresses the
running system and developer workspace. The ROS comes in two versions: ROS 1
and ROS 2. The metamodel includes both versions. In particular, the latest ROS
1 concepts are considered, such as nodelet, action, and metapackage. An
essential addition to the original ROS concepts is the grouping of these
concepts, which provides an opportunity to illustrate the system's
decomposition and varying degrees of detail in its presentation. The metamodel
is derived from the requirements and verified on the practical example of Rico
assistive robot. The matter is described in a standardised way in SysML
(Systems Modeling Language). Hence, common development tools that support SysML
can help develop robot controllers in the spirit of SE.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 22:10:57 GMT"
},
{
"version": "v10",
"created": "Fri, 25 Aug 2023 15:55:19 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 01:38:13 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2023 07:35:09 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Apr 2023 22:48:02 GMT"
},
{
"version": "v5",
"created": "Tue, 18 Apr 2023 06:05:13 GMT"
},
{
"version": "v6",
"created": "Sat, 22 Apr 2023 22:26:54 GMT"
},
{
"version": "v7",
"created": "Sun, 7 May 2023 18:46:26 GMT"
},
{
"version": "v8",
"created": "Wed, 10 May 2023 20:30:09 GMT"
},
{
"version": "v9",
"created": "Thu, 1 Jun 2023 09:28:58 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Winiarski",
"Tomasz",
""
]
] |
new_dataset
| 0.996001 |
2303.11089
|
Ziqiao Peng
|
Ziqiao Peng, Haoyu Wu, Zhenbo Song, Hao Xu, Xiangyu Zhu, Jun He,
Hongyan Liu, Zhaoxin Fan
|
EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
|
Accepted by ICCV 2023
| null | null | null |
cs.CV cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech-driven 3D face animation aims to generate realistic facial expressions
that match the speech content and emotion. However, existing methods often
neglect emotional facial expressions or fail to disentangle them from speech
content. To address this issue, this paper proposes an end-to-end neural
network to disentangle different emotions in speech so as to generate rich 3D
facial expressions. Specifically, we introduce the emotion disentangling
encoder (EDE) to disentangle the emotion and content in the speech by
cross-reconstructed speech signals with different emotion labels. Then an
emotion-guided feature fusion decoder is employed to generate a 3D talking face
with enhanced emotion. The decoder is driven by the disentangled identity,
emotional, and content embeddings so as to generate controllable personal and
emotional styles. Finally, considering the scarcity of the 3D emotional talking
face data, we resort to the supervision of facial blendshapes, which enables
the reconstruction of plausible 3D faces from 2D emotional data, and contribute
a large-scale 3D emotional talking face dataset (3D-ETF) to train the network.
Our experiments and user studies demonstrate that our approach outperforms
state-of-the-art methods and exhibits more diverse facial movements. We
recommend watching the supplementary video:
https://ziqiaopeng.github.io/emotalk
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 13:22:04 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 04:50:47 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Peng",
"Ziqiao",
""
],
[
"Wu",
"Haoyu",
""
],
[
"Song",
"Zhenbo",
""
],
[
"Xu",
"Hao",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"He",
"Jun",
""
],
[
"Liu",
"Hongyan",
""
],
[
"Fan",
"Zhaoxin",
""
]
] |
new_dataset
| 0.998209 |
2303.12976
|
Trung Pham
|
Trung Pham, Mehran Maghoumi, Wanli Jiang, Bala Siva Sashank
Jujjavarapu, Mehdi Sajjadi, Xin Liu, Hsuan-Chu Lin, Bor-Jeng Chen, Giang
Truong, Chao Fang, Junghyun Kwon, Minwoo Park
|
NVAutoNet: Fast and Accurate 360$^{\circ}$ 3D Visual Perception For Self
Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust, real-time perception of the 3D world is essential to the autonomous
vehicle. We introduce an end-to-end surround camera perception system, named
NVAutoNet, for self-driving. NVAutoNet is a multi-task, multi-camera network
which takes a variable set of time-synced camera images as input and produces a
rich collection of 3D signals such as sizes, orientations, locations of
obstacles, parking spaces and free-spaces, etc. NVAutoNet is modular and
end-to-end: 1) the outputs can be consumed directly by downstream modules
without any post-processing such as clustering and fusion -- improving speed of
model deployment and in-car testing; 2) the whole network training is done in
one single stage -- improving speed of model improvement and iterations. The
network is carefully designed to have high accuracy while running at 53 fps on
NVIDIA Orin SoC (system-on-a-chip). The network is robust to sensor mounting
variations (within some tolerances) and can be quickly customized for different
vehicle types via efficient model fine-tuning.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 00:55:48 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 18:36:33 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 00:15:14 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Pham",
"Trung",
""
],
[
"Maghoumi",
"Mehran",
""
],
[
"Jiang",
"Wanli",
""
],
[
"Jujjavarapu",
"Bala Siva Sashank",
""
],
[
"Sajjadi",
"Mehdi",
""
],
[
"Liu",
"Xin",
""
],
[
"Lin",
"Hsuan-Chu",
""
],
[
"Chen",
"Bor-Jeng",
""
],
[
"Truong",
"Giang",
""
],
[
"Fang",
"Chao",
""
],
[
"Kwon",
"Junghyun",
""
],
[
"Park",
"Minwoo",
""
]
] |
new_dataset
| 0.999666 |
2304.14454
|
Chaoyi Wu
|
Chaoyi Wu, Weixiong Lin, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, Weidi
Xie
|
PMC-LLaMA: Towards Building Open-source Language Models for Medicine
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, Large Language Models (LLMs) have showcased remarkable capabilities
in natural language understanding. While demonstrating proficiency in everyday
conversations and question-answering situations, these models frequently
struggle in domains that require precision, such as medical applications, due
to their lack of domain-specific knowledge. In this paper, we describe the
procedure for building a powerful, open-source language model specifically
designed for medical applications, termed PMC-LLaMA. Our contributions are
threefold: (i) we systematically investigate the process of adapting a
general-purpose foundation language model towards the medical domain; this involves
data-centric knowledge injection through the integration of 4.8M biomedical
academic papers and 30K medical textbooks, as well as comprehensive fine-tuning
for alignment with domain-specific instructions; (ii) we contribute a
large-scale, comprehensive dataset for instruction tuning. This dataset
encompasses medical question-answering (QA), rationale for reasoning, and
conversational dialogues, comprising a total of 202M tokens; (iii) we conduct
thorough ablation studies to demonstrate the effectiveness of each proposed
component. While evaluating on various public medical question-answering
benchmarks, our lightweight PMC-LLaMA, which consists of only 13 billion
parameters, exhibits superior performance, even surpassing ChatGPT. All models,
code, and datasets can be found at https://github.com/chaoyi-wu/PMC-LLaMA.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 18:29:05 GMT"
},
{
"version": "v2",
"created": "Sat, 20 May 2023 08:32:51 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 14:08:38 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Wu",
"Chaoyi",
""
],
[
"Lin",
"Weixiong",
""
],
[
"Zhang",
"Xiaoman",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yanfeng",
""
],
[
"Xie",
"Weidi",
""
]
] |
new_dataset
| 0.997525 |
2305.02691
|
Eric W Lee
|
Eric W Lee, Joyce C Ho
|
PGB: A PubMed Graph Benchmark for Heterogeneous Network Representation
Learning
| null | null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
There has been rapid growth in biomedical literature, yet capturing the
heterogeneity of the bibliographic information of these articles remains
relatively understudied. Although graph mining research via heterogeneous graph
neural networks has taken center stage, it remains unclear whether these
approaches capture the heterogeneity of the PubMed database, a vast digital
repository containing over 33 million articles. We introduce PubMed Graph
Benchmark (PGB), a new benchmark dataset for evaluating heterogeneous graph
embeddings for biomedical literature. The benchmark contains rich metadata
including abstract, authors, citations, MeSH terms, MeSH hierarchy, and some
other information. The benchmark contains three different evaluation tasks
encompassing systematic reviews, node classification, and node clustering. In
PGB, we aggregate the metadata associated with the biomedical articles from
PubMed into a unified source and make the benchmark publicly available for any
future works.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 10:09:08 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 04:10:29 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 05:24:59 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Lee",
"Eric W",
""
],
[
"Ho",
"Joyce C",
""
]
] |
new_dataset
| 0.998666 |
2306.03538
|
Honghao Fu
|
Honghao Fu, Libo Sun, Yilang Shen, Yiwen Wu
|
SDR-GAIN: A High Real-Time Occluded Pedestrian Pose Completion Method
for Autonomous Driving
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To mitigate the challenges arising from partial occlusion in human pose
keypoint-based pedestrian detection methods, we present a novel pedestrian
pose keypoint completion method called the separation and dimensionality
reduction-based generative adversarial imputation networks (SDR-GAIN).
Firstly, we utilize OpenPose to estimate pedestrian poses in images. Then, we
isolate the head and torso keypoints of pedestrians with incomplete keypoints
due to occlusion or other factors and perform dimensionality reduction to
enhance features and further unify feature distribution. Finally, we introduce
two generative models based on the generative adversarial networks (GAN)
framework, which incorporate Huber loss, residual structure, and L1
regularization to generate missing parts of the incomplete head and torso pose
keypoints of partially occluded pedestrians, resulting in pose completion. Our
experiments on the MS COCO and JAAD datasets demonstrate that SDR-GAIN outperforms
the basic GAIN framework, the interpolation methods PCHIP and MAkima, and the
machine learning methods k-NN and MissForest on the pose completion task. Furthermore, the
SDR-GAIN algorithm exhibits a remarkably short running time of approximately
0.4ms and boasts exceptional real-time performance. As such, it holds
significant practical value in the domain of autonomous driving, wherein high
system response speeds are of paramount importance. Specifically, it excels at
rapidly and precisely capturing human pose key points, thus enabling an
expanded range of applications for pedestrian detection tasks based on pose key
points, including but not limited to pedestrian behavior recognition and
prediction.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 09:35:56 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2023 15:31:04 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Jun 2023 18:02:51 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Aug 2023 07:34:42 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Fu",
"Honghao",
""
],
[
"Sun",
"Libo",
""
],
[
"Shen",
"Yilang",
""
],
[
"Wu",
"Yiwen",
""
]
] |
new_dataset
| 0.991679 |
2306.15572
|
Matthew England Dr
|
Rashid Barket, Matthew England and J\"urgen Gerhard
|
Generating Elementary Integrable Expressions
|
To appear in proceedings of CASC 2023. This version of the
contribution has been accepted for publication, after peer review but is not
the Version of Record and does not reflect post-acceptance improvements, or
any corrections
|
In: F. Boulier, M. England, T.M. Sadykov, and E.V. Vorozhtsov,
eds. Computer Algebra in Scientific Computing (Proc. CASC '23), pp. 21-38.
(Lecture Notes in Computer Science, vol 14139). Springer International, 2023
|
10.1007/978-3-031-41724-5_2
| null |
cs.SC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
There has been an increasing number of applications of machine learning to
the field of Computer Algebra in recent years, including to the prominent
sub-field of Symbolic Integration. However, machine learning models require an
abundance of data for them to be successful and there exist few benchmarks on
the scale required. While methods to generate new data already exist, they are
flawed in several ways which may lead to bias in machine learning models
trained upon them. In this paper, we describe how to use the Risch Algorithm
for symbolic integration to create a dataset of elementary integrable
expressions. Further, we show that data generated this way alleviates some of
the flaws found in earlier methods.
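As a rough illustration of what an (integrand, antiderivative) pair in such a dataset looks like, the sketch below filters randomly assembled elementary expressions with SymPy, assuming SymPy's `risch=True` flag on `integrate`. The paper's generator works constructively from the Risch algorithm rather than by filtering, so this is a format illustration only, not the authors' method.

```python
# A minimal sketch of building (integrand, antiderivative) pairs via SymPy's
# Risch implementation. Randomly assembled expressions are kept only when an
# elementary antiderivative is found; the atoms and expression depth are
# illustrative choices.

import random
import sympy as sp

x = sp.Symbol("x")
ATOMS = [x, x**2, sp.exp(x), sp.log(x), sp.Integer(2)]

def random_expression(depth: int = 2):
    """Assemble a small random elementary expression from the atoms."""
    if depth == 0:
        return random.choice(ATOMS)
    a, b = random_expression(depth - 1), random_expression(depth - 1)
    return a + b if random.random() < 0.5 else a * b

def make_pairs(n_pairs: int = 5, seed: int = 0):
    random.seed(seed)
    pairs = []
    while len(pairs) < n_pairs:
        f = random_expression()
        try:
            F = sp.integrate(f, x, risch=True)
        except Exception:
            continue                      # Risch not applicable to this shape
        # A leftover Integral (e.g. a NonElementaryIntegral) means no
        # elementary antiderivative exists; keep only clean results.
        if not F.has(sp.Integral):
            pairs.append((f, F))
    return pairs

if __name__ == "__main__":
    for f, F in make_pairs():
        print(f, "->", F)
```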
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 15:48:40 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Barket",
"Rashid",
""
],
[
"England",
"Matthew",
""
],
[
"Gerhard",
"Jürgen",
""
]
] |
new_dataset
| 0.991951 |
2307.06698
|
Thiviyan Thanapalasingam
|
Thiviyan Thanapalasingam, Emile van Krieken, Peter Bloem, Paul Groth
|
IntelliGraphs: Datasets for Benchmarking Knowledge Graph Generation
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge Graph Embedding (KGE) models are used to learn continuous
representations of entities and relations. A key task in the literature is
predicting missing links between entities. However, Knowledge Graphs are not
just sets of links but also have semantics underlying their structure.
Semantics is crucial in several downstream tasks, such as query answering or
reasoning. We introduce the subgraph inference task, where a model has to
generate likely and semantically valid subgraphs. We propose IntelliGraphs, a
set of five new Knowledge Graph datasets. The IntelliGraphs datasets contain
subgraphs with semantics expressed in logical rules for evaluating subgraph
inference. We also present the dataset generator that produced the synthetic
datasets. We designed four novel baseline models, which include three models
based on traditional KGEs. We evaluate their expressiveness and show that these
models cannot capture the semantics. We believe this benchmark will encourage
the development of machine learning models that emphasize semantic
understanding.
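To illustrate what "subgraphs with semantics expressed in logical rules" can mean in practice, here is a toy generator and validity checker. The vocabulary and the single rule (works_at implies located_in) are invented for this sketch and are not the IntelliGraphs schemas.

```python
# A minimal sketch of a rule-constrained synthetic subgraph generator:
# every sampled subgraph satisfies works_at(p, o) -> exists c. located_in(o, c).

import random

PEOPLE = [f"person_{i}" for i in range(5)]
ORGS = [f"org_{i}" for i in range(3)]
CITIES = ["amsterdam", "berlin", "paris"]

def sample_subgraph(rng: random.Random):
    """Sample one small subgraph that obeys the rule by construction."""
    person, org, city = rng.choice(PEOPLE), rng.choice(ORGS), rng.choice(CITIES)
    return [(person, "works_at", org), (org, "located_in", city)]

def is_valid(triples) -> bool:
    """Check the rule: works_at(p, o) -> exists c. located_in(o, c)."""
    located = {s for s, r, _ in triples if r == "located_in"}
    return all(o in located for _, r, o in triples if r == "works_at")

if __name__ == "__main__":
    rng = random.Random(0)
    dataset = [sample_subgraph(rng) for _ in range(3)]
    print(dataset[0], "valid:", is_valid(dataset[0]))
    broken = [("person_0", "works_at", "org_1")]      # violates the rule
    print("broken valid:", is_valid(broken))
```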
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 11:54:32 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jul 2023 11:23:07 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 08:37:10 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Thanapalasingam",
"Thiviyan",
""
],
[
"van Krieken",
"Emile",
""
],
[
"Bloem",
"Peter",
""
],
[
"Groth",
"Paul",
""
]
] |
new_dataset
| 0.999831 |
2307.11067
|
Van Nguyen Nguyen
|
Van Nguyen Nguyen, Thibault Groueix, Georgy Ponimatkin, Vincent
Lepetit, Tomas Hodan
|
CNOS: A Strong Baseline for CAD-based Novel Object Segmentation
|
ICCV 2023, R6D Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a simple three-stage approach to segment unseen objects in RGB
images using their CAD models. Leveraging recent powerful foundation models,
DINOv2 and Segment Anything, we create descriptors and generate proposals,
including binary masks for a given input RGB image. By matching proposals with
reference descriptors created from CAD models, we achieve precise object ID
assignment along with modal masks. We experimentally demonstrate that our
method achieves state-of-the-art results in CAD-based novel object
segmentation, surpassing existing approaches on the seven core datasets of the
BOP challenge by 19.8% AP using the same BOP evaluation protocol. Our source
code is available at https://github.com/nv-nguyen/cnos.
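The matching stage can be pictured with a small NumPy sketch: proposal descriptors are compared against per-object template descriptors by cosine similarity, and each proposal receives the ID of its best-matching object. Descriptor extraction (e.g., DINOv2) and proposal generation (Segment Anything) are abstracted away, and the top-k aggregation and threshold below are illustrative choices rather than the paper's exact settings.

```python
# A minimal sketch of the matching stage, assuming proposal and reference
# descriptors have already been extracted upstream.

import numpy as np

def assign_object_ids(proposal_desc, reference_desc, top_k=5, min_score=0.5):
    """proposal_desc: (P, D) descriptors of segmented proposals.
    reference_desc: object_id -> (T, D) descriptors of its CAD-rendered templates.
    Returns a list of (object_id or None, score) per proposal."""
    def l2norm(a):
        return a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-12)

    results = []
    for p in l2norm(np.asarray(proposal_desc)):
        best_id, best_score = None, -1.0
        for obj_id, templates in reference_desc.items():
            sims = l2norm(np.asarray(templates)) @ p        # cosine similarities
            k = min(top_k, len(sims))
            score = float(np.sort(sims)[-k:].mean())        # mean of top-k views
            if score > best_score:
                best_id, best_score = obj_id, score
        results.append((best_id if best_score >= min_score else None, best_score))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = {1: rng.normal(size=(42, 128)), 2: rng.normal(size=(42, 128))}
    props = rng.normal(size=(3, 128))
    print(assign_object_ids(props, refs))
```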
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 17:46:21 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 12:37:07 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Aug 2023 17:17:18 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Aug 2023 04:21:57 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Nguyen",
"Van Nguyen",
""
],
[
"Groueix",
"Thibault",
""
],
[
"Ponimatkin",
"Georgy",
""
],
[
"Lepetit",
"Vincent",
""
],
[
"Hodan",
"Tomas",
""
]
] |
new_dataset
| 0.999034 |
2307.14623
|
Sheikh Md Shakeel Hassan
|
Sheikh Md Shakeel Hassan, Arthur Feeney, Akash Dhruv, Jihoon Kim,
Youngjoon Suh, Jaiyoung Ryu, Yoonjin Won, Aparna Chandramowlishwaran
|
BubbleML: A Multi-Physics Dataset and Benchmarks for Machine Learning
|
Submitted to Neurips Datasets and Benchmarks Track 2023
| null | null | null |
cs.LG cs.AI cs.CE cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In the field of phase change phenomena, the lack of accessible and diverse
datasets suitable for machine learning (ML) training poses a significant
challenge. Existing experimental datasets are often restricted, with limited
availability and sparse ground truth data, impeding our understanding of this
complex multiphysics phenomenon. To bridge this gap, we present the BubbleML
Dataset
\footnote{\label{git_dataset}\url{https://github.com/HPCForge/BubbleML}} which
leverages physics-driven simulations to provide accurate ground truth
information for various boiling scenarios, encompassing nucleate pool boiling,
flow boiling, and sub-cooled boiling. This extensive dataset covers a wide
range of parameters, including varying gravity conditions, flow rates,
sub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is
validated against experimental observations and trends, establishing it as an
invaluable resource for ML research. Furthermore, we showcase its potential to
facilitate exploration of diverse downstream tasks by introducing two
benchmarks: (a) optical flow analysis to capture bubble dynamics, and (b)
operator networks for learning temperature dynamics. The BubbleML dataset and
its benchmarks serve as a catalyst for advancements in ML-driven research on
multiphysics phase change phenomena, enabling the development and comparison of
state-of-the-art techniques and models.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 04:47:05 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 03:17:29 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Hassan",
"Sheikh Md Shakeel",
""
],
[
"Feeney",
"Arthur",
""
],
[
"Dhruv",
"Akash",
""
],
[
"Kim",
"Jihoon",
""
],
[
"Suh",
"Youngjoon",
""
],
[
"Ryu",
"Jaiyoung",
""
],
[
"Won",
"Yoonjin",
""
],
[
"Chandramowlishwaran",
"Aparna",
""
]
] |
new_dataset
| 0.999836 |
2308.00474
|
Andrew Chalmers
|
Joshua O'Hagan, Andrew Chalmers, Taehyun Rhee
|
Simulating the Geometric Growth of the Marine Sponge Crella Incrustans
|
5 pages, 5 figures, IEEE VIS 2023, short paper, 9 supplementary
figures, 1 supplementary table
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simulating marine sponge growth helps marine biologists analyze, measure, and
predict the effects that the marine environment has on marine sponges, and vice
versa. This paper describes a way to simulate and grow geometric models of the
marine sponge Crella incrustans while considering environmental factors
including fluid flow and nutrients. The simulation improves upon prior work by
changing the skeletal architecture of the sponge in the growth model to better
suit the structure of Crella incrustans. The change in skeletal architecture
and other simulation parameters are then evaluated qualitatively against photos
of a real-life Crella incrustans sponge. The results support the hypothesis
that changing the skeletal architecture from radiate accretive to Halichondrid
produces a sponge model which is closer in resemblance to Crella incrustans
than the prior work.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 11:55:52 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 10:45:53 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 12:05:35 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"O'Hagan",
"Joshua",
""
],
[
"Chalmers",
"Andrew",
""
],
[
"Rhee",
"Taehyun",
""
]
] |
new_dataset
| 0.994905 |
2308.07221
|
Zhaohui Li
|
Zhaohui Li and Haitao Wang and Xinghua Jiang
|
AudioFormer: Audio Transformer learns audio feature representations from
discrete acoustic codes
|
Need to supplement more detailed experiments
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose a method named AudioFormer, which learns audio feature
representations through the acquisition of discrete acoustic codes and
subsequently fine-tunes them for audio classification tasks. Initially, we
introduce a novel perspective by considering the audio classification task as a
form of natural language understanding (NLU). Leveraging an existing neural
audio codec model, we generate discrete acoustic codes and utilize them to
train a masked language model (MLM), thereby obtaining audio feature
representations. Furthermore, we pioneer the integration of a Multi-Positive
sample Contrastive (MPC) learning approach. This method enables the learning of
joint representations among multiple discrete acoustic codes within the same
audio input. In our experiments, we treat discrete acoustic codes as textual
data and train a masked language model using a cloze-like methodology,
ultimately deriving high-quality audio representations. Notably, the MPC
learning technique effectively captures collaborative representations among
distinct positive samples. Our research outcomes demonstrate that AudioFormer
attains significantly improved performance compared to prevailing monomodal
audio classification models across multiple datasets, and even outperforms
audio-visual multimodal classification models on select datasets.
Specifically, our approach achieves remarkable results on datasets including
AudioSet (2M, 20K) and FSD50K, with performance scores of 53.9, 45.1, and 65.6,
respectively. We have openly shared both the code and models:
https://github.com/LZH-0225/AudioFormer.git.
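One plausible way to realize the Multi-Positive sample Contrastive objective mentioned above is a multi-positive InfoNCE loss, sketched below in PyTorch, where views of the same clip (same group id) are treated as each other's positives. This is a generic formulation assumed for illustration, not necessarily the exact loss used by AudioFormer.

```python
# A minimal sketch of a multi-positive contrastive (InfoNCE-style) loss:
# each anchor averages its log-probability over all positives in the batch.

import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings, group_ids, temperature=0.07):
    """embeddings: (N, D) view representations; group_ids: (N,) clip ids."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                        # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (group_ids.unsqueeze(0) == group_ids.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))      # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()              # skip anchors w/o positives

if __name__ == "__main__":
    emb = torch.randn(8, 64)
    groups = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])      # two views per clip
    print(multi_positive_contrastive_loss(emb, groups).item())
```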
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 15:47:25 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 06:00:03 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 02:48:57 GMT"
},
{
"version": "v4",
"created": "Mon, 21 Aug 2023 02:56:43 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Aug 2023 14:24:51 GMT"
},
{
"version": "v6",
"created": "Fri, 25 Aug 2023 12:33:22 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Li",
"Zhaohui",
""
],
[
"Wang",
"Haitao",
""
],
[
"Jiang",
"Xinghua",
""
]
] |
new_dataset
| 0.982785 |
2308.10370
|
Sidney Wong
|
Sidney G.-J. Wong, Matthew Durward, Benjamin Adams and Jonathan Dunn
|
cantnlp@LT-EDI-2023: Homophobia/Transphobia Detection in Social Media
Comments using Spatio-Temporally Retrained Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes our multiclass classification system developed as part
of the LTEDI@RANLP-2023 shared task. We used a BERT-based language model to
detect homophobic and transphobic content in social media comments across five
language conditions: English, Spanish, Hindi, Malayalam, and Tamil. We
retrained a transformer-based cross-language pretrained language model,
XLMRoBERTa, with spatially and temporally relevant social media language data.
We also retrained a subset of models with simulated script-mixed social media
language data with varied performance. We developed the best performing
seven-label classification system for Malayalam based on weighted macro
averaged F1 score (ranked first out of six) with variable performance for other
language and class-label conditions. We found the inclusion of this
spatio-temporal data improved the classification performance for all language
and task conditions when compared with the baseline. The results suggest that
transformer-based language classification systems are sensitive to
register-specific and language-specific retraining.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 21:30:34 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 01:41:17 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Wong",
"Sidney G. -J.",
""
],
[
"Durward",
"Matthew",
""
],
[
"Adams",
"Benjamin",
""
],
[
"Dunn",
"Jonathan",
""
]
] |
new_dataset
| 0.99556 |
2308.11681
|
Peng Wu
|
Peng Wu, Xuerong Zhou, Guansong Pang, Lingru Zhou, Qingsen Yan, Peng
Wang, Yanning Zhang
|
VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video
Anomaly Detection
|
Submitted
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent contrastive language-image pre-training (CLIP) model has shown
great success in a wide range of image-level tasks, revealing remarkable
ability for learning powerful visual representations with rich semantics. An
open and worthwhile problem is efficiently adapting such a strong model to the
video domain and designing a robust video anomaly detector. In this work, we
propose VadCLIP, a new paradigm for weakly supervised video anomaly detection
(WSVAD) by leveraging the frozen CLIP model directly without any pre-training
and fine-tuning process. Unlike current works that directly feed extracted
features into the weakly supervised classifier for frame-level binary
classification, VadCLIP makes full use of fine-grained associations between
vision and language on the strength of CLIP and involves a dual-branch design. One
branch simply utilizes visual features for coarse-grained binary
classification, while the other fully leverages the fine-grained language-image
alignment. With the benefit of the dual-branch design, VadCLIP achieves both
coarse-grained and fine-grained video anomaly detection by transferring
pre-trained knowledge from CLIP to WSVAD task. We conduct extensive experiments
on two commonly-used benchmarks, demonstrating that VadCLIP achieves the best
performance on both coarse-grained and fine-grained WSVAD, surpassing the
state-of-the-art methods by a large margin. Specifically, VadCLIP achieves
84.51% AP and 88.02% AUC on XD-Violence and UCF-Crime, respectively. Code and
features will be released to facilitate future VAD research.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 14:58:36 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 06:55:14 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Wu",
"Peng",
""
],
[
"Zhou",
"Xuerong",
""
],
[
"Pang",
"Guansong",
""
],
[
"Zhou",
"Lingru",
""
],
[
"Yan",
"Qingsen",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yanning",
""
]
] |
new_dataset
| 0.99331 |
2308.12819
|
Antonio Joia Neto
|
Antonio Joia Neto, Adam Caulfield, Chistabelle Alvares, Ivan De
Oliveira Nunes
|
DiCA: A Hardware-Software Co-Design for Differential Checkpointing in
Intermittently Powered Devices
|
8 pages and 7 figures. To be published at IEEE/ACM International
Conference on Computer-Aided Design (ICCAD) 2023
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Intermittently powered devices rely on opportunistic energy-harvesting to
function, leading to recurrent power interruptions. This paper introduces DiCA,
a proposal for a hardware/software co-design to create differential
check-points in intermittent devices. DiCA leverages an affordable hardware
module that simplifies the check-pointing process, reducing the check-point
generation time and energy consumption. This hardware module continuously
monitors volatile memory, efficiently tracking modifications and determining
optimal check-point times. To minimize energy waste, the module dynamically
estimates the energy required to create and store the check-point based on
tracked memory modifications, triggering the check-pointing routine optimally
via a nonmaskable interrupt. Experimental results show the cost-effectiveness
and energy efficiency of DiCA, enabling extended application activity cycles in
intermittently powered embedded devices.
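The trigger logic sketched below illustrates the differential-checkpointing idea in plain Python: track dirty memory blocks, estimate the energy a checkpoint would cost, and fire when the energy reserve approaches that estimate. Block size, energy constants, and the safety margin are made-up values; the real DiCA module implements this monitoring in hardware.

```python
# A minimal sketch of a differential-checkpoint trigger, with illustrative
# energy constants (not measured values from the paper).

BLOCK_SIZE = 64        # bytes tracked per dirty bit
E_PER_BLOCK = 0.8      # uJ to copy one block to non-volatile memory
E_OVERHEAD = 5.0       # uJ fixed cost of entering the checkpoint routine
E_MARGIN = 1.2         # safety factor before triggering

class DirtyTracker:
    def __init__(self, ram_bytes: int):
        self.dirty = [False] * (ram_bytes // BLOCK_SIZE)

    def on_write(self, address: int):
        """Mark the block containing `address` as modified since the last checkpoint."""
        self.dirty[address // BLOCK_SIZE] = True

    def checkpoint_energy(self) -> float:
        """Estimate the energy needed to flush only the dirty blocks."""
        return E_OVERHEAD + E_PER_BLOCK * sum(self.dirty)

    def should_checkpoint(self, energy_reserve_uj: float) -> bool:
        """Fire when the harvested-energy reserve nears the estimated cost."""
        return energy_reserve_uj <= E_MARGIN * self.checkpoint_energy()

    def clear(self):
        self.dirty = [False] * len(self.dirty)

if __name__ == "__main__":
    tracker = DirtyTracker(ram_bytes=4096)
    for addr in (0, 70, 70, 600):          # simulated volatile-memory writes
        tracker.on_write(addr)
    print("estimated checkpoint cost:", tracker.checkpoint_energy(), "uJ")
    print("trigger now?", tracker.should_checkpoint(energy_reserve_uj=9.0))
```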
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 14:23:10 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 16:23:26 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Neto",
"Antonio Joia",
""
],
[
"Caulfield",
"Adam",
""
],
[
"Alvares",
"Chistabelle",
""
],
[
"Nunes",
"Ivan De Oliveira",
""
]
] |
new_dataset
| 0.997191 |
2308.12843
|
Hazim Alzorgan
|
Hazim Alzorgan, Abolfazl Razi, Ata Jahangir Moshayedi
|
Actuator Trajectory Planning for UAVs with Overhead Manipulator using
Reinforcement Learning
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the operation of an aerial manipulator system,
namely an Unmanned Aerial Vehicle (UAV) equipped with a controllable arm with
two degrees of freedom to carry out actuation tasks on the fly. Our solution is
based on employing a Q-learning method to control the trajectory of the tip of
the arm, also called end-effector. More specifically, we develop a motion
planning model based on Time To Collision (TTC), which enables a quadrotor UAV
to navigate around obstacles while ensuring the manipulator's reachability.
Additionally, we utilize a model-based Q-learning model to independently track
and control the desired trajectory of the manipulator's end-effector, given an
arbitrary baseline trajectory for the UAV platform. Such a combination enables
a variety of actuation tasks such as high-altitude welding, structural
monitoring and repair, battery replacement, gutter cleaning, skyscraper
cleaning, and power line maintenance in hard-to-reach and risky environments
while retaining compatibility with flight control firmware. Our RL-based
control mechanism results in a robust control strategy that can handle
uncertainties in the motion of the UAV, offering promising performance.
Specifically, our method achieves 92% accuracy in terms of average displacement
error (i.e. the mean distance between the target and obtained trajectory
points) using Q-learning with 15,000 episodes.
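To make the Q-learning ingredient concrete, the sketch below trains a tabular Q-table to drive a discretized joint-angle error toward a reference bin. The discretization, reward, and toy transition model are illustrative stand-ins for the paper's model-based Q-learning of the end-effector trajectory, not its actual formulation.

```python
# A minimal sketch of tabular Q-learning on a toy tracking task: the state is
# a discretized angle-error bin and the action nudges the joint up or down.

import numpy as np

N_STATES, ACTIONS = 21, (-1, 0, +1)          # error bins, joint steps
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1           # learning rate, discount, exploration

def step(state, action):
    """Toy environment model: reward favors staying at the reference bin."""
    nxt = int(np.clip(state + action, 0, N_STATES - 1))
    reward = -abs(nxt - N_STATES // 2)
    return nxt, reward

def train(episodes=15000, horizon=50, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))
        for _ in range(horizon):
            a = int(rng.integers(len(ACTIONS))) if rng.random() < EPS else int(q[s].argmax())
            s2, r = step(s, ACTIONS[a])
            q[s, a] += ALPHA * (r + GAMMA * q[s2].max() - q[s, a])   # Q-update
            s = s2
    return q

if __name__ == "__main__":
    q = train(episodes=2000)                 # fewer episodes for a quick demo
    print("greedy action per error bin:", [ACTIONS[i] for i in q.argmax(axis=1)])
```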
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 15:06:23 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 16:28:12 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Alzorgan",
"Hazim",
""
],
[
"Razi",
"Abolfazl",
""
],
[
"Moshayedi",
"Ata Jahangir",
""
]
] |
new_dataset
| 0.997919 |
2308.12950
|
Baptiste Roziere
|
Baptiste Rozi\`ere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai
Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, J\'er\'emy Rapin,
Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian
Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre D\'efossez, Jade
Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas
Scialom, Gabriel Synnaeve
|
Code Llama: Open Foundation Models for Code
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained
on sequences of 16k tokens and show improvements on inputs with up to 100k
tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support
infilling based on surrounding content. Code Llama reaches state-of-the-art
performance among open models on several code benchmarks, with scores of up to
53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python
7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform
every other publicly available model on MultiPL-E. We release Code Llama under
a permissive license that allows for both research and commercial use.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:39:13 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 08:51:22 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Rozière",
"Baptiste",
""
],
[
"Gehring",
"Jonas",
""
],
[
"Gloeckle",
"Fabian",
""
],
[
"Sootla",
"Sten",
""
],
[
"Gat",
"Itai",
""
],
[
"Tan",
"Xiaoqing Ellen",
""
],
[
"Adi",
"Yossi",
""
],
[
"Liu",
"Jingyu",
""
],
[
"Remez",
"Tal",
""
],
[
"Rapin",
"Jérémy",
""
],
[
"Kozhevnikov",
"Artyom",
""
],
[
"Evtimov",
"Ivan",
""
],
[
"Bitton",
"Joanna",
""
],
[
"Bhatt",
"Manish",
""
],
[
"Ferrer",
"Cristian Canton",
""
],
[
"Grattafiori",
"Aaron",
""
],
[
"Xiong",
"Wenhan",
""
],
[
"Défossez",
"Alexandre",
""
],
[
"Copet",
"Jade",
""
],
[
"Azhar",
"Faisal",
""
],
[
"Touvron",
"Hugo",
""
],
[
"Martin",
"Louis",
""
],
[
"Usunier",
"Nicolas",
""
],
[
"Scialom",
"Thomas",
""
],
[
"Synnaeve",
"Gabriel",
""
]
] |
new_dataset
| 0.99973 |
2308.12985
|
Jiajie Yu
|
Jiajie Yu, Pierre-Antoine Laharotte, Yu Han, Ludovic Leclercq
|
Perimeter Control with Heterogeneous Cordon Signal Behaviors: A
Semi-Model Dependent Reinforcement Learning Approach
| null | null | null | null |
cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perimeter Control (PC) strategies have been proposed to address urban road
network control in oversaturated situations by monitoring transfer flows of the
Protected Network (PN). The uniform metering rate for cordon signals in
existing studies ignores the variety of local traffic states at the
intersection level, which may cause severe local traffic congestion and ruin
the network stability. This paper introduces a semi-model dependent Multi-Agent
Reinforcement Learning (MARL) framework to conduct PC with heterogeneous cordon
signal behaviors. The proposed strategy integrates the MARL-based signal
control method with centralized feedback PC policy and is applied to cordon
signals of the PN. It operates as a two-stage system, with the feedback PC
strategy detecting the overall traffic state within the PN and then
distributing local instructions to cordon signals controlled by agents in the
MARL framework. Each cordon signal acts independently and differently, creating
a slack and distributed PC for the PN. The combination of the model-free and
model-based methods is achieved by reconstructing the action-value function of
the local agents with PC feedback reward without violating the integrity of the
local signal control policy learned from the RL training process. Through
numerical tests with different demand patterns in a microscopic traffic
environment, the proposed PC strategy (a) demonstrates robustness, scalability, and
transferability, and (b) outperforms state-of-the-art model-based PC strategies in
increasing network throughput and reducing cordon queues and carbon emissions.
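For context on the centralized feedback stage, a classical discrete PI perimeter controller regulates the total metered inflow from the vehicle accumulation inside the protected network, as in the sketch below. The gains, set-point, and toy outflow curve are illustrative, and the MARL distribution of the resulting inflow to individual cordon signals is not modelled here.

```python
# A minimal sketch of a discrete PI perimeter controller with a toy
# protected-network plant; all numbers are illustrative.

def pi_perimeter_controller(n_now, n_prev, u_prev, n_set=3000.0,
                            kp=20.0, ki=5.0, u_min=500.0, u_max=6000.0):
    """Return the next total permitted inflow (veh/h) across the cordon."""
    u = u_prev - kp * (n_now - n_prev) - ki * (n_now - n_set)
    return min(max(u, u_min), u_max)

if __name__ == "__main__":
    n, u = 4000.0, 3000.0                      # initial accumulation and inflow
    dt = 1.0 / 60.0                            # one-minute control step (h)
    for _ in range(120):                       # simulate two hours
        outflow = 1.8 * n - 0.0002 * n ** 2    # veh/h, crude stand-in for an MFD
        n_next = max(n + dt * (u - outflow), 0.0)
        u = pi_perimeter_controller(n_next, n, u)
        n = n_next
    print(f"accumulation after 2 h: {n:.0f} veh, metered inflow: {u:.0f} veh/h")
```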
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 13:51:16 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Yu",
"Jiajie",
""
],
[
"Laharotte",
"Pierre-Antoine",
""
],
[
"Han",
"Yu",
""
],
[
"Leclercq",
"Ludovic",
""
]
] |
new_dataset
| 0.989234 |
2308.13021
|
Kunal Aneja
|
Kunal Aneja, Tejaswini Ramkumar Babu, Rachel Chan
|
Augmenting a Firefighters PPE -- Gas Mask SCBA
| null | null | null | null |
cs.HC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
PPE (Personal Protective Equipment) has allowed firefighters to perform their
everyday tasks without getting harmed since the mid-1800s. Now, the advancement
of technology has given rise to improvements in PPE. PPE can now include
sensors to detect any number of environmental hazards (chemical, biological,
temperature etc.). As the GT class of CS3750, we have decided to create a
version of an interface design sensor that will help firefighters in two ways:
navigation and communication. In order to augment a firefighter's display when
they are within a building, we chose to augment their SCBA (self-contained
breathing apparatus). The gas mask will include a small screen that displays
vital information directly towards the firefighter without the need for any other
support. We used the Google Glass to display vital information directly towards
the eye in a minimalistic manner, while also augmenting that by adding LED
lights to simulate someone calling their name or other auditory signals. While
our prototype focuses on two main components of a firefighter's search and
rescue in a building, both of them combine to augment a firefighter's display
when searching throughout a building to help improve accuracy, speed, and
overall experience.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 18:47:39 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Aneja",
"Kunal",
""
],
[
"Babu",
"Tejaswini Ramkumar",
""
],
[
"Chan",
"Rachel",
""
]
] |
new_dataset
| 0.961499 |
2308.13062
|
M. Caner Tol
|
M. Caner Tol and Berk Sunar
|
ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel
Patching
| null | null | null | null |
cs.CR cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Security critical software, e.g., OpenSSL, comes with numerous side-channel
leakages left unpatched due to a lack of resources or experts. The situation
will only worsen as the pace of code development accelerates, with developers
relying on Large Language Models (LLMs) to automatically generate code. In this
work, we explore the use of LLMs in generating patches for vulnerable code with
microarchitectural side-channel leakages. For this, we investigate the
generative abilities of powerful LLMs by carefully crafting prompts following a
zero-shot learning approach. All generated code is dynamically analyzed by
leakage detection tools, which are capable of pinpointing information leakage
at the instruction level, whether it stems from secret-dependent accesses,
secret-dependent branches, or vulnerable Spectre gadgets. Carefully crafted prompts
are used to generate candidate replacements for vulnerable code, which are then
analyzed for correctness and for leakage resilience. From a cost/performance
perspective, the GPT4-based configuration costs in API calls a mere few cents
per vulnerability fixed. Our results show that LLM-based patching is far more
cost-effective and thus provides a scalable solution. Finally, the framework we
propose will improve in time, especially as vulnerability detection tools and
LLMs mature.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 20:04:36 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Tol",
"M. Caner",
""
],
[
"Sunar",
"Berk",
""
]
] |
new_dataset
| 0.986565 |
2308.13076
|
Sayak Saha Roy
|
Sayak Saha Roy, Ohad Gilbar, Christina Palantza, Maxine Davis, Shirin
Nilizadeh
|
Exploring Gender-Based Toxic Speech on Twitter in Context of the #MeToo
movement: A Mixed Methods Approach
| null | null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The #MeToo movement has catalyzed widespread public discourse surrounding
sexual harassment and assault, empowering survivors to share their stories and
holding perpetrators accountable. While the movement has had a substantial and
largely positive influence, this study aims to examine the potential negative
consequences in the form of increased hostility against women and men on the
social media platform Twitter. By analyzing tweets shared between October 2017
and January 2020 by more than 47.1k individuals who had either disclosed their
own sexual abuse experiences on Twitter or engaged in discussions about the
movement, we identify the overall increase in gender-based hostility towards
both women and men since the start of the movement. We also monitor 16 pivotal
real-life events that shaped the #MeToo movement to identify how these events
may have amplified negative discussions targeting the opposite gender on
Twitter. Furthermore, we conduct a thematic content analysis of a subset of
gender-based hostile tweets, which helps us identify recurring themes and
underlying motivations driving the expressions of anger and resentment from
both men and women concerning the #MeToo movement. This study highlights the
need for a nuanced understanding of the impact of social movements on online
discourse and underscores the importance of addressing gender-based hostility
in the digital sphere.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 20:45:12 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Roy",
"Sayak Saha",
""
],
[
"Gilbar",
"Ohad",
""
],
[
"Palantza",
"Christina",
""
],
[
"Davis",
"Maxine",
""
],
[
"Nilizadeh",
"Shirin",
""
]
] |
new_dataset
| 0.987961 |
2308.13106
|
Caleb Donovick
|
Caleb Donovick, Ross Daly, Jackson Melchert, Lenny Truong, Priyanka
Raina, Pat Hanrahan, Clark Barrett
|
PEak: A Single Source of Truth for Hardware Design and Verification
| null | null | null | null |
cs.PL cs.AR cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Domain-specific languages for hardware can significantly enhance designer
productivity, but sometimes at the cost of ease of verification. On the other
hand, ISA specification languages are too static to be used during early stage
design space exploration. We present PEak, an open-source hardware design and
specification language, which aims to improve both design productivity and
verification capability. PEak does this by providing a single source of truth
for functional models, formal specifications, and RTL. PEak has been used in
several academic projects, and PEak-generated RTL has been included in three
fabricated hardware accelerators. In these projects, the formal capabilities of
PEak were crucial for enabling both novel design space exploration techniques
and automated compiler synthesis.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 22:44:08 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Donovick",
"Caleb",
""
],
[
"Daly",
"Ross",
""
],
[
"Melchert",
"Jackson",
""
],
[
"Truong",
"Lenny",
""
],
[
"Raina",
"Priyanka",
""
],
[
"Hanrahan",
"Pat",
""
],
[
"Barrett",
"Clark",
""
]
] |
new_dataset
| 0.999654 |
2308.13149
|
Liangtai Sun
|
Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen,
Lu Chen and Kai Yu
|
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for
Scientific Research
|
12 pages, 17 figures, 12 tables. Under Review
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from the data leakage problem and lacks an evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 03:05:33 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Sun",
"Liangtai",
""
],
[
"Han",
"Yang",
""
],
[
"Zhao",
"Zihan",
""
],
[
"Ma",
"Da",
""
],
[
"Shen",
"Zhennan",
""
],
[
"Chen",
"Baocai",
""
],
[
"Chen",
"Lu",
""
],
[
"Yu",
"Kai",
""
]
] |
new_dataset
| 0.989064 |
2308.13183
|
Nicol\'as Ayobi
|
Cristina Gonz\'alez, Nicol\'as Ayobi, Felipe Escall\'on, Laura
Baldovino-Chiquillo, Maria Wilches-Mogoll\'on, Donny Pasos, Nicole Ram\'irez,
Jose Pinz\'on, Olga Sarmiento, D Alex Quistberg, Pablo Arbel\'aez
|
STRIDE: Street View-based Environmental Feature Detection and Pedestrian
Collision Prediction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel benchmark to study the impact and relationship
of built environment elements on pedestrian collision prediction, intending to
enhance environmental awareness in autonomous driving systems to prevent
pedestrian injuries actively. We introduce a built environment detection task
in large-scale panoramic images and a detection-based pedestrian collision
frequency prediction task. We propose a baseline method that incorporates a
collision prediction module into a state-of-the-art detection model to tackle
both tasks simultaneously. Our experiments demonstrate a significant
correlation between object detection of built environment elements and
pedestrian collision frequency prediction. Our results are a stepping stone
towards understanding the interdependencies between built environment
conditions and pedestrian safety.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 05:25:01 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"González",
"Cristina",
""
],
[
"Ayobi",
"Nicolás",
""
],
[
"Escallón",
"Felipe",
""
],
[
"Baldovino-Chiquillo",
"Laura",
""
],
[
"Wilches-Mogollón",
"Maria",
""
],
[
"Pasos",
"Donny",
""
],
[
"Ramírez",
"Nicole",
""
],
[
"Pinzón",
"Jose",
""
],
[
"Sarmiento",
"Olga",
""
],
[
"Quistberg",
"D Alex",
""
],
[
"Arbeláez",
"Pablo",
""
]
] |
new_dataset
| 0.999547 |
2308.13205
|
Haizhou Zhao
|
Haizhou Zhao, Lei Yu, Siying Qin, Yurui Jin, Yuqing Chen
|
Design and Control of a Bio-inspired Wheeled Bipedal Robot
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Wheeled bipedal robots have the capability to execute agile and versatile
locomotion tasks in unknown terrains, with balancing being a key criterion in
evaluating their dynamic performance. This paper focuses on enhancing the
balancing performance of wheeled bipedal robots through innovations in both
hardware and software aspects. A bio-inspired mechanical design, inspired by
the human barbell squat, is proposed and implemented to achieve an efficient
distribution of load onto the limb joints. This design improves knee joint
torque efficiency and facilitates control over the distribution of the center of
mass (CoM). Meanwhile, a customized balance model, namely the wheeled linear
inverted pendulum (wLIP), is developed. The wLIP surpasses other alternatives
by providing a more accurate estimation of wheeled robot dynamics while
ensuring balancing stability. Experimental results demonstrate that the robot
is capable of maintaining balance while manipulating pelvis states and CoM
velocity; furthermore, it exhibits robustness against external disturbances and
unknown terrains.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 07:00:21 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Zhao",
"Haizhou",
""
],
[
"Yu",
"Lei",
""
],
[
"Qin",
"Siying",
""
],
[
"Jin",
"Yurui",
""
],
[
"Chen",
"Yuqing",
""
]
] |
new_dataset
| 0.999111 |
2308.13207
|
Anmol Nayak
|
Anmol Nayak and Hari Prasad Timmapathini
|
LLM2KB: Constructing Knowledge Bases using instruction tuned context
aware Large Language Models
|
16 pages, 1 figure, LM-KBC 2023 Challenge at International Semantic
Web Conference 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The advent of Large Language Models (LLM) has revolutionized the field of
natural language processing, enabling significant progress in various
applications. One key area of interest is the construction of Knowledge Bases
(KB) using these powerful models. Knowledge bases serve as repositories of
structured information, facilitating information retrieval and inference tasks.
Our paper proposes LLM2KB, a system for constructing knowledge bases using
large language models, with a focus on the Llama 2 architecture and the
Wikipedia dataset. We perform parameter efficient instruction tuning for
Llama-2-13b-chat and StableBeluga-13B by training small injection models that
have only 0.05 % of the parameters of the base models using the Low Rank
Adaptation (LoRA) technique. These injection models have been trained with
prompts that are engineered to utilize Wikipedia page contexts of subject
entities fetched using a Dense Passage Retrieval (DPR) algorithm, to answer
relevant object entities for a given subject entity and relation. Our best
performing model achieved an average F1 score of 0.6185 across 21 relations in
the LM-KBC challenge held at the ISWC 2023 conference.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 07:04:16 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Nayak",
"Anmol",
""
],
[
"Timmapathini",
"Hari Prasad",
""
]
] |
new_dataset
| 0.999251 |
2308.13217
|
Masoud Mokhtari
|
Masoud Mokhtari, Neda Ahmadi, Teresa S. M. Tsang, Purang Abolmaesumi,
Renjie Liao
|
GEMTrans: A General, Echocardiography-based, Multi-Level Transformer
Framework for Cardiovascular Diagnosis
|
To be published in MLMI 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Echocardiography (echo) is an ultrasound imaging modality that is widely used
for various cardiovascular diagnosis tasks. Due to inter-observer variability
in echo-based diagnosis, which arises from the variability in echo image
acquisition and the interpretation of echo images based on clinical experience,
vision-based machine learning (ML) methods have gained popularity to act as
secondary layers of verification. For such safety-critical applications, it is
essential for any proposed ML method to present a level of explainability along
with good accuracy. In addition, such methods must be able to process several
echo videos obtained from various heart views and the interactions among them
to properly produce predictions for a variety of cardiovascular measurements or
interpretation tasks. Prior work lacks explainability or is limited in scope by
focusing on a single cardiovascular task. To remedy this, we propose a General,
Echo-based, Multi-Level Transformer (GEMTrans) framework that provides
explainability, while simultaneously enabling multi-video training where the
interplay among echo image patches in the same frame, all frames in the same
video, and inter-video relationships are captured based on a downstream task.
We show the flexibility of our framework by considering two critical tasks
including ejection fraction (EF) and aortic stenosis (AS) severity detection.
Our model achieves mean absolute errors of 4.15 and 4.84 for single and
dual-video EF estimation and an accuracy of 96.5 % for AS detection, while
providing informative task-specific attention maps and prototypical
explainability.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 07:30:18 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Mokhtari",
"Masoud",
""
],
[
"Ahmadi",
"Neda",
""
],
[
"Tsang",
"Teresa S. M.",
""
],
[
"Abolmaesumi",
"Purang",
""
],
[
"Liao",
"Renjie",
""
]
] |
new_dataset
| 0.99977 |
2308.13218
|
Bang Yang
|
Bang Yang, Fenglin Liu, Xian Wu, Yaowei Wang, Xu Sun, and Yuexian Zou
|
MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual
Captioning
|
ACL'2023, 13 pages, 4 figures
| null |
10.18653/v1/2023.acl-long.664
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised visual captioning models typically require a large number of images
or videos paired with descriptions in a specific language (i.e., the
vision-caption pairs) for training. However, collecting and labeling
large-scale datasets is time-consuming and expensive for many scenarios and
languages. Therefore, sufficient labeled pairs are usually not available. To
deal with the label shortage problem, we present a simple yet effective
zero-shot approach MultiCapCLIP that can generate visual captions for different
scenarios and languages without any labeled vision-caption pairs of downstream
datasets. In the training stage, MultiCapCLIP only requires text data for
input. Then it conducts two main steps: 1) retrieving concept prompts that
preserve the corresponding domain knowledge of new scenarios; 2) auto-encoding
the prompts to learn writing styles to output captions in a desired language.
In the testing stage, MultiCapCLIP instead takes visual data as input directly
to retrieve the concept prompts to generate the final visual descriptions. The
extensive experiments on image and video captioning across four benchmarks and
four languages (i.e., English, Chinese, German, and French) confirm the
effectiveness of our approach. Compared with state-of-the-art zero-shot and
weakly-supervised methods, our method achieves 4.8% and 21.5% absolute
improvements in terms of BLEU@4 and CIDEr metrics. Our code is available at
https://github.com/yangbang18/MultiCapCLIP.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 07:32:34 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Yang",
"Bang",
""
],
[
"Liu",
"Fenglin",
""
],
[
"Wu",
"Xian",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Sun",
"Xu",
""
],
[
"Zou",
"Yuexian",
""
]
] |
new_dataset
| 0.999109 |
2308.13241
|
Kai Chong Lei
|
Kai Chong Lei, Kit Wa Sou, Wang Sing Chan, Jiayi Yan, Siqi Ping,
Dengfeng Peng, Wenbo Ding, Xiao-Ping Zhang
|
WSTac: Interactive Surface Perception based on Whisker-Inspired and
Self-Illuminated Vision-Based Tactile Sensor
| null | null | null | null |
cs.RO cond-mat.mtrl-sci physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern Visual-Based Tactile Sensors (VBTSs) use cost-effective cameras to
track elastomer deformation, but struggle with ambient light interference.
Solutions typically involve using internal LEDs and blocking external light,
thus adding complexity. Creating a VBTS resistant to ambient light with just a
camera and an elastomer remains a challenge. In this work, we introduce WSTac,
a self-illuminating VBTS comprising a mechanoluminescence (ML) whisker
elastomer, camera, and 3D printed parts. The ML whisker elastomer, inspired by
the touch sensitivity of vibrissae, offers both light isolation and high ML
intensity under stress, thereby removing the necessity for additional LED
modules. With the incorporation of machine learning, the sensor effectively
utilizes the dynamic contact variations of 25 whiskers to successfully perform
tasks like speed regression, directional identification, and texture
classification. Videos are available at: https://sites.google.com/view/wstac/.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 08:21:56 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Lei",
"Kai Chong",
""
],
[
"Sou",
"Kit Wa",
""
],
[
"Chan",
"Wang Sing",
""
],
[
"Yan",
"Jiayi",
""
],
[
"Ping",
"Siqi",
""
],
[
"Peng",
"Dengfeng",
""
],
[
"Ding",
"Wenbo",
""
],
[
"Zhang",
"Xiao-Ping",
""
]
] |
new_dataset
| 0.999269 |
2308.13245
|
Zhenfeng Fan
|
Zhenfeng Fan, Zhiheng Zhang, Shuang Yang, Chongyang Zhong, Min Cao,
Shihong Xia
|
Unpaired Multi-domain Attribute Translation of 3D Facial Shapes with a
Square and Symmetric Geometric Map
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While impressive progress has recently been made in image-oriented facial
attribute translation, shape-oriented 3D facial attribute translation remains
an unsolved issue. This is primarily limited by the lack of 3D generative
models and ineffective usage of 3D facial data. We propose a learning framework
for 3D facial attribute translation to relieve these limitations. Firstly, we
customize a novel geometric map for 3D shape representation and embed it in an
end-to-end generative adversarial network. The geometric map represents 3D
shapes symmetrically on a square image grid, while preserving the neighboring
relationship of 3D vertices in a local least-square sense. This enables
effective learning for the latent representation of data with different
attributes. Secondly, we employ a unified and unpaired learning framework for
multi-domain attribute translation. It not only makes effective usage of data
correlation from multiple domains, but also mitigates the constraint of
hard-to-access paired data. Finally, we propose a hierarchical architecture for the
discriminator to guarantee robust results against both global and local
artifacts. We conduct extensive experiments to demonstrate the advantage of the
proposed framework over the state-of-the-art in generating high-fidelity facial
shapes. Given an input 3D facial shape, the proposed framework is able to
synthesize novel shapes of different attributes, which covers some downstream
applications, such as expression transfer, gender translation, and aging. Code
at https://github.com/NaughtyZZ/3D_facial_shape_attribute_translation_ssgmap.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 08:37:55 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Fan",
"Zhenfeng",
""
],
[
"Zhang",
"Zhiheng",
""
],
[
"Yang",
"Shuang",
""
],
[
"Zhong",
"Chongyang",
""
],
[
"Cao",
"Min",
""
],
[
"Xia",
"Shihong",
""
]
] |
new_dataset
| 0.988355 |
2308.13250
|
Shimin Zhang
|
Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen
Tan
|
TC-LIF: A Two-Compartment Spiking Neuron Model for Long-term Sequential
Modelling
|
arXiv admin note: substantial text overlap with arXiv:2307.07231
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The identification of sensory cues associated with potential opportunities
and dangers is frequently complicated by unrelated events that separate useful
cues by long delays. As a result, it remains a challenging task for
state-of-the-art spiking neural networks (SNNs) to establish long-term temporal
dependency between distant cues. To address this challenge, we propose a novel
biologically inspired Two-Compartment Leaky Integrate-and-Fire spiking neuron
model, dubbed TC-LIF. The proposed model incorporates carefully designed
somatic and dendritic compartments that are tailored to facilitate learning
long-term temporal dependencies. Furthermore, a theoretical analysis is
provided to validate the effectiveness of TC-LIF in propagating error gradients
over an extended temporal duration. Our experimental results, on a diverse
range of temporal classification tasks, demonstrate superior temporal
classification capability, rapid training convergence, and high energy
efficiency of the proposed TC-LIF model. Therefore, this work opens up a myriad
of opportunities for solving challenging temporal processing tasks on emerging
neuromorphic computing systems.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 08:54:41 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Zhang",
"Shimin",
""
],
[
"Yang",
"Qu",
""
],
[
"Ma",
"Chenxiang",
""
],
[
"Wu",
"Jibin",
""
],
[
"Li",
"Haizhou",
""
],
[
"Tan",
"Kay Chen",
""
]
] |
new_dataset
| 0.999578 |
2308.13274
|
Nick Brown
|
Gabriel Rodriguez-Canal, Nick Brown, Tim Dykes, Jessica R. Jones,
Utz-Uwe Haus
|
Fortran High-Level Synthesis: Reducing the barriers to accelerating HPC
codes on FPGAs
|
Author accepted version to appear in 33rd International Conference on
Field-Programmable Logic and Applications
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years the use of FPGAs to accelerate scientific applications has
grown, with numerous applications demonstrating the benefit of FPGAs for high
performance workloads. However, whilst High Level Synthesis (HLS) has
significantly lowered the barrier to entry in programming FPGAs by enabling
programmers to use C++, a major challenge is that most often these codes are
not originally written in C++. Instead, Fortran is the lingua franca of
scientific computing, and so it requires a complex and time-consuming initial
step to convert into C++ even before considering the FPGA.
In this paper we describe work enabling Fortran for AMD Xilinx FPGAs by
connecting the LLVM Flang front end to AMD Xilinx's LLVM back end. This enables
programmers to use Fortran as a first-class language for programming FPGAs, and
as we demonstrate enjoy all the tuning and optimisation opportunities that HLS
C++ provides. Furthermore, we demonstrate that certain language features of
Fortran make it especially beneficial for programming FPGAs compared to C++.
The result of this work is a lowering of the barrier to entry in using FPGAs
for scientific computing, enabling programmers to leverage their existing
codebase and language of choice on the FPGA directly.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 09:51:38 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Rodriguez-Canal",
"Gabriel",
""
],
[
"Brown",
"Nick",
""
],
[
"Dykes",
"Tim",
""
],
[
"Jones",
"Jessica R.",
""
],
[
"Haus",
"Utz-Uwe",
""
]
] |
new_dataset
| 0.994809 |
2308.13318
|
Elisa Maiettini
|
Shiva Hanifi, Elisa Maiettini, Maria Lombardi, Lorenzo Natale
|
iCub Detecting Gazed Objects: A Pipeline Estimating Human Attention
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores the role of eye gaze in human-robot interactions and
proposes a novel system for detecting objects gazed by the human using solely
visual feedback. The system leverages face detection, human attention
prediction, and online object detection, and it allows the robot to perceive
and interpret human gaze accurately, paving the way for establishing joint
attention with human partners. Additionally, a novel dataset collected with the
humanoid robot iCub is introduced, comprising over 22,000 images from ten
participants gazing at different annotated objects. This dataset serves as a
benchmark for evaluating the performance of the proposed pipeline. The paper
also includes an experimental analysis of the pipeline's effectiveness in a
human-robot interaction setting, examining the performance of each component.
Furthermore, the developed system is deployed on the humanoid robot iCub, and a
supplementary video showcases its functionality. The results demonstrate the
potential of the proposed approach to enhance social awareness and
responsiveness in social robotics, as well as improve assistance and support in
collaborative scenarios, promoting efficient human-robot collaboration. The
code and the collected dataset will be released upon acceptance.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 11:45:07 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Hanifi",
"Shiva",
""
],
[
"Maiettini",
"Elisa",
""
],
[
"Lombardi",
"Maria",
""
],
[
"Natale",
"Lorenzo",
""
]
] |
new_dataset
| 0.995403 |
2308.13319
|
Ming Yan
|
Ming Yan, Junjie Chen, Jie M. Zhang, Xuejie Cao, Chen Yang, Mark
Harman
|
COCO: Testing Code Generation Systems via Concretized Instructions
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code generation systems have been extensively developed in recent years to
generate source code based on natural language instructions. However, despite
their advancements, these systems still face robustness issues where even
slightly different instructions can result in significantly different code
semantics. Robustness is critical for code generation systems, as it can have
significant impacts on software development, software quality, and trust in the
generated code. Although existing testing techniques for general text-to-text
software can detect some robustness issues, they are limited in effectiveness
due to ignoring the characteristics of code generation systems. In this work,
we propose a novel technique COCO to test the robustness of code generation
systems. It exploits the usage scenario of code generation systems to make the
original programming instruction more concrete by incorporating features known
to be contained in the original code. A robust system should maintain code
semantics for the concretized instruction, and COCO detects robustness
inconsistencies when it does not. We evaluated COCO on eight advanced code
generation systems, including commercial tools such as Copilot and ChatGPT,
using two widely-used datasets. Our results demonstrate the effectiveness of
COCO in testing the robustness of code generation systems, outperforming two
techniques adopted from general text-to-text software testing by 466.66% and
104.02%, respectively. Furthermore, concretized instructions generated by COCO
can help reduce robustness inconsistencies by 18.35% to 53.91% through
fine-tuning.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 11:49:27 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Yan",
"Ming",
""
],
[
"Chen",
"Junjie",
""
],
[
"Zhang",
"Jie M.",
""
],
[
"Cao",
"Xuejie",
""
],
[
"Yang",
"Chen",
""
],
[
"Harman",
"Mark",
""
]
] |
new_dataset
| 0.998282 |
2308.13326
|
Kopo Marvin Ramokapane
|
Kopo M. Ramokapane and Awais Rashid
|
ExD: Explainable Deletion
|
16 pages, 3 figures, New Security Paradigms Workshop
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper focuses on a critical yet often overlooked aspect of data in
digital systems and services: deletion. Through a review of existing literature,
we highlight the challenges that users face when attempting to delete data from
systems and services, the lack of transparency in how such requests are handled
or processed, and the lack of clear assurance that the data has been deleted. We
highlight that this not only impacts users' agency over their data but also
poses issues with regard to compliance with fundamental legal rights such as
the right to be forgotten. We propose a new paradigm, explainable deletion, to
improve users' agency and control over their data and enable systems to deliver
effective assurance, transparency and compliance. We discuss the properties
required of such explanations and their relevance and benefit for various
individuals and groups involved or having an interest in data deletion
processes and implications. We discuss various design implications pertaining
to explainable deletion and present a research agenda for the community.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 11:59:37 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Ramokapane",
"Kopo M.",
""
],
[
"Rashid",
"Awais",
""
]
] |
new_dataset
| 0.979118 |