id
stringlengths 9
10
| submitter
stringlengths 2
52
⌀ | authors
stringlengths 4
6.51k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
345
⌀ | doi
stringlengths 11
120
⌀ | report-no
stringlengths 2
243
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | abstract
stringlengths 33
3.33k
| versions
list | update_date
timestamp[s] | authors_parsed
list | prediction
stringclasses 1
value | probability
float64 0.95
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.08104
|
Fei Chen
|
Fei Chen, Hoa Van Nguyen, David A. Taggart, Katrina Falkner, S. Hamid
Rezatofighi, Damith C. Ranasinghe
|
ConservationBots: Autonomous Aerial Robot for Fast Robust Wildlife
Tracking in Complex Terrains
|
33 pages, 21 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Today, the most widespread, widely applicable technology for gathering data
relies on experienced scientists armed with handheld radio telemetry equipment
to locate low-power radio transmitters attached to wildlife from the ground.
Although aerial robots can transform labor-intensive conservation tasks, the
realization of autonomous systems for tackling task complexities under
real-world conditions remains a challenge. We developed ConservationBots-small
aerial robots for tracking multiple, dynamic, radio-tagged wildlife. The aerial
robot achieves robust localization performance and fast task completion times
-- significant for energy-limited aerial systems while avoiding close
encounters with potential, counter-productive disturbances to wildlife. Our
approach overcomes the technical and practical problems posed by combining a
lightweight sensor with new concepts: i) planning to determine both trajectory
and measurement actions guided by an information-theoretic objective, which
allows the robot to strategically select near-instantaneous range-only
measurements to achieve faster localization, and time-consuming sensor rotation
actions to acquire bearing measurements and achieve robust tracking
performance; ii) a bearing detector more robust to noise and iii) a tracking
algorithm formulation robust to missed and false detections experienced in
real-world conditions. We conducted extensive studies: simulations built upon
complex signal propagation over high-resolution elevation data on diverse
geographical terrains; field testing; studies with wombats (Lasiorhinus
latifrons; nocturnal, vulnerable species dwelling in underground warrens) and
tracking comparisons with a highly experienced biologist to validate the
effectiveness of our aerial robot and demonstrate the significant advantages
over the manual method.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 02:24:26 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 02:44:56 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Chen",
"Fei",
""
],
[
"Van Nguyen",
"Hoa",
""
],
[
"Taggart",
"David A.",
""
],
[
"Falkner",
"Katrina",
""
],
[
"Rezatofighi",
"S. Hamid",
""
],
[
"Ranasinghe",
"Damith C.",
""
]
] |
new_dataset
| 0.974271 |
2308.08376
|
Enrique Dehaerne
|
Thibault Lechien, Enrique Dehaerne, Bappaditya Dey, Victor Blanco,
Sandip Halder, Stefan De Gendt, Wannes Meert
|
Automated Semiconductor Defect Inspection in Scanning Electron
Microscope Images: a Systematic Review
|
16 pages, 12 figures, 3 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A growing need exists for efficient and accurate methods for detecting
defects in semiconductor materials and devices. These defects can have a
detrimental impact on the efficiency of the manufacturing process, because they
cause critical failures and wafer-yield limitations. As nodes and patterns get
smaller, even high-resolution imaging techniques such as Scanning Electron
Microscopy (SEM) produce noisy images due to operating close to sensitivity
levels and due to varying physical properties of different underlayers or
resist materials. This inherent noise is one of the main challenges for defect
inspection. One promising approach is the use of machine learning algorithms,
which can be trained to accurately classify and locate defects in semiconductor
samples. Recently, convolutional neural networks have proved to be particularly
useful in this regard. This systematic review provides a comprehensive overview
of the state of automated semiconductor defect inspection on SEM images,
including the most recent innovations and developments. 38 publications were
selected on this topic, indexed in IEEE Xplore and SPIE databases. For each of
these, the application, methodology, dataset, results, limitations and future
work were summarized. A comprehensive overview and analysis of their methods is
provided. Finally, promising avenues for future work in the field of SEM-based
defect inspection are suggested.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 13:59:43 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 11:03:04 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Lechien",
"Thibault",
""
],
[
"Dehaerne",
"Enrique",
""
],
[
"Dey",
"Bappaditya",
""
],
[
"Blanco",
"Victor",
""
],
[
"Halder",
"Sandip",
""
],
[
"De Gendt",
"Stefan",
""
],
[
"Meert",
"Wannes",
""
]
] |
new_dataset
| 0.998649 |
2308.08577
|
Hrishikesh Viswanath
|
Hrishikesh Viswanath, Aneesh Bhattacharya, Pascal Jutras-Dub\'e,
Prerit Gupta, Mridu Prashanth, Yashvardhan Khaitan, Aniket Bera
|
AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect
Transfer for Speech Synthesis
| null | null | null | null |
cs.SD cs.CL cs.HC eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Affect is an emotional characteristic encompassing valence, arousal, and
intensity, and is a crucial attribute for enabling authentic conversations.
While existing text-to-speech (TTS) and speech-to-speech systems rely on
strength embedding vectors and global style tokens to capture emotions, these
models represent emotions as a component of style or represent them in discrete
categories. We propose AffectEcho, an emotion translation model, that uses a
Vector Quantized codebook to model emotions within a quantized space featuring
five levels of affect intensity to capture complex nuances and subtle
differences in the same emotion. The quantized emotional embeddings are
implicitly derived from spoken speech samples, eliminating the need for one-hot
vectors or explicit strength embeddings. Experimental results demonstrate the
effectiveness of our approach in controlling the emotions of generated speech
while preserving identity, style, and emotional cadence unique to each speaker.
We showcase the language-independent emotion modeling capability of the
quantized emotional embeddings learned from a bilingual (English and Chinese)
speech corpus with an emotion transfer task from a reference speech to a target
speech. We achieve state-of-art results on both qualitative and quantitative
metrics.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 06:28:29 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Viswanath",
"Hrishikesh",
""
],
[
"Bhattacharya",
"Aneesh",
""
],
[
"Jutras-Dubé",
"Pascal",
""
],
[
"Gupta",
"Prerit",
""
],
[
"Prashanth",
"Mridu",
""
],
[
"Khaitan",
"Yashvardhan",
""
],
[
"Bera",
"Aniket",
""
]
] |
new_dataset
| 0.985318 |
2308.08610
|
Eren Unlu Ph. D.
|
Eren Unlu
|
FootGPT : A Large Language Model Development Experiment on a Minimal
Setting
|
10 pages, 3 figures
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With recent empirical observations, it has been argued that the most
significant aspect of developing accurate language models may be the proper
dataset content and training strategy compared to the number of neural
parameters, training duration or dataset size. Following this argument, we
opted to fine tune a one billion parameter size trained general purpose causal
language model with a dataset curated on team statistics of the Italian
football league first ten game weeks, using low rank adaptation. The limited
training dataset was compiled based on a framework where a powerful commercial
large language model provides distilled paragraphs and question answer pairs as
intended. The training duration was kept relatively short to provide a basis
for our minimal setting exploration. We share our key observations on the
process related to developing a specific purpose language model which is
intended to interpret soccer data with constrained resources in this article.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 18:03:22 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Unlu",
"Eren",
""
]
] |
new_dataset
| 0.999291 |
2308.08621
|
Elahe Moradi
|
Neda Darbeheshti and Elahe Moradi
|
LSTM-Based Forecasting Model for GRACE Accelerometer Data
| null | null | null | null |
cs.LG cs.AI physics.space-ph
|
http://creativecommons.org/licenses/by/4.0/
|
The Gravity Recovery and Climate Experiment (GRACE) satellite mission,
spanning from 2002 to 2017, has provided a valuable dataset for monitoring
variations in Earth's gravity field, enabling diverse applications in
geophysics and hydrology. The mission was followed by GRACE Follow-On in 2018,
continuing data collection efforts. The monthly Earth gravity field, derived
from the integration different instruments onboard satellites, has shown
inconsistencies due to various factors, including gaps in observations for
certain instruments since the beginning of the GRACE mission.
With over two decades of GRACE and GRACE Follow-On data now available, this
paper proposes an approach to fill the data gaps and forecast GRACE
accelerometer data. Specifically, we focus on accelerometer data and employ
Long Short-Term Memory (LSTM) networks to train a model capable of predicting
accelerometer data for all three axes.
In this study, we describe the methodology used to preprocess the
accelerometer data, prepare it for LSTM training, and evaluate the model's
performance. Through experimentation and validation, we assess the model's
accuracy and its ability to predict accelerometer data for the three axes. Our
results demonstrate the effectiveness of the LSTM forecasting model in filling
gaps and forecasting GRACE accelerometer data.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 18:39:29 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Darbeheshti",
"Neda",
""
],
[
"Moradi",
"Elahe",
""
]
] |
new_dataset
| 0.99978 |
2308.08650
|
Andrea Marchini
|
William Black, Ercument Ilhan, Andrea Marchini and Vilda Markeviciute
|
AdaptEx: A Self-Service Contextual Bandit Platform
| null | null |
10.1145/3604915.3608870
| null |
cs.IR cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents AdaptEx, a self-service contextual bandit platform widely
used at Expedia Group, that leverages multi-armed bandit algorithms to
personalize user experiences at scale. AdaptEx considers the unique context of
each visitor to select the optimal variants and learns quickly from every
interaction they make. It offers a powerful solution to improve user
experiences while minimizing the costs and time associated with traditional
testing methods. The platform unlocks the ability to iterate towards optimal
product solutions quickly, even in ever-changing content and continuous "cold
start" situations gracefully.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 16:32:23 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Black",
"William",
""
],
[
"Ilhan",
"Ercument",
""
],
[
"Marchini",
"Andrea",
""
],
[
"Markeviciute",
"Vilda",
""
]
] |
new_dataset
| 0.998675 |
2308.08669
|
Vlad-Constantin Lungu-Stan
|
Vlad-Constantin Lungu-Stan, Dumitru-Clementin Cercel, Florin Pop
|
SkinDistilViT: Lightweight Vision Transformer for Skin Lesion
Classification
|
Accepted at ICANN 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Skin cancer is a treatable disease if discovered early. We provide a
production-specific solution to the skin cancer classification problem that
matches human performance in melanoma identification by training a vision
transformer on melanoma medical images annotated by experts. Since inference
cost, both time and memory wise is important in practice, we employ knowledge
distillation to obtain a model that retains 98.33% of the teacher's balanced
multi-class accuracy, at a fraction of the cost. Memory-wise, our model is
49.60% smaller than the teacher. Time-wise, our solution is 69.25% faster on
GPU and 97.96% faster on CPU. By adding classification heads at each level of
the transformer and employing a cascading distillation process, we improve the
balanced multi-class accuracy of the base model by 2.1%, while creating a range
of models of various sizes but comparable performance. We provide the code at
https://github.com/Longman-Stan/SkinDistilVit.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 20:39:06 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Lungu-Stan",
"Vlad-Constantin",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
],
[
"Pop",
"Florin",
""
]
] |
new_dataset
| 0.997833 |
2308.08728
|
Jia-Rui Lin
|
Zhe Zheng, Ke-Yin Chen, Xin-Yu Cao, Xin-Zheng Lu, Jia-Rui Lin
|
LLM-FuncMapper: Function Identification for Interpreting Complex Clauses
in Building Codes via LLM
| null | null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As a vital stage of automated rule checking (ARC), rule interpretation of
regulatory texts requires considerable effort. However, interpreting regulatory
clauses with implicit properties or complex computational logic is still
challenging due to the lack of domain knowledge and limited expressibility of
conventional logic representations. Thus, LLM-FuncMapper, an approach to
identifying predefined functions needed to interpret various regulatory clauses
based on the large language model (LLM), is proposed. First, by systematically
analysis of building codes, a series of atomic functions are defined to capture
shared computational logics of implicit properties and complex constraints,
creating a database of common blocks for interpreting regulatory clauses. Then,
a prompt template with the chain of thought is developed and further enhanced
with a classification-based tuning strategy, to enable common LLMs for
effective function identification. Finally, the proposed approach is validated
with statistical analysis, experiments, and proof of concept. Statistical
analysis reveals a long-tail distribution and high expressibility of the
developed function database, with which almost 100% of computer-processible
clauses can be interpreted and represented as computer-executable codes.
Experiments show that LLM-FuncMapper achieve promising results in identifying
relevant predefined functions for rule interpretation. Further proof of concept
in automated rule interpretation also demonstrates the possibility of
LLM-FuncMapper in interpreting complex regulatory clauses. To the best of our
knowledge, this study is the first attempt to introduce LLM for understanding
and interpreting complex regulatory clauses, which may shed light on further
adoption of LLM in the construction domain.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 01:58:04 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zheng",
"Zhe",
""
],
[
"Chen",
"Ke-Yin",
""
],
[
"Cao",
"Xin-Yu",
""
],
[
"Lu",
"Xin-Zheng",
""
],
[
"Lin",
"Jia-Rui",
""
]
] |
new_dataset
| 0.998761 |
2308.08753
|
Xiaoli Meng
|
Lubing Zhou, Xiaoli Meng, Yiluan Guo, Jiong Yang
|
BOTT: Box Only Transformer Tracker for 3D Object Tracking
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Tracking 3D objects is an important task in autonomous driving. Classical
Kalman Filtering based methods are still the most popular solutions. However,
these methods require handcrafted designs in motion modeling and can not
benefit from the growing data amounts. In this paper, Box Only Transformer
Tracker (BOTT) is proposed to learn to link 3D boxes of the same object from
the different frames, by taking all the 3D boxes in a time window as input.
Specifically, transformer self-attention is applied to exchange information
between all the boxes to learn global-informative box embeddings. The
similarity between these learned embeddings can be used to link the boxes of
the same object. BOTT can be used for both online and offline tracking modes
seamlessly. Its simplicity enables us to significantly reduce engineering
efforts required by traditional Kalman Filtering based methods. Experiments
show BOTT achieves competitive performance on two largest 3D MOT benchmarks:
69.9 and 66.7 AMOTA on nuScenes validation and test splits, respectively, 56.45
and 59.57 MOTA L2 on Waymo Open Dataset validation and test splits,
respectively. This work suggests that tracking 3D objects by learning features
directly from 3D boxes using transformers is a simple yet effective way.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 03:04:55 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zhou",
"Lubing",
""
],
[
"Meng",
"Xiaoli",
""
],
[
"Guo",
"Yiluan",
""
],
[
"Yang",
"Jiong",
""
]
] |
new_dataset
| 0.998986 |
2308.08810
|
Sunghyun Park
|
Sunghyun Park, Seunghan Yang, Jaegul Choo, Sungrack Yun
|
Label Shift Adapter for Test-Time Adaptation under Covariate and Label
Shifts
|
Accepted to ICCV 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Test-time adaptation (TTA) aims to adapt a pre-trained model to the target
domain in a batch-by-batch manner during inference. While label distributions
often exhibit imbalances in real-world scenarios, most previous TTA approaches
typically assume that both source and target domain datasets have balanced
label distribution. Due to the fact that certain classes appear more frequently
in certain domains (e.g., buildings in cities, trees in forests), it is natural
that the label distribution shifts as the domain changes. However, we discover
that the majority of existing TTA methods fail to address the coexistence of
covariate and label shifts. To tackle this challenge, we propose a novel label
shift adapter that can be incorporated into existing TTA approaches to deal
with label shifts during the TTA process effectively. Specifically, we estimate
the label distribution of the target domain to feed it into the label shift
adapter. Subsequently, the label shift adapter produces optimal parameters for
the target label distribution. By predicting only the parameters for a part of
the pre-trained source model, our approach is computationally efficient and can
be easily applied, regardless of the model architectures. Through extensive
experiments, we demonstrate that integrating our strategy with TTA approaches
leads to substantial performance improvements under the joint presence of label
and covariate shifts.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 06:37:37 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Park",
"Sunghyun",
""
],
[
"Yang",
"Seunghan",
""
],
[
"Choo",
"Jaegul",
""
],
[
"Yun",
"Sungrack",
""
]
] |
new_dataset
| 0.998987 |
2308.08833
|
Dingjie Song
|
Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong
Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou
Li
|
CMB: A Comprehensive Medical Benchmark in Chinese
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 07:51:23 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Wang",
"Xidong",
""
],
[
"Chen",
"Guiming Hardy",
""
],
[
"Song",
"Dingjie",
""
],
[
"Zhang",
"Zhiyi",
""
],
[
"Chen",
"Zhihong",
""
],
[
"Xiao",
"Qingying",
""
],
[
"Jiang",
"Feng",
""
],
[
"Li",
"Jianquan",
""
],
[
"Wan",
"Xiang",
""
],
[
"Wang",
"Benyou",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.999797 |
2308.08862
|
Hao Zhang
|
Hao Zhang, Jiaming Chen, Jiyu Cheng, Yibin Li, Simon X. Yang, Wei
Zhang
|
Nowhere to Go: Benchmarking Multi-robot Collaboration in Target Trapping
Environment
| null | null | null | null |
cs.RO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Collaboration is one of the most important factors in multi-robot systems.
Considering certain real-world applications and to further promote its
development, we propose a new benchmark to evaluate multi-robot collaboration
in Target Trapping Environment (T2E). In T2E, two kinds of robots (called
captor robot and target robot) share the same space. The captors aim to catch
the target collaboratively, while the target will try to escape from the trap.
Both the trapping and escaping process can use the environment layout to help
achieve the corresponding objective, which requires high collaboration between
robots and the utilization of the environment. For the benchmark, we present
and evaluate multiple learning-based baselines in T2E, and provide insights
into regimes of multi-robot collaboration. We also make our benchmark publicly
available and encourage researchers from related robotics disciplines to
propose, evaluate, and compare their solutions in this benchmark. Our project
is released at https://github.com/Dr-Xiaogaren/T2E.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 08:45:31 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zhang",
"Hao",
""
],
[
"Chen",
"Jiaming",
""
],
[
"Cheng",
"Jiyu",
""
],
[
"Li",
"Yibin",
""
],
[
"Yang",
"Simon X.",
""
],
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.997853 |
2308.08884
|
Zhiming Wang
|
Zhiming Wang, Lin Gu, Feng Lu
|
SRMAE: Masked Image Modeling for Scale-Invariant Deep Representations
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the prevalence of scale variance in nature images, we propose to use
image scale as a self-supervised signal for Masked Image Modeling (MIM). Our
method involves selecting random patches from the input image and downsampling
them to a low-resolution format. Our framework utilizes the latest advances in
super-resolution (SR) to design the prediction head, which reconstructs the
input from low-resolution clues and other patches. After 400 epochs of
pre-training, our Super Resolution Masked Autoencoders (SRMAE) get an accuracy
of 82.1% on the ImageNet-1K task. Image scale signal also allows our SRMAE to
capture scale invariance representation. For the very low resolution (VLR)
recognition task, our model achieves the best performance, surpassing DeriveNet
by 1.3%. Our method also achieves an accuracy of 74.84% on the task of
recognizing low-resolution facial expressions, surpassing the current
state-of-the-art FMD by 9.48%.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 09:43:14 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Wang",
"Zhiming",
""
],
[
"Gu",
"Lin",
""
],
[
"Lu",
"Feng",
""
]
] |
new_dataset
| 0.982466 |
2308.08935
|
Runmin Cong
|
Runmin Cong, Yuchen Guan, Jinpeng Chen, Wei Zhang, Yao Zhao, and Sam
Kwong
|
SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow
Detection
|
Accepted by ACM MM 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite significant progress in shadow detection, current methods still
struggle with the adverse impact of background color, which may lead to errors
when shadows are present on complex backgrounds. Drawing inspiration from the
human visual system, we treat the input shadow image as a composition of a
background layer and a shadow layer, and design a Style-guided Dual-layer
Disentanglement Network (SDDNet) to model these layers independently. To
achieve this, we devise a Feature Separation and Recombination (FSR) module
that decomposes multi-level features into shadow-related and background-related
components by offering specialized supervision for each component, while
preserving information integrity and avoiding redundancy through the
reconstruction constraint. Moreover, we propose a Shadow Style Filter (SSF)
module to guide the feature disentanglement by focusing on style
differentiation and uniformization. With these two modules and our overall
pipeline, our model effectively minimizes the detrimental effects of background
color, yielding superior performance on three public datasets with a real-time
inference speed of 32 FPS.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 12:10:51 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Cong",
"Runmin",
""
],
[
"Guan",
"Yuchen",
""
],
[
"Chen",
"Jinpeng",
""
],
[
"Zhang",
"Wei",
""
],
[
"Zhao",
"Yao",
""
],
[
"Kwong",
"Sam",
""
]
] |
new_dataset
| 0.99873 |
2308.09022
|
Zhiwei Wei
|
Song Zhang, Wenjia Xu, Zhiwei Wei, Lili Zhang, Yang Wang, Junyi Liu
|
ARAI-MVSNet: A multi-view stereo depth estimation network with adaptive
depth range and depth interval
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-View Stereo~(MVS) is a fundamental problem in geometric computer vision
which aims to reconstruct a scene using multi-view images with known camera
parameters. However, the mainstream approaches represent the scene with a fixed
all-pixel depth range and equal depth interval partition, which will result in
inadequate utilization of depth planes and imprecise depth estimation. In this
paper, we present a novel multi-stage coarse-to-fine framework to achieve
adaptive all-pixel depth range and depth interval. We predict a coarse depth
map in the first stage, then an Adaptive Depth Range Prediction module is
proposed in the second stage to zoom in the scene by leveraging the reference
image and the obtained depth map in the first stage and predict a more accurate
all-pixel depth range for the following stages. In the third and fourth stages,
we propose an Adaptive Depth Interval Adjustment module to achieve adaptive
variable interval partition for pixel-wise depth range. The depth interval
distribution in this module is normalized by Z-score, which can allocate dense
depth hypothesis planes around the potential ground truth depth value and vice
versa to achieve more accurate depth estimation. Extensive experiments on four
widely used benchmark datasets~(DTU, TnT, BlendedMVS, ETH 3D) demonstrate that
our model achieves state-of-the-art performance and yields competitive
generalization ability. Particularly, our method achieves the highest Acc and
Overall on the DTU dataset, while attaining the highest Recall and
$F_{1}$-score on the Tanks and Temples intermediate and advanced dataset.
Moreover, our method also achieves the lowest $e_{1}$ and $e_{3}$ on the
BlendedMVS dataset and the highest Acc and $F_{1}$-score on the ETH 3D dataset,
surpassing all listed methods.Project website:
https://github.com/zs670980918/ARAI-MVSNet
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 14:52:11 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zhang",
"Song",
""
],
[
"Xu",
"Wenjia",
""
],
[
"Wei",
"Zhiwei",
""
],
[
"Zhang",
"Lili",
""
],
[
"Wang",
"Yang",
""
],
[
"Liu",
"Junyi",
""
]
] |
new_dataset
| 0.988084 |
2308.09075
|
Souma Chowdhury
|
Prajit KrisshnaKumar, Jhoel Witter, Steve Paul, Hanvit Cho, Karthik
Dantu, and Souma Chowdhury
|
Fast Decision Support for Air Traffic Management at Urban Air Mobility
Vertiports using Graph Learning
|
Accepted for presentation in proceedings of IEEE/RSJ International
Conference on Intelligent Robots and Systems 2023
| null | null | null |
cs.MA cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Urban Air Mobility (UAM) promises a new dimension to decongested, safe, and
fast travel in urban and suburban hubs. These UAM aircraft are conceived to
operate from small airports called vertiports each comprising multiple
take-off/landing and battery-recharging spots. Since they might be situated in
dense urban areas and need to handle many aircraft landings and take-offs each
hour, managing this schedule in real-time becomes challenging for a traditional
air-traffic controller but instead calls for an automated solution. This paper
provides a novel approach to this problem of Urban Air Mobility - Vertiport
Schedule Management (UAM-VSM), which leverages graph reinforcement learning to
generate decision-support policies. Here the designated physical spots within
the vertiport's airspace and the vehicles being managed are represented as two
separate graphs, with feature extraction performed through a graph
convolutional network (GCN). Extracted features are passed onto perceptron
layers to decide actions such as continue to hover or cruise, continue idling
or take-off, or land on an allocated vertiport spot. Performance is measured
based on delays, safety (no. of collisions) and battery consumption. Through
realistic simulations in AirSim applied to scaled down multi-rotor vehicles,
our results demonstrate the suitability of using graph reinforcement learning
to solve the UAM-VSM problem and its superiority to basic reinforcement
learning (with graph embeddings) or random choice baselines.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 16:05:44 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"KrisshnaKumar",
"Prajit",
""
],
[
"Witter",
"Jhoel",
""
],
[
"Paul",
"Steve",
""
],
[
"Cho",
"Hanvit",
""
],
[
"Dantu",
"Karthik",
""
],
[
"Chowdhury",
"Souma",
""
]
] |
new_dataset
| 0.995383 |
2308.09080
|
Adrian Holzbock
|
Adrian Holzbock, Alexander Tsaregorodtsev, and Vasileios Belagiannis
|
Pedestrian Environment Model for Automated Driving
|
Accepted for presentation at the 26th IEEE International Conference
on Intelligent Transportation Systems (ITSC 2023), 24-28 September 2023,
Bilbao, Bizkaia, Spain
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Besides interacting correctly with other vehicles, automated vehicles should
also be able to react in a safe manner to vulnerable road users like
pedestrians or cyclists. For a safe interaction between pedestrians and
automated vehicles, the vehicle must be able to interpret the pedestrian's
behavior. Common environment models do not contain information like body poses
used to understand the pedestrian's intent. In this work, we propose an
environment model that includes the position of the pedestrians as well as
their pose information. We only use images from a monocular camera and the
vehicle's localization data as input to our pedestrian environment model. We
extract the skeletal information with a neural network human pose estimator
from the image. Furthermore, we track the skeletons with a simple tracking
algorithm based on the Hungarian algorithm and an ego-motion compensation. To
obtain the 3D information of the position, we aggregate the data from
consecutive frames in conjunction with the vehicle position. We demonstrate our
pedestrian environment model on data generated with the CARLA simulator and the
nuScenes dataset. Overall, we reach a relative position error of around 16% on
both datasets.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 16:10:58 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Holzbock",
"Adrian",
""
],
[
"Tsaregorodtsev",
"Alexander",
""
],
[
"Belagiannis",
"Vasileios",
""
]
] |
new_dataset
| 0.971824 |
2308.09084
|
Dongyang Yu
|
Dongyang Yu and Haoyue Zhang and Zhirui Zhou and Wangpeng An and
Yanhong Yang
|
MovePose: A High-performance Human Pose Estimation Algorithm on Mobile
and Edge Devices
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present MovePose, an optimized lightweight convolutional neural network
designed specifically for real-time body pose estimation on CPU-based mobile
devices. The current solutions do not provide satisfactory accuracy and speed
for human posture estimation, and MovePose addresses this gap. It aims to
maintain real-time performance while improving the accuracy of human posture
estimation for mobile devices. The network produces 17 keypoints for each
individual at a rate exceeding 11 frames per second, making it suitable for
real-time applications such as fitness tracking, sign language interpretation,
and advanced mobile human posture estimation. Our MovePose algorithm has
attained an Mean Average Precision (mAP) score of 67.7 on the COCO
\cite{cocodata} validation dataset. The MovePose algorithm displayed efficiency
with a performance of 69+ frames per second (fps) when run on an Intel
i9-10920x CPU. Additionally, it showcased an increased performance of 452+ fps
on an NVIDIA RTX3090 GPU. On an Android phone equipped with a Snapdragon 8 + 4G
processor, the fps reached above 11. To enhance accuracy, we incorporated three
techniques: deconvolution, large kernel convolution, and coordinate
classification methods. Compared to basic upsampling, deconvolution is
trainable, improves model capacity, and enhances the receptive field. Large
kernel convolution strengthens these properties at a decreased computational
cost. In summary, MovePose provides high accuracy and real-time performance,
marking it a potential tool for a variety of applications, including those
focused on mobile-side human posture estimation. The code and models for this
algorithm will be made publicly accessible.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 16:23:52 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Yu",
"Dongyang",
""
],
[
"Zhang",
"Haoyue",
""
],
[
"Zhou",
"Zhirui",
""
],
[
"An",
"Wangpeng",
""
],
[
"Yang",
"Yanhong",
""
]
] |
new_dataset
| 0.998523 |
2308.09115
|
N M Anoop Krishnan
|
Mohd Zaki, Jayadeva, Mausam, N. M. Anoop Krishnan
|
MaScQA: A Question Answering Dataset for Investigating Materials Science
Knowledge of Large Language Models
| null | null | null | null |
cs.CL cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Information extraction and textual comprehension from materials literature
are vital for developing an exhaustive knowledge base that enables accelerated
materials discovery. Language models have demonstrated their capability to
answer domain-specific questions and retrieve information from knowledge bases.
However, there are no benchmark datasets in the materials domain that can
evaluate the understanding of the key concepts by these language models. In
this work, we curate a dataset of 650 challenging questions from the materials
domain that require the knowledge and skills of a materials student who has
cleared their undergraduate degree. We classify these questions based on their
structure and the materials science domain-based subcategories. Further, we
evaluate the performance of GPT-3.5 and GPT-4 models on solving these questions
via zero-shot and chain of thought prompting. It is observed that GPT-4 gives
the best performance (~62% accuracy) as compared to GPT-3.5. Interestingly, in
contrast to the general observation, no significant improvement in accuracy is
observed with the chain of thought prompting. To evaluate the limitations, we
performed an error analysis, which revealed conceptual errors (~64%) as the
major contributor compared to computational errors (~36%) towards the reduced
performance of LLMs. We hope that the dataset and analysis performed in this
work will promote further research in developing better materials science
domain-specific LLMs and strategies for information extraction.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 17:51:05 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zaki",
"Mohd",
""
],
[
"Jayadeva",
"",
""
],
[
"Mausam",
"",
""
],
[
"Krishnan",
"N. M. Anoop",
""
]
] |
new_dataset
| 0.999741 |
2308.09119
|
Xijun Wang
|
Xijun Wang, Anqi Liang, Junbang Liang, Ming Lin, Yu Lou, Shan Yang
|
ICAR: Image-based Complementary Auto Reasoning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Scene-aware Complementary Item Retrieval (CIR) is a challenging task which
requires to generate a set of compatible items across domains. Due to the
subjectivity, it is difficult to set up a rigorous standard for both data
collection and learning objectives. To address this challenging task, we
propose a visual compatibility concept, composed of similarity (resembling in
color, geometry, texture, and etc.) and complementarity (different items like
table vs chair completing a group). Based on this notion, we propose a
compatibility learning framework, a category-aware Flexible Bidirectional
Transformer (FBT), for visual "scene-based set compatibility reasoning" with
the cross-domain visual similarity input and auto-regressive complementary item
generation. We introduce a "Flexible Bidirectional Transformer (FBT)"
consisting of an encoder with flexible masking, a category prediction arm, and
an auto-regressive visual embedding prediction arm. And the inputs for FBT are
cross-domain visual similarity invariant embeddings, making this framework
quite generalizable. Furthermore, our proposed FBT model learns the
inter-object compatibility from a large set of scene images in a
self-supervised way. Compared with the SOTA methods, this approach achieves up
to 5.3% and 9.6% in FITB score and 22.3% and 31.8% SFID improvement on fashion
and furniture, respectively.
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 17:55:54 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Wang",
"Xijun",
""
],
[
"Liang",
"Anqi",
""
],
[
"Liang",
"Junbang",
""
],
[
"Lin",
"Ming",
""
],
[
"Lou",
"Yu",
""
],
[
"Yang",
"Shan",
""
]
] |
new_dataset
| 0.999452 |
2308.09126
|
Karttikeya Mangalam
|
Karttikeya Mangalam, Raiymbek Akshulakov, Jitendra Malik
|
EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language
Understanding
|
https://egoschema.github.io/
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce EgoSchema, a very long-form video question-answering dataset,
and benchmark to evaluate long video understanding capabilities of modern
vision and language systems. Derived from Ego4D, EgoSchema consists of over
5000 human curated multiple choice question answer pairs, spanning over 250
hours of real video data, covering a very broad range of natural human activity
and behavior. For each question, EgoSchema requires the correct answer to be
selected between five given options based on a three-minute-long video clip.
While some prior works have proposed video datasets with long clip lengths, we
posit that merely the length of the video clip does not truly capture the
temporal difficulty of the video task that is being considered. To remedy this,
we introduce temporal certificate sets, a general notion for capturing the
intrinsic temporal understanding length associated with a broad range of video
understanding tasks & datasets. Based on this metric, we find EgoSchema to have
intrinsic temporal lengths over 5.7x longer than the second closest dataset and
10x to 100x longer than any other video understanding dataset. Further, our
evaluation of several current state-of-the-art video and language models shows
them to be severely lacking in long-term video understanding capabilities. Even
models with several billions of parameters achieve QA accuracy less than 33%
(random is 20%) on the EgoSchema multi-choice question answering task, while
humans achieve about 76% accuracy. We posit that \name{}{}, with its long
intrinsic temporal structures and diverse complexity, would serve as a valuable
evaluation probe for developing effective long-term video understanding systems
in the future. Data and Zero-shot model evaluation code are open-sourced for
both public and commercial use under the Ego4D license at
http://egoschema.github.io
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 17:59:59 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Mangalam",
"Karttikeya",
""
],
[
"Akshulakov",
"Raiymbek",
""
],
[
"Malik",
"Jitendra",
""
]
] |
new_dataset
| 0.999765 |
2308.09249
|
Dongxu Lyu
|
Dongxu Lyu, Zhenyu Li, Yuzhou Chen, Jinming Zhang, Ningyi Xu, Guanghui
He
|
SpOctA: A 3D Sparse Convolution Accelerator with Octree-Encoding-Based
Map Search and Inherent Sparsity-Aware Processing
|
Accepted to ICCAD 2023
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point-cloud-based 3D perception has attracted great attention in various
applications including robotics, autonomous driving and AR/VR. In particular,
the 3D sparse convolution (SpConv) network has emerged as one of the most
popular backbones due to its excellent performance. However, it poses severe
challenges to real-time perception on general-purpose platforms, such as
lengthy map search latency, high computation cost, and enormous memory
footprint. In this paper, we propose SpOctA, a SpConv accelerator that enables
high-speed and energy-efficient point cloud processing. SpOctA parallelizes the
map search by utilizing algorithm-architecture co-optimization based on octree
encoding, thereby achieving 8.8-21.2x search speedup. It also attenuates the
heavy computational workload by exploiting inherent sparsity of each voxel,
which eliminates computation redundancy and saves 44.4-79.1% processing
latency. To optimize on-chip memory management, a SpConv-oriented non-uniform
caching strategy is introduced to reduce external memory access energy by 57.6%
on average. Implemented on a 40nm technology and extensively evaluated on
representative benchmarks, SpOctA rivals the state-of-the-art SpConv
accelerators by 1.1-6.9x speedup with 1.5-3.1x energy efficiency improvement.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 02:23:54 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Lyu",
"Dongxu",
""
],
[
"Li",
"Zhenyu",
""
],
[
"Chen",
"Yuzhou",
""
],
[
"Zhang",
"Jinming",
""
],
[
"Xu",
"Ningyi",
""
],
[
"He",
"Guanghui",
""
]
] |
new_dataset
| 0.993946 |
2308.09284
|
Shaleen Deep
|
Paraschos Koutris, Shaleen Deep
|
The Fine-Grained Complexity of CFL Reachability
|
Appeared in POPL 2023. Please note the erratum on the first page
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Many problems in static program analysis can be modeled as the context-free
language (CFL) reachability problem on directed labeled graphs. The CFL
reachability problem can be generally solved in time $O(n^3)$, where $n$ is the
number of vertices in the graph, with some specific cases that can be solved
faster. In this work, we ask the following question: given a specific CFL, what
is the exact exponent in the monomial of the running time? In other words, for
which cases do we have linear, quadratic or cubic algorithms, and are there
problems with intermediate runtimes? This question is inspired by recent
efforts to classify classic problems in terms of their exact polynomial
complexity, known as {\em fine-grained complexity}. Although recent efforts
have shown some conditional lower bounds (mostly for the class of combinatorial
algorithms), a general picture of the fine-grained complexity landscape for CFL
reachability is missing.
Our main contribution is lower bound results that pinpoint the exact running
time of several classes of CFLs or specific CFLs under widely believed lower
bound conjectures (Boolean Matrix Multiplication and $k$-Clique). We
particularly focus on the family of Dyck-$k$ languages (which are strings with
well-matched parentheses), a fundamental class of CFL reachability problems. We
present new lower bounds for the case of sparse input graphs where the number
of edges $m$ is the input parameter, a common setting in the database
literature. For this setting, we show a cubic lower bound for Andersen's
Pointer Analysis which significantly strengthens prior known results.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 03:52:27 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Koutris",
"Paraschos",
""
],
[
"Deep",
"Shaleen",
""
]
] |
new_dataset
| 0.993651 |
2308.09290
|
Ritam Majumdar
|
Ritam Majumdar, Vishal Jadhav, Anirudh Deodhar, Shirish Karande,
Lovekesh Vig, Venkataramana Runkana
|
HyperLoRA for PDEs
|
8 pages, 4 figures, 3 Tables
| null | null | null |
cs.LG cs.AI cs.CE math.AP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Physics-informed neural networks (PINNs) have been widely used to develop
neural surrogates for solutions of Partial Differential Equations. A drawback
of PINNs is that they have to be retrained with every change in
initial-boundary conditions and PDE coefficients. The Hypernetwork, a
model-based meta learning technique, takes in a parameterized task embedding as
input and predicts the weights of PINN as output. Predicting weights of a
neural network however, is a high-dimensional regression problem, and
hypernetworks perform sub-optimally while predicting parameters for large base
networks. To circumvent this issue, we use a low ranked adaptation (LoRA)
formulation to decompose every layer of the base network into low-ranked
tensors and use hypernetworks to predict the low-ranked tensors. Despite the
reduced dimensionality of the resulting weight-regression problem, LoRA-based
Hypernetworks violate the underlying physics of the given task. We demonstrate
that the generalization capabilities of LoRA-based hypernetworks drastically
improve when trained with an additional physics-informed loss component
(HyperPINN) to satisfy the governing differential equations. We observe that
LoRA-based HyperPINN training allows us to learn fast solutions for
parameterized PDEs like Burger's equation and Navier Stokes: Kovasznay flow,
while having an 8x reduction in prediction parameters on average without
compromising on accuracy when compared to all other baselines.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 04:29:48 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Majumdar",
"Ritam",
""
],
[
"Jadhav",
"Vishal",
""
],
[
"Deodhar",
"Anirudh",
""
],
[
"Karande",
"Shirish",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Runkana",
"Venkataramana",
""
]
] |
new_dataset
| 0.996465 |
2308.09298
|
Yusheng Liu
|
Yusheng Liu, Rui Xin, Tao Yang and Lisheng Wang
|
Inferior Alveolar Nerve Segmentation in CBCT images using
Connectivity-Based Selective Re-training
|
technical paper for Miccai ToothFairy2023 Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Inferior Alveolar Nerve (IAN) canal detection in CBCT is an important step in
many dental and maxillofacial surgery applications to prevent irreversible
damage to the nerve during the procedure.The ToothFairy2023 Challenge aims to
establish a 3D maxillofacial dataset consisting of all sparse labels and
partial dense labels, and improve the ability of automatic IAN segmentation. In
this work, in order to avoid the negative impact brought by sparse labeling, we
transform the mixed supervised problem into a semi-supervised problem. Inspired
by self-training via pseudo labeling, we propose a selective re-training
framework based on IAN connectivity. Our method is quantitatively evaluated on
the ToothFairy verification cases, achieving the dice similarity coefficient
(DSC) of 0.7956, and 95\% hausdorff distance (HD95) of 4.4905, and wining the
champion in the competition. Code is available at
https://github.com/GaryNico517/SSL-IAN-Retraining.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 04:48:23 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Liu",
"Yusheng",
""
],
[
"Xin",
"Rui",
""
],
[
"Yang",
"Tao",
""
],
[
"Wang",
"Lisheng",
""
]
] |
new_dataset
| 0.999518 |
2308.09329
|
Yunzhi Qiu
|
Yunzhi Qiu, Xiaokun Zhang, Weiwei Wang, Tongxuan Zhang, Bo Xu, Hongfei
Lin
|
KESDT: knowledge enhanced shallow and deep Transformer for detecting
adverse drug reactions
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adverse drug reaction (ADR) detection is an essential task in the medical
field, as ADRs have a gravely detrimental impact on patients' health and the
healthcare system. Due to a large number of people sharing information on
social media platforms, an increasing number of efforts focus on social media
data to carry out effective ADR detection. Despite having achieved impressive
performance, the existing methods of ADR detection still suffer from three main
challenges. Firstly, researchers have consistently ignored the interaction
between domain keywords and other words in the sentence. Secondly, social media
datasets suffer from the challenges of low annotated data. Thirdly, the issue
of sample imbalance is commonly observed in social media datasets. To solve
these challenges, we propose the Knowledge Enhanced Shallow and Deep
Transformer(KESDT) model for ADR detection. Specifically, to cope with the
first issue, we incorporate the domain keywords into the Transformer model
through a shallow fusion manner, which enables the model to fully exploit the
interactive relationships between domain keywords and other words in the
sentence. To overcome the low annotated data, we integrate the synonym sets
into the Transformer model through a deep fusion manner, which expands the size
of the samples. To mitigate the impact of sample imbalance, we replace the
standard cross entropy loss function with the focal loss function for effective
model training. We conduct extensive experiments on three public datasets
including TwiMed, Twitter, and CADEC. The proposed KESDT outperforms
state-of-the-art baselines on F1 values, with relative improvements of 4.87%,
47.83%, and 5.73% respectively, which demonstrates the effectiveness of our
proposed KESDT.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 06:10:11 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Qiu",
"Yunzhi",
""
],
[
"Zhang",
"Xiaokun",
""
],
[
"Wang",
"Weiwei",
""
],
[
"Zhang",
"Tongxuan",
""
],
[
"Xu",
"Bo",
""
],
[
"Lin",
"Hongfei",
""
]
] |
new_dataset
| 0.968437 |
2308.09332
|
Yuhao Cheng
|
Yuhao Cheng, Siru Zhang, Yiqiang Yan, Rong Chen, Yun Zhang
|
LSCD: A Large-Scale Screen Content Dataset for Video Compression
| null | null | null | null |
cs.MM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimedia compression allows us to watch videos, see pictures and hear
sounds within a limited bandwidth, which helps the flourish of the internet.
During the past decades, multimedia compression has achieved great success
using hand-craft features and systems. With the development of artificial
intelligence and video compression, there emerges a lot of research work
related to using the neural network on the video compression task to get rid of
the complicated system. Not only producing the advanced algorithms, but
researchers also spread the compression to different content, such as User
Generated Content(UGC). With the rapid development of mobile devices, screen
content videos become an important part of multimedia data. In contrast, we
find community lacks a large-scale dataset for screen content video
compression, which impedes the fast development of the corresponding
learning-based algorithms. In order to fulfill this blank and accelerate the
research of this special type of videos, we propose the Large-scale Screen
Content Dataset(LSCD), which contains 714 source sequences. Meanwhile, we
provide the analysis of the proposed dataset to show some features of screen
content videos, which will help researchers have a better understanding of how
to explore new algorithms. Besides collecting and post-processing the data to
organize the dataset, we also provide a benchmark containing the performance of
both traditional codec and learning-based methods.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 06:27:35 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Cheng",
"Yuhao",
""
],
[
"Zhang",
"Siru",
""
],
[
"Yan",
"Yiqiang",
""
],
[
"Chen",
"Rong",
""
],
[
"Zhang",
"Yun",
""
]
] |
new_dataset
| 0.999861 |
2308.09343
|
Dario Rodighiero
|
Dario Rodighiero, Lins Derry, Douglas Duhaime, Jordan Kruguer,
Maximilian C. Mueller, Christopher Pietsch, Jeffrey T. Schnapp, Jeff Steward
|
Surprise machines: revealing Harvard Art Museums' image collection
|
14 pages and 7 figures
|
IDJ 27 (1): 21-34 (2022)
|
10.1075/idj.22013.rod
| null |
cs.CY cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Surprise Machines is a project of experimental museology that sets out to
visualize the entire image collection of the Harvard Art Museums, intending to
open up unexpected vistas on more than 200,000 objects usually inaccessible to
visitors. Part of the exhibition Curatorial A(i)gents organized by metaLAB (at)
Harvard, the project explores the limits of artificial intelligence to display
a large set of images and create surprise among visitors. To achieve such a
feeling of surprise, a choreographic interface was designed to connect the
audience's movement with several unique views of the collection.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 07:05:30 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Rodighiero",
"Dario",
""
],
[
"Derry",
"Lins",
""
],
[
"Duhaime",
"Douglas",
""
],
[
"Kruguer",
"Jordan",
""
],
[
"Mueller",
"Maximilian C.",
""
],
[
"Pietsch",
"Christopher",
""
],
[
"Schnapp",
"Jeffrey T.",
""
],
[
"Steward",
"Jeff",
""
]
] |
new_dataset
| 0.987259 |
2308.09370
|
Yixuan Li
|
Yixuan Li, Huaping Liu, Qiang Jin, Miaomiao Cai, Peng Li
|
TrOMR:Transformer-Based Polyphonic Optical Music Recognition
| null |
ICASSP 2023 - 2023 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP)
| null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical Music Recognition (OMR) is an important technology in music and has
been researched for a long time. Previous approaches for OMR are usually based
on CNN for image understanding and RNN for music symbol classification. In this
paper, we propose a transformer-based approach with excellent global perceptual
capability for end-to-end polyphonic OMR, called TrOMR. We also introduce a
novel consistency loss function and a reasonable approach for data annotation
to improve recognition accuracy for complex music scores. Extensive experiments
demonstrate that TrOMR outperforms current OMR methods, especially in
real-world scenarios. We also develop a TrOMR system and build a camera scene
dataset for full-page music scores in real-world. The code and datasets will be
made available for reproducibility.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 08:06:27 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Li",
"Yixuan",
""
],
[
"Liu",
"Huaping",
""
],
[
"Jin",
"Qiang",
""
],
[
"Cai",
"Miaomiao",
""
],
[
"Li",
"Peng",
""
]
] |
new_dataset
| 0.988901 |
2308.09428
|
Firas Ben Ramdhane
|
Firas Ben Ramdhane (I2M, AMU), Pierre Guillon (I2M, AMU, CNRS)
|
Dill maps in the Weyl-like space associated to the Levenshtein distance
| null |
Automata 2023, IFIP Working Group 1.5, Aug 2023, Trieste (Italy),
Italy
| null | null |
cs.DM math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Weyl pseudo-metric is a shift-invariant pseudo-metric over the set of
infinite sequences, that enjoys interesting properties and is suitable for
studying the dynamics of cellular automata. It corresponds to the asymptotic
behavior of the Hamming distance on longer and longer subwords. In this paper
we characterize well-defined dill maps (which are a generalization of cellular
automata and substitutions) in the Weyl space and the sliding Feldman-Katok
space where the Hamming distance appearing in the Weyl pseudo-metrics is
replaced by the Levenshtein distance.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 09:56:31 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Ramdhane",
"Firas Ben",
"",
"I2M, AMU"
],
[
"Guillon",
"Pierre",
"",
"I2M, AMU, CNRS"
]
] |
new_dataset
| 0.999428 |
2308.09445
|
Jose Cubero-Cascante
|
Jos\'e Cubero-Cascante, Niko Zurstra{\ss}en, J\"orn N\"oller, Rainer
Leupers, and Jan Moritz Joseph
|
parti-gem5: gem5's Timing Mode Parallelised
|
17 pages, 9 figures, SAMOS Conference XXIII
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detailed timing models are indispensable tools for the design space
exploration of Multiprocessor Systems on Chip (MPSoCs). As core counts continue
to increase, the complexity in memory hierarchies and interconnect topologies
is also growing, making accurate predictions of design decisions more
challenging than ever. In this context, the open-source Full System Simulator
(FSS) gem5 is a popular choice for MPSoC design space exploration, thanks to
its flexibility and robust set of detailed timing models. However, its
single-threaded simulation kernel severely hampers its throughput. To address
this challenge, we introduce parti-gem5, an extension of gem5 that enables
parallel timing simulations on modern multi-core simulation hosts. Unlike
previous works, parti-gem5 supports gem5's timing mode, the O3CPU, and Ruby's
custom cache and interconnect models. Compared to reference single-thread
simulations, we achieved speedups of up to 42.7x when simulating a 120-core ARM
MPSoC on a 64-core x86-64 host system. While our method introduces timing
deviations, the error in total simulated time is below 15% in most cases.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 10:18:46 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Cubero-Cascante",
"José",
""
],
[
"Zurstraßen",
"Niko",
""
],
[
"Nöller",
"Jörn",
""
],
[
"Leupers",
"Rainer",
""
],
[
"Joseph",
"Jan Moritz",
""
]
] |
new_dataset
| 0.986159 |
2308.09458
|
Joao Ferreira
|
Nuno Saavedra, Jo\~ao Gon\c{c}alves, Miguel Henriques, Jo\~ao F.
Ferreira, and Alexandra Mendes
|
Polyglot Code Smell Detection for Infrastructure as Code with GLITCH
| null | null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents GLITCH, a new technology-agnostic framework that enables
automated polyglot code smell detection for Infrastructure as Code scripts.
GLITCH uses an intermediate representation on which different code smell
detectors can be defined. It currently supports the detection of nine security
smells and nine design & implementation smells in scripts written in Ansible,
Chef, Docker, Puppet, or Terraform. Studies conducted with GLITCH not only show
that GLITCH can reduce the effort of writing code smell analyses for multiple
IaC technologies, but also that it has higher precision and recall than current
state-of-the-art tools. A video describing and demonstrating GLITCH is
available at: https://youtu.be/E4RhCcZjWbk
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 10:44:47 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Saavedra",
"Nuno",
""
],
[
"Gonçalves",
"João",
""
],
[
"Henriques",
"Miguel",
""
],
[
"Ferreira",
"João F.",
""
],
[
"Mendes",
"Alexandra",
""
]
] |
new_dataset
| 0.998642 |
2308.09489
|
Guofa Cai
|
Kengyuan Xie, Guofa Cai, Jiguang He, Georges Kaddoum
|
STAR-RIS Aided MISO SWIPT-NOMA System with Energy Buffer: Performance
Analysis and Optimization
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a simultaneous transmitting and reflecting
reconfigurable intelligent surface (STAR-RIS) and energy buffer aided
multiple-input single-output (MISO) simultaneous wireless information and power
transfer (SWIPT) non-orthogonal multiple access (NOMA) system, which consists
of a STAR-RIS, an access point (AP), and reflection users and transmission
users with energy buffers. In the proposed system, the multi-antenna AP can
transmit information and energy to several single-antenna reflection and
transmission users simultaneously in a NOMA fashion, where the power transfer
and information transmission states of the users are modeled using Markov
chains. The reflection and transmission users harvest and store the energy in
energy buffers as additional power supplies. The power outage probability,
information outage probability, sum throughput, and joint outage probability
closed-form expressions of the proposed system are derived over Nakagami-m
fading channels, which are validated via simulations. Results demonstrate that
the proposed system achieves better performance in comparison to the STAR-RIS
aided MISO SWIPT-NOMA buffer-less, conventional RIS and energy buffer aided
MISO SWIPT-NOMA, and STAR-RIS and energy buffer aided MISO SWIPT-time-division
multiple access (TDMA) systems. Furthermore, a particle swarm optimization
based power allocation (PSO-PA) algorithm is designed to maximize the sum
throughput with a constraint on the joint outage probability. Simulation
results illustrate that the proposed PSO-PA algorithm can achieve an improved
sum throughput performance of the proposed system.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 11:56:43 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Xie",
"Kengyuan",
""
],
[
"Cai",
"Guofa",
""
],
[
"He",
"Jiguang",
""
],
[
"Kaddoum",
"Georges",
""
]
] |
new_dataset
| 0.960293 |
2308.09501
|
\v{S}imon Schierreich
|
\v{S}imon Schierreich
|
Anonymous Refugee Housing with Upper-Bounds
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knop and Schierreich [AAMAS '23] recently introduced a novel model of refugee
housing and specifically asked for the computational complexity picture of the
following variant. Given a topology modelled as an undirected graph, a set of
inhabitants, a number of refugees $R$, an assignment of inhabitants to houses
of the topology, and an upper-bound for every inhabitant, find a set $\pi$ of
unoccupied houses of size $R$ intended such that the number of refugees in the
neighbourhood of every inhabitant is at most its upper-bound. If such a set
$\pi$ exists, we say that the instance admits an inhabitant-respecting housing.
In this paper, we show that the existence of inhabitant-respecting housing is
not guaranteed even under several further restrictions of the upper-bounds.
Then, we focus on the computational complexity of deciding whether
inhabitant-respecting housing exists. To this end, we provide tractable
algorithms for several restrictions of the topology. We complement these
results with appropriate hardness results and running-time lower-bounds.
Furthermore, we introduce a relaxed (or approximate) version of the
inhabitant-respecting housing, where we allow at most $t$ upper-bounds to be
exceeded.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 12:16:36 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Schierreich",
"Šimon",
""
]
] |
new_dataset
| 0.999577 |
2308.09507
|
Jon Arrizabalaga
|
Jon Arrizabalaga, Markus Ryll
|
Pose-Following with Dual Quaternions
|
This paper has been accepted for publication at the IEEE Conference
on Decision and Control (CDC), 2023. Copyright @ IEEE
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work focuses on pose-following, a variant of path-following in which the
goal is to steer the system's position and attitude along a path with a moving
frame attached to it. Full body motion control, while accounting for the
additional freedom to self-regulate the progress along the path, is an
appealing trade-off. Towards this end, we extend the well-established dual
quaternion-based pose-tracking method into a pose-following control law.
Specifically, we derive the equations of motion for the full pose error between
the geometric reference and the rigid body in the form of a dual quaternion and
dual twist. Subsequently, we formulate an almost globally asymptotically stable
control law. The global attractivity of the presented approach is validated in
a spatial example, while its benefits over pose-tracking are showcased through
a planar case-study.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 12:34:14 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Arrizabalaga",
"Jon",
""
],
[
"Ryll",
"Markus",
""
]
] |
new_dataset
| 0.997409 |
2308.09512
|
Lipeng Zhu
|
Zhenyu Xiao, Xiangyu Pi, Lipeng Zhu, Xiang-Gen Xia, and Rui Zhang
|
Multiuser Communications with Movable-Antenna Base Station: Joint
Antenna Positioning, Receive Combining, and Power Control
|
arXiv admin note: substantial text overlap with arXiv:2308.05546
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Movable antenna (MA) is an emerging technology which enables a local movement
of the antenna in the transmitter/receiver region for improving the channel
condition and communication performance. In this paper, we study the deployment
of multiple MAs at the base station (BS) for enhancing the multiuser
communication performance. First, we model the multiuser channel in the uplink
to characterize the wireless channel variation due to MAs' movements at the BS.
Then, an optimization problem is formulated to maximize the minimum achievable
rate among multiple users for MA-aided uplink multiuser communications by
jointly optimizing the MAs' positions, their receive combining at the BS, and
the transmit power of users, under the constraints of finite moving region for
MAs, minimum inter-MA distance, and maximum transmit power of each user. To
solve this challenging non-convex optimization problem, a two-loop iterative
algorithm is proposed by leveraging the particle swarm optimization (PSO)
method. Specifically, the outer-loop updates the positions of a set of
particles, where each particle's position represents one realization of the
antenna position vector (APV) of all MAs. The inner-loop implements the fitness
evaluation for each particle in terms of the max-min achievable rate of
multiple users with its corresponding APV, where the receive combining matrix
of the BS and the transmit power of each user are optimized by applying the
block coordinate descent (BCD) technique. Simulation results show that the
antenna position optimization for MAs-aided BSs can significantly improve the
rate performance as compared to conventional BSs with fixed-position antennas
(FPAs).
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 12:44:33 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Xiao",
"Zhenyu",
""
],
[
"Pi",
"Xiangyu",
""
],
[
"Zhu",
"Lipeng",
""
],
[
"Xia",
"Xiang-Gen",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.996738 |
2308.09514
|
Miguel Sarabia
|
Miguel Sarabia, Elena Menyaylenko, Alessandro Toso, Skyler Seto,
Zakaria Aldeneh, Shadi Pirhosseinloo, Luca Zappella, Barry-John Theobald,
Nicholas Apostoloff, Jonathan Sheaffer
|
Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning
| null |
Proceedings of INTERSPEECH (2023), pp. 3724-3728
|
10.21437/Interspeech.2023-2117
| null |
cs.SD cs.AI cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Spatial LibriSpeech, a spatial audio dataset with over 650 hours
of 19-channel audio, first-order ambisonics, and optional distractor noise.
Spatial LibriSpeech is designed for machine learning model training, and it
includes labels for source position, speaking direction, room acoustics and
geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples
with 200k+ simulated acoustic conditions across 8k+ synthetic rooms. To
demonstrate the utility of our dataset, we train models on four spatial audio
tasks, resulting in a median absolute error of 6.60{\deg} on 3D source
localization, 0.43m on distance, 90.66ms on T30, and 2.74dB on DRR estimation.
We show that the same models generalize well to widely-used evaluation
datasets, e.g., obtaining a median absolute error of 12.43{\deg} on 3D source
localization on TUT Sound Events 2018, and 157.32ms on T30 estimation on ACE
Challenge.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 12:45:32 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Sarabia",
"Miguel",
""
],
[
"Menyaylenko",
"Elena",
""
],
[
"Toso",
"Alessandro",
""
],
[
"Seto",
"Skyler",
""
],
[
"Aldeneh",
"Zakaria",
""
],
[
"Pirhosseinloo",
"Shadi",
""
],
[
"Zappella",
"Luca",
""
],
[
"Theobald",
"Barry-John",
""
],
[
"Apostoloff",
"Nicholas",
""
],
[
"Sheaffer",
"Jonathan",
""
]
] |
new_dataset
| 0.999734 |
2308.09516
|
Yoosof Mashayekhi
|
Yoosof Mashayekhi, Bo Kang, Jefrey Lijffijt, Tijl De Bie
|
ReCon: Reducing Congestion in Job Recommendation using Optimal Transport
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Recommender systems may suffer from congestion, meaning that there is an
unequal distribution of the items in how often they are recommended. Some items
may be recommended much more than others. Recommenders are increasingly used in
domains where items have limited availability, such as the job market, where
congestion is especially problematic: Recommending a vacancy -- for which
typically only one person will be hired -- to a large number of job seekers may
lead to frustration for job seekers, as they may be applying for jobs where
they are not hired. This may also leave vacancies unfilled and result in job
market inefficiency.
We propose a novel approach to job recommendation called ReCon, accounting
for the congestion problem. Our approach is to use an optimal transport
component to ensure a more equal spread of vacancies over job seekers, combined
with a job recommendation model in a multi-objective optimization problem. We
evaluated our approach on two real-world job market datasets. The evaluation
results show that ReCon has good performance on both congestion-related (e.g.,
Congestion) and desirability (e.g., NDCG) measures.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 12:49:25 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Mashayekhi",
"Yoosof",
""
],
[
"Kang",
"Bo",
""
],
[
"Lijffijt",
"Jefrey",
""
],
[
"De Bie",
"Tijl",
""
]
] |
new_dataset
| 0.999167 |
2308.09536
|
Akihisa Yamada
|
Akihisa Yamada, Benjamin Lucien Kaminski, Dieter Hofbauer, Fred
Mesnard, \'Etienne Payet
|
The 19th International Workshop on Termination (WST 2023): Preface,
Invited Talk Abstract, and Tool Descriptions
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This report contains the proceedings of the 19th International Workshop on
Termination (WST 2023), which was held in Obergurgl during August 24--25 as
part of Obergurgl Summer on Rewriting (OSR 2023).
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 17:32:27 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Yamada",
"Akihisa",
""
],
[
"Kaminski",
"Benjamin Lucien",
""
],
[
"Hofbauer",
"Dieter",
""
],
[
"Mesnard",
"Fred",
""
],
[
"Payet",
"Étienne",
""
]
] |
new_dataset
| 0.965264 |
2308.09547
|
Luana Martins
|
Luana Martins, Valeria Pontillo, Heitor Costa, Filomena Ferrucci,
Fabio Palomba, Ivan Machado
|
Test Code Refactoring Unveiled: Where and How Does It Affect Test Code
Quality and Effectiveness?
|
9 pages, 39th IEEE International Conference on Software Maintenance
and Evolution (ICSME) - Registered Report
| null | null | null |
cs.SE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Context. Refactoring has been widely investigated in the past in relation to
production code quality, yet still little is known on how developers apply
refactoring on test code. Specifically, there is still a lack of investigation
into how developers typically refactor test code and its effects on test code
quality and effectiveness. Objective. This paper presents a research agenda
aimed to bridge this gap of knowledge by investigating (1) whether test
refactoring actually targets test classes affected by quality and effectiveness
concerns and (2) the extent to which refactoring contributes to the improvement
of test code quality and effectiveness. Method. We plan to conduct an
exploratory mining software repository study to collect test refactoring data
of open-source Java projects from GitHub and statistically analyze them in
combination with quality metrics, test smells, and code/mutation coverage
indicators. Furthermore, we will measure how refactoring operations impact the
quality and effectiveness of test code.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 13:25:53 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Martins",
"Luana",
""
],
[
"Pontillo",
"Valeria",
""
],
[
"Costa",
"Heitor",
""
],
[
"Ferrucci",
"Filomena",
""
],
[
"Palomba",
"Fabio",
""
],
[
"Machado",
"Ivan",
""
]
] |
new_dataset
| 0.998748 |
2308.09568
|
Shuhui Wu
|
Shuhui Wu, Zengming Tang, Zongyi Guo, Weiwei Zhang, Baoliang Cui,
Haihong Tang, Weiming Lu
|
PUMGPT: A Large Vision-Language Model for Product Understanding
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments of multi-modal large language models have demonstrated
its strong ability in solving vision-language tasks. In this paper, we focus on
the product understanding task, which plays an essential role in enhancing
online shopping experience. Product understanding task includes a variety of
sub-tasks, which require models to respond diverse queries based on multi-modal
product information. Traditional methods design distinct model architectures
for each sub-task. On the contrary, we present PUMGPT, a large vision-language
model aims at unifying all product understanding tasks under a singular model
structure. To bridge the gap between vision and text representations, we
propose Layer-wise Adapters (LA), an approach that provides enhanced alignment
with fewer visual tokens and enables parameter-efficient fine-tuning. Moreover,
the inherent parameter-efficient fine-tuning ability allows PUMGPT to be
readily adapted to new product understanding tasks and emerging products. We
design instruction templates to generate diverse product instruction datasets.
Simultaneously, we utilize open-domain datasets during training to improve the
performance of PUMGPT and its generalization ability. Through extensive
evaluations, PUMGPT demonstrates its superior performance across multiple
product understanding tasks, including product captioning, category
question-answering, attribute extraction, attribute question-answering, and
even free-form question-answering about products.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 14:01:37 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Wu",
"Shuhui",
""
],
[
"Tang",
"Zengming",
""
],
[
"Guo",
"Zongyi",
""
],
[
"Zhang",
"Weiwei",
""
],
[
"Cui",
"Baoliang",
""
],
[
"Tang",
"Haihong",
""
],
[
"Lu",
"Weiming",
""
]
] |
new_dataset
| 0.999511 |
2308.09597
|
Cheng Li
|
Cheng Li, Ziang Leng, Chenxi Yan, Junyi Shen, Hao Wang, Weishi MI,
Yaying Fei, Xiaoyang Feng, Song Yan, HaoSheng Wang, Linkang Zhan, Yaokai Jia,
Pingyu Wu, Haozhen Sun
|
ChatHaruhi: Reviving Anime Character in Reality via Large Language Model
|
v1 - First version of techique report
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Role-playing chatbots built on large language models have drawn interest, but
better techniques are needed to enable mimicking specific fictional characters.
We propose an algorithm that controls language models via an improved prompt
and memories of the character extracted from scripts. We construct ChatHaruhi,
a dataset covering 32 Chinese / English TV / anime characters with over 54k
simulated dialogues. Both automatic and human evaluations show our approach
improves role-playing ability over baselines. Code and data are available at
https://github.com/LC1332/Chat-Haruhi-Suzumiya .
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 14:50:25 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Li",
"Cheng",
""
],
[
"Leng",
"Ziang",
""
],
[
"Yan",
"Chenxi",
""
],
[
"Shen",
"Junyi",
""
],
[
"Wang",
"Hao",
""
],
[
"MI",
"Weishi",
""
],
[
"Fei",
"Yaying",
""
],
[
"Feng",
"Xiaoyang",
""
],
[
"Yan",
"Song",
""
],
[
"Wang",
"HaoSheng",
""
],
[
"Zhan",
"Linkang",
""
],
[
"Jia",
"Yaokai",
""
],
[
"Wu",
"Pingyu",
""
],
[
"Sun",
"Haozhen",
""
]
] |
new_dataset
| 0.999837 |
2308.09611
|
Yuanhao Zhai
|
Yuanhao Zhai, Mingzhen Huang, Tianyu Luan, Lu Dong, Ifeoma Nwogu,
Siwei Lyu, David Doermann, Junsong Yuan
|
Language-guided Human Motion Synthesis with Atomic Actions
|
Accepted to ACM MM 2023, code: https://github.com/yhZhai/ATOM
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Language-guided human motion synthesis has been a challenging task due to the
inherent complexity and diversity of human behaviors. Previous methods face
limitations in generalization to novel actions, often resulting in unrealistic
or incoherent motion sequences. In this paper, we propose ATOM (ATomic mOtion
Modeling) to mitigate this problem, by decomposing actions into atomic actions,
and employing a curriculum learning strategy to learn atomic action
composition. First, we disentangle complex human motions into a set of atomic
actions during learning, and then assemble novel actions using the learned
atomic actions, which offers better adaptability to new actions. Moreover, we
introduce a curriculum learning training strategy that leverages masked motion
modeling with a gradual increase in the mask ratio, and thus facilitates atomic
action assembly. This approach mitigates the overfitting problem commonly
encountered in previous methods while enforcing the model to learn better
motion representations. We demonstrate the effectiveness of ATOM through
extensive experiments, including text-to-motion and action-to-motion synthesis
tasks. We further illustrate its superiority in synthesizing plausible and
coherent text-guided human motion sequences.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 15:13:03 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zhai",
"Yuanhao",
""
],
[
"Huang",
"Mingzhen",
""
],
[
"Luan",
"Tianyu",
""
],
[
"Dong",
"Lu",
""
],
[
"Nwogu",
"Ifeoma",
""
],
[
"Lyu",
"Siwei",
""
],
[
"Doermann",
"David",
""
],
[
"Yuan",
"Junsong",
""
]
] |
new_dataset
| 0.998006 |
2308.09616
|
Shuailin Li
|
Xiaohui Jiang, Shuailin Li, Yingfei Liu, Shihao Wang, Fan Jia, Tiancai
Wang, Lijin Han, Xiangyu Zhang
|
Far3D: Expanding the Horizon for Surround-view 3D Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently 3D object detection from surround-view images has made notable
advancements with its low deployment cost. However, most works have primarily
focused on close perception range while leaving long-range detection less
explored. Expanding existing methods directly to cover long distances poses
challenges such as heavy computation costs and unstable convergence. To address
these limitations, this paper proposes a novel sparse query-based framework,
dubbed Far3D. By utilizing high-quality 2D object priors, we generate 3D
adaptive queries that complement the 3D global queries. To efficiently capture
discriminative features across different views and scales for long-range
objects, we introduce a perspective-aware aggregation module. Additionally, we
propose a range-modulated 3D denoising approach to address query error
propagation and mitigate convergence issues in long-range tasks. Significantly,
Far3D demonstrates SoTA performance on the challenging Argoverse 2 dataset,
covering a wide range of 150 meters, surpassing several LiDAR-based approaches.
Meanwhile, Far3D exhibits superior performance compared to previous methods on
the nuScenes dataset. The code will be available soon.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 15:19:17 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Jiang",
"Xiaohui",
""
],
[
"Li",
"Shuailin",
""
],
[
"Liu",
"Yingfei",
""
],
[
"Wang",
"Shihao",
""
],
[
"Jia",
"Fan",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Han",
"Lijin",
""
],
[
"Zhang",
"Xiangyu",
""
]
] |
new_dataset
| 0.95162 |
2308.09618
|
Lojze \v{Z}ust
|
Lojze \v{Z}ust, Janez Per\v{s}, Matej Kristan
|
LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and
Benchmark
|
ICCV 2023, 9 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The progress in maritime obstacle detection is hindered by the lack of a
diverse dataset that adequately captures the complexity of general maritime
environments. We present the first maritime panoptic obstacle detection
benchmark LaRS, featuring scenes from Lakes, Rivers and Seas. Our major
contribution is the new dataset, which boasts the largest diversity in
recording locations, scene types, obstacle classes, and acquisition conditions
among the related datasets. LaRS is composed of over 4000 per-pixel labeled key
frames with nine preceding frames to allow utilization of the temporal texture,
amounting to over 40k frames. Each key frame is annotated with 8 thing, 3 stuff
classes and 19 global scene attributes. We report the results of 27 semantic
and panoptic segmentation methods, along with several performance insights and
future research directions. To enable objective evaluation, we have implemented
an online evaluation server. The LaRS dataset, evaluation toolkit and benchmark
are publicly available at: https://lojzezust.github.io/lars-dataset
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 15:21:15 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Žust",
"Lojze",
""
],
[
"Perš",
"Janez",
""
],
[
"Kristan",
"Matej",
""
]
] |
new_dataset
| 0.999842 |
2308.09632
|
Korbinian Hagn
|
Oliver Grau and Korbinian Hagn
|
VALERIE22 -- A photorealistic, richly metadata annotated dataset of
urban environments
| null | null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The VALERIE tool pipeline is a synthetic data generator developed with the
goal to contribute to the understanding of domain-specific factors that
influence perception performance of DNNs (deep neural networks). This work was
carried out under the German research project KI Absicherung in order to
develop a methodology for the validation of DNNs in the context of pedestrian
detection in urban environments for automated driving. The VALERIE22 dataset
was generated with the VALERIE procedural tools pipeline providing a
photorealistic sensor simulation rendered from automatically synthesized
scenes. The dataset provides a uniquely rich set of metadata, allowing
extraction of specific scene and semantic features (like pixel-accurate
occlusion rates, positions in the scene and distance + angle to the camera).
This enables a multitude of possible tests on the data and we hope to stimulate
research on understanding performance of DNNs. Based on performance metric a
comparison with several other publicly available datasets is provided,
demonstrating that VALERIE22 is one of best performing synthetic datasets
currently available in the open domain.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 15:44:45 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Grau",
"Oliver",
""
],
[
"Hagn",
"Korbinian",
""
]
] |
new_dataset
| 0.999019 |
2308.09650
|
Aran Mohammad
|
Aran Mohammad, Moritz Schappler and Tobias Ortmaier
|
Collision Isolation and Identification Using Proprioceptive Sensing for
Parallel Robots to Enable Human-Robot Collaboration
|
Accepted for publication at IEEE/RSJ International Conference on
Intelligent Robots (IROS) 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parallel robots (PRs) allow for higher speeds in human-robot collaboration
due to their lower moving masses but are more prone to unintended contact. For
a safe reaction, knowledge of the location and force of a collision is useful.
A novel algorithm for collision isolation and identification with
proprioceptive information for a real PR is the scope of this work. To classify
the collided body, the effects of contact forces at the links and platform of
the PR are analyzed using a kinetostatic projection. This insight enables the
derivation of features from the line of action of the estimated external force.
The significance of these features is confirmed in experiments for various load
cases. A feedforward neural network (FNN) classifies the collided body based on
these physically modeled features. Generalization with the FNN to 300k load
cases on the whole robot structure in other joint angle configurations is
successfully performed with a collision-body classification accuracy of 84% in
the experiments. Platform collisions are isolated and identified with an
explicit solution, while a particle filter estimates the location and force of
a contact on a kinematic chain. Updating the particle filter with estimated
external joint torques leads to an isolation error of less than 3cm and an
identification error of 4N in a real-world experiment.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 16:11:48 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Mohammad",
"Aran",
""
],
[
"Schappler",
"Moritz",
""
],
[
"Ortmaier",
"Tobias",
""
]
] |
new_dataset
| 0.98199 |
2308.09663
|
Yucheng Shi
|
Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
|
GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent
Space Reconstruction
|
Accepted by CIKM 2023
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Self-supervised learning with masked autoencoders has recently gained
popularity for its ability to produce effective image or textual
representations, which can be applied to various downstream tasks without
retraining. However, we observe that the current masked autoencoder models lack
good generalization ability on graph data. To tackle this issue, we propose a
novel graph masked autoencoder framework called GiGaMAE. Different from
existing masked autoencoders that learn node presentations by explicitly
reconstructing the original graph components (e.g., features or edges), in this
paper, we propose to collaboratively reconstruct informative and integrated
latent embeddings. By considering embeddings encompassing graph topology and
attribute information as reconstruction targets, our model could capture more
generalized and comprehensive knowledge. Furthermore, we introduce a mutual
information based reconstruction loss that enables the effective reconstruction
of multiple targets. This learning objective allows us to differentiate between
the exclusive knowledge learned from a single target and common knowledge
shared by multiple targets. We evaluate our method on three downstream tasks
with seven datasets as benchmarks. Extensive experiments demonstrate the
superiority of GiGaMAE against state-of-the-art baselines. We hope our results
will shed light on the design of foundation models on graph-structured data.
Our code is available at: https://github.com/sycny/GiGaMAE.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 16:30:51 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Shi",
"Yucheng",
""
],
[
"Dong",
"Yushun",
""
],
[
"Tan",
"Qiaoyu",
""
],
[
"Li",
"Jundong",
""
],
[
"Liu",
"Ninghao",
""
]
] |
new_dataset
| 0.978518 |
2308.09685
|
Michael Joannou
|
Michael Joannou, Pia Rotshtein, Uta Noppeney
|
Audiovisual Moments in Time: A Large-Scale Annotated Dataset of
Audiovisual Actions
| null | null | null | null |
cs.LG cs.CV cs.MM cs.SD eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We present Audiovisual Moments in Time (AVMIT), a large-scale dataset of
audiovisual action events. In an extensive annotation task 11 participants
labelled a subset of 3-second audiovisual videos from the Moments in Time
dataset (MIT). For each trial, participants assessed whether the labelled
audiovisual action event was present and whether it was the most prominent
feature of the video. The dataset includes the annotation of 57,177 audiovisual
videos, each independently evaluated by 3 of 11 trained participants. From this
initial collection, we created a curated test set of 16 distinct action
classes, with 60 videos each (960 videos). We also offer 2 sets of pre-computed
audiovisual feature embeddings, using VGGish/YamNet for audio data and
VGG16/EfficientNetB0 for visual data, thereby lowering the barrier to entry for
audiovisual DNN research. We explored the advantages of AVMIT annotations and
feature embeddings to improve performance on audiovisual event recognition. A
series of 6 Recurrent Neural Networks (RNNs) were trained on either
AVMIT-filtered audiovisual events or modality-agnostic events from MIT, and
then tested on our audiovisual test set. In all RNNs, top 1 accuracy was
increased by 2.71-5.94\% by training exclusively on audiovisual events, even
outweighing a three-fold increase in training data. We anticipate that the
newly annotated AVMIT dataset will serve as a valuable resource for research
and comparative experiments involving computational models and human
participants, specifically when addressing research questions where audiovisual
correspondence is of critical importance.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 17:13:45 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Joannou",
"Michael",
""
],
[
"Rotshtein",
"Pia",
""
],
[
"Noppeney",
"Uta",
""
]
] |
new_dataset
| 0.999838 |
2308.09712
|
Shoukang Hu
|
Shoukang Hu, Fangzhou Hong, Tao Hu, Liang Pan, Haiyi Mei, Weiye Xiao,
Lei Yang, Ziwei Liu
|
HumanLiff: Layer-wise 3D Human Generation with Diffusion Model
|
Project page: https://skhu101.github.io/HumanLiff/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human generation from 2D images has achieved remarkable progress through
the synergistic utilization of neural rendering and generative models. Existing
3D human generative models mainly generate a clothed 3D human as an
undetectable 3D model in a single pass, while rarely considering the layer-wise
nature of a clothed human body, which often consists of the human body and
various clothes such as underwear, outerwear, trousers, shoes, etc. In this
work, we propose HumanLiff, the first layer-wise 3D human generative model with
a unified diffusion process. Specifically, HumanLiff firstly generates
minimal-clothed humans, represented by tri-plane features, in a canonical
space, and then progressively generates clothes in a layer-wise manner. In this
way, the 3D human generation is thus formulated as a sequence of
diffusion-based 3D conditional generation. To reconstruct more fine-grained 3D
humans with tri-plane representation, we propose a tri-plane shift operation
that splits each tri-plane into three sub-planes and shifts these sub-planes to
enable feature grid subdivision. To further enhance the controllability of 3D
generation with 3D layered conditions, HumanLiff hierarchically fuses tri-plane
features and 3D layered conditions to facilitate the 3D diffusion model
learning. Extensive experiments on two layer-wise 3D human datasets, SynBody
(synthetic) and TightCap (real-world), validate that HumanLiff significantly
outperforms state-of-the-art methods in layer-wise 3D human generation. Our
code will be available at https://skhu101.github.io/HumanLiff.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 17:59:04 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Hu",
"Shoukang",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Hu",
"Tao",
""
],
[
"Pan",
"Liang",
""
],
[
"Mei",
"Haiyi",
""
],
[
"Xiao",
"Weiye",
""
],
[
"Yang",
"Lei",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.997043 |
1808.09496
|
Johanna Johansen Ms
|
Johanna Johansen and Christian Johansen and Josef Noll
|
InfoInternet for Education in the Global South: A Study of Applications
Enabled by Free Information-only Internet Access in Technologically
Disadvantaged Areas (authors' version)
|
16 pages, 1 figure, under review for a journal since March 2018
|
African Journal of Science, Technology, Innovation and
Development, 2022, Vol. 14, No. 3, pp. 642-654
|
10.1080/20421338.2021.1884326
| null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper summarises our work on studying educational applications enabled
by the introduction of a new information layer called InfoInternet. This is an
initiative to facilitate affordable access to internet based information in
communities with network scarcity or economic problems from the Global South.
InfoInternet develops both networking solutions as well as business and social
models, together with actors like mobile operators and government
organisations. In this paper we identify and describe characteristics of
educational applications, their specific users, and learning environment. We
are interested in applications that make the adoption of Internet faster,
cheaper, and wider in such communities. When developing new applications (or
adopting existing ones) for such constrained environments, this work acts as
initial guidelines prior to field studies.
|
[
{
"version": "v1",
"created": "Tue, 28 Aug 2018 19:05:19 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Johansen",
"Johanna",
""
],
[
"Johansen",
"Christian",
""
],
[
"Noll",
"Josef",
""
]
] |
new_dataset
| 0.975601 |
2007.16161
|
Ralph Matthes
|
Jos\'e Esp\'irito Santo and Ralph Matthes and Lu\'is Pinto
|
Coinductive proof search for polarized logic with applications to full
intuitionistic propositional logic
|
22 pages incl. appendices; we now stress the dependence of the
results on specific proof systems (seen in the abstract, hence the change of
title). LJT now comes at the end of the main text. Thm 8 (was Thm 14)
evolved, and we abandon modifications in the vector of declarations in two
clauses for finitary representation. There is new material on type finiteness
in LJP (developed in the appendix)
| null |
10.4230/LIPIcs.TYPES.2020.4
| null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
The approach to proof search dubbed "coinductive proof search", and
previously developed by the authors for implicational intuitionistic logic, is
in this paper extended to LJP, a focused sequent-calculus presentation of
polarized intuitionistic logic, including an array of positive and negative
connectives. As before, this includes developing a coinductive description of
the search space generated by a sequent, an equivalent inductive syntax
describing the same space, and decision procedures for inhabitation problems in
the form of predicates defined by recursion on the inductive syntax. We prove
the decidability of existence of focused inhabitants, and of finiteness of the
number of focused inhabitants for polarized intuitionistic logic, by means of
such recursive procedures. Moreover, the polarized logic can be used as a
platform from which proof search for other logics is understood. We illustrate
the technique with LJT, a focused sequent calculus for full intuitionistic
propositional logic (including disjunction). For that, we have to work out the
"negative translation" of LJT into LJP (that sees all intuitionistic types as
negative types), and verify that the translation gives a faithful
representation of proof search in LJT as proof search in the polarized logic.
We therefore inherit decidability of both problems studied for LJP and thus get
new proofs of these results for LJT.
|
[
{
"version": "v1",
"created": "Fri, 31 Jul 2020 16:30:54 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Mar 2021 18:35:52 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Santo",
"José Espírito",
""
],
[
"Matthes",
"Ralph",
""
],
[
"Pinto",
"Luís",
""
]
] |
new_dataset
| 0.954355 |
2102.06880
|
Patrice Ossona de Mendez
|
\'Edouard Bonnet, Jaroslav Ne\v{s}et\v{r}il, Patrice Ossona de Mendez,
Sebastian Siebertz, St\'ephan Thomass\'e
|
Twin-width and permutations
| null | null | null | null |
cs.LO cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by a width invariant on permutations defined by Guillemot and Marx,
Bonnet, Kim, Thomass\'e, and Watrigant introduced the twin-width of graphs,
which is a parameter describing its structural complexity. This invariant has
been further extended to binary structures, in several (basically equivalent)
ways. We prove that a class of binary relational structures (that is:
edge-colored partially directed graphs) has bounded twin-width if and only if
it is a first-order transduction of a~proper permutation class. As a
by-product, we show that every class with bounded twin-width contains at most
$2^{O(n)}$ pairwise non-isomorphic $n$-vertex graphs.
|
[
{
"version": "v1",
"created": "Sat, 13 Feb 2021 08:03:17 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jul 2021 21:48:42 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 11:11:55 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Jul 2023 17:29:53 GMT"
},
{
"version": "v5",
"created": "Wed, 16 Aug 2023 09:56:41 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Bonnet",
"Édouard",
""
],
[
"Nešetřil",
"Jaroslav",
""
],
[
"de Mendez",
"Patrice Ossona",
""
],
[
"Siebertz",
"Sebastian",
""
],
[
"Thomassé",
"Stéphan",
""
]
] |
new_dataset
| 0.99449 |
2109.06479
|
Xu Liu
|
Xu Liu, Guilherme V. Nardari, Fernando Cladera Ojeda, Yuezhan Tao,
Alex Zhou, Thomas Donnelly, Chao Qu, Steven W. Chen, Roseli A. F. Romero,
Camillo J. Taylor, Vijay Kumar
|
Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense
Forest Canopy
|
Xu Liu and Guilherme V. Nardari contributed equally to this work
|
IEEE Robotics and Automation Letters ( Volume: 7, Issue: 2, April
2022)
|
10.1109/LRA.2022.3154047
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic maps represent the environment using a set of semantically
meaningful objects. This representation is storage-efficient, less ambiguous,
and more informative, thus facilitating large-scale autonomy and the
acquisition of actionable information in highly unstructured, GPS-denied
environments. In this letter, we propose an integrated system that can perform
large-scale autonomous flights and real-time semantic mapping in challenging
under-canopy environments. We detect and model tree trunks and ground planes
from LiDAR data, which are associated across scans and used to constrain robot
poses as well as tree trunk models. The autonomous navigation module utilizes a
multi-level planning and mapping framework and computes dynamically feasible
trajectories that lead the UAV to build a semantic map of the user-defined
region of interest in a computationally and storage efficient manner. A
drift-compensation mechanism is designed to minimize the odometry drift using
semantic SLAM outputs in real time, while maintaining planner optimality and
controller stability. This leads the UAV to execute its mission accurately and
safely at scale.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 07:24:53 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Sep 2021 21:09:26 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Feb 2022 19:11:48 GMT"
},
{
"version": "v4",
"created": "Sat, 26 Feb 2022 17:00:24 GMT"
},
{
"version": "v5",
"created": "Sun, 13 Aug 2023 13:55:29 GMT"
},
{
"version": "v6",
"created": "Wed, 16 Aug 2023 02:29:16 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Liu",
"Xu",
""
],
[
"Nardari",
"Guilherme V.",
""
],
[
"Ojeda",
"Fernando Cladera",
""
],
[
"Tao",
"Yuezhan",
""
],
[
"Zhou",
"Alex",
""
],
[
"Donnelly",
"Thomas",
""
],
[
"Qu",
"Chao",
""
],
[
"Chen",
"Steven W.",
""
],
[
"Romero",
"Roseli A. F.",
""
],
[
"Taylor",
"Camillo J.",
""
],
[
"Kumar",
"Vijay",
""
]
] |
new_dataset
| 0.995104 |
2209.10021
|
Sheng Cheng
|
Sheng Cheng, Minkyung Kim, Lin Song, Chengyu Yang, Yiquan Jin,
Shenlong Wang, and Naira Hovakimyan
|
DiffTune: Auto-Tuning through Auto-Differentiation
|
Minkyung Kim and Lin Song contributed equally to this work
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The performance of robots in high-level tasks depends on the quality of their
lower-level controller, which requires fine-tuning. However, the intrinsically
nonlinear dynamics and controllers make tuning a challenging task when it is
done by hand. In this paper, we present DiffTune, a novel, gradient-based
automatic tuning framework. We formulate the controller tuning as a parameter
optimization problem. Our method unrolls the dynamical system and controller as
a computational graph and updates the controller parameters through
gradient-based optimization. The gradient is obtained using sensitivity
propagation, which is the only method for gradient computation when tuning for
a physical system instead of its simulated counterpart. Furthermore, we use
$\mathcal{L}_1$ adaptive control to compensate for the uncertainties (that
unavoidably exist in a physical system) such that the gradient is not biased by
the unmodelled uncertainties. We validate the DiffTune on a Dubin's car and a
quadrotor in challenging simulation environments. In comparison with
state-of-the-art auto-tuning methods, DiffTune achieves the best performance in
a more efficient manner owing to its effective usage of the first-order
information of the system. Experiments on tuning a nonlinear controller for
quadrotor show promising results, where DiffTune achieves 3.5x tracking error
reduction on an aggressive trajectory in only 10 trials over a 12-dimensional
controller parameter space.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 22:08:44 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 01:55:42 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Cheng",
"Sheng",
""
],
[
"Kim",
"Minkyung",
""
],
[
"Song",
"Lin",
""
],
[
"Yang",
"Chengyu",
""
],
[
"Jin",
"Yiquan",
""
],
[
"Wang",
"Shenlong",
""
],
[
"Hovakimyan",
"Naira",
""
]
] |
new_dataset
| 0.995667 |
2211.10605
|
Yilong Chen
|
Yilong Chen, Haocheng Hua, Jie Xu, and Derrick Wing Kwan Ng
|
ISAC Meets SWIPT: Multi-functional Wireless Systems Integrating Sensing,
Communication, and Powering
|
arXiv admin note: substantial text overlap with arXiv:2210.16716
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper unifies integrated sensing and communication (ISAC) and
simultaneous wireless information and power transfer (SWIPT), by investigating
a new multi-functional multiple-input multiple-output (MIMO) system integrating
wireless sensing, communication, and powering. In this system, one
multi-antenna hybrid access point (H-AP) transmits wireless signals to
communicate with one multi-antenna information decoding (ID) receiver,
wirelessly charge one multi-antenna energy harvesting (EH) receiver, and
perform radar target sensing based on the echo signal at the same time. Under
this setup, we aim to reveal the fundamental performance tradeoff limits among
sensing, communication, and powering, in terms of the estimation Cramer-Rao
bound (CRB), achievable communication rate, and harvested energy level,
respectively. In particular, we consider two different target models for radar
sensing, namely the point and extended targets, for which we are interested in
estimating the target angle and the complete target response matrix,
respectively. For both models, we define the achievable CRB-rate-energy (C-R-E)
region and characterize its Pareto boundary by maximizing the achievable rate
at the ID receiver, subject to the estimation CRB requirement for target
sensing, the harvested energy requirement at the EH receiver, and the maximum
transmit power constraint at the H-AP. We obtain the well-structured optimal
transmit covariance solutions to the two formulated problems by applying
advanced convex optimization techniques. Numerical results show the optimal
C-R-E region boundary achieved by our proposed design, as compared to the
benchmark schemes based on time switching and eigenmode transmission (EMT).
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 07:11:24 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 13:33:37 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Chen",
"Yilong",
""
],
[
"Hua",
"Haocheng",
""
],
[
"Xu",
"Jie",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
]
] |
new_dataset
| 0.999045 |
2301.06567
|
Hunsoo Song
|
Hunsoo Song, Jinha Jung
|
Scalable Surface Water Mapping up to Fine-scale using Geometric Features
of Water from Topographic Airborne LiDAR Data
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite substantial technological advancements, the comprehensive mapping of
surface water, particularly smaller bodies (<1ha), continues to be a challenge
due to a lack of robust, scalable methods. Standard methods require either
training labels or site-specific parameter tuning, which complicates automated
mapping and introduces biases related to training data and parameters. The
reliance on water's reflectance properties, including LiDAR intensity, further
complicates the matter, as higher-resolution images inherently produce more
noise. To mitigate these difficulties, we propose a unique method that focuses
on the geometric characteristics of water instead of its variable reflectance
properties. Unlike preceding approaches, our approach relies entirely on 3D
coordinate observations from airborne LiDAR data, taking advantage of the
principle that connected surface water remains flat due to gravity. By
harnessing this natural law in conjunction with connectivity, our method can
accurately and scalably identify small water bodies, eliminating the need for
training labels or repetitive parameter tuning. Consequently, our approach
enables the creation of comprehensive 3D topographic maps that include both
water and terrain, all performed in an unsupervised manner using only airborne
laser scanning data, potentially enhancing the process of generating reliable
3D topographic maps. We validated our method across extensive and diverse
landscapes, while comparing it to highly competitive Normalized Difference
Water Index (NDWI)-based methods and assessing it using a reference surface
water map. In conclusion, our method offers a new approach to address
persistent difficulties in robust, scalable surface water mapping and 3D
topographic mapping, using solely airborne LiDAR data.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 19:04:23 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 03:45:46 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Song",
"Hunsoo",
""
],
[
"Jung",
"Jinha",
""
]
] |
new_dataset
| 0.990883 |
2303.09219
|
Qiao Wu
|
Qiao Wu, Jiaqi Yang, Kun Sun, Chu'ai Zhang, Yanning Zhang, Mathieu
Salzmann
|
MixCycle: Mixup Assisted Semi-Supervised 3D Single Object Tracking with
Cycle Consistency
|
Accepted by ICCV23
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D single object tracking (SOT) is an indispensable part of automated
driving. Existing approaches rely heavily on large, densely labeled datasets.
However, annotating point clouds is both costly and time-consuming. Inspired by
the great success of cycle tracking in unsupervised 2D SOT, we introduce the
first semi-supervised approach to 3D SOT. Specifically, we introduce two
cycle-consistency strategies for supervision: 1) Self tracking cycles, which
leverage labels to help the model converge better in the early stages of
training; 2) forward-backward cycles, which strengthen the tracker's robustness
to motion variations and the template noise caused by the template update
strategy. Furthermore, we propose a data augmentation strategy named SOTMixup
to improve the tracker's robustness to point cloud diversity. SOTMixup
generates training samples by sampling points in two point clouds with a mixing
rate and assigns a reasonable loss weight for training according to the mixing
rate. The resulting MixCycle approach generalizes to appearance matching-based
trackers. On the KITTI benchmark, based on the P2B tracker, MixCycle trained
with $\textbf{10\%}$ labels outperforms P2B trained with $\textbf{100\%}$
labels, and achieves a $\textbf{28.4\%}$ precision improvement when using
$\textbf{1\%}$ labels. Our code will be released at
\url{https://github.com/Mumuqiao/MixCycle}.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 10:48:59 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 14:12:42 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Wu",
"Qiao",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Sun",
"Kun",
""
],
[
"Zhang",
"Chu'ai",
""
],
[
"Zhang",
"Yanning",
""
],
[
"Salzmann",
"Mathieu",
""
]
] |
new_dataset
| 0.998757 |
2303.09713
|
Seungju Han
|
Seungju Han, Jack Hessel, Nouha Dziri, Yejin Choi, Youngjae Yu
|
CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
|
ICCV 2023, Project page: https://seungjuhan.me/champagne
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual information is central to conversation: body gestures and physical
behaviour, for example, contribute to meaning that transcends words alone. To
date, however, most neural conversational models are limited to just text. We
introduce CHAMPAGNE, a generative model of conversations that can account for
visual contexts. To train CHAMPAGNE, we collect and release YTD-18M, a
large-scale corpus of 18M video-based dialogues. YTD-18M is constructed from
web videos: crucial to our data collection pipeline is a pretrained language
model that converts error-prone automatic transcripts to a cleaner dialogue
format while maintaining meaning. Human evaluation reveals that YTD-18M is more
sensible and specific than prior resources (MMDialog, 1M dialogues), while
maintaining visual-groundedness. Experiments demonstrate that 1) CHAMPAGNE
learns to conduct conversation from YTD-18M; and 2) when fine-tuned, it
achieves state-of-the-art results on four vision-language tasks focused on
real-world conversations. We release data, models, and code.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 01:10:33 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 08:17:02 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Han",
"Seungju",
""
],
[
"Hessel",
"Jack",
""
],
[
"Dziri",
"Nouha",
""
],
[
"Choi",
"Yejin",
""
],
[
"Yu",
"Youngjae",
""
]
] |
new_dataset
| 0.994464 |
2303.12791
|
Shoukang Hu
|
Shoukang Hu, Fangzhou Hong, Liang Pan, Haiyi Mei, Lei Yang, Ziwei Liu
|
SHERF: Generalizable Human NeRF from a Single Image
|
Accepted by ICCV2023. Project webpage:
https://skhu101.github.io/SHERF/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing Human NeRF methods for reconstructing 3D humans typically rely on
multiple 2D images from multi-view cameras or monocular videos captured from
fixed camera views. However, in real-world scenarios, human images are often
captured from random camera angles, presenting challenges for high-quality 3D
human reconstruction. In this paper, we propose SHERF, the first generalizable
Human NeRF model for recovering animatable 3D humans from a single input image.
SHERF extracts and encodes 3D human representations in canonical space,
enabling rendering and animation from free views and poses. To achieve
high-fidelity novel view and pose synthesis, the encoded 3D human
representations should capture both global appearance and local fine-grained
textures. To this end, we propose a bank of 3D-aware hierarchical features,
including global, point-level, and pixel-aligned features, to facilitate
informative encoding. Global features enhance the information extracted from
the single input image and complement the information missing from the partial
2D observation. Point-level features provide strong clues of 3D human
structure, while pixel-aligned features preserve more fine-grained details. To
effectively integrate the 3D-aware hierarchical feature bank, we design a
feature fusion transformer. Extensive experiments on THuman, RenderPeople,
ZJU_MoCap, and HuMMan datasets demonstrate that SHERF achieves state-of-the-art
performance, with better generalizability for novel view and pose synthesis.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 17:59:12 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 17:58:35 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Hu",
"Shoukang",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Pan",
"Liang",
""
],
[
"Mei",
"Haiyi",
""
],
[
"Yang",
"Lei",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.99899 |
2304.04137
|
Xiwen Chen
|
Xiwen Chen, Huayu Li, Rahul Amin, Abolfazl Razi
|
RD-DPP: Rate-Distortion Theory Meets Determinantal Point Process to
Diversify Learning Data Samples
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In some practical learning tasks, such as traffic video analysis, the number
of available training samples is restricted by different factors, such as
limited communication bandwidth and computation power. Determinantal Point
Process (DPP) is a common method for selecting the most diverse samples to
enhance learning quality. However, the number of selected samples is restricted
to the rank of the kernel matrix implied by the dimensionality of data samples.
Secondly, it is not easily customizable to different learning tasks. In this
paper, we propose a new way of measuring task-oriented diversity based on the
Rate-Distortion (RD) theory, appropriate for multi-level classification. To
this end, we establish a fundamental relationship between DPP and RD theory. We
observe that the upper bound of the diversity of data selected by DPP has a
universal trend of $\textit{phase transition}$, which suggests that DPP is
beneficial only at the beginning of sample accumulation. This led to the design
of a bi-modal method, where RD-DPP is used in the first mode to select initial
data samples, then classification inconsistency (as an uncertainty measure) is
used to select the subsequent samples in the second mode. This phase transition
solves the limitation to the rank of the similarity matrix. Applying our method
to six different datasets and five benchmark models suggests that our method
consistently outperforms random selection, DPP-based methods, and alternatives
like uncertainty-based and coreset methods under all sampling budgets, while
exhibiting high generalizability to different learning tasks.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 02:22:31 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 15:36:07 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Chen",
"Xiwen",
""
],
[
"Li",
"Huayu",
""
],
[
"Amin",
"Rahul",
""
],
[
"Razi",
"Abolfazl",
""
]
] |
new_dataset
| 0.979384 |
2304.06906
|
Yang Liu
|
Yu-Qi Yang, Yu-Xiao Guo, Jian-Yu Xiong, Yang Liu, Hao Pan, Peng-Shuai
Wang, Xin Tong, Baining Guo
|
Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene
Understanding
|
Project page: https://yukichiii.github.io/project/swin3D/swin3D.html
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The use of pretrained backbones with fine-tuning has been successful for 2D
vision and natural language processing tasks, showing advantages over
task-specific networks. In this work, we introduce a pretrained 3D backbone,
called {\SST}, for 3D indoor scene understanding. We design a 3D Swin
transformer as our backbone network, which enables efficient self-attention on
sparse voxels with linear memory complexity, making the backbone scalable to
large models and datasets. We also introduce a generalized contextual relative
positional embedding scheme to capture various irregularities of point signals
for improved network performance. We pretrained a large {\SST} model on a
synthetic Structured3D dataset, which is an order of magnitude larger than the
ScanNet dataset. Our model pretrained on the synthetic dataset not only
generalizes well to downstream segmentation and detection on real 3D point
datasets, but also outperforms state-of-the-art methods on downstream tasks
with +2.3 mIoU and +2.2 mIoU on S3DIS Area5 and 6-fold semantic segmentation,
+1.8 mIoU on ScanNet segmentation (val), +1.9 [email protected] on ScanNet detection, and
+8.1 [email protected] on S3DIS detection. A series of extensive ablation studies further
validate the scalability, generality, and superior performance enabled by our
approach. The code and models are available at
https://github.com/microsoft/Swin3D .
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 02:49:08 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Apr 2023 02:46:34 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Aug 2023 01:53:02 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Yang",
"Yu-Qi",
""
],
[
"Guo",
"Yu-Xiao",
""
],
[
"Xiong",
"Jian-Yu",
""
],
[
"Liu",
"Yang",
""
],
[
"Pan",
"Hao",
""
],
[
"Wang",
"Peng-Shuai",
""
],
[
"Tong",
"Xin",
""
],
[
"Guo",
"Baining",
""
]
] |
new_dataset
| 0.992744 |
2304.13017
|
Alex Labach
|
Alex Labach, Aslesha Pokhrel, Xiao Shi Huang, Saba Zuberi, Seung Eun
Yi, Maksims Volkovs, Tomi Poutanen, Rahul G. Krishnan
|
DuETT: Dual Event Time Transformer for Electronic Health Records
|
Accepted at MLHC 2023, camera-ready version
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electronic health records (EHRs) recorded in hospital settings typically
contain a wide range of numeric time series data that is characterized by high
sparsity and irregular observations. Effective modelling for such data must
exploit its time series nature, the semantic relationship between different
types of observations, and information in the sparsity structure of the data.
Self-supervised Transformers have shown outstanding performance in a variety of
structured tasks in NLP and computer vision. But multivariate time series data
contains structured relationships over two dimensions: time and recorded event
type, and straightforward applications of Transformers to time series data do
not leverage this distinct structure. The quadratic scaling of self-attention
layers can also significantly limit the input sequence length without
appropriate input engineering. We introduce the DuETT architecture, an
extension of Transformers designed to attend over both time and event type
dimensions, yielding robust representations from EHR data. DuETT uses an
aggregated input where sparse time series are transformed into a regular
sequence with fixed length; this lowers the computational complexity relative
to previous EHR Transformer models and, more importantly, enables the use of
larger and deeper neural networks. When trained with self-supervised prediction
tasks, that provide rich and informative signals for model pre-training, our
model outperforms state-of-the-art deep learning models on multiple downstream
tasks from the MIMIC-IV and PhysioNet-2012 EHR datasets.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 17:47:48 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 21:02:34 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Labach",
"Alex",
""
],
[
"Pokhrel",
"Aslesha",
""
],
[
"Huang",
"Xiao Shi",
""
],
[
"Zuberi",
"Saba",
""
],
[
"Yi",
"Seung Eun",
""
],
[
"Volkovs",
"Maksims",
""
],
[
"Poutanen",
"Tomi",
""
],
[
"Krishnan",
"Rahul G.",
""
]
] |
new_dataset
| 0.968955 |
2305.10205
|
Jia-Rui Lin
|
Xiang-Rui Ni, Zhe Zheng, Jia-Rui Lin, Zhen-Zhong Hu, Xin Zhang
|
DesignTracking: Track and Replay BIM-based Design Process
| null |
Creative Construction Conference 2023
|
10.3311/CCC2023-006
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Among different phases of the life cycle of a building or facility, design is
of the utmost importance to ensure safety, efficiency and sustainability of the
building or facility. How to control and improve design quality and efficiency
has been explored for years, and more studies emerged with the popularization
of Building Information Modelling (BIM). However, most of them focused on the
extraction of design behaviors, while paying less attention to how a design is
formed. Therefore, this study proposes an approach to tracking and replaying
the BIM-based design process by integrating data logging and 4D visualization
techniques. First of all, potential design behaviors and procedures are
analyzed and extracted by observing how a designer designs a BIM model.
Meanwhile, the required data for logging design process is defined and a
relevant method to collect these data is developed based on the APIs of BIM
software. Then, strategies on how to visualize different design procedures are
designed and implemented via 4D visualization. Finally, a prototype system is
developed based on Autodesk Revit and validated through a case study. Result
shows that the proposed approach enables intuitively and interactively review
of the design process, and makes it easier to understand design behaviors and
even identify potential pitfalls, thus improving the design efficiency and
quality.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 13:27:02 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Ni",
"Xiang-Rui",
""
],
[
"Zheng",
"Zhe",
""
],
[
"Lin",
"Jia-Rui",
""
],
[
"Hu",
"Zhen-Zhong",
""
],
[
"Zhang",
"Xin",
""
]
] |
new_dataset
| 0.965021 |
2306.04306
|
Kevin Glocker
|
Kevin Glocker (1), Aaricia Herygers (1), Munir Georges (1 and 2) ((1)
AImotion Bavaria Technische Hochschule Ingolstadt, (2) Intel Labs Germany)
|
Allophant: Cross-lingual Phoneme Recognition with Articulatory
Attributes
|
5 pages, 2 figures, 2 tables, accepted to INTERSPEECH 2023; published
version
|
Proc. INTERSPEECH 2023, 2258-2262
|
10.21437/Interspeech.2023-772
| null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes Allophant, a multilingual phoneme recognizer. It requires
only a phoneme inventory for cross-lingual transfer to a target language,
allowing for low-resource recognition. The architecture combines a
compositional phone embedding approach with individually supervised phonetic
attribute classifiers in a multi-task architecture. We also introduce
Allophoible, an extension of the PHOIBLE database. When combined with a
distance based mapping approach for grapheme-to-phoneme outputs, it allows us
to train on PHOIBLE inventories directly. By training and evaluating on 34
languages, we found that the addition of multi-task learning improves the
model's capability of being applied to unseen phonemes and phoneme inventories.
On supervised languages we achieve phoneme error rate improvements of 11
percentage points (pp.) compared to a baseline without multi-task learning.
Evaluation of zero-shot transfer on 84 languages yielded a decrease in PER of
2.63 pp. over the baseline.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 10:11:09 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 17:44:59 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Glocker",
"Kevin",
"",
"1 and 2"
],
[
"Herygers",
"Aaricia",
"",
"1 and 2"
],
[
"Georges",
"Munir",
"",
"1 and 2"
]
] |
new_dataset
| 0.970473 |
2306.05989
|
Ebenezer Isaac
|
Ebenezer RHP Isaac and Bulbul Singh
|
QBSD: Quartile-Based Seasonality Decomposition for Cost-Effective Time
Series Forecasting
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the telecom domain, precise forecasting of time series patterns, such as
cell key performance indicators (KPIs), plays a pivotal role in enhancing
service quality and operational efficiency. State-of-the-art forecasting
approaches prioritize forecasting accuracy at the expense of computational
performance, rendering them less suitable for data-intensive applications
encompassing systems with a multitude of time series variables. To address this
issue, we introduce QBSD, a live forecasting approach tailored to optimize the
trade-off between accuracy and computational complexity. We have evaluated the
performance of QBSD against state-of-the-art forecasting approaches on publicly
available datasets. We have also extended this investigation to our curated
network KPI dataset, now publicly accessible, to showcase the effect of dynamic
operating ranges that varies with time. The results demonstrate that the
proposed method excels in runtime efficiency compared to the leading algorithms
available while maintaining competitive forecast accuracy.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 15:59:27 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 14:47:10 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Isaac",
"Ebenezer RHP",
""
],
[
"Singh",
"Bulbul",
""
]
] |
new_dataset
| 0.990195 |
2306.17436
|
Xingyu Ji
|
Xingyu Ji, Shenghai Yuan, Pengyu Yin, Lihua Xie
|
LIO-GVM: an Accurate, Tightly-Coupled Lidar-Inertial Odometry with
Gaussian Voxel Map
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter presents an accurate and robust Lidar Inertial Odometry
framework. We fuse LiDAR scans with IMU data using a tightly-coupled iterative
error state Kalman filter for robust and fast localization. To achieve robust
correspondence matching, we represent the points as a set of Gaussian
distributions and evaluate the divergence in variance for outlier rejection.
Based on the fitted distributions, a new residual metric is proposed for the
filter-based Lidar inertial odometry, which demonstrates an improvement from
merely quantifying distance to incorporating variance disparity, further
enriching the comprehensiveness and accuracy of the residual metric. Due to the
strategic design of the residual metric, we propose a simple yet effective
voxel-solely mapping scheme, which only necessities the maintenance of one
centroid and one covariance matrix for each voxel. Experiments on different
datasets demonstrate the robustness and accuracy of our framework for various
data inputs and environments. To the benefit of the robotics society, we open
source the code at https://github.com/Ji1Xingyu/lio_gvm.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 07:17:18 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 01:54:06 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Ji",
"Xingyu",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Yin",
"Pengyu",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.982045 |
2307.10162
|
Yueqian Lin
|
Xingyu Shen, Yueqian Lin, Zhixian Zhang, Xin Tong
|
RTVis: Research Trend Visualization Toolkit
|
Accepted by IEEE VIS 2023 (Poster). 2 pages, 1 figure. For our demo
page, visit https://www.rtvis.design/
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
When researchers are about to start a new project or have just entered a new
research field, choosing a proper research topic is always challenging. To help
them have an overall understanding of the research trend in real-time and find
out the research topic they are interested in, we developed the Research Trend
Visualization toolkit (RTVis) to analyze and visualize the research paper
information. RTVis consists of a field theme river, a co-occurrence network, a
specialized citation bar chart, and a word frequency race diagram, showing the
field change through time, cooperating relationship among authors, paper
citation numbers in different venues, and the most common words in the abstract
part respectively. Moreover, RTVis is open source and easy to deploy. The demo
of our toolkit and code with detailed documentation are both available online.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2023 17:44:49 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 13:42:06 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Aug 2023 12:18:04 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Shen",
"Xingyu",
""
],
[
"Lin",
"Yueqian",
""
],
[
"Zhang",
"Zhixian",
""
],
[
"Tong",
"Xin",
""
]
] |
new_dataset
| 0.980809 |
2307.16751
|
Weisheng Li
|
Lin Huang, Weisheng Li, Linlin Shen, Xue Xiao, Suihan Xiao
|
High-Performance Fine Defect Detection in Artificial Leather Using Dual
Feature Pool Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, the structural problems of the YOLOv5 model were analyzed
emphatically. Based on the characteristics of fine defects in artificial
leather, four innovative structures, namely DFP, IFF, AMP, and EOS, were
designed. These advancements led to the proposal of a high-performance
artificial leather fine defect detection model named YOLOD. YOLOD demonstrated
outstanding performance on the artificial leather defect dataset, achieving an
impressive increase of 11.7% - 13.5% in AP_50 compared to YOLOv5, along with a
significant reduction of 5.2% - 7.2% in the error detection rate. Moreover,
YOLOD also exhibited remarkable performance on the general MS-COCO dataset,
with an increase of 0.4% - 2.6% in AP compared to YOLOv5, and a rise of 2.5% -
4.1% in AP_S compared to YOLOv5. These results demonstrate the superiority of
YOLOD in both artificial leather defect detection and general object detection
tasks, making it a highly efficient and effective model for real-world
applications.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 15:18:54 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 01:25:03 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Huang",
"Lin",
""
],
[
"Li",
"Weisheng",
""
],
[
"Shen",
"Linlin",
""
],
[
"Xiao",
"Xue",
""
],
[
"Xiao",
"Suihan",
""
]
] |
new_dataset
| 0.995755 |
2308.04673
|
Xiaobei Li
|
Xiaobei Li, Changchun Yin, Liming Fang, Run Wang, Chenhao Lin
|
SSL-Auth: An Authentication Framework by Fragile Watermarking for
Pre-trained Encoders in Self-supervised Learning
|
Submitted to AAAI2024. 9 pages, 7 figures
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning (SSL), utilizing unlabeled datasets for training
powerful encoders, has achieved significant success recently. These encoders
serve as feature extractors for downstream tasks, requiring substantial
resources. However, the challenge of protecting the intellectual property of
encoder trainers and ensuring the trustworthiness of deployed encoders remains
a significant gap in SSL. Moreover, recent researches highlight threats to
pre-trained encoders, such as backdoor and adversarial attacks. To address
these gaps, we propose SSL-Auth, the first authentication framework designed
specifically for pre-trained encoders. In particular, SSL-Auth utilizes
selected key samples as watermark information and trains a verification network
to reconstruct the watermark information, thereby verifying the integrity of
the encoder without compromising model performance. By comparing the
reconstruction results of the key samples, malicious alterations can be
detected, as modified encoders won't mimic the original reconstruction.
Comprehensive evaluations on various encoders and diverse downstream tasks
demonstrate the effectiveness and fragility of our proposed SSL-Auth.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 02:54:11 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 09:27:24 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Li",
"Xiaobei",
""
],
[
"Yin",
"Changchun",
""
],
[
"Fang",
"Liming",
""
],
[
"Wang",
"Run",
""
],
[
"Lin",
"Chenhao",
""
]
] |
new_dataset
| 0.978565 |
2308.07325
|
Rossella Aversa Dr.
|
Mehrdad Jalali, Matthias Mail, Rossella Aversa, and Christian K\"ubel
|
MSLE: An ontology for Materials Science Laboratory Equipment.
Large-Scale Devices for Materials Characterization
|
Submitted to Materials Today Communication
|
Mater. Today Commun. 35 (2023) 105532
|
10.1016/j.mtcomm.2023.105532
| null |
cs.AI cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces a new ontology for Materials Science Laboratory
Equipment, termed MSLE. A fundamental issue with materials science laboratory
(hereafter lab) equipment in the real world is that scientists work with
various types of equipment with multiple specifications. For example, there are
many electron microscopes with different parameters in chemical and physical
labs. A critical development to unify the description is to build an equipment
domain ontology as basic semantic knowledge and to guide the user to work with
the equipment appropriately. Here, we propose to develop a consistent ontology
for equipment, the MSLE ontology. In the MSLE, two main existing ontologies,
the Semantic Sensor Network (SSN) and the Material Vocabulary (MatVoc), have
been integrated into the MSLE core to build a coherent ontology. Since various
acronyms and terms have been used for equipment, this paper proposes an
approach to use a Simple Knowledge Organization System (SKOS) to represent the
hierarchical structure of equipment terms. Equipment terms were collected in
various languages and abbreviations and coded into the MSLE using the SKOS
model. The ontology development was conducted in close collaboration with
domain experts and focused on the large-scale devices for materials
characterization available in our research group. Competency questions are
expected to be addressed through the MSLE ontology. Constraints are modeled in
the Shapes Query Language (SHACL); a prototype is shown and validated to show
the value of the modeling constraints.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 12:39:42 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Jalali",
"Mehrdad",
""
],
[
"Mail",
"Matthias",
""
],
[
"Aversa",
"Rossella",
""
],
[
"Kübel",
"Christian",
""
]
] |
new_dataset
| 0.997773 |
2308.07590
|
Tianhao Xu
|
Zizhang Wu, Chenxin Yuan, Hongyang Wei, Fan Song, Tianhao Xu
|
ADD: An Automatic Desensitization Fisheye Dataset for Autonomous Driving
| null |
Engineering Applications of Artificial Intelligence 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous driving systems require many images for analyzing the surrounding
environment. However, there is fewer data protection for private information
among these captured images, such as pedestrian faces or vehicle license
plates, which has become a significant issue. In this paper, in response to the
call for data security laws and regulations and based on the advantages of
large Field of View(FoV) of the fisheye camera, we build the first Autopilot
Desensitization Dataset, called ADD, and formulate the first
deep-learning-based image desensitization framework, to promote the study of
image desensitization in autonomous driving scenarios. The compiled dataset
consists of 650K images, including different face and vehicle license plate
information captured by the surround-view fisheye camera. It covers various
autonomous driving scenarios, including diverse facial characteristics and
license plate colors. Then, we propose an efficient multitask desensitization
network called DesCenterNet as a benchmark on the ADD dataset, which can
perform face and vehicle license plate detection and desensitization tasks.
Based on ADD, we further provide an evaluation criterion for desensitization
performance, and extensive comparison experiments have verified the
effectiveness and superiority of our method on image desensitization.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 06:21:56 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Wu",
"Zizhang",
""
],
[
"Yuan",
"Chenxin",
""
],
[
"Wei",
"Hongyang",
""
],
[
"Song",
"Fan",
""
],
[
"Xu",
"Tianhao",
""
]
] |
new_dataset
| 0.998894 |
2308.07932
|
Aman Abidi
|
Apurba Das, Aman Abidi, Ajinkya Shingane and Mekala Kiran
|
Balanced Butterfly Counting in Bipartite-Network
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Bipartite graphs offer a powerful framework for modeling complex
relationships between two distinct types of vertices, incorporating
probabilistic, temporal, and rating-based information. While the research
community has extensively explored various types of bipartite relationships,
there has been a notable gap in studying Signed Bipartite Graphs, which capture
liking / disliking interactions in real-world networks such as
customer-rating-product and senator-vote-bill. Balance butterflies,
representing 2 x 2 bicliques, provide crucial insights into antagonistic
groups, balance theory, and fraud detection by leveraging the signed
information. However, such applications require counting balance butterflies
which remains unexplored. In this paper, we propose a new problem: counting
balance butterflies in a signed bipartite graph. To address this problem, we
adopt state-of-the-art algorithms for butterfly counting, establishing a smart
baseline that reduces the time complexity for solving our specific problem. We
further introduce a novel bucket approach specifically designed to count
balanced butterflies efficiently. We propose a parallelized version of the
bucketing approach to enhance performance. Extensive experimental studies on
nine real-world datasets demonstrate that our proposed bucket-based algorithm
is up to 120x faster over the baseline, and the parallel implementation of the
bucket-based algorithm is up to 45x faster over the single core execution.
Moreover, a real-world case study showcases the practical application and
relevance of counting balanced butterflies.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 04:57:32 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Das",
"Apurba",
""
],
[
"Abidi",
"Aman",
""
],
[
"Shingane",
"Ajinkya",
""
],
[
"Kiran",
"Mekala",
""
]
] |
new_dataset
| 0.982744 |
2308.08010
|
Sayantan Auddy
|
Sayantan Auddy, Ramit Dey, Neal J. Turner, Shantanu Basu
|
GRINN: A Physics-Informed Neural Network for solving hydrodynamic
systems in the presence of self-gravity
| null | null | null | null |
cs.LG astro-ph.GA astro-ph.IM astro-ph.SR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling self-gravitating gas flows is essential to answering many
fundamental questions in astrophysics. This spans many topics including
planet-forming disks, star-forming clouds, galaxy formation, and the
development of large-scale structures in the Universe. However, the nonlinear
interaction between gravity and fluid dynamics offers a formidable challenge to
solving the resulting time-dependent partial differential equations (PDEs) in
three dimensions (3D). By leveraging the universal approximation capabilities
of a neural network within a mesh-free framework, physics informed neural
networks (PINNs) offer a new way of addressing this challenge. We introduce the
gravity-informed neural network (GRINN), a PINN-based code, to simulate 3D
self-gravitating hydrodynamic systems. Here, we specifically study
gravitational instability and wave propagation in an isothermal gas. Our
results match a linear analytic solution to within 1\% in the linear regime and
a conventional grid code solution to within 5\% as the disturbance grows into
the nonlinear regime. We find that the computation time of the GRINN does not
scale with the number of dimensions. This is in contrast to the scaling of the
grid-based code for the hydrodynamic and self-gravity calculations as the
number of dimensions is increased. Our results show that the GRINN computation
time is longer than the grid code in one- and two- dimensional calculations but
is an order of magnitude lesser than the grid code in 3D with similar accuracy.
Physics-informed neural networks like GRINN thus show promise for advancing our
ability to model 3D astrophysical flows.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 19:50:07 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Auddy",
"Sayantan",
""
],
[
"Dey",
"Ramit",
""
],
[
"Turner",
"Neal J.",
""
],
[
"Basu",
"Shantanu",
""
]
] |
new_dataset
| 0.995106 |
2308.08046
|
Mengfan Xu
|
Mengfan Xu, Diego Klabjan
|
Regret Lower Bounds in Multi-agent Multi-armed Bandit
|
10 pages
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-armed Bandit motivates methods with provable upper bounds on regret and
also the counterpart lower bounds have been extensively studied in this
context. Recently, Multi-agent Multi-armed Bandit has gained significant
traction in various domains, where individual clients face bandit problems in a
distributed manner and the objective is the overall system performance,
typically measured by regret. While efficient algorithms with regret upper
bounds have emerged, limited attention has been given to the corresponding
regret lower bounds, except for a recent lower bound for adversarial settings,
which, however, has a gap with let known upper bounds. To this end, we herein
provide the first comprehensive study on regret lower bounds across different
settings and establish their tightness. Specifically, when the graphs exhibit
good connectivity properties and the rewards are stochastically distributed, we
demonstrate a lower bound of order $O(\log T)$ for instance-dependent bounds
and $\sqrt{T}$ for mean-gap independent bounds which are tight. Assuming
adversarial rewards, we establish a lower bound $O(T^{\frac{2}{3}})$ for
connected graphs, thereby bridging the gap between the lower and upper bound in
the prior work. We also show a linear regret lower bound when the graph is
disconnected. While previous works have explored these settings with upper
bounds, we provide a thorough study on tight lower bounds.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 21:20:24 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Xu",
"Mengfan",
""
],
[
"Klabjan",
"Diego",
""
]
] |
new_dataset
| 0.987109 |
2308.08058
|
Nathaniel Hanson
|
Nathaniel Hanson, Benjamin Pyatski, Samuel Hibbard, Charles DiMarzio,
Ta\c{s}k{\i}n Pad{\i}r
|
Hyper-Drive: Visible-Short Wave Infrared Hyperspectral Imaging Datasets
for Robots in Unstructured Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Hyperspectral sensors have enjoyed widespread use in the realm of remote
sensing; however, they must be adapted to a format in which they can be
operated onboard mobile robots. In this work, we introduce a first-of-its-kind
system architecture with snapshot hyperspectral cameras and point spectrometers
to efficiently generate composite datacubes from a robotic base. Our system
collects and registers datacubes spanning the visible to shortwave infrared
(660-1700 nm) spectrum while simultaneously capturing the ambient solar
spectrum reflected off a white reference tile. We collect and disseminate a
large dataset of more than 500 labeled datacubes from on-road and off-road
terrain compliant with the ATLAS ontology to further the integration and
demonstration of hyperspectral imaging (HSI) as beneficial in terrain class
separability. Our analysis of this data demonstrates that HSI is a significant
opportunity to increase understanding of scene composition from a robot-centric
context. All code and data are open source online:
https://river-lab.github.io/hyper_drive_data
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 22:01:00 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Hanson",
"Nathaniel",
""
],
[
"Pyatski",
"Benjamin",
""
],
[
"Hibbard",
"Samuel",
""
],
[
"DiMarzio",
"Charles",
""
],
[
"Padır",
"Taşkın",
""
]
] |
new_dataset
| 0.998832 |
2308.08089
|
Shengming Yin
|
Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong
Ming, Nan Duan
|
DragNUWA: Fine-grained Control in Video Generation by Integrating Text,
Image, and Trajectory
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controllable video generation has gained significant attention in recent
years. However, two main limitations persist: Firstly, most existing works
focus on either text, image, or trajectory-based control, leading to an
inability to achieve fine-grained control in videos. Secondly, trajectory
control research is still in its early stages, with most experiments being
conducted on simple datasets like Human3.6M. This constraint limits the models'
capability to process open-domain images and effectively handle complex curved
trajectories. In this paper, we propose DragNUWA, an open-domain
diffusion-based video generation model. To tackle the issue of insufficient
control granularity in existing works, we simultaneously introduce text, image,
and trajectory information to provide fine-grained control over video content
from semantic, spatial, and temporal perspectives. To resolve the problem of
limited open-domain trajectory control in current research, We propose
trajectory modeling with three aspects: a Trajectory Sampler (TS) to enable
open-domain control of arbitrary trajectories, a Multiscale Fusion (MF) to
control trajectories in different granularities, and an Adaptive Training (AT)
strategy to generate consistent videos following trajectories. Our experiments
validate the effectiveness of DragNUWA, demonstrating its superior performance
in fine-grained control in video generation. The homepage link is
\url{https://www.microsoft.com/en-us/research/project/dragnuwa/}
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 01:43:41 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Yin",
"Shengming",
""
],
[
"Wu",
"Chenfei",
""
],
[
"Liang",
"Jian",
""
],
[
"Shi",
"Jie",
""
],
[
"Li",
"Houqiang",
""
],
[
"Ming",
"Gong",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.999748 |
2308.08125
|
Running Zhao
|
Running Zhao, Jiangtao Yu, Hang Zhao and Edith C.H. Ngai
|
Radio2Text: Streaming Speech Recognition Using mmWave Radio Signals
|
Accepted by Proceedings of the ACM on Interactive, Mobile, Wearable
and Ubiquitous Technologies (ACM IMWUT/UbiComp 2023)
| null |
10.1145/3610873
| null |
cs.SD cs.CL cs.HC eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Millimeter wave (mmWave) based speech recognition provides more possibility
for audio-related applications, such as conference speech transcription and
eavesdropping. However, considering the practicality in real scenarios, latency
and recognizable vocabulary size are two critical factors that cannot be
overlooked. In this paper, we propose Radio2Text, the first mmWave-based system
for streaming automatic speech recognition (ASR) with a vocabulary size
exceeding 13,000 words. Radio2Text is based on a tailored streaming Transformer
that is capable of effectively learning representations of speech-related
features, paving the way for streaming ASR with a large vocabulary. To
alleviate the deficiency of streaming networks unable to access entire future
inputs, we propose the Guidance Initialization that facilitates the transfer of
feature knowledge related to the global context from the non-streaming
Transformer to the tailored streaming Transformer through weight inheritance.
Further, we propose a cross-modal structure based on knowledge distillation
(KD), named cross-modal KD, to mitigate the negative effect of low quality
mmWave signals on recognition performance. In the cross-modal KD, the audio
streaming Transformer provides feature and response guidance that inherit
fruitful and accurate speech information to supervise the training of the
tailored radio streaming Transformer. The experimental results show that our
Radio2Text can achieve a character error rate of 5.7% and a word error rate of
9.4% for the recognition of a vocabulary consisting of over 13,000 words.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 03:31:30 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Zhao",
"Running",
""
],
[
"Yu",
"Jiangtao",
""
],
[
"Zhao",
"Hang",
""
],
[
"Ngai",
"Edith C. H.",
""
]
] |
new_dataset
| 0.995457 |
2308.08137
|
Weiran Gou
|
Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong
and Ke Xu
|
SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision
Tasks with Real-time Performance on Mobile Device
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid development of AI hardware accelerators, applying deep
learning-based algorithms to solve various low-level vision tasks on mobile
devices has gradually become possible. However, two main problems still need to
be solved: task-specific algorithms make it difficult to integrate them into a
single neural network architecture, and large amounts of parameters make it
difficult to achieve real-time inference. To tackle these problems, we propose
a novel network, SYENet, with only $~$6K parameters, to handle multiple
low-level vision tasks on mobile devices in a real-time manner. The SYENet
consists of two asymmetrical branches with simple building blocks. To
effectively connect the results by asymmetrical branches, a Quadratic
Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new
Outlier-Aware Loss is proposed to process the image. The proposed method proves
its superior performance with the best PSNR as compared with other networks in
real-time applications such as Image Signal Processing(ISP), Low-Light
Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm
8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the
highest score in MAI 2022 Learned Smartphone ISP challenge.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 04:03:59 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Gou",
"Weiran",
""
],
[
"Yi",
"Ziyao",
""
],
[
"Xiang",
"Yan",
""
],
[
"Li",
"Shaoqing",
""
],
[
"Liu",
"Zibin",
""
],
[
"Kong",
"Dehui",
""
],
[
"Xu",
"Ke",
""
]
] |
new_dataset
| 0.998251 |
2308.08147
|
Man Luo
|
Srija Macherla, Man Luo, Mihir Parmar, Chitta Baral
|
MDDial: A Multi-turn Differential Diagnosis Dialogue Dataset with
Reliability Evaluation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Dialogue systems for Automatic Differential Diagnosis (ADD) have a wide range
of real-life applications. These dialogue systems are promising for providing
easy access and reducing medical costs. Building end-to-end ADD dialogue
systems requires dialogue training datasets. However, to the best of our
knowledge, there is no publicly available ADD dialogue dataset in English
(although non-English datasets exist). Driven by this, we introduce MDDial, the
first differential diagnosis dialogue dataset in English which can aid to build
and evaluate end-to-end ADD dialogue systems. Additionally, earlier studies
present the accuracy of diagnosis and symptoms either individually or as a
combined weighted score. This method overlooks the connection between the
symptoms and the diagnosis. We introduce a unified score for the ADD system
that takes into account the interplay between symptoms and diagnosis. This
score also indicates the system's reliability. To the end, we train two
moderate-size of language models on MDDial. Our experiments suggest that while
these language models can perform well on many natural language understanding
tasks, including dialogue tasks in the general domain, they struggle to relate
relevant symptoms and disease and thus have poor performance on MDDial. MDDial
will be released publicly to aid the study of ADD dialogue research.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 04:56:55 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Macherla",
"Srija",
""
],
[
"Luo",
"Man",
""
],
[
"Parmar",
"Mihir",
""
],
[
"Baral",
"Chitta",
""
]
] |
new_dataset
| 0.998698 |
2308.08156
|
Tiberiu Sosea
|
Tiberiu Sosea, Junyi Jessy Li, Cornelia Caragea
|
Sarcasm Detection in a Disaster Context
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
During natural disasters, people often use social media platforms such as
Twitter to ask for help, to provide information about the disaster situation,
or to express contempt about the unfolding event or public policies and
guidelines. This contempt is in some cases expressed as sarcasm or irony.
Understanding this form of speech in a disaster-centric context is essential to
improving natural language understanding of disaster-related tweets. In this
paper, we introduce HurricaneSARC, a dataset of 15,000 tweets annotated for
intended sarcasm, and provide a comprehensive investigation of sarcasm
detection using pre-trained language models. Our best model is able to obtain
as much as 0.70 F1 on our dataset. We also demonstrate that the performance on
HurricaneSARC can be improved by leveraging intermediate task transfer
learning. We release our data and code at
https://github.com/tsosea2/HurricaneSarc.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 05:58:12 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Sosea",
"Tiberiu",
""
],
[
"Li",
"Junyi Jessy",
""
],
[
"Caragea",
"Cornelia",
""
]
] |
new_dataset
| 0.999885 |
2308.08181
|
Jie Li
|
Mengjie Du and Xiang Fang and Jie Li
|
ChinaTelecom System Description to VoxCeleb Speaker Recognition
Challenge 2023
|
System description of VoxSRC 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This technical report describes ChinaTelecom system for Track 1 (closed) of
the VoxCeleb2023 Speaker Recognition Challenge (VoxSRC 2023). Our system
consists of several ResNet variants trained only on VoxCeleb2, which were fused
for better performance later. Score calibration was also applied for each
variant and the fused system. The final submission achieved minDCF of 0.1066
and EER of 1.980%.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 07:21:01 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Du",
"Mengjie",
""
],
[
"Fang",
"Xiang",
""
],
[
"Li",
"Jie",
""
]
] |
new_dataset
| 0.99682 |
2308.08256
|
Philipp M\"uller
|
Philipp M\"uller, Michal Balazia, Tobias Baur, Michael Dietz,
Alexander Heimerl, Dominik Schiller, Mohammed Guermal, Dominike Thomas,
Fran\c{c}ois Br\'emond, Jan Alexandersson, Elisabeth Andr\'e, Andreas Bulling
|
MultiMediate'23: Engagement Estimation and Bodily Behaviour Recognition
in Social Interactions
|
ACM MultiMedia'23
| null |
10.1145/3581783.3613851
| null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic analysis of human behaviour is a fundamental prerequisite for the
creation of machines that can effectively interact with- and support humans in
social interactions. In MultiMediate'23, we address two key human social
behaviour analysis tasks for the first time in a controlled challenge:
engagement estimation and bodily behaviour recognition in social interactions.
This paper describes the MultiMediate'23 challenge and presents novel sets of
annotations for both tasks. For engagement estimation we collected novel
annotations on the NOvice eXpert Interaction (NOXI) database. For bodily
behaviour recognition, we annotated test recordings of the MPIIGroupInteraction
corpus with the BBSI annotation scheme. In addition, we present baseline
results for both challenge tasks.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 09:47:52 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Müller",
"Philipp",
""
],
[
"Balazia",
"Michal",
""
],
[
"Baur",
"Tobias",
""
],
[
"Dietz",
"Michael",
""
],
[
"Heimerl",
"Alexander",
""
],
[
"Schiller",
"Dominik",
""
],
[
"Guermal",
"Mohammed",
""
],
[
"Thomas",
"Dominike",
""
],
[
"Brémond",
"François",
""
],
[
"Alexandersson",
"Jan",
""
],
[
"André",
"Elisabeth",
""
],
[
"Bulling",
"Andreas",
""
]
] |
new_dataset
| 0.990629 |
2308.08258
|
Edith Tretschk
|
Edith Tretschk, Vladislav Golyanik, Michael Zollhoefer, Aljaz Bozic,
Christoph Lassner, Christian Theobalt
|
SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes
|
Project page: https://vcai.mpi-inf.mpg.de/projects/scenerflow/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing methods for the 4D reconstruction of general, non-rigidly deforming
objects focus on novel-view synthesis and neglect correspondences. However,
time consistency enables advanced downstream tasks like 3D editing, motion
analysis, or virtual-asset creation. We propose SceNeRFlow to reconstruct a
general, non-rigid scene in a time-consistent manner. Our dynamic-NeRF method
takes multi-view RGB videos and background images from static cameras with
known camera parameters as input. It then reconstructs the deformations of an
estimated canonical model of the geometry and appearance in an online fashion.
Since this canonical model is time-invariant, we obtain correspondences even
for long-term, long-range motions. We employ neural scene representations to
parametrize the components of our method. Like prior dynamic-NeRF methods, we
use a backwards deformation model. We find non-trivial adaptations of this
model necessary to handle larger motions: We decompose the deformations into a
strongly regularized coarse component and a weakly regularized fine component,
where the coarse component also extends the deformation field into the space
surrounding the object, which enables tracking over time. We show
experimentally that, unlike prior work that only handles small motion, our
method enables the reconstruction of studio-scale motions.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 09:50:35 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Tretschk",
"Edith",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Zollhoefer",
"Michael",
""
],
[
"Bozic",
"Aljaz",
""
],
[
"Lassner",
"Christoph",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.997111 |
2308.08267
|
Konstantinos Ntontin
|
Konstantinos Ntontin, Alexandros-Apostolos A. Boulogeorgos, Sergi
Abadal, Agapi Mesodiakaki, Symeon Chatzinotas, Bj\"orn Ottersten
|
Perpetual Reconfigurable Intelligent Surfaces Through In-Band Energy
Harvesting: Architectures, Protocols, and Challenges
|
7 pages, 8 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Reconfigurable intelligent surfaces (RISs) are considered to be a key enabler
of highly energy-efficient 6G and beyond networks. This property arises from
the absence of power amplifiers in the structure, in contrast to active nodes,
such as small cells and relays. However, still an amount of power is required
for their operation. To improve their energy efficiency further, we propose the
notion of perpetual RISs, which secure the power needed to supply their
functionalities through wireless energy harvesting of the impinging transmitted
electromagnetic signals. Towards this, we initially explain the rationale
behind such RIS capability and proceed with the presentation of the main RIS
controller architecture that can realize this vision under an in-band energy
harvesting consideration. Furthermore, we present a typical energy-harvesting
architecture followed by two harvesting protocols. Subsequently, we study the
performance of the two protocols under a typical communications scenario.
Finally, we elaborate on the main research challenges governing the realization
of large-scale networks with perpetual RISs.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 10:07:45 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Ntontin",
"Konstantinos",
""
],
[
"Boulogeorgos",
"Alexandros-Apostolos A.",
""
],
[
"Abadal",
"Sergi",
""
],
[
"Mesodiakaki",
"Agapi",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Ottersten",
"Björn",
""
]
] |
new_dataset
| 0.99418 |
2308.08271
|
Yianni Karabatis
|
Yianni Karabatis, Xiaomin Lin, Nitin J. Sanket, Michail G. Lagoudakis,
Yiannis Aloimonos
|
Detecting Olives with Synthetic or Real Data? Olive the Above
| null |
In Proceedings of 2023 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS)
| null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern robotics has enabled the advancement in yield estimation for precision
agriculture. However, when applied to the olive industry, the high variation of
olive colors and their similarity to the background leaf canopy presents a
challenge. Labeling several thousands of very dense olive grove images for
segmentation is a labor-intensive task. This paper presents a novel approach to
detecting olives without the need to manually label data. In this work, we
present the world's first olive detection dataset comprised of synthetic and
real olive tree images. This is accomplished by generating an auto-labeled
photorealistic 3D model of an olive tree. Its geometry is then simplified for
lightweight rendering purposes. In addition, experiments are conducted with a
mix of synthetically generated and real images, yielding an improvement of up
to 66% compared to when only using a small sample of real data. When access to
real, human-labeled data is limited, a combination of mostly synthetic data and
a small amount of real data can enhance olive detection.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 10:19:16 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Karabatis",
"Yianni",
""
],
[
"Lin",
"Xiaomin",
""
],
[
"Sanket",
"Nitin J.",
""
],
[
"Lagoudakis",
"Michail G.",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.964906 |
2308.08371
|
Richard Nordsieck
|
Richard Nordsieck, Andr\'e Schweizer, Michael Heider, J\"org H\"ahner
|
PDPK: A Framework to Synthesise Process Data and Corresponding
Procedural Knowledge for Manufacturing
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Procedural knowledge describes how to accomplish tasks and mitigate problems.
Such knowledge is commonly held by domain experts, e.g. operators in
manufacturing who adjust parameters to achieve quality targets. To the best of
our knowledge, no real-world datasets containing process data and corresponding
procedural knowledge are publicly available, possibly due to corporate
apprehensions regarding the loss of knowledge advances. Therefore, we provide a
framework to generate synthetic datasets that can be adapted to different
domains. The design choices are inspired by two real-world datasets of
procedural knowledge we have access to. Apart from containing representations
of procedural knowledge in Resource Description Framework (RDF)-compliant
knowledge graphs, the framework simulates parametrisation processes and
provides consistent process data. We compare established embedding methods on
the resulting knowledge graphs, detailing which out-of-the-box methods have the
potential to represent procedural knowledge. This provides a baseline which can
be used to increase the comparability of future work. Furthermore, we validate
the overall characteristics of a synthesised dataset by comparing the results
to those achievable on a real-world dataset. The framework and evaluation code,
as well as the dataset used in the evaluation, are available open source.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 13:50:23 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Nordsieck",
"Richard",
""
],
[
"Schweizer",
"André",
""
],
[
"Heider",
"Michael",
""
],
[
"Hähner",
"Jörg",
""
]
] |
new_dataset
| 0.999538 |
2308.08401
|
Aaron Johnson
|
James Kyle, Justin K. Yim, Kendall Hart, Sarah Bergbreiter, and Aaron
M. Johnson
|
The Simplest Walking Robot: A bipedal robot with one actuator and two
rigid bodies
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the design and experimental results of the first 1-DOF,
hip-actuated bipedal robot. While passive dynamic walking is simple by nature,
many existing bipeds inspired by this form of walking are complex in control,
mechanical design, or both. Our design using only two rigid bodies connected by
a single motor aims to enable exploration of walking at smaller sizes where
more complex designs cannot be constructed. The walker, "Mugatu", is
self-contained and autonomous, open-loop stable over a range of input
parameters, able to stop and start from standing, and able to control its
heading left and right. We analyze the mechanical design and distill down a set
of design rules that enable these behaviors. Experimental evaluations measure
speed, energy consumption, and steering.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 14:41:30 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Kyle",
"James",
""
],
[
"Yim",
"Justin K.",
""
],
[
"Hart",
"Kendall",
""
],
[
"Bergbreiter",
"Sarah",
""
],
[
"Johnson",
"Aaron M.",
""
]
] |
new_dataset
| 0.999741 |
2308.08414
|
Xiao Liu
|
Guangyi Chen, Xiao Liu, Guangrun Wang, Kun Zhang, Philip H.S.Torr,
Xiao-Ping Zhang, Yansong Tang
|
Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-language pre-trained models have shown remarkable success in guiding
video question-answering (VideoQA) tasks. However, due to the length of video
sequences, training large-scale video-based models incurs considerably higher
costs than training image-based ones. This motivates us to leverage the
knowledge from image-based pretraining, despite the obvious gaps between image
and video domains. To bridge these gaps, in this paper, we propose Tem-Adapter,
which enables the learning of temporal dynamics and complex semantics by a
visual Temporal Aligner and a textual Semantic Aligner. Unlike conventional
pretrained knowledge adaptation methods that only concentrate on the downstream
task objective, the Temporal Aligner introduces an extra language-guided
autoregressive task aimed at facilitating the learning of temporal
dependencies, with the objective of predicting future states based on
historical clues and language guidance that describes event progression.
Besides, to reduce the semantic gap and adapt the textual representation for
better event description, we introduce a Semantic Aligner that first designs a
template to fuse question and answer pairs as event descriptions and then
learns a Transformer decoder with the whole video sequence as guidance for
refinement. We evaluate Tem-Adapter and different pre-train transferring
methods on two VideoQA benchmarks, and the significant performance improvement
demonstrates the effectiveness of our method.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 15:00:50 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Chen",
"Guangyi",
""
],
[
"Liu",
"Xiao",
""
],
[
"Wang",
"Guangrun",
""
],
[
"Zhang",
"Kun",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Zhang",
"Xiao-Ping",
""
],
[
"Tang",
"Yansong",
""
]
] |
new_dataset
| 0.992625 |
2308.08443
|
Xuechao Zou
|
Ben Chen, Xuechao Zou, Kai Li, Yu Zhang, Junliang Xing, Pin Tao
|
High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement:
Establishing a Novel Baseline and Benchmark
|
8 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The extraction of lakes from remote sensing images is a complex challenge due
to the varied lake shapes and data noise. Current methods rely on multispectral
image datasets, making it challenging to learn lake features accurately from
pixel arrangements. This, in turn, affects model learning and the creation of
accurate segmentation masks. This paper introduces a unified prompt-based
dataset construction approach that provides approximate lake locations using
point, box, and mask prompts. We also propose a two-stage prompt enhancement
framework, LEPrompter, which involves prompt-based and prompt-free stages
during training. The prompt-based stage employs a prompt encoder to extract
prior information, integrating prompt tokens and image embeddings through self-
and cross-attention in the prompt decoder. Prompts are deactivated once the
model is trained to ensure independence during inference, enabling automated
lake extraction. Evaluations on Surface Water and Qinghai-Tibet Plateau Lake
datasets show consistent performance improvements compared to the previous
state-of-the-art method. LEPrompter achieves mIoU scores of 91.48% and 97.43%
on the respective datasets without introducing additional parameters or GFLOPs.
Supplementary materials provide the source code, pre-trained models, and
detailed user studies.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 15:51:05 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Chen",
"Ben",
""
],
[
"Zou",
"Xuechao",
""
],
[
"Li",
"Kai",
""
],
[
"Zhang",
"Yu",
""
],
[
"Xing",
"Junliang",
""
],
[
"Tao",
"Pin",
""
]
] |
new_dataset
| 0.997918 |
2308.08473
|
Le Chen
|
Le Chen, Wenhao Wu, Stephen F. Siegel, Pei-Hung Lin, Chunhua Liao
|
DataRaceBench V1.4.1 and DataRaceBench-ML V0.1: Benchmark Suites for
Data Race Detection
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Data races pose a significant threat in multi-threaded parallel applications
due to their negative impact on program correctness. DataRaceBench, an
open-source benchmark suite, is specifically crafted to assess these data race
detection tools in a systematic and measurable manner. Machine learning
techniques have recently demonstrated considerable potential in
high-performance computing (HPC) program analysis and optimization. However,
these techniques require specialized data formats for training and refinement.
This paper presents the latest update to DataRaceBench, incorporating new data
race contributions from Wu et al. \cite{wu2023model}, and introduces a derived
dataset named DataRaceBench-ML (DRB-ML) \cite{drbml}. DRB-ML aligns with the
emerging trend of machine learning and large language models. Originating from
DataRaceBench, this dataset includes detailed labels that denote the presence
of a data race and provides comprehensive details of associated variables, such
as variable names, line numbers, and the operation (read/write). Unique to
DRB-ML, we have also integrated a series of tailored prompt-response pairs
specifically designed for LLM fine-tuning.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 16:23:13 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Chen",
"Le",
""
],
[
"Wu",
"Wenhao",
""
],
[
"Siegel",
"Stephen F.",
""
],
[
"Lin",
"Pei-Hung",
""
],
[
"Liao",
"Chunhua",
""
]
] |
new_dataset
| 0.997043 |
2308.08497
|
Chenglei Shen
|
Chenglei Shen, Xiao Zhang, Wei Wei, Jun Xu
|
HyperBandit: Contextual Bandit with Hypernewtork for Time-Varying User
Preferences in Streaming Recommendation
| null | null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In real-world streaming recommender systems, user preferences often
dynamically change over time (e.g., a user may have different preferences
during weekdays and weekends). Existing bandit-based streaming recommendation
models only consider time as a timestamp, without explicitly modeling the
relationship between time variables and time-varying user preferences. This
leads to recommendation models that cannot quickly adapt to dynamic scenarios.
To address this issue, we propose a contextual bandit approach using
hypernetwork, called HyperBandit, which takes time features as input and
dynamically adjusts the recommendation model for time-varying user preferences.
Specifically, HyperBandit maintains a neural network capable of generating the
parameters for estimating time-varying rewards, taking into account the
correlation between time features and user preferences. Using the estimated
time-varying rewards, a bandit policy is employed to make online
recommendations by learning the latent item contexts. To meet the real-time
requirements in streaming recommendation scenarios, we have verified the
existence of a low-rank structure in the parameter matrix and utilize low-rank
factorization for efficient training. Theoretically, we demonstrate a sublinear
regret upper bound against the best policy. Extensive experiments on real-world
datasets show that the proposed HyperBandit consistently outperforms the
state-of-the-art baselines in terms of accumulated rewards.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 14:04:57 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Shen",
"Chenglei",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Wei",
"Wei",
""
],
[
"Xu",
"Jun",
""
]
] |
new_dataset
| 0.989239 |
2308.08544
|
Henghui Ding
|
Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Chen Change Loy
|
MeViS: A Large-scale Benchmark for Video Segmentation with Motion
Expressions
|
ICCV 2023, Project Page: https://henghuiding.github.io/MeViS/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper strives for motion expressions guided video segmentation, which
focuses on segmenting objects in video content based on a sentence describing
the motion of the objects. Existing referring video object datasets typically
focus on salient objects and use language expressions that contain excessive
static attributes that could potentially enable the target object to be
identified in a single frame. These datasets downplay the importance of motion
in video content for language-guided video object segmentation. To investigate
the feasibility of using motion expressions to ground and segment objects in
videos, we propose a large-scale dataset called MeViS, which contains numerous
motion expressions to indicate target objects in complex environments. We
benchmarked 5 existing referring video object segmentation (RVOS) methods and
conducted a comprehensive comparison on the MeViS dataset. The results show
that current RVOS methods cannot effectively address motion expression-guided
video segmentation. We further analyze the challenges and propose a baseline
approach for the proposed MeViS dataset. The goal of our benchmark is to
provide a platform that enables the development of effective language-guided
video segmentation algorithms that leverage motion expressions as a primary cue
for object segmentation in complex video scenes. The proposed MeViS dataset has
been released at https://henghuiding.github.io/MeViS.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 17:58:34 GMT"
}
] | 2023-08-17T00:00:00 |
[
[
"Ding",
"Henghui",
""
],
[
"Liu",
"Chang",
""
],
[
"He",
"Shuting",
""
],
[
"Jiang",
"Xudong",
""
],
[
"Loy",
"Chen Change",
""
]
] |
new_dataset
| 0.999906 |
1908.07198
|
Changgeng Zhang
|
Yuefan Shen, Changgeng Zhang, Hongbo Fu, Kun Zhou, Youyi Zheng
|
DeepSketchHair: Deep Sketch-based 3D Hair Modeling
| null | null |
10.1109/TVCG.2020.2968433
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present sketchhair, a deep learning based tool for interactive modeling of
3D hair from 2D sketches. Given a 3D bust model as reference, our sketching
system takes as input a user-drawn sketch (consisting of hair contour and a few
strokes indicating the hair growing direction within a hair region), and
automatically generates a 3D hair model, which matches the input sketch both
globally and locally. The key enablers of our system are two carefully designed
neural networks, namely, S2ONet, which converts an input sketch to a dense 2D
hair orientation field; and O2VNet, which maps the 2D orientation field to a 3D
vector field. Our system also supports hair editing with additional sketches in
new views. This is enabled by another deep neural network, V2VNet, which
updates the 3D vector field with respect to the new sketches. All the three
networks are trained with synthetic data generated from a 3D hairstyle
database. We demonstrate the effectiveness and expressiveness of our tool using
a variety of hairstyles and also compare our method with prior art.
|
[
{
"version": "v1",
"created": "Tue, 20 Aug 2019 07:39:21 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Shen",
"Yuefan",
""
],
[
"Zhang",
"Changgeng",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Zhou",
"Kun",
""
],
[
"Zheng",
"Youyi",
""
]
] |
new_dataset
| 0.996492 |
1909.06339
|
Xuan Lin
|
Xuan Lin, Jingwen Zhang, Junjie Shen, Gabriel Fernandez, Dennis W Hong
|
Optimization Based Motion Planning for Multi-Limbed Vertical Climbing
Robots
|
IROS 2019 Published
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion planning trajectories for a multi-limbed robot to climb up walls
requires a unique combination of constraints on torque, contact force, and
posture. This paper focuses on motion planning for one particular setup wherein
a six-legged robot braces itself between two vertical walls and climbs
vertically with end effectors that only use friction. Instead of motion
planning with a single nonlinear programming (NLP) solver, we decoupled the
problem into two parts with distinct physical meaning: torso postures and
contact forces. The first part can be formulated as either a mixed-integer
convex programming (MICP) or NLP problem, while the second part is formulated
as a series of standard convex optimization problems. Variants of the two wall
climbing problem e.g., obstacle avoidance, uneven surfaces, and angled walls,
help verify the proposed method in simulation and experimentation.
|
[
{
"version": "v1",
"created": "Fri, 13 Sep 2019 17:30:07 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 09:33:45 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Lin",
"Xuan",
""
],
[
"Zhang",
"Jingwen",
""
],
[
"Shen",
"Junjie",
""
],
[
"Fernandez",
"Gabriel",
""
],
[
"Hong",
"Dennis W",
""
]
] |
new_dataset
| 0.98713 |
2107.11881
|
Reza Faghih Mirzaee
|
Fereshteh Karimi, Reza Faghih Mirzaee, Ali Fakeri-Tabrizi, Arman Roohi
|
Ultra-Fast, High-Performance 8x8 Approximate Multipliers by a New
Multicolumn 3,3:2 Inexact Compressor and its Derivatives
|
21 Pages, 18 Figures, 6 Tables
|
International Journal of Circuit Theory and Applications, July
2023
|
10.1002/cta.3613
|
Volume 51, Issue 7
|
cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A multiplier, as a key component in many different applications, is a
time-consuming, energy-intensive computation block. Approximate computing is a
practical design paradigm that attempts to improve hardware efficacy while
keeping computation quality satisfactory. A novel multicolumn 3,3:2 inexact
compressor is presented in this paper. It takes three partial products from two
adjacent columns each for rapid partial product reduction. The proposed inexact
compressor and its derivates enable us to design a high-speed approximate
multiplier. Then, another ultra-fast, high-efficient approximate multiplier is
achieved utilizing a systematic truncation strategy. The proposed multipliers
accumulate partial products in only two stages, one fewer stage than other
approximate multipliers in the literature. Implementation results by Synopsys
Design Compiler and 45 nm technology node demonstrates nearly 11.11% higher
speed for the second proposed design over the fastest existing approximate
multiplier. Furthermore, the new approximate multipliers are applied to the
image processing application of image sharpening, and their performance in this
application is highly satisfactory. It is shown in this paper that the error
pattern of an approximate multiplier, in addition to the mean error distance
and error rate, has a direct effect on the outcomes of the image processing
application.
|
[
{
"version": "v1",
"created": "Sun, 25 Jul 2021 20:12:25 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Nov 2021 15:00:35 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Aug 2023 10:35:59 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Karimi",
"Fereshteh",
""
],
[
"Mirzaee",
"Reza Faghih",
""
],
[
"Fakeri-Tabrizi",
"Ali",
""
],
[
"Roohi",
"Arman",
""
]
] |
new_dataset
| 0.998111 |
2208.04610
|
Lin-Han Jia
|
Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Yu-Feng Li
|
LAMDA-SSL: Semi-Supervised Learning in Python
| null |
SCIENCE CHINA Information Sciences, 2023
|
10.1007/s11432-022-3804-0
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LAMDA-SSL is open-sourced on GitHub and its detailed usage documentation is
available at https://ygzwqzd.github.io/LAMDA-SSL/. This documentation
introduces LAMDA-SSL in detail from various aspects and can be divided into
four parts. The first part introduces the design idea, features and functions
of LAMDA-SSL. The second part shows the usage of LAMDA-SSL by abundant examples
in detail. The third part introduces all algorithms implemented by LAMDA-SSL to
help users quickly understand and choose SSL algorithms. The fourth part shows
the APIs of LAMDA-SSL. This detailed documentation greatly reduces the cost of
familiarizing users with LAMDA-SSL toolkit and SSL algorithms.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 09:06:48 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 08:19:32 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Jia",
"Lin-Han",
""
],
[
"Guo",
"Lan-Zhe",
""
],
[
"Zhou",
"Zhi",
""
],
[
"Li",
"Yu-Feng",
""
]
] |
new_dataset
| 0.994156 |
2210.02352
|
Zechen Xiong
|
Zechen Xiong, Yufeng Su, Hod Lipson
|
Fast Untethered Soft Robotic Crawler with Elastic Instability
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-speed locomotion of animals gives them tremendous advantages in
exploring, hunting, and escaping from predators in varying environments.
Enlightened by the fast-running gait of mammals like cheetahs and wolves, we
designed and fabricated a single-servo-driving untethered soft robot that is
capable of galloping at a speed of 313 mm/s or 1.56 body length per second
(BL/s), 5.2 times and 2.6 times faster than the reported fastest predecessors
in mm/s and BL/s, respectively, in literature. An in-plane prestressed hair
clip mechanism (HCM) made up of semi-rigid materials like plastic is used as
the supporting chassis, the compliant spine, and the muscle force amplifier of
the robot at the same time, enabling the robot to be rapid and strong. The
influence of factors including actuation frequency, substrates,
tethering/untethering, and symmetric/asymmetric actuation is explored with
experiments. Based on previous work, this paper further demonstrated the
potential of HCM in addressing the speed problem of soft robots.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 15:53:59 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 19:06:27 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Aug 2023 21:56:32 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Xiong",
"Zechen",
""
],
[
"Su",
"Yufeng",
""
],
[
"Lipson",
"Hod",
""
]
] |
new_dataset
| 0.990382 |
2212.09121
|
Yang Zhao
|
Yang Zhao and Bruno Clerckx
|
RIScatter: Unifying Backscatter Communication and Reconfigurable
Intelligent Surface
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Backscatter Communication (BackCom) nodes harvest energy from and modulate
information over external electromagnetic waves. Reconfigurable Intelligent
Surface (RIS) adapts its phase shift response to alter channel strength in
specific directions. In this paper, we show how those two seemingly different
technologies (and their derivatives) can be unified into one architecture
called RIScatter. RIScatter consists of dispersed or co-located scatter nodes,
whose reflection states are adapted to partially modulate their information and
partially engineer the wireless channel. The key is to render the probability
distribution of reflection states as a joint function of the information
source, Channel State Information (CSI), and relative priority of coexisting
links. This enables RIScatter to softly bridge BackCom and RIS; reduce to
either under specific setup; or evolve in a mixed form for heterogeneous
traffic control and universal hardware design. We also propose a low-complexity
Successive Interference Cancellation (SIC)-free receiver that exploits the
properties of RIScatter. For a single-user multi-node network, we characterize
the achievable primary-(total-)backscatter rate region by optimizing the input
distribution at scatter nodes, the active beamforming at the Access Point (AP),
and the energy decision regions at the user. Simulations demonstrate RIScatter
nodes can recycle surrounding radios for backscatter modulation and passive
beamforming.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 16:17:29 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Jan 2023 12:46:19 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Mar 2023 19:33:58 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Aug 2023 16:41:58 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Zhao",
"Yang",
""
],
[
"Clerckx",
"Bruno",
""
]
] |
new_dataset
| 0.999021 |
2301.09637
|
Chieh Hubert Lin
|
Chieh Hubert Lin, Hsin-Ying Lee, Willi Menapace, Menglei Chai,
Aliaksandr Siarohin, Ming-Hsuan Yang and Sergey Tulyakov
|
InfiniCity: Infinite-Scale City Synthesis
| null | null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Toward infinite-scale 3D city synthesis, we propose a novel framework,
InfiniCity, which constructs and renders an unconstrainedly large and
3D-grounded environment from random noises. InfiniCity decomposes the seemingly
impractical task into three feasible modules, taking advantage of both 2D and
3D data. First, an infinite-pixel image synthesis module generates
arbitrary-scale 2D maps from the bird's-eye view. Next, an octree-based voxel
completion module lifts the generated 2D map to 3D octrees. Finally, a
voxel-based neural rendering module texturizes the voxels and renders 2D
images. InfiniCity can thus synthesize arbitrary-scale and traversable 3D city
environments, and allow flexible and interactive editing from users. We
quantitatively and qualitatively demonstrate the efficacy of the proposed
framework. Project page: https://hubert0527.github.io/infinicity/
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 18:59:59 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 01:05:21 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Lin",
"Chieh Hubert",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Menapace",
"Willi",
""
],
[
"Chai",
"Menglei",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Tulyakov",
"Sergey",
""
]
] |
new_dataset
| 0.998912 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.