| column | dtype | value summary |
| --- | --- | --- |
| id | string | lengths 9–10 |
| submitter | string | lengths 2–52, nullable (⌀) |
| authors | string | lengths 4–6.51k |
| title | string | lengths 4–246 |
| comments | string | lengths 1–523, nullable (⌀) |
| journal-ref | string | lengths 4–345, nullable (⌀) |
| doi | string | lengths 11–120, nullable (⌀) |
| report-no | string | lengths 2–243, nullable (⌀) |
| categories | string | lengths 5–98 |
| license | string | 9 classes |
| abstract | string | lengths 33–3.33k |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class |
| probability | float64 | 0.95–1 |
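The rows that follow are pipe-separated records in the column order above. A minimal, hedged sketch of handling such records in Python (the field subset and the probability threshold are illustrative assumptions, not part of the dataset definition):

```python
# Records following the schema above, kept as plain dictionaries.
# The sample row is abridged from the first record below.
records = [
    {
        "id": "2309.10898",
        "title": "RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans",
        "categories": "cs.CL",
        "update_date": "2023-09-21T00:00:00",
        "prediction": "new_dataset",
        "probability": 0.995117,
    },
    # ... further rows follow the same structure
]

# Keep only high-confidence "new_dataset" predictions in the cs.CL category
# (the 0.99 cut-off is an assumption for the example).
selected = [
    r for r in records
    if r["prediction"] == "new_dataset"
    and r["probability"] >= 0.99
    and "cs.CL" in r["categories"].split()
]

for r in selected:
    print(r["id"], "-", r["title"])
```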
2309.10898
|
Bohdan Didenko
|
Bohdan Didenko (1), Andrii Sameliuk (1) ((1) WebSpellChecker LLC /
Ukraine)
|
RedPenNet for Grammatical Error Correction: Outputs to Tokens,
Attentions to Spans
| null |
@inproceedings{didenko-sameliuk-2023-redpennet, month = may, year
= "2023", publisher = "Association for Computational Linguistics", pages =
"121--131", }
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The text editing tasks, including sentence fusion, sentence splitting and
rephrasing, text simplification, and Grammatical Error Correction (GEC), share
a common trait of dealing with highly similar input and output sequences. This
area of research lies at the intersection of two well-established fields: (i)
fully autoregressive sequence-to-sequence approaches commonly used in tasks
like Neural Machine Translation (NMT) and (ii) sequence tagging techniques
commonly used to address tasks such as Part-of-speech tagging, Named-entity
recognition (NER), and similar. In the pursuit of a balanced architecture,
researchers have come up with numerous imaginative and unconventional
solutions, which we discuss in the Related Works section. Our approach to
addressing text editing tasks is called RedPenNet; it aims to reduce the
architectural and parametric redundancies present in specific
Sequence-To-Edits models while preserving their semi-autoregressive advantages. Our
models achieve $F_{0.5}$ scores of 77.60 on the BEA-2019 (test) benchmark, which can be
considered state-of-the-art with the sole exception of system combinations, and
67.71 on the UAGEC+Fluency (test) benchmark.
This research was conducted in the context of the UNLP 2023 workshop,
where it was presented as a paper for the Shared Task in Grammatical
Error Correction (GEC) for Ukrainian. This study aims to apply the RedPenNet
approach to the GEC problem in the Ukrainian language.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 19:48:30 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Didenko",
"Bohdan",
""
],
[
"Sameliuk",
"Andrii",
""
]
] |
new_dataset
| 0.995117 |
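As a hedged illustration of the Sequence-To-Edits view referenced in the RedPenNet abstract above (this is not the paper's model; the example sentence and the use of Python's difflib are assumptions made for a small runnable sketch), span-level edits can be recovered from a source/corrected pair as follows:

```python
# Text editing tasks such as GEC can be cast as predicting span-level edits
# over the source instead of regenerating every output token. difflib here
# plays the role of an oracle that exposes such edits for a training pair.
from difflib import SequenceMatcher

src = "She go to school every days .".split()
tgt = "She goes to school every day .".split()

edits = []
for tag, i1, i2, j1, j2 in SequenceMatcher(None, src, tgt).get_opcodes():
    if tag != "equal":                       # only changed spans become edits
        edits.append((tag, (i1, i2), tgt[j1:j2]))

print(edits)
# [('replace', (1, 2), ['goes']), ('replace', (5, 6), ['day'])]
```

A semi-autoregressive editor predicts such span edits directly, so unchanged spans do not have to be regenerated token by token.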
2309.10924
|
Alexander Krawciw
|
Alexander Krawciw, Jordy Sehn and Timothy D. Barfoot
|
Change of Scenery: Unsupervised LiDAR Change Detection for Mobile Robots
|
7 pages (6 content, 1 references). 7 figures, submitted to the 2024
IEEE International Conference on Robotics and Automation (ICRA)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a fully unsupervised deep change detection approach for
mobile robots with 3D LiDAR. In unstructured environments, it is infeasible to
define a closed set of semantic classes. Instead, semantic segmentation is
reformulated as binary change detection. We develop a neural network,
RangeNetCD, that uses an existing point-cloud map and a live LiDAR scan to
detect scene changes with respect to the map. Using a novel loss function,
existing point-cloud semantic segmentation networks can be trained to perform
change detection without any labels or assumptions about local semantics. We
demonstrate the performance of this approach on data from challenging terrains;
mean intersection over union (mIoU) scores range between 67.4% and 82.2%
depending on the amount of environmental structure. This outperforms the
geometric baseline used in all experiments. The neural network runs at more than
10 Hz and is integrated into a robot's autonomy stack to allow safe navigation
around obstacles that intersect the planned path. In addition, a novel method
for the rapid automated acquisition of per-point ground-truth labels is
described. Covering changed parts of the scene with retroreflective materials
and applying a threshold filter to the intensity channel of the LiDAR allows
for quantitative evaluation of the change detector.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 20:54:26 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Krawciw",
"Alexander",
""
],
[
"Sehn",
"Jordy",
""
],
[
"Barfoot",
"Timothy D.",
""
]
] |
new_dataset
| 0.997938 |
2309.10945
|
Paulo Pirozelli
|
Paulo Pirozelli, Marcos M. Jos\'e, Igor Silveira, Fl\'avio Nakasato,
Sarajane M. Peres, Anarosa A. F. Brand\~ao, Anna H. R. Costa, Fabio G. Cozman
|
Benchmarks for Pir\'a 2.0, a Reading Comprehension Dataset about the
Ocean, the Brazilian Coast, and Climate Change
|
Accepted at Data Intelligence. Online ISSN 2641-435X
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Pir\'a is a reading comprehension dataset focused on the ocean, the Brazilian
coast, and climate change, built from a collection of scientific abstracts and
reports on these topics. This dataset represents a versatile language resource,
particularly useful for testing the ability of current machine learning models
to acquire expert scientific knowledge. Despite its potential, a detailed set
of baselines has not yet been developed for Pir\'a. By creating these
baselines, researchers can more easily utilize Pir\'a as a resource for testing
machine learning models across a wide range of question answering tasks. In
this paper, we define six benchmarks over the Pir\'a dataset, covering closed
generative question answering, machine reading comprehension, information
retrieval, open question answering, answer triggering, and multiple choice
question answering. As part of this effort, we have also produced a curated
version of the original dataset, where we fixed a number of grammar issues,
repetitions, and other shortcomings. Furthermore, the dataset has been extended
in several new directions, so as to address the aforementioned benchmarks:
translation of supporting texts from English into Portuguese, classification
labels for answerability, automatic paraphrases of questions and answers, and
multiple choice candidates. The results described in this paper provide several
points of reference for researchers interested in exploring the challenges
provided by the Pir\'a dataset.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 21:56:45 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Pirozelli",
"Paulo",
""
],
[
"José",
"Marcos M.",
""
],
[
"Silveira",
"Igor",
""
],
[
"Nakasato",
"Flávio",
""
],
[
"Peres",
"Sarajane M.",
""
],
[
"Brandão",
"Anarosa A. F.",
""
],
[
"Costa",
"Anna H. R.",
""
],
[
"Cozman",
"Fabio G.",
""
]
] |
new_dataset
| 0.99983 |
2309.10972
|
Sriram Ravindran
|
Sriram Ravindran, Debraj Basu
|
SEMPART: Self-supervised Multi-resolution Partitioning of Image
Semantics
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Accurately determining salient regions of an image is challenging when
labeled data is scarce. DINO-based self-supervised approaches have recently
leveraged meaningful image semantics captured by patch-wise features for
locating foreground objects. Recent methods have also incorporated intuitive
priors and demonstrated value in unsupervised methods for object partitioning.
In this paper, we propose SEMPART, which jointly infers coarse and fine
bi-partitions over an image's DINO-based semantic graph. Furthermore, SEMPART
preserves fine boundary details using graph-driven regularization and
successfully distills the coarse mask semantics into the fine mask. Our salient
object detection and single object localization findings suggest that SEMPART
produces high-quality masks rapidly without additional post-processing and
benefits from co-optimizing the coarse and fine branches.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 00:07:30 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Ravindran",
"Sriram",
""
],
[
"Basu",
"Debraj",
""
]
] |
new_dataset
| 0.991355 |
2309.11006
|
Nastaran Darabi
|
Nastaran Darabi, Sina Tayebati, Sureshkumar S., Sathya Ravi, Theja
Tulabandhula, and Amit R. Trivedi
|
STARNet: Sensor Trustworthiness and Anomaly Recognition via Approximated
Likelihood Regret for Robust Edge Autonomy
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Complex sensors such as LiDAR, RADAR, and event cameras have proliferated in
autonomous robotics to enhance perception and understanding of the environment.
Meanwhile, these sensors are also vulnerable to diverse failure mechanisms that
can intricately interact with their operation environment. In parallel, the
limited availability of training data on complex sensors also affects the
reliability of their deep learning-based prediction flow, where their
prediction models can fail to generalize to environments not adequately
captured in the training set. To address these reliability concerns, this paper
introduces STARNet, a Sensor Trustworthiness and Anomaly Recognition Network
designed to detect untrustworthy sensor streams that may arise from sensor
malfunctions and/or challenging environments. We specifically benchmark STARNet
on LiDAR and camera data. STARNet employs the concept of approximated
likelihood regret, a gradient-free framework tailored for low-complexity
hardware, especially those with only fixed-point precision capabilities.
Through extensive simulations, we demonstrate the efficacy of STARNet in
detecting untrustworthy sensor streams in unimodal and multimodal settings. In
particular, the network shows superior performance in addressing internal
sensor failures, such as cross-sensor interference and crosstalk. In diverse
test scenarios involving adverse weather and sensor malfunctions, we show that
STARNet enhances prediction accuracy by approximately 10% by filtering out
untrustworthy sensor streams. STARNet is publicly available at
\url{https://github.com/sinatayebati/STARNet}.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 02:20:11 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Darabi",
"Nastaran",
""
],
[
"Tayebati",
"Sina",
""
],
[
"S.",
"Sureshkumar",
""
],
[
"Ravi",
"Sathya",
""
],
[
"Tulabandhula",
"Theja",
""
],
[
"Trivedi",
"Amit R.",
""
]
] |
new_dataset
| 0.950737 |
2309.11020
|
Quan Xiong
|
Quan Xiong, Xuanyi Zhou, Jonathan William Ambrose, Raye Chen-Hua Yeow
|
An Amphibious Fully-Soft Miniature Crawling Robot Powered by
Electrohydraulic Fluid Kinetic Energy
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Miniature locomotion robots with the ability to navigate confined
environments show great promise for a wide range of tasks, including search and
rescue operations. Soft miniature locomotion robots, as a burgeoning field,
have attracted significant research interest due to their exceptional terrain
adaptability and safety features. In this paper, we introduce a fully-soft
miniature crawling robot directly powered by fluid kinetic energy generated by
an electrohydraulic actuator. Through optimization of the operating voltage and
design parameters, the crawling velocity of the robot is dramatically enhanced,
reaching 16 mm/s. The optimized robot weighs 6.3 g and measures 5 cm in length,
5 cm in width, and 6 mm in height. By combining two robots in parallel, the
assembly can achieve a turning rate of approximately 3 degrees/s. Additionally, by
reconfiguring the distribution of electrodes in the electrohydraulic actuator,
the robot can achieve 2 degrees-of-freedom translational motion, improving its
maneuverability in narrow spaces. Finally, we demonstrate the use of a soft
water-proof skin for underwater locomotion and actuation. In comparison with
other soft miniature crawling robots, our robot with full softness can achieve
relatively high crawling velocity as well as increased robustness and recovery.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 02:48:54 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Xiong",
"Quan",
""
],
[
"Zhou",
"Xuanyi",
""
],
[
"Ambrose",
"Jonathan William",
""
],
[
"Yeow",
"Raye Chen-Hua",
""
]
] |
new_dataset
| 0.999168 |
2309.11032
|
Zhirui Sun
|
Zhirui Sun, Boshu Lei, Peijia Xie, Fugang Liu, Junjie Gao, Ying Zhang
and Jiankun Wang
|
Multi-Risk-RRT: An Efficient Motion Planning Algorithm for Robotic
Autonomous Luggage Trolley Collection at Airports
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots have become increasingly prevalent in dynamic and crowded environments
such as airports and shopping malls. In these scenarios, the critical
challenges for robot navigation are reliability and timely arrival at
predetermined destinations. While existing risk-based motion planning
algorithms effectively reduce collision risks with static and dynamic
obstacles, there is still a need for significant performance improvements.
Specifically, dynamic environments demand more rapid responses and more robust
planning. To address this gap, we introduce a novel risk-based
multi-directional sampling algorithm, Multi-directional Risk-based
Rapidly-exploring Random Tree (Multi-Risk-RRT). Unlike traditional algorithms
that solely rely on a rooted tree or double trees for state space exploration,
our approach incorporates multiple sub-trees. Each sub-tree independently
explores its surrounding environment. At the same time, the primary rooted tree
collects the heuristic information from these sub-trees, facilitating rapid
progress toward the goal state. Our evaluations, including simulation and
real-world environmental studies, demonstrate that Multi-Risk-RRT outperforms
existing unidirectional and bi-directional risk-based algorithms in planning
efficiency and robustness.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 03:28:22 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Sun",
"Zhirui",
""
],
[
"Lei",
"Boshu",
""
],
[
"Xie",
"Peijia",
""
],
[
"Liu",
"Fugang",
""
],
[
"Gao",
"Junjie",
""
],
[
"Zhang",
"Ying",
""
],
[
"Wang",
"Jiankun",
""
]
] |
new_dataset
| 0.996562 |
2309.11063
|
Hayate Iso
|
Haopeng Zhang, Hayate Iso, Sairam Gurajada, Nikita Bhutani
|
XATU: A Fine-grained Instruction-based Benchmark for Explainable Text
Updates
|
Work in progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text editing is a crucial task that involves modifying text to better align
with user intents. However, existing text editing benchmark datasets have
limitations in providing only coarse-grained instructions. Consequently,
although the edited output may seem reasonable, it often deviates from the
intended changes outlined in the gold reference, resulting in low evaluation
scores. To comprehensively investigate the text editing capabilities of large
language models, this paper introduces XATU, the first benchmark specifically
designed for fine-grained instruction-based explainable text editing. XATU
covers a wide range of topics and text types, incorporating lexical, syntactic,
semantic, and knowledge-intensive edits. To enhance interpretability, we
leverage high-quality data sources and human annotation, resulting in a
benchmark that includes fine-grained instructions and gold-standard edit
explanations. By evaluating existing open and closed large language models
against our benchmark, we demonstrate the effectiveness of instruction tuning
and the impact of underlying architecture across various editing tasks.
Furthermore, extensive experimentation reveals the significant role of
explanations in fine-tuning language models for text editing tasks. The
benchmark will be open-sourced to support reproduction and facilitate future
research.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 04:58:59 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Zhang",
"Haopeng",
""
],
[
"Iso",
"Hayate",
""
],
[
"Gurajada",
"Sairam",
""
],
[
"Bhutani",
"Nikita",
""
]
] |
new_dataset
| 0.999813 |
2309.11093
|
Haven Kim
|
Haven Kim, Jongmin Jung, Dasaem Jeong, and Juhan Nam
|
K-pop Lyric Translation: Dataset, Analysis, and Neural-Modelling
| null | null | null | null |
cs.CL cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Lyric translation, a field studied for over a century, is now attracting
computational linguistics researchers. We identified two limitations in
previous studies. First, lyric translation studies have predominantly focused
on Western genres and languages, with no previous study centering on K-pop
despite its popularity. Second, the field of lyric translation suffers from a
lack of publicly available datasets; to the best of our knowledge, no such
dataset exists. To broaden the scope of genres and languages in lyric
translation studies, we introduce a novel singable lyric translation dataset,
approximately 89\% of which consists of K-pop song lyrics. This dataset aligns
Korean and English lyrics line-by-line and section-by-section. We leveraged
this dataset to unveil unique characteristics of K-pop lyric translation,
distinguishing it from other extensively studied genres, and to construct a
neural lyric translation model, thereby underscoring the importance of a
dedicated dataset for singable lyric translations.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 06:54:55 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Kim",
"Haven",
""
],
[
"Jung",
"Jongmin",
""
],
[
"Jeong",
"Dasaem",
""
],
[
"Nam",
"Juhan",
""
]
] |
new_dataset
| 0.999842 |
2309.11118
|
Federico Bianchi
|
Federico Bianchi, Alessandro Falsone, Riccardo Vignali
|
Vehicle-to-Grid and ancillary services: a profitability analysis under
uncertainty
|
Accepted by IFAC for publication under a Creative Commons Licence
CC-BY-NC-ND
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid and massive diffusion of electric vehicles poses new challenges to
the electric system, which must be able to supply these new loads, but at the
same time opens up new opportunities thanks to the possible provision of
ancillary services. Indeed, in the so-called Vehicle-to-Grid (V2G) set-up, the
charging power can be modulated throughout the day so that a fleet of vehicles
can absorb an excess of power from the grid or provide extra power during a
shortage. To this end, many works in the literature focus on optimizing each
vehicle's daily charging profile to offer the requested ancillary services
while guaranteeing a charged battery for each vehicle at the end of the day.
However, the size of the economic benefits related to the provision of
ancillary services varies significantly with the modeling approaches, different
assumptions, and considered scenarios. In this paper we propose a profitability
analysis with reference to a recently proposed framework for V2G optimal
operation in presence of uncertainty. We provide necessary and sufficient
conditions for profitability in a simplified case and we show via simulation
that they also hold for the general case.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 07:50:47 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Bianchi",
"Federico",
""
],
[
"Falsone",
"Alessandro",
""
],
[
"Vignali",
"Riccardo",
""
]
] |
new_dataset
| 0.985688 |
2309.11142
|
Mario Campos Soberanis
|
Carlos Morales-Torres, Mario Campos-Soberanis, Diego Campos-Sobrino
|
Prototype of a robotic system to assist the learning process of English
language with text-generation through DNN
|
Paper presented in the Mexican International Conference on Artificial
Intelligence 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, there has been significant progress in the field of Natural
Language Processing (NLP) for performing multiple tasks, including English
Language Teaching (ELT). An effective strategy to support the learning process
uses interactive devices to engage learners in their self-learning process. In
this work, we present a working prototype of a humanoid robotic system to
assist English language self-learners through text generation using Long
Short-Term Memory (LSTM) neural networks. The learners interact with the system
through a Graphical User Interface that generates text according to the English
level of the user. The experiments were conducted with English learners, and
the results were measured according to the International English Language
Testing System (IELTS) rubric. Preliminary results show an increase in the
Grammatical Range of learners who interacted with the system.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 08:39:51 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Morales-Torres",
"Carlos",
""
],
[
"Campos-Soberanis",
"Mario",
""
],
[
"Campos-Sobrino",
"Diego",
""
]
] |
new_dataset
| 0.998766 |
2309.11160
|
Nian Liu
|
Nian Liu, Kepan Nan, Wangbo Zhao, Yuanwei Liu, Xiwen Yao, Salman Khan,
Hisham Cholakkal, Rao Muhammad Anwer, Junwei Han, Fahad Shahbaz Khan
|
Multi-grained Temporal Prototype Learning for Few-shot Video Object
Segmentation
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Few-Shot Video Object Segmentation (FSVOS) aims to segment objects in a query
video with the same category defined by a few annotated support images.
However, this task was seldom explored. In this work, based on IPMT, a
state-of-the-art few-shot image segmentation method that combines external
support guidance information with adaptive query guidance cues, we propose to
leverage multi-grained temporal guidance information for handling the temporal
correlation nature of video data. We decompose the query video information into
a clip prototype and a memory prototype for capturing local and long-term
internal temporal guidance, respectively. Frame prototypes are further used for
each frame independently to handle fine-grained adaptive guidance and enable
bidirectional clip-frame prototype communication. To reduce the influence of
noisy memory, we propose to leverage the structural similarity relation among
different predicted regions and the support for selecting reliable memory
frames. Furthermore, a new segmentation loss is also proposed to enhance the
category discriminability of the learned prototypes. Experimental results
demonstrate that our proposed video IPMT model significantly outperforms
previous models on two benchmark datasets. Code is available at
https://github.com/nankepan/VIPMT.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 09:16:34 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Liu",
"Nian",
""
],
[
"Nan",
"Kepan",
""
],
[
"Zhao",
"Wangbo",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Yao",
"Xiwen",
""
],
[
"Khan",
"Salman",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Han",
"Junwei",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
new_dataset
| 0.979985 |
2309.11170
|
Zheng Dang
|
Zheng Dang, Mathieu Salzmann
|
AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud
Registration
|
accepted by ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In the current deep learning paradigm, the amount and quality of training
data are as critical as the network architecture and its training details.
However, collecting, processing, and annotating real data at scale is
difficult, expensive, and time-consuming, particularly for tasks such as 3D
object registration. While synthetic datasets can be created, they require
expertise to design and include a limited number of categories. In this paper,
we introduce a new approach called AutoSynth, which automatically generates 3D
training data for point cloud registration. Specifically, AutoSynth
automatically curates an optimal dataset by exploring a search space
encompassing millions of potential datasets with diverse 3D shapes at a low
cost. To achieve this, we generate synthetic 3D datasets by assembling shape
primitives, and develop a meta-learning strategy to search for the best
training data for 3D registration on real point clouds. For this search to
remain tractable, we replace the point cloud registration network with a much
smaller surrogate network, leading to a $4056.43$ times speedup. We demonstrate
the generality of our approach by implementing it with two different point
cloud registration networks, BPNet and IDAM. Our results on TUD-L, LINEMOD and
Occluded-LINEMOD evidence that a neural network trained on our searched dataset
yields consistently better performance than the same one trained on the widely
used ModelNet40 dataset.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 09:29:44 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Dang",
"Zheng",
""
],
[
"Salzmann",
"Mathieu",
""
]
] |
new_dataset
| 0.993893 |
2309.11174
|
Neha Sangwan
|
Neha Sangwan, Mayank Bakshi, Bikash Kumar Dey, Vinod M. Prabhakaran
|
Byzantine Multiple Access Channels -- Part II: Communication With
Adversary Identification
|
arXiv admin note: substantial text overlap with arXiv:2105.03380
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the problem of determining the identity of a byzantine user
(internal adversary) in a communication system. We consider a two-user discrete
memoryless multiple access channel where either user may deviate from the
prescribed behaviour. Owing to the noisy nature of the channel, it may be
overly restrictive to attempt to detect all deviations. In our formulation, we
only require detecting deviations which impede the decoding of the
non-deviating user's message. When neither user deviates, correct decoding is
required. When one user deviates, the decoder must either output a pair of
messages of which the message of the non-deviating user is correct or identify
the deviating user. The users and the receiver do not share any randomness. The
results include a characterization of the set of channels where communication
is feasible, and an inner and outer bound on the capacity region. We also show
that whenever the rate region has non-empty interior, the capacity region is
same as the capacity region under randomized encoding, where each user shares
independent randomness with the receiver. We also give an outer bound for this
randomized coding capacity region.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 09:42:23 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Sangwan",
"Neha",
""
],
[
"Bakshi",
"Mayank",
""
],
[
"Dey",
"Bikash Kumar",
""
],
[
"Prabhakaran",
"Vinod M.",
""
]
] |
new_dataset
| 0.988352 |
2309.11229
|
Yuan Li
|
Jinjie Gao, Haibin Kan, Yuan Li, Jiahua Xu, Qichun Wang
|
Trace Monomial Boolean Functions with Large High-Order Nonlinearities
| null | null | null | null |
cs.CR cs.CC math.RA
|
http://creativecommons.org/licenses/by/4.0/
|
Exhibiting an explicit Boolean function with a large high-order nonlinearity
is an important problem in cryptography, coding theory, and computational
complexity. We prove lower bounds on the second-order, third-order, and
higher-order nonlinearities of some trace monomial Boolean functions.
We prove lower bounds on the second-order nonlinearities of functions
$\mathrm{tr}_n(x^7)$ and $\mathrm{tr}_n(x^{2^r+3})$ where $n=2r$. Among all
trace monomials, our bounds match the best second-order nonlinearity lower
bounds by \cite{Car08} and \cite{YT20} for odd and even $n$ respectively. We
prove a lower bound on the third-order nonlinearity for functions
$\mathrm{tr}_n(x^{15})$, which is the best third-order nonlinearity lower
bound. For any $r$, we prove that the $r$-th order nonlinearity of
$\mathrm{tr}_n(x^{2^{r+1}-1})$ is at least
$2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}- O(2^{\frac{n}{2}})$. For $r \ll
\log_2 n$, this is the best lower bound among all explicit functions.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 11:40:19 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Gao",
"Jinjie",
""
],
[
"Kan",
"Haibin",
""
],
[
"Li",
"Yuan",
""
],
[
"Xu",
"Jiahua",
""
],
[
"Wang",
"Qichun",
""
]
] |
new_dataset
| 0.963318 |
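A hedged toy sketch of the objects discussed in the abstract above: it evaluates the trace monomial Boolean function $\mathrm{tr}_n(x^7)$ for the small case $n = 4$ and computes its first-order nonlinearity with a Walsh-Hadamard transform. The field size, the irreducible polynomial, and the restriction to first-order (rather than the paper's higher-order) nonlinearity are assumptions made purely to keep the example runnable:

```python
N = 4                 # field degree n (assumption: small toy case)
IRRED = 0b10011       # x^4 + x + 1, an irreducible polynomial over F_2

def gf_mul(a, b):
    """Carry-less multiplication modulo the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRRED
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def trace(a):
    """tr_n(a) = a + a^2 + a^4 + ... + a^(2^(n-1)), which lands in F_2."""
    t, s = 0, a
    for _ in range(N):
        t ^= s
        s = gf_mul(s, s)
    return t

# Truth table of f(x) = tr_n(x^7)
f = [trace(gf_pow(x, 7)) for x in range(1 << N)]

def nonlinearity(tt):
    """First-order nonlinearity via the fast Walsh-Hadamard transform."""
    w = [1 - 2 * b for b in tt]           # (-1)^f(x)
    n, h = len(w), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return (n - max(abs(v) for v in w)) // 2

print("nl(tr_4(x^7)) =", nonlinearity(f))
```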
2309.11258
|
Weidan Xiong Dr
|
Weidan Xiong, Hongqian Zhang, Botao Peng, Ziyu Hu, Yongli Wu, Jianwei
Guo, Hui Huang
|
TwinTex: Geometry-aware Texture Generation for Abstracted 3D
Architectural Models
|
Accepted to SIGGRAPH ASIA 2023
| null |
10.1145/3618328
| null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coarse architectural models are often generated at scales ranging from
individual buildings to scenes for downstream applications such as Digital Twin
City, Metaverse, LODs, etc. Such piece-wise planar models can be abstracted as
twins from 3D dense reconstructions. However, these models typically lack
realistic texture relative to the real building or scene, making them
unsuitable for vivid display or direct reference. In this paper, we present
TwinTex, the first automatic texture mapping framework to generate a
photo-realistic texture for a piece-wise planar proxy. Our method addresses
most challenges occurring in such twin texture generation. Specifically, for
each primitive plane, we first select a small set of photos with greedy
heuristics considering photometric quality, perspective quality and facade
texture completeness. Then, different levels of line features (LoLs) are
extracted from the set of selected photos to generate guidance for later steps.
With LoLs, we employ optimization algorithms to align texture with geometry
from local to global. Finally, we fine-tune a diffusion model with a multi-mask
initialization component and a new dataset to inpaint the missing region.
Experimental results on many buildings, indoor scenes and man-made objects of
varying complexity demonstrate the generalization ability of our algorithm. Our
approach surpasses state-of-the-art texture mapping methods in terms of
high-fidelity quality and reaches a human-expert production level with much
less effort. Project page: https://vcc.tech/research/2023/TwinTex.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 12:33:53 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Xiong",
"Weidan",
""
],
[
"Zhang",
"Hongqian",
""
],
[
"Peng",
"Botao",
""
],
[
"Hu",
"Ziyu",
""
],
[
"Wu",
"Yongli",
""
],
[
"Guo",
"Jianwei",
""
],
[
"Huang",
"Hui",
""
]
] |
new_dataset
| 0.967038 |
2309.11259
|
Vladimir Araujo
|
Vladimir Araujo, Maria Mihaela Trusca, Rodrigo Tufi\~no,
Marie-Francine Moens
|
Sequence-to-Sequence Spanish Pre-trained Language Models
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, substantial advancements in pre-trained language models have
paved the way for the development of numerous non-English language versions,
with a particular focus on encoder-only and decoder-only architectures. While
Spanish language models encompassing BERT, RoBERTa, and GPT have exhibited
prowess in natural language understanding and generation, there remains a
scarcity of encoder-decoder models designed for sequence-to-sequence tasks
involving input-output pairs. This paper breaks new ground by introducing the
implementation and evaluation of renowned encoder-decoder architectures,
exclusively pre-trained on Spanish corpora. Specifically, we present Spanish
versions of BART, T5, and BERT2BERT-style models and subject them to a
comprehensive assessment across a diverse range of sequence-to-sequence tasks,
spanning summarization, rephrasing, and generative question answering. Our
findings underscore the competitive performance of all models, with BART and T5
emerging as top performers across all evaluated tasks. As an additional
contribution, we have made all models publicly available to the research
community, fostering future exploration and development in Spanish language
processing.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 12:35:19 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Araujo",
"Vladimir",
""
],
[
"Trusca",
"Maria Mihaela",
""
],
[
"Tufiño",
"Rodrigo",
""
],
[
"Moens",
"Marie-Francine",
""
]
] |
new_dataset
| 0.987839 |
2309.11306
|
Kazi Injamamul Haque
|
Stefan Stan and Kazi Injamamul Haque and Zerrin Yumak
|
FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using
Diffusion
|
Pre-print of the paper accepted at ACM SIGGRAPH MIG 2023
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech-driven 3D facial animation synthesis has been a challenging task both
in industry and research. Recent methods mostly focus on deterministic deep
learning methods meaning that given a speech input, the output is always the
same. However, in reality, the non-verbal facial cues that reside throughout
the face are non-deterministic in nature. In addition, the majority of
approaches focus on 3D vertex-based datasets, and methods that are compatible
with existing facial animation pipelines with rigged characters are scarce. To
eliminate these issues, we present FaceDiffuser, a non-deterministic deep
learning model to generate speech-driven facial animations that is trained with
both 3D vertex and blendshape based datasets. Our method is based on the
diffusion technique and uses the pre-trained large speech representation model
HuBERT to encode the audio input. To the best of our knowledge, we are the
first to employ the diffusion method for the task of speech-driven 3D facial
animation synthesis. We have run extensive objective and subjective analyses
and show that our approach achieves better or comparable results in comparison
to the state-of-the-art methods. We also introduce a new in-house dataset that
is based on a blendshape based rigged character. We recommend watching the
accompanying supplementary video. The code and the dataset will be publicly
available.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 13:33:00 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Stan",
"Stefan",
""
],
[
"Haque",
"Kazi Injamamul",
""
],
[
"Yumak",
"Zerrin",
""
]
] |
new_dataset
| 0.996867 |
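As a hedged, generic illustration of the diffusion technique the FaceDiffuser abstract above builds on (this is a textbook DDPM-style forward process, not the paper's architecture; the noise schedule and tensor shapes are assumptions):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # a standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)        # cumulative signal retention

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise
    return xt, noise

x0 = np.random.rand(5, 3)        # stand-in for vertex offsets / blendshape weights
xt, eps = q_sample(x0, t=500)    # a denoiser would learn to predict eps from (xt, t, audio)
```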
2309.11338
|
Prottay Kumar Adhikary
|
Prottay Kumar Adhikary, Bandaru Sugandhi, Subhojit Ghimire, Santanu
Pal and Partha Pakray
|
TRAVID: An End-to-End Video Translation Framework
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In today's globalized world, effective communication with people from diverse
linguistic backgrounds has become increasingly crucial. While traditional
methods of language translation, such as written text or voice-only
translations, can accomplish the task, they often fail to capture the complete
context and nuanced information conveyed through nonverbal cues like facial
expressions and lip movements. In this paper, we present an end-to-end video
translation system that not only translates spoken language but also
synchronizes the translated speech with the lip movements of the speaker. Our
system focuses on translating educational lectures in various Indian languages,
and it is designed to be effective even in low-resource system settings. By
incorporating lip movements that align with the target language and matching
them with the speaker's voice using voice cloning techniques, our application
offers an enhanced experience for students and users. This additional feature
creates a more immersive and realistic learning environment, ultimately making
the learning process more effective and engaging.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 14:13:05 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Adhikary",
"Prottay Kumar",
""
],
[
"Sugandhi",
"Bandaru",
""
],
[
"Ghimire",
"Subhojit",
""
],
[
"Pal",
"Santanu",
""
],
[
"Pakray",
"Partha",
""
]
] |
new_dataset
| 0.999509 |
2309.11346
|
Atakan Kara
|
Atakan Kara, Farrin Marouf Sofian, Andrew Bond and G\"ozde G\"ul
\c{S}ahin
|
GECTurk: Grammatical Error Correction and Detection Dataset for Turkish
|
Accepted at Findings of IJCNLP-AACL 2023
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Grammatical Error Detection and Correction (GEC) tools have proven useful for
native speakers and second language learners. Developing such tools requires a
large amount of parallel, annotated data, which is unavailable for most
languages. Synthetic data generation is a common practice to overcome the
scarcity of such data. However, it is not straightforward for morphologically
rich languages like Turkish due to complex writing rules that require
phonological, morphological, and syntactic information. In this work, we
present a flexible and extensible synthetic data generation pipeline for
Turkish covering more than 20 expert-curated grammar and spelling rules
(a.k.a., writing rules) implemented through complex transformation functions.
Using this pipeline, we derive 130,000 high-quality parallel sentences from
professionally edited articles. Additionally, we create a more realistic test
set by manually annotating a set of movie reviews. We implement three baselines
formulating the task as i) neural machine translation, ii) sequence tagging,
and iii) prefix tuning with a pretrained decoder-only model, achieving strong
results. Furthermore, we perform exhaustive experiments on out-of-domain
datasets to gain insights on the transferability and robustness of the proposed
approaches. Our results suggest that our corpus, GECTurk, is high-quality and
allows knowledge transfer for the out-of-domain setting. To encourage further
research on Turkish GEC, we release our datasets, baseline models, and the
synthetic data generation pipeline at https://github.com/GGLAB-KU/gecturk.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 14:25:44 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Kara",
"Atakan",
""
],
[
"Sofian",
"Farrin Marouf",
""
],
[
"Bond",
"Andrew",
""
],
[
"Şahin",
"Gözde Gül",
""
]
] |
new_dataset
| 0.999861 |
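A hedged sketch of the rule-based synthetic-data idea described in the GECTurk abstract above. The single toy rule below (attaching the separately written conjunction "de/da" to the preceding word) is an assumption chosen for illustration; the paper implements more than 20 expert-curated writing rules as transformation functions:

```python
import random

def corrupt_de_da(tokens, p=1.0):
    """Apply a toy corruption rule, yielding a noisy version of a clean sentence."""
    out = []
    for tok in tokens:
        if tok.lower() in ("de", "da") and out and random.random() < p:
            out[-1] = out[-1] + tok          # incorrectly attach the clitic
        else:
            out.append(tok)
    return out

clean = "Ben de sinemaya gittim .".split()
noisy = corrupt_de_da(clean)                 # (noisy, clean) forms one parallel pair
print(" ".join(noisy), "->", " ".join(clean))
# Bende sinemaya gittim . -> Ben de sinemaya gittim .
```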
2309.11361
|
Yuan An
|
Yuan An, Jane Greenberg, Alex Kalinowski, Xintong Zhao, Xiaohua Hu,
Fernando J. Uribe-Romo, Kyle Langlois, Jacob Furst, Diego A.
G\'omez-Gualdr\'on
|
Knowledge Graph Question Answering for Materials Science (KGQA4MAT):
Developing Natural Language Interface for Metal-Organic Frameworks Knowledge
Graph (MOF-KG)
|
In 17th International Conference on Metadata and Semantics Research,
October 2023
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present a comprehensive benchmark dataset for Knowledge Graph Question
Answering in Materials Science (KGQA4MAT), with a focus on metal-organic
frameworks (MOFs). A knowledge graph for metal-organic frameworks (MOF-KG) has
been constructed by integrating structured databases and knowledge extracted
from the literature. To enhance MOF-KG accessibility for domain experts, we aim
to develop a natural language interface for querying the knowledge graph. We
have developed a benchmark comprised of 161 complex questions involving
comparison, aggregation, and complicated graph structures. Each question is
rephrased in three additional variations, resulting in 644 questions and 161 KG
queries. To evaluate the benchmark, we have developed a systematic approach for
utilizing ChatGPT to translate natural language questions into formal KG
queries. We also apply the approach to the well-known QALD-9 dataset,
demonstrating ChatGPT's potential in addressing KGQA issues for different
platforms and query languages. The benchmark and the proposed approach aim to
stimulate further research and development of user-friendly and efficient
interfaces for querying domain-specific materials science knowledge graphs,
thereby accelerating the discovery of novel materials.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 14:43:43 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"An",
"Yuan",
""
],
[
"Greenberg",
"Jane",
""
],
[
"Kalinowski",
"Alex",
""
],
[
"Zhao",
"Xintong",
""
],
[
"Hu",
"Xiaohua",
""
],
[
"Uribe-Romo",
"Fernando J.",
""
],
[
"Langlois",
"Kyle",
""
],
[
"Furst",
"Jacob",
""
],
[
"Gómez-Gualdrón",
"Diego A.",
""
]
] |
new_dataset
| 0.99963 |
2309.11419
|
Lei Cui
|
Tengchao Lv, Yupan Huang, Jingye Chen, Lei Cui, Shuming Ma, Yaoyao
Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, Shaoxiang Wu, Guoxin
Wang, Cha Zhang, Furu Wei
|
Kosmos-2.5: A Multimodal Literate Model
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Kosmos-2.5, a multimodal literate model for machine reading of
text-intensive images. Pre-trained on large-scale text-intensive images,
Kosmos-2.5 excels in two distinct yet cooperative transcription tasks: (1)
generating spatially-aware text blocks, where each block of text is assigned
its spatial coordinates within the image, and (2) producing structured text
output that captures styles and structures into the markdown format. This
unified multimodal literate capability is achieved through a shared Transformer
architecture, task-specific prompts, and flexible text representations. We
evaluate Kosmos-2.5 on end-to-end document-level text recognition and
image-to-markdown text generation. Furthermore, the model can be readily
adapted for any text-intensive image understanding task with different prompts
through supervised fine-tuning, making it a general-purpose tool for real-world
applications involving text-rich images. This work also paves the way for the
future scaling of multimodal large language models.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 15:50:08 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Lv",
"Tengchao",
""
],
[
"Huang",
"Yupan",
""
],
[
"Chen",
"Jingye",
""
],
[
"Cui",
"Lei",
""
],
[
"Ma",
"Shuming",
""
],
[
"Chang",
"Yaoyao",
""
],
[
"Huang",
"Shaohan",
""
],
[
"Wang",
"Wenhui",
""
],
[
"Dong",
"Li",
""
],
[
"Luo",
"Weiyao",
""
],
[
"Wu",
"Shaoxiang",
""
],
[
"Wang",
"Guoxin",
""
],
[
"Zhang",
"Cha",
""
],
[
"Wei",
"Furu",
""
]
] |
new_dataset
| 0.987479 |
2309.11445
|
Bing Shuai
|
Haodong Duan, Mingze Xu, Bing Shuai, Davide Modolo, Zhuowen Tu, Joseph
Tighe, Alessandro Bergamo
|
SkeleTR: Towards Skeleton-based Action Recognition in the Wild
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SkeleTR, a new framework for skeleton-based action recognition. In
contrast to prior work, which focuses mainly on controlled environments, we
target more general scenarios that typically involve a variable number of
people and various forms of interaction between people. SkeleTR works with a
two-stage paradigm. It first models the intra-person skeleton dynamics for each
skeleton sequence with graph convolutions, and then uses stacked Transformer
encoders to capture person interactions that are important for action
recognition in general scenarios. To mitigate the negative impact of inaccurate
skeleton associations, SkeleTR takes relatively short skeleton sequences as input
and increases the number of sequences. As a unified solution, SkeleTR can be
directly applied to multiple skeleton-based action tasks, including video-level
action classification, instance-level action detection, and group-level
activity recognition. It also enables transfer learning and joint training
across different action tasks and datasets, which result in performance
improvement. When evaluated on various skeleton-based action recognition
benchmarks, SkeleTR achieves state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 16:22:33 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Duan",
"Haodong",
""
],
[
"Xu",
"Mingze",
""
],
[
"Shuai",
"Bing",
""
],
[
"Modolo",
"Davide",
""
],
[
"Tu",
"Zhuowen",
""
],
[
"Tighe",
"Joseph",
""
],
[
"Bergamo",
"Alessandro",
""
]
] |
new_dataset
| 0.998427 |
2309.11471
|
Muhammad Shahbaz Khan
|
Laiba Asghar, Fawad Ahmed, Muhammad Shahbaz Khan, Arshad Arshad, Jawad
Ahmad
|
Noise-Crypt: Image Encryption with Non-linear Noise, Hybrid Chaotic
Maps, and Hashing
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
To secure digital images over insecure transmission channels, a new image
encryption algorithm, Noise-Crypt, is proposed in this paper. Noise-Crypt
integrates non-linear random noise, hybrid chaotic maps, and SHA-256 hashing
algorithm. The utilized hybrid chaotic maps are the logistic-tent and the
logistic-sine-cosine map. The hybrid chaotic maps enhance the pseudorandom
sequence generation and selection of substitution boxes, while the
logistic-sine-cosine map induces non-linearity in the algorithm through random
noise. This deliberate inclusion of noise contributes to increased resistance
against cryptanalysis. The proposed scheme has been evaluated for several
security parameters, such as differential attacks, entropy, correlation, etc.
Extensive evaluation demonstrates the efficacy of the proposed scheme, with
almost ideal values of entropy of 7.99 and correlation of -0.0040. Results of
the security analysis validate the potency of the proposed scheme in achieving
robust image encryption.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 17:11:35 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Asghar",
"Laiba",
""
],
[
"Ahmed",
"Fawad",
""
],
[
"Khan",
"Muhammad Shahbaz",
""
],
[
"Arshad",
"Arshad",
""
],
[
"Ahmad",
"Jawad",
""
]
] |
new_dataset
| 0.995406 |
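A hedged toy sketch in the spirit of the Noise-Crypt abstract above: a chaotic orbit seeded from a SHA-256 digest is quantized into a keystream for masking pixel bytes. The specific hybrid update rule, parameters, and key schedule here are assumptions and do not reproduce the paper's logistic-tent or logistic-sine-cosine maps:

```python
import hashlib

def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def tent(x, mu=1.99):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def hybrid_keystream(key: bytes, length: int):
    # Derive the initial condition from a SHA-256 digest of the key (assumption).
    digest = hashlib.sha256(key).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64      # seed in [0, 1)
    stream = []
    for _ in range(length):
        x = (logistic(x) + tent(x)) % 1.0              # toy hybrid update
        stream.append(int(x * 256) % 256)              # quantize to a byte
    return bytes(stream)

plain = bytes(range(16))                               # a tiny "image" row
ks = hybrid_keystream(b"secret key", len(plain))
cipher = bytes(p ^ k for p, k in zip(plain, ks))
assert bytes(c ^ k for c, k in zip(cipher, ks)) == plain   # XOR masking round-trips
```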
2309.11478
|
Hanyi Wang
|
Yuqian Sun, Hanyi Wang, Pok Man Chan, Morteza Tabibi, Yan Zhang, Huan
Lu, Yuheng Chen, Chang Hee Lee, Ali Asadipour
|
Fictional Worlds, Real Connections: Developing Community Storytelling
Social Chatbots through LLMs
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the integration of storytelling and Large Language Models (LLMs)
to develop engaging and believable Social Chatbots (SCs) in community settings.
Motivated by the potential of fictional characters to enhance social
interactions, we introduce Storytelling Social Chatbots (SSCs) and the concept
of story engineering to transform fictional game characters into "live" social
entities within player communities. Our story engineering process includes
three steps: (1) Character and story creation, defining the SC's personality
and worldview, (2) Presenting Live Stories to the Community, allowing the SC to
recount challenges and seek suggestions, and (3) Communication with community
members, enabling interaction between the SC and users. We employed the LLM
GPT-3 to drive our SSC prototypes, "David" and "Catherine," and evaluated their
performance in an online gaming community, "DE (Alias)," on Discord. Our
mixed-method analysis, based on questionnaires (N=15) and interviews (N=8) with
community members, reveals that storytelling significantly enhances the
engagement and believability of SCs in community settings.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 17:23:05 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Sun",
"Yuqian",
""
],
[
"Wang",
"Hanyi",
""
],
[
"Chan",
"Pok Man",
""
],
[
"Tabibi",
"Morteza",
""
],
[
"Zhang",
"Yan",
""
],
[
"Lu",
"Huan",
""
],
[
"Chen",
"Yuheng",
""
],
[
"Lee",
"Chang Hee",
""
],
[
"Asadipour",
"Ali",
""
]
] |
new_dataset
| 0.999132 |
2309.11484
|
Moritz Schubotz
|
Moritz Schubotz, Eloi Ferrer, Johannes Stegm\"uller, Daniel Mietchen,
Olaf Teschke, Larissa Pusch, Tim OF Conrad
|
Bravo MaRDI: A Wikibase Powered Knowledge Graph on Mathematics
|
Accepted at Wikidata'23: Wikidata workshop at ISWC 2023
| null | null | null |
cs.DL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mathematical world knowledge is a fundamental component of Wikidata. However,
to date, no expertly curated knowledge graph has focused specifically on
contemporary mathematics. Addressing this gap, the Mathematical Research Data
Initiative (MaRDI) has developed a comprehensive knowledge graph that links
multimodal research data in mathematics. This encompasses traditional research
data items like datasets, software, and publications and includes semantically
advanced objects such as mathematical formulas and hypotheses. This paper
details the abilities of the MaRDI knowledge graph, which is based on Wikibase,
leading up to its inaugural public release, codenamed Bravo, available on
https://portal.mardi4nfdi.de.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 17:28:32 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Schubotz",
"Moritz",
""
],
[
"Ferrer",
"Eloi",
""
],
[
"Stegmüller",
"Johannes",
""
],
[
"Mietchen",
"Daniel",
""
],
[
"Teschke",
"Olaf",
""
],
[
"Pusch",
"Larissa",
""
],
[
"Conrad",
"Tim OF",
""
]
] |
new_dataset
| 0.999418 |
2112.01601
|
Peter Lorenz
|
Peter Lorenz, Dominik Strassel, Margret Keuper and Janis Keuper
|
Is RobustBench/AutoAttack a suitable Benchmark for Adversarial
Robustness?
|
AAAI-22 AdvML Workshop
| null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, RobustBench (Croce et al. 2020) has become a widely recognized
benchmark for the adversarial robustness of image classification networks. In
its most commonly reported sub-task, RobustBench evaluates and ranks the
adversarial robustness of trained neural networks on CIFAR10 under AutoAttack
(Croce and Hein 2020b) with l-inf perturbations limited to eps = 8/255. With
leading scores of the currently best-performing models at around 60% of the
baseline, it is fair to characterize this benchmark as quite challenging.
Despite its general acceptance in recent literature, we aim to foster
discussion about the suitability of RobustBench as a key indicator for
robustness that could be generalized to practical applications. Our line of
argumentation against this is two-fold and supported by extensive experiments
presented in this paper: We argue that I) the alteration of data by AutoAttack
with l-inf, eps = 8/255 is unrealistically strong, resulting in close to
perfect detection rates of adversarial samples even by simple detection
algorithms and human observers. We also show that other attack methods are much
harder to detect while achieving similar success rates. II) That results on
low-resolution data sets like CIFAR10 do not generalize well to higher
resolution images as gradient-based attacks appear to become even more
detectable with increasing resolutions.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 20:44:16 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 12:22:20 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Sep 2023 15:11:05 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Lorenz",
"Peter",
""
],
[
"Strassel",
"Dominik",
""
],
[
"Keuper",
"Margret",
""
],
[
"Keuper",
"Janis",
""
]
] |
new_dataset
| 0.99223 |
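A hedged numeric illustration of the l-inf budget discussed in the abstract above: eps = 8/255 means every pixel of the adversarial image may deviate from the original by at most 8 grey levels on a [0, 1] scale (the random image and perturbation below are stand-ins, not an actual attack):

```python
import numpy as np

eps = 8 / 255
x = np.random.rand(3, 32, 32).astype(np.float32)                  # stand-in CIFAR10 image
delta = np.random.uniform(-1, 1, x.shape).astype(np.float32)      # raw perturbation
x_adv = np.clip(x + np.clip(delta, -eps, eps), 0.0, 1.0)          # project onto the eps-ball

print(float(np.abs(x_adv - x).max()) <= eps + 1e-6)               # True: budget respected
```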
2112.10085
|
Qinghua Zhao
|
Qinghua Zhao
|
D-HAN: Dynamic News Recommendation with Hierarchical Attention Network
| null | null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
News recommendation models often fall short in capturing users' preferences
due to their static approach to user-news interactions. To address this
limitation, we present a novel dynamic news recommender model that seamlessly
integrates continuous time information into a hierarchical attention network that
effectively represents news information at the sentence, element, and sequence
levels. Moreover, we introduce a dynamic negative sampling method to optimize
users' implicit feedback. To validate our model's effectiveness, we conduct
extensive experiments on three real-world datasets. The results demonstrate the
effectiveness of our proposed approach.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 08:11:57 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 09:29:28 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Zhao",
"Qinghua",
""
]
] |
new_dataset
| 0.973896 |
2211.15747
|
Vidya Sagar
|
Vidya Sagar, Ritumoni Sarma
|
Certain binary minimal codes constructed using simplicial complexes
|
31 pages
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this manuscript, we work over the non-chain ring $\mathcal{R} =
\mathbb{F}_2[u]/\langle u^3 - u\rangle $. Let $m\in \mathbb{N}$ and let $L, M,
N \subseteq [m]:=\{1, 2, \dots, m\}$. For $X\subseteq [m]$, define
$\Delta_X:=\{v \in \mathbb{F}_2^m : \textnormal{Supp}(v)\subseteq X\}$ and $D:=
(1+u^2)D_1 + u^2D_2 + (u+u^2)D_3$, an ordered finite multiset consisting of
elements from $\mathcal{R}^m$, where $D_1\in \{\Delta_L, \Delta_L^c\}, D_2\in
\{\Delta_M, \Delta_M^c\}, D_3\in \{\Delta_N, \Delta_N^c\}$. The linear code
$C_D$ over $\mathcal{R}$ defined by $\{\big(v\cdot d\big)_{d\in D} : v \in
\mathcal{R}^m \}$ is studied for each $D$. Further, we also consider simplicial
complexes with two maximal elements in the above work. We study their binary
Gray images and the binary subfield-like codes corresponding to a certain
$\mathbb{F}_{2}$-functional of $\mathcal{R}$. Sufficient conditions for these
binary linear codes to be minimal and self-orthogonal are obtained in each
case. Besides, we produce an infinite family of optimal codes with respect to
the Griesmer bound. Most of the codes obtained in this manuscript are
few-weight codes.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 20:02:28 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 03:04:32 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Sagar",
"Vidya",
""
],
[
"Sarma",
"Ritumoni",
""
]
] |
new_dataset
| 0.996488 |
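A hedged toy sketch of the construction named in the abstract above, simplified from the ring $\mathcal{R} = \mathbb{F}_2[u]/\langle u^3 - u\rangle$ down to plain $\mathbb{F}_2$ (that simplification, and the tiny parameters $m = 3$, $X = \{0, 2\}$, are assumptions made only to keep the example small): it builds $\Delta_X$ and the code $C_D$ and reports the resulting weight set.

```python
from itertools import product

def delta(X, m):
    """Delta_X = all binary vectors of length m whose support is contained in X."""
    return [v for v in product((0, 1), repeat=m)
            if all(v[i] == 0 or i in X for i in range(m))]

def code_CD(D, m):
    """The code C_D = { (v . d mod 2)_{d in D} : v in F_2^m }."""
    words = set()
    for v in product((0, 1), repeat=m):
        words.add(tuple(sum(a * b for a, b in zip(v, d)) % 2 for d in D))
    return sorted(words)

m, X = 3, {0, 2}
D = delta(X, m)
C = code_CD(D, m)
weights = sorted({sum(c) for c in C if any(c)})
print(f"|D| = {len(D)}, |C_D| = {len(C)}, nonzero weights = {weights}")
# |D| = 4, |C_D| = 4, nonzero weights = [2]   -> a (toy) one-weight code
```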
2301.00615
|
Kaicheng Yang
|
Kaicheng Yang, Yuhan Wu, Ruijie Miao, Tong Yang, Zirui Liu, Zicang Xu,
Rui Qiu, Yikai Zhao, Hanglong Lv, Zhigang Ji, Gaogang Xie
|
ChameleMon: Shifting Measurement Attention as Network State Changes
|
This is a preprint of ChameleMon: Shifting Measurement Attention as
Network State Changes, to appear in SIGCOMM 2023
|
ACM SIGCOMM (2023) 881-903
|
10.1145/3603269.3604850
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flow-level network measurement is critical to many network applications.
Among various measurement tasks, packet loss detection and heavy-hitter
detection are two most important measurement tasks, which we call the two key
tasks. In practice, the two key tasks are often required at the same time, but
existing works seldom handle both tasks. In this paper, we design ChameleMon to
support the two key tasks simultaneously. One key design/novelty of ChameleMon
is to shift measurement attention as network state changes, through two
dimensions of dynamics: 1) dynamically allocating memory between the two key
tasks; 2) dynamically monitoring the flows of importance. To realize the key
design, we propose a key technique, leveraging Fermat's little theorem to
devise a flexible data structure, namely FermatSketch. FermatSketch is
dividable, additive, and subtractive, supporting the two key tasks. We have
fully implemented a ChameleMon prototype on a testbed with a Fat-tree topology.
We conduct extensive experiments and the results show ChameleMon supports the
two key tasks with low memory/bandwidth overhead, and more importantly, it can
automatically shift measurement attention as network state changes.
|
[
{
"version": "v1",
"created": "Mon, 2 Jan 2023 12:01:01 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jul 2023 08:47:26 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Yang",
"Kaicheng",
""
],
[
"Wu",
"Yuhan",
""
],
[
"Miao",
"Ruijie",
""
],
[
"Yang",
"Tong",
""
],
[
"Liu",
"Zirui",
""
],
[
"Xu",
"Zicang",
""
],
[
"Qiu",
"Rui",
""
],
[
"Zhao",
"Yikai",
""
],
[
"Lv",
"Hanglong",
""
],
[
"Ji",
"Zhigang",
""
],
[
"Xie",
"Gaogang",
""
]
] |
new_dataset
| 0.999232 |
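A hedged, highly simplified illustration of the Fermat-little-theorem idea behind the FermatSketch mentioned in the abstract above (the real sketch uses several hashed bucket arrays with iterative peeling; the single bucket, the prime, and the loss-detection toy below are assumptions for illustration):

```python
P = 2_147_483_647          # a Mersenne prime; all bucket sums live in Z_p

class Bucket:
    def __init__(self):
        self.count = 0      # number of packets hashed here
        self.id_sum = 0     # sum of flow IDs, modulo P

    def insert(self, flow_id, n=1):
        self.count = (self.count + n) % P
        self.id_sum = (self.id_sum + n * flow_id) % P

    def subtract(self, other):
        """Subtractive: e.g. an upstream sketch minus a downstream sketch."""
        self.count = (self.count - other.count) % P
        self.id_sum = (self.id_sum - other.id_sum) % P

    def decode_pure(self):
        """If exactly one flow remains, recover (flow_id, count):
        by Fermat's little theorem, count^(P-2) mod P inverts count."""
        inv = pow(self.count, P - 2, P)
        return (self.id_sum * inv) % P, self.count

# Toy use: detect 3 lost packets of flow 42 between two measurement points.
up, down = Bucket(), Bucket()
up.insert(42, 10); up.insert(7, 5)
down.insert(42, 7); down.insert(7, 5)
up.subtract(down)
print(up.decode_pure())     # -> (42, 3)
```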
2303.09514
|
Nicol\'as Ayobi
|
Nicol\'as Ayobi, Alejandra P\'erez-Rond\'on, Santiago Rodr\'iguez,
Pablo Arbel\'aez
|
MATIS: Masked-Attention Transformers for Surgical Instrument
Segmentation
|
ISBI 2023 (Oral)
| null |
10.1109/ISBI53787.2023.10230819
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Masked-Attention Transformers for Surgical Instrument Segmentation
(MATIS), a two-stage, fully transformer-based method that leverages modern
pixel-wise attention mechanisms for instrument segmentation. MATIS exploits the
instance-level nature of the task by employing a masked attention module that
generates and classifies a set of fine instrument region proposals. Our method
incorporates long-term video-level information through video transformers to
improve temporal consistency and enhance mask classification. We validate our
approach in the two standard public benchmarks, Endovis 2017 and Endovis 2018.
Our experiments demonstrate that MATIS' per-frame baseline outperforms previous
state-of-the-art methods and that including our temporal consistency module
boosts our model's performance further.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 17:31:40 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 02:38:56 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Sep 2023 18:11:43 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Ayobi",
"Nicolás",
""
],
[
"Pérez-Rondón",
"Alejandra",
""
],
[
"Rodríguez",
"Santiago",
""
],
[
"Arbeláez",
"Pablo",
""
]
] |
new_dataset
| 0.999478 |
2304.06506
|
Temiloluwa Prioleau
|
Temiloluwa Prioleau, Abigail Bartolome, Richard Comi, Catherine
Stanger
|
DiaTrend: A dataset from advanced diabetes technology to enable
development of novel analytic solutions
|
11 pages, 5 figures, 2 tables
|
Scientific Data 10, 556 (2023)
|
10.1038/s41597-023-02469-5
| null |
cs.CY cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Objective digital data is scarce yet needed in many domains to enable
research that can transform the standard of healthcare. While data from
consumer-grade wearables and smartphones is more accessible, there is a critical
need for similar data from clinical-grade devices used by patients with a
diagnosed condition. The prevalence of wearable medical devices in the diabetes
domain sets the stage for unique research and development within this field and
beyond. However, the scarcity of open-source datasets presents a major barrier
to progress. To facilitate broader research on diabetes-relevant problems and
accelerate development of robust computational solutions, we provide the
DiaTrend dataset. The DiaTrend dataset is composed of intensive longitudinal
data from wearable medical devices, including a total of 27,561 days of
continuous glucose monitor data and 8,220 days of insulin pump data from 54
patients with diabetes. This dataset is useful for developing novel analytic
solutions that can reduce the disease burden for people living with diabetes
and increase knowledge on chronic condition management in outpatient settings.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 00:59:04 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Prioleau",
"Temiloluwa",
""
],
[
"Bartolome",
"Abigail",
""
],
[
"Comi",
"Richard",
""
],
[
"Stanger",
"Catherine",
""
]
] |
new_dataset
| 0.999823 |
2304.06758
|
Vidya Sagar
|
Vidya Sagar, Ritumoni Sarma
|
Codes over the non-unital non-commutative ring $E$ using simplicial
complexes
|
20 pages. arXiv admin note: substantial text overlap with 2211.15747
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
There are exactly two non-commutative rings of size $4$, namely, $E = \langle
a, b ~\vert ~ 2a = 2b = 0, a^2 = a, b^2 = b, ab= a, ba = b\rangle$ and its
opposite ring $F$. These rings are non-unital. A subset $D$ of $E^m$ is defined
with the help of simplicial complexes, and utilized to construct linear
left-$E$-codes $C^L_D=\{(v\cdot d)_{d\in D} : v\in E^m\}$ and right-$E$-codes
$C^R_D=\{(d\cdot v)_{d\in D} : v\in E^m\}$. We study their corresponding binary
codes obtained via a Gray map. The weight distributions of all these codes are
computed. We obtain a couple of infinite families of optimal codes with
respect to the Griesmer bound. Ashikhmin-Barg's condition for minimality of a
linear code is satisfied by most of the binary codes we constructed here. All
the binary codes in this article are few-weight codes, and self-orthogonal
codes under certain mild conditions. This is the first attempt to study the
structure of linear codes over non-unital non-commutative rings using
simplicial complexes.
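Since the structure of E is given entirely by the relations above, a few lines of Python can sanity-check them. Writing a generic element x·a + y·b as the pair (x, y) over GF(2), bilinear extension of the generator products gives (x1, y1)·(x2, y2) = (x1·s, y1·s) with s = x2 + y2 (mod 2); this encoding is chosen here only for the check and is not part of the paper.

# Elements of E are x*a + y*b with x, y in GF(2), encoded as pairs (x, y).
ZERO, A, B, C = (0, 0), (1, 0), (0, 1), (1, 1)   # C = a + b
E_ELEMENTS = [ZERO, A, B, C]

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)   # characteristic 2

def mul(u, v):
    s = (v[0] + v[1]) % 2
    return ((u[0] * s) % 2, (u[1] * s) % 2)

assert mul(A, A) == A and mul(B, B) == B             # a^2 = a, b^2 = b
assert mul(A, B) == A and mul(B, A) == B             # ab = a, ba = b (non-commutative)
assert add(A, A) == ZERO and add(B, B) == ZERO       # 2a = 2b = 0
# No two-sided identity exists, confirming that E is non-unital:
assert not any(all(mul(e, x) == x and mul(x, e) == x for x in E_ELEMENTS)
               for e in E_ELEMENTS)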
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 18:01:41 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Sagar",
"Vidya",
""
],
[
"Sarma",
"Ritumoni",
""
]
] |
new_dataset
| 0.998277 |
2305.08781
|
Vidya Sagar
|
Vidya Sagar, Ritumoni Sarma
|
Minimal and Optimal binary codes obtained using $C_D$-construction over
the non-unital ring $I$
|
16 pages. arXiv admin note: substantial text overlap with
arXiv:2304.06758, 2211.15747
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we construct linear codes over the commutative non-unital
ring $I$ of size four. We obtain their Lee-weight distributions and study their
binary Gray images. Under certain mild conditions, these classes of binary
codes are minimal and self-orthogonal. All codes in this article are few-weight
codes. Besides, an infinite class of these binary codes consists of distance
optimal codes with respect to the Griesmer bound.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 16:42:20 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Sagar",
"Vidya",
""
],
[
"Sarma",
"Ritumoni",
""
]
] |
new_dataset
| 0.997186 |
2306.04079
|
Hao Cheng
|
Hao Cheng, Zeyu Sha, Yongjian Zhu, Feitian Zhang
|
RGBlimp: Robotic Gliding Blimp -- Design, Modeling, Development, and
Aerodynamics Analysis
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A miniature robotic blimp, as one type of lighter-than-air aerial vehicle,
has attracted increasing attention in the science and engineering field for its
long flight duration and safe aerial locomotion. While a variety of miniature
robotic blimps have been developed over the past decade, most of them utilize
the buoyant lift and neglect the aerodynamic lift in their design, thus leading
to mediocre aerodynamic performance. This letter proposes a new design of a
miniature robotic blimp that combines desirable features of both a robotic
blimp and a fixed-wing glider, named the Robotic Gliding Blimp, or RGBlimp.
This robot, equipped with an envelope filled with helium and a pair of wings,
uses an internal moving mass and a pair of propellers for its locomotion
control. This letter presents the design, dynamic modeling, prototyping, and
system identification of the RGBlimp. To the best of the authors' knowledge,
this is the first effort to systematically design and develop such a miniature
robotic blimp with hybrid lifts and moving mass control. Experimental results
are presented to validate the design and the dynamic model of the RGBlimp.
An analysis of the RGBlimp aerodynamics is conducted, which confirms the
performance improvement of the proposed RGBlimp in aerodynamic efficiency and
flight stability.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 00:40:41 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 04:17:05 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Cheng",
"Hao",
""
],
[
"Sha",
"Zeyu",
""
],
[
"Zhu",
"Yongjian",
""
],
[
"Zhang",
"Feitian",
""
]
] |
new_dataset
| 0.998329 |
2307.07686
|
Bin Lei
|
Bin Lei, Caiwen Ding, Le Chen, Pei-Hung Lin, Chunhua Liao
|
Creating a Dataset for High-Performance Computing Code Translation using
LLMs: A Bridge Between OpenMP Fortran and C++
|
This paper was accepted by the HPEC 2023 conference and received the
Outstanding Student Paper Award
| null | null | null |
cs.SE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we present a novel dataset for training machine learning
models translating between OpenMP Fortran and C++ code. To ensure reliability
and applicability, the dataset is created from a range of representative
open-source OpenMP benchmarks. It is also refined using a meticulous code
similarity test. The effectiveness of our dataset is assessed using both
quantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase
how this dataset significantly elevates the translation competencies of large
language models (LLMs). Specifically, models without prior coding knowledge
experienced a boost of $\mathbf{\times~5.1}$ in their CodeBLEU scores, while
models with some coding familiarity saw an impressive
$\mathbf{\times~9.9}$-fold increase. The best fine-tuned model using our
dataset outperforms GPT-4 and also reaches human-level accuracy. This work
underscores the immense potential of our dataset in propelling advancements in
the domain of code translation for high-performance computing. The dataset is
accessible at
\href{https://github.com/bin123apple/Fortran-CPP-HPC-code-translation-dataset}{OpenMP-Fortran-CPP-Translation}.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 02:35:51 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 02:04:40 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Sep 2023 01:35:37 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Sep 2023 18:10:37 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Lei",
"Bin",
""
],
[
"Ding",
"Caiwen",
""
],
[
"Chen",
"Le",
""
],
[
"Lin",
"Pei-Hung",
""
],
[
"Liao",
"Chunhua",
""
]
] |
new_dataset
| 0.999834 |
2308.06931
|
Siyu Teng
|
Siyu Teng, Luxi Li, Yuchen Li, Xuemin Hu, Lingxi Li, Yunfeng Ai, Long
Chen
|
FusionPlanner: A Multi-task Motion Planner for Mining Trucks using
Multi-sensor Fusion Method
|
20 Pages, 10 figures
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, significant achievements have been made in motion planning
for intelligent vehicles. However, as a typical unstructured environment,
open-pit mining attracts limited attention due to its complex operational
conditions and adverse environmental factors. A comprehensive paradigm for
unmanned transportation in open-pit mines is proposed in this research,
including a simulation platform, a testing benchmark, and a trustworthy and
robust motion planner. Firstly, we propose a multi-task motion planning
algorithm, called FusionPlanner, for autonomous mining trucks, which uses a
multi-sensor fusion method to handle both lateral and longitudinal control tasks
for unmanned transportation. Then, we develop a novel benchmark called
MiningNav, which offers three validation approaches to evaluate the
trustworthiness and robustness of well-trained algorithms in transportation
roads of open-pit mines. Finally, we introduce the Parallel Mining Simulator
(PMS), a new high-fidelity simulator specifically designed for open-pit mining
scenarios. PMS enables users to manage and control open-pit mine
transportation from both the single-truck control and multi-truck scheduling
perspectives. The performance of FusionPlanner is tested by MiningNav in PMS,
and the empirical results demonstrate a significant reduction in the number of
collisions and takeovers of our planner. We anticipate our unmanned
transportation paradigm will bring mining trucks one step closer to
trustworthiness and robustness in continuous round-the-clock unmanned
transportation.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 04:18:07 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 01:41:52 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Teng",
"Siyu",
""
],
[
"Li",
"Luxi",
""
],
[
"Li",
"Yuchen",
""
],
[
"Hu",
"Xuemin",
""
],
[
"Li",
"Lingxi",
""
],
[
"Ai",
"Yunfeng",
""
],
[
"Chen",
"Long",
""
]
] |
new_dataset
| 0.995048 |
2309.05433
|
Goran Vasiljevic
|
Dario Stuhne, Goran Vasiljevic, Stjepan Bogdan and Zdenko Kovacic
|
Design and Validation of a Wireless Drone Docking Station
|
2023 International Conference on Unmanned Aircraft Systems (ICUAS)
| null |
10.1109/ICUAS57906.2023.10156589
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drones are increasingly operating autonomously, and the need for extending
drone power autonomy is rapidly increasing. One of the most promising solutions
to extend drone power autonomy is the use of docking stations to support both
landing and recharging of the drone. To this end, we introduce a novel wireless
drone docking station with three commercial wireless charging modules. We have
developed two independent units, both in mechanical and electrical aspects: the
energy transmitting unit and the energy receiving unit. We have also studied
the efficiency of wireless power transfer and demonstrated the advantages of
connecting three receiver modules in series and in parallel. We have
achieved maximum output power of 96.5 W with a power transfer efficiency of
56.6% for the series connection of coils. Finally, we implemented the system in
practice on a drone and tested both energy transfer and landing.
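A back-of-the-envelope check of the reported figures (an inference from the abstract only, not a measurement reported here): delivering 96.5 W at 56.6% transfer efficiency implies the transmitting unit supplies roughly 96.5 / 0.566 ≈ 170.5 W, i.e. about 74 W is dissipated in the wireless link.

p_out, eta = 96.5, 0.566
p_in = p_out / eta            # ~170.5 W drawn by the transmitting unit
p_loss = p_in - p_out         # ~74 W lost in the link
print(round(p_in, 1), round(p_loss, 1))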
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 13:09:25 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Stuhne",
"Dario",
""
],
[
"Vasiljevic",
"Goran",
""
],
[
"Bogdan",
"Stjepan",
""
],
[
"Kovacic",
"Zdenko",
""
]
] |
new_dataset
| 0.955168 |
2309.06085
|
Wei Qi Leong
|
Wei Qi Leong, Jian Gang Ngui, Yosephine Susanto, Hamsawardhini
Rengarajan, Kengatharaiyer Sarveswaran, William Chandra Tjhi
|
BHASA: A Holistic Southeast Asian Linguistic and Cultural Evaluation
Suite for Large Language Models
|
86 pages, 7 figures, added link to repository in abstract, minor
formatting changes and typo corrections
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid development of Large Language Models (LLMs) and the emergence of
novel abilities with scale have necessitated the construction of holistic,
diverse and challenging benchmarks such as HELM and BIG-bench. However, at the
moment, most of these benchmarks focus only on performance in English and
evaluations that include Southeast Asian (SEA) languages are few in number. We
therefore propose BHASA, a holistic linguistic and cultural evaluation suite
for LLMs in SEA languages. It comprises three components: (1) an NLP benchmark
covering eight tasks across Natural Language Understanding (NLU), Generation
(NLG) and Reasoning (NLR), (2) LINDSEA, a linguistic diagnostic toolkit
that spans the gamut of linguistic phenomena including syntax, semantics and
pragmatics, and (3) a cultural diagnostics dataset that probes for both
cultural representation and sensitivity. For this preliminary effort, we
implement the NLP benchmark only for Indonesian, Vietnamese, Thai and Tamil,
and we only include Indonesian and Tamil for LINDSEA and the cultural
diagnostics dataset. As GPT-4 is purportedly one of the best-performing
multilingual LLMs at the moment, we use it as a yardstick to gauge the
capabilities of LLMs in the context of SEA languages. Our initial experiments
on GPT-4 with BHASA find it lacking in various aspects of linguistic
capabilities, cultural representation and sensitivity in the targeted SEA
languages. BHASA is a work in progress and will continue to be improved and
expanded in the future. The repository for this paper can be found at:
https://github.com/aisingapore/BHASA
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 09:31:25 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 03:44:17 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Leong",
"Wei Qi",
""
],
[
"Ngui",
"Jian Gang",
""
],
[
"Susanto",
"Yosephine",
""
],
[
"Rengarajan",
"Hamsawardhini",
""
],
[
"Sarveswaran",
"Kengatharaiyer",
""
],
[
"Tjhi",
"William Chandra",
""
]
] |
new_dataset
| 0.99977 |
2309.09039
|
Manar Abdelatty
|
Manar Abdelatty, Joseph Incandela, Kangping Hu, Joseph W. Larkin,
Sherief Reda, Jacob K. Rosenstein
|
Microscale 3-D Capacitance Tomography with a CMOS Sensor Array
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Electrical capacitance tomography (ECT) is a nonoptical imaging technique in
which a map of the interior permittivity of a volume is estimated by making
capacitance measurements at its boundary and solving an inverse problem. While
previous ECT demonstrations have often been at centimeter scales, ECT is not
limited to macroscopic systems. In this paper, we demonstrate ECT imaging of
polymer microspheres and bacterial biofilms using a CMOS microelectrode array,
achieving a spatial resolution of 10 microns. Additionally, we propose a deep
learning architecture and an improved multi-objective training scheme for
reconstructing out-of-plane permittivity maps from the sensor measurements.
Experimental results show that the proposed approach is able to resolve
microscopic 3-D structures, achieving 91.5% prediction accuracy on the
microsphere dataset and 82.7% on the biofilm dataset, including an average of
4.6% improvement over baseline computational methods.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 16:24:58 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 01:18:26 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Abdelatty",
"Manar",
""
],
[
"Incandela",
"Joseph",
""
],
[
"Hu",
"Kangping",
""
],
[
"Larkin",
"Joseph W.",
""
],
[
"Reda",
"Sherief",
""
],
[
"Rosenstein",
"Jacob K.",
""
]
] |
new_dataset
| 0.995244 |
2309.09067
|
Fudong Lin
|
Fudong Lin, Summer Crawford, Kaleb Guillot, Yihe Zhang, Yan Chen, Xu
Yuan, Li Chen, Shelby Williams, Robert Minvielle, Xiangming Xiao, Drew
Gholson, Nicolas Ashwell, Tri Setiyono, Brenda Tubana, Lu Peng, Magdy
Bayoumi, Nian-Feng Tzeng
|
MMST-ViT: Climate Change-aware Crop Yield Prediction via Multi-Modal
Spatial-Temporal Vision Transformer
| null |
ICCV 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Precise crop yield prediction provides valuable information for agricultural
planning and decision-making processes. However, timely predicting crop yields
remains challenging as crop growth is sensitive to growing season weather
variation and climate change. In this work, we develop a deep learning-based
solution, namely Multi-Modal Spatial-Temporal Vision Transformer (MMST-ViT),
for predicting crop yields at the county level across the United States, by
considering the effects of short-term meteorological variations during the
growing season and the long-term climate change on crops. Specifically, our
MMST-ViT consists of a Multi-Modal Transformer, a Spatial Transformer, and a
Temporal Transformer. The Multi-Modal Transformer leverages both visual remote
sensing data and short-term meteorological data for modeling the effect of
growing season weather variations on crop growth. The Spatial Transformer
learns the high-resolution spatial dependency among counties for accurate
agricultural tracking. The Temporal Transformer captures the long-range
temporal dependency for learning the impact of long-term climate change on
crops. Meanwhile, we also devise a novel multi-modal contrastive learning
technique to pre-train our model without extensive human supervision. Hence,
our MMST-ViT captures the impacts of both short-term weather variations and
long-term climate change on crops by leveraging both satellite images and
meteorological data. We have conducted extensive experiments on over 200
counties in the United States, with the experimental results exhibiting that
our MMST-ViT outperforms its counterparts under three performance metrics of
interest.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 18:22:20 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 16:24:28 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Lin",
"Fudong",
""
],
[
"Crawford",
"Summer",
""
],
[
"Guillot",
"Kaleb",
""
],
[
"Zhang",
"Yihe",
""
],
[
"Chen",
"Yan",
""
],
[
"Yuan",
"Xu",
""
],
[
"Chen",
"Li",
""
],
[
"Williams",
"Shelby",
""
],
[
"Minvielle",
"Robert",
""
],
[
"Xiao",
"Xiangming",
""
],
[
"Gholson",
"Drew",
""
],
[
"Ashwell",
"Nicolas",
""
],
[
"Setiyono",
"Tri",
""
],
[
"Tubana",
"Brenda",
""
],
[
"Peng",
"Lu",
""
],
[
"Bayoumi",
"Magdy",
""
],
[
"Tzeng",
"Nian-Feng",
""
]
] |
new_dataset
| 0.987929 |
2309.09080
|
Senthil Yogamani
|
David Unger, Nikhil Gosala, Varun Ravi Kumar, Shubhankar Borse,
Abhinav Valada, Senthil Yogamani
|
Multi-camera Bird's Eye View Perception for Autonomous Driving
|
Taylor & Francis (CRC Press) book chapter. Book title: Computer
Vision: Challenges, Trends, and Opportunities
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Most automated driving systems comprise a diverse sensor set, including
several cameras, Radars, and LiDARs, ensuring a complete 360\deg coverage in
near and far regions. Unlike Radar and LiDAR, which measure directly in 3D,
cameras capture a 2D perspective projection with inherent depth ambiguity.
However, it is essential to produce perception outputs in 3D to enable the
spatial reasoning of other agents and structures for optimal path planning. The
3D space is typically simplified to the BEV space by omitting the less relevant
Z-coordinate, which corresponds to the height dimension. The most basic approach
to achieving the desired BEV representation from a camera image is IPM,
assuming a flat ground surface. Surround vision systems that are pretty common
in new vehicles use the IPM principle to generate a BEV image and show it on a
display to the driver. However, this approach is not suited for autonomous
driving since there are severe distortions introduced by this too-simplistic
transformation method. More recent approaches use deep neural networks to
output directly in BEV space. These methods transform camera images into BEV
space using geometric constraints implicitly or explicitly in the network. As
CNNs have more context information and a learnable transformation can be more
flexible and adapt to image content, the deep learning-based methods set the
new benchmark for BEV transformation and achieve state-of-the-art performance.
First, this chapter discusses the contemporary trends of multi-camera-based DNN
(deep neural network) models outputting object representations directly in the
BEV space. Then, we discuss how this approach can extend to effective sensor
fusion and coupling downstream tasks like situation analysis and prediction.
Finally, we show challenges and open problems in BEV perception.
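To make the IPM baseline concrete, here is a minimal flat-ground sketch: a ground-plane point (X, Y, 0) maps into the image through the planar homography H = K [r1 r2 t], and the BEV image is filled by sampling the corresponding pixel. The intrinsics, camera pose, and BEV grid below are placeholder values chosen for illustration, not taken from the chapter.

import numpy as np

K = np.array([[800., 0., 640.], [0., 800., 360.], [0., 0., 1.]])  # intrinsics (placeholder)
R = np.array([[0., -1., 0.],
              [0., 0., -1.],
              [1., 0., 0.]])        # ground (X fwd, Y left, Z up) -> camera (x right, y down, z fwd)
t = np.array([0., 1.5, 0.])         # camera mounted 1.5 m above the ground (placeholder)
H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # planar homography for the Z = 0 plane

def ipm(image, x_range=(2.0, 20.0), y_range=(-10.0, 10.0), cell=0.1):
    # image: (H, W, 3) camera frame; returns a BEV grid sampled on the ground plane.
    xs, ys = np.arange(*x_range, cell), np.arange(*y_range, cell)
    bev = np.zeros((len(xs), len(ys), image.shape[2]), dtype=image.dtype)
    for i, X in enumerate(xs):
        for j, Y in enumerate(ys):
            u, v, w = H @ np.array([X, Y, 1.0])
            if w <= 0:
                continue                             # behind the camera
            px, py = int(round(u / w)), int(round(v / w))
            if 0 <= py < image.shape[0] and 0 <= px < image.shape[1]:
                bev[i, j] = image[py, px]            # nearest-neighbour sampling
    return bev

The distortions mentioned above appear exactly where the flat-ground assumption breaks, e.g. for pixels belonging to vehicles or buildings that are lifted off the Z = 0 plane.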
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 19:12:05 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 10:40:37 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Unger",
"David",
""
],
[
"Gosala",
"Nikhil",
""
],
[
"Kumar",
"Varun Ravi",
""
],
[
"Borse",
"Shubhankar",
""
],
[
"Valada",
"Abhinav",
""
],
[
"Yogamani",
"Senthil",
""
]
] |
new_dataset
| 0.984632 |
2309.09708
|
Nan Li
|
Nan Li, Bo Kang, Tijl De Bie
|
LLM4Jobs: Unsupervised occupation extraction and standardization
leveraging Large Language Models
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated occupation extraction and standardization from free-text job
postings and resumes are crucial for applications like job recommendation and
labor market policy formation. This paper introduces LLM4Jobs, a novel
unsupervised methodology that taps into the capabilities of large language
models (LLMs) for occupation coding. LLM4Jobs uniquely harnesses both the
natural language understanding and generation capacities of LLMs. Through
rigorous experimentation on synthetic and real-world datasets, we demonstrate
that LLM4Jobs consistently surpasses unsupervised state-of-the-art benchmarks,
showing its versatility across diverse datasets and granularities. As a
side result of our work, we present both synthetic and real-world datasets,
which may be instrumental for subsequent research in this domain. Overall, this
investigation highlights the promise of contemporary LLMs for the intricate
task of occupation extraction and standardization, laying the foundation for a
robust and adaptable framework relevant to both research and industrial
contexts.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 12:22:00 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 09:28:18 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Li",
"Nan",
""
],
[
"Kang",
"Bo",
""
],
[
"De Bie",
"Tijl",
""
]
] |
new_dataset
| 0.99459 |
2309.09749
|
Huachuan Qiu
|
Huachuan Qiu, Shuai Zhang, Hongliang He, Anqi Li, Zhenzhong Lan
|
Facilitating NSFW Text Detection in Open-Domain Dialogue Systems via
Knowledge Distillation
|
Submitted to ICASSP 2024. Code and data are publicly available at
https://github.com/qiuhuachuan/CensorChat
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NSFW (Not Safe for Work) content, in the context of a dialogue, can have
severe side effects on users in open-domain dialogue systems. However, research
on detecting NSFW language, especially sexually explicit content, within a
dialogue context has significantly lagged behind. To address this issue, we
introduce CensorChat, a dialogue monitoring dataset aimed at NSFW dialogue
detection. Leveraging knowledge distillation techniques involving GPT-4 and
ChatGPT, this dataset offers a cost-effective means of constructing NSFW
content detectors. The process entails collecting real-life human-machine
interaction data and breaking it down into single utterances and single-turn
dialogues, with the chatbot delivering the final utterance. ChatGPT is employed
to annotate unlabeled data, serving as a training set. Rationale validation and
test sets are constructed using ChatGPT and GPT-4 as annotators, with a
self-criticism strategy for resolving discrepancies in labeling. A BERT model
is fine-tuned as a text classifier on pseudo-labeled data, and its performance
is assessed. The study emphasizes the importance of AI systems prioritizing
user safety and well-being in digital conversations while respecting freedom of
expression. The proposed approach not only advances NSFW content detection but
also aligns with evolving user protection needs in AI-driven dialogues.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 13:24:44 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 12:32:21 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Qiu",
"Huachuan",
""
],
[
"Zhang",
"Shuai",
""
],
[
"He",
"Hongliang",
""
],
[
"Li",
"Anqi",
""
],
[
"Lan",
"Zhenzhong",
""
]
] |
new_dataset
| 0.999045 |
2309.10001
|
Taein Kwon
|
Junan Lin, Zhichao Sun, Enjie Cao, Taein Kwon, Mahdi Rad, Marc
Pollefeys
|
CaSAR: Contact-aware Skeletal Action Recognition
|
10 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skeletal action recognition from an egocentric view is important for
applications such as interfaces in AR/VR glasses and human-robot interaction,
where the device has limited resources. Most of the existing skeletal action
recognition approaches use 3D coordinates of hand joints and 8-corner
rectangular bounding boxes of objects as inputs, but they do not capture how
the hands and objects interact with each other within the spatial context. In
this paper, we present a new framework called Contact-aware Skeletal Action
Recognition (CaSAR). It uses novel representations of hand-object interaction
that encompass spatial information: 1) contact points where the hand joints
meet the objects, 2) distant points where the hand joints are far away from the
object and barely involved in the current action. Our framework is able to
learn how the hands touch or stay away from the objects for each frame of the
action sequence, and use this information to predict the action class. We
demonstrate that our approach achieves the state-of-the-art accuracy of 91.3%
and 98.4% on two public datasets, H2O and FPHA, respectively.
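As a minimal sketch of the proposed representation (the 1 cm / 10 cm thresholds below are placeholders, not the paper's values), contact and distant points can be obtained by thresholding each hand joint's distance to its nearest object point:

import numpy as np

def split_joints(hand_joints, object_points, contact_thr=0.01, distant_thr=0.10):
    # hand_joints: (J, 3) and object_points: (N, 3), both in metres.
    d = np.linalg.norm(hand_joints[:, None, :] - object_points[None, :, :], axis=-1)
    nearest = d.min(axis=1)                 # distance of each joint to the object
    contact = nearest <= contact_thr        # joints touching the object
    distant = nearest >= distant_thr        # joints essentially uninvolved
    return contact, distant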
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 09:42:40 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Lin",
"Junan",
""
],
[
"Sun",
"Zhichao",
""
],
[
"Cao",
"Enjie",
""
],
[
"Kwon",
"Taein",
""
],
[
"Rad",
"Mahdi",
""
],
[
"Pollefeys",
"Marc",
""
]
] |
new_dataset
| 0.999393 |
2309.10015
|
Christopher Richardson
|
Christopher Richardson, Anirudh Sundar, Larry Heck
|
SYNDICOM: Improving Conversational Commonsense with Error-Injection and
Natural Language Feedback
|
Published at SigDial 2023, Number 129
| null | null |
129
|
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Commonsense reasoning is a critical aspect of human communication. Despite
recent advances in conversational AI driven by large language models,
commonsense reasoning remains a challenging task. In this work, we introduce
SYNDICOM - a method for improving commonsense in dialogue response generation.
SYNDICOM consists of two components. The first component is a dataset composed
of commonsense dialogues created from a knowledge graph and synthesized into
natural language. This dataset includes both valid and invalid responses to
dialogue contexts, along with natural language feedback (NLF) for the invalid
responses. The second component is a two-step procedure: training a model to
predict natural language feedback (NLF) for invalid responses, and then
training a response generation model conditioned on the predicted NLF, the
invalid response, and the dialogue. SYNDICOM is scalable and does not require
reinforcement learning. Empirical results on three tasks are evaluated using a
broad range of metrics. SYNDICOM achieves a relative improvement of 53% over
ChatGPT on ROUGE1, and human evaluators prefer SYNDICOM over ChatGPT 57% of the
time. We will publicly release the code and the full dataset.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 15:08:48 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Richardson",
"Christopher",
""
],
[
"Sundar",
"Anirudh",
""
],
[
"Heck",
"Larry",
""
]
] |
new_dataset
| 0.999748 |
2309.10062
|
Shyam Sundar Kannan
|
Shyam Sundar Kannan, Vishnunandan L. N. Venkatesh, and Byung-Cheol Min
|
SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language
Models
|
Submitted to ICRA 2024
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we introduce SMART-LLM, an innovative framework designed for
embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task
Planning using Large Language Models (LLMs), harnesses the power of LLMs to
convert high-level task instructions provided as input into a multi-robot task
plan. It accomplishes this by executing a series of stages, including task
decomposition, coalition formation, and task allocation, all guided by
programmatic LLM prompts within the few-shot prompting paradigm. We create a
benchmark dataset designed for validating the multi-robot task planning
problem, encompassing four distinct categories of high-level instructions that
vary in task complexity. Our evaluation experiments span both simulation and
real-world scenarios, demonstrating that the proposed model can achieve
promising results for generating multi-robot task plans. The experimental
videos, code, and datasets from the work can be found at
https://sites.google.com/view/smart-llm/.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 18:17:56 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Kannan",
"Shyam Sundar",
""
],
[
"Venkatesh",
"Vishnunandan L. N.",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.998959 |
2309.10109
|
Damian S\'ojka
|
Damian S\'ojka, Sebastian Cygert, Bart{\l}omiej Twardowski and Tomasz
Trzci\'nski
|
AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Test-time adaptation is a promising research direction that allows the source
model to adapt itself to changes in data distribution without any supervision.
Yet, current methods are usually evaluated on benchmarks that are only a
simplification of real-world scenarios. Hence, we propose to validate test-time
adaptation methods using the recently introduced datasets for autonomous
driving, namely CLAD-C and SHIFT. We observe that current test-time adaptation
methods struggle to effectively handle varying degrees of domain shift, often
resulting in degraded performance that falls below that of the source model. We
noticed that the root of the problem lies in the inability to preserve the
knowledge of the source model and adapt to dynamically changing, temporally
correlated data streams. Therefore, we enhance the well-established self-training
framework by incorporating a small memory buffer to increase model stability
and at the same time perform dynamic adaptation based on the intensity of
domain shift. The proposed method, named AR-TTA, outperforms existing
approaches on both synthetic and more real-world benchmarks and shows
robustness across a variety of TTA scenarios.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 19:34:23 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Sójka",
"Damian",
""
],
[
"Cygert",
"Sebastian",
""
],
[
"Twardowski",
"Bartłomiej",
""
],
[
"Trzciński",
"Tomasz",
""
]
] |
new_dataset
| 0.952696 |
2309.10164
|
Saurav Agarwal
|
Saurav Agarwal, Alejandro Ribeiro, Vijay Kumar
|
Asynchronous Perception-Action-Communication with Graph Neural Networks
|
Under review: IEEE International Conference on Robotics and
Automation (ICRA) 2024
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Collaboration in large robot swarms to achieve a common global objective is a
challenging problem in large environments due to limited sensing and
communication capabilities. The robots must execute a
Perception-Action-Communication (PAC) loop -- they perceive their local
environment, communicate with other robots, and take actions in real time. A
fundamental challenge in decentralized PAC systems is to decide what
information to communicate with the neighboring robots and how to take actions
while utilizing the information shared by the neighbors. Recently, this has
been addressed using Graph Neural Networks (GNNs) for applications such as
flocking and coverage control. Although conceptually, GNN policies are fully
decentralized, the evaluation and deployment of such policies have primarily
remained centralized or restrictively decentralized. Furthermore, existing
frameworks assume sequential execution of perception and action inference,
which is very restrictive in real-world applications. This paper proposes a
framework for asynchronous PAC in robot swarms, where decentralized GNNs are
used to compute navigation actions and generate messages for communication. In
particular, we use aggregated GNNs, which enable the exchange of hidden layer
information between robots for computational efficiency and decentralized
inference of actions. Furthermore, the modules in the framework are
asynchronous, allowing robots to perform sensing, extracting information,
communication, action inference, and control execution at different
frequencies. We demonstrate the effectiveness of GNNs executed in the proposed
framework in navigating large robot swarms for collaborative coverage of large
environments.
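A hedged sketch of the decentralized inference step: each robot evaluates one message-passing layer locally, using only its own features and the hidden features its neighbors transmitted. The mean aggregation, weight shapes, and tanh nonlinearity are illustrative assumptions, not the exact aggregated-GNN layer used in the paper.

import numpy as np

def local_gnn_layer(x_self, neighbor_feats, W_self, W_nbr):
    # x_self: (d,) own features; neighbor_feats: list of (d,) vectors received
    # over the radio; W_self, W_nbr: (d_out, d) learned weight matrices.
    agg = np.mean(neighbor_feats, axis=0) if len(neighbor_feats) else np.zeros_like(x_self)
    h = np.tanh(W_self @ x_self + W_nbr @ agg)
    return h   # next hidden state: fed to the action head and broadcast to neighbors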
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 21:20:50 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Agarwal",
"Saurav",
""
],
[
"Ribeiro",
"Alejandro",
""
],
[
"Kumar",
"Vijay",
""
]
] |
new_dataset
| 0.998037 |
2309.10175
|
Abraham George
|
Abraham George and Amir Barati Farimani
|
One ACT Play: Single Demonstration Behavior Cloning with Action Chunking
Transformers
|
7 pages, 6 figures
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Learning from human demonstrations (behavior cloning) is a cornerstone of
robot learning. However, most behavior cloning algorithms require a large
number of demonstrations to learn a task, especially for general tasks that
have a large variety of initial conditions. Humans, however, can learn to
complete tasks, even complex ones, after only seeing one or two demonstrations.
Our work seeks to emulate this ability, using behavior cloning to learn a task
given only a single human demonstration. We achieve this goal by using linear
transforms to augment the single demonstration, generating a set of
trajectories for a wide range of initial conditions. With these demonstrations,
we are able to train a behavior cloning agent to successfully complete three
block manipulation tasks. Additionally, we developed a novel addition to the
temporal ensembling method used by action chunking agents during inference. By
incorporating the standard deviation of the action predictions into the
ensembling method, our approach is more robust to unforeseen changes in the
environment, resulting in significant performance improvements.
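The abstract does not spell out the exact weighting rule, so the sketch below is only one plausible reading of "incorporating the standard deviation of the action predictions": overlapping chunk predictions for the current step are combined with weights that shrink for predictions far from the ensemble mean (measured in units of the per-dimension standard deviation), so disagreeing outliers contribute less when the environment changes unexpectedly.

import numpy as np

def ensemble_actions(chunk_preds, eps=1e-6):
    # chunk_preds: (K, A) -- K overlapping predictions of the current action
    # from previously emitted action chunks.
    mu = chunk_preds.mean(axis=0)
    sigma = chunk_preds.std(axis=0) + eps
    z = np.linalg.norm((chunk_preds - mu) / sigma, axis=1)   # per-prediction outlier score
    w = np.exp(-z)                                           # outliers get small weights
    return (w[:, None] * chunk_preds).sum(axis=0) / w.sum()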
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 21:50:26 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"George",
"Abraham",
""
],
[
"Farimani",
"Amir Barati",
""
]
] |
new_dataset
| 0.987473 |
2309.10225
|
Adam Hines PhD
|
Adam D. Hines, Peter G. Stratton, Michael Milford, Tobias Fischer
|
VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual
Place Recognition
|
8 pages, 3 figures, under review
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking Neural Networks (SNNs) are at the forefront of neuromorphic computing
thanks to their potential energy-efficiency, low latencies, and capacity for
continual learning. While these capabilities are well suited for robotics
tasks, SNNs have seen limited adaptation in this field thus far. This work
introduces a SNN for Visual Place Recognition (VPR) that is both trainable
within minutes and queryable in milliseconds, making it well suited for
deployment on compute-constrained robotic systems. Our proposed system,
VPRTempo, overcomes slow training and inference times using an abstracted SNN
that trades biological realism for efficiency. VPRTempo employs a temporal code
that determines the timing of a single spike based on a pixel's intensity, as
opposed to prior SNNs relying on rate coding that determined the number of
spikes; improving spike efficiency by over 100%. VPRTempo is trained using
Spike-Timing Dependent Plasticity and a supervised delta learning rule
enforcing that each output spiking neuron responds to just a single place. We
evaluate our system on the Nordland and Oxford RobotCar benchmark localization
datasets, which include up to 27k places. We found that VPRTempo's accuracy is
comparable to prior SNNs and the popular NetVLAD place recognition algorithm,
while being several orders of magnitude faster and suitable for real-time
deployment -- with inference speeds over 50 Hz on CPU. VPRTempo could be
integrated as a loop closure component for online SLAM on resource-constrained
systems such as space and underwater robots.
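A minimal sketch of the intensity-to-latency code described above: each pixel emits a single spike whose timing is a function of its intensity, with brighter pixels spiking earlier. The 10 ms window and the linear mapping are assumptions for illustration; the exact encoding in VPRTempo may differ.

import numpy as np

def intensity_to_spike_times(image, t_window=10.0):
    # image: uint8 array in [0, 255]; returns one spike time per pixel in
    # [0, t_window] (ms), with brighter pixels firing earlier.
    norm = image.astype(np.float32) / 255.0
    return t_window * (1.0 - norm)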
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 00:38:05 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Hines",
"Adam D.",
""
],
[
"Stratton",
"Peter G.",
""
],
[
"Milford",
"Michael",
""
],
[
"Fischer",
"Tobias",
""
]
] |
new_dataset
| 0.997969 |
2309.10263
|
Lunan Sun
|
Lunan Sun, Yang Yang, Mingzhe Chen, Caili Guo
|
Disentangled Information Bottleneck guided Privacy-Protective JSCC for
Image Transmission
| null | null | null | null |
cs.CR cs.IT eess.IV eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Joint source and channel coding (JSCC) has attracted increasing attention due
to its robustness and high efficiency. However, JSCC is vulnerable to privacy
leakage due to the high relevance between the source image and channel input.
In this paper, we propose a disentangled information bottleneck guided
privacy-protective JSCC (DIB-PPJSCC) for image transmission, which aims at
protecting private information as well as achieving superior communication
performance at the legitimate receiver. In particular, we propose a DIB
objective to disentangle private and public information. The goal is to
compress the private information in the public subcodewords, preserve the
private information in the private subcodewords and improve the reconstruction
quality simultaneously. In order to optimize JSCC neural networks using the DIB
objective, we derive a differentiable estimation of the DIB objective based on
the variational approximation and the density-ratio trick. Additionally, we
design a password-based privacy-protective (PP) algorithm which can be jointly
optimized with JSCC neural networks to encrypt the private subcodewords.
Specifically, we employ a private information encryptor to encrypt the private
subcodewords before transmission, and a corresponding decryptor to recover the
private information at the legitimate receiver. A loss function for jointly
training the encryptor, decryptor and JSCC decoder is derived based on the
maximum entropy principle, which aims at maximizing the eavesdropping
uncertainty as well as improving the reconstruction quality. Experimental
results show that DIB-PPJSCC can reduce the eavesdropping accuracy on private
information by up to $15\%$ and reduce inference time by $10\%$ compared to existing
privacy-protective JSCC and traditional separate methods.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 02:37:53 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Sun",
"Lunan",
""
],
[
"Yang",
"Yang",
""
],
[
"Chen",
"Mingzhe",
""
],
[
"Guo",
"Caili",
""
]
] |
new_dataset
| 0.991744 |
2309.10268
|
Kentaro Uno
|
Kentaro Uno, Kazuki Takada, Keita Nagaoka, Takuya Kato, Arthur
Candalot, and Kazuya Yoshida
|
Lower Gravity Demonstratable Testbed for Space Robot Experiments
|
2 pages, 3 figures, paper submitted to the SII 2024 (IEEE/SICE
International Symposium on System Integration) (Updated references
formatting)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In developing mobile robots for exploration on the planetary surface, it is
crucial to evaluate the robot's performance, demonstrating the harsh
environment in which the robot will actually be deployed. Repeatable
experiments in a controlled testing environment that can reproduce various
terrain and gravitational conditions are essential. This paper presents the
development of a minimal and space-saving indoor testbed, which can simulate
steep slopes, uneven terrain, and lower gravity, employing a three-dimensional
target tracking mechanism (active xy and passive z) with a counterweight.
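Under a quasi-static reading of the counterweight mechanism (ignoring pulley friction, cable dynamics, and the counterweight's own inertia, which are simplifying assumptions made here), the apparent gravity felt by the robot is roughly g·(m_robot − m_cw)/m_robot, so lunar gravity (~g/6) calls for a counterweight of about five sixths of the robot's mass:

def apparent_gravity(m_robot, m_counterweight, g=9.81):
    # Quasi-static offload: the counterweight tension cancels part of the weight.
    return g * (m_robot - m_counterweight) / m_robot

print(apparent_gravity(30.0, 25.0))   # ~1.64 m/s^2 for a 30 kg rover, close to lunar 1.62 m/s^2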
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 02:44:22 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Uno",
"Kentaro",
""
],
[
"Takada",
"Kazuki",
""
],
[
"Nagaoka",
"Keita",
""
],
[
"Kato",
"Takuya",
""
],
[
"Candalot",
"Arthur",
""
],
[
"Yoshida",
"Kazuya",
""
]
] |
new_dataset
| 0.998685 |
2309.10329
|
Zeshi Yang
|
Zeshi Yang, Zherong Pan, Manyi Li, Kui Wu, Xifeng Gao
|
Learning based 2D Irregular Shape Packing
| null | null | null | null |
cs.GR cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
2D irregular shape packing is a necessary step to arrange UV patches of a 3D
model within a texture atlas for memory-efficient appearance rendering in
computer graphics. Being a joint, combinatorial decision-making problem
involving all patch positions and orientations, this problem has well-known
NP-hard complexity. Prior solutions either assume a heuristic packing order or
modify the upstream mesh cut and UV mapping to simplify the problem, which
either limits the packing ratio or incurs robustness or generality issues.
Instead, we introduce a learning-assisted 2D irregular shape packing method
that achieves a high packing quality with minimal requirements from the input.
Our method iteratively selects and groups subsets of UV patches into
near-rectangular super patches, essentially reducing the problem to
bin-packing, based on which a joint optimization is employed to further improve
the packing ratio. In order to efficiently deal with large problem instances
with hundreds of patches, we train deep neural policies to predict nearly
rectangular patch subsets and determine their relative poses, leading to linear
time scaling with the number of patches. We demonstrate the effectiveness of
our method on three datasets for UV packing, where our method achieves a higher
packing ratio over several widely used baselines with competitive computational
speed.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 05:21:52 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Yang",
"Zeshi",
""
],
[
"Pan",
"Zherong",
""
],
[
"Li",
"Manyi",
""
],
[
"Wu",
"Kui",
""
],
[
"Gao",
"Xifeng",
""
]
] |
new_dataset
| 0.976409 |
2309.10339
|
Kisu Yang
|
Kisu Yang, Yoonna Jang, Taewoo Lee, Jinwoo Seong, Hyungjin Lee,
Hwanseok Jang, Heuiseok Lim
|
KoBigBird-large: Transformation of Transformer for Korean Language
Understanding
|
Accepted at IJCNLP-AACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This work presents KoBigBird-large, a large size of Korean BigBird that
achieves state-of-the-art performance and allows long sequence processing for
Korean language understanding. Without further pretraining, we only transform
the architecture and extend the positional encoding with our proposed Tapered
Absolute Positional Encoding Representations (TAPER). In experiments,
KoBigBird-large shows state-of-the-art overall performance on Korean language
understanding benchmarks and the best performance on document classification
and question answering tasks for longer sequences against the competitive
baseline models. We publicly release our model here.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 05:48:57 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Yang",
"Kisu",
""
],
[
"Jang",
"Yoonna",
""
],
[
"Lee",
"Taewoo",
""
],
[
"Seong",
"Jinwoo",
""
],
[
"Lee",
"Hyungjin",
""
],
[
"Jang",
"Hwanseok",
""
],
[
"Lim",
"Heuiseok",
""
]
] |
new_dataset
| 0.955293 |
2309.10350
|
Yaoyu Tao
|
Lianfeng Yu, Yaoyu Tao, Teng Zhang, Zeyu Wang, Xile Wang, Zelun Pan,
Bowen Wang, Zhaokun Jing, Jiaxin Liu, Yuqi Li, Yihang Zhu, Bonan Yan and
Yuchao Yang
|
Fast and reconfigurable sort-in-memory system enabled by memristors
|
Submitted to Nature Electronics
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sorting is fundamental and ubiquitous in modern computing systems. Hardware
sorting systems are built based on comparison operations with von Neumann
architecture, but their performance is limited by the bandwidth between memory
and comparison units and the performance of complementary
metal-oxide-semiconductor (CMOS) based circuitry. Sort-in-memory (SIM) based on
emerging memristors is desired but not yet available due to comparison
operations that are challenging to be implemented within memristive memory.
Here we report fast and reconfigurable SIM system enabled by digit read (DR) on
1-transistor-1-resistor (1T1R) memristor arrays. We develop DR tree node
skipping (TNS) that support variable data quantity and data types, and extend
TNS with multi-bank, bit-slice and multi-level strategies to enable cross-array
TNS (CA-TNS) for practical adoptions. Experimented on benchmark sorting
datasets, our memristor-enabled SIM system presents up to 3.32x~7.70x speedup,
6.23x~183.5x energy efficiency improvement and 2.23x~7.43x area reduction
compared with state-of-the-art sorting systems. We apply such SIM system for
shortest path search with Dijkstra's algorithm and neural network inference
with in-situ pruning, demonstrating the capability in solving practical sorting
tasks and the compatibility in integrating with other compute-in-memory (CIM)
schemes. The comparison-free TNS/CA-TNS SIM enabled by memristors pushes
sorting into a new paradigm of sort-in-memory for next-generation sorting
systems.
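The abstract does not detail TNS itself; purely as a software illustration of the comparison-free, digit-driven style of sorting that digit read enables (and not the in-memory algorithm), a most-significant-digit radix sort partitions keys one bit at a time without ever comparing two keys:

def msd_radix_sort(keys, bits=32):
    # Comparison-free: recursively partition on one bit, most significant first.
    def rec(arr, bit):
        if bit < 0 or len(arr) <= 1:
            return arr
        zeros = [k for k in arr if not (k >> bit) & 1]
        ones = [k for k in arr if (k >> bit) & 1]
        return rec(zeros, bit - 1) + rec(ones, bit - 1)
    return rec(list(keys), bits - 1)

assert msd_radix_sort([5, 3, 9, 1, 3]) == [1, 3, 3, 5, 9]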
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 06:21:20 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Yu",
"Lianfeng",
""
],
[
"Tao",
"Yaoyu",
""
],
[
"Zhang",
"Teng",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Wang",
"Xile",
""
],
[
"Pan",
"Zelun",
""
],
[
"Wang",
"Bowen",
""
],
[
"Jing",
"Zhaokun",
""
],
[
"Liu",
"Jiaxin",
""
],
[
"Li",
"Yuqi",
""
],
[
"Zhu",
"Yihang",
""
],
[
"Yan",
"Bonan",
""
],
[
"Yang",
"Yuchao",
""
]
] |
new_dataset
| 0.998489 |
2309.10383
|
Nithish Krishnabharathi Gnani
|
Nithish Krishnabharathi Gnani, Joydeep Pal, Deepak Choudhary, Himanshu
Verma, Soumya Kanta Rana, Kaushal Mhapsekar, T. V. Prabhakar, Chandramani
Singh
|
EdgeP4: A P4-Programmable Edge Intelligent Ethernet Switch for Tactile
Cyber-Physical Systems
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile Internet based operations, e.g., telesurgery, rely on end-to-end
closed loop control for accuracy and corrections. The feedback and control are
subject to network latency and loss. We design two edge intelligence algorithms
hosted at P4 programmable end switches. These algorithms locally compute and
command corrective signals, thereby sparing the feedback signals from traversing
the network to the other end and saving control-loop latency and network load.
We implement these algorithms entirely in the data plane on Netronome
Agilio SmartNICs using P4. Our first algorithm, $\textit{pose correction}$, is
placed at the edge switch connected to an industrial robot gripping a tool. The
round trip between transmitting force sensor array readings to the edge switch
and receiving correct tip coordinates at the robot is shown to be less than
$100~\mu s$. The second algorithm, $\textit{tremor suppression}$, is placed at
the edge switch connected to the human operator. It suppresses physiological
tremors of amplitudes smaller than $100~\mu m$ which not only improves the
application's performance but also reduces the network load up to $99.9\%$. Our
solution allows edge intelligence modules to seamlessly switch between the
algorithms based on the tasks being executed at the end hosts.
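As a conceptual stand-in for the in-switch filter (the real implementation runs in P4 on the data plane; the window length and the way the amplitude threshold is applied are assumptions here), tremor suppression can be pictured as smoothing the operator's position and forwarding an update only when the smoothed value moves by more than the 100 µm threshold:

from collections import deque

class TremorSuppressor:
    def __init__(self, window=8, threshold_m=100e-6):
        self.buf = deque(maxlen=window)   # recent 1-D position samples (m)
        self.threshold = threshold_m
        self.last_sent = None

    def update(self, position_m):
        self.buf.append(position_m)
        smoothed = sum(self.buf) / len(self.buf)
        if self.last_sent is None or abs(smoothed - self.last_sent) > self.threshold:
            self.last_sent = smoothed
            return smoothed               # forward an update to the remote end
        return None                       # suppress: sub-threshold motion is treated as tremor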
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 07:35:16 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Gnani",
"Nithish Krishnabharathi",
""
],
[
"Pal",
"Joydeep",
""
],
[
"Choudhary",
"Deepak",
""
],
[
"Verma",
"Himanshu",
""
],
[
"Rana",
"Soumya Kanta",
""
],
[
"Mhapsekar",
"Kaushal",
""
],
[
"Prabhakar",
"T. V.",
""
],
[
"Singh",
"Chandramani",
""
]
] |
new_dataset
| 0.995842 |
2309.10388
|
Kyungmin Jo
|
Kyungmin Jo, Wonjoon Jin, Jaegul Choo, Hyunjoon Lee, Sunghyun Cho
|
SideGAN: 3D-Aware Generative Model for Improved Side-View Image
Synthesis
|
International Conference on Computer Vision (ICCV) 2023
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While recent 3D-aware generative models have shown photo-realistic image
synthesis with multi-view consistency, the synthesized image quality degrades
depending on the camera pose (e.g., a face with a blurry and noisy boundary at
a side viewpoint). Such degradation is mainly caused by the difficulty of
learning both pose consistency and photo-realism simultaneously from a dataset
with heavily imbalanced poses. In this paper, we propose SideGAN, a novel 3D
GAN training method to generate photo-realistic images irrespective of the
camera pose, especially for faces of side-view angles. To ease the challenging
problem of learning photo-realistic and pose-consistent image synthesis, we
split the problem into two subproblems, each of which can be solved more
easily. Specifically, we formulate the problem as a combination of two simple
discrimination problems, one of which learns to discriminate whether a
synthesized image looks real or not, and the other learns to discriminate
whether a synthesized image agrees with the camera pose. Based on this, we
propose a dual-branched discriminator with two discrimination branches. We also
propose a pose-matching loss to learn the pose consistency of 3D GANs. In
addition, we present a pose sampling strategy to increase learning
opportunities for steep angles in a pose-imbalanced dataset. With extensive
validation, we demonstrate that our approach enables 3D GANs to generate
high-quality geometries and photo-realistic images irrespective of the camera
pose.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 07:38:05 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Jo",
"Kyungmin",
""
],
[
"Jin",
"Wonjoon",
""
],
[
"Choo",
"Jaegul",
""
],
[
"Lee",
"Hyunjoon",
""
],
[
"Cho",
"Sunghyun",
""
]
] |
new_dataset
| 0.997742 |
2309.10403
|
Sandeep Khanna
|
Sandeep Khanna, Chiranjoy Chattopadhyay, Suman Kundu
|
INDoRI: Indian Dataset of Recipes and Ingredients and its Ingredient
Network
|
11 pages, 4 figures, 3 tables
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploring and comprehending the culinary heritage of a nation holds a
captivating allure. It offers insights into the structure and qualities of its
cuisine. The endeavor becomes more accessible with the availability of a
well-organized dataset. In this paper, we present the introduction of INDoRI
(Indian Dataset of Recipes and Ingredients), a compilation drawn from seven
distinct online platforms, representing 18 regions within the Indian
subcontinent. This comprehensive geographical span ensures a portrayal of the
rich variety within culinary practices. Furthermore, we introduce a unique
collection of stop words, referred to as ISW (Ingredient Stop Words), manually
tuned for the culinary domain. We assess the validity of ISW in the context of
global cuisines beyond Indian culinary tradition. Subsequently, an ingredient
network (InN) is constructed, highlighting interconnections among ingredients
sourced from different recipes. We delve into both the defining attributes of
INDoRI and the communal dimensions of InN. Additionally, we outline the
potential applications that can be developed leveraging this dataset.
Addressing one of the applications, we demonstrated a research problem on InN
with a simple weighted community detection algorithm. Furthermore, we provide a
comparative analysis of the results obtained with this algorithm against those
generated by two baselines.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 08:06:34 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Khanna",
"Sandeep",
""
],
[
"Chattopadhyay",
"Chiranjoy",
""
],
[
"Kundu",
"Suman",
""
]
] |
new_dataset
| 0.999884 |
2309.10436
|
Xianjia Yu
|
Haizhou Zhang, Xianjia Yu, Sier Ha, and Tomi Westerlund
|
LiDAR-Generated Images Derived Keypoints Assisted Point Cloud
Registration Scheme in Odometry Estimation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Keypoint detection and description play a pivotal role in various robotics
and autonomous applications including visual odometry (VO), visual navigation,
and Simultaneous localization and mapping (SLAM). While a myriad of keypoint
detectors and descriptors have been extensively studied in conventional camera
images, the effectiveness of these techniques in the context of LiDAR-generated
images, i.e. reflectivity and ranges images, has not been assessed. These
images have gained attention due to their resilience in adverse conditions such
as rain or fog. Additionally, they contain significant textural information
that supplements the geometric information provided by LiDAR point clouds in
the point cloud registration phase, especially when reliant solely on LiDAR
sensors. This addresses the challenge of drift encountered in LiDAR Odometry
(LO) within geometrically identical scenarios or where not all the raw point
cloud is informative and parts of it may even be misleading. This paper aims to analyze the
applicability of conventional image key point extractors and descriptors on
LiDAR-generated images via a comprehensive quantitative investigation.
Moreover, we propose a novel approach to enhance the robustness and reliability
of LO. After extracting key points, we proceed to downsample the point cloud,
subsequently integrating it into the point cloud registration phase for the
purpose of odometry estimation. Our experiment demonstrates that the proposed
approach has comparable accuracy but reduced computational overhead, higher
odometry publishing rate, and even superior performance in scenarios prone to
drift by using the raw point cloud. This, in turn, lays a foundation for
subsequent investigations into the integration of LiDAR-generated images with
LO. Our code is available on GitHub:
https://github.com/TIERS/ws-lidar-as-camera-odom.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 08:55:05 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Zhang",
"Haizhou",
""
],
[
"Yu",
"Xianjia",
""
],
[
"Ha",
"Sier",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.999704 |
2309.10491
|
Jiawen Zhu
|
Jiawen Zhu, Huayi Tang, Zhi-Qi Cheng, Jun-Yan He, Bin Luo, Shihao Qiu,
Shengming Li, Huchuan Lu
|
DCPT: Darkness Clue-Prompted Tracking in Nighttime UAVs
|
Under review
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing nighttime unmanned aerial vehicle (UAV) trackers follow an
"Enhance-then-Track" architecture - first using a light enhancer to brighten
the nighttime video, then employing a daytime tracker to locate the object.
This separate enhancement and tracking fails to build an end-to-end trainable
vision system. To address this, we propose a novel architecture called Darkness
Clue-Prompted Tracking (DCPT) that achieves robust UAV tracking at night by
efficiently learning to generate darkness clue prompts. Without a separate
enhancer, DCPT directly encodes anti-dark capabilities into prompts using a
darkness clue prompter (DCP). Specifically, DCP iteratively learns emphasizing
and undermining projections for darkness clues. It then injects these learned
visual prompts into a daytime tracker with fixed parameters across transformer
layers. Moreover, a gated feature aggregation mechanism enables adaptive fusion
between prompts and between prompts and the base model. Extensive experiments
show state-of-the-art performance for DCPT on multiple dark scenario
benchmarks. The unified end-to-end learning of enhancement and tracking in DCPT
enables a more trainable system. The darkness clue prompting efficiently
injects anti-dark knowledge without extra modules. Code and models will be
released.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 09:59:08 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Zhu",
"Jiawen",
""
],
[
"Tang",
"Huayi",
""
],
[
"Cheng",
"Zhi-Qi",
""
],
[
"He",
"Jun-Yan",
""
],
[
"Luo",
"Bin",
""
],
[
"Qiu",
"Shihao",
""
],
[
"Li",
"Shengming",
""
],
[
"Lu",
"Huchuan",
""
]
] |
new_dataset
| 0.999633 |
2309.10498
|
Michael Ivanitskiy
|
Michael Igorevich Ivanitskiy (1), Rusheb Shah, Alex F. Spies (2),
Tilman R\"auker, Dan Valentine, Can Rager, Lucia Quirke, Chris Mathwin,
Guillaume Corlouer, Cecilia Diniz Behn (1), Samy Wu Fung (1) ((1) Colorado
School of Mines, Department of Applied Mathematics and Statistics (2)
Imperial College London)
|
A Configurable Library for Generating and Manipulating Maze Datasets
|
9 pages, 5 figures, 1 table. Corresponding author: Michael Ivanitskiy
([email protected]). Code available at
https://github.com/understanding-search/maze-dataset
| null | null | null |
cs.LG cs.AI cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding how machine learning models respond to distributional shifts is
a key research challenge. Mazes serve as an excellent testbed due to varied
generation algorithms offering a nuanced platform to simulate both subtle and
pronounced distributional shifts. To enable systematic investigations of model
behavior on out-of-distribution data, we present $\texttt{maze-dataset}$, a
comprehensive library for generating, processing, and visualizing datasets
consisting of maze-solving tasks. With this library, researchers can easily
create datasets, having extensive control over the generation algorithm used,
the parameters fed to the algorithm of choice, and the filters that generated
mazes must satisfy. Furthermore, it supports multiple output formats, including
rasterized and text-based, catering to convolutional neural networks and
autoregressive transformer models. These formats, along with tools for
visualizing and converting between them, ensure versatility and adaptability in
research applications.
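
As an illustration only (this is not the library's actual API, which should be consulted on the linked GitHub page), the sketch below shows how a maze produced by a randomized depth-first-search generator can be exported in both a rasterized and a text-based form, mirroring the two output formats described above:

import random
import numpy as np

def generate_maze(n: int, seed: int = 0):
    """Randomized depth-first-search maze over an n x n lattice.

    Returns a (2n+1) x (2n+1) grid of 0 (wall) / 1 (passage) cells.
    """
    rng = random.Random(seed)
    grid = np.zeros((2 * n + 1, 2 * n + 1), dtype=np.uint8)
    stack, visited = [(0, 0)], {(0, 0)}
    grid[1, 1] = 1
    while stack:
        r, c = stack[-1]
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < n and 0 <= c + dc < n and (r + dr, c + dc) not in visited]
        if not neighbours:
            stack.pop()
            continue
        nr, nc = rng.choice(neighbours)
        # Open the neighbour cell and the wall between it and the current cell.
        grid[2 * nr + 1, 2 * nc + 1] = 1
        grid[r + nr + 1, c + nc + 1] = 1
        visited.add((nr, nc))
        stack.append((nr, nc))
    return grid

def as_text(grid) -> str:
    """Text-based rendering ('#' walls, '.' passages) suited to autoregressive models."""
    return "\n".join("".join("." if v else "#" for v in row) for row in grid)

maze = generate_maze(5, seed=42)        # rasterized form: a binary numpy array
print(as_text(maze))                    # text form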
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 10:20:11 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Ivanitskiy",
"Michael Igorevich",
""
],
[
"Shah",
"Rusheb",
""
],
[
"Spies",
"Alex F.",
""
],
[
"Räuker",
"Tilman",
""
],
[
"Valentine",
"Dan",
""
],
[
"Rager",
"Can",
""
],
[
"Quirke",
"Lucia",
""
],
[
"Mathwin",
"Chris",
""
],
[
"Corlouer",
"Guillaume",
""
],
[
"Behn",
"Cecilia Diniz",
""
],
[
"Fung",
"Samy Wu",
""
]
] |
new_dataset
| 0.999716 |
2309.10522
|
Zhuo Li
|
Zhuo Li, Bo Li
|
Visible and NIR Image Fusion Algorithm Based on Information
Complementarity
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Visible and near-infrared(NIR) band sensors provide images that capture
complementary spectral radiations from a scene. And the fusion of the visible
and NIR image aims at utilizing their spectrum properties to enhance image
quality. However, current visible-NIR fusion algorithms do not take full
advantage of these spectrum properties and lack information complementarity,
which results in color distortion and artifacts. Therefore, this paper designs
a complementary fusion model from the level of physical signals. First, in
order to distinguish between noise and useful information, we use two layers of
the weight-guided filter and guided filter to obtain texture and edge layers,
respectively. Second, to generate the initial visible-NIR complementarity
weight map, the difference maps of visible and NIR are filtered by the
extend-DoG filter. After that, the significant region of NIR night-time
compensation guides the initial complementarity weight map by the arctanI
function. Finally, the fusion images can be generated by the complementarity
weight maps of the visible and NIR images, respectively. The experimental results
demonstrate that the proposed algorithm not only takes good advantage of the
spectrum properties and information complementarity, but also avoids unnatural
color while maintaining naturalness, outperforming the state-of-the-art.
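
As a rough, self-contained illustration of this kind of two-layer decomposition and weight-map fusion (not the paper's exact filters, weight construction, or compensation step), one could split each image with a guided filter into a base and a detail layer and blend the base layers with a per-pixel weight map:

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter built from box filters."""
    mean_g = uniform_filter(guide, radius)
    mean_s = uniform_filter(src, radius)
    corr_gg = uniform_filter(guide * guide, radius)
    corr_gs = uniform_filter(guide * src, radius)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, radius) * guide + uniform_filter(b, radius)

def fuse(vis, nir, radius=8, eps=1e-3):
    """Blend base layers with an NIR-favouring weight map; keep both detail layers."""
    vis, nir = vis.astype(np.float64), nir.astype(np.float64)
    base_v = guided_filter(vis, vis, radius, eps)
    base_n = guided_filter(nir, nir, radius, eps)
    detail_v, detail_n = vis - base_v, nir - base_n
    # Hypothetical weight map: smooth arctan of the NIR-visible base difference.
    w = 0.5 + np.arctan(base_n - base_v) / np.pi
    return w * base_n + (1.0 - w) * base_v + detail_v + detail_n

# Usage: fused = fuse(visible_gray, nir_gray) on equally sized float images in [0, 1].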
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 11:07:24 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Li",
"Zhuo",
""
],
[
"Li",
"Bo",
""
]
] |
new_dataset
| 0.980961 |
2309.10528
|
Chang Liu
|
Chang Liu, Yi Niu, Mingming Ma, Fu Li and Guangming Shi
|
Retinex-guided Channel-grouping based Patch Swap for Arbitrary Style
Transfer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The basic principle of the patch-matching based style transfer is to
substitute the patches of the content image feature maps by the closest patches
from the style image feature maps. Since the finite features harvested from one
single aesthetic style image are inadequate to represent the rich textures of
the content natural image, existing techniques treat the full-channel style
feature patches as simple signal tensors and create new style feature patches
via signal-level fusion, which ignores the implicit diversity existing in style
features and thus fails to generate better stylised results. In this paper,
we propose a Retinex theory guided, channel-grouping based patch swap technique
to solve the above challenges. The channel-grouping strategy groups the style
feature maps into surface and texture channels, which prevents the
winner-takes-all problem. The Retinex-theory-based decomposition yields more
stable channel code rate generation. In addition, we provide complementary
fusion and multi-scale generation strategies to prevent unexpected black areas and
over-stylised results, respectively. Experimental results demonstrate that the
proposed method outperforms the existing techniques in providing more
style-consistent textures while keeping the content fidelity.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 11:13:56 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Liu",
"Chang",
""
],
[
"Niu",
"Yi",
""
],
[
"Ma",
"Mingming",
""
],
[
"Li",
"Fu",
""
],
[
"Shi",
"Guangming",
""
]
] |
new_dataset
| 0.984499 |
2309.10604
|
Ange Richard
|
Ange Richard, Laura Alonzo-Canul, François Portet
|
FRACAS: A FRench Annotated Corpus of Attribution relations in newS
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quotation extraction is a widely useful task both from a sociological and
from a Natural Language Processing perspective. However, very little data is
available to study this task in languages other than English. In this paper, we
present a manually annotated corpus of 1676 newswire texts in French for
quotation extraction and source attribution. We first describe the composition
of our corpus and the choices that were made in selecting the data. We then
detail the annotation guidelines and annotation process, as well as a few
statistics about the final corpus and the obtained balance between quote types
(direct, indirect and mixed, which are particularly challenging). We end by
detailing our inter-annotator agreement between the 8 annotators who worked on
manual labelling, which is substantial for such a difficult linguistic
phenomenon.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 13:19:54 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Richard",
"Ange",
""
],
[
"Alonzo-Canul",
"Laura",
""
],
[
"Portet",
"François",
""
]
] |
new_dataset
| 0.970955 |
2309.10623
|
Dan Wu
|
Dan Wu, Peng Chen, Thilini Kaushalya Bandara, Zhaoying Li, Tulika
Mitra
|
Flip: Data-Centric Edge CGRA Accelerator
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coarse-Grained Reconfigurable Arrays (CGRA) are promising edge accelerators
due to the outstanding balance in flexibility, performance, and energy
efficiency. Classic CGRAs statically map compute operations onto the processing
elements (PE) and route the data dependencies among the operations through the
Network-on-Chip. However, CGRAs are designed for fine-grained static
instruction-level parallelism and struggle to accelerate applications with
dynamic and irregular data-level parallelism, such as graph processing. To
address this limitation, we present Flip, a novel accelerator that enhances
traditional CGRA architectures to boost the performance of graph applications.
Flip retains the classic CGRA execution model while introducing a special
data-centric mode for efficient graph processing. Specifically, it exploits the
natural data parallelism of graph algorithms by mapping graph vertices onto
processing elements (PEs) rather than the operations, and supporting dynamic
routing of temporary data according to the runtime evolution of the graph
frontier. Experimental results demonstrate that Flip achieves up to 36$\times$
speedup with merely 19% more area compared to classic CGRAs. Compared to
state-of-the-art large-scale graph processors, Flip has similar energy
efficiency and 2.2$\times$ better area efficiency at a much-reduced power/area
budget.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 14:01:09 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Wu",
"Dan",
""
],
[
"Chen",
"Peng",
""
],
[
"Bandara",
"Thilini Kaushalya",
""
],
[
"Li",
"Zhaoying",
""
],
[
"Mitra",
"Tulika",
""
]
] |
new_dataset
| 0.961414 |
2309.10655
|
Changqing Shen
|
Changqing Shen and Sihao Mao and Bingzhou Xu and Ziwei Wang and
Xiaojian Zhang and Sijie Yan and Han Ding
|
Spiral Complete Coverage Path Planning Based on Conformal Slit Mapping
in Multi-connected Domains
|
This article has not been formally published yet and may undergo
minor content changes
| null | null | null |
cs.RO cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Generating a smooth and shorter spiral complete coverage path in a
multi-connected domain is an important research area in robotic cavity
machining. Traditional spiral path planning methods in multi-connected domains
involve a subregion division procedure; a deformed spiral path is incorporated
within each subregion, and these paths within the subregions are interconnected
with bridges. In intricate domains with abundant voids and irregular
boundaries, the added subregion boundaries increase the path avoidance
requirements. This results in excessive bridging and necessitates longer
uneven-density spirals to achieve complete subregion coverage. Considering that
conformal slit mapping can transform multi-connected regions into regular disks
or annuluses without subregion division, this paper presents a novel spiral
complete coverage path planning method by conformal slit mapping. Firstly, a
slit mapping calculation technique is proposed for segmented cubic spline
boundaries with corners. Then, a spiral path spacing control method is
developed based on the maximum inscribed circle radius between adjacent
conformal slit mapping iso-parameters. Lastly, the spiral path is derived by
offsetting iso-parameters. The complexity and applicability of the proposed
method are comprehensively analyzed across various boundary scenarios.
Meanwhile, two cavity-milling experiments are conducted to compare the new
method with conventional spiral complete coverage path methods. The comparison
indicates that the new path meets the requirement for complete coverage in
cavity machining while reducing path length and machining time by 12.70% and
12.34%, respectively.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 14:38:16 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Shen",
"Changqing",
""
],
[
"Mao",
"Sihao",
""
],
[
"Xu",
"Bingzhou",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Zhang",
"Xiaojian",
""
],
[
"Yan",
"Sijie",
""
],
[
"Ding",
"Han",
""
]
] |
new_dataset
| 0.965563 |
2309.10748
|
Anilkumar Swamy
|
Anilkumar Swamy, Vincent Leroy, Philippe Weinzaepfel, Fabien Baradel,
Salma Galaaoui, Romain Bregier, Matthieu Armando, Jean-Sebastien Franco,
Gregory Rogez
|
SHOWMe: Benchmarking Object-agnostic Hand-Object 3D Reconstruction
|
Paper and Appendix, Accepted in ACVR workshop at ICCV conference
| null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent hand-object interaction datasets show limited real object variability
and rely on fitting the MANO parametric model to obtain groundtruth hand
shapes. To go beyond these limitations and spur further research, we introduce
the SHOWMe dataset which consists of 96 videos, annotated with real and
detailed hand-object 3D textured meshes. Following recent work, we consider a
rigid hand-object scenario, in which the pose of the hand with respect to the
object remains constant during the whole video sequence. This assumption allows
us to register sub-millimetre-precise groundtruth 3D scans to the image
sequences in SHOWMe. Although simpler, this hypothesis makes sense for
applications where the required accuracy and level of detail are important, e.g.,
object hand-over in human-robot collaboration, object scanning, or manipulation
and contact point analysis. Importantly, the rigidity of the hand-object
systems allows us to tackle video-based 3D reconstruction of unknown hand-held
objects using a 2-stage pipeline consisting of a rigid registration step
followed by a multi-view reconstruction (MVR) part. We carefully evaluate a set
of non-trivial baselines for these two stages and show that it is possible to
achieve promising object-agnostic 3D hand-object reconstructions employing an
SfM toolbox or a hand pose estimator to recover the rigid transforms and
off-the-shelf MVR algorithms. However, these methods remain sensitive to the
initial camera pose estimates which might be imprecise due to lack of textures
on the objects or heavy occlusions of the hands, leaving room for improvements
in the reconstruction. Code and dataset are available at
https://europe.naverlabs.com/research/showme
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 16:48:29 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Swamy",
"Anilkumar",
""
],
[
"Leroy",
"Vincent",
""
],
[
"Weinzaepfel",
"Philippe",
""
],
[
"Baradel",
"Fabien",
""
],
[
"Galaaoui",
"Salma",
""
],
[
"Bregier",
"Romain",
""
],
[
"Armando",
"Matthieu",
""
],
[
"Franco",
"Jean-Sebastien",
""
],
[
"Rogez",
"Gregory",
""
]
] |
new_dataset
| 0.99977 |
2309.10765
|
Surbhi Madan
|
Surbhi Madan, Rishabh Jain, Gulshan Sharma, Ramanathan Subramanian and
Abhinav Dhall
|
MAGIC-TBR: Multiview Attention Fusion for Transformer-based Bodily
Behavior Recognition in Group Settings
|
4 pages, 2 Tables and 3 Figures
| null |
10.1145/3581783.3612858
| null |
cs.CV cs.HC cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Bodily behavioral language is an important social cue, and its automated
analysis helps in enhancing the understanding of artificial intelligence
systems. Furthermore, behavioral language cues are essential for active
engagement in social agent-based user interactions. Despite the progress made
in computer vision for tasks like head and body pose estimation, there is still
a need to explore the detection of finer behaviors such as gesturing, grooming,
or fumbling. This paper proposes a multiview attention fusion method named
MAGIC-TBR that combines features extracted from videos and their corresponding
Discrete Cosine Transform coefficients via a transformer-based approach. The
experiments are conducted on the BBSI dataset and the results demonstrate the
effectiveness of the proposed feature fusion with multiview attention. The code
is available at: https://github.com/surbhimadan92/MAGIC-TBR
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 17:04:36 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Madan",
"Surbhi",
""
],
[
"Jain",
"Rishabh",
""
],
[
"Sharma",
"Gulshan",
""
],
[
"Subramanian",
"Ramanathan",
""
],
[
"Dhall",
"Abhinav",
""
]
] |
new_dataset
| 0.991237 |
2309.10783
|
Laura Hanu
|
Laura Hanu, Anita L. Verő, James Thewlis
|
Language as the Medium: Multimodal Video Classification through text
only
|
Accepted at "What is Next in Multimodal Foundation Models?" (MMFM)
workshop at ICCV 2023
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite an exciting new wave of multimodal machine learning models, current
approaches still struggle to interpret the complex contextual relationships
between the different modalities present in videos. Going beyond existing
methods that emphasize simple activities or objects, we propose a new
model-agnostic approach for generating detailed textual descriptions that
captures multimodal video information. Our method leverages the extensive
knowledge learnt by large language models, such as GPT-3.5 or Llama2, to reason
about textual descriptions of the visual and aural modalities, obtained from
BLIP-2, Whisper and ImageBind. Without needing additional finetuning of
video-text models or datasets, we demonstrate that available LLMs have the
ability to use these multimodal textual descriptions as proxies for ``sight''
or ``hearing'' and perform zero-shot multimodal classification of videos
in-context. Our evaluations on popular action recognition benchmarks, such as
UCF-101 or Kinetics, show these context-rich descriptions can be successfully
used in video understanding tasks. This method points towards a promising new
research direction in multimodal classification, demonstrating how an interplay
between textual, visual and auditory machine learning models can enable more
holistic video understanding.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 17:32:21 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Hanu",
"Laura",
""
],
[
"Verő",
"Anita L.",
""
],
[
"Thewlis",
"James",
""
]
] |
new_dataset
| 0.996222 |
2309.10815
|
Xiao Fu
|
Xiao Fu, Shangzhan Zhang, Tianrun Chen, Yichong Lu, Xiaowei Zhou,
Andreas Geiger, Yiyi Liao
|
PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes
|
Project page: http://fuxiao0719.github.io/projects/panopticnerf360/.
arXiv admin note: text overlap with arXiv:2203.15224
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training perception systems for self-driving cars requires substantial
annotations. However, manual labeling in 2D images is highly labor-intensive.
While existing datasets provide rich annotations for pre-recorded sequences,
they fall short in labeling rarely encountered viewpoints, potentially
hampering the generalization ability for perception models. In this paper, we
present PanopticNeRF-360, a novel approach that combines coarse 3D annotations
with noisy 2D semantic cues to generate consistent panoptic labels and
high-quality images from any viewpoint. Our key insight lies in exploiting the
complementarity of 3D and 2D priors to mutually enhance geometry and semantics.
Specifically, we propose to leverage noisy semantic and instance labels in both
3D and 2D spaces to guide geometry optimization. Simultaneously, the improved
geometry assists in filtering noise present in the 3D and 2D annotations by
merging them in 3D space via a learned semantic field. To further enhance
appearance, we combine MLP and hash grids to yield hybrid scene features,
striking a balance between high-frequency appearance and predominantly
contiguous semantics. Our experiments demonstrate PanopticNeRF-360's
state-of-the-art performance over existing label transfer methods on the
challenging urban scenes of the KITTI-360 dataset. Moreover, PanopticNeRF-360
enables omnidirectional rendering of high-fidelity, multi-view and
spatiotemporally consistent appearance, semantic and instance labels. We make
our code and data available at https://github.com/fuxiao0719/PanopticNeRF
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 17:54:22 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Fu",
"Xiao",
""
],
[
"Zhang",
"Shangzhan",
""
],
[
"Chen",
"Tianrun",
""
],
[
"Lu",
"Yichong",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Liao",
"Yiyi",
""
]
] |
new_dataset
| 0.997082 |
2309.10818
|
Zhiqiang Shen
|
Zhiqiang Shen and Tianhua Tao and Liqun Ma and Willie Neiswanger and
Joel Hestness and Natalia Vassilieva and Daria Soboleva and Eric Xing
|
SlimPajama-DC: Understanding Data Combinations for LLM Training
|
Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T-token RedPajama dataset contributed by Together. We have
termed our research SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B.
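
As a toy illustration of the global-vs.-local deduplication distinction discussed above (the SlimPajama pipeline itself is far more elaborate and uses fuzzy matching), exact-hash deduplication can be scoped either within each source or across all sources:

import hashlib

def _key(text: str) -> str:
    """Hash of lightly normalized text used as a deduplication key."""
    return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

def dedup_local(sources: dict[str, list[str]]) -> dict[str, list[str]]:
    """Local deduplication: duplicates are removed only within each source."""
    out = {}
    for name, docs in sources.items():
        seen, kept = set(), []
        for doc in docs:
            k = _key(doc)
            if k not in seen:
                seen.add(k)
                kept.append(doc)
        out[name] = kept
    return out

def dedup_global(sources: dict[str, list[str]]) -> dict[str, list[str]]:
    """Global deduplication: a document appearing in any source is kept only once."""
    seen, out = set(), {}
    for name, docs in sources.items():
        kept = []
        for doc in docs:
            k = _key(doc)
            if k not in seen:
                seen.add(k)
                kept.append(doc)
        out[name] = kept
    return out

corpus = {"web": ["a cat sat", "a cat  sat"], "books": ["a cat sat", "unique text"]}
print(dedup_local(corpus))   # keeps "a cat sat" in both sources
print(dedup_global(corpus))  # keeps it only in the first source encountered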
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 17:59:54 GMT"
}
] | 2023-09-20T00:00:00 |
[
[
"Shen",
"Zhiqiang",
""
],
[
"Tao",
"Tianhua",
""
],
[
"Ma",
"Liqun",
""
],
[
"Neiswanger",
"Willie",
""
],
[
"Hestness",
"Joel",
""
],
[
"Vassilieva",
"Natalia",
""
],
[
"Soboleva",
"Daria",
""
],
[
"Xing",
"Eric",
""
]
] |
new_dataset
| 0.998881 |
1903.07497
|
Nguyen Huu Phong
|
Nguyen Huu Phong and Bernardete Ribeiro
|
Advanced Capsule Networks via Context Awareness
|
12 pages
| null |
10.1007/978-3-030-30487-4_14
| null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Capsule Networks (CNs) offer new architectures for the Deep Learning (DL)
community. Though their effectiveness has been demonstrated on the MNIST and
smallNORB datasets, the networks still face challenges on other datasets with
images in distinct contexts. In this research, we improve the design of the CN
(vector version): namely, we add more pooling layers to filter image
backgrounds and more reconstruction layers to improve image
restoration. Additionally, we perform experiments to compare the accuracy and speed
of CN versus DL models. In DL models, we utilize Inception V3 and DenseNet V201
for powerful computers besides NASNet, MobileNet V1 and MobileNet V2 for small
and embedded devices. We evaluate our models on a fingerspelling alphabet
dataset from American Sign Language (ASL). The results show that CNs perform
comparably to DL models while dramatically reducing training time. We also provide
a demonstration and a link for illustration purposes.
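
For readers unfamiliar with capsule layers, the sketch below shows the standard squashing non-linearity from the original vector-capsule formulation that the architecture above builds on; it is generic background, not the authors' modified network:

import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Squashing non-linearity: keeps a vector's direction, maps its length into [0, 1)."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# Example: a batch of 3 capsules with 8-dimensional pose vectors.
capsules = np.random.randn(3, 8)
v = squash(capsules)
print(np.linalg.norm(v, axis=-1))  # all lengths now lie strictly below 1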
|
[
{
"version": "v1",
"created": "Mon, 18 Mar 2019 15:12:13 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2019 07:09:02 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Sep 2023 08:47:27 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Phong",
"Nguyen Huu",
""
],
[
"Ribeiro",
"Bernardete",
""
]
] |
new_dataset
| 0.964594 |
2012.03243
|
Lifeng Wang
|
Lifeng Wang, Yu Duan, Yun Lai, Shizhuo Mu, Xiang Li
|
V2I-Based Platooning Design with Delay Awareness
| null | null |
10.1109/JSYST.2023.3286855
| null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies the vehicle platooning system based on
vehicle-to-infrastructure (V2I) communication, where all the vehicles in the
platoon upload their driving state information to the roadside unit (RSU), and
RSU makes the platoon control decisions with the assistance of edge computing.
By addressing the delay concern, a platoon control approach is proposed to
achieve plant stability and string stability. The effects of the time headway,
communication and edge computing delays on the stability are quantified. The
velocity and size of the stable platoon are calculated, which show the impacts
of the radio parameters such as massive MIMO antennas and frequency band on the
platoon configuration. The handover performance between RSUs in the V2I-based
platooning system is quantified by considering the effects of the RSU's
coverage and platoon size, which demonstrates that the velocity of a stable
platoon should be appropriately chosen, in order to meet the V2I's
Quality-of-Service and handover constraints.
|
[
{
"version": "v1",
"created": "Sun, 6 Dec 2020 11:44:42 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wang",
"Lifeng",
""
],
[
"Duan",
"Yu",
""
],
[
"Lai",
"Yun",
""
],
[
"Mu",
"Shizhuo",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.990458 |
2108.01793
|
Hejia Geng
|
Wenrui Zhang, Hejia Geng, Peng Li
|
Composing Recurrent Spiking Neural Networks using Locally-Recurrent
Motifs and Risk-Mitigating Architectural Optimization
|
20 pages, 7 figures
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In neural circuits, recurrent connectivity plays a crucial role in network
function and stability. However, existing recurrent spiking neural networks
(RSNNs) are often constructed by random connections without optimization. While
RSNNs can produce rich dynamics that are critical for memory formation and
learning, systematic architectural optimization of RSNNs is still an open
challenge. We aim to enable systematic design of large RSNNs via a new scalable
RSNN architecture and automated architectural optimization. We compose RSNNs
based on a layer architecture called Sparsely-Connected Recurrent Motif Layer
(SC-ML) that consists of multiple small recurrent motifs wired together by
sparse lateral connections. The small size of the motifs and sparse inter-motif
connectivity leads to an RSNN architecture scalable to large network sizes. We
further propose a method called Hybrid Risk-Mitigating Architectural Search
(HRMAS) to systematically optimize the topology of the proposed recurrent
motifs and SC-ML layer architecture. HRMAS is an alternating two-step
optimization process by which we mitigate the risk of network instability and
performance degradation caused by architectural change by introducing a novel
biologically-inspired "self-repairing" mechanism through intrinsic plasticity.
The intrinsic plasticity is introduced to the second step of each HRMAS
iteration and acts as unsupervised fast self-adaptation to structural and
synaptic weight modifications introduced by the first step during the RSNN
architectural "evolution". To the best of the authors' knowledge, this is the
first work that performs systematic architectural optimization of RSNNs. Using
one speech and three neuromorphic datasets, we demonstrate the significant
performance improvement brought by the proposed automated architecture
optimization over existing manually-designed RSNNs.
|
[
{
"version": "v1",
"created": "Wed, 4 Aug 2021 00:09:39 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Sep 2023 02:01:31 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhang",
"Wenrui",
""
],
[
"Geng",
"Hejia",
""
],
[
"Li",
"Peng",
""
]
] |
new_dataset
| 0.972029 |
2111.01082
|
Hao Zhu
|
Hao Zhu, Haotian Yang, Longwei Guo, Yidi Zhang, Yanru Wang, Mingkai
Huang, Menghua Wu, Qiu Shen, Ruigang Yang, Xun Cao
|
FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face
Reconstruction
|
Accepted to T-PAMI 2023; Extension of FaceScape(CVPR 2020); Code &
data are available at https://github.com/zhuhao-nju/facescape
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a large-scale detailed 3D face dataset, FaceScape,
and the corresponding benchmark to evaluate single-view facial 3D
reconstruction. By training on FaceScape data, a novel algorithm is proposed to
predict elaborate riggable 3D face models from a single image input. FaceScape
dataset releases $16,940$ textured 3D faces, captured from $847$ subjects and
each with $20$ specific expressions. The 3D models contain the pore-level
facial geometry that is also processed to be topologically uniform. These fine
3D facial models can be represented as a 3D morphable model for coarse shapes
and displacement maps for detailed geometry. Taking advantage of the
large-scale and high-accuracy dataset, a novel algorithm is further proposed to
learn the expression-specific dynamic details using a deep neural network. The
learned relationship serves as the foundation of our 3D face prediction system
from a single image input. Different from most previous methods, our predicted
3D models are riggable with highly detailed geometry under different
expressions. We also use FaceScape data to generate the in-the-wild and
in-the-lab benchmark to evaluate recent methods of single-view face
reconstruction. The accuracy is reported and analyzed on the dimensions of
camera pose and focal length, which provides a faithful and comprehensive
evaluation and reveals new challenges. The unprecedented dataset, benchmark,
and code have been released at https://github.com/zhuhao-nju/facescape.
|
[
{
"version": "v1",
"created": "Mon, 1 Nov 2021 16:48:34 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 20:00:07 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhu",
"Hao",
""
],
[
"Yang",
"Haotian",
""
],
[
"Guo",
"Longwei",
""
],
[
"Zhang",
"Yidi",
""
],
[
"Wang",
"Yanru",
""
],
[
"Huang",
"Mingkai",
""
],
[
"Wu",
"Menghua",
""
],
[
"Shen",
"Qiu",
""
],
[
"Yang",
"Ruigang",
""
],
[
"Cao",
"Xun",
""
]
] |
new_dataset
| 0.999849 |
2202.04076
|
Kun Wang
|
Kun Wang, Jingyi Wang, Christopher M. Poskitt, Xiangxiang Chen, Jun
Sun, and Peng Cheng
|
K-ST: A Formal Executable Semantics of the Structured Text Language for
PLCs
|
Accepted by IEEE Transactions on Software Engineering
|
IEEE Trans. Software Eng., 2023
|
10.1109/TSE.2023.3315292
| null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Programmable Logic Controllers (PLCs) are responsible for automating process
control in many industrial systems (e.g. in manufacturing and public
infrastructure), and thus it is critical to ensure that they operate correctly
and safely. The majority of PLCs are programmed in languages such as Structured
Text (ST). However, a lack of formal semantics makes it difficult to ascertain
the correctness of their translators and compilers, which vary from
vendor-to-vendor. In this work, we develop K-ST, a formal executable semantics
for ST in the K framework. Defined with respect to the IEC 61131-3 standard and
PLC vendor manuals, K-ST is a high-level reference semantics that can be used
to evaluate the correctness and consistency of different ST implementations. We
validate K-ST by executing 509 ST programs extracted from Github and comparing
the results against existing commercial compilers (i.e., CODESYS,
CX-Programmer, and GX Works2). We then apply K-ST to validate the
implementation of the open source OpenPLC platform, comparing the executions of
several test programs to uncover five bugs and nine functional defects in the
compiler.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 17:34:08 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 02:05:17 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wang",
"Kun",
""
],
[
"Wang",
"Jingyi",
""
],
[
"Poskitt",
"Christopher M.",
""
],
[
"Chen",
"Xiangxiang",
""
],
[
"Sun",
"Jun",
""
],
[
"Cheng",
"Peng",
""
]
] |
new_dataset
| 0.975066 |
2203.10286
|
Chiranjibi Sitaula
|
Chiranjibi Sitaula, Tej Bahadur Shahi
|
Multi-channel CNN to classify nepali covid-19 related tweets using
hybrid features
|
This paper is under consideration in Journal of Ambient Intelligence
and Humanized Computing (Springer) journal. This version may be deleted or
updated at any time depending on the journal's policy upon acceptance
|
Journal of Ambient Intelligence and Humanized Computing , 2023
|
10.1007/s12652-023-04692-9
| null |
cs.CL cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The current COVID-19 pandemic, with the increasing fears it raises among
people, has triggered several health complications such as depression and
anxiety. Such complications have affected not only developed countries but
also developing countries such as Nepal. These complications can be understood
from people's tweets/comments posted online after their proper analysis and
sentiment classification. Nevertheless, owing to the limited number of
tokens/words in each tweet, it is always crucial to capture multiple
information associated with them for their better understanding. In this study,
we, first, represent each tweet by combining both syntactic and semantic
information, called hybrid features. The syntactic information is generated
from the bag of words method, whereas the semantic information is generated
from the combination of the fastText-based (ft) and domain-specific (ds)
methods. Second, we design a novel multi-channel convolutional neural network
(MCNN), which ensembles the multiple CNNs, to capture multi-scale information
for better classification. Last, we evaluate the efficacy of both the proposed
feature extraction method and the MCNN model classifying tweets into three
sentiment classes (positive, neutral and negative) on NepCOV19Tweets dataset,
which is the only public COVID-19 tweets dataset in Nepali language. The
evaluation results show that the proposed hybrid features outperform individual
feature extraction methods with the highest classification accuracy of 69.7%
and the MCNN model outperforms the existing methods with the highest
classification accuracy of 71.3% during classification.
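
A minimal sketch of the multi-channel idea (several parallel 1-D convolutions with different kernel sizes whose pooled outputs are concatenated before classification) is given below in PyTorch; the layer sizes and the hybrid-feature construction are placeholder assumptions, not the authors' configuration:

import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    """Parallel Conv1d branches over an embedded tweet, max-pooled and concatenated."""
    def __init__(self, feat_dim=300, n_filters=64, kernel_sizes=(3, 4, 5), n_classes=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(feat_dim, n_filters, k, padding=k // 2) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, x):                      # x: (batch, seq_len, feat_dim)
        x = x.transpose(1, 2)                  # -> (batch, feat_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.branches]
        return self.fc(torch.cat(pooled, dim=1))  # logits: positive / neutral / negative

# Usage on a dummy batch of 8 tweets, each 40 tokens of 300-d hybrid features.
model = MultiChannelCNN()
logits = model(torch.randn(8, 40, 300))
print(logits.shape)  # torch.Size([8, 3])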
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 09:55:05 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Sitaula",
"Chiranjibi",
""
],
[
"Shahi",
"Tej Bahadur",
""
]
] |
new_dataset
| 0.984556 |
2204.08976
|
Rakin Muhammad Shadab
|
Rakin Muhammad Shadab, Yu Zou, Sanjay Gandham, Amro Awad and Mingjie
Lin
|
HMT: A Hardware-Centric Hybrid Bonsai Merkle Tree Algorithm for
High-Performance Authentication
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Merkle tree is a widely used tree structure for authentication of
data/metadata in a secure computing system. Recent state-of-the-art secure
systems use a smaller-sized MT, namely Bonsai Merkle Tree (BMT) to protect the
metadata such as encryption counters. Common BMT algorithms were designed for
traditional Von Neumann architectures with a software-centric implementation in
mind, hence they use a lot of recursions and are often sequential in nature.
However, the modern heterogeneous computing platforms employing
Field-Programmable Gate Array (FPGA) devices require concurrency-focused
algorithms to fully utilize the versatility and parallel nature of such
systems. Our goal for this work is to introduce HMT, a hardware-friendly BMT
algorithm that enables the verification and update processes to function
independently and provides the benefits of relaxed update while being
comparable to eager update in terms of update complexity. The methodology of
HMT contributes both novel algorithm revisions and innovative hardware
techniques to implementing BMT. We introduce a hybrid BMT algorithm that is
hardware-targeted, parallel and relaxes the update depending on BMT cache hit
but makes the update conditions more flexible compared to lazy update to save
additional write-backs. Deploying this new algorithm, we have designed a new
BMT controller with a dataflow architecture, speculative buffers and parallel
write-back engines that allows for multiple concurrent relaxed authentications.
Our empirical performance measurements have demonstrated that HMT can achieve
up to 7x improvement in bandwidth and 4.5x reduction in latency over baseline
in subsystem level tests. In a real secure-memory system on a Xilinx U200
accelerator FPGA, HMT exhibits up to 14% faster execution in standard
benchmarks compared to the state-of-the-art BMT solution on FPGA.
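
For context, a plain (software-style) Merkle-tree build and verification, of the kind HMT reworks into a hardware-friendly, parallel form, looks like the sketch below; this is background illustration, not the HMT algorithm or its relaxed-update scheme:

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Bottom-up construction; levels[0] are leaf hashes, levels[-1] holds the root."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                      # duplicate the last node on odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def verify(leaf: bytes, index: int, levels: list[list[bytes]]) -> bool:
    """Recompute the path from a leaf to the root and compare against the stored root."""
    node = h(leaf)
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling >= len(level):
            sibling = index                    # odd level: the sibling is the duplicate
        pair = (node + level[sibling]) if index % 2 == 0 else (level[sibling] + node)
        node = h(pair)
        index //= 2
    return node == levels[-1][0]

counters = [b"ctr0", b"ctr1", b"ctr2", b"ctr3"]   # e.g. encryption counters as leaves
tree = build_tree(counters)
print(verify(b"ctr2", 2, tree))   # True
print(verify(b"evil", 2, tree))   # False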
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 16:23:24 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 04:04:03 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Shadab",
"Rakin Muhammad",
""
],
[
"Zou",
"Yu",
""
],
[
"Gandham",
"Sanjay",
""
],
[
"Awad",
"Amro",
""
],
[
"Lin",
"Mingjie",
""
]
] |
new_dataset
| 0.967342 |
2205.01440
|
Daniel Graziotin
|
Verena Ebert, Daniel Graziotin, Stefan Wagner
|
How Are Communication Channels on GitHub Presented to Their Intended
Audience? -- A Thematic Analysis
|
10 pages, 5 figures. Accepted for presentation at the International
Conference on Evaluation and Assessment in Software Engineering (EASE) 2022
|
In Proceedings of the 26th International Conference on Evaluation
and Assessment in Software Engineering (EASE 2022). Association for Computing
Machinery, New York, NY, USA, 40-49
|
10.1145/3530019.3530024
| null |
cs.SE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Communication is essential in software development, and even more in
distributed settings. Communication activities need to be organized and
coordinated to defend against the threat of productivity losses, increases in
cognitive load, and stress among team members. With a plethora of communication
channels that were identified by previous research in open-source projects,
there is a need to explore organizational issues in how these communication
channels are introduced, explained, and motivated for use among all project
members. In this study, we wanted to understand which communication channels
are used in GitHub projects and how they are presented to the GitHub project
audience. We employed thematic analysis to analyze 151 artifacts in 90 GitHub
projects. Our results revealed 32 unique communication channels that can be
divided into nine different types. Projects mostly provide channels of
different types, but for some types (e.g., chat) it is common to provide
several channels. Maintainers are aware that channels have different properties
and help the developers to decide which channel should be used in which case.
However, this is not true for all projects, and often we have not found any
explicit reasons why maintainers chose to provide one channel over another.
Different channels can be used for different purposes and have different
affordances, so maintainers have to decide wisely which channels they want to
provide and make clear which channel should be used in which case. Otherwise,
developers might feel overwhelmed by too many channels, and information can get
fragmented over multiple channels.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 11:57:53 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Ebert",
"Verena",
""
],
[
"Graziotin",
"Daniel",
""
],
[
"Wagner",
"Stefan",
""
]
] |
new_dataset
| 0.997972 |
2205.15948
|
Xuan Bac Nguyen
|
Xuan Bac Nguyen, Apoorva Bisht, Ben Thompson, Hugh Churchill, Khoa
Luu, Samee U. Khan
|
Two-Dimensional Quantum Material Identification via Self-Attention and
Soft-labeling in Deep Learning
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of quantum machines, detecting two-dimensional (2D) materials on silicon
chips is one of the most critical problems. Instance segmentation can be
considered a potential approach to solve this problem. However, similar to
other deep learning methods, instance segmentation requires a large-scale
training dataset and high-quality annotations in order to achieve considerable
performance. In practice, preparing the training dataset is a challenge since
annotators have to deal with large images, e.g., 2K resolution, and extremely
dense objects. In this work, we present a novel method to
tackle the problem of missing annotation in instance segmentation in 2D quantum
material identification. We propose a new mechanism for automatically detecting
false negative objects and an attention based loss strategy to reduce the
negative impact of these objects contributing to the overall loss function. We
experiment on 2D material detection datasets, and the experiments show our
method outperforms previous works.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 16:46:51 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 16:24:39 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nguyen",
"Xuan Bac",
""
],
[
"Bisht",
"Apoorva",
""
],
[
"Thompson",
"Ben",
""
],
[
"Churchill",
"Hugh",
""
],
[
"Luu",
"Khoa",
""
],
[
"Khan",
"Samee U.",
""
]
] |
new_dataset
| 0.958174 |
2209.13288
|
Ly Vu Duc Dr.
|
Duc-Ly Vu, Zachary Newman, and John Speed Meyers
|
A Benchmark Comparison of Python Malware Detection Approaches
|
12 pages, 3 figures, 3 tables
| null |
10.1109/ICSE48619.2023.00052
| null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
While attackers often distribute malware to victims via open-source,
community-driven package repositories, these repositories do not currently run
automated malware detection systems. In this work, we explore the security
goals of the repository administrators and the requirements for deployments of
such malware scanners via a case study of the Python ecosystem and PyPI
repository, which includes interviews with administrators and maintainers.
Further, we evaluate existing malware detection techniques for deployment in
this setting by creating a benchmark dataset and comparing several existing
tools, including the malware checks implemented in PyPI, Bandit4Mal, and
OSSGadget's OSS Detect Backdoor.
We find that repository administrators have exacting technical demands for
such malware detection tools. Specifically, they consider a false positive rate
of even 0.01% to be unacceptably high, given the large number of package
releases that might trigger false alerts. Measured tools have false positive
rates between 15% and 97%; increasing thresholds for detection rules to reduce
this rate renders the true positive rate useless. In some cases, these checks
emitted alerts more often for benign packages than malicious ones. However, we
also find a successful socio-technical malware detection system: external
security researchers also perform repository malware scans and report the
results to repository administrators. These parties face different incentives
and constraints on their time and tooling. We conclude with recommendations for
improving detection capabilities and strengthening the collaboration between
security researchers and software repository administrators.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 10:14:19 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Vu",
"Duc-Ly",
""
],
[
"Newman",
"Zachary",
""
],
[
"Meyers",
"John Speed",
""
]
] |
new_dataset
| 0.998666 |
2210.13723
|
Dapeng Feng
|
Dapeng Feng, Yuhua Qi, Shipeng Zhong, Zhiqiang Chen, Yudu Jiao, Qiming
Chen, Tao Jiang, Hongbo Chen
|
S3E: A Large-scale Multimodal Dataset for Collaborative SLAM
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the growing demand to employ a team of robots to perform a task
collaboratively, the research community has become increasingly interested in
collaborative simultaneous localization and mapping. Unfortunately, existing
datasets are limited in the scale and variation of the collaborative
trajectories, even though generalization between inter-trajectories among
different agents is crucial to the overall viability of collaborative tasks. To
help align the research community's contributions with realistic multiagent
coordinated SLAM problems, we propose S3E, a large-scale multimodal dataset
captured by a fleet of unmanned ground vehicles along four designed
collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor
sequences that each exceed 200 seconds, consisting of well temporally
synchronized and spatially calibrated high-frequency IMU, high-quality stereo
camera, and 360-degree LiDAR data. Crucially, our effort exceeds previous
attempts regarding dataset size, scene variability, and complexity. It has 4x
as much average recording time as the pioneering EuRoC dataset. We also provide
careful dataset analysis as well as baselines for collaborative SLAM and its
single-robot counterpart. Data and more up-to-date details are found at
https://github.com/PengYu-Team/S3E.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 02:42:49 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2022 08:55:38 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Dec 2022 13:19:37 GMT"
},
{
"version": "v4",
"created": "Sat, 16 Sep 2023 05:37:37 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Feng",
"Dapeng",
""
],
[
"Qi",
"Yuhua",
""
],
[
"Zhong",
"Shipeng",
""
],
[
"Chen",
"Zhiqiang",
""
],
[
"Jiao",
"Yudu",
""
],
[
"Chen",
"Qiming",
""
],
[
"Jiang",
"Tao",
""
],
[
"Chen",
"Hongbo",
""
]
] |
new_dataset
| 0.997777 |
2211.15421
|
Zilong Wang
|
Zilong Wang, Yichao Zhou, Wei Wei, Chen-Yu Lee, Sandeep Tata
|
VRDU: A Benchmark for Visually-rich Document Understanding
|
KDD 2023
| null |
10.1145/3580305.3599929
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Understanding visually-rich business documents to extract structured data and
automate business workflows has been receiving attention both in academia and
industry. Although recent multi-modal language models have achieved impressive
results, we find that existing benchmarks do not reflect the complexity of real
documents seen in industry. In this work, we identify the desiderata for a more
comprehensive benchmark and propose one we call Visually Rich Document
Understanding (VRDU). VRDU contains two datasets that represent several
challenges: rich schema including diverse data types as well as hierarchical
entities, complex templates including tables and multi-column layouts, and
diversity of different layouts (templates) within a single document type. We
design few-shot and conventional experiment settings along with a carefully
designed matching algorithm to evaluate extraction results. We report the
performance of strong baselines and offer three observations: (1) generalizing
to new document templates is still very challenging, (2) few-shot performance
has a lot of headroom, and (3) models struggle with hierarchical fields such as
line-items in an invoice. We plan to open source the benchmark and the
evaluation toolkit. We hope this helps the community make progress on these
challenging tasks in extracting structured data from visually rich documents.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 03:17:07 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 21:34:44 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Sep 2023 17:52:27 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wang",
"Zilong",
""
],
[
"Zhou",
"Yichao",
""
],
[
"Wei",
"Wei",
""
],
[
"Lee",
"Chen-Yu",
""
],
[
"Tata",
"Sandeep",
""
]
] |
new_dataset
| 0.99979 |
2212.13229
|
Elena Di Lavore
|
Elena Di Lavore, Paweł Sobociński
|
Monoidal Width
| null | null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce monoidal width as a measure of complexity for morphisms in
monoidal categories. Inspired by well-known structural width measures for
graphs, like tree width and rank width, monoidal width is based on a notion of
syntactic decomposition: a monoidal decomposition of a morphism is an
expression in the language of monoidal categories, where operations are
monoidal products and compositions, that specifies this morphism. Monoidal
width penalises the composition operation along ``big'' objects, while it
encourages the use of monoidal products. We show that, by choosing the correct
categorical algebra for decomposing graphs, we can capture tree width and rank
width. For matrices, monoidal width is related to the rank. These examples
suggest monoidal width as a good measure for structural complexity of processes
modelled as morphisms in monoidal categories.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 17:32:04 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 12:25:25 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Jul 2023 18:09:58 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Sep 2023 08:53:25 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Di Lavore",
"Elena",
""
],
[
"Sobociński",
"Paweł",
""
]
] |
new_dataset
| 0.999147 |
2303.01615
|
Zachary Huemann
|
Zachary Huemann, Xin Tie, Junjie Hu, Tyler J. Bradshaw
|
ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of
Pneumothorax
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Radiology narrative reports often describe characteristics of a patient's
disease, including its location, size, and shape. Motivated by the recent
success of multimodal learning, we hypothesized that this descriptive text
could guide medical image analysis algorithms. We proposed a novel
vision-language model, ConTEXTual Net, for the task of pneumothorax
segmentation on chest radiographs. ConTEXTual Net utilizes language features
extracted from corresponding free-form radiology reports using a pre-trained
language model. Cross-attention modules are designed to combine the
intermediate output of each vision encoder layer and the text embeddings
generated by the language model. ConTEXTual Net was trained on the CANDID-PTX
dataset consisting of 3,196 positive cases of pneumothorax with segmentation
annotations from 6 different physicians as well as clinical radiology reports.
Using cross-validation, ConTEXTual Net achieved a Dice score of
0.716$\pm$0.016, which was similar to the degree of inter-reader variability
(0.712$\pm$0.044) computed on a subset of the data. It outperformed both
vision-only models (ResNet50 U-Net: 0.677$\pm$0.015 and GLoRIA:
0.686$\pm$0.014) and a competing vision-language model (LAVT: 0.706$\pm$0.009).
Ablation studies confirmed that it was the text information that led to the
performance gains. Additionally, we show that certain augmentation methods
degraded ConTEXTual Net's segmentation performance by breaking the image-text
concordance. We also evaluated the effects of using different language models
and activation functions in the cross-attention module, highlighting the
efficacy of our chosen architectural design.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 22:36:19 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 21:48:20 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Huemann",
"Zachary",
""
],
[
"Tie",
"Xin",
""
],
[
"Hu",
"Junjie",
""
],
[
"Bradshaw",
"Tyler J.",
""
]
] |
new_dataset
| 0.999052 |
2303.02968
|
Md Awsafur Rahman
|
Md Awsafur Rahman and Shaikh Anowarul Fattah
|
DwinFormer: Dual Window Transformers for End-to-End Monocular Depth
Estimation
| null |
IEEE Sensors Journal (Volume: 23, Issue: 18, 15 September 2023)
|
10.1109/JSEN.2023.3299782
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Depth estimation from a single image is of paramount importance in the realm
of computer vision, with a multitude of applications. Conventional methods
suffer from the trade-off between consistency and fine-grained details due to
the local-receptive field limiting their practicality. This lack of long-range
dependency inherently comes from the convolutional neural network part of the
architecture. In this paper, a dual window transformer-based network, namely
DwinFormer, is proposed, which utilizes both local and global features for
end-to-end monocular depth estimation. The DwinFormer consists of dual window
self-attention and cross-attention transformers, Dwin-SAT and Dwin-CAT,
respectively. The Dwin-SAT seamlessly extracts intricate, locally aware
features while concurrently capturing global context. It harnesses the power of
local and global window attention to adeptly capture both short-range and
long-range dependencies, obviating the need for complex and computationally
expensive operations, such as attention masking or window shifting. Moreover,
Dwin-SAT introduces inductive biases which provide desirable properties, such
as translational equivariance and less dependence on large-scale data.
Furthermore, conventional decoding methods often rely on skip connections which
may result in semantic discrepancies and a lack of global context when fusing
encoder and decoder features. In contrast, the Dwin-CAT employs both local and
global window cross-attention to seamlessly fuse encoder and decoder features
with both fine-grained local and contextually aware global information,
effectively bridging the semantic gap. Empirical evidence obtained through
extensive experimentation on the NYU-Depth-V2 and KITTI datasets demonstrates
the superiority of the proposed method, consistently outperforming existing
approaches across both indoor and outdoor environments.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 08:53:22 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 05:43:39 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Rahman",
"Md Awsafur",
""
],
[
"Fattah",
"Shaikh Anowarul",
""
]
] |
new_dataset
| 0.984385 |
2303.04413
|
Yang Cheng
|
Yang Cheng, Zhen Chen and Daming Liu
|
PL-UNeXt: Per-stage Edge Detail and Line Feature Guided Segmentation for
Power Line Detection
|
Accepted to IEEE ICIP 2023
| null |
10.1109/ICIP49359.2023.10223114
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Power line detection is a critical inspection task for electricity companies
and is also useful in avoiding drone obstacles. Accurately separating power
lines from the surrounding area in the aerial image is still challenging due to
the intricate background and low pixel ratio. In order to properly capture the
guidance of the spatial edge detail prior and line features, we offer PL-UNeXt,
a power line segmentation model with a booster training strategy. We design
edge detail heads computing the loss in edge space to guide the lower-level
detail learning and line feature heads generating auxiliary segmentation masks
to supervise higher-level line feature learning. Benefiting from this design,
our model can reach 70.6 F1 score (+1.9%) on TTPLA and 68.41 mIoU (+5.2%) on
VITL (without utilizing IR images), while preserving a real-time performance
due to few inference parameters.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 07:32:01 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 05:33:35 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Cheng",
"Yang",
""
],
[
"Chen",
"Zhen",
""
],
[
"Liu",
"Daming",
""
]
] |
new_dataset
| 0.969536 |
2303.12982
|
Joseph Cohen
|
Joseph Cohen, Xun Huan, Jun Ni
|
Fault Prognosis of Turbofan Engines: Eventual Failure Prediction and
Remaining Useful Life Estimation
|
Preprint with 10 pages, 5 figures. Submitted to International Journal
of Prognostics and Health Management (IJPHM)
| null |
10.36001/ijphm.2023.v14i2.3486
| null |
cs.LG cs.SY eess.SP eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In the era of industrial big data, prognostics and health management is
essential to improve the prediction of future failures to minimize inventory,
maintenance, and human costs. Used for the 2021 PHM Data Challenge, the new
Commercial Modular Aero-Propulsion System Simulation dataset from NASA is an
open-source benchmark containing simulated turbofan engine units flown under
realistic flight conditions. Deep learning approaches implemented previously
for this application attempt to predict the remaining useful life of the engine
units, but have not utilized labeled failure mode information, impeding
practical usage and explainability. To address these limitations, a new
prognostics approach is formulated with a customized loss function to
simultaneously predict the current health state, the eventual failing
component(s), and the remaining useful life. The proposed method incorporates
principal component analysis to orthogonalize statistical time-domain features,
which are inputs into supervised regressors such as random forests, extreme
random forests, XGBoost, and artificial neural networks. The highest performing
algorithm, ANN-Flux, achieves AUROC and AUPR scores exceeding 0.95 for each
classification. In addition, ANN-Flux reduces the remaining useful life RMSE by
38% for the same test split of the dataset compared to past work, with
significantly less computational cost.
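
A compact sketch of the feature pipeline described above (PCA-orthogonalized time-domain features feeding a supervised regressor for remaining useful life) could look as follows; the feature matrix, targets, and hyperparameters are placeholders, not the values used for ANN-Flux:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

# X: statistical time-domain features per engine cycle (rows), y: remaining useful life.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))            # placeholder feature matrix
y = rng.uniform(0, 80, size=500)          # placeholder RUL targets, in flight cycles

rul_model = make_pipeline(
    StandardScaler(),                     # zero-mean, unit-variance features
    PCA(n_components=10),                 # orthogonalize correlated statistics
    RandomForestRegressor(n_estimators=200, random_state=0),
)
rul_model.fit(X, y)
print(rul_model.predict(X[:5]))           # predicted RUL for the first five cycles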
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 01:19:41 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Cohen",
"Joseph",
""
],
[
"Huan",
"Xun",
""
],
[
"Ni",
"Jun",
""
]
] |
new_dataset
| 0.980826 |
2304.06364
|
Wanjun Zhong
|
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin
Wang, Amin Saied, Weizhu Chen and Nan Duan
|
AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models
|
19 pages
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evaluating the general abilities of foundation models to tackle human-level
tasks is a vital aspect of their development and application in the pursuit of
Artificial General Intelligence (AGI). Traditional benchmarks, which rely on
artificial datasets, may not accurately represent human-level capabilities. In
this paper, we introduce AGIEval, a novel benchmark specifically designed to
assess foundation models in the context of human-centric standardized exams,
such as college entrance exams, law school admission tests, math competitions,
and lawyer qualification tests. We evaluate several state-of-the-art foundation
models, including GPT-4, ChatGPT, and Text-Davinci-003, using this benchmark.
Impressively, GPT-4 surpasses average human performance on SAT, LSAT, and math
competitions, attaining a 95% accuracy rate on the SAT Math test and a 92.5%
accuracy on the English test of the Chinese national college entrance exam.
This demonstrates the extraordinary performance of contemporary foundation
models. In contrast, we also find that GPT-4 is less proficient in tasks that
require complex reasoning or specific domain knowledge. Our comprehensive
analyses of model capabilities (understanding, knowledge, reasoning, and
calculation) reveal these models' strengths and limitations, providing valuable
insights into future directions for enhancing their general capabilities. By
concentrating on tasks pertinent to human cognition and decision-making, our
benchmark delivers a more meaningful and robust evaluation of foundation
models' performance in real-world scenarios. The data, code, and all model
outputs are released in https://github.com/ruixiangcui/AGIEval.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 09:39:30 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 14:23:02 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhong",
"Wanjun",
""
],
[
"Cui",
"Ruixiang",
""
],
[
"Guo",
"Yiduo",
""
],
[
"Liang",
"Yaobo",
""
],
[
"Lu",
"Shuai",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Saied",
"Amin",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.999742 |
2304.08304
|
Binglu Ren
|
Binglu Ren and Jianqin Yin
|
SDVRF: Sparse-to-Dense Voxel Region Fusion for Multi-modal 3D Object
Detection
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the perception task of autonomous driving, multi-modal methods have become
a trend due to the complementary characteristics of LiDAR point clouds and
image data. However, the performance of multi-modal methods is usually limited
by the sparsity of the point cloud or the noise problem caused by the
misalignment between LiDAR and the camera. To solve these two problems, we
present a new concept, Voxel Region (VR), which is obtained by projecting the
sparse local point clouds in each voxel dynamically. We then propose a novel
fusion method named Sparse-to-Dense Voxel Region Fusion (SDVRF). Specifically,
more pixels of the image feature map inside the VR are gathered to supplement
the voxel feature extracted from sparse points and achieve denser fusion.
Meanwhile, unlike prior methods, which project size-fixed grids,
our strategy of generating dynamic regions achieves better alignment and avoids
introducing too much background noise. Furthermore, we propose a multi-scale
fusion framework to extract more contextual information and capture the
features of objects of different sizes. Experiments on the KITTI dataset show
that our method improves the performance of different baselines, especially on
classes of small size, including Pedestrian and Cyclist.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 14:17:45 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 01:27:01 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Sep 2023 09:22:02 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Ren",
"Binglu",
""
],
[
"Yin",
"Jianqin",
""
]
] |
new_dataset
| 0.994021 |
2304.12227
|
Pavamana Katti
|
Pavamana K J, Chandramani Singh
|
Caching Contents with Varying Popularity using Restless Bandits
|
arXiv admin note: substantial text overlap with arXiv:2212.03291
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study content caching in a wireless network in which the users are
connected through a base station that is equipped with a finite-capacity cache.
We assume a fixed set of contents whose popularity varies with time. Users'
requests for the content depend on their instantaneous popularity levels.
Proactively caching contents at the base station incurs a cost, but so does
not having requested contents available at the base station. We propose to
proactively cache contents at the base station so as to minimize the combined
content-miss and caching costs. We formulate the problem as a discounted-cost Markov
decision problem that is a restless multi-armed bandit problem. We provide
conditions under which the problem is indexable and also propose a novel
approach that adjusts a few parameters to render the problem indexable. We
demonstrate the efficacy of the Whittle index policy via numerical evaluation.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 16:14:55 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 09:15:39 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2023 09:58:11 GMT"
},
{
"version": "v4",
"created": "Sun, 17 Sep 2023 06:36:27 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"J",
"Pavamana K",
""
],
[
"Singh",
"Chandramani",
""
]
] |
new_dataset
| 0.99977 |
2305.02607
|
Narek Maloyan
|
Daniil Homskiy, Narek Maloyan
|
DN at SemEval-2023 Task 12: Low-Resource Language Text Classification
via Multilingual Pretrained Language Model Fine-tuning
| null | null |
10.18653/v1/2023.semeval-1.212
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, sentiment analysis has gained significant importance in
natural language processing. However, most existing models and datasets for
sentiment analysis are developed for high-resource languages, such as English
and Chinese, leaving low-resource languages, particularly African languages,
largely unexplored. The AfriSenti-SemEval 2023 Shared Task 12 aims to fill this
gap by evaluating sentiment analysis models on low-resource African languages.
In this paper, we present our solution to the shared task, where we employed
different multilingual XLM-R models with a classification head trained on various
data, including those retrained in African dialects and fine-tuned on target
languages. Our team achieved the third-best results in Subtask B, Track 16:
Multilingual, demonstrating the effectiveness of our approach. While our model
showed relatively good results on multilingual data, it performed poorly in
some languages. Our findings highlight the importance of developing more
comprehensive datasets and models for low-resource African languages to advance
sentiment analysis research. We have also provided the solution in the GitHub
repository.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 07:28:45 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Homskiy",
"Daniil",
""
],
[
"Maloyan",
"Narek",
""
]
] |
new_dataset
| 0.998574 |
2305.06463
|
Zachary Newman
|
Kelsey Merrill and Zachary Newman and Santiago Torres-Arias and Karen
Sollins
|
Speranza: Usable, privacy-friendly software signing
|
15 pages, 5 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software repositories, used for wide-scale open software distribution, are a
significant vector for security attacks. Software signing provides
authenticity, mitigating many such attacks. Developer-managed signing keys pose
usability challenges, but certificate-based systems introduce privacy problems.
This work, Speranza, uses certificates to verify software authenticity but
still provides anonymity to signers using zero-knowledge identity
co-commitments. In Speranza, a signer uses an automated certificate authority
(CA) to create a private identity-bound signature and proof of authorization.
Verifiers check that a signer was authorized to publish a package without
learning the signer's identity. The package repository privately records each
package's authorized signers, but publishes only commitments to identities in a
public map. Then, when issuing certificates, the CA issues the certificate to a
distinct commitment to the same identity. The signer then creates a
zero-knowledge proof that these are identity co-commitments. We implemented a
proof-of-concept for Speranza. We find that costs to maintainers (signing) and
end users (verifying) are small (< 1 ms), even for a repository with millions
of packages. Techniques inspired by recent key transparency systems reduce the
bandwidth for serving authorization policies to 2 KiB. Server costs in this
system are negligible. Our evaluation finds that Speranza is practical on the
scale of the largest software repositories. We also emphasize practicality and
deployability in this project. By building on existing technology and employing
relatively simple and well-established cryptographic techniques, Speranza can
be deployed for wide-scale use with only a few hundred lines of code and
minimal changes to existing infrastructure. Speranza is a practical way to
bring privacy and authenticity together for more trustworthy open-source
software.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 21:13:24 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Sep 2023 14:57:32 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Merrill",
"Kelsey",
""
],
[
"Newman",
"Zachary",
""
],
[
"Torres-Arias",
"Santiago",
""
],
[
"Sollins",
"Karen",
""
]
] |
new_dataset
| 0.999539 |
2305.17892
|
Dajiang Suo
|
Ao Qu, Xuhuan Huang, Dajiang Suo
|
SEIP: Simulation-based Design and Evaluation of Infrastructure-based
Collective Perception
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in sensing and communication have paved the way for
collective perception in traffic management, with real-time data sharing among
multiple entities. While vehicle-based collective perception has gained
traction, infrastructure-based approaches, which entail the real-time sharing
and merging of sensing data from different roadside sensors for object
detection, grapple with challenges in placement strategy and high ex-post
evaluation costs. Despite anecdotal evidence of their effectiveness, many
current deployments rely on engineering heuristics and face budget constraints
that limit post-deployment adjustments. This paper introduces polynomial-time
heuristic algorithms and a simulation tool for the ex-ante evaluation of
infrastructure sensor deployment. By modeling it as an integer programming
problem, we guide decisions on sensor locations, heights, and configurations to
harmonize cost, installation constraints, and coverage. Our simulation engine,
integrated with open-source urban driving simulators, enables us to evaluate
the effectiveness of each sensor deployment solution through the lens of object
detection. A case study with infrastructure LiDARs revealed that the
incremental benefit derived from integrating additional low-resolution LiDARs
could surpass that of incorporating more high-resolution ones. The results
reinforce the necessity of investigating the cost-performance tradeoff prior to
deployment. The code for our simulation experiments can be found at
https://github.com/dajiangsuo/SEIP.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 05:37:13 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 12:50:50 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Qu",
"Ao",
""
],
[
"Huang",
"Xuhuan",
""
],
[
"Suo",
"Dajiang",
""
]
] |
new_dataset
| 0.988752 |
2306.00923
|
Ruohan Gao
|
Ruohan Gao, Hao Li, Gokul Dharan, Zhuzhu Wang, Chengshu Li, Fei Xia,
Silvio Savarese, Li Fei-Fei, Jiajun Wu
|
Sonicverse: A Multisensory Simulation Platform for Embodied Household
Agents that See and Hear
|
In ICRA 2023. Project page:
https://ai.stanford.edu/~rhgao/sonicverse/. Code:
https://github.com/StanfordVL/sonicverse. Gao and Li contributed equally to
this work and are in alphabetical order
| null | null | null |
cs.RO cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing embodied agents in simulation has been a key research topic in
recent years. Exciting new tasks, algorithms, and benchmarks have been
developed in various simulators. However, most of them assume deaf agents in
silent environments, while we humans perceive the world with multiple senses.
We introduce Sonicverse, a multisensory simulation platform with integrated
audio-visual simulation for training household agents that can both see and
hear. Sonicverse models realistic continuous audio rendering in 3D environments
in real-time. Together with a new audio-visual VR interface that allows humans
to interact with agents with audio, Sonicverse enables a series of embodied AI
tasks that need audio-visual perception. For semantic audio-visual navigation
in particular, we also propose a new multi-task learning model that achieves
state-of-the-art performance. In addition, we demonstrate Sonicverse's realism
via sim-to-real transfer, which has not been achieved by other simulators: an
agent trained in Sonicverse can successfully perform audio-visual navigation in
real-world environments. Sonicverse is available at:
https://github.com/StanfordVL/Sonicverse.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:24:01 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Sep 2023 22:10:40 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Gao",
"Ruohan",
""
],
[
"Li",
"Hao",
""
],
[
"Dharan",
"Gokul",
""
],
[
"Wang",
"Zhuzhu",
""
],
[
"Li",
"Chengshu",
""
],
[
"Xia",
"Fei",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.999467 |
2306.01851
|
Niki Amini-Naieni
|
Niki Amini-Naieni, Kiana Amini-Naieni, Tengda Han, Andrew Zisserman
|
Open-world Text-specified Object Counting
|
BMVC 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our objective is open-world object counting in images, where the target
object class is specified by a text description. To this end, we propose
CounTX, a class-agnostic, single-stage model using a transformer decoder
counting head on top of pre-trained joint text-image representations. CounTX is
able to count the number of instances of any class given only an image and a
text description of the target object class, and can be trained end-to-end. In
addition to this model, we make the following contributions: (i) we compare the
performance of CounTX to prior work on open-world object counting, and show
that our approach exceeds the state of the art on all measures on the FSC-147
benchmark for methods that use text to specify the task; (ii) we present and
release FSC-147-D, an enhanced version of FSC-147 with text descriptions, so
that object classes can be described with more detailed language than their
simple class names. FSC-147-D and the code are available at
https://www.robots.ox.ac.uk/~vgg/research/countx.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 18:14:21 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 23:13:21 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Amini-Naieni",
"Niki",
""
],
[
"Amini-Naieni",
"Kiana",
""
],
[
"Han",
"Tengda",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.985236 |
2306.07954
|
Shuai Yang
|
Shuai Yang, Yifan Zhou, Ziwei Liu and Chen Change Loy
|
Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
|
Accepted to SIGGRAPH Asia 2023. Project page:
https://www.mmlab-ntu.com/project/rerender/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large text-to-image diffusion models have exhibited impressive proficiency in
generating high-quality images. However, when applying these models to video
domain, ensuring temporal consistency across video frames remains a formidable
challenge. This paper proposes a novel zero-shot text-guided video-to-video
translation framework to adapt image models to videos. The framework includes
two parts: key frame translation and full video translation. The first part
uses an adapted diffusion model to generate key frames, with hierarchical
cross-frame constraints applied to enforce coherence in shapes, textures and
colors. The second part propagates the key frames to other frames with
temporal-aware patch matching and frame blending. Our framework achieves global
style and local texture temporal consistency at a low cost (without re-training
or optimization). The adaptation is compatible with existing image diffusion
techniques, allowing our framework to take advantage of them, such as
customizing a specific subject with LoRA, and introducing extra spatial
guidance with ControlNet. Extensive experimental results demonstrate the
effectiveness of our proposed framework over existing methods in rendering
high-quality and temporally-coherent videos.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 17:52:23 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 09:57:20 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Yang",
"Shuai",
""
],
[
"Zhou",
"Yifan",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Loy",
"Chen Change",
""
]
] |
new_dataset
| 0.977426 |
2307.06101
|
Ruoyu Wang
|
Ruoyu Wang, Zixuan Guo, Yizhou Chen, Xinyi Wang, Ben M. Chen
|
Air Bumper: A Collision Detection and Reaction Framework for Autonomous
MAV Navigation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous navigation in unknown environments with obstacles remains
challenging for micro aerial vehicles (MAVs) due to their limited onboard
computing and sensing resources. Although various collision avoidance methods
have been developed, it is still possible for drones to collide with unobserved
obstacles due to unpredictable disturbances, sensor limitations, and control
uncertainty. Instead of completely avoiding collisions, this article proposes
Air Bumper, a collision detection and reaction framework, for fully autonomous
flight in 3D environments to improve the safety of drones. Our framework only
utilizes the onboard inertial measurement unit (IMU) to detect and estimate
collisions. We further design a collision recovery control for rapid recovery
and collision-aware mapping to integrate collision information into general
LiDAR-based sensing and planning frameworks. Our simulation and experimental
results show that the quadrotor can rapidly detect, estimate, and recover from
collisions with obstacles in 3D space and continue the flight smoothly with the
help of the collision-aware map. Our Air Bumper will be released as open-source
software on GitHub.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 11:49:15 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Sep 2023 03:12:48 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wang",
"Ruoyu",
""
],
[
"Guo",
"Zixuan",
""
],
[
"Chen",
"Yizhou",
""
],
[
"Wang",
"Xinyi",
""
],
[
"Chen",
"Ben M.",
""
]
] |
new_dataset
| 0.999372 |
2307.10620
|
KitIan Kou
|
Jifei Miao, Kit Ian Kou, Hongmin Cai, and Lizhi Liu
|
Quaternion tensor left ring decomposition and application for color
image inpainting
| null | null | null | null |
cs.CV cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, tensor networks have emerged as powerful tools for solving
large-scale optimization problems. One of the most promising tensor networks is
the tensor ring (TR) decomposition, which achieves circular dimensional
permutation invariance in the model through the utilization of the trace
operation and equitable treatment of the latent cores. On the other hand, more
recently, quaternions have gained significant attention and have been widely
utilized in color image processing tasks due to their effectiveness in encoding
color pixels by considering the three color channels as a unified entity.
Therefore, in this paper, based on the left quaternion matrix multiplication,
we propose the quaternion tensor left ring (QTLR) decomposition, which inherits
the powerful and generalized representation abilities of the TR decomposition
while leveraging the advantages of quaternions for color pixel representation.
In addition to providing the definition of QTLR decomposition and an algorithm
for learning the QTLR format, the paper further proposes a low-rank quaternion
tensor completion (LRQTC) model and its algorithm for color image inpainting
based on the defined QTLR decomposition. Finally, extensive experiments on
color image inpainting demonstrate that the proposed LRQTC method is highly
competitive.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 06:37:47 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Sep 2023 10:53:52 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Miao",
"Jifei",
""
],
[
"Kou",
"Kit Ian",
""
],
[
"Cai",
"Hongmin",
""
],
[
"Liu",
"Lizhi",
""
]
] |
new_dataset
| 0.979098 |