id (stringlengths 9-10) | submitter (stringlengths 2-52, ⌀) | authors (stringlengths 4-6.51k) | title (stringlengths 4-246) | comments (stringlengths 1-523, ⌀) | journal-ref (stringlengths 4-345, ⌀) | doi (stringlengths 11-120, ⌀) | report-no (stringlengths 2-243, ⌀) | categories (stringlengths 5-98) | license (stringclasses 9 values) | abstract (stringlengths 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses 1 value) | probability (float64 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2308.05602
|
Shizhe Chen
|
Shizhe Chen, Thomas Chabal, Ivan Laptev and Cordelia Schmid
|
Object Goal Navigation with Recursive Implicit Maps
|
Accepted to IROS 2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object goal navigation aims to navigate an agent to locations of a given
object category in unseen environments. Classical methods explicitly build maps
of environments and require extensive engineering while lacking semantic
information for object-oriented exploration. On the other hand, end-to-end
learning methods alleviate manual map design and predict actions using implicit
representations. Such methods, however, lack an explicit notion of geometry and
may have limited ability to encode navigation history. In this work, we propose
an implicit spatial map for object goal navigation. Our implicit map is
recursively updated with new observations at each step using a transformer. To
encourage spatial reasoning, we introduce auxiliary tasks and train our model
to reconstruct explicit maps as well as to predict visual features, semantic
labels and actions. Our method significantly outperforms the state of the art
on the challenging MP3D dataset and generalizes well to the HM3D dataset. We
successfully deploy our model on a real robot and achieve encouraging object
goal navigation results in real scenes using only a few real-world
demonstrations. Code, trained models and videos are available at
\url{https://www.di.ens.fr/willow/research/onav_rim/}.
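As an editorial illustration of the recursive update described above (not the authors' code), the sketch below keeps the implicit map as a fixed set of learnable tokens that cross-attends to each new observation's features at every step; all module names, sizes, and the action space are assumptions.

```python
# Editorial sketch (not the authors' code): an "implicit map" kept as a fixed set of
# learnable tokens, recursively refined at every step by cross-attending to the
# features of the newest observation.
import torch
import torch.nn as nn

class RecursiveImplicitMap(nn.Module):
    def __init__(self, num_tokens=64, dim=256, heads=8):
        super().__init__()
        self.map_tokens = nn.Parameter(torch.randn(1, num_tokens, dim))  # initial map state
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.updater = nn.TransformerDecoder(layer, num_layers=2)
        self.action_head = nn.Linear(dim, 4)  # e.g. forward / turn-left / turn-right / stop

    def init_map(self, batch_size):
        return self.map_tokens.expand(batch_size, -1, -1)

    def step(self, map_state, obs_tokens):
        # Map tokens act as queries; the new observation provides keys and values.
        new_map = self.updater(tgt=map_state, memory=obs_tokens)
        action_logits = self.action_head(new_map.mean(dim=1))
        return new_map, action_logits

model = RecursiveImplicitMap()
map_state = model.init_map(batch_size=2)
for t in range(5):                              # a short episode
    obs = torch.randn(2, 196, 256)              # hypothetical per-step visual tokens
    map_state, logits = model.step(map_state, obs)
print(logits.shape)                             # torch.Size([2, 4])
```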
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 14:21:33 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Chen",
"Shizhe",
""
],
[
"Chabal",
"Thomas",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.978623 |
2308.05620
|
Erik Pearson
|
Erik Pearson and Brendan Englot
|
A Robust and Rapidly Deployable Waypoint Navigation Architecture for
Long-Duration Operations in GPS-Denied Environments
|
8 pages, 7 figures, Ubiquitous Robots 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For long-duration operations in GPS-denied environments, accurate and
repeatable waypoint navigation is an essential capability. While simultaneous
localization and mapping (SLAM) works well for single-session operations,
repeated, multi-session operations require robots to navigate to the same
spot(s) accurately and precisely each and every time. Localization and
navigation errors can build up from one session to the next if they are not
accounted for. Localization using a global reference map works well, but there
are no publicly available packages for quickly building maps and navigating
with them. We propose a new architecture using a combination of two publicly
available packages with a newly released package to create a fully functional
multi-session navigation system for ground vehicles. The system takes just a
few hours from the beginning of the first manual scan to perform autonomous
waypoint navigation.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 15:09:14 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Pearson",
"Erik",
""
],
[
"Englot",
"Brendan",
""
]
] |
new_dataset
| 0.96342 |
2308.05627
|
Adrian Lubitz
|
Adrian Lubitz, Lisa Gutzeit and Frank Kirchner
|
CoBaIR: A Python Library for Context-Based Intention Recognition in
Human-Robot-Interaction
|
7 Pages, 3 Figures, to be published in proceedings of IEEE RO-MAN
Conference
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Human-Robot Interaction (HRI) is becoming increasingly important as robots rapidly
integrate into all aspects of our lives, but HRI applications depend heavily on the
robotic system used as well as on the deployment environment and cultural
differences. Because of these variable dependencies, it is often not feasible to use
a data-driven approach to train a model for human intention recognition. Expert
systems have proven to close this gap efficiently. Furthermore, it is important to
support understandability in HRI systems to establish trust in the system. To
address the above-mentioned challenges in HRI, we present an adaptable Python
library into which current state-of-the-art models for context recognition can be
integrated. For context-based intention recognition, a two-layer Bayesian Network
(BN) is used. The Bayesian approach offers explainability and clarity in the
creation of scenarios and is easily extendable with more modalities. Additionally,
it can be used as an expert system if no data is available, but can also be
fine-tuned when data becomes available.
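To make the two-layer idea concrete, here is a toy Bayesian inference over contexts and intentions; it is purely illustrative, uses made-up probabilities, and is not the CoBaIR API.

```python
# Toy sketch of two-layer Bayesian intention recognition (illustrative only; this is
# not the CoBaIR API). Layer 1: observed contexts; layer 2: the latent intention.
# Contexts are assumed conditionally independent given the intention.
PRIOR = {"hand_over_tool": 0.5, "idle": 0.5}

# P(context_value | intention), one table per context variable (hypothetical numbers).
LIKELIHOOD = {
    "gaze_at_robot": {"hand_over_tool": {True: 0.8, False: 0.2},
                      "idle":           {True: 0.3, False: 0.7}},
    "arm_extended":  {"hand_over_tool": {True: 0.9, False: 0.1},
                      "idle":           {True: 0.2, False: 0.8}},
}

def infer_intention(observed):
    scores = {}
    for intention, prior in PRIOR.items():
        p = prior
        for context, value in observed.items():
            p *= LIKELIHOOD[context][intention][value]
        scores[intention] = p
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

print(infer_intention({"gaze_at_robot": True, "arm_extended": True}))
# e.g. {'hand_over_tool': 0.92..., 'idle': 0.07...}
```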
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 15:15:26 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Lubitz",
"Adrian",
""
],
[
"Gutzeit",
"Lisa",
""
],
[
"Kirchner",
"Frank",
""
]
] |
new_dataset
| 0.95471 |
2308.05629
|
Rickard Br\"annvall
|
Rickard Br\"annvall, Henrik Forsgren, Fredrik Sandin and Marcus
Liwicki
|
ReLU and Addition-based Gated RNN
|
12 pages, 4 tables
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We replace the multiplication and sigmoid function of the conventional
recurrent gate with addition and ReLU activation. This mechanism is designed to
maintain long-term memory for sequence processing but at a reduced computational
cost, thereby opening the way to more efficient execution or larger models on
restricted hardware. Recurrent Neural Networks (RNNs) with gating
mechanisms such as LSTM and GRU have been widely successful in learning from
sequential data due to their ability to capture long-term dependencies.
Conventionally, the update based on current inputs and the previous state
history is each multiplied with dynamic weights and combined to compute the
next state. However, multiplication can be computationally expensive,
especially for certain hardware architectures or alternative arithmetic systems
such as homomorphic encryption. It is demonstrated that the novel gating
mechanism can capture long-term dependencies for a standard synthetic sequence
learning task while significantly reducing computational costs such that
execution time is reduced by half on CPU and by one-third under encryption.
Experimental results on handwritten text recognition tasks furthermore show
that the proposed architecture can be trained to achieve comparable accuracy to
conventional GRU and LSTM baselines. The gating mechanism introduced in this
paper may enable privacy-preserving AI applications operating under homomorphic
encryption by avoiding the multiplication of encrypted variables. It can also
support quantization in (unencrypted) plaintext applications, with the
potential for substantial performance gains since the addition-based
formulation can avoid the expansion to double precision often required for
multiplication.
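The following sketch contrasts a conventional sigmoid/multiplication gate with an addition/ReLU variant, as one plausible reading of the mechanism described above; the paper's exact formulation may differ.

```python
# Sketch contrasting a conventional sigmoid/multiplication gate with an addition/ReLU
# variant; this is one plausible reading of the abstract, not the paper's equations.
import numpy as np

rng = np.random.default_rng(0)
H = 8
Wx, Wh = 0.1 * rng.normal(size=(H, H)), 0.1 * rng.normal(size=(H, H))
Ux, Uh = 0.1 * rng.normal(size=(H, H)), 0.1 * rng.normal(size=(H, H))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conventional_gate_step(x_t, h_prev):
    z = sigmoid(Wx @ x_t + Wh @ h_prev)            # gate in (0, 1)
    h_cand = np.tanh(Ux @ x_t + Uh @ h_prev)
    return z * h_prev + (1.0 - z) * h_cand         # multiplicative blending (GRU-style)

def relu_add_gate_step(x_t, h_prev):
    update = np.maximum(0.0, Ux @ x_t + Uh @ h_prev)   # ReLU replaces the sigmoid
    return h_prev + update                             # addition replaces the multiplication

h = np.zeros(H)
for _ in range(5):
    h = relu_add_gate_step(rng.normal(size=H), h)
print(h.shape)  # (8,)
```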
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 15:18:16 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Brännvall",
"Rickard",
""
],
[
"Forsgren",
"Henrik",
""
],
[
"Sandin",
"Fredrik",
""
],
[
"Liwicki",
"Marcus",
""
]
] |
new_dataset
| 0.995837 |
2308.05644
|
Khaza Anuarul Hoque
|
Ernest Bonnah, Khaza Anuarul Hoque
|
QTWTL: Quality Aware Time Window Temporal Logic for Performance
Monitoring
|
Accepted for publication in the ACM/IEEE MEMOCODE 2023 conference
| null | null | null |
cs.LO cs.PF cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In various service-oriented applications such as distributed autonomous
delivery, healthcare, tourism, transportation, and many others, where service
agents need to perform serial and time-bounded tasks to achieve their goals,
quality of service must constantly be assured. In addition to safety
requirements, such agents also need to fulfill performance requirements in
order to satisfy their quality of service. This paper proposes the novel
quality-aware time window temporal logic (QTWTL) by extending the traditional
time window temporal logic (TWTL) with two operators for counting and
aggregation operations. We also propose offline runtime monitoring algorithms
for the performance monitoring of QTWTL specifications. To analyze the
feasibility and efficiency of our proposed approach, we generate a large number
of traces using the New York City Taxi and Limousine Commission Trip Record
data, formalize their performance requirements using QTWTL, and monitor them
using the proposed algorithms. The obtained results show that the monitoring
algorithm has a linear space and time complexity with respect to the number of
traces monitored.
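As a rough illustration of offline monitoring with counting and aggregation (not the QTWTL semantics defined in the paper), the sketch below checks a count and a mean over a time window of a timestamped trace in a single linear pass.

```python
# Offline monitoring sketch (illustrative; not the QTWTL semantics from the paper):
# over a timestamped trace, count events in a time window that satisfy a predicate
# and aggregate a quantity, then check thresholds. One linear pass per trace.
trace = [  # (timestamp in minutes, trip_duration in minutes) -- hypothetical taxi data
    (0, 12.0), (3, 7.5), (8, 22.0), (15, 9.0), (21, 30.0), (27, 11.0),
]

def monitor(trace, t_start, t_end, max_duration, min_count, max_mean):
    in_window = [d for (t, d) in trace if t_start <= t <= t_end]
    count_ok = sum(1 for d in in_window if d <= max_duration)                  # counting
    mean_dur = sum(in_window) / len(in_window) if in_window else float("nan")  # aggregation
    return count_ok >= min_count and mean_dur <= max_mean

# "Within [0, 20] min, at least 3 trips finish within 15 min and the mean duration is <= 15 min."
print(monitor(trace, 0, 20, max_duration=15, min_count=3, max_mean=15))  # True
```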
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 15:37:33 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Bonnah",
"Ernest",
""
],
[
"Hoque",
"Khaza Anuarul",
""
]
] |
new_dataset
| 0.998065 |
2308.05649
|
Kunjian Song
|
Kunjian Song, Mikhail R. Gadelha, Franz Brau{\ss}e, Rafael S. Menezes,
Lucas C. Cordeiro
|
ESBMC v7.3: Model Checking C++ Programs using Clang AST
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces ESBMC v7.3, the latest Efficient SMT-Based
Context-Bounded Model Checker version, which now incorporates a new clang-based
C++ front-end. While the previous CPROVER-based front-end served well for
handling C++03 programs, it encountered challenges keeping up with the evolving
C++ language. As new language and library features were added in each C++
version, the limitations of the old front-end became apparent, leading to
difficult-to-maintain code. Consequently, modern C++ programs were challenging
to verify. To overcome this obstacle, we redeveloped the front-end, opting for
a more robust approach using clang. The new front-end efficiently traverses the
Abstract Syntax Tree (AST) in-memory using clang APIs and transforms each AST
node into ESBMC's Intermediate Representation. Through extensive
experimentation, our results demonstrate that ESBMC v7.3 with the new front-end
significantly reduces parse and conversion errors, enabling successful
verification of a wide range of C++ programs, thereby outperforming previous
ESBMC versions.
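For readers unfamiliar with clang-based AST traversal, the sketch below walks a C++ AST using clang's Python bindings (clang.cindex); ESBMC itself does this in C++ through the clang APIs, so this is only an analogy, not ESBMC code.

```python
# Sketch of walking a C++ AST with clang's Python bindings (clang.cindex); ESBMC does
# the equivalent in C++ through the clang APIs, converting each node into its
# intermediate representation. Requires libclang to be installed.
import clang.cindex

source = """
int add(int a, int b) { return a + b; }
int main() { return add(1, 2); }
"""

index = clang.cindex.Index.create()
tu = index.parse("demo.cpp", args=["-std=c++17"],
                 unsaved_files=[("demo.cpp", source)])

def walk(cursor, depth=0):
    # A verifier front-end would convert each node into an IR node here.
    print("  " * depth + f"{cursor.kind.name}: {cursor.spelling}")
    for child in cursor.get_children():
        walk(child, depth + 1)

walk(tu.cursor)
```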
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 15:46:33 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Song",
"Kunjian",
""
],
[
"Gadelha",
"Mikhail R.",
""
],
[
"Brauße",
"Franz",
""
],
[
"Menezes",
"Rafael S.",
""
],
[
"Cordeiro",
"Lucas C.",
""
]
] |
new_dataset
| 0.997857 |
2308.05697
|
Xubin Ren
|
Xubin Ren, Lianghao Xia, Yuhao Yang, Wei Wei, Tianle Wang, Xuheng Cai
and Chao Huang
|
SSLRec: A Self-Supervised Learning Library for Recommendation
| null | null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning (SSL) has gained significant interest in recent
years as a solution to address the challenges posed by sparse and noisy data in
recommender systems. Despite the growing number of SSL algorithms designed to
provide state-of-the-art performance in various recommendation scenarios (e.g.,
graph collaborative filtering, sequential recommendation, social
recommendation, KG-enhanced recommendation), there is still a lack of unified
frameworks that integrate recommendation algorithms across different domains.
Such a framework could serve as the cornerstone for self-supervised
recommendation algorithms, unifying the validation of existing methods and
driving the design of new ones. To address this gap, we introduce SSLRec, a
novel benchmark platform that provides a standardized, flexible, and
comprehensive framework for evaluating various SSL-enhanced recommenders. The
SSLRec library features a modular architecture that allows users to easily
evaluate state-of-the-art models and a complete set of data augmentation and
self-supervised toolkits to help create SSL recommendation models with specific
needs. Furthermore, SSLRec simplifies the process of training and evaluating
different recommendation models with consistent and fair settings. Our SSLRec
platform covers a comprehensive set of state-of-the-art SSL-enhanced
recommendation models across different scenarios, enabling researchers to
evaluate these cutting-edge models and drive further innovation in the field.
Our implemented SSLRec framework is available at the source code repository
https://github.com/HKUDS/SSLRec.
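As a generic illustration of one SSL ingredient such a framework standardizes, the sketch below applies edge-dropout augmentation to an interaction graph and computes an InfoNCE contrastive loss between two views; it is not the SSLRec API, and all names and sizes are hypothetical.

```python
# Generic sketch of a self-supervised recommendation ingredient: edge-dropout
# augmentation of a user-item interaction graph plus an InfoNCE contrastive loss
# between two views. Illustrative only; this is NOT the SSLRec API.
import torch
import torch.nn.functional as F

def dropout_edges(edge_index, drop_prob=0.2):
    keep = torch.rand(edge_index.shape[1]) >= drop_prob
    return edge_index[:, keep]

def info_nce(z1, z2, temperature=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature                   # similarities of all pairs
    labels = torch.arange(z1.shape[0])                 # positives on the diagonal
    return F.cross_entropy(logits, labels)

edge_index = torch.randint(0, 100, (2, 500))           # hypothetical interaction edges
view_a, view_b = dropout_edges(edge_index), dropout_edges(edge_index)
# In a real pipeline a graph encoder would run on each augmented view; random
# embeddings stand in for those encoder outputs here.
z_a, z_b = torch.randn(100, 64), torch.randn(100, 64)
print(float(info_nce(z_a, z_b)))
```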
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 16:59:36 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Ren",
"Xubin",
""
],
[
"Xia",
"Lianghao",
""
],
[
"Yang",
"Yuhao",
""
],
[
"Wei",
"Wei",
""
],
[
"Wang",
"Tianle",
""
],
[
"Cai",
"Xuheng",
""
],
[
"Huang",
"Chao",
""
]
] |
new_dataset
| 0.990131 |
2308.05698
|
Kojo Adu-Gyamfi
|
Kojo Konadu Adu-Gyamfi, Karo Ahmadi-Dehrashid, Yaw Okyere Adu-Gyamfi,
Pujitha Gunaratne, Anuj Sharma
|
MobiScout: A Scalable Cloud-Based Driving and Activity Monitoring
Platform Featuring an IOS App and a WatchOS Extension
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
MobiScout is iOS software that monitors users' driving habits and physiological
conditions while on the road. The MobiScout app was created to provide a low-cost,
next-generation data collection and analysis solution for
naturalistic driving studies. MobiScout collects real-time data, including
physiological information from drivers in their normal driving conditions using
sensors and cameras on mobile phones, smartwatches, and Bluetooth-enabled OBD
equipment. The MobiScout software captures vehicle and driving data, including
speed, braking, pulse rate, and acceleration, while the phone's camera captures
everything inside and outside the car. Captured data can be streamed to cloud
storage in real time or persisted in local storage in Wi-Fi dead zones. The
information gathered will be studied further to better understand typical
traffic behavior, performance, surroundings, and driving context among drivers.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 17:00:46 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Adu-Gyamfi",
"Kojo Konadu",
""
],
[
"Ahmadi-Dehrashid",
"Karo",
""
],
[
"Adu-Gyamfi",
"Yaw Okyere",
""
],
[
"Gunaratne",
"Pujitha",
""
],
[
"Sharma",
"Anuj",
""
]
] |
new_dataset
| 0.999877 |
2308.05725
|
Tu Anh Nguyen
|
Tu Anh Nguyen, Wei-Ning Hsu, Antony D'Avirro, Bowen Shi, Itai Gat,
Maryam Fazel-Zarani, Tal Remez, Jade Copet, Gabriel Synnaeve, Michael Hassid,
Felix Kreuk, Yossi Adi, Emmanuel Dupoux
|
EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech
Resynthesis
| null | null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent work has shown that it is possible to resynthesize high-quality speech
based, not on text, but on low bitrate discrete units that have been learned in
a self-supervised fashion and can therefore capture expressive aspects of
speech that are hard to transcribe (prosody, voice styles, non-verbal
vocalization). The adoption of these methods is still limited by the fact that
most speech synthesis datasets are read, severely limiting spontaneity and
expressivity. Here, we introduce Expresso, a high-quality expressive speech
dataset for textless speech synthesis that includes both read speech and
improvised dialogues rendered in 26 spontaneous expressive styles. We
illustrate the challenges and potentials of this dataset with an expressive
resynthesis benchmark where the task is to encode the input in low-bitrate
units and resynthesize it in a target voice while preserving content and style.
We evaluate resynthesis quality with automatic metrics for different
self-supervised discrete encoders, and explore tradeoffs between quality,
bitrate, and invariance to speaker and style. The dataset, evaluation
metrics, and baseline models are all open source.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 17:41:19 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Nguyen",
"Tu Anh",
""
],
[
"Hsu",
"Wei-Ning",
""
],
[
"D'Avirro",
"Antony",
""
],
[
"Shi",
"Bowen",
""
],
[
"Gat",
"Itai",
""
],
[
"Fazel-Zarani",
"Maryam",
""
],
[
"Remez",
"Tal",
""
],
[
"Copet",
"Jade",
""
],
[
"Synnaeve",
"Gabriel",
""
],
[
"Hassid",
"Michael",
""
],
[
"Kreuk",
"Felix",
""
],
[
"Adi",
"Yossi",
""
],
[
"Dupoux",
"Emmanuel",
""
]
] |
new_dataset
| 0.999521 |
2308.05733
|
Guangkai Xu
|
Guangkai Xu, Wei Yin, Hao Chen, Chunhua Shen, Kai Cheng, Feng Zhao
|
FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models
|
Accepted to ICCV 2023. Project webpage is at:
https://aim-uofa.github.io/FrozenRecon/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D scene reconstruction is a long-standing vision task. Existing approaches
can be categorized into geometry-based and learning-based methods. The former
leverages multi-view geometry but can face catastrophic failures due to the
reliance on accurate pixel correspondence across views. The latter was
proffered to mitigate these issues by learning 2D or 3D representation
directly. However, without large-scale video or 3D training data, such methods can
hardly generalize to diverse real-world scenarios due to the tens of millions or
even billions of optimization parameters in the deep network.
Recently, robust monocular depth estimation models trained with large-scale
datasets have been shown to possess a weak 3D geometry prior, but they are
insufficient for reconstruction due to the unknown camera parameters, the
affine-invariant property, and inter-frame inconsistency. Here, we propose a
novel test-time optimization approach that can transfer the robustness of
affine-invariant depth models such as LeReS to challenging diverse scenes while
ensuring inter-frame consistency, with only dozens of parameters to optimize
per video frame. Specifically, our approach involves freezing the pre-trained
affine-invariant depth model's depth predictions, rectifying them by optimizing
the unknown scale-shift values with a geometric consistency alignment module,
and employing the resulting scale-consistent depth maps to robustly obtain
camera poses and achieve dense scene reconstruction, even in low-texture
regions. Experiments show that our method achieves state-of-the-art
cross-dataset reconstruction on five zero-shot testing datasets.
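A minimal sketch of the core idea follows, assuming a frozen affine-invariant depth prediction per frame and known point correspondences: only a scale and a shift are optimized so that depths of shared points agree across frames. The full method also optimizes camera poses and uses a geometric consistency alignment module, which this toy omits.

```python
# Minimal sketch (not the authors' code): freeze the depth network's affine-invariant
# predictions and optimize only a per-frame scale and shift so that depths of points
# seen in both frames agree with a reference frame.
import torch

torch.manual_seed(0)
depth = torch.rand(1000) * 5 + 1             # metric depth of points seen in both frames
pred_ref = 0.5 * depth - 0.35                # reference frame: frozen affine-invariant prediction
pred_new = 2.0 * depth + 2.6                 # new frame: same points, different unknown affine

scale = torch.ones(1, requires_grad=True)    # the only optimized parameters for the new frame
shift = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([scale, shift], lr=0.05)

for _ in range(800):
    aligned = scale * pred_new + shift       # rectify the frozen prediction
    loss = ((aligned - pred_ref) ** 2).mean()  # cross-frame consistency (toy version)
    opt.zero_grad(); loss.backward(); opt.step()

print(float(scale), float(shift))            # ~0.25 and ~-1.0, the true affine correction
```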
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 17:55:02 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Xu",
"Guangkai",
""
],
[
"Yin",
"Wei",
""
],
[
"Chen",
"Hao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Cheng",
"Kai",
""
],
[
"Zhao",
"Feng",
""
]
] |
new_dataset
| 0.991143 |
2308.05736
|
Bencheng Liao
|
Bencheng Liao, Shaoyu Chen, Yunchi Zhang, Bo Jiang, Qian Zhang, Wenyu
Liu, Chang Huang, Xinggang Wang
|
MapTRv2: An End-to-End Framework for Online Vectorized HD Map
Construction
|
Code available at https://github.com/hustvl/MapTR . arXiv admin note:
substantial text overlap with arXiv:2208.14437
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
High-definition (HD) maps provide abundant and precise static environmental
information about the driving scene, serving as a fundamental and indispensable
component for planning in autonomous driving systems. In this paper, we present
\textbf{Map} \textbf{TR}ansformer, an end-to-end framework for online
vectorized HD map construction. We propose a unified permutation-equivalent
modeling approach, i.e., modeling a map element as a point set with a group of
equivalent permutations, which accurately describes the shape of the map element
and stabilizes the learning process. We design a hierarchical query embedding
scheme to flexibly encode structured map information and perform hierarchical
bipartite matching for map element learning. To speed up convergence, we
further introduce auxiliary one-to-many matching and dense supervision. The
proposed method well copes with various map elements with arbitrary shapes. It
runs at real-time inference speed and achieves state-of-the-art performance on
both nuScenes and Argoverse2 datasets. Abundant qualitative results show stable
and robust map construction quality in complex and various driving scenes. Code
and more demos are available at \url{https://github.com/hustvl/MapTR} for
facilitating further studies and applications.
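The sketch below illustrates the permutation-equivalent point-set idea in simplified form (not the authors' code): a ground-truth map element admits several equivalent point orderings, and the regression loss takes the minimum over that group.

```python
# Simplified sketch of permutation-equivalent point-set modeling: all "equivalent"
# orderings describe the same shape, so the loss is the minimum over that group --
# both traversal directions for a polyline, all cyclic shifts for a polygon.
import torch

def equivalent_permutations(points, closed):
    n = points.shape[0]
    perms = []
    starts = range(n) if closed else [0]            # polygons may start at any vertex
    for s in starts:
        rolled = torch.roll(points, shifts=-s, dims=0)
        perms.append(rolled)                        # forward traversal
        perms.append(torch.flip(rolled, dims=[0]))  # reverse traversal
    return torch.stack(perms)

def point_set_loss(pred, gt, closed=False):
    candidates = equivalent_permutations(gt, closed)            # (P, N, 2)
    losses = (candidates - pred.unsqueeze(0)).abs().mean(dim=(1, 2))
    return losses.min()                                          # ordering-invariant

gt = torch.tensor([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])      # a lane-divider polyline
pred = torch.flip(gt, dims=[0]) + 0.05                           # predicted in reverse order
print(float(point_set_loss(pred, gt)))                           # ~0.05: ordering forgiven
```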
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 17:56:53 GMT"
}
] | 2023-08-11T00:00:00 |
[
[
"Liao",
"Bencheng",
""
],
[
"Chen",
"Shaoyu",
""
],
[
"Zhang",
"Yunchi",
""
],
[
"Jiang",
"Bo",
""
],
[
"Zhang",
"Qian",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Huang",
"Chang",
""
],
[
"Wang",
"Xinggang",
""
]
] |
new_dataset
| 0.999207 |
2105.06858
|
Andrea Raffo
|
Chiara Romanengo, Andrea Raffo, Yifan Qie, Nabil Anwer, Bianca
Falcidieno
|
Fit4CAD: A point cloud benchmark for fitting simple geometric primitives
in CAD objects
| null |
Computers & Graphics 102 (2022) 133-143
|
10.1016/j.cag.2021.09.013
| null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Fit4CAD, a benchmark for the evaluation and comparison of methods
for fitting simple geometric primitives in point clouds representing CAD
objects. This benchmark is meant to help both method developers and those who
want to identify the best-performing tools. The Fit4CAD dataset is composed of
225 high-quality point clouds, each of which has been obtained by sampling a
CAD object. The way these elements were created, by using existing platforms and
datasets, makes the benchmark easily expandable. The dataset is already split
into a training set and a test set. To assess performance and accuracy of the
different primitive fitting methods, various measures are defined. To
demonstrate the effective use of Fit4CAD, we have tested it on two methods
belonging to two different categories of approaches to the primitive fitting
problem: a clustering method based on a primitive growing framework and a
parametric method based on the Hough transform.
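As a small illustration of the kind of primitive fitting the benchmark evaluates (not one of the benchmarked methods), the sketch below fits a plane to a sampled patch by total least squares and reports the mean point-to-plane distance as a simple accuracy measure.

```python
# Sketch of primitive fitting: a total least-squares plane fit to a sampled patch,
# with mean point-to-plane distance as a simple accuracy measure (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
# Sample a noisy planar patch: z = 0.3x - 0.2y + 1, like a facet of a CAD model.
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 500)
pts = np.column_stack([xy, z])

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)      # smallest singular vector = plane normal
normal = vt[-1]
distances = np.abs((pts - centroid) @ normal)

print("normal:", np.round(normal, 3))                       # up to sign, ~ (0.3, -0.2, -1) normalized
print("mean point-to-plane distance:", distances.mean())    # roughly the 0.01 noise level
```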
|
[
{
"version": "v1",
"created": "Fri, 14 May 2021 14:32:08 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jul 2021 11:55:02 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Oct 2021 09:00:56 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Romanengo",
"Chiara",
""
],
[
"Raffo",
"Andrea",
""
],
[
"Qie",
"Yifan",
""
],
[
"Anwer",
"Nabil",
""
],
[
"Falcidieno",
"Bianca",
""
]
] |
new_dataset
| 0.999147 |
2105.12824
|
Tatsuaki Wada
|
Tatsuaki Wada, Antonio M. Scarfone, Hiroshi Matsuzoe
|
Huygens' equations and the gradient-flow equations in information
geometry
|
20 pages, no figure, accepted to International Journal of Geometric
Methods in Modern Physics (IJGMMP)
| null | null | null |
cs.IT cond-mat.stat-mech math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the relation between the gradient-flow equations and Hamilton's
equations in information geometry. By regarding the gradient-flow equations as
Huygens' equations in geometric optics, we have related the gradient flows to
the geodesic flows induced by the geodesic Hamiltonian in an appropriate
Riemannian geometry. The original evolution parameter $\textit{t}$ in the
gradient-flow equations is related to the arc-length parameter in the
associated Riemannian manifold by the Jacobi-Maupertuis transformation. As a
by-product, we find a relation between the gradient-flow equations and the
replicator equations.
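For reference, the gradient-flow equations discussed above are usually written as a natural-gradient flow of a potential on the statistical manifold; the conventions below are standard and may differ slightly from the paper's.

```latex
% Standard form of the gradient-flow equations referred to in the abstract
% (conventions may differ from the paper): for a potential \psi(\theta) on a
% statistical manifold with Fisher metric g_{ij}(\theta),
\begin{equation}
  \frac{d\theta^i}{dt} \;=\; -\, g^{ij}(\theta)\,
  \frac{\partial \psi(\theta)}{\partial \theta^j},
\end{equation}
% i.e. a natural-gradient descent flow; the paper relates the flow parameter t to
% the arc length of geodesics of an associated Hamiltonian via a
% Jacobi-Maupertuis-type transformation.
```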
|
[
{
"version": "v1",
"created": "Fri, 16 Apr 2021 06:26:32 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Aug 2021 00:50:06 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Oct 2021 00:48:03 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Apr 2022 01:48:39 GMT"
},
{
"version": "v5",
"created": "Sat, 31 Dec 2022 01:42:26 GMT"
},
{
"version": "v6",
"created": "Wed, 9 Aug 2023 05:28:10 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Wada",
"Tatsuaki",
""
],
[
"Scarfone",
"Antonio M.",
""
],
[
"Matsuzoe",
"Hiroshi",
""
]
] |
new_dataset
| 0.959575 |
2201.04434
|
Axel Loewe
|
Felix Bach and Jochen Klar and Axel Loewe and Jorge S\'anchez and
Gunnar Seemann and Yung-Lin Huang and Robert Ulrich
|
The openCARP CDE -- Concept for and implementation of a sustainable
collaborative development environment for research software
| null | null |
10.17192/bfdm.2022.1.8368
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This work describes the setup of an advanced technical infrastructure for
collaborative software development (CDE) in large, distributed projects based
on GitLab. We present its customization and extension, additional features and
processes like code review, continuous automated testing, DevOps practices, and
sustainable life-cycle management including long-term preservation and citable
publishing of software releases along with relevant metadata. The environment
is currently used for developing the open cardiac simulation software openCARP
and an evaluation showcases its capability and utility for collaboration and
coordination of sizeable heterogeneous teams. As such, it could be a suitable
and sustainable infrastructure solution for a wide range of research software
projects.
|
[
{
"version": "v1",
"created": "Wed, 12 Jan 2022 12:06:01 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Bach",
"Felix",
""
],
[
"Klar",
"Jochen",
""
],
[
"Loewe",
"Axel",
""
],
[
"Sánchez",
"Jorge",
""
],
[
"Seemann",
"Gunnar",
""
],
[
"Huang",
"Yung-Lin",
""
],
[
"Ulrich",
"Robert",
""
]
] |
new_dataset
| 0.999598 |
2206.07636
|
Andrea Raffo
|
Chiara Romanengo, Andrea Raffo, Silvia Biasotti, Bianca Falcidieno,
Vlassis Fotis, Ioannis Romanelis, Eleftheria Psatha, Konstantinos Moustakas,
Ivan Sipiran, Quang-Thuc Nguyen, Chi-Bien Chu, Khoi-Nguyen Nguyen-Ngoc,
Dinh-Khoi Vo, Tuan-An To, Nham-Tan Nguyen, Nhat-Quynh Le-Pham, Hai-Dang
Nguyen, Minh-Triet Tran, Yifan Qie, Nabil Anwer
|
SHREC 2022: Fitting and recognition of simple geometric primitives on
point clouds
| null |
Computers & Graphics 107 (2022) 32-49
|
10.1016/j.cag.2022.07.004
| null |
cs.GR cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the methods that have participated in the SHREC 2022
track on the fitting and recognition of simple geometric primitives on point
clouds. As simple primitives we mean the classical surface primitives derived
from constructive solid geometry, i.e., planes, spheres, cylinders, cones and
tori. The aim of the track is to evaluate the quality of automatic algorithms
for fitting and recognising geometric primitives on point clouds. Specifically,
the goal is to identify, for each point cloud, its primitive type and some
geometric descriptors. For this purpose, we created a synthetic dataset,
divided into a training set and a test set, containing segments perturbed with
different kinds of point cloud artifacts. Among the six participants in this
track, two are based on direct methods, while four are either fully based on
deep learning or combine direct and neural approaches. The performance of the
methods is evaluated using various classification and approximation measures.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 16:27:01 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 17:21:58 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Romanengo",
"Chiara",
""
],
[
"Raffo",
"Andrea",
""
],
[
"Biasotti",
"Silvia",
""
],
[
"Falcidieno",
"Bianca",
""
],
[
"Fotis",
"Vlassis",
""
],
[
"Romanelis",
"Ioannis",
""
],
[
"Psatha",
"Eleftheria",
""
],
[
"Moustakas",
"Konstantinos",
""
],
[
"Sipiran",
"Ivan",
""
],
[
"Nguyen",
"Quang-Thuc",
""
],
[
"Chu",
"Chi-Bien",
""
],
[
"Nguyen-Ngoc",
"Khoi-Nguyen",
""
],
[
"Vo",
"Dinh-Khoi",
""
],
[
"To",
"Tuan-An",
""
],
[
"Nguyen",
"Nham-Tan",
""
],
[
"Le-Pham",
"Nhat-Quynh",
""
],
[
"Nguyen",
"Hai-Dang",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Qie",
"Yifan",
""
],
[
"Anwer",
"Nabil",
""
]
] |
new_dataset
| 0.995697 |
2209.00128
|
Yi-Ting Shen
|
Yi-Ting Shen, Yaesop Lee, Heesung Kwon, Damon M. Conover, Shuvra S.
Bhattacharyya, Nikolas Vale, Joshua D. Gray, G. Jeremy Leong, Kenneth
Evensen, Frank Skirlo
|
Archangel: A Hybrid UAV-based Human Detection Benchmark with Position
and Pose Metadata
|
IEEE Access
|
IEEE Access, vol. 11, pp. 80958-80972, 2023
|
10.1109/ACCESS.2023.3299235
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Learning to detect objects, such as humans, in imagery captured by an
unmanned aerial vehicle (UAV) usually suffers from tremendous variations caused
by the UAV's position towards the objects. In addition, existing UAV-based
benchmark datasets do not provide adequate dataset metadata, which is essential
for precise model diagnosis and learning features invariant to those
variations. In this paper, we introduce Archangel, the first UAV-based object
detection dataset composed of real and synthetic subsets captured with similar
imaging conditions and UAV position and object pose metadata. A series of
experiments are carefully designed with a state-of-the-art object detector to
demonstrate the benefits of leveraging the metadata during model evaluation.
Moreover, several crucial insights involving both real and synthetic data
during model optimization are presented. In the end, we discuss the advantages,
limitations, and future directions regarding Archangel to highlight its
distinct value for the broader machine learning community.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 21:45:16 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Jun 2023 21:15:47 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 18:48:21 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Shen",
"Yi-Ting",
""
],
[
"Lee",
"Yaesop",
""
],
[
"Kwon",
"Heesung",
""
],
[
"Conover",
"Damon M.",
""
],
[
"Bhattacharyya",
"Shuvra S.",
""
],
[
"Vale",
"Nikolas",
""
],
[
"Gray",
"Joshua D.",
""
],
[
"Leong",
"G. Jeremy",
""
],
[
"Evensen",
"Kenneth",
""
],
[
"Skirlo",
"Frank",
""
]
] |
new_dataset
| 0.999298 |
2209.02307
|
Luca Geatti
|
Alessandro Cimatti, Luca Geatti, Nicola Gigante, Angelo Montanari,
Stefano Tonetta
|
A first-order logic characterization of safety and co-safety languages
| null | null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Linear Temporal Logic (LTL) is one of the most popular temporal logics, and it
comes into play in a variety of branches of computer science. Among the various
reasons for its widespread use are its strong foundational properties: LTL
is equivalent to counter-free omega-automata, to star-free omega-regular
expressions, and (by Kamp's theorem) to the First-Order Theory of Linear Orders
(FO-TLO). Safety and co-safety languages, where a finite prefix suffices to
establish whether a word does not belong or belongs to the language,
respectively, play a crucial role in lowering the complexity of problems like
model checking and reactive synthesis for LTL. SafetyLTL (resp., coSafetyLTL)
is a fragment of LTL where only universal (resp., existential) temporal
modalities are allowed, that recognises safety (resp., co-safety) languages
only. The main contribution of this paper is the introduction of a fragment of
FO-TLO, called SafetyFO, and of its dual coSafetyFO, which are expressively
complete with respect to the LTL-definable safety and co-safety languages. We
prove that they exactly characterize SafetyLTL and coSafetyLTL, respectively, a
result that joins Kamp's theorem, and provides a clearer view of the
characterization of (fragments of) LTL in terms of first-order languages. In
addition, it gives a direct, compact, and self-contained proof that any safety
language definable in LTL is definable in SafetyLTL as well. As a by-product,
we obtain some interesting results on the expressive power of the weak tomorrow
operator of SafetyLTL, interpreted over finite and infinite words. Moreover, we
prove that, when interpreted over finite words, SafetyLTL (resp. coSafetyLTL)
devoid of the tomorrow (resp., weak tomorrow) operator captures the safety
(resp., co-safety) fragment of LTL over finite words.
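For orientation, a standard presentation of the SafetyLTL fragment (negation normal form with only universal temporal modalities) is sketched below; the paper's exact grammar, in particular its treatment of the tomorrow and weak tomorrow operators, may differ.

```latex
% A standard presentation of the SafetyLTL fragment (negation normal form, only
% universal temporal modalities); the paper's exact grammar, in particular the role
% of the weak tomorrow operator \widetilde{X}, may differ from this sketch.
\[
  \varphi \;::=\; p \;\mid\; \neg p \;\mid\; \varphi \lor \varphi \;\mid\;
  \varphi \land \varphi \;\mid\; X\,\varphi \;\mid\; \widetilde{X}\,\varphi \;\mid\;
  G\,\varphi \;\mid\; \varphi\, R\, \varphi
\]
% coSafetyLTL is the dual fragment, obtained by allowing only the existential
% modalities (tomorrow, eventually, until) instead.
```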
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 09:00:38 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 17:50:53 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 13:59:22 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Jul 2023 18:06:27 GMT"
},
{
"version": "v5",
"created": "Wed, 9 Aug 2023 07:59:56 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Cimatti",
"Alessandro",
""
],
[
"Geatti",
"Luca",
""
],
[
"Gigante",
"Nicola",
""
],
[
"Montanari",
"Angelo",
""
],
[
"Tonetta",
"Stefano",
""
]
] |
new_dataset
| 0.99505 |
2209.05996
|
Yueru Luo
|
Yueru Luo, Xu Yan, Chaoda Zheng, Chao Zheng, Shuqi Mei, Tang Kun,
Shuguang Cui, Zhen Li
|
M$^2$-3DLaneNet: Exploring Multi-Modal 3D Lane Detection
|
update
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating accurate lane lines in 3D space remains challenging due to their
sparse and slim nature. Previous works mainly focused on using images for 3D
lane detection, leading to inherent projection error and loss of geometry
information. To address these issues, we explore the potential of leveraging
LiDAR for 3D lane detection, either as a standalone method or in combination
with existing monocular approaches. In this paper, we propose M$^2$-3DLaneNet
to integrate complementary information from multiple sensors. Specifically,
M$^2$-3DLaneNet lifts 2D features into 3D space by incorporating geometry
information from LiDAR data through depth completion. Subsequently, the lifted
2D features are further enhanced with LiDAR features through cross-modality BEV
fusion. Extensive experiments on the large-scale OpenLane dataset demonstrate
the effectiveness of M$^2$-3DLaneNet, regardless of the range (75m or 100m).
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 13:45:18 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 04:19:01 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 20:52:26 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Luo",
"Yueru",
""
],
[
"Yan",
"Xu",
""
],
[
"Zheng",
"Chaoda",
""
],
[
"Zheng",
"Chao",
""
],
[
"Mei",
"Shuqi",
""
],
[
"Kun",
"Tang",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Li",
"Zhen",
""
]
] |
new_dataset
| 0.99421 |
2211.06108
|
Tao Huang
|
Yanlong Yang, Jianan Liu, Tao Huang, Qing-Long Han, Gang Ma and Bing
Zhu
|
RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object
Detection System
|
12 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In autonomous driving systems, LiDAR and radar play important roles in the
perception of the surrounding environment. LiDAR provides accurate 3D spatial
sensing information but cannot work in adverse weather like fog. On the other
hand, the radar signal can be diffracted when encountering raindrops or mist
particles thanks to its wavelength, but it suffers from large noise. Recent
state-of-the-art works reveal that fusion of radar and LiDAR can lead to robust
detection in adverse weather. The existing works adopt convolutional neural
network architecture to extract features from each sensor data stream, then
align and aggregate the two branch features to predict object detection
results. However, these methods achieve low accuracy in bounding box estimation
due to the simple design of their label assignment and fusion strategies. In this
paper, we propose a bird's-eye view fusion learning-based anchor box-free
object detection system, which fuses the feature derived from the radar
range-azimuth heatmap and the LiDAR point cloud to estimate the possible
objects. Different label assignment strategies have been designed to facilitate
the consistency between the classification of foreground or background anchor
points and the corresponding bounding box regressions. In addition, the
performance of the proposed object detector is further enhanced by employing a
novel interactive transformer module. The superior performance of the methods
proposed in this paper has been demonstrated using the recently published
Oxford Radar RobotCar dataset. Our system's average precision significantly
outperforms the best state-of-the-art method by 13.1% and 19.0% at IoU of 0.8
under 'Clear+Foggy' training conditions for 'Clear' and 'Foggy' testing,
respectively.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 10:24:42 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2022 09:40:57 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 05:36:43 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Yang",
"Yanlong",
""
],
[
"Liu",
"Jianan",
""
],
[
"Huang",
"Tao",
""
],
[
"Han",
"Qing-Long",
""
],
[
"Ma",
"Gang",
""
],
[
"Zhu",
"Bing",
""
]
] |
new_dataset
| 0.998961 |
2212.02934
|
Mathieu Guillame-Bert
|
Mathieu Guillame-Bert, Sebastian Bruch, Richard Stotz, Jan Pfeifer
|
Yggdrasil Decision Forests: A Fast and Extensible Decision Forests
Library
| null | null |
10.1145/3580305.3599933
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Yggdrasil Decision Forests is a library for the training, serving and
interpretation of decision forest models, targeted both at research and
production work, implemented in C++, and available in C++, command line
interface, Python (under the name TensorFlow Decision Forests), JavaScript, Go,
and Google Sheets (under the name Simple ML for Sheets). The library has been
developed organically since 2018 following a set of four design principles
applicable to machine learning libraries and frameworks: simplicity of use,
safety of use, modularity and high-level abstraction, and integration with
other machine learning libraries. In this paper, we describe those principles
in detail and present how they have been used to guide the design of the
library. We then showcase the use of our library on a set of classical machine
learning problems. Finally, we report a benchmark comparing our library to
related solutions.
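A minimal usage sketch of the library's Python interface (TensorFlow Decision Forests) is given below; the calls follow the publicly documented API, but the details should be treated as an approximation rather than as authoritative.

```python
# Minimal usage sketch of the Python interface (TensorFlow Decision Forests);
# follows the public API, but treat the details as an approximation.
import pandas as pd
import tensorflow_decision_forests as tfdf

# A tiny toy table with a binary label column (hypothetical data).
df = pd.DataFrame({
    "age":   [23, 45, 31, 52, 36, 29],
    "speed": [1.2, 3.4, 2.1, 4.0, 2.8, 1.9],
    "label": [0, 1, 0, 1, 1, 0],
})
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")

model = tfdf.keras.GradientBoostedTreesModel()   # or RandomForestModel()
model.fit(train_ds)
model.summary()                                  # feature importances, tree statistics, etc.
```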
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 12:44:27 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 11:35:13 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Guillame-Bert",
"Mathieu",
""
],
[
"Bruch",
"Sebastian",
""
],
[
"Stotz",
"Richard",
""
],
[
"Pfeifer",
"Jan",
""
]
] |
new_dataset
| 0.999031 |
2212.14199
|
Qiayuan Liao
|
Qiayuan Liao, Zhongyu Li, Akshay Thirugnanam, Jun Zeng, Koushil
Sreenath
|
Walking in Narrow Spaces: Safety-critical Locomotion Control for
Quadrupedal Robots with Duality-based Optimization
|
Accepted to International Conference on Intelligent Robots and
Systems (IROS) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a safety-critical locomotion control framework for
quadrupedal robots. Our goal is to enable quadrupedal robots to safely navigate
in cluttered environments. To tackle this, we introduce exponential Discrete
Control Barrier Functions (exponential DCBFs) with duality-based obstacle
avoidance constraints into a Nonlinear Model Predictive Control (NMPC) with
Whole-Body Control (WBC) framework for quadrupedal locomotion control. This
enables us to use polytopes to describe the shapes of the robot and obstacles
for collision avoidance while doing locomotion control of quadrupedal robots.
Compared to most prior work, especially using CBFs, that utilize spherical and
conservative approximation for obstacle avoidance, this work demonstrates a
quadrupedal robot autonomously and safely navigating through very tight spaces
in the real world. (Our open-source code is available at
github.com/HybridRobotics/quadruped_nmpc_dcbf_duality, and the video is
available at youtu.be/p1gSQjwXm1Q.)
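For context, the discrete-time (exponential) control barrier function condition that such a controller enforces at each step of the NMPC horizon is typically written as below; the duality-based reformulation of the polytope-to-polytope distance used as the safety function is specific to the paper.

```latex
% The standard discrete-time (exponential) CBF condition the abstract refers to,
% for a safety function h(x) >= 0 (e.g. the polytope-to-polytope distance minus a
% safety margin) and a decay rate 0 < \gamma <= 1; the duality-based reformulation
% of the distance itself is in the paper.
\begin{equation}
  h\bigl(x_{k+1}\bigr) \;\ge\; (1-\gamma)\, h\bigl(x_k\bigr),
  \qquad 0 < \gamma \le 1,
\end{equation}
% imposed as a constraint at every step of the NMPC horizon, so that the safe set
% \{x : h(x) \ge 0\} remains forward invariant.
```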
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 07:18:59 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Aug 2023 07:08:24 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 04:43:01 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Liao",
"Qiayuan",
""
],
[
"Li",
"Zhongyu",
""
],
[
"Thirugnanam",
"Akshay",
""
],
[
"Zeng",
"Jun",
""
],
[
"Sreenath",
"Koushil",
""
]
] |
new_dataset
| 0.973775 |
2302.07251
|
Suthee Ruangwises
|
Suthee Ruangwises
|
Physical Zero-Knowledge Proof for Ball Sort Puzzle
|
This paper has appeared at CiE 2023. arXiv admin note: text overlap
with arXiv:2302.01235
| null |
10.1007/978-3-031-36978-0_20
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ball sort puzzle is a popular logic puzzle consisting of several bins
containing balls of multiple colors. Each bin works like a stack; a ball has to
follow the last-in first-out order. The player has to sort the balls by color
such that each bin contains only balls of a single color. In this paper, we
propose a physical zero-knowledge proof protocol for the ball sort puzzle using
a deck of playing cards, which enables a prover to physically show that he/she
knows a solution with $t$ moves of the ball sort puzzle without revealing it.
Our protocol is the first zero-knowledge proof protocol for an interactive
puzzle involving moving objects.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 18:48:29 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 10:10:18 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 08:47:14 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Ruangwises",
"Suthee",
""
]
] |
new_dataset
| 0.999226 |
2303.01397
|
Mahdi Hejrati
|
Mahdi Hejrati, Jouni Mattila
|
Nonlinear Subsystem-based Adaptive Impedance Control of Physical
Human-Robot-Environment Interaction in Contact-rich Tasks
|
This work has been accepted for publication by the IEEE Robotics and
Automation Letters
| null |
10.1109/LRA.2023.3302616
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Haptic upper limb exoskeletons are robots that assist human operators during
task execution while having the ability to render virtual or remote
environments. Therefore, the stability of such robots in physical
human-robot-environment interaction must be guaranteed, in addition to
performing well during task execution. Having a wide range of Z-width, which
shows the region of passively renderable impedance by a haptic display, is also
important to render a wide range of virtual environments. To address these
issues, in this study, a subsystem-based adaptive impedance control is designed
for stable human-robot-environment interaction of a 7-degree-of-freedom
haptic exoskeleton. The presented control decomposes the entire system into
subsystems and designs the controller at the subsystem level. The stability of
the controller in the presence of contact with the virtual environment and
human arm force is proved by employing the virtual stability concept.
Additionally, the Z-width of the 7-DoF haptic exoskeleton is drawn using
experimental data and improved by using a varying virtual mass element for the
virtual environment. Finally, experimental results are provided to demonstrate
the performance of the proposed controller in accomplishing the
predefined task.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 16:30:06 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 14:26:54 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Hejrati",
"Mahdi",
""
],
[
"Mattila",
"Jouni",
""
]
] |
new_dataset
| 0.972946 |
2303.10437
|
Yuhang Yang
|
Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, Zheng-Jun
Zha
|
Grounding 3D Object Affordance from 2D Interactions in Images
|
ICCV2023, camera-ready version
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grounding 3D object affordance seeks to locate objects' ''action
possibilities'' regions in the 3D space, which serves as a link between
perception and operation for embodied agents. Existing studies primarily focus
on connecting visual affordances with geometry structures, e.g. relying on
annotations to declare interactive regions of interest on the object and
establishing a mapping between the regions and affordances. However, the
essence of learning object affordance is to understand how to use it, and the
manner that detaches interactions is limited in generalization. Normally,
humans possess the ability to perceive object affordances in the physical world
through demonstration images or videos. Motivated by this, we introduce a novel
task setting: grounding 3D object affordance from 2D interactions in images,
which faces the challenge of anticipating affordance through interactions of
different sources. To address this problem, we devise a novel
Interaction-driven 3D Affordance Grounding Network (IAG), which aligns the
region feature of objects from different sources and models the interactive
contexts for 3D object affordance grounding. Besides, we collect a Point-Image
Affordance Dataset (PIAD) to support the proposed task. Comprehensive
experiments on PIAD demonstrate the reliability of the proposed task and the
superiority of our method. The project is available at
https://github.com/yyvhang/IAGNet.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 15:37:35 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 07:11:11 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Yang",
"Yuhang",
""
],
[
"Zhai",
"Wei",
""
],
[
"Luo",
"Hongchen",
""
],
[
"Cao",
"Yang",
""
],
[
"Luo",
"Jiebo",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] |
new_dataset
| 0.994455 |
2303.16839
|
Weicheng Kuo
|
Weicheng Kuo, AJ Piergiovanni, Dahun Kim, Xiyang Luo, Ben Caine, Wei
Li, Abhijit Ogale, Luowei Zhou, Andrew Dai, Zhifeng Chen, Claire Cui, Anelia
Angelova
|
MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks
|
Published in Transactions on Machine Learning Research (
https://jmlr.org/tmlr/ ). 18 pages, 4 figures
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The development of language models has moved from encoder-decoder to
decoder-only designs. In addition, we observe that the two most popular
multimodal tasks, the generative and contrastive tasks, are nontrivial to
accommodate in one architecture, and further need adaptations for downstream
tasks. We propose a novel paradigm of training with a decoder-only model for
multimodal tasks, which is surprisingly effective in jointly learning of these
disparate vision-language tasks. This is done with a simple model, called
MaMMUT. It consists of a single vision encoder and a text decoder, and is able
to accommodate contrastive and generative learning by a novel two-pass approach
on the text decoder. We demonstrate that joint learning of these diverse
objectives is simple, effective, and maximizes the weight-sharing of the model
across these tasks. Furthermore, the same architecture enables straightforward
extensions to open-vocabulary object detection and video-language tasks. The
model tackles a diverse range of tasks, while being modest in capacity. Our
model achieves the state of the art on image-text and text-image retrieval,
video question answering and open-vocabulary detection tasks, outperforming
much larger and more extensively trained foundational models. It shows very
competitive results on VQA and Video Captioning, especially considering its
capacity. Ablations confirm the flexibility and advantages of our approach.
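The sketch below illustrates the two-pass idea described above with a single shared text decoder: one bidirectional pass produces a text embedding for the contrastive objective, and one causal pass with cross-attention to image features serves the generative objective. It is an editorial approximation, not the actual architecture.

```python
# Editorial sketch of the "two-pass" decoder idea (not the actual model): the same
# text decoder runs once with bidirectional self-attention for the contrastive loss,
# and once with a causal mask plus cross-attention to image features for generation.
# The zero "memory" in the first pass is a simplification.
import torch
import torch.nn as nn

dim, heads, seq_len = 256, 8, 16
layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=2)    # weights shared by both passes

tokens = torch.randn(4, seq_len, dim)                   # embedded text tokens (batch=4)
image_feats = torch.randn(4, 49, dim)                   # vision-encoder output tokens

# Pass 1 -- contrastive: bidirectional self-attention, no image cross-attention.
text_emb = decoder(tgt=tokens, memory=torch.zeros(4, 1, dim)).mean(dim=1)

# Pass 2 -- generative: causal self-attention plus cross-attention to image features.
causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
gen_states = decoder(tgt=tokens, memory=image_feats, tgt_mask=causal)

print(text_emb.shape, gen_states.shape)  # torch.Size([4, 256]) torch.Size([4, 16, 256])
```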
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 16:42:30 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 05:44:47 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 05:39:34 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Kuo",
"Weicheng",
""
],
[
"Piergiovanni",
"AJ",
""
],
[
"Kim",
"Dahun",
""
],
[
"Luo",
"Xiyang",
""
],
[
"Caine",
"Ben",
""
],
[
"Li",
"Wei",
""
],
[
"Ogale",
"Abhijit",
""
],
[
"Zhou",
"Luowei",
""
],
[
"Dai",
"Andrew",
""
],
[
"Chen",
"Zhifeng",
""
],
[
"Cui",
"Claire",
""
],
[
"Angelova",
"Anelia",
""
]
] |
new_dataset
| 0.998895 |
2304.00409
|
Yizheng Chen
|
Yizheng Chen, Zhoujie Ding, Lamya Alowain, Xinyun Chen, David Wagner
|
DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based
Vulnerability Detection
|
Published at RAID 2023
| null | null | null |
cs.CR cs.AI cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
We propose and release a new vulnerable source code dataset. We curate the
dataset by crawling security issue websites, extracting vulnerability-fixing
commits and source codes from the corresponding projects. Our new dataset
contains 18,945 vulnerable functions spanning 150 CWEs and 330,492
non-vulnerable functions extracted from 7,514 commits. Our dataset covers 295
more projects than all previous datasets combined.
Combining our new dataset with previous datasets, we present an analysis of
the challenges and promising research directions of using deep learning for
detecting software vulnerabilities. We study 11 model architectures belonging
to 4 families. Our results show that deep learning is still not ready for
vulnerability detection, due to high false positive rate, low F1 score, and
difficulty of detecting hard CWEs. In particular, we demonstrate an important
generalization challenge for the deployment of deep learning-based models. We
show that increasing the volume of training data may not further improve the
performance of deep learning models for vulnerability detection, but might be
useful to improve the generalization ability to unseen projects.
We also identify hopeful future research directions. We demonstrate that
large language models (LLMs) are a promising research direction for ML-based
vulnerability detection, outperforming Graph Neural Networks (GNNs) with
code-structure features in our experiments. Moreover, developing source code
specific pre-training objectives is a promising research direction to improve
the vulnerability detection performance.
|
[
{
"version": "v1",
"created": "Sat, 1 Apr 2023 23:29:14 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 01:21:50 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Chen",
"Yizheng",
""
],
[
"Ding",
"Zhoujie",
""
],
[
"Alowain",
"Lamya",
""
],
[
"Chen",
"Xinyun",
""
],
[
"Wagner",
"David",
""
]
] |
new_dataset
| 0.99992 |
2304.00959
|
Giovanni Cioffi
|
Jiaxu Xing, Giovanni Cioffi, Javier Hidalgo-Carri\'o, Davide
Scaramuzza
|
Autonomous Power Line Inspection with Drones via Perception-Aware MPC
| null |
IEEE/RSJ International Conference on Intelligent Robots (IROS),
Detroit, 2023
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drones have the potential to revolutionize power line inspection by
increasing productivity, reducing inspection time, improving data quality, and
eliminating the risks for human operators. Current state-of-the-art systems for
power line inspection have two shortcomings: (i) control is decoupled from
perception and needs accurate information about the location of the power lines
and masts; (ii) obstacle avoidance is decoupled from the power line tracking,
which results in poor tracking in the vicinity of the power masts, and,
consequently, in decreased data quality for visual inspection. In this work, we
propose a model predictive controller (MPC) that overcomes these limitations by
tightly coupling perception and action. Our controller generates commands that
maximize the visibility of the power lines while, at the same time, safely
avoiding the power masts. For power line detection, we propose a lightweight
learning-based detector that is trained only on synthetic data and is able to
transfer zero-shot to real-world power line images. We validate our system in
simulation and real-world experiments on a mock-up power line infrastructure.
We release our code and datasets to the public.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 13:26:20 GMT"
},
{
"version": "v2",
"created": "Sun, 14 May 2023 20:16:31 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 10:14:23 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Xing",
"Jiaxu",
""
],
[
"Cioffi",
"Giovanni",
""
],
[
"Hidalgo-Carrió",
"Javier",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.976363 |
2304.04963
|
Xuechao Zou
|
Huanhuan Li, Xuechao Zou, Yu-an Zhang, Jiangcai Zhaba, Guomei Li,
Lamao Yongga
|
PlantDet: A benchmark for Plant Detection in the Three-Rivers-Source
Region
|
Accepted by ICANN 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Three-River-Source region is a highly significant natural reserve in
China that harbors a plethora of botanical resources. To meet the practical
requirements of botanical research and intelligent plant management, we
construct a dataset for Plant detection in the Three-River-Source region
(PTRS). It comprises 21 types, 6965 high-resolution images of 2160*3840 pixels,
captured by diverse sensors and platforms, and featuring objects of varying
shapes and sizes. The PTRS presents us with challenges such as dense occlusion,
varying leaf resolutions, and high feature similarity among plants, prompting
us to develop a novel object detection network named PlantDet. This network
employs a window-based efficient self-attention module (ST block) to generate
robust feature representation at multiple scales, improving the detection
efficiency for small and densely-occluded objects. Our experimental results
validate the efficacy of our proposed plant detection benchmark, with a
precision of 88.1%, a mean average precision (mAP) of 77.6%, and a higher
recall compared to the baseline. Additionally, our method effectively overcomes
the issue of missing small objects.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 04:18:56 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 14:49:09 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 09:15:52 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Li",
"Huanhuan",
""
],
[
"Zou",
"Xuechao",
""
],
[
"Zhang",
"Yu-an",
""
],
[
"Zhaba",
"Jiangcai",
""
],
[
"Li",
"Guomei",
""
],
[
"Yongga",
"Lamao",
""
]
] |
new_dataset
| 0.999883 |
2304.05731
|
Trung Nghia Le
|
Trung-Nghia Le, Tam V. Nguyen, Minh-Quan Le, Trong-Thuan Nguyen,
Viet-Tham Huynh, Trong-Le Do, Khanh-Duy Le, Mai-Khiem Tran, Nhat Hoang-Xuan,
Thang-Long Nguyen-Ho, Vinh-Tiep Nguyen, Nhat-Quynh Le-Pham, Huu-Phuc Pham,
Trong-Vu Hoang, Quang-Binh Nguyen, Trong-Hieu Nguyen-Mau, Tuan-Luc Huynh,
Thanh-Danh Le, Ngoc-Linh Nguyen-Ha, Tuong-Vy Truong-Thuy, Truong Hoai Phong,
Tuong-Nghiem Diep, Khanh-Duy Ho, Xuan-Hieu Nguyen, Thien-Phuc Tran, Tuan-Anh
Yang, Kim-Phat Tran, Nhu-Vinh Hoang, Minh-Quang Nguyen, Hoai-Danh Vo,
Minh-Hoa Doan, Hai-Dang Nguyen, Akihiro Sugimoto, Minh-Triet Tran
|
SketchANIMAR: Sketch-based 3D Animal Fine-Grained Retrieval
|
Accepted to Computers & Graphics (3DOR 2023, Journal track)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The retrieval of 3D objects has gained significant importance in recent years
due to its broad range of applications in computer vision, computer graphics,
virtual reality, and augmented reality. However, the retrieval of 3D objects
presents significant challenges due to the intricate nature of 3D models, which
can vary in shape, size, and texture, and have numerous polygons and vertices.
To this end, we introduce a novel SHREC challenge track that focuses on
retrieving relevant 3D animal models from a dataset using sketch queries and
expedites accessing 3D models through available sketches. Furthermore, a new
dataset named ANIMAR was constructed in this study, comprising a collection of
711 unique 3D animal models and 140 corresponding sketch queries. Our contest
requires participants to retrieve 3D models based on complex and detailed
sketches. We receive satisfactory results from eight teams and 204 runs.
Although further improvement is necessary, the proposed task has the potential
to incentivize additional research in the domain of 3D object retrieval,
potentially yielding benefits for a wide range of applications. We also provide
insights into potential areas of future research, such as improving techniques
for feature extraction and matching and creating more diverse datasets to
evaluate retrieval performance. https://aichallenge.hcmus.edu.vn/sketchanimar
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 09:40:38 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 17:08:11 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Le",
"Trung-Nghia",
""
],
[
"Nguyen",
"Tam V.",
""
],
[
"Le",
"Minh-Quan",
""
],
[
"Nguyen",
"Trong-Thuan",
""
],
[
"Huynh",
"Viet-Tham",
""
],
[
"Do",
"Trong-Le",
""
],
[
"Le",
"Khanh-Duy",
""
],
[
"Tran",
"Mai-Khiem",
""
],
[
"Hoang-Xuan",
"Nhat",
""
],
[
"Nguyen-Ho",
"Thang-Long",
""
],
[
"Nguyen",
"Vinh-Tiep",
""
],
[
"Le-Pham",
"Nhat-Quynh",
""
],
[
"Pham",
"Huu-Phuc",
""
],
[
"Hoang",
"Trong-Vu",
""
],
[
"Nguyen",
"Quang-Binh",
""
],
[
"Nguyen-Mau",
"Trong-Hieu",
""
],
[
"Huynh",
"Tuan-Luc",
""
],
[
"Le",
"Thanh-Danh",
""
],
[
"Nguyen-Ha",
"Ngoc-Linh",
""
],
[
"Truong-Thuy",
"Tuong-Vy",
""
],
[
"Phong",
"Truong Hoai",
""
],
[
"Diep",
"Tuong-Nghiem",
""
],
[
"Ho",
"Khanh-Duy",
""
],
[
"Nguyen",
"Xuan-Hieu",
""
],
[
"Tran",
"Thien-Phuc",
""
],
[
"Yang",
"Tuan-Anh",
""
],
[
"Tran",
"Kim-Phat",
""
],
[
"Hoang",
"Nhu-Vinh",
""
],
[
"Nguyen",
"Minh-Quang",
""
],
[
"Vo",
"Hoai-Danh",
""
],
[
"Doan",
"Minh-Hoa",
""
],
[
"Nguyen",
"Hai-Dang",
""
],
[
"Sugimoto",
"Akihiro",
""
],
[
"Tran",
"Minh-Triet",
""
]
] |
new_dataset
| 0.999899 |
2304.06053
|
Trung Nghia Le
|
Trung-Nghia Le, Tam V. Nguyen, Minh-Quan Le, Trong-Thuan Nguyen,
Viet-Tham Huynh, Trong-Le Do, Khanh-Duy Le, Mai-Khiem Tran, Nhat Hoang-Xuan,
Thang-Long Nguyen-Ho, Vinh-Tiep Nguyen, Tuong-Nghiem Diep, Khanh-Duy Ho,
Xuan-Hieu Nguyen, Thien-Phuc Tran, Tuan-Anh Yang, Kim-Phat Tran, Nhu-Vinh
Hoang, Minh-Quang Nguyen, E-Ro Nguyen, Minh-Khoi Nguyen-Nhat, Tuan-An To,
Trung-Truc Huynh-Le, Nham-Tan Nguyen, Hoang-Chau Luong, Truong Hoai Phong,
Nhat-Quynh Le-Pham, Huu-Phuc Pham, Trong-Vu Hoang, Quang-Binh Nguyen,
Hai-Dang Nguyen, Akihiro Sugimoto, Minh-Triet Tran
|
TextANIMAR: Text-based 3D Animal Fine-Grained Retrieval
|
Accepted to Computers and Graphics (3DOR, Journal Track)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D object retrieval is an important yet challenging task that has drawn more
and more attention in recent years. While existing approaches have made strides
in addressing this issue, they are often limited to restricted settings such as
image and sketch queries, which are often unfriendly interactions for common
users. In order to overcome these limitations, this paper presents a novel
SHREC challenge track focusing on text-based fine-grained retrieval of 3D
animal models. Unlike previous SHREC challenge tracks, the proposed task is
considerably more challenging, requiring participants to develop innovative
approaches to tackle the problem of text-based retrieval. Despite the increased
difficulty, we believe this task can potentially drive useful applications in
practice and facilitate more intuitive interactions with 3D objects. Five
groups participated in our competition, submitting a total of 114 runs. While
the results obtained in our competition are satisfactory, we note that the
challenges presented by this task are far from fully solved. As such, we
provide insights into potential areas for future research and improvements. We
believe we can help push the boundaries of 3D object retrieval and facilitate
more user-friendly interactions via vision-language technologies.
https://aichallenge.hcmus.edu.vn/textanimar
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 10:19:21 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 16:57:59 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Le",
"Trung-Nghia",
""
],
[
"Nguyen",
"Tam V.",
""
],
[
"Le",
"Minh-Quan",
""
],
[
"Nguyen",
"Trong-Thuan",
""
],
[
"Huynh",
"Viet-Tham",
""
],
[
"Do",
"Trong-Le",
""
],
[
"Le",
"Khanh-Duy",
""
],
[
"Tran",
"Mai-Khiem",
""
],
[
"Hoang-Xuan",
"Nhat",
""
],
[
"Nguyen-Ho",
"Thang-Long",
""
],
[
"Nguyen",
"Vinh-Tiep",
""
],
[
"Diep",
"Tuong-Nghiem",
""
],
[
"Ho",
"Khanh-Duy",
""
],
[
"Nguyen",
"Xuan-Hieu",
""
],
[
"Tran",
"Thien-Phuc",
""
],
[
"Yang",
"Tuan-Anh",
""
],
[
"Tran",
"Kim-Phat",
""
],
[
"Hoang",
"Nhu-Vinh",
""
],
[
"Nguyen",
"Minh-Quang",
""
],
[
"Nguyen",
"E-Ro",
""
],
[
"Nguyen-Nhat",
"Minh-Khoi",
""
],
[
"To",
"Tuan-An",
""
],
[
"Huynh-Le",
"Trung-Truc",
""
],
[
"Nguyen",
"Nham-Tan",
""
],
[
"Luong",
"Hoang-Chau",
""
],
[
"Phong",
"Truong Hoai",
""
],
[
"Le-Pham",
"Nhat-Quynh",
""
],
[
"Pham",
"Huu-Phuc",
""
],
[
"Hoang",
"Trong-Vu",
""
],
[
"Nguyen",
"Quang-Binh",
""
],
[
"Nguyen",
"Hai-Dang",
""
],
[
"Sugimoto",
"Akihiro",
""
],
[
"Tran",
"Minh-Triet",
""
]
] |
new_dataset
| 0.999578 |
2304.07989
|
Ran Liu
|
Ran Liu, Charles Nicholas
|
IMCDCF: An Incremental Malware Detection Approach Using Hidden Markov
Models
|
Malware Technical Exchange Meeting 2021 (MTEM'21)
| null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The popularity of dynamic malware analysis has grown significantly, as it
enables analysts to observe the behavior of executing samples, thereby
enhancing malware detection and classification decisions. With the continuous
increase in new malware variants, there is an urgent need for an automated
malware analysis engine capable of accurately identifying malware samples. In
this paper, we provide a brief overview of malware detection and classification
methodologies. Moreover, we introduce a novel framework tailored for the
dynamic analysis environment, called the Incremental Malware Detection and
Classification Framework (IMDCF). IMDCF offers a comprehensive solution for
general-purpose malware detection and classification, achieving an accuracy
rate of 96.49% while maintaining a simple architecture.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 04:53:40 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 19:33:32 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Aug 2023 04:21:46 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Liu",
"Ran",
""
],
[
"Nicholas",
"Charles",
""
]
] |
new_dataset
| 0.996568 |
2304.10666
|
Daniel Oliveira Dantas
|
Artur Santos Nascimento and Welerson Augusto Lino de Jesus Melo and
Daniel Oliveira Dantas and Beatriz Trinch\~ao Andrade
|
Feature point detection in HDR images based on coefficient of variation
| null | null |
10.1007/s11042-023-16055-9
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature point (FP) detection is a fundamental step of many computer vision
tasks. However, FP detectors are usually designed for low dynamic range (LDR)
images. In scenes with extreme light conditions, LDR images present saturated
pixels, which degrade FP detection. On the other hand, high dynamic range (HDR)
images usually present no saturated pixels but FP detection algorithms do not
take advantage of all the information present in such images. FP detection
frequently relies on differential methods, which work well in LDR images.
However, in HDR images, the differential operation response in bright areas
overshadows the response in dark areas. As an alternative to standard FP
detection methods, this study proposes an FP detector based on a coefficient of
variation (CV) designed for HDR images. The CV operation adapts its response
based on the standard deviation of pixels inside a window, working well in both
dark and bright areas of HDR images. The proposed and standard detectors are
evaluated by measuring their repeatability rate (RR) and uniformity. Our
proposed detector shows better performance when compared to other standard
state-of-the-art detectors. In the uniformity metric, our proposed detector
surpasses all the other algorithms. On the other hand, when using the repeatability
rate metric, the proposed detector performs worse than the Harris for HDR and SURF
detectors.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 22:23:10 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Nascimento",
"Artur Santos",
""
],
[
"Melo",
"Welerson Augusto Lino de Jesus",
""
],
[
"Dantas",
"Daniel Oliveira",
""
],
[
"Andrade",
"Beatriz Trinchão",
""
]
] |
new_dataset
| 0.979184 |
2305.03210
|
Catherine Yeh
|
Catherine Yeh, Yida Chen, Aoyu Wu, Cynthia Chen, Fernanda Vi\'egas,
Martin Wattenberg
|
AttentionViz: A Global View of Transformer Attention
|
11 pages, 13 figures
| null | null | null |
cs.HC cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformer models are revolutionizing machine learning, but their inner
workings remain mysterious. In this work, we present a new visualization
technique designed to help researchers understand the self-attention mechanism
in transformers that allows these models to learn rich, contextual
relationships between elements of a sequence. The main idea behind our method
is to visualize a joint embedding of the query and key vectors used by
transformer models to compute attention. Unlike previous attention
visualization techniques, our approach enables the analysis of global patterns
across multiple input sequences. We create an interactive visualization tool,
AttentionViz (demo: http://attentionviz.com), based on these joint query-key
embeddings, and use it to study attention mechanisms in both language and
vision transformers. We demonstrate the utility of our approach in improving
model understanding and offering new insights about query-key interactions
through several application scenarios and expert feedback.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 23:46:49 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 06:24:55 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Yeh",
"Catherine",
""
],
[
"Chen",
"Yida",
""
],
[
"Wu",
"Aoyu",
""
],
[
"Chen",
"Cynthia",
""
],
[
"Viégas",
"Fernanda",
""
],
[
"Wattenberg",
"Martin",
""
]
] |
new_dataset
| 0.952255 |
2306.01951
|
Amit Roy
|
Amit Roy, Juan Shu, Jia Li, Carl Yang, Olivier Elshocht, Jeroen Smeets
and Pan Li
|
GAD-NR: Graph Anomaly Detection via Neighborhood Reconstruction
|
Under Review
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Anomaly Detection (GAD) is a technique used to identify abnormal nodes
within graphs, finding applications in network security, fraud detection,
social media spam detection, and various other domains. A common method for GAD
is Graph Auto-Encoders (GAEs), which encode graph data into node
representations and identify anomalies by assessing the reconstruction quality
of the graphs based on these representations. However, existing GAE models are
primarily optimized for direct link reconstruction, resulting in nodes
connected in the graph being clustered in the latent space. As a result, they
excel at detecting cluster-type structural anomalies but struggle with more
complex structural anomalies that do not conform to clusters. To address this
limitation, we propose a novel solution called GAD-NR, a new variant of GAE
that incorporates neighborhood reconstruction for graph anomaly detection.
GAD-NR aims to reconstruct the entire neighborhood of a node, encompassing the
local structure, self-attributes, and neighbor attributes, based on the
corresponding node representation. By comparing the neighborhood reconstruction
loss between anomalous nodes and normal nodes, GAD-NR can effectively detect
any anomalies. Extensive experimentation conducted on six real-world datasets
validates the effectiveness of GAD-NR, showcasing significant improvements (by
up to 30% in AUC) over state-of-the-art competitors. The source code for GAD-NR
is openly available. Importantly, the comparative analysis reveals that the
existing methods perform well only in detecting one or two types of anomalies
out of the three types studied. In contrast, GAD-NR excels at detecting all
three types of anomalies across the datasets, demonstrating its comprehensive
anomaly detection capabilities.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 23:23:34 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 01:57:18 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 16:12:13 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Aug 2023 23:26:19 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Roy",
"Amit",
""
],
[
"Shu",
"Juan",
""
],
[
"Li",
"Jia",
""
],
[
"Yang",
"Carl",
""
],
[
"Elshocht",
"Olivier",
""
],
[
"Smeets",
"Jeroen",
""
],
[
"Li",
"Pan",
""
]
] |
new_dataset
| 0.99456 |
2307.12217
|
Cong Wang
|
Cong Wang, Yu-Ping Wang, Dinesh Manocha
|
LoLep: Single-View View Synthesis with Locally-Learned Planes and
Self-Attention Occlusion Inference
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel method, LoLep, which regresses Locally-Learned planes from
a single RGB image to represent scenes accurately, thus generating better novel
views. Without the depth information, regressing appropriate plane locations is
a challenging problem. To solve this issue, we pre-partition the disparity
space into bins and design a disparity sampler to regress local offsets for
multiple planes in each bin. However, only using such a sampler prevents the
network from converging; we further propose two optimization strategies that
combine with different disparity distributions of datasets and propose an
occlusion-aware reprojection loss as a simple yet effective geometric
supervision technique. We also introduce a self-attention mechanism to improve
occlusion inference and present a Block-Sampling Self-Attention (BS-SA) module
to address the problem of applying self-attention to large feature maps. We
demonstrate the effectiveness of our approach and generate state-of-the-art
results on different datasets. Compared to MINE, our approach has an LPIPS
reduction of 4.8%-9.0% and an RV reduction of 73.9%-83.5%. We also evaluate the
performance on real-world images and demonstrate the benefits.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 03:38:55 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 10:34:43 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Wang",
"Cong",
""
],
[
"Wang",
"Yu-Ping",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.984926 |
2308.04029
|
Xiaomin Lin
|
Aadi Palnitkar, Rashmi Kapu, Xiaomin Lin, Cheng Liu, Nare Karapetyan,
Yiannis Aloimonos
|
ChatSim: Underwater Simulation with Natural Language Prompting
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots are becoming an essential part of many operations including marine
exploration or environmental monitoring. However, the underwater environment
presents many challenges, including high pressure, limited visibility, and
harsh conditions that can damage equipment. Real-world experimentation can be
expensive and difficult to execute. Therefore, it is essential to simulate the
performance of underwater robots in comparable environments to ensure their
optimal functionality within practical real-world contexts. OysterSim generates
photo-realistic images and segmentation masks of objects in marine
environments, providing valuable training data for underwater computer vision
applications. By integrating ChatGPT into underwater simulations, users can
convey their thoughts effortlessly and intuitively create desired underwater
environments without intricate coding. Moreover, researchers can realize
substantial time and cost savings by evaluating their algorithms across diverse
underwater conditions in the simulation. The objective of ChatSim is to
integrate Large Language Models (LLM) with a simulation
environment (OysterSim), enabling direct control of the simulated environment
via natural language input. This advancement can greatly enhance the
capabilities of underwater simulation, with far-reaching benefits for marine
exploration and broader scientific research endeavors.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 04:08:40 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 12:47:07 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Palnitkar",
"Aadi",
""
],
[
"Kapu",
"Rashmi",
""
],
[
"Lin",
"Xiaomin",
""
],
[
"Liu",
"Cheng",
""
],
[
"Karapetyan",
"Nare",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.997952 |
2308.04080
|
Youer Pu
|
Youer Pu (1), Ali Farahbakhsh (1), Lorenzo Alvisi (1), Ittay Eyal (2)
((1) Cornell University, (2) The Technion)
|
Gorilla: Safe Permissionless Byzantine Consensus
|
43 pages, 3 figures, to be published in the International Symposium
on Distributed Computing (DISC) 2023
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Nakamoto's consensus protocol works in a permissionless model and tolerates
Byzantine failures, but only offers probabilistic agreement. Recently, the
Sandglass protocol has shown such weaker guarantees are not a necessary
consequence of a permissionless model; yet, Sandglass only tolerates benign
failures, and operates in an unconventional partially synchronous model. We
present Gorilla Sandglass, the first Byzantine tolerant consensus protocol to
guarantee, in the same synchronous model adopted by Nakamoto, deterministic
agreement and termination with probability 1 in a permissionless setting. We
prove the correctness of Gorilla by mapping executions that would violate
agreement or termination in Gorilla to executions in Sandglass, where we know
such violations are impossible. Establishing termination proves particularly
interesting, as the mapping requires reasoning about infinite executions and
their probabilities.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 06:37:49 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 07:05:25 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Pu",
"Youer",
"",
"Cornell University"
],
[
"Farahbakhsh",
"Ali",
"",
"Cornell University"
],
[
"Alvisi",
"Lorenzo",
"",
"Cornell University"
],
[
"Eyal",
"Ittay",
"",
"The Technion"
]
] |
new_dataset
| 0.995387 |
2308.04226
|
Vahid Sadiri Javadi
|
Vahid Sadiri Javadi, Martin Potthast, Lucie Flek
|
OpinionConv: Conversational Product Search with Grounded Opinions
| null | null | null | null |
cs.HC cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
When searching for products, the opinions of others play an important role in
making informed decisions. Subjective experiences about a product can be a
valuable source of information. This is also true in sales conversations, where
a customer and a sales assistant exchange facts and opinions about products.
However, training an AI for such conversations is complicated by the fact that
language models do not possess authentic opinions due to their lack of real-world
experience. We address this problem by leveraging product reviews as a rich
source of product opinions to ground conversational AI in true subjective
narratives. With OpinionConv, we develop the first conversational AI for
simulating sales conversations. To validate the generated conversations, we
conduct several user studies showing that the generated opinions are perceived
as realistic. Our assessors also confirm the importance of opinions as an
informative basis for decision-making.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 12:45:01 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Javadi",
"Vahid Sadiri",
""
],
[
"Potthast",
"Martin",
""
],
[
"Flek",
"Lucie",
""
]
] |
new_dataset
| 0.994207 |
2308.04435
|
Tom\'a\v{s} Bravenec
|
Tom\'a\v{s} Bravenec, Joaqu\'in Torres-Sospedra, Michael Gould, Tomas
Fryza
|
UJI Probes: Dataset of Wi-Fi Probe Requests
|
6 pages, 8 figures, submitted and accepted to IPIN2023 conference
| null | null | null |
cs.NI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper focuses on the creation of a new, publicly available Wi-Fi probe
request dataset. Probe requests belong to the family of management frames used
by the 802.11 (Wi-Fi) protocol. As the situation changes year by year and
technology improves, probe request studies need to be performed on
up-to-date data. We provide a month-long probe request capture in an office
environment, including work days, weekends, and holidays, consisting of over 1
400 000 probe requests. We provide a description of all the important aspects
of the dataset. Apart from the raw packet capture we also provide a Radio Map
(RM) of the office to ensure the users of the dataset have all the possible
information about the environment. To protect privacy, user information in the
dataset is anonymized. This anonymization is done in a way that protects the
privacy of users while preserving the ability to analyze the dataset to almost
the same level as raw data. Furthermore, we showcase several possible use cases
for the dataset, like presence detection, temporal Received Signal Strength
Indicator (RSSI) stability, and privacy protection evaluation.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 09:59:11 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Bravenec",
"Tomáš",
""
],
[
"Torres-Sospedra",
"Joaquín",
""
],
[
"Gould",
"Michael",
""
],
[
"Fryza",
"Tomas",
""
]
] |
new_dataset
| 0.999579 |
2308.04492
|
Sang Yun Kwon
|
Sang Yun Kwon, Gagan Bhatia, El Moatez Billah Nagoud, Muhammad
Abdul-Mageed
|
ChatGPT for Arabic Grammatical Error Correction
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, large language models (LLMs) fine-tuned to follow human instruction
have exhibited significant capabilities in various English NLP tasks. However,
their performance in grammatical error correction (GEC) tasks, particularly in
non-English languages, remains significantly unexplored. In this paper, we
delve into the abilities of instruction fine-tuned LLMs in Arabic GEC, a task made
complex due to Arabic's rich morphology. Our findings suggest that various
prompting methods, coupled with (in-context) few-shot learning, demonstrate
considerable effectiveness, with GPT-4 achieving up to $65.49$
F\textsubscript{1} score under expert prompting (approximately $5$ points
higher than our established baseline). This highlights the potential of LLMs in
low-resource settings, offering a viable approach for generating useful
synthetic data for model training. Despite these positive results, we find that
instruction fine-tuned models, regardless of their size, significantly
underperform compared to fully fine-tuned models of significantly smaller
sizes. This disparity highlights substantial room for improvement for LLMs.
Inspired by methods from low-resource machine translation, we also develop a
method exploiting synthetic data that significantly outperforms previous models
on two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with
$72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:00:39 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Kwon",
"Sang Yun",
""
],
[
"Bhatia",
"Gagan",
""
],
[
"Nagoud",
"El Moatez Billah",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
new_dataset
| 0.995313 |
2308.04516
|
Pedro Neto
|
Samuel Alves, Mihail Babcinschi, Afonso Silva, Diogo Neto, Diogo
Fonseca, Pedro Neto
|
Integrated Design Fabrication and Control of a Bioinspired Multimaterial
Soft Robotic Hand
| null |
Cyborg Bionic Syst. 2023;4:Article 0051
|
10.34133/cbsystems.0051
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Machines that mimic humans have inspired scientists for centuries.
Bio-inspired soft robotic hands are a good example of such an endeavor,
featuring intrinsic material compliance and continuous motion to deal with
uncertainty and adapt to unstructured environments. Recent research led to
impactful achievements in functional designs, modeling, fabrication, and
control of soft robots. Nevertheless, the full realization of life-like
movements is still challenging to achieve, often based on trial-and-error
considerations from design to fabrication, consuming time and resources. In
this study, a soft robotic hand is proposed, composed of soft actuator cores
and an exoskeleton, featuring a multi-material design aided by finite element
analysis (FEA) to define the hand geometry and promote the fingers' bendability.
The actuators are fabricated using molding and the exoskeleton is 3D-printed in
a single step. An ON-OFF controller keeps the fingers' inner pressures at the set
values related to specific bending angles, even in the presence of leaks. The FEA
numerical results were validated by experimental tests, as well as the ability
of the hand to grasp objects with different shapes, weights and sizes. This
integrated solution will make soft robotic hands more available to people, at a
reduced cost, avoiding the time-consuming design-fabrication trial-and-error
processes.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:25:54 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Alves",
"Samuel",
""
],
[
"Babcinschi",
"Mihail",
""
],
[
"Silva",
"Afonso",
""
],
[
"Neto",
"Diogo",
""
],
[
"Fonseca",
"Diogo",
""
],
[
"Neto",
"Pedro",
""
]
] |
new_dataset
| 0.958365 |
2308.04519
|
EPTCS
|
Lachlan McPheat (University College London), Daphne Wang (University
College London)
|
DisCoCat for Donkey Sentences
|
In Proceedings AMSLO 2023, arXiv:2308.03679
|
EPTCS 381, 2023, pp. 32-45
|
10.4204/EPTCS.381.5
| null |
cs.CL cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We demonstrate how to parse Geach's Donkey sentences in a compositional
distributional model of meaning. We build on previous work on the DisCoCat
(Distributional Compositional Categorical) framework, including extensions that
model discourse, determiners, and relative pronouns. We present a type-logical
syntax for parsing donkey sentences, for which we define both relational and
vector space semantics.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:35:22 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"McPheat",
"Lachlan",
"",
"University College London"
],
[
"Wang",
"Daphne",
"",
"University\n College London"
]
] |
new_dataset
| 0.999065 |
2308.04520
|
EPTCS
|
Tikhon Pshenitsyn
|
Multimodality in the Hypergraph Lambek Calculus
|
In Proceedings AMSLO 2023, arXiv:2308.03679
|
EPTCS 381, 2023, pp. 46-59
|
10.4204/EPTCS.381.6
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
The multimodal Lambek calculus is an extension of the Lambek calculus that
includes several product operations (some of them being commutative or/and
associative), unary modalities, and corresponding residual implications. In
this work, we relate this calculus to the hypergraph Lambek calculus HL. The
latter is a general pure logic of residuation defined in a sequent form;
antecedents of its sequents are hypergraphs, and the rules of HL involve
hypergraph transformation. Our main result is the embedding of the multimodal
Lambek calculus (with at most one associative product) in HL. It justifies that
HL is a very general Lambek-style logic and also provides a novel syntactic
interface for the multimodal Lambek calculus: antecedents of sequents of the
multimodal Lambek calculus are represented as tree-like hypergraphs in HL, and
they are derived from each other by means of hyperedge replacement. The
advantage of this embedding is that commutativity and associativity are
incorporated in the sequent structure rather than added as separate rules.
Besides, modalities of the multimodal Lambek calculus are represented in HL
using the product and the division of HL, which makes their residual nature
explicit.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:35:44 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Pshenitsyn",
"Tikhon",
""
]
] |
new_dataset
| 0.95747 |
2308.04528
|
Jun-Pu (Yi) Zhang
|
Yi Zhang, Chengyi Wu
|
Unsupervised Camouflaged Object Segmentation as Domain Adaptation
|
12 pages, 6 figures, 3 tables; Project Page:
https://github.com/Jun-Pu/UCOS-DA ; Accepted to ICCV 2023 Workshop on OOD-CV
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning for unsupervised image segmentation remains challenging due to
the absence of human labels. The common idea is to train a segmentation head,
with the supervision of pixel-wise pseudo-labels generated based on the
representation of self-supervised backbones. By doing so, the model performance
depends much on the distance between the distributions of target datasets and
the pre-training dataset (e.g., ImageNet). In this work, we investigate a new
task, namely unsupervised camouflaged object segmentation (UCOS), where the
target objects own a common rarely-seen attribute, i.e., camouflage.
Unsurprisingly, we find that the state-of-the-art unsupervised models struggle
to adapt to UCOS, due to the domain gap between the properties of generic and
camouflaged objects. To this end, we formulate the UCOS as a source-free
unsupervised domain adaptation task (UCOS-DA), where both source labels and
target labels are absent during the whole model training process. Specifically,
we define a source model consisting of self-supervised vision transformers
pre-trained on ImageNet. On the other hand, the target domain includes a simple
linear layer (i.e., our target model) and unlabeled camouflaged objects. We
then design a pipeline for foreground-background-contrastive self-adversarial
domain adaptation, to achieve robust UCOS. As a result, our baseline model
achieves superior segmentation performance when compared with competing
unsupervised models on the UCOS benchmark, with a training set whose scale
is only one tenth of that of the supervised COS counterpart.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:46:16 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Zhang",
"Yi",
""
],
[
"Wu",
"Chengyi",
""
]
] |
new_dataset
| 0.987782 |
2308.04542
|
{\DJ}or{\dj}e Nedeljkovi\'c
|
{\DJ}or{\dj}e Nedeljkovi\'c
|
YUDO: YOLO for Uniform Directed Object Detection
|
The Paper is accepted in 25th Irish Machine Vision and Image
Processing Conference (IMVIP23)
| null |
10.5281/zenodo.8209337
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an efficient way of detecting directed objects by
predicting their center coordinates and direction angle. Since the objects are
of uniform size, the proposed model works without predicting the object's width
and height. The dataset used for this problem is presented in the Honeybee
Segmentation and Tracking Datasets project. One of the contributions of this
work is an examination of the ability of the standard real-time object
detection architecture like YoloV7 to be customized for position and direction
detection. A very efficient, tiny version of the architecture is used in this
approach. Moreover, only one of three detection heads without anchors is
sufficient for this task. We also introduce the extended Skew Intersection over
Union (SkewIoU) calculation for rotated boxes - directed IoU (DirIoU), which
includes an absolute angle difference. DirIoU is used both in the matching
procedure of target and predicted bounding boxes for mAP calculation, and in
the NMS filtering procedure. The code and models are available at
https://github.com/djordjened92/yudo.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 19:18:20 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Nedeljković",
"Đorđe",
""
]
] |
new_dataset
| 0.999771 |
2308.04548
|
Neeldhara Misra
|
Neeldhara Misra and Saraswati Girish Nanoti
|
Spartan Bipartite Graphs are Essentially Elementary
|
21 pages, 12 figures. A preliminary version accepted for presentation
at MFCS 2023
| null |
10.4230/LIPIcs.MFCS.2023.68
| null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We study a two-player game on a graph between an attacker and a defender. To
begin with, the defender places guards on a subset of vertices. In each move,
the attacker attacks an edge. The defender must move at least one guard across
the attacked edge to defend the attack. The defender wins if and only if the
defender can defend an infinite sequence of attacks. The smallest number of
guards with which the defender has a winning strategy is called the eternal
vertex cover number of a graph $G$ and is denoted by $evc(G)$. It is clear that
$evc(G)$ is at least $mvc(G)$, the size of a minimum vertex cover of $G$. We
say that $G$ is Spartan if $evc(G) = mvc(G)$. The characterization of Spartan
graphs has been largely open. In the setting of bipartite graphs on $2n$
vertices where every edge belongs to a perfect matching, an easy strategy is to
have $n$ guards that always move along perfect matchings in response to
attacks. We show that these are essentially the only Spartan bipartite graphs.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 19:37:23 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Misra",
"Neeldhara",
""
],
[
"Nanoti",
"Saraswati Girish",
""
]
] |
new_dataset
| 0.997621 |
2308.04552
|
Ameya Patil
|
Ameya Patil, Zoe Rand, Trevor Branch, Leilani Battle
|
WhaleVis: Visualizing the History of Commercial Whaling
|
5 pages including references, 2 figures. Dashboard served live at
https://observablehq.com/@whales/whale-vis-dashboard-expedition-routes. To be
published in the October issue of TVCG 2023
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Whales are an important part of the oceanic ecosystem. Although historic
commercial whale hunting a.k.a. whaling has severely threatened whale
populations, whale researchers are looking at historical whaling data to inform
current whale status and future conservation efforts. To facilitate this, we
worked with experts in aquatic and fishery sciences to create WhaleVis -- an
interactive dashboard for the commercial whaling dataset maintained by the
International Whaling Commission (IWC). We characterize key analysis tasks
among whale researchers for this database, most important of which is inferring
spatial distribution of whale populations over time. In addition to
facilitating analysis of whale catches based on the spatio-temporal attributes,
we use whaling expedition details to plot the search routes of expeditions. We
propose a model of the catch data as a graph, where nodes represent catch
locations, and edges represent whaling expedition routes. This model
facilitates visual estimation of whale search effort and in turn the spatial
distribution of whale populations normalized by the search effort -- a well
known problem in fisheries research. It further opens up new avenues for graph
analysis on the data, including more rigorous computation of spatial
distribution of whales normalized by the search effort, and enabling new
insight generation. We demonstrate the use of our dashboard through a real life
use case.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 19:48:51 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Patil",
"Ameya",
""
],
[
"Rand",
"Zoe",
""
],
[
"Branch",
"Trevor",
""
],
[
"Battle",
"Leilani",
""
]
] |
new_dataset
| 0.986206 |
2308.04564
|
Beiran Chen
|
Beiran Chen, Marco Ruffini
|
Resource Cooperation in MEC and SDN based Vehicular Networks
|
2023 IEEE 34th Annual International Symposium on Personal, Indoor and
Mobile Radio Communications (PIMRC)
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Internet of Things (IoT) systems require highly scalable infrastructure to
adaptively provide services to meet various performance requirements. Combining
Software-Defined Networking (SDN) with Mobile Edge Cloud (MEC) technology
brings more flexibility for IoT systems. We present a four-tier task processing
architecture for MEC and vehicular networks, which includes processing tasks
locally within a vehicle, on neighboring vehicles, on an edge cloud, and on a
remote cloud. The flexible network connection is controlled by SDN. We propose
a CPU resource allocation algorithm, called Partial Idle Resource Strategy
(PIRS) with Vehicle to Vehicle (V2V) communications, based on Asymmetric Nash
Bargaining Solution (ANBS) in Game Theory. PIRS encourages vehicles in the same
location to cooperate by sharing part of their spare CPU resources. In our
simulations, we adopt four applications running on the vehicles to generate
workload. We compare the proposed algorithm with Non-Cooperation Strategy (NCS)
and All Idle Resource Strategy (AIRS). In NCS, the vehicles execute tasks
generated by the applications in their own On-Board Units (OBU), while in AIRS
vehicles provide all their CPU resources to help other vehicles offload
requests. Our simulation results show that our PIRS strategy can execute more
tasks on the V2V layer, leading to fewer tasks (and less total task length)
being offloaded to the cloud, reaching up to 28% improvement compared to NCS and
up to 10% improvement compared to AIRS.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 20:24:26 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Chen",
"Beiran",
""
],
[
"Ruffini",
"Marco",
""
]
] |
new_dataset
| 0.98528 |
2308.04602
|
Abby Stevens
|
Abby Stevens, Jonathan Ozik, Kyle Chard, Jaline Gerardin, Justin M.
Wozniak
|
NSF RESUME HPC Workshop: High-Performance Computing and Large-Scale Data
Management in Service of Epidemiological Modeling
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The NSF-funded Robust Epidemic Surveillance and Modeling (RESUME) project
successfully convened a workshop entitled "High-performance computing and
large-scale data management in service of epidemiological modeling" at the
University of Chicago on May 1-2, 2023. This was part of a series of workshops
designed to foster sustainable and interdisciplinary co-design for predictive
intelligence and pandemic prevention. The event brought together 31 experts in
epidemiological modeling, high-performance computing (HPC), HPC workflows, and
large-scale data management to develop a shared vision for capabilities needed
for computational epidemiology to better support pandemic prevention. Through
the workshop, participants identified key areas in which HPC capabilities could
be used to improve epidemiological modeling, particularly in supporting public
health decision-making, with an emphasis on HPC workflows, data integration,
and HPC access. The workshop explored nascent HPC workflow and large-scale data
management approaches currently in use for epidemiological modeling and sought
to draw from approaches used in other domains to determine which practices
could be best adapted for use in epidemiological modeling. This report
documents the key findings and takeaways from the workshop.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 22:01:33 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Stevens",
"Abby",
""
],
[
"Ozik",
"Jonathan",
""
],
[
"Chard",
"Kyle",
""
],
[
"Gerardin",
"Jaline",
""
],
[
"Wozniak",
"Justin M.",
""
]
] |
new_dataset
| 0.962836 |
2308.04624
|
Debarag Banerjee
|
Debarag Banerjee, Pooja Singh, Arjun Avadhanam, Saksham Srivastava
|
Benchmarking LLM powered Chatbots: Methods and Metrics
|
8 pages, 14 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous conversational agents, i.e. chatbots, are becoming an increasingly
common mechanism for enterprises to provide support to customers and partners.
In order to rate chatbots, especially ones powered by Generative AI tools like
Large Language Models (LLMs) we need to be able to accurately assess their
performance. This is where chatbot benchmarking becomes important. In this
paper, we propose the use of a novel benchmark that we call the E2E (End to
End) benchmark, and show how the E2E benchmark can be used to evaluate accuracy
and usefulness of the answers provided by chatbots, especially ones powered by
LLMs. We evaluate an example chatbot at different levels of sophistication
based on both our E2E benchmark and other available metrics commonly
used in the state of the art, and observe that the proposed benchmark shows better
results compared to others. In addition, while some metrics proved to be
unpredictable, the metric associated with the E2E benchmark, which uses cosine
similarity, performed well in evaluating chatbots. The performance of our best
models shows that there are several benefits of using the cosine similarity
score as a metric in the E2E benchmark.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 23:30:20 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Banerjee",
"Debarag",
""
],
[
"Singh",
"Pooja",
""
],
[
"Avadhanam",
"Arjun",
""
],
[
"Srivastava",
"Saksham",
""
]
] |
new_dataset
| 0.997239 |
2308.04638
|
Joshua Knights Mr
|
Joshua Knights, Stephen Hausler, Sridha Sridharan, Clinton Fookes,
Peyman Moghadam
|
GeoAdapt: Self-Supervised Test-Time Adaption in LiDAR Place Recognition
Using Geometric Priors
|
Submitted to RA-L
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR place recognition approaches based on deep learning suffer a
significant degradation in performance when there is a shift between the
distribution of the training and testing datasets, with re-training often
required to achieve top performance. However, obtaining accurate ground truth
on new environments can be prohibitively expensive, especially in complex or
GPS-deprived environments. To address this issue we propose GeoAdapt, which
introduces a novel auxiliary classification head to generate pseudo-labels for
re-training on unseen environments in a self-supervised manner. GeoAdapt uses
geometric consistency as a prior to improve the robustness of our generated
pseudo-labels against domain shift, improving the performance and reliability
of our Test-Time Adaptation approach. Comprehensive experiments show that
GeoAdapt significantly boosts place recognition performance across moderate to
severe domain shifts, and is competitive with fully supervised test-time
adaptation approaches. Our code will be available at
https://github.com/csiro-robotics/GeoAdapt.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 00:40:10 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Knights",
"Joshua",
""
],
[
"Hausler",
"Stephen",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
],
[
"Moghadam",
"Peyman",
""
]
] |
new_dataset
| 0.998549 |
2308.04641
|
Yanbo Song
|
Yanbo Song, Tao Feng, Chungang Yang, Xinru Mi, Shanqing Jiang, Mohsen
Guizani
|
IS2N: Intent-Driven Security Software-Defined Network with Blockchain
|
Published in: IEEE Network ( Early Access )
| null |
10.1109/MNET.138.2200539
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software-defined network (SDN) is characterized by its programmability,
flexibility, and the separation of control and data planes. However, SDN still
has many challenges, particularly concerning the security of network
information synchronization and network element registration. Blockchain and
intent-driven networks are recent technologies to establish secure and
intelligent SDN. This article investigates the blockchain-based architecture
and intent-driven mechanisms to implement intent-driven security
software-defined networks (IS2N). Specifically, we propose a novel four-layer
architecture of the IS2N with security capabilities. We integrate an
intent-driven security management mechanism in the IS2N to achieve automated
network security management. Finally, we develop an IS2N platform with a
blockchain middle layer to achieve security capabilities and securely store
network-level snapshots, such as device registration and OpenFlow messages. Our
simulations show that IS2N is more flexible than conventional strategies at
resolving problems during network operations and has a minimal effect on the
SDN.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 00:53:28 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Song",
"Yanbo",
""
],
[
"Feng",
"Tao",
""
],
[
"Yang",
"Chungang",
""
],
[
"Mi",
"Xinru",
""
],
[
"Jiang",
"Shanqing",
""
],
[
"Guizani",
"Mohsen",
""
]
] |
new_dataset
| 0.998139 |
2308.04643
|
Shubhang Bhatnagar
|
Shubhang Bhatnagar, Sharath Gopal, Narendra Ahuja, Liu Ren
|
Long-Distance Gesture Recognition using Dynamic Neural Networks
|
Accepted to IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS 2023)
| null | null | null |
cs.CV cs.HC cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gestures form an important medium of communication between humans and
machines. An overwhelming majority of existing gesture recognition methods are
tailored to a scenario where humans and machines are located very close to each
other. This short-distance assumption does not hold true for several types of
interactions, for example gesture-based interactions with a floor cleaning
robot or with a drone. Methods made for short-distance recognition are unable
to perform well on long-distance recognition due to gestures occupying only a
small portion of the input data. Their performance is especially worse in
resource constrained settings where they are not able to effectively focus
their limited compute on the gesturing subject. We propose a novel, accurate
and efficient method for the recognition of gestures from longer distances. It
uses a dynamic neural network to select features from gesture-containing
spatial regions of the input sensor data for further processing. This helps the
network focus on features important for gesture recognition while discarding
background features early on, thus making it more compute efficient compared to
other techniques. We demonstrate the performance of our method on the LD-ConGR
long-distance dataset where it outperforms previous state-of-the-art methods on
recognition accuracy and compute efficiency.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 00:56:38 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Bhatnagar",
"Shubhang",
""
],
[
"Gopal",
"Sharath",
""
],
[
"Ahuja",
"Narendra",
""
],
[
"Ren",
"Liu",
""
]
] |
new_dataset
| 0.977314 |
2308.04662
|
Tianyu Chen
|
Tianyu Chen, Lin Li, Liuchuan Zhu, Zongyang Li, Guangtai Liang, Ding
Li, Qianxiang Wang, Tao Xie
|
VulLibGen: Identifying Vulnerable Third-Party Libraries via Generative
Pre-Trained Model
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To avoid potential risks posed by vulnerabilities in third-party libraries,
security researchers maintain vulnerability databases (e.g., NVD) containing
vulnerability reports, each of which records the description of a vulnerability
and the name list of libraries affected by the vulnerability (a.k.a. vulnerable
libraries). However, recent studies on about 200,000 vulnerability reports in
NVD show that 53.3% of these reports do not include the name list of vulnerable
libraries, and 59.82% of the included name lists of vulnerable libraries are
incomplete or incorrect.
To address the preceding issue, in this paper, we propose the first
generative approach named VulLibGen to generate the name list of vulnerable
libraries (out of all the existing libraries) for the given vulnerability by
utilizing recent enormous advances in Large Language Models (LLMs), in order to
achieve high accuracy. VulLibGen takes only the description of a vulnerability
as input and achieves high identification accuracy based on LLMs' prior
knowledge of all the existing libraries. VulLibGen also includes the input
augmentation technique to help identify zero-shot vulnerable libraries (those
not occurring during training) and the post-processing technique to help
address VulLibGen's hallucinations. We evaluate VulLibGen using three
state-of-the-art/practice approaches (LightXML, Chronos, and VulLibMiner) that
identify vulnerable libraries on an open-source dataset (VulLib). Our
evaluation results show that VulLibGen can accurately identify vulnerable
libraries with an average F1 score of 0.626 while the state-of-the-art/practice
approaches achieve only 0.561. The post-processing technique helps VulLibGen
achieve an average improvement of F1@1 by 9.3%. The input augmentation
technique helps VulLibGen achieve an average improvement of F1@1 by 39% in
identifying zero-shot libraries.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 02:02:46 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Chen",
"Tianyu",
""
],
[
"Li",
"Lin",
""
],
[
"Zhu",
"Liuchuan",
""
],
[
"Li",
"Zongyang",
""
],
[
"Liang",
"Guangtai",
""
],
[
"Li",
"Ding",
""
],
[
"Wang",
"Qianxiang",
""
],
[
"Xie",
"Tao",
""
]
] |
new_dataset
| 0.997522 |
2308.04665
|
Yongzhu Chang
|
Yongzhu Chang, Rongsheng Zhang, Lin Jiang, Qihang Chen, Le Zhang,
Jiashu Pu
|
Sudowoodo: a Chinese Lyric Imitation System with Source Lyrics
|
7 pages, 3 figures, submitted to EMNLP 2023 demo track
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Lyrics generation is a well-known application in natural language generation
research, with several previous studies focusing on generating accurate lyrics
using precise control such as keywords, rhymes, etc. However, lyrics imitation,
which involves writing new lyrics by imitating the style and content of the
source lyrics, remains a challenging task due to the lack of a parallel corpus.
In this paper, we introduce \textbf{\textit{Sudowoodo}}, a Chinese lyrics
imitation system that can generate new lyrics based on the text of source
lyrics. To address the issue of lacking a parallel training corpus for lyrics
imitation, we propose a novel framework to construct a parallel corpus based on
a keyword-based lyrics model from source lyrics. Then the pairs \textit{(new
lyrics, source lyrics)} are used to train the lyrics imitation model. During
the inference process, we utilize a post-processing module to filter and rank
the generated lyrics, selecting the highest-quality ones. We incorporated audio
information and aligned the lyrics with the audio to form the songs as a bonus.
The human evaluation results show that our framework can perform better lyric
imitation. Meanwhile, the \textit{Sudowoodo} system and a demo video of the
system are available at
\href{https://Sudowoodo.apps-hp.danlu.netease.com/}{Sudowoodo} and
\href{https://youtu.be/u5BBT_j1L5M}{https://youtu.be/u5BBT\_j1L5M}.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 02:12:04 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Chang",
"Yongzhu",
""
],
[
"Zhang",
"Rongsheng",
""
],
[
"Jiang",
"Lin",
""
],
[
"Chen",
"Qihang",
""
],
[
"Zhang",
"Le",
""
],
[
"Pu",
"Jiashu",
""
]
] |
new_dataset
| 0.999797 |
2308.04688
|
Shotaro Ishihara
|
Kaito Majima, Shotaro Ishihara
|
Generating News-Centric Crossword Puzzles As A Constraint Satisfaction
and Optimization Problem
|
32nd ACM International Conference on Information and Knowledge
Management (short paper track)
| null |
10.1145/3583780.3615151
| null |
cs.CL cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crossword puzzles have traditionally served not only as entertainment but
also as an educational tool that can be used to acquire vocabulary and language
proficiency. One strategy to enhance the educational purpose is
personalization, such as including more words on a particular topic. This paper
focuses on the case of encouraging people's interest in news and proposes a
framework for automatically generating news-centric crossword puzzles. We
designed possible scenarios and built a prototype formulated as a constraint
satisfaction and optimization problem, that is, one that contains as many
news-derived words as possible. Our experiments reported the generation probabilities and time
required under several conditions. The results showed that news-centric
crossword puzzles can be generated even with few news-derived words. We
summarize the current issues and future research directions through a
qualitative evaluation of the prototype. This is the first proposal showing that
a constraint satisfaction and optimization formulation can be beneficial for an
educational application.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 03:50:26 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Majima",
"Kaito",
""
],
[
"Ishihara",
"Shotaro",
""
]
] |
new_dataset
| 0.993859 |
2308.04765
|
Qiushi Guo
|
Qiushi Guo, Shisha Liao
|
FaceSkin: A Privacy Preserving Facial skin patch Dataset for multi
Attributes classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human facial skin images contain abundant textural information that can serve
as valuable features for attribute classification, such as age, race, and
gender. Additionally, facial skin images offer the advantages of easy
collection and minimal privacy concerns. However, the availability of
well-labeled human skin datasets with a sufficient number of images is limited.
To address this issue, we introduce a dataset called FaceSkin, which
encompasses a diverse range of ages and races. Furthermore, to broaden the
application scenarios, we incorporate synthetic skin-patches obtained from 2D
and 3D attack images, including printed paper, replays, and 3D masks. We
evaluate the FaceSkin dataset across distinct categories and present
experimental results demonstrating its effectiveness in attribute
classification, as well as its potential for various downstream tasks, such as
Face anti-spoofing and Age estimation.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 07:53:33 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Guo",
"Qiushi",
""
],
[
"Liao",
"Shisha",
""
]
] |
new_dataset
| 0.999889 |
2308.04774
|
Jiashun Suo
|
Jiashun Suo, Xingzhou Zhang, Weisong Shi, Wei Zhou
|
E3-UAV: An Edge-based Energy-Efficient Object Detection System for
Unmanned Aerial Vehicles
|
16 pages, 8 figures
|
IEEE Internet of Things Journal, Early Access 1-1 (2023)
|
10.1109/JIOT.2023.3301623
| null |
cs.RO cs.AI cs.CV cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the advances in deep learning techniques, the application of
Unmanned Aerial Vehicle (UAV)-based object detection has proliferated across a
range of fields, including vehicle counting, fire detection, and city
monitoring. While most existing research studies only a subset of the
challenges inherent to UAV-based object detection, there are few studies that
balance various aspects to design a practical system for energy consumption
reduction. In response, we present the E3-UAV, an edge-based energy-efficient
object detection system for UAVs. The system is designed to dynamically support
various UAV devices, edge devices, and detection algorithms, with the aim of
minimizing energy consumption by deciding the most energy-efficient flight
parameters (including flight altitude, flight speed, detection algorithm, and
sampling rate) required to fulfill the detection requirements of the task. We
first present an effective evaluation metric for actual tasks and construct a
transparent energy consumption model based on hundreds of actual flight data to
formalize the relationship between energy consumption and flight parameters.
Then we present a lightweight energy-efficient priority decision algorithm
based on a large quantity of actual flight data to assist the system in
deciding flight parameters. Finally, we evaluate the performance of the system,
and our experimental results demonstrate that it can significantly decrease
energy consumption in real-world scenarios. Additionally, we provide four
insights that can assist researchers and engineers in their efforts to study
UAV-based object detection further.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 08:02:11 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Suo",
"Jiashun",
""
],
[
"Zhang",
"Xingzhou",
""
],
[
"Shi",
"Weisong",
""
],
[
"Zhou",
"Wei",
""
]
] |
new_dataset
| 0.99464 |
2308.04794
|
Rajashekhar Vachiravelu Saminathan
|
J Krishna Kant, Mahankali Sripaad, Anand Bharadwaj, Rajashekhar V S
and Suresh Sundaram
|
An Autonomous Hybrid Drone-Rover Vehicle for Weed Removal and Spraying
Applications in Agriculture
|
6 pages, 9 figures, accepted for AGRETA2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The usage of drones and rovers helps to overcome the limitations of
traditional agriculture, which has been predominantly human-intensive, by
carrying out tasks such as the removal of weeds and the spraying of fertilizers and
pesticides. Drones and rovers are helping to realize precision agriculture and
are providing farmers with improved monitoring and surveying at affordable costs. Major
benefits have come for vertical farming and fields with irrigation canals.
However, drones have a limitation of flight time due to payload constraints.
Rovers have limitations in vertical farming and with obstacles like canals in
agricultural fields. To meet the different requirements of multiple terrains
and vertical farming in agriculture, we propose an autonomous hybrid
drone-rover vehicle that combines the advantages of both rovers and drones. The
prototype is described along with experimental results regarding its ability to
avoid obstacles, pluck weeds and spray pesticides.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 08:32:10 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Kant",
"J Krishna",
""
],
[
"Sripaad",
"Mahankali",
""
],
[
"Bharadwaj",
"Anand",
""
],
[
"S",
"Rajashekhar V",
""
],
[
"Sundaram",
"Suresh",
""
]
] |
new_dataset
| 0.997564 |
2308.04811
|
Kailai Yang
|
Kailai Yang, Tianlin Zhang, Shaoxiong Ji, Sophia Ananiadou
|
A Bipartite Graph is All We Need for Enhancing Emotional Reasoning with
Commonsense Knowledge
|
Accepted by CIKM 2023 as a long paper
| null |
10.1145/3583780.3614758
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The context-aware emotional reasoning ability of AI systems, especially in
conversations, is of vital importance in applications such as online opinion
mining from social media and empathetic dialogue systems. Due to the implicit
nature of conveying emotions in many scenarios, commonsense knowledge is widely
utilized to enrich utterance semantics and enhance conversation modeling.
However, most previous knowledge infusion methods perform empirical knowledge
filtering and design highly customized architectures for knowledge interaction
with the utterances, which can discard useful knowledge aspects and limit their
generalizability to different knowledge sources. Based on these observations,
we propose a Bipartite Heterogeneous Graph (BHG) method for enhancing emotional
reasoning with commonsense knowledge. In BHG, the extracted context-aware
utterance representations and knowledge representations are modeled as
heterogeneous nodes. Two more knowledge aggregation node types are proposed to
perform automatic knowledge filtering and interaction. BHG-based knowledge
infusion can be directly generalized to multi-type and multi-grained knowledge
sources. In addition, we propose a Multi-dimensional Heterogeneous Graph
Transformer (MHGT) to perform graph reasoning, which can retain unchanged
feature spaces and unequal dimensions for heterogeneous node types during
inference to prevent unnecessary loss of information. Experiments show that
BHG-based methods significantly outperform state-of-the-art knowledge infusion
methods and show generalized knowledge infusion ability with higher efficiency.
Further analysis proves that previous empirical knowledge filtering methods do
not guarantee that the most useful knowledge is provided. Our code is
available at: https://github.com/SteveKGYang/BHG.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:09:17 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Yang",
"Kailai",
""
],
[
"Zhang",
"Tianlin",
""
],
[
"Ji",
"Shaoxiong",
""
],
[
"Ananiadou",
"Sophia",
""
]
] |
new_dataset
| 0.993977 |
2308.04814
|
Gunjan Singh
|
Gunjan Singh, Sumit Bhatia, Raghava Mutharaju
|
Neuro-Symbolic RDF and Description Logic Reasoners: The State-Of-The-Art
and Challenges
|
This paper is a part of the book titled Compendium of Neuro-Symbolic
Artificial Intelligence which can be found at the following link:
https://www.iospress.com/
catalog/books/compendium-of-neurosymbolic-artificial-intelligence
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Ontologies are used in various domains, with RDF and OWL being prominent
standards for ontology development. RDF is favored for its simplicity and
flexibility, while OWL enables detailed domain knowledge representation.
However, as ontologies grow larger and more expressive, reasoning complexity
increases, and traditional reasoners struggle to perform efficiently. Despite
optimization efforts, scalability remains an issue. Additionally, advancements
in automated knowledge base construction have created large and expressive
ontologies that are often noisy and inconsistent, posing further challenges for
conventional reasoners. To address these challenges, researchers have explored
neuro-symbolic approaches that combine neural networks' learning capabilities
with symbolic systems' reasoning abilities. In this chapter, we provide an
overview of the existing literature in the field of neuro-symbolic deductive
reasoning supported by RDF(S), the description logics EL and ALC, and OWL 2 RL,
discussing the techniques employed, the tasks they address, and other relevant
efforts in this area.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:12:35 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Singh",
"Gunjan",
""
],
[
"Bhatia",
"Sumit",
""
],
[
"Mutharaju",
"Raghava",
""
]
] |
new_dataset
| 0.969208 |
2308.04819
|
Lukas Daniel Klausner
|
Gabriela Viale Pereira, Lukas Daniel Klausner, Lucy Temple, Thomas
Delissen, Thomas Lampoltshammer, Torsten Priebe
|
"This (Smart) Town Ain't Big Enough": Smart Small Towns and Digital
Twins for Sustainable Urban and Regional Development
|
6 pages, 1 figure
|
Joint Proceedings of Ongoing Research, Practitioners, Posters,
Workshops, and Projects at EGOV-CeDEM-ePart 2023 (EGOV 2023), 2023
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the major challenges today lies in the creation of governance concepts
for regional development that not only promote growth but, at the same time,
ensure promotion of inclusiveness, fairness, and resilience. Digital twins can
support policymakers in developing smart, sustainable solutions for cities and
regions and, therefore, urban and non-urban environments. The project SCiNDTiLA
(Smart Cities aNd Digital Twins in Lower Austria) aims to define the
state-of-the-art in the field of smart cities, identify interdependencies,
critical components and stakeholders, and provide a roadmap for smart cities
with application to both smaller-scale urban and non-urban environments.
SCiNDTiLA uses the foundations of complexity theory and computational social
science methods to model Austrian towns and regions as smart cities/regions and
thus as systems of socio-technical interaction to guide policy decision-making
toward sustainable development.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:20:12 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Pereira",
"Gabriela Viale",
""
],
[
"Klausner",
"Lukas Daniel",
""
],
[
"Temple",
"Lucy",
""
],
[
"Delissen",
"Thomas",
""
],
[
"Lampoltshammer",
"Thomas",
""
],
[
"Priebe",
"Torsten",
""
]
] |
new_dataset
| 0.98944 |
2308.04826
|
Muyu Xu
|
Muyu Xu, Fangneng Zhan, Jiahui Zhang, Yingchen Yu, Xiaoqin Zhang,
Christian Theobalt, Ling Shao and Shijian Lu
|
WaveNeRF: Wavelet-based Generalizable Neural Radiance Fields
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neural Radiance Field (NeRF) has shown impressive performance in novel view
synthesis via implicit scene representation. However, it usually suffers from
poor scalability, as it requires densely sampled images for each new scene.
Several studies have attempted to mitigate this problem by integrating the
Multi-View Stereo (MVS) technique into NeRF, but they still entail a
cumbersome fine-tuning process for new scenes. Notably, the rendering quality
will drop severely without this fine-tuning process, and the errors mainly
appear around high-frequency features. In light of this observation, we
design WaveNeRF, which integrates wavelet frequency decomposition into MVS and
NeRF to achieve generalizable yet high-quality synthesis without any per-scene
optimization. To preserve high-frequency information when generating 3D feature
volumes, WaveNeRF builds Multi-View Stereo in the Wavelet domain by integrating
the discrete wavelet transform into the classical cascade MVS, which
disentangles high-frequency information explicitly. With that, disentangled
frequency features can be injected into classic NeRF via a novel hybrid neural
renderer to yield faithful high-frequency details, and an intuitive
frequency-guided sampling strategy can be designed to suppress artifacts around
high-frequency regions. Extensive experiments over three widely studied
benchmarks show that WaveNeRF achieves superior generalizable radiance field
modeling when only given three images as input.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:24:56 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Xu",
"Muyu",
""
],
[
"Zhan",
"Fangneng",
""
],
[
"Zhang",
"Jiahui",
""
],
[
"Yu",
"Yingchen",
""
],
[
"Zhang",
"Xiaoqin",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Shao",
"Ling",
""
],
[
"Lu",
"Shijian",
""
]
] |
new_dataset
| 0.972643 |
2308.04832
|
Yuanhao Gong
|
Yuanhao Gong
|
TSSR: A Truncated and Signed Square Root Activation Function for Neural
Networks
|
arXiv admin note: substantial text overlap with arXiv:2307.16389
| null | null | null |
cs.CV cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activation functions are essential components of neural networks. In this
paper, we introduce a new activation function called the Truncated and Signed
Square Root (TSSR) function. This function is distinctive because it is odd,
nonlinear, monotone and differentiable. Its gradient is continuous and always
positive. Thanks to these properties, it has the potential to improve the
numerical stability of neural networks. Several experiments confirm that the
proposed TSSR has better performance than other state-of-the-art activation
functions. The proposed function has significant implications for the
development of neural network models and can be applied to a wide range of
applications in fields such as computer vision, natural language processing,
and speech recognition.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:40:34 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Gong",
"Yuanhao",
""
]
] |
new_dataset
| 0.999378 |
2308.04837
|
Dan Zhang
|
Dan Zhang and Staal A. Vinterbo
|
A New Family of Perfect Polyphase Sequences with Low Cross-Correlation
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Spread spectrum multiple access systems demand minimum possible
cross-correlation between the sequences within a set of sequences having good
auto-correlation properties. Through a connection between generalised Frank
sequences and Florentine arrays, we present a family of perfect sequences with
low cross-correlation having a larger family size, compared with previous
works. In particular, the family size can be equal to the square root of the
period when the period of the perfect sequences is even. In contrast, the
number of the perfect sequences of even period with low cross-correlation is
equal to one in all previous works.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:59:19 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Zhang",
"Dan",
""
],
[
"Vinterbo",
"Staal A.",
""
]
] |
new_dataset
| 0.992683 |
2308.04884
|
Cristina Gena
|
Linda Pigureddu and Cristina Gena
|
Using the power of memes: The Pepper Robot as a communicative
facilitator for autistic children (cAESAR2023 workshop)
| null | null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This article describes the preliminary qualitative results of a therapeutic
laboratory involving the Pepper robot, as a facilitator, to promote autonomy
and functional acquisition in autistic children with low support needs (level 1
support). The lab, designed and led by a multidisciplinary team, involved 4
children, aged 11 to 13 years, and was organized in weekly meetings for the
duration of four months. The following is the result of an in-depth qualitative
evaluation of the interactions that took place between the children and the
Pepper robot, with the aim of analyzing their effectiveness for the purpose of
promoting the development of social and communication skills in the
participants. The observations and analyses conducted during the interactions
provided valuable insights into the dialogue and communication style employed
and paved the way for possible strategies to make the robot more empathetic and
engaging for autistic children.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 11:30:54 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Pigureddu",
"Linda",
""
],
[
"Gena",
"Cristina",
""
]
] |
new_dataset
| 0.991969 |
2308.04887
|
Zahra Moti
|
Zahra Moti, Asuman Senol, Hamid Bostani, Frederik Zuiderveen
Borgesius, Veelasha Moonsamy, Arunesh Mathur, Gunes Acar
|
Targeted and Troublesome: Tracking and Advertising on Children's
Websites
| null | null | null | null |
cs.CY cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
On the modern web, trackers and advertisers frequently construct and monetize
users' detailed behavioral profiles without consent. Despite various studies on
web tracking mechanisms and advertisements, there has been no rigorous study
focusing on websites targeted at children. To address this gap, we present a
measurement of tracking and (targeted) advertising on websites directed at
children. Motivated by the lack of a comprehensive list of child-directed (i.e.,
targeted at children) websites, we first build a multilingual classifier based
on web page titles and descriptions. Applying this classifier to over two
million pages, we compile a list of two thousand child-directed websites.
Crawling these sites from five vantage points, we measure the prevalence of
trackers, fingerprinting scripts, and advertisements. Our crawler detects ads
displayed on child-directed websites and determines if ad targeting is enabled
by scraping ad disclosure pages whenever available. Our results show that
around 90% of child-directed websites embed one or more trackers, and about 27%
contain targeted advertisements--a practice that should require verifiable
parental consent. Next, we identify improper ads on child-directed websites by
developing an ML pipeline that processes both images and text extracted from
ads. The pipeline allows us to run semantic similarity queries for arbitrary
search terms, revealing ads that promote services related to dating, weight
loss, and mental health; as well as ads for sex toys and flirting chat
services. Some of these ads feature repulsive and sexually explicit imagery. In
summary, our findings indicate a trend of non-compliance with privacy
regulations and troubling ad safety practices among many advertisers and
child-directed websites. To protect children and create a safer online
environment, regulators and stakeholders must adopt and enforce more stringent
measures.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 11:37:39 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Moti",
"Zahra",
""
],
[
"Senol",
"Asuman",
""
],
[
"Bostani",
"Hamid",
""
],
[
"Borgesius",
"Frederik Zuiderveen",
""
],
[
"Moonsamy",
"Veelasha",
""
],
[
"Mathur",
"Arunesh",
""
],
[
"Acar",
"Gunes",
""
]
] |
new_dataset
| 0.981822 |
2308.04905
|
Guillermo Bern\'ardez
|
Guillermo Bern\'ardez, Jos\'e Su\'arez-Varela, Xiang Shi, Shihan Xiao,
Xiangle Cheng, Pere Barlet-Ros, Albert Cabellos-Aparicio
|
GraphCC: A Practical Graph Learning-based Approach to Congestion Control
in Datacenters
|
11 pages, 7 figures, 2 tables
| null | null | null |
cs.NI cs.AI cs.LG cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Congestion Control (CC) plays a fundamental role in optimizing traffic in
Data Center Networks (DCN). Currently, DCNs mainly implement two main CC
protocols: DCTCP and DCQCN. Both protocols -- and their main variants -- are
based on Explicit Congestion Notification (ECN), where intermediate switches
mark packets when they detect congestion. The ECN configuration is thus a
crucial aspect of the performance of CC protocols. Nowadays, network experts
set static ECN parameters, carefully selected to optimize the average network
performance. However, today's high-speed DCNs experience quick and abrupt
events that severely alter the network state (e.g., dynamic traffic
workloads, incast events, failures). This leads to under-utilization and
sub-optimal performance. This paper presents GraphCC, a novel Machine
Learning-based framework for in-network CC optimization. Our distributed
solution relies on a novel combination of Multi-agent Reinforcement Learning
(MARL) and Graph Neural Networks (GNN), and it is compatible with widely
deployed ECN-based CC protocols. GraphCC deploys distributed agents on switches
that communicate with their neighbors to cooperate and optimize the global ECN
configuration. In our evaluation, we test the performance of GraphCC under a
wide variety of scenarios, focusing on the capability of this solution to adapt
to new scenarios unseen during training (e.g., new traffic workloads, failures,
upgrades). We compare GraphCC with a state-of-the-art MARL-based solution for
ECN tuning -- ACC -- and observe that our proposed solution outperforms the
state-of-the-art baseline in all of the evaluation scenarios, showing
improvements up to $20\%$ in Flow Completion Time as well as significant
reductions in buffer occupancy ($38.0-85.7\%$).
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 12:04:41 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Bernárdez",
"Guillermo",
""
],
[
"Suárez-Varela",
"José",
""
],
[
"Shi",
"Xiang",
""
],
[
"Xiao",
"Shihan",
""
],
[
"Cheng",
"Xiangle",
""
],
[
"Barlet-Ros",
"Pere",
""
],
[
"Cabellos-Aparicio",
"Albert",
""
]
] |
new_dataset
| 0.961741 |
2308.04913
|
Kaize Shi
|
Kaize Shi, Xueyao Sun, Dingxian Wang, Yinlin Fu, Guandong Xu, Qing Li
|
LLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction
Following
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
E-commerce authoring involves creating attractive, abundant, and targeted
promotional content to drive product sales. The emergence of large language
models (LLMs) introduces an innovative paradigm, offering a unified solution to
address various authoring tasks within this scenario. However, mainstream LLMs
trained on general corpora with common sense knowledge reveal limitations in
fitting complex and personalized features unique to e-commerce products and
customers. Furthermore, LLMs like GPT-3.5 necessitate remote accessibility,
raising concerns about safeguarding voluminous customer privacy data during
transmission. This paper proposes LLaMA-E, unified and customized
instruction-following language models focusing on diverse e-commerce authoring
tasks. Specifically, the domain experts create the seed instruction set from
the tasks of ads generation, query-enhanced product title rewriting, product
classification, purchase intent speculation, and general Q&A. These tasks
enable the models to comprehensively understand precise e-commerce authoring
knowledge by interleaving features covering typical service aspects of
customers, sellers, and platforms. The GPT-3.5 is introduced as a teacher
model, which expands the seed instructions to form a training set for the
LLaMA-E models with various scales. The experimental results show that the
proposed LLaMA-E models achieve state-of-the-art results in quantitative and
qualitative evaluations, while also exhibiting an advantage in zero-shot
settings. To the best of our knowledge, this study is the first to apply LLMs
to specific e-commerce authoring scenarios.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 12:26:37 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Shi",
"Kaize",
""
],
[
"Sun",
"Xueyao",
""
],
[
"Wang",
"Dingxian",
""
],
[
"Fu",
"Yinlin",
""
],
[
"Xu",
"Guandong",
""
],
[
"Li",
"Qing",
""
]
] |
new_dataset
| 0.992122 |
2308.04945
|
Firoj Alam
|
Fahim Dalvi, Maram Hasanain, Sabri Boughorbel, Basel Mousi, Samir
Abdaljalil, Nizi Nazar, Ahmed Abdelali, Shammur Absar Chowdhury, Hamdy
Mubarak, Ahmed Ali, Majd Hawasly, Nadir Durrani, Firoj Alam
|
LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking
|
Foundation Models, Large Language Models, NLP, ChatGPT Evaluation,
LLMs Benchmark
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The recent development and success of Large Language Models (LLMs)
necessitate an evaluation of their performance across diverse NLP tasks in
different languages. Although several frameworks have been developed and made
publicly available, their customization capabilities for specific tasks and
datasets are often complex for different users. In this study, we introduce the
LLMeBench framework. Initially developed to evaluate Arabic NLP tasks using
OpenAI's GPT and BLOOM models, it can be seamlessly customized for any NLP task
and model, regardless of language. The framework also features zero- and
few-shot learning settings. A new custom dataset can be added in less than 10
minutes, and users can use their own model API keys to evaluate the task at
hand. The developed framework has already been tested on 31 unique NLP tasks
using 53 publicly available datasets within 90 experimental setups, involving
approximately 296K data points. We plan to open-source the framework for the
community (https://github.com/qcri/LLMeBench/). A video demonstrating the
framework is available online (https://youtu.be/FkQn4UjYA0s).
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 13:22:37 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Dalvi",
"Fahim",
""
],
[
"Hasanain",
"Maram",
""
],
[
"Boughorbel",
"Sabri",
""
],
[
"Mousi",
"Basel",
""
],
[
"Abdaljalil",
"Samir",
""
],
[
"Nazar",
"Nizi",
""
],
[
"Abdelali",
"Ahmed",
""
],
[
"Chowdhury",
"Shammur Absar",
""
],
[
"Mubarak",
"Hamdy",
""
],
[
"Ali",
"Ahmed",
""
],
[
"Hawasly",
"Majd",
""
],
[
"Durrani",
"Nadir",
""
],
[
"Alam",
"Firoj",
""
]
] |
new_dataset
| 0.978896 |
2308.04972
|
Brooke Lampe
|
Brooke Lampe, Weizhi Meng
|
can-train-and-test: A Curated CAN Dataset for Automotive Intrusion
Detection
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When it comes to in-vehicle networks (IVNs), the controller area network --
CAN -- bus dominates the market; automobiles manufactured and sold around the
world depend on the CAN bus for safety-critical communications between various
components of the vehicle (e.g., the engine, the transmission, the steering
column). Unfortunately, the CAN bus is inherently insecure; in fact, it
completely lacks controls such as authentication, authorization, and
confidentiality (i.e., encryption). Therefore, researchers have travailed to
develop automotive security enhancements. The automotive intrusion detection
system (IDS) is especially popular in the literature -- due to its relatively
low cost in terms of money, resource utilization, and implementation effort.
That said, developing and evaluating an automotive IDS is often challenging; if
researchers do not have access to a test vehicle, then they are forced to
depend on publicly available CAN data -- which is not without limitations. Lack
of access to adequate CAN data, then, becomes a barrier to entry into
automotive security research.
We seek to lower that barrier to entry by introducing a new CAN dataset to
facilitate the development and evaluation of automotive IDSs. Our dataset,
dubbed can-train-and-test, provides CAN data from four different vehicles
produced by two different manufacturers. The attack captures for each vehicle
model are equivalent, enabling researchers to assess the ability of a given IDS
to generalize to different vehicle models and even different vehicle
manufacturers. Our dataset contains replayable .log files as well as labeled
and unlabeled .csv files, thereby meeting a variety of development and
evaluation needs. Furthermore, can-train-and-test offers nine unique attacks,
ranging from denial of service (DoS) to gear spoofing to standstill...
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 14:14:57 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Lampe",
"Brooke",
""
],
[
"Meng",
"Weizhi",
""
]
] |
new_dataset
| 0.998104 |
2308.05012
|
Awad Abdelhalim
|
Michael Leong, Awad Abdelhalim, Jude Ha, Dianne Patterson, Gabriel L.
Pincus, Anthony B. Harris, Michael Eichler, Jinhua Zhao
|
MetRoBERTa: Leveraging Traditional Customer Relationship Management Data
to Develop a Transit-Topic-Aware Language Model
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Transit riders' feedback provided in ridership surveys, customer relationship
management (CRM) channels, and in more recent times, through social media is
key for transit agencies to better gauge the efficacy of their services and
initiatives. Getting a holistic understanding of riders' experience through the
feedback shared in those instruments is often challenging, mostly due to the
open-ended, unstructured nature of text feedback. In this paper, we propose
leveraging traditional transit CRM feedback to develop and deploy a
transit-topic-aware large language model (LLM) capable of classifying
open-ended text feedback to relevant transit-specific topics. First, we utilize
semi-supervised learning to engineer a training dataset of 11 broad transit
topics detected in a corpus of 6 years of customer feedback provided to the
Washington Metropolitan Area Transit Authority (WMATA). We then use this
dataset to train and thoroughly evaluate a language model based on the RoBERTa
architecture. We compare our LLM, MetRoBERTa, to classical machine learning
approaches utilizing keyword-based and lexicon representations. Our model
outperforms those methods across all evaluation metrics, providing an average
topic classification accuracy of 90%. Finally, we provide a value proposition
of this work demonstrating how the language model, alongside additional text
processing tools, can be applied to add structure to open-ended text sources of
feedback like Twitter. The framework and results we present provide a pathway
for an automated, generalizable approach for ingesting, visualizing, and
reporting transit riders' feedback at scale, enabling agencies to better
understand and improve customer experience.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 15:11:37 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Leong",
"Michael",
""
],
[
"Abdelhalim",
"Awad",
""
],
[
"Ha",
"Jude",
""
],
[
"Patterson",
"Dianne",
""
],
[
"Pincus",
"Gabriel L.",
""
],
[
"Harris",
"Anthony B.",
""
],
[
"Eichler",
"Michael",
""
],
[
"Zhao",
"Jinhua",
""
]
] |
new_dataset
| 0.998401 |
2308.05038
|
Himarsha R Jayanetti
|
Himarsha R. Jayanetti, Erika Frydenlund, Michele C. Weigle
|
Xenophobic Events vs. Refugee Population -- Using GDELT to Identify
Countries with Disproportionate Coverage
|
10 pages, 2 figures, accepted as a Working Paper at SBP-BRiMS 2023.
arXiv admin note: text overlap with arXiv:2305.01708
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this preliminary study, we used the Global Database of Events, Language,
and Tone (GDELT) database to examine xenophobic events reported in the media
during 2022. We collected a dataset of 2,778 unique events and created a
choropleth map illustrating the frequency of events scaled by the refugee
population's proportion in each host country. We identified the top 10
countries with the highest scaled event frequencies among those with more than
50,000 refugees. Contrary to the belief that hosting a significant number of
forced migrants results in higher xenophobic incidents, our findings indicate a
potential connection to political factors. We also categorized the 20 root
event codes in the CAMEO event data as either "Direct" or "Indirect". Almost
90% of the events related to refugees in 2022 were classified as "Indirect".
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 16:10:05 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Jayanetti",
"Himarsha R.",
""
],
[
"Frydenlund",
"Erika",
""
],
[
"Weigle",
"Michele C.",
""
]
] |
new_dataset
| 0.995152 |
2308.05078
|
Kamal Singh
|
Kamal Singh, Sumit Roy
|
Ergodic Capacity of Dyadic Fading Channels in Ultra Low-SNR Regime
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a mobile wireless channel, the small-scale multipath fading induces
temporal channel fluctuations in the form of peaks and deep fades. The channel
capacity degradation with fading severity in the high signal-to-noise ratio
(SNR) regime is well known in the wireless communication literature: the
probability of deep fades increases significantly with higher fading severity
resulting in poor performance. In this paper, we focus on double-fading pinhole
channels under perfect CSIT to show a very counter-intuitive result: higher
fading severity enables higher ergodic capacity at sufficiently low SNR.
The underlying reason is that at low SNRs, ergodic capacity depends crucially
on the probability distribution of channel peaks (simply tail distribution);
for the pinhole channel, the tail distribution improves with increased fading
severity. This allows a transmitter operating at low SNR to exploit channel
peaks more efficiently resulting in a net improvement in achievable spectral
efficiency. We derive a new key result quantifying the above dependence for the
double-Nakagami-$m$ fading pinhole channel - that is, the ergodic capacity ${C}
\propto (m_T m_R)^{-1}$ at low SNR, where $m_T m_R$ is the product of fading
(severity) parameters of the two independent Nakagami-$m$ fadings involved.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 17:15:24 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Singh",
"Kamal",
""
],
[
"Roy",
"Sumit",
""
]
] |
new_dataset
| 0.997451 |
2308.05085
|
Ruoyan Kong
|
Ruoyan Kong, Haiyi Zhu, Joseph A. Konstan
|
Organizational Bulk Email Systems: Their Role and Performance in Remote
Work
|
arXiv admin note: text overlap with arXiv:2006.16508
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has forced many employees to work from home.
Organizational bulk emails now play a critical role to reach employees with
central information in this work-from-home environment. However, we know from
our own recent work that organizational bulk email has problems: recipients
fail to retain the bulk messages they received from the organization;
recipients and senders have different opinions on which bulk messages were
important; and communicators lack technology support to better target and
design messages. In this position paper, first we review the prior work on
evaluating, designing, and prototyping organizational communication systems.
Second we review our recent findings and some research techniques we found
useful in studying organizational communication. Last we propose a research
agenda to study organizational communications in remote work environment and
suggest some key questions and potential study directions.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 17:27:57 GMT"
}
] | 2023-08-10T00:00:00 |
[
[
"Kong",
"Ruoyan",
""
],
[
"Zhu",
"Haiyi",
""
],
[
"Konstan",
"Joseph A.",
""
]
] |
new_dataset
| 0.996618 |
2203.14471
|
Wenda Zhao
|
Wenda Zhao, Abhishek Goudar, Xinyuan Qiao, and Angela P. Schoellig
|
UTIL: An Ultra-wideband Time-difference-of-arrival Indoor Localization
Dataset
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ultra-wideband (UWB) time-difference-of-arrival (TDOA)-based localization has
emerged as a promising, low-cost, and scalable indoor localization solution,
which is especially suited for multi-robot applications. However, there is a
lack of public datasets to study and benchmark UWB TDOA positioning technology
in cluttered indoor environments. We fill in this gap by presenting a
comprehensive dataset using Decawave's DWM1000 UWB modules. To characterize the
UWB TDOA measurement performance under various line-of-sight (LOS) and
non-line-of-sight (NLOS) conditions, we collected signal-to-noise ratio (SNR),
power difference values, and raw UWB TDOA measurements during the
identification experiments. We also conducted a cumulative total of around 150
minutes of real-world flight experiments on a customized quadrotor platform to
benchmark the UWB TDOA localization performance for mobile robots. The
quadrotor was commanded to fly with an average speed of 0.45 m/s in both
obstacle-free and cluttered environments using four different UWB anchor
constellations. Raw sensor data including UWB TDOA, inertial measurement unit
(IMU), optical flow, time-of-flight (ToF) laser altitude, and
millimeter-accurate ground truth robot poses were collected during the flights.
The dataset and development kit are available at
https://utiasdsl.github.io/util-uwb-dataset/.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 03:23:51 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 21:04:10 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Dec 2022 16:07:33 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Aug 2023 19:27:30 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Zhao",
"Wenda",
""
],
[
"Goudar",
"Abhishek",
""
],
[
"Qiao",
"Xinyuan",
""
],
[
"Schoellig",
"Angela P.",
""
]
] |
new_dataset
| 0.99974 |
2208.14508
|
Zixuan He
|
Shenglian Lu (1), Xiaoyu Liu (1), Zixaun He (2), Wenbo Liu (3), Xin
Zhang (3), and Manoj Karkee (2) ((1) Guangxi Normal University, China, (2)
Washington State University, US, (3) Mississippi State University, US)
|
Swin-transformer-yolov5 For Real-time Wine Grape Bunch Detection
|
30 pages; 15 figures;Corresponding author: Xin Zhang Department of
Agricultural and Biological Engineering Mississippi State University
Mississippi State, MS 39762, USA ([email protected])
| null |
10.3390/rs14225853
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this research, an integrated detection model, Swin-transformer-YOLOv5 or
Swin-T-YOLOv5, was proposed for real-time wine grape bunch detection to inherit
the advantages from both YOLOv5 and Swin-transformer. The research was
conducted on two different grape varieties of Chardonnay (always white berry
skin) and Merlot (white or white-red mix berry skin when immature; red when
matured) from July to September in 2019. To verify the superiority of
Swin-T-YOLOv5, its performance was compared against several commonly
used/competitive object detectors, including Faster R-CNN, YOLOv3, YOLOv4, and
YOLOv5. All models were assessed under different test conditions, including two
different weather conditions (sunny and cloudy), two different berry maturity
stages (immature and mature), and three different sunlight
directions/intensities (morning, noon, and afternoon) for a comprehensive
comparison. Additionally, the predicted number of grape bunches by
Swin-T-YOLOv5 was further compared with ground truth values, including both
in-field manual counting and manual labeling during the annotation process.
Results showed that the proposed Swin-T-YOLOv5 outperformed all other studied
models for grape bunch detection, with up to 97% mean Average Precision (mAP)
and an F1-score of 0.89 when the weather was cloudy. This mAP was
approximately 44%, 18%, 14%, and 4% greater than that of Faster R-CNN, YOLOv3,
YOLOv4, and YOLOv5, respectively. Swin-T-YOLOv5 achieved its lowest mAP (90%)
and F1-score (0.82) when detecting immature berries, where its mAP was still
approximately 40%, 5%, 3%, and 1% greater than that of the same models.
Furthermore, Swin-T-YOLOv5 performed better on the Chardonnay variety,
achieving up to 0.91 R2 and 2.36 root mean square error (RMSE) when comparing
the predictions with ground truth. However, it underperformed on the Merlot
variety, achieving only up to 0.70 R2 and 3.30 RMSE.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 19:32:07 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 08:33:12 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 08:29:12 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Lu",
"Shenglian",
""
],
[
"Liu",
"Xiaoyu",
""
],
[
"He",
"Zixaun",
""
],
[
"Liu",
"Wenbo",
""
],
[
"Zhang",
"Xin",
""
],
[
"Karkee",
"Manoj",
""
]
] |
new_dataset
| 0.973638 |
2209.10892
|
Peng Cheng
|
Jiachuan Wang, Peng Cheng, Libin Zheng, Lei Chen, Wenjie Zhang
|
Online Ridesharing with Meeting Points [Technical Report]
|
18 pages, VLDB 2023
| null |
10.14778/3565838.3565849
| null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, ridesharing becomes a popular commuting mode. Dynamically arriving
riders post their origins and destinations, then the platform assigns drivers
to serve them. In ridesharing, different groups of riders can be served by one
driver if their trips can share common routes. Recently, many ridesharing
companies (e.g., Didi and Uber) further propose a new mode, namely "ridesharing
with meeting points". Specifically, with a short walking distance but less
payment, riders can be picked up and dropped off around their origins and
destinations, respectively. In addition, meeting points enables more flexible
routing for drivers, which can potentially improve the global profit of the
system. In this paper, we first formally define the Meeting-Point-based Online
Ridesharing Problem (MORP). We prove that MORP is NP-hard and there is no
polynomial-time deterministic algorithm with a constant competitive ratio for
it. We notice that a structure of vertex set, $k$-skip cover, fits well to the
MORP. $k$-skip cover tends to find the vertices (meeting points) that are
convenient for riders and drivers to come and go. With meeting points, MORP
tends to serve more riders with these convenient vertices. Based on the idea,
we introduce a convenience-based meeting point candidates selection algorithm.
We further propose a hierarchical meeting-point oriented graph (HMPO graph),
which ranks vertices for assignment effectiveness and constructs $k$-skip cover
to accelerate the whole assignment process. Finally, we utilize the merits of
$k$-skip cover points for ridesharing and propose a novel algorithm, namely
SMDB, to solve MORP. Extensive experiments on real and synthetic datasets
validate the effectiveness and efficiency of our algorithms.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2022 09:55:03 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Sep 2022 01:29:21 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 09:32:12 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Wang",
"Jiachuan",
""
],
[
"Cheng",
"Peng",
""
],
[
"Zheng",
"Libin",
""
],
[
"Chen",
"Lei",
""
],
[
"Zhang",
"Wenjie",
""
]
] |
new_dataset
| 0.999557 |
2303.08401
|
Zipeng Qi
|
Zipeng Qi, Hao Chen, Chenyang Liu, Zhenwei Shi and Zhengxia Zou
|
Implicit Ray-Transformers for Multi-view Remote Sensing Image
Segmentation
| null | null |
10.1109/TGRS.2023.3285659
| null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The mainstream CNN-based remote sensing (RS) image semantic segmentation
approaches typically rely on massive labeled training data. Such a paradigm
struggles with the problem of RS multi-view scene segmentation with limited
labeled views due to the lack of considering 3D information within the scene.
In this paper, we propose ''Implicit Ray-Transformer (IRT)'' based on Implicit
Neural Representation (INR), for RS scene semantic segmentation with sparse
labels (such as 4-6 labels per 100 images). We explore a new way of introducing
multi-view 3D structure priors to the task for accurate and view-consistent
semantic segmentation. The proposed method includes a two-stage learning
process. In the first stage, we optimize a neural field to encode the color and
3D structure of the remote sensing scene based on multi-view images. In the
second stage, we design a Ray Transformer to leverage the relations between the
neural field 3D features and 2D texture features for learning better semantic
representations. Different from previous methods that only consider 3D prior or
2D features, we incorporate additional 2D texture information and 3D prior by
broadcasting CNN features to different point features along the sampled ray. To
verify the effectiveness of the proposed method, we construct a challenging
dataset containing six synthetic sub-datasets collected from the Carla platform
and three real sub-datasets from Google Maps. Experiments show that the
proposed method outperforms the CNN-based methods and the state-of-the-art
INR-based segmentation methods in quantitative and qualitative metrics.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 07:05:07 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Qi",
"Zipeng",
""
],
[
"Chen",
"Hao",
""
],
[
"Liu",
"Chenyang",
""
],
[
"Shi",
"Zhenwei",
""
],
[
"Zou",
"Zhengxia",
""
]
] |
new_dataset
| 0.982437 |
2303.16565
|
Kai Li
|
Xuechao Zou, Kai Li, Junliang Xing, Pin Tao, Yachao Cui
|
PMAA: A Progressive Multi-scale Attention Autoencoder Model for
High-performance Cloud Removal from Multi-temporal Satellite Imagery
|
Accepted by ECAI 2023
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Satellite imagery analysis plays a pivotal role in remote sensing; however,
information loss due to cloud cover significantly impedes its application.
Although existing deep cloud removal models have achieved notable outcomes,
they scarcely consider contextual information. This study introduces a
high-performance cloud removal architecture, termed Progressive Multi-scale
Attention Autoencoder (PMAA), which concurrently harnesses global and local
information to construct robust contextual dependencies using a novel
Multi-scale Attention Module (MAM) and a novel Local Interaction Module (LIM).
PMAA establishes long-range dependencies of multi-scale features using MAM and
modulates the reconstruction of fine-grained details utilizing LIM, enabling
simultaneous representation of fine- and coarse-grained features at the same
level. With the help of diverse and multi-scale features, PMAA consistently
outperforms the previous state-of-the-art model CTGAN on two benchmark
datasets. Moreover, PMAA boasts considerable efficiency advantages, with only
0.5% and 14.6% of the parameters and computational complexity of CTGAN,
respectively. These comprehensive results underscore PMAA's potential as a
lightweight cloud removal network suitable for deployment on edge devices to
accomplish large-scale cloud removal tasks. Our source code and pre-trained
models are available at https://github.com/XavierJiezou/PMAA.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 09:47:48 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2023 16:01:41 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Zou",
"Xuechao",
""
],
[
"Li",
"Kai",
""
],
[
"Xing",
"Junliang",
""
],
[
"Tao",
"Pin",
""
],
[
"Cui",
"Yachao",
""
]
] |
new_dataset
| 0.999133 |
2304.07575
|
Sz Gao
|
Shuzheng Gao, Xin-Cheng Wen, Cuiyun Gao, Wenxuan Wang, Hongyu Zhang,
Michael R. Lyu
|
What Makes Good In-context Demonstrations for Code Intelligence Tasks
with LLMs?
|
This paper is accepted by ASE 2023
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained models of source code have gained widespread popularity in many
code intelligence tasks. Recently, with the scaling of the model and corpus
size, large language models have shown the ability of in-context learning
(ICL). ICL employs task instructions and a few examples as demonstrations, and
then inputs the demonstrations to the language models for making predictions.
This new learning paradigm is training-free and has shown impressive
performance in various natural language processing and code intelligence tasks.
However, the performance of ICL heavily relies on the quality of
demonstrations, e.g., the selected examples. It is important to systematically
investigate how to construct a good demonstration for code-related tasks. In
this paper, we empirically explore the impact of three key factors on the
performance of ICL in code intelligence tasks: the selection, order, and number
of demonstration examples. We conduct extensive experiments on three code
intelligence tasks including code summarization, bug fixing, and program
synthesis. Our experimental results demonstrate that all the above three
factors dramatically impact the performance of ICL in code intelligence tasks.
Additionally, we summarize our findings and provide takeaway suggestions on how
to construct effective demonstrations, taking into account these three
perspectives. We also show that a carefully-designed demonstration based on our
findings can lead to substantial improvements over widely-used demonstration
construction methods, e.g., improving BLEU-4, EM, and EM by at least 9.90%,
175.96%, and 50.81% on code summarization, bug fixing, and program synthesis,
respectively.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 15:13:58 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2023 13:46:02 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Gao",
"Shuzheng",
""
],
[
"Wen",
"Xin-Cheng",
""
],
[
"Gao",
"Cuiyun",
""
],
[
"Wang",
"Wenxuan",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
new_dataset
| 0.977173 |
2304.07905
|
Saugat Pandey
|
Saugat Pandey and Alvitta Ottley
|
Mini-VLAT: A Short and Effective Measure of Visualization Literacy
| null |
Computer Graphics forum Volume 42 (2023), Number 3
|
10.1111/cgf.14809
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The visualization community regards visualization literacy as a necessary
skill. Yet, despite the recent increase in research into visualization literacy
by the education and visualization communities, we lack practical and
time-effective instruments for the widespread measurements of people's
comprehension and interpretation of visual designs. We present Mini-VLAT, a
brief but practical visualization literacy test. The Mini-VLAT is a 12-item
short form of the 53-item Visualization Literacy Assessment Test (VLAT). The
Mini-VLAT is reliable (coefficient omega = 0.72) and strongly correlates with
the VLAT. Five visualization experts validated the Mini-VLAT items, yielding an
average content validity ratio (CVR) of 0.6. We further validate Mini-VLAT by
demonstrating a strong positive correlation between study participants'
Mini-VLAT scores and their aptitude for learning an unfamiliar visualization
using a Parallel Coordinate Plot test. Overall, the Mini-VLAT items showed a
similar pattern of validity and reliability as the 53-item VLAT. The results
show that Mini-VLAT is a psychometrically sound and practical short measure of
visualization literacy.
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 22:00:20 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 19:29:51 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Pandey",
"Saugat",
""
],
[
"Ottley",
"Alvitta",
""
]
] |
new_dataset
| 0.995208 |
2304.07916
|
Haidong Zhu
|
Haidong Zhu, Wanrong Zheng, Zhaoheng Zheng, Ram Nevatia
|
GaitRef: Gait Recognition with Refined Sequential Skeletons
|
IJCB 2023 oral. Code is available at
https://github.com/haidongz-usc/GaitRef
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying humans with their walking sequences, known as gait recognition,
is a useful biometric understanding task as it can be observed from a long
distance and does not require cooperation from the subject. Two common
modalities used for representing the walking sequence of a person are
silhouettes and joint skeletons. Silhouette sequences, which record the
boundary of the walking person in each frame, may suffer from the variant
appearances from carried-on objects and clothes of the person. Framewise joint
detections are noisy and introduce some jitters that are not consistent with
sequential detections. In this paper, we combine the silhouettes and skeletons
and refine the framewise joint predictions for gait recognition. With temporal
information from the silhouette sequences, we show that the refined skeletons
can improve gait recognition performance without extra annotations. We compare
our methods on four public datasets, CASIA-B, OUMVLP, Gait3D and GREW, and show
state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 23:37:24 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 00:29:45 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 16:06:11 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Zhu",
"Haidong",
""
],
[
"Zheng",
"Wanrong",
""
],
[
"Zheng",
"Zhaoheng",
""
],
[
"Nevatia",
"Ram",
""
]
] |
new_dataset
| 0.999359 |
2307.05545
|
Zhongliang Jiang
|
Zhongliang Jiang, Septimiu E. Salcudean, Nassir Navab
|
Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
|
Accepted by Medical Image Analysis
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ultrasound (US) is one of the most widely used modalities for clinical
intervention and diagnosis due to the merits of providing non-invasive,
radiation-free, and real-time images. However, free-hand US examinations are
highly operator-dependent. Robotic US System (RUSS) aims at overcoming this
shortcoming by offering reproducibility, while also aiming at improving
dexterity, and intelligent anatomy and disease-aware imaging. In addition to
enhancing diagnostic outcomes, RUSS also holds the potential to provide medical
interventions for populations suffering from the shortage of experienced
sonographers. In this paper, we categorize RUSS as teleoperated or autonomous.
Regarding teleoperated RUSS, we summarize their technical developments and
clinical evaluations. This survey then focuses on the review of
recent work on autonomous robotic US imaging. We demonstrate that machine
learning and artificial intelligence present the key techniques, which enable
intelligent patient and process-specific, motion and deformation-aware robotic
image acquisition. We also show that the research on artificial intelligence
for autonomous RUSS has directed the research community toward understanding
and modeling expert sonographers' semantic reasoning and action. Here, we call
this process, the recovery of the "language of sonography". This side result of
research on autonomous robotic US acquisitions could be considered as valuable
and essential as the progress made in the robotic US examination itself. This
article will provide both engineers and clinicians with a comprehensive
understanding of RUSS by surveying underlying techniques.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 23:24:36 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2023 18:47:37 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Jiang",
"Zhongliang",
""
],
[
"Salcudean",
"Septimiu E.",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.997723 |
2307.09724
|
Kibeom Hong
|
Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim,
Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun
|
AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks
|
Accepted by ICCV 2023. Code is available at this
https://github.com/Kibeom-Hong/AesPA-Net
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To deliver the artistic expression of the target style, recent studies
exploit the attention mechanism owing to its ability to map the local patches
of the style image to the corresponding patches of the content image. However,
because of the low semantic correspondence between arbitrary content and
artworks, the attention module repeatedly abuses specific local patches from
the style image, resulting in disharmonious and evident repetitive artifacts.
To overcome this limitation and accomplish impeccable artistic style transfer,
we focus on enhancing the attention mechanism and capturing the rhythm of
patterns that organize the style. In this paper, we introduce a novel metric,
namely pattern repeatability, that quantifies the repetition of patterns in the
style image. Based on the pattern repeatability, we propose Aesthetic
Pattern-Aware style transfer Networks (AesPA-Net) that discover the sweet spot
of local and global style expressions. In addition, we propose a novel
self-supervisory task to encourage the attention mechanism to learn precise and
meaningful semantic correspondence. Lastly, we introduce the patch-wise style
loss to transfer the elaborate rhythm of local patterns. Through qualitative
and quantitative evaluations, we verify the reliability of the proposed pattern
repeatability that aligns with human perception, and demonstrate the
superiority of the proposed framework.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2023 02:26:20 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jul 2023 04:14:01 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 13:14:26 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Hong",
"Kibeom",
""
],
[
"Jeon",
"Seogkyu",
""
],
[
"Lee",
"Junsoo",
""
],
[
"Ahn",
"Namhyuk",
""
],
[
"Kim",
"Kunhee",
""
],
[
"Lee",
"Pilhyeon",
""
],
[
"Kim",
"Daesik",
""
],
[
"Uh",
"Youngjung",
""
],
[
"Byun",
"Hyeran",
""
]
] |
new_dataset
| 0.976278 |
2308.03064
|
Alberto Dennunzio
|
Alberto Dennunzio and Enrico Formenti and Luciano Margara
|
An Easily Checkable Algebraic Characterization of Positive Expansivity
for Additive Cellular Automata over a Finite Abelian Group
|
12 pages
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide an easily checkable algebraic characterization of positive
expansivity for Additive Cellular Automata over a finite abelian group. First
of all, an easily checkable characterization of positive expansivity is
provided for the non trivial subclass of Linear Cellular Automata over the
alphabet $(\Z/m\Z)^n$. Then, we show how it can be exploited to decide positive
expansivity for the whole class of Additive Cellular Automata over a finite
abelian group.
|
[
{
"version": "v1",
"created": "Sun, 6 Aug 2023 09:20:12 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2023 16:18:07 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Dennunzio",
"Alberto",
""
],
[
"Formenti",
"Enrico",
""
],
[
"Margara",
"Luciano",
""
]
] |
new_dataset
| 0.974918 |
2308.03276
|
Chanwut Kittivorawong
|
Chanwut Kittivorawong, Yongming Ge, Yousef Helal, Alvin Cheung
|
Spatialyze: A Geospatial Video Analytics System with Spatial-Aware
Optimizations
|
GitHub Repository: https://github.com/apperception-db/spatialyze
| null | null | null |
cs.DB cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Videos that are shot using commodity hardware such as phones and surveillance
cameras record various metadata such as time and location. We encounter such
geospatial videos on a daily basis and such videos have been growing in volume
significantly. Yet, we do not have data management systems that allow users to
interact with such data effectively.
In this paper, we describe Spatialyze, a new framework for end-to-end
querying of geospatial videos. Spatialyze comes with a domain-specific language
where users can construct geospatial video analytic workflows using a 3-step,
declarative, build-filter-observe paradigm. Internally, Spatialyze leverages
the declarative nature of such workflows, the temporal-spatial metadata stored
with videos, and physical behavior of real-world objects to optimize the
execution of workflows. Our results using real-world videos and workflows show
that Spatialyze can reduce execution time by up to 5.3x, while maintaining up
to 97.1% accuracy compared to unoptimized execution.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 03:35:47 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2023 01:55:32 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Kittivorawong",
"Chanwut",
""
],
[
"Ge",
"Yongming",
""
],
[
"Helal",
"Yousef",
""
],
[
"Cheung",
"Alvin",
""
]
] |
new_dataset
| 0.967128 |
2308.03770
|
Francesco Rundo Dr.
|
Francesco Rundo, Michael Sebastian Rundo, Concetto Spampinato
|
Visual Saliency Detection in Advanced Driver Assistance Systems
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual Saliency refers to the innate human mechanism of focusing on and
extracting important features from the observed environment. Recently, there
has been a notable surge of interest in the field of automotive research
regarding the estimation of visual saliency. While operating a vehicle, drivers
naturally direct their attention towards specific objects, employing
brain-driven saliency mechanisms that prioritize certain elements over others.
In this investigation, we present an intelligent system that combines a
drowsiness detection system for drivers with a scene comprehension pipeline
based on saliency. To achieve this, we have implemented a specialized 3D deep
network for semantic segmentation, which has been pretrained and tailored for
processing the frames captured by an automotive-grade external camera. The
proposed pipeline was hosted on an embedded platform utilizing the STA1295
core, featuring dual ARM A7 cores and an embedded hardware accelerator.
Additionally, we employ an innovative biosensor embedded in the car steering
wheel to monitor the driver's drowsiness, gathering the PhotoPlethysmoGraphy
(PPG) signal of the driver. A dedicated 1D temporal deep convolutional network
has been devised to classify the collected PPG time series, enabling us to
assess the driver's level of attentiveness. Ultimately, we compare the determined
attention level of the driver with the corresponding saliency-based scene
classification to evaluate the overall safety level. The efficacy of the
proposed pipeline has been validated through extensive experimental results.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 15:41:54 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Rundo",
"Francesco",
""
],
[
"Rundo",
"Michael Sebastian",
""
],
[
"Spampinato",
"Concetto",
""
]
] |
new_dataset
| 0.998663 |
2308.03774
|
Nick Zhang
|
Nick Zhang
|
Knowledge Consilience: One Culture, Two Cultures or Many Cultures?
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
The hostility between the two cultures, scientific and literary, was framed
by C.P. Snow in 1959 and later by others. The scientific culture is nowadays
often identified with STEM (Science, Technology, Engineering and Mathematics)
whereas the literary culture generally refers to humanities and social
sciences. Wilson expressed the wish for the unity of knowledge. We put forward
the notions of knowledge distance and knowledge consilience threshold to
quantitatively measure distance and coupling process between different branches
of knowledge. Our findings suggest that the gulf between the two cultures is
widening.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 11:26:32 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Zhang",
"Nick",
""
]
] |
new_dataset
| 0.964081 |
2308.03788
|
Christian Rack
|
Christian Rack, Tamara Fernando, Murat Yalcin, Andreas Hotho, Marc
Erich Latoschik
|
Who Is Alyx? A new Behavioral Biometric Dataset for User Identification
in XR
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article presents a new dataset containing motion and physiological data
of users playing the game "Half-Life: Alyx". The dataset specifically targets
behavioral and biometric identification of XR users. It includes motion and
eye-tracking data captured by an HTC Vive Pro from 71 users playing the game on
two separate days for 45 minutes. Additionally, we collected physiological data
from 31 of these users. We provide benchmark performances for the task of
motion-based identification of XR users with two prominent state-of-the-art
deep learning architectures (GRU and CNN). After training on the first session
of each user, the best model can identify the 71 users in the second session
with a mean accuracy of 95% within 2 minutes. The dataset is freely available
at https://github.com/cschell/who-is-alyx
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 09:34:11 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Rack",
"Christian",
""
],
[
"Fernando",
"Tamara",
""
],
[
"Yalcin",
"Murat",
""
],
[
"Hotho",
"Andreas",
""
],
[
"Latoschik",
"Marc Erich",
""
]
] |
new_dataset
| 0.999687 |
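The benchmark described above identifies 71 XR users from motion data with GRU and CNN models. As a rough illustration of the GRU variant only, here is a minimal PyTorch sketch that classifies fixed-length windows of per-frame motion features into user identities; the feature dimensionality, hidden size, and window length are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MotionGRUIdentifier(nn.Module):
    """Illustrative GRU classifier for motion-based user identification (hyperparameters assumed)."""

    def __init__(self, feat_dim: int = 21, hidden: int = 128, n_users: int = 71):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_users)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim) -- e.g. head/hand positions and rotations per tracked frame
        _, h_n = self.gru(x)           # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])      # logits over user identities

model = MotionGRUIdentifier()
window = torch.randn(4, 300, 21)       # 4 windows of 300 tracked frames each
print(model(window).shape)             # torch.Size([4, 71])
```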
2308.03806
|
Jeff Yan
|
Ping Wang, Shishir Nagaraja, Aur\'elien Bourquard, Haichang Gao, Jeff
Yan
|
SoK: Acoustic Side Channels
|
16 pages
| null | null | null |
cs.CR cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a state-of-the-art analysis of acoustic side channels, cover all
the significant academic research in the area, discuss their security
implications and countermeasures, and identify areas for future research. We
also make an attempt to bridge side channels and inverse problems, two fields
that appear to be completely isolated from each other but have deep
connections.
|
[
{
"version": "v1",
"created": "Sun, 6 Aug 2023 14:36:33 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Wang",
"Ping",
""
],
[
"Nagaraja",
"Shishir",
""
],
[
"Bourquard",
"Aurélien",
""
],
[
"Gao",
"Haichang",
""
],
[
"Yan",
"Jeff",
""
]
] |
new_dataset
| 0.9991 |
2308.03868
|
Brian Tang
|
Brian Tang and Kang G. Shin
|
Eye-Shield: Real-Time Protection of Mobile Device Screen Information
from Shoulder Surfing
|
Published at 32nd USENIX Security Symposium (2023) U.S. Pat. App. No.
63/468,650-Conf. #8672
| null | null | null |
cs.CR cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
People use mobile devices ubiquitously for computing, communication, storage,
web browsing, and more. As a result, the information accessed and stored within
mobile devices, such as financial and health information, text messages, and
emails, can often be sensitive. Despite this, people frequently use their
mobile devices in public areas, becoming susceptible to a simple yet effective
attack, shoulder surfing. Shoulder surfing occurs when a person near a mobile
user peeks at the user's mobile device, potentially acquiring passcodes, PINs,
browsing behavior, or other personal information. We propose Eye-Shield, a
solution to prevent shoulder surfers from accessing or stealing sensitive
on-screen information. Eye-Shield is designed to protect all types of on-screen
information in real time, without any serious impediment to users' interactions
with their mobile devices. Eye-Shield generates images that appear readable at
close distances, but appear blurry or pixelated at farther distances and wider
angles. It is capable of protecting on-screen information from shoulder
surfers, operating in real time, and being minimally intrusive to the intended
users. Eye-Shield protects images and text from shoulder surfers by reducing
recognition rates to 24.24% and 15.91%. Our implementations of Eye-Shield, with
frame rates of 24 FPS for Android and 43 FPS for iOS, effectively work on
screen resolutions as high as 1440x3088. Eye-Shield also incurs acceptable
memory usage, CPU utilization, and energy overhead. Finally, our MTurk and
in-person user studies indicate that Eye-Shield protects on-screen information
without a large usability cost for privacy-conscious users.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 18:40:08 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Tang",
"Brian",
""
],
[
"Shin",
"Kang G.",
""
]
] |
new_dataset
| 0.99956 |
2308.03897
|
Theodoros Trochatos
|
Theodoros Trochatos, Chuanqi Xu, Sanjay Deshpande, Yao Lu, Yongshan
Ding, Jakub Szefer
|
Hardware Architecture for a Quantum Computer Trusted Execution
Environment
| null | null | null | null |
cs.ET quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
The cloud-based environments in which today's and future quantum computers
will operate raise concerns about the security and privacy of users'
intellectual property. Quantum circuits submitted to cloud-based quantum
computer providers represent sensitive or proprietary algorithms developed by
users that need protection. Further, input data is hard-coded into the
circuits, and leakage of the circuits can expose users' data. To help protect
users' circuits and data from possibly malicious quantum computer cloud
providers, this work presents the first hardware architecture for a trusted
execution environment for quantum computers. To protect the user's circuits and
data, the quantum computer control pulses are obfuscated with decoy control
pulses. While digital data can be encrypted, analog control pulses cannot, and
this paper proposes a novel decoy pulse approach to obfuscate the analog
control pulses. The proposed decoy pulses can easily be added to the software
by users. Meanwhile, the hardware components of the architecture proposed in
this paper take care of eliminating, i.e. attenuating, the decoy pulses inside
the superconducting quantum computer's dilution refrigerator before they reach
the qubits. The hardware architecture also contains tamper-resistant features
to protect the trusted hardware and users' information. The work leverages a
new metric of variational distance to analyze the impact and scalability of
hardware protection. The variational distance of the circuits protected with
our scheme, compared to unprotected circuits, is in the range of only $0.16$ to
$0.26$. This work demonstrates that protection from possibly malicious cloud
providers is feasible and all the hardware components needed for the proposed
architecture are available today.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 20:18:36 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Trochatos",
"Theodoros",
""
],
[
"Xu",
"Chuanqi",
""
],
[
"Deshpande",
"Sanjay",
""
],
[
"Lu",
"Yao",
""
],
[
"Ding",
"Yongshan",
""
],
[
"Szefer",
"Jakub",
""
]
] |
new_dataset
| 0.999294 |
2308.03898
|
Burak M Gonultas
|
Burak M. Gonultas, Pratik Mukherjee, O. Goktug Poyrazoglu and Volkan
Isler
|
System Identification and Control of Front-Steered Ackermann Vehicles
through Differentiable Physics
|
Accepted for IROS 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the problem of system identification and control of
a front-steered vehicle which abides by the Ackermann geometry constraints.
This problem arises naturally for on-road and off-road vehicles that require
reliable system identification and basic feedback controllers for various
applications such as lane keeping and way-point navigation. Traditional system
identification requires expensive equipment and is time-consuming. In this work,
we explore the use of differentiable physics for system identification and
controller design and make the following contributions: i) We develop a
differentiable physics simulator (DPS) to provide a method for the system
identification of front-steered class of vehicles whose system parameters are
learned using a gradient-based method; ii) We provide results for our
gradient-based method that exhibit better sample efficiency in comparison to
other gradient-free methods; iii) We validate the learned system parameters by
implementing a feedback controller to demonstrate stable lane keeping
performance on a real front-steered vehicle, the F1TENTH; iv) Further, we
provide results exhibiting comparable lane keeping behavior for system
parameters learned using our gradient-based method with lane keeping behavior
of the actual system parameters of the F1TENTH.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 20:19:03 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Gonultas",
"Burak M.",
""
],
[
"Mukherjee",
"Pratik",
""
],
[
"Poyrazoglu",
"O. Goktug",
""
],
[
"Isler",
"Volkan",
""
]
] |
new_dataset
| 0.992161 |
2308.03908
|
Soumyabrata Chaudhuri
|
Soumyabrata Chaudhuri and Saumik Bhattacharya
|
ViLP: Knowledge Exploration using Vision, Language, and Pose Embeddings
for Video Action Recognition
|
7 pages, 3 figures, 2 Tables
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video Action Recognition (VAR) is a challenging task due to its inherent
complexities. Though different approaches have been explored in the literature,
designing a unified framework to recognize a large number of human actions is
still a challenging problem. Recently, Multi-Modal Learning (MML) has
demonstrated promising results in this domain. In the literature, 2D skeleton or
pose modality has often been used for this task, either independently or in
conjunction with the visual information (RGB modality) present in videos.
However, the combination of pose, visual information, and text attributes has
not been explored yet, though text and pose attributes independently have been
proven to be effective in numerous computer vision tasks. In this paper, we
present the first pose augmented Vision-language model (VLM) for VAR. Notably,
our scheme achieves an accuracy of 92.81% and 73.02% on two popular human video
action recognition benchmark datasets, UCF-101 and HMDB-51, respectively, even
without any video data pre-training, and an accuracy of 96.11% and 75.75% after
kinetics pre-training.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 20:50:54 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Chaudhuri",
"Soumyabrata",
""
],
[
"Bhattacharya",
"Saumik",
""
]
] |
new_dataset
| 0.996578 |
2308.03990
|
Chen Cao
|
Richard Jiarui Tong, Cassie Chen Cao, Timothy Xueqian Lee, Guodong
Zhao, Ray Wan, Feiyue Wang, Xiangen Hu, Robin Schmucker, Jinsheng Pan, Julian
Quevedo, Yu Lu
|
NEOLAF, an LLM-powered neural-symbolic cognitive architecture
| null | null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the Never Ending Open Learning Adaptive Framework
(NEOLAF), an integrated neural-symbolic cognitive architecture that models and
constructs intelligent agents. The NEOLAF framework is a superior approach to
constructing intelligent agents than both the pure connectionist and pure
symbolic approaches due to its explainability, incremental learning,
efficiency, collaborative and distributed learning, human-in-the-loop
enablement, and self-improvement. The paper further presents a compelling
experiment where a NEOLAF agent, built as a problem-solving agent, is fed with
complex math problems from the open-source MATH dataset. The results
demonstrate NEOLAF's superior learning capability and its potential to
revolutionize the field of cognitive architectures and self-improving adaptive
instructional systems.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 02:13:04 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Tong",
"Richard Jiarui",
""
],
[
"Cao",
"Cassie Chen",
""
],
[
"Lee",
"Timothy Xueqian",
""
],
[
"Zhao",
"Guodong",
""
],
[
"Wan",
"Ray",
""
],
[
"Wang",
"Feiyue",
""
],
[
"Hu",
"Xiangen",
""
],
[
"Schmucker",
"Robin",
""
],
[
"Pan",
"Jinsheng",
""
],
[
"Quevedo",
"Julian",
""
],
[
"Lu",
"Yu",
""
]
] |
new_dataset
| 0.997002 |
2308.04006
|
Shashank Gupta
|
Shashank Gupta
|
An Ethereum-based Product Identification System for Anti-counterfeits
|
5 pages, 5 figures
| null | null | null |
cs.CR cs.DB cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Fake products are items that are marketed and sold as genuine, high-quality
products but are counterfeit or low-quality knockoffs. These products are often
designed to closely mimic the appearance and branding of the genuine product to
deceive consumers into thinking they are purchasing the real thing. Fake
products can range from clothing and accessories to electronics and other goods
and can be found in a variety of settings, including online marketplaces and
brick-and-mortar stores. Blockchain technology can be used to help detect fake
products in a few different ways. One of the most common ways is through the
use of smart contracts, which are self-executing contracts with the terms of
the agreement between buyer and seller being directly written into lines of
code. This allows for a high level of transparency and traceability in supply
chain transactions, making it easier to identify and prevent the sale of fake
products. Another approach is the use of unique product identifiers, such as
serial numbers or QR codes, recorded on the blockchain. This allows consumers to easily
verify the authenticity of a product by scanning the code and checking it
against the information recorded on the blockchain. In this study, we will use
smart contracts to detect fake products and will evaluate each implementation
based on the gas cost and ether used.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 02:57:41 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Gupta",
"Shashank",
""
]
] |
new_dataset
| 0.999653 |
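The study above records unique product identifiers on a blockchain via smart contracts so consumers can verify authenticity by scanning a code. The contract interface is not given in the abstract, so the Python sketch below (using web3.py) assumes a hypothetical registry contract exposing a `getProduct(bytes32)` view function, a placeholder RPC endpoint, and a placeholder contract address; it only illustrates the client-side verification step, not the paper's actual contract.

```python
from web3 import Web3

# Hypothetical registry contract: getProduct(bytes32 idHash) -> (manufacturer, name, exists)
REGISTRY_ABI = [{
    "name": "getProduct", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "idHash", "type": "bytes32"}],
    "outputs": [{"name": "manufacturer", "type": "address"},
                {"name": "name", "type": "string"},
                {"name": "exists", "type": "bool"}],
}]

def verify_product(scanned_id: str, rpc_url: str, registry_address: str) -> bool:
    """Check a scanned serial number / QR payload against the on-chain registry (illustrative)."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    registry = w3.eth.contract(address=registry_address, abi=REGISTRY_ABI)
    id_hash = Web3.keccak(text=scanned_id)   # hash the identifier, as one might store it on-chain
    manufacturer, name, exists = registry.functions.getProduct(id_hash).call()
    return exists

# Example (placeholder endpoint and address):
# verify_product("SN-12345-ABC", "http://127.0.0.1:8545",
#                "0x0000000000000000000000000000000000000001")
```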
2308.04034
|
Utku Tefek
|
Utku Tefek, Ertem Esiner, Daisuke Mashima, Binbin Chen, Yih-Chun Hu
|
Caching-based Multicast Message Authentication in Time-critical
Industrial Control Systems
|
For viewing INFOCOM proceedings in IEEE Xplore see
https://ieeexplore.ieee.org/abstract/document/9796767
|
IEEE Conference on Computer Communications, London, United
Kingdom, 2022, pp. 1039-1048
|
10.1109/INFOCOM48880.2022.9796767
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Attacks against industrial control systems (ICSs) often exploit the
insufficiency of authentication mechanisms. Verifying whether the received
messages are intact and issued by legitimate sources can prevent malicious
data/command injection by illegitimate or compromised devices. However, the key
challenge is to introduce message authentication for various ICS communication
models, including multicast or broadcast, with a messaging rate that can be as
high as thousands of messages per second, within very stringent latency
constraints. For example, certain commands for protection in smart grids must
be delivered within 2 milliseconds, ruling out public-key cryptography. This
paper proposes two lightweight message authentication schemes, named CMA and
its multicast variant CMMA, that perform precomputation and caching to
authenticate future messages. With minimal precomputation and communication
overhead, C(M)MA eliminates all cryptographic operations for the source after
the message is given, and all expensive cryptographic operations for the
destinations after the message is received. C(M)MA considers the urgency
profile (or likelihood) of a set of future messages for even faster
verification of the most time-critical (or likely) messages. We demonstrate the
feasibility of C(M)MA in an ICS setting based on a substation automation system
in smart grids.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 04:21:36 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Tefek",
"Utku",
""
],
[
"Esiner",
"Ertem",
""
],
[
"Mashima",
"Daisuke",
""
],
[
"Chen",
"Binbin",
""
],
[
"Hu",
"Yih-Chun",
""
]
] |
new_dataset
| 0.999333 |
2308.04047
|
Dianze Li
|
Dianze Li and Jianing Li and Yonghong Tian
|
SODFormer: Streaming Object Detection with Transformer Using Events and
Frames
|
18 pages, 15 figures, in IEEE Transactions on Pattern Analysis and
Machine Intelligence
| null |
10.1109/TPAMI.2023.3298925
| null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The DAVIS camera, streaming two complementary sensing modalities of asynchronous
events and frames, has gradually been used to address major object detection
challenges (e.g., fast motion blur and low-light). However, how to effectively
leverage rich temporal cues and fuse two heterogeneous visual streams remains a
challenging endeavor. To address this challenge, we propose a novel streaming
object detector with Transformer, namely SODFormer, which first integrates
events and frames to continuously detect objects in an asynchronous manner.
Technically, we first build a large-scale multimodal neuromorphic object
detection dataset (i.e., PKU-DAVIS-SOD) with over 1080.1k manual labels. Then, we
design a spatiotemporal Transformer architecture to detect objects via an
end-to-end sequence prediction problem, where the novel temporal Transformer
module leverages rich temporal cues from two visual streams to improve the
detection performance. Finally, an asynchronous attention-based fusion module
is proposed to integrate two heterogeneous sensing modalities and take
complementary advantages from each end, which can be queried at any time to
locate objects and break through the limited output frequency from synchronized
frame-based fusion strategies. The results show that the proposed SODFormer
outperforms four state-of-the-art methods and our eight baselines by a
significant margin. We also show that our unifying framework works well even in
cases where the conventional frame-based camera fails, e.g., high-speed motion
and low-light conditions. Our dataset and code can be available at
https://github.com/dianzl/SODFormer.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 04:53:52 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Li",
"Dianze",
""
],
[
"Li",
"Jianing",
""
],
[
"Tian",
"Yonghong",
""
]
] |
new_dataset
| 0.98599 |
2308.04052
|
Roman Negri
|
Timothy Merino, Roman Negri, Dipika Rajesh, M Charity, Julian Togelius
|
The Five-Dollar Model: Generating Game Maps and Sprites from Sentence
Embeddings
|
to be published in AIIDE 2023
| null | null | null |
cs.LG cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The five-dollar model is a lightweight text-to-image generative architecture
that generates low dimensional images from an encoded text prompt. This model
can successfully generate accurate and aesthetically pleasing content in low
dimensional domains, with limited amounts of training data. Despite the small
size of both the model and datasets, the generated images are still able to
maintain the encoded semantic meaning of the textual prompt. We apply this
model to three small datasets: pixel art video game maps, video game sprite
images, and down-scaled emoji images and apply novel augmentation strategies to
improve the performance of our model on these limited datasets. We evaluate our
model's performance using the cosine similarity score between text-image pairs,
computed with the CLIP ViT-B/32 model.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 05:16:51 GMT"
}
] | 2023-08-09T00:00:00 |
[
[
"Merino",
"Timothy",
""
],
[
"Negri",
"Roman",
""
],
[
"Rajesh",
"Dipika",
""
],
[
"Charity",
"M",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.99934 |