Dataset schema (each record below follows this column order, with cells separated by `|`):

| column | type | notes |
|---|---|---|
| id | string | 9-10 chars |
| submitter | string | 2-52 chars, nullable |
| authors | string | 4-6.51k chars |
| title | string | 4-246 chars |
| comments | string | 1-523 chars, nullable |
| journal-ref | string | 4-345 chars, nullable |
| doi | string | 11-120 chars, nullable |
| report-no | string | 2-243 chars, nullable |
| categories | string | 5-98 chars |
| license | string | 9 classes |
| abstract | string | 33-3.33k chars |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class |
| probability | float64 | 0.95-1 |
2309.12928
|
Minyoung Kim
|
Minyoung Kim, Timothy Hospedales
|
BayesDLL: Bayesian Deep Learning Library
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We release a new Bayesian neural network library for PyTorch for large-scale
deep networks. Our library implements mainstream approximate Bayesian inference
algorithms: variational inference, MC-dropout, stochastic-gradient MCMC, and
Laplace approximation. The main differences from other existing Bayesian neural
network libraries are as follows: 1) Our library can deal with very large-scale
deep networks including Vision Transformers (ViTs). 2) Users need virtually zero
code modifications (e.g., the backbone network definition code does not need to
be modified at all). 3) Our library also allows the pre-trained
model weights to serve as a prior mean, which is very useful for performing
Bayesian inference with the large-scale foundation models like ViTs that are
hard to optimise from scratch with the downstream data alone. Our code is
publicly available at: \url{https://github.com/SamsungLabs/BayesDLL}\footnote{A
mirror repository is also available at:
\url{https://github.com/minyoungkim21/BayesDLL}.}.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 15:27:54 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Kim",
"Minyoung",
""
],
[
"Hospedales",
"Timothy",
""
]
] |
new_dataset
| 0.962355 |
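
The abstract above lists MC-dropout among the approximate inference methods BayesDLL implements. As a minimal orientation, the sketch below shows MC-dropout predictive inference in plain PyTorch; it is not BayesDLL's API, and the model, dropout rate, and sample count are arbitrary assumptions.

```python
# Minimal MC-dropout predictive inference in plain PyTorch, illustrating one of
# the approximate Bayesian methods listed in the abstract. Not BayesDLL's API;
# the model, dropout rate, and sample count are arbitrary assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(64, 3))

def mc_dropout_predict(model, x, n_samples=30):
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.var(dim=0)  # predictive mean and variance

mean, var = mc_dropout_predict(model, torch.randn(8, 16))
```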
2309.12938
|
Nalin Wadhwa
|
Nalin Wadhwa, Jui Pradhan, Atharv Sonwane, Surya Prakash Sahu,
Nagarajan Natarajan, Aditya Kanade, Suresh Parthasarathy, Sriram Rajamani
|
Frustrated with Code Quality Issues? LLMs can Help!
| null | null | null | null |
cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As software projects progress, the quality of code assumes paramount importance,
as it affects the reliability, maintainability and security of the software. For this
reason, static analysis tools are used in developer workflows to flag code
quality issues. However, developers need to spend extra effort to revise their
code to improve code quality based on the tool findings. In this work, we
investigate the use of (instruction-following) large language models (LLMs) to
assist developers in revising code to resolve code quality issues. We present a
tool, CORE (short for COde REvisions), built around a pair of LLMs:
a proposer and a ranker. Providers of static
analysis tools recommend ways to mitigate the tool warnings and developers
follow them to revise their code. The \emph{proposer LLM} of CORE takes the
same set of recommendations and applies them to generate candidate code
revisions. The candidates which pass the static quality checks are retained.
However, the LLM may introduce subtle, unintended functionality changes which
may go undetected by the static analysis. The \emph{ranker LLM} evaluates the
changes made by the proposer using a rubric that closely follows the acceptance
criteria that a developer would enforce. CORE uses the scores assigned by the
ranker LLM to rank the candidate revisions before presenting them to the
developer. CORE could revise 59.2% of Python files (across 52 quality checks) so
that they pass scrutiny by both a tool and a human reviewer. The ranker LLM
reduces false positives by 25.8% in these cases. CORE produced revisions
that passed the static analysis tool in 76.8% of Java files (across 10 quality
checks), comparable to the 78.3% achieved by a specialized program repair tool,
with significantly less engineering effort.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 15:37:07 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Wadhwa",
"Nalin",
""
],
[
"Pradhan",
"Jui",
""
],
[
"Sonwane",
"Atharv",
""
],
[
"Sahu",
"Surya Prakash",
""
],
[
"Natarajan",
"Nagarajan",
""
],
[
"Kanade",
"Aditya",
""
],
[
"Parthasarathy",
"Suresh",
""
],
[
"Rajamani",
"Sriram",
""
]
] |
new_dataset
| 0.988407 |
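
The proposer/ranker pipeline described in the abstract above can be pictured as the following schematic sketch. All helper functions are hypothetical placeholders standing in for LLM calls and the static analysis tool; none of them is CORE's actual interface.

```python
# Schematic sketch of the proposer/ranker pipeline described in the abstract.
# All helpers below are hypothetical placeholders, not CORE's actual interface.
from typing import List, Tuple

def propose_revisions(code: str, recommendation: str, n: int = 5) -> List[str]:
    # Placeholder for the proposer LLM: would return n candidate revisions.
    return [code] * n

def passes_static_checks(revision: str) -> bool:
    # Placeholder for re-running the static analysis tool on a candidate.
    return True

def score_revision(original: str, revision: str) -> float:
    # Placeholder for the ranker LLM scoring a candidate against a rubric.
    return 0.0

def revise(code: str, recommendation: str) -> List[Tuple[float, str]]:
    candidates = propose_revisions(code, recommendation)
    kept = [c for c in candidates if passes_static_checks(c)]            # filter
    ranked = sorted(((score_revision(code, c), c) for c in kept), reverse=True)
    return ranked  # best-scoring revisions are shown to the developer first
```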
2309.12941
|
Yuxin Deng
|
Zezhong Chen, Yuxin Deng, Wenjie Du
|
Trusta: Reasoning about Assurance Cases with Formal Methods and Large
Language Models
|
38 pages
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Assurance cases can be used to argue for the safety of products in safety
engineering. In safety-critical areas, the construction of assurance cases is
indispensable. Trustworthiness Derivation Trees (TDTs) enhance assurance cases
by incorporating formal methods, making automatic reasoning about assurance
cases possible. We present Trustworthiness Derivation Tree Analyzer
(Trusta), a desktop application designed to automatically construct and verify
TDTs. The tool has a built-in Prolog interpreter in its backend, and is
supported by the constraint solvers Z3 and MONA. Therefore, it can solve
constraints about logical formulas involving arithmetic, sets, Horn clauses
etc. Trusta also utilizes large language models to make the creation and
evaluation of assurance cases more convenient. It allows for interactive human
examination and modification. We evaluated top language models like
ChatGPT-3.5, ChatGPT-4, and PaLM 2 for generating assurance cases. Our tests
showed a 50%-80% similarity between machine-generated and human-created cases.
In addition, Trusta can extract formal constraints from text in natural
languages, facilitating an easier interpretation and validation process. This
extraction is subject to human review and correction, blending the best of
automated efficiency with human insight. To our knowledge, this marks the first
integration of large language models in automatically creating and reasoning about
assurance cases, bringing a novel approach to a traditional challenge. Through
several industrial case studies, Trusta has proven to quickly find some subtle
issues that are typically missed in manual inspection, demonstrating its
practical value in enhancing the assurance case development process.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 15:42:43 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Chen",
"Zezhong",
""
],
[
"Deng",
"Yuxin",
""
],
[
"Du",
"Wenjie",
""
]
] |
new_dataset
| 0.995319 |
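
The abstract above says Trusta's backend calls the Z3 solver for arithmetic constraints. A standalone z3py example of that kind of check is sketched below; the toy constraint is invented and has no connection to Trusta's TDT format.

```python
# Minimal z3py example of the kind of arithmetic constraint check the abstract
# attributes to Trusta's backend. The concrete constraint is an invented toy,
# not taken from Trusta or any Trustworthiness Derivation Tree.
from z3 import Int, Solver, sat

latency, budget = Int("latency"), Int("budget")
s = Solver()
s.add(latency >= 0, budget == 100, latency * 2 <= budget)  # toy safety claim
if s.check() == sat:
    print("claim satisfiable, witness:", s.model())
```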
2309.12960
|
Weicheng Ren
|
Weicheng Ren, Zixuan Li, Xiaolong Jin, Long Bai, Miao Su, Yantao Liu,
Saiping Guan, Jiafeng Guo, Xueqi Cheng
|
Nested Event Extraction upon Pivot Element Recognition
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nested Event Extraction (NEE) aims to extract complex event structures where
an event contains other events as its arguments recursively. Nested events
involve a kind of Pivot Elements (PEs) that simultaneously act as arguments of
outer events and as triggers of inner events, and thus connect them into nested
structures. This special characteristic of PEs brings challenges to existing
NEE methods, as they cannot well cope with the dual identities of PEs.
Therefore, this paper proposes a new model, called PerNee, which extracts
nested events mainly based on recognizing PEs. Specifically, PerNee first
recognizes the triggers of both inner and outer events and further recognizes
the PEs by classifying the relation type between trigger pairs. To obtain better
representations of triggers and arguments and further improve NEE performance,
PerNee incorporates information about both event types and argument roles
through prompt learning. Since existing NEE datasets (e.g.,
Genia11) are limited to specific domains and contain a narrow range of event
types with nested structures, we systematically categorize nested events in
the generic domain and construct a new NEE dataset, namely ACE2005-Nest.
Experimental results demonstrate that PerNee consistently achieves
state-of-the-art performance on ACE2005-Nest, Genia11 and Genia13.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 15:58:06 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Ren",
"Weicheng",
""
],
[
"Li",
"Zixuan",
""
],
[
"Jin",
"Xiaolong",
""
],
[
"Bai",
"Long",
""
],
[
"Su",
"Miao",
""
],
[
"Liu",
"Yantao",
""
],
[
"Guan",
"Saiping",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
new_dataset
| 0.991542 |
2309.13006
|
Lanyun Zhu
|
Tianrun Chen, Chenglong Fu, Ying Zang, Lanyun Zhu, Jia Zhang, Papa
Mao, Lingyun Sun
|
Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of AR/VR brings tremendous demands for 3D content.
While the widely-used Computer-Aided Design (CAD) method requires a
time-consuming and labor-intensive modeling process, sketch-based 3D modeling
offers a potential solution as a natural form of computer-human interaction.
However, the sparsity and ambiguity of sketches make it challenging to generate
high-fidelity content reflecting creators' ideas. Precise drawing from multiple
views or strategic step-by-step drawings is often required to tackle the
challenge but is not friendly to novice users. In this work, we introduce a
novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only
a single free-hand sketch without inputting multiple sketches or view
information. Specifically, we introduce a lightweight generation network for
efficient inference in real time and a structure-aware adversarial training
approach with a Stroke Enhancement Module (SEM) to capture the structural
information to facilitate learning of the realistic and fine-detailed shape
structures for high-fidelity performance. Extensive experiments demonstrated
the effectiveness of our approach with the state-of-the-art (SOTA) performance
on both synthetic and real datasets.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 17:12:13 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Chen",
"Tianrun",
""
],
[
"Fu",
"Chenglong",
""
],
[
"Zang",
"Ying",
""
],
[
"Zhu",
"Lanyun",
""
],
[
"Zhang",
"Jia",
""
],
[
"Mao",
"Papa",
""
],
[
"Sun",
"Lingyun",
""
]
] |
new_dataset
| 0.962434 |
2309.13035
|
Zitong Zhan
|
Zitong Zhan, Xiangfu Li, Qihang Li, Haonan He, Abhinav Pandey, Haitao
Xiao, Yangmengfei Xu, Xiangyu Chen, Kuan Xu, Kun Cao, Zhipeng Zhao, Zihan
Wang, Huan Xu, Zihang Fang, Yutian Chen, Wentao Wang, Xu Fang, Yi Du, Tianhao
Wu, Xiao Lin, Yuheng Qiu, Fan Yang, Jingnan Shi, Shaoshu Su, Yiren Lu,
Taimeng Fu, Karthik Dantu, Jiajun Wu, Lihua Xie, Marco Hutter, Luca Carlone,
Sebastian Scherer, Daning Huang, Yaoyu Hu, Junyi Geng, Chen Wang
|
PyPose v0.6: The Imperative Programming Interface for Robotics
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
PyPose is an open-source library for robot learning. It combines a
learning-based approach with physics-based optimization, which enables seamless
end-to-end robot learning. It has been used in many tasks due to its
meticulously designed application programming interface (API) and efficient
implementation. Since its initial launch in early 2022, PyPose has experienced
significant enhancements, incorporating a wide variety of new features into its
platform. To satisfy the growing demand for understanding and utilizing the
library and reduce the learning curve of new users, we present the fundamental
design principle of the imperative programming interface, and showcase the
flexible usage of diverse functionalities and modules using an extremely simple
Dubins car example. We also demonstrate that PyPose can be easily used to
navigate a real quadruped robot with a few lines of code.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 17:49:58 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Zhan",
"Zitong",
""
],
[
"Li",
"Xiangfu",
""
],
[
"Li",
"Qihang",
""
],
[
"He",
"Haonan",
""
],
[
"Pandey",
"Abhinav",
""
],
[
"Xiao",
"Haitao",
""
],
[
"Xu",
"Yangmengfei",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Xu",
"Kuan",
""
],
[
"Cao",
"Kun",
""
],
[
"Zhao",
"Zhipeng",
""
],
[
"Wang",
"Zihan",
""
],
[
"Xu",
"Huan",
""
],
[
"Fang",
"Zihang",
""
],
[
"Chen",
"Yutian",
""
],
[
"Wang",
"Wentao",
""
],
[
"Fang",
"Xu",
""
],
[
"Du",
"Yi",
""
],
[
"Wu",
"Tianhao",
""
],
[
"Lin",
"Xiao",
""
],
[
"Qiu",
"Yuheng",
""
],
[
"Yang",
"Fan",
""
],
[
"Shi",
"Jingnan",
""
],
[
"Su",
"Shaoshu",
""
],
[
"Lu",
"Yiren",
""
],
[
"Fu",
"Taimeng",
""
],
[
"Dantu",
"Karthik",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Xie",
"Lihua",
""
],
[
"Hutter",
"Marco",
""
],
[
"Carlone",
"Luca",
""
],
[
"Scherer",
"Sebastian",
""
],
[
"Huang",
"Daning",
""
],
[
"Hu",
"Yaoyu",
""
],
[
"Geng",
"Junyi",
""
],
[
"Wang",
"Chen",
""
]
] |
new_dataset
| 0.9783 |
2309.13037
|
Philipp Wu
|
Philipp Wu and Yide Shentu and Zhongke Yi and Xingyu Lin and Pieter
Abbeel
|
GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for
Robot Manipulators
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Imitation learning from human demonstrations is a powerful framework to teach
robots new skills. However, the performance of the learned policies is
bottlenecked by the quality, scale, and variety of the demonstration data. In
this paper, we aim to lower the barrier to collecting large and high-quality
human demonstration data by proposing GELLO, a general framework for building
low-cost and intuitive teleoperation systems for robotic manipulation. Given a
target robot arm, we build a GELLO controller that has the same kinematic
structure as the target arm, leveraging 3D-printed parts and off-the-shelf
motors. GELLO is easy to build and intuitive to use. Through an extensive user
study, we show that GELLO enables more reliable and efficient demonstration
collection compared to commonly used teleoperation devices in the imitation
learning literature such as VR controllers and 3D spacemouses. We further
demonstrate the capabilities of GELLO for performing complex bi-manual and
contact-rich manipulation tasks. To make GELLO accessible to everyone, we have
designed and built GELLO systems for 3 commonly used robotic arms: Franka, UR5,
and xArm. All software and hardware are open-sourced and can be found on our
website: https://wuphilipp.github.io/gello/.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 17:56:44 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Wu",
"Philipp",
""
],
[
"Shentu",
"Yide",
""
],
[
"Yi",
"Zhongke",
""
],
[
"Lin",
"Xingyu",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.999153 |
2309.13039
|
Junchen Liu
|
Xiaoxue Chen, Junchen Liu, Hao Zhao, Guyue Zhou, Ya-Qin Zhang
|
NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular
Objects with Neural Refractive-Reflective Fields
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural radiance fields (NeRF) have revolutionized the field of image-based
view synthesis. However, NeRF uses straight rays and fails to deal with
complicated light path changes caused by refraction and reflection. This
prevents NeRF from successfully synthesizing transparent or specular objects,
which are ubiquitous in real-world robotics and AR/VR applications. In this
paper, we introduce the refractive-reflective field. Taking the object
silhouette as input, we first utilize marching tetrahedra with a progressive
encoding to reconstruct the geometry of non-Lambertian objects and then model
refraction and reflection effects of the object in a unified framework using
Fresnel terms. Meanwhile, to achieve efficient and effective anti-aliasing, we
propose a virtual cone supersampling technique. We benchmark our method on
different shapes, backgrounds and Fresnel terms on both real-world and
synthetic datasets. We also qualitatively and quantitatively benchmark the
rendering results of various editing applications, including material editing,
object replacement/insertion, and environment illumination estimation. Codes
and data are publicly available at https://github.com/dawning77/NeRRF.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 17:59:12 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Chen",
"Xiaoxue",
""
],
[
"Liu",
"Junchen",
""
],
[
"Zhao",
"Hao",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Zhang",
"Ya-Qin",
""
]
] |
new_dataset
| 0.993915 |
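
The abstract above models reflection and refraction with Fresnel terms. For orientation, the snippet below implements the standard Schlick approximation to Fresnel reflectance; NeRRF itself may use the exact Fresnel equations, so treat this only as background.

```python
# Standard Schlick approximation to Fresnel reflectance, shown only to make the
# "Fresnel terms" in the abstract concrete; NeRRF may use the exact equations.
def fresnel_schlick(cos_theta: float, n1: float = 1.0, n2: float = 1.5) -> float:
    r0 = ((n1 - n2) / (n1 + n2)) ** 2          # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

print(fresnel_schlick(1.0))   # head-on: ~0.04 for an air-to-glass interface
print(fresnel_schlick(0.0))   # grazing incidence: reflectance approaches 1
```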
2108.05015
|
Xiao Wang
|
Xiao Wang, Jianing Li, Lin Zhu, Zhipeng Zhang, Zhe Chen, Xin Li,
Yaowei Wang, Yonghong Tian, Feng Wu
|
VisEvent: Reliable Object Tracking via Collaboration of Frame and Event
Flows
|
Accepted by IEEE Transactions on Cybernetics
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different from visible cameras which record intensity images frame by frame,
the biologically inspired event camera produces a stream of asynchronous and
sparse events with much lower latency. In practice, visible cameras can better
perceive texture details and slow motion, while event cameras can be free from
motion blurs and have a larger dynamic range which enables them to work well
under fast motion and low illumination. Therefore, the two sensors can
cooperate with each other to achieve more reliable object tracking. In this
work, we propose a large-scale Visible-Event benchmark (termed VisEvent) due to
the lack of a realistic and large-scale dataset for this task. Our dataset consists
of 820 video pairs captured under low illumination, high speed, and background
clutter scenarios, and it is divided into a training subset and a testing subset,
which contain 500 and 320 videos, respectively. Based on VisEvent, we
transform the event flows into event images and construct more than 30 baseline
methods by extending current single-modality trackers into dual-modality
versions. More importantly, we further build a simple but effective tracking
algorithm by proposing a cross-modality transformer, to achieve more effective
feature fusion between visible and event data. Extensive experiments on the
proposed VisEvent dataset, FE108, COESOT, and two simulated datasets (i.e.,
OTB-DVS and VOT-DVS), validated the effectiveness of our model. The dataset and
source code have been released on:
\url{https://github.com/wangxiao5791509/VisEvent_SOT_Benchmark}.
|
[
{
"version": "v1",
"created": "Wed, 11 Aug 2021 03:55:12 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Sep 2021 03:31:54 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2022 12:31:22 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Sep 2023 06:50:36 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Wang",
"Xiao",
""
],
[
"Li",
"Jianing",
""
],
[
"Zhu",
"Lin",
""
],
[
"Zhang",
"Zhipeng",
""
],
[
"Chen",
"Zhe",
""
],
[
"Li",
"Xin",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Tian",
"Yonghong",
""
],
[
"Wu",
"Feng",
""
]
] |
new_dataset
| 0.989878 |
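
The cross-modality transformer mentioned in the abstract above can be approximated by a two-way cross-attention block like the sketch below. Dimensions, the symmetric design, and the residual/normalization choices are assumptions, not the paper's exact architecture.

```python
# A minimal cross-attention fusion block between frame and event-image features,
# illustrating the kind of cross-modality transformer the abstract describes.
# Dimensions and the symmetric two-way design are assumptions, not the paper's
# exact architecture.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.vis_to_evt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.evt_to_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_tokens, evt_tokens):
        # each input: (batch, num_tokens, dim)
        v, _ = self.vis_to_evt(vis_tokens, evt_tokens, evt_tokens)  # frames query events
        e, _ = self.evt_to_vis(evt_tokens, vis_tokens, vis_tokens)  # events query frames
        return self.norm(torch.cat([vis_tokens + v, evt_tokens + e], dim=1))

fused = CrossModalFusion()(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```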
2112.14985
|
Zhitong Xiong
|
Zhitong Xiong, Wei Huang, Jingtao Hu, and Xiao Xiang Zhu
|
THE Benchmark: Transferable Representation Learning for Monocular Height
Estimation
|
14 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generating 3D city models rapidly is crucial for many applications. Monocular
height estimation is one of the most efficient and timely ways to obtain
large-scale geometric information. However, existing works focus primarily on
training and testing models using unbiased datasets, which does not align well
with real-world applications. Therefore, we propose a new benchmark dataset to
study the transferability of height estimation models in a cross-dataset
setting. To this end, we first design and construct a large-scale benchmark
dataset for cross-dataset transfer learning on the height estimation task. This
benchmark dataset includes a newly proposed large-scale synthetic dataset, a
newly collected real-world dataset, and four existing datasets from different
cities. Next, a new experimental protocol, few-shot cross-dataset transfer, is
designed. Furthermore, in this paper, we propose a scale-deformable convolution
module to enhance the window-based Transformer for handling the scale-variation
problem in the height estimation task. Experimental results have demonstrated
the effectiveness of the proposed methods in the traditional and cross-dataset
transfer settings. The datasets and codes are publicly available at
https://mediatum.ub.tum.de/1662763 and https://thebenchmarkh.github.io/.
|
[
{
"version": "v1",
"created": "Thu, 30 Dec 2021 09:40:26 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 14:32:17 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Xiong",
"Zhitong",
""
],
[
"Huang",
"Wei",
""
],
[
"Hu",
"Jingtao",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.996291 |
2204.09635
|
Alan Tang
|
Alan Tang, Ryan Beckett, Steven Benaloh, Karthick Jayaraman, Tejas
Patil, Todd Millstein, George Varghese
|
LIGHTYEAR: Using Modularity to Scale BGP Control Plane Verification
|
12 pages (+ 2 pages references), 3 figures, Accepted at SIGCOMM '23
|
In Proceedings of the ACM SIGCOMM 2023 Conference (ACM SIGCOMM
'23). Association for Computing Machinery, New York, NY, USA, 94-107
|
10.1145/3603269.3604842
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Current network control plane verification tools cannot scale to large
networks, because of the complexity of jointly reasoning about the behaviors of
all nodes in the network. In this paper we present a modular approach to
control plane verification, whereby end-to-end network properties are verified
via a set of purely local checks on individual nodes and edges. The approach
targets the verification of safety properties for BGP configurations and
provides guarantees in the face of both arbitrary external route announcements
from neighbors and arbitrary node/link failures. We have proven the approach
correct and also implemented it in a tool called Lightyear. Experimental
results show that Lightyear scales dramatically better than prior control plane
verifiers. Further, we have used Lightyear to verify three properties of the
wide area network of a major cloud provider, containing hundreds of routers and
tens of thousands of edges. To our knowledge no prior tool has been
demonstrated to provide such guarantees at that scale. Finally, in addition to
the scaling benefits, our modular approach to verification makes it easy to
localize the causes of configuration errors and to support incremental
re-verification as configurations are updated.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 17:29:03 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 20:49:59 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Tang",
"Alan",
""
],
[
"Beckett",
"Ryan",
""
],
[
"Benaloh",
"Steven",
""
],
[
"Jayaraman",
"Karthick",
""
],
[
"Patil",
"Tejas",
""
],
[
"Millstein",
"Todd",
""
],
[
"Varghese",
"George",
""
]
] |
new_dataset
| 0.99844 |
2209.05070
|
Xiangyu Wang
|
Anjun Chen, Xiangyu Wang, Shaohao Zhu, Yanxu Li, Jiming Chen, Qi Ye
|
mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for
Millimeter Wave Radar
|
Accepted to ACM Multimedia 2022, Project Page:
https://chen3110.github.io/mmbody/index.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter Wave (mmWave) Radar is gaining popularity as it can work in
adverse environments like smoke, rain, snow, poor lighting, etc. Prior work has
explored the possibility of reconstructing 3D skeletons or meshes from the
noisy and sparse mmWave Radar signals. However, it is unclear how accurately we
can reconstruct the 3D body from the mmWave signals across scenes and how it
performs compared with cameras, which are important aspects to consider
when either using mmWave radars alone or combining them with
cameras. To answer these questions, an automatic 3D body annotation system is
first designed and built up with multiple sensors to collect a large-scale
dataset. The dataset consists of synchronized and calibrated mmWave radar point
clouds and RGB(D) images in different scenes and skeleton/mesh annotations for
humans in the scenes. With this dataset, we train state-of-the-art methods with
inputs from different sensors and test them in various scenarios. The results
demonstrate that 1) despite the noise and sparsity of the generated point
clouds, the mmWave radar can achieve better reconstruction accuracy than the
RGB camera but worse than the depth camera; 2) the reconstruction from the
mmWave radar is affected by adverse weather conditions moderately while the
RGB(D) camera is severely affected. Further, analysis of the dataset and the
results provides insights into improving the reconstruction from the mmWave radar
and the combination of signals from different sensors.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 08:00:31 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 03:07:03 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Sep 2023 10:11:03 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Chen",
"Anjun",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Zhu",
"Shaohao",
""
],
[
"Li",
"Yanxu",
""
],
[
"Chen",
"Jiming",
""
],
[
"Ye",
"Qi",
""
]
] |
new_dataset
| 0.999859 |
2210.01927
|
Tobin South
|
Tobin South, Nick Lothian, Alex "Sandy" Pentland
|
Building a healthier feed: Private location trace intersection driven
feed recommendations
| null |
Social, Cultural, and Behavioral Modeling. SBP-BRiMS 2023. Lecture
Notes in Computer Science, vol 14161. Springer, Cham
|
10.1007/978-3-031-43129-6_6
| null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The physical environment individuals navigate strongly determines which communities
and people matter most to them. These effects drive both personal access
to opportunities and the social capital of communities, and can often be
observed in the personal mobility traces of individuals. Traditional social
media feeds underutilize these mobility-based features, or do so in a privacy
exploitative manner. Here we propose a consent-first private information
sharing paradigm for driving social feeds from users' personal private data,
specifically using mobility traces. This approach designs the feed to
explicitly optimize for integrating the user into the local community and for
social capital building through leveraging mobility trace overlaps as a proxy
for existing or potential real-world social connections, creating
proportionality between whom a user sees in their feed, and whom the user is
likely to see in person. These claims are validated against existing
social-mobility data, and a reference implementation of the proposed algorithm
is built for demonstration. In total, this work presents a novel technique for
designing feeds that represent real offline social connections through private
set intersections, requiring no third party or public data exposure.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 21:52:52 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 20:37:32 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"South",
"Tobin",
""
],
[
"Lothian",
"Nick",
""
],
[
"Pentland",
"Alex \"Sandy\"",
""
]
] |
new_dataset
| 0.989581 |
2210.07109
|
David Reitter
|
Chris Callison-Burch, Gaurav Singh Tomar, Lara J. Martin, Daphne
Ippolito, Suma Bailis, David Reitter
|
Dungeons and Dragons as a Dialog Challenge for Artificial Intelligence
|
Accepted at EMNLP 2022
|
Conference on Empirical Methods in Natural Language Processing
(EMNLP), pp. 9379-9393, Dec. 2022
|
10.18653/v1/2022.emnlp-main.637
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
AI researchers have posited Dungeons and Dragons (D&D) as a challenge problem
to test systems on various language-related capabilities. In this paper, we
frame D&D specifically as a dialogue system challenge, where the tasks are to
both generate the next conversational turn in the game and predict the state of
the game given the dialogue history. We create a gameplay dataset consisting of
nearly 900 games, with a total of 7,000 players, 800,000 dialogue turns,
500,000 dice rolls, and 58 million words. We automatically annotate the data
with partial state information about the game play. We train a large language
model (LM) to generate the next game turn, conditioning it on different
information. The LM can respond as a particular character or as the player who
runs the game--i.e., the Dungeon Master (DM). It is trained to produce dialogue
that is either in-character (roleplaying in the fictional world) or
out-of-character (discussing rules or strategy). We perform a human evaluation
to determine what factors make the generated output plausible and interesting.
We further perform an automatic evaluation to determine how well the model can
predict the game state given the history and examine how well tracking the game
state improves its ability to produce plausible conversational output.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 15:43:39 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Callison-Burch",
"Chris",
""
],
[
"Tomar",
"Gaurav Singh",
""
],
[
"Martin",
"Lara J.",
""
],
[
"Ippolito",
"Daphne",
""
],
[
"Bailis",
"Suma",
""
],
[
"Reitter",
"David",
""
]
] |
new_dataset
| 0.999875 |
2212.03414
|
Hyoukjun Kwon
|
Seah Kim, Hyoukjun Kwon, Jinook Song, Jihyuck Jo, Yu-Hsin Chen,
Liangzhen Lai, Vikas Chandra
|
DREAM: A Dynamic Scheduler for Dynamic Real-time Multi-model ML
Workloads
|
14 pages
| null | null | null |
cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Emerging real-time multi-model ML (RTMM) workloads such as AR/VR and drone
control involve dynamic behaviors at various granularities: task, model, and
layers within a model. Such dynamic behaviors introduce new challenges to the
system software in an ML system since the overall system load is not completely
predictable, unlike traditional ML workloads. In addition, RTMM workloads
require real-time processing, involve highly heterogeneous models, and target
resource-constrained devices. Under such circumstances, developing an effective
scheduler gains more importance to better utilize underlying hardware
considering the unique characteristics of RTMM workloads. Therefore, we propose
a new scheduler, DREAM, which effectively handles various dynamicity in RTMM
workloads targeting multi-accelerator systems. DREAM quantifies the unique
requirements for RTMM workloads and utilizes the quantified scores to drive
scheduling decisions, considering the current system load and other inference
jobs on different models and input frames. DREAM utilizes tunable parameters
that provide fast and effective adaptivity to dynamic workload changes. In our
evaluation of five scenarios of RTMM workload, DREAM reduces the overall
UXCost, which is an equivalent metric of the energy-delay product (EDP) for
RTMM defined in the paper, by 32.2% and 50.0% in the geometric mean (up to
80.8% and 97.6%) compared to state-of-the-art baselines, which shows the
efficacy of our scheduling methodology.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 02:48:14 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 00:24:09 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Kim",
"Seah",
""
],
[
"Kwon",
"Hyoukjun",
""
],
[
"Song",
"Jinook",
""
],
[
"Jo",
"Jihyuck",
""
],
[
"Chen",
"Yu-Hsin",
""
],
[
"Lai",
"Liangzhen",
""
],
[
"Chandra",
"Vikas",
""
]
] |
new_dataset
| 0.993765 |
2301.08104
|
Salvatore Giorgi
|
Salvatore Giorgi, Ke Zhao, Alexander H. Feng, Lara J. Martin
|
Author as Character and Narrator: Deconstructing Personal Narratives
from the r/AmITheAsshole Reddit Community
|
Accepted to the 17th International AAAI Conference on Web and Social
Media (ICWSM), 2023
|
Proceedings of the International AAAI Conference on Web and Social
Media (ICWSM) 2023, 17(1), 233-244
|
10.1609/icwsm.v17i1.22141
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the r/AmITheAsshole subreddit, people anonymously share first person
narratives that contain some moral dilemma or conflict and ask the community to
judge who is at fault (i.e., who is "the asshole"). In general, first person
narratives are a unique storytelling domain where the author is the narrator
(the person telling the story) but can also be a character (the person living
the story) and, thus, the author has two distinct voices presented in the
story. In this study, we identify linguistic and narrative features associated
with the author as the character or as a narrator. We use these features to
answer the following questions: (1) what makes an asshole character and (2)
what makes an asshole narrator? We extract both Author-as-Character features
(e.g., demographics, narrative event chain, and emotional arc) and
Author-as-Narrator features (i.e., the style and emotion of the story as a
whole) in order to identify which aspects of the narrative are correlated with
the final moral judgment. Our work shows that "assholes" as Characters frame
themselves as lacking agency with a more positive personal arc, while
"assholes" as Narrators will tell emotional and opinionated stories.
|
[
{
"version": "v1",
"created": "Thu, 19 Jan 2023 14:50:36 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 16:26:17 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Giorgi",
"Salvatore",
""
],
[
"Zhao",
"Ke",
""
],
[
"Feng",
"Alexander H.",
""
],
[
"Martin",
"Lara J.",
""
]
] |
new_dataset
| 0.993894 |
2301.08188
|
Argha Sen
|
Argha Sen, Avijit Mandal, Prasenjit Karmakar, Anirban Das and Sandip
Chakraborty
|
mmDrive: mmWave Sensing for Live Monitoring and On-Device Inference of
Dangerous Driving
|
11 pages, 13 figures, conference
| null |
10.1109/PERCOM56429.2023.10099264
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting dangerous driving has been of critical interest for the past few
years. However, a practical yet minimally intrusive solution remains
challenging as existing technologies heavily rely on visual features or
physical proximity. With this motivation, we explore the feasibility of purely
using mmWave radars to detect dangerous driving behaviors. We first study
characteristics of dangerous driving and find unique range-Doppler patterns
caused by 9 typical dangerous driving actions. We then develop a
novel Fused-CNN model to detect dangerous driving instances from regular
driving and classify 9 different dangerous driving actions. Through extensive
experiments with 5 volunteer drivers in real driving environments, we observe
that our system can distinguish dangerous driving actions with an average
accuracy of > 95%. We also compare our models with existing state-of-the-art
baselines to establish their significance.
|
[
{
"version": "v1",
"created": "Thu, 19 Jan 2023 17:36:39 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 10:59:36 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Sen",
"Argha",
""
],
[
"Mandal",
"Avijit",
""
],
[
"Karmakar",
"Prasenjit",
""
],
[
"Das",
"Anirban",
""
],
[
"Chakraborty",
"Sandip",
""
]
] |
new_dataset
| 0.999263 |
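
As a rough illustration of classifying range-Doppler maps, the sketch below builds a tiny CNN for a 10-way toy setup (9 dangerous actions plus regular driving, as suggested by the abstract above). Input size and layer widths are arbitrary; this is not the paper's Fused-CNN.

```python
# A minimal CNN over range-Doppler maps for a 10-way toy setup (9 dangerous
# actions plus regular driving). Input size and layer widths are arbitrary
# assumptions; this is not the paper's Fused-CNN architecture.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

range_doppler = torch.randn(4, 1, 64, 64)   # batch of single-channel RD maps
logits = classifier(range_doppler)          # shape (4, 10)
```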
2302.04456
|
Pengfei Zhu
|
Pengfei Zhu, Chao Pang, Yekun Chai, Lei Li, Shuohuan Wang, Yu Sun, Hao
Tian, Hua Wu
|
ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models
|
Accepted by AACL demo 2023
| null | null | null |
cs.SD cs.AI cs.CL cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the burgeoning interest in diffusion models has led to
significant advances in image and speech generation. Nevertheless, the direct
synthesis of music waveforms from unrestricted textual prompts remains a
relatively underexplored domain. In response to this lacuna, this paper
introduces a pioneering contribution in the form of a text-to-waveform music
generation model, underpinned by the utilization of diffusion models. Our
methodology hinges on the innovative incorporation of free-form textual prompts
as conditional factors to guide the waveform generation process within the
diffusion model framework. Addressing the challenge of limited text-music
parallel data, we undertake the creation of a dataset by harnessing web
resources, a task facilitated by weak supervision techniques. Furthermore, a
rigorous empirical inquiry is undertaken to contrast the efficacy of two
distinct prompt formats for text conditioning, namely, music tags and
unconstrained textual descriptions. The outcomes of this comparative analysis
affirm the superior performance of our proposed model in terms of enhancing
text-music relevance. Finally, our work culminates in a demonstrative
exhibition of the excellent capabilities of our model in text-to-music
generation. We further demonstrate that our generated music in the waveform
domain outperforms previous works by a large margin in terms of diversity,
quality, and text-music relevance.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 06:27:09 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 09:30:00 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhu",
"Pengfei",
""
],
[
"Pang",
"Chao",
""
],
[
"Chai",
"Yekun",
""
],
[
"Li",
"Lei",
""
],
[
"Wang",
"Shuohuan",
""
],
[
"Sun",
"Yu",
""
],
[
"Tian",
"Hao",
""
],
[
"Wu",
"Hua",
""
]
] |
new_dataset
| 0.97518 |
2302.14595
|
Kailun Yang
|
Junwei Zheng, Jiaming Zhang, Kailun Yang, Kunyu Peng, Rainer
Stiefelhagen
|
MateRobot: Material Recognition in Wearable Robotics for People with
Visual Impairments
|
The source code has been made publicly available at
https://junweizheng93.github.io/publications/MATERobot/MATERobot.html
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People with Visual Impairments (PVI) typically recognize objects through
haptic perception. Knowing objects and materials before touching is desired by
the target users but under-explored in the field of human-centered robotics. To
fill this gap, in this work, a wearable vision-based robotic system, MateRobot,
is established for PVI to recognize materials and object categories beforehand.
To address the computational constraints of mobile platforms, we propose a
lightweight yet accurate model MateViT to perform pixel-wise semantic
segmentation, simultaneously recognizing both objects and materials. Our
methods achieve 40.2% and 51.1% mIoU on the COCOStuff-10K and DMS datasets,
respectively, surpassing the previous method by +5.7% and +7.0%. Moreover,
on the field test with participants, our wearable system reaches a score of 28
in the NASA-Task Load Index, indicating low cognitive demands and ease of use.
Our MateRobot demonstrates the feasibility of recognizing material properties
through visual cues and offers a promising step towards improving the
functionality of wearable robots for PVI. The source code has been made
publicly available at
https://junweizheng93.github.io/publications/MATERobot/MATERobot.html.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 14:29:22 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 13:46:21 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zheng",
"Junwei",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Yang",
"Kailun",
""
],
[
"Peng",
"Kunyu",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
new_dataset
| 0.999586 |
2304.04264
|
Yang Luo
|
Yang Luo, Xiqing Guo, Mingtao Dong, Jin Yu
|
RGB-T Tracking Based on Mixed Attention
|
14 pages, 10 figures
|
Sensors 23, no. 14: 6609 (2023)
|
10.3390/s23146609
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RGB-T tracking involves the use of images from both visible and thermal
modalities. The primary objective is to adaptively leverage the relatively
dominant modality in varying conditions to achieve more robust tracking
compared to single-modality tracking. An RGB-T tracker based on mixed attention
mechanism to achieve complementary fusion of modalities (referred to as MACFT)
is proposed in this paper. In the feature extraction stage, we utilize
different transformer backbone branches to extract specific and shared
information from different modalities. By performing mixed attention operations
in the backbone to enable information interaction and self-enhancement between
the template and search images, it constructs a robust feature representation
that better understands the high-level semantic features of the target. Then,
in the feature fusion stage, a modality-adaptive fusion is achieved through a
mixed attention-based modality fusion network, which suppresses the low-quality
modality noise while enhancing the information of the dominant modality.
Evaluation on multiple RGB-T public datasets demonstrates that our proposed
tracker outperforms other RGB-T trackers on general evaluation metrics while
also being able to adapt to long-term tracking scenarios.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 15:59:41 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 01:13:05 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 08:35:20 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Apr 2023 02:00:25 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Luo",
"Yang",
""
],
[
"Guo",
"Xiqing",
""
],
[
"Dong",
"Mingtao",
""
],
[
"Yu",
"Jin",
""
]
] |
new_dataset
| 0.955128 |
2305.01528
|
Andrew Zhu
|
Andrew Zhu and Karmanya Aggarwal and Alexander Feng and Lara J. Martin
and Chris Callison-Burch
|
FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured
Game State Information
|
21 pages, 2 figures. Accepted at ACL 2023
|
Proceedings of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), 2023, pp. 4171-4193
|
10.18653/v1/2023.acl-long.229
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Dungeons & Dragons (D&D) is a tabletop roleplaying game with complex natural
language interactions between players and hidden state information. Recent work
has shown that large language models (LLMs) that have access to state
information can generate higher quality game turns than LLMs that use dialog
history alone. However, previous work used game state information that was
heuristically created and was not a true gold standard game state. We present
FIREBALL, a large dataset containing nearly 25,000 unique sessions from real
D&D gameplay on Discord with true game state info. We recorded game play
sessions of players who used the Avrae bot, which was developed to aid people
in playing D&D online, capturing language, game commands and underlying game
state information. We demonstrate that FIREBALL can improve natural language
generation (NLG) by using Avrae state information, improving both automated
metrics and human judgments of quality. Additionally, we show that LLMs can
generate executable Avrae commands, particularly after finetuning.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 15:36:10 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 18:49:16 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 01:12:15 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhu",
"Andrew",
""
],
[
"Aggarwal",
"Karmanya",
""
],
[
"Feng",
"Alexander",
""
],
[
"Martin",
"Lara J.",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.999903 |
2306.10308
|
Florent Gu\'epin
|
Matthieu Meeus, Florent Gu\'epin, Ana-Maria Cretu and Yves-Alexandre
de Montjoye
|
Achilles' Heels: Vulnerable Record Identification in Synthetic Data
Publishing
| null | null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthetic data is seen as the most promising solution to share
individual-level data while preserving privacy. Shadow modeling-based
Membership Inference Attacks (MIAs) have become the standard approach to
evaluate the privacy risk of synthetic data. While very effective, they require
a large number of datasets to be created and models trained to evaluate the
risk posed by a single record. The privacy risk of a dataset is thus currently
evaluated by running MIAs on a handful of records selected using ad-hoc
methods. We here propose what is, to the best of our knowledge, the first
principled vulnerable record identification technique for synthetic data
publishing, leveraging the distance to a record's closest neighbors. We show
our method to strongly outperform previous ad-hoc methods across datasets and
generators. We also show evidence that our method is robust to the choice of
MIA and to the specific choice of parameters. Finally, we show it to accurately
identify vulnerable records when synthetic data generators are made
differentially private. The choice of vulnerable records is as important as
more accurate MIAs when evaluating the privacy of synthetic data releases,
including from a legal perspective. We here propose a simple yet highly
effective method to do so. We hope our method will enable practitioners to
better estimate the risk posed by synthetic data publishing and researchers to
fairly compare ever improving MIAs on synthetic data.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 09:42:46 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 09:17:16 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Meeus",
"Matthieu",
""
],
[
"Guépin",
"Florent",
""
],
[
"Cretu",
"Ana-Maria",
""
],
[
"de Montjoye",
"Yves-Alexandre",
""
]
] |
new_dataset
| 0.989414 |
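
The core idea in the abstract above, scoring records by the distance to their closest neighbors, can be sketched with scikit-learn as below. The metric, neighbor count, and aggregation are assumptions rather than the paper's exact procedure.

```python
# Sketch of ranking records by distance to their closest neighbors, the core
# idea the abstract describes for picking vulnerable records. The metric, the
# number of neighbors, and the aggregation are assumptions, not the paper's
# exact procedure.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def vulnerability_scores(records: np.ndarray, k: int = 5) -> np.ndarray:
    nn = NearestNeighbors(n_neighbors=k + 1).fit(records)
    dists, _ = nn.kneighbors(records)          # column 0 is the record itself
    return dists[:, 1:].mean(axis=1)           # mean distance to k closest others

data = np.random.rand(1000, 8)
most_isolated = np.argsort(-vulnerability_scores(data))[:10]  # candidate targets
```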
2306.10346
|
Ping Li PhD
|
Ping Li and Chenhan Zhang and Xianghua Xu
|
Fast Fourier Inception Networks for Occluded Video Prediction
| null |
IEEE Trans. Multimedia (2023)
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video prediction is a pixel-level task that generates future frames by
employing the historical frames. There often exist continuous complex motions,
such as object overlapping and scene occlusion in video, which poses great
challenges to this task. Previous works either fail to well capture the
long-term temporal dynamics or do not handle the occlusion masks. To address
these issues, we develop the fully convolutional Fast Fourier Inception
Networks for video prediction, termed \textit{FFINet}, which includes two
primary components, \ie, the occlusion inpainter and the spatiotemporal
translator. The former adopts the fast Fourier convolutions to enlarge the
receptive field, such that the missing areas (occlusion) with complex geometric
structures are filled by the inpainter. The latter employs the stacked Fourier
transform inception module to learn the temporal evolution by group
convolutions and the spatial movement by channel-wise Fourier convolutions,
which captures both the local and the global spatiotemporal features. This
encourages generating more realistic and high-quality future frames. To
optimize the model, the recovery loss is imposed to the objective, \ie,
minimizing the mean square error between the ground-truth frame and the
recovery frame. Both quantitative and qualitative experimental results on five
benchmarks, including Moving MNIST, TaxiBJ, Human3.6M, Caltech Pedestrian, and
KTH, have demonstrated the superiority of the proposed approach. Our code is
available on GitHub.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 13:27:29 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Li",
"Ping",
""
],
[
"Zhang",
"Chenhan",
""
],
[
"Xu",
"Xianghua",
""
]
] |
new_dataset
| 0.956181 |
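
The fast Fourier convolutions mentioned in the abstract above obtain a global receptive field by mixing features in the frequency domain. The block below is a minimal spectral convolution in PyTorch capturing that idea; it is not FFINet's exact module.

```python
# A minimal spectral-convolution block: transform to the frequency domain,
# mix channels there, and transform back. This illustrates how fast Fourier
# convolutions enlarge the receptive field; it is not FFINet's exact module.
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution applied to stacked real/imaginary parts
        self.freq_mix = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")           # complex (b, c, h, w//2+1)
        spec = self.freq_mix(torch.cat([spec.real, spec.imag], dim=1))
        real, imag = spec.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")

out = SpectralConv2d(8)(torch.randn(2, 8, 32, 32))
```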
2308.11531
|
Claire Barale
|
Claire Barale
|
Empowering Refugee Claimants and their Lawyers: Using Machine Learning
to Examine Decision-Making in Refugee Law
|
19th International Conference on Artificial Intelligence and Law -
ICAIL 2023, Doctoral Consortium (Best Paper Award)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Our project aims at helping and supporting stakeholders in refugee status
adjudications, such as lawyers, judges, governing bodies, and claimants, in
order to make better decisions through data-driven intelligence and increase
the understanding and transparency of the refugee application process for all
involved parties. This PhD project has two primary objectives: (1) to retrieve
past cases, and (2) to analyze legal decision-making processes on a dataset of
Canadian cases. In this paper, we present the current state of our work, which
includes a completed experiment on part (1) and ongoing efforts related to part
(2). We believe that NLP-based solutions are well-suited to address these
challenges, and we investigate the feasibility of automating all steps
involved. In addition, we introduce a novel benchmark for future NLP research
in refugee law. Our methodology aims to be inclusive to all end-users and
stakeholders, with expected benefits including reduced time-to-decision, fairer
and more transparent outcomes, and improved decision quality.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 15:59:21 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 14:19:37 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Barale",
"Claire",
""
]
] |
new_dataset
| 0.999325 |
2309.04700
|
Phuong Duy Huynh Mr.
|
Phuong Duy Huynh, Thisal De Silva, Son Hoang Dau, Xiaodong Li, Iqbal
Gondal, Emanuele Viterbo
|
From Programming Bugs to Multimillion-Dollar Scams: An Analysis of
Trapdoor Tokens on Decentralized Exchanges
|
22 pages, 11 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate in this work a recently emerging type of scam token called
Trapdoor, which has cost investors hundreds of millions of dollars in the
period of 2020-2023. In a nutshell, by embedding logical bugs and/or owner-only
features to the smart contract code, a Trapdoor token allows users to buy but
prevents them from selling. We develop the first systematic classification of
Trapdoor tokens and a comprehensive list of their programming techniques,
accompanied by a detailed analysis on representative scam contracts. We also
construct the very first dataset of 1859 manually verified Trapdoor tokens on
Uniswap and build effective opcode-based detection tools using popular machine
learning classifiers such as Random Forest, XGBoost, and LightGBM, which
achieve accuracies, precisions, recalls, and F1-scores of at least 0.98.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 06:47:23 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 14:17:21 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Sep 2023 13:30:24 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Huynh",
"Phuong Duy",
""
],
[
"De Silva",
"Thisal",
""
],
[
"Dau",
"Son Hoang",
""
],
[
"Li",
"Xiaodong",
""
],
[
"Gondal",
"Iqbal",
""
],
[
"Viterbo",
"Emanuele",
""
]
] |
new_dataset
| 0.999711 |
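
An opcode-based detector of the kind described in the abstract above can be sketched with scikit-learn: opcode sequences become n-gram counts and feed a Random Forest. The opcode strings and labels below are invented placeholders, not samples from the paper's Uniswap dataset.

```python
# Sketch of an opcode-based detector in the spirit the abstract describes:
# opcode sequences are turned into n-gram counts and fed to a Random Forest.
# The toy opcode strings and labels are invented placeholders, not data from
# the paper's Uniswap dataset.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

opcode_seqs = [
    "PUSH1 MSTORE CALLER SLOAD JUMPI REVERT",   # hypothetical trapdoor-like trace
    "PUSH1 MSTORE CALLDATALOAD SSTORE RETURN",  # hypothetical benign trace
] * 50
labels = [1, 0] * 50

detector = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
detector.fit(opcode_seqs, labels)
print(detector.predict(["PUSH1 MSTORE CALLER SLOAD JUMPI REVERT"]))
```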
2309.07413
|
Lei Zhang
|
Lei Zhang, Zhengkun Tian, Xiang Chen, Jiaming Sun, Hongyu Xiang, Ke
Ding, Guanglu Wan
|
CPPF: A contextual and post-processing-free model for automatic speech
recognition
|
Submitted to ICASSP2024
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ASR systems have become increasingly widespread in recent years. However,
their textual outputs often require post-processing tasks before they can be
practically utilized. To address this issue, we draw inspiration from the
multifaceted capabilities of LLMs and Whisper, and focus on integrating
multiple ASR text processing tasks related to speech recognition into the ASR
model. This integration not only shortens the multi-stage pipeline, but also
prevents the propagation of cascading errors, resulting in direct generation of
post-processed text. In this study, we focus on ASR-related processing tasks,
including Contextual ASR and multiple ASR post-processing tasks. To achieve
this objective, we introduce the CPPF model, which offers a versatile and
highly effective alternative to ASR processing. CPPF seamlessly integrates
these tasks without any significant loss in recognition performance.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 03:40:14 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 03:02:27 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhang",
"Lei",
""
],
[
"Tian",
"Zhengkun",
""
],
[
"Chen",
"Xiang",
""
],
[
"Sun",
"Jiaming",
""
],
[
"Xiang",
"Hongyu",
""
],
[
"Ding",
"Ke",
""
],
[
"Wan",
"Guanglu",
""
]
] |
new_dataset
| 0.977457 |
2309.08301
|
Christopher Thirgood
|
Christopher Thomas Thirgood, Oscar Alejandro Mendez Maldonado, Chao
Ling, Jonathan Storey, Simon J Hadfield
|
RaSpectLoc: RAman SPECTroscopy-dependent robot LOCalisation
|
8 pages, 5 figures. This work will be presented at IROS 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents a new information source for supporting robot
localisation: material composition. The proposed method complements the
existing visual, structural, and semantic cues utilized in the literature.
However, it has a distinct advantage in its ability to differentiate
structurally, visually or categorically similar objects such as different
doors, by using Raman spectrometers. Such a device can identify the material of
the objects it probes through the bonds between the material's molecules. Unlike
similar sensing techniques, such as mass spectrometry, it does so without damaging
the material or the environment. In addition to introducing the first material-based
localisation algorithm, this paper supports the future growth of the field by
presenting a Gazebo plugin for Raman spectrometers, material sensing
demonstrations, as well as the first-ever localisation dataset with benchmarks
for material-based localisation. This benchmarking shows that the proposed
technique results in a significant improvement over current state-of-the-art
localisation techniques, achieving 16\% more accurate localisation than the
leading baseline.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 10:45:59 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 13:52:47 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Thirgood",
"Christopher Thomas",
""
],
[
"Maldonado",
"Oscar Alejandro Mendez",
""
],
[
"Ling",
"Chao",
""
],
[
"Storey",
"Jonathan",
""
],
[
"Hadfield",
"Simon J",
""
]
] |
new_dataset
| 0.999611 |
2309.10889
|
Mahdi Shamsi
|
Mahdi Shamsi, Farokh Marvasti
|
Non-Orthogonal Time-Frequency Space Modulation
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a Time-Frequency Space Transformation (TFST) to derive
non-orthogonal bases for modulation techniques over the delay-doppler plane. A
family of Overloaded Delay-Doppler Modulation (ODDM) techniques is proposed
based on the TFST, which enhances flexibility and efficiency by expressing
modulated signals as a linear combination of basis signals. A Non-Orthogonal
Time-Frequency Space (NOTFS) digital modulation is derived for the proposed
ODDM techniques, and simulations show that they offer high-mobility
communication systems with improved spectral efficiency and low latency,
particularly in challenging scenarios such as high overloading factors and
Additive White Gaussian Noise (AWGN) channels. A modified sphere decoding
algorithm is also presented to efficiently decode the received signal. The
proposed modulation and decoding techniques contribute to the advancement of
non-orthogonal approaches in next-generation mobile communication
systems, delivering superior spectral efficiency and low latency, and offering
a promising solution towards the development of efficient high-mobility
communication systems.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 19:29:59 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 05:07:42 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Shamsi",
"Mahdi",
""
],
[
"Marvasti",
"Farokh",
""
]
] |
new_dataset
| 0.956071 |
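
The abstract above describes modulated signals as linear combinations of delay-Doppler basis signals. For orientation only, the generic pulse-shaped delay-Doppler expansion is written below; the paper's TFST yields non-orthogonal, overloaded bases rather than this textbook orthogonal form, and the symbols g, tau_0, nu_0 are generic placeholders.

```latex
% Generic delay-Doppler basis expansion (textbook form, not the paper's TFST):
% x[k,l] are delay-Doppler domain symbols, g a transmit pulse,
% \tau_0 the delay step and \nu_0 the Doppler step.
s(t) \;=\; \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} x[k,l]\, \phi_{k,l}(t),
\qquad
\phi_{k,l}(t) \;=\; g\!\left(t - k\tau_0\right)\, e^{\,j 2\pi l \nu_0 (t - k\tau_0)}.
```

Overloading, as the abstract uses the term, would then amount to employing more basis signals than the MN degrees of freedom available on this orthogonal grid.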
2309.11052
|
Oilson Alberto Gonzatto Junior
|
Luiz Giordani and Gilsiley Dar\'u and Rhenan Queiroz and Vitor
Buzinaro and Davi Keglevich Neiva and Daniel Camilo Fuentes Guzm\'an and
Marcos Jardel Henriques and Oilson Alberto Gonzatto Junior and Francisco
Louzada
|
fakenewsbr: A Fake News Detection Platform for Brazilian Portuguese
| null | null | null | null |
cs.CL cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The proliferation of fake news has become a significant concern in recent
times due to its potential to spread misinformation and manipulate public
opinion. This paper presents a comprehensive study on detecting fake news in
Brazilian Portuguese, focusing on journalistic-type news. We propose a machine
learning-based approach that leverages natural language processing techniques,
including TF-IDF and Word2Vec, to extract features from textual data. We
evaluate the performance of various classification algorithms, such as logistic
regression, support vector machine, random forest, AdaBoost, and LightGBM, on a
dataset containing both true and fake news articles. The proposed approach
achieves high accuracy and F1-Score, demonstrating its effectiveness in
identifying fake news. Additionally, we developed a user-friendly web platform,
fakenewsbr.com, to facilitate the verification of news articles' veracity. Our
platform provides real-time analysis, allowing users to assess the likelihood
of fake news articles. Through empirical analysis and comparative studies, we
demonstrate the potential of our approach to contribute to the fight against
the spread of fake news and promote more informed media consumption.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 04:10:03 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 00:35:12 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Giordani",
"Luiz",
""
],
[
"Darú",
"Gilsiley",
""
],
[
"Queiroz",
"Rhenan",
""
],
[
"Buzinaro",
"Vitor",
""
],
[
"Neiva",
"Davi Keglevich",
""
],
[
"Guzmán",
"Daniel Camilo Fuentes",
""
],
[
"Henriques",
"Marcos Jardel",
""
],
[
"Junior",
"Oilson Alberto Gonzatto",
""
],
[
"Louzada",
"Francisco",
""
]
] |
new_dataset
| 0.992407 |
2309.11523
|
Qihang Fan
|
Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu and Ran He
|
RMT: Retentive Networks Meet Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer first appears in the field of natural language processing and is
later migrated to the computer vision domain, where it demonstrates excellent
performance in vision tasks. However, recently, Retentive Network (RetNet) has
emerged as an architecture with the potential to replace Transformer,
attracting widespread attention in the NLP community. Therefore, we raise the
question of whether transferring RetNet's idea to vision can also bring
outstanding performance to vision tasks. To address this, we combine RetNet and
Transformer to propose RMT. Inspired by RetNet, RMT introduces explicit decay
into the vision backbone, bringing prior knowledge related to spatial distances
to the vision model. This distance-related spatial prior allows for explicit
control of the range of tokens that each token can attend to. Additionally, to
reduce the computational cost of global modeling, we decompose this modeling
process along the two coordinate axes of the image. Abundant experiments have
demonstrated that our RMT exhibits exceptional performance across various
computer vision tasks. For example, RMT achieves 84.1% Top1-acc on ImageNet-1k
using merely 4.5G FLOPs. To the best of our knowledge, among all models, RMT
achieves the highest Top1-acc when models are of similar size and trained with
the same strategy. Moreover, RMT significantly outperforms existing vision
backbones in downstream tasks such as object detection, instance segmentation,
and semantic segmentation. Our work is still in progress.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 00:57:48 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Fan",
"Qihang",
""
],
[
"Huang",
"Huaibo",
""
],
[
"Chen",
"Mingrui",
""
],
[
"Liu",
"Hongmin",
""
],
[
"He",
"Ran",
""
]
] |
new_dataset
| 0.996306 |
2309.11527
|
Sahan Bulathwela
|
Yuxiang Qiu, Karim Djemili, Denis Elezi, Aaneel Shalman, Mar\'ia
P\'erez-Ortiz, Sahan Bulathwela
|
TrueLearn: A Python Library for Personalised Informational
Recommendations with (Implicit) Feedback
|
To be presented at the ORSUM workshop at RecSys 2023
| null | null | null |
cs.IR cs.AI cs.CY cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
This work describes the TrueLearn Python library, which contains a family of
online learning Bayesian models for building educational (or more generally,
informational) recommendation systems. This family of models was designed
following the "open learner" concept, using humanly-intuitive user
representations. For the sake of interpretability and putting the user in
control, the TrueLearn library also contains different representations to help
end-users visualise the learner models, which may in the future facilitate user
interaction with their own models. Together with the library, we include a
previously publicly released implicit feedback educational dataset with
evaluation metrics to measure the performance of the models. The extensive
documentation and coding examples make the library highly accessible to both
machine learning developers and educational data mining and learning analytics
practitioners. The library and the support documentation with examples are
available at https://truelearn.readthedocs.io/en/latest.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 07:21:50 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Qiu",
"Yuxiang",
""
],
[
"Djemili",
"Karim",
""
],
[
"Elezi",
"Denis",
""
],
[
"Shalman",
"Aaneel",
""
],
[
"Pérez-Ortiz",
"María",
""
],
[
"Bulathwela",
"Sahan",
""
]
] |
new_dataset
| 0.996462 |
2309.11549
|
Jill Naiman
|
Jill P. Naiman and Morgan G. Cosillo and Peter K. G. Williams and
Alyssa Goodman
|
Large Synthetic Data from the arXiv for OCR Post Correction of Historic
Scientific Articles
|
6 pages, 1 figure, 1 table; training/validation/test datasets and all
model weights to be linked on Zenodo on publication
| null | null | null |
cs.DL astro-ph.IM
|
http://creativecommons.org/licenses/by/4.0/
|
Scientific articles published prior to the "age of digitization" (~1997)
require Optical Character Recognition (OCR) to transform scanned documents into
machine-readable text, a process that often produces errors. We develop a
pipeline for the generation of a synthetic ground truth/OCR dataset to correct
the OCR results of the astrophysics literature holdings of the NASA
Astrophysics Data System (ADS). By mining the arXiv we create, to the authors'
knowledge, the largest scientific synthetic ground truth/OCR post correction
dataset of 203,354,393 character pairs. We provide baseline models trained with
this dataset and find the mean improvement in character and word error rates of
7.71% and 18.82% for historical OCR text, respectively. When used to classify
parts of sentences as inline math, we find a classification F1 score of 77.82%.
Interactive dashboards to explore the dataset are available online:
https://readingtimemachine.github.io/projects/1-ocr-groundtruth-may2023, and
data and code, within the limitations of our agreement with the arXiv, are
hosted on GitHub: https://github.com/ReadingTimeMachine/ocr_post_correction.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 18:00:02 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Naiman",
"Jill P.",
""
],
[
"Cosillo",
"Morgan G.",
""
],
[
"Williams",
"Peter K. G.",
""
],
[
"Goodman",
"Alyssa",
""
]
] |
new_dataset
| 0.983012 |
2309.11568
|
Nolan Dey
|
Nolan Dey and Daria Soboleva and Faisal Al-Khateeb and Bowen Yang and
Ribhu Pathria and Hemant Khachane and Shaheer Muhammad and Zhiming (Charles)
Chen and Robert Myers and Jacob Robert Steeves and Natalia Vassilieva and
Marvin Tom and Joel Hestness
|
BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model
| null | null | null | null |
cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Bittensor Language Model, called "BTLM-3B-8K", a new
state-of-the-art 3 billion parameter open-source language model. BTLM-3B-8K was
trained on 627B tokens from the SlimPajama dataset with a mixture of 2,048 and
8,192 context lengths. BTLM-3B-8K outperforms all existing 3B parameter models
by 2-5.5% across downstream tasks. BTLM-3B-8K is even competitive with some 7B
parameter models. Additionally, BTLM-3B-8K provides excellent long context
performance, outperforming MPT-7B-8K and XGen-7B-8K on tasks up to 8,192
context length. We trained the model on a cleaned and deduplicated SlimPajama
dataset; aggressively tuned the \textmu P hyperparameters and schedule; used
ALiBi position embeddings; and adopted the SwiGLU nonlinearity.
On Hugging Face, the most popular models have 7B parameters, indicating that
users prefer the quality-size ratio of 7B models. Compacting the 7B parameter
model to one with 3B parameters, with little performance impact, is an
important milestone. BTLM-3B-8K needs only 3GB of memory with 4-bit precision
and takes 2.5x less inference compute than 7B models, helping to open up access
to a powerful language model on mobile and edge devices. BTLM-3B-8K is
available under an Apache 2.0 license on Hugging Face:
https://huggingface.co/cerebras/btlm-3b-8k-base.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 18:12:56 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Dey",
"Nolan",
"",
"Charles"
],
[
"Soboleva",
"Daria",
"",
"Charles"
],
[
"Al-Khateeb",
"Faisal",
"",
"Charles"
],
[
"Yang",
"Bowen",
"",
"Charles"
],
[
"Pathria",
"Ribhu",
"",
"Charles"
],
[
"Khachane",
"Hemant",
"",
"Charles"
],
[
"Muhammad",
"Shaheer",
"",
"Charles"
],
[
"Zhiming",
"",
"",
"Charles"
],
[
"Chen",
"",
""
],
[
"Myers",
"Robert",
""
],
[
"Steeves",
"Jacob Robert",
""
],
[
"Vassilieva",
"Natalia",
""
],
[
"Tom",
"Marvin",
""
],
[
"Hestness",
"Joel",
""
]
] |
new_dataset
| 0.999791 |
2309.11585
|
Belen Alastruey
|
Belen Alastruey, Aleix Sant, Gerard I. G\'allego, David Dale and Marta
R. Costa-juss\`a
|
SpeechAlign: a Framework for Speech Translation Alignment Evaluation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Speech-to-Speech and Speech-to-Text translation are currently dynamic areas
of research. To contribute to these fields, we present SpeechAlign, a framework
to evaluate the underexplored field of source-target alignment in speech
models. Our framework has two core components. First, to tackle the absence of
suitable evaluation datasets, we introduce the Speech Gold Alignment dataset,
built upon an English-German text translation gold alignment dataset. Secondly,
we introduce two novel metrics, Speech Alignment Error Rate (SAER) and
Time-weighted Speech Alignment Error Rate (TW-SAER), to evaluate alignment
quality in speech models. By publishing SpeechAlign we provide an accessible
evaluation framework for model assessment, and we employ it to benchmark
open-source Speech Translation models.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 18:46:37 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Alastruey",
"Belen",
""
],
[
"Sant",
"Aleix",
""
],
[
"Gállego",
"Gerard I.",
""
],
[
"Dale",
"David",
""
],
[
"Costa-jussà",
"Marta R.",
""
]
] |
new_dataset
| 0.999826 |
2309.11587
|
Song Gao
|
Jinmeng Rao, Song Gao, Sijia Zhu
|
CATS: Conditional Adversarial Trajectory Synthesis for
Privacy-Preserving Trajectory Data Publication Using Deep Learning Approaches
|
9 figures, 4 figures
|
International Journal of Geographical Information Science; 2023
| null | null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The prevalence of ubiquitous location-aware devices and mobile Internet
enables us to collect massive individual-level trajectory datasets from users.
Such trajectory big data bring new opportunities to human mobility research but
also raise public concerns with regard to location privacy. In this work, we
present the Conditional Adversarial Trajectory Synthesis (CATS), a
deep-learning-based GeoAI methodological framework for privacy-preserving
trajectory data generation and publication. CATS applies K-anonymity to the
underlying spatiotemporal distributions of human movements, which provides a
distributional-level strong privacy guarantee. By leveraging conditional
adversarial training on K-anonymized human mobility matrices, trajectory global
context learning using the attention-based mechanism, and recurrent bipartite
graph matching of adjacent trajectory points, CATS is able to reconstruct
trajectory topology from conditionally sampled locations and generate
high-quality individual-level synthetic trajectory data, which can serve as
supplements or alternatives to raw data for privacy-preserving trajectory data
publication. The experiment results on over 90k GPS trajectories show that our
method has a better performance in privacy preservation, spatiotemporal
characteristic preservation, and downstream utility compared with baseline
methods, which brings new insights into privacy-preserving human mobility
research using generative AI techniques and explores data ethics issues in
GIScience.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 18:52:56 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Rao",
"Jinmeng",
""
],
[
"Gao",
"Song",
""
],
[
"Zhu",
"Sijia",
""
]
] |
new_dataset
| 0.967151 |
2309.11611
|
Dihia Lanasri
|
Dihia Lanasri, Juan Olano, Sifal Klioui, Sin Liang Lee, Lamia Sekkai
|
Hate speech detection in Algerian dialect using deep learning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the proliferation of hate speech on social networks in different forms,
such as abusive language, cyberbullying, and violence, people have experienced
a significant increase in violence, putting them in uncomfortable situations
and under threat. Plenty of effort has been dedicated in the last few years to
overcoming this phenomenon by detecting hate speech in different structured
languages like English, French, Arabic, and others. However, far fewer works
deal with Arabic dialects like Tunisian, Egyptian, and Gulf, mainly the
Algerian ones. To fill this gap, we propose in
this work a complete approach for detecting hate speech on online Algerian
messages. Many deep learning architectures have been evaluated on the corpus we
created from some Algerian social networks (Facebook, YouTube, and Twitter).
This corpus contains more than 13.5K documents in Algerian dialect written in
Arabic, labeled as hateful or non-hateful. Promising results are obtained,
which show the efficiency of our approach.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 19:54:48 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Lanasri",
"Dihia",
""
],
[
"Olano",
"Juan",
""
],
[
"Klioui",
"Sifal",
""
],
[
"Lee",
"Sin Liang",
""
],
[
"Sekkai",
"Lamia",
""
]
] |
new_dataset
| 0.998559 |
2309.11648
|
Duarte Rondao
|
Duarte Rondao, Lei He, Nabil Aouf
|
Orbital AI-based Autonomous Refuelling Solution
|
13 pages
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cameras are rapidly becoming the sensor of choice for on-board space
rendezvous due to their small form factor and low power, mass, and volume
costs. When it comes to docking, however, they typically serve a
secondary role, whereas the main work is done by active sensors such as lidar.
This paper documents the development of a proposed AI-based (artificial
intelligence) navigation algorithm intending to mature the use of on-board
visible wavelength cameras as a main sensor for docking and on-orbit servicing
(OOS), reducing the dependency on lidar and greatly reducing costs.
Specifically, the use of AI enables the expansion of the relative navigation
solution towards multiple classes of scenarios, e.g., in terms of targets or
illumination conditions, which would otherwise have to be crafted on a
case-by-case manner using classical image processing methods. Multiple
convolutional neural network (CNN) backbone architectures are benchmarked on
synthetically generated data of docking manoeuvres with the International Space
Station (ISS), achieving position and attitude estimates close to 1%
range-normalised and 1 deg, respectively. The integration of the solution with
a physical prototype of the refuelling mechanism is validated in the laboratory
using a robotic arm to simulate a berthing procedure.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 21:25:52 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Rondao",
"Duarte",
""
],
[
"He",
"Lei",
""
],
[
"Aouf",
"Nabil",
""
]
] |
new_dataset
| 0.975039 |
2309.11691
|
Ruoxi Sun
|
Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang
Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer
|
RAI4IoE: Responsible AI for Enabling the Internet of Energy
|
Accepted to IEEE International Conference on Trust, Privacy and
Security in Intelligent Systems, and Applications (TPS) 2023
| null | null | null |
cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes to develop an Equitable and Responsible AI framework with
enabling techniques and algorithms for the Internet of Energy (IoE), in short,
RAI4IoE. The energy sector is going through substantial changes fueled by two
key drivers: building a zero-carbon energy sector and the digital
transformation of the energy infrastructure. We expect to see the convergence
of these two drivers resulting in the IoE, where renewable distributed energy
resources (DERs), such as electric cars, storage batteries, wind turbines and
photovoltaics (PV), can be connected and integrated for reliable energy
distribution by leveraging advanced 5G-6G networks and AI technology. This
allows DER owners as prosumers to participate in the energy market and derive
economic incentives. DERs are inherently asset-driven and face equitable
challenges (i.e., fair, diverse and inclusive). Without equitable access,
privileged individuals, groups and organizations can participate and benefit at
the cost of disadvantaged groups. The real-time management of DER resources not
only brings out the equity problem in the IoE, it also collects highly
sensitive location-, time-, and activity-dependent data, which needs to be handled
responsibly (e.g., privacy, security and safety), for AI-enhanced predictions,
optimization and prioritization services, and automated management of flexible
resources. The vision of our project is to ensure equitable participation of
the community members and responsible use of their data in IoE so that it could
reap the benefits of advances in AI to provide safe, reliable and sustainable
energy services.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2023 23:45:54 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Xue",
"Minhui",
""
],
[
"Nepal",
"Surya",
""
],
[
"Liu",
"Ling",
""
],
[
"Sethuvenkatraman",
"Subbu",
""
],
[
"Yuan",
"Xingliang",
""
],
[
"Rudolph",
"Carsten",
""
],
[
"Sun",
"Ruoxi",
""
],
[
"Eisenhauer",
"Greg",
""
]
] |
new_dataset
| 0.966914 |
2309.11715
|
Xiao-Feng Zhang
|
Xiao Feng Zhang, Tian Yi Song, Jia Wei Yao
|
Deshadow-Anything: When Segment Anything Model Meets Zero-shot shadow
removal
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Segment Anything (SAM), an advanced universal image segmentation model
trained on an expansive visual dataset, has set a new benchmark in image
segmentation and computer vision. However, it faces challenges when it comes to
distinguishing between shadows and their backgrounds. To address this, we
developed Deshadow-Anything, considering the generalization of large-scale
datasets, and we performed Fine-tuning on large-scale datasets to achieve image
shadow removal. The diffusion model can diffuse along the edges and textures of
an image, helping to remove shadows while preserving the details of the image.
Furthermore, we design Multi-Self-Attention Guidance (MSAG) and adaptive input
perturbation (DDPM-AIP) to accelerate the iterative training speed of
diffusion. Experiments on shadow removal tasks demonstrate that these methods
can effectively improve image restoration performance.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 01:35:13 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhang",
"Xiao Feng",
""
],
[
"Song",
"Tian Yi",
""
],
[
"Yao",
"Jia Wei",
""
]
] |
new_dataset
| 0.99973 |
2309.11766
|
Rajesh Kumar
|
Rajesh Kumar and Can Isik and Chilukuri K. Mohan
|
Dictionary Attack on IMU-based Gait Authentication
|
12 pages, 9 figures, accepted at AISec23 colocated with ACM CCS,
November 30, 2023, Copenhagen, Denmark
| null | null | null |
cs.CR cs.CV cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel adversarial model for authentication systems that use gait
patterns recorded by the inertial measurement unit (IMU) built into
smartphones. The attack idea is inspired by and named after the concept of a
dictionary attack on knowledge (PIN or password) based authentication systems.
In particular, this work investigates whether it is possible to build a
dictionary of IMUGait patterns and use it to launch an attack or find an
imitator who can actively reproduce IMUGait patterns that match the target's
IMUGait pattern. Nine physically and demographically diverse individuals walked
at various levels of four predefined controllable and adaptable gait factors
(speed, step length, step width, and thigh-lift), producing 178 unique IMUGait
patterns. Each pattern attacked a wide variety of user authentication models.
The deeper analysis of error rates (before and after the attack) challenges the
belief that authentication systems based on IMUGait patterns are the most
difficult to spoof; further research is needed on adversarial models and
associated countermeasures.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 04:00:21 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Kumar",
"Rajesh",
""
],
[
"Isik",
"Can",
""
],
[
"Mohan",
"Chilukuri K.",
""
]
] |
new_dataset
| 0.998298 |
2309.11767
|
Tongtong Zhang
|
Tongtong Zhang, Yuanxiang Li
|
Fast Satellite Tensorial Radiance Field for Multi-date Satellite Imagery
of Large Size
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing NeRF models for satellite images suffer from slow speeds, mandatory
solar information as input, and limitations in handling large satellite images.
In response, we present SatensoRF, which significantly accelerates the entire
process while employing fewer parameters for satellite imagery of large size.
Besides, we observed that the prevalent assumption of Lambertian surfaces in
neural radiance fields falls short for vegetative and aquatic elements. In
contrast to the traditional hierarchical MLP-based scene representation, we
have chosen a multiscale tensor decomposition approach for color, volume
density, and auxiliary variables to model the lightfield with specular color.
Additionally, to rectify inconsistencies in multi-date imagery, we incorporate
total variation loss to restore the density tensor field and treat the problem
as a denoising task. To validate our approach, we conducted assessments of
SatensoRF using subsets from the spacenet multi-view dataset, which includes
both multi-date and single-date multi-view RGB images. Our results clearly
demonstrate that SatensoRF surpasses the state-of-the-art Sat-NeRF series in
terms of novel view synthesis performance. Significantly, SatensoRF requires
fewer parameters for training, resulting in faster training and inference
speeds and reduced computational demands.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 04:00:38 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhang",
"Tongtong",
""
],
[
"Li",
"Yuanxiang",
""
]
] |
new_dataset
| 0.976939 |
2309.11770
|
Dinesh Kumar Kamalanathan
|
Dinesh Kumar K, Duraimutharasan N
|
Two Fish Encryption Based Blockchain Technology for Secured Data Storage
|
https://anapub.co.ke/journals/jmc/jmc_abstract/2023/jmc_volume_03_issue_03/jmc_volume3_issue3_4.html
|
2023, Volume 03, Issue 03, Pages: 216-226
|
10.53759/7669/jmc202303020
| null |
cs.CR cs.DC cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data security and sharing remain a nuisance in many applications involving
business data, medical data, banking data, etc. In this research, block chain
technology is built with an encryption algorithm for high-level data security
in cloud storage. Medical data security is a critical aspect due to the
sensitivity of patient information. Unauthorized access to medical data
creates major issues for patients. This article proposes block chain with a
hybrid encryption technique for securing medical data stored in a block chain
model in cloud storage. A new Two fish encryption model is implemented based
on RSA Multiple Precision Arithmetic (MPA), which works by using a library
concept. The objective of using this methodology is to enhance security
performance with less execution time. Patient data is processed by the
encryption algorithm and stored on the blockchain infrastructure using an
encrypted key. Access permission allows users to read or write the medical
data attached to the block chain framework. The performance of traditional
cryptographic techniques is far lower in providing such a security
infrastructure.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 04:08:23 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"K",
"Dinesh Kumar",
""
],
[
"N",
"Duraimutharasan",
""
]
] |
new_dataset
| 0.962923 |
2309.11804
|
Han Sun
|
Zixuan Yin, Han Sun, Ningzhong Liu, Huiyu Zhou, Jiaquan Shen
|
FGFusion: Fine-Grained Lidar-Camera Fusion for 3D Object Detection
|
accepted by PRCV2023, code: https://github.com/XavierGrool/FGFusion
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Lidars and cameras are critical sensors that provide complementary
information for 3D detection in autonomous driving. While most prevalent
methods progressively downscale the 3D point clouds and camera images and then
fuse the high-level features, the downscaled features inevitably lose low-level
detailed information. In this paper, we propose Fine-Grained Lidar-Camera
Fusion (FGFusion), which makes full use of multi-scale features of the image and
point cloud and fuses them in a fine-grained way. First, we design a dual pathway
hierarchy structure to extract both high-level semantic and low-level detailed
features of the image. Second, an auxiliary network is introduced to guide
point cloud features to better learn the fine-grained spatial information.
Finally, we propose multi-scale fusion (MSF) to fuse the last N feature maps of
image and point cloud. Extensive experiments on two popular autonomous driving
benchmarks, i.e. KITTI and Waymo, demonstrate the effectiveness of our method.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 06:24:59 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Yin",
"Zixuan",
""
],
[
"Sun",
"Han",
""
],
[
"Liu",
"Ningzhong",
""
],
[
"Zhou",
"Huiyu",
""
],
[
"Shen",
"Jiaquan",
""
]
] |
new_dataset
| 0.999287 |
2309.11830
|
Chengyuan Liu
|
Chengyuan Liu, Fubang Zhao, Lizhi Qing, Yangyang Kang, Changlong Sun,
Kun Kuang, Fei Wu
|
A Chinese Prompt Attack Dataset for LLMs with Evil Content
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) show significant capability in text
understanding and generation. However, LLMs suffer from the risk of generating
harmful contents, especially when employed in applications. There are
several black-box attack methods, such as Prompt Attack, which can change the
behaviour of LLMs and induce LLMs to generate unexpected answers with harmful
contents. Researchers are interested in Prompt Attack and Defense with LLMs,
yet there is no publicly available dataset to evaluate the ability to defend
against prompt attacks. In this paper, we introduce a Chinese Prompt Attack
Dataset for LLMs, called CPAD. Our prompts aim to induce LLMs to generate
unexpected outputs with several carefully designed prompt attack approaches and
widely concerned attacking contents. Different from previous datasets involving
safety estimation, we construct the prompts considering three dimensions:
contents, attacking methods and goals, thus the responses can be easily
evaluated and analysed. We run several well-known Chinese LLMs on our dataset,
and the results show that our prompts are significantly harmful to LLMs, with
around 70% attack success rate. We will release CPAD to encourage further
studies on prompt attack and defense.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 07:07:49 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Liu",
"Chengyuan",
""
],
[
"Zhao",
"Fubang",
""
],
[
"Qing",
"Lizhi",
""
],
[
"Kang",
"Yangyang",
""
],
[
"Sun",
"Changlong",
""
],
[
"Kuang",
"Kun",
""
],
[
"Wu",
"Fei",
""
]
] |
new_dataset
| 0.999831 |
2309.11847
|
Ting Jiang
|
Ting Jiang, Chuan Wang, Xinpeng Li, Ru Li, Haoqiang Fan, Shuaicheng
Liu
|
MEFLUT: Unsupervised 1D Lookup Tables for Multi-exposure Image Fusion
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a new approach for high-quality multi-exposure
image fusion (MEF). We show that the fusion weights of an exposure can be
encoded into a 1D lookup table (LUT), which takes pixel intensity value as
input and produces fusion weight as output. We learn one 1D LUT for each
exposure; then all the pixels from different exposures can query the 1D LUT of that
exposure independently for high-quality and efficient fusion. Specifically, to
learn these 1D LUTs, we incorporate attention mechanisms in various dimensions,
including frame, channel and spatial ones, into the MEF task so as to bring
significant quality improvement over the state-of-the-art (SOTA). In addition,
we collect a new MEF dataset consisting of 960 samples, 155 of which are
manually tuned by professionals as ground-truth for evaluation. Our network is
trained by this dataset in an unsupervised manner. Extensive experiments are
conducted to demonstrate the effectiveness of all the newly proposed
components, and results show that our approach outperforms the SOTA in our and
another representative dataset SICE, both qualitatively and quantitatively.
Moreover, our 1D LUT approach takes less than 4ms to run a 4K image on a PC
GPU. Given its high quality, efficiency and robustness, our method has been
shipped into millions of Android mobiles across multiple brands world-wide.
Code is available at: https://github.com/Hedlen/MEFLUT.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 07:43:03 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Jiang",
"Ting",
""
],
[
"Wang",
"Chuan",
""
],
[
"Li",
"Xinpeng",
""
],
[
"Li",
"Ru",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
new_dataset
| 0.989306 |
2309.11848
|
Cunjun Yu
|
Zhimin Hou and Cunjun Yu and David Hsu and Haoyong Yu
|
TeachingBot: Robot Teacher for Human Handwriting
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Teaching physical skills to humans requires one-on-one interaction between
the teacher and the learner. With a shortage of human teachers, such a teaching
mode faces the challenge of scaling up. Robots, with their replicable nature
and physical capabilities, offer a solution. In this work, we present
TeachingBot, a robotic system designed for teaching handwriting to human
learners. We tackle two primary challenges in this teaching task: the
adaptation to each learner's unique style and the creation of an engaging
learning experience. TeachingBot captures the learner's style using a
probabilistic learning approach based on the learner's handwriting. Then, based
on the learned style, it provides physical guidance to human learners with
variable impedance to make the learning experience engaging. Results from
human-subject experiments based on 15 human subjects support the effectiveness
of TeachingBot, demonstrating improved human learning outcomes compared to
baseline methods. Additionally, we illustrate how TeachingBot customizes its
teaching approach for individual learners, leading to enhanced overall
engagement and effectiveness.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 07:45:25 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Hou",
"Zhimin",
""
],
[
"Yu",
"Cunjun",
""
],
[
"Hsu",
"David",
""
],
[
"Yu",
"Haoyong",
""
]
] |
new_dataset
| 0.998795 |
2309.11853
|
Luyao He
|
Luyao He, Zhongbao Zhang, Sen Su, Yuxin Chen
|
BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based
Joint Relational Triple Extraction Framework
|
arXiv admin note: text overlap with arXiv:2112.04940 by other authors
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Relation triple extraction (RTE) is an essential task in information
extraction and knowledge graph construction. Despite recent advancements,
existing methods still exhibit certain limitations. They just employ
generalized pre-trained models and do not consider the specificity of RTE
tasks. Moreover, existing tagging-based approaches typically decompose the RTE
task into two subtasks, initially identifying subjects and subsequently
identifying objects and relations. They solely focus on extracting relational
triples from subject to object, neglecting that once the extraction of a
subject fails, it fails in extracting all triples associated with that subject.
To address these issues, we propose BitCoin, an innovative Bidirectional
tagging and supervised Contrastive learning based joint relational triple
extraction framework. Specifically, we design a supervised contrastive learning
method that considers multiple positives per anchor rather than restricting it
to just one positive. Furthermore, a penalty term is introduced to prevent
excessive similarity between the subject and object. Our framework implements
taggers in two directions, enabling triples extraction from subject to object
and object to subject. Experimental results show that BitCoin achieves
state-of-the-art results on the benchmark datasets and significantly improves
the F1 score on Normal, SEO, EPO, and multiple relation extraction tasks.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 07:55:54 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"He",
"Luyao",
""
],
[
"Zhang",
"Zhongbao",
""
],
[
"Su",
"Sen",
""
],
[
"Chen",
"Yuxin",
""
]
] |
new_dataset
| 0.981119 |
2309.11857
|
Bingyao Yu
|
Junlong Li, Bingyao Yu, Yongming Rao, Jie Zhou, Jiwen Lu
|
TCOVIS: Temporally Consistent Online Video Instance Segmentation
|
11 pages, 4 figures. This paper has been accepted for ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, significant progress has been made in video instance
segmentation (VIS), with many offline and online methods achieving
state-of-the-art performance. While offline methods have the advantage of
producing temporally consistent predictions, they are not suitable for
real-time scenarios. Conversely, online methods are more practical, but
maintaining temporal consistency remains a challenging task. In this paper, we
propose a novel online method for video instance segmentation, called TCOVIS,
which fully exploits the temporal information in a video clip. The core of our
method consists of a global instance assignment strategy and a spatio-temporal
enhancement module, which improve the temporal consistency of the features from
two aspects. Specifically, we perform global optimal matching between the
predictions and ground truth across the whole video clip, and supervise the
model with the global optimal objective. We also capture the spatial feature
and aggregate it with the semantic feature between frames, thus realizing the
spatio-temporal enhancement. We evaluate our method on four widely adopted VIS
benchmarks, namely YouTube-VIS 2019/2021/2022 and OVIS, and achieve
state-of-the-art performance on all benchmarks without bells-and-whistles. For
instance, on YouTube-VIS 2021, TCOVIS achieves 49.5 AP and 61.3 AP with
ResNet-50 and Swin-L backbones, respectively. Code is available at
https://github.com/jun-long-li/TCOVIS.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 07:59:15 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Li",
"Junlong",
""
],
[
"Yu",
"Bingyao",
""
],
[
"Rao",
"Yongming",
""
],
[
"Zhou",
"Jie",
""
],
[
"Lu",
"Jiwen",
""
]
] |
new_dataset
| 0.997175 |
2309.11862
|
Mladen Kova\v{c}evi\'c
|
Mladen Kova\v{c}evi\'c, Iosif Pinelis, Marios Kountouris
|
An Information-Theoretic Analog of the Twin Paradox
| null | null | null | null |
cs.IT math.IT physics.pop-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the familiar scenario involving two parties in relative motion, in
which Alice stays at rest while Bob goes on a journey at speed $ \beta c $
along an arbitrary trajectory and reunites with Alice after a certain period of
time. It is a well-known consequence of special relativity that the time that
passes until they meet again is different for the two parties and is shorter in
Bob's frame by a factor of $ \sqrt{1-\beta^2} $. We investigate how this
asymmetry manifests from an information-theoretic viewpoint. Assuming that
Alice and Bob transmit signals of equal average power to each other during the
whole journey, and that additive white Gaussian noise is present on both sides,
we show that the maximum number of bits per second that Alice can transmit
reliably to Bob is always higher than the one Bob can transmit to Alice.
Equivalently, the energy per bit invested by Alice is lower than that invested
by Bob, meaning that the traveler is less efficient from the communication
perspective, as conjectured by Jarett and Cover.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 08:06:35 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Kovačević",
"Mladen",
""
],
[
"Pinelis",
"Iosif",
""
],
[
"Kountouris",
"Marios",
""
]
] |
new_dataset
| 0.994426 |
2309.11883
|
Xin Wang
|
Zongqian Zhan, Rui Xia, Yifei Yu, Yibo Xu, Xin Wang
|
On-the-Fly SfM: What you capture is What you get
|
This work has been submitted to the IEEE International Conference on
Robotics and Automation (ICRA 2024) for possible publication. Copyright may
be transferred without notice, after which this version may no longer be
accessible
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Over the last decades, ample achievements have been made on Structure from
motion (SfM). However, the vast majority of them basically work in an offline
manner, i.e., images are firstly captured and then fed together into a SfM
pipeline for obtaining poses and sparse point cloud. In this work, on the
contrary, we present an on-the-fly SfM: running online SfM while image
capturing, the newly taken On-the-Fly image is online estimated with the
corresponding pose and points, i.e., what you capture is what you get.
Specifically, our approach first employs a vocabulary tree that is trained in
an unsupervised manner using learning-based global features for fast image
retrieval of the newly fly-in image. Then, a robust feature matching mechanism with
least squares (LSM) is presented to improve image registration performance.
Finally, via investigating the influence of newly fly-in image's connected
neighboring images, an efficient hierarchical weighted local bundle adjustment
(BA) is used for optimization. Extensive experimental results demonstrate that
on-the-fly SfM can meet the goal of robustly registering the images while
capturing in an online way.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 08:34:01 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhan",
"Zongqian",
""
],
[
"Xia",
"Rui",
""
],
[
"Yu",
"Yifei",
""
],
[
"Xu",
"Yibo",
""
],
[
"Wang",
"Xin",
""
]
] |
new_dataset
| 0.950772 |
2309.11902
|
Zonghui Li
|
Zonghui Li (1), Wenlin Zhu (1), Kang G. Shin (2), Hai Wan (3), Xiaoyu
Song (4), Dong Yang (5), and Bo Ai (5) ((1) School of Computer and
Information Technology, Beijing Jiaotong University, Beijing, China, 100044.
(2) Department of Electrical Engineering and Computer Science, University of
Michigan, Ann Arbor, MI 48109-2121, USA. (3) Software School, Tsinghua
University, Beijing, China, 100084. (4) Department of Electrical and Computer
Engineering, Portland State University, Portland, OR. (5) School of
Electronic and Information Engineering, Beijing Jiaotong University, Beijing,
China, 100044.)
|
A Switch Architecture for Time-Triggered Transmission with Best-Effort
Delivery
|
14 pages
| null | null | null |
cs.NI cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Time-Triggered (TT) or time-sensitive networks, the transmission of a TT
frame is required to be scheduled at a precise time instant for industrial
distributed real-time control systems. Other (or {\em best-effort} (BE)) frames
are forwarded in a BE manner. Under this scheduling strategy, the transmission
of a TT frame must wait until its scheduled instant even if it could have been
transmitted sooner. On the other hand, BE frames are transmitted whenever
possible but may miss deadlines or may even be dropped due to congestion. As a
result, TT transmission and BE delivery are incompatible with each other.
To remedy this incompatibility, we propose a synergistic switch architecture
(SWA) for TT transmission with BE delivery to dynamically improve the
end-to-end (e2e) latency of TT frames by opportunistically exploiting BE
delivery. Given a TT frame, the SWA generates and transmits a cloned copy with
BE delivery. The first frame arriving at the receiver device is delivered with
a configured jitter and the other copy ignored. So, the SWA achieves shorter
latency and controllable jitter, the best of both worlds. We have implemented
SWA using FPGAs in industry-strength TT switches and used four test
scenarios to demonstrate SWA's improvements of e2e latency and controllable
jitter over the state-of-the-art TT transmission scheme.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 09:14:03 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Li",
"Zonghui",
""
],
[
"Zhu",
"Wenlin",
""
],
[
"Shin",
"Kang G.",
""
],
[
"Wan",
"Hai",
""
],
[
"Song",
"Xiaoyu",
""
],
[
"Yang",
"Dong",
""
],
[
"Ai",
"Bo",
""
]
] |
new_dataset
| 0.998663 |
2309.11923
|
Xiaozhou You
|
Xiaozhou You, Jian Zhang
|
TextCLIP: Text-Guided Face Image Generation And Manipulation Without
Adversarial Training
|
10 pages, 6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-guided image generation aims to generate desired images conditioned on
given texts, while text-guided image manipulation refers to semantically edit
parts of a given image based on specified texts. For these two similar tasks,
the key point is to ensure image fidelity as well as semantic consistency. Many
previous approaches require complex multi-stage generation and adversarial
training, while struggling to provide a unified framework for both tasks. In
this work, we propose TextCLIP, a unified framework for text-guided image
generation and manipulation without adversarial training. The proposed method
accepts input from images or random noise corresponding to these two different
tasks, and under the condition of the specific texts, a carefully designed
mapping network that exploits the powerful generative capabilities of StyleGAN
and the text image representation capabilities of Contrastive Language-Image
Pre-training (CLIP) generates images of up to $1024\times1024$ resolution, the
highest that can currently be generated. Extensive experiments on the Multi-modal CelebA-HQ
dataset have demonstrated that our proposed method outperforms existing
state-of-the-art methods, both on text-guided generation tasks and manipulation
tasks.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 09:34:20 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"You",
"Xiaozhou",
""
],
[
"Zhang",
"Jian",
""
]
] |
new_dataset
| 0.999118 |
2309.11928
|
Martin Hole\v{n}a
|
Luk\'a\v{s} Korel, Petr Pulc, Ji\v{r}\'i Tumpach, and Martin
Hole\v{n}a
|
Video Scene Location Recognition with Neural Networks
| null | null | null | null |
cs.CV cs.NE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper provides an insight into the possibility of scene recognition from
a video sequence with a small set of repeated shooting locations (such as in
television series) using artificial neural networks. The basic idea of the
presented approach is to select a set of frames from each scene, transform them
by a pre-trained single-image pre-processing convolutional network, and classify
the scene location with subsequent layers of the neural network. The considered
networks have been tested and compared on a dataset obtained from The Big Bang
Theory television series. We have investigated different neural network layers
to combine individual frames, particularly AveragePooling, MaxPooling, Product,
Flatten, LSTM, and Bidirectional LSTM layers. We have observed that only some
of the approaches are suitable for the task at hand.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 09:42:39 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Korel",
"Lukáš",
""
],
[
"Pulc",
"Petr",
""
],
[
"Tumpach",
"Jiří",
""
],
[
"Holeňa",
"Martin",
""
]
] |
new_dataset
| 0.97016 |
2309.11935
|
Maxime Vaidis
|
Maxime Vaidis, Mohsen Hassanzadeh Shahraji, Effie Daum, William
Dubois, Philippe Gigu\`ere, and Fran\c{c}ois Pomerleau
|
RTS-GT: Robotic Total Stations Ground Truthing dataset
|
7 pages; Submitted to ICRA 2024
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Numerous datasets and benchmarks exist to assess and compare Simultaneous
Localization and Mapping (SLAM) algorithms. Nevertheless, their precision must
keep pace with the rate at which SLAM algorithms have improved in recent years.
Moreover, current datasets fall short of a comprehensive data-collection
protocol for reproducibility and for evaluating the precision or accuracy of
the recorded trajectories. With this objective in mind, we propose the Robotic
Total Stations Ground Truthing (RTS-GT) dataset to support localization
research with the generation of six-Degrees Of Freedom (DOF) ground truth
trajectories. This novel dataset includes six-DOF ground truth trajectories
generated using a system of three Robotic Total Stations (RTSs) tracking moving
robotic platforms. Furthermore, we compare the performance of the RTS-based
system to a Global Navigation Satellite System (GNSS)-based setup. The dataset
comprises around sixty experiments conducted in various conditions over a
period of 17 months, and encompasses over 49 kilometers of trajectories, making
it the most extensive dataset of RTS-based measurements to date. Additionally,
we provide the precision of all poses for each experiment, a feature not found
in the current state-of-the-art datasets. Our results demonstrate that RTSs
provide measurements that are 22 times more stable than GNSS in various
environmental settings, making them a valuable resource for SLAM benchmark
development.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 09:47:55 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Vaidis",
"Maxime",
""
],
[
"Shahraji",
"Mohsen Hassanzadeh",
""
],
[
"Daum",
"Effie",
""
],
[
"Dubois",
"William",
""
],
[
"Giguère",
"Philippe",
""
],
[
"Pomerleau",
"François",
""
]
] |
new_dataset
| 0.99934 |
2309.11957
|
Argha Sen
|
Argha Sen, Anirban Das, Swadhin Pradhan, Sandip Chakraborty
|
Continuous Multi-user Activity Tracking via Room-Scale mmWave Sensing
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continuous detection of human activities and presence is essential for
developing a pervasive interactive smart space. Existing literature lacks
robust wireless sensing mechanisms capable of continuously monitoring multiple
users' activities without prior knowledge of the environment. Developing such a
mechanism requires simultaneous localization and tracking of multiple subjects.
In addition, it requires identifying their activities at various scales, some
being macro-scale activities like walking, squats, etc., while others are
micro-scale activities like typing or sitting, etc. In this paper, we develop a
holistic system called MARS using a single Commercial Off-The-Shelf (COTS)
Millimeter Wave (mmWave) radar, which employs an intelligent model to sense
both macro and micro activities. In addition, it uses a dynamic spatial time
sharing approach to sense different subjects simultaneously. A thorough
evaluation of MARS shows that it can infer activities continuously with a
weighted F1-Score of > 94% and an average response time of approx 2 sec, with 5
subjects and 19 different activities.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 10:15:43 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Sen",
"Argha",
""
],
[
"Das",
"Anirban",
""
],
[
"Pradhan",
"Swadhin",
""
],
[
"Chakraborty",
"Sandip",
""
]
] |
new_dataset
| 0.997603 |
2309.11962
|
Taeho Kang
|
Taeho Kang, Kyungjin Lee, Jinrui Zhang, Youngki Lee
|
Ego3DPose: Capturing 3D Cues from Binocular Egocentric Views
|
12 pages, 10 figures, to be published as SIGGRAPH Asia 2023
Conference Papers
| null |
10.1145/3610548.3618147
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Ego3DPose, a highly accurate binocular egocentric 3D pose
reconstruction system. The binocular egocentric setup offers practicality and
usefulness in various applications; however, it remains largely under-explored.
It has been suffering from low pose estimation accuracy due to viewing
distortion, severe self-occlusion, and limited field-of-view of the joints in
egocentric 2D images. Here, we notice that two important 3D cues, stereo
correspondences, and perspective, contained in the egocentric binocular input
are neglected. Current methods heavily rely on 2D image features, implicitly
learning 3D information, which introduces biases towards commonly observed
motions and leads to low overall accuracy. We observe that they not only fail
in challenging occlusion cases but also in estimating visible joint positions.
To address these challenges, we propose two novel approaches. First, we design
a two-path network architecture with a path that estimates pose per limb
independently with its binocular heatmaps. Without full-body information
provided, it alleviates bias toward trained full-body distribution. Second, we
leverage the egocentric view of body limbs, which exhibits strong perspective
variance (e.g., a significantly large-size hand when it is close to the
camera). We propose a new perspective-aware representation using trigonometry,
enabling the network to estimate the 3D orientation of limbs. Finally, we
develop an end-to-end pose reconstruction network that synergizes both
techniques. Our comprehensive evaluations demonstrate that Ego3DPose
outperforms state-of-the-art models by a pose estimation error (i.e., MPJPE)
reduction of 23.1% in the UnrealEgo dataset. Our qualitative results highlight
the superiority of our approach across a range of scenarios and challenges.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 10:34:35 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Kang",
"Taeho",
""
],
[
"Lee",
"Kyungjin",
""
],
[
"Zhang",
"Jinrui",
""
],
[
"Lee",
"Youngki",
""
]
] |
new_dataset
| 0.982469 |
2309.11986
|
Philipp Ausserlechner Dipl.-Ing.
|
Philipp Ausserlechner, David Haberger, Stefan Thalhammer,
Jean-Baptiste Weibel and Markus Vincze
|
ZS6D: Zero-shot 6D Object Pose Estimation using Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As robotic systems increasingly encounter complex and unconstrained
real-world scenarios, there is a demand to recognize diverse objects. The
state-of-the-art 6D object pose estimation methods rely on object-specific
training and therefore do not generalize to unseen objects. Recent novel object
pose estimation methods are solving this issue using task-specific fine-tuned
CNNs for deep template matching. This adaptation for pose estimation still
requires expensive data rendering and training procedures. MegaPose for example
is trained on a dataset consisting of two million images showing 20,000
different objects to reach such generalization capabilities. To overcome this
shortcoming we introduce ZS6D, for zero-shot novel object 6D pose estimation.
Visual descriptors, extracted using pre-trained Vision Transformers (ViT), are
used for matching rendered templates against query images of objects and for
establishing local correspondences. These local correspondences enable deriving
geometric correspondences and are used for estimating the object's 6D pose with
RANSAC-based PnP. This approach showcases that the image descriptors extracted
by pre-trained ViTs are well-suited to achieve a notable improvement over two
state-of-the-art novel object 6D pose estimation methods, without the need for
task-specific fine-tuning. Experiments are performed on LMO, YCBV, and TLESS.
In comparison to one of the two methods we improve the Average Recall on all
three datasets and compared to the second method we improve on two datasets.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 11:53:01 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Ausserlechner",
"Philipp",
""
],
[
"Haberger",
"David",
""
],
[
"Thalhammer",
"Stefan",
""
],
[
"Weibel",
"Jean-Baptiste",
""
],
[
"Vincze",
"Markus",
""
]
] |
new_dataset
| 0.998777 |
2309.12003
|
Minjia Shi
|
Minjia Shi, Sihui Tao, Jon-Lark Kim and Patrick Sole
|
A quaternary analogue of Tang-Ding codes
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a recent paper, Tang and Ding introduced a class of binary cyclic codes of
rate close to one half with a designed lower bound on their minimum distance.
The definition involves the base $2$ expansion of the integers in their
defining set. In this paper we propose an analogue for quaternary codes. In
addition, the performances of the subfield subcode and of the trace code (two
binary cyclic codes) are investigated.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 12:21:34 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Tao",
"Sihui",
""
],
[
"Kim",
"Jon-Lark",
""
],
[
"Sole",
"Patrick",
""
]
] |
new_dataset
| 0.998391 |
2309.12008
|
Vlad Niculescu Mr.
|
Vlad Niculescu, Tommaso Polonelli, Michele Magno, Luca Benini
|
NanoSLAM: Enabling Fully Onboard SLAM for Tiny Robots
|
23 pages
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Perceiving and mapping the surroundings are essential for enabling autonomous
navigation in any robotic platform. The algorithm class that enables accurate
mapping while correcting the odometry errors present in most robotics systems
is Simultaneous Localization and Mapping (SLAM). Today, fully onboard mapping
is only achievable on robotic platforms that can host high-wattage processors,
mainly due to the significant computational load and memory demands required
for executing SLAM algorithms. For this reason, pocket-size
hardware-constrained robots offload the execution of SLAM to external
infrastructures. To address the challenge of enabling SLAM algorithms on
resource-constrained processors, this paper proposes NanoSLAM, a lightweight
and optimized end-to-end SLAM approach specifically designed to operate on
centimeter-size robots at a power budget of only 87.9 mW. We demonstrate the
mapping capabilities in real-world scenarios and deploy NanoSLAM on a
nano-drone weighing 44 g and equipped with a novel commercial RISC-V low-power
parallel processor called GAP9. The algorithm is designed to leverage the
parallel capabilities of the RISC-V processing cores and enables mapping of a
general environment with an accuracy of 4.5 cm and an end-to-end execution time
of less than 250 ms.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 12:27:18 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Niculescu",
"Vlad",
""
],
[
"Polonelli",
"Tommaso",
""
],
[
"Magno",
"Michele",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.978632 |
2309.12030
|
Masato Mita
|
Masato Mita, Soichiro Murakami, Akihiko Kato, Peinan Zhang
|
CAMERA: A Multimodal Dataset and Benchmark for Ad Text Generation
|
13 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In response to the limitations of manual online ad production, significant
research has been conducted in the field of automatic ad text generation (ATG).
However, comparing different methods has been challenging because of the lack
of benchmarks encompassing the entire field and the absence of well-defined
problem sets with clear model inputs and outputs. To address these challenges,
this paper aims to advance the field of ATG by introducing a redesigned task
and constructing a benchmark. Specifically, we defined ATG as a
cross-application task encompassing various aspects of the Internet
advertising. As part of our contribution, we propose a first benchmark dataset,
CA Multimodal Evaluation for Ad Text GeneRAtion (CAMERA), carefully designed
for ATG to be able to leverage multi-modal information and conduct an
industry-wise evaluation. Furthermore, we demonstrate the usefulness of our
proposed benchmark through evaluation experiments using multiple baseline
models, which vary in terms of the type of pre-trained language model used and
the incorporation of multi-modal information. We also discuss the current state
of the task and the future challenges.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 12:51:24 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Mita",
"Masato",
""
],
[
"Murakami",
"Soichiro",
""
],
[
"Kato",
"Akihiko",
""
],
[
"Zhang",
"Peinan",
""
]
] |
new_dataset
| 0.999595 |
2309.12051
|
Laura B\'egon-Lours
|
Laura B\'egon-Lours, Mattia Halter, Diana D\'avila Pineda, Valeria
Bragaglia, Youri Popoff, Antonio La Porta, Daniel Jubin, Jean Fompeyrine and
Bert Jan Offrein
|
A Back-End-Of-Line Compatible, Ferroelectric Analog Non-Volatile Memory
|
2021 IEEE International Memory Workshop (IMW)
| null |
10.1109/IMW51353.2021.9439611
| null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A Ferroelectric Analog Non-Volatile Memory based on a WOx electrode and
ferroelectric HfZrO4 layer is fabricated at a low thermal budget (~375C),
enabling BEOL processes and CMOS integration. The devices show suitable
properties for integration in crossbar arrays and neural network inference:
analog potentiation/depression with constant field or constant pulse width
schemes, cycle to cycle and device to device variation <10%, ON/OFF ratio up to
10 and good linearity. The physical mechanisms behind the resistive switching
and conduction mechanisms are discussed.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 13:17:44 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Bégon-Lours",
"Laura",
""
],
[
"Halter",
"Mattia",
""
],
[
"Pineda",
"Diana Dávila",
""
],
[
"Bragaglia",
"Valeria",
""
],
[
"Popoff",
"Youri",
""
],
[
"La Porta",
"Antonio",
""
],
[
"Jubin",
"Daniel",
""
],
[
"Fompeyrine",
"Jean",
""
],
[
"Offrein",
"Bert Jan",
""
]
] |
new_dataset
| 0.99645 |
2309.12061
|
Laura B\'egon-Lours
|
Laura B\'egon-Lours (1), Mattia Halter (1 and 2), Diana D\'avila
Pineda (1), Youri Popoff (1 and 2), Valeria Bragaglia (1), Antonio La Porta
(1), Daniel Jubin (1), Jean Fompeyrine (1) and Bert Jan Offrein (1) ((1) IBM
Research Zurich, R\"uschlikon, Switzerland and (2) ETH Z\"urich, Z\"urich,
Switzerland)
|
A BEOL Compatible, 2-Terminals, Ferroelectric Analog Non-Volatile Memory
|
2021 5th IEEE Electron Devices Technology & Manufacturing Conference
(EDTM)
| null |
10.1109/EDTM50988.2021.9420886
| null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A Ferroelectric Analog Non-Volatile Memory based on a WOx electrode and
ferroelectric HfZrO$_4$ layer is fabricated at a low thermal budget
(~375$^\circ$C), enabling BEOL processes and CMOS integration. The devices show
suitable properties for integration in crossbar arrays and neural network
inference: analog potentiation/depression with constant field or constant pulse
width schemes, cycle to cycle and device to device variation <10%, ON/OFF ratio
up to 10 and good linearity. The physical mechanisms behind the resistive
switching and conduction mechanisms are discussed.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 13:30:06 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Bégon-Lours",
"Laura",
"",
"1 and 2"
],
[
"Halter",
"Mattia",
"",
"1 and 2"
],
[
"Pineda",
"Diana Dávila",
"",
"1 and 2"
],
[
"Popoff",
"Youri",
"",
"1 and 2"
],
[
"Bragaglia",
"Valeria",
""
],
[
"La Porta",
"Antonio",
""
],
[
"Jubin",
"Daniel",
""
],
[
"Fompeyrine",
"Jean",
""
],
[
"Offrein",
"Bert Jan",
""
]
] |
new_dataset
| 0.998065 |
2309.12070
|
Laura B\'egon-Lours
|
Laura B\'egon-Lours (1), Mattia Halter (1 and 2), Youri Popoff (1 and
2), Zhenming Yu (1, 2 and 3), Donato Francesco Falcone (1 and 4) and Bert Jan
Offrein (1) ((1) IBM Research, R\"uschlikon, Switzerland, (2) ETH Z\"urich,
Z\"urich, Switzerland, (3) Institute of Neuroinformatics, University of
Z\"urich, (4) EPFL, Lausanne, Switzerland)
|
High-Conductance, Ohmic-like HfZrO$_4$ Ferroelectric Memristor
|
ESSCIRC 2021 - IEEE 47th European Solid State Circuits Conference
(ESSCIRC)
| null |
10.1109/ESSCIRC53450.2021.9567870
| null |
cs.AR physics.app-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The persistent and switchable polarization of HfO$_2$-based ferroelectric
compounds, which are compatible with large-scale integration, makes them
attractive synaptic elements for neuromorphic computing. To
achieve a record current density of 0.01 A/cm$^2$ (at a read voltage of 80 mV)
as well as ideal memristive behavior (linear current-voltage relation and
analog resistive switching), devices based on an ultra-thin (2.7 nm thick),
polycrystalline HfZrO$_4$ ferroelectric layer are fabricated by Atomic Layer
Deposition. The use of a semiconducting oxide interlayer (WO$_{x<3}$) at one of
the interfaces, induces an asymmetric energy profile upon ferroelectric
polarization reversal and thus the long-term potentiation / depression
(conductance increase / decrease) of interest. Moreover, it favors the stable
retention of both the low and the high resistive states. Thanks to the low
operating voltage (<3.5 V), programming requires less than 10$^{-12}$ J for 20
ns long pulses. Remarkably, the memristors show no wake-up or fatigue effect.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 13:39:26 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Bégon-Lours",
"Laura",
"",
"1 and 2"
],
[
"Halter",
"Mattia",
"",
"1 and 2"
],
[
"Popoff",
"Youri",
"",
"1 and\n 2"
],
[
"Yu",
"Zhenming",
"",
"1, 2 and 3"
],
[
"Falcone",
"Donato Francesco",
"",
"1 and 4"
],
[
"Offrein",
"Bert Jan",
""
]
] |
new_dataset
| 0.998671 |
2309.12089
|
Ming Chenlin
|
Chenlin Ming, Jiacheng Lin, Pangkit Fong, Han Wang, Xiaoming Duan and
Jianping He
|
HiCRISP: A Hierarchical Closed-Loop Robotic Intelligent Self-Correction
Planner
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The integration of Large Language Models (LLMs) into robotics has
revolutionized human-robot interactions and autonomous task planning. However,
these systems are often unable to self-correct during the task execution, which
hinders their adaptability in dynamic real-world environments. To address this
issue, we present a Hierarchical Closed-loop Robotic Intelligent
Self-correction Planner (HiCRISP), an innovative framework that enables robots
to correct errors within individual steps during the task execution. HiCRISP
actively monitors and adapts the task execution process, addressing both
high-level planning and low-level action errors. Extensive benchmark
experiments, encompassing virtual and real-world scenarios, showcase HiCRISP's
exceptional performance, positioning it as a promising solution for robotic
task planning with LLMs.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 13:58:26 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Ming",
"Chenlin",
""
],
[
"Lin",
"Jiacheng",
""
],
[
"Fong",
"Pangkit",
""
],
[
"Wang",
"Han",
""
],
[
"Duan",
"Xiaoming",
""
],
[
"He",
"Jianping",
""
]
] |
new_dataset
| 0.997686 |
2309.12137
|
Fatimah Alzamzami
|
Fatimah Alzamzami, Abdulmotaleb El Saddik
|
OSN-MDAD: Machine Translation Dataset for Arabic Multi-Dialectal
Conversations on Online Social Media
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While resources for the English language are fairly sufficient to understand
content on social media, similar resources in Arabic are still immature. The
main reason that the resources in Arabic are insufficient is that Arabic has
many dialects in addition to the standard version (MSA). Arabs do not use MSA
in their daily communications; rather, they use dialectal versions.
Unfortunately, users transfer this phenomenon into their use of social
media platforms, which in turn has raised an urgent need for building suitable
AI models for language-dependent applications. Existing machine translation
(MT) systems designed for MSA fail to work well with Arabic dialects. In light
of this, it is necessary to adapt to the informal nature of communication on
social networks by developing MT systems that can effectively handle the
various dialects of Arabic. Unlike for MSA that shows advanced progress in MT
systems, little effort has been exerted to utilize Arabic dialects for MT
systems. While a few attempts have been made to build translation datasets for
dialectal Arabic, they are domain dependent and are not OSN cultural-language
friendly. In this work, we attempt to alleviate these limitations by proposing
an online social network-based multidialect Arabic dataset that is crafted by
contextually translating English tweets into four Arabic dialects: Gulf,
Yemeni, Iraqi, and Levantine. To perform the translation, we followed our
proposed guideline framework for content translation, which could be
universally applicable for translation between foreign languages and local
dialects. We validated the authenticity of our proposed dataset by developing
neural MT models for four Arabic dialects. Our results show the superior
performance of our NMT models trained using our dataset. We believe that our
dataset can reliably serve as an Arabic multidialectal translation dataset for
informal MT tasks.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 14:58:50 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Alzamzami",
"Fatimah",
""
],
[
"Saddik",
"Abdulmotaleb El",
""
]
] |
new_dataset
| 0.999797 |
2309.12172
|
Kimberly Wilber
|
Sagar M. Waghmare, Kimberly Wilber, Dave Hawkey, Xuan Yang, Matthew
Wilson, Stephanie Debats, Cattalyya Nuengsigkapian, Astuti Sharma, Lars
Pandikow, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko
|
SANPO: A Scene Understanding, Accessibility, Navigation, Pathfinding,
Obstacle Avoidance Dataset
|
10 pages plus additional references. 13 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce SANPO, a large-scale egocentric video dataset focused on dense
prediction in outdoor environments. It contains stereo video sessions collected
across diverse outdoor environments, as well as rendered synthetic video
sessions. (Synthetic data was provided by Parallel Domain.) All sessions have
(dense) depth and odometry labels. All synthetic sessions and a subset of real
sessions have temporally consistent dense panoptic segmentation labels. To our
knowledge, this is the first human egocentric video dataset with both
large-scale dense panoptic segmentation and depth annotations. In addition to
the dataset, we also provide zero-shot baselines and SANPO benchmarks for future
research. We hope that the challenging nature of SANPO will help advance the
state-of-the-art in video segmentation, depth estimation, multi-task visual
modeling, and synthetic-to-real domain adaptation, while enabling human
navigation systems.
SANPO is available here:
https://google-research-datasets.github.io/sanpo_dataset/
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 15:28:04 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Waghmare",
"Sagar M.",
""
],
[
"Wilber",
"Kimberly",
""
],
[
"Hawkey",
"Dave",
""
],
[
"Yang",
"Xuan",
""
],
[
"Wilson",
"Matthew",
""
],
[
"Debats",
"Stephanie",
""
],
[
"Nuengsigkapian",
"Cattalyya",
""
],
[
"Sharma",
"Astuti",
""
],
[
"Pandikow",
"Lars",
""
],
[
"Wang",
"Huisheng",
""
],
[
"Adam",
"Hartwig",
""
],
[
"Sirotenko",
"Mikhail",
""
]
] |
new_dataset
| 0.999692 |
2309.12183
|
Bo Wang
|
Yu Cheng, Bo Wang, Robby T. Tan
|
ORTexME: Occlusion-Robust Human Shape and Pose via Temporal Average
Texture and Mesh Encoding
|
8 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 3D human shape and pose estimation from a monocular video, models trained
with limited labeled data cannot generalize well to videos with occlusion,
which is common in in-the-wild videos. Recent human neural rendering
approaches, which focus on novel view synthesis and are initialized by
off-the-shelf human shape and pose methods, have the potential to correct the
initial human shape. However, existing methods have drawbacks such as erroneous
occlusion handling, sensitivity to inaccurate human segmentation, and
ineffective loss computation due to the non-regularized opacity field. To address these
problems, we introduce ORTexME, an occlusion-robust temporal method that
utilizes temporal information from the input video to better regularize the
occluded body parts. While our ORTexME is based on NeRF, to determine the
reliable regions for the NeRF ray sampling, we utilize our novel average
texture learning approach to learn the average appearance of a person, and to
infer a mask based on the average texture. In addition, to guide the
opacity-field updates in NeRF to suppress blur and noise, we propose the use of
human body mesh. The quantitative evaluation demonstrates that our method
achieves significant improvement on the challenging multi-person 3DPW dataset,
where our method achieves a 1.8 P-MPJPE error reduction. The SOTA rendering-based
methods fail and enlarge the error by up to 5.6 on the same dataset.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 15:50:04 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Cheng",
"Yu",
""
],
[
"Wang",
"Bo",
""
],
[
"Tan",
"Robby T.",
""
]
] |
new_dataset
| 0.999462 |
2309.12188
|
Guangyao Zhai
|
Guangyao Zhai, Xiaoni Cai, Dianye Huang, Yan Di, Fabian Manhardt,
Federico Tombari, Nassir Navab, Benjamin Busam
|
SG-Bot: Object Rearrangement via Coarse-to-Fine Robotic Imagination on
Scene Graphs
|
8 pages, 6 figures. A video is uploaded here:
https://youtu.be/cA8wdfofAG4
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Object rearrangement is pivotal in robotic-environment interactions,
representing a significant capability in embodied AI. In this paper, we present
SG-Bot, a novel rearrangement framework that utilizes a coarse-to-fine scheme
with a scene graph as the scene representation. Unlike previous methods that
rely on either known goal priors or zero-shot large models, SG-Bot exemplifies
lightweight, real-time, and user-controllable characteristics, seamlessly
blending the consideration of commonsense knowledge with automatic generation
capabilities. SG-Bot employs a three-fold procedure--observation, imagination,
and execution--to adeptly address the task. Initially, objects are discerned
and extracted from a cluttered scene during observation. These objects are
first coarsely organized and depicted within a scene graph, guided by either
commonsense or user-defined criteria. This scene graph then informs a
generative model, which forms a fine-grained goal scene considering
the shape information from the initial scene and object semantics. Finally, for
execution, the initial and envisioned goal scenes are matched to formulate
robotic action policies. Experimental results demonstrate that SG-Bot
outperforms competitors by a large margin.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 15:54:33 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Zhai",
"Guangyao",
""
],
[
"Cai",
"Xiaoni",
""
],
[
"Huang",
"Dianye",
""
],
[
"Di",
"Yan",
""
],
[
"Manhardt",
"Fabian",
""
],
[
"Tombari",
"Federico",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
]
] |
new_dataset
| 0.971922 |
2309.12212
|
Zhengang Li
|
Zhengang Li, Geng Yuan, Tomoharu Yamauchi, Zabihi Masoud, Yanyue Xie,
Peiyan Dong, Xulong Tang, Nobuyuki Yoshikawa, Devesh Tiwari, Yanzhi Wang,
Olivia Chen
|
SupeRBNN: Randomized Binary Neural Network Using Adiabatic
Superconductor Josephson Devices
|
Accepted by MICRO'23 (56th IEEE/ACM International Symposium on
Microarchitecture)
| null | null | null |
cs.ET cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic with
extremely high energy efficiency. By employing the distinct polarity of current
to denote logic `0' and `1', AQFP devices serve as excellent carriers for
binary neural network (BNN) computations. Although recent research has made
initial strides toward developing an AQFP-based BNN accelerator, several
critical challenges remain, preventing the design from being a comprehensive
solution. In this paper, we propose SupeRBNN, an AQFP-based randomized BNN
acceleration framework that leverages software-hardware co-optimization to
eventually make the AQFP devices a feasible solution for BNN acceleration.
Specifically, we investigate the randomized behavior of the AQFP devices and
analyze the impact of crossbar size on current attenuation, subsequently
formulating the current amplitude into values suitable for use in BNN
computation. To tackle the accumulation problem and improve overall hardware
performance, we propose a stochastic computing-based accumulation module and a
clocking scheme adjustment-based circuit optimization method. We validate our
SupeRBNN framework across various datasets and network architectures, comparing
it with implementations based on different technologies, including CMOS, ReRAM,
and superconducting RSFQ/ERSFQ. Experimental results demonstrate that our
design achieves an energy efficiency of approximately 7.8x10^4 times higher
than that of the ReRAM-based BNN framework while maintaining a similar level of
model accuracy. Furthermore, when compared with superconductor-based
counterparts, our framework demonstrates at least two orders of magnitude
higher energy efficiency.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 16:14:42 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Li",
"Zhengang",
""
],
[
"Yuan",
"Geng",
""
],
[
"Yamauchi",
"Tomoharu",
""
],
[
"Masoud",
"Zabihi",
""
],
[
"Xie",
"Yanyue",
""
],
[
"Dong",
"Peiyan",
""
],
[
"Tang",
"Xulong",
""
],
[
"Yoshikawa",
"Nobuyuki",
""
],
[
"Tiwari",
"Devesh",
""
],
[
"Wang",
"Yanzhi",
""
],
[
"Chen",
"Olivia",
""
]
] |
new_dataset
| 0.985421 |
2309.12220
|
Ankit Gangwal
|
Ankit Gangwal, Aashish Paliwal, Mauro Conti
|
De-authentication using Ambient Light Sensor
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
While user authentication happens before initiating or resuming a login
session, de-authentication detects the absence of a previously-authenticated
user to revoke her currently active login session. The absence of proper
de-authentication can lead to well-known lunchtime attacks, where a nearby
adversary takes over a carelessly departed user's running login session. The
existing solutions for automatic de-authentication have distinct practical
limitations, e.g., extraordinary deployment requirements or high initial cost
of external equipment.
In this paper, we propose "DE-authentication using Ambient Light sensor"
(DEAL), a novel, inexpensive, fast, and user-friendly de-authentication
approach. DEAL utilizes the built-in ambient light sensor of a modern computer
to determine if the user is leaving her work-desk. DEAL, by design, is
resilient to natural shifts in lighting conditions and can be configured to
handle abrupt changes in ambient illumination (e.g., due to toggling of room
lights). We collected data samples from 4800 sessions with 120 volunteers in 4
typical workplace settings and conducted a series of experiments to evaluate
the quality of our proposed approach thoroughly. Our results show that DEAL can
de-authenticate a departing user within 4 seconds with a hit rate of 89.15% and
a fall-out of 7.35%. Finally, bypassing DEAL to launch a lunchtime attack is
practically infeasible as it requires the attacker to either take the user's
position within a few seconds or manipulate the sensor readings sophisticatedly
in real-time.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 16:18:51 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Gangwal",
"Ankit",
""
],
[
"Paliwal",
"Aashish",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.993763 |
2309.12253
|
Julian Minder
|
Julian Minder, Florian Gr\"otschla, Jo\"el Mathys, Roger Wattenhofer
|
SALSA-CLRS: A Sparse and Scalable Benchmark for Algorithmic Reasoning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce an extension to the CLRS algorithmic learning benchmark,
prioritizing scalability and the utilization of sparse representations. Many
algorithms in CLRS require global memory or information exchange, mirrored in
its execution model, which constructs fully connected (not sparse) graphs based
on the underlying problem. Despite CLRS's aim of assessing how effectively
learned algorithms can generalize to larger instances, the existing execution
model becomes a significant constraint due to its demanding memory requirements
and runtime (hard to scale). However, many important algorithms do not demand a
fully connected graph; these algorithms, primarily distributed in nature, align
closely with the message-passing paradigm employed by Graph Neural Networks.
Hence, we propose SALSA-CLRS, an extension of the current CLRS benchmark
specifically with scalability and sparseness in mind. Our approach includes
adapted algorithms from the original CLRS benchmark and introduces new problems
from distributed and randomized algorithms. Moreover, we perform a thorough
empirical evaluation of our benchmark. Code is publicly available at
https://github.com/jkminder/SALSA-CLRS.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 16:57:09 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Minder",
"Julian",
""
],
[
"Grötschla",
"Florian",
""
],
[
"Mathys",
"Joël",
""
],
[
"Wattenhofer",
"Roger",
""
]
] |
new_dataset
| 0.99532 |
2309.12300
|
Irmak Guzey
|
Irmak Guzey, Yinlong Dai, Ben Evans, Soumith Chintala and Lerrel Pinto
|
See to Touch: Learning Tactile Dexterity through Visual Incentives
| null | null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Equipping multi-fingered robots with tactile sensing is crucial for achieving
the precise, contact-rich, and dexterous manipulation that humans excel at.
However, relying solely on tactile sensing fails to provide adequate cues for
reasoning about objects' spatial configurations, limiting the ability to
correct errors and adapt to changing situations. In this paper, we present
Tactile Adaptation from Visual Incentives (TAVI), a new framework that enhances
tactile-based dexterity by optimizing dexterous policies using vision-based
rewards. First, we use a contrastive-based objective to learn visual
representations. Next, we construct a reward function using these visual
representations through optimal-transport based matching on one human
demonstration. Finally, we use online reinforcement learning on our robot to
optimize tactile-based policies that maximize the visual reward. On six
challenging tasks, such as peg pick-and-place, unstacking bowls, and flipping
slender objects, TAVI achieves a success rate of 73% using our four-fingered
Allegro robot hand. This performance is 108% higher than that of policies
using tactile and vision-based rewards and 135% higher than that of policies
without tactile observational input. Robot videos are best viewed on our project
website: https://see-to-touch.github.io/.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 17:58:13 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Guzey",
"Irmak",
""
],
[
"Dai",
"Yinlong",
""
],
[
"Evans",
"Ben",
""
],
[
"Chintala",
"Soumith",
""
],
[
"Pinto",
"Lerrel",
""
]
] |
new_dataset
| 0.969791 |
2309.12311
|
Jianing Yang
|
Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan
Iyengar, David F. Fouhey, Joyce Chai
|
LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language
Model as an Agent
|
Project website: https://chat-with-nerf.github.io/
| null | null | null |
cs.CV cs.AI cs.CL cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D visual grounding is a critical skill for household robots, enabling them
to navigate, manipulate objects, and answer questions based on their
environment. While existing approaches often rely on extensive labeled data or
exhibit limitations in handling complex language queries, we propose
LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model
(LLM)-based 3D visual grounding pipeline. LLM-Grounder utilizes an LLM to
decompose complex natural language queries into semantic constituents and
employs a visual grounding tool, such as OpenScene or LERF, to identify objects
in a 3D scene. The LLM then evaluates the spatial and commonsense relations
among the proposed objects to make a final grounding decision. Our method does
not require any labeled training data and can generalize to novel 3D scenes and
arbitrary text queries. We evaluate LLM-Grounder on the ScanRefer benchmark and
demonstrate state-of-the-art zero-shot grounding accuracy. Our findings
indicate that LLMs significantly improve the grounding capability, especially
for complex language queries, making LLM-Grounder an effective approach for 3D
vision-language tasks in robotics. Videos and interactive demos can be found on
the project website https://chat-with-nerf.github.io/ .
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 17:59:45 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Yang",
"Jianing",
""
],
[
"Chen",
"Xuweiyi",
""
],
[
"Qian",
"Shengyi",
""
],
[
"Madaan",
"Nikhil",
""
],
[
"Iyengar",
"Madhavan",
""
],
[
"Fouhey",
"David F.",
""
],
[
"Chai",
"Joyce",
""
]
] |
new_dataset
| 0.997212 |
2309.12314
|
Zhenghong Zhou
|
Kan Wu, Houwen Peng, Zhenghong Zhou, Bin Xiao, Mengchen Liu, Lu Yuan,
Hong Xuan, Michael Valenzuela, Xi (Stephen) Chen, Xinggang Wang, Hongyang
Chao, Han Hu
|
TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight
Inheritance
|
Accepted By ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel cross-modal distillation method, called
TinyCLIP, for large-scale language-image pre-trained models. The method
introduces two core techniques: affinity mimicking and weight inheritance.
Affinity mimicking explores the interaction between modalities during
distillation, enabling student models to mimic teachers' behavior of learning
cross-modal feature alignment in a visual-linguistic affinity space. Weight
inheritance transmits the pre-trained weights from the teacher models to their
student counterparts to improve distillation efficiency. Moreover, we extend
the method into a multi-stage progressive distillation to mitigate the loss of
informative weights during extreme compression. Comprehensive experiments
demonstrate the efficacy of TinyCLIP, showing that it can reduce the size of
the pre-trained CLIP ViT-B/32 by 50%, while maintaining comparable zero-shot
performance. While aiming for comparable performance, distillation with weight
inheritance can speed up the training by 1.4 - 7.8 $\times$ compared to
training from scratch. Moreover, our TinyCLIP ViT-8M/16, trained on YFCC-15M,
achieves an impressive zero-shot top-1 accuracy of 41.1% on ImageNet,
surpassing the original CLIP ViT-B/16 by 3.5% while utilizing only 8.9%
parameters. Finally, we demonstrate the good transferability of TinyCLIP in
various downstream tasks. Code and models will be open-sourced at
https://aka.ms/tinyclip.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 17:59:53 GMT"
}
] | 2023-09-22T00:00:00 |
[
[
"Wu",
"Kan",
"",
"Stephen"
],
[
"Peng",
"Houwen",
"",
"Stephen"
],
[
"Zhou",
"Zhenghong",
"",
"Stephen"
],
[
"Xiao",
"Bin",
"",
"Stephen"
],
[
"Liu",
"Mengchen",
"",
"Stephen"
],
[
"Yuan",
"Lu",
"",
"Stephen"
],
[
"Xuan",
"Hong",
"",
"Stephen"
],
[
"Valenzuela",
"Michael",
"",
"Stephen"
],
[
"Xi",
"",
"",
"Stephen"
],
[
"Chen",
"",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Chao",
"Hongyang",
""
],
[
"Hu",
"Han",
""
]
] |
new_dataset
| 0.967799 |
1907.00365
|
Junshan Luo
|
Junshan Luo, Fanggang Wang, Shilian Wang
|
Spatial Coded Modulation
|
30 pages, 17 figures
|
This paper was published in China Communications 2023
|
10.23919/JCC.ea.2021-0011.202401
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a spatial coded modulation (SCM) scheme, which
improves the accuracy of the active antenna detection by coding over the
transmit antennas. Specifically, the antenna activation pattern in the SCM
corresponds to a codeword in a properly designed codebook with a larger minimum
Hamming distance than its counterpart conventional spatial modulation. As the
minimum Hamming distance increases, the reliability of the active antenna
detection is directly enhanced, which in turn improves the demodulation of the
modulated symbols and yields a better system reliability. In addition to the
reliability, the proposed SCM scheme also achieves a higher capacity with the
identical antenna configuration compared to the conventional spatial modulation
technique. Moreover, the proposed SCM scheme strikes a balance between spectral
efficiency and reliability by trading off the minimum Hamming distance with the
number of available codewords. The optimal maximum likelihood detector is first
formulated. Then, a low-complexity suboptimal detector with a two-step
detection procedure is proposed to reduce the computational complexity.
Theoretical
derivations of the channel capacity and the bit error rate are presented in
various channel scenarios, i.e., Rayleigh, Rician, Nakagami-m, imperfect
channel state information, and spatial correlation. Further derivation on
performance bounding is also provided to reveal the insight of the benefit of
increasing the minimum Hamming distance. Numerical results validate the
analysis and demonstrate that the proposed SCM outperforms the conventional
spatial modulation techniques in both channel capacity and system reliability.
|
[
{
"version": "v1",
"created": "Sun, 30 Jun 2019 10:59:14 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Oct 2019 07:15:44 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Luo",
"Junshan",
""
],
[
"Wang",
"Fanggang",
""
],
[
"Wang",
"Shilian",
""
]
] |
new_dataset
| 0.976072 |
2111.08843
|
Mohammad Rowshan
|
Mohammad Rowshan, Son Hoang Dau, Emanuele Viterbo
|
On the Formation of Min-weight Codewords of Polar/PAC Codes and Its
Applications
|
Accepted in IEEE Trans. Inf. Theory, 23 pages, 13 figures, 6 tables,
3 listings
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Minimum weight codewords play a crucial role in the error correction
performance of a linear block code. In this work, we establish an explicit
construction for these codewords of polar codes as a sum of the generator
matrix rows, which can then be used as a foundation for two applications. In
the first application, we obtain a lower bound for the number of minimum-weight
codewords (a.k.a. the error coefficient), which matches the exact number
established previously in the literature. In the second application, we derive
a novel method that modifies the information set (a.k.a. rate profile) of polar
codes and PAC codes in order to reduce the error coefficient, hence improving
their performance. More specifically, by analyzing the structure of
minimum-weight codewords of polar codes (as special sums of the rows in the
polar transform matrix), we can identify rows (corresponding to
\textit{information} bits) that contribute the most to the formation of such
codewords and then replace them with other rows (corresponding to
\textit{frozen} bits) that bring in few minimum-weight codewords. A similar
process can also be applied to PAC codes. Our approach deviates from the
traditional constructions of polar codes, which mostly focus on the reliability
of the sub-channels, by taking into account another important factor - the
weight distribution. Extensive numerical results show that the modified codes
outperform PAC codes and CRC-Polar codes at the practical block error rate of
$10^{-2}$-$10^{-3}$.
|
[
{
"version": "v1",
"created": "Wed, 17 Nov 2021 00:04:21 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Nov 2021 06:59:13 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Sep 2023 13:01:20 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Rowshan",
"Mohammad",
""
],
[
"Dau",
"Son Hoang",
""
],
[
"Viterbo",
"Emanuele",
""
]
] |
new_dataset
| 0.998028 |
2111.10854
|
Jian Sun
|
Jian Sun, Ali Pourramezan Fard, and Mohammad H. Mahoor
|
XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers For
Convolutional Neural Networks
|
19 pages, 5 figures, 9 tables, 2 algorithms
|
J Intell Robot Syst 109, 17 (2023)
|
10.1007/s10846-023-01952-w
| null |
cs.CV cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Capsule Network is powerful at defining the positional relationship between
features in deep neural networks for visual recognition tasks, but it is
computationally expensive and not suitable for running on mobile devices. The
bottleneck is in the computational complexity of the Dynamic Routing mechanism
used between the capsules. On the other hand, XNOR-Net is fast and
computationally efficient, though it suffers from low accuracy due to
information loss in the binarization process. To address the computational
burdens of the Dynamic Routing mechanism, this paper proposes new Fully
Connected (FC) layers by xnorizing the linear projection outside or inside the
Dynamic Routing within the CapsFC layer. Specifically, our proposed FC layers
have two versions, XnODR (Xnorize the Linear Projection Outside Dynamic
Routing) and XnIDR (Xnorize the Linear Projection Inside Dynamic Routing). To
test the generalization of both XnODR and XnIDR, we insert them into two
different networks, MobileNetV2 and ResNet-50. Our experiments on three
datasets, MNIST, CIFAR-10, and MultiMNIST validate their effectiveness. The
results demonstrate that both XnODR and XnIDR help networks to have high
accuracy with lower FLOPs and fewer parameters (e.g., 96.14% correctness with
2.99M parameters and 311.74M FLOPs on CIFAR-10).
|
[
{
"version": "v1",
"created": "Sun, 21 Nov 2021 16:42:01 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 01:35:46 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Sep 2023 01:12:51 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Sun",
"Jian",
""
],
[
"Fard",
"Ali Pourramezan",
""
],
[
"Mahoor",
"Mohammad H.",
""
]
] |
new_dataset
| 0.995502 |
2201.13302
|
James Cheney
|
Alberto Abello and James Cheney
|
Eris: Measuring discord among multidimensional data sources
|
33 pages, 15 figures
| null |
10.1007/s00778-023-00810-3
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data integration is a classical problem in databases, typically decomposed
into schema matching, entity matching and data fusion. To solve the latter, it
is mostly assumed that ground truth can be determined. However, in general, the
data gathering processes in the different sources are imperfect and cannot
provide an accurate merging of values. Thus, in the absence of ways to
determine ground truth, it is important to at least quantify how far from being
internally consistent a dataset is. Hence, we propose definitions of concordant
data and define a discordance metric as a way of measuring disagreement to
improve decision making based on trustworthiness.
We define the discord measurement problem of numerical attributes in which
given a set of uncertain raw observations or aggregate results (such as
case/hospitalization/death data relevant to COVID-19) and information on the
alignment of different conceptualizations of the same reality (e.g.,
granularities or units), we wish to assess whether the different sources are
concordant, or if not, use the discordance metric to quantify how discordant
they are. We also define a set of algebraic operators to describe the
alignments of different data sources with correctness guarantees, together with
two alternative relational database implementations that reduce the problem to
linear or quadratic programming. These are evaluated against both COVID-19 and
synthetic data, and our experimental results show that discordance measurement
can be performed efficiently in realistic situations.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 15:25:28 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 10:51:37 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Abello",
"Alberto",
""
],
[
"Cheney",
"James",
""
]
] |
new_dataset
| 0.993127 |
2210.01346
|
Anjun Chen
|
Anjun Chen, Xiangyu Wang, Kun Shi, Shaohao Zhu, Bin Fang, Yingfeng
Chen, Jiming Chen, Yuchi Huo, Qi Ye
|
ImmFusion: Robust mmWave-RGB Fusion for 3D Human Body Reconstruction in
All Weather Conditions
|
Accepted to ICRA2023, Project Page:
https://chen3110.github.io/ImmFusion/index.html
| null |
10.1109/ICRA48891.2023.10161428
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human reconstruction from RGB images achieves decent results in good
weather conditions but degrades dramatically in rough weather. Complementary,
mmWave radars have been employed to reconstruct 3D human joints and meshes in
rough weather. However, combining RGB and mmWave signals for robust all-weather
3D human reconstruction is still an open challenge, given the sparse nature of
mmWave and the vulnerability of RGB images. In this paper, we present
ImmFusion, the first mmWave-RGB fusion solution to reconstruct 3D human bodies
in all weather conditions robustly. Specifically, our ImmFusion consists of
image and point backbones for token feature extraction and a Transformer module
for token fusion. The image and point backbones refine global and local
features from original data, and the Fusion Transformer Module aims for
effective information fusion of two modalities by dynamically selecting
informative tokens. Extensive experiments on a large-scale dataset, mmBody,
captured in various environments demonstrate that ImmFusion can efficiently
utilize the information of two modalities to achieve a robust 3D human body
reconstruction in all weather conditions. In addition, our method's accuracy is
significantly superior to that of state-of-the-art Transformer-based
LiDAR-camera fusion methods.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 03:30:18 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 03:36:39 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Sep 2023 05:01:45 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Chen",
"Anjun",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Shi",
"Kun",
""
],
[
"Zhu",
"Shaohao",
""
],
[
"Fang",
"Bin",
""
],
[
"Chen",
"Yingfeng",
""
],
[
"Chen",
"Jiming",
""
],
[
"Huo",
"Yuchi",
""
],
[
"Ye",
"Qi",
""
]
] |
new_dataset
| 0.995942 |
2301.06668
|
Murilo Marques Marinho
|
Murilo M. Marinho, Hung-Ching Lin, Jiawei Zhao
|
UMIRobot: An Open-{Software, Hardware} Low-Cost Robotic Manipulator for
Education
|
Accepted on IROS 2023, 8 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robot teleoperation has been studied for the past 70 years and is relevant in
many contexts, such as in the handling of hazardous materials and telesurgery.
The COVID19 pandemic has rekindled interest in this topic, but the existing
robotic education kits fall short of being suitable for teleoperated robotic
manipulator learning. In addition, the global restrictions of motion motivated
large investments in online/hybrid education. In this work, a newly developed
robotics education kit and its ecosystem are presented which is used as the
backbone of an online/hybrid course in teleoperated robots. The students are
divided into teams. Each team designs, fabricates (3D printing and assembling),
and implements a control strategy for a master device and gripper. Coupling
those with the UMIRobot, provided as a kit, the students compete in a
teleoperation challenge. The kit is low cost (< 100 USD), which allows
higher-learning institutions to provide one kit per student so they can learn
in a risk-free environment. As of now, 73 such kits have been assembled and
sent to course participants in eight countries. As major success stories, we
show an example of gripper and master designed for the proposed course. In
addition, we show a teleoperated task between Japan and Bangladesh executed by
course participants. Design files, videos, source code, and more information
are available at https://mmmarinho.github.io/UMIRobot/
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 02:39:22 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 06:05:50 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Marinho",
"Murilo M.",
""
],
[
"Lin",
"Hung-Ching",
""
],
[
"Zhao",
"Jiawei",
""
]
] |
new_dataset
| 0.999287 |
2303.05657
|
Xinyu Huang
|
Xinyu Huang, Youcai Zhang, Jinyu Ma, Weiwei Tian, Rui Feng, Yuejie
Zhang, Yaqian Li, Yandong Guo, Lei Zhang
|
Tag2Text: Guiding Vision-Language Model via Image Tagging
|
Homepage: https://github.com/xinyu1205/recognize-anything
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Tag2Text, a vision language pre-training (VLP) framework,
which introduces image tagging into vision-language models to guide the
learning of visual-linguistic features. In contrast to prior works which
utilize object tags either manually labeled or automatically detected with an
off-the-shelf detector with limited performance, our approach explicitly learns
an image tagger using tags parsed from image-paired text and thus provides a
strong semantic guidance to vision-language models. In this way, Tag2Text can
utilize large-scale annotation-free image tags in accordance with image-text
pairs, and provides more diverse tag categories beyond objects. As a result,
Tag2Text demonstrates the ability of a foundational image tagging model, with
superior zero-shot performance even comparable to fully supervised models.
Moreover, by leveraging the tagging guidance, Tag2Text effectively enhances the
performance of vision-language models on both generation-based and
alignment-based tasks. Across a wide range of downstream benchmarks, Tag2Text
achieves state-of-the-art results with similar model sizes and data scales,
demonstrating the efficacy of the proposed tagging guidance. Code, demo and
pre-trained models are available at
\url{https://github.com/xinyu1205/recognize-anything}.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 02:16:35 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 07:50:43 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Huang",
"Xinyu",
""
],
[
"Zhang",
"Youcai",
""
],
[
"Ma",
"Jinyu",
""
],
[
"Tian",
"Weiwei",
""
],
[
"Feng",
"Rui",
""
],
[
"Zhang",
"Yuejie",
""
],
[
"Li",
"Yaqian",
""
],
[
"Guo",
"Yandong",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.998723 |
2304.09972
|
David Adelani
|
David Ifeoluwa Adelani, Marek Masiak, Israel Abebe Azime, Jesujoba
Alabi, Atnafu Lambebo Tonja, Christine Mwase, Odunayo Ogundepo, Bonaventure
F. P. Dossou, Akintunde Oladipo, Doreen Nixdorf, Chris Chinenye Emezue, sana
al-azzawi, Blessing Sibanda, Davis David, Lolwethu Ndolela, Jonathan Mukiibi,
Tunde Ajayi, Tatiana Moteu, Brian Odhiambo, Abraham Owodunni, Nnaemeka
Obiefuna, Muhidin Mohamed, Shamsuddeen Hassan Muhammad, Teshome Mulugeta
Ababu, Saheed Abdullahi Salahudeen, Mesay Gemeda Yigezu, Tajuddeen Gwadabe,
Idris Abdulmumin, Mahlet Taye, Oluwabusayo Awoyomi, Iyanuoluwa Shode,
Tolulope Adelani, Habiba Abdulganiyu, Abdul-Hakeem Omotayo, Adetola Adeeko,
Abeeb Afolabi, Anuoluwapo Aremu, Olanrewaju Samuel, Clemencia Siro, Wangari
Kimotho, Onyekachi Ogbu, Chinedu Mbonu, Chiamaka Chukwuneke, Samuel Fanijo,
Jessica Ojo, Oyinkansola Awosan, Tadesse Kebede, Toadoum Sari Sakayo, Pamela
Nyatsine, Freedmore Sidume, Oreen Yousuf, Mardiyyah Oduwole, Tshinu Tshinu,
Ussen Kimanuka, Thina Diko, Siyanda Nxakama, Sinodos Nigusse, Abdulmejid
Johar, Shafie Mohamed, Fuad Mire Hassan, Moges Ahmed Mehamed, Evrard Ngabire,
Jules Jules, Ivan Ssenkungu and Pontus Stenetorp
|
MasakhaNEWS: News Topic Classification for African languages
|
Accepted to IJCNLP-AACL 2023 (main conference)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
African languages are severely under-represented in NLP research due to a lack
of datasets covering several NLP tasks. While there are individual language
specific datasets that are being expanded to different tasks, only a handful of
NLP tasks (e.g. named entity recognition and machine translation) have
standardized benchmark datasets covering several geographical and
typologically-diverse African languages. In this paper, we develop MasakhaNEWS
-- a new benchmark dataset for news topic classification covering 16 languages
widely spoken in Africa. We provide an evaluation of baseline models by
training classical machine learning models and fine-tuning several language
models. Furthermore, we explore several alternatives to full fine-tuning of
language models that are better suited for zero-shot and few-shot learning such
as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern
exploiting training (PET), prompting language models (like ChatGPT), and
prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API).
Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT
for news topic classification in low-resource African languages, achieving an
average performance of 70 F1 points without leveraging additional supervision
like MAD-X. In the few-shot setting, we show that with as few as 10 examples per
label, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance of
fully supervised training (92.6 F1 points) leveraging the PET approach.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 21:12:23 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 17:14:40 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Masiak",
"Marek",
""
],
[
"Azime",
"Israel Abebe",
""
],
[
"Alabi",
"Jesujoba",
""
],
[
"Tonja",
"Atnafu Lambebo",
""
],
[
"Mwase",
"Christine",
""
],
[
"Ogundepo",
"Odunayo",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Oladipo",
"Akintunde",
""
],
[
"Nixdorf",
"Doreen",
""
],
[
"Emezue",
"Chris Chinenye",
""
],
[
"al-azzawi",
"sana",
""
],
[
"Sibanda",
"Blessing",
""
],
[
"David",
"Davis",
""
],
[
"Ndolela",
"Lolwethu",
""
],
[
"Mukiibi",
"Jonathan",
""
],
[
"Ajayi",
"Tunde",
""
],
[
"Moteu",
"Tatiana",
""
],
[
"Odhiambo",
"Brian",
""
],
[
"Owodunni",
"Abraham",
""
],
[
"Obiefuna",
"Nnaemeka",
""
],
[
"Mohamed",
"Muhidin",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Ababu",
"Teshome Mulugeta",
""
],
[
"Salahudeen",
"Saheed Abdullahi",
""
],
[
"Yigezu",
"Mesay Gemeda",
""
],
[
"Gwadabe",
"Tajuddeen",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Taye",
"Mahlet",
""
],
[
"Awoyomi",
"Oluwabusayo",
""
],
[
"Shode",
"Iyanuoluwa",
""
],
[
"Adelani",
"Tolulope",
""
],
[
"Abdulganiyu",
"Habiba",
""
],
[
"Omotayo",
"Abdul-Hakeem",
""
],
[
"Adeeko",
"Adetola",
""
],
[
"Afolabi",
"Abeeb",
""
],
[
"Aremu",
"Anuoluwapo",
""
],
[
"Samuel",
"Olanrewaju",
""
],
[
"Siro",
"Clemencia",
""
],
[
"Kimotho",
"Wangari",
""
],
[
"Ogbu",
"Onyekachi",
""
],
[
"Mbonu",
"Chinedu",
""
],
[
"Chukwuneke",
"Chiamaka",
""
],
[
"Fanijo",
"Samuel",
""
],
[
"Ojo",
"Jessica",
""
],
[
"Awosan",
"Oyinkansola",
""
],
[
"Kebede",
"Tadesse",
""
],
[
"Sakayo",
"Toadoum Sari",
""
],
[
"Nyatsine",
"Pamela",
""
],
[
"Sidume",
"Freedmore",
""
],
[
"Yousuf",
"Oreen",
""
],
[
"Oduwole",
"Mardiyyah",
""
],
[
"Tshinu",
"Tshinu",
""
],
[
"Kimanuka",
"Ussen",
""
],
[
"Diko",
"Thina",
""
],
[
"Nxakama",
"Siyanda",
""
],
[
"Nigusse",
"Sinodos",
""
],
[
"Johar",
"Abdulmejid",
""
],
[
"Mohamed",
"Shafie",
""
],
[
"Hassan",
"Fuad Mire",
""
],
[
"Mehamed",
"Moges Ahmed",
""
],
[
"Ngabire",
"Evrard",
""
],
[
"Jules",
"Jules",
""
],
[
"Ssenkungu",
"Ivan",
""
],
[
"Stenetorp",
"Pontus",
""
]
] |
new_dataset
| 0.99981 |
2305.09977
|
Amit Puri
|
Amit Puri, John Jose, Tamarapalli Venkatesh, Vijaykrishnan Narayanan
|
DRackSim: Simulator for Rack-scale Memory Disaggregation
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Memory disaggregation has emerged as an alternative to traditional server
architecture in data centers. This paper introduces DRackSim, a simulation
infrastructure to model rack-scale hardware disaggregated memory. DRackSim
models multiple compute nodes, memory pools, and a rack-scale interconnect
similar to GenZ. An application-level simulation approach simulates an x86
out-of-order multi-core processor with a multi-level cache hierarchy at compute
nodes. A queue-based simulation is used to model a remote memory controller and
rack-level interconnect, which allows both cache-based and page-based access to
remote memory. DRackSim models a central memory manager to manage address space
at the memory pools. We integrate community-accepted DRAMSim2 to perform memory
simulation at local and remote memory using multiple DRAMSim2 instances. An
incremental approach is followed to validate the core and cache subsystem of
DRackSim with that of Gem5. We measure the performance of various HPC workloads
and show the performance impact for different node/pool configurations.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 06:17:06 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 21:26:52 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Puri",
"Amit",
""
],
[
"Jose",
"John",
""
],
[
"Venkatesh",
"Tamarapalli",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
]
] |
new_dataset
| 0.989808 |
2307.11292
|
Rakesh Patibanda
|
Rakesh Patibanda, Chris Hill, Aryan Saini, Xiang Li, Yuzheng Chen,
Andrii Matviienko, Jarrod Knibbe, Elise van den Hoven, Florian 'Floyd'
Mueller
|
Auto-Pa\'izo Games: Towards Understanding the Design of Games that Aim
to Unify a Player's Physical Body and the Virtual World
|
This paper is published at the Annual Symposium on Computer-Human
Interaction in Play (CHI PLAY) 2023
|
Annual Symposium on Computer-Human Interaction in Play (CHI PLAY)
2023
|
10.1145/3611054
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Most digital bodily games focus on the body as they use movement as input.
However, they also draw the player's focus away from the body as the output
occurs on visual displays, creating a divide between the physical body and the
virtual world. We propose a novel approach - the "Body as a Play Material" -
where a player uses their body as both input and output to unify the physical
body and the virtual world. To showcase this approach, we designed three games
where a player uses one of their hands (input) to play against the other hand
(output) by loaning control over its movements to an Electrical Muscle
Stimulation (EMS) system. We conducted a thematic analysis on the data obtained
from a field study with 12 participants to articulate four player experience
themes. We discuss our results about how participants appreciated the
engagement with the variety of bodily movements for play and the ambiguity of
using their body as a play material. Ultimately, our work aims to unify the
physical body and the virtual world.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 01:27:16 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 14:12:29 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Aug 2023 05:33:57 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Sep 2023 11:40:38 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Patibanda",
"Rakesh",
""
],
[
"Hill",
"Chris",
""
],
[
"Saini",
"Aryan",
""
],
[
"Li",
"Xiang",
""
],
[
"Chen",
"Yuzheng",
""
],
[
"Matviienko",
"Andrii",
""
],
[
"Knibbe",
"Jarrod",
""
],
[
"Hoven",
"Elise van den",
""
],
[
"Mueller",
"Florian 'Floyd'",
""
]
] |
new_dataset
| 0.976169 |
2309.04720
|
Hyun-Bin Kim
|
Hyun-Bin Kim, Keun-Ha Choi, and Kyung-Soo Kim
|
A Compact Optical Six-Axis Force/Torque Sensor for Legged Robots Using a
Polymorphic Calibration Method
|
12 pages, 13 figures, 9 tables
| null | null | null |
cs.RO physics.ins-det
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel design for a compact, lightweight 6-axis
force/torque sensor intended for use in legged robots. The design promotes easy
manufacturing and cost reduction, while introducing innovative calibration
methods that simplify the calibration process and minimize effort. The sensor's
advantages are achieved by streamlining the structure for durability,
implementing noncontact sensors, and providing a wider sensing range compared
to commercial sensors. To maintain a simple structure, the paper proposes a
force sensing scheme using photocouplers where the sensing elements are aligned
in-plane. This strategy enables all sensing elements to be fabricated on a
single printed circuit board, eliminating manual labor tasks such as bonding
and coating the sensing elements. The prototype sensor contains only four
parts, costs less than $250, and exhibits high response frequency and
performance. Traditional calibration methods present challenges, such as the
need for specialized equipment and extensive labor. To facilitate easy
calibration without the need for specialized equipment, a new method using
optimal control is proposed. To verify the feasibility of these ideas, a
prototype six-axis F/T sensor was manufactured. Its performance was evaluated
and compared to a reference F/T sensor and previous calibration methods.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 08:34:55 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 05:38:33 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Kim",
"Hyun-Bin",
""
],
[
"Choi",
"Keun-Ha",
""
],
[
"Kim",
"Kyung-Soo",
""
]
] |
new_dataset
| 0.999593 |
2309.06635
|
Martin Alexander B\"uchner
|
Elias Greve, Martin B\"uchner, Niclas V\"odisch, Wolfram Burgard,
Abhinav Valada
|
Collaborative Dynamic 3D Scene Graphs for Automated Driving
|
Refined manuscript and extended supplementary
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maps have played an indispensable role in enabling safe and automated
driving. Although there have been many advances on different fronts ranging
from SLAM to semantics, building an actionable hierarchical semantic
representation of urban dynamic scenes from multiple agents is still a
challenging problem. In this work, we present Collaborative URBan Scene Graphs
(CURB-SG) that enable higher-order reasoning and efficient querying for many
functions of automated driving. CURB-SG leverages panoptic LiDAR data from
multiple agents to build large-scale maps using an effective graph-based
collaborative SLAM approach that detects inter-agent loop closures. To
semantically decompose the obtained 3D map, we build a lane graph from the
paths of ego agents and their panoptic observations of other vehicles. Based on
the connectivity of the lane graph, we segregate the environment into
intersecting and non-intersecting road areas. Subsequently, we construct a
multi-layered scene graph that includes lane information, the position of
static landmarks and their assignment to certain map sections, other vehicles
observed by the ego agents, and the pose graph from SLAM including 3D panoptic
point clouds. We extensively evaluate CURB-SG in urban scenarios using a
photorealistic simulator. We release our code at
http://curb.cs.uni-freiburg.de.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 22:54:30 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 21:29:32 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Greve",
"Elias",
""
],
[
"Büchner",
"Martin",
""
],
[
"Vödisch",
"Niclas",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.973982 |
2309.07705
|
Jiaqi Zhang
|
Jiaqi Zhang, Yu Cheng, Yongxin Ni, Yunzhu Pan, Zheng Yuan, Junchen Fu,
Youhua Li, Jie Wang, and Fajie Yuan
|
NineRec: A Benchmark Dataset Suite for Evaluating Transferable
Recommendation
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning a recommender system model from an item's raw modality features
(such as image, text, audio, etc.), called MoRec, has attracted growing
interest recently. One key advantage of MoRec is that it can easily benefit
from advances in other fields, such as natural language processing (NLP) and
computer vision (CV). Moreover, it naturally supports transfer learning across
different systems through modality features, known as transferable recommender
systems, or TransRec.
However, so far, TransRec has made little progress, compared to
groundbreaking foundation models in the fields of NLP and CV. The lack of
large-scale, high-quality recommendation datasets poses a major obstacle. To
this end, we introduce NineRec, a TransRec dataset suite that includes a
large-scale source domain recommendation dataset and nine diverse target domain
recommendation datasets. Each item in NineRec is represented by a text
description and a high-resolution cover image. With NineRec, we can implement
TransRec models in an end-to-end training manner instead of using pre-extracted
invariant features. We conduct a benchmark study and empirical analysis of
TransRec using NineRec, and our findings provide several valuable insights. To
support further research, we make our code, datasets, benchmarks, and
leaderboards publicly available at https://github.com/westlake-repl/NineRec.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 13:31:33 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 07:51:50 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Zhang",
"Jiaqi",
""
],
[
"Cheng",
"Yu",
""
],
[
"Ni",
"Yongxin",
""
],
[
"Pan",
"Yunzhu",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Fu",
"Junchen",
""
],
[
"Li",
"Youhua",
""
],
[
"Wang",
"Jie",
""
],
[
"Yuan",
"Fajie",
""
]
] |
new_dataset
| 0.998018 |
2309.07832
|
Kasun Weerakoon Kulathun Mudiyanselage
|
Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Mohamed Elnoor, Dinesh
Manocha
|
VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline
Reinforcement Learning
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present VAPOR, a novel method for autonomous legged robot navigation in
unstructured, densely vegetated outdoor environments using offline
Reinforcement Learning (RL). Our method trains a novel RL policy using an
actor-critic network and arbitrary data collected in real outdoor vegetation.
Our policy uses height and intensity-based cost maps derived from 3D LiDAR
point clouds, a goal cost map, and processed proprioception data as state
inputs, and learns the physical and geometric properties of the surrounding
obstacles such as height, density, and solidity/stiffness. The fully-trained
policy's critic network is then used to evaluate the quality of dynamically
feasible velocities generated from a novel context-aware planner. Our planner
adapts the robot's velocity space based on the presence of entrapment-inducing
vegetation and narrow passages in dense environments. We demonstrate our
method's capabilities on a Spot robot in complex real-world outdoor scenes,
including dense vegetation. We observe that VAPOR's actions improve success
rates by up to 40%, decrease the average current consumption by up to 2.9%, and
decrease the normalized trajectory length by up to 11.2% compared to existing
end-to-end offline RL and other outdoor navigation methods.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 16:21:27 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 21:22:19 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Weerakoon",
"Kasun",
""
],
[
"Sathyamoorthy",
"Adarsh Jagan",
""
],
[
"Elnoor",
"Mohamed",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.999132 |
2309.09064
|
David Bader
|
David A. Bader
|
Fast Triangle Counting
|
The 27th Annual IEEE High Performance Extreme Computing Conference
(HPEC), Virtual, September 25-29, 2023. Graph Challenge Innovation Award
| null | null | null |
cs.DS cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Listing and counting triangles in graphs is a key algorithmic kernel for
network analyses including community detection, clustering coefficients,
k-trusses, and triangle centrality. We design and implement a new serial
algorithm for triangle counting that performs competitively with the fastest
previous approaches on both real and synthetic graphs, such as those from the
Graph500 Benchmark and the MIT/Amazon/IEEE Graph Challenge. The experimental
results use the recently-launched Intel Xeon Platinum 8480+ and CPU Max 9480
processors.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 18:18:50 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 17:48:37 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Bader",
"David A.",
""
]
] |
new_dataset
| 0.998225 |
2309.10305
|
Bingning Wang Dr.
|
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin,
Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng
Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda
Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang
Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin,
Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei
Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men,
Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen
Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu
|
Baichuan 2: Open Large-scale Language Models
|
Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 04:13:22 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 04:06:06 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Yang",
"Aiyuan",
""
],
[
"Xiao",
"Bin",
""
],
[
"Wang",
"Bingning",
""
],
[
"Zhang",
"Borong",
""
],
[
"Bian",
"Ce",
""
],
[
"Yin",
"Chao",
""
],
[
"Lv",
"Chenxu",
""
],
[
"Pan",
"Da",
""
],
[
"Wang",
"Dian",
""
],
[
"Yan",
"Dong",
""
],
[
"Yang",
"Fan",
""
],
[
"Deng",
"Fei",
""
],
[
"Wang",
"Feng",
""
],
[
"Liu",
"Feng",
""
],
[
"Ai",
"Guangwei",
""
],
[
"Dong",
"Guosheng",
""
],
[
"Zhao",
"Haizhou",
""
],
[
"Xu",
"Hang",
""
],
[
"Sun",
"Haoze",
""
],
[
"Zhang",
"Hongda",
""
],
[
"Liu",
"Hui",
""
],
[
"Ji",
"Jiaming",
""
],
[
"Xie",
"Jian",
""
],
[
"Dai",
"JunTao",
""
],
[
"Fang",
"Kun",
""
],
[
"Su",
"Lei",
""
],
[
"Song",
"Liang",
""
],
[
"Liu",
"Lifeng",
""
],
[
"Ru",
"Liyun",
""
],
[
"Ma",
"Luyao",
""
],
[
"Wang",
"Mang",
""
],
[
"Liu",
"Mickel",
""
],
[
"Lin",
"MingAn",
""
],
[
"Nie",
"Nuolan",
""
],
[
"Guo",
"Peidong",
""
],
[
"Sun",
"Ruiyang",
""
],
[
"Zhang",
"Tao",
""
],
[
"Li",
"Tianpeng",
""
],
[
"Li",
"Tianyu",
""
],
[
"Cheng",
"Wei",
""
],
[
"Chen",
"Weipeng",
""
],
[
"Zeng",
"Xiangrong",
""
],
[
"Wang",
"Xiaochuan",
""
],
[
"Chen",
"Xiaoxi",
""
],
[
"Men",
"Xin",
""
],
[
"Yu",
"Xin",
""
],
[
"Pan",
"Xuehai",
""
],
[
"Shen",
"Yanjun",
""
],
[
"Wang",
"Yiding",
""
],
[
"Li",
"Yiyu",
""
],
[
"Jiang",
"Youxin",
""
],
[
"Gao",
"Yuchen",
""
],
[
"Zhang",
"Yupeng",
""
],
[
"Zhou",
"Zenan",
""
],
[
"Wu",
"Zhiying",
""
]
] |
new_dataset
| 0.998266 |
2309.10369
|
Simon Schaefer
|
Simon Schaefer, Dorian F. Henning, Stefan Leutenegger
|
GloPro: Globally-Consistent Uncertainty-Aware 3D Human Pose Estimation &
Tracking in the Wild
|
IEEE International Conference on Intelligent Robots and Systems
(IROS) 2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate and uncertainty-aware 3D human body pose estimation is key to
enabling truly safe but efficient human-robot interactions. Current
uncertainty-aware methods in 3D human pose estimation are limited to predicting
the uncertainty of the body posture, while effectively neglecting the body
shape and root pose. In this work, we present GloPro, which is, to the best of
our knowledge, the first framework to predict an uncertainty distribution of a
3D body mesh including its shape, pose, and root pose, by efficiently fusing
visual clues with a learned motion model. We demonstrate that it vastly
outperforms state-of-the-art methods in terms of human trajectory accuracy in a
world coordinate system (even in the presence of severe occlusions), yields
consistent uncertainty distributions, and can run in real-time.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 07:10:48 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 16:22:31 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Schaefer",
"Simon",
""
],
[
"Henning",
"Dorian F.",
""
],
[
"Leutenegger",
"Stefan",
""
]
] |
new_dataset
| 0.978359 |
2309.10396
|
Youngil Kim
|
Sashidhar Jakkamsetti, Youngil Kim, Andrew Searles, Gene Tsudik
|
Poster: Control-Flow Integrity in Low-end Embedded Devices
|
The idea described in the paper is still under development. This is
an early version without full results; it was accepted only as a poster at ACM
CCS 2023
| null |
10.1145/3576915.3624374
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Embedded, smart, and IoT devices are increasingly popular in numerous
everyday settings. Since lower-end devices have the strictest cost
constraints, they tend to have few, if any, security features. This makes them
attractive targets for exploits and malware. Prior research proposed various
security architectures for enforcing security properties for
resource-constrained devices, e.g., via Remote Attestation (RA). Such
techniques can (statically) verify software integrity of a remote device and
detect compromise. However, run-time (dynamic) security, e.g., via Control-Flow
Integrity (CFI), is hard to achieve. This work constructs an architecture that
ensures integrity of software execution against run-time attacks, such as
Return-Oriented Programming (ROP). It is built atop a recently proposed CASU --
a low-cost active Root-of-Trust (RoT) that guarantees software immutability. We
extend CASU to support a shadow stack and a CFI monitor to mitigate run-time
attacks. This gives some confidence that CFI can indeed be attained even on
low-end devices, with minimal hardware overhead.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 07:52:43 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 07:20:56 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Jakkamsetti",
"Sashidhar",
""
],
[
"Kim",
"Youngil",
""
],
[
"Searles",
"Andrew",
""
],
[
"Tsudik",
"Gene",
""
]
] |
new_dataset
| 0.966137 |
2309.10579
|
Florent P Audonnet
|
Florent P Audonnet, Jonathan Grizou, Andrew Hamilton and Gerardo
Aragon-Camarasa
|
TELESIM: A Modular and Plug-and-Play Framework for Robotic Arm
Teleoperation using a Digital Twin
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present TELESIM, a modular and plug-and-play framework for direct
teleoperation of a robotic arm using a digital twin as the interface between
the user and the robotic system. We tested TELESIM by performing a user survey
with 37 participants on two different robots using two different control
modalities: a virtual reality controller and a finger mapping hardware
controller using different grasping systems. Users were asked to teleoperate
the robot to pick and place 3 cubes in a tower and to repeat this task as many
times as possible in 10 minutes, with only 5 minutes of training beforehand.
Our experimental results show that most users succeeded in building at least
one tower of 3 cubes regardless of the control modality or robot used,
demonstrating the user-friendliness of TELESIM.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 12:38:28 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 06:45:03 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Audonnet",
"Florent P",
""
],
[
"Grizou",
"Jonathan",
""
],
[
"Hamilton",
"Andrew",
""
],
[
"Aragon-Camarasa",
"Gerardo",
""
]
] |
new_dataset
| 0.998849 |
2309.10641
|
Jia Luo Peng
|
Jia Luo Peng, Keng Wei Chang, Shang-Hong Lai
|
KFC: Kinship Verification with Fair Contrastive Loss and Multi-Task
Learning
|
Accepted by BMVC 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kinship verification is an emerging task in computer vision with multiple
potential applications. However, no existing kinship dataset is large enough to
train a representative and robust model, which limits achievable performance.
Moreover, face verification is known to exhibit bias, which previous kinship
verification works have not addressed and which sometimes even leads to serious
issues. We therefore first combine existing kinship datasets and label each
identity with the correct race, taking race information into consideration and
providing a larger, more complete dataset called the KinRace dataset. Secondly,
we propose a multi-task learning model structure with an attention module to
enhance accuracy, which surpasses state-of-the-art
performance. Lastly, our fairness-aware contrastive loss function with
adversarial learning greatly mitigates racial bias. We introduce a debiasing
term into the traditional contrastive loss and apply gradient reversal in the
race classification task, an innovative combination of two fairness methods to
alleviate bias. Exhaustive experimental evaluation demonstrates the
effectiveness and superior performance of the proposed KFC in both standard
deviation and accuracy at the same time.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 14:21:33 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 07:42:57 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Peng",
"Jia Luo",
""
],
[
"Chang",
"Keng Wei",
""
],
[
"Lai",
"Shang-Hong",
""
]
] |
new_dataset
| 0.999403 |
2309.10738
|
Xinda Wu
|
Xinda Wu, Zhijie Huang, Kejun Zhang, Jiaxing Yu, Xu Tan, Tieyao Zhang,
Zihao Wang, Lingyun Sun
|
MelodyGLM: Multi-task Pre-training for Symbolic Melody Generation
| null | null | null | null |
cs.SD cs.AI cs.CL cs.IR cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained language models have achieved impressive results in various music
understanding and generation tasks. However, existing pre-training methods for
symbolic melody generation struggle to capture multi-scale, multi-dimensional
structural information in note sequences, due to the domain knowledge
discrepancy between text and music. Moreover, the lack of available large-scale
symbolic melody datasets limits the pre-training improvement. In this paper, we
propose MelodyGLM, a multi-task pre-training framework for generating melodies
with long-term structure. We design the melodic n-gram and long span sampling
strategies to create local and global blank infilling tasks for modeling the
local and global structures in melodies. Specifically, we incorporate pitch
n-grams, rhythm n-grams, and their combined n-grams into the melodic n-gram
blank infilling tasks for modeling the multi-dimensional structures in
melodies. To this end, we have constructed a large-scale symbolic melody
dataset, MelodyNet, containing more than 0.4 million melody pieces. MelodyNet
is utilized for large-scale pre-training and domain-specific n-gram lexicon
construction. Both subjective and objective evaluations demonstrate that
MelodyGLM surpasses the standard and previous pre-training methods. In
particular, subjective evaluations show that, on the melody continuation task,
MelodyGLM gains average improvements of 0.82, 0.87, 0.78, and 0.94 in
consistency, rhythmicity, structure, and overall quality, respectively.
Notably, MelodyGLM nearly matches the quality of human-composed melodies on the
melody inpainting task.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 16:34:24 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 10:56:07 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Wu",
"Xinda",
""
],
[
"Huang",
"Zhijie",
""
],
[
"Zhang",
"Kejun",
""
],
[
"Yu",
"Jiaxing",
""
],
[
"Tan",
"Xu",
""
],
[
"Zhang",
"Tieyao",
""
],
[
"Wang",
"Zihao",
""
],
[
"Sun",
"Lingyun",
""
]
] |
new_dataset
| 0.999281 |
2309.10836
|
Jun Lyu
|
Chengyan Wang, Jun Lyu, Shuo Wang, Chen Qin, Kunyuan Guo, Xinyu Zhang,
Xiaotong Yu, Yan Li, Fanwen Wang, Jianhua Jin, Zhang Shi, Ziqiang Xu, Yapeng
Tian, Sha Hua, Zhensen Chen, Meng Liu, Mengting Sun, Xutong Kuang, Kang Wang,
Haoran Wang, Hao Li, Yinghua Chu, Guang Yang, Wenjia Bai, Xiahai Zhuang, He
Wang, Jing Qin, Xiaobo Qu
|
CMRxRecon: An open cardiac MRI dataset for the competition of
accelerated image reconstruction
|
14 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cardiac magnetic resonance imaging (CMR) has emerged as a valuable diagnostic
tool for cardiac diseases. However, a limitation of CMR is its slow imaging
speed, which causes patient discomfort and introduces artifacts in the images.
There has been growing interest in deep learning-based CMR imaging algorithms
that can reconstruct high-quality images from highly under-sampled k-space
data. However, the development of deep learning methods requires large training
datasets, which have not been publicly available for CMR. To address this gap,
we released a dataset that includes multi-contrast, multi-view, multi-slice and
multi-coil CMR imaging data from 300 subjects. Imaging studies include cardiac
cine and mapping sequences. Manual segmentations of the myocardium and chambers
of all the subjects are also provided within the dataset. Scripts of
state-of-the-art reconstruction algorithms are also provided as a point of
reference. Our aim is to facilitate the advancement of state-of-the-art CMR
image reconstruction by introducing standardized evaluation criteria and making
the dataset freely accessible to the research community. Researchers can access
the dataset at https://www.synapse.org/#!Synapse:syn51471091/wiki/.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 15:14:42 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Wang",
"Chengyan",
""
],
[
"Lyu",
"Jun",
""
],
[
"Wang",
"Shuo",
""
],
[
"Qin",
"Chen",
""
],
[
"Guo",
"Kunyuan",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Yu",
"Xiaotong",
""
],
[
"Li",
"Yan",
""
],
[
"Wang",
"Fanwen",
""
],
[
"Jin",
"Jianhua",
""
],
[
"Shi",
"Zhang",
""
],
[
"Xu",
"Ziqiang",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Hua",
"Sha",
""
],
[
"Chen",
"Zhensen",
""
],
[
"Liu",
"Meng",
""
],
[
"Sun",
"Mengting",
""
],
[
"Kuang",
"Xutong",
""
],
[
"Wang",
"Kang",
""
],
[
"Wang",
"Haoran",
""
],
[
"Li",
"Hao",
""
],
[
"Chu",
"Yinghua",
""
],
[
"Yang",
"Guang",
""
],
[
"Bai",
"Wenjia",
""
],
[
"Zhuang",
"Xiahai",
""
],
[
"Wang",
"He",
""
],
[
"Qin",
"Jing",
""
],
[
"Qu",
"Xiaobo",
""
]
] |
new_dataset
| 0.999844 |
2309.10881
|
Suraj Rajendran
|
Shishir Rajendran, Prathic Sundararajan, Ashi Awasthi, Suraj Rajendran
|
Nanorobotics in Medicine: A Systematic Review of Advances, Challenges,
and Future Prospects
| null | null | null | null |
cs.RO q-bio.TO
|
http://creativecommons.org/licenses/by/4.0/
|
Nanorobotics offers an emerging frontier in biomedicine, holding the
potential to revolutionize diagnostic and therapeutic applications through its
unique capabilities in manipulating biological systems at the nanoscale.
Following PRISMA guidelines, a comprehensive literature search was conducted
using IEEE Xplore and PubMed databases, resulting in the identification and
analysis of a total of 414 papers. The studies were filtered to include only
those that addressed both nanorobotics and direct medical applications. Our
analysis traces the technology's evolution, highlighting its growing prominence
in medicine as evidenced by the increasing number of publications over time.
Applications ranged from targeted drug delivery and single-cell manipulation to
minimally invasive surgery and biosensing. Despite the promise, limitations
such as biocompatibility, precise control, and ethical concerns were also
identified. This review aims to offer a thorough overview of the state of
nanorobotics in medicine, drawing attention to current challenges and
opportunities, and providing directions for future research in this rapidly
advancing field.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 19:11:29 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Rajendran",
"Shishir",
""
],
[
"Sundararajan",
"Prathic",
""
],
[
"Awasthi",
"Ashi",
""
],
[
"Rajendran",
"Suraj",
""
]
] |
new_dataset
| 0.987888 |
2309.10885
|
Jialiang Zhao
|
Jialiang Zhao and Edward H. Adelson
|
GelSight Svelte: A Human Finger-shaped Single-camera Tactile Robot
Finger with Large Sensing Coverage and Proprioceptive Sensing
|
Submitted and accepted to 2023 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2023)
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Camera-based tactile sensing is a low-cost, popular approach to obtain highly
detailed contact geometry information. However, most existing camera-based
tactile sensors are fingertip sensors, and longer fingers often require
extraneous elements to obtain an extended sensing area similar to the full
length of a human finger. Moreover, existing methods to estimate proprioceptive
information such as total forces and torques applied on the finger from
camera-based tactile sensors are not effective when the contact geometry is
complex. We introduce GelSight Svelte, a curved, human finger-sized,
single-camera tactile sensor that is capable of both tactile and proprioceptive
sensing over a large area. GelSight Svelte uses curved mirrors to achieve the
desired shape and sensing coverage. Proprioceptive information, such as the
total bending and twisting torques applied on the finger, is reflected as
deformations on the flexible backbone of GelSight Svelte, which are also
captured by the camera. We train a convolutional neural network to estimate the
bending and twisting torques from the captured images. We conduct gel
deformation experiments at various locations of the finger to evaluate the
tactile sensing capability and proprioceptive sensing accuracy. To demonstrate
the capability and potential uses of GelSight Svelte, we conduct an object
holding task with three different grasping modes that utilize different areas
of the finger. More information is available on our website:
https://gelsight-svelte.alanz.info
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 19:19:50 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Zhao",
"Jialiang",
""
],
[
"Adelson",
"Edward H.",
""
]
] |
new_dataset
| 0.997214 |
2309.10886
|
Jialiang Zhao
|
Jialiang Zhao and Edward H. Adelson
|
GelSight Svelte Hand: A Three-finger, Two-DoF, Tactile-rich, Low-cost
Robot Hand for Dexterous Manipulation
|
Submitted and accepted to IROS 2023 workshop on Visuo-Tactile
Perception, Learning, Control for Manipulation and HRI (IROS RoboTac 2023)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents GelSight Svelte Hand, a novel 3-finger 2-DoF tactile
robotic hand that is capable of performing precision grasps, power grasps, and
intermediate grasps. Rich tactile signals are obtained from one camera on each
finger, with an extended sensing area similar to the full length of a human
finger. Each finger of GelSight Svelte Hand is supported by a semi-rigid
endoskeleton and covered with soft silicone materials, which provide both
rigidity and compliance. We describe the design, fabrication, functionalities,
and tactile sensing capability of GelSight Svelte Hand in this paper. More
information is available on our website:
\url{https://gelsight-svelte.alanz.info}.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 19:25:33 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Zhao",
"Jialiang",
""
],
[
"Adelson",
"Edward H.",
""
]
] |
new_dataset
| 0.994202 |
2309.10896
|
Luigi Freda
|
Luigi Freda
|
PLVS: A SLAM System with Points, Lines, Volumetric Mapping, and 3D
Incremental Segmentation
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This document presents PLVS: a real-time system that leverages sparse SLAM,
volumetric mapping, and 3D unsupervised incremental segmentation. PLVS stands
for Points, Lines, Volumetric mapping, and Segmentation. It supports RGB-D and
Stereo cameras, which may be optionally equipped with IMUs. The SLAM module is
keyframe-based, and extracts and tracks sparse points and line segments as
features. Volumetric mapping runs in parallel with the SLAM front-end and
generates a 3D reconstruction of the explored environment by
fusing point clouds backprojected from keyframes. Different volumetric mapping
methods are supported and integrated in PLVS. We use a novel reprojection error
to bundle-adjust line segments. This error exploits available depth information
to stabilize the position estimates of line segment endpoints. An incremental
and geometric-based segmentation method is implemented and integrated for RGB-D
cameras in the PLVS framework. We present qualitative and quantitative
evaluations of the PLVS framework on some publicly available datasets. The
appendix details the adopted stereo line triangulation method and provides a
derivation of the Jacobians we used for line error terms. The software is
available as open-source.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 19:42:26 GMT"
}
] | 2023-09-21T00:00:00 |
[
[
"Freda",
"Luigi",
""
]
] |
new_dataset
| 0.998404 |