id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2303.11228
|
Sanket Kachole Mr
|
Sanket Kachole, Xiaoqian Huang, Fariborz Baghaei Naeini, Rajkumar
Muthusamy, Dimitrios Makris, Yahya Zweiri
|
Bimodal SegNet: Instance Segmentation Fusing Events and RGB Frames for
Robotic Grasping
|
8 Pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object segmentation for robotic grasping under dynamic conditions often faces
challenges such as occlusion, low light conditions, motion blur and object size
variance. To address these challenges, we propose a Deep Learning network that
fuses two types of visual signals, event-based data and RGB frame data. The
proposed Bimodal SegNet network has two distinct encoders, one for each signal
input, and a spatial pyramidal pooling module with atrous convolutions. The
encoders capture rich contextual information by pooling the concatenated
features at different resolutions, while the decoder recovers sharp object
boundaries. The
evaluation of the proposed method undertakes five unique image degradation
challenges including occlusion, blur, brightness, trajectory and scale variance
on the Event-based Segmentation (ESD) Dataset. The evaluation results show a
6-10% segmentation accuracy improvement over state-of-the-art methods in terms
of mean intersection over union and pixel accuracy. The model code is
available at https://github.com/sanket0707/Bimodal-SegNet.git
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 16:09:25 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 22:23:31 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Kachole",
"Sanket",
""
],
[
"Huang",
"Xiaoqian",
""
],
[
"Naeini",
"Fariborz Baghaei",
""
],
[
"Muthusamy",
"Rajkumar",
""
],
[
"Makris",
"Dimitrios",
""
],
[
"Zweiri",
"Yahya",
""
]
] |
new_dataset
| 0.991671 |
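A minimal sketch of the dual-encoder fusion pattern described in the Bimodal SegNet abstract above. All layer sizes, channel counts, and module layout below are illustrative assumptions, not the paper's implementation; the sketch only shows the overall pattern of two modality-specific encoders whose concatenated features pass through atrous spatial pyramid pooling before a decoder.

```python
# Sketch of a dual-encoder segmentation network in the spirit of Bimodal
# SegNet: one encoder per modality (events, RGB), concatenation, atrous
# spatial pyramid pooling, and a light decoder. Sizes are assumptions.
import torch
import torch.nn as nn

def encoder(in_ch: int) -> nn.Sequential:
    # Two downsampling conv stages per modality (4x total downsampling).
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    )

class ASPP(nn.Module):
    # Parallel atrous (dilated) convolutions at several dilation rates.
    def __init__(self, ch: int, rates=(1, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(ch * len(rates), ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class BimodalSegSketch(nn.Module):
    def __init__(self, event_ch=2, rgb_ch=3, n_classes=2):
        super().__init__()
        self.enc_event, self.enc_rgb = encoder(event_ch), encoder(rgb_ch)
        self.aspp = ASPP(128)  # 64 + 64 concatenated channels
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, events, rgb):
        fused = torch.cat([self.enc_event(events), self.enc_rgb(rgb)], dim=1)
        return self.decoder(self.aspp(fused))

logits = BimodalSegSketch()(torch.randn(1, 2, 64, 64), torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```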
2304.08121
|
Rodrigo San-Jos\'e
|
Philippe Gimenez, Diego Ruano, Rodrigo San-Jos\'e
|
Entanglement-assisted quantum error-correcting codes from subfield
subcodes of projective Reed-Solomon codes
| null | null | null | null |
cs.IT math.AC math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Subfield subcodes of Reed-Solomon codes and their duals, BCH codes, have been
widely used for constructing quantum error-correcting codes with good
parameters. In this paper, we study subfield subcodes of projective
Reed-Solomon codes and their duals, we provide bases for these codes and
estimate their parameters. With this knowledge, we can construct symmetric and
asymmetric entanglement-assisted quantum error-correcting codes, which in many
cases have new or better parameters than the ones available in the literature.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 09:59:17 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Gimenez",
"Philippe",
""
],
[
"Ruano",
"Diego",
""
],
[
"San-José",
"Rodrigo",
""
]
] |
new_dataset
| 0.99979 |
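For readers unfamiliar with the objects named in the abstract above, the standard definitions (textbook notation, not the paper's own) can be written as follows for a field extension $\mathbb{F}_{q^s}/\mathbb{F}_q$.

```latex
% Standard definitions; notation is illustrative, not the paper's.
% The projective (doubly-extended) Reed-Solomon code of dimension k over
% F_{q^s} has length n = q^s + 1: evaluate each polynomial of degree < k at
% every element of F_{q^s} and append its "value at infinity", the
% coefficient of x^{k-1}.
\[
\mathrm{PRS}(k) = \Bigl\{ \bigl(f(\alpha_1), \dots, f(\alpha_{q^s}),
  f_{k-1}\bigr) : f = \textstyle\sum_{i=0}^{k-1} f_i x^i
  \in \mathbb{F}_{q^s}[x] \Bigr\},
\]
% where \alpha_1, ..., \alpha_{q^s} enumerate F_{q^s}. The subfield subcode
% over F_q keeps exactly the codewords whose entries lie in F_q:
\[
\mathrm{PRS}(k)\big|_{\mathbb{F}_q} \;=\;
  \mathrm{PRS}(k) \cap \mathbb{F}_q^{\,q^s+1}.
\]
```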
2305.01264
|
Timoth\'ee Anne
|
Anne and Mouret
|
Multi-Task Multi-Behavior MAP-Elites
|
Accepted as Poster for GECCO 2023
| null |
10.1145/3583133.3590730
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Multi-Task Multi-Behavior MAP-Elites, a variant of MAP-Elites that
finds a large number of high-quality solutions for a large set of tasks
(optimization problems from a given family). It combines the original
MAP-Elites for the search for diversity and Multi-Task MAP-Elites for
leveraging similarity between tasks. It performs better than three baselines on
a humanoid fault-recovery set of tasks, solving more tasks and finding twice as
many solutions per solved task.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 09:01:07 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jul 2023 17:13:24 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Anne",
"",
""
],
[
"Mouret",
"",
""
]
] |
new_dataset
| 0.996465 |
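The core MAP-Elites loop referenced in the abstract above is short enough to sketch in full. This is the generic algorithm (illumination over a discretized behavior space), not the multi-task multi-behavior variant the paper introduces; the fitness and behavior functions below are toy assumptions.

```python
# Minimal MAP-Elites sketch: keep the best solution found in each cell of a
# discretized behavior space. The paper's variant adds a task dimension.
import random

GRID = 10                      # cells per behavior dimension

def fitness(x):                # toy objective (assumption)
    return -sum(v * v for v in x)

def behavior(x):               # toy 2D behavior descriptor (assumption)
    return (x[0], x[1])

def cell(desc):                # map a descriptor in [-1, 1]^2 to a grid cell
    return tuple(min(GRID - 1, int((d + 1) / 2 * GRID)) for d in desc)

archive = {}                   # cell -> (fitness, solution)

def try_insert(x):
    f, c = fitness(x), cell(behavior(x))
    if c not in archive or f > archive[c][0]:
        archive[c] = (f, x)

# Initialize with random solutions, then repeatedly mutate random elites.
for _ in range(100):
    try_insert([random.uniform(-1, 1) for _ in range(4)])
for _ in range(10_000):
    _, parent = random.choice(list(archive.values()))
    child = [min(1, max(-1, v + random.gauss(0, 0.1))) for v in parent]
    try_insert(child)

print(f"{len(archive)} cells filled; best fitness "
      f"{max(f for f, _ in archive.values()):.3f}")
```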
2305.01979
|
Zhixi Cai
|
Zhixi Cai, Shreya Ghosh, Abhinav Dhall, Tom Gedeon, Kalin Stefanov,
Munawar Hayat
|
Glitch in the Matrix: A Large Scale Benchmark for Content Driven
Audio-Visual Forgery Detection and Localization
|
The paper is under consideration/review at Computer Vision and Image
Understanding Journal
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Most deepfake detection methods focus on detecting spatial and/or
spatio-temporal changes in facial attributes and are centered around the binary
classification task of detecting whether a video is real or fake. This is
because available benchmark datasets contain mostly visual-only modifications
present in the entirety of the video. However, a sophisticated deepfake may
include small segments of audio or audio-visual manipulations that can
completely change the meaning of the video content. To address this gap, we
propose and benchmark a new dataset, Localized Audio Visual DeepFake (LAV-DF),
consisting of strategic content-driven audio, visual and audio-visual
manipulations. The proposed baseline method, Boundary Aware Temporal Forgery
Detection (BA-TFD), is a 3D Convolutional Neural Network-based architecture
which effectively captures multimodal manipulations. We further improve the
baseline method (i.e., BA-TFD+) by replacing the backbone with a Multiscale
Vision Transformer and guiding the training process with contrastive, frame
classification, boundary matching and multimodal boundary matching loss
functions. The quantitative analysis demonstrates the superiority of BA-TFD+ on
temporal forgery localization and deepfake detection tasks using several
benchmark datasets including our newly proposed dataset. The dataset, models
and code are available at https://github.com/ControlNet/LAV-DF.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 08:48:45 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 05:33:57 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Jul 2023 07:03:45 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Cai",
"Zhixi",
""
],
[
"Ghosh",
"Shreya",
""
],
[
"Dhall",
"Abhinav",
""
],
[
"Gedeon",
"Tom",
""
],
[
"Stefanov",
"Kalin",
""
],
[
"Hayat",
"Munawar",
""
]
] |
new_dataset
| 0.999427 |
2305.06933
|
Orlando Eduardo Mart\'inez Durive
|
Orlando E. Mart\'inez-Durive, Sachit Mishra, Cezary Ziemlicki,
Stefania Rubrichi, Zbigniew Smoreda and Marco Fiore
|
The NetMob23 Dataset: A High-resolution Multi-region Service-level
Mobile Data Traffic Cartography
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Digital sources have been enabling unprecedented data-driven and large-scale
investigations across a wide range of domains, including demography, sociology,
geography, urbanism, criminology, and engineering. A major barrier to
innovation is represented by the limited availability of dependable digital
datasets, especially in the context of data gathered by mobile network
operators or service providers, due to concerns about user privacy and
industrial competition. The resulting lack of reference datasets curbs the
production of new research methods and results, and prevents verifiability and
reproducibility of research outcomes. The NetMob23 dataset offers a rare
opportunity to the multidisciplinary research community to access rich data
about the spatio-temporal consumption of mobile applications in a developed
country. The generation process of the dataset sets a new quality standard,
leading to information about the demands generated by 68 popular mobile
services, geo-referenced at a high resolution of $100\times100$ $m^2$ over 20
metropolitan areas in France, and monitored during 77 consecutive days in 2019.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 16:12:31 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jul 2023 13:32:59 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Martínez-Durive",
"Orlando E.",
""
],
[
"Mishra",
"Sachit",
""
],
[
"Ziemlicki",
"Cezary",
""
],
[
"Rubrichi",
"Stefania",
""
],
[
"Smoreda",
"Zbigniew",
""
],
[
"Fiore",
"Marco",
""
]
] |
new_dataset
| 0.999892 |
2305.12032
|
John Lambert
|
Nico Montali, John Lambert, Paul Mougin, Alex Kuefler, Nick Rhinehart,
Michelle Li, Cole Gulino, Tristan Emrich, Zoey Yang, Shimon Whiteson, Brandyn
White, Dragomir Anguelov
|
The Waymo Open Sim Agents Challenge
| null | null | null | null |
cs.CV cs.LG cs.MA cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simulation with realistic, interactive agents represents a key task for
autonomous vehicle software development. In this work, we introduce the Waymo
Open Sim Agents Challenge (WOSAC). WOSAC is the first public challenge to
tackle this task and propose corresponding metrics. The goal of the challenge
is to stimulate the design of realistic simulators that can be used to evaluate
and train a behavior model for autonomous driving. We outline our evaluation
methodology, present results for a number of different baseline simulation
agent methods, and analyze several submissions to the 2023 competition which
ran from March 16, 2023 to May 23, 2023. The WOSAC evaluation server remains
open for submissions and we discuss open problems for the task.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 23:12:08 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 23:23:09 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2023 19:09:38 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Montali",
"Nico",
""
],
[
"Lambert",
"John",
""
],
[
"Mougin",
"Paul",
""
],
[
"Kuefler",
"Alex",
""
],
[
"Rhinehart",
"Nick",
""
],
[
"Li",
"Michelle",
""
],
[
"Gulino",
"Cole",
""
],
[
"Emrich",
"Tristan",
""
],
[
"Yang",
"Zoey",
""
],
[
"Whiteson",
"Shimon",
""
],
[
"White",
"Brandyn",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
new_dataset
| 0.998417 |
2305.14150
|
Bo Zhou
|
Bo Zhou, Qianglong Chen, Tianyu Wang, Xiaomi Zhong, Yin Zhang
|
WYWEB: A NLP Evaluation Benchmark For Classical Chinese
|
Accepted by ACL 2023
|
https://aclanthology.org/2023.findings-acl.204
| null |
2023.findings-acl.204
|
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To fully evaluate the overall performance of different NLP models in a given
domain, many evaluation benchmarks have been proposed, such as GLUE, SuperGLUE
and CLUE. The field of natural language understanding has traditionally focused
on benchmarks for various tasks in languages such as Chinese, English, and
multilingual settings; however, there has been a lack of attention given to the
area of classical Chinese, also known as "wen yan wen", which has a rich
history spanning thousands of years and holds significant cultural and academic
value. For the prosperity of the NLP community, in this paper, we introduce the
WYWEB evaluation benchmark, which consists of nine NLP tasks in classical
Chinese, implementing sentence classification, sequence labeling, reading
comprehension, and machine translation. We evaluate the existing pre-trained
language models, which are all struggling with this benchmark. We also
introduce a number of supplementary datasets and additional tools to help
facilitate further progress on classical Chinese NLU. The github repository is
https://github.com/baudzhou/WYWEB.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 15:15:11 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zhou",
"Bo",
""
],
[
"Chen",
"Qianglong",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Zhong",
"Xiaomi",
""
],
[
"Zhang",
"Yin",
""
]
] |
new_dataset
| 0.999787 |
2306.01359
|
Dr. Mohammed Javed
|
Tejasvee Bisen, Mohammed Javed, Shashank Kirtania, P. Nagabhushan
|
DWT-CompCNN: Deep Image Classification Network for High Throughput JPEG
2000 Compressed Documents
|
Accepted in Pattern Analysis and Applications
(https://www.springer.com/journal/10044)
| null | null | null |
cs.CV cs.IR cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
For any digital application involving document images, such as retrieval, the
classification of document images becomes an essential stage. Conventionally,
the input dataset consists of the full versions of the documents, that is, the
uncompressed document images, which poses a challenge due to the large volume
required to accommodate them. Therefore, it would be novel if the same
classification task could be accomplished directly (with some partial
decompression) on the compressed representation of documents, making the whole
process computationally more efficient.
In this research work, a novel deep learning model, DWT-CompCNN, is proposed
for the classification of documents that are compressed using the High
Throughput JPEG 2000 (HTJ2K) algorithm. The proposed DWT-CompCNN comprises five convolutional
layers with filter sizes of 16, 32, 64, 128, and 256 consecutively for each
increasing layer to improve learning from the wavelet coefficients extracted
from the compressed images. Experiments are performed on two benchmark
datasets- Tobacco-3482 and RVL-CDIP, which demonstrate that the proposed model
is time and space efficient, and also achieves a better classification accuracy
in compressed domain.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 08:33:58 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Jul 2023 04:09:31 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Bisen",
"Tejasvee",
""
],
[
"Javed",
"Mohammed",
""
],
[
"Kirtania",
"Shashank",
""
],
[
"Nagabhushan",
"P.",
""
]
] |
new_dataset
| 0.999112 |
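The abstract above names the filter progression (16, 32, 64, 128, 256) concretely enough to sketch. Everything else below (kernel sizes, pooling, the input shape of the wavelet-coefficient tensor, number of classes) is an illustrative assumption, not the paper's exact architecture.

```python
# Sketch of a five-stage CNN classifier over wavelet coefficients with the
# filter counts named in the abstract. Kernel sizes, pooling, and input
# shape are assumptions for illustration.
import torch
import torch.nn as nn

class DWTCompCNNSketch(nn.Module):
    def __init__(self, in_ch=3, n_classes=10):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (16, 32, 64, 128, 256):
            layers += [nn.Conv2d(ch, out_ch, 3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Flatten(),
                                  nn.Linear(256, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

# A batch of hypothetical wavelet-coefficient maps: 3 subbands, 224x224.
logits = DWTCompCNNSketch()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```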
2306.06206
|
Rejwan Bin Sulaiman
|
Md. Simul Hasan Talukder, Rejwan Bin Sulaiman, Mohammad Raziuddin
Chowdhury, Musarrat Saberin Nipun, Taminul Islam
|
PotatoPestNet: A CTInceptionV3-RS-Based Neural Network for Accurate
Identification of Potato Pests
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Potatoes are the third-largest food crop globally, but their production
frequently encounters difficulties because of aggressive pest infestations. The
aim of this study is to investigate the various types and characteristics of
these pests and propose an efficient PotatoPestNet AI-based automatic potato
pest identification system. To accomplish this, we curated a reliable dataset
consisting of eight types of potato pests. We leveraged the power of transfer
learning by employing five customized, pre-trained transfer learning models:
CMobileNetV2, CNASLargeNet, CXception, CDenseNet201, and CInceptionV3, in
proposing a robust PotatoPestNet model to accurately classify potato pests. To
improve the models' performance, we applied various augmentation techniques,
incorporated a global average pooling layer, and implemented proper
regularization methods. To further enhance the performance of the models, we
utilized random search (RS) optimization for hyperparameter tuning. This
optimization technique played a significant role in fine-tuning the models and
achieving improved performance. We evaluated the models both visually and
quantitatively, utilizing different evaluation metrics. The robustness of the
models in handling imbalanced datasets was assessed using the Receiver
Operating Characteristic (ROC) curve. Among the models, the Customized Tuned
Inception V3 (CTInceptionV3) model, optimized through random search,
demonstrated outstanding performance. It achieved the highest accuracy (91%),
precision (91%), recall (91%), and F1-score (91%), showcasing its superior
ability to accurately identify and classify potato pests.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 17:38:16 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Jul 2023 10:40:26 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Talukder",
"Md. Simul Hasan",
""
],
[
"Sulaiman",
"Rejwan Bin",
""
],
[
"Chowdhury",
"Mohammad Raziuddin",
""
],
[
"Nipun",
"Musarrat Saberin",
""
],
[
"Islam",
"Taminul",
""
]
] |
new_dataset
| 0.999176 |
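A sketch of the customization pattern the abstract above describes: a pre-trained backbone, a global average pooling layer, and a classification head, tuned with random search over a couple of hyperparameters. The backbone choice, search space, and head sizes are assumptions for illustration, not the paper's setup; the training loop is stubbed out.

```python
# Transfer-learning-with-GAP pattern plus random search (RS) over
# hyperparameters, as a sketch under stated assumptions.
import random
import torch
import torch.nn as nn
from torchvision import models

def build_model(dropout: float, n_classes: int = 8) -> nn.Module:
    # weights=None keeps this runnable offline; in practice one would pass
    # pre-trained weights, e.g. models.MobileNet_V2_Weights.DEFAULT.
    backbone = models.mobilenet_v2(weights=None)
    return nn.Sequential(
        backbone.features,           # conv feature extractor
        nn.AdaptiveAvgPool2d(1),     # global average pooling layer
        nn.Flatten(),
        nn.Dropout(dropout),
        nn.Linear(1280, n_classes),  # MobileNetV2 feature width
    )

def random_search(trials: int = 5):
    best = None
    for _ in range(trials):
        cfg = {"dropout": random.uniform(0.1, 0.5),
               "lr": 10 ** random.uniform(-4, -2)}
        model = build_model(cfg["dropout"])
        # ... train briefly with torch.optim.Adam(model.parameters(),
        # lr=cfg["lr"]) and score on a validation split; faked here.
        score = random.random()      # placeholder for validation accuracy
        if best is None or score > best[0]:
            best = (score, cfg)
    return best

print(random_search())
```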
2306.07274
|
Bongjin Koo
|
Bongjin Koo, Julien Martel, Ariana Peck, Axel Levy, Fr\'ed\'eric
Poitevin, Nina Miolane
|
CryoChains: Heterogeneous Reconstruction of Molecular Assembly of
Semi-flexible Chains from Cryo-EM Images
| null | null | null | null |
cs.CV q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Cryogenic electron microscopy (cryo-EM) has transformed structural biology by
enabling the reconstruction of 3D biomolecular structures at up to near-atomic
resolution. However, the 3D reconstruction process remains challenging, as the
3D structures may exhibit substantial shape variations, while the 2D image
acquisition suffers from a low signal-to-noise ratio, requiring the acquisition
of very large datasets that are time-consuming to process. Current reconstruction
methods are precise but computationally expensive, or faster but lack a
physically-plausible model of large molecular shape variations. To fill this
gap, we propose CryoChains that encodes large deformations of biomolecules via
rigid body transformation of their chains, while representing their finer shape
variations with the normal mode analysis framework of biophysics. Our synthetic
data experiments on the human GABA\textsubscript{B} and heat shock protein show
that CryoChains gives a biophysically-grounded quantification of the
heterogeneous conformations of biomolecules, while reconstructing their 3D
molecular structures at an improved resolution compared to the current fastest,
interpretable deep learning method.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 17:57:12 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Jul 2023 20:43:54 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Koo",
"Bongjin",
""
],
[
"Martel",
"Julien",
""
],
[
"Peck",
"Ariana",
""
],
[
"Levy",
"Axel",
""
],
[
"Poitevin",
"Frédéric",
""
],
[
"Miolane",
"Nina",
""
]
] |
new_dataset
| 0.995246 |
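The deformation model named in the abstract above composes per-chain rigid-body transforms with normal-mode displacements. A minimal NumPy sketch of that composition follows; the coordinates, modes, and amplitudes are made up for illustration and do not reflect the paper's pipeline.

```python
# Sketch: each chain gets a rigid-body transform (rotation R, translation t)
# plus a linear combination of normal-mode displacement vectors for finer
# shape variation. All data below is fabricated.
import numpy as np

rng = np.random.default_rng(0)

def rotation_z(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def deform_chain(coords, R, t, modes, amplitudes):
    """coords: (N, 3); modes: (M, N, 3); amplitudes: (M,)."""
    flexible = coords + np.tensordot(amplitudes, modes, axes=1)  # NMA part
    return flexible @ R.T + t                                    # rigid part

# Two hypothetical chains of a molecular assembly.
chain_a = rng.normal(size=(50, 3))
chain_b = rng.normal(size=(40, 3)) + np.array([10.0, 0.0, 0.0])

modes_a = rng.normal(scale=0.05, size=(3, 50, 3))   # 3 toy normal modes
new_a = deform_chain(chain_a, rotation_z(0.3), np.array([1.0, 0.0, 0.0]),
                     modes_a, amplitudes=np.array([0.5, -0.2, 0.1]))
new_b = chain_b @ rotation_z(-0.1).T                # rigid-only chain

print(new_a.shape, new_b.shape)  # (50, 3) (40, 3)
```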
2307.00211
|
JiaRui Wang
|
Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, Guangtao
Zhai
|
AIGCIQA2023: A Large-scale Image Quality Assessment Database for AI
Generated Images: from the Perspectives of Quality, Authenticity and
Correspondence
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, in order to get a better understanding of the human visual
preferences for AI-generated images (AIGIs), a large-scale IQA database for
AI-generated content (AIGC) is established, which is named AIGCIQA2023. We
first generate over 2000 images based on 6
state-of-the-art text-to-image generation models using 100 prompts. Based on
these images, a well-organized subjective experiment is conducted to assess the
human visual preferences for each image from three perspectives including
quality, authenticity and correspondence. Finally, based on this large-scale
database, we conduct a benchmark experiment to evaluate the performance of
several state-of-the-art IQA metrics on our constructed database.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 03:30:31 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Jul 2023 11:05:04 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Wang",
"Jiarui",
""
],
[
"Duan",
"Huiyu",
""
],
[
"Liu",
"Jing",
""
],
[
"Chen",
"Shi",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Zhai",
"Guangtao",
""
]
] |
new_dataset
| 0.999618 |
2307.03649
|
Jos\'e \'Alamos
|
Jos\'e \'Alamos and Thomas Schmidt and Matthias Waehlisch
|
6LoRa: Full Stack IPv6 Networking with DSME-LoRa on Low Power IoT Nodes
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long range wireless transmission techniques such as LoRa are preferential
candidates for a substantial class of IoT applications, as they avoid the
complexity of multi-hop wireless forwarding. The existing network solutions for
LoRa, however, are not suitable for peer-to-peer communication, which is a key
requirement for many IoT applications. In this work, we propose a networking
system, 6LoRa, that enables IPv6 communication over LoRa. We present a full
stack system implementation on RIOT OS and evaluate the system on a real
testbed using realistic application scenarios with CoAP. Our findings confirm
that our approach outperforms existing solutions in terms of transmission delay
and packet reception ratio at comparable energy consumption.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 15:14:53 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jul 2023 16:06:45 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Álamos",
"José",
""
],
[
"Schmidt",
"Thomas",
""
],
[
"Waehlisch",
"Matthias",
""
]
] |
new_dataset
| 0.987539 |
2307.06919
|
Artur Philipp
|
Artur Philipp and Axel K\"upper
|
DAXiot: A Decentralized Authentication and Authorization Scheme for
Dynamic IoT Networks
|
6 pages, 2 figures, 3 listings, 1 table. This work has been submitted
to the IEEE for possible publication. Copyright may be transferred without
notice, after which this version may no longer be accessible
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated and decentralized networks supporting frequently changing system
participants are a requirement for future Internet of Things (IoT) use cases.
IoT devices and networks often lack adequate authentication and authorization
mechanisms, resulting in insufficient privacy for entities in such systems. In
this work we address both issues by designing a privacy preserving
challenge-response style authentication and authorization scheme based on
Decentralized Identifiers and Verifiable Credentials. Our solution allows
decentralized permission management of frequently changing network participants
and supports authenticated encryption for data confidentiality. We demonstrate
our solution in an MQTT 5.0 scenario and evaluate its security, privacy
guarantees, and performance.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 17:40:30 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Jul 2023 15:44:51 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Philipp",
"Artur",
""
],
[
"Küpper",
"Axel",
""
]
] |
new_dataset
| 0.990236 |
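The challenge-response pattern referenced in the abstract above is generic enough to sketch. The snippet below uses plain Ed25519 signatures over a random nonce and illustrates only the flow; the paper's scheme builds this on Decentralized Identifiers and Verifiable Credentials, which are not modeled here.

```python
# Generic challenge-response authentication sketch: the verifier sends a
# random nonce, the device signs it, the verifier checks the signature
# against a known public key.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519

# Device side: a long-term keypair (its public part would be bound to a DID).
device_key = ed25519.Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

# Verifier side: issue a fresh challenge.
challenge = os.urandom(32)

# Device side: prove possession of the private key.
response = device_key.sign(challenge)

# Verifier side: verify() raises InvalidSignature on failure.
device_pub.verify(response, challenge)
print("device authenticated")
```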
2307.07125
|
Xiaoyan Yang
|
Xiaoyan Yang, Dingbo Lu, Yang Li, Chenhui Li, Changbo Wang
|
CeRF: Convolutional Neural Radiance Fields for New View Synthesis with
Derivatives of Ray Modeling
|
16 pages, 11 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, novel view synthesis has gained popularity in generating
high-fidelity images. While demonstrating superior performance in the task of
synthesizing novel views, the majority of these methods are still based on the
conventional multi-layer perceptron for scene embedding. Furthermore, light
field models suffer from geometric blurring during pixel rendering, while
radiance field-based volume rendering methods have multiple solutions for a
certain target of density distribution integration. To address these issues, we
introduce the Convolutional Neural Radiance Fields to model the derivatives of
radiance along rays. Based on 1D convolutional operations, our proposed method
effectively extracts potential ray representations through a structured neural
network architecture. Besides, with the proposed ray modeling, a proposed
recurrent module is employed to solve geometric ambiguity in the fully neural
rendering process. Extensive experiments demonstrate the promising results of
our proposed model compared with existing state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 02:26:05 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jul 2023 09:47:49 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Yang",
"Xiaoyan",
""
],
[
"Lu",
"Dingbo",
""
],
[
"Li",
"Yang",
""
],
[
"Li",
"Chenhui",
""
],
[
"Wang",
"Changbo",
""
]
] |
new_dataset
| 0.994582 |
2307.07516
|
Khloud Al Jallad
|
Lana Touma and Mohammad Al Horani and Manar Tailouni and Anas Dahabiah
and Khloud Al Jallad
|
Voting-based Multimodal Automatic Deception Detection
| null | null | null | null |
cs.LG cs.CL cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic deception detection has been a hot research topic for a long time;
using machine learning and deep learning to detect deception automatically
brings new light to this old field. In this paper, we propose a voting-based
method for automatic deception detection from videos using audio, visual and
lexical features. Experiments were done on two datasets, the Real-life trial
dataset by Michigan University and the Miami University deception detection
dataset. Video samples were split into frames of images, audio, and
manuscripts. Our Voting-based Multimodal proposed solution consists of three
models. The first model is CNN for detecting deception from images, the second
model is Support Vector Machine (SVM) on Mel spectrograms for detecting
deception from audio and the third model is Word2Vec on Support Vector Machine
(SVM) for detecting deception from manuscripts. Our proposed solution
outperforms the state of the art. The best results achieved on images, audio,
and text were 97%, 96%, and 92% respectively on the Real-Life Trial dataset,
and 97%, 82%, and 73% on video, audio, and text respectively on the Miami
University deception detection dataset.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 17:05:11 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Touma",
"Lana",
""
],
[
"Horani",
"Mohammad Al",
""
],
[
"Tailouni",
"Manar",
""
],
[
"Dahabiah",
"Anas",
""
],
[
"Jallad",
"Khloud Al",
""
]
] |
new_dataset
| 0.990998 |
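The fusion step described in the abstract above is a plain majority vote across three unimodal classifiers, which can be sketched with the per-modality models stubbed out; the stub predictions below are placeholders, not the paper's trained models.

```python
# Sketch of the voting step: three unimodal classifiers (image CNN, audio
# SVM, text SVM) each emit a deception label; the decision is the majority.
from collections import Counter

def predict_image(frames):    # stand-in for the CNN on video frames
    return "deceptive"

def predict_audio(mel_spec):  # stand-in for the SVM on Mel spectrograms
    return "truthful"

def predict_text(transcript): # stand-in for Word2Vec features + SVM
    return "deceptive"

def vote(frames, mel_spec, transcript) -> str:
    labels = [predict_image(frames),
              predict_audio(mel_spec),
              predict_text(transcript)]
    winner, _ = Counter(labels).most_common(1)[0]
    return winner

print(vote(None, None, None))  # deceptive (2 of 3 modalities agree)
```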
2307.07518
|
Lei Ma
|
Lei Ma, Jincong Han, Zhaoxin Wang, Dian Zhang
|
CephGPT-4: An Interactive Multimodal Cephalometric Measurement and
Diagnostic System with Visual Large Language Model
| null | null | null | null |
cs.AI cs.CL cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Large-scale multimodal language models (LMMs) have achieved remarkable
success in general domains. However, the exploration of diagnostic language
models based on multimodal cephalometric medical data remains limited. In this
paper, we propose a novel multimodal cephalometric analysis and diagnostic
dialogue model. Firstly, a multimodal orthodontic medical dataset is
constructed, comprising cephalometric images and doctor-patient dialogue data,
with automatic analysis of cephalometric landmarks using U-net and generation
of diagnostic reports. Then, Minigpt-4 and VisualGLM are separately fine-tuned
on the cephalometric dataset and the generated diagnostic reports. Results
demonstrate that the CephGPT-4 model exhibits excellent performance and has the
potential to revolutionize orthodontic measurement and diagnostic applications.
These innovations hold revolutionary application potential in the field of
orthodontics.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 15:41:12 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Ma",
"Lei",
""
],
[
"Han",
"Jincong",
""
],
[
"Wang",
"Zhaoxin",
""
],
[
"Zhang",
"Dian",
""
]
] |
new_dataset
| 0.999558 |
2307.07525
| null |
Cristian Camilo Pulgar\'in-Ospina, Roc\'io del Amor, Adri\'an
Colomera, Julio Silva-Rodr\'iguez and Valery Naranjo
|
HistoColAi: An Open-Source Web Platform for Collaborative Digital
Histology Image Annotation with AI-Driven Predictive Integration
|
11 pages, 9 figures, 6 tables
| null | null | null |
cs.HC cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Digital pathology has become a standard in the pathology workflow due to its
many benefits. These include the level of detail of the whole slide images
generated and the potential immediate sharing of cases between hospitals.
Recent advances in deep learning-based methods for image analysis make them of
potential aid in digital pathology. However, a major limitation in developing
computer-aided diagnostic systems for pathology is the lack of an intuitive and
open web application for data annotation. This paper proposes a web service
that efficiently provides a tool to visualize and annotate digitized
histological images. In addition, to show and validate the tool, in this paper
we include a use case centered on the diagnosis of spindle cell skin neoplasm
for multiple annotators. A usability study of the tool is also presented,
showing the feasibility of the developed tool.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 10:41:09 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Pulgarín-Ospina",
"Cristian Camilo",
""
],
[
"del Amor",
"Rocío",
""
],
[
"Colomera",
"Adrián",
""
],
[
"Silva-Rodríguez",
"Julio",
""
],
[
"Naranjo",
"Valery",
""
]
] |
new_dataset
| 0.994204 |
2307.07541
|
Florin-Cristian Ghesu
|
Marc Demoustier, Yue Zhang, Venkatesh Narasimha Murthy, Florin C.
Ghesu, Dorin Comaniciu
|
ConTrack: Contextual Transformer for Device Tracking in X-ray
|
Accepted at MICCAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Device tracking is an important prerequisite for guidance during endovascular
procedures. Especially during cardiac interventions, detection and tracking of
the guiding catheter tip in 2D fluoroscopic images is important for
applications such as mapping vessels from angiography (high dose with contrast)
to fluoroscopy (low dose without contrast). Tracking the catheter tip poses
different challenges: the tip can be occluded by contrast during angiography or
interventional devices; and it is always in continuous movement due to the
cardiac and respiratory motions. To overcome these challenges, we propose
ConTrack, a transformer-based network that uses both spatial and temporal
contextual information for accurate device detection and tracking in both X-ray
fluoroscopy and angiography. The spatial information comes from the template
frames and the segmentation module: the template frames define the surroundings
of the device, whereas the segmentation module detects the entire device to
bring more context for the tip prediction. Using multiple templates makes the
model more robust to the change in appearance of the device when it is occluded
by the contrast agent. The flow information computed on the segmented catheter
mask between the current and the previous frame helps in further refining the
prediction by compensating for the respiratory and cardiac motions. The
experiments show that our method achieves 45% or higher accuracy in detection
and tracking when compared to state-of-the-art tracking models.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 14:20:09 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Demoustier",
"Marc",
""
],
[
"Zhang",
"Yue",
""
],
[
"Murthy",
"Venkatesh Narasimha",
""
],
[
"Ghesu",
"Florin C.",
""
],
[
"Comaniciu",
"Dorin",
""
]
] |
new_dataset
| 0.999394 |
2307.07649
|
Hongkuan Zhou
|
Hongkuan Zhou, Da Zheng, Xiang Song, George Karypis, Viktor Prasanna
|
DistTGL: Distributed Memory-Based Temporal Graph Neural Network Training
|
SC'23
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory-based Temporal Graph Neural Networks are powerful tools in dynamic
graph representation learning and have demonstrated superior performance in
many real-world applications. However, their node memory favors smaller batch
sizes to capture more dependencies in graph events and needs to be maintained
synchronously across all trainers. As a result, existing frameworks suffer from
accuracy loss when scaling to multiple GPUs. Even worse, the tremendous
overhead of synchronizing the node memory makes it impractical to deploy to
distributed GPU clusters. In this work, we propose DistTGL -- an efficient and
scalable solution to train memory-based TGNNs on distributed GPU clusters.
DistTGL has three improvements over existing solutions: an enhanced TGNN model,
a novel training algorithm, and an optimized system. In experiments, DistTGL
achieves near-linear convergence speedup, outperforming the state-of-the-art
single-machine method by 14.5% in accuracy and 10.17x in training throughput.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 22:52:27 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zhou",
"Hongkuan",
""
],
[
"Zheng",
"Da",
""
],
[
"Song",
"Xiang",
""
],
[
"Karypis",
"George",
""
],
[
"Prasanna",
"Viktor",
""
]
] |
new_dataset
| 0.997764 |
2307.07650
|
Li-Hsiang Shen
|
An-Hung Hsiao, Li-Hsiang Shen, Chen-Yi Chang, Chun-Jie Chiu, Kai-Ten
Feng
|
SALC: Skeleton-Assisted Learning-Based Clustering for Time-Varying
Indoor Localization
| null | null | null | null |
cs.LG cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless indoor localization has attracted a significant amount of attention in
recent years. Using received signal strength (RSS) obtained from WiFi access
points (APs) for establishing fingerprinting database is a widely utilized
method in indoor localization. However, the time-variant problem for indoor
positioning systems is not well-investigated in existing literature. Compared
to conventional static fingerprinting, the dynamicallyreconstructed database
can adapt to a highly-changing environment, which achieves sustainability of
localization accuracy. To deal with the time-varying issue, we propose a
skeleton-assisted learning-based clustering localization (SALC) system,
including RSS-oriented map-assisted clustering (ROMAC), cluster-based online
database establishment (CODE), and cluster-scaled location estimation (CsLE).
The SALC scheme jointly considers similarities from the skeleton-based shortest
path (SSP) and the time-varying RSS measurements across the reference points
(RPs). ROMAC clusters RPs into different feature sets and therefore selects
suitable monitor points (MPs) for enhancing location estimation. Moreover, the
CODE algorithm aims at establishing an adaptive fingerprint database to
alleviate the time-varying problem. Finally, CsLE is adopted to acquire the
target position by leveraging the benefits of clustering information and
estimated signal variations in order to rescale the weights from the weighted
k-nearest
neighbors (WkNN) method. Both simulation and experimental results demonstrate
that the proposed SALC system can effectively reconstruct the fingerprint
database with an enhanced location estimation accuracy, which outperforms the
other existing schemes in the open literature.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 22:55:52 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Hsiao",
"An-Hung",
""
],
[
"Shen",
"Li-Hsiang",
""
],
[
"Chang",
"Chen-Yi",
""
],
[
"Chiu",
"Chun-Jie",
""
],
[
"Feng",
"Kai-Ten",
""
]
] |
new_dataset
| 0.984617 |
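The weighted k-nearest neighbors step mentioned at the end of the abstract above is a standard fingerprinting estimator and can be sketched independently of the SALC-specific rescaling; the RSS values and reference-point positions below are fabricated.

```python
# Weighted k-nearest neighbors (WkNN) position estimate from an RSS
# fingerprint database: find the k reference points (RPs) whose stored RSS
# vectors are closest to the observed one, then average their positions with
# weights inversely proportional to the RSS distance.
import numpy as np

rng = np.random.default_rng(1)
n_rp, n_ap = 50, 6
fingerprints = rng.uniform(-90, -30, size=(n_rp, n_ap))  # dBm per AP
positions = rng.uniform(0, 20, size=(n_rp, 2))           # RP coordinates (m)

def wknn(rss_obs: np.ndarray, k: int = 4) -> np.ndarray:
    d = np.linalg.norm(fingerprints - rss_obs, axis=1)   # RSS-space distance
    idx = np.argsort(d)[:k]                              # k nearest RPs
    w = 1.0 / (d[idx] + 1e-9)                            # inverse-distance weights
    return (w[:, None] * positions[idx]).sum(0) / w.sum()

observed = fingerprints[7] + rng.normal(0, 2, size=n_ap) # noisy reading near RP 7
print(wknn(observed), "true:", positions[7])
```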
2307.07653
|
Donghua Wang
|
Donghua Wang, Wen Yao, Tingsong Jiang, Chao Li, Xiaoqian Chen
|
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical
World
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical adversarial attacks against deep neural networks (DNNs) have
recently gained increasing attention. The current mainstream physical attacks
use printed adversarial patches or camouflage to alter the appearance of the
target object. However, these approaches generate conspicuous adversarial
patterns that show poor stealthiness. Another physically deployable attack is
the optical attack, which features stealthiness but performs weakly in the
daytime under sunlight. In this paper, we propose a novel Reflected Light
Attack (RFLA) that is effective and stealthy in both the digital and physical
worlds, and is implemented by placing a colored transparent plastic sheet and a
paper cutout
of a specific shape in front of the mirror to create different colored
geometries on the target object. To achieve these goals, we devise a general
framework based on the circle to model the reflected light on the target
object. Specifically, we optimize a circle (composed of a coordinate and
radius) to carry various geometrical shapes determined by the optimized angle.
The fill color of the geometry shape and its corresponding transparency are
also optimized. We extensively evaluate the effectiveness of RFLA on different
datasets and models. Experiment results suggest that the proposed method
achieves over 99% success rate on different datasets and models in the digital
world. Additionally, we verify the effectiveness of the proposed method in
different physical environments by using sunlight or a flashlight.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 23:10:56 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Wang",
"Donghua",
""
],
[
"Yao",
"Wen",
""
],
[
"Jiang",
"Tingsong",
""
],
[
"Li",
"Chao",
""
],
[
"Chen",
"Xiaoqian",
""
]
] |
new_dataset
| 0.999665 |
2307.07671
|
Muhammad Lutfor Rahman
|
Amani Mohammed Alqarni, Daniel Timko and Muhammad Lutfor Rahman
|
Saudi Arabian Perspective of Security, Privacy, and Attitude of Using
Facial Recognition Technology
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Facial Recognition Technology (FRT) is a pioneering field of mass
surveillance that sparks privacy concerns and is considered a growing threat in
the modern world. FRT has been widely adopted in the Kingdom of Saudi Arabia to
improve public services and surveillance. Accordingly, the following study aims
to understand the privacy and security concerns, trust, and acceptance of FRT
in Saudi Arabia. Validated Privacy Concerns (IUIPC-8), Security Attitudes
(SA-6), and Security Behavior (SeBIS) scales are used along with replicate
studies from Pew Research Center trust questions and government trust
questions. In addition, we examine potential differences between Saudis and
Americans. To gain insights into these concerns, we conducted an online survey
involving 53 Saudi Arabia citizens who are residing in the USA. We have
collected data in the US instead of Saudi Arabia to avoid the regulatory
challenges of the Saudi Data & Artificial Intelligence Authority (SDAIA).
Responses from closed-ended questions revealed that Saudis score much lower
than Americans when it comes to security attitudes, whereas they score lower
when it comes to privacy concerns. We found no significant difference between
Saudis' and Americans' acceptance of the use of FRT in different scenarios, but
we found that Saudis trust advertisers more than Americans. Additionally,
Saudis are more likely than Americans to agree that the government should
strictly limit the use of FRT.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 00:42:30 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Alqarni",
"Amani Mohammed",
""
],
[
"Timko",
"Daniel",
""
],
[
"Rahman",
"Muhammad Lutfor",
""
]
] |
new_dataset
| 0.965921 |
2307.07700
|
Joohyung Lee
|
Zhun Yang, Adam Ishay, Joohyung Lee
|
NeurASP: Embracing Neural Networks into Answer Set Programming
|
16 pages, 29th International Joint Conference on Artificial
Intelligence (IJCAI 2020). arXiv admin note: substantial text overlap with
arXiv:2009.10256
| null | null | null |
cs.AI cs.LG cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present NeurASP, a simple extension of answer set programs by embracing
neural networks. By treating the neural network output as the probability
distribution over atomic facts in answer set programs, NeurASP provides a
simple and effective way to integrate sub-symbolic and symbolic computation. We
demonstrate how NeurASP can make use of a pre-trained neural network in
symbolic computation and how it can improve the neural network's perception
result by applying symbolic reasoning in answer set programming. Also, NeurASP
can be used to train a neural network better by training with ASP rules so that
a neural network not only learns from implicit correlations from the data but
also from the explicit complex semantic constraints expressed by the rules.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 04:03:17 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Yang",
"Zhun",
""
],
[
"Ishay",
"Adam",
""
],
[
"Lee",
"Joohyung",
""
]
] |
new_dataset
| 0.999574 |
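The key idea in the abstract above, treating neural network outputs as a probability distribution over atomic facts, can be illustrated with the two-digit-sum example commonly used for neuro-symbolic systems. The softmax outputs below are made up, and brute-force enumeration stands in for the answer set solver.

```python
# Two (hypothetical) digit classifiers emit distributions over 0-9; the
# probability of the symbolic query "sum == 7" is obtained by marginalizing
# over all joint digit assignments that satisfy it.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def fake_softmax():
    z = rng.normal(size=10)
    e = np.exp(z - z.max())
    return e / e.sum()

p_img1, p_img2 = fake_softmax(), fake_softmax()  # made-up classifier outputs

def prob_query(query) -> float:
    # Sum the joint probability of every model (digit assignment) that
    # satisfies the symbolic constraint.
    return sum(p_img1[a] * p_img2[b]
               for a, b in itertools.product(range(10), repeat=2)
               if query(a, b))

print("P(sum == 7) =", round(prob_query(lambda a, b: a + b == 7), 4))
```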
2307.07708
|
Wuyang Luan
|
Lei Pan, Wuyang Luan, Yuan Zheng, Qiang Fu, Junhui Li
|
PSGformer: Enhancing 3D Point Cloud Instance Segmentation via Precise
Semantic Guidance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing 3D instance segmentation methods are derived from 3D semantic
segmentation models. However, these indirect approaches suffer from certain
limitations. They fail to fully leverage global and local semantic information
for accurate prediction, which hampers the overall performance of the 3D
instance segmentation framework. To address these issues, this paper presents
PSGformer, a novel 3D instance segmentation network. PSGformer incorporates two
key advancements to enhance the performance of 3D instance segmentation.
Firstly, we propose a Multi-Level Semantic Aggregation Module, which
effectively captures scene features by employing foreground point filtering and
multi-radius aggregation. This module enables the acquisition of more detailed
semantic information from global and local perspectives. Secondly, PSGformer
introduces a Parallel Feature Fusion Transformer Module that independently
processes super-point features and aggregated features using transformers. The
model achieves a more comprehensive feature representation by the features
which connect global and local features. We conducted extensive experiments on
the ScanNetv2 dataset. Notably, PSGformer exceeds compared state-of-the-art
methods by 2.2% on ScanNetv2 hidden test set in terms of mAP. Our code and
models will be publicly released.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 04:45:37 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Pan",
"Lei",
""
],
[
"Luan",
"Wuyang",
""
],
[
"Zheng",
"Yuan",
""
],
[
"Fu",
"Qiang",
""
],
[
"Li",
"Junhui",
""
]
] |
new_dataset
| 0.994961 |
2307.07717
|
Pramit Pal Mr
|
Pramit Kumar Pal, Debarshi Dutta, Attreyee Mandal, Dipshika Das
|
Deep ANN-based Touch-less 3D Pad for Digit Recognition
|
8 pages, 21 figures, International Conference on Artificial
Intelligence: Theory and Applications (AITA-2021)
|
Journal of Biological Engineering Research and Review 2021
https://biologicalengineering.in/
| null | null |
cs.HC eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Covid-19 pandemic has changed the way humans interact with their
environment. Common touch surfaces such as elevator switches and ATM switches
are hazardous to touch as they are used by countless people every day,
increasing the chance of getting infected. So, a need for touch-less
interaction with machines arises. In this paper, we propose a method of
recognizing the ten decimal digits (0-9) by writing the digits in the air near
a sensing printed circuit board using a human hand. We captured the movement of
the hand by a sensor based on projective capacitance and classified it into
digits using an Artificial Neural Network. Our method does not use pictures,
which significantly reduces the computational requirements and preserves users'
privacy. Thus, the proposed method can be easily implemented in public places.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 05:42:53 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Pal",
"Pramit Kumar",
""
],
[
"Dutta",
"Debarshi",
""
],
[
"Mandal",
"Attreyee",
""
],
[
"Das",
"Dipshika",
""
]
] |
new_dataset
| 0.998971 |
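Since the abstract above classifies hand-trajectory sensor readings with an artificial neural network rather than with images, a small MLP over flattened capacitance time series captures the idea; the synthetic data, feature length, and layer sizes below are assumptions, not the authors' configuration.

```python
# Sketch of the classification stage: a small MLP mapping a flattened
# capacitive-sensor time series to one of ten digit classes (0-9).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 500, 64          # e.g. 64 capacitance readings/gesture
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 10, size=n_samples)  # digit labels (random noise here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on held-out split:", clf.score(X_te, y_te))  # ~chance on noise
```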
2307.07739
|
Uwe Schwiegelshohn
|
Samin Jamalabadi and Uwe Schwiegelshohn
|
WSRPT is 1.2259-competitive for Weighted Completion Time Scheduling
|
18 pages, 4 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
\textit{Weighted shortest processing time first} (WSPT) is one of the best
known algorithms for total weighted completion time scheduling problems. For
each job $J_j$, it first combines the two independent job parameters weight
$w_j$ and processing time $p_j$ by simply forming the so called Smith ratio
$w_j/p_j$. Then it schedules the jobs in order of decreasing Smith ratio
values. The algorithm guarantees an optimal schedule for a single machine and
the approximation factor $1.2071$ for parallel identical machines.
For the corresponding online problem in a single machine environment with
preemption, the \textit{weighted shortest remaining processing time first}
(WSRPT) algorithm replaces the processing time $p_j$ with the remaining
processing time $p_j(t)$ for every job that is only partially executed at time
$t$ when determining the Smith ratio. For more than 10 years, it has only been
known that the competitive ratio of this algorithm lies in the interval
$[1.2157,2]$.
In this paper, we present the tight competitive ratio $1.2259$ for WSRPT. To
this end, we iteratively reduce the instance space of the problem without
affecting the worst case performance until we are able to analyze the remaining
instances. This result makes WSRPT the best known algorithm for deterministic
online total weighted completion time scheduling in a preemptive single machine
environment improving the previous competitive ratio of $1.5651$. Additionally,
we increase the lower bound of this competitive ratio from $1.0730$ to
$1.1038$.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 08:04:46 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Jamalabadi",
"Samin",
""
],
[
"Schwiegelshohn",
"Uwe",
""
]
] |
new_dataset
| 0.996441 |
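The WSPT rule defined in the abstract above is easy to make concrete. The sketch below computes a non-preemptive single-machine WSPT schedule and its total weighted completion time on fabricated job data; the paper's online WSRPT variant would instead re-evaluate $w_j/p_j(t)$ with remaining processing times whenever a new job is released.

```python
# Weighted Shortest Processing Time first (WSPT): order jobs by decreasing
# Smith ratio w_j / p_j, which is optimal for total weighted completion time
# on a single machine when all jobs are available at time 0.
jobs = [  # (name, weight w_j, processing time p_j) -- fabricated
    ("A", 3.0, 2.0),
    ("B", 1.0, 1.0),
    ("C", 4.0, 5.0),
    ("D", 2.0, 4.0),
]

schedule = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

t, objective = 0.0, 0.0
for name, w, p in schedule:
    t += p                      # completion time C_j of this job
    objective += w * t          # accumulate w_j * C_j
    print(f"{name}: Smith ratio {w / p:.2f}, completes at {t}")
print("total weighted completion time:", objective)  # 65.0 for this data
```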
2307.07861
|
Nader Abu-Alrub
|
Nader J. Abu-Alrub, Nathir A. Rawashdeh
|
Radar Odometry for Autonomous Ground Vehicles: A Survey of Methods and
Datasets
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Radar odometry has been gaining attention in the last decade. It stands as
one of the best solutions for robotic state estimation in unfavorable
conditions; conditions where other interoceptive and exteroceptive sensors may
fall short. Radars are widely adopted, resilient to weather and illumination,
and provide Doppler information, which makes them very attractive for such tasks.
This article presents an extensive survey of the latest work on ground-based
radar odometry for autonomous robots. It covers technologies, datasets,
metrics, and approaches that have been developed in the last decade in addition
to in-depth analysis and categorization of the various methods and techniques
applied to tackle this problem. This article concludes with challenges and
future recommendations to advance the field of radar odometry making it a great
starting point for newcomers and a valuable reference for experienced
researchers.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 17:58:38 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Abu-Alrub",
"Nader J.",
""
],
[
"Rawashdeh",
"Nathir A.",
""
]
] |
new_dataset
| 0.999226 |
2307.07871
|
Grgur Kova\v{c}
|
Grgur Kova\v{c}, R\'emy Portelas, Peter Ford Dominey, Pierre-Yves
Oudeyer
|
The SocialAI School: Insights from Developmental Psychology Towards
Artificial Socio-Cultural Agents
|
Accepted at the "Workshop on Theory-of-Mind" at ICML 2023
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developmental psychologists have long established the importance of
socio-cognitive abilities in human intelligence. These abilities enable us to
enter, participate and benefit from human culture. AI research on social
interactive agents mostly concerns the emergence of culture in a multi-agent
setting (often without a strong grounding in developmental psychology). We
argue that AI research should be informed by psychology and study
socio-cognitive abilities enabling to enter a culture too. We discuss the
theories of Michael Tomasello and Jerome Bruner to introduce some of their
concepts to AI and outline key concepts and socio-cognitive abilities. We
present The SocialAI school - a tool including a customizable parameterized
suite of procedurally generated environments, which simplifies conducting
experiments regarding those concepts. We show examples of such experiments with
RL agents and Large Language Models. The main motivation of this work is to
engage the AI community around the problem of social intelligence informed by
developmental psychology, and to provide a tool to simplify first steps in this
direction. Refer to the project website for code and additional information:
https://sites.google.com/view/socialai-school.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 19:05:56 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Kovač",
"Grgur",
""
],
[
"Portelas",
"Rémy",
""
],
[
"Dominey",
"Peter Ford",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
]
] |
new_dataset
| 0.985107 |
2307.07931
|
Sanil Rao
|
Het Mankad, Sanil Rao, Brian Van Straalen, Phillip Colella, Franz
Franchetti
|
ProtoX: A First Look
| null | null | null | null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a first look at ProtoX, a code generation framework for stencil
and pointwise operations that occur frequently in the numerical solution of
partial differential equations. ProtoX has Proto as its library frontend and
SPIRAL as the backend. Proto is a C++ based domain specific library which
optimizes the algorithms used to compute the numerical solution of partial
differential equations. Meanwhile, SPIRAL is a code generation system that
focuses on generating highly optimized target code. Although the current design
layout of Proto and its high level of abstraction provide a user-friendly
setup, there is still room for improving its performance by applying various
techniques either at the compiler level or at the algorithmic level. Hence, in
this paper we propose adding SPIRAL as the library backend for Proto enabling
abstraction fusion, which is usually difficult to perform by any compiler. We
demonstrate the construction of ProtoX by considering the 2D Poisson equation
as a model problem from Proto. We provide the final generated code for CPU,
Multi-core CPU, and GPU as well as some performance numbers for CPU.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 03:33:19 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Mankad",
"Het",
""
],
[
"Rao",
"Sanil",
""
],
[
"Van Straalen",
"Brian",
""
],
[
"Colella",
"Phillip",
""
],
[
"Franchetti",
"Franz",
""
]
] |
new_dataset
| 0.99484 |
2307.07976
|
Maosu Li
|
Maosu Li, Yijie Wu, Anthony G.O. Yeh, Fan Xue
|
HRHD-HK: A benchmark dataset of high-rise and high-density urban scenes
for 3D semantic segmentation of photogrammetric point clouds
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Many existing 3D semantic segmentation methods, notably deep learning methods
in computer vision, claim to achieve desired results on urban point clouds, in
which the city objects are too numerous and diverse for people to judge
qualitatively. Thus, it is important to assess these methods quantitatively
in diversified real-world urban scenes, encompassing high-rise, low-rise,
high-density, and low-density urban areas. However, existing public benchmark
datasets primarily represent low-rise scenes from European cities and cannot
assess the methods comprehensively. This paper presents a benchmark dataset of
high-rise urban point clouds, namely High-Rise, High-Density urban scenes of
Hong Kong (HRHD-HK), which has been vacant for a long time. HRHD-HK arranged in
150 tiles contains 273 million colorful photogrammetric 3D points from diverse
urban settings. The semantic labels of HRHD-HK include building, vegetation,
road, waterbody, facility, terrain, and vehicle. To the best of our knowledge,
HRHD-HK is the first photogrammetric dataset that focuses on HRHD urban areas.
This paper also comprehensively evaluates eight popular semantic segmentation
methods on the HRHD-HK dataset. Experimental results confirmed plenty of room
for enhancing the current 3D semantic segmentation of point clouds, especially
for city objects with small volumes. Our dataset is publicly available at:
https://github.com/LuZaiJiaoXiaL/HRHD-HK.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 07:57:03 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Li",
"Maosu",
""
],
[
"Wu",
"Yijie",
""
],
[
"Yeh",
"Anthony G. O.",
""
],
[
"Xue",
"Fan",
""
]
] |
new_dataset
| 0.999853 |
2307.08007
|
Adri\'an Barahona-R\'ios
|
Adri\'an Barahona-R\'ios, Tom Collins
|
NoiseBandNet: Controllable Time-Varying Neural Synthesis of Sound
Effects Using Filterbanks
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controllable neural audio synthesis of sound effects is a challenging task
due to the potential scarcity and spectro-temporal variance of the data.
Differentiable digital signal processing (DDSP) synthesisers have been
successfully employed to model and control musical and harmonic signals using
relatively limited data and computational resources. Here we propose
NoiseBandNet, an architecture capable of synthesising and controlling sound
effects by filtering white noise through a filterbank, thus going further than
previous systems that make assumptions about the harmonic nature of sounds. We
evaluate our approach via a series of experiments, modelling footsteps,
thunderstorm, pottery, knocking, and metal sound effects. Comparing
NoiseBandNet audio reconstruction capabilities to four variants of the
DDSP-filtered noise synthesiser, NoiseBandNet scores higher in nine out of ten
evaluation categories, establishing a flexible DDSP method for generating
time-varying, inharmonic sound effects of arbitrary length with both good time
and frequency resolution. Finally, we introduce some potential creative uses of
NoiseBandNet, by generating variations, performing loudness transfer, and by
training it on user-defined control curves.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 11:21:27 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Barahona-Ríos",
"Adrián",
""
],
[
"Collins",
"Tom",
""
]
] |
new_dataset
| 0.981757 |
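The synthesis principle in the abstract above, filtering white noise through a filterbank and mixing the bands, can be sketched without any learning. In NoiseBandNet the per-band gains are predicted by a neural network; here they are hand-made envelopes, and the band layout is an assumption.

```python
# Filterbank noise synthesis sketch: bandpass-filter white noise into a few
# bands and mix them with time-varying gain envelopes.
import numpy as np
from scipy.signal import butter, sosfilt

sr, dur = 16_000, 1.0
n = int(sr * dur)
noise = np.random.default_rng(0).standard_normal(n)

edges = [(100, 300), (300, 900), (900, 2700), (2700, 7000)]  # Hz, illustrative
bands = [sosfilt(butter(4, e, btype="bandpass", fs=sr, output="sos"), noise)
         for e in edges]

t = np.linspace(0, dur, n, endpoint=False)
envelopes = [np.exp(-3 * t),                         # decaying low band
             0.5 + 0.5 * np.sin(2 * np.pi * 2 * t),  # tremolo mid band
             t / dur,                                 # swelling band
             np.ones(n) * 0.2]                        # constant hiss

out = sum(env * band for env, band in zip(envelopes, bands))
out /= np.abs(out).max()                 # normalize to [-1, 1]
print(out.shape, out.dtype)              # write to disk with soundfile if desired
```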
2307.08098
|
Jialun Pei
|
Jialun Pei, Tao Jiang, He Tang, Nian Liu, Yueming Jin, Deng-Ping Fan,
Pheng-Ann Heng
|
CalibNet: Dual-branch Cross-modal Calibration for RGB-D Salient Instance
Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel approach for RGB-D salient instance segmentation using a
dual-branch cross-modal feature calibration architecture called CalibNet. Our
method simultaneously calibrates depth and RGB features in the kernel and mask
branches to generate instance-aware kernels and mask features. CalibNet
consists of three simple modules, a dynamic interactive kernel (DIK) and a
weight-sharing fusion (WSF), which work together to generate effective
instance-aware kernels and integrate cross-modal features. To improve the
quality of depth features, we incorporate a depth similarity assessment (DSA)
module prior to DIK and WSF. In addition, we further contribute a new DSIS
dataset, which contains 1,940 images with elaborate instance-level annotations.
Extensive experiments on three challenging benchmarks show that CalibNet yields
a promising result, i.e., 58.0% AP with 320*480 input size on the COME15K-N
test set, which significantly surpasses the alternative frameworks. Our code
and dataset are available at: https://github.com/PJLallen/CalibNet.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 16:49:59 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Pei",
"Jialun",
""
],
[
"Jiang",
"Tao",
""
],
[
"Tang",
"He",
""
],
[
"Liu",
"Nian",
""
],
[
"Jin",
"Yueming",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.989736 |
2307.08141
|
Ivan Kalinov Alexeevich
|
Alexander Petrovsky, Yomna Youssef, Kirill Myasoedov, Artem
Timoshenko, Vladimir Guneavoi, Ivan Kalinov, and Dzmitry Tsetserukou
|
POA: Passable Obstacles Aware Path-planning Algorithm for Navigation of
a Two-wheeled Robot in Highly Cluttered Environments
|
Accepted to the 2023 IEEE Conference on Systems, Man, and Cybernetics
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on Passable Obstacles Aware (POA) planner - a novel
navigation method for two-wheeled robots in a highly cluttered environment. The
navigation algorithm detects and classifies objects to distinguish two types of
obstacles - passable and unpassable. Our algorithm allows two-wheeled robots to
find a path through passable obstacles. Such a solution helps the robot work in
areas inaccessible to standard path planners and find optimal trajectories in
scenarios with a high number of objects in the robot's vicinity. The POA
planner can be embedded into other planning algorithms and enables them to
build a path through obstacles. Our method decreases the path length and the
total travel time to the final destination by up to 43% and 39%, respectively,
compared to standard path planners such as GVD, A*, and RRT*.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 19:44:27 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Petrovsky",
"Alexander",
""
],
[
"Youssef",
"Yomna",
""
],
[
"Myasoedov",
"Kirill",
""
],
[
"Timoshenko",
"Artem",
""
],
[
"Guneavoi",
"Vladimir",
""
],
[
"Kalinov",
"Ivan",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.992539 |
2307.08189
|
Zhengping Zhou
|
Zhengping Zhou, Lezhi Li, Xinxi Chen, Andy Li
|
Mini-Giants: "Small" Language Models and Open Source Win-Win
|
16 pages, 1 figure
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
ChatGPT is phenomenal. However, it is prohibitively expensive to train and
refine such giant models. Fortunately, small language models are flourishing
and becoming more and more competent. We call them "mini-giants". We argue
that open source communities like Kaggle and mini-giants can create a win-win
in many ways: technically, ethically, and socially. In this article, we
present a brief yet
rich background, discuss how to attain small language models, present a
comparative study of small language models and a brief discussion of evaluation
methods, discuss the application scenarios where small language models are most
needed in the real world, and conclude with discussion and outlook.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 01:35:56 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zhou",
"Zhengping",
""
],
[
"Li",
"Lezhi",
""
],
[
"Chen",
"Xinxi",
""
],
[
"Li",
"Andy",
""
]
] |
new_dataset
| 0.994947 |
2307.08221
|
Lizhou Liao
|
Lizhou Liao, Li Sun, Xinhui Bai, Zhenxing You, Hongyuan Yuan, Chunyun
Fu
|
NDT-Map-Code: A 3D global descriptor for real-time loop closure
detection in lidar SLAM
|
8 pages, 9 figures, 2 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Loop-closure detection, also known as place recognition, which aims to
identify previously visited locations, is an essential component of a SLAM
system. Existing research on lidar-based loop closure relies heavily on dense
point clouds and 360-degree FOV lidars. This paper proposes an out-of-the-box
NDT (Normal
Distribution Transform) based global descriptor, NDT-Map-Code, designed for
both on-road driving and underground valet parking scenarios. NDT-Map-Code can
be directly extracted from the NDT map without the need for a dense point
cloud, resulting in excellent scalability and low maintenance cost. The NDT
representation is leveraged to identify representative patterns, which are
further encoded according to their spatial location (bearing, range, and
height). Experimental results on the NIO underground parking lot dataset and
the KITTI dataset demonstrate that our method achieves significantly better
performance compared to the state-of-the-art.
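A toy sketch of encoding spatial patterns by (bearing, range, height), as the
abstract describes: NDT cell centroids are binned into a polar histogram and
two descriptors are compared by cosine similarity. The bin counts, ranges, and
similarity measure are assumptions for illustration, not the paper's exact
design.

import numpy as np

def brh_descriptor(centroids, n_bearing=24, n_range=8, n_height=4,
                   max_range=40.0, max_height=4.0):
    # Histogram NDT cell centroids by (bearing, range, height).
    desc = np.zeros((n_bearing, n_range, n_height))
    for x, y, z in centroids:
        rng = np.hypot(x, y)
        if rng >= max_range or not (0.0 <= z < max_height):
            continue
        b = int((np.arctan2(y, x) + np.pi) / (2 * np.pi) * n_bearing) % n_bearing
        r = int(rng / max_range * n_range)
        h = int(z / max_height * n_height)
        desc[b, r, h] += 1.0
    v = desc.ravel()
    return v / (np.linalg.norm(v) + 1e-12)  # unit-normalized descriptor

def similarity(d1, d2):
    return float(d1 @ d2)                   # cosine similarity of unit vectors

rng = np.random.default_rng(0)
scan = rng.uniform([-30, -30, 0], [30, 30, 3], size=(500, 3))
print(similarity(brh_descriptor(scan), brh_descriptor(scan + 0.05)))  # ~1.0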
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 03:45:47 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Liao",
"Lizhou",
""
],
[
"Sun",
"Li",
""
],
[
"Bai",
"Xinhui",
""
],
[
"You",
"Zhenxing",
""
],
[
"Yuan",
"Hongyuan",
""
],
[
"Fu",
"Chunyun",
""
]
] |
new_dataset
| 0.999889 |
2307.08235
|
Haohui Wang
|
Haohui Wang, Weijie Guan, Jianpeng Chen, Zi Wang, Dawei Zhou
|
HeroLT: Benchmarking Heterogeneous Long-Tailed Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long-tailed data distributions are prevalent in a variety of domains,
including finance, e-commerce, biomedical science, and cyber security. In such
scenarios, the performance of machine learning models is often dominated by the
head categories, while the learning of tail categories is significantly
inadequate. Given the abundant studies conducted to alleviate this issue, this
work aims to provide a systematic view of long-tailed learning with regard to
three
pivotal angles: (A1) the characterization of data long-tailedness, (A2) the
data complexity of various domains, and (A3) the heterogeneity of emerging
tasks. To achieve this, we develop the most comprehensive (to the best of our
knowledge) long-tailed learning benchmark named HeroLT, which integrates 13
state-of-the-art algorithms and 6 evaluation metrics on 14 real-world benchmark
datasets across 4 tasks from 3 domains. HeroLT with novel angles and extensive
experiments (264 in total) enables researchers and practitioners to effectively
and fairly evaluate newly proposed methods compared with existing baselines on
varying types of datasets. Finally, we conclude by highlighting the significant
applications of long-tailed learning and identifying several promising future
directions. For accessibility and reproducibility, we open-source our benchmark
HeroLT and corresponding results at https://github.com/SSSKJ/HeroLT.
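To make the notion of data long-tailedness (A1) concrete, here is a small
sketch that generates a synthetic power-law class distribution and reports
simple long-tailedness statistics; the decay exponent and class counts are
made up for illustration and are not taken from the benchmark.

import numpy as np

def pareto_class_counts(n_classes=50, n_head=1000, alpha=2.0):
    # Synthetic long-tailed class sizes following a power-law decay in rank.
    ranks = np.arange(1, n_classes + 1)
    return np.maximum((n_head / ranks ** alpha).astype(int), 1)

counts = np.sort(pareto_class_counts())[::-1]
print("imbalance factor (head/tail):", counts[0] / counts[-1])
print("share of samples in the top 20% of classes:",
      round(counts[: len(counts) // 5].sum() / counts.sum(), 3))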
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 04:32:45 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Wang",
"Haohui",
""
],
[
"Guan",
"Weijie",
""
],
[
"Chen",
"Jianpeng",
""
],
[
"Wang",
"Zi",
""
],
[
"Zhou",
"Dawei",
""
]
] |
new_dataset
| 0.994814 |
2307.08256
|
Xin Zhang
|
Xin Zhang and Shenghui Song
|
URLLC in IRS-Aided MIMO Systems: Finite Blocklength Analysis and Design
|
8 pages, 3 figures, accepted by Asilomar Conference on Signals,
Systems, and Computers 2023. arXiv admin note: text overlap with
arXiv:2210.08832
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the ultra reliable and low latency communication
(URLLC) performance of the IRS-aided MIMO system. The upper and lower bounds of
the optimal average error probability (OAEP) for the coding rate within
1/sqrt(Mn) of the capacity are derived, where n and M represent the blocklength
and the number of transmit antennas, respectively. To achieve this goal, a new
central limit theorem (CLT) for the mutual information density over the
IRS-aided MIMO system is derived in the asymptotic regime where the
blocklength, the IRS size, and the number of antennas go to infinity at the
same pace. The CLT is then utilized to derive the closed-form upper and lower
bounds for the OAEP. Based on the analysis result, a gradient-based algorithm
is proposed to minimize the lower bound of the OAEP by optimizing the phase
shift of the IRS. Simulation results validate the fitness of the CLT and the
effectiveness of the proposed algorithm in optimizing the theoretical bound, as
well as the performance of practical LDPC codes.
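For background, bounds of this kind refine the classical finite-blocklength
normal approximation, epsilon ~= Q(sqrt(n/V)(C - R)). The sketch below uses
made-up capacity and dispersion values and is not the paper's OAEP expression.

from math import erfc, sqrt

def q_func(x):
    # Gaussian tail function Q(x).
    return 0.5 * erfc(x / sqrt(2))

def error_prob(rate, capacity, dispersion, n):
    # epsilon ~= Q( sqrt(n / V) * (C - R) ) at blocklength n.
    return q_func(sqrt(n / dispersion) * (capacity - rate))

for n in (100, 500, 2000):
    # Error probability shrinks as the blocklength grows at a fixed rate gap.
    print(n, error_prob(rate=1.8, capacity=2.0, dispersion=1.5, n=n))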
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 05:47:13 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zhang",
"Xin",
""
],
[
"Song",
"Shenghui",
""
]
] |
new_dataset
| 0.989419 |
2307.08278
|
Svetlana Pavlitska
|
Svetlana Pavlitska, Nico Lambing and J. Marius Z\"ollner
|
Adversarial Attacks on Traffic Sign Recognition: A Survey
|
Accepted for publication at ICECCME2023
| null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic sign recognition is an essential component of perception in
autonomous vehicles, which is currently performed almost exclusively with deep
neural networks (DNNs). However, DNNs are known to be vulnerable to adversarial
attacks. Several previous works have demonstrated the feasibility of
adversarial attacks on traffic sign recognition models. Traffic signs are
particularly promising for adversarial attack research due to the ease of
performing real-world attacks using printed signs or stickers. In this work, we
survey existing works performing either digital or real-world attacks on
traffic sign detection and classification models. We provide an overview of the
latest advancements and highlight the existing research areas that require
further investigation.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 06:58:22 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Pavlitska",
"Svetlana",
""
],
[
"Lambing",
"Nico",
""
],
[
"Zöllner",
"J. Marius",
""
]
] |
new_dataset
| 0.999488 |
2307.08287
|
Fran\c{c}ois Dor\'e
|
Fran\c{c}ois Dor\'e, Enrico Formenti
|
Drawing non-planar graphs with rotation systems on the Klein bottle
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
This paper provides an algorithm, linear-time in the number of edges, that,
given a simple 3-connected non-planar graph G with a Klein bottle rotation
system, outputs a straight-line drawing of G with no crossings on the flat
Klein bottle.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 07:17:54 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Doré",
"François",
""
],
[
"Formenti",
"Enrico",
""
]
] |
new_dataset
| 0.996166 |
2307.08294
|
Stepan Perminov
|
Stepan Perminov, Ivan Kalinov and Dzmitry Tsetserukou
|
GHACPP: Genetic-based Human-Aware Coverage Path Planning Algorithm for
Autonomous Disinfection Robot
|
Accepted to International Conference on Systems, Man, and Cybernetics
(SMC). 2023. IEEE copyright
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous mobile robots with mounted Ultraviolet-C (UV-C) lamps have been
developed recently, yet they cannot work in the same space as humans without
irradiating them with UV-C. This paper proposes a novel modular and scalable
Human-Aware Genetic-based Coverage Path Planning algorithm (GHACPP), which
aims to solve the problem of disinfecting unknown environments by UV-C
irradiation while preventing human eyes and skin from being harmed.
The proposed genetic-based algorithm alternates between the stages of
exploring a new area, generating parts of the resulting disinfection
trajectory, called mini-trajectories, and updating the current state around the
robot. The system's performance in terms of effectiveness and human safety is
validated and compared with one of the latest state-of-the-art online coverage
path
planning algorithms called SimExCoverage-STC. The experimental results
confirmed both the high level of safety for humans and the efficiency of the
developed algorithm in terms of decrease of path length (by 37.1%), number
(39.5%) and size (35.2%) of turns, and time (7.6%) to complete the disinfection
task, with a small loss in the percentage of area covered (0.6%), in comparison
with the state-of-the-art approach.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 07:38:46 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Perminov",
"Stepan",
""
],
[
"Kalinov",
"Ivan",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.973572 |
2307.08301
|
Lukas Brechtel
|
Lukas Brechtel, Christof A. O. Rauber, Christoph Fischer
|
Environment Knowledge Supported RAN Control for 6G Campus Networks
|
8 pages, 4 figures, Conference NGNA 2022
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, the authors present a Radio Access Network (RAN) concept for
future mobile communication systems beyond 5G. The concept is based on
knowledge of the environment. The three conceptual applications RAN
authentication, beam steering, and channel estimation are presented and their
added value with respect to 6G development goals is outlined. The concept is
explained by means of an intralogistic use case of a fully automated warehouse.
Based on this, the concrete steps for implementation in a laboratory setup are
described and further research steps are shown.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 07:53:35 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Brechtel",
"Lukas",
""
],
[
"Rauber",
"Christof A. O.",
""
],
[
"Fischer",
"Christoph",
""
]
] |
new_dataset
| 0.951953 |
2307.08315
|
Hongxiao Li
|
Hongxiao Li, Wanling Gao, Lei Wang, Jianfeng Zhan
|
IterLara: A Turing Complete Algebra for Big Data, AI, Scientific
Computing, and Database
| null | null | null | null |
cs.DB cs.CL cs.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
\textsc{Lara} is a key-value algebra that aims at unifying linear and
relational algebra with three types of operation abstraction. The study of
\textsc{Lara}'s expressive ability reports that it can represent relational
algebra and most linear algebra operations. However, several essential
computations, such as matrix inversion and determinant, cannot be expressed in
\textsc{Lara}. \textsc{Lara} cannot represent global and iterative computation,
either. This article proposes \textsc{IterLara}, extending \textsc{Lara} with
iterative operators, to provide an algebraic model that unifies operations in
general-purpose computing, like big data, AI, scientific computing, and
database. We study the expressive ability of \textsc{Lara} and
\textsc{IterLara} and prove that \textsc{IterLara} with aggregation functions
can represent matrix inversion and determinant. Moreover, we demonstrate that
\textsc{IterLara} with no limitation of function utility is Turing complete. We
also propose the Operation Count (OP) as a metric of computation amount for
\textsc{IterLara} and ensure that the OP metric is in accordance with the
existing computation metrics.
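A toy illustration in Python (not IterLara syntax) of why an iteration
operator extends linear-algebra expressiveness: with a generic fixpoint
combinator, matrix inversion becomes expressible via the Newton-Schulz
iteration X <- X(2I - AX), and the combinator naturally counts operations in
the spirit of the OP metric. The tolerance and initializer are illustrative
choices.

import numpy as np

def iterate(step, x0, tol=1e-12, max_ops=10_000):
    # Generic 'Iter' combinator: apply `step` until a fixpoint, counting ops.
    x, ops = x0, 0
    while ops < max_ops:
        x_new = step(x)
        ops += 1
        if np.linalg.norm(x_new - x) < tol:
            return x_new, ops
        x = x_new
    return x, ops

A = np.array([[4.0, 1.0], [2.0, 3.0]])
I = np.eye(2)
# Standard Newton-Schulz initializer guaranteeing convergence.
x0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
inv, ops = iterate(lambda X: X @ (2 * I - A @ X), x0)
print(np.allclose(inv @ A, I), "iterations:", ops)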
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 08:23:09 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Li",
"Hongxiao",
""
],
[
"Gao",
"Wanling",
""
],
[
"Wang",
"Lei",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.996064 |
2307.08321
|
Cong Jiang
|
Cong Jiang and Xiaolei Yang
|
Legal Syllogism Prompting: Teaching Large Language Models for Legal
Judgment Prediction
|
Nineteenth International Conference on Artificial Intelligence and
Law (ICAIL 2023)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legal syllogism is a form of deductive reasoning commonly used by legal
professionals to analyze cases. In this paper, we propose legal syllogism
prompting (LoT), a simple prompting method to teach large language models
(LLMs) for legal judgment prediction. LoT teaches only that in the legal
syllogism the major premise is law, the minor premise is the fact, and the
conclusion is judgment. Then the models can produce a syllogism reasoning of
the case and give the judgment without any learning, fine-tuning, or examples.
On CAIL2018, a Chinese criminal case dataset, we performed zero-shot judgment
prediction experiments with GPT-3 models. Our results show that LLMs with LoT
achieve better performance than the baseline and chain of thought prompting,
the state-of-the-art prompting method on diverse reasoning tasks. LoT enables the
model to concentrate on the key information relevant to the judgment and to
correctly understand the legal meaning of acts, as compared to other methods.
Our method enables LLMs to predict judgment along with law articles and
justification, which significantly enhances the explainability of models.
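A hypothetical prompt template in the spirit of LoT; the exact wording used in
the paper is not reproduced here, and the sample fact is invented.

def lot_prompt(fact_description):
    # Instruct the model to reason as a legal syllogism: law -> facts -> judgment.
    return (
        "Answer using a legal syllogism.\n"
        "Major premise: the applicable law article.\n"
        "Minor premise: the facts of the case.\n"
        "Conclusion: the judgment.\n\n"
        f"Facts: {fact_description}\n"
        "Major premise:"
    )

print(lot_prompt("The defendant took goods worth 8,000 yuan from a store "
                 "without paying."))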
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 08:38:46 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Jiang",
"Cong",
""
],
[
"Yang",
"Xiaolei",
""
]
] |
new_dataset
| 0.998533 |
2307.08359
|
Frederik Plahl
|
Andreas Zachariae and Julia Widera and Frederik Plahl and Bj\"orn Hein
and Christian Wurll
|
Human Emergency Detection during Autonomous Hospital Transports
|
Preprint of the corresponding IAS18-2023 conference publication
(Proceedings of the 18th International Conference IAS-18)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human transports in hospitals are labor-intensive and primarily performed in
beds to save time. This transfer method does not promote the mobility or
autonomy of the patient. To relieve the caregivers from this time-consuming
task, a mobile robot is developed to autonomously transport humans around the
hospital. It provides different transfer modes including walking and sitting in
a wheelchair. The problem that this paper focuses on is to detect emergencies
and ensure the well-being of the patient during the transport. For this
purpose, the patient is tracked and monitored with a camera system. OpenPose is
used for Human Pose Estimation and a trained classifier for emergency
detection. We collected and published a dataset of 18,000 images in lab and
hospital environments. It differs from related work because we have a moving
robot with different transfer modes in a highly dynamic environment with
multiple people in the scene using only RGB-D data. To improve the critical
recall metric, we apply threshold moving and a time delay. We compare different
models with an AutoML approach. This paper shows that emergencies while walking
are best detected by an SVM with a recall of 95.8% on single frames. In the case
of sitting transport, the best model achieves a recall of 62.2%. The
contribution is to establish a baseline on this new dataset and to provide a
proof of concept for the human emergency detection in this use case.
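A minimal sketch of the threshold-moving idea used to improve the critical
recall metric: lowering the decision threshold of a probabilistic classifier
trades precision for recall. The per-frame probabilities and labels below are
made up.

import numpy as np

probs = np.array([0.9, 0.6, 0.45, 0.3, 0.55, 0.2])  # P(emergency) per frame
labels = np.array([1, 1, 1, 0, 1, 0])               # ground truth

def recall_at(threshold):
    pred = probs >= threshold
    tp = np.sum(pred & (labels == 1))               # true positives
    return tp / np.sum(labels == 1)

for t in (0.5, 0.4):
    # Moving the threshold from 0.5 to 0.4 recovers the missed emergency.
    print(f"threshold {t}: recall {recall_at(t):.2f}")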
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 09:54:52 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zachariae",
"Andreas",
""
],
[
"Widera",
"Julia",
""
],
[
"Plahl",
"Frederik",
""
],
[
"Hein",
"Björn",
""
],
[
"Wurll",
"Christian",
""
]
] |
new_dataset
| 0.988871 |
2307.08363
|
Miguel Altamirano Cabrera
|
Ali Alabbas, Miguel Altamirano Cabrera, Oussama Alyounes, and Dzmitry
Tsetserukou
|
ArUcoGlide: a Novel Wearable Robot for Position Tracking and Haptic
Feedback to Increase Safety During Human-Robot Interaction
|
8 pages, Accepted paper in IEEE ETFA 2023
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The current capabilities of robotic systems make human collaboration
necessary to accomplish complex tasks effectively. In this work, we are
introducing a framework to ensure safety in a human-robot collaborative
environment. The system is composed of a wearable 2-DOF robot, a low-cost and
easy-to-install tracking system, and a collision avoidance algorithm based on
the Artificial Potential Field (APF). The wearable robot is designed to hold a
fiducial marker and maintain its visibility to the tracking system, which, in
turn, localizes the user's hand with good accuracy and low latency and provides
haptic feedback to the user. The system is designed to enhance the performance
of collaborative tasks while ensuring user safety. Three experiments were
carried out to evaluate the performance of the proposed system. The first one
evaluated the accuracy of the tracking system. The second experiment analyzed
human-robot behavior during an imminent collision. The third experiment
evaluated the system in a collaborative activity in a shared working
environment. The results show that the implementation of the introduced system
reduces the operation time by 16% and increases the average distance between
the user's hand and the robot by 5 cm.
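A minimal sketch of the repulsive term of an Artificial Potential Field, the
mechanism named in the abstract; the gain and influence radius are
illustrative assumptions, not the paper's tuned values.

import numpy as np

def apf_repulsion(robot_pos, hand_pos, influence=0.4, gain=0.05):
    # Classic repulsive APF term: zero beyond `influence`, grows near the hand.
    diff = robot_pos - hand_pos
    d = np.linalg.norm(diff)
    if d >= influence or d == 0.0:
        return np.zeros_like(diff)
    # Repulsive force from the potential 0.5*gain*(1/d - 1/influence)^2.
    return gain * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)

robot = np.array([0.30, 0.00, 0.50])
hand = np.array([0.10, 0.00, 0.50])
print(apf_repulsion(robot, hand))   # pushes the robot away from the hand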
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 10:01:40 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Alabbas",
"Ali",
""
],
[
"Cabrera",
"Miguel Altamirano",
""
],
[
"Alyounes",
"Oussama",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.997721 |
2307.08412
|
Arnab Mukherjee Mr.
|
Arnab Mukherjee, Souvik Majumdar, Anup Kumar Kolya, Saborni Nandi
|
A Privacy-Preserving Blockchain-based E-voting System
| null | null | null | null |
cs.CR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Within a modern democratic nation, elections play a significant role in the
nation's functioning. However, with the existing infrastructure for conducting
elections using Electronic Voting Systems (EVMs), many loopholes exist, which
illegitimate entities might leverage to cast false votes or even tamper with
the EVMs after the voting session is complete. The need of the hour is to
introduce a robust, auditable, transparent, and tamper-proof e-voting system,
enabling a more reliable and fair election process. To address such concerns,
we propose a novel solution for blockchain-based e-voting, focusing on the
security and privacy aspects of the e-voting process. We consider the security
risks and loopholes and aim to preserve the anonymity of the voters while
ensuring that illegitimate votes are properly handled. Additionally, we develop
a prototype as a proof of concept using the Ethereum blockchain platform.
Finally, we perform experiments to demonstrate the performance of the system.
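A toy hash-chained ballot log illustrating the tamper-evidence property that a
blockchain provides; this is a didactic sketch, not the paper's Ethereum
implementation, and the field names are invented.

import hashlib, json

def append_ballot(chain, ballot):
    # Each record commits to the previous record's hash, forming a chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"ballot": ballot, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps({"ballot": ballot, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    # Recompute every hash; any edit to a ballot or link breaks verification.
    prev = "0" * 64
    for rec in chain:
        expect = hashlib.sha256(json.dumps(
            {"ballot": rec["ballot"], "prev": rec["prev"]}, sort_keys=True
        ).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expect:
            return False
        prev = rec["hash"]
    return True

chain = []
append_ballot(chain, {"voter_commitment": "c1", "choice_ciphertext": "e1"})
append_ballot(chain, {"voter_commitment": "c2", "choice_ciphertext": "e2"})
print(verify(chain))                                   # True
chain[0]["ballot"]["choice_ciphertext"] = "tampered"
print(verify(chain))                                   # False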
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 11:48:39 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Mukherjee",
"Arnab",
""
],
[
"Majumdar",
"Souvik",
""
],
[
"Kolya",
"Anup Kumar",
""
],
[
"Nandi",
"Saborni",
""
]
] |
new_dataset
| 0.995167 |
2307.08490
|
Khwaja Zubair Sediqi Mr.
|
Khwaja Zubair Sediqi, Anja Feldmann, Oliver Gasser
|
Live Long and Prosper: Analyzing Long-Lived MOAS Prefixes in BGP
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
BGP exchanges reachability information in the form of prefixes, which are
usually originated by a single Autonomous System (AS). If multiple ASes
originate the same prefix, this is referred to as a Multiple Origin ASes (MOAS)
prefix. One reason for MOAS prefixes is BGP prefix hijacking, which is mostly
short-lived and has been studied extensively in past years. In contrast to
short-lived MOAS, long-lived MOAS have remained largely understudied. In this
paper, we focus on long-lived MOAS prefixes and perform an in-depth study over
six years. We identify around 24k long-lived MOAS prefixes in IPv4 and 1.4k in
IPv6 being announced in January 2023. By analyzing the RPKI status we find that
more than 40% of MOAS prefixes have all origins registered correctly, with only
a minority of MOAS having invalid origins. Moreover, we find that the most
prominent CIDR size of MOAS prefixes is /24 for IPv4 and /48 for IPv6,
suggesting their use for fine-grained traffic steering. We attribute a
considerable number of MOAS prefixes to mergers and acquisitions of companies.
Additionally, more than 90% of MOAS prefixes are originated by two origin ASes,
with the majority of detected origin AS relations being customer-provider.
Finally, we identify that the majority of MOAS users are IT companies, and just
0.9% of IPv4 MOAS and 6.3% of IPv6 MOAS prefixes are used for anycast.
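The MOAS notion itself is easy to operationalize: a sketch that groups
announced (prefix, origin AS) pairs and reports prefixes with more than one
origin. The sample routes use documentation prefixes and private ASNs, not
measured data.

from collections import defaultdict

def find_moas(routes):
    # Return {prefix: origin set} for prefixes announced by >1 origin AS.
    origins = defaultdict(set)
    for prefix, origin_as in routes:
        origins[prefix].add(origin_as)
    return {p: ases for p, ases in origins.items() if len(ases) > 1}

routes = [
    ("192.0.2.0/24", 64500),
    ("192.0.2.0/24", 64501),     # second origin -> MOAS
    ("198.51.100.0/24", 64502),
]
print(find_moas(routes))         # {'192.0.2.0/24': {64500, 64501}}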
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 13:53:39 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Sediqi",
"Khwaja Zubair",
""
],
[
"Feldmann",
"Anja",
""
],
[
"Gasser",
"Oliver",
""
]
] |
new_dataset
| 0.994222 |
2307.08532
|
Luigi Quarantiello
|
Luigi Quarantiello, Simone Marzeddu, Antonio Guzzi, Vincenzo Lomonaco
|
LuckyMera: a Modular AI Framework for Building Hybrid NetHack Agents
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last few decades we have witnessed a significant development in
Artificial Intelligence (AI) thanks to the availability of a variety of
testbeds, mostly based on simulated environments and video games. Among those,
roguelike games offer a very good trade-off in terms of complexity of the
environment and computational costs, which makes them perfectly suited to test
AI agents' generalization capabilities. In this work, we present LuckyMera, a
flexible, modular, extensible and configurable AI framework built around
NetHack, a popular terminal-based, single-player roguelike video game. This
library is aimed at simplifying and speeding up the development of AI agents
capable of successfully playing the game and offering a high-level interface
for designing game strategies. LuckyMera comes with a set of off-the-shelf
symbolic and neural modules (called "skills"): these modules can be either
hard-coded behaviors, or neural Reinforcement Learning approaches, with the
possibility of creating compositional hybrid solutions. Additionally, LuckyMera
comes with a set of utility features to save its experiences in the form of
trajectories for further analysis and to use them as datasets to train neural
modules, with a direct interface to the NetHack Learning Environment and
MiniHack. Through an empirical evaluation we validate our skills implementation
and propose a strong baseline agent that can reach state-of-the-art
performances in the complete NetHack game. LuckyMera is open-source and
available at https://github.com/Pervasive-AI-Lab/LuckyMera.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 14:46:59 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Quarantiello",
"Luigi",
""
],
[
"Marzeddu",
"Simone",
""
],
[
"Guzzi",
"Antonio",
""
],
[
"Lomonaco",
"Vincenzo",
""
]
] |
new_dataset
| 0.973468 |
2307.08549
|
Christoph Sendner
|
Christoph Sendner, Ruisi Zhang, Alexander Hefter, Alexandra
Dmitrienko, Farinaz Koushanfar
|
G-Scan: Graph Neural Networks for Line-Level Vulnerability
Identification in Smart Contracts
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the immutable and decentralized nature of the Ethereum (ETH) platform,
smart contracts are prone to security risks that can result in financial loss.
While existing machine learning-based vulnerability detection algorithms
achieve high accuracy at the contract level, they require developers to
manually inspect source code to locate bugs. To this end, we present G-Scan,
the first end-to-end fine-grained line-level vulnerability detection system
evaluated on the first-of-its-kind real world dataset. G-Scan first converts
smart contracts to code graphs in a dependency and hierarchy preserving manner.
Next, we train a graph neural network to identify vulnerable nodes and assess
security risks. Finally, the code graphs with node vulnerability predictions
are mapped back to the smart contracts for line-level localization. We train
and evaluate G-Scan on a collected real world smart contracts dataset with
line-level annotations on reentrancy vulnerability, one of the most common and
severe types of smart contract vulnerabilities. With the well-designed graph
representation and high-quality dataset, G-Scan achieves 93.02% F1-score in
contract-level vulnerability detection and 93.69% F1-score in line-level
vulnerability localization. Additionally, the lightweight graph neural network
enables G-Scan to localize vulnerabilities in 6.1k lines of code smart contract
within 1.2 seconds.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 15:11:03 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Sendner",
"Christoph",
""
],
[
"Zhang",
"Ruisi",
""
],
[
"Hefter",
"Alexander",
""
],
[
"Dmitrienko",
"Alexandra",
""
],
[
"Koushanfar",
"Farinaz",
""
]
] |
new_dataset
| 0.999721 |
2307.08550
|
Christoph Sendner
|
Christoph Sendner, Jasper Stang, Alexandra Dmitrienko, Raveen
Wijewickrama, Murtuza Jadliwala
|
TorMult: Introducing a Novel Tor Bandwidth Inflation Attack
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Tor network is the most prominent system for providing anonymous
communication to web users, with a daily user base of 2 million users. However,
since its inception, it has been constantly targeted by various traffic
fingerprinting and correlation attacks aiming at deanonymizing its users. A
critical requirement for these attacks is to attract as much user traffic to
adversarial relays as possible, which is typically accomplished by means of
bandwidth inflation attacks. This paper proposes a new inflation attack vector
in Tor, referred to as TorMult, which enables inflation of measured bandwidth.
The underlying attack technique exploits resource sharing among Tor relay nodes
and employs a cluster of attacker-controlled relays with coordinated resource
allocation within the cluster to deceive bandwidth measurers into believing
that each relay node in the cluster possesses ample resources. We propose two
attack variants, C-TorMult and D-TorMult, and test both versions in a private
Tor test network. Our evaluation demonstrates that an attacker can inflate the
measured bandwidth by a factor close to n using C-TorMult and nearly n*N/2
using D-TorMult, where n is the size of the cluster hosted on one server and N
is the number of servers. Furthermore, our theoretical analysis reveals that
gaining control over half of the Tor network's traffic can be achieved by
employing just 10 dedicated servers with a cluster size of 109 relays running
the TorMult attack, each with a bandwidth of 100MB/s. The problem is further
exacerbated by the fact that Tor not only allows resource sharing but,
according to recent reports, even promotes it.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 15:11:31 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Sendner",
"Christoph",
""
],
[
"Stang",
"Jasper",
""
],
[
"Dmitrienko",
"Alexandra",
""
],
[
"Wijewickrama",
"Raveen",
""
],
[
"Jadliwala",
"Murtuza",
""
]
] |
new_dataset
| 0.972895 |
2307.08570
|
Julius Rauscher
|
Julius Rauscher, Raphael Buchm\"uller, Daniel A. Keim, and Matthias
Miller
|
SkiVis: Visual Exploration and Route Planning in Ski Resorts
|
11 pages, 10 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Optimal ski route selection is a challenge based on a multitude of factors,
such as the steepness, compass direction, or crowdedness. The personal
preferences of every skier towards these factors require individual
adaptations, which aggravate this task. Current approaches within this domain
do not combine automated routing capabilities with user preferences, missing
out on the possibility of integrating domain knowledge in the analysis process.
We introduce SkiVis, a visual analytics application to interactively explore
ski slopes and provide routing recommendations based on user preferences. In
collaboration with ski guides and enthusiasts, we elicited requirements and
guidelines for such an application and propose different workflows depending on
the skiers' familiarity with the resort. In a case study on the resort of Ski
Arlberg, we illustrate how to leverage volunteered geographic information to
enable a numerical comparison between slopes. We evaluated our approach through
a pair-analytics study and demonstrate how it supports skiers in discovering
relevant and preference-based ski routes. Besides the tasks investigated in the
study, we derive additional use cases from the interviews that showcase the
further potential of SkiVis, and contribute directions for further research
opportunities.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 15:36:51 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Rauscher",
"Julius",
""
],
[
"Buchmüller",
"Raphael",
""
],
[
"Keim",
"Daniel A.",
""
],
[
"Miller",
"Matthias",
""
]
] |
new_dataset
| 0.993651 |
2307.08575
|
Romaric Neveu
|
Nicolas Aragon, Lo\"ic Bidoux, Jes\'us-Javier Chi-Dom\'inguez,
Thibauld Feneuil, Philippe Gaborit, Romaric Neveu, Matthieu Rivain
|
MIRA: a Digital Signature Scheme based on the MinRank problem and the
MPC-in-the-Head paradigm
| null | null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We exploit the idea of [Fen22], which proposes to build an efficient signature
scheme based on a zero-knowledge proof of knowledge of a solution of a MinRank
instance. The scheme uses the MPCitH paradigm, which is an efficient way to
build ZK proofs. We combine this idea with another one, the hypercube
technique introduced in [AMGH+22], which leads to more efficient MPCitH-based
schemes. This new approach is more efficient than classical MPCitH, as it
allows reducing the number of party computations. This gives us a first scheme
called MIRA-Additive. We then present another scheme, based on low-threshold
secret sharings, called MIRA-Threshold, which is faster, at the price of
larger signatures. The construction of MPCitH using threshold secret sharing
is detailed in [FR22]. These two constructions allow us to be faster than
classical MPCitH, with a signature size of around 5.6 kB for MIRA-Additive and
8.3 kB for MIRA-Threshold. We detail here the constructions and optimizations
of the schemes, as well as their security proofs.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 15:44:12 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Aragon",
"Nicolas",
""
],
[
"Bidoux",
"Loïc",
""
],
[
"Chi-Domínguez",
"Jesús-Javier",
""
],
[
"Feneuil",
"Thibauld",
""
],
[
"Gaborit",
"Philippe",
""
],
[
"Neveu",
"Romaric",
""
],
[
"Rivain",
"Matthieu",
""
]
] |
new_dataset
| 0.991889 |
2307.08581
|
Yang Zhao
|
Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, Bingyi
Kang
|
BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
LLMs have demonstrated remarkable abilities at interacting with humans
through language, especially with the usage of instruction-following data.
Recent advancements in LLMs, such as MiniGPT-4, LLaVA, and X-LLM, further
enlarge their abilities by incorporating multi-modal inputs, including image,
video, and speech. Despite their effectiveness at generating precise and
detailed language understanding of the given modality signal, these LLMs give
up the ability to ground specific parts of inputs, thus only constructing a
coarse-grained mapping. However, explicit and informative correspondence
between text and other modalities will not only improve the user experience but
also help to expand the application scenario of multi-modal LLMs. Therefore, we
propose BuboGPT, a multi-modal LLM with visual grounding that can perform
cross-modal interaction between vision, audio and language, providing
fine-grained understanding of visual objects and other given modalities. As a
result, BuboGPT is able to point out the specific location of an object in the
image, when it is generating response or description for that object. Our
contributions are two-fold: 1) An off-the-shelf visual grounding module based
on SAM that extracts entities in a sentence and finds corresponding masks in the
image. 2) A two-stage training scheme and instruction dataset to endow joint
text-image-audio understanding. Our experiments show that BuboGPT achieves
impressive multi-modality understanding and visual grounding abilities during
the interaction with humans. It performs consistently well when provided with
arbitrary modality combinations (either aligned or unaligned). Our code, model
and dataset are available at https://bubo-gpt.github.io .
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 15:51:47 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zhao",
"Yang",
""
],
[
"Lin",
"Zhijie",
""
],
[
"Zhou",
"Daquan",
""
],
[
"Huang",
"Zilong",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Kang",
"Bingyi",
""
]
] |
new_dataset
| 0.987931 |
2307.08636
|
Zhaiyu Chen
|
Zhaiyu Chen, Yilei Shi, Liangliang Nan, Zhitong Xiong, Xiao Xiang Zhu
|
PolyGNN: Polyhedron-based Graph Neural Network for 3D Building
Reconstruction from Point Clouds
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present PolyGNN, a polyhedron-based graph neural network for 3D building
reconstruction from point clouds. PolyGNN learns to assemble primitives
obtained by polyhedral decomposition via graph node classification, achieving a
watertight, compact, and weakly semantic reconstruction. To effectively
represent arbitrary-shaped polyhedra in the neural network, we propose three
different sampling strategies to select representative points as
polyhedron-wise queries, enabling efficient occupancy inference. Furthermore,
we incorporate the inter-polyhedron adjacency to enhance the classification of
the graph nodes. We also observe that existing city-building models are
abstractions of the underlying instances. To address this abstraction gap and
provide a fair evaluation of the proposed method, we develop our method on a
large-scale synthetic dataset covering 500k+ buildings with well-defined ground
truths of polyhedral class labels. We further conduct a transferability
analysis across cities and on real-world point clouds. Both qualitative and
quantitative results demonstrate the effectiveness of our method, particularly
its efficiency for large-scale reconstructions. The source code and data of our
work are available at https://github.com/chenzhaiyu/polygnn.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 16:52:25 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Chen",
"Zhaiyu",
""
],
[
"Shi",
"Yilei",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Xiong",
"Zhitong",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.997555 |
2307.08680
|
Sabyasachi Basu
|
Sabyasachi Basu, Manuj Mukherjee
|
Optimal storage codes on graphs with fixed locality
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Storage codes on graphs are an instance of \emph{codes with locality}, which
are used in distributed storage schemes to provide local repairability.
Specifically, the nodes of the graph correspond to storage servers, and the
neighbourhood of each server constitute the set of servers it can query to
repair its stored data in the event of a failure. A storage code on a graph
with $n$ vertices is a set of $n$-length codewords over $\mathbb{F}_q$ where the
$i$th codeword symbol is stored in server $i$, and it can be recovered by
querying the neighbours of server $i$ according to the underlying graph.
In this work, we look at binary storage codes whose repair function is the
parity check, and characterise the tradeoff between the locality of the code
and its rate. Specifically, we show that the maximum rate of a code on $n$
vertices with locality $r$ is exactly $1-\frac{1}{n}\lceil n/(r+1)\rceil$ (the
lower and upper bounds coincide). The lower bound on the rate is derived by
constructing an explicit family of graphs with locality $r$, while the upper
bound is obtained via a lower bound on the binary-field rank of a class of
symmetric binary matrices. Our upper bound on the maximal rate of a storage
code matches the upper bound on the larger class of codes with locality
derived by Tamo and Barg. As a corollary to our result, we obtain the
following asymptotic separation result: given a sequence $r(n), n\geq 1$,
there exists a sequence of graphs on $n$ vertices with storage codes of rate
$1-o(1)$ if and only if $r(n)=\omega(1)$.
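A numeric sanity check of the rate expression and the corollary (a sketch, not
the paper's code): the rate tends to 1 only when the locality grows with n.

from math import ceil

def max_rate(n, r):
    # Rate 1 - ceil(n/(r+1))/n of an optimal storage code with locality r.
    return 1 - ceil(n / (r + 1)) / n

print(max_rate(12, 2))   # locality 2 on 12 vertices -> 0.666... (i.e., 2/3)
for n in (10**2, 10**4, 10**6):
    # Fixed locality keeps the rate bounded away from 1; growing locality does not.
    print(n, round(max_rate(n, 3), 4), round(max_rate(n, int(n**0.5)), 4))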
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 17:43:38 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Basu",
"Sabyasachi",
""
],
[
"Mukherjee",
"Manuj",
""
]
] |
new_dataset
| 0.999882 |
2307.08703
|
Ce Zhou
|
Ce Zhou (Michigan State University)
|
SSVEP-Based BCI Wheelchair Control System
|
108 pages
| null | null | null |
cs.HC cs.AI cs.CV cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A brain-computer interface (BCI) is a system that allows a person to
communicate or control the surroundings without depending on the brain's normal
output pathways of peripheral nerves and muscles. A lot of successful
applications have arisen utilizing the advantages of BCI to assist disabled
people with so-called assistive technology. Considering using BCI has fewer
limitations and huge potential, this project has been proposed to control the
movement of an electronic wheelchair via brain signals. The goal of this
project is to help disabled people, especially paralyzed people suffering from
motor disabilities, improve their life qualities. In order to realize the
project stated above, Steady-State Visual Evoked Potential (SSVEP) is involved.
It can be easily elicited in the visual cortical with the same frequency as the
one is being focused by the subject. There are two important parts in this
project. One is to process the EEG signals and another one is to make a visual
stimulator using hardware. The EEG signals are processed in Matlab using the
algorithm of Butterworth Infinite Impulse Response (IIR) bandpass filter (for
preprocessing) and Fast Fourier Transform (FFT) (for feature extraction).
Besides, a harmonics-based classification method is proposed and applied in the
classification part. Moreover, the design of the visual stimulator combines
LEDs as flickers and LCDs as information displayers on one panel.
Microcontrollers are employed to control the SSVEP visual stimuli panel. This
project is evaluated by subjects with different races and ages. Experimental
results show the system is easy to be operated and it can achieve approximately
a minimum 1-second time delay. So it demonstrates that this SSVEP-based
BCI-controlled wheelchair has a huge potential to be applied to disabled people
in the future.
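A sketch of a harmonics-based SSVEP classifier of the kind described:
band-pass the EEG, take an FFT, and select the stimulus frequency whose
fundamental plus harmonics carry the most spectral power. The sampling rate,
filter band, stimulus frequencies, and harmonic count are assumptions, and the
test signal is synthetic.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0                              # sampling rate (Hz), assumed
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]    # flicker frequencies of the panel

def classify_ssvep(eeg, n_harmonics=3):
    # Band-pass filter (preprocessing), then score each stimulus frequency.
    b, a = butter(4, [5 / (FS / 2), 50 / (FS / 2)], btype="band")
    x = filtfilt(b, a, eeg)
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / FS)
    def band_power(f):
        idx = np.argmin(np.abs(freqs - f))
        return spec[max(idx - 1, 0): idx + 2].sum()  # small tolerance window
    scores = [sum(band_power(f * k) for k in range(1, n_harmonics + 1))
              for f in STIM_FREQS]
    return STIM_FREQS[int(np.argmax(scores))]

t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 24 * t) \
      + 0.3 * np.random.default_rng(1).standard_normal(t.size)
print(classify_ssvep(eeg))              # -> 12.0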
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 18:37:28 GMT"
}
] | 2023-07-18T00:00:00 |
[
[
"Zhou",
"Ce",
"",
"Michigan State University"
]
] |
new_dataset
| 0.999214 |
1812.04741
|
Marija Slavkovik
|
Beishui Liao, Pere Pardo, Marija Slavkovik, Leendert van der Torre
|
The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms
and Argumentation
|
Accepted for publication with JAIR
|
Journal of Artificial Intelligence Research 77: 737 - 792 (2023)
|
10.1613/jair.1.14368
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An autonomous system is constructed by a manufacturer, operates in a society
subject to norms and laws, and interacts with end users. All of these actors
are stakeholders affected by the behavior of the autonomous system. We address
the challenge of how the ethical views of such stakeholders can be integrated
in the behavior of an autonomous system. We propose an ethical recommendation
component called Jiminy which uses techniques from normative systems and formal
argumentation to reach moral agreements among stakeholders. A Jiminy represents
the ethical views of each stakeholder by using normative systems, and has three
ways of resolving moral dilemmas that involve the opinions of the stakeholders.
First, the Jiminy considers how the arguments of the stakeholders relate to one
another, which may already resolve the dilemma. Secondly, the Jiminy combines
the normative systems of the stakeholders such that the combined expertise of
the stakeholders may resolve the dilemma. Thirdly, and only if these two other
methods have failed, the Jiminy uses context-sensitive rules to decide which of
the stakeholders take preference over the others. At the abstract level, these
three methods are characterized by adding arguments, adding attacks between
arguments, and revising attacks between arguments. We show how a Jiminy can be
used not only for ethical reasoning and collaborative decision-making, but also
to provide explanations about ethical behavior.
|
[
{
"version": "v1",
"created": "Tue, 11 Dec 2018 23:16:16 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2019 15:23:15 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jan 2022 13:16:01 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Apr 2023 10:17:14 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Liao",
"Beishui",
""
],
[
"Pardo",
"Pere",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"van der Torre",
"Leendert",
""
]
] |
new_dataset
| 0.9996 |
1905.04235
|
Jie Wang
|
J. Wang, A. Ramirez-Serrano, K. A. Davies
|
Autonomous Locomotion Mode Transition Simulation of a Track-legged
Quadruped Robot Step Negotiation
|
The power consumption shown in Fig. 8 might be an error that needs further
inspection; thus, I kindly request to withdraw this paper. There are also many
redundant passages across the paper
| null | null | null |
cs.RO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal locomotion (e.g., terrestrial, aerial, and aquatic) is gaining
increasing interest in robotics research, as it improves a robot's
environmental adaptability, locomotion versatility, and operational
flexibility. Among terrestrial multi-locomotion robots, the advantage of
hybrid robots stems from their multiple (two or more) locomotion modes, from
which they can select depending on the terrain conditions encountered.
However, there are many challenges in improving the autonomy of the
transitions between these locomotion modes. This work proposes a method to
realize an autonomous locomotion mode transition of a track-legged quadruped
robot during step negotiation. The autonomy of the decision-making process is
realized by a proposed criterion comparing the energy performance of the
rolling and walking locomotion modes. Two climbing gaits are proposed to
achieve smooth step-negotiation behaviours for energy evaluation purposes.
Simulations showed that autonomous locomotion mode transitions were realized
for negotiating steps of different heights. The proposed method is generic
enough to be applied to other hybrid robots after some pre-studies of their
locomotion energy performance.
|
[
{
"version": "v1",
"created": "Fri, 10 May 2019 16:05:09 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 15:00:51 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Wang",
"J.",
""
],
[
"Ramirez-Serrano",
"A.",
""
],
[
"Davies",
"K. A.",
""
]
] |
new_dataset
| 0.954331 |
2205.12764
|
Zden\v{e}k Dvo\v{r}\'ak
|
Zden\v{e}k Dvo\v{r}\'ak and Benjamin Moore and Abhiruk Lahiri
|
Square roots of nearly planar graphs
|
6 pages, no figures; v2: Corrected an author name
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove that it is NP-hard to decide whether a graph is the square of a
6-apex graph. This shows that the square root problem is not tractable for
squares of sparse graphs (or even graphs from proper minor-closed classes).
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 13:27:06 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 19:44:27 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Dvořák",
"Zdeněk",
""
],
[
"Moore",
"Benjamin",
""
],
[
"Lahiri",
"Abhiruk",
""
]
] |
new_dataset
| 0.999366 |
2209.08372
|
Surya Prakash Sahu
|
Surya Prakash Sahu, Madhurima Mandal, Shikhar Bharadwaj, Aditya
Kanade, Petros Maniatis, Shirish Shevade
|
CodeQueries: A Dataset of Semantic Queries over Code
| null | null | null | null |
cs.SE cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Developers often have questions about semantic aspects of code they are
working on, e.g., "Is there a class whose parent classes declare a conflicting
attribute?". Answering them requires understanding code semantics such as
attributes and inheritance relation of classes. An answer to such a question
should identify code spans constituting the answer (e.g., the declaration of
the subclass) as well as supporting facts (e.g., the definitions of the
conflicting attributes). The existing work on question-answering over code has
considered yes/no questions or method-level context. We contribute a labeled
dataset, called CodeQueries, of semantic queries over Python code. Compared to
the existing datasets, in CodeQueries, the queries are about code semantics,
the context is file level and the answers are code spans. We curate the dataset
based on queries supported by a widely-used static analysis tool, CodeQL, and
include both positive and negative examples, and queries requiring single-hop
and multi-hop reasoning.
To assess the value of our dataset, we evaluate baseline neural approaches.
We study a large language model (GPT3.5-Turbo) in zero-shot and few-shot
settings on a subset of CodeQueries. We also evaluate a BERT style model
(CuBERT) with fine-tuning. We find that these models achieve limited success on
CodeQueries. CodeQueries is thus a challenging dataset to test the ability of
neural models, to understand code semantics, in the extractive
question-answering setting.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 17:09:30 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 11:01:45 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Sahu",
"Surya Prakash",
""
],
[
"Mandal",
"Madhurima",
""
],
[
"Bharadwaj",
"Shikhar",
""
],
[
"Kanade",
"Aditya",
""
],
[
"Maniatis",
"Petros",
""
],
[
"Shevade",
"Shirish",
""
]
] |
new_dataset
| 0.997084 |
2211.08609
|
Sehwan Choi
|
Sehwan Choi, Jungho Kim, Junyong Yun, Jun Won Choi
|
R-Pred: Two-Stage Motion Prediction Via Tube-Query Attention-Based
Trajectory Refinement
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting the future motion of dynamic agents is of paramount importance to
ensuring safety and assessing risks in motion planning for autonomous robots.
In this study, we propose a two-stage motion prediction method, called R-Pred,
designed to effectively utilize both scene and interaction context using a
cascade of the initial trajectory proposal and trajectory refinement networks.
The initial trajectory proposal network produces M trajectory proposals
corresponding to the M modes of the future trajectory distribution. The
trajectory refinement network enhances each of the M proposals using 1)
tube-query scene attention (TQSA) and 2) proposal-level interaction attention
(PIA) mechanisms. TQSA uses tube-queries to aggregate local scene context
features pooled from proximity around trajectory proposals of interest. PIA
further enhances the trajectory proposals by modeling inter-agent interactions
using a group of trajectory proposals selected by their distances from
neighboring agents. Our experiments conducted on Argoverse and nuScenes
datasets demonstrate that the proposed refinement network provides significant
performance improvements compared to the single-stage baseline and that R-Pred
achieves state-of-the-art performance in some categories of the benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 01:43:39 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 03:59:47 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Dec 2022 07:01:20 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Mar 2023 14:45:03 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Apr 2023 09:19:37 GMT"
},
{
"version": "v6",
"created": "Fri, 14 Jul 2023 07:51:29 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Choi",
"Sehwan",
""
],
[
"Kim",
"Jungho",
""
],
[
"Yun",
"Junyong",
""
],
[
"Choi",
"Jun Won",
""
]
] |
new_dataset
| 0.98014 |
2301.12913
|
L\'eonard Brice
|
L\'eonard Brice, Jean-Fran\c{c}ois Raskin, Marie van den Bogaard
|
Rational verification and checking for Nash and subgame-perfect
equilibria in graph games
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
We study two natural problems about rational behaviors in multiplayer
non-zero-sum sequential infinite duration games played on graphs: checking
problems, that consist in deciding whether a strategy profile, defined by a
Mealy machine, is rational; and rational verification, that consists in
deciding whether all the rational answers to a given strategy satisfy some
specification. We give the complexities of those problems for two major
concepts of rationality: Nash equilibria and subgame-perfect equilibria, and
for five major classes of payoff functions: parity, mean-payoff, quantitative
reachability, energy, and discounted-sum.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 14:14:50 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 10:03:57 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Brice",
"Léonard",
""
],
[
"Raskin",
"Jean-François",
""
],
[
"Bogaard",
"Marie van den",
""
]
] |
new_dataset
| 0.983984 |
2304.10406
|
Tao Tang
|
Tang Tao, Longfei Gao, Guangrun Wang, Yixing Lao, Peng Chen,
Hengshuang Zhao, Dayang Hao, Xiaodan Liang, Mathieu Salzmann, Kaicheng Yu
|
LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields
|
This paper introduces a new task of novel LiDAR view synthesis, and
proposes a differentiable framework called LiDAR-NeRF with a structural
regularization, as well as an object-centric multi-view LiDAR dataset called
NeRF-MVL
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new task, novel view synthesis for LiDAR sensors. While
traditional model-based LiDAR simulators with style-transfer neural networks
can be applied to render novel views, they fall short of producing accurate and
realistic LiDAR patterns because the renderers rely on explicit 3D
reconstruction and exploit game engines, which ignore important attributes of
LiDAR points. We address this challenge by formulating, to the best of our
knowledge, the first differentiable end-to-end LiDAR rendering framework,
LiDAR-NeRF, leveraging a neural radiance field (NeRF) to facilitate the joint
learning of geometry and the attributes of 3D points. However, simply employing
NeRF cannot achieve satisfactory results, as it only focuses on learning
individual pixels while ignoring local information, especially at low texture
areas, resulting in poor geometry. To this end, we have taken steps to address
this issue by introducing a structural regularization method to preserve local
structural details. To evaluate the effectiveness of our approach, we establish
an object-centric multi-view LiDAR dataset, dubbed NeRF-MVL. It contains
observations of objects from 9 categories seen from 360-degree viewpoints
captured with multiple LiDAR sensors. Our extensive experiments on the
scene-level KITTI-360 dataset, and on our object-level NeRF-MVL show that our
LiDAR-NeRF surpasses the model-based algorithms significantly.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 15:44:37 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 12:44:47 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Tao",
"Tang",
""
],
[
"Gao",
"Longfei",
""
],
[
"Wang",
"Guangrun",
""
],
[
"Lao",
"Yixing",
""
],
[
"Chen",
"Peng",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Hao",
"Dayang",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Salzmann",
"Mathieu",
""
],
[
"Yu",
"Kaicheng",
""
]
] |
new_dataset
| 0.998727 |
2304.13525
|
Carlos Perez-del-Pulgar J.
|
Raul Castilla-Arquillo, Anthony Mandow, Carlos J. Perez-del-Pulgar,
Cesar Alvarez-Llamas, Jose M. Vadillo, and Javier Laserna
|
Thermal Vision for Soil Assessment in a Multipurpose Environmental
Chamber under Martian Conditions towards Robot Navigation
|
10 pages, 13 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soil assessment is important for mobile robot planning and navigation on
natural and planetary environments. Terramechanic characteristics can be
inferred from the thermal behaviour of soils under the influence of sunlight
using remote sensors such as Long-Wave Infrared cameras. However, this
behaviour is greatly affected by the low atmospheric pressures of planets such
as Mars, so practical models are needed to relate robot remote sensing data on
Earth to target planetary exploration conditions. This article proposes a
general framework based on multipurpose environmental chambers to generate
representative diurnal cycle dataset pairs that can be useful to relate the
thermal behaviour of a soil on Earth to the corresponding behaviour under
planetary pressure conditions using remote sensing. Furthermore, we present an
application of the proposed framework to generate datasets using the
UMA-Laserlab chamber, which can replicate the atmospheric CO2 composition
of Mars. In particular, we analyze the thermal behaviour of four soil samples
of different granularity by comparing replicated Martian surface conditions and
their Earth's diurnal cycle equivalent. Results indicate a correlation between
granularity and thermal inertia that is consistent with available Mars surface
measurements recorded by rovers. The resulting dataset pairs, consisting of
representative diurnal cycle thermal images with heater, air, and subsurface
temperatures, have been made available for the scientific community.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 13:01:38 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 12:49:37 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Castilla-Arquillo",
"Raul",
""
],
[
"Mandow",
"Anthony",
""
],
[
"Perez-del-Pulgar",
"Carlos J.",
""
],
[
"Alvarez-Llamas",
"Cesar",
""
],
[
"Vadillo",
"Jose M.",
""
],
[
"Laserna",
"Javier",
""
]
] |
new_dataset
| 0.992853 |
2305.14724
|
Tuhin Chakrabarty Mr
|
Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou,
Yue Yang, Marianna Apidianaki, Smaranda Muresan
|
I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create
Visual Metaphors
|
ACL 2023 (Findings)
| null | null | null |
cs.CL cs.AI cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Visual metaphors are powerful rhetorical devices used to persuade or
communicate creative ideas through images. Similar to linguistic metaphors,
they convey meaning implicitly through symbolism and juxtaposition of the
symbols. We propose a new task of generating visual metaphors from linguistic
metaphors. This is a challenging task for diffusion-based text-to-image models,
such as DALL$\cdot$E 2, since it requires the ability to model implicit meaning
and compositionality. We propose to solve the task through the collaboration
between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3
(davinci-002) with Chain-of-Thought prompting generates text that represents a
visual elaboration of the linguistic metaphor containing the implicit meaning
and relevant objects, which is then used as input to the diffusion-based
text-to-image models. Using a human-AI collaboration framework, where humans
interact both with the LLM and the top-performing diffusion model, we create a
high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic
metaphors and their associated visual elaborations. Evaluation by professional
illustrators shows the promise of LLM-Diffusion Model collaboration for this
task. To evaluate the utility of our Human-AI collaboration framework and the
quality of our dataset, we perform both an intrinsic human-based evaluation and
an extrinsic evaluation using visual entailment as a downstream task.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 05:01:10 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 16:09:46 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Chakrabarty",
"Tuhin",
""
],
[
"Saakyan",
"Arkadiy",
""
],
[
"Winn",
"Olivia",
""
],
[
"Panagopoulou",
"Artemis",
""
],
[
"Yang",
"Yue",
""
],
[
"Apidianaki",
"Marianna",
""
],
[
"Muresan",
"Smaranda",
""
]
] |
new_dataset
| 0.974162 |
2305.16265
|
Wuwei Lan
|
Wuwei Lan, Zhiguo Wang, Anuj Chauhan, Henghui Zhu, Alexander Li, Jiang
Guo, Sheng Zhang, Chung-Wei Hang, Joseph Lilien, Yiqun Hu, Lin Pan, Mingwen
Dong, Jun Wang, Jiarong Jiang, Stephen Ash, Vittorio Castelli, Patrick Ng and
Bing Xiang
|
UNITE: A Unified Benchmark for Text-to-SQL Evaluation
|
5 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A practical text-to-SQL system should generalize well on a wide variety of
natural language questions, unseen database schemas, and novel SQL query
structures. To comprehensively evaluate text-to-SQL systems, we introduce a
UNIfied benchmark for Text-to-SQL Evaluation (UNITE). It is composed of
publicly available text-to-SQL datasets, containing natural language questions
from more than 12 domains, SQL queries from more than 3.9K patterns, and 29K
databases. Compared to the widely used Spider benchmark, we introduce
$\sim$120K additional examples and a threefold increase in SQL patterns, such
as comparative and boolean questions. We conduct a systematic study of six
state-of-the-art (SOTA) text-to-SQL parsers on our new benchmark and show that:
1) Codex performs surprisingly well on out-of-domain datasets; 2) specially
designed decoding methods (e.g. constrained beam search) can improve
performance for both in-domain and out-of-domain settings; 3) explicitly
modeling the relationship between questions and schemas further improves the
Seq2Seq models. More importantly, our benchmark presents key challenges towards
compositional generalization and robustness issues -- which these SOTA models
cannot address well. Our code and data processing script are available at
https://github.com/awslabs/unified-text2sql-benchmark
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:19:52 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 17:43:07 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2023 15:56:31 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Lan",
"Wuwei",
""
],
[
"Wang",
"Zhiguo",
""
],
[
"Chauhan",
"Anuj",
""
],
[
"Zhu",
"Henghui",
""
],
[
"Li",
"Alexander",
""
],
[
"Guo",
"Jiang",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Hang",
"Chung-Wei",
""
],
[
"Lilien",
"Joseph",
""
],
[
"Hu",
"Yiqun",
""
],
[
"Pan",
"Lin",
""
],
[
"Dong",
"Mingwen",
""
],
[
"Wang",
"Jun",
""
],
[
"Jiang",
"Jiarong",
""
],
[
"Ash",
"Stephen",
""
],
[
"Castelli",
"Vittorio",
""
],
[
"Ng",
"Patrick",
""
],
[
"Xiang",
"Bing",
""
]
] |
new_dataset
| 0.999429 |
2306.03413
|
Tao Zhang
|
Tao Zhang, Xingye Tian, Yu Wu, Shunping Ji, Xuebo Wang, Yuan Zhang,
Pengfei Wan
|
DVIS: Decoupled Video Instance Segmentation Framework
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video instance segmentation (VIS) is a critical task with diverse
applications, including autonomous driving and video editing. Existing methods
often underperform on complex and long videos in the real world, primarily due to
two factors. Firstly, offline methods are limited by the tightly-coupled
modeling paradigm, which treats all frames equally and disregards the
interdependencies between adjacent frames. Consequently, this leads to the
introduction of excessive noise during long-term temporal alignment. Secondly,
online methods suffer from inadequate utilization of temporal information. To
tackle these challenges, we propose a decoupling strategy for VIS by dividing
it into three independent sub-tasks: segmentation, tracking, and refinement.
The efficacy of the decoupling strategy relies on two crucial elements: 1)
attaining precise long-term alignment outcomes via frame-by-frame association
during tracking, and 2) the effective utilization of temporal information
predicated on the aforementioned accurate alignment outcomes during refinement.
We introduce a novel referring tracker and temporal refiner to construct the
\textbf{D}ecoupled \textbf{VIS} framework (\textbf{DVIS}). DVIS achieves new
SOTA performance in both VIS and VPS, surpassing the current SOTA methods by
7.3 AP and 9.6 VPQ on the OVIS and VIPSeg datasets, which are the most
challenging and realistic benchmarks. Moreover, thanks to the decoupling
strategy, the referring tracker and temporal refiner are super light-weight
(only 1.69\% of the segmenter FLOPs), allowing for efficient training and
inference on a single GPU with 11G memory. The code is available at
\href{https://github.com/zhang-tao-whu/DVIS}{https://github.com/zhang-tao-whu/DVIS}.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 05:24:15 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 08:15:06 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2023 08:46:08 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Zhang",
"Tao",
""
],
[
"Tian",
"Xingye",
""
],
[
"Wu",
"Yu",
""
],
[
"Ji",
"Shunping",
""
],
[
"Wang",
"Xuebo",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Wan",
"Pengfei",
""
]
] |
new_dataset
| 0.985869 |
2306.10756
|
Yu-Qing Jiang
|
Yi-Ching Hung, Yu-Qing Jiang, Fong-Syuan Liou, Yu-Hsuan Tsao, Zi-Cing
Chiang, Min-Te Sun
|
A HRNet-based Rehabilitation Monitoring System
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rehabilitation treatment helps to heal minor sports and occupational
injuries. In a traditional rehabilitation process, a therapist will assign
certain actions to a patient to perform in between hospital visits, and it will
rely on the patient to remember actions correctly and the schedule to perform
them. Unfortunately, many patients forget to perform actions or fail to recall
actions in detail. As a consequence, the rehabilitation treatment is hampered
or, in the worst case, the patient may suffer from additional injury caused by
performing incorrect actions. To resolve these issues, we propose an HRNet-based
rehabilitation monitoring system, which can remind a patient when to perform
the actions and display the actions for the patient to follow via the patient's
smartphone. In addition, it helps the therapist to monitor the progress of the
rehabilitation for the patient. Our system consists of an iOS app and several
components at the server side. The app is in charge of displaying and
collecting action videos. The server computes the similarity score between the
therapist's actions and the patient's in the videos to keep track of the number
of repetitions of each action. These stats will be shown to both the patient
and the therapist. Extensive experiments show that the F1-score of the
similarity calculation is as high as 0.9 and the soft accuracy of the number of
repetitions is higher than 90%.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 08:00:28 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 05:19:53 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 10:36:38 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Jul 2023 08:06:00 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Hung",
"Yi-Ching",
""
],
[
"Jiang",
"Yu-Qing",
""
],
[
"Liou",
"Fong-Syuan",
""
],
[
"Tsao",
"Yu-Hsuan",
""
],
[
"Chiang",
"Zi-Cing",
""
],
[
"Sun",
"MIn-Te",
""
]
] |
new_dataset
| 0.999602 |
2306.12916
|
Ran Zhang
|
Ran Zhang, Jihed Ouni, Steffen Eger
|
Cross-lingual Cross-temporal Summarization: Dataset, Models, Evaluation
|
Version 2; Work in progress
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While summarization has been extensively researched in natural language
processing (NLP), cross-lingual cross-temporal summarization (CLCTS) is a
largely unexplored area that has the potential to improve cross-cultural
accessibility and understanding. This paper comprehensively addresses the CLCTS
task, including dataset creation, modeling, and evaluation. We build the first
CLCTS corpus, leveraging historical fictive texts and Wikipedia summaries in
English and German, and examine the effectiveness of popular transformer
end-to-end models with different intermediate finetuning tasks. Additionally,
we explore the potential of ChatGPT for CLCTS as a summarizer and an evaluator.
Overall, we report evaluations from humans, ChatGPT, and several recent
automatic evaluation metrics, where we find that our intermediate-task finetuned
end-to-end models generate bad- to moderate-quality summaries; ChatGPT as a
summarizer (without any finetuning) provides moderate to good quality outputs
and as an evaluator correlates moderately with human evaluations but is prone
to giving lower scores. ChatGPT also seems very adept at normalizing historical
text and outperforms context-unaware spelling normalization tools such as
Norma. We finally test ChatGPT in a scenario with adversarially attacked and
unseen source documents and find that ChatGPT profits from its prior knowledge
to a certain degree, with better performances for omission and entity swap than
negation against its prior knowledge. This benefit inflates its assessed
quality as ChatGPT performs slightly worse for unseen source documents compared
to seen documents. We additionally introspect our models' performances to find
that longer, older and more complex source texts (all of which are more
characteristic for historical language variants) are harder to summarize for
all models, indicating the difficulty of the CLCTS task.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 14:31:18 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 16:48:55 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Zhang",
"Ran",
""
],
[
"Ouni",
"Jihed",
""
],
[
"Eger",
"Steffen",
""
]
] |
new_dataset
| 0.991461 |
2307.04675
|
Daniele Schiavazzi
|
Yu Wang, Emma R. Cobian, Jubilee Lee, Fang Liu, Jonathan D. Hauenstein
and Daniele E. Schiavazzi
|
LINFA: a Python library for variational inference with normalizing flow
and annealing
| null | null | null | null |
cs.LG stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Variational inference is an increasingly popular method in statistics and
machine learning for approximating probability distributions. We developed
LINFA (Library for Inference with Normalizing Flow and Annealing), a Python
library for variational inference to accommodate computationally expensive
models and difficult-to-sample distributions with dependent parameters. We
discuss the theoretical background, capabilities, and performance of LINFA in
various benchmarks. LINFA is publicly available on GitHub at
https://github.com/desResLab/LINFA.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 16:21:05 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jul 2023 06:40:36 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Cobian",
"Emma R.",
""
],
[
"Lee",
"Jubilee",
""
],
[
"Liu",
"Fang",
""
],
[
"Hauenstein",
"Jonathan D.",
""
],
[
"Schiavazzi",
"Daniele E.",
""
]
] |
new_dataset
| 0.997996 |
2307.06953
|
Nathalie Revol
|
Luis Benet, Luca Ferranti, Nathalie Revol
|
A framework to test interval arithmetic libraries and their IEEE
1788-2015 compliance
|
2 figures
| null | null | null |
cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
As developers of libraries implementing interval arithmetic, we faced the
same difficulties when it comes to testing our libraries. What must be tested?
How can we devise relevant test cases for unit testing? How can we ensure a
high (and possibly 100%) test coverage? Before considering these questions, we
briefly recall the main features of interval arithmetic and of the IEEE
1788-2015 standard for interval arithmetic. After listing the different aspects
that, in our opinion, must be tested, we contribute a first step towards
offering a test suite for an interval arithmetic library. First we define a
format that enables the exchange of test cases, so that they can be read and
tried easily. Then we offer a first set of test cases, for a selected set of
mathematical functions. Next, we examine how the Julia interval arithmetic
library, IntervalArithmetic.jl, actually performs on these tests. As this is an
ongoing work, we list extra tests that we deem important to perform.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 19:48:29 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Benet",
"Luis",
""
],
[
"Ferranti",
"Luca",
""
],
[
"Revol",
"Nathalie",
""
]
] |
new_dataset
| 0.98483 |
2307.06958
|
Liangcheng Han
|
Liangcheng Han, Haifan Yin
|
Superdirectivity-enhanced wireless communications: A multi-user
perspective
|
11 pages, 8 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A superdirective array may achieve an array gain proportional to the square of
the number of antennas, $M^2$. In early studies of superdirectivity, little
research was done from a wireless communication point of view. To leverage
superdirectivity for enhancing the spectral efficiency, this paper investigates
multi-user communication systems with superdirective arrays. We first propose a
field-coupling-aware (FCA) multi-user channel estimation method, which takes
into account the antenna coupling effects. Aiming to maximize the power gain of
the target user, we propose multi-user multipath superdirective precoding (SP)
as an extension of our prior work on coupling-based superdirective beamforming.
Furthermore, to reduce the inter-user interference, we propose
interference-nulling superdirective precoding (INSP) as the optimal solution to
maximize user power gains while eliminating interference. Then, by taking the
ohmic loss into consideration, we further propose a regularized
interference-nulling superdirective precoding (RINSP) method. Finally, we
discuss the well-known narrow directivity bandwidth issue, and find that it is
not a fundamental problem of superdirective arrays in multi-carrier
communication systems. Simulation results show our proposed methods outperform
the state-of-the-art methods significantly. Interestingly, in the multi-user
scenario, an 18-antenna superdirective array can achieve up to a 9-fold
increase of spectral efficiency compared to traditional multiple-input
multiple-output (MIMO), while simultaneously reducing the array aperture by
half.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 02:20:20 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Han",
"Liangcheng",
""
],
[
"Yin",
"Haifan",
""
]
] |
new_dataset
| 0.956914 |
2307.07007
|
Mateusz Baran
|
Mateusz Baran, Mateusz W\'ojcik, Piotr Kolebski, Micha{\l} Bernaczyk,
Krzysztof Rajda, {\L}ukasz Augustyniak, Tomasz Kajdanowicz
|
Electoral Agitation Data Set: The Use Case of the Polish Election
|
5 pages, 3 figures, Language Resources and Evaluation Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The popularity of social media makes politicians use it for political
advertisement. Therefore, social media is full of electoral agitation
(electioneering), especially during the election campaigns. The election
administration cannot track the spread and quantity of messages that count as
agitation under the election code. It addresses a crucial problem, while also
uncovering a niche that has not been effectively targeted so far. Hence, we
present the first publicly open data set for detecting electoral agitation in
the Polish language. It contains 6,112 human-annotated tweets tagged with four
legally conditioned categories. We achieved a 0.66 inter-annotator agreement
(Cohen's kappa score). An additional annotator resolved the mismatches between
the first two, improving the consistency and complexity of the annotation
process. The newly created data set was used to fine-tune a Polish Language
Model called HerBERT (achieving a 68% F1 score). We also present a number of
potential use cases for such data sets and models, enriching the paper with an
analysis of the Polish 2020 Presidential Election on Twitter.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 18:14:43 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Baran",
"Mateusz",
""
],
[
"Wójcik",
"Mateusz",
""
],
[
"Kolebski",
"Piotr",
""
],
[
"Bernaczyk",
"Michał",
""
],
[
"Rajda",
"Krzysztof",
""
],
[
"Augustyniak",
"Łukasz",
""
],
[
"Kajdanowicz",
"Tomasz",
""
]
] |
new_dataset
| 0.99911 |
2307.07044
|
Neel Dey
|
Neel Dey, S. Mazdak Abulnaga, Benjamin Billot, Esra Abaci Turk, P.
Ellen Grant, Adrian V. Dalca, Polina Golland
|
AnyStar: Domain randomized universal star-convex 3D instance
segmentation
|
Code available at https://github.com/neel-dey/AnyStar
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Star-convex shapes arise across bio-microscopy and radiology in the form of
nuclei, nodules, metastases, and other units. Existing instance segmentation
networks for such structures train on densely labeled instances for each
dataset, which requires substantial and often impractical manual annotation
effort. Further, significant reengineering or finetuning is needed when
presented with new datasets and imaging modalities due to changes in contrast,
shape, orientation, resolution, and density. We present AnyStar, a
domain-randomized generative model that simulates synthetic training data of
blob-like objects with randomized appearance, environments, and imaging physics
to train general-purpose star-convex instance segmentation networks. As a
result, networks trained using our generative model do not require annotated
images from unseen datasets. A single network trained on our synthesized data
accurately 3D segments C. elegans and P. dumerilii nuclei in fluorescence
microscopy, mouse cortical nuclei in micro-CT, zebrafish brain nuclei in EM,
and placental cotyledons in human fetal MRI, all without any retraining,
finetuning, transfer learning, or domain adaptation. Code is available at
https://github.com/neel-dey/AnyStar.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 20:01:26 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Dey",
"Neel",
""
],
[
"Abulnaga",
"S. Mazdak",
""
],
[
"Billot",
"Benjamin",
""
],
[
"Turk",
"Esra Abaci",
""
],
[
"Grant",
"P. Ellen",
""
],
[
"Dalca",
"Adrian V.",
""
],
[
"Golland",
"Polina",
""
]
] |
new_dataset
| 0.999709 |
2307.07049
|
Samuel Barham
|
Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and
Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander
Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and
Benjamin Van Durme
|
MegaWika: Millions of reports and their sources across 50 diverse
languages
|
Submitted to ACL, 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
To foster the development of new models for collaborative AI-assisted report
generation, we introduce MegaWika, consisting of 13 million Wikipedia articles
in 50 diverse languages, along with their 71 million referenced source
materials. We process this dataset for a myriad of applications, going beyond
the initial Wikipedia citation extraction and web scraping of content,
including translating non-English articles for cross-lingual applications and
providing FrameNet parses for automated semantic analysis. MegaWika is the
largest resource for sentence-level report generation and the only report
generation dataset that is multilingual. We manually analyze the quality of
this resource through a semantically stratified sample. Finally, we provide
baseline results and trained models for crucial steps in automated report
generation: cross-lingual question answering and citation retrieval.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 20:04:02 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Barham",
"Samuel",
""
],
[
"Weller",
"Orion",
""
],
[
"Yuan",
"Michelle",
""
],
[
"Murray",
"Kenton",
""
],
[
"Yarmohammadi",
"Mahsa",
""
],
[
"Jiang",
"Zhengping",
""
],
[
"Vashishtha",
"Siddharth",
""
],
[
"Martin",
"Alexander",
""
],
[
"Liu",
"Anqi",
""
],
[
"White",
"Aaron Steven",
""
],
[
"Boyd-Graber",
"Jordan",
""
],
[
"Van Durme",
"Benjamin",
""
]
] |
new_dataset
| 0.999829 |
2307.07093
|
Niharika S. D'Souza
|
Niharika S. D'Souza, Hongzhi Wang, Andrea Giovannini, Antonio
Foncubierta-Rodriguez, Kristen L. Beck, Orest Boyko, Tanveer Syeda-Mahmood
|
MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized
Multimodal Fusion of Medical Data for Outcome Prediction
|
To appear in ML4MHD workshop at ICML 2023
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
With the emergence of multimodal electronic health records, the evidence for
an outcome may be captured across multiple modalities ranging from clinical to
imaging and genomic data. Predicting outcomes effectively requires fusion
frameworks capable of modeling fine-grained and multi-faceted complex
interactions between modality features within and across patients. We develop
an innovative fusion approach called MaxCorr MGNN that models non-linear
modality correlations within and across patients through
Hirschfeld-Gebelein-Renyi maximal correlation (MaxCorr) embeddings, resulting
in a multi-layered graph that preserves the identities of the modalities and
patients. We then design, for the first time, a generalized multi-layered graph
neural network (MGNN) for task-informed reasoning in multi-layered graphs, that
learns the parameters defining patient-modality graph connectivity and message
passing in an end-to-end fashion. We evaluate our model on an outcome prediction
task on a Tuberculosis (TB) dataset, consistently outperforming several
state-of-the-art neural, graph-based and traditional fusion techniques.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 23:52:41 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"D'Souza",
"Niharika S.",
""
],
[
"Wang",
"Hongzhi",
""
],
[
"Giovannini",
"Andrea",
""
],
[
"Foncubierta-Rodriguez",
"Antonio",
""
],
[
"Beck",
"Kristen L.",
""
],
[
"Boyko",
"Orest",
""
],
[
"Syeda-Mahmood",
"Tanveer",
""
]
] |
new_dataset
| 0.950561 |
2307.07102
|
Runwei Guan
|
Runwei Guan, Shanliang Yao, Xiaohui Zhu, Ka Lok Man, Eng Gee Lim,
Jeremy Smith, Yong Yue, Yutao Yue
|
Achelous: A Fast Unified Water-surface Panoptic Perception Framework
based on Fusion of Monocular Camera and 4D mmWave Radar
|
Accepted by ITSC 2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current perception models for different tasks usually exist in modular forms
on Unmanned Surface Vehicles (USVs); they infer extremely slowly in parallel
on edge devices, causing asynchrony between perception results and USV
position and leading to erroneous decisions in autonomous navigation. Compared
with Unmanned Ground Vehicles (UGVs), robust perception for USVs has developed
relatively slowly. Moreover, most current multi-task perception models are huge
in parameters, slow in inference, and not scalable. Motivated by this, we propose
Achelous, a low-cost and fast unified panoptic perception framework for
water-surface perception based on the fusion of a monocular camera and 4D
mmWave radar. Achelous can simultaneously perform five tasks, detection and
segmentation of visual targets, drivable-area segmentation, waterline
segmentation and radar point cloud segmentation. Besides, models in Achelous
family, with less than around 5 million parameters, achieve about 18 FPS on an
NVIDIA Jetson AGX Xavier, 11 FPS faster than HybridNets, and exceed YOLOX-Tiny
and Segformer-B0 on our collected dataset by about 5 mAP$_{\text{50-95}}$ and 0.7
mIoU, especially under situations of adverse weather, dark environments and
camera failure. To our knowledge, Achelous is the first comprehensive panoptic
perception framework combining vision-level and point-cloud-level tasks for
water-surface perception. To promote the development of the intelligent
transportation community, we release our codes in
\url{https://github.com/GuanRunwei/Achelous}.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 00:24:30 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Guan",
"Runwei",
""
],
[
"Yao",
"Shanliang",
""
],
[
"Zhu",
"Xiaohui",
""
],
[
"Man",
"Ka Lok",
""
],
[
"Lim",
"Eng Gee",
""
],
[
"Smith",
"Jeremy",
""
],
[
"Yue",
"Yong",
""
],
[
"Yue",
"Yutao",
""
]
] |
new_dataset
| 0.996309 |
2307.07135
|
Shijue Huang
|
Libo Qin, Shijue Huang, Qiguang Chen, Chenran Cai, Yudi Zhang, Bin
Liang, Wanxiang Che and Ruifeng Xu
|
MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System
|
Accepted by ACL2023 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal sarcasm detection has attracted much recent attention.
Nevertheless, the existing benchmark (MMSD) has some shortcomings that hinder
the development of reliable multi-modal sarcasm detection systems: (1) There are
some spurious cues in MMSD, leading to biased model learning; (2) The
negative samples in MMSD are not always reasonable. To solve the aforementioned
issues, we introduce MMSD2.0, a correction dataset that fixes the shortcomings
of MMSD, by removing the spurious cues and re-annotating the unreasonable
samples. Meanwhile, we present a novel framework called multi-view CLIP that is
capable of leveraging multi-grained cues from multiple perspectives (i.e.,
text, image, and text-image interaction view) for multi-modal sarcasm
detection. Extensive experiments show that MMSD2.0 is a valuable benchmark for
building reliable multi-modal sarcasm detection systems and multi-view CLIP can
significantly outperform the previous best baselines.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 03:22:51 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Qin",
"Libo",
""
],
[
"Huang",
"Shijue",
""
],
[
"Chen",
"Qiguang",
""
],
[
"Cai",
"Chenran",
""
],
[
"Zhang",
"Yudi",
""
],
[
"Liang",
"Bin",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Xu",
"Ruifeng",
""
]
] |
new_dataset
| 0.999623 |
2307.07138
|
Wen Fang
|
Wen Fang, Wen Chen, Qingqing Wu, Kunlun Wang, Shunqing Zhang, Qingwen
Liu, Jun Li
|
Reconfigurable Intelligent Surface Assisted Free Space Optical
Information and Power Transfer
| null | null | null | null |
cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Free space optical (FSO) transmission has emerged as a key candidate
technology for 6G to open up new spectrum and improve network capacity due to
its advantages of large bandwidth, low electromagnetic interference, and high
energy efficiency. A resonant beam operating in the infrared band utilizes
spatially separated laser cavities to enable safe and mobile high-power energy
and high-rate information transmission, but is limited to the line-of-sight (LOS)
channel. In this paper, we propose a reconfigurable intelligent surface (RIS)
assisted resonant beam simultaneous wireless information and power transfer
(SWIPT) system and establish an optical field propagation model to analyze the
channel state information (CSI), in which LOS obstruction can be detected
sensitively and non-line-of-sight (NLOS) transmission can be realized by
changing the phase of the resonant beam at the RIS. Numerical results demonstrate
that, apart from the transmission distance, the NLOS performance depends on
both the horizontal and vertical positions of the RIS. The maximum NLOS energy
efficiency can reach 55% within a transfer distance of 10 m, a translation
distance of $\pm$4 mm, and a rotation angle of $\pm$50{\deg}.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 03:30:32 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Fang",
"Wen",
""
],
[
"Chen",
"Wen",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Wang",
"Kunlun",
""
],
[
"Zhang",
"Shunqing",
""
],
[
"Liu",
"Qingwen",
""
],
[
"Li",
"Jun",
""
]
] |
new_dataset
| 0.999356 |
2307.07177
|
Linfeng Liu
|
Linfeng Liu, Junyan Lyu, Siyu Liu, Xiaoying Tang, Shekhar S. Chandra,
Fatima A. Nasrallah
|
TriFormer: A Multi-modal Transformer Framework For Mild Cognitive
Impairment Conversion Prediction
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The prediction of mild cognitive impairment (MCI) conversion to Alzheimer's
disease (AD) is important for early treatment to prevent or slow the
progression of AD. To accurately predict the MCI conversion to stable MCI or
progressive MCI, we propose Triformer, a novel transformer-based framework with
three specialized transformers to incorporate multi-modal data. Triformer uses
I) an image transformer to extract multi-view image features from medical
scans, II) a clinical transformer to embed and correlate multi-modal clinical
data, and III) a modality fusion transformer that produces an accurate
prediction based on fusing the outputs from the image and clinical
transformers. Triformer is evaluated on the Alzheimer's Disease Neuroimaging
Initiative (ANDI)1 and ADNI2 datasets and outperforms previous state-of-the-art
single and multi-modal methods.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 06:08:30 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Liu",
"Linfeng",
""
],
[
"Lyu",
"Junyan",
""
],
[
"Liu",
"Siyu",
""
],
[
"Tang",
"Xiaoying",
""
],
[
"Chandra",
"Shekhar S.",
""
],
[
"Nasrallah",
"Fatima A.",
""
]
] |
new_dataset
| 0.961841 |
2307.07184
|
Aichun Zhu
|
Fan Ni, Xu Zhang, Jianhui Wu, Guan-Nan Dong, Aichun Zhu, Hui Liu, Yue
Zhang
|
TVPR: Text-to-Video Person Retrieval and a New Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing methods for text-based person retrieval focus on text-to-image
person retrieval. Nevertheless, due to the lack of dynamic information provided
by isolated frames, the performance is hampered when the person is obscured in
isolated frames or variable motion details are given in the textual
description. In this paper, we propose a new task called Text-to-Video Person
Retrieval (TVPR), which aims to effectively overcome the limitations of isolated
frames. Since there is no dataset or benchmark that describes person videos
with natural language, we construct a large-scale cross-modal person video
dataset containing detailed natural language annotations, such as a person's
appearance, actions, and interactions with the environment, termed the
Text-to-Video Person Re-identification (TVPReid) dataset, which will be
publicly available. To this end, a Text-to-Video Person Retrieval Network
(TVPRN) is proposed. Specifically, TVPRN acquires video representations by
fusing visual and motion representations of person videos, which can deal with
temporal occlusion and the absence of variable motion details in isolated
frames. Meanwhile, we employ the pre-trained BERT to obtain caption
representations and the relationship between caption and video representations
to reveal the most relevant person videos. To evaluate the effectiveness of the
proposed TVPRN, extensive experiments have been conducted on TVPReid dataset.
To the best of our knowledge, TVPRN is the first successful attempt to use
video for text-based person retrieval task and has achieved state-of-the-art
performance on TVPReid dataset. The TVPReid dataset will be publicly available
to benefit future research.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 06:34:00 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Ni",
"Fan",
""
],
[
"Zhang",
"Xu",
""
],
[
"Wu",
"Jianhui",
""
],
[
"Dong",
"Guan-Nan",
""
],
[
"Zhu",
"Aichun",
""
],
[
"Liu",
"Hui",
""
],
[
"Zhang",
"Yue",
""
]
] |
new_dataset
| 0.999667 |
2307.07191
|
Zhixian Wang
|
Zhixian Wang, Qingsong Wen, Chaoli Zhang, Liang Sun, Leandro Von
Krannichfeldt, and Yi Wang
|
Benchmarks and Custom Package for Electrical Load Forecasting
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Load forecasting is of great significance in the power industry as it can
provide a reference for subsequent tasks such as power grid dispatch, thus
bringing huge economic benefits. However, there are many differences between
load forecasting and traditional time series forecasting. On the one hand, load
forecasting aims to minimize the cost of subsequent tasks such as power grid
dispatch, rather than simply pursuing prediction accuracy. On the other hand,
the load is largely influenced by many external factors, such as temperature or
calendar variables. In addition, the scale of predictions (such as
building-level loads and aggregated-level loads) can also significantly impact
the predicted results. In this paper, we provide a comprehensive load
forecasting archive, which includes load domain-specific feature engineering to
help forecasting models better model load data. In addition, different from the
traditional loss function which only aims for accuracy, we also provide a
method to customize the loss function based on the forecasting error,
integrating it into our forecasting framework. Based on this, we conducted
extensive experiments on load data at different levels, providing a reference
for researchers to compare different load forecasting models.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 06:50:02 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Wang",
"Zhixian",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Zhang",
"Chaoli",
""
],
[
"Sun",
"Liang",
""
],
[
"Von Krannichfeldt",
"Leandro",
""
],
[
"Wang",
"Yi",
""
]
] |
new_dataset
| 0.998765 |
2307.07196
|
Zhenxing Ming
|
Zhenxing Ming, Julie Stephany Berrio, Mao Shan, Eduardo Nebot and
Stewart Worrall
|
LightFormer: An End-to-End Model for Intersection Right-of-Way
Recognition Using Traffic Light Signals and an Attention Mechanism
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For smart vehicles driving through signalised intersections, it is crucial to
determine whether the vehicle has right of way given the state of the traffic
lights. To address this issue, camera-based sensors can be used to determine
whether the vehicle has permission to proceed straight, turn left, or turn
right. This paper proposes a novel end-to-end intersection right-of-way
recognition model called LightFormer, which generates the right-of-way status for
available driving directions at complex urban intersections. The model includes
a spatial-temporal inner structure with an attention mechanism, which
incorporates features from past images to contribute to the classification of
the current frame's right-of-way status. In addition, a modified multi-weight
ArcFace loss is introduced to enhance the model's classification performance.
Finally, the proposed LightFormer is trained and tested on two public traffic
light datasets with manually augmented labels to demonstrate its effectiveness.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 07:07:36 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Ming",
"Zhenxing",
""
],
[
"Berrio",
"Julie Stephany",
""
],
[
"Shan",
"Mao",
""
],
[
"Nebot",
"Eduardo",
""
],
[
"Worrall",
"Stewart",
""
]
] |
new_dataset
| 0.998984 |
2307.07214
|
Jiayin Sun
|
Jiayin Sun and Hong Wang and Qiulei Dong
|
Complementary Frequency-Varying Awareness Network for Open-Set
Fine-Grained Image Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-set image recognition is a challenging topic in computer vision. Most of
the existing works in the literature focus on learning more discriminative features
from the input images; however, they are usually insensitive to the high- or
low-frequency components in features, resulting in decreased performance on
fine-grained image recognition. To address this problem, we propose a
Complementary Frequency-varying Awareness Network, called CFAN, that can better
capture both high-frequency and low-frequency information. The proposed
CFAN consists of three sequential modules: (i) a feature extraction module is
introduced for learning preliminary features from the input images; (ii) a
frequency-varying filtering module is designed to separate out both high- and
low-frequency components from the preliminary features in the frequency domain
via a frequency-adjustable filter; (iii) a complementary temporal aggregation
module is designed for aggregating the high- and low-frequency components via
two Long Short-Term Memory networks into discriminative features. Based on
CFAN, we further propose an open-set fine-grained image recognition method,
called CFAN-OSFGR, which learns image features via CFAN and classifies them via
a linear classifier. Experimental results on 3 fine-grained datasets and 2
coarse-grained datasets demonstrate that CFAN-OSFGR performs significantly
better than 9 state-of-the-art methods in most cases.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 08:15:36 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Sun",
"Jiayin",
""
],
[
"Wang",
"Hong",
""
],
[
"Dong",
"Qiulei",
""
]
] |
new_dataset
| 0.998485 |
2307.07227
|
Milad Tatar Mamaghani
|
Milad Tatar Mamaghani, Xiangyun Zhou, Nan Yang, and A. Lee
Swindlehurst
|
Secure Short-Packet Communications via UAV-Enabled Mobile Relaying:
Joint Resource Optimization and 3D Trajectory Design
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Short-packet communication (SPC) and unmanned aerial vehicles (UAVs) are
anticipated to play crucial roles in the development of 5G-and-beyond wireless
networks and the Internet of Things (IoT). In this paper, we propose a secure
SPC system, where a UAV serves as a mobile decode-and-forward (DF) relay,
periodically receiving and relaying small data packets from a remote IoT device
to its receiver in two hops with strict latency requirements, in the presence
of an eavesdropper. This system requires careful optimization of important
design parameters, such as the coding blocklengths of both hops, transmit
powers, and UAV's trajectory. While the overall optimization problem is
nonconvex, we tackle it by applying a block successive convex approximation
(BSCA) approach to divide the original problem into three subproblems and solve
them separately. Then, an overall iterative algorithm is proposed to obtain the
final design with guaranteed convergence. Our proposed low-complexity algorithm
incorporates 3D trajectory design and resource management to optimize the
effective average secrecy throughput of the communication system over the
course of UAV-relay's mission. Simulation results demonstrate significant
performance improvements compared to various benchmark schemes and provide
useful design insights on the coding blocklengths and transmit powers along the
trajectory of the UAV.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 08:47:06 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Mamaghani",
"Milad Tatar",
""
],
[
"Zhou",
"Xiangyun",
""
],
[
"Yang",
"Nan",
""
],
[
"Swindlehurst",
"A. Lee",
""
]
] |
new_dataset
| 0.99876 |
2307.07231
|
Qu Yang
|
Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen
Tan
|
Long Short-term Memory with Two-Compartment Spiking Neuron
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The identification of sensory cues associated with potential opportunities
and dangers is frequently complicated by unrelated events that separate useful
cues by long delays. As a result, it remains a challenging task for
state-of-the-art spiking neural networks (SNNs) to identify long-term temporal
dependencies since bridging the temporal gap necessitates an extended memory
capacity. To address this challenge, we propose a novel biologically inspired
Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed
LSTM-LIF. Our model incorporates carefully designed somatic and dendritic
compartments that are tailored to retain short- and long-term memories. The
theoretical analysis further confirms its effectiveness in addressing the
notorious vanishing gradient problem. Our experimental results, on a diverse
range of temporal classification tasks, demonstrate superior temporal
classification capability, rapid training convergence, strong network
generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving
challenging temporal processing tasks on emerging neuromorphic computing
machines.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 08:51:03 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Zhang",
"Shimin",
""
],
[
"Yang",
"Qu",
""
],
[
"Ma",
"Chenxiang",
""
],
[
"Wu",
"Jibin",
""
],
[
"Li",
"Haizhou",
""
],
[
"Tan",
"Kay Chen",
""
]
] |
new_dataset
| 0.997595 |
2307.07238
|
Sebastian Siebertz
|
Mario Grobler, Leif Sabellek, Sebastian Siebertz
|
Remarks on Parikh-recognizable omega-languages
|
arXiv admin note: text overlap with arXiv:2302.04087,
arXiv:2301.08969
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Several variants of Parikh automata on infinite words were recently
introduced by Guha et al. [FSTTCS, 2022]. We show that one of these variants
coincides with blind counter machines as introduced by Fernau and Stiebe
[Fundamenta Informaticae, 2008]. Fernau and Stiebe showed that every
$\omega$-language recognized by a blind counter machine is of the form
$\bigcup_iU_iV_i^\omega$ for Parikh recognizable languages $U_i, V_i$, but
blind counter machines fall short of characterizing this class of
$\omega$-languages. They posed as an open problem to find a suitable
automata-based characterization. We introduce several additional variants of
Parikh automata on infinite words that yield automata characterizations of
classes of $\omega$-languages of the form $\bigcup_iU_iV_i^\omega$ for all
combinations of languages $U_i, V_i$ being regular or Parikh-recognizable. When
both $U_i$ and $V_i$ are regular, this coincides with B\"uchi's classical
theorem. We study the effect of $\varepsilon$-transitions in all variants of
Parikh automata and show that almost all of them admit
$\varepsilon$-elimination. Finally, we study the classical decision problems
with applications to model checking.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 09:21:33 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Grobler",
"Mario",
""
],
[
"Sabellek",
"Leif",
""
],
[
"Siebertz",
"Sebastian",
""
]
] |
new_dataset
| 0.998063 |
2307.07240
|
Bin-Cheng Yang
|
Bincheng Yang and Gangshan Wu
|
MaxSR: Image Super-Resolution Using Improved MaxViT
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While transformer models have been demonstrated to be effective for natural
language processing tasks and high-level vision tasks, only a few attempts have
been made to use powerful transformer models for single image super-resolution.
Because transformer models have powerful representation capacity and the
in-built self-attention mechanisms in transformer models help to leverage the
self-similarity prior in the input low-resolution image to improve performance for
single image super-resolution, we present a single image super-resolution model
based on recent hybrid vision transformer of MaxViT, named as MaxSR. MaxSR
consists of four parts, a shallow feature extraction block, multiple cascaded
adaptive MaxViT blocks to extract deep hierarchical features and model global
self-similarity from low-level features efficiently, a hierarchical feature
fusion block, and finally a reconstruction block. The key component of MaxSR,
i.e., adaptive MaxViT block, is based on MaxViT block which mixes MBConv with
squeeze-and-excitation, block attention and grid attention. In order to achieve
better global modelling of self-similarity in the input low-resolution image, we
improve block attention and grid attention in MaxViT block to adaptive block
attention and adaptive grid attention which do self-attention inside each
window across all grids and each grid across all windows respectively in the
most efficient way. We instantiate proposed model for classical single image
super-resolution (MaxSR) and lightweight single image super-resolution
(MaxSR-light). Experiments show that our MaxSR and MaxSR-light establish new
state-of-the-art performance efficiently.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 09:26:47 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Yang",
"Bincheng",
""
],
[
"Wu",
"Gangshan",
""
]
] |
new_dataset
| 0.999239 |
2307.07265
|
Kin Wai Lau
|
Kin Wai Lau, Yasar Abbas Ur Rehman, Yuyang Xie, Lan Ma
|
AudioInceptionNeXt: TCL AI LAB Submission to EPIC-SOUND
Audio-Based-Interaction-Recognition Challenge 2023
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report presents the technical details of our submission to the 2023
Epic-Kitchen EPIC-SOUNDS Audio-Based Interaction Recognition Challenge. The
task is to learn the mapping from audio samples to their corresponding action
labels. To achieve this goal, we propose a simple yet effective single-stream
CNN-based architecture called AudioInceptionNeXt that operates on the
time-frequency log-mel-spectrogram of the audio samples. Motivated by the
design of the InceptionNeXt, we propose parallel multi-scale depthwise
separable convolutional kernels in the AudioInceptionNeXt block, which enable
the model to learn the time and frequency information more effectively. The
large-scale separable kernels capture the long duration of activities and the
global frequency semantic information, while the small-scale separable kernels
capture the short duration of activities and local details of frequency
information. Our approach achieved 55.43% top-1 accuracy on the challenge
test set, ranking 1st on the public leaderboard. Code is available
anonymously at https://github.com/StevenLauHKHK/AudioInceptionNeXt.git.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 10:39:05 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Lau",
"Kin Wai",
""
],
[
"Rehman",
"Yasar Abbas Ur",
""
],
[
"Xie",
"Yuyang",
""
],
[
"Ma",
"Lan",
""
]
] |
new_dataset
| 0.987351 |
2307.07267
|
Davide Cenzato
|
Ruben Becker, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Riccardo
Maso and Nicola Prezza
|
Random Wheeler Automata
|
19 pages, 3 figures
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Wheeler automata were introduced in 2017 as a tool to generalize existing
indexing and compression techniques based on the Burrows-Wheeler transform.
Intuitively, an automaton is said to be Wheeler if there exists a total order
on its states reflecting the co-lexicographic order of the strings labeling the
automaton's paths; this property makes it possible to represent the automaton's
topology in a constant number of bits per transition, as well as efficiently
solving pattern matching queries on its accepted regular language. After their
introduction, Wheeler automata have been the subject of a prolific line of
research, both from the algorithmic and language-theoretic points of view. A
recurring issue faced in these studies is the lack of large datasets of Wheeler
automata on which the developed algorithms and theories could be tested. One
possible way to overcome this issue is to generate random Wheeler automata.
Motivated by this observation, in this paper we initiate the theoretical study
of random Wheeler automata, focusing on the deterministic case (Wheeler DFAs --
WDFAs). We start by extending the Erd\H{o}s-R\'enyi random graph model to
WDFAs, and proceed by providing an algorithm generating uniform WDFAs according
to this model. Our algorithm generates a uniform WDFA with $n$ states, $m$
transitions, and alphabet's cardinality $\sigma$ in $O(m)$ expected time
($O(m\log m)$ worst-case time w.h.p.) and constant working space for all
alphabets of size $\sigma \le m/\ln m$. As a by-product, we also give formulas
for the number of distinct WDFAs and obtain that $ n\sigma + (n - \sigma) \log
\sigma$ bits are necessary and sufficient to encode a WDFA with $n$ states and
alphabet of size $\sigma$, up to an additive $\Theta(n)$ term. We present an
implementation of our algorithm and show that it is extremely fast in practice,
with a throughput of over 8 million transitions per second.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 10:46:34 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Becker",
"Ruben",
""
],
[
"Cenzato",
"Davide",
""
],
[
"Kim",
"Sung-Hwan",
""
],
[
"Kodric",
"Bojana",
""
],
[
"Maso",
"Riccardo",
""
],
[
"Prezza",
"Nicola",
""
]
] |
new_dataset
| 0.999293 |
2307.07306
|
Yuren Mao
|
Xuemei Dong, Chao Zhang, Yuhang Ge, Yuren Mao, Yunjun Gao, lu Chen,
Jinshu Lin, Dongfang Lou
|
C3: Zero-shot Text-to-SQL with ChatGPT
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a ChatGPT-based zero-shot Text-to-SQL method, dubbed C3,
which achieves 82.3\% in terms of execution accuracy on the holdout test set of
Spider and becomes the state-of-the-art zero-shot Text-to-SQL method on the
Spider Challenge. C3 consists of three key components: Clear Prompting (CP),
Calibration with Hints (CH), and Consistent Output (CO), which correspond to
the model input, model bias, and model output, respectively. It
provides a systematic treatment for zero-shot Text-to-SQL. Extensive
experiments have been conducted to verify the effectiveness and efficiency of
our proposed method.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 12:30:41 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Dong",
"Xuemei",
""
],
[
"Zhang",
"Chao",
""
],
[
"Ge",
"Yuhang",
""
],
[
"Mao",
"Yuren",
""
],
[
"Gao",
"Yunjun",
""
],
[
"Chen",
"lu",
""
],
[
"Lin",
"Jinshu",
""
],
[
"Lou",
"Dongfang",
""
]
] |
new_dataset
| 0.994156 |
2307.07310
|
Mohammad Javad Ahmadi
|
Mohammad Javad Ahmadi, Mohammad Kazemi, and Tolga M. Duman
|
Unsourced Random Access Using Multiple Stages of Orthogonal Pilots: MIMO
and Single-Antenna Structures
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of unsourced random access (URA) over Rayleigh
block-fading channels with a receiver equipped with multiple antennas. We
propose a slotted structure with multiple stages of orthogonal pilots, each of
which is randomly picked from a codebook. In the proposed signaling structure,
each user encodes its message using a polar code and appends it to the selected
pilot sequences to construct its transmitted signal. Accordingly, the
transmitted signal is composed of multiple orthogonal pilot parts and a
polar-coded part, which is sent through a randomly selected slot. The
performance of the proposed scheme is further improved by randomly dividing
users into different groups each having a unique interleaver-power pair. We
also apply the idea of multiple stages of orthogonal pilots to the case of a
single receive antenna. In all the set-ups, we use an iterative approach for
decoding the transmitted messages along with a suitable successive interference
cancellation technique. The use of orthogonal pilots and the slotted structure
lead to improved accuracy and reduced computational complexity in the proposed
set-ups, and make the implementation with short blocklengths more viable.
Performance of the proposed set-ups is illustrated via extensive simulation
results which show that the proposed set-ups with multiple antennas perform
better than the existing MIMO URA solutions for both short and large
blocklengths, and that the proposed single-antenna set-ups are superior to the
existing single-antenna URA schemes.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 12:43:25 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Ahmadi",
"Mohammad Javad",
""
],
[
"Kazemi",
"Mohammad",
""
],
[
"Duman",
"Tolga M.",
""
]
] |
new_dataset
| 0.994806 |
2307.07313
|
Oscar Carlsson
|
Oscar Carlsson, Jan E. Gerken, Hampus Linander, Heiner Spie{\ss},
Fredrik Ohlsson, Christoffer Petersson, Daniel Persson
|
HEAL-SWIN: A Vision Transformer On The Sphere
|
Main body: 10 pages, 7 figures. Appendices: 4 pages, 2 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-resolution wide-angle fisheye images are becoming more and more
important for robotics applications such as autonomous driving. However, using
ordinary convolutional neural networks or vision transformers on this data is
problematic due to projection and distortion losses introduced when projecting
to a rectangular grid on the plane. We introduce the HEAL-SWIN transformer,
which combines the highly uniform Hierarchical Equal Area iso-Latitude
Pixelation (HEALPix) grid used in astrophysics and cosmology with the
Hierarchical Shifted-Window (SWIN) transformer to yield an efficient and
flexible model capable of training on high-resolution, distortion-free
spherical data. In HEAL-SWIN, the nested structure of the HEALPix grid is used
to perform the patching and windowing operations of the SWIN transformer,
resulting in a one-dimensional representation of the spherical data with
minimal computational overhead. We demonstrate the superior performance of our
model for semantic segmentation and depth regression tasks on both synthetic
and real automotive datasets. Our code is available at
https://github.com/JanEGerken/HEAL-SWIN.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 12:46:59 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Carlsson",
"Oscar",
""
],
[
"Gerken",
"Jan E.",
""
],
[
"Linander",
"Hampus",
""
],
[
"Spieß",
"Heiner",
""
],
[
"Ohlsson",
"Fredrik",
""
],
[
"Petersson",
"Christoffer",
""
],
[
"Persson",
"Daniel",
""
]
] |
new_dataset
| 0.998841 |
2307.07333
|
Zhili Ng
|
Zhili Ng, Haozhe Wang, Zhengshen Zhang, Francis Tay Eng Hock, and
Marcelo H. Ang Jr
|
SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal
Instance Segmentation of Cluttered Tabletop Scenes
|
Version 1
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present SynTable, a unified and flexible Python-based
dataset generator built using NVIDIA's Isaac Sim Replicator Composer for
generating high-quality synthetic datasets for unseen object amodal instance
segmentation of cluttered tabletop scenes. Our dataset generation tool can
render a complex 3D scene containing object meshes, materials, textures,
lighting, and backgrounds. Metadata, such as modal and amodal instance
segmentation masks, occlusion masks, depth maps, bounding boxes, and material
properties, can be generated to automatically annotate the scene according to
the users' requirements. Our tool eliminates the need for manual labeling in
the dataset generation process while ensuring the quality and accuracy of the
dataset. In this work, we discuss our design goals, framework architecture, and
the performance of our tool. We demonstrate the use of a sample dataset
generated using SynTable by ray tracing for training a state-of-the-art model,
UOAIS-Net. The results show significantly improved performance in Sim-to-Real
transfer when evaluated on the OSD-Amodal dataset. We offer this tool as an
open-source, easy-to-use, photorealistic dataset generator for advancing
research in deep learning and synthetic data generation.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 13:24:42 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Ng",
"Zhili",
""
],
[
"Wang",
"Haozhe",
""
],
[
"Zhang",
"Zhengshen",
""
],
[
"Hock",
"Francis Tay Eng",
""
],
[
"Ang",
"Marcelo H.",
"Jr"
]
] |
new_dataset
| 0.999063 |
2307.07359
|
Mohamed Akrout
|
Mohamed Akrout, Amine Mezghani, Ekram Hossain, Faouzi Bellili, Robert
W. Heath
|
From Multilayer Perceptron to GPT: A Reflection on Deep Learning
Research for Wireless Physical Layer
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Most research studies on deep learning (DL) applied to the physical layer of
wireless communication do not put forward the critical role of the
accuracy-generalization trade-off in developing and evaluating practical
algorithms. To highlight the disadvantage of this common practice, we revisit a
data decoding example from one of the first papers introducing DL-based
end-to-end wireless communication systems to the research community and
promoting the use of artificial intelligence (AI)/DL for the wireless physical
layer. We then put forward two key trade-offs in designing DL models for
communication, namely, accuracy versus generalization and compression versus
latency. We discuss their relevance in the context of wireless communications
use cases using emerging DL models including large language models (LLMs).
Finally, we summarize our proposed evaluation guidelines to enhance the
research impact of DL on wireless communications. These guidelines are an
attempt to reconcile the empirical nature of DL research with the rigorous
requirement metrics of wireless communications systems.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 14:04:01 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Akrout",
"Mohamed",
""
],
[
"Mezghani",
"Amine",
""
],
[
"Hossain",
"Ekram",
""
],
[
"Bellili",
"Faouzi",
""
],
[
"Heath",
"Robert W.",
""
]
] |
new_dataset
| 0.986737 |
2307.07409
|
Gangwoo Kim
|
Gangwoo Kim, Hajung Kim, Lei Ji, Seongsu Bae, Chanhwi Kim, Mujeen
Sung, Hyunjae Kim, Kun Yan, Eric Chang, Jaewoo Kang
|
KU-DMIS-MSRA at RadSum23: Pre-trained Vision-Language Model for
Radiology Report Summarization
|
Published at BioNLP workshop @ ACL 2023
| null | null | null |
cs.CL cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce CheXOFA, a new pre-trained vision-language model
(VLM) for the chest X-ray domain. Our model is initially pre-trained on various
multimodal datasets within the general domain before being transferred to the
chest X-ray domain. Following a prominent VLM, we unify various domain-specific
tasks into a simple sequence-to-sequence schema. It enables the model to
effectively learn the required knowledge and skills from limited resources in
the domain. Demonstrating superior performance on the benchmark datasets
provided by the BioNLP shared task, our model benefits from its training across
multiple tasks and domains. With additional techniques, including ensembling and
factual calibration, our system achieves first place on the RadSum23
leaderboard for the hidden test set.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 21:18:01 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Kim",
"Gangwoo",
""
],
[
"Kim",
"Hajung",
""
],
[
"Ji",
"Lei",
""
],
[
"Bae",
"Seongsu",
""
],
[
"Kim",
"Chanhwi",
""
],
[
"Sung",
"Mujeen",
""
],
[
"Kim",
"Hyunjae",
""
],
[
"Yan",
"Kun",
""
],
[
"Chang",
"Eric",
""
],
[
"Kang",
"Jaewoo",
""
]
] |
new_dataset
| 0.994773 |
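The CheXOFA abstract above describes unifying various domain-specific tasks into a simple sequence-to-sequence schema. A minimal sketch of such task unification follows; the prompt templates and field names are invented for illustration and are not the paper's actual templates.

```python
# Hypothetical task prompts illustrating a unified sequence-to-sequence schema:
# every task becomes a (source text [+ image], target text) pair handled by one
# shared encoder-decoder model.
TASK_TEMPLATES = {
    "report_summarization": "summarize the findings: {findings}",
    "report_generation":    "describe the chest x-ray.",
    "vqa":                  "answer the question: {question}",
}

def build_example(task: str, target: str, **fields) -> dict:
    return {"source": TASK_TEMPLATES[task].format(**fields), "target": target}

ex = build_example(
    "report_summarization",
    target="No acute cardiopulmonary abnormality.",
    findings="The lungs are clear. Heart size is normal.",
)
print(ex["source"], "->", ex["target"])
```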
2307.07445
|
Ke Deng
|
Ke Deng, Zhiyuan He, Hao Zhang, Haohan Lin, Desheng Wang
|
TSNet-SAC: Leveraging Transformers for Efficient Task Scheduling
| null | null | null | null |
cs.NI cs.AI cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In future 6G Mobile Edge Computing (MEC), autopilot systems require the
capability of processing multimodal data with strong interdependencies.
However, traditional heuristic algorithms are inadequate for real-time
scheduling due to their requirement for multiple iterations to derive the
optimal scheme. We propose TSNet-SAC, a novel Transformer-based network that
uses heuristic algorithms solely to guide the training of TSNet.
Additionally, a Sliding Augment Component (SAC) is introduced to enhance
robustness and mitigate algorithmic defects. Furthermore, the Extender component
is designed to handle multi-scale training data and provide network
scalability, enabling TSNet to adapt to different access scenarios. Simulations
demonstrate that TSNet-SAC outperforms existing networks in accuracy and
robustness, while achieving lower scheduling latency than heuristic algorithms.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 04:25:59 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Deng",
"Ke",
""
],
[
"He",
"Zhiyuan",
""
],
[
"Zhang",
"Hao",
""
],
[
"Lin",
"Haohan",
""
],
[
"Wang",
"Desheng",
""
]
] |
new_dataset
| 0.973794 |
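The TSNet-SAC abstract above states that heuristic algorithms are used solely to guide training, so that the trained network can schedule in a single forward pass instead of iterating. A minimal sketch of that heuristic-guided (imitation-style) training loop follows; the task features, the placeholder heuristic, and the architecture are illustrative assumptions, and the SAC and Extender components are omitted.

```python
import torch
import torch.nn as nn

# A heuristic labels each task batch offline; the Transformer learns to
# reproduce those schedules, avoiding the heuristic's iterative runtime cost.
d_model, n_tasks, n_feat = 64, 8, 4
embed = nn.Linear(n_feat, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(d_model, n_tasks)       # per-task logits over schedule positions

def heuristic_rank(tasks: torch.Tensor) -> torch.Tensor:
    # Placeholder heuristic: rank tasks by estimated duration (feature 0).
    return tasks[:, :, 0].argsort(dim=1).argsort(dim=1)

params = [*embed.parameters(), *encoder.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
for _ in range(200):
    tasks = torch.rand(32, n_tasks, n_feat)
    target = heuristic_rank(tasks)                 # heuristic guides training only
    logits = head(encoder(embed(tasks)))           # (32, n_tasks, n_tasks)
    loss = nn.functional.cross_entropy(logits.flatten(0, 1), target.flatten())
    opt.zero_grad(); loss.backward(); opt.step()
```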
2307.07484
|
Sibi Chakkaravarthy S
|
Aditya Mitra, Anisha Ghosh, Sibi Chakkaravarthy Sethuraman
|
TUSH-Key: Transferable User Secrets on Hardware Key
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Passwordless authentication was first tested for seamless and secure merchant
payments without the use of passwords or PINs, and it opened a whole new world
of authentication, ending the reliance on traditional passwords. It relies on
the W3C Web Authentication (WebAuthn) and Client to Authenticator Protocol
(CTAP) standards, which use a public-key cryptosystem to uniquely attest a
user's device and then their identity. Together, these standards make up the
FIDO authentication standard. As passwordless authentication grows in
popularity, more and more users and service providers are adopting it.
However, device attestation ties credentials to a specific device, making it
difficult for a user to switch devices. FIDO Passkeys aim to solve this
problem by synchronizing the private cryptographic keys across multiple
devices, so that the user can perform passwordless authentication even from
devices not explicitly enrolled with the service provider. However, passkeys
have certain drawbacks: they use proprietary end-to-end encryption algorithms,
all keys pass through a proprietary cloud provider, and cross-platform key
synchronization is usually far from seamless. To address these drawbacks, this
paper proposes a novel private-key management system for passwordless
authentication called Transferable User Secret on Hardware Key (TUSH-Key).
TUSH-Key allows cross-platform synchronization of devices for seamless
passwordless logins under the FIDO2 specifications.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 17:09:46 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Mitra",
"Aditya",
""
],
[
"Ghosh",
"Anisha",
""
],
[
"Sethuraman",
"Sibi Chakkaravarthy",
""
]
] |
new_dataset
| 0.999577 |
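The abstract above rests on the WebAuthn/CTAP public-key challenge-response: the service stores only a public key, and the device proves possession of the private key by signing a fresh challenge. A minimal sketch with the Python cryptography package follows; it illustrates the underlying primitive only, not TUSH-Key's cross-device synchronization protocol.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator generates a key pair; only the public key
# leaves the device and is stored by the relying party (the service).
device_key = ec.generate_private_key(ec.SECP256R1())
stored_public_key = device_key.public_key()

# Authentication: the service sends a random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The service verifies the signature; verify() raises InvalidSignature on failure.
stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge-response verified")
```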
2307.07511
|
Nilesh Kulkarni
|
Nilesh Kulkarni, Davis Rempe, Kyle Genova, Abhijit Kundu, Justin
Johnson, David Fouhey, Leonidas Guibas
|
NIFTY: Neural Object Interaction Fields for Guided Human Motion
Synthesis
|
Project Page with additional results available
https://nileshkulkarni.github.io/nifty
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of generating realistic 3D motions of humans
interacting with objects in a scene. Our key idea is to create a neural
interaction field attached to a specific object, which outputs the distance to
the valid interaction manifold given a human pose as input. This interaction
field guides the sampling of an object-conditioned human motion diffusion
model, so as to encourage plausible contacts and affordance semantics. To
support interactions for which motion data is scarce, we propose an automated
synthetic data pipeline. For this, we seed a pre-trained motion model, which
has priors for the basics of human movement, with interaction-specific anchor
poses extracted from limited motion capture data. Using our guided diffusion
model trained on generated synthetic data, we synthesize realistic motions for
sitting and lifting with several objects, outperforming alternative approaches
in terms of motion quality and successful action completion. We call our
framework NIFTY: Neural Interaction Fields for Trajectory sYnthesis.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 17:59:38 GMT"
}
] | 2023-07-17T00:00:00 |
[
[
"Kulkarni",
"Nilesh",
""
],
[
"Rempe",
"Davis",
""
],
[
"Genova",
"Kyle",
""
],
[
"Kundu",
"Abhijit",
""
],
[
"Johnson",
"Justin",
""
],
[
"Fouhey",
"David",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.990852 |
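The NIFTY abstract above describes a neural interaction field whose output guides the sampling of a motion diffusion model. A minimal sketch of one field-guided denoising step, in the spirit of classifier guidance, follows; `denoiser` and `field` are assumed pretrained placeholders, and the paper's exact update rule may differ.

```python
import torch

# field(pose) -> scalar distance to the valid interaction manifold (lower = better).
# At each reverse-diffusion step, nudge the predicted pose down the gradient of
# the interaction field to encourage plausible contacts.
def guided_step(denoiser, field, x_t, t, guidance_scale: float = 1.0):
    x_hat = denoiser(x_t, t)                       # ordinary denoising prediction
    x_hat = x_hat.detach().requires_grad_(True)
    dist = field(x_hat).sum()                      # distance to interaction manifold
    grad, = torch.autograd.grad(dist, x_hat)
    return (x_hat - guidance_scale * grad).detach()   # pull toward valid contact
```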
2005.04490
|
Huabin Liu
|
Weiyao Lin, Huabin Liu, Shizhan Liu, Yuxi Li, Rui Qian, Tao Wang, Ning
Xu, Hongkai Xiong, Guo-Jun Qi, Nicu Sebe
|
Human in Events: A Large-Scale Benchmark for Human-centric Video
Analysis in Complex Events
|
Dataset for Large-scale Human-centric Video Analysis in Complex
Events (http://humaninevents.org), the paper has been published in Int J
Comput Vis (2023)
| null |
10.1007/s11263-023-01842-6
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Along with the development of modern smart cities, human-centric video
analysis has been encountering the challenge of analyzing diverse and complex
events in real scenes. A complex event relates to dense crowds, anomalous
individuals, or collective behaviors. However, limited by the scale and
coverage of existing video datasets, few human analysis approaches have
reported their performances on such complex events. To this end, we present a
new large-scale dataset with comprehensive annotations, named Human-in-Events
or HiEve (Human-centric video analysis in complex Events), for the
understanding of human motions, poses, and actions in a variety of realistic
events, especially in crowded and complex events. It contains a record number of
poses (>1M), the largest number of action instances (>56k) under complex
events, as well as one of the largest collections of long-lasting trajectories
(with an average trajectory length of >480 frames). Based on its
diverse annotations, we present two simple baselines for action recognition and
pose estimation, respectively. They leverage cross-label information during
training to enhance the feature learning in corresponding visual tasks.
Experiments show that they could boost the performance of existing action
recognition and pose estimation pipelines. More importantly, they show that the
wide-ranging annotations in HiEve can improve various video tasks.
Furthermore, we conduct extensive experiments to benchmark recent video
analysis approaches together with our baseline methods, demonstrating HiEve is
a challenging dataset for human-centric video analysis. We expect that the
dataset will advance the development of cutting-edge techniques in
human-centric analysis and the understanding of complex events. The dataset is
available at http://humaninevents.org
|
[
{
"version": "v1",
"created": "Sat, 9 May 2020 18:24:52 GMT"
},
{
"version": "v2",
"created": "Tue, 19 May 2020 15:44:19 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Mar 2021 12:47:25 GMT"
},
{
"version": "v4",
"created": "Thu, 11 Mar 2021 02:50:11 GMT"
},
{
"version": "v5",
"created": "Sun, 14 Mar 2021 06:24:52 GMT"
},
{
"version": "v6",
"created": "Thu, 13 Jul 2023 13:23:05 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Lin",
"Weiyao",
""
],
[
"Liu",
"Huabin",
""
],
[
"Liu",
"Shizhan",
""
],
[
"Li",
"Yuxi",
""
],
[
"Qian",
"Rui",
""
],
[
"Wang",
"Tao",
""
],
[
"Xu",
"Ning",
""
],
[
"Xiong",
"Hongkai",
""
],
[
"Qi",
"Guo-Jun",
""
],
[
"Sebe",
"Nicu",
""
]
] |
new_dataset
| 0.999782 |
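The HiEve abstract above reports trajectory statistics such as an average trajectory length of more than 480 frames. A minimal sketch of how such statistics can be computed from MOT-style tracking annotations follows; the row layout (frame_id, track_id, box) is an assumption for illustration, not HiEve's actual file format.

```python
from collections import defaultdict

# Hypothetical MOT-style rows: (frame_id, track_id, x, y, w, h).
def trajectory_stats(rows):
    frames = defaultdict(list)
    for frame_id, track_id, *_ in rows:
        frames[track_id].append(frame_id)
    lengths = [max(f) - min(f) + 1 for f in frames.values()]
    return sum(lengths) / len(lengths), max(lengths)

rows = [(1, 7, 0, 0, 10, 20), (2, 7, 1, 0, 10, 20), (500, 7, 5, 3, 10, 20)]
print(trajectory_stats(rows))   # (average trajectory length, longest trajectory)
```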