| Column | Type | Range / values |
|---|---|---|
| id | string | 9–10 chars |
| submitter | string, nullable | 2–52 chars |
| authors | string | 4–6.51k chars |
| title | string | 4–246 chars |
| comments | string, nullable | 1–523 chars |
| journal-ref | string, nullable | 4–345 chars |
| doi | string, nullable | 11–120 chars |
| report-no | string, nullable | 2–243 chars |
| categories | string | 5–98 chars |
| license | string | 9 classes |
| abstract | string | 33–3.33k chars |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class (`new_dataset`) |
| probability | float64 | 0.95–1 |

The records below are rendered one field per line, in the column order above, with `|` separating fields and `null` marking empty cells.
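As a quick way to work with these columns, here is a minimal sketch that loads the split with pandas and checks the nullable fields and the label column. The file name `arxiv_new_dataset.jsonl` is a placeholder assumption, and JSON Lines storage is assumed rather than confirmed by this page.

```python
import pandas as pd

# Placeholder path: adjust to wherever this split is actually stored.
DATA_PATH = "arxiv_new_dataset.jsonl"

# One JSON object per line, with the 16 columns described above.
df = pd.read_json(DATA_PATH, lines=True)

# Columns the schema marks as nullable may contain missing values.
nullable = ["submitter", "comments", "journal-ref", "doi", "report-no"]
print(df[nullable].isna().sum())

# The schema lists a single prediction class and probabilities in [0.95, 1].
print(df["prediction"].value_counts())
print(df["probability"].min(), df["probability"].max())
```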
2307.00588
|
Yutian Tang
|
Yutian Tang, Zhijie Liu, Zhichao Zhou, and Xiapu Luo
|
ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advancements in large language models (LLMs) have led to exceptional
success in a wide range of general-domain tasks, such as question answering
and instruction following. Moreover, LLMs have shown potential in
various software engineering applications. In this study, we present a
systematic comparison of test suites generated by the ChatGPT LLM and the
state-of-the-art SBST tool EvoSuite. Our comparison is based on several
critical factors, including correctness, readability, code coverage, and bug
detection capability. By highlighting the strengths and weaknesses of LLMs
(specifically ChatGPT) in generating unit test cases compared to EvoSuite, this
work provides valuable insights into the performance of LLMs in solving
software engineering problems. Overall, our findings underscore the potential
of LLMs in software engineering and pave the way for further research in this
area.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 15:09:40 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Tang",
"Yutian",
""
],
[
"Liu",
"Zhijie",
""
],
[
"Zhou",
"Zhichao",
""
],
[
"Luo",
"Xiapu",
""
]
] |
new_dataset
| 0.980079 |
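The nested columns (`versions`, `authors_parsed`) carry the structure visible in the record above. Below is a minimal sketch of unpacking them for one record; the dict literal simply re-types the first row shown here, and the field layout of `authors_parsed` (last name, first name, suffix, optional trailing affiliation) is inferred from the rows on this page.

```python
from datetime import datetime

# First record of this split, re-typed from the row above.
record = {
    "id": "2307.00588",
    "title": "ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation",
    "versions": [{"version": "v1", "created": "Sun, 2 Jul 2023 15:09:40 GMT"}],
    "authors_parsed": [
        ["Tang", "Yutian", ""],
        ["Liu", "Zhijie", ""],
        ["Zhou", "Zhichao", ""],
        ["Luo", "Xiapu", ""],
    ],
}

# Each entry is [last, first, suffix], sometimes with a trailing affiliation.
authors = ", ".join(
    f"{first} {last}".strip() for last, first, *_ in record["authors_parsed"]
)

# The "created" field of each version is an RFC 2822-style GMT timestamp.
v1 = datetime.strptime(record["versions"][0]["created"], "%a, %d %b %Y %H:%M:%S %Z")

print(f"{record['id']} ({v1.date()}): {record['title']}")
print(f"Authors: {authors}")
```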
2307.00592
|
Zhicheng Cai
|
Xinyue Wang, Zhicheng Cai and Chenglei Peng
|
X-MLP: A Patch Embedding-Free MLP Architecture for Vision
|
IJCNN 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks (CNNs) and vision transformers (ViT) have
achieved great success in computer vision. Recently, research on multi-layer
perceptron (MLP) architectures for vision has become popular again. Vision MLPs
are designed to be independent of convolutions and self-attention operations.
However, existing vision MLP architectures still depend on convolution for
patch embedding. We therefore propose X-MLP, an architecture built entirely
upon fully connected layers and free from patch embedding. It decouples
features thoroughly and uses MLPs to exchange information across the width,
height and channel dimensions independently and alternately. X-MLP is tested
on ten benchmark datasets and obtains better performance than other vision MLP
models on all of them. It even surpasses CNNs by a clear margin on various
datasets. Furthermore, by mathematically restoring the spatial weights, we
visualize the information communication between any pair of pixels in the
feature map and observe that long-range dependencies are captured.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 15:20:25 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Wang",
"Xinyue",
""
],
[
"Cai",
"Zhicheng",
""
],
[
"Peng",
"Chenglei",
""
]
] |
new_dataset
| 0.992896 |
2307.00653
|
Ashutosh Hathidara
|
Ashutosh Hathidara, Lalit Pandey
|
Neuro-Symbolic Sudoku Solver
|
Published as a conference paper at KDD KiML 2023
| null | null | null |
cs.AI cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks have achieved great success in some of the complex tasks
that humans can do with ease. These include image recognition/classification,
natural language processing, game playing, etc. However, modern neural networks
fail or perform poorly when trained on tasks that can be solved easily using
backtracking and traditional algorithms. Therefore, we use the architecture of
the Neural Logic Machine (NLM) and extend its functionality to solve a 9x9 game
of Sudoku. To expand the application of NLMs, we generate a random grid of
cells from a dataset of solved games and assign up to 10 new empty cells. The
goal of the game is then to find a target value ranging from 1 to 9 and fill in
the remaining empty cells while maintaining a valid configuration. In our
study, we showcase an NLM that is capable of obtaining 100% accuracy when
solving Sudokus with between 3 and 10 empty cells. The purpose of this study is
to demonstrate that NLMs can also be used for solving complex problems and
games like Sudoku. We also compare the behaviour of NLMs with that of a
backtracking algorithm by plotting the convergence time on the same problem.
With this study we show that Neural Logic Machines can be trained with
reinforcement learning on tasks where traditional deep learning architectures
fail. We also highlight the importance of symbolic learning in explaining the
systematicity of the hybrid NLM model.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 20:04:01 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Hathidara",
"Ashutosh",
""
],
[
"Pandey",
"Lalit",
""
]
] |
new_dataset
| 0.985731 |
2307.00664
|
Firat Kizilirmak
|
Firat Kizilirmak and Berrin Yanikoglu
|
CNN-BiLSTM model for English Handwriting Recognition: Comprehensive
Evaluation on the IAM Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a CNN-BiLSTM system for the problem of offline English handwriting
recognition, with extensive evaluations on the public IAM dataset, including
the effects of model size, data augmentation and the lexicon. Our best model
achieves 3.59% CER and 9.44% WER using a CNN-BiLSTM network with a CTC layer.
Test-time augmentation with rotation and shear transformations applied to the
input image is proposed to improve recognition of difficult cases and is found
to reduce the word error rate by 2.5 percentage points. We also conduct an
error analysis of our proposed method on the IAM dataset, showing hard cases of
handwriting images and exploring samples with erroneous labels. We release our
source code into the public domain to foster further research and encourage
scientific reproducibility.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 20:59:03 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Kizilirmak",
"Firat",
""
],
[
"Yanikoglu",
"Berrin",
""
]
] |
new_dataset
| 0.998398 |
2307.00716
|
Keqiang Sun
|
Junting Pan, Keqiang Sun, Yuying Ge, Hao Li, Haodong Duan, Xiaoshi Wu,
Renrui Zhang, Aojun Zhou, Zipeng Qin, Yi Wang, Jifeng Dai, Yu Qiao, Hongsheng
Li
|
JourneyDB: A Benchmark for Generative Image Understanding
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While recent advancements in vision-language models have revolutionized
multi-modal understanding, it remains unclear whether they are capable of
comprehending generated images. Compared to real data, synthetic images exhibit
a higher degree of diversity in both content and style, which makes it
significantly more difficult for models to fully comprehend them. To this end,
we present a large-scale dataset, JourneyDB, for multi-modal visual
understanding of generative images. Our curated dataset covers 4 million
diverse and high-quality generated images paired with the text prompts used to
produce them. We further design 4 benchmarks to quantify the performance of
generated-image understanding in terms of both content and style
interpretation. These benchmarks include prompt inversion, style retrieval,
image captioning and visual question answering. Lastly, we assess the
performance of current state-of-the-art multi-modal models when applied to
JourneyDB, and provide an in-depth analysis of their strengths and limitations
in generated content understanding. We hope the proposed dataset and benchmarks
will facilitate research in the field of generative content understanding.
The dataset will be available on https://journeydb.github.io.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 02:39:08 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Pan",
"Junting",
""
],
[
"Sun",
"Keqiang",
""
],
[
"Ge",
"Yuying",
""
],
[
"Li",
"Hao",
""
],
[
"Duan",
"Haodong",
""
],
[
"Wu",
"Xiaoshi",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Zhou",
"Aojun",
""
],
[
"Qin",
"Zipeng",
""
],
[
"Wang",
"Yi",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.999833 |
2307.00717
|
Yushan Han
|
Yushan Han, Hui Zhang, Honglei Zhang and Yidong Li
|
SSC3OD: Sparsely Supervised Collaborative 3D Object Detection from LiDAR
Point Clouds
|
8 pages, 3 figures, IEEE International Conference on Systems, Man,
and Cybernetics (SMC 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Collaborative 3D object detection, with its improved interaction advantage
among multiple agents, has been widely explored in autonomous driving. However,
existing collaborative 3D object detectors in a fully supervised paradigm
heavily rely on large-scale annotated 3D bounding boxes, which are
labor-intensive and time-consuming to obtain. To tackle this issue, we propose a sparsely
supervised collaborative 3D object detection framework SSC3OD, which only
requires each agent to randomly label one object in the scene. Specifically,
this model consists of two novel components, i.e., the pillar-based masked
autoencoder (Pillar-MAE) and the instance mining module. The Pillar-MAE module
aims to reason over high-level semantics in a self-supervised manner, and the
instance mining module generates high-quality pseudo labels for collaborative
detectors online. By introducing these simple yet effective mechanisms, the
proposed SSC3OD can alleviate the adverse impacts of incomplete annotations. We
generate sparse labels based on collaborative perception datasets to evaluate
our method. Extensive experiments on three large-scale datasets reveal that our
proposed SSC3OD can effectively improve the performance of sparsely supervised
collaborative 3D object detectors.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 02:42:14 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Han",
"Yushan",
""
],
[
"Zhang",
"Hui",
""
],
[
"Zhang",
"Honglei",
""
],
[
"Li",
"Yidong",
""
]
] |
new_dataset
| 0.999025 |
2307.00777
|
Zhang Liu
|
Zhang Liu and Lianfen Huang and Zhibin Gao and Manman Luo and
Seyyedali Hosseinalipour and Huaiyu Dai
|
GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for
DAG Task Scheduling over Dynamic Vehicular Clouds
|
15 pages, 12 figures, regular journal
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicular clouds (VCs) are modern platforms for processing
computation-intensive tasks across vehicles. Such tasks are often represented as
directed acyclic graphs (DAGs) consisting of interdependent vertices/subtasks
and directed edges. In this paper, we propose a graph neural network-augmented
deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over
dynamic VCs. In doing so, we first model the VC-assisted DAG task scheduling as
a Markov decision process. We then adopt a multi-head graph attention network
(GAT) to extract the features of DAG subtasks. Our developed GAT enables a
two-way aggregation of the topological information in a DAG task by
simultaneously considering predecessors and successors of each subtask. We
further introduce non-uniform DAG neighborhood sampling through codifying the
scheduling priority of different subtasks, which makes our developed GAT
generalizable to completely unseen DAG task topologies. Finally, we augment GAT
into a double deep Q-network learning module to conduct subtask-to-vehicle
assignment according to the extracted features of subtasks, while considering
the dynamics and heterogeneity of the vehicles in VCs. Through simulating
various DAG tasks under real-world movement traces of vehicles, we demonstrate
that GA-DRL outperforms existing benchmarks in terms of DAG task completion
time.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 06:41:15 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Liu",
"Zhang",
""
],
[
"Huang",
"Lianfen",
""
],
[
"Gao",
"Zhibin",
""
],
[
"Luo",
"Manman",
""
],
[
"Hosseinalipour",
"Seyyedali",
""
],
[
"Dai",
"Huaiyu",
""
]
] |
new_dataset
| 0.974137 |
2307.00782
|
Yujia Xiao
|
Yujia Xiao, Shaofei Zhang, Xi Wang, Xu Tan, Lei He, Sheng Zhao, Frank
K. Soong, Tan Lee
|
ContextSpeech: Expressive and Efficient Text-to-Speech for Paragraph
Reading
|
5 pages, 4 figures, accepted by INTERSPEECH 2023
| null | null | null |
cs.CL cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While state-of-the-art text-to-speech systems can generate natural speech of
very high quality at the sentence level, they still face great challenges in
speech generation for paragraph/long-form reading. Such deficiencies are due to
i) neglect of cross-sentence contextual information, and ii) the high
computation and memory cost of long-form synthesis. To address these issues, this work
develops a lightweight yet effective TTS system, ContextSpeech. Specifically,
we first design a memory-cached recurrence mechanism to incorporate global text
and speech context into sentence encoding. Then we construct
hierarchically-structured textual semantics to broaden the scope for global
context enhancement. Additionally, we integrate linearized self-attention to
improve model efficiency. Experiments show that ContextSpeech significantly
improves the voice quality and prosody expressiveness in paragraph reading with
competitive model efficiency. Audio samples are available at:
https://contextspeech.github.io/demo/
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 06:55:03 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Xiao",
"Yujia",
""
],
[
"Zhang",
"Shaofei",
""
],
[
"Wang",
"Xi",
""
],
[
"Tan",
"Xu",
""
],
[
"He",
"Lei",
""
],
[
"Zhao",
"Sheng",
""
],
[
"Soong",
"Frank K.",
""
],
[
"Lee",
"Tan",
""
]
] |
new_dataset
| 0.994137 |
2307.00818
|
Jing Lin
|
Jing Lin, Ailing Zeng, Shunlin Lu, Yuanhao Cai, Ruimao Zhang, Haoqian
Wang, Lei Zhang
|
Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset
|
A large-scale 3D whole-body human motion-text dataset; GitHub:
https://github.com/IDEA-Research/Motion-X
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present Motion-X, a large-scale 3D expressive whole-body
motion dataset. Existing motion datasets predominantly contain body-only poses,
lacking facial expressions, hand gestures, and fine-grained pose descriptions.
Moreover, they are primarily collected from limited laboratory scenes with
textual descriptions manually labeled, which greatly limits their scalability.
To overcome these limitations, we develop a whole-body motion and text
annotation pipeline, which can automatically annotate motion from either
single- or multi-view videos and provide comprehensive semantic labels for each
video and fine-grained whole-body pose descriptions for each frame. This
pipeline is of high precision, cost-effective, and scalable for further
research. Based on it, we construct Motion-X, which comprises 13.7M precise 3D
whole-body pose annotations (i.e., SMPL-X) covering 96K motion sequences from
massive scenes. Besides, Motion-X provides 13.7M frame-level whole-body pose
descriptions and 96K sequence-level semantic labels. Comprehensive experiments
demonstrate the accuracy of the annotation pipeline and the significant benefit
of Motion-X in enhancing expressive, diverse, and natural motion generation, as
well as 3D whole-body human mesh recovery.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 07:57:29 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Lin",
"Jing",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Lu",
"Shunlin",
""
],
[
"Cai",
"Yuanhao",
""
],
[
"Zhang",
"Ruimao",
""
],
[
"Wang",
"Haoqian",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.99989 |
2307.00842
|
Marc Habermann
|
Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian
Theobalt
|
VINECS: Video-based Neural Character Skinning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Rigging and skinning clothed human avatars is a challenging task and
traditionally requires a lot of manual work and expertise. Recent methods
addressing it either generalize across different characters or focus on
capturing the dynamics of a single character observed under different pose
configurations. However, the former methods typically predict solely static
skinning weights, which perform poorly for highly articulated poses, and the
latter ones either require dense 3D character scans in different poses or
cannot generate an explicit mesh with vertex correspondence over time. To
address these challenges, we propose a fully automated approach for creating a
fully rigged character with pose-dependent skinning weights, which can be
solely learned from multi-view video. To this end, we first acquire a rigged
template, which is then statically skinned. Next, a coordinate-based MLP learns
a skinning weight field parameterized over the position in a canonical pose
space and the respective pose. Moreover, we introduce our pose- and
view-dependent appearance field allowing us to differentiably render and
supervise the posed mesh using multi-view imagery. We show that our approach
outperforms state-of-the-art while not relying on dense 4D scans.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 08:35:53 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Liao",
"Zhouyingcheng",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Habermann",
"Marc",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.993806 |
2307.00854
|
Gilles Dowek
|
Gilles Dowek, G\'erard Huet, Benjamin Werner (LIX)
|
On the Definition of the Eta-long Normal Form in Type Systems of the
Cube
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The smallest transitive relation < on well-typed normal terms such that (i) if
t is a strict subterm of u then t < u, and (ii) if T is the normal form of the
type of t and the term t is not a sort then T < t, is well-founded in the type
systems of the cube. Thus every term admits an eta-long normal form.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 08:47:40 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Dowek",
"Gilles",
"",
"LIX"
],
[
"Huet",
"Gérard",
"",
"LIX"
],
[
"Werner",
"Benjamin",
"",
"LIX"
]
] |
new_dataset
| 0.998764 |
2307.00856
|
Xinhang Li
|
Xinhang Li, Xiangyu Zhao, Yejing Wang, Yu Liu, Yong Li, Cheng Long,
Yong Zhang, Chunxiao Xing
|
OpenSiteRec: An Open Dataset for Site Recommendation
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As a representative information retrieval task, site recommendation, which
aims at predicting the optimal sites for a brand or an institution to open new
branches in an automatic data-driven way, is beneficial and crucial for brand
development in modern business. However, there is no publicly available dataset
so far and most existing approaches are limited to an extremely small scope of
brands, which seriously hinders the research on site recommendation. Therefore,
we collect, construct and release an open comprehensive dataset, namely
OpenSiteRec, to facilitate and promote the research on site recommendation.
Specifically, OpenSiteRec leverages a heterogeneous graph schema to represent
various types of real-world entities and relations in four international
metropolises. To evaluate the performance of the existing general methods on
the site recommendation task, we conduct benchmarking experiments of several
representative recommendation models on OpenSiteRec. Furthermore, we also
highlight the potential application directions to demonstrate the wide
applicability of OpenSiteRec. We believe that our OpenSiteRec dataset is
significant and anticipated to encourage the development of advanced methods
for site recommendation. OpenSiteRec is available online at
https://OpenSiteRec.github.io/.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 08:54:32 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Li",
"Xinhang",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Wang",
"Yejing",
""
],
[
"Liu",
"Yu",
""
],
[
"Li",
"Yong",
""
],
[
"Long",
"Cheng",
""
],
[
"Zhang",
"Yong",
""
],
[
"Xing",
"Chunxiao",
""
]
] |
new_dataset
| 0.999866 |
2307.00861
|
Yuying Zou
|
Yuying Zou, Haotian Li, Yunfan Ren, Wei Xu, Yihang Li, Yixi Cai,
Shenji Zhou and Fu Zhang
|
Perch a quadrotor on planes by the ceiling effect
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perching is a promising solution for a small unmanned aerial vehicle (UAV) to
save energy and extend operation time. This paper proposes a quadrotor that can
perch on planar structures using the ceiling effect. Compared with existing
work, this perching method does not require any claws, hooks, or adhesive pads,
leading to a simpler system design. Nor does it restrict perching by surface
angle or material. The design of the quadrotor, which uses only its propeller
guards for surface contact, is presented in this paper. We also discuss the
automatic perching strategy, including trajectory generation and power
management. Experiments are conducted to verify that the approach is practical
and that the UAV can perch on planes at different angles. Energy consumption in
the perching state is assessed, showing that more than 30% of power can be
saved. Meanwhile, the quadrotor exhibits improved stability while perching
compared to when it is hovering.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 09:02:36 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zou",
"Yuying",
""
],
[
"Li",
"Haotian",
""
],
[
"Ren",
"Yunfan",
""
],
[
"Xu",
"Wei",
""
],
[
"Li",
"Yihang",
""
],
[
"Cai",
"Yixi",
""
],
[
"Zhou",
"Shenji",
""
],
[
"Zhang",
"Fu",
""
]
] |
new_dataset
| 0.998559 |
2307.00894
|
Xiaoxin Zhang
|
Xiaoxin Zhang, Martin Brandt, Xiaoye Tong, Xiaowei Tong, Wenmin Zhang,
Florian Reiner, Sizhuo Li, Feng Tian, Yuemin Yue, Weiqi Zhou, Bin Chen,
Xiangming Xiao, Rasmus Fensholt
|
Mega-cities dominate China's urban greening
| null | null | null | null |
cs.CV physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Trees play a crucial role in urban environments, offering various ecosystem
services that contribute to public health and human well-being. China has
initiated a range of urban greening policies over the past decades, however,
monitoring their impact on urban tree dynamics at a national scale has proven
challenging. In this study, we deployed nano-satellites to quantify urban tree
coverage in all major Chinese cities larger than 50 km² in 2010 and 2019. Our
findings indicate that approximately 6000 km² (11%) of urban areas were covered
by trees in 2019, and 76% of these cities experienced an increase in tree cover
compared to 2010. Notably, the increase in tree cover in mega-cities such as
Beijing and Shanghai was approximately twice as large as in most other cities
urban tree cover changes in relation to greening policies, showing clear signs
of tree cover increases but also suggesting an uneven implementation primarily
benefiting a few mega-cities.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 09:44:39 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zhang",
"Xiaoxin",
""
],
[
"Brandt",
"Martin",
""
],
[
"Tong",
"Xiaoye",
""
],
[
"Tong",
"Xiaowei",
""
],
[
"Zhang",
"Wenmin",
""
],
[
"Reiner",
"Florian",
""
],
[
"Li",
"Sizhuo",
""
],
[
"Tian",
"Feng",
""
],
[
"Yue",
"Yuemin",
""
],
[
"Zhou",
"Weiqi",
""
],
[
"Chen",
"Bin",
""
],
[
"Xiao",
"Xiangming",
""
],
[
"Fensholt",
"Rasmus",
""
]
] |
new_dataset
| 0.998628 |
2307.00926
|
Mengmeng Liu
|
Mengmeng Liu, Shuangyang Li, Baoming Bai, Giuseppe Caire
|
Reduced-Complexity Cross-Domain Iterative Detection for OTFS Modulation
via Delay-Doppler Decoupling
|
5 pages, 5 figures; this work has been accepted by SPAWC 2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a reduced-complexity cross-domain iterative detection for
orthogonal time frequency space (OTFS) modulation is proposed, which exploits
channel properties in both time and delay-Doppler domains. Specifically, we
first show that in the time domain effective channel, the path delay only
introduces interference among samples in adjacent time slots, while the Doppler
becomes a phase term that does not affect the channel sparsity. This
``band-limited'' matrix structure motivates us to apply a reduced-size linear
minimum mean square error (LMMSE) filter to eliminate the effect of delay in
the time domain, while exploiting the cross-domain iteration for minimizing the
effect of Doppler by noticing that time and Doppler form a Fourier dual pair.
The state (MSE) evolution is derived and compared with bounds to verify the
effectiveness of the proposed scheme. Simulation results demonstrate that the
proposed scheme achieves almost the same error performance as optimal
detection, but at a reduced complexity.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 10:54:59 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Liu",
"Mengmeng",
""
],
[
"Li",
"Shuangyang",
""
],
[
"Bai",
"Baoming",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.967333 |
2307.00936
|
Xiaoshuang Liang
|
Yunyou Huang, Xianglong Guan, Xiangjiang Lu, Xiaoshuang Liang, Xiuxia
Miao, Jiyue Xie, Wenjing Liu, Li Ma, Suqin Tang, Zhifei Zhang, and Jianfeng
Zhan
|
OpenAPMax: Abnormal Patterns-based Model for Real-World Alzheimer's
Disease Diagnosis
|
Alzheimer's Disease, Abnormal Patterns, Open-set Recognition,
OpenAPMax
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Alzheimer's disease (AD) cannot be reversed, but early diagnosis will
significantly benefit patients' medical treatment and care. Recent works on AD
diagnosis rest on the primary assumption that all categories are known a priori
-- a closed-set classification problem, which contrasts with the open-set
recognition problem. This assumption hinders the application of such models in
natural clinical settings. Although many open-set recognition technologies have
been proposed in other fields, they are challenging to use for AD diagnosis
directly, since 1) AD is a degenerative disease of the nervous system with
similar symptoms at each stage, making it difficult to distinguish from its
pre-state, and 2) diversified strategies for AD diagnosis are challenging to
model uniformly. In this work, inspired by the concerns of clinicians during
diagnosis, we propose an open-set recognition model, OpenAPMax, based on
abnormal patterns, to address AD diagnosis in real-world settings. OpenAPMax
first obtains the abnormal pattern of each patient relative to each known
category through statistics or a literature search, clusters the patients'
abnormal patterns, and finally uses extreme value theory (EVT) to model the
distance between each patient's abnormal pattern and the center of their
category, modifying the classification probability accordingly. We evaluate the
performance of the proposed method against recent open-set recognition methods
and obtain state-of-the-art results.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 11:21:09 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Huang",
"Yunyou",
""
],
[
"Guan",
"Xianglong",
""
],
[
"Lu",
"Xiangjiang",
""
],
[
"Liang",
"Xiaoshuang",
""
],
[
"Miao",
"Xiuxia",
""
],
[
"Xie",
"Jiyue",
""
],
[
"Liu",
"Wenjing",
""
],
[
"Ma",
"Li",
""
],
[
"Tang",
"Suqin",
""
],
[
"Zhang",
"Zhifei",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.998346 |
2307.00965
|
Xiaoshuang Liang
|
Yunyou Huang, Xiaoshuang Liang, Xiangjiang Lu, Xiuxia Miao, Jiyue Xie,
Wenjing Liu, Fan Zhang, Guoxin Kang, Li Ma, Suqin Tang, Zhifei Zhang,
Jianfeng Zhan
|
OpenClinicalAI: An Open and Dynamic Model for Alzheimer's Disease
Diagnosis
|
Real-world clinical setting,Alzheimer's disease,diagnose,AI,deep
learning. arXiv admin note: text overlap with arXiv:2109.04004
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although Alzheimer's disease (AD) cannot be reversed or cured, timely
diagnosis can significantly reduce the burden of treatment and care. Current
research on AD diagnosis models usually regards the diagnosis task as a typical
classification task with two primary assumptions: 1) All target categories are
known a priori; 2) The diagnostic strategy for each patient is consistent, that
is, the number and type of model input data for each patient are the same.
However, real-world clinical settings are open, with complexity and uncertainty
in terms of both subjects and the resources of the medical institutions. This
means that diagnostic models may encounter unseen disease categories and need
to dynamically develop diagnostic strategies based on the subject's specific
circumstances and available medical resources. Thus, the AD diagnosis task is
tangled and coupled with the diagnosis strategy formulation. To promote the
application of diagnostic systems in real-world clinical settings, we propose
OpenClinicalAI for direct AD diagnosis in complex and uncertain clinical
settings. This is the first powerful end-to-end model to dynamically formulate
diagnostic strategies and provide diagnostic results based on the subject's
conditions and available medical resources. OpenClinicalAI combines
reciprocally coupled deep multiaction reinforcement learning (DMARL) for
diagnostic strategy formulation and multicenter meta-learning (MCML) for
open-set recognition. The experimental results show that OpenClinicalAI
achieves better performance with fewer clinical examinations than the
state-of-the-art model. Our method provides an opportunity to embed the AD
diagnostic system into the current health care system to cooperate with
clinicians to improve current health care.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 12:35:03 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Huang",
"Yunyou",
""
],
[
"Liang",
"Xiaoshuang",
""
],
[
"Lu",
"Xiangjiang",
""
],
[
"Miao",
"Xiuxia",
""
],
[
"Xie",
"Jiyue",
""
],
[
"Liu",
"Wenjing",
""
],
[
"Zhang",
"Fan",
""
],
[
"Kang",
"Guoxin",
""
],
[
"Ma",
"Li",
""
],
[
"Tang",
"Suqin",
""
],
[
"Zhang",
"Zhifei",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.999746 |
2307.01009
|
Roberto Ammendola
|
Roberto Ammendola, Andrea Biagioni, Carlotta Chiarini, Andrea
Ciardiello, Paolo Cretaro, Ottorino Frezza, Francesca Lo Cicero, Alessandro
Lonardo, Michele Martinelli, Pier Stanislao Paolucci, Cristian Rossi,
Francesco Simula, Matteo Turisini, Piero Vicini
|
APEIRON: composing smart TDAQ systems for high energy physics
experiments
|
Under review in Journal of Physics: Conference Series (ACAT 2022)
| null | null | null |
cs.DC physics.ins-det
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
APEIRON is a framework encompassing the general architecture of a distributed
heterogeneous processing platform and the corresponding software stack, from
the low-level device drivers up to the high-level programming model. The
framework is designed to be efficiently used for studying, prototyping and
deploying smart trigger and data acquisition (TDAQ) systems for high energy
physics experiments.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 13:41:13 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Ammendola",
"Roberto",
""
],
[
"Biagioni",
"Andrea",
""
],
[
"Chiarini",
"Carlotta",
""
],
[
"Ciardiello",
"Andrea",
""
],
[
"Cretaro",
"Paolo",
""
],
[
"Frezza",
"Ottorino",
""
],
[
"Cicero",
"Francesca Lo",
""
],
[
"Lonardo",
"Alessandro",
""
],
[
"Martinelli",
"Michele",
""
],
[
"Paolucci",
"Pier Stanislao",
""
],
[
"Rossi",
"Cristian",
""
],
[
"Simula",
"Francesco",
""
],
[
"Turisini",
"Matteo",
""
],
[
"Vicini",
"Piero",
""
]
] |
new_dataset
| 0.984595 |
2307.01024
|
Liangliang Yao
|
Liangliang Yao, Haobo Zuo, Guangze Zheng, Changhong Fu, Jia Pan
|
SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain adaptation (DA) has demonstrated significant promise for real-time
nighttime unmanned aerial vehicle (UAV) tracking. However, state-of-the-art
(SOTA) DA methods still lack potential objects with accurate pixel-level
locations and boundaries from which to generate high-quality target-domain
training samples. This key issue constrains the transfer learning of real-time
daytime SOTA trackers to challenging nighttime UAV tracking. Recently, the
notable Segment Anything Model (SAM) has achieved a remarkable zero-shot
generalization ability to discover abundant potential objects, owing to its
huge data-driven training approach. To solve the aforementioned issue, this
work proposes a novel SAM-powered DA framework for real-time nighttime UAV
tracking, i.e., SAM-DA. Specifically, an innovative SAM-powered target-domain
training sample swelling is designed to derive enormous numbers of high-quality
target-domain training samples from every single raw nighttime image. This
novel one-to-many method significantly expands the high-quality target-domain
training set for DA. Comprehensive experiments on extensive nighttime UAV
videos prove the robustness and domain adaptability of SAM-DA for nighttime UAV
tracking. In particular, compared to SOTA DA, SAM-DA achieves better
performance with fewer raw nighttime images, i.e., fewer-better training. This
economized training approach facilitates the quick validation and deployment of
algorithms for UAVs. The code is available at https://github.com/vision4robotics/SAM-DA.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 13:55:44 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Yao",
"Liangliang",
""
],
[
"Zuo",
"Haobo",
""
],
[
"Zheng",
"Guangze",
""
],
[
"Fu",
"Changhong",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.963206 |
2307.01064
|
Marija Ivanovska
|
Marija Ivanovska, Vitomir Struc, Janez Pers
|
TomatoDIFF: On-plant Tomato Segmentation with Denoising Diffusion Models
|
Accepted at 18th International Conference on Machine Vision
Applications (MVA)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Artificial intelligence applications enable farmers to optimize crop growth
and production while reducing costs and environmental impact. Computer
vision-based algorithms in particular, are commonly used for fruit
segmentation, enabling in-depth analysis of the harvest quality and accurate
yield estimation. In this paper, we propose TomatoDIFF, a novel diffusion-based
model for semantic segmentation of on-plant tomatoes. When evaluated against
other competitive methods, our model demonstrates state-of-the-art (SOTA)
performance, even in challenging environments with highly occluded fruits.
Additionally, we introduce Tomatopia, a new, large and challenging dataset of
greenhouse tomatoes. The dataset comprises high-resolution RGB-D images and
pixel-level annotations of the fruits.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 14:43:40 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Ivanovska",
"Marija",
""
],
[
"Struc",
"Vitomir",
""
],
[
"Pers",
"Janez",
""
]
] |
new_dataset
| 0.998364 |
2307.01092
|
Michael Perk
|
S\'andor P. Fekete, Dominik Krupke, Michael Perk, Christian Rieck and
Christian Scheffer
|
The Lawn Mowing Problem: From Algebra to Algorithms
|
23 pages, 12 figures
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a given polygonal region $P$, the Lawn Mowing Problem (LMP) asks for a
shortest tour $T$ that gets within Euclidean distance 1/2 of every point in
$P$; this is equivalent to computing a shortest tour for a unit-diameter cutter
$C$ that covers all of $P$. As a generalization of the Traveling Salesman
Problem, the LMP is NP-hard; unlike the discrete TSP, however, the LMP has
defied efforts to achieve exact solutions, due to its combination of
combinatorial complexity with continuous geometry.
We provide a number of new contributions that provide insights into the
involved difficulties, as well as positive results that enable both theoretical
and practical progress. (1) We show that the LMP is algebraically hard: it is
not solvable by radicals over the field of rationals, even for the simple case
in which $P$ is a $2\times 2$ square. This implies that it is impossible to
compute exact optimal solutions under models of computation that rely on
elementary arithmetic operations and the extraction of $k$th roots, and
explains the perceived practical difficulty. (2) We exploit this algebraic
analysis for the natural class of polygons with axis-parallel edges and integer
vertices (i.e., polyominoes), highlighting the relevance of turn-cost
minimization for Lawn Mowing tours, and leading to a general construction
method for feasible tours. (3) We show that this construction method achieves
theoretical worst-case guarantees that improve previous approximation factors
for polyominoes. (4) We demonstrate the practical usefulness \emph{beyond
polyominoes} by performing an extensive practical study on a spectrum of more
general benchmark polygons: We obtain solutions that are better than the
previous best values by Fekete et al., for instance sizes up to $20$ times
larger.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 15:09:37 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Fekete",
"Sándor P.",
""
],
[
"Krupke",
"Dominik",
""
],
[
"Perk",
"Michael",
""
],
[
"Rieck",
"Christian",
""
],
[
"Scheffer",
"Christian",
""
]
] |
new_dataset
| 0.966328 |
2307.01120
|
Federico Simonetta
|
Ana Llorens, Federico Simonetta, Mart\'in Serrano, \'Alvaro Torrente
|
musif: a Python package for symbolic music feature extraction
|
Published at the Sound and Music Computing Conference 2023
| null | null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this work, we introduce musif, a Python package that facilitates the
automatic extraction of features from symbolic music scores. The package
includes the implementation of a large number of features, which have been
developed by a team of experts in musicology, music theory, statistics, and
computer science. Additionally, the package allows for the easy creation of
custom features using commonly available Python libraries. musif is primarily
geared towards processing high-quality musicological data encoded in MusicXML
format, but also supports other formats commonly used in music information
retrieval tasks, including MIDI, MEI, Kern, and others. We provide
comprehensive documentation and tutorials to aid in extending the framework and
to help new and inexperienced users get started with it.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 15:49:15 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Llorens",
"Ana",
""
],
[
"Simonetta",
"Federico",
""
],
[
"Serrano",
"Martín",
""
],
[
"Torrente",
"Álvaro",
""
]
] |
new_dataset
| 0.999213 |
2307.01139
|
Sameera Horawalavithana
|
Sameera Horawalavithana, Sai Munikoti, Ian Stewart, Henry Kvinge
|
SCITUNE: Aligning Large Language Models with Scientific Multimodal
Instructions
|
Preprint. Work in progress
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Instruction finetuning is a popular paradigm for aligning large language models
(LLMs) with human intent. Despite its popularity, this idea is less explored as
a way of aligning existing foundation models with scientific disciplines,
concepts and goals. In this work, we present SciTune as a tuning
framework to improve the ability of LLMs to follow scientific multimodal
instructions. To test our methodology, we use a human-generated scientific
instruction tuning dataset and train a large multimodal model LLaMA-SciTune
that connects a vision encoder and LLM for science-focused visual and language
understanding. In comparison to the models that are finetuned with machine
generated data only, LLaMA-SciTune surpasses human performance on average and
in many sub-categories on the ScienceQA benchmark.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 16:25:49 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Horawalavithana",
"Sameera",
""
],
[
"Munikoti",
"Sai",
""
],
[
"Stewart",
"Ian",
""
],
[
"Kvinge",
"Henry",
""
]
] |
new_dataset
| 0.999606 |
2307.01168
|
Vitor Fortes Rey
|
Vitor Fortes Rey, Dominique Nshimyimana, Paul Lukowicz
|
Don't freeze: Finetune encoders for better Self-Supervised HAR
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, self-supervised learning has been proposed in the field of human
activity recognition as a solution to the labelled-data availability problem.
The idea is that by using pretext tasks such as reconstruction or contrastive
predictive coding, useful representations can be learned that can then be used
for classification. Those approaches follow the pretrain, freeze and fine-tune
procedure. In this paper we show how a simple change - not freezing the
representation - leads to substantial performance gains across pretext tasks.
The improvement is found in all four investigated datasets and across all four
pretext tasks, and is inversely proportional to the amount of labelled data.
Moreover, the effect is present whether the pretext task is carried out on the
Capture24 dataset or directly on unlabelled data of the target dataset.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 17:23:34 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Rey",
"Vitor Fortes",
""
],
[
"Nshimyimana",
"Dominique",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.992907 |
2307.01187
|
Xiang Li
|
Haixing Dai, Chong Ma, Zhengliang Liu, Yiwei Li, Peng Shu, Xiaozheng
Wei, Lin Zhao, Zihao Wu, Dajiang Zhu, Wei Liu, Quanzheng Li, Tianming Liu,
and Xiang Li
|
SAMAug: Point Prompt Augmentation for Segment Anything Model
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces SAMAug, a novel visual point augmentation method for
the Segment Anything Model (SAM) that enhances interactive image segmentation
performance. SAMAug generates augmented point prompts to provide more
information to SAM. From the initial point prompt, SAM produces the initial
mask, which is then fed into our proposed SAMAug to generate augmented point
prompts. By incorporating these extra points, SAM can generate augmented
segmentation masks based on the augmented point prompts and the initial prompt,
resulting in improved segmentation performance. We evaluate four point
augmentation techniques: random selection, maximum difference entropy, maximum
distance, and a saliency model. Experiments on the COCO, Fundus, and Chest
X-ray datasets demonstrate that SAMAug can boost SAM's segmentation results,
especially using the maximum distance and saliency model methods. SAMAug
underscores the potential of visual prompt engineering to advance interactive
computer vision models.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 17:52:44 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Dai",
"Haixing",
""
],
[
"Ma",
"Chong",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Li",
"Yiwei",
""
],
[
"Shu",
"Peng",
""
],
[
"Wei",
"Xiaozheng",
""
],
[
"Zhao",
"Lin",
""
],
[
"Wu",
"Zihao",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Liu",
"Wei",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Liu",
"Tianming",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.998357 |
2307.01197
|
Frano Raji\v{c}
|
Frano Raji\v{c}, Lei Ke, Yu-Wing Tai, Chi-Keung Tang, Martin
Danelljan, Fisher Yu
|
Segment Anything Meets Point Tracking
|
We propose SAM-PT to extend SAM to zero-shot video segmentation with
point-based tracking. Github: https://github.com/SysCV/sam-pt
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Segment Anything Model (SAM) has established itself as a powerful
zero-shot image segmentation model, employing interactive prompts such as
points to generate masks. This paper presents SAM-PT, a method extending SAM's
capability to tracking and segmenting anything in dynamic videos. SAM-PT
leverages robust and sparse point selection and propagation techniques for mask
generation, demonstrating that a SAM-based segmentation tracker can yield
strong zero-shot performance across popular video object segmentation
benchmarks, including DAVIS, YouTube-VOS, and MOSE. Compared to traditional
object-centric mask propagation strategies, we uniquely use point propagation
to exploit local structure information that is agnostic to object semantics. We
highlight the merits of point-based tracking through direct evaluation on the
zero-shot open-world Unidentified Video Objects (UVO) benchmark. To further
enhance our approach, we utilize K-Medoids clustering for point initialization
and track both positive and negative points to clearly distinguish the target
object. We also employ multiple mask decoding passes for mask refinement and
devise a point re-initialization strategy to improve tracking accuracy. Our
code integrates different point trackers and video segmentation benchmarks and
will be released at https://github.com/SysCV/sam-pt.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 17:58:01 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Rajič",
"Frano",
""
],
[
"Ke",
"Lei",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Yu",
"Fisher",
""
]
] |
new_dataset
| 0.997374 |
2307.01200
|
Hongwen Zhang
|
Yuxiang Zhang, Hongwen Zhang, Liangxiao Hu, Hongwei Yi, Shengping
Zhang, Yebin Liu
|
Real-time Monocular Full-body Capture in World Space via Sequential
Proxy-to-Motion Learning
|
Project Page: https://liuyebin.com/proxycap
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning-based approaches to monocular motion capture have recently shown
promising results by learning to regress in a data-driven manner. However, due
to the challenges in data collection and network designs, it remains
challenging for existing solutions to achieve real-time full-body capture while
being accurate in world space. In this work, we contribute a sequential
proxy-to-motion learning scheme together with a proxy dataset of 2D skeleton
sequences and 3D rotational motions in world space. Such proxy data enables us
to build a learning-based network with accurate full-body supervision while
also mitigating the generalization issues. For more accurate and physically
plausible predictions, a contact-aware neural motion descent module is proposed
in our network so that it can be aware of foot-ground contact and motion
misalignment with the proxy observations. Additionally, we share body-hand
context information in our network for wrist pose recovery that is more
compatible with the full-body model. With the proposed learning-based solution, we
demonstrate the first real-time monocular full-body capture system with
plausible foot-ground contact in world space. More video results can be found
at our project page: https://liuyebin.com/proxycap.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 17:59:45 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zhang",
"Yuxiang",
""
],
[
"Zhang",
"Hongwen",
""
],
[
"Hu",
"Liangxiao",
""
],
[
"Yi",
"Hongwei",
""
],
[
"Zhang",
"Shengping",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.992443 |
2012.04803
|
Harnaik Dhami
|
Harnaik Dhami, Kevin Yu, Troi Williams, Vineeth Vajipey, and Pratap
Tokekar
|
GATSBI: An Online GTSP-Based Algorithm for Targeted Surface Bridge
Inspection
|
8 pages, 12 figures, 2 tables. Accepted to ICUAS 2023
| null |
10.1109/ICUAS57906.2023.10156013
| null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We study the problem of visual surface inspection of a bridge for defects
using an Unmanned Aerial Vehicle (UAV). We do not assume that the geometric
model of the bridge is known beforehand. Our planner, termed GATSBI, plans a
path in a receding horizon fashion to inspect all points on the surface of the
bridge. The input to GATSBI consists of a 3D occupancy map created online with
LiDAR scans. Occupied voxels corresponding to the bridge in this map are
semantically segmented and used to create a bridge-only occupancy map.
Inspecting a bridge voxel requires the UAV to take images from a desired
viewing angle and distance. We then create a Generalized Traveling Salesperson
Problem (GTSP) instance to cluster candidate viewpoints for inspecting the
bridge voxels and use an off-the-shelf GTSP solver to find the optimal path for
the given instance. As the algorithm sees more parts of the environment over
time, it replans the path to inspect novel parts of the bridge while avoiding
obstacles. We evaluate the performance of our algorithm through high-fidelity
simulations conducted in AirSim and real-world experiments. We compare the
performance of GATSBI with a classical exploration algorithm. Our evaluation
reveals that targeting the inspection to only the segmented bridge voxels and
planning carefully using a GTSP solver leads to a more efficient and thorough
inspection than the baseline algorithm.
|
[
{
"version": "v1",
"created": "Wed, 9 Dec 2020 00:34:46 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 15:14:02 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 15:59:57 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Dhami",
"Harnaik",
""
],
[
"Yu",
"Kevin",
""
],
[
"Williams",
"Troi",
""
],
[
"Vajipey",
"Vineeth",
""
],
[
"Tokekar",
"Pratap",
""
]
] |
new_dataset
| 0.999662 |
2205.03929
|
V\'ictor Mayoral Vilches
|
V\'ictor Mayoral-Vilches, Sabrina M. Neuman, Brian Plancher and Vijay
Janapa Reddi
|
RobotCore: An Open Architecture for Hardware Acceleration in ROS 2
| null | null | null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Hardware acceleration can revolutionize robotics, enabling new applications
by speeding up robot response times while remaining power-efficient. However,
the diversity of acceleration options makes it difficult for roboticists to
easily deploy accelerated systems without expertise in each specific hardware
platform. In this work, we address this challenge with RobotCore, an
architecture to integrate hardware acceleration in the widely-used ROS 2
robotics software framework. This architecture is target-agnostic (supports
edge, workstation, data center, or cloud targets) and accelerator-agnostic
(supports both FPGAs and GPUs). It builds on top of the common ROS 2 build
system and tools and is easily portable across different research and
commercial solutions through a new firmware layer. We also leverage the Linux
Tracing Toolkit next generation (LTTng) for low-overhead real-time tracing and
benchmarking. To demonstrate the acceleration enabled by this architecture, we
use it to deploy a ROS 2 perception computational graph on a CPU and FPGA. We
employ our integrated tracing and benchmarking to analyze bottlenecks,
uncovering insights that guide us to improve FPGA communication efficiency. In
particular, we design an intra-FPGA ROS 2 node communication queue to enable
faster data flows, and use it in conjunction with FPGA-accelerated nodes to
achieve a 24.42% speedup over a CPU.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 18:15:11 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 13:30:11 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Mayoral-Vilches",
"Víctor",
""
],
[
"Neuman",
"Sabrina M.",
""
],
[
"Plancher",
"Brian",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
new_dataset
| 0.996452 |
2212.14193
|
Shengqin Jiang
|
Shengqin Jiang, Qing Wang, Fengna Cheng, Yuankai Qi, Qingshan Liu
|
A Unified Object Counting Network with Object Occupation Prior
|
Accepted by IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO
TECHNOLOGY; The dataset and code are available at:
https://github.com/Tanyjiang/EOCO
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The counting task, which plays a fundamental role in numerous applications
(e.g., crowd counting, traffic statistics), aims to predict the number of
objects at various densities. Existing object counting tasks are designed for a
single object class. In the real world, however, it is inevitable to encounter
newly arriving data with new classes. We name this scenario \textit{evolving
object counting}. In this paper, we build the first evolving object counting
dataset and propose a unified object counting network as a first attempt to
address this task. The proposed model consists of two key components: a
class-agnostic mask module and a class-incremental module. The class-agnostic
mask module learns a generic object occupation prior by predicting a
class-agnostic binary mask (e.g., 1 denotes that an object exists at the given
position in an image and 0 otherwise). The class-incremental module handles
newly arriving classes and provides discriminative class guidance for density
map prediction. The combined outputs of the class-agnostic mask module and the
image feature extractor are used to predict the final density map. When new
classes arrive, we first add new neural nodes to the last regression and
classification layers of the class-incremental module. Then, instead of
retraining the model from scratch, we utilize knowledge distillation to help
the model remember what it has already learned about previous object classes.
We also employ a support sample bank to store a small number of typical
training samples of each class, which are used to prevent the model from
forgetting key information about old data. With this design, our model can
efficiently and effectively adapt to new classes while maintaining good
performance on already-seen data without large-scale retraining. Extensive
experiments on the collected dataset demonstrate its favorable performance.
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 06:42:51 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 07:35:15 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jun 2023 12:26:50 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Jiang",
"Shengqin",
""
],
[
"Wang",
"Qing",
""
],
[
"Cheng",
"Fengna",
""
],
[
"Qi",
"Yuankai",
""
],
[
"Liu",
"Qingshan",
""
]
] |
new_dataset
| 0.995987 |
2301.09521
|
Ralph Peeters
|
Ralph Peeters, Reng Chiz Der, Christian Bizer
|
WDC Products: A Multi-Dimensional Entity Matching Benchmark
|
Accepted and to be published in Proceedings of EDBT 2024
(https://dastlab.github.io/edbticdt2024/)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The difficulty of an entity matching task depends on a combination of
multiple factors such as the amount of corner-case pairs, the fraction of
entities in the test set that have not been seen during training, and the size
of the development set. Current entity matching benchmarks usually represent
single points in the space along such dimensions or they provide for the
evaluation of matching methods along a single dimension, for instance the
amount of training data. This paper presents WDC Products, an entity matching
benchmark which provides for the systematic evaluation of matching systems
along combinations of three dimensions while relying on real-world data. The
three dimensions are (i) amount of corner-cases (ii) generalization to unseen
entities, and (iii) development set size (training set plus validation set).
Generalization to unseen entities is a dimension not covered by any of the
existing English-language benchmarks yet but is crucial for evaluating the
robustness of entity matching systems. Instead of learning how to match entity
pairs, entity matching can also be formulated as a multi-class classification
task that requires the matcher to recognize individual entities. WDC Products
is the first benchmark that provides a pair-wise and a multi-class formulation
of the same tasks. We evaluate WDC Products using several state-of-the-art
matching systems, including Ditto, HierGAT, and R-SupCon. The evaluation shows
that all matching systems struggle with unseen entities to varying degrees. It
also shows that, for entity matching, contrastive learning is more
training-data efficient than cross-encoders.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 16:12:18 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 15:59:31 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Peeters",
"Ralph",
""
],
[
"Der",
"Reng Chiz",
""
],
[
"Bizer",
"Christian",
""
]
] |
new_dataset
| 0.999537 |
2303.13681
|
Nathaniel Hanson
|
Gary Lvov, Mark Zolotas, Nathaniel Hanson, Austin Allison, Xavier
Hubbard, Michael Carvajal, Taskin Padir
|
Mobile MoCap: Retroreflector Localization On-The-Go
| null | null | null | null |
cs.RO eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Motion capture through tracking retroreflectors obtains highly accurate pose
estimation, which is frequently used in robotics. Unlike commercial motion
capture systems, fiducial marker-based tracking methods, such as AprilTags, can
perform relative localization without requiring a static camera setup. However,
popular pose estimation methods based on fiducial markers have lower
localization accuracy than commercial motion capture systems. We propose Mobile
MoCap, a system that utilizes inexpensive near-infrared cameras for accurate
relative localization even while in motion. We present a retroreflector feature
detector that performs 6-DoF (six degrees-of-freedom) tracking and operates
with minimal camera exposure times to reduce motion blur. To evaluate the
proposed localization technique while in motion, we mount our Mobile MoCap
system, as well as an RGB camera to benchmark against fiducial markers, onto a
precision-controlled linear rail and servo. The fiducial marker approach
employs AprilTags, which are pervasively used for localization in robotics. We
evaluate the two systems at varying distances, marker viewing angles, and
relative velocities. Across all experimental conditions, our stereo-based
Mobile MoCap system obtains higher position and orientation accuracy than the
fiducial approach.
The code for Mobile MoCap is implemented in ROS 2 and made publicly available
at https://github.com/RIVeR-Lab/mobile_mocap.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 21:29:17 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 05:02:17 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Lvov",
"Gary",
""
],
[
"Zolotas",
"Mark",
""
],
[
"Hanson",
"Nathaniel",
""
],
[
"Allison",
"Austin",
""
],
[
"Hubbard",
"Xavier",
""
],
[
"Carvajal",
"Michael",
""
],
[
"Padir",
"Taskin",
""
]
] |
new_dataset
| 0.976462 |
2304.11110
|
Vuthea Chheang
|
Vuthea Chheang, Rakshith Lokesh, Amit Chaudhari, Qile Wang, Lauren
Baron, Behdokht Kiafar, Sagar Doshi, Erik Thostenson, Joshua Cashaback,
Roghayeh Leila Barmaki
|
Immersive Virtual Reality and Robotics for Upper Extremity
Rehabilitation
|
9 pages, 6 figures
| null | null | null |
cs.HC cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Stroke patients often experience upper limb impairments that restrict their
mobility and daily activities. Physical therapy (PT) is the most effective
method to improve impairments, but low patient adherence and participation in
PT exercises pose significant challenges. To overcome these barriers, a
combination of virtual reality (VR) and robotics in PT is promising. However,
few systems effectively integrate VR with robotics, especially for upper limb
rehabilitation. This work introduces a new virtual rehabilitation solution that
combines VR with robotics and a wearable sensor to analyze elbow joint
movements. The framework also enhances the capabilities of a traditional
robotic device (KinArm) used for motor dysfunction assessment and
rehabilitation. A pilot user study (n = 16) was conducted to evaluate the
effectiveness and usability of the proposed VR framework. We used a two-way
repeated measures experimental design where participants performed two tasks
(Circle and Diamond) with two conditions (VR and VR KinArm). We observed no
significant differences in the main effect of conditions for task completion
time. However, there were significant differences in both the normalized number
of mistakes and recorded elbow joint angles (captured as resistance change
values from the wearable sleeve sensor) between the Circle and Diamond tasks.
Additionally, we report the system usability, task load, and presence in the
proposed VR framework. This system demonstrates the potential advantages of an
immersive, multi-sensory approach and provides future avenues for research in
developing more cost-effective, tailored, and personalized upper limb solutions
for home therapy applications.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 16:28:31 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 20:02:20 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Chheang",
"Vuthea",
""
],
[
"Lokesh",
"Rakshith",
""
],
[
"Chaudhari",
"Amit",
""
],
[
"Wang",
"Qile",
""
],
[
"Baron",
"Lauren",
""
],
[
"Kiafar",
"Behdokht",
""
],
[
"Doshi",
"Sagar",
""
],
[
"Thostenson",
"Erik",
""
],
[
"Cashaback",
"Joshua",
""
],
[
"Barmaki",
"Roghayeh Leila",
""
]
] |
new_dataset
| 0.999096 |
2304.13552
|
Simranjeet Singh
|
Simranjeet Singh, Omar Ghazal, Chandan Kumar Jha, Vikas Rana, Rolf
Drechsler, Rishad Shafik, Alex Yakovlev, Sachin Patkar, Farhad Merchant
|
Finite State Automata Design using 1T1R ReRAM Crossbar
|
Accepted by 21st IEEE Interregional NEWCAS Conference 2023 (NEWCAS
2023)
| null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Data movement costs constitute a significant bottleneck in modern machine
learning (ML) systems. When combined with the computational complexity of
algorithms, such as neural networks, designing hardware accelerators with low
energy footprint remains challenging. Finite state automata (FSA) constitute a
type of computation model used as a low-complexity learning unit in ML systems.
The implementation of an FSA consists of a number of memory states. However, an
FSA can only be in one of these states at a given time. It switches to another state based
on the present state and input to the FSA. Due to its natural synergy with
memory, it is a promising candidate for in-memory computing for reduced data
movement costs. This work focuses on a novel FSA implementation using resistive
RAM (ReRAM) for state storage in series with a CMOS transistor for biasing
controls. We propose using multi-level ReRAM technology capable of
transitioning between states depending on bias pulse amplitude and duration. We
use an asynchronous control circuit for writing each ReRAM-transistor cell for
the on-demand switching of the FSA. We investigate the impact of the
device-to-device and cycle-to-cycle variations on the cell and show that FSA
transitions can be seamlessly achieved without degradation of performance.
Through extensive experimental evaluation, we demonstrate the implementation of
an FSA on a 1T1R ReRAM crossbar.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 13:21:17 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 11:28:04 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Singh",
"Simranjeet",
""
],
[
"Ghazal",
"Omar",
""
],
[
"Jha",
"Chandan Kumar",
""
],
[
"Rana",
"Vikas",
""
],
[
"Drechsler",
"Rolf",
""
],
[
"Shafik",
"Rishad",
""
],
[
"Yakovlev",
"Alex",
""
],
[
"Patkar",
"Sachin",
""
],
[
"Merchant",
"Farhad",
""
]
] |
new_dataset
| 0.955604 |
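A toy software model of the idea this abstract describes — FSA states stored as resistance levels of a multi-level ReRAM cell, with transitions driven by bias pulses. The pulse amplitudes, durations, and switching rule below are invented placeholders, not the paper's device parameters:

```python
# Illustrative only: states as resistance levels, transitions as bias pulses.
class MultiLevelReRAMCell:
    def __init__(self, levels=4):
        self.levels = levels
        self.state = 0  # resistance level encoding the current FSA state

    def apply_pulse(self, amplitude, duration):
        # Toy switching rule: a stronger/longer pulse moves more levels.
        step = int(amplitude * duration)
        self.state = max(0, min(self.levels - 1, self.state + step))

# FSA transition table: (state, input_bit) -> (next_state, (amplitude, duration))
TRANSITIONS = {
    (0, 0): (0, (0.0, 0.0)), (0, 1): (1, (1.0, 1.0)),
    (1, 0): (0, (-1.0, 1.0)), (1, 1): (2, (1.0, 1.0)),
    (2, 0): (0, (-1.0, 2.0)), (2, 1): (2, (0.0, 0.0)),
}

cell = MultiLevelReRAMCell()
for bit in [1, 1, 0, 1]:
    next_state, (amp, dur) = TRANSITIONS[(cell.state, bit)]
    cell.apply_pulse(amp, dur)
    assert cell.state == next_state  # the pulse realizes the FSA transition
print("final state:", cell.state)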
2306.07083
|
Changguang Wu
|
Changguang Wu, Jiangxin Dong, Jinhui Tang
|
LUT-GCE: Lookup Table Global Curve Estimation for Fast Low-light Image
Enhancement
|
spelling error
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present an effective and efficient approach for low-light image
enhancement, named Lookup Table Global Curve Estimation (LUT-GCE). In contrast
to existing curve-based methods with pixel-wise adjustment, we propose to
estimate a global curve for the entire image that allows corrections for both
under- and over-exposure. Specifically, we develop a novel cubic curve
formulation for light enhancement, which enables an image-adaptive and
pixel-independent curve for the range adjustment of an image. We then propose a
global curve estimation network (GCENet), a very lightweight network with only 25.4k
parameters. To further improve the inference speed, a lookup table method is
employed for fast retrieval. In addition, a novel histogram smoothness loss is
designed to enable zero-shot learning, which is able to improve the contrast of
the image and recover clearer details. Quantitative and qualitative results
demonstrate the effectiveness of the proposed approach. Furthermore, our
approach outperforms the state of the art in terms of inference speed,
especially on high-definition images (e.g., 1080p and 4k).
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 12:53:06 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 15:33:45 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Wu",
"Changguang",
""
],
[
"Dong",
"Jiangxin",
""
],
[
"Tang",
"Jinhui",
""
]
] |
new_dataset
| 0.998608 |
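A minimal sketch of the core mechanism this abstract describes: one global, pixel-independent cubic curve applied to an image via a lookup table. The cubic form and coefficient values here are assumptions; the paper learns image-adaptive coefficients with its GCENet:

```python
import numpy as np

def build_lut(c1, c2, c3, size=256):
    # Precompute the global cubic curve once per image into a small table.
    x = np.linspace(0.0, 1.0, size)
    return np.clip(c3 * x**3 + c2 * x**2 + c1 * x, 0.0, 1.0)

def enhance(image_u8, lut):
    # One table lookup per pixel: fast even on high-definition images.
    return (lut[image_u8] * 255.0).astype(np.uint8)

lut = build_lut(c1=1.8, c2=-1.2, c3=0.4)        # brightens shadows (toy values)
dark = np.random.randint(0, 80, (1080, 1920), dtype=np.uint8)
print(enhance(dark, lut).mean() > dark.mean())  # True: the image got brighter
```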
2306.14752
|
WenHui Lei
|
Wenhui Lei, Xu Wei, Xiaofan Zhang, Kang Li, Shaoting Zhang
|
MedLSAM: Localize and Segment Anything Model for 3D Medical Images
|
Work in Progress. Code is public at
https://github.com/openmedlab/MedLSAM
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Segment Anything Model (SAM) has recently emerged as a groundbreaking
model in the field of image segmentation. Nevertheless, both the original SAM
and its medical adaptations necessitate slice-by-slice annotations, which
directly increase the annotation workload with the size of the dataset. We
propose MedLSAM to address this issue, ensuring a constant annotation workload
irrespective of dataset size and thereby simplifying the annotation process.
Our model introduces a few-shot localization framework capable of localizing
any target anatomical part within the body. To achieve this, we develop a
Localize Anything Model for 3D Medical Images (MedLAM), utilizing two
self-supervision tasks: relative distance regression (RDR) and multi-scale
similarity (MSS) across a comprehensive dataset of 14,012 CT scans. We then
establish a methodology for accurate segmentation by integrating MedLAM with
SAM. By annotating only six extreme points across three directions on a few
templates, our model can autonomously identify the target anatomical region on
all data scheduled for annotation. This allows our framework to generate a 2D
bounding box for every slice of the image, which is then leveraged by SAM to
carry out segmentations. We conducted experiments on two 3D datasets covering
38 organs and found that MedLSAM matches the performance of SAM and its medical
adaptations while requiring only minimal extreme point annotations for the
entire dataset. Furthermore, MedLAM has the potential to be seamlessly
integrated with future 3D SAM models, paving the way for enhanced performance.
Our code is public at https://github.com/openmedlab/MedLSAM.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 15:09:02 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 06:38:25 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Lei",
"Wenhui",
""
],
[
"Wei",
"Xu",
""
],
[
"Zhang",
"Xiaofan",
""
],
[
"Li",
"Kang",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
new_dataset
| 0.998737 |
2306.17175
|
Rakhilya Lee Mekhtieva
|
Rakhilya Lee Mekhtieva, Brandon Forbes, Dalal Alrajeh, Brendan
Delaney, Alessandra Russo
|
RECAP-KG: Mining Knowledge Graphs from Raw GP Notes for Remote COVID-19
Assessment in Primary Care
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Clinical decision-making is a fundamental stage in delivering appropriate
care to patients. In recent years several decision-making systems designed to
aid the clinician in this process have been developed. However, technical
solutions currently in use are based on simple regression models and are only
able to take into account simple pre-defined multiple-choice features, such as
patient age, pre-existing conditions, smoker status, etc. One particular source
of patient data that available decision-making systems are incapable of
processing is the collection of patient consultation GP notes. These contain
crucial signs and symptoms - the information used by clinicians in order to
make a final decision and direct the patient to the appropriate care.
Extracting information from GP notes is a technically challenging problem, as
they tend to include abbreviations, typos, and incomplete sentences.
This paper addresses this open challenge. We present a framework that
performs knowledge graph construction from raw GP medical notes written during
or after patient consultations. By relying on support phrases mined from the
SNOMED ontology, as well as predefined supported facts from values used in the
RECAP (REmote COVID-19 Assessment in Primary Care) patient risk prediction
tool, our graph generative framework is able to extract structured knowledge
graphs from the highly unstructured and inconsistent format that consultation
notes are written in. Our knowledge graphs include information about existing
patient symptoms, their duration, and their severity.
We apply our framework to consultation notes of COVID-19 patients in the UK
COVID-19 Clinical Assessment Service (CCAS) patient dataset. We provide a
quantitative evaluation of the performance of our framework, demonstrating that
our approach has better accuracy than traditional NLP methods when answering
questions about patients.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 23:35:51 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Mekhtieva",
"Rakhilya Lee",
""
],
[
"Forbes",
"Brandon",
""
],
[
"Alrajeh",
"Dalal",
""
],
[
"Delaney",
"Brendan",
""
],
[
"Russo",
"Alessandra",
""
]
] |
new_dataset
| 0.991423 |
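An illustrative sketch of phrase-based triple extraction from a raw GP note, in the spirit of the pipeline this abstract describes. The support phrases and attributes are invented stand-ins for the SNOMED-mined phrases and RECAP values the paper actually uses:

```python
import re

SYMPTOM_PHRASES = {"sob": "shortness of breath", "cough": "cough", "fever": "fever"}
SEVERITY_PHRASES = {"mild", "moderate", "severe"}

def extract_triples(note: str):
    # Tolerant tokenization: notes contain abbreviations and fragments.
    tokens = re.findall(r"[a-z]+|\d+", note.lower())
    triples = []
    for i, tok in enumerate(tokens):
        if tok in SYMPTOM_PHRASES:
            symptom = SYMPTOM_PHRASES[tok]
            if i > 0 and tokens[i - 1] in SEVERITY_PHRASES:
                triples.append((symptom, "severity", tokens[i - 1]))
            after = tokens[i + 1:i + 4]
            for j, w in enumerate(after):
                if w.isdigit() and j + 1 < len(after) and after[j + 1].startswith("day"):
                    triples.append((symptom, "duration_days", int(w)))
    return triples

note = "pt c/o severe cough 5 days, mild fever, no sob"
print(extract_triples(note))
# [('cough', 'severity', 'severe'), ('cough', 'duration_days', 5),
#  ('fever', 'severity', 'mild')]
```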
2306.17201
|
Wenhao Chai
|
Zhenyu Zhang, Wenhao Chai, Zhongyu Jiang, Tian Ye, Mingli Song,
Jenq-Neng Hwang, Gaoang Wang
|
MPM: A Unified 2D-3D Human Pose Representation via Masked Pose Modeling
|
Codes and model checkpoints are available at
https://github.com/vvirgooo2/MPM
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating 3D human poses only from a 2D human pose sequence has been thoroughly
explored in recent years. Yet, prior to this work, no attempt has been made to
unify 2D and 3D pose representations in a shared feature space. In this
paper, we propose MPM, a unified 2D-3D human pose representation framework via
masked pose modeling. We treat 2D and 3D poses as two different modalities like
vision and language and build a single-stream transformer-based architecture.
We apply three pretext tasks, which are masked 2D pose modeling, masked 3D pose
modeling, and masked 2D pose lifting to pre-train our network and use
full supervision to perform further fine-tuning. A high masking ratio of 72.5%
in total, together with a spatio-temporal mask sampling strategy, leads to better
relation modeling in both the spatial and temporal domains. MPM can handle multiple
tasks including 3D human pose estimation, 3D pose estimation from occluded 2D
pose, and 3D pose completion in a single framework. We conduct extensive
experiments and ablation studies on several widely used human pose datasets and
achieve state-of-the-art performance on Human3.6M and MPI-INF-3DHP. Codes and
model checkpoints are available at https://github.com/vvirgooo2/MPM
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 10:30:00 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Zhang",
"Zhenyu",
""
],
[
"Chai",
"Wenhao",
""
],
[
"Jiang",
"Zhongyu",
""
],
[
"Ye",
"Tian",
""
],
[
"Song",
"Mingli",
""
],
[
"Hwang",
"Jenq-Neng",
""
],
[
"Wang",
"Gaoang",
""
]
] |
new_dataset
| 0.999337 |
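A toy illustration of spatio-temporal mask sampling for masked pose modeling as described above. The 72.5% overall ratio comes from the abstract; how it is split between temporal (frame) and spatial (joint) masking below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(frames=16, joints=17, frame_ratio=0.5, joint_ratio=0.45):
    mask = np.zeros((frames, joints), dtype=bool)
    masked_frames = rng.choice(frames, int(frames * frame_ratio), replace=False)
    mask[masked_frames, :] = True                      # temporal masking
    masked_joints = rng.choice(joints, int(joints * joint_ratio), replace=False)
    mask[:, masked_joints] = True                      # spatial masking
    return mask

m = sample_mask()
print(f"overall masking ratio: {m.mean():.3f}")  # roughly 0.7 with these ratios
```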
2306.17203
|
Simian Luo
|
Simian Luo, Chuanhao Yan, Chenxu Hu, Hang Zhao
|
Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion
Models
| null | null | null | null |
cs.SD cs.CV cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Video-to-Audio (V2A) model has recently gained attention for its
practical application in generating audio directly from silent videos,
particularly in video/film production. However, previous methods in V2A have
limited generation quality in terms of temporal synchronization and
audio-visual relevance. We present Diff-Foley, a synchronized Video-to-Audio
synthesis method with a latent diffusion model (LDM) that generates
high-quality audio with improved synchronization and audio-visual relevance. We
adopt contrastive audio-visual pretraining (CAVP) to learn more temporally and
semantically aligned features, then train an LDM with CAVP-aligned visual
features on spectrogram latent space. The CAVP-aligned features enable LDM to
capture the subtler audio-visual correlation via a cross-attention module. We
further significantly improve sample quality with `double guidance'. Diff-Foley
achieves state-of-the-art V2A performance on a current large-scale V2A dataset.
Furthermore, we demonstrate Diff-Foley's practical applicability and
generalization capabilities via downstream finetuning. Project Page: see
https://diff-foley.github.io/
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 12:39:58 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Luo",
"Simian",
""
],
[
"Yan",
"Chuanhao",
""
],
[
"Hu",
"Chenxu",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.998958 |
2306.17254
|
Runyu Jin
|
Qirui Yang, Runyu Jin, Ni Fan, Devasena Inupakutika, Bridget Davis,
Ming Zhao
|
AdaCache: A Disaggregated Cache System with Adaptive Block Size for
Cloud Block Storage
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
NVMe SSD caching has demonstrated impressive capabilities in solving cloud
block storage's I/O bottleneck and enhancing application performance in public,
private, and hybrid cloud environments. However, traditional host-side caching
solutions have several serious limitations. First, the cache cannot be shared
across hosts, leading to low cache utilization. Second, the commonly-used
fixed-size cache block allocation mechanism is unable to provide good cache
performance with low memory overhead for diverse cloud workloads with vastly
different I/O patterns. This paper presents AdaCache, a novel userspace
disaggregated cache system that utilizes adaptive cache block allocation for
cloud block storage. First, AdaCache proposes an innovative adaptive cache
block allocation scheme that allocates cache blocks based on the request size
to achieve both good cache performance and low memory overhead. Second,
AdaCache proposes a group-based cache organization that stores cache blocks
into groups to solve the fragmentation problem brought by variable-sized cache
blocks. Third, AdaCache designs a two-level cache replacement policy that
replaces cache blocks in both single blocks and groups to improve the hit
ratio. Experimental results with real-world traces show that AdaCache can
substantially improve I/O performance and reduce storage accesses caused by cache
misses with much lower memory usage compared to traditional fixed-size cache
systems.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 18:46:38 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Yang",
"Qirui",
""
],
[
"Jin",
"Runyu",
""
],
[
"Fan",
"Ni",
""
],
[
"Inupakutika",
"Devasena",
""
],
[
"Davis",
"Bridget",
""
],
[
"Zhao",
"Ming",
""
]
] |
new_dataset
| 0.99947 |
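A much-simplified sketch of the three ideas this abstract describes — request-sized cache blocks, grouping of variable-sized blocks against fragmentation, and two-level replacement. All structures, sizes, and policies here are illustrative, not AdaCache's actual design (block-level replacement within a group is omitted for brevity):

```python
from collections import OrderedDict

GROUP_SIZE = 64 * 1024  # each group is a fixed-size slab of cache space

class Group:
    def __init__(self):
        self.blocks = OrderedDict()  # lba -> block size, in insertion order
        self.used = 0

class AdaptiveCache:
    def __init__(self, capacity_groups=4):
        self.groups = [Group() for _ in range(capacity_groups)]
        self.lru_groups = OrderedDict((i, None) for i in range(capacity_groups))

    def insert(self, lba, size):
        for gid, g in enumerate(self.groups):
            if g.used + size <= GROUP_SIZE:
                break
        else:
            # Level 1 (group) eviction: drop the least-recently-used group whole.
            gid, _ = self.lru_groups.popitem(last=False)
            self.groups[gid] = Group()
            self.lru_groups[gid] = None
        g = self.groups[gid]
        # Level 2: the block is sized exactly to the request, not a fixed size.
        g.blocks[lba] = size
        g.used += size
        self.lru_groups.move_to_end(gid)

cache = AdaptiveCache()
for lba, size in [(0, 4096), (8, 16384), (40, 4096)]:
    cache.insert(lba, size)
print(sum(g.used for g in cache.groups))  # 24576 bytes cached, no padding waste
```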
2306.17271
|
Vinicius G. Goecks
|
Vinicius G. Goecks, Nicholas R. Waytowich
|
DisasterResponseGPT: Large Language Models for Accelerated Plan of
Action Development in Disaster Response Scenarios
|
Accepted at the Workshop on Challenges in Deployable Generative AI at
International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA.
2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of plans of action in disaster response scenarios is a
time-consuming process. Large Language Models (LLMs) offer a powerful solution
to expedite this process through in-context learning. This study presents
DisasterResponseGPT, an algorithm that leverages LLMs to generate valid plans
of action quickly by incorporating disaster response and planning guidelines in
the initial prompt. In DisasterResponseGPT, users input the scenario
description and receive a plan of action as output. The proposed method
generates multiple plans within seconds, which can be further refined following
the user's feedback. Preliminary results indicate that the plans of action
developed by DisasterResponseGPT are comparable to human-generated ones while
offering greater ease of modification in real-time. This approach has the
potential to revolutionize disaster response operations by enabling rapid
updates and adjustments during the plan's execution.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 19:24:19 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Goecks",
"Vinicius G.",
""
],
[
"Waytowich",
"Nicholas R.",
""
]
] |
new_dataset
| 0.982083 |
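A minimal sketch of the prompting loop this abstract describes. `call_llm` is a placeholder for whatever LLM endpoint is used; the guideline text and prompt wording are invented, not the paper's actual prompts:

```python
def call_llm(prompt: str) -> str:
    return f"[plan generated for prompt of {len(prompt)} chars]"  # stub

GUIDELINES = "Follow incident-command doctrine; prioritize life safety; ..."

def generate_plans(scenario: str, n_plans: int = 3):
    # Guidelines are included in-context so plans follow doctrine.
    prompt = f"{GUIDELINES}\n\nScenario:\n{scenario}\n\nProduce a plan of action."
    return [call_llm(prompt) for _ in range(n_plans)]

def refine(plan: str, feedback: str) -> str:
    # Plans are revised iteratively following user feedback.
    return call_llm(f"Revise this plan:\n{plan}\n\nUser feedback:\n{feedback}")

plans = generate_plans("Flash flooding has cut off the northern district.")
print(refine(plans[0], "Add an evacuation route for the hospital."))
```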
2306.17298
|
Manoel Horta Ribeiro
|
L\'eopaul Boesinger, Manoel Horta Ribeiro, Veniamin Veselovsky, Robert
West
|
Tube2Vec: Social and Semantic Embeddings of YouTube Channels
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research using YouTube data often explores social and semantic dimensions of
channels and videos. Typically, analyses rely on laborious manual annotation of
content and content creators, often found by low-recall methods such as keyword
search. Here, we explore an alternative approach, using latent representations
(embeddings) obtained via machine learning. Using a large dataset of YouTube
links shared on Reddit, we create embeddings that capture social sharing
behavior, video metadata (title, description, etc.), and YouTube's video
recommendations. We evaluate these embeddings using crowdsourcing and existing
datasets, finding that recommendation embeddings excel at capturing both social
and semantic dimensions, although social-sharing embeddings better correlate
with existing partisan scores. We share embeddings capturing the social and
semantic dimensions of 44,000 YouTube channels for the benefit of future
research on YouTube: https://github.com/epfl-dlab/youtube-embeddings.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 20:43:57 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Boesinger",
"Léopaul",
""
],
[
"Ribeiro",
"Manoel Horta",
""
],
[
"Veselovsky",
"Veniamin",
""
],
[
"West",
"Robert",
""
]
] |
new_dataset
| 0.999119 |
2306.17302
|
Depu Meng
|
Rusheng Zhang, Depu Meng, Lance Bassett, Shengyin Shen, Zhengxia Zou,
Henry X. Liu
|
Robust Roadside Perception for Autonomous Driving: an Annotation-free
Strategy with Synthesized Data
|
Technical Report, 9 pages with 9 figures
| null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, with the rapid development in vehicle-to-infrastructure
communication technologies, the infrastructure-based, roadside perception
system for cooperative driving has become a rising field. This paper focuses on
one of the most critical challenges - the data-insufficiency problem. The
lack of high-quality labeled roadside sensor data with high diversity leads
to low robustness and low transferability of current roadside perception
systems. In this paper, a novel approach is proposed to address this problem by
creating synthesized training data using Augmented Reality and Generative
Adversarial Network. This method creates a synthesized dataset that is capable of
training or fine-tuning a roadside perception detector which is robust to
different weather and lighting conditions, or to adapt a new deployment
location. We validate our approach at two intersections: Mcity intersection and
State St/Ellsworth Rd roundabout. Our experiments show that (1) the detector
can achieve good performance in all conditions when trained on synthesized data
only, and (2) the performance of an existing detector trained with labeled data
can be enhanced by synthesized data in harsh conditions.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 21:00:57 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Zhang",
"Rusheng",
""
],
[
"Meng",
"Depu",
""
],
[
"Bassett",
"Lance",
""
],
[
"Shen",
"Shengyin",
""
],
[
"Zou",
"Zhengxia",
""
],
[
"Liu",
"Henry X.",
""
]
] |
new_dataset
| 0.983764 |
2306.17330
|
Ziqi Xu
|
Ziqi Xu, Jingcheng Li, Yanjun Pan, Ming Li and Loukas Lazos
|
Secret-Free Device Pairing in the mmWave Band
|
14 pages, 16 figures
| null | null | null |
cs.CR eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Many Next Generation (NextG) applications feature devices that are capable of
communicating and sensing in the Millimeter-Wave (mmWave) bands. Trust
establishment is an important first step to bootstrap secure mmWave
communication links, which is challenging due to the lack of prior secrets and
the fact that traditional cryptographic authentication methods cannot bind
digital trust with physical properties. Previously, context-based device
pairing approaches were proposed to extract shared secrets from common context,
using various sensing modalities. However, they suffer from various limitations
in practicality and security.
In this work, we propose the first secret-free device pairing scheme in the
mmWave band that explores the unique physical-layer properties of mmWave
communications. Our basic idea is to let Alice and Bob derive common randomness
by sampling physical activity in the surrounding environment that disturbs
their wireless channel. They construct reliable fingerprints of the activity by
extracting event timing information from the channel state. We further propose
an uncoordinated path hopping mechanism to resolve the challenges of beam
alignment for activity sensing without prior trust. A key novelty of our
protocol is that it remains secure against both co-located passive adversaries
and active Man-in-the-Middle attacks, which is not possible with existing
context-based pairing approaches. We implement our protocol in a 28GHz mmWave
testbed, and experimentally evaluate its security in realistic indoor
environments. Results show that our protocol can effectively thwart several
different types of adversaries.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 22:49:48 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Xu",
"Ziqi",
""
],
[
"Li",
"Jingcheng",
""
],
[
"Pan",
"Yanjun",
""
],
[
"Li",
"Ming",
""
],
[
"Lazos",
"Loukas",
""
]
] |
new_dataset
| 0.98483 |
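A toy version of the fingerprinting idea this abstract sketches: both sides detect disturbance events in their channel measurements and encode the event timings as bits. The noise level, bin width, and parity encoding are illustrative choices, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(7)
# Ground-truth times (ms) of physical activity disturbing the mmWave channel.
events = np.sort(rng.choice(np.arange(100, 10_000), size=8, replace=False))

def observe(events, noise_ms=2.0):
    # Each device sees the same activity with small, independent timing noise.
    return events + rng.normal(0, noise_ms, size=events.shape)

def fingerprint(timestamps, bin_ms=50):
    gaps = np.diff(np.sort(timestamps))
    bins = (gaps // bin_ms).astype(int)
    return "".join(str(b & 1) for b in bins)  # parity of each quantized gap

alice = fingerprint(observe(events))
bob = fingerprint(observe(events))
print(alice, bob, alice == bob)  # usually equal: timing noise << bin width
```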
2306.17440
|
Yubo Cui
|
Yubo Cui, Zhiheng Li, Zheng Fang
|
STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking
|
Accepted for publication at IEEE Robotics and Automation Letters
(RAL)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D single object tracking with point clouds is a critical task in 3D computer
vision. Previous methods usually input the last two frames and use the
predicted box to get the template point cloud in previous frame and the search
area point cloud in the current frame respectively, then use similarity-based
or motion-based methods to predict the current box. Although these methods
achieved good tracking performance, they ignore the historical information of
the target, which is important for tracking. In this paper, compared to
inputting two frames of point clouds, we input multiple frames of point clouds to
encode the spatio-temporal information of the target and learn the motion
information of the target implicitly, which could build the correlations among
different frames to track the target in the current frame efficiently.
Meanwhile, rather than directly using the point feature for feature fusion, we
first crop the point cloud features into many patches and then use sparse
attention mechanism to encode the patch-level similarity and finally fuse the
multi-frame features. Extensive experiments show that our method achieves
competitive results on challenging large-scale benchmarks (62.6% in KITTI and
49.66% in NuScenes).
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 07:25:11 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Cui",
"Yubo",
""
],
[
"Li",
"Zhiheng",
""
],
[
"Fang",
"Zheng",
""
]
] |
new_dataset
| 0.991954 |
2306.17462
|
Yang Liu
|
Yang Liu, Weixing Chen, Guanbin Li, Liang Lin
|
CausalVLR: A Toolbox and Benchmark for Visual-Linguistic Causal
Reasoning
|
CausalVLR: A Toolbox and Benchmark for Visual-Linguistic Causal
Reasoning. https://github.com/HCPLab-SYSU/CausalVLR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CausalVLR (Causal Visual-Linguistic Reasoning), an open-source
toolbox containing a rich set of state-of-the-art causal relation discovery and
causal inference methods for various visual-linguistic reasoning tasks, such as
VQA, image/video captioning, medical report generation, model generalization
and robustness, etc. These methods have been included in the toolbox with
PyTorch implementations on NVIDIA computing systems. It not only includes
training and inference code, but also provides model weights. We believe this
toolbox is by far the most complete visual-linguistic causal reasoning toolbox.
We wish that the toolbox and benchmark could serve the growing research
community by providing a flexible toolkit to re-implement existing methods and
develop their own new causal reasoning methods. Code and models are available
at https://github.com/HCPLab-SYSU/Causal-VLReasoning. The project is under
active development by HCP-Lab's contributors and we will keep this document
updated.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 08:17:38 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Liu",
"Yang",
""
],
[
"Chen",
"Weixing",
""
],
[
"Li",
"Guanbin",
""
],
[
"Lin",
"Liang",
""
]
] |
new_dataset
| 0.987641 |
2306.17469
|
Yingxuan Li
|
Yingxuan Li, Kiyoharu Aizawa, Yusuke Matsui
|
Manga109Dialog A Large-scale Dialogue Dataset for Comics Speaker
Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The expanding market for e-comics has spurred interest in the development of
automated methods to analyze comics. For further understanding of comics, an
automated approach is needed to link text in comics to characters speaking the
words. Comics speaker detection research has practical applications, such as
automatic character assignment for audiobooks, automatic translation according
to characters' personalities, and inference of character relationships and
stories.
To deal with the problem of insufficient speaker-to-text annotations, we
created a new annotation dataset Manga109Dialog based on Manga109.
Manga109Dialog is the world's largest comics speaker annotation dataset,
containing 132,692 speaker-to-text pairs. We further divided our dataset into
different levels by prediction difficulty to evaluate speaker detection
methods more appropriately. Unlike existing methods mainly based on distances,
we propose a deep learning-based method using scene graph generation models.
Due to the unique features of comics, we enhance the performance of our
proposed model by considering the frame reading order. We conducted experiments
using Manga109Dialog and other datasets. Experimental results demonstrate that
our scene-graph-based approach outperforms existing methods, achieving a
prediction accuracy of over 75%.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 08:34:08 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Li",
"Yingxuan",
""
],
[
"Aizawa",
"Kiyoharu",
""
],
[
"Matsui",
"Yusuke",
""
]
] |
new_dataset
| 0.999778 |
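A sketch of the distance-based baseline that this abstract contrasts with its scene-graph approach: each speech text is simply assigned to the nearest character. The box coordinates below are invented for illustration:

```python
import math

def center(box):  # box = (x1, y1, x2, y2)
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def nearest_speaker(text_box, character_boxes):
    # Distance-only assignment: ignores reading order and panel structure,
    # which is what the scene-graph method improves on.
    tx, ty = center(text_box)
    return min(
        character_boxes,
        key=lambda name_box: math.dist((tx, ty), center(name_box[1])),
    )[0]

characters = [("hero", (10, 40, 60, 120)), ("rival", (200, 30, 260, 110))]
print(nearest_speaker((70, 10, 130, 35), characters))  # 'hero'
```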
2306.17498
|
Hartmut Koenitz
|
Hartmut Koenitz, Jonathan Barbara, Lissa Holloway-Attaway, Frank Nack,
Mirjam Palosaari Eladhari, Agnes Bakk
|
INDCOR White Paper 0: Interactive Digital Narratives (IDNs) -- A
Solution to the Challenge of Representing Complex Issues
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Citizens everywhere have the right to be well-informed. Yet, with the high
complexity of many contemporary issues, such as global warming and migration,
our means of information need to mutually adapt. Narrative has always been at
the core of information exchange - regardless of whether our ancestors sat
around a fire and exchanged stories, or whether we read an article in a
newspaper, or watched a TV news broadcast. Yet, the narrative formats of the
newspaper article, the news broadcast, the documentary, and the textbook are
severely limited when it comes to representing highly complex topics which may
include several competing - and sometimes equally valid - perspectives. Such
complexity contributes to a high level of uncertainty due to a multitude of
factors affecting an outcome. Fortunately, with Interactive Digital Narrative
(IDN), there is a novel media format which can address these challenges. IDNs
can present several different perspectives in the same work, and give audiences
the ability to explore them at will through decision-making. After experiencing
the consequences of their decisions, the audience can replay to revisit and
change these decisions in order to consider their alternatives. IDN works
enable deep personalization and the inclusion of live data. These capabilities
make IDN a 21st century democratic medium, empowering citizens through the
understanding of complex issues. In this white paper, we discuss the challenge
of representing complexity, describe the advantages offered by IDNs, and point
out opportunities and strategies for deployment.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 09:16:59 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Koenitz",
"Hartmut",
""
],
[
"Barbara",
"Jonathan",
""
],
[
"Holloway-Attaway",
"Lissa",
""
],
[
"Nack",
"Frank",
""
],
[
"Eladhari",
"Mirjam Palosaari",
""
],
[
"Bakk",
"Agnes",
""
]
] |
new_dataset
| 0.998749 |
2306.17508
|
Ruochen Wu
|
Ruochen Wu
|
Research on Virus Cyberattack-Defense Based on Electromagnetic Radiation
| null | null | null | null |
cs.CR eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Information technology and telecommunications have rapidly permeated various
domains, resulting in a significant influx of data traversing the networks
between computers. Consequently, research on cyberattacks in computer systems
has become crucial for many organizations. Accordingly, recent cybersecurity
incidents have underscored the rapidly evolving nature of future threats and
attack methods, particularly those involving the wireless injection of computer
viruses. This paper aims to study and demonstrate the feasibility of remote
computer virus radiation injection. To achieve this objective, digital signal
processing (DSP) plays a vital role. By studying the principles and models of
radiation attacks and computer virus propagation, we simulate the modulation of
the simulated virus's binary data stream onto a terahertz radar carrier signal
via Phase-Shift Keying (PSK), enabling the implementation of an attack through
the "field-to-line" coupling of electromagnetic signals. Finally, the
defense and countermeasures based on signal recognition are discussed for such
attacks. Additionally, an idea of establishing a virus library for cyberattack
signals and employing artificial intelligence (AI) algorithms for automated
intrusion detection is proposed as a means to achieve cybersecurity situation
awareness.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 09:39:47 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Wu",
"Ruochen",
""
]
] |
new_dataset
| 0.951136 |
2306.17536
|
Sourav Garg
|
Stephen Hausler, Sourav Garg, Punarjay Chakravarty, Shubham
Shrivastava, Ankit Vora, Michael Milford
|
DisPlacing Objects: Improving Dynamic Vehicle Detection via Visual Place
Recognition under Adverse Conditions
|
Accepted to IROS 2023
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can knowing where you are assist in perceiving objects in your surroundings,
especially under adverse weather and lighting conditions? In this work we
investigate whether a prior map can be leveraged to aid in the detection of
dynamic objects in a scene without the need for a 3D map or pixel-level
map-query correspondences. We contribute an algorithm which refines an initial
set of candidate object detections and produces a refined subset of highly
accurate detections using a prior map. We begin by using visual place
recognition (VPR) to retrieve a reference map image for a given query image,
then use a binary classification neural network that compares the query and
mapping image regions to validate the query detection. Once our classification
network is trained, on approximately 1000 query-map image pairs, it is able to
improve the performance of vehicle detection when combined with an existing
off-the-shelf vehicle detector. We demonstrate our approach using standard
datasets across two cities (Oxford and Zurich) under different settings of
train-test separation of map-query traverse pairs. We further emphasize the
performance gains of our approach against alternative design choices and show
that VPR suffices for the task, eliminating the need for precise ground truth
localization.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 10:46:51 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Hausler",
"Stephen",
""
],
[
"Garg",
"Sourav",
""
],
[
"Chakravarty",
"Punarjay",
""
],
[
"Shrivastava",
"Shubham",
""
],
[
"Vora",
"Ankit",
""
],
[
"Milford",
"Michael",
""
]
] |
new_dataset
| 0.967238 |
2306.17541
|
Pieter Collins
|
Pieter Collins, Luca Geretti, Sanja Zivanovic Gonzalez, Davide
Bresolin and Tiziano Villa
|
Rigorous Function Calculi in Ariadne
| null | null | null | null |
cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
Almost all problems in applied mathematics, including the analysis of
dynamical systems, deal with spaces of real-valued functions on Euclidean
domains in their formulation and solution. In this paper, we describe the
tool Ariadne, which provides a rigorous calculus for working with Euclidean
functions. We first introduce the Ariadne framework, which is based on a clean
separation of objects as providing exact, effective, validated and approximate
information. We then discuss the function calculus as implemented in Ariadne,
including polynomial function models which are the fundamental class for
concrete computations. We then consider solution of some core problems of
functional analysis, namely solution of algebraic equations and differential
equations, and briefly discuss their use for the analysis of hybrid systems. We
will give examples of C++ and Python code for performing the various
calculations. Finally, we will discuss progress on extensions, including
improvements to the function calculus and extensions to more complicated
classes of system.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 10:53:27 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Collins",
"Pieter",
""
],
[
"Geretti",
"Luca",
""
],
[
"Gonzalez",
"Sanja Zivanovic",
""
],
[
"Bresolin",
"Davide",
""
],
[
"Villa",
"Tiziano",
""
]
] |
new_dataset
| 0.985689 |
2306.17550
|
Zheng-Hao Chen
|
Che-Yu Chou, Zheng-Hao Chen, Yung-Hoh Sheu, Hung-Hsuan Chen, Sheng K.
Wu
|
TTSWING: a Dataset for Table Tennis Swing Analysis
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce TTSWING, a novel dataset designed for table tennis swing
analysis. This dataset comprises comprehensive swing information obtained
through 9-axis sensors integrated into custom-made racket grips, accompanied by
anonymized demographic data of the players. We detail the data collection and
annotation procedures. Furthermore, we conduct pilot studies utilizing diverse
machine learning models for swing analysis. TTSWING holds tremendous potential
to facilitate innovative research in table tennis analysis and is a valuable
resource for the scientific community. We release the dataset and experimental
codes at https://github.com/DEPhantom/TTSWING.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 11:06:46 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Chou",
"Che-Yu",
""
],
[
"Chen",
"Zheng-Hao",
""
],
[
"Sheu",
"Yung-Hoh",
""
],
[
"Chen",
"Hung-Hsuan",
""
],
[
"Wu",
"Sheng K.",
""
]
] |
new_dataset
| 0.999871 |
2306.17574
|
Hamza Bouzid
|
Hamza Bouzid and Lahoucine Ballihi
|
SpATr: MoCap 3D Human Action Recognition based on Spiral Auto-encoder
and Transformer Network
|
10 pages, 5 figures, Submitted IVC
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent advancements in technology have expanded the possibilities of human
action recognition by leveraging 3D data, which offers a richer representation
of actions through the inclusion of depth information, enabling more accurate
analysis of spatial and temporal characteristics. However, 3D human action
recognition is a challenging task due to the irregularity and disarrangement of
the data points in action sequences. In this context, we present our novel
model for human action recognition from fixed topology mesh sequences based on
Spiral Auto-encoder and Transformer Network, namely SpATr. The proposed method
first disentangles space and time in the mesh sequences. Then, an auto-encoder
is utilized to extract spatial geometrical features, and a tiny transformer is
used to capture the temporal evolution of the sequence. Previous methods either
use 2D depth images, sample skeleton points, or require a huge amount of
memory, limiting them to processing short sequences only. In this work, we
show competitive recognition rate and high memory efficiency by building our
auto-encoder based on spiral convolutions, which are lightweight convolutions
applied directly to mesh data with fixed topologies, and by modeling temporal
evolution using attention, which can handle large sequences. The proposed
method is evaluated on two 3D human action datasets: MoVi and BMLrub from
the Archive of Motion Capture As Surface Shapes (AMASS). The analysis of the results
shows the effectiveness of our method in 3D human action recognition while
maintaining high memory efficiency. The code will soon be made publicly
available.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 11:49:00 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Bouzid",
"Hamza",
""
],
[
"Ballihi",
"Lahoucine",
""
]
] |
new_dataset
| 0.998059 |
2306.17602
|
Simon Doll
|
Simon Doll, Niklas Hanselmann, Lukas Schneider, Richard Schulz, Markus
Enzweiler, Hendrik P.A. Lensch
|
S.T.A.R.-Track: Latent Motion Models for End-to-End 3D Object Tracking
with Adaptive Spatio-Temporal Appearance Representations
|
Project page: https://simondoll.github.io/S.T.A.R.-Track/
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following the tracking-by-attention paradigm, this paper introduces an
object-centric, transformer-based framework for tracking in 3D. Traditional
model-based tracking approaches incorporate the geometric effect of object- and
ego motion between frames with a geometric motion model. Inspired by this, we
propose S.T.A.R.-Track, which uses a novel latent motion model (LMM) to
additionally adjust object queries to account for changes in viewing direction
and lighting conditions directly in the latent space, while still modeling the
geometric motion explicitly. Combined with a novel learnable track embedding
that aids in modeling the existence probability of tracks, this results in a
generic tracking framework that can be integrated with any query-based
detector. Extensive experiments on the nuScenes benchmark demonstrate the
benefits of our approach, showing state-of-the-art performance for DETR3D-based
trackers while drastically reducing the number of identity switches of tracks
at the same time.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 12:22:41 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Doll",
"Simon",
""
],
[
"Hanselmann",
"Niklas",
""
],
[
"Schneider",
"Lukas",
""
],
[
"Schulz",
"Richard",
""
],
[
"Enzweiler",
"Markus",
""
],
[
"Lensch",
"Hendrik P. A.",
""
]
] |
new_dataset
| 0.962965 |
2306.17625
|
Keisuke Sugiura
|
Keisuke Sugiura and Hiroki Matsutani
|
An Integrated FPGA Accelerator for Deep Learning-based 2D/3D Path
Planning
|
25 pages, 17 figures
| null | null | null |
cs.RO cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Path planning is a crucial component for realizing the autonomy of mobile
robots. However, due to limited computational resources on mobile robots, it
remains challenging to deploy state-of-the-art methods and achieve real-time
performance. To address this, we propose P3Net (PointNet-based Path Planning
Networks), a lightweight deep-learning-based method for 2D/3D path planning,
and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net
improves the algorithm and model architecture of the recently-proposed MPNet.
P3Net employs an encoder with a PointNet backbone and a lightweight planning
network in order to extract robust point cloud features and sample path points
from a promising region. P3NetCore is comprised of the fully-pipelined point
cloud encoder, batched bidirectional path planner, and parallel collision
checker, to cover most parts of the algorithm. On the 2D (3D) datasets, P3Net
with the IP core runs 24.54-149.57x and 6.19-115.25x (10.03-59.47x and
3.38-28.76x) faster than ARM Cortex CPU and Nvidia Jetson while only consuming
0.255W (0.809W), and is up to 1049.42x (133.84x) more power-efficient than the
workstation. P3Net improves the success rate by up to 28.2% and plans a
near-optimal path, leading to a significantly better tradeoff between
computation and solution quality than MPNet and the state-of-the-art
sampling-based methods.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 12:56:25 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Sugiura",
"Keisuke",
""
],
[
"Matsutani",
"Hiroki",
""
]
] |
new_dataset
| 0.997795 |
2306.17674
|
Tianhao Shen
|
Mehrad Moradshahi, Tianhao Shen, Kalika Bali, Monojit Choudhury,
Ga\"el de Chalendar, Anmol Goel, Sungkyun Kim, Prashant Kodali, Ponnurangam
Kumaraguru, Nasredine Semmar, Sina J. Semnani, Jiwon Seo, Vivek Seshadri,
Manish Shrivastava, Michael Sun, Aditya Yadavalli, Chaobin You, Deyi Xiong
and Monica S. Lam
|
X-RiSAWOZ: High-Quality End-to-End Multilingual Dialogue Datasets and
Few-shot Agents
|
Accepted by ACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Task-oriented dialogue research has mainly focused on a few popular languages
like English and Chinese, due to the high dataset creation cost for a new
language. To reduce the cost, we apply manual editing to automatically
translated data. We create a new multilingual benchmark, X-RiSAWOZ, by
translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean;
and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000
human-verified dialogue utterances for each language, and unlike most
multilingual prior work, is an end-to-end dataset for building
fully-functioning agents.
The many difficulties we encountered in creating X-RiSAWOZ led us to develop
a toolset to accelerate the post-editing of a new language dataset after
translation. This toolset improves machine translation with a hybrid entity
alignment technique that combines neural with dictionary-based methods, along
with many automated and semi-automated validation checks.
We establish strong baselines for X-RiSAWOZ by training dialogue agents in
the zero- and few-shot settings where limited gold data is available in the
target language. Our results suggest that our translation and post-editing
methodology and toolset can be used to create new high-quality multilingual
dialogue agents cost-effectively. Our dataset, code, and toolkit are released
open-source.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 14:03:30 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Moradshahi",
"Mehrad",
""
],
[
"Shen",
"Tianhao",
""
],
[
"Bali",
"Kalika",
""
],
[
"Choudhury",
"Monojit",
""
],
[
"de Chalendar",
"Gaël",
""
],
[
"Goel",
"Anmol",
""
],
[
"Kim",
"Sungkyun",
""
],
[
"Kodali",
"Prashant",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
],
[
"Semmar",
"Nasredine",
""
],
[
"Semnani",
"Sina J.",
""
],
[
"Seo",
"Jiwon",
""
],
[
"Seshadri",
"Vivek",
""
],
[
"Shrivastava",
"Manish",
""
],
[
"Sun",
"Michael",
""
],
[
"Yadavalli",
"Aditya",
""
],
[
"You",
"Chaobin",
""
],
[
"Xiong",
"Deyi",
""
],
[
"Lam",
"Monica S.",
""
]
] |
new_dataset
| 0.999838 |
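A sketch of the hybrid entity alignment idea this abstract mentions: try an exact dictionary mapping of slot values first, and fall back to a (placeholder) neural aligner for values the dictionary misses. The dictionary contents and function names are invented examples:

```python
ENTITY_DICT = {"北京": "Beijing", "川菜": "Sichuan cuisine"}

def neural_align(value: str, translated_utterance: str) -> str:
    # Stand-in for a neural matcher that locates the translated span.
    return f"<aligned:{value}>"

def align_entity(value: str, translated_utterance: str) -> str:
    if value in ENTITY_DICT and ENTITY_DICT[value] in translated_utterance:
        return ENTITY_DICT[value]  # fast, exact dictionary hit
    return neural_align(value, translated_utterance)

utt = "I am looking for Sichuan cuisine in Beijing."
print(align_entity("北京", utt))   # Beijing (dictionary-based)
print(align_entity("烤鸭", utt))   # <aligned:烤鸭> (neural fallback)
```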
2306.17695
|
Shihao Ran
|
Shihao Ran, Di Lu, Joel Tetreault, Aoife Cahill, Alejandro Jaimes
|
A New Task and Dataset on Detecting Attacks on Human Rights Defenders
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to conduct retrospective analyses of attacks on human rights
defenders over time and by location is important for humanitarian organizations
to better understand historical or ongoing human rights violations and thus
better manage the global impact of such events. We hypothesize that NLP can
support such efforts by quickly processing large collections of news articles
to detect and summarize the characteristics of attacks on human rights
defenders. To that end, we propose a new dataset for detecting Attacks on Human
Rights Defenders (HRDsAttack) consisting of crowdsourced annotations on 500
online news articles. The annotations include fine-grained information about
the type and location of the attacks, as well as information about the
victim(s). We demonstrate the usefulness of the dataset by using it to train
and evaluate baseline models on several sub-tasks to predict the annotated
characteristics.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 14:20:06 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Ran",
"Shihao",
""
],
[
"Lu",
"Di",
""
],
[
"Tetreault",
"Joel",
""
],
[
"Cahill",
"Aoife",
""
],
[
"Jaimes",
"Alejandro",
""
]
] |
new_dataset
| 0.960956 |
2306.17721
|
Morteza Baradaran
|
Akhil Shekar, Morteza Baradaran, Sabiha Tajdari, Kevin Skadron
|
HashMem: PIM-based Hashmap Accelerator
|
This paper was published in Fifth International Workshop on
Domain-Specific System Architecture (DOSSA-5)
| null | null | null |
cs.AR cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Hashmaps are widely utilized data structures in many applications to perform
probes on key-value pairs. However, their performance tends to degrade with
the increase in the dataset size, which leads to expensive off-chip memory
accesses to perform bucket traversals associated with hash collisions. In this
work, we propose HashMem, a processing-in-memory (PIM) architecture designed to
perform bucket traversals along the row buffers at the subarray level. Due to
the inherent parallelism achieved with many concurrent subarray accesses and
the massive bandwidth available within DRAM, the execution time related to
bucket traversals is significantly reduced. We have evaluated two versions of
HashMem, performance-optimized and area-optimized, which have a speedup of
49.1x/17.1x and 9.2x/3.2x over standard C++ map and hyper-optimized hopscotch
map implementations, respectively.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 15:07:35 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Shekar",
"Akhil",
""
],
[
"Baradaran",
"Morteza",
""
],
[
"Tajdari",
"Sabiha",
""
],
[
"Skadron",
"Kevin",
""
]
] |
new_dataset
| 0.999578 |
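For context, a minimal software baseline of the operation being accelerated: probing a chained hashmap walks a bucket's chain sequentially, which is the off-chip-heavy hot loop that a PIM design like the one above can run in parallel at the subarray level. Sizes are illustrative:

```python
class ChainedHashMap:
    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def get(self, key):
        # This linear bucket traversal is the part a PIM design can perform
        # along DRAM row buffers instead of via off-chip memory accesses.
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        return None

m = ChainedHashMap()
for i in range(100):          # many collisions with only 8 buckets
    m.put(f"key{i}", i)
print(m.get("key42"))         # 42, found after walking its bucket's chain
```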
2306.17733
|
Qizhi Wan Dr.
|
Qizhi Wan, Changxuan Wan, Keli Xiao, Hui Xiong, Dexi Liu, Xiping Liu
|
Token-Event-Role Structure-based Multi-Channel Document-Level Event
Extraction
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document-level event extraction is a long-standing challenging information
retrieval problem involving a sequence of sub-tasks: entity extraction, event
type judgment, and event type-specific multi-event extraction. However,
addressing the problem as multiple learning tasks leads to increased model
complexity. Also, existing methods insufficiently utilize the correlation of
entities across different events, resulting in limited event extraction
performance. This paper introduces a novel framework for document-level event
extraction, incorporating a new data structure called token-event-role and a
multi-channel argument role prediction module. The proposed data structure
enables our model to uncover the primary role of tokens in multiple events,
facilitating a more comprehensive understanding of event relationships. By
leveraging the multi-channel prediction module, we transform entity and
multi-event extraction into a single task of predicting token-event pairs,
thereby reducing the overall parameter size and enhancing model efficiency. The
results demonstrate that our approach outperforms the state-of-the-art method
by 9.5 percentage points in terms of the F1 score, highlighting its superior
performance in event extraction. Furthermore, an ablation study confirms the
significant value of the proposed data structure in improving event extraction
tasks, further validating its importance in enhancing the overall performance
of the framework.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 15:22:57 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Wan",
"Qizhi",
""
],
[
"Wan",
"Changxuan",
""
],
[
"Xiao",
"Keli",
""
],
[
"Xiong",
"Hui",
""
],
[
"Liu",
"Dexi",
""
],
[
"Liu",
"Xiping",
""
]
] |
new_dataset
| 0.986088 |
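An illustration of a token-event-role style structure as sketched in the abstract above: each entry records which role a token plays in which event, so multi-event extraction reduces to predicting (token, event) pairs. The example triples are invented:

```python
from collections import defaultdict

# (token, event_id, role) triples — one token may serve several events.
triples = [
    ("AcmeCorp", 0, "buyer"),
    ("AcmeCorp", 1, "seller"),
    ("2023-05-01", 0, "date"),
    ("WidgetCo", 0, "target"),
]

# Group the flat triples back into per-event argument structures.
events = defaultdict(dict)
for token, event_id, role in triples:
    events[event_id][role] = token

for event_id, args in sorted(events.items()):
    print(f"event {event_id}: {args}")
# event 0: {'buyer': 'AcmeCorp', 'date': '2023-05-01', 'target': 'WidgetCo'}
# event 1: {'seller': 'AcmeCorp'}
```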
2306.17744
|
Shay Snyder
|
Shay Snyder (1), Kevin Zhu (1), Ricardo Vega (1), Cameron Nowzari (1),
Maryam Parsa (1) ((1) George Mason University)
|
Zespol: A Lightweight Environment for Training Swarming Agents
|
5 pages, 4 figures, 1 table
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Agent-based modeling (ABM) and simulation have emerged as important tools for
studying emergent behaviors, especially in the context of swarming algorithms
for robotic systems. Despite significant research in this area, there is a lack
of standardized simulation environments, which hinders the development and
deployment of real-world robotic swarms. To address this issue, we present
Zespol, a modular, Python-based simulation environment that enables the
development and testing of multi-agent control algorithms. Zespol provides a
flexible and extensible sandbox for initial research, with the potential for
scaling to real-world applications. We provide a topological overview of the
system and detailed descriptions of its plug-and-play elements. We demonstrate
the fidelity of Zespol in simulated and real-world robotics by replicating
existing works, highlighting the simulation-to-real gap with the milling
behavior. We plan to leverage Zespol's plug-and-play feature for neuromorphic
computing in swarming scenarios, which involves using the modules in Zespol to
simulate the behavior of neurons and their connections as synapses. This will
enable optimizing and studying the emergent behavior of swarm systems in
complex environments. Our goal is to gain a better understanding of the
interplay between environmental factors and neural-like computations in
swarming systems.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 15:52:18 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Snyder",
"Shay",
"",
"George Mason University"
],
[
"Zhu",
"Kevin",
"",
"George Mason University"
],
[
"Vega",
"Ricardo",
"",
"George Mason University"
],
[
"Nowzari",
"Cameron",
"",
"George Mason University"
],
[
"Parsa",
"Maryam",
"",
"George Mason University"
]
] |
new_dataset
| 0.999604 |
2306.17765
|
Hari Govind Vediramana Krishnan
|
Hari Govind V K, Isabel Garcia-Contreras, Sharon Shoham, Arie
Gurfinkel
|
Speculative SAT Modulo SAT
| null | null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art model-checking algorithms like IC3/PDR are based on
uni-directional modular SAT solving for finding and/or blocking
counterexamples. Modular SAT solvers divide a SAT-query into multiple
sub-queries, each solved by a separate SAT solver (called a module), and
propagate information (lemmas, proof obligations, blocked clauses, etc.)
between modules. While modular solving is key to IC3/PDR, it is obviously not
as effective as monolithic solving, especially when individual sub-queries are
harder to solve than the combined query. This is partially addressed in SAT
modulo SAT (SMS) by propagating unit literals back and forth between the
modules and using information from one module to simplify the sub-query in
another module as soon as possible (i.e., before the satisfiability of any
sub-query is established). However, bi-directionality of SMS is limited because
of the strict order between decisions and propagation -- only one module is
allowed to make decisions, until its sub-query is SAT. In this paper, we
propose a generalization of SMS, called SPEC SMS, that speculates decisions
between modules. This makes it bi-directional -- decisions are made in multiple
modules, and learned clauses are exchanged in both directions. We further
extend DRUP proofs and interpolation, which are useful in model checking, to
SPEC SMS. We have implemented SPEC SMS in Z3 and show that it performs
exponentially better on a series of benchmarks that are provably hard for SMS.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 16:18:00 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"K",
"Hari Govind V",
""
],
[
"Garcia-Contreras",
"Isabel",
""
],
[
"Shoham",
"Sharon",
""
],
[
"Gurfinkel",
"Arie",
""
]
] |
new_dataset
| 0.99473 |
2306.17778
|
Apratim Bhattacharyya
|
Apratim Bhattacharyya, Sunny Panchal, Mingu Lee, Reza Pourreza, Pulkit
Madan, Roland Memisevic
|
Look, Remember and Reason: Visual Reasoning with Grounded Rationales
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have recently shown human level performance on a
variety of reasoning tasks. However, the ability of these models to perform
complex visual reasoning has not been studied in detail yet. A key challenge in
many visual reasoning tasks is that the visual information needs to be tightly
integrated in the reasoning process. We propose to address this challenge by
drawing inspiration from human visual problem solving which depends on a
variety of low-level visual capabilities. It can often be cast as the
three-step process of ``Look, Remember, Reason'': visual information is incrementally
extracted using low-level visual routines in a step-by-step fashion until a
final answer is reached. We follow the same paradigm to enable existing large
language models, with minimal changes to the architecture, to solve visual
reasoning problems. To this end, we introduce rationales over the visual input
that allow us to integrate low-level visual capabilities, such as object
recognition and tracking, as surrogate tasks. We show competitive performance
on diverse visual reasoning tasks from the CLEVR, CATER, and ACRE datasets over
state-of-the-art models designed specifically for these tasks.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 16:31:14 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Bhattacharyya",
"Apratim",
""
],
[
"Panchal",
"Sunny",
""
],
[
"Lee",
"Mingu",
""
],
[
"Pourreza",
"Reza",
""
],
[
"Madan",
"Pulkit",
""
],
[
"Memisevic",
"Roland",
""
]
] |
new_dataset
| 0.974611 |
2306.17817
|
Theophile Gervet
|
Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, Katerina Fragkiadaki
|
Act3D: Infinite Resolution Action Detection Transformer for Robotic
Manipulation
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
3D perceptual representations are well suited for robot manipulation as they
easily encode occlusions and simplify spatial reasoning. Many manipulation
tasks require high spatial precision in end-effector pose prediction, typically
demanding high-resolution 3D perceptual grids that are computationally
expensive to process. As a result, most manipulation policies operate directly
in 2D, foregoing 3D inductive biases. In this paper, we propose Act3D, a
manipulation policy Transformer that casts 6-DoF keypose prediction as 3D
detection with adaptive spatial computation. It takes as input 3D feature
clouds unprojected from one or more camera views, iteratively samples 3D point
grids in free space in a coarse-to-fine manner, featurizes them using relative
spatial attention to the physical feature cloud, and selects the best feature
point for end-effector pose prediction. Act3D sets a new state of the art on
RLBench, an established manipulation benchmark. Our model achieves 10% absolute
improvement over the previous SOTA 2D multi-view policy on 74 RLBench tasks and
22% absolute improvement with 3x less compute over the previous SOTA 3D policy.
In thorough ablations, we show the importance of relative spatial attention,
large-scale vision-language pre-trained 2D backbones, and weight tying across
coarse-to-fine attentions. Code and videos are available at our project site:
https://act3d.github.io/.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 17:34:06 GMT"
}
] | 2023-07-03T00:00:00 |
[
[
"Gervet",
"Theophile",
""
],
[
"Xian",
"Zhou",
""
],
[
"Gkanatsios",
"Nikolaos",
""
],
[
"Fragkiadaki",
"Katerina",
""
]
] |
new_dataset
| 0.997764 |
2210.14290
|
Bin Guo
|
Bin Guo, Emil Sekerinski
|
Parallel Order-Based Core Maintenance in Dynamic Graphs
|
Published on 52nd International Conference on Parallel Processing
(ICPP 2023), 17 pages, 7 figures, 2 tables
| null |
10.1145/3605573.3605597
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The core numbers of vertices in a graph are one of the most well-studied
cohesive subgraph models because of the linear running time. In practice, many
data graphs are dynamic graphs that are continuously changing by inserting or
removing edges. The core numbers are updated in dynamic graphs with edge
insertions and deletions, which is called core maintenance. When a burst of a
large number of inserted or removed edges comes in, we have to handle these
edges on time to keep up with the data stream. There are two main sequential
algorithms for core maintenance, \textsc{Traversal} and \textsc{Order}. It is
proved that the \textsc{Order} algorithm significantly outperforms the
\textsc{Traversal} algorithm over all tested graphs, with up to 2,083 times
speedups.
To the best of our knowledge, all existing parallel approaches are based on
the \textsc{Traversal} algorithm; also, their parallelism exists only for
affected vertices with different core numbers, which reduces to sequential
execution when all vertices have the same core numbers. In this paper, we
propose a new parallel core maintenance algorithm based on the \textsc{Order}
algorithm. Importantly, our
new approach always has parallelism, even for the graphs where all vertices
have the same core numbers. Extensive experiments are conducted over
real-world, temporal, and synthetic graphs on a 64-core machine. The results
show that for inserting and removing 100,000 edges using 16 workers, our method
achieves up to 289x and 10x speedups, respectively, compared with the most
efficient existing method.
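For readers unfamiliar with the problem, a minimal sketch of the core
maintenance semantics (a naive recompute-and-diff baseline in Python with
networkx, not the paper's parallel Order-based algorithm; the inserted edge is
a hypothetical example):

    import networkx as nx

    # Recompute core numbers after an edge insertion and inspect which
    # vertices were affected; maintenance algorithms update only these.
    G = nx.karate_club_graph()
    before = nx.core_number(G)
    G.add_edge(0, 9)                     # a hypothetical inserted edge
    after = nx.core_number(G)
    print({v for v in G if before[v] != after[v]})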
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 19:32:08 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 15:56:48 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Guo",
"Bin",
""
],
[
"Sekerinski",
"Emil",
""
]
] |
new_dataset
| 0.993688 |
2210.16561
|
Hu Zhiheng
|
Zhiheng Hu, Yongzhen Wang, Peng Li, Jie Qin, Haoran Xie, Mingqiang Wei
|
iSmallNet: Densely Nested Network with Label Decoupling for Infrared
Small Target Detection
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Small targets are often submerged in cluttered backgrounds of infrared
images. Conventional detectors tend to generate false alarms, while CNN-based
detectors lose small targets in deep layers. To this end, we propose iSmallNet,
a multi-stream densely nested network with label decoupling for infrared small
object detection. On the one hand, to fully exploit the shape information of
small targets, we decouple the original labeled ground-truth (GT) map into an
interior map and a boundary one. The GT map, in collaboration with the two
additional maps, tackles the unbalanced distribution of small object
boundaries. On the other hand, two key modules are delicately designed and
incorporated into the proposed network to boost the overall performance. First,
to maintain small targets in deep layers, we develop a multi-scale nested
interaction module to explore a wide range of context information. Second, we
develop an interior-boundary fusion module to integrate multi-granularity
information. Experiments on NUAA-SIRST and NUDT-SIRST clearly show the
superiority of iSmallNet over 11 state-of-the-art detectors.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 10:27:54 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 10:51:43 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Hu",
"Zhiheng",
""
],
[
"Wang",
"Yongzhen",
""
],
[
"Li",
"Peng",
""
],
[
"Qin",
"Jie",
""
],
[
"Xie",
"Haoran",
""
],
[
"Wei",
"Mingqiang",
""
]
] |
new_dataset
| 0.999788 |
2212.00423
|
Kim Bjerge
|
Kim Bjerge, Carsten Eie Frigaard and Henrik Karstoft
|
Motion Informed Object Detection of Small Insects in Time-lapse Camera
Recordings
|
10 pages, 6 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Insects as pollinators play a crucial role in ecosystem management and world
food production. However, insect populations are declining, calling for
efficient methods of insect monitoring. Existing methods analyze video or
time-lapse images of insects in nature, but the analysis is challenging since
insects are small objects in complex and dynamic scenes of natural vegetation.
In this work, we provide a dataset of primarily honeybees visiting three
different plant species during two months of the summer period. The dataset
consists of 107,387 annotated time-lapse images from multiple cameras,
including 9,423 annotated insects. We present a method pipeline for detecting
insects in time-lapse RGB images. The pipeline consists of a two-step process.
Firstly, the time-lapse RGB images are preprocessed to enhance insects in the
images. This Motion-Informed-Enhancement technique uses motion and colors to
enhance insects in images. Secondly, the enhanced images are subsequently fed
into a Convolutional Neural Network (CNN) object detector. The method improves
the deep learning object detectors You Only Look Once (YOLO) and Faster
Region-based CNN (Faster R-CNN). Using Motion-Informed-Enhancement, the
YOLO-detector improves the average micro F1-score from 0.49 to 0.71, and the
Faster R-CNN-detector improves the average micro F1-score from 0.32 to 0.56 on
the dataset. Our dataset and proposed method provide a step forward to automate
the time-lapse camera monitoring of flying insects. The dataset is published
on: https://vision.eng.au.dk/mie/
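A minimal sketch of the motion-informed idea (our reading, using frame
differencing in OpenCV; the file names and threshold are assumptions, not the
authors' exact pipeline):

    import cv2

    prev = cv2.imread('frame_t0.jpg')     # consecutive time-lapse frames
    curr = cv2.imread('frame_t1.jpg')
    motion = cv2.absdiff(curr, prev)      # motion cue from frame difference
    gray = cv2.cvtColor(motion, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
    enhanced = cv2.bitwise_and(curr, curr, mask=mask)  # keep moving pixels
    cv2.imwrite('enhanced.jpg', enhanced)  # input to the CNN detector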
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 10:54:06 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 15:01:00 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Bjerge",
"Kim",
""
],
[
"Frigaard",
"Carsten Eie",
""
],
[
"Karstoft",
"Henrik",
""
]
] |
new_dataset
| 0.999776 |
2212.03639
|
Lianxin Zhang
|
Lianxin Zhang, Xiaoqiang Ji, Yang Jiao, Yihan Huang, and Huihuan Qian
|
Design and Control of the "TransBoat": A Transformable Unmanned Surface
Vehicle for Overwater Construction
| null | null |
10.1109/TMECH.2022.3215506
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the TransBoat, a novel omnidirectional unmanned surface
vehicle (USV) with a magnet-based docking system for overwater construction under
wave disturbances. This is the first such USV that can build overwater
structures by transporting modules. The TransBoat incorporates two features
designed to reject wave disturbances. First, the TransBoat's expandable body
structure can actively transform from a mono-hull into a multi-hull for
stabilization in turbulent environments by extending its four outrigger hulls.
Second, a real-time nonlinear model predictive control (NMPC) scheme is
proposed for all shapes of the TransBoat to enhance its maneuverability and
resist disturbance to its movement, based on a nonlinear dynamic model. An
experimental approach is proposed to identify the parameters of the dynamic
model, and a subsequent trajectory tracking test validates the dynamics, NMPC
controller and system mobility. Further, docking experiments identify improved
performance in the expanded form of the TransBoat compared with the contracted
form, including an increased success rate (of ~ 10%) and reduced docking time
(of ~ 40 s on average). Finally, a bridge construction test verifies our system
design and the NMPC control method.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 13:52:11 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Zhang",
"Lianxin",
""
],
[
"Ji",
"Xiaoqiang",
""
],
[
"Jiao",
"Yang",
""
],
[
"Huang",
"Yihan",
""
],
[
"Qian",
"Huihuan",
""
]
] |
new_dataset
| 0.999394 |
2212.06272
|
Khaleel Mershad
|
Khaleel Mershad and Omar Cheikhrouhou
|
Lightweight Blockchain Solutions: Taxonomy, Research Progress, and
Comprehensive Review
|
86 pages, 7 figures
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The proliferation of resource-constrained devices has become prevalent across
various digital applications, including smart homes, healthcare, the Internet
of Vehicles, and the Internet of Flying Things, among others. However, the
integration of these devices brings many security issues. To address these
concerns, Blockchain technology has been widely adopted due to its robust
security characteristics, including immutability, cryptography, and distributed
consensus. However, implementing the blockchain within these networks is highly
challenging due to the limited resources of the employed devices and the
resource-intensive requirements of the blockchain. To overcome these
challenges, a multitude of researchers have proposed lightweight blockchain
solutions specifically designed for resource-constrained networks. In this
paper, we present a taxonomy of lightweight blockchain solutions proposed in
the literature. More precisely, we identify five areas within the "lightweight"
concept, namely, blockchain architecture, device authentication, cryptography
model, consensus algorithm, and storage method. We discuss the various methods
employed in each "lightweight" category, highlighting existing gaps and
identifying areas for improvement. Our review highlights the missing points in
existing systems and paves the way to building a complete lightweight
blockchain solution for networks of resource-constrained devices.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 22:28:22 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 09:08:17 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Mershad",
"Khaleel",
""
],
[
"Cheikhrouhou",
"Omar",
""
]
] |
new_dataset
| 0.95857 |
2301.07087
|
Ond\v{r}ej Pl\'atek
|
Ond\v{r}ej Pl\'atek, Ond\v{r}ej Du\v{s}ek
|
MooseNet: A Trainable Metric for Synthesized Speech with a PLDA Module
|
Accepted to SSW 12: https://openreview.net/forum?id=V6RZk6RzSu
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present MooseNet, a trainable speech metric that predicts the listeners'
Mean Opinion Score (MOS). We propose a novel approach where the Probabilistic
Linear Discriminative Analysis (PLDA) generative model is used on top of an
embedding obtained from a self-supervised learning (SSL) neural network (NN)
model. We show that PLDA works well with a non-finetuned SSL model when trained
only on 136 utterances (ca. one minute training time) and that PLDA
consistently improves various neural MOS prediction models, even
state-of-the-art models with task-specific fine-tuning. Our ablation study
shows PLDA training superiority over SSL model fine-tuning in a low-resource
scenario. We also improve SSL model fine-tuning using a convenient optimizer
choice and additional contrastive and multi-task training objectives. The
fine-tuned MooseNet NN with the PLDA module achieves the best results,
surpassing the SSL baseline on the VoiceMOS Challenge data.
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 18:53:15 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 06:33:58 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Plátek",
"Ondřej",
""
],
[
"Dušek",
"Ondřej",
""
]
] |
new_dataset
| 0.983906 |
2301.10910
|
Kazumi Kasaura
|
Kazumi Kasaura, Ryo Yonetani, Mai Nishimura
|
Periodic Multi-Agent Path Planning
|
7 pages with 2 pages appendix and 2 pages reference, 8 figures and 2
tables, to be published in the proceedings of AAAI Conference on Artificial
Intelligence (AAAI) 2023
|
Proceedings of the AAAI Conference on Artificial Intelligence
37(5) (2023) 6183-6191
|
10.1609/aaai.v37i5.25762
| null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-agent path planning (MAPP) is the problem of planning collision-free
trajectories from start to goal locations for a team of agents. This work
explores a relatively unexplored setting of MAPP where streams of agents have
to go through the starts and goals with high throughput. We tackle this problem
by formulating a new variant of MAPP called periodic MAPP in which the timing
of agent appearances is periodic. The objective with periodic MAPP is to find a
periodic plan, a set of collision-free trajectories that the agent streams can
use repeatedly over periods, with periods that are as small as possible. To
meet this objective, we propose a solution method that is based on constraint
relaxation and optimization. We show that the periodic plans once found can be
used for a more practical case in which agents in a stream can appear at random
times. We confirm the effectiveness of our method compared with baseline
methods in terms of throughput in several scenarios that abstract autonomous
intersection management tasks.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 02:40:56 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 07:47:16 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Kasaura",
"Kazumi",
""
],
[
"Yonetani",
"Ryo",
""
],
[
"Nishimura",
"Mai",
""
]
] |
new_dataset
| 0.999044 |
2304.13390
|
Liu Hongwei
|
Hongwei Liu, Jian Yang, Jianfeng Zhang, Dongheng Shao, Jielong Guo,
Shaobo Li, Xuan Tang, Xian Wei
|
Group Equivariant BEV for 3D Object Detection
|
8 pages,3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, 3D object detection has attracted significant attention and
achieved continuous improvement in real road scenarios. The environmental
information is collected from a single sensor or multi-sensor fusion to detect
interested objects. However, most of the current 3D object detection approaches
focus on developing advanced network architectures to improve the detection
precision of the object rather than considering the dynamic driving scenes,
where data collected from sensors equipped in the vehicle contain various
perturbation features. As a result, existing work still cannot tackle the
perturbation issue. In order to solve this problem, we propose a group
equivariant bird's eye view network (GeqBevNet) based on the group equivariant
theory, which introduces the concept of group equivariant into the BEV fusion
object detection network. The group equivariant network is embedded into the
fused BEV feature map to facilitate the BEV-level rotational equivariant
feature extraction, thus leading to lower average orientation error. In order
to demonstrate the effectiveness of the GeqBevNet, the network is verified on
the nuScenes validation dataset in which mAOE can be decreased to 0.325.
Experimental results demonstrate that GeqBevNet can extract more rotational
equivariant features in the 3D object detection of the actual road scene and
improve the performance of object orientation prediction.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 09:00:31 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 03:22:08 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Liu",
"Hongwei",
""
],
[
"Yang",
"Jian",
""
],
[
"Zhang",
"Jianfeng",
""
],
[
"Shao",
"Dongheng",
""
],
[
"Guo",
"Jielong",
""
],
[
"Li",
"Shaobo",
""
],
[
"Tang",
"Xuan",
""
],
[
"Wei",
"Xian",
""
]
] |
new_dataset
| 0.990206 |
2304.14712
|
Eneko Osaba
|
Eneko Osaba, Esther Villar-Rodriguez and Sebasti\'an V. Romero
|
Benchmark dataset and instance generator for Real-World
Three-Dimensional Bin Packing Problems
|
11 pages, 4 figures
|
Data in Brief, 109309 (2023)
|
10.1016/j.dib.2023.109309
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, a benchmark for real-world bin packing problems is proposed.
This dataset consists of 12 instances of varying levels of complexity regarding
size (with the number of packages ranging from 38 to 53) and user-defined
requirements. In fact, several real-world-oriented restrictions were taken into
account to build these instances: i) item and bin dimensions, ii) weight
restrictions, iii) affinities among package categories, iv) preferences for
package ordering, and v) load balancing. Besides the data, we also offer our
own Python script for dataset generation, coined Q4RealBPP-DataGen.
The benchmark was initially proposed to evaluate the performance of quantum
solvers. Therefore, the characteristics of this set of instances were designed
according to the current limitations of quantum devices. Additionally, the
dataset generator is included to allow the construction of general-purpose
benchmarks. The data introduced in this article provides a baseline that will
encourage quantum computing researchers to work on real-world bin packing
problems.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 09:29:43 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 14:08:52 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Jun 2023 08:11:15 GMT"
},
{
"version": "v4",
"created": "Thu, 29 Jun 2023 09:31:14 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Osaba",
"Eneko",
""
],
[
"Villar-Rodriguez",
"Esther",
""
],
[
"Romero",
"Sebastián V.",
""
]
] |
new_dataset
| 0.999582 |
2306.09650
|
Tse-Tin Chan
|
Jiajia Shi, Tse-Tin Chan, Haoyuan Pan, Tat-Ming Lok
|
Reconfigurable Intelligent Surface Assisted Semantic Communication
Systems
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic communication, which focuses on conveying the meaning of information
rather than exact bit reconstruction, has gained considerable attention in
recent years. Meanwhile, reconfigurable intelligent surface (RIS) is a
promising technology that can achieve high spectral and energy efficiency by
dynamically reflecting incident signals through programmable passive
components. In this paper, we put forth a semantic communication scheme aided
by RIS. Using text transmission as an example, experimental results demonstrate
that the RIS-assisted semantic communication system outperforms the
point-to-point semantic communication system in terms of bilingual evaluation
understudy (BLEU) scores in Rayleigh fading channels, especially at low
signal-to-noise ratio (SNR) regimes. In addition, the RIS-assisted semantic
communication system exhibits superior robustness against channel estimation
errors compared to its point-to-point counterpart. RIS can improve performance
as it provides extra line-of-sight (LoS) paths and enhances signal propagation
conditions compared to point-to-point systems.
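A hedged sketch of the BLEU-based evaluation mentioned above (sentence-level
BLEU via NLTK; the two sentences are placeholders for a transmitted text and
its semantic reconstruction):

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = ['the ris improves the received signal quality'.split()]
    candidate = 'the ris improves received signal quality'.split()
    smooth = SmoothingFunction().method1
    print(sentence_bleu(reference, candidate, smoothing_function=smooth))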
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 07:04:14 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 15:04:56 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Shi",
"Jiajia",
""
],
[
"Chan",
"Tse-Tin",
""
],
[
"Pan",
"Haoyuan",
""
],
[
"Lok",
"Tat-Ming",
""
]
] |
new_dataset
| 0.995231 |
2306.14406
|
Xinquan Yang
|
Xinquan Yang and Jinheng Xie and Xuguang Li and Xuechen Li and Xin Li
and Linlin Shen and Yongqiang Deng
|
TCEIP: Text Condition Embedded Regression Network for Dental Implant
Position Prediction
|
MICCAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While deep neural networks have been proposed to assist dentists in designing
the location of dental implants, most target simple cases where only one tooth
is missing. As a result, existing works do not perform well when there are
multiple missing teeth and easily generate false predictions when the teeth are
sparsely distributed. In this paper, we integrate a weak supervision text, the
target region, into the implant position regression network to address the
above issues. We propose a text
condition embedded implant position regression network (TCEIP), to embed the
text condition into the encoder-decoder framework for improvement of the
regression performance. A cross-modal interaction that consists of cross-modal
attention (CMA) and knowledge alignment module (KAM) is proposed to facilitate
the interaction between features of images and texts. The CMA module performs a
cross-attention between the image feature and the text condition, and the KAM
mitigates the knowledge gap between the image feature and the image encoder of
CLIP. Extensive experiments on a dental implant dataset with five-fold
cross-validation demonstrate that the proposed TCEIP achieves superior
performance compared with existing methods.
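A minimal PyTorch sketch in the spirit of the CMA step (the dimensions and the
residual fusion are our assumptions, not the paper's exact design):

    import torch
    import torch.nn as nn

    image_feat = torch.randn(1, 196, 256)  # B x patches x C image feature
    text_cond = torch.randn(1, 1, 256)     # B x tokens x C text condition
    cma = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
    fused, _ = cma(query=image_feat, key=text_cond, value=text_cond)
    out = image_feat + fused               # residual fusion before decoding
    print(out.shape)                       # torch.Size([1, 196, 256])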
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 03:38:43 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 12:52:56 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Yang",
"Xinquan",
""
],
[
"Xie",
"Jinheng",
""
],
[
"Li",
"Xuguang",
""
],
[
"Li",
"Xuechen",
""
],
[
"Li",
"Xin",
""
],
[
"Shen",
"Linlin",
""
],
[
"Deng",
"Yongqiang",
""
]
] |
new_dataset
| 0.985876 |
2306.15662
|
Jiaye Wu
|
Jiaye Wu, Sanjoy Chowdhury, Hariharmano Shanmugaraja, David Jacobs,
and Soumyadip Sengupta
|
Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation
|
Accepted into ICCP2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Intrinsic image decomposition and inverse rendering are long-standing
problems in computer vision. To evaluate albedo recovery, most algorithms
report their quantitative performance with a mean Weighted Human Disagreement
Rate (WHDR) metric on the IIW dataset. However, WHDR focuses only on relative
albedo values and often fails to capture overall quality of the albedo. In
order to comprehensively evaluate albedo, we collect a new dataset, Measured
Albedo in the Wild (MAW), and propose three new metrics that complement WHDR:
intensity, chromaticity and texture metrics. We show that existing algorithms
often improve WHDR metric but perform poorly on other metrics. We then finetune
different algorithms on our MAW dataset to significantly improve the quality of
the reconstructed albedo both quantitatively and qualitatively. Since the
proposed intensity, chromaticity, and texture metrics and the WHDR are all
complementary, we further introduce a relative performance measure that captures
average performance. By analysing existing algorithms we show that there is
significant room for improvement. Our dataset and evaluation metrics will
enable researchers to develop algorithms that improve albedo reconstruction.
Code and Data available at: https://measuredalbedo.github.io/
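For context, a hedged sketch of the WHDR metric critiqued above (following the
usual IIW formulation; `judgments` holds (point1, point2, label, weight) tuples
and is a placeholder):

    def whdr(albedo, judgments, delta=0.1):
        # label: 'E' (equal), '1' (point1 darker), '2' (point2 darker)
        err = tot = 0.0
        for p1, p2, label, weight in judgments:
            r = albedo[p1] / max(albedo[p2], 1e-8)
            if r > 1 + delta:
                pred = '2'            # point1 brighter => point2 darker
            elif r < 1 / (1 + delta):
                pred = '1'
            else:
                pred = 'E'
            err += weight * (pred != label)
            tot += weight
        return err / tot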
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 17:55:33 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 17:42:44 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Wu",
"Jiaye",
""
],
[
"Chowdhury",
"Sanjoy",
""
],
[
"Shanmugaraja",
"Hariharmano",
""
],
[
"Jacobs",
"David",
""
],
[
"Sengupta",
"Soumyadip",
""
]
] |
new_dataset
| 0.998287 |
2306.15664
|
Wei-Yao Wang
|
Wei-Yao Wang, Wei-Wei Du, Wen-Chih Peng
|
ShuttleSet22: Benchmarking Stroke Forecasting with Stroke-Level
Badminton Dataset
|
IT4PSS @ IJCAI-23 and CoachAI Badminton Challenge Track 2 @ IJCAI-23.
Challenge website: https://sites.google.com/view/coachai-challenge-2023/
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, badminton analytics has drawn attention due to the
advancement of artificial intelligence and the efficiency of data collection.
While there is a line of effective applications to improve and investigate
player performance, there are only a few public badminton datasets that can be
used for researchers outside the badminton domain. Existing badminton singles
datasets focus on specific matchups; however, they cannot provide comprehensive
studies on different players and various matchups. In this paper, we provide a
badminton singles dataset, ShuttleSet22, which is collected from high-ranking
matches in 2022. ShuttleSet22 consists of 30,172 strokes in 2,888 rallies in
the training set, 1,400 strokes in 450 rallies in the validation set, and 2,040
strokes in 654 rallies in the testing set with detailed stroke-level metadata
within a rally. To benchmark existing work with ShuttleSet22, we test the
state-of-the-art stroke forecasting approach, ShuttleNet, with the
corresponding stroke forecasting task, i.e., predicting future strokes based
on the given strokes of each rally. We also hold a challenge, Track 2:
Forecasting Future Turn-Based Strokes in Badminton Rallies, at CoachAI
Badminton Challenge 2023 to encourage researchers to tackle this problem. The
baseline codes and the dataset will be made available on
https://github.com/wywyWang/CoachAI-Projects/tree/main/CoachAI-Challenge-IJCAI2023.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 17:57:34 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 20:50:24 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Wang",
"Wei-Yao",
""
],
[
"Du",
"Wei-Wei",
""
],
[
"Peng",
"Wen-Chih",
""
]
] |
new_dataset
| 0.999877 |
2306.16125
|
Simon Sanchez Viloria Mr
|
Simon Sanchez Viloria, Daniel Peix del R\'io, Rub\'en Berm\'udez Cabo,
Guillermo Arturo Arrojo Fuentes, Isabel Segura-Bedmar
|
A Framework for Identifying Depression on Social Media:
MentalRiskES@IberLEF 2023
|
Submitted to the Proceedings of IberLEF 2023, September 2023, Ja\'en,
Spain
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes our participation in the MentalRiskES task at IberLEF
2023. The task involved predicting the likelihood of an individual experiencing
depression based on their social media activity. The dataset consisted of
conversations from 175 Telegram users, each labeled according to their evidence
of suffering from the disorder. We used a combination of traditional machine
learning and deep learning techniques to solve four predictive subtasks: binary
classification, simple regression, multiclass classification, and multi-output
regression.
We approached this by training a model to solve the multi-output regression
case and then transforming the predictions to work for the other three
subtasks.
We compare the performance of two modeling approaches: fine-tuning a
BERT-based model directly for the task or using its embeddings as inputs to a
linear regressor, with the latter yielding better results. The code to
reproduce our results can be found at:
https://github.com/simonsanvil/EarlyDepression-MentalRiskES
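A minimal sketch of the better-performing variant (frozen transformer
embeddings fed to a linear regressor; the model name and toy targets are
assumptions, not the system's exact configuration):

    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import Ridge

    name = 'bert-base-multilingual-cased'   # placeholder Spanish-capable model
    tok = AutoTokenizer.from_pretrained(name)
    enc = AutoModel.from_pretrained(name)

    def embed(texts):
        batch = tok(texts, padding=True, truncation=True, return_tensors='pt')
        with torch.no_grad():
            return enc(**batch).last_hidden_state[:, 0].numpy()  # [CLS]

    X = embed(['me siento muy triste', 'hoy fue un buen dia'])
    reg = Ridge().fit(X, [0.9, 0.1])        # toy risk targets in [0, 1]
    print(reg.predict(X))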
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 11:53:07 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 07:02:59 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Viloria",
"Simon Sanchez",
""
],
[
"del Río",
"Daniel Peix",
""
],
[
"Cabo",
"Rubén Bermúdez",
""
],
[
"Fuentes",
"Guillermo Arturo Arrojo",
""
],
[
"Segura-Bedmar",
"Isabel",
""
]
] |
new_dataset
| 0.990922 |
2306.16495
|
Quanzhi Li
|
Quanzhi Li, Yang Chao, Dong Li, Yao Lu, Chi Zhang
|
Event Detection from Social Media Stream: Methods, Datasets and
Opportunities
|
8 pages
| null | null | null |
cs.SI cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Social media streams contain a large and diverse amount of information, ranging
from daily-life stories to the latest global and local events and news.
Twitter, especially, allows a fast spread of events happening real time, and
enables individuals and organizations to stay informed of the events happening
now. Event detection from social media data poses different challenges from
traditional text and is a research area that has attracted much attention in
recent years. In this paper, we survey a wide range of event detection methods
for Twitter data stream, helping readers understand the recent development in
this area. We present the datasets available to the public. Furthermore, a few
open research opportunities are discussed.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 18:40:03 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Li",
"Quanzhi",
""
],
[
"Chao",
"Yang",
""
],
[
"Li",
"Dong",
""
],
[
"Lu",
"Yao",
""
],
[
"Zhang",
"Chi",
""
]
] |
new_dataset
| 0.975278 |
2306.16516
|
Hasan Pourmahmood Aghababa
|
Jeff M. Phillips and Hasan Pourmahmood-Aghababa
|
For Kernel Range Spaces a Constant Number of Queries Are Sufficient
|
27 pages
| null | null | null |
cs.CG cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce the notion of an $\varepsilon$-cover for a kernel range space. A
kernel range space concerns a set of points $X \subset \mathbb{R}^d$ and the
space of all queries by a fixed kernel (e.g., a Gaussian kernel $K(p,\cdot) =
\exp(-\|p-\cdot\|^2)$). For a point set $X$ of size $n$, a query returns a
vector of values $R_p \in \mathbb{R}^n$, where the $i$th coordinate $(R_p)_i =
K(p,x_i)$ for $x_i \in X$. An $\varepsilon$-cover is a subset of points $Q
\subset \mathbb{R}^d$ so for any $p \in \mathbb{R}^d$ that $\frac{1}{n} \|R_p -
R_q\|_1\leq \varepsilon$ for some $q \in Q$. This is a smooth analog of
Haussler's notion of $\varepsilon$-covers for combinatorial range spaces (e.g.,
defined by subsets of points within a ball query) where the resulting vectors
$R_p$ are in $\{0,1\}^n$ instead of $[0,1]^n$. The kernel versions of these
range spaces show up in data analysis tasks where the coordinates may be
uncertain or imprecise, and hence one wishes to add some flexibility in the
notion of inside and outside of a query range.
Our main result is that, unlike combinatorial range spaces, the size of
kernel $\varepsilon$-covers is independent of the input size $n$ and dimension
$d$. We obtain a bound of $(1/\varepsilon)^{\tilde O(1/\varepsilon^2)}$, where
$\tilde{O}(f(1/\varepsilon))$ hides log factors in $(1/\varepsilon)$ that can
depend on the kernel. This implies that by relaxing the notion of boundaries in
range queries, eventually the curse of dimensionality disappears, and may help
explain the success of machine learning in very high-dimensions. We also
complement this result with a lower bound of almost
$(1/\varepsilon)^{\Omega(1/\varepsilon)}$, showing the exponential dependence
on $1/\varepsilon$ is necessary.
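The definitions above translate directly into code; a minimal NumPy sketch of
the query vector $R_p$ and the normalized $L_1$ distance that defines the
cover (the point set and queries are arbitrary examples):

    import numpy as np

    def R(p, X):                       # (R_p)_i = exp(-||p - x_i||^2)
        return np.exp(-np.sum((X - p) ** 2, axis=1))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))     # point set of size n
    p, q = np.array([0.30, 0.00]), np.array([0.35, 0.02])
    dist = np.abs(R(p, X) - R(q, X)).mean()   # (1/n) * ||R_p - R_q||_1
    print(dist)                        # q covers p when dist <= epsilon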
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 19:19:33 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Phillips",
"Jeff M.",
""
],
[
"Pourmahmood-Aghababa",
"Hasan",
""
]
] |
new_dataset
| 0.956054 |
2306.16538
|
Lei Tong
|
Lei Tong, Adam Corrigan, Navin Rathna Kumar, Kerry Hallbrook, Jonathan
Orme, Yinhai Wang, Huiyu Zhou
|
CLANet: A Comprehensive Framework for Cross-Batch Cell Line
Identification Using Brightfield Images
|
15 pages, 10 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cell line authentication plays a crucial role in the biomedical field,
ensuring researchers work with accurately identified cells. Supervised deep
learning has made remarkable strides in cell line identification by studying
cell morphological features through cell imaging. However, batch effects, a
significant issue stemming from the different times at which data is generated,
lead to substantial shifts in the underlying data distribution, thus
complicating reliable differentiation between cell lines from distinct batch
cultures. To address this challenge, we introduce CLANet, a pioneering
framework for cross-batch cell line identification using brightfield images,
specifically designed to tackle three distinct batch effects. We propose a cell
cluster-level selection method to efficiently capture cell density variations,
and a self-supervised learning strategy to manage image quality variations,
thus producing reliable patch representations. Additionally, we adopt multiple
instance learning (MIL) for effective aggregation of instance-level features for
cell line identification. Our innovative time-series segment sampling module
further enhances MIL's feature-learning capabilities, mitigating biases from
varying incubation times across batches. We validate CLANet using data from 32
cell lines across 93 experimental batches from the AstraZeneca Global Cell
Bank. Our results show that CLANet outperforms related approaches (e.g. domain
adaptation, MIL), demonstrating its effectiveness in addressing batch effects
in cell line identification.
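A generic attention-based MIL aggregation sketch (a common recipe for pooling
patch embeddings into a bag-level prediction; the dimensions are illustrative
assumptions, not CLANet's exact module):

    import torch
    import torch.nn as nn

    class MILPool(nn.Module):
        def __init__(self, dim=512, n_classes=32):
            super().__init__()
            self.attn = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(),
                                      nn.Linear(128, 1))
            self.head = nn.Linear(dim, n_classes)

        def forward(self, patches):                      # patches: N x dim
            w = torch.softmax(self.attn(patches), dim=0) # attention weights
            return self.head((w * patches).sum(dim=0))   # bag-level logits

    print(MILPool()(torch.randn(50, 512)).shape)  # torch.Size([32])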
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 20:24:53 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Tong",
"Lei",
""
],
[
"Corrigan",
"Adam",
""
],
[
"Kumar",
"Navin Rathna",
""
],
[
"Hallbrook",
"Kerry",
""
],
[
"Orme",
"Jonathan",
""
],
[
"Wang",
"Yinhai",
""
],
[
"Zhou",
"Huiyu",
""
]
] |
new_dataset
| 0.951046 |
2306.16551
|
Jinhee Yu
|
Jinhee Yu, Jingdao Chen, Lalitha Dabbiru, Christopher T. Goodin
|
Analysis of LiDAR Configurations on Off-road Semantic Segmentation
Performance
| null | null |
10.1117/12.2663098
| null |
cs.CV cs.RO eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper investigates the impact of LiDAR configuration shifts on the
performance of 3D LiDAR point cloud semantic segmentation models, a topic not
extensively studied before. We explore the effect of using different LiDAR
channels when training and testing a 3D LiDAR point cloud semantic segmentation
model, utilizing Cylinder3D for the experiments. A Cylinder3D model is trained
and tested on simulated 3D LiDAR point cloud datasets created using the
Mississippi State University Autonomous Vehicle Simulator (MAVS) and 32- and
64-channel 3D LiDAR point clouds of the RELLIS-3D dataset collected in a
real-world off-road environment. Our experimental results demonstrate that
sensor and spatial domain shifts significantly impact the performance of
LiDAR-based semantic segmentation models. In the absence of spatial domain
changes between training and testing, models trained and tested on the same
sensor type generally exhibited better performance. Moreover, higher-resolution
sensors showed improved performance compared to those with lower-resolution
ones. However, results varied when spatial domain changes were present. In some
cases, the advantage of a sensor's higher resolution led to better performance
both with and without sensor domain shifts. In other instances, the higher
resolution resulted in overfitting within a specific domain, causing a lack of
generalization capability and decreased performance when tested on data with
different sensor configurations.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 20:41:45 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Yu",
"Jinhee",
""
],
[
"Chen",
"Jingdao",
""
],
[
"Dabbiru",
"Lalitha",
""
],
[
"Goodin",
"Christopher T.",
""
]
] |
new_dataset
| 0.983807 |
2306.16576
|
Urvashi Kishnani
|
Urvashi Kishnani, Srinidhi Madabhushi and Sanchari Das
|
Blockchain in Oil and Gas Supply Chain: A Literature Review from User
Security and Privacy Perspective
| null | null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain's influence extends beyond finance, impacting diverse sectors such
as real estate, oil and gas, and education. This extensive reach stems from
blockchain's intrinsic ability to reliably manage digital transactions and
supply chains. Within the oil and gas sector, the merger of blockchain with
supply chain management and data handling is a notable trend. The supply chain
encompasses several operations: extraction, transportation, trading, and
distribution of resources. Unfortunately, the current supply chain structure
lacks critical features such as transparency, traceability, flexible trading,
and secure data storage - all of which blockchain can provide. Nevertheless, it
is essential to investigate blockchain's security and privacy in the oil and
gas industry. Such scrutiny enables the smooth, secure, and usable execution of
transactions. For this purpose, we reviewed 124 peer-reviewed academic
publications, conducting an in-depth analysis of 21 among them. We classified
the articles by their relevance to various phases of the supply chain flow:
upstream, midstream, downstream, and data management. Despite blockchain's
potential to address existing security and privacy voids in the supply chain,
there is a significant lack of practical implementation of blockchain
integration in oil and gas operations. This deficiency substantially challenges
the transition from conventional methods to a blockchain-centric approach.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 21:45:23 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Kishnani",
"Urvashi",
""
],
[
"Madabhushi",
"Srinidhi",
""
],
[
"Das",
"Sanchari",
""
]
] |
new_dataset
| 0.960098 |
2306.16623
|
Lucas Prado Osco
|
Lucas Prado Osco, Qiusheng Wu, Eduardo Lopes de Lemos, Wesley Nunes
Gon\c{c}alves, Ana Paula Marques Ramos, Jonathan Li, Jos\'e Marcato Junior
|
The Segment Anything Model (SAM) for Remote Sensing Applications: From
Zero to One Shot
|
20 pages, 9 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Segmentation is an essential step for remote sensing image processing. This
study aims to advance the application of the Segment Anything Model (SAM), an
innovative image segmentation model by Meta AI, in the field of remote sensing
image analysis. SAM is known for its exceptional generalization capabilities
and zero-shot learning, making it a promising approach to processing aerial and
orbital images from diverse geographical contexts. Our exploration involved
testing SAM across multi-scale datasets using various input prompts, such as
bounding boxes, individual points, and text descriptors. To enhance the model's
performance, we implemented a novel automated technique that combines a
text-prompt-derived general example with one-shot training. This adjustment
resulted in an improvement in accuracy, underscoring SAM's potential for
deployment in remote sensing imagery and reducing the need for manual
annotation. Despite the limitations encountered with lower spatial resolution
images, SAM exhibits promising adaptability to remote sensing data analysis. We
recommend future research to enhance the model's proficiency through
integration with supplementary fine-tuning techniques and other networks.
Furthermore, we provide the open-source code of our modifications on online
repositories, encouraging further and broader adaptations of SAM to the remote
sensing domain.
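A zero-shot point-prompt sketch with the public segment-anything package (the
checkpoint path and prompt coordinates are placeholders, and a real remote
sensing tile would replace the dummy array):

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry['vit_b'](checkpoint='sam_vit_b.pth')
    predictor = SamPredictor(sam)
    image = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-in RGB tile
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[256, 256]]),          # one foreground point
        point_labels=np.array([1]))
    print(masks.shape, scores)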
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 01:49:33 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Osco",
"Lucas Prado",
""
],
[
"Wu",
"Qiusheng",
""
],
[
"de Lemos",
"Eduardo Lopes",
""
],
[
"Gonçalves",
"Wesley Nunes",
""
],
[
"Ramos",
"Ana Paula Marques",
""
],
[
"Li",
"Jonathan",
""
],
[
"Junior",
"José Marcato",
""
]
] |
new_dataset
| 0.996655 |
2306.16636
|
Tianwen Wei
|
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang
|
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement.
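A toy sketch of the grade-level evaluation protocol (the 60% pass threshold
follows the abstract; the items and `model_answer` are hypothetical stand-ins
for the dataset and an LLM call):

    dataset = [
        {'grade': 1, 'question': '3 + 4 = ?', 'answer': '7'},
        {'grade': 6, 'question': '12 * (5 - 2) = ?', 'answer': '36'},
    ]

    def model_answer(question):            # placeholder for an LLM call
        return str(eval(question.rstrip('=? ')))

    by_grade = {}
    for item in dataset:
        ok = model_answer(item['question']) == item['answer']
        hit, tot = by_grade.get(item['grade'], (0, 0))
        by_grade[item['grade']] = (hit + ok, tot + 1)

    for grade, (hit, tot) in sorted(by_grade.items()):
        verdict = 'pass' if hit / tot >= 0.6 else 'fail'
        print(f'grade {grade}: {hit}/{tot} -> {verdict}')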
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 02:19:50 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Wei",
"Tianwen",
""
],
[
"Luan",
"Jian",
""
],
[
"Liu",
"Wei",
""
],
[
"Dong",
"Shuang",
""
],
[
"Wang",
"Bin",
""
]
] |
new_dataset
| 0.999801 |
2306.16652
|
Kassem Bagher
|
K. Bagher, S. Cui, X. Yuan, C. Rudolph, X. Yi
|
TimeClave: Oblivious In-enclave Time series Processing System
|
The short version of this paper has been accepted as a Full Paper in
the International Conference on Information and Communications Security
(ICICS) 2023
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud platforms are widely adopted by many systems, such as time series
processing systems, to store and process massive amounts of sensitive time
series data. Unfortunately, several incidents have shown that cloud platforms
are vulnerable to internal and external attacks that lead to critical data
breaches. Adopting cryptographic protocols such as homomorphic encryption and
secure multi-party computation adds high computational and network overhead to
query operations.
We present TimeClave, a fully oblivious in-enclave time series processing
system: TimeClave leverages Intel SGX to support aggregate statistics on time
series with minimal memory consumption inside the enclave. To hide the access
pattern inside the enclave, we introduce a non-blocking read-optimised ORAM
named RoORAM. TimeClave integrates RoORAM to obliviously and securely handle
client queries with high performance. With an aggregation time interval of
$10s$, $2^{14}$ summarised data blocks and 8 aggregate functions, TimeClave run
point query in $0.03ms$ and a range query of 50 intervals in $0.46ms$. Compared
to the ORAM baseline, TimeClave achieves lower query latency by up to
$2.5\times$ and up to $2\times$ throughput, with up to 22K queries per second.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 03:30:53 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Bagher",
"K.",
""
],
[
"Cui",
"S.",
""
],
[
"Yuan",
"X.",
""
],
[
"Rudolph",
"C.",
""
],
[
"Yi",
"X.",
""
]
] |
new_dataset
| 0.981734 |
2306.16665
|
Jing Mai
|
Jing Mai, Jiarui Wang, Zhixiong Di, Guojie Luo, Yun Liang and Yibo Lin
|
OpenPARF: An Open-Source Placement and Routing Framework for Large-Scale
Heterogeneous FPGAs with Deep Learning Toolkit
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes OpenPARF, an open-source placement and routing framework
for large-scale FPGA designs. OpenPARF is implemented with the deep learning
toolkit PyTorch and supports massive parallelization on GPU. The framework
proposes a novel asymmetric multi-electrostatic field system to solve FPGA
placement. It considers fine-grained routing resources inside configurable
logic blocks (CLBs) for FPGA routing and supports large-scale irregular routing
resource graphs. Experimental results on ISPD 2016 and ISPD 2017 FPGA contest
benchmarks and industrial benchmarks demonstrate that OpenPARF can achieve
0.4-12.7% improvement in routed wirelength and more than $2\times$ speedup in
placement. We believe that OpenPARF can pave the road for developing FPGA
physical design engines and stimulate further research on related topics.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 03:53:52 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Mai",
"Jing",
""
],
[
"Wang",
"Jiarui",
""
],
[
"Di",
"Zhixiong",
""
],
[
"Luo",
"Guojie",
""
],
[
"Liang",
"Yun",
""
],
[
"Lin",
"Yibo",
""
]
] |
new_dataset
| 0.961161 |
2306.16783
|
Nathan Lepora
|
Zhuochao He, Xuyang Zhang, Simon Jones, Sabine Hauert, Dandan Zhang,
Nathan F. Lepora
|
TacMMs: Tactile Mobile Manipulators for Warehouse Automation
|
8 pages, accepted in IEEE Robotics and Automation Letters, 19 June
2023
| null |
10.1109/LRA.2023.3287363
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-robot platforms are playing an increasingly important role in warehouse
automation for efficient goods transport. This paper proposes a novel
customization of a multi-robot system, called Tactile Mobile Manipulators
(TacMMs). Each TacMM integrates a soft optical tactile sensor and a mobile
robot with a load-lifting mechanism, enabling cooperative transportation in
tasks requiring coordinated physical interaction. More specifically, we mount
the TacTip (biomimetic optical tactile sensor) on the Distributed Organisation
and Transport System (DOTS) mobile robot. The tactile information then helps
the mobile robots adjust the relative robot-object pose, thereby increasing the
efficiency of load-lifting tasks. This study compares the performance of
two TacMMs using tactile perception against traditional vision-based pose
adjustment for load-lifting. The results show that the average success rate of
the TacMMs (66%) is improved over a purely vision-based method (34%), with a
larger improvement when the mass of the load was non-uniformly distributed.
Although this initial study considers two TacMMs, we expect the benefits of
tactile perception to extend to multiple mobile robots. Website:
https://sites.google.com/view/tacmms
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 08:42:01 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"He",
"Zhuochao",
""
],
[
"Zhang",
"Xuyang",
""
],
[
"Jones",
"Simon",
""
],
[
"Hauert",
"Sabine",
""
],
[
"Zhang",
"Dandan",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.999355 |
2306.16806
|
Zhenchao Lyu
|
Yuxu Chen, Hui Kou, Zhenchao Lyu
|
Free dcpo-algebras via directed spaces
|
18 pages
| null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Directed spaces are natural topological extensions of dcpos in domain theory
and form a cartesian closed category. We show that the D-completions of
free algebras over a Scott space $\Sigma L$, in the context of directed spaces,
are exactly the free dcpo-algebras over the dcpo $L$, which reveals the close
connection between directed powerspaces and powerdomains. By this result, we
provide a topological representation of upper, lower and convex powerdomains of
dcpos uniformly.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 09:36:49 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Chen",
"Yuxu",
""
],
[
"Kou",
"Hui",
""
],
[
"Lyu",
"Zhenchao",
""
]
] |
new_dataset
| 0.957467 |
2306.16917
|
David Recasens
|
David Recasens, Martin R. Oswald, Marc Pollefeys, Javier Civera
|
The Drunkard's Odometry: Estimating Camera Motion in Deforming Scenes
| null | null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating camera motion in deformable scenes poses a complex and open
research challenge. Most existing non-rigid structure-from-motion techniques
assume that static scene parts are also observed besides the deforming parts in
order to establish an anchoring reference. However, this assumption does not
hold true in certain relevant application cases such as endoscopies. Deformable
odometry and SLAM pipelines, which tackle the most challenging scenario of
exploratory trajectories, suffer from a lack of robustness and proper
quantitative evaluation methodologies. To tackle this issue with a common
benchmark, we introduce the Drunkard's Dataset, a challenging collection of
synthetic data targeting visual navigation and reconstruction in deformable
environments. This dataset is the first large set of exploratory camera
trajectories with ground truth inside 3D scenes where every surface exhibits
non-rigid deformations over time. Simulations in realistic 3D buildings let us
obtain a vast amount of data and ground truth labels, including camera poses,
RGB images and depth, optical flow and normal maps at high resolution and
quality. We further present a novel deformable odometry method, dubbed the
Drunkard's Odometry, which decomposes optical flow estimates into rigid-body
camera motion and non-rigid scene deformations. In order to validate our data,
our work contains an evaluation of several baselines as well as a novel
tracking error metric which does not require ground truth data. Dataset and
code: https://davidrecasens.github.io/TheDrunkard'sOdometry/
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 13:09:31 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Recasens",
"David",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Civera",
"Javier",
""
]
] |
new_dataset
| 0.991379 |
2306.16931
|
Junda Wang
|
Junda Wang, Zonghai Yao, Avijit Mitra, Samuel Osebe, Zhichao Yang,
Hong Yu
|
UMASS_BioNLP at MEDIQA-Chat 2023: Can LLMs generate high-quality
synthetic note-oriented doctor-patient conversations?
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the UMASS_BioNLP team's participation in the MEDIQA-Chat 2023
shared task for Task-A and Task-C. We focus especially on Task-C and propose a
novel LLM cooperation system, named the doctor-patient loop, to generate
high-quality conversation datasets. The experimental results demonstrate that
our approaches yield reasonable performance as evaluated by automatic metrics
such as ROUGE, medical concept recall, BLEU, and Self-BLEU. Furthermore, we
conducted a comparative analysis between our proposed method and ChatGPT and
GPT-4. This analysis also investigates the potential of utilizing cooperation
LLMs to generate high-quality datasets.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 13:30:41 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Wang",
"Junda",
""
],
[
"Yao",
"Zonghai",
""
],
[
"Mitra",
"Avijit",
""
],
[
"Osebe",
"Samuel",
""
],
[
"Yang",
"Zhichao",
""
],
[
"Yu",
"Hong",
""
]
] |
new_dataset
| 0.973746 |
2306.16940
|
Priyanka Patel
|
Michael J. Black, Priyanka Patel, Joachim Tesch, Jinlong Yang
|
BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike
Animated Motion
| null |
CVPR 2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show, for the first time, that neural networks trained only on synthetic
data achieve state-of-the-art accuracy on the problem of 3D human pose and
shape (HPS) estimation from real images. Previous synthetic datasets have been
small, unrealistic, or lacked realistic clothing. Achieving sufficient realism
is non-trivial and we show how to do this for full bodies in motion.
Specifically, our BEDLAM dataset contains monocular RGB videos with
ground-truth 3D bodies in SMPL-X format. It includes a diversity of body
shapes, motions, skin tones, hair, and clothing. The clothing is realistically
simulated on the moving bodies using commercial clothing physics simulation. We
render varying numbers of people in realistic scenes with varied lighting and
camera motions. We then train various HPS regressors using BEDLAM and achieve
state-of-the-art accuracy on real-image benchmarks despite training with
synthetic data. We use BEDLAM to gain insights into what model design choices
are important for accuracy. With good synthetic training data, we find that a
basic method like HMR approaches the accuracy of the current SOTA method
(CLIFF). BEDLAM is useful for a variety of tasks and all images, ground truth
bodies, 3D clothing, support code, and more are available for research
purposes. Additionally, we provide detailed information about our synthetic
data generation pipeline, enabling others to generate their own datasets. See
the project page: https://bedlam.is.tue.mpg.de/.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 13:35:16 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Black",
"Michael J.",
""
],
[
"Patel",
"Priyanka",
""
],
[
"Tesch",
"Joachim",
""
],
[
"Yang",
"Jinlong",
""
]
] |
new_dataset
| 0.999839 |
2306.16956
|
Hongjie Cai
|
Hongjie Cai, Nan Song, Zengzhi Wang, Qiming Xie, Qiankun Zhao, Ke Li,
Siwei Wu, Shijie Liu, Jianfei Yu, Rui Xia
|
MEMD-ABSA: A Multi-Element Multi-Domain Dataset for Aspect-Based
Sentiment Analysis
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect-based sentiment analysis is a long-standing research interest in the
field of opinion mining, and in recent years, researchers have gradually
shifted their focus from simple ABSA subtasks to end-to-end multi-element ABSA
tasks. However, the datasets currently used in the research are limited to
individual elements of specific tasks, usually focusing on in-domain settings,
ignoring implicit aspects and opinions, and remaining small in scale. To address
these issues, we propose a large-scale Multi-Element Multi-Domain dataset
(MEMD) that covers the four elements across five domains, including nearly
20,000 review sentences and 30,000 quadruples annotated with explicit and
implicit aspects and opinions for ABSA research. Meanwhile, we evaluate
generative and non-generative baselines on multiple ABSA subtasks under the
open domain setting, and the results show that open domain ABSA as well as
mining implicit aspects and opinions remain ongoing challenges to be addressed.
The datasets are publicly released at \url{https://github.com/NUSTM/MEMD-ABSA}.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 14:03:49 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Cai",
"Hongjie",
""
],
[
"Song",
"Nan",
""
],
[
"Wang",
"Zengzhi",
""
],
[
"Xie",
"Qiming",
""
],
[
"Zhao",
"Qiankun",
""
],
[
"Li",
"Ke",
""
],
[
"Wu",
"Siwei",
""
],
[
"Liu",
"Shijie",
""
],
[
"Yu",
"Jianfei",
""
],
[
"Xia",
"Rui",
""
]
] |
new_dataset
| 0.999287 |
2306.16992
|
Asmar Muqeet
|
Asmar Muqeet, Tao Yue, Shaukat Ali and Paolo Arcaini
|
Noise-Aware Quantum Software Testing
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum Computing (QC) promises computational speedup over classical computing
for solving some complex problems. However, noise exists in current and
near-term quantum computers. Quantum software testing (for gaining confidence
in quantum software's correctness) is inevitably impacted by noise, to the
extent that it is impossible to know if a test case failed due to noise or real
faults. Existing testing techniques test quantum programs without considering
noise, i.e., by executing tests on ideal quantum computer simulators.
Consequently, they are not directly applicable to testing quantum software on
real QC hardware or noisy simulators. To this end, we propose a noise-aware
approach (named QOIN) to alleviate the noise effect on test results of quantum
programs. QOIN employs machine learning techniques (e.g., transfer learning) to
learn the noise effect of a quantum computer and filter it from a quantum
program's outputs. Such filtered outputs are then used as the input to perform
test case assessments (determining the passing or failing of a test case
execution against a test oracle). We evaluated QOIN on IBM's 23 noise models
with nine real-world quantum programs and 1000 artificial quantum programs. We
also generated faulty versions of these programs to check if a failing test
case execution can be determined under noise. Results show that QOIN can reduce
the noise effect by more than $80\%$. To check QOIN's effectiveness for quantum
software testing, we used an existing test oracle.
The results showed that the F1-score of the test oracle was improved on average
by $82\%$ for six real-world programs and by $75\%$ for 800 artificial
programs, demonstrating that QOIN can effectively learn noise patterns and
enable noise-aware quantum software testing.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 14:51:19 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Muqeet",
"Asmar",
""
],
[
"Yue",
"Tao",
""
],
[
"Ali",
"Shaukat",
""
],
[
"Arcaini",
"Paolo",
""
]
] |
new_dataset
| 0.975543 |
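As a loose illustration of the idea the QOIN abstract describes (learn a device's noise effect once, then filter it out of a program's outputs), the following sketch fits a linear noise map from paired ideal/noisy output distributions and inverts it. This is a hypothetical stand-in: the function names, the linear model, and the least-squares inversion are assumptions made here for illustration, not the paper's transfer-learning pipeline.

    import numpy as np

    def learn_noise_map(ideal_dists, noisy_dists):
        # Fit a matrix M with M @ ideal ≈ noisy for each program; the
        # inputs are stacks of output distributions with shape
        # (n_programs, n_outcomes). Least squares is an assumed
        # simplification of learning the noise effect.
        X, *_ = np.linalg.lstsq(ideal_dists, noisy_dists, rcond=None)
        return X.T

    def filter_noise(M, noisy_dist):
        # Invert the learned map and renormalize to a distribution,
        # yielding the "filtered output" used for test assessment.
        est, *_ = np.linalg.lstsq(M, noisy_dist, rcond=None)
        est = np.clip(est, 0.0, None)
        return est / est.sum()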
2306.17000
|
Ce Zhang Dr.
|
Ce Zhang, Chengjie Zhang, Yiluan Guo, Lingji Chen, Michael Happold
|
MotionTrack: End-to-End Transformer-based Multi-Object Tracking with
  LiDAR-Camera Fusion
|
This paper is accepted by CVPR WAD 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Multiple Object Tracking (MOT) is crucial to autonomous vehicle perception.
End-to-end transformer-based algorithms, which detect and track objects
simultaneously, show great potential for the MOT task. However, most existing
methods focus on image-based tracking with a single object category. In this
paper, we propose an end-to-end transformer-based MOT algorithm (MotionTrack)
with multi-modality sensor inputs to track objects with multiple classes. Our
objective is to establish a transformer baseline for the MOT in an autonomous
driving environment. The proposed algorithm consists of a transformer-based
data association (DA) module and a transformer-based query enhancement module
to achieve MOT and Multiple Object Detection (MOD) simultaneously. MotionTrack
and its variations achieve better results (an AMOTA score of 0.55) on the
nuScenes dataset than other classical baseline models, such as AB3DMOT,
CenterTrack, and the probabilistic 3D Kalman filter. In addition, we
demonstrate that a modified attention mechanism can be utilized for DA to
accomplish MOT and that aggregating history features enhances MOD performance.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 15:00:12 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Zhang",
"Ce",
""
],
[
"Zhang",
"Chengjie",
""
],
[
"Guo",
"Yiluan",
""
],
[
"Chen",
"Lingji",
""
],
[
"Happold",
"Michael",
""
]
] |
new_dataset
| 0.999385 |
2306.17002
|
Feng Li
|
Feng Li, Jiayi Zhao, Huan Yang, Dongxiao Yu, Yuanfeng Zhou, Yiran Shen
|
VibHead: An Authentication Scheme for Smart Headsets through Vibration
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent years have witnessed the fast penetration of Virtual Reality (VR) and
Augmented Reality (AR) systems into our daily life, and the security and privacy
issues of VR/AR applications have been attracting considerable attention. Most
VR/AR systems adopt head-mounted devices (i.e., smart headsets) to interact with
users, and these devices usually store the users' private data. Hence,
authentication schemes are desired for head-mounted devices. Traditional
knowledge-based authentication schemes for general personal devices have been
proven vulnerable to shoulder-surfing attacks, especially considering that the
headsets may block the users' sight. Although the robustness of knowledge-based
authentication can be improved by designing complicated secret codes in virtual
space, this approach compromises usability. Another choice is to leverage the
users' biometrics; however, this either relies on highly advanced equipment that
may not always be available in commercial headsets or introduces a heavy
cognitive load on users.
In this paper, we propose a vibration-based authentication scheme, VibHead,
for smart headsets. Since the propagation of vibration signals through human
heads presents unique patterns for different individuals, VibHead employs a
CNN-based model to classify registered legitimate users based on the features
extracted from the vibration signals. We also design a two-step authentication
scheme where the above user classifiers are utilized to distinguish the
legitimate user from illegitimate ones. We implement VibHead on a Microsoft
HoloLens equipped with a linear motor and an IMU sensor which are commonly used
in off-the-shelf personal smart devices. According to the results of our
extensive experiments, with short vibration signals ($\leq 1s$), VibHead has an
outstanding authentication accuracy; both FAR and FRR are around 5%.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 15:00:32 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Li",
"Feng",
""
],
[
"Zhao",
"Jiayi",
""
],
[
"Yang",
"Huan",
""
],
[
"Yu",
"Dongxiao",
""
],
[
"Zhou",
"Yuanfeng",
""
],
[
"Shen",
"Yiran",
""
]
] |
new_dataset
| 0.999676 |
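The VibHead abstract mentions a CNN-based model over head-propagated vibration signals without specifying it. Below is a minimal, hypothetical sketch in PyTorch: a tiny 1-D CNN that classifies fixed-length vibration windows into registered users. The layer sizes and the assumed 400-sample (about 1 s) window are illustrative choices, not details from the paper.

    import torch
    import torch.nn as nn

    class VibClassifier(nn.Module):
        # Tiny 1-D CNN over raw vibration windows; the real VibHead
        # architecture and feature extraction are not specified here.
        def __init__(self, n_users: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(32, n_users)

        def forward(self, x):              # x: (batch, 1, window)
            z = self.features(x).squeeze(-1)
            return self.head(z)            # logits over registered users

    # Smoke test: a batch of 8 random windows (assumed 400 samples ≈ 1 s).
    logits = VibClassifier(n_users=5)(torch.randn(8, 1, 400))

A two-step scheme like the one the abstract describes would then threshold these per-user scores to reject illegitimate users rather than force-assign them to a registered identity.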
2306.17030
|
Matthias Mayr
|
Matthias Mayr, Francesco Rovida, Volker Krueger
|
SkiROS2: A skill-based Robot Control Platform for ROS
|
8 pages, 3 figures. Accepted at 2023 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS)
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The need for autonomous robot systems in both the service and the industrial
domain is larger than ever. In the latter, the transition to small batches or
even "batch size 1" in production created a need for robot control system
architectures that can provide the required flexibility. Such architectures
must not only provide a sufficient knowledge integration framework; they must also
support autonomous mission execution and allow for interchangeability and
interoperability between different tasks and robot systems. We introduce
SkiROS2, a skill-based robot control platform on top of ROS. SkiROS2 proposes a
layered, hybrid control structure for automated task planning and reactive
execution, supported by a knowledge base for reasoning about the world state
and entities. The scheduling formulation builds on the extended behavior tree
model that merges task-level planning and execution. This allows for a high
degree of modularity and a fast reaction to changes in the environment. The
skill formulation based on pre-, hold- and post-conditions makes it possible to
organize robot programs and to compose diverse skills ranging from perception to
low-level control and the incorporation of external tools. We relate SkiROS2 to
the field and outline three example use cases that cover task planning,
reasoning, multisensory input, integration in a manufacturing execution system
and reinforcement learning.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 15:25:51 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Mayr",
"Matthias",
""
],
[
"Rovida",
"Francesco",
""
],
[
"Krueger",
"Volker",
""
]
] |
new_dataset
| 0.997159 |
2306.17073
|
Michael Bekos
|
Patrizio Angelini, Michael A. Bekos, Julia Katheder, Michael Kaufmann,
Maximilian Pfister, Torsten Ueckerdt
|
Axis-Parallel Right Angle Crossing Graphs
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
A RAC graph is one admitting a RAC drawing, that is, a polyline drawing in
which each crossing occurs at a right angle. Originally motivated by
psychological studies on readability of graph layouts, RAC graphs form one of
the most prominent graph classes in beyond planarity.
In this work, we study a subclass of RAC graphs, called axis-parallel RAC (or
apRAC, for short), that restricts the crossings to pairs of axis-parallel
edge-segments. apRAC drawings combine the readability of planar drawings with
the clarity of (non-planar) orthogonal drawings. We consider these graphs both
with and without bends. Our contribution is as follows: (i) We study inclusion
relationships between apRAC and traditional RAC graphs. (ii) We establish
bounds on the edge density of apRAC graphs. (iii) We show that every graph with
maximum degree 8 is 2-bend apRAC and give a linear time drawing algorithm. Some
of our results on apRAC graphs also improve the state of the art for general
RAC graphs. We conclude our work with a list of open questions and a discussion
of a natural generalization of the apRAC model.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 16:24:30 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Angelini",
"Patrizio",
""
],
[
"Bekos",
"Michael A.",
""
],
[
"Katheder",
"Julia",
""
],
[
"Kaufmann",
"Michael",
""
],
[
"Pfister",
"Maximilian",
""
],
[
"Ueckerdt",
"Torsten",
""
]
] |
new_dataset
| 0.992509 |
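To make the defining constraint of apRAC drawings concrete (every crossing must be between a horizontal and a vertical edge segment, which meet at a right angle by construction), here is a small hypothetical helper testing whether two such segments cross in their interiors. The segment representation is an assumption chosen for illustration.

    def ap_rac_crossing(h_seg, v_seg):
        # h_seg is horizontal, v_seg vertical; each is ((x1, y1), (x2, y2)).
        # Returns True iff they cross properly (in both interiors),
        # the only kind of crossing an apRAC drawing permits.
        (hx1, hy), (hx2, _) = h_seg
        (vx, vy1), (_, vy2) = v_seg
        lo_x, hi_x = sorted((hx1, hx2))
        lo_y, hi_y = sorted((vy1, vy2))
        return lo_x < vx < hi_x and lo_y < hy < hi_y

    # A horizontal and a vertical segment crossing at (1, 1):
    print(ap_rac_crossing(((0, 1), (2, 1)), ((1, 0), (1, 2))))  # True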
2306.17099
|
Maryam Bahrani
|
Maryam Bahrani, Pranav Garimidi, Tim Roughgarden
|
When Bidders Are DAOs
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
In a typical decentralized autonomous organization (DAO), people organize
themselves into a group that is programmatically managed. DAOs can act as
bidders in auctions, with a DAO's bid treated by the auctioneer as if it had
been submitted by an individual, without regard to the internal structure of
the DAO. We study auctions in which the bidders are DAOs. More precisely, we
consider the design of two-level auctions in which the "participants" are
groups of bidders rather than individuals. Bidders form DAOs to pool resources,
but must then also negotiate the terms by which the DAO's winnings are shared.
We model the outcome of a DAO's negotiations by an aggregation function (which
aggregates DAO members' bids into a single group bid), and a budget-balanced
cost-sharing mechanism (that determines DAO members' access to the DAO's
allocation and distributes the total payment demanded from the DAO to its
members). We pursue two-level mechanisms that are incentive-compatible (with
truthful bidding a dominant strategy for members of each DAO) and approximately
welfare-optimal. We prove that, even in the case of a single-item auction,
incentive-compatible welfare maximization is not possible: No matter which
outer mechanism and cost-sharing mechanisms the DAOs use, the welfare of
the resulting two-level mechanism can be a $\approx \ln n$ factor less than
optimal. We complement this lower bound with a natural two-level mechanism that
achieves a matching approximate welfare guarantee. Our upper bound also extends
to multi-item auctions where individuals have additive valuations. Finally, we
show that our positive results cannot be extended much further: Even in
multi-item settings with unit-demand bidders, truthful two-level mechanisms
form a highly restricted class and as a consequence cannot guarantee any
non-trivial approximation of the maximum social welfare.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 16:57:19 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Bahrani",
"Maryam",
""
],
[
"Garimidi",
"Pranav",
""
],
[
"Roughgarden",
"Tim",
""
]
] |
new_dataset
| 0.977917 |
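The two-level model in the abstract has two moving parts per DAO: an aggregation function and a budget-balanced cost-sharing mechanism. The toy sketch below instantiates both with simple hypothetical choices (sum aggregation, a second-price outer auction, and proportional cost sharing), purely to illustrate the interfaces; none of these is claimed to be the paper's mechanism or to be incentive-compatible.

    def two_level_single_item(daos):
        # daos: list of DAOs, each a list of member bids (needs >= 2 DAOs).
        group_bids = [sum(members) for members in daos]   # aggregation
        order = sorted(range(len(daos)), key=lambda i: -group_bids[i])
        winner, runner_up = order[0], order[1]
        payment = group_bids[runner_up]                   # second price
        # Proportional cost sharing: shares sum to the payment exactly,
        # so the rule is budget-balanced by construction.
        shares = [b / group_bids[winner] * payment for b in daos[winner]]
        return winner, payment, shares

    # Two DAOs, {3, 2} vs {4}: DAO 0 wins with group bid 5,
    # pays 4, and its members cover 2.4 and 1.6 respectively.
    print(two_level_single_item([[3, 2], [4]]))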
2306.17123
|
Kai-En Lin
|
Kai-En Lin and Alex Trevithick and Keli Cheng and Michel Sarkis and
Mohsen Ghafoorian and Ning Bi and Gerhard Reitmayr and Ravi Ramamoorthi
|
PVP: Personalized Video Prior for Editable Dynamic Portraits using
StyleGAN
|
Project website:
https://cseweb.ucsd.edu//~viscomp/projects/EGSR23PVP/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Portrait synthesis creates realistic digital avatars which enable users to
interact with others in a compelling way. Recent advances in StyleGAN and its
extensions have shown promising results in synthesizing photorealistic and
accurate reconstruction of human faces. However, previous methods often focus
on frontal face synthesis and most methods are not able to handle large head
rotations due to the training data distribution of StyleGAN. In this work, our
goal is to take as input a monocular video of a face, and create an editable
dynamic portrait able to handle extreme head poses. The user can create novel
viewpoints, edit the appearance, and animate the face. Our method utilizes
pivotal tuning inversion (PTI) to learn a personalized video prior from a
monocular video sequence. Then we can input pose and expression coefficients to
MLPs and manipulate the latent vectors to synthesize different viewpoints and
expressions of the subject. We also propose novel loss functions to further
disentangle pose and expression in the latent space. Our algorithm shows much
better performance than previous approaches on monocular video datasets, and it
is also capable of running in real time at 54 FPS on an RTX 3080.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 17:26:51 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Lin",
"Kai-En",
""
],
[
"Trevithick",
"Alex",
""
],
[
"Cheng",
"Keli",
""
],
[
"Sarkis",
"Michel",
""
],
[
"Ghafoorian",
"Mohsen",
""
],
[
"Bi",
"Ning",
""
],
[
"Reitmayr",
"Gerhard",
""
],
[
"Ramamoorthi",
"Ravi",
""
]
] |
new_dataset
| 0.977525 |
2306.17135
|
Chaofan Shou
|
Chaofan Shou, Shangyin Tan, Koushik Sen
|
ItyFuzz: Snapshot-Based Fuzzer for Smart Contract
|
ISSTA 2023
| null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts are critical financial instruments, and their security is of
utmost importance. However, smart contract programs are difficult to fuzz due
to the persistent blockchain state behind all transactions. Mutating sequences
of transactions is complex and often leads to suboptimal exploration of both the
input and program spaces. In this paper, we introduce a novel snapshot-based
fuzzer ItyFuzz for testing smart contracts. In ItyFuzz, instead of storing
sequences of transactions and mutating from them, we snapshot states and
singleton transactions. To explore interesting states, ItyFuzz introduces a
dataflow waypoint mechanism to identify states with more potential momentum.
ItyFuzz also incorporates comparison waypoints to prune the space of states. By
maintaining snapshots of the states, ItyFuzz can synthesize concrete exploits
like reentrancy attacks quickly. Because ItyFuzz tests a smart contract with
second-level response times, it can be used for on-chain testing, which has many
benefits compared to local development testing. Finally, we evaluate ItyFuzz on
real-world smart contracts and some hacked on-chain DeFi projects. ItyFuzz
outperforms existing fuzzers in terms of instruction coverage and can find
and generate realistic exploits for on-chain projects quickly.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 17:36:08 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Shou",
"Chaofan",
""
],
[
"Tan",
"Shangyin",
""
],
[
"Sen",
"Koushik",
""
]
] |
new_dataset
| 0.994606 |
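The core idea in the ItyFuzz abstract, snapshotting states and singleton transactions instead of storing and mutating whole transaction sequences, can be sketched as a generic fuzzing loop. Everything below (the callback interfaces, the corpus policy) is a schematic assumption for illustration, not ItyFuzz's actual implementation or its waypoint mechanisms.

    import random

    def snapshot_fuzz(initial_state, seed_txs, execute, mutate,
                      is_interesting, budget=10_000):
        # The corpus holds (state snapshot, transaction) pairs, so each
        # iteration mutates a single transaction against a saved state
        # rather than replaying a whole sequence from genesis.
        corpus = [(initial_state, tx) for tx in seed_txs]
        findings = []
        for _ in range(budget):
            state, tx = random.choice(corpus)
            new_tx = mutate(tx)
            new_state, violated = execute(state, new_tx)
            if violated:
                findings.append((state, new_tx))     # concrete exploit input
            elif is_interesting(new_state):
                corpus.append((new_state, new_tx))   # snapshot the new state
        return findings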