id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2301.07362
|
Shivani Deglurkar
|
Shivani Deglurkar, Charles Xiao, Luke F. Gockowski, Megan T.
Valentine, Elliot W. Hawkes
|
A light- and heat-seeking vine-inspired robot with material-level
responsiveness
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The fields of soft and bio-inspired robotics promise to imbue synthetic
systems with capabilities found in the natural world. However, many of these
biological capabilities are yet to be realized. For example, vines in nature
direct growth via localized responses embedded in the cells of the vine body,
allowing an organism without a central brain to successfully search for
resources (e.g., light). To date, however, vine-inspired robots have not shown
such localized embedded responsiveness. Here we present a vine-inspired robotic
device with material-level responses embedded in its skin and capable of
growing and steering toward either a light or heat stimulus. We present basic
modeling of the concept, design details, and experimental results showing its
behavior in response to infrared (IR) and visible light. Our simple design
concept advances the capabilities of bio-inspired robots and lays the
foundation for future growing robots that are capable of seeking light or heat,
yet are extremely simple and low-cost. Potential applications include solar
tracking and, in the future, fighting smoldering fires. We envision using
similar robots to find hot spots in hard-to-access environments, allowing us to
put out potentially long-burning fires faster.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 08:11:24 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 19:34:02 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Sep 2023 20:03:32 GMT"
},
{
"version": "v4",
"created": "Fri, 15 Sep 2023 06:02:43 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Deglurkar",
"Shivani",
""
],
[
"Xiao",
"Charles",
""
],
[
"Gockowski",
"Luke F.",
""
],
[
"Valentine",
"Megan T.",
""
],
[
"Hawkes",
"Elliot W.",
""
]
] |
new_dataset
| 0.95679 |
2301.09201
|
Imtiaz Karim
|
Imtiaz Karim, Kazi Samin Mubasshir, Mirza Masfiqur Rahman, and Elisa
Bertino
|
SPEC5G: A Dataset for 5G Cellular Network Protocol Analysis
| null | null | null | null |
cs.IR cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
5G is the 5th generation cellular network protocol. It is the
state-of-the-art global wireless standard that enables an advanced kind of
network designed to connect virtually everyone and everything with increased
speed and reduced latency. Therefore, its development, analysis, and security
are critical. However, all approaches to the 5G protocol development and
security analysis, e.g., property extraction, protocol summarization, and
semantic analysis of the protocol specifications and implementations, are
completely manual. To reduce such manual effort, in this paper, we curate
SPEC5G, the first-ever public 5G dataset for NLP research. The dataset contains
3,547,586 sentences with 134M words, from 13,094 cellular network specifications
and 13 online websites. By leveraging large-scale pre-trained language models
that have achieved state-of-the-art results on NLP tasks, we use this dataset
for security-related text classification and summarization. Security-related
text classification can be used to extract relevant security-related properties
for protocol testing. On the other hand, summarization can help developers and
practitioners understand the protocol at a high level, which is itself a
daunting task. Our results show the value of our 5G-centric dataset in 5G
protocol analysis automation. We believe that SPEC5G will enable a new research
direction into automatic analyses for the 5G cellular network protocol and
numerous related downstream tasks. Our data and code are publicly available.
|
[
{
"version": "v1",
"created": "Sun, 22 Jan 2023 20:59:40 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 22:25:52 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Karim",
"Imtiaz",
""
],
[
"Mubasshir",
"Kazi Samin",
""
],
[
"Rahman",
"Mirza Masfiqur",
""
],
[
"Bertino",
"Elisa",
""
]
] |
new_dataset
| 0.999641 |
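The SPEC5G record above describes using pre-trained language models for security-related sentence classification over the corpus. As a hedged illustration only (not the paper's fine-tuned setup), the sketch below runs an off-the-shelf zero-shot classifier over a plain-text sentence file; the label names and the file name `spec5g_sentences.txt` are assumptions for the example, not part of the release.

```python
# Illustrative sketch: zero-shot security-relevance tagging of sentences,
# assuming one sentence per line in a hypothetical "spec5g_sentences.txt".
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["security-relevant", "not security-relevant"]  # hypothetical label set
with open("spec5g_sentences.txt", encoding="utf-8") as f:
    for line in f:
        sentence = line.strip()
        if not sentence:
            continue
        result = classifier(sentence, candidate_labels=labels)
        # result["labels"] is sorted by score; print the top label per sentence
        print(result["labels"][0], "|", sentence[:80])
```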
2302.09933
|
Jukka Ruohonen
|
Jukka Ruohonen
|
Mysterious and Manipulative Black Boxes: A Qualitative Analysis of
Perceptions on Recommender Systems
|
Submitted
| null | null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems are used to provide relevant suggestions on various
matters. Although these systems are a classical research topic, knowledge is
still limited regarding the public opinion about these systems. Public opinion
is also important because the systems are known to cause various problems. To
this end, this paper presents a qualitative analysis of the perceptions of
ordinary citizens, civil society groups, businesses, and others on recommender
systems in Europe. The dataset examined is based on the answers submitted to a
consultation about the Digital Services Act (DSA) recently enacted in the
European Union (EU). Therefore, not only does the paper contribute to the
pressing question about regulating new technologies and online platforms, but
it also reveals insights about the policy-making of the DSA. According to the
qualitative results, Europeans have generally negative opinions about
recommender systems and the quality of their recommendations. The systems are
widely seen to violate privacy and other fundamental rights. According to many
Europeans, these also cause various societal problems, including even threats
to democracy. Furthermore, existing regulations in the EU are commonly seen to
have failed due to a lack of proper enforcement. Numerous suggestions were made
by the respondents to the consultation for improving the situation, but only a
few of these ended up in the DSA.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 11:57:12 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 02:40:42 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Ruohonen",
"Jukka",
""
]
] |
new_dataset
| 0.997845 |
2303.13843
|
Haotian Bai
|
Haotian Bai, Yuanhuiyi Lyu, Lutao Jiang, Sijia Li, Haonan Lu, Xiaodong
Lin, Lin Wang
|
CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D
Scene Layout
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent research endeavors have shown that combining neural radiance fields
(NeRFs) with pre-trained diffusion models holds great potential for text-to-3D
generation. However, a hurdle is that they often encounter guidance collapse
when rendering multi-object scenes with relatively long sentences.
Specifically, text-to-image diffusion models are inherently unconstrained,
making them less competent to accurately associate object semantics with 3D
structures. To address this, we propose a novel framework, dubbed CompoNeRF,
that explicitly incorporates an editable 3D scene layout to provide effective
guidance at the object (i.e., local) and scene (i.e., global) levels. Firstly,
we interpret the multi-object text as an editable 3D scene layout containing
multiple local NeRFs associated with the object-specific 3D boxes and text
prompt. Then, we introduce a composition module to calibrate the latent
features from local NeRFs, which surprisingly improves the view consistency
across different local NeRFs. Lastly, we apply text guidance on global and
local levels through their corresponding views to avoid guidance ambiguity.
Additionally, NeRFs can be decomposed and cached for composing other scenes
with fine-tuning. This way, our CompoNeRF allows for flexible scene editing and
re-composition of trained local NeRFs into a new scene by manipulating the 3D
layout or text prompt. Leveraging the open-source Stable Diffusion model, our
CompoNeRF can generate faithful and editable text-to-3D results while opening a
potential direction for text-guided multi-object composition via the editable
3D scene layout. Notably, our CompoNeRF can achieve at most 54% performance
gain based on the CLIP score metric. Code is available at https://.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 07:37:09 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 10:09:46 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Bai",
"Haotian",
""
],
[
"Lyu",
"Yuanhuiyi",
""
],
[
"Jiang",
"Lutao",
""
],
[
"Li",
"Sijia",
""
],
[
"Lu",
"Haonan",
""
],
[
"Lin",
"Xiaodong",
""
],
[
"Wang",
"Lin",
""
]
] |
new_dataset
| 0.997403 |
2304.03696
|
Sonia Raychaudhuri
|
Sonia Raychaudhuri, Tommaso Campari, Unnat Jain, Manolis Savva, Angel
X. Chang
|
MOPA: Modular Object Navigation with PointGoal Agents
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a simple but effective modular approach, MOPA (Modular ObjectNav
with PointGoal agents), to systematically investigate the inherent modularity of
the object navigation task in Embodied AI. MOPA consists of four modules: (a)
an object detection module trained to identify objects from RGB images, (b) a
map building module to build a semantic map of the observed objects, (c) an
exploration module enabling the agent to explore the environment, and (d) a
navigation module to move to identified target objects. We show that we can
effectively reuse a pretrained PointGoal agent as the navigation model instead
of learning to navigate from scratch, thus saving time and compute. We also
compare various exploration strategies for MOPA and find that a simple uniform
strategy significantly outperforms more advanced exploration methods.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 15:32:16 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 03:23:57 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Raychaudhuri",
"Sonia",
""
],
[
"Campari",
"Tommaso",
""
],
[
"Jain",
"Unnat",
""
],
[
"Savva",
"Manolis",
""
],
[
"Chang",
"Angel X.",
""
]
] |
new_dataset
| 0.997973 |
2304.14633
|
Ziyue Feng
|
Ziyue Feng, Liang Yang, Pengsheng Guo, Bing Li
|
CVRecon: Rethinking 3D Geometric Feature Learning For Neural
Reconstruction
|
Accepted by ICCV 2023
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in neural reconstruction using posed image sequences have
made remarkable progress. However, due to the lack of depth information,
existing volumetric-based techniques simply duplicate 2D image features of the
object surface along the entire camera ray. We contend this duplication
introduces noise in empty and occluded spaces, posing challenges for producing
high-quality 3D geometry. Drawing inspiration from traditional multi-view
stereo methods, we propose an end-to-end 3D neural reconstruction framework
CVRecon, designed to exploit the rich geometric embedding in the cost volumes
to facilitate 3D geometric feature learning. Furthermore, we present
Ray-contextual Compensated Cost Volume (RCCV), a novel 3D geometric feature
representation that encodes view-dependent information with improved integrity
and robustness. Through comprehensive experiments, we demonstrate that our
approach significantly improves the reconstruction quality in various metrics
and recovers clear fine details of the 3D geometries. Our extensive ablation
studies provide insights into the development of effective 3D geometric feature
learning schemes. Project page: https://cvrecon.ziyue.cool/
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 05:30:19 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 21:15:49 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 22:15:15 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Feng",
"Ziyue",
""
],
[
"Yang",
"Liang",
""
],
[
"Guo",
"Pengsheng",
""
],
[
"Li",
"Bing",
""
]
] |
new_dataset
| 0.965199 |
2305.11870
|
Byungjun Kim
|
Byungjun Kim, Patrick Kwon, Kwangho Lee, Myunggi Lee, Sookwan Han,
Daesik Kim, Hanbyul Joo
|
Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D
Diffusion Probabilistic Models
|
Project Page: https://snuvclab.github.io/chupa/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a 3D generation pipeline that uses diffusion models to generate
realistic human digital avatars. Due to the wide variety of human identities,
poses, and stochastic details, the generation of 3D human meshes has been a
challenging problem. To address this, we decompose the problem into 2D normal
map generation and normal map-based 3D reconstruction. Specifically, we first
simultaneously generate realistic normal maps for the front and backside of a
clothed human, dubbed dual normal maps, using a pose-conditional diffusion
model. For 3D reconstruction, we "carve" the prior SMPL-X mesh into a detailed 3D
mesh according to the normal maps through mesh optimization. To further enhance
the high-frequency details, we present a diffusion resampling scheme on both
body and facial regions, thus encouraging the generation of realistic digital
avatars. We also seamlessly incorporate a recent text-to-image diffusion model
to support text-based human identity control. Our method, namely, Chupa, is
capable of generating realistic 3D clothed humans with better perceptual
quality and identity variety.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 17:59:18 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 07:38:33 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 12:23:21 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Kim",
"Byungjun",
""
],
[
"Kwon",
"Patrick",
""
],
[
"Lee",
"Kwangho",
""
],
[
"Lee",
"Myunggi",
""
],
[
"Han",
"Sookwan",
""
],
[
"Kim",
"Daesik",
""
],
[
"Joo",
"Hanbyul",
""
]
] |
new_dataset
| 0.998212 |
2306.12652
|
Qiang Zhang
|
Qiang Zhang, Yuanqiao Lin, Yubin Lin, Szymon Rusinkiewicz
|
UltraGlove: Hand Pose Estimation with Mems-Ultrasonic Sensors
| null | null | null | null |
cs.CV cs.GR cs.HC cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Hand tracking is an important aspect of human-computer interaction and has a
wide range of applications in extended reality devices. However, current hand
motion capture methods suffer from various limitations. For instance,
visual-based hand pose estimation is susceptible to self-occlusion and changes
in lighting conditions, while IMU-based tracking gloves experience significant
drift and are not resistant to external magnetic field interference. To address
these issues, we propose a novel and low-cost hand-tracking glove that utilizes
several MEMS-ultrasonic sensors attached to the fingers, to measure the
distance matrix among the sensors. Our lightweight deep network then
reconstructs the hand pose from the distance matrix. Our experimental results
demonstrate that this approach is accurate, size-agnostic, and robust to
external interference. We also show the design logic for the sensor selection,
sensor configurations, circuit diagram, and model architecture.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 03:41:47 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 22:56:01 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Zhang",
"Qiang",
""
],
[
"Lin",
"Yuanqiao",
""
],
[
"Lin",
"Yubin",
""
],
[
"Rusinkiewicz",
"Szymon",
""
]
] |
new_dataset
| 0.99937 |
2306.14096
|
Yinyu Lan
|
Yinyu Lan, Yanru Wu, Wang Xu, Weiqiang Feng, Youhao Zhang
|
Chinese Fine-Grained Financial Sentiment Analysis with Large Language
Models
|
Accepted by (FinLLM 2023)@IJCAI 2023,
https://finllm.github.io/workshop/#/fcb
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entity-level fine-grained sentiment analysis in the financial domain is a
crucial subtask of sentiment analysis and currently faces numerous challenges.
The primary challenge stems from the lack of high-quality and large-scale
annotated corpora specifically designed for financial text sentiment analysis,
which in turn limits the availability of data necessary for developing
effective text processing techniques. Recent advancements in large language
models (LLMs) have yielded remarkable performance in natural language
processing tasks, primarily centered around language pattern matching. In this
paper, we propose a novel and extensive Chinese fine-grained financial
sentiment analysis dataset, FinChina SA, for enterprise early warning. We
thoroughly evaluate and experiment with well-known existing open-source LLMs
using our dataset. We firmly believe that our dataset will serve as a valuable
resource to advance the exploration of real-world financial sentiment analysis
tasks, which should be the focus of future research. The FinChina SA dataset is
publicly available at https://github.com/YerayL/FinChina-SA
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 02:24:30 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 05:14:39 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Jul 2023 08:57:38 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Jul 2023 00:58:11 GMT"
},
{
"version": "v5",
"created": "Fri, 15 Sep 2023 08:19:44 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Lan",
"Yinyu",
""
],
[
"Wu",
"Yanru",
""
],
[
"Xu",
"Wang",
""
],
[
"Feng",
"Weiqiang",
""
],
[
"Zhang",
"Youhao",
""
]
] |
new_dataset
| 0.999009 |
2306.15725
|
Jeff Brozena
|
Jeff Brozena, Johnna Blair, Thomas Richardson, Mark Matthews, Dahlia
Mukherjee, Erika F H Saunders, and Saeed Abdullah
|
Supportive Fintech for Individuals with Bipolar Disorder: Financial Data
Sharing Preferences to Support Longitudinal Care Management
|
19 pages, 5 figures, submitted to ACM CHI conference on Human Factors
in Computing Systems
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Financial stability is a key challenge for individuals living with bipolar
disorder (BD). Symptomatic periods in BD are associated with poor financial
decision-making, contributing to a negative cycle of worsening symptoms and an
increased risk of bankruptcy. There has been an increased focus on designing
supportive financial technologies (fintech) to address varying and intermittent
needs across different stages of BD. However, little is known about this
population's expectations and privacy preferences related to financial data
sharing for longitudinal care management. To address this knowledge gap, we
have deployed a factorial vignette survey using the Contextual Integrity
framework. Our data from individuals with BD (N=480) shows that they are open
to sharing financial data for long-term care management. We have also identified
significant differences in sharing preferences across age, gender, and
diagnostic subtype. We discuss the implications of these findings in designing
equitable fintech to support this marginalized community.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 18:03:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 20:35:49 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 13:03:24 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Brozena",
"Jeff",
""
],
[
"Blair",
"Johnna",
""
],
[
"Richardson",
"Thomas",
""
],
[
"Matthews",
"Mark",
""
],
[
"Mukherjee",
"Dahlia",
""
],
[
"Saunders",
"Erika F H",
""
],
[
"Abdullah",
"Saeed",
""
]
] |
new_dataset
| 0.998409 |
2307.01717
|
Andrea Coletta
|
Andrea Coletta, Sriram Gopalakrishan, Daniel Borrajo, Svitlana
Vyetrenko
|
On the Constrained Time-Series Generation Problem
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthetic time series are often used in practical applications to augment the
historical time series dataset for better performance of machine learning
algorithms, amplify the occurrence of rare events, and also create
counterfactual scenarios described by the time series.
Distributional-similarity (which we refer to as realism) as well as the
satisfaction of certain numerical constraints are common requirements in
counterfactual time series scenario generation requests. For instance, the US
Federal Reserve publishes synthetic market stress scenarios given by the
constrained time series for financial institutions to assess their performance
in hypothetical recessions. Existing approaches for generating constrained time
series usually penalize training loss to enforce constraints, and reject
non-conforming samples. However, these approaches would require re-training if
we change constraints, and rejection sampling can be computationally expensive,
or impractical for complex constraints. In this paper, we propose a novel set
of methods to tackle the constrained time series generation problem and provide
efficient sampling while ensuring the realism of generated time series. In
particular, we frame the problem using a constrained optimization framework and
then we propose a set of generative methods including "GuidedDiffTime", a
guided diffusion model to generate realistic time series. Empirically, we
evaluate our work on several datasets for financial and energy data, where
incorporating constraints is critical. We show that our approaches outperform
existing work both qualitatively and quantitatively. Most importantly, we show
that our "GuidedDiffTime" model is the only solution where re-training is not
necessary for new constraints, resulting in a significant carbon footprint
reduction, up to 92% w.r.t. existing deep learning methods.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 13:43:05 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 20:58:03 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Coletta",
"Andrea",
""
],
[
"Gopalakrishan",
"Sriram",
""
],
[
"Borrajo",
"Daniel",
""
],
[
"Vyetrenko",
"Svitlana",
""
]
] |
new_dataset
| 0.994414 |
2308.10755
|
Bin Wang
|
Conghui He, Zhenjiang Jin, Chao Xu, Jiantao Qiu, Bin Wang, Wei Li,
Hang Yan, Jiaqi Wang, Dahua Lin
|
WanJuan: A Comprehensive Multimodal Dataset for Advancing English and
Chinese Large Models
|
Technical Report
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise in popularity of ChatGPT and GPT-4 has significantly accelerated the
development of large models, leading to the creation of numerous impressive
large language models (LLMs) and multimodal large language models (MLLMs). These
cutting-edge models owe their remarkable performance to high-quality data.
However, the details of the training data used in leading paradigms are often
kept confidential. This lack of transparency, coupled with the scarcity of
open-source data, impedes further developments within the community. As a
response, this paper presents "WanJuan", a large-scale multimodal dataset
composed of both Chinese and English data, collected from a wide range of web
sources. The dataset incorporates text, image-text, and video modalities, with
a total volume exceeding 2TB. It was utilized in the training of InternLM, a
model that demonstrated significant advantages in multi-dimensional evaluations
when compared to models of a similar scale. All data can be accessed at
https://opendatalab.org.cn/WanJuan1.0.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 14:40:48 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 02:57:45 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 09:52:14 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"He",
"Conghui",
""
],
[
"Jin",
"Zhenjiang",
""
],
[
"Xu",
"Chao",
""
],
[
"Qiu",
"Jiantao",
""
],
[
"Wang",
"Bin",
""
],
[
"Li",
"Wei",
""
],
[
"Yan",
"Hang",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.999413 |
2308.10856
|
Aobo Li
|
I.J. Arnquist, F.T. Avignone III, A.S. Barabash, C.J. Barton, K.H.
Bhimani, E. Blalock, B. Bos, M. Busch, M. Buuck, T.S. Caldwell, Y.-D. Chan,
C.D. Christofferson, P.-H. Chu, M.L. Clark, C. Cuesta, J.A. Detwiler, Yu.
Efremenko, H. Ejiri, S.R. Elliott, N. Fuad, G.K. Giovanetti, M.P. Green, J.
Gruszko, I.S. Guinn, V.E. Guiseppe, C.R. Haufe, R. Henning, D. Hervas
Aguilar, E.W. Hoppe, A. Hostiuc, M.F. Kidd, I. Kim, R.T. Kouzes, T.E. Lannen
V, A. Li, J.M. Lopez-Castano, R.D. Martin, R. Massarczyk, S.J. Meijer, S.
Mertens, T.K. Oli, L.S. Paudel, W. Pettus, A.W.P. Poon, B. Quenallata, D.C.
Radford, A.L. Reine, K. Rielage, N.W. Ruof, D.C. Schaper, S.J. Schleich, D.
Tedeschi, R.L. Varner, S. Vasilyev, S.L. Watkins, J.F. Wilkerson, C. Wiseman,
W. Xu, C.-H. Yu, and B.X. Zhu
|
Majorana Demonstrator Data Release for AI/ML Applications
|
DataPlanet Access:
https://dataplanet.ucsd.edu/dataset.xhtml?persistentId=perma:83.ucsddata/UQWQAV
| null | null | null |
cs.LG nucl-ex physics.data-an physics.ins-det
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The enclosed data release consists of a subset of the calibration data from
the Majorana Demonstrator experiment. Each Majorana event is accompanied by raw
Germanium detector waveforms, pulse shape discrimination cuts, and calibrated
final energies, all shared in an HDF5 file format along with relevant metadata.
This release is specifically designed to support the training and testing of
Artificial Intelligence (AI) and Machine Learning (ML) algorithms upon our
data. This document is structured as follows. Section I provides an overview of
the dataset's content and format; Section II outlines the location of this
dataset and the method for accessing it; Section III presents the NPML Machine
Learning Challenge associated with this dataset; Section IV contains a
disclaimer from the Majorana collaboration regarding the use of this dataset;
Appendix A contains technical details of this data release. Please direct
questions about the material provided within this release to [email protected]
(A. Li).
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 16:50:59 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 01:31:28 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 00:46:38 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Arnquist",
"I. J.",
""
],
[
"Avignone",
"F. T.",
"III"
],
[
"Barabash",
"A. S.",
""
],
[
"Barton",
"C. J.",
""
],
[
"Bhimani",
"K. H.",
""
],
[
"Blalock",
"E.",
""
],
[
"Bos",
"B.",
""
],
[
"Busch",
"M.",
""
],
[
"Buuck",
"M.",
""
],
[
"Caldwell",
"T. S.",
""
],
[
"Chan",
"Y. -D.",
""
],
[
"Christofferson",
"C. D.",
""
],
[
"Chu",
"P. -H.",
""
],
[
"Clark",
"M. L.",
""
],
[
"Cuesta",
"C.",
""
],
[
"Detwiler",
"J. A.",
""
],
[
"Efremenko",
"Yu.",
""
],
[
"Ejiri",
"H.",
""
],
[
"Elliott",
"S. R.",
""
],
[
"Fuad",
"N.",
""
],
[
"Giovanetti",
"G. K.",
""
],
[
"Green",
"M. P.",
""
],
[
"Gruszko",
"J.",
""
],
[
"Guinn",
"I. S.",
""
],
[
"Guiseppe",
"V. E.",
""
],
[
"Haufe",
"C. R.",
""
],
[
"Henning",
"R.",
""
],
[
"Aguilar",
"D. Hervas",
""
],
[
"Hoppe",
"E. W.",
""
],
[
"Hostiuc",
"A.",
""
],
[
"Kidd",
"M. F.",
""
],
[
"Kim",
"I.",
""
],
[
"Kouzes",
"R. T.",
""
],
[
"Lannen",
"T. E.",
"V"
],
[
"Li",
"A.",
""
],
[
"Lopez-Castano",
"J. M.",
""
],
[
"Martin",
"R. D.",
""
],
[
"Massarczyk",
"R.",
""
],
[
"Meijer",
"S. J.",
""
],
[
"Mertens",
"S.",
""
],
[
"Oli",
"T. K.",
""
],
[
"Paudel",
"L. S.",
""
],
[
"Pettus",
"W.",
""
],
[
"Poon",
"A. W. P.",
""
],
[
"Quenallata",
"B.",
""
],
[
"Radford",
"D. C.",
""
],
[
"Reine",
"A. L.",
""
],
[
"Rielage",
"K.",
""
],
[
"Ruof",
"N. W.",
""
],
[
"Schaper",
"D. C.",
""
],
[
"Schleich",
"S. J.",
""
],
[
"Tedeschi",
"D.",
""
],
[
"Varner",
"R. L.",
""
],
[
"Vasilyev",
"S.",
""
],
[
"Watkins",
"S. L.",
""
],
[
"Wilkerson",
"J. F.",
""
],
[
"Wiseman",
"C.",
""
],
[
"Xu",
"W.",
""
],
[
"Yu",
"C. -H.",
""
],
[
"Zhu",
"B. X.",
""
]
] |
new_dataset
| 0.999391 |
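The Majorana Demonstrator record above notes that waveforms, pulse-shape discrimination cuts, and calibrated energies are shared as HDF5 files with metadata. Below is a minimal, hedged sketch for inspecting one such file with h5py; the file name and the dataset keys shown in comments are hypothetical, since the actual layout is documented in the release itself.

```python
# Sketch only: walk one HDF5 file from the release and list its datasets.
import h5py

def summarize_release(path):
    """Print every dataset in the file with its shape and dtype."""
    with h5py.File(path, "r") as f:
        def show(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
        f.visititems(show)  # visits every group/dataset in the file
        # Example reads, once the real keys are known (hypothetical names):
        # waveforms = f["raw_waveform"][:100]
        # energies  = f["calibrated_energy"][:100]

summarize_release("mjd_calibration_subset.hdf5")  # hypothetical file name
```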
2308.13981
|
Shuiyin Liu
|
Shuiyin Liu, Amin Sakzad
|
Lattice Codes for CRYSTALS-Kyber
|
9 pages,3 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a constant-time lattice encoder for the NIST-recommended
post-quantum encryption algorithm: Kyber. We first refine the analysis of Kyber
decoding noise and prove that Kyber decoding noise can be bounded by a sphere.
This shows the Kyber encoding problem is essentially a sphere packing in a
hypercube. Lattice codes are then constructed to ensure denser packing and a
lower decryption failure rate (DFR). For a fixed ciphertext size, the proposed
lattice encoder reduces the communication cost by up to 32.6%, and decreases
the DFR by a factor of up to 2^{85}. For a fixed plaintext size, e.g., 256
bits, we propose a bit-interleaved coded modulation (BICM) approach, which
combines a BCH code and the proposed lattice encoder. The proposed BICM scheme
significantly reduces the DFR of Kyber, thus enabling further compression of
the ciphertext. Compared with the original Kyber encoder, the communication
cost is reduced by 24.49%, while the DFR is decreased by a factor of 2^{39}.
The proposed encoding scheme is a constant-time algorithm and is thus resistant
to timing side-channel attacks.
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 01:13:00 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 11:13:20 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Liu",
"Shuiyin",
""
],
[
"Sakzad",
"Amin",
""
]
] |
new_dataset
| 0.996934 |
2309.02852
|
Niklas Gr\"one
|
Peter Eades, Niklas Gr\"one, Karsten Klein, Patrick Eades, Leo
Schreiber, Ulf Hailer and Falk Schreiber
|
CelticGraph: Drawing Graphs as Celtic Knots and Links
|
Appears in the Proceedings of the 31st International Symposium on
Graph Drawing and Network Visualization (GD 2023)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Celtic knots are an ancient art form often attributed to Celtic cultures,
used to decorate monuments and manuscripts, and to symbolise eternity and
interconnectedness. This paper describes the framework CelticGraph to draw
graphs as Celtic knots and links. The drawing process raises interesting
combinatorial concepts in the theory of circuits in planar graphs. Further,
CelticGraph uses a novel algorithm to represent edges as B\'ezier curves,
aiming to show each link as a smooth curve with limited curvature.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 09:25:40 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 20:51:55 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Eades",
"Peter",
""
],
[
"Gröne",
"Niklas",
""
],
[
"Klein",
"Karsten",
""
],
[
"Eades",
"Patrick",
""
],
[
"Schreiber",
"Leo",
""
],
[
"Hailer",
"Ulf",
""
],
[
"Schreiber",
"Falk",
""
]
] |
new_dataset
| 0.999675 |
2309.03046
|
Nickolai Zeldovich
|
Upamanyu Sharma (MIT), Ralf Jung (ETH Zurich), Joseph Tassarotti
(NYU), M. Frans Kaashoek (MIT), Nickolai Zeldovich (MIT)
|
Grove: a Separation-Logic Library for Verifying Distributed Systems
(Extended Version)
|
Extended version of paper appearing at SOSP 2023
| null |
10.1145/3600006.3613172
| null |
cs.LO cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Grove is a concurrent separation logic library for verifying distributed
systems. Grove is the first to handle time-based leases, including their
interaction with reconfiguration, crash recovery, thread-level concurrency, and
unreliable networks. This paper uses Grove to verify several distributed system
components written in Go, including GroveKV, a realistic distributed
multi-threaded key-value store. GroveKV supports reconfiguration,
primary/backup replication, and crash recovery, and uses leases to execute
read-only requests on any replica. GroveKV achieves high performance (67-73% of
Redis on a single core), scales with more cores and more backup replicas
(achieving about 2x the throughput when going from 1 to 3 servers), and can
safely execute reads while reconfiguring.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 14:41:35 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 20:02:02 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Sharma",
"Upamanyu",
"",
"MIT"
],
[
"Jung",
"Ralf",
"",
"ETH Zurich"
],
[
"Tassarotti",
"Joseph",
"",
"NYU"
],
[
"Kaashoek",
"M. Frans",
"",
"MIT"
],
[
"Zeldovich",
"Nickolai",
"",
"MIT"
]
] |
new_dataset
| 0.999483 |
2309.03989
|
Sarinda Samarasinghe
|
Sarinda Samarasinghe, Mamshad Nayeem Rizve, Navid Kardan, Mubarak Shah
|
CDFSL-V: Cross-Domain Few-Shot Learning for Videos
|
ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Few-shot video action recognition is an effective approach to recognizing new
categories with only a few labeled examples, thereby reducing the challenges
associated with collecting and annotating large-scale video datasets. Existing
methods in video action recognition rely on large labeled datasets from the
same domain. However, this setup is not realistic as novel categories may come
from different data domains that may have different spatial and temporal
characteristics. This dissimilarity between the source and target domains can
pose a significant challenge, rendering traditional few-shot action recognition
techniques ineffective. To address this issue, in this work, we propose a novel
cross-domain few-shot video action recognition method that leverages
self-supervised learning and curriculum learning to balance the information
from the source and target domains. In particular, our method employs a
masked autoencoder-based self-supervised training objective to learn from both
source and target data in a self-supervised manner. Then a progressive
curriculum balances learning the discriminative information from the source
dataset with the generic information learned from the target domain. Initially,
our curriculum utilizes supervised learning to learn class discriminative
features from the source data. As the training progresses, we transition to
learning target-domain-specific features. We propose a progressive curriculum
to encourage the emergence of rich features in the target domain based on class
discriminative supervised features in the source domain. We evaluate our method
on several challenging benchmark datasets and demonstrate that our approach
outperforms existing cross-domain few-shot learning techniques. Our code is
available at https://github.com/Sarinda251/CDFSL-V
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 19:44:27 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 17:24:03 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Samarasinghe",
"Sarinda",
""
],
[
"Rizve",
"Mamshad Nayeem",
""
],
[
"Kardan",
"Navid",
""
],
[
"Shah",
"Mubarak",
""
]
] |
new_dataset
| 0.989266 |
2309.05300
|
Yi Wang
|
Yi Wang, Conrad M Albrecht, Nassim Ait Ali Braham, Chenying Liu,
Zhitong Xiong, Xiao Xiang Zhu
|
DeCUR: decoupling common & unique representations for multimodal
self-supervision
|
19 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing availability of multi-sensor data sparks interest in
multimodal self-supervised learning. However, most existing approaches learn
only common representations across modalities while ignoring intra-modal
training and modality-unique representations. We propose Decoupling Common and
Unique Representations (DeCUR), a simple yet effective method for multimodal
self-supervised learning. By distinguishing inter- and intra-modal embeddings,
DeCUR is trained to integrate complementary information across different
modalities. We evaluate DeCUR in three common multimodal scenarios
(radar-optical, RGB-elevation, and RGB-depth), and demonstrate its consistent
benefits on scene classification and semantic segmentation downstream tasks.
Notably, we get straightforward improvements by transferring our pretrained
backbones to state-of-the-art supervised multimodal methods without any
hyperparameter tuning. Furthermore, we conduct a comprehensive explainability
analysis to shed light on the interpretation of common and unique features in
our multimodal approach. Codes are available at
\url{https://github.com/zhu-xlab/DeCUR}.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 08:35:23 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 13:39:57 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"Albrecht",
"Conrad M",
""
],
[
"Braham",
"Nassim Ait Ali",
""
],
[
"Liu",
"Chenying",
""
],
[
"Xiong",
"Zhitong",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.99026 |
2309.06745
|
Zhihang Ren
|
Zhihang Ren, Jefferson Ortega, Yifan Wang, Zhimin Chen, Yunhui Guo,
Stella X. Yu, David Whitney
|
VEATIC: Video-based Emotion and Affect Tracking in Context Dataset
| null | null | null | null |
cs.CV cs.HC cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Human affect recognition has been a significant topic in psychophysics and
computer vision. However, the currently published datasets have many
limitations. For example, most datasets include only frames with information
about facial expressions. Due to the limitations of previous
datasets, it is very hard to either understand the mechanisms for affect
recognition of humans or generalize well on common cases for computer vision
models trained on those datasets. In this work, we introduce a brand new large
dataset, the Video-based Emotion and Affect Tracking in Context Dataset
(VEATIC), that can conquer the limitations of the previous datasets. VEATIC has
124 video clips from Hollywood movies, documentaries, and home videos with
continuous valence and arousal ratings of each frame via real-time annotation.
Along with the dataset, we propose a new computer vision task to infer the
affect of the selected character via both context and character information in
each video frame. Additionally, we propose a simple model to benchmark this new
computer vision task. We also compare the performance of the pretrained model
using our dataset with other similar datasets. Experiments show the competitive
results of our model pretrained on VEATIC, indicating the generalizability of
VEATIC. Our dataset is available at https://veatic.github.io.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 06:31:35 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 07:13:24 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 03:17:23 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Ren",
"Zhihang",
""
],
[
"Ortega",
"Jefferson",
""
],
[
"Wang",
"Yifan",
""
],
[
"Chen",
"Zhimin",
""
],
[
"Guo",
"Yunhui",
""
],
[
"Yu",
"Stella X.",
""
],
[
"Whitney",
"David",
""
]
] |
new_dataset
| 0.999679 |
2309.07161
|
Suthee Ruangwises
|
Suthee Ruangwises
|
Sumplete is Hard, Even with Two Different Numbers
| null | null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sumplete is a logic puzzle famous for being developed by ChatGPT. The puzzle
consists of a rectangular grid, with each cell containing a number. The player
has to cross out some numbers such that the sum of uncrossed numbers in each
row and column is equal to a given integer assigned to that row or column. In
this paper, we prove that deciding the solvability of a given Sumplete puzzle is
NP-complete, even if the grid contains only two different numbers.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 10:54:09 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 17:06:06 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Ruangwises",
"Suthee",
""
]
] |
new_dataset
| 0.999846 |
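The Sumplete record above defines the puzzle precisely: keep a subset of cells (cross out the rest) so that the kept numbers in every row and column sum to given targets. As a concrete illustration of that decision problem (not the paper's hardness proof), here is a small exhaustive-search checker; it is exponential in the number of cells and only meant for tiny instances.

```python
from itertools import product

def sumplete_solvable(grid, row_targets, col_targets):
    """Return True if some cells can be kept (the rest crossed out) so that
    the kept entries in every row and column sum to the given targets.
    Exhaustive search over all keep/cross-out assignments."""
    n_rows, n_cols = len(grid), len(grid[0])
    for mask in product((False, True), repeat=n_rows * n_cols):
        row_sums = [0] * n_rows
        col_sums = [0] * n_cols
        for r in range(n_rows):
            for c in range(n_cols):
                if mask[r * n_cols + c]:  # True means the cell stays uncrossed
                    row_sums[r] += grid[r][c]
                    col_sums[c] += grid[r][c]
        if row_sums == list(row_targets) and col_sums == list(col_targets):
            return True
    return False

# 2x2 instance using only two distinct numbers: keep the two 1s on the diagonal.
print(sumplete_solvable([[1, 2], [2, 1]], row_targets=[1, 1], col_targets=[1, 1]))  # True
```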
2309.07563
|
Alberto Fernandez-de-Retana
|
Alberto Fernandez-de-Retana and Igor Santos-Grueiro
|
Keep your Identity Small: Privacy-preserving Client-side Fingerprinting
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Device fingerprinting is a widely used technique that allows a third party to
identify a particular device. Applications of device fingerprinting include
authentication, attacker identification, or software license binding. Device
fingerprinting is also used on the web as a method for identifying users.
Unfortunately, one of its most widespread uses is to identify users visiting
different websites and thus build their browsing history. This constitutes a
specific type of web tracking that poses a threat to users' privacy. While many
anti-tracking solutions have been proposed, all of them block or tamper with
device fingerprinting techniques rather than just blocking their web tracking
application. Therefore, users may be limited in their experience while using a
website. In this paper, we propose Privacy-preserving Client-side
Fingerprinting (PCF), a new method that allows device fingerprinting on the
web, while blocking the possibility of performing web tracking. To this end, PCF
is built upon fingerprinting transparency: any website ought to declare its
fingerprinting scripts while users will compute them in a privacy-preserving
manner, limiting the resultant fingerprints for each different domain and,
therefore, making web tracking not feasible.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 09:45:29 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 16:32:12 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Fernandez-de-Retana",
"Alberto",
""
],
[
"Santos-Grueiro",
"Igor",
""
]
] |
new_dataset
| 0.994275 |
2309.07773
|
Danai Korre
|
Danai Korre and Judy Robertson
|
Spoken Humanoid Embodied Conversational Agents in Mobile Serious Games:
A Usability Assessment
|
46 pages, 9 figures, 14 tables
| null | null | null |
cs.HC cs.CL cs.MM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents an empirical investigation of the extent to which spoken
Humanoid Embodied Conversational Agents (HECAs) can foster usability in mobile
serious game (MSG) applications. The aim of the research is to assess the
impact of multiple agents and illusion of humanness on the quality of the
interaction. The experiment investigates two styles of agent presentation: an
agent of high human-likeness (HECA) and an agent of low human-likeness (text).
The purpose of the experiment is to assess whether and how agents of high
human-likeness can evoke the illusion of humanness and affect usability. Agents
of high human-likeness were designed by following the ECA design model, a
proposed guide for ECA development. The results of the experiment with 90
participants show that users prefer to interact with the HECAs. The difference
between the two versions is statistically significant with a large effect size
(d=1.01), with many of the participants justifying their choice by saying that
the human-like characteristics of the HECA made the version more appealing.
This research provides key information on the potential effect of HECAs on
serious games, which can provide insight into the design of future mobile
serious games.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 15:02:05 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 15:42:02 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Korre",
"Danai",
""
],
[
"Robertson",
"Judy",
""
]
] |
new_dataset
| 0.997776 |
2309.07983
|
Guangke Chen
|
Guangke Chen and Yedi Zhang and Fu Song
|
SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker
Recognition Systems
|
Accepted by the 31st Network and Distributed System Security (NDSS)
Symposium, 2024
| null | null | null |
cs.CR cs.LG cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Membership inference attacks allow adversaries to determine whether a
particular example was contained in the model's training dataset. While
previous works have confirmed the feasibility of such attacks in various
applications, none has focused on speaker recognition (SR), a promising
voice-based biometric recognition technique. In this work, we propose SLMIA-SR,
the first membership inference attack tailored to SR. In contrast to
conventional example-level attacks, our attack features speaker-level membership
inference, i.e., determining if any voices of a given speaker, either the same
as or different from the given inference voices, have been involved in the
training of a model. It is particularly useful and practical since the training
and inference voices are usually distinct, and it is also meaningful
considering the open-set nature of SR, namely, the recognition speakers were
often not present in the training data. We utilize intra-closeness and
inter-farness, two training objectives of SR, to characterize the differences
between training and non-training speakers and quantify them with two groups of
features driven by carefully-established feature engineering to mount the
attack. To improve the generalizability of our attack, we propose a novel
mixing ratio training strategy to train attack models. To enhance the attack
performance, we introduce voice chunk splitting to cope with the limited number
of inference voices and propose to train attack models dependent on the number
of inference voices. Our attack is versatile and can work in both white-box and
black-box scenarios. Additionally, we propose two novel techniques to reduce
the number of black-box queries while maintaining the attack performance.
Extensive experiments demonstrate the effectiveness of SLMIA-SR.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 18:40:28 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Chen",
"Guangke",
""
],
[
"Zhang",
"Yedi",
""
],
[
"Song",
"Fu",
""
]
] |
new_dataset
| 0.999537 |
2309.07993
|
Brian Acosta
|
Brian Acosta and Michael Posa
|
Bipedal Walking on Constrained Footholds with MPC Footstep Control
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bipedal robots promise the ability to traverse rough terrain quickly and
efficiently, and indeed, humanoid robots can now use strong ankles and careful
foot placement to traverse discontinuous terrain. However, more agile
underactuated bipeds have small feet and weak ankles, and must constantly
adjust their planned footstep position to maintain balance. We introduce a new
model-predictive footstep controller which jointly optimizes over the robot's
discrete choice of stepping surface, impending footstep position sequence,
ankle torque in the sagittal plane, and center of mass trajectory, to track a
velocity command. The controller is formulated as a single Mixed Integer
Quadratic Program (MIQP), which is solved at 50-200 Hz, depending on terrain
complexity. We implement a state-of-the-art real-time elevation mapping and
convex terrain decomposition framework to inform the controller of its
surroundings in the form of convex polygons representing steppable terrain. We
investigate the capabilities and challenges of our approach through hardware
experiments on the underactuated biped Cassie.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 19:08:08 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Acosta",
"Brian",
""
],
[
"Posa",
"Michael",
""
]
] |
new_dataset
| 0.9945 |
2309.08006
|
Xiaoting Wu
|
Xiaoting Wu, Xiaoyi Feng, Lili Liu, Constantino \'Alvarez Casado and
Miguel Bordallo L\'opez
|
Kinship Verification from rPPG using 1DCNN Attention networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial kinship verification aims at automatically determining whether two
subjects have a kinship relation. It has been widely studied from different
modalities, such as faces, voices, gait, and smiling expressions. However, the
potential of bio-signals, such as remote Photoplethysmography (rPPG) extracted
from facial videos, remains largely unexplored in the kinship verification
problem. In this paper, we investigate for the first time the usage of the rPPG
signal for kinship verification. Specifically, we propose a one-dimensional
Convolutional Neural Network (1DCNN) with a 1DCNN-Attention module and
contrastive loss to learn the kinship similarity from rPPGs. The network takes
multiple rPPG signals extracted from various facial Regions of Interest (ROIs)
as inputs. Additionally, the 1DCNN attention module is designed to learn and
capture the discriminative kin features from feature embeddings. Finally, the
proposed method is evaluated on the UvANEMO Smile Database from different kin
relations, showing the usefulness of rPPG signals in verifying kinship.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 19:33:11 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Wu",
"Xiaoting",
""
],
[
"Feng",
"Xiaoyi",
""
],
[
"Liu",
"Lili",
""
],
[
"Casado",
"Constantino Álvarez",
""
],
[
"López",
"Miguel Bordallo",
""
]
] |
new_dataset
| 0.984354 |
2309.08045
|
T. Anderson Keller
|
T. Anderson Keller, Lyle Muller, Terrence Sejnowski, Max Welling
|
Traveling Waves Encode the Recent Past and Enhance Sequence Learning
| null | null | null | null |
cs.NE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Traveling waves of neural activity have been observed throughout the brain at
a diversity of regions and scales; however, their precise computational role is
still debated. One physically grounded hypothesis suggests that the cortical
sheet may act like a wave-field capable of storing a short-term memory of
sequential stimuli through induced waves traveling across the cortical surface.
To date, however, the computational implications of this idea have remained
hypothetical due to the lack of a simple recurrent neural network architecture
capable of exhibiting such waves. In this work, we introduce a model to fill
this gap, which we denote the Wave-RNN (wRNN), and demonstrate how both
connectivity constraints and initialization play a crucial role in the
emergence of wave-like dynamics. We then empirically show how such an
architecture indeed efficiently encodes the recent past through a suite of
synthetic memory tasks where wRNNs learn faster and perform significantly
better than wave-free counterparts. Finally, we explore the implications of
this memory storage system on more complex sequence modeling tasks such as
sequential image classification and find that wave-based models not only again
outperform comparable wave-free RNNs while using significantly fewer
parameters, but additionally perform comparably to more complex gated
architectures such as LSTMs and GRUs. We conclude with a discussion of the
implications of these results for both neuroscience and machine learning.
|
[
{
"version": "v1",
"created": "Sun, 3 Sep 2023 22:48:10 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Keller",
"T. Anderson",
""
],
[
"Muller",
"Lyle",
""
],
[
"Sejnowski",
"Terrence",
""
],
[
"Welling",
"Max",
""
]
] |
new_dataset
| 0.956674 |
2309.08072
|
Yiyuan Yang
|
Yiyuan Yang, Kaichen Zhou, Niki Trigoni, Andrew Markham
|
SSL-Net: A Synergistic Spectral and Learning-based Network for Efficient
Bird Sound Classification
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient and accurate bird sound classification is important for ecology,
habitat protection, and scientific research, as it plays a central role in
monitoring the distribution and abundance of species. However, prevailing
methods typically demand extensively labeled audio datasets and have highly
customized frameworks, imposing substantial computational and annotation loads.
In this study, we present an efficient and general framework called SSL-Net,
which combines spectral and learned features to identify different bird sounds.
Encouraging empirical results gleaned from a standard field-collected bird
audio dataset validate the efficacy of our method in extracting features
efficiently and achieving heightened performance in bird sound classification,
even when working with limited sample sizes. Furthermore, we present three
feature fusion strategies, aiding engineers and researchers in their selection
through quantitative analysis.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 00:02:44 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Yang",
"Yiyuan",
""
],
[
"Zhou",
"Kaichen",
""
],
[
"Trigoni",
"Niki",
""
],
[
"Markham",
"Andrew",
""
]
] |
new_dataset
| 0.964078 |
2309.08095
|
Guanlin Wu
|
Guanlin Wu, Zhuokai Zhao, Yutao He
|
RELAX: Reinforcement Learning Enabled 2D-LiDAR Autonomous System for
Parsimonious UAVs
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicles (UAVs) have gained significant prominence in recent
years for areas including surveillance, search, rescue, and package delivery.
One key aspect of UAV operations shared across all these tasks is autonomous
path planning, which enables the UAV to navigate through complex, unknown, and
dynamic environments while avoiding obstacles without human control. Despite
countless efforts devoted to this subject, new challenges constantly arise due
to the persistent trade-off between performance and cost, and new studies are
urgently needed to develop autonomous systems for UAVs with a parsimonious
sensor setup, which is a major need for wider adoption. To this end, we propose
an end-to-end autonomous framework to enable UAVs with only a single 2D-LiDAR
sensor to operate in
unknown dynamic environments. More specifically, we break our approach into
three stages: a pre-processing Map Constructor; an offline Mission Planner; and
an online reinforcement learning (RL)-based Dynamic Obstacle Handler.
Experiments show that our approach provides robust and reliable dynamic path
planning and obstacle avoidance with only 1/10 of the cost in sensor
configuration. The code will be made public upon acceptance.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 01:25:33 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Wu",
"Guanlin",
""
],
[
"Zhao",
"Zhuokai",
""
],
[
"He",
"Yutao",
""
]
] |
new_dataset
| 0.965935 |
2309.08096
|
Yuankai Lin
|
Yuankai Lin, Yulin Zhou, Kaiji Huang, Qi Zhong, Tao Cheng, Hua Yang,
Zhouping Yin
|
GelSplitter: Tactile Reconstruction from Near Infrared and Visible
Images
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The GelSight-like visual tactile (VT) sensor has gained popularity as a
high-resolution tactile sensing technology for robots, capable of measuring
touch geometry using a single RGB camera. However, the development of
multi-modal perception for VT sensors remains a challenge, limited by the mono
camera. In this paper, we propose the GelSplitter, a new framework that
approaches the multi-modal VT sensor with synchronized multi-modal cameras,
resembling a more human-like tactile receptor. Furthermore, we focus on 3D tactile reconstruction
and implement a compact sensor structure that maintains a comparable size to
state-of-the-art VT sensors, even with the addition of a prism and a near
infrared (NIR) camera. We also design a photometric fusion stereo neural
network (PFSNN), which estimates surface normals of objects and reconstructs
touch geometry from both infrared and visible images. Our results demonstrate
that the accuracy of RGB and NIR fusion is higher than that of RGB images
alone. Additionally, our GelSplitter framework allows for a flexible
configuration of different camera sensor combinations, such as RGB and thermal
imaging.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 01:26:11 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Lin",
"Yuankai",
""
],
[
"Zhou",
"Yulin",
""
],
[
"Huang",
"Kaiji",
""
],
[
"Zhong",
"Qi",
""
],
[
"Cheng",
"Tao",
""
],
[
"Yang",
"Hua",
""
],
[
"Yin",
"Zhouping",
""
]
] |
new_dataset
| 0.999509 |
2309.08113
|
Zhicun Yin
|
Zhicun Yin, Ming Liu, Xiaoming Li, Hui Yang, Longan Xiao, Wangmeng Zuo
|
MetaF2N: Blind Image Super-Resolution by Learning Efficient Model
Adaptation from Faces
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to their highly structured characteristics, faces are easier to recover
than natural scenes for blind image super-resolution. Therefore, we can extract
the degradation representation of an image from the low-quality and recovered
face pairs. Using the degradation representation, realistic low-quality images
can then be synthesized to fine-tune the super-resolution model for the
real-world low-quality image. However, such a procedure is time-consuming and
laborious, and the gaps between recovered faces and the ground-truths further
increase the optimization uncertainty. To facilitate efficient model adaptation
towards image-specific degradations, we propose a method dubbed MetaF2N, which
leverages the contained Faces to fine-tune model parameters for adapting to the
whole Natural image in a Meta-learning framework. The degradation extraction
and low-quality image synthesis steps are thus circumvented in our MetaF2N, and
it requires only one fine-tuning step to get decent performance. Considering
the gaps between the recovered faces and ground-truths, we further deploy a
MaskNet for adaptively predicting loss weights at different positions to reduce
the impact of low-confidence areas. To evaluate our proposed MetaF2N, we have
collected a real-world low-quality dataset with one or multiple faces in each
image, and our MetaF2N achieves superior performance on both synthetic and
real-world datasets. Source code, pre-trained models, and collected datasets
are available at https://github.com/yinzhicun/MetaF2N.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 02:45:21 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Yin",
"Zhicun",
""
],
[
"Liu",
"Ming",
""
],
[
"Li",
"Xiaoming",
""
],
[
"Yang",
"Hui",
""
],
[
"Xiao",
"Longan",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
new_dataset
| 0.984049 |
2309.08134
|
Fangbo Qin
|
Fangbo Qin, Taogang Hou, Shan Lin, Kaiyuan Wang, Michael C. Yip, Shan
Yu
|
AnyOKP: One-Shot and Instance-Aware Object Keypoint Extraction with
Pretrained ViT
|
Submitted to IEEE ICRA 2024 as a contributed paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Towards flexible object-centric visual perception, we propose a one-shot
instance-aware object keypoint (OKP) extraction approach, AnyOKP, which
leverages the powerful representation ability of pretrained vision transformer
(ViT), and can obtain keypoints on multiple object instances of arbitrary
category after learning from a support image. An off-the-shelf pretrained ViT is
directly deployed for generalizable and transferable feature extraction, which
is followed by training-free feature enhancement. The best-prototype pairs
(BPPs) are searched for in support and query images based on appearance
similarity, to yield instance-unaware candidate keypoints. Then, the entire
graph with all candidate keypoints as vertices is divided into sub-graphs
according to the feature distributions on the graph edges. Finally, each
sub-graph represents an object instance. AnyOKP is evaluated on real object
images collected with the cameras of a robot arm, a mobile robot, and a
surgical robot, which not only demonstrates the cross-category flexibility and
instance awareness, but also shows remarkable robustness to domain shift and
viewpoint change.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 04:05:01 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Qin",
"Fangbo",
""
],
[
"Hou",
"Taogang",
""
],
[
"Lin",
"Shan",
""
],
[
"Wang",
"Kaiyuan",
""
],
[
"Yip",
"Michael C.",
""
],
[
"Yu",
"Shan",
""
]
] |
new_dataset
| 0.979092 |
2309.08152
|
Minsik Jeon
|
Minsik Jeon, Junwon Seo, Jihong Min
|
DA-RAW: Domain Adaptive Object Detection for Real-World Adverse Weather
Conditions
|
Our video can be found at https://youtu.be/vsUSrFsbuu8
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the success of deep learning-based object detection methods in recent
years, it is still challenging to make the object detector reliable in adverse
weather conditions such as rain and snow. For the robust performance of object
detectors, unsupervised domain adaptation has been utilized to adapt the
detection network trained on clear weather images to adverse weather images.
While previous methods do not explicitly address weather corruption during
adaptation, the domain gap between clear and adverse weather can be decomposed
into two factors with distinct characteristics: a style gap and a weather gap.
In this paper, we present an unsupervised domain adaptation framework for
object detection that can more effectively adapt to real-world environments
with adverse weather conditions by addressing these two gaps separately. Our
method resolves the style gap by concentrating on style-related information of
high-level features using an attention module. Using self-supervised
contrastive learning, our framework then reduces the weather gap and acquires
instance features that are robust to weather corruption. Extensive experiments
demonstrate that our method outperforms other methods for object detection in
adverse weather conditions.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 04:37:28 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Jeon",
"Minsik",
""
],
[
"Seo",
"Junwon",
""
],
[
"Min",
"Jihong",
""
]
] |
new_dataset
| 0.997576 |
2309.08158
|
Lachlan Simpson
|
Lachlan Simpson, Kyle Millar, Adriel Cheng, Hong Gunn Chew, Cheng-Chew
Lim
|
A Testbed for Automating and Analysing Mobile Devices and their
Applications
| null | null | null | null |
cs.NI cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The need for improved network situational awareness has been highlighted by
the growing complexity and severity of cyber-attacks. Mobile phones pose a
significant risk to network situational awareness due to their dynamic
behaviour and lack of visibility on a network. Machine learning techniques
enhance situational awareness by providing administrators insight into the
devices and activities which form their network. Developing machine learning
techniques for situational awareness requires a testbed to generate and label
network traffic. Current testbeds, however, are unable to automate the
generation and labelling of realistic network traffic. To address this, we
describe a testbed which automates applications on mobile devices to generate
and label realistic traffic. From this testbed, two labelled datasets of
network traffic have been created. We provide an analysis of the testbed
automation reliability and benchmark the datasets for the task of application
classification.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 04:48:58 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Simpson",
"Lachlan",
""
],
[
"Millar",
"Kyle",
""
],
[
"Cheng",
"Adriel",
""
],
[
"Chew",
"Hong Gunn",
""
],
[
"Lim",
"Cheng-Chew",
""
]
] |
new_dataset
| 0.987324 |
2309.08179
|
Xukun Zhou
|
Xukun Zhou, Zhenbo Song, Jun He, Hongyan Liu, Zhaoxin Fan
|
STDG: Semi-Teacher-Student Training Paradigram for Depth-guided
One-stage Scene Graph Generation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene Graph Generation is a critical enabler of environmental comprehension
for autonomous robotic systems. Most existing methods, however, are often
thwarted by the intricate dynamics of background complexity, which limits their
ability to fully decode the inherent topological information of the
environment. Additionally, the wealth of contextual information encapsulated
within depth cues is often left untapped, rendering existing approaches less
effective. To address these shortcomings, we present STDG, an avant-garde
Depth-Guided One-Stage Scene Graph Generation methodology. The innovative
architecture of STDG is a triad of custom-built modules: The Depth Guided HHA
Representation Generation Module, the Depth Guided Semi-Teaching Network
Learning Module, and the Depth Guided Scene Graph Generation Module. This
trifecta of modules synergistically harnesses depth information, covering all
aspects from depth signal generation and depth feature utilization, to the
final scene graph prediction. Importantly, this is achieved without imposing
additional computational burden during the inference phase. Experimental
results confirm that our method significantly enhances the performance of
one-stage scene graph generation baselines.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 06:06:33 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Zhou",
"Xukun",
""
],
[
"Song",
"Zhenbo",
""
],
[
"He",
"Jun",
""
],
[
"Liu",
"Hongyan",
""
],
[
"Fan",
"Zhaoxin",
""
]
] |
new_dataset
| 0.992919 |
2309.08206
|
Gongyang Li
|
Gongyang Li and Zhen Bai and Zhi Liu and Xinpeng Zhang and Haibin Ling
|
Salient Object Detection in Optical Remote Sensing Images Driven by
Transformer
|
13 pages, 6 figures, Accepted by IEEE Transactions on Image
Processing 2023
| null |
10.1109/TIP.2023.3314285
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Existing methods for Salient Object Detection in Optical Remote Sensing
Images (ORSI-SOD) mainly adopt Convolutional Neural Networks (CNNs) as the
backbone, such as VGG and ResNet. Since CNNs can only extract features within
certain receptive fields, most ORSI-SOD methods generally follow the
local-to-contextual paradigm. In this paper, we propose a novel Global
Extraction Local Exploration Network (GeleNet) for ORSI-SOD following the
global-to-local paradigm. Specifically, GeleNet first adopts a transformer
backbone to generate four-level feature embeddings with global long-range
dependencies. Then, GeleNet employs a Direction-aware Shuffle Weighted Spatial
Attention Module (D-SWSAM) and its simplified version (SWSAM) to enhance local
interactions, and a Knowledge Transfer Module (KTM) to further enhance
cross-level contextual interactions. D-SWSAM comprehensively perceives the
orientation information in the lowest-level features through directional
convolutions to adapt to various orientations of salient objects in ORSIs, and
effectively enhances the details of salient objects with an improved attention
mechanism. SWSAM discards the direction-aware part of D-SWSAM to focus on
localizing salient objects in the highest-level features. KTM models the
contextual correlation knowledge of two middle-level features of different
scales based on the self-attention mechanism, and transfers the knowledge to
the raw features to generate more discriminative features. Finally, a saliency
predictor is used to generate the saliency map based on the outputs of the
above three modules. Extensive experiments on three public datasets demonstrate
that the proposed GeleNet outperforms relevant state-of-the-art methods. The
code and results of our method are available at
https://github.com/MathLee/GeleNet.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 07:14:43 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Li",
"Gongyang",
""
],
[
"Bai",
"Zhen",
""
],
[
"Liu",
"Zhi",
""
],
[
"Zhang",
"Xinpeng",
""
],
[
"Ling",
"Haibin",
""
]
] |
new_dataset
| 0.996747 |
2309.08208
|
Hyun-Seo Shin
|
Hyun-seo Shin, Jungwoo Heo, Ju-ho Kim, Chan-yeong Lim, Wonbin Kim, and
Ha-Jin Yu
|
HM-Conformer: A Conformer-based audio deepfake detection system with
hierarchical pooling and multi-level classification token aggregation methods
|
Submitted to 2024 IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP 2024)
| null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Audio deepfake detection (ADD) is the task of detecting spoofing attacks
generated by text-to-speech or voice conversion systems. Spoofing evidence,
which helps to distinguish between spoofed and bona-fide utterances, might
exist either locally or globally in the input features. To capture these, the
Conformer, which consists of Transformers and CNN, possesses a suitable
structure. However, since the Conformer was designed for sequence-to-sequence
tasks, its direct application to ADD tasks may be sub-optimal. To tackle this
limitation, we propose HM-Conformer by adopting two components: (1)
Hierarchical pooling method progressively reducing the sequence length to
eliminate duplicated information; and (2) Multi-level classification token
aggregation method utilizing classification tokens to gather information from
different blocks. Owing to these components, HM-Conformer can efficiently
detect spoofing evidence by processing various sequence lengths and aggregating
them. In experimental results on the ASVspoof 2021 Deepfake dataset,
HM-Conformer achieved a 15.71% EER, showing competitive performance compared to
recent systems.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 07:18:30 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Shin",
"Hyun-seo",
""
],
[
"Heo",
"Jungwoo",
""
],
[
"Kim",
"Ju-ho",
""
],
[
"Lim",
"Chan-yeong",
""
],
[
"Kim",
"Wonbin",
""
],
[
"Yu",
"Ha-Jin",
""
]
] |
new_dataset
| 0.989349 |
2309.08232
|
Murat Isik
|
Murat Isik, Kayode Inadagbo
|
Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural
Networks
|
Accepted at 8th International Conference on Engineering of
Computer-based Systems
| null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an innovative methodology for improving the robustness
and computational efficiency of Spiking Neural Networks (SNNs), a critical
component in neuromorphic computing. The proposed approach integrates
astrocytes, a type of glial cell prevalent in the human brain, into SNNs,
creating astrocyte-augmented networks. To achieve this, we designed and
implemented an astrocyte model in two distinct platforms: CPU/GPU and FPGA. Our
FPGA implementation notably utilizes Dynamic Function Exchange (DFX)
technology, enabling real-time hardware reconfiguration and adaptive model
creation based on current operating conditions. The novel approach of
leveraging astrocytes significantly improves the fault tolerance of SNNs,
thereby enhancing their robustness. Notably, our astrocyte-augmented SNN
displays near-zero latency and theoretically infinite throughput, implying
exceptional computational efficiency. Through comprehensive comparative
analysis with prior works, it is established that our model surpasses others in
terms of neuron and synapse count while maintaining an efficient power
consumption profile. These results underscore the potential of our methodology
in shaping the future of neuromorphic computing, by providing robust and
energy-efficient systems.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 08:02:29 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Isik",
"Murat",
""
],
[
"Inadagbo",
"Kayode",
""
]
] |
new_dataset
| 0.990702 |
2309.08289
|
Kaouther Mouheb
|
Kaouther Mouheb, Mobina Ghojogh Nejad, Lavsen Dahal, Ehsan Samei, W.
Paul Segars, Joseph Y. Lo
|
Large Intestine 3D Shape Refinement Using Point Diffusion Models for
Digital Phantom Generation
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate 3D modeling of human organs plays a crucial role in building
computational phantoms for virtual imaging trials. However, generating
anatomically plausible reconstructions of organ surfaces from computed
tomography scans remains challenging for many structures in the human body.
This challenge is particularly evident when dealing with the large intestine.
In this study, we leverage recent advancements in geometric deep learning and
denoising diffusion probabilistic models to refine the segmentation results of
the large intestine. We begin by representing the organ as point clouds sampled
from the surface of the 3D segmentation mask. Subsequently, we employ a
hierarchical variational autoencoder to obtain global and local latent
representations of the organ's shape. We train two conditional denoising
diffusion models in the hierarchical latent space to perform shape refinement.
To further enhance our method, we incorporate a state-of-the-art surface
reconstruction model, allowing us to generate smooth meshes from the obtained
complete point clouds. Experimental results demonstrate the effectiveness of
our approach in capturing both the global distribution of the organ's shape and
its fine details. Our complete refinement pipeline demonstrates remarkable
enhancements in surface representation compared to the initial segmentation,
reducing the Chamfer distance by 70%, the Hausdorff distance by 32%, and the
Earth Mover's distance by 6%. By combining geometric deep learning, denoising
diffusion models, and advanced surface reconstruction techniques, our proposed
method offers a promising solution for accurately modeling the large
intestine's surface and can easily be extended to other anatomical structures.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 10:10:48 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Mouheb",
"Kaouther",
""
],
[
"Nejad",
"Mobina Ghojogh",
""
],
[
"Dahal",
"Lavsen",
""
],
[
"Samei",
"Ehsan",
""
],
[
"Segars",
"W. Paul",
""
],
[
"Lo",
"Joseph Y.",
""
]
] |
new_dataset
| 0.978907 |
2309.08323
|
Yanze Li
|
Yanze Li, Feixing Chen, Jingqi Cao, Ruoqi Zhao, Xuan Yang, Xingbang
Yang, Yubo Fan
|
MLP Based Continuous Gait Recognition of a Powered Ankle Prosthesis with
Serial Elastic Actuator
|
Submitted to ICRA 2024
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Powered ankle prostheses effectively assist people with lower limb amputation
to perform daily activities. High performance prostheses with adjustable
compliance and capability to predict and implement amputee's intent are crucial
for them to be comparable to or better than a real limb. However, current
designs fail to provide simple yet effective compliance of the joint with full
potential of modification, and lack an accurate gait prediction method in real
time. This paper proposes an innovative design of a powered ankle prosthesis with
a serial elastic actuator (SEA), and puts forward an MLP-based gait recognition
method that can accurately and continuously predict more gait parameters for
motion sensing and control. The prosthesis mimics a biological joint with similar
weight, torque, and power, which can assist walking at speeds of up to 4 m/s. A new design
of a planar torsional spring is proposed for the SEA, which has better stiffness,
endurance, and potential of modification than current designs. The gait
recognition system simultaneously generates locomotive speed, gait phase, ankle
angle and angular velocity utilizing only the signals of a single IMU, holding
advantages in continuity, adaptability across the speed range, accuracy, and
multi-functionality.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 11:25:48 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Li",
"Yanze",
""
],
[
"Chen",
"Feixing",
""
],
[
"Cao",
"Jingqi",
""
],
[
"Zhao",
"Ruoqi",
""
],
[
"Yang",
"Xuan",
""
],
[
"Yang",
"Xingbang",
""
],
[
"Fan",
"Yubo",
""
]
] |
new_dataset
| 0.983243 |
2309.08363
|
Corrado Monti
|
Yelena Mejova, Arthur Capozzi, Corrado Monti, Gianmarco De Francisci
Morales
|
Narratives of War: Ukrainian Memetic Warfare on Twitter
| null | null | null | null |
cs.CY cs.HC cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The 2022 Russian invasion of Ukraine has seen an intensification in the use
of social media by governmental actors in cyber warfare. Wartime communication
via memes has been a successful strategy used not only by independent accounts
such as @uamemesforces, but also, for the first time in a full-scale interstate
war, by official Ukrainian government accounts such as @Ukraine and @DefenceU.
We study this prominent example of memetic warfare through the lens of its
narratives, and find them to be a key component of success: tweets with a
'victim' narrative garner twice as many retweets. However, malevolent
narratives focusing on the enemy resonate more than those about heroism or
victims with countries providing more assistance to Ukraine. Our findings
present a nuanced examination of Ukraine's influence operations and of the
worldwide response to it, thus contributing new insights into the evolution of
socio-technical systems in times of war.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 12:41:03 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Mejova",
"Yelena",
""
],
[
"Capozzi",
"Arthur",
""
],
[
"Monti",
"Corrado",
""
],
[
"Morales",
"Gianmarco De Francisci",
""
]
] |
new_dataset
| 0.999683 |
2309.08368
|
Edoardo Arnaudo
|
Edoardo Arnaudo, Luca Barco, Matteo Merlo, Claudio Rossi
|
Robust Burned Area Delineation through Multitask Learning
|
Accepted at ECML PKDD 2023 - MACLEAN Workshop (11 pages, 3 figures)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, wildfires have posed a significant challenge due to their
increasing frequency and severity. For this reason, accurate delineation of
burned areas is crucial for environmental monitoring and post-fire assessment.
However, traditional approaches relying on binary segmentation models often
struggle to achieve robust and accurate results, especially when trained from
scratch, due to limited resources and the inherent imbalance of this
segmentation task. We propose to address these limitations in two ways: first,
we construct an ad-hoc dataset to cope with the limited resources, combining
information from Sentinel-2 feeds with Copernicus activations and other data
sources. In this dataset, we provide annotations for multiple tasks, including
burned area delineation and land cover segmentation. Second, we propose a
multitask learning framework that incorporates land cover classification as an
auxiliary task to enhance the robustness and performance of the burned area
segmentation models. We compare the performance of different models, including
UPerNet and SegFormer, demonstrating the effectiveness of our approach in
comparison to standard binary segmentation.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 12:49:17 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Arnaudo",
"Edoardo",
""
],
[
"Barco",
"Luca",
""
],
[
"Merlo",
"Matteo",
""
],
[
"Rossi",
"Claudio",
""
]
] |
new_dataset
| 0.991992 |
2309.08379
|
Kim Gerdes
|
Dana Aubakirova, Kim Gerdes, Lufei Liu
|
PatFig: Generating Short and Long Captions for Patent Figures
|
accepted to the ICCV 2023, CLVL: 5th Workshop on Closing the Loop
Between Vision and Language
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces Qatent PatFig, a novel large-scale patent figure
dataset comprising 30,000+ patent figures from over 11,000 European patent
applications. For each figure, this dataset provides short and long captions,
reference numerals, their corresponding terms, and the minimal claim set that
describes the interactions between the components of the image. To assess the
usability of the dataset, we finetune an LVLM model on Qatent PatFig to
generate short and long descriptions, and we investigate the effects of
incorporating various text-based cues at the prediction stage of the patent
figure captioning process.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 13:10:36 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Aubakirova",
"Dana",
""
],
[
"Gerdes",
"Kim",
""
],
[
"Liu",
"Lufei",
""
]
] |
new_dataset
| 0.999832 |
2309.08449
|
Hendrik Richter
|
Paul Moritz N\"orenberg, Hendrik Richter
|
Do Random and Chaotic Sequences Really Cause Different PSO Performance?
Further Results
|
arXiv admin note: text overlap with arXiv:2303.14099
| null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Empirical results show that PSO performance may be different if using either
chaotic or random sequences to drive the algorithm's search dynamics. We
analyze the phenomenon by evaluating the performance based on a benchmark of
test functions and comparing random and chaotic sequences according to equality
or difference in underlying distribution or density. Our results show that the
underlying distribution is the main influential factor in performance and thus
the assumption of general and systematic performance differences between chaos
and random does not appear plausible.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 14:53:07 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Nörenberg",
"{Paul Moritz",
""
],
[
"Richter",
"Hendrik",
""
]
] |
new_dataset
| 0.954404 |
2309.08474
|
Duy Phan Mr
|
Phan The Duy, Nghi Hoang Khoa, Nguyen Huu Quyen, Le Cong Trinh, Vu
Trung Kien, Trinh Minh Hoang, Van-Hau Pham
|
VulnSense: Efficient Vulnerability Detection in Ethereum Smart Contracts
by Multimodal Learning with Graph Neural Network and Language Model
| null | null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents VulnSense framework, a comprehensive approach to
efficiently detect vulnerabilities in Ethereum smart contracts using a
multimodal learning approach on graph-based and natural language processing
(NLP) models. Our proposed framework combines three types of features from
smart contracts comprising source code, opcode sequences, and control flow
graph (CFG) extracted from bytecode. We employ Bidirectional Encoder
Representations from Transformers (BERT), Bidirectional Long Short-Term Memory
(BiLSTM) and Graph Neural Network (GNN) models to extract and analyze these
features. The final layer of our multimodal approach consists of a fully
connected layer used to predict vulnerabilities in Ethereum smart contracts.
Addressing limitations of existing vulnerability detection methods relying on
single-feature or single-model deep learning techniques, our method surpasses
accuracy and effectiveness constraints. We assess VulnSense using a collection
of 1,769 smart contracts derived from the combination of three datasets:
Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with
various unimodal and multimodal learning techniques contributed by GNN, BiLSTM
and BERT architectures. The experimental outcomes demonstrate the superior
performance of our proposed approach, achieving an average accuracy of 77.96\%
across all three categories of vulnerable smart contracts.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 15:26:44 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Duy",
"Phan The",
""
],
[
"Khoa",
"Nghi Hoang",
""
],
[
"Quyen",
"Nguyen Huu",
""
],
[
"Trinh",
"Le Cong",
""
],
[
"Kien",
"Vu Trung",
""
],
[
"Hoang",
"Trinh Minh",
""
],
[
"Pham",
"Van-Hau",
""
]
] |
new_dataset
| 0.960774 |
2309.08480
|
Ginger Delmas
|
Ginger Delmas, Philippe Weinzaepfel, Francesc Moreno-Noguer, Gr\'egory
Rogez
|
PoseFix: Correcting 3D Human Poses with Natural Language
|
Published in ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically producing instructions to modify one's posture could open the
door to endless applications, such as personalized coaching and in-home
physical therapy. Tackling the reverse problem (i.e., refining a 3D pose based
on some natural language feedback) could help for assisted 3D character
animation or robot teaching, for instance. Although a few recent works explore
the connections between natural language and 3D human pose, none focus on
describing 3D body pose differences. In this paper, we tackle the problem of
correcting 3D human poses with natural language. To this end, we introduce the
PoseFix dataset, which consists of several thousand paired 3D poses and their
corresponding text feedback, that describe how the source pose needs to be
modified to obtain the target pose. We demonstrate the potential of this
dataset on two tasks: (1) text-based pose editing, that aims at generating
corrected 3D body poses given a query pose and a text modifier; and (2)
correctional text generation, where instructions are generated based on the
differences between two body poses.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 15:36:50 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Delmas",
"Ginger",
""
],
[
"Weinzaepfel",
"Philippe",
""
],
[
"Moreno-Noguer",
"Francesc",
""
],
[
"Rogez",
"Grégory",
""
]
] |
new_dataset
| 0.995232 |
2309.08482
|
Pavel Rojtberg
|
Pavel Rojtberg, Thomas P\"ollabauer
|
YCB-Ev: Event-vision dataset for 6DoF object pose estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Our work introduces the YCB-Ev dataset, which contains synchronized RGB-D
frames and event data that enables evaluating 6DoF object pose estimation
algorithms using these modalities.
This dataset provides ground truth 6DoF object poses for the same 21 YCB
objects \cite{calli2017yale} that were used in the YCB-Video (YCB-V) dataset,
enabling the evaluation of algorithm performance when transferred across
datasets.
The dataset consists of 21 synchronized event and RGB-D sequences, amounting
to a total of 7:43 minutes of video. Notably, 12 of these sequences feature the
same object arrangement as the YCB-V subset used in the BOP challenge.
Our dataset is the first to provide ground truth 6DoF pose data for event
streams. Furthermore, we evaluate the generalization capabilities of two
state-of-the-art algorithms, which were pre-trained for the BOP challenge,
using our novel YCB-V sequences.
The proposed dataset is available at https://github.com/paroj/ycbev.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 15:42:00 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Rojtberg",
"Pavel",
""
],
[
"Pöllabauer",
"Thomas",
""
]
] |
new_dataset
| 0.999781 |
2309.08503
|
Juraj Vladika
|
Juraj Vladika, Phillip Schneider, Florian Matthes
|
HealthFC: A Dataset of Health Claims for Evidence-Based Medical
Fact-Checking
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Seeking health-related advice on the internet has become a common practice in
the digital era. Determining the trustworthiness of medical claims found online
and finding appropriate evidence for this information is increasingly
challenging. Fact-checking has emerged as an approach to assess the veracity of
factual claims using evidence from credible knowledge sources. To help advance
the automation of this task, in this paper, we introduce a novel dataset of 750
health-related claims, labeled for veracity by medical experts and backed with
evidence from appropriate clinical studies. We provide an analysis of the
dataset, highlighting its characteristics and challenges. The dataset can be
used for Machine Learning tasks related to automated fact-checking such as
evidence retrieval, veracity prediction, and explanation generation. For this
purpose, we provide baseline models based on different approaches, examine
their performance, and discuss the findings.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 16:05:48 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Vladika",
"Juraj",
""
],
[
"Schneider",
"Phillip",
""
],
[
"Matthes",
"Florian",
""
]
] |
new_dataset
| 0.999791 |
2309.08579
|
Hai Huynh
|
Hai D. Huynh, and S. Natarajan, and H. Nguyen-Xuan, and Xiaoying
Zhuang
|
Polytopal composite finite elements for modeling concrete fracture based
on nonlocal damage models
| null | null | null | null |
cs.CE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The paper presents an assumed strain formulation over polygonal meshes to
accurately evaluate the strain fields in nonlocal damage models. An assumed
strain technique based on the Hu-Washizu variational principle is employed to
generate a new strain approximation instead of direct derivation from the basis
functions and the displacement fields. The underlying idea embedded in
arbitrary finite polygons is named Polytopal composite finite elements
(PCFEM). The PCFEM is accordingly applied within the framework of the nonlocal
model of continuum damage mechanics to enhance the description of damage
behaviours in which highly localized deformations must be captured accurately.
This application is helpful to reduce the mesh-sensitivity and elaborate the
process-zone of damage models. Several numerical examples are designed for
various cases of fracture to discuss and validate the computational capability
of the present method through comparison with published numerical results and
experimental data from the literature.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 07:36:46 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Huynh",
"Hai D.",
""
],
[
"Natarajan",
"S.",
""
],
[
"Nguyen-Xuan",
"H.",
""
],
[
"Zhuang",
"Xiaoying",
""
]
] |
new_dataset
| 0.977579 |
2309.08588
|
Fabien Delattre
|
Fabien Delattre, David Dirnfeld, Phat Nguyen, Stephen Scarano, Michael
J. Jones, Pedro Miraldo, Erik Learned-Miller
|
Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes
|
Published at ICCV 2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach to estimating camera rotation in crowded, real-world
scenes from handheld monocular video. While camera rotation estimation is a
well-studied problem, no previous methods exhibit both high accuracy and
acceptable speed in this setting. Because the setting is not addressed well by
other datasets, we provide a new dataset and benchmark, with high-accuracy,
rigorously verified ground truth, on 17 video sequences. Methods developed for
wide baseline stereo (e.g., 5-point methods) perform poorly on monocular video.
On the other hand, methods used in autonomous driving (e.g., SLAM) leverage
specific sensor setups, specific motion models, or local optimization
strategies (lagging batch processing) and do not generalize well to handheld
video. Finally, for dynamic scenes, commonly used robustification techniques
like RANSAC require large numbers of iterations, and become prohibitively slow.
We introduce a novel generalization of the Hough transform on SO(3) to
efficiently and robustly find the camera rotation most compatible with optical
flow. Among comparably fast methods, ours reduces error by almost 50\% over the
next best, and is more accurate than any method, irrespective of speed. This
represents a strong new performance point for crowded scenes, an important
setting for computer vision. The code and the dataset are available at
https://fabiendelattre.com/robust-rotation-estimation.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 17:44:07 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Delattre",
"Fabien",
""
],
[
"Dirnfeld",
"David",
""
],
[
"Nguyen",
"Phat",
""
],
[
"Scarano",
"Stephen",
""
],
[
"Jones",
"Michael J.",
""
],
[
"Miraldo",
"Pedro",
""
],
[
"Learned-Miller",
"Erik",
""
]
] |
new_dataset
| 0.99705 |
2111.12663
|
Evangelos Alexiou
|
Evangelos Alexiou, Xuemei Zhou, Irene Viola, Pablo Cesar
|
PointPCA: Point Cloud Objective Quality Assessment Using PCA-Based
Descriptors
|
14 pages, 7 figures, 6 tables
| null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Point clouds denote a prominent solution for the representation of 3D
photo-realistic content in immersive applications. Similarly to other imaging
modalities, quality predictions for point cloud contents are vital for a wide
range of applications, enabling trade-off optimizations between data quality
and data size in every processing step from acquisition to rendering. In this
work, we focus on use cases that consider human end-users consuming point cloud
contents and, hence, we concentrate on visual quality metrics. In particular,
we propose a set of perceptually relevant descriptors based on Principal
Component Analysis (PCA) decomposition, which is applied to both geometry and
texture data for full-reference point cloud quality assessment. Statistical
features are derived from these descriptors to characterize local shape and
appearance properties for both a reference and a distorted point cloud. The
extracted statistical features are subsequently compared to provide
corresponding predictions of visual quality for the distorted point cloud. As
part of our method, a learning-based approach is proposed to fuse these
individual predictors to a unified perceptual score. We validate the accuracy
of the individual predictors, as well as the unified quality scores obtained
after regression against subjectively annotated datasets, showing that our
metric outperforms state-of-the-art solutions. Insights regarding design
decisions are provided through exploratory studies, evaluating the performance
of our metric under different parameter configurations, attribute domains,
color spaces, and regression models. A software implementation of the proposed
metric is made available at the following link:
https://github.com/cwi-dis/pointpca_suite.
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 17:51:16 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2022 21:31:24 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Sep 2023 21:54:35 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Alexiou",
"Evangelos",
""
],
[
"Zhou",
"Xuemei",
""
],
[
"Viola",
"Irene",
""
],
[
"Cesar",
"Pablo",
""
]
] |
new_dataset
| 0.997897 |
2205.09208
|
Mike Heddes
|
Mike Heddes, Igor Nunes, Pere Verg\'es, Denis Kleyko, Danny Abraham,
Tony Givargis, Alexandru Nicolau, Alexander Veidenbaum
|
Torchhd: An Open Source Python Library to Support Research on
Hyperdimensional Computing and Vector Symbolic Architectures
| null |
Journal of Machine Learning Research 24 (2023) 1--10
| null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Hyperdimensional computing (HD), also known as vector symbolic architectures
(VSA), is a framework for computing with distributed representations by
exploiting properties of random high-dimensional vector spaces. The commitment
of the scientific community to aggregate and disseminate research in this
particularly multidisciplinary area has been fundamental for its advancement.
Joining these efforts, we present Torchhd, a high-performance open source
Python library for HD/VSA. Torchhd seeks to make HD/VSA more accessible and
serves as an efficient foundation for further research and application
development. The easy-to-use library builds on top of PyTorch and features
state-of-the-art HD/VSA functionality, clear documentation, and implementation
examples from well-known publications. Comparing publicly available code with
their corresponding Torchhd implementation shows that experiments can run up to
100x faster. Torchhd is available at:
https://github.com/hyperdimensional-computing/torchhd.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 20:34:25 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jul 2023 17:57:36 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Jul 2023 15:27:34 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Heddes",
"Mike",
""
],
[
"Nunes",
"Igor",
""
],
[
"Vergés",
"Pere",
""
],
[
"Kleyko",
"Denis",
""
],
[
"Abraham",
"Danny",
""
],
[
"Givargis",
"Tony",
""
],
[
"Nicolau",
"Alexandru",
""
],
[
"Veidenbaum",
"Alexander",
""
]
] |
new_dataset
| 0.991981 |
2208.13049
|
Qian Lou
|
Mengxin Zheng, Qian Lou, Lei Jiang
|
TrojViT: Trojan Insertion in Vision Transformers
|
10 pages, 4 figures, 11 tables
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Vision Transformers (ViTs) have demonstrated the state-of-the-art performance
in various vision-related tasks. The success of ViTs motivates adversaries to
perform backdoor attacks on ViTs. Although the vulnerability of traditional
CNNs to backdoor attacks is well-known, backdoor attacks on ViTs are
seldom-studied. Compared to CNNs capturing pixel-wise local features by
convolutions, ViTs extract global context information through patches and
attentions. Na\"ively transplanting CNN-specific backdoor attacks to ViTs
yields only a low clean data accuracy and a low attack success rate. In this
paper, we propose a stealth and practical ViT-specific backdoor attack
$TrojViT$. Rather than an area-wise trigger used by CNN-specific backdoor
attacks, TrojViT generates a patch-wise trigger designed to build a Trojan
composed of some vulnerable bits on the parameters of a ViT stored in DRAM
memory through patch salience ranking and attention-target loss. TrojViT
further uses minimum-tuned parameter update to reduce the bit number of the
Trojan. Once the attacker inserts the Trojan into the ViT model by flipping the
vulnerable bits, the ViT model still produces normal inference accuracy with
benign inputs. But when the attacker embeds a trigger into an input, the ViT
model is forced to classify the input to a predefined target class. We show
that flipping only a few vulnerable bits identified by TrojViT on a ViT model
using the well-known RowHammer can transform the model into a backdoored one.
We perform extensive experiments of multiple datasets on various ViT models.
TrojViT can classify $99.64\%$ of test images to a target class by flipping
$345$ bits on a ViT for ImageNet. Our code is available at
https://github.com/mxzheng/TrojViT
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 16:19:26 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Nov 2022 03:29:31 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 21:15:21 GMT"
},
{
"version": "v4",
"created": "Thu, 14 Sep 2023 14:54:04 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Zheng",
"Mengxin",
""
],
[
"Lou",
"Qian",
""
],
[
"Jiang",
"Lei",
""
]
] |
new_dataset
| 0.999596 |
2210.00305
|
Ningyu Zhang
|
Xin Xie, Zhoubo Li, Xiaohan Wang, Zekun Xi, Ningyu Zhang
|
LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph
Embeddings
|
AACL 2023 System Demonstrations, the project website is
https://zjunlp.github.io/project/promptkg/
| null | null | null |
cs.CL cs.AI cs.DB cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge Graphs (KGs) often have two characteristics: heterogeneous graph
structure and text-rich entity/relation information. Text-based KG embeddings
can represent entities by encoding descriptions with pre-trained language
models, but no open-sourced library is specifically designed for KGs with PLMs
at present. In this paper, we present LambdaKG, a library for KGE that is equipped
with many pre-trained language models (e.g., BERT, BART, T5, GPT-3), and
supports various tasks (e.g., knowledge graph completion, question answering,
recommendation, and knowledge probing). LambdaKG is publicly open-sourced at
https://github.com/zjunlp/PromptKG/tree/main/lambdaKG, with a demo video at
http://deepke.zjukg.cn/lambdakg.mp4 and long-term maintenance.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 16:01:53 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 14:35:33 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 07:06:03 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Xie",
"Xin",
""
],
[
"Li",
"Zhoubo",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Xi",
"Zekun",
""
],
[
"Zhang",
"Ningyu",
""
]
] |
new_dataset
| 0.99129 |
2211.05363
|
Yan Zhao
|
Yan Zhao, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Xiaohui Zhang,
Yongfeng Dong
|
EmoFake: An Initial Dataset for Emotion Fake Audio Detection
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many datasets have been designed to further the development of fake audio
detection, such as datasets of the ASVspoof and ADD challenges. However, these
datasets do not consider a situation that the emotion of the audio has been
changed from one to another, while other information (e.g. speaker identity and
content) remains the same. Changing the emotion of an audio can lead to
semantic changes. Speech with tampered semantics may pose threats to people's
lives. Therefore, this paper reports our progress in developing such an emotion
fake audio detection dataset, named EmoFake, in which the emotion state of the
original audio has been changed. The fake audio in EmoFake is generated by open
source emotion voice conversion models. Furthermore, we propose a method named Graph
Attention networks using Deep Emotion embedding (GADE) for the detection of
emotion fake audio. Some benchmark experiments are conducted on this dataset.
The results show that our designed dataset poses a challenge to the fake audio
detection model trained with the LA dataset of ASVspoof 2019. The proposed GADE
shows good performance in the face of emotion fake audio.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 06:09:51 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 07:38:52 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 08:56:11 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Zhao",
"Yan",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Wang",
"Chenglong",
""
],
[
"Zhang",
"Xiaohui",
""
],
[
"Dong",
"Yongfeng",
""
]
] |
new_dataset
| 0.999844 |
2303.16353
|
Alexander Gaidis
|
Alexander J. Gaidis and Joao Moreira and Ke Sun and Alyssa Milburn and
Vaggelis Atlidakis and Vasileios P. Kemerlis
|
FineIBT: Fine-grain Control-flow Enforcement with Indirect Branch
Tracking
|
Accepted at RAID 2023. Errata (reported by Lucas Becker): Section
2.4.1: "in which every bit represents 8 bytes of (virtual) memory" -> "in
which two bits represent 16 bytes of (virtual) memory"
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We present the design, implementation, and evaluation of FineIBT: a CFI
enforcement mechanism that improves the precision of hardware-assisted CFI
solutions, like Intel IBT, by instrumenting program code to reduce the
valid/allowed targets of indirect forward-edge transfers. We study the design
of FineIBT on the x86-64 architecture, and implement and evaluate it on Linux
and the LLVM toolchain. We designed FineIBT's instrumentation to be compact,
incurring low runtime and memory overheads, and generic, so as to support
different CFI policies. Our prototype implementation incurs negligible runtime
slowdowns ($\approx$0%-1.94% in SPEC CPU2017 and $\approx$0%-1.92% in
real-world applications) outperforming Clang-CFI. Lastly, we investigate the
effectiveness/security and compatibility of FineIBT using the ConFIRM CFI
benchmarking suite, demonstrating that our instrumentation provides complete
coverage in the presence of modern software features, while supporting a wide
range of CFI policies with the same, predictable performance.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 23:21:10 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jul 2023 15:20:15 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Sep 2023 20:52:02 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Gaidis",
"Alexander J.",
""
],
[
"Moreira",
"Joao",
""
],
[
"Sun",
"Ke",
""
],
[
"Milburn",
"Alyssa",
""
],
[
"Atlidakis",
"Vaggelis",
""
],
[
"Kemerlis",
"Vasileios P.",
""
]
] |
new_dataset
| 0.998395 |
2303.16617
|
Haoqian Wu
|
Haoqian Wu, Zhipeng Hu, Lincheng Li, Yongqiang Zhang, Changjie Fan,
Xin Yu
|
NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field
Indirect Illumination
|
Accepted in CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inverse rendering methods aim to estimate geometry, materials and
illumination from multi-view RGB images. In order to achieve better
decomposition, recent approaches attempt to model indirect illuminations
reflected from different materials via Spherical Gaussians (SG), which,
however, tends to blur the high-frequency reflection details. In this paper, we
propose an end-to-end inverse rendering pipeline that decomposes materials and
illumination from multi-view images, while considering near-field indirect
illumination. In a nutshell, we introduce the Monte Carlo sampling based path
tracing and cache the indirect illumination as neural radiance, enabling a
physics-faithful and easy-to-optimize inverse rendering method. To enhance
efficiency and practicality, we leverage SG to represent the smooth environment
illuminations and apply importance sampling techniques. To supervise indirect
illuminations from unobserved directions, we develop a novel radiance
consistency constraint between implicit neural radiance and path tracing
results of unobserved rays along with the joint optimization of materials and
illuminations, thus significantly improving the decomposition performance.
Extensive experiments demonstrate that our method outperforms the
state-of-the-art on multiple synthetic and real datasets, especially in terms
of inter-reflection decomposition. Our code and data are available at
https://woolseyyy.github.io/nefii/.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 12:05:19 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 09:02:48 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Wu",
"Haoqian",
""
],
[
"Hu",
"Zhipeng",
""
],
[
"Li",
"Lincheng",
""
],
[
"Zhang",
"Yongqiang",
""
],
[
"Fan",
"Changjie",
""
],
[
"Yu",
"Xin",
""
]
] |
new_dataset
| 0.985545 |
2304.08981
|
Zheng Lian
|
Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang,
Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang,
Erik Cambria, Guoying Zhao, Bj\"orn W. Schuller, Jianhua Tao
|
MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised
Learning
| null | null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The first Multimodal Emotion Recognition Challenge (MER 2023) was
successfully held at ACM Multimedia. The challenge focuses on system robustness
and consists of three distinct tracks: (1) MER-MULTI, where participants are
required to recognize both discrete and dimensional emotions; (2) MER-NOISE, in
which noise is added to test videos for modality robustness evaluation; (3)
MER-SEMI, which provides a large amount of unlabeled samples for
semi-supervised learning. In this paper, we introduce the motivation behind
this challenge, describe the benchmark dataset, and provide some statistics
about participants. To continue using this dataset after MER 2023, please sign
a new End User License Agreement and send it to our official email address
[email protected]. We believe this high-quality dataset can become
a new benchmark in multimodal emotion recognition, especially for the Chinese
research community.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 13:23:42 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 04:03:28 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Lian",
"Zheng",
""
],
[
"Sun",
"Haiyang",
""
],
[
"Sun",
"Licai",
""
],
[
"Chen",
"Kang",
""
],
[
"Xu",
"Mingyu",
""
],
[
"Wang",
"Kexin",
""
],
[
"Xu",
"Ke",
""
],
[
"He",
"Yu",
""
],
[
"Li",
"Ying",
""
],
[
"Zhao",
"Jinming",
""
],
[
"Liu",
"Ye",
""
],
[
"Liu",
"Bin",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Wang",
"Meng",
""
],
[
"Cambria",
"Erik",
""
],
[
"Zhao",
"Guoying",
""
],
[
"Schuller",
"Björn W.",
""
],
[
"Tao",
"Jianhua",
""
]
] |
new_dataset
| 0.999742 |
2305.00302
|
Yuki Okamoto
|
Yuki Okamoto, Keisuke Imoto, Shinnosuke Takamichi, Ryotaro Nagase,
Takahiro Fukumori, Yoichi Yamashita
|
Environmental sound synthesis from vocal imitations and sound event
labels
|
Submitted to ICASSP2024
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One way of expressing an environmental sound is using vocal imitations, which
involve the process of replicating or mimicking the rhythm and pitch of sounds
by voice. We can effectively express the features of environmental sounds, such
as rhythm and pitch, using vocal imitations, which cannot be expressed by
conventional input information, such as sound event labels, images, or texts,
in an environmental sound synthesis model. In this paper, we propose a
framework for environmental sound synthesis from vocal imitations and sound
event labels based on a framework of a vector quantized encoder and the
Tacotron2 decoder. Using vocal imitations is expected to control the pitch and
rhythm of the synthesized sound, which only sound event labels cannot control.
Our objective and subjective experimental results show that vocal imitations
effectively control the pitch and rhythm of synthesized sounds.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 17:06:04 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 10:13:25 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Okamoto",
"Yuki",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Nagase",
"Ryotaro",
""
],
[
"Fukumori",
"Takahiro",
""
],
[
"Yamashita",
"Yoichi",
""
]
] |
new_dataset
| 0.96946 |
2305.03027
|
Tobias Kirschstein
|
Tobias Kirschstein, Shenhan Qian, Simon Giebenhain, Tim Walter,
Matthias Nie{\ss}ner
|
NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads
|
Siggraph 2023, Project Page:
https://tobias-kirschstein.github.io/nersemble/ , Video:
https://youtu.be/a-OAWqBzldU
|
ACM Transactions on Graphics, Volume 42, Issue 4, Article No. 161
(2023) 1-14
|
10.1145/3592455
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We focus on reconstructing high-fidelity radiance fields of human heads,
capturing their animations over time, and synthesizing re-renderings from novel
viewpoints at arbitrary time steps. To this end, we propose a new multi-view
capture setup composed of 16 calibrated machine vision cameras that record
time-synchronized images at 7.1 MP resolution and 73 frames per second. With
our setup, we collect a new dataset of over 4700 high-resolution,
high-framerate sequences of more than 220 human heads, from which we introduce
a new human head reconstruction benchmark. The recorded sequences cover a wide
range of facial dynamics, including head motions, natural expressions,
emotions, and spoken language. In order to reconstruct high-fidelity human
heads, we propose Dynamic Neural Radiance Fields using Hash Ensembles
(NeRSemble). We represent scene dynamics by combining a deformation field and
an ensemble of 3D multi-resolution hash encodings. The deformation field allows
for precise modeling of simple scene movements, while the ensemble of hash
encodings helps to represent complex dynamics. As a result, we obtain radiance
field representations of human heads that capture motion over time and
facilitate re-rendering of arbitrary novel viewpoints. In a series of
experiments, we explore the design choices of our method and demonstrate that
our approach outperforms state-of-the-art dynamic radiance field approaches by
a significant margin.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 17:52:18 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Kirschstein",
"Tobias",
""
],
[
"Qian",
"Shenhan",
""
],
[
"Giebenhain",
"Simon",
""
],
[
"Walter",
"Tim",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.980135 |
2305.10403
|
Andrew Dai
|
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry
Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey,
Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang,
Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin
Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang,
Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha,
James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng,
Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Cl\'ement
Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark D\'iaz,
Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus
Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari,
Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui,
Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao
Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine
Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek
Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma
Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John
Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek,
Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker
Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee
Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon
Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang,
Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan
Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce
Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu
|
PaLM 2 Technical Report
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce PaLM 2, a new state-of-the-art language model that has better
multilingual and reasoning capabilities and is more compute-efficient than its
predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture
of objectives. Through extensive evaluations on English and multilingual
language, and reasoning tasks, we demonstrate that PaLM 2 has significantly
improved quality on downstream tasks across different model sizes, while
simultaneously exhibiting faster and more efficient inference compared to PaLM.
This improved efficiency enables broader deployment while also allowing the
model to respond faster, for a more natural pace of interaction. PaLM 2
demonstrates robust reasoning capabilities exemplified by large improvements
over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable
performance on a suite of responsible AI evaluations, and enables
inference-time control over toxicity without additional overhead or impact on
other capabilities. Overall, PaLM 2 achieves state-of-the-art performance
across a diverse set of tasks and capabilities.
When discussing the PaLM 2 family, it is important to distinguish between
pre-trained models (of various sizes), fine-tuned variants of these models, and
the user-facing products that use these models. In particular, user-facing
products typically include additional pre- and post-processing steps.
Additionally, the underlying models may evolve over time. Therefore, one should
not expect the performance of user-facing products to exactly match the results
reported in this report.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 17:46:53 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 18:42:20 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Sep 2023 20:35:45 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Anil",
"Rohan",
""
],
[
"Dai",
"Andrew M.",
""
],
[
"Firat",
"Orhan",
""
],
[
"Johnson",
"Melvin",
""
],
[
"Lepikhin",
"Dmitry",
""
],
[
"Passos",
"Alexandre",
""
],
[
"Shakeri",
"Siamak",
""
],
[
"Taropa",
"Emanuel",
""
],
[
"Bailey",
"Paige",
""
],
[
"Chen",
"Zhifeng",
""
],
[
"Chu",
"Eric",
""
],
[
"Clark",
"Jonathan H.",
""
],
[
"Shafey",
"Laurent El",
""
],
[
"Huang",
"Yanping",
""
],
[
"Meier-Hellstern",
"Kathy",
""
],
[
"Mishra",
"Gaurav",
""
],
[
"Moreira",
"Erica",
""
],
[
"Omernick",
"Mark",
""
],
[
"Robinson",
"Kevin",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Tay",
"Yi",
""
],
[
"Xiao",
"Kefan",
""
],
[
"Xu",
"Yuanzhong",
""
],
[
"Zhang",
"Yujing",
""
],
[
"Abrego",
"Gustavo Hernandez",
""
],
[
"Ahn",
"Junwhan",
""
],
[
"Austin",
"Jacob",
""
],
[
"Barham",
"Paul",
""
],
[
"Botha",
"Jan",
""
],
[
"Bradbury",
"James",
""
],
[
"Brahma",
"Siddhartha",
""
],
[
"Brooks",
"Kevin",
""
],
[
"Catasta",
"Michele",
""
],
[
"Cheng",
"Yong",
""
],
[
"Cherry",
"Colin",
""
],
[
"Choquette-Choo",
"Christopher A.",
""
],
[
"Chowdhery",
"Aakanksha",
""
],
[
"Crepy",
"Clément",
""
],
[
"Dave",
"Shachi",
""
],
[
"Dehghani",
"Mostafa",
""
],
[
"Dev",
"Sunipa",
""
],
[
"Devlin",
"Jacob",
""
],
[
"Díaz",
"Mark",
""
],
[
"Du",
"Nan",
""
],
[
"Dyer",
"Ethan",
""
],
[
"Feinberg",
"Vlad",
""
],
[
"Feng",
"Fangxiaoyu",
""
],
[
"Fienber",
"Vlad",
""
],
[
"Freitag",
"Markus",
""
],
[
"Garcia",
"Xavier",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Gonzalez",
"Lucas",
""
],
[
"Gur-Ari",
"Guy",
""
],
[
"Hand",
"Steven",
""
],
[
"Hashemi",
"Hadi",
""
],
[
"Hou",
"Le",
""
],
[
"Howland",
"Joshua",
""
],
[
"Hu",
"Andrea",
""
],
[
"Hui",
"Jeffrey",
""
],
[
"Hurwitz",
"Jeremy",
""
],
[
"Isard",
"Michael",
""
],
[
"Ittycheriah",
"Abe",
""
],
[
"Jagielski",
"Matthew",
""
],
[
"Jia",
"Wenhao",
""
],
[
"Kenealy",
"Kathleen",
""
],
[
"Krikun",
"Maxim",
""
],
[
"Kudugunta",
"Sneha",
""
],
[
"Lan",
"Chang",
""
],
[
"Lee",
"Katherine",
""
],
[
"Lee",
"Benjamin",
""
],
[
"Li",
"Eric",
""
],
[
"Li",
"Music",
""
],
[
"Li",
"Wei",
""
],
[
"Li",
"YaGuang",
""
],
[
"Li",
"Jian",
""
],
[
"Lim",
"Hyeontaek",
""
],
[
"Lin",
"Hanzhao",
""
],
[
"Liu",
"Zhongtao",
""
],
[
"Liu",
"Frederick",
""
],
[
"Maggioni",
"Marcello",
""
],
[
"Mahendru",
"Aroma",
""
],
[
"Maynez",
"Joshua",
""
],
[
"Misra",
"Vedant",
""
],
[
"Moussalem",
"Maysam",
""
],
[
"Nado",
"Zachary",
""
],
[
"Nham",
"John",
""
],
[
"Ni",
"Eric",
""
],
[
"Nystrom",
"Andrew",
""
],
[
"Parrish",
"Alicia",
""
],
[
"Pellat",
"Marie",
""
],
[
"Polacek",
"Martin",
""
],
[
"Polozov",
"Alex",
""
],
[
"Pope",
"Reiner",
""
],
[
"Qiao",
"Siyuan",
""
],
[
"Reif",
"Emily",
""
],
[
"Richter",
"Bryan",
""
],
[
"Riley",
"Parker",
""
],
[
"Ros",
"Alex Castro",
""
],
[
"Roy",
"Aurko",
""
],
[
"Saeta",
"Brennan",
""
],
[
"Samuel",
"Rajkumar",
""
],
[
"Shelby",
"Renee",
""
],
[
"Slone",
"Ambrose",
""
],
[
"Smilkov",
"Daniel",
""
],
[
"So",
"David R.",
""
],
[
"Sohn",
"Daniel",
""
],
[
"Tokumine",
"Simon",
""
],
[
"Valter",
"Dasha",
""
],
[
"Vasudevan",
"Vijay",
""
],
[
"Vodrahalli",
"Kiran",
""
],
[
"Wang",
"Xuezhi",
""
],
[
"Wang",
"Pidong",
""
],
[
"Wang",
"Zirui",
""
],
[
"Wang",
"Tao",
""
],
[
"Wieting",
"John",
""
],
[
"Wu",
"Yuhuai",
""
],
[
"Xu",
"Kelvin",
""
],
[
"Xu",
"Yunhan",
""
],
[
"Xue",
"Linting",
""
],
[
"Yin",
"Pengcheng",
""
],
[
"Yu",
"Jiahui",
""
],
[
"Zhang",
"Qiao",
""
],
[
"Zheng",
"Steven",
""
],
[
"Zheng",
"Ce",
""
],
[
"Zhou",
"Weikang",
""
],
[
"Zhou",
"Denny",
""
],
[
"Petrov",
"Slav",
""
],
[
"Wu",
"Yonghui",
""
]
] |
new_dataset
| 0.992598 |
2305.15021
|
Yao Mu Mark
|
Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun
Jin, Bin Wang, Jifeng Dai, Yu Qiao, Ping Luo
|
EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought
| null | null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Embodied AI is a crucial frontier in robotics, capable of planning and
executing action sequences for robots to accomplish long-horizon tasks in
physical environments. In this work, we introduce EmbodiedGPT, an end-to-end
multi-modal foundation model for embodied AI, empowering embodied agents with
multi-modal understanding and execution capabilities. To achieve this, we have
made the following efforts: (i) We craft a large-scale embodied planning
dataset, termed EgoCOT. The dataset consists of carefully selected videos from
the Ego4D dataset, along with corresponding high-quality language instructions.
Specifically, we generate a sequence of sub-goals with the "Chain of Thoughts"
mode for effective embodied planning. (ii) We introduce an efficient training
approach to EmbodiedGPT for high-quality plan generation, by adapting a 7B
large language model (LLM) to the EgoCOT dataset via prefix tuning. (iii) We
introduce a paradigm for extracting task-related features from LLM-generated
planning queries to form a closed loop between high-level planning and
low-level control. Extensive experiments show the effectiveness of EmbodiedGPT
on embodied tasks, including embodied planning, embodied control, visual
captioning, and visual question answering. Notably, EmbodiedGPT significantly
enhances the success rate of the embodied control task by extracting more
effective features. It has achieved a remarkable 1.6 times increase in success
rate on the Franka Kitchen benchmark and a 1.3 times increase on the Meta-World
benchmark, compared to the BLIP-2 baseline fine-tuned with the Ego4D dataset.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 11:04:30 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 23:46:22 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Mu",
"Yao",
""
],
[
"Zhang",
"Qinglong",
""
],
[
"Hu",
"Mengkang",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Ding",
"Mingyu",
""
],
[
"Jin",
"Jun",
""
],
[
"Wang",
"Bin",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.999753 |
2306.07580
|
Yujin Tang
|
Yujin Tang, Wenhao Yu, Jie Tan, Heiga Zen, Aleksandra Faust, Tatsuya
Harada
|
SayTap: Language to Quadrupedal Locomotion
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have demonstrated the potential to perform
high-level planning. Yet, it remains a challenge for LLMs to comprehend
low-level commands, such as joint angle targets or motor torques. This paper
proposes an approach to use foot contact patterns as an interface that bridges
human commands in natural language and a locomotion controller that outputs
these low-level commands. This results in an interactive system for quadrupedal
robots that allows the users to craft diverse locomotion behaviors flexibly. We
contribute an LLM prompt design, a reward function, and a method to expose the
controller to the feasible distribution of contact patterns. The result is a
controller capable of achieving diverse locomotion patterns that can be
transferred to real robot hardware. Compared with other design choices, the
proposed approach enjoys more than 50% success rate in predicting the correct
contact patterns and can solve 10 more tasks out of a total of 30 tasks. Our
project site is: https://saytap.github.io.
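As a minimal illustration of the contact-pattern interface described above, the following Python sketch encodes a gait as a binary matrix (one row per foot, one column per control step) and checks it against a toy feasibility rule; the foot ordering, pattern layout, and the is_feasible criterion are illustrative assumptions, not the paper's actual specification.

import numpy as np

# Hypothetical foot ordering: front-left, front-right, rear-left, rear-right.
# 1 = foot in contact with the ground, 0 = foot in swing phase.
trot = np.array([
    [1, 1, 0, 0, 1, 1, 0, 0],  # FL
    [0, 0, 1, 1, 0, 0, 1, 1],  # FR
    [0, 0, 1, 1, 0, 0, 1, 1],  # RL
    [1, 1, 0, 0, 1, 1, 0, 0],  # RR
])

def is_feasible(pattern: np.ndarray) -> bool:
    """Toy feasibility rule: at least two feet on the ground at every step."""
    return bool((pattern.sum(axis=0) >= 2).all())

print(is_feasible(trot))  # True for this diagonal-pair (trot-like) pattern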
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 07:09:11 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 08:53:23 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 06:59:51 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Tang",
"Yujin",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Tan",
"Jie",
""
],
[
"Zen",
"Heiga",
""
],
[
"Faust",
"Aleksandra",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
new_dataset
| 0.999378 |
2306.14882
|
Jules Drean
|
Jules Drean, Miguel Gomez-Garcia, Thomas Bourgeat, Srinivas Devadas
|
Citadel: Enclaves with Strong Microarchitectural Isolation and Secure
Shared Memory on a Speculative Out-of-Order Processor
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
We present Citadel, to our knowledge, the first enclave platform with strong
microarchitectural isolation to run realistic secure programs on a speculative
out-of-order multicore processor. First, we develop a new hardware mechanism to
enable secure shared memory while defending against transient execution attacks
by blocking speculative accesses to shared memory. Then, we develop an
efficient dynamic cache partitioning scheme, improving both enclaves' and
unprotected processes' performance. We conduct an in-depth security analysis
and a performance evaluation of our new mechanisms. Finally, we build the
hardware and software infrastructure required to run our secure enclaves. Our
multicore processor runs on an FPGA and boots untrusted Linux from which users
can securely launch and interact with enclaves. We open-source our end-to-end
hardware and software infrastructure, hoping to spark more research and bridge
the gap between conceptual proposals and FPGA prototypes.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 17:51:23 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 18:47:35 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Drean",
"Jules",
""
],
[
"Gomez-Garcia",
"Miguel",
""
],
[
"Bourgeat",
"Thomas",
""
],
[
"Devadas",
"Srinivas",
""
]
] |
new_dataset
| 0.997862 |
2306.15679
|
Sean Memery
|
Sean Memery, Osmar Cedron, Kartic Subr
|
Generating Parametric BRDFs from Natural Language Descriptions
| null | null | null | null |
cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artistic authoring of 3D environments is a laborious enterprise that also
requires skilled content creators. There have been impressive improvements in
using machine learning to address different aspects of generating 3D content,
such as generating meshes, arranging geometry, synthesizing textures, etc. In
this paper we develop a model to generate Bidirectional Reflectance
Distribution Functions (BRDFs) from descriptive textual prompts. BRDFs are
four-dimensional probability distributions that characterize the interaction of
light with surface materials. They are either represented parametrically, or by
tabulating the probability density associated with every pair of incident and
outgoing angles. The former lends itself to artistic editing while the latter
is used when measuring the appearance of real materials. Numerous works have
focused on hypothesizing BRDF models from images of materials. We learn a
mapping from textual descriptions of materials to parametric BRDFs. Our model
is first trained using a semi-supervised approach before being tuned via an
unsupervised scheme. Although our model is general, in this paper we
specifically generate parameters for MDL materials, conditioned on natural
language descriptions, within NVIDIA's Omniverse platform. This enables use
cases such as real-time text prompts to change materials of objects in 3D
environments such as "dull plastic" or "shiny iron". Since the output of our
model is a parametric BRDF, rather than an image of the material, it may be
used to render materials using any shape under arbitrarily specified viewing
and lighting conditions.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 15:35:19 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 12:07:40 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Memery",
"Sean",
""
],
[
"Cedron",
"Osmar",
""
],
[
"Subr",
"Kartic",
""
]
] |
new_dataset
| 0.950455 |
2307.16834
|
Hoang Viet Pham Mr
|
Hoang Viet Pham, Thinh Gia Tran, Chuong Dinh Le, An Dinh Le, Hien Bich
Vo
|
Benchmarking Jetson Edge Devices with an End-to-end Video-based Anomaly
Detection System
|
Accepted in Future of Information and Communication Conference (FICC)
2024
| null | null | null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Innovative enhancement in embedded system platforms, specifically hardware
accelerations, significantly influence the application of deep learning in
real-world scenarios. These innovations translate human labor efforts into
automated intelligent systems employed in various areas such as autonomous
driving, robotics, Internet-of-Things (IoT), and numerous other impactful
applications. NVIDIA's Jetson platform is one of the pioneers in offering
optimal performance regarding energy efficiency and throughput in the execution
of deep learning algorithms. Previously, most benchmarking analysis was based
on 2D images with a single deep learning model for each comparison result. In
this paper, we implement an end-to-end video-based crime-scene anomaly
detection system inputting from surveillance videos and the system is deployed
and completely operates on multiple Jetson edge devices (Nano, AGX Xavier, Orin
Nano). The comparison analysis includes the integration of Torch-TensorRT as a
software development kit from NVIDIA for model performance optimisation. The
system is built based on the PySlowfast open-source project from Facebook as
the coding template. The end-to-end pipeline comprises the camera video feed,
a data preprocessing pipeline, a feature extractor, and the anomaly detector.
We share our experience of deploying an AI-based system on various Jetson Edge
devices with Docker technology. As the anomaly detector, a weakly supervised
video-based deep learning model called Robust Temporal Feature Magnitude
Learning (RTFM) is applied in the system. The proposed system reaches 47.56
frames per second (FPS) inference speed on a Jetson edge device with only 3.11
GB of total RAM usage. We also identify a promising Jetson device on which the
AI system achieves 15% better performance than the previous generation of
Jetson devices while consuming 50% less power.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 17:16:57 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 03:51:50 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 22:42:53 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Pham",
"Hoang Viet",
""
],
[
"Tran",
"Thinh Gia",
""
],
[
"Le",
"Chuong Dinh",
""
],
[
"Le",
"An Dinh",
""
],
[
"Vo",
"Hien Bich",
""
]
] |
new_dataset
| 0.998971 |
2308.09768
|
Anuoluwapo Aremu
|
Anuoluwapo Aremu, Jesujoba O. Alabi, David Ifeoluwa Adelani
|
YORC: Yoruba Reading Comprehension dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we create YORC: a new multi-choice Yoruba Reading
Comprehension dataset that is based on Yoruba high-school reading comprehension
examination. We provide baseline results by performing cross-lingual transfer
using existing English RACE dataset based on a pre-trained encoder-only model.
Additionally, we provide results by prompting large language models (LLMs) like
GPT-4.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 18:46:47 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 07:31:14 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Aremu",
"Anuoluwapo",
""
],
[
"Alabi",
"Jesujoba O.",
""
],
[
"Adelani",
"David Ifeoluwa",
""
]
] |
new_dataset
| 0.999554 |
2308.12966
|
Shuai Bai
|
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng
Wang, Junyang Lin, Chang Zhou, Jingren Zhou
|
Qwen-VL: A Versatile Vision-Language Model for Understanding,
Localization, Text Reading, and Beyond
|
Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Qwen-VL series, a set of large-scale vision-language models
(LVLMs) designed to perceive and understand both text and images. Comprising
Qwen-VL and Qwen-VL-Chat, these models exhibit remarkable performance in tasks
like image captioning, question answering, visual localization, and flexible
interaction. The evaluation covers a wide range of tasks including zero-shot
captioning, visual or document visual question answering, and grounding. We
demonstrate that Qwen-VL outperforms existing LVLMs. We present their
architecture, training, capabilities, and performance, highlighting their
contributions to advancing multimodal artificial intelligence. Code, demo and
models are available at https://github.com/QwenLM/Qwen-VL.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:59:17 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 17:08:39 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Bai",
"Jinze",
""
],
[
"Bai",
"Shuai",
""
],
[
"Yang",
"Shusheng",
""
],
[
"Wang",
"Shijie",
""
],
[
"Tan",
"Sinan",
""
],
[
"Wang",
"Peng",
""
],
[
"Lin",
"Junyang",
""
],
[
"Zhou",
"Chang",
""
],
[
"Zhou",
"Jingren",
""
]
] |
new_dataset
| 0.959327 |
2309.05373
|
Georg Hager
|
Ayesha Afzal, Georg Hager, Gerhard Wellein
|
SPEChpc 2021 Benchmarks on Ice Lake and Sapphire Rapids Infiniband
Clusters: A Performance and Energy Case Study
|
9 pages, 6 figures; corrected links to system docs
| null | null | null |
cs.PF cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, fundamental performance, power, and energy characteristics of
the full SPEChpc 2021 benchmark suite are assessed on two different clusters
based on Intel Ice Lake and Sapphire Rapids CPUs using the MPI-only codes'
variants. We use memory bandwidth, data volume, and scalability metrics in
order to categorize the benchmarks and pinpoint relevant performance and
scalability bottlenecks on the node and cluster levels. Common patterns such as
memory bandwidth limitation, dominating communication and synchronization
overhead, MPI serialization, superlinear scaling, and alignment issues could be
identified, in isolation or in combination, showing that SPEChpc 2021 is
representative of many HPC workloads. Power dissipation and energy measurements
indicate that the modern Intel server CPUs have such a high idle power level
that race-to-idle is the paramount strategy for energy to solution and
energy-delay product minimization. On the chip level, only memory-bound code
shows a clear advantage of Sapphire Rapids compared to Ice Lake in terms of
energy to solution.
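The energy metrics referred to above can be made concrete with a small sketch; the power and runtime values below are made-up placeholders, not measurements from the paper.

def energy_metrics(avg_power_watts: float, runtime_s: float):
    """Return energy-to-solution (J) and energy-delay product (J*s)."""
    energy = avg_power_watts * runtime_s
    edp = energy * runtime_s
    return energy, edp

# Illustrative comparison: a faster run at higher power vs. a slower run at
# lower power. With a high idle-power baseline, race-to-idle (finish fast,
# then idle) tends to win on both metrics.
fast = energy_metrics(avg_power_watts=350.0, runtime_s=100.0)
slow = energy_metrics(avg_power_watts=300.0, runtime_s=130.0)
print("fast run:", fast, "slow run:", slow)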
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 10:48:58 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 13:56:34 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 07:18:56 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Afzal",
"Ayesha",
""
],
[
"Hager",
"Georg",
""
],
[
"Wellein",
"Gerhard",
""
]
] |
new_dataset
| 0.970404 |
2309.05680
|
Kausik Lakkaraju
|
Biplav Srivastava, Kausik Lakkaraju, Tarmo Koppel, Vignesh Narayanan,
Ashish Kundu, Sachindra Joshi
|
Evaluating Chatbots to Promote Users' Trust -- Practices and Open
Problems
| null | null | null | null |
cs.HC cs.AI cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Chatbots, the common moniker for collaborative assistants, are Artificial
Intelligence (AI) software that enables people to naturally interact with them
to get tasks done. Although chatbots have been studied since the dawn of AI,
they have particularly caught the imagination of the public and businesses
since the launch of easy-to-use and general-purpose Large Language Model-based
chatbots like ChatGPT. As businesses look towards chatbots as a potential
technology to engage users, who may be end customers, suppliers, or even their
own employees, proper testing of chatbots is important to address and mitigate
issues of trust related to service or product performance, user satisfaction
and long-term unintended consequences for society. This paper reviews current
practices for chatbot testing, identifies gaps as open problems in pursuit of
user trust, and outlines a path forward.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 22:40:30 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 01:38:49 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Srivastava",
"Biplav",
""
],
[
"Lakkaraju",
"Kausik",
""
],
[
"Koppel",
"Tarmo",
""
],
[
"Narayanan",
"Vignesh",
""
],
[
"Kundu",
"Ashish",
""
],
[
"Joshi",
"Sachindra",
""
]
] |
new_dataset
| 0.991744 |
2309.05978
|
Chengyan Ma
|
Chengyan Ma, Ning Xi, Di Lu, Yebo Feng, Jianfeng Ma
|
CToMP: A Cycle-task-oriented Memory Protection Scheme for Unmanned
Systems
|
This paper has been accepted by SCIENCE CHINA Information Sciences
| null |
10.1007/s11432-023-3865-0
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Memory corruption attacks (MCAs) refer to malicious behaviors of system
intruders that modify the contents of a memory location to disrupt the normal
operation of computing systems, causing leakage of sensitive data or
perturbations to ongoing processes. Unlike general-purpose systems, unmanned
systems cannot deploy complete security protection schemes, due to their
limitations in size, cost and performance. MCAs in unmanned systems are
particularly difficult to defend against. Furthermore, MCAs have diverse and
unpredictable attack interfaces in unmanned systems, severely impacting digital
and physical sectors. In this paper, we first generalize, model and taxonomize
the MCAs currently found in unmanned systems, laying the foundation for designing a
portable and general defense approach. According to different attack
mechanisms, we found that MCAs are mainly categorized into two
types--return2libc and return2shellcode. To tackle return2libc attacks, we
model the erratic operation of unmanned systems with cycles and then propose a
cycle-task-oriented memory protection (CToMP) approach to protect control flows
from tampering. To defend against return2shellcode attacks, we introduce a
secure process stack with a randomized memory address by leveraging the memory
pool to prevent Shellcode from being executed. Moreover, we discuss the
mechanism by which CToMP resists the ROP attack, a novel variant of return2libc
attacks. Finally, we implement CToMP on CUAV V5+ with Ardupilot and Crazyflie.
The evaluation and security analysis results demonstrate that the proposed
approach CToMP is resilient to various MCAs in unmanned systems with low
footprints and system overhead.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 06:06:59 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Ma",
"Chengyan",
""
],
[
"Xi",
"Ning",
""
],
[
"Lu",
"Di",
""
],
[
"Feng",
"Yebo",
""
],
[
"Ma",
"Jianfeng",
""
]
] |
new_dataset
| 0.998619 |
2309.07139
|
Milad Pooladsanj
|
Milad Pooladsanj and Ketan Savla
|
VertiSync: A Traffic Management Policy with Maximum Throughput for
On-Demand Urban Air Mobility Networks
|
9 pages, 7 figures
| null | null | null |
cs.NI cs.MA cs.RO cs.SY eess.SY math.OC math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Urban Air Mobility (UAM) offers a solution to current traffic congestion by
providing on-demand air mobility in urban areas. Effective traffic management
is crucial for efficient operation of UAM systems, especially for high-demand
scenarios. In this paper, we present VertiSync, a centralized traffic
management policy for on-demand UAM networks. VertiSync schedules the aircraft
for either servicing trip requests or rebalancing in the network subject to
aircraft safety margins and separation requirements during takeoff and landing.
We characterize the system-level throughput of VertiSync, which determines the
demand threshold at which travel times transition from being stabilized to
being increasing over time. We show that the proposed policy is able to
maximize the throughput for sufficiently large fleet sizes. We demonstrate the
performance of VertiSync through a case study for the city of Los Angeles. We
show that VertiSync significantly reduces travel times compared to a first-come
first-serve scheduling policy.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 16:19:27 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Pooladsanj",
"Milad",
""
],
[
"Savla",
"Ketan",
""
]
] |
new_dataset
| 0.999829 |
2309.07230
|
Sarthak Chakraborty
|
Sarthak Chakraborty, Shubham Agarwal, Shaddy Garg, Abhimanyu Sethia,
Udit Narayan Pandey, Videh Aggarwal, Shiv Saini
|
ESRO: Experience Assisted Service Reliability against Outages
|
Accepted to 38th IEEE/ACM International Conference on Automated
Software Engineering (ASE 2023)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Modern cloud services are prone to failures due to their complex
architecture, making diagnosis a critical process. Site Reliability Engineers
(SREs) spend hours leveraging multiple sources of data, including the alerts,
error logs, and domain expertise through past experiences to locate the root
cause(s). These experiences are documented as natural language text in outage
reports for previous outages. However, utilizing the raw yet rich
semi-structured information in the reports systematically is time-consuming.
Structured information, on the other hand, such as alerts that are often used
during fault diagnosis, is voluminous and requires expert knowledge to discern.
Several strategies have been proposed to use each source of data separately for
root cause analysis. In this work, we build a diagnostic service called ESRO
that recommends root causes and remediation for failures by utilizing
structured as well as semi-structured sources of data systematically. ESRO
constructs a causal graph using alerts and a knowledge graph using outage
reports, and merges them in a novel way to form a unified graph during
training. A retrieval-based mechanism is then used to search the unified graph
and rank the likely root causes and remediation techniques based on the alerts
fired during an outage at inference time. Not only the individual alerts, but
their respective importance in predicting an outage group is taken into account
during recommendation. We evaluated our model on several cloud service outages
of a large SaaS enterprise over the course of ~2 years, and obtained an average
improvement of 27% in rouge scores after comparing the likely root causes
against the ground truth over state-of-the-art baselines. We further establish
the effectiveness of ESRO through qualitative analysis on multiple real outage
examples.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 18:04:52 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Chakraborty",
"Sarthak",
""
],
[
"Agarwal",
"Shubham",
""
],
[
"Garg",
"Shaddy",
""
],
[
"Sethia",
"Abhimanyu",
""
],
[
"Pandey",
"Udit Narayan",
""
],
[
"Aggarwal",
"Videh",
""
],
[
"Saini",
"Shiv",
""
]
] |
new_dataset
| 0.998562 |
2309.07235
|
Xingfu Wu
|
Xingfu Wu, Praveen Paramasivam, Valerie Taylor
|
Autotuning Apache TVM-based Scientific Applications Using Bayesian
Optimization
| null | null | null | null |
cs.LG cs.AI cs.NA math.NA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Apache TVM (Tensor Virtual Machine), an open source machine learning compiler
framework designed to optimize computations across various hardware platforms,
provides an opportunity to improve the performance of dense matrix
factorizations such as LU (Lower Upper) decomposition and Cholesky
decomposition on GPUs and AI (Artificial Intelligence) accelerators. In this
paper, we propose a new TVM autotuning framework using Bayesian Optimization
and use the TVM tensor expression language to implement linear algebra kernels
such as LU, Cholesky, and 3mm. We use these scientific computation kernels to
evaluate the effectiveness of our methods on a GPU cluster, called Swing, at
Argonne National Laboratory. We compare the proposed autotuning framework with
the TVM autotuning framework AutoTVM with four tuners and find that our
framework outperforms AutoTVM in most cases.
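As a rough sketch of a Bayesian-optimization tuning loop of the kind described above (not the authors' actual framework or its TVM integration), one could wrap a kernel-timing objective in scikit-optimize's gp_minimize; here the objective is a synthetic stand-in for a measured kernel runtime, and the parameter names are illustrative assumptions.

from skopt import gp_minimize
from skopt.space import Integer

def measured_runtime(params):
    """Stand-in objective: pretend runtime depends on two tile sizes."""
    tile_i, tile_j = params
    return (tile_i - 48) ** 2 + (tile_j - 16) ** 2 + 1.0  # lower is better

search_space = [Integer(4, 128, name="tile_i"), Integer(4, 64, name="tile_j")]
result = gp_minimize(measured_runtime, search_space, n_calls=25, random_state=0)
print("best tile sizes:", result.x, "estimated runtime:", result.fun)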
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 18:15:58 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Wu",
"Xingfu",
""
],
[
"Paramasivam",
"Praveen",
""
],
[
"Taylor",
"Valerie",
""
]
] |
new_dataset
| 0.970013 |
2309.07268
|
Derek Gloudemans
|
Derek Gloudemans, Gergely Zach\'ar, Yanbing Wang, Junyi Ji, Matt Nice,
Matt Bunting, William Barbour, Jonathan Sprinkle, Benedetto Piccoli, Maria
Laura Delle Monache, Alexandre Bayen, Benjamin Seibold, Daniel B. Work
|
So you think you can track?
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This work introduces a multi-camera tracking dataset consisting of 234 hours
of video data recorded concurrently from 234 overlapping HD cameras covering a
4.2 mile stretch of 8-10 lane interstate highway near Nashville, TN. The video
is recorded during a period of high traffic density with 500+ objects typically
visible within the scene and typical object longevities of 3-15 minutes. GPS
trajectories from 270 vehicle passes through the scene are manually corrected
in the video data to provide a set of ground-truth trajectories for
recall-oriented tracking metrics, and object detections are provided for each
camera in the scene (159 million total before cross-camera fusion). Initial
benchmarking of tracking-by-detection algorithms is performed against the GPS
trajectories, and a best HOTA of only 9.5% is obtained (best recall 75.9% at
IOU 0.1, 47.9 average IDs per ground truth object), indicating the benchmarked
trackers do not perform sufficiently well at the long temporal and spatial
durations required for traffic scene understanding.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 19:18:18 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Gloudemans",
"Derek",
""
],
[
"Zachár",
"Gergely",
""
],
[
"Wang",
"Yanbing",
""
],
[
"Ji",
"Junyi",
""
],
[
"Nice",
"Matt",
""
],
[
"Bunting",
"Matt",
""
],
[
"Barbour",
"William",
""
],
[
"Sprinkle",
"Jonathan",
""
],
[
"Piccoli",
"Benedetto",
""
],
[
"Monache",
"Maria Laura Delle",
""
],
[
"Bayen",
"Alexandre",
""
],
[
"Seibold",
"Benjamin",
""
],
[
"Work",
"Daniel B.",
""
]
] |
new_dataset
| 0.99944 |
2309.07270
|
Guanghao Wei
|
Minhao Li, Siyu Wang, Guanghao Wei
|
GPU Scheduler for De Novo Genome Assembly with Multiple MPI Processes
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
$\textit{De Novo}$ Genome assembly is one of the most important tasks in
computational biology. ELBA is the state-of-the-art distributed-memory parallel
algorithm for overlap detection and layout simplification steps of $\textit{De
Novo}$ genome assembly, but it suffers from a performance bottleneck in
pairwise alignment.
In this work, we introduce 3 GPU schedulers for ELBA to accommodate multiple
MPI processes and multiple GPUs. The GPU schedulers enable multiple MPI
processes to perform computation on GPUs in a round-robin fashion. Both strong
and weak scaling experiments show that all three schedulers significantly
improve the performance of the baseline, while there is a trade-off between
parallelism and GPU scheduler overhead. In the best-performing
implementation, the one-to-one scheduler achieves $\sim$7-8$\times$ speed-up
using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.
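A minimal sketch of the round-robin idea described above, as an assumption about the simplest possible scheduler rather than ELBA's actual implementation: each MPI rank is mapped to a GPU by taking its rank modulo the number of visible GPUs.

def assign_gpu(mpi_rank: int, num_gpus: int) -> int:
    """Round-robin mapping of an MPI rank to a GPU index."""
    return mpi_rank % num_gpus

# Example: 6 MPI processes sharing 2 GPUs.
for rank in range(6):
    print(f"rank {rank} -> GPU {assign_gpu(rank, num_gpus=2)}")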
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 19:20:46 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Li",
"Minhao",
""
],
[
"Wang",
"Siyu",
""
],
[
"Wei",
"Guanghao",
""
]
] |
new_dataset
| 0.994695 |
2309.07302
|
EPTCS
|
Marjan Sirjani, Ehsan Khamespanah
|
Timed Actors and Their Formal Verification
|
In Proceedings EXPRESS/SOS2023, arXiv:2309.05788
|
EPTCS 387, 2023, pp. 1-7
|
10.4204/EPTCS.387.1
| null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we review the actor-based language, Timed Rebeca, with a focus
on its formal semantics and formal verification techniques. Timed Rebeca can be
used to model systems consisting of encapsulated components which communicate
by asynchronous message passing. Messages are put in the message buffer of the
receiver actor and can be seen as events. Components react to these
messages/events and execute the corresponding message/event handler. Real-time
features, like computation delay, network delay and periodic behavior, can be
modeled in the language. We explain how both Floating-Time Transition System
(FTTS) and common Timed Transition System (TTS) can be used as the semantics of
such models and the basis for model checking. We use FTTS when we are
interested in event-based properties, and it helps in state space reduction.
For checking properties based on the values of variables at a certain point in
time, we use the TTS semantics. The model checking toolset supports
schedulability analysis, deadlock and queue-overflow check, and assertion based
verification of Timed Rebeca models. TCTL model checking based on TTS is also
possible but is not integrated in the tool.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 20:50:11 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Sirjani",
"Marjan",
""
],
[
"Khamespanah",
"Ehsan",
""
]
] |
new_dataset
| 0.952632 |
2309.07308
|
EPTCS
|
Jos C. M. Baeten, Bas Luttik
|
Parallel Pushdown Automata and Commutative Context-Free Grammars in
Bisimulation Semantics (Extended Abstract)
|
In Proceedings EXPRESS/SOS2023, arXiv:2309.05788. arXiv admin note:
text overlap with arXiv:2203.01713
|
EPTCS 387, 2023, pp. 114-131
|
10.4204/EPTCS.387.9
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
A classical theorem states that the set of languages given by a pushdown
automaton coincides with the set of languages given by a context-free grammar.
In previous work, we proved the pendant of this theorem in a setting with
interaction: the set of processes given by a pushdown automaton coincides with
the set of processes given by a finite guarded recursive specification over a
process algebra with actions, choice, sequencing and guarded recursion, if and
only if we add sequential value passing. In this paper, we look what happens if
we consider parallel pushdown automata instead of pushdown automata, and a
process algebra with parallelism instead of sequencing.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 20:52:12 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Baeten",
"Jos C. M.",
""
],
[
"Luttik",
"Bas",
""
]
] |
new_dataset
| 0.998535 |
2309.07314
|
Haohe Liu
|
Haohe Liu, Ke Chen, Qiao Tian, Wenwu Wang, Mark D. Plumbley
|
AudioSR: Versatile Audio Super-resolution at Scale
|
Under review. Demo and code: https://audioldm.github.io/audiosr
| null | null | null |
cs.SD cs.AI cs.MM eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio super-resolution is a fundamental task that predicts high-frequency
components for low-resolution audio, enhancing audio quality in digital
applications. Previous methods have limitations such as the limited scope of
audio types (e.g., music, speech) and specific bandwidth settings they can
handle (e.g., 4kHz to 8kHz). In this paper, we introduce a diffusion-based
generative model, AudioSR, that is capable of performing robust audio
super-resolution on versatile audio types, including sound effects, music, and
speech. Specifically, AudioSR can upsample any input audio signal within the
bandwidth range of 2kHz to 16kHz to a high-resolution audio signal at 24kHz
bandwidth with a sampling rate of 48kHz. Extensive objective evaluation on
various audio super-resolution benchmarks demonstrates the strong result
achieved by the proposed model. In addition, our subjective evaluation shows
that AudioSR can act as a plug-and-play module to enhance the generation
quality of a wide range of audio generative models, including AudioLDM,
Fastspeech2, and MusicGen. Our code and demo are available at
https://audioldm.github.io/audiosr.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 21:00:09 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Liu",
"Haohe",
""
],
[
"Chen",
"Ke",
""
],
[
"Tian",
"Qiao",
""
],
[
"Wang",
"Wenwu",
""
],
[
"Plumbley",
"Mark D.",
""
]
] |
new_dataset
| 0.987724 |
2309.07388
|
Mitchell Kiely
|
Mitchell Kiely, David Bowman, Maxwell Standen, Christopher Moir
|
On Autonomous Agents in a Cyber Defence Environment
|
Presented at the 2nd International Workshop on Adaptive Cyber Defence,
2023
| null | null |
ACD/2023/104
|
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous Cyber Defence is required to respond to high-tempo cyber-attacks.
To facilitate the research in this challenging area, we explore the utility of
the autonomous cyber operation environments presented as part of the Cyber
Autonomy Gym for Experimentation (CAGE) Challenges, with a specific focus on
CAGE Challenge 2. CAGE Challenge 2 required a defensive Blue agent to defend a
network from an attacking Red agent. We provide a detailed description of the
this challenge and describe the approaches taken by challenge participants.
From the submitted agents, we identify four classes of algorithms, namely,
Single- Agent Deep Reinforcement Learning (DRL), Hierarchical DRL, Ensembles,
and Non-DRL approaches. Of these classes, we found that the hierarchical DRL
approach was the most capable of learning an effective cyber defensive
strategy. Our analysis of the agent policies identified that different
algorithms within the same class produced diverse strategies and that the
strategy used by the defensive Blue agent varied depending on the strategy used
by the offensive Red agent. We conclude that DRL algorithms are a suitable
candidate for autonomous cyber defence applications.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 02:09:36 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Kiely",
"Mitchell",
""
],
[
"Bowman",
"David",
""
],
[
"Standen",
"Maxwell",
""
],
[
"Moir",
"Christopher",
""
]
] |
new_dataset
| 0.991751 |
2309.07405
|
Zhihao Du
|
Zhihao Du, Shiliang Zhang, Kai Hu, Siqi Zheng
|
FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit
for Neural Speech Codec
|
5 pages, 3 figures, submitted to ICASSP 2024
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents FunCodec, a fundamental neural speech codec toolkit,
which is an extension of the open-source speech processing toolkit FunASR.
FunCodec provides reproducible training recipes and inference scripts for the
latest neural speech codec models, such as SoundStream and Encodec. Thanks to
the unified design with FunASR, FunCodec can be easily integrated into
downstream tasks, such as speech recognition. Along with FunCodec, pre-trained
models are also provided, which can be used for academic or generalized
purposes. Based on the toolkit, we further propose the frequency-domain codec
models, FreqCodec, which can achieve comparable speech quality with much lower
computation and parameter complexity. Experimental results show that, under the
same compression ratio, FunCodec can achieve better reconstruction quality
compared with other toolkits and released models. We also demonstrate that the
pre-trained models are suitable for downstream tasks, including automatic
speech recognition and personalized text-to-speech synthesis. This toolkit is
publicly available at https://github.com/alibaba-damo-academy/FunCodec.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 03:18:24 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Du",
"Zhihao",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Hu",
"Kai",
""
],
[
"Zheng",
"Siqi",
""
]
] |
new_dataset
| 0.997632 |
2309.07445
|
David Adelani
|
David Ifeoluwa Adelani, Hannah Liu, Xiaoyu Shen, Nikita Vassilyev,
Jesujoba O. Alabi, Yanke Mao, Haonan Gao, Annie En-Shiun Lee
|
SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic
Classification in 200+ Languages and Dialects
|
under submission
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the progress we have recorded in the last few years in multilingual
natural language processing, evaluation is typically limited to a small set of
languages with available datasets which excludes a large number of low-resource
languages. In this paper, we created SIB-200 -- a large-scale open-sourced
benchmark dataset for topic classification in 200 languages and dialects to
address the lack of evaluation datasets for Natural Language Understanding
(NLU). For many of the languages covered in SIB-200, this is the first publicly
available evaluation dataset for NLU. The dataset is based on Flores-200
machine translation corpus. We annotated the English portion of the dataset and
extended the sentence-level annotation to the remaining 203 languages covered
in the corpus. Despite the simplicity of this task, our evaluation in
full-supervised setting, cross-lingual transfer setting and prompting of large
language model setting show that there is still a large gap between the
performance of high-resource and low-resource languages when multilingual
evaluation is scaled to numerous world languages. We found that languages
unseen during the pre-training of multilingual language models,
under-represented language families (like Nilotic and Atlantic-Congo), and
languages from the regions of Africa, Americas, Oceania and South East Asia,
often have the lowest performance on our topic classification dataset. We hope
our dataset will encourage a more inclusive evaluation of multilingual language
models on a more diverse set of languages. https://github.com/dadelani/sib-200
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 05:56:49 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Liu",
"Hannah",
""
],
[
"Shen",
"Xiaoyu",
""
],
[
"Vassilyev",
"Nikita",
""
],
[
"Alabi",
"Jesujoba O.",
""
],
[
"Mao",
"Yanke",
""
],
[
"Gao",
"Haonan",
""
],
[
"Lee",
"Annie En-Shiun",
""
]
] |
new_dataset
| 0.999852 |
2309.07473
|
Chuanruo Ning
|
Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong
|
Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories
of Articulated Objects
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Articulated object manipulation is a fundamental yet challenging task in
robotics. Due to significant geometric and semantic variations across object
categories, previous manipulation models struggle to generalize to novel
categories. Few-shot learning is a promising solution for alleviating this
issue by allowing robots to perform a few interactions with unseen objects.
However, extant approaches often necessitate costly and inefficient test-time
interactions with each unseen instance. Recognizing this limitation, we observe
that despite their distinct shapes, different categories often share similar
local geometries essential for manipulation, such as pullable handles and
graspable edges - a factor typically underutilized in previous few-shot
learning works. To harness this commonality, we introduce 'Where2Explore', an
affordance learning framework that effectively explores novel categories with
minimal interactions on a limited number of instances. Our framework explicitly
estimates the geometric similarity across different categories, identifying
local areas that differ from shapes in the training categories for efficient
exploration while concurrently transferring affordance knowledge to similar
parts of the objects. Extensive experiments in simulated and real-world
environments demonstrate our framework's capacity for efficient few-shot
exploration and generalization.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 07:11:58 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Ning",
"Chuanruo",
""
],
[
"Wu",
"Ruihai",
""
],
[
"Lu",
"Haoran",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Dong",
"Hao",
""
]
] |
new_dataset
| 0.979751 |
2309.07482
|
Marianna Milano
|
Marianna Milano, Pietro Cinaglia, Pietro Hiram Guzzi, Mario Cannataro
|
MuLaN: a MultiLayer Networks Alignment Algorithm
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
A Multilayer Network (MN) is a system consisting of several topological
levels (i.e., layers) representing the interactions between the system's
objects and the related interdependency. Therefore, it may be represented as a
set of layers that can be assimilated to a set of networks of its own objects,
by means of inter-layer edges (or inter-edges) linking the nodes of different
layers; for instance, a biological MN may allow modeling of inter and intra
interactions among diseases, genes, and drugs, only using its own structure.
The analysis of MNs may reveal hidden knowledge, as demonstrated by several
algorithms for the analysis. Recently, there has been growing interest in comparing
two MNs by revealing local regions of similarity, as a counterpart of Network
Alignment algorithms (NA) for simple networks. However, classical algorithms
for NA such as Local NA (LNA) cannot be applied on multilayer networks, since
they are not able to deal with inter-layer edges. Therefore, there is the need
for the introduction of novel algorithms. In this paper, we present MuLaN, an
algorithm for the local alignment of multilayer networks. We first show as
proof of concept the performances of MuLaN on a set of synthetic multilayer
networks. Then, we used as a case study a real multilayer network in the
biomedical domain. Our results show that MuLaN is able to build high-quality
alignments and can extract knowledge about the aligned multilayer networks.
MuLaN is available at https://github.com/pietrocinaglia/mulan.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 07:43:40 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Milano",
"Marianna",
""
],
[
"Cinaglia",
"Pietro",
""
],
[
"Guzzi",
"Pietro Hiram",
""
],
[
"Cannataro",
"Mario",
""
]
] |
new_dataset
| 0.999174 |
2309.07509
|
Zipeng Qi
|
Zipeng Qi, Xulong Zhang, Ning Cheng, Jing Xiao, Jianzong Wang
|
DiffTalker: Co-driven audio-image diffusion for talking faces via
intermediate landmarks
|
Submitted to ICASSP 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating realistic talking faces is a complex and widely discussed task
with numerous applications. In this paper, we present DiffTalker, a novel model
designed to generate lifelike talking faces through audio and landmark
co-driving. DiffTalker addresses the challenges associated with directly
applying diffusion models to audio control, which are traditionally trained on
text-image pairs. DiffTalker consists of two agent networks: a
transformer-based landmarks completion network for geometric accuracy and a
diffusion-based face generation network for texture details. Landmarks play a
pivotal role in establishing a seamless connection between the audio and image
domains, facilitating the incorporation of knowledge from pre-trained diffusion
models. This innovative approach efficiently produces articulate-speaking
faces. Experimental results showcase DiffTalker's superior performance in
producing clear and geometrically accurate talking faces, all without the need
for additional alignment between audio and image features.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 08:22:34 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Qi",
"Zipeng",
""
],
[
"Zhang",
"Xulong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
],
[
"Wang",
"Jianzong",
""
]
] |
new_dataset
| 0.997888 |
2309.07515
|
Md. Fahad Hossain
|
Md. Fahad Hossain
|
Dhan-Shomadhan: A Dataset of Rice Leaf Disease Classification for
Bangladeshi Local Rice
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This dataset covers almost all of the harmful rice diseases found in
Bangladesh. It consists of 1106 images of five harmful diseases, namely Brown
Spot, Leaf Scald, Rice Blast, Rice Tungro, and Sheath Blight, in two different
background variations: field background pictures and white background pictures.
The two background variations help models perform more accurately, so that
users can apply the data in the field as well as against a white background for
decision making. The data was collected from rice fields of the Dhaka Division.
This dataset can be used for rice leaf disease classification and disease
detection with Computer Vision and Pattern Recognition methods for different
rice leaf diseases.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 08:32:05 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Hossain",
"Md. Fahad",
""
]
] |
new_dataset
| 0.999742 |
2309.07525
|
Yongyi Zang
|
Yongyi Zang, You Zhang, Mojtaba Heydari, Zhiyao Duan
|
SingFake: Singing Voice Deepfake Detection
|
Submitted to ICASSP 2024
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of singing voice synthesis presents critical challenges to artists
and industry stakeholders over unauthorized voice usage. Unlike synthesized
speech, synthesized singing voices are typically released in songs containing
strong background music that may hide synthesis artifacts. Additionally,
singing voices present different acoustic and linguistic characteristics from
speech utterances. These unique properties make singing voice deepfake
detection a relevant but significantly different problem from synthetic speech
detection. In this work, we propose the singing voice deepfake detection task.
We first present SingFake, the first curated in-the-wild dataset consisting of
28.93 hours of bonafide and 29.40 hours of deepfake song clips in five
languages from 40 singers. We provide a train/val/test split where the test
sets include various scenarios. We then use SingFake to evaluate four
state-of-the-art speech countermeasure systems trained on speech utterances. We
find these systems lag significantly behind their performance on speech test
data. When trained on SingFake, either using separated vocal tracks or song
mixtures, these systems show substantial improvement. However, our evaluations
also identify challenges associated with unseen singers, communication codecs,
languages, and musical contexts, calling for dedicated research into singing
voice deepfake detection. The SingFake dataset and related resources are
available online.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 08:49:05 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Zang",
"Yongyi",
""
],
[
"Zhang",
"You",
""
],
[
"Heydari",
"Mojtaba",
""
],
[
"Duan",
"Zhiyao",
""
]
] |
new_dataset
| 0.999778 |
2309.07544
|
Mingjie Liu
|
Mingjie Liu, Nathaniel Pinckney, Brucek Khailany and Haoxing Ren
|
VerilogEval: Evaluating Large Language Models for Verilog Code
Generation
|
ICCAD 2023 Invited Paper
| null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing popularity of large language models (LLMs) has paved the way
for their application in diverse domains. This paper proposes a benchmarking
framework tailored specifically for evaluating LLM performance in the context
of Verilog code generation for hardware design and verification. We present a
comprehensive evaluation dataset consisting of 156 problems from the Verilog
instructional website HDLBits. The evaluation set consists of a diverse set of
Verilog code generation tasks, ranging from simple combinational circuits to
complex finite state machines. The Verilog code completions can be
automatically tested for functional correctness by comparing the transient
simulation outputs of the generated design with a golden solution. We also
demonstrate that the Verilog code generation capability of pretrained language
models could be improved with supervised fine-tuning by bootstrapping with LLM
generated synthetic problem-code pairs.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 09:15:34 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Liu",
"Mingjie",
""
],
[
"Pinckney",
"Nathaniel",
""
],
[
"Khailany",
"Brucek",
""
],
[
"Ren",
"Haoxing",
""
]
] |
new_dataset
| 0.999681 |
2309.07565
|
Xuanhao Huang
|
Xuanhao Huang, Chao-Bo Yan
|
Dubins Curve Based Continuous-Curvature Trajectory Planning for
Autonomous Mobile Robots
|
12 pages, 25 figures
| null | null | null |
cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous mobile robots (AMRs) are widely used in factories to replace manual
labor, reduce costs, and improve efficiency. However, it is often difficult for
logistics robots to plan the optimal trajectory, and unreasonable trajectory
planning can lead to low transport efficiency and high energy consumption. In
this paper, we propose a method to directly calculate the optimal trajectory
over short distances on the basis of the Dubins set, which completes the
calculation of the Dubins path. Additionally, as an improvement of the Dubins
path, we smooth it with clothoids so that the curvature varies linearly. The
AMR can adjust its steering wheels while following this trajectory. The
experiments show that the Dubins path can be calculated quickly and smoothed
well.
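The linearly varying curvature of a clothoid can be illustrated with a short numerical sketch; this is a generic clothoid integration, assumed for illustration only, and not the smoothing algorithm proposed in the paper.

import math

def clothoid_points(kappa0, kappa1, length, steps=100):
    """Integrate a path whose curvature varies linearly from kappa0 to kappa1."""
    x, y, heading = 0.0, 0.0, 0.0
    ds = length / steps
    points = [(x, y)]
    for i in range(steps):
        s = (i + 0.5) * ds
        kappa = kappa0 + (kappa1 - kappa0) * s / length  # linear curvature
        heading += kappa * ds
        x += math.cos(heading) * ds
        y += math.sin(heading) * ds
        points.append((x, y))
    return points

# Example: ease from a straight segment (kappa = 0) into a 2 m radius arc.
path = clothoid_points(kappa0=0.0, kappa1=0.5, length=3.0)
print(path[-1])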
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 09:49:51 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Huang",
"Xuanhao",
""
],
[
"Yan",
"Chao-Bo",
""
]
] |
new_dataset
| 0.998018 |
2309.07574
|
Faegheh Hasibi
|
Chris Kamphuis, Aileen Lin, Siwen Yang, Jimmy Lin, Arjen P. de Vries,
Faegheh Hasibi
|
MMEAD: MS MARCO Entity Annotations and Disambiguations
| null | null |
10.1145/3539618.3591887
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
MMEAD, or MS MARCO Entity Annotations and Disambiguations, is a resource for
entity links for the MS MARCO datasets. We specify a format to store and share
links for both document and passage collections of MS MARCO. Following this
specification, we release entity links to Wikipedia for documents and passages
in both MS MARCO collections (v1 and v2). Entity links have been produced by
the REL and BLINK systems. MMEAD is an easy-to-install Python package, allowing
users to load the link data and entity embeddings effortlessly. Using MMEAD
takes only a few lines of code. Finally, we show how MMEAD can be used for IR
research that uses entity information. We show how to improve recall@1000 and
MRR@10 on more complex queries on the MS MARCO v1 passage dataset by using this
resource. We also demonstrate how entity expansions can be used for interactive
search applications.
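The retrieval metrics named above can be computed with a small generic sketch; this is a plain-Python illustration with made-up document ids, not the MMEAD package API.

def mrr_at_10(ranked_ids, relevant_ids):
    """Reciprocal rank of the first relevant document within the top 10."""
    for i, doc_id in enumerate(ranked_ids[:10], start=1):
        if doc_id in relevant_ids:
            return 1.0 / i
    return 0.0

def recall_at_1000(ranked_ids, relevant_ids):
    """Fraction of relevant documents retrieved in the top 1000."""
    retrieved = set(ranked_ids[:1000]) & set(relevant_ids)
    return len(retrieved) / max(len(relevant_ids), 1)

# Toy example with made-up document ids.
ranking = ["d3", "d7", "d1", "d9"]
print(mrr_at_10(ranking, {"d1"}), recall_at_1000(ranking, {"d1", "d42"}))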
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 10:09:11 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Kamphuis",
"Chris",
""
],
[
"Lin",
"Aileen",
""
],
[
"Yang",
"Siwen",
""
],
[
"Lin",
"Jimmy",
""
],
[
"de Vries",
"Arjen P.",
""
],
[
"Hasibi",
"Faegheh",
""
]
] |
new_dataset
| 0.994407 |
2309.07597
|
Zheng Liu
|
Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighof
|
C-Pack: Packaged Resources To Advance General Chinese Embedding
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce C-Pack, a package of resources that significantly advance the
field of general Chinese embeddings. C-Pack includes three critical resources.
1) C-MTEB is a comprehensive benchmark for Chinese text embeddings covering 6
tasks and 35 datasets. 2) C-MTP is a massive text embedding dataset curated
from labeled and unlabeled Chinese corpora for training embedding models. 3)
C-TEM is a family of embedding models covering multiple sizes. Our models
outperform all prior Chinese text embeddings on C-MTEB by up to +10% upon the
time of the release. We also integrate and optimize the entire suite of
training methods for C-TEM. Along with our resources on general Chinese
embedding, we release our data and models for English text embeddings. The
English models achieve state-of-the-art performance on MTEB benchmark;
meanwhile, our released English data is 2 times larger than the Chinese data.
All these resources are made publicly available at
https://github.com/FlagOpen/FlagEmbedding.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 10:57:50 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Xiao",
"Shitao",
""
],
[
"Liu",
"Zheng",
""
],
[
"Zhang",
"Peitian",
""
],
[
"Muennighof",
"Niklas",
""
]
] |
new_dataset
| 0.998122 |
2309.07615
|
Thomas Pellegrini
|
Mat\'eo Cousin, \'Etienne Labb\'e, Thomas Pellegrini
|
Multilingual Audio Captioning using machine translated data
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated Audio Captioning (AAC) systems attempt to generate a natural
language sentence, a caption, that describes the content of an audio recording,
in terms of sound events. Existing datasets provide audio-caption pairs, with
captions written in English only. In this work, we explore multilingual AAC,
using machine translated captions. We translated automatically two prominent
AAC datasets, AudioCaps and Clotho, from English to French, German and Spanish.
We trained and evaluated monolingual systems in the four languages, on
AudioCaps and Clotho. In all cases, the models achieved similar performance,
about 75% CIDEr on AudioCaps and 43% on Clotho. In French, we acquired manual
captions of the AudioCaps eval subset. The French system, trained on the
machine translated version of AudioCaps, achieved significantly better results
on the manual eval subset, compared to the English system for which we
automatically translated the outputs to French. This advocates in favor of
building systems directly in a target language instead of simply translating
the English system's captions into that language. Finally, we built a
multilingual model, which achieved results in each language comparable to each
monolingual system, while using far fewer parameters than a collection of
monolingual systems.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 11:24:55 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Cousin",
"Matéo",
""
],
[
"Labbé",
"Étienne",
""
],
[
"Pellegrini",
"Thomas",
""
]
] |
new_dataset
| 0.998837 |
2309.07658
|
Nicolas Jonason
|
Nicolas Jonason, Xin Wang, Erica Cooper, Lauri Juvela, Bob L. T.
Sturm, Junichi Yamagishi
|
DDSP-based Neural Waveform Synthesis of Polyphonic Guitar Performance
from String-wise MIDI Input
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the use of neural synthesis for acoustic guitar from string-wise
MIDI input. We propose four different systems and compare them with both
objective metrics and subjective evaluation against natural audio and a
sample-based baseline. We iteratively develop these four systems by making
various considerations on the architecture and intermediate tasks, such as
predicting pitch and loudness control features. We find that formulating the
control feature prediction task as a classification task rather than a
regression task yields better results. Furthermore, we find that our simplest
proposed system, which directly predicts synthesis parameters from MIDI input,
performs the best of the four proposed systems. Audio examples are
available at https://erl-j.github.io/neural-guitar-web-supplement.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 12:23:09 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Jonason",
"Nicolas",
""
],
[
"Wang",
"Xin",
""
],
[
"Cooper",
"Erica",
""
],
[
"Juvela",
"Lauri",
""
],
[
"Sturm",
"Bob L. T.",
""
],
[
"Yamagishi",
"Junichi",
""
]
] |
new_dataset
| 0.979899 |
2309.07709
|
Dimitris Chaikalis
|
Dimitris Chaikalis, Vinicius Goncalves, Anthony Tzes, Farshad Khorrami
|
Aerial Manipulator Force Control Using Control Barrier Functions
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This article studies the problem of applying normal forces on a surface,
using an underactuated aerial vehicle equipped with a dexterous robotic arm. A
force-motion high-level controller is designed based on a Lyapunov function
encompassing alignment and exerted force errors. This controller is coupled
with a Control Barrier Function constraint under an optimization scheme using
Quadratic Programming. This aims to enforce a prescribed relationship between
the approaching motion for the end-effector and its alignment with the surface,
thus ensuring safe operation. An adaptive low-level controller is devised for
the aerial vehicle, capable of tracking velocity commands generated by the
high-level controller. Simulations are presented to demonstrate the force
exertion stability and safety of the controller in cases of large disturbances.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 13:44:15 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Chaikalis",
"Dimitris",
""
],
[
"Goncalves",
"Vinicius",
""
],
[
"Tzes",
"Anthony",
""
],
[
"Khorrami",
"Farshad",
""
]
] |
new_dataset
| 0.995141 |
2309.07736
|
Ning Gao
|
Ning Gao, Cen Li, Shengguo Meng, Wankai Tang, Shuchen Meng, Shi Jin,
Michail Matthaiou
|
RIS-Assisted Physical Layer Authentication for 6G Endogenous Security
| null | null | null | null |
cs.CR eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The physical layer authentication (PLA) is a promising technology which can
enhance the access security of a massive number of devices in the near future.
In this paper, we propose a reconfigurable intelligent surface (RIS)-assisted
PLA system, in which the legitimate transmitter can customize the channel
fingerprints during PLA by controlling the ON-OFF state of the RIS. Without
loss of generality, we use the received signal strength (RSS) based spoofing
detection approach to analyze the feasibility of the proposed architecture.
Specifically, based on the RSS, we derive the statistical properties of PLA and
give some interesting insights, which showcase that the RIS-assisted PLA is
theoretically feasible. Then, we derive the optimal detection threshold to
maximize the performance in the context of the presented performance metrics.
Next, the actual feasibility of the proposed system is verified via
proof-of-concept experiments on a RIS-assisted PLA prototype platform. The
experimental results show 3.5% and 76% performance improvements
when the transmission sources are at different locations and at the same
location, respectively.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 14:15:43 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Gao",
"Ning",
""
],
[
"Li",
"Cen",
""
],
[
"Meng",
"Shengguo",
""
],
[
"Tang",
"Wankai",
""
],
[
"Meng",
"Shuchen",
""
],
[
"Jin",
"Shi",
""
],
[
"Matthaiou",
"Michail",
""
]
] |
new_dataset
| 0.997774 |
2309.07759
|
Gi-Cheon Kang
|
Gi-Cheon Kang, Junghyun Kim, Jaein Kim, Byoung-Tak Zhang
|
PROGrasp: Pragmatic Human-Robot Communication for Object Grasping
|
7 pages, 6 figures
| null | null | null |
cs.CL cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive Object Grasping (IOG) is the task of identifying and grasping the
desired object via human-robot natural language interaction. Current IOG
systems assume that a human user initially specifies the target object's
category (e.g., bottle). Inspired by pragmatics, where humans often convey
their intentions by relying on context to achieve goals, we introduce a new IOG
task, Pragmatic-IOG, and the corresponding dataset, Intention-oriented
Multi-modal Dialogue (IM-Dial). In our proposed task scenario, an
intention-oriented utterance (e.g., "I am thirsty") is initially given to the
robot. The robot should then identify the target object by interacting with a
human user. Based on the task setup, we propose a new robotic system that can
interpret the user's intention and pick up the target object, Pragmatic Object
Grasping (PROGrasp). PROGrasp performs Pragmatic-IOG by incorporating modules
for visual grounding, question asking, object grasping, and most importantly,
answer interpretation for pragmatic inference. Experimental results show that
PROGrasp is effective in offline (i.e., target object discovery) and online
(i.e., IOG with a physical robot arm) settings.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 14:45:47 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Kang",
"Gi-Cheon",
""
],
[
"Kim",
"Junghyun",
""
],
[
"Kim",
"Jaein",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] |
new_dataset
| 0.999821 |
2309.07764
|
James Choncholas
|
James Choncholas, Ketan Bhardwaj, Ada Gavrilovska
|
TGh: A TEE/GC Hybrid Enabling Confidential FaaS Platforms
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Trusted Execution Environments (TEEs) suffer from performance issues when
executing certain management instructions, such as creating an enclave, context
switching in and out of protected mode, and swapping cached pages. This is
especially problematic for short-running, interactive functions in
Function-as-a-Service (FaaS) platforms, where existing techniques to address
enclave overheads are insufficient. We find FaaS functions can spend more time
managing the enclave than executing application instructions. In this work, we
propose a TEE/GC hybrid (TGh) protocol to enable confidential FaaS platforms.
TGh moves computation out of the enclave onto the untrusted host using garbled
circuits (GC), a cryptographic construction for secure function evaluation. Our
approach retains the security guarantees of enclaves while avoiding the
performance issues associated with enclave management instructions.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 14:51:38 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Choncholas",
"James",
""
],
[
"Bhardwaj",
"Ketan",
""
],
[
"Gavrilovska",
"Ada",
""
]
] |
new_dataset
| 0.952978 |
2309.07841
|
Saurav Kumar
|
Abhinav Jain, Ehan Masud, Michelle Han, Rohan Dhillon, Sumukh Rao,
Arya Joshi, Salar Cheema, Saurav Kumar
|
Two Timin': Repairing Smart Contracts With A Two-Layered Approach
|
Submitted to the 2023 ICI Conference
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Due to the modern relevance of blockchain technology, smart contracts present
both substantial risks and benefits. Vulnerabilities within them can trigger a
cascade of consequences, resulting in significant losses. Many current papers
primarily focus on classifying smart contracts for malicious intent, often
relying on limited contract characteristics, such as bytecode or opcode. This
paper proposes a novel, two-layered framework: 1) classifying and 2) directly
repairing malicious contracts. Slither's vulnerability report is combined with
source code and passed through a pre-trained RandomForestClassifier (RFC) and
Large Language Models (LLMs), classifying and repairing each suggested
vulnerability. Experiments demonstrate the effectiveness of fine-tuned and
prompt-engineered LLMs. The smart contract repair models, built from
pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall
vulnerability count by 97.5% and 96.7% respectively. A manual inspection of
repaired contracts shows that all retain functionality, indicating that the
proposed method is appropriate for automatic batch classification and repair of
vulnerabilities in smart contracts.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 16:37:23 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Jain",
"Abhinav",
""
],
[
"Masud",
"Ehan",
""
],
[
"Han",
"Michelle",
""
],
[
"Dhillon",
"Rohan",
""
],
[
"Rao",
"Sumukh",
""
],
[
"Joshi",
"Arya",
""
],
[
"Cheema",
"Salar",
""
],
[
"Kumar",
"Saurav",
""
]
] |
new_dataset
| 0.999467 |
2309.07861
|
Gasper Begus
|
Ga\v{s}per Begu\v{s}, Thomas Lu, Alan Zhou, Peter Wu, Gopala K.
Anumanchipalli
|
CiwaGAN: Articulatory information exchange
| null | null | null | null |
cs.SD cs.AI cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Humans encode information into sounds by controlling articulators and decode
information from sounds using the auditory apparatus. This paper introduces
CiwaGAN, a model of human spoken language acquisition that combines
unsupervised articulatory modeling with an unsupervised model of information
exchange through the auditory modality. While prior research includes
unsupervised articulatory modeling and information exchange separately, our
model is the first to combine the two components. The paper also proposes an
improved articulatory model with more interpretable internal representations.
The proposed CiwaGAN model is the most realistic approximation of human spoken
language acquisition using deep learning. As such, it is useful for cognitively
plausible simulations of the human speech act.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 17:10:39 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Beguš",
"Gašper",
""
],
[
"Lu",
"Thomas",
""
],
[
"Zhou",
"Alan",
""
],
[
"Wu",
"Peter",
""
],
[
"Anumanchipalli",
"Gopala K.",
""
]
] |
new_dataset
| 0.98787 |
2309.07874
|
Emanuele Giacomini
|
Emanuele Giacomini and Leonardo Brizi and Luca Di Giammarino and Omar
Salem and Patrizio Perugini and Giorgio Grisetti
|
Ca$^2$Lib: Simple and Accurate LiDAR-RGB Calibration using Small Common
Markers
|
7 pages, 10 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In many fields of robotics, knowing the relative position and orientation
between two sensors is a mandatory precondition to operate with multiple
sensing modalities. In this context, the pair LiDAR-RGB cameras offer
complementary features: LiDARs yield sparse high quality range measurements,
while RGB cameras provide a dense color measurement of the environment.
Existing techniques often rely either on complex calibration targets that are
expensive to obtain, or extracted virtual correspondences that can hinder the
estimate's accuracy. In this paper we address the problem of LiDAR-RGB
calibration using typical calibration patterns (i.e. A3 chessboard) with
minimal human intervention. Our approach exploits the planarity of the target
to find correspondences between the sensors measurements, leading to features
that are robust to LiDAR noise.
Moreover, we estimate a solution by solving a joint non-linear optimization
problem. We validated our approach by carrying out quantitative and comparative
experiments against other state-of-the-art approaches. Our results show that our
simple scheme performs on par with or better than other approaches using complex
calibration targets. Finally, we release an open-source C++ implementation at
\url{https://github.com/srrg-sapienza/ca2lib}
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 17:22:49 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Giacomini",
"Emanuele",
""
],
[
"Brizi",
"Leonardo",
""
],
[
"Di Giammarino",
"Luca",
""
],
[
"Salem",
"Omar",
""
],
[
"Perugini",
"Patrizio",
""
],
[
"Grisetti",
"Giorgio",
""
]
] |
new_dataset
| 0.979759 |
2309.07880
|
Roberto Daza
|
Roberto Daza, Aythami Morales, Julian Fierrez, Ruben Tolosana, Ruben
Vera-Rodriguez
|
mEBAL2 Database and Benchmark: Image-based Multispectral Eyeblink
Detection
|
This paper is under consideration at Pattern Recognition Letters
| null | null | null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work introduces a new multispectral database and novel approaches for
eyeblink detection in RGB and Near-Infrared (NIR) individual images. Our
contributed dataset (mEBAL2, multimodal Eye Blink and Attention Level
estimation, Version 2) is the largest existing eyeblink database, representing
a great opportunity to improve data-driven multispectral approaches for blink
detection and related applications (e.g., attention level estimation and
presentation attack detection in face biometrics). mEBAL2 includes 21,100 image
sequences from 180 different students (more than 2 million labeled images in
total) while conducting a number of e-learning tasks of varying difficulty or
taking a real course on HTML initiation through the edX MOOC platform. mEBAL2
uses multiple sensors, including two Near-Infrared (NIR) and one RGB camera to
capture facial gestures during the execution of the tasks, as well as an
Electroencephalogram (EEG) band to get the cognitive activity of the user and
blinking events. Furthermore, this work proposes a Convolutional Neural Network
architecture as benchmark for blink detection on mEBAL2 with performances up to
97%. Different training methodologies are implemented using the RGB spectrum,
NIR spectrum, and the combination of both to enhance the performance on
existing eyeblink detectors. We demonstrate that combining NIR and RGB images
during training improves the performance of RGB eyeblink detectors (i.e.,
detection based only on an RGB image). Finally, the generalization capacity of
the proposed eyeblink detectors is validated in wilder and more challenging
environments like the HUST-LEBW dataset to show the usefulness of mEBAL2 to
train a new generation of data-driven approaches for eyeblink detection.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 17:25:25 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Daza",
"Roberto",
""
],
[
"Morales",
"Aythami",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
]
] |
new_dataset
| 0.99977 |