arxiv_id (string, 10) | published (string, 20) | titles (string, 9–243) | authors (sequence, 1–389) | abstract (string, 96–3.09k) | categories (sequence, 1–10) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2305.10238 | 2023-05-16T09:07:08Z | Energy Loss Prediction in IoT Energy Services | [
"Pengwei Yang",
"Amani Abusafia",
"Abdallah Lakhdari",
"Athman Bouguettaya"
] | We propose a novel Energy Loss Prediction (ELP) framework that estimates the
energy loss in sharing crowdsourced energy services. Crowdsourcing wireless
energy services is a novel and convenient solution to enable the ubiquitous
charging of nearby IoT devices. Therefore, capturing the wireless energy
sharing loss is essential for the successful deployment of efficient energy
service composition techniques. We propose Easeformer, a novel attention-based
algorithm to predict the battery levels of IoT devices in a crowdsourced energy
sharing environment. The predicted battery levels are used to estimate the
energy loss. A set of experiments were conducted to demonstrate the feasibility
and effectiveness of the proposed framework. We conducted extensive experiments
on real wireless energy datasets to demonstrate that our framework
significantly outperforms existing methods. | [
"cs.DC",
"cs.LG",
"cs.NI"
] | false |
2305.10356 | 2023-05-16T16:01:12Z | Spectral Clustering via Orthogonalization-Free Methods | [
"Qiyuan Pang",
"Haizhao Yang"
] | The Graph Signal Filter, used for dimensionality reduction in spectral clustering,
usually requires expensive eigenvalue estimation. We analyze the filter in an
optimization setting and propose to use four orthogonalization-free methods by
optimizing objective functions as dimensionality reduction in spectral
clustering. The proposed methods do not use any orthogonalization, which is
known to scale poorly in parallel computing environments. Our methods
theoretically construct adequate feature space, which is, at most, a weighted
alteration to the eigenspace of a normalized Laplacian matrix. Numerical
evidence suggests that the proposed methods are equivalent in clustering quality to
the ideal Graph Signal Filter, which exploits the exact eigenvalue needed
without expensive eigenvalue estimation. Numerical results show that the
proposed methods outperform Power Iteration-based methods and Graph Signal
Filter in clustering quality and computation cost. Unlike Power Iteration-based
methods and Graph Signal Filter which require random signal input, our methods
are able to utilize available initialization in the streaming graph scenarios.
Additionally, numerical results show that our methods outperform ARPACK and are
faster than LOBPCG in the streaming graph scenarios. We also present numerical
results showing the scalability of our methods in multithreading and
multiprocessing implementations to facilitate parallel spectral clustering. | [
"eess.SP",
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2305.10449 | 2023-05-16T16:48:12Z | Cooperation Is All You Need | [
"Ahsan Adeel",
"Junaid Muzaffar",
"Khubaib Ahmed",
"Mohsin Raza"
] | Going beyond 'dendritic democracy', we introduce a 'democracy of local
processors', termed Cooperator. Here we compare their capabilities when used in
permutation-invariant neural networks for reinforcement learning (RL), with
machine learning algorithms based on Transformers, such as ChatGPT.
Transformers are based on the long-standing conception of integrate-and-fire
'point' neurons, whereas Cooperator is inspired by recent neurobiological
breakthroughs suggesting that the cellular foundations of mental life depend on
context-sensitive pyramidal neurons in the neocortex which have two
functionally distinct points. We show that when used for RL, an algorithm based
on Cooperator learns far quicker than that based on Transformer, even while
having the same number of parameters. | [
"cs.LG",
"cs.AI",
"cs.NE"
] | false |
2305.11901 | 2023-05-16T08:10:51Z | Long-lead forecasts of wintertime air stagnation index in southern China
using oceanic memory effects | [
"Chenhong Zhou",
"Xiaorui Zhang",
"Meng Gao",
"Shanshan Liu",
"Yike Guo",
"Jie Chen"
] | Stagnant weather conditions are a major contributor to air pollution, as
they are favorable for the formation and accumulation of pollutants. To measure
the atmosphere's ability to dilute air pollutants, Air Stagnation Index (ASI)
has been introduced as an important meteorological index. Therefore, making
long-lead ASI forecasts is vital for planning air quality management in
advance. In this study, we found that autumn Niño indices derived from sea
surface temperature (SST) anomalies show a negative correlation with wintertime
ASI in southern China, offering prospects for a prewinter forecast. We
developed an LSTM-based model to predict the future wintertime ASI. Results
demonstrated that multivariate inputs (past ASI and Niño indices) achieve
better forecast performance than univariate input (only past ASI). The model
achieves a correlation coefficient of 0.778 between the actual and predicted
ASI, exhibiting a high degree of consistency. | [
"physics.ao-ph",
"cs.AI",
"cs.LG"
] | false |
2305.11999 | 2023-05-16T16:56:10Z | Advising OpenMP Parallelization via a Graph-Based Approach with
Transformers | [
"Tal Kadosh",
"Nadav Schneider",
"Niranjan Hasabnis",
"Timothy Mattson",
"Yuval Pinter",
"Gal Oren"
] | There is an ever-present need for shared memory parallelization schemes to
exploit the full potential of multi-core architectures. The most common
parallelization API addressing this need today is OpenMP. Nevertheless, writing
parallel code manually is complex and effort-intensive. Thus, many
deterministic source-to-source (S2S) compilers have emerged, intending to
automate the process of translating serial to parallel code. However, recent
studies have shown that these compilers are impractical in many scenarios. In
this work, we combine the latest advancements in the field of AI and natural
language processing (NLP) with the vast amount of open-source code to address
the problem of automatic parallelization. Specifically, we propose a novel
approach, called OMPify, to detect and predict the OpenMP pragmas and
shared-memory attributes in parallel code, given its serial version. OMPify is
based on a Transformer-based model that leverages a graph-based representation
of source code that exploits the inherent structure of code. We evaluated our
tool by predicting the parallelization pragmas and attributes of a large corpus
of (over 54,000) snippets of serial code written in C and C++ languages
(Open-OMP-Plus). Our results demonstrate that OMPify outperforms existing
approaches, the general-purposed and popular ChatGPT and targeted PragFormer
models, in terms of F1 score and accuracy. Specifically, OMPify achieves up to
90% accuracy on commonly-used OpenMP benchmark tests such as NAS, SPEC, and
PolyBench. Additionally, we performed an ablation study to assess the impact of
different model components and present interesting insights derived from the
study. Lastly, we also explored the potential of using data augmentation and
curriculum learning techniques to improve the model's robustness and
generalization capabilities. | [
"cs.DC",
"cs.AI",
"cs.LG",
"cs.PF"
] | false |
2305.14365 | 2023-05-16T15:37:16Z | Continually Learned Pavlovian Signalling Without Forgetting for
Human-in-the-Loop Robotic Control | [
"Adam S. R. Parker",
"Michael R. Dawson",
"Patrick M. Pilarski"
] | Artificial limbs are sophisticated devices to assist people with tasks of
daily living. Despite advanced robotic prostheses demonstrating similar motion
capabilities to biological limbs, users report them to be difficult and non-intuitive
to use. Providing more effective feedback from the device to the user has
therefore become a topic of increased interest. In particular, prediction
learning methods from the field of reinforcement learning -- specifically, an
approach termed Pavlovian signalling -- have been proposed as one approach for
better modulating feedback in prostheses since they can adapt during continuous
use. One challenge identified in these learning methods is that they can forget
previously learned predictions when a user begins to successfully act upon
delivered feedback. The present work directly addresses this challenge,
contributing new evidence on the impact of algorithmic choices, such as on- or
off-policy methods and representation choices, on the Pavlovian signalling from
a machine to a user during their control of a robotic arm. Two conditions of
algorithmic differences were studied using different scenarios of controlling a
robotic arm: an automated motion system and human participant piloting.
Contrary to expectations, off-policy learning did not provide the expected
solution to the forgetting problem. We instead identified beneficial properties
of a look-ahead state representation that made existing approaches able to
learn (and not forget) predictions in support of Pavlovian signalling. This
work therefore contributes new insight into the challenges of providing learned
predictive feedback from a prosthetic device, and demonstrates avenues for more
dynamic signalling in future human-machine interactions. | [
"cs.LG",
"cs.AI",
"cs.RO"
] | false |
2305.09793 | 2023-05-16T20:27:02Z | Reinforcement Learning for Safe Robot Control using Control Lyapunov
Barrier Functions | [
"Desong Du",
"Shaohang Han",
"Naiming Qi",
"Haitham Bou Ammar",
"Jun Wang",
"Wei Pan"
] | Reinforcement learning (RL) exhibits impressive performance when managing
complicated control tasks for robots. However, its wide application to physical
robots is limited by the absence of strong safety guarantees. To overcome this
challenge, this paper explores the control Lyapunov barrier function (CLBF) to
analyze the safety and reachability solely based on data without explicitly
employing a dynamic model. We also propose the Lyapunov barrier actor-critic
(LBAC), a model-free RL algorithm, to search for a controller that satisfies
the data-based approximation of the safety and reachability conditions. The
proposed approach is demonstrated through simulation and real-world robot
control experiments, i.e., a 2D quadrotor navigation task. The experimental
findings reveal this approach's effectiveness in reachability and safety,
surpassing other model-free RL methods. | [
"cs.RO",
"cs.AI",
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2305.09893 | 2023-05-17T02:08:25Z | Integrating Multiple Sources Knowledge for Class Asymmetry Domain
Adaptation Segmentation of Remote Sensing Images | [
"Kuiliang Gao",
"Anzhu Yu",
"Xiong You",
"Wenyue Guo",
"Ke Li",
"Ningbo Huang"
] | In the existing unsupervised domain adaptation (UDA) methods for remote
sensing images (RSIs) semantic segmentation, class symmetry is a widely
followed idealized assumption, where the source and target RSIs have exactly the
same class space. In practice, however, it is often very difficult to find a
source RSI with exactly the same classes as the target RSI. More commonly,
there are multiple source RSIs available. To this end, a novel class asymmetry
RSIs domain adaptation method with multiple sources is proposed in this paper,
which consists of four key components. Firstly, a multi-branch segmentation
network is built to learn an expert for each source RSI. Secondly, a novel
collaborative learning method with the cross-domain mixing strategy is
proposed, to supplement the class information for each source while achieving
the domain adaptation of each source-target pair. Thirdly, a pseudo-label
generation strategy is proposed to effectively combine strengths of different
experts, which can be flexibly applied to two cases where the source class
union is equal to or includes the target class set. Fourthly, a
multiview-enhanced knowledge integration module is developed for the high-level
knowledge routing and transfer from multiple domains to target predictions. | [
"cs.CV"
] | false |
2305.09924 | 2023-05-17T03:19:18Z | CageViT: Convolutional Activation Guided Efficient Vision Transformer | [
"Hao Zheng",
"Jinbao Wang",
"Xiantong Zhen",
"Hong Chen",
"Jingkuan Song",
"Feng Zheng"
] | Recently, Transformers have emerged as the go-to architecture for both vision
and language modeling tasks, but their computational efficiency is limited by
the length of the input sequence. To address this, several efficient variants
of Transformers have been proposed to accelerate computation or reduce memory
consumption while preserving performance. This paper presents an efficient
vision Transformer, called CageViT, that is guided by convolutional activation
to reduce computation. Our CageViT, unlike current Transformers, utilizes a new
encoder to handle the rearranged tokens, bringing several technical
contributions: 1) Convolutional activation is used to pre-process the tokens
after patchifying the image to select and rearrange the major tokens and minor
tokens, which substantially reduces the computation cost through an additional
fusion layer. 2) Instead of using the class activation map of the convolutional
model directly, we design a new weighted class activation to lower the model
requirements. 3) To facilitate communication between major tokens and fusion
tokens, Gated Linear SRA is proposed to further integrate fusion tokens into
the attention mechanism. We perform a comprehensive validation of CageViT on
the image classification challenge.
Experimental results demonstrate that the proposed CageViT outperforms the
most recent state-of-the-art backbones by a large margin in terms of
efficiency, while maintaining a comparable level of accuracy (e.g. a
moderate-sized 43.35M model trained solely on 224 x 224 ImageNet-1K can achieve
Top-1 accuracy of 83.4%). | [
"cs.CV",
"68T45",
"I.4.10"
] | false |
2305.09981 | 2023-05-17T06:25:40Z | S$^3$Track: Self-supervised Tracking with Soft Assignment Flow | [
"Fatemeh Azimi",
"Fahim Mannan",
"Felix Heide"
] | In this work, we study self-supervised multiple object tracking without using
any video-level association labels. We propose to cast the problem of multiple
object tracking as learning the frame-wise associations between detections in
consecutive frames. To this end, we propose differentiable soft object
assignment for object association, making it possible to learn features
tailored to object association with differentiable end-to-end training. With
this training approach in hand, we develop an appearance-based model for
learning instance-aware object features used to construct a cost matrix based
on the pairwise distances between the object features. We train our model using
temporal and multi-view data, where we obtain association pseudo-labels using
optical flow and disparity information. Unlike most self-supervised tracking
methods that rely on pretext tasks for learning the feature correspondences,
our method is directly optimized for cross-object association in complex
scenarios. As such, the proposed method offers a reidentification-based MOT
approach that is robust to training hyperparameters and does not suffer from
local minima, which are a challenge in self-supervised methods. We evaluate our
proposed model on the KITTI, Waymo, nuScenes, and Argoverse datasets,
consistently improving over other unsupervised methods ($7.8\%$ improvement in
association accuracy on nuScenes). | [
"cs.CV"
] | false |
2305.09999 | 2023-05-17T06:48:35Z | An Interactively Reinforced Paradigm for Joint Infrared-Visible Image
Fusion and Saliency Object Detection | [
"Di Wang",
"Jinyuan Liu",
"Risheng Liu",
"Xin Fan"
] | This research focuses on the discovery and localization of hidden objects in
the wild and serves unmanned systems. Through empirical analysis, infrared and
visible image fusion (IVIF) makes hard-to-find objects apparent, whereas
multimodal salient object detection (SOD) accurately delineates the precise
spatial location of objects within the picture. Their common characteristic of
seeking complementary cues from different source images motivates us to explore
the collaborative relationship between Fusion and Salient object detection
tasks on infrared and visible images via an Interactively Reinforced multi-task
paradigm for the first time, termed IRFS. To seamlessly bridge the multimodal
image fusion and SOD tasks, we specifically develop a Feature Screening-based
Fusion subnetwork (FSFNet) to screen out interfering features from source
images, thereby preserving saliency-related features. After generating the
fused image through FSFNet, it is then fed into the subsequent Fusion-Guided
Cross-Complementary SOD subnetwork (FC$^2$Net) as the third modality to drive
the precise prediction of the saliency map by leveraging the complementary
information derived from the fused image. In addition, we develop an
interactive loop learning strategy to achieve the mutual reinforcement of IVIF
and SOD tasks with a shorter training period and fewer network parameters.
Comprehensive experiment results demonstrate that the seamless bridge of IVIF
and SOD mutually enhances their performance, and highlights their superiority. | [
"cs.CV"
] | false |
2305.10026 | 2023-05-17T08:12:56Z | Colonoscopy Coverage Revisited: Identifying Scanning Gaps in Real-Time | [
"G. Leifman",
"I. Kligvasser",
"R. Goldenberg",
"M. Elad",
"E. Rivlin"
] | Colonoscopy is the most widely used medical technique for preventing
Colorectal Cancer, by detecting and removing polyps before they become
malignant. Recent studies show that around one quarter of the existing polyps
are routinely missed. While some of these do appear in the endoscopist's field
of view, others are missed due to a partial coverage of the colon. The task of
detecting and marking unseen regions of the colon has been addressed in recent
work, where the common approach is based on dense 3D reconstruction, which
proves to be challenging due to lack of 3D ground truth and periods with poor
visual content. In this paper we propose a novel and complementary method to
detect deficient local coverage in real-time for video segments where a
reliable 3D reconstruction is impossible. Our method aims to identify skips
along the colon caused by a drifted position of the endoscope during poor
visibility time intervals. The proposed solution consists of two phases. During
the first, time segments with good visibility of the colon and gaps between
them are identified. During the second phase, a trained model operates on each
gap, answering the question: Do you observe the same scene before and after the
gap? If the answer is negative, the endoscopist is alerted and can be directed
to the appropriate area in real-time. The second phase model is trained using a
contrastive loss based on auto-generated examples. Evaluating our method on a
dataset of 250 procedures annotated by trained physicians yields a sensitivity
of 0.75 with a specificity of 0.9. | [
"cs.CV"
] | false |
2305.10028 | 2023-05-17T08:15:45Z | Pyramid Diffusion Models For Low-light Image Enhancement | [
"Dewei Zhou",
"Zongxin Yang",
"Yi Yang"
] | Recovering noise-covered details from low-light images is challenging, and
the results given by previous methods leave room for improvement. Recent
diffusion models show realistic and detailed image generation through a
sequence of denoising refinements and motivate us to introduce them to
low-light image enhancement for recovering realistic details. However, we found
two problems when doing this, i.e., 1) diffusion models keep constant
resolution in one reverse process, which limits the speed; 2) diffusion models
sometimes result in global degradation (e.g., RGB shift). To address the above
problems, this paper proposes a Pyramid Diffusion model (PyDiff) for low-light
image enhancement. PyDiff uses a novel pyramid diffusion method to perform
sampling in a pyramid resolution style (i.e., progressively increasing
resolution in one reverse process). Pyramid diffusion makes PyDiff much faster
than vanilla diffusion models and introduces no performance degradation.
Furthermore, PyDiff uses a global corrector to alleviate the global degradation
that may occur in the reverse process, significantly improving the performance
and making the training of diffusion models easier with little additional
computational consumption. Extensive experiments on popular benchmarks show
that PyDiff achieves superior performance and efficiency. Moreover, PyDiff can
generalize well to unseen noise and illumination distributions. | [
"cs.CV"
] | false |
2305.10077 | 2023-05-17T09:22:20Z | Dynamic Structural Brain Network Construction by Hierarchical Prototype
Embedding GCN using T1-MRI | [
"Yilin Leng",
"Wenju Cui",
"Chen Bai",
"Zheng Yanyan",
"Jian Zheng"
] | Constructing structural brain networks using T1-weighted magnetic resonance
imaging (T1-MRI) presents a significant challenge due to the lack of direct
regional connectivity information. Current methods with T1-MRI rely on
predefined regions or isolated pretrained location modules to obtain atrophic
regions, which neglects individual specificity. Besides, existing methods
capture global structural context only at the whole-image level, which weakens
the correlation between regions and ignores the hierarchical distribution of
brain connectivity. We hereby propose a novel dynamic structural brain network
construction method based on T1-MRI, which can dynamically localize critical
regions and constrain the hierarchical distribution among them for constructing
dynamic structural brain network. Specifically, we first cluster
spatially-correlated channels and generate several critical brain regions as
prototypes. Further, we introduce a contrastive loss function to constrain the
prototype distribution, which embeds the hierarchical brain semantic structure
into the latent space. Self-attention and GCN are then used to dynamically
construct hierarchical correlations of critical regions for brain network and
explore the correlation, respectively. Our method is evaluated on ADNI-1 and
ADNI-2 databases for mild cognitive impairment (MCI) conversion prediction, and
achieves state-of-the-art (SOTA) performance. Our source code is available
at http://github.com/*******. | [
"cs.CV",
"14J60 (GCN) 14F05, 14J26 (Mild Cognitive Impairment)"
] | false |
2305.10079 | 2023-05-17T09:26:10Z | Face Recognition Using Synthetic Face Data | [
"Omer Granoviter",
"Alexey Gruzdev",
"Vladimir Loginov",
"Max Kogan",
"Orly Zvitia"
] | In the field of deep learning applied to face recognition, securing
large-scale, high-quality datasets is vital for attaining precise and reliable
results. However, amassing significant volumes of high-quality real data faces
hurdles such as time limitations, financial burdens, and privacy issues.
Furthermore, prevalent datasets are often impaired by racial biases and
annotation inaccuracies. In this paper, we underscore the promising application
of synthetic data, generated through rendering digital faces via our computer
graphics pipeline, in achieving competitive results with the state-of-the-art
on synthetic data across multiple benchmark datasets. By finetuning the
model, we obtain results that rival those achieved when training with hundreds
of thousands of real images (98.7% on LFW [1]). We further investigate the
contribution of adding intra-class variance factors (e.g., makeup, accessories,
haircuts) on model performance. Finally, we reveal the sensitivity of
pre-trained face recognition models to alternating specific parts of the face
by leveraging the granular control capability in our platform. | [
"cs.CV"
] | false |
2305.10082 | 2023-05-17T09:37:07Z | Imbalanced Aircraft Data Anomaly Detection | [
"Hao Yang",
"Junyu Gao",
"Yuan Yuan",
"Xuelong Li"
] | Anomaly detection in temporal data from sensors under aviation scenarios is a
practical but challenging task: 1) it is difficult to extract temporally
correlated contextual information from long temporal sequences; 2) anomalous data are
rare in time series, causing normal/abnormal imbalance in anomaly detection,
which makes the detector's classification degenerate or even fail. To remedy the
aforementioned problems, we propose a Graphical Temporal Data Analysis (GTDA)
framework. It consists of three modules, named Series-to-Image (S2I),
Cluster-based Resampling Approach using Euclidean Distance (CRD) and
Variance-Based Loss (VBL). Specifically, to better extract global information
from sensor temporal data, S2I converts the data to curve images to
demonstrate abnormalities in data changes. CRD and VBL balance the
classification to mitigate the unequal distribution of classes. CRD extracts
minority samples with similar features to majority samples by clustering and
over-samples them. And VBL fine-tunes the decision boundary by balancing the
fitting degree of the network to each class. Ablation experiments on the
Flights dataset indicate the effectiveness of CRD and VBL on precision and
recall, respectively. Extensive experiments demonstrate the synergistic
advantages of CRD and VBL on F1-score on Flights and three other temporal
datasets. | [
"cs.CV"
] | false |
2305.10084 | 2023-05-17T09:39:01Z | CWD30: A Comprehensive and Holistic Dataset for Crop Weed Recognition in
Precision Agriculture | [
"Talha Ilyas",
"Dewa Made Sri Arsa",
"Khubaib Ahmad",
"Yong Chae Jeong",
"Okjae Won",
"Jong Hoon Lee",
"Hyongsuk Kim"
] | The growing demand for precision agriculture necessitates efficient and
accurate crop-weed recognition and classification systems. Current datasets
often lack the sample size, diversity, and hierarchical structure needed to
develop robust deep learning models for discriminating crops and weeds in
agricultural fields. Moreover, the similar external structure and phenomics of
crops and weeds complicate recognition tasks. To address these issues, we
present the CWD30 dataset, a large-scale, diverse, holistic, and hierarchical
dataset tailored for crop-weed recognition tasks in precision agriculture.
CWD30 comprises over 219,770 high-resolution images of 20 weed species and 10
crop species, encompassing various growth stages, multiple viewing angles, and
environmental conditions. The images were collected from diverse agricultural
fields across different geographic locations and seasons, ensuring a
representative dataset. The dataset's hierarchical taxonomy enables
fine-grained classification and facilitates the development of more accurate,
robust, and generalizable deep learning models. We conduct extensive baseline
experiments to validate the efficacy of the CWD30 dataset. Our experiments
reveal that the dataset poses significant challenges due to intra-class
variations, inter-class similarities, and data imbalance. Additionally, we
demonstrate that minor training modifications like using CWD30 pretrained
backbones can significantly enhance model performance and reduce convergence
time, saving training resources on several downstream tasks. These challenges
provide valuable insights and opportunities for future research in crop-weed
recognition. We believe that the CWD30 dataset will serve as a benchmark for
evaluating crop-weed recognition algorithms, promoting advancements in
precision agriculture, and fostering collaboration among researchers in the
field. | [
"cs.CV"
] | false |
2305.10090 | 2023-05-17T09:52:46Z | Semi-supervised Quality Evaluation of Colonoscopy Procedures | [
"Idan Kligvasser",
"George Leifman",
"Roman Goldenberg",
"Ehud Rivlin",
"Michael Elad"
] | Colonoscopy is the standard of care technique for detecting and removing
polyps for the prevention of colorectal cancer. Nevertheless,
gastroenterologists (GI) routinely miss approximately 25% of polyps during
colonoscopies. These misses are highly operator dependent, influenced by the
physician skills, experience, vigilance, and fatigue. Standard quality metrics,
such as Withdrawal Time or Cecal Intubation Rate, have been shown to be well
correlated with Adenoma Detection Rate (ADR). However, those metrics are
limited in their ability to assess the quality of a specific procedure, and
they do not address quality aspects related to the style or technique of the
examination. In this work we design novel online and offline quality metrics,
based on visual appearance quality criteria learned by an ML model in an
unsupervised way. Furthermore, we evaluate the likelihood of detecting an
existing polyp as a function of quality and use it to demonstrate high
correlation of the proposed metric to polyp detection sensitivity. The proposed
online quality metric can be used to provide real time quality feedback to the
performing GI. By integrating the local metric over the withdrawal phase, we
build a global, offline quality metric, which is shown to be highly correlated
to the standard Polyp Per Colonoscopy (PPC) quality metric. | [
"cs.CV"
] | false |
2305.10121 | 2023-05-17T10:59:55Z | FICNN: A Framework for the Interpretation of Deep Convolutional Neural
Networks | [
"Hamed Behzadi-Khormouji",
"José Oramas"
] | With the continued development of Convolutional Neural Networks (CNNs), there
is a growing concern regarding the representations that they encode internally.
Analyzing these internal representations is referred to as model
interpretation. While the task of model explanation, justifying the predictions
of such models, has been studied extensively, the task of model interpretation
has received less attention. The aim of this paper is to propose a framework
for the study of interpretation methods designed for CNN models trained from
visual data. More specifically, we first specify the difference between the
interpretation and explanation tasks which are often considered the same in the
literature. Then, we define a set of six specific factors that can be used to
characterize interpretation methods. Third, based on the previous factors, we
propose a framework for the positioning of interpretation methods. Our
framework highlights that only a very small number of the suggested factors,
and combinations thereof, have actually been studied, leaving significant
areas unexplored. Following the proposed framework, we discuss
existing interpretation methods and give some attention to the evaluation
protocols followed to validate them. Finally, the paper highlights capabilities
of the methods in producing feedback for enabling interpretation and proposes
possible research problems arising from the framework. | [
"cs.CV"
] | false |
2305.10247 | 2023-05-17T14:35:56Z | Can Deep Network Balance Copy-Move Forgery Detection and
Distinguishment? | [
"Shizhen Chang"
] | Copy-move forgery detection is a crucial research area within digital image
forensics, as it focuses on identifying instances where objects in an image are
duplicated and placed in different locations. The detection of such forgeries
is particularly important in contexts where they can be exploited for malicious
purposes. Recent years have witnessed an increased interest in distinguishing
between the original and duplicated objects in copy-move forgeries, accompanied
by the development of larger-scale datasets to facilitate this task. However,
existing approaches to copy-move forgery detection and source/target
differentiation often involve two separate steps or the design of individual
end-to-end networks for each task. In this paper, we propose an innovative
method that employs the transformer architecture in an end-to-end deep neural
network. Our method aims to detect instances of copy-move forgery while
simultaneously localizing the source and target regions. By utilizing this
approach, we address the challenges posed by multi-object copy-move scenarios
and report if there is a balance between the detection and differentiation
tasks. To evaluate the performance of our proposed network, we conducted
experiments on two publicly available copy-move datasets. The results and
analysis aim to show the potential significance of balancing the detection and
distinguishment results and of transferring the trained model across different
datasets in the field. | [
"cs.CV"
] | false |
2305.10311 | 2023-05-17T15:49:56Z | Investigating image-based fallow weed detection performance on Raphanus
sativus and Avena sativa at speeds up to 30 km h$^{-1}$ | [
"Guy R. Y. Coleman",
"Angus Macintyre",
"Michael J. Walsh",
"William T. Salter"
] | Site-specific weed control (SSWC) can provide considerable reductions in weed
control costs and herbicide usage. Despite the promise of machine vision for
SSWC systems and the importance of ground speed in weed control efficacy, there
has been little investigation of the role of ground speed and camera
characteristics on weed detection performance. Here, we compare the performance
of four camera-software combinations using the open-source OpenWeedLocator
platform - (1) default settings on a Raspberry Pi HQ camera, (2) optimised
software settings on an HQ camera, (3) optimised software settings on the
Raspberry Pi v2 camera, and (4) a global shutter Arducam AR0234 camera - at
speeds ranging from 5 km h$^{-1}$ to 30 km h$^{-1}$. A combined excess green
(ExG) and hue, saturation, value (HSV) thresholding algorithm was used for
testing under fallow conditions using tillage radish (Raphanus sativus) and
forage oats (Avena sativa) as representative broadleaf and grass weeds,
respectively. ARD demonstrated the highest recall among camera systems, with up
to 95.7% of weeds detected at 5 km h$^{-1}$ and 85.7% at 30 km h$^{-1}$. HQ1
and V2 cameras had the lowest recall of 31.1% and 26.0% at 30 km h$^{-1}$,
respectively. All cameras experienced a decrease in recall as speed increased.
The highest rate of decrease was observed for HQ1 with 1.12% and 0.90%
reductions in recall for every km h$^{-1}$ increase in speed for tillage radish
and forage oats, respectively. Detection of the grassy forage oats was worse
(P<0.05) than the broadleaved tillage radish for all cameras. Despite the
variations in recall, HQ1, HQ2, and V2 maintained near-perfect precision at all
tested speeds. The variable effect of ground speed and camera system on the
detection performance of grass and broadleaf weeds indicates that careful
hardware and software considerations must be made when developing SSWC systems. | [
"cs.CV",
"C.3; I.4.8; J.3"
] | false |
2305.10320 | 2023-05-17T16:01:27Z | CostFormer: Cost Transformer for Cost Aggregation in Multi-view Stereo | [
"Weitao Chen",
"Hongbin Xu",
"Zhipeng Zhou",
"Yang Liu",
"Baigui Sun",
"Wenxiong Kang",
"Xuansong Xie"
] | The core of Multi-view Stereo (MVS) is the matching process between reference
and source pixels. Cost aggregation plays a significant role in this process,
while previous methods focus on handling it via CNNs. This may inherit the
natural limitation of CNNs that fail to discriminate repetitive or incorrect
matches due to limited local receptive fields. To handle the issue, we aim to
involve Transformer into cost aggregation. However, another problem may occur
due to the quadratically growing computational complexity caused by
Transformer, resulting in memory overflow and inference latency. In this paper,
we overcome these limits with an efficient Transformer-based cost aggregation
network, namely CostFormer. The Residual Depth-Aware Cost Transformer (RDACT) is
proposed to aggregate long-range features on cost volume via self-attention
mechanisms along the depth and spatial dimensions. Furthermore, Residual
Regression Transformer (RRT) is proposed to enhance spatial attention. The
proposed method is a universal plug-in to improve learning-based MVS methods. | [
"cs.CV"
] | true |
2305.10326 | 2023-05-17T16:06:30Z | Cross-domain Iterative Network for Simultaneous Denoising, Limited-angle
Reconstruction, and Attenuation Correction of Low-dose Cardiac SPECT | [
"Xiongchao Chen",
"Bo Zhou",
"Huidong Xie",
"Xueqi Guo",
"Qiong Liu",
"Albert J. Sinusas",
"Chi Liu"
] | Single-Photon Emission Computed Tomography (SPECT) is widely applied for the
diagnosis of ischemic heart diseases. Low-dose (LD) SPECT aims to minimize
radiation exposure but leads to increased image noise. Limited-angle (LA) SPECT
enables faster scanning and reduced hardware costs but results in lower
reconstruction accuracy. Additionally, computed tomography (CT)-derived
attenuation maps ($\mu$-maps) are commonly used for SPECT attenuation
correction (AC), but it will cause extra radiation exposure and SPECT-CT
misalignments. In addition, the majority of SPECT scanners on the market are
not hybrid SPECT/CT scanners. Although various deep learning methods have been
introduced to separately address these limitations, the solution for
simultaneously addressing these challenges remains highly under-explored
and challenging. To this end, we propose a Cross-domain Iterative Network
(CDI-Net) for simultaneous denoising, LA reconstruction, and CT-free AC in
cardiac SPECT. In CDI-Net, paired projection- and image-domain networks are
end-to-end connected to fuse the emission and anatomical information across
domains and iterations. Adaptive Weight Recalibrators (AWR) adjust the
multi-channel input features to enhance prediction accuracy. Our experiments
using clinical data showed that CDI-Net produced more accurate $\mu$-maps,
projections, and reconstructions compared to existing approaches that addressed
each task separately. Ablation studies demonstrated the significance of
cross-domain and cross-iteration connections, as well as AWR, in improving the
reconstruction performance. | [
"cs.CV"
] | false |
2305.10328 | 2023-05-17T16:09:49Z | Joint Denoising and Few-angle Reconstruction for Low-dose Cardiac SPECT
Using a Dual-domain Iterative Network with Adaptive Data Consistency | [
"Xiongchao Chen",
"Bo Zhou",
"Huidong Xie",
"Xueqi Guo",
"Qiong Liu",
"Albert J. Sinusas",
"Chi Liu"
] | Myocardial perfusion imaging (MPI) by single-photon emission computed
tomography (SPECT) is widely applied for the diagnosis of cardiovascular
diseases. Reducing the dose of the injected tracer is essential for lowering
the patient's radiation exposure, but it will lead to increased image noise.
Additionally, the latest dedicated cardiac SPECT scanners typically acquire
projections in fewer angles using fewer detectors to reduce hardware expenses,
potentially resulting in lower reconstruction accuracy. To overcome these
challenges, we propose a dual-domain iterative network for end-to-end joint
denoising and reconstruction from low-dose and few-angle projections of cardiac
SPECT. The image-domain network provides a prior estimate for the
projection-domain networks. The projection-domain primary and auxiliary modules
are interconnected for progressive denoising and few-angle reconstruction.
Adaptive Data Consistency (ADC) modules improve prediction accuracy by
efficiently fusing the outputs of the primary and auxiliary modules.
Experiments using clinical MPI data show that our proposed method outperforms
existing image-, projection-, and dual-domain techniques, producing more
accurate projections and reconstructions. Ablation studies confirm the
significance of the image-domain prior estimate and ADC modules in enhancing
network performance. | [
"cs.CV"
] | false |
2305.10418 | 2023-05-17T17:53:04Z | Towards Multi-Layered 3D Garments Animation | [
"Yidi Shao",
"Chen Change Loy",
"Bo Dai"
] | Mimicking realistic dynamics in 3D garment animations is a challenging task
due to the complex nature of multi-layered garments and the variety of outer
forces involved. Existing approaches mostly focus on single-layered garments
driven by only human bodies and struggle to handle general scenarios. In this
paper, we propose a novel data-driven method, called LayersNet, to model
garment-level animations as particle-wise interactions in a micro physics
system. We improve simulation efficiency by representing garments as
patch-level particles in a two-level structural hierarchy. Moreover, we
introduce a novel Rotation Equivalent Transformation that leverages the
rotation invariance and additivity of physics systems to better model outer
forces. To verify the effectiveness of our approach and bridge the gap between
experimental environments and real-world scenarios, we introduce a new
challenging dataset, D-LAYERS, containing 700K frames of dynamics of 4,900
different combinations of multi-layered garments driven by both human bodies
and randomly sampled wind. Our experiments show that LayersNet achieves
superior performance both quantitatively and qualitatively. We will make the
dataset and code publicly available at
https://mmlab-ntu.github.io/project/layersnet/index.html . | [
"cs.CV"
] | false |
2305.10420 | 2023-05-17T17:55:33Z | CLIP-GCD: Simple Language Guided Generalized Category Discovery | [
"Rabah Ouldnoughi",
"Chia-Wen Kuo",
"Zsolt Kira"
] | Generalized Category Discovery (GCD) requires a model to both classify known
categories and cluster unknown categories in unlabeled data. Prior methods
leveraged self-supervised pre-training combined with supervised fine-tuning on
the labeled data, followed by simple clustering methods. In this paper, we
posit that such methods are still prone to poor performance on
out-of-distribution categories, and do not leverage a key ingredient: Semantic
relationships between object categories. We therefore propose to leverage
multi-modal (vision and language) models, in two complementary ways. First, we
establish a strong baseline by replacing uni-modal features with CLIP, inspired
by its zero-shot performance. Second, we propose a novel retrieval-based
mechanism that leverages CLIP's aligned vision-language representations by
mining text descriptions from a text corpus for the labeled and unlabeled set.
We specifically use the alignment between CLIP's visual encoding of the image
and textual encoding of the corpus to retrieve top-k relevant pieces of text
and incorporate their embeddings to perform joint image+text semi-supervised
clustering. We perform rigorous experimentation and ablations (including on
where to retrieve from, how much to retrieve, and how to combine information),
and validate our results on several datasets including out-of-distribution
domains, demonstrating state-of-art results. | [
"cs.CV"
] | false |
2305.10456 | 2023-05-17T06:11:21Z | LPMM: Intuitive Pose Control for Neural Talking-Head Model via
Landmark-Parameter Morphable Model | [
"Kwangho Lee",
"Patrick Kwon",
"Myung Ki Lee",
"Namhyuk Ahn",
"Junsoo Lee"
] | While current talking head models are capable of generating photorealistic
talking head videos, they provide limited pose controllability. Most methods
require specific video sequences that exactly contain the desired head pose,
which is far from user-friendly pose control. Three-dimensional morphable
models (3DMM) offer semantic pose control, but they fail to capture certain
expressions. We present a novel method that utilizes parametric control of head
orientation and facial expression over a pre-trained neural-talking head model.
To enable this, we introduce a landmark-parameter morphable model (LPMM), which
offers control over the facial landmark domain through a set of semantic
parameters. Using LPMM, it is possible to adjust specific head pose factors,
without distorting other facial attributes. The results show our approach
provides intuitive rig-like control over neural talking head models, allowing
both parameter and image-based inputs. | [
"cs.CV"
] | false |
2305.10462 | 2023-05-17T08:18:06Z | DualVector: Unsupervised Vector Font Synthesis with Dual-Part
Representation | [
"Ying-Tian Liu",
"Zhifei Zhang",
"Yuan-Chen Guo",
"Matthew Fisher",
"Zhaowen Wang",
"Song-Hai Zhang"
] | Automatic generation of fonts can be an important aid to typeface design.
Many current approaches regard glyphs as pixelated images, which present
artifacts when scaling and inevitable quality losses after vectorization. On
the other hand, existing vector font synthesis methods either fail to represent
the shape concisely or require vector supervision during training. To push the
quality of vector font synthesis to the next level, we propose a novel
dual-part representation for vector glyphs, where each glyph is modeled as a
collection of closed "positive" and "negative" path pairs. The glyph contour is
then obtained by boolean operations on these paths. We first learn such a
representation only from glyph images and devise a subsequent contour
refinement step to align the contour with an image representation to further
enhance details. Our method, named DualVector, outperforms state-of-the-art
methods in vector font synthesis both quantitatively and qualitatively. Our
synthesized vector fonts can be easily converted to common digital font formats
like TrueType Font for practical use. The code is released at
https://github.com/thuliu-yt16/dualvector. | [
"cs.CV"
] | false |
2305.10589 | 2023-05-17T21:53:11Z | INCLG: Inpainting for Non-Cleft Lip Generation with a Multi-Task Image
Processing Network | [
"Shuang Chen",
"Amir Atapour-Abarghouei",
"Edmond S. L. Ho",
"Hubert P. H. Shum"
] | We present software that predicts non-cleft facial images for patients with
cleft lip, thereby facilitating the understanding, awareness and discussion of
cleft lip surgeries. To protect patients' privacy, we design a software
framework using image inpainting, which does not require cleft lip images for
training, thereby mitigating the risk of model leakage. We implement a novel
multi-task architecture that predicts both the non-cleft facial image and
facial landmarks, resulting in better performance as evaluated by surgeons. The
software is implemented with PyTorch and is usable with consumer-level color
images with a fast prediction speed, enabling effective deployment. | [
"cs.CV"
] | false |
2305.10593 | 2023-05-17T21:59:10Z | Inverted Non-maximum Suppression for more Accurate and Neater Face
Detection | [
"Lian Liu",
  "Liguo Zhou"
] | CNN-based face detection methods have achieved significant progress in recent
years. In addition to the strong representation ability of CNN, post-processing
methods are also very important for the performance of face detection. In
general, the face detection method predicts several candidate bounding-boxes
for one face. NMS is used to filter out inaccurate candidate boxes to get the
most accurate box. The principle of NMS is to select the box with a higher
score as the basic box and then delete the box which has a large overlapping
area with the basic box but has a lower score. However, the current NMS method
and its improved versions do not perform well when face image quality is poor
or faces are in a cluster. In these situations, even after NMS filtering, there
is often a face corresponding to multiple predicted boxes. To reduce this kind
of negative result, in this paper, we propose a new NMS method that operates in
the reverse order of other NMS methods. Our method performs well on low-quality
and tiny face samples. Experiments demonstrate that our method is effective as
a post-processor for different face detection methods. | [
"cs.CV"
] | false |
2305.09890 | 2023-05-17T01:55:45Z | SS-BSN: Attentive Blind-Spot Network for Self-Supervised Denoising with
Nonlocal Self-Similarity | [
"Young-Joo Han",
"Ha-Jin Yu"
] | Recently, numerous studies have been conducted on supervised learning-based
image denoising methods. However, these methods rely on large-scale noisy-clean
image pairs, which are difficult to obtain in practice. Denoising methods with
self-supervised training that can be trained with only noisy images have been
proposed to address the limitation. These methods are based on the
convolutional neural network (CNN) and have shown promising performance.
However, CNN-based methods do not consider the nonlocal self-similarities that
are essential in traditional methods, which can limit performance.
This paper presents self-similarity attention (SS-Attention), a novel
self-attention module that can capture nonlocal self-similarities to solve the
problem. We focus on designing a lightweight self-attention module in a
pixel-wise manner, which is nearly impossible to implement using the classic
self-attention module due to the quadratically increasing complexity with
spatial resolution. Furthermore, we integrate SS-Attention into the blind-spot
network called self-similarity-based blind-spot network (SS-BSN). We conduct
the experiments on real-world image denoising tasks. The proposed method
quantitatively and qualitatively outperforms state-of-the-art methods in
self-supervised denoising on the Smartphone Image Denoising Dataset (SIDD) and
Darmstadt Noise Dataset (DND) benchmark datasets. | [
"cs.CV",
"cs.LG",
"68T45",
"I.4.4"
] | false |
2305.09897 | 2023-05-17T02:13:23Z | Complementary Classifier Induced Partial Label Learning | [
"Yuheng Jia",
"Chongjie Si",
"Min-ling Zhang"
] | In partial label learning (PLL), each training sample is associated with a
set of candidate labels, among which only one is valid. The core of PLL is to
disambiguate the candidate labels to get the ground-truth one. In
disambiguation, the existing works usually do not fully investigate the
effectiveness of the non-candidate label set (a.k.a. complementary labels),
which accurately indicates a set of labels that do not belong to a sample. In
this paper, we use the non-candidate labels to induce a complementary
classifier, which naturally forms an adversarial relationship against the
traditional PLL classifier, to eliminate the false-positive labels in the
candidate label set. Besides, we assume the feature space and the label space
share the same local topological structure captured by a dynamic graph, and use
it to assist disambiguation. Extensive experimental results validate the
superiority of the proposed approach against state-of-the-art PLL methods on 4
controlled UCI data sets and 6 real-world data sets, and reveal the usefulness
of complementary learning in PLL. The code has been released in the link
https://github.com/Chongjie-Si/PL-CL. | [
"cs.LG",
"cs.CV"
] | false |
2305.09967 | 2023-05-17T05:59:53Z | Variable Length Embeddings | [
"Johnathan Chiu",
"Andi Gu",
"Matt Zhou"
] | In this work, we introduce a novel deep learning architecture, Variable
Length Embeddings (VLEs), an autoregressive model that can produce a latent
representation composed of an arbitrary number of tokens. As a proof of
concept, we demonstrate the capabilities of VLEs on tasks that involve
reconstruction and image decomposition. We evaluate our experiments on a mix of
the iNaturalist and ImageNet datasets and find that VLEs achieve comparable
reconstruction results to a state-of-the-art VAE, using less than a tenth of
the parameters. | [
"cs.CV",
"cs.LG"
] | false |
2305.09972 | 2023-05-17T06:11:10Z | Real-Time Flying Object Detection with YOLOv8 | [
"Dillon Reis",
"Jordan Kupec",
"Jacqueline Hong",
"Ahmad Daoudi"
] | This paper presents a generalized model for real-time detection of flying
objects that can be used for transfer learning and further research, as well as
a refined model that is ready for implementation. We achieve this by training
our first generalized model on a data set containing 40 different classes of
flying objects, forcing the model to extract abstract feature representations.
We then perform transfer learning with these learned parameters on a data set
more representative of real world environments (i.e., higher frequency of
occlusion, small spatial sizes, rotations, etc.) to generate our refined model.
Object detection of flying objects remains challenging due to large variance in
object spatial sizes/aspect ratios, rate of speed, occlusion, and clustered
backgrounds. To address some of the presented challenges while simultaneously
maximizing performance, we utilize the current state of the art single-shot
detector, YOLOv8, in an attempt to find the best tradeoff between inference
speed and mAP. While YOLOv8 is being regarded as the new state-of-the-art, an
official paper has not been provided. Thus, we provide an in-depth explanation
of the new architecture and functionality that YOLOv8 has adapted. Our final
generalized model achieves an mAP50-95 of 0.685 and average inference speed on
1080p videos of 50 fps. Our final refined model maintains this inference speed
and achieves an improved mAP50-95 of 0.835. | [
"cs.CV",
"cs.LG",
"I.2.10; I.2.6"
] | false |
2305.10046 | 2023-05-17T08:38:59Z | Probing the Role of Positional Information in Vision-Language Models | [
"Philipp J. Rösch",
"Jindřich Libovický"
] | In most Vision-Language models (VL), the understanding of the image structure
is enabled by injecting the position information (PI) about objects in the
image. In our case study of LXMERT, a state-of-the-art VL model, we probe the
use of the PI in the representation and study its effect on Visual Question
Answering. We show that the model is not capable of leveraging the PI for the
image-text matching task on a challenge set where only position differs. Yet,
our experiments with probing confirm that the PI is indeed present in the
representation. We introduce two strategies to tackle this: (i) Positional
Information Pre-training and (ii) Contrastive Learning on PI using
Cross-Modality Matching. Doing so, the model can correctly classify if images
with detailed PI statements match. In addition to the 2D information from
bounding boxes, we introduce the object's depth as a new feature for better
object localization in space. Even though we were able to improve the model
properties as defined by our probes, it only has a negligible effect on the
downstream performance. Our results thus highlight an important issue of
multimodal modeling: the mere presence of information detectable by a probing
classifier is not a guarantee that the information is available in a
cross-modal setup. | [
"cs.CL",
"cs.CV",
"I.4; I.7"
] | false |
2305.10126 | 2023-05-17T11:12:07Z | Fusion-S2iGan: An Efficient and Effective Single-Stage Framework for
Speech-to-Image Generation | [
"Zhenxing Zhang",
"Lambert Schomaker"
] | The goal of a speech-to-image transform is to produce a photo-realistic
picture directly from a speech signal. Recently, various studies have focused
on this task and have achieved promising performance. However, current
speech-to-image approaches are based on a stacked modular framework that
suffers from three vital issues: 1) Training separate networks is
time-consuming as well as inefficient and the convergence of the final
generative model strongly depends on the previous generators; 2) The quality of
precursor images is ignored by this architecture; 3) Multiple discriminator
networks are required to be trained. To this end, we propose an efficient and
effective single-stage framework called Fusion-S2iGan to yield perceptually
plausible and semantically consistent image samples on the basis of given
spoken descriptions. Fusion-S2iGan introduces a visual+speech fusion module
(VSFM), constructed with a pixel-attention module (PAM), a speech-modulation
module (SMM) and a weighted-fusion module (WFM), to inject the speech embedding
from a speech encoder into the generator while improving the quality of
synthesized pictures. Fusion-S2iGan spreads the bimodal information over all
layers of the generator network to reinforce the visual feature maps at various
hierarchical levels in the architecture. We conduct a series of experiments on
four benchmark data sets, i.e., CUB birds, Oxford-102, Flickr8k and
Places-subset. The experimental results demonstrate the superiority of the
presented Fusion-S2iGan compared to the state-of-the-art models with a
multi-stage architecture and a performance level that is close to traditional
text-to-image approaches. | [
"cs.CV",
"cs.MM"
] | false |
2305.10146 | 2023-05-17T11:59:52Z | CS-PCN: Context-Space Progressive Collaborative Network for Image
Denoising | [
"Yuqi Jiang",
"Chune Zhang",
"Jiao Liu"
] | Currently, image-denoising methods based on deep learning cannot adequately
reconcile contextual semantic information and spatial details. To take these
information optimizations into consideration, in this paper, we propose a
Context-Space Progressive Collaborative Network (CS-PCN) for image denoising.
CS-PCN is a multi-stage hierarchical architecture composed of a context mining
siamese sub-network (CM2S) and a space synthesis sub-network (3S). CM2S aims at
extracting rich multi-scale contextual information by sequentially connecting
multi-layer feature processors (MLFP) for semantic information pre-processing,
attention encoder-decoders (AED) for multi-scale information, and multi-conv
attention controllers (MCAC) for supervised feature fusion. 3S parallels MLFP
and a single-scale cascading block to learn image details, which not only
maintains the contextual information but also emphasizes the complementary
spatial ones. Experimental results show that CS-PCN achieves significant
performance improvement in synthetic and real-world noise removal. | [
"cs.CV",
"eess.IV"
] | false |
2305.10216 | 2023-05-17T13:47:30Z | CHMMOTv1 -- Cardiac and Hepatic Multi-Echo (T2*) MRI Images and Clinical
Dataset for Iron Overload on Thalassemia Patients | [
"Iraj Abedi",
"Maryam Zamanian",
"Hamidreza Bolhasani",
"Milad Jalilian"
] | Owing to the invasiveness and low accuracy of other tests, including biopsy
and ferritin levels, magnetic resonance imaging (T2 and T2*-MRI) has been
considered the standard test for patients with thalassemia (THM). Regarding
deep learning networks in medical sciences for improving diagnosis and
treatment purposes and the existence of minimal resources for them, we decided
to provide a set of magnetic resonance images of the cardiac and hepatic
organs. The dataset included 124 patients (67 women and 57 men) with a THM age
range of (5-52) years. In addition, patients were divided into two groups: with
follow-up (1-5 times) at time intervals of about (5-6) months and without
follow-up. Also, T2* and, R2* values, the results of the cardiac and hepatic
report (normal, mild, moderate, severe, and very severe), and laboratory tests
including Ferritin, Bilirubin (D, and T), AST, ALT, and ALP levels were
provided as an Excel file. This dataset CHMMOTv1) has been published in
Mendeley Dataverse and is accessible through the web at: http://databiox.com. | [
"eess.IV",
"cs.CV"
] | false |
2305.10217 | 2023-05-17T13:50:33Z | Deep Learning Applications Based on WISE Infrared Data: Classification
of Stars, Galaxies and Quasars | [
"Guiyu Zhao",
"Bo Qiu",
"A-Li Luo",
"Xiaoyu Guo",
"Lin Yao",
"Kun Wang",
"Yuanbo Liu"
] | The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of
millions of sources over the entire sky. However, classifying them reliably is
a great challenge due to degeneracies in WISE multicolor space and low
detection levels in its two longest-wavelength bandpasses. In this paper, the
deep learning classification network, IICnet (Infrared Image Classification
network), is designed to classify sources from WISE images to achieve a more
accurate classification goal. IICnet shows good ability on the feature
extraction of the WISE sources. Experiments demonstrate that the
classification results of IICnet are superior to some other methods; it has
obtained 96.2% accuracy for galaxies, 97.9% accuracy for quasars, and 96.4%
accuracy for stars, and the Area Under Curve (AUC) of the IICnet classifier can
reach more than 99%. In addition, the superiority of IICnet in processing
infrared images has been demonstrated in the comparisons with VGG16, GoogleNet,
ResNet34, MobileNet, EfficientNetV2, and RepVGG, with fewer parameters and faster
inference. The above proves that IICnet is an effective method to classify
infrared sources. | [
"astro-ph.IM",
"cs.CV"
] | false |
2305.10252 | 2023-05-17T14:42:16Z | Sharpness & Shift-Aware Self-Supervised Learning | [
"Ngoc N. Tran",
"Son Duong",
"Hoang Phan",
"Tung Pham",
"Dinh Phung",
"Trung Le"
] | Self-supervised learning aims to extract meaningful features from unlabeled
data for further downstream tasks. In this paper, we consider classification as
a downstream task in phase 2 and develop rigorous theories to identify the
factors that implicitly influence the general loss of this classification task.
Our theories signify that sharpness-aware feature extractors benefit the
classification task in phase 2 and the existing data shift between the ideal
(i.e., the ideal one used in theory development) and practical (i.e., the
practical one used in implementation) distributions to generate positive pairs
also remarkably affects this classification task. Further harvesting these
theoretical findings, we propose to minimize the sharpness of the feature
extractor and a new Fourier-based data augmentation technique to relieve the
data shift in the distributions generating positive pairs, reaching Sharpness &
Shift-Aware Contrastive Learning (SSA-CLR). We conduct extensive experiments to
verify our theoretical findings and demonstrate that sharpness & shift-aware
contrastive learning can remarkably boost the performance as well as obtain
more robust extracted features compared with the baselines. | [
"cs.LG",
"cs.CV"
] | false |
2305.10254 | 2023-05-17T14:43:05Z | SAM for Poultry Science | [
"Xiao Yang",
"Haixing Dai",
"Zihao Wu",
"Ramesh Bist",
"Sachin Subedi",
"Jin Sun",
"Guoyu Lu",
"Changying Li",
"Tianming Liu",
"Lilong Chai"
] | In recent years, the agricultural industry has witnessed significant
advancements in artificial intelligence (AI), particularly with the development
of large-scale foundational models. Among these foundation models, the Segment
Anything Model (SAM), introduced by Meta AI Research, stands out as a
groundbreaking solution for object segmentation tasks. While SAM has shown
success in various agricultural applications, its potential in the poultry
industry, specifically in the context of cage-free hens, remains relatively
unexplored. This study aims to assess the zero-shot segmentation performance of
SAM on representative chicken segmentation tasks, including part-based
segmentation and the use of infrared thermal images, and to explore
chicken-tracking tasks by using SAM as a segmentation tool. The results
demonstrate SAM's superior performance compared to SegFormer and SETR in both
whole and part-based chicken segmentation. SAM-based object tracking also
provides valuable data on the behavior and movement patterns of broiler birds.
The findings of this study contribute to a better understanding of SAM's
potential in poultry science and lay the foundation for future advancements in
chicken segmentation and tracking. | [
"cs.CV",
"cs.AI"
] | false |
2305.10260 | 2023-05-17T14:49:20Z | From Region to Patch: Attribute-Aware Foreground-Background Contrastive
Learning for Fine-Grained Fashion Retrieval | [
"Jianfeng Dong",
"Xiaoman Peng",
"Zhe Ma",
"Daizong Liu",
"Xiaoye Qu",
"Xun Yang",
"Jixiang Zhu",
"Baolong Liu"
] | Attribute-specific fashion retrieval (ASFR) is a challenging information
retrieval task, which has attracted increasing attention in recent years.
Different from traditional fashion retrieval which mainly focuses on optimizing
holistic similarity, the ASFR task concentrates on attribute-specific
similarity, resulting in more fine-grained and interpretable retrieval results.
As the attribute-specific similarity typically corresponds to the specific
subtle regions of images, we propose a Region-to-Patch Framework (RPF) that
consists of a region-aware branch and a patch-aware branch to extract
fine-grained attribute-related visual features for precise retrieval in a
coarse-to-fine manner. In particular, the region-aware branch is first to be
utilized to locate the potential regions related to the semantic of the given
attribute. Then, considering that the located region is coarse and still
contains the background visual contents, the patch-aware branch is proposed to
capture patch-wise attribute-related details from the previously amplified
region. Such a hybrid architecture strikes a proper balance between region
localization and feature extraction. Besides, different from previous works
that solely focus on discriminating the attribute-relevant foreground visual
features, we argue that the attribute-irrelevant background features are also
crucial for distinguishing the detailed visual contexts in a contrastive
manner. Therefore, a novel E-InfoNCE loss based on the foreground and
background representations is further proposed to improve the discrimination of
attribute-specific representation. Extensive experiments on three datasets
demonstrate the effectiveness of our proposed framework, and also show a decent
generalization of our RPF on out-of-domain fashion images. Our source code is
available at https://github.com/HuiGuanLab/RPF. | [
"cs.CV",
"cs.MM"
] | false |
2305.10289 | 2023-05-17T15:26:51Z | Explain Any Concept: Segment Anything Meets Concept-Based Explanation | [
"Ao Sun",
"Pingchuan Ma",
"Yuanyuan Yuan",
"Shuai Wang"
] | EXplainable AI (XAI) is an essential topic to improve human understanding of
deep neural networks (DNNs) given their black-box internals. For computer
vision tasks, mainstream pixel-based XAI methods explain DNN decisions by
identifying important pixels, while emerging concept-based XAI methods explore forming
explanations with concepts (e.g., a head in an image). However, pixels are
generally hard to interpret and sensitive to the imprecision of XAI methods,
whereas "concepts" in prior works require human annotation or are limited to
pre-defined concept sets. On the other hand, driven by large-scale
pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful
and promptable framework for performing precise and comprehensive instance
segmentation, enabling automatic preparation of concept sets from a given
image. This paper for the first time explores using SAM to augment
concept-based XAI. We offer an effective and flexible concept-based explanation
method, namely Explain Any Concept (EAC), which explains DNN decisions with any
concept. While SAM is highly effective and offers an "out-of-the-box" instance
segmentation, it is costly when integrated into de facto XAI pipelines. We
thus propose a lightweight per-input equivalent (PIE) scheme, enabling
efficient explanation with a surrogate model. Our evaluation over two popular
datasets (ImageNet and COCO) illustrates the highly encouraging performance of
EAC over commonly-used XAI methods. | [
"cs.CV",
"cs.AI"
] | false |
2305.10332 | 2023-05-17T16:15:55Z | Extracting a functional representation from a dictionary for non-rigid
shape matching | [
"Michele Colombo",
"Giacomo Boracchi",
"Simone Melzi"
] | Shape matching is a fundamental problem in computer graphics with many
applications. Functional maps translate the point-wise shape-matching problem
into its functional counterpart and have inspired numerous solutions over the
last decade. Nearly all the solutions based on functional maps rely on the
eigenfunctions of the Laplace-Beltrami Operator (LB) to describe the functional
spaces defined on the surfaces and then convert the functional correspondences
into point-wise correspondences. However, this final step is often error-prone
and inaccurate in tiny regions and protrusions, where the energy of LB does not
uniformly cover the surface. We propose a new functional basis Principal
Components of a Dictionary (PCD) to address such intrinsic limitation. PCD
constructs an orthonormal basis from the Principal Component Analysis (PCA) of
a dictionary of functions defined over the shape. These dictionaries can target
specific properties of the final basis, such as achieving an even spreading of
energy. Our experimental evaluation compares seven different dictionaries on
established benchmarks, showing that PCD is suited to target different
shape-matching scenarios, resulting in more accurate point-wise maps than the
LB basis when used in the same pipeline. This evidence provides a promising
alternative for improving correspondence estimation, confirming the power and
flexibility of functional maps. | [
"cs.GR",
"cs.CV"
] | false |
2305.10453 | 2023-05-17T00:22:39Z | VVC+M: Plug and Play Scalable Image Coding for Humans and Machines | [
"Alon Harell",
"Yalda Foroutan",
"Ivan V. Bajic"
] | Compression for machines is an emerging field, where inputs are encoded while
optimizing the performance of downstream automated analysis. In scalable coding
for humans and machines, the compressed representation used for machines is
further utilized to enable input reconstruction. Often performed by jointly
optimizing the compression scheme for both machine task and human perception,
this results in sub-optimal rate-distortion (RD) performance for the machine
side. We focus on the case of images, proposing to utilize the pre-existing
residual coding capabilities of video codecs such as VVC to create a scalable
codec from any image compression for machines (ICM) scheme. Using our approach
we improve an existing scalable codec to achieve superior RD performance on the
machine task, while remaining competitive for human perception. Moreover, our
approach can be trained post-hoc for any given ICM scheme, and without creating
a coupling between the quality of the machine analysis and human vision. | [
"eess.IV",
"cs.CV"
] | false |
2305.10465 | 2023-05-17T12:31:48Z | Towards Robust Probabilistic Modeling on SO(3) via Rotation Laplace
Distribution | [
"Yingda Yin",
"Jiangran Lyu",
"Yang Wang",
"He Wang",
"Baoquan Chen"
] | Estimating the 3DoF rotation from a single RGB image is an important yet
challenging problem. As a popular approach, probabilistic rotation modeling
additionally carries prediction uncertainty information, compared to
single-prediction rotation regression. For modeling probabilistic distribution
over SO(3), it is natural to use the Gaussian-like Bingham distribution and
matrix Fisher distribution; however, they are shown to be sensitive to outlier
predictions, e.g.
$180^\circ$ error and thus are unlikely to converge with optimal performance.
In this paper, we draw inspiration from multivariate Laplace distribution and
propose a novel rotation Laplace distribution on SO(3). Our rotation Laplace
distribution is robust to the disturbance of outliers and enforces a strong
gradient on the low-error region that it can improve. In addition, we show that
our method also exhibits robustness to small noises and thus tolerates
imperfect annotations. With this benefit, we demonstrate its advantages in
semi-supervised rotation regression, where the pseudo labels are noisy. To
further capture the multi-modal rotation solution space for symmetric objects,
we extend our distribution to rotation Laplace mixture model and demonstrate
its effectiveness. Our extensive experiments show that our proposed
distribution and the mixture model achieve state-of-the-art performance in all
the rotation regression experiments over both probabilistic and
non-probabilistic baselines. | [
"cs.CV",
"cs.AI"
] | false |
2305.10507 | 2023-05-17T18:24:43Z | ReasonNet: End-to-End Driving with Temporal and Global Reasoning | [
"Hao Shao",
"Letian Wang",
"Ruobing Chen",
"Steven L. Waslander",
"Hongsheng Li",
"Yu Liu"
] | The large-scale deployment of autonomous vehicles is yet to come, and one of
the major remaining challenges lies in urban dense traffic scenarios. In such
cases, it remains challenging to predict the future evolution of the scene and
future behaviors of objects, and to deal with rare adverse events such as the
sudden appearance of occluded objects. In this paper, we present ReasonNet, a
novel end-to-end driving framework that extensively exploits both temporal and
global information of the driving scene. By reasoning on the temporal behavior
of objects, our method can effectively process the interactions and
relationships among features in different frames. Reasoning about the global
information of the scene can also improve overall perception performance and
benefit the detection of adverse events, especially the anticipation of
potential danger from occluded objects. For comprehensive evaluation on
occlusion events, we also release publicly a driving simulation benchmark
DriveOcclusionSim consisting of diverse occlusion events. We conduct extensive
experiments on multiple CARLA benchmarks, where our model outperforms all prior
methods, ranking first on the sensor track of the public CARLA Leaderboard. | [
"cs.CV",
"cs.AI"
] | false |
2305.10513 | 2023-05-17T18:45:56Z | Learning Pose Image Manifolds Using Geometry-Preserving GANs and
Elasticae | [
"Shenyuan Liang",
"Pavan Turaga",
"Anuj Srivastava"
] | This paper investigates the challenge of learning image manifolds,
specifically pose manifolds, of 3D objects using limited training data. It
proposes a DNN approach to manifold learning and for predicting images of
objects for novel, continuous 3D rotations. The approach uses two distinct
concepts: (1) Geometric Style-GAN (Geom-SGAN), which maps images to
low-dimensional latent representations and maintains the (first-order) manifold
geometry. That is, it seeks to preserve the pairwise distances between base
points and their tangent spaces, and (2) uses Euler's elastica to smoothly
interpolate between directed points (points + tangent directions) in the
low-dimensional latent space. When mapped back to the larger image space, the
resulting interpolations resemble videos of rotating objects. Extensive
experiments establish the superiority of this framework in learning paths on
rotation manifolds, both visually and quantitatively, relative to
state-of-the-art GANs and VAEs. | [
"cs.CV",
"stat.ML"
] | false |
2305.10594 | 2023-05-17T22:04:29Z | Improving Extrinsics between RADAR and LIDAR using Learning | [
"Peng Jiang",
"Srikanth Saripalli"
] | LIDAR and RADAR are two commonly used sensors in autonomous driving systems.
The extrinsic calibration between the two is crucial for effective sensor
fusion. The challenge arises due to the low accuracy and sparse information in
RADAR measurements. This paper presents a novel solution for 3D RADAR-LIDAR
calibration in autonomous systems. The method employs simple targets to
generate data, including correspondence registration and a one-step
optimization algorithm. The optimization aims to minimize the reprojection
error while utilizing a small multi-layer perceptron (MLP) to perform
regression on the return energy of the sensor around the targets. The proposed
approach uses a deep learning framework such as PyTorch and can be optimized
through gradient descent. The experiment uses a 360-degree Ouster-128 LIDAR and
a 360-degree Navtech RADAR, providing raw measurements. The results validate
the effectiveness of the proposed method in achieving improved estimates of
extrinsic calibration parameters. | [
"cs.RO",
"cs.CV"
] | false |
2305.15422 | 2023-05-17T03:19:06Z | Facial Expression Recognition at the Edge: CPU vs GPU vs VPU vs TPU | [
"Mohammadreza Mohammadi",
"Heath Smith",
"Lareb Khan",
"Ramtin Zand"
] | Facial Expression Recognition (FER) plays an important role in human-computer
interactions and is used in a wide range of applications. Convolutional Neural
Networks (CNN) have shown promise in their ability to classify human facial
expressions, however, large CNNs are not well-suited to be implemented on
resource- and energy-constrained IoT devices. In this work, we present a
hierarchical framework for developing and optimizing hardware-aware CNNs tuned
for deployment at the edge. We perform a comprehensive analysis across various
edge AI accelerators including NVIDIA Jetson Nano, Intel Neural Compute Stick,
and Coral TPU. Using the proposed strategy, we achieved a peak accuracy of
99.49% when testing on the CK+ facial expression recognition dataset.
Additionally, we achieved a minimum inference latency of 0.39 milliseconds and
a minimum power consumption of 0.52 Watts. | [
"cs.CV",
"cs.PF"
] | false |
2305.09978 | 2023-05-17T06:22:11Z | Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning
Problems | [
"Shigeng Sun",
"Yuchen Xie"
] | Many machine learning applications and tasks rely on the stochastic gradient
descent (SGD) algorithm and its variants. Effective step length selection is
crucial for the success of these algorithms, which has motivated the
development of algorithms such as ADAM or AdaGrad. In this paper, we propose a
novel algorithm for adaptive step length selection in the classical SGD
framework, which can be readily adapted to other stochastic algorithms. Our
proposed algorithm is inspired by traditional nonlinear optimization techniques
and is supported by analytical findings. We show that under reasonable
conditions, the algorithm produces step lengths in line with well-established
theoretical requirements, and generates iterates that converge to a stationary
neighborhood of a solution in expectation. We test the proposed algorithm on
logistic regressions and deep neural networks and demonstrate that the
algorithm can generate step lengths comparable to the best step length obtained
from manual tuning. | [
"cs.LG",
"cs.AI",
"cs.CV",
"math.OC"
] | false |
2305.09986 | 2023-05-17T06:31:10Z | A robust multi-domain network for short-scanning amyloid PET
reconstruction | [
"Hyoung Suk Park",
"Young Jin Jeong",
"Kiwan Jeon"
] | This paper presents a robust multi-domain network designed to restore
low-quality amyloid PET images acquired in a short period of time. The proposed
method is trained on pairs of PET images from short (2 minutes) and standard
(20 minutes) scanning times, sourced from multiple domains. Learning relevant
image features between these domains with a single network is challenging. Our
key contribution is the introduction of a mapping label, which enables
effective learning of specific representations between different domains. The
network, trained with various mapping labels, can efficiently correct amyloid
PET datasets in multiple training domains and unseen domains, such as those
obtained with new radiotracers, acquisition protocols, or PET scanners.
Internal, temporal, and external validations demonstrate the effectiveness of
the proposed method. Notably, for external validation datasets from unseen
domains, the proposed method achieved comparable or superior results relative
to methods trained with these datasets, in terms of quantitative metrics such
as normalized root mean-square error and structural similarity index measure.
Two nuclear medicine physicians evaluated the amyloid status as positive or
negative for the external validation datasets, with accuracies of 0.970 and
0.930 for readers 1 and 2, respectively. | [
"eess.IV",
"cs.CV",
"cs.LG",
"92C55, 68T05, 15A29, 65F22"
] | false |
2305.10018 | 2023-05-17T07:51:35Z | Transfer Learning for Fine-grained Classification Using Semi-supervised
Learning and Visual Transformers | [
"Manuel Lagunas",
"Brayan Impata",
"Victor Martinez",
"Virginia Fernandez",
"Christos Georgakis",
"Sofia Braun",
"Felipe Bertrand"
] | Fine-grained classification is a challenging task that involves identifying
subtle differences between objects within the same category. This task is
particularly challenging in scenarios where data is scarce. Visual transformers
(ViT) have recently emerged as a powerful tool for image classification, due to
their ability to learn highly expressive representations of visual data using
self-attention mechanisms. In this work, we explore Semi-ViT, a ViT model
fine-tuned using semi-supervised learning techniques, suitable for situations
where annotated data is lacking. This is particularly common in e-commerce,
where images are readily available but labels are noisy, nonexistent, or
expensive to obtain. Our results demonstrate that Semi-ViT outperforms
traditional convolutional neural networks (CNN) and ViTs, even when fine-tuned
with limited annotated data. These findings indicate that Semi-ViTs hold
significant promise for applications that require precise and fine-grained
classification of visual data. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | true |
2305.10115 | 2023-05-17T10:43:15Z | An Ensemble Deep Learning Approach for COVID-19 Severity Prediction
Using Chest CT Scans | [
"Sidra Aleem",
"Mayug Maniparambil",
"Suzanne Little",
"Noel O'Connor",
"Kevin McGuinness"
] | Chest X-rays have been widely used for COVID-19 screening; however, 3D
computed tomography (CT) is a more effective modality. We present our findings
on COVID-19 severity prediction from chest CT scans using the STOIC dataset. We
developed an ensemble deep learning based model that incorporates multiple
neural networks to improve predictions. To address data imbalance, we used
slicing functions and data augmentation. We further improved performance using
test time data augmentation. Our approach which employs a simple yet effective
ensemble of deep learning-based models with strong test time augmentations,
achieved results comparable to more complex methods and secured the fourth
position in the STOIC2021 COVID-19 AI Challenge. Our code is available online
at https://github.com/aleemsidra/stoic2021-baseline-finalphase-main. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.10388 | 2023-05-17T17:29:10Z | Raising the Bar for Certified Adversarial Robustness with Diffusion
Models | [
"Thomas Altstidl",
"David Dobre",
"Björn Eskofier",
"Gauthier Gidel",
"Leo Schwinn"
] | Certified defenses against adversarial attacks offer formal guarantees on the
robustness of a model, making them more reliable than empirical methods such as
adversarial training, whose effectiveness is often later reduced by unseen
attacks. Still, the limited certified robustness that is currently achievable
has been a bottleneck for their practical adoption. Gowal et al. and Wang et
al. have shown that generating additional training data using state-of-the-art
diffusion models can considerably improve the robustness of adversarial
training. In this work, we demonstrate that a similar approach can
substantially improve deterministic certified defenses. In addition, we provide
a list of recommendations to scale the robustness of certified training
approaches. One of our main insights is that the generalization gap, i.e., the
difference between the training and test accuracy of the original model, is a
good predictor of the magnitude of the robustness improvement when using
additional generated data. Our approach achieves state-of-the-art deterministic
robustness certificates on CIFAR-10 for the $\ell_2$ ($\epsilon = 36/255$) and
$\ell_\infty$ ($\epsilon = 8/255$) threat models, outperforming the previous
best results by $+3.95\%$ and $+1.39\%$, respectively. Furthermore, we report
similar improvements for CIFAR-100. | [
"cs.LG",
"cs.CR",
"cs.CV"
] | false |
2305.10459 | 2023-05-17T07:39:14Z | AnalogNAS: A Neural Network Design Framework for Accurate Inference with
Analog In-Memory Computing | [
"Hadjer Benmeziane",
"Corey Lammie",
"Irem Boybat",
"Malte Rasch",
"Manuel Le Gallo",
"Hsinyu Tsai",
"Ramachandran Muralidhar",
"Smail Niar",
"Ouarnoughi Hamza",
"Vijay Narayanan",
"Abu Sebastian",
"Kaoutar El Maghraoui"
] | The advancement of Deep Learning (DL) is driven by efficient Deep Neural
Network (DNN) design and new hardware accelerators. Current DNN design is
primarily tailored for general-purpose use and deployment on commercially
viable platforms. Inference at the edge requires low latency, compact and
power-efficient models, and must be cost-effective. Digital processors based on
typical von Neumann architectures are not conducive to edge AI given the large
amounts of required data movement in and out of memory. Conversely,
analog/mixed signal in-memory computing hardware accelerators can easily
transcend the memory wall of von Neumann architectures when accelerating
inference workloads. They offer increased area and power efficiency, which are
paramount in edge resource-constrained environments. In this paper, we propose
AnalogNAS, a framework for automated DNN design targeting deployment on analog
In-Memory Computing (IMC) inference accelerators. We conduct extensive hardware
simulations to demonstrate the performance of AnalogNAS on State-Of-The-Art
(SOTA) models in terms of accuracy and deployment efficiency on various Tiny
Machine Learning (TinyML) tasks. We also present experimental results that show
AnalogNAS models achieving higher accuracy than SOTA models when implemented on
a 64-core IMC chip based on Phase Change Memory (PCM). The AnalogNAS search
code is released at https://github.com/IBM/analog-nas. | [
"cs.AR",
"cs.CV",
"cs.LG"
] | false |
2305.10566 | 2023-05-17T20:59:10Z | Smiling Women Pitching Down: Auditing Representational and
Presentational Gender Biases in Image Generative AI | [
"Luhang Sun",
"Mian Wei",
"Yibing Sun",
"Yoo Ji Suh",
"Liwei Shen",
"Sijia Yang"
] | Generative AI models like DALL-E 2 can interpret textual prompts and generate
high-quality images exhibiting human creativity. Though public enthusiasm is
booming, systematic auditing of potential gender biases in AI-generated images
remains scarce. We addressed this gap by examining the prevalence of two
occupational gender biases (representational and presentational biases) in
15,300 DALL-E 2 images spanning 153 occupations, and assessed potential bias
amplification by benchmarking against 2021 census labor statistics and Google
Images. Our findings reveal that DALL-E 2 underrepresents women in
male-dominated fields while overrepresenting them in female-dominated
occupations. Additionally, DALL-E 2 images tend to depict more women than men
with smiling faces and downward-pitching heads, particularly in
female-dominated (vs. male-dominated) occupations. Our computational algorithm
auditing study demonstrates more pronounced representational and presentational
biases in DALL-E 2 compared to Google Images and calls for feminist
interventions to prevent such bias-laden AI-generated images from feeding back
into the media ecology. | [
"cs.CV",
"cs.AI",
"cs.CY"
] | false |
2305.09898 | 2023-05-17T02:18:31Z | Balancing Lexical and Semantic Quality in Abstractive Summarization | [
"Jeewoo Sul",
"Yong Suk Choi"
] | An important problem of the sequence-to-sequence neural models widely used in
abstractive summarization is exposure bias. To alleviate this problem,
re-ranking systems have been applied in recent years. Despite some performance
improvements, this approach remains underexplored. Previous works have mostly
specified the rank through the ROUGE score and aligned candidate summaries, but
there can be quite a large gap between the lexical overlap metric and semantic
similarity. In this paper, we propose a novel training method in which a
re-ranker balances the lexical and semantic quality. We further newly define
false positives in ranking and present a strategy to reduce their influence.
Experiments on the CNN/DailyMail and XSum datasets show that our method can
estimate the meaning of summaries without seriously degrading the lexical
aspect. More specifically, it achieves an 89.67 BERTScore on the CNN/DailyMail
dataset, reaching new state-of-the-art performance. Our code is publicly
available at https://github.com/jeewoo1025/BalSum. | [
"cs.CL"
] | false |
2305.09975 | 2023-05-17T06:15:41Z | Smart Word Suggestions for Writing Assistance | [
"Chenshuo Wang",
"Shaoguang Mao",
"Tao Ge",
"Wenshan Wu",
"Xun Wang",
"Yan Xia",
"Jonathan Tien",
"Dongyan Zhao"
] | Enhancing word usage is a desired feature for writing assistance. To further
advance research in this area, this paper introduces "Smart Word Suggestions"
(SWS) task and benchmark. Unlike other works, SWS emphasizes end-to-end
evaluation and presents a more realistic writing assistance scenario. This task
involves identifying words or phrases that require improvement and providing
substitution suggestions. The benchmark includes human-labeled data for
testing, a large distantly supervised dataset for training, and the framework
for evaluation. The test data includes 1,000 sentences written by English
learners, accompanied by over 16,000 substitution suggestions annotated by 10
native speakers. The training dataset comprises over 3.7 million sentences and
12.7 million suggestions generated through rules. Our experiments with seven
baselines demonstrate that SWS is a challenging task. Based on experimental
analysis, we suggest potential directions for future research on SWS. The
dataset and related code are available at
https://github.com/microsoft/SmartWordSuggestions. | [
"cs.CL"
] | true |
2305.10010 | 2023-05-17T07:40:12Z | AD-KD: Attribution-Driven Knowledge Distillation for Language Model
Compression | [
"Siyue Wu",
"Hongzhan Chen",
"Xiaojun Quan",
"Qifan Wang",
"Rui Wang"
] | Knowledge distillation has attracted a great deal of interest recently to
compress pre-trained language models. However, existing knowledge distillation
methods suffer from two limitations. First, the student model simply imitates
the teacher's behavior while ignoring the underlying reasoning. Second, these
methods usually focus on the transfer of sophisticated model-specific knowledge
but overlook data-specific knowledge. In this paper, we present a novel
attribution-driven knowledge distillation approach, which explores the
token-level rationale behind the teacher model based on Integrated Gradients
(IG) and transfers attribution knowledge to the student model. To enhance the
knowledge transfer of model reasoning and generalization, we further explore
multi-view attribution distillation on all potential decisions of the teacher.
Comprehensive experiments are conducted with BERT on the GLUE benchmark. The
experimental results demonstrate the superior performance of our approach to
several state-of-the-art methods. | [
"cs.CL"
] | false |
2305.10122 | 2023-05-17T11:01:38Z | Empirical Analysis of Oral and Nasal Vowels of Konkani | [
"Swapnil Fadte",
"Edna Vaz",
"Atul Kr. Ojha",
"Ramdas Karmali",
"Jyoti D. Pawar"
] | Konkani is a highly nasalised language which makes it unique among Indo-Aryan
languages. This work investigates the acoustic-phonetic properties of Konkani
oral and nasal vowels. For this study, speech samples from six speakers (3 male
and 3 female) were collected. A total of 74 unique sentences were used as a
part of the recording script, 37 each for oral and nasal vowels, respectively.
The final data set consisted of 1135 vowel phonemes. A comparative F1-F2 plot
of Konkani oral and nasal vowels is presented with an experimental result and
formant analysis. The average F1, F2 and F3 values are also reported for the
first time through experimentation for all nasal and oral vowels. This study
can be helpful for the linguistic research on vowels and speech synthesis
systems specific to the Konkani language. | [
"cs.CL"
] | false |
2305.10142 | 2023-05-17T11:55:32Z | Improving Language Model Negotiation with Self-Play and In-Context
Learning from AI Feedback | [
"Yao Fu",
"Hao Peng",
"Tushar Khot",
"Mirella Lapata"
] | We study whether multiple large language models (LLMs) can autonomously
improve each other in a negotiation game by playing, reflecting, and
criticizing. We are interested in this question because if LLMs were able to
improve each other, it would imply the possibility of creating strong AI agents
with minimal human intervention. We ask two LLMs to negotiate with each other,
playing the roles of a buyer and a seller, respectively. They aim to reach a
deal with the buyer targeting a lower price and the seller a higher one. A
third language model, playing the critic, provides feedback to a player to
improve the player's negotiation strategies. We let the two agents play
multiple rounds, using previous negotiation history and AI feedback as
in-context demonstrations to improve the model's negotiation strategy
iteratively. We use different LLMs (GPT and Claude) for different roles and use
the deal price as the evaluation metric. Our experiments reveal multiple
intriguing findings: (1) Only a subset of the language models we consider can
self-play and improve the deal price from AI feedback; weaker models either do
not understand the game's rules or cannot incorporate AI feedback for further
improvement. (2) Models' abilities to learn from the feedback differ when
playing different roles. For example, it is harder for Claude-instant to
improve as the buyer than as the seller. (3) When unrolling the game to
multiple rounds, stronger agents can consistently improve their performance by
meaningfully using previous experiences and iterative AI feedback, yet have a
higher risk of breaking the deal. We hope our work provides insightful initial
explorations of having models autonomously improve each other with game playing
and AI feedback. | [
"cs.CL"
] | true |
2305.10149 | 2023-05-17T12:12:46Z | Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog | [
"Fanqi Wan",
"Weizhou Shen",
"Ke Yang",
"Xiaojun Quan",
"Wei Bi"
] | Retrieving proper domain knowledge from an external database lies at the
heart of end-to-end task-oriented dialog systems to generate informative
responses. Most existing systems blend knowledge retrieval with response
generation and optimize them with direct supervision from reference responses,
leading to suboptimal retrieval performance when the knowledge base becomes
large-scale. To address this, we propose to decouple knowledge retrieval from
response generation and introduce a multi-grained knowledge retriever (MAKER)
that includes an entity selector to search for relevant entities and an
attribute selector to filter out irrelevant attributes. To train the retriever,
we propose a novel distillation objective that derives supervision signals from
the response generator. Experiments conducted on three standard benchmarks with
both small and large-scale knowledge bases demonstrate that our retriever
performs knowledge retrieval more effectively than existing methods. Our code
has been made publicly
available.\footnote{https://github.com/18907305772/MAKER} | [
"cs.CL"
] | false |
2305.10190 | 2023-05-17T13:15:10Z | Variable-length Neural Interlingua Representations for Zero-shot Neural
Machine Translation | [
"Zhuoyuan Mao",
"Haiyue Song",
"Raj Dabre",
"Chenhui Chu",
"Sadao Kurohashi"
] | The language-independency of encoded representations within multilingual
neural machine translation (MNMT) models is crucial for their generalization
ability on zero-shot translation. Neural interlingua representations have been
shown as an effective method for achieving this. However, fixed-length neural
interlingua representations introduced in previous work can limit their
flexibility and representation ability. In this study, we introduce a novel
method to enhance neural interlingua representations by making their length
variable, thereby overcoming the constraint of fixed-length neural interlingua
representations. Our empirical results on zero-shot translation on OPUS, IWSLT,
and Europarl datasets demonstrate stable model convergence and superior
zero-shot translation results compared to fixed-length neural interlingua
representations. However, our analysis reveals the suboptimal efficacy of our
approach in translating from certain source languages, wherein we pinpoint the
defective model component in our proposed method. | [
"cs.CL"
] | false |
2305.10195 | 2023-05-17T13:18:28Z | Boosting Distress Support Dialogue Responses with Motivational
Interviewing Strategy | [
"Anuradha Welivita",
"Pearl Pu"
] | AI-driven chatbots have become an emerging solution to address psychological
distress. Due to the lack of psychotherapeutic data, researchers use dialogues
scraped from online peer support forums to train them. But since the responses
in such platforms are not given by professionals, they contain both conforming
and non-conforming responses. In this work, we attempt to recognize these
conforming and non-conforming response types present in online distress-support
dialogues using labels adapted from a well-established behavioral coding scheme
named Motivational Interviewing Treatment Integrity (MITI) code and show how
some response types could be rephrased into a more MI adherent form that can,
in turn, enable chatbot responses to be more compliant with the MI strategy. As
a proof of concept, we build several rephrasers by fine-tuning Blender and GPT3
to rephrase MI non-adherent "Advise without permission" responses into "Advise
with permission". We show how this can be achieved with the construction of
pseudo-parallel corpora avoiding costs for human labor. Through automatic and
human evaluation we show that in the presence of less training data, techniques
such as prompting and data augmentation can be used to produce substantially
good rephrasings that reflect the intended style and preserve the content of
the original text. | [
"cs.CL"
] | false |
2305.10231 | 2023-05-17T14:12:29Z | OpenSLU: A Unified, Modularized, and Extensible Toolkit for Spoken
Language Understanding | [
"Libo Qin",
"Qiguang Chen",
"Xiao Xu",
"Yunlong Feng",
"Wanxiang Che"
] | Spoken Language Understanding (SLU) is one of the core components of a
task-oriented dialogue system, which aims to extract the semantic meaning of
user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an
open-source toolkit to provide a unified, modularized, and extensible toolkit
for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models
for both single-intent and multi-intent scenarios, which support both
non-pretrained and pretrained models simultaneously. Additionally, OpenSLU is
highly modularized and extensible by decomposing the model architecture,
inference, and learning process into reusable modules, which allows researchers
to quickly set up SLU experiments with highly flexible configurations. OpenSLU
is implemented based on PyTorch, and released at
\url{https://github.com/LightChen233/OpenSLU}. | [
"cs.CL"
] | false |
2305.10266 | 2023-05-17T14:58:06Z | Searching for Needles in a Haystack: On the Role of Incidental
Bilingualism in PaLM's Translation Capability | [
"Eleftheria Briakou",
"Colin Cherry",
"George Foster"
] | Large, multilingual language models exhibit surprisingly good zero- or
few-shot machine translation capabilities, despite having never seen the
intentionally-included translation examples provided to typical neural
translation systems. We investigate the role of incidental bilingualism -- the
unintentional consumption of bilingual signals, including translation examples
-- in explaining the translation capabilities of large language models, taking
the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method
approach to measure and understand incidental bilingualism at scale. We show
that PaLM is exposed to over 30 million translation pairs across at least 44
languages. Furthermore, the amount of incidental bilingual content is highly
correlated with the amount of monolingual in-language content for non-English
languages. We relate incidental bilingual content to zero-shot prompts and show
that it can be used to mine new prompts to improve PaLM's out-of-English
zero-shot translation quality. Finally, in a series of small-scale ablations,
we show that its presence has a substantial impact on translation capabilities,
although this impact diminishes with model scale. | [
"cs.CL"
] | true |
2305.10407 | 2023-05-17T17:47:31Z | BAD: BiAs Detection for Large Language Models in the context of
candidate screening | [
"Nam Ho Koh",
"Joseph Plata",
"Joyce Chai"
] | Application Tracking Systems (ATS) have allowed talent managers, recruiters,
and college admissions committees to process large volumes of potential
candidate applications efficiently. Traditionally, this screening process was
conducted manually, creating major bottlenecks due to the quantity of
applications and introducing many instances of human bias. The advent of large
language models (LLMs) such as ChatGPT and the potential of adopting methods to
current automated application screening raises additional bias and fairness
issues that must be addressed. In this project, we wish to identify and
quantify the instances of social bias in ChatGPT and other OpenAI LLMs in the
context of candidate screening in order to demonstrate how the use of these
models could perpetuate existing biases and inequalities in the hiring process. | [
"cs.CL",
"I.2, I.2.7",
"F.2.2, I.2.7"
] | false |
2305.10561 | 2023-05-17T20:41:51Z | Massively Multi-Lingual Event Understanding: Extraction, Visualization,
and Search | [
"Chris Jenkins",
"Shantanu Agarwal",
"Joel Barry",
"Steven Fincke",
"Elizabeth Boschee"
] | In this paper, we present ISI-Clear, a state-of-the-art, cross-lingual,
zero-shot event extraction system and accompanying user interface for event
visualization & search. Using only English training data, ISI-Clear makes
global events available on-demand, processing user-supplied text in 100
languages ranging from Afrikaans to Yiddish. We provide multiple event-centric
views of extracted events, including both a graphical representation and a
document-level summary. We also integrate existing cross-lingual search
algorithms with event extraction capabilities to provide cross-lingual
event-centric search, allowing English-speaking users to search over events
automatically extracted from a corpus of non-English documents, using either
English natural language queries (e.g. cholera outbreaks in Iran) or structured
queries (e.g. find all events of type Disease-Outbreak with agent cholera and
location Iran). | [
"cs.CL"
] | false |
2305.10610 | 2023-05-17T23:41:30Z | Solving Cosine Similarity Underestimation between High Frequency Words
by L2 Norm Discounting | [
"Saeth Wannasuphoprasit",
"Yi Zhou",
"Danushka Bollegala"
] | Cosine similarity between two words, computed using their contextualised
token embeddings obtained from masked language models (MLMs) such as BERT has
shown to underestimate the actual similarity between those words (Zhou et al.,
2022). This similarity underestimation problem is particularly severe for
highly frequent words. Although this problem has been noted in prior work, no
solution has been proposed thus far. We observe that the L2 norm of
contextualised embeddings of a word correlates with its log-frequency in the
pretraining corpus. Consequently, the larger L2 norms associated with the
highly frequent words reduce the cosine similarity values measured between
them, thus underestimating the similarity scores. To solve this issue, we
propose a method to discount the L2 norm of a contextualised word embedding by
the frequency of that word in a corpus when measuring the cosine similarities
between words. We show that the so-called stop words behave differently from
the rest of the words, which require special consideration during their
discounting process. Experimental results on a contextualised word similarity
dataset show that our proposed discounting method accurately solves the
similarity underestimation problem. | [
"cs.CL"
] | false |
2306.05539 | 2023-05-17T22:30:01Z | Instruction Tuned Models are Quick Learners | [
"Himanshu Gupta",
"Saurabh Arjun Sawant",
"Swaroop Mishra",
"Mutsumi Nakamura",
"Arindam Mitra",
"Santosh Mashetty",
"Chitta Baral"
] | Instruction tuning of language models has demonstrated the ability to enhance
model generalization to unseen tasks via in-context learning using a few
examples. However, typical supervised learning still requires a plethora of
downstream training data for finetuning. Often in real-world situations, there
is a scarcity of data available for finetuning, falling somewhere between few
shot inference and fully supervised finetuning. In this work, we demonstrate
the sample efficiency of instruction tuned models over various tasks by
estimating the minimal downstream training data required by them to perform
transfer learning and match the performance of state-of-the-art (SOTA)
supervised models. We conduct experiments on 119 tasks from Super Natural
Instructions (SuperNI) in both the single task learning (STL) and multi task
learning (MTL) settings. Our findings reveal that, in the STL setting,
instruction tuned models equipped with 25% of the downstream train data surpass
the SOTA performance on the downstream tasks. In the MTL setting, an
instruction tuned model trained on only 6% of downstream training data achieves
SOTA, while using 100% of the training data yields a 3.69-point improvement
(ROUGE-L 74.68) over the previous SOTA. We conduct an analysis on
T5 vs Tk-Instruct by developing several baselines to demonstrate that
instruction tuning aids in increasing both sample efficiency and transfer
learning. Additionally, we observe a consistent ~4% performance increase in
both settings when pre-finetuning is performed with instructions. Finally, we
conduct a categorical study and find that contrary to previous results, tasks
in the question rewriting and title generation categories suffer from
instruction tuning. | [
"cs.CL"
] | false |
2305.09892 | 2023-05-17T02:06:47Z | Clustering-Aware Negative Sampling for Unsupervised Sentence
Representation | [
"Jinghao Deng",
"Fanqi Wan",
"Tao Yang",
"Xiaojun Quan",
"Rui Wang"
] | Contrastive learning has been widely studied in sentence representation
learning. However, earlier works mainly focus on the construction of positive
examples, while in-batch samples are often simply treated as negative examples.
This approach overlooks the importance of selecting appropriate negative
examples, potentially leading to a scarcity of hard negatives and the inclusion
of false negatives. To address these issues, we propose ClusterNS
(Clustering-aware Negative Sampling), a novel method that incorporates cluster
information into contrastive learning for unsupervised sentence representation
learning. We apply a modified K-means clustering algorithm to supply hard
negatives and recognize in-batch false negatives during training, aiming to
solve the two issues in one unified framework. Experiments on semantic textual
similarity (STS) tasks demonstrate that our proposed ClusterNS compares
favorably with baselines in unsupervised sentence representation learning. Our
code has been made publicly available. | [
"cs.CL",
"cs.AI"
] | false |
2305.09990 | 2023-05-17T06:33:26Z | Dual Semantic Knowledge Composed Multimodal Dialog Systems | [
"Xiaolin Chen",
"Xuemeng Song",
"Yinwei Wei",
"Liqiang Nie",
"Tat-Seng Chua"
] | Textual response generation is an essential task for multimodal task-oriented
dialog systems. Although existing studies have achieved fruitful progress, they
still suffer from two critical limitations: 1) focusing on the attribute
knowledge but ignoring the relation knowledge that can reveal the correlations
between different entities and hence promote the response generation, and 2)
only conducting the cross-entropy loss based output-level supervision but
lacking the representation-level regularization. To address these limitations,
we devise a novel multimodal task-oriented dialog system (named MDS-S2).
Specifically, MDS-S2 first simultaneously acquires the context related
attribute and relation knowledge from the knowledge base, whereby the
non-intuitive relation knowledge is extracted by the n-hop graph walk.
Thereafter, considering that the attribute knowledge and relation knowledge can
benefit the responding to different levels of questions, we design a
multi-level knowledge composition module in MDS-S2 to obtain the latent
composed response representation. Moreover, we devise a set of latent query
variables to distill the semantic information from the composed response
representation and the ground truth response representation, respectively, and
thus conduct the representation-level semantic regularization. Extensive
experiments on a public dataset have verified the superiority of our proposed
MDS-S2. We have released the codes and parameters to facilitate the research
community. | [
"cs.CL",
"cs.MM"
] | false |
2305.10013 | 2023-05-17T07:48:28Z | When Gradient Descent Meets Derivative-Free Optimization: A Match Made
in Black-Box Scenario | [
"Chengcheng Han",
"Liqing Cui",
"Renyu Zhu",
"Jianing Wang",
"Nuo Chen",
"Qiushi Sun",
"Xiang Li",
"Ming Gao"
] | Large pre-trained language models (PLMs) have garnered significant attention
for their versatility and potential for solving a wide spectrum of natural
language processing (NLP) tasks. However, the cost of running these PLMs may be
prohibitive. Furthermore, PLMs may not be open-sourced due to commercial
considerations and potential risks of misuse, such as GPT-3. The parameters and
gradients of PLMs are unavailable in this scenario. To solve the issue,
black-box tuning has been proposed, which utilizes derivative-free optimization
(DFO), instead of gradient descent, for training task-specific continuous
prompts. However, these gradient-free methods still exhibit a significant gap
compared to gradient-based methods. In this paper, we introduce gradient
descent into black-box tuning scenario through knowledge distillation.
Furthermore, we propose a novel method GDFO, which integrates gradient descent
and derivative-free optimization to optimize task-specific continuous prompts
in a harmonized manner. Experimental results show that GDFO can achieve
significant performance gains over previous state-of-the-art methods. | [
"cs.CL",
"cs.AI"
] | false |
2305.10096 | 2023-05-17T10:03:03Z | Use of a Taxonomy of Empathetic Response Intents to Control and
Interpret Empathy in Neural Chatbots | [
"Anuradha Welivita",
"Pearl Pu"
] | A recent trend in the domain of open-domain conversational agents is enabling
them to converse empathetically to emotional prompts. Current approaches either
follow an end-to-end approach or condition the responses on similar emotion
labels to generate empathetic responses. But empathy is a broad concept that
refers to the cognitive and emotional reactions of an individual to the
observed experiences of another and it is more complex than mere mimicry of
emotion. Hence, it requires identifying complex human conversational strategies
and dynamics in addition to generic emotions to control and interpret
empathetic responding capabilities of chatbots. In this work, we make use of a
taxonomy of eight empathetic response intents in addition to generic emotion
categories in building a dialogue response generation model capable of
generating empathetic responses in a controllable and interpretable manner. It
consists of two modules: 1) a response emotion/intent prediction module; and 2)
a response generation module. We propose several rule-based and neural
approaches to predict the next response's emotion/intent and generate responses
conditioned on these predicted emotions/intents. Automatic and human evaluation
results emphasize the importance of the use of the taxonomy of empathetic
response intents in producing more diverse and empathetically more appropriate
responses than end-to-end models. | [
"cs.CL",
"cs.AI"
] | false |
2305.10136 | 2023-05-17T11:39:31Z | Additive manifesto decomposition: A policy domain aware method for
understanding party positioning | [
"Tanise Ceron",
"Dmitry Nikolaev",
"Sebastian Padó"
] | Automatic extraction of party (dis)similarities from texts such as party
election manifestos or parliamentary speeches plays an increasing role in
computational political science. However, existing approaches are fundamentally
limited to targeting only global party (dis)-similarity: they condense the
relationship between a pair of parties into a single figure, their similarity.
In aggregating over all policy domains (e.g., health or foreign policy), they
do not provide any qualitative insights into which domains parties agree or
disagree on. This paper proposes a workflow for estimating policy domain aware
party similarity that overcomes this limitation. The workflow covers (a)
definition of suitable policy domains; (b) automatic labeling of domains, if no
manual labels are available; (c) computation of domain-level similarities and
aggregation at a global level; (d) extraction of interpretable party positions
on major policy axes via multidimensional scaling. We evaluate our workflow on
manifestos from the German federal elections. We find that our method (a)
yields high correlation when predicting party similarity at a global level and
(b) provides accurate party-specific positions, even with automatically
labelled policy domains. | [
"cs.CL",
"cs.CY"
] | false |
2305.10167 | 2023-05-17T12:43:29Z | Pragmatic Reasoning in Structured Signaling Games | [
"Emil Carlsson",
"Devdatt Dubhashi"
] | In this work we introduce a structured signaling game, an extension of the
classical signaling game with a similarity structure between meanings in the
context, along with a variant of the Rational Speech Act (RSA) framework which
we call structured-RSA (sRSA) for pragmatic reasoning in structured domains. We
explore the behavior of the sRSA in the domain of color and show that pragmatic
agents using sRSA on top of semantic representations, derived from the World
Color Survey, attain efficiency very close to the information theoretic limit
after only 1 or 2 levels of recursion. We also explore the interaction between
pragmatic reasoning and learning in a multi-agent reinforcement learning
framework. Our results illustrate that artificial agents using sRSA develop
communication closer to the information theoretic frontier compared to agents
using RSA and just reinforcement learning. We also find that the ambiguity of
the semantic representation increases as the pragmatic agents are allowed to
perform deeper reasoning about each other during learning. | [
"cs.AI",
"cs.CL"
] | false |
2305.10172 | 2023-05-17T12:55:52Z | Knowledge-enhanced Mixed-initiative Dialogue System for Emotional
Support Conversations | [
"Yang Deng",
"Wenxuan Zhang",
"Yifei Yuan",
"Wai Lam"
] | Unlike empathetic dialogues, the system in emotional support conversations
(ESC) is expected to not only convey empathy for comforting the help-seeker,
but also proactively assist in exploring and addressing their problems during
the conversation. In this work, we study the problem of mixed-initiative ESC
where the user and system can both take the initiative in leading the
conversation. Specifically, we conduct a novel analysis on mixed-initiative ESC
systems with a tailor-designed schema that divides utterances into different
types with speaker roles and initiative types. Four emotional support metrics
are proposed to evaluate the mixed-initiative interactions. The analysis
reveals the necessity and challenges of building mixed-initiative ESC systems.
In the light of this, we propose a knowledge-enhanced mixed-initiative
framework (KEMI) for ESC, which retrieves actual case knowledge from a
large-scale mental health knowledge graph for generating mixed-initiative
responses. Experimental results on two ESC datasets show the superiority of
KEMI in both content-preserving evaluation and mixed initiative related
analyses. | [
"cs.CL",
"cs.IR"
] | false |
2305.10196 | 2023-05-17T13:19:01Z | A Survey on Zero Pronoun Translation | [
"Longyue Wang",
"Siyou Liu",
"Mingzhou Xu",
"Linfeng Song",
"Shuming Shi",
"Zhaopeng Tu"
] | Zero pronouns (ZPs) are frequently omitted in pro-drop languages (e.g.
Chinese, Hungarian, and Hindi), but should be recalled in non-pro-drop
languages (e.g. English). This phenomenon has been studied extensively in
machine translation (MT), as it poses a significant challenge for MT systems
due to the difficulty in determining the correct antecedent for the pronoun.
This survey paper highlights the major works that have been undertaken in zero
pronoun translation (ZPT) after the neural revolution, so that researchers can
recognise the current state and future directions of this field. We provide an
organisation of the literature based on evolution, dataset, method and
evaluation. In addition, we compare and analyze competing models and evaluation
metrics on different benchmarks. We uncover a number of insightful findings
such as: 1) ZPT is in line with the development trend of large language models;
2) data limitations cause learning bias across languages and domains; 3)
performance improvements are often reported on single benchmarks, but advanced
methods are still far from real-world use; 4) general-purpose metrics are not
reliable on nuances and complexities of ZPT, emphasizing the necessity of
targeted metrics; 5) apart from commonly-cited errors, ZPs will cause risks of
gender bias. | [
"cs.CL",
"cs.AI"
] | false |
2305.10204 | 2023-05-17T13:26:57Z | Shielded Representations: Protecting Sensitive Attributes Through
Iterative Gradient-Based Projection | [
"Shadi Iskander",
"Kira Radinsky",
"Yonatan Belinkov"
] | Natural language processing models tend to learn and encode social biases
present in the data. One popular approach for addressing such biases is to
eliminate encoded information from the model's representations. However,
current methods are restricted to removing only linearly encoded information.
In this work, we propose Iterative Gradient-Based Projection (IGBP), a novel
method for removing non-linear encoded concepts from neural representations.
Our method consists of iteratively training neural classifiers to predict a
particular attribute we seek to eliminate, followed by a projection of the
representation on a hypersurface, such that the classifiers become oblivious to
the target attribute. We evaluate the effectiveness of our method on the task
of removing gender and race information as sensitive attributes. Our results
demonstrate that IGBP is effective in mitigating bias through intrinsic and
extrinsic evaluations, with minimal impact on downstream task accuracy. | [
"cs.CL",
"cs.AI"
] | false |
2305.10236 | 2023-05-17T14:26:00Z | A quantitative study of NLP approaches to question difficulty estimation | [
"Luca Benedetto"
] | Recent years have witnessed an increase in the amount of research on the
task of Question Difficulty Estimation from Text (QDET) with Natural Language
Processing (NLP) techniques, with the goal of targeting the limitations of
traditional
approaches to question calibration. However, almost the entirety of previous
research focused on single silos, without performing quantitative comparisons
between different models or across datasets from different educational domains.
In this work, we aim at filling this gap, by quantitatively analyzing several
approaches proposed in previous research, and comparing their performance on
three publicly available real world datasets containing questions of different
types from different educational domains. Specifically, we consider reading
comprehension Multiple Choice Questions (MCQs), science MCQs, and math
questions. We find that Transformer based models are the best performing across
different educational domains, with DistilBERT performing almost as well as
BERT, and that they outperform other approaches even on smaller datasets. As
for the other models, the hybrid ones often outperform the ones based on a
single type of features, the ones based on linguistic features perform well on
reading comprehension questions, while frequency based features (TF-IDF) and
word embeddings (word2vec) perform better in domain knowledge assessment. | [
"cs.CL",
"cs.LG"
] | false |
2305.10284 | 2023-05-17T15:20:31Z | Towards More Robust NLP System Evaluation: Handling Missing Scores in
Benchmarks | [
"Anas Himmi",
"Ekhine Irurozki",
"Nathan Noiry",
"Stephan Clemencon",
"Pierre Colombo"
] | The evaluation of natural language processing (NLP) systems is crucial for
advancing the field, but current benchmarking approaches often assume that all
systems have scores available for all tasks, which is not always practical. In
reality, several factors such as the cost of running baseline, private systems,
computational limitations, or incomplete data may prevent some systems from
being evaluated on entire tasks. This paper formalizes an existing problem in
NLP research: benchmarking when some systems' scores are missing on a task,
and proposes a novel approach to address it. Our method utilizes a compatible
partial ranking approach to impute missing data, which is then aggregated using
the Borda count method. It includes two refinements designed specifically for
scenarios where either task-level or instance-level scores are available. We
also introduce an extended benchmark, which contains over 131 million scores,
an order of magnitude larger than existing benchmarks. We validate our methods
and demonstrate their effectiveness in addressing the challenge of missing
system evaluation on an entire task. This work highlights the need for more
comprehensive benchmarking approaches that can handle real-world scenarios
where not all systems are evaluated on the entire task. | [
"cs.CL",
"cs.AI"
] | false |
2305.10384 | 2023-05-17T17:21:10Z | Logit-Based Ensemble Distribution Distillation for Robust Autoregressive
Sequence Uncertainties | [
"Yassir Fathullah",
"Guoxuan Xia",
"Mark Gales"
] | Efficiently and reliably estimating uncertainty is an important objective in
deep learning. It is especially pertinent to autoregressive sequence tasks,
where training and inference costs are typically very high. However, existing
research has predominantly focused on tasks with static data such as image
classification. In this work, we investigate Ensemble Distribution Distillation
(EDD) applied to large-scale natural language sequence-to-sequence data. EDD
aims to compress the superior uncertainty performance of an expensive (teacher)
ensemble into a cheaper (student) single model. Importantly, the ability to
separate knowledge (epistemic) and data (aleatoric) uncertainty is retained.
Existing probability-space approaches to EDD, however, are difficult to scale
to large vocabularies. We show, for modern transformer architectures on
large-scale translation tasks, that modelling the ensemble logits, instead of
softmax probabilities, leads to significantly better students. Moreover, the
students surprisingly even outperform Deep Ensembles by up to ~10% AUROC on
out-of-distribution detection, whilst matching them at in-distribution
translation. | [
"cs.LG",
"cs.CL"
] | false |
2305.10425 | 2023-05-17T17:57:10Z | SLiC-HF: Sequence Likelihood Calibration with Human Feedback | [
"Yao Zhao",
"Rishabh Joshi",
"Tianqi Liu",
"Misha Khalman",
"Mohammad Saleh",
"Peter J. Liu"
] | Learning from human feedback has been shown to be effective at aligning
language models with human preferences. Past work has often relied on
Reinforcement Learning from Human Feedback (RLHF), which optimizes the language
model using reward scores assigned from a reward model trained on human
preference data. In this work we show how the recently introduced Sequence
Likelihood Calibration (SLiC), can also be used to effectively learn from human
preferences (SLiC-HF). Furthermore, we demonstrate this can be done with human
feedback data collected for a different model, similar to off-policy, offline
RL data. Automatic and human evaluation experiments on the TL;DR summarization
task show that SLiC-HF significantly improves supervised fine-tuning baselines.
Furthermore, SLiC-HF presents a competitive alternative to the PPO RLHF
implementation used in past work while being much simpler to implement, easier
to tune and more computationally efficient in practice. | [
"cs.CL",
"cs.AI"
] | true |
2305.09858 | 2023-05-17T00:08:36Z | Knowledge Graph Completion Models are Few-shot Learners: An Empirical
Study of Relation Labeling in E-commerce with LLMs | [
"Jiao Chen",
"Luyi Ma",
"Xiaohan Li",
"Nikhil Thakurdesai",
"Jianpeng Xu",
"Jason H. D. Cho",
"Kaushiki Nag",
"Evren Korpeoglu",
"Sushant Kumar",
"Kannan Achan"
] | Knowledge Graphs (KGs) play a crucial role in enhancing e-commerce system
performance by providing structured information about entities and their
relationships, such as complementary or substitutable relations between
products or product types, which can be utilized in recommender systems.
However, relation labeling in KGs remains a challenging task due to the dynamic
nature of e-commerce domains and the associated cost of human labor. Recently,
breakthroughs in Large Language Models (LLMs) have shown surprising results in
numerous natural language processing tasks. In this paper, we conduct an
empirical study of LLMs for relation labeling in e-commerce KGs, investigating
their powerful learning capabilities in natural language and effectiveness in
predicting relations between product types with limited labeled data. We
evaluate various LLMs, including PaLM and GPT-3.5, on benchmark datasets,
demonstrating their ability to achieve competitive performance compared to
humans on relation labeling tasks using just 1 to 5 labeled examples per
relation. Additionally, we experiment with different prompt engineering
techniques to examine their impact on model performance. Our results show that
LLMs significantly outperform existing KG completion models in relation
labeling for e-commerce KGs and exhibit performance strong enough to replace
human labeling. | [
"cs.IR",
"cs.AI",
"cs.CL",
"cs.LG"
] | false |
2305.09864 | 2023-05-17T00:34:36Z | The Jaseci Programming Paradigm and Runtime Stack: Building Scale-out
Production Applications Easy and Fast | [
"Jason Mars",
"Yiping Kang",
"Roland Daynauth",
"Baichuan Li",
"Ashish Mahendra",
"Krisztian Flautner",
"Lingjia Tang"
] | Today's production scale-out applications include many sub-application
components, such as storage backends, logging infrastructure and AI models.
These components have drastically different characteristics, are required to
work in collaboration, and interface with each other as microservices. This
leads to increasingly high complexity in developing, optimizing, configuring,
and deploying scale-out applications, raising the barrier to entry for most
individuals and small teams. We developed a novel co-designed runtime system,
Jaseci, and programming language, Jac, which aims to reduce this complexity.
The key design principle throughout Jaseci's design is to raise the level of
abstraction by moving as much of the scale-out data management, microservice
componentization, and live update complexity into the runtime stack to be
automated and optimized automatically. We use real-world AI applications to
demonstrate Jaseci's benefit for application performance and developer
productivity. | [
"cs.CL",
"cs.DC",
"cs.PL",
"cs.SE"
] | false |
2305.09877 | 2023-05-17T01:17:57Z | Semantic Similarity Measure of Natural Language Text through Machine
Learning and a Keyword-Aware Cross-Encoder-Ranking Summarizer -- A Case Study
Using UCGIS GIS&T Body of Knowledge | [
"Yuanyuan Tian",
"Wenwen Li",
"Sizhe Wang",
"Zhining Gu"
] | Initiated by the University Consortium of Geographic Information Science
(UCGIS), GIS&T Body of Knowledge (BoK) is a community-driven endeavor to
define, develop, and document geospatial topics related to geographic
information science and technologies (GIS&T). In recent years, GIS&T BoK has
undergone rigorous development in terms of its topic re-organization and
content updating, resulting in a new digital version of the project. While the
BoK topics provide useful materials for researchers and students to learn about
GIS, the semantic relationships among the topics, such as semantic similarity,
should also be identified so that a better and automated topic navigation can
be achieved. Currently, the related topics are either defined manually by
editors or authors, which may result in an incomplete assessment of topic
relationship. To address this challenge, our research evaluates the
effectiveness of multiple natural language processing (NLP) techniques in
extracting semantics from text, including both deep neural networks and
traditional machine learning approaches. Besides, a novel text summarization -
KACERS (Keyword-Aware Cross-Encoder-Ranking Summarizer) - is proposed to
generate a semantic summary of scientific publications. By identifying the
semantic linkages among key topics, this work provides guidance for future
development and content organization of the GIS&T BoK project. It also offers a
new perspective on the use of machine learning techniques for analyzing
scientific publications, and demonstrates the potential of the KACERS summarizer in
semantic understanding of long text documents. | [
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2305.09993 | 2023-05-17T06:35:43Z | Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs
Sampling | [
"Weijia Xu",
"Andrzej Banburski-Fahey",
"Nebojsa Jojic"
] | We introduce Reprompting, an iterative sampling algorithm that searches for
the Chain-of-Thought (CoT) recipes for a given task without human intervention.
Through Gibbs sampling, we infer CoT recipes that work consistently well for a
set of training samples. Our method iteratively samples new recipes using
previously sampled solutions as parent prompts to solve other training
problems. On five Big-Bench Hard tasks that require multi-step reasoning,
Reprompting achieves consistently better performance than the zero-shot,
few-shot, and human-written CoT baselines. Reprompting can also facilitate
transfer of knowledge from a stronger model to a weaker model leading to
substantially improved performance of the weaker model. Overall, Reprompting
brings up to +17 point improvements over the previous state-of-the-art method
that uses human-written CoT prompts. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2305.10349 | 2023-05-17T16:32:40Z | Interactive Learning of Hierarchical Tasks from Dialog with GPT | [
"Lane Lawley",
"Christopher J. MacLellan"
] | We present a system for interpretable, symbolic, interactive task learning
from dialog using a GPT model as a conversational front-end. The learned tasks
are represented as hierarchical decompositions of predicate-argument structures
with scoped variable arguments. By using a GPT model to convert interactive
dialog into a semantic representation, and then recursively asking for
definitions of unknown steps, we show that hierarchical task knowledge can be
acquired and re-used in a natural and unrestrained conversational environment.
We compare our system to a similar architecture using a more conventional
parser and show that our system tolerates a much wider variety of linguistic
variance. | [
"cs.HC",
"cs.AI",
"cs.CL"
] | false |
2305.10427 | 2023-05-17T17:57:34Z | Accelerating Transformer Inference for Translation via Parallel Decoding | [
"Andrea Santilli",
"Silvio Severino",
"Emilian Postolache",
"Valentino Maiorca",
"Michele Mancusi",
"Riccardo Marin",
"Emanuele Rodolà"
] | Autoregressive decoding limits the efficiency of transformers for Machine
Translation (MT). The community proposed specific network architectures and
learning-based methods to solve this issue, which are expensive and require
changes to the MT model, trading inference speed at the cost of the translation
quality. In this paper, we propose to address the problem from the point of
view of decoding algorithms, as a less explored but rather compelling
direction. We propose to reframe the standard greedy autoregressive decoding of
MT with a parallel formulation leveraging Jacobi and Gauss-Seidel fixed-point
iteration methods for fast inference. This formulation allows speeding up
existing models without training or modifications while retaining translation
quality. We present three parallel decoding algorithms and test them on
different languages and models showing how the parallelization introduces a
speedup up to 38% w.r.t. the standard autoregressive decoding and nearly 2x
when scaling the method on parallel resources. Finally, we introduce a decoding
dependency graph visualizer (DDGviz) that lets us see how the model has learned
the conditional dependence between tokens and inspect the decoding procedure. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.10496 | 2023-05-17T18:05:49Z | Incorporating Attribution Importance for Improving Faithfulness Metrics | [
"Zhixue Zhao",
"Nikolaos Aletras"
] | Feature attribution methods (FAs) are popular approaches for providing
insights into the model reasoning process of making predictions. The more
faithful a FA is, the more accurately it reflects which parts of the input are
more important for the prediction. Widely used faithfulness metrics, such as
sufficiency and comprehensiveness use a hard erasure criterion, i.e. entirely
removing or retaining the top most important tokens ranked by a given FA and
observing the changes in predictive likelihood. However, this hard criterion
ignores the importance of each individual token, treating them all equally for
computing sufficiency and comprehensiveness. In this paper, we propose a simple
yet effective soft erasure criterion. Instead of entirely removing or retaining
tokens from the input, we randomly mask parts of the token vector
representations proportionately to their FA importance. Extensive experiments
across various natural language processing tasks and different FAs show that
our soft-sufficiency and soft-comprehensiveness metrics consistently prefer
more faithful explanations compared to hard sufficiency and comprehensiveness.
Our code: https://github.com/casszhao/SoftFaith | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.10510 | 2023-05-17T18:30:05Z | ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores
Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource
Languages | [
"Sourojit Ghosh",
"Aylin Caliskan"
] | In this multicultural age, language translation is one of the most performed
tasks, and it is becoming increasingly AI-moderated and automated. As a novel
AI system, ChatGPT claims to be proficient in such translation tasks and in
this paper, we put that claim to the test. Specifically, we examine ChatGPT's
accuracy in translating between English and languages that exclusively use
gender-neutral pronouns. We center this study around Bengali, the 7$^{th}$ most
spoken language globally, but also generalize our findings across five other
languages: Farsi, Malay, Tagalog, Thai, and Turkish. We find that ChatGPT
perpetuates gender defaults and stereotypes assigned to certain occupations
(e.g. man = doctor, woman = nurse) or actions (e.g. woman = cook, man = go to
work), as it converts gender-neutral pronouns in languages to `he' or `she'. We
also observe ChatGPT completely failing to translate the English gender-neutral
pronoun `they' into equivalent gender-neutral pronouns in other languages, as
it produces translations that are incoherent and incorrect. While it does
respect and provide appropriately gender-marked versions of Bengali words when
prompted with gender information in English, ChatGPT appears to confer a higher
respect to men than to women in the same occupation. We conclude that ChatGPT
exhibits the same gender biases which have been demonstrated for tools like
Google Translate or MS Translator, as we provide recommendations for a human
centered approach for future designers of AIs that perform language translation
to better accommodate such low-resource languages. | [
"cs.CY",
"cs.AI",
"cs.CL"
] | false |
2305.10528 | 2023-05-17T19:22:24Z | Scalable and Safe Remediation of Defective Actions in Self-Learning
Conversational Systems | [
"Sarthak Ahuja",
"Mohammad Kachuee",
"Fateme Sheikholeslami",
"Weiqing Liu",
"Jaeyoung Do"
] | Off-Policy reinforcement learning has been a driving force for the
state-of-the-art conversational AIs leading to more natural human-agent
interactions and improving the user satisfaction for goal-oriented agents.
However, in large-scale commercial settings, it is often challenging to balance
between policy improvements and experience continuity on the broad spectrum of
applications handled by such systems. In the literature, off-policy evaluation
and guard-railing on aggregate statistics has been commonly used to address
this problem. In this paper, we propose a method for curating and leveraging
high-precision samples sourced from historical regression incident reports to
validate, safe-guard, and improve policies prior to the online deployment. We
conducted extensive experiments using data from a real-world conversational
system and actual regression incidents. The proposed method is currently
deployed in our production system to protect customers against broken
experiences and enable long-term policy improvements. | [
"cs.AI",
"cs.CL",
"cs.LG"
] | false |
2305.09907 | 2023-05-17T02:30:28Z | Incremental Outlier Detection Modelling Using Streaming Analytics in
Finance & Health Care | [
"Ch Priyanka",
"Vivek"
] | In this paper, we build online models incrementally using online outlier
detection algorithms in a streaming environment. We identify a strong need for
streaming models to tackle streaming data. The objective of this project is to
study and analyze the importance of streaming models applicable in real-world
environments. In this work, we build various Outlier Detection (OD) algorithms,
viz., One-Class Support Vector Machine (OC-SVM), Isolation Forest Adaptive
Sliding Window (IForest ASD), Exact Storm, Angle-Based Outlier Detection
(ABOD), Local Outlier Factor (LOF), KitNet, and KNN ASD. We evaluate the
effectiveness and validity of these models on various finance problems such as
credit card fraud detection, churn prediction, and Ethereum fraud prediction.
Further, we also analyze the performance of the models on health care
prediction problems such as heart stroke prediction and diabetes prediction.
The results show that the models perform well on highly imbalanced datasets,
i.e., where the negative class forms the majority and the positive class the
minority. Among all the models, the ensemble strategy IForest ASD performed
best, ranking in the top three models in almost all of the
cases. | [
"cs.LG"
] | false |
2305.10171 | 2023-05-17T12:54:58Z | Goal-Conditioned Supervised Learning with Sub-Goal Prediction | [
"Tom Jurgenson",
"Aviv Tamar"
] | Recently, a simple yet effective algorithm -- goal-conditioned
supervised-learning (GCSL) -- was proposed to tackle goal-conditioned
reinforcement-learning. GCSL is based on the principle of hindsight learning:
by observing states visited in previously executed trajectories and treating
them as attained goals, GCSL learns the corresponding actions via supervised
learning. However, GCSL only learns a goal-conditioned policy, discarding other
information in the process. Our insight is that the same hindsight principle
can be used to learn to predict goal-conditioned sub-goals from the same
trajectory. Based on this idea, we propose Trajectory Iterative Learner
(TraIL), an extension of GCSL that further exploits the information in a
trajectory, and uses it for learning to predict both actions and sub-goals. We
investigate the settings in which TraIL can make better use of the data, and
discover that for several popular problem settings, replacing real goals in
GCSL with predicted TraIL sub-goals allows the agent to reach a greater set of
goal states using the exact same data as GCSL, thereby improving its overall
performance. | [
"cs.LG"
] | false |
2305.10309 | 2023-05-17T15:47:47Z | MetaModulation: Learning Variational Feature Hierarchies for Few-Shot
Learning with Fewer Tasks | [
"Wenfang Sun",
"Yingjun Du",
"Xiantong Zhen",
"Fan Wang",
"Ling Wang",
"Cees G. M. Snoek"
] | Meta-learning algorithms are able to learn a new task using previously
learned knowledge, but they often require a large number of meta-training tasks
which may not be readily available. To address this issue, we propose a method
for few-shot learning with fewer tasks, which we call MetaModulation. The key
idea is to use a neural network to increase the density of the meta-training
tasks by modulating batch normalization parameters during meta-training.
Additionally, we modify parameters at various network levels, rather than just
a single layer, to increase task diversity. To account for the uncertainty
caused by the limited training tasks, we propose a variational MetaModulation
where the modulation parameters are treated as latent variables. We also
introduce learning variational feature hierarchies by the variational
MetaModulation, which modulates features at all layers and can consider task
uncertainty and generate more diverse tasks. The ablation studies illustrate
the advantages of utilizing a learnable task modulation at different levels and
demonstrate the benefit of incorporating probabilistic variants in few-task
meta-learning. Our MetaModulation and its variational variants consistently
outperform state-of-the-art alternatives on four few-task meta-learning
benchmarks. | [
"cs.LG"
] | false |
2305.10329 | 2023-05-17T16:10:36Z | G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning
for Graph Transformer Networks | [
"Anchun Gui",
"Jinqiang Ye",
"Han Xiao"
] | It has become a popular paradigm to transfer the knowledge of large-scale
pre-trained models to various downstream tasks via fine-tuning the entire model
parameters. However, with the growth of model scale and the rising number of
downstream tasks, this paradigm inevitably meets the challenges in terms of
computation consumption and memory footprint issues. Recently,
Parameter-Efficient Fine-Tuning (PEFT) (e.g., Adapter, LoRA, BitFit) shows a
promising paradigm to alleviate these concerns by updating only a portion of
parameters. Despite these PEFTs having demonstrated satisfactory performance in
natural language processing, it remains under-explored for the question of
whether these techniques could be transferred to graph-based tasks with Graph
Transformer Networks (GTNs). Therefore, in this paper, we fill this gap by
providing extensive benchmarks with traditional PEFTs on a range of graph-based
downstream tasks. Our empirical study shows that it is sub-optimal to directly
transfer existing PEFTs to graph-based tasks due to the issue of feature
distribution shift. To address this issue, we propose a novel structure-aware
PEFT approach, named G-Adapter, which leverages graph convolution operation to
introduce graph structure (e.g., graph adjacent matrix) as an inductive bias to
guide the updating process. Besides, we propose Bregman proximal point
optimization to further alleviate feature distribution shift by preventing the
model from aggressive updates. Extensive experiments demonstrate that G-Adapter
obtains the state-of-the-art performance compared to the counterparts on nine
graph benchmark datasets based on two pre-trained GTNs, and delivers tremendous
memory footprint efficiency compared to the conventional paradigm. | [
"cs.LG"
] | false |
2305.10460 | 2023-05-17T07:42:24Z | Topology Optimization using Neural Networks with Conditioning Field
Initialization for Improved Efficiency | [
"Hongrui Chen",
"Aditya Joglekar",
"Levent Burak Kara"
] | We propose conditioning field initialization for neural network based
topology optimization. In this work, we focus on (1) improving upon existing
neural network based topology optimization, (2) demonstrating that by using a
prior initial field on the unoptimized domain, the efficiency of neural network
based topology optimization can be further improved. Our approach consists of a
topology neural network that is trained on a case by case basis to represent
the geometry for a single topology optimization problem. It takes in domain
coordinates as input to represent the density at each coordinate where the
topology is represented by a continuous density field. The displacement is
solved through a finite element solver. We employ the strain energy field
calculated on the initial design domain as an additional conditioning field
input to the neural network throughout the optimization. The addition of the
strain energy field input improves the convergence speed compared to standalone
neural network based topology optimization. | [
"cs.LG"
] | false |
2305.10464 | 2023-05-17T08:20:29Z | Reconstruction Error-based Anomaly Detection with Few Outlying Examples | [
"Fabrizio Angiulli",
"Fabio Fassetti",
"Luca Ferragina"
] | Reconstruction error-based neural architectures constitute a classical deep
learning approach to anomaly detection which has shown great performances. It
consists in training an Autoencoder to reconstruct a set of examples deemed to
represent the normality and then to point out as anomalies those data that show
a sufficiently large reconstruction error. Unfortunately, these architectures
often become able to well reconstruct also the anomalies in the data. This
phenomenon is more evident when there are anomalies in the training set. In
particular when these anomalies are labeled, a setting called semi-supervised,
the best way to train Autoencoders is to ignore anomalies and minimize the
reconstruction error on normal data. The goal of this work is to investigate
approaches to allow reconstruction error-based architectures to instruct the
model to put known anomalies outside of the domain description of the normal
data. Specifically, our strategy exploits a limited number of anomalous
examples to increase the contrast between the reconstruction error associated
with normal examples and those associated with both known and unknown
anomalies, thus enhancing anomaly detection performances. The experiments show
that this new procedure achieves better performances than the standard
Autoencoder approach and the main deep learning techniques for semi-supervised
anomaly detection. | [
"cs.LG"
] | false |
2305.10471 | 2023-05-17T15:11:08Z | Bike2Vec: Vector Embedding Representations of Road Cycling Riders and
Races | [
"Ethan Baron",
"Bram Janssens",
"Matthias Bogaert"
] | Vector embeddings have been successfully applied in several domains to obtain
effective representations of non-numeric data which can then be used in various
downstream tasks. We present a novel application of vector embeddings in
professional road cycling by demonstrating a method to learn representations
for riders and races based on historical results. We use unsupervised learning
techniques to validate that the resultant embeddings capture interesting
features of riders and races. These embeddings could be used for downstream
prediction tasks such as early talent identification and race outcome
prediction. | [
"cs.LG"
] | false |