id (stringlengths 9-10) | submitter (stringlengths 2-52, ⌀) | authors (stringlengths 4-6.51k) | title (stringlengths 4-246) | comments (stringlengths 1-523, ⌀) | journal-ref (stringlengths 4-345, ⌀) | doi (stringlengths 11-120, ⌀) | report-no (stringlengths 2-243, ⌀) | categories (stringlengths 5-98) | license (stringclasses 9 values) | abstract (stringlengths 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses 1 value) | probability (float64 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
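The records below follow the arXiv-metadata schema above, with the classifier's `prediction` and `probability` columns appended. As a hypothetical usage sketch (the file name and JSON Lines layout are assumptions, not part of this dump), the rows could be loaded and filtered like this:

```python
import pandas as pd

# Assumed export of the records shown below, one JSON object per line.
df = pd.read_json("arxiv_new_dataset_predictions.jsonl", lines=True)

# Keep papers the classifier flags as introducing a new dataset with high confidence.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]
print(confident[["id", "title", "categories", "probability"]]
      .sort_values("probability", ascending=False)
      .head())
```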
2305.06077
|
Foivos Paraperas Papantoniou
|
Foivos Paraperas Papantoniou, Alexandros Lattas, Stylianos Moschoglou,
Stefanos Zafeiriou
|
Relightify: Relightable 3D Faces from a Single Image via Diffusion
Models
|
ICCV 2023, 15 pages, 14 figures. Project page:
https://foivospar.github.io/Relightify/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Following the remarkable success of diffusion models on image generation,
recent works have also demonstrated their impressive ability to address a
number of inverse problems in an unsupervised way, by properly constraining the
sampling process based on a conditioning input. Motivated by this, in this
paper, we present the first approach to use diffusion models as a prior for
highly accurate 3D facial BRDF reconstruction from a single image. We start by
leveraging a high-quality UV dataset of facial reflectance (diffuse and
specular albedo and normals), which we render under varying illumination
settings to simulate natural RGB textures and then train an unconditional
diffusion model on concatenated pairs of rendered textures and reflectance
components. At test time, we fit a 3D morphable model to the given image and
unwrap the face in a partial UV texture. By sampling from the diffusion model,
while retaining the observed texture part intact, the model inpaints not only
the self-occluded areas but also the unknown reflectance components, in a
single sequence of denoising steps. In contrast to existing methods, we
directly acquire the observed texture from the input image, thus resulting in more faithful and consistent reflectance estimation. Through a series of qualitative and quantitative comparisons, we demonstrate superior performance in both texture completion and reflectance reconstruction tasks.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 11:57:49 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 01:06:42 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Papantoniou",
"Foivos Paraperas",
""
],
[
"Lattas",
"Alexandros",
""
],
[
"Moschoglou",
"Stylianos",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] |
new_dataset
| 0.997487 |
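The Relightify abstract describes sampling from a diffusion model while keeping the observed part of the UV texture intact, so that the model inpaints only the missing regions. As a rough, self-contained illustration of that masked-sampling idea (not the authors' code: the denoiser is a placeholder, and the noise schedule, shapes, and mask are assumptions), a RePaint-style loop looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear DDPM noise schedule (assumption; the paper uses a trained UV-texture diffusion model).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t):
    """Placeholder for a trained noise-prediction network eps_theta(x_t, t)."""
    return np.zeros_like(x_t)

def q_sample(x0, t):
    """Forward-diffuse a clean image x0 to noise level t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def inpaint(observed, mask):
    """Sample the unknown region while keeping the observed region (mask == 1) consistent."""
    x = rng.standard_normal(observed.shape)
    for t in reversed(range(T)):
        eps_hat = toy_denoiser(x, t)
        # Standard DDPM reverse step for the unknown region.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x_unknown = mean + np.sqrt(betas[t]) * noise
        # Known region: re-noise the observation to the current noise level.
        x_known = q_sample(observed, t - 1) if t > 0 else observed
        x = mask * x_known + (1.0 - mask) * x_unknown
    return x

texture = np.zeros((16, 16))                        # stand-in for an unwrapped partial UV texture
visible = np.zeros((16, 16)); visible[:, :8] = 1.0  # observed half of the texture
completed = inpaint(texture, visible)
```

The same loop extends to inpainting the unknown reflectance channels jointly with the texture by treating them as extra channels of `x`, which is the gist of the single denoising sequence described in the abstract.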
2305.06897
|
Odunayo Ogundepo
|
Odunayo Ogundepo, Tajuddeen R. Gwadabe, Clara E. Rivera, Jonathan H.
Clark, Sebastian Ruder, David Ifeoluwa Adelani, Bonaventure F. P. Dossou,
Abdou Aziz DIOP, Claytone Sikasote, Gilles Hacheme, Happy Buzaaba, Ignatius
Ezeani, Rooweither Mabuya, Salomey Osei, Chris Emezue, Albert Njoroge Kahira,
Shamsuddeen H. Muhammad, Akintunde Oladipo, Abraham Toluwase Owodunni, Atnafu
Lambebo Tonja, Iyanuoluwa Shode, Akari Asai, Tunde Oluwaseyi Ajayi, Clemencia
Siro, Steven Arthur, Mofetoluwa Adeyemi, Orevaoghene Ahia, Anuoluwapo Aremu,
Oyinkansola Awosan, Chiamaka Chukwuneke, Bernard Opoku, Awokoya Ayodele,
Verrah Otiende, Christine Mwase, Boyd Sinkala, Andre Niyongabo Rubungo,
Daniel A. Ajisafe, Emeka Felix Onwuegbuzia, Habib Mbow, Emile Niyomutabazi,
Eunice Mukonde, Falalu Ibrahim Lawan, Ibrahim Said Ahmad, Jesujoba O. Alabi,
Martin Namukombo, Mbonu Chinedu, Mofya Phiri, Neo Putini, Ndumiso Mngoma,
Priscilla A. Amuok, Ruqayya Nasir Iro, Sonia Adhiambo
|
AfriQA: Cross-lingual Open-Retrieval Question Answering for African
Languages
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
African languages have far less in-language content available digitally,
making it challenging for question answering systems to satisfy the information
needs of users. Cross-lingual open-retrieval question answering (XOR QA)
systems -- those that retrieve answer content from other languages while
serving people in their native language -- offer a means of filling this gap.
To this end, we create AfriQA, the first cross-lingual QA dataset with a focus
on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African
languages. While previous datasets have focused primarily on languages where
cross-lingual QA augments coverage from the target language, AfriQA focuses on
languages where cross-lingual answer content is the only high-coverage source
of answer content. Because of this, we argue that African languages are one of
the most important and realistic use cases for XOR QA. Our experiments
demonstrate the poor performance of automatic translation and multilingual
retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA
models. We hope that the dataset enables the development of more equitable QA
technology.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 15:34:53 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Ogundepo",
"Odunayo",
""
],
[
"Gwadabe",
"Tajuddeen R.",
""
],
[
"Rivera",
"Clara E.",
""
],
[
"Clark",
"Jonathan H.",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"DIOP",
"Abdou Aziz",
""
],
[
"Sikasote",
"Claytone",
""
],
[
"Hacheme",
"Gilles",
""
],
[
"Buzaaba",
"Happy",
""
],
[
"Ezeani",
"Ignatius",
""
],
[
"Mabuya",
"Rooweither",
""
],
[
"Osei",
"Salomey",
""
],
[
"Emezue",
"Chris",
""
],
[
"Kahira",
"Albert Njoroge",
""
],
[
"Muhammad",
"Shamsuddeen H.",
""
],
[
"Oladipo",
"Akintunde",
""
],
[
"Owodunni",
"Abraham Toluwase",
""
],
[
"Tonja",
"Atnafu Lambebo",
""
],
[
"Shode",
"Iyanuoluwa",
""
],
[
"Asai",
"Akari",
""
],
[
"Ajayi",
"Tunde Oluwaseyi",
""
],
[
"Siro",
"Clemencia",
""
],
[
"Arthur",
"Steven",
""
],
[
"Adeyemi",
"Mofetoluwa",
""
],
[
"Ahia",
"Orevaoghene",
""
],
[
"Aremu",
"Anuoluwapo",
""
],
[
"Awosan",
"Oyinkansola",
""
],
[
"Chukwuneke",
"Chiamaka",
""
],
[
"Opoku",
"Bernard",
""
],
[
"Ayodele",
"Awokoya",
""
],
[
"Otiende",
"Verrah",
""
],
[
"Mwase",
"Christine",
""
],
[
"Sinkala",
"Boyd",
""
],
[
"Rubungo",
"Andre Niyongabo",
""
],
[
"Ajisafe",
"Daniel A.",
""
],
[
"Onwuegbuzia",
"Emeka Felix",
""
],
[
"Mbow",
"Habib",
""
],
[
"Niyomutabazi",
"Emile",
""
],
[
"Mukonde",
"Eunice",
""
],
[
"Lawan",
"Falalu Ibrahim",
""
],
[
"Ahmad",
"Ibrahim Said",
""
],
[
"Alabi",
"Jesujoba O.",
""
],
[
"Namukombo",
"Martin",
""
],
[
"Chinedu",
"Mbonu",
""
],
[
"Phiri",
"Mofya",
""
],
[
"Putini",
"Neo",
""
],
[
"Mngoma",
"Ndumiso",
""
],
[
"Amuok",
"Priscilla A.",
""
],
[
"Iro",
"Ruqayya Nasir",
""
],
[
"Adhiambo",
"Sonia",
""
]
] |
new_dataset
| 0.999491 |
2305.10971
|
David Adelani
|
Iyanuoluwa Shode, David Ifeoluwa Adelani, Jing Peng, Anna Feldman
|
NollySenti: Leveraging Transfer Learning and Machine Translation for
Nigerian Movie Sentiment Classification
|
Accepted to ACL 2023 (main conference)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Africa has over 2000 indigenous languages, but they are under-represented in NLP research due to a lack of datasets. In recent years, there has been progress in developing labeled corpora for African languages. However, these corpora are often available in a single domain and may not generalize to other domains. In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create a new dataset, NollySenti, based on Nollywood movie reviews for five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian-Pidgin, and Yoruba). We provide an extensive empirical evaluation using classical machine learning methods and pre-trained language models. Leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain and cross-lingual adaptation from the English language. Our evaluation shows that transfer from English in the same target domain leads to more than 5% improvement in accuracy compared to transfer from Twitter in the same language. To further mitigate the domain difference, we leverage machine translation (MT) from English to other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While MT to low-resource languages is often of low quality, through human evaluation, we show that most of the translated sentences preserve the sentiment of the original English reviews.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 13:38:36 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 07:25:43 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Shode",
"Iyanuoluwa",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Peng",
"Jing",
""
],
[
"Feldman",
"Anna",
""
]
] |
new_dataset
| 0.998431 |
2305.17648
|
Kim Tran
|
Kim Hoang Tran, Tien-Phat Nguyen, Anh Duy Le Dinh, Pha Nguyen, Thinh
Phan, Khoa Luu, Donald Adjeroh, Ngan Hoang Le
|
Z-GMOT: Zero-shot Generic Multiple Object Tracking
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the significant progress made in recent years, Multi-Object Tracking
(MOT) approaches still suffer from several limitations, including their
reliance on prior knowledge of tracking targets, which necessitates the costly
annotation of large labeled datasets. As a result, existing MOT methods are
limited to a small set of predefined categories, and they struggle with unseen
objects in the real world. To address these issues, Generic Multiple Object
Tracking (GMOT) has been proposed, which requires less prior information about
the targets. However, all existing GMOT approaches follow a one-shot paradigm,
relying mainly on the initial bounding box and thus struggling to handle
variants e.g., viewpoint, lighting, occlusion, scale, and etc. In this paper,
we introduce a novel approach to address the limitations of existing MOT and
GMOT methods. Specifically, we propose a zero-shot GMOT (Z-GMOT) algorithm that
can track never-seen object categories with zero training examples, without the
need for predefined categories or an initial bounding box. To achieve this, we
propose iGLIP, an improved version of Grounded language-image pretraining
(GLIP), which can detect unseen objects while minimizing false positives. We
evaluate our Z-GMOT thoroughly on the GMOT-40 dataset and the AnimalTrack and DanceTrack test sets. The results of these evaluations demonstrate a significant
improvement over existing methods. For instance, on the GMOT-40 dataset, the
Z-GMOT outperforms one-shot GMOT with OC-SORT by 27.79 points HOTA and 44.37
points MOTA. On the AnimalTrack dataset, it surpasses fully-supervised methods
with DeepSORT by 12.55 points HOTA and 8.97 points MOTA. To facilitate further
research, we will make our code and models publicly available upon acceptance
of this paper.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 06:44:33 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 18:13:41 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Tran",
"Kim Hoang",
""
],
[
"Nguyen",
"Tien-Phat",
""
],
[
"Dinh",
"Anh Duy Le",
""
],
[
"Nguyen",
"Pha",
""
],
[
"Phan",
"Thinh",
""
],
[
"Luu",
"Khoa",
""
],
[
"Adjeroh",
"Donald",
""
],
[
"Le",
"Ngan Hoang",
""
]
] |
new_dataset
| 0.996467 |
2305.19509
|
Yao Yao
|
Yao Yao and Liang He and Perla Maiolino
|
SPADA: A Toolbox of Designing Soft Pneumatic Actuators for Shape
Matching based on Surrogate Modeling
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft pneumatic actuators (SPAs) produce motions for soft robots from a simple pressure input; however, they must be appropriately designed to fit the target application. Available design methods employ kinematic models and optimization to estimate the actuator response and the optimal design parameters needed to achieve a target actuator shape. Among SPAs, bellow-SPAs excel at rapid prototyping and large deformation, yet their kinematic models often lack accuracy due to geometric complexity and material nonlinearity. Furthermore, existing shape-matching algorithms do not provide an end-to-end solution from the desired shape to the actuator. In addition, despite the availability of computational design pipelines, an accessible and user-friendly toolbox for direct application remains elusive. This paper addresses these challenges, offering an end-to-end shape-matching design framework for bellow-SPAs to streamline the design process, and the open-source toolbox SPADA (Soft Pneumatic Actuator Design frAmework) implementing the framework with a GUI for easy access. It provides a kinematic model grounded in a modular design to improve accuracy, Finite Element Method (FEM) simulations, and a piecewise constant curvature (PCC) approximation. An artificial neural network surrogate model, trained on FEM simulation data, enables fast computation during optimization. A shape-matching algorithm, merging 3D PCC segmentation and a surrogate-model-based genetic algorithm, identifies optimal actuator design parameters for desired shapes. The toolbox, implementing the proposed design framework, has proven its end-to-end capability by designing actuators that precisely match 2D shapes with root-mean-square errors of 4.16, 2.70, and 2.51 mm, and demonstrates its potential by designing a 3D deformable actuator.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 02:47:13 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 23:17:00 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Yao",
"Yao",
""
],
[
"He",
"Liang",
""
],
[
"Maiolino",
"Perla",
""
]
] |
new_dataset
| 0.979551 |
2307.10685
|
Yinghui Xing
|
Yinghui Xing, Dexuan Kong, Shizhou Zhang, Geng Chen, Lingyan Ran, Peng
Wang, Yanning Zhang
|
Pre-train, Adapt and Detect: Multi-Task Adapter Tuning for Camouflaged
Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Camouflaged object detection (COD), aiming to segment camouflaged objects
which exhibit similar patterns with the background, is a challenging task. Most
existing works are dedicated to establishing specialized modules to identify camouflaged objects with complete and fine details, while the boundary cannot be well located due to the lack of object-related semantics. In this paper, we propose a novel ``pre-train, adapt and detect'' paradigm to detect camouflaged
objects. By introducing a large pre-trained model, abundant knowledge learned
from massive multi-modal data can be directly transferred to COD. A lightweight
parallel adapter is inserted to adjust the features suitable for the downstream
COD task. Extensive experiments on four challenging benchmark datasets
demonstrate that our method outperforms existing state-of-the-art COD models by
large margins. Moreover, we design a multi-task learning scheme for tuning the
adapter to exploit the shareable knowledge across different semantic classes.
Comprehensive experimental results show that the generalization ability of
our model can be substantially improved with multi-task adapter initialization
on source tasks and multi-task adaptation on target tasks.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 08:25:38 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 07:15:30 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Xing",
"Yinghui",
""
],
[
"Kong",
"Dexuan",
""
],
[
"Zhang",
"Shizhou",
""
],
[
"Chen",
"Geng",
""
],
[
"Ran",
"Lingyan",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yanning",
""
]
] |
new_dataset
| 0.997264 |
2308.03099
|
Yuta Koreeda
|
Yuta Koreeda, Terufumi Morishita, Osamu Imaichi, Yasuhiro Sogawa
|
LARCH: Large Language Model-based Automatic Readme Creation with
Heuristics
|
This is a pre-print of a paper accepted at CIKM'23 Demo. Refer to the
DOI URL for the original publication
|
In Proceedings of the 32nd ACM International Conference on
Information and Knowledge Management, October 21-25, 2023, Birmingham, United
Kingdom. ACM, New York, NY, USA, 5 pages
|
10.1145/3583780.3614744
| null |
cs.CL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Writing a readme is a crucial aspect of software development as it plays a
vital role in managing and reusing program code. Though it is a pain point for
many developers, automatically creating one remains a challenge even with the
recent advancements in large language models (LLMs), because it requires
generating an abstract description from thousands of lines of code. In this
demo paper, we show that LLMs are capable of generating coherent and factually correct readmes if we can identify a code fragment that is
representative of the repository. Building upon this finding, we developed
LARCH (LLM-based Automatic Readme Creation with Heuristics) which leverages
representative code identification with heuristics and weak supervision.
Through human and automated evaluations, we illustrate that LARCH can generate
coherent and factually correct readmes in the majority of cases, outperforming
a baseline that does not rely on representative code identification. We have
made LARCH open-source and provided a cross-platform Visual Studio Code
interface and command-line interface, accessible at
https://github.com/hitachi-nlp/larch. A demo video showcasing LARCH's
capabilities is available at https://youtu.be/ZUKkh5ED-O4.
|
[
{
"version": "v1",
"created": "Sun, 6 Aug 2023 12:28:24 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 09:48:20 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Koreeda",
"Yuta",
""
],
[
"Morishita",
"Terufumi",
""
],
[
"Imaichi",
"Osamu",
""
],
[
"Sogawa",
"Yasuhiro",
""
]
] |
new_dataset
| 0.998897 |
2308.06452
|
Lu Liyao
|
Liyao Lu
|
Improved YOLOv8 Detection Algorithm in Security Inspection Image
|
23 pages,23 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security inspection is the first line of defense to ensure the safety of
people's lives and property, and intelligent security inspection is an
inevitable trend in the future development of the security inspection industry.
To address the problems of overlapping objects, false detection of contraband, and missed detections in X-ray image inspection, an improved X-ray contraband detection algorithm, CSS-YOLO, based on YOLOv8s is proposed.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 03:13:38 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 02:31:51 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Aug 2023 07:11:04 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Lu",
"Liyao",
""
]
] |
new_dataset
| 0.99181 |
2308.09779
|
Yichen Yan
|
Yichen Yan, Xingjian He, Wenxuan Wang, Sihan Chen, Jing Liu
|
EAVL: Explicitly Align Vision and Language for Referring Image
Segmentation
|
10 pages, 4 figures. arXiv admin note: text overlap with
arXiv:2305.14969
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Referring image segmentation aims to segment an object mentioned in natural
language from an image. A main challenge is language-related localization,
which means locating the object with the relevant language. Previous approaches
mainly focus on the fusion of vision and language features without fully
addressing language-related localization. In previous approaches, fused
vision-language features are directly fed into a decoder and pass through a
convolution with a fixed kernel to obtain the result, which follows a similar
pattern as traditional image segmentation. This approach does not explicitly
align language and vision features in the segmentation stage, resulting in a
suboptimal language-related localization. Different from previous methods, we
propose Explicitly Align the Vision and Language for Referring Image
Segmentation (EAVL). Instead of using a fixed convolution kernel, we propose an
Aligner which explicitly aligns the vision and language features in the
segmentation stage. Specifically, a series of unfixed convolution kernels is generated based on the input language expression and then used to explicitly align the vision and language features. To achieve this, we generate multiple queries that
represent different emphases of the language expression. These queries are
transformed into a series of query-based convolution kernels. Then, we utilize
these kernels to do convolutions in the segmentation stage and obtain a series
of segmentation masks. The final result is obtained through the aggregation of
all masks. Our method not only fuses vision and language features effectively but also exploits their potential in the segmentation stage. Most importantly, we explicitly align language features of different emphases
with the image features to achieve language-related localization. Our method
surpasses previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by
large margins.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 18:59:27 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 00:27:55 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Yan",
"Yichen",
""
],
[
"He",
"Xingjian",
""
],
[
"Wang",
"Wenxuan",
""
],
[
"Chen",
"Sihan",
""
],
[
"Liu",
"Jing",
""
]
] |
new_dataset
| 0.998067 |
2308.09936
|
Wenbo Hu
|
Wenbo Hu, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, Zhuowen Tu
|
BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual
Questions
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Language Models (VLMs), which extend Large Language Models (LLM) by
incorporating visual understanding capability, have demonstrated significant
advancements in addressing open-ended visual question-answering (VQA) tasks.
However, these models cannot accurately interpret images infused with text, a
common occurrence in real-world scenarios. Standard procedures for extracting
information from images often involve learning a fixed set of query embeddings.
These embeddings are designed to encapsulate image contexts and are later used
as soft prompt inputs in LLMs. Yet, this process is limited by the token count, potentially curtailing the recognition of scenes with text-rich context. To
improve upon them, the present study introduces BLIVA: an augmented version of
InstructBLIP with Visual Assistant. BLIVA incorporates the query embeddings
from InstructBLIP and also directly projects encoded patch embeddings into the
LLM, a technique inspired by LLaVA. This approach assists the model to capture
intricate details potentially missed during the query decoding process.
Empirical evidence demonstrates that our model, BLIVA, significantly enhances
performance in processing text-rich VQA benchmarks (up to 17.76\% in OCR-VQA
benchmark) and in undertaking typical VQA benchmarks (up to 7.9\% in Visual
Spatial Reasoning benchmark), compared to our baseline InstructBLIP. BLIVA
demonstrates significant capability in decoding real-world images, irrespective
of text presence. To demonstrate the broad industry applications enabled by
BLIVA, we evaluate the model using a new dataset comprising YouTube thumbnails
paired with question-answer sets across 13 diverse categories. For researchers
interested in further exploration, our code and models are freely accessible at
https://github.com/mlpc-ucsd/BLIVA.git
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 07:53:43 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Hu",
"Wenbo",
""
],
[
"Xu",
"Yifan",
""
],
[
"Li",
"Yi",
""
],
[
"Li",
"Weiyue",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Tu",
"Zhuowen",
""
]
] |
new_dataset
| 0.999621 |
2308.10195
|
Zehong Zhang
|
Dongjian Huo, Zehong Zhang, Hanjing Su, Guanbin Li, Chaowei Fang,
Qingyao Wu
|
WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit
Joint Learning
| null | null | null | null |
cs.MM cs.CL cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Watermarking serves as a widely adopted approach to safeguard media
copyright. In parallel, the research focus has extended to watermark removal
techniques, offering an adversarial means to enhance watermark robustness and
foster advancements in the watermarking field. Existing watermark removal
methods mainly rely on UNet with task-specific decoder branches--one for
watermark localization and the other for background image restoration. However,
watermark localization and background restoration are not isolated tasks;
precise watermark localization inherently implies regions necessitating
restoration, and the background restoration process contributes to more
accurate watermark localization. To holistically integrate information from
both branches, we introduce an implicit joint learning paradigm. This empowers
the network to autonomously navigate the flow of information between implicit
branches through a gate mechanism. Furthermore, we employ cross-channel
attention to facilitate local detail restoration and holistic structural
comprehension, while harnessing nested structures to integrate multi-scale
information. Extensive experiments are conducted on various challenging
benchmarks to validate the effectiveness of our proposed method. The results
demonstrate our approach's remarkable superiority, surpassing existing
state-of-the-art methods by a large margin.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 07:56:34 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 02:55:39 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Huo",
"Dongjian",
""
],
[
"Zhang",
"Zehong",
""
],
[
"Su",
"Hanjing",
""
],
[
"Li",
"Guanbin",
""
],
[
"Fang",
"Chaowei",
""
],
[
"Wu",
"Qingyao",
""
]
] |
new_dataset
| 0.96853 |
2308.10608
|
Yuhan Li
|
Yuhan Li, Yishun Dou, Yue Shi, Yu Lei, Xuanhong Chen, Yi Zhang, Peng
Zhou, Bingbing Ni
|
FocalDreamer: Text-driven 3D Editing via Focal-fusion Assembly
|
Project website: https://focaldreamer.github.io
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While text-3D editing has made significant strides in leveraging score
distillation sampling, emerging approaches still fall short in delivering
separable, precise and consistent outcomes that are vital to content creation.
In response, we introduce FocalDreamer, a framework that merges base shape with
editable parts according to text prompts for fine-grained editing within
desired regions. Specifically, equipped with geometry union and dual-path
rendering, FocalDreamer assembles independent 3D parts into a complete object,
tailored for convenient instance reuse and part-wise control. We propose
geometric focal loss and style consistency regularization, which encourage
focal fusion and congruent overall appearance. Furthermore, FocalDreamer
generates high-fidelity geometry and PBR textures which are compatible with
widely-used graphics engines. Extensive experiments have highlighted the
superior editing capabilities of FocalDreamer in both quantitative and
qualitative evaluations.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 10:16:52 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 03:23:35 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Li",
"Yuhan",
""
],
[
"Dou",
"Yishun",
""
],
[
"Shi",
"Yue",
""
],
[
"Lei",
"Yu",
""
],
[
"Chen",
"Xuanhong",
""
],
[
"Zhang",
"Yi",
""
],
[
"Zhou",
"Peng",
""
],
[
"Ni",
"Bingbing",
""
]
] |
new_dataset
| 0.998061 |
2308.10647
|
Shayekh Islam
|
Imam Mohammad Zulkarnain, Shayekh Bin Islam, Md. Zami Al Zunaed
Farabe, Md. Mehedi Hasan Shawon, Jawaril Munshad Abedin, Beig Rajibul Hasan,
Marsia Haque, Istiak Shihab, Syed Mobassir, MD. Nazmuddoha Ansary, Asif
Sushmit, Farig Sadeque
|
bbOCR: An Open-source Multi-domain OCR Pipeline for Bengali Documents
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the existence of numerous Optical Character Recognition (OCR) tools,
the lack of comprehensive open-source systems hampers the progress of document
digitization in various low-resource languages, including Bengali. Low-resource
languages, especially those with an alphasyllabary writing system, suffer from
the lack of large-scale datasets for various document OCR components such as
word-level OCR, document layout extraction, and distortion correction, which are available as individual modules in high-resource languages. In this paper,
we introduce Bengali$.$AI-BRACU-OCR (bbOCR): an open-source scalable document
OCR system that can reconstruct Bengali documents into a structured searchable
digitized format that leverages a novel Bengali text recognition model and two
novel synthetic datasets. We present extensive component-level and system-level
evaluation: both use a novel diversified evaluation dataset and comprehensive
evaluation metrics. Our extensive evaluation suggests that our proposed
solution is preferable over the current state-of-the-art Bengali OCR systems.
The source codes and datasets are available here:
https://bengaliai.github.io/bbocr.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 11:35:28 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 02:32:01 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Zulkarnain",
"Imam Mohammad",
""
],
[
"Islam",
"Shayekh Bin",
""
],
[
"Farabe",
"Md. Zami Al Zunaed",
""
],
[
"Shawon",
"Md. Mehedi Hasan",
""
],
[
"Abedin",
"Jawaril Munshad",
""
],
[
"Hasan",
"Beig Rajibul",
""
],
[
"Haque",
"Marsia",
""
],
[
"Shihab",
"Istiak",
""
],
[
"Mobassir",
"Syed",
""
],
[
"Ansary",
"MD. Nazmuddoha",
""
],
[
"Sushmit",
"Asif",
""
],
[
"Sadeque",
"Farig",
""
]
] |
new_dataset
| 0.999604 |
2308.10990
|
Jie Liu
|
Jie Liu, Tao Zhang, Shuyu Sun
|
Flashlight Search Medial Axis: A Pixel-Free Pore-Network Extraction
Algorithm
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pore-network models (PNMs) have become an important tool in the study of
fluid flow in porous media over the last few decades, and the accuracy of their
results highly depends on the extraction of pore networks. Traditional methods
of pore-network extraction are based on pixels and require images with high
quality. Here, a pixel-free method called the flashlight search medial axis
(FSMA) algorithm is proposed for pore-network extraction in a continuous space.
The search domain in a two-dimensional space is a line, whereas a surface
domain is searched in a three-dimensional scenario. Thus, the FSMA algorithm
follows the dimensionality reduction idea; the medial axis can be identified
using only a few points instead of calculating every point in the void space.
In this way, the computational complexity of this method is greatly reduced
compared to that of traditional pixel-based extraction methods, thus enabling
large-scale pore-network extraction. Based on cases featuring two- and
three-dimensional porous media, the FSMA algorithm performs well regardless of
the topological structure of the pore network or the positions of the pore and
throat centers. This algorithm can also be used to examine both closed- and
open-boundary cases. Finally, the FSMA algorithm can search dead-end pores,
which is of great significance in the study of multiphase flow in porous media.
|
[
{
"version": "v1",
"created": "Sat, 5 Aug 2023 11:37:24 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Liu",
"Jie",
""
],
[
"Zhang",
"Tao",
""
],
[
"Sun",
"Shuyu",
""
]
] |
new_dataset
| 0.989228 |
2308.11011
|
Peng Zhou
|
Peng Zhou, Alexander J. Edwards, Frederick B. Mancoff, Sanjeev
Aggarwal, Stephen K. Heinrich-Barna, Joseph S. Friedman
|
Neuromorphic Hebbian learning with magnetic tunnel junction synapses
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic computing aims to mimic both the function and structure of
biological neural networks to provide artificial intelligence with extreme
efficiency. Conventional approaches store synaptic weights in non-volatile
memory devices with analog resistance states, permitting in-memory computation
of neural network operations while avoiding the costs associated with
transferring synaptic weights from a memory array. However, the use of analog
resistance states for storing weights in neuromorphic systems is impeded by
stochastic writing, weights drifting over time through stochastic processes,
and limited endurance that reduces the precision of synapse weights. Here we
propose and experimentally demonstrate neuromorphic networks that provide
high-accuracy inference thanks to the binary resistance states of magnetic
tunnel junctions (MTJs), while leveraging the analog nature of their stochastic
spin-transfer torque (STT) switching for unsupervised Hebbian learning. We
performed the first experimental demonstration of a neuromorphic network
directly implemented with MTJ synapses, for both inference and
spike-timing-dependent plasticity learning. We also demonstrated through
simulation that the proposed system for unsupervised Hebbian learning with
stochastic STT-MTJ synapses can achieve competitive accuracies for MNIST
handwritten digit recognition. By appropriately applying neuromorphic
principles through hardware-aware design, the proposed STT-MTJ neuromorphic
learning networks provide a pathway toward artificial intelligence hardware
that learns autonomously with extreme efficiency.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 19:58:44 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Zhou",
"Peng",
""
],
[
"Edwards",
"Alexander J.",
""
],
[
"Mancoff",
"Frederick B.",
""
],
[
"Aggarwal",
"Sanjeev",
""
],
[
"Heinrich-Barna",
"Stephen K.",
""
],
[
"Friedman",
"Joseph S.",
""
]
] |
new_dataset
| 0.997565 |
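The abstract above describes unsupervised Hebbian learning implemented with stochastic binary MTJ synapses. As a loose, software-only illustration of plain Hebbian learning (Oja's stabilized variant, not the MTJ hardware or the authors' spiking network; the data and constants are made up), the rule can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D data with a dominant direction along the x-axis.
X = rng.standard_normal((500, 2)) @ np.diag([2.0, 0.5])

w = rng.standard_normal(2)       # a single linear "neuron"
eta = 0.005

for _ in range(20):              # a few passes over the data
    for x in X:
        y = w @ x                        # Hebbian pre/post activity product
        w += eta * y * (x - y * w)       # Oja's rule: Hebbian growth plus decay for stability

print("learned direction:", w / np.linalg.norm(w))   # approximately +/- [1, 0]
```

Oja's decay term keeps the weight vector bounded, playing a role loosely analogous to the bounded binary states that the MTJ synapses provide in hardware.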
2308.11015
|
Tze Ho Elden Tse
|
Tze Ho Elden Tse, Franziska Mueller, Zhengyang Shen, Danhang Tang,
Thabo Beeler, Mingsong Dou, Yinda Zhang, Sasa Petrovic, Hyung Jin Chang,
Jonathan Taylor, Bardia Doosti
|
Spectral Graphormer: Spectral Graph-based Transformer for Egocentric
Two-Hand Reconstruction using Multi-View Color Images
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a novel transformer-based framework that reconstructs two high
fidelity hands from multi-view RGB images. Unlike existing hand pose estimation
methods, where one typically trains a deep network to regress hand model parameters from a single RGB image, we consider a more challenging problem setting where we directly regress the absolute root poses of two hands with extended forearms at high resolution from an egocentric view. As existing datasets
are either infeasible for egocentric viewpoints or lack background variations,
we create a large-scale synthetic dataset with diverse scenarios and collect a
real dataset from a multi-calibrated camera setup to verify our proposed
multi-view image feature fusion strategy. To make the reconstruction physically
plausible, we propose two strategies: (i) a coarse-to-fine spectral graph
convolution decoder to smoothen the meshes during upsampling and (ii) an
optimisation-based refinement stage at inference to prevent self-penetrations.
Through extensive quantitative and qualitative evaluations, we show that our
framework is able to produce realistic two-hand reconstructions and demonstrate
the generalisation of synthetic-trained models to real data, as well as
real-time AR/VR applications.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 20:07:02 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Tse",
"Tze Ho Elden",
""
],
[
"Mueller",
"Franziska",
""
],
[
"Shen",
"Zhengyang",
""
],
[
"Tang",
"Danhang",
""
],
[
"Beeler",
"Thabo",
""
],
[
"Dou",
"Mingsong",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Petrovic",
"Sasa",
""
],
[
"Chang",
"Hyung Jin",
""
],
[
"Taylor",
"Jonathan",
""
],
[
"Doosti",
"Bardia",
""
]
] |
new_dataset
| 0.978441 |
2308.11032
|
Prabh Simran Baweja
|
Prabh Simran Singh Baweja, Orathai Sangpetch, Akkarit Sangpetch
|
AI For Fraud Awareness
|
Technical Report published at CMKL University in 2020
| null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In today's world, with the rise of numerous social platforms, it has become
relatively easy for anyone to spread false information and lure people into
traps. Fraudulent schemes and traps are growing rapidly in the investment
world. Due to this, countries and individuals face huge financial risks. We
present an awareness system that uses machine learning and gamification techniques to educate people about investment scams and traps. Our system
applies machine learning techniques to provide a personalized learning
experience to the user. The system chooses distinct game-design elements and
scams from the knowledge pool crafted by domain experts for each individual.
The objective of the research project is to reduce inequalities in all
countries by educating investors via Active Learning. Our goal is to assist the
regulators in assuring a conducive environment for a fair, efficient, and
inclusive capital market. In the paper, we discuss the impact of the problem,
provide implementation details, and showcase the potentiality of the system
through preliminary experiments and results.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 05:45:34 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Baweja",
"Prabh Simran Singh",
""
],
[
"Sangpetch",
"Orathai",
""
],
[
"Sangpetch",
"Akkarit",
""
]
] |
new_dataset
| 0.961125 |
2308.11062
|
Shen Yan
|
Shen Yan, Xuehan Xiong, Arsha Nagrani, Anurag Arnab, Zhonghao Wang,
Weina Ge, David Ross, Cordelia Schmid
|
UnLoc: A Unified Framework for Video Localization Tasks
|
ICCV 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While large-scale image-text pretrained models such as CLIP have been used
for multiple video-level tasks on trimmed videos, their use for temporal
localization in untrimmed videos is still a relatively unexplored task. We
design a new approach for this called UnLoc, which uses pretrained image and
text towers, and feeds tokens to a video-text fusion model. The outputs of the fusion module are then used to construct a feature pyramid in which each level
connects to a head to predict a per-frame relevancy score and start/end time
displacements. Unlike previous works, our architecture enables Moment
Retrieval, Temporal Localization, and Action Segmentation with a single stage
model, without the need for action proposals, motion-based pretrained features or representation masking. Unlike specialized models, we achieve state-of-the-art results on all three different localization tasks with a unified approach.
Code will be available at: \url{https://github.com/google-research/scenic}.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 22:15:20 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Yan",
"Shen",
""
],
[
"Xiong",
"Xuehan",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Arnab",
"Anurag",
""
],
[
"Wang",
"Zhonghao",
""
],
[
"Ge",
"Weina",
""
],
[
"Ross",
"David",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.993647 |
2308.11106
|
Dongkwon Jin
|
Dongkwon Jin, Dahyun Kim, Chang-Su Kim
|
Recursive Video Lane Detection
|
ICCV 2023 accepted
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel algorithm to detect road lanes in videos, called recursive video lane
detector (RVLD), is proposed in this paper, which propagates the state of a
current frame recursively to the next frame. RVLD consists of an intra-frame
lane detector (ILD) and a predictive lane detector (PLD). First, we design ILD
to localize lanes in a still frame. Second, we develop PLD to exploit the
information of the previous frame for lane detection in a current frame. To
this end, we estimate a motion field and warp the previous output to the
current frame. Using the warped information, we refine the feature map of the
current frame to detect lanes more reliably. Experimental results show that
RVLD outperforms existing detectors on video lane datasets. Our codes are
available at https://github.com/dongkwonjin/RVLD.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 01:02:15 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Jin",
"Dongkwon",
""
],
[
"Kim",
"Dahyun",
""
],
[
"Kim",
"Chang-Su",
""
]
] |
new_dataset
| 0.997329 |
2308.11116
|
Haesoo Chung
|
Haesoo Chung and Nam Ik Cho
|
LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video
Reconstruction
|
ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As demands for high-quality videos continue to rise, high-resolution and
high-dynamic range (HDR) imaging techniques are drawing attention. To generate
an HDR video from low dynamic range (LDR) images, one of the critical steps is
the motion compensation between LDR frames, for which most existing works
employed the optical flow algorithm. However, these methods suffer from flow
estimation errors when saturation or complicated motions exist. In this paper,
we propose an end-to-end HDR video composition framework, which aligns LDR
frames in the feature space and then merges aligned features into an HDR frame,
without relying on pixel-domain optical flow. Specifically, we propose a
luminance-based alignment network for HDR (LAN-HDR) consisting of an alignment
module and a hallucination module. The alignment module aligns a frame to the
adjacent reference by evaluating luminance-based attention, excluding color
information. The hallucination module generates sharp details, especially for
washed-out areas due to saturation. The aligned and hallucinated features are
then blended adaptively to complement each other. Finally, we merge the
features to generate a final HDR frame. In training, we adopt a temporal loss,
in addition to frame reconstruction losses, to enhance temporal consistency and
thus reduce flickering. Extensive experiments demonstrate that our method
performs better or comparable to state-of-the-art methods on several
benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 01:43:00 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Chung",
"Haesoo",
""
],
[
"Cho",
"Nam Ik",
""
]
] |
new_dataset
| 0.964582 |
2308.11140
|
Haesoo Chung
|
Haesoo Chung and Nam Ik Cho
|
High Dynamic Range Imaging of Dynamic Scenes with Saturation
Compensation but without Explicit Motion Compensation
|
WACV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
High dynamic range (HDR) imaging is a highly challenging task since a large
amount of information is lost due to the limitations of camera sensors. For HDR
imaging, some methods capture multiple low dynamic range (LDR) images with
altering exposures to aggregate more information. However, these approaches
introduce ghosting artifacts when significant inter-frame motions are present.
Moreover, although multi-exposure images are given, we have little information
in severely over-exposed areas. Most existing methods focus on motion
compensation, i.e., alignment of multiple LDR shots to reduce the ghosting
artifacts, but they still produce unsatisfying results. These methods also
rather overlook the need to restore the saturated areas. In this paper, we
generate well-aligned multi-exposure features by reformulating a motion
alignment problem into a simple brightness adjustment problem. In addition, we
propose a coarse-to-fine merging strategy with explicit saturation
compensation. The saturated areas are reconstructed with similar well-exposed
content using adaptive contextual attention. We demonstrate that our method
outperforms the state-of-the-art methods regarding qualitative and quantitative
evaluations.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 02:44:03 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Chung",
"Haesoo",
""
],
[
"Cho",
"Nam Ik",
""
]
] |
new_dataset
| 0.985922 |
2308.11159
|
Dalong Zheng
|
Dalong Zheng, Zebin Wu, Jia Liu, Zhihui Wei
|
SwinV2DNet: Pyramid and Self-Supervision Compounded Feature Learning for
Remote Sensing Images Change Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Among the current mainstream change detection networks, the transformer is deficient in its ability to capture accurate low-level details, while the convolutional neural network (CNN) lacks the capacity to understand global information and establish remote spatial relationships. Meanwhile, neither of the widely used early fusion and late fusion frameworks is able to fully learn complete change features. Therefore, based on Swin Transformer V2 (Swin
V2) and VGG16, we propose an end-to-end compounded dense network SwinV2DNet to
inherit the advantages of both transformer and CNN and overcome the
shortcomings of existing networks in feature learning. Firstly, it captures the
change relationship features through the densely connected Swin V2 backbone,
and provides the low-level pre-changed and post-changed features through a CNN
branch. Based on these three change features, we accomplish accurate change
detection results. Secondly, combined with transformer and CNN, we propose
mixed feature pyramid (MFP) which provides inter-layer interaction information
and intra-layer multi-scale information for complete feature learning. MFP is a plug-and-play module which is experimentally proven to also be effective in other change detection networks. Furthermore, we impose a self-supervision
strategy to guide a new CNN branch, which solves the untrainable problem of the
CNN branch and provides the semantic change information for the encoder features. State-of-the-art (SOTA) change detection scores and fine-grained change maps were obtained in comparison with other advanced methods on four commonly used public remote sensing datasets. The code is available at
https://github.com/DalongZ/SwinV2DNet.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 03:31:52 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Zheng",
"Dalong",
""
],
[
"Wu",
"Zebin",
""
],
[
"Liu",
"Jia",
""
],
[
"Wei",
"Zhihui",
""
]
] |
new_dataset
| 0.96129 |
2308.11161
|
Thanh Dat Nguyen
|
Thanh-Dat Nguyen, Yang Zhou, Xuan Bach D. Le, Patanamon (Pick)
Thongtanunam, David Lo
|
Adversarial Attacks on Code Models with Discriminative Graph Patterns
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained language models of code are now widely used in various software
engineering tasks such as code generation, code completion, vulnerability
detection, etc. This, in turn, poses security and reliability risks to these
models. One of the important threats is \textit{adversarial attacks}, which can
lead to erroneous predictions and largely affect model performance on
downstream tasks. Current adversarial attacks on code models usually adopt
fixed sets of program transformations, such as variable renaming and dead code
insertion, leading to limited attack effectiveness. To address the
aforementioned challenges, we propose a novel adversarial attack framework,
GraphCodeAttack, to better evaluate the robustness of code models. Given a
target code model, GraphCodeAttack automatically mines important code patterns,
which can influence the model's decisions, to perturb the structure of input
code to the model. To do so, GraphCodeAttack uses a set of input source codes
to probe the model's outputs and identifies the \textit{discriminative} AST patterns that can influence the model's decisions. GraphCodeAttack then selects
appropriate AST patterns, concretizes the selected patterns as attacks, and
inserts them as dead code into the model's input program. To effectively
synthesize attacks from AST patterns, GraphCodeAttack uses a separate
pre-trained code model to fill in the ASTs with concrete code snippets. We
evaluate the robustness of two popular code models (e.g., CodeBERT and
GraphCodeBERT) against our proposed approach on three tasks: Authorship
Attribution, Vulnerability Prediction, and Clone Detection. The experimental
results suggest that our proposed approach significantly outperforms
state-of-the-art approaches in attacking code models such as CARROT and ALERT.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 03:40:34 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Nguyen",
"Thanh-Dat",
"",
"Pick"
],
[
"Zhou",
"Yang",
"",
"Pick"
],
[
"Le",
"Xuan Bach D.",
"",
"Pick"
],
[
"Patanamon",
"",
"",
"Pick"
],
[
"Thongtanunam",
"",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.998141 |
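GraphCodeAttack perturbs a model's input program by inserting mined AST patterns as dead code. The sketch below is only a simplified illustration of that insertion step for Python source (the paper mines discriminative patterns and targets models such as CodeBERT and GraphCodeBERT; the fixed `if False:` snippet and the function name here are hypothetical):

```python
import ast
import copy

def insert_dead_code(source: str, snippet: str = "if False:\n    _unused = 0") -> str:
    """Insert a semantics-preserving dead-code block at the top of every function body."""
    tree = ast.parse(source)
    dead = ast.parse(snippet).body
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.body = [copy.deepcopy(stmt) for stmt in dead] + node.body
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

print(insert_dead_code("def add(a, b):\n    return a + b"))
```

Because the inserted block never executes, the program's behavior is unchanged while the token and AST sequence seen by the code model is perturbed.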
2308.11194
|
Maya Varma
|
Maya Varma, Jean-Benoit Delbrouck, Sarah Hooper, Akshay Chaudhari,
Curtis Langlotz
|
ViLLA: Fine-Grained Vision-Language Representation Learning from
Real-World Data
|
ICCV 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-language models (VLMs), such as CLIP and ALIGN, are generally trained
on datasets consisting of image-caption pairs obtained from the web. However,
real-world multimodal datasets, such as healthcare data, are significantly more
complex: each image (e.g. X-ray) is often paired with text (e.g. physician
report) that describes many distinct attributes occurring in fine-grained
regions of the image. We refer to these samples as exhibiting high pairwise
complexity, since each image-text pair can be decomposed into a large number of
region-attribute pairings. The extent to which VLMs can capture fine-grained
relationships between image regions and textual attributes when trained on such
data has not been previously evaluated. The first key contribution of this work
is to demonstrate through systematic evaluations that as the pairwise
complexity of the training dataset increases, standard VLMs struggle to learn
region-attribute relationships, exhibiting performance degradations of up to
37% on retrieval tasks. In order to address this issue, we introduce ViLLA as
our second key contribution. ViLLA, which is trained to capture fine-grained
region-attribute relationships from complex datasets, involves two components:
(a) a lightweight, self-supervised mapping model to decompose image-text
samples into region-attribute pairs, and (b) a contrastive VLM to learn
representations from generated region-attribute pairs. We demonstrate with
experiments across four domains (synthetic, product, medical, and natural
images) that ViLLA outperforms comparable VLMs on fine-grained reasoning tasks,
such as zero-shot object detection (up to 3.6 AP50 points on COCO and 0.6 mAP
points on LVIS) and retrieval (up to 14.2 R-Precision points).
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 05:03:09 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Varma",
"Maya",
""
],
[
"Delbrouck",
"Jean-Benoit",
""
],
[
"Hooper",
"Sarah",
""
],
[
"Chaudhari",
"Akshay",
""
],
[
"Langlotz",
"Curtis",
""
]
] |
new_dataset
| 0.999101 |
2308.11199
|
Donghoon Han
|
Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong
and Nojun Kwak
|
ConcatPlexer: Additional Dim1 Batching for Faster ViTs
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications. Yet, the superior
performance and modeling flexibility of transformers came with a severe
increase in computation costs, and hence several works have proposed methods to
reduce this burden. Inspired by a cost-cutting method originally proposed for
language models, Data Multiplexing (DataMUX), we propose a novel approach for
efficient visual recognition that employs additional dim1 batching (i.e.,
concatenation) that greatly improves the throughput with little compromise in
the accuracy. We first introduce a naive adaptation of DataMux for vision
models, Image Multiplexer, and devise novel components to overcome its
weaknesses, rendering our final model, ConcatPlexer, at the sweet spot between
inference speed and accuracy. The ConcatPlexer was trained on the ImageNet1K and CIFAR100 datasets and achieved 23.5% fewer GFLOPs than ViT-B/16, with 69.5% and 83.4% validation accuracy, respectively.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 05:21:31 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Han",
"Donghoon",
""
],
[
"Seo",
"Seunghyeon",
""
],
[
"Jeon",
"Donghyeon",
""
],
[
"Jang",
"Jiho",
""
],
[
"Kong",
"Chaerin",
""
],
[
"Kwak",
"Nojun",
""
]
] |
new_dataset
| 0.954956 |
2308.11206
|
Xujie Zhang
|
Xujie Zhang, Binbin Yang, Michael C. Kampffmeyer, Wenqing Zhang,
Shiyue Zhang, Guansong Lu, Liang Lin, Hang Xu, Xiaodan Liang
|
DiffCloth: Diffusion Based Garment Synthesis and Manipulation via
Structural Cross-modal Semantic Alignment
|
accepted by ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cross-modal garment synthesis and manipulation will significantly benefit the
way fashion designers generate garments and modify their designs via flexible
linguistic interfaces. Current approaches follow the general text-to-image
paradigm and mine cross-modal relations via simple cross-attention modules,
neglecting the structural correspondence between visual and textual
representations in the fashion design domain. In this work, we instead
introduce DiffCloth, a diffusion-based pipeline for cross-modal garment
synthesis and manipulation, which empowers diffusion models with flexible
compositionality in the fashion domain by structurally aligning the cross-modal
semantics. Specifically, we formulate the part-level cross-modal alignment as a
bipartite matching problem between the linguistic Attribute-Phrases (AP) and
the visual garment parts which are obtained via constituency parsing and
semantic segmentation, respectively. To mitigate the issue of attribute
confusion, we further propose a semantic-bundled cross-attention to preserve
the spatial structure similarities between the attention maps of attribute
adjectives and part nouns in each AP. Moreover, DiffCloth allows for
manipulation of the generated results by simply replacing APs in the text
prompts. The manipulation-irrelevant regions are recognized by blended masks
obtained from the bundled attention maps of the APs and kept unchanged.
Extensive experiments on the CM-Fashion benchmark demonstrate that DiffCloth
both yields state-of-the-art garment synthesis results by leveraging the
inherent structural information and supports flexible manipulation with region
consistency.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 05:43:33 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Zhang",
"Xujie",
""
],
[
"Yang",
"Binbin",
""
],
[
"Kampffmeyer",
"Michael C.",
""
],
[
"Zhang",
"Wenqing",
""
],
[
"Zhang",
"Shiyue",
""
],
[
"Lu",
"Guansong",
""
],
[
"Lin",
"Liang",
""
],
[
"Xu",
"Hang",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.991047 |
2308.11223
|
Francesco Pittaluga
|
Francesco Pittaluga and Bingbing Zhuang
|
LDP-Feat: Image Features with Local Differential Privacy
|
11 pages, 4 figures, to be published in International Conference on
Computer Vision (ICCV) 2023
| null | null | null |
cs.CV cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern computer vision services often require users to share raw feature
descriptors with an untrusted server. This presents an inherent privacy risk,
as raw descriptors may be used to recover the source images from which they
were extracted. To address this issue, researchers recently proposed
privatizing image features by embedding them within an affine subspace
containing the original feature as well as adversarial feature samples. In this
paper, we propose two novel inversion attacks to show that it is possible to
(approximately) recover the original image features from these embeddings,
allowing us to recover privacy-critical image content. In light of such
successes and the lack of theoretical privacy guarantees afforded by existing
visual privacy methods, we further propose the first method to privatize image
features via local differential privacy, which, unlike prior approaches,
provides a guaranteed bound for privacy leakage regardless of the strength of
the attacks. In addition, our method yields strong performance in visual
localization as a downstream task while enjoying the privacy guarantee.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 06:28:55 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Pittaluga",
"Francesco",
""
],
[
"Zhuang",
"Bingbing",
""
]
] |
new_dataset
| 0.988777 |
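LDP-Feat privatizes image features under local differential privacy, which bounds privacy leakage regardless of attack strength. The paper's actual mechanism is not detailed in the abstract, so the sketch below shows only the textbook Laplace mechanism on a clipped feature vector (epsilon, the clipping bound, and the function name are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)

def laplace_ldp(feature: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
    """Release a feature vector with an epsilon-LDP guarantee via the Laplace mechanism.

    Clipping each coordinate to [-clip, clip] bounds the L1 distance between any two
    possible inputs by 2 * clip * d, so Laplace noise with scale (2 * clip * d) / epsilon
    per coordinate yields epsilon-local differential privacy for the released vector.
    """
    x = np.clip(feature, -clip, clip)
    scale = 2.0 * clip * x.size / epsilon
    return x + rng.laplace(0.0, scale, size=x.shape)

descriptor = rng.standard_normal(128)          # stand-in for a local image feature
private = laplace_ldp(descriptor, epsilon=1.0)
```

The noise scale grows with the feature dimension, which is the utility cost that makes dimension reduction and careful sensitivity bounds important in practice.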
2308.11225
|
Anes Bendimerad
|
Anes Bendimerad, Youcef Remil, Romain Mathonat, Mehdi Kaytoue
|
On-Premise AIOps Infrastructure for a Software Editor SME: An Experience
Report
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information Technology has become a critical component in various industries,
leading to an increased focus on software maintenance and monitoring. With the
complexities of modern software systems, traditional maintenance approaches
have become insufficient. The concept of AIOps has emerged to enhance
predictive maintenance using Big Data and Machine Learning capabilities.
However, exploiting AIOps requires addressing several challenges related to the
complexity of data and incident management. Commercial solutions exist, but
they may not be suitable for certain companies due to high costs, data
governance issues, and limitations in covering private software. This paper
investigates the feasibility of implementing on-premise AIOps solutions by
leveraging open-source tools. We introduce a comprehensive AIOps infrastructure
that we have successfully deployed in our company, and we provide the rationale
behind different choices that we made to build its various components.
Particularly, we provide insights into our approach and criteria for selecting
a data management system and we explain its integration. Our experience can be
beneficial for companies seeking to internally manage their software
maintenance processes with a modern AIOps approach.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 06:47:36 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Bendimerad",
"Anes",
""
],
[
"Remil",
"Youcef",
""
],
[
"Mathonat",
"Romain",
""
],
[
"Kaytoue",
"Mehdi",
""
]
] |
new_dataset
| 0.996888 |
2308.11228
|
Dan Solodar
|
Dan Solodar and Itzik Klein
|
VIO-DualProNet: Visual-Inertial Odometry with Learning Based Process
Noise Covariance
|
10 pages, 15 figures, bib file
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual-inertial odometry (VIO) is a vital technique used in robotics,
augmented reality, and autonomous vehicles. It combines visual and inertial
measurements to accurately estimate position and orientation. Existing VIO
methods assume a fixed noise covariance for the inertial uncertainty. However,
accurately determining the noise variance of the inertial sensors in real time
presents a significant challenge, as the uncertainty changes throughout the
operation, leading to suboptimal performance and reduced accuracy. To circumvent
this, we propose VIO-DualProNet, a novel approach that utilizes deep learning
methods to dynamically estimate the inertial noise uncertainty in real-time. By
designing and training a deep neural network to predict inertial noise
uncertainty using only inertial sensor measurements, and integrating it into
the VINS-Mono algorithm, we demonstrate a substantial improvement in accuracy
and robustness, enhancing VIO performance and potentially benefiting other
VIO-based systems for precise localization and mapping across diverse
conditions.
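As a rough illustration of the idea, the sketch below maps a window of raw IMU samples to per-axis noise variances that an estimator could use as process noise. The layer sizes, window length, and output parameterization are assumptions for illustration; the actual DualProNet architecture and its integration with VINS-Mono follow the paper.

```python
# Illustrative sketch only, not the VIO-DualProNet architecture: a small 1D-CNN
# that predicts per-axis inertial noise variances from a window of IMU samples.
import torch
import torch.nn as nn

class NoiseCovNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 6)             # one variance per accel/gyro axis

    def forward(self, imu_window):               # imu_window: (batch, 6, T)
        z = self.encoder(imu_window).squeeze(-1)
        return nn.functional.softplus(self.head(z)) + 1e-8   # keep variances positive

net = NoiseCovNet()
variances = net(torch.randn(1, 6, 200))          # could populate the diagonal of the process-noise matrix Q
print(variances.shape)                           # torch.Size([1, 6])
```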
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 06:54:42 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Solodar",
"Dan",
""
],
[
"Klein",
"Itzik",
""
]
] |
new_dataset
| 0.989658 |
2308.11240
|
Rameshwar Pratap
|
Rameshwar Pratap and Raghav Kulkarni
|
Minwise-Independent Permutations with Insertion and Deletion of Features
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In their seminal work, Broder \textit{et al.}~\citep{BroderCFM98} introduce
the $\mathrm{minHash}$ algorithm, which computes a low-dimensional sketch of
high-dimensional binary data that closely approximates pairwise Jaccard
similarity. Since its invention, $\mathrm{minHash}$ has been commonly used by
practitioners in various big data applications. Further, data is dynamic in
many real-life scenarios, and feature sets evolve over time. We consider
the case when features are dynamically inserted and deleted in the dataset. We
note that a naive solution to this problem is to repeatedly recompute
$\mathrm{minHash}$ with respect to the updated dimension. However, this is an
expensive task as it requires generating fresh random permutations. To the best
of our knowledge, no systematic study of $\mathrm{minHash}$ is recorded in the
context of dynamic insertion and deletion of features. In this work, we
initiate this study and suggest algorithms that make the $\mathrm{minHash}$
sketches adaptable to the dynamic insertion and deletion of features. We show a
rigorous theoretical analysis of our algorithms and complement it with
extensive experiments on several real-world datasets. Empirically we observe a
significant speed-up in the running time while simultaneously offering
comparable performance with respect to running $\mathrm{minHash}$ from scratch.
Our proposal is efficient, accurate, and easy to implement in practice.
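For context, the snippet below reproduces the classic minHash signature the abstract builds on (using universal hashing in place of explicit permutations); the paper's dynamic insertion/deletion algorithms are not shown. Recomputing such a signature after every dimension change requires drawing fresh hash parameters, which is exactly the naive baseline the proposed updates avoid.

```python
# Classic minHash sketch; hash-family parameters are illustrative.
import random

def minhash_signature(feature_set, num_hashes=64, prime=2_147_483_647, seed=0):
    rng = random.Random(seed)
    params = [(rng.randrange(1, prime), rng.randrange(0, prime)) for _ in range(num_hashes)]
    return [min((a * f + b) % prime for f in feature_set) for a, b in params]

def estimated_jaccard(sig_a, sig_b):
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

A = {1, 3, 5, 7, 9, 11}
B = {1, 3, 5, 8, 9, 12}
print(estimated_jaccard(minhash_signature(A), minhash_signature(B)))  # close to the true 4/8 = 0.5
```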
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 07:27:45 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Pratap",
"Rameshwar",
""
],
[
"Kulkarni",
"Raghav",
""
]
] |
new_dataset
| 0.957024 |
2308.11258
|
Stefano Zacchiroli
|
Jes\'us M. Gonz\'alez-Barahona (URJC), Sergio Montes-Leon (URJC),
Gregorio Robles (URJC), Stefano Zacchiroli (IP Paris, LTCI)
|
The Software Heritage License Dataset (2022 Edition)
| null |
Empirical Software Engineering, In press
|
10.1007/s10664-023-10377-w
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context: When software is released publicly, it is common to include with it
either the full text of the license or licenses under which it is published, or
a detailed reference to them. Therefore public licenses, including FOSS (free,
open source software) licenses, are usually publicly available in source code
repositories. Objective: To compile a dataset containing as many documents as
possible that contain the text of software licenses, or references to the
license terms. Once compiled, characterize the dataset so that it can be used
for further research, or practical purposes related to license analysis.
Method: Retrieve from Software Heritage, the largest publicly available archive
of FOSS source code, all versions of all files whose names are commonly used to convey
licensing terms. All retrieved documents will be characterized in various ways,
using automated and manual analyses. Results: The dataset consists of 6.9
million unique license files. Additional metadata about shipped license files
is also provided, making the dataset ready to use in various contexts,
including: file length measures, MIME type, SPDX license (detected using
ScanCode), and oldest appearance. The results of a manual analysis of 8102
documents are also included, providing a ground truth for further analysis. The
dataset is released as open data as an archive file containing all deduplicated
license files, plus several portable CSV files with metadata, referencing files
via cryptographic checksums. Conclusions: Thanks to the extensive coverage of
Software Heritage, the dataset presented in this paper covers a very large
fraction of all software licenses for public code. We have assembled a large
body of software licenses, characterized it quantitatively and qualitatively,
and validated that it is mostly composed of licensing information and includes
almost all known license texts. The dataset can be used to conduct empirical
studies on open source licensing, training of automated license classifiers,
natural language processing (NLP) analyses of legal texts, as well as
historical and phylogenetic studies on FOSS licensing. It can also be used in
practice to improve tools detecting licenses in source code.
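A hypothetical usage sketch follows; the column names ("swhid", "spdx_license", "length") are assumptions for illustration and not the published schema, and the inline rows stand in for what would normally be loaded from the released CSV files.

```python
# Hypothetical sketch of querying the dataset's portable CSV metadata.
import pandas as pd

meta = pd.DataFrame([                             # stand-in rows; in practice use pd.read_csv on the release
    {"swhid": "swh:1:cnt:aaa...", "spdx_license": "GPL-3.0-only", "length": 35147},
    {"swhid": "swh:1:cnt:bbb...", "spdx_license": "MIT",          "length": 1069},
    {"swhid": "swh:1:cnt:ccc...", "spdx_license": "Apache-2.0",   "length": 11357},
])

by_license = meta.groupby("spdx_license")["length"].agg(["count", "median"])
print(by_license)
```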
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 08:01:07 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"González-Barahona",
"Jesús M.",
"",
"URJC"
],
[
"Montes-Leon",
"Sergio",
"",
"URJC"
],
[
"Robles",
"Gregorio",
"",
"URJC"
],
[
"Zacchiroli",
"Stefano",
"",
"IP Paris, LTCI"
]
] |
new_dataset
| 0.999865 |
2308.11268
|
Shih-Hao Lu
|
Shih-Hao Lu, Char-Dir Chung, Wei-Chang Chen, and Ping-Feng Tsou
|
Orthogonal Constant-Amplitude Sequence Families for System Parameter
Identification in Spectrally Compact OFDM
|
15 pages, 4 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In rectangularly-pulsed orthogonal frequency division multiplexing (OFDM)
systems, constant-amplitude (CA) sequences are desirable to construct
preamble/pilot waveforms to facilitate system parameter identification (SPI).
Orthogonal CA sequences are generally preferred in various SPI applications
like random-access channel identification. However, the number of conventional
orthogonal CA sequences (e.g., Zadoff-Chu sequences) that can be adopted in
cellular communication without causing sequence identification ambiguity is
insufficient. Such insufficiency causes heavy performance degradation for SPI
requiring a large number of identification sequences. Moreover,
rectangularly-pulsed OFDM preamble/pilot waveforms carrying conventional CA
sequences suffer from large power spectral sidelobes and thus exhibit low
spectral compactness. This paper is thus motivated to develop several order-I
CA sequence families which contain more orthogonal CA sequences while endowing
the corresponding OFDM preamble/pilot waveforms with fast-decaying spectral
sidelobes. Since more orthogonal sequences are provided, the developed order-I
CA sequence families can enhance the performance characteristics in SPI
requiring a large number of identification sequences over multipath channels
exhibiting short-delay channel profiles, while composing spectrally compact
OFDM preamble/pilot waveforms.
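For reference, the snippet below generates one of the conventional Zadoff-Chu sequences mentioned as the baseline and checks the two properties at stake: constant amplitude and orthogonality of cyclic shifts. The paper's order-I CA families are not reproduced; the sequence length and root are illustrative.

```python
# Conventional Zadoff-Chu constant-amplitude sequence (baseline, not the proposed families).
import numpy as np

def zadoff_chu(root, N):
    """Length-N (odd) Zadoff-Chu sequence; root must be coprime to N."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)

N = 63
x = zadoff_chu(root=1, N=N)
print(np.allclose(np.abs(x), 1.0))                     # constant amplitude
# ideal periodic autocorrelation: every nonzero cyclic shift is orthogonal to x
shifts = np.array([np.vdot(x, np.roll(x, s)) for s in range(1, N)])
print(np.max(np.abs(shifts)) < 1e-9)
```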
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 08:25:28 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Lu",
"Shih-Hao",
""
],
[
"Chung",
"Char-Dir",
""
],
[
"Chen",
"Wei-Chang",
""
],
[
"Tsou",
"Ping-Feng",
""
]
] |
new_dataset
| 0.964694 |
2308.11276
|
Shansong Liu
|
Shansong Liu, Atin Sakkeer Hussain, Chenshuo Sun, Ying Shan
|
Music Understanding LLaMA: Advancing Text-to-Music Generation with
Question Answering and Captioning
| null | null | null | null |
cs.SD cs.AI cs.CL cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-music generation (T2M-Gen) faces a major obstacle due to the scarcity
of large-scale publicly available music datasets with natural language
captions. To address this, we propose the Music Understanding LLaMA (MU-LLaMA),
capable of answering music-related questions and generating captions for music
files. Our model utilizes audio representations from a pretrained MERT model to
extract music features. However, obtaining a suitable dataset for training the
MU-LLaMA model remains challenging, as existing publicly accessible audio
question answering datasets lack the necessary depth for open-ended music
question answering. To fill this gap, we present a methodology for generating
question-answer pairs from existing audio captioning datasets and introduce the
MusicQA Dataset designed for answering open-ended music-related questions. The
experiments demonstrate that the proposed MU-LLaMA model, trained on our
designed MusicQA dataset, achieves outstanding performance in both music
question answering and music caption generation across various metrics,
outperforming current state-of-the-art (SOTA) models in both fields and
offering a promising advancement in the T2M-Gen research field.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 08:43:33 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Liu",
"Shansong",
""
],
[
"Hussain",
"Atin Sakkeer",
""
],
[
"Sun",
"Chenshuo",
""
],
[
"Shan",
"Ying",
""
]
] |
new_dataset
| 0.984335 |
2308.11277
|
Hubert Mara
|
Ernst St\"otzner, Timo Homburg and Hubert Mara
|
CNN based Cuneiform Sign Detection Learned from Annotated 3D Renderings
and Mapped Photographs with Illumination Augmentation
|
This paper was accepted to ICCV23 and includes the DOI for an Open
Access Dataset with annotated cuneiform script
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Motivated by the challenges of the Digital Ancient Near Eastern Studies
(DANES) community, we develop digital tools for processing cuneiform script,
a 3D script imprinted into clay tablets that was used for more than three
millennia and for at least eight major languages. It consists of thousands of
characters that have changed over time and space. Photographs are the most
common representations usable for machine learning, while ink drawings are
prone to interpretation. Best suited are 3D datasets, which are becoming available.
We created and used the HeiCuBeDa and MaiCuBeDa datasets, which consist of
around 500 annotated tablets. For our novel OCR-like approach to mixed image
data, we provide an additional mapping tool for transferring annotations
between 3D renderings and photographs. Our sign localization uses a RepPoints
detector to predict the locations of characters as bounding boxes. We use image
data from GigaMesh's MSII (curvature, see https://gigamesh.eu) based rendering,
Phong-shaded 3D models, and photographs as well as illumination augmentation.
The results show that using rendered 3D images for sign detection performs
better than other work on photographs. In addition, our approach gives
reasonably good results for photographs only, while it is best used for mixed
datasets. More importantly, the Phong renderings, and especially the MSII
renderings, improve the results on photographs, which constitute the largest
dataset on a global scale.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 08:46:30 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Stötzner",
"Ernst",
""
],
[
"Homburg",
"Timo",
""
],
[
"Mara",
"Hubert",
""
]
] |
new_dataset
| 0.991772 |
2308.11322
|
Xin Li
|
Xin Li, Yuqing Huang, Zhenyu He, Yaowei Wang, Huchuan Lu, Ming-Hsuan
Yang
|
CiteTracker: Correlating Image and Text for Visual Tracking
|
accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing visual tracking methods typically take an image patch as the
reference of the target to perform tracking. However, a single image patch
cannot provide a complete and precise concept of the target object as images
are limited in their ability to abstract and can be ambiguous, which makes it
difficult to track targets with drastic variations. In this paper, we propose
the CiteTracker to enhance target modeling and inference in visual tracking by
connecting images and text. Specifically, we develop a text generation module
to convert the target image patch into a descriptive text containing its class
and attribute information, providing a comprehensive reference point for the
target. In addition, a dynamic description module is designed to adapt to
target variations for more effective target representation. We then associate
the target description and the search image using an attention-based
correlation module to generate the correlated features for target state
reference. Extensive experiments on five diverse datasets are conducted to
evaluate the proposed algorithm, and the favorable performance against the
state-of-the-art methods demonstrates the effectiveness of the proposed
tracking method.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 09:53:12 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Li",
"Xin",
""
],
[
"Huang",
"Yuqing",
""
],
[
"He",
"Zhenyu",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] |
new_dataset
| 0.999293 |
2308.11351
|
Tao Chen
|
Tao Chen, Ze Lin, Hui Li, Jiayi Ji, Yiyi Zhou, Guanbin Li and Rongrong
Ji
|
M3PS: End-to-End Multi-Grained Multi-Modal Attribute-Aware Product
Summarization in E-commerce
| null | null | null | null |
cs.MM cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the long textual product information and the product image, Multi-Modal
Product Summarization (MMPS) aims to attract customers' interest and increase
their desire to purchase by highlighting product characteristics with a short
textual summary. Existing MMPS methods have achieved promising performance.
Nevertheless, several problems remain: 1) lack of end-to-end product
summarization, 2) lack of multi-grained multi-modal modeling, and 3) lack of
multi-modal attribute modeling. To address these issues, we propose an
end-to-end multi-grained multi-modal attribute-aware product summarization
method (M3PS) for generating high-quality product summaries in e-commerce. M3PS
jointly models product attributes and generates product summaries. Meanwhile,
we design several multi-grained multi-modal tasks to better guide the
multi-modal learning of M3PS. Furthermore, we model product attributes based on
both text and image modalities so that multi-modal product characteristics can
be manifested in the generated summaries. Extensive experiments on a real
large-scale Chinese e-commerce dataset demonstrate that our model outperforms
state-of-the-art product summarization methods w.r.t. several summarization
metrics.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 11:00:09 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Chen",
"Tao",
""
],
[
"Lin",
"Ze",
""
],
[
"Li",
"Hui",
""
],
[
"Ji",
"Jiayi",
""
],
[
"Zhou",
"Yiyi",
""
],
[
"Li",
"Guanbin",
""
],
[
"Ji",
"Rongrong",
""
]
] |
new_dataset
| 0.999265 |
2308.11379
|
Ittay Eyal
|
Ittai Abraham, Danny Dolev, Ittay Eyal, Joseph Y. Halpern
|
Colordag: An Incentive-Compatible Blockchain
|
To be published in DISC 2023
| null | null | null |
cs.GT cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We present Colordag, a blockchain protocol where following the prescribed
strategy is, with high probability, a best response as long as all miners have
less than 1/2 of the mining power. We prove the correctness of Colordag even if
there is an extremely powerful adversary who knows future actions of the
scheduler: specifically, when agents will generate blocks and when messages
will arrive. The state-of-the-art protocol, Fruitchain, is an epsilon-Nash
equilibrium as long as all miners have less than 1/2 of the mining power.
However, there is a simple deviation that guarantees that deviators are never
worse off than they would be by following Fruitchain, and can sometimes do
better. Thus, agents are motivated to deviate. Colordag implements a solution
concept that we call epsilon-sure Nash equilibrium and does not suffer from
this problem. Because it is an epsilon-sure Nash equilibrium, Colordag is an
epsilon-Nash equilibrium and, with probability (1 - epsilon), is a best response.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 12:08:20 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Abraham",
"Ittai",
""
],
[
"Dolev",
"Danny",
""
],
[
"Eyal",
"Ittay",
""
],
[
"Halpern",
"Joseph Y.",
""
]
] |
new_dataset
| 0.998186 |
2308.11417
|
Chandan Yeshwanth
|
Chandan Yeshwanth, Yueh-Cheng Liu, Matthias Nie{\ss}ner, Angela Dai
|
ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes
|
ICCV 2023. Video: https://youtu.be/E6P9e2r6M8I , Project page:
https://cy94.github.io/scannetpp/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ScanNet++, a large-scale dataset that couples together capture of
high-quality and commodity-level geometry and color of indoor scenes. Each
scene is captured with a high-end laser scanner at sub-millimeter resolution,
along with registered 33-megapixel images from a DSLR camera, and RGB-D streams
from an iPhone. Scene reconstructions are further annotated with an open
vocabulary of semantics, with label-ambiguous scenarios explicitly annotated
for comprehensive semantic understanding. ScanNet++ enables a new real-world
benchmark for novel view synthesis, both from high-quality RGB capture, and
importantly also from commodity-level images, in addition to a new benchmark
for 3D semantic scene understanding that comprehensively encapsulates diverse
and ambiguous semantic labeling scenarios. Currently, ScanNet++ contains 460
scenes, 280,000 captured DSLR images, and over 3.7M iPhone RGBD frames.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 13:02:23 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Yeshwanth",
"Chandan",
""
],
[
"Liu",
"Yueh-Cheng",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Dai",
"Angela",
""
]
] |
new_dataset
| 0.999825 |
2308.11421
|
Alexander Wong
|
Alexander Wong, Saad Abbasi, Saeejith Nair
|
TurboViT: Generating Fast Vision Transformers via Generative
Architecture Search
|
5 pages
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers have shown unprecedented levels of performance in
tackling various visual perception tasks in recent years. However, the
architectural and computational complexity of such network architectures have
made them challenging to deploy in real-world applications with
high-throughput, low-memory requirements. As such, there has been significant
research recently on the design of efficient vision transformer architectures.
In this study, we explore the generation of fast vision transformer
architecture designs via generative architecture search (GAS) to achieve a
strong balance between accuracy and architectural and computational efficiency.
Through this generative architecture search process, we create TurboViT, a
highly efficient hierarchical vision transformer architecture design that is
generated around mask unit attention and Q-pooling design patterns. The
resulting TurboViT architecture design achieves significantly lower
architectural computational complexity (>2.47$\times$ smaller than FasterViT-0
while achieving the same accuracy) and computational complexity (>3.4$\times$ fewer
FLOPs and 0.9% higher accuracy than MobileViT2-2.0) when compared to 10 other
state-of-the-art efficient vision transformer network architecture designs
within a similar range of accuracy on the ImageNet-1K dataset. Furthermore,
TurboViT demonstrated strong inference latency and throughput in both
low-latency and batch processing scenarios (>3.21$\times$ lower latency and
>3.18$\times$ higher throughput compared to FasterViT-0 for low-latency
scenario). These promising results demonstrate the efficacy of leveraging
generative architecture search for generating efficient transformer
architecture designs for high-throughput scenarios.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 13:08:29 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Wong",
"Alexander",
""
],
[
"Abbasi",
"Saad",
""
],
[
"Nair",
"Saeejith",
""
]
] |
new_dataset
| 0.975995 |
2308.11424
|
Makayla Lewis
|
Makayla Lewis
|
AIxArtist: A First-Person Tale of Interacting with Artificial
Intelligence to Escape Creative Block
|
1st International Workshop on Explainable AI for the Arts (XAIxArts),
ACM Creativity and Cognition (C&C) 2023. Online, 6 pages.
https://xaixarts.github.io
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The future of the arts and artificial intelligence (AI) is promising as
technology advances. As the use of AI in design becomes more widespread, art
practice may not be a human-only art form and could instead become a digitally
integrated experience. With enhanced creativity and collaboration, arts and AI
could work together towards creating artistic outputs that are visually
appealing and meet the needs of the artist and viewer. While it is uncertain
how far the integration will go, arts and AI will likely influence one another.
This workshop pictorial puts forward first-person research that shares
interactions between an HCI researcher and AI as they try to escape the
creative block. The pictorial paper explores two questions: How can AI support
artists' creativity, and what does it mean to be explainable in this context?
HIs, ChatGPT and Midjourney were engaged; the result was a series of
reflections that require further discussion and explorations in the XAIxArts
community: Transparency of attribution, the creation process, ethics of asking,
and inspiration vs copying.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 13:15:29 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Lewis",
"Makayla",
""
]
] |
new_dataset
| 0.982417 |
2308.11462
|
Neel Guha
|
Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher R\'e, Adam
Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon,
Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani,
Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason
Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi,
Kevin Tobia, Margaret Hagan, Megan Ma, Michael Livermore, Nikon Rasumov-Rahe,
Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel,
Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, and Zehua Li
|
LegalBench: A Collaboratively Built Benchmark for Measuring Legal
Reasoning in Large Language Models
|
143 pages, 79 tables, 4 figures
| null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The advent of large language models (LLMs) and their adoption by the legal
community has given rise to the question: what types of legal reasoning can
LLMs perform? To enable greater study of this question, we present LegalBench:
a collaboratively constructed legal reasoning benchmark consisting of 162 tasks
covering six different types of legal reasoning. LegalBench was built through
an interdisciplinary process, in which we collected tasks designed and
hand-crafted by legal professionals. Because these subject matter experts took
a leading role in construction, tasks either measure legal reasoning
capabilities that are practically useful, or measure reasoning skills that
lawyers find interesting. To enable cross-disciplinary conversations about LLMs
in the law, we additionally show how popular legal frameworks for describing
legal reasoning -- which distinguish between its many forms -- correspond to
LegalBench tasks, thus giving lawyers and LLM developers a common vocabulary.
This paper describes LegalBench, presents an empirical evaluation of 20
open-source and commercial LLMs, and illustrates the types of research
explorations LegalBench enables.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 22:08:03 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Guha",
"Neel",
""
],
[
"Nyarko",
"Julian",
""
],
[
"Ho",
"Daniel E.",
""
],
[
"Ré",
"Christopher",
""
],
[
"Chilton",
"Adam",
""
],
[
"Narayana",
"Aditya",
""
],
[
"Chohlas-Wood",
"Alex",
""
],
[
"Peters",
"Austin",
""
],
[
"Waldon",
"Brandon",
""
],
[
"Rockmore",
"Daniel N.",
""
],
[
"Zambrano",
"Diego",
""
],
[
"Talisman",
"Dmitry",
""
],
[
"Hoque",
"Enam",
""
],
[
"Surani",
"Faiz",
""
],
[
"Fagan",
"Frank",
""
],
[
"Sarfaty",
"Galit",
""
],
[
"Dickinson",
"Gregory M.",
""
],
[
"Porat",
"Haggai",
""
],
[
"Hegland",
"Jason",
""
],
[
"Wu",
"Jessica",
""
],
[
"Nudell",
"Joe",
""
],
[
"Niklaus",
"Joel",
""
],
[
"Nay",
"John",
""
],
[
"Choi",
"Jonathan H.",
""
],
[
"Tobia",
"Kevin",
""
],
[
"Hagan",
"Margaret",
""
],
[
"Ma",
"Megan",
""
],
[
"Livermore",
"Michael",
""
],
[
"Rasumov-Rahe",
"Nikon",
""
],
[
"Holzenberger",
"Nils",
""
],
[
"Kolt",
"Noam",
""
],
[
"Henderson",
"Peter",
""
],
[
"Rehaag",
"Sean",
""
],
[
"Goel",
"Sharad",
""
],
[
"Gao",
"Shang",
""
],
[
"Williams",
"Spencer",
""
],
[
"Gandhi",
"Sunny",
""
],
[
"Zur",
"Tom",
""
],
[
"Iyer",
"Varun",
""
],
[
"Li",
"Zehua",
""
]
] |
new_dataset
| 0.998194 |
2308.11484
|
Caroline Malin-Mayor
|
Caroline Malin-Mayor, Vida Adeli, Andrea Sabo, Sergey Noritsyn,
Carolina Gorodetsky, Alfonso Fasano, Andrea Iaboni, Babak Taati
|
Pose2Gait: Extracting Gait Features from Monocular Video of Individuals
with Dementia
|
14 pages, 3 figures. Code is available at
https://github.com/TaatiTeam/pose2gait_public . To be published at the
Ambient Intelligence for Health Care Workshop at MICCAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video-based ambient monitoring of gait for older adults with dementia has the
potential to detect negative changes in health and allow clinicians and
caregivers to intervene early to prevent falls or hospitalizations. Computer
vision-based pose tracking models can process video data automatically and
extract joint locations; however, publicly available models are not optimized
for gait analysis on older adults or clinical populations. In this work we
train a deep neural network to map from a two dimensional pose sequence,
extracted from a video of an individual walking down a hallway toward a
wall-mounted camera, to a set of three-dimensional spatiotemporal gait features
averaged over the walking sequence. The data of individuals with dementia used
in this work was captured at two sites using a wall-mounted system to collect
the video and depth information used to train and evaluate our model. Our
Pose2Gait model is able to extract velocity and step length values from the
video that are correlated with the features from the depth camera, with
Spearman's correlation coefficients of .83 and .60 respectively, showing that
three dimensional spatiotemporal features can be predicted from monocular
video. Future work remains to improve the accuracy of other features, such as
step time and step width, and test the utility of the predicted values for
detecting meaningful changes in gait during longitudinal ambient monitoring.
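As a toy illustration of the task setup (not the Pose2Gait architecture), the sketch below regresses a couple of scalar gait features from a 2D pose sequence; the joint count, sequence length, and feature list are assumptions.

```python
# Toy regressor from a 2D pose sequence to gait features (illustrative only).
import torch
import torch.nn as nn

NUM_JOINTS, SEQ_LEN, NUM_FEATURES = 17, 120, 2        # e.g. velocity and step length

class PoseToGait(nn.Module):
    def __init__(self):
        super().__init__()
        self.temporal = nn.GRU(input_size=NUM_JOINTS * 2, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, NUM_FEATURES)

    def forward(self, poses):                          # poses: (batch, SEQ_LEN, NUM_JOINTS, 2)
        flat = poses.flatten(start_dim=2)              # (batch, SEQ_LEN, NUM_JOINTS*2)
        _, h = self.temporal(flat)                     # final hidden state summarizes the walk
        return self.head(h.squeeze(0))                 # (batch, NUM_FEATURES)

model = PoseToGait()
print(model(torch.randn(4, SEQ_LEN, NUM_JOINTS, 2)).shape)   # torch.Size([4, 2])
```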
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 14:59:17 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Malin-Mayor",
"Caroline",
""
],
[
"Adeli",
"Vida",
""
],
[
"Sabo",
"Andrea",
""
],
[
"Noritsyn",
"Sergey",
""
],
[
"Gorodetsky",
"Carolina",
""
],
[
"Fasano",
"Alfonso",
""
],
[
"Iaboni",
"Andrea",
""
],
[
"Taati",
"Babak",
""
]
] |
new_dataset
| 0.997729 |
2308.11488
|
Dibyadip Chatterjee
|
Dibyadip Chatterjee, Fadime Sener, Shugao Ma, Angela Yao
|
Opening the Vocabulary of Egocentric Actions
|
20 pages, 7 figures; https://dibschat.github.io/openvocab-egoAR/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human actions in egocentric videos are often hand-object interactions
composed from a verb (performed by the hand) applied to an object. Despite
their extensive scaling up, egocentric datasets still face two limitations -
sparsity of action compositions and a closed set of interacting objects. This
paper proposes a novel open vocabulary action recognition task. Given a set of
verbs and objects observed during training, the goal is to generalize the verbs
to an open vocabulary of actions with seen and novel objects. To this end, we
decouple the verb and object predictions via an object-agnostic verb encoder
and a prompt-based object encoder. The prompting leverages CLIP representations
to predict an open vocabulary of interacting objects. We create open vocabulary
benchmarks on the EPIC-KITCHENS-100 and Assembly101 datasets; whereas
closed-action methods fail to generalize, our proposed method is effective. In
addition, our object encoder significantly outperforms existing open-vocabulary
visual recognition methods in recognizing novel interacting objects.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 15:08:02 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Chatterjee",
"Dibyadip",
""
],
[
"Sener",
"Fadime",
""
],
[
"Ma",
"Shugao",
""
],
[
"Yao",
"Angela",
""
]
] |
new_dataset
| 0.999628 |
2308.11501
|
Yusheng Wang
|
Yusheng Wang, Weiwei Song, Yi Zhang, Fei Huang, Zhiyong Tu, Ruoying
Li, Shimin Zhang, and Yidong Lou
|
Four years of multi-modal odometry and mapping on the rail vehicles
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Precise, seamless, and efficient train localization as well as long-term
railway environment monitoring is the essential property towards reliability,
availability, maintainability, and safety (RAMS) engineering for railroad
systems. Simultaneous localization and mapping (SLAM) is right at the core of
solving the two problems concurrently. To this end, we propose a
high-performance and versatile multi-modal framework in this paper, targeted
for the odometry and mapping task for various rail vehicles. Our system is
built atop an inertial-centric state estimator that tightly couples light
detection and ranging (LiDAR), visual, optionally satellite navigation and
map-based localization information with the convenience and extendibility of
loosely coupled methods. The inertial sensors, IMU and wheel encoder, are treated
as the primary sensors, with observations from the subsystems used to
constrain the accelerometer and gyroscope biases. Compared to point-only
LiDAR-inertial methods, our approach leverages more geometry information by
introducing both track plane and electric power pillars into state estimation.
The visual-inertial subsystem also utilizes the environmental structure
information by employing both lines and points. Besides, the method is capable
of handling sensor failures by automatically reconfiguring to bypass failed
modules. Our proposed method has been extensively tested in long-term
railway environments over four years, covering general-speed, high-speed, and
metro lines; both passenger and freight traffic are investigated. Further, we aim to
share, in an open way, the experience, problems, and successes of our group
with the robotics community so that those who work in such environments can
avoid these errors. In this view, we open source some of the datasets to
benefit the research community.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 15:20:26 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Wang",
"Yusheng",
""
],
[
"Song",
"Weiwei",
""
],
[
"Zhang",
"Yi",
""
],
[
"Huang",
"Fei",
""
],
[
"Tu",
"Zhiyong",
""
],
[
"Li",
"Ruoying",
""
],
[
"Zhang",
"Shimin",
""
],
[
"Lou",
"Yidong",
""
]
] |
new_dataset
| 0.988876 |
2308.11509
|
Lixiong Qin
|
Lixiong Qin, Mei Wang, Chao Deng, Ke Wang, Xi Chen, Jiani Hu, Weihong
Deng
|
SwinFace: A Multi-task Transformer for Face Recognition, Expression
Recognition, Age Estimation and Attribute Estimation
| null | null |
10.1109/TCSVT.2023.3304724
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, vision transformers have been introduced into face
recognition and analysis and have achieved performance breakthroughs. However,
most previous methods generally train a single model or an ensemble of models
to perform the desired task, which ignores the synergy among different tasks
and fails to achieve improved prediction accuracy, increased data efficiency,
and reduced training time. This paper presents a multi-purpose algorithm for
simultaneous face recognition, facial expression recognition, age estimation,
and face attribute estimation (40 attributes including gender) based on a
single Swin Transformer. Our design, the SwinFace, consists of a single shared
backbone together with a subnet for each set of related tasks. To address the
conflicts among multiple tasks and meet the different demands of tasks, a
Multi-Level Channel Attention (MLCA) module is integrated into each
task-specific analysis subnet, which can adaptively select the features from
optimal levels and channels to perform the desired tasks. Extensive experiments
show that the proposed model has a better understanding of the face and
achieves excellent performance for all tasks. Especially, it achieves 90.97%
accuracy on RAF-DB and 0.22 $\epsilon$-error on CLAP2015, which are
state-of-the-art results on facial expression recognition and age estimation
respectively. The code and models will be made publicly available at
https://github.com/lxq1000/SwinFace.
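To illustrate the kind of component the abstract describes, the sketch below applies squeeze-and-excitation-style channel attention to several feature levels and fuses them; the published MLCA design may differ, and the channel count and number of levels are assumptions.

```python
# Illustrative multi-level channel attention (not the exact SwinFace MLCA).
import torch
import torch.nn as nn

class SimpleMLCA(nn.Module):
    """Gate the channels of each feature level, then fuse the levels by summation."""
    def __init__(self, channels=256, levels=4, reduction=8):
        super().__init__()
        self.gates = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )
            for _ in range(levels)
        ])

    def forward(self, feats):                          # feats: list of (B, C, H, W) tensors
        weighted = [f * g(f)[:, :, None, None] for f, g in zip(feats, self.gates)]
        return torch.stack(weighted).sum(dim=0)

mlca = SimpleMLCA()
feats = [torch.randn(2, 256, 7, 7) for _ in range(4)]
print(mlca(feats).shape)                               # torch.Size([2, 256, 7, 7])
```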
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 15:38:39 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Qin",
"Lixiong",
""
],
[
"Wang",
"Mei",
""
],
[
"Deng",
"Chao",
""
],
[
"Wang",
"Ke",
""
],
[
"Chen",
"Xi",
""
],
[
"Hu",
"Jiani",
""
],
[
"Deng",
"Weihong",
""
]
] |
new_dataset
| 0.999521 |
2308.11529
|
Gabe Schoenbach
|
Moon Duchin and Gabe Schoenbach
|
Redistricting for Proportionality
| null |
The Forum, vol. 20, no. 3-4, 2022, pp. 371-393
|
10.1515/for-2022-2064
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
American democracy is currently heavily reliant on plurality in single-member
districts, or PSMD, as a system of election. But public perceptions of fairness
are often keyed to partisan proportionality, or the degree of congruence
between each party's share of the vote and its share of representation.
PSMD has not tended to secure proportional outcomes historically, partially due
to gerrymandering, where line-drawers intentionally extract more advantage for
their side. But it is now increasingly clear that even blind PSMD is frequently
disproportional, and in unpredictable ways that depend on local political
geography. In this paper we consider whether it is feasible to bring PSMD into
alignment with a proportionality norm by targeting proportional outcomes in the
design and selection of districts. We do this mainly through a close
examination of the "Freedom to Vote Test," a redistricting reform proposed in
draft legislation in 2021. We find that applying the test with a
proportionality target makes for sound policy: it performs well in legal
battleground states and has a workable exception to handle edge cases where
proportionality is out of reach.
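A minimal sketch of the underlying arithmetic follows: comparing a party's statewide vote share with the seat share it wins under plurality in single-member districts. The Freedom to Vote Test's actual thresholds and exceptions are not encoded here; the district shares are invented for illustration.

```python
# Minimal proportionality check for a single party under PSMD (illustrative only).
def seat_share(district_vote_shares):
    """Fraction of districts the party wins by plurality (two-party shares assumed)."""
    wins = sum(1 for v in district_vote_shares if v > 0.5)
    return wins / len(district_vote_shares)

def disproportionality(statewide_vote_share, district_vote_shares):
    """Signed gap between seat share and vote share; 0 means perfectly proportional."""
    return seat_share(district_vote_shares) - statewide_vote_share

# Example: 48% of the statewide vote but only 2 of 8 districts (25% of seats)
districts = [0.55, 0.62, 0.45, 0.44, 0.47, 0.46, 0.43, 0.42]
print(round(disproportionality(0.48, districts), 3))   # -0.23, i.e. under-represented
```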
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 15:56:40 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Duchin",
"Moon",
""
],
[
"Schoenbach",
"Gabe",
""
]
] |
new_dataset
| 0.99569 |
2308.11537
|
Samuele Garda
|
Samuele Garda, Leon Weber-Genzel, Robert Martin, Ulf Leser
|
BELB: a Biomedical Entity Linking Benchmark
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Biomedical entity linking (BEL) is the task of grounding entity mentions to a
knowledge base. It plays a vital role in information extraction pipelines for
the life sciences literature. We review recent work in the field and find that,
as the task is absent from existing benchmarks for biomedical text mining,
different studies adopt different experimental setups making comparisons based
on published numbers problematic. Furthermore, neural systems are tested
primarily on instances linked to the broad coverage knowledge base UMLS,
leaving their performance on more specialized ones, e.g. genes or variants,
understudied. We therefore developed BELB, a Biomedical Entity Linking
Benchmark, providing access in a unified format to 11 corpora linked to 7
knowledge bases and spanning six entity types: gene, disease, chemical,
species, cell line and variant. BELB greatly reduces preprocessing overhead in
testing BEL systems on multiple corpora offering a standardized testbed for
reproducible experiments. Using BELB we perform an extensive evaluation of six
rule-based entity-specific systems and three recent neural approaches
leveraging pre-trained language models. Our results reveal a mixed picture
showing that neural approaches fail to perform consistently across entity
types, highlighting the need of further studies towards entity-agnostic models.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 16:05:18 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Garda",
"Samuele",
""
],
[
"Weber-Genzel",
"Leon",
""
],
[
"Martin",
"Robert",
""
],
[
"Leser",
"Ulf",
""
]
] |
new_dataset
| 0.993299 |
2308.11573
|
Zhijian Qiao
|
Zhijian Qiao, Zehuan Yu, Binqian Jiang, Huan Yin, and Shaojie Shen
|
G3Reg: Pyramid Graph-based Global Registration using Gaussian Ellipsoid
Model
|
Under review
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study introduces a novel framework, G3Reg, for fast and robust global
registration of LiDAR point clouds. In contrast to conventional complex
keypoints and descriptors, we extract fundamental geometric primitives
including planes, clusters, and lines (PCL) from the raw point cloud to obtain
low-level semantic segments. Each segment is formulated as a unified Gaussian
Ellipsoid Model (GEM) by employing a probability ellipsoid to ensure the ground
truth centers are encompassed with a certain degree of probability. Utilizing
these GEMs, we then present a distrust-and-verify scheme based on a Pyramid
Compatibility Graph for Global Registration (PAGOR). Specifically, we establish
an upper bound, which can be traversed based on the confidence level for
compatibility testing to construct the pyramid graph. Gradually, we solve
multiple maximum cliques (MAC) for each level of the graph, generating numerous
transformation candidates. In the verification phase, we adopt a precise and
efficient metric for point cloud alignment quality, founded on geometric
primitives, to identify the optimal candidate. The performance of the algorithm
is extensively validated on three publicly available datasets and a
self-collected multi-session dataset, without changing any parameter settings
in the experimental evaluation. The results exhibit superior robustness and
real-time performance of the G3Reg framework compared to state-of-the-art
methods. Furthermore, we demonstrate the potential for integrating individual
GEM and PAGOR components into other algorithmic frameworks to enhance their
efficacy. To advance further research and promote community understanding, we
have publicly shared the source code.
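Two of the ingredients named in the abstract can be sketched compactly: summarizing a point segment as a Gaussian ellipsoid (mean and covariance) and a pairwise compatibility test that checks whether two putative correspondences preserve inter-segment distance. This is only an illustration under simplified assumptions, not the PAGOR pipeline or its pyramid of confidence levels.

```python
# Simplified sketch of a Gaussian ellipsoid segment model and a compatibility test.
import numpy as np

def gaussian_ellipsoid(points):
    """points: (N, 3) array -> (mean, covariance) summarizing the segment."""
    pts = np.asarray(points, dtype=float)
    return pts.mean(axis=0), np.cov(pts, rowvar=False)

def compatible(src_a, dst_a, src_b, dst_b, threshold=0.5):
    """Two correspondences are compatible if a rigid motion could map both:
    the source-center distance must match the destination-center distance."""
    return abs(np.linalg.norm(src_a - src_b) - np.linalg.norm(dst_a - dst_b)) < threshold

rng = np.random.default_rng(0)
mu1, cov1 = gaussian_ellipsoid(rng.normal(size=(100, 3)))
mu2, cov2 = gaussian_ellipsoid(rng.normal(loc=5.0, size=(100, 3)))
t = np.array([2.0, -1.0, 0.5])                          # ground-truth translation
print(cov1.shape)                                       # (3, 3) ellipsoid of the first segment
print(compatible(mu1, mu1 + t, mu2, mu2 + t))           # True: distances preserved
```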
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 17:23:00 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Qiao",
"Zhijian",
""
],
[
"Yu",
"Zehuan",
""
],
[
"Jiang",
"Binqian",
""
],
[
"Yin",
"Huan",
""
],
[
"Shen",
"Shaojie",
""
]
] |
new_dataset
| 0.990494 |
2308.11606
|
Emanuele Bugliarello
|
Emanuele Bugliarello, Hernan Moraldo, Ruben Villegas, Mohammad
Babaeizadeh, Mohammad Taghi Saffar, Han Zhang, Dumitru Erhan, Vittorio
Ferrari, Pieter-Jan Kindermans, Paul Voigtlaender
|
StoryBench: A Multifaceted Benchmark for Continuous Story Visualization
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating video stories from text prompts is a complex task. In addition to
having high visual quality, videos need to realistically adhere to a sequence
of text prompts whilst being consistent throughout the frames. Creating a
benchmark for video generation requires data annotated over time, which
contrasts with the single caption used often in video datasets. To fill this
gap, we collect comprehensive human annotations on three existing datasets, and
introduce StoryBench: a new, challenging multi-task benchmark to reliably
evaluate forthcoming text-to-video models. Our benchmark includes three video
generation tasks of increasing difficulty: action execution, where the next
action must be generated starting from a conditioning video; story
continuation, where a sequence of actions must be executed starting from a
conditioning video; and story generation, where a video must be generated from
only text prompts. We evaluate small yet strong text-to-video baselines, and
show the benefits of training on story-like data algorithmically generated from
existing video captions. Finally, we establish guidelines for human evaluation
of video stories, and reaffirm the need of better automatic metrics for video
generation. StoryBench aims at encouraging future research efforts in this
exciting new area.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 17:53:55 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Bugliarello",
"Emanuele",
""
],
[
"Moraldo",
"Hernan",
""
],
[
"Villegas",
"Ruben",
""
],
[
"Babaeizadeh",
"Mohammad",
""
],
[
"Saffar",
"Mohammad Taghi",
""
],
[
"Zhang",
"Han",
""
],
[
"Erhan",
"Dumitru",
""
],
[
"Ferrari",
"Vittorio",
""
],
[
"Kindermans",
"Pieter-Jan",
""
],
[
"Voigtlaender",
"Paul",
""
]
] |
new_dataset
| 0.999683 |
2308.11617
|
Omid Taheri
|
Omid Taheri, Yi Zhou, Dimitrios Tzionas, Yang Zhou, Duygu Ceylan,
Soren Pirk, Michael J. Black
|
GRIP: Generating Interaction Poses Using Latent Consistency and Spatial
Cues
|
The project started during Omid Taheri's internship at Adobe
as a collaboration with the Max Planck Institute for Intelligent Systems
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hands are dexterous and highly versatile manipulators that are central to how
humans interact with objects and their environment. Consequently, modeling
realistic hand-object interactions, including the subtle motion of individual
fingers, is critical for applications in computer graphics, computer vision,
and mixed reality. Prior work on capturing and modeling humans interacting with
objects in 3D focuses on the body and object motion, often ignoring hand pose.
In contrast, we introduce GRIP, a learning-based method that takes, as input,
the 3D motion of the body and the object, and synthesizes realistic motion for
both hands before, during, and after object interaction. As a preliminary step
before synthesizing the hand motion, we first use a network, ANet, to denoise
the arm motion. Then, we leverage the spatio-temporal relationship between the
body and the object to extract two types of novel temporal interaction cues,
and use them in a two-stage inference pipeline to generate the hand motion. In
the first stage, we introduce a new approach to enforce motion temporal
consistency in the latent space (LTC), and generate consistent interaction
motions. In the second stage, GRIP generates refined hand poses to avoid
hand-object penetrations. Given sequences of noisy body and object motion, GRIP
upgrades them to include hand-object interaction. Quantitative experiments and
perceptual studies demonstrate that GRIP outperforms baseline methods and
generalizes to unseen objects and motions from different motion-capture
datasets.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 17:59:51 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Taheri",
"Omid",
""
],
[
"Zhou",
"Yi",
""
],
[
"Tzionas",
"Dimitrios",
""
],
[
"Zhou",
"Yang",
""
],
[
"Ceylan",
"Duygu",
""
],
[
"Pirk",
"Soren",
""
],
[
"Black",
"Michael J.",
""
]
] |
new_dataset
| 0.953735 |
2107.10545
|
Peter Mosses
|
Peter D. Mosses
|
Fundamental Constructs in Programming Languages
|
26 pages, incl. 3 figures and 7 appendices, accepted for publication
in Proceedings of ISoLA 2021; updates the submitted version with
clarifications and minor enhancements
| null |
10.1007/978-3-030-89159-6_19
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
When a new programming language appears, the syntax and intended behaviour of
its programs need to be specified. The behaviour of each language construct can
be concisely specified by translating it to fundamental constructs (funcons),
compositionally. In contrast to the informal explanations commonly found in
reference manuals, such formal specifications of translations to funcons can be
precise and complete. They are also easy to write and read, and to update when
the language evolves.
The PLanCompS project has developed a large collection of funcons. Each
funcon is defined independently, using a modular variant of structural
operational semantics. The definitions are available online, along with tools
for generating funcon interpreters from them.
This paper introduces and motivates funcons. It illustrates translation of
language constructs to funcons, and funcon definition. It also relates funcons
to the notation used in some previous language specification frameworks,
including monadic semantics and action semantics.
|
[
{
"version": "v1",
"created": "Thu, 22 Jul 2021 09:53:04 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Aug 2023 18:42:55 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Mosses",
"Peter D.",
""
]
] |
new_dataset
| 0.967423 |
2108.04814
|
Stefano Gasperini
|
Stefano Gasperini, Patrick Koch, Vinzenz Dallabetta, Nassir Navab,
Benjamin Busam, Federico Tombari
|
R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of
Dynamic Scenes
|
Accepted at the International Conference on 3D Vision (3DV) 2021
| null |
10.1109/3DV53792.2021.00084
| null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While self-supervised monocular depth estimation in driving scenarios has
achieved comparable performance to supervised approaches, violations of the
static world assumption can still lead to erroneous depth predictions of
traffic participants, posing a potential safety issue. In this paper, we
present R4Dyn, a novel set of techniques to use cost-efficient radar data on
top of a self-supervised depth estimation framework. In particular, we show how
radar can be used during training as weak supervision signal, as well as an
extra input to enhance the estimation robustness at inference time. Since
automotive radars are readily available, this allows collecting training data
from a variety of existing vehicles. Moreover, by filtering and expanding the
signal to make it compatible with learning-based approaches, we address radar
inherent issues, such as noise and sparsity. With R4Dyn we are able to overcome
a major limitation of self-supervised depth estimation, i.e. the prediction of
traffic participants. We substantially improve the estimation on dynamic
objects, such as cars by 37% on the challenging nuScenes dataset, hence
demonstrating that radar is a valuable additional sensor for monocular depth
estimation in autonomous vehicles.
|
[
{
"version": "v1",
"created": "Tue, 10 Aug 2021 17:57:03 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Nov 2021 18:29:54 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Gasperini",
"Stefano",
""
],
[
"Koch",
"Patrick",
""
],
[
"Dallabetta",
"Vinzenz",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.973015 |
2111.04479
|
Anh V. Vu
|
Anh V. Vu, Lydia Wilson, Yi Ting Chua, Ilia Shumailov, Ross Anderson
|
ExtremeBB: A Database for Large-Scale Research into Online Hate,
Harassment, the Manosphere and Extremism
| null | null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce ExtremeBB, a textual database of over 53.5M posts made by 38.5k
users on 12 extremist bulletin board forums promoting online hate, harassment,
the manosphere and other forms of extremism. It enables large-scale analyses of
qualitative and quantitative historical trends going back two decades:
measuring hate speech and toxicity; tracing the evolution of different strands
of extremist ideology; tracking the relationships between online subcultures,
extremist behaviours, and real-world violence; and monitoring extremist
communities in near real time. This can shed light not only on the spread of
problematic ideologies but also the effectiveness of interventions. ExtremeBB
comes with a robust ethical data-sharing regime that allows us to share data
with academics worldwide. Since 2020, access has been granted to 49 licensees
in 16 research groups from 12 institutions.
|
[
{
"version": "v1",
"created": "Mon, 8 Nov 2021 13:15:25 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2023 17:27:50 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Aug 2023 22:38:14 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Vu",
"Anh V.",
""
],
[
"Wilson",
"Lydia",
""
],
[
"Chua",
"Yi Ting",
""
],
[
"Shumailov",
"Ilia",
""
],
[
"Anderson",
"Ross",
""
]
] |
new_dataset
| 0.999416 |
2111.14185
|
Atif Rahman
|
Shoumik Saha, Sadia Afroz, Atif Rahman
|
MALIGN: Explainable Static Raw-byte Based Malware Family Classification
using Sequence Alignment
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For a long time, malware classification and analysis have been an arms-race
between antivirus systems and malware authors. Though static analysis is
vulnerable to evasion techniques, it is still popular as the first line of
defense in antivirus systems. But most of the static analyzers failed to gain
the trust of practitioners due to their black-box nature. We propose MAlign, a
novel static malware family classification approach inspired by genome sequence
alignment that can not only classify malware families but can also provide
explanations for its decision. MAlign encodes raw bytes using nucleotides and
adopts genome sequence alignment approaches to create a signature of a malware
family based on the conserved code segments in that family, without any human
labor or expertise. We evaluate MAlign on two malware datasets, and it
outperforms other state-of-the-art machine learning based malware classifiers
(by 4.49% - 0.07%), especially on small datasets (by 19.48% - 1.2%).
Furthermore, we explain the generated signatures by MAlign on different malware
families illustrating the kinds of insights it can provide to analysts, and
show its efficacy as an analysis tool. Additionally, we evaluate its
theoretical and empirical robustness against some common attacks. In this
paper, we approach static malware analysis from a unique perspective, aiming to
strike a delicate balance among performance, interpretability, and robustness.
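To make the genome-alignment analogy concrete, the sketch below shows one plausible byte-to-nucleotide encoding (two bits per base, four bases per byte) so that off-the-shelf sequence aligners can be applied to raw binaries. The exact encoding and alignment pipeline used by MAlign may differ.

```python
# Plausible byte-to-nucleotide encoding (illustrative; MAlign's scheme may differ).
NUCLEOTIDES = "ACGT"

def bytes_to_nucleotides(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):                      # most-significant 2 bits first
            bases.append(NUCLEOTIDES[(byte >> shift) & 0b11])
    return "".join(bases)

header = bytes.fromhex("4d5a9000")                      # "MZ..." start of a PE file
print(bytes_to_nucleotides(header))                     # CATCCCGGGCAAAAAA
```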
|
[
{
"version": "v1",
"created": "Sun, 28 Nov 2021 15:57:28 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Aug 2023 13:25:24 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Saha",
"Shoumik",
""
],
[
"Afroz",
"Sadia",
""
],
[
"Rahman",
"Atif",
""
]
] |
new_dataset
| 0.999842 |
2203.06424
|
Run Luo
|
Run Luo, JinLin Wei, and Qiao Lin
|
VariabilityTrack:Multi-Object Tracking with Variable Speed Object
Movement
|
we will refine the paper in the future
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-object tracking (MOT) aims at estimating bounding boxes and identities
of objects in videos. Most methods can be roughly classified as
tracking-by-detection and joint-detection-association paradigms. Although the
latter has elicited more attention and demonstrates comparable performance
relative to the former, we claim that the tracking-by-detection paradigm is
still the optimal solution in terms of tracking accuracy, as exemplified by
ByteTrack, which achieves 80.3 MOTA, 77.3 IDF1 and 63.1 HOTA on the test set of
MOT17 with a 30 FPS running speed on a single V100 GPU. However, under complex
scenarios such as vehicle and UAV acceleration, the performance of such a
tracker using a uniform Kalman filter will be greatly affected, resulting in
tracking loss. In this paper, we propose a variable speed Kalman filter
algorithm based on environmental feedback and improve the matching process,
which can greatly improve the tracking effect in complex variable speed scenes
while maintaining high tracking accuracy in relatively static scenes.
Eventually, higher MOTA and IDF1 results than ByteTrack can be achieved on the
MOT17 test set.
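As a generic illustration of the mechanism being modified, the sketch below runs a 1D constant-velocity Kalman filter whose process-noise scale is inflated once acceleration is suspected; the paper's environmental-feedback rule and matching changes are not reproduced, and all numbers are invented.

```python
# Generic constant-velocity Kalman filter with an adjustable process-noise scale.
import numpy as np

def kf_step(x, P, z, dt=1.0, q_scale=1.0, r=1.0):
    """One predict+update step for state x=[position, velocity] and measurement z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])               # constant-velocity motion model
    Q = q_scale * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    S = H @ P @ H.T + r                                 # innovation covariance
    K = P @ H.T / S                                     # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for t, z in enumerate([1.0, 2.1, 3.3, 5.0, 7.2]):       # target speeding up
    q = 1.0 if t < 3 else 10.0                          # inflate Q once acceleration is suspected
    x, P = kf_step(x, P, z, q_scale=q)
print(x)                                                # [position, velocity] estimate
```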
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 12:39:41 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 04:22:32 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Luo",
"Run",
""
],
[
"Wei",
"JinLin",
""
],
[
"Lin",
"Qiao",
""
]
] |
new_dataset
| 0.98826 |
2209.05016
|
Zhang Junlin
|
Pengtao Zhang and Zheng Zheng and Junlin Zhang
|
FiBiNet++: Reducing Model Size by Low Rank Feature Interaction Layer for
CTR Prediction
| null |
ACM International Conference on Information and Knowledge
Management(CIKM '23), October 21-25,2023,Birmingham,United Kingdom
|
10.1145/3583780.3615242
| null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Click-Through Rate (CTR) estimation has become one of the most fundamental
tasks in many real-world applications and various deep models have been
proposed. Some research has proved that FiBiNet is one of the best performance
models and outperforms all other models on Avazu dataset. However, the large
model size of FiBiNet hinders its wider application. In this paper, we propose
a novel FiBiNet++ model to redesign FiBiNet's model structure, which greatly
reduces model size while further improves its performance. One of the primary
techniques involves our proposed "Low Rank Layer" focused on feature
interaction, which serves as a crucial driver of achieving a superior
compression ratio for models. Extensive experiments on three public datasets
show that FiBiNet++ effectively reduces non-embedding model parameters of
FiBiNet by 12x to 16x on three datasets. On the other hand, FiBiNet++ leads to
significant performance improvements compared to state-of-the-art CTR methods,
including FiBiNet.
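As a rough sketch of why a low-rank factorization shrinks the model, the layer below computes a bilinear interaction between two field embeddings with W factored as U V^T, so its parameters drop from d*d to 2*d*r; the exact FiBiNet++ "Low Rank Layer" and where it sits in the network follow the paper.

```python
# Generic low-rank bilinear feature interaction (illustrative, not the exact FiBiNet++ layer).
import torch
import torch.nn as nn

class LowRankBilinear(nn.Module):
    def __init__(self, dim=64, rank=8):
        super().__init__()
        self.U = nn.Parameter(torch.randn(dim, rank) * 0.01)
        self.V = nn.Parameter(torch.randn(dim, rank) * 0.01)

    def forward(self, e_i, e_j):                        # e_i, e_j: (batch, dim) field embeddings
        # e_i @ (U V^T) * e_j, computed without materializing the dim x dim matrix
        return (e_i @ self.U) @ self.V.T * e_j

layer = LowRankBilinear()
print(layer(torch.randn(32, 64), torch.randn(32, 64)).shape)   # torch.Size([32, 64])
```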
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 04:13:49 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 12:00:47 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zhang",
"Pengtao",
""
],
[
"Zheng",
"Zheng",
""
],
[
"Zhang",
"Junlin",
""
]
] |
new_dataset
| 0.957619 |
2209.13877
|
Zeqiang Wang
|
Zeqiang Wang, Yile Wang, Jiageng Wu, Zhiyang Teng, Jie Yang
|
YATO: Yet Another deep learning based Text analysis Open toolkit
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce YATO, an open-source, easy-to-use toolkit for text analysis with
deep learning. Different from existing heavily engineered toolkits and
platforms, YATO is lightweight and user-friendly for researchers from
cross-disciplinary areas. Designed in a hierarchical structure, YATO supports
free combinations of three types of widely used features including 1)
traditional neural networks (CNN, RNN, etc.); 2) pre-trained language models
(BERT, RoBERTa, ELECTRA, etc.); and 3) user-customized neural features via a
simple configurable file. Benefiting from the advantages of flexibility and
ease of use, YATO can facilitate fast reproduction and refinement of
state-of-the-art NLP models, and promote the cross-disciplinary applications of
NLP techniques. The code, examples, and documentation are publicly available at
https://github.com/jiesutd/YATO. A demo video is also available at
https://youtu.be/tSjjf5BzfQg.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 07:25:04 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 13:06:10 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Aug 2023 06:24:28 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wang",
"Zeqiang",
""
],
[
"Wang",
"Yile",
""
],
[
"Wu",
"Jiageng",
""
],
[
"Teng",
"Zhiyang",
""
],
[
"Yang",
"Jie",
""
]
] |
new_dataset
| 0.994494 |
2211.15660
|
Favyen Bastani
|
Favyen Bastani and Piper Wolters and Ritwik Gupta and Joe Ferdinando
and Aniruddha Kembhavi
|
SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image
Understanding
|
ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Remote sensing images are useful for a wide variety of planet monitoring
applications, from tracking deforestation to tackling illegal fishing. The
Earth is extremely diverse -- the amount of potential tasks in remote sensing
images is massive, and the sizes of features range from several kilometers to
just tens of centimeters. However, creating generalizable computer vision
methods is a challenge in part due to the lack of a large-scale dataset that
captures these diverse features for many tasks. In this paper, we present
SatlasPretrain, a remote sensing dataset that is large in both breadth and
scale, combining Sentinel-2 and NAIP images with 302M labels under 137
categories and seven label types. We evaluate eight baselines and a proposed
method on SatlasPretrain, and find that there is substantial room for
improvement in addressing research challenges specific to remote sensing,
including processing image time series that consist of images from very
different types of sensors, and taking advantage of long-range spatial context.
Moreover, we find that pre-training on SatlasPretrain substantially improves
performance on downstream tasks, increasing average accuracy by 18% over
ImageNet and 6% over the next best baseline. The dataset, pre-trained model
weights, and code are available at https://satlas-pretrain.allen.ai/.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 18:59:26 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 04:51:10 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Aug 2023 15:09:13 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Bastani",
"Favyen",
""
],
[
"Wolters",
"Piper",
""
],
[
"Gupta",
"Ritwik",
""
],
[
"Ferdinando",
"Joe",
""
],
[
"Kembhavi",
"Aniruddha",
""
]
] |
new_dataset
| 0.999905 |
2212.02500
|
Ye Yuan
|
Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, Jan Kautz
|
PhysDiff: Physics-Guided Human Motion Diffusion Model
|
ICCV 2023 (Oral). Project page: https://nvlabs.github.io/PhysDiff
| null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Denoising diffusion models hold great promise for generating diverse and
realistic human motions. However, existing motion diffusion models largely
disregard the laws of physics in the diffusion process and often generate
physically-implausible motions with pronounced artifacts such as floating, foot
sliding, and ground penetration. This seriously impacts the quality of
generated motions and limits their real-world application. To address this
issue, we present a novel physics-guided motion diffusion model (PhysDiff),
which incorporates physical constraints into the diffusion process.
Specifically, we propose a physics-based motion projection module that uses
motion imitation in a physics simulator to project the denoised motion of a
diffusion step to a physically-plausible motion. The projected motion is
further used in the next diffusion step to guide the denoising diffusion
process. Intuitively, the use of physics in our model iteratively pulls the
motion toward a physically-plausible space, which cannot be achieved by simple
post-processing. Experiments on large-scale human motion datasets show that our
approach achieves state-of-the-art motion quality and improves physical
plausibility drastically (>78% for all datasets).
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 18:59:52 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 18:32:59 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Aug 2023 19:59:48 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Yuan",
"Ye",
""
],
[
"Song",
"Jiaming",
""
],
[
"Iqbal",
"Umar",
""
],
[
"Vahdat",
"Arash",
""
],
[
"Kautz",
"Jan",
""
]
] |
new_dataset
| 0.981593 |
2212.09100
|
Abdullah Hamdi
|
Abdullah Hamdi, Bernard Ghanem, Matthias Nie{\ss}ner
|
SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input
Images
|
published at ICCV 2023 workshop proceedings
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel
view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels
for efficient and fast rendering (Plenoxels, InstantNGP). In order to leverage
machine learning and adoption of SRFs as a 3D representation, we present SPARF,
a large-scale ShapeNet-based synthetic dataset for novel view synthesis
consisting of $\sim$ 17 million images rendered from nearly 40,000 shapes at
high resolution (400 x 400 pixels). The dataset is orders of magnitude larger
than existing synthetic datasets for novel view synthesis and includes more
than one million 3D-optimized radiance fields with multiple voxel resolutions.
Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate
sparse voxel radiance fields from only few views. This is done by using the
densely collected SPARF dataset and 3D sparse convolutions. SuRFNet employs
partial SRFs from few/one images and a specialized SRF loss to learn to
generate high-quality sparse voxel radiance fields that can be rendered from
novel views. Our approach achieves state-of-the-art results in the task of
unconstrained novel view synthesis based on few views on ShapeNet as compared
to recent baselines. The SPARF dataset is made public with the code and models
on the project website https://abdullahamdi.com/sparf/ .
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 14:56:22 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 12:08:11 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Aug 2023 12:53:09 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Hamdi",
"Abdullah",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.999703 |
2301.00280
|
Mariam Zomorodi
|
Mariam Zomorodi, Ismail Ghodsollahee, Jennifer H. Martin, Nicholas J.
Talley, Vahid Salari, Pawel Plawiak, Kazem Rahimi, U. Rajendra Acharya
|
RECOMED: A Comprehensive Pharmaceutical Recommendation System
|
39 pages, 14 figures, 13 tables
| null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A comprehensive pharmaceutical recommendation system was designed based on
patient and drug features extracted from Drugs.com and Druglib.com. First,
data from these databases were combined to build a dataset of patient and drug
information. Second, the patients and drugs were clustered, and the
recommendation was then performed using the different ratings provided by
patients and, importantly, the knowledge obtained from patient and drug
specifications, while considering drug interactions. To the best of our
knowledge, we are the first group to consider a patient's conditions and
history in the proposed approach for selecting a specific medicine appropriate
for that particular user. Our approach applies artificial intelligence (AI)
models for the implementation. Sentiment analysis using natural language
processing approaches is employed in pre-processing, along with neural
network-based methods and recommender system algorithms for modeling the
system. In our work, patient conditions and drug features are used to build
two models based on matrix factorization. We then use drug interactions to
filter out drugs with severe or mild interactions with other drugs. We
developed a deep learning model for recommending drugs using data from 2304
patients as a training set and data from 660 patients as our validation set.
After that, we combined the outcome of the model with knowledge from critical
information about drugs into a knowledge-based system with rules obtained from
constraints on taking medicine.
|
[
{
"version": "v1",
"created": "Sat, 31 Dec 2022 20:04:31 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 05:46:48 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zomorodi",
"Mariam",
""
],
[
"Ghodsollahee",
"Ismail",
""
],
[
"Martin",
"Jennifer H.",
""
],
[
"Talley",
"Nicholas J.",
""
],
[
"Salari",
"Vahid",
""
],
[
"Plawiak",
"Pawel",
""
],
[
"Rahimi",
"Kazem",
""
],
[
"Acharya",
"U. Rajendra",
""
]
] |
new_dataset
| 0.999375 |
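As a hedged sketch of the matrix-factorization component mentioned in the abstract above (a generic SGD-trained factorization on a toy rating matrix; the data, sizes, and hyperparameters are made up, and this is not RECOMED's actual model):

```python
import numpy as np

# Toy patient-by-drug rating matrix (0 means unobserved); values are made up.
R = np.array([
    [5, 0, 3, 0],
    [4, 0, 0, 1],
    [0, 2, 0, 5],
    [1, 0, 4, 0],
], dtype=float)

n_patients, n_drugs, k = R.shape[0], R.shape[1], 2
rng = np.random.default_rng(0)
P = 0.1 * rng.normal(size=(n_patients, k))     # latent patient factors
Q = 0.1 * rng.normal(size=(n_drugs, k))        # latent drug factors

lr, reg = 0.02, 0.05
for epoch in range(200):
    for u, i in zip(*np.nonzero(R)):           # SGD over observed ratings only
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 2))                    # predicted scores, incl. unseen pairs
```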
2301.02884
|
Shangda Wu
|
Shangda Wu, Xiaobing Li, Feng Yu, Maosong Sun
|
TunesFormer: Forming Irish Tunes with Control Codes by Bar Patching
|
5 pages, 3 figures, 1 table
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces TunesFormer, an efficient Transformer-based
dual-decoder model specifically designed for the generation of melodies that
adhere to user-defined musical forms. Trained on 214,122 Irish tunes,
TunesFormer utilizes techniques including bar patching and control codes. Bar
patching reduces sequence length and generation time, while control codes guide
TunesFormer in producing melodies that conform to desired musical forms. Our
evaluation demonstrates TunesFormer's superior efficiency, being 3.22 times
faster than GPT-2 and 1.79 times faster than a model with linear complexity of
equal scale while offering comparable performance in controllability and other
metrics. TunesFormer provides a novel tool for musicians, composers, and music
enthusiasts alike to explore the vast landscape of Irish music. Our model and
code are available at https://github.com/sander-wood/tunesformer.
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 16:11:55 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Aug 2023 07:28:16 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wu",
"Shangda",
""
],
[
"Li",
"Xiaobing",
""
],
[
"Yu",
"Feng",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999681 |
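For intuition about bar patching, the abstract's idea of treating each bar of a tune as one patch can be sketched as follows (a toy illustration only; TunesFormer's real tokenization and ABC handling are assumed to differ):

```python
import re

def bar_patches(abc_body, max_len=16):
    """Split the body of an ABC tune into fixed-length bar 'patches'.

    Bars are delimited by '|'; each patch is truncated or right-padded to
    max_len characters. Tokenization details here are hypothetical.
    """
    bars = [b.strip() for b in re.split(r"\|+", abc_body) if b.strip()]
    return [b[:max_len].ljust(max_len) for b in bars]

tune = "G2G GAB|d2d dBG|e2e edB|d2B GAB"       # toy jig-like fragment
for patch in bar_patches(tune):
    print(repr(patch))
```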
2301.10224
|
Robert Klar
|
Robert Klar, Anna Fredriksson, Vangelis Angelakis
|
Digital Twins for Ports: Derived from Smart City and Supply Chain
Twinning Experience
|
Full reference: R. Klar, A. Fredriksson and V. Angelakis, "Digital
Twins for Ports: Derived From Smart City and Supply Chain Twinning
Experience," in IEEE Access, vol. 11, pp. 71777-71799, 2023, doi:
10.1109/ACCESS.2023.3295495
|
in IEEE Access, vol. 11, pp. 71777-71799, 2023
|
10.1109/ACCESS.2023.3295495
| null |
cs.CY cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ports are striving for innovative technological solutions to cope with the
ever-increasing growth of transport, while at the same time improving their
environmental footprint. An emerging technology that has the potential to
substantially increase the efficiency of the multifaceted and interconnected
port processes is the digital twin. Although digital twins have been
successfully integrated in many industries, there is still a lack of
cross-domain understanding of what constitutes a digital twin. Furthermore, the
implementation of the digital twin in complex systems such as the port is still
in its infancy. This paper attempts to fill this research gap by conducting an
extensive cross-domain literature review of what constitutes a digital twin,
keeping in mind the extent to which the respective findings can be applied to
the port. It turns out that the digital twin of the port is most comparable to
complex systems such as smart cities and supply chains, both in terms of its
functional relevance as well as in terms of its requirements and
characteristics. The conducted literature review, considering the different
port processes and port characteristics, results in the identification of three
core requirements of a digital port twin, which are described in detail. These
include situational awareness, comprehensive data analytics capabilities for
intelligent decision making, and the provision of an interface to promote
multi-stakeholder governance and collaboration. Finally, specific operational
scenarios are proposed on how the port's digital twin can contribute to energy
savings by improving the use of port resources, facilities and operations.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 15:22:17 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 13:55:51 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Aug 2023 15:41:53 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Klar",
"Robert",
""
],
[
"Fredriksson",
"Anna",
""
],
[
"Angelakis",
"Vangelis",
""
]
] |
new_dataset
| 0.979407 |
2301.12667
|
Parth Padalkar
|
Parth Padalkar, Huaduo Wang, Gopal Gupta
|
NeSyFOLD: Neurosymbolic Framework for Interpretable Image Classification
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning models such as CNNs have surpassed human performance in
computer vision tasks such as image classification. However, despite their
sophistication, these models lack interpretability which can lead to biased
outcomes reflecting existing prejudices in the data. We aim to make predictions
made by a CNN interpretable. Hence, we present a novel framework called
NeSyFOLD to create a neurosymbolic (NeSy) model for image classification tasks.
The model is a CNN with all layers following the last convolutional layer
replaced by a stratified answer set program (ASP). A rule-based machine
learning algorithm called FOLD-SE-M is used to derive the stratified answer set
program from binarized filter activations of the last convolutional layer. The
answer set program can be viewed as a rule-set, wherein the truth value of each
predicate depends on the activation of the corresponding kernel in the CNN. The
rule-set serves as a global explanation for the model and is interpretable. A
justification for the predictions made by the NeSy model can be obtained using
an ASP interpreter. We also use our NeSyFOLD framework with a CNN that is
trained using a sparse kernel learning technique called Elite BackProp (EBP).
This leads to a significant reduction in rule-set size without compromising
accuracy or fidelity thus improving scalability of the NeSy model and
interpretability of its rule-set. Evaluation is done on datasets with varied
complexity and sizes. To make the rule-set more intuitive to understand, we
propose a novel algorithm for labelling each kernel's corresponding predicate
in the rule-set with the semantic concept(s) it learns. We evaluate the
performance of our "semantic labelling algorithm" to quantify the efficacy of
the semantic labelling for both the NeSy model and the NeSy-EBP model.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 05:08:05 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 03:44:38 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Aug 2023 21:19:13 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Padalkar",
"Parth",
""
],
[
"Wang",
"Huaduo",
""
],
[
"Gupta",
"Gopal",
""
]
] |
new_dataset
| 0.957669 |
2302.07951
|
Minje Choi
|
Minje Choi, David Jurgens, Daniel M. Romero
|
Analyzing the Engagement of Social Relationships During Life Event
Shocks in Social Media
|
Accepted to ICWSM 2023. 12 pages, 5 figures, 5 tables
| null |
10.1609/icwsm.v17i1.22134
| null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Individuals experiencing unexpected distressing events, shocks, often rely on
their social network for support. While prior work has shown how social
networks respond to shocks, these studies usually treat all ties equally,
despite differences in the support provided by different social relationships.
Here, we conduct a computational analysis on Twitter that examines how
responses to online shocks differ by the relationship type of a user dyad. We
introduce a new dataset of over 13K instances of individuals' self-reporting
shock events on Twitter and construct networks of relationship-labeled dyadic
interactions around these events. By examining behaviors across 110K replies to
shocked users in a pseudo-causal analysis, we demonstrate relationship-specific
patterns in response levels and topic shifts. We also show that while
well-established social dimensions of closeness such as tie strength and
structural embeddedness contribute to shock responsiveness, the degree of
impact is highly dependent on relationship and shock types. Our findings
indicate that social relationships contain highly distinctive characteristics
in network interactions and that relationship-specific behaviors in online
shock responses are unique from those of offline settings.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 21:17:44 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Choi",
"Minje",
""
],
[
"Jurgens",
"David",
""
],
[
"Romero",
"Daniel M.",
""
]
] |
new_dataset
| 0.99796 |
2302.10977
|
Zhigang Wei
|
Zhigang Wei, Aman Arora, Ruihao Li, Lizy K. John
|
HLSDataset: Open-Source Dataset for ML-Assisted FPGA Design using High
Level Synthesis
|
8 pages, 5 figures
| null | null | null |
cs.AR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Machine Learning (ML) has been widely adopted in design exploration using
high-level synthesis (HLS) to provide better and faster performance, resource,
and power estimation at very early stages of FPGA-based design. To
perform prediction accurately, high-quality and large-volume datasets are
required for training ML models. This paper presents a dataset for ML-assisted
FPGA design using HLS, called HLSDataset. The dataset is generated from widely
used HLS C benchmarks including Polybench, Machsuite, CHStone and Rossetta. The
Verilog samples are generated with a variety of directives including loop
unroll, loop pipeline and array partition to make sure optimized and realistic
designs are covered. The total number of generated Verilog samples is nearly
9,000 per FPGA type. To demonstrate the effectiveness of our dataset, we
undertake case studies to perform power estimation and resource usage
estimation with ML models trained with our dataset. All the code and the dataset
are public at the GitHub repo. We believe that HLSDataset can save valuable time
for researchers by avoiding the tedious process of running tools, scripting and
parsing files to generate the dataset, and enable them to spend more time where
it counts, that is, in training ML models.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 17:00:12 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 17:36:36 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wei",
"Zhigang",
""
],
[
"Arora",
"Aman",
""
],
[
"Li",
"Ruihao",
""
],
[
"John",
"Lizy K.",
""
]
] |
new_dataset
| 0.999838 |
2302.12447
|
Carlo Sanna
|
Antonio J. Di Scala and Carlo Sanna
|
Smaller public keys for MinRank-based schemes
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MinRank is an NP-complete problem in linear algebra whose characteristics
make it attractive to build post-quantum cryptographic primitives. Several
MinRank-based digital signature schemes have been proposed. In particular, two
of them, MIRA and MiRitH, have been submitted to the NIST Post-Quantum
Cryptography Standardization Process. In this paper, we propose a
key-generation algorithm for MinRank-based schemes that reduces the size of the
public key to about 50% of the size of the public key generated by the previous
best (in terms of public-key size) algorithm. Precisely, the size of the public
key generated by our algorithm sits in the range of 328-676 bits for security
levels of 128-256 bits. We also prove that our algorithm is as secure as the
previous ones.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 04:25:41 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 09:38:10 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Di Scala",
"Antonio J.",
""
],
[
"Sanna",
"Carlo",
""
]
] |
new_dataset
| 0.965696 |
2303.05063
|
Lei Wang
|
Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao
Shen
|
ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for
Document Information Extraction
|
ICCV 2023. Code is available at https://github.com/MAEHCM/ICL-D3IE
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstrated
remarkable results in various natural language processing (NLP) tasks with
in-context learning, which involves inference based on a few demonstration
examples. Despite their successes in NLP tasks, no investigation has been
conducted to assess the ability of LLMs to perform document information
extraction (DIE) using in-context learning. Applying LLMs to DIE poses two
challenges: the modality and task gap. To this end, we propose a simple but
effective in-context learning framework called ICL-D3IE, which enables LLMs to
perform DIE with different types of demonstration examples. Specifically, we
extract the most difficult and distinct segments from hard training documents
as hard demonstrations for benefiting all test instances. We design
demonstrations describing relationships that enable LLMs to understand
positional relationships. We introduce formatting demonstrations for easy
answer extraction. Additionally, the framework improves diverse demonstrations
by updating them iteratively. Our experiments on three widely used benchmark
datasets demonstrate that the ICL-D3IE framework enables Davinci-003/ChatGPT to
achieve superior performance when compared to previous pre-trained methods
fine-tuned with full training in both the in-distribution (ID) setting and in
the out-of-distribution (OOD) setting. Code is available at
https://github.com/MAEHCM/ICL-D3IE.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 06:24:50 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 11:56:34 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2023 06:06:06 GMT"
},
{
"version": "v4",
"created": "Mon, 21 Aug 2023 03:57:18 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"He",
"Jiabang",
""
],
[
"Wang",
"Lei",
""
],
[
"Hu",
"Yi",
""
],
[
"Liu",
"Ning",
""
],
[
"Liu",
"Hui",
""
],
[
"Xu",
"Xing",
""
],
[
"Shen",
"Heng Tao",
""
]
] |
new_dataset
| 0.996844 |
2303.08682
|
Wenqi Ouyang
|
Wenqi Ouyang, Yi Dong, Xiaoyang Kang, Peiran Ren, Xin Xu, Xuansong Xie
|
RSFNet: A White-Box Image Retouching Approach using Region-Specific
Color Filters
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Retouching images is an essential aspect of enhancing the visual appeal of
photos. Although users often share common aesthetic preferences, their
retouching methods may vary based on their individual preferences. Therefore,
there is a need for white-box approaches that produce satisfying results and
enable users to conveniently edit their images simultaneously. Recent white-box
retouching methods rely on cascaded global filters that provide image-level
filter arguments but cannot perform fine-grained retouching. In contrast,
colorists typically employ a divide-and-conquer approach, performing a series
of region-specific fine-grained enhancements when using traditional tools like
Davinci Resolve. We draw on this insight to develop a white-box framework for
photo retouching using parallel region-specific filters, called RSFNet. Our
model generates filter arguments (e.g., saturation, contrast, hue) and
attention maps of regions for each filter simultaneously. Instead of cascading
filters, RSFNet employs linear summations of filters, allowing for a more
diverse range of filter classes that can be trained more easily. Our
experiments demonstrate that RSFNet achieves state-of-the-art results, offering
satisfying aesthetic appeal and increased user convenience for editable
white-box retouching.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 15:11:31 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 05:31:30 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ouyang",
"Wenqi",
""
],
[
"Dong",
"Yi",
""
],
[
"Kang",
"Xiaoyang",
""
],
[
"Ren",
"Peiran",
""
],
[
"Xu",
"Xin",
""
],
[
"Xie",
"Xuansong",
""
]
] |
new_dataset
| 0.990109 |
2303.16053
|
Wenzheng Zeng
|
Wenzheng Zeng, Yang Xiao, Sicheng Wei, Jinfang Gan, Xintao Zhang,
Zhiguo Cao, Zhiwen Fang, Joey Tianyi Zhou
|
Real-time Multi-person Eyeblink Detection in the Wild for Untrimmed
Video
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time eyeblink detection in the wild can widely serve for fatigue
detection, face anti-spoofing, emotion analysis, etc. The existing research
efforts generally focus on single-person cases with trimmed videos. However,
the multi-person scenario within untrimmed videos is also important for practical
applications and has not yet received much attention. To address this, we shed
light on this research field for the first time with essential contributions on
dataset, theory, and practices. In particular, a large-scale dataset termed
MPEblink that involves 686 untrimmed videos with 8748 eyeblink events is
proposed under multi-person conditions. The samples are captured from
unconstrained films to reveal "in the wild" characteristics. Meanwhile, a
real-time multi-person eyeblink detection method is also proposed. Being
different from the existing counterparts, our proposition runs in a one-stage
spatio-temporal way with end-to-end learning capacity. Specifically, it
simultaneously addresses the sub-tasks of face detection, face tracking, and
human instance-level eyeblink detection. This paradigm holds 2 main advantages:
(1) eyeblink features can be facilitated via the face's global context (e.g.,
head pose and illumination condition) with joint optimization and interaction,
and (2) addressing these sub-tasks in parallel instead of sequential manner can
save time remarkably to meet the real-time running requirement. Experiments on
MPEblink verify the essential challenges of real-time multi-person eyeblink
detection in the wild for untrimmed video. Our method also outperforms existing
approaches by large margins and with a high inference speed.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 15:35:25 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 14:18:55 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zeng",
"Wenzheng",
""
],
[
"Xiao",
"Yang",
""
],
[
"Wei",
"Sicheng",
""
],
[
"Gan",
"Jinfang",
""
],
[
"Zhang",
"Xintao",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Fang",
"Zhiwen",
""
],
[
"Zhou",
"Joey Tianyi",
""
]
] |
new_dataset
| 0.998274 |
2304.00054
|
Noah Stier
|
Noah Stier, Baptiste Angles, Liang Yang, Yajie Yan, Alex Colburn, Ming
Chuang
|
LivePose: Online 3D Reconstruction from Monocular Video with Dynamic
Camera Poses
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense 3D reconstruction from RGB images traditionally assumes static camera
pose estimates. This assumption has endured, even as recent works have
increasingly focused on real-time methods for mobile devices. However, the
assumption of a fixed pose for each image does not hold for online execution:
poses from real-time SLAM are dynamic and may be updated following events such
as bundle adjustment and loop closure. This has been addressed in the RGB-D
setting, by de-integrating past views and re-integrating them with updated
poses, but it remains largely untreated in the RGB-only setting. We formalize
this problem to define the new task of dense online reconstruction from
dynamically-posed images. To support further research, we introduce a dataset
called LivePose containing the dynamic poses from a SLAM system running on
ScanNet. We select three recent reconstruction systems and apply a framework
based on de-integration to adapt each one to the dynamic-pose setting. In
addition, we propose a novel, non-linear de-integration module that learns to
remove stale scene content. We show that responding to pose updates is critical
for high-quality reconstruction, and that our de-integration framework is an
effective solution.
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 18:15:17 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 22:50:36 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Stier",
"Noah",
""
],
[
"Angles",
"Baptiste",
""
],
[
"Yang",
"Liang",
""
],
[
"Yan",
"Yajie",
""
],
[
"Colburn",
"Alex",
""
],
[
"Chuang",
"Ming",
""
]
] |
new_dataset
| 0.99902 |
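The de-integration framework referenced above builds on the classic weighted-average TSDF fusion used in RGB-D reconstruction. As a hedged illustration of that underlying mechanism only (not LivePose's learned, non-linear de-integration module), a single voxel can undo and redo an observation when its camera pose is revised:

```python
class TSDFVoxel:
    """Running weighted average of truncated signed-distance observations."""

    def __init__(self):
        self.tsdf, self.weight = 0.0, 0.0

    def integrate(self, d, w=1.0):
        self.tsdf = (self.tsdf * self.weight + d * w) / (self.weight + w)
        self.weight += w

    def deintegrate(self, d, w=1.0):
        # Undo a previous observation so it can be re-integrated later with a
        # distance value recomputed from the revised camera pose.
        new_w = self.weight - w
        self.tsdf = 0.0 if new_w <= 0 else (self.tsdf * self.weight - d * w) / new_w
        self.weight = max(new_w, 0.0)

v = TSDFVoxel()
v.integrate(0.10)      # observation made under the initial pose estimate
v.integrate(0.02)
v.deintegrate(0.10)    # SLAM revised the first pose after loop closure...
v.integrate(0.04)      # ...so re-integrate with the corrected distance
print(round(v.tsdf, 3), v.weight)   # -> 0.03 2.0
```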
2304.01397
|
Rongqi Pan
|
Rongqi Pan, Taher A. Ghaleb, Lionel Briand
|
LTM: Scalable and Black-box Similarity-based Test Suite Minimization
based on Language Models
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Test suites tend to grow when software evolves, making it often infeasible to
execute all test cases with the allocated testing budgets, especially for large
software systems. Therefore, test suite minimization (TSM) is employed to
improve the efficiency of software testing by removing redundant test cases,
thus reducing testing time and resources, while maintaining the fault detection
capability of the test suite. Most of the TSM approaches rely on code coverage
(white-box) or model-based features, which are not always available for test
engineers. Recent TSM approaches that rely only on test code (black-box) have
been proposed, such as ATM and FAST-R. To address scalability, we propose LTM
(Language model-based Test suite Minimization), a novel, scalable, and
black-box similarity-based TSM approach based on large language models (LLMs).
To support similarity measurement, we investigated three different pre-trained
language models: CodeBERT, GraphCodeBERT, and UniXcoder, to extract embeddings
of test code, on which we computed two similarity measures: Cosine Similarity
and Euclidean Distance. Our goal is to find similarity measures that are not
only computationally more efficient but can also better guide a Genetic
Algorithm (GA), thus reducing the overall search time. Experimental results,
under a 50% minimization budget, showed that the best configuration of LTM
(using UniXcoder with Cosine similarity) outperformed the best two
configurations of ATM in three key facets: (a) achieving a greater saving rate
of testing time (40.38% versus 38.06%, on average); (b) attaining a
significantly higher fault detection rate (0.84 versus 0.81, on average); and,
more importantly, (c) minimizing test suites much faster (26.73 minutes versus
72.75 minutes, on average) in terms of both preparation time (up to two orders
of magnitude faster) and search time (one order of magnitude faster).
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 22:16:52 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 16:51:50 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Pan",
"Rongqi",
""
],
[
"Ghaleb",
"Taher A.",
""
],
[
"Briand",
"Lionel",
""
]
] |
new_dataset
| 0.95526 |
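To illustrate the similarity-based minimization idea, the sketch below computes cosine similarities between test-case embeddings and greedily drops the most redundant tests until a budget is met. This is a simplification under assumptions: the embeddings are random stand-ins for encoder outputs such as UniXcoder's, and the greedy loop replaces the genetic algorithm used in LTM.

```python
import numpy as np

def cosine_matrix(X):
    """Pairwise cosine similarity of the row vectors in X."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X @ X.T

def greedy_minimize(similarity, budget):
    """Keep `budget` tests, repeatedly dropping the most redundant one."""
    keep = list(range(similarity.shape[0]))
    while len(keep) > budget:
        sub = similarity[np.ix_(keep, keep)].copy()
        np.fill_diagonal(sub, -1.0)
        worst = keep[int(np.argmax(sub.max(axis=1)))]
        keep.remove(worst)
    return sorted(keep)

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))                 # stand-in test-code embeddings
emb[1] = emb[0] + 0.01 * rng.normal(size=32)   # test 1 nearly duplicates test 0

S = cosine_matrix(emb)
print("kept tests at a 50% budget:", greedy_minimize(S, budget=4))
```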
2304.01480
|
Noah Stier
|
Noah Stier, Anurag Ranjan, Alex Colburn, Yajie Yan, Liang Yang,
Fangchang Ma, Baptiste Angles
|
FineRecon: Depth-aware Feed-forward Network for Detailed 3D
Reconstruction
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works on 3D reconstruction from posed images have demonstrated that
direct inference of scene-level 3D geometry without test-time optimization is
feasible using deep neural networks, showing remarkable promise and high
efficiency. However, the reconstructed geometry, typically represented as a 3D
truncated signed distance function (TSDF), is often coarse without fine
geometric details. To address this problem, we propose three effective
solutions for improving the fidelity of inference-based 3D reconstructions. We
first present a resolution-agnostic TSDF supervision strategy to provide the
network with a more accurate learning signal during training, avoiding the
pitfalls of TSDF interpolation seen in previous work. We then introduce a depth
guidance strategy using multi-view depth estimates to enhance the scene
representation and recover more accurate surfaces. Finally, we develop a novel
architecture for the final layers of the network, conditioning the output TSDF
prediction on high-resolution image features in addition to coarse voxel
features, enabling sharper reconstruction of fine details. Our method,
FineRecon, produces smooth and highly accurate reconstructions, showing
significant improvements across multiple depth and 3D reconstruction metrics.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 02:50:29 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 22:35:08 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Stier",
"Noah",
""
],
[
"Ranjan",
"Anurag",
""
],
[
"Colburn",
"Alex",
""
],
[
"Yan",
"Yajie",
""
],
[
"Yang",
"Liang",
""
],
[
"Ma",
"Fangchang",
""
],
[
"Angles",
"Baptiste",
""
]
] |
new_dataset
| 0.970771 |
2304.04909
|
Ahmed Abdelreheem Mr.
|
Ahmed Abdelreheem, Ivan Skorokhodov, Maks Ovsjanikov, Peter Wonka
|
SATR: Zero-Shot Semantic Segmentation of 3D Shapes
|
Project webpage: https://samir55.github.io/SATR/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the task of zero-shot semantic segmentation of 3D shapes by using
large-scale off-the-shelf 2D image recognition models. Surprisingly, we find
that modern zero-shot 2D object detectors are better suited for this task than
contemporary text/image similarity predictors or even zero-shot 2D segmentation
networks. Our key finding is that it is possible to extract accurate 3D
segmentation maps from multi-view bounding box predictions by using the
topological properties of the underlying surface. For this, we develop the
Segmentation Assignment with Topological Reweighting (SATR) algorithm and
evaluate it on ShapeNetPart and our proposed FAUST benchmarks. SATR achieves
state-of-the-art performance and outperforms a baseline algorithm by 1.3% and
4% average mIoU on the FAUST coarse and fine-grained benchmarks, respectively,
and by 5.2% average mIoU on the ShapeNetPart benchmark. Our source code and
data will be publicly released. Project webpage:
https://samir55.github.io/SATR/.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 00:43:16 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 00:37:57 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Abdelreheem",
"Ahmed",
""
],
[
"Skorokhodov",
"Ivan",
""
],
[
"Ovsjanikov",
"Maks",
""
],
[
"Wonka",
"Peter",
""
]
] |
new_dataset
| 0.998491 |
2304.13935
|
Changhoon Kang
|
Changhoon Kang, Jongsoo Woo and James Won-Ki Hong
|
Bitcoin Double-Spending Attack Detection using Graph Neural Network
|
3 pages, 1 table, Accepted as poster at IEEE ICBC 2023
| null |
10.1109/ICBC56567.2023.10174934
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bitcoin transactions include unspent transaction outputs (UTXOs) as their
inputs and generate one or more newly owned UTXOs at specified addresses. Each
UTXO can only be used as an input in a transaction once, and using it in two or
more different transactions is referred to as a double-spending attack.
Ultimately, due to the characteristics of the Bitcoin protocol, double-spending
is impossible. However, problems may arise when a transaction is considered
final even though its finality has not been fully guaranteed in order to
achieve fast payment. In this paper, we propose an approach to detecting
Bitcoin double-spending attacks using a graph neural network (GNN). This model
predicts whether all nodes in the network contain a given payment transaction
in their own memory pool (mempool) using information only obtained from some
observer nodes in the network. Our experiment shows that the proposed model can
detect double-spending with an accuracy of at least 0.95 when more than about
1% of the entire nodes in the network are observer nodes.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 03:04:55 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Kang",
"Changhoon",
""
],
[
"Woo",
"Jongsoo",
""
],
[
"Hong",
"James Won-Ki",
""
]
] |
new_dataset
| 0.999058 |
2305.09381
|
Bo Han
|
Bo Han, Hao Peng, Minjing Dong, Yi Ren, Yixuan Shen, Chang Xu
|
AMD: Autoregressive Motion Diffusion
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human motion generation aims to produce plausible human motion sequences
according to various conditional inputs, such as text or audio. Despite the
feasibility of existing methods in generating motion based on short prompts and
simple motion patterns, they encounter difficulties when dealing with long
prompts or complex motions. The challenges are two-fold: 1) the scarcity of
human motion-captured data for long prompts and complex motions. 2) the high
diversity of human motions in the temporal domain and the substantial
divergence of distributions from conditional modalities, leading to a
many-to-many mapping problem when generating motion with complex and long
texts. In this work, we address these gaps by 1) elaborating the first dataset
pairing long textual descriptions and 3D complex motions (HumanLong3D), and 2)
proposing an autoregressive motion diffusion model (AMD). Specifically, AMD
integrates the text prompt at the current timestep with the text prompt and
action sequences at the previous timestep as conditional information to predict
the current action sequences in an iterative manner. Furthermore, we present
its generalization for X-to-Motion with "No Modality Left Behind", enabling the
generation of high-definition and high-fidelity human motions based on
user-defined modality input.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 12:09:30 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 06:06:36 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Jul 2023 02:25:52 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Jul 2023 00:55:30 GMT"
},
{
"version": "v5",
"created": "Tue, 11 Jul 2023 06:12:43 GMT"
},
{
"version": "v6",
"created": "Mon, 21 Aug 2023 09:04:44 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Han",
"Bo",
""
],
[
"Peng",
"Hao",
""
],
[
"Dong",
"Minjing",
""
],
[
"Ren",
"Yi",
""
],
[
"Shen",
"Yixuan",
""
],
[
"Xu",
"Chang",
""
]
] |
new_dataset
| 0.995981 |
2305.11377
|
Karandeep Singh
|
Karandeep Singh, Yu-Che Tsai, Cheng-Te Li, Meeyoung Cha, Shou-De Lin
|
GraphFC: Customs Fraud Detection with Label Scarcity
| null | null | null | null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Customs officials across the world encounter huge volumes of transactions.
With increased connectivity and globalization, customs transactions continue to
grow every year. Associated with customs transactions is customs fraud: the
intentional manipulation of goods declarations to avoid taxes and duties. With
limited manpower, customs offices can only undertake manual inspection of a
limited number of declarations. This necessitates automating customs fraud
detection with machine learning (ML) techniques. Due to the limited manual
inspection available for labeling newly incoming declarations, the ML approach
should have robust performance despite the scarcity of labeled data. However,
current approaches for customs fraud detection are not well suited to this
real-world setting. In this
work, we propose $\textbf{GraphFC}$ ($\textbf{Graph}$ neural networks for
$\textbf{C}$ustoms $\textbf{F}$raud), a model-agnostic, domain-specific,
semi-supervised graph neural network based customs fraud detection algorithm
that has strong semi-supervised and inductive capabilities. With up to a 252%
relative increase in recall over the present state of the art, extensive
experimentation on real customs data from the customs administrations of three
different countries demonstrates that GraphFC consistently outperforms various
baselines and the present state of the art by a large margin.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 01:47:12 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 13:30:48 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Singh",
"Karandeep",
""
],
[
"Tsai",
"Yu-Che",
""
],
[
"Li",
"Cheng-Te",
""
],
[
"Cha",
"Meeyoung",
""
],
[
"Lin",
"Shou-De",
""
]
] |
new_dataset
| 0.999359 |
2305.14962
|
Maksym Lysak
|
Christoph Auer, Ahmed Nassar, Maksym Lysak, Michele Dolfi, Nikolaos
Livathinos, Peter Staar
|
ICDAR 2023 Competition on Robust Layout Segmentation in Corporate
Documents
|
ICDAR 2023, 10 pages, 4 figures
| null |
10.1007/978-3-031-41679-8_27
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transforming documents into machine-processable representations is a
challenging task due to their complex structures and variability in formats.
Recovering the layout structure and content from PDF files or scanned material
has remained a key problem for decades. ICDAR has a long tradition in hosting
competitions to benchmark the state-of-the-art and encourage the development of
novel solutions to document layout understanding. In this report, we present
the results of our \textit{ICDAR 2023 Competition on Robust Layout Segmentation
in Corporate Documents}, which posed the challenge to accurately segment the
page layout in a broad range of document styles and domains, including
corporate reports, technical literature and patents. To raise the bar over
previous competitions, we engineered a hard competition dataset and proposed
the recent DocLayNet dataset for training. We recorded 45 team registrations
and received official submissions from 21 teams. In the presented solutions, we
recognize interesting combinations of recent computer vision models, data
augmentation strategies and ensemble methods to achieve remarkable accuracy in
the task we posed. A clear trend towards adoption of vision-transformer based
methods is evident. The results demonstrate substantial progress towards
achieving robust and highly generalizing methods for document layout
understanding.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 09:56:47 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Auer",
"Christoph",
""
],
[
"Nassar",
"Ahmed",
""
],
[
"Lysak",
"Maksym",
""
],
[
"Dolfi",
"Michele",
""
],
[
"Livathinos",
"Nikolaos",
""
],
[
"Staar",
"Peter",
""
]
] |
new_dataset
| 0.992242 |
2305.16487
|
Rawal Khirodkar
|
Rawal Khirodkar, Aayush Bansal, Lingni Ma, Richard Newcombe, Minh Vo,
Kris Kitani
|
EgoHumans: An Egocentric 3D Multi-Human Benchmark
|
Accepted to ICCV 2023 (Oral)
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present EgoHumans, a new multi-view multi-human video benchmark to advance
the state-of-the-art of egocentric human 3D pose estimation and tracking.
Existing egocentric benchmarks either capture single subject or indoor-only
scenarios, which limit the generalization of computer vision algorithms for
real-world applications. We propose a novel 3D capture setup to construct a
comprehensive egocentric multi-human benchmark in the wild with annotations to
support diverse tasks such as human detection, tracking, 2D/3D pose estimation,
and mesh recovery. We leverage consumer-grade wearable camera-equipped glasses
for the egocentric view, which enables us to capture dynamic activities like
playing tennis, fencing, volleyball, etc. Furthermore, our multi-view setup
generates accurate 3D ground truth even under severe or complete occlusion. The
dataset consists of more than 125k egocentric images, spanning diverse scenes
with a particular focus on challenging and unchoreographed multi-human
activities and fast-moving egocentric views. We rigorously evaluate existing
state-of-the-art methods and highlight their limitations in the egocentric
scenario, specifically on multi-human tracking. To address such limitations, we
propose EgoFormer, a novel approach with a multi-stream transformer
architecture and explicit 3D spatial reasoning to estimate and track the human
pose. EgoFormer significantly outperforms prior art by 13.6% IDF1 on the
EgoHumans dataset.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 21:37:36 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 23:28:45 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Khirodkar",
"Rawal",
""
],
[
"Bansal",
"Aayush",
""
],
[
"Ma",
"Lingni",
""
],
[
"Newcombe",
"Richard",
""
],
[
"Vo",
"Minh",
""
],
[
"Kitani",
"Kris",
""
]
] |
new_dataset
| 0.999605 |
2306.03528
|
Jiawen Kang
|
Jiawen Kang, Jiayi He, Hongyang Du, Zehui Xiong, Zhaohui Yang, Xumin
Huang, Shengli Xie
|
Adversarial Attacks and Defenses for Semantic Communication in Vehicular
Metaverses
| null | null |
10.1109/MWC.004.2200617
| null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
For vehicular metaverses, one of the ultimate user-centric goals is to
optimize the immersive experience and Quality of Service (QoS) for users on
board. Semantic Communication (SemCom) has been introduced as a revolutionary
paradigm that significantly eases communication resource pressure for vehicular
metaverse applications to achieve this goal. SemCom enables high-quality and
ultra-efficient vehicular communication, even with explosively increasing data
traffic among vehicles. In this article, we propose a hierarchical
SemCom-enabled vehicular metaverses framework consisting of the global
metaverse, local metaverses, SemCom module, and resource pool. The global and
local metaverses are brand-new concepts from the metaverse's distribution
standpoint. Considering the QoS of users, this article explores the potential
security vulnerabilities of the proposed framework. To that end, this study
highlights a specific security risk to the framework's SemCom module and offers
a viable defense solution, thereby encouraging community researchers to focus more
on vehicular metaverse security. Finally, we provide an overview of the open
issues of secure SemCom in the vehicular metaverses, notably pointing out
potential future research directions.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 09:24:06 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 15:03:09 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Kang",
"Jiawen",
""
],
[
"He",
"Jiayi",
""
],
[
"Du",
"Hongyang",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Yang",
"Zhaohui",
""
],
[
"Huang",
"Xumin",
""
],
[
"Xie",
"Shengli",
""
]
] |
new_dataset
| 0.988939 |
2306.03691
|
Tianyu Zhang
|
Gang Wang, Tianyu Zhang, Chuanyu Xue, Jiachen Wang, Mark Nixon, Song
Han
|
Time-Sensitive Networking (TSN) for Industrial Automation: A Survey
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the introduction of Cyber-Physical Systems (CPS) and Internet of Things
(IoT) into industrial applications, industrial automation is undergoing
tremendous change, especially with regard to improving efficiency and reducing
the cost of products. Industrial automation applications are often required to
transmit time- and safety-critical data to monitor and control industrial
processes, especially for critical control systems. There are a number of
solutions to meet these requirements (e.g., priority-based real-time schedules
and closed-loop feedback control systems). However, due to their different
processing capabilities (e.g., in the end devices and network switches),
different vendors may come out with distinct solutions, and this makes the
large-scale integration of devices from different vendors difficult or
impossible. IEEE 802.1 Time-Sensitive Networking (TSN) is a standardization
group formed to enhance and optimize the IEEE 802.1 network standards,
especially for Ethernet-based networks. These solutions can be evolved and
adapted into a cross-industry scenario, such as a large-scale distributed
industrial plant, which requires multiple industrial entities working
collaboratively. This paper provides a comprehensive review on the current
advances in TSN standards for industrial automation. We present the
state-of-the-art IEEE TSN standards and discuss the opportunities and
challenges when integrating each protocol into the industry domains. Finally,
we discuss some promising research about applying the TSN technology to
industrial automation applications.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 14:03:00 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 04:48:28 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Aug 2023 00:55:21 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wang",
"Gang",
""
],
[
"Zhang",
"Tianyu",
""
],
[
"Xue",
"Chuanyu",
""
],
[
"Wang",
"Jiachen",
""
],
[
"Nixon",
"Mark",
""
],
[
"Han",
"Song",
""
]
] |
new_dataset
| 0.988277 |
2306.12235
|
Philipp Christmann
|
Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum
|
CompMix: A Benchmark for Heterogeneous Question Answering
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fact-centric question answering (QA) often requires access to multiple,
heterogeneous, information sources. By jointly considering several sources like
a knowledge base (KB), a text collection, and tables from the web, QA systems
can enhance their answer coverage and confidence. However, existing QA
benchmarks are mostly constructed with a single source of knowledge in mind.
This limits capabilities of these benchmarks to fairly evaluate QA systems that
can tap into more than one information repository. To bridge this gap, we
release CompMix, a crowdsourced QA benchmark which naturally demands the
integration of a mixture of input sources. CompMix has a total of 9,410
questions, and features several complex intents like joins and temporal
conditions. Evaluation of a range of QA systems on CompMix highlights the need
for further research on leveraging information from heterogeneous sources.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 12:53:31 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2023 13:48:14 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Aug 2023 18:16:59 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Christmann",
"Philipp",
""
],
[
"Roy",
"Rishiraj Saha",
""
],
[
"Weikum",
"Gerhard",
""
]
] |
new_dataset
| 0.958261 |
2306.13592
|
Xinda Li
|
Xinda Li
|
TACOformer:Token-channel compounded Cross Attention for Multimodal
Emotion Recognition
|
Accepted by IJCAI 2023- AI4TS workshop
| null | null | null |
cs.MM cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, emotion recognition based on physiological signals has emerged as a
field with intensive research. The utilization of multi-modal, multi-channel
physiological signals has significantly improved the performance of emotion
recognition systems, due to their complementarity. However, effectively
integrating emotion-related semantic information from different modalities and
capturing inter-modal dependencies remains a challenging issue. Many existing
multimodal fusion methods ignore either token-to-token or channel-to-channel
correlations of multichannel signals from different modalities, which limits
the classification capability of the models to some extent. In this paper, we
propose a comprehensive perspective of multimodal fusion that integrates
channel-level and token-level cross-modal interactions. Specifically, we
introduce a unified cross attention module called Token-chAnnel COmpound (TACO)
Cross Attention to perform multimodal fusion, which simultaneously models
channel-level and token-level dependencies between modalities. Additionally, we
propose a 2D position encoding method to preserve information about the spatial
distribution of EEG signal channels, then we use two transformer encoders ahead
of the fusion module to capture long-term temporal dependencies from the EEG
signal and the peripheral physiological signal, respectively.
Subject-independent experiments on the emotional datasets DEAP and Dreamer
demonstrate that the proposed model achieves state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 16:28:12 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 16:37:46 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Li",
"Xinda",
""
]
] |
new_dataset
| 0.999427 |
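For reference, the token-level half of such cross-modal fusion reduces to standard scaled dot-product cross attention; the sketch below shows only that generic building block (with hypothetical shapes and random weights), not TACOformer's compound token-channel attention:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, Wq, Wk, Wv):
    """Scaled dot-product cross attention from one token set to another."""
    Q, K, V = queries @ Wq, context @ Wk, context @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 32
eeg_tokens = rng.normal(size=(10, d))   # e.g. 10 temporal tokens from EEG
per_tokens = rng.normal(size=(6, d))    # e.g. 6 tokens from a peripheral signal
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))

fused = cross_attention(eeg_tokens, per_tokens, Wq, Wk, Wv)
print(fused.shape)                      # (10, 32): EEG tokens attending to the other modality
```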
2306.16527
|
Hugo Lauren\c{c}on
|
Hugo Lauren\c{c}on, Lucile Saulnier, L\'eo Tronchon, Stas Bekman,
Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander
M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
|
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text
Documents
| null | null | null | null |
cs.IR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 14:01:01 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 09:35:52 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Laurençon",
"Hugo",
""
],
[
"Saulnier",
"Lucile",
""
],
[
"Tronchon",
"Léo",
""
],
[
"Bekman",
"Stas",
""
],
[
"Singh",
"Amanpreet",
""
],
[
"Lozhkov",
"Anton",
""
],
[
"Wang",
"Thomas",
""
],
[
"Karamcheti",
"Siddharth",
""
],
[
"Rush",
"Alexander M.",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Cord",
"Matthieu",
""
],
[
"Sanh",
"Victor",
""
]
] |
new_dataset
| 0.99982 |
2307.05182
|
Long Bai
|
Long Bai, Mobarakol Islam, Hongliang Ren
|
CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual
Question Localized-Answering in Robotic Surgery
|
To appear in MICCAI 2023. Code availability:
https://github.com/longbai1006/CAT-ViL
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical students and junior surgeons often rely on senior surgeons and
specialists to answer their questions when learning surgery. However, experts
are often busy with clinical and academic work, and have little time to give
guidance. Meanwhile, existing deep learning (DL)-based surgical Visual Question
Answering (VQA) systems can only provide simple answers without the location of
the answers. In addition, vision-language (ViL) embedding is still a less
explored research direction in these kinds of tasks. Therefore, a surgical Visual
Question Localized-Answering (VQLA) system would be helpful for medical
students and junior surgeons to learn and understand from recorded surgical
videos. We propose an end-to-end Transformer with the Co-Attention gaTed
Vision-Language (CAT-ViL) embedding for VQLA in surgical scenarios, which does
not require feature extraction through detection models. The CAT-ViL embedding
module is designed to fuse multimodal features from visual and textual sources.
The fused embedding will feed a standard Data-Efficient Image Transformer
(DeiT) module, before the parallel classifier and detector for joint
prediction. We conduct the experimental validation on public surgical videos
from MICCAI EndoVis Challenge 2017 and 2018. The experimental results highlight
the superior performance and robustness of our proposed model compared to the
state-of-the-art approaches. Ablation studies further prove the outstanding
performance of all the proposed components. The proposed method provides a
promising solution for surgical scene understanding, and opens up a primary
step in the Artificial Intelligence (AI)-based VQLA system for surgical
training. Our code is publicly available.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 11:35:40 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 09:56:49 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Aug 2023 22:23:36 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Bai",
"Long",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Ren",
"Hongliang",
""
]
] |
new_dataset
| 0.996798 |
2307.07742
|
Yi-Syuan Chen
|
Yi-Syuan Chen, Yun-Zhu Song, Cheng Yu Yeo, Bei Liu, Jianlong Fu,
Hong-Han Shuai
|
SINC: Self-Supervised In-Context Learning for Vision-Language Tasks
|
Accepted by ICCV 2023; Camera Ready Version
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large Pre-trained Transformers exhibit an intriguing capacity for in-context
learning. Without gradient updates, these models can rapidly construct new
predictors from demonstrations presented in the inputs. Recent works promote
this ability in the vision-language domain by incorporating visual information
into large language models that can already make in-context predictions.
However, these methods could inherit issues in the language domain, such as
template sensitivity and hallucination. Also, the scale of these language
models raises a significant demand for computations, making learning and
operating these models resource-intensive. To this end, we raise a question:
``How can we enable in-context learning without relying on the intrinsic
in-context ability of large language models?". To answer it, we propose a
succinct and general framework, Self-supervised IN-Context learning (SINC),
that introduces a meta-model to learn on self-supervised prompts consisting of
tailored demonstrations. The learned models can be transferred to downstream
tasks for making in-context predictions on-the-fly. Extensive experiments show
that SINC outperforms gradient-based methods in various vision-language tasks
under few-shot settings. Furthermore, the designs of SINC help us investigate
the benefits of in-context learning across different tasks, and the analysis
further reveals the essential components for the emergence of in-context
learning in the vision-language domain.
|
[
{
"version": "v1",
"created": "Sat, 15 Jul 2023 08:33:08 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 08:27:16 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Chen",
"Yi-Syuan",
""
],
[
"Song",
"Yun-Zhu",
""
],
[
"Yeo",
"Cheng Yu",
""
],
[
"Liu",
"Bei",
""
],
[
"Fu",
"Jianlong",
""
],
[
"Shuai",
"Hong-Han",
""
]
] |
new_dataset
| 0.987212 |
2307.08652
|
Aalok Gangopadhyay
|
Aalok Gangopadhyay, Paras Gupta, Tarun Sharma, Prajwal Singh,
Shanmuganathan Raman
|
Search Me Knot, Render Me Knot: Embedding Search and Differentiable
Rendering of Knots in 3D
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the problem of knot-based inverse perceptual art. Given multiple
target images and their corresponding viewing configurations, the objective is
to find a 3D knot-based tubular structure whose appearance resembles the target
images when viewed from the specified viewing configurations. To solve this
problem, we first design a differentiable rendering algorithm for rendering
tubular knots embedded in 3D for arbitrary perspective camera configurations.
Utilizing this differentiable rendering algorithm, we search over the space of
knot configurations to find the ideal knot embedding. We represent the knot
embeddings via homeomorphisms of the desired template knot, where the
homeomorphisms are parametrized by the weights of an invertible neural network.
Our approach is fully differentiable, making it possible to find the ideal 3D
tubular structure for the desired perceptual art using gradient-based
optimization. We propose several loss functions that impose additional physical
constraints, enforcing that the tube is free of self-intersection, lies within
a predefined region in space, satisfies the physical bending limits of the tube
material and the material cost is within a specified budget. We demonstrate
through results that our knot representation is highly expressive and gives
impressive results even for challenging target images in both single view as
well as multiple view constraints. Through extensive ablation studies we show
that each of the proposed loss functions is effective in ensuring physical
realizability. We construct a real world 3D-printed object to demonstrate the
practical utility of our approach. To the best of our knowledge, we are the
first to propose a fully differentiable optimization framework for knot-based
inverse perceptual art.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 17:03:26 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Jul 2023 03:16:22 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Jul 2023 12:19:33 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Aug 2023 07:31:26 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Gangopadhyay",
"Aalok",
""
],
[
"Gupta",
"Paras",
""
],
[
"Sharma",
"Tarun",
""
],
[
"Singh",
"Prajwal",
""
],
[
"Raman",
"Shanmuganathan",
""
]
] |
new_dataset
| 0.994887 |
2307.10577
|
Hugo Latapie
|
Hugo Latapie, Shan Yu, Patrick Hammer, Kristinn R. Thorisson, Vahagn
Petrosyan, Brandon Kynoch, Alind Khare, Payman Behnam, Alexey Tumanov,
Aksheit Saxena, Anish Aralikatti, Hanning Chen, Mohsen Imani, Mike Archbold,
Tangrui Li, Pei Wang, Justin Hart
|
Ethosight: A Reasoning-Guided Iterative Learning System for Nuanced
Perception based on Joint-Embedding & Contextual Label Affinity
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional computer vision models often necessitate extensive data
acquisition, annotation, and validation. These models frequently struggle in
real-world applications, resulting in high false positive and negative rates,
and exhibit poor adaptability to new scenarios, often requiring costly
retraining. To address these issues, we present Ethosight, a flexible and
adaptable zero-shot video analytics system. Ethosight begins from a clean slate
based on user-defined video analytics, specified through natural language or
keywords, and leverages joint embedding models and reasoning mechanisms
informed by ontologies such as WordNet and ConceptNet. Ethosight operates
effectively on low-cost edge devices and supports enhanced runtime adaptation,
thereby offering a new approach to continuous learning without catastrophic
forgetting. We provide empirical validation of Ethosight's promising
effectiveness across diverse and complex use cases, while highlighting areas
for further improvement. A significant contribution of this work is the release
of all source code and datasets to enable full reproducibility and to foster
further innovation in both the research and commercial domains.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 04:41:39 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 06:59:21 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Aug 2023 21:24:13 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Latapie",
"Hugo",
""
],
[
"Yu",
"Shan",
""
],
[
"Hammer",
"Patrick",
""
],
[
"Thorisson",
"Kristinn R.",
""
],
[
"Petrosyan",
"Vahagn",
""
],
[
"Kynoch",
"Brandon",
""
],
[
"Khare",
"Alind",
""
],
[
"Behnam",
"Payman",
""
],
[
"Tumanov",
"Alexey",
""
],
[
"Saxena",
"Aksheit",
""
],
[
"Aralikatti",
"Anish",
""
],
[
"Chen",
"Hanning",
""
],
[
"Imani",
"Mohsen",
""
],
[
"Archbold",
"Mike",
""
],
[
"Li",
"Tangrui",
""
],
[
"Wang",
"Pei",
""
],
[
"Hart",
"Justin",
""
]
] |
new_dataset
| 0.986135 |
2307.10816
|
Jinheng Xie
|
Jinheng Xie, Yuexiang Li, Yawen Huang, Haozhe Liu, Wentian Zhang,
Yefeng Zheng and Mike Zheng Shou
|
BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained
Diffusion
|
Accepted by ICCV 2023. Code is available at:
https://github.com/showlab/BoxDiff
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent text-to-image diffusion models have demonstrated an astonishing
capacity to generate high-quality images. However, researchers have mainly
studied synthesizing images with only text prompts. While some works have
explored using other modalities as conditions, considerable paired data, e.g.,
box/mask-image pairs, and fine-tuning time are required for nurturing models.
As such paired data is time-consuming and labor-intensive to acquire and
restricted to a closed set, this potentially becomes the bottleneck for
applications in an open world. This paper focuses on the simplest form of
user-provided conditions, e.g., box or scribble. To mitigate the aforementioned
problem, we propose a training-free method to control objects and contexts in
the synthesized images adhering to the given spatial conditions. Specifically,
three spatial constraints, i.e., Inner-Box, Outer-Box, and Corner Constraints,
are designed and seamlessly integrated into the denoising step of diffusion
models, requiring no additional training and massive annotated layout data.
Extensive experimental results demonstrate that the proposed constraints can
control what and where to present in the images while retaining the ability of
Diffusion models to synthesize with high fidelity and diverse concept coverage.
The code is publicly available at https://github.com/showlab/BoxDiff.
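
The following sketch illustrates the general idea of constraining a token's cross-attention map to a user box during denoising. It is a guessed simplification under stated assumptions (a single attention map, a simple top-k inner/outer penalty), not the actual BoxDiff constraints or code.

```python
# Illustrative sketch (assumptions, not the BoxDiff implementation): a
# box-constrained loss on one text token's cross-attention map, encouraging
# attention mass inside the user box and suppressing it outside.
import torch

def box_constraint_loss(attn_map: torch.Tensor, box_mask: torch.Tensor,
                        topk: int = 10) -> torch.Tensor:
    inside = attn_map[box_mask.bool()]
    outside = attn_map[~box_mask.bool()]
    # Encourage the strongest responses inside the box to be large...
    inner = 1.0 - inside.topk(min(topk, inside.numel())).values.mean()
    # ...and the strongest responses outside the box to be small.
    outer = outside.topk(min(topk, outside.numel())).values.mean()
    return inner + outer

attn = torch.rand(64, 64)                      # attention map for one token
mask = torch.zeros(64, 64); mask[16:48, 16:48] = 1.0   # user-provided box
print(box_constraint_loss(attn, mask))
```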
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 12:25:06 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 05:45:27 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Aug 2023 11:54:46 GMT"
},
{
"version": "v4",
"created": "Mon, 21 Aug 2023 13:07:10 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Xie",
"Jinheng",
""
],
[
"Li",
"Yuexiang",
""
],
[
"Huang",
"Yawen",
""
],
[
"Liu",
"Haozhe",
""
],
[
"Zhang",
"Wentian",
""
],
[
"Zheng",
"Yefeng",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.996247 |
2307.13552
|
Bharath Muppasani
|
Bharath Muppasani, Vishal Pallagani, Biplav Srivastava, Forest
Agostinelli
|
On Solving the Rubik's Cube with Domain-Independent Planners Using
Standard Representations
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Rubik's Cube (RC) is a well-known and computationally challenging puzzle that
has motivated AI researchers to explore efficient alternative representations
and problem-solving methods. The ideal situation for planning here is that a
problem be solved optimally and efficiently represented in a standard notation
using a general-purpose solver and heuristics. The fastest solver today for RC
is DeepCubeA with a custom representation, and another approach is with
Scorpion planner with State-Action-Space+ (SAS+) representation. In this paper,
we present the first RC representation in the popular PDDL language so that the
domain becomes more accessible to PDDL planners, competitions, and knowledge
engineering tools, and is more human-readable. We then bridge across existing
approaches and compare performance. We find that in one comparable experiment,
DeepCubeA (trained with 12 RC actions) solves all problems with varying
complexities, albeit only 78.5% are optimal plans. For the same problem set,
Scorpion with SAS+ representation and pattern database heuristics solves 61.50% of
problems optimally, while FastDownward with PDDL representation and FF
heuristic solves 56.50% of problems, out of which 79.64% of the plans generated
were optimal. Our study provides valuable insights into the trade-offs between
representational choice and plan optimality that can help researchers design
future strategies for challenging domains combining general-purpose solving
methods (planning, reinforcement learning), heuristics, and representations
(standard or custom).
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 14:52:23 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 12:35:36 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Muppasani",
"Bharath",
""
],
[
"Pallagani",
"Vishal",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Agostinelli",
"Forest",
""
]
] |
new_dataset
| 0.991095 |
2307.13901
|
Ivan Lazarevich
|
Ivan Lazarevich and Matteo Grimaldi and Ravish Kumar and Saptarshi
Mitra and Shahrukh Khan and Sudhakar Sah
|
YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present YOLOBench, a benchmark comprised of 550+ YOLO-based object
detection models on 4 different datasets and 4 different embedded hardware
platforms (x86 CPU, ARM CPU, Nvidia GPU, NPU). We collect accuracy and latency
numbers for a variety of YOLO-based one-stage detectors at different model
scales by performing a fair, controlled comparison of these detectors with a
fixed training environment (code and training hyperparameters).
Pareto-optimality analysis of the collected data reveals that, if modern
detection heads and training techniques are incorporated into the learning
process, multiple architectures of the YOLO series achieve a good
accuracy-latency trade-off, including older models like YOLOv3 and YOLOv4. We
also evaluate training-free accuracy estimators used in neural architecture
search on YOLOBench and demonstrate that, while most state-of-the-art zero-cost
accuracy estimators are outperformed by a simple baseline like MAC count, some
of them can be effectively used to predict Pareto-optimal detection models. We
showcase that by using a zero-cost proxy to identify a YOLO architecture
competitive against a state-of-the-art YOLOv8 model on a Raspberry Pi 4 CPU.
The code and data are available at
https://github.com/Deeplite/deeplite-torch-zoo
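
A small sketch of the Pareto-optimality analysis used in the accuracy-latency trade-off study is shown below. The helper and the measurement values are illustrative assumptions, not code or numbers from the YOLOBench repository.

```python
# Sketch: extract the Pareto-optimal set from (latency_ms, accuracy)
# measurements; lower latency and higher accuracy are better.
def pareto_front(models):
    """models: list of (name, latency_ms, accuracy)."""
    front = []
    # Sort by latency ascending; keep a model only if it beats the accuracy
    # of every faster model kept so far.
    for name, lat, acc in sorted(models, key=lambda m: (m[1], -m[2])):
        if not front or acc > front[-1][2]:
            front.append((name, lat, acc))
    return front

measurements = [("yolov3", 35.0, 0.41), ("yolov4", 30.0, 0.43),
                ("yolov5n", 8.0, 0.28), ("yolov8s", 12.0, 0.37)]
for name, lat, acc in pareto_front(measurements):
    print(f"{name}: {lat} ms, mAP={acc}")
```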
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 01:51:10 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 17:55:07 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Lazarevich",
"Ivan",
""
],
[
"Grimaldi",
"Matteo",
""
],
[
"Kumar",
"Ravish",
""
],
[
"Mitra",
"Saptarshi",
""
],
[
"Khan",
"Shahrukh",
""
],
[
"Sah",
"Sudhakar",
""
]
] |
new_dataset
| 0.999324 |
2307.14480
|
Chen Chen
|
Chen Chen, Vasudev Gohil, Rahul Kande, Ahmad-Reza Sadeghi, Jeyavijayan
Rajendran
|
PSOFuzz: Fuzzing Processors with Particle Swarm Optimization
|
To be published in the proceedings of the ICCAD, 2023
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Hardware security vulnerabilities in computing systems compromise the
security defenses of not only the hardware but also the software running on it.
Recent research has shown that hardware fuzzing is a promising technique to
efficiently detect such vulnerabilities in large-scale designs such as modern
processors. However, the current fuzzing techniques do not adjust their
strategies dynamically toward faster and higher design space exploration,
resulting in slow vulnerability detection, evident through their low design
coverage. To address this problem, we propose PSOFuzz, which uses particle
swarm optimization (PSO) to schedule the mutation operators and to generate
initial input programs dynamically with the objective of detecting
vulnerabilities quickly. Unlike traditional PSO, which finds a single optimal
solution, we use a modified PSO that dynamically computes the optimal solution
for selecting mutation operators required to explore new design regions in
hardware. We also address the challenge of inefficient initial seed generation
by employing PSO-based seed generation. Including these optimizations, our
final formulation outperforms fuzzers without PSO. Experiments show that
PSOFuzz achieves up to 15.25$\times$ speedup for vulnerability detection and up
to 2.22$\times$ speedup for coverage compared to the state-of-the-art
simulation-based hardware fuzzer.
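
To make the PSO-based scheduling concrete, the sketch below updates a particle's selection probabilities over mutation operators toward its personal best and the swarm's global best. It is a hypothetical simplification of the idea, not the PSOFuzz implementation; in the real fuzzer the fitness driving pbest/gbest would be coverage gain.

```python
# Illustrative PSO step for mutation-operator selection probabilities
# (assumption-laden sketch, not the PSOFuzz code).
import random

def pso_step(position, velocity, pbest, gbest, w=0.5, c1=1.5, c2=1.5):
    new_vel, new_pos = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        v_new = w * v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
        new_vel.append(v_new)
        new_pos.append(min(max(x + v_new, 0.0), 1.0))  # keep each weight in [0, 1]
    total = sum(new_pos) or 1.0
    return [p / total for p in new_pos], new_vel       # renormalise to a distribution

# Three mutation operators; coverage feedback would update pbest/gbest.
pos, vel = [0.3, 0.3, 0.4], [0.0, 0.0, 0.0]
pos, vel = pso_step(pos, vel, pbest=[0.2, 0.5, 0.3], gbest=[0.1, 0.7, 0.2])
print(pos)
```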
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 20:08:01 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 18:16:32 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Chen",
"Chen",
""
],
[
"Gohil",
"Vasudev",
""
],
[
"Kande",
"Rahul",
""
],
[
"Sadeghi",
"Ahmad-Reza",
""
],
[
"Rajendran",
"Jeyavijayan",
""
]
] |
new_dataset
| 0.99945 |
2307.14770
|
Yiqian Wu
|
Yiqian Wu, Hao Xu, Xiangjun Tang, Hongbo Fu, Xiaogang Jin
|
3DPortraitGAN: Learning One-Quarter Headshot 3D GANs from a Single-View
Portrait Dataset with Diverse Body Poses
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D-aware face generators are typically trained on 2D real-life face image
datasets that primarily consist of near-frontal face data, and as such, they
are unable to construct one-quarter headshot 3D portraits with complete head,
neck, and shoulder geometry. Two reasons account for this issue: First,
existing facial recognition methods struggle with extracting facial data
captured from large camera angles or back views. Second, it is challenging to
learn a distribution of 3D portraits covering the one-quarter headshot region
from single-view data due to significant geometric deformation caused by
diverse body poses. To this end, we first create the dataset
360{\deg}-Portrait-HQ (360{\deg}PHQ for short) which consists of high-quality
single-view real portraits annotated with a variety of camera parameters (the
yaw angles span the entire 360{\deg} range) and body poses. We then propose
3DPortraitGAN, the first 3D-aware one-quarter headshot portrait generator that
learns a canonical 3D avatar distribution from the 360{\deg}PHQ dataset with
body pose self-learning. Our model can generate view-consistent portrait images
from all camera angles with a canonical one-quarter headshot 3D representation.
Our experiments show that the proposed framework can accurately predict
portrait body poses and generate view-consistent, realistic portrait images
with complete geometry from all camera angles.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 11:02:36 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 06:35:44 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wu",
"Yiqian",
""
],
[
"Xu",
"Hao",
""
],
[
"Tang",
"Xiangjun",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Jin",
"Xiaogang",
""
]
] |
new_dataset
| 0.999842 |
2307.16761
|
Matthew England Dr
|
Ali K. Uncu, James H. Davenport and Matthew England
|
SMT-Solving Induction Proofs of Inequalities
|
Presented at the 2022 SC-Square Workshop
|
Proceedings of the 7th Workshop on Satisfiability Checking and
Symbolic Computation (SC2 '22), A. Uncu and H. Barbosa eds. CEUR Workshop
Proceedings 3458, pp. 10-24, 2023
| null | null |
cs.SC cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper accompanies a new dataset of non-linear real arithmetic problems
for the SMT-LIB benchmark collection. The problems come from an automated proof
procedure of Gerhold--Kauers, which is well suited for solution by SMT. The
problems of this type have not been tackled by SMT-solvers before. We describe
the proof technique and give one new such proof to illustrate it. We then
describe the dataset and the results of benchmarking. The benchmarks on the new
dataset are quite different to the existing ones. The benchmarking also brings
forward some interesting debate on the use/inclusion of rational functions and
algebraic numbers in the SMT-LIB.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 15:32:16 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Uncu",
"Ali K.",
""
],
[
"Davenport",
"James H.",
""
],
[
"England",
"Matthew",
""
]
] |
new_dataset
| 0.994157 |
2308.00121
|
Andreas Happe
|
Andreas Happe, J\"urgen Cito
|
Getting pwn'd by AI: Penetration Testing with Large Language Models
| null | null |
10.1145/3611643.3613083
| null |
cs.CL cs.AI cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The field of software security testing, more specifically penetration
testing, is an activity that requires high levels of expertise and involves
many manual testing and analysis steps. This paper explores the potential usage
of large-language models, such as GPT3.5, to augment penetration testers with
AI sparring partners. We explore the feasibility of supplementing penetration
testers with AI models for two distinct use cases: high-level task planning for
security testing assignments and low-level vulnerability hunting within a
vulnerable virtual machine. For the latter, we implemented a closed feedback
loop between LLM-generated low-level actions and a vulnerable virtual machine
(connected through SSH) and allowed the LLM to analyze the machine state for
vulnerabilities and suggest concrete attack vectors which were automatically
executed within the virtual machine. We discuss promising initial results,
detail avenues for improvement, and close deliberating on the ethics of
providing AI-based sparring partners.
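
The closed feedback loop described above can be sketched as follows. Everything here is a placeholder-laden illustration, not the authors' tooling: `query_llm` is a hypothetical stub standing in for any LLM API, and the host and credentials are placeholders for an intentionally vulnerable lab VM.

```python
# Minimal sketch of an LLM-in-the-loop command cycle over SSH against a
# deliberately vulnerable lab VM (hypothetical stand-in, not the paper's code).
import paramiko

def query_llm(history: str) -> str:
    # Stub: replace with a real LLM client that proposes the next command.
    return "id"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("192.0.2.10", username="lowpriv", password="trainingonly")  # placeholders

history = "Goal: enumerate this Linux lab VM.\n"
for _ in range(5):                       # a few feedback iterations
    cmd = query_llm(history)             # LLM proposes the next shell command
    _, stdout, stderr = ssh.exec_command(cmd)
    output = stdout.read().decode() + stderr.read().decode()
    history += f"$ {cmd}\n{output}\n"    # feed machine state back to the LLM
ssh.close()
```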
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 19:59:22 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2023 14:57:11 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 12:26:40 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Happe",
"Andreas",
""
],
[
"Cito",
"Jürgen",
""
]
] |
new_dataset
| 0.991118 |
2308.04583
|
Yueru Luo
|
Yueru Luo, Chaoda Zheng, Xu Yan, Tang Kun, Chao Zheng, Shuguang Cui,
Zhen Li
|
LATR: 3D Lane Detection from Monocular Images with Transformer
|
Accepted by ICCV2023 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D lane detection from monocular images is a fundamental yet challenging task
in autonomous driving. Recent advances primarily rely on structural 3D
surrogates (e.g., bird's eye view) built from front-view image features and
camera parameters. However, the depth ambiguity in monocular images inevitably
causes misalignment between the constructed surrogate feature map and the
original image, posing a great challenge for accurate lane detection. To
address the above issue, we present a novel LATR model, an end-to-end 3D lane
detector that uses 3D-aware front-view features without transformed view
representation. Specifically, LATR detects 3D lanes via cross-attention based
on query and key-value pairs, constructed using our lane-aware query generator
and dynamic 3D ground positional embedding. On the one hand, each query is
generated based on 2D lane-aware features and adopts a hybrid embedding to
enhance lane information. On the other hand, 3D space information is injected
as positional embedding from an iteratively-updated 3D ground plane. LATR
outperforms previous state-of-the-art methods on both the synthetic Apollo and the
realistic OpenLane and ONCE-3DLanes benchmarks by large margins (e.g., an 11.4 gain in terms
of F1 score on OpenLane). Code will be released at
https://github.com/JMoonr/LATR .
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 21:08:42 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Aug 2023 13:31:54 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Luo",
"Yueru",
""
],
[
"Zheng",
"Chaoda",
""
],
[
"Yan",
"Xu",
""
],
[
"Kun",
"Tang",
""
],
[
"Zheng",
"Chao",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Li",
"Zhen",
""
]
] |
new_dataset
| 0.999725 |
2308.04912
|
Wenjie Yang
|
Wenjie Yang, Yiyi Chen, Yan Li, Yanhua Cheng, Xudong Liu, Quan Chen,
Han Li
|
Cross-view Semantic Alignment for Livestreaming Product Recognition
|
Accepted to ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Live commerce is the act of selling products online through live streaming.
The customer's diverse demands for online products introduce more challenges to
Livestreaming Product Recognition. Previous works have primarily focused on
fashion clothing data or utilize single-modal input, which does not reflect the
real-world scenario where multimodal data from various categories are present.
In this paper, we present LPR4M, a large-scale multimodal dataset that covers
34 categories, comprises 3 modalities (image, video, and text), and is 50x
larger than the largest publicly available dataset. LPR4M contains diverse
videos and noise modality pairs while exhibiting a long-tailed distribution,
resembling real-world problems. Moreover, a cRoss-vIew semantiC alignmEnt
(RICE) model is proposed to learn discriminative instance features from the
image and video views of the products. This is achieved through instance-level
contrastive learning and cross-view patch-level feature propagation. A novel
Patch Feature Reconstruction loss is proposed to penalize the semantic
misalignment between cross-view patches. Extensive experiments demonstrate the
effectiveness of RICE and provide insights into the importance of dataset
diversity and expressivity. The dataset and code are available at
https://github.com/adxcreative/RICE
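
The instance-level cross-view contrastive objective can be illustrated with the sketch below, which pulls image and video embeddings of the same product together with a symmetric InfoNCE loss. This is in the spirit of RICE but is an assumed simplification, not the released code; it omits the patch-level propagation and reconstruction loss.

```python
# Sketch of cross-view instance contrastive learning (illustrative only).
import torch
import torch.nn.functional as F

def cross_view_infonce(img_emb, vid_emb, temperature: float = 0.07):
    img = F.normalize(img_emb, dim=-1)
    vid = F.normalize(vid_emb, dim=-1)
    logits = img @ vid.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(img.size(0))         # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cross_view_infonce(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```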
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 12:23:41 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 02:00:16 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Yang",
"Wenjie",
""
],
[
"Chen",
"Yiyi",
""
],
[
"Li",
"Yan",
""
],
[
"Cheng",
"Yanhua",
""
],
[
"Liu",
"Xudong",
""
],
[
"Chen",
"Quan",
""
],
[
"Li",
"Han",
""
]
] |
new_dataset
| 0.999552 |
2308.06201
|
Mohammad Eslami
|
Mohammad Eslami, Tiago Perez and Samuel Pagliarini
|
SALSy: Security-Aware Layout Synthesis
| null | null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integrated Circuits (ICs) are the target of diverse attacks during their
lifetime. Fabrication-time attacks, such as the insertion of Hardware Trojans,
can give an adversary access to privileged data and/or the means to corrupt the
IC's internal computation. Post-fabrication attacks, where the end-user takes a
malicious role, also attempt to obtain privileged information through means
such as fault injection and probing. Taking these threats into account and at
the same time, this paper proposes a methodology for Security-Aware Layout
Synthesis (SALSy), such that ICs can be designed with security in mind in the
same manner as power-performance-area (PPA) metrics are considered today, a
concept known as security closure. Furthermore, the trade-offs between PPA and
security are considered and a chip is fabricated in a 65nm CMOS commercial
technology for validation purposes - a feature not seen in previous research on
security closure. Measurements on the fabricated ICs indicate that SALSy
promotes a modest increase in power in order to achieve significantly improved
security metrics.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 15:52:28 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 14:15:02 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Eslami",
"Mohammad",
""
],
[
"Perez",
"Tiago",
""
],
[
"Pagliarini",
"Samuel",
""
]
] |
new_dataset
| 0.999615 |
2308.07616
|
Jia-Rui Lin
|
Can Jiang, Xiong Liang, Yu-Cheng Zhou, Yong Tian, Shengli Xu, Jia-Rui
Lin, Zhiliang Ma, Shiji Yang, Hao Zhou
|
A Multilayer Perceptron-based Fast Sunlight Assessment for the
Conceptual Design of Residential Neighborhoods under Chinese Policy
| null |
Building and Environment, 2023
|
10.1016/j.buildenv.2023.110739
| null |
cs.LG cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In Chinese building codes, it is required that residential buildings receive
a minimum number of hours of natural, direct sunlight on a specified winter
day, which represents the worst sunlight condition in a year. This requirement
is a prerequisite for obtaining a building permit during the conceptual design
of a residential project. Thus, officially sanctioned software is usually used
to assess the sunlight performance of buildings. These software programs
predict sunlight hours based on repeated shading calculations, which is
time-consuming. This paper proposed a multilayer perceptron-based method, a
one-stage prediction approach, which outputs a shading time interval caused by
the inputted cuboid-form building. The sunlight hours of a site can be obtained
by calculating the union of the sunlight time intervals (complement of shading
time interval) of all the buildings. Three numerical experiments, i.e.,
horizontal level and slope analysis, and simulation-based optimization are
carried out; the results show that the method reduces the computation time to
1/84~1/50 with 96.5%~98% accuracies. A residential neighborhood layout planning
plug-in for Rhino 7/Grasshopper is also developed based on the proposed model.
This paper indicates that deep learning techniques can be adopted to accelerate
sunlight hour simulations at the conceptual design phase.
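
The interval bookkeeping implied by the abstract can be sketched as below: each building contributes a predicted shading interval, and the site's sunlight hours are the analysis window minus the union of shading intervals. The helper is a hypothetical illustration, not the paper's code or the MLP itself.

```python
# Sketch: sunlight hours = analysis window minus the union of per-building
# shading intervals (all times in decimal hours).
def sunlight_hours(window, shading_intervals):
    """window: (start_h, end_h); shading_intervals: list of (start_h, end_h)."""
    # Clip shading intervals to the window and merge overlaps.
    clipped = sorted((max(s, window[0]), min(e, window[1]))
                     for s, e in shading_intervals
                     if e > window[0] and s < window[1])
    shaded, cur = 0.0, None
    for s, e in clipped:
        if cur is None or s > cur[1]:
            if cur is not None:
                shaded += cur[1] - cur[0]
            cur = [s, e]
        else:
            cur[1] = max(cur[1], e)
    if cur is not None:
        shaded += cur[1] - cur[0]
    return (window[1] - window[0]) - shaded

# Winter-day window 8:00-16:00; two buildings shade 9-11h and 10.5-12h.
print(sunlight_hours((8.0, 16.0), [(9.0, 11.0), (10.5, 12.0)]))  # 5.0
```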
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 07:53:18 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Jiang",
"Can",
""
],
[
"Liang",
"Xiong",
""
],
[
"Zhou",
"Yu-Cheng",
""
],
[
"Tian",
"Yong",
""
],
[
"Xu",
"Shengli",
""
],
[
"Lin",
"Jia-Rui",
""
],
[
"Ma",
"Zhiliang",
""
],
[
"Yang",
"Shiji",
""
],
[
"Zhou",
"Hao",
""
]
] |
new_dataset
| 0.991802 |
2308.09300
|
Heng Wang
|
Heng Wang, Jianbo Ma, Santiago Pascual, Richard Cartwright, Weidong
Cai
|
V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by
Connecting Foundation Models
|
13 pages, 10 figures. Demo page: https://v2a-mapper.github.io/
| null | null | null |
cs.CV cs.AI cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building artificial intelligence (AI) systems on top of a set of foundation
models (FMs) is becoming a new paradigm in AI research. Their representative
and generative abilities learnt from vast amounts of data can be easily adapted
and transferred to a wide range of downstream tasks without extra training from
scratch. However, leveraging FMs in cross-modal generation remains
under-researched when audio modality is involved. On the other hand,
automatically generating semantically-relevant sound from visual input is an
important problem in cross-modal generation studies. To solve this
vision-to-audio (V2A) generation problem, existing methods tend to design and
build complex systems from scratch using modestly sized datasets. In this
paper, we propose a lightweight solution to this problem by leveraging
foundation models, specifically CLIP, CLAP, and AudioLDM. We first investigate
the domain gap between the latent space of the visual CLIP and the auditory
CLAP models. Then we propose a simple yet effective mapper mechanism
(V2A-Mapper) to bridge the domain gap by translating the visual input between
CLIP and CLAP spaces. Conditioned on the translated CLAP embedding, pretrained
audio generative FM AudioLDM is adopted to produce high-fidelity and
visually-aligned sound. Compared to previous approaches, our method only
requires a quick training of the V2A-Mapper. We further analyze and conduct
extensive experiments on the choice of the V2A-Mapper and show that a
generative mapper is better at fidelity and variability (FD) while a regression
mapper is slightly better at relevance (CS). Both objective and subjective
evaluation on two V2A datasets demonstrate the superiority of our proposed
method compared to current state-of-the-art approaches - trained with 86% fewer
parameters but achieving 53% and 19% improvement in FD and CS, respectively.
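
A regression-style mapper of the kind compared in the abstract can be sketched as a small MLP that translates a frozen CLIP image embedding into the CLAP audio-embedding space. The embedding sizes, architecture, and cosine loss below are assumptions for illustration, not the released V2A-Mapper.

```python
# Sketch of a regression mapper from CLIP image space to CLAP audio space
# (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

mapper = nn.Sequential(                  # CLIP visual embedding -> CLAP space
    nn.Linear(512, 1024), nn.GELU(), nn.Linear(1024, 512)
)
opt = torch.optim.AdamW(mapper.parameters(), lr=1e-4)

clip_emb = torch.randn(16, 512)          # placeholder for frozen CLIP features
clap_emb = torch.randn(16, 512)          # placeholder for frozen CLAP features

opt.zero_grad()
pred = mapper(clip_emb)
loss = 1.0 - F.cosine_similarity(pred, clap_emb, dim=-1).mean()
loss.backward()
opt.step()
print(loss.item())
```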
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 04:49:38 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 07:51:00 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wang",
"Heng",
""
],
[
"Ma",
"Jianbo",
""
],
[
"Pascual",
"Santiago",
""
],
[
"Cartwright",
"Richard",
""
],
[
"Cai",
"Weidong",
""
]
] |
new_dataset
| 0.992995 |