| column | type |
|---|---|
| id | string (9–10 chars) |
| submitter | string (2–52 chars), nullable |
| authors | string (4–6.51k chars) |
| title | string (4–246 chars) |
| comments | string (1–523 chars), nullable |
| journal-ref | string (4–345 chars), nullable |
| doi | string (11–120 chars), nullable |
| report-no | string (2–243 chars), nullable |
| categories | string (5–98 chars) |
| license | string (9 distinct values) |
| abstract | string (33–3.33k chars) |
| versions | list |
| update_date | timestamp[s] |
| authors_parsed | list |
| prediction | string (1 distinct value) |
| probability | float64 (0.95–1) |
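The columns above can be handled as plain record dicts; a minimal sketch of filtering on the `prediction` and `probability` columns (the record below is hand-abridged for illustration, and any loading code is assumed rather than taken from this dump):

```python
# One abridged record matching the schema above; field names follow the column list.
record = {
    "id": "2307.05043",
    "submitter": "EPTCS",
    "title": "Epistemic Syllogistic: First Steps",
    "categories": "cs.AI cs.LO cs.MA",
    "prediction": "new_dataset",
    "probability": 0.996679,
}

def primary_category(rec):
    """Categories are space-separated; the first is conventionally the primary one."""
    return rec["categories"].split()[0]

def confident(rec, threshold=0.95):
    """The probability column ranges over [0.95, 1]; filter on it."""
    return rec["prediction"] == "new_dataset" and rec["probability"] >= threshold

print(primary_category(record))  # cs.AI
print(confident(record))         # True
```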
**2307.05043**

- submitter: EPTCS
- authors: Yipu Li (Peking University), Yanjing Wang (Peking University)
- title: Epistemic Syllogistic: First Steps
- comments: In Proceedings TARK 2023, arXiv:2307.04005
- journal-ref: EPTCS 379, 2023, pp. 392-406
- doi: 10.4204/EPTCS.379.31
- report-no: null
- categories: cs.AI cs.LO cs.MA
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- abstract: Aristotle's discussions on modal syllogistic have often been viewed as error-prone and have garnered significant attention in the literature due to historical and philosophical interests. However, from a contemporary standpoint, they also introduced natural fragments of first-order modal logic, warranting a comprehensive technical analysis. In this paper, drawing inspiration from the natural logic program, we propose and examine several variants of modal syllogistic within the epistemic context, thereby coining the term Epistemic Syllogistic. Specifically, we concentrate on the de re interpretation of epistemic syllogisms containing non-trivial yet natural expressions such as "all things known to be A are also known to be not B." We explore the epistemic apodeictic syllogistic and its extensions, which accommodate more complex terms. Our main contributions include several axiomatizations of these logics, with completeness proofs that may be of independent interest.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 06:50:49 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Li", "Yipu", "", "Peking University"], ["Wang", "Yanjing", "", "Peking University"]]
- prediction: new_dataset
- probability: 0.996679
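Each record's `authors_parsed` field stores one list per author. Judging from the data, the slots appear to be [last name, first name, suffix, affiliation], with the affiliation slot absent in most records; this layout is inferred, not documented in the dump. A small sketch rebuilding display names:

```python
# Entries from the record above; other records in this dump carry
# only three slots (no affiliation), so the helper tolerates both.
authors_parsed = [
    ["Li", "Yipu", "", "Peking University"],
    ["Wang", "Yanjing", "", "Peking University"],
]

def display_name(entry):
    """Rebuild 'First Last Suffix (Affiliation)' from one parsed entry."""
    last, first, suffix = entry[:3]
    affiliation = entry[3] if len(entry) > 3 else ""
    name = " ".join(part for part in (first, last, suffix) if part)
    return f"{name} ({affiliation})" if affiliation else name

names = [display_name(a) for a in authors_parsed]
print(", ".join(names))  # Yipu Li (Peking University), Yanjing Wang (Peking University)
```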
**2307.05065**

- submitter: EPTCS
- authors: Saira Khan (University of California, Irvine)
- title: Metatickles and Death in Damascus
- comments: In Proceedings TARK 2023, arXiv:2307.04005
- journal-ref: EPTCS 379, 2023, pp. 359-378
- doi: 10.4204/EPTCS.379.29
- report-no: null
- categories: cs.MA cs.LO
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: The prescriptions of our two most prominent strands of decision theory, evidential and causal, differ in a general class of problems known as Newcomb problems. In these, evidential decision theory prescribes choosing a dominated act. Attempts have been made at reconciling the two theories by relying on additional requirements such as ratification (Jeffrey 1983) or "tickles" (Eells 1982). It has been argued that such attempts have failed (Lewis 1981a; Skyrms 1982). More recently, Huttegger (forthcoming) has developed a version of deliberative decision theory that reconciles the prescriptions of the evidentialist and causalist. In this paper, I extend this framework to problems characterised by decision instability, and show that it cannot deliver a resolute answer under a plausible specification of the tickle. I prove that there exists a robust method of determining whether the specification of the tickle matters for all two-state, two-act problems whose payoff tables exhibit some basic mathematical relationships. One upshot is that we have a principled way of knowing ex-ante whether a reconciliation of evidential and causal decision theory is plausible for a wide range of decision problems under this framework. Another upshot is that the tickle approach needs further work to achieve full reconciliation.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 07:12:30 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Khan", "Saira", "", "University of California, Irvine"]]
- prediction: new_dataset
- probability: 0.995874
**2307.05083**

- submitter: Arnab Bhattacharya
- authors: Pramit Bhattacharyya, Joydeep Mondal, Subhadip Maji, Arnab Bhattacharya
- title: Vacaspati: A Diverse Corpus of Bangla Literature
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CL
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: Bangla (or Bengali) is the fifth most spoken language globally; yet, state-of-the-art NLP in Bangla lags behind even for simple tasks such as lemmatization, POS tagging, etc. This is partly due to the lack of a varied, high-quality corpus. To address this need, we build Vacaspati, a diverse corpus of Bangla literature. The literary works are collected from various websites; only those works that are publicly available without copyright violations or restrictions are collected. We believe that published literature captures the features of a language much better than newspapers, blogs or social media posts, which tend to follow only a certain literary pattern and, therefore, miss out on language variety. Our corpus Vacaspati is varied along multiple dimensions, including type of composition, topic, author, time, space, etc. It contains more than 11 million sentences and 115 million words. We also built a word embedding model, Vac-FT, using FastText on Vacaspati, and trained an Electra model, Vac-BERT, on the corpus. Vac-BERT has far fewer parameters and requires only a fraction of the resources of other state-of-the-art transformer models, yet performs better or comparably on various downstream tasks. On multiple downstream tasks, Vac-FT outperforms other FastText-based models. We also demonstrate the efficacy of Vacaspati as a corpus by showing that similar models built from other corpora are not as effective. The models are available at https://bangla.iitk.ac.in/.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 07:32:12 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Bhattacharyya", "Pramit", ""], ["Mondal", "Joydeep", ""], ["Maji", "Subhadip", ""], ["Bhattacharya", "Arnab", ""]]
- prediction: new_dataset
- probability: 0.999501
**2307.05095**

- submitter: Kun Li
- authors: Kun Li and Fan Zhang and Wei Guo
- title: ATWM: Defense against adversarial malware based on adversarial training
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CR cs.AI
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- abstract: Deep learning has made great achievements in the field of image processing. To defend against malware attacks, researchers have proposed many Windows malware detection models based on deep learning. However, deep learning models are vulnerable to adversarial-example attacks: an attacker can generate adversarial malware that retains the same malicious function, attacking the detection model and evading detection. Many adversarial defense studies have been proposed, but existing studies are based on image samples and cannot be directly applied to malware samples. Therefore, this paper proposes ATWM, an adversarial malware defense method based on adversarial training. The method uses preprocessing to defend against simple adversarial examples, reducing the difficulty of adversarial training, and improves the adversarial defense capability of the model through adversarial training. We experimented with three attack methods on two datasets, and the results show that the method can improve the adversarial defense capability of the model without reducing its accuracy.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 08:07:10 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Li", "Kun", ""], ["Zhang", "Fan", ""], ["Guo", "Wei", ""]]
- prediction: new_dataset
- probability: 0.996613
**2307.05096**

- submitter: Konstantina S. Nikita
- authors: Konstantia Zarkogianni, Edmund Dervakos, George Filandrianos, Theofanis Ganitidis, Vasiliki Gkatzou, Aikaterini Sakagianni, Raghu Raghavendra, C.L. Max Nikias, Giorgos Stamou, and Konstantina S. Nikita
- title: The smarty4covid dataset and knowledge base: a framework enabling interpretable analysis of audio signals
- comments: Submitted for publication in Nature Scientific Data
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SD eess.AS
- license: http://creativecommons.org/licenses/by-nc-sa/4.0/
- abstract: Harnessing the power of Artificial Intelligence (AI) and m-health towards detecting new bio-markers indicative of the onset and progress of respiratory abnormalities/conditions has attracted great scientific and research interest, especially during the COVID-19 pandemic. The smarty4covid dataset contains audio signals of cough (4,676), regular breathing (4,665), deep breathing (4,695) and voice (4,291) as recorded by means of mobile devices following a crowd-sourcing approach. Other self-reported information is also included (e.g. COVID-19 virus tests), thus providing a comprehensive dataset for the development of COVID-19 risk detection models. The smarty4covid dataset is released in the form of a Web Ontology Language (OWL) knowledge base enabling data consolidation from other relevant datasets, complex queries and reasoning. It has been utilized towards the development of models able to: (i) extract clinically informative respiratory indicators from regular breathing records, and (ii) identify cough, breath and voice segments in crowd-sourced audio recordings. A new framework utilizing the smarty4covid OWL knowledge base towards generating counterfactual explanations in opaque AI-based COVID-19 risk detection models is proposed and validated.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 08:10:58 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Zarkogianni", "Konstantia", ""], ["Dervakos", "Edmund", ""], ["Filandrianos", "George", ""], ["Ganitidis", "Theofanis", ""], ["Gkatzou", "Vasiliki", ""], ["Sakagianni", "Aikaterini", ""], ["Raghavendra", "Raghu", ""], ["Nikias", "C. L. Max", ""], ["Stamou", "Giorgos", ""], ["Nikita", "Konstantina S.", ""]]
- prediction: new_dataset
- probability: 0.999776
**2307.05102**

- submitter: Sebastian Falkensteiner
- authors: Sebastian Falkensteiner and Rafael Sendra
- title: Rational Solutions of Parametric First-Order Algebraic Differential Equations
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SC
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: In this paper we give a procedure for finding rational solutions of a given first-order ODE with functional and constant coefficients which occur in a rational way. We derive an associated system with the same solvability, and necessary and sufficient conditions for the existence of rational solutions are given. In the case where all parametric coefficients are constant, we give an algorithm to compute the rational solutions. In the case where one functional coefficient appears, we algorithmically find rational general solutions which rationally depend on the appearing transcendental constant. In the other cases, the presented procedure is not completely algorithmic.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 08:24:25 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Falkensteiner", "Sebastian", ""], ["Sendra", "Rafael", ""]]
- prediction: new_dataset
- probability: 0.992047
**2307.05147**

- submitter: Marius Smytzek
- authors: Marius Smytzek and Martin Eberlein and Batuhan Serce and Lars Grunske and Andreas Zeller
- title: Tests4Py: A Benchmark for System Testing
- comments: 5 pages, 4 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SE
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- abstract: Benchmarks are among the main drivers of progress in software engineering research, especially in software testing and debugging. However, current benchmarks in this field could be better suited for specific research tasks, as they rely on weak system oracles like crash detection, come with few unit tests only, need more elaborative research, or cannot verify the outcome of system tests. Our Tests4Py benchmark addresses these issues. It is derived from the popular BugsInPy benchmark, including 30 bugs from 5 real-world Python applications. Each subject in Tests4Py comes with an oracle to verify the functional correctness of system inputs. Besides, it enables the generation of system tests and unit tests, allowing for qualitative studies by investigating essential aspects of test sets and extensive evaluations. These opportunities make Tests4Py a next-generation benchmark for research in test generation, debugging, and automatic program repair.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 10:04:52 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Smytzek", "Marius", ""], ["Eberlein", "Martin", ""], ["Serce", "Batuhan", ""], ["Grunske", "Lars", ""], ["Zeller", "Andreas", ""]]
- prediction: new_dataset
- probability: 0.999552
**2307.05167**

- submitter: Geoffrey Goodell
- authors: Ryan Bowler, Chris Speed, Geoffrey Goodell, Joe Revans
- title: A Non-Custodial Wallet for CBDC: Design Challenges and Opportunities
- comments: 25 pages, 12 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CY
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- abstract: Central Bank Digital Currency (CBDC) is a novel form of money that could be issued and regulated by central banks, offering benefits such as programmability, security, and privacy. However, the design of a CBDC system presents numerous technical and social challenges. This paper presents the design and prototype of a non-custodial wallet, a device that enables users to store and spend CBDC in various contexts. To address the challenges of designing a CBDC system, we conducted a series of workshops with internal and external stakeholders, using methods such as storytelling, metaphors, and provotypes to communicate CBDC concepts, elicit user feedback and critique, and incorporate normative values into the technical design. We derived basic guidelines for designing CBDC systems that balance technical and social aspects, and reflect user needs and values. Our paper contributes to the CBDC discourse by demonstrating a practical example of how CBDC could be used in everyday life and by highlighting the importance of a user-centred approach.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 10:53:45 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Bowler", "Ryan", ""], ["Speed", "Chris", ""], ["Goodell", "Geoffrey", ""], ["Revans", "Joe", ""]]
- prediction: new_dataset
- probability: 0.999866
**2307.05174**

- submitter: Che Zhang
- authors: Che Zhang and Ping'an Liu and Zhenyang Xiao and Haojun Fei
- title: Mao-Zedong At SemEval-2023 Task 4: Label Representation Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CL
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: The study of human values is essential in both practical and theoretical domains. With the development of computational linguistics, the creation of large-scale datasets has made it possible to automatically recognize human values accurately. SemEval 2023 Task 4\cite{kiesel:2023} provides a set of arguments and 20 types of human values that are implicitly expressed in each argument. In this paper, we present our team's solution. We use the Roberta\cite{liu_roberta_2019} model to obtain the word vector encoding of the document and propose a multi-head attention mechanism to establish connections between specific labels and semantic components. Furthermore, we use a contrastive learning-enhanced K-nearest neighbor mechanism\cite{su_contrastive_2022} to leverage existing instance information for prediction. Our approach achieved an F1 score of 0.533 on the test set and ranked fourth on the leaderboard.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 11:12:06 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Zhang", "Che", ""], ["Liu", "Ping'an", ""], ["Xiao", "Zhenyang", ""], ["Fei", "Haojun", ""]]
- prediction: new_dataset
- probability: 0.991746
**2307.05260**

- submitter: Ashutosh Modi
- authors: Abhinav Joshi and Akshat Sharma and Sai Kiran Tanikella and Ashutosh Modi
- title: U-CREAT: Unsupervised Case Retrieval using Events extrAcTion
- comments: Accepted at ACL 2023, 15 pages (12 main + 3 Appendix)
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.IR cs.AI cs.CL cs.LG
- license: http://creativecommons.org/licenses/by-nc-nd/4.0/
- abstract: The task of Prior Case Retrieval (PCR) in the legal domain is about automatically citing relevant (based on facts and precedence) prior legal cases in a given query case. To further promote research in PCR, in this paper, we propose a new large benchmark (in English) for the PCR task: the IL-PCR (Indian Legal Prior Case Retrieval) corpus. Given the complex nature of case relevance and the length of legal documents, BM25 remains a strong baseline for ranking the cited prior documents. In this work, we explore the role of events in legal case retrieval and propose an unsupervised retrieval pipeline, U-CREAT (Unsupervised Case Retrieval using Events Extraction). We find that the proposed unsupervised retrieval method significantly increases performance compared to BM25 and makes retrieval faster by a considerable margin, making it applicable to real-time case retrieval systems. Our proposed system is generic: we show that it generalizes across two different legal systems (Indian and Canadian), and it shows state-of-the-art performance on the benchmarks for both legal systems (the IL-PCR and COLIEE corpora).
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 13:51:12 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Joshi", "Abhinav", ""], ["Sharma", "Akshat", ""], ["Tanikella", "Sai Kiran", ""], ["Modi", "Ashutosh", ""]]
- prediction: new_dataset
- probability: 0.996781
**2307.05275**

- submitter: Juan Carlos Ruiz-Garcia
- authors: Juan Carlos Ruiz-Garcia, Ruben Tolosana, Ruben Vera-Rodriguez, Carlos Moro
- title: CareFall: Automatic Fall Detection through Wearable Devices and AI Methods
- comments: 3 pages, 1 figure, 2 tables
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LG eess.SP
- license: http://creativecommons.org/licenses/by-nc-nd/4.0/
- abstract: The aging population has led to a growing number of falls in our society, affecting public health worldwide. This paper presents CareFall, an automatic Fall Detection System (FDS) based on wearable devices and Artificial Intelligence (AI) methods. CareFall considers the accelerometer and gyroscope time signals extracted from a smartwatch. Two different approaches are used for feature extraction and classification: i) threshold-based, and ii) machine learning-based. Experimental results on two public databases show that the machine learning-based approach, which combines accelerometer and gyroscope information, outperforms the threshold-based approach in terms of accuracy, sensitivity, and specificity. This research contributes to the design of smart and user-friendly solutions to mitigate the negative consequences of falls among older people.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 14:08:51 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Ruiz-Garcia", "Juan Carlos", ""], ["Tolosana", "Ruben", ""], ["Vera-Rodriguez", "Ruben", ""], ["Moro", "Carlos", ""]]
- prediction: new_dataset
- probability: 0.994527
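The abstract above contrasts threshold-based and machine-learning-based fall detection but does not specify CareFall's thresholds. A common minimal formulation of the threshold-based approach flags a fall when the acceleration magnitude crosses an impact threshold; the sketch below uses made-up sample data and an illustrative threshold, not the paper's:

```python
import math

def magnitude(sample):
    """Euclidean norm of one 3-axis accelerometer sample (in g)."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples, impact_g=2.5):
    """Flag a fall if any sample's magnitude exceeds the impact threshold.
    Illustrative only; CareFall's actual thresholds are not given here."""
    return any(magnitude(s) > impact_g for s in samples)

walking = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1)]   # near 1 g throughout
fall    = [(0.1, 0.0, 0.2), (1.8, 2.1, 2.4)]   # impact spike
print(detect_fall(walking))  # False
print(detect_fall(fall))     # True
```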
**2307.05328**

- submitter: Pedro Sarmento
- authors: Jackson Loth, Pedro Sarmento, CJ Carr, Zack Zukowski and Mathieu Barthet
- title: ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production
- comments: Pre-print accepted for publication at CMMR2023
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SD cs.AI eess.AS
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: Recent work in the field of symbolic music generation has shown value in using a tokenization based on the GuitarPro format, a symbolic representation supporting guitar expressive attributes, as an input and output representation. We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a custom dataset of 173 progressive metal songs, for the purposes of creating compositions from that genre through a human-AI partnership. Our model is able to generate multiple guitar, bass guitar, drums, piano and orchestral parts. We examine the validity of the generated music using a mixed methods approach by combining quantitative analyses following a computational musicology paradigm and qualitative analyses following a practice-based research paradigm. Finally, we demonstrate the value of the model by using it as a tool to create a progressive metal song, fully produced and mixed by a human metal producer based on AI-generated music.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 15:19:47 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Loth", "Jackson", ""], ["Sarmento", "Pedro", ""], ["Carr", "CJ", ""], ["Zukowski", "Zack", ""], ["Barthet", "Mathieu", ""]]
- prediction: new_dataset
- probability: 0.999613
**2307.05354**

- submitter: Liu Chang
- authors: Dongbo Wang, Chang Liu, Zhixiao Zhao, Si Shen, Liu Liu, Bin Li, Haotian Hu, Mengcheng Wu, Litao Lin, Xue Zhao, Xiyu Wang
- title: GujiBERT and GujiGPT: Construction of Intelligent Information Processing Foundation Language Models for Ancient Texts
- comments: 22 pages, 0 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CL
- license: http://creativecommons.org/licenses/by-nc-sa/4.0/
- abstract: In the context of the rapid development of large language models, we have meticulously trained and introduced the GujiBERT and GujiGPT language models, which are foundational models specifically designed for intelligent information processing of ancient texts. These models have been trained on an extensive dataset that encompasses both simplified and traditional Chinese characters, allowing them to effectively handle various natural language processing tasks related to ancient books, including but not limited to automatic sentence segmentation, punctuation, word segmentation, part-of-speech tagging, entity recognition, and automatic translation. Notably, these models have exhibited exceptional performance across a range of validation tasks using publicly available datasets. Our research findings highlight the efficacy of employing self-supervised methods to further train the models using classical text corpora, thus enhancing their capability to tackle downstream tasks. Moreover, it is worth emphasizing that the choice of font, the scale of the corpus, and the initial model selection all exert significant influence over the ultimate experimental outcomes. To cater to the diverse text processing preferences of researchers in digital humanities and linguistics, we have developed three distinct categories comprising a total of nine model variations. We believe that by sharing these foundational language models specialized in the domain of ancient texts, we can facilitate the intelligent processing and scholarly exploration of ancient literary works and, consequently, contribute to the global dissemination of China's rich and esteemed traditional culture in this new era.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 15:44:01 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Wang", "Dongbo", ""], ["Liu", "Chang", ""], ["Zhao", "Zhixiao", ""], ["Shen", "Si", ""], ["Liu", "Liu", ""], ["Li", "Bin", ""], ["Hu", "Haotian", ""], ["Wu", "Mengcheng", ""], ["Lin", "Litao", ""], ["Zhao", "Xue", ""], ["Wang", "Xiyu", ""]]
- prediction: new_dataset
- probability: 0.999549
**2307.05356**

- submitter: Angie Boggust
- authors: Benny J. Tang, Angie Boggust and Arvind Satyanarayan
- title: VisText: A Benchmark for Semantically Rich Chart Captioning
- comments: Published at ACL 2023, 29 pages, 10 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV cs.HC cs.LG
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- abstract: Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities. However, current approaches for automatically generating such captions struggle to articulate the perceptual or cognitive features that are the hallmark of charts (e.g., complex trends and patterns). In response, we introduce VisText: a dataset of 12,441 pairs of charts and captions that describe the charts' construction, report key statistics, and identify perceptual and cognitive phenomena. In VisText, a chart is available as three representations: a rasterized image, a backing data table, and a scene graph -- a hierarchical representation of a chart's visual elements akin to a web page's Document Object Model (DOM). To evaluate the impact of VisText, we fine-tune state-of-the-art language models on our chart captioning task and apply prefix-tuning to produce captions that vary the semantic content they convey. Our models generate coherent, semantically rich captions and perform on par with state-of-the-art chart captioning models across machine translation and text generation metrics. Through qualitative analysis, we identify six broad categories of errors that our models make that can inform future work.
- versions: [{"version": "v1", "created": "Wed, 28 Jun 2023 15:16:24 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Tang", "Benny J.", ""], ["Boggust", "Angie", ""], ["Satyanarayan", "Arvind", ""]]
- prediction: new_dataset
- probability: 0.999848
**2307.05372**

- submitter: Lubnaa Abdur Rahman
- authors: Lubnaa Abdur Rahman, Ioannis Papathanail, Lorenzo Brigato, Elias K. Spanakis, Stavroula Mougiakakou
- title: Food Recognition and Nutritional Apps
- comments: This book chapter: Food Recognition and Nutritional Apps is set to appear in the book: "Diabetes Digital Health, Telehealth, and Artificial Intelligence"
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV cs.CY
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- abstract: Food recognition and nutritional apps are trending technologies that may revolutionise the way people with diabetes manage their diet. Such apps can monitor food intake as a digital diary and even employ artificial intelligence to assess the diet automatically. Although these apps offer a promising solution for managing diabetes, they are rarely used by patients. This chapter aims to provide an in-depth assessment of the current status of apps for food recognition and nutrition, to identify factors that may inhibit or facilitate their use, while it is accompanied by an outline of relevant research and development.
- versions: [{"version": "v1", "created": "Tue, 20 Jun 2023 13:23:59 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Rahman", "Lubnaa Abdur", ""], ["Papathanail", "Ioannis", ""], ["Brigato", "Lorenzo", ""], ["Spanakis", "Elias K.", ""], ["Mougiakakou", "Stavroula", ""]]
- prediction: new_dataset
- probability: 0.998498
**2307.05396**

- submitter: Atman Mishra
- authors: Atman Mishra, A. Sharath Ram, Kavyashree C
- title: Handwritten Text Recognition Using Convolutional Neural Network
- comments: 6 pages, 15 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV cs.AI
- license: http://creativecommons.org/publicdomain/zero/1.0/
- abstract: OCR (Optical Character Recognition) is a technology that offers comprehensive alphanumeric recognition of handwritten and printed characters at electronic speed by merely scanning the document. Recently, the understanding of visual data has been termed Intelligent Character Recognition (ICR). ICR is the OCR module that can convert scans of handwritten or printed characters into ASCII text. ASCII data is the standard format for data encoding in electronic communication; ASCII assigns standard numeric values to letters, numerals, symbols, white spaces and other characters. In more technical terms, OCR is the process of using an electronic device to transform 2-dimensional textual information into machine-encoded text. Anything that contains text, whether machine-written or handwritten, can be scanned, either through a scanner or simply by taking a picture of the text, for the recognition system to distinguish the text. The goal of this paper is to show the results of a Convolutional Neural Network model trained on the National Institute of Standards and Technology (NIST) dataset containing over 100,000 images. The network learns from the features extracted from the images and uses them to generate the probability of each class to which the picture belongs. We achieved an accuracy of 90.54% with a loss of 2.53%.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 15:57:15 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Mishra", "Atman", ""], ["Ram", "A. Sharath", ""], ["C", "Kavyashree", ""]]
- prediction: new_dataset
- probability: 0.986057
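The "probability of each class" the abstract mentions is conventionally produced by a softmax output layer. Since the model architecture is not given here, the sketch below shows only that final step, with made-up scores:

```python
import math

def softmax(logits):
    """Convert a classifier's raw scores into per-class probabilities.
    Subtracting the max first keeps exp() numerically stable."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]   # illustrative raw scores for three classes
probs = softmax(scores)
print(probs.index(max(probs)))  # 0 -> predicted class is the highest-scoring one
```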
**2307.05409**

- submitter: Johann Lussange
- authors: Johann Lussange, Mulin Yu, Yuliya Tarabalka, Florent Lafarge
- title: 3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV astro-ph.IM cs.AI
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: Reconstructing urban areas in 3D out of satellite raster images has been a long-standing and challenging goal of both academical and industrial research. The rare methods today achieving this objective at a Level Of Details $2$ rely on procedural approaches based on geometry, and need stereo images and/or LIDAR data as input. We here propose a method for urban 3D reconstruction named KIBS (\textit{Keypoints Inference By Segmentation}), which comprises two novel features: i) a full deep learning approach for the 3D detection of the roof sections, and ii) only one single (non-orthogonal) satellite raster image as model input. This is achieved in two steps: i) by a Mask R-CNN model performing a 2D segmentation of the buildings' roof sections, and after blending these latter segmented pixels within the RGB satellite raster image, ii) by another identical Mask R-CNN model inferring the heights-to-ground of the roof sections' corners via panoptic segmentation, unto full 3D reconstruction of the buildings and city. We demonstrate the potential of the KIBS method by reconstructing different urban areas in a few minutes, with a Jaccard index for the 2D segmentation of individual roof sections of $88.55\%$ and $75.21\%$ on our two data sets resp., and a height's mean error of such correctly segmented pixels for the 3D reconstruction of $1.60$ m and $2.06$ m on our two data sets resp., hence within the LOD2 precision range.
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 16:23:19 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Lussange", "Johann", ""], ["Yu", "Mulin", ""], ["Tarabalka", "Yuliya", ""], ["Lafarge", "Florent", ""]]
- prediction: new_dataset
- probability: 0.952757
**2307.05410**

- submitter: Rodrigo Nogueira
- authors: Thales Sales Almeida, Thiago Laitz, Giovana K. Bonás, Rodrigo Nogueira
- title: BLUEX: A benchmark based on Brazilian Leading Universities Entrance eXams
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CL
- license: http://creativecommons.org/licenses/by/4.0/
- abstract: One common trend in recent studies of language models (LMs) is the use of standardized tests for evaluation. However, despite being the fifth most spoken language worldwide, few such evaluations have been conducted in Portuguese. This is mainly due to the lack of high-quality datasets available to the community for carrying out evaluations in Portuguese. To address this gap, we introduce the Brazilian Leading Universities Entrance eXams (BLUEX), a dataset of entrance exams from the two leading universities in Brazil: UNICAMP and USP. The dataset includes annotated metadata for evaluating the performance of NLP models on a variety of subjects. Furthermore, BLUEX includes a collection of recently administered exams that are unlikely to be included in the training data of many popular LMs as of 2023. The dataset is also annotated to indicate the position of images in each question, providing a valuable resource for advancing the state-of-the-art in multimodal language understanding and reasoning. We describe the creation and characteristics of BLUEX and establish a benchmark through experiments with state-of-the-art LMs, demonstrating its potential for advancing the state-of-the-art in natural language understanding and reasoning in Portuguese. The data and relevant code can be found at https://github.com/Portuguese-Benchmark-Datasets/BLUEX
- versions: [{"version": "v1", "created": "Tue, 11 Jul 2023 16:25:09 GMT"}]
- update_date: 2023-07-12T00:00:00
- authors_parsed: [["Almeida", "Thales Sales", ""], ["Laitz", "Thiago", ""], ["Bonás", "Giovana K.", ""], ["Nogueira", "Rodrigo", ""]]
- prediction: new_dataset
- probability: 0.999849
2307.05414
|
Changshang Xue
|
Changshang Xue
|
Duncode Characters Shorter
| null | null | null | null |
cs.CL cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the employment of various encoders in text
transformation, converting characters into bytes. It discusses local encoders
such as ASCII and GB-2312, which encode specific characters into shorter bytes,
and universal encoders like UTF-8 and UTF-16, which can encode the complete
Unicode set with greater space requirements and are gaining widespread
acceptance. Other encoders, including SCSU, BOCU-1, and binary encoders,
however, lack self-synchronizing capabilities. Duncode is introduced as an
innovative encoding method that aims to encode the entire Unicode character set
with high space efficiency, akin to local encoders. It has the potential to
compress multiple characters of a string into a Duncode unit using fewer bytes.
Despite offering less self-synchronizing identification information, Duncode
surpasses UTF-8 in terms of space efficiency. The application is available at
\url{https://github.com/laohur/duncode}. Additionally, we have developed a
benchmark for evaluating character encoders across different languages. It
encompasses 179 languages and can be accessed at
\url{https://github.com/laohur/wiki2txt}.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 16:30:45 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Xue",
"Changshang",
""
]
] |
new_dataset
| 0.97029 |
2307.05440
|
Ashutosh Modi
|
Abhinav Joshi and Susmit Agrawal and Ashutosh Modi
|
ISLTranslate: Dataset for Translating Indian Sign Language
|
Accepted at ACL 2023 Findings, 8 Pages
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Sign languages are the primary means of communication for many
hard-of-hearing people worldwide. Recently, to bridge the communication gap
between the hard-of-hearing community and the rest of the population, several
sign language translation datasets have been proposed to enable the development
of statistical sign language translation systems. However, there is a dearth of
sign language resources for the Indian sign language. This resource paper
introduces ISLTranslate, a translation dataset for continuous Indian Sign
Language (ISL) consisting of 31k ISL-English sentence/phrase pairs. To the best
of our knowledge, it is the largest translation dataset for continuous Indian
Sign Language. We provide a detailed analysis of the dataset. To validate the
performance of existing end-to-end Sign language to spoken language translation
systems, we benchmark the created dataset with a transformer-based model for
ISL translation.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 17:06:52 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Joshi",
"Abhinav",
""
],
[
"Agrawal",
"Susmit",
""
],
[
"Modi",
"Ashutosh",
""
]
] |
new_dataset
| 0.999866 |
2307.05449
|
Zohreh Aliabadi
|
Zohreh Aliabadi, Cem G\"uneri, Tekg\"ul Kalayc{\i}
|
On the hull and complementarity of one generator quasi-cyclic codes and
four-circulant codes
|
16 pages, 8 tables
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We study one generator quasi-cyclic codes and four-circulant codes, which are
also quasi-cyclic but have two generators. We state the hull dimensions for
both classes of codes in terms of the polynomials in their generating elements.
We prove results such as that the hull dimension of a four-circulant code is
even, and that a one-dimensional hull is not possible for double-circulant
codes, which are special one generator codes, when the alphabet size $q$ is
congruent to 3 mod 4. We also characterize linear complementary pairs among both classes of
codes. Computational results on the code families in consideration are provided
as well.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 17:23:27 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Aliabadi",
"Zohreh",
""
],
[
"Güneri",
"Cem",
""
],
[
"Kalaycı",
"Tekgül",
""
]
] |
new_dataset
| 0.999241 |
2002.05910
|
Andr\'e van Renssen
|
Matias Korman, Andr\'e van Renssen, Marcel Roeloffzen, Frank Staals
|
Kinetic Geodesic Voronoi Diagrams in a Simple Polygon
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the geodesic Voronoi diagram of a set $S$ of $n$ linearly moving
sites inside a static simple polygon $P$ with $m$ vertices. We identify all
events where the structure of the Voronoi diagram changes, bound the number of
such events, and then develop a kinetic data structure (KDS) that maintains the
geodesic Voronoi diagram as the sites move. To this end, we first analyze how
often a single bisector, defined by two sites, or a single Voronoi center,
defined by three sites, can change. For both these structures we prove that the
number of such changes is at most $O(m^3)$, and that this is tight in the worst
case. Moreover, we develop compact, responsive, local, and efficient kinetic
data structures for both structures. Our data structures use linear space and
process a worst-case optimal number of events. Our bisector and Voronoi center
kinetic data structures handle each event in $O(\log^2 m)$ time. Both
structures can be extended to efficiently support updating the movement of the
sites as well. Using these data structures as building blocks we obtain a
compact KDS for maintaining the full geodesic Voronoi diagram.
|
[
{
"version": "v1",
"created": "Fri, 14 Feb 2020 08:16:44 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 23:55:09 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Korman",
"Matias",
""
],
[
"van Renssen",
"André",
""
],
[
"Roeloffzen",
"Marcel",
""
],
[
"Staals",
"Frank",
""
]
] |
new_dataset
| 0.997301 |
2012.04715
|
Curtis Bright
|
Curtis Bright, Kevin K. H. Cheung, Brett Stevens, Ilias Kotsireas,
Vijay Ganesh
|
A SAT-based Resolution of Lam's Problem
|
To appear at the Thirty-Fifth AAAI Conference on Artificial
Intelligence
| null |
10.1609/aaai.v35i5.16483
| null |
cs.DM cs.AI cs.LO cs.SC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1989, computer searches by Lam, Thiel, and Swiercz experimentally resolved
Lam's problem from projective geometry$\unicode{x2014}$the long-standing
problem of determining if a projective plane of order ten exists. Both the
original search and an independent verification in 2011 discovered no such
projective plane. However, these searches were each performed using highly
specialized custom-written code and did not produce nonexistence certificates.
In this paper, we resolve Lam's problem by translating the problem into Boolean
logic and use satisfiability (SAT) solvers to produce nonexistence certificates
that can be verified by a third party. Our work uncovered consistency issues in
both previous searches$\unicode{x2014}$highlighting the difficulty of relying
on special-purpose search code for nonexistence results.
|
[
{
"version": "v1",
"created": "Tue, 8 Dec 2020 20:06:25 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Bright",
"Curtis",
""
],
[
"Cheung",
"Kevin K. H.",
""
],
[
"Stevens",
"Brett",
""
],
[
"Kotsireas",
"Ilias",
""
],
[
"Ganesh",
"Vijay",
""
]
] |
new_dataset
| 0.957935 |
2205.02364
|
Barack Wanjawa Mr.
|
Barack W. Wanjawa (1), Lilian D.A. Wanzare (2), Florence Indede (2),
Owen McOnyango (2), Lawrence Muchemi (1), Edward Ombui (3) ((1) University of
Nairobi Kenya, (2) Maseno University Kenya (3) Africa Nazarene University
Kenya)
|
KenSwQuAD -- A Question Answering Dataset for Swahili Low Resource
Language
|
17 pages, 1 figure, 10 tables
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The need for Question Answering datasets in low resource languages is the
motivation of this research, leading to the development of Kencorpus Swahili
Question Answering Dataset, KenSwQuAD. This dataset is annotated from raw story
texts of the Swahili low resource language, which is predominantly spoken in
Eastern Africa and in other parts of the world. Question Answering (QA)
datasets are important for machine comprehension of natural language for tasks
such as internet search and dialog systems. Machine learning systems need
training data such as the gold standard Question Answering set developed in
this research. The research engaged annotators to formulate QA pairs from
Swahili texts collected by the Kencorpus project, a Kenyan languages corpus.
The project annotated 1,445 texts from the total 2,585 texts with at least 5 QA
pairs each, resulting in a final dataset of 7,526 QA pairs. A quality
assurance set of 12.5% of the annotated texts confirmed that the QA pairs were
all correctly annotated. A proof of concept on applying the set to the QA task
confirmed that the dataset can be usable for such tasks. KenSwQuAD has also
contributed to resourcing of the Swahili language.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 23:53:23 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Dec 2022 10:14:33 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 14:06:02 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Wanjawa",
"Barack W.",
""
],
[
"Wanzare",
"Lilian D. A.",
""
],
[
"Indede",
"Florence",
""
],
[
"McOnyango",
"Owen",
""
],
[
"Muchemi",
"Lawrence",
""
],
[
"Ombui",
"Edward",
""
]
] |
new_dataset
| 0.999776 |
2208.01307
|
Boyuan Zheng
|
Boyuan Zheng, Patrick Xia, Mahsa Yarmohammadi, Benjamin Van Durme
|
Multilingual Coreference Resolution in Multiparty Dialogue
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing multiparty dialogue datasets for entity coreference resolution are
nascent, and many challenges are still unaddressed. We create a large-scale
dataset, Multilingual Multiparty Coref (MMC), for this task based on TV
transcripts. Due to the availability of gold-quality subtitles in multiple
languages, we propose reusing the annotations to create silver coreference
resolution data in other languages (Chinese and Farsi) via annotation
projection. On the gold (English) data, off-the-shelf models perform relatively
poorly on MMC, suggesting that MMC has broader coverage of multiparty
coreference than prior datasets. On the silver data, we find success both using
it for data augmentation and training from scratch, which effectively simulates
the zero-shot cross-lingual setting.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 08:27:00 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 02:06:43 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Zheng",
"Boyuan",
""
],
[
"Xia",
"Patrick",
""
],
[
"Yarmohammadi",
"Mahsa",
""
],
[
"Van Durme",
"Benjamin",
""
]
] |
new_dataset
| 0.998902 |
2208.07180
|
Jana Hofmann
|
Norine Coenen, Bernd Finkbeiner, Jana Hofmann, Julia Tillman
|
Smart Contract Synthesis Modulo Hyperproperties
|
published at 36th IEEE Computer Security Foundations Symposium (CSF
2023)
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts are small but highly security-critical programs that
implement wallets, token systems, auctions, crowd funding systems, elections,
and other multi-party transactions on the blockchain. A broad range of methods
has been developed to ensure that a smart contract is functionally correct.
However, smart contracts often additionally need to satisfy certain
hyperproperties, such as symmetry, determinism, or an information flow policy.
In this paper, we show how a synthesis method for smart contracts can ensure
that the contract satisfies its desired hyperproperties. We build on top of a
recently developed synthesis approach from specifications in the temporal logic
TSL. We present HyperTSL, an extension of TSL for the specification of
hyperproperties of infinite-state software. As a preprocessing step, we show
how to detect if a hyperproperty has an equivalent formulation as a (simpler)
trace property. Finally, we describe how to refine a synthesized contract to
adhere to its HyperTSL specification.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 13:36:32 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 15:56:57 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Coenen",
"Norine",
""
],
[
"Finkbeiner",
"Bernd",
""
],
[
"Hofmann",
"Jana",
""
],
[
"Tillman",
"Julia",
""
]
] |
new_dataset
| 0.995773 |
2208.12306
|
Qingyun Wang
|
Qingyun Wang, Manling Li, Hou Pong Chan, Lifu Huang, Julia
Hockenmaier, Girish Chowdhary, Heng Ji
|
Multimedia Generative Script Learning for Task Planning
|
21 pages, Accepted by Findings of the Association for Computational
Linguistics: ACL 2023, Code and Resources at
https://github.com/EagleW/Multimedia-Generative-Script-Learning
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Goal-oriented generative script learning aims to generate subsequent steps to
reach a particular goal, which is an essential task to assist robots or humans
in performing stereotypical activities. An important aspect of this process is
the ability to capture historical states visually, which provides detailed
information that is not covered by text and will guide subsequent steps.
Therefore, we propose a new task, Multimedia Generative Script Learning, to
generate subsequent steps by tracking historical states in both text and vision
modalities, as well as presenting the first benchmark containing 5,652 tasks
and 79,089 multimedia steps. This task is challenging in three aspects: the
multimedia challenge of capturing the visual states in images, the induction
challenge of performing unseen tasks, and the diversity challenge of covering
different information in individual steps. We propose to encode visual state
changes through a selective multimedia encoder to address the multimedia
challenge, transfer knowledge from previously observed tasks using a
retrieval-augmented decoder to overcome the induction challenge, and further
present distinct information at each step by optimizing a diversity-oriented
contrastive learning objective. We define metrics to evaluate both generation
and inductive quality. Experiment results demonstrate that our approach
significantly outperforms strong baselines.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 19:04:28 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 04:57:22 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Jul 2023 16:51:34 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Wang",
"Qingyun",
""
],
[
"Li",
"Manling",
""
],
[
"Chan",
"Hou Pong",
""
],
[
"Huang",
"Lifu",
""
],
[
"Hockenmaier",
"Julia",
""
],
[
"Chowdhary",
"Girish",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.998864 |
2209.13513
|
Alex Campbell
|
Alexander Campbell, Antonio Giuliano Zippo, Luca Passamonti, Nicola
Toschi, Pietro Lio
|
DynDepNet: Learning Time-Varying Dependency Structures from fMRI Data
via Dynamic Graph Structure Learning
|
19 pages, 5 figures, 9 tables, ICML Workshop
| null | null | null |
cs.LG stat.AP stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Graph neural networks (GNNs) have demonstrated success in learning
representations of brain graphs derived from functional magnetic resonance
imaging (fMRI) data. However, existing GNN methods assume brain graphs are
static over time and the graph adjacency matrix is known prior to model
training. These assumptions contradict evidence that brain graphs are
time-varying with a connectivity structure that depends on the choice of
functional connectivity measure. Incorrectly representing fMRI data with noisy
brain graphs can adversely affect GNN performance. To address this, we propose
DynDepNet, a novel method for learning the optimal time-varying dependency
structure of fMRI data induced by downstream prediction tasks. Experiments on
real-world fMRI datasets, for the task of sex classification, demonstrate that
DynDepNet achieves state-of-the-art results, outperforming the best baseline in
terms of accuracy by approximately 8 and 6 percentage points, respectively.
Furthermore, analysis of the learned dynamic graphs reveals prediction-related
brain regions consistent with existing neuroscience literature.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 16:32:11 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 20:37:11 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 11:55:29 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Campbell",
"Alexander",
""
],
[
"Zippo",
"Antonio Giuliano",
""
],
[
"Passamonti",
"Luca",
""
],
[
"Toschi",
"Nicola",
""
],
[
"Lio",
"Pietro",
""
]
] |
new_dataset
| 0.973934 |
2210.05328
|
Sunwoo Kim
|
Sunwoo Kim, Minyoung Choe, Jaemin Yoo, and Kijung Shin
|
Reciprocity in Directed Hypergraphs: Measures, Findings, and Generators
|
Accepted by Data Mining and Knowledge Discovery. This paper is an
extended version of the ICDM 2022 paper with the same title. It consists of
38 pages and includes 8 figures
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Group interactions are prevalent in a variety of areas. Many of them,
including email exchanges, chemical reactions, and bitcoin transactions, are
directional, and thus they are naturally modeled as directed hypergraphs, where
each hyperarc consists of the set of source nodes and the set of destination
nodes. For directed graphs, which are a special case of directed hypergraphs,
reciprocity has played a key role as a fundamental graph statistic in revealing
organizing principles of graphs and in solving graph learning tasks. For
general directed hypergraphs, however, even no systematic measure of
reciprocity has been developed. In this work, we investigate the reciprocity of
11 real-world hypergraphs. To this end, we first introduce eight axioms that
any reasonable measure of reciprocity should satisfy. Second, we propose
HyperRec, a family of principled measures of hypergraph reciprocity that
satisfies all the axioms. Third, we develop Ferret, a fast and exact algorithm
for computing the measure, whose search space is up to 10^{147}x smaller than
that of naive computation. Fourth, using them, we examine 11 real-world
hypergraphs and discover patterns that distinguish them from random
hypergraphs. Lastly, we propose ReDi, an intuitive generative model for
directed hypergraphs exhibiting the patterns.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 10:38:19 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 02:11:36 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 02:18:20 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Kim",
"Sunwoo",
""
],
[
"Choe",
"Minyoung",
""
],
[
"Yoo",
"Jaemin",
""
],
[
"Shin",
"Kijung",
""
]
] |
new_dataset
| 0.997182 |
2210.13016
|
Dan Ofer
|
Dan Ofer, Dafna Shahaf
|
Cards Against AI: Predicting Humor in a Fill-in-the-blank Party Game
|
Conditionally accepted in EMNLP 2022 short findings. 5 pages
|
https://aclanthology.org/2022.findings-emnlp.394
| null |
Dan Ofer and Dafna Shahaf. 2022. Cards Against AI: Predicting Humor
in a Fill-in-the-blank Party Game. In Findings of the Association for
Computational Linguistics: EMNLP 2022, pages 5397-5403. Association for
Computational Linguistics
|
cs.LG cs.AI cs.CL cs.CY cs.GL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Humor is an inherently social phenomenon, with humorous utterances shaped by
what is socially and culturally accepted. Understanding humor is an important
NLP challenge, with many applications to human-computer interactions. In this
work we explore humor in the context of Cards Against Humanity -- a party game
where players complete fill-in-the-blank statements using cards that can be
offensive or politically incorrect. We introduce a novel dataset of 300,000
online games of Cards Against Humanity, including 785K unique jokes, analyze it
and provide insights. We trained machine learning models to predict the winning
joke per game, achieving performance twice as good (20\%) as random, even
without any user information. On the more difficult task of judging novel
cards, we see the models' ability to generalize is moderate. Interestingly, we
find that our models are primarily focused on the punchline card, with the context
having little impact. Analyzing feature importance, we observe that short,
crude, juvenile punchlines tend to win.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 08:05:21 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Ofer",
"Dan",
""
],
[
"Shahaf",
"Dafna",
""
]
] |
new_dataset
| 0.999663 |
2210.15078
|
Zhifeng Tang
|
Zhifeng Tang, Nan Yang, Parastoo Sadeghi, and Xiangyun Zhou
|
Age of Information in Downlink Systems: Broadcast or Unicast
Transmission?
| null | null |
10.1109/JSAC.2023.3280986
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analytically decide whether the broadcast transmission scheme or the
unicast transmission scheme achieves the optimal age of information (AoI)
performance of a multiuser system where a base station (BS) generates and
transmits status updates to multiple user equipments (UEs). In the broadcast
transmission scheme, the status update for all UEs is jointly encoded into a
packet for transmission, while in the unicast transmission scheme, the status
update for each UE is encoded individually and transmitted by following the
round robin policy. For both transmission schemes, we examine three packet
management strategies, namely the non-preemption strategy, the preemption in
buffer strategy, and the preemption in serving strategy. We first derive new
closed-form expressions for the average AoI achieved by two transmission
schemes with three packet management strategies. Based on them, we compare the
AoI performance of two transmission schemes in two systems, namely, the remote
control system and the dynamic system. Aided by simulation results, we verify
our analysis and investigate the impact of system parameters on the average
AoI. For example, the unicast transmission scheme is more appropriate for the
system with a large number of UEs. Otherwise, the broadcast transmission scheme is
more appropriate.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 23:24:44 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Oct 2022 00:12:37 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Jul 2023 23:33:38 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Tang",
"Zhifeng",
""
],
[
"Yang",
"Nan",
""
],
[
"Sadeghi",
"Parastoo",
""
],
[
"Zhou",
"Xiangyun",
""
]
] |
new_dataset
| 0.994625 |
2212.07903
|
Juntao Jiang
|
Juntao Jiang, Yuan Niu, Yi Tao
|
The First IEEE UV2022 Mathematical Modelling Competition: Backgrounds
and Problems
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Economic growth, people's health, and urban development face challenges in
the post-epidemic era. How to promote high-quality and sustainable urban
development, improve citizens' sense of happiness, and solve problems in city
management has become a heated and crucial topic. Mathematical modeling is a
research method that uses mathematical symbols to express practical problems,
establish mathematical models, and then propose solutions. The 1$^{st}$ IEEE
UV2022 Mathematical Modelling Competition is a satellite activity of the
6$^{th}$ IEEE International Conference on Universal Village, which expects
participants to use mathematical modeling methods for practical problems and
provide guidelines for sustainable social progress. This short paper introduces
the background of the competition and publishes the problems to be solved.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 15:37:17 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jul 2023 20:11:26 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Jiang",
"Juntao",
""
],
[
"Niu",
"Yuan",
""
],
[
"Tao",
"Yi",
""
]
] |
new_dataset
| 0.990498 |
2301.13359
|
Guoyang Xie
|
Guoyang Xie, Jinbao Wang, Jiaqi Liu, Jiayi Lyu, Yong Liu, Chengjie
Wang, Feng Zheng, Yaochu Jin
|
IM-IAD: Industrial Image Anomaly Detection Benchmark in Manufacturing
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Image anomaly detection (IAD) is an emerging and vital computer vision task
in industrial manufacturing (IM). Recently many advanced algorithms have been
published, but their performance deviates greatly. We realize that the lack of
actual IM settings most probably hinders the development and usage of these
methods in real-world applications. As far as we know, IAD methods are not
evaluated systematically. As a result, this makes it difficult for researchers
to analyze them because they are designed for different or special cases. To
solve this problem, we first propose a uniform IM setting to assess how well
these algorithms perform, which includes several aspects, i.e., various levels
of supervision (unsupervised vs. semi-supervised), few-shot learning, continual
learning, noisy labels, memory usage, and inference speed. Moreover, we
skillfully build a comprehensive image anomaly detection benchmark (IM-IAD)
that includes 16 algorithms on 7 mainstream datasets with uniform settings. Our
extensive experiments (17,017 in total) provide in-depth insights for IAD
algorithm redesign or selection under the IM setting. Next, the proposed
benchmark IM-IAD gives challenges as well as directions for the future. To
foster reproducibility and accessibility, the source code of IM-IAD is uploaded
on the website, https://github.com/M-3LAB/IM-IAD.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 01:24:45 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 02:21:41 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Xie",
"Guoyang",
""
],
[
"Wang",
"Jinbao",
""
],
[
"Liu",
"Jiaqi",
""
],
[
"Lyu",
"Jiayi",
""
],
[
"Liu",
"Yong",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Zheng",
"Feng",
""
],
[
"Jin",
"Yaochu",
""
]
] |
new_dataset
| 0.986413 |
2302.06149
|
Binqian Jiang
|
Binqian Jiang, Shaojie Shen
|
Contour Context: Abstract Structural Distribution for 3D LiDAR Loop
Detection and Metric Pose Estimation
|
7 pages, 7 figures, accepted by ICRA 2023
| null |
10.1109/ICRA48891.2023.10160337
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes \textit{Contour Context}, a simple, effective, and
efficient topological loop closure detection pipeline with accurate 3-DoF
metric pose estimation, targeting the urban autonomous driving scenario. We
interpret the Cartesian birds' eye view (BEV) image projected from 3D LiDAR
points as layered distribution of structures. To recover elevation information
from BEVs, we slice them at different heights, and connected pixels at each
level will form contours. Each contour is parameterized by abstract
information, e.g., pixel count, center position, covariance, and mean height.
The similarity of two BEVs is calculated in sequential discrete and continuous
steps. The first step considers the geometric consensus of graph-like
constellations formed by contours in particular localities. The second step
models the majority of contours as a 2.5D Gaussian mixture model, which is used
to calculate correlation and optimize relative transform in continuous space. A
retrieval key is designed to accelerate the search of a database indexed by
layered KD-trees. We validate the efficacy of our method by comparing it with
recent works on public datasets.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 07:18:24 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Jiang",
"Binqian",
""
],
[
"Shen",
"Shaojie",
""
]
] |
new_dataset
| 0.995631 |
2302.06169
|
Ruhao Wan
|
Ruhao Wan, Shixin Zhu
|
New Quantum MDS codes from Hermitian self-orthogonal generalized
Reed-Solomon codes
|
19 pages, 3 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum maximum-distance-separable (MDS for short) codes are an important
class of quantum codes. In this paper, by using Hermitian self-orthogonal
generalized Reed-Solomon (GRS for short) codes, we construct five new classes
of $q$-ary quantum MDS codes with minimum distance larger than $q/2+1$.
Furthermore, the parameters of our quantum MDS codes cannot be obtained from the
previous constructions.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 08:07:16 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 13:21:42 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Feb 2023 13:28:51 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Mar 2023 13:23:01 GMT"
},
{
"version": "v5",
"created": "Fri, 21 Apr 2023 04:58:56 GMT"
},
{
"version": "v6",
"created": "Sun, 9 Jul 2023 09:09:33 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Wan",
"Ruhao",
""
],
[
"Zhu",
"Shixin",
""
]
] |
new_dataset
| 0.999393 |
2304.07013
|
Xiaodan Hu
|
Xiaodan Hu, Yan Zhang, Naoya Isoyama, Hideaki Uchiyama, Nobuchika
Sakata, Kiyoshi Kiyokawa
|
Smart Dimming Sunglasses for Photophobia Using Spatial Light Modulator
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a smart dimming sunglasses system designed for photophobia
sufferers, particularly those highly sensitive to light intensity. The system
incorporates a spatial light modulator (SLM) to filter light based on
camera-detected scenes, controlling pixel transmittance via a modulation
function for automated non-linear field of view dimming, thus offering flexible
light modulation to meet the visual needs of photophobic users. However, a
conventional occlusion mask on the SLM, aimed at blocking incoming light,
appears blurred and insufficient due to a misaligned focal plane. Previous
attempts to remedy this with an aperture-based expanded mask led to
over-blocking (occlusion leak), due to an excessively large expansion radius.
Our work, therefore, focuses on developing an optimization model that simulates
a defocused occlusion mask and determines the degraded pixels' effective
contribution by studying pixel transmittance occlusion efficiency. This
optimized mask successfully attenuates bright areas to appropriate brightness
levels without unnecessary attenuation of areas that do not require modulation,
overcoming the limitations of both the unprocessed and aperture-based expanded
masks.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 09:17:27 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 07:40:46 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 13:51:56 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Hu",
"Xiaodan",
""
],
[
"Zhang",
"Yan",
""
],
[
"Isoyama",
"Naoya",
""
],
[
"Uchiyama",
"Hideaki",
""
],
[
"Sakata",
"Nobuchika",
""
],
[
"Kiyokawa",
"Kiyoshi",
""
]
] |
new_dataset
| 0.990355 |
2304.09675
|
Bertrand Teguia Tabuguia
|
Bertrand Teguia Tabuguia
|
Operations for D-Algebraic Functions
|
4.5 pages + 14 references. ISSAC'23 software demonstration. To appear
in ACM communications in Computer Algebra
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A function is differentially algebraic (or simply D-algebraic) if there is a
polynomial relationship between some of its derivatives and the indeterminate
variable. Many functions in the sciences, such as Mathieu functions, the
Weierstrass elliptic functions, and holonomic or D-finite functions are
D-algebraic. These functions form a field, and are closed under composition,
taking functional inverse, and derivation. We present implementation for each
underlying operation. We also give a systematic way for computing an algebraic
differential equation from a linear differential equation with D-finite
function coefficients. Each command is a feature of our Maple package $NLDE$
available at https://mathrepo.mis.mpg.de/OperationsForDAlgebraicFunctions.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 14:06:19 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 13:35:27 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Tabuguia",
"Bertrand Teguia",
""
]
] |
new_dataset
| 0.998144 |
2305.04743
|
Teerapong Panboonyuen
|
Teerapong Panboonyuen, Naphat Nithisopa, Panin Pienroj, Laphonchai
Jirachuphun, Chaiwasut Watthanasirikrit, Naruepon Pornwiriyakul
|
MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car
Damage Instance Segmentation
|
12 pages. arXiv admin note: substantial text overlap with
arXiv:2111.13673 by other authors
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Evaluating car damages from misfortune is critical to the car insurance
industry. However, the accuracy is still insufficient for real-world
applications since the deep learning network is not designed for car damage
images as inputs, and its segmented masks are still very coarse. This paper
presents MARS (Mask Attention Refinement with Sequential quadtree nodes) for
car damage instance segmentation. Our MARS represents self-attention mechanisms
to draw global dependencies between the sequential quadtree nodes layer and
quadtree transformer to recalibrate channel weights and predict highly accurate
instance masks. Our extensive experiments demonstrate that MARS outperforms
three popular state-of-the-art (SOTA) instance segmentation methods, namely
Mask R-CNN [9], PointRend [13], and Mask Transfiner [12], by a large margin of
+1.3 maskAP with the R50-FPN backbone and +2.3 maskAP with the R101-FPN
backbone on the Thai car-damage dataset. Our demos are available at
https://github.com/kaopanboonyuen/MARS.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 02:58:48 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 04:38:25 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Panboonyuen",
"Teerapong",
""
],
[
"Nithisopa",
"Naphat",
""
],
[
"Pienroj",
"Panin",
""
],
[
"Jirachuphun",
"Laphonchai",
""
],
[
"Watthanasirikrit",
"Chaiwasut",
""
],
[
"Pornwiriyakul",
"Naruepon",
""
]
] |
new_dataset
| 0.973747 |
2305.06858
|
Hamidreza Bakhshzad Mahmoodi
|
Hamidreza Bakhshzad Mahmoodi, MohammadJavad Salehi, and Antti Tolli
|
Low-Complexity Multi-Antenna Coded Caching Using Location-Aware
Placement Delivery Arrays
|
13 pages and 8 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
A location-aware multi-antenna coded caching scheme is proposed for
applications with location-dependent data requests, such as wireless immersive
experience, where users are immersed in a three-dimensional virtual world. The
wireless connectivity conditions vary as the users move within the application
area motivating the use of a non-uniform cache memory allocation process to
avoid excessive delivery time for users located in wireless bottleneck areas.
To this end, a location-aware placement and delivery array (LAPDA) is designed
for cache-aided multi-antenna data delivery with a fast converging, iterative
linear beamforming process. The underlying weighted max-min transmit precoder
design enables the proposed scheme to serve users in poor connectivity areas
with smaller amounts of data while simultaneously delivering larger amounts to
other users. Our new scheme is suitable for large networks due to its linear
transceiver structure and it is not constrained by the number of users, cache
size, or the number of antennas at the transmitter, unlike the existing
schemes. Despite non-uniform cache placement, the proposed scheme still
achieves a significant degree of coded caching gain that is additive to the
multiplexing gain and greatly outperforms the conventional symmetric CC schemes
in terms of both average and 95-percentile delivery time.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 14:53:30 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 12:49:08 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Mahmoodi",
"Hamidreza Bakhshzad",
""
],
[
"Salehi",
"MohammadJavad",
""
],
[
"Tolli",
"Antti",
""
]
] |
new_dataset
| 0.998376 |
2305.18185
|
Lindia Tjuatja
|
Lindia Tjuatja, Emmy Liu, Lori Levin, Graham Neubig
|
Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics
Interface of LMs Through Agentivity
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent advances in large language models have prompted researchers to examine
their abilities across a variety of linguistic tasks, but little has been done
to investigate how models handle the interactions in meaning across words and
larger syntactic forms -- i.e. phenomena at the intersection of syntax and
semantics. We present the semantic notion of agentivity as a case study for
probing such interactions. We created a novel evaluation dataset by utilizing
the unique linguistic properties of a subset of optionally transitive English
verbs. This dataset was used to prompt varying sizes of three model classes to
see if they are sensitive to agentivity at the lexical level, and if they can
appropriately employ these word-level priors given a specific syntactic
context. Overall, GPT-3 text-davinci-003 performs extremely well across all
experiments, outperforming all other models tested by far. In fact, the results
are even better correlated with human judgements than both syntactic and
semantic corpus statistics. This suggests that LMs may potentially serve as
more useful tools for linguistic annotation, theory testing, and discovery than
select corpora for certain tasks. Code is available at
https://github.com/lindiatjuatja/lm_sem
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 16:24:01 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 13:10:40 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Tjuatja",
"Lindia",
""
],
[
"Liu",
"Emmy",
""
],
[
"Levin",
"Lori",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.969515 |
2306.00612
|
Jiakang Yuan
|
Jiakang Yuan, Bo Zhang, Xiangchao Yan, Tao Chen, Botian Shi, Yikang
Li, Yu Qiao
|
AD-PT: Autonomous Driving Pre-Training with Large-scale Point Cloud
Dataset
|
Code is available at: https://github.com/PJLab-ADG/3DTrans
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is a long-term vision for Autonomous Driving (AD) community that the
perception models can learn from a large-scale point cloud dataset, to obtain
unified representations that can achieve promising results on different tasks
or benchmarks. Previous works mainly focus on the self-supervised pre-training
pipeline, meaning that they perform the pre-training and fine-tuning on the
same benchmark, which makes it difficult to attain performance scalability and
cross-dataset applicability for the pre-training checkpoint. In this paper, for
the first time, we are committed to building a large-scale pre-training
point-cloud dataset with diverse data distribution, and meanwhile learning
generalizable representations from such a diverse pre-training dataset. We
formulate the point-cloud pre-training task as a semi-supervised problem, which
leverages the few-shot labeled and massive unlabeled point-cloud data to
generate the unified backbone representations that can be directly applied to
many baseline models and benchmarks, decoupling the AD-related pre-training
process and downstream fine-tuning task. During the period of backbone
pre-training, by enhancing the scene- and instance-level distribution diversity
and exploiting the backbone's ability to learn from unknown instances, we
achieve significant performance gains on a series of downstream perception
benchmarks including Waymo, nuScenes, and KITTI, under different baseline
models like PV-RCNN++, SECOND, CenterPoint.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 12:32:52 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 12:32:23 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Yuan",
"Jiakang",
""
],
[
"Zhang",
"Bo",
""
],
[
"Yan",
"Xiangchao",
""
],
[
"Chen",
"Tao",
""
],
[
"Shi",
"Botian",
""
],
[
"Li",
"Yikang",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.999379 |
2306.03734
|
Thomas Clark
|
Thomas Hikaru Clark, Clara Meister, Tiago Pimentel, Michael Hahn, Ryan
Cotterell, Richard Futrell and Roger Levy
|
A Cross-Linguistic Pressure for Uniform Information Density in Word
Order
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While natural languages differ widely in both canonical word order and word
order flexibility, their word orders still follow shared cross-linguistic
statistical patterns, often attributed to functional pressures. In the effort
to identify these pressures, prior work has compared real and counterfactual
word orders. Yet one functional pressure has been overlooked in such
investigations: the uniform information density (UID) hypothesis, which holds
that information should be spread evenly throughout an utterance. Here, we ask
whether a pressure for UID may have influenced word order patterns
cross-linguistically. To this end, we use computational models to test whether
real orders lead to greater information uniformity than counterfactual orders.
In our empirical study of 10 typologically diverse languages, we find that: (i)
among SVO languages, real word orders consistently have greater uniformity than
reverse word orders, and (ii) only linguistically implausible counterfactual
orders consistently exceed the uniformity of real orders. These findings are
compatible with a pressure for information uniformity in the development and
usage of natural languages.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 14:52:15 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 17:17:39 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Clark",
"Thomas Hikaru",
""
],
[
"Meister",
"Clara",
""
],
[
"Pimentel",
"Tiago",
""
],
[
"Hahn",
"Michael",
""
],
[
"Cotterell",
"Ryan",
""
],
[
"Futrell",
"Richard",
""
],
[
"Levy",
"Roger",
""
]
] |
new_dataset
| 0.995435 |
2306.06284
|
Conghao Shen
|
Conghao Shen, Violet Z. Yao, Yixin Liu
|
Everybody Compose: Deep Beats To Music
|
Accepted MMSys '23
|
Proceedings of the 14th Conference on ACM Multimedia Systems
(2023)
|
10.1145/3587819.3592542
| null |
cs.SD cs.LG cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This project presents a deep learning approach to generate monophonic
melodies based on input beats, allowing even amateurs to create their own music
compositions. Three effective methods - LSTM with Full Attention, LSTM with
Local Attention, and Transformer with Relative Position Representation - are
proposed for this novel task, providing great variation, harmony, and structure
in the generated music. This project allows anyone to compose their own music
by tapping their keyboards or ``recoloring'' beat sequences from existing
works.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 22:24:05 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Shen",
"Conghao",
""
],
[
"Yao",
"Violet Z.",
""
],
[
"Liu",
"Yixin",
""
]
] |
new_dataset
| 0.954943 |
2306.06388
|
Kun Zhou
|
Kun Zhou, Wenbo Li, Nianjuan Jiang, Xiaoguang Han, Jiangbo Lu
|
From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm
|
17 pages, 16 figures. Project Page:
https://redrock303.github.io/nerflix_plus/. arXiv admin note: text overlap
with arXiv:2303.06919
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance fields (NeRF) have shown great success in novel view
synthesis. However, recovering high-quality details from real-world scenes is
still challenging for the existing NeRF-based approaches, due to the potential
imperfect calibration information and scene representation inaccuracy. Even
with high-quality training frames, the synthetic novel views produced by NeRF
models still suffer from notable rendering artifacts, such as noise and blur.
To address this, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm
that learns a degradation-driven inter-viewpoint mixer. Specifically, we design a
NeRF-style degradation modeling approach and construct large-scale training
data, enabling the possibility of effectively removing NeRF-native rendering
artifacts for deep neural networks. Moreover, beyond the degradation removal,
we propose an inter-viewpoint aggregation framework that fuses highly related
high-quality training images, pushing the performance of cutting-edge NeRF
models to entirely new levels and producing highly photo-realistic synthetic
views. Based on this paradigm, we further present NeRFLiX++ with a stronger
two-stage NeRF degradation simulator and a faster inter-viewpoint mixer,
achieving superior performance with significantly improved computational
efficiency. Notably, NeRFLiX++ is capable of restoring photo-realistic
ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
Extensive experiments demonstrate the excellent restoration ability of
NeRFLiX++ on various novel view synthesis benchmarks.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 09:19:19 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 08:13:42 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Zhou",
"Kun",
""
],
[
"Li",
"Wenbo",
""
],
[
"Jiang",
"Nianjuan",
""
],
[
"Han",
"Xiaoguang",
""
],
[
"Lu",
"Jiangbo",
""
]
] |
new_dataset
| 0.999333 |
2306.08861
|
Chen-Chieh Liao
|
Makito Kobayashi, Chen-Chieh Liao, Keito Inoue, Sentaro Yojima,
Masafumi Takahashi
|
Motion Capture Dataset for Practical Use of AI-based Motion Editing and
Stylization
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we proposed a new style-diverse dataset for the domain of
motion style transfer. The motion dataset uses an industrial-standard human
bone structure and thus is industry-ready to be plugged into 3D characters for
many projects. We claim the challenges in motion style transfer and encourage
future work in this domain by releasing the proposed motion dataset both to the
public and the market. We conduct a comprehensive study on motion style
transfer in the experiment using the state-of-the-art method, and the results
show the proposed dataset's validity for the motion style transfer task.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 05:12:54 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 22:01:26 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Kobayashi",
"Makito",
""
],
[
"Liao",
"Chen-Chieh",
""
],
[
"Inoue",
"Keito",
""
],
[
"Yojima",
"Sentaro",
""
],
[
"Takahashi",
"Masafumi",
""
]
] |
new_dataset
| 0.99953 |
2306.09170
|
Xuan-Quy Dao
|
Xuan-Quy Dao and Ngoc-Bich Le and Xuan-Dung Phan and Bac-Bien Ngo
|
Can ChatGPT pass the Vietnamese National High School Graduation
Examination?
|
9 pages, 13 figures, 4 tables
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This research article highlights the potential of AI-powered chatbots in
education and presents the results of using ChatGPT, a large language model, to
complete the Vietnamese National High School Graduation Examination (VNHSGE).
The study dataset included 30 essays in the literature test case and 1,700
multiple-choice questions designed for other subjects. The results showed that
ChatGPT was able to pass the examination with an average score of 6-7,
demonstrating the technology's potential to revolutionize the educational
landscape. The analysis of ChatGPT performance revealed its proficiency in a
range of subjects, including mathematics, English, physics, chemistry, biology,
history, geography, civic education, and literature, which suggests its
potential to provide effective support for learners. However, further research
is needed to assess ChatGPT performance on more complex exam questions and its
potential to support learners in different contexts. As technology continues to
evolve and improve, we can expect to see the use of AI tools like ChatGPT
become increasingly common in educational settings, ultimately enhancing the
educational experience for both students and educators.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 14:47:03 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 09:59:38 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Jul 2023 11:22:20 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Dao",
"Xuan-Quy",
""
],
[
"Le",
"Ngoc-Bich",
""
],
[
"Phan",
"Xuan-Dung",
""
],
[
"Ngo",
"Bac-Bien",
""
]
] |
new_dataset
| 0.99649 |
2306.13374
|
Ranjit Kolkar Mr
|
Ranjit Kolkar and Geetha V
|
Human Activity Behavioural Pattern Recognition in Smarthome with
Long-hour Data Collection
| null | null | null | null |
cs.HC cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The research on human activity recognition has provided novel solutions to
many applications like healthcare, sports, and user profiling. Considering the
complex nature of human activities, it is still challenging even after
effective and efficient sensors are available. The existing works on human
activity recognition using smartphone sensors focus on recognizing basic human
activities like sitting, sleeping, standing, stair up and down and running.
However, more than these basic activities is needed to analyze human
behavioural patterns. The proposed framework recognizes basic human activities
using deep learning models. Also, ambient sensors like PIR, pressure sensors,
and smartphone-based sensors like accelerometers and gyroscopes are combined to
make it hybrid-sensor-based human activity recognition. The hybrid approach
helped derive more activities than the basic ones, which also helped derive
human activity patterns or user profiling. User profiling provides sufficient
information to identify daily living activity patterns and predict whether any
anomaly exists. The framework provides the base for applications such as
elderly monitoring when they are alone at home. The GRU model is observed to
achieve 95\% accuracy in recognizing the basic activities. Finally, human activity
patterns over time are recognized based on the duration and frequency of the
activities. It is observed that human activity pattern, like, morning walking
duration, varies depending on the day of the week.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 08:53:41 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 11:01:01 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Kolkar",
"Ranjit",
""
],
[
"V",
"Geetha",
""
]
] |
new_dataset
| 0.98764 |
2307.01117
|
Patrick Diehl
|
Patrick Diehl and Steven R. Brandt and Max Morris and Nikunj Gupta and
Hartmut Kaiser
|
Benchmarking the Parallel 1D Heat Equation Solver in Chapel, Charm++,
C++, HPX, Go, Julia, Python, Rust, Swift, and Java
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many scientific high performance codes that simulate e.g. black holes,
coastal waves, climate and weather, etc. rely on block-structured meshes and
use finite differencing methods to iteratively solve the appropriate systems of
differential equations. In this paper we investigate implementations of an
extremely simple simulation of this type using various programming systems and
languages. We focus on a shared memory, parallelized algorithm that simulates a
1D heat diffusion using asynchronous queues for the ghost zone exchange. We
discuss the advantages of the various platforms and explore the performance of
this model code on different computing architectures: Intel, AMD, and ARM64FX.
As a result, Python was the slowest of the set we compared. Java, Go, Swift,
and Julia were the intermediate performers. The higher performing platforms
were C++, Rust, Chapel, Charm++, and HPX.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 14:00:23 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 02:07:34 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Jul 2023 17:06:26 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Diehl",
"Patrick",
""
],
[
"Brandt",
"Steven R.",
""
],
[
"Morris",
"Max",
""
],
[
"Gupta",
"Nikunj",
""
],
[
"Kaiser",
"Hartmut",
""
]
] |
new_dataset
| 0.988866 |
2307.01691
|
Shidong Pan
|
Shidong Pan, Zhen Tao, Thong Hoang, Dawen Zhang, Zhenchang Xing, Xiwei
Xu, Mark Staples, and David Lo
|
SeePrivacy: Automated Contextual Privacy Policy Generation for Mobile
Applications
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Privacy policies have become the most critical approach to safeguarding
individuals' privacy and digital security. To enhance their presentation and
readability, researchers propose the concept of contextual privacy policies
(CPPs), aiming to fragment policies into shorter snippets and display them only
in corresponding contexts. In this paper, we propose a novel multi-modal
framework, namely SeePrivacy, designed to automatically generate contextual
privacy policies for mobile apps. Our method synergistically combines mobile
GUI understanding and privacy policy document analysis, yielding an impressive
overall 83.6% coverage rate for privacy-related context detection and an
accuracy of 0.92 in extracting corresponding policy segments. Remarkably, 96%
of the retrieved policy segments can be correctly matched with their contexts.
The user study shows SeePrivacy demonstrates excellent functionality and
usability (4.5/5). Specifically, participants exhibit a greater willingness to
read CPPs (4.1/5) compared to original privacy policies (2/5). Our solution
effectively assists users in comprehending privacy notices, and this research
establishes a solid foundation for further advancements and exploration.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 12:52:45 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 08:39:54 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 15:54:08 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Pan",
"Shidong",
""
],
[
"Tao",
"Zhen",
""
],
[
"Hoang",
"Thong",
""
],
[
"Zhang",
"Dawen",
""
],
[
"Xing",
"Zhenchang",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Staples",
"Mark",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.954391 |
2307.02595
|
Harbir Antil
|
Harbir Antil and David Sayre
|
GNEP Based Dynamic Segmentation and Motion Estimation for Neuromorphic
Imaging
| null | null | null | null |
cs.CV cs.GT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores the application of event-based cameras in the domains of
image segmentation and motion estimation. These cameras offer a groundbreaking
technology by capturing visual information as a continuous stream of
asynchronous events, departing from the conventional frame-based image
acquisition. We introduce a Generalized Nash Equilibrium based framework that
leverages the temporal and spatial information derived from the event stream to
carry out segmentation and velocity estimation. To establish the theoretical
foundations, we derive an existence criterion and propose a multi-level
optimization method for calculating equilibrium. The efficacy of this approach
is shown through a series of experiments.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 18:44:51 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jul 2023 16:54:04 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Antil",
"Harbir",
""
],
[
"Sayre",
"David",
""
]
] |
new_dataset
| 0.98724 |
2307.02654
|
Simon Guist
|
Simon Guist, Jan Schneider, Hao Ma, Vincent Berenz, Julian Martus,
Felix Gr\"uninger, Michael M\"uhlebach, Jonathan Fiene, Bernhard Sch\"olkopf
and Dieter B\"uchler
|
A Robust Open-source Tendon-driven Robot Arm for Learning Control of
Dynamic Motions
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A long-lasting goal of robotics research is to operate robots safely, while
achieving high performance which often involves fast motions. Traditional
motor-driven systems frequently struggle to balance these competing demands.
Addressing this trade-off is crucial for advancing fields such as manufacturing
and healthcare, where seamless collaboration between robots and humans is
essential. We introduce a four degree-of-freedom (DoF) tendon-driven robot arm,
powered by pneumatic artificial muscles (PAMs), to tackle this challenge. Our
new design features low friction, passive compliance, and inherent impact
resilience, enabling rapid, precise, high-force, and safe interactions during
dynamic tasks. In addition to fostering safer human-robot collaboration, the
inherent safety properties are particularly beneficial for reinforcement
learning, where the robot's ability to explore dynamic motions without causing
self-damage is crucial. We validate our robotic arm through various
experiments, including long-term dynamic motions, impact resilience tests, and
assessments of its ease of control. On a challenging dynamic table tennis task,
we further demonstrate our robot's capabilities in rapid and precise movements.
By showcasing our new design's potential, we aim to inspire further research on
robotic systems that balance high performance and safety in diverse tasks. Our
open-source hardware design, software, and a large dataset of diverse robot
motions can be found at https://webdav.tuebingen.mpg.de/pamy2/.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 20:58:33 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 07:40:26 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Guist",
"Simon",
""
],
[
"Schneider",
"Jan",
""
],
[
"Ma",
"Hao",
""
],
[
"Berenz",
"Vincent",
""
],
[
"Martus",
"Julian",
""
],
[
"Grüninger",
"Felix",
""
],
[
"Mühlebach",
"Michael",
""
],
[
"Fiene",
"Jonathan",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Büchler",
"Dieter",
""
]
] |
new_dataset
| 0.995478 |
2307.03039
|
Eric Postma
|
Ludovica Schaerf, Carina Popovici, Eric Postma
|
Art Authentication with Vision Transformers
|
Accepted for publication in Neural Computing and Applications
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, Transformers, initially developed for language, have been
successfully applied to visual tasks. Vision Transformers have been shown to
push the state-of-the-art in a wide range of tasks, including image
classification, object detection, and semantic segmentation. While ample
research has shown promising results in art attribution and art authentication
tasks using Convolutional Neural Networks, this paper examines if the
superiority of Vision Transformers extends to art authentication, improving,
thus, the reliability of computer-based authentication of artworks. Using a
carefully compiled dataset of authentic paintings by Vincent van Gogh and two
contrast datasets, we compare the art authentication performances of Swin
Transformers with those of EfficientNet. Using a standard contrast set
containing imitations and proxies (works by painters with styles closely
related to van Gogh), we find that EfficientNet achieves the best performance
overall. With a contrast set that only consists of imitations, we find the Swin
Transformer to be superior to EfficientNet by achieving an authentication
accuracy of over 85%. These results lead us to conclude that Vision
Transformers represent a strong and promising contender in art authentication,
particularly in enhancing the computer-based ability to detect artistic
imitations.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 15:04:18 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 13:49:24 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Schaerf",
"Ludovica",
""
],
[
"Popovici",
"Carina",
""
],
[
"Postma",
"Eric",
""
]
] |
new_dataset
| 0.999796 |
2307.03073
|
Jishnu Jaykumar P
|
Jishnu Jaykumar P, Kamalesh Palanisamy, Yu-Wei Chao, Xinya Du, Yu
Xiang
|
Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel framework for few-shot learning by leveraging large-scale
vision-language models such as CLIP. Motivated by the unimodal prototypical
networks for few-shot learning, we introduce PROTO-CLIP that utilizes image
prototypes and text prototypes for few-shot learning. Specifically, PROTO-CLIP
adapts the image encoder and text encoder in CLIP in a joint fashion using
few-shot examples. The two encoders are used to compute prototypes of image
classes for classification. During adaptation, we propose aligning the image
and text prototypes of corresponding classes. Such a proposed alignment is
beneficial for few-shot classification due to the contributions from both types
of prototypes. We demonstrate the effectiveness of our method by conducting
experiments on benchmark datasets for few-shot learning as well as in the real
world for robot perception.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 15:41:53 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jul 2023 22:56:09 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"P",
"Jishnu Jaykumar",
""
],
[
"Palanisamy",
"Kamalesh",
""
],
[
"Chao",
"Yu-Wei",
""
],
[
"Du",
"Xinya",
""
],
[
"Xiang",
"Yu",
""
]
] |
new_dataset
| 0.989579 |
2307.03764
|
Ashiqur Rahman KhudaBukhsh
|
Adel Khorramrouz and Sujan Dutta and Ashiqur R. KhudaBukhsh
|
For Women, Life, Freedom: A Participatory AI-Based Social Web Analysis
of a Watershed Moment in Iran's Gender Struggles
|
Accepted at IJCAI 2023 (AI for good track)
| null | null | null |
cs.CY cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a computational analysis of the Persian language
Twitter discourse with the aim to estimate the shift in stance toward gender
equality following the death of Mahsa Amini in police custody. We present an
ensemble active learning pipeline to train a stance classifier. Our novelty
lies in the involvement of Iranian women in an active role as annotators in
building this AI system. Our annotators not only provide labels, but they also
suggest valuable keywords for more meaningful corpus creation as well as
provide short example documents for a guided sampling step. Our analyses
indicate that Mahsa Amini's death triggered polarized Persian language
discourse where both fractions of negative and positive tweets toward gender
equality increased. The increase in positive tweets was slightly greater than
the increase in negative tweets. We also observe that with respect to account
creation time, between the state-aligned Twitter accounts and pro-protest
Twitter accounts, pro-protest accounts are more similar to baseline Persian
Twitter activity.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 19:39:15 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Khorramrouz",
"Adel",
""
],
[
"Dutta",
"Sujan",
""
],
[
"KhudaBukhsh",
"Ashiqur R.",
""
]
] |
new_dataset
| 0.99752 |
2307.03839
|
Jessica Yin
|
Jessica Yin, Paarth Shah, Naveen Kuppuswamy, Andrew Beaulieu, Avinash
Uttamchandani, Alejandro Castro, James Pikul, and Russ Tedrake
|
Proximity and Visuotactile Point Cloud Fusion for Contact Patches in
Extreme Deformation
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Equipping robots with the sense of touch is critical to emulating the
capabilities of humans in real world manipulation tasks. Visuotactile sensors
are a popular tactile sensing strategy due to data output compatible with
computer vision algorithms and accurate, high resolution estimates of local
object geometry. However, these sensors struggle to accommodate high
deformations of the sensing surface during object interactions, hindering more
informative contact with cm-scale objects frequently encountered in the real
world. The soft interfaces of visuotactile sensors are often made of
hyperelastic elastomers, which are difficult to simulate quickly and accurately
when extremely deformed for tactile information. Additionally, many
visuotactile sensors that rely on strict internal light conditions or pattern
tracking will fail if the surface is highly deformed. In this work, we propose
an algorithm that fuses proximity and visuotactile point clouds for contact
patch segmentation that is entirely independent from membrane mechanics. This
algorithm exploits the synchronous, high-res proximity and visuotactile
modalities enabled by an extremely deformable, selectively transmissive soft
membrane, which uses visible light for visuotactile sensing and infrared light
for proximity depth. We present the hardware design, membrane fabrication, and
evaluation of our contact patch algorithm in low (10%), medium (60%), and high
(100%+) membrane strain states. We compare our algorithm against three
baselines: proximity-only, tactile-only, and a membrane mechanics model. Our
proposed algorithm outperforms all baselines with an average RMSE under 2.8mm
of the contact patch geometry across all strain ranges. We demonstrate our
contact patch algorithm in four applications: varied stiffness membranes,
torque and shear-induced wrinkling, closed loop control for whole body
manipulation, and pose estimation.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 21:17:20 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Yin",
"Jessica",
""
],
[
"Shah",
"Paarth",
""
],
[
"Kuppuswamy",
"Naveen",
""
],
[
"Beaulieu",
"Andrew",
""
],
[
"Uttamchandani",
"Avinash",
""
],
[
"Castro",
"Alejandro",
""
],
[
"Pikul",
"James",
""
],
[
"Tedrake",
"Russ",
""
]
] |
new_dataset
| 0.996006 |
2307.03859
|
Rana Jafari
|
Hua Cheng, Rana Jafari, April Russell, Russell Klopfer, Edmond Lu,
Benjamin Striner, Matthew R. Gormley
|
MDACE: MIMIC Documents Annotated with Code Evidence
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a dataset for evidence/rationale extraction on an extreme
multi-label classification task over long medical documents. One such task is
Computer-Assisted Coding (CAC) which has improved significantly in recent
years, thanks to advances in machine learning technologies. Yet simply
predicting a set of final codes for a patient encounter is insufficient as CAC
systems are required to provide supporting textual evidence to justify the
billing codes. A model able to produce accurate and reliable supporting
evidence for each code would be a tremendous benefit. However, a human
annotated code evidence corpus is extremely difficult to create because it
requires specialized knowledge. In this paper, we introduce MDACE, the first
publicly available code evidence dataset, which is built on a subset of the
MIMIC-III clinical records. The dataset -- annotated by professional medical
coders -- consists of 302 Inpatient charts with 3,934 evidence spans and 52
Profee charts with 5,563 evidence spans. We implemented several evidence
extraction methods based on the EffectiveCAN model (Liu et al., 2021) to
establish baseline performance on this dataset. MDACE can be used to evaluate
code evidence extraction methods for CAC systems, as well as the accuracy and
interpretability of deep learning models for multi-label classification. We
believe that the release of MDACE will greatly improve the understanding and
application of deep learning technologies for medical coding and document
classification.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 22:45:59 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Cheng",
"Hua",
""
],
[
"Jafari",
"Rana",
""
],
[
"Russell",
"April",
""
],
[
"Klopfer",
"Russell",
""
],
[
"Lu",
"Edmond",
""
],
[
"Striner",
"Benjamin",
""
],
[
"Gormley",
"Matthew R.",
""
]
] |
new_dataset
| 0.999762 |
2307.03869
|
Aditya Sanghi
|
Aditya Sanghi, Pradeep Kumar Jayaraman, Arianna Rampini, Joseph
Lambourne, Hooman Shayani, Evan Atherton, Saeid Asgari Taghanaki
|
Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Significant progress has recently been made in creative applications of large
pre-trained models for downstream tasks in 3D vision, such as text-to-shape
generation. This motivates our investigation of how these pre-trained models
can be used effectively to generate 3D shapes from sketches, which has largely
remained an open challenge due to the limited sketch-shape paired datasets and
the varying level of abstraction in the sketches. We discover that conditioning
a 3D generative model on the features (obtained from a frozen large pre-trained
vision model) of synthetic renderings during training enables us to effectively
generate 3D shapes from sketches at inference time. This suggests that the
large pre-trained vision model features carry semantic signals that are
resilient to domain shifts, i.e., allowing us to use only RGB renderings, but
generalizing to sketches at inference time. We conduct a comprehensive set of
experiments investigating different design factors and demonstrate the
effectiveness of our straightforward approach for generation of multiple 3D
shapes per each input sketch regardless of their level of abstraction without
requiring any paired datasets during training.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 00:45:01 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Sanghi",
"Aditya",
""
],
[
"Jayaraman",
"Pradeep Kumar",
""
],
[
"Rampini",
"Arianna",
""
],
[
"Lambourne",
"Joseph",
""
],
[
"Shayani",
"Hooman",
""
],
[
"Atherton",
"Evan",
""
],
[
"Taghanaki",
"Saeid Asgari",
""
]
] |
new_dataset
| 0.997462 |
2307.03882
|
Kishore Srinivas
|
Kishore Srinivas, Shreya Ganti, Rishi Parikh, Ayah Ahmad, Wisdom
Agboh, Mehmet Dogar, Ken Goldberg
|
The Busboy Problem: Efficient Tableware Decluttering Using Consolidation
and Multi-Object Grasps
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present the "Busboy Problem": automating an efficient decluttering of
cups, bowls, and silverware from a planar surface. As grasping and transporting
individual items is highly inefficient, we propose policies to generate grasps
for multiple items. We introduce the metric of Objects per Trip (OpT) carried
by the robot to the collection bin to analyze the improvement seen as a result
of our policies. In physical experiments with singulated items, we find that
consolidation and multi-object grasps resulted in an 1.8x improvement in OpT,
compared to methods without multi-object grasps. See
https://sites.google.com/berkeley.edu/busboyproblem for code and supplemental
materials.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 02:48:35 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Srinivas",
"Kishore",
""
],
[
"Ganti",
"Shreya",
""
],
[
"Parikh",
"Rishi",
""
],
[
"Ahmad",
"Ayah",
""
],
[
"Agboh",
"Wisdom",
""
],
[
"Dogar",
"Mehmet",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.978877 |
2307.03890
|
Yin Jie
|
Jie Yin, Hao Yin, Conghui Liang and Zhengyou Zhang
|
Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases
for Ground Robots
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
High-quality datasets can speed up breakthroughs and reveal potential
developing directions in SLAM research. To support the research on corner cases
of visual SLAM systems, this paper presents Ground-Challenge: a challenging
dataset comprising 36 trajectories with diverse corner cases such as aggressive
motion, severe occlusion, changing illumination, few textures, pure rotation,
motion blur, wheel suspension, etc. The dataset was collected by a ground robot
with multiple sensors including an RGB-D camera, an inertial measurement unit
(IMU), a wheel odometer and a 3D LiDAR. All of these sensors were
well-calibrated and synchronized, and their data were recorded simultaneously.
To evaluate the performance of cutting-edge SLAM systems, we tested them on our
dataset and demonstrated that these systems are prone to drift and fail on
specific sequences. We will release the full dataset and relevant materials
upon paper publication to benefit the research community. For more information,
visit our project website at https://github.com/sjtuyinjie/Ground-Challenge.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 03:46:28 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Yin",
"Jie",
""
],
[
"Yin",
"Hao",
""
],
[
"Liang",
"Conghui",
""
],
[
"Zhang",
"Zhengyou",
""
]
] |
new_dataset
| 0.999735 |
2307.03906
|
Ashutosh Modi
|
Abhinav Joshi and Areeb Ahmad and Umang Pandey and Ashutosh Modi
|
ScriptWorld: Text Based Environment For Learning Procedural Knowledge
|
Accepted at IJCAI 2023, 26 Pages (7 main + 19 for appendix)
| null | null | null |
cs.CL cs.AI cs.LG cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Text-based games provide a framework for developing natural language
understanding and commonsense knowledge about the world in reinforcement
learning based agents. Existing text-based environments often rely on fictional
situations and characters to create a gaming framework and are far from
real-world scenarios. In this paper, we introduce ScriptWorld: a text-based
environment for teaching agents about real-world daily chores and hence
imparting commonsense knowledge. To the best of our knowledge, it is the first
interactive text-based gaming framework that consists of daily real-world human
activities designed using scripts dataset. We provide gaming environments for
10 daily activities and perform a detailed analysis of the proposed
environment. We develop RL-based baseline models/agents to play the games in
Scriptworld. To understand the role of language models in such environments, we
leverage features obtained from pre-trained language models in the RL agents.
Our experiments show that prior knowledge obtained from a pre-trained language
model helps to solve real-world text-based gaming environments. We release the
environment via Github: https://github.com/Exploration-Lab/ScriptWorld
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 05:43:03 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Joshi",
"Abhinav",
""
],
[
"Ahmad",
"Areeb",
""
],
[
"Pandey",
"Umang",
""
],
[
"Modi",
"Ashutosh",
""
]
] |
new_dataset
| 0.999708 |
2307.03948
|
George Tom
|
George Tom, Minesh Mathew, Sergi Garcia, Dimosthenis Karatzas and C.V.
Jawahar
|
Reading Between the Lanes: Text VideoQA on the Road
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text and signs around roads provide crucial information for drivers, vital
for safe navigation and situational awareness. Scene text recognition in motion
is a challenging problem, as textual cues typically appear only for a short time
span and early detection at a distance is necessary. Systems that exploit such
information to assist the driver should not only extract and incorporate visual
and textual cues from the video stream but also reason over time. To address
this issue, we introduce RoadTextVQA, a new dataset for the task of video
question answering (VideoQA) in the context of driver assistance. RoadTextVQA
consists of $3,222$ driving videos collected from multiple countries, annotated
with $10,500$ questions, all based on text or road signs present in the driving
videos. We assess the performance of state-of-the-art video question answering
models on our RoadTextVQA dataset, highlighting the significant potential for
improvement in this domain and the usefulness of the dataset in advancing
research on in-vehicle support systems and text-aware multimodal question
answering. The dataset is available at
http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 10:11:29 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Tom",
"George",
""
],
[
"Mathew",
"Minesh",
""
],
[
"Garcia",
"Sergi",
""
],
[
"Karatzas",
"Dimosthenis",
""
],
[
"Jawahar",
"C. V.",
""
]
] |
new_dataset
| 0.999758 |
2307.03981
|
Prasad Naik Ramavath
|
L Bhargava Kumar, Ramavath Prasad Naik, Datta Choudhari, Prabu
Krishnan, Goutham Simha G D, and Jagadeesh V K
|
BER Analysis of Full Duplex Relay assisted BPSK-SIM based VLC System for
Indoor Applications
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper contemplates a relay-assisted visible light communication (VLC)
system, where the light source (Table lamp) acts as a relay node and cooperates
with the main light source. Following the IEEE 802.15.7r1 VLC reference channel
model, we assume that there are two different light sources present in an
office room. The first one is the source terminal present on the ceiling and
another one is the desk lamp that serves as the relay station which works in
full-duplex mode. Because of the loop interference channel, we model the VLC
relay terminal using ray-tracing simulations. We have analyzed the bit error rate
(BER) performance of the relay-assisted VLC system using binary phase shift
keying-subcarrier intensity modulation (BPSK-SIM) technique. The proposed
method outperforms existing phase shift keying (PSK) and square M-quadrature
amplitude modulation (M-QAM) techniques. The proposed VLC system using BPSK-SIM
technique achieves a BER performance of for an SNR of 20 dB. The results of
the proposed full-duplex and half-duplex relayed VLC systems are evaluated using
equal power allocation (EPA) and optimum power allocation (OPA) techniques
over three different modulation schemes, namely 2-PSK, square M-QAM, and
BPSK-SIM.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 14:09:54 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Kumar",
"L Bhargava",
""
],
[
"Naik",
"Ramavath Prasad",
""
],
[
"Choudhari",
"Datta",
""
],
[
"Krishnan",
"Prabu",
""
],
[
"D",
"Goutham Simha G",
""
],
[
"K",
"Jagadeesh V",
""
]
] |
new_dataset
| 0.986616 |
2307.04023
|
Zixuan Chen
|
Zixuan Chen, Zhigao Zhao, Zijian Li, Jiang Shao, Sen Liu, and Yang Xu
|
SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research
|
This paper will be published in IEEE CLUSTER 2023. Preview version
only
| null | null | null |
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network experiments are essential to network-related scientific research
(e.g., congestion control, QoS, network topology design, and traffic
engineering). However, (re)configuring various topologies on a real testbed is
expensive, time-consuming, and error-prone. In this paper, we propose
\emph{Software Defined Topology Testbed (SDT)}, a method for constructing a
user-defined network topology using a few commodity switches. SDT is low-cost,
deployment-friendly, and reconfigurable, which can run multiple sets of
experiments under different topologies by simply using different topology
configuration files at the controller we designed. We implement a prototype of
SDT and conduct numerous experiments. Evaluations show that SDT only introduces
at most 2\% extra overhead compared to full testbeds on multi-hop latency and is far
more efficient than software simulators (reducing the evaluation time by up to
2899x). SDT is more cost-effective and scalable than existing Topology
Projection (TP) solutions. Further experiments show that SDT can support
various network research experiments at a low cost on topics including but not
limited to topology design, congestion control, and traffic engineering.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 18:00:31 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Chen",
"Zixuan",
""
],
[
"Zhao",
"Zhigao",
""
],
[
"Li",
"Zijian",
""
],
[
"Shao",
"Jiang",
""
],
[
"Liu",
"Sen",
""
],
[
"Xu",
"Yang",
""
]
] |
new_dataset
| 0.999323 |
2307.04053
|
Abhay Goyal
|
Tran Hien Van, Abhay Goyal, Muhammad Siddique, Lam Yin Cheung, Nimay
Parekh, Jonathan Y Huang, Keri McCrickerd, Edson C Tandoc Jr., Gerard Chung,
Navin Kumar
|
How is Fatherhood Framed Online in Singapore?
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The proliferation of discussion about fatherhood in Singapore attests to its
significance, indicating the need for an exploration of how fatherhood is
framed, aiding policy-making around fatherhood in Singapore. Sound and holistic
policy around fatherhood in Singapore may reduce stigma and apprehension around
being a parent, critical to improving the nation's flagging birth rate. We
analyzed 15,705 articles and 56,221 posts to study how fatherhood is framed in
Singapore across a range of online platforms (news outlets, parenting forums,
Twitter). We used NLP techniques to understand these differences. While
fatherhood was framed in a range of ways on the Singaporean online environment,
it did not seem that fathers were framed as central to the Singaporean family
unit. A strength of our work is how the different techniques we have applied
validate each other.
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 22:03:00 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Van",
"Tran Hien",
""
],
[
"Goyal",
"Abhay",
""
],
[
"Siddique",
"Muhammad",
""
],
[
"Cheung",
"Lam Yin",
""
],
[
"Parekh",
"Nimay",
""
],
[
"Huang",
"Jonathan Y",
""
],
[
"McCrickerd",
"Keri",
""
],
[
"Tandoc",
"Edson C",
"Jr."
],
[
"Chung",
"Gerard",
""
],
[
"Kumar",
"Navin",
""
]
] |
new_dataset
| 0.998887 |
2307.04066
|
Mingzhen Shao
|
Mingzhen Shao
|
Random Position Adversarial Patch for Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous studies have shown the vulnerability of vision transformers to
adversarial patches, but these studies all rely on a critical assumption: the
attack patches must be perfectly aligned with the patches used for linear
projection in vision transformers. Due to this stringent requirement, deploying
adversarial patches for vision transformers in the physical world becomes
impractical, unlike their effectiveness on CNNs. This paper proposes a novel
method for generating an adversarial patch (G-Patch) that overcomes the
alignment constraint, allowing the patch to launch a targeted attack at any
position within the field of view. Specifically, instead of directly optimizing
the patch using gradients, we employ a GAN-like structure to generate the
adversarial patch. Our experiments show the effectiveness of the adversarial
patch in achieving universal attacks on vision transformers, both in digital
and physical-world scenarios. Additionally, further analysis reveals that the
generated adversarial patch exhibits robustness to brightness restriction,
color transfer, and random noise. Real-world attack experiments validate the
effectiveness of the G-Patch to launch robust attacks even under some very
challenging conditions.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 00:08:34 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Shao",
"Mingzhen",
""
]
] |
new_dataset
| 0.994189 |
2307.04080
|
Song Wang
|
Nima Shiri harzevili, Jiho Shin, Junjie Wang, Song Wang, Nachiappan
Nagappan
|
Automatic Static Bug Detection for Machine Learning Libraries: Are We
There Yet?
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic detection of software bugs is a critical task in software security.
Many static tools that can help detect bugs have been proposed. While these
static bug detectors are mainly evaluated on general software projects, their
practical effectiveness and usefulness for machine learning libraries remain in
question. In this paper, we address this question by analyzing five popular
and widely used static bug detectors, i.e., Flawfinder, RATS, Cppcheck,
Facebook Infer, and Clang static analyzer on a curated dataset of software bugs
gathered from four popular machine learning libraries including Mlpack, MXNet,
PyTorch, and TensorFlow with a total of 410 known bugs. Our research provides a
categorization of these tools' capabilities to better understand the strengths
and weaknesses of the tools for detecting software bugs in machine learning
libraries. Overall, our study shows that static bug detectors find a negligible
amount of all bugs, accounting for 6/410 bugs (0.01%); Flawfinder and RATS are
the most effective static checkers for finding software bugs in machine learning
libraries. Based on our observations, we further identify and discuss
opportunities to make the tools more effective and practical.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 01:38:52 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"harzevili",
"Nima Shiri",
""
],
[
"Shin",
"Jiho",
""
],
[
"Wang",
"Junjie",
""
],
[
"Wang",
"Song",
""
],
[
"Nagappan",
"Nachiappan",
""
]
] |
new_dataset
| 0.999389 |
2307.04091
|
Jun Cen
|
Jun Cen, Shiwei Zhang, Yixuan Pei, Kun Li, Hang Zheng, Maochun Luo,
Yingya Zhang, Qifeng Chen
|
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge
Distillation for LIDAR Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
2D RGB images and 3D LIDAR point clouds provide complementary knowledge for
the perception system of autonomous vehicles. Several 2D and 3D fusion methods
have been explored for the LIDAR semantic segmentation task, but they suffer
from different problems. 2D-to-3D fusion methods require strictly paired data
during inference, which may not be available in real-world scenarios, while
3D-to-2D fusion methods cannot explicitly make full use of the 2D information.
Therefore, we propose a Bidirectional Fusion Network with Cross-Modality
Knowledge Distillation (CMDFusion) in this work. Our method has two
contributions. First, our bidirectional fusion scheme explicitly and implicitly
enhances the 3D feature via 2D-to-3D fusion and 3D-to-2D fusion, respectively,
which surpasses either single fusion scheme alone. Second, we distill
the 2D knowledge from a 2D network (Camera branch) to a 3D network (2D
knowledge branch) so that the 3D network can generate 2D information even for
those points not in the FOV (field of view) of the camera. In this way, RGB
images are not required during inference anymore since the 2D knowledge branch
provides 2D information according to the 3D LIDAR input. We show that our
CMDFusion achieves the best performance among all fusion-based methods on
SemanticKITTI and nuScenes datasets. The code will be released at
https://github.com/Jun-CEN/CMDFusion.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 04:24:12 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Cen",
"Jun",
""
],
[
"Zhang",
"Shiwei",
""
],
[
"Pei",
"Yixuan",
""
],
[
"Li",
"Kun",
""
],
[
"Zheng",
"Hang",
""
],
[
"Luo",
"Maochun",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.994535 |
2307.04103
|
Nian Cai
|
Zhijian Liu, Nian Cai, Wensheng Ouyang, Chengbin Zhang, Nili Tian, Han
Wang
|
CA-CentripetalNet: A novel anchor-free deep learning framework for
hardhat wearing detection
|
It has been accepted by the journal Signal, Image and Video
Processing; this is the complete version. It is noted that it has been deleted
for future publishing
|
Signal, Image and Video Processing,2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic hardhat wearing detection can strengthen the safety management in
construction sites, which is still challenging due to complicated video
surveillance scenes. To deal with the poor generalization of previous deep
learning based methods, a novel anchor-free deep learning framework called
CA-CentripetalNet is proposed for hardhat wearing detection. Two novel schemes
are proposed to improve the feature extraction and utilization ability of
CA-CentripetalNet, which are vertical-horizontal corner pooling and bounding
constrained center attention. The former is designed to realize the
comprehensive utilization of marginal features and internal features. The
latter is designed to enforce the backbone to pay attention to internal
features, which is only used during the training rather than during the
detection. Experimental results indicate that the CA-CentripetalNet achieves
better performance (86.63% mAP, mean Average Precision) with less memory
consumption at a reasonable speed than the existing deep learning based
methods, especially in case of small-scale hardhats and non-worn-hardhats.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 05:40:05 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Liu",
"Zhijian",
""
],
[
"Cai",
"Nian",
""
],
[
"Ouyang",
"Wensheng",
""
],
[
"Zhang",
"Chengbin",
""
],
[
"Tian",
"Nili",
""
],
[
"Wang",
"Han",
""
]
] |
new_dataset
| 0.994248 |
2307.04118
|
Jia Yu
|
Qingran Wang, Jia Yu, Mengjun Ding, and Weiqiang Sun
|
Twotier -- A Layered Analysis of Backbone Members in a Moderate Sized
Community Sports Organization
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backbone members are recognized as essential parts of an organization, yet
their role and mechanisms of functioning in networks are not fully understood.
In this paper, we propose a new framework called Twotier to analyze the
evolution of community sports organizations (CSOs) and the role of backbone
members. Tier-one establishes a dynamic user interaction network based on
grouping relationships, and weighted k-shell decomposition is used to select
backbone members. We perform community detection and capture the evolution of
two separate sub-networks: one formed by backbone members and the other formed
by other members. In Tier-two, the sub-networks are abstracted, revealing a
core-periphery structure in the organization where backbone members serve as
bridges connecting all parts of the network. Our findings suggest that relying
on backbone members can keep newcomers actively involved in rewarding
activities, while non-rewarding activities solidify relations between backbone
members.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 08:14:38 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Wang",
"Qingran",
""
],
[
"Yu",
"Jia",
""
],
[
"Ding",
"Mengjun",
""
],
[
"Sun",
"Weiqiang",
""
]
] |
new_dataset
| 0.950938 |
2307.04128
|
Richard Jiang
|
Ao Shen, Yijie Zhu and Richard Jiang
|
Marine Debris Detection in Satellite Surveillance using Attention
Mechanisms
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Marine debris is an important issue for environmental protection, but current
methods for locating marine debris remain limited. In order to achieve higher
efficiency and wider applicability in the localization of Marine debris, this
study tries to combine the instance segmentation of YOLOv7 with different
attention mechanisms and explores the best model. By utilizing a labelled
dataset consisting of satellite images containing ocean debris, we examined
three attentional models including lightweight coordinate attention, CBAM
(combining spatial and channel focus), and bottleneck transformer (based on
self-attention). Box detection assessment revealed that CBAM achieved the best
outcome (F1 score of 77%) compared to coordinate attention (F1 score of 71%)
and YOLOv7/bottleneck transformer (both F1 scores around 66%). Mask evaluation
showed CBAM again leading with an F1 score of 73%, whereas coordinate attention
and YOLOv7 had comparable performances (around F1 score of 68%/69%) and
bottleneck transformer lagged behind at F1 score of 56%. These findings suggest
that CBAM offers optimal suitability for detecting marine debris. However, it
should be noted that the bottleneck transformer detected some areas missed by
manual annotation and displayed better mask precision for larger debris pieces,
signifying potentially superior practical performance.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 08:53:45 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Shen",
"Ao",
""
],
[
"Zhu",
"Yijie",
""
],
[
"Jiang",
"Richard",
""
]
] |
new_dataset
| 0.999646 |
2307.04184
|
Ali Shoker
|
Ali Shoker, Vincent Rahli, Jeremie Decouchant, Paulo Esteves-Verissimo
|
Intrusion Resilience Systems for Modern Vehicles
| null |
In the 97th IEEE Vehicular Technology Conference: VTC2023
| null | null |
cs.CR cs.DC cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Current vehicular Intrusion Detection and Prevention Systems either incur
high false-positive rates or do not capture zero-day vulnerabilities, leading
to safety-critical risks. In addition, prevention is limited to few primitive
options like dropping network packets or extreme options, e.g., ECU Bus-off
state. To fill this gap, we introduce the concept of vehicular Intrusion
Resilience Systems (IRS) that ensures the resilience of critical applications
despite assumed faults or zero-day attacks, as long as threat assumptions are
met. IRS enables running a vehicular application in a replicated way, i.e., as
a Replicated State Machine, over several ECUs, and then requiring the
replicated processes to reach a form of Byzantine agreement before changing
their local state. Our study rides the mutation of modern vehicular
environments, which are closing the gap between simple and resource-constrained
"real-time and embedded systems", and complex and powerful "information
technology" ones. It shows that current vehicle (e.g., Zonal) architectures and
networks are becoming plausible for such modular fault and intrusion tolerance
solutions, deemed too heavy in the past. Our evaluation on a simulated
Automotive Ethernet network running two state-of-the-art agreement protocols
(Damysus and HotStuff) shows that the achieved latency and throughput are
feasible for many automotive applications.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 14:18:04 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Shoker",
"Ali",
""
],
[
"Rahli",
"Vincent",
""
],
[
"Decouchant",
"Jeremie",
""
],
[
"Esteves-Verissimo",
"Paulo",
""
]
] |
new_dataset
| 0.994752 |
2307.04217
|
Ibrahim Abdelaziz
|
Kavitha Srinivas, Julian Dolby, Ibrahim Abdelaziz, Oktie Hassanzadeh,
Harsha Kokel, Aamod Khatiwada, Tejaswini Pedapati, Subhajit Chaudhury, Horst
Samulowitz
|
LakeBench: Benchmarks for Data Discovery over Data Lakes
| null | null | null | null |
cs.DB cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Within enterprises, there is a growing need to intelligently navigate data
lakes, specifically focusing on data discovery. Of particular importance to
enterprises is the ability to find related tables in data repositories. These
tables can be unionable, joinable, or subsets of each other. There is a dearth
of benchmarks for these tasks in the public domain, with related work targeting
private datasets. In LakeBench, we develop multiple benchmarks for these tasks
by using the tables that are drawn from a diverse set of data sources such as
government data from CKAN, Socrata, and the European Central Bank. We compare
the performance of 4 publicly available tabular foundational models on these
tasks. None of the existing models had been trained on the data discovery tasks
that we developed for this benchmark; not surprisingly, their performance shows
significant room for improvement. The results suggest that the establishment of
such benchmarks may be useful to the community to build tabular models usable
for data discovery in data lakes.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 16:16:11 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Srinivas",
"Kavitha",
""
],
[
"Dolby",
"Julian",
""
],
[
"Abdelaziz",
"Ibrahim",
""
],
[
"Hassanzadeh",
"Oktie",
""
],
[
"Kokel",
"Harsha",
""
],
[
"Khatiwada",
"Aamod",
""
],
[
"Pedapati",
"Tejaswini",
""
],
[
"Chaudhury",
"Subhajit",
""
],
[
"Samulowitz",
"Horst",
""
]
] |
new_dataset
| 0.99619 |
2307.04222
|
Eric Ruzomberka
|
Eric Ruzomberka and Homa Nikbakht and Christopher G. Brinton and David
J. Love and H. Vincent Poor
|
Derandomizing Codes for the Binary Adversarial Wiretap Channel of Type
II
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the binary adversarial wiretap channel (AWTC) of type II in which
an active adversary can read a fraction $r$ and flip a fraction $p$ of codeword
bits. The semantic-secrecy capacity of the AWTC II is partially known, where
the best-known lower bound is non-constructive, proven via a random coding
argument that uses a large number (that is exponential in blocklength $n$) of
random bits to seed the random code. In this paper, we establish a new
derandomization result in which we match the best-known lower bound of
$1-H_2(p)-r$ where $H_2(\cdot)$ is the binary entropy function via a random
code that uses a small seed of only $O(n^2)$ bits. Our random code construction
is a novel application of pseudolinear codes -- a class of non-linear codes
that have $k$-wise independent codewords when picked at random where $k$ is a
design parameter. As the key technical tool in our analysis, we provide a
soft-covering lemma in the flavor of Goldfeld, Cuff and Permuter (Trans. Inf.
Theory 2016) that holds for random codes with $k$-wise independent codewords.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 16:28:45 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Ruzomberka",
"Eric",
""
],
[
"Nikbakht",
"Homa",
""
],
[
"Brinton",
"Christopher G.",
""
],
[
"Love",
"David J.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.952101 |
2307.04223
|
Truong-Dong Do
|
Truong-Dong Do, Nghe-Nhan Truong and My-Ha Le
|
Real-time Human Detection in Fire Scenarios using Infrared and Thermal
Imaging Fusion
|
5 pages, 6 figures, 2 tables
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fire is considered one of the most serious threats to human lives which
results in a high probability of fatalities. Those severe consequences stem
from the heavy smoke emitted from a fire that mostly restricts the visibility
of escaping victims and rescuing squad. In such hazardous circumstances, the
use of a vision-based human detection system is able to improve the ability to
save more lives. To this end, a thermal and infrared imaging fusion strategy
based on multiple cameras for human detection in low-visibility scenarios
caused by smoke is proposed in this paper. By processing with multiple cameras,
vital information can be gathered to generate more useful features for human
detection. Firstly, the cameras are calibrated using a Light Heating
Chessboard. Afterward, the features extracted from the input images are merged
prior to being passed through a lightweight deep neural network to perform the
human detection task. The experiments conducted on an NVIDIA Jetson Nano
computer demonstrated that the proposed method can process with reasonable
speed and can achieve favorable performance with an mAP@0.5 of 95%.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 16:28:57 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Do",
"Truong-Dong",
""
],
[
"Truong",
"Nghe-Nhan",
""
],
[
"Le",
"My-Ha",
""
]
] |
new_dataset
| 0.995386 |
2307.04285
|
Soyoung Yang
|
Soyoung Yang, Minseok Choi, Youngwoo Cho, Jaegul Choo
|
HistRED: A Historical Document-Level Relation Extraction Dataset
| null |
ACL 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Despite the extensive applications of relation extraction (RE) tasks in
various domains, little has been explored in the historical context, which
contains promising data across hundreds and thousands of years. To promote the
historical RE research, we present HistRED constructed from Yeonhaengnok.
Yeonhaengnok is a collection of records originally written in Hanja, the
classical Chinese writing, which has later been translated into Korean. HistRED
provides bilingual annotations such that RE can be performed on Korean and
Hanja texts. In addition, HistRED supports various self-contained subtexts with
different lengths, from a sentence level to a document level, supporting
diverse context settings for researchers to evaluate the robustness of their RE
models. To demonstrate the usefulness of our dataset, we propose a bilingual RE
model that leverages both Korean and Hanja contexts to predict relations
between entities. Our model outperforms monolingual baselines on HistRED,
showing that employing multiple language contexts supplements the RE
predictions. The dataset is publicly available at:
https://huggingface.co/datasets/Soyoung/HistRED under CC BY-NC-ND 4.0 license.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 00:24:27 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Yang",
"Soyoung",
""
],
[
"Choi",
"Minseok",
""
],
[
"Cho",
"Youngwoo",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.999815 |
2307.04291
|
Wen Siang Tan
|
Wen Siang Tan, Markus Wagner, Christoph Treude
|
Wait, wasn't that code here before? Detecting Outdated Software
Documentation
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Encountering outdated documentation is not a rare occurrence for developers
and users in the software engineering community. To ensure that software
documentation is up-to-date, developers often have to manually check whether
the documentation needs to be updated whenever changes are made to the source
code. In our previous work, we proposed an approach to automatically detect
outdated code element references in software repositories and found that more
than a quarter of the 1000 most popular projects on GitHub contained at least
one outdated reference. In this paper, we present a GitHub Actions tool that
builds on our previous work's approach that GitHub developers can configure to
automatically scan for outdated code element references in their GitHub
project's documentation whenever a pull request is submitted.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 00:52:29 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Tan",
"Wen Siang",
""
],
[
"Wagner",
"Markus",
""
],
[
"Treude",
"Christoph",
""
]
] |
new_dataset
| 0.996047 |
2307.04377
|
Minsung Kang
|
Minsung Kang, Soochul Park, and Keunwoo Choi
|
HCLAS-X: Hierarchical and Cascaded Lyrics Alignment System Using
Multimodal Cross-Correlation
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work, we address the challenge of lyrics alignment, which involves
aligning the lyrics and vocal components of songs. This problem requires the
alignment of two distinct modalities, namely text and audio. To overcome this
challenge, we propose a model that is trained in a supervised manner, utilizing
the cross-correlation matrix of latent representations between vocals and
lyrics. Our system is designed in a hierarchical and cascaded manner. It
predicts synced time first on a sentence-level and subsequently on a
word-level. This design enables the system to process long sequences, as the
cross-correlation uses quadratic memory with respect to sequence length. In our
experiments, we demonstrate that our proposed system achieves a significant
improvement in mean average error, showcasing its robustness in comparison to
the previous state-of-the-art model. Additionally, we conduct a qualitative
analysis of the system after successfully deploying it in several music
streaming services.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 07:22:06 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Kang",
"Minsung",
""
],
[
"Park",
"Soochul",
""
],
[
"Choi",
"Keunwoo",
""
]
] |
new_dataset
| 0.996056 |
2307.04422
|
Gyuree Kang
|
Gyuree Kang, Hyunki Seong, Daegyu Lee, D. Hyunchul Shim
|
A Versatile Door Opening System with Mobile Manipulator through Adaptive
Position-Force Control and Reinforcement Learning
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The ability of robots to navigate through doors is crucial for their
effective operation in indoor environments. Consequently, extensive research
has been conducted to develop robots capable of opening specific doors.
However, the diverse combinations of door handles and opening directions
necessitate a more versatile door opening system for robots to successfully
operate in real-world environments. In this paper, we propose a mobile
manipulator system that can autonomously open various doors without prior
knowledge. By using convolutional neural networks, point cloud extraction
techniques, and external force measurements during exploratory motion, we
obtained information regarding handle types, poses, and door characteristics.
Through two different approaches, adaptive position-force control and deep
reinforcement learning, we successfully opened doors without precise trajectory
or excessive external force. The adaptive position-force control method
involves moving the end-effector in the direction of the door opening while
responding compliantly to external forces, ensuring safety and manipulator
workspace. Meanwhile, the deep reinforcement learning policy minimizes applied
forces and eliminates unnecessary movements, enabling stable operation across
doors with different poses and widths. The RL-based approach outperforms the
adaptive position-force control method in terms of compensating for external
forces, ensuring smooth motion, and achieving efficient speed. It reduces the
maximum force required by 3.27 times and improves motion smoothness by 1.82
times. However, the non-learning-based adaptive position-force control method
demonstrates more versatility in opening a wider range of doors, encompassing
revolute doors with four distinct opening directions and varying widths.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 08:55:28 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Kang",
"Gyuree",
""
],
[
"Seong",
"Hyunki",
""
],
[
"Lee",
"Daegyu",
""
],
[
"Shim",
"D. Hyunchul",
""
]
] |
new_dataset
| 0.997846 |
2307.04431
|
Hongpeng Chen
|
Hongpeng Chen, Shengzeng Huo, Muhammad Muddassir, Hoi-Yin Lee, Anqing
Duan, Pai Zheng, Hongsheng Pan, David Navarro-Alarcon
|
PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection
of 3C Components with a Robotic Line Scanner
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automatic inspection of surface defects is an important task for quality
control in the computers, communications, and consumer electronics (3C)
industry. Conventional devices for defect inspection (viz. line-scan sensors)
have a limited field of view, thus, a robot-aided defect inspection system
needs to scan the object from multiple viewpoints. Optimally selecting the
robot's viewpoints and planning a path is regarded as coverage path planning
(CPP), a problem that enables inspecting the object's complete surface while
reducing the scanning time and avoiding misdetection of defects. However, the
development of CPP strategies for robotic line scanners has not been
sufficiently studied by researchers. To fill this gap in the literature, in
this paper, we present a new approach for robotic line scanners to detect
surface defects of 3C free-form objects automatically. Our proposed solution
consists of generating a local path by a new hybrid region segmentation method
and an adaptive planning algorithm to ensure the coverage of the complete
object surface. An optimization method for the global path sequence is
developed to maximize the scanning efficiency. To verify our proposed
methodology, we conduct detailed simulation-based and experimental studies on
various free-form workpieces, and compare its performance with a
state-of-the-art solution. The reported results demonstrate the feasibility and
effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 09:11:52 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Chen",
"Hongpeng",
""
],
[
"Huo",
"Shengzeng",
""
],
[
"Muddassir",
"Muhammad",
""
],
[
"Lee",
"Hoi-Yin",
""
],
[
"Duan",
"Anqing",
""
],
[
"Zheng",
"Pai",
""
],
[
"Pan",
"Hongsheng",
""
],
[
"Navarro-Alarcon",
"David",
""
]
] |
new_dataset
| 0.999724 |
2307.04442
|
Mohamed Amine Kerkouri
|
Aymen Sekhri, Marouane Tliba, Mohamed Amine Kerkouri, Yassine Nasser,
Aladine Chetouani, Alessandro Bruno, Rachid Jennane
|
Automatic diagnosis of knee osteoarthritis severity using Swin
transformer
|
CBMI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic
pain and stiffness in the knee joint. Early detection and diagnosis are crucial
for successful clinical intervention and management to prevent severe
complications, such as loss of mobility. In this paper, we propose an automated
approach that employs the Swin Transformer to predict the severity of KOA. Our
model uses publicly available radiographic datasets with Kellgren and Lawrence
scores to enable early detection and severity assessment. To improve the
accuracy of our model, we employ a multi-prediction head architecture that
utilizes multi-layer perceptron classifiers. Additionally, we introduce a novel
training approach that reduces the data drift between multiple datasets to
ensure the generalization ability of the model. The results of our experiments
demonstrate the effectiveness and feasibility of our approach in predicting KOA
severity accurately.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 09:49:30 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Sekhri",
"Aymen",
""
],
[
"Tliba",
"Marouane",
""
],
[
"Kerkouri",
"Mohamed Amine",
""
],
[
"Nasser",
"Yassine",
""
],
[
"Chetouani",
"Aladine",
""
],
[
"Bruno",
"Alessandro",
""
],
[
"Jennane",
"Rachid",
""
]
] |
new_dataset
| 0.999722 |
2307.04455
|
Ting Jiang
|
Xinpeng Li, Ting Jiang, Haoqiang Fan, Shuaicheng Liu
|
SAM-IQA: Can Segment Anything Boost Image Quality Assessment?
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image Quality Assessment (IQA) is a challenging task that requires training
on massive datasets to achieve accurate predictions. However, due to the lack
of IQA data, deep learning-based IQA methods typically rely on pre-trained
networks trained on massive datasets as feature extractors to enhance their
generalization ability, such as the ResNet network trained on ImageNet. In this
paper, we utilize the encoder of Segment Anything, a recently proposed
segmentation model trained on a massive dataset, for high-level semantic
feature extraction. Most IQA methods are limited to extracting spatial-domain
features, while frequency-domain features have been shown to better represent
noise and blur. Therefore, we leverage both spatial-domain and frequency-domain
features by applying Fourier and standard convolutions on the extracted
features, respectively. Extensive experiments are conducted to demonstrate the
effectiveness of all the proposed components, and results show that our
approach outperforms the state-of-the-art (SOTA) in four representative
datasets, both qualitatively and quantitatively. Our experiments confirm the
powerful feature extraction capabilities of Segment Anything and highlight the
value of combining spatial-domain and frequency-domain features in IQA tasks.
Code: https://github.com/Hedlen/SAM-IQA
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 10:07:11 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Li",
"Xinpeng",
""
],
[
"Jiang",
"Ting",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
new_dataset
| 0.984103 |
2307.04479
|
Md. Rabiul Islam Khan
|
Md. Rabiul Islam Khan, Shadman Shahriar, and Shaikh Farhan Rafid
|
A Linear Time Quantum Algorithm for Pairwise Sequence Alignment
| null | null | null | null |
cs.DS cs.CE q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Sequence Alignment is the process of aligning biological sequences in order
to identify similarities between multiple sequences. In this paper, a Quantum
Algorithm for finding the optimal alignment between DNA sequences has been
demonstrated which works by mapping the sequence alignment problem into a
path-searching problem through a 2D graph. The transition, which converges to a
fixed path on the graph, is based on a proposed oracle for profit calculation.
By implementing Grover's search algorithm, our proposed approach is able to
align a pair of sequences and figure out the optimal alignment within linear
time, which has not been attained by any classical deterministic algorithm. In
addition to that, the proposed algorithm is capable of quadratic speeding up to
any unstructured search problem by finding out the optimal paths accurately in
a deterministic manner, in contrast to existing randomized algorithms that
frequently return sub-optimal alignments and therefore do not always
guarantee finding the optimal solutions.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 11:01:41 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Khan",
"Md. Rabiul Islam",
""
],
[
"Shahriar",
"Shadman",
""
],
[
"Rafid",
"Shaikh Farhan",
""
]
] |
new_dataset
| 0.97875 |
2307.04494
|
David Rodr\'iguez-Mart\'inez
|
David Rodr\'iguez-Mart\'inez and Kentaro Uno and Kenta Sawa and
Masahiro Uda and Gen Kudo and Gustavo Hernan Diaz and Ayumi Umemura and
Shreya Santra and Kazuya Yoshida
|
Enabling Faster Locomotion of Planetary Rovers with a
Mechanically-Hybrid Suspension
|
8 pages, 13 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The exploration of the lunar poles and the collection of samples from the
martian surface are characterized by shorter time windows demanding increased
autonomy and speeds. Autonomous mobile robots must intrinsically cope with a
wider range of disturbances. Faster off-road navigation has been explored for
terrestrial applications but the combined effects of increased speeds and
reduced gravity fields are yet to be fully studied. In this paper, we design
and demonstrate a novel fully passive suspension design for wheeled planetary
robots, which couples a high-range passive rocker with elastic in-wheel
coil-over shock absorbers. The design was initially conceived and verified in a
reduced-gravity (1.625 m/s$^2$) simulated environment, where three different
passive suspension configurations were evaluated against a set of
challenges--climbing steep slopes and surmounting unexpected obstacles like
rocks and outcrops--and later prototyped and validated in a series of field
tests. The proposed mechanically-hybrid suspension proves to mitigate more
effectively the negative effects (high-frequency/high-amplitude vibrations and
impact loads) of faster locomotion (>1 m/s) over unstructured terrains under
varied gravity fields. This lowers the demand on navigation and control
systems, impacting the efficiency of exploration missions in the years to come.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 11:33:46 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Rodríguez-Martínez",
"David",
""
],
[
"Uno",
"Kentaro",
""
],
[
"Sawa",
"Kenta",
""
],
[
"Uda",
"Masahiro",
""
],
[
"Kudo",
"Gen",
""
],
[
"Diaz",
"Gustavo Hernan",
""
],
[
"Umemura",
"Ayumi",
""
],
[
"Santra",
"Shreya",
""
],
[
"Yoshida",
"Kazuya",
""
]
] |
new_dataset
| 0.993783 |
2307.04515
|
Amir Ziaee
|
Amir Ziaee, Georg Suter
|
SAGC-A68: a space access graph dataset for the classification of spaces
and space elements in apartment buildings
|
Published in proceedings of the 30th International Workshop on
Intelligent Computing in Engineering, EG-ICE 2023, London, England.
https://www.ucl.ac.uk/bartlett/construction/sites/bartlett_construction/files/sagc-a68_a_space_access_graph_dataset_for_the_classification_of_spaces_and_space_elements_in_apartment_buildings.pdf
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The analysis of building models for usable area, building safety, and energy
use requires accurate classification data of spaces and space elements. To
reduce input model preparation effort and errors, automated classification of
spaces and space elements is desirable. A barrier hindering the utilization of
Graph Deep Learning (GDL) methods to space function and space element
classification is a lack of suitable datasets. To bridge this gap, we introduce
a dataset, SAGC-A68, which comprises access graphs automatically generated from
68 digital 3D models of space layouts of apartment buildings. This graph-based
dataset is well-suited for developing GDL models for space function and space
element classification. To demonstrate the potential of the dataset, we employ
it to train and evaluate a graph attention network (GAT) that predicts 22 space
function and 6 space element classes. The dataset and code used in the
experiment are available online. https://doi.org/10.5281/zenodo.7805872,
https://github.com/A2Amir/SAGC-A68.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 12:22:08 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Ziaee",
"Amir",
""
],
[
"Suter",
"Georg",
""
]
] |
new_dataset
| 0.999713 |
2307.04529
|
Wanghong Yang
|
Wanghong Yang, Wenji Du, Baosen Zhao, Yongmao Ren, Jianan Sun, Xu Zhou
|
Cross-Layer Assisted Early Congestion Control for Cloud VR Services in
5G Edge Network
|
this paper is under review
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud virtual reality (VR) has emerged as a promising technology, offering
users a highly immersive and easily accessible experience. However, the current
5G radio access network faces challenges in accommodating the bursty traffic
generated by multiple cloud VR flows simultaneously, leading to congestion at
the 5G base station and increased delays. In this research, we present a
comprehensive quantitative analysis that highlights the underlying causes for
the poor delay performance of cloud VR flows within the existing 5G protocol
stack and network. To address these issues, we propose a novel cross-layer
information-assisted congestion control mechanism deployed in the 5G edge
network. Experiment results show that our mechanism enhances the number of
concurrent flows meeting delay standards by 1.5x to 2.5x, while maintaining a
smooth network load. These findings underscore the potential of leveraging 5G
edge nodes as a valuable resource to effectively meet the anticipated demands
of future services.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 12:56:41 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Yang",
"Wanghong",
""
],
[
"Du",
"Wenji",
""
],
[
"Zhao",
"Baosen",
""
],
[
"Ren",
"Yongmao",
""
],
[
"Sun",
"Jianan",
""
],
[
"Zhou",
"Xu",
""
]
] |
new_dataset
| 0.990832 |
2307.04537
|
Wei-Cheng Lin
|
Chi-Chih Chang, Wei-Cheng Lin, Pei-Shuo Wang, Sheng-Feng Yu, Yu-Chen
Lu, Kuan-Cheng Lin and Kai-Chiang Wu
|
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving
Perception
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present an efficient and quantization-aware panoptic driving
perception model (Q-YOLOP) for object detection, drivable area segmentation,
and lane line segmentation, in the context of autonomous driving. Our model
employs the Efficient Layer Aggregation Network (ELAN) as its backbone and
task-specific heads for each task. We employ a four-stage training process that
includes pretraining on the BDD100K dataset, finetuning on both the BDD100K and
iVS datasets, and quantization-aware training (QAT) on BDD100K. During the
training process, we use powerful data augmentation techniques, such as random
perspective and mosaic, and train the model on a combination of the BDD100K and
iVS datasets. Both strategies enhance the model's generalization capabilities.
The proposed model achieves state-of-the-art performance with an mAP@0.5 of
0.622 for object detection and an mIoU of 0.612 for segmentation, while
maintaining low computational and memory requirements.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 13:02:46 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Chang",
"Chi-Chih",
""
],
[
"Lin",
"Wei-Cheng",
""
],
[
"Wang",
"Pei-Shuo",
""
],
[
"Yu",
"Sheng-Feng",
""
],
[
"Lu",
"Yu-Chen",
""
],
[
"Lin",
"Kuan-Cheng",
""
],
[
"Wu",
"Kai-Chiang",
""
]
] |
new_dataset
| 0.963913 |
2307.04549
|
Dylan Mercury Cooper
|
Dylan Mercury Cooper
|
Needs, Passions and Loot Boxes -- Exploring Reasons for Problem
Behaviour in Relation to Loot Box Engagement
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on the convergence of gaming and gambling has been around since the
1990s. The emergence of loot boxes in video games in the mid 2010s, a game
mechanic with a chance-based outcome that shares structural and psychological
similarities to gambling, caused public controversy and led to the inception
of a new field of study, loot box research. Since then, various studies have
found a relationship between loot box engagement and problem gambling as well
as problem gaming. Due to the cross-sectional nature of this data, however,
inferences about causality are limited. While loot box research has extensively
investigated the relationship between loot box engagement and problem
behaviour, little research has been done to explain the underlying motivations
of players that drive them to interact with loot boxes. The goal of this thesis
is to provide possible explanations for the relationship between loot box
engagement and problem gamblers or problem gamers. In doing so, it draws upon
two prominent psychological theories. Self-Determination Theory and the
Dualistic Model of Passion. Self-Determination Theory's concept of
psychological needs and their satisfaction or frustration is hereby used to
explain the development of harmonious or obsessive passions, which are
introduced in the Dualistic Model of Passion. These obsessive passions have
been shown to be possible antecedents of behavioural addictions, such as
problem gambling or problem gaming. Thus, the interplay between needs, passions
and loot box opening could elucidate the aforementioned correlations between
loot box engagement and problem behaviour. However, further research,
especially utilising longitudinal data, is needed to better understand these
processes.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 13:27:13 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Cooper",
"Dylan Mercury",
""
]
] |
new_dataset
| 0.994329 |
2307.04574
|
Jongwook Si
|
Jongwook Si and Sungyoung Kim
|
TFR: Texture Defect Detection with Fourier Transform using Normal
Reconstructed Template of Simple Autoencoder
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Texture is an essential information in image representation, capturing
patterns and structures. As a result, texture plays a crucial role in the
manufacturing industry and is extensively studied in the fields of computer
vision and pattern recognition. However, real-world textures are susceptible to
defects, which can degrade image quality and cause various issues. Therefore,
there is a need for accurate and effective methods to detect texture defects.
In this study, a simple autoencoder and Fourier transform are employed for
texture defect detection. The proposed method combines Fourier transform
analysis with the reconstructed template obtained from the simple autoencoder.
Fourier transform is a powerful tool for analyzing the frequency domain of
images and signals. Moreover, since texture defects often exhibit
characteristic changes in specific frequency ranges, analyzing the frequency
domain enables effective defect detection. The proposed method demonstrates
effectiveness and accuracy in detecting texture defects. Experimental results
are presented to evaluate its performance and compare it with existing
approaches.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 14:07:37 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Si",
"Jongwook",
""
],
[
"Kim",
"Sungyoung",
""
]
] |
new_dataset
| 0.993891 |
2307.04592
|
Bjoern Andres
|
Jannik Irmai, Shengxian Zhao, Jannik Presberger, Bjoern Andres
|
A Graph Multi-separator Problem for Image Segmentation
|
36 pages
| null | null | null |
cs.CV cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel abstraction of the image segmentation task in the form of
a combinatorial optimization problem that we call the multi-separator problem.
Feasible solutions indicate for every pixel whether it belongs to a segment or
a segment separator, and indicate for pairs of pixels whether or not the pixels
belong to the same segment. This is in contrast to the closely related lifted
multicut problem where every pixel is associated to a segment and no pixel
explicitly represents a separating structure. While the multi-separator problem
is NP-hard, we identify two special cases for which it can be solved
efficiently. Moreover, we define two local search algorithms for the general
case and demonstrate their effectiveness in segmenting simulated volume images
of foam cells and filaments.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 14:32:24 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Irmai",
"Jannik",
""
],
[
"Zhao",
"Shengxian",
""
],
[
"Presberger",
"Jannik",
""
],
[
"Andres",
"Bjoern",
""
]
] |
new_dataset
| 0.995859 |
2307.04604
|
Jesse Choe
|
Jesse Choe, Siddhant Sood, Ryan Park
|
EchoVest: Real-Time Sound Classification and Depth Perception Expressed
through Transcutaneous Electrical Nerve Stimulation
| null | null | null | null |
cs.SD cs.LG eess.AS eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Over 1.5 billion people worldwide live with hearing impairment. Despite
various technologies that have been created for individuals with such
disabilities, most of these technologies are either extremely expensive or
inaccessible for everyday use in low-medium income countries. In order to
combat this issue, we have developed a new assistive device, EchoVest, for
blind/deaf people to intuitively become more aware of their environment.
EchoVest transmits vibrations to the user's body by utilizing transcutaneous
electric nerve stimulation (TENS) based on the source of the sounds. EchoVest
also provides various features, including sound localization, sound
classification, noise reduction, and depth perception. We aimed to outperform
CNN-based machine-learning models, the most commonly used machine learning
model for classification tasks, in accuracy and computational costs. To do so,
we developed and employed a novel audio pipeline that adapts the Audio
Spectrogram Transformer (AST) model, an attention-based model, for our sound
classification purposes, and Fast Fourier Transforms for noise reduction. The
application of Otsu's Method helped us find the optimal thresholds for
background noise sound filtering and gave us much greater accuracy. In order to
calculate direction and depth accurately, we applied Complex Time Difference of
Arrival algorithms and SOTA localization. Our last improvement was to use blind
source separation to make our algorithms applicable to multiple microphone
inputs. The final algorithm achieved state-of-the-art results on numerous
checkpoints, including a 95.7\% accuracy on the ESC-50 dataset for
environmental sound classification.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 14:43:32 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Choe",
"Jesse",
""
],
[
"Sood",
"Siddhant",
""
],
[
"Park",
"Ryan",
""
]
] |
new_dataset
| 0.999012 |
2307.04630
|
Kun Song
|
Kun Song, Yi lei, Peikun Chen, Yiqing Cao, Kun Wei, Yongmao Zhang, Lei
Xie, Ning Jiang, Guoqing Zhao
|
The NPU-MSXF Speech-to-Speech Translation System for IWSLT 2023
Speech-to-Speech Translation Task
|
IWSLT@ACL 2023 system paper. Our submitted system ranks 1st in the
S2ST task of the IWSLT 2023 evaluation campaign
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the NPU-MSXF system for the IWSLT 2023 speech-to-speech
translation (S2ST) task which aims to translate from English speech of
multi-source to Chinese speech. The system is built in a cascaded manner
consisting of automatic speech recognition (ASR), machine translation (MT), and
text-to-speech (TTS). We make tremendous efforts to handle the challenging
multi-source input. Specifically, to improve the robustness to multi-source
speech input, we adopt various data augmentation strategies and a ROVER-based
score fusion on multiple ASR model outputs. To better handle the noisy ASR
transcripts, we introduce a three-stage fine-tuning strategy to improve
translation accuracy. Finally, we build a TTS model with high naturalness and
sound quality, which leverages a two-stage framework, using network bottleneck
features as a robust intermediate representation for speaker timbre and
linguistic content disentanglement. Based on the two-stage framework,
pre-trained speaker embedding is leveraged as a condition to transfer the
speaker timbre in the source English speech to the translated Chinese speech.
Experimental results show that our system has high translation accuracy, speech
naturalness, sound quality, and speaker similarity. Moreover, it shows good
robustness to multi-source data.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 15:15:17 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Song",
"Kun",
""
],
[
"lei",
"Yi",
""
],
[
"Chen",
"Peikun",
""
],
[
"Cao",
"Yiqing",
""
],
[
"Wei",
"Kun",
""
],
[
"Zhang",
"Yongmao",
""
],
[
"Xie",
"Lei",
""
],
[
"Jiang",
"Ning",
""
],
[
"Zhao",
"Guoqing",
""
]
] |
new_dataset
| 0.998125 |
2307.04651
|
Aixuan Li
|
Aixuan Li, Jing Zhang, Yunqiu Lv, Tong Zhang, Yiran Zhong, Mingyi He,
Yuchao Dai
|
Joint Salient Object Detection and Camouflaged Object Detection via
Uncertainty-aware Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient objects attract human attention and usually stand out clearly from
their surroundings. In contrast, camouflaged objects share similar colors or
textures with the environment. In this case, salient objects are typically
non-camouflaged, and camouflaged objects are usually not salient. Due to this
inherent contradictory attribute, we introduce an uncertainty-aware learning
pipeline to extensively explore the contradictory information of salient object
detection (SOD) and camouflaged object detection (COD) via data-level and
task-wise contradiction modeling. We first exploit the dataset correlation of
these two tasks and claim that the easy samples in the COD dataset can serve as
hard samples for SOD to improve the robustness of the SOD model. Based on the
assumption that these two models should lead to activation maps highlighting
different regions of the same input image, we further introduce a contrastive
module with a joint-task contrastive learning framework to explicitly model the
contradictory attributes of these two tasks. Different from conventional
intra-task contrastive learning for unsupervised representation learning, our
contrastive module is designed to model the task-wise correlation, leading to
cross-task representation learning. To better understand the two tasks from the
perspective of uncertainty, we extensively investigate the uncertainty
estimation techniques for modeling the main uncertainties of the two tasks,
namely task uncertainty (for SOD) and data uncertainty (for COD), and aiming to
effectively estimate the challenging regions for each task to achieve
difficulty-aware learning. Experimental results on benchmark datasets
demonstrate that our solution leads to both state-of-the-art performance and
informative uncertainty estimation.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 15:49:37 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Li",
"Aixuan",
""
],
[
"Zhang",
"Jing",
""
],
[
"Lv",
"Yunqiu",
""
],
[
"Zhang",
"Tong",
""
],
[
"Zhong",
"Yiran",
""
],
[
"He",
"Mingyi",
""
],
[
"Dai",
"Yuchao",
""
]
] |
new_dataset
| 0.998432 |
2307.04683
|
David Pride Mr
|
David Pride, Matteo Cancellieri and Petr Knoth
|
CORE-GPT: Combining Open Access research and large language models for
credible, trustworthy question answering
|
12 pages, accepted submission to TPDL2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present CORE-GPT, a novel question-answering platform that
combines GPT-based language models and more than 32 million full-text open
access scientific articles from CORE. We first demonstrate that GPT3.5 and GPT4
cannot be relied upon to provide references or citations for generated text. We
then introduce CORE-GPT which delivers evidence-based answers to questions,
along with citations and links to the cited papers, greatly increasing the
trustworthiness of the answers and reducing the risk of hallucinations.
CORE-GPT's performance was evaluated on a dataset of 100 questions covering the
top 20 scientific domains in CORE, resulting in 100 answers and links to 500
relevant articles. The quality of the provided answers and the relevance of the
links were assessed by two annotators. Our results demonstrate that CORE-GPT
can produce comprehensive and trustworthy answers across the majority of
scientific domains, complete with links to genuine, relevant scientific
articles.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 13:41:36 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Pride",
"David",
""
],
[
"Cancellieri",
"Matteo",
""
],
[
"Knoth",
"Petr",
""
]
] |
new_dataset
| 0.989708 |
2307.04693
|
Noble Saji Mathews
|
Debeshee Das, Noble Saji Mathews, Alex Mathai, Srikanth Tamilselvam,
Kranthi Sedamaki, Sridhar Chimalakonda and Atul Kumar
|
COMEX: A Tool for Generating Customized Source Code Representations
|
The paper has been accepted for publication at ASE 2023 (Tool
Demonstrations Track)
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Learning effective representations of source code is critical for any Machine
Learning for Software Engineering (ML4SE) system. Inspired by natural language
processing, large language models (LLMs) like Codex and CodeGen treat code as
generic sequences of text and are trained on huge corpora of code data,
achieving state of the art performance on several software engineering (SE)
tasks. However, valid source code, unlike natural language, follows a strict
structure and pattern governed by the underlying grammar of the programming
language. Current LLMs do not exploit this property of the source code as they
treat code like a sequence of tokens and overlook key structural and semantic
properties of code that can be extracted from code-views like the Control Flow
Graph (CFG), Data Flow Graph (DFG), Abstract Syntax Tree (AST), etc.
Unfortunately, the process of generating and integrating code-views for every
programming language is cumbersome and time consuming. To overcome this
barrier, we propose our tool COMEX - a framework that allows researchers and
developers to create and combine multiple code-views which can be used by
machine learning (ML) models for various SE tasks. Some salient features of our
tool are: (i) it works directly on source code (which need not be compilable),
(ii) it currently supports Java and C#, (iii) it can analyze both method-level
snippets and program-level snippets by using both intra-procedural and
inter-procedural analysis, and (iv) it is easily extendable to other languages
as it is built on tree-sitter - a widely used incremental parser that supports
over 40 languages. We believe this easy-to-use code-view generation and
customization tool will give impetus to research in source code representation
learning methods and ML4SE.
Tool: https://pypi.org/project/comex - GitHub:
https://github.com/IBM/tree-sitter-codeviews - Demo:
https://youtu.be/GER6U87FVbU
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 16:46:34 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Das",
"Debeshee",
""
],
[
"Mathews",
"Noble Saji",
""
],
[
"Mathai",
"Alex",
""
],
[
"Tamilselvam",
"Srikanth",
""
],
[
"Sedamaki",
"Kranthi",
""
],
[
"Chimalakonda",
"Sridhar",
""
],
[
"Kumar",
"Atul",
""
]
] |
new_dataset
| 0.996748 |
2307.04738
|
Zhao Mandi
|
Zhao Mandi, Shreeya Jain, Shuran Song
|
RoCo: Dialectic Multi-Robot Collaboration with Large Language Models
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel approach to multi-robot collaboration that harnesses the
power of pre-trained large language models (LLMs) for both high-level
communication and low-level path planning. Robots are equipped with LLMs to
discuss and collectively reason task strategies. They then generate sub-task
plans and task space waypoint paths, which are used by a multi-arm motion
planner to accelerate trajectory planning. We also provide feedback from the
environment, such as collision checking, and prompt the LLM agents to improve
their plan and waypoints in-context. For evaluation, we introduce RoCoBench, a
6-task benchmark covering a wide range of multi-robot collaboration scenarios,
accompanied by a text-only dataset for agent representation and reasoning. We
experimentally demonstrate the effectiveness of our approach -- it achieves
high success rates across all tasks in RoCoBench and adapts to variations in
task semantics. Our dialog setup offers high interpretability and flexibility
-- in real world experiments, we show RoCo easily incorporates
human-in-the-loop, where a user can communicate and collaborate with a robot
agent to complete tasks together. See project website
https://project-roco.github.io for videos and code.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 17:52:01 GMT"
}
] | 2023-07-11T00:00:00 |
[
[
"Mandi",
"Zhao",
""
],
[
"Jain",
"Shreeya",
""
],
[
"Song",
"Shuran",
""
]
] |
new_dataset
| 0.995563 |
2202.12038
|
Josef Rukavicka
|
Josef Rukavicka
|
Construction of a bi-infinite power free word with a given factor and a
non-recurrent letter
| null | null |
10.1007/978-3-031-34326-1_12
| null |
cs.FL cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $L_{k,\alpha}^{\mathbb{Z}}$ denote the set of all bi-infinite
$\alpha$-power free words over an alphabet with $k$ letters, where $\alpha$ is
a positive rational number and $k$ is a positive integer. We prove that if
$\alpha\geq 5$, $k\geq 3$, $v\in L_{k,\alpha}^{\mathbb{Z}}$, and $w$ is a
finite factor of $v$, then there are $\widetilde v\in
L_{k,\alpha}^{\mathbb{Z}}$ and a letter $x$ such that $w$ is a factor of
$\widetilde v$ and $x$ has only finitely many occurrences in $\widetilde v$.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 11:39:48 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Apr 2022 12:58:05 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Rukavicka",
"Josef",
""
]
] |
new_dataset
| 0.986446 |
2205.12487
|
Barry Menglong Yao
|
Barry Menglong Yao (1), Aditya Shah (1), Lichao Sun (2), Jin-Hee Cho
(1), Lifu Huang (1) ((1) Virginia Tech, (2) Lehigh University)
|
End-to-End Multimodal Fact-Checking and Explanation Generation: A
Challenging Dataset and Models
|
Accepted by SIGIR 23, 11 pages, 4 figures
|
Proceedings of the 46th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR '23), July 23--27,
2023, Taipei, Taiwan
|
10.1145/3539618.3591879
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose end-to-end multimodal fact-checking and explanation generation,
where the input is a claim and a large collection of web sources, including
articles, images, videos, and tweets, and the goal is to assess the
truthfulness of the claim by retrieving relevant evidence and predicting a
truthfulness label (e.g., support, refute or not enough information), and to
generate a statement to summarize and explain the reasoning and ruling process.
To support this research, we construct Mocheg, a large-scale dataset consisting
of 15,601 claims where each claim is annotated with a truthfulness label and a
ruling statement, and 33,880 textual paragraphs and 12,112 images in total as
evidence. To establish baseline performances on Mocheg, we experiment with
several state-of-the-art neural architectures on the three pipelined subtasks:
multimodal evidence retrieval, claim verification, and explanation generation,
and demonstrate that the performance of the state-of-the-art end-to-end
multimodal fact-checking does not provide satisfactory outcomes. To the best of
our knowledge, we are the first to build the benchmark dataset and solutions
for end-to-end multimodal fact-checking and explanation generation. The
dataset, source code and model checkpoints are available at
https://github.com/VT-NLP/Mocheg.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 04:36:46 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 21:22:45 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Yao",
"Barry Menglong",
"",
"Virginia Tech"
],
[
"Shah",
"Aditya",
"",
"Virginia Tech"
],
[
"Sun",
"Lichao",
"",
"Lehigh University"
],
[
"Cho",
"Jin-Hee",
"",
"Virginia Tech"
],
[
"Huang",
"Lifu",
"",
"Virginia Tech"
]
] |
new_dataset
| 0.9994 |
2205.13682
|
Dmitry Petrov
|
Dmitry Petrov, Matheus Gadelha, Radomir Mech, Evangelos Kalogerakis
|
ANISE: Assembly-based Neural Implicit Surface rEconstruction
| null | null |
10.1109/TVCG.2023.3265306
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ANISE, a method that reconstructs a 3D shape from partial
observations (images or sparse point clouds) using a part-aware neural implicit
shape representation. The shape is formulated as an assembly of neural implicit
functions, each representing a different part instance. In contrast to previous
approaches, the prediction of this representation proceeds in a coarse-to-fine
manner. Our model first reconstructs a structural arrangement of the shape in
the form of geometric transformations of its part instances. Conditioned on
them, the model predicts part latent codes encoding their surface geometry.
Reconstructions can be obtained in two ways: (i) by directly decoding the part
latent codes to part implicit functions, then combining them into the final
shape; or (ii) by using part latents to retrieve similar part instances in a
part database and assembling them in a single shape. We demonstrate that, when
performing reconstruction by decoding part representations into implicit
functions, our method achieves state-of-the-art part-aware reconstruction
results from both images and sparse point clouds. When reconstructing shapes by
assembling parts retrieved from a dataset, our approach significantly
outperforms traditional shape retrieval methods even when significantly
restricting the database size. We present our results in well-known sparse
point cloud reconstruction and single-view reconstruction benchmarks.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 00:01:40 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 19:06:55 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Petrov",
"Dmitry",
""
],
[
"Gadelha",
"Matheus",
""
],
[
"Mech",
"Radomir",
""
],
[
"Kalogerakis",
"Evangelos",
""
]
] |
new_dataset
| 0.994654 |
2209.15397
|
Tiziano Guadagnino Dr.
|
Ignacio Vizzo, Tiziano Guadagnino, Benedikt Mersch, Louis Wiesmann,
Jens Behley, Cyrill Stachniss
|
KISS-ICP: In Defense of Point-to-Point ICP -- Simple, Accurate, and
Robust Registration If Done the Right Way
|
8 pages
| null |
10.1109/LRA.2023.3236571
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust and accurate pose estimation of a robotic platform, so-called
sensor-based odometry, is an essential part of many robotic applications. While
many sensor odometry systems made progress by adding more complexity to the
ego-motion estimation process, we move in the opposite direction. By removing a
majority of parts and focusing on the core elements, we obtain a surprisingly
effective system that is simple to realize and can operate under various
environmental conditions using different LiDAR sensors. Our odometry estimation
approach relies on point-to-point ICP combined with adaptive thresholding for
correspondence matching, a robust kernel, a simple but widely applicable motion
compensation approach, and a point cloud subsampling strategy. This yields a
system with only a few parameters that in most cases do not even have to be
tuned to a specific LiDAR sensor. Our system using the same parameters performs
on par with state-of-the-art methods under various operating conditions using
different platforms: automotive platforms, UAV-based operation, vehicles like
Segways, or handheld LiDARs. We do not require integrating IMU information and
solely rely on 3D point cloud data obtained from a wide range of 3D LiDAR
sensors, thus, enabling a broad spectrum of different applications and
operating conditions. Our open-source system operates faster than the sensor
frame rate in all presented datasets and is designed for real-world scenarios.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 11:53:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 12:36:22 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Vizzo",
"Ignacio",
""
],
[
"Guadagnino",
"Tiziano",
""
],
[
"Mersch",
"Benedikt",
""
],
[
"Wiesmann",
"Louis",
""
],
[
"Behley",
"Jens",
""
],
[
"Stachniss",
"Cyrill",
""
]
] |
new_dataset
| 0.998204 |
2212.00313
|
Cheng Guo
|
Cheng Guo, Fei Hu, and Yan Hu
|
Concealed Object Detection for Passive Millimeter-Wave Security Imaging
Based on Task-Aligned Detection Transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Passive millimeter-wave (PMMW) is a significant potential technique for human
security screening. Several popular object detection networks have been used
for PMMW images. However, restricted by the low resolution and high noise of
PMMW images, PMMW hidden object detection based on deep learning usually
suffers from low accuracy and low classification confidence. To tackle the
above problems, this paper proposes a Task-Aligned Detection Transformer
network, named PMMW-DETR. In the first stage, a Denoising Coarse-to-Fine
Transformer (DCFT) backbone is designed to extract long- and short-range
features in the different scales. In the second stage, we propose the Query
Selection module to introduce learned spatial features into the network as
prior knowledge, which enhances the semantic perception capability of the
network. In the third stage, aiming to improve the classification performance,
we perform a Task-Aligned Dual-Head block to decouple the classification and
regression tasks. Based on our self-developed PMMW security screening dataset,
experimental results including comparison with State-Of-The-Art (SOTA) methods
and ablation study demonstrate that the PMMW-DETR obtains higher accuracy and
classification confidence than previous works, and exhibits robustness to the
PMMW images of low quality.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 07:03:29 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 11:34:41 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Guo",
"Cheng",
""
],
[
"Hu",
"Fei",
""
],
[
"Hu",
"Yan",
""
]
] |
new_dataset
| 0.970027 |