id
stringlengths 9
10
| submitter
stringlengths 2
52
⌀ | authors
stringlengths 4
6.51k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
345
⌀ | doi
stringlengths 11
120
⌀ | report-no
stringlengths 2
243
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | abstract
stringlengths 33
3.33k
| versions
list | update_date
timestamp[s] | authors_parsed
list | prediction
stringclasses 1
value | probability
float64 0.95
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2212.01211
|
Lucas Meijer
|
Lucas Meijer, Tillmann Miltzow
|
Sometimes Two Irrational Guards are Needed
|
19 pages, 12 figures
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
In the art gallery problem, we are given a closed polygon $P$, with rational
coordinates and an integer $k$. We are asked whether it is possible to find a
set (of guards) $G$ of size $k$ such that any point $p\in P$ is seen by a point
in $G$. We say two points $p$, $q$ see each other if the line segment $pq$ is
contained inside $P$. It was shown by Abrahamsen, Adamaszek, and Miltzow that
there is a polygon that can be guarded with three guards, but requires four
guards if the guards are required to have rational coordinates. In other words,
an optimal solution of size three might need to be irrational. We show that an
optimal solution of size two might need to be irrational. Note that it is
well-known that any polygon that can be guarded with one guard has an optimal
guard placement with rational coordinates. Hence, our work closes the gap on
when irrational guards are possible to occur.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 14:43:33 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 11:41:58 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Meijer",
"Lucas",
""
],
[
"Miltzow",
"Tillmann",
""
]
] |
new_dataset
| 0.986843 |
2212.01728
|
Wenrong Chen
|
Wenrong Chen, Lingxiang Li, Zhi Chen, Boyu Ning, Guangjian Wang, Tony
Quek
|
ISAC-Enabled Beam Alignment for Terahertz Networks: Scheme Design and
Coverage Analysis
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a key pillar technology for the future 6G networks, terahertz (THz)
communication can provide high-capacity transmissions, but suffers from severe
propagation loss and line-of-sight (LoS) blockage that limits the network
coverage. Narrow beams are required to compensate for the loss, but they in
turn bring in beam misalignment challenge that degrades the THz network
performance. The high sensing accuracy of THz signals enables integrated
sensing and communication (ISAC) technology to assist the LoS blockage and user
mobility-induced beam misalignment, enhancing THz network coverage. In line
with the 5G beam management, we propose a joint synchronization signal block
(SSB) and reference signal (RS)-based sensing (JSRS) scheme to predict the need
for beam switches, and thus prevent beam misalignment. We further design an
optimal sensing signal pattern that minimizes beam misalignment with fixed
sensing resources, which reveals design insights into the time-to-frequency
allocation. We derive expressions for the coverage probability and spatial
throughput, which provide instructions on the ISAC-THz network deployment and
further enable evaluations for the sensing benefit in THz networks. Numerical
results show that the JSRS scheme is effective and highly compatible with the
5G air interface. Averaged in tested urban use cases, JSRS achieves near-ideal
performance and reduces around 80% of beam misalignment, and enhances the
coverage probability by about 75%, compared to the network with 5G-required
positioning ability.
|
[
{
"version": "v1",
"created": "Sun, 4 Dec 2022 02:58:50 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 06:46:36 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Chen",
"Wenrong",
""
],
[
"Li",
"Lingxiang",
""
],
[
"Chen",
"Zhi",
""
],
[
"Ning",
"Boyu",
""
],
[
"Wang",
"Guangjian",
""
],
[
"Quek",
"Tony",
""
]
] |
new_dataset
| 0.971082 |
2301.00945
|
Chaofeng Guan
|
Chaofeng Guan, Ruihu Li, Zhi Ma
|
On Euclidean, Hermitian and symplectic quasi-cyclic complementary dual
codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Linear complementary dual codes (LCD) intersect trivially with their dual. In
this paper, we develop a new characterization for LCD codes, which allows us to
judge the complementary duality of linear codes from the codeword level.
Further, we determine the sufficient and necessary conditions for one-generator
quasi-cyclic codes to be LCD codes involving Euclidean, Hermitian, and
symplectic inner products. Finally, we constructed many Euclidean, Hermitian
and symmetric LCD codes with excellent parameters, some improving the results
in the literature. Remarkably, we construct a symplectic LCD $[28,6]_2$ code
with symplectic distance $10$, which corresponds to an trace Hermitian additive
complementary dual $(14,3,10)_4$ code that outperforms the optimal quaternary
Hermitian LCD $[14,3,9]_4$ code.
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 04:17:39 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 14:52:14 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Jul 2023 03:24:10 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Jul 2023 01:12:42 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Guan",
"Chaofeng",
""
],
[
"Li",
"Ruihu",
""
],
[
"Ma",
"Zhi",
""
]
] |
new_dataset
| 0.999589 |
2302.03256
|
Lei Zhang
|
Lei Zhang, Mahsa Radnejad, Andriy Miranskyy
|
Identifying Flakiness in Quantum Programs
|
7 pages, 16 listings, 2 tables, accepted at ESEM 2023: The 17th
ACM/IEEE International Symposium on Empirical Software Engineering and
Measurement
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In recent years, software engineers have explored ways to assist quantum
software programmers. Our goal in this paper is to continue this exploration
and see if quantum software programmers deal with some problems plaguing
classical programs. Specifically, we examine whether intermittently failing
tests, i.e., flaky tests, affect quantum software development.
To explore flakiness, we conduct a preliminary analysis of 14 quantum
software repositories. Then, we identify flaky tests and categorize their
causes and methods of fixing them.
We find flaky tests in 12 out of 14 quantum software repositories. In these
12 repositories, the lower boundary of the percentage of issues related to
flaky tests ranges between 0.26% and 1.85% per repository. We identify 46
distinct flaky test reports with 8 groups of causes and 7 common solutions.
Further, we notice that quantum programmers are not using some of the recent
flaky test countermeasures developed by software engineers.
This work may interest practitioners, as it provides useful insight into the
resolution of flaky tests in quantum programs. Researchers may also find the
paper helpful as it offers quantitative data on flaky tests in quantum software
and points to new research opportunities.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 04:55:34 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 12:30:11 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Zhang",
"Lei",
""
],
[
"Radnejad",
"Mahsa",
""
],
[
"Miranskyy",
"Andriy",
""
]
] |
new_dataset
| 0.960074 |
2303.02237
|
Keshab Parhi
|
Weihang Tan, Sin-Wei Chiu, Antian Wang, Yingjie Lao, Keshab K. Parhi
|
PaReNTT: Low-Latency Parallel Residue Number System and NTT-Based Long
Polynomial Modular Multiplication for Homomorphic Encryption
| null | null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
High-speed long polynomial multiplication is important for applications in
homomorphic encryption (HE) and lattice-based cryptosystems. This paper
addresses low-latency hardware architectures for long polynomial modular
multiplication using the number-theoretic transform (NTT) and inverse NTT
(iNTT). Chinese remainder theorem (CRT) is used to decompose the modulus into
multiple smaller moduli. Our proposed architecture, namely PaReNTT, makes four
novel contributions. First, parallel NTT and iNTT architectures are proposed to
reduce the number of clock cycles to process the polynomials. This can enable
real-time processing for HE applications, as the number of clock cycles to
process the polynomial is inversely proportional to the level of parallelism.
Second, the proposed architecture eliminates the need for permuting the NTT
outputs before their product is input to the iNTT. This reduces latency by n/4
clock cycles, where n is the length of the polynomial, and reduces buffer
requirement by one delay-switch-delay circuit of size n. Third, an approach to
select special moduli is presented where the moduli can be expressed in terms
of a few signed power-of-two terms. Fourth, novel architectures for
pre-processing for computing residual polynomials using the CRT and
post-processing for combining the residual polynomials are proposed. These
architectures significantly reduce the area consumption of the pre-processing
and post-processing steps. The proposed long modular polynomial multiplications
are ideal for applications that require low latency and high sample rate as
these feed-forward architectures can be pipelined at arbitrary levels.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 22:02:56 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 21:57:28 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Tan",
"Weihang",
""
],
[
"Chiu",
"Sin-Wei",
""
],
[
"Wang",
"Antian",
""
],
[
"Lao",
"Yingjie",
""
],
[
"Parhi",
"Keshab K.",
""
]
] |
new_dataset
| 0.998801 |
2305.00763
|
Peterson Yuhala
|
Peterson Yuhala, Michael Paper, Timoth\'ee Zerbib, Pascal Felber,
Valerio Schiavoni, Alain Tchana
|
SGX Switchless Calls Made Configless
|
10 pages, 53rd Annual IEEE/IFIP International Conference on
Dependable Systems and Networks (DSN)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Intel's software guard extensions (SGX) provide hardware enclaves to
guarantee confidentiality and integrity for sensitive code and data. However,
systems leveraging such security mechanisms must often pay high performance
overheads. A major source of this overhead is SGX enclave transitions which
induce expensive cross-enclave context switches. The Intel SGX SDK mitigates
this with a switchless call mechanism for transitionless cross-enclave calls
using worker threads. Intel's SGX switchless call implementation improves
performance but provides limited flexibility: developers need to statically fix
the system configuration at build time, which is error-prone and
misconfigurations lead to performance degradations and waste of CPU resources.
ZC-SWITCHLESS is a configless and efficient technique to drive the execution of
SGX switchless calls. Its dynamic approach optimises the total switchless
worker threads at runtime to minimise CPU waste. The experimental evaluation
shows that ZC-SWITCHLESS obviates the performance penalty of misconfigured
switchless systems while minimising CPU waste.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 10:45:24 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 06:55:24 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Yuhala",
"Peterson",
""
],
[
"Paper",
"Michael",
""
],
[
"Zerbib",
"Timothée",
""
],
[
"Felber",
"Pascal",
""
],
[
"Schiavoni",
"Valerio",
""
],
[
"Tchana",
"Alain",
""
]
] |
new_dataset
| 0.994763 |
2305.16724
|
I-Hung Hsu
|
I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, Jing Huang
|
Code-Switched Text Synthesis in Unseen Language Pairs
|
Paper accepted by ACL2023 as a Finding paper
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing efforts on text synthesis for code-switching mostly require training
on code-switched texts in the target language pairs, limiting the deployment of
the models to cases lacking code-switched data. In this work, we study the
problem of synthesizing code-switched texts for language pairs absent from the
training data. We introduce GLOSS, a model built on top of a pre-trained
multilingual machine translation model (PMMTM) with an additional
code-switching module. This module, either an adapter or extra prefixes, learns
code-switching patterns from code-switched data during training, while the
primary component of GLOSS, i.e., the PMMTM, is frozen. The design of only
adjusting the code-switching module prevents our model from overfitting to the
constrained training data for code-switching. Hence, GLOSS exhibits the ability
to generalize and synthesize code-switched texts across a broader spectrum of
language pairs. Additionally, we develop a self-training algorithm on target
language pairs further to enhance the reliability of GLOSS. Automatic
evaluations on four language pairs show that GLOSS achieves at least 55%
relative BLEU and METEOR scores improvements compared to strong baselines.
Human evaluations on two language pairs further validate the success of GLOSS.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 08:22:35 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 07:51:38 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Hsu",
"I-Hung",
""
],
[
"Ray",
"Avik",
""
],
[
"Garg",
"Shubham",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Huang",
"Jing",
""
]
] |
new_dataset
| 0.991935 |
2305.18098
|
Wen Yang
|
Wen Yang, Chong Li, Jiajun Zhang, Chengqing Zong
|
BigTranslate: Augmenting Large Language Models with Multilingual
Translation Capability over 100 Languages
|
12 pages, 4 figures. Our model is available at
https://github.com/ZNLP/BigTranslate
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) demonstrate promising translation performance
among various natural languages. However, many LLMs especially the open-sourced
ones, such as BLOOM and LLaMA, are English-dominant and support only dozens of
natural languages, making the potential of LLMs on language translation less
explored. In this work, we present BigTranslate which adapts LLaMA that covers
only 20 languages and enhances it with multilingual translation capability on
more than 100 languages. BigTranslate is built upon LLaMA-13B and it is
optimized in three steps. First, we continue training LLaMA with massive
Chinese monolingual data. Second, we continue training the model with a
large-scale parallel dataset that covers 102 natural languages. Third, we
instruct-tune the foundation model with multilingual translation instructions,
leading to our BigTranslate model. The preliminary experiments on multilingual
translation show that BigTranslate performs comparably with ChatGPT and Google
Translate in many languages and even outperforms ChatGPT in 8 language pairs.
We release the BigTranslate model and hope it can advance the research
progress.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 14:07:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 08:45:42 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Yang",
"Wen",
""
],
[
"Li",
"Chong",
""
],
[
"Zhang",
"Jiajun",
""
],
[
"Zong",
"Chengqing",
""
]
] |
new_dataset
| 0.980802 |
2306.01304
|
Haojie Wei
|
Haojie Wei, Jun Yuan, Rui Zhang, Yueguo Chen, Gang Wang
|
JEPOO: Highly Accurate Joint Estimation of Pitch, Onset and Offset for
Music Information Retrieval
|
This paper has been accepted by IJCAI 2023; 11 pages, 6 figures
| null | null | null |
cs.SD cs.IR cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Melody extraction is a core task in music information retrieval, and the
estimation of pitch, onset and offset are key sub-tasks in melody extraction.
Existing methods have limited accuracy, and work for only one type of data,
either single-pitch or multipitch. In this paper, we propose a highly accurate
method for joint estimation of pitch, onset and offset, named JEPOO. We address
the challenges of joint learning optimization and handling both single-pitch
and multi-pitch data through novel model design and a new optimization
technique named Pareto modulated loss with loss weight regularization. This is
the first method that can accurately handle both single-pitch and multi-pitch
music data, and even a mix of them. A comprehensive experimental study on a
wide range of real datasets shows that JEPOO outperforms state-ofthe-art
methods by up to 10.6%, 8.3% and 10.3% for the prediction of Pitch, Onset and
Offset, respectively, and JEPOO is robust for various types of data and
instruments. The ablation study shows the effectiveness of each component of
JEPOO.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 07:04:33 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 09:57:54 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Wei",
"Haojie",
""
],
[
"Yuan",
"Jun",
""
],
[
"Zhang",
"Rui",
""
],
[
"Chen",
"Yueguo",
""
],
[
"Wang",
"Gang",
""
]
] |
new_dataset
| 0.992019 |
2306.07520
|
Weizhen He
|
Weizhen He and Shixiang Tang and Yiheng Deng and Qihao Chen and
Qingsong Xie and Yizhou Wang and Lei Bai and Feng Zhu and Rui Zhao and Wanli
Ouyang and Donglian Qi and Yunfeng Yan
|
Retrieve Anyone: A General-purpose Person Re-identification Task with
Instructions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human intelligence can retrieve any person according to both visual and
language descriptions. However, the current computer vision community studies
specific person re-identification (ReID) tasks in different scenarios
separately, which limits the applications in the real world. This paper strives
to resolve this problem by proposing a new instruct-ReID task that requires the
model to retrieve images according to the given image or language
instructions.Our instruct-ReID is a more general ReID setting, where existing
ReID tasks can be viewed as special cases by designing different instructions.
We propose a large-scale OmniReID benchmark and an adaptive triplet loss as a
baseline method to facilitate research in this new setting. Experimental
results show that the baseline model trained on our OmniReID benchmark can
improve +0.6%, +1.4%, 0.2% mAP on Market1501, CUHK03, MSMT17 for traditional
ReID, +0.8%, +2.0%, +13.4% mAP on PRCC, VC-Clothes, LTCC for clothes-changing
ReID, +11.7% mAP on COCAS+ real2 for clothestemplate based clothes-changing
ReID when using only RGB images, +25.4% mAP on COCAS+ real2 for our newly
defined language-instructed ReID. The dataset, model, and code will be
available at https://github.com/hwz-zju/Instruct-ReID.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 03:25:33 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 13:59:04 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Jul 2023 04:57:22 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"He",
"Weizhen",
""
],
[
"Tang",
"Shixiang",
""
],
[
"Deng",
"Yiheng",
""
],
[
"Chen",
"Qihao",
""
],
[
"Xie",
"Qingsong",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Bai",
"Lei",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Qi",
"Donglian",
""
],
[
"Yan",
"Yunfeng",
""
]
] |
new_dataset
| 0.975932 |
2306.09296
|
Zijun Yao
|
Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin
Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, Chunyang Li, Zheyuan
Zhang, Yushi Bai, Yantao Liu, Amy Xin, Nianyi Lin, Kaifeng Yun, Linlu Gong,
Jianhui Chen, Zhili Wu, Yunjia Qi, Weikai Li, Yong Guan, Kaisheng Zeng, Ji
Qi, Hailong Jin, Jinxin Liu, Yu Gu, Yuan Yao, Ning Ding, Lei Hou, Zhiyuan
Liu, Bin Xu, Jie Tang, Juanzi Li
|
KoLA: Carefully Benchmarking World Knowledge of Large Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The unprecedented performance of large language models (LLMs) necessitates
improvements in evaluations. Rather than merely exploring the breadth of LLM
abilities, we believe meticulous and thoughtful designs are essential to
thorough, unbiased, and applicable evaluations. Given the importance of world
knowledge to LLMs, we construct a Knowledge-oriented LLM Assessment benchmark
(KoLA), in which we carefully design three crucial factors: (1) For ability
modeling, we mimic human cognition to form a four-level taxonomy of
knowledge-related abilities, covering $19$ tasks. (2) For data, to ensure fair
comparisons, we use both Wikipedia, a corpus prevalently pre-trained by LLMs,
along with continuously collected emerging corpora, aiming to evaluate the
capacity to handle unseen data and evolving knowledge. (3) For evaluation
criteria, we adopt a contrastive system, including overall standard scores for
better numerical comparability across tasks and models and a unique
self-contrast metric for automatically evaluating knowledge hallucination. We
evaluate $21$ open-source and commercial LLMs and obtain some intriguing
findings. The KoLA dataset and open-participation leaderboard are publicly
released at https://kola.xlore.cn and will be continuously updated to provide
references for developing LLMs and knowledge-related systems.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:20:46 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 17:25:10 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Yu",
"Jifan",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Tu",
"Shangqing",
""
],
[
"Cao",
"Shulin",
""
],
[
"Zhang-Li",
"Daniel",
""
],
[
"Lv",
"Xin",
""
],
[
"Peng",
"Hao",
""
],
[
"Yao",
"Zijun",
""
],
[
"Zhang",
"Xiaohan",
""
],
[
"Li",
"Hanming",
""
],
[
"Li",
"Chunyang",
""
],
[
"Zhang",
"Zheyuan",
""
],
[
"Bai",
"Yushi",
""
],
[
"Liu",
"Yantao",
""
],
[
"Xin",
"Amy",
""
],
[
"Lin",
"Nianyi",
""
],
[
"Yun",
"Kaifeng",
""
],
[
"Gong",
"Linlu",
""
],
[
"Chen",
"Jianhui",
""
],
[
"Wu",
"Zhili",
""
],
[
"Qi",
"Yunjia",
""
],
[
"Li",
"Weikai",
""
],
[
"Guan",
"Yong",
""
],
[
"Zeng",
"Kaisheng",
""
],
[
"Qi",
"Ji",
""
],
[
"Jin",
"Hailong",
""
],
[
"Liu",
"Jinxin",
""
],
[
"Gu",
"Yu",
""
],
[
"Yao",
"Yuan",
""
],
[
"Ding",
"Ning",
""
],
[
"Hou",
"Lei",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Xu",
"Bin",
""
],
[
"Tang",
"Jie",
""
],
[
"Li",
"Juanzi",
""
]
] |
new_dataset
| 0.998662 |
2306.16731
|
Tobias Weinzierl
|
Chung Ming Loi, Tobias Weinzierl
|
SYCL compute kernels for ExaHyPE
| null | null | null | null |
cs.MS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss three SYCL realisations of a simple Finite Volume scheme over
multiple Cartesian patches. The realisation flavours differ in the way how they
map the compute steps onto loops and tasks: We compare an implementation which
is exclusively using a cascade of for-loops to a version which uses nested
parallelism, and finally benchmark these against a version which models the
calculations as task graph. Our work proposes realisation idioms to realise
these flavours within SYCL. The idioms translate to some degree to other GPGPU
programming techniques, too. Our preliminary results suggest that SYCL's
capability to model calculations via tasks or nested parallelism does not yet
allow such realisations to outperform their counterparts using exclusively data
parallelism.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 07:14:17 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 14:34:51 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Jul 2023 05:54:56 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Jul 2023 08:32:21 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Loi",
"Chung Ming",
""
],
[
"Weinzierl",
"Tobias",
""
]
] |
new_dataset
| 0.99142 |
2306.17103
|
Le Zhuo
|
Le Zhuo, Ruibin Yuan, Jiahao Pan, Yinghao Ma, Yizhi LI, Ge Zhang, Si
Liu, Roger Dannenberg, Jie Fu, Chenghua Lin, Emmanouil Benetos, Wenhu Chen,
Wei Xue, Yike Guo
|
LyricWhiz: Robust Multilingual Zero-shot Lyrics Transcription by
Whispering to ChatGPT
|
9 pages, 2 figures, 5 tables, accepted by ISMIR 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce LyricWhiz, a robust, multilingual, and zero-shot automatic
lyrics transcription method achieving state-of-the-art performance on various
lyrics transcription datasets, even in challenging genres such as rock and
metal. Our novel, training-free approach utilizes Whisper, a weakly supervised
robust speech recognition model, and GPT-4, today's most performant chat-based
large language model. In the proposed method, Whisper functions as the "ear" by
transcribing the audio, while GPT-4 serves as the "brain," acting as an
annotator with a strong performance for contextualized output selection and
correction. Our experiments show that LyricWhiz significantly reduces Word
Error Rate compared to existing methods in English and can effectively
transcribe lyrics across multiple languages. Furthermore, we use LyricWhiz to
create the first publicly available, large-scale, multilingual lyrics
transcription dataset with a CC-BY-NC-SA copyright license, based on
MTG-Jamendo, and offer a human-annotated subset for noise level estimation and
evaluation. We anticipate that our proposed method and dataset will advance the
development of multilingual lyrics transcription, a challenging and emerging
task.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 17:01:51 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 16:32:26 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Zhuo",
"Le",
""
],
[
"Yuan",
"Ruibin",
""
],
[
"Pan",
"Jiahao",
""
],
[
"Ma",
"Yinghao",
""
],
[
"LI",
"Yizhi",
""
],
[
"Zhang",
"Ge",
""
],
[
"Liu",
"Si",
""
],
[
"Dannenberg",
"Roger",
""
],
[
"Fu",
"Jie",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Benetos",
"Emmanouil",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Xue",
"Wei",
""
],
[
"Guo",
"Yike",
""
]
] |
new_dataset
| 0.99959 |
2306.17258
|
Ira Wolfson
|
Ira Wolfson
|
Suffering Toasters -- A New Self-Awareness Test for AI
|
4 double-column pages, 2 figures
| null | null | null |
cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A widely accepted definition of intelligence in the context of Artificial
Intelligence (AI) still eludes us. Due to our exceedingly rapid development of
AI paradigms, architectures, and tools, the prospect of naturally arising AI
consciousness seems more likely than ever. In this paper, we claim that all
current intelligence tests are insufficient to point to the existence or lack
of intelligence \textbf{as humans intuitively perceive it}. We draw from ideas
in the philosophy of science, psychology, and other areas of research to
provide a clearer definition of the problems of artificial intelligence,
self-awareness, and agency. We furthermore propose a new heuristic approach to
test for artificial self-awareness and outline a possible implementation.
Finally, we discuss some of the questions that arise from this new heuristic,
be they philosophical or implementation-oriented.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 18:58:01 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 07:00:22 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Wolfson",
"Ira",
""
]
] |
new_dataset
| 0.956113 |
2307.03177
|
Tianhao Wu
|
Tianhao Wu, Chuanxia Zheng, Tat-Jen Cham
|
IPO-LDM: Depth-aided 360-degree Indoor RGB Panorama Outpainting via
Latent Diffusion Model
|
Project Page:https://sm0kywu.github.io/ipoldm/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating complete 360-degree panoramas from narrow field of view images is
ongoing research as omnidirectional RGB data is not readily available. Existing
GAN-based approaches face some barriers to achieving higher quality output, and
have poor generalization performance over different mask types. In this paper,
we present our 360-degree indoor RGB panorama outpainting model using latent
diffusion models (LDM), called IPO-LDM. We introduce a new bi-modal latent
diffusion structure that utilizes both RGB and depth panoramic data during
training, but works surprisingly well to outpaint normal depth-free RGB images
during inference. We further propose a novel technique of introducing
progressive camera rotations during each diffusion denoising step, which leads
to substantial improvement in achieving panorama wraparound consistency.
Results show that our IPO-LDM not only significantly outperforms
state-of-the-art methods on RGB panorama outpainting, but can also produce
multiple and diverse well-structured results for different types of masks.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 17:57:02 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 04:37:46 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Wu",
"Tianhao",
""
],
[
"Zheng",
"Chuanxia",
""
],
[
"Cham",
"Tat-Jen",
""
]
] |
new_dataset
| 0.983748 |
2307.03244
|
Kai Yan
|
Kai Yan, Fujun Luan, Milo\v{S} Ha\v{S}An, Thibault Groueix, Valentin
Deschaintre, Shuang Zhao
|
PSDR-Room: Single Photo to Scene using Differentiable Rendering
| null | null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A 3D digital scene contains many components: lights, materials and
geometries, interacting to reach the desired appearance. Staging such a scene
is time-consuming and requires both artistic and technical skills. In this
work, we propose PSDR-Room, a system allowing to optimize lighting as well as
the pose and materials of individual objects to match a target image of a room
scene, with minimal user input. To this end, we leverage a recent path-space
differentiable rendering approach that provides unbiased gradients of the
rendering with respect to geometry, lighting, and procedural materials,
allowing us to optimize all of these components using gradient descent to
visually match the input photo appearance. We use recent single-image scene
understanding methods to initialize the optimization and search for appropriate
3D models and materials. We evaluate our method on real photographs of indoor
scenes and demonstrate the editability of the resulting scene components.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 18:17:59 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Yan",
"Kai",
""
],
[
"Luan",
"Fujun",
""
],
[
"HaŠAn",
"MiloŠ",
""
],
[
"Groueix",
"Thibault",
""
],
[
"Deschaintre",
"Valentin",
""
],
[
"Zhao",
"Shuang",
""
]
] |
new_dataset
| 0.999765 |
2307.03274
|
Enfa George
|
Enfa George, Mihai Surdeanu
|
It is not Sexually Suggestive, It is Educative. Separating Sex Education
from Suggestive Content on TikTok Videos
|
Accepted to ACL Findings 2023. 10 pages, 3 figures, 5 tables . Please
refer to https://github.com/enfageorge/SexTok for dataset and related details
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce SexTok, a multi-modal dataset composed of TikTok videos labeled
as sexually suggestive (from the annotator's point of view), sex-educational
content, or neither. Such a dataset is necessary to address the challenge of
distinguishing between sexually suggestive content and virtual sex education
videos on TikTok. Children's exposure to sexually suggestive videos has been
shown to have adversarial effects on their development. Meanwhile, virtual sex
education, especially on subjects that are more relevant to the LGBTQIA+
community, is very valuable. The platform's current system removes or penalizes
some of both types of videos, even though they serve different purposes. Our
dataset contains video URLs, and it is also audio transcribed. To validate its
importance, we explore two transformer-based models for classifying the videos.
Our preliminary results suggest that the task of distinguishing between these
types of videos is learnable but challenging. These experiments suggest that
this dataset is meaningful and invites further study on the subject.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 20:23:17 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"George",
"Enfa",
""
],
[
"Surdeanu",
"Mihai",
""
]
] |
new_dataset
| 0.999277 |
2307.03313
|
Vivek Gupta
|
Siddharth Khincha, Chelsi Jain, Vivek Gupta, Tushar Kataria, Shuo
Zhang
|
InfoSync: Information Synchronization across Multilingual
Semi-structured Tables
|
22 pages, 7 figures, 20 tables, ACL 2023 (Toronto, Canada)
| null | null | null |
cs.CL cs.CY cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Information Synchronization of semi-structured data across languages is
challenging. For instance, Wikipedia tables in one language should be
synchronized across languages. To address this problem, we introduce a new
dataset InfoSyncC and a two-step method for tabular synchronization. InfoSync
contains 100K entity-centric tables (Wikipedia Infoboxes) across 14 languages,
of which a subset (3.5K pairs) are manually annotated. The proposed method
includes 1) Information Alignment to map rows and 2) Information Update for
updating missing/outdated information for aligned tables across multilingual
tables. When evaluated on InfoSync, information alignment achieves an F1 score
of 87.91 (en <-> non-en). To evaluate information updation, we perform
human-assisted Wikipedia edits on Infoboxes for 603 table pairs. Our approach
obtains an acceptance rate of 77.28% on Wikipedia, showing the effectiveness of
the proposed method.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 21:55:15 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Khincha",
"Siddharth",
""
],
[
"Jain",
"Chelsi",
""
],
[
"Gupta",
"Vivek",
""
],
[
"Kataria",
"Tushar",
""
],
[
"Zhang",
"Shuo",
""
]
] |
new_dataset
| 0.999224 |
2307.03378
|
Bruce W. Lee
|
Bruce W. Lee, BongSeok Yang, Jason Hyung-Jong Lee
|
A Side-by-side Comparison of Transformers for English Implicit Discourse
Relation Classification
|
TrustNLP @ ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Though discourse parsing can help multiple NLP fields, there has been no wide
language model search done on implicit discourse relation classification. This
hinders researchers from fully utilizing public-available models in discourse
analysis. This work is a straightforward, fine-tuned discourse performance
comparison of seven pre-trained language models. We use PDTB-3, a popular
discourse relation annotated dataset. Through our model search, we raise SOTA
to 0.671 ACC and obtain novel observations. Some are contrary to what has been
reported before (Shi and Demberg, 2019b), that sentence-level pre-training
objectives (NSP, SBO, SOP) generally fail to produce the best performing model
for implicit discourse relation classification. Counterintuitively,
similar-sized PLMs with MLM and full attention led to better performance.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 04:12:25 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Lee",
"Bruce W.",
""
],
[
"Yang",
"BongSeok",
""
],
[
"Lee",
"Jason Hyung-Jong",
""
]
] |
new_dataset
| 0.985677 |
2307.03386
|
Amiangshu Bosu
|
Jaydeb Saker and Sayma Sultana and Steven R. Wilson and Amiangshu Bosu
|
ToxiSpanSE: An Explainable Toxicity Detection in Code Review Comments
| null |
The 17th ACM/IEEE International Symposium on Empirical Software
Engineering and Measurement (ESEM), 2023
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: The existence of toxic conversations in open-source platforms can
degrade relationships among software developers and may negatively impact
software product quality. To help mitigate this, some initial work has been
done to detect toxic comments in the Software Engineering (SE) domain. Aims:
Since automatically classifying an entire text as toxic or non-toxic does not
help human moderators to understand the specific reason(s) for toxicity, we
worked to develop an explainable toxicity detector for the SE domain. Method:
Our explainable toxicity detector can detect specific spans of toxic content
from SE texts, which can help human moderators by automatically highlighting
those spans. This toxic span detection model, ToxiSpanSE, is trained with the
19,651 code review (CR) comments with labeled toxic spans. Our annotators
labeled the toxic spans within 3,757 toxic CR samples. We explored several
types of models, including one lexicon-based approach and five different
transformer-based encoders. Results: After an extensive evaluation of all
models, we found that our fine-tuned RoBERTa model achieved the best score with
0.88 $F1$, 0.87 precision, and 0.93 recall for toxic class tokens, providing an
explainable toxicity classifier for the SE domain. Conclusion: Since ToxiSpanSE
is the first tool to detect toxic spans in the SE domain, this tool will pave a
path to combat toxicity in the SE community.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 04:55:11 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Saker",
"Jaydeb",
""
],
[
"Sultana",
"Sayma",
""
],
[
"Wilson",
"Steven R.",
""
],
[
"Bosu",
"Amiangshu",
""
]
] |
new_dataset
| 0.999488 |
2307.03388
|
Nhi Kieu
|
Nhi Kieu, Kien Nguyen, Sridha Sridharan, Clinton Fookes
|
General-Purpose Multimodal Transformer meets Remote Sensing Semantic
Segmentation
|
Accepted to CVPR Workshop on Multimodal Learning for Earth and
Environment 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The advent of high-resolution multispectral/hyperspectral sensors, LiDAR DSM
(Digital Surface Model) information and many others has provided us with an
unprecedented wealth of data for Earth Observation. Multimodal AI seeks to
exploit those complementary data sources, particularly for complex tasks like
semantic segmentation. While specialized architectures have been developed,
they are highly complicated via significant effort in model design, and require
considerable re-engineering whenever a new modality emerges. Recent trends in
general-purpose multimodal networks have shown great potential to achieve
state-of-the-art performance across multiple multimodal tasks with one unified
architecture. In this work, we investigate the performance of PerceiverIO, one
in the general-purpose multimodal family, in the remote sensing semantic
segmentation domain. Our experiments reveal that this ostensibly universal
network struggles with object scale variation in remote sensing images and
fails to detect the presence of cars from a top-down view. To address these
issues, even with extreme class imbalance issues, we propose a spatial and
volumetric learning component. Specifically, we design a UNet-inspired module
that employs 3D convolution to encode vital local information and learn
cross-modal features simultaneously, while reducing network computational
burden via the cross-attention mechanism of PerceiverIO. The effectiveness of
the proposed component is validated through extensive experiments comparing it
with other methods such as 2D convolution, and dual local module (\ie the
combination of Conv2D 1x1 and Conv2D 3x3 inspired by UNetFormer). The proposed
method achieves competitive results with specialized architectures like
UNetFormer and SwinUNet, showing its potential to minimize network architecture
engineering with a minimal compromise on the performance.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 04:58:34 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Kieu",
"Nhi",
""
],
[
"Nguyen",
"Kien",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
]
] |
new_dataset
| 0.98999 |
2307.03401
|
Takahiro Yabe
|
Takahiro Yabe, Kota Tsubouchi, Toru Shimizu, Yoshihide Sekimoto, Kaoru
Sezaki, Esteban Moro, Alex Pentland
|
Metropolitan Scale and Longitudinal Dataset of Anonymized Human Mobility
Trajectories
|
Data descriptor for the Human Mobility Prediction Challenge (HuMob
Challenge) 2023
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling and predicting human mobility trajectories in urban areas is an
essential task for various applications. The recent availability of large-scale
human movement data collected from mobile devices have enabled the development
of complex human mobility prediction models. However, human mobility prediction
methods are often trained and tested on different datasets, due to the lack of
open-source large-scale human mobility datasets amid privacy concerns, posing a
challenge towards conducting fair performance comparisons between methods. To
this end, we created an open-source, anonymized, metropolitan scale, and
longitudinal (90 days) dataset of 100,000 individuals' human mobility
trajectories, using mobile phone location data. The location pings are
spatially and temporally discretized, and the metropolitan area is undisclosed
to protect users' privacy. The 90-day period is composed of 75 days of
business-as-usual and 15 days during an emergency. To promote the use of the
dataset, we will host a human mobility prediction data challenge (`HuMob
Challenge 2023') using the human mobility dataset, which will be held in
conjunction with ACM SIGSPATIAL 2023.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 05:57:58 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Yabe",
"Takahiro",
""
],
[
"Tsubouchi",
"Kota",
""
],
[
"Shimizu",
"Toru",
""
],
[
"Sekimoto",
"Yoshihide",
""
],
[
"Sezaki",
"Kaoru",
""
],
[
"Moro",
"Esteban",
""
],
[
"Pentland",
"Alex",
""
]
] |
new_dataset
| 0.99955 |
2307.03402
|
Loc Nguyen
|
Loc X. Nguyen, Ye Lin Tun, Yan Kyaw Tun, Minh N. H. Nguyen, Chaoning
Zhang, Zhu Han, Choong Seon Hong
|
Swin Transformer-Based Dynamic Semantic Communication for Multi-User
with Different Computing Capacity
|
14 pages, 10 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic communication has gained significant attention from researchers as a
promising technique to replace conventional communication in the next
generation of communication systems, primarily due to its ability to reduce
communication costs. However, little literature has studied its effectiveness
in multi-user scenarios, particularly when there are variations in the model
architectures used by users and their computing capacities. To address this
issue, we explore a semantic communication system that caters to multiple users
with different model architectures by using a multi-purpose transmitter at the
base station (BS). Specifically, the BS in the proposed framework employs
semantic and channel encoders to encode the image for transmission, while the
receiver utilizes its local channel and semantic decoder to reconstruct the
original image. Our joint source-channel encoder at the BS can effectively
extract and compress semantic features for specific users by considering the
signal-to-noise ratio (SNR) and computing capacity of the user. Based on the
network status, the joint source-channel encoder at the BS can adaptively
adjust the length of the transmitted signal. A longer signal ensures more
information for high-quality image reconstruction for the user, while a shorter
signal helps avoid network congestion. In addition, we propose a hybrid loss
function for training, which enhances the perceptual details of reconstructed
images. Finally, we conduct a series of extensive evaluations and ablation
studies to validate the effectiveness of the proposed system.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 05:59:36 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Nguyen",
"Loc X.",
""
],
[
"Tun",
"Ye Lin",
""
],
[
"Tun",
"Yan Kyaw",
""
],
[
"Nguyen",
"Minh N. H.",
""
],
[
"Zhang",
"Chaoning",
""
],
[
"Han",
"Zhu",
""
],
[
"Hong",
"Choong Seon",
""
]
] |
new_dataset
| 0.992675 |
2307.03465
|
Zhang Zelun
|
Zelun Zhang, Xue Pan
|
TBGC: Task-level Backbone-Oriented Gradient Clip for Multi-Task
Foundation Model Learning
|
Foundation Model Challenge@CVPR2023, Accepted by CVPR2023 Workshop
|
Conference on Computer Vision and Pattern Recognition, 2023
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The AllInOne training paradigm squeezes a wide range of tasks into a unified
model in a multi-task learning manner. However, optimization in multi-task
learning is more challenge than single-task learning, as the gradient norm from
different tasks may vary greatly, making the backbone overly biased towards one
specific task. To address this issue, we propose the task-level
backbone-oriented gradient clip paradigm, compared with the vanilla gradient
clip method, it has two points of emphasis:1) gradient clip is performed
independently for each task. 2) backbone gradients generated from each task are
rescaled to the same norm scale. Based on the experimental results, we argue
that the task-level backbone-oriented gradient clip paradigm can relieve the
gradient bias problem to some extent. We also propose a novel multi-branch data
augmentation strategy where conflict augmentations are placed in different
branches. Our approach has been shown to be effective and finally achieve 1st
place in the Leaderboard A and 2nd place in the Leaderboard B of the CVPR2023
Foundation Model Challenge. It's worth noting that instead of evaluating all
three tasks(detection, segmentation and fine-grained classification) in
Leaderboard A, the segmentation task is not evaluated in Leaderboard B, in
which our team has a huge advantage.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 08:57:57 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Zhang",
"Zelun",
""
],
[
"Pan",
"Xue",
""
]
] |
new_dataset
| 0.980871 |
2307.03494
|
Jia-Qi Zhang
|
Jia-Qi Zhang, Hao-Bin Duan, Jun-Long Chen, Ariel Shamir and Miao Wang
|
HoughLaneNet: Lane Detection with Deep Hough Transform and Dynamic
Convolution
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The task of lane detection has garnered considerable attention in the field
of autonomous driving due to its complexity. Lanes can present difficulties for
detection, as they can be narrow, fragmented, and often obscured by heavy
traffic. However, it has been observed that the lanes have a geometrical
structure that resembles a straight line, leading to improved lane detection
results when utilizing this characteristic. To address this challenge, we
propose a hierarchical Deep Hough Transform (DHT) approach that combines all
lane features in an image into the Hough parameter space. Additionally, we
refine the point selection method and incorporate a Dynamic Convolution Module
to effectively differentiate between lanes in the original image. Our network
architecture comprises a backbone network, either a ResNet or Pyramid Vision
Transformer, a Feature Pyramid Network as the neck to extract multi-scale
features, and a hierarchical DHT-based feature aggregation head to accurately
segment each lane. By utilizing the lane features in the Hough parameter space,
the network learns dynamic convolution kernel parameters corresponding to each
lane, allowing the Dynamic Convolution Module to effectively differentiate
between lane features. Subsequently, the lane features are fed into the feature
decoder, which predicts the final position of the lane. Our proposed network
structure demonstrates improved performance in detecting heavily occluded or
worn lane images, as evidenced by our extensive experimental results, which
show that our method outperforms or is on par with state-of-the-art techniques.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 10:08:29 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Zhang",
"Jia-Qi",
""
],
[
"Duan",
"Hao-Bin",
""
],
[
"Chen",
"Jun-Long",
""
],
[
"Shamir",
"Ariel",
""
],
[
"Wang",
"Miao",
""
]
] |
new_dataset
| 0.976892 |
2307.03505
|
Ben Chen
|
Ben Chen, Caihua Xiong, Quanlin Li, Zhonghua Wan
|
RCDN -- Robust X-Corner Detection Algorithm based on Advanced CNN Model
|
15 pages, 8 figures and 4 tables. Unpublished further research and
experiments of Checkerboard corner detection network CCDN (arXiv:2302.05097)
and application exploration for robust camera calibration
(https://ieeexplore.ieee.org/abstract/document/9428389)
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate detection and localization of X-corner on both planar and non-planar
patterns is a core step in robotics and machine vision. However, previous works
could not make a good balance between accuracy and robustness, which are both
crucial criteria to evaluate the detectors performance. To address this
problem, in this paper we present a novel detection algorithm which can
maintain high sub-pixel precision on inputs under multiple interference, such
as lens distortion, extreme poses and noise. The whole algorithm, adopting a
coarse-to-fine strategy, contains a X-corner detection network and three
post-processing techniques to distinguish the correct corner candidates, as
well as a mixed sub-pixel refinement technique and an improved region growth
strategy to recover the checkerboard pattern partially visible or occluded
automatically. Evaluations on real and synthetic images indicate that the
presented algorithm has the higher detection rate, sub-pixel accuracy and
robustness than other commonly used methods. Finally, experiments of camera
calibration and pose estimation verify it can also get smaller re-projection
error in quantitative comparisons to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 10:40:41 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Chen",
"Ben",
""
],
[
"Xiong",
"Caihua",
""
],
[
"Li",
"Quanlin",
""
],
[
"Wan",
"Zhonghua",
""
]
] |
new_dataset
| 0.975965 |
2307.03547
|
Tamas David-Barrett
|
Tam\'as D\'avid-Barrett, Sebastian Diaz, Carlos Rodriguez-Sickert,
Isabel Behncke, Anna Rotkirch, J\'anos Kert\'esz, Loreto Bravo
|
In A Society of Strangers, Kin Is Still Key: Identified Family Relations
In Large-Scale Mobile Phone Data
|
26 pages, 5 figures, 1 table, supplementary material at the end
| null | null | null |
cs.SI physics.soc-ph q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Mobile call networks have been widely used to investigate communication
patterns and the network of interactions of humans at the societal scale. Yet,
more detailed analysis is often hindered by having no information about the
nature of the relationships, even if some metadata about the individuals are
available. Using a unique, large mobile phone database with information about
individual surnames in a population in which people inherit two surnames: one
from their father, and one from their mother, we are able to differentiate
among close kin relationship types. Here we focus on the difference between the
most frequently called alters depending on whether they are family
relationships or not. We find support in the data for two hypotheses: (1) phone
calls between family members are more frequent and last longer than phone calls
between non-kin, and (2) the phone call pattern between family members show a
higher variation depending on the stage of life-course compared to non-family
members. We give an interpretation of these findings within the framework of
evolutionary anthropology: kinship matters even when demographic processes,
such as low fertility, urbanisation and migration reduce the access to family
members. Furthermore, our results provide tools for distinguishing between
different kinds of kin relationships from mobile call data, when information
about names are unavailable.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 12:23:19 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Dávid-Barrett",
"Tamás",
""
],
[
"Diaz",
"Sebastian",
""
],
[
"Rodriguez-Sickert",
"Carlos",
""
],
[
"Behncke",
"Isabel",
""
],
[
"Rotkirch",
"Anna",
""
],
[
"Kertész",
"János",
""
],
[
"Bravo",
"Loreto",
""
]
] |
new_dataset
| 0.998002 |
2307.03550
|
Ipek Baris Schlicht
|
Ipek Baris Schlicht and Lynn Khellaf and Defne Altiok
|
DWReCO at CheckThat! 2023: Enhancing Subjectivity Detection through
Style-based Data Sampling
|
Accepted to CLEF CheckThat! Lab
| null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes our submission for the subjectivity detection task at
the CheckThat! Lab. To tackle class imbalances in the task, we have generated
additional training materials with GPT-3 models using prompts of different
styles from a subjectivity checklist based on journalistic perspective. We used
the extended training set to fine-tune language-specific transformer models.
Our experiments in English, German and Turkish demonstrate that different
subjective styles are effective across all languages. In addition, we observe
that the style-based oversampling is better than paraphrasing in Turkish and
English. Lastly, the GPT-3 models sometimes produce lacklustre results when
generating style-based texts in non-English languages.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 12:34:57 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Schlicht",
"Ipek Baris",
""
],
[
"Khellaf",
"Lynn",
""
],
[
"Altiok",
"Defne",
""
]
] |
new_dataset
| 0.996026 |
2307.03556
|
Jack Culbert
|
Jack H. Culbert
|
4TCT, A 4chan Text Collection Tool
|
5 pages. For code repository, see http://github.com/jhculb/4TCT
| null | null | null |
cs.DL cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
4chan is a popular online imageboard which has been widely studied due to an
observed concentration of far-right, antisemitic, racist, misogynistic, and
otherwise hateful material being posted to the site, as well as the emergence
of political movements and the evolution of memes which are posted there,
discussed in Section 1.1. We have created a tool developed in Python which
utilises the 4chan API to collect data from a selection of boards. This paper
accompanies the release of the code via the github repository:
https://github.com/jhculb/4TCT. We believe this tool will be of use to
academics studying 4chan by providing a tool for collection of data from 4chan
to sociological researchers, and potentially contributing to GESIS' Digital
Behavioural Data project.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 12:46:00 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Culbert",
"Jack H.",
""
]
] |
new_dataset
| 0.99067 |
2307.03586
|
Mattia Giovanni Campana
|
Mattia Giovanni Campana, Franca Delmastro
|
ContextLabeler Dataset: physical and virtual sensors data collected from
smartphone usage in-the-wild
| null |
Elsevier Data in Brief, Volume 37, 2021
|
10.1016/j.dib.2021.107164
| null |
cs.HC cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a data collection campaign and the resulting dataset
derived from smartphone sensors characterizing the daily life activities of 3
volunteers in a period of two weeks. The dataset is released as a collection of
CSV files containing more than 45K data samples, where each sample is composed
by 1332 features related to a heterogeneous set of physical and virtual
sensors, including motion sensors, running applications, devices in proximity,
and weather conditions. Moreover, each data sample is associated with a ground
truth label that describes the user activity and the situation in which she was
involved during the sensing experiment (e.g., working, at restaurant, and doing
sport activity). To avoid introducing any bias during the data collection, we
performed the sensing experiment in-the-wild, that is, by using the volunteers'
devices, and without defining any constraint related to the user's behavior.
For this reason, the collected dataset represents a useful source of real data
to both define and evaluate a broad set of novel context-aware solutions (both
algorithms and protocols) that aim to adapt their behavior according to the
changes in the user's situation in a mobile environment.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 13:28:29 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Campana",
"Mattia Giovanni",
""
],
[
"Delmastro",
"Franca",
""
]
] |
new_dataset
| 0.999896 |
2307.03592
|
Paula Feldman
|
Paula Feldman, Miguel Fainstein, Viviana Siless, Claudio Delrieux,
Emmanuel Iarussi
|
VesselVAE: Recursive Variational Autoencoders for 3D Blood Vessel
Synthesis
|
Accepted for MICCAI 2023
| null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a data-driven generative framework for synthesizing blood vessel
3D geometry. This is a challenging task due to the complexity of vascular
systems, which are highly variating in shape, size, and structure. Existing
model-based methods provide some degree of control and variation in the
structures produced, but fail to capture the diversity of actual anatomical
data. We developed VesselVAE, a recursive variational Neural Network that fully
exploits the hierarchical organization of the vessel and learns a
low-dimensional manifold encoding branch connectivity along with geometry
features describing the target surface. After training, the VesselVAE latent
space can be sampled to generate new vessel geometries. To the best of our
knowledge, this work is the first to utilize this technique for synthesizing
blood vessels. We achieve similarities of synthetic and real data for radius
(.97), length (.95), and tortuosity (.96). By leveraging the power of deep
neural networks, we generate 3D models of blood vessels that are both accurate
and diverse, which is crucial for medical and surgical training, hemodynamic
simulations, and many other purposes.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 13:35:48 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Feldman",
"Paula",
""
],
[
"Fainstein",
"Miguel",
""
],
[
"Siless",
"Viviana",
""
],
[
"Delrieux",
"Claudio",
""
],
[
"Iarussi",
"Emmanuel",
""
]
] |
new_dataset
| 0.999756 |
2307.03601
|
Shilong Zhang
|
Shilong Zhang, Peize Sun, Shoufa Chen, Min Xiao, Wenqi Shao, Wenwei
Zhang, Kai Chen, Ping Luo
|
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
|
Code has been released at https://github.com/jshilong/GPT4RoI
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instruction tuning large language model (LLM) on image-text pairs has
achieved unprecedented vision-language multimodal abilities. However, their
vision-language alignments are only built on image-level, the lack of
region-level alignment limits their advancements to fine-grained multimodal
understanding. In this paper, we propose instruction tuning on
region-of-interest. The key design is to reformulate the bounding box as the
format of spatial instruction. The interleaved sequences of visual features
extracted by the spatial instruction and the language embedding are input to
LLM, and trained on the transformed region-text data in instruction tuning
format. Our region-level vision-language model, termed as GPT4RoI, brings brand
new conversational and interactive experience beyond image-level understanding.
(1) Controllability: Users can interact with our model by both language and
spatial instructions to flexibly adjust the detail level of the question. (2)
Capacities: Our model supports not only single-region spatial instruction but
also multi-region. This unlocks more region-level multimodal capacities such as
detailed region caption and complex region reasoning. (3) Composition: Any
off-the-shelf object detector can be a spatial instruction provider so as to
mine informative object attributes from our model, like color, shape, material,
action, relation to other objects, etc. The code, data, and demo can be found
at https://github.com/jshilong/GPT4RoI.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 13:43:44 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Zhang",
"Shilong",
""
],
[
"Sun",
"Peize",
""
],
[
"Chen",
"Shoufa",
""
],
[
"Xiao",
"Min",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Chen",
"Kai",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.956654 |
2307.03609
|
Adam Jenkins
|
Adam Jenkins, Maria Wolters, Kami Vaniea
|
To Patch, or not To Patch? That is the Question: A Case Study of System
Administrators' Online Collaborative Behaviour
| null | null | null | null |
cs.HC cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
System administrators, similar to end users, may delay or avoid software
patches, also known as updates, despite the impact their timely application can
have on system security. These admins are responsible for large, complex,
amalgamated systems and must balance the security related needs of their
organizations, which would benefit from the patch, with the need to ensure that
systems must continue to run unimpeded. In this paper, we present a case study
which follows the online life-cycle of a pair of Microsoft patches. We find
that communities of sysadmins have evolved sophisticated mechanisms to perform
risk assessments that are centred around collecting, synthesizing, and
generating information on patches. These communities span different Virtual
Communities of Practice, as well as influencers who monitor and report on the
impact of new patches. As information is propagated and aggregated across
blogs, forums, web sites, and mailing lists, eventually resulting in a
consensus around the risk of a patch. Our findings highlight the role that
these communities play in informing risk management decisions: Patch
information is not static, and it transforms as communities collaborate to
understand patch issues.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 14:02:48 GMT"
}
] | 2023-07-10T00:00:00 |
[
[
"Jenkins",
"Adam",
""
],
[
"Wolters",
"Maria",
""
],
[
"Vaniea",
"Kami",
""
]
] |
new_dataset
| 0.990837 |
2101.06454
|
Daoyuan Wu
|
Mengjie Chen, Xiao Yi, Daoyuan Wu, Jianliang Xu, Yingjiu Li, Debin Gao
|
AGChain: A Blockchain-based Gateway for Trustworthy App Delegation from
Mobile App Markets
|
This is a technical report submitted to the Special Issue of the
Elsevier Journal of Systems Architecture (JSA)
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The popularity of smartphones has led to the growth of mobile app markets,
creating a need for enhanced transparency, global access, and secure
downloading. This paper introduces AGChain, a blockchain-based gateway that
enables trustworthy app delegation within existing markets. AGChain ensures
that markets can continue providing services while users benefit from
permanent, distributed, and secure app delegation. During its development, we
address two key challenges: significantly reducing smart contract gas costs and
enabling fully distributed IPFS-based file storage. Additionally, we tackle
three system issues related to security and sustainability. We have implemented
a prototype of AGChain on Ethereum and Polygon blockchains, achieving effective
security and decentralization with a minimal gas cost of around 0.002 USD per
app upload (no cost for app download). The system also exhibits reasonable
performance with an average overhead of 12%.
|
[
{
"version": "v1",
"created": "Sat, 16 Jan 2021 15:19:21 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 09:09:04 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Chen",
"Mengjie",
""
],
[
"Yi",
"Xiao",
""
],
[
"Wu",
"Daoyuan",
""
],
[
"Xu",
"Jianliang",
""
],
[
"Li",
"Yingjiu",
""
],
[
"Gao",
"Debin",
""
]
] |
new_dataset
| 0.998603 |
2110.05192
|
Denizalp Goktas
|
Denizalp Goktas and Amy Greenwald
|
Convex-Concave Min-Max Stackelberg Games
|
25 pages, 4 tables, 1 figure, Forthcoming in NeurIPS 2021
|
Advances in Neural Information Processing Systems 34 (2021)
| null | null |
cs.GT cs.LG cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Min-max optimization problems (i.e., min-max games) have been attracting a
great deal of attention because of their applicability to a wide range of
machine learning problems. Although significant progress has been made
recently, the literature to date has focused on games with independent strategy
sets; little is known about solving games with dependent strategy sets, which
can be characterized as min-max Stackelberg games. We introduce two first-order
methods that solve a large class of convex-concave min-max Stackelberg games,
and show that our methods converge in polynomial time. Min-max Stackelberg
games were first studied by Wald, under the posthumous name of Wald's maximin
model, a variant of which is the main paradigm used in robust optimization,
which means that our methods can likewise solve many convex robust optimization
problems. We observe that the computation of competitive equilibria in Fisher
markets also comprises a min-max Stackelberg game. Further, we demonstrate the
efficacy and efficiency of our algorithms in practice by computing competitive
equilibria in Fisher markets with varying utility structures. Our experiments
suggest potential ways to extend our theoretical results, by demonstrating how
different smoothness properties can affect the convergence rate of our
algorithms.
|
[
{
"version": "v1",
"created": "Tue, 5 Oct 2021 06:09:45 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 06:41:39 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Apr 2022 21:47:44 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Apr 2022 04:39:28 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Apr 2022 05:47:00 GMT"
},
{
"version": "v6",
"created": "Wed, 20 Apr 2022 20:12:54 GMT"
},
{
"version": "v7",
"created": "Wed, 3 Aug 2022 00:53:26 GMT"
},
{
"version": "v8",
"created": "Wed, 5 Jul 2023 21:11:31 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Goktas",
"Denizalp",
""
],
[
"Greenwald",
"Amy",
""
]
] |
new_dataset
| 0.99791 |
2205.09370
|
Nicolas Baumann
|
Edoardo Ghignone, Nicolas Baumann, Mike Boss and Michele Magno
|
TC-Driver: Trajectory Conditioned Driving for Robust Autonomous Racing
-- A Reinforcement Learning Approach
|
6 pages, 4 figures, 3 tables, ICRA, OPPORTUNITIES AND CHALLENGES WITH
AUTONOMOUS RACING, IEEE
|
Field Robotics 2023
|
10.55417/fr.2023020
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous racing is becoming popular for academic and industry researchers
as a test for general autonomous driving by pushing perception, planning, and
control algorithms to their limits. While traditional control methods such as
MPC are capable of generating an optimal control sequence at the edge of the
vehicles physical controllability, these methods are sensitive to the accuracy
of the modeling parameters. This paper presents TC-Driver, a RL approach for
robust control in autonomous racing. In particular, the TC-Driver agent is
conditioned by a trajectory generated by any arbitrary traditional high-level
planner. The proposed TC-Driver addresses the tire parameter modeling
inaccuracies by exploiting the heuristic nature of RL while leveraging the
reliability of traditional planning methods in a hierarchical control
structure. We train the agent under varying tire conditions, allowing it to
generalize to different model parameters, aiming to increase the racing
capabilities of the system in practice. The proposed RL method outperforms a
non-learning-based MPC with a 2.7 lower crash ratio in a model mismatch
setting, underlining robustness to parameter discrepancies. In addition, the
average RL inference duration is 0.25 ms compared to the average MPC solving
time of 11.5 ms, yielding a nearly 40-fold speedup, allowing for complex
control deployment in computationally constrained devices. Lastly, we show that
the frequently utilized end-to-end RL architecture, as a control policy
directly learned from sensory input, is not well suited to model mismatch
robustness nor track generalization. Our realistic simulations show that
TC-Driver achieves a 6.7 and 3-fold lower crash ratio under model mismatch and
track generalization settings, while simultaneously achieving lower lap times
than an end-to-end approach, demonstrating the viability of TC-driver to robust
autonomous racing.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 08:06:10 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 09:27:37 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Ghignone",
"Edoardo",
""
],
[
"Baumann",
"Nicolas",
""
],
[
"Boss",
"Mike",
""
],
[
"Magno",
"Michele",
""
]
] |
new_dataset
| 0.999624 |
2207.10062
|
Colby Banbury
|
Mark Mazumder, Colby Banbury, Xiaozhe Yao, Bojan Karla\v{s}, William
Gaviria Rojas, Sudnya Diamos, Greg Diamos, Lynn He, Alicia Parrish, Hannah
Rose Kirk, Jessica Quaye, Charvi Rastogi, Douwe Kiela, David Jurado, David
Kanter, Rafael Mosquera, Juan Ciro, Lora Aroyo, Bilge Acun, Lingjiao Chen,
Mehul Smriti Raje, Max Bartolo, Sabri Eyuboglu, Amirata Ghorbani, Emmett
Goodman, Oana Inel, Tariq Kane, Christine R. Kirkpatrick, Tzu-Sheng Kuo,
Jonas Mueller, Tristan Thrush, Joaquin Vanschoren, Margaret Warren, Adina
Williams, Serena Yeung, Newsha Ardalani, Praveen Paritosh, Ce Zhang, James
Zou, Carole-Jean Wu, Cody Coleman, Andrew Ng, Peter Mattson, Vijay Janapa
Reddi
|
DataPerf: Benchmarks for Data-Centric AI Development
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning research has long focused on models rather than datasets,
and prominent datasets are used for common ML tasks without regard to the
breadth, difficulty, and faithfulness of the underlying problems. Neglecting
the fundamental importance of data has given rise to inaccuracy, bias, and
fragility in real-world applications, and research is hindered by saturation
across existing dataset benchmarks. In response, we present DataPerf, a
community-led benchmark suite for evaluating ML datasets and data-centric
algorithms. We aim to foster innovation in data-centric AI through competition,
comparability, and reproducibility. We enable the ML community to iterate on
datasets, instead of just architectures, and we provide an open, online
platform with multiple rounds of challenges to support this iterative
development. The first iteration of DataPerf contains five benchmarks covering
a wide spectrum of data-centric techniques, tasks, and modalities in vision,
speech, acquisition, debugging, and diffusion prompting, and we support hosting
new contributed benchmarks from the community. The benchmarks, online
evaluation platform, and baseline implementations are open source, and the
MLCommons Association will maintain DataPerf to ensure long-term benefits to
academia and industry.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 17:47:54 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 20:47:34 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Mazumder",
"Mark",
""
],
[
"Banbury",
"Colby",
""
],
[
"Yao",
"Xiaozhe",
""
],
[
"Karlaš",
"Bojan",
""
],
[
"Rojas",
"William Gaviria",
""
],
[
"Diamos",
"Sudnya",
""
],
[
"Diamos",
"Greg",
""
],
[
"He",
"Lynn",
""
],
[
"Parrish",
"Alicia",
""
],
[
"Kirk",
"Hannah Rose",
""
],
[
"Quaye",
"Jessica",
""
],
[
"Rastogi",
"Charvi",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Jurado",
"David",
""
],
[
"Kanter",
"David",
""
],
[
"Mosquera",
"Rafael",
""
],
[
"Ciro",
"Juan",
""
],
[
"Aroyo",
"Lora",
""
],
[
"Acun",
"Bilge",
""
],
[
"Chen",
"Lingjiao",
""
],
[
"Raje",
"Mehul Smriti",
""
],
[
"Bartolo",
"Max",
""
],
[
"Eyuboglu",
"Sabri",
""
],
[
"Ghorbani",
"Amirata",
""
],
[
"Goodman",
"Emmett",
""
],
[
"Inel",
"Oana",
""
],
[
"Kane",
"Tariq",
""
],
[
"Kirkpatrick",
"Christine R.",
""
],
[
"Kuo",
"Tzu-Sheng",
""
],
[
"Mueller",
"Jonas",
""
],
[
"Thrush",
"Tristan",
""
],
[
"Vanschoren",
"Joaquin",
""
],
[
"Warren",
"Margaret",
""
],
[
"Williams",
"Adina",
""
],
[
"Yeung",
"Serena",
""
],
[
"Ardalani",
"Newsha",
""
],
[
"Paritosh",
"Praveen",
""
],
[
"Zhang",
"Ce",
""
],
[
"Zou",
"James",
""
],
[
"Wu",
"Carole-Jean",
""
],
[
"Coleman",
"Cody",
""
],
[
"Ng",
"Andrew",
""
],
[
"Mattson",
"Peter",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
new_dataset
| 0.962573 |
2210.14896
|
Zijie Wang
|
Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin
Hoover, Duen Horng Chau
|
DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image
Generative Models
|
Accepted to ACL 2023 (nominated for best paper, top 1.6% of
submissions, oral presentation). 17 pages, 11 figures. The dataset is
available at https://huggingface.co/datasets/poloclub/diffusiondb. The code
is at https://github.com/poloclub/diffusiondb. The interactive visualization
demo is at https://poloclub.github.io/diffusiondb/explorer/
| null | null | null |
cs.CV cs.AI cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With recent advancements in diffusion models, users can generate high-quality
images by writing text prompts in natural language. However, generating images
with desired details requires proper prompts, and it is often unclear how a
model reacts to different prompts or what the best prompts are. To help
researchers tackle these critical challenges, we introduce DiffusionDB, the
first large-scale text-to-image prompt dataset totaling 6.5TB, containing 14
million images generated by Stable Diffusion, 1.8 million unique prompts, and
hyperparameters specified by real users. We analyze the syntactic and semantic
characteristics of prompts. We pinpoint specific hyperparameter values and
prompt styles that can lead to model errors and present evidence of potentially
harmful model usage, such as the generation of misinformation. The
unprecedented scale and diversity of this human-actuated dataset provide
exciting research opportunities in understanding the interplay between prompts
and generative models, detecting deepfakes, and designing human-AI interaction
tools to help users more easily use these models. DiffusionDB is publicly
available at: https://poloclub.github.io/diffusiondb.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 17:54:20 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 17:31:08 GMT"
},
{
"version": "v3",
"created": "Mon, 22 May 2023 02:42:48 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Jul 2023 11:53:19 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Wang",
"Zijie J.",
""
],
[
"Montoya",
"Evan",
""
],
[
"Munechika",
"David",
""
],
[
"Yang",
"Haoyang",
""
],
[
"Hoover",
"Benjamin",
""
],
[
"Chau",
"Duen Horng",
""
]
] |
new_dataset
| 0.999862 |
2212.08490
|
Xiaoxiang Han
|
Xiaoxiang Han, Yiman Liu, Gang Liu, Yuanjie Lin, Qiaohong Liu
|
LOANet: A Lightweight Network Using Object Attention for Extracting
Buildings and Roads from UAV Aerial Remote Sensing Images
|
16 pages, 7 tables, 7 figures, Published in PeerJ Computer Science
|
PeerJ Comput. Sci. 9:e1467 (2023)
|
10.7717/peerj-cs.1467
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic segmentation for extracting buildings and roads from uncrewed aerial
vehicle (UAV) remote sensing images by deep learning becomes a more efficient
and convenient method than traditional manual segmentation in surveying and
mapping fields. In order to make the model lightweight and improve the model
accuracy, a Lightweight Network Using Object Attention (LOANet) for Buildings
and Roads from UAV Aerial Remote Sensing Images is proposed. The proposed
network adopts an encoder-decoder architecture in which a Lightweight Densely
Connected Network (LDCNet) is developed as the encoder. In the decoder part,
the dual multi-scale context modules which consist of the Atrous Spatial
Pyramid Pooling module (ASPP) and the Object Attention Module (OAM) are
designed to capture more context information from feature maps of UAV remote
sensing images. Between ASPP and OAM, a Feature Pyramid Network (FPN) module is
used to fuse multi-scale features extracted from ASPP. A private dataset of
remote sensing images taken by UAV which contains 2431 training sets, 945
validation sets, and 475 test sets is constructed. The proposed basic model
performs well on this dataset, with only 1.4M parameters and 5.48G floating
point operations (FLOPs), achieving excellent mean Intersection-over-Union
(mIoU). Further experiments on the publicly available LoveDA and CITY-OSM
datasets have been conducted to further validate the effectiveness of the
proposed basic and large model, and outstanding mIoU results have been
achieved. All codes are available on https://github.com/GtLinyer/LOANet.
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 14:02:12 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Dec 2022 15:55:28 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Feb 2023 15:47:10 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Feb 2023 10:36:58 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Apr 2023 15:22:07 GMT"
},
{
"version": "v6",
"created": "Thu, 6 Jul 2023 12:06:26 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Han",
"Xiaoxiang",
""
],
[
"Liu",
"Yiman",
""
],
[
"Liu",
"Gang",
""
],
[
"Lin",
"Yuanjie",
""
],
[
"Liu",
"Qiaohong",
""
]
] |
new_dataset
| 0.9987 |
2305.04760
|
Thomas Benz
|
Alessandro Ottaviano, Thomas Benz, Paul Scheffler, Luca Benini
|
Cheshire: A Lightweight, Linux-Capable RISC-V Host Platform for
Domain-Specific Accelerator Plug-In
|
5 pages, 11 figures, accepted by IEEE Transactions on Circuits and
Systems Part II: Express Briefs
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Power and cost constraints in the internet-of-things (IoT) extreme-edge and
TinyML domains, coupled with increasing performance requirements, motivate a
trend toward heterogeneous architectures. These designs use energy-efficient
application-class host processors to coordinate compute-specialized multicore
accelerators, amortizing the architectural costs of operating system support
and external communication. This brief presents Cheshire, a lightweight and
modular 64-bit Linux-capable host platform designed for the seamless plug-in of
domain-specific accelerators. It features a unique low-pin-count DRAM
interface, a last-level cache configurable as scratchpad memory, and a DMA
engine enabling efficient data movement to or from accelerators or DRAM. It
also provides numerous optional IO peripherals including UART, SPI, I2C, VGA,
and GPIOs. Cheshire's synthesizable RTL description, comprising all of its
peripherals and its fully digital DRAM interface, is available free and
open-source. We implemented and fabricated Cheshire as a silicon demonstrator
called Neo in TSMC's 65nm CMOS technology. At 1.2 V, Neo achieves clock
frequencies of up to 325 MHz while not exceeding 300 mW in total power on
data-intensive computational workloads. Its RPC DRAM interface consumes only
250 pJ/B and incurs only 3.5 kGE in area for its PHY while attaining a peak
transfer rate of 750 MB/s at 200 MHz.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:08:51 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 05:08:21 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Ottaviano",
"Alessandro",
""
],
[
"Benz",
"Thomas",
""
],
[
"Scheffler",
"Paul",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.999286 |
2306.03307
|
Stefano Kalonaris
|
Stefano Kalonaris
|
Reef Elegy: An Auditory Display of Hawaii's 2019 Coral Bleaching Data
|
To appear in: Proceedings of the 28th International Conference on
Auditory Display (ICAD 2023) NOTE: This version (v2) replaces Figure 2, which
was incorrectly rendered. Do not use or cite the previous version (v1)
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper describes an auditory display of Hawaii's 2019 coral bleaching
data via means of spatial audio and parameter mapping methods. Selected data
fields spanning 78 days are mapped to sound surrogates of coral reefs' natural
soundscapes, which are progressively altered in their constituent elements as
the corresponding coral locations undergo bleaching. For some of these
elements, this process outlines a trajectory from a dense to a sparser, reduced
soundscape, while for others it translates moving away from harmonic tones and
towards complex spectra. This experiment is accompanied by a short evaluation
study to contextualize it in an established aesthetic perspective space and to
probe its potential for public engagement in the discourse around climate
change.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 23:27:39 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 13:44:20 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Kalonaris",
"Stefano",
""
]
] |
new_dataset
| 0.995477 |
2306.17496
|
Jinnan Piao
|
Jinnan Piao, Dong Li, Xueting Yu, Zhibo Li, Ming Yang, Jindi Liu, and
Peng Zeng
|
Performance Analysis for Polar Codes under Successive Cancellation List
Decoding with Fixed List Size
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we first indicate that the block error event of polar codes
under successive cancellation list (SCL) decoding is composed of path loss (PL)
error event and path selection (PS) error event, where the PL error event is
that correct codeword is lost during the SCL decoding and the PS error event is
that correct codeword is reserved in the decoded list but not selected as the
decoded codeword. Then, we simplify the PL error event by assuming the all-zero
codeword is transmitted and derive the probability lower bound via the joint
probability density of the log-likelihood ratios of information bits.
Meanwhile, the union bound calculated by the minimum weight distribution is
used to evaluate the probability of the PS error event. With the performance
analysis, we design a greedy bit-swapping (BS) algorithm to construct polar
codes by gradually swapping information bit and frozen bit to reduce the
performance lower bound of SCL decoding. The simulation results show that the
BLER performance of SCL decoding is close to the lower bound in the medium to
high signal-to-noise ratio region and we can optimize the lower bound to
improve the BLER performance of SCL decoding by the BS algorithm.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 09:14:32 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 06:02:20 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Piao",
"Jinnan",
""
],
[
"Li",
"Dong",
""
],
[
"Yu",
"Xueting",
""
],
[
"Li",
"Zhibo",
""
],
[
"Yang",
"Ming",
""
],
[
"Liu",
"Jindi",
""
],
[
"Zeng",
"Peng",
""
]
] |
new_dataset
| 0.983264 |
2307.00209
|
Huixuan Zhang
|
Huixuan Zhang, Xiaojun Wan
|
Image Matters: A New Dataset and Empirical Study for Multimodal
Hyperbole Detection
|
11 pages, 6 figures. 6 tables
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Hyperbole, or exaggeration, is a common linguistic phenomenon. The detection
of hyperbole is an important part of understanding human expression. There have
been several studies on hyperbole detection, but most of which focus on text
modality only. However, with the development of social media, people can create
hyperbolic expressions with various modalities, including text, images, videos,
etc. In this paper, we focus on multimodal hyperbole detection. We create a
multimodal detection dataset\footnote{The dataset will be released to the
community.} from Weibo (a Chinese social media) and carry out some studies on
it. We treat the text and image from a piece of weibo as two modalities and
explore the role of text and image for hyperbole detection. Different
pre-trained multimodal encoders are also evaluated on this downstream task to
show their performance. Besides, since this dataset is constructed from five
different topics, we also evaluate the cross-domain performance of different
models. These studies can serve as a benchmark and point out the direction of
further study on multimodal hyperbole detection.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 03:23:56 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 11:19:22 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Zhang",
"Huixuan",
""
],
[
"Wan",
"Xiaojun",
""
]
] |
new_dataset
| 0.998235 |
2307.01848
|
Zhenyu Wu
|
Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan
|
Embodied Task Planning with Large Language Models
|
Project Page: https://gary3410.github.io/TaPA
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 17:58:25 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Wu",
"Zhenyu",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Xu",
"Xiuwei",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Yan",
"Haibin",
""
]
] |
new_dataset
| 0.996622 |
2307.02493
|
Eunju Yang
|
Eunju Yang, Gyusang Cho, Chan-Hyun Youn
|
FREEDOM: Target Label & Source Data & Domain Information-Free
Multi-Source Domain Adaptation for Unsupervised Personalization
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
From a service perspective, Multi-Source Domain Adaptation (MSDA) is a
promising scenario to adapt a deployed model to a client's dataset. It can
provide adaptation without a target label and support the case where a source
dataset is constructed from multiple domains. However, it is impractical,
wherein its training heavily relies on prior domain information of the
multi-source dataset -- how many domains exist and the domain label of each
data sample. Moreover, MSDA requires both source and target datasets
simultaneously (physically), causing storage limitations on the client device
or data privacy issues by transferring client data to a server. For a more
practical scenario of model adaptation from a service provider's point of view,
we relax these constraints and present a novel problem scenario of Three-Free
Domain Adaptation, namely TFDA, where 1) target labels, 2) source dataset, and
mostly 3) source domain information (domain labels + the number of domains) are
unavailable. Under the problem scenario, we propose a practical adaptation
framework called FREEDOM. It leverages the power of the generative model,
disentangling data into class and style aspects, where the style is defined as
the class-independent information from the source data and designed with a
nonparametric Bayesian approach. In the adaptation stage, FREEDOM aims to match
the source class distribution with the target's under the philosophy that class
distribution is consistent even if the style is different; after then, only
part of the classification model is deployed as a personalized network. As a
result, FREEDOM achieves state-of-the-art or comparable performance even
without domain information, with reduced final model size on the target side,
independent of the number of source domains.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 05:56:44 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Yang",
"Eunju",
""
],
[
"Cho",
"Gyusang",
""
],
[
"Youn",
"Chan-Hyun",
""
]
] |
new_dataset
| 0.962738 |
2307.02507
|
Lincan Li
|
Lincan Li, Kaixiang Yang, Fengji Luo, Jichao Bi
|
STS-CCL: Spatial-Temporal Synchronous Contextual Contrastive Learning
for Urban Traffic Forecasting
|
11pages, 6 figures
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficiently capturing the complex spatiotemporal representations from
large-scale unlabeled traffic data remains to be a challenging task. In
considering of the dilemma, this work employs the advanced contrastive learning
and proposes a novel Spatial-Temporal Synchronous Contextual Contrastive
Learning (STS-CCL) model. First, we elaborate the basic and strong augmentation
methods for spatiotemporal graph data, which not only perturb the data in terms
of graph structure and temporal characteristics, but also employ a
learning-based dynamic graph view generator for adaptive augmentation. Second,
we introduce a Spatial-Temporal Synchronous Contrastive Module (STS-CM) to
simultaneously capture the decent spatial-temporal dependencies and realize
graph-level contrasting. To further discriminate node individuals in negative
filtering, a Semantic Contextual Contrastive method is designed based on
semantic features and spatial heterogeneity, achieving node-level contrastive
learning along with negative filtering. Finally, we present a hard mutual-view
contrastive training scheme and extend the classic contrastive loss to an
integrated objective function, yielding better performance. Extensive
experiments and evaluations demonstrate that building a predictor upon STS-CCL
contrastive learning model gains superior performance than existing traffic
forecasting benchmarks. The proposed STS-CCL is highly suitable for large
datasets with only a few labeled data and other spatiotemporal tasks with data
scarcity issue.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 03:47:28 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Li",
"Lincan",
""
],
[
"Yang",
"Kaixiang",
""
],
[
"Luo",
"Fengji",
""
],
[
"Bi",
"Jichao",
""
]
] |
new_dataset
| 0.963871 |
2307.02609
|
Siyang Song
|
Jiaqi Xu, Cheng Luo, Weicheng Xie, Linlin Shen, Xiaofeng Liu, Lu Liu,
Hatice Gunes, Siyang Song
|
MRecGen: Multimodal Appropriate Reaction Generator
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Verbal and non-verbal human reaction generation is a challenging task, as
different reactions could be appropriate for responding to the same behaviour.
This paper proposes the first multiple and multimodal (verbal and nonverbal)
appropriate human reaction generation framework that can generate appropriate
and realistic human-style reactions (displayed in the form of synchronised
text, audio and video streams) in response to an input user behaviour. This
novel technique can be applied to various human-computer interaction scenarios
by generating appropriate virtual agent/robot behaviours. Our demo is available
at \url{https://github.com/SSYSteve/MRecGen}.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 19:07:00 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Xu",
"Jiaqi",
""
],
[
"Luo",
"Cheng",
""
],
[
"Xie",
"Weicheng",
""
],
[
"Shen",
"Linlin",
""
],
[
"Liu",
"Xiaofeng",
""
],
[
"Liu",
"Lu",
""
],
[
"Gunes",
"Hatice",
""
],
[
"Song",
"Siyang",
""
]
] |
new_dataset
| 0.996044 |
2307.02617
|
Diego Nicol\'as Casta\~no
|
Miguel Campercholi, Diego Casta\~no, Gonzalo Zigar\'an
|
The complexity of the Chinese Remainder Theorem
| null | null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
The Chinese Remainder Theorem for the integers says that every system of
congruence equations is solvable as long as the system satisfies an obvious
necessary condition. This statement can be generalized in a natural way to
arbitrary algebraic structures using the language of Universal Algebra. In this
context, an algebra is a structure of a first-order language with no relation
symbols, and a congruence on an algebra is an equivalence relation on its base
set compatible with its fundamental operations. A tuple of congruences of an
algebra is called a Chinese Remainder tuple if every system involving them is
solvable. In this article we study the complexity of deciding whether a tuple
of congruences of a finite algebra is a Chinese Remainder tuple. This problem,
which we denote CRT, is easily seen to lie in coNP. We prove that it is
actually coNP-complete and also show that it is tractable when restricted to
several well-known classes of algebras, such as vector spaces and distributive
lattices. The polynomial algorithms we exhibit are made possible by purely
algebraic characterizations of Chinese Remainder tuples for algebras in these
classes, which constitute interesting results in their own right. Among these,
an elegant characterization of Chinese Remainder tuples of finite distributive
lattices stands out. Finally, we address the restriction of CRT to an arbitrary
equational class $\mathcal{V}$ generated by a two-element algebra. Here we
establish an (almost) dichotomy by showing that, unless $\mathcal{V}$ is the
class of semilattices, the problem is either coNP-complete or tractable.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 19:41:52 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Campercholi",
"Miguel",
""
],
[
"Castaño",
"Diego",
""
],
[
"Zigarán",
"Gonzalo",
""
]
] |
new_dataset
| 0.983143 |
2307.02691
|
Hang Ma
|
Qiushi Lin, Hang Ma
|
SACHA: Soft Actor-Critic with Heuristic-Based Attention for Partially
Observable Multi-Agent Path Finding
|
Accepted to IEEE Robotics and Automation Letters (RA-L)
| null |
10.1109/LRA.2023.3292004
| null |
cs.RO cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Agent Path Finding (MAPF) is a crucial component for many large-scale
robotic systems, where agents must plan their collision-free paths to their
given goal positions. Recently, multi-agent reinforcement learning has been
introduced to solve the partially observable variant of MAPF by learning a
decentralized single-agent policy in a centralized fashion based on each
agent's partial observation. However, existing learning-based methods are
ineffective in achieving complex multi-agent cooperation, especially in
congested environments, due to the non-stationarity of this setting. To tackle
this challenge, we propose a multi-agent actor-critic method called Soft
Actor-Critic with Heuristic-Based Attention (SACHA), which employs novel
heuristic-based attention mechanisms for both the actors and critics to
encourage cooperation among agents. SACHA learns a neural network for each
agent to selectively pay attention to the shortest path heuristic guidance from
multiple agents within its field of view, thereby allowing for more scalable
learning of cooperation. SACHA also extends the existing multi-agent
actor-critic framework by introducing a novel critic centered on each agent to
approximate $Q$-values. Compared to existing methods that use a fully
observable critic, our agent-centered multi-agent actor-critic method results
in more impartial credit assignment and better generalizability of the learned
policy to MAPF instances with varying numbers of agents and types of
environments. We also implement SACHA(C), which embeds a communication module
in the agent's policy network to enable information exchange among agents. We
evaluate both SACHA and SACHA(C) on a variety of MAPF instances and demonstrate
decent improvements over several state-of-the-art learning-based MAPF methods
with respect to success rate and solution quality.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 23:36:33 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Lin",
"Qiushi",
""
],
[
"Ma",
"Hang",
""
]
] |
new_dataset
| 0.998563 |
2307.02701
|
Kieran Morton
|
Mirza S. Sarwar, Ryusuke Ishizaki, Kieran Morton, Claire Preston, Tan
Nguyen, Xu Fan, Bertille Dupont, Leanna Hogarth, Takahide Yoshiike, Shahriar
Mirabbasi, John D.W. Madden
|
Touch, press and stroke: a soft capacitive sensor skin
|
9 pages, 5 figures, submitted to Scientific Reports Nature
| null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Soft sensors that can discriminate shear and normal force could help provide
machines the fine control desirable for safe and effective physical
interactions with people. A capacitive sensor is made for this purpose,
composed of patterned elastomer and containing both fixed and sliding pillars
that allow the sensor to deform and buckle, much like skin itself. The sensor
differentiates between simultaneously applied pressure and shear. In addition,
finger proximity is detectable up to 15 mm, with a pressure and shear
sensitivity of 1 kPa and a displacement resolution of 50 $\mu$m. The operation
is demonstrated on a simple gripper holding a cup. The combination of features
and the straightforward fabrication method make this sensor a candidate for
implementation as a sensing skin for humanoid robotics applications.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 00:32:42 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Sarwar",
"Mirza S.",
""
],
[
"Ishizaki",
"Ryusuke",
""
],
[
"Morton",
"Kieran",
""
],
[
"Preston",
"Claire",
""
],
[
"Nguyen",
"Tan",
""
],
[
"Fan",
"Xu",
""
],
[
"Dupont",
"Bertille",
""
],
[
"Hogarth",
"Leanna",
""
],
[
"Yoshiike",
"Takahide",
""
],
[
"Mirabbasi",
"Shahriar",
""
],
[
"Madden",
"John D. W.",
""
]
] |
new_dataset
| 0.999736 |
2307.02703
|
Glenn Bruns
|
Glenn Bruns and Mauricio Cortes
|
A Logical Way to Negotiate Services
|
17 pages, 7 figures
| null | null | null |
cs.LO cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Service providers commonly provide only a fixed catalog of services to their
clients. Both clients and service providers can benefit from service
negotiation, in which a client makes a query for a specific service, and the
provider counters with an offer. The query could include parameters that
control the performance, reliability, and function of the service. However, a
problem with service negotiation is that it can be expensive for a service
provider to support.
In this paper we define a formal negotiation policy language that enables
automated service negotiation. In the model supported by the language, service
providers can recursively obtain the services they need from sub-providers. The
queries made by clients, and the offers returned from service providers, are
expressed in quantifier-free first-order logic. Quantifier elimination is used
to transform constraints between providers and sub-providers. The pattern of
interaction between clients and service providers is defined in process
algebra. We show a correctness property of our language: if sub-providers
respond positively to queries, then so does the provider itself.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 00:37:30 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Bruns",
"Glenn",
""
],
[
"Cortes",
"Mauricio",
""
]
] |
new_dataset
| 0.98765 |
2307.02717
|
Dengfeng Wang
|
Dengfeng Wang, Liukai Xu, Songyuan Liu, zhi Li, Yiming Chen, Weifeng
He, Xueqing Li and Yanan Su
|
TL-nvSRAM-CIM: Ultra-High-Density Three-Level ReRAM-Assisted
Computing-in-nvSRAM with DC-Power Free Restore and Ternary MAC Operations
| null | null | null | null |
cs.AR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Accommodating all the weights on-chip for large-scale NNs remains a great
challenge for SRAM based computing-in-memory (SRAM-CIM) with limited on-chip
capacity. Previous non-volatile SRAM-CIM (nvSRAM-CIM) addresses this issue by
integrating high-density single-level ReRAMs on the top of high-efficiency
SRAM-CIM for weight storage to eliminate the off-chip memory access. However,
previous SL-nvSRAM-CIM suffers from poor scalability for an increased number of
SL-ReRAMs and limited computing efficiency. To overcome these challenges, this
work proposes an ultra-high-density three-level ReRAMs-assisted
computing-in-nonvolatile-SRAM (TL-nvSRAM-CIM) scheme for large NN models. The
clustered n-selector-n-ReRAM (cluster-nSnRs) is employed for reliable
weight-restore with eliminated DC power. Furthermore, a ternary SRAM-CIM
mechanism with differential computing scheme is proposed for energy-efficient
ternary MAC operations while preserving high NN accuracy. The proposed
TL-nvSRAM-CIM achieves 7.8x higher storage density, compared with the
state-of-art works. Moreover, TL-nvSRAM-CIM shows up to 2.9x and 1.9x enhanced
energy-efficiency, respectively, compared to the baseline designs of SRAM-CIM
and ReRAM-CIM, respectively.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 01:46:06 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Wang",
"Dengfeng",
""
],
[
"Xu",
"Liukai",
""
],
[
"Liu",
"Songyuan",
""
],
[
"Li",
"zhi",
""
],
[
"Chen",
"Yiming",
""
],
[
"He",
"Weifeng",
""
],
[
"Li",
"Xueqing",
""
],
[
"Su",
"Yanan",
""
]
] |
new_dataset
| 0.99954 |
2307.02730
|
Yuning Ding
|
Sheng-Lan Liu, Yu-Ning Ding, Si-Fan Zhang, Wen-Yue Chen, Ning Zhou,
Hao Liu, Gui-Hong Lao
|
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of
Figure Skating
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The fine-grained action analysis of the existing action datasets is
challenged by insufficient action categories, low fine granularities, limited
modalities, and tasks. In this paper, we propose a Multi-modality and
Multi-task dataset of Figure Skating (MMFS) which was collected from the World
Figure Skating Championships. MMFS, which possesses action recognition and
action quality assessment, captures RGB, skeleton, and is collected the score
of actions from 11671 clips with 256 categories including spatial and temporal
labels. The key contributions of our dataset fall into three aspects as
follows. (1) Independently spatial and temporal categories are first proposed
to further explore fine-grained action recognition and quality assessment. (2)
MMFS first introduces the skeleton modality for complex fine-grained action
quality assessment. (3) Our multi-modality and multi-task dataset encourage
more action analysis models. To benchmark our dataset, we adopt RGB-based and
skeleton-based baseline methods for action recognition and action quality
assessment.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 02:30:56 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Liu",
"Sheng-Lan",
""
],
[
"Ding",
"Yu-Ning",
""
],
[
"Zhang",
"Si-Fan",
""
],
[
"Chen",
"Wen-Yue",
""
],
[
"Zhou",
"Ning",
""
],
[
"Liu",
"Hao",
""
],
[
"Lao",
"Gui-Hong",
""
]
] |
new_dataset
| 0.999902 |
2307.02763
|
David Jurgens
|
David Jurgens, Agrima Seth, Jackson Sargent, Athena Aghighi, Michael
Geraci
|
Your spouse needs professional help: Determining the Contextual
Appropriateness of Messages through Modeling Social Relationships
|
ACL 2023, 18 pages, 8 figures, 11 tables
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding interpersonal communication requires, in part, understanding
the social context and norms in which a message is said. However, current
methods for identifying offensive content in such communication largely operate
independent of context, with only a few approaches considering community norms
or prior conversation as context. Here, we introduce a new approach to
identifying inappropriate communication by explicitly modeling the social
relationship between the individuals. We introduce a new dataset of
contextually-situated judgments of appropriateness and show that large language
models can readily incorporate relationship information to accurately identify
appropriateness in a given context. Using data from online conversations and
movie dialogues, we provide insight into how the relationships themselves
function as implicit norms and quantify the degree to which context-sensitivity
is needed in different conversation settings. Further, we also demonstrate that
contextual-appropriateness judgments are predictive of other social factors
expressed in language such as condescension and politeness.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 04:06:05 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Jurgens",
"David",
""
],
[
"Seth",
"Agrima",
""
],
[
"Sargent",
"Jackson",
""
],
[
"Aghighi",
"Athena",
""
],
[
"Geraci",
"Michael",
""
]
] |
new_dataset
| 0.999132 |
2307.02814
|
Dwip Dalal
|
Dwip Dalal, Gautam Vashishtha, Prajwal Singh, Shanmuganathan Raman
|
Single Image LDR to HDR Conversion using Conditional Diffusion
| null |
IEEE International Conference on Image Processing 2023
| null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Digital imaging aims to replicate realistic scenes, but Low Dynamic Range
(LDR) cameras cannot represent the wide dynamic range of real scenes, resulting
in under-/overexposed images. This paper presents a deep learning-based
approach for recovering intricate details from shadows and highlights while
reconstructing High Dynamic Range (HDR) images. We formulate the problem as an
image-to-image (I2I) translation task and propose a conditional Denoising
Diffusion Probabilistic Model (DDPM) based framework using classifier-free
guidance. We incorporate a deep CNN-based autoencoder in our proposed framework
to enhance the quality of the latent representation of the input LDR image used
for conditioning. Moreover, we introduce a new loss function for LDR-HDR
translation tasks, termed Exposure Loss. This loss helps direct gradients in
the opposite direction of the saturation, further improving the results'
quality. By conducting comprehensive quantitative and qualitative experiments,
we have effectively demonstrated the proficiency of our proposed method. The
results indicate that a simple conditional diffusion-based method can replace
the complex camera pipeline-based architectures.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 07:19:47 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Dalal",
"Dwip",
""
],
[
"Vashishtha",
"Gautam",
""
],
[
"Singh",
"Prajwal",
""
],
[
"Raman",
"Shanmuganathan",
""
]
] |
new_dataset
| 0.960158 |
2307.02848
|
Yu-Huan Wu
|
Yun Liu, Yu-Huan Wu, Shi-Chen Zhang, Li Liu, Min Wu, and Ming-Ming
Cheng
|
Revisiting Computer-Aided Tuberculosis Diagnosis
|
14 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Tuberculosis (TB) is a major global health threat, causing millions of deaths
annually. Although early diagnosis and treatment can greatly improve the
chances of survival, it remains a major challenge, especially in developing
countries. Recently, computer-aided tuberculosis diagnosis (CTD) using deep
learning has shown promise, but progress is hindered by limited training data.
To address this, we establish a large-scale dataset, namely the Tuberculosis
X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with
corresponding bounding box annotations for TB areas. This dataset enables the
training of sophisticated detectors for high-quality CTD. Furthermore, we
propose a strong baseline, SymFormer, for simultaneous CXR image classification
and TB infection area detection. SymFormer incorporates Symmetric Search
Attention (SymAttention) to tackle the bilateral symmetry property of CXR
images for learning discriminative features. Since CXR images may not strictly
adhere to the bilateral symmetry property, we also propose Symmetric Positional
Encoding (SPE) to facilitate SymAttention through feature recalibration. To
promote future research on CTD, we build a benchmark by introducing evaluation
metrics, evaluating baseline models reformed from existing detectors, and
running an online challenge. Experiments show that SymFormer achieves
state-of-the-art performance on the TBX11K dataset. The data, code, and models
will be released.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 08:27:48 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Liu",
"Yun",
""
],
[
"Wu",
"Yu-Huan",
""
],
[
"Zhang",
"Shi-Chen",
""
],
[
"Liu",
"Li",
""
],
[
"Wu",
"Min",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] |
new_dataset
| 0.998897 |
2307.02849
|
Zi'ou Zheng
|
Zi'ou Zheng and Xiaodan Zhu
|
NatLogAttack: A Framework for Attacking Natural Language Inference
Models with Natural Logic
|
Published as a conference paper at ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Reasoning has been a central topic in artificial intelligence from the
beginning. The recent progress made on distributed representation and neural
networks continues to improve the state-of-the-art performance of natural
language inference. However, it remains an open question whether the models
perform real reasoning to reach their conclusions or rely on spurious
correlations. Adversarial attacks have proven to be an important tool to help
evaluate the Achilles' heel of the victim models. In this study, we explore the
fundamental problem of developing attack models based on logic formalism. We
propose NatLogAttack to perform systematic attacks centring around natural
logic, a classical logic formalism that is traceable back to Aristotle's
syllogism and has been closely developed for natural language inference. The
proposed framework renders both label-preserving and label-flipping attacks. We
show that compared to the existing attack models, NatLogAttack generates better
adversarial examples with fewer visits to the victim models. The victim models
are found to be more vulnerable under the label-flipping setting. NatLogAttack
provides a tool to probe the existing and future NLI models' capacity from a
key viewpoint and we hope more logic-based attacks will be further explored for
understanding the desired property of reasoning.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 08:32:14 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Zheng",
"Zi'ou",
""
],
[
"Zhu",
"Xiaodan",
""
]
] |
new_dataset
| 0.971954 |
2307.02852
|
Xuyang Zhao
|
Xuyang Zhao, Chengpu Yu, Erpei Xu and Yixuan Liu
|
TDLE: 2-D LiDAR Exploration With Hierarchical Planning Using Regional
Division
|
Accepted in IEEE International Conference on Automation Science and
Engineering (CASE) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploration systems are critical for enhancing the autonomy of robots. Due to
the unpredictability of the future planning space, existing methods either
adopt an inefficient greedy strategy or require a lot of resources to obtain a
global solution. In this work, we address the challenge of obtaining global
exploration routes with minimal computing resources. A hierarchical planning
framework dynamically divides the planning space into subregions and arranges
their orders to provide global guidance for exploration. Indicators that are
compatible with the subregion order are used to choose specific exploration
targets, thereby considering estimates of spatial structure and extending the
planning space to unknown regions. Extensive simulations and field tests
demonstrate the efficacy of our method in comparison to existing 2D LiDAR-based
approaches. Our code has been made public for further investigation.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 08:36:08 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Zhao",
"Xuyang",
""
],
[
"Yu",
"Chengpu",
""
],
[
"Xu",
"Erpei",
""
],
[
"Liu",
"Yixuan",
""
]
] |
new_dataset
| 0.997435 |
2307.02865
|
Mattia Giovanni Campana
|
Valerio Arnaboldi, Mattia Giovanni Campana, Franca Delmastro, Elena
Pagani
|
PLIERS: a Popularity-Based Recommender System for Content Dissemination
in Online Social Networks
|
Published in SAC '16: Proceedings of the 31st Annual ACM Symposium on
Applied Computing
| null |
10.1145/2851613.2851940
| null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel tag-based recommender system called PLIERS,
which relies on the assumption that users are mainly interested in items and
tags with similar popularity to those they already own. PLIERS is aimed at
reaching a good tradeoff between algorithmic complexity and the level of
personalization of recommended items. To evaluate PLIERS, we performed a set of
experiments on real OSN datasets, demonstrating that it outperforms
state-of-the-art solutions in terms of personalization, relevance, and novelty
of recommendations.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 09:04:58 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Arnaboldi",
"Valerio",
""
],
[
"Campana",
"Mattia Giovanni",
""
],
[
"Delmastro",
"Franca",
""
],
[
"Pagani",
"Elena",
""
]
] |
new_dataset
| 0.993774 |
2307.02915
|
Mikhail Martynov
|
Mikhail Martynov, Zhanibek Darush, Miguel Altamirano Cabrera, Sausar
Karaf, Dzmitry Tsetserukou
|
MorphoArms: Morphogenetic Teleoperation of Multimanual Robot
|
IEEE International Conference on Automation Science and Engineering
(CASE 2023), Cordis, New Zeland, 26-30 August, 2023, in print
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, there are few unmanned aerial vehicles (UAVs) capable of flying,
walking and grasping. A drone with all these functionalities can significantly
improve its performance in complex tasks such as monitoring and exploring
different types of terrain, and rescue operations. This paper presents
MorphoArms, a novel system that consists of a morphogenetic chassis and a hand
gesture recognition teleoperation system. The mechanics, electronics, control
architecture, and walking behavior of the morphogenetic chassis are described.
This robot is capable of walking and grasping objects using four robotic limbs.
Robotic limbs with four degrees-of-freedom are used as pedipulators when
walking and as manipulators when performing actions in the environment. The
robot control system is implemented using teleoperation, where commands are
given by hand gestures. A motion capture system is used to track the user's
hands and to recognize their gestures. The method of controlling the robot was
experimentally tested in a study involving 10 users. The evaluation included
three questionnaires (NASA TLX, SUS, and UEQ). The results showed that the
proposed system was more user-friendly than 56% of the systems, and it was
rated above average in terms of attractiveness, stimulation, and novelty.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 11:05:38 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Martynov",
"Mikhail",
""
],
[
"Darush",
"Zhanibek",
""
],
[
"Cabrera",
"Miguel Altamirano",
""
],
[
"Karaf",
"Sausar",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.985803 |
2307.02928
|
Avishai Sintov
|
Osher Azulay, Nimrod Curtis, Rotem Sokolovsky, Guy Levitski, Daniel
Slomovik, Guy Lilling and Avishai Sintov
|
AllSight: A Low-Cost and High-Resolution Round Tactile Sensor with
Zero-Shot Learning Capability
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Tactile sensing is a necessary capability for a robotic hand to perform fine
manipulations and interact with the environment. Optical sensors are a
promising solution for high-resolution contact estimation. Nevertheless, they
are usually not easy to fabricate and require individual calibration in order
to acquire sufficient accuracy. In this letter, we propose AllSight, an optical
tactile sensor with a round 3D structure potentially designed for robotic
in-hand manipulation tasks. AllSight is mostly 3D printed making it low-cost,
modular, durable and in the size of a human thumb while with a large contact
surface. We show the ability of AllSight to learn and estimate a full contact
state, i.e., contact position, forces and torsion. With that, an experimental
benchmark between various configurations of illumination and contact elastomers
are provided. Furthermore, the robust design of AllSight provides it with a
unique zero-shot capability such that a practitioner can fabricate the
open-source design and have a ready-to-use state estimation model. A set of
experiments demonstrates the accurate state estimation performance of AllSight.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 11:28:53 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Azulay",
"Osher",
""
],
[
"Curtis",
"Nimrod",
""
],
[
"Sokolovsky",
"Rotem",
""
],
[
"Levitski",
"Guy",
""
],
[
"Slomovik",
"Daniel",
""
],
[
"Lilling",
"Guy",
""
],
[
"Sintov",
"Avishai",
""
]
] |
new_dataset
| 0.996803 |
2307.02991
|
Asma Atamna
|
Abhijeet Pendyala, Justin Dettmer, Tobias Glasmachers, Asma Atamna
|
ContainerGym: A Real-World Reinforcement Learning Benchmark for Resource
Allocation
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ContainerGym, a benchmark for reinforcement learning inspired by a
real-world industrial resource allocation task. The proposed benchmark encodes
a range of challenges commonly encountered in real-world sequential decision
making problems, such as uncertainty. It can be configured to instantiate
problems of varying degrees of difficulty, e.g., in terms of variable
dimensionality. Our benchmark differs from other reinforcement learning
benchmarks, including the ones aiming to encode real-world difficulties, in
that it is directly derived from a real-world industrial problem, which
underwent minimal simplification and streamlining. It is sufficiently versatile
to evaluate reinforcement learning algorithms on any real-world problem that
fits our resource allocation framework. We provide results of standard baseline
methods. Going beyond the usual training reward curves, our results and the
statistical tools used to interpret them allow to highlight interesting
limitations of well-known deep reinforcement learning algorithms, namely PPO,
TRPO and DQN.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 13:44:29 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Pendyala",
"Abhijeet",
""
],
[
"Dettmer",
"Justin",
""
],
[
"Glasmachers",
"Tobias",
""
],
[
"Atamna",
"Asma",
""
]
] |
new_dataset
| 0.996992 |
2307.03048
|
Yan Lin
|
Yan Lin, Huaiyu Wan, Jilin Hu, Shengnan Guo, Bin Yang, Youfang Lin,
Christian S. Jensen
|
Origin-Destination Travel Time Oracle for Map-based Services
|
15 pages, 12 figures, accepted by SIGMOD International Conference on
Management of Data 2024
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an origin (O), a destination (D), and a departure time (T), an
Origin-Destination (OD) travel time oracle~(ODT-Oracle) returns an estimate of
the time it takes to travel from O to D when departing at T. ODT-Oracles serve
important purposes in map-based services. To enable the construction of such
oracles, we provide a travel-time estimation (TTE) solution that leverages
historical trajectories to estimate time-varying travel times for OD pairs.
The problem is complicated by the fact that multiple historical trajectories
with different travel times may connect an OD pair, while trajectories may vary
from one another. To solve the problem, it is crucial to remove outlier
trajectories when doing travel time estimation for future queries.
We propose a novel, two-stage framework called Diffusion-based
Origin-destination Travel Time Estimation (DOT), that solves the problem.
First, DOT employs a conditioned Pixelated Trajectories (PiT) denoiser that
enables building a diffusion-based PiT inference process by learning
correlations between OD pairs and historical trajectories. Specifically, given
an OD pair and a departure time, we aim to infer a PiT. Next, DOT encompasses a
Masked Vision Transformer~(MViT) that effectively and efficiently estimates a
travel time based on the inferred PiT. We report on extensive experiments on
two real-world datasets that offer evidence that DOT is capable of
outperforming baseline methods in terms of accuracy, scalability, and
explainability.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 15:14:23 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Lin",
"Yan",
""
],
[
"Wan",
"Huaiyu",
""
],
[
"Hu",
"Jilin",
""
],
[
"Guo",
"Shengnan",
""
],
[
"Yang",
"Bin",
""
],
[
"Lin",
"Youfang",
""
],
[
"Jensen",
"Christian S.",
""
]
] |
new_dataset
| 0.99927 |
2307.03080
|
Riccardo Bertoglio
|
Riccardo Bertoglio, Veronica Carini, Stefano Arrigoni, Matteo
Matteucci
|
A Map-Free LiDAR-Based System for Autonomous Navigation in Vineyards
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Agricultural robots have the potential to increase production yields and
reduce costs by performing repetitive and time-consuming tasks. However, for
robots to be effective, they must be able to navigate autonomously in fields or
orchards without human intervention. In this paper, we introduce a navigation
system that utilizes LiDAR and wheel encoder sensors for in-row, turn, and
end-row navigation in row structured agricultural environments, such as
vineyards. Our approach exploits the simple and precise geometrical structure
of plants organized in parallel rows. We tested our system in both simulated
and real environments, and the results demonstrate the effectiveness of our
approach in achieving accurate and robust navigation. Our navigation system
achieves mean displacement errors from the center line of 0.049 m and 0.372 m
for in-row navigation in the simulated and real environments, respectively. In
addition, we developed an end-row points detection that allows end-row
navigation in vineyards, a task often ignored by most works.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 15:48:29 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Bertoglio",
"Riccardo",
""
],
[
"Carini",
"Veronica",
""
],
[
"Arrigoni",
"Stefano",
""
],
[
"Matteucci",
"Matteo",
""
]
] |
new_dataset
| 0.999603 |
2307.03126
|
Mattia Giovanni Campana
|
Valerio Arnaboldi, Mattia Giovanni Campana, Franca Delmastro
|
Context-Aware Configuration and Management of WiFi Direct Groups for
Real Opportunistic Networks
|
Accepted by the IEEE 14th International Conference on Mobile Ad Hoc
and Sensor Systems (MASS), 2017
| null |
10.1109/MASS.2017.40
| null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wi-Fi Direct is a promising technology for the support of device-to-device
communications (D2D) on commercial mobile devices. However, the standard
as-it-is is not sufficient to support the real deployment of networking
solutions entirely based on D2D such as opportunistic networks. In fact, WiFi
Direct presents some characteristics that could limit the autonomous creation
of D2D connections among users' personal devices. Specifically, the standard
explicitly requires the user's authorization to establish a connection between
two or more devices, and it provides a limited support for inter-group
communication. In some cases, this might lead to the creation of isolated
groups of nodes which cannot communicate among each other. In this paper, we
propose a novel middleware-layer protocol for the efficient configuration and
management of WiFi Direct groups (WiFi Direct Group Manager, WFD-GM) to enable
autonomous connections and inter-group communication. This enables
opportunistic networks in real conditions (e.g., variable mobility and network
size). WFD-GM defines a context function that takes into account heterogeneous
parameters for the creation of the best group configuration in a specific time
window, including an index of nodes' stability and power levels. We evaluate
the protocol performances by simulating three reference scenarios including
different mobility models, geographical areas and number of nodes. Simulations
are also supported by experimental results related to the evaluation in a real
testbed of the involved context parameters. We compare WFD-GM with the
state-of-the-art solutions and we show that it performs significantly better
than a Baseline approach in scenarios with medium/low mobility, and it is
comparable with it in case of high mobility, without introducing additional
overhead.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 16:52:20 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Arnaboldi",
"Valerio",
""
],
[
"Campana",
"Mattia Giovanni",
""
],
[
"Delmastro",
"Franca",
""
]
] |
new_dataset
| 0.985555 |
2307.03132
|
Pratyush Maini
|
Pratyush Maini, Sachin Goyal, Zachary C. Lipton, J. Zico Kolter, Aditi
Raghunathan
|
T-MARS: Improving Visual Representations by Circumventing Text Feature
Learning
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large web-sourced multimodal datasets have powered a slew of new methods for
learning general-purpose visual representations, advancing the state of the art
in computer vision and revolutionizing zero- and few-shot recognition. One
crucial decision facing practitioners is how, if at all, to curate these
ever-larger datasets. For example, the creators of the LAION-5B dataset chose
to retain only image-caption pairs whose CLIP similarity score exceeded a
designated threshold. In this paper, we propose a new state-of-the-art data
filtering approach motivated by our observation that nearly 40% of LAION's
images contain text that overlaps significantly with the caption. Intuitively,
such data could be wasteful as it incentivizes models to perform optical
character recognition rather than learning visual features. However, naively
removing all such data could also be wasteful, as it throws away images that
contain visual features (in addition to overlapping text). Our simple and
scalable approach, T-MARS (Text Masking and Re-Scoring), filters out only those
pairs where the text dominates the remaining visual features -- by first
masking out the text and then filtering out those with a low CLIP similarity
score of the masked image. Experimentally, T-MARS outperforms the top-ranked
method on the "medium scale" of DataComp (a data filtering benchmark) by a
margin of 6.5% on ImageNet and 4.7% on VTAB. Additionally, our systematic
evaluation on various data pool sizes from 2M to 64M shows that the accuracy
gains enjoyed by T-MARS linearly increase as data and compute are scaled
exponentially. Code is available at https://github.com/locuslab/T-MARS.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 16:59:52 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Maini",
"Pratyush",
""
],
[
"Goyal",
"Sachin",
""
],
[
"Lipton",
"Zachary C.",
""
],
[
"Kolter",
"J. Zico",
""
],
[
"Raghunathan",
"Aditi",
""
]
] |
new_dataset
| 0.986923 |
2307.03133
|
Yongcan Yu
|
Yongcan Yu, Lijun Sheng, Ran He, Jian Liang
|
Benchmarking Test-Time Adaptation against Distribution Shifts in Image
Classification
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Test-time adaptation (TTA) is a technique aimed at enhancing the
generalization performance of models by leveraging unlabeled samples solely
during prediction. Given the need for robustness in neural network systems when
faced with distribution shifts, numerous TTA methods have recently been
proposed. However, evaluating these methods is often done under different
settings, such as varying distribution shifts, backbones, and designing
scenarios, leading to a lack of consistent and fair benchmarks to validate
their effectiveness. To address this issue, we present a benchmark that
systematically evaluates 13 prominent TTA methods and their variants on five
widely used image classification datasets: CIFAR-10-C, CIFAR-100-C, ImageNet-C,
DomainNet, and Office-Home. These methods encompass a wide range of adaptation
scenarios (e.g. online adaptation v.s. offline adaptation, instance adaptation
v.s. batch adaptation v.s. domain adaptation). Furthermore, we explore the
compatibility of different TTA methods with diverse network backbones. To
implement this benchmark, we have developed a unified framework in PyTorch,
which allows for consistent evaluation and comparison of the TTA methods across
the different datasets and network architectures. By establishing this
benchmark, we aim to provide researchers and practitioners with a reliable
means of assessing and comparing the effectiveness of TTA methods in improving
model robustness and generalization performance. Our code is available at
https://github.com/yuyongcan/Benchmark-TTA.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 16:59:53 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Yu",
"Yongcan",
""
],
[
"Sheng",
"Lijun",
""
],
[
"He",
"Ran",
""
],
[
"Liang",
"Jian",
""
]
] |
new_dataset
| 0.969386 |
2307.03153
|
Kate Sanders
|
Kate Sanders, David Etter, Reno Kriz, Benjamin Van Durme
|
MultiVENT: Multilingual Videos of Events with Aligned Natural Text
| null | null | null | null |
cs.IR cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Everyday news coverage has shifted from traditional broadcasts towards a wide
range of presentation formats such as first-hand, unedited video footage.
Datasets that reflect the diverse array of multimodal, multilingual news
sources available online could be used to teach models to benefit from this
shift, but existing news video datasets focus on traditional news broadcasts
produced for English-speaking audiences. We address this limitation by
constructing MultiVENT, a dataset of multilingual, event-centric videos
grounded in text documents across five target languages. MultiVENT includes
both news broadcast videos and non-professional event footage, which we use to
analyze the state of online news videos and how they can be leveraged to build
robust, factually accurate models. Finally, we provide a model for complex,
multilingual video retrieval to serve as a baseline for information retrieval
using MultiVENT.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 17:29:34 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Sanders",
"Kate",
""
],
[
"Etter",
"David",
""
],
[
"Kriz",
"Reno",
""
],
[
"Van Durme",
"Benjamin",
""
]
] |
new_dataset
| 0.999761 |
2307.03162
|
Yao Shi
|
Yao Shi, Xiaofeng Zhang, Ran zhang, Zhou Yang, Xiao Tang, Hongni Ye,
Yi Wu
|
BrickPal: Augmented Reality-based Assembly Instructions for Brick Models
|
9 pages,7 figures. Project URL: https://origami.dance/brickpal
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The assembly instruction is a mandatory component of Lego-like brick sets.The
conventional production of assembly instructions requires a considerable amount
of manual fine-tuning, which is intractable for casual users and customized
brick sets.Moreover, the traditional paper-based instructions lack
expressiveness and interactivity.To tackle the two problems above, we present
BrickPal, an augmented reality-based system, which visualizes assembly
instructions in an augmented reality head-mounted display. It utilizes Natural
Language Processing (NLP) techniques to generate plausible assembly sequences,
and provide real-time guidance in the AR headset.Our user study demonstrates
BrickPal's effectiveness at assisting users in brick assembly compared to
traditional assembly methods. Additionally, the NLP algorithm-generated
assembly sequences achieve the same usability with manually adapted sequences.
|
[
{
"version": "v1",
"created": "Thu, 6 Jul 2023 17:42:56 GMT"
}
] | 2023-07-07T00:00:00 |
[
[
"Shi",
"Yao",
""
],
[
"Zhang",
"Xiaofeng",
""
],
[
"zhang",
"Ran",
""
],
[
"Yang",
"Zhou",
""
],
[
"Tang",
"Xiao",
""
],
[
"Ye",
"Hongni",
""
],
[
"Wu",
"Yi",
""
]
] |
new_dataset
| 0.999885 |
1901.03427
|
Kurmanbek Kaiyrbekov
|
Kurmanbek Kaiyrbekov, Metin Sezgin
|
Stroke-based sketched symbol reconstruction and segmentation
| null |
IEEE Computer Graphics and Applications, vol. 40, no. 1, pp.
112-126, 1 Jan.-Feb. 2020
|
10.1109/MCG.2019.2943333
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hand-drawn objects usually consist of multiple semantically meaningful parts.
For example, a stick figure consists of a head, a torso, and pairs of legs and
arms. Efficient and accurate identification of these subparts promises to
significantly improve algorithms for stylization, deformation, morphing and
animation of 2D drawings. In this paper, we propose a neural network model that
segments symbols into stroke-level components. Our segmentation framework has
two main elements: a fixed feature extractor and a Multilayer Perceptron (MLP)
network that identifies a component based on the feature. As the feature
extractor we utilize an encoder of a stroke-rnn, which is our newly proposed
generative Variational Auto-Encoder (VAE) model that reconstructs symbols on a
stroke by stroke basis. Experiments show that a single encoder could be reused
for segmenting multiple categories of sketched symbols with negligible effects
on segmentation accuracies. Our segmentation scores surpass existing
methodologies on an available small state of the art dataset. Moreover,
extensive evaluations on our newly annotated big dataset demonstrate that our
framework obtains significantly better accuracies as compared to baseline
models. We release the dataset to the community.
|
[
{
"version": "v1",
"created": "Thu, 10 Jan 2019 23:04:46 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Jan 2019 07:32:09 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Kaiyrbekov",
"Kurmanbek",
""
],
[
"Sezgin",
"Metin",
""
]
] |
new_dataset
| 0.991386 |
2006.07603
|
Shenghao Yang
|
Yanyan Dong and Shenghao Yang
|
On Optimal Finite-length Block Codes of Size Four for Binary Symmetric
Channels
|
This is the full version of our papers at ISITA 2020 and ISIT 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A binary code of blocklength $n$ and codebook size $M$ is called an $(n,M)$
code, which is studied for memoryless binary symmetric channels (BSCs) with the
maximum likelihood (ML) decoding. For any $n \geq 2$, some optimal codes among
the linear $(n,4)$ codes have been explicitly characterized in the previous
study, but whether the optimal codes among the linear codes are better than all
the nonlinear codes or not is unknown. In this paper, we first show that for
any $n\geq 2$, there exists an optimal code (among all the $(n,4)$ codes) that
is either linear or in a subset of nonlinear codes, called Class-I codes. We
identified all the optimal codes among the linear $(n,4)$ codes for each
blocklength $n\geq 2$, and found ones that were not given in literature. For
any $n$ from $2$ to $300$, all the optimal $(n,4)$ codes are identified, where
except for $n=3$, all the optimal $(n,4)$ codes are equivalent to linear codes.
There exist optimal $(3,4)$ codes that are not equivalent to linear codes.
Furthermore, we derive a subset of nonlinear codes called Class-II codes and
justify that for any $n >300$, the set composed of linear, Class-I and Class-II
codes and their equivalent codes contains all the optimal $(n,4)$ codes. Both
Class-I and Class-II codes are close to linear codes in the sense that they
involve only one type of columns that are not included in linear codes. Our
results are obtained using a new technique to compare the ML decoding
performance of two codes, featured by a partition of the entire range of the
channel output.
|
[
{
"version": "v1",
"created": "Sat, 13 Jun 2020 10:03:13 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jul 2020 04:27:17 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Jul 2023 14:39:04 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Dong",
"Yanyan",
""
],
[
"Yang",
"Shenghao",
""
]
] |
new_dataset
| 0.991956 |
2007.15805
|
He Shuang
|
He Shuang, Lianying Zhao, David Lie
|
vWitness: Certifying Web Page Interactions with Computer Vision
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Web servers service client requests, some of which might cause the web server
to perform security-sensitive operations (e.g. money transfer, voting). An
attacker may thus forge or maliciously manipulate such requests by compromising
a web client. Unfortunately, a web server has no way of knowing whether the
client from which it receives a request has been compromised or not -- current
"best practice" defenses such as user authentication or network encryption
cannot aid a server as they all assume web client integrity. To address this
shortcoming, we propose vWitness, which "witnesses" the interactions of a user
with a web page and certifies whether they match a specification provided by
the web server, enabling the web server to know that the web request is
user-intended. The main challenge that vWitness overcomes is that even benign
clients introduce unpredictable variations in the way they render web pages.
vWitness differentiates between these benign variations and malicious
manipulation using computer vision, allowing it to certify to the web server
that 1) the web page user interface is properly displayed 2) observed user
interactions are used to construct the web request. Our vWitness prototype
achieves compatibility with modern web pages, is resilient to adversarial
example attacks and is accurate and performant -- vWitness achieves 99.97%
accuracy and adds 197ms of overhead to the entire interaction session in the
average case.
|
[
{
"version": "v1",
"created": "Fri, 31 Jul 2020 02:08:18 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 14:12:18 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Shuang",
"He",
""
],
[
"Zhao",
"Lianying",
""
],
[
"Lie",
"David",
""
]
] |
new_dataset
| 0.998038 |
2101.04912
|
Niall Williams
|
Niall L. Williams, Aniket Bera, Dinesh Manocha
|
ARC: Alignment-based Redirection Controller for Redirected Walking in
Complex Environments
| null |
IEEE Transactions on Visualization and Computer Graphics volume
27, pages 2535-2544, 2021
|
10.1109/TVCG.2021.3067781
| null |
cs.GR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel redirected walking controller based on alignment that
allows the user to explore large and complex virtual environments, while
minimizing the number of collisions with obstacles in the physical environment.
Our alignment-based redirection controller, ARC, steers the user such that
their proximity to obstacles in the physical environment matches the proximity
to obstacles in the virtual environment as closely as possible. To quantify a
controller's performance in complex environments, we introduce a new metric,
Complexity Ratio (CR), to measure the relative environment complexity and
characterize the difference in navigational complexity between the physical and
virtual environments. Through extensive simulation-based experiments, we show
that ARC significantly outperforms current state-of-the-art controllers in its
ability to steer the user on a collision-free path. We also show through
quantitative and qualitative measures of performance that our controller is
robust in complex environments with many obstacles. Our method is applicable to
arbitrary environments and operates without any user input or parameter
tweaking, aside from the layout of the environments. We have implemented our
algorithm on the Oculus Quest head-mounted display and evaluated its
performance in environments with varying complexity. Our project website is
available at https://gamma.umd.edu/arc/.
|
[
{
"version": "v1",
"created": "Wed, 13 Jan 2021 07:19:42 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Mar 2021 14:11:04 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Nov 2021 23:35:50 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Jul 2023 04:01:19 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Williams",
"Niall L.",
""
],
[
"Bera",
"Aniket",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.963377 |
2105.14261
|
Dieter Spreen
|
Dieter Spreen, Ulrich Berger
|
Computing with Infinite Objects: the Gray Code Case
| null | null | null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Infinite Gray code has been introduced by Tsuiki as a redundancy-free
representation of the reals. In applications the signed digit representation is
mostly used which has maximal redundancy. Tsuiki presented a functional program
converting signed digit code into infinite Gray code. Moreover, he showed that
infinite Gray code can effectively be converted into signed digit code, but the
program needs to have some non-deterministic features (see also H. Tsuiki, K.
Sugihara, "Streams with a bottom in functional languages"). Berger and Tsuiki
reproved the result in a system of formal first-order intuitionistic logic
extended by inductive and co-inductive definitions, as well as some new logical
connectives capturing concurrent behaviour. The programs extracted from the
proofs are exactly the ones given by Tsuiki. In order to do so, co-inductive
predicates $\bS$ and $\bG$ are defined and the inclusion $\bS \subseteq \bG$ is
derived. For the converse inclusion the new logical connectives are used to
introduce a concurrent version $\S_{2}$ of $S$ and $\bG \subseteq \bS_{2}$ is
shown. What one is looking for, however, is an equivalence proof of the
involved concepts. One of the main aims of the present paper is to close the
gap. A concurrent version $\bG^{*}$ of $\bG$ and a modification $\bS^{*}$ of
$\bS_{2}$ are presented such that $\bS^{*} = \bG^{*}$. A crucial tool in U.
Berger, H. Tsuiki, "Intuitionistic fixed point logic" is a formulation of the
Archimedean property of the real numbers as an induction principle. We
introduce a concurrent version of this principle which allows us to prove that
$\bS^{*}$ and $\bG^{*}$ coincide. A further central contribution is the
extension of the above results to the hyperspace of non-empty compact subsets
of the reals.
|
[
{
"version": "v1",
"created": "Sat, 29 May 2021 09:42:15 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Nov 2022 12:52:11 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Nov 2022 17:02:06 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Apr 2023 08:49:00 GMT"
},
{
"version": "v5",
"created": "Tue, 23 May 2023 08:38:20 GMT"
},
{
"version": "v6",
"created": "Wed, 5 Jul 2023 14:13:31 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Spreen",
"Dieter",
""
],
[
"Berger",
"Ulrich",
""
]
] |
new_dataset
| 0.989516 |
2112.01914
|
Zheyuan Zhou
|
Zheyuan Zhou, Liang Du, Xiaoqing Ye, Zhikang Zou, Xiao Tan, Li Zhang,
Xiangyang Xue, Jianfeng Feng
|
SGM3D: Stereo Guided Monocular 3D Object Detection
|
8 pages, 5 figures
| null |
10.1109/LRA.2022.3191849
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular 3D object detection aims to predict the object location, dimension
and orientation in 3D space alongside the object category given only a
monocular image. It poses a great challenge due to its ill-posed property which
is critically lack of depth information in the 2D image plane. While there
exist approaches leveraging off-the-shelve depth estimation or relying on LiDAR
sensors to mitigate this problem, the dependence on the additional depth model
or expensive equipment severely limits their scalability to generic 3D
perception. In this paper, we propose a stereo-guided monocular 3D object
detection framework, dubbed SGM3D, adapting the robust 3D features learned from
stereo inputs to enhance the feature for monocular detection. We innovatively
present a multi-granularity domain adaptation (MG-DA) mechanism to exploit the
network's ability to generate stereo-mimicking features given only on monocular
cues. Coarse BEV feature-level, as well as the fine anchor-level domain
adaptation, are both leveraged for guidance in the monocular domain.In
addition, we introduce an IoU matching-based alignment (IoU-MA) method for
object-level domain adaptation between the stereo and monocular predictions to
alleviate the mismatches while adopting the MG-DA. Extensive experiments
demonstrate state-of-the-art results on KITTI and Lyft datasets.
|
[
{
"version": "v1",
"created": "Fri, 3 Dec 2021 13:57:14 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 16:43:36 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Zhou",
"Zheyuan",
""
],
[
"Du",
"Liang",
""
],
[
"Ye",
"Xiaoqing",
""
],
[
"Zou",
"Zhikang",
""
],
[
"Tan",
"Xiao",
""
],
[
"Zhang",
"Li",
""
],
[
"Xue",
"Xiangyang",
""
],
[
"Feng",
"Jianfeng",
""
]
] |
new_dataset
| 0.996247 |
2201.01599
|
J\'er\'emie Chalopin
|
J\'er\'emie Chalopin and Victor Chepoi and Ugo Giocanti
|
Graphs with convex balls
| null |
Geometriae Dedicata 217, 67 (2023)
|
10.1007/s10711-023-00803-0
| null |
cs.DM math.CO math.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the graphs in which all balls are convex and
the groups acting on them geometrically (which we call CB-graphs and
CB-groups). These graphs have been introduced and characterized by Soltan and
Chepoi (1983) and Farber and Jamison (1987). CB-graphs and CB-groups generalize
systolic (alias bridged) and weakly systolic graphs and groups, which play an
important role in geometric group theory.
We present metric and local-to-global characterizations of CB-graphs. Namely,
we characterize CB-graphs $G$ as graphs whose triangle-pentagonal complexes
$X(G)$ are simply connected and balls of radius at most $3$ are convex.
Similarly to systolic and weakly systolic graphs, we prove a dismantlability
result for CB-graphs $G$: we show that their squares $G^2$ are dismantlable.
This implies that the Rips complexes of CB-graphs are contractible. Finally, we
adapt and extend the approach of Januszkiewicz and Swiatkowski (2006) for
systolic groups and of Chalopin et al. (2020) for Helly groups, to show that
the CB-groups are biautomatic.
|
[
{
"version": "v1",
"created": "Wed, 5 Jan 2022 13:31:46 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 14:38:05 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Chalopin",
"Jérémie",
""
],
[
"Chepoi",
"Victor",
""
],
[
"Giocanti",
"Ugo",
""
]
] |
new_dataset
| 0.997806 |
2204.13547
|
Ireneusz Szcze\'sniak
|
Ireneusz Szcze\'sniak and Bo\.zena Wo\'zna-Szcze\'sniak
|
Generic Dijkstra: correctness and tractability
| null |
NOMS 2023-2023 IEEE/IFIP Network Operations and Management
Symposium
|
10.1109/NOMS56928.2023.10154322
| null |
cs.NI cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently-proposed generic Dijkstra algorithm finds shortest paths in
networks with continuous and contiguous resources. The algorithm was proposed
in the context of optical networks, but is applicable to networks with finite
and discrete resources. The algorithm was published without a proof of
correctness, and with a minor shortcoming. We provide that missing proof and
offer a correction to the shortcoming. To prove the algorithm correct, we
generalize the Bellman's principle of optimality to algebraic structures with a
partial ordering. We also argue the stated problem is tractable by analyzing
the size of the search space in the worst-case.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 14:56:30 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 20:26:59 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Feb 2023 12:07:04 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Szcześniak",
"Ireneusz",
""
],
[
"Woźna-Szcześniak",
"Bożena",
""
]
] |
new_dataset
| 0.984184 |
2207.06825
|
David Bruggemann
|
David Bruggemann, Christos Sakaridis, Prune Truong, Luc Van Gool
|
Refign: Align and Refine for Adaptation of Semantic Segmentation to
Adverse Conditions
|
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the scarcity of dense pixel-level semantic annotations for images
recorded in adverse visual conditions, there has been a keen interest in
unsupervised domain adaptation (UDA) for the semantic segmentation of such
images. UDA adapts models trained on normal conditions to the target
adverse-condition domains. Meanwhile, multiple datasets with driving scenes
provide corresponding images of the same scenes across multiple conditions,
which can serve as a form of weak supervision for domain adaptation. We propose
Refign, a generic extension to self-training-based UDA methods which leverages
these cross-domain correspondences. Refign consists of two steps: (1) aligning
the normal-condition image to the corresponding adverse-condition image using
an uncertainty-aware dense matching network, and (2) refining the adverse
prediction with the normal prediction using an adaptive label correction
mechanism. We design custom modules to streamline both steps and set the new
state of the art for domain-adaptive semantic segmentation on several
adverse-condition benchmarks, including ACDC and Dark Zurich. The approach
introduces no extra training parameters, minimal computational overhead --
during training only -- and can be used as a drop-in extension to improve any
given self-training-based UDA method. Code is available at
https://github.com/brdav/refign.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 11:30:38 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 12:15:30 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 19:10:55 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Bruggemann",
"David",
""
],
[
"Sakaridis",
"Christos",
""
],
[
"Truong",
"Prune",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.998529 |
2208.12776
|
Zecheng Liu
|
Zecheng Liu and Jia Wei and Rui Li and Jianlong Zhou
|
SFusion: Self-attention based N-to-One Multimodal Fusion Block
|
This paper has been accepted by MICCAI 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People perceive the world with different senses, such as sight, hearing,
smell, and touch. Processing and fusing information from multiple modalities
enables Artificial Intelligence to understand the world around us more easily.
However, when there are missing modalities, the number of available modalities
is different in diverse situations, which leads to an N-to-One fusion problem.
To solve this problem, we propose a self-attention based fusion block called
SFusion. Different from preset formulations or convolution based methods, the
proposed block automatically learns to fuse available modalities without
synthesizing or zero-padding missing ones. Specifically, the feature
representations extracted from upstream processing model are projected as
tokens and fed into self-attention module to generate latent multimodal
correlations. Then, a modal attention mechanism is introduced to build a shared
representation, which can be applied by the downstream decision model. The
proposed SFusion can be easily integrated into existing multimodal analysis
networks. In this work, we apply SFusion to different backbone networks for
human activity recognition and brain tumor segmentation tasks. Extensive
experimental results show that the SFusion block achieves better performance
than the competing fusion strategies. Our code is available at
https://github.com/scut-cszcl/SFusion.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 16:42:14 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 14:50:31 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Liu",
"Zecheng",
""
],
[
"Wei",
"Jia",
""
],
[
"Li",
"Rui",
""
],
[
"Zhou",
"Jianlong",
""
]
] |
new_dataset
| 0.997823 |
2209.08691
|
Francesco Ragusa
|
Francesco Ragusa and Antonino Furnari and Giovanni Maria Farinella
|
MECCANO: A Multimodal Egocentric Dataset for Humans Behavior
Understanding in the Industrial-like Domain
|
arXiv admin note: text overlap with arXiv:2010.05654
|
Computer Vision and Image Understanding 2023
|
10.1016/S1077-3142(23)00144-3
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wearable cameras allow to acquire images and videos from the user's
perspective. These data can be processed to understand humans behavior. Despite
human behavior analysis has been thoroughly investigated in third person
vision, it is still understudied in egocentric settings and in particular in
industrial scenarios. To encourage research in this field, we present MECCANO,
a multimodal dataset of egocentric videos to study humans behavior
understanding in industrial-like settings. The multimodality is characterized
by the presence of gaze signals, depth maps and RGB videos acquired
simultaneously with a custom headset. The dataset has been explicitly labeled
for fundamental tasks in the context of human behavior understanding from a
first person view, such as recognizing and anticipating human-object
interactions. With the MECCANO dataset, we explored five different tasks
including 1) Action Recognition, 2) Active Objects Detection and Recognition,
3) Egocentric Human-Objects Interaction Detection, 4) Action Anticipation and
5) Next-Active Objects Detection. We propose a benchmark aimed to study human
behavior in the considered industrial-like scenario which demonstrates that the
investigated tasks and the considered scenario are challenging for
state-of-the-art algorithms. To support research in this field, we publicy
release the dataset at https://iplab.dmi.unict.it/MECCANO/.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 00:52:42 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Ragusa",
"Francesco",
""
],
[
"Furnari",
"Antonino",
""
],
[
"Farinella",
"Giovanni Maria",
""
]
] |
new_dataset
| 0.999446 |
2210.04062
|
Junyi Ao
|
Chutong Meng, Junyi Ao, Tom Ko, Mingxuan Wang, Haizhou Li
|
CoBERT: Self-Supervised Speech Representation Learning Through Code
Representation Learning
|
Accepted by Interspeech 2023
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech is the surface form of a finite set of phonetic units, which can be
represented by discrete codes. We propose the Code BERT (CoBERT) approach for
self-supervised speech representation learning. The idea is to convert an
utterance to a sequence of discrete codes, and perform code representation
learning, where we predict the code representations based on a masked view of
the original speech input. Unlike the prior self-distillation approaches of
which the teacher and the student are of the same modality, our target model
predicts representations from a different modality. CoBERT outperforms the most
recent state-of-the-art performance on the ASR task and brings significant
improvements on the SUPERB speech translation (ST) task. Our code and models
are released at https://github.com/mct10/CoBERT.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 17:15:46 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2022 16:42:53 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Jul 2023 16:30:48 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Meng",
"Chutong",
""
],
[
"Ao",
"Junyi",
""
],
[
"Ko",
"Tom",
""
],
[
"Wang",
"Mingxuan",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.996042 |
2212.05136
|
Kevin Joo
|
Hyekang Kevin Joo, Khoa Vo, Kashu Yamazaki, Ngan Le
|
CLIP-TSA: CLIP-Assisted Temporal Self-Attention for Weakly-Supervised
Video Anomaly Detection
|
Published at the 30th IEEE International Conference on Image
Processing (IEEE ICIP 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video anomaly detection (VAD) -- commonly formulated as a multiple-instance
learning problem in a weakly-supervised manner due to its labor-intensive
nature -- is a challenging problem in video surveillance where the frames of
anomaly need to be localized in an untrimmed video. In this paper, we first
propose to utilize the ViT-encoded visual features from CLIP, in contrast with
the conventional C3D or I3D features in the domain, to efficiently extract
discriminative representations in the novel technique. We then model temporal
dependencies and nominate the snippets of interest by leveraging our proposed
Temporal Self-Attention (TSA). The ablation study confirms the effectiveness of
TSA and ViT feature. The extensive experiments show that our proposed CLIP-TSA
outperforms the existing state-of-the-art (SOTA) methods by a large margin on
three commonly-used benchmark datasets in the VAD problem (UCF-Crime,
ShanghaiTech Campus, and XD-Violence). Our source code is available at
https://github.com/joos2010kj/CLIP-TSA.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 22:28:24 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 19:50:05 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 23:03:22 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Joo",
"Hyekang Kevin",
""
],
[
"Vo",
"Khoa",
""
],
[
"Yamazaki",
"Kashu",
""
],
[
"Le",
"Ngan",
""
]
] |
new_dataset
| 0.99799 |
2212.09530
|
Korrawe Karunratanakul
|
Korrawe Karunratanakul, Sergey Prokudin, Otmar Hilliges, Siyu Tang
|
HARP: Personalized Hand Reconstruction from a Monocular RGB Video
|
CVPR 2023. Project page: https://korrawe.github.io/harp-project/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present HARP (HAnd Reconstruction and Personalization), a personalized
hand avatar creation approach that takes a short monocular RGB video of a human
hand as input and reconstructs a faithful hand avatar exhibiting a
high-fidelity appearance and geometry. In contrast to the major trend of neural
implicit representations, HARP models a hand with a mesh-based parametric hand
model, a vertex displacement map, a normal map, and an albedo without any
neural components. As validated by our experiments, the explicit nature of our
representation enables a truly scalable, robust, and efficient approach to hand
avatar creation. HARP is optimized via gradient descent from a short sequence
captured by a hand-held mobile phone and can be directly used in AR/VR
applications with real-time rendering capability. To enable this, we carefully
design and implement a shadow-aware differentiable rendering scheme that is
robust to high degree articulations and self-shadowing regularly present in
hand motion sequences, as well as challenging lighting conditions. It also
generalizes to unseen poses and novel viewpoints, producing photo-realistic
renderings of hand animations performing highly-articulated motions.
Furthermore, the learned HARP representation can be used for improving 3D hand
pose estimation quality in challenging viewpoints. The key advantages of HARP
are validated by the in-depth analyses on appearance reconstruction, novel-view
and novel pose synthesis, and 3D hand pose refinement. It is an AR/VR-ready
personalized hand representation that shows superior fidelity and scalability.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 15:21:55 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Dec 2022 19:48:31 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 21:16:17 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Karunratanakul",
"Korrawe",
""
],
[
"Prokudin",
"Sergey",
""
],
[
"Hilliges",
"Otmar",
""
],
[
"Tang",
"Siyu",
""
]
] |
new_dataset
| 0.955868 |
2301.00363
|
Leikun Yin
|
Leikun Yin, Rahul Ghosh, Chenxi Lin, David Hale, Christoph Weigl,
James Obarowski, Junxiong Zhou, Jessica Till, Xiaowei Jia, Troy Mao, Vipin
Kumar, Zhenong Jin
|
Mapping smallholder cashew plantations to inform sustainable tree crop
expansion in Benin
| null |
Remote Sensing of Environment, 295, p.113695 (2023)
|
10.1016/j.rse.2023.113695
| null |
cs.CV cs.LG stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Cashews are grown by over 3 million smallholders in more than 40 countries
worldwide as a principal source of income. As the third largest cashew producer
in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15%
of the country's national export earnings. However, a lack of information on
where and how cashew trees grow across the country hinders decision-making that
could support increased cashew production and poverty alleviation. By
leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep
learning algorithms, and large-scale ground truth datasets, we successfully
produced the first national map of cashew in Benin and characterized the
expansion of cashew plantations between 2015 and 2021. In particular, we
developed a SpatioTemporal Classification with Attention (STCA) model to map
the distribution of cashew plantations, which can fully capture texture
information from discriminative time steps during a growing season. We further
developed a Clustering Augmented Self-supervised Temporal Classification
(CASTC) model to distinguish high-density versus low-density cashew plantations
by automatic feature extraction and optimized clustering. Results show that the
STCA model has an overall accuracy over 85% and the CASTC model achieved an
overall accuracy of 76%. We found that the cashew area in Benin almost doubled
from 2015 to 2021 with 60% of new plantation development coming from cropland
or fallow land, while encroachment of cashew plantations into protected areas
has increased by 55%. Only half of cashew plantations were high-density in
2021, suggesting high potential for intensification. Our study illustrates the
power of combining high-resolution remote sensing imagery and state-of-the-art
deep learning algorithms to better understand tree crops in the heterogeneous
smallholder landscape.
|
[
{
"version": "v1",
"created": "Sun, 1 Jan 2023 07:18:47 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Jan 2023 18:04:42 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Yin",
"Leikun",
""
],
[
"Ghosh",
"Rahul",
""
],
[
"Lin",
"Chenxi",
""
],
[
"Hale",
"David",
""
],
[
"Weigl",
"Christoph",
""
],
[
"Obarowski",
"James",
""
],
[
"Zhou",
"Junxiong",
""
],
[
"Till",
"Jessica",
""
],
[
"Jia",
"Xiaowei",
""
],
[
"Mao",
"Troy",
""
],
[
"Kumar",
"Vipin",
""
],
[
"Jin",
"Zhenong",
""
]
] |
new_dataset
| 0.989781 |
2301.05469
|
Min Fu
|
Min Fu, Weidong Mei, and Rui Zhang
|
Multi-Active/Passive-IRS Enabled Wireless Information and Power
Transfer: Active IRS Deployment and Performance Analysis
|
Accepted by IEEE Communication Letter
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent reflecting surfaces (IRSs), active and/or passive, can be densely
deployed in complex environments to significantly enhance wireless network
coverage for both wireless information transfer (WIT) and wireless power
transfer (WPT). In this letter, we study the downlink WIT/WPT from a
multi-antenna base station to a single-antenna user over a multi-active/passive
IRS (AIRS/PIRS)-enabled wireless link. In particular, we aim to optimize the
location of the AIRS with those of the other PIRSs being fixed to maximize the
received signal-to-noise ratio (SNR) and signal power at the user in the cases
of WIT and WPT, respectively. We derive the optimal solutions for these two
cases in closed-form, which reveals that the optimal AIRS deployment is
generally different for WIT versus WPT. Furthermore, both analytical and
numerical results are provided to show the conditions under which the proposed
AIRS deployment strategy yields superior performance to other baseline
deployment strategies as well as the conventional all- PIRS enabled WIT/WPT.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 10:44:51 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 10:51:06 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Fu",
"Min",
""
],
[
"Mei",
"Weidong",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.974072 |
2301.13007
|
Man Fai Wong
|
Man Fai Wong, Xintong Qi, Chee Wei Tan
|
EuclidNet: Deep Visual Reasoning for Constructible Problems in Geometry
|
Accepted by 2nd MATH-AI Workshop at NeurIPS'22
|
Adv. Artif. Intell. Mach. Learn.(2023), 3(1):839-852
|
10.54364/aaiml.2023.1152
| null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a deep learning-based framework for solving
geometric construction problems through visual reasoning, which is useful for
automated geometry theorem proving. Constructible problems in geometry often
ask for the sequence of straightedge-and-compass constructions to construct a
given goal given some initial setup. Our EuclidNet framework leverages the
neural network architecture Mask R-CNN to extract the visual features from the
initial setup and goal configuration with extra points of intersection, and
then generate possible construction steps as intermediary data models that are
used as feedback in the training process for further refinement of the
construction step sequence. This process is repeated recursively until either a
solution is found, in which case we backtrack the path for a step-by-step
construction guide, or the problem is identified as unsolvable. Our EuclidNet
framework is validated on complex Japanese Sangaku geometry problems,
demonstrating its capacity to leverage backtracking for deep visual reasoning
of challenging problems.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 18:32:40 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Wong",
"Man Fai",
""
],
[
"Qi",
"Xintong",
""
],
[
"Tan",
"Chee Wei",
""
]
] |
new_dataset
| 0.989619 |
2301.13497
|
Patrick Sol\'e
|
Claude Carlet and Patrick Sol\'e
|
The weight spectrum of two families of Reed-Muller codes
|
11 pages
|
Discrete Math 2023
|
10.1016/j.disc.2023.113568
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We determine the weight spectra of the Reed-Muller codes $RM(m-3,m)$ for
$m\ge 6$ and $RM(m-4,m)$ for $m\ge 8$. The technique used is induction on $m$,
using that the sum of two weights in $RM(r-1,m-1)$ is a weight in $RM(r,m)$,
and using the characterization by Kasami and Tokura of the weights in $RM(r,m)$
that lie between its minimum distance $2^{m-r}$ and the double of this minimum
distance. We also derive the weights of $RM(3,8),\,RM(4,9),$ by the same
technique. We conclude with a conjecture on the weights of $RM(m-c,m)$, where
$c$ is fixed and $m$ is large enough.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 09:23:35 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Apr 2023 17:47:49 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2023 09:04:47 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Carlet",
"Claude",
""
],
[
"Solé",
"Patrick",
""
]
] |
new_dataset
| 0.96024 |
2302.06547
|
Elia Trevisan
|
Lucas Streichenberg, Elia Trevisan, Jen Jen Chung, Roland Siegwart and
Javier Alonso-Mora
|
Multi-Agent Path Integral Control for Interaction-Aware Motion Planning
in Urban Canals
|
Accepted for presentation at the 2023 IEEE International Conference
on Robotics and Automation (ICRA)
|
2023 International Conference on Robotics and Automation (ICRA)
|
10.1109/ICRA48891.2023.10161511
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous vehicles that operate in urban environments shall comply with
existing rules and reason about the interactions with other decision-making
agents. In this paper, we introduce a decentralized and communication-free
interaction-aware motion planner and apply it to Autonomous Surface Vessels
(ASVs) in urban canals. We build upon a sampling-based method, namely Model
Predictive Path Integral control (MPPI), and employ it to, in each time
instance, compute both a collision-free trajectory for the vehicle and a
prediction of other agents' trajectories, thus modeling interactions. To
improve the method's efficiency in multi-agent scenarios, we introduce a
two-stage sample evaluation strategy and define an appropriate cost function to
achieve rule compliance. We evaluate this decentralized approach in simulations
with multiple vessels in real scenarios extracted from Amsterdam's canals,
showing superior performance than a state-of-the-art trajectory optimization
framework and robustness when encountering different types of agents.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 17:43:21 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Streichenberg",
"Lucas",
""
],
[
"Trevisan",
"Elia",
""
],
[
"Chung",
"Jen Jen",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Alonso-Mora",
"Javier",
""
]
] |
new_dataset
| 0.969437 |
2302.11325
|
Chengxi Zeng
|
Chengxi Zeng, Xinyu Yang, David Smithard, Majid Mirmehdi, Alberto M
Gambaruto, Tilo Burghardt
|
Video-SwinUNet: Spatio-temporal Deep Learning Framework for VFSS
Instance Segmentation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a deep learning framework for medical video segmentation.
Convolution neural network (CNN) and transformer-based methods have achieved
great milestones in medical image segmentation tasks due to their incredible
semantic feature encoding and global information comprehension abilities.
However, most existing approaches ignore a salient aspect of medical video data
- the temporal dimension. Our proposed framework explicitly extracts features
from neighbouring frames across the temporal dimension and incorporates them
with a temporal feature blender, which then tokenises the high-level
spatio-temporal feature to form a strong global feature encoded via a Swin
Transformer. The final segmentation results are produced via a UNet-like
encoder-decoder architecture. Our model outperforms other approaches by a
significant margin and improves the segmentation benchmarks on the VFSS2022
dataset, achieving a dice coefficient of 0.8986 and 0.8186 for the two datasets
tested. Our studies also show the efficacy of the temporal feature blending
scheme and cross-dataset transferability of learned capabilities. Code and
models are fully available at https://github.com/SimonZeng7108/Video-SwinUNet.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 12:09:39 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 15:51:23 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Zeng",
"Chengxi",
""
],
[
"Yang",
"Xinyu",
""
],
[
"Smithard",
"David",
""
],
[
"Mirmehdi",
"Majid",
""
],
[
"Gambaruto",
"Alberto M",
""
],
[
"Burghardt",
"Tilo",
""
]
] |
new_dataset
| 0.986517 |
2304.03682
|
Shadman Rohan
|
Shadman Rohan, Mojammel Hossain, Mohammad Mamun Or Rashid, Nabeel
Mohammed
|
BenCoref: A Multi-Domain Dataset of Nominal Phrases and Pronominal
Reference Annotations
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Coreference Resolution is a well studied problem in NLP. While widely studied
for English and other resource-rich languages, research on coreference
resolution in Bengali largely remains unexplored due to the absence of relevant
datasets. Bengali, being a low-resource language, exhibits greater
morphological richness compared to English. In this article, we introduce a new
dataset, BenCoref, comprising coreference annotations for Bengali texts
gathered from four distinct domains. This relatively small dataset contains
5200 mention annotations forming 502 mention clusters within 48,569 tokens. We
describe the process of creating this dataset and report performance of
multiple models trained using BenCoref. We expect that our work provides some
valuable insights on the variations in coreference phenomena across several
domains in Bengali and encourages the development of additional resources for
Bengali. Furthermore, we found poor crosslingual performance at zero-shot
setting from English, highlighting the need for more language-specific
resources for this task.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 15:08:46 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 13:42:48 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 18:33:23 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Rohan",
"Shadman",
""
],
[
"Hossain",
"Mojammel",
""
],
[
"Rashid",
"Mohammad Mamun Or",
""
],
[
"Mohammed",
"Nabeel",
""
]
] |
new_dataset
| 0.999828 |
2304.12668
|
Luca Bruls
|
Daniel Thilo Schroeder, Mirjam de Bruijn, Luca Bruls, Mulatu Alemayehu
Moges, Samba Dialimpa Badji, No\"emie Fritz, Modibo Galy Cisse, Johannes
Langguth, Bruce Mutsvairo, and Kristin Skare Orgeret
|
Social media in the Global South: A Network Dataset of the Malian
Twittersphere
|
17 pages, 6 figures
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
With the expansion of mobile communications infrastructure, social media
usage in the Global South is surging. Compared to the Global North, populations
of the Global South have had less prior experience with social media from
stationary computers and wired Internet. Many countries are experiencing
violent conflicts that have a profound effect on their societies. As a result,
social networks develop under different conditions than elsewhere, and our goal
is to provide data for studying this phenomenon. In this dataset paper, we
present a data collection of a national Twittersphere in a West African country
of conflict. While not the largest social network in terms of users, Twitter is
an important platform where people engage in public discussion. The focus is on
Mali, a country beset by conflict since 2012 that has recently had a relatively
precarious media ecology. The dataset consists of tweets and Twitter users in
Mali and was collected in June 2022, when the Malian conflict became more
violent internally both towards external and international actors. In a
preliminary analysis, we assume that the conflictual context influences how
people access social media and, therefore, the shape of the Twittersphere and
its characteristics. The aim of this paper is to primarily invite researchers
from various disciplines including complex networks and social sciences
scholars to explore the data at hand further. We collected the dataset using a
scraping strategy of the follower network and the identification of
characteristics of a Malian Twitter user. The given snapshot of the Malian
Twitter follower network contains around seven million accounts, of which
56,000 are clearly identifiable as Malian. In addition, we present the tweets.
The dataset is available at:
https://osf.io/mj2q/?view_only=460f5daef1024f05a0d45e082d26059f (peer review
version).
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 09:16:53 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 08:56:44 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Schroeder",
"Daniel Thilo",
""
],
[
"de Bruijn",
"Mirjam",
""
],
[
"Bruls",
"Luca",
""
],
[
"Moges",
"Mulatu Alemayehu",
""
],
[
"Badji",
"Samba Dialimpa",
""
],
[
"Fritz",
"Noëmie",
""
],
[
"Cisse",
"Modibo Galy",
""
],
[
"Langguth",
"Johannes",
""
],
[
"Mutsvairo",
"Bruce",
""
],
[
"Orgeret",
"Kristin Skare",
""
]
] |
new_dataset
| 0.999737 |
2305.02946
|
Oren Weimann
|
Amir Abboud, Shay Mozes, Oren Weimann
|
What Else Can Voronoi Diagrams Do For Diameter In Planar Graphs?
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The Voronoi diagrams technique was introduced by Cabello to compute the
diameter of planar graphs in subquadratic time. We present novel applications
of this technique in static, fault-tolerant, and partially-dynamic undirected
unweighted planar graphs, as well as some new limitations.
1. In the static case, we give $n^{3+o(1)}/D^2$ and $\tilde{O}(n\cdot D^2)$
time algorithms for computing the diameter of a planar graph $G$ with diameter
$D$. These are faster than the state of the art $\tilde{O}(n^{5/3})$ when
$D<n^{1/3}$ or $D>n^{2/3}$.
2. In the fault-tolerant setting, we give an $n^{7/3+o(1)}$ time algorithm
for computing the diameter of $G\setminus \{e\}$ for every edge $e$ in $G$ the
replacement diameter problem. Compared to the naive $\tilde{O}(n^{8/3})$ time
algorithm that runs the static algorithm for every edge.
3. In the incremental setting, where we wish to maintain the diameter while
while adding edges, we present an algorithm with total running time
$n^{7/3+o(1)}$. Compared to the naive $\tilde{O}(n^{8/3})$ time algorithm that
runs the static algorithm after every update.
4. We give a lower bound (conditioned on the SETH) ruling out an amortized
$O(n^{1-\varepsilon})$ update time for maintaining the diameter in *weighted*
planar graph. The lower bound holds even for incremental or decremental
updates.
Our upper bounds are obtained by novel uses and manipulations of Voronoi
diagrams. These include maintaining the Voronoi diagram when edges of the graph
are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree
level (rather than on boundaries of $r$-division), and a new reduction from
incremental diameter to incremental distance oracles that could be of interest
beyond planar graphs. Our lower bound is the first lower bound for a dynamic
planar graph problem that is conditioned on the SETH.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 15:48:25 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 11:21:12 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Jul 2023 18:46:05 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Abboud",
"Amir",
""
],
[
"Mozes",
"Shay",
""
],
[
"Weimann",
"Oren",
""
]
] |
new_dataset
| 0.985235 |
2305.09552
|
Lintong Zhang
|
Lintong Zhang, Tejaswi Digumarti, Georgi Tinchev, Maurice Fallon
|
InstaLoc: One-shot Global Lidar Localisation in Indoor Environments
through Instance Learning
| null |
Robotics: Science and Systems (RSS) 2023
| null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Localization for autonomous robots in prior maps is crucial for their
functionality. This paper offers a solution to this problem for indoor
environments called InstaLoc, which operates on an individual lidar scan to
localize it within a prior map. We draw on inspiration from how humans navigate
and position themselves by recognizing the layout of distinctive objects and
structures. Mimicking the human approach, InstaLoc identifies and matches
object instances in the scene with those from a prior map. As far as we know,
this is the first method to use panoptic segmentation directly inferring on 3D
lidar scans for indoor localization. InstaLoc operates through two networks
based on spatially sparse tensors to directly infer dense 3D lidar point
clouds. The first network is a panoptic segmentation network that produces
object instances and their semantic classes. The second smaller network
produces a descriptor for each object instance. A consensus based matching
algorithm then matches the instances to the prior map and estimates a six
degrees of freedom (DoF) pose for the input cloud in the prior map. The
significance of InstaLoc is that it has two efficient networks. It requires
only one to two hours of training on a mobile GPU and runs in real-time at 1
Hz. Our method achieves between two and four times more detections when
localizing, as compared to baseline methods, and achieves higher precision on
these detections.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 15:51:35 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 10:16:54 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Zhang",
"Lintong",
""
],
[
"Digumarti",
"Tejaswi",
""
],
[
"Tinchev",
"Georgi",
""
],
[
"Fallon",
"Maurice",
""
]
] |
new_dataset
| 0.999227 |
2305.12199
|
Xuan-Quy Dao
|
Xuan-Quy Dao, Ngoc-Bich Le, The-Duy Vo, Xuan-Dung Phan, Bac-Bien Ngo,
Van-Tien Nguyen, Thi-My-Thanh Nguyen, and Hong-Phuoc Nguyen
|
VNHSGE: VietNamese High School Graduation Examination Dataset for Large
Language Models
|
74 pages, 44 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The VNHSGE (VietNamese High School Graduation Examination) dataset, developed
exclusively for evaluating large language models (LLMs), is introduced in this
article. The dataset, which covers nine subjects, was generated from the
Vietnamese National High School Graduation Examination and comparable tests.
300 literary essays have been included, and there are over 19,000
multiple-choice questions on a range of topics. The dataset assesses LLMs in
multitasking situations such as question answering, text generation, reading
comprehension, visual question answering, and more by including both textual
data and accompanying images. Using ChatGPT and BingChat, we evaluated LLMs on
the VNHSGE dataset and contrasted their performance with that of Vietnamese
students to see how well they performed. The results show that ChatGPT and
BingChat both perform at a human level in a number of areas, including
literature, English, history, geography, and civics education. They still have
space to grow, though, especially in the areas of mathematics, physics,
chemistry, and biology. The VNHSGE dataset seeks to provide an adequate
benchmark for assessing the abilities of LLMs with its wide-ranging coverage
and variety of activities. We intend to promote future developments in the
creation of LLMs by making this dataset available to the scientific community,
especially in resolving LLMs' limits in disciplines involving mathematics and
the natural sciences.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 14:13:08 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Dao",
"Xuan-Quy",
""
],
[
"Le",
"Ngoc-Bich",
""
],
[
"Vo",
"The-Duy",
""
],
[
"Phan",
"Xuan-Dung",
""
],
[
"Ngo",
"Bac-Bien",
""
],
[
"Nguyen",
"Van-Tien",
""
],
[
"Nguyen",
"Thi-My-Thanh",
""
],
[
"Nguyen",
"Hong-Phuoc",
""
]
] |
new_dataset
| 0.999708 |
2306.09152
|
David Cerna
|
David M. Cerna
|
Recursive First-order Syntactic Unification Modulo Variable Classes
|
pre-print
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a generalization of first-order syntactic unification to a term
algebra where variable indexing is part of the object language. Unlike
first-order syntactic unification, the number of variables within a given
problem is not finitely bound as terms can have self-symmetric subterms (modulo
an index shift) allowing the construction of infinitely deep terms containing
infinitely many variables, what we refer to as arithmetic progressive terms.
Such constructions are related to inductive reasoning. We show that
unifiability is decidable for so-called simple linear 1-loops and conjecture
decidability for less restricted classes of loops.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 14:21:15 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 12:53:33 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Jul 2023 06:03:38 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Cerna",
"David M.",
""
]
] |
new_dataset
| 0.956371 |
2306.15767
|
Xue-Feng Zhu
|
Xue-Feng Zhu, Tianyang Xu, Jian Zhao, Jia-Wei Liu, Kai Wang, Gang
Wang, Jianan Li, Qiang Wang, Lei Jin, Zheng Zhu, Junliang Xing, Xiao-Jun Wu
|
Evidential Detection and Tracking Collaboration: New Problem, Benchmark
and Algorithm for Robust Anti-UAV System
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Unmanned Aerial Vehicles (UAVs) have been widely used in many areas,
including transportation, surveillance, and military. However, their potential
for safety and privacy violations is an increasing issue and highly limits
their broader applications, underscoring the critical importance of UAV
perception and defense (anti-UAV). Still, previous works have simplified such
an anti-UAV task as a tracking problem, where the prior information of UAVs is
always provided; such a scheme fails in real-world anti-UAV tasks (i.e. complex
scenes, indeterminate-appear and -reappear UAVs, and real-time UAV
surveillance). In this paper, we first formulate a new and practical anti-UAV
problem featuring the UAVs perception in complex scenes without prior UAVs
information. To benchmark such a challenging task, we propose the largest UAV
dataset dubbed AntiUAV600 and a new evaluation metric. The AntiUAV600 comprises
600 video sequences of challenging scenes with random, fast, and small-scale
UAVs, with over 723K thermal infrared frames densely annotated with bounding
boxes. Finally, we develop a novel anti-UAV approach via an evidential
collaboration of global UAVs detection and local UAVs tracking, which
effectively tackles the proposed problem and can serve as a strong baseline for
future research. Extensive experiments show our method outperforms SOTA
approaches and validate the ability of AntiUAV600 to enhance UAV perception
performance due to its large scale and complexity. Our dataset, pretrained
models, and source codes will be released publically.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 19:30:23 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 18:59:31 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Zhu",
"Xue-Feng",
""
],
[
"Xu",
"Tianyang",
""
],
[
"Zhao",
"Jian",
""
],
[
"Liu",
"Jia-Wei",
""
],
[
"Wang",
"Kai",
""
],
[
"Wang",
"Gang",
""
],
[
"Li",
"Jianan",
""
],
[
"Wang",
"Qiang",
""
],
[
"Jin",
"Lei",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Xing",
"Junliang",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] |
new_dataset
| 0.996169 |
2306.17010
|
Fangqiang Ding
|
Fangqiang Ding, Zhen Luo, Peijun Zhao, Chris Xiaoxuan Lu
|
milliFlow: Scene Flow Estimation on mmWave Radar Point Cloud for Human
Motion Sensing
|
15 pages, 8 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approaching the era of ubiquitous computing, human motion sensing plays a
crucial role in smart systems for decision making, user interaction, and
personalized services. Extensive research has been conducted on human tracking,
pose estimation, gesture recognition, and activity recognition, which are
predominantly based on cameras in traditional methods. However, the intrusive
nature of cameras limits their use in smart home applications. To address this,
mmWave radars have gained popularity due to their privacy-friendly features. In
this work, we propose \textit{milliFlow}, a novel deep learning method for
scene flow estimation as a complementary motion information for mmWave point
cloud, serving as an intermediate level of features and directly benefiting
downstream human motion sensing tasks. Experimental results demonstrate the
superior performance of our method with an average 3D endpoint error of 4.6cm,
significantly surpassing the competing approaches. Furthermore, by
incorporating scene flow information, we achieve remarkable improvements in
human activity recognition, human parsing, and human body part tracking. To
foster further research in this area, we provide our codebase and dataset for
open access.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 15:06:21 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 21:23:18 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Ding",
"Fangqiang",
""
],
[
"Luo",
"Zhen",
""
],
[
"Zhao",
"Peijun",
""
],
[
"Lu",
"Chris Xiaoxuan",
""
]
] |
new_dataset
| 0.999653 |
2306.17431
|
Huiming Sun
|
Huiming Sun, Lan Fu, Jinlong Li, Qing Guo, Zibo Meng, Tianyun Zhang,
Yuewei Lin, Hongkai Yu
|
Defense against Adversarial Cloud Attack on Remote Sensing Salient
Object Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting the salient objects in a remote sensing image has wide applications
for the interdisciplinary research. Many existing deep learning methods have
been proposed for Salient Object Detection (SOD) in remote sensing images and
get remarkable results. However, the recent adversarial attack examples,
generated by changing a few pixel values on the original remote sensing image,
could result in a collapse for the well-trained deep learning based SOD model.
Different with existing methods adding perturbation to original images, we
propose to jointly tune adversarial exposure and additive perturbation for
attack and constrain image close to cloudy image as Adversarial Cloud. Cloud is
natural and common in remote sensing images, however, camouflaging cloud based
adversarial attack and defense for remote sensing images are not well studied
before. Furthermore, we design DefenseNet as a learn-able pre-processing to the
adversarial cloudy images so as to preserve the performance of the deep
learning based remote sensing SOD model, without tuning the already deployed
deep SOD model. By considering both regular and generalized adversarial
examples, the proposed DefenseNet can defend the proposed Adversarial Cloud in
white-box setting and other attack methods in black-box setting. Experimental
results on a synthesized benchmark from the public remote sensing SOD dataset
(EORSSD) show the promising defense against adversarial cloud attacks.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 07:06:13 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 16:15:10 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Sun",
"Huiming",
""
],
[
"Fu",
"Lan",
""
],
[
"Li",
"Jinlong",
""
],
[
"Guo",
"Qing",
""
],
[
"Meng",
"Zibo",
""
],
[
"Zhang",
"Tianyun",
""
],
[
"Lin",
"Yuewei",
""
],
[
"Yu",
"Hongkai",
""
]
] |
new_dataset
| 0.978332 |
2307.00804
|
Zhongjin Luo
|
Zhongjin Luo, Dong Du, Heming Zhu, Yizhou Yu, Hongbo Fu, Xiaoguang Han
|
SketchMetaFace: A Learning-based Sketching Interface for High-fidelity
3D Character Face Modeling
| null | null | null | null |
cs.CV cs.GR cs.HC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Modeling 3D avatars benefits various application scenarios such as AR/VR,
gaming, and filming. Character faces contribute significant diversity and
vividity as a vital component of avatars. However, building 3D character face
models usually requires a heavy workload with commercial tools, even for
experienced artists. Various existing sketch-based tools fail to support
amateurs in modeling diverse facial shapes and rich geometric details. In this
paper, we present SketchMetaFace - a sketching system targeting amateur users
to model high-fidelity 3D faces in minutes. We carefully design both the user
interface and the underlying algorithm. First, curvature-aware strokes are
adopted to better support the controllability of carving facial details.
Second, considering the key problem of mapping a 2D sketch map to a 3D model,
we develop a novel learning-based method termed "Implicit and Depth Guided Mesh
Modeling" (IDGMM). It fuses the advantages of mesh, implicit, and depth
representations to achieve high-quality results with high efficiency. In
addition, to further support usability, we present a coarse-to-fine 2D
sketching interface design and a data-driven stroke suggestion tool. User
studies demonstrate the superiority of our system over existing modeling tools
in terms of the ease to use and visual quality of results. Experimental
analyses also show that IDGMM reaches a better trade-off between accuracy and
efficiency. SketchMetaFace is available at
https://zhongjinluo.github.io/SketchMetaFace/.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 07:41:07 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 12:21:18 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Luo",
"Zhongjin",
""
],
[
"Du",
"Dong",
""
],
[
"Zhu",
"Heming",
""
],
[
"Yu",
"Yizhou",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.981175 |
2307.00937
|
Nicol\'as Navarro-Guerrero
|
Oscar Alberto Jui\~na Quilacham\'in and Nicol\'as Navarro-Guerrero
|
A Biomimetic Fingerprint for Robotic Tactile Sensing
|
56th International Symposium on Robotics (ISR Europe) | September
26-27, 2023, Stuttgart, Germany
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Tactile sensors have been developed since the early '70s and have greatly
improved, but there are still no widely adopted solutions. Various
technologies, such as capacitive, piezoelectric, piezoresistive, optical, and
magnetic, are used in haptic sensing. However, most sensors are not
mechanically robust for many applications and cannot cope well with curved or
sizeable surfaces. Aiming to address this problem, we present a 3D-printed
fingerprint pattern to enhance the body-borne vibration signal for dynamic
tactile feedback. The 3D-printed fingerprint patterns were designed and tested
for an RH8D Adult size Robot Hand. The patterns significantly increased the
signal's power to over 11 times the baseline. A public haptic dataset including
52 objects of several materials was created using the best fingerprint pattern
and material.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 11:24:11 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 14:02:18 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Quilachamín",
"Oscar Alberto Juiña",
""
],
[
"Navarro-Guerrero",
"Nicolás",
""
]
] |
new_dataset
| 0.999599 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.